\begin{document}
\renewcommand{\thefootnote}{}
\footnote{\textup{2010} \textit{Mathematics Subject Classification}: 30B50, 11M41, 30B40, 20F69, 11G50.}
\keywords {Dirichlet series, Euler product, singularities, natural
boundary, zeta functions of groups}
\begin{abstract} We classify singularities of Dirichlet series having
Euler products which are rational functions of $p$ and $p^{-s}$ for $p$ a prime
number and give examples of natural boundaries from zeta functions of
groups and height zeta functions.
\end{abstract}
\maketitle
\section{Introduction and results}
Many Dirichlet-series occurring in practice satisfy an Euler-product,
and if they do so, the Euler-product is often the easiest way to
access the series. Therefore, it is important to deduce information on
the series from the Euler-product representation. One of the most
important applications of Dirichlet-series, going back to Riemann, is
the asymptotic estimation of the sum of its coefficients via Perron's
formula, that is, the use of the equation
\[
\sum_{n\leq x} a_n = \frac{1}{2\pi i}\int\limits_{c-i\infty}^{c+i\infty}
\Big(\sum_{n\geq 1}\frac{a_n}{n^s}\Big)\frac{x^s}{s}\;ds.
\]
To use this
relation, one usually shifts the path of integration to the left,
thereby reducing the contribution of the term $x^s$. This becomes
possible only if the function $D(s)=\sum\frac{a_n}{n^s}$ is holomorphic on the
new path and therefore the question of continuation of Dirichlet-series
beyond their domain of absolute convergence is a central issue in this
theory. In fact, the importance of the Riemann hypothesis stems from
the fact that it would allow us to move the path of integration for
$D(s)=\frac{\zeta'}{\zeta}(s)$ to the line $1/2+\epsilon$ without
meeting any singularity besides the obvious pole at 1.
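As a simple illustration of the mechanism (a standard heuristic, stated here without attention to convergence questions), take $D(s)=\zeta(s)$: moving the path of integration past the simple pole of $\zeta(s)\frac{x^s}{s}$ at $s=1$ picks up the residue
\[
\mathop{\mathrm{Res}}_{s=1}\,\zeta(s)\frac{x^s}{s} = x,
\]
which recovers the trivial estimate $\sum_{n\leq x}1\approx x$.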
Estermann\cite{Est} appears to be the first to address this problem. He showed
that for an integer valued polynomial $W(x)$
with $W(0)=1$ the
Dirichlet-series $D(s)=\prod_p W(p^{-s})$ can either be written as a
finite product of the form $\prod_{\nu\leq N}\zeta(\nu s)^{c_\nu}$ for
certain integers $c_\nu$, and is therefore meromorphically continuable
to the whole complex plane, or is continuable to the half-plane
$\mathbb{R}e\,s>0$. In the latter case the line $\mathbb{R}e\,s=0$ is the natural
boundary of
the Dirichlet-series.
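To illustrate the first alternative by a standard computation (not taken from \cite{Est}): for $W(X)=1+X$ one finds
\[
\prod_p W(p^{-s}) = \prod_p\big(1+p^{-s}\big) = \prod_p\frac{1-p^{-2s}}{1-p^{-s}} = \frac{\zeta(s)}{\zeta(2s)},
\]
a finite product $\prod_{\nu\leq N}\zeta(\nu s)^{c_\nu}$ with $c_1=1$ and $c_2=-1$, which is meromorphic on the whole complex plane.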
The strategy of his proof was to show that every point on the line
$\mathbb{R}e\,s=0$ is an accumulation point of poles or zeros of $D$. Note
that $\zeta$, the Riemann-zeta function itself, does not fall among the cases under
consideration, since $W(X)=(1-X)^{-1}$ is a rational
function. Dahlquist\cite{Dahl} generalized Estermann's work, allowing
$W$ to be a function holomorphic in the unit disc with the exception
of isolated singularities, in particular covering the case that $W$
is rational. This
method of proof was extended to much greater generality, interest
being sparked by $\zeta$-functions of nilpotent groups
introduced by Grunewald, Segal and Smith\cite{GSS} as well as height zeta
functions\cite{2nob}. Functions arising
in these contexts are often of the form $D(s)=\prod W(p, p^{-s})$ for an
integral polynomial $W$. Du Sautoy and Grunewald\cite{ghost} gave a
criterion for such a function to have a natural boundary which, in a
probabilistic sense, applies to almost all polynomials.
Again, it is shown that every point on
the presumed boundary is an accumulation point of zeros or
poles. The following conjecture, see for example \cite[\bf 1.11]{book}\cite[\bf 1.4]{ghost},
is believed to be true.
\begin{Con}
\label{Con:Main}
Let $W(x, y)=\sum_{n,m} a_{n,m} x^n y^m$ be an integral polynomial
with $W(x, 0) = 1$. Then $D(s)=\prod_p W(p, p^{-s})$ is
meromorphically continuable to the whole complex plane if and only if
it is a finite product of Riemann $\zeta$-functions. Moreover, if it
is not such a product and $\beta=\max\{\frac{n}{m}: m\ge 1, a_{n,m}\neq 0\}$, then
$\mathbb{R}e\,s=\beta$ is the natural boundary of $D$.
\end{Con}
In this paper we show that any refinement of Estermann's method is bound to
fail to prove this conjecture.
If $W(X, Y)$ is a rational function, we expand $W$ into a power
series $W(X, Y)=\sum_{n, m\geq 0} a_{n, m} X^nY^m$, and define
$\alpha=\sup\{\frac{n+1}{m}: m\ge 1,a_{n,m}\neq 0\}$,
$\beta=\sup\{\frac{n}{m}: m\ge 1,a_{n,m}\neq 0\}$. It is easy to see that
the supremum is actually attained, and that the function
$\tilde{W}=1+\sum_{\frac{n}{m}=\beta} a_{n, m} X^nY^m$ is again a
rational function. We call
$\tilde{W}$ the main part of $W$, since only
$\tilde{W}$ is responsible for the convergence of the product
$D(s)$. For $W$ a polynomial $\tilde{W}$ was called the ghost of $W$
in \cite{ghost}. A rational function $W$ is called cyclotomic if it
can be written as the product of cyclotomic polynomials and their
inverses.
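As an illustration (an example of our own, included only to fix ideas), for $W(X, Y)=1+XY+X^2Y+Y^3$ the non-vanishing coefficients with $m\geq 1$ sit at $(n, m)=(1,1), (2,1), (0,3)$, so that
\[
\alpha = \max\Big\{2, 3, \tfrac{1}{3}\Big\} = 3, \qquad \beta = \max\{1, 2, 0\} = 2, \qquad \tilde{W}(X, Y) = 1+X^2Y,
\]
and here $\tilde{W}$ is cyclotomic, since $1+X^2Y=(1-X^4Y^2)(1-X^2Y)^{-1}$.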
We define an obstructing point $z$ to be a complex number with
$\mathbb{R}e\,z=\beta$, such that there exists a sequence of complex numbers
$z_i$, $\mathbb{R}e\,z_i>\beta$, $z_i\rightarrow z$, such that $D$ has a pole
or a zero at $z_i$ for all $i$. Obviously, each obstructing point is
an essential singularity of $D$, though the converse is not true in
general.
Our main result is the following.
\begin{Theo}
\label{thm:class}
Let $W(X, Y)$ be a rational function, which can be written as
$\frac{P(X, Y)}{Q(X, Y)}$, where $P, Q\in\mathbb{Z}[X, Y]$ satisfy $P(X,
0)=Q(X, 0)=1$. Define $a_{n, m}, \beta, \tilde{W}$ and $D$ as
above. Then the product representation of $D$ converges in the
half-plane $\mathbb{R}e\;s>\alpha$, $D$
can be meromorphically continued into the half-plane $\mathbb{R}e\;s>\beta$, and
precisely one of the following holds true.
\begin{enumerate}
\item $W$ is cyclotomic and, once its unitary factors are removed,
$W=\tilde{W}$; in this case $D$ is a
finite product of Riemann $\zeta$-functions;
\item $\tilde{W}$ is not cyclotomic; in this case every point of the
line $\mathbb{R}e\,s =\beta$ is an obstructing point;
\item\label{type2}
$W\neq\tilde{W}$, $\tilde{W}$ is cyclotomic and there are
infinitely many pairs $n,m$ with $a_{n,m}\neq 0$ and
$\frac{n}{m}<\beta<\frac{n+1}{m}$; in this case $\beta$ is an
obstructing point;
\item $W\neq\tilde{W}$, $\tilde{W}$ is cyclotomic, there are only
finitely many pairs $n,m$ with $a_{n,m}\neq 0$ and
$\frac{n}{m}<\beta<\frac{n+1}{m}$, but there are infinitely many
primes $p$ such that the equation $W(p, p^{-s})=0$ has a solution
$s_0$ with $\mathbb{R}e\,s_0>\beta$; in this case every point of the line
$\mathbb{R}e\,s=\beta$ is an obstructing point;
\item None of the above; in this case no point on the line
$\mathbb{R}e\,s=\beta$ is an obstructing point.
\end{enumerate}
\end{Theo}
We remark that each of these cases actually occurs; in particular, there are
Euler-products for which Estermann's approach cannot work.
Notice that while in the third case we need information on the zeros of the
Riemann-zeta function to know about the meromorphic continuation, in
the last case we can say nothing about its continuation.
While the above classification looks pretty technical, these cases
actually behave quite differently. To illustrate this point we consider
a domain $\Omega\subseteq\mathbb{C}$ with a function
$f:\Omega\rightarrow\mathbb{C}$, and let $N_\pm(\Omega)$ be the number of zeros
and poles of $f$ in $\Omega$, counted with positive multiplicity, that
is, an $n$-fold zero or a pole of order $n$ is counted $n$
times. Then we have the following.
\begin{coro}
Let $W$ be a rational function, and define $\beta$ as above. Then one of the
following two statements holds true:
\begin{enumerate}
\item For every $\epsilon>0$ we have $N_\pm(\{|z-\beta|<\epsilon, \mathbb{R}e\,
z>\beta\}) = \infty$;
\item We have $N_\pm(\{\mathbb{R}e\, z>\beta, |\Im z|<T\}) =
\mathcal{O}(T\log T)$.
\end{enumerate}
If $W$ is a polynomial and we assume the Riemann
hypothesis as well as the $\mathbb{Q}$-linear independence of the imaginary parts
of the non-trivial zeros of $\zeta$, then there exist constants $c_1,
c_2$, such that $N_\pm(\{\mathbb{R}e z>\beta, |\Im z|<T\}) =
c_1T\log T + c_2 T + \mathcal{O}(\log T)$.
\end{coro}
Finally we remark that for $\zeta$-functions of nilpotent groups the
generalization to rational functions is irrelevant, since a result of du
Sautoy\cite{Denominator} implies that if $\zeta_G(s)=\prod_p W(p,
p^{-s})$ for a rational function $W(X, Y)=\frac{P(X, Y)}{Q(X, Y)}$,
then $Q$ is a cyclotomic polynomial, that is, $\zeta_G$ can be written
as the product of finitely many Riemann $\zeta$-functions and a
Dirichlet-series of the form $\prod_p W(p, p^{-s})$ with $W$ a
polynomial. However, for other applications it is indeed important to
study rational functions, one such example occurs in the recent work of de
la Bret\`eche and Swynnerton-Dyer\cite{2nob}.
\section{Proof of case 2}
In this section we show that if $\tilde{W}$ is not cyclotomic, then
$\mathbb{R}e\;s=\beta$ is the natural boundary of the meromorphic
continuation of $D$. For $W$ a polynomial this was shown by du Sautoy
and Grunewald\cite{ghost}, our proof closely follows their lines of
reasoning.
The main difference between the case of a polynomial and a rational function
is that for polynomials the local zeros created by different primes can never
cancel, whereas for a rational function the zeros of the numerator belonging
to some prime number $p$ might coincide with zeros of the denominator
belonging to some other prime $q$, and may therefore not contribute to the
zeros or poles needed to prove that some point on the presumed boundary is a
cluster point. We could exclude the possibility of cancellations by assuming
some unproven hypotheses from transcendence theory, however, here we show that
we can deal with this case unconditionally by proving that the amount of
cancellation remains limited. We first consider the case of cancellations
between the numerator and denominator coming from the same prime number.
\begin{Lem}
\label{Lem:alggeo}
Let $P, Q\in\mathbb{Z}[X, Y]$ be co-prime non-constant polynomials. Then there are only
finitely many primes $p$, such that for some complex number $s$ we have $P(p,
p^{-s})=Q(p, p^{-s})=0$.
\end{Lem}
\begin{proof}
Let $V$ be the variety of $\langle P, Q\rangle$ over $\mathbb{C}$. Assume there are
infinitely many pairs $(p, s)$, for which the equation $P(p,
p^{-s})=Q(p, p^{-s})=0$ holds true. Then $V$ is infinite, hence, at least
one-dimensional. Since $P$ and $Q$ are non-constant, we have $V\neq \mathbb{C}^2$,
hence, $V$ is one-dimensional. Let $V'$ be a one-dimensional irreducible
component, and let $R$ be a generator of the ideal corresponding to $V'$. Then
$\langle P, Q\rangle\subseteq\langle R\rangle$, that is, $R$ divides $P$ and
$Q$, which implies that $R$ is constant. But a constant polynomial cannot
define a one-dimensional variety and this contradiction proves our claim.
\end{proof}
Next we use the following graph-theoretic result, describing graphs which are
rather close to trees. We call a cycle in a graph {\it minimal}\/ if it is of length
$\geq 3$, and not the union of two cycles of smaller length.
\begin{Lem}
\label{Lem:Graph}
Let $\mathcal{G}$ be a graph, $k\geq 2$ an integer, such that every vertex has
degree $\geq 3k$, and that there exists a symmetric relation $\sim$ on the
vertices, such that every vertex $v$ is in relation to at most $k$ other
vertices, and every minimal cycle passing through $v$ also passes
through one of the vertices in relation to $v$. Then $\mathcal{G}$ is
infinite.
\end{Lem}
\begin{proof}
Suppose that $\mathcal{G}$ were finite, and fix some vertex $v_0$. We call a
geodesic path {\it good}\/ if no two vertices of the path
stand in relation to each other. We want to construct an infinite good
path. Note that if $p_1$ and $p_2$ are good paths of finite length, then once
they have branched they cannot meet again, for otherwise their union would
contain a cycle, and choosing one of the intersection points we would obtain
a contradiction with the definition of a
good path. Hence, the union of the good paths starting in $v_0$ forms a tree. There
are $\geq 3k$ vertices connected to $v_0$, at most $k$ of which are forbidden.
Hence, the first layer of the tree contains at least $2k$ points. Each of
these points is connected to at least $3k$ other points. It stands in relation
to at most $k$ of them and hence we can extend every path in at least $2k$ ways,
and of all these paths at most $k$ stand in relation with $v_0$. Hence, the
second layer contains at least $4k^2-k$ points. Denote by $n_i$ the number of
points in the $i$-th layer of the tree. Then, continuing in this way, we
obtain
\[
n_{i+1}\geq 2kn_i - k(n_{i-1} + \dots + n_1 + 1).
\]
From this and the assumption that $k\geq 2$ it follows by induction that
$n_{i+1}\geq kn_i$, hence, the tree and therefore the graph $\mathcal{G}$,
which contains the tree, is infinite.
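For completeness we indicate one way to carry out the induction (a reformulation of the above estimate): one proves the slightly stronger bound
\[
n_{i+1} \geq k\,(n_i+n_{i-1}+\dots+n_1+1).
\]
For $i=1$ this follows from $n_2\geq 2kn_1-k\geq k(n_1+1)$, since $n_1\geq 2k\geq 2$; and if the bound holds with $i$ replaced by $i-1$, then $n_i\geq k(n_{i-1}+\dots+n_1+1)\geq 2(n_{i-1}+\dots+n_1+1)$, whence
\[
n_{i+1} \geq 2kn_i - k(n_{i-1}+\dots+n_1+1) = kn_i + k\big(n_i-(n_{i-1}+\dots+n_1+1)\big) \geq k(n_i+n_{i-1}+\dots+n_1+1).
\]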
\end{proof}
Note the importance of symmetry: if the relation is allowed to be
non-symmetric, we can get two regular trees, and identify their leaves. Then every
minimal cycle passing through one point either passes through its parent node
or the mirror image of the point. Thus in the absence of symmetry the result becomes
wrong for arbitrarily large valency even for $k=2$.
We can now prove our result on non-cancellation.
\begin{Lem}
\label{Lem:nocancel}
Let $P, Q\in\mathbb{Z}[X, Y]$ be co-prime polynomials with $\beta$ defined as in the
introduction. Let $\epsilon>0$ be given, and suppose that for a prime $p_0$
sufficiently large $P(p_0, p_0^{-s})$ has a zero on the segment $[\sigma+it,
\sigma+it+\epsilon]$, where $\sigma>\beta$. Then $\prod_p\frac{P(p,
p^{-s})}{Q(p, p^{-s})}$ has a zero or a pole on this segment.
\end{Lem}
\begin{proof}
Since the local zeros converge to the line $\mathbb{R}e\;s=\beta$, there are only
finitely many primes $p$ for which the numerator or denominator has a zero
with real part $\geq\sigma$; hence, we may assume that $P(p, p^{-s})$ and
$Q(p, p^{-s})$ do not vanish there for $p>p_0$. Write $\delta=\sigma-\beta$. For
each prime $p$ let $z_1^p, \ldots, z_k^p$ be the roots of the equation
$P(p,p^{-s})=0$ in the segment $\mathbb{R}e\;s=\beta+\delta$, $0\leq\Im\;s\leq
\frac{2\pi}{\log p}$, and let $w_1^p, \ldots, w_\ell^p$ be the roots of the
equation $Q(p,p^{-s})=0$ on this segment. Such roots need not exist but
if they do then their number is bounded independently of
$p$. The roots of the equations $P(p, p^{-s})=0$ and $Q(p, p^{-s})=0$ form a
pattern with period $\frac{2\pi i}{\log p}$. If $p_0$ is sufficiently large,
then $\delta$ becomes arbitrarily small, hence, if $p$ is not too large, the
equations $P(p,p^{-s})=0$ and $Q(p, p^{-s})=0$ do not have solutions on the
line $\mathbb{R}e\;s=\beta+\delta$. Let $p_1$ be the least prime for which such solutions
exist. For $p_1$ sufficiently large and $p>p_1$, either $P(p, p^{-s})=0$ has
no solution on the segment under consideration or it has at least
$\big[\frac{\epsilon\log p}{2\pi}\big]$ such solutions. Note that by fixing
$\epsilon$ and choosing $p_0$ sufficiently large we can make this expression
as large as we need. Further note that by choosing $p_0$ large we can ensure,
in view of Lemma~\ref{Lem:alggeo}, that $P(p, p^{-s})=Q(p, p^{-s})=0$ has no
solution on the line $\mathbb{R}e\;s=\beta+\delta$.
We now define a bipartite graph $\mathcal{G}$ as follows: The vertices of the
graph are all complex numbers $z_i^p$ in one set and all complex numbers
$w_i^p$ in the other set, where $p\leq p_0$. Two vertices $z_i^p$, $w_j^q$
are joined by an edge if there exists a complex number $s$ with
$\mathbb{R}e\;s=\beta+\delta$, $t\leq\Im\; s\leq t+\epsilon$, such that $s$ is
congruent to $z_i^p$ modulo $\frac{2\pi i}{\log p}$, and congruent to $w_j^q$
modulo $\frac{2\pi i}{\log q}$. In other words, the existence of an edge
indicates that one of the zeros of $P(p, p^{-s})$ obtained from $z_i^p$ by
periodicity cancels with one zero of $Q(q, q^{-s})$ obtained from $w_j^q$.
If $\prod_p\frac{P(p,p^{-s})}{Q(p, p^{-s})}$ has neither a zero nor a pole on
the segment, then every zero of one of the polynomials cancels with a zero of
the other polynomial, that is, every vertex has valency at least
$\big[\frac{\epsilon\log p_1}{2\pi}\big]$.
We next bound the number of cycles. Suppose that $z_{i_1}^{p_1}\sim
w_{i_2}^{p_2}\sim\dots\sim w_{i_\ell}^{p_\ell}\sim z_{i_1}^{p_1}$. Then there
is a complex number $s$ in the segment which is congruent to
$z_{i_1}^{p_1}$ modulo $\frac{2\pi i}{\log p_1}$ and congruent to
$w_{i_2}^{p_2}$ modulo $\frac{2\pi i}{\log p_2}$. Going around the
cycle and collecting the differences we obtain an equation of the form $2\pi
i\sum_i \frac{\lambda_i}{\log p_i} = 0$, $\lambda_i\in\mathbb{Z}$, which can only hold
if the combined coefficients vanish for each occurring prime. However the
coefficients cannot vanish if some prime occurs only once. If the cycle is
minimal the same vertex cannot occur twice, hence, there is some $j$ such
that $p_j=p_1$, but the corresponding vertex differs from $z_{i_1}^{p_1}$. Hence,
every minimal cycle containing $z_{i_1}^{p_1}$ must contain another vertex
$z_{i_j}^{p_1}$ or $w_{i_j}^{p_1}$ belonging to the same prime
$p_1$. The relation defined by $x_i^p\sim x_j^q\Leftrightarrow p=q$, $x\in\{z,
w\}$ is an equivalence relation with equivalence classes bounded by some
constant $K$. If we choose $p_1>\exp(6\pi K\epsilon^{-1})$, the assumptions of
Lemma~\ref{Lem:Graph} are satisfied, and we conclude that $\mathcal{G}$ is
infinite.
But we already know that for $p>p_0$ neither $P(p,p^{-s}) = 0$ nor
$Q(p, p^{-s}) = 0$ has a solution on the line $\mathbb{R}e\;s=\beta+\delta$, that is,
$\mathcal{G}$ is finite. This contradiction completes the proof.
\end{proof}
Using Lemma~\ref{Lem:nocancel} the proof now proceeds in the same fashion as
in the polynomial case; for the details we refer the reader to the proof given
by du Sautoy and Grunewald\cite{ghost}.
\section{Development in cyclotomic factors}
A rational function $W(X, Y)$ with $W(X, 0)=1$ can be written as an
infinite product of polynomials of the form $(1-X^aY^b)$. Here
convergence is meant with
respect to the topology of formal power series, that is, a product
$\prod_{i=1}^\infty(1-X^{a_i}Y^{b_i})$ converges to a power series $f$
if for each $N$ there exists an $i_0$, such that for $i_1>i_0$ the
partial product $\prod_{i=1}^{i_1}(1-X^{a_i}Y^{b_i})$ coincides with
$f$ in all coefficients of monomials $X^aY^b$ with $a, b<N$. The
existence of such an expansion is quite obvious; however, we need some
explicit information on the factors that occur and we shall develop the
necessary information here.
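As a simple one-variable illustration (a computation of our own, with no dependence on $X$), take $W(Y)=1-2Y$ and write $1-2Y=\prod_{n\geq 1}(1-Y^n)^{c_n}$. Comparing logarithms,
\[
-\sum_{k\geq 1}\frac{2^kY^k}{k} = -\sum_{n\geq 1}c_n\sum_{j\geq 1}\frac{Y^{nj}}{j},
\]
that is, $\sum_{d\mid k}d\,c_d=2^k$ for every $k$, so $c_1=2$, $c_2=1$, $c_3=2$, $c_4=3$, and in general $c_n=\frac{1}{n}\sum_{d\mid n}\mu(n/d)2^d$. Hence
\[
1-2Y = (1-Y)^2(1-Y^2)(1-Y^3)^2(1-Y^4)^3\cdots,
\]
and correspondingly $\prod_p(1-2p^{-s})=\prod_{n\geq 1}\zeta(ns)^{-c_n}$ in the half-plane of absolute convergence. It is exactly this kind of exponent data $c_{n, m}$ that we need to control in the two-variable situation.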
For a set $A\subseteq\mathbb{R}^2$ define the convex cone $\overline{A}$
generated by $A$ to be the smallest convex subset containing $\lambda
a$ for all $a\in A$ and $\lambda>1$. A point $a$ of $A$ is extremal, if it
is contained in the boundary of $\overline{A}$ and there exists a
tangent to $A$ intersecting $A$ precisely in $a$, or set theoretically
speaking, if $\overline{A\setminus\{a\}}\neq\overline{A}$. Note that a
convex cone forms an additive semi-group as a subsemigroup of $\mathbb{R}^2$.
To a formal power series $W=\sum_{n, m} a_{n, m} X^n Y^m\in\mathbb{Z}[[X, Y]]$ we
associate the set $A_W = \{(n, m): a_{n, m}\neq 0\}$. Suppose we start
with a rational function $W\in\mathbb{Z}[X, Y]$, which is of the form
$\frac{P(X, Y)}{Q(X, Y)}$ with $P(X, 0)=Q(X, 0)=1$. Then
\[
\frac{1}{Q(X, Y)} = \sum_{\nu=0}^\infty \big(1-Q(X, Y)\big)^\nu =
\sum_{n, m} b_{n, m} X^n Y^m,
\]
say, where the convergence of the geometric series as a formal power series
follows from the fact that every monomial in $Q$ is divisible by
$Y$. The set $\{(n, m):b_{n, m}\neq 0\}\subseteq\mathbb{R}^2$ is contained in
the semigroup generated by the points corresponding to monomials in
$Q$, but may be strictly smaller, as there could be unforeseen
cancellations. Multiplying the power series by $P(X, Y)$, we obtain
that $A_W$ is contained within finitely many shifted copies of
$A_{Q^{-1}}$.
Let $(n, m)$ be an extremal point of
$A_W$. Then we have $W=(1-X^n Y^m)^{-a_{n, m}} W_1(X, Y)$, where
$W_1(X, Y) = (1-X^n Y^m)^{a_{n, m}} W(X, Y)$. Obviously, $W_1(X, Y)$
is a formal power series with integer coefficients, we claim that
$\overline{A_{W_1}}$ is a proper subset of $\overline{A_W}$. In fact,
the monomials of $W_1$ are obtained by taking the monomials of $W$,
multiplying them by powers of $X^nY^m$, and possibly adding up the
contributions of different monomials. Hence, $A_{W_1}$ is contained
in the semigroup generated by $A_W$. But $(n, m)$ is not in $A_{W_1}$,
and since $(n, m)$ was assumed to be extremal, we obtain
\[
A_{W_1}\subseteq\langle A_W\rangle\setminus\{(n, m)\} \subseteq
\overline{A_W}\setminus\{(n, m)\} \subset\overline{A_W}.
\]
Taking the convex cone is a hull operator, thus $\overline{A_{W_1}}$
is a proper subset of $\overline{A_{W}}$. Since we begin and end with a subset
of $\mathbb{N}^2$, we can repeat this procedure, so that for any given $N, M$
after finitely many steps the resulting power series $W_k$ contains no
non-vanishing coefficient $a_{n, m}$ with $n<N$, $m<M$. This suffices to prove the
existence of a product decomposition, in fact, if one is not
interested in the occurring cyclotomic factors one could avoid power
series and stay within the realm of polynomials by setting $W_1(X, Y)
= (1+X^n Y^m)^{-a_{n, m}} W(X, Y)$ whenever $a_{n, m}$ is negative.
However, in this way we trade one operation involving power series for
infinitely many involving polynomials, which is better avoided for
actual calculations.
While we can easily determine a super-set of $A_{W_1}$, in general we
cannot prove that some coefficient of $W_1$ does not vanish, that is,
knowing only $A_W$ and not the coefficients
we cannot show that $A_{W_1}$ is as large as we suspect it to
be. However, it is easy to see that when eliminating one extremal
point all other extremal points remain untouched. In particular, if we
want to expand a polynomial $W$ into a product of cyclotomic polynomials,
at some stage we have to use every extremal point of $A_W$, and the
coefficient attached to this point has not changed before this step;
by induction it follows that the expansion as a cyclotomic product is
unique.
We now assume that $\tilde{W}$ is cyclotomic, while $W$ is not. We
further assume that $\tilde{W}$ is a polynomial, and that the
numerator of $W$ is not divisible by a cyclotomic polynomial. We can
always satisfy these assumptions by multiplying or dividing $W$ by
cyclotomic polynomials, which corresponds to multiplying or
dividing $D$ by certain shifted $\zeta$-functions, and does not
change our problem. Our aim is to find some
information on the set $\{(n,m):c_{n, m}\neq 0\}$, where the
coefficients $c_{n, m}$ are defined via the expansion $W(X, Y)=\prod
(1-X^n Y^m)^{c_{n, m}}$.
In the first step we remove all points on the line
$\frac{n}{m}=\beta$. By assumption we can do so by using finitely
many cyclotomic polynomials. Let the resulting power series be
$W_1$. The inverse of the product of finitely many cyclotomic
polynomials is a power series with poles at certain roots of unity,
hence, we can express the sequence of coefficients as a polynomial in
$n$ and Ramanujan-sums $c_d(n)$ for $d$ dividing some integer
$q$. Consider some point $(n, m)\in\overline{A_W}$, and compute the
coefficient attached to this point in $W_1$. If $A_W$ does not contain
a point $(n', m')$, such that $(n-n', m-m')$ is collinear to $(\beta,
1)$, then this coefficient is clearly 0. Otherwise we consider all
points $(n_1, m_1), \ldots, (n_k, m_k)$ in $A_W$ which lie on the
parallel $\ell$ to the line $t(\beta, 1)$ through $(n, m)$. The coefficients of
$W_1$ attached to points on $\ell$ are linear combinations of shifted
coefficients of inverse cyclotomic polynomials, hence, they can be
written as some polynomial with periodic coefficients. In particular,
either there are only finitely many non-vanishing coefficients, or
there exists a complete arithmetic progression of non-vanishing
coefficients. Hence, we find that $A_{W_1}$ is contained within a
locally finite set of lines parallel to $(\beta t, t)$, and every line
either contains only finitely many points, or a complete arithmetic
progression. Suppose that every line contains only finitely many
points. Then there exists some $\beta'>\beta$, such that $A_{W_1}$
is contained in $\{(s, t):s\geq\beta' t\}$, in particular, $W_1$ is
regular in $\{(z_1, z_2):|z_1|^\beta\leq|z_2|\}$. But
$W_1=\frac{P}{Q\tilde{W}}$, and by assumption $P$ is not divisible by
$\tilde{W}$, therefore, there exist points $(z_1, z_2)$, where
$\tilde{W}$ vanishes, but $P$ does not, and these points are
singularities of $W_1$ satisfying $|z_1|^\beta\leq|z_2|$. Hence,
there exists some line containing a complete arithmetic progression.
Let $(x, 0)+t(\beta, 1)$ be the right-most line containing infinitely many
elements of $A_{W_1}$, that is, for each $y>x$ the line $(y, 0)+t(\beta, 1)$
contains only finitely many elements of $A_{W_1}$, and denote by $(n_1, m_1),
\ldots, (n_k, m_k)$ the elements of $A_{W_1}$ lying strictly to the right of
the line $(x, 0)+t(\beta, 1)$. Set $\delta=-x$, that is, $\delta$ is the
distance of the right boundary of $A_W$ from the line $(x, 0)+t(\beta, 1)$,
measured horizontally, set $\delta_i=\beta m_i-n_i$, that is, $\delta_i$ is
the distance of $(n_i, m_i)$ from the right boundary, also measured
horizontally, and put $\delta_-=\min\delta_i>0$.
We now eliminate the points $(n_i, m_i)$ to obtain the power series
$W_2$. When doing so we introduce many
new elements to the left of the line $(x, 0)+t(\beta, 1)$, which are
of no interest to us, and finitely many points on this line or to the
right of it; in fact, new points can occur only at points of the
form $\lambda(n_i, m_i)+\mu(n_j, m_j)$, $\lambda, \mu\in\mathbb{N}$, $\lambda,
\mu>0$. Note that the horizontal distance from the line $t(\beta, 1)$
is additive, that is, $A_{W_2}$ is contained in the intersection of
$\overline{A_W}$ and the half-plane to the left of the line $(x,
0)+t(\beta, 1)$, together with finitely many points between the lines $(x,
0)+t(\beta, 1)$ and $t(\beta, 1)$, each of which has distance at
least $2\delta_-$ from the latter line. Repeating this procedure, we
can again double this distance, and after finitely many steps this minimal
distance is larger than the width of the strip, which means that we
have arrived at a power series $W_3$ such that $A_{W_3}$ is contained
in the intersection of $\overline{A_W}$ and the half-plane to the left
of the line $(x, 0)+t(\beta, 1)$. Moreover, since at each step there
are only finitely many points changed on the line $(x, 0)+t(\beta, 1)$
, we see that the intersection of $A_{W_3}$ with this line
equals the intersection of $A_{W_1}$ with this line up to finitely
many inclusions or omissions. Since an infinite arithmetic progression, from
which finitely many points are deleted still contains an infinite
arithmetic progression, we see that $A_{W_3}$ contains an infinite
arithmetic progression.
Next we eliminate the points of $A_{W_3}$ on the line $(x, 0)+t(\beta, 1)$,
starting at the bottom and working upwards. When eliminating a point, we introduce (possibly
infinitely many) new points, but all of them are on the left of the
line $(x, 0)+t(\beta, 1)$. Hence, after infinitely many steps we
arrive at a power series $W_4$, for which $A_{W_4}$ is contained in
the intersection of $\overline{A_W}$ and the open half-plane to the
left of $(x, 0)+t(\beta, 1)$.
Fortunately, from this point on we can be less explicit. Consider
the set of differences of the sets $A_{W_i}$ from the line $t(\beta,
1)$. Taking the differences is a semi-group homomorphism, hence, at
each stage the set of differences is contained in the semi-group
generated by the differences we started with. But the differences we started
with are contained in the semi-group generated by the finitely many
differences attached to the monomials of $P$ and $Q$; the latter semi-group is
finitely generated, and therefore
discrete. Hence, no matter how we eliminate terms, at each stage the
set $A_{W_i}$ is contained in a set of parallels to $t(\beta, 1)$
intersecting the real axis in a discrete set of non-positive numbers.
Collecting the cyclotomic factors used during this procedure, we have
proven the following.
\begin{Lem}
\label{Lem:elim}
Let $W(X, Y)$ be a rational function such that $\tilde{W}(X, Y)$
is a cyclotomic polynomial, but $W$ itself is not cyclotomic. Define
$\beta$ as above. Then there is a unique
expansion $W(X, Y) = \prod_{n, m} (1-X^n Y^m)^{c_{n, m}}$. The set
$C=\{(n, m): c_{n, m}\neq 0\}$ contains an infinite arithmetic
progression with difference a multiple of $(\beta, 1)$, has only finitely
many elements to the right of the line containing this progression, and all
its elements lie on lines parallel to $t(\beta, 1)$ which intersect the real
axis in a discrete set of points.
\end{Lem}
\section{Proof of case 3}
We prove that $\beta$ is an obstructing point. For integers $n, m$ with
$c_{n, m}\neq 0$ the factor $\zeta(-n+ms)^{c_{n, m}}$ creates a pole
or a zero
at $\frac{n+1}{m}$, which for $\frac{n+1}{m}>\beta$ is to the right of
the supposed boundary. Hence, if $\beta$ is not an obstructing point,
for some $\epsilon>0$ and all rational numbers $\xi\in(\beta,
\beta+\epsilon)$ we would have $\sum_{\frac{n+1}{m}=\xi} c_{n, m} = 0$. We now
show that this is impossible by proving that there are pairs $(n, m)$
with $\frac{n+1}{m}$ arbitrarily close to $\beta$, $c_{n, m}\neq 0$,
such that the sum consists of a single term, and is therefore non-zero
as well.
Let $\frac{k}{\ell}$ be the slope of the rays.
Let $\{(n_i, m_i)\}$ be a list of the starting points of the rays
described in Lemma~\ref{Lem:elim}, where $(n_0, m_0)$ defines the
right-most ray.
Take an integer
$q$, such that $c_{kq\nu+n_0, \ell q\nu+m_0}\neq 0$ for all but
finitely many natural numbers $\nu$. Let $d$ be the greatest common divisor of
$m_0$ and $q$. The prime number theorem for arithmetic progressions guarantees
infinitely many $\nu$, such that $\frac{\ell\nu+m_0}{d}=p$ is
prime. Suppose there is a pair $n', m'$ belonging to another ray, such
that $c_{n', m'}\neq 0$ and $\frac{n'+1}{m'}=
\frac{k\nu+n_0}{\ell\nu+m_0}$. The point $(n', m')$ must lie on one
of the finitely many rays, hence, we can write $n'=k\nu'+n_1$,
$m'=\ell\nu'+m_1$. Since $p$ is a divisor of the denominator of the
right hand side, it also has to divide the denominator of the left
hand side. We obtain that $p$ divides both $\ell\nu+m_0$ and
$\ell\nu'+m_1$. Restricting, if necessary, to an arithmetic
progression, we obtain an infinitude of indices such that
$\ell\nu'+m_1=t(\ell\nu +m_0)$, where $t\in[0, 1]$ is a rational number with
denominator dividing $d$. Hence, we obtain that the equations
\[
\ell\nu'+m_1=t(\ell\nu +m_0),\quad t(k\nu'+n_1) = k\nu+n_0
\]
have infinitely many solutions $\nu, \nu' \in \mathbb{N} $. Two linear
equations in two variables, none of which is trivial, can only have
infinitely many solutions, if these equations are equivalent, that is, $t^2=1$,
which implies $t=1$ since $t$ is positive by definition. Hence, writing the
equations as vectors, we have
\[
(\nu-\nu')\binom{k}{\ell} = \binom{n_1}{m_1} - \binom{n_0}{m_0},
\]
that is, the vector linking $\binom{n_0}{m_0}$ with
$\binom{n_1}{m_1}$ is collinear with $\binom{k}{\ell}$, contrary
to the assumption that $(n', m')$ was on a ray other than that of $(n_0, m_0)$. Hence,
poles of $\zeta$-factors accumulate at $\beta$. It remains to check
that these poles are not cancelled by zeros of other factors. Since
zeros of $\zeta$-factors are never positive reals, these factors do not
cause problems. Suppose that a pole of $\zeta(ns-m)$ cancels with a
zero of the local factor $W(p, p^{-s})$, that is, $W(p,
p^{-(m+1)/n})=0$. Since $W$ has coefficients in $\mathbb{Z}$, this implies
that $p^{-(m+1)/n}$ is algebraic of degree at most equal to the degree
of $W$, hence, $\frac{m+1}{n}$ can be reduced to a fraction with
denominator at most equal to the degree of $W$. There are only
finitely many rational numbers in the interval $[\beta, \beta+1]$
with bounded denominator, hence, only finitely many of the poles
can be cancelled, that is, $\beta$ is in fact an obstructing point.
For the corollary note that in cases (2)--(4) $\beta$ is an
obstructing point, that is, in these cases the first condition of the
corollary holds true. In case (1) and (5), we can represent $D$ as the
product of finitely many Riemann $\zeta$-functions multiplied by some
function which is holomorphic in the half-plane
$\mathbb{R}e\;s>\beta$, and has zeros only where the finitely many local
factors vanish. A local factor belonging to the prime $p$ creates a
$\frac{2\pi i}{\log p}$-periodic pattern of zeros, hence, the number
of zeros and poles is bounded above by the number of zeros of the
finitely many $\zeta$-functions, which is $\mathcal{O}(T\log T)$, and
the finitely many sets of periodic patterns, which create
$\mathcal{O}(T)$ zeros. Hence, $N_\pm(\{\mathbb{R}e z>\beta, |\Im z|<T\})$ is
$\mathcal{O}(T\log T)$. It may happen that there are significantly
fewer poles or zeros, if poles of one factor coincide with poles of
another factor; however, we claim that under RH and the assumption of
linear independence of the zeros the amount of cancellation is
negligible. First, if the imaginary parts of the zeros of $\zeta$ are
$\mathbb{Q}$-linearly independent, then we cannot have
$\zeta(n_1s-m_1)=\zeta(n_2s-m_2)=0$ for integers $n_1, n_2, m_1, m_2$
with $(n_1, m_1)\neq (n_2, m_2)$, that is, zeros and poles of
different $\zeta$-factors cannot cancel. There is no cancellation among
local factors, since local factors can only have zeros and never
poles. Now consider cancellation among zeros of local factors and
$\zeta$-factors. We want to show that there are at most finitely many
cancellations. Suppose otherwise. Since there are only finitely many
local factors and finitely many $\zeta$-factors, an infinitude of
cancellation would imply that there are infinitely many cancellations
among one local factor and one $\zeta$-factor. The zeros of a local
factor are of the form $\xi_i+\frac{2k\pi i}{\log p}$, where
$p^{-\xi_i}$ is one of the roots of $W(p, Y)=0$ and $\xi_i$ is chosen
in such a way that $0\leq\Im\xi_i<\frac{2\pi}{\log p}$. Since an
algebraic equation has only finitely many roots, an infinitude of
cancellations implies that for some complex number $\xi$ and
infinitely many integers $k$ we have $\zeta(n(\xi+\frac{2k\pi
i}{\log p})-m)=0$. Choose 4 different such integers $k_1, \ldots,
k_4$, and let $\rho_1, \ldots, \rho_4$ be the corresponding roots of
$\zeta$. Then we have $\rho_1-\rho_2=\frac{2(k_1-k_2)n\pi}{\log p}$,
$\rho_3-\rho_4=\frac{2(k_3-k_4)n\pi}{\log p}$, that is,
$(k_3-k_4)(\rho_1-\rho_2)=(k_1-k_2)(\rho_3-\rho_4)$, which gives a
linear relation among the zeros of $\zeta$, contradicting our assumption.
Hence, if the imaginary parts of the roots of $\zeta$ are $\mathbb{Q}$-linearly
independent, the number of zeros and poles of $D$ in some domain
coincides with the sum of the numbers of zeros and poles of all
factors, up to some bounded error, and our claim follows.
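To make the counting explicit (a standard computation based on the Riemann--von Mangoldt formula, which we recall here): the number of zeros $\rho$ of $\zeta$ with $0<\Im\rho\leq T$ equals
\[
\frac{T}{2\pi}\log\frac{T}{2\pi} - \frac{T}{2\pi} + \mathcal{O}(\log T),
\]
so that each factor $\zeta(ms-n)$ contributes $\frac{mT}{\pi}\log\frac{mT}{2\pi}-\frac{mT}{\pi}+\mathcal{O}(\log T)$ zeros with $|\Im s|<T$, while each of the finitely many local factors contributes $\mathcal{O}(T)$ zeros; summing these contributions gives the asymptotic formula $c_1T\log T + c_2T + \mathcal{O}(\log T)$ of the corollary.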
\section{Examples}
In this section we give examples to show that our classification is
non-trivial in the sense that every case actually occurs.
\begin{ex}
The sum $\sum_{n=1}^\infty\frac{\mu^2(n)\sigma(n)}{n^s}=
\frac{\zeta(s)\zeta(s-1)}{\zeta(2s)\zeta(2s-2)}$ corresponds to the
polynomial $W(X, Y)=(1+Y)(1+XY)$, while the sum
$\sum_{n=1}^\infty\frac{\sigma(n)}{n^s}=
\zeta(s)\zeta(s-1)$ corresponds to the rational function $W(X,
Y)=\frac{1}{(1-Y)(1-XY)}$.
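Indeed (a direct verification of the first correspondence),
\[
\prod_p \big(1+p^{-s}\big)\big(1+p^{1-s}\big) = \prod_p\frac{(1-p^{-2s})(1-p^{2-2s})}{(1-p^{-s})(1-p^{1-s})} = \frac{\zeta(s)\zeta(s-1)}{\zeta(2s)\zeta(2s-2)},
\]
so $W(X, Y)=(1+Y)(1+XY)=(1-Y^2)(1-X^2Y^2)(1-Y)^{-1}(1-XY)^{-1}$ is cyclotomic and the associated Euler product is a finite product of shifted Riemann $\zeta$-functions, as in case (1) of Theorem~\ref{thm:class}.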
\end{ex}
\begin{ex}
{\bf (a)} Let $\Omega(n)$ be the number of prime divisors of $n$ counted with
multiplicity. Then
$\sum_{n=1}^\infty\frac{2^{\Omega(n)}}{n^s}=\prod_p\Big(1+\frac{2p^{-s}}{1-2p^{-s}}\Big)$
corresponds to the rational function $W(X, Y)=1+\frac{2Y}{1-2Y}=\frac{1}{1-2Y}$,
whose main part $\tilde{W}=W$ is not cyclotomic.
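Indeed (a direct check), the local factor $\frac{1}{1-2p^{-s}}$ has poles precisely at the points $s=\frac{\log 2}{\log p}+\frac{2\pi ik}{\log p}$, $k\in\mathbb{Z}$, whose real parts tend to $\beta=0$ as $p\to\infty$; by case (2) of Theorem~\ref{thm:class} every point of the line $\mathbb{R}e\,s=0$ is an obstructing point, and $\mathbb{R}e\,s=0$ is the natural boundary.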
\noindent
{\bf (b)} Let $G$ be the direct
product of three copies of the Heisenberg-group,
$a_n^\triangleleft(G)$ the number of normal subgroups of $G$ of index
$n$. Then $\zeta_G^\triangleleft(s) = \sum_{n=1}^\infty
\frac{a_n^\triangleleft(G)}{n^s}$ was computed by Taylor\cite{Taylor}
and can be written as a finite product
of $\zeta$-functions and an Euler-product of the form $\prod_p W(p,
p^{-s})$, where $W$ consists of 14 monomials and $\tilde{W}(X, Y) =
1-2X^{13}Y^8$, which is not cyclotomic.
\end{ex}
\begin{ex}
{\bf (a)} Let $G$ be the free nilpotent group of class two with three
generators. Then $\zeta_G^\triangleleft(s)$ can be written as a finite
product of $\zeta$-functions and the Euler-product $\prod_p W(p,
p^{-s})$, where
\[
W(X, Y) = 1+X^3Y^3 + X^4Y^3+X^6Y^5+X^7Y^5+X^{10}Y^8.
\]
We have $\tilde{W}(X, Y)=1+X^7Y^5$, which clearly does not divide $W$,
hence, while $\tilde{W}$ is cyclotomic, $W$ is not. Thus $W$ falls under
neither case 1 nor case 2. Theorem~\ref{thm:class} implies that $7/5$ is an
essential singularity of $\zeta_G^\triangleleft$. Du Sautoy and
Woodward\cite{book} showed that in fact the line $\mathbb{R}e\;s=7/5$ is the
natural boundary for $\zeta_G^\triangleleft$.
\noindent
{\bf (b)} Now consider the product
\begin{eqnarray*}
f(s) = \prod_p \Big(1+p^{-s} + p^{1-2s}\Big)
\end{eqnarray*}
Again, the polynomial $W(X, Y)=1+Y+XY^2$ is not cyclotomic, while
$\tilde{W}$ is cyclotomic. Again, Theorem~\ref{thm:class} implies that
$1/2$ is an obstructing point of $f$. However, the question
whether there exists another point on the line $\mathbb{R}e\;s=1/2$ which is an
obstructing point is essentially equivalent to the Riemann hypothesis. We
have
\begin{eqnarray*}
f(s) & = & \frac{\zeta(s)\zeta(2s-1)\zeta(3s-1)}{\zeta(2s)\zeta(4s-2)} R(s)\\
&&\times\;\prod_{m\geq1} \frac{\zeta((4m+1)s-2m)}
{\zeta((4m+3)s-2m-1)\zeta((8m+2)s-4m)},
\end{eqnarray*}
hence, if $\zeta$ has only finitely many zeros off the line $1/2+it$,
then the right hand side has only finitely many zeros in the domain
$\mathbb{R}e\;s>1/2$, $|\Im\;s|>\epsilon$, hence, $1/2$ is the unique obstructing
point on this line. On the other hand, if $\zeta(s)$ has infinitely
many non-real zeros off the line $1/2+it$, then every point on this
line is an obstructing point for $f$ (confer \cite{BSP}).
Hence, while for some polynomials the natural boundary can be
determined, we do not expect any general progress in this case.
\end{ex}
\begin{ex}
{\bf (a)} The local
zeta function associated to the algebraic group $\mathcal G$ is defined as
$$
Z_p(\mathcal G, s)=\int_{\mathcal G_p^+} |\det(g)|_p^{-s}\,d\mu
$$
where $\mathcal G_p^+=\mathcal G(\mathbb{Q}_p)\cap M_n (\mathbb{Z}_p)$, $|\cdot|_p$ denotes the
$p$-adic absolute value
and $\mu$ is the normalised Haar measure on $\mathcal G(\mathbb{Z}_p)$.
In particular
the zeta function associated to the group $\mathcal G=GSp_6$\cite{Igusa}
is given by
\begin{eqnarray*}
Z(s/3) = \zeta(s)\zeta(s-3)\zeta(s-5)\zeta(s-6)\prod_p
\Big(1+p^{1-s}+p^{2-s}+p^{3-s}+p^{4-s}+p^{5-2s}\Big).
\end{eqnarray*}
The polynomial
$$W(X, Y)=1+(X+X^2+X^3+X^4)Y+X^5Y^2$$
satisfies the relation
$\tilde{W}(X, Y)=1+X^4Y$, that is, $\tilde{W}$ is cyclotomic, while $W$
is not. Du Sautoy and Grunewald\cite{ghost} showed that in the
cyclotomic expansion of $W$ there are only finitely many $(n, m)$ with
$c_{n, m}\neq 0$ and $\frac{n+1}{m}>4$, and that $W(p, p^{-s})=0$ has
solutions with $\mathbb{R}e\;s>4$ for infinitely many primes, hence, $W$ is an
example of type 4, and $Z(s/3)$ has the natural boundary $\mathbb{R}e\;s=4$.
\noindent
{\bf (b)} Let $V$ be the cubic variety $x_1x_2x_3=x_4^3$, $U$ be the open subset
$\{\vec{x}\in V\cap\mathbb{Z}^4: x_4\neq 0\}$, $H$ the usual height
function. De la Bret\`eche and Sir Swynnerton-Dyer\cite{2nob} showed
that $Z(s)=\sum_{x\in U} H(x)^{-s}$ can be written as the product of
finitely many $\zeta$-functions, a function holomorphic in a
half-plane strictly larger than $\mathbb{R}e\;s>3/4$, and a function having an
Euler-product corresponding to the rational function
\[
W(X, Y) = 1 + (1-X^3Y)(X^6Y^{-2}+X^5Y^{-1}+X^4+X^2Y^2+XY^3+Y^4)-X^9Y^3.
\]
They showed that in the cyclotomic expansion of this function there
occur only finitely many terms $c_{n, m}X^nY^m$ with $c_{n, m}\neq 0$
and $\frac{n+1}{m}>\frac{3}{4}$, and all but finitely many local
factors have a zero to the right of $\mathbb{R}e\;s=3/4$, hence, $\mathbb{R}e\;s=3/4$
is the natural boundary of $Z(s)$.
\end{ex}
\begin{ex}
Let $J_2(n)$ be the Jordan totient function, i.e. $J_2(n)=\#\{(x,
y):1\leq x, y\leq n, (x, y, n)=1\}$, and define $g(s)=\sum_{n\geq
1}\frac{\mu(n)J_2(n)}{n^s}$. Since $J_2$ is multiplicative, $g$ has
an Euler-product, which can be computed to give
\begin{eqnarray*}
g(s) = \prod_p \Big(1+p^{-s} -p^{2-s}\Big).
\end{eqnarray*}
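Indeed (checking the Euler factor directly from the definition), $\mu(p^k)=0$ for $k\geq 2$, and $J_2(p)=p^2-1$, since among the $p^2$ pairs $(x, y)$ with $1\leq x, y\leq p$ only the pair $(p, p)$ violates $(x, y, p)=1$; hence the local factor equals
\[
1-\frac{p^2-1}{p^s} = 1+p^{-s}-p^{2-s}.
\]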
We have
\[
g(s) = \prod_p(1-p^{2-s})\prod_p\Big(1+\frac{p^{-s}}{1-p^{2-s}}\Big) =
\zeta(s-2)^{-1}D^*(s),
\]
say. For $\sigma=\mathbb{R}e\;s>2+\epsilon$ the Euler product for $D^*$ converges
uniformly, since
\[
\sum_p \big|\frac{p^{-s}}{1-p^{2-s}}\big| \leq \sum_p
\frac{p^{-\sigma}}{1-2^{2-\sigma}}\ll \frac{\zeta(2)}{\epsilon}.
\]
Hence, $D^*$ is holomorphic and non-zero in $\mathbb{R}e\;s>2$, that is, no
point on the line $\mathbb{R}e\;s=2$ is an obstructing point, that is,
Estermann's method cannot prove the existence of a single singularity
of this function.
\end{ex}
\section{Comparison of our classification with the classification of
du Sautoy and Woodward}
In \cite{book}, du Sautoy and Woodward consider several classes of
polynomials for which they can prove Conjecture~\ref{Con:Main}. Since their classes
do not coincide with the classes described in Theorem~\ref{thm:class},
we now describe how the two classifications compare. We will refer to the
classes described in Theorem~\ref{thm:class} as `cases', while we will
continue
to refer to the polynomials of du Sautoy and Woodward by their original
appellation of `type'.
Polynomials of type I are polynomials $W$ such that $\tilde{W}$ is not
cyclotomic; this class coincides with the polynomials in case (2).
Polynomials of type II are polynomials $W$ such that $\tilde{W}$ is
cyclotomic, there are only finitely many $c_{n, m}> 0$ with
$\frac{n+1}{m}>\beta$, and for infinitely many primes we have that
$W(p, p^{-s})$ has zeros to the right of $\beta$. This class contains
all polynomials in case (4), and all polynomials of type II fall under
case (3) or (4), but there are polynomials in case (3) which are not
of type II\@. For polynomials of type II they prove that the line
$\mathbb{R}e\;s=\beta$ is the natural boundary of meromorphic continuation of
$D$, their result for polynomials therefore clearly supersedes
the relevant parts of Theorem~\ref{thm:class}.
Polynomials of type III are polynomials $W$ as in type II, but there
are infinitely many pairs $n, m$ with $c_{n, m}>0$,
$\frac{n+1}{m}>\beta$. These polynomials fall under case (3); under the
Riemann hypothesis they show that $\mathbb{R}e\;s=\beta$ is a natural
boundary. For such polynomials the results are incomparable: our
results are unconditional, yet weaker.
Polynomials of type IV are polynomials with infinitely many pairs $(n,
m)$ satisfying $c_{n,m}\neq 0$ and $\frac{n+1/2}{m}>\beta$, and such
that with the exception of finitely many $p$ there are no local zeros
to the right of $\mathbb{R}e\;s=\beta$. For such
polynomials du Sautoy and Woodward show that $\mathbb{R}e\;s=\beta$ is the
natural boundary, if the imaginary parts of the zeros of $\zeta$ are
$\mathbb{Q}$-linearly independent. All polynomials of type IV fall under case
(3), again, the results are incomparable.
Polynomials of type V are polynomials $W$ such that $\tilde{W}$ is
cyclotomic, with the exception of finitely many $p$ there are no local
zeros to the right of $\beta$, and
there are only finitely many pairs $n, m$ with $c_{n, m}\neq 0$ and
$\frac{n+1}{m}\geq\beta$. This corresponds to case (5).
Polynomials of type VI are polynomials $W$ such that $\tilde{W}$ is
cyclotomic, with the exception of finitely many $p$ there are no local
zeros to the right of $\beta$, there are infinitely many pairs $(n,
m)$ with $c_{n, m}\neq 0$ and $\frac{n+1}{m}>\beta$, only finitely
many of which satisfy $\frac{n+1/2}{m}>\beta$. These fall under case
(3).
Case (1) does not occur in their classification as it is justly
regarded as trivial.
\section{Comparison with the multivariable case}
The object of our study has been the Dirichlet-series $D(s)=\prod W(p,
p^{-s})$. This will be called the
$1\frac{1}{2}$-variable problem since the polynomial has two variables, but the
Dirichlet-series depends on only one complex variable. If the
coefficients of the above series have some arithmetical meaning,
and this meaning translates into a statement on each monomial of $W$, then
the Dirichlet-series $D(s_1, s_2)=\prod_p W(p^{-s_1}, p^{-s_2})$ retains more
information, and it could be fruitful to consider this function instead. Of
course, the gain in information could be at the risk of the technical
difficulties introduced by considering several variables. However, here we
show that the multivariable problem is actually easier than the original
question of $1\frac{1}{2}$-variables.
Where there is no explicit reference to $p$, the problem of a natural boundary
was completely solved by Essouabri, Lichtin and the first named author\cite{Forum}.
\begin{Theo}
Let $W\in\mathbb{Z}[X_1, \ldots, X_k]$ be a polynomial satisfying $W(0, \ldots, 0) =
1$. Set $D(s_1, \ldots, s_k)=\prod_p W(p^{-s_1}, \ldots, p^{-s_k})$. Then $D$
can be meromorphically continued to the whole complex plane if and only if $W$
is cyclotomic. If it cannot be continued to the whole complex plane, then its
maximal domain of meromorphic continuation is the intersection of a finite
number of effectively computable half-spaces. The bounding hyper-plane of each
of these half-spaces passes through the origin.
\end{Theo}
At first sight one may think that one can pass from the $2$-variable case to
a one-variable case by fixing $s_1$; however, this destroys the structure of
the problem, as is demonstrated by the following.
\begin{ex}
\label{ex:2to1}
The Dirichlet-series $D(s_1, s_2) = \prod_p \big(1+(2-p^{-s_1})p^{-s_2}\big)$ as a
function of two variables can be
meromorphically continued into the set $\{(s_1, s_2): \mathbb{R}e\;s_2>0,
\mathbb{R}e\;(s_1+s_2)>0\}$, and the boundary of this set is the natural boundary of
meromorphic continuation. If we fix $s_1$ with $\mathbb{R}e\;s_1\geq 0$, and view $D$
as a function of $s_2$, then $D$ can be continued to $\mathbb{C}$ if and only if
$s_1=0$. In every other case the line $\mathbb{R}e\;s_2=0$ is the natural boundary.
\end{ex}
\begin{proof}
The behaviour of $D(s_1, s_2)$ follows from \cite[Theorem 2]{Forum}.
If we fix $s_1$,
then $1+(2-p^{-s_1})p^{-s_2}$ has zeros with relatively large real part,
provided that either $\mathbb{R}e s_1>0$, or $\mathbb{R}e s_1=0$ and $\mathbb{R}e p^{-s_1}<0$. In the
first case we can argue as in the case that $\tilde{W}$ is not cyclotomic. By
the prime number theorem for short intervals we find that the number of prime
numbers $p<x$ satisfying $\mathbb{R}e p^{-s_1}<0$ is greater than $c\frac{x}{\log x}$, and we see
that we can again adapt the proof for the case $\tilde{W}$ non-cyclotomic.
\end{proof}
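For instance (a direct computation illustrating the exceptional value), for $s_1=0$ every local factor becomes $1+(2-1)p^{-s_2}=1+p^{-s_2}$, so that
\[
D(0, s_2) = \prod_p\big(1+p^{-s_2}\big) = \frac{\zeta(s_2)}{\zeta(2s_2)},
\]
which is indeed meromorphic on all of $\mathbb{C}$.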
In other words, the natural boundary for the $1\frac{1}{2}$-variable problem
is the same as for the 2-variable problem, with one exception, in which the
$1\frac{1}{2}$-variable problem collapses to a 1-variable problem, and in
which case the Euler-product becomes continuable beyond the 2-variable
boundary.
It seems likely that this behaviour should be the prevalent one; it is less
clear, however, what precisely ``this behaviour'' is. One quite strong possibility is
the following:
{\em Suppose that $D(s_1, s_2)=\prod_p W(p^{-s_1}, p^{-s_2})$ has a natural
boundary at $\mathbb{R}e\;s_1=0$. Then there are only finitely many values $s_2$, for
which the specialization $D(\cdot, s_2)$ is meromorphically continuable
beyond $\mathbb{R}e\;s_1=0$.}
However, this statement is right now supported only by a general lack of
examples, and the fact that Example~\ref{ex:2to1} looks quite natural,
so we do not dare to state it as a conjecture. However, we believe that some progress in
this direction could be easier to obtain than directly handling
Conjecture~\ref{Con:Main}. In particular, those cases in which zeros of
$\zeta$ pose a serious threat to the local zeros would become a lot easier, since this
type of cancellation can only affect a countable number of values of $s_2$.
\begin{tabular}{ll}
Gautami Bhowmik, & Jan-Christoph Schlage-Puchta,\\
Universit\'e de Lille 1, & Albert-Ludwigs-Universit\"at,\\
Laboratoire Paul Painlev\'e, & Mathematisches Institut,\\
U.M.R. CNRS 8524, & Eckerstr. 1,\\
59655 Villeneuve d'Ascq Cedex, & 79104 Freiburg,\\
France & Germany\\
[email protected] & [email protected]
\end{tabular}
\end{document}
\begin{document}
\title{Extremal sequences for the Bellman function of three variables of the dyadic maximal operator related to Kolmogorov's inequality}
\begin{abstract}
We give a characterization of the extremal sequences for the Bellman function of three variables of the dyadic maximal operator in relation to Kolmogorov's inequality. In fact we prove that they behave approximately like eigenfunctions of this operator for a specific eigenvalue. For our approach we use the methods introduced in \cite{11}, where the respective Bellman function has been precisely evaluated.
\end{abstract}
\section{Introduction} \label{sec:1}
The dyadic maximal operator on $\mb R^n$ is a useful tool in analysis and is defined by
\begin{equation} \label{eq:1p1}
\mc M_d\phi(x) = \sup\left\{ \frac{1}{|S|} \int_S |\phi(u)|\,\mr du: x\in S,\ S\subseteq \mb R^n\ \text{is a dyadic cube} \right\},
\end{equation}
for every $\phi\in L^1_\text{loc}(\mb R^n)$, where $|\cdot|$ denotes the Lebesgue measure on $\mb R^n$, and the dyadic cubes are those formed by the grids $2^{-N}\mb Z^n$, for $N=0, 1, 2, \ldots$.\\
It is well known that it satisfies the following weak type (1,1) inequality
\begin{equation} \label{eq:1p2}
\left|\left\{ x\in\mb R^n: \mc M_d\phi(x) > \lambda \right\}\right| \leq \frac{1}{\lambda} \int_{\left\{\mc M_d\phi > \lambda\right\}} |\phi(u)|\,\mr du,
\end{equation}
for every $\phi\in L^1(\mb R^n)$, and every $\lambda>0$,
from which follows in view of Kolmogorov's inequality the following $L^q$-inequality
\begin{equation} \label{eq:1p3}
\int_E \left|\mc M_d\phi(u)\right|^q\mr du \leq \frac{1}{1-q} |E|^{1-q} \|\phi\|_1^q,
\end{equation}
for every $q\in(0,1)$, every $\phi\in L^1(\mb R^n)$ and every measurable subset $E$ of $\mb R^n$ of finite measure. It is not difficult to see that the weak type inequality \eqref{eq:1p2} is best possible. For refinements of this inequality one can see \cite{15}, \cite{17} and \cite{18}.
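For completeness we recall the standard deduction of \eqref{eq:1p3} from \eqref{eq:1p2} (a routine computation included only for the reader's convenience). By the layer cake formula and \eqref{eq:1p2},
\[
\int_E \left(\mc M_d\phi\right)^q\mr du = q\int_0^\infty \lambda^{q-1}\left|\left\{\mc M_d\phi>\lambda\right\}\cap E\right|\mr d\lambda \leq q\int_0^\infty \lambda^{q-1}\min\!\Big(|E|, \frac{\|\phi\|_1}{\lambda}\Big)\mr d\lambda,
\]
and splitting the last integral at $\lambda_0=\|\phi\|_1/|E|$ gives
\[
q|E|\int_0^{\lambda_0}\lambda^{q-1}\mr d\lambda + q\|\phi\|_1\int_{\lambda_0}^\infty \lambda^{q-2}\mr d\lambda = |E|\lambda_0^q + \frac{q}{1-q}\|\phi\|_1\lambda_0^{q-1} = \frac{1}{1-q}|E|^{1-q}\|\phi\|_1^q.
\]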
An approach for studying the behaviour of this maximal operator in more depth is the introduction of the so-called Bellman functions related to it, which reflect certain deeper properties of the operator by localizing. Such functions related to the $L^q$ inequality \eqref{eq:1p3} have been precisely evaluated in \cite{11}. Define $\Av_E(\psi)=\frac{1}{|E|} \int_E |\psi|$, where $E\subseteq \mb R^n$ is measurable of positive measure and $\psi$ is measurable on $E$, and, fixing a dyadic cube $Q$, define the localized maximal operator $\mc M'_d\phi$ as in \eqref{eq:1p1}, but with the dyadic cubes $S$ assumed to be contained in $Q$. Then for every $q\in(0,1)$ we let
\begin{equation} \label{eq:1p4}
B_q(f,h)=\sup\left\{ \frac{1}{|Q|} \int_Q (\mc M'_d\phi)^q: \Av_Q(\phi)=f,\ \Av_Q(\phi^q)=h \right\}
\end{equation}
where $\phi$ is nonnegative in $L^1(Q)$ and the variables $f, h$ satisfy $0<h\leq f^q$. By a scaling argument it is easy to see that the above is independent of the choice of $Q$ (so we have just written $B_q(f,h)$, and we may take $Q=[0,1]^n$).
In \cite{11} the function \eqref{eq:1p4} has been precisely evaluated. The proof has been given in a much more general setting of tree-like structures on probability spaces.
More precisely we consider a non-atomic probability space $(X,\mu)$ and let $\mc T$ be a family of measurable subsets of $X$, that has a tree-like structure similar to the one in the dyadic case (the exact definition will be given in Section \ref{sec:2}).
Then we define the dyadic maximal operator associated with $\mc T$, by
\begin{equation} \label{eq:1p5}
\mc M_{\mc T}\phi(x) = \sup \left\{ \frac{1}{\mu(I)} \int_I |\phi|\,\mr d\mu: x\in I\in \mc T \right\},
\end{equation}
for every $\phi\in L^1(X,\mu)$. \\
This operator is related to the theory of martingales and satisfies essentially the same inequalities as $\mc M'_d$ does. Now we define the corresponding Bellman function of $\mc M_{\mc T}$, by
\begin{multline} \label{eq:1p6}
B_q^{\mc T}(f,h,L,k) = \sup \left\{ \int_E \left[ \max(\mc M_{\mc T}\phi, L)\right]^q\mr d\mu: \phi\geq 0, \int_X\phi\,\mr d\mu=f, \right. \\ \left. \int_X\phi^q\,\mr d\mu = h,\ E\subseteq X\ \text{measurable with}\ \mu(E)=k\right\},
\end{multline}
the variables $f, h, L, k$ satisfying $0<h\leq f^q$, $L\geq f$ and $k\in (0,1]$.
The evaluation of \eqref{eq:1p6} is now given in \cite{11}, and has been done in several steps. The first one is to find the value of
\begin{equation} \label{eq:1p7}
B_q^{\mc T}(f,h,f,1) = \sup\left\{ \int_X (\mc M_{\mc T}\phi)^q\,\mr d\mu:\ \phi \geq 0,\ \int_X \phi\,\mr d\mu = f,\ \int_X \phi^q\,\mr d\mu = h\right\}.
\end{equation}
It is proved in \cite{11}, that \eqref{eq:1p7} equals $h\,\omega_q\!\left(\frac{f^q}{h}\right)$ where $\omega_q: [1,+\infty) \to [1,+\infty)$ is defined as $\omega_q(z) = \left[H^{-1}_q(z)\right]^q$, where $H^{-1}_q$ is the inverse of $H_q$ given by $H_q(z) = (1-q)z^q + qz^{q-1}$, for $z\geq 1$. \\
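Note that $\omega_q$ is indeed well defined (a one-line verification): since
\[
H_q'(z) = q(1-q)z^{q-1}+q(q-1)z^{q-2} = q(1-q)z^{q-2}(z-1) \geq 0 \quad\text{for } z\geq 1,
\]
with $H_q(1)=1$ and $H_q(z)\to+\infty$ as $z\to+\infty$, the function $H_q$ maps $[1,+\infty)$ bijectively onto $[1,+\infty)$, so that $H_q^{-1}$ exists. \\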
The second step for the evaluation of \eqref{eq:1p6} is to find $B_q^{\mc T}(f,h,L,1)$ for arbitrary $L\ge f$.
We state the related result:
\begin{ctheorem}{1} \label{thm:1}
With the above notation
\begin{equation} \label{eq:1p8}
B_q^{\mc T}(f,h,L,1) = h\,\omega_q\!\left( \frac{(1-q)L^q + qL^{q-1}f}{h}\right).
\end{equation}
\end{ctheorem}
Our aim in this paper is to characterize the extremal sequences of functions involving \eqref{eq:1p8}. More precisely we will prove the following
\begin{ctheorem}{A} \label{thm:a}
Let $\phi_n: (X,\mu) \to \mb R^+$ be such that $\int_X \phi_n\mr d\mu=f$ and $\int_X \phi_n^q\mr d\mu=h$, where $f,h$ are fixed with $0< h\leq f^q$, $q\in (0,1)$ and $n\in \mb N$. Suppose additionally that $L\geq f$. Then the following are equivalent:
\begin{enumerate}[i)]
\item \quad $\displaystyle \lim_n \int_X \left[ \max(\mc M_{\mc T}\phi_n, L)\right]^q\mr d\mu = B_q^{\mc T} (f, h, L, 1) $
\item \quad $\displaystyle \lim_n \int_X \left| \max(\mc M_{\mc T}\phi_n, L) - c^\frac{1}{q}\phi_n\right|^q\mr d\mu = 0 $,
\end{enumerate}
where $c = \omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right)$.
\end{ctheorem}
We discuss now the method of the proof of Theorem \ref{thm:a}. We begin by proving two Theorems (\ref{thm:4p1} and \ref{thm:4p2}) which are generalizations of the results in \cite{11}. By using these theorems, we prove Theorem \ref{thm:4p3}, which is valid for any extremal sequence and is in fact a weak form of Theorem \ref{thm:a}. We then apply Theorem \ref{thm:4p3} to a new sequence of functions, called $(g_{\phi_n})_n$, which arises from $(\phi_n)_n$ in a natural way, as we shall see.
The function $g_{\phi_n}$ is in fact equal to $\phi_n$ on the set $\left\{ \mc M_{\mc T} \phi_n \leq L\right\}$, and constant on certain subsets of $E_n = \left\{\mc M_{\mc T}\phi_n > L\right\}$, which are enough for one to describe the behavior of $\mc M_{\mc T}\phi_n$ in $E_n$.
This new sequence has the property that it is in fact arbitrarily close to $(\phi_n)_n$, thus it is extremal. An application of Theorem \ref{thm:4p3} to this new sequence and some combinations of lemmas will then enable us to provide the proof of Theorem \ref{thm:a}.
We need also to mention that the extremizers for the standard Bellman function for the case $p>1$ have been studied in \cite{16}, inspired by \cite{10}. In this paper we study the more general case (for the Bellman function of three variables and for $q\in (0,1)$), which presents additional difficulties because of the presence of the third variable $L$.
We note also that further study of the dyadic maximal operator can be seen in \cite{19} and \cite{18} where symmetrizations principles for this operator are presented, while other approaches for the determination of certain Bellman function can be found in \cite{26}, \cite{27}, \cite{31}, \cite{32}, and \cite{33}.
Also we need to say that the phenomenon that the norm of a maximal operator is attained by a sequence of eigenfunctions does not occur here for the first time; see for example \cite{4} and \cite{5}.
Nevertheless as far as we know this phenomenon is presented here and in \cite{6} for the first time for the case of more generalized norms, such as the Bellman functions that we describe.
There are several problems in Harmonic Analysis where Bellman functions naturally arise. Such problems (including the dyadic Carleson imbedding and
weighted inequalities) are described in \cite{14} (see also \cite{12}, \cite{13}) and also connections to Stochastic Optimal Control are provided,
from which it follows that the corresponding Bellman functions satisfy certain nonlinear second order PDE.
The exact computation of a Bellman function is a difficult task which is connected with the deeper structure of the corresponding Harmonic Analysis
problem. Thus far several Bellman functions have been computed (see \cite{2}, \cite{3}, \cite{10}, \cite{25}, \cite{27}, \cite{31}, \cite{32}, \cite{33}). L. Slavin, A. Stokolos and V. Vasyunin \cite{26} linked the Bellman function computation to solving certain PDEs of the Monge-Amp\`{e}re type, and in this way they obtained an alternative proof of the results on the Bellman functions related to the dyadic maximal operator in \cite{10}. In that work the Bellman function corresponding to \eqref{eq:1p7} is evaluated precisely for the case $q>1$. Also, in \cite{33}, using the Monge-Amp\`{e}re equation approach, a more general Bellman function than the one related to the dyadic Carleson imbedding theorem has been evaluated precisely, thus generalizing the corresponding result in \cite{10}. For more recent developments and results related to the Bellman function technique we refer to \cite{1}, \cite{6}, \cite{7}, \cite{22}, \cite{23}, \cite{24}, \cite{28}, \cite{29}, \cite{36}. Additional results can be found in \cite{2}, \cite{21}, \cite{34}, \cite{35}, while for the study of the general theory of maximal operators one can consult \cite{30}.
In this paper, as in our previous ones, we use Bellman functions as a means to gain a deeper understanding of the corresponding maximal operators, and we do
not use the standard techniques such as Bellman dynamics and induction, the corresponding PDEs, obstacle conditions, etc. Instead, our methods, which differ from the Bellman function technique, rely on the combinatorial structure of these operators. For such approaches, which enable us to study and solve problems such as the one described in this article, one can see \cite{8}, \cite{9}, \cite{10}, \cite{11}, \cite{16} and \cite{19}.
\section{Preliminaries} \label{sec:2}
Let $(X,\mu)$ be a nonatomic probability space. We recall the following definition from \cite{10} and \cite{11}.
\begin{definition} \label{def:2p1}
A set $\mc T$ of measurable subsets of $X$ will be called a tree if the following are satisfied
\begin{enumerate}[i)]
\item $X\in\mc T$ and for every $I\in\mc T$, $\mu(I) > 0$.
\item For every $I\in\mc T$ there corresponds a finite or countable subset $C(I)$ of $\mc T$ containing at least two elements such that
\begin{enumerate}[a)]
\item the elements of $C(I)$ are pairwise disjoint subsets of $I$.
\item $I = \cup\, C(I)$.
\end{enumerate}
\item $\mc T = \cup_{m\geq 0} \mc T_{(m)}$, where $\mc T_{(0)} = \left\{ X \right\}$ and $\mc T_{(m+1)} = \cup_{I\in \mc T_{(m)}} C(I)$.
\item The following holds
\[
\lim_{m\to\infty} \sup_{I\in \mc T_{(m)}} \mu(I) = 0
\]
\end{enumerate}
\end{definition}
\noindent We state now the following lemma as is given in \cite{10}.
\begin{lemma} \label{lem:2p1}
For every $I\in \mc T$ and every $\alpha\in (0,1)$ there exists a subfamily $\mc F(I) \subseteq \mc T$ consisting of pairwise disjoint subsets of $I$ such that
\[
\mu\!\left( \underset{J\in\mc F(I)}{\bigcup} J \right) = \sum_{J\in\mc F(I)} \mu(J) = (1-\alpha)\mu(I).
\]
\end{lemma}
\noindent Suppose now that we are given a tree $\mc T$ on a nonatomic probability space $(X,\mu)$. Then we define the associated dyadic maximal operator $\mc M_{\mc T}$ by \eqref{eq:1p5} (see the Introduction).
\begin{definition} \label{def:2p2}
Let $(\phi_n)_n$ be a sequence of $\mu$-measurable nonnegative functions defined on $X$, $q\in(0,1)$, $0<h\leq f^q$ and $L\geq f$. Then $(\phi_n)_n$ is called extremal if the following hold: $\int_X \phi_n\,\mr d\mu = f$, $\int_X \phi_n^q\,\mr d\mu = h$ for every $n\in \mb N$, and $\lim_n \int_X \left[ \max\left( \mc M_{\mc T}\phi_n,L\right) \right]^q\mr d\mu = c h$, where $c = \omega_q\!\left( \frac{(1-q)L^q + qL^{q-1}f}{h} \right)$. (See Theorem \ref{thm:1}, relation \eqref{eq:1p8}).
\end{definition}
For the proof of Theorem \ref{thm:1} an effective linearization of the operator $\mc M_{\mc T}$ was introduced, valid for certain functions $\phi$.
We describe it. For a nonnegative function $\phi \in L^1(X,\mu)$ and $I\in \mc T$ we define $\Av_I(\phi) = \frac{1}{\mu(I)} \int_I \phi\,\mr d\mu$. We will say that $\phi$ is $\mc T$-good if the set
\[
\mc A_\phi = \left\{ x\in X: \mc M_{\mc T}\phi(x) > \Av_I(\phi)\ \text{for all}\ I\in \mc T\ \text{such that}\ x\in I \right\}
\]
has $\mu$-measure zero.
\noindent Let now $\phi$ be $\mc T$-good and $x\in X\setminus\mc A_\phi$. We define $I_\phi(x)$ to be the largest element of the nonempty set
\[
\big\{I\in \mc T: x\in I\ \text{and}\ \mc M_{\mc T}\phi(x) = \Av_I(\phi)\big\}.
\]
Now given $I\in\mc T$ let
\begin{align*}
A(\phi,I) &= \big\{ x\in X\setminus\mc A_\phi: I_\phi(x)=I \big\} \subseteq I,\ \text{and} \\[2pt]
S_\phi &= \big\{ I\in\mc T: \mu\left(A(\phi,I)\right) > 0 \big\} \cup \big\{X\big\}.
\end{align*}
Obviously then, $\mc M_{\mc T}\phi = \sum_{I\in S_\phi} \Av_I(\phi) \mc X_{A(\phi,I)}$, $\mu$-almost everywhere on $X$, where $\mc X_S$ is the characteristic function of $S\subseteq X$.
We define also the following correspondence $I \to I^\star$ by: $I^\star$ is the smallest element of $\{ J\in S_\phi: I\subsetneq J \}$.
This is defined for every $I\in S_\phi$ except $X$. It is obvious that the $A(\phi,I)$'s are pairwise disjoint and that $\mu\!\left( \cup_{I\notin S_\phi} A(\phi,I) \right)=0$, so that $\cup_{I\in S_\phi} A(\phi,I) \approx X$, where by $A\approx B$ we mean that $\mu(A\setminus B) = \mu(B\setminus A) = 0$. \\
We will need the following
\begin{lemma} \label{lem:2p2}
Let $\phi$ be $\mc T$-good and $I\in \mc T$, $I \neq X$. Then $I\in S_\phi$ if and only if every $J\in \mc T$ that properly contains $I$ satisfies $\Av_J(\phi) < \Av_I(\phi)$.
\end{lemma}
\begin{proof}
Suppose that $I\in S_\phi$. Then $\mu(A(\phi,I))>0$. As a consequence $A(\phi,I)\neq \emptyset$, so there exists $x\in A(\phi,I)$. By the definition of $A(\phi,I)$ we have that $I_\phi(x)=I$, that is, $I$ is the largest element of $\mc T$ containing $x$ such that $\mc M_{\mc T}\phi(x) = \Av_I(\phi)$. As a consequence, every $J\in\mc T$ with $J\supsetneq I$ contains $x$ and satisfies $\Av_J(\phi)\leq \mc M_{\mc T}\phi(x) = \Av_I(\phi)$, and the inequality is strict by the maximality of $I$. Thus the implication stated in our lemma holds.
Conversely now, suppose that $I\in \mc T$ and for every $J\in \mc T$ with $J \supsetneq I$ we have that $\Av_J(\phi) < \Av_I(\phi)$. Then since $\phi$ is $\mc T$-good, for every $x\in I\setminus \mc A_\phi$ there exists $J_x$ ($= I_\phi(x)$) in $S_\phi$ such that $\mc M_{\mc T}\phi(x) = \Av_{J_x}(\phi)$ and $x\in J_x$. By our hypothesis we must have that $J_x \subseteq I$.
Now, consider the family $S' = \left\{ J_x:\ x\in I\setminus \mc A_\phi \right\}$. This has the property that $\cup_{x\in I\setminus \mc A_\phi} J_x \approx I$. Choose a subfamily $S^2 = \{J_1, J_2, \ldots\}$ of $S'$ consisting of its maximal elements under the $\subseteq$ relation. Then $I\approx \cup_{i=1}^\infty J_i$, where the last union
is pairwise disjoint because of the maximality of $S^2$.
Suppose now that $I$ does not belong to $S_\phi$. This means that $\mu(A(\phi,I))=0$, so we must have that for every $x\in I\setminus\mc A_\phi$, $J_x\subsetneq I$. Since $J_x$ belongs to $S_\phi$ for every such $x$, by the first part of the proof of this Lemma we conclude that $\Av_{J_x}(\phi) > \Av_I(\phi)$.
Thus for every $i$, we must have that $\Av_{J_i}(\phi) > \Av_I(\phi)$. Since $S^2$ is, up to $\mu$-null sets, a partition of $I$, this would give $\Av_I(\phi) = \sum_i \frac{\mu(J_i)}{\mu(I)}\Av_{J_i}(\phi) > \Av_I(\phi)$, a contradiction. Thus we must have that $I\in S_\phi$.
\end{proof}
The following lemma, obtained in \cite{3}, is also true.
\begin{lemma} \label{lem:2p3}
Let $\phi$ be $\mc T$-good. Then the following hold:
\begin{enumerate}[i)]
\item If $I, J\in S_\phi$ then either $A(\phi,J)\cap I = \emptyset$ or $J\subseteq I$.
\item If $I\in S_\phi$, then there exists $J\in C(I)$ such that $J\notin S_\phi$.
\item For every $I\in S_\phi$ we have that
\[
I \approx \underset{\substack{J\in S_\phi \\ J\subseteq I\ \,}}{\cup} A(\phi,J).
\]
\item For every $I\in S_\phi$ we have that
\begin{gather*}
A(\phi,I) = I\setminus \underset{\substack{J\in S_\phi \\ J^\star = I\ }}{\cup} J,\ \ \text{so that} \\
\mu(A(\phi,I)) = \mu(I) - \sum_{\substack{J\in S_\phi\\ J^\star=I\ }} \mu(J).
\end{gather*}
\end{enumerate}
\end{lemma}
\section{Some technical Lemmas}
In this section we collect some technical results whose proofs can be seen in \cite{4}. We begin with
\begin{lemma} \label{lem:3p1}
Let $0<q<1$ be fixed. Then
\begin{enumerate}[i)]
\item The function $\omega_q: [1,+\infty) \to [1,+\infty)$ is strictly increasing and strictly concave.
\item The function $U_q(x) = \frac{\omega_q(x)}{x}$ is strictly increasing on $[1,+\infty)$.
\end{enumerate}
\end{lemma}
\noindent For the next Lemma we consider for any $q\in (0,1)$ the following formula
\[
\sigma_q(k,x) = \frac{H_q\!\left(\frac{x(1-k)}{1-kx}\right)}{H_q(x)},
\]
defined for all $k, x$ such that $0<k<1$ and $0<x<\frac{1}{k}$. A straightforward computation shows (as is mentioned in \cite{4}) that
\[
\sigma_q(k,x) = \frac{(1-q)x + q - kx}{(1-k)^{1-q}(1-kx)^q((1-q)x+q)}.
\]
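For the reader's convenience we sketch this computation, writing $H_q(z) = (1-q)z^q + q z^{q-1} = z^{q-1}\big((1-q)z+q\big)$:
\[
H_q\!\left(\frac{x(1-k)}{1-kx}\right) = \left(\frac{x(1-k)}{1-kx}\right)^{\!q-1}\frac{(1-q)x(1-k)+q(1-kx)}{1-kx}
= \frac{x^{q-1}(1-k)^{q-1}\left[(1-q)x+q-kx\right]}{(1-kx)^{q}},
\]
and dividing by $H_q(x) = x^{q-1}\big((1-q)x+q\big)$ we arrive at the displayed expression for $\sigma_q(k,x)$.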
We state now the following
\begin{lemma} \label{lem:3p2}
\begin{enumerate}[i)]
\item For any fixed $\lambda>1$ the equation
\begin{equation} \label{eq:3p1}
H_q\!\left(\frac{x(1-k)}{1-kx}\right) = \lambda H_q(x),
\end{equation}
has a unique solution $x = \mc X(\lambda,k) = \mc X_\lambda(k)$ on the interval $\left(1,\frac 1 k\right)$ and it has a solution in the interval $(0,1)$ if and only if $\lambda<(1-k)^{q-1}$ in which case this is also unique.
\item For $\mu\geq 0$ define the following function
\begin{equation} \label{eq:3p2}
R_{q,\mu}(k,x) = \left(\frac{x(1-k)}{1-kx}\right)^q \frac{1}{\sigma_q(k,x)} + \left(\mu^q-x^q\right)(1-k),
\end{equation}
on $W=\big\{(k,x): 0<k<1\ \text{and}\ 1<x<\frac{1}{k}\big\}$. \\
Then if $\mu > 1$ and $\xi$ is in $(0,1]$ the maximum value of $R_{q,\mu}$ on the set $\left\{ (k,x)\in W: 0<k\leq \xi\ \text{and}\ \sigma_q(k,x)=\lambda\right\}$ is equal to $\frac{1}{\lambda}\omega_q(\lambda H_q(\mu))$ if $\xi\geq k_0(\lambda,\mu)$, where $k_0(\lambda,\mu)$ is given by
\begin{equation} \label{eq:3p3}
k_0(\lambda,\mu) = \frac{\omega_q(\lambda H_q(\mu))^\frac{1}{q} - \mu}{\mu\!\left(\omega_q(\lambda H_q(\mu))^\frac{1}{q}-1\right)},
\end{equation}
and is the unique solution in $\left(0, \frac 1 \mu\right)$ of the equation $\sigma_q(k_0,\mu)=\lambda$.
Additionally, comparing with \rnum{1}) of this Lemma, we have that $\mc X_\lambda(k_0)=\mu$.
\end{enumerate}
\end{lemma}
\noindent Now for the next Lemma we fix real numbers $f,\ h$ and $k$ with $0<h<f^q$ and $0<k<1$, and we consider the functions
\[
\ell_k(B) = (1-k)^{1-q}(f-B)^q + k^{1-q}B^q,
\]
defined for $0\leq B\leq f$ and
\begin{equation} \label{eq:3p4}
R_k(B) = \left\{ \begin{aligned}
& \left( h - (1-k)^{1-q}(f-B)^q \right) \omega_q\!\left( \frac{k^{1-q}B^q}{h - (1-k)^{1-q}(f-B)^q} \right), \\
& \hphantom{\frac{k^{1-q}B^q}{1-q},}\hspace{70pt} \text{if}\ (1-k)^{1-q}(f-B)^q< h\leq \ell_k(B), \\[5pt]
& \frac{k^{1-q}B^q}{1-q},\hspace{70pt} \text{if}\ h\leq (1-k)^{1-q}(f-B)^q,
\end{aligned} \right.
\end{equation}
defined for all $B\in [0,f]$ such that $\ell_k(B)\geq h$. \\
Noting that $\ell_k$ has an absolute maximum at $B=kf$ with $\ell_k(kf) = f^q > h$ and that it is monotone on each of the intervals $(0, kf)$ and $(kf, f)$, we conclude that either $\ell_k(f) < h$, i.e. $k^{1-q}f^q < h$, in which case the equation $\ell_k(B) = h$ has a unique solution in $(kf,f)$, denoted by $\rho_1 = \rho_1(f,h,k)$, or $\ell_k(f) \geq h$, in which case we set $\rho_1 = \rho_1(f,h,k) = f$.
Also either $\ell_k(0)<h$, i.e. $(1-k)^{1-q}f^q < h$ in which case the equation $\ell_k(B)=h$ has a unique solution on $(0,kf)$ and this is denoted by $\rho_0 = \rho_0(f,h,k)$, or $\ell_k(0)\geq h$ in which case we set $\rho_0 = \rho_0(f,h,k) = 0$. In all cases the domain of definition of $R_k$ is the interval $W_k = W_k(f,h) = [\rho_0,\rho_1]$. \\
We are now able to give the following
\begin{lemma} \label{lem:3p3}
The maximum value of the function $R_k$ on $W_k$ is attained at the unique point $B^\star = \mc X_\lambda(k)kf > kf$ where $\lambda = \frac{f^q}{h}$ (see Lemma \ref{lem:3p2}). Moreover
\begin{equation} \label{eq:3p5}
\max_{W_k}(R_k) = h\,\omega_q\!\left(\frac{f^q}{h} H_q(\mc X_\lambda(k)) \right) - (1-k)f^q(\mc X_\lambda(k))^q.
\end{equation}
Additionally $B^\star$ satisfies
\[
(1-k)^{1-q}(f-B^\star)^q < h < \ell_k(B^\star).
\]
\end{lemma}
The above Lemmas are enough for us to study the extremal sequences for \eqref{eq:1p8} as we shall see in the next Section.
\section{Extremal sequences for the Bellman function}
We prove the following
\begin{theorem} \label{thm:4p1}
Let $\phi$ be a $\mc T$-good function such that $\int_X \phi\,\mr d\mu = f$. Let also $B=\{I_j\}_j$ be a family of pairwise disjoint elements of $S_\phi$ which is maximal in $S_\phi$ under the $\subseteq$ relation, that is, every $I\in S_\phi$ satisfies $I\cap (\cup_j I_j) \neq \emptyset$. Then the following inequality holds
\begin{multline*}
\int_{X\setminus \cup_j I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu \leq \\
\frac{1}{(1-q)\beta} \left[ (\beta+1) \left(f^q - \sum\mu(I_j)y_{I_j}^q\right) - (\beta+1)^q \int_{X\setminus \cup_j I_j} \phi^q\,\mr d\mu \right]
\end{multline*}
for every $\beta>0$, where $y_{I_j} = \mathrm{Av}_{I_j}(\phi)$.
\end{theorem}
\begin{proof}
We follow \cite{4}.
Let $S = S_\phi$, $\alpha_I = \mu(A(\phi,I))$, $\rho_I = \frac{\alpha_I}{\mu(I)}\in(0,1]$ and
\[
y_I = \Av_I(\phi) = \frac{1}{\mu(I)} \sum_{J\in S: J\subseteq I} \alpha_J x_J,\ \ \text{for every}\ \ I\in S,
\]
where $x_J= \frac{1}{\alpha_J} \int_{A(\phi,J)} \phi\,d\mu$, for any $J \in S_\phi$.
It is easy now to see in view of Lemma \ref{lem:2p3} \rnum{4}) that
\[
y_I\mu(I) = \sum_{J\in S: J^\star=I} y_J\mu(J) + \alpha_I x_I,
\]
and so by using concavity of the function $t\to t^q$, we have for any $I\in S$,
\begin{align} \label{eq:4p1}
[y_I\mu(I)]^q &= \left( \sum_{J\in S: J^\star=I} y_J\mu(J) + \alpha_Ix_I\right)^q \notag \\
&= \left( \sum_{J\in S: J^\star = I} \tau_I\mu(J)\frac{y_J}{\tau_I} + \sigma_I\alpha_I\frac{x_I}{\sigma_I}\right)^q \notag \\
&\geq \sum_{J\in S: J^\star = I} \tau_I\mu(J)\left(\frac{y_J}{\tau_I}\right)^q + \sigma_I\alpha_I\left(\frac{x_I}{\sigma_I}\right)^q,
\end{align}
where $\tau_I, \sigma_I > 0$ satisfy
\[
\tau_I(\mu(I)-\alpha_I) + \sigma_I\alpha_I = \sum_{J\in S: J^\star=I} \tau_I\mu(J) + \sigma_I\alpha_I = 1.
\]
We now fix $\beta>0$ and let
\[
\sigma_I = ((\beta+1)\mu(I) - \beta\alpha_I)^{-1},\ \ \tau_I = (\beta+1)\sigma_I
\]
which satisfy the above relation and thus we get by dividing with $\sigma_I^{1-q}$ that
\begin{equation} \label{eq:4p2}
((\beta+1)\mu(I) - \beta\alpha_I)^{1-q} (y_I\mu(I))^q \geq \sum_{J\in S: J^\star=I} (\beta+1)^{1-q} \mu(J) y_J^q + \alpha_Ix_I^q,
\end{equation}
However,
\begin{equation} \label{eq:4p3}
x_I^q = \left( \frac{1}{\alpha_I} \int_{A(\phi,I)} \phi\,d\mu\right)^q \geq \frac{1}{\alpha_I}\int_{A(\phi,I)} \phi^q\,\mr d\mu.
\end{equation}
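Let us note that inequality \eqref{eq:4p3} is just Jensen's inequality for the concave function $t\mapsto t^q$ with respect to the normalized measure $\frac{\mu}{\alpha_I}$ on $A(\phi,I)$; equivalently, it is the power mean (H\"{o}lder) inequality
\[
\int_{A(\phi,I)}\phi^q\,\mr d\mu \leq \mu\big(A(\phi,I)\big)^{1-q}\left(\int_{A(\phi,I)}\phi\,\mr d\mu\right)^{\!q}.
\]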
We sum now \eqref{eq:4p2} over all $I\in S$ such that $I\supsetneq I_j$ for some $j$ (which we denote by $I\supsetneq \mr{piece}(B)$) and we obtain
\begin{multline} \label{eq:4p4}
\sum_{I\supsetneq \mr{piece}(B)} ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q} (y_I\mu(I))^q \geq\\
\sum_{\substack{I\supsetneq \mr{piece}(B)\\ I\neq X}} (\beta+1)^{1-q} \mu(I) y_I^q + \sum_j (\beta+1)^{1-q} \mu(I_j) y_{I_j}^q + \sum_{I\supsetneq \mr{piece}(B)} \alpha_I x_I^q.
\end{multline}
Note that the first two sums appear in \eqref{eq:4p4} because of the maximality of $(I_j)_j$. Now \eqref{eq:4p4} gives:
\begin{multline} \label{eq:4p5}
\sum_{I\supsetneq \mr{piece}(B)} (\beta+1)^{1-q}\mu(I)y_I^q - \sum_{I\supsetneq \mr{piece}(B)} ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q} (y_I\mu(I))^q \leq \\
(\beta+1)^{1-q}y_X^q - \int_{X\setminus \cup I_j} \phi^q\,\mr d\mu - \sum_j (\beta+1)^{1-q}\mu(I_j)y_{I_j}^q,
\end{multline}
in view of H\"{o}lder's inequality \eqref{eq:4p3}. Thus \eqref{eq:4p5} gives
\begin{multline} \label{eq:4p6}
\sum_{I\supsetneq \mr{piece}(B)} \left[(\beta+1)^{1-q}\mu(I) - ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q}\mu(I)^q\right]y_I^q \leq \\
(\beta+1)^{1-q} \left(f^q - \sum\mu(I_j)y_{I_j}^q\right) - \int_{X\setminus \cup I_j} \phi^q\,\mr d\mu,
\end{multline}
On the other hand we have that
\begin{align} \label{eq:4p7}
& \frac{1}{\mu(I)} \left[ (\beta+1)^{1-q}\mu(I) - ((\beta+1)\mu(I) - \beta\alpha_I)^{1-q}\mu(I)^q\right] = \notag \\
& (\beta+1)^{1-q} - ((\beta+1) - \beta\rho_I)^{1-q} \geq
(1-q)(\beta+1)^{-q}\beta\rho_I = \notag \\
& (1-q)(\beta+1)^{-q}\beta\frac{\alpha_I}{\mu(I)},
\end{align}
where the inequality in \eqref{eq:4p7} comes from the mean value theorem of differential calculus.
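In more detail, applying the mean value theorem to the function $t\mapsto t^{1-q}$ on the interval $\big[(\beta+1)-\beta\rho_I,\ \beta+1\big]$ we find, for some intermediate point $\xi$ of this interval,
\[
(\beta+1)^{1-q} - \big((\beta+1)-\beta\rho_I\big)^{1-q} = (1-q)\,\xi^{-q}\,\beta\rho_I \geq (1-q)(\beta+1)^{-q}\beta\rho_I ,
\]
since $\xi\leq\beta+1$ and $0<q<1$.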
From the last two inequalities we conclude
\begin{equation} \label{eq:4p8}
(1-q)(\beta+1)^{-q}\beta \!\! \sum_{I\supsetneq \mr{piece}(B)} \alpha_I y_I^q \leq
(\beta+1)^{1-q} \left( f^q - \sum\mu(I_j)y_{I_j}^q\right) - \int_{X\setminus \cup I_j} \!\!\phi^q\,\mr d\mu.
\end{equation}
Now it is easy to see that
\[
\sum_{I\supsetneq \mr{piece}(B)} \alpha_I y_I^q = \int_{X\setminus \cup I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu,
\]
because $B=\{I_j\}_j$ is a maximal family of pairwise disjoint elements of $S_\phi$, so that $X\setminus \cup_j I_j \approx \cup_{I\supsetneq \mr{piece}(B)} A(\phi,I)$ and $\mc M_{\mc T}\phi = y_I$ on each $A(\phi,I)$. Then \eqref{eq:4p8} becomes
\begin{multline*}
\int_{X\setminus \cup I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu \leq \\
\frac{1}{(1-q)\beta} \left[ (\beta+1)\bigg(f^q - \sum_j \mu(I_j)y_{I_j}^q\bigg) - (\beta+1)^q\int_{X\setminus \cup I_j} \phi^q\,\mr d\mu \right]
\end{multline*}
for any fixed $\beta>0$ and any $\mc T$-good $\phi$.
This completes the proof of Theorem \ref{thm:4p1}.
\end{proof}
Along the same lines as above we can prove:
\begin{theorem} \label{thm:4p2}
Let $\phi$ be $\mc T$-good and $\mc A=\{I_j\}$ be a pairwise disjoint family of elements of $S_\phi$. Then for every $\beta>0$ we have that:
\[
\int_{\cup I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu \leq
\frac{1}{(1-q)\beta} \left[ (\beta+1) \sum \mu(I_j)y_{I_j}^q - (\beta+1)^q \int_{\cup I_j} \phi^q\,\mr d\mu \right].
\]
\end{theorem}
\begin{proof}
We use the technique of the proof of Theorem \ref{thm:4p1}, summing inequality \eqref{eq:4p2} over all $I\in S_\phi$ with $I\subseteq I_j$, for any $j$. The remaining details are easily verified.
\end{proof}
We have now the following generalization of Theorem \ref{thm:4p1}.
\begin{corollary} \label{cor:4p1}
Let $\phi$ be $\mc T$-good and $\mc A = \{I_j\}$ be a pairwise disjoint family of elements of $S_\phi$. Then for every $\beta>0$
\begin{multline} \label{eq:4p9}
\int_{X\setminus\cup I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu \leq
\frac{1}{(1-q)\beta} \left[ (\beta+1) \left( f^q - \sum\mu(I_j) y_{I_j}^q\right) - \right. \\
\left. (\beta+1)^q \int_{X\setminus\cup I_j} \phi^q\,\mr d\mu \right],
\end{multline}
where $y_{I_j} = Av_{I_j}(\phi)$, $f=\int_X \phi\,\mr d\mu$.
\end{corollary}
\begin{proof}
We choose a pairwise disjoint family $(J_i)_i = B \subseteq S_\phi$ such that the union $\mc A\cup B$ is maximal under the relation $\subseteq$ in $S_\phi$, and $I_j\cap J_i = \emptyset$ for all $i, j$.
Then if we apply Theorem \ref{thm:4p1} for $\mc A\cup B$ and Theorem \ref{thm:4p2} for $B$, and sum the two inequalities we derive the proof of our Corollary.
\end{proof}
\noindent We now proceed to the
\begin{proof}[Proof of Theorem \ref{thm:a}]
Suppose that we are given an extremal sequence $\phi_n: (X,\mu)\to \mb R^+$ of functions, such that $\int_X \phi_n\,\mr d\mu = f$, $\int_X \phi_n^q\,\mr d\mu = h$ for any $n\in \mb N$ and
\begin{equation} \label{eq:4p10}
\lim_n \int_X \left[ \max(\mc M_{\mc T}\phi_n, L)\right]^q\mr d\mu = h c.
\end{equation}
We prove that
\begin{equation} \label{eq:4p11}
\lim_n \int_X \left| \max(\mc M_{\mc T}\phi_n,L) - c^{\frac{1}{q}}\phi_n \right|^q\mr d\mu = 0.
\end{equation}
For the proof of \eqref{eq:4p11} we are going to give the chain of inequalities from which one gets Theorem \ref{thm:1}.
Then we use the fact that these inequalities become equalities in the limit. \\
Fix an $n\in\mb N$ and write $\phi=\phi_n$. For this $\phi$ we have the following
\begin{equation} \label{eq:4p12}
I_\phi := \int_X \left[ \max(\mc M_{\mc T}\phi,L) \right]^q\mr d\mu =
\int_{\{\mc M_{\mc T}\phi \geq L\}} (\mc M_{\mc T}\phi)^q\,\mr d\mu + L^q(1-\mu(E_\phi))
\end{equation}
where $E_\phi = \{\mc M_{\mc T}\phi \geq L\}$. \\
We write $E_\phi$ as $E_\phi = \cup I_j$, where the $I_j$ are the maximal elements of the tree $\mc T$ such that
\begin{equation} \label{eq:4p13}
\frac{1}{\mu(I_j)} \int_{I_j} \phi\,\mr d\mu \geq L.
\end{equation}
We set for any $j$, $\alpha_j = \int_{I_j} \phi^q\,\mr d\mu$ and $\beta_j = \mu(I_j)^{1-q} \left( \int_{I_j} \phi\,\mr d\mu\right)^q$. Additionally we set $A = \sum \alpha_j = \int_E \phi^q\,\mr d\mu \leq h$, where $E := E_\phi$, and $B = \sum_j \left( \mu(I_j)^{q-1} \beta_j\right)^\frac{1}{q} = \int_E \phi\,\mr d\mu \leq f$. \\
We also set $k=\mu(E)$. Note that the variables $A$, $B$, $k$ depend on the function $\phi$. \\
From \eqref{eq:4p12} we now obtain
\begin{equation} \label{eq:4p14}
I_\phi = L^q(1-k) + \sum_j \int_{I_j} (\mc M_{\mc T}\phi)^q\,\mr d\mu.
\end{equation}
Note now that from the maximality of any $I_j$ we have that $\mc M_{\mc T}\phi(x) = \mc M_{\mc T(I_j)}\phi(x)$, for every $x\in I_j$, where $\mc T(I_j) = \{ J\in\mc T: J\subseteq I_j \}$.
We now apply Theorem \ref{thm:1} on the measure space $\left( I_j, \frac{\mu(\cdot)}{\mu(I_j)}\right)$ with $L=\frac{1}{\mu(I_j)} \int_{I_j}\phi\,\mr d\mu = \Av_{I_j}(\phi)$, for any $j$, and we get that
\begin{equation}
I_\phi \leq L^q(1-k) + \sum_j \alpha_j\, \omega_q\! \left(\frac{\beta_j}{\alpha_j}\right).
\end{equation}
Note that $k^{1-q} B^q = \left(\sum_j \mu(I_j)\right)^{1-q} \left(\sum_j \left(\mu(I_j)^{q-1} \beta_j\right)^\frac{1}{q}\right)^q \geq \sum \beta_j \geq A$ in view of H\"{o}lder's inequality. \\
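Indeed, H\"{o}lder's inequality with exponents $\frac{1}{1-q}$ and $\frac{1}{q}$ gives
\[
\sum_j \beta_j = \sum_j \mu(I_j)^{1-q}\left(\int_{I_j}\phi\,\mr d\mu\right)^{\!q}
\leq \left(\sum_j\mu(I_j)\right)^{\!1-q}\left(\sum_j\int_{I_j}\phi\,\mr d\mu\right)^{\!q} = k^{1-q}B^q,
\]
while $\beta_j\geq\alpha_j$ for every $j$, by the power mean inequality applied on $I_j$ (as in \eqref{eq:4p3}).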
We now use the concavity of the function $\omega_q$, as can be seen in Lemma \ref{lem:3p1} \rnum{1}), and we conclude that
\begin{equation} \label{eq:4p16}
I_\phi \leq L^q(1-k) + A\,\omega_q\!\left(\frac{\sum \beta_j}{A}\right) \leq L^q(1-k) + A\,\omega_q\!\left(\frac{k^{1-q}B^q}{A}\right),
\end{equation}
where the last inequality comes from the fact that $\omega_q$ is increasing.
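For completeness we record the concavity step: since $\sum_j\frac{\alpha_j}{A}=1$, Jensen's inequality for the concave function $\omega_q$ gives
\[
\sum_j \alpha_j\,\omega_q\!\left(\frac{\beta_j}{\alpha_j}\right) = A\sum_j\frac{\alpha_j}{A}\,\omega_q\!\left(\frac{\beta_j}{\alpha_j}\right)
\leq A\,\omega_q\!\left(\sum_j\frac{\alpha_j}{A}\cdot\frac{\beta_j}{\alpha_j}\right) = A\,\omega_q\!\left(\frac{\sum_j\beta_j}{A}\right).
\]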
It is not difficult to see that the parameters $A, B$ and $k$ satisfy the following inequalities:
\begin{align*}
& A \leq k^{1-q} B^q,\quad A\leq h,\quad B\leq f,\quad 0\leq k\leq 1\quad \text{and} \\
& h-A \leq (1-k)^{1-q}(f-B)^q,
\end{align*}
the last one being $\int_{X\setminus E} \phi^q\,\mr d\mu \leq \mu(X\setminus E)^{1-q} \left(\int_{X\setminus E} \phi\,\mr d\mu\right)^q$.
It is also easy to see that $B\geq kL$, by \eqref{eq:4p13} and the disjointness of $\{I_j\}_j$. From the above inequalities and \eqref{eq:4p16} we conclude that
\begin{equation} \label{eq:4p17}
I_\phi \leq L^q(1-k) + R_k(B),
\end{equation}
where $R_k$ is given by \eqref{eq:3p4}. Thus using Lemma \ref{lem:3p3} we have that
\begin{multline} \label{eq:4p18}
I_\phi \leq L^q(1-k) + R_k(B^\star) = \\
L^q(1-k) + h\,\omega_q\!\left(\frac{f^q}{h} H_q(\mc X_\lambda(k))\right) - (1-k)f^q\left(\mc X_\lambda(k)\right)^q,
\end{multline}
where $\lambda = \frac{f^q}{h}$, $\mc X_\lambda(k)$ is given in Lemma \ref{lem:3p2} and $B^\star = \mc X_\lambda(k) k f > k f$. According to Lemma \ref{lem:3p2} $\mc X_\lambda(k) $ satisfies $1< \mc X_\lambda(k)< \frac{1}{k}$ and
\[
H_q\!\left(\frac{\mc X_\lambda(k)(1-k)}{1-k\mc X_\lambda(k)}\right) = \lambda H_q(\mc X_\lambda(k)).
\]
From \eqref{eq:4p18} we have that
\begin{equation} \label{eq:4p19}
I_\phi \leq \left[ L^q - f^q(\mc X_\lambda(k))^q\right](1-k) + h\,\omega_q\!\left(\frac{f^q}{h} H_q(\mc X_\lambda(k))\right).
\end{equation}
We now set $\mu = \frac{L}{f} > 1$. Then \eqref{eq:4p19} becomes
\begin{equation} \label{eq:4p20}
I_\phi \leq f^q \left\{ \left[ \mu^q - (\mc X_\lambda(k))^q\right](1-k) + \omega_q\!\left(\frac{f^q}{h} H_q(\mc X_\lambda(k))\right)\frac{1}{\sigma_q(k,\mc X_\lambda(k))}\right \}
\end{equation}
Remember that $\mc X_\lambda(k)$ satisfies $\sigma_q(k,\mc X_\lambda(k)) = \lambda = \frac{f^q}{h}$ by Lemma \ref{lem:3p2}. Now by the last equation we have that
\begin{multline} \label{eq:4p21}
\frac{f^q}{h} H_q(\mc X_\lambda(k)) =
H_q\!\left(\frac{\mc X_\lambda(k)(1-k)}{1-k\mc X_\lambda(k)}\right) \implies \\
\omega_q\!\left(\frac{f^q}{h}H_q(\mc X_\lambda(k))\right) =
\omega_q\!\left(H_q\!\left(\frac{\mc X_\lambda (k)(1-k)}{1-k\mc X_\lambda(k)}\right)\right).
\end{multline}
Remember that $\omega_q(z) = \left(H_q^{-1}(z)\right)^q$, for any $z\geq 1$. Thus $\omega_q\!\left(\frac{f^q}{h}H_q(\mc X_\lambda(k))\right) = \left(\frac{\mc X_\lambda(k)(1-k)}{1-k\mc X_\lambda(k)}\right)^q$. Thus from \eqref{eq:4p20} we have as a consequence that
\begin{align} \label{eq:4p22}
I_\phi &\leq f^q \left\{ \left[ \mu^q - \mc X_\lambda(k)^q\right](1-k) + \frac{1}{\sigma_q(k,\mc X_\lambda(k))} \left(\frac{(1-k)\mc X_\lambda(k)}{1-k\mc X_\lambda(k)}\right)^q\right\} \notag \\
&= f^q R_{q,\mu}(k,\mc X_\lambda(k)).
\end{align}
According then to Lemma \ref{lem:3p2} \rnum{2}) we have that
\begin{multline} \label{eq:4p23}
I_\phi \leq f^q\left\{ \frac{1}{\lambda} \omega_q(\lambda H_q(\mu))\right\} =
h\,\omega_q\!\left(\frac{f^q}{h} H_q\!\left(\frac{L}{f}\right)\right) = \\
h\,\omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right) = h c = B_q^{\mc T}(f,h,L,1).
\end{multline}
Now if $\phi$ runs along $(\phi_n)$, we see by the extremality of this sequence that in the limit we have equality in \eqref{eq:4p23}.
That is, in the limit we have equality in all the previous steps which lead to \eqref{eq:4p23}. \\
Letting now $\phi=\phi_n$, we write $A=A_n,\ B=B_n$ and $k=k_n$. Since we have equality in the limit in the last inequality giving \eqref{eq:4p23}, we conclude that $k_n\to k_0$, where $k_0$ satisfies:
\[
k_0(\lambda,\mu) = \frac{\omega_q(\lambda H_q(\mu))^{\frac{1}{q}} - \mu}{\mu\left(\omega_q(\lambda H_q(\mu))^\frac{1}{q} - 1\right)} \quad \text{and} \quad
\mc X_\lambda(k_0) = \mu = \frac{L}{f}.
\]
Additionally we must have that $B_n \to B^\star = k_0 f \mc X_\lambda(k_0) = k_0 f \frac{L}{f} = k_0 L$, which means exactly that $\lim_n \frac{1}{\mu(E_n)} \int_{E_n} \phi_n\,\mr d\mu = L$, where $E_n = \{\mc M_{\mc T}\phi_n \geq L\}$, with $\mu(E_n) = k_n \to k_0$.
This gives us equality in the limit in the weak type inequality for $(\phi_n)_n$ at the level $L$. \\
We wish to prove that if we define $I_n^{(1)} := \int_X \big| \max(\mc M_{\mc T}\phi_n, L) - c^{\frac{1}{q}}\phi_n\big|^q\mr d\mu$, we then have that $\lim_n I_n^{(1)} = 0$.
Thus we write
\begin{align*}
I_n^{(1)} &= \int_{E_n} \left| \mc M_{\mc T}\phi_n - c^\frac{1}{q}\phi_n\right|^q\mr d\mu + \int_{X\setminus E_n} \left|L-c^\frac{1}{q}\phi_n\right|^q\mr d\mu \\
&= J_n + \Lambda_n,
\end{align*}
where $J_n$ and $\Lambda_n$ have the obvious meaning.
Remember now that in the above chain of inequalities leading to \eqref{eq:4p23} we have already proved that
\[
\int_X [\max(\mc M_{\mc T}\phi_n,L)]^q\,\mr d\mu \leq L^q(1-k_n) + A_n \omega_q\!\left(\frac{k_n^{1-q}B_n^q}{A_n}\right)
\]
and that we used the fact that $A_n\omega_q\!\left(\frac{k_n^{1-q}B_n^q}{A_n}\right) \leq R_{k_n}(B_n)$. Thus, according to the way that $R_k(B)$ is defined (note that $t\mapsto t\,\omega_q\!\left(\frac{w}{t}\right) = w\,U_q\!\left(\frac{w}{t}\right)$ is decreasing in $t$, by Lemma \ref{lem:3p1} \rnum{2})), we must have equality in the limit in the inequality $A_n \geq h-(1-k_n)^{1-q}(f-B_n)^q=C_n$.
Thus we must have that $h-A_n \approx (1-k_n)^{1-q}(f-B_n)^q$, or equivalently
\begin{equation} \label{eq:4p24}
\bigg( \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n\,\mr d\mu\bigg)^q \approx \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n^q\,\mr d\mu.
\end{equation}
Additionally we must have that
\begin{equation} \label{eq:4p25}
\int_{E_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \approx A_n\,\omega_q\!\left(\frac{k_n^{1-q}B_n^q}{A_n}\right).
\end{equation}
We first prove that $\Lambda_n = \int_{X\setminus E_n} \big|L-c^\frac{1}{q}\phi_n\big|^q\,\mr d\mu \to 0$, as $n\to \infty$.
Since $\int_{E_n}\phi_n\,\mr d\mu = B_n \to B^\star = L k_0$, we must have that $\frac{1}{\mu(X\setminus E_n)} \int_{X\setminus E_n} \phi_n\,\mr d\mu = \frac{f-B_n}{1-k_n} \to \frac{f-B^\star}{1-k_0} = \frac{f-L k_0}{1-k_0}$. \\
By the properties that $k_0$ satisfies, we have that
\begin{equation} \label{eq:4p26}
k_0 = k_0(\lambda,\mu) = \frac{\omega_q(\lambda H_q(\mu))^\frac{1}{q} - \mu}{\mu\left(\omega_q(\lambda H_q(\mu))^\frac{1}{q} - 1\right)},
\end{equation}
where $\lambda = \frac{f^q}{h}$, $\mu = \frac{L}{f}$. Of course $\omega_q\!\left( \frac{f^q}{h} H_q\!\left(\frac{L}{f}\right)\right) = c$, thus \eqref{eq:4p26} gives
\[
k_0 = \frac{c^\frac{1}{q} - \frac{L}{f}}{\frac{L}{f}(c^\frac{1}{q}-1)} = \frac{f c^\frac{1}{q} - L}{L c^\frac{1}{q} - L} \implies \frac{f-k_0L}{1-k_0} = \frac{L}{c^\frac{1}{q}}.
\]
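For the reader's convenience we verify the last implication. From the above expression for $k_0$ we compute
\[
f - k_0 L = \frac{f\big(Lc^{\frac1q}-L\big) - L\big(fc^{\frac1q}-L\big)}{Lc^{\frac1q}-L} = \frac{L-f}{c^{\frac1q}-1},
\qquad
1-k_0 = \frac{\big(Lc^{\frac1q}-L\big)-\big(fc^{\frac1q}-L\big)}{Lc^{\frac1q}-L} = \frac{(L-f)c^{\frac1q}}{L\big(c^{\frac1q}-1\big)},
\]
and dividing the two relations gives $\frac{f-k_0L}{1-k_0} = \frac{L}{c^{1/q}}$.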
Thus \eqref{eq:4p24} becomes:
\begin{equation} \label{eq:4p27}
\left[ \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n^q\,\mr d\mu\right]^\frac{1}{q} \approx
\left( \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n\,\mr d\mu\right) \cong
\frac{L}{c^\frac{1}{q}} =: \tau
\end{equation}
In order to show that $\Lambda_n \to 0$, as $n\to \infty$ it is enough to prove that $\int_{X\setminus E_n} |\phi_n-\tau|\,\mr d\mu \to 0$, as $n\to\infty$, where $\tau$ is defined as above.
We use now the following elementary inequality
\begin{equation} \label{eq:4p28}
t + \frac{1-q}{q} \geq \frac{t^q}{q},
\end{equation}
which holds for every $q\in(0,1)$ and every $t>0$.
Additionally, we have equality in \eqref{eq:4p28} only if $t=1$. We may also assume that $\tau=1$ in \eqref{eq:4p27}; indeed, we can reduce to this case by dividing \eqref{eq:4p27} by $\tau$ and considering $\frac{\phi_n}{\tau}$ instead of $\phi_n$.\\
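Both properties of \eqref{eq:4p28} follow from a short computation: setting $\psi(t) = t + \frac{1-q}{q} - \frac{t^q}{q}$ for $t>0$, we have
\[
\psi'(t) = 1 - t^{q-1},
\]
which is negative on $(0,1)$ and positive on $(1,+\infty)$, so that $\psi$ attains its minimum value $0$ only at $t=1$.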
By \eqref{eq:4p28} we have that
\begin{equation} \label{eq:4p29}
\frac{\phi_n^q(x)}{q} \leq \frac{1-q}{q} + \phi_n(x),\quad \text{for all}\ \ x\in (X\!\setminus\! E_n)\cap \{\phi_n>1\}
\end{equation}
and that
\begin{equation} \label{eq:4p30}
\frac{\phi_n^q(y)}{q} \leq \frac{1-q}{q} + \phi_n(y),\quad \text{for all}\ \ y\in (X\!\setminus\! E_n) \cap \{\phi_n \leq 1\}.
\end{equation}
By integrating inequalities \eqref{eq:4p29} and \eqref{eq:4p30} over the respective domains we immediately get:
\begin{align} \label{eq:4p31}
\frac{1}{q} \int\limits_{(X\setminus E_n)\cap\{\phi_n > 1\}}\!\! \phi_n^q\,\mr d\mu \leq \frac{1-q}{q}\,\mu\!\left( (X\!\setminus\! E_n) \cap \{\phi_n>1\} \right) + \int\limits_{(X\setminus E_n)\cap\{\phi_n>1\}}\!\! \phi_n\,\mr d\mu, \\
\frac{1}{q} \int\limits_{(X\setminus E_n)\cap\{\phi_n\leq 1\}}\!\! \phi_n^q\,\mr d\mu \leq \frac{1-q}{q}\,\mu\!\left( (X\!\setminus\! E_n) \cap \{\phi_n \leq 1\} \right) + \int\limits_{(X\setminus E_n)\cap\{\phi_n \leq 1\}}\!\! \phi_n\,\mr d\mu. \label{eq:4p32}
\end{align}
Adding \eqref{eq:4p31} and \eqref{eq:4p32} we conclude that
\begin{equation} \label{eq:4p33}
\frac{1}{q}\frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n^q\,\mr d\mu \leq \frac{1-q}{q} + \frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n} \phi_n\,\mr d\mu.
\end{equation}
Since now \eqref{eq:4p27} holds, with $\tau=1$, we conclude that we have equality in \eqref{eq:4p33} in the limit.
Thus we must have equality in the limit in both of \eqref{eq:4p31} and \eqref{eq:4p32}. Thus we have that
\begin{align} \label{eq:4p34}
\frac{1}{\mu((X\!\setminus\! E_n)\cap\{\phi_n > 1\})} \int_{(X\setminus E_n)\cap\{\phi_n > 1\}} \phi_n\,\mr d\mu &\approx 1\ \ \text{and} \notag \\
\frac{1}{\mu((X\!\setminus\! E_n)\cap\{\phi_n \leq 1\})} \int_{(X\setminus E_n)\cap\{\phi_n \leq 1\}} \phi_n\,\mr d\mu &\approx 1.
\end{align}
Then from \eqref{eq:4p34} we have as a consequence that
\begin{multline*}
\int\limits_{(X\setminus E_n)\cap\{\phi_n > 1\}} (\phi_n-1)\,\mr d\mu =
\mu((X\!\setminus\! E_n)\cap\{\phi_n > 1\})\, \cdot \\
\Bigg\{\frac{1}{\mu((X\!\setminus\! E_n)\cap\{\phi_n > 1\})} \int\limits_{(X\setminus E_n)\cap\{\phi_n > 1\}}\!\! \phi_n\,\mr d\mu - 1\Bigg\}
\end{multline*}
tends to zero, as $n\to\infty$. \\
In the same way $\int_{(X\setminus E_n)\cap\{\phi_n \leq 1\}} (1-\phi_n)\,\mr d\mu \to 0$, so as a result we have $\int_{X\setminus E_n} |\phi_n-1|\,\mr d\mu \approx 0$. Since now $\int_{X\setminus E_n} |\phi_n-1|^q\,\mr d\mu \leq \mu(X\setminus E_n)^{1-q} \big[\int_{X\setminus E_n} |\phi_n-1|\,\mr d\mu\big]^q$ and $\mu(E_n)\to k_0\in (0,1)$
we have that $\lim_n \int_{X\setminus E_n} |\phi_n-1|^q\,\mr d\mu = 0$. \\
By the above reasoning we conclude $\Lambda_n = \int_{X\setminus E_n} |L-c^\frac{1}{q}\phi_n|^q\,\mr d\mu \to 0$, as $n\to \infty$.
\end{proof}
\noindent We now prove the following
\begin{theorem} \label{thm:4p3}
Let $(\phi_n)_n$ be extremal, where $0 < h \leq L^q$, $L \geq f$ are fixed. Consider for each $n\in\mb N$ a pairwise disjoint family $\mc A_n = (I_{j,n})_j$ such that the following limit exists:
\[
\lim_n \sum_{I\in \mc A_n} \mu(I) y_{I,n}^q,\ \ \text{where}\ \ y_{I,n} = \Av_I(\phi_n),\ I\in \mc A_n.
\]
Suppose also that $\cup\mc A_n = \cup_j I_{j,n} \subseteq \{\mc M_{\mc T}\phi_n \geq L\}$, for each $n=1, 2, \ldots$. Then $\lim_n \int_{\cup\mc A_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = c \lim_n \int_{\cup\mc A_n} \phi_n^q\,\mr d\mu$, where $c = \omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right)$.
\end{theorem}
\begin{proof}
Define $\ell_n = \sum_{I\in\mc A_n} \mu(I) y_{I,n}^q$. By Theorem \ref{thm:4p2} we immediately see that for each $n\in\mb N^\star$
\begin{equation} \label{eq:4p35}
\int_{\cup\mc A_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq
\frac{1}{(1-q)\beta} \left[ (\beta+1) \sum_{I\in\mc A_n} \mu(I)y_{I,n}^q - (\beta+1)^q \int_{\cup\mc A_n} \phi_n^q\,\mr d\mu \right].
\end{equation}
Suppose that $E_n = \{\mc M_{\mc T}\phi_n \geq L\} = \cup_j I_j^{(n)}$, $n=1, 2, \ldots$, where $I_j^{(n)}\in S_{\phi_n}$, for each $j$. \\
Now by our hypothesis we have that $\cup\mc A_n\subseteq E_n$, for all $n\in\mb N^\star$. Thus
\[
E_n\setminus \cup\mc A_n = \cup_j\left[ I_j^{(n)}\setminus \cup\mc A_n\right].
\]
Consider now, for each $j$ and $n$, the probability space $\left( I_j^{(n)}, \frac{\mu(\cdot)}{\mu(I_j^{(n)})}\right)$, and apply there Theorem \ref{thm:4p1}, to get after summing over $j$ the following inequality
\begin{multline} \label{eq:4p36}
\int_{E_n\setminus \cup\mc A_n}(\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \\
\frac{1}{(1-q)\beta} \bigg\{ (\beta+1) \bigg[ \sum_{I\in \{I_j^{(n)}\}_j} \mu(I) y_{I,n}^q - \sum_{I\in\mc A_n} \mu(I)y_{I,n}^q\bigg] - (\beta+1)^q\!\!\! \int\limits_{E_n\setminus \cup\mc A_n}\! \phi_n^q\,\mr d\mu \bigg\}.
\end{multline}
Summing \eqref{eq:4p35} and \eqref{eq:4p36} we have as a consequence that:
\begin{equation} \label{eq:4p37}
\int_{E_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq
\frac{1}{(1-q)\beta} \bigg[ (\beta+1) \sum_{I\in \{I_j^{(n)}\}_j} \mu(I)y_{I,n}^q - (\beta+1)^q\int_{E_n}\phi_n^q\,\mr d\mu \bigg].
\end{equation}
Using now the concavity of $t\mapsto t^q$, for $q\in(0,1)$ we obtain the inequality
\begin{equation} \label{eq:4p38}
\sum_{I\in\{I_j^{(n)}\}_j} \mu(I)y_{I,n}^q \leq
\frac{\left(\sum_{I\in\{I_j^{(n)}\}_j} \mu(I)y_{I,n}\right)^q}{\left(\sum_{I\in\{I_j^{(n)}\}_j} \mu(I)\right)^{q-1}} =
\frac{\left(\int_{E_n} \phi_n\,\mr d\mu\right)^q}{\mu(E_n)^{q-1}}.
\end{equation}
Thus \eqref{eq:4p37} in view of \eqref{eq:4p38} gives
\begin{multline} \label{eq:4p39}
\int_{E_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq
\frac{1}{(1-q)\beta} \left[ (\beta+1) \frac{1}{\mu(E_n)^{q-1}} \left( \int_{E_n} \phi_n\,\mr d\mu \right)^q - \right. \\
\left. (\beta+1)^q\int_{E_n} \phi_n^q\,\mr d\mu \right]
\end{multline}
By our hypothesis we have that
\begin{equation} \label{eq:4p40}
\int_{E_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \approx
\int_{E_n}\phi_n^q\,\mr d\mu\cdot \omega_q\!\left(\frac{k_n^{1-q}\left(\int_{E_n}\phi_n\,\mr d\mu\right)^q}{\int_{E_n}\phi_n^q\,\mr d\mu}\right),
\end{equation}
since $(\phi_n)$ is extremal, where $k_n=\mu(E_n)$, for all $n\in\mb N$. \\
But then, by the definition of $\omega_q$, this means exactly that we have equality in the limit in \eqref{eq:4p39} for $\beta=\beta_n=\omega_q\!\left(\frac{k_n^{1-q}\left(\int_{E_n}\phi_n\,\mr d\mu\right)^q}{\int_{E_n}\phi_n^q\,\mr d\mu}\right)^\frac{1}{q}-1$ (see (3.18) and (3.19) in \cite{4}). \\
We set $c_{1,n} = \frac{k_n^{1-q}\left(\int_{E_n}\phi_n\,\mr d\mu\right)^q}{\int_{E_n}\phi_n^q\,\mr d\mu}$. We now prove that $c_{1,n}\to \frac{(1-q)L^q + qL^{q-1}f}{h}$, as $n\to \infty$.
Indeed, note that by the chain of inequalities leading to the least upper bound $B_q^{\mc T}(f,h,L,1) = c\,h$, we must have that
\begin{equation} \label{eq:4p41}
L^q(1-k_0) + \omega_q(c_{1,n})\int_{E_n}\phi_n^q\,\mr d\mu \approx c\,h,\quad \text{as}\ \ n\to\infty,
\end{equation}
where we suppose that $k_n\to k_0$ (we pass to a subsequence if necessary).
Now \eqref{eq:4p41} can be written as
\begin{equation} \label{eq:4p42}
L^q(1-k_n) + \omega_q(c_{1,n})\int_{E_n}\phi_n^q\,\mr d\mu \approx
\left(h - \int_{E_n}\phi_n^q\,\mr d\mu\right)c + \int_{E_n}\phi_n^q\,\mr d\mu\!\cdot\! c.
\end{equation}
But, as we have already proved before, we have
\begin{align*}
L^q(1-k_n) &\approx \left(h-\int_{E_n}\phi_n^q\,\mr d\mu\right)c \iff
\frac{1}{1-k_n} \left(h - \int_{E_n}\phi_n^q\,\mr d\mu\right) \approx \frac{L^q}{c} \iff \\
\frac{1}{\mu(X\!\setminus\! E_n)} \int_{X\setminus E_n}\phi_n^q\,\mr d\mu &\approx \frac{L^q}{c},
\end{align*}
which is indeed true, since $\int_{X\setminus E_n} \left|\phi_n - \frac{L}{c^\frac{1}{q}}\right|^q\mr d\mu \approx 0$.
Thus \eqref{eq:4p41} gives $\omega_q(c_{1,n})\int_{E_n}\phi_n^q\,\mr d\mu \approx c\int_{E_n}\phi_n^q\,\mr d\mu$, and since it is easy to see that $\lim_n \int_{E_n}\phi_n^q\,\mr d\mu > 0$
($(\phi_n)$ being extremal), we have that $\lim_n \omega_q(c_{1,n}) = c = \omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right)$, or that $c_{1,n} \to \frac{(1-q)L^q + qL^{q-1}f}{h}$, as $n\to \infty$. \\
From \eqref{eq:4p40} we conclude now that $\int_{E_n}(\mc M_{\mc T}\phi_n)^q\,\mr d\mu \approx c\int_{E_n}\phi_n^q\,\mr d\mu$ and that, as we have said before, we have equality in \eqref{eq:4p39} in the limit for the value $\beta = c^\frac{1}{q}-1$.
But \eqref{eq:4p39} comes from \eqref{eq:4p35} and \eqref{eq:4p36} by summing, so we must have equality in \eqref{eq:4p35} in the limit for this value of $\beta = c^\frac{1}{q}-1 = \omega_q\!\left(\frac{(1-q)L^q + qL^{q-1}f}{h}\right)^\frac{1}{q}-1$. \\
But the right side of \eqref{eq:4p35} is minimized for $\beta=\beta_n=\omega_q\!\left(\frac{\ell}{s}\right)^\frac{1}{q}-1$, where $\ell=\lim_n\sum_{I\in\mc A_n} \mu(I)y_{I,n}^q$, $s=\lim_n\int_{\cup\mc A_n}\phi_n^q\,\mr d\mu$. \\
Thus we must have that $\omega_q\!\left(\frac{\ell}{s}\right)=c$, and for the value of $\beta=c^\frac{1}{q}-1$, we get by the equality in \eqref{eq:4p35} that
\[
\lim_n\int_{\cup\mc A_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = c \lim_n \int_{\cup\mc A_n}\phi_n^q\,\mr d\mu.
\]
This completes the proof of our theorem.
\end{proof}
\noindent We now proceed to prove that
\[
J_n = \int_{E_n} \left| \mc M_{\mc T}\phi_n - c^\frac{1}{q}\phi_n\right|^q\mr d\mu \to 0,\ \ \text{as}\ n \to \infty.
\]
For this proof we are going to use Theorem \ref{thm:4p3} for a sequence $\left(g_{\phi_n}\right)_n$ which arises from $(\phi_n)_n$ in a canonical way and is extremal by construction. We prove the following
\begin{lemma} \label{lem:4p1}
Let $\phi$ be $\mc T$-good and $L \geq f = \int_X\phi\,\mr d\mu$. Then there exists a measurable function $g_\phi: X\to \mb R^+$ such that for every $I\in S_\phi$ with $\Av_I(\phi)\geq L$, the function $g_\phi$ assumes two values ($c_I^\phi$ or $0$) on $A(\phi,I)=A_I$.
Moreover $g_\phi$ satisfies $\Av_I(\phi) = \Av_I(g_\phi)$ for every $I\in \mc T$ that contains an element of $S_\phi$ (that is, for every $I$ that is not contained in any of the $A_J$'s).
\end{lemma}
\begin{proof}
Let $\phi$ be $\mc T$-good and $L\geq f$. \\
We first define $g_\phi(t)=\phi(t)$, $t\in X\!\setminus\! E_\phi$, where we set $E_\phi = \{\mc M_{\mc T}\phi \geq L\}$.
Then we write $E_\phi = \cup I_j$, where $I_j$ are maximal elements of the tree $\mc T$ such that $\Av_{I_j}(\phi)\geq L$. Then by Lemma \ref{lem:2p2} we have that $I_j\in S_\phi$. Note now that
\begin{equation} \label{eq:4p43}
I_j = A(\phi,I_j) \cup \bigg( \cup_{\substack{J^\star=I_j\\ J\in S_\phi}} J\bigg),\ \ \text{for any}\ j.
\end{equation}
Fix a $j$. We define first the following function $g_{\phi,j}^{(1)}(t) = \phi(t)$, $t\in I_j\!\setminus\! A(\phi,I_j)$. We write $A_{I_j} = A(\phi,I_j) = \cup_i I_{i,j}$, where $I_{i,j}$ are maximal elements of $\mc T$, subject to the relation $I_{i,j}\subseteq A_{I_j}$ for any $i$. For each $i$ we have $I_{i,j}\in \mc T_{\left(k_{i,j}\right)}$ for some $k_{i,j} \geq 1$, $k_{i,j} \in \mb N$. Let $I_{i,j}'$ be the unique element of $\mc T$, such that $I_{i,j}' \in \mc T_{(k_{i,j}-1)}$ and $I_{i,j}' \supsetneq I_{i,j}$. \\
We set $\Omega_j = \cup_i I_{i,j}'$, for our $j$. Note now that for every $i$, $I_{i,j}' \notin S_\phi$. This is true because of \eqref{eq:4p43} and the structure of the tree $\mc T$.
Consider now a maximal subfamily of $(I_{i,j}')_i$ that still covers $\Omega_j$. Then we can write $\Omega_j = \cup_{k=1}^\infty I_{i_k,j}'$ for some sequence of integers $i_1\!<\!i_2\!<\! \ldots \!<\!i_k\!<\!\ldots$, possibly finite, where $\left(I_{i_k,j}'\right)_k$, $k=1, 2, \ldots$, denotes this subfamily.
Additionally we obviously have that $\Omega_j \subseteq I_j$. By the maximality of any $I_{i_k,j}$, $k\in \mb N$, we have that $I_{i_k,j}' \cap \left(I_j\setminus A_{I_j}\right) \neq \emptyset$, so there exists $J\in S_\phi$ such that $J^\star = I_j$ and $J \cap I_{i_k,j}' \neq \emptyset$.
Since now each $I_{i_k,j}'$ is not contained in any such $J$ (since it contains elements of $A_{I_j}$), it must actually contain every such $J$ that it meets. That is, we can write, for any $k\in \mb N$
\[
I_{i_k,j}' = \left[ \cup J_{k,j,m}\right] \cup \left[B_{j,k}\right],
\]
where for any $m\in \mb N$
\[
J_{k,j,m}\in S_\phi,\ \ J_{k,j,m}^\star = I_j\ \ \text{and}\ \ B_{j,k} = I_{i_k,j}'\cap A_{I_j}.
\]
Of course we have $\cup_k B_{j,k} = A_{I_j}$. \\
We define the following function on $I_j$, which we call $g_{\phi,j,1}: I_j \to \mb R^+$. We set $g_{\phi,j,1}(t) = \phi(t)$, $t\in I_j\setminus A_{I_j}$. Now we are going to construct $g_{\phi,j,1}$ on $A_{I_j}$ in such a way that for every $I\in \mc T$ containing an element $J\in S_\phi$ with $J^\star=I$, we have that $\Av_I(g_{\phi,j,1}) = \Av_I(\phi)$.
We proceed as follows. For any $k$, $B_{j,k}$ is a union of elements of the tree $\mc T$. Using Lemma \ref{lem:2p1}, we construct for any $\alpha\in (0,1)$ (which will be fixed later) a pairwise disjoint family $\mc A_{\phi,j,k}$ of elements of $\mc T$ which are subsets of $B_{j,k}$, such that $\sum_{J\in \mc A_{\phi,j,k}} \mu(J) = \alpha\mu(B_{j,k})$.
We define now the function $g_{\phi,j,k,1}: B_{j,k}\to \mb R^+$ in the following way:
\[
g_{\phi,j,k,1}(t) := \left\{ \begin{aligned}
& c_{j,k,1}^\phi, &t&\in \cup\left\{J: J\in\mc A_{\phi,j,k}\right\} \\
& 0, \quad &t&\in B_{j,k}\setminus \cup\left\{J: J\in\mc A_{\phi,j,k}\right\}
\end{aligned} \right.
\]
such that
\begin{equation} \label{eq:4p44}
\left. \begin{aligned}
\int_{B_{j,k}} g_{\phi,j,k,1}\,\mr d\mu = c_{j,k,1}^\phi \gamma_{j,k,1}^\phi = \int_{B_{j,k}}\phi\,\mr d\mu&,\ \text{and} \\
\int_{B_{j,k}} g_{\phi,j,k,1}^q\,\mr d\mu = \left(c_{j,k,1}^\phi\right)^q \gamma_{j,k,1}^\phi = \int_{B_{j,k}} \phi^q\,\mr d\mu&
\end{aligned}\ \right\},
\end{equation}
where $\gamma_{j,k,1}^\phi = \mu\!\left(\cup_{J\in \mc A_{\phi,j,k}} J\right) = \alpha \mu(B_{j,k})$.
It is easy to see that such choices for $c_{j,k,1}^\phi\geq 0$, $\gamma_{j,k,1}^\phi\in [0,\mu(B_{j,k})]$ always exist. In fact we just need to set
\[
\gamma_{j,k,1}^\phi = \left[ \frac{\left(\int_{B_{j,k}} \phi\,\mr d\mu\right)^q}{\int_{B_{j,k}}\phi^q\,\mr d\mu} \right]^\frac{1}{q-1} \leq \mu(B_{j,k}),
\]
by H\"{o}lder's inequality, and also $\alpha = \gamma_{j,k,1}^\phi / \mu(B_{j,k})$, $c_{j,k,1}^\phi = \int_{B_{j,k}} \phi\,\mr d\mu / \gamma_{j,k,1}^\phi$. \\
We let then $g_{\phi,j,1}(t) := g_{\phi,j,k,1}(t)$, if $t\in B_{j,k}$. Note that $g_{\phi,j,1}$ may assume more than one positive value on $A_{I_j} = \cup_k B_{j,k}$. It is easy then to see that there exist a common positive value, denoted by $c_{I_j}^{\phi}$, and measurable sets
$L_k\subseteq B_{j,k}$, such that if we define $g'_{\phi,j,1}(t) :=c_{I_j}^{\phi}\chi_{L_k}(t)$ for $t\in B_{j,k}$, for any $k$, and $g'_{\phi,j,1}(t) :=\phi(t)$ for $t\in X\setminus A_{I_j}$, where $\chi_S$ denotes the characteristic function of $S$, we still have
$\int_{B_{j,k}}\phi\,\mr d\mu = \int_{B_{j,k}}g'_{\phi,j,1}\,\mr d\mu=c_{I_j}^{\phi}\mu(L_k)$ and
$\int_{A_{I_j}}\phi^q\,\mr d\mu = \int_{A_{I_j}}(g'_{\phi,j,1})^q\,\mr d\mu$. For the construction of the $L_k$ and of $c_{I_j}^{\phi}$
we just need to find first the subsets $L_k$ of $B_{j,k}$ for which the first integral equalities mentioned above hold, and this can be done for an arbitrary $c_{I_j}^{\phi}$, since $(X,\mu)$ is nonatomic. Then we just need to find the value of the constant $c_{I_j}^{\phi}$ for which the
second integral equality is also true. Note also that for these choices of $L_k$ and $c_{I_j}^{\phi}$ we may not have
$\int_{B_{j,k}}\phi^q\,\mr d\mu = \int_{B_{j,k}}(g'_{\phi,j,1})^q\,\mr d\mu$, but the respective equality with $A_{I_j}$ in place
of $B_{j,k}$ does hold.
We have thus defined $g_{\phi,j,1}'$. It is obvious now that if $I\in\mc T$ satisfies $I\cap A_{I_j} \neq \emptyset$ and $I\not\subseteq A_{I_j}$ (that is, $I\cap J\neq \emptyset$ for some $J\in S_\phi$ with $J^\star=I_j$), then $I$ is a union of some of the $I_{i_k,j}'$ and some of the $J$'s for which $J\in S_\phi$ and $J^\star=I_j$.
Then obviously, by the construction we just made, we have that $\int_I g_{\phi,j,1}'\,\mr d\mu = \int_I\phi\,\mr d\mu$. We continue inductively: we define $g_{\phi,j,2}':= g_{\phi,j,1}'$ on $A_{I_j}$ and, working also in any $J\in S_\phi$ such that $J^\star=I_j$, we define it on all the corresponding $A_J$'s in the same way as before.
We continue with $g_{\phi,j,\ell}'$, $\ell=3, 4, \ldots$, and set at last $g_\phi(t):= \lim_\ell g_{\phi,j,\ell}'(t)$ for any $t\in I_j$. Note that the sequence $(g_{\phi,j,\ell}'(t))_\ell$ is in fact eventually constant for every $t\in I_j$. Then, by its definition, $g_\phi$ satisfies the conclusions of our lemma. In this way we derive its proof.
\end{proof}
\begin{cremark}
It is not difficult to see that for every $I\in S_\phi$ with $I\subseteq I_j$ for some $j$, the function $g_\phi$ constructed in the previous lemma satisfies $\mu(\{g_\phi=0\} \cap A_I) \geq \mu(\{\phi=0\} \cap A_I)$. This can be seen by repeating the previous proof working on the set $\{\phi>0\}\cap A_I$ for any such $I$.
As a consequence, since $E_\phi = \cup_j I_j = \cup_j \bigg(\cup_{\substack{I\subseteq I_j\\ I\in S_\phi}} A(\phi,I)\bigg)$, we conclude that $\mu(\{g_\phi=0\}\cap E_\phi) \geq \mu(\{\phi=0\}\cap E_\phi)$. \\
\end{cremark}
\noindent We prove now the following
\begin{lemma} \label{lem:4p2}
For an extremal sequence $(\phi_n)_n$ of $\mc T$-good functions we have that $\lim_n \mu( \{\phi_n=0\} \cap \{ \mc M_{\mc T}\phi_n \geq L\} ) = 0$.
\end{lemma}
\noindent Before we proceed to the proof of the above Lemma, we prove the following
\begin{lemma} \label{lem:4p3}
For an extremal sequence $(\phi_n)$ consisting of $\mc T$-good functions, where $S_{\phi_n}$ is the subtree corresponding to each $\phi_n$, the following holds:
\[
\lim_n \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \frac{\left(\int_{A(\phi_n,I)} \phi_n\,\mr d\mu\right)^q}{\alpha_{I,n}^{q-1}} =
\lim_n \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu,
\]
where $\alpha_{I,n} = \mu(A(\phi_n,I))$, for $I\in S_{\phi_n}$, $n=1, 2, \ldots$.
\end{lemma}
\begin{proof}
Remember that the following inequalities have been used in the evaluation of the function $B_q^{\mc T}(f,h,L,1)$:
\begin{equation} \label{eq:4p45}
\int_{I_j} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \alpha_j \omega_q\! \left(\frac{\beta_j}{\alpha_j}\right).
\end{equation}
Thus we must have equality in the limit in the following inequality:
\begin{equation} \label{eq:4p46}
\sum_j \int_{I_j} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu \leq \sum_j \alpha_j \omega_q \left(\frac{\beta_j}{\alpha_j}\right).
\end{equation}
But in the proof of (4.45) the following inequality was used in order to pass from (3.16) to (3.17) in \cite{4}:
\[
\sum_{\substack{I\in S_{\phi}}} \alpha_I x_I^q \geq \int_X \phi^q\,\mr d\mu.
\]
Now, in place of $X$ in the last integral we have the $I_j$'s, so from equality in the limit in \eqref{eq:4p46} we immediately obtain the statement of Lemma \ref{lem:4p3}. The proof is complete.
\end{proof}
\noindent We now return to the
\begin{proof}[Proof of Lemma \ref{lem:4p2}]
It is enough, due to the comments mentioned above, to prove that $\lim_n \mu(\{g_{\phi_n}=0\} \cap E_{\phi_n}) = 0$. For this, we just need to prove that $$\lim_n \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} (\alpha_{I,n}-\gamma_I^{\phi_n}) = 0,$$ where $\alpha_{I,n} = \mu(A(\phi_n,I))$,
and $\gamma_I^{\phi_n}=\mu(A(\phi_n,I)\cap \{g_{\phi_n}>0\})$ for $I\in S_{\phi_n}$, $I\subseteq E_{\phi_n}$. \\
For those $I$ we set
\[
P_{I,n} = \frac{\int_{A_{I,n}} \phi_n^q\,\mr d\mu}{\alpha_{I,n}^{q-1}},\ \text{where}\ A_{I,n} = A(\phi_n,I).
\]
Then we obviously have that $\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \alpha_{I,n}^{q-1} P_{I,n} = \int_{E_{\phi_n}} \phi_n^q$. \\
Additionally $\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} (\gamma_I^{\phi_n})^{q-1} P_{I,n} \geq \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \alpha_{I,n}^{q-1} P_{I,n}$, since $0<q<1$ and $\gamma_I^{\phi_n} \leq \alpha_{I,n}$ for $I\in S_{\phi_n}$, $I\subseteq E_{\phi_n}$. However
\begin{multline} \label{eq:4p47}
\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} (\gamma_I^{\phi_n})^{q-1} P_{I,n} =
\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} (\gamma_I^{\phi_n})^{q-1} \frac{(c_I^{\phi_n})^q \gamma_I^{\phi_n}}{(\alpha_{I,n})^{q-1}} = \\
\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \frac{(\gamma_I^{\phi_n}\, c_I^{\phi_n})^q}{(\alpha_{I,n})^{q-1}} =
\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \frac{\left(\int_{A_{I,n}} \phi_n\,\mr d\mu\right)^q}{(\alpha_{I,n})^{q-1}} \approx
\int_{E_{\phi_n}} \phi_n^q\,\mr d\mu,
\end{multline}
by Lemmas \ref{lem:4p1} and \ref{lem:4p3}. \\
We define now for any $R>0$ the set
\[
S_{\phi_n,R} = \cup\left\{ A_{I,n}: I\in S_{\phi_n},\ I\subseteq E_{\phi_n},\ P_{I,n}<R(\alpha_{I,n})^{2-q}\right\}.
\]
Then for $I\in S_{\phi_n}$ such that $I\subseteq E_{\phi_n}$ and $P_{I,n} < R(\alpha_{I,n})^{2-q}$ we have that
\begin{align} \label{eq:4p48}
& \int_{A_{I,n}} \phi_n^q\,\mr d\mu < R \alpha_{I,n} \implies\ \text{(by summing over all such $I$)} \notag \\
& \int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu < R \mu(S_{\phi_n,R}).
\end{align}
Additionally we have that
\begin{equation} \label{eq:4p49}
\Bigg|\!\!\! \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}\geq R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \alpha_{I,n}^{q-1} P_{I,n} - \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu\ \Bigg| =
\int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu,
\end{equation}
and
\begin{align} \label{eq:4p50}
& \Bigg|\!\! \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}\geq R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left(\gamma_I^{\phi_n}\right)^{q-1} P_{I,n} - \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu\ \Bigg| \overset{\eqref{eq:4p47}}{\approx} \notag \\
& \Bigg|\!\! \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}\geq R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left(\gamma_I^{\phi_n}\right)^{q-1} P_{I,n} - \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}}} \left(\gamma_I^{\phi_n}\right)^{q-1} P_{I,n}\ \Bigg| = \notag \\
& \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}< R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left(\gamma_I^{\phi_n}\right)^{q-1} P_{I,n} =
\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}< R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left(\gamma_I^{\phi_n}\right)^{q-1} \frac{(c_I^{\phi_n})^q\gamma_I^{\phi_n}}{(\alpha_{I,n})^{q-1}} = \notag \\
& \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}< R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \frac{(\gamma_I^{\phi_n} c_I^{\phi_n})^q}{(\alpha_{I,n})^{q-1}} =
\sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}< R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \frac{\left(\int_{A_{I,n}} \phi_n\,\mr d\mu\right)^q}{\alpha_{I,n}^{q-1}} \approx
\int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu,
\end{align}
where the last equality in the limit is justified by the same reasoning as in Lemma \ref{lem:4p3}.
Using \eqref{eq:4p49} and \eqref{eq:4p50} we conclude that
\begin{equation} \label{eq:4p51}
\limsup_n\hspace{-10pt} \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n},\ P_{I,n}\geq R\alpha_{I,n}^{2-q}}}\hspace{-10pt} \left[ \left(\gamma_I^{\phi_n}\right)^{q-1} - (\alpha_{I,n})^{q-1} \right] P_{I,n} \leq
2 \lim_n \int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu.
\end{equation}
By Theorem \ref{thm:4p3} now, and Lemma \ref{lem:2p3} (using the form that the $A_{I,n}$ have, and a diagonal argument) we have that the following is true
\begin{equation} \label{eq:4p52}
\lim_n \int_{S_{\phi_n,R}} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu =
c \lim_n \int_{S_{\phi_n,R}} \phi_n^q\,\mr d\mu.
\end{equation}
Since $\mc M_{\mc T}\phi_n \geq f$ on $X$, we conclude by \eqref{eq:4p48} and \eqref{eq:4p52} that
\begin{equation} \label{eq:4p53}
f^q \limsup_n \mu(S_{\phi_n,R}) \leq c\,R \limsup \mu(S_{\phi_n,R}).
\end{equation}
Thus if $R>0$ is chosen small enough, we must have because of \eqref{eq:4p53} that $\lim_n \mu(S_{\phi_n,R})=0$, thus by \eqref{eq:4p48} we have $\lim_n \int_{S_{\phi_n,R}}\phi_n^q\,\mr d\mu=0$, and so by \eqref{eq:4p51} we obtain
\begin{equation} \label{eq:4p54}
\lim_n\hspace{-10pt} \sum_{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}, P_{I,n} \geq R \alpha_{I,n}^{2-q}}}\hspace{-10pt} \left[ \left(\gamma_I^{\phi_n}\right)^{q-1} - \alpha_{I,n}^{q-1}\right] P_{I,n} = 0.
\end{equation}
We consider now, for any $y>0$, the function $\phi_y(x) = \frac{x^{q-1}y^{2-q}-y}{y-x}$, defined for $x\in(0,y)$. It is easy to see that $\lim_{x\to 0^+} \phi_y(x) = +\infty$ and $\lim_{x\to y^-} \phi_y(x)=1-q$. Moreover $\phi_y'(x) = \frac{(q-1)x^{q-2}y^{3-q} - (q-2)x^{q-1}y^{2-q} - y}{(y-x)^2}$, $x\in (0,y)$. \\
Then by setting $x=\lambda y$, $\lambda\in(0,1)$, we define the function $g(\lambda)=(q-1)\lambda^{q-2} - (q-2)\lambda^{q-1} - 1$, which, as is easily seen, satisfies $g(\lambda)<0$ for all $\lambda\in(0,1)$. But $\phi_y'(x) = \frac{y\,g(\lambda)}{(1-\lambda)^2y^2}<0$, so that $\phi_y$ is decreasing on $(0,y)$.
Thus $\phi_y(x) \geq 1-q$ for all $x\in(0,y)$, that is $x^{q-1}y^{2-q}-y \geq (1-q)(y-x)$, $\forall x\in(0,y)$. \\
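In more detail, for every $I$ appearing in the sum of \eqref{eq:4p54}, so that $P_{I,n}\geq R\,\alpha_{I,n}^{2-q}$, applying the last inequality with $x=\gamma_I^{\phi_n}$ and $y=\alpha_{I,n}$ we obtain
\[
\left[\big(\gamma_I^{\phi_n}\big)^{q-1} - \alpha_{I,n}^{q-1}\right] P_{I,n}
\geq R\left[\big(\gamma_I^{\phi_n}\big)^{q-1}\alpha_{I,n}^{2-q} - \alpha_{I,n}\right]
\geq R(1-q)\big(\alpha_{I,n}-\gamma_I^{\phi_n}\big).
\]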
From the above and \eqref{eq:4p54} we see that
\[
\lim_n \quad \sum_{\mathpalette\mathclapinternal{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}, P_{I,n}\geq R\alpha_{I,n}^{2-q}}}} \quad \left(\alpha_{I,n}-\gamma_{I}^{\phi_n}\right) = 0 \implies
\mu(E_{\phi_n}) - \mu(S_{\phi_n,R}) - \quad\sum_{\mathpalette\mathclapinternal{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}, P_{I,n}\geq R\alpha_{I,n}^{2-q}}}}\quad \gamma_I^{\phi_n} \approx 0.
\]
Since then $\mu(S_{\phi_n,R})\to0$ we conclude that
\begin{equation} \label{eq:4p55}
\mu(E_{\phi_n}) \approx \quad\sum_{\mathpalette\mathclapinternal{\substack{I\in S_{\phi_n}\\ I\subseteq E_{\phi_n}, P_{I,n}\geq R\alpha_{I,n}^{2-q}}}}\quad \gamma_I^{\phi_n} \leq
\sum_{I\in S_{\phi_n}, I\subseteq E_{\phi_n}} (\gamma_I^{\phi_n}) \leq
\sum_{I\in S_{\phi_n}, I\subseteq E_{\phi_n}} \alpha_{I,n} =
\mu(E_{\phi_n}).
\end{equation}
Thus from \eqref{eq:4p55} we immediately see that $\sum_{I\in S_{\phi_n}, I\subseteq E_{\phi_n}}\!\! \left(\alpha_{I,n}-\gamma_I^{\phi_n}\right) \approx 0$, or that $\mu(\{g_{\phi_n}=0\}\cap E_{\phi_n}) \approx 0$, and by this we end the proof of Lemma \ref{lem:4p2}.
\end{proof}
Now, as we have mentioned before, by the construction of $g_{\phi_n}$ we have that $\int_I g_{\phi_n}\,\mr d\mu = \int_I \phi_n\,\mr d\mu$, for every $I\in S_{\phi_n}$.
Thus $\mc M_{\mc T}g_{\phi_n} \geq \mc M_{\mc T}\phi_n$ on $X$, hence $\max(\mc M_{\mc T}g_{\phi_n},L) \geq \max(\mc M_{\mc T}\phi_n,L)$ and so $\lim_n \int_X \left[\max(\mc M_{\mc T}g_{\phi_n},L)\right]^q\,\mr d\mu \geq h\, c$. Since $\int_X g_{\phi_n}\,\mr d\mu = f$ and $\int_X g_{\phi_n}^q\,\mr d\mu=h$ by construction, we conclude from Theorem \ref{thm:1} that $$\lim_n\int _X \left[\max(\mc M_{\mc T}g_{\phi_n},L)\right]^q\,\mr d\mu =ch,$$ or that $(g_{\phi_n})_n$ is an extremal sequence.
We prove now the following Lemmas, needed for the completion of the proof of the characterization of the extremal sequences for \eqref{eq:1p8}.
\begin{lemma} \label{lem:4p4}
With the above notation there holds:
\[
\lim \int_{E_{\phi_n}} \left|\mc M_{\mc T}g_{\phi_n} - c^\frac{1}{q}g_{\phi_n}\right|^q\mr d\mu = 0.
\]
\end{lemma}
\begin{proof}
We define for every $n\in\mb N^\star$ the set:
\[
\Delta_n = \left\{ t\in E_{\phi_n}:\ \mc M_{\mc T}g_{\phi_n}(t) \geq c^\frac{1}{q} g_{\phi_n}(t)\right\}.
\]
It is obvious, by passing if necessary to a subsequence, that
\begin{equation} \label{eq:4p56}
\lim_n \int_{\Delta_n} (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu \geq c \lim_n \int_{\Delta_n} g_{\phi_n}^q\,\mr d\mu.
\end{equation}
We consider now for every $I\in S_{\phi_n}$, $I\subseteq E_{\phi_n}$ the set $(E_{\phi_n}\!\setminus \Delta_n)\cap A_{I,n}$ where $A_{I,n} = A(\phi_n,I)$. We distinguish two cases.
\begin{enumerate}[\hspace{-5pt}(i)]
\item $\Av_I(\phi_n) = y_{I,n} > c^\frac{1}{q} c_I^{\phi_n}$, where $c_I^{\phi_n}$ is the positive value of $g_{\phi_n}$ on $A_{I,n}$ (if it exists). Then because of Lemma \ref{lem:4p1} we have that
\[
\mc M_{\mc T}g_{\phi_n}(t) \geq \Av_I(g_{\phi_n}) =\Av_I(\phi_n) > c^\frac{1}{q} c_I^{\phi_n} \geq c^\frac{1}{q} g_{\phi_n}(t),
\]
for each $t\in A_{I,n}$. Thus $(E_{\phi_n}\!\setminus \Delta_n)\cap A_{I,n} = \emptyset$ in this case. \\
We now turn to the second case. \\
\item $y_{I,n} \leq c^\frac{1}{q} c_I^{\phi_n}$. Let now $t\in A_{I,n}$ with $g_{\phi_n}(t)>0$, that is $g_{\phi_n}(t) = c_I^{\phi_n}$. We prove that for this $t$ we have $\mc M_{\mc T}g_{\phi_n}(t) \leq c^\frac{1}{q} g_{\phi_n}(t) = c^\frac{1}{q} c_I^{\phi_n}$.
\end{enumerate}
Suppose now that we have the opposite inequality. Then there exists $J_t\in \mc T$ such that $t\in J_t$ and $\Av_{J_t}(g_{\phi_n}) > c^\frac{1}{q} c_I^{\phi_n}$. Then one of the following holds
\begin{enumerate}[(a)]
\item $J_t \subseteq A_{I,n}$. Then by the form of $g_{\phi_n}$ on $A_{I,n}$ (it equals $0$ or $c_I^{\phi_n}$ there), we have that $\Av_{J_t}(g_{\phi_n}) \leq c_I^{\phi_n} \leq c^\frac{1}{q} c_I^{\phi_n}$, which is a contradiction. Thus this case is excluded.
\item $J_t$ is not a subset of $A_{I,n}$. Then two subcases can occur.
\begin{enumerate}[($\mr b_1$)]
\setlength{\parsep}{0pt}
\item $J_t \subseteq I \subseteq E_{\phi_n}$ and contains properly an element of $S_{\phi_n}$, $J'$, for which $(J')^\star = I$. Since now (\rnum{2}) holds, $t\in J_t$ and $\Av_{J_t}(g_{\phi_n}) > c^\frac{1}{q}c_I^{\phi_n}$, we must have that $J' \subsetneq J_t \subsetneq I$.
We choose now an element $J_t'$ of $\mc T$ with $J_t \subseteq J_t' \subsetneq I$ having maximal average $\Av_{J_t'}(\phi_n)$. Then by its choice we have that for each $K\in \mc T$ such that $J_t' \subset K \subsetneq I$ there holds $\Av_K(\phi_n) \leq \Av_{J_t'}(\phi_n)$. Since now $I\in S_{\phi_n}$ and $\Av_I(\phi_n) \leq c^\frac{1}{q}c_I^{\phi_n}$, by Lemma \ref{lem:2p2} and the choice of $J_t'$ we have that $\Av_K(\phi_n) < \Av_{J_t'}(\phi_n)$ for every $K\in \mc T$ such that $J_t' \subsetneq K$. So again by Lemma \ref{lem:2p2} we conclude that $J_t'\in S_{\phi_n}$. But this is impossible, since $J' \subsetneq J_t' \subsetneq I$, $J'\!, I\in S_{\phi_n}$ and $(J')^\star = I$. We turn now to the second subcase.
\item $I\subsetneq J_t$. Then by application of Lemma \ref{lem:4p1} we have that $\Av_{J_t}(\phi_n) = \Av_{J_t}(g_{\phi_n}) > c^\frac{1}{q}c_I^{\phi_n} \geq y_{I,n} = \Av_I(\phi_n)$ which is impossible by Lemma \ref{lem:2p2}, since $I\in S_{\phi_n}$.
\end{enumerate}
\end{enumerate}
Thus in both subcases ($\mr b_1$) and ($\mr b_2$) we have proved that $(E_{\phi_n}\!\setminus \Delta_n) \cap A_{I,n} = A_{I,n} \setminus \{g_{\phi_n}\!=0\}$, while we showed that in the case (\rnum{1}), $(E_{\phi_n}\!\setminus \Delta_n)\cap A_{I,n} = \emptyset$. \\
Since $\cup \{A_{I,n}: I\in S_{\phi_n},\ I\subseteq E_{\phi_n}\} \approx E_{\phi_n}$, we conclude by the above discussion that $E_{\phi_n}\!\setminus \Delta_n$ can be written as $\left(\cup_{I\in S_{1,\phi_n}} A_{I,n}\right)\setminus \Gamma_{\phi_n}$, where $\mu(\Gamma_{\phi_n})\to 0$ and $S_{1,\phi_n}$ is a subtree of $S_{\phi_n}$. Then by Lemma \ref{lem:2p3} and Theorem \ref{thm:4p3}, by passing if necessary to a subsequence, we have
\[
\lim_n \int_{\cup_{I\in S_{1,\phi_n}}\! A_{I,n}} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu =
c \lim_n \int_{\cup_{I\in S_{1,\phi_n}}\! A_{I,n}} \phi_n^q\,\mr d\mu,
\]
so since $\lim_n \mu(\Gamma_{\phi_n}) = 0$, we conclude that
\begin{equation} \label{eq:4p57}
\lim_n \int_{E_{\phi_n}\!\setminus\Delta_n} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu =
c \lim_n \int_{E_{\phi_n}\!\setminus\Delta_n} \phi_n^q\,\mr d\mu.
\end{equation}
Since $\mc M_{\mc T}g_{\phi_n} \geq \mc M_{\mc T}\phi_n$ on $X$, we obtain from \eqref{eq:4p57} that
\begin{equation} \label{eq:4p58}
\lim_n \int_{E_{\phi_n}\!\setminus\Delta_n} (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu \geq
c \lim_n \int_{E_{\phi_n}\!\setminus \Delta_n} g_{\phi_n}^q\,\mr d\mu.
\end{equation}
Adding \eqref{eq:4p56} and \eqref{eq:4p58}, we obtain
\begin{equation} \label{eq:4p59}
\lim_n \int_{E_{\phi_n}} (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu \geq
c\lim_n\int_{E_{\phi_n}} g_{\phi_n}^q\,\mr d\mu,
\end{equation}
which in fact is an equality: if the inequality in \eqref{eq:4p59} were strict, then, since $g_{\phi_n} = \phi_n$ on $X\setminus E_{\phi_n}$, we would obtain that $\lim_n \int_X (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu > c\, h$, as we can easily see.
This is a contradiction, since $\int_X g_{\phi_n}\,\mr d\mu = f$ and $\int_X g_{\phi_n}^q\,\mr d\mu = h$, for every $n\in\mb N$, and because of Theorem \ref{thm:1}.
Thus we must have equality in both \eqref{eq:4p56} and \eqref{eq:4p58}. \\
Our proof is completed.
\end{proof}
We proceed now to the following
\begin{lemma} \label{lem:4p5}
Let $X_n\subset X$, and $h_n, z_n: X_n\to \mb R^+$ be measurable functions such that $h_n^q = z_n$, where $q\in(0,1)$ is fixed. Suppose additionally that $g_n, w_n: X\to \mb R^+$ satisfy $g_n^q = w_n$. Suppose also that $g_n \geq h_n$, on $X_n$.
Then if $\lim_n \int_{X_n} (w_n-z_n)\,\mr d\mu = 0$ and the sequence $\int_{X_n}\! w_n\,\mr d\mu$ is bounded, we have that $\lim_n \int_{X_n} (g_n-h_n)^q\,\mr d\mu = 0$.
\end{lemma}
\begin{proof}
We set $I_n = \int_{X_n} \Big(w_n^\frac{1}{q} - z_n^\frac{1}{q}\Big)^q\mr d\mu$. \\
For every $p>1$ the following elementary inequality is true: $x^p-y^p \leq p(x-y)x^{p-1}$, for $x>y>0$. Thus for $p=\frac{1}{q}$, we have $w_n^p - z_n^p \leq p(w_n-z_n)w_n^{p-1} \implies$
\begin{equation} \label{eq:4p60}
I_n \leq \left(\frac{1}{q}\right)^q \int_{X_n} (w_n-z_n)^q w_n^{1-q}\,\mr d\mu.
\end{equation}
If we now use H\"{o}lder's inequality in \eqref{eq:4p60} we immediately obtain that $I_n \leq \left(\frac{1}{q}\right)^q \left(\int_{X_n} (w_n-z_n)\,\mr d\mu\right)^q \left(\int_{X_n} w_n\,\mr d\mu\right)^{1-q} \to 0$, as $n\to \infty$, by our hypothesis.
\end{proof}
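The two ingredients of the above proof, the elementary inequality for $p=1/q>1$ and the H\"older estimate, can be illustrated numerically as follows; this is only a sketch of ours with arbitrary random data, where discrete averages play the role of the integrals over $X_n$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
q = 0.4
p = 1.0 / q                                   # p = 1/q > 1

# elementary inequality: x^p - y^p <= p (x - y) x^{p-1}, for x > y > 0
x = rng.uniform(1.0, 5.0, 10_000)
y = x * rng.uniform(0.01, 0.99, 10_000)
assert np.all(x**p - y**p <= p * (x - y) * x**(p - 1) + 1e-9)

# Hoelder-type bound used in the last step (discrete analogue with averages):
# mean((w-z)^q w^{1-q}) <= mean(w-z)^q * mean(w)^{1-q}
w = rng.uniform(0.5, 2.0, 10_000)
z = w * rng.uniform(0.0, 1.0, 10_000)         # 0 <= z <= w
lhs = np.mean((w - z)**q * w**(1 - q))
rhs = np.mean(w - z)**q * np.mean(w)**(1 - q)
assert lhs <= rhs + 1e-9
print("inequalities verified on random samples")
\end{verbatim}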
\noindent Let now $(\phi_n)$ be an extremal sequence of functions. We define $g_{\phi_n}': (X,\mu)\to \mb R^+$ by
\[
g_{\phi_n}'(t) = c_I^{\phi_n},\ t\in A_{I,n} = A(\phi_n,I),\ \ \text{for}\ I\in S_{\phi_n}.
\]
\noindent We prove now the following
\begin{lemma} \label{lem:4p6}
With the above notation $\lim_n \int_{E_{\phi_n}} |g_{\phi_n}' - \phi_n|^q\,\mr d\mu = 0$.
\end{lemma}
\begin{proof}
We are going to use again the inequality $t + \frac{1-q}{q} \geq \frac{t^q}{q}$, which holds for every $t>0$ and $q\in(0,1)$. In view of Lemma \ref{lem:4p5} we just need to prove that
\[
\int_{\{\phi_n \geq g_{\phi_n}'\}\cap E_{\phi_n}}\!\!\! [\phi_n^q - (g_{\phi_n}')^q]\,\mr d\mu \to 0\quad \text{and}\quad
\int_{\{g_{\phi_n}' > \phi_n\}\cap E_{\phi_n}}\!\!\! [(g_{\phi_n}')^q - \phi_n^q]\,\mr d\mu \to 0.
\]
We proceed to this as follows. \\
For every $I\in S_{\phi_n}$, $I\subseteq E_{\phi_n}$, we set
\begin{align*}
\Delta_{I,n}^{(1)} = \{g_{\phi_n}' \leq \phi_n\} \cap A(\phi_n, I), \\
\Delta_{I,n}^{(2)} = \{\phi_n < g_{\phi_n}'\} \cap A(\phi_n,I).
\end{align*}
From the inequality mentioned in the beginning of this proof we have that, if $c_I^{\phi_n} > 0$, then
\[
\frac{\phi_n(x)}{c_I^{\phi_n}} + \frac{1-q}{q} \geq \frac{1}{q} \frac{\phi_n^q(x)}{(c_I^{\phi_n})^q}, \ \ \forall x\in A_{I,n},
\]
so integrating over every $\Delta_{I,n}^{(j)}$, $j=1,2$ we obtain
\begin{align} \label{eq:4p61}
& \frac{1}{c_I^{\phi_n}} \int_{\Delta_{I,n}^{(j)}} \phi_n\,\mr d\mu + \frac{1-q}{q}\mu(\Delta_{I,n}^{(j)}) \geq
\frac{1}{q} \frac{1}{(c_I^{\phi_n})^q} \int_{\Delta_{I,n}^{(j)}} \phi_n^q\,\mr d\mu \implies \notag \\
& \sum_{I\in S_{\phi_n}'} (c_I^{\phi_n})^{q-1} \int_{\Delta_{I,n}^{(j)}} \phi_n\,\mr d\mu + \frac{1-q}{q} \sum_{I\in S_{\phi_n}'} \mu(\Delta_{I,n}^{(j)}) (c_I^{\phi_n})^q \geq
\frac{1}{q} \int_{\cup_{I\in S_{\phi_n}'}\!\!\! \Delta_{I,n}^{(j)}} \phi_n^q\,\mr d\mu,
\end{align}
for $j=1,2$, where $S_{\phi_n}' = \{I\in S_{\phi_n}: I\subseteq E_{\phi_n},\ c_I^{\phi_n}>0\}$. \\
From the definition of $g_{\phi_n}'$ we see that \eqref{eq:4p61} gives
\begin{multline} \label{eq:4p62}
\int_{\cup_{I\in S_{\phi_n}'} \Delta_{I,n}^{(j)}} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu + \frac{1-q}{q} \sum_{I\in S_{\phi_n}'} (c_I^{\phi_n})^q \mu(\Delta_{I,n}^{(j)}) \geq \\
\frac{1}{q} \int_{\cup_{I\in S_{\phi_n}'} \Delta_{I,n}^{(j)}} \phi_n^q\,\mr d\mu,\ \ \text{for}\ j=1,2.
\end{multline}
Note now that
\[
\sum_{I\in S_{\phi_n}'} (c_I^{\phi_n})^q \mu(\Delta_{I,n}^{(j)}) = \begin{cases}
\int_{\{\phi_n\geq g_{\phi_n}'\}\cap E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu, & j=1 \\[10pt]
\int_{\{\phi_n<g_{\phi_n}'\}\cap E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu, & j=2,
\end{cases}
\]
and
\[
\int_{\cup_{I\in S_{\phi_n}'}\!\!\!\Delta_{I,n}^{(j)}} \phi_n^q\,\mr d\mu = \begin{cases}
\int_{\{\phi_n\geq g_{\phi_n}'\}\cap E_{\phi_n}} \phi_n^q\,\mr d\mu, & j=1 \\[10pt]
\int_{\{\phi_n<g_{\phi_n}'\}\cap E_{\phi_n}} \phi_n^q\,\mr d\mu, & j=2,
\end{cases}
\]
because if $c_I^{\phi_n}=0$ for some $I\in S_{\phi_n}$ with $I\subseteq E_{\phi_n}$, then $\phi_n=0$ on the respective $A_{I,n}$, and conversely. Additionally:
\[
\int_{\cup_{I\in S_{\phi_n}'}\!\!\!\Delta_{I,n}^{(j)}} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu = \begin{cases}
\int_{\{\phi_n\geq g_{\phi_n}'\}\cap E_{\phi_n}} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu, & j=1 \\[10pt]
\int_{\{\phi_n<g_{\phi_n}'\}\cap E_{\phi_n}} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu, & j=2.
\end{cases}
\]
So we conclude the following two inequalities:
\begin{equation} \label{eq:4p63}
\int\limits_{\{0 < g_{\phi_n}' \leq \phi_n\} \cap E_{\phi_n}}\hspace{-26pt} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu +
\frac{1-q}{q}\ \int\limits_{\mathpalette\mathclapinternal{\{g_{\phi_n}' \leq \phi_n\} \cap E_{\phi_n}}} (g_{\phi_n}')^q\,\mr d\mu\quad \geq \quad
\frac{1}{q}\ \int\limits_{\mathpalette\mathclapinternal{\{g_{\phi_n}' \leq \phi_n\} \cap E_{\phi_n}}} \phi_n^q\,\mr d\mu,
\end{equation}
and
\begin{equation} \label{eq:4p64}
\int\limits_{\{g_{\phi_n}' > \phi_n\} \cap E_{\phi_n}}\hspace{-21pt} (g_{\phi_n}')^{q-1}\phi_n\,\mr d\mu +
\frac{1-q}{q}\ \int\limits_{\mathpalette\mathclapinternal{\{g_{\phi_n}' > \phi_n\} \cap E_{\phi_n}}} (g_{\phi_n}')^q\,\mr d\mu\quad \geq \quad
\frac{1}{q}\ \int\limits_{\mathpalette\mathclapinternal{\{g_{\phi_n}' >\phi_n\} \cap E_{\phi_n}}} \phi_n^q\,\mr d\mu.
\end{equation}
If we sum the above inequalities we get:
\begin{equation} \label{eq:4p65}
\sum_{I\in S_{\phi_n}'} (c_I^{\phi_n})^{q-1} (c_I^{\phi_n} \gamma_I^{\phi_n}) + \frac{1-q}{q} \int_{E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu \geq
\frac{1}{q} \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu.
\end{equation}
Now the following are true because of Lemma \ref{lem:4p2}
\[
\int_{E_{\phi_n}} \phi_n^q\,\mr d\mu = \sum_{I\in S_{\phi_n}'} \gamma_I^{\phi_n}(c_I^{\phi_n})^q \approx
\int_{E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu.
\]
Thus in \eqref{eq:4p65} we must have equality in the limit. As a result we obtain equalities in both \eqref{eq:4p63} and \eqref{eq:4p64} in the limit. \\
As a consequence, if we set
\[
t_n = \int_{\{g_{\phi_n}'\leq \phi_n\}\cap E_{\phi_n}} \phi_n^q\,\mr d\mu, \qquad
s_n = \int_{\{g_{\phi_n}'\leq \phi_n\}\cap E_{\phi_n}} (g_{\phi_n}')^q\,\mr d\mu,
\]
we must have that
\begin{equation} \label{eq:4p66}
\int_{\{\phi_n \geq g_{\phi_n}' > 0\}\cap E_{\phi_n}} \phi_n (g_{\phi_n}')^{q-1}\,\mr d\mu + \frac{1-q}{q}s_n \approx \frac{1}{q}t_n.
\end{equation}
But as can be easily seen we have that
\begin{equation} \label{eq:4p67}
\bigg[\qquad \int\limits_{\qquad\mathpalette\mathclapinternal{\{0 < g_{\phi_n}' \leq \phi_n\}\cap E_{\phi_n}}} \phi_n(g_{\phi_n}')^{q-1}\,\mr d\mu\bigg]^q \cdot
\bigg[\qquad \int\limits_{\qquad\mathpalette\mathclapinternal{\{0 < g_{\phi_n}' \leq \phi_n\}\cap E_{\phi_n}}} (g_{\phi_n}')^q\,\mr d\mu\bigg]^{1-q} \geq
\qquad \int\limits_{\qquad\mathpalette\mathclapinternal{\{0 < g_{\phi_n}' \leq \phi_n\}\cap E_{\phi_n}}} \phi_n^q\,\mr d\mu.
\end{equation}
From \eqref{eq:4p66} and \eqref{eq:4p67} we have as a result that
\begin{equation} \label{eq:4p68}
\frac{t_n^\frac{1}{q}}{s_n^{\frac{1}{q}-1}} + \frac{1-q}{q}s_n \leq \frac{1}{q}t_n \implies
\left(\frac{t_n}{s_n}\right)^\frac{1}{q} + \frac{1-q}{q} \leq \frac{1}{q}\left(\frac{t_n}{s_n}\right),
\end{equation}
in the limit. \\
But for every $n\in\mb N$ we have that $\left(\frac{t_n}{s_n}\right)^\frac{1}{q} + \frac{1-q}{q} \geq \frac{1}{q}\left(\frac{t_n}{s_n}\right)$. Thus we have equality in \eqref{eq:4p68} in the limit. This means that $\frac{t_n}{s_n}\approx 1$ and since $(t_n)_n$ and $(s_n)_n$ are bounded sequences, we conclude that
\[
t_n-s_n\to 0 \implies \int_{\{g_{\phi_n}'\leq \phi_n\}\cap E_{\phi_n}} [\phi_n^q-(g_{\phi_n}')^q]\,\mr d\mu \to 0,\ \ \text{as}\ n\to\infty.
\]
In a similar way we prove that $\int_{\{\phi_n < g_{\phi_n}'\}\cap E_{\phi_n}} [(g_{\phi_n}')^q - \phi_n^q]\,\mr d\mu \to 0$. Thus Lemma \ref{lem:4p6} is proved. \\
\end{proof}
\noindent We now proceed to the following.
\begin{lemma} \label{lem:4p7}
With the above notation, we have that
\[
\lim_n \int_{E_{\phi_n}} \left| \mc M_{\mc T}\phi_n - c^\frac{1}{q}\phi_n\right|^q\mr d\mu = 0.
\]
\end{lemma}
\begin{proof}
We set $J_n = \int_{E_{\phi_n}} \left|\mc M_{\mc T}\phi_n - c^\frac{1}{q}\phi_n\right|^q\mr d\mu$. \\
It is true that $(x+y)^q < x^q+y^q$, whenever $x,y>0$, $q\in (0,1)$.
Thus
\begin{multline*}
J_n \leq \int_{E_{\phi_n}} |\mc M_{\mc T}\phi_n - \mc M_{\mc T}g_{\phi_n}|^q\,\mr d\mu + \int_{E_{\phi_n}} |\mc M_{\mc T}g_{\phi_n}-c^\frac{1}{q}g_{\phi_n}|^q\,\mr d\mu + \\
c\int_{E_{\phi_n}} |g_{\phi_n}-\phi_n|^q\,\mr d\mu = J_n^{(1)} + J_n^{(2)} + J_n^{(3)}.
\end{multline*}
By Lemmas \ref{lem:4p6} and \ref{lem:4p2} we have that $J_n^{(3)}\to 0$, as $n\to \infty$. Also, $J_n^{(2)}\to 0$ by Lemma \ref{lem:4p4}. We look now at $J_n^{(1)} = \int_{E_{\phi_n}} |\mc M_{\mc T}\phi_n - \mc M_{\mc T}g_{\phi_n}|^q\,\mr d\mu$.
As we have mentioned before $\mc M_{\mc T}g_{\phi_n} \geq \mc M_{\mc T}\phi_n$, on $X$, thus $J_n^{(1)} = \int_{E_{\phi_n}} (\mc M_{\mc T}g_{\phi_n} - \mc M_{\mc T}\phi_n)^q\,\mr d\mu$. \\
Since $\lim_n\int_{E_{\phi_n}} (\mc M_{\mc T}\phi_n)^q\,\mr d\mu = \lim_n\int_{E_{\phi_n}} (\mc M_{\mc T}g_{\phi_n})^q\,\mr d\mu = c \lim_n \int_{E_{\phi_n}} \phi_n^q\,\mr d\mu$ we immediately see that $J_n^{(1)}\to 0$, by Lemma \ref{lem:4p5}. \\
The proof of Lemma \ref{lem:4p7} is thus complete, completing also the proof of Theorem \ref{thm:a}.
\end{proof}
\begin{remark} \label{rem:4p1}
We need to mention that Theorem \ref{thm:a} holds true on $\mb R^n$ without the hypothesis that the sequence $(\phi_n)_n$ consists of $\mc T$-good functions. This is true since, in the case of $\mb R^n$, where $\mc T$ is the usual tree of dyadic subcubes of a fixed cube $Q$, the class of $\mc T$-good functions contains the class of dyadic step functions on $Q$, which is dense in $L^1(X,\mu)$.
\end{remark}
\noindent Nikolidakis Eleftherios\\
Visiting Professor\\
Department of Mathematics \\
University of Ioannina \\
Greece\\
E-mail address: [email protected]
\end{document} |
\begin{document}
\title[area of minimal hypersurface ]
{ Area of minimal hypersurfaces }
\author{Qing-Ming Cheng, Guoxin Wei and Yuting Zeng}
\address{Qing-Ming Cheng \\ Department of Applied Mathematics, Faculty of Sciences ,
Fukuoka University, 814-0180, Fukuoka, Japan, [email protected]}
\address{Guoxin Wei \\ School of Mathematical Sciences, South China Normal University,
510631, Guangzhou, China, [email protected]}
\address{Yuting Zeng \\ School of Mathematical Sciences, South China Normal University,
510631, Guangzhou, China, [email protected]}
\begin{abstract}
A well-known conjecture of Yau states that the area of one of the Clifford minimal hypersurfaces $S^k\big{(}\sqrt{\frac{k}{n}}\, \big{)}\times S^{n-k}\big{(}\sqrt{\frac{n-k}{n}}\, \big{)}$ gives the lowest value of area among all non-totally geodesic compact minimal hypersurfaces in the unit sphere $S^{n+1}(1)$. The present paper shows that the Yau conjecture is true for minimal rotational hypersurfaces; more precisely, the area $|M^n|$ of a compact minimal rotational hypersurface $M^n$ is either equal to $|S^n(1)|$, or equal to $|S^1(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})|$, or greater than $2(1-\frac{1}{\pi})|S^1(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})|$. As an application, the entropies of some special self-shrinkers are estimated.
\end{abstract}
\footnotetext{ 2010 \textit{ Mathematics Subject Classification}: 53C42, 53A10.}
\footnotetext{{\it Key words and phrases}: area, minimal hypersurface, Yau conjecture.}
\footnotetext{The first author was partially supported by JSPS Grant-in-Aid for Scientific Research (B): No.16H03937
and Challenging Exploratory Research.
The second author was partly supported by grant No. 11371150 of NSFC.}
\maketitle
\section{Introduction}
The study of minimal hypersurfaces in space forms (that is, $\mathbb{R}^{n+1}$, the sphere $S^{n+1}$, and hyperbolic space $H^{n+1}$) is one of the most important subjects in differential geometry. There are a lot of nice results on this topic (see \cite{B2}, \cite{CDK}, \cite{DX}, \cite{L}, \cite{PT} and many others). The simplest examples of minimal hypersurfaces in $S^{n+1}$ are the totally geodesic $n$-spheres. Other basic examples are the so-called Clifford minimal hypersurfaces $S^k\big{(}\sqrt{\frac{k}{n}}\, \big{)}\times S^{n-k}\big{(}\sqrt{\frac{n-k}{n}}\, \big{)}$.
Cheng, Li and Yau \cite{CLY} proved in 1984 that if $M^n$ is a compact minimal hypersurface in the unit sphere $S^{n+1}(1)$ and $M^n$ is not totally geodesic, then there exists a constant $c(n)>0$, such that the area $|M^n|$ of $M^n$ satisfies $|M^n|>(1+c(n))|S^n(1)|$, that is, the area of the totally geodesic $n$-sphere $S^{n}(1)\subset S^{n+1}(1)$ is the smallest among all compact minimal hypersurfaces in $S^{n+1}(1)$.
In 1992, S.T. Yau \cite{Y} posed the following conjecture (P288, Problem 31):
\noindent {\bf Yau Conjecture}: {\it The area of one of Clifford minimal hypersurfaces $S^k\big{(}\sqrt{\frac{k}{n}}\, \big{)}\times S^{n-k}\big{(}\sqrt{\frac{n-k}{n}}\, \big{)}$ gives the lowest value of area among all non-totally geodesic compact minimal hypersurfaces in the unit sphere $S^{n+1}(1)$.}
In this paper, we consider a somewhat more restricted version of the Yau conjecture, namely for compact minimal rotational hypersurfaces $M^n$ in $S^{n+1}(1)$. As one of the main results of this paper, we prove
\begin{theorem}\label{theorem 1}
If $M^n$ is a compact minimal rotational hypersurface in $S^{n+1}(1)$, then the area $|M^n|$ of $M^n$ satisfies either
$|M^n|=|S^n(1)|$, or $|M^n|=|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})|$, or
$|M^n|> 2(1-\dfrac{1}{\pi})|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})|$.
\end{theorem}
\begin{remark}\label{remark 1}
From the theorem \ref{theorem 1}, we see that the Yau conjecture is true for minimal rotational hypersurfaces.
\end{remark}
\begin{corollary}\label{theorem 2}
If $M^n$ is a compact minimal rotational hypersurface in $S^{n+1}(1)$, then the area $|M^n|$ of $M^n$ satisfies either
$|M^n|=|S^n(1)|$, or $|M^n|=|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})|$, or
$|M^n|=|M^n(3,2)|$, or $|M^n|> 3(1-\dfrac{1}{\pi})|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})|$, where $M^n(3,2)$ is the compact minimal rotational hypersurface with $3$-fold rotational symmetry and rotation number $2$.
\end{corollary}
\begin{remark}\label{remark 2}
From the corollary \ref{theorem 2}, we have that the conjecture proposed by Perdomo and the first author in \cite{PW} is true except possibly in the case of $M^n(3,2)$, since
$3(1-\dfrac{1}{\pi})>2$.
\end{remark}
By considering the upper bounds of some integral, we show another main result of the present paper concerning the areas of compact minimal rotational hypersurfaces $M^n$ in $S^{n+1}(1)$, stated as follows
\begin{theorem}\label{theorem 3}
The lowest value of area among all compact minimal rotational hypersurfaces with non-constant principal curvatures in the unit sphere $S^{n+1}(1)$ is the area of either $M^n(3,2)$ or $M^n(5,3)$, where $M^n(k,l)$ is a compact minimal rotational hypersurface in $S^{n+1}(1)$ with $k$-fold rotational symmetry and rotation number $l$, for which $K=l\times\frac{2\pi}{k}=\frac{2l\pi}{k}$.
\end{theorem}
\section{Preliminaries}
Without loss of generality, we assume that the minimal rotational hypersurface is neither $S^n(1)$ nor $S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})$.
From \cite{O} and \cite{P1}, we have the following description for every minimal rotational hypersurface in $S^{n+1}(1)$ whose principal curvatures are not constant.
Let us describe complete minimal rotational hypersurface $M_a$. For any positive number $a<a_0=\frac{(n-1)^{n-1}}{n^n}$, let $r(t)$ be a solution of the following ordinary differential equation
\begin{eqnarray}\label{theode}
(r^\prime(t))^2=1-r(t)^2-a r(t)^{2-2n}.
\end{eqnarray}
Since $0<a<a_0$, we have that the function $q(v)=1-v^2-a v^{2-2n}$ has two positive roots $r_1$ and $r_2$ between $0$ and $1$. Therefore it is not difficult to check that the solution of the differential equation (\ref{theode}) is a periodic function with period $T=2 \int_{r_1}^{r_2} \frac{1}{\sqrt{q(v)}}\, dv$ that takes values between $r_1$ and $r_2$. Moreover, since the differential equation (\ref{theode}) does not depend on $t$ explicitly, then, for any $k$ we have that $r(t-k)$ is a solution, provided $r(t)$ is a solution. Therefore we can assume that $r(0)=r_1$ and $r(\frac{T}{2})=r_2$.
If we define
$$\theta(t)= \int_0^t \frac{\sqrt{a} \, r^{1-n}(\tau)}{1-r^2(\tau)}\, d\tau,$$
then the hypersurface $\phi:S^{n-1}\times [0, T]\to S^{n+1}$ given by
$$\phi(y,t)=\big{(}\, r(t)\, y, \, \sqrt{1-r^2(t)}\, \cos(\theta(t)),\, \sqrt{1-r^2(t)}\, \sin(\theta(t))\, \big{)}$$
is called the {\it fundamental portion} of $M_a$. The curve
$$\alpha(t)=\big{(}\, \sqrt{1-r^2(t)}\, \cos(\theta(t)),\, \sqrt{1-r^2(t)}\, \sin(\theta(t))\, \big{)}$$
is called the {\it profile curve} of $M_a$. It turns out that the whole hypersurface $M_a$ is the union of rotations of the fundamental portion. We also have that the hypersurface $M_a$ is compact if and only if the number (also see \cite{O})
\begin{eqnarray}\label{k}
K(a)=\theta(T)= 2 \int_0^{\frac{T}{2}} \frac{\sqrt{a} \, r^{1-n}(\tau)}{1-r^2(\tau)}\, d\tau =2 \pi \frac{p}{s},
\end{eqnarray}
for some pair of relatively prime integers $p$ and $s$. In this case, $M_a$ is made out of exactly $s$ copies of the fundamental portion, that is, $M^n$ is a hypersurface with {\it $s$-fold rotational symmetry} and {\it rotation number $p$}. When $\frac{K(a)}{2 \pi}$ is not a rational number, we have that the hypersurface is not compact.
The following lemma is due to Otsuki \cite{O1}, \cite{O2}. A proof for the particular case $n=2$ can also be found in \cite{AL} and in \cite{P2}.
\begin{lemma}\label{lemma 1}
The function $K(a)$ given in (\ref{k}) is strictly increasing and differentiable on $(0,a_0)$ and
$$ \lim_{a\to 0} K(a)=\pi ,\qquad \lim_{a\to a_0} K(a)=\sqrt{2}\pi. $$
\end{lemma}
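As an illustration (this numerical sketch is ours and is not needed for the proofs), $r_1$, $r_2$, the period $T$ and $K(a)$ can be computed directly: rewriting the $\tau$-integrals as integrals in $r$ via $d\tau=dr/\sqrt{q(r)}$ on a half-period, one only needs the two roots of $q$ and a one-dimensional quadrature. The root bracket, the truncation parameter and the sampled values below are ad hoc choices.
\begin{verbatim}
import numpy as np
from scipy import optimize, integrate

def profile_data(n, a, eps=1e-9):
    """Roots r1 < r2 of q(v) = 1 - v^2 - a v^{2-2n}, the period T and K(a)."""
    q = lambda v: 1.0 - v**2 - a * v**(2.0 - 2.0 * n)
    v_top = (a * (n - 1)) ** (1.0 / (2.0 * n))   # q'(v_top) = 0 and q(v_top) > 0
    r1 = optimize.brentq(q, 1e-12, v_top)
    r2 = optimize.brentq(q, v_top, 1.0)
    # integrate just inside (r1, r2) to avoid the integrable sqrt-singularities
    T = 2.0 * integrate.quad(lambda r: 1.0 / np.sqrt(q(r)),
                             r1 + eps, r2 - eps, limit=300)[0]
    K = 2.0 * integrate.quad(lambda r: np.sqrt(a) * r**(1.0 - n)
                             / ((1.0 - r**2) * np.sqrt(q(r))),
                             r1 + eps, r2 - eps, limit=300)[0]
    return r1, r2, T, K

n = 3
a0 = (n - 1) ** (n - 1) / n ** n                 # a_0 = (n-1)^{n-1}/n^n
for a in (0.01 * a0, 0.5 * a0, 0.99 * a0):
    r1, r2, T, K = profile_data(n, a)
    print(round(a / a0, 3), round(K / np.pi, 4))
\end{verbatim}
The printed ratios $K(a)/\pi$ increase with $a$ from values close to $1$ towards $\sqrt{2}$, in agreement with the lemma \ref{lemma 1}.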
In \cite{PW}, Perdomo and the first author proved the following lemma,
\begin{lemma}\label{lemma 2}
If $M^n$ is a compact minimal rotational hypersurface in $S^{n+1}(1)$ with non-constant principal curvatures, then the area of $M^n$, denoted by $|M^n|$, is equal to $$w(a)p,$$
where $p$ is the rotational number greater than $1$ (see \eqref{k}),
\begin{equation}
w(a)=2\pi \sigma_{n-1}\dfrac{\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx}{K(a)}\,,
\end{equation}
$a\in (0,a_0)$, $x_1<x_2$ are the only two roots in the interval $(0,1)$ of the polynomial $z(x)=x^{n-1}-x^n-a$,
$\sigma_{n-1}$ denotes the area of $S^{n-1}(1)$.
Moreover, we have
\begin{equation}
\aligned
&\ \ \ \lim_{a\to a_0} w(a)\\
&=\lim_{a\to a_0}2\pi \sigma_{n-1}\dfrac{\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx}{K(a)}\\
&=2\pi \sigma_{n-1}\dfrac{\sqrt{2a_0}\pi}{\sqrt{2}\pi}=2\pi \sigma_{n-1}\sqrt{a_0}\\
&= \biggl|S^1\biggl(\sqrt{\frac{1}{n}} \biggl)\times S^{n-1} \biggl(\sqrt{\frac{n-1}{n}} \biggl) \biggl|.
\endaligned
\end{equation}
\end{lemma}
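The value appearing in the last line of the above computation can be cross-checked by elementary means: since $|S^1(r)|=2\pi r$ and $|S^{n-1}(r)|=\sigma_{n-1}r^{n-1}$, one has $|S^1(\sqrt{1/n})\times S^{n-1}(\sqrt{(n-1)/n})|=2\pi \sigma_{n-1}\sqrt{a_0}$. The following short numerical sketch (ours; the helper name is arbitrary) verifies this identity for several $n$.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def sphere_area(k, r=1.0):
    """k-dimensional area of the sphere S^k(r)."""
    return 2.0 * np.pi ** ((k + 1) / 2.0) / gamma((k + 1) / 2.0) * r**k

for n in (2, 3, 5, 10):
    a0 = (n - 1) ** (n - 1) / n ** n
    clifford = sphere_area(1, np.sqrt(1.0 / n)) \
               * sphere_area(n - 1, np.sqrt((n - 1.0) / n))
    print(n, np.isclose(clifford, 2.0 * np.pi * sphere_area(n - 1) * np.sqrt(a0)))
\end{verbatim}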
\section{Proofs of the theorem \ref{theorem 1} and the corollary \ref{theorem 2}}
\noindent{\it Proof of the theorem \ref{theorem 1}}.
From the lemma \ref{lemma 1} and lemma \ref{lemma 2}, in order to prove the theorem \ref{theorem 1} it is sufficient to establish the following inequality
\begin{equation}\label{eq:3-21-1}
2\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}}}{\sqrt{x^{n-1}-x^{n}-a}}dx>2(1-\frac{1}{\pi})\sqrt{2a_0}\pi,
\end{equation}
for a compact minimal rotational hypersurface $M^n$ in $S^{n+1}(1)$ with non-constant principal curvatures,
where $0<x_1<x_0=\frac{n-1}{n}<x_2<1$ are two roots of $z(x)=x^{n-1}-x^{n}-a$,
$0<a<a_0$.
We next prove \eqref{eq:3-21-1}. Let
\begin{equation}
y=x^{n-\frac{1}{2}},
\end{equation}
then
\begin{equation}
2\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}}}{\sqrt{x^{n-1}-x^{n}-a}}dx
=\frac{4}{2n-1}\int_{y_1}^{y_2}\frac{1}{\sqrt{y^{\frac{2n-2}{2n-1}}-y^{\frac{2n}{2n-1}}-a}}dy.
\end{equation}
\eqref{eq:3-21-1} is equivalent to
\begin{equation}
\int_{y_1}^{y_2}\frac{1}{\sqrt{y^{\frac{2n-2}{2n-1}}-y^{\frac{2n}{2n-1}}-a}}dy>2(1-\frac{1}{\pi})\frac{2n-1}{4}\sqrt{2a_0}\pi:=2(1-\frac{1}{\pi})A_0\pi,
\end{equation}
where $y_1$, $y_2$ are two roots of $f(y)=y^{\frac{2n-2}{2n-1}}-y^{\frac{2n}{2n-1}}-a$, $A_0=\frac{2n-1}{4}\sqrt{2a_0}$.
We construct a function $g_1(y)$ as follows:
\begin{equation}
g_1(y) = {\begin{cases}
c{{(\sqrt {{y_1}} - \sqrt {{y_c}} )}^2} - c{{(\sqrt y - \sqrt {{y_c}} )}^2},\ \ y \in [{y_1},{y_c}],\\[3mm]
b{{({y_2} - {y_c})}^2} - b{{(y - {y_c})}^2},\quad \quad \quad \ \ \ \ y \in ({y_c},{y_2}],
\end{cases}}
\end{equation}
where $c = \frac{{8\sqrt {n(n - 1)} }}{{{{(2n - 1)}^2}}}$, $b = \frac{{2(n - 1)}}{{{{(2n - 1)}^2}}}{\left( {\frac{{n - 1}}{n}} \right)^{ - n}}$ and
$y_c=(\frac{n-1}{n})^{\frac{1}{2}(2n-1)}$.
Let
\begin{equation}
h_1(y)= g_1(y) - f(y), \ \ \ {\mbox{for}} \ y\in [y_1,y_2],
\end{equation}
we will prove
\begin{equation}
h_1(y)\geq 0
\end{equation}
and
\begin{equation}
\aligned
&\ \ \ \int_{y_1}^{y_2}\frac{1}{\sqrt{y^{\frac{2n-2}{2n-1}}-y^{\frac{2n}{2n-1}}-a}}dy\\
&=\int_{{y_1}}^{{y_2}} {\frac{1}{{\sqrt {f(y)} }}} dy \ge \int_{{y_1}}^{{y_2}} {\frac{1}{{\sqrt {g_1(y)} }}} dy\\
& > \left( {2-\frac{2}{\pi} + \frac{2}{\pi }\sqrt {\frac{{{y_1}}}{{{y_c}}}} } \right){A_0}\pi.
\endaligned
\end{equation}
\noindent We next consider two cases.\\
\noindent {\bf Case 1: $y\in [y_c,y_2]$.}
By a direct calculation, we obtain
\begin{equation}
h_1^{\prime}(y) = - 2b(y - {y_c}) - \frac{{2n - 2}}{{2n - 1}}{y^{\frac{{ - 1}}{{2n - 1}}}} + \frac{{2n}}{{2n - 1}}{y^{\frac{1}{{2n - 1}}}}, \end{equation}
\begin{equation}\label{eq:4-19-1}
h_1^{\prime\prime}(y) = - 2b + \frac{{2n - 2}}{{2n - 1}}\frac{1}{{2n - 1}}{y^{ - \frac{{2n}}{{2n - 1}}}} + \frac{{2n}}{{2n - 1}}\frac{1}{{2n - 1}}{y^{ - \frac{{2n - 2}}{{2n - 1}}}},
\end{equation}
and
\begin{equation}
h_1^{\prime\prime}(y_c)=0,
\end{equation}
it is easy from \eqref{eq:4-19-1} to see that $h_1^{\prime\prime}(y)$ is a monotonic decreasing function on an interval $[y_c,y_2]$, then
\begin{equation}
h_1^{\prime\prime}(y) \leq h_1^{\prime\prime}(y_c)=0, \ \ \ {\text{for}} \ y\in [y_c,y_2],
\end{equation}
that is, $h_1^{\prime}(y)$ is a monotonic decreasing function on an interval $[y_c,y_2]$. Since $h_1^{\prime}(y_c)=0$, we have
$h_1^{\prime}(y)\leq0$ for $y\in [y_c,y_2]$, that is, $h_1(y)$ is a monotonic decreasing function on an interval $[y_c,y_2]$. Since $h_1(y_2)=0$, we conclude that
\begin{equation}
h_1(y)\geq h_1(y_2)=0,\ \ \ {\mbox{for}} \ y\in [y_c,y_2],
\end{equation}
it follows that
\begin{equation}
\frac{1}{{\sqrt {f(y)} }} - \frac{1}{{\sqrt {g_1(y)} }} \ge 0,
\end{equation}
\begin{equation}
\int_{{y_c}}^{{y_2}} {\left( {\frac{1}{{\sqrt {f(y)} }} - \frac{1}{{\sqrt {g_1(y)} }}} \right)} dy \ge 0.
\end{equation}
On the other hand,
\begin{equation}
\int_{{y_c}}^{{y_2}} {\frac{1}{{\sqrt {g_1(y)} }}dy} = {A_0}\pi,
\end{equation}
we have
\begin{equation}
\int_{{y_c}}^{{y_2}} {\frac{1}{{\sqrt {{y^{\frac{{2n - 2}}{{2n - 1}}}} - {y^{\frac{{2n}}{{2n - 1}}}} - a} }}dy}
=\int_{{y_c}}^{{y_2}} {{\frac{1}{{\sqrt {f(y)} }}}}dy\geq \int_{{y_c}}^{{y_2}} \frac{1}{{\sqrt {g_1(y)} }} dy={A_0}\pi.
\end{equation}
\noindent {\bf Case 2: $y\in [y_1,y_c]$.}
By a direct calculation, we have
\begin{equation}
h_1^{\prime}(y) = g_1^{\prime}(y) - f^{\prime}(y) = c\left( {\sqrt {\frac{{{y_c}}}{y}} - 1} \right) - \frac{{2n - 2}}{{2n - 1}}{y^{ - \frac{1}{{2n - 1}}}} + \frac{{2n}}{{2n - 1}}{y^{\frac{1}{{2n - 1}}}},
\end{equation}
\begin{equation}\label{eq:4-19-2}
h_1''(y) = - \frac{c}{2}\sqrt {\frac{{{y_c}}}{{{y^3}}}} + \frac{{2n - 2}}{{2n - 1}}\frac{1}{{2n - 1}}{y^{ - \frac{{2n}}{{2n - 1}}}} + \frac{{2n}}{{2n - 1}}\frac{1}{{2n - 1}}{y^{ - \frac{{2n - 2}}{{2n - 1}}}},
\end{equation}
and
\begin{equation}\label{eq:4-19-3}
h_1''(y_c)=0,
\end{equation}
then we obtain from \eqref{eq:4-19-2} and \eqref{eq:4-19-3} that
\begin{equation}
h_1''(y)\leq0,\ \ \ {\mbox{for}} \ y\in [y_1,y_c],
\end{equation}
since $y^{3/2}h_1''(y)$ is increasing in $y$ on $(0,y_c]$ and vanishes at $y=y_c$.
From the above, together with $h_1'(y_c)=0$ and $h_1(y_1)=0$, we obtain
\begin{equation}
h_1(y)\geq0,\ \ \ {\mbox{for}} \ y\in [y_1,y_c].
\end{equation}
Hence we have
\begin{equation}
\aligned
&\ \ \ \int_{{y_1}}^{{y_c}} {\frac{1}{{\sqrt {f(y)} }}} dy\ge \int_{{y_1}}^{{y_c}} {\frac{1}{{\sqrt {g_1(y)} }}} dy= \left( {1 - \frac{2}{\pi } + \frac{2}{\pi }\sqrt {\frac{{{y_1}}}{{{y_c}}}} } \right){A_0}\pi.
\endaligned
\end{equation}
From the above two cases, we conclude that
\begin{equation}
\int_{{y_1}}^{{y_2}} {\frac{1}{{\sqrt {f(y)} }}} dy \geq \left( {2-\dfrac{2}{\pi} + \frac{2}{\pi }\sqrt {\frac{{{y_1}}}{{{y_c}}}} } \right){A_0}\pi>
\left( {2-\dfrac{2}{\pi} } \right) {A_0}\pi.
\end{equation}
This completes the proof of the theorem \ref{theorem 1}.
$$\eqno{\Box}$$
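Although not needed for the proof, the inequality \eqref{eq:3-21-1} can also be observed numerically: for sampled $a\in(0,a_0)$ one computes the two roots $x_1<x_2$ of $z(x)=x^{n-1}-x^{n}-a$ and evaluates the integral by quadrature. The sketch below is ours; the truncation parameter and the sampled values of $n$ and $a$ are arbitrary, and all printed ratios should exceed $1$.
\begin{verbatim}
import numpy as np
from scipy import optimize, integrate

def lhs(n, a, eps=1e-9):
    """2 * int_{x1}^{x2} x^{n-3/2} / sqrt(x^{n-1} - x^n - a) dx."""
    z = lambda x: x**(n - 1) - x**n - a
    x0 = (n - 1.0) / n                       # z attains its maximum at x0
    x1 = optimize.brentq(z, 1e-12, x0)
    x2 = optimize.brentq(z, x0, 1.0)
    val, _ = integrate.quad(lambda x: x**(n - 1.5) / np.sqrt(z(x)),
                            x1 + eps, x2 - eps, limit=300)
    return 2.0 * val

for n in (3, 4, 7):
    a0 = (n - 1) ** (n - 1) / n ** n
    bound = 2.0 * (1.0 - 1.0 / np.pi) * np.sqrt(2.0 * a0) * np.pi
    ratios = [lhs(n, t * a0) / bound for t in (0.05, 0.25, 0.5, 0.75, 0.95)]
    print(n, [round(r, 3) for r in ratios])
\end{verbatim}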
\noindent{\it Proof of the corollary \ref{theorem 2}}.
From the lemma \ref{lemma 1} and lemma \ref{lemma 2}, we have that the area $|M^n|$ of a compact minimal rotational hypersurface $M^n$ in $S^{n+1}(1)$ with non-constant principal curvatures, other than $M^n(3,2)$, is greater than
\begin{equation}
3\times 2\pi \sigma_{n-1}\dfrac{\inf\limits_{a\in(0,a_0)}\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx}{K(a)}\,
\end{equation}
since the rotation number of $M^n(3,2)$ is $2$ and the rotation numbers of other hypersurfaces are greater than $2$.
From the proof of the theorem \ref{theorem 1}, we can get that
\begin{equation}
3\times 2\pi \sigma_{n-1}\dfrac{\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx}{K(a)}\ > 3(1-\dfrac{1}{\pi})\biggl|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})\biggl|.
\end{equation}
This completes the proof of the corollary \ref{theorem 2}.
$$\eqno{\Box}$$
\section{Estimate of an upper bound}
\begin{theorem}\label{theorem 4}
The area of $M^n(3,2)$ satisfies
\begin{equation}
|M^n(3,2)|<3\biggl|S^{1}\biggl(\sqrt{\frac{1}{n}}\biggl)\times S^{n-1}\biggl(\sqrt{\frac{n-1}{n}}\biggl)\biggl|.
\end{equation}
\end{theorem}
\begin{proof}
\noindent
Since the rotation number $p$ of $M^n(3,2)$ is $2$, we know from the lemma \ref{lemma 2} that the area of $M^n(3,2)$ is
\begin{equation}\label{eq:4-26-4}
4\pi \sigma_{n-1}\dfrac{\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx}{K(a)}=
4\pi \sigma_{n-1}\dfrac{\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx}{\frac{4\pi}{3}}
\end{equation}
for some $a<a_0$ and the area of $S^{1}\biggl(\sqrt{\frac{1}{n}}\biggl)\times S^{n-1}\biggl(\sqrt{\frac{n-1}{n}}\biggl)$ is $2\pi \sigma_{n-1}\sqrt{a_0}$. Hence it is sufficient to prove the following inequality
\begin{equation}
\int_{x_1}^{x_2}\dfrac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx<2\pi\sqrt{a_0},
\end{equation}
that is,
\begin{equation}
\int_{y_1}^{y_2}\dfrac{1}{\sqrt{f(y)}}dy<(2n-1)\pi\sqrt{a_0}=\dfrac{4\pi}{\sqrt{2}}A_0.
\end{equation}
First of all, we construct a function $g_2(y)$ as follows:
\begin{equation}
g_2(y) = \left\{ {\begin{array}{*{20}{c}}
{C{{({y_1} - {y_c})}^2} - C{{(y - {y_c})}^2},\quad y \in [{y_1},{y_c}]},\\
{B{{({y_2} - {y_c})}^2} - B{{(y - {y_c})}^2},\quad y \in ({y_c},1]},
\end{array}} \right.\end{equation}
where $B=\frac{1}{2n-1}$, $C=\frac{{2(n - 1)}}{{{{(2n - 1)}^2}}}{\left( {\frac{{n - 1}}{n}} \right)^{ - n}}$ and $y_c=(\frac{n-1}{n})^{\frac{1}{2}(2n-1)}$, then we claim
\begin{equation}
g_2(y) \le f(y), \ \ \ y\in [{y_1},{y_2}].
\end{equation}
Let $h_2(y)=g_2(y)-f(y)$. We have to consider two cases.
\noindent {\bf Case 1: $y\in [y_c,y_2]$.}
By a direct calculation, we obtain
\begin{equation}\label{eq:4-22-1}
h_2'(y)=g_2'(y)-f'(y) = - 2B(y - {y_c}) - \frac{{2n - 2}}{{2n - 1}}{y^{\frac{{ - 1}}{{2n - 1}}}} + \frac{{2n}}{{2n - 1}}{y^{\frac{1}{{2n - 1}}}},
\end{equation}
\begin{equation}\label{eq:4-22-2}
h_2''(y) = - 2B + \frac{{2n - 2}}{{2n - 1}}\frac{1}{{2n - 1}}{y^{ - \frac{{2n}}{{2n - 1}}}} + \frac{{2n}}{{2n - 1}}\frac{1}{{2n - 1}}{y^{ - \frac{{2n - 2}}{{2n - 1}}}}
\end{equation}
and
\begin{equation}\label{eq:4-22-3}
h_2''(1)=0,
\end{equation}
we can see from \eqref{eq:4-22-2} that $h_2^{\prime\prime}(y)$ is a monotonic decreasing function on an interval $[y_c,1]$, then
\begin{equation}
h_2^{\prime\prime}(y) \geq h_2^{\prime\prime}(y_2)\geq h_2^{\prime\prime}(1)=0, \ \ \ {\text{for}} \ y\in [y_c,y_2],
\end{equation}
that is, $h_2^{\prime}(y)$ is a monotonic increasing function on an interval $[y_c,y_2]$. Since $h_2^{\prime}(y_c)=0$, we have
$h_2^{\prime}(y)\geq0$ for $ y\in [y_c,y_2]$, that is, $h_2(y)$ is a monotonic increasing function on an interval $[y_c,y_2]$. Since $h_2(y_2)=0$, we conclude that
\begin{equation}
h_2(y)\leq h_2(y_2)=0,\ \ \ {\mbox{for}} \ y\in [y_c,y_2],
\end{equation}
it follows that
\begin{equation}
\frac{1}{{\sqrt {f(y)} }} - \frac{1}{{\sqrt {g_2(y)} }} \leq 0,
\end{equation}
\begin{equation}
\int_{{y_c}}^{{y_2}} {\left( {\frac{1}{{\sqrt {f(y)} }} - \frac{1}{{\sqrt {g_2(y)} }}} \right)} dy \leq 0.
\end{equation}
On the other hand,
\begin{equation}
\int_{{y_c}}^{{y_2}} {\frac{1}{{\sqrt {g_2(y)} }}dy} =\frac{{\sqrt {2n - 1} }}{2} \pi,
\end{equation}
we have
\begin{equation}\label{eq:4-22-7}
\int_{{y_c}}^{{y_2}} {\frac{1}{{\sqrt {{y^{\frac{{2n - 2}}{{2n - 1}}}} - {y^{\frac{{2n}}{{2n - 1}}}} - a} }}dy}
=\int_{{y_c}}^{{y_2}} {{\frac{1}{{\sqrt {f(y)} }}}}dy\leq\int_{{y_c}}^{{y_2}} \frac{1}{{\sqrt {g_2(y)} }} dy=\frac{{\sqrt {2n - 1} }}{2}\pi.
\end{equation}\\
\noindent {\bf Case 2: $y\in [y_1,y_c]$.}
By a direct calculation, we have
\begin{equation}\label{eq:4-22-4}
h_2'(y) = - 2C(y - {y_c}) - \frac{{2n - 2}}{{2n - 1}}{y^{\frac{{ - 1}}{{2n - 1}}}} + \frac{{2n}}{{2n - 1}}{y^{\frac{1}{{2n - 1}}}},
\end{equation}
\begin{equation}\label{eq:4-22-5}
h_2''(y) = - 2C + \frac{{2n - 2}}{{2n - 1}}\frac{1}{{2n - 1}}{y^{ - \frac{{2n}}{{2n - 1}}}} + \frac{{2n}}{{2n - 1}}\frac{1}{{2n - 1}}{y^{ - \frac{{2n - 2}}{{2n - 1}}}}
\end{equation}
and
\begin{equation}\label{eq:4-22-6}
h_2''(y_c)=0,
\end{equation}
then we obtain from \eqref{eq:4-22-5} that $h_2^{\prime\prime}(y)$ is a monotonic decreasing function on an interval $[y_1,y_c]$, then
\begin{equation}
h_2^{\prime\prime}(y) \geq h_2^{\prime\prime}(y_c)=0, \ \ \ {\text{for}} \ y\in [y_1,y_c],
\end{equation}
that is, $h_2^{\prime}(y)$ is a monotonic increasing function on an interval $[y_1,y_c]$. Since $h_2^{\prime}(y_c)=0$, we have
$h_2^{\prime}(y)\leq0$ for $y\in [y_1,y_c]$, that is, $h_2(y)$ is a monotonic decreasing function on an interval $[y_1,y_c]$. Since $h_2(y_1)=0$, we conclude that
\begin{equation}
h_2(y)\leq h_2(y_1)=0,\ \ \ {\mbox{for}} \ y\in [y_1,y_c],
\end{equation}
it follows that
\begin{equation}
\frac{1}{{\sqrt {f(y)} }} - \frac{1}{{\sqrt {g_2(y)} }} \leq 0.
\end{equation}
On the other hand,
\begin{equation}
\int_{{y_1}}^{{y_c}} {\frac{1}{{\sqrt {g_2(y)} }}dy} =\frac{1}{2}\sqrt {\frac{{{{(2n - 1)}^2}}}{{2(n - 1)}}{{\left( {\frac{n}{{n - 1}}} \right)}^{ - n}}}\pi,
\end{equation}
we have
\begin{equation}\label{eq:4-22-8}
\aligned
&\ \ \ \int_{{y_1}}^{{y_c}} {\frac{1}{{\sqrt {{y^{\frac{{2n - 2}}{{2n - 1}}}} - {y^{\frac{{2n}}{{2n - 1}}}} - a} }}dy}\\
&=\int_{{y_1}}^{{y_c}} {{\frac{1}{{\sqrt {f(y)} }}}}dy\\
&\leq\int_{{y_1}}^{{y_c}} \frac{1}{{\sqrt {g_2(y)} }} dy\\
&= \frac{1}{2}\sqrt {\frac{{{{(2n - 1)}^2}}}{{2(n - 1)}}{{\left( {\frac{n}{{n - 1}}} \right)}^{ - n}}}\pi.
\endaligned
\end{equation}
\noindent From \eqref{eq:4-22-7} and \eqref{eq:4-22-8}, we have
\begin{equation}\label{eq:4-22-9}
\aligned
\int_{{y_1}}^{{y_2}} {\frac{1}{{\sqrt {f(y)} }}} dy
&\le \frac{{\sqrt {2n - 1} }}{2}\pi + \frac{1}{2}\sqrt {\frac{{{{(2n - 1)}^2}}}{{2(n - 1)}}{{\left( {\frac{n}{{n - 1}}} \right)}^{ - n}}} \pi\\
& = \left( {1 + \sqrt {\frac{{2(n - 1)}}{{2n - 1}}{{\left( {\frac{n}{{n - 1}}} \right)}^n}} } \right){A_0}\pi\\
&= \left( 1 +\sqrt{\frac{2}{(2n-1)a_0}}\right){A_0}\pi\\
&<\left( 1 +\sqrt{\biggl(\frac{n}{n-1}\biggl)^n}\right){A_0}\pi.
\endaligned
\end{equation}
When $n=3$, we see from \eqref{eq:4-22-9}
\begin{equation}\label{eq:4-26-1}
\int_{{y_1}}^{{y_2}} {\frac{1}{{\sqrt {f(y)} }}} dy\leq\left( {1 + \sqrt {\frac{{2(n - 1)}}{{2n - 1}}{{\left( {\frac{n}{{n - 1}}} \right)}^n}} } \right){A_0}\pi
=\left(1+\sqrt{\frac{27}{10}}\right){A_0}\pi,
\end{equation}
\noindent when $n\geq4$, we get
\begin{equation}\label{eq:4-26-2}
\int_{{y_1}}^{{y_2}} {\frac{1}{{\sqrt {f(y)} }}} dy<\left( 1 +\sqrt{\biggl(\frac{n}{n-1}\biggl)^n}\right){A_0}\pi\leq (1+\dfrac{16}{9}){A_0}\pi=\frac{25}{9}{A_0}\pi.
\end{equation}
Hence, we obtain from \eqref{eq:4-26-1} and \eqref{eq:4-26-2} that
\begin{equation}\label{eq:4-26-3}
\int_{{y_1}}^{{y_2}} {\frac{1}{{\sqrt {f(y)} }}} dy<\frac{25}{9} A_0\pi<\frac{4}{\sqrt{2}}A_0\pi.
\end{equation}
\noindent This completes the proof of the theorem \ref{theorem 4}.
\end{proof}
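The elementary constants entering \eqref{eq:4-26-1}--\eqref{eq:4-26-3} can be checked at a glance; the following few lines (ours) verify only this arithmetic.
\begin{verbatim}
import math

bound_n3   = 1 + math.sqrt(27 / 10)     # the explicit bound for n = 3
bound_n4up = 25 / 9                     # the bound used for n >= 4
target     = 4 / math.sqrt(2)           # 4/sqrt(2) = 2 sqrt(2)

assert bound_n3 < target and bound_n4up < target
# (n/(n-1))^{n/2} decreases in n, hence is at most (4/3)^2 = 16/9 for n >= 4
assert all((m / (m - 1)) ** (m / 2) <= 16 / 9 + 1e-12 for m in range(4, 200))
print(bound_n3, bound_n4up, target)
\end{verbatim}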
\noindent{\it Proof of the theorem \ref{theorem 3}}.
From the lemma \ref{lemma 1} and the lemma \ref{lemma 2}, we know that the area of a compact minimal rotational hypersurface $M^n$ in $S^{n+1}(1)$ with non-constant principal curvatures, other than $M^n(3,2)$ and $M^n(5,3)$, is greater than
\begin{equation}
4\times 2\pi\sigma_{n-1}\dfrac{\inf\limits_{a\in (0,a_0)}\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx}{K(a)}\,
\end{equation}
since the rotation number of $M^n(3,2)$ is $2$, the rotation number of $M^n(5,3)$ is $3$, the rotation number of $M^n(7,4)$ is $4$ and the rotation numbers of other hypersurfaces are greater than $4$.
We know from the lemma \ref{lemma 2} and the proof of the theorem \ref{theorem 1} that
\begin{equation}
\aligned
|M^n(7,4)|&=4\times 2\pi\sigma_{n-1}\dfrac{\int_{x_1}^{x_2}\frac{x^{n-\frac{3}{2}} \,}{\sqrt{x^{n-1}-x^n-a}} dx}{\frac{8}{7}\pi}\ \\
&>4(1-\frac{1}{\pi})\times\frac{\sqrt{2}}{\frac{8}{7}}\biggl|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})\biggl|\\
&>3|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})\biggl|\\
&>|M^n(3,2)|
\endaligned
\end{equation}
for some $a\in (0,a_0)$, and the area of any other hypersurface $M^n$, except $M^n(3,2)$, $M^n(5,3)$ and $M^n(7,4)$, satisfies
\begin{equation}
\aligned
|M^n|&>5(1-\frac{1}{\pi})\biggl|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})\biggl|\\
&>3|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})\biggl|\\
&>|M^n(3,2)|.
\endaligned
\end{equation}
Hence, the lowest value of area among all compact minimal rotational hypersurfaces with non-constant principal curvatures in the unit sphere $S^{n+1}(1)$ is the area of either $M^n(3,2)$ or $M^n(5,3)$.
\noindent This completes the proof of the theorem \ref{theorem 3}.
$$\eqno{\Box}$$
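The two numerical comparisons used above reduce to elementary estimates, which the following lines (ours) spell out.
\begin{verbatim}
import math

c = 1 - 1 / math.pi                           # the factor (1 - 1/pi)
lower_74   = 4 * c * math.sqrt(2) * 7 / 8     # lower bound for |M^n(7,4)| in
                                              # units of the Clifford area
lower_rest = 5 * c                            # lower bound for the remaining cases
assert lower_74 > 3 and lower_rest > 3
print(lower_74, lower_rest)                   # both exceed 3
\end{verbatim}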
According to the above theorems, we propose the following conjecture.
\noindent {\bf Conjecture}: {\it The lowest value of area among all compact minimal rotational hypersurfaces with non-constant principal curvatures in the unit sphere $S^{n+1}(1)$ is the area of $M^n$ with $3$-fold rotational symmetry and rotation number $2$.}
\section{Entropies of some special self-shrinkers}
In this section, we estimate the entropies of some special self-shrinkers as the application of the estimate of the areas.
An immersed hypersurface $X:M^n\rightarrow \mathbb{R}^{n+1}$ in the ($n+1$)-dimensional Euclidean space $\mathbb{R}^{n+1}$ is called a {\it self-shrinker} if it satisfies
\begin{equation}\label{eq:6-3-1}
H+\langle X,N\rangle=0,
\end{equation}
where $N$ is the unit normal vector of $X:M^n\rightarrow \mathbb{R}^{n+1}$, $H$ is the mean curvature.
From the definition of self-shrinkers, we know that if $X:M^n\rightarrow S^{n+1}(1)$ is a minimal rotational hypersurface, then $C(M^n)$, the {\it cone} over $M^n$, satisfies the self-shrinker equation \eqref{eq:6-3-1} in $\mathbb{R}^{n+2}$.
On the other hand, the {\it entropy} $\lambda(M^n)$ of self-shrinker $M^n$ can be defined as follows:
\begin{equation}
\lambda(M^n)=\dfrac{1}{(2\pi)^{n/2}}\int_{M^n}e^{-|X|^2/2}d\mu.
\end{equation}
Therefore, the entropy $\lambda(C(M^n))$ of the cone $C(M^n)$ over a compact minimal rotational hypersurface $M^n\subset S^{n+1}(1)$ in $\mathbb{R}^{n+2}$ is
\begin{equation}\label{eq:6-3-2}
\aligned
\lambda(C(M^n))&=\frac{1}{(2\pi)^{(n+1)/2}}|M^n|\int_0^{+\infty}t^ne^{-t^2/2}dt\\
&=\dfrac{1}{2}\pi^{-(n+1)/2}\Gamma(\frac{n+1}{2})|M^n|\\
&=\dfrac{1}{\sigma_n}|M^n|,
\endaligned
\end{equation}
where $|M^n|$ denotes the area of $M^n$, $\Gamma(x)=\int_0^{+\infty}t^{x-1}e^{-t}dt$ is the Gamma function, $\sigma_n$ denotes the $n$-area of $S^n(1)$.
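The identity $\lambda(C(M^n))=\frac{1}{\sigma_n}|M^n|$ in \eqref{eq:6-3-2} only uses the Gaussian radial integral and the value of $\sigma_n$; the following numerical sketch (ours, with the arbitrary normalization $|M^n|=1$) confirms it for several $n$.
\begin{verbatim}
import numpy as np
from scipy import integrate
from scipy.special import gamma

def lam_cone(area, n):
    """Entropy of the cone over M^n, computed from the Gaussian radial integral."""
    radial, _ = integrate.quad(lambda t: t**n * np.exp(-t**2 / 2.0), 0.0, np.inf)
    return area / (2.0 * np.pi) ** ((n + 1) / 2.0) * radial

for n in (2, 3, 5, 8):
    sigma_n = 2.0 * np.pi ** ((n + 1) / 2.0) / gamma((n + 1) / 2.0)  # area of S^n(1)
    print(n, np.isclose(lam_cone(1.0, n), 1.0 / sigma_n))
\end{verbatim}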
By a computation, we have
\begin{theorem}\label{theorem 5}
If $M^n$ is a compact minimal rotational hypersurface in $S^{n+1}(1)$, then the entropy $\lambda(C(M^n))$ of the cone $C(M^n)$ over $M^n$ in $\mathbb{R}^{n+2}$ satisfies either
$\lambda(C(M^n))=1$, or $\lambda(C(M^n))=\dfrac{2\pi\sigma_{n-1}\sqrt{a_0}}{\sigma_n}$, or
$\lambda(C(M^n))> \dfrac{4(\pi-1)\sigma_{n-1}\sqrt{a_0}}{\sigma_n}$, where $a_0=\frac{(n-1)^{n-1}}{n^n}$, $\sigma_n$ denotes the $n$-area of $S^n(1)$.
\end{theorem}
\begin{proof}
Combining the theorem \ref{theorem 1}, \eqref{eq:6-3-2} and using
\begin{equation}
\biggl|S^{1}(\sqrt{\frac{1}{n}})\times S^{n-1}(\sqrt{\frac{n-1}{n}})\biggl|=2\pi\sigma_{n-1}\sqrt{a_0},
\end{equation}
we can prove the theorem \ref{theorem 5}.
\end{proof}
\end{document} |
\begin{document}
\title{Macrorealism from entropic Leggett-Garg inequalities}
\author{A. R. Usha Devi}
\email{[email protected]}
\affiliation{Department of Physics, Bangalore University,
Bangalore-560 056, India}
\affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.}
\author{H. S. Karthik}
\affiliation{Raman Research Institute, Bangalore 560 080, India}
\author{Sudha}
\affiliation{Department of Physics, Kuvempu University, Shankaraghatta, Shimoga-577 451, India.}
\affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.}
\author{A. K. Rajagopal}\affiliation{Inspire Institute Inc., Alexandria, Virginia, 22303, USA.}
\affiliation{Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019, India.}
\affiliation{Department of Materials Science \& Engineering, Northwestern University, Evanston, IL 60208, USA.}
\date{\today}
\begin{abstract}
We formulate entropic Leggett-Garg inequalities, which place constraints on the statistical outcomes of temporal correlations of observables. The information theoretic inequalities are satisfied if {\em macrorealism} holds. We show that the quantum statistics underlying correlations between time-separated spin components of a quantum rotor mimics that of spin correlations in two spatially separated spin-$s$ particles sharing a state of zero total spin. This brings forth the violation of the entropic Leggett-Garg inequality by a rotating quantum spin-$s$ system in much the same manner as the entropic Bell inequality (\prl {\bf 61}, 662 (1988)) is violated by a pair of spin-$s$ particles forming a composite spin singlet state.
\end{abstract}
\pacs{03.65.Ta
03.65.Ud
}
\maketitle
Conflicting foundational features like non-locality~\cite{Bell} and contextuality~\cite{KS} mark how the quantum universe differs from the classical one. Non-locality rules out the view that spatially separated systems have their own objective properties prior to measurement and are not influenced by any local operations performed by the other parties. Violation of the Clauser-Horne-Shimony-Holt (CHSH)--Bell correlation inequality~\cite{CHSH} by entangled states reveals that local realism is untenable in the quantum scenario. On the other hand, quantum contextuality states that the measurement outcome of an observable depends on the set of compatible observables that are measured alongside it. In this sense, non-locality turns out to be a reflection of contextuality in spatially separated systems.
Yet another foundational concept of classical world that is at variance with the quantum description is {\em macrorealism}~\cite{LG}.
The notion of {\em macrorealism} rests on the classical world view that (i) physical properties of a macroscopic object
exist independent of the act of observation and (ii) measurements are non-invasive i.e., the measurement of an observable at any instant of time does not influence its subsequent evolution. Quantum predictions differ at a foundational level from these two contentions. In 1985, Leggett and Garg (LG)~\cite{LG} designed an inequality (which places bounds on certain linear combinations of temporal correlations of a dynamical observable) to test whether a single macroscopic object exhibits macrorealism or not. The Leggett-Garg correlation inequality is satisfied by all macrorealistic theories and is violated if quantum law governs. Debates on the emergence of macroscopic classical realm from the corresponding quantum domain continue and it is a topic of current experimental and theoretical research~\cite{Nature, Nature2, Brukner3, Brukner2, Brukner}.
Probabilities associated with measurement outcomes in the quantum framework are fundamentally different from those arising in the classical statistical scenario - and this is pivotal in initiating a multitude of debates on various contrasting implications in the two worlds~\cite{Fine,Fine2,Pitowski}. A deeper understanding of these foundational conflicts requires that they be investigated from as many independent perspectives as possible. The CHSH-Bell (LG) inequalities were originally formulated for dichotomic observables and they constrain certain linear combinations of correlation functions of spatially (temporally) separated states. However, there have been extensions of correlation Bell inequalities to arbitrary measurement outcomes~\cite{pop}. Information entropy too serves as a natural candidate to capture the puzzling features of quantum probabilities and it offers operational tests demarcating the two domains in an elegant, illustrative fashion~\cite{BC, Fritz2, Kaszlikowski}. The information entropic formulation is applicable to observables with any number of outcomes of measurements. Moreover, while the correlation inequalities define a convex polytope~\cite{Pitowski}, the entropic inequalities form a convex cone~\cite{Yeung}, bringing out their geometrically distinct features. Entropic tests thus generalize and strengthen the platform to understand the basic differences between quantum and
classical world view.
It was noticed quite early by Braunstein and Caves (BC) that interpreting correlations between two spatially separated EPR entangled particles in terms of Shannon information entropy results in a contradiction with local realism~\cite{BC}. They developed an information theoretic Bell inequality applicable to any pair of spatially separated systems and showed that the inequality is violated by two spatially separated spin-$s$ particles sharing a state of zero total angular momentum. More recently, Kurzy{\' n}ski et al.~\cite{Kaszlikowski} constructed an entropic inequality to investigate the failure of non-contextuality in a {\em single} quantum three-level system and they identified optimal measurements revealing violation of the inequality. Chaves and Fritz~\cite{Fritz} developed a more general entropic framework~\cite{Fritz2} to analyze local realism and contextuality in the quantum as well as the post-quantum scenario. Entropic inequalities provide, in general, a necessary but not sufficient criterion for local realism and non-contextuality~\cite{Fritz, Kaszlikowski}. It is shown that for the $n$-cycle scenario with dichotomic outcomes, entropic inequalities are also sufficient, i.e., the violations of entropic inequalities completely characterize non-local and contextual probabilities in this case~\cite{Chaves, note1}. Application of entropic inequalities to test contextuality in a four-level quantum system has been proposed in Ref.~\cite{pkp}.
It is highly relevant to address the question ``Does the macrorealistic tenet encoded in the form of a classical entropic inequality get defeated in the quantum realm?'' This issue gains increasing importance as questions on the role of quantum theory in biological molecular processes are being addressed in a rigorous manner and LG-type tests are significant in recognizing quantum effects in evolutionary biological processes~\cite{MW}. An entropic formulation of macrorealism generalizes the scope and applicability of such bench-mark investigations. In this paper we formulate entropic LG inequalities to investigate the notion of macrorealism of a single system. We show that the entropic inequality is violated by a spin-$s$ quantum rotor (prepared in a completely random state) in a manner similar to the information theoretic BC inequality for a counter propagating entangled pair of spin-$s$ particles in a spin-singlet state. To our knowledge, this is the first time that entropic considerations are applied to investigate macrorealism.
We begin with some basic elements of probabilities and the associated information content in order to develop the entropic LG inequality in the same spirit as it was formulated by BC~\cite{BC}. Consider a macrorealistic system in which $Q(t_i)$ is a dynamical observable at time $t_i$. Let the outcomes of measurements of the observable $Q(t_i)$ be denoted by $q_i$ and the corresponding probabilities by $P(q_i)$. In a macrorealistic theory, the outcomes $q_i$ of the observables $Q(t_i)$ at all instants of time pre-exist irrespective of their measurement; this feature is mathematically validated in terms of a joint probability distribution $P(q_1,q_2,\ldots )$ characterizing the statistics of the outcomes; the joint probabilities yield the marginals $P(q_i)$ of individual observations at time $t_i$. Further, measurement non-invasiveness implies that the act of observation of $Q(t_i)$ at an earlier time $t_i$ has no influence on its subsequent value at a later time $t_j>t_i$. This demands that the joint probabilities be expressed as a convex combination of products of probabilities $P(q_i\vert \lambda)$, averaged over a hidden variable probability distribution $\rho(\lambda)$~~\cite{Fine, Brukner}:
\begin{widetext}
\begin{eqnarray}
\label{ValidProb}
P(q_1,q_2,\ldots , q_n)&=&\sum_{\lambda}\, \rho(\lambda)\, P(q_1\vert \lambda)P(q_2\vert \lambda)\ldots P(q_n\vert \lambda),\\
\ \ 0\leq \rho(\lambda)\leq 1, \ \sum_\lambda \rho_\lambda=1; && 0\leq P(q_i\vert \lambda)\leq 1,\ \sum_{q_i}\, P(q_i\vert \lambda)=1. \nonumber
\end{eqnarray}
\end{widetext}
Joint Shannon information entropy associated with the measurement statistics of the observable at two different times $t_k, t_{k+l}$ is defined as, $H(Q_k,Q_{k+l})=-\sum_{q_k,q_{k+l}}\, P(q_k,q_{k+l})\, \log_2\, P(q_k,q_{k+l})$. The conditional information carried by the observable $Q_{k+l}$ at time $t_{k+l}$, given that it had assumed the values $Q_k=q_k$ at an earlier time is given by,
$H(Q_{k+l}\vert Q_k=q_k)=-\sum_{q_{k+l}}\, P(q_{k+l}\vert q_{k})\, \log_2\, P(q_{k+l}\vert q_{k})$, where $P(q_{k+l}\vert q_{k})=P(q_{k}, q_{k+l})/P(q_k)$ denotes the conditional probability. The mean conditional information entropy is given by
\begin{eqnarray}
H(Q_{k+l}\vert Q_{k})&=&\sum_{q_k}\, P(q_k)\, H(Q_{k+l}\vert Q_k=q_k) \nonumber \\
&=& H(Q_k,Q_{k+l})-H(Q_k).
\end{eqnarray}
The classical Shannon information entropies obey the inequality~\cite{BC}:
\begin{equation}
\label{entin}
H(Q_{k+l}\vert Q_{k})\leq H(Q_{k+l})\leq H(Q_k,Q_{k+l}),
\end{equation}
the left side of which expresses that removing a condition never decreases the information, while the right-side inequality means that two variables never carry less information than that carried by one of them. Extending (\ref{entin}) to three variables and using the relation $H(Q_k, Q_{k+l})=H(Q_{k+l}\vert Q_{k})+H(Q_{k})$, we obtain,
\begin{widetext}
\begin{eqnarray}
\label{h3}
H(Q_k, Q_{k+m})&\leq& H(Q_k, Q_{k+l}, Q_{k+m})=H(Q_{k+m}\vert Q_{k+l}, Q_{k})+ H(Q_{k+l}\vert Q_{k})+H(Q_{k}) \nonumber \\ &&\Longrightarrow
H(Q_{k+m}\vert Q_k)\leq H(Q_{k+m}\vert Q_{k+l})+ H(Q_{k+l}\vert Q_{k}).
\end{eqnarray}
\end{widetext}
Here, the first line follows from the chain rule for entropies, and the derivation is analogous to that given by BC~\cite{BC}.
The entropic inequality (\ref{h3}) reflects the fact that the information content associated with the values of the observable at three different times $t_k<t_{k+l}<t_{k+m}$ can never be smaller than the information about it at two instants of time. Moreover, the existence of a grand joint probability distribution $P(q_1,q_2,q_3)$ of the variables $Q_1, Q_2, Q_3$, consistent with a given set of marginal probability distributions $P(q_1, q_2)$, $P(q_2, q_3)$, $P(q_1, q_3)$ of pairs of observables, imposes non-trivial conditions on the associated Shannon information entropies. Violation of the inequality points towards the lack of a legitimate grand joint probability distribution for all the measured observables, such that the family of probability distributions associated with measurement outcomes of pairs of observables belongs to it as marginals~\cite{Fritz2, note}.
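As a simple illustration of the constraint (\ref{h3}) (this numerical sketch is ours and is not part of the argument), one may sample arbitrary classical joint distributions $P(q_1,q_2,q_3)$ and check that the associated Shannon entropies always obey $H(Q_3\vert Q_1)\leq H(Q_3\vert Q_2)+H(Q_2\vert Q_1)$; the number of outcomes and the sample size below are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def xlog2x(p):
    return np.where(p > 0, p * np.log2(np.where(p > 0, p, 1.0)), 0.0)

def cond_H(p_ab):      # H(A|B) = H(A,B) - H(B); joint indexed as p[a, b]
    return -xlog2x(p_ab).sum() + xlog2x(p_ab.sum(axis=0)).sum()

d = 4                  # number of outcomes per observable
for _ in range(1000):
    p = rng.random((d, d, d))
    p /= p.sum()       # random grand joint distribution P(q1, q2, q3)
    p12, p23, p13 = p.sum(axis=2), p.sum(axis=0), p.sum(axis=1)
    lhs = cond_H(p13.T)                   # H(Q3|Q1)
    rhs = cond_H(p23.T) + cond_H(p12.T)   # H(Q3|Q2) + H(Q2|Q1)
    assert lhs <= rhs + 1e-9
print("entropic constraint holds for all sampled classical models")
\end{verbatim}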
The same reasoning, which led to the three-term entropic inequality (\ref{h3}), can be extended to construct an entropic inequality for $n$ consecutive measurements $Q_1, Q_2, \ldots, Q_n$ at time instants $t_1<t_2<\ldots <t_n$:
\begin{widetext}
\begin{equation}
\label{hn}
H(Q_n\vert Q_1)\leq H(Q_n\vert Q_{n-1})+H(Q_{n-1}\vert Q_{n-2})+\ldots +H(Q_{2}\vert Q_{1}).
\end{equation}
\end{widetext}
The macrorealistic information underlying the statistical outcomes of the observable at $n$ different times must be consistent with the information associated with pairwise non-invasive measurements as given in (\ref{hn}).
Note that for even values of $n$, there is a one-to-one correspondence between the entropic inequality (\ref{hn}) of a single system and the information theoretic BC inequality~\cite{BC} for two spatially separated parties (Alice and Bob). More specifically, let us consider $n=4$ in (\ref{hn}) and associate the temporal observables $Q_i$ with Alice's (Bob's) observables $A',\ A$ ($B',\ B$) as $Q_1\leftrightarrow B$, $Q_2\leftrightarrow A'$, $Q_3\leftrightarrow B'$, $Q_4\leftrightarrow A$ to obtain the BC inequality~\cite{BC} for a set of four correlations: $H(A\vert B)\leq H(A\vert B')+ H(B'\vert A')+H(A'\vert B)$, which is satisfied by any {\em local realistic} model of spatially separated pairs. It may be noted that Eq.~(\ref{ValidProb}) is essentially analogous to a local hidden variable model (the Bell scenario for spatially separated systems) as well as to a non-contextual model, while the interpretation here is towards macrorealism. Moreover, we emphasize that the logical reasoning in formulating the entropic LG inequalities (\ref{hn}) is identical to that of BC~\cite{BC}, which indeed offers a unified approach to address non-locality, contextuality and also non-macrorealism.
We proceed to show that LG entropic inequality is violated by a quantum spin-$s$ system. Consider a quantum rotor prepared initially in a maximally mixed state
\begin{equation}
\label{rhoin}
\rho=\frac{1}{2s+1}\sum_{m=-s}^{s}\ \vert s,m\rangle\langle s,m\vert =\frac{I}{2s+1}
\end{equation}
where $\vert s, m\rangle$ are the simultaneous eigenstates of the squared spin operator $S^2=S_x^2+S_y^2+S_z^2$ and the $z$-component of spin $S_z$ (with respective eigenvalues $ s(s+1)\, \hbar^2$ and $m\hbar$); $I$ denotes the $(2s+1)\times (2s+1)$ identity matrix. We consider the Hamiltonian
\begin{equation}
H= \omega\, S_y,
\end{equation}
resulting in the unitary evolution $U(t)=e^{-i\omega t\, S_y/\hbar}$ of the system (which corresponds to a rotation about the $y$-axis by an angle $\omega\, t$). We choose $z$-component of spin $Q(t)=S_z(t)=U^\dag(t)\, S_z\, U(t)$ as the dynamical observable for our investigation of macrorealism. Let us suppose that the observable $Q_k=S_z(t_k)$ takes the value $m_k$ at time $t_k$. Correspondingly, at a later instant of time $t_{k+l}$ if the spin component $S_z(t_{k+l})$ assumes the value $m_{k+l}$, the quantum mechanical joint probability is given by~\cite{Brukner}
\begin{equation}
P(m_k, m_{k+l})=P_{m_k}(t_k)\, P(m_{k+l}, t_{k+l}\vert m_{k}, t_{k}).
\end{equation}
Here, $P_{m_k}(t_k)={\rm Tr}[\rho\, \Pi_{m_k}(t_k)]$ is the probability of obtaining the outcome $m_k$ at time $t_k$,
$P(m_{k+l}, t_{k+l}\vert m_{k}, t_{k})={\rm Tr}[\Pi_{m_k}(t_k)\rho\Pi_{m_k}(t_k)\, \Pi_{m_{k+l}}(t_{k+l})]/P_{m_k}(t_k)$ denotes the conditional probability of obtaining the outcome $m_{k+l}$ for the spin component $S_z$ at time $t_{k+l}$, given that it had taken the value $m_k$ at an earlier time $t_k$;
$\Pi_{m}(t)=U^\dag(t)\, \vert s, m\rangle\langle s, m\vert\, U(t)$ is the projection operator measuring the outcome $m$ for the spin component at time $t$. For the maximally mixed initial state (\ref{rhoin}), we obtain the quantum mechanical joint probabilities as,
\begin{eqnarray}
\label{cp}
P(m_k, m_{k+l})&=&\frac{1}{2s+1}\, {\rm Tr}[\Pi_{m_k}(t_k)\, \Pi_{m_{k+l}}(t_{k+l})]\nonumber \\
&=&\frac{1}{2s+1}\, \vert \langle\, s, m_{k+l}\vert e^{-i\omega (t_{k+l}-t_k)\, S_y/\hbar}\,\vert s, m_{k}\rangle\vert^2\nonumber \\
&=& \frac{1}{2s+1}\, \vert\, d^{s}_{m_{k+l}\, m_k}(\theta_{kl})\vert^2
\end{eqnarray}
where $d^{s}_{m' m}(\theta_{kl})=\langle s, m'\vert e^{-i\theta_{kl}\, S_y/\hbar}\vert s, m\rangle$ are the matrix elements of the $(2s+1)$-dimensional irreducible representation of rotation~\cite{Rose} about the $y$-axis by an angle $\theta_{kl}=\omega (t_{k+l}-t_k)$. The marginal probability of the outcome $m_k$ for the observable $Q_k$ is readily obtained by making use of the unitarity property of the $d$ matrices:
$P(m_k)=\sum_{m_{k+l}}\, P(m_k, m_{k+l})=\frac{1}{2s+1}\, \sum_{m_{k+l}}\vert\, d^{s}_{m_{k+l}\, m_k}(\theta_{kl})\vert^2=\frac{1}{2s+1}.$
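As a quick illustration for the lowest spin $s=1/2$ (using $\vert d^{1/2}_{m' m}(\theta)\vert^2=\cos^{2}(\theta/2)$ for $m'=m$ and $\sin^{2}(\theta/2)$ for $m'\neq m$~\cite{Rose}), the joint probabilities (\ref{cp}) reduce to
\[
P(m_k, m_{k+l})=\frac{1}{2}\cos^{2}(\theta_{kl}/2) \quad (m_{k+l}=m_k), \qquad
P(m_k, m_{k+l})=\frac{1}{2}\sin^{2}(\theta_{kl}/2) \quad (m_{k+l}=-m_k),
\]
with each marginal equal to $1/2$, in agreement with the general expression above.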
Clearly, the temporal correlation probability (\ref{cp}) of the quantum rotor is similar to the quantum mechanical pair probability~\cite{BC}
\begin{eqnarray}
P(m_a, m_b)&=&\left\vert \left[_{\hat{a}}\langle s,m_a\vert \otimes\, _{\hat{b}}\langle s,m_b \vert \right]\ \vert \Psi_{AB}\rangle \right\vert^2 \nonumber \\
&=& \frac{1}{2s+1}\, \vert d^s_{m_a,-m_b}(\theta_{ab})\vert^2
\end{eqnarray}
that Alice's measurement of the spin component $\vec{S}\cdot \hat{a}$ yields the value $m_a$ and Bob's measurement of $\vec{S}\cdot \hat{b}$ results in the outcome $m_b$ in a spin singlet state $\vert \Psi_{AB}\rangle=\frac{1}{\sqrt{2s+1}}\, \sum_{m=-s}^{s}\, (-1)^{s-m}\ \vert s,m\rangle \otimes \vert s,-m\rangle$ of a spatially separated pair of spin-$s$ particles. (Here $\theta_{ab}$ is the angle between the unit vectors $\hat{a}$ and $\hat{b}$.) In other words, the quantum statistics of temporal correlations in a single spin-$s$ rotor mimics that of spatial correlations in an entangled counter-propagating pair of spin-$s$ particles.
Let us consider measurements at equidistant time intervals $\Delta t=t_{k+1}-t_k, \ k=1,2,\ldots, n-1$, and denote $\theta=(n-1)\, \omega\, \Delta t$. The quantum mechanical information entropy depends only on the time separation, specified by the angle $\theta$, and is given by
\begin{widetext}
\begin{eqnarray}
H(Q_k\vert Q_{k+1})\equiv H[\theta/(n-1)]&=&-\frac{1}{2s+1}\, \sum_{m_k, m_{k+1}}\, \vert d^s_{m_{k+1},m_{k}}[\theta/(n-1)]\vert^2 \log_2 \vert d^s_{m_{k+1},m_{k}}[\theta/(n-1)]\vert^2.
\end{eqnarray}
The $n$-term entropic inequality (\ref{hn}) for observations at equidistant time steps assumes the form,
\begin{eqnarray}
\label{ed}
(n-1)\, H[\theta/(n-1)]-H(\theta)&=& -\frac{1}{2s+1}\sum_{m_k, m_{k+1}}\left[\,(n-1) \, \vert d^s_{m_{k+1},m_{k}}[\theta/(n-1)]\vert^2\, \log_2 \vert d^s_{m_{k+1},m_{k}}[\theta/(n-1)]\vert^2\right. \nonumber \\
&& \left. - \vert d^s_{m_{k+1},m_{k}}(\theta)\vert^2\, \log_2 \vert d^s_{m_{k+1},m_{k}}(\theta)\vert^2\right]\geq 0
\end{eqnarray}
\end{widetext}
We introduce the information deficit, measured in units of $\log_2 (2s+1)$ bits, as
\begin{equation}
\label{dn}
{\cal D}_n(\theta)= \frac{(n-1)\, H[\theta/(n-1)]-H(\theta)}{\log_2 (2s+1)}
\end{equation}
so that the violation of the LG entropic inequality (\ref{ed}) is implied by negative values of ${\cal D}_n(\theta)$.
The units of $\log_2 (2s+1)$ bits for the quantity ${\cal D}_n(\theta)$ amount to choosing the base of the logarithm for evaluating the entropies of a spin-$s$ system to be $(2s+1)$. For a spin-1/2 rotor, ${\cal D}_n(\theta)$ is thus expressed in bits.
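Continuing the $s=1/2$ illustration, and writing $h(u)=-u\log_2 u-(1-u)\log_2(1-u)$ for the binary entropy (a shorthand introduced only here), one finds $H[\phi]=h\!\left(\cos^{2}(\phi/2)\right)$, so that
\[
{\cal D}_3(\theta)=2\, h\!\left(\cos^{2}(\theta/4)\right)-h\!\left(\cos^{2}(\theta/2)\right),
\]
which indeed becomes negative over a range of $\theta$, as displayed in Fig.~1(a).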
In Fig.~1, we have plotted the information deficit ${\cal D}_n(\theta)$ for $n=3$ (Fig.~1a) and $n=6$ (Fig.~1b) as a function of $\theta=(n-1)\, \omega\, \Delta t$ for the spin values $s=1/2, 1, 3/2$ and 2. The results illustrate that the information deficit assumes negative values, though both the range of violation (i.e., the values of the angle $\theta$ for which the violation occurs) and its strength (the maximum negative value of ${\cal D}_n(\theta)$) reduce~\cite{noteusha} with increasing spin $s$. This implies the emergence of macrorealism for the dynamical evolution of a quantum rotor in the limit of large spin $s$. It may be noted that Kofler and Brukner~\cite{Brukner2} had shown that the violation of the correlation LG inequality -- corresponding to the measurement outcomes of a dichotomic parity observable in the example of a quantum rotor -- persists even for large values of spin if the eigenvalues of spin can be experimentally resolved by sharp quantum measurements. However, under the restriction of coarse-grained measurements the classical realm emerges in the large spin limit.
\begin{figure}
\caption{(Color online) LG information deficit ${\cal D}_n(\theta)$ as a function of $\theta=(n-1)\,\omega\,\Delta t$ for (a) $n=3$ and (b) $n=6$, for the spin values $s=1/2,\,1,\,3/2$ and $2$.}
\end{figure}
Macrorealism requires that a consistently larger information content $H[\theta/(n-1)]$ be carried by the system when the number of observations $n$ is increased and small time steps are employed; the quantum situation, however, does not comply with this constraint. More specifically, in the classical premise, knowing the observable at almost all time instants provides more information content, whereas the quantum realm yields less information with a larger number of observations. To see this explicitly, consider the limit $n\rightarrow \infty$ of infinitesimal time steps $\omega\, \Delta t\rightarrow 0$ (at fixed $\theta$). Quantum statistics leads to vanishingly small information, i.e., $H[\theta/(n-1)]\rightarrow 0$ -- a signature of the quantum Zeno effect. In this limit, the information deficit (see (\ref{dn})) ${\cal D}_n(\theta)\rightarrow \frac{-H(\theta)}{\log_2 (2s+1)}$ is negative -- thus violating the entropic LG inequality. The entropic test clearly brings forth the severity of the macrorealistic demand of {\em knowing} the observable in a non-invasive manner under such minuscule time scale observations.
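In the $s=1/2$ illustration above, this limiting value reads
\[
\lim_{n\rightarrow\infty}{\cal D}_n(\theta)=-h\!\left(\cos^{2}(\theta/2)\right),
\]
which is strictly negative whenever $\theta$ is not an integer multiple of $\pi$.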
In conclusion, we have formulated an entropic LG inequality, which places bounds on the amount of information associated with non-invasive measurements of a macroscopic observable. The entropic formulation can be applied to arbitrary observables -- not necessarily dichotomic ones -- and it puts to test macrorealism, i.e., the combined demand of the pre-existence of definite values of the measurement outcomes of a given dynamical observable at different instants of time, together with the assumption that the act of observation at an earlier instant does not influence the subsequent evolution. The information entropic framework thus provides a unified approach to test local realism, non-contextuality and macrorealism.
The classical notion of macrorealism demands that the statistical outcomes of measurements of an observable at consecutive time instants originate from a valid grand joint probability, presumably of the form (\ref{ValidProb}). The non-existence of a legitimate joint probability, such that the family of probability distributions associated with the measurement outcomes of every pair of observables belongs to it as marginals, is reflected in the violation of the entropic test. The violation also brings forth the fact that more information is associated with the knowledge of the observable at more instants of time in the classical macrorealistic realm -- whereas a larger number of observations corresponds to less information in the quantum case.
In order to demonstrate the violation of the entropic inequality, we considered the dynamical evolution of a quantum spin system prepared initially in a maximally mixed state. We have demonstrated that the entropic violation in a quantum rotor system is similar to that of a spatially separated pair of spin-$s$ particles sharing a state of total spin zero~\cite{BC}. Further, we have illustrated that the information content of the rotor grows with increasing spin $s$, becoming consistent with the requirements of macrorealism.
We thank T. S. Mahesh and Hemant Katiyar for sharing their experimental results through private communication and for several insightful discussions.
\noindent {\em Note added}: After submission of this paper, Katiyar et al.~\cite{Hemant} have reported an experimental violation of entropic LG inequalities in an ensemble of spin-1/2 nuclei using nuclear magnetic resonance (NMR) techniques, by recording negative values of the information deficit ${\cal D}_3$ -- in striking agreement with our theoretical prediction. Further, they have demonstrated that the experimentally extracted three-time joint probabilities do not contain all the pairwise probabilities as marginals -- which reflects the failure of the entropic test (see \cite{note}).
\end{document} |
\begin{document}
\begin{abstract}
We give necessary and sufficient conditions on an Ore extension $A[x;\sigma,\delta]$, where $A$ is a finite dimensional algebra over a field $\mathbb{F}$, for being a Frobenius extension of the ring of commutative polynomials $\mathbb{F}[x]$. As a consequence, as the title of this paper highlights, we provide a negative answer to a problem stated by Caenepeel and Kadison.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
Frobenius extensions were introduced by Kasch \cite{Kasch:1954,Kasch:1961}, and by Nakayama and Tsuzuku \cite{Nakayama/Tsuzuku:1959,Nakayama/Tsuzuku:1960}, as a generalization of the well-known notion of Frobenius algebra. Of course, the underlying idea was to recover the duality theory of Frobenius algebras in a more general setting. The notion of separable extension comes from the generalization of the well-known notion of separable field extension. The classical definition of separable ring extension is due to Hirata and Sugano in \cite{Hirata/Sugano:1966}. Both notions, Frobenius and separable, have been extended to more general frameworks in category theory.
As is explained in the Introduction of \cite{Caenepeel/Kadison:2001}, deep connections between separable and Frobenius extensions were found from the very beginning. For instance, Eilenberg and Nakayama show in \cite{Eilenberg/Nakayama:1955} that finite dimensional semisimple algebras over a field are symmetric, hence Frobenius. A key result to extend this to algebras over commutative rings is due to Endo and Watanabe; concretely, they show that separable, finitely generated, faithful and projective algebras over a commutative ring are symmetric \cite[Theorem 4.2]{Endo/Watanabe:1967}. Their ideas were connected to separable extensions, as defined in \cite{Hirata/Sugano:1966}, by Sugano, who shows that separable and centrally projective extensions are Frobenius, see \cite[Theorem 2]{Sugano:1970}. However, as Caenepeel and Kadison say, ``it is implicit in the literature that there are several cautionary examples showing separable extensions are not always Frobenius extensions in the ordinary untwisted sense''. They provide one of these examples in \cite[\S 4]{Caenepeel/Kadison:2001} under the stronger hypothesis that the extension is split, but the Frobenius property is lost because the provided extension is not finitely generated. Split extensions are naturally considered since separability and splitting can be viewed as particular cases of the notion of separable module introduced in \cite{Sugano:1971}, see also \cite{Kadison:1996}. Biseparable extensions are therefore considered because they contain both notions of separable and split extensions under the same module theoretic approach. Biseparable extensions are finitely generated and projective, hence the example they provide is not a counterexample to their main question: ``Are biseparable extensions Frobenius?''
This problem has come up again recently in the article \cite{Kadison:2019}, where it is asked whether additional convenient equations are always satisfied: this is the same as asking if a biseparable bimodule is Frobenius. There are arguments in the monograph \cite{Kadison:1999} as evidence for thinking this might be true, as well as the weight of all classical examples.
In this paper we develop some techniques based on the Ore extensions introduced in \cite{Ore:1933} to provide a counterexample to the previous question. Our example also gives a negative answer to the same question when Frobenius extensions are replaced by Frobenius extensions of the second kind, as introduced by Nakayama and Tsuzuku in \cite{Nakayama/Tsuzuku:1960}.
This paper is structured as follows. In section \ref{sec:preliminaires}, we recall the precise definitions of Frobenius and biseparable extensions, and we state the main question we are going to answer. In section \ref{Frobenius} Frobenius extensions are lifted along Ore extensions, while similar results are obtained in section \ref{Biseparable} for biseparable extensions. Finally, in section \ref{sec:Example} the full counterexample is built.
\section{Preliminaries}\label{sec:preliminaires}
We recall the notions of Frobenius, separable and split extensions. All along the paper $B$ and $C$ are arbitrary unital rings, whilst we reserve the letter $A$ for denoting an algebra over a field $\mathbb{F}$. Following, for instance, \cite{Nakayama/Tsuzuku:1959}, a unital ring extension $C \subseteq B$ is said to be Frobenius if $B$ is a finitely generated projective right $C$-module and there exists an isomorphism $B \cong B^*=\operatorname{Hom}(B_C,C_C)$ of \(C-B\)-bimodules. Here, by $\operatorname{Hom}(B_C,C_C)$, we denote the set of morphisms of right $C$-modules from $B$ to $C$. The additive group $B^*$ is endowed with the standard \(C-B\)-bimodule structure given by $(c \chi b)(u)= c (\chi(bu))$ for any $\chi\in B^*$, $c\in C$ and $b,u\in B$.
The notion of a Frobenius extension is right-left symmetric as observed in \cite[\S 1, page 11]{Nakayama/Tsuzuku:1959}, i.e. \(C \subseteq B\) is Frobenius if \(B\) is a finitely generated projective left \(C\)-module and there exists an isomorphism \(B \cong {}^*B\) of \(B-C\)-bimodules, where \({}^*B = \operatorname{Hom}({}_CB,{}_CC)\) is a \(B-C\)-bimodule in an analogous way.
This is a generalization of the well-known notion of Frobenius algebra over a field, namely, a finite dimensional $\mathbb{F}$-algebra $A$ is Frobenius if the following equivalent conditions hold:
\begin{enumerate}
\item there exists an isomorphism of right (or left) $A$-modules $A \cong A^*$ \label{FrobAlgIso}
\item there exists an associative and non-degenerate $\mathbb{F}$-bilinear form $\langle -,-\rangle:A\times A\to \mathbb{F}$ \label{FrobAlgBilin}
\item there exists a linear functional $\varepsilon: A \to \mathbb{F}$ whose kernel does not contain a non zero right (or left) ideal.\label{FrobAlgLin}
\end{enumerate}
\begin{remark}\label{FrobAlg}
The bijection between Frobenius forms (\ref{FrobAlgBilin}) and Frobenius functionals (\ref{FrobAlgLin}) on $A$ is as follows. If $\langle -,-\rangle: A \times A \to \mathbb{F}$ is a Frobenius form, then the rule $\varepsilon(a) = \langle 1,a \rangle$ for any $a\in A$ defines a Frobenius functional $\varepsilon:A\to \mathbb{F}$. Conversely, if $\varepsilon:A\to \mathbb{F}$ is a Frobenius functional, set $\langle a,b\rangle=\varepsilon(ab)$ for any $a,b\in A$ in order to get a Frobenius form.
The correspondence between Frobenius functionals (\ref{FrobAlgLin}) and left $A$-isomorphisms (\ref{FrobAlgIso}) is given as follows. For any Frobenius functional $\varepsilon$, we may define $\alpha:A\to A^*$ as
$\alpha(a)(b) = \varepsilon(ab)$
for any $a,b\in A$, which becomes a left $A$-isomorphism. Conversely, for any left $A$-isomorphism $\alpha:A\to A^*$, the rule $\varepsilon(a)=\alpha(a)(1)$ for any $a\in A$ provides a Frobenius functional $\varepsilon$. See \cite[Theorem 3.15]{Lam:1999} for full details. In particular, for each \(\mathbb{F}\)-basis \(\{a_1, \dots, a_r\}\) of \(A\) there exists an \(\mathbb{F}\)-basis \(\{b_1, \dots, b_r\}\) of \(A\) such that \(\{\alpha(b_1), \dots, \alpha(b_r)\}\) is the dual basis of \(\{a_1, \dots, a_r\}\), i.e.
\begin{equation}\label{dualbasis}
\varepsilon(b_j a_i) = \alpha(b_j)(a_i) = \delta_{ij}.
\end{equation}
\end{remark}
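A classical example, recalled only for orientation (the algebra $\mathcal{M}_2(\mathbb{F}_8)$ of Section \ref{sec:Example} is of this type): the matrix algebra $\mathcal{M}_n(\mathbb{F})$ is a Frobenius algebra with the trace as Frobenius functional, since the associated bilinear form
\[
\langle X,Y\rangle=\operatorname{tr}(XY), \qquad X,Y\in \mathcal{M}_n(\mathbb{F}),
\]
is associative and non-degenerate; equivalently, the kernel of the trace contains no nonzero one-sided ideal.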
Following \cite{Hirata/Sugano:1966}, the extension \(C \subseteq B\) is called separable if the canonical multiplication map
\[
\begin{split}
\mu : B \otimes_C B &\to B \\
b_1 \otimes b_2 &\mapsto b_1 b_2
\end{split}
\]
splits as a morphism of \(B\)-bimodules, i.e. there exists \(p \in B \otimes_C B\) such that \(bp = pb\) for all \(b \in B\) and \(\mu(p) = 1\). The splitting map is therefore determined by \(1 \mapsto p\).
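For instance, for the matrix algebra $\mathcal{M}_n(\mathbb{F})$ over $\mathbb{F}$ one may take (a standard choice, recorded here because a two-by-two instance of it is used in Section \ref{sec:Example})
\[
p=\sum_{i=1}^n e_{i1}\otimes_{\mathbb{F}} e_{1i},
\]
where the $e_{ij}$ are the matrix units: clearly $\mu(p)=\sum_i e_{ii}=1$, and $e_{kl}\,p = e_{k1}\otimes e_{1l} = p\,e_{kl}$ for all $k,l$.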
Finally, \(C \subseteq B\) is called split if the inclusion map \(C \to B\) splits as a morphism of \(C\)-bimodules, i.e. there exists a \(C\)-bimodule morphism \(\xi : B \to C\) such that \(\xi(1) = 1\).
In \cite[Definition 2.4]{Caenepeel/Kadison:2001}, the notion of a separable module is extended to the concept of biseparable module. When particularizing to ring extensions, \cite[Lemma 3.3]{Caenepeel/Kadison:2001} says that \(C \subseteq B\) is biseparable if one of the following equivalent conditions holds:
\begin{enumerate}
\item $B$ is biseparable as \(B-C\)-bimodule and finitely generated projective as left $C$-module.
\item $B$ is biseparable as \(C-B\)-bimodule and finitely generated projective as right $C$-module.
\item $B$ is biseparable as \(B-C\)-bimodule and as \(C-B\)-bimodule.
\item $C \subseteq B$ is split, separable and finitely generated projective as left $C$-module and as right $C$-module.
\end{enumerate}
Hence, motivated by the arguments provided in the Introduction, the following question is stated in \cite{Caenepeel/Kadison:2001}:
\begin{problem}\label{problem}\cite[Problem 3.5]{Caenepeel/Kadison:2001}\label{theproblem}
Are biseparable extensions Frobenius?
\end{problem}
The main aim of this paper is to build an example of a ring extension which is biseparable and not Frobenius, giving a negative answer to Problem \ref{problem}. Throughout the paper we assume that $A$ is a finite dimensional $\mathbb{F}$-algebra of dimension $r$. Let us also denote by $\sigma:A \to A$ an algebra $\mathbb{F}$-automorphism and by $\delta: A \to A$ a $\sigma$-derivation on $A$, i.e. \(\delta(ab) = \sigma(a) \delta(b) + \delta(a) b\) for all \(a,b \in A\). We denote by $R$ the ring of (commutative) polynomials $\mathbb{F}[x]$ and by $S$ the Ore extension $A[x;\sigma,\delta]$, that is, the ring of polynomials with coefficients in $A$ written on the left whose product is twisted by the rule $xa=\sigma(a)x+\delta(a)$ for any $a\in A$. This notation is fixed throughout the rest of the paper.
We give conditions on \(\sigma\) and \(\delta\) in order to get that \(R \subseteq S\) inherits the corresponding properties (separable, split, Frobenius) from \(\mathbb{F} \subseteq A\). A precise construction of \(A\), \(\sigma\) and \(\delta\) will lead to the counterexample.
\section{Lifting Frobenius extensions}\label{Frobenius}
Given $a\in A$, $n\geq 0$ and $0\leq i\leq n$, we denote by $N_i^n(a)$ the coefficient of degree $i$ when multiplying $x^n$ on the right by $a$ in $S$. That is to say,
\begin{equation}\label{N}
x^na=\sum_{i=0}^nN_i^n(a)x^{i},
\end{equation}
and, for $\sum_{i= 0}^n g_ix^i \in S$,
\begin{equation}\label{ga}
\left(\sum_{i = 0}^ng_ix^i\right)a = \sum_{i=0}^n \left(\sum_{k= i}^n g_kN_i^k(a)\right)x^i.
\end{equation}
We may then consider \(\mathbb{F}\)-linear operators $N_i^n:A \to A$ for every $i$ and $n$ with $0\leq i\leq n$. If we set $N_{i}^n=0$ whenever $i<0$ or $i > n$, then we obtain inductively
\begin{equation}\label{Nnmas1}
N_i^{n+1} = \sigma N_{i-1}^n + \delta N_i^n.
\end{equation}
These maps were introduced in \cite{Lam/Leroy:1988}, where \(N_i^n\) is denoted by \(f_i^n\).
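For small $n$ these operators can be written down directly from the twist rule and \eqref{Nnmas1}; for instance,
\[
xa=\sigma(a)x+\delta(a), \qquad x^2a=\sigma^2(a)x^2+(\sigma\delta+\delta\sigma)(a)x+\delta^2(a),
\]
so that $N^1_1=\sigma$, $N^1_0=\delta$, $N^2_2=\sigma^2$, $N^2_1=\sigma\delta+\delta\sigma$ and $N^2_0=\delta^2$.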
The ring extension $R \subseteq S$ makes $S$ free of finite rank both as a left and as a right $R$--module. More precisely, we have the following result.
\begin{lemma}\label{lemma 1a}
Let $\{a_1,\ldots,a_r\}$ be an $\mathbb{F}$-basis of $A$. The following statements hold.
\begin{enumerate}
\item $\{a_1,\ldots ,a_r\}$ is a right basis of $S$ over $R$.
\item $\{a_1,\ldots,a_r\}$ is a left basis of $S$ over $R$.
\end{enumerate}
\end{lemma}
\begin{proof}
\emph{(1)} This is an easy computation.
\emph{(2)} It is well known that \(S^{op} = A^{op}[x;\sigma^{-1},-\delta \sigma^{-1}]\) (see e.g. \cite[page 39, Exercise 2R]{Goodearl/Warfield:2004}). Now, apply part (1) to $S^{op}$.
\end{proof}
By $\{a_1^*, \dots, a_r^* \}$ we will denote the basis of the left $R$--module $S^*$ dual to an \(R\)-basis $\{a_1, \dots, a_r \}$ of \(S\) as right \(R\)-module, determined by the condition $a_i^*(a_j) = \delta_{ij}$.
The aim of this section is to characterize when $R \subseteq S$ is a Frobenius ring extension in terms of the $\sigma$--derivation $\delta$ acting on $A$. The key result to get such a characterization is the following theorem.
\begin{theorem}\label{semiFrobext}
There exists a bijective correspondence between the following sets.
\begin{enumerate}
\item Frobenius functionals on the $\mathbb{F}$-algebra $A$.
\item Right $S$-isomorphisms from $S$ to $S^*$.
\end {enumerate}
\end{theorem}
\begin{proof}
Let $\varepsilon:A \rightarrow \mathbb{F}$ be a Frobenius functional on $A$.
To define a right $S$--linear map $\alpha_\varepsilon:S\to S^*$ we just need to specify $\alpha_\varepsilon (1) \in S^*$. For every $f=\sum_i f_ix^i\in S$, set
\[
\alpha_\varepsilon(1)(f)=\sum_i \varepsilon(f_i)x^i.
\]
This map is indeed right $R$--linear, since
\[
\alpha_\varepsilon(1)(f x ) = \sum_{i} \varepsilon(f_i) x^{i+1} = \alpha_\varepsilon(1)(f)x.
\]
Note that, by the right $S$--module structure of $S^*$, one has, for every $f, g \in S$,
\[
\alpha_\varepsilon (f)(g) = \alpha_\varepsilon (1) (fg) = \alpha_\varepsilon(fg)(1).
\]
Let $f=\sum_{i=0}^nf_ix^{i}\in S$ with $f_n\neq 0$ such that $\alpha_\varepsilon(f)=0$. Then, for any $b\in A$, we get from \eqref{ga} that
\[
0=\alpha_\varepsilon(f)(b)= \alpha_\varepsilon(fb)(1) = \sum_{i=0}^n\varepsilon\left(\sum_{k=i}^nf_kN_i^k(b)\right)x^i.
\]
In particular, $\varepsilon(f_nN_n^n(b))=\varepsilon(f_n\sigma^n(b))=0$ for every $b\in A$. Since $\sigma$ is an automorphism, $\varepsilon(f_nb) =0$ for all $b\in A$ and, thus, the kernel of $\varepsilon$ contains the right ideal generated by $f_n$, a contradiction. Thus $\alpha_\varepsilon$ is injective.
Finally, it remains to prove that $\alpha_\varepsilon$ is surjective. Let $\{a_1, \dots ,a_r\}$ be an $\mathbb{F}$-basis of $A$. Let us show that $x^na_i^* \in \operatorname{Im} \alpha_{\varepsilon}$ for all $n\geq 0$ and $1\leq i \leq r$, which yields the result.
For any $n\geq 0$, since $\{\sigma^{n}(a_1), \ldots , \sigma^n(a_r)\}$ is an $\mathbb{F}$-basis of $A$, by \eqref{dualbasis}, there exist $b^{(n)}_1,\ldots,b^{(n)}_r\in A$ such that
\begin{equation}\label{eq bi ai}
\varepsilon\left( b^{(n)}_{i} \sigma^n(a_j)\right) = \delta_{ij}
\end{equation}
for all $1\leq i,j \leq r$. For each \(1 \leq i \leq r\), set
\[
g^{(i)}=\sum_{k=0}^ng_{k}^{(i)} x^k\in S,
\]
where $g_n^{(i)}=b_i^{(n)}$ and, for each $0\leq m\leq n-1$,
\begin{equation} \label{eq gn-1}
g_m^{(i)}=-\sum_{\ell=1}^rb_{\ell}^{(m)} \left( \sum_{k={m+1}}^n \varepsilon \left( g_k^{(i)} N_m^{k}(a_{\ell})\right)\right).
\end{equation}
Then, by \eqref{eq bi ai}, for all $1\leq i,j\leq r$,
\begin{equation} \label{eq x}
\varepsilon \left(g_n^{(i)} \sigma^{n}(a_j) \right) = \varepsilon\left( b^{(n)}_{i} \sigma^n(a_j)\right) = \delta_{ij}
\end{equation}
and
\[
\begin{split}
\varepsilon \left( g_m^{(i)} N_m^m(a_j) \right) &= \varepsilon \left( g_m^{(i)} \sigma^m(a_j)\right) \\
&\stackrel{\eqref{eq gn-1}}{=} \varepsilon \left( -\sum_{\ell=1}^r\left(b_{\ell}^{(m)} \left(\sum_{k={m+1}}^n \varepsilon \left( g_k^{(i)} N_m^{k}(a_{\ell})\right) \right)\right) \sigma^m(a_j)\right) \\
&= -\sum_{\ell=1}^r \sum_{k={m+1}}^n \varepsilon \left( g_k^{(i)} N_m^{k}(a_{\ell})\right) \varepsilon \left( b_{\ell}^{(m)} \sigma^m(a_j) \right) \\
&\stackrel{\eqref{eq x}}{=} - \sum_{k={m+1}}^n \varepsilon \left( g_k^{(i)} N_m^{k}(a_{j})\right).
\end{split}
\]
Hence
\begin{equation}\label{yy}
\sum_{k={m}}^n \varepsilon \left( g_k^{(i)} N_m^{k}(a_{j})\right) = 0
\end{equation}
for \(1 \leq i,j \leq r\), \(0 \leq m \leq n-1\). Now
\[
\begin{split}
\alpha_\varepsilon(g^{(i)})(a_j) &= \alpha_{\varepsilon}(g^{(i)}a_j)(1) \\
& \stackrel{\eqref{ga}}{=} \sum_{m= 0}^n\varepsilon \left(\sum_{k=m}^n g^{(i)}_k N_m^k(a_j)\right)x^m\\
& \stackrel{\eqref{yy},\eqref{eq x}}{=} x^na_i^*(a_j).
\end{split}
\]
So $x^na_i^* = \alpha_\varepsilon(g^{(i)}) \in \operatorname{Im} \alpha_{\varepsilon}$, as required.
Conversely, let $\alpha:S\to S^*$ be a right $S$-isomorphism. We would like to define $\varepsilon_\alpha:A\to \mathbb{F}$ as $\varepsilon_\alpha(a)=\alpha(a)(1)$ for $a \in A$. We need first to show that $\alpha(a)(1)\in \mathbb{F}$ for every $a\in A$. Consider again the $\mathbb{F}$--basis $\{a_1,\ldots ,a_r\}$ of $A$, and set $g_i = \alpha^{-1}(a_i^*)$ for $i = 1, \dots, r$. If we prove that the $\mathbb{F}$--linearly independent set $\{g_1, \dots, g_r \}$ is contained in $A$, then it becomes an $\mathbb{F}$--basis of $A$.
Write $g_i=\sum_{k=0}^{n_i}g_{ik}x^k$. Therefore,
\[
\begin{split}
\delta_{ij} &= a_i^*(a_j) \\
&= \alpha(g_i)(a_j) \\
&= \sum_{k=0}^{n_i}\alpha (g_{ik}x^k )(a_j) \\
&\overset{\ast}{=} \sum_{k=0}^{n_i}\alpha (g_{ik})(x^k a_j) \\
&= \sum_{k=0}^{n_i}\alpha (g_{ik}) \left( \sum_{m=0}^k N_m^k(a_j)x^m \right) \\
&\overset{\dagger}{=} \sum_{k=0}^{n_i} \sum_{m=0}^k\alpha(g_{ik})(N_m^k(a_j))x^m,
\end{split}
\]
where equality $\ast$ comes from the fact that $\alpha$ is a right $S$-morphism, and $\dagger$ is due to the fact that $\alpha(g_{ik})$ is a right $R$-morphism for every $k$ and $i$. Now, if $n_i\geq 1$, then
\[
\alpha(g_{i n_i})(\sigma^{n_i}(a_j))=0
\]
for every $j\in \{1,\ldots, r\}$. By Lemma \ref{lemma 1a}, $\{\sigma^{n_i}(a_1),\ldots,\sigma^{n_i}(a_r)\}$ is a right $R$-basis of $S$, so $\alpha(g_{in_i})=0$ and then $g_{in_i}=0$. Hence, $n_i$ must be zero, so $g_i\in A$ for each $1\leq i\leq r$. Therefore, the $\mathbb{F}$--linear map $\alpha$ satisfies $\alpha(g_i)(a_j) = \delta_{ij}$ for the $\mathbb{F}$--bases $\{g_1, \dots, g_r \}$ and $\{a_1, \dots, a_r\}$ of $A$. This obviously implies that $\alpha(a)(b) \in \mathbb{F}$ for every $a, b \in A$, and that the bilinear form on $A$ given by $\langle a,b \rangle = \alpha(a)(b)$ is non-degenerate. Therefore, $\varepsilon_\alpha$ is a well defined Frobenius functional on $A$.
It remains to prove that both constructions are inverse to each other. Indeed, let $\varepsilon$ be a Frobenius functional on $A$. Keeping the previous notation, for any $a\in A$,
\[
\varepsilon_{\alpha_\varepsilon}(a)=\alpha_\varepsilon(a)(1)=\varepsilon(a).
\]
On the other hand, let $\alpha$ be a right $S$-isomorphism from $S$ to $S^*$. We want to check that $\alpha_{\varepsilon_\alpha}=\alpha$. Since both are right $S$--linear maps, it is enough to prove that $\alpha_{\varepsilon_\alpha}(1) =\alpha (1)$. And these two maps are right $R$--linear, so the following computation, for $a \in A$, suffices:
\[
\alpha_{\varepsilon_\alpha}(1)(a) = \varepsilon_\alpha(a) = \alpha (a)(1) = \alpha(1)(a).
\]
\end{proof}
Condition (2) in Theorem \ref{semiFrobext} is quite close to the notion of Frobenius extension, removing only the requirement of being a left \(R\)-module morphism. We have not found in the literature that this condition has been introduced and studied. For this reason, let us now introduce semi Frobenius extensions.
\begin{definition}
A unital ring extension $C \subseteq B$ is said to be right (resp. left) semi Frobenius if $B$ is a finitely generated projective right (resp. left) $C$-module and there exists an isomorphism \(B \cong B^*\) of right \(B\)-modules (resp. an isomorphism \(B \cong {}^*B\) of left \(B\)-modules).
\end{definition}
Our aim now is to prove that $A$ is a Frobenius algebra over $\mathbb{F}$ if and only if the extension $R \subseteq S$ is left or right semi Frobenius.
\begin{theorem}\label{semiFrobtwosided}
Let \(A\) be an \(\mathbb{F}\)-algebra. The following statements are equivalent:
\begin{enumerate}
\item $A$ is a Frobenius $\mathbb{F}$-algebra,
\item the extension $R \subseteq S$ is right semi Frobenius,
\item the extension $R \subseteq S$ is left semi Frobenius.
\end{enumerate}
\end{theorem}
\begin{proof}
The equivalence between (1) and (2) is Theorem \ref{semiFrobext}.
In order to check the equivalence (1) if and only if (3), observe that \(A\) is Frobenius if and only if \(A^{op}\) is Frobenius. By Theorem \ref{semiFrobext}, \(A^{op}\) is a Frobenius \(\mathbb{F}\)-algebra if and only if \(\mathbb{F}[x] \subseteq A^{op}[x;\sigma^{-1},-\delta \sigma^{-1}]\) is right semi Frobenius. Since \(\mathbb{F}[x] = R = R^{op}\) and \(S^{op} = A^{op}[x;\sigma^{-1},-\delta \sigma^{-1}]\) (see e.g. \cite[page 39, Exercise 2R]{Goodearl/Warfield:2004}), it follows that \(A^{op}\) is a Frobenius \(\mathbb{F}\)-algebra if and only if \(R \subseteq S\) is left semi Frobenius.
\end{proof}
\begin{remark}\label{leftright}
Although, by Theorems \ref{semiFrobext} and \ref{semiFrobtwosided}, $R\subseteq S$ is left semi Frobenius if and only if it is right semi Frobenius, it is an open question whether, in general, the notion of semi Frobenius extension is left-right symmetric, as it is for Frobenius extensions, see \cite[\S 1, page 11]{Nakayama/Tsuzuku:1959}.
\end{remark}
We now refine the latter results in the realm of Frobenius extensions.
\begin{theorem}\label{bijectionFrob}
There exists a bijective correspondence between the sets of
\begin{enumerate}
\item \(R-S\)-isomorphisms from $S$ to $S^*$.
\item Frobenius functionals $\varepsilon:A\to \mathbb{F}$ satisfying $\varepsilon\sigma=\varepsilon$ and $\varepsilon\delta=0$.
\item Frobenius forms $\langle -,- \rangle :A\times A\to \mathbb{F}$ satisfying the conditions $\langle a,b\rangle=\langle\sigma(a),\sigma(b)\rangle$ and $\langle \sigma(a), \delta(b)\rangle +\langle\delta(a),b\rangle=0$ for all $a,b\in A$.
\end {enumerate}
\end{theorem}
\begin{proof}
In order to prove the bijection between \emph{(1)} and \emph{(2)}, it is enough to show that left $R$-linearity of a right $S$-isomorphism $\alpha:S\to S^*$ is equivalent to the conditions described in \emph{(2)} on the corresponding Frobenius functional $\varepsilon$ under the bijection stated in Theorem \ref{semiFrobext}.
Now, $\alpha$ is left $R$--linear if and only if $\alpha(xf) = x\alpha(f)$ for every $f \in S$. But, since $\alpha$ is right $S$--linear, the latter is equivalent to the condition $\alpha(x) = x\alpha(1)$. Both $\alpha(x)$ and $x\alpha(1)$ are right $R$--linear maps, so, they are equal if and only if $\alpha(x)(a) = (x\alpha(1))(a)$ for every $a \in A$.
Thus, from the computations
\[
\alpha(x)(a) = \alpha(xa)(1) = \alpha(\sigma(a)x + \delta(a))(1) = \varepsilon(\sigma(a)) x + \varepsilon(\delta(a)),
\]
\[
(x\alpha(1))(a) = x \alpha(1)(a) = x \alpha(a)(1) = x \varepsilon(a) = \varepsilon(a)x,
\]
we get that $\alpha$ is left $R$--linear if and only if $\varepsilon(\sigma(a)) = \varepsilon(a)$ and $\varepsilon(\delta(a)) = 0$ for every $a \in A$.
The bijection between \emph{(2)} and \emph{(3)} follows from the bijection between Frobenius forms and Frobenius functionals explained in Remark \ref{FrobAlg}.
\end{proof}
The following direct consequence of Theorem \ref{bijectionFrob} is the characterization which will be used to build an example of biseparable extension which is not Frobenius.
\begin{theorem}\label{Frobext}
$R \subseteq S$ is Frobenius if and only if there exists a Frobenius functional $\varepsilon:A\to \mathbb{F}$ verifying $\varepsilon\sigma=\varepsilon$ and $\varepsilon\delta=0$.
\end{theorem}
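An immediate illustration: if $\sigma=\operatorname{id}_A$ and $\delta=0$, so that $S=A[x]$ is the ordinary polynomial ring, the conditions $\varepsilon\sigma=\varepsilon$ and $\varepsilon\delta=0$ hold automatically, and Theorem \ref{Frobext} then says that
\[
\mathbb{F}[x]\subseteq A[x] \text{ is Frobenius if and only if } A \text{ is a Frobenius $\mathbb{F}$-algebra.}
\]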
We finish the section showing a family of examples of left and right semi Frobenius, but not Frobenius, extensions.
\begin{example}\label{seminoFrob}
Let $p$ be a prime number and $\mathbb{F}_p$ the finite field of $p$ elements. Consider some $n>1$, and the field extension $\mathbb{F}_p \subseteq \mathbb{F}_{p^n}$. Then $\mathbb{F}_{p^n}$ is a Frobenius $\mathbb{F}_{p}$-algebra. Let $\tau:\mathbb{F}_{p^n}\to \mathbb{F}_{p^n}$ be the Frobenius automorphism, i.e. $\tau(c)=c^p$ for any $c\in \mathbb{F}_{p^n}$. Then, there exists $\alpha\in \mathbb{F}_{p^n}$ such that $\{\alpha, \tau(\alpha),\ldots,\tau^{n-1}(\alpha)\}$ is an $\mathbb{F}_{p}$-basis of $\mathbb{F}_{p^n}$ (a normal basis). We then define the $\tau$-derivation $\delta:\mathbb{F}_{p^n}\rightarrow \mathbb{F}_{p^n}$ given by
\[
\delta(b)=(\tau(b)-b)\frac{\alpha}{\tau(\alpha)-\alpha}
\]
for any $b\in \mathbb{F}_{p^n}$. By Theorem \ref{semiFrobtwosided}, $\mathbb{F}_{p}[x] \subseteq \mathbb{F}_{p^n}[x;\tau,\delta]$ is left and right semi Frobenius. Nevertheless, it is not Frobenius. Indeed, by Theorem \ref{Frobext}, $\mathbb{F}_p[x] \subseteq \mathbb{F}_{p^n}[x;\tau,\delta]$ is Frobenius if and only if there exists a Frobenius functional $\varepsilon:\mathbb{F}_{p^n}\to \mathbb{F}_p$ such that $\varepsilon\tau=\varepsilon$ and $\varepsilon\delta=0$. But, in such a case, since $\delta(\alpha)=\alpha$,
\[
0=\varepsilon(\delta(\alpha))=\varepsilon(\alpha)=\varepsilon(\tau(\alpha))=\cdots=\varepsilon(\tau^{n-1}(\alpha)).
\]
So $\varepsilon=0$, which is not a Frobenius functional, a contradiction.
\end{example}
\section{Lifting biseparable extensions}\label{Biseparable}
In this section we aim to provide conditions for ensuring that the extension $R\subseteq S$ is biseparable. Since, by \cite[Lemma 3.3]{Caenepeel/Kadison:2001} and Lemma \ref{lemma 1a}, this is so if and only if $R\subseteq S$ is separable and split, we deal with both notions independently. Let us first analyze the property of being split.
Let $C \subseteq B$ be a ring extension, $\sigma:B \to B$ an automorphism of $B$ and $\delta:B \to B$ a $\sigma$-derivation on $B$ such that $\sigma(C)\subseteq C$ and $\delta(C)\subseteq C$.
\begin{proposition}\label{split}
If $C \subseteq B$ is split and there exists a \(C\)-bimodule morphism $\xi:B\to C$ with $\xi\sigma=\sigma\xi$, $\xi\delta=\delta \xi$ and $\xi(1)=1$, then $C[x;\sigma,\delta] \subseteq B[x;\sigma,\delta]$ is split.
\end{proposition}
\begin{proof}
We define $\widehat{\xi}:B[x;\sigma, \delta] \to C[x;\sigma, \delta]$ as, for any $f=\sum_{i=0}^n b_ix^i\in B[x;\sigma,\delta]$,
\[
\widehat{\xi}(f )=\sum_{i=0}^n\xi(b_i)x^{i}.
\]
We check that $\widehat{\xi}$ is a $C[x;\sigma,\delta]$-bimodule morphism. Let $a\in C$ and $f=\sum_{i=0}^nb_ix^{i}\in B[x;\sigma,\delta]$,
\[
\begin{split}
\widehat{\xi}(xf) &= \widehat{\xi}\left (x\sum_{i=0}^nb_ix^{i}\right )\\
&= \widehat{\xi}\left (\sum_{i=0}^n\sigma(b_i)x^{i+1}+\delta(b_i)x^{i}\right )\\
&= \sum_{i=0}^n\xi(\sigma(b_i))x^{i+1}+\sum_{i=0}^n\xi(\delta(b_i))x^{i}\\
&= \sum_{i=0}^n\sigma(\xi(b_i))x^{i+1}+\sum_{i=0}^n\delta(\xi(b_i))x^{i}\\
&= x\sum_{i=0}^n\xi(b_i)x^{i}\\
&= x\widehat{\xi}(f),
\end{split}
\]
and
\[
\widehat{\xi} (af ) =\widehat{\xi}\left (\sum_{i=0}^nab_ix^{i}\right ) = \sum_{i=0}^n\xi(ab_i)x^{i} = a\sum_{i=0}^n\xi(b_i)x^{i} = a\widehat{\xi}(f),
\]
so $\widehat{\xi}$ is left $C[x;\sigma,\delta]$-linear. Analogously,
\[
\widehat{\xi}(fx) = \widehat{\xi}\left (\sum_{i=0}^nb_ix^{i+1}\right ) = \sum_{i=0}^n\xi(b_i)x^{i+1} = \left (\sum_{i=0}^n\xi(b_i)x^{i}\right )x = \widehat{\xi}(f)x,
\]
and
\[
\begin{split}
\widehat{\xi}(fa)
&= \widehat{\xi}\left (\sum_{i=0}^n b_i \left (\sum_{k=0}^{i}N_k^{i}(a)x^{k}\right) \right)\\
&= \sum_{i=0}^n\sum_{k=0}^{i}\xi(b_iN_k^{i}(a))x^{k}\\
&= \sum_{i=0}^n\sum_{k=0}^{i}\xi(b_i)N_k^{i}(a)x^{k}\\
&= \sum_{i=0}^n\xi(b_i)\left (\sum_{k=0}^{i}N_k^{i}(a)x^{k}\right )\\
&= \sum_{i=0}^n\xi(b_i)x^{i}a\\
&= \widehat{\xi}(f)a,\\
\end{split}
\]
so $\widehat{\xi}$ is right $C[x;\sigma,\delta]$-linear. Clearly, $\widehat{\xi}(1)=1$, and thus $C[x;\sigma,\delta] \subseteq B[x;\sigma,\delta]$ is split.
\end{proof}
\begin{corollary}\label{corsplit}
If there exists an \(\mathbb{F}\)-linear map \(\xi: A \to \mathbb{F}\) such that \(\xi(1) = 1\), \(\xi \sigma = \xi\) and \(\xi \delta = 0\), then \(R \subseteq S\) is split.
\end{corollary}
\begin{proof}
Observe that the extension \(\mathbb{F} \subseteq A\) is split for any finite dimensional \(\mathbb{F}\)-algebra \(A\), since there is an \(\mathbb{F}\)-basis of \(A\) containing the element \(1\). Hence, the corollary follows from Proposition \ref{split}, since \(\sigma_{|\mathbb{F}} = \operatorname{id}_{\mathbb{F}}\) and \(\delta_{|\mathbb{F}} = 0\).
\end{proof}
The transfer of separability in Ore extensions is studied in \cite{Gomez/Lobillo/Navarro:2017a}. For brevity, we denote by $\sigma^\otimes$ and $\delta^\otimes$ the maps
\[
\begin{split}
\sigma^\otimes:B\otimes_C B &\to B\otimes_C B\\
b_1\otimes b_2 &\mapsto \sigma(b_1)\otimes \sigma(b_2)\\
& \\
\delta^\otimes:B\otimes_C B &\to B\otimes_C B\\
b_1\otimes b_2 &\mapsto \sigma(b_1)\otimes \delta(b_2)+\delta(b_1)\otimes b_2
\end{split}
\]
for every $b_1,b_2\in B$. By \cite[Lemma 27]{Gomez/Lobillo/Navarro:2017a}, $\sigma^\otimes$ and $\delta^\otimes$ are well defined.
We will use the following proposition whose easy proof compares $xp$ and $px$ in the light of the rule defining the product in an Ore extension.
\begin{proposition}[{\cite[Theorem 29]{Gomez/Lobillo/Navarro:2017a}}]\label{sep}
If $C \subseteq B$ is separable and there exists a separability element $p$ verifying $\sigma^\otimes(p)=p$ and $\delta^\otimes(p)=0$, then $C[x;\sigma,\delta] \subseteq B[x;\sigma,\delta]$ is separable.
\end{proposition}
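As a small illustration (borrowing the data that reappear in Section \ref{sec:Example}), take the separable field extension $\mathbb{F}_2\subseteq \mathbb{F}_8$, where $\mathbb{F}_8=\mathbb{F}_2(a)$ with $a^3+a^2+1=0$, together with the Frobenius automorphism $\tau(c)=c^2$ and $\delta=0$. Then
\[
p=a\otimes_{\mathbb{F}_2} a+a^2\otimes_{\mathbb{F}_2} a^2+a^4\otimes_{\mathbb{F}_2} a^4
\]
is a separability element satisfying $\tau^\otimes(p)=p$, because $\tau$ permutes $\{a,a^2,a^4\}$, and $\delta^\otimes(p)=0$ trivially; hence $\mathbb{F}_2[x]\subseteq \mathbb{F}_8[x;\tau]$ is separable by Proposition \ref{sep}.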
In \cite[Theorem 8]{Gomez/Lobillo/Navarro:2017b} a converse to Proposition \ref{sep} is provided when \(\delta = 0\). Here, we generalize part of this result to the case where \(\delta\) is an inner \(\sigma\)--derivation. So, for the rest of this section, \(\sigma : A \to A\) is an \(\mathbb{F}\)--algebra automorphism and \(\delta_{\sigma,b} : A \to A\) is the $\sigma$-derivation defined by
\[
\delta_{\sigma,b}(a) = ba - \sigma(a) b
\]
for some \(b \in A\). Hence \(R = \mathbb{F}[x]\) and \(S = A[x;\sigma,\delta_{\sigma,b}]\). Recall that we have fixed an \(\mathbb{F}\)--basis \(\{a_1, \dots, a_r\}\) of \(A\).
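It is straightforward to check that $\delta_{\sigma,b}$ is indeed a $\sigma$-derivation: for all $a,a'\in A$,
\[
\sigma(a)\delta_{\sigma,b}(a')+\delta_{\sigma,b}(a)a'
=\sigma(a)ba'-\sigma(a)\sigma(a')b+baa'-\sigma(a)ba'
=b(aa')-\sigma(aa')b=\delta_{\sigma,b}(aa').
\]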
\begin{lemma}\label{gradingtensor}
The set \(\{a_i \otimes_R a_j x^k ~|~ 1 \leq i,j \leq r, k \geq 0\}\) is an \(\mathbb{F}\)--basis of \(S \otimes_R S\). Consequently, the map
\[
\varphi : S \otimes_R S \to \bigoplus_{k \geq 0} (A \otimes_\mathbb{F} A) x^k, \quad \left[a_i \otimes_R a_j x^k \mapsto (a_i \otimes_\mathbb{F} a_j) x^k \right]
\]
is an \(\mathbb{F}\)--isomorphism that provides an \(\mathbb{N}\)--grading on \(S \otimes_R S\) as an \(\mathbb{F}\)--vector space.
\end{lemma}
\begin{proof}
It can be derived from Lemma \ref{lemma 1a} and \cite[Corollary 8.5]{Stenstrom:1975} that \(S \otimes_R S\) is a free right \(R\)-module with basis \(\{a_i \otimes_R a_j ~|~ 1 \leq i,j \leq r\}\), hence \(\{a_i \otimes_R a_j x^k ~|~ 1 \leq i,j \leq r, k \geq 0\}\) is an \(\mathbb{F}\)--basis. Consequently \(\varphi\) is an isomorphism because \(\{(a_i \otimes_\mathbb{F} a_j) x^k ~|~ 1 \leq i,j \leq r, k \geq 0\}\) is a basis of \(\bigoplus_{k \geq 0} (A \otimes_\mathbb{F} A) x^k\).
\end{proof}
\begin{proposition}\label{sepdown}
If \(R \subseteq S\) is separable and \(\delta = \delta_{\sigma,b}\) is inner, then \(\mathbb{F} \subseteq A\) is separable.
\end{proposition}
\begin{proof}
Let \(p \in S \otimes_R S\) be a separability element. Since $\{a_1, \dots, a_r \}$ is an $\mathbb{F}$--basis of $A$, we do not lose generality if we assume \(p = \sum_{i=1}^r \sum_{j=0}^m a_i \otimes_R g_{ij} x^j\). Let \(a \in A\). Since \(ap = pa\) we have
\[
\begin{split}
\sum_{i=1}^r \sum_{j=0}^m aa_i \otimes_R g_{ij} x^j & = \sum_{i=1}^r \sum_{j=0}^m \sum_{k=0}^j a_i \otimes_R g_{ij} N_k^j(a) x^k \\
& = \sum_{i=1}^r \sum_{k=0}^m \sum_{j=k}^m a_i \otimes_R g_{ij} N_k^j(a) x^k.
\end{split}
\]
By Lemma \ref{gradingtensor} and by applying \(\varphi\), we get that, for all \(0 \leq \ell \leq m\),
\[
\sum_{i=1}^r aa_i \otimes_\mathbb{F} g_{i \ell} = \sum_{i=1}^r \sum_{j=\ell}^m a_i \otimes_\mathbb{F} g_{ij} N_\ell^j(a) \in A \otimes_\mathbb{F} A.
\]
Multiplying on the right by \(b^\ell\) and adding all the obtained identities we have
\[
\begin{split}
\sum_{\ell=0}^m \sum_{i=1}^r aa_i \otimes_\mathbb{F} g_{i \ell} b^\ell & = \sum_{\ell=0}^m \sum_{i=1}^r \sum_{j=\ell}^m a_i \otimes_\mathbb{F} g_{ij} N_\ell^j(a) b^\ell \\
& = \sum_{i=1}^r \sum_{j=0}^m \sum_{\ell=0}^j a_i \otimes_\mathbb{F} g_{ij} N_\ell^j(a) b^\ell.
\end{split}
\]
Since \(ba = \sigma(a) b + \delta_{\sigma,b}(a)\), an inductive argument on $j$, which uses \eqref{Nnmas1}, shows that
\[
\sum_{\ell=0}^j N_\ell^j(a) b^\ell = b^j a,
\]
hence
\[
\sum_{i=1}^r \sum_{\ell=0}^m aa_i \otimes_\mathbb{F} g_{i \ell} b^\ell = \sum_{i=1}^r \sum_{j=0}^m a_i \otimes_\mathbb{F} g_{ij} b^j a.
\]
So \(\widehat{p} = \sum_{i=1}^r \sum_{j=0}^m a_i \otimes_\mathbb{F} g_{ij} b^j\) satisfies \(a \widehat{p} = \widehat{p} a\) for all \(a \in A\). Now, since
\[
1 = \mu(p) = \sum_{i=1}^r \sum_{j=0}^m a_{i} g_{ij} x^j \in A[x;\sigma,\delta_{\sigma,b}],
\]
it follows that \(1 = \sum_{i=1}^r a_i g_{i0}\) and \(0 = \sum_{i=1}^r a_i g_{ij}\) for all \(1 \leq j \leq m\). Therefore \(\mu(\widehat{p}) = 1\) and \(\widehat{p}\) is a separability element for \(\mathbb{F} \subseteq A\).
\end{proof}
\section{An answer to a problem of Caenepeel and Kadison}\label{sec:Example}
In this section, with the aid of the results of Sections \ref{Frobenius} and \ref{Biseparable}, we give a negative answer to Problem \ref{theproblem}.
\begin{example}[Answer to Problem \ref{theproblem}]\label{counterexample}
Let $\mathbb{F}_8$ be the field with eight elements described as $\mathbb{F}_8=\mathbb{F}_2(a)$, where $a^3+a^2+1=0$. Let $\tau$ be the Frobenius automorphism on $\mathbb{F}_{8}$, that is, $\tau(c)=c^2$ for every $c\in \mathbb{F}_8$. Observe that $\{a, a^2,a^4\}$ is a self-dual basis of the extension $\mathbb{F}_2 \subseteq \mathbb{F}_8$. Set $A=\mathcal{M}_2(\mathbb{F}_8)$, the ring of $2\times 2$ matrices over $\mathbb{F}_8$, and consider the $\mathbb{F}_2$-automorphism $\sigma: A \to A$ defined as the component-by-component extension of $\tau$ to $A$. That is, $\sigma$ is given by
\begin{equation}\label{automorphism}
\sigma\left ( \begin{matrix} x_0 & x_1 \\ x_2 & x_3 \end{matrix} \right ) = \left ( \begin{matrix} \tau(x_0) & \tau(x_1) \\ \tau(x_2) & \tau(x_3) \end{matrix} \right ) = \left ( \begin{matrix} x_0^2 & x_1^2 \\ x_2^2 & x_3^2 \end{matrix} \right ) \text{ for every } \left ( \begin{matrix} x_0 & x_1 \\ x_2 & x_3 \end{matrix} \right )\in A.
\end{equation}
We can also set the inner $\sigma$-derivation $\delta:A\rightarrow A$ given by $\delta(X)=MX-\sigma(X)M$ for $X\in A$, where
\[
M=\begin{pmatrix}
0 & 0 \\
0 & a
\end{pmatrix}.
\]
Our aim is to prove that the ring extension $\mathbb{F}_2[x] \subseteq A[x;\sigma, \delta]$ is split and separable, and hence biseparable, but not Frobenius. For simplicity, we denote
\[
e_0= \begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix}, \, e_1= \begin{pmatrix}
0 & 1 \\
0 & 0
\end{pmatrix}, \, e_2= \begin{pmatrix}
0& 0 \\
1 & 0
\end{pmatrix} \text{ and } e_3= \begin{pmatrix}
0& 0 \\
0 & 1
\end{pmatrix}.
\]
Hence, an $\mathbb{F}_2$-basis of $A$ is given by $\mathcal{B}=\{a^{2^i}e_j \text{ such that } 0\leq i \leq 2 \text{ and } 0\leq j \leq 3\}$.
Let $\varepsilon: A \to \mathbb{F}_2$ be an $\mathbb{F}_2$-linear map. If we force $\varepsilon \sigma = \varepsilon$, then
\[
\varepsilon(a^{2^{i+1}}e_j)=\varepsilon\sigma(a^{2^i}e_j)=\varepsilon(a^{2^{i}}e_j)
\]
for every $0 \leq i \leq 2$ and \(0 \leq j \leq 3\), so that $\varepsilon$ is determined by four values $\gamma_0,\gamma_1,\gamma_2, \gamma_3\in \mathbb{F}_2$ such that $\varepsilon(a^{2^i}e_j)=\gamma_j$ for $0 \leq i \leq 2$ and \(0 \leq j \leq 3\).
Let us then consider $\xi:A\rightarrow \mathbb{F}_2$ the $\mathbb{F}_2$-linear map determined by $\gamma_0=1$, $\gamma_1=0$, $\gamma_2=0$ and $\gamma_3=0$. Firstly,
\[
\begin{split}
\xi\begin{pmatrix}
1& 0 \\
0 & 1
\end{pmatrix}
& =\xi\begin{pmatrix}
a+a^2+a^4& 0 \\
0 & a+a^2+a^4
\end{pmatrix}\\
& =\xi(ae_0)+\xi(a^2e_0)+\xi(a^4e_0)+\xi(ae_3)+\xi(a^2e_3)+\xi(a^4e_3)\\
& =1.
\end{split}
\]
On the other hand, for any $x_0,x_1,x_2,x_3 \in \mathbb{F}_8$,
\begin{equation}\label{derivation}
\begin{split}
\delta\begin{pmatrix}
x_0 & x_1 \\
x_2 & x_3
\end{pmatrix} &= \begin{pmatrix}
0 & 0 \\
0 & a
\end{pmatrix} \begin{pmatrix}
x_0 & x_1 \\
x_2 & x_3
\end{pmatrix} + \begin{pmatrix}
x_0^2 & x_1^2 \\
x_2^2 & x_3^2
\end{pmatrix}\begin{pmatrix}
0 & 0 \\
0 & a
\end{pmatrix}\\
&= \begin{pmatrix}
0 & 0 \\
ax_2 & ax_3
\end{pmatrix} + \begin{pmatrix}
0 & ax_1^2 \\
0 & ax_3^2
\end{pmatrix}\\
&= \begin{pmatrix}
0 & ax_1^2 \\
ax_2 & a(x_3+x_3^2)
\end{pmatrix}.
\end{split}
\end{equation}
Therefore, $\xi\delta = 0$. By Corollary \ref{corsplit}, the extension $\mathbb{F}_2[x] \subseteq A[x;\sigma, \delta]$ is split.
Let us prove that the map $\xi$ is the only non trivial $\mathbb{F}_2$-linear map verifying the equalities $\xi\sigma=\xi$ and $\xi\delta=0$. Let us suppose that $\varepsilon:A\rightarrow \mathbb{F}_2$ is a non zero $\mathbb{F}_2$-linear map that verifies the equation $\varepsilon\sigma=\varepsilon$. As reasoned above, it is determined by some values $\gamma_0,\gamma_1,\gamma_2, \gamma_3\in \mathbb{F}_2$. Nevertheless,
\begin{itemize}
\item If $\gamma_1=1$, then
$\varepsilon\delta\begin{pmatrix}
0& 1 \\
0 & 0
\end{pmatrix} = \varepsilon \begin{pmatrix}
0& a \\
0 & 0
\end{pmatrix}=1$,
\item If $\gamma_2=1$, then
$\varepsilon\delta\begin{pmatrix}
0& 0 \\
1 & 0
\end{pmatrix} = \varepsilon \begin{pmatrix}
0& 0 \\
a & 0
\end{pmatrix}=1$,
\item If $\gamma_3=1$, then
$\varepsilon\delta\begin{pmatrix}
0& 0 \\
0 & a
\end{pmatrix} = \varepsilon \begin{pmatrix}
0& 0 \\
0 & a^2 + a^3
\end{pmatrix}= \varepsilon \begin{pmatrix}
0& 0 \\
0 & a+a^2+a^4
\end{pmatrix}=1$,
\end{itemize}
so that $\varepsilon\delta=0$ implies $\gamma_1=\gamma_2=\gamma_3=0$. Hence, $\gamma_0 = 1$, and $\varepsilon = \xi$. Note that the kernel of $\xi$ contains the nonzero left ideal
\[
J=\left\{\begin{pmatrix}
0& c_2\\
0 & c_3
\end{pmatrix} \mid c_2,c_3\in \mathbb{F}_8 \right\},
\]
so that there is no Frobenius functional $\varepsilon:A\to \mathbb{F}_2$ verifying $\varepsilon\sigma=\varepsilon$ and $\varepsilon\delta=0$. By Theorem \ref{Frobext}, the extension $\mathbb{F}_2[x] \subseteq A[x;\sigma, \delta]$ is not Frobenius.
Finally, let us prove that the extension is separable. Consider the element $p\in A\otimes_{\mathbb{F}_2}A$ given by
\[
\begin{split}
p &= \begin{pmatrix}
a& 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a& 0 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
a^2& 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a^2& 0 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
a^4& 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a^4& 0 \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0& 0 \\
a & 0
\end{pmatrix} \otimes \begin{pmatrix}
0& a\\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0& 0 \\
a^2 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0& a^2 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0& 0 \\
a^4 & 0
\end{pmatrix}\otimes \begin{pmatrix}
0& a^4\\
0 & 0
\end{pmatrix}.
\end{split}
\]
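As a quick check of the normalisation (using $e_2e_1=e_3$ for the matrices introduced above, $a^8=a$, and the identity $a+a^2+a^4=1$ in $\mathbb{F}_8$), the multiplication map gives
\[
\mu(p)=(a^2+a^4+a^8)\, e_0+(a^2+a^4+a^8)\, e_2e_1=(a+a^2+a^4)(e_0+e_3)=\begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix}.
\]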
The element $p$ is a separability element of the extension $\mathbb{F}_2 \subseteq A$, since it is the composition of the separability element $a \otimes a + a^2 \otimes a^2 + a^4 \otimes a^4$ of the extension $\mathbb{F}_2 \subseteq \mathbb{F}_8$ and the separability element $e_0 \otimes e_0 + e_2 \otimes e_1$ of the extension $\mathbb{F}_8 \subseteq A$, see \cite[Examples 4 and 5]{Gomez/Lobillo/Navarro:2017a} and \cite[Proposition 2.5]{Hirata/Sugano:1966}. Although it is straightforward to check that $\sigma^{\otimes}(p)=p$ and $\delta^{\otimes}(p)=0$, due to its importance in this paper, we detail all the computations explicitly. Since the Frobenius automorphism induces a permutation on \(\{a,a^2,a^4\}\), it follows that
\[
\begin{split}
\sigma^\otimes(p) &=\begin{pmatrix}
a^2 & 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a^2 & 0 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
a^4 & 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a^4 & 0 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
a & 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a & 0 \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a^2 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^2 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a^4 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^4 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a & 0
\end{pmatrix}\otimes \begin{pmatrix}
0 & a\\
0 & 0
\end{pmatrix} \\
&= p.
\end{split}
\]
Let us now compute \(\delta^\otimes(p)\). Recall \(\delta^\otimes = \sigma \otimes \delta + \delta \otimes \operatorname{id}\). By \eqref{derivation} and \eqref{automorphism}, \(\delta \left( \begin{smallmatrix} c & 0 \\ 0 & 0 \end{smallmatrix} \right) = \left( \begin{smallmatrix} 0 & 0 \\ 0 & 0 \end{smallmatrix} \right)\) for each \(c \in \mathbb{F}_8\), so
\[
\delta^\otimes \left(\begin{pmatrix}
a^{2^i} & 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a^{2^i} & 0 \\
0 & 0
\end{pmatrix}\right) = \begin{pmatrix}
a^{2^{i+1}} & 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & 0 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a^{2^i} & 0 \\
0 & 0
\end{pmatrix},
\]
for \(0 \leq i \leq 2\). Hence
\begin{equation}\label{deltafirstreduction}
\begin{split}
\delta^\otimes(p) &= \delta^\otimes \left(\begin{pmatrix}
a& 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a& 0 \\
0 & 0
\end{pmatrix}\right)
+ \delta^\otimes \left(\begin{pmatrix}
a^2& 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a^2& 0 \\
0 & 0
\end{pmatrix}\right) \\
&\quad + \delta^\otimes \left(\begin{pmatrix}
a^4& 0 \\
0 & 0
\end{pmatrix} \otimes \begin{pmatrix}
a^4& 0 \\
0 & 0
\end{pmatrix}\right)
+ \delta^\otimes \left(\begin{pmatrix}
0& 0 \\
a & 0
\end{pmatrix} \otimes \begin{pmatrix}
0& a\\
0 & 0
\end{pmatrix}\right) \\
&\quad + \delta^\otimes \left(\begin{pmatrix}
0& 0 \\
a^2 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0& a^2 \\
0 & 0
\end{pmatrix}\right)
+ \delta^\otimes \left(\begin{pmatrix}
0& 0 \\
a^4 & 0
\end{pmatrix}\otimes \begin{pmatrix}
0& a^4\\
0 & 0
\end{pmatrix}\right) \\
&= \delta^\otimes \left(\begin{pmatrix}
0& 0 \\
a & 0
\end{pmatrix} \otimes \begin{pmatrix}
0& a\\
0 & 0
\end{pmatrix}\right) \\
&\quad + \delta^\otimes \left(\begin{pmatrix}
0& 0 \\
a^2 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0& a^2 \\
0 & 0
\end{pmatrix}\right)
+ \delta^\otimes \left(\begin{pmatrix}
0& 0 \\
a^4 & 0
\end{pmatrix}\otimes \begin{pmatrix}
0& a^4\\
0 & 0
\end{pmatrix}\right)
\end{split}
\end{equation}
Moreover, by \eqref{derivation} and \eqref{automorphism} again,
\[
\delta^\otimes \left(\begin{pmatrix}
0 & 0 \\
a^{2^i} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2^i} \\
0 & 0
\end{pmatrix}\right) = \begin{pmatrix}
0 & 0 \\
a^{2^{i+1}} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2^{i+1}+1} \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a^{2^i+1} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2^i} \\
0 & 0
\end{pmatrix},
\]
so we can follow the computations in \eqref{deltafirstreduction} to get
\begin{equation}\label{deltasecondreduction}
\begin{split}
\delta^\otimes(p) &= \begin{pmatrix}
0 & 0 \\
a^{2} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{3} \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a^{2} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a^{4} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{5} \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a^{3} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2} \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2} \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a^{5} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{4} \\
0 & 0
\end{pmatrix},
\end{split}
\end{equation}
where we have used that \(a^7 = 1\). The identities $a^3 = a + a^4$ and $a^5 = a^2 + a^4$ in \(\mathbb{F}_8\)
allow us to expand \eqref{deltasecondreduction} in order to obtain
\begin{equation}\label{deltathirdreduction}
\begin{split}
\delta^\otimes(p) &= \begin{pmatrix}
0 & 0 \\
a^{2} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a + a^4 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a^{2} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a^{4} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^2 + a^4 \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a + a^4 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2} \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2} \\
0 & 0
\end{pmatrix} + \begin{pmatrix}
0 & 0 \\
a^2 + a^4 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{4} \\
0 & 0
\end{pmatrix} \\
&= \begin{pmatrix}
0 & 0 \\
a^{2} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a \\
0 & 0
\end{pmatrix}
+ \begin{pmatrix}
0 & 0 \\
a^{2} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^4 \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a^{2} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a \\
0 & 0
\end{pmatrix}
+ \begin{pmatrix}
0 & 0 \\
a^{4} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^2 \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a^{4} & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^4 \\
0 & 0
\end{pmatrix}
+ \begin{pmatrix}
0 & 0 \\
a & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2} \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a^4 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2} \\
0 & 0
\end{pmatrix}
+ \begin{pmatrix}
0 & 0 \\
a & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{2} \\
0 & 0
\end{pmatrix} \\
&\quad + \begin{pmatrix}
0 & 0 \\
a^2 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{4} \\
0 & 0
\end{pmatrix}
+ \begin{pmatrix}
0 & 0 \\
a^4 & 0
\end{pmatrix} \otimes \begin{pmatrix}
0 & a^{4} \\
0 & 0
\end{pmatrix}\\
& = 0.
\end{split}
\end{equation}
By Proposition \ref{sep}, $\mathbb{F}_2[x] \subseteq A[x;\sigma, \delta]$ is separable. Hence $\mathbb{F}_2[x] \subseteq A[x;\sigma, \delta]$ is a biseparable extension which is not Frobenius.
\end{example}
At this point one could ask what happens if we replace the family of Frobenius extensions in Problem \ref{theproblem} by a more general family. For instance, we can consider the family of Frobenius extensions of second kind introduced in \cite{Nakayama/Tsuzuku:1960}. Let $C \subseteq B$ be a ring extension and let $\kappa: C \rightarrow C$ be an automorphism. There is a structure of left \(C\)-module on \(C\) given by $a \cdot_{\kappa} b = \kappa(a)b$ for each $a,b \in C$. Then, $C \subseteq B$ is said to be a $\kappa$-Frobenius extension, or a Frobenius extension of second kind, if $B$ is a finitely generated projective right $C$-module and there exists a \(C-B\)-isomorphism from $B$ to $B^{*_\kappa}=\operatorname{Hom}(B_C, {_\kappa}C_C)$. The \(C-B\)-bimodule structure on $B^{*_\kappa}$ is given by $(afb)(c)=a\cdot_{\kappa}f(bc)=\kappa(a)f(bc)$ for any $f\in B^{*_\kappa}$, $a\in C$ and $b,c\in B$. It is clear that a Frobenius extension of second kind is left and right semi Frobenius. A natural question that then arises is whether a biseparable extension is a Frobenius extension of second kind. In order to answer this question, we may prove results similar to those shown in the previous sections.
\begin{proposition}\label{prop second kind}
Let $\kappa:R\to R$ be an automorphism with $\kappa(x)=mx+n$ for some $m,n\in \mathbb{F}$ with $m\not =0$. There exists a bijection between the sets of
\begin{enumerate}
\item \(R-S\)-isomorphisms $\alpha:S\to S^{*_\kappa}$.
\item Frobenius functionals $\varepsilon:A\to \mathbb{F}$ verifying $\varepsilon\sigma=m\varepsilon$ and $\varepsilon\delta=n\varepsilon$.
\end{enumerate}
\end{proposition}
\begin{proof}
By Theorem \ref{semiFrobext}, there exists a right $S$-isomorphism $\beta:S\to S^{*_\kappa}$ if and only if there exists a Frobenius functional $\varepsilon:A\rightarrow \mathbb{F}$. Now, analogously to the proof of Theorem \ref{bijectionFrob},
\[
\kappa(x)\beta(1)(a)=m\varepsilon(a) x+n\varepsilon(a)
\]
and
\[
\beta(x)(a)=\beta(1)(xa)=\beta(1) (\sigma(a)x+\delta(a))=\varepsilon(\sigma(a))x+\varepsilon(\delta(a))
\]
for any $a\in A$. Hence, $\beta$ is left $R$-linear if and only if $\varepsilon \sigma = m\varepsilon$ and $\varepsilon \delta = n\varepsilon$.
\end{proof}
\begin{corollary}\label{Frob2KExt}
$R \subseteq S$ is a Frobenius extension of second kind if and only if there exists a Frobenius functional $\varepsilon: A \to \mathbb{F}$ and $m,n\in\mathbb{F}$ with $m \neq 0$ such that $\varepsilon \sigma = m\varepsilon$ and $\varepsilon \delta = n\varepsilon$.
\end{corollary}
Are biseparable extensions Frobenius extensions of second kind? The answer is again negative.
\begin{example}[Biseparable extensions are not necessarily Frobenius of second kind] By the latter result, Example \ref{counterexample} also provides an example of a biseparable extension which is not Frobenius of second kind. Indeed, let $\kappa:\mathbb{F}_2[x]\to \mathbb{F}_2[x]$ be an automorphism. Hence $\kappa(x)=x+n$ for some $n\in \mathbb{F}_2$. The case $n=0$ is already analyzed in Example \ref{counterexample}. Therefore, set $\kappa(x)=x+1$. By Proposition \ref{prop second kind}, $\mathbb{F}_2[x] \subseteq A[x;\sigma, \delta]$ is Frobenius of second kind if and only if there exists a Frobenius functional $\varepsilon:A\to \mathbb{F}_2$ verifying $\varepsilon\sigma=\varepsilon$ and $\varepsilon\delta=\varepsilon$. As reasoned in Example \ref{counterexample}, $\varepsilon$ is determined by four values $\gamma_0,\gamma_1,\gamma_2, \gamma_3\in \mathbb{F}_2$ such that $\varepsilon(a^{2^i}e_j)=\gamma_j$ for any $i=0,1,2$ and $j=0,1,2,3$. Now,
\begin{itemize}
\item If $\gamma_0=1$, then
$0=\varepsilon\delta\begin{pmatrix}
a& 0 \\
0 & 0
\end{pmatrix} \not = \varepsilon \begin{pmatrix}
a& 0 \\
0 & 0
\end{pmatrix}=1$,
\item If $\gamma_1=1$, then
$0=\varepsilon\delta\begin{pmatrix}
0& a \\
0 & 0
\end{pmatrix} \not = \varepsilon \begin{pmatrix}
0& a \\
0 & 0
\end{pmatrix}=1$,
\item If $\gamma_2=1$, then
$0=\varepsilon\delta\begin{pmatrix}
0& 0 \\
a^2 & 0
\end{pmatrix} \not = \varepsilon \begin{pmatrix}
0& 0 \\
a^2 & 0
\end{pmatrix}=1$,
\item If $\gamma_3=1$, then
$0=\varepsilon\delta\begin{pmatrix}
0& 0 \\
0 & a^2
\end{pmatrix} \not = \varepsilon \begin{pmatrix}
0& 0 \\
0 & a^2
\end{pmatrix}=1$,
\end{itemize}
so that $\varepsilon\delta=\varepsilon$ if and only if $\varepsilon=0$. By Corollary \ref{Frob2KExt}, $\mathbb{F}_2[x] \subseteq A[x;\sigma, \delta]$ is not Frobenius of second kind. In particular, the class of Frobenius extensions of second kind is strictly contained in the class of left and right semi Frobenius extensions.
\end{example}
We can thus formulate the following problem.
\begin{problem}\label{extendedproblem}
Are biseparable extensions left and right semi Frobenius?
\end{problem}
The techniques developed in this paper are not suitable to handle this problem. In fact, assume \(R \subseteq S\) is biseparable with \(\delta = \delta_{\sigma,b}\) inner. Then \(\mathbb{F} \subseteq A\) is separable by Proposition \ref{sepdown}. By \cite[Proposition 5]{Eilenberg/Nakayama:1955} or \cite[Theorem 4.2]{Endo/Watanabe:1967}, \(\mathbb{F} \subseteq A\) is a Frobenius extension, hence \(R \subseteq S\) is right and left semi Frobenius by Theorem \ref{semiFrobtwosided}.
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
We consider oscillatory systems of interacting Hawkes processes introduced in \cite{EvaSusanne} to model multi-class systems of interacting neurons, together with the diffusion approximations of their intensity processes. This diffusion, which incorporates the memory terms defining the dynamics of the Hawkes process, is hypo-elliptic. It is given by a high dimensional chain of differential equations driven by $2$-dimensional Brownian motion. We study the large-population, i.e.\ small-noise, limit of its invariant measure, for which we establish a large deviation result in the spirit of Freidlin and Wentzell.
\end{abstract}
\section{Introduction}
The aim of this paper is to study oscillatory systems of interacting Hawkes processes and their long time behavior. This study was started in Ditlevsen and L\"ocherbach \cite{EvaSusanne}, where multi-class systems of Hawkes processes with mean field interactions have been introduced as microscopic models for spike trains of interacting neurons. In the large population limit, i.e.\ on a macroscopic scale, such systems present {\it oscillations}. In the present paper we concentrate on the finite population process and its large deviation properties. In particular, we will be interested in its deviations from limit cycles, i.e.\ from the {\it typical oscillatory behavior} of the limit process.
We consider two populations of particles, the first composed of $ N_1 ,$ the second of $N_2$ particles. The total number of particles in the system is $N = N_1 +N_2$. The activity of each particle is described by a counting process $ Z^{N}_{k, i} (t), {1 \le k \le 2 , 1 \le i \le N_k} , t \geq 0,$ recording the number of ``actions'' of the $i$th particle belonging to population $k$ during the interval $ [0, t ]. $ Such ``actions'' can be ``spikes'' if we think of neurons, or ``transactions'' if we think of economic agents.
The sequence of counting processes $ ( Z^{N}_{k,i} ) $ is characterized by its intensity processes $ (\lambda^N_{k, i} ( t) ) $ which are informally defined through the relation
$$ {\mathbb P} ( Z^{N}_{k, i } \mbox{ has a jump in ]t , t + dt ]} | \F_t ) = \lambda^N_{k } (t) dt , {1 \le k \le 2 , 1 \le i \le N_k} ,$$
where $ \F_t = \sigma ( Z^{N}_{k, i} (s) , \, s \le t , {1 \le k \le 2 , 1 \le i \le N_k} ) ,$ and where
\begin{equation}\label{eq:intensity}
\lambda^N_{1} ( t) = f_1 \left( \frac{1}{N_2}\sum_{1 \le j \le N_2} \int_{]0 , t [} h_{12 }( t-s) d Z^N_{ 2,j} (s) \right)
\end{equation}
and
\begin{equation}\label{eq:intensity2}
\lambda^N_{2} ( t) = f_2 \left( \frac{1}{N_1}\sum_{1 \le j \le N_1} \int_{]0 , t [} h_{21 }( t-s) d Z^N_{ 1,j} (s) \right) .
\end{equation}
The function $f_k$ is called the jump rate function of population $k,$ and the functions $h_{12}, h_{21} $ are the ``memory'' or ``interaction'' kernels of the system. Note that $ \lambda^N_{1} ( t) $ and $ \lambda^N_{2} ( t) $ encode the interactions of the system and that, by the way the intensities are defined, particles belonging to the first population depend only on the past jumps of the particles belonging to the second population, and vice versa. In particular, no self-interactions are included in our model.
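For concreteness, the two-population system \eqref{eq:intensity}--\eqref{eq:intensity2} can be simulated by Ogata's thinning algorithm as soon as $f_1$ and $f_2$ are bounded (as they are under Assumption \ref{ass:1} below). The following Python sketch does this for illustrative choices of the rate functions, of Erlang kernels and of the population sizes (none of these values are taken from the paper); it stores the jump times of each population and recomputes the convolutions defining the intensities directly, which is enough for small simulations.
\begin{verbatim}
# Minimal simulation sketch of the two-population Hawkes system via
# Ogata thinning.  All numerical values are illustrative choices.
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

def f1(x): return 0.5 + 1.0 / (1.0 + np.exp(-x))   # bounded, >= 0.5
def f2(x): return 0.5 + 1.0 / (1.0 + np.exp(-x))
F_MAX = 1.5                                        # upper bound on f1, f2

nu1, nu2, n1, n2, c1, c2 = 1.0, 1.0, 2, 2, 1.0, -1.0
def h12(s): return c1 * np.exp(-nu1 * s) * s**n1 / factorial(n1)
def h21(s): return c2 * np.exp(-nu2 * s) * s**n2 / factorial(n2)

N1, N2, T = 50, 50, 30.0
jumps1, jumps2 = [], []                 # jump times of each population
Lam = (N1 + N2) * F_MAX                 # dominating (constant) intensity
t = 0.0
while True:
    t += rng.exponential(1.0 / Lam)     # candidate jump time
    if t > T:
        break
    x1 = sum(h12(t - s) for s in jumps2) / N2   # argument of f1 in lambda^N_1
    x2 = sum(h21(t - s) for s in jumps1) / N1   # argument of f2 in lambda^N_2
    lam1, lam2 = N1 * f1(x1), N2 * f2(x2)
    if rng.uniform() < (lam1 + lam2) / Lam:     # thinning step
        (jumps1 if rng.uniform() < lam1 / (lam1 + lam2) else jumps2).append(t)

print(len(jumps1), "jumps in population 1,", len(jumps2), "in population 2")
\end{verbatim}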
The form of the intensities \eqref{eq:intensity} is the typical form of the intensity of a multivariate nonlinear Hawkes process. Hawkes processes were introduced by Hawkes \cite{Hawkes} and Hawkes and Oakes \cite{ho} as a model for earthquake occurrences. Recently, they have regained a lot of interest as models in neuroscience but also in financial econometrics; see e.g.\ Hansen et al.\ \cite{hrbr} and Chevallier \cite{julien} for the use of Hawkes processes as models of spike trains in neuroscience, and Delattre et al.\ \cite{mathieu} for the use of Hawkes processes in financial modeling. Finally, we refer the reader to Br\'emaud and Massouli\'e \cite{bm} for the stability properties of nonlinear Hawkes processes.
By the form \eqref{eq:intensity} and \eqref{eq:intensity2} of the intensities, we are in a mean-field frame, that is, the intensity processes of one population depend only on the empirical measure of the other population. We will suppose that $ N \to \infty $ such that for $k=1,2, $
$$\lim_{N \to \infty} \frac{N_k}{N} \, \mbox{ exists and is in } \, ]0, 1[ .$$
In \cite{EvaSusanne}, we have shown that in the large population limit, when $ N \to \infty, $ self-sustained periodic behavior emerges even though each single particle does not follow periodic dynamics. In the present paper we show how this periodic behavior is also felt at a finite population size.
\subsection{An associated cascade of diffusion processes}
We represent the Hawkes processes via the associated processes
$$
X^N_{1} (t) =\frac{1}{N_{2}} \sum_{ j = 1 }^{N_{2}} \int_{]0, t]} h_{12 } ( t- s ) d Z^N_{2, j } ( s)
, \;
X^N_{2} (t) =\frac{1}{N_{1}} \sum_{ j = 1 }^{N_{1}} \int_{]0, t]} h_{21 } ( t- s ) d Z^N_{1, j } ( s) .
$$
Each particle belonging to the first population jumps at rate $ f_1 ( X^N_1 (t- ) ) , $ and each particle belonging to the second population at rate $f_2 ( X^N_2 (t- ) ) , $ at time $t.$ If the memory kernels $h_{12} $ and $ h_{21}$ are exponential, then the system $ (X^N_1 (t) , X^N_2 (t))_{t \geq 0}$ is a piecewise deterministic Markov process (PDMP). In the present paper, we do not choose exponential memory kernels, since they induce a very short memory. Instead, we consider {\it Erlang memory kernels}
$$ h_{12 } (s) = c_1 e^{ - \nu_1 s } \frac{ s^{n_1}}{n_1!} , h_{21 } (s) = c_2 e^{ - \nu_2 s } \frac{ s^{n_2}}{n_2!} ,$$
where $c_1 , c_2 \in {\mathbb R} , $ where $ \nu_1 , \nu_2 $ are positive constants and where $ n_1 , n_2 \in {\mathbb N} $ determine the length of the delay within the memory kernel. Such kernels allow for delays in the transmission of information. In this case, the processes $ (X^N_1 (t) , X^N_2 (t))_{t \geq 0} $ alone are not Markov, but they can be completed by a {\it cascade} of processes $X^N_{k, l } , l=1, \ldots , n_k +1 , k=1, 2, $ such that this cascade is Markov. The equations defining the cascade are given by
\begin{equation}\label{eq:cascadepdmp0}
\left\{
\begin{array}{lcl}
d X^N_{1, l } (t) &=& [ - \nu_{1} X^N_{1, l } ( t) + X^N_{1, l+1} (t) ] dt , \; 1 \le l \le n_1 , \\
d X^N_{1, n_1+1} (t) &=& - \nu_{1} X^N_{1, n_1+1} (t) dt + \frac{c_1}{N_{2}} \sum_{j=1}^{N_{2} }d Z^N_{2, j} (t) ,
\end{array}
\right.
\end{equation}
where $X^N_{1 }$ is identified with $X^N_{1, 1 } $ and where each $Z^N_{2, j } $ jumps at rate $f_{2} ( X^N_{2} (t- ) ) .$ A similar cascade describes the evolution of the second population.
Notice that for each population the length of the cascade is related to the length of delay in the corresponding memory kernels.
In the ``large jump intensity, small jump height'' regime, it is natural to study the {\it canonical diffusion approximation} of this cascade. It is given by the following system of equations. The process $X^N_1 (t) $ is approximated by the diffusion process $ Y^N_{1, 1 } (t) ,$ together with its successive cascade terms, solution of
\begin{equation}\label{eq:cascadeapprox0}
\left\{
\begin{array}{lcl}
d Y^N_{1, l } (t) &=& [ - \nu_{1} Y^N_{1, l } ( t) + Y^N_{1, l +1} (t) ] dt , \; \quad \quad 1 \le l \le n_1 , \\
d Y^N_{1, n_1 +1} (t) &=& - \nu_{1} Y^N_{1, n_1+1} (t) dt + c_{1} f_{2} ( Y^N_{ 2, 1} (t) ) dt + c_{1} \frac{\sqrt{f_2 ( Y^N_{ 2, 1} (t) ) }}{ \sqrt{ N_{2}}} d B^{2}_t .
\end{array}
\right .
\end{equation}
In the above system, $B^2 $ is a one dimensional standard Brownian motion which is associated to the jump noise of the second population and appears only in the last term of the cascade. Notice also that only the last term of the cascade encodes the interactions with the second population, through the jump rate function $f_2 $ and the jump intensity $ f_2 ( Y^N_{2, 1 } ( t) ) $ of the second population. The above system \eqref{eq:cascadeapprox0} has to be completed by a similar cascade of length $n_2 +1 $ describing the jump intensity of the second population. This diffusion approximation is a good approximation of the original cascade of PDMP's, and the weak approximation error $| E ( \varphi ( X_t^N)) - E ( \varphi ( Y_t^N)) | , t \le T,$ is of order $ T N^{-2}, $ for sufficiently smooth test functions $\varphi $ (see \cite{EvaSusanne}).
The present paper is devoted to the study of the long time behavior of this diffusion approximation $ Y^N $ and its large deviation properties.
Let us start by discussing the main features of this diffusion process. Firstly, we have to treat the memory terms -- the coordinates governed by the first line of the above cascade -- as auxiliary variables. This gives rise to coordinates of $ Y^N$ without noise.
Therefore we obtain a degenerate high-dimensional diffusion process $Y^N $ driven by two-dimensional Brownian motion. This diffusion turns out to be hypo-elliptic; indeed, it is easy to check that the weak H\"ormander condition is satisfied. The drift of the diffusion is almost linear -- only the two coordinates encoding the interactions between the two populations do not have a linear drift term.
The interactions are transported through the system according to a ``chain of reactions'', i.e.\ the drift of a given coordinate depends only on the coordinate itself and the next one. We call this the {\it cascade structure of the drift vector field}. This structure enables us to use results on the control properties of the diffusion \eqref{eq:cascadeapprox0} obtained by Delarue and Menozzi \cite{delaruemenozzi} in a recent paper establishing density estimates for such chains of differential equations. Due to this structure, the coordinates of the diffusion do not travel at the same speed. Indeed, the coordinate $ Y^N_{1, n_1 +1} (t) ,$ driven by Brownian motion, evolves at speed $t^{1/2} , $ the coordinate $Y^N_{1, n_1} (t) $ at speed $ t^{1+ 1/2 } $ and more generally, $ Y^N_{1, n_1 - l } (t) $ at speed $t^{l +1 + 1/2 }.$ In particular, over small time intervals $ [0, \delta ] $ and for all coordinates which are not driven by Brownian motion, the drift does play a crucial role in the control problem of our diffusion, and this is reflected in the cost associated to the control (see \cite{delaruemenozzi} and the proof of Theorem \ref{theo:6} below).
Cascades or chains of reactions similar to the one described in \eqref{eq:cascadeapprox0} appear also in systems of coupled oscillators in models of heat conduction where the first oscillator is forced by random noise. Rey-Bellet and Thomas \cite{reybellet} have studied the large deviation properties of such systems, and parts of our proofs are inspired by their approach.
\subsection{Monotone cyclic feedback systems}
The deterministic part of the system \eqref{eq:cascadeapprox0} is given by an $ n_1 + n_2 + 2-$dimensional dynamical system $ (x_{k, l} ( t)) , 1 \le l \le n_k+1, k = 1 , 2,$ which is solution of
\begin{equation}\label{eq:ll}
\frac{d x_{1,l} (t) }{d t } = - \nu_1 x_{1, l } ( t) + x_{1, l+1} (t) , 1 \le l \le n_1, \frac{d x_{1, n_1+1} (t) }{d t } = - \nu_1 x_{1, n_1+1} (t) + c_1 f_2 ( x_{2, 1} (t) ) ,
\end{equation}
together with the chain of equations describing the second population. This system is a {\it monotone cyclic feedback system} in the sense of Mallet-Paret and Smith \cite{malletparet-smith}. The most important point is that the long time behavior of \eqref{eq:ll}, i.e.\ the structure of its $\omega-$limit sets, is well-understood. More precisely, there exist explicit conditions ensuring the existence of a single linearly unstable equilibrium point $x^* $ of this limit system, together with a finite number of periodic orbits such that at least one of them is asymptotically orbitally stable (see Theorem \ref{theo:orbit} below). This result goes back to deep theorems in dynamical system's theory, obtained by Mallet-Paret and Smith \cite{malletparet-smith} and used in a different context in Bena\"{\i}m and Hirsch \cite{michel}, relying on the Poincar\'e-Bendixson theorem.
In other words, there exist $ x_1, \ldots , x_M \in {\mathbb R}^{ n_1 + n_2 + 2 } $ such that the solutions $\Gamma_1 (t) , \ldots, \Gamma_M (t)$ of \eqref{eq:ll} issued from these points are non-constant periodic trajectories, i.e., they are cycles (or periodic orbits). At least one of these cycles is an attractor of \eqref{eq:ll}, which means that the other solutions of \eqref{eq:ll} will converge to this limit cycle in the long run (provided they start within the domain of attraction of this limit cycle). The limit cycles encode oscillatory behavior of the system; that is, periods where the first population has large jump intensity, while the jump intensity of the second population is small, are followed by periods where the second population has large jump intensity, but not the first one. This has been supported by simulations provided in \cite{EvaSusanne}.
Due to the presence of noise, the diffusion $Y^N$ may switch from one limit cycle to another. But for large $N , $ $Y^N$ will tend to stay within tubes around the limit cycles $ \Gamma_1, \ldots, \Gamma_M $ during long periods, before eventually leaving such a tube after a time which is of order $ e^{ N \bar V },$ where $\bar V$ is related to the cost of steering the process from the cycle to the boundary of the tube (see Propositions \ref{prop:sigma_0first} and \ref{prop:sigma_0second} below). As time goes by, the diffusion will therefore spend very long time intervals in vicinities of one of the limit cycles -- interrupted by short-lasting excursions into the rest of the state space. It is therefore natural to consider the concentration of the invariant measure $\mu^N $ of $Y^N$ around the periodic orbits -- if this invariant measure exists and is unique.
It is not difficult to show that, for fixed $N, $ the process possesses a unique invariant probability measure $ \mu^N .$ Moreover, a Lyapunov type argument implies that the process converges to its invariant regime at exponential speed. For fixed $N, $ $\mu^N $ is of full support but its mass is concentrated around the periodic orbits of the limit system \eqref{eq:ll}. More precisely, we can show that for any open set $ D$ with compact closure and smooth boundary,
\begin{equation}\label{eq:ld}
\mu^N ( D) \sim C e^{ - [\inf_{ x \in D} W (x)] N} ,
\end{equation}
where the cost function $W(x) $ is related to the control properties of system \eqref{eq:cascadeapprox0} and is given explicitly in \eqref{eq:W} below.
In order to prove this result, we rely on the approach of Freidlin and Wentzell \cite{FW} to sample path large deviations of diffusions, developed further in Dembo and Zeitouni \cite{DZ}. Both \cite{FW} and \cite{DZ} suppose that the underlying diffusion is elliptic -- which is not the case in our situation. Recently, Rey-Bellet and Thomas \cite{reybellet} have extended the results of Freidlin and Wentzell \cite{FW} to degenerate diffusions, and our proof is inspired by their paper. The most important point of our paper is to establish the necessary control theory in our framework. For this, an important tool is given by recent results obtained by Delarue and Menozzi \cite{delaruemenozzi}. Moreover, since we are dealing with periodic orbits rather than with equilibrium points, we have to extend the notion of {\it small time local controllability} to the situation where the drift vector field does play a role in the sense of a shift on the orbit, see Theorem \ref{theo:stlc} below.
This paper is organized as follows. In Section \ref{sec:2} we state the main assumptions and provide a short study of the limit system together with its $ \omega-$limit set in Theorem \ref{theo:orbit}. In Section \ref{sec:3}, we state the main results of the paper, which are the positive Harris recurrence of $Y^N$ in Theorem \ref{theo:harris} together with the large deviation properties of the invariant measure $\mu^N$ of the diffusion as $ N \to \infty , $ in Theorem \ref{theo:main}. Section \ref{sec:control} provides a proof of the Harris recurrence of $Y^N, $ based on the control theorem. Finally, Section \ref{sec:5} is devoted to a study of the control properties of the process. Here, we first show that the process is strongly completely controllable. The proof of this fact relies on the prescription of a control that allows us to decouple the two populations and to make use of the linear structure of the (main part of the) drift. We also study the continuity properties of the cost functional -- a study which is not trivial in the present frame of strong degeneracy of the diffusion matrix. Section \ref{sec:6} gives the proof of Theorem \ref{theo:main}.
\section{Main assumptions and results}\label{sec:2}
In what follows, we use the notations introduced above. Moreover, for fixed $ n \geq 1 , $ elements $x $ of $ {\mathbb R}^n $ shall be denoted by $ x = ( x_1 , \ldots , x_n ) ,$ and $ {\mathbb R}^n $ will be endowed with the Euclidean norm denoted by $ \| x\| .$ Finally, for matrices $ A \in {\mathbb R}^{n \times n } ,$ $ \| A \| $ denotes the associated operator norm.
Our first main assumption is the following.
\begin{ass}\label{ass:1}
(i) $f_1$ and $f_2: {\mathbb R} \to {\mathbb R}_+ $ are bounded analytic functions which are strictly lower bounded, i.e., $ f_1 (x) , f_2 (x) \geq \underbar f > 0 $ for all $x \in {\mathbb R} .$ Moreover, $f_1 $ and $f_2 $ are non decreasing.\\
(ii) There exists a finite constant $L $ such that
for every $x$ and $x' $ in ${\mathbb R},$
\begin{equation}
\label{Lipsch-f}
|f_1 (x) -f_1 (x') | + |f_2 (x) -f_2 (x') | \le L |x-x'|.
\end{equation}\\
(iii) The functions $ h_{12}, h_{21} $ are given by
\begin{equation}\label{eq:erlang}
h_{12 } (s) = c_1 e^{ - \nu_1 s } \frac{ s^{n_1}}{n_1!} , h_{21 } (s) = c_2 e^{ - \nu_2 s } \frac{ s^{n_2}}{n_2!} ,
\end{equation}
where $n_1, n_2 \in {\mathbb N}_0 , c_1, c_2 \in \{-1,1\} $ and $ \nu_1 , \nu_2 > 0$ are fixed constants.
\end{ass}
Under the above assumption, it is standard to show that the Hawkes process with the prescribed dynamics above exists.
\begin{prop}[Prop.1 of \cite{EvaSusanne}]
Under Assumption \ref{ass:1} there exists a path-wise unique Hawkes process $ (Z^{N}_{k,i} (t))_{ 1 \le k \le 2, 1 \le i \le N_k } $ with intensity \eqref{eq:intensity}, for all $ t \geq 0.$
\end{prop}
\subsection{An associated cascade of piecewise deterministic Markov processes (PDMP's)}\label{sec:erlang}
In the sequel we establish a link between the Hawkes process $ (Z^{N}_{k,i} (t))_{1 \le k \le 2, 1 \le i \le N_k} $ -- which is of infinite memory -- and an associated system of Markov processes. This relation exists thanks to the very specific structure of the memory kernels $h_{12}, h_{21} $ in \eqref{eq:erlang}. Such kernels are called {\it Erlang memory} kernels; they can describe delays in the transmission of information. In \eqref{eq:erlang}, $n_1+1$ is the order of the delay, i.e., the number of differential equations needed for population $1$ to obtain a system without delay terms, and $ n_2 +1 $ is the order of the delay for population $2.$ The delay of the influence e.g.\ of population $2$ on population $1$ is distributed and takes its maximum absolute value at $n_1/\nu_1$ time units back in time, and its mean is $(n_1+1)/\nu_1$ (if normalizing to a probability density). The higher the order of the delay, the more concentrated is the delay around its mean value, and in the limit $n_1 \rightarrow \infty$ while keeping $(n_1+1)/\nu_1$ fixed, the delay converges to a discrete delay. The sign of $c_1$ and $c_2$ indicates whether the influence is inhibitory or excitatory.
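Since the normalized Erlang kernel is a Gamma density, the delay interpretation just given can be checked directly. The short Python sketch below (all parameter values are illustrative) evaluates the normalized kernel on a grid and compares its argmax and its mean with $n/\nu$ and $(n+1)/\nu$.
\begin{verbatim}
# Sketch: the normalized Erlang kernel  nu^{n+1} s^n e^{-nu s}/n!  is a
# Gamma(n+1, nu) density; its mode is n/nu and its mean is (n+1)/nu.
import numpy as np
from math import factorial

n, nu = 3, 2.0                                   # illustrative values
s = np.linspace(0.0, 10.0, 20001)
h = nu**(n + 1) * s**n * np.exp(-nu * s) / factorial(n)

ds = s[1] - s[0]
print("argmax of h:", s[np.argmax(h)], " expected n/nu      =", n / nu)
print("mean of h  :", np.sum(s * h) * ds, " expected (n+1)/nu =", (n + 1) / nu)
print("total mass :", np.sum(h) * ds)            # should be close to 1
\end{verbatim}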
We introduce the family of adapted c\`adl\`ag processes
\begin{equation}\label{eq:intensity21}
X^N_{1} (t) :=\frac{1}{N_{2}} \sum_{ j = 1 }^{N_{2}} \int_{]0, t]} h_{12 } ( t- s ) d Z^N_{2, j } ( s) = \int_{]0, t]} h_{12 } (t-s) d \bar Z^N_{2} ( s)
\end{equation}
and
\begin{equation}\label{eq:intensity22}
X^N_{2} (t) :=\frac{1}{N_{1}} \sum_{ j = 1 }^{N_{1}} \int_{]0, t]} h_{21 } ( t- s ) d Z^N_{1, j } ( s) = \int_{]0, t]} h_{21 } (t-s) d \bar Z^N_{1} ( s) ,
\end{equation}
where $ \bar Z^N_{k} ( s) = \frac{1}{N_{k}} \sum_{j=1}^{N_{k}} Z^N_{k, j } ( s) , k=1, 2 .$ Recalling \eqref{eq:intensity}, it is clear that the dynamics of the system is entirely determined by the dynamics of the processes $ X^N_{k } ( t- ) , t \geq 0 .$ Indeed, any particle belonging to the first population jumps at rate $ f_1 (X^N_{1} (t-)), $ and any particle belonging to the second population at rate $ f_2 ( X^N_{2} (t-) ) .$ Without assuming the memory kernels to be Erlang kernels, the system $(X^N_{k } , 1 \le k \le 2 ) $ is not Markovian: For general memory kernels, Hawkes processes are truly infinite memory processes.
When the kernels are Erlang, given by \eqref{eq:erlang},
taking formal derivatives in \eqref{eq:intensity21} and \eqref{eq:intensity22} with respect to time $t$ and introducing for any $ k =1,2$ and $ 1 \le l \le n_{k } +1 $
\begin{equation}
X^N_{k, l} ( t) := c_{k} \, \int_{]0, t]} \frac{ (t-s)^{n_k- (l-1)}}{(n_k- (l-1))! } e^{- \nu_{k} ( t-s)} d \bar Z^N_{k+1} ( s) ,
\end{equation}
where we identify population $ 2+1$ with population $1$, we obtain the following system of stochastic differential equations driven by Poisson random measure.
\begin{equation}\label{eq:cascadepdmp}
\left\{
\begin{array}{lcl}
d X^N_{k, l } (t) &=& [ - \nu_{k} X^N_{k, l } ( t) + X^N_{k, l+1} (t) ] dt , \; 1 \le l \le n_k , \\
d X^N_{k, n_k+1} (t) &=& - \nu_{k} X^N_{k, n_k+1} (t) dt + c_{k} d \bar Z^N_{k+1} (t) ,
\end{array}
\right.
\end{equation}
$k=1, 2.$
Here, $X^N_{k }$ is identified with $X^N_{k, 1 }, $ and each $Z^N_{k, j } $ jumps at rate $f_{k} ( X^N_{k, 1 } (t- ) ) .$ We call the system \eqref{eq:cascadepdmp} a {\em cascade of memory terms}.
Thus, the dynamics of the Hawkes process $ (Z^N_{k, i } (t))_{ 1 \le k \le 2, 1 \le i \le N_k}$ is entirely determined by the PDMP $ ( X^N_{k, l })_{(1\le k \le 2, 1 \le l \le n_k +1 )} $ of dimension $n := n_1 + n_2 + 2 .$
\subsection{A diffusion approximation in the large population regime}
In the large population limit, i.e.\ when $ N \to \infty , $ it is natural to consider the diffusion process approximating the above cascade of PDMP's. This diffusion approximation is given by
\begin{equation}\label{eq:cascadeapprox}
\left\{
\begin{array}{lcl}
d Y^N_{k, l } (t) &=& [ - \nu_{k} Y^N_{k, l } ( t) + Y^N_{k, l +1} (t) ] dt , \; \quad \quad 1 \le l \le n_k , \\
d Y^N_{k, n_k +1} (t) &=& - \nu_{k} Y^N_{k, n_k+1} (t) dt + c_{k} f_{k+1} ( Y^N_{ k+1, 1} (t) ) dt + c_{k} \frac{\sqrt{f_{k+1} ( Y^N_{ k+1, 1} (t) ) }}{ \sqrt{ N_{k+1}}} d B^{ k+1}_t ,
\end{array}
\right .
\end{equation}
$k=1, 2,$ where population $2+1$ is identified with $1$ and where $B^{1} , B^2 $ are independent standard Brownian motions (compare to Theorem 4 of \cite{EvaSusanne}). The diffusion $Y^N = (Y^N_t)_{t \in {\mathbb R}_+} $ takes values in $ {\mathbb R}^n $ with $ n = n_1 + n_2 +2 .$
By Theorem 4 of \cite{EvaSusanne}, we know that $Y^N$ is a good approximation of the PDMP $X^N $ since the weak approximation error can be controlled by
$ \sup_x | E_x ( \varphi ( X^N (t) ))- E_x ( \varphi ( Y^N (t) )) | \le C ( \varphi) T N^{-2 } ,$
for all $ t \le T , $ for sufficiently smooth test functions $ \varphi .$ We therefore concentrate on the study of this diffusion process $Y^N.$
We write $A^N$ for the infinitesimal generator of the process \eqref{eq:cascadeapprox}. Moreover, we denote by $ Q_x^{N}$ the law of the solution $ (Y^N(t), t \geq 0) $ of \eqref{eq:cascadeapprox}, starting from $Y^N (0) = x ,$ for some $x \in {\mathbb R}^n , $ and by $ E_x^N$ the corresponding expectation.
We study the above diffusion when $ N_1 , N_2 \to \infty $ such that $ N_1/N =: p_1 $ and $ N_2/N =: p_2$ remain constant. Re-numbering the coordinates of $ Y^N $ as $ (Y^N_1, \ldots, Y^N_n ), $ where $ n = n_1 + n_2 + 2, $ we may introduce
\begin{equation}\label{eq:drift}
b(x) := \left(
\begin{array}{c}
- \nu_1 x_{ 1 } + x_{2 } \\
- \nu_1 x_{2} + x_{3} \\
\vdots\\
- \nu_1 x_{n_1 +1 } + c_1 f_2 ( x_{n_1+2} ) \\
- \nu_2 x_{n_1+2 } + x_{n_1+3 } \\
\vdots \\
- \nu_2 x_{n} + c_2 f_1 ( x_{1} )
\end{array}
\right) , \;
\sigma (x) := \left(
\begin{array}{cc}
0&0\\
\vdots & \vdots \\
0 & \frac{c_1}{\sqrt{p_2}} \sqrt{ f_2 ( x_{n_1+2 } ) } \\
0 & 0 \\
\vdots & \vdots \\
\frac{c_2}{\sqrt{ p_1} } \sqrt{ f_1 ( x_{1 } ) } & 0
\end{array}
\right) ,
\end{equation}
which are the drift vector of \eqref{eq:cascadeapprox} and the associated diffusion matrix, an $ n \times 2$ matrix. Notice that $ \sigma $ is highly degenerate; there is a two-dimensional Brownian motion driving an $n$-dimensional system. We may rewrite \eqref{eq:cascadeapprox} as
\begin{equation}\label{eq:diffusionsmallnoise}
d Y_t^N = b ( Y_t^N ) dt + \frac{1}{ \sqrt{N}} \sigma ( Y_t^N ) d B_t ,
\end{equation}
with $ B_t = ( B^1_t , B^2_t ) .$
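For illustration, \eqref{eq:diffusionsmallnoise} can be simulated with a plain Euler--Maruyama scheme once the drift $b$ and the matrix $\sigma$ of \eqref{eq:drift} are implemented. The following Python sketch does this; all parameter values, in particular the bounded, lower bounded and nondecreasing choice of $f_1, f_2$, are illustrative and not taken from \cite{EvaSusanne}, and no attempt is made to tune them so that Assumption \ref{ass:4} below holds.
\begin{verbatim}
# Sketch: Euler-Maruyama simulation of dY = b(Y)dt + N^{-1/2} sigma(Y)dB
# with b and the n x 2 matrix sigma as in (eq:drift).  Illustrative values.
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 2, 2
nu1, nu2 = 1.0, 1.0
c1, c2 = 1.0, -1.0
p1 = p2 = 0.5
N = 400
n = n1 + n2 + 2

def f1(x): return 0.5 + 2.0 / (1.0 + np.exp(-x))   # bounded, >= 0.5, nondecreasing
def f2(x): return 0.5 + 2.0 / (1.0 + np.exp(-x))

def b(x):
    out = np.empty(n)
    for i in range(n1):                    # first population, memory terms
        out[i] = -nu1 * x[i] + x[i + 1]
    out[n1] = -nu1 * x[n1] + c1 * f2(x[n1 + 1])
    for i in range(n1 + 1, n1 + 1 + n2):   # second population, memory terms
        out[i] = -nu2 * x[i] + x[i + 1]
    out[n - 1] = -nu2 * x[n - 1] + c2 * f1(x[0])
    return out

def sigma(x):
    s = np.zeros((n, 2))
    s[n1, 1] = c1 / np.sqrt(p2) * np.sqrt(f2(x[n1 + 1]))
    s[n - 1, 0] = c2 / np.sqrt(p1) * np.sqrt(f1(x[0]))
    return s

dt, T = 1e-3, 50.0
x = np.zeros(n)
path = []
for _ in range(int(T / dt)):
    dB = rng.normal(scale=np.sqrt(dt), size=2)
    x = x + b(x) * dt + sigma(x) @ dB / np.sqrt(N)
    path.append(x[0])
path = np.array(path)
print("range of the first coordinate:", path.min(), path.max())
\end{verbatim}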
The aim of this paper is to study this diffusion $Y^N $ and its long time behavior in the large population limit (i.e.\ as the noise term tends to $0$). We will show that this diffusion presents oscillations in the long run and we will study the large population limit of the associated invariant measure. This will be done relying on the Freidlin-Wentzell theory (see \cite{FW} and \cite{DZ}) on sample path large deviations for diffusion processes which has been extended recently to the case of (some) degenerate diffusions in Rey-Bellet and Thomas \cite{reybellet}.
We start with a discussion of the deterministic limit system associated to \eqref{eq:diffusionsmallnoise}.
\subsection{Monotone cyclic feedback systems}
Consider the solution of
\begin{equation}\label{eq:limit}
\dot x (t) = b (x (t) ) ,
\end{equation}
i.e.\ of
\begin{eqnarray}\label{eq:cascade}
\frac{d x_{i} (t) }{dt} &=& - \nu_1 x_i (t) + x_{i+1} (t) , \, \, 1 \le i \le n_1 , \;
\frac{d x_{n_1 +1 } (t) }{dt} = - \nu_1 x_{n_1 + 1 } (t) + c_1 f_{2} ( x_{n_1+2 } (t)) , \nonumber \\
\frac{d x_{i} (t) }{dt} &=& - \nu_2 x_{i} (t) + x_{i+1} (t) , \, \, n_1+2 \le i < n , \; \frac{d x_{n } (t) }{dt} = - \nu_2 x_{n } (t) + c_2 f_{1} ( x_{1 } (t)) .
\end{eqnarray}
This system is a monotone cyclic feedback system as considered e.g.\ in \cite{malletparet-smith} or as in (33) and (34) of \cite{michel}. If $c_1 c_2 > 0, $ then the system \eqref{eq:cascade} is of total positive feedback, otherwise it is of negative feedback. It can be shown easily (see Prop.\ 5 of \cite{EvaSusanne}) that \eqref{eq:cascade} admits a unique equilibrium $x^*$ if $ c_1c_2 < 0 .$
We now present special cases where system \eqref{eq:cascade} is necessarily attracted to non-equilibrium periodic orbits. Recall that $ n = n_1+ n_2 +2$ is the dimension of \eqref{eq:cascade}.
The following theorem is based on Theorem 4.3 of \cite{malletparet-smith} and generalizes the result obtained in Theorem 6.3 of \cite{michel}. We quote it from \cite{EvaSusanne}.
\begin{theo}[Theorem 3 of \cite{EvaSusanne}]\label{theo:orbit}
Grant Assumption \ref{ass:1}. Put $\varrho := c_1 c_2 f_1' (x^*_{1}) f_2' (x^*_{n_1+2}) $ and suppose that $ \varrho < 0.$ Consider all solutions $\lambda$ of
\begin{equation}\label{eq:racinesunite}
(\nu_1 + \lambda)^{n_1 +1} \cdot (\nu_2 + \lambda)^{n_2
+1} = \varrho
\end{equation}
and suppose that there exist at least two solutions $\lambda$ of
\eqref{eq:racinesunite} such that
\begin{equation}\label{eq:unstable}
\mbox{Re } ( \lambda ) > 0.
\end{equation}
Then
$x^* $ is linearly unstable, and the system \eqref{eq:cascade}
possesses at least one, but no more than a finite number of non constant periodic
orbits. Any $ \omega-$limit set is either the equilibrium $ x^* $ or one of these periodic orbits. At least one of the periodic orbits is orbitally asymptotically stable.
\end{theo}
Notice that $\varrho < 0 $ implies that $ c_1 c_2 < 0, $ that is, we are in the frame of a total negative feedback. In the sequel, we shall always assume that the assumptions of Theorem \ref{theo:orbit} are satisfied and we introduce
\begin{ass}\label{ass:4}
We suppose that $\varrho := c_1 c_2 f_1' (x^*_{1}) f_2' (x^*_{n_1+2}) $ satisfies $ \varrho < 0$ and that there exist at least two solutions $\lambda$ of \eqref{eq:racinesunite} with $\mbox{Re } ( \lambda ) > 0.$
\end{ass}
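Whether Assumption \ref{ass:4} holds for a concrete set of parameters can be checked numerically. At an equilibrium of \eqref{eq:cascade} we have $x_{i+1} = \nu_1 x_i$ for $1 \le i \le n_1$, hence $\nu_1^{n_1+1} x_1 = c_1 f_2(x_{n_1+2})$ and, similarly, $\nu_2^{n_2+1} x_{n_1+2} = c_2 f_1(x_1)$, so that $x^*$ is determined by a scalar fixed point equation in $x_1$; moreover, \eqref{eq:racinesunite} is a polynomial equation in $\lambda$. The Python sketch below carries out both steps for an illustrative choice of $f_1, f_2, \nu_1, \nu_2, n_1, n_2$ (ours, not taken from the paper; for these values the condition may well fail -- the point is the procedure) and counts the roots with positive real part.
\begin{verbatim}
# Sketch: numerical check of Assumption 4 for illustrative parameters.
import numpy as np
from scipy.optimize import brentq

n1, n2 = 2, 2
nu1, nu2 = 0.5, 0.5
c1, c2 = 1.0, -1.0

def f1(x): return 0.2 + 1.0 / (1.0 + np.exp(-x))   # bounded in (0.2, 1.2)
def f2(x): return 0.2 + 1.0 / (1.0 + np.exp(-x))

def dnum(f, x, h=1e-6):                  # central difference for f'
    return (f(x + h) - f(x - h)) / (2 * h)

# Equilibrium: nu1^{n1+1} x1 = c1 f2(x_{n1+2}),  nu2^{n2+1} x_{n1+2} = c2 f1(x1).
def g(x1): return nu1**(n1 + 1) * x1 - c1 * f2(c2 * f1(x1) / nu2**(n2 + 1))
B = 1.2 / nu1**(n1 + 1) + 1.0            # bracket, since |f2| <= 1.2
x1 = brentq(g, -B, B)
x2 = c2 * f1(x1) / nu2**(n2 + 1)         # this is x*_{n1+2}

rho = c1 * c2 * dnum(f1, x1) * dnum(f2, x2)
coeffs = np.poly([-nu1] * (n1 + 1) + [-nu2] * (n2 + 1))  # (l+nu1)^{n1+1}(l+nu2)^{n2+1}
coeffs[-1] -= rho
roots = np.roots(coeffs)
print("rho =", rho)
print("roots with positive real part:", np.sum(roots.real > 0))
\end{verbatim}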
Under Assumption \ref{ass:4}, there exists a finite number of periodic orbits, and we write $ K_1 = \{ x^* \} $ for the unstable equilibrium point and $K_2 , \ldots , K_L $ for the periodic orbits of the limit system \eqref{eq:cascade}. Moreover, we write $K = \bigcup_{l=1}^L K_l $ and
\begin{equation}\label{eq:b}
B_\varepsilon (K) = \{ x \in {\mathbb R}^n : dist ( x, K) < \varepsilon \} ,
\end{equation}
where $dist ( x, K) = \inf \{ \| x - y \| , y \in K \} $ and where $ \| \cdot \| $ is the Euclidean norm on ${\mathbb R}^n.$
Due to the presence of noise, the diffusion $Y^N$ will be able to switch from the vicinity of one periodic orbit to the vicinity of another orbit. However, as $N \to \infty, $ the diffusion will stay within tubes around periodic orbits during longer and longer periods, before eventually leaving such a tube after a time which is of order $ e^{ N \bar V },$ where $\bar V$ is related to the cost of steering the process from the orbit to the boundary of the tube. This behavior can be read on the invariant measure of $Y^N, $
and the main result of this paper is to show that the invariant measure of the diffusion will concentrate around the stable periodic orbits of \eqref{eq:cascade} as $ N \to \infty .$
\section{Main results : Large deviations for the diffusion approximation $Y^N$ }\label{sec:3}
We start with some preliminary results on the diffusion process $Y^N .$
\begin{theo}\label{theo:harris}
Grant Assumption \ref{ass:1}. Then $Y^N$ is positive Harris recurrent with unique invariant probability measure $ \mu^N .$ The invariant measure $ \mu^N $ is of full support.
\end{theo}
The proof of this result will be given in Section \ref{sec:control} below. It is based on two main ingredients. The first ingredient is the existence of a Lyapunov function, a result that has been obtained in \cite{EvaSusanne} and that we quote from there. In order to state this result, introduce
$\tau_\varepsilon = \inf \{ t \geq 0 : Y_t^N \in B_\varepsilon (K) \} ,$ where $ B_\varepsilon (K) $ has been defined in \eqref{eq:b}.
\begin{prop}[Prop.\ 5 and Theorem 5 of \cite{EvaSusanne}]\label{prop:lyapunov}
There exists a function $ G : {\mathbb R}^n \to {\mathbb R}_+ $ not depending on $N,$ such that $ \lim_{ |x | \to \infty } G( x) = \infty ,$ and constants $a ,b > 0 $ not depending on $N$ such that $ A^N G \le - a G + b .$
Moreover, we also have
$$ E^N_x \tau_\varepsilon \le c G(x) ,$$
for some constant $c > 0$ not depending on $N.$
\end{prop}
The second main ingredient to prove the Harris recurrence is the following: Despite the fact that $Y^N$ is highly degenerate, the weak H\"ormander condition is satisfied on the whole state space, as it has been shown in Proposition 7 of \cite{EvaSusanne}.
Once the Harris recurrence of the process is proven, we turn to the large deviation properties of $Y^N.$ We firstly introduce the cost functional related to the control problem of the diffusion $Y^N.$ To that end, for some time horizon $t_1<\infty$ which is arbitrary but fixed, write $\,\tt H\,$ for the Cameron-Martin space of measurable functions ${h}:[0,t_1]\to {\mathbb R}^2 $ having absolutely continuous components ${ h}^\ell(t) = \int_0^t \dot h^\ell(s) ds$ with $\int_0^{t_1}[{\dot h}^\ell]^2(s) ds < \infty$, $1\le \ell\le 2$. For $ h \in \,\tt H\, , $ we put $ \| \dot h\|_\infty := \| \dot h^1 \|_\infty + \| \dot h ^2 \|_\infty .$ For $x\in {\mathbb R}^n $ and ${ h}\in{\tt H}$, consider the deterministic system
\begin{equation}\label{eq:generalcontrolsystem}
\varphi = \varphi^{( {h}, x)} \; \mbox{solution to}\; d \varphi (t) = b ( \varphi (t) ) dt + \sigma( \varphi (t) ) \dot h(t) dt, \; \mbox{with $\varphi (0)=x,$}
\end{equation}
on $[0, t_1 ].$ As in Dembo and Zeitouni \cite{DZ}, we introduce the rate function $ I_{x, t_1} (f) $ on $ C ( [0, t_1 ], {\mathbb R}^n ) $ by
\begin{equation}\label{eq:rate}
I_{x, t_1} (f) = \inf_{ {h} \in {\tt H} : \varphi^{( { h}, x)} (t) = f(t) , \; \forall t \le t_1 } \frac12 \int_0^{t_1} [| \dot{ h}^1(s)|^2+ | \dot{ h}^2 (s) |^2] ds ,
\end{equation}
where $ \inf \emptyset = + \infty . $ Notice that the above rate function is not explicit since the diffusion matrix $\sigma $ is degenerate.
We then introduce the cost function $ V_t (x, y )$ which is given by
$$ V_t ( x, y) = \inf_{ { h } \in {\tt H} : \varphi^{ ( {h} , x)} (t) = y } \frac12 \int_0^t [| \dot{ h}^1(s)|^2+ | \dot{ h}^2 (s) |^2] ds , \; V( x, y ) = \inf_{t > 0 } V_t ( x,y).$$
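Although $V_t(x,y)$ is not explicit, it can be approximated numerically by discretizing the control problem: parametrize $\dot h$ as a piecewise constant function on a time grid, integrate \eqref{eq:generalcontrolsystem} with an Euler scheme and minimize the discretized energy together with a penalty on the endpoint constraint. The Python sketch below illustrates this for the case $n_1 = n_2 = 1$; the drift and diffusion matrix are those of \eqref{eq:drift}, all numerical values are illustrative, and the penalty method only yields a rough approximation of $V_T(x,y)$.
\begin{verbatim}
# Sketch: rough approximation of V_T(x,y) by discretizing the control
# (Euler scheme + piecewise constant controls + endpoint penalty), n1=n2=1.
import numpy as np
from scipy.optimize import minimize

nu1 = nu2 = 1.0; c1, c2 = 1.0, -1.0; p1 = p2 = 0.5
def f1(z): return 0.5 + 1.0 / (1.0 + np.exp(-z))
def f2(z): return 0.5 + 1.0 / (1.0 + np.exp(-z))

def b(x):
    return np.array([-nu1 * x[0] + x[1], -nu1 * x[1] + c1 * f2(x[2]),
                     -nu2 * x[2] + x[3], -nu2 * x[3] + c2 * f1(x[0])])
def sigma(x):
    s = np.zeros((4, 2))
    s[1, 1] = c1 / np.sqrt(p2) * np.sqrt(f2(x[2]))
    s[3, 0] = c2 / np.sqrt(p1) * np.sqrt(f1(x[0]))
    return s

T, K = 2.0, 40
dt = T / K
x_start = np.zeros(4)
y = np.array([0.3, 0.0, -0.2, 0.0])
penalty = 1e3

def objective(u_flat):
    u = u_flat.reshape(K, 2)
    x = x_start.copy()
    for k in range(K):                     # controlled Euler scheme
        x = x + (b(x) + sigma(x) @ u[k]) * dt
    energy = 0.5 * np.sum(u ** 2) * dt
    return energy + penalty * np.sum((x - y) ** 2)

res = minimize(objective, np.zeros(2 * K), method="L-BFGS-B")
u_opt = res.x.reshape(K, 2)
print("approximate discretized cost:", 0.5 * np.sum(u_opt ** 2) * dt)
\end{verbatim}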
Finally, for any two sets $ B, C \in {\mathcal B} ( {\mathbb R}^n ) $ we define
$$ V( B, C ) = \inf_{x \in B, y \in C } V( x, y ) .$$
As in Freidlin and Wentzell \cite{FW} we say that two points $x$ and $ y $ are equivalent and we write $ x \sim y , $ if and only if $V( x, y ) = V(y, x ) = 0 . $ Notice that the $\omega -$limit set $K= \{ x^*\} \cup \bigcup_{l=2}^L K_l $ consists of $L$ such equivalence classes with respect to this equivalence relation. In \cite{FW}, Chapter 6.3, Freidlin and Wentzell introduce graphs on the set $ \{1 , \ldots , L \} $ in the following way. For any fixed $ i \le L , $ an $ \{i \}-$graph is a set consisting of arrows $ m \to n $ where all starting points $m$ of such an arrow are $\neq i ,$ such that every $ m \neq i $ is the initial point of exactly one arrow and such that there are no closed cycles in the graph. Intuitively, such an $\{i\}-$graph describes the possible ways of going from some $ K_m , $ $m \neq i , $ to $K_i, $ following a path $K_m =: K_{m_1} \rightarrow K_{m_2} \rightarrow K_{m_3} \rightarrow \ldots \rightarrow K_i, $ without hitting one of the sets $K_{m_j} $ twice. Therefore, $ \{i\}-$graphs describe all possible ways of passages between the sets $ K_j ,$ ending up in $K_i.$ An alternative description of such passages is given by means of the hierarchy of $L-$cycles, as pointed out in Freidlin and Wentzell \cite{FW}, Chapter 6.6. Writing $ G \{ i \} $ for the set of all possible $ \{i\}-$graphs, we then introduce
\begin{equation}
W( K_i ) = \min_{g \in G\{i\} } \sum_{m \to n \in g } V( K_m, K_n ) ,
\end{equation}
which is the minimal cost of going from any $ K_j $ to $K_i $ for some $ j \neq i .$
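For a small number $L$ of equivalence classes, the minimization over $\{i\}$-graphs can be carried out by brute force: an $\{i\}$-graph assigns to every $m \neq i$ exactly one target $g(m)$, and the absence of closed cycles means precisely that iterating $g$ from any $m \neq i$ eventually reaches $i$. The Python sketch below computes $W(K_i)$ in this way for a made-up matrix of values $V(K_m, K_n)$.
\begin{verbatim}
# Sketch: brute force evaluation of W(K_i) over all {i}-graphs.
# The matrix V of values V(K_m, K_n) below is made up.
import itertools
import numpy as np

L = 4
V = np.array([[0.0, 1.0, 2.5, 3.0],
              [1.2, 0.0, 0.7, 2.0],
              [2.2, 0.9, 0.0, 0.4],
              [3.1, 1.8, 0.5, 0.0]])

def W(i):
    others = [m for m in range(L) if m != i]
    best = np.inf
    # each m != i gets exactly one arrow m -> g(m), with g(m) != m
    for targets in itertools.product(range(L), repeat=len(others)):
        g = dict(zip(others, targets))
        if any(g[m] == m for m in others):
            continue
        ok = True
        for m in others:                 # no cycles <=> every path reaches i
            seen, cur = set(), m
            while cur != i:
                if cur in seen:
                    ok = False
                    break
                seen.add(cur)
                cur = g[cur]
            if not ok:
                break
        if ok:
            best = min(best, sum(V[m, g[m]] for m in others))
    return best

print([W(i) for i in range(L)])
\end{verbatim}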
The following theorem is our main result.
\begin{theo}\label{theo:main}
Grant Assumptions \ref{ass:1} and \ref{ass:4}. Then for any open set $ D$ with compact closure and smooth boundary satisfying $ dist ( D, K ) > 0, $ we have
\begin{equation}\label{eq:main}
\lim_{N \to \infty } \frac1N \log \mu^N (D) = - \inf_{ x \in D} W(x) ,
\end{equation}
where
\begin{equation}\label{eq:W}
W(x) = \min_i \left( W( K_i ) + V ( K_i , x) \right) - \min_j W( K_j ) .
\end{equation}
\end{theo}
A similar result has been established by Rey-Bellet and Thomas in a recent paper on the asymptotic behavior of thermal non-equilibrium steady states in driven chains of anharmonic oscillators, which is a model of heat conduction, see \cite{reybellet}. Our proof is inspired by their approach. The main difference with respect to their paper is the fact that the $\omega -$limit set of our model is built of periodic orbits rather than stable equilibrium points. As a consequence, the action of the drift vector field plays an important role close to points of any of the periodic orbits. This implies that the property of {\it small time local controllability} -- essential for the proof -- has to be adapted to the present situation. Moreover, the controllability of our system has to be carefully studied -- indeed we are facing a degenerate situation where Brownian motion is only present in two coordinates of a (possibly) high-dimensional system. As a consequence, the controllability of the system as well as the continuity of $ V (x, y) $ with respect to $x$ and $y$ are difficult questions. It is the {\it cascade structure} of the drift vector which is crucial for our purpose -- we will come back to this point later.
\section{Proof of Theorem \ref{theo:harris}}\label{sec:control}
We will use the control theorem, which goes back to Stroock and Varadhan \cite{StrVar-72} (see also Millet and Sanz-Sol\'e \cite{MilSan-94}, Theorem 3.5), in order to prove Theorem \ref{theo:harris}. The following proposition summarizes the inclusion provided by the control theorem which is important for our purpose.
\begin{prop}\label{theo:4bis}
Grant Assumption \ref{ass:1}. Denote by $ Q_x^{N,t_0} $ the law of the solution $ (Y^N(t))_{0 \le t \le t_0} $ of \eqref{eq:cascadeapprox}, starting from $Y^N (0) = x .$ Let $\,\varphi = \varphi^{(N, {h},x)}\,$ denote a solution to
\begin{equation}\label{eq:withN}
d \varphi (t) \;=\; b ( \varphi (t) )\, dt \;+\; \frac{1}{\sqrt{N}} \sigma ( \varphi (t) )\, \dot{h}(t)\, dt \quad,\quad \varphi (0)=x.
\end{equation}
Fix $ x $ and $ { h } \in \tt H $ such that $\,\varphi = \varphi^{(N, {h},x)}\,$ exists on some time interval $ [ 0, \widetilde T ] $ for $\widetilde T > t_0.$ Then
$$ \left(\varphi^{(N, { h} , x ) }\right)_{| [0, t_0 ] } \in \overline{ {\rm supp} \left( Q_x^{N,t_0 } \right)}.$$
\end{prop}
We now show how to use Proposition \ref{theo:4bis} in order to prove Theorem \ref{theo:harris}.
{\it Proof of Theorem \ref{theo:harris}. }
By Proposition \ref{prop:lyapunov}, putting $F := \{ x : G(x) \le 2 b /a \} ,$ the set $F$ is visited infinitely often by the process $Y^N , $ almost surely.
Fix $ x \in F $ and recall that $ x^* $ is the unique equilibrium point of the system \eqref{eq:cascade}. We will show in Theorem \ref{theo:control} below that it is possible to choose ${ h } \in \tt H $ such that $\,\varphi = \varphi^{(N, {h},x)}\,$ satisfies $ \varphi (T) = x^* $ (for some arbitrary fixed $ T > 0 $).
We have therefore shown the following assertions.
\begin{enumerate}
\item
There exists an attainable point $x^* $ for $ Y^N .$
\item
There exists a Lyapunov function for $Y^N, $ in the sense of Proposition \ref{prop:lyapunov}.
\item
The weak H\"ormander condition holds.
\end{enumerate}
Under these conditions, it is classical to show (see e.g.\ Theorem 1 of H\"opfner et al.\ \cite{hh3}) that $Y^N$ is positive recurrent in the sense of Harris. The fact that $\mu^N $ is of full support follows again from Theorem \ref{theo:control} below, implying that the control system \eqref{eq:withN} is strongly completely controllable. This concludes the proof.
$\bullet$
In the following we will prove that the control system \eqref{eq:generalcontrolsystem} is {\it strongly completely controllable}.
\section{Controllability}\label{sec:5}
\begin{theo}\label{theo:control}
Grant Assumption \ref{ass:1}. Then the control system $\,\varphi = \varphi^{({ h},x)}\,$ given by
$$
d \varphi (t) \;=\; b ( \varphi (t) )\, dt \;+\; \sigma ( \varphi (t) )\, \dot{h}(t)\, dt \quad,\quad \varphi (0)=x,
$$
is strongly completely controllable, i.e.\ for all $T > 0 ,$ for any pair of points $ x, y \in {\mathbb R}^n ,$ there exists a control $ {h } \in \,\tt H\, $ such that $\varphi^{(h,x)} (T) = y .$
\end{theo}
\begin{proof}
The main idea of the proof is to use the fact that the drift vector field is linear -- except for the last coordinate of each population encoding the interactions between the two populations. Imposing a trajectory for the two coordinates carrying the noise -- and carrying the interactions -- allows us to decouple the two populations and to rely on linear control problems. Our Ansatz is to write $ \varphi (t) = ( \Phi (t) , \Psi (t) ) $ with $ \Phi (t ) = (\varphi_1(t) , \ldots , \varphi_{n_1 +1 } (t) ) ,$ $ \Psi (t) = (\varphi_{n_1+2 } (t), \ldots, \varphi_n (t) ) .$ $\Phi ( t) $ summarizes the coordinates describing the first population of particles, $ \Psi (t) $ describes the second population. We choose $ \Phi $ and $ \Psi$ such that they are solutions of a different control system, given by
\begin{equation}\label{eq:newcontrol}
\dot \Phi (t) = F_1 ( \Phi ( t)) + B_1 u^1 ( t) ,
\end{equation}
where $ B_1 \in {\mathbb R}^{n_1+1}$ is the vector given by
\begin{equation}\label{eq:B}
B_1 = \left( \begin{array}{c}
0 \\
0 \\
\vdots \\
0 \\
1
\end{array}
\right)
\end{equation}
and where for $x = (x_1, \ldots , x_{n_1 +1} ),$
\begin{equation}\label{eq:F}
F_1 (x) = \left(
\begin{array}{c }
- \nu_1 x_1 + x_2 \\
- \nu_1 x_2+ x_3 \\
\vdots \\
- \nu_1 x_{n_1 } + x_{n_1 +1} \\
- \nu_1 x_{n_1 +1}
\end{array}
\right) .
\end{equation}
Analogous definitions apply to the second population described by $ \Psi (t) .$
In what follows, by abuse of notation, we will systematically write $ x= (x_1, \ldots , x_{n_1 +1} ) $ for starting configurations of $ \Phi ( t) $ or $ x = ( x_{n_1 + 2} , \ldots , x_n ) $ for those of $\Psi (t) $ or $ x = (x_1 , \ldots , x_n ) $ for
starting configurations of the entire system, depending on the context.
Notice that writing $ A_1 := \left( \frac{\partial F_1^i (x) }{\partial x_j } \right)_{ 1 \le i, j \le n_1 +1 } ,$ we can rewrite \eqref{eq:newcontrol} as
\begin{equation}\label{eq:newcontrolsecond}
\dot \Phi (t) = A_1 \Phi ( t) + B_1 u^1 (t) .
\end{equation}
By Theorem 1.11 in Chapter 1 of Coron \cite{coron}, the problem \eqref{eq:newcontrolsecond} is controllable at time $T$ if and only if the associated Gram matrix
$$ Q_T := \int_0^T e^{(T- t)A_1} B_1 B_1^* (e^{(T- t ) A_1 })^* dt $$
is invertible. But by the cascade structure of the drift, $ B_1, A_1 B_1, A_1^2 B_1 , \ldots , A_1^{n_1} B_1$ span $ {\mathbb R}^{n_1 +1 } , $ implying that $Q_T $ is non degenerate.
As a consequence, for any $ x= (x_1 , \ldots, x_{n_1 +1 } )$ and $ y = ( y_1 , \ldots , y_{n_1 +1}) $ in $ {\mathbb R}^{n_1 + 1 } $ there exists a control $ u^1 (t) $ steering the solution $ \Phi $ of \eqref{eq:newcontrol} from $x$ to $ y, $ during $ [0, T ] .$ The associated cost functional is given by
$$ V_T^{1, lin} (x, y ) = \langle e^{T A_1 } x - y , Q_T^{-1} ( e^{T A_1 } x - y ) \rangle , $$
see Proposition 1.13 in Chapter 1 of \cite{coron}. A similar result applies to the second population, i.e.\ the system described by $ \Psi .$
We summarize the above discussion and come back to the total process, consisting of the two populations. For any $x = (x_1 , \ldots , x_n ) ,$ a possible initial configuration of the two populations, and for any $ y \in {\mathbb R}^n ,$ we have therefore a control $ (u^1 (t) , u^2 (t)) $ such that the decoupled and linear system
$ ( \Phi (t) , \Psi (t) ) ,$ solution of \eqref{eq:newcontrol} (and the analogous equation for the second population), is steered from $x$ to $y $ during $ [0, T ].$
In what follows, we shall write $ \Phi (t) = ( \Phi_1 (t) , \ldots , \Phi_{n_1 +1} ( t) ) $ and $ \Psi ( t) = (\Psi_1 ( t) , \ldots , \Psi_{n_2 +1} (t) ) .$
In order to come back to the original control system, we put
\begin{equation}\label{eq:choiceh}
\dot h^1 (t) = \frac{u^1 (t) - c_1 f_2 ( \Psi_1 (t) ) }{(c_1 / \sqrt{p_2}) \; \sqrt{ f_2 ( \Psi_1 (t) ) } } \mbox{ and } \dot h^2 (t) = \frac{u^2 (t) - c_2 f_1 ( \Phi_1 (t) ) }{(c_2 / \sqrt{p_1}) \; \sqrt{ f_1 ( \Phi_1 (t) ) } } .
\end{equation}
Since $f_1 $ and $f_2$ are lower bounded, $ \dot h^1 $ and $ \dot h^2 $ are well-defined and admissible, that is, $ \dot h^1 , \dot h^2 \in L^2_{loc}.$ Moreover, by the structure of \eqref{eq:newcontrol},
\begin{eqnarray*}
\frac{ d \Phi_{ n_1 +1 } (t) }{dt} &= &- \nu_1 \Phi_{n_1 + 1 } ( t) + u^1 (t) \\
&=& - \nu_1 \Phi_{n_1 + 1 } ( t) + c_1 f_2 ( \Psi_1 (t) ) + \frac{c_1}{\sqrt{p_2}} \sqrt{ f_2 ( \Psi_1 (t) )} \dot h^1 (t) ,
\end{eqnarray*}
thus $ \varphi = ( \Phi, \Psi ) ,$ together with the choice of $ h $ in \eqref{eq:choiceh}, is a solution of the original control problem \eqref{eq:generalcontrolsystem}.
\end{proof}
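The linear-algebraic part of the above proof is easy to reproduce numerically: for the matrices $A_1$ and $B_1$ of \eqref{eq:newcontrolsecond}, \eqref{eq:B} and \eqref{eq:F}, one can check the Kalman rank condition, approximate the Gram matrix $Q_T$ by quadrature and evaluate the quadratic cost quoted from \cite{coron}. The Python sketch below does this for an illustrative choice of $n_1$, $\nu_1$ and $T$.
\begin{verbatim}
# Sketch: Kalman rank condition and Gram matrix for the linear control
# problem of one population (illustrative n1, nu1, T).
import numpy as np
from scipy.linalg import expm

n1, nu1, T = 3, 1.0, 1.5
d = n1 + 1
A = -nu1 * np.eye(d) + np.diag(np.ones(d - 1), 1)   # cascade drift matrix A_1
B = np.zeros((d, 1)); B[-1, 0] = 1.0                # control enters the last coordinate

# Kalman condition: B, A B, ..., A^{n1} B span R^{n1+1}
Kal = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(d)])
print("rank of the Kalman matrix:", np.linalg.matrix_rank(Kal), "of", d)

# Gram matrix Q_T = int_0^T e^{sA} B B^T e^{sA^T} ds   (crude Riemann sum)
s = np.linspace(0.0, T, 401)
ds = s[1] - s[0]
Q = sum(expm(si * A) @ B @ B.T @ expm(si * A).T for si in s) * ds

x = np.zeros(d)
y = np.ones(d)
v = expm(T * A) @ x - y
print("quadratic cost <e^{TA}x - y, Q_T^{-1}(e^{TA}x - y)>:",
      float(v @ np.linalg.solve(Q, v)))
\end{verbatim}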
We use the ideas of the above proof to show that the cost functions $V (x,y) $ and $V_T (x,y) $ are upper semicontinuous.
\begin{theo}\label{theo:6}
Grant Assumption \ref{ass:1}. Then the cost functions $ V_T ( x, y ) $ and $V(x, y) $ are upper semicontinuous in $x$ and in $y.$
\end{theo}
The main difficulty in the proof of this result is the fact that due to the hypo-ellipticity of the diffusion, the action of the drift is important in small time. As a consequence, if we want to steer the process within a small time step $\delta $ to any possible target point within a given ball, we have to take into account the action of the drift. It turns out that it is possible to steer the process from a fixed starting point $z$ to any point within a small ball around $ z + b(z) \delta ,$ and that the cost of doing this remains small, for small $ \delta .$ This is related to small time local controllability, see below, and also to the fact that the weak H\"ormander condition is satisfied. There is also a relation with density estimates of the associated diffusion over small time intervals, see e.g.\ Pigato \cite{Pigato}. In the proof we shall use tools developed in the recent paper by Delarue and Menozzi \cite{delaruemenozzi} where the same ``cascade''-structure of the drift as in our case is present.
\begin{proof} We fix some $\eta > 0 . $ Fix $T$ and $ x, y .$ Then there exists a control $h $ such that $ \varphi^{ ( h , x) } ( T) = y $ and such that $I_{x, T } ( \varphi) \le V_T ( x, y ) + \eta .$ In the following we work with this fixed control and with the fixed trajectory $ \varphi := \varphi^{(h,x)}.$
Let us briefly explain the idea of our proof. We first show that for any $\delta > 0 $ and for any $\tilde x $ belonging to a small neighborhood of $x, $ it is possible to perturb the control $h$ on an interval $ [0, T - \delta ] $ such that the newly obtained controlled trajectory $\tilde \varphi $ stays within a small tube around $ \varphi $ during $[0, T - \delta] $ and such that the cost of doing so is comparable to the original cost $ I_{x, T - \delta } ( \varphi ) .$ We then show that we can choose $\delta $ sufficiently small such that we are able to steer $\tilde \varphi $ from its position at time $T- \delta $ to any target position $ \tilde y $ belonging to a small neighborhood of $y, $ while keeping the cost of doing so negligible. This last step will be done by relying on the ideas developed in the proof of the preceding theorem.
{\bf Step 1.} We fix some $0 < \delta < T $ and points $\tilde x , \tilde y $ in some neighborhoods of $x$ and $y.$ These neighborhoods and $ \delta $ will be chosen later. Write for short $ \gamma_1 (t) = \varphi_{n_1 +1} ( t) , $ $ \gamma_2 ( t) = \varphi_n ( t) $ for the two components of $ \varphi $ depending directly on the control. In a first step of the proof, for a given $ \varepsilon , $ we choose any smooth trajectories $ \tilde \gamma_1 $ and $ \tilde \gamma_2 $ such that for all $0 \le t \le T - \delta , $
$$ \tilde \gamma_1 (t) \in B_\varepsilon ( \gamma_1 (t) ) , \tilde \gamma_2 (t) \in B_\varepsilon ( \gamma_2 (t) ) $$
and also
$$ \frac{d}{dt} \tilde \gamma_1 (t) \in B_\varepsilon (\frac{d\gamma_1 (t) }{dt} ) , \frac{d}{dt} \tilde \gamma_2 (t) \in B_\varepsilon ( \frac{d \gamma_2 (t)}{dt} ) ,$$
with
$$ \tilde \gamma_1 ( 0) = \tilde x_{n_1 +1 } , \; \tilde \gamma_2 (0) = \tilde x_n .$$
We then put
$$ \tilde \varphi_{n_1 + 1 } (t) := \tilde \gamma_1 (t) , \; \tilde \varphi_{n} (t) := \tilde \gamma_2 (t) ,$$
for all $ 0 \le t \le T - \delta .$
Once these two trajectories are fixed, by the structure of $b,$ we necessarily have
$$
\tilde \varphi_{n_1 } (t)= e^{- \nu_1 t } \tilde x_{n_1} + e^{-\nu_1 t} \int_0^t e^{\nu_1 s} \tilde \gamma_1 ( s) ds .$$
Now, since $ \tilde x_{n_1} \in B_{\varepsilon } ( x_{n_1} ) $ and $ \tilde \gamma_1 ( s) \in B_{\varepsilon } ( \gamma_1 (s) ),$ for all $ s \le T - \delta , $ we certainly have that
$$| \tilde \varphi_{n_1 } (t) -
( e^{- \nu_1 t } x_{n_1} + e^{-\nu_1 t} \int_0^t e^{\nu_1 s} \gamma_1 ( s) ds )| \le \max( 1, \frac{1}{\nu_1 } ) \varepsilon .$$
Thus, since $ e^{- \nu_1 t }x_{n_1} + e^{-\nu_1 t} \int_0^t e^{\nu_1 s} \gamma_1 ( s) ds = \varphi_{n_1 } ( t) , $
$$ | \tilde \varphi_{n_1 } (t) - \varphi_{n_1 } ( t) | \le \max( 1, \frac{1}{\nu_1 } ) \varepsilon .$$
The same arguments apply for the other coordinates $ \tilde \varphi_i .$
As a consequence, introducing $ \kappa = \sqrt{n} \max( 1, \frac{1}{\nu_1^{n_1}}, \frac{1}{\nu_2^{n_2} } ) ,$ where $ n = n_1 + n_2 + 2,$ we have constructed a trajectory $ \tilde \varphi (t)$ such that
$$ \tilde \varphi (t) \in B_{ \kappa \varepsilon } ( \varphi (t ) ) \mbox{ for all } t \le T- \delta .$$
The control which produces this trajectory is given by
$$\dot{ \tilde h}^1 (t) = \frac{\frac{d}{dt } \tilde \gamma_1 (t) + \nu_1 \tilde \gamma_1 ( t) - c_1 f_2 ( \tilde \varphi_{ n_1 + 2 } (t)) }{
c_1/ \sqrt{p_2} \; \sqrt{ f_2 ( \tilde \varphi_{ n_1 + 2} (t))} } , \; \dot{\tilde h}^2 (t) = \frac{\frac{d}{dt } \tilde \gamma_2 (t) + \nu_2 \tilde \gamma_2 ( t) - c_2 f_1 ( \tilde \varphi_{ 1 } (t)) }{
c_2/ \sqrt{p_1} \; \sqrt{ f_1 ( \tilde \varphi_{ 1 } (t))} } .$$
By continuity of $ f_1, f_2$ and the fact that $f_1, f_2 $ are lower bounded, there exist $ \eta_1 = \eta_1 ( \varepsilon) $ and $ \eta_2 = \eta_2 ( \varepsilon ) $ with $ \eta_1 ( \varepsilon ) \to 0, \eta_2 ( \varepsilon ) \to 0 $ as $ \varepsilon \to 0 , $ such that $ \dot{ \tilde h}^1 (t) \in B_{\eta_1} ( \dot{ h}^1 (t) ) $ and $ \dot{ \tilde h}^2 (t) \in B_{\eta_2} ( \dot{ h}^2 (t) ), $ for all $t \le T - \delta .$ We choose $ \varepsilon_1 $ such that
\begin{equation}\label{eq:choiceofepsilon}
(T- \delta ) [(\eta_1 ( \varepsilon))^2 + (\eta_2 ( \varepsilon))^2] \le \eta \mbox{ for all } \varepsilon \le \varepsilon_1.
\end{equation}
Then clearly
$$ I_{\tilde x, T- \delta } ( \tilde \varphi ) \le I_{x, T - \delta } ( \varphi ) + \eta .$$
For the moment we have produced a controlled trajectory $ \tilde \varphi $ steering the initial point $ \tilde x $ belonging to $B_\varepsilon (x)$ to a point $ \tilde z = \tilde \varphi (T- \delta ) \in B_{\kappa \varepsilon} ( z ) , $ where $z = \varphi (T- \delta ) ,$ such that we have a control on the cost function of this new trajectory. \\
{\bf Step 2.} Consider now the original control system $ \varphi = \varphi^{(h,x) } $ on the interval $ [T - \delta , T].$ Its coordinate $ \varphi_{n_1 +1} $ solves the equation
$$ \dot \varphi_{n_1+1 } ( t) = - \nu_1 \varphi_{n_1+1 } ( t) + c_1 f_2 ( \varphi_{n_1 + 2 } (t)) + \frac{c_1}{ \sqrt{p_2}} \; \sqrt{ f_2 ( \varphi_{ n_1 + 2 } (t))} \dot h_t^1 .$$
If we write
\begin{equation}\label{eq:decouplefirst}
u^1 (t) := c_1 f_2 ( \varphi_{n_1 + 2 } (t)) + \frac{c_1}{ \sqrt{p_2}} \; \sqrt{ f_2 ( \varphi_{ n_1 + 2 } (t ))} \dot h_{t } ^1 ,
\end{equation}
then clearly, $ u^1 \in L^2 ( [0, T ] ) $ and
\begin{equation}\label{eq:decouplesecond}
\dot \varphi_{n_1+1 } ( t) = - \nu_1 \varphi_{n_1+1} ( t) + u^1 (t) , T- \delta \le t \le T ,
\end{equation}
with $\varphi_{n_1+1} ( T - \delta) = z_{n_1+1}, \; \varphi_{n_1+1 } (T) = y_{n_1+1}. $ The same argument applies for $ \dot \varphi_n ( t) , $ with the definition $ u^2 (t) = c_2 f_1 ( \varphi_{ 1 } (t)) + (c_2/ \sqrt{p_1}) \; \sqrt{ f_1 ( \varphi_{ 1 } (t ))} \dot h_{t } ^2 .$
Since $\dot h^1 , \dot h^2 \in L^2 ( [0, T ] ) $ and since $ f_1$ and $ f_2 $ are bounded, we can now choose $ \delta $ such that
\begin{equation}\label{eq:choicedelta}
\frac12 \int_{T- \delta }^T \left[ (u^1 (t) )^2 + (u^2 (t) )^2 \right] dt \le \underbar f \eta , \; \delta (\| f_1 \|_\infty + \| f_2 \|_\infty ) \le \eta ,
\end{equation}
where we recall that $ \underbar f$ is such that $ f_1 ( x) \geq \underbar f , $ $ f_2 (x) \geq \underbar f , $ for all $ x \in {\mathbb R} .$
With this choice of $ u^1 $ and $u^2 $ we can rewrite the control problem on $ [T- \delta , T ] $ as in the proof of Theorem \ref{theo:control}. As there, we put $ \varphi (t) = ( \Phi (t) , \Psi (t) ) $ with $ \Phi (t ) = (\varphi_1(t) , \ldots , \varphi_{n_1 +1 } (t) ) $ and $ \Psi (t) = (\varphi_{n_1+2 } (t), \ldots, \varphi_n (t) ) .$ Then
$$
\dot \Phi (t) = F_1 ( \Phi ( t) ) + B_1 u^1 ( t) ,\; \dot \Psi (t) = F_2 (\Psi (t) ) + B_2 u^2 (t) ,$$
where $B_1, B_2, F_1 , F_2$ are defined in \eqref{eq:B} and \eqref{eq:F} above.
In what follows, by abuse of notation, we will systematically write $ z= (z_1, \ldots , z_{n_1 +1} ) $ for starting configurations of $ \Phi ( t) $ or $ z = ( z_{n_1 + 2} , \ldots , z_n ) $ for those of $\Psi (t) $ or $ z = (z_1 , \ldots , z_n ) $ for starting configurations of the entire system, depending on the context.
Having thus constructed a specific controlled trajectory, we certainly have that
\begin{equation}\label{eq:good}
\frac12 \int_{T- \delta}^T (u^1 (t) )^2 dt \geq V^{1,lin}_{\delta} (z,y) , \; \frac12 \int_{T- \delta}^T (u^2 (t) )^2 dt \geq V^{2,lin}_{\delta} (z,y) ,
\end{equation}
where $ V^{1, lin}_{ \delta } (z,y) = \inf \{ \frac12 \int_{T-\delta}^T (u^1 (t) )^2 dt : \Phi (T - \delta ) = z, \Phi ( T) = y \} $ such that $ \dot \Phi (t) = F_1 ( \Phi ( t) ) + B_1 u^1 (t) ,$ and with a similar definition for the second system.
{\bf Step 3.} The key observation is now that system \eqref{eq:newcontrol} satisfies the conditions of Section 4.1 of Delarue and Menozzi \cite{delaruemenozzi} (with order of coordinates reversed, i.e.\ the coordinate depending on the noise is the first in \cite{delaruemenozzi} and not the last as it is the case here). The main point is the {\it cascade-}structure of the drift, i.e., the fact that the $i$-th coordinate of $ F_1 (x), $ which is given by $ - \nu_1 x_i + x_{i+1} ,$ depends only on the coordinates $ x_i $ and $x_{i+1} ,$ for all $ 1 \le i \le n_1 +1 .$ In particular, writing $T_t $ for the $(n_1 +1) \times (n_1+1)$ diagonal matrix having entries
$$ T_t = {\rm diag} ( t^{ n_1+1 }, t^{n_1 }, \ldots , t) ,$$
Proposition 4.1 of \cite{delaruemenozzi} implies that there exists a constant $C_1$ depending only on $T,$ such that
\begin{equation}\label{eq:explode?}
V^{1, lin}_{\delta} (z,y) \geq C_1 \delta | T_\delta^{-1} ( \theta_\delta ( z) - y) |^2 ,
\end{equation}
where $ \theta_\delta $ is the deterministic flow associated to the zero-noise system $ \dot \theta_ t (z) = F_1 ( \theta_t (z) ) , $ and where $ z = (z_1 , \ldots, z_{n_1+1 } ), $ $ y = (y_1 , \ldots , y_{n_1+1 } ) .$
The important point is now that as a consequence of \eqref{eq:good} together with \eqref{eq:choicedelta}, we have
\begin{equation}\label{eq:first}
C_1 \delta | T_\delta^{-1} ( \theta_\delta ( z) - y) |^2 \le \underbar f \; \eta .
\end{equation}
The same argument applies to the second population.
{\bf Step 4.} Recall the definition of $\varepsilon_1 $ in \eqref{eq:choiceofepsilon}. We now choose $ \varepsilon_2 \le \varepsilon_1 $ such that for all $\varepsilon \le \varepsilon_2, $ for all $ \tilde z \in B_{ \kappa \varepsilon }( z) , $ $ \tilde y \in B_\varepsilon ( y) , $
\begin{equation}\label{eq:second}
\delta | T_\delta^{-1} ( \theta_\delta ( \tilde z) - \tilde y) |^2 \le \frac{2}{C_1} \underbar f \; \eta .
\end{equation}
We then solve \eqref{eq:newcontrol} on $[T- \delta , T ] $ and obtain a system $ \tilde \Phi ( t) $ with $\tilde \Phi ( T- \delta ) = \tilde z $ and $ \tilde \Phi ( T ) = \tilde y ,$ for any $\tilde y \in B_\varepsilon ( y ) .$ By Proposition 4.2 of \cite{delaruemenozzi}, this is possible using a control $\tilde u^1 $ such that
$$ \sup \{ ( \tilde u^1 (s))^2 , T- \delta \le s \le T \} \le C_2 | T_\delta^{-1} ( \theta_\delta ( \tilde z) - \tilde y) |^2
\le \frac{2C_2 }{C_1 \delta } \underbar f \; \eta,$$
where $C_2 $ is another universal constant and where we have used \eqref{eq:second}. In particular,
$$ \frac12 \int_{T- \delta }^T ( \tilde u^1 (s))^2 ds \le \frac{C_2 }{C_1} \underbar f \; \eta .$$
The same argument applies to $\Psi (t),$ describing the second population of particles. In order to come back to the original control system, we use \eqref{eq:decouplefirst} and find
$$\dot {\tilde h}_t^1 = \frac{\tilde u^1 (t) - c_1 f_2 ( \tilde \Psi_1 (t) ) }{[c_1 / \sqrt{p_2}] \; \sqrt{ f_2 ( \tilde \Psi_1 (t) ) } } . $$
Then
$$ \frac12 \int_{T- \delta}^T ( \dot {\tilde h}_t^1)^2 dt \le \frac{p_2}{c_1^2} \frac{1}{\underbar f} \frac{C_2 }{C_1} \underbar f \eta + p_2 \| f_2\|_\infty \delta \le C \eta , $$
for some constant $C$ not depending on $f,$ by the choice of $\delta $ in \eqref{eq:choicedelta}.
Summarizing the above arguments, we have thus constructed a control $ (\dot{ \tilde h}^1 , \dot{\tilde h}^2 )$ acting on $ [T - \delta , T ]$ steering $ \tilde z $ to $ \tilde y $ for any $ \tilde y \in B_\varepsilon ( y) , $ at a cost at most $ C \eta .$ Pasting together the two control paths $\tilde \varphi $ constructed in Step 1 on $ [0, T - \delta ] $ and the last one, we have thus obtained a path $ \tilde \varphi $ from $ \tilde x $ to $\tilde y$ at a total cost
$$ I_{\tilde x, T} ( \tilde \varphi ) \le I_{x, T- \delta } ( \varphi ) + \eta + C \eta \le I_{x, T } ( \varphi ) + (1 + C ) \eta . $$
Since $ \eta $ can be chosen arbitrarily small, this implies that $ V_T (x, y ) $ is upper semicontinuous in $x$ and in $y.$ The upper semicontinuity of $ V(x, y ) $ in $x $ and $y$ then follows easily from this.
\end{proof}
{\bf Small time local controllability.}
We will now discuss the important notion of small time local controllability which is related to the behavior of the system close to equilibrium points or to periodic orbits.
In the following, we restrict attention to controls $h $ such that there is some -- sufficiently fine -- finite partition $0=s_0 < s_1 < \ldots < s_\nu = t$ such that all components $\dot{ h}^\ell$ are smooth on each interval $(s_{r-1}, s_r)$. We shall call such controls ${ h}$ {\em piecewise smooth.}
We denote by $R_{ \tau} ( x) $ the set of points which can be reached from $x$ in time $ \tau $ using a piecewise smooth control $ h,$ i.e.\
$$R_{\tau}(x) = \{ \varphi^{( { h}, x)}(\tau) : \, h\in{\tt H}\;\;\mbox{piecewise smooth}\; \}.$$
We shall also consider
$$ R^{ M}_{ \tau}(x) = \{ \varphi^{({ h}, x)}(\tau) : \,{ h}\in{\tt H}\;\;\mbox{piecewise smooth}, \| \dot { h} \|_\infty = \| \dot h^1 \|_\infty + \| \dot h ^2 \|_\infty \le M \; \}.$$
We say that the system is {\it small-time locally controllable at $x$} if $ R_{ \tau} ( x) $ contains a neighborhood of $x$ for every $ \tau > 0.$
\begin{lem}\label{cor:STLCxstar}
$Y^N$ is small-time locally controllable at $x^*.$
\end{lem}
\begin{proof}
Write $ \sigma^1 $ and $ \sigma^2$ for the two columns of the diffusion matrix $ \sigma .$ Then it is straightforward to verify that $ \sigma^1, [\sigma^1 , b ] , [[\sigma^1, b ] , b ] , \ldots $ and $\sigma^2, [\sigma^2 , b ] , [[\sigma^2, b ] , b ] \ldots $ span $ {\mathbb R}^n .$ In particular, the system satisfies the weak H\"ormander condition. Then the assertion follows from Theorem 3.4 of Lewis \cite{Lewis}, based on the results of Sussmann \cite{Sussmann} and Bianchini and Stefani \cite{Bianchini}.
\end{proof}
The following theorem states a result concerning the small time controllability around points which are on a periodic orbit $\Gamma $ of the limit system \eqref{eq:cascade}. On $\Gamma , $ the drift vector $ b $ plays an important role, in the sense of a ``shift'' along the orbit. As a consequence, the system is not small time locally controllable in the classical sense, but in a ``shifted sense'' as stated in the following theorem.
\begin{theo}\label{theo:stlc}
Grant Assumptions \ref{ass:1} and \ref{ass:4}. Let $\Gamma $ be a periodic orbit of \eqref{eq:cascade}, $x_0 \in \Gamma ,$
and let $x^{x_0}(t) $ be the solution of \eqref{eq:limit}, issued from $ x_0 $ at time $0.$ Then there exists $ \delta^* $ such that for any $0< \delta < \delta^* ,$ for all $M ,$ we have that $x^{x_0} (\delta) $ is in the interior of $R^{M}_\delta (x_0) .$
\end{theo}
The proof of this theorem is given in the Appendix.
With these results at hand we are able to prove the following proposition, which is the analogue of Proposition 3 of Rey-Bellet and Thomas \cite{reybellet}. The authors of \cite{reybellet} consider systems locally around equilibria, so that the drift vector does not play a role in their case. In our case, we have to consider the control system locally around non-constant periodic orbits -- hence the drift vector does play a crucial role, since it induces a shift along the orbit which is not negligible in the study of the system. We shall use the following notation. For any periodic orbit $ \Gamma $ of \eqref{eq:cascade}, let $B_\varepsilon ( \Gamma ) = \{ x \in {\mathbb R}^n : dist (x, \Gamma ) < \varepsilon \} .$
\begin{prop}\label{prop:STLCgamma}
Grant Assumptions \ref{ass:1} and \ref{ass:4}. Let $\Gamma $ be a periodic orbit of \eqref{eq:cascade}. For any $\eta > 0 $ and $\varepsilon' > 0 $ there exists $ \varepsilon > 0 $ such that $ \varepsilon < \varepsilon' /3 $ with the following properties. For all $x , y \in B_\varepsilon ( \Gamma ) , $ there exist $T> 0 $ and a control ${ h } \in {\tt H} $ such that $ \varphi^{( h , x)} ( T) = y ,$ $ \varphi^{ ( h , x)} ( s) \in B_{ 2 \varepsilon ' /3}( \Gamma ) $ for all $ 0 \le s \le T,$ and $ I_{x, T} ( \varphi^{ ( \tt h , x)} ) \le \eta .$
\end{prop}
\begin{proof}
We start by introducing some additional objects needed in the proof. We denote by $ \tilde \varphi^{(h, x) } $ the inverse flow, solution of
$$ d \tilde \varphi^{(h, x )} (t) = - b ( \tilde \varphi^{(h, x )} (t) ) dt + \sigma ( \tilde \varphi^{(h, x )} (t) ) \dot h (t) dt , \tilde \varphi^{(h, x )} (0 ) =x ,$$
and write $ \tilde R^M_\delta (x) $ for the set of attainable points for the inverse flow, using a piecewise smooth control which is bounded by $M.$
We then choose $M$ and $ \delta_0 $ such that $M^2 \delta_0 < \eta $ and such that $ R_{\delta}^{M} ( x) \subset B_{ 2\varepsilon ' /3} ( \Gamma ) $ and $ \tilde R_{\delta}^{M} ( x) \subset B_{ 2\varepsilon ' /3} ( \Gamma ) $ for all $ \delta \le \delta_0, $ for all $ x \in \Gamma .$ In the sequel, $ \delta \le \delta_0 $ will be fixed.
For any $ z \in \Gamma , $ there exist $ \varepsilon_1 (z) $ such that $B_{\varepsilon_1 (z) } ( x^z (\delta) ) \subset R_{\delta}^{ M} ( z) ,$ by Theorem \ref{theo:stlc}.
Notice that $ x^z (\delta) \in \Gamma $ if $ z \in \Gamma .$ Notice moreover that for any $ v \in \Gamma $ there exists $ w = x^v (- \delta ) \in \Gamma $ such that $ x^w (\delta) = v .$ Therefore, by compactness of $\Gamma, $ there exists a finite collection $z_1, \ldots , z_K \in \Gamma $ such that $ \Gamma \subset \bigcup_{k=1}^K B_{\frac13 \varepsilon_1 ( z_k) } ( x^{z_k} (\delta) )$ and such that for all $k, $ $ B_{ \varepsilon_1 ( z_k) } ( x^{z_k} (\delta) ) \subset R_\delta^M (z_k) .$
Applying the same arguments as above to the inverse flow, for all $x \in \Gamma $ there exists $ \varepsilon_2 ( x)$ such that $B_{\varepsilon_2 (x) } ( x^x (-\delta) ) \subset \tilde R_{\delta}^{ M} ( x) .$
Then again, by compactness of $\Gamma,$ $ \Gamma \subset \bigcup_{k=1}^K B_{\frac13 \varepsilon_2 ( u_k) } ( x^{u_k} (-\delta) )$ for $ u_1 , \ldots , u_K \in \Gamma $ (where we suppose w.l.o.g.\ that the number of balls is the same in the two coverings) and such that $B_{ \varepsilon_2 ( u_k) } ( x^{u_k} (-\delta) )
\subset \tilde R^M_\delta (u_k) $ for all $k.$
Choose now
$$ \varepsilon \le \min \{ \varepsilon_1 ( z_k )/4 , k \le K \} \wedge \min \{ \varepsilon_2 ( u_k )/4 , k \le K \} \wedge \varepsilon'/3 .$$
Let $y \in B_ \varepsilon ( \Gamma ).$ Then there exists $ y^*
\in \Gamma $ such that $ \| y - y^* \| \le \varepsilon .$ Let $k$ be such that $ \| y^* - x^{z_k } (\delta) \| \le \varepsilon_1 ( z_k )/3 .$ Hence, $\| y - x^{z_k } (\delta) \| < \varepsilon_1 ( z_k ) $ and as a consequence, $ y \in R_{\delta}^{ M} ( z_k) .$ In the same way, for any $ x \in B_\varepsilon ( \Gamma ) $ there exists $ u_l $ such that $ x \in \tilde R_\delta^{ M} ( u_l ) .$ Therefore, there exist $ { h }_1 $ and ${ h}_2 $ with $ \| \dot h_1 \|_\infty \le M, \| \dot h_2 \|_\infty \le M ,$ such that
$ \varphi^{ ( h_2 , z_k) } (\delta) = y $ and $ \tilde \varphi^{ ( h_1 , u_l ) } (\delta) = x .$
By reversing the time, this yields a trajectory $ \varphi^{( h_1 , x) } (t) $ with $ \varphi^{( h_1 , x)} (0) = x $ and $\varphi^{( h_1 , x)} (\delta) = u_l . $ Then it suffices to choose $ T$ such that $ x^{ u_l}_{T - 2 \delta } = z_k $ -- this is just a shift on the orbit.
To finish the proof, observe that by construction the produced trajectory $ \varphi = \varphi^{( h_1 , x) } $ is such that $\varphi ( s) \in B_{ 2 \varepsilon ' /3 } ( \Gamma ) $ for all $ s \le T ,$ since we have chosen $M$ and $ \delta $ such that $ R_{\delta}^{M} ( x) \subset B_{ 2\varepsilon ' /3} ( \Gamma ) $ and $ \tilde R_{\delta}^{M} ( x) \subset B_{ 2\varepsilon ' /3} (\Gamma ) $ for all $ \delta \le \delta_0, $ for all $ x \in \Gamma .$
\end{proof}
\section{Large deviations and asymptotics of the invariant measure}\label{sec:6}
Recall that we have introduced controlled trajectories
$$
\varphi = \varphi^{({h}, x)} \; \mbox{solution to}\; d \varphi (t) = b ( \varphi (t) ) dt + \sigma( \varphi (t) ) \dot h(t) dt, \; \mbox{with $\varphi (0)=x,$}
$$
together with their rate function on time intervals $[0, t_1]$
$$
I_{x, t_1} (f) = \inf_{ {h} \in {\tt H} : \varphi^{( { h}, x)} (t) = f(t) , \; \forall t \le t_1 } \frac12 \int_0^{t_1} [| \dot{ h}^1(s)|^2+ | \dot{ h}^2 (s) |^2] ds .
$$
This rate function is not explicit since the diffusion matrix $\sigma $ is degenerate. It is however a ``good rate function'', i.e.\ all of its level sets $ \{ f : I_{x, t_1 } ( f) \le \alpha \} $ are compact, and the following large deviation principle for the sample paths of the diffusion $ Y^N$ is well known, going back to Freidlin and Wentzell \cite{FW}. We quote it from \cite{DZ}.
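For comparison only (this formula is not used below): when the diffusion matrix is non-degenerate, i.e.\ when $ a(x) = \sigma \sigma^* (x) $ is invertible, the rate function has the familiar explicit form
$$ I_{x, t_1} ( f) = \frac12 \int_0^{t_1} \left( \dot f (s) - b ( f (s) ) \right)^* a ( f (s) )^{-1} \left( \dot f (s) - b ( f (s) ) \right) ds $$
for absolutely continuous $f$ with $ f(0) = x $ (and $ I_{x, t_1 } ( f) = + \infty $ otherwise), see e.g.\ \cite{FW, DZ}; it is precisely the degeneracy of $\sigma $ in our model which prevents such a closed expression.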
\begin{theo}[Corollary 5.6.15 of \cite{DZ}]
Grant Assumption \ref{ass:1}. Let $Y^N $ denote the solution of \eqref{eq:diffusionsmallnoise}, starting from $ x \in {\mathbb R}^n .$ Then for any $x \in {\mathbb R}^n $ and for any $ t_1 < \infty , $ the rate function $ I_{x, t_1 } ( f) $ is a lower semicontinuous function on $ C ( [0, t_1 ], {\mathbb R}^n ) $ with compact level sets. Moreover, the family of measures $ Q_x^N $ satisfies the large deviation principle on $ C ( [0, t_1 ], {\mathbb R}^n ) $ with rate function $ I_{x, t_1} ( f) .$ \\
(i) For any compact $ K \subset {\mathbb R}^n $ and any closed $ F \subset C ( [0, t_1 ], {\mathbb R}^n ) , $
\begin{equation}\label{eq:compact}
\limsup_{ N \to \infty } \frac1N \log \sup_{x \in K } Q_x^N ( F) \le - \inf_{x \in K} \inf_{ f \in F} I_{x, t_1} ( f) .
\end{equation}
(ii) For any compact $ K \subset {\mathbb R}^n $ and any open $ O \subset C ( [0, t_1 ], {\mathbb R}^n) , $
\begin{equation}\label{eq:open}
\liminf_{ N \to \infty } \frac1N \log \inf_{x \in K } Q_x^N ( O) \geq - \sup_{x \in K} \inf_{ f \in O} I_{x, t_1} ( f) .
\end{equation}
\end{theo}
\subsection{Proof of Theorem \ref{theo:main}}
We are now able to give the proof of our main result, Theorem \ref{theo:main}. It follows closely Freidlin and Wentzell \cite{FW}, adapted to the situation of degenerate diffusions in Rey-Bellet and Thomas \cite{reybellet}.
Recall that $ K = \{ x^* \} \cup \bigcup_{l=2}^L K_l $ denotes the $\omega-$limit set of \eqref{eq:generalcontrolsystem}. To start, we stress that the diffusion process $Y^N $ solution of \eqref{eq:cascadeapprox} satisfies the two main assumptions of \cite{reybellet} which are the following.
\begin{ass}
The diffusion process $Y^N ( t) $ has a hypo-elliptic generator, and for any $x$ belonging to the $\omega-$limit set $K ,$ the control system associated with
\eqref{eq:generalcontrolsystem} is small-time locally controllable (in a sense of a shift along periodic orbits, as stated in Theorem \ref{theo:stlc}).
\end{ass}
\begin{ass}
The diffusion process is strongly completely controllable and for any $ T > 0, $ the cost function $V_T ( x,y) $ is upper semicontinuous in $x$ and $y.$
\end{ass}
We now follow Freidlin-Wentzell \cite{FW} and put
$$ U = B_\varepsilon ( K) , V = B_{\bar \varepsilon } ( K), $$
for $\varepsilon < \bar \varepsilon $ such that $ 3 \varepsilon < \bar \varepsilon .$ We introduce
$$ \tau_0 = 0, \sigma_n = \inf \{ t > \tau_n : Y^N (t) \in V^c\} , \tau_{n+1} = \inf \{ t > \sigma_{n} : Y^N ( t) \in U \} , n \geq 0 .$$
Since $Y^N$ is Harris-recurrent with invariant measure $\mu^N $ being of full support and therefore charging $ U, V$ and $V^c, $ we have $ \tau_n < \tau_{n +1} < \infty , \sigma_n < \sigma_{n+1} < \infty $ almost surely, and $ \sigma_n , \tau_n \uparrow \infty $ as $ n \to \infty . $ Writing $ U_n := Y^N ( \tau_n ) , n \geq 1 ,$ $U_n $ is a Markov chain taking values in $ \partial U $ which is a compact set. In particular, $( U_n)_n$ admits a (unique) invariant probability measure $\ell_N $ on $ \partial U $ (since $Y^N$ is Harris), and the invariant measure $ \mu^N $ of the process $ Y^N$ can be decomposed as
$$ \mu^N ( D) =\frac{1}{ c (N)} \int_{\partial U} \ell_N (dx) E^N_x \int_0^{\tau_1} 1_D ( Y^N ( t) ) dt =: \frac{1}{ c (N)} \nu^N ( D) ,$$
where
$$c(N) = \int_{\partial U} \ell_N (dx) E^N_x \tau_1 .$$
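As a quick consistency check of this decomposition (not needed in what follows), note that choosing $ D = {\mathbb R}^n $ gives
$$ \nu^N ( {\mathbb R}^n ) = \int_{\partial U} \ell_N (dx) E^N_x \tau_1 = c (N) , $$
so that indeed $ \mu^N ( {\mathbb R}^n ) = 1 .$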
We now take a regular open set $ D,$ i.e.\ a set such that $ \partial D$ is a piecewise smooth manifold, with $ dist (D, K ) > \Delta .$ Let $ \tau_D = \inf \{ t> 0 : Y^N ( t) \in D \} $ be the associated hitting time. Then we have the following result.
\begin{lem}\label{lem:1}
Grant Assumptions \ref{ass:1} and \ref{ass:4}. Let
$$ S := \inf \{ t \geq 0 : Y^N \in B_{\varepsilon } ( K) \cup D \} .$$
Then for any compact set $E, $
$$ \lim_{T \to \infty } \limsup_{ N \to \infty } \sup_{x \in E} Q^N_x ( S > T ) = 0 .$$
\end{lem}
\begin{proof}
We have
$$ Q^N_x ( S > T ) \le \frac1T E^N_x \tau_\varepsilon .$$
But by Proposition \ref{prop:lyapunov}, $ \sup_N E^N_x \tau_\varepsilon \le C G ( x) ,$ where $G$ does not depend on $N.$ The fact that $G$ is bounded on the compact set $E$ then implies the result.
\end{proof}
In the following we establish two classical results on the growth rate of the expected escape time $ E^N_x \sigma_0 $ that will be useful in the sequel. They are analogous to the results of \cite{FW}, transposed to the hypo-elliptic context of our model.
\begin{prop}\label{prop:sigma_0first}
Grant Assumptions \ref{ass:1} and \ref{ass:4}. Given $h > 0 ,$ for $ \varepsilon < \bar \varepsilon $ such that $ 3 \varepsilon < \bar \varepsilon $ sufficiently small,
$$\liminf_{N \to \infty} \frac1N \log \inf_{x \in \partial B_\varepsilon (K) } E^N_x \sigma_0 \geq - h .$$
\end{prop}
An analogous result holds for the upper bound of $ E^N_x \sigma_0 .$
\begin{prop}\label{prop:sigma_0second}
Grant Assumptions \ref{ass:1} and \ref{ass:4}.
Given $h > 0 ,$ for $ \varepsilon < \bar \varepsilon $ such that $ 3 \varepsilon < \bar \varepsilon $ sufficiently small,
$$\limsup_{N \to \infty} \frac1N \log \sup_{x \in \partial B_{\varepsilon} (K ) } E^N_x \sigma_0 \le h .$$
\end{prop}
The proofs of the two propositions are given in the Appendix.
Recall that the $\omega -$limit set $K= \{ x^*\} \cup \bigcup_{l=2}^L K_l $ is divided into disjoint subsets consisting of equivalence classes induced by the equivalence relation $ x \sim y ,$ where we say that $ x \sim y $ if and only if $ V (x, y) = V ( y, x ) = 0.$ Following \cite{FW}, we now introduce
$$ \tilde V(K_i , K_j ) = \inf_T \inf \{ I_{x, T } ( \varphi ) : \varphi ( 0) \in K_i, \varphi (T) \in K_j, \varphi ( t) \notin \bigcup_{ l \neq i, j } K_l , 0 \le t\le T \},$$
$$ \tilde V (K_i, z) = \inf_T \inf \{ I_{x, T } ( \varphi ) : \varphi ( 0) \in K_i, \varphi (T) = z, \varphi ( t) \notin\bigcup_{ l \neq i } K_l , 0 \le t\le T \} ,$$
for $i = 1, 2, \ldots , L.$
We also put
$$ \tilde V_i (x, y ) = \inf_T \inf \{ I_{x, T } ( \varphi ) : \varphi ( 0) = x , \varphi (T) = y, \varphi ( t) \notin \bigcup_{ l \neq i } K_l , 0 \le t\le T \} .$$
As a consequence of the small-time local controllability as stated in Lemma \ref{cor:STLCxstar} and of Proposition \ref{prop:STLCgamma} we have the following useful result.
\begin{lem}\label{lem:STLC}
Grant Assumptions \ref{ass:1} and \ref{ass:4}. For all $ i, $ for all $ x, y \in K_i ,$ and for all $h$ there exists $ \delta $ such that $ | x - \tilde x | < \delta , $ $|y - \tilde y | < \delta $ imply that $ \tilde V_i ( \tilde x, \tilde y ) < h .$
\end{lem}
We put $ B_\varepsilon (K_i ) = \{ y \in {\mathbb R}^n : dist (y, K_i ) < \varepsilon \} , $ for $ i =1, 2, \ldots, L.$ We quote the following lemma from \cite{reybellet}.
\begin{lem}[Lemma 4 of \cite{reybellet}]\label{lem:4}
Grant Assumptions \ref{ass:1} and \ref{ass:4}.
For any $ h > 0 $ there exist $ \varepsilon < \bar \varepsilon $ sufficiently small such that
$$ \limsup_{N \to \infty } \frac1N \log \sup_{ x \in \partial B_{\bar \varepsilon }(K_i) } Q_x^N ( \tau_D < \tau_1) \le
- \left( \inf_{ z \in D} \tilde V ( K_i , z ) - h \right) $$
and
$$ \limsup_{N \to \infty } \frac1N \log \sup_{ x \in \partial B_{\bar \varepsilon }(K_i) } Q_x^N ( Y^N ( \tau_1 ) \in \partial B_{ \varepsilon }(K_j) ) \le -
\left( \tilde V( K_i, K_j ) - h \right) .$$
\end{lem}
\begin{proof}
Once Lemma \ref{lem:STLC} is established, the proof is the same as the proof of Lemma 4 of \cite{reybellet}. The fact that most of the sets $ K_i$ are periodic orbits does not change the proof.
\end{proof}
Small-time local controllability around $x^* $ and around the periodic orbits $\Gamma $ is also sufficient to obtain the lower bound of \cite{reybellet}, stated in their Lemma 5:
\begin{lem}[Lemma 5 of \cite{reybellet}]
Grant Assumptions \ref{ass:1} and \ref{ass:4}.
For any $ h > 0 ,$ for any $ \varepsilon < \bar \varepsilon $ sufficiently small,
$$ \liminf_{N \to \infty } \frac1N \log \inf_{ x \in \partial B_{\bar \varepsilon }(K_i)} Q_x^N ( \tau_D < \tau_1) \geq
- \left( \inf_{ z \in D} \tilde V ( K_i , z ) + h \right) $$
and
$$ \liminf_{N \to \infty }\frac1N \log \inf_{ x \in \partial B_{\bar \varepsilon }(K_i) } Q_x^N ( Y^N ( \tau_1 ) \in \partial B_{ \varepsilon }(K_j) ) \geq -
\left( \tilde V( K_i, K_j ) + h \right) .$$
\end{lem}
Also, the lower bound of Lemma 6 of \cite{reybellet} is easily verifiable in our context, and we obtain
\begin{lem}[Lemma 6 of \cite{reybellet}]\label{lem:6}
Grant Assumptions \ref{ass:1} and \ref{ass:4}.
For any $ h > 0 , $ $ \liminf_{N \to \infty } \frac1N \log \nu^N ( {\mathbb R}^n ) \geq - h .$
\end{lem}
\begin{proof}
The proof is the same as in \cite{reybellet}, once we have obtained the estimate
\begin{equation}
\inf_{ x \in \partial B_\varepsilon (K) } E^N_x ( \sigma_0 ) \geq e^{ - N h} ,
\end{equation}
as proven in Proposition \ref{prop:sigma_0first}.
\end{proof}
In order to finish the proof of our main theorem, we now follow closely Rey-Bellet and Thomas \cite{reybellet} and Freidlin and Wentzell \cite{FW}.
1) We have, as in formula (46) of \cite{reybellet},
\begin{multline*}
\nu^N ( D) \le \sum_{i=1}^L \ell^N ( \partial B_\varepsilon (K_i ) ) \sup_{x \in \partial B_\varepsilon (K_i ) } E^N_x \int_0^{\tau_1} 1_D ( Y^N_s) ds \\
\le L \max_i \ell^N ( \partial B_\varepsilon (K_i) ) \sup_{x \in \partial B_\varepsilon (K_i) } Q^N_x ( \tau_D \le \tau_1 ) \sup_{y \in \partial D} E^N_y \tau_1 .
\end{multline*}
But $\sup_{y \in \partial D} E^N_y \tau_1 \le C ,$ for some fixed constant $C,$ by Proposition \ref{prop:lyapunov}. Moreover, we have, for sufficiently small $ \varepsilon < \bar \varepsilon, $ by Lemma \ref{lem:4}, for $x \in \partial B_\varepsilon (K_i ) ,$
$$ Q^N_x ( \tau_D \le \tau_1 ) \le \exp ( - N ( \inf_{z \in D} \tilde V (K_i, z ) - h/4 ) ) .$$
Define now the function $ \tilde W ( x) $ in the same way as $ W(x) $ in \eqref{eq:W}, by replacing all $ V (K_m, K_n ) $ by $ \tilde V( K_m, K_n ) .$ By Freidlin-Wentzell \cite{FW}, Lemma 3.1 and 3.2 together with Lemma 4.1 and 4.2 of Chapter 6, we know that $\tilde W(x) = W(x),$ and therefore we obtain
$$ \ell^N ( \partial B_\varepsilon (K_i) ) \le \exp ( - N [ W ( K_i ) - \min_j W (K_j) - h/4 ] ) ,$$
for sufficiently large $N.$ As a consequence, following the lines of proof of \cite{reybellet}, (47)--(50),
$$ \nu^N ( D) \le L C \exp ( - N [ \inf_{z \in D} W(z) - h/2 ] ) .$$
Finally, using the lower bound obtained for $ \nu^N ( {\mathbb R}^n ) $ in Lemma \ref{lem:6}, implying that
$$ \nu^N ( {\mathbb R}^n) \geq \exp ( - \frac{Nh}{2} ) $$
for all $ N $ sufficiently large, we obtain
$$ \mu^N ( D) \le LC \exp ( - N [ \inf_{z \in D} W(z) - h ] ),$$
concluding the first part of the proof.
We now turn to the study of the lower bound in \eqref{eq:main}. We fix some $ \delta > 0 $ sufficiently small such that $ D_\delta = \{ x \in D : dist (x, \partial D) \geq \delta \} $ satisfies $ D_\delta \neq \emptyset .$ Let $ z \in D $ and fix $ i$ such that $ \tilde V ( K_i, z ) < \infty .$ Such an index $i$ always exists due to the complete controllability property.\footnote{Indeed, for any $ j , $ $V (K_j, z ) < \infty .$ Suppose that the trajectory achieving the minimal cost to go from $K_j $ to $z $ visits the sets $K_j, $ followed by $ K_{n_1}, \ldots , K_{n_l} , $ before leaving the last of them, $K_{n_l} ,$ and reaching the target $z.$ It is then sufficient to choose $i$ to be equal to the index of the last visited set, that is, $ i := n_l .$ } The proof of Theorem \ref{theo:6} shows that it is possible to choose $ \delta $ so small that
$$ \inf_{ z \in D_\delta } \tilde V ( K_i, z ) \le \inf_{z \in D} \tilde V ( K_i, z) + h/ 4.$$
This point is crucial for the rest of the proof.
Then
$$ \nu^N ( D) \geq \min_i \left[ \ell^N ( \partial B_\varepsilon(K_i ) ) \inf_{x \in \partial B_\varepsilon (K_i ) } Q^N_x ( \tau_{D_\delta} < \tau_1 ) \right] \inf_{x \in \partial D_\delta } E^N_x \int_0^{\tau_1} 1_D ( Y^N_s) ds .$$
We will prove below that
\begin{equation}\label{eq:last}
\inf_N \inf_{x \in \partial D_\delta } E^N_x \int_0^{\tau_1} 1_D ( Y^N_s) ds \geq C > 0.
\end{equation}
We then obtain, following exactly the arguments of \cite{reybellet}, the lower bound
$$ \nu^N ( D) \geq C \exp ( - N [ \inf_{z \in D } W(z) + h/2 ] ) .$$
The proof is completed by an upper bound on $ \nu^N ( {\mathbb R}^n ) , $ which is obtained thanks to Proposition \ref{prop:sigma_0second}.
We finish the above proof by showing \eqref{eq:last}. Let $Y_0^N =x \in \partial D_\delta .$ Then
$$ E^N_x \int_0^{\tau_1} 1_D ( Y^N_s) ds = E^N_x \tau_{D^c } , $$
where $ \tau_{D^c} = \inf \{ t \geq 0 : Y^N_t \in D^c \}. $ But for $ Y_0^N = x \in \partial D_\delta , $
\begin{equation}\label{eq:delta}
\delta \le \| Y_{\tau_{D^c }}^N - x \| \le \sup_{ z \in D} \| b(z) \| \tau_{D^c } + \frac{1}{\sqrt{N}} \sup_{s \le \tau_{D^c } } \| M_s \| ,
\end{equation}
where $ M_s = \int_0^s \sigma( Y_u^N ) d B_u. $ Since the coefficients of $\sigma $ are bounded, using the Burkholder-Davis-Gundy inequality, there exists a positive constant $ \chi $ only depending on the bound of $ b$ on $D$ and on the bounds of $ \sigma $ such that $ E_x^N \sup_{s \le \tau_{D^c } } \| M_s \| \le \chi \sqrt{E_x^N \tau_{D^c } } ,$ and therefore,
$$ \delta \le \chi \left( E_x^N ( \tau_{D^c }) + \sqrt{ \frac{E_x^N ( \tau_{D^c })}{N }} \right) , $$
which in turn implies (since, writing $ m = E_x^N ( \tau_{D^c }) ,$ we have $ \delta \le \chi ( m + \sqrt{m} ) $ for all $ N \geq 1 ,$ forcing $m$ to be bounded from below by a positive constant depending only on $ \delta $ and $ \chi $) that
$$ \inf_N \inf_{ x \in \partial D_\delta } E_x^N ( \tau_{D^c }) = \inf_N \inf_{x \in \partial D_\delta } E^N_x \int_0^{\tau_1} 1_D ( Y^N_s) ds \geq C> 0 $$
for a constant $C$ not depending on $N.$
$ \bullet $
\section*{Appendix}
{\it Proof of Theorem \ref{theo:stlc}.}
The proof follows the lines of the proof of Theorem 1 in Chapter 6 of Lee and Markus \cite{LeeMarkus}. As there, we write
$ f( x, u) = b(x) + \sigma ( x) u .$ We fix $x_0 \in \Gamma $ and write
$$ A (t) =\left( \frac{\partial f }{\partial x}\right)_{| x= x_t^{x_0}, u = 0} , \; B (t) = \left( \frac{\partial f }{\partial u}\right)_{| x= x_t^{x_0}, u = 0} = \sigma (x_t^{x_0}) .$$
Let $ A := A ( 0) $ and $B = B(0) .$
Then it is easy to see that the columns of $ B, AB, A^2 B, \ldots , A^{n - 1 } B $ span $ {\mathbb R}^n .$
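The rank condition can also be checked numerically. The following sketch verifies the Kalman criterion $\mathrm{rank}\, [B, AB, \ldots , A^{n-1} B ] = n$ for a cascade-structured linear system of the same type; the matrices, block sizes and decay parameters below are illustrative placeholders and are not the exact linearization $A, B$ of our model.
\begin{verbatim}
import numpy as np

# Illustrative cascade-structured linear system (placeholder values only):
# two blocks of sizes n1+1 and n2+1, each block a chain x_i' = -nu x_i + x_{i+1},
# and each control acting on the last coordinate of its block.
n1, n2 = 3, 2
n = (n1 + 1) + (n2 + 1)
nu1, nu2 = 1.0, 1.5

A = np.zeros((n, n))
for i in range(n1 + 1):
    A[i, i] = -nu1
    if i < n1:
        A[i, i + 1] = 1.0
for j in range(n2 + 1):
    i = n1 + 1 + j
    A[i, i] = -nu2
    if j < n2:
        A[i, i + 1] = 1.0

B = np.zeros((n, 2))
B[n1, 0] = 1.0        # control u^1 enters the last coordinate of the first block
B[n - 1, 1] = 1.0     # control u^2 enters the last coordinate of the second block

# Kalman controllability matrix [B, AB, ..., A^{n-1}B]
K = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(K), "should equal", n)
\end{verbatim}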
We start by considering the equation
\begin{equation}\label{eq:homo}
\dot Y = A Y + Bu , Y(0) = 0 .
\end{equation}
Denote by $Y^u (t) , t \le \delta , $ a solution to \eqref{eq:homo} driven by $ u(t) , t \le \delta .$
The above system is controllable, since $ B, AB, A^2 B , \ldots , A^{n - 1 } B $ span $ {\mathbb R}^n .$ As a consequence, for every $M $ and for any $ \delta < 1 $ there exist controls $ u_1, u_2 , \ldots , u_n $ with $ \| u_i \|_\infty \le M $ such that
\begin{equation}\label{eq:explicitcontrol}
Y^{u_1} (\delta) = r e_1, \ldots , Y^{u_n } (\delta) = r e_n ,
\end{equation}
where $ e_1, \ldots, e_n $ are the unit vectors of ${\mathbb R}^n $ (Corollary 1 of Chapter 2 of Lee and Markus \cite{LeeMarkus}) and where $r > 0 $ is suitably small.
We wish now to replace the system \eqref{eq:homo} by the time dependent system
\begin{equation}\label{eq:inhomo}
\dot W = A (t) W + B (t) u , W(0) = 0 , t \le \delta .
\end{equation}
Write $ W_k ( t) $ for the solution of $ \dot W_k (t) = A ( t) W_k ( t) + B(t) u_k ( t) ,$ where the $u_k (t) $ are given in \eqref{eq:explicitcontrol}. Then $ W_k (t) $ is explicitly given by
$$ W_k ( t) = \Phi (t) \int_0^t \Phi^{-1} (s) B(s) u_k ( s) ds ,$$
with $ \Phi (t) $ the matrix solution of $ \dot \Phi (t) = A(t) \Phi (t) , $ $ \Phi (0) = Id.$ Writing $Y_k (t) = Y^{u_k } (t) , $ we obtain similarly
$$ Y_k ( t) = \overline \Phi (t) \int_0^t \overline \Phi^{-1} (s) B u_k ( s) ds ,$$
with $ \overline \Phi (t) = e^{ A t } $ (recall that $ A = A(0) $). We wish to show that $\| Y_k ( t) - W_k (t) \| $ is small for $t$ sufficiently small. For that sake, note that there exists a constant $C$ such that for all $ t \le \delta , $
$$ \| \Phi (t) \| , \| \overline \Phi (t) \| , \| \Phi^{-1} (t) \| , \| \overline \Phi^{-1} (t) \| , \| B( t) \| , \| B\| \le C.$$
Since
$$ \Phi (t) = Id + \int_0^t A(s) \Phi (s) ds , \; \overline \Phi (t) = Id + \int_0^t A \overline \Phi (s) ds,$$
it follows from this that $ \| \Phi (t) - \overline \Phi (t) \| \to 0 $ as $ t \to 0.$
Fix $\varepsilon > 0 $ such that $ \tilde e_1 , \ldots , \tilde e_n $ still span $ {\mathbb R}^n $ for all $ \tilde e_k \in B_\varepsilon ( r e_k ) , 1 \le k \le n .$ Then there exists $\delta^* $ such that for all $\delta \le \delta^* , $ $ W_k ( \delta ) \in B_\varepsilon ( Y_k ( \delta ) ) ,$ for all $
1 \le k \le n, $ and therefore the following holds.
\begin{multline}\label{eq:super}
\mbox{The solutions of } \dot W_k (t) = A ( t) W_k ( t) + B(t) u_k ( t) ,\; W_k ( 0 ) = 0, \; 1 \le k \le n , \\
\mbox{ are such that } W_1 (\delta ) , \ldots , W_n ( \delta ) \mbox{ span } {\mathbb R}^n .
\end{multline}
We are now able to conclude the proof, following the lines of Lee and Markus \cite{LeeMarkus}. Consider $x ( t, \xi ) $ which is the solution of
$$
d x( t , \xi ) = b ( x( t , \xi ) ) dt + \sigma ( x( t , \xi ) ) \dot { h } (t, \xi) dt , \; x(0, \xi ) = x_0,
$$
following the control $ \dot {h} ( t, \xi ) = \xi_1 u_1 (t) + \ldots + \xi_n u_n ( t) , $ for $ | \xi_i | \le 1, 1 \le i \le n .$ It is clear that $ x ( t, 0 ) = x^{x_0}(t) .$ Hence, if we can prove that $ Z (t) = \left( \frac{\partial x (t , \xi) }{\partial \xi }\right)_{ | \xi = 0 }$ is non-degenerate at $t = \delta, $ we are done, using the inverse function theorem. But
$$ \frac{ \partial x (t, \xi) }{\partial t} = f( x (t, \xi ) , \dot{h} (t, \xi ) )$$ and thus
$$ \frac{\partial }{\partial t} \frac{\partial x (t, \xi) }{\partial \xi } =
f_x ( x(t, \xi ), \dot h (t, \xi ) ) \frac{\partial x}{\partial \xi } + f_u ( x (t, \xi ) , \dot h (t, \xi) ) \frac{\partial \dot h }{\partial \xi } . $$
Notice that $ x (t, 0) = x^{x_0 }_{t } $ and $\dot h (t, 0) = 0.$ Thus we obtain
$$ \dot Z (t) = A ( t) Z (t) + B(t) U (t) , $$
where $U (t) = (u_1 (t) , \ldots , u_n (t) ) .$ Writing $ z_1, \ldots , z_n $ for the columns of $ Z(t), $ this gives
$$ \dot z_k ( t) = A(t) z_k ( t) + B(t) u_k (t) , \; z_k ( 0) = 0 .$$
The solutions of this system are given by \eqref{eq:super}, and they are such that $ z_k ( \delta ), 1 \le k \le n ,$ span ${\mathbb R}^n .$ Therefore, $ Z ( \delta) $ is non-degenerate, and this concludes the proof.
$\bullet $
{\it Proof of Proposition \ref{prop:sigma_0first}.} The proof follows closely the ideas of Chapter 5.7 of \cite{DZ}.
1) For all $x \in \partial B_\varepsilon (K) , $ by small time local controllability, there exists a smooth path $ \psi^x $ of length $ t^x $ such that $ \psi^x (t^x) \in K= \{x^* \} \cup \bigcup_{l=2}^L K_l $ and such that
$ \psi^x (t) $ does not leave $ B_{2 \bar \varepsilon /3 }( K) $ for all $ t \le t^x .$ Moreover, this path can be chosen such that $ I_{x, t^x} ( \psi^x ) \le h/2.$
2) For all $ x_0 \in K $ there exists $z \in \partial B_\varepsilon (K) $ and a path $ \psi^{x_0} $ of length $ t^{x_0} $ steering $x_0$ to $z, $ during $[0, t^{x_0} ] , $ without leaving $ B_{2 \bar \varepsilon /3}(K) ,$
at a cost $ I_{x_0, t^{x_0}} ( \psi^{x_0 } ) \le h/2 .$
3) We concatenate the two paths $ \psi^x$ and then $ \psi^{x_0} $ to obtain a new trajectory $ \Psi^x $ of length $T^x = t^x + t^{x_0} $ steering $x$ to $z \in \partial B_\varepsilon (K) .$ Let then
$$ T_0 := \inf_{ x \in \partial B_{\varepsilon}(K ) } T^x > 0 $$
and put
$$ {\mathcal O} := \bigcup_{ x \in \partial B_\varepsilon (K) } \{ \varphi \in C ( [ 0, T_0], {\mathbb R}^n ) : \| \varphi - \Psi^x \|_\infty < \varepsilon/ 2 \} ,$$
which is an open set. Then
$$ \liminf_{N \to \infty } \frac1N \log \inf_{x \in \partial B_\varepsilon (K) } Q^N_x ( {\mathcal O} ) \geq - h ,$$
which implies the assertion since $ Q^N_x ( Y^N \in {\mathcal O} ) \le Q^N_x ( \sigma_0 \geq T_0 ) \le \frac{E^N_x \sigma_0}{T_0} .$
$\bullet$
{\it Proof of Proposition \ref{prop:sigma_0second}.}
1) Let $ S = \inf \{ t \geq 0 : Y^N \in B_{\varepsilon } ( K) \cup D \} , $ where $ D = (B_{\bar \varepsilon }(K) )^c .$
We know by Lemma \ref{lem:1} that there exists $ T_1 > 0 $ such that
\begin{equation}\label{eq:(1)}
\limsup_{N \to \infty } \sup_{x \in \overline {B_{\bar \varepsilon }(K )}} Q_x^N ( S > T_1 ) < 1.
\end{equation}
2) We shall now show that there exists $ T_2$ such that
\begin{equation}\label{eq:(2)}
\liminf_{N \to \infty } \frac1N \log \inf_{x \in \overline {B_\varepsilon (K)}} Q_x^N ( \sigma_0 \le T_2 ) \geq - h .
\end{equation}
Indeed, like in \cite{DZ}, page 231, we first construct, for all $x \in \overline {B_\varepsilon (K)}$ a smooth path $ \psi^x $ of length $t^x $ such that $ \psi^x (t^x) \in K$ and such that
$ \psi^x (t) $ does not leave $ B_{2 \bar \varepsilon /3}(K )$ for all $ t \le t^x .$ Moreover, this path can be chosen such that $ I_{x, t^x} ( \psi^x ) \le h/2.$
We then fix $ \varepsilon ' > \bar \varepsilon $ such that $ 6 \bar \varepsilon < \varepsilon ' $ and apply Proposition \ref{prop:STLCgamma} to $ 2 \bar \varepsilon $ and $ \varepsilon '.$ This is possible if $ \bar \varepsilon $ is sufficiently small. Then for any $ x_0 \in K$ there exists $ z \in \partial B_{2 \bar \varepsilon }(K) $ and a path $ \psi^{x_0} $ of length $t^{x_0} $ steering $x_0$ to $z , $ during $ [0, t^{x_0} ],$ such that $ I_{x_0, t^{x_0 } } ( \psi^{x_0 } ) \le h/2 .$ We then concatenate the two paths and obtain a new path $ \Psi^x $ of length $ T^x = t^x + t^{x_0} , $ steering $x $ to $z,$ at cost $\le h.$ Let
$$ T_2 = \sup_{ x \in \overline {B_\varepsilon (K)} } T^x < \infty $$
and
$$ {\mathcal O} =\bigcup_{ x \in \overline {B_\varepsilon (K)} } \{ \varphi \in C ( [ 0, T_2], {\mathbb R}^n ) : \| \varphi - \Psi^x \|_\infty < \bar \varepsilon / 2 \} .$$
Then
$$ \liminf_{N \to \infty } \frac1N \log \inf_{x \in \overline {B_\varepsilon (K)}} Q_x^N ( Y^N \in {\mathcal O} ) \geq - h ,$$
which implies \eqref{eq:(2)}, since $ \varphi \in {\mathcal O} $ implies that $ \sigma_0 ( \varphi ) \le T_2 .$
3) We deduce from the above discussion the following.
$$ \inf_{ x \in \overline {B_{\bar \varepsilon}(K ) }} Q^N_x ( \sigma_0 \le T := T_1 + T_2 ) \geq \inf_{ x \in \overline {B_{\bar \varepsilon }(K )} } Q^N_x ( S \le T_1 ) \cdot \inf_{ x \in \overline{ B_\varepsilon (K )} } Q^N_x ( \sigma_0 \le T_2 ) =: q .$$
By iteration, we obtain
$$ \sup_{ x \in \overline {B_{\bar \varepsilon} (K ) }} Q^N_x ( \sigma_0 > k T ) \le (1- q )^k , \mbox{ whence } \sup_{ x \in \partial {B_{ \varepsilon }(K ) }} E^N_x \sigma_0 \le \sup_{ x \in \overline {B_{\bar \varepsilon }(K ) }} E^N_x \sigma_0 \le \frac{T}{q}.$$
But
$$ q \geq e^{ - Nh } \inf_{ x \in \overline {B_{\bar \varepsilon }(K )} } Q^N_x ( S \le T_1 ) \geq c e^{- N h } , $$
for $ N $ sufficiently large. This implies the desired assertion.
$ \bullet$
\section*{Acknowledgments}
I would like to thank an anonymous reviewer for his valuable comments and suggestions which helped me to improve the paper.
This research has been conducted as part of the project Labex MME-DII (ANR11-LBX-0023-01) and as part of the activities of FAPESP Research,
Dissemination and Innovation Center for Neuromathematics (grant
2013/07699-0, S.\ Paulo Research Foundation).
\end{document} |
\begin{document}
\title{On the $L^1$ norm of an exponential sum involving the divisor function}
\author{D. A. Goldston and M. Pandey}
\thanks{$^{*}$ The first author was in residence at the Mathematical Sciences
Research Institute in Berkeley, California (supported by the National Science Foundation under Grant
No. DMS-1440140), during the Spring 2017 semester.}
\date{\today}
\maketitle
\section{Introduction}
Let $\tau(n) = \sum_{d|n} 1$ be the divisor function, and
\begin{align*}
S(\alpha) = \sum_{n\le x} \tau(n)e(n\alpha), \qquad e(\alpha) = e^{2\pi i \alpha}.
\end{align*}
In 2001 Br\"udern \cite{Brudern} considered the $L^1$ norm of $S(\alpha)$ and claimed to prove
\begin{equation} \sqrt{x} \ll \int_0^1 |S(\alpha)| d\alpha \ll \sqrt{x}. \end{equation}
However there is a mistake in the proof given there which depends on a lemma which is false.
In this note we prove the following result.
\begin{theorem}
We have
\label{thm}
\begin{equation}
\sqrt{ x} \ll \int_0^1 |S(\alpha)|d\alpha\ll \sqrt{x}\log x.
\end{equation}
\end{theorem}
The upper bound here is obtained by following Brudern's proof with corrections. The lower bound is based on the method Vaughan introduced to study the $L^1$ norm for exponential sums over primes \cite{Va}, and also makes use of a more recent result of Pongsriiam and Vaughan \cite{PV} on the divisor sum in arithmetic progressions. We do not know whether the upper bound or the lower bound reflects the actual size of the $L^1$ norm here.
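Although it plays no role in the proofs, the quantity studied here is easy to explore numerically. The following sketch (illustrative only; the value of $x$ and the grid size are arbitrary choices) approximates the $L^1$ norm by a Riemann sum and compares it with $\sqrt{x}$ and $\sqrt{x}\log x$.
\begin{verbatim}
import numpy as np

def divisor_counts(x):
    # tau(1), ..., tau(x) by a sieve: add 1 to every multiple of each d
    tau = np.zeros(x + 1, dtype=int)
    for d in range(1, x + 1):
        tau[d::d] += 1
    return tau[1:]

x = 2000
tau = divisor_counts(x)
n = np.arange(1, x + 1)

M = 20 * x                               # number of sample points in [0, 1)
alphas = (np.arange(M) + 0.5) / M
L1 = sum(abs(np.sum(tau * np.exp(2j * np.pi * n * a))) for a in alphas) / M

print(L1, np.sqrt(x), np.sqrt(x) * np.log(x))
\end{verbatim}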
\section{Proof of the upper bound}
Let $u$ and $v$ always be positive integers. Following Br\"udern, we have
\[ \begin{split}
S(\alpha) &= \sum_{n\le x} \left( \sum_{uv=n}1\right) e(n\alpha) \\&=\sum_{uv\le x}e( uv\alpha) \\&= 2\sum_{u\le\sqrt{x}}\sum_{u < v\le x/u} e( uv\alpha) + \sum_{u\le\sqrt{x}} e( u^2\alpha)\\ &
:= 2T(\alpha) + V(\alpha) .
\end{split} \]
By Cauchy-Schwarz and Parseval
\[ \int_0^1 |V(\alpha)|\, d\alpha \le \left( \int_0^1 |V(\alpha)|^2\, d\alpha \right)^{\frac12} = \sqrt{ \lfloor \sqrt{x}\rfloor} \le x^{\frac14} ,\]
and therefore by the triangle inequality
\[ \int_0^1 |S(\alpha)|\, d\alpha = 2 \int_0^1 |T(\alpha)|\, d\alpha + O(x^{\frac14}). \]
Thus to prove the upper bound in Theorem 1 we need to establish
\begin{equation} \label{integral}
\int_0^1 |T(\alpha)| d\alpha\ll\sqrt{x}\log x.
\end{equation}
We proceed as in the circle method. Clearly in \eqref{integral} we can replace the integration range $[0,1]$ by $[1/Q, 1+1/Q]$. By Dirichlet's theorem for any $\alpha \in [1/Q, 1+1/Q]$ we can find a fraction $\frac{a}{q}$, $1\le q \le Q$, $1\leq a \le q$, $(a,q)=1$, with $|\alpha - \frac{a}{q}| \le 1/(qQ)$. Thus the intervals $ [\frac{a}{q}- \frac{1}{qQ}, \frac{a}{q} +\frac{1}{qQ} ] $ cover the interval $[1/Q,1+1/Q]$. Taking
\begin{equation} \label{Qequation} 2\sqrt{x} \le Q \ll \sqrt{x}, \end{equation}
we obtain
\[
\int_0^1 |T(\alpha)|d\alpha
\le \sum_{q\le Q}\sum_{\substack{a = 1\\ (a,q)=1}}^q \int_{\frac{a}{q}-1/(2q\sqrt{x})}^{\frac{a}{q}+1/(2q\sqrt{x})} \left\vert T(\alpha)\right\vert d\alpha.
\]
On each interval $ [\frac{a}{q}- \frac{1}{2q\sqrt{x}}, \frac{a}{q} +\frac{1}{2q\sqrt{x}} ] $ we decompose $T(\alpha)$
into
\[ T(\alpha) = F_q(\alpha) + G_q(\alpha) \]
where
\[ F_q(\alpha) = \sum_{\substack{u\le\sqrt{x}\\ q|u}}\sum_{u < v\le x/u} e( uv\alpha) \]
and
\[ G_q(\alpha) = \sum_{\substack{u\le\sqrt{x}\\ q\nmid u}}\sum_{u < v\le x/u} e( uv\alpha), \]
and have
\[ \begin{split} \int_0^1 |T(\alpha)|d\alpha
&\le \sum_{q\le Q}\sum_{\substack{a = 1\\ (a,q)=1}}^q \int_{\frac{a}{q}-1/(2q\sqrt{x})}^{\frac{a}{q}+1/(2q\sqrt{x})} \left\vert F_q(\alpha)\right\vert d\alpha + \sum_{q\le Q}\sum_{\substack{a = 1\\ (a,q)=1}}^q \int_{\frac{a}{q}-1/(2q\sqrt{x})}^{\frac{a}{q}+1/(2q\sqrt{x})} \left\vert G_q(\alpha)\right\vert d\alpha\\ &
:= I_F + I_G. \end{split}
\]
The upper bound in Theorem 1 follows from the following two lemmas.
\begin{lemma} [Br\"udern] We have
\[ I_F \ll \sqrt{x}. \]
\end{lemma}
\begin{lemma} We have
\[ I_G \ll \sqrt{x}\log x . \]
\end{lemma}
In what follows we always assume $(a,q)=1$, and define the new variable $\beta$ by
\begin{equation} \label{alpha} \alpha = \frac{a}{q} +\beta . \end{equation}
\begin{proof}[Proof of Lemma 2] The proof follows from the estimate
\begin{equation} \label{Festimate}
F_q(\alpha)
\ll \left\{ \begin{array}{ll}
{\min\left(x,\frac{1}{|\beta|}\right)\frac{\log
\frac{2\sqrt{x}}{q}}{q} ,} & \mbox{if $q\le \sqrt{x}$, $ |\beta|\le \frac{1}{2q\sqrt{x}}$;} \\
0, & \mbox{if $q>\sqrt{x}$;} \\
\end{array}
\right.
\end{equation}
since this implies
\[ \begin{split} I_F &\ll \sum_{q\le \sqrt{x}} q \int_0^{1/(2q\sqrt{x})}\min\left(x, \frac{1}{|\beta|}\right) \frac{\log
\frac{2\sqrt{x}}{q}}{q}\, d\beta \\&
\ll \sum_{q\le \sqrt{x}} \log
\frac{2\sqrt{x}}{q}\left(\int_0^{1/(2x)} x \, d\beta + \int_{1/(2x)}^{1/(2q\sqrt{x})} \frac{1}{\beta} \, d\beta \right) \\& \ll \sum_{q\le \sqrt{x}} \left(
\log
\frac{2\sqrt{x}}{q}\right)^2 \quad \ll \sqrt{x}.
\end{split} \]
To prove \eqref{Festimate}, we first note that the conditions $q|u$ and $u \le \sqrt{x}$ force $F_q(\alpha)=0$ when $q>\sqrt{x}$. Next, when $q\le \sqrt{x}$ we write $u=jq$ and have
\[ F_q(\alpha) = \sum_{j\le \frac{\sqrt{x}}{q}}\sum_{jq\le v\le \frac{x}{jq}} e( jqv\beta).\]
Making use of the estimate
\begin{equation} \label{basic} \sum_{N_1 < n \le N_2} e(n\alpha) \ll \min\left( N_2-N_1, \frac{1}{\Vert \alpha \Vert}\right)\end{equation}
we have
\[ F_q(\alpha) \ll \sum_{j\le \frac{\sqrt{x}}{q}}\min\left(\frac{x}{jq}, \frac{1}{\Vert jq\beta \Vert}\right).\]
In this sum $jq\le \sqrt{x}$ so that $|jq\beta | \le |\beta| \sqrt{x}$, and hence the condition $ |\beta|\le \frac{1}{2q\sqrt{x}}$ implies $|jq\beta | \le \frac{1}{2q} \le \frac12$. Hence $ \Vert jq\beta \Vert = jq|\beta|$, and we have
\[ F_q(\alpha) \ll \sum_{j\le \frac{\sqrt{x}}{q} } \frac{1}{jq} \min\left(x, \frac{1}{|\beta|}\right)\ll \min(x, \frac{1}{|\beta|})\frac{\log\frac{2\sqrt{x}}{q}}{q} .\]
\end{proof}
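For completeness we record why \eqref{basic} holds (it is used again in the proof of Lemma 3): the bound $N_2 - N_1$ is trivial, while for $\alpha$ not an integer, summing the geometric series gives
\[ \Big\vert \sum_{N_1 < n \le N_2} e(n\alpha) \Big\vert = \Big\vert \frac{e((N_2 - N_1)\alpha) - 1}{e(\alpha) - 1} \Big\vert \le \frac{2}{|e(\alpha) - 1|} = \frac{1}{|\sin \pi \alpha|} \le \frac{1}{2 \Vert \alpha \Vert} . \]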
\begin{proof}[Proof of Lemma 3] The proof follows from the estimate,
\begin{equation} \label{Gestimate}
G_q(\alpha) \ll (\sqrt{x} + q)\log q , \quad \text{for} \ \alpha = \frac{a}{q} + \beta, \quad |\beta| \le \frac{1}{2q\sqrt{x}}, \end{equation}
since this implies
\[ \begin{split} I_G &\ll \sum_{q\le Q} q \int_0^{1/(2q\sqrt{x})}(\sqrt{x} +q) \log q\, d\beta \\&
\ll \frac{1}{\sqrt{x}}\, Q(\sqrt{x} +Q)\log Q \ll \sqrt{x}\log x
\end{split} \]
by \eqref{Qequation}.
To prove \eqref{Gestimate}, we apply \eqref{basic} to the sum over $v$ in $G_q(\alpha)$ and obtain
\[ G_q(\alpha) \ll \sum_{\substack{u\le\sqrt{x}\\ q\nmid u}} \min\left( \frac{x}{u},\frac{1}{\Vert u\alpha \Vert}\right). \]
Recalling $\Vert x \Vert = \Vert -x \Vert $ and the triangle inequality $\Vert x+y\Vert \le \Vert x\Vert +\Vert y\Vert$, and using the conditions $1\le u\le \sqrt{x}$, $q\nmid u$, $ |\beta| \le \frac{1}{2q\sqrt{x}}$, we have
\[ \begin{split}\left \Vert u\alpha\right \Vert &\ge \left\Vert \frac{au}{q}\right \Vert - \Vert u \beta \Vert \\&
\ge \left \Vert \frac{au}{q} \right\Vert - u|\beta| \\&
\ge\left \Vert \frac{au}{q} \right\Vert - \frac{u} {2q\sqrt{x}} \\&
\ge \left \Vert \frac{au}{q} \right \Vert - \frac{1} {2q} \\&
\ge \frac12\left \Vert \frac{au}{q} \right \Vert , \end{split} \]
where the last step uses that $ \left\Vert \frac{au}{q} \right\Vert \ge \frac1q ,$ since $(a,q)=1$ and $ q \nmid u .$ Therefore
\[ G_q(\alpha) \ll \sum_{\substack{u\le\sqrt{x}\\ q\nmid u}} \frac{1}{\left\Vert \frac{au}{q} \right \Vert}. \]
Here $\left\Vert \frac{au}{q} \right \Vert = \frac{b}{q}$ for some integer $1\le b \le \frac{q}{2}$, and since $(a,q)=1$ the integers $\{au: 1\le u \le q\}$ are distinct modulo $q$, so that each value of $b$ arises from at most two values of $u$ in any block of $q$ consecutive integers. Hence
\[ \sum_{\substack{1\le u \le q \\ q\nmid u}} \frac{1}{\left\Vert \frac{au}{q}\right \Vert} \le 2 \sum_{1\le b \le \frac{q}{2}}\frac{q}{b} \ll q\log q ,\]
and the same bound holds for the sum over any $q$ consecutive values of $u$ with $q\nmid u$.
If $q> \sqrt{x}$ then
\[ \sum_{\substack{u\le\sqrt{x}\\ q\nmid u}} \frac{1}{\left\Vert \frac{au}{q} \right \Vert} \le \sum_{\substack{1\le u \le q\\ q\nmid u}} \frac{1}{\left\Vert \frac{au}{q}\right \Vert} \ll q\log q ,\]
while if $q\le \sqrt{x}$ then
the sum bounding $G_q(\alpha)$ can be split into $\ll \frac{ \sqrt{x}}{q}$ sums over blocks of $q$ consecutive values of $u$, each $\ll q\log q$, and
\[ G_q(\alpha) \ll \frac{\sqrt{x}}{q} (q\log q )\ll \sqrt{x}\log q. \]
\end{proof}
\section{Proof of the lower bound}
Following Br\"udern, consider the intervals $|\alpha - \frac{a}{q}| \le 1/(4x)$ for $1\le a\le q\le Q $, where we take $\frac12 \sqrt{x} \le Q \le \sqrt{x}$. These intervals are pairwise disjoint because for two distinct fractions
$|a/q - a'/q'| \ge 1/(qq') \ge 1/x$. (We will see later why these intervals have been chosen shorter than required to be disjoint.) Hence, using \eqref{alpha}
\[ \int_0^1 |S(\alpha)|\, d\alpha = \int_{1/Q}^{1+1/Q} |S(\alpha)|\, d\alpha \ge \sum_{q\le \frac12\sqrt{x} }\sum_{\substack{a = 1\\ (a,q)=1}}^q \int_{-1/(4x)}^{1/(4x)} \left\vert S\left(\frac{a}{q} +\beta\right)\right\vert d\beta.
\]
Next we follow Vaughan's method \cite{Va} and apply the triangle inequality to obtain the lower bound
\[ \int_0^1 |S(\alpha)|\, d\alpha \ge \sum_{q\le \frac12\sqrt{x} } \int_{-1/(4x)}^{1/(4x)} \left\vert \sum_{\substack{a = 1\\ (a,q)=1}}^qS\left(\frac{a}{q} +\beta\right)\right\vert d\beta.
\]
Letting
\begin{equation} \label{Uq} U_q(x;\beta) := \sum_{\substack{a = 1\\ (a,q)=1}}^qS\left(\frac{a}{q} +\beta\right) = \sum_{n\le x} \tau(n) c_q(n)e(n\beta) , \end{equation}
where
\[ c_q(n) = \sum_{\substack{a = 1\\ (a,q)=1}}^q e\left(\frac{an}{q}\right) \]
is the Ramanujan sum, our lower bound may now be written as
\begin{equation} \label{lowerbound} \int_0^1 |S(\alpha)|\, d\alpha \ge \sum_{q\le \frac12\sqrt{x} } \int_{-1/(4x)}^{1/(4x)} \left\vert U_q(x;\beta)\right\vert d\beta. \end{equation}
To complete the proof of the lower bound we need the following lemma, which we prove at the end of this section.
\begin{lemma}
\label{Usum}
For $ q\ge 1$ we have
\begin{align}
U_q(x; 0) = \frac{\varphi(q)}{q}x(\log (x/q^2) + 2\gamma - 1) + O(q\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon),
\end{align}
where $\gamma$ is Euler's constant.
\end{lemma}
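Before turning to the proof of the lower bound, we note that Lemma \ref{Usum} is easy to test numerically. The sketch below is purely illustrative and not part of the argument; the values of $x$ and $q$ are arbitrary.
\begin{verbatim}
import numpy as np
from math import gcd, log

EULER_GAMMA = 0.5772156649015329

def divisor_counts(x):
    tau = np.zeros(x + 1, dtype=int)
    for d in range(1, x + 1):
        tau[d::d] += 1
    return tau

def ramanujan_sum(q, n):
    # c_q(n) from the definition; the imaginary parts cancel, so cosines suffice
    return int(round(sum(np.cos(2 * np.pi * a * n / q)
                         for a in range(1, q + 1) if gcd(a, q) == 1)))

def euler_phi(q):
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

x, q = 5000, 12
tau = divisor_counts(x)
U = sum(int(tau[n]) * ramanujan_sum(q, n) for n in range(1, x + 1))
main = euler_phi(q) / q * x * (log(x / q**2) + 2 * EULER_GAMMA - 1)
print(U, main)   # the two values should agree up to the error term of the lemma
\end{verbatim}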
\begin{proof}[Proof of the lower bound in Theorem 1] For any exponential sum
$T(x; \beta) =\sum_{n\le x} a_n e(n\beta)$ we have by partial summation or direct verification
\[ T(x; \beta) = e(\beta x) T(x;0) - 2\pi i \beta \int_1^x e(\beta y) T(y;0) \, dy.\]
Taking $T(x;\beta) = U_q(x;\beta)$ we thus obtain from \eqref{lowerbound} and the triangle inequality
\begin{equation} \label{lower} \int_0^1 |S(\alpha)|\, d\alpha \ge \sum_{q\le \frac12\sqrt{x} } \int_{-1/(4x)}^{1/(4x)} \left( |U_q(x;0)| - 2\pi |\beta | \int_1^x |U_q(y; 0)| \, dy \right) d\beta.\end{equation}
By Lemma 4, with $q\le \frac12\sqrt{x}$,
\[ \begin{split} \int_1^x |U_q(y; 0)| \, dy & \le \frac{\varphi(q)}{q} \left( \int_1^x y | \log(y/q^2)| +(2\gamma -1)y \, dy\right) + O(xq\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon) \\&
\le \frac{\varphi(q)}{q} \left( \int_1^{q^2} y \log(q^2/y) \, dy + \int_{q^2}^x y \log(y/q^2) \, dy+ \frac12 x^2 (2\gamma -1) \right) \\ & \hskip 2.5in + O(xq\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon) \\&
= \frac{x}{2}\left( \frac{\varphi(q)}{q}\left( x(\log (x/q^2)+ 2\gamma -1) - \frac{x}{2} + \frac{q^4}{x} \right) + O(q\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon)\right)\\&
\le \frac{x}{2} U_q(x;0) + O(xq\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon).
\end{split}\]
Using $|\beta | \le 1/(4x)$, we have
\[ |U_q(x;0)| - 2\pi |\beta| \int_1^x |U_q(y; 0)| \, dy \ge \left( 1 - \frac{\pi}{4} \right)|U_q(x;0)| - O(q\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon). \]
We conclude, returning to \eqref{lower} and making use of Lemma 4 again,
\[ \begin{split} \int_0^1 |S(\alpha)|\, d\alpha & \ge \frac{4-\pi}{8 x} \sum_{q\le \frac12\sqrt{x} } \left(|U_q(x;0)| - O(q\tau(q) (x^{\frac13} + q^{\frac12})x^\epsilon)\right) \\&
\ge \frac{4-\pi}{8} \sum_{q\le \frac12\sqrt{x} } \frac{\varphi(q)}{q}(\log (x/q^2) + 2\gamma - 1) -O(x^{\frac13+\epsilon}) .\end{split} \]
It is easy to see that the sum above is $\gg \sqrt{x} ,$ which suffices to prove the lower bound. More precisely, using
\[ \frac{\varphi(n)}{n} = \sum_{d|n} \frac{ \mu(d) }{d} \]
a simple calculation gives the well-known result
\[ \sum_{ n\le x} \frac{\varphi(n)}{n} = \frac{6}{\pi^2} x +O(\log x), \]
and then by partial summation we find
\[ \sum_{q\le \frac12\sqrt{x} } \frac{\varphi(q)}{q}(\log (x/q^2) + 2\gamma - 1)\sim \frac{6}{\pi^2}\Bigl(\log 2 +\gamma + \tfrac12\Bigr)\sqrt{x} .\]
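For completeness, here is a brief sketch of this last evaluation. Writing $ L = \frac12\sqrt{x}$ and $\Phi(t) = \sum_{q\le t}\frac{\varphi(q)}{q} = \frac{6}{\pi^2}t + O(\log t)$, partial summation gives
\[ \sum_{q\le L } \frac{\varphi(q)}{q}\Bigl(\log \frac{x}{q^2} + 2\gamma - 1\Bigr) = \Phi(L)\bigl(2\log 2 + 2\gamma - 1\bigr) + 2\int_1^L \frac{\Phi(t)}{t}\, dt ,\]
since $\log(x/L^2) = 2\log 2$, and the right-hand side equals
\[ \frac{6}{\pi^2}\, L\, (2\log 2 + 2\gamma - 1) + \frac{12}{\pi^2}\, L + O(\log^2 x) = \frac{6}{\pi^2}\Bigl(\log 2 + \gamma + \tfrac12\Bigr)\sqrt{x} + O(\log^2 x) . \]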
\end{proof}
\begin{proof}[Proof of Lemma 4] Pongsriiam and Vaughan \cite{PV} recently proved the following very useful result on the divisor
function in arithmetic progressions. For integer $a$, $d\ge 1$ and real $x\ge 1$ we have
\[ \sum_{\substack{ n\le x \\ n\equiv a(\text{mod}\, d)}}\tau(n) = \frac{x}{d} \sum_{r | d}\frac{c_r(a)}{r}\left(\log\frac{x}{r^2} + 2\gamma -1\right) +O( (x^{\frac13} + d^{\frac12})x^\epsilon),\]
where $\gamma$ is Euler's constant and $c_r(a)$ is the Ramanujan sum. We need the special case when $a=0$ which along with the situation $(a,d)>1$ is explicitly allowed in this formula. Hence we have
\begin{equation} \label{PV}
\sum_{\substack{ n\le x \\ d|n}} \tau(n) = \frac{x}{d}f_x(d) + O( (x^{\frac13} + d^{\frac12})x^\epsilon)
\end{equation}
where
\begin{equation}\label{f-g}
f_x(d) = \sum_{r|d} g_x(r), \quad g_x(r) = \frac{\varphi(r)}{r}(\log(x/r^2) + 2\gamma - 1).
\end{equation}
Making use of
\[
c_q(n) = \sum_{d|(n, q)}d\mu\left(\frac{q}{d}\right),
\]
and \eqref{PV} we have
\[ \begin{split} U_q(x;0) &= \sum_{n\le x}\tau(n)c_q(n)\\&
= \sum_{d|q}d\mu\left(\frac{q}{d}\right)\underset{d|n}{\sum_{n\le x}}\tau(n) \\& =
x\sum_{d|q}\mu\left(\frac{q}{d}\right)f_x(d) + O( q\tau(q)(x^{\frac13} + q^{\frac12})x^\epsilon).
\end{split} \]
We evaluate the sum above using Dirichlet convolution and the identity $1*\mu = \delta$ where $\delta(n)$ is the identity for Dirichlet convolution defined to be 1 if $n=1$ and zero otherwise. Hence
\[ \sum_{d|q}\mu\left(\frac{q}{d}\right)f_x(d) = (f_x * \mu)(q) = ( (g_x*1)*\mu)(q) = (g_x * \delta)(q) = g_x(q),
\]
and Lemma 4 is proved.
\end{proof}
\footnotesize D. A. Goldston \,\,\,
([email protected])
Department of Mathematics and Statistics
San Jose State University
San Jose, CA 95192
USA \\
\footnotesize M. Pandey \,\,\,
([email protected])
12861 Regan Lane
Saratoga, CA 95070
USA \\
\end{document} |
\begin{document}
\title{Davidson-Luce model for multi-item choice with ties\thanks{The work of David Firth and Ioannis Kosmidis was
supported by the Alan Turing Institute under EPSRC grant
EP/N510129/1.}}
\author{ David Firth\thanks{
Department of Statistics, University of Warwick, UK; and
The Alan Turing Institute, London, UK.
ORCiD: 0000-0003-0302-2312.
\href{mailto:[email protected]}{[email protected]}}
\and
Ioannis Kosmidis\thanks{
Department of Statistics, University of Warwick, UK; and
The Alan Turing Institute, London, UK.
ORCiD: 0000-0003-1556-0302.}
\and
Heather L. Turner\thanks{
Department of Statistics, University of Warwick, UK.
ORCiD: 0000-0002-1256-3375.}
}
\maketitle
\begin{abstract}
This paper introduces a natural extension of the pair-comparison-with-ties model of Davidson (1970, J.~Amer.~Statist.~Assoc), to allow for ties when more than two items are compared.
Properties of the new model are discussed.
It is found that this `Davidson-Luce' model retains the many appealing features of Davidson's solution, while extending the scope of application substantially beyond the domain of pair-comparison data.
The model introduced here already underpins the handling of tied rankings in the \textbf{PlackettLuce} \emph{R} package. \\
\noindent {Keywords: \textit{Bradley-Terry model}, \textit{Plackett-Luce model}, \textit{exponential family}, \textit{Luce axiom}}
\end{abstract}
\section{Background: Pair comparisons}
\label{intro}
\subsection{Bradley-Terry model and Davidson's generalization for ties}
\label{bradley-terry}
A commonly used statistical model for pair-comparison data is the so-called Bradley-Terry model \citep{Bradley1952}, in which a binary outcome `$i$ is preferred to $j$' or `$i$ beats $j$' is assumed to have probability of the form
\[
\frac{\alpha_i}{\alpha_i + \alpha_j}.
\]
In the Bradley-Terry model each `item' (or `player') $i$ has their own unobserved `strength' or `ability' $\alpha_i > 0$, and it is the relative values of $\alpha_i$ and $\alpha_j$ that determine the win-probabilities when $i$ and $j$ are compared.
The Bradley-Terry model is a logit-linear model for the binary outcome ($i$ wins, or $j$ wins); and the ratio $\alpha_i / \alpha_j$ is readily interpretable as the \emph{odds} (on $i$ winning, in a contest between $i$ and $j$).
The Bradley-Terry model has also been shown \citep{Luce1959, Luce1977} to follow from a simple and appealing axiom for behaviour when making choices among items.
For choosing the preferred item from a finite set $S$, Luce's axiom implies choice probabilities
\[
\frac{\alpha_i}{\sum_{k \in S} \alpha_k}\quad (i \in S),
\]
from which the Bradley-Terry model follows whenever $S$ contains only two elements.
The Bradley-Terry model's outcomes are strictly binary: ties are not permitted.
\citet{Davidson1970} shows how to generalize the Bradley-Terry model to accommodate ties, in a way that does not violate Luce's axiom.
The Davidson model stipulates, for the three possible outcomes \{$i$ wins, $j$ wins, tie\} in a comparison of items $i$ and $j$, probabilities (summing to 1) as follows:
\begin{center}
\begin{tabular}{lccc}
\textbf{Outcome}:\qquad & $i$ wins & $j$ wins & tie \\
\textbf{Probability} (proportional to):\qquad & $\alpha_i$ & $\alpha_j$ &
$\delta(\alpha_i\alpha_j)^{1/2}$ \\
\end{tabular}
\end{center}
The Davidson model thus incorporates a single additional parameter, $\delta$, which describes the prevalence of ties; different values of $\delta$ will be appropriate in different application contexts.
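For a concrete illustration (the numbers are arbitrary): with $\alpha_i = 2$, $\alpha_j = 1$ and $\delta = \tfrac12$, the unnormalized weights are $2$, $1$ and $\tfrac12\sqrt{2}$, so the probabilities of `$i$ wins', `$j$ wins' and `tie' are
\[
\frac{2}{3 + \tfrac12\sqrt{2}}, \qquad \frac{1}{3 + \tfrac12\sqrt{2}}, \qquad \frac{\tfrac12\sqrt{2}}{3 + \tfrac12\sqrt{2}},
\]
respectively (approximately $0.540$, $0.270$ and $0.191$).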
\subsection{Properties of the Davidson model}
Some well-known properties of the Davidson model are as follows:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
The geometric mean $(\alpha_i \alpha_j)^{1/2}$ has the same dimension as $\alpha_i$ and $\alpha_j$; that is to say, their units of measurement are the same. This makes the tie-prevalence parameter $\delta$ dimensionless, and straightforwardly interpretable. Specifically, the probability of a tie in any comparison between items of equal strength (i.e., $\alpha_i = \alpha_j$) is ${\delta}/({2 + \delta})$.
\item
Conditional upon the outcome \emph{not} being a tie, the probability that $i$ wins is $\alpha_i/(\alpha_i + \alpha_j)$, exactly as in the Bradley-Terry model for binary outcomes.
In this way the Davidson model maintains compatibility with Luce's axiom.
\item
Like the Bradley-Terry model, the Davidson generalization depends on the strengths only through their \emph{relative} values.
The scale --- or unit of measurement --- of strengths $\{\alpha_i, \alpha_j,\ldots\}$ is immaterial.
\item
For any fixed value of $\delta$, the tie probability is proportional to $(\alpha_i \alpha_j)^{1/2}/(\alpha_i + \alpha_j)$ and is maximized when $\alpha_i=\alpha_j$.
That is, ties are most likely when the items being compared have equal strength.
\item
The Davidson model is a full exponential family model, and so maximum likelihood estimation of the parameters (the strengths $\{\alpha_i, \alpha_j, \ldots\}$ and the tie prevalence $\delta$) simply equates sufficient statistics with their expectations under the model.
The sufficient statistics are
\begin{itemize}
\item
for each item, its observed number of `wins' plus half its observed number of ties;
\item
the total number of ties seen, in all comparisons made.
\end{itemize}
See, e.g., \citet{Fienberg1979} for full details of the model's representation in log-linear form, and consequent solution of the likelihood equations in standard software.
\item
The preceding property has a neat implication when the Davidson model is applied to a `balanced round-robin' tournament among $n$ items, where every item is compared with every other item the same number of times.
In that context the maximum likelihood estimates $\{\hat\alpha_i:\, i=1,\ldots,n\}$ are ordered in exactly the same way as would be simple, item-specific `points totals', with 2 points awarded for a win and 1 point for a tie \citep{Davidson1970}.
This holds regardless of the value of $\delta$.
\end{enumerate}
\section{More than two items: Davidson-Luce model}
\subsection{Preamble}
\label{preamble}
In this section we extend the Davidson model to comparisons involving more than two items. The new `Davidson-Luce model' is designed to retain
the key properties of the Davidson model.
In general we will suppose that a choice is to be made (i.e., a winner is to be determined) from $r$ items.
The outcome can be a single `best' item, or a tie between two or more of the items under comparison.
The model is introduced first for $r=3$, before giving its general definition for any finite $r$.
\subsection{Choice from three items}
\label{three_items}
With three items $i, j, k$, there are 7 possible outcomes.
We will label these here as
\begin{itemize}
\item
$i$, $j$, $k$ (a single item wins)
\item
$ij$, $jk$, $ik$ (two items are tied winners)
\item
$ijk$ (all three items are tied winners)
\end{itemize}
The Davidson-Luce model in this case specifies 7 probabilities that sum to 1, in the following proportions:
\begin{center}\small
\begin{tabular}{lccccccc}
\textbf{Outcome}:\qquad & $i$ & $j$ & $k$ & $ij$ & $jk$ & $ik$ & $ijk$ \\
\textbf{Probability} (proportional to):\qquad & $\alpha_i$ & $\alpha_j$ & $\alpha_k$ &
$\delta_2(\alpha_i\alpha_j)^{1/2}$ &
$\delta_2(\alpha_j\alpha_k)^{1/2}$ &
$\delta_2(\alpha_i\alpha_k)^{1/2}$ &
$\delta_3(\alpha_i\alpha_j\alpha_k)^{1/3}$ \\
\end{tabular}
\end{center}
In this model there are two separate tie-prevalence parameters, $\delta_2\ge 0$ and $\delta_3\ge 0$, for the prevalence of 2-way ties and 3-way ties respectively.
The interpretation of strengths $\alpha_i, \alpha_j, \alpha_k$ is still as in the Luce model: conditional upon the outcome being an outright win for one item, the probabilities are in the ratios $\alpha_i:\alpha_j:\alpha_k$.
Still it is the case --- as in the Bradley-Terry, Luce and Davidson models --- that only \emph{relative} values of the strength parameters affect the model.
Moreover, as before, the tie probabilities are all maximized when strengths are equal.
The interpretation of $\delta_2$ is like that of $\delta$ in the Davidson model.
For example, conditional upon $k$ not being included in the winning choice, the possible outcomes are $\{i, j, ij\}$, in which case $\delta_2/(2 + \delta_2)$ is --- as before --- the probability of a 2-way tie between $i$ and $j$ when $\alpha_i = \alpha_j$.
Alternatively, if we condition only upon the outcome not being a 3-way tie, then with $\alpha_i=\alpha_j=\alpha_k$ the probability of a 2-way tie is $\delta_2/(1 + \delta_2)$.
The interpretation of $\delta_3$, similarly, is simplest in terms of the hypothetical situation of equal strengths $\alpha_i=\alpha_j=\alpha_k$ (i.e., the situation where, for any given value of $\delta_3>0$, the probability of a 3-way tie is maximized).
The probability of a 3-way tie is then $\delta_3/(3 + 3\delta_2 + \delta_3)$.
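These identities are easily verified numerically. The following sketch is an illustration only (the parameter values are arbitrary, and the code is not part of any package): it enumerates the seven possible outcomes for $r=3$ and reproduces the tie probabilities stated above.
\begin{verbatim}
from itertools import combinations

def dl_probs(alpha, delta2, delta3):
    # Davidson-Luce probabilities for a three-item comparison:
    # weight of a winning set T is delta_t * (product of alphas in T)^(1/t)
    delta = {1: 1.0, 2: delta2, 3: delta3}
    weights = {}
    for t in (1, 2, 3):
        for T in combinations(sorted(alpha), t):
            prod = 1.0
            for i in T:
                prod *= alpha[i]
            weights[T] = delta[t] * prod ** (1.0 / t)
    total = sum(weights.values())
    return {T: w / total for T, w in weights.items()}

alpha = {"i": 1.0, "j": 1.0, "k": 1.0}       # equal strengths
d2, d3 = 0.4, 0.2
p = dl_probs(alpha, d2, d3)

print(p[("i", "j", "k")], d3 / (3 + 3 * d2 + d3))         # 3-way tie probability
two_way = sum(v for T, v in p.items() if len(T) == 2)
print(two_way / (1 - p[("i", "j", "k")]), d2 / (1 + d2))  # 2-way tie, given no 3-way tie
\end{verbatim}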
The extensions of properties 5 and 6 listed above for the Davidson model are as follows.
The model is a full exponential family, whose sufficient statistics are:
\begin{itemize}
\item
for each item, its observed number of outright `wins', plus $\frac{1}{2}$ of its observed number of 2-way ties, plus $\frac{1}{3}$ of its observed number of 3-way ties;
\item
the total number of 2-way ties seen, in all comparisons made;
\item
the total number of 3-way ties seen, in all comparisons made.
\end{itemize}
As a consequence, in a balanced round-robin tournament of 3-way comparisons involving $n$ items in total, the maximum likelihood estimates $\hat\alpha_1,\ldots,\hat\alpha_n$ are ordered in exactly the same way as are simple, item-specific `points totals', with 6 points awarded for an outright win, 3 points for a 2-way tied win, and 2 points for a 3-way tie.
This holds regardless of the values of $\delta_2$ and $\delta_3$.
Further discussion of the properties of the Davidson-Luce model
is deferred to section \ref{properties}.
In the next subsection we show how this Davidson-Luce model extends, in an obvious way, to a choice made from any number $r$ of items.
\subsection{Choice from any finite set}
\label{general_r}
The model for $r=3$, as described above, immediately suggests the form of the Davidson-Luce model for any $r$.
In any given comparison, label the items being compared by $\{i_1,i_2,\ldots,i_r\}$,
and denote by $T$ the set of possible `winning' choices that might be made from the $r$ items being compared. For example, $T = \{i_2\}$ indicates an outright winner, $T = \{i_1, i_2\}$ indicates a 2-way tie, and so on, up to and including the possibility $T = \{i_1,i_2,\ldots,i_r\}$, which indicates that all $r$ items tied.
The Davidson-Luce model stipulates that the probability of any such choice $T$ is proportional to
\begin{equation}
p_T = \delta_t\left(\prod_{i \in T} \alpha_i\right)^{1/t},
\end{equation}
where $t$ denotes the cardinality of set $T$.
Thus $t$ can take values in $\{1,\ldots,r\}$. The adjustable tie-prevalence parameters are $\delta_2,\ldots, \delta_r$; the value of $\delta_1$ can be set arbitrarily to be 1, so $\delta_1$ is not actually a parameter in the model but is included here for presentational tidiness.
The constant of proportionality is just the normalizing constant, the reciprocal of the sum of $p_T$ over all possible choice sets $T$.
That normalizing constant can be straightforwardly computed, if needed, but it involves a rapidly increasing number of terms as the value of $r$ increases.
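To make that normalization concrete, here is a short illustrative \emph{R} sketch (again added for illustration only, and not optimized): it enumerates every possible winning set $T$, evaluates $p_T$, and divides by the sum over all such sets. The strengths and tie-prevalence values in the example call are hypothetical.
\begin{verbatim}
## Sketch only: Davidson-Luce probabilities for a choice from r items.
## delta[t] is the tie-prevalence parameter for t-way ties; delta[1] = 1.
DL_probs <- function(alpha, delta) {
  r <- length(alpha)
  stopifnot(length(delta) == r, delta[1] == 1)
  ## all nonempty subsets Tset of the r items (Tset plays the role of T)
  sets <- unlist(lapply(1:r, function(t) combn(r, t, simplify = FALSE)),
                 recursive = FALSE)
  p <- sapply(sets, function(Tset)
    delta[length(Tset)] * prod(alpha[Tset])^(1 / length(Tset)))
  names(p) <- sapply(sets, function(Tset)
    paste(names(alpha)[Tset], collapse = "="))
  p / sum(p)   # division by the normalizing constant
}
DL_probs(alpha = c(A = 2, B = 1, C = 1), delta = c(1, 0.5, 0.25))
\end{verbatim}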
The model's log-linear representation, which follows as a direct extension of \citet{Fienberg1979}, allows for simple iterative computation of estimates and associated standard errors without any need to evaluate the likelihood itself.
A numerical illustration is provided in the Appendix, to show how this works in detail.
\hypertarget{basic-properties-of-the-model}{
\subsection{Basic properties of the model}\label{basic-properties-of-the-model}}
\label{properties}
The \citet{Davidson1970} model is a special case of the Davidson-Luce model,
with $r = 2$ (i.e., comparisons of pairs only) and $\delta_2 \equiv \delta$. The Luce model \citep{Luce1959, Luce1977} is
the special case in which ties are not allowed: that is, $\delta_t \equiv 0$ for all
$t>1$.
Here we briefly describe how the Davidson model
properties listed above (in section \ref{intro})
extend to the Davidson-Luce model.
\begin{enumerate}
\def\arabic{enumi}.{\arabic{enumi}.}
\item
The geometric means $\left(\prod_{i \in T} \alpha_i\right)^{1/t}$ all have the same dimensions as the strengths $\{\alpha_i\}$, and so the tie-prevalence parameters $\delta_2,\ldots,\delta_r$ are all dimensionless.
It was shown in section \ref{three_items} above how to construct meaningful interpretations for those parameters.
\item
Conditional upon the outcome of a comparison \emph{not} being a tie, the probability that $i$ wins is $\alpha_i/\sum_{k \in S} \alpha_k$, for any $i$ in the comparison set $S$.
The Davidson-Luce model thus maintains compatibility with Luce's axiom.
\item
As before, dependence on item strengths is only ever through their \emph{relative} values.
\item
The tie probabilities are all in the form of geometric means, which are maximized when the items being compared have equal strengths.
\item
The Davidson-Luce model is still a full exponential family model as before, the sufficient statistics being
\begin{itemize}
\item
for each item, the total number of wins, counting a tied win fractionally in the obvious way;
\item
the total numbers of ties seen, of each order (i.e., the count of 2-way ties,
the count of 3-way ties, etc.).
\end{itemize}
A straightforward extension of the log-linear representation in \citet{Fienberg1979} leads to efficient solution of maximum likelihood equations --- without any need to compute the likelihood itself --- using standard software for generalized linear models.
\item
As already exemplified in section \ref{three_items}, the Davidson-Luce model continues to yield exact agreement with points-based league tables for fully balanced tournaments, provided that points are divided equally whenever items share a tied win.
\end{enumerate}
In summary, then: the Davidson-Luce model retains the many appealing features of the Davidson model for ties, while extending the scope of application substantially beyond the limited domain of pair-comparison data.
\section{Concluding remarks}
\label{remarks}
A specific application of the ideas developed here is to the Plackett-Luce model
\citep{Turner2019b}, which generalizes Bradley-Terry models to analysis of rankings.
In a Plackett-Luce model, it would typically be the case that tied ``winners'' can occur at any stage of the sequence of choices that forms a multi-item ranking; and this flexibility is what is implemented in the \textbf{PlackettLuce} package.
The \textbf{PlackettLuce} package also implements a prior penalty for Plackett-Luce models, which regularizes the likelihood with the aim of improving estimation. In particular, use of that prior penalty ensures that the conditions of \citet{Ford1957}, which ensure existence and finiteness of parameter estimates, are always satisfied.
The prior penalty, as implemented in the \textbf{PlackettLuce} package, requires no modification at all to work with the Davidson-Luce model. For full details on the \textbf{PlackettLuce} package and its use, see \citet{Turner2019b}.
\appendix
\section{Appendix: Computation via Poisson log-linear model representation}
Here we use a small, artificial example to show details of
implementation of the Davidson-Luce model in \emph{R}, using maximum
likelihood via a log-linear representation as suggested by
\citet{Fienberg1979}.
\subsection{Davidson-Luce model for a small, contrived example}
We imagine here a 4-player round-robin tournament in which each
`contest' involves exactly 3 of the 4 players. A single round-robin
tournament thus has 4 contests, in this setting.
The data we will use are as follows:
\begin{verbatim}
triples_round_robin <- matrix(c(
NA, 1, 0, 0,
1, NA, 1, 0,
0, 1, NA, 1,
1, 1, 1, NA),
4, 4, byrow = TRUE,
dimnames = list(contest = c("BCD", "ACD", "ABD", "ABC"),
winner = c("A", "B", "C", "D"))
)
triples_round_robin
\end{verbatim}
\begin{verbatim}
## winner
## contest A B C D
## BCD NA 1 0 0
## ACD 1 NA 1 0
## ABD 0 1 NA 1
## ABC 1 1 1 NA
\end{verbatim}
The first contest is won outright by player \(B\); the second is tied
between \(A\) and \(C\); the third is tied between \(B\) and \(D\); and
the fourth is a 3-way tie between \(A\), \(B\) and \(C\).
The simple tournament-scoring system described in Section
\ref{three_items}, with 6 points shared across the winners of each
contest, gives points totals as follows:
\begin{verbatim}
6 * colSums(triples_round_robin / rowSums(triples_round_robin, na.rm = TRUE),
na.rm = TRUE)
\end{verbatim}
\begin{verbatim}
## A B C D
## 5 11 5 3
\end{verbatim}
So in this small tournament \(B\) is the clear winner, with \(A\) and
\(C\) jointly second.
To fit the Davidson-Luce model via its Poisson log-linear
representation, we first expand the data to a form that has a separate
row for each possible outcome of every contest. To do this we will use a
special-purpose function named \texttt{expand\_outcomes} (whose
definition is shown at the end, below).
\begin{verbatim}
expanded_data <- expand_outcomes(triples_round_robin)
print(expanded_data, digits = 2)
\end{verbatim}
\begin{verbatim}
## comparison A B C D delta2 delta3 outcome
## 1: B 1 0.00 1.00 0.00 0.00 0 0 1
## 1: C 1 0.00 0.00 1.00 0.00 0 0 0
## 1: D 1 0.00 0.00 0.00 1.00 0 0 0
## 1: B=C 1 0.00 0.50 0.50 0.00 1 0 0
## 1: B=D 1 0.00 0.50 0.00 0.50 1 0 0
## 1: C=D 1 0.00 0.00 0.50 0.50 1 0 0
## 1: B=C=D 1 0.00 0.33 0.33 0.33 0 1 0
## 2: A 2 1.00 0.00 0.00 0.00 0 0 0
## 2: C 2 0.00 0.00 1.00 0.00 0 0 0
## 2: D 2 0.00 0.00 0.00 1.00 0 0 0
## 2: A=C 2 0.50 0.00 0.50 0.00 1 0 1
## 2: A=D 2 0.50 0.00 0.00 0.50 1 0 0
## 2: C=D 2 0.00 0.00 0.50 0.50 1 0 0
## 2: A=C=D 2 0.33 0.00 0.33 0.33 0 1 0
## 3: A 3 1.00 0.00 0.00 0.00 0 0 0
## 3: B 3 0.00 1.00 0.00 0.00 0 0 0
## 3: D 3 0.00 0.00 0.00 1.00 0 0 0
## 3: A=B 3 0.50 0.50 0.00 0.00 1 0 0
## 3: A=D 3 0.50 0.00 0.00 0.50 1 0 0
## 3: B=D 3 0.00 0.50 0.00 0.50 1 0 1
## 3: A=B=D 3 0.33 0.33 0.00 0.33 0 1 0
## 4: A 4 1.00 0.00 0.00 0.00 0 0 0
## 4: B 4 0.00 1.00 0.00 0.00 0 0 0
## 4: C 4 0.00 0.00 1.00 0.00 0 0 0
## 4: A=B 4 0.50 0.50 0.00 0.00 1 0 0
## 4: A=C 4 0.50 0.00 0.50 0.00 1 0 0
## 4: B=C 4 0.00 0.50 0.50 0.00 1 0 0
## 4: A=B=C 4 0.33 0.33 0.33 0.00 0 1 1
\end{verbatim}
The \texttt{expanded\_data} object is an ordinary data frame that can be
used with \emph{R}'s standard functions for fitting generalized linear
models. The Davidson-Luce model could now just be fitted by maximum
likelihood in \emph{R} through a call to \texttt{glm()}, as a Poisson
log-linear model as follows:
\begin{verbatim}
DLmodel <- glm(outcome ~ comparison + A + B + C + D + delta2 + delta3,
family = poisson, data = expanded_data)
\end{verbatim}
But here the factor named \texttt{comparison} is included purely for
technical reasons, to ensure that the fitted probabilities (over the 7
possible outcomes in each contest here) sum to 1. That factor is not of
any interest, and so for tidiness --- as well as a slight improvement in
computational efficiency --- we will use \texttt{gnm} (from the
\textbf{gnm} package) instead of \texttt{glm}. The advantage of
\texttt{gnm} here is that it allows the `nuisance' factor
\texttt{comparison} to be included more cleanly in the model via the
\texttt{eliminate} argument:
\begin{verbatim}
library(gnm)
DLmodel <- gnm(outcome ~ A + B + C + D + delta2 + delta3, eliminate = comparison,
family = poisson, data = expanded_data)
DLmodel
\end{verbatim}
\begin{verbatim}
##
## Call:
##
## gnm(formula = outcome ~ A + B + C + D + delta2 + delta3, eliminate = comparison,
## family = poisson, data = expanded_data)
##
## Coefficients of interest:
## A B C D delta2 delta3
## 2.071 6.864 2.071 NA 2.390 3.249
##
## Deviance: 11.35986
## Pearson chi-squared: 14.20569
## Residual df: 19
\end{verbatim}
The reported model parameters are on the log scale; and the
parameterization here has \(\alpha_D\) arbitrarily set to 1, to resolve
parameter redundancy.
So, for example \(\alpha_C/\alpha_D\) is estimated to be
\(\exp(2.07)/1 = 7.93\).
The two tie-prevalence parameters here are both estimated to be very
large: \(\hat\delta_2 = \exp(2.39) = 10.91\) and
\(\hat\delta_3 = \exp(3.25) = 25.8\). This is due to the deliberately
common occurrence of ties in this dataset, in order to demonstrate how
ties are handled; and also the fact that the estimated player strengths
here differ widely. (The data seen here would suggest that in notional
contests where players all have \emph{equal} strengths, ties would be
\emph{extremely} common.)
\subsection{Agreement with full round-robin `points totals'}
Since this was a fully balanced round robin tournament design, then as
mentioned in Section \ref{three_items} the fit of the Davidson-Luce
model should agree exactly with the simple points totals that were
calculated above. Those points totals do indeed agree with their
expectations under the fitted Davidson-Luce model:
\begin{verbatim}
DLfitted <- predict(DLmodel, type = "response")
print(DLfitted, digits = 2)
\end{verbatim}
\begin{verbatim}
## 1: B 1: C 1: D 1: B=C 1: B=D 1: C=D 1: B=C=D 2: A
## 0.34278 0.00284 0.00036 0.34071 0.12096 0.01101 0.18133 0.02967
## 2: C 2: D 2: A=C 2: A=D 2: C=D 2: A=C=D 3: A 3: B
## 0.02967 0.00374 0.32385 0.11498 0.11498 0.38312 0.00284 0.34278
## 3: D 3: A=B 3: A=D 3: B=D 3: A=B=D 4: A 4: B 4: C
## 0.00036 0.34071 0.01101 0.12096 0.18133 0.00200 0.24096 0.00200
## 4: A=B 4: A=C 4: B=C 4: A=B=C
## 0.23950 0.02181 0.23950 0.25423
\end{verbatim}
\begin{verbatim}
expected_points_totals <- 6 * colSums(expanded_data[, c("A","B","C","D")] * DLfitted)
expected_points_totals
\end{verbatim}
\begin{verbatim}
## A B C D
## 5.000000 11.000000 5.000000 3.000001
\end{verbatim}
The actual points totals, from above, were 5, 11, 5 and 3. The agreement
is exact, apart from numerical error due to the iteration-stopping rule
that was used by \texttt{gnm}.
\subsection{Illustration of tie-prevalence interpretations}
The interpretation of tie-prevalence parameters \(\delta_2\) and
\(\delta_3\) was described in Section \ref{three_items}, in terms of the
probabilities in a notional contest involving only players of equal
ability.
Merely as a numerical illustration of those interpretations, we re-fit
here the Davidson-Luce model, but with the constraint that strengths
\(\alpha_A,\alpha_B,\alpha_C,\alpha_D\) are all equal to 1 (so that
their logarithms are all zero).
\begin{verbatim}
DL_equal_strengths <- update(DLmodel, . ~ . - A - B - C - D)
DL_equal_strengths
\end{verbatim}
\begin{verbatim}
##
## Call:
## gnm(formula = outcome ~ delta2 + delta3, eliminate = comparison,
## family = poisson, data = expanded_data)
##
## Coefficients of interest:
## delta2 delta3
## 0.6931 1.0986
##
## Deviance: 14.90944
## Pearson chi-squared: 24
## Residual df: 22
\end{verbatim}
The tie-prevalence estimates here are \(\hat\delta_2 = \exp(0.6931)\)
and \(\hat\delta_3 = \exp(1.0986)\). Agreement with the detailed
interpretations shown in Section \ref{three_items} can thus be checked
as follows:
\begin{verbatim}
coefs <- coef(DL_equal_strengths)
print(round(coefs, 4))
\end{verbatim}
\begin{verbatim}
## Coefficients of interest:
## delta2 delta3
## 0.6931 1.0986
\end{verbatim}
\begin{verbatim}
delta2 <- exp(coefs[1])
delta3 <- exp(coefs[2])
delta2/(1 + delta2)
\end{verbatim}
\begin{verbatim}
## delta2
## 0.6666667
\end{verbatim}
\begin{verbatim}
delta3/(3 + 3*delta2 + delta3)
\end{verbatim}
\begin{verbatim}
## delta3
## 0.25
\end{verbatim}
These values agree with what was seen in the data, which was 2 two-way
ties out of the 3 contests whose outcome was not a 3-way tie (so
\(\hat\delta_2/(1 + \hat\delta_2) = 2/3\)), and one 3-way tie out of the
4 contests observed in total (so
\(\hat\delta_3/(3 + 3\hat\delta_2 + \hat\delta_3) = 1/4\)).
\subsection{Definition of the function used to expand the data}
For completeness here, we show the full definition of the function that
made the dataframe named \texttt{expanded\_data} in the above.
The function shown here is very much a prototype, not programmed for
efficiency, robustness or scalability.
\begin{verbatim}
expand_outcomes
\end{verbatim}
\begin{verbatim}
## function(m) {
## n_comparisons <- nrow(m)
## n_items <- ncol(m)
## items <- colnames(m)
## rvec <- apply(m, 1, function(row) sum(!is.na(row)))
## tvec <- apply(m, 1, function(row) sum(na.omit(row)))
## maxt <- max(tvec)
## if (maxt > 1) delta_names <- paste0("delta", 2:maxt)
## n_possible_outcomes <- integer(n_comparisons)
## for (i in 1:n_comparisons) {
## n_possible_outcomes[i] <- sum(choose(rvec[i], 1:(min(rvec[i], maxt))))
## }
## result <- matrix(0, sum(n_possible_outcomes), n_items + maxt + 1)
## colnames(result) <- c("comparison", colnames(m), delta_names, "outcome")
## rownames(result) <- as.character(1:nrow(result))
## filled <- 0
## for (comparison in 1:n_comparisons){
## involved <- items[!is.na(m[comparison, ])]
## for (t in 1:maxt) {
## combs <- combn(involved, t)
## for (index in 1:ncol(combs)){
## result[filled + index, 1] <- comparison
## result[filled + index, 1 + which(items %in% combs[, index])] <- 1/t
## if (t > 1) {
## result[filled + index, n_items + t] <- 1
## }
## if (all(na.omit(t * result[filled + index, 1 + (1:n_items)] -
## m[comparison, ]) == 0)) {
## result[filled + index, "outcome"] <- 1
## }
## rownames(result)[filled + index] <-
## paste(comparison, paste0(combs[, index], collapse = "="),
## sep = ": ")
## }
## filled <- filled + ncol(combs)
## }
## }
## result <- as.data.frame(result)
## result$comparison <- as.factor(result$comparison)
## return(result)
## }
## <bytecode: 0x36ad6c0>
\end{verbatim}
\end{document}
\begin{document}
\title{\textcolor{black}{A notion of nonpositive curvature for general metric spaces}}
\author[M. Ba\v{c}\'ak \and B. Hua \and J. Jost \and M. Kell \and A. Schikorra]{Miroslav Ba\v{c}\'ak \and Bobo Hua \and J\"{u}rgen Jost \and Martin Kell \and Armin Schikorra}
\date{\today}
\subjclass[2010]{Primary: 51F99; 53B20; Secondary: 52C99}
\keywords{Comparison geometry, geodesic space, Kirszbraun's theorem, nonpositive curvature.}
\thanks{The research leading to these results has received funding from the
European Research Council under the European Union's Seventh Framework
Programme (FP7/2007-2013) / ERC grant agreement no 267087.}
\address{Max Planck Institute for Mathematics in the Sciences, Inselstr.~22, 04103 Leipzig, Germany}
\curraddr[B. Hua]{School of Mathematical Sciences, LMNS, Fudan University, Shanghai 200433, China}
\email[M. Ba\v{c}\'ak]{[email protected]}
\email[B. Hua]{[email protected]}
\email[J. Jost]{[email protected]}
\email[M. Kell]{[email protected]}
\email[A. Schikorra]{[email protected]}
\begin{abstract}
We introduce a new definition of nonpositive curvature in metric spaces and study its relationship to the existing notions of nonpositive curvature in comparison geometry. The main feature of our definition is that it applies to all metric spaces and does not rely on geodesics. Moreover, a scaled and a relaxed version of our definition are appropriate in discrete metric spaces, and are believed to be of interest in geometric data analysis.
\end{abstract}
\maketitle
\section{Introduction}
The aim of the present paper is to introduce a new definition of nonpositive curvature in metric spaces. Similarly to the definitions of Busemann and CAT(0) spaces, it is based on comparing triangles in the metric space in question with triangles in the Euclidean plane, but it does not require the space be geodesic.
Let $(X,d)$ be a metric space. A triple of points $\left(a_1,a_2,a_3\right)$ in $X$ is called a \emph{triangle} and the points $a_1,a_2,a_3$ are called its \emph{vertices.} For this triangle in $(X,d),$ there exist points $\overline{a}_1,\overline{a}_2,\overline{a}_3\in\mathbb{R}^2$ such that
\begin{equation*}
d\left(a_i,a_j\right)=\left\| \overline{a}_i-\overline{a}_j \right\|,\qquad \text{for every } i,j=1,2,3,
\end{equation*}
where $\|\cdot\|$ stands for the Euclidean distance. The triple of points $\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)$ is called a \emph{comparison triangle} for the triangle $\left(a_1,a_2,a_3\right),$ and it is unique up to isometries.
Given these two triangles, we define the functions
\begin{align*}
\rho_{\left(a_1,a_2,a_3\right)}(x) & =\max_{i=1,2,3} d(x,a_i),\qquad x\in X,\\
\intertext{and}
\rho_{\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)}(x) & =\max_{i=1,2,3} \left\|x-\overline{a}_i\right\|,\qquad x\in \mathbb{R}^2.
\end{align*}
The numbers
\begin{equation*}
r\left(a_1,a_2,a_3\right)\!\mathrel{\mathop:}= \inf_{x\in X} \rho_{\left(a_1,a_2,a_3\right)}(x) \quad\text{and}\quad r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)\!\mathrel{\mathop:}= \min_{x\in \mathbb{R}^2} \rho_{\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)}(x)
\end{equation*}
are called the \emph{circumradii} of the respective triangles. Next we can introduce our main definition.
\begin{definition}[Nonpositive curvature] \label{def:ournpc}
Let $(X,d)$ be a metric space. We say that $\operatorname{Curv} X\leq0$ if, for each triangle $\left(a_1,a_2,a_3\right)$ in $X,$ we have
\begin{equation} \label{eq:def}
r\left(a_1,a_2,a_3\right)\leq r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right),
\end{equation}
where $\overline{a}_i$ with $i=1,2,3$ are the vertices of an associated comparison triangle.
\end{definition}
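For comparison triangles in the Euclidean plane, the circumradius appearing on the right-hand side of \eqref{eq:def} can be written down explicitly. The following elementary formula is recalled here for the reader's convenience (it is an added remark, not part of the original argument): if the comparison triangle has side lengths $a\geq b\geq c$ and $K$ denotes its area, then
\begin{equation*}
r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)=
\begin{cases}
\dfrac{a}{2} & \text{if } a^2\geq b^2+c^2 \text{ (right or obtuse triangle)},\\[2mm]
\dfrac{abc}{4K} & \text{otherwise (acute triangle)},
\end{cases}
\end{equation*}
since in the first case the minimizer of $\rho_{\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)}$ is the midpoint of the longest side, while in the second it is the classical circumcenter. In particular, one always has $r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)\geq\frac{a}{2}.$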
As we shall see, our definition of nonpositive curvature is implied by
the CAT(0) property, but not by nonpositive curvature in the sense of
Busemann. In Riemannian manifolds, however, all of them are equivalent
to global nonpositive \emph{sectional} curvature. We also make a
connection to the celebrated Kirszbraun extension theorem.
In order to appreciate the geometric content of our definition, let us assume that the infimum in \eqref{eq:def} is attained, i.e., there exists some $m\in X$ with
\begin{equation}\label{eq:2}
d(m,a_i) \le r\left(a_1,a_2,a_3\right)= \inf_{x\in X} \rho_{\left(a_1,a_2,a_3\right)}(x).
\end{equation}
We then call such an $m$ a \emph{circumcenter} of the triangle with vertices $a_1,a_2,a_3.$ This can be equivalently expressed as
\begin{equation}\label{eq:3}
\bigcap_{i=1,2,3} B\left(a_i,r\left(a_1,a_2,a_3\right)\right)\neq \emptyset,
\end{equation}
where $B(x,r)\!\mathrel{\mathop:}=\{ y\in X: d(x,y) \le r\}$ denotes a closed distance
ball. The intersection is nonempty because it contains the point
$m$. The condition \eqref{eq:3} as such, however, does not involve the
point $m$ explicitly. Our curvature inequality thus embodies the
principle that three balls in $X$ should have a nonempty intersection
whenever the corresponding balls in the Euclidean plane with the same
distances between their centers intersect nontrivially. It therefore is meaningful in a general metric
space to search for the minimal radius for which the balls centered at
three given points have a nonempty intersection. In such a general
context, curvature bounds can therefore be interpreted as
quantification of the dependence of such a minimal radius on the
distances between the points involved, as compared to the Euclidean
situation. Below, we shall also discuss how this principle can be
adapted to discrete metric spaces. This should justify the word
``general'' in the title of our paper.
It is also worth mentioning that our definition of nonpositive curvature is stable under the Gromov-Hausdorff convergence.
\section{Preliminaries}
We first introduce some terminology from metric geometry and recall a few facts. As references on the subject, we recommend \cite{bacak,BridsonHaefliger99,Jost97}. Let $(X,d)$ be a metric space and let $x,y\in X.$ If there exists a point $m\in X$ such that $d(x,m)=d(m,y)=\frac12d(x,y),$ we call it a \emph{midpoint} of $x,y.$ Similarly, we say that a pair of points $x,y\in X$ has \emph{approximate midpoints} if for every $\varepsilon>0$ there exists $m\in X$ such that
\begin{equation*}
\max\left\{d(x,m),d(y,m)\right\}\le\frac12d(x,y)+\varepsilon.
\end{equation*}
A continuous mapping $\gamma\colon[0,1]\to X$ is called a \emph{path} and its \emph{length} is defined as
\begin{equation*}
\operatorname{length}(\gamma)\!\mathrel{\mathop:}=\sup \sum_{i=1}^n d\left(\gamma\left(t_{i-1}\right),\gamma\left(t_i\right) \right),
\end{equation*}
where the supremum is taken over the set of all partitions $0=t_0<\cdots<t_n=1$ of the interval~$[0,1],$ with an arbitrary $n\in\mathbb{N}.$ Given $x,y\in X,$ we say that a path $\gamma\colon [0,1]\to X$ joins $x$ and $y$ if $\gamma(0)=x$ and $\gamma(1)=y.$
A~metric space $(X,d)$ is a \emph{length space} if
\begin{equation*}
d(x,y)=\inf\left\{\operatorname{length}(\gamma)\colon\text{ path } \gamma\text{ joins } x,y \right\},
\end{equation*}
for every $x,y\in X.$ A complete metric space is a length space if and only if each pair of points has approximate midpoints.
A metric space $(X,d)$ is called \emph{geodesic} if each pair of points $x,y\in X$ is joined by a path $\gamma\colon[0,1]\to X$ such that
\begin{equation*}
d\left(\gamma(s),\gamma(t)\right)=d(x,y)\:|s-t|,
\end{equation*}
for every $s,t\in[0,1].$ The path $\gamma$ is then called a \emph{geodesic} and occasionally denoted $[x,y].$ If each pair of points is connected by a \emph{unique} geodesic, we call the space \emph{uniquely geodesic.}
Denote by $Z_{t}(x,y)$ the set of $t$-midpoints, i.e. $z\in Z_{t}(x,y)$
iff $td(x,y)=d(x,z)$ and $(1-t)d(x,y)=d(z,y).$ A geodesic space
is called non-branching if for each triple of points $x,y,y'\in X$
with $d(x,y)=d(x,y')$ the condition $Z_{t}(x,y)\cap Z_{t}(x,y')\ne\varnothing$
for some $t\in(0,1)$ implies that $y=y'$.
\subsection{Busemann spaces}
A geodesic space $(X,d)$ is a Busemann space if and only if, for any two geodesics $\gamma,\eta\colon[0,1]\to X,$ the function $t\mapsto d \left(\gamma(t),\eta(t)\right)$ is convex on $[0,1].$ This property in particular implies that Busemann spaces are uniquely geodesic.
\subsection{Hadamard spaces}
Let $(X,d)$ be a geodesic space. If for each point $z\in X,$ each geodesic $\gamma\colon[0,1]\to X,$ and $t\in[0,1],$ we have
\begin{equation} \label{eq:cat}
d\left(z,\gamma(t)\right)^2\leq (1-t) d\left(z,\gamma(0)\right)^2+td\left(z,\gamma(1)\right)^2-t(1-t) d\left(\gamma(0),\gamma(1)\right)^2,
\end{equation}
the space $(X,d)$ is called CAT(0). It is easy to see that CAT(0) spaces are Busemann. A complete CAT(0) space is called an \emph{Hadamard space.}
\section{Connections to other definitions of NPC} \label{sec:otherNPC}
In this section we study the relationship between Definition~\ref{def:ournpc} and other notions of nonpositive curvature that are known in comparison geometry.
We begin with a simple observation. If a metric space $(X,d)$ is complete and $\operatorname{Curv} X\leq0,$ then it is a length space. If we moreover required the function $\rho_{\left(a_1,a_2,a_3\right)}(\cdot)$ from Definition \ref{def:ournpc} to attain its minimum, we would obtain a geodesic space. That is a motivation for introducing scaled and relaxed versions of Definition~\ref{def:ournpc} in Section~\ref{sec:scaled}.
To show that Hadamard spaces have nonpositive curvature in the sense of Definition~\ref{def:ournpc}, we need the following version of the Kirszbraun extension theorem with Hadamard space target. By a nonexpansive mapping we mean a $1$-Lipschitz mapping.
\begin{theorem}[Lang-Schroeder] \label{thm:kirszbraun}
Let $(\mathcal{H},d)$ be an Hadamard space and let $S\subset\mathbb{R}^2$ be an arbitrary set. Then for each nonexpansive mapping $f\colon S\to\mathcal{H},$ there exists a nonexpansive mapping $F\colon\mathbb{R}^2\to\mathcal{H}$ such that $F\vert_S=f.$
\end{theorem}
\begin{proof}
See~\cite{langschroeder}.
\end{proof}
\begin{corollary}
Let $(\mathcal{H},d)$ be an Hadamard space. Then $\operatorname{Curv}\mathcal{H}\leq0.$
\end{corollary}
\begin{proof}
Consider a triangle with vertices $a_1,a_2,a_3\in\mathcal{H}$ and apply Theorem~\ref{thm:kirszbraun} to the set
$S\!\mathrel{\mathop:}=\left\{\overline{a}_1,\overline{a}_2,\overline{a}_3\right\}$ and isometry $f\colon
\overline{a}_i\mapsto a_i,$ for $i=1,2,3.$ We obtain a nonexpansive mapping
$F\colon\mathbb{R}^2\to\mathcal{H}$ which maps the circumcenter $\overline{a}$ of $S$ to some $a \in \mathcal{H}$. By nonexpansiveness, $d(a,a_i)\le \| \overline{a}- \overline{a}_i\|$,
which gives exactly the comparison condition in~\eqref{eq:def}.
\end{proof}
In the case of Hadamard manifolds, one can argue in a more elementary way than via Theorem~\ref{thm:kirszbraun}. We include the proof since it is of independent interest. The following fact, which holds in all Hadamard spaces, will be used.
\begin{lemma} \label{lem:varfor}
Let $(\mathcal{H},d)$ be an Hadamard space. Assume $\gamma\colon [0,1]\to\mathcal{H}$ is a
geodesic and $z\in\mathcal{H}\setminus\gamma.$ Then
\begin{equation*}
\lim_{t\to0+}\frac{d\left(z,\gamma_0\right)-d\left(z,\gamma_t\right)}{t}=\angle\left(\gamma(1),\gamma(0),z\right).
\end{equation*}
The existence of the limit is part of the statement. The RHS denotes
the angle at $\gamma(0)$ between $\gamma$ and $\left[\gamma(0),z\right].$
\end{lemma}
\begin{proof}
Cf. \cite[p. 185]{BridsonHaefliger99}.
\end{proof}
\begin{theorem}
Let $M$ be an Hadamard manifold. Then $\operatorname{Curv} M\leq0.$
\end{theorem}
\begin{proof}
Choose a triangle with vertices $a_1,a_2,a_3\in M$ and observe that the set of minimizers of the function $\rho_{\left(a_1,a_2,a_3\right)}(\cdot)$ coincides with the set of minimizers of the function
\begin{equation*}
x\mapsto \max_{i=1,2,3} d(x,a_i)^2,\qquad x\in M.
\end{equation*}
Since the latter function is strongly convex on Hadamard manifolds, it has a unique minimizer $m\in M.$ Denote $b_i=\exp_m^{-1}(a_i)\in T_m M$ for every $i=1,2,3.$ We claim that $b_1,b_2,b_3$ lie in a plane containing $0.$ Indeed, if it were not the case, there would exist a vector $v\in T_m M$ such that $\left\langle v,b_i\right\rangle_m >0$ for every $i=1,2,3.$ According to Lemma \ref{lem:varfor} we would then have
\begin{equation*}
\lim_{t\to0+}\frac{d\left(a_i,m \right)-d\left(a_i,\exp_m(tv) \right)}{t}>0,
\end{equation*}
for every $i=1,2,3.$ There is hence $\varepsilon>0$ such that
\begin{equation*}
d\left(a_i,m \right)>d\left(a_i,\exp_m(tv) \right),
\end{equation*}
for every $t\in(0,\varepsilon)$ and $i=1,2,3.$ This is a contradiction
to $m$ being a minimizer of $\rho_{\left(a_1,a_2,a_3\right)}(\cdot).$ We can therefore
conclude that $b_1,b_2,b_3$ lie in a plane containing $0.$
By an elementary Euclidean geometry argument we obtain that
\begin{equation*}
r\left(a_1,a_2,a_3\right)= r\left(b_1,b_2,b_3\right).
\end{equation*}
Since $\operatorname{Sec} M\leq0,$ we have $\|b_i-b_j\|\leq d(a_i,a_j)$ for every
$i,j=1,2,3.$ Consequently,
\begin{equation*}
r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3 \right)\geq r\left(b_1,b_2,b_3\right),
\end{equation*}
which finishes the proof.
\end{proof}
The converse implication holds as well.
\begin{theorem}\label{th:CurvimpliesSec}
Let $M$ be a smooth manifold with $\operatorname{Curv} M\leq0$. Then $\operatorname{Sec} M\leq0$.
\end{theorem}
We shall prove this theorem in the remainder of the present section. Naturally, our arguments are local, and to obtain nonpositive sectional curvature at a point $m\in M,$ we only need assume that \eqref{eq:def} is satisfied for all sufficiently small triangles around this point.
Let us choose a plane $\Pi \subset T_m M$ and pick three unit vectors $X,Y,Z \in\Pi$ with
\begin{equation}\label{eq:anglesaddup}
\angle XOY=\angle YOZ=\angle ZOX=\frac{2\pi}{3},
\end{equation}
where $O \in T_m M$ is the origin and the angles are measured in the metric of $T_m M$. Furthermore, we set $\gamma_X(t)\!\mathrel{\mathop:}=\exp_mtX,\gamma_Y(t)\!\mathrel{\mathop:}=\exp_mtY,\gamma_Z(t)\!\mathrel{\mathop:}=\exp_m tZ,$ for all small $t>0.$
\begin{lemma} \label{lem:aux}
For sufficiently small $t,$ the center of the minimal enclosing ball of $\gamma_X(t),\gamma_Y(t),\gamma_Z(t)$ in $M$ is $m,$ and the circumradius is equal to $t,$ that is,
\begin{equation*}
t = \rho_{\gamma_X(t),\gamma_Y(t),\gamma_Z(t)}(m) = r\left(\gamma_X(t),\gamma_Y(t),\gamma_Z(t)\right).
\end{equation*}
\end{lemma}
\begin{proof}
For small enough $t$ we can pick a convex neighborhood $W\subset M$ of $m$ containing $\gamma_X(t),\gamma_Y(t),\gamma_Z(t)$ such that $\rho_{\gamma_X(t),\gamma_Y(t),\gamma_Z(t)}(\cdot)$ has a unique minimizer on $W.$
Fix a unit vector $V \in T_m M$. We first show that there exists $U \in \{X,Y,Z\}$ such that
\begin{equation}\label{eq:onedirectionisfine}
\lim_{\varepsilon \to 0+} \frac{d (\gamma_U(t),\exp_m(\varepsilon V))^2-d (\gamma_U(t),m)^2}{\varepsilon} \geq 0,
\end{equation}
and $\rho(m) = d \left(\gamma_U(t),m\right).$ If that were true, then for all $z\in W$ sufficiently close to $m$, we would have
\begin{equation*}
\frac{\rho(\exp_m(\varepsilon V))^2-\rho(m)^2}{\varepsilon} \geq \frac{d (\gamma_U(t),\exp_m(\varepsilon V))^2-d (\gamma_U(t),m)^2}{\varepsilon}.
\end{equation*}
Hence
\begin{equation*}
\lim_{\varepsilon \to 0+} \frac{\rho(\exp_m(\varepsilon V))^2-\rho(m)^2}{\varepsilon} \geq 0.
\end{equation*}
Since this holds for every $V \in T_m M$, together with the convexity of $\rho$ we obtain that $m$ is the unique minimizer.
To prove the existence of $U$ satisfying \eqref{eq:onedirectionisfine}, decompose $V = \lambda V^\Pi + \mu V^\perp$, where $V^\Pi \in \Pi$ and $V^\perp \in \Pi^\perp$ are unit vectors and $\lambda,\mu\in\mathbb{R}.$
On the one hand, for any $U \in \Pi$, the first variation of the distance function gives
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}\varepsilon} \Big |_{\varepsilon = 0} d(\gamma_{U}(t),\exp_m(\varepsilon V^\perp))^2 = 0.
\end{equation*}
On the other hand, by \eqref{eq:anglesaddup} there has to be some $U \in \{X,Y,Z\}$ such that
\begin{equation*}
\angle UOV^\Pi \geq \frac{2\pi}{3} > \frac{\pi}{2}.
\end{equation*}
If $W$ is small enough (independent of $t$), the uniform bound away from $\frac{\pi}{2}$ implies that for any sufficiently small $\varepsilon > 0$
\begin{equation*}
d (\gamma_{U}(t),\exp_m(\varepsilon V^\Pi)) > d (\gamma_{U}(t),m),
\end{equation*}
which establishes \eqref{eq:onedirectionisfine}. Since we have $\rho(m) = d \left(\gamma_U(t),m\right),$ Lemma \ref{lem:aux} is proved.
\end{proof}
Now let $\overline{x}(t),\overline{y}(t),\overline{z}(t)$ be a comparison triangle in $\mathbb{R}^2$ for the geodesic triangle $\gamma_X(t),\gamma_Y(t),\gamma_Z(t)$ in $M.$ Let $\overline{m}(t)$ be the minimizer of $\rho_{\overline{x}(t),\overline{y}(t),\overline{z}(t)}(\cdot)$ in $\mathbb{R}^2$. By Lemma~\ref{lem:aux} and by the assumption of nonpositive curvature in the sense of Definition~\ref{def:ournpc}, we have
\begin{equation*}
t =r\left(\gamma_X(t),\gamma_Y(t),\gamma_Z(t)\right)\leq r\left(\overline{x}(t),\overline{y}(t),\overline{z}(t)\right).
\end{equation*}
On the other hand, the origin $O$ is the minimizer of $\rho_{(tX,tY,tZ)}(\cdot).$ Now we can conclude from a fully Euclidean argument the following. There are two possibilities for $\overline{m}(t).$ It either lies on one of the sides of the triangle $\overline{x}(t),\overline{y}(t),\overline{z}(t),$ say $\overline{m}(t) \in \left[\overline{x}(t),\overline{y}(t)\right],$ in which case
\begin{equation*}
\left\|\overline{x}(t)-\overline{y}(t)\right\| \ge 2t > \left\|tX-tY\right\|,
\end{equation*}
or $\overline{m}(t)$ has equal distance to $\overline{x}(t),\; \overline{y}(t),$ and $\overline{z}(t),$ so at least one angle at $\overline{m}(t)$ is greater than or equal to $\frac{2\pi}3,$ say $\angle \left(\overline{x}(t),\overline{m}(t),\overline{y}(t)\right) \geq \frac{2\pi}{3}.$ Since the angle $\angle X O Y = \frac{2\pi}{3}$, it must be that
\begin{equation*}
\left\|\overline{x}(t)-\overline{y}(t)\right\| \geq \left\|tX-tY\right\|.
\end{equation*}
Given $t>0,$ there exist therefore $U,V \in \{X,Y,Z\},$ with $U\neq V,$ such that
\begin{equation*}
d(\gamma_U(t),\gamma_V(t)) = \left\|\overline{u}(t)-\overline{v}(t) \right\| \geq \left\|tU-tV\right\|,
\end{equation*}
and in particular there exists a sequence $t_i \to 0$ and $U,V \in \{X,Y,Z\},$ with $U\neq V,$ such that this holds for any $i \in \mathbb{N}$. By the following Lemma \ref{t:sectional curvature}, the sectional curvature of the plane $\Pi\!\mathrel{\mathop:}=\operatorname{span}(X,Y,Z)$ at $m$ is nonpositive. Its proof follows from the second variation formula of the energy of geodesics.
\begin{lemma}\label{t:sectional curvature}
Let $X,Y\in T_m M$ be two independent unit tangent vectors at $m$.
Then \begin{equation*}
\lim_{t\to 0}\frac{1}{t^2}\left(\frac{d(\exp_m(tX),\exp_m(tY))}{t\left\|X-Y\right\|}-1\right)=-C\left(n,\angle(X,Y)\right) K(X,Y),
\end{equation*}
where $K(X,Y)$ is the sectional curvature of $\operatorname{span}(X,Y).$ In particular, if the sectional curvature of a plane $\Pi \subset T_m M$ is finite, and there exist unit vectors $X$ and $Y$ spanning $\Pi$ such that for some sequence $t_i \to 0,$
\begin{equation*}
d\left(\exp_m(t_iX),\exp_m(t_iY)\right) \geq t_i\left\|X-Y\right\|, \quad \text{for each } i \in \mathbb{N},
\end{equation*}
then the sectional curvature of $\Pi$ is nonpositive.
\end{lemma}
The proof of Theorem \ref{th:CurvimpliesSec} is now complete.
\section{Scaled and relaxed nonpositive curvature} \label{sec:scaled}
We now introduce a quantitative version of nonpositive curvature from Definition~\ref{def:ournpc}. It is appropriate in discrete metric spaces.
\begin{definition} \label{def:scalednpc}
A metric space $(X,d)$ has nonpositive curvature at scale $\beta>0$ if
\begin{equation*}
\inf_{x\in X}\rho_{\left(a_1,a_2,a_3\right)}(x)\leq \min_{x\in\mathbb{R}^2} \rho_{\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)}(x),
\end{equation*}
for every triangle $a_1,a_2,a_3\in X$ such that $d\left(a_i,a_j\right)\geq\beta$ for every $i,j=1,2,3$ with $i\neq j.$ Denote this curvature condition by $\operatorname{Curv}_\beta X\leq0.$
\end{definition}
Again, whenever the infimum is attained, this condition can be formulated in terms of intersections of distance balls.
Another way to relax the nonpositive curvature condition from Definition~\mathrm{Re}f{def:ournpc} is to allow a small error.
\begin{definition} \label{def:relaxednpc}
A metric space $(X,d)$ has $\varepsilon$-relaxed nonpositive curvature, where $\varepsilon>0,$ if
\begin{equation*}
\inf_{x\in X}\rho_{\left(a_1,a_2,a_3\right)}(x)\leq \min_{x\in\mathbb{R}^2} \rho_{\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)}(x)+\varepsilon,
\end{equation*}
for every triangle $a_1,a_2,a_3\in X.$ Denote this curvature condition by $\varepsilon\textrm{-}\operatorname{Curv} X\leq0.$
\end{definition}
We will now observe that this relaxed nonpositive curvature is enjoyed by $\delta$-hyperbolic spaces, where $\delta>0.$ Recall that a geodesic space is $\delta$-hyperbolic if each side of every geodesic triangle is contained in the $\delta$-neighborhood of the union of the other two sides \cite[p. 399]{BridsonHaefliger99}. Consider thus a geodesic triangle $a_1,a_2,a_3\in X$ in a $\delta$-hyperbolic space $(X,d).$ By the triangle inequality, one can see that
\begin{equation*}
\inf_{x\in X}\rho_{\left(a_1,a_2,a_3\right)}(x)\leq \frac12\max_{i,j=1,2,3} d\left(a_i,a_j\right)+2\delta \leq \min_{x\in\mathbb{R}^2} \rho_{\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right)}(x)+2\delta,
\end{equation*}
and therefore $2\delta\textrm{-}\operatorname{Curv} X\leq0.$
This in particular applies to Gromov hyperbolic groups. A group is called \emph{hyperbolic} if there exists $\delta>0$ such that its Cayley graph is a $\delta$-hyperbolic space. The above discussion hence implies that a hyperbolic group has $\varepsilon$-relaxed nonpositive curvature for some $\varepsilon>0.$ We should like to mention that Y.~Ollivier has recently established coarse Ricci curvature bounds for hyperbolic groups \cite[Example 15]{ollivier}. For more details on hyperbolic spaces and groups, the reader is referred to \cite{BridsonHaefliger99}.
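For completeness, the first inequality displayed above can be justified as follows (an added remark, not in the original text). Let $[a_1,a_2]$ be a longest side of the triangle and $m$ its midpoint. By $\delta$-hyperbolicity there is a point $p$ on one of the other two sides, say $p\in[a_1,a_3],$ with $d(m,p)\le\delta.$ Then
\begin{equation*}
d(m,a_3)\le \delta + d(p,a_3) = \delta + d(a_1,a_3)-d(a_1,p) \le \delta + d(a_1,a_2)-\Big(\tfrac12 d(a_1,a_2)-\delta\Big) = \tfrac12 d(a_1,a_2)+2\delta,
\end{equation*}
while $d(m,a_1)=d(m,a_2)=\tfrac12 d(a_1,a_2),$ so that $\rho_{\left(a_1,a_2,a_3\right)}(m)\le\frac12\max_{i,j}d(a_i,a_j)+2\delta$; the second inequality follows since the Euclidean circumradius is always at least half of the longest side.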
In conclusion, Definitions \ref{def:scalednpc} and \ref{def:relaxednpc} require only ``large'' triangles to satisfy some nonpositive curvature conditions, whereas ``small'' triangles can be arbitrary. This, in particular, allows for a notion of nonpositive curvature in discrete metric spaces and might be useful in \emph{geometric data analysis.}
\section{From local to global}
In the sense of Alexandrov, any simply connected geodesic space with local nonpositive
curvature has global nonpositive curvature, i.e., it is CAT(0). It is a natural question to
ask when this kind of globalization theorem holds for our curvature definition.
\begin{definition} \label{def:localnpc}
A metric space $(X,d)$ has local nonpositive curvature if for each $x\in X$ there is a neighborhood
$U$ such that the curvature condition holds for all triangles in $U$. We denote this
curvature condition by $\operatorname{Curv}_{loc} X \le 0$.
\end{definition}
If every point $x$ admits a convex neighborhood $U_x$ then the condition can be also written as
$$
\operatorname{Curv}_{loc} X \le 0 \Longleftrightarrow \forall x\in X: \operatorname{Curv} U_x \le 0.
$$
\begin{theorem}Assume that $(X,d)$ is a geodesic space with $\operatorname{Curv}_{loc} X\leq0$ and the circumcenter is attained for every triangle $\{a_i\}_{i=1}^3.$ If $(X,d)$ is globally nonpositively curved in the sense of Busemann, then we have $\operatorname{Curv} X\leq 0.$
\end{theorem}
\begin{proof}For any triangle $\{a_i\}_{i=1}^3$ in $X,$ it suffices to show \eqref{eq:def}.
Let $m$ be the circumcenter of the triangle $\{a_i\}_{i=1}^3$ and $ma_i, 1\leq i\leq3,$ the
minimizing geodesic connecting $m$ and $a_i.$ For any $t>0,$ let $a_i^t$ be the point on the
geodesic $m a_i$ such that $|ma_i^t|=t|ma_i|.$ By a contradiction argument, one can show
that $m$ is the circumcenter of the triangle $\{a_i^t\}_{i=1}^3$ for any $t>0,$ and hence
$r\left(a_1^t,a_2^t,a_3^t\right)=t\cdot r\left(a_1,a_2,a_3\right).$
By the local curvature condition, there exists a small neighbourhood $U_m$ of $m$ in which
the comparison \eqref{eq:def} holds. We know that for sufficiently small $t>0,$ $a_{i}^t\in U_m$
for $1\leq i\leq 3.$ Hence for the corresponding comparison triangle $\{\overline{a}_i^t\}$ in $\mathbb{R}^2,$
we have
$$ r\left(a_1^t,a_2^t,a_3^t\right)\leq r\left(\overline{a}_1^t,\overline{a}_2^t,\overline{a}_3^t\right).$$
The global Busemann condition for the triangle $\{m,a_i,a_{i+1}\},1\leq i\leq 3,$ implies
that $|a_i^ta_{i+1}^t|\leq t|a_ia_{i+1}|$ where the indices are understood
modulo $3.$ Hence $r\left(\overline{a}_1^t,\overline{a}_2^t,\overline{a}_3^t\right)\leq t\cdot r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right).$
Combining all these facts, we have $r\left(a_1,a_2,a_3\right)\leq r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right),$
which proves the theorem.
\end{proof}
\section{The Kirszbraun theorem and general curvature bounds}
In this section, we describe how our constructions and results can be extended to curvature bounds other than $0.$ We shall generalize the result of Section~\ref{sec:otherNPC} to arbitrary curvature bounds both from above and below. For that purpose, we show a direct implication of our curvature comparison by Kirszbraun's theorem on Lipschitz extension from \cite{langschroeder}. Given $\kappa\in\mathbb{R},$ let $\mathrm{CBB}(\kappa)$ denote the class of Alexandrov spaces with sectional curvature bounded from below by $\kappa$ and $\mathrm{CAT}(\kappa)$ the class of spaces with sectional curvature bounded from above by $\kappa.$ As a reference on Alexandrov geometry, we recommend \cite{BridsonHaefliger99,bbi}. The symbol $\left(\mathbb{M}_\kappa^2,d_\kappa\right)$ stands for the model plane, as usual.
\begin{theorem}[Kirszbraun's theorem] \label{thm:kirszbraunk}
Let $\mathcal{L}\in \mathrm{CBB}(\kappa),$ $\mathcal{U}\in \mathrm{CAT}(\kappa),$
$Q\subset \mathcal{L}$ and $f\colon Q\to \mathcal{U}$ be a nonexpansive map.
Assume that there is $z\in \mathcal{U}$ such that $f(Q)\subset
B\left(z,\frac{\pi}{2\sqrt{\kappa}}\right)$ if $\kappa>0.$ Then $f\colon Q\to
\mathcal{U}$ can be extended to a nonexpansive map $F\colon \mathcal{L}\to
\mathcal{U}.$\end{theorem}
\begin{proof}
Cf. \cite{langschroeder}.
\end{proof}
As a convention, we exclude large triangles in what follows if $\kappa>0.$
\begin{definition}
Let $(X,d)$ be a metric space. We say that $\operatorname{Curv} X\leq\kappa$ if, for each triangle $\left(a_1,a_2,a_3\right)$ in $X,$ we have $r\left(a_1,a_2,a_3\right)\leq r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right),$ where $\overline{a}_i$ with $i=1,2,3$ are the vertices of an associated comparison triangle in $\mathbb{M}_\kappa^2.$ Similarly, we say that $\operatorname{Curv} X\geq\kappa$ if, for each triangle $\left(a_1,a_2,a_3\right)$ in $X,$ we have $r\left(a_1,a_2,a_3\right)\geq r\left(\overline{a}_1,\overline{a}_2,\overline{a}_3\right),$ where $\overline{a}_i$ with $i=1,2,3$ are the vertices of an associated comparison triangle in $\mathbb{M}_\kappa^2.$
\end{definition}
\begin{theorem}\label{t:short proof by Kirszbraun}
Let $(X,d)$ be a $\mathrm{CAT}(\kappa)$ space. Then $\operatorname{Curv} X \le \kappa.$
\end{theorem}
\begin{proof}
Let $\left(x_1,x_2,x_3\right)$ be a triangle in $X$ and $\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right)$ be the
comparison triangle in $\mathbb{M}_\kappa^2.$ By the definition of comparison
triangle, the map $f\colon \left\{\overline{x}_1,\overline{x}_2,\overline{x}_3\right\}\to X$ defined by $f\left(\overline{x}_i\right)=x_i$
for $i=1,2,3$ is an isometry. By Theorem \ref{thm:kirszbraunk}, the mapping $f$ can be extended to a nonexpansive map $F\colon\mathbb{M}_\kappa^2\to X.$ Let $\overline{m}\in\mathbb{M}_\kappa^2$ be the circumcenter of $\overline{x}_1,\overline{x}_2,\overline{x}_3.$ Then by the nonexpansiveness of the map $F,$ we have
\begin{equation*}
d\left(F\left(\overline{m}\right),x_i\right)\leq d_\kappa\left(\overline{m},\overline{x}_i\right)\leq r\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right),\qquad i=1,2,3.
\end{equation*}
Hence, we have $r\left(x_1,x_2,x_3\right)\leq r\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right)$ by the very definition of the circumradius of $\left(x_1,x_2,x_3\right).$
\end{proof}
We note that we can also define a lower curvature bound for any $\kappa$
by requiring that the circumradius of the comparison triangle for a triangle
$\Delta$ is less than or equal to the circumradius of the triangle $\Delta$.
Similar to the theorem above one can use Kirszbraun's theorem to prove:
\begin{theorem}\label{t:short proof by Kirszbraun lower}
Let $(X,d)$ be a $\mathrm{CBB}(\kappa)$ space. Then $\operatorname{Curv} X \ge \kappa.$
\end{theorem}
\begin{proof}
Let $\left(x_1,x_2,x_3\right)$ be a triangle in $X$ and $\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right)$ be the
comparison triangle in $\mathbb{M}_\kappa^2.$ By the definition of comparison
triangle, the map $f\colon \{x_1,x_2,x_3\}\to\mathbb{M}_\kappa^2$ defined by $f(x_i)=\overline{x}_i$
for $i=1,2,3$ is an isometry. By Kirszbraun's theorem, the map $f$ can be extended to a nonexpansive map $F\colon X\to\mathbb{M}_\kappa^2.$ Let $\left(m_l\right)\subset X$
be a minimizing sequence of the function $\rho_{x_1,x_2,x_3}(\cdot).$ Then by the nonexpansiveness of the map $F,$ we have
\begin{equation*}
\lim_{l\to\infty} d_\kappa\left(F\left(m_l\right),\overline{x}_i\right)\leq \lim_{l\to\infty} d\left(m_l,x_i\right) \leq r\left(x_1,x_2,x_3\right),\ \ i=1,2,3.\end{equation*}
Hence, we have $r\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right)\leq r\left(x_1,x_2,x_3\right)$ by the very definition of the circumradius of $\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right).$
\end{proof}
We end this section by showing a metric implication, which generalizes an observation made in Section~\ref{sec:otherNPC}.
\begin{proposition}
Let $(X,d)$ be a complete metric space. If $\operatorname{Curv} X\le\kappa$ for some $\kappa\in\mathbb{R},$ then it is a length space.
\end{proposition}
\begin{proof}
We will show that each pair of points has approximate midpoints. Let $x,y\in X$ and choose a triangle $\left(x_1,x_2,x_3\right)$ in $X$ such that $x_1=x_2=x$ and $x_3=y$. The circumcenter of the comparison triangle is the midpoint $\overline{m}$ of the geodesic $\overline{x}_1,\overline{x}_3$.
By our assumptions we have the inequality
\begin{equation*}
r\left(x_1, x_2, x_3 \right) \le r\left(\overline{x}_1, \overline{x}_2, \overline{x}_3\right)=\frac12 \left\| \overline{x}_1-\overline{x}_3\right\|=\frac12d(x,y),
\end{equation*}
and since it always holds $\frac12d(x,y)\le r\left(x_1, x_2, x_3\right),$ we obtain that there exists a sequence $\left(m_l\right)\subset X$ such that $d(x,m_l),d(y,m_l)\to \frac12d(x,y).$ That is, the pair of points $x,y$ has approximate midpoints.
\end{proof}
\begin{proposition}
Let $(X,d)$ be a geodesic space. If $\ensuremath{\operatorname{Curv} X\ge\kappa,}$
then the space is non-branching.\end{proposition}
\begin{proof}
Assume the space is branching; then there are three distinct points
$x,y,y'\in X$ such that $z\in Z_{\frac{1}{2}}(x,y)\cap Z_{\frac{1}{2}}(x,y')$
and it is not difficult to see that $d(y,y')\le d(x,y)=d(x,y')$ and
thus $r(x,y,y')=d(x,y)/2$. Note, however, that the corresponding
comparison triangle $(\bar{x},\bar{y},\bar{y}')$ is a nondegenerate isosceles
triangle and hence $r(\bar{x},\bar{y},\bar{y}')>d(\bar{x},\bar{y})/2=r(x,y,y')$.
But this violates the curvature conditions and hence the space cannot
contain branching geodesics.
\end{proof}
\section{$L^p$-spaces and the curvature condition}
We have the following surprising result for the new curvature condition.
\begin{theorem}[Curvature of $L^p$-spaces]
We have $\operatorname{Curv} L^p\leq 0$ if and only if $p=2$ or $p=\infty.$
\end{theorem}
\begin{proof}
Since $\operatorname{Curv} L^2\leq 0$ holds trivially, we show that $\operatorname{Curv} L^\infty\leq 0.$ Let $\left(x_1,x_2,x_3\right)$ be a triangle in $L^\infty$
and without loss of generality $\left[x_1,x_2\right]$ be the longest side. Furthermore let $\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right)$
be the comparison triangle in $\mathbb{R}^2$ for the triangle $\left(x_1,x_2,x_3\right).$ We claim that
\begin{equation*}
r\left(x_1,x_2,x_3\right)=\frac12 \left\|x_1-x_2\right\|_\infty.
\end{equation*}
Because in $\mathbb{R}^2$ one always has $\frac12\left\|\overline{x}_1-\overline{x}_2\right\|\le r\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right)$, we have
$r \left(x_1,x_2,x_3 \right)\le r\left(\overline{x}_1,\overline{x}_2,\overline{x}_3\right)$ and thus $\operatorname{Curv} L^\infty \leq0.$
In order to prove the claim, we will construct a circumcenter explicitly: Let
$c\in L^\infty$ be the point that is for each coordinate a circumcenter, that is,
for coordinate $l\in \mathbb{N}$ if $x_i^l\le x_j^l \le x_k^l$ then
$c^l=\frac{x_i^l+x_k^l}{2}$. One can easily see that $\left\|c-x_i\right\|_\infty=\frac12\left\|x_1-x_2\right\|_\infty$ and that
there cannot be any point closer to all three points at once.
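Spelled out (an added remark, not part of the original text): with $x_i^l\le x_j^l\le x_k^l$ and $c^l$ the corresponding coordinatewise midpoint as above,
\begin{equation*}
\max_{m=1,2,3}\left|c^l-x_m^l\right| = \frac{x_k^l-x_i^l}{2} \le \frac12\left\|x_1-x_2\right\|_\infty \qquad\text{for every coordinate } l,
\end{equation*}
so that $\left\|c-x_m\right\|_\infty\le\frac12\left\|x_1-x_2\right\|_\infty$ for $m=1,2,3$; the reverse inequality $r\left(x_1,x_2,x_3\right)\ge\frac12\left\|x_1-x_2\right\|_\infty$ holds in every metric space by the triangle inequality.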
We now turn to the statement about $L^p$-spaces with $p\in(1,2)\cup(2,\infty).$ In order to show that these $L^p$-spaces do not satisfy the curvature
condition we will construct explicit counterexamples. For this note that it suffices to show that $\left(\mathbb{R}^2,\|\cdot\|_p\right)$ admits a counterexample, since each $L^p$-space with $p\in(1,\infty)$ contains $\left(\mathbb{R}^2,\|\cdot\|_p\right)$.
Assume first that $2<p<\infty$ and let $(A,B,C)$ be the triangle with coordinates
$A=(0,1),B=(-1,0)$ and $C=(1,0)$, see Figure \ref{figp1}. The lengths of the sides with respect to the $L^p$-norm
are $a=2$ and $b=c=\sqrt[p]{2}<\sqrt{2}$.
Now find the triangle $A'=(0,y)$ such that $y>1$ and $b'=c'=\sqrt{2}$. One easily sees
that this triangle is not obtuse and thus $r\left(A',B,C\right)>1$, but the corresponding comparison
triangle in $\mathbb{R}^2$ is right-angled and its circumradius is $1$. Hence no $L^p$-space with
$2<p<\infty$ satisfies the curvature condition.
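Explicitly (an added computation, under the notation above): writing $A'=(0,y)$, the condition $b'=c'=\sqrt2$ reads
\begin{equation*}
\left\|A'-B\right\|_p=\left(1+y^p\right)^{1/p}=\sqrt2,\qquad\text{that is,}\qquad y=\left(2^{p/2}-1\right)^{1/p}>1\quad\text{for } p>2,
\end{equation*}
while the comparison triangle in $\mathbb{R}^2$ has side lengths $2,\sqrt2,\sqrt2$ with $(\sqrt2)^2+(\sqrt2)^2=2^2$; it is therefore right-angled, and its circumradius equals $\frac22=1$.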
\begin{figure}
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{Lp1.png}
\caption{$2<p<\infty$}
\label{figp1}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{Lq2.png}
\caption{$1<p<2$}
\label{figq2}
\end{minipage}
\end{figure}
Now assume $1<p<2$. We assume again that $\left(A,B,C\right)$ is a triangle on the
$L^p$-unit sphere, with coordinates $A=(r,r), B=(-r,r)$
and $C=(r,-r)$, see Figure \ref{figq2}. One easily sees that $r=\frac{1}{\sqrt[p]{2}}$ and that
\begin{equation*}
b=c=2r=\frac{2}{\sqrt[p]{2}} < \frac{2}{\sqrt{2}} = \sqrt{2}.
\end{equation*}
Thus we can again find a point $A'=(r',r')$ with $r'>r$ and $b'=c'=\sqrt{2}$. This
triangle is not obtuse and $r\left(A',B,C\right)>1$. Hence $L^p$ with $1<p<2$ does not satisfy
the curvature condition.
\end{proof}
\begin{remark}
Actually it is not difficult to show that $L^p$-spaces with $p\in(1,2)\cup(2,\infty)$ do not even satisfy a lower curvature bound.
\end{remark}
\begin{proof}
In order to show that no $L^p$-space with $p\in(1,2)\cup(2,\infty)$ can satisfy a lower curvature bound, take
the two triangles above but interchange the conditions $2<p<\infty$ and $1<p<2$, see Figures \ref{figp2}
and \ref{figq1}. Now the point $A'$ lies inside the unit sphere and the corresponding triangles
$A'BC$ are in the interior of an obtuse triangle with respect to the $L^p$-norm. Since the comparison
triangle in $\mathbb{R}^2$ is right-angled, we can create an acute isosceles triangle $\tilde{\Delta}$
with base side length $1$. Since the triangle $A'BC$ is in the interior of obtuse triangles, the
triangle $\Delta$ corresponding to the comparison triangle $\tilde{\Delta}$ will be obtuse as
well and its circumradius is $1$. Since $\tilde{\Delta}$ is regular, we see that its circumradius
is greater than $1$; hence $\Delta$ is a counterexample to a lower curvature bound.
\end{proof}
\begin{figure}
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{Lp2.png}
\caption{$2<p<\infty$}
\label{figp2}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{Lq1.png}
\caption{$1<p<2$}
\label{figq1}
\end{minipage}
\end{figure}
\bibliography{npc}
\bibliographystyle{siam}
\end{document}
\begin{document}
\title{Mild assumptions for the derivation of \ Einstein's effective viscosity formula}
\begin{abstract}
We provide a rigorous derivation of Einstein's formula for the effective viscosity of dilute suspensions of $n$ rigid balls, $n \gg 1$, set in a volume of size $1$. So far, most justifications were carried out under a strong assumption on the minimal distance between the balls: $d_{min} \ge c n^{-\frac{1}{3}}$, $c > 0$. We relax this assumption into a set of two much weaker conditions: one expresses essentially that the balls do not overlap, while the other one gives a control of the number of balls that are close to one another. In particular, our analysis covers the case of suspensions modelled by standard Poisson processes with an almost minimal hardcore condition.
\end{abstract}
\section{Introduction}
Mixtures of particles and fluids, called {\em suspensions}, are involved in many natural phenomena and industrial processes. The understanding of their rheology, notably the so-called {\em effective viscosity} $\mu_{eff}$ induced by the particles, is therefore crucial. Many experiments or simulations have been carried out to determine $\mu_{eff}$ \cite{Guaz}. For $\lambda$ large enough, they seem to exhibit some generic behaviour, in terms of the ratio between the solid volume fraction $\lambda$ and the maximal flowable solid volume fraction $\lambda_c$, {\it cf.} \cite{Guaz}. Still, a theoretical derivation of the relation $\mu_{eff} = \mu_{eff}(\lambda/\lambda_c)$ observed experimentally is missing, due to the complex interactions involved: hydrodynamic interactions, direct contacts, \dots Mathematical works related to the analysis of suspensions are mostly limited to the {\em dilute regime}, that is when $\lambda$ is small.
\noindent
In these mathematical works, the typical model under consideration is as follows. One considers $n$ rigid balls $B_i = \overline{B(x_i, r_n)}$, $1 \le i \le n$, in a fixed compact subset of ${\mathbb R}^3$, surrounded by a viscous fluid.
The inertia of the fluid is neglected, leading to the Stokes equations
\begin{equation}
\label{Sto}
\left\{
\begin{aligned}
-\mu \Delta u_n + {\nabla} p_n & = f_n, \quad x \in \Omega_n = {\mathbb R}^3 \setminus \cup B_i , \\
\hbox{div \!} u_n & = 0, \quad x \in \Omega_n , \\
u_n\vert_{B_i} & = u_{n,i} + \omega_{n,i} \times (x-x_i).
\end{aligned}
\right.
\end{equation}
The last condition expresses a no-slip condition at the rigid spheres, where the velocity is given by some translation velocities $u_{n,i}$ and some rotation vectors $\omega_{n,i}$, $1 \le i \le n$. We neglect the inertia of the balls: the $2n$ vectors $u_{n,i}, \omega_{n,i}$ can then be seen as Lagrange multipliers for the $2n$ conditions
\begin{equation}
\label{Sto2}
\begin{aligned}
\int_{{\partial} B_i} \sigma_\mu(u,p) \nu & = - \int_{B_i} f_n , \quad \int_{{\partial} B_i} \sigma_\mu(u,p) \nu \times (x-x_i) = - \int_{B_i} (x-x_i) \times f_n
\end{aligned}
\end{equation}
where $\sigma_\mu(u,p) = 2\mu D(u) - p \, \mathrm{Id}$ is the usual Newtonian stress tensor, and $\nu$ the unit normal vector pointing outward of $B_i$.
\noindent
The general belief is that one should be able to replace \eqref{Sto}-\eqref{Sto2} by an effective Stokes model, with a modified viscosity taking into account the average effect of the particles:
\begin{equation}
\label{Stoeff}
\left\{
\begin{aligned}
-\hbox{div \!} (2 \mu_{eff} D u_{eff} ) + {\nabla} p_{eff} & = f, \quad x \in {\mathbb R}^3, \\
\hbox{div \!} u_{eff} & = 0, \quad x \in {\mathbb R}^3,
\end{aligned}
\right.
\end{equation}
with $D = \frac{1}{2}({\nabla} + {\nabla}^t)$ the symmetric gradient. Of course, such an averaged model can only be obtained asymptotically, namely when the number of particles $n$ gets very large. Moreover, for averaging to hold, it is very natural to impose some averaging assumption on the distribution of the balls itself. Our basic hypothesis will therefore be the existence of a limit density, through
\begin{equation} \label{A0} \tag{A0}
\frac{1}{n} \sum_i \delta_{x_i} \xrightarrow[n \rightarrow +\infty]{} \rho(x) dx \quad \text{weakly in the sense of measures}
\end{equation}
where $\rho \in L^\infty({\mathbb R}^3)$ is assumed to be zero outside a smooth open bounded set $\mathcal{O}$. After playing on the length scale, we can always assume that $|\mathcal{O}| = 1$. Of course, we expect $\mu_{eff}$ to be different from $\mu$ only in this region $\mathcal{O}$ where the particles are present.
\noindent
The volume fraction of the balls is then given by $\lambda = \frac{4\pi}{3} n r_n^3$. We shall consider the case where $\lambda$ is small (dilute suspension), but independent of $n$ so as to derive a non-trivial effect as $n \rightarrow +\infty$. The mathematical questions that follow are:
\begin{itemize}
\item Q1: Can we approximate system \eqref{Sto}-\eqref{Sto2} by a system of the form \eqref{Stoeff} for large $n$?
\item Q2: If so, can we provide a formula for $\mu_{eff}$ inside $\mathcal{O}$? In particular, for small $\lambda$, can we derive an expansion
$$ \mu_{eff} = \mu + \lambda \mu_1 + \dots \quad ? $$
\end{itemize}
Regarding Q1, the only work we are aware of is the recent paper \cite{DuerinckxGloria19}. It shows that $u_n$ converges to the solution $u_{eff}$ of an effective model of the type \eqref{Stoeff}, under two natural conditions:
\begin{enumerate}
\item[i)] the balls satisfy the separation condition $\inf_{i \neq j} |x_i - x_j| \ge M \ r_n$, $M > 2$. Note that this is a slight reinforcement of the natural constraint that the balls do not overlap.
\item[ii)] the centers of the balls are obtained from a stationary ergodic point process.
\end{enumerate}
We refer to \cite{DuerinckxGloria19} for all details. Note that in the scalar case, with the Laplacian instead of the Stokes operator, similar results can be found in \cite[paragraph 8.6]{MR1329546}.
\noindent
Q2, and more broadly quantitative aspects of dilute suspensions, have been studied for a long time. The pioneering work is due to Einstein \cite{Ein}. By {\em neglecting the interaction between the particles}, he computed a first order approximation of the effective viscosity of homogeneous suspensions:
$$ \mu_{eff} = (1 + \frac{5}{2} \lambda) \mu \quad \text{ in } \mathcal{O}.$$
This celebrated formula was confirmed experimentally afterwards. It was later extended to the inhomogeneous case, with the formula
\begin{equation} \label{Almog-Brenner}
\mu_{eff} = (1 + \frac{5}{2} \lambda \rho) \mu,
\end{equation}
see \cite[page 16]{AlBr}. Further works investigated the $O(\lambda^2)$ approximation of the effective viscosity, {\it cf.} \cite{BaGr1}
and the recent analysis \cite{DGV_MH, GerMec20}.
\noindent
Our concern in the present paper is the justification of Einstein's formula. To our knowledge, the first rigorous studies on this topic are \cite{MR813656} and \cite{MR813657}: they rely on homogenization techniques, and are restricted to suspensions that are periodically distributed in a bounded domain. A more complete justification, still in the periodic setting but based on variational principles, can be found in \cite{MR2982744}. Recently, the periodicity assumption was relaxed in \cite{HiWu}, \cite{NiSc}, and replaced by an assumption on the minimal distance:
\begin{equation} \label{A1} \tag{A1}
\text{There exists an absolute constant $c$, such that } \quad \forall n, \forall 1 \le i \neq j \le n, \quad |x_i - x_j| \ge c n^{-\frac{1}{3}}.
\end{equation}
For instance, introducing the solution $u_{E}$ of Einstein's approximate model
\begin{equation} \label{Sto_E}
-\hbox{div \!} (2 \mu_E Du_E) + {\nabla} p_E = f, \quad \hbox{div \!} u_E = 0 \quad \text{ in } \: {\mathbb R}^3
\end{equation}
with $\mu_E = (1 + \frac{5}{2} \lambda \rho) \mu$, it is shown in \cite{HiWu} that for all $ 1 \le p < \frac{3}{2}$,
$$ \limsup_{n \to \infty} ||u_n - u_E||_{L^p_{loc}({\mathbb R}^3)} = O(\lambda^{1+\theta}), \quad \theta = \frac{1}{p} - \frac{2}{3}. $$
We refer to \cite{HiWu} for refined statements, including quantitative convergence in $n$ and treatment of polydisperse suspensions.
\noindent
Although it is a substantial gain over the periodicity assumption, hypothesis \eqref{A1} on the minimal distance is still strong. In particular, it is much more stringent than the condition that the rigid balls cannot overlap. Indeed, this latter condition reads: $\forall i \neq j$, $|x_i - x_j| \ge 2 r_n$, or equivalently $|x_i - x_j| \ge c \, \lambda^{1/3} n^{-\frac{1}{3}}$, with $c = 2 (\frac{3}{4\pi})^{1/3}$. It follows from \eqref{A1} at small $\lambda$. On the other hand, one could argue that a simple non-overlapping condition is not enough to ensure the validity of Einstein's formula. Indeed, the formula is based on neglecting the interaction between particles, which is incompatible with too much clustering in the suspension. Still, one can hope that if the balls are not too close to one another {\em on average}, the formula still holds.
\noindent
This is the kind of result that we prove here. Namely, we shall replace {\mathbb E}qref{A1} by a set of two relaxed conditions:
\begin{align}
\label{B1} \tag{B1}
& \text{There exists $M >2$, such that} \quad \forall n, \: \forall 1 \le i \neq j \le n, \quad |x_i - x_j| \ge M r_n. \\
\label{B2} \tag{B2}
& \text{There exist $C,\alpha > 0$, such that} \quad \forall \eta > 0, \quad \#\{i, \: \exists j \neq i, \: |x_i - x_j| \le \eta n^{-\frac13}\} \le C \eta^\alpha n.
\end{align}
Note that \eqref{B1} is slightly stronger than the non-overlapping condition, and was already present in the work \cite{DuerinckxGloria19} to ensure the existence of an effective model. It is possible to relax this condition into a moment bound on the particle separation, see Remark \ref{rem:recentbibli} and Section \ref{sec:B1}.
As regards \eqref{B2}, one can show that it is satisfied almost surely as $n \to \infty$ when the particle positions
are generated by a stationary ergodic point process, provided the process does not favor close pairs of points too strongly. In particular, it is satisfied by
a (hard-core) Poisson point process for $\alpha = 3$.
Moreover, \eqref{B2} is satisfied for $\alpha = 3$ with probability tending to $1$ as $n \to \infty$ for independent and identically distributed particles. We postpone further discussion to Section \ref{sec:prob}.
\noindent
Under these general assumptions, we obtain:
\begin{theorem} \label{main}
Let $\lambda > 0$, $f \in L^1({\mathbb R}^3) \cap L^\infty({\mathbb R}^3)$. For all $n$, let $r_n$ be such that $\displaystyle \lambda = \frac{4\pi}{3} n r_n^3$, let $f_n \in L^{\frac65}({\mathbb R}^3)$, and $u_n$ in $\displaystyle \dot{H}^1({\mathbb R}^3)$ the solution of \eqref{Sto}-\eqref{Sto2}. Assume \eqref{A0}-\eqref{B1}-\eqref{B2}, and that $f_n \rightarrow f$ in $L^{\frac65}({\mathbb R}^3)$. Then, there exists $p_{min} > 1$ such that for any $p < p_{min}$, any $q < \frac{3 p_{min}}{3 - p_{min}}$, one can find $\delta > 0$ with the estimate
$$ ||{\nabla} (u - u_E)||_{L^p({\mathbb R}^3)} + \limsup_{n \rightarrow +\infty} ||u_n - u_E||_{L^q(K)} = O(\lambda^{1+\delta}), \quad \forall K \Subset {\mathbb R}^3, \quad \text{as } \: \lambda \rightarrow 0, $$
where $u$ is any weak accumulation point of $u_n$ in $\displaystyle \dot{H}^1({\mathbb R}^3)$ and
$u_E$ satisfies Einstein's approximate model \eqref{Sto_E}.
\end{theorem}
\noindent Here, we use the notation $\dot H^1({\mathbb R}^3)$ for the homogeneous Sobolev space
$\dot H^1({\mathbb R}^3) = \{ w \in L^6({\mathbb R}^3) : {\nabla} w \in L^2({\mathbb R}^3)\}$ equipped with the $L^2$ norm of the gradient.
\begin{remark} \label{rem:exponents}
The following explicit formula for $p_{min}$ and $\delta$ will be obtained in the proof of the theorem:
\begin{align*}
p_{min} = 1 + \frac{\alpha}{6 + \alpha}, \qquad \delta = \frac 1 r - \frac{6}{6 + (2-r)\alpha}, \qquad r = \max\left\{p,\frac{3q}{3+q} \right\}.
\end{align*}
\end{remark}
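\noindent
To fix ideas, here is a worked instance of these exponents in the representative case $\alpha = 3$ (the value obtained for hard-core Poisson processes in Section \ref{sec:prob}); the numerical values are only illustrative:
\begin{align*}
\alpha = 3: \quad p_{min} = 1 + \frac{3}{9} = \frac43, \qquad \frac{3p_{min}}{3-p_{min}} = \frac{12}{5}, \qquad \delta = \frac1r - \frac{2}{4-r} \in \Big(0,\frac13\Big) \quad \text{for } r \in \Big(1,\frac43\Big),
\end{align*}
so that for $p$ and $q$ close to $1$ one obtains a rate close to $\lambda^{4/3}$, while the rate degenerates as $p \uparrow p_{min}$ or $q \uparrow \frac{12}{5}$.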
\begin{remark} \label{rem:recentbibli}
Since the first preprint version of this paper, several further results have appeared, which we briefly discuss in this remark.
In \cite[version 1]{DuerinckxGloria20}, an extensive study of the effective viscosity at low volume fraction was performed in the context of stationary ergodic particle configurations, under suitable versions of \eqref{B1}-\eqref{B2}. It includes results on the $O(\lambda^2)$ and higher order corrections, see also the recent paper \cite{Gerard-Varet20}. As regards the $O(\lambda)$ Einstein formula, a result analogous to Theorem \ref{main} was shown with methods of a more probabilistic flavour.
It was subsequently shown in \cite{Duerinckx20} and \cite[version 2]{DuerinckxGloria20} that both the existence of an effective viscosity and Einstein's formula hold when relaxing condition \eqref{B1} into a moment bound on the particle separation. We will argue in Section \ref{sec:B1} that our main result still holds under a similar, milder assumption.
Finally, in \cite{HoeferSchubert20}, results have been obtained concerning the coupling of Einstein's formula to the time evolution of sedimenting particles.
\end{remark}
\noindent
The rest of the paper is dedicated to the proof of Theorem \ref{main}.
\section{Main steps of proof}
To prove Theorem \ref{main}, we shall rely on an enhancement of the general strategy explained in \cite{DGV}, to justify various effective models for conducting and fluid media. Let us point out that one of the examples considered in \cite{DGV} is the scalar version of \eqref{Sto}-\eqref{Sto2}. It leads to a proof of a scalar analogue of Einstein's formula, under assumptions \eqref{A0}, \eqref{B1}, plus an abstract assumption intermediate between \eqref{A1} and \eqref{B2}. We refer to the discussion at the end of \cite{DGV} for more details. Nevertheless, to justify the effective fluid model \eqref{Sto_E} under the mild assumption \eqref{B2} will require several new steps. The main difficulty will be to handle particles that are close to one another, and will involve sharp $L^p$ estimates similar to those of \cite{GerMec20}.
\noindent
Concretely, let $\varphi$ be a smooth and compactly supported divergence-free vector field. For each $n$, we introduce the solution $\phi_n \in \dot{H}^1({\mathbb R}^3)$ of
\begin{equation} \label{Sto_phi}
\begin{aligned}
- \hbox{div \!}(2\mu D \phi_n) + {\nabla} q_n & = \hbox{div \!} (5 \lambda \mu \rho D \varphi) \: \text{ in } \: \Omega_n, \\
\hbox{div \!} \phi_n & = 0 \: \text{ in } \: \Omega_n, \\
\phi_n & = \varphi + \phi_{n,i} + w_{n,i} \times (x-x_i) \: \text{ in } \: B_i, \: 1 \le i \le n
\end{aligned}
\end{equation}
where the constant vectors $\phi_{n,i}$, $w_{n,i}$ are associated to the constraints
\begin{equation} \label{Sto2_phi}
\begin{aligned}
\int_{{\partial} B_i} \sigma_\mu(\phi_n,q_n) \nu & = - \int_{{\partial} B_i} 5 \lambda \mu \rho D \varphi \nu, \\
\int_{{\partial} B_i}(x-x_i) \times \sigma_\mu(\phi_n,q_n) \nu & = - \int_{{\partial} B_i}(x-x_i) \times 5 \lambda \mu \rho D \varphi \nu.
\end{aligned}
\end{equation}
Testing \eqref{Sto} with $\varphi - \phi_n$, we find after a few integrations by parts that
$$
\int_{{\mathbb R}^3} 2\mu_E Du_n : D \varphi = \int_{{\mathbb R}^3} f_n \cdot \varphi - \int_{{\mathbb R}^3} f_n \cdot \phi_n.
$$
Testing \eqref{Sto_E} with $\varphi$, we find
$$
\int_{{\mathbb R}^3} 2\mu_E Du_E : D \varphi = \int_{{\mathbb R}^3} f \cdot \varphi.
$$
Combining both, we end up with
\begin{equation} \label{weak_estimate}
\int_{{\mathbb R}^3} 2\mu_E D(u_n - u_E) : D \varphi = \int_{{\mathbb R}^3} (f_n - f) \cdot \varphi - \int_{{\mathbb R}^3} f_n \cdot \phi_n.
\end{equation}
We recall that the vector fields $u_n, u_E, \phi_n$ depend implicitly on $\lambda$.
\noindent
The main point will be to show
\begin{proposition} \label{main_prop}
There exists $p_{min} > 1$ such that for all $p < p_{min}$, there exists $\delta > 0$ and $C > 0$, independent of $\varphi$, such that
\begin{equation} \label{estimateR}
\limsup_{n \to \infty} \big| \int_{{\mathbb R}^3} f_n \cdot \phi_n \big| \le C \lambda^{1+\delta} ||{\nabla} \varphi||_{L^{p'}}, \quad p' = \frac{p}{p-1}.
\end{equation}
\end{proposition}
\noindent
Let us show how the theorem follows from the proposition. First, by standard energy estimates, we find that $u_n$ is bounded in $\dot{H}^1({\mathbb R}^3)$ uniformly in $n$. Let $u = \lim u_{n_k}$ be a weak accumulation point of $u_n$ in this space. Taking the limit in \eqref{weak_estimate}, we get
$$ \int_{{\mathbb R}^3} 2\mu_E D(u - u_E) : D \varphi = \langle R, \varphi \rangle $$
where $\langle R , \varphi \rangle = \lim_{k \rightarrow +\infty} \int_{{\mathbb R}^3} f_{n_k} \cdot \phi_{n_k}$.
Recall that $\varphi$ is an arbitrary smooth and compactly supported divergence-free vector field and that such functions are dense in the homogeneous Sobolev space of divergence-free functions $\dot{W}^{1,p'}_\sigma$.
Thus, Proposition \ref{main_prop} implies that $R$ is an element of $\dot{W}_\sigma^{-1,p}$ with $||R||_{\dot{W}_\sigma^{-1,p}} = O(\lambda^{1+\delta})$. Moreover, the previous identity is the weak formulation of
$$ - \hbox{div \!}(2 \mu_E D(u - u_E)) + {\nabla} q = R, \quad \hbox{div \!} (u - u_E) = 0 \quad \text{ in } \: {\mathbb R}^3. $$
Writing these Stokes equations with non-constant viscosity as
$$ -\mu \Delta (u - u_E) + {\nabla} q = R + \hbox{div \!} (5 \lambda \mu \rho D(u-u_E)), \quad \hbox{div \!} (u - u_E) = 0 \quad \text{ in } \: {\mathbb R}^3. $$
and using standard estimates for this system, we get
$$ ||{\nabla} (u-u_E)||_{L^p} \le C \left(||R||_{\dot{W}^{-1,p}_\sigma} + \lambda ||{\nabla} (u-u_E)||_{L^p} \right). $$
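Here, "standard estimates" refers to the $L^p$ theory for the constant-viscosity Stokes system in ${\mathbb R}^3$; for the reader's convenience, the estimate used is (with a constant depending only on $p$ and $\mu$)
$$ \|{\nabla} w\|_{L^p({\mathbb R}^3)} \le C \|F\|_{\dot{W}^{-1,p}({\mathbb R}^3)} \quad \text{for } \: -\mu \Delta w + {\nabla} q = F, \quad \hbox{div \!} w = 0 \quad \text{in } \: {\mathbb R}^3, $$
applied with $w = u - u_E$ and $F = R + \hbox{div \!}(5\lambda\mu\rho D(u-u_E))$, together with the duality bound $\|\hbox{div \!}(5\lambda\mu\rho D(u-u_E))\|_{\dot{W}^{-1,p}} \le 5\mu \|\rho\|_{L^\infty} \, \lambda \, \|D(u-u_E)\|_{L^p}$.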
For $\lambda$ small enough, the last term is absorbed by the left-hand side, and finally
$$ ||{\nabla} (u-u_E)||_{L^p({\mathbb R}^3)} \le C\lambda^{1+\delta}$$
which implies the first estimate of the theorem. Then, by Sobolev imbedding, for any $q \le \frac{3p}{3-p}$, and any compact $K$,
\begin{equation} \label{Sob_imbed}
||u-u_E||_{L^q(K)} \le C_{K,q} \, \lambda^{1+\delta}.
\end{equation}
We claim that
$ \limsup_{n \to \infty} ||u_n-u_E||_{L^q(K)} \le C_{K,q} \, \lambda^{1+\delta}$. Otherwise, there exists a subsequence $u_{n_k}$ and $\varepsilon > 0$ such that $\displaystyle ||u_{n_k} - u_E||_{L^q(K)} \ge C_{K,q} \, \lambda^{1+\delta} + \varepsilon$ for all $k$. Denoting by $u$ a (weak) accumulation point of $u_{n_k}$ in $\dot{H}^1$, Rellich's theorem implies that, for a subsequence still denoted $u_{n_k}$, $||u_{n_k} - u||_{L^q(K)} \rightarrow 0$, because $q < 6$ (for $p_{min}$ taken small enough). Combining this with \eqref{Sob_imbed}, we reach a contradiction. As $p$ is arbitrary in $(1, p_{min})$, $q \leq \frac{3p}{3-p}$ is arbitrary in $(1, \frac{3 p_{min}}{3 - p_{min}})$. The last estimate of the theorem is proved.
\noindent
It remains to prove Proposition \ref{main_prop}.
To this end, we need a better understanding of the solution $\phi_n$ of \eqref{Sto_phi}-\eqref{Sto2_phi}. Neglecting any interaction between the balls, a natural attempt is to approximate $\phi_n$ by
\begin{equation} \label{approx_phi}
\phi_n \approx \phi_{{\mathbb R}^3} + \sum_i \phi_{i,n}
\end{equation}
where $\phi_{{\mathbb R}^3}$ is the solution of
\begin{equation} \label{eq_phi_R3}
- \mu \Delta \phi_{{\mathbb R}^3} + {\nabla} p_{{\mathbb R}^3} = \hbox{div \!}(5 \lambda \mu \rho D\varphi), \quad \hbox{div \!} \phi_{{\mathbb R}^3} = 0 \quad \text{in } \: {\mathbb R}^3
\end{equation}
and $\phi_{i,n}$ solves
\begin{equation} \label{eq_phi_i_n}
- \mu \Delta \phi_{i,n} + {\nabla} p_{i,n} = 0, \quad \hbox{div \!}\phi_{i,n} = 0 \quad \text{ outside } \: B_i, \quad \phi_{i,n}\vert_{B_i}(x) = D \varphi(x_i) \, (x-x_i)
\end{equation}
Roughly, the idea of approximation \eqref{approx_phi} is that $\phi_{{\mathbb R}^3}$ adjusts to the source term in \eqref{Sto_phi}, while for all $i$, $\phi_{i,n}$ adjusts to the boundary condition at the ball $B_i$. Indeed, using a Taylor expansion of $\varphi$ at $x_i$, and splitting ${\nabla} \varphi(x_i)$ between its symmetric and skew-symmetric part, we find
$$ \phi_n\vert_{B_i}(x) \approx D\varphi(x_i) \, (x- x_i) \: + \: \text{\em rigid vector field} = \phi_{i,n}\vert_{B_i}(x) \: + \: \text{\em rigid vector field}. $$
Moreover, $\phi_{i,n}$ can be shown to generate no force and no torque, so that the extra rigid vector fields (whose role is to ensure the no-force and no-torque conditions) should be small.
\noindent
Still, approximation \eqref{approx_phi} may be too crude: the vector fields $\phi_{j,n}$, $j \neq i$, have a non-trivial contribution at $B_i$, and for the balls $B_j$ close to $B_i$, which are not excluded by our relaxed assumption \eqref{B1}, these contributions may be relatively large. We shall therefore modify the approximation, restricting the sum in \eqref{approx_phi} to balls far enough from the others.
\noindent
Therefore, for $\eta > 0$, we introduce a {\em good} and a {\em bad} set of indices:
\begin{equation} \label{good_bad_sets}
\mathcal{G}_\eta = \{ 1 \le i \le n, \: \forall j \neq i, |x_i - x_j| \ge \eta n^{-\frac{1}{3}} \}, \quad \mathcal{B}_\eta = \{1, \dots, n\} \setminus \mathcal{G}_\eta.
\end{equation}
The good set $\mathcal{G}_\eta$ corresponds to balls that are at least $\eta n^{-\frac{1}{3}}$ away from all the others. The parameter $\eta > 0$ will be specified later: we shall consider $\eta = \lambda^\theta$ for some appropriate power $0 < \theta < 1/3$. We set
\begin{equation} \label{def_phi_app}
\phi_{app,n} = \phi_{{\mathbb R}^3} + \sum_{i \in \mathcal{G}_\eta} \phi_{i,n}.
\end{equation}
Note that $\phi_{{\mathbb R}^3}$ and $\phi_{i,n}$ are explicit:
$$ \phi_{{\mathbb R}^3} = \mathcal{U} \star \hbox{div \!}(5 \lambda \rho D\varphi), \quad \mathcal{U}(x) = \frac{1}{8\pi} \left( \frac{I}{|x|} + \frac{x \otimes x}{|x|^3} \right)$$
and
\begin{equation} \label{def_phi_in}
\phi_{i,n} = r_n V[D \varphi(x_i)]\left(\frac{x-x_i}{r_n}\right)
\end{equation}
where for all trace-free symmetric matrix $S$, $V[S]$ solves
$$ -\Delta V[S] + {\nabla} P[S] = 0, \: \hbox{div \!} V[S] = 0 \quad \text{outside } \: B(0,1), \quad V[S](x) = Sx, \: x \in B(0,1). $$
with expressions
$$ V[S] = \frac{5}{2} S : (x \otimes x) \frac{x}{|x|^5} + Sx \frac{1}{|x|^5} - \frac{5}{2} (S : x \otimes x) \frac{x}{|x|^7}, \quad P[S] = 5 \frac{S : x \otimes x}{|x|^5}. $$
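One may check directly from these expressions that the boundary condition in the definition of $V[S]$ is indeed satisfied: for $|x| = 1$,
$$ V[S]\big|_{|x|=1} = \frac{5}{2}\,(S : x \otimes x)\, x + Sx - \frac{5}{2}\,(S : x \otimes x)\, x = Sx, $$
while $V[S](x) = O(|x|^{-2})$ and $P[S](x) = O(|x|^{-3})$ as $|x| \to \infty$.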
Eventually, we denote
$$ \psi_n = \phi_n - \phi_{app,n}. $$
Tedious but straightforward calculations show that
$$ - \hbox{div \!} (\sigma_\mu(V[S], P[S])) = 5 \mu S x s^1 = - \hbox{div \!} (5 \mu S 1_{B(0,1)}) \quad \text{in } \: {\mathbb R}^3 $$
where $s^1$ denotes the surface measure at the unit sphere. It follows that
\begin{equation}
- \mu \Delta \phi_{app,n} + {\nabla} p_{app,n} = \hbox{div \!} \Big( 5 \lambda \mu \rho D\varphi - \sum_{i \in \mathcal{G}_\eta} 5 \mu D \varphi(x_i) 1_{B_i} \Big), \quad \hbox{div \!} \phi_{app,n} = 0 \quad \text{in} \: {\mathbb R}^3.
\end{equation}
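Let us sketch how this follows from the previous display by rescaling, with the same pressure normalization as above. Write $S_i = D\varphi(x_i)$ and set $q_{i,n}(x) = P[S_i]\big(\frac{x-x_i}{r_n}\big)$, so that $\sigma_\mu(\phi_{i,n},q_{i,n})(x) = \sigma_\mu(V[S_i],P[S_i])\big(\frac{x-x_i}{r_n}\big)$. Since each derivative in $x$ produces a factor $r_n^{-1}$,
$$ -\hbox{div \!}\big(\sigma_\mu(\phi_{i,n},q_{i,n})\big)(x) = \frac1{r_n}\Big(-\hbox{div \!}\big(\sigma_\mu(V[S_i],P[S_i])\big)\Big)\Big(\frac{x-x_i}{r_n}\Big) = \frac1{r_n}\Big(-\hbox{div \!}\big(5\mu S_i 1_{B(0,1)}\big)\Big)\Big(\frac{x-x_i}{r_n}\Big) = -\hbox{div \!}\big(5\mu S_i 1_{B_i}\big)(x). $$
Summing over $i \in \mathcal{G}_\eta$ and adding equation \eqref{eq_phi_R3} for $\phi_{{\mathbb R}^3}$ gives the stated system for $\phi_{app,n}$.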
Moreover, for all $1 \le i \le n$,
\begin{align*}
\int_{{\partial} B_i} \sigma_\mu(\phi_{app,n}, p_{app,n}) \nu & = - \int_{{\partial} B_i} 5 \lambda \mu \rho D\varphi \nu, \\
\int_{{\partial} B_i} (x-x_i) \times \sigma_\mu(\phi_{app,n}, p_{app,n}) \nu & = - \int_{{\partial} B_i} (x-x_i) \times 5 \lambda \mu \rho D\varphi \nu.
\end{align*}
Hence, the remainder $\psi_n$ satisfies
\begin{equation} \label{Sto_psi}
\begin{aligned}
- \mu \Delta \psi_n + {\nabla} q_n & = 0 \: \text{ in } \: \Omega_n, \\
\hbox{div \!} \psi_n & = 0 \: \text{ in } \: \Omega_n, \\
\psi_n & = \varphi - \phi_{app,n} + \psi_{n,i} + w_{n,i} \times (x-x_i) \: \text{ in } \: B_i, \: 1 \le i \le n
\end{aligned}
\end{equation}
where the constant vectors $\psi_{n,i}$, $w_{n,i}$ are associated to the constraints
\begin{equation} \label{Sto2_psi}
\begin{aligned}
\int_{{\partial} B_i} \sigma_\mu(\psi_n,q_n) \nu & = 0, \\
\int_{{\partial} B_i}(x-x_i) \times \sigma_\mu(\psi_n,q_n) \nu & = 0.
\end{aligned}
\end{equation}
Estimates on $\phi_{app,n}$ and $\psi_n$ will be postponed to sections \ref{sec_app} and \ref{sec_rem} respectively. Regarding $\phi_{app,n}$, we shall prove
\begin{proposition} \label{prop_phi_app}
For all $p \ge 1$,
\begin{equation} \label{estimate_phi_app}
\limsup_{n \to \infty} \Big|\int_{{\mathbb R}^3} f \cdot \phi_{app,n} \Big| \le C_{p,f} (\lambda \eta^\alpha)^{\frac{1}{p}} ||{\nabla} \varphi||_{L^{p'}}.
\end{equation}
\end{proposition}
\noindent
Regarding the remainder $\psi_n$, we shall prove
\begin{proposition} \label{prop_psi}
For all $1 < p < 2$, there exists $c> 0$ independent of $\lambda$ such that for all $1 \ge \eta \ge c \lambda^{1/3}$,
$$ \limsup_{n \to \infty} \Big|\int_{{\mathbb R}^3} f \cdot \psi_n \Big| \le C_{p,f} \lambda^{\frac 1 2} \Big(\lambda^{1+ \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} + \big( \eta^\alpha \lambda \big)^{\frac{2-p}{2p}} \Big) ||{\nabla} \varphi||_{L^{p'}}.$$
\end{proposition}
\noindent
Let us explain how to deduce Proposition \ref{main_prop} from these two propositions. Let $1 < p < 2$. By standard estimates, we see that $\phi_n$ is bounded uniformly in $n$ in $\dot{H}^1$. It follows that
\begin{align*}
\limsup_{n \to \infty} \Big| \int_{{\mathbb R}^3} f_n \cdot \phi_n \Big| &= \limsup_{n \to \infty} \Big| \int_{{\mathbb R}^3} f \cdot \phi_n \Big| \le \limsup_{n \to \infty} \Big| \int_{{\mathbb R}^3} f \cdot \phi_{app,n} \Big| + \limsup_{n \to \infty} \Big| \int_{{\mathbb R}^3} f \cdot \psi_n \Big|
\\ &\le C_{p,f} \left((\lambda \eta^\alpha)^{\frac{1}{p}} + \lambda^{\frac 3 2 + \frac{2-p}{2p}} \eta^{-\frac{3}{p}} + \lambda^{\frac 1 2} \big( \eta^\alpha \lambda \big)^{\frac{2-p}{2p}} \right) \|{\nabla} \varphi \|_{L^{p'}}.
\end{align*}
To conclude, we properly adjust the parameters $p$ and $\eta$.
We look for $\eta$ in the form $\eta = \lambda^{\theta}$, with $0 < \theta < \frac{1}{3}$, so that the lower bound on $\eta$ needed in Proposition \ref{prop_psi} will be satisfied for small enough $\lambda$.
Then, we choose $p_{min} = 1 + \frac{\alpha}{6 + \alpha}$ and for $p < p_{min}$ we choose
$\theta = \frac{2p}{6 + (2-p) \alpha}$. It is straightforward to check that this yields a right-hand side $\lambda^{1+\delta}$ with $\delta = \frac 1 p - \frac{6}{6 + (2-p)\alpha}$, in accordance with Remark \ref{rem:exponents}.
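For completeness, here is the elementary verification, writing $D = 6 + (2-p)\alpha$ so that $\theta = \frac{2p}{D}$ and $1+\delta = 1 + \frac1p - \frac6D$:
\begin{align*}
(\lambda \eta^\alpha)^{\frac1p} &= \lambda^{\frac1p + \frac{2\alpha}{D}} = \lambda^{1+\delta} \cdot \lambda^{\frac{p\alpha}{D}}, \\
\lambda^{\frac 3 2 + \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} &= \lambda^{1 + \frac1p - \frac6D} = \lambda^{1+\delta}, \\
\lambda^{\frac 1 2} \big( \eta^\alpha \lambda \big)^{\frac{2-p}{2p}} &= \lambda^{\frac1p + \frac{(2-p)\alpha}{D}} = \lambda^{1+\delta},
\end{align*}
using $\frac32 + \frac{2-p}{2p} = 1 + \frac1p$, $\frac12 + \frac{2-p}{2p} = \frac1p$ and the definition of $D$. Moreover, $\delta > 0$ and $\theta < \frac13$ are both equivalent to $6p < D$, that is to $p < 1 + \frac{\alpha}{6+\alpha} = p_{min}$.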
\section{Bound on the approximation} \label{sec_app}
This section is devoted to the proof of Proposition \ref{prop_phi_app}. We decompose
$$\phi_{app,n} = \phi_{app,n}^1 + \phi_{app,n}^2 + \phi_{app,n}^3$$
where
\begin{align*}
& - \mu \Delta \phi^1_{app,n} + {\nabla} p^1_{app,n} = \hbox{div \!} \Big( 5 \lambda \mu \rho D\varphi - \sum_{1 \le i \le n} 5 \mu D \varphi(x_i) 1_{B_i} \Big), \quad \hbox{div \!} \phi^1_{app,n} = 0 \quad \text{in} \: {\mathbb R}^3, \\
& - \mu \Delta \phi^2_{app,n} + {\nabla} p^2_{app,n} = \hbox{div \!} \Big( \sum_{i \in \mathcal{B}_\eta} 5 \mu D \varphi(x) 1_{B_i} \Big), \quad \hbox{div \!} \phi^2_{app,n} = 0 \quad \text{in} \: {\mathbb R}^3, \\
& - \mu \Delta \phi^3_{app,n} + {\nabla} p^3_{app,n} = \hbox{div \!} \Big( \sum_{i \in \mathcal{B}_\eta} 5 \mu (D \varphi(x_i) - D\varphi(x)) 1_{B_i} \Big), \quad \hbox{div \!} \phi^3_{app,n} = 0 \quad \text{in} \: {\mathbb R}^3.
\end{align*}
By standard energy estimates, $\phi^k_{app,n}$ is seen to be bounded in $n$ in $\dot{H^1}$, for all $1 \le k \le 3$. We shall prove next that
$\phi^1_{app,n}$ and $\phi^3_{app,n}$ converge in the sense of distributions to zero, while for any $f$ with $\displaystyle D ( \Delta)^{-1} \mathbb{P} f \in L^\infty$ ($\mathbb{P}$ denoting the standard Helmholtz projection), for any $p \ge 1$,
\begin{equation} \label{estimate_phi_app_2}
\Big|\int_{{\mathbb R}^3} f \cdot \phi^2_{app,n} \Big| \le C_{f,p} (\lambda \eta^\alpha)^{\frac{1}{p}} ||{\nabla} \varphi||_{L^{p'}}, \quad p' = \frac{p}{p-1}.
\end{equation}
Proposition \ref{prop_phi_app} follows easily from those properties.
\noindent
We start with
\begin{lemma}
Under assumption \eqref{A0}, $\: \sum_{1 \le i \le n} D\varphi(x_i) \mathbf{1}_{B_i} \: \rightharpoonup \: \lambda \rho D\varphi $ weakly* in $L^\infty$.
\end{lemma}
\noindent
{\em Proof}. As the balls are disjoint, $|\sum_{1 \le i \le n} D\varphi(x_i) \mathbf{1}_{B_i}| \le ||D\varphi||_{L^\infty}$. Let $g \in C_c({\mathbb R}^3)$, and denote $\delta_n = \frac{1}{n} \sum_{i} \delta_{x_i}$ the empirical measure. We write
\begin{align*}
\int_{{\mathbb R}^3} \sum_{1 \le i \le n} D\varphi(x_i) \mathbf{1}_{B_i}(y) g(y) dy & = \sum_{1 \le i \le n} D\varphi(x_i) \int_{B(0,r_n)} g(x_i+y) dy \\
& = n \int_{{\mathbb R}^3} D\varphi(x) \int_{B(0,r_n)} g(x+y) dy d\delta_n(x) \\
& = n r_n^3 \int_{{\mathbb R}^3} \int_{B(0,1)} g(x+r_nz) dz d\delta_n(x).
\end{align*}
The sequence of bounded continuous functions $x \rightarrow \int_{B(0,1)} g(x+r_n z) dz$ converges uniformly to the function $x \rightarrow \frac{4\pi}{3} g(x)$ as $n \rightarrow +\infty$. We deduce:
$$ \lim_{n \to \infty} \int_{{\mathbb R}^3} \sum_{1 \le i \le n} D\varphi(x_i) \mathbf{1}_{B_i}(y) g(y) dy = \lim_{n \to \infty} \lambda \int_{{\mathbb R}^3} D\varphi(x) g(x) d\delta_n(x) = \lambda \int_{{\mathbb R}^3} D\varphi(x) g(x) \rho(x) dx $$
where the last equality comes from {\mathbb E}qref{A0}. The lemma follows by density of $C_c$ in $L^1$.
\noindent
Let now $h \in C^\infty_c({\mathbb R}^3)$ and $v = (\Delta)^{-1} \mathbb{P} h$. We find
\begin{align*}
\langle \phi_{app,n}^1 , h \rangle & = \langle \phi_{app,n}^1 , \Delta v \rangle = \langle \Delta \phi_{app,n}^1 , v \rangle \\
& = \int_{{\mathbb R}^3} \big( 5 \lambda \mu \rho D\varphi - \sum_{1 \le i \le n} 5 \mu D \varphi(x_i) 1_{B_i} \big) \cdot Dv \: \rightarrow 0 \quad \text{ as } \: n \rightarrow +\infty
\end{align*}
where we used the previous lemma and the fact that $Dv$ belongs to $L^1_{loc}$ and $\varphi$ has compact support. Hence, $\phi_{app,n}^1$ converges to zero in the sense of distributions. As regards $\phi^3_{app,n}$, we notice that
\begin{align*}
||\sum_{i \in \mathcal{B}_\eta} 5 \mu (D \varphi(x) - D\varphi(x_i)) 1_{B_i}||_{L^1} & \le ||{\nabla}^2 \varphi||_{L^\infty} \sum_{1 \le i \le n} \int_{B_i} |x-x_i| dx \\
& \le ||{\nabla}^2 \varphi||_{L^\infty} \lambda r_n \rightarrow 0 \quad \text{ as } \: n \rightarrow +\infty.
\end{align*}
Using the same duality argument as for $\phi^1_{app, n}$ (see also below), we get that $\phi^3_{app,n}$ converges to zero in the sense of distributions.
\noindent
It remains to show \eqref{estimate_phi_app_2}. We use a simple H\"older estimate, and write for all $p \ge 1$:
\begin{align*}
||\sum_{i \in \mathcal{B}_\eta} 5 \mu D \varphi 1_{B_i}||_{L^1} & \le 5 \mu || \sum_{i \in \mathcal{B}_\eta} 1_{B_i}||_{L^p} ||D\varphi||_{L^{p'}} = 5 \mu \big( \text{card} \ \mathcal{B}_\eta \, \frac{4\pi}{3} r_n^3 \big)^{\frac{1}{p}} ||D\varphi||_{L^{p'}} \\
& \le C (\eta^\alpha \lambda)^{\frac{1}{p}} ||D\varphi||_{L^{p'}}
\end{align*}
where the last inequality follows from \eqref{B2}. Denoting $v = ( \Delta)^{-1} \mathbb{P} f$, we have this time
\begin{align*}
\int_{{\mathbb R}^3} f \cdot \phi_{app,n}^2 & = \int_{{\mathbb R}^3} D v \cdot \sum_{i \in \mathcal{B}_\eta} 5 \mu D \varphi 1_{B_i} \le C ||Dv||_{L^\infty} (\eta^\alpha \lambda)^{\frac{1}{p}} ||D\varphi||_{L^{p'}}
\end{align*}
which implies \eqref{estimate_phi_app_2}.
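\noindent
Let us also record why the condition $D(\Delta)^{-1}\mathbb{P} f \in L^\infty$ holds for the data of Theorem \ref{main}; this is a standard convolution estimate, stated here under the sole assumption $f \in L^1({\mathbb R}^3) \cap L^\infty({\mathbb R}^3)$. Indeed, $f \in L^{\frac65}({\mathbb R}^3) \cap L^4({\mathbb R}^3)$ and $\mathbb{P}$ is bounded on these spaces, while $D(\Delta)^{-1}$ is given by convolution with a kernel bounded by $C|z|^{-2}$, so that by H\"older's inequality,
$$ |D(\Delta)^{-1}\mathbb{P} f(x)| \le C \big\| |z|^{-2} 1_{|z| \le 1} \big\|_{L^{4/3}} \|\mathbb{P} f\|_{L^4} + C \big\| |z|^{-2} 1_{|z| > 1} \big\|_{L^{6}} \|\mathbb{P} f\|_{L^{\frac65}} < \infty. $$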
\section{Bound on the remainder} \label{sec_rem}
We focus here on estimates for the remainder $\psi_n = \phi_n - \phi_{app,n}$, which satisfies \eqref{Sto_psi}-\eqref{Sto2_psi}.
The proof of Proposition \ref{prop_psi} relies on properties of the solutions of the system
\begin{equation} \label{Sto_Psi}
-\mu \Delta \psi + {\nabla} p = 0, \quad \hbox{div \!} \psi = 0 \quad \text{ in } \: \Omega_n, \quad
D \psi = D \tilde{\psi} \quad \text{ in } \: B_i, \quad 1 \le i \le n
{\mathbb E}nd{equation}
together with the constraints
\begin{equation} \label{Sto2_Psi}
\int_{{\partial} B_i} \sigma_\mu(\psi, p)\nu = \int_{{\partial} B_i} (x-x_i) \times \sigma_\mu(\psi, p)\nu = 0, \quad 1 \le i \le n.
\end{equation}
More precisely, we use a duality argument to prove the following proposition, corresponding to \cite[Proposition 3.2]{GerMec20}.
\begin{proposition} \label{prop_estimate_gphi2n}
Let $q > 3$. Then, under assumption \eqref{B1}, for all $g \in L^{q}({\mathbb R}^3)$ and all $\tilde \psi \in H^1(\cup_i B_i)$, the weak solution $\psi \in \dot H^1({\mathbb R}^3)$ to \eqref{Sto_Psi}-\eqref{Sto2_Psi} satisfies
\begin{align} \label{improvement}
\left|\int_{{\mathbb R}^3} g \psi \right| \leq C_{g} \lambda^{\frac 1 2 } \| D \tilde{\psi} \|_{L^2(\cup B_i)}.
\end{align}
\end{proposition}
\begin{proof}
We introduce the solution $u_g$ of the Stokes equation
\begin{equation} \label{eq_ug}
-\Delta u_g + {\nabla} p_g = g, \quad \hbox{div \!} u_g = 0, \quad \text{ in } \: {\mathbb R}^3.
{\mathbb E}nd{equation}
As $g \in L^{q}$, $q>3$, $u_g \in W^{2,q}_{loc}$, so that $D(u_g)$ is continuous.
Integrations by parts yield
\begin{align*}
\int_{{\mathbb R}^3} g \psi & = \int_{{\mathbb R}^3}(-\Delta u_g + {\nabla} p_g) \psi = 2 \int_{{\mathbb R}^3} D(u_g) : D(\psi) \\
& = 2 \int_{\cup B_i} D(u_g) : D(\psi) - \sum_i \int_{{\partial} B_i} u_g \cdot \sigma(\psi, p)\nu \\
& = 2 \int_{\cup B_i} D(u_g) : D(\psi) - \sum_i \int_{{\partial} B_i} (u_g + u^i_g + \omega^i_g \times (x-x_i)) \cdot \sigma(\psi, p)\nu
\end{align*}
for any constant vectors $u^i_g$, $\omega^i_g$, $1 \le i \le n$, by the force-free and torque-free conditions on $\psi$.
As $u_g + u^i_g + \omega^i_g \times (x-x_i)$ is divergence-free, one has
$$ \int_{{\partial} B_i} (u_g + u^i_g + \omega^i_g \times (x-x_i)) \cdot \nu = 0. $$
We can apply classical considerations on the Bogovskii operator: for any $1 \le i \le n$, there exists $U_g ^i \in H^1_0(B(x_i, (M/2)r_n))$ such that
$$ \hbox{div \!} U_g ^i = 0 \quad \text{ in } \: B\Big(x_i, \frac{M}{2} r_n\Big), \quad U_g ^i = u_g + u^i_g + \omega^i_g \times (x-x_i) \quad \text{ in } \: B_i $$
and with
$$ ||{\nabla} U_g ^i||_{L^2} \le C_{i,n} ||u_g + u^i_g + \omega^i_g \times (x-x_i)||_{W^{1,2}(B_i)} $$
Furthermore, by a proper choice of $u_g ^i$ and $\omega_g^i$, we can ensure the Korn inequality:
$$ ||u_g + u^i_g + \omega^i_g \times (x-x_i)||_{W^{1,2}(B_i)} \le c'_{i,n} ||D(u_g)||_{L^2(B_i)} $$
resulting in
\begin{equation*} \label{control_Ug^i}
||{\nabla} U_g ^i||_{L^2} \le C ||D(u_g)||_{L^2(B_i)}
\end{equation*}
where the constant $C$ in the last inequality can be taken independent of $i$ and $n$ by translation and scaling arguments. Extending $U_g ^i$ by zero, and denoting $U_g = \sum U_g ^i$, we have
\begin{equation} \label{control_Ug}
||{\nabla} U_g||_{L^2} \le C ||D(u_g)||_{L^2(\cup B_i)}
\end{equation}
Thus, we find
\begin{align*}
\int_{{\mathbb R}^3} g \psi & = 2 \int_{\cup B_i} D(U_g) : D(\psi) - \sum_i \int_{{\partial} B_i} U_g \cdot \sigma(\psi, p)\nu \\
& = 2 \int_{{\mathbb R}^3} D(U_g) : D(\psi)
\end{align*}
By using \eqref{control_Ug} and the Cauchy-Schwarz inequality, we end up with
\begin{align*}
\big| \int_{{\mathbb R}^3} g \psi \big| & \le C ||D(u_g)||_{L^2(\cup B_i)} \|D(\psi)\|_{L^2({\mathbb R}^3)} \le C ||D(u_g)||_{L^\infty} \lambda^{\frac12} \|D(\psi)\|_{L^2({\mathbb R}^3)}
\end{align*}
Now the assertion follows from the standard estimate
\begin{equation} \label{L2_estimate_Psi}
||{\nabla} \psi||_{L^2({\mathbb R}^3)} \le C \| D \tilde{\psi} \|_{L^2(\cup B_i)}
\end{equation}
for a constant $C$ independent of $n$. Indeed, by a classical variational characterization of $\psi$, we have
$$
||{\nabla} \psi||_{L^2({\mathbb R}^3)}^2 = 2 ||D \psi||_{L^2({\mathbb R}^3)}^2 = \inf \big\{ 2 ||D U||_{L^2({\mathbb R}^3)}^2, \: D U = D \tilde{\psi} \: \text{ on } \cup_i B_i\big\}.
$$
Thus, \eqref{L2_estimate_Psi} follows by constructing such a vector field $U$ from $\tilde \psi$ in the same manner as we constructed $U_g$ from $u_g$ above and applying \eqref{control_Ug}.
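Let us also recall why the Dirichlet energy and the energy of the symmetric gradient coincide for divergence-free fields, which justifies the first equality in the variational characterization: two integrations by parts give
$$ 2\int_{{\mathbb R}^3} |D\psi|^2 = \int_{{\mathbb R}^3} |{\nabla} \psi|^2 + \int_{{\mathbb R}^3} {\nabla} \psi : ({\nabla} \psi)^t = \int_{{\mathbb R}^3} |{\nabla} \psi|^2 + \int_{{\mathbb R}^3} (\hbox{div \!} \psi)^2 = \int_{{\mathbb R}^3} |{\nabla} \psi|^2. $$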
\end{proof}
\noindent
By \eqref{Sto_psi} we can apply this proposition with $g=f$, $\psi = \psi_n$ and $\tilde \psi_n = \varphi - \phi_{app,n}$.
Thus, for the proof of Proposition \ref{prop_psi}, it remains to show
\begin{equation} \label{bound.psi}
\limsup_{n \to \infty} ||D (\varphi - \phi_{app,n})||_{L^2(\cup B_i)} \le C \Big(\lambda^{1+ \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} + \big( \eta^\alpha \lambda \big)^{\frac{2-p}{2p}} \Big) ||{\nabla} \varphi||_{L^{p'}}.
\end{equation}
We decompose
$$\varphi - \phi_{app,n} = \tilde{\psi}^1_n + \tilde{\psi}^2_n + \tilde{\psi}^3_n $$
where
\begin{align*}
& \forall 1 \le i \le n, \: \forall x \in B_i, \quad \tilde{\psi}^1_n(x) = -\phi_{{\mathbb R}^3}(x) - \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \phi_{j,n}(x)
\end{align*}
and
\begin{align*}
& \forall i \in \mathcal{G}_\eta, \: \forall x \in B_i, \quad \tilde{\psi}^2_n(x) = \varphi(x) - \varphi(x_i) - {\nabla}\varphi(x_i) (x-x_i) + \Bigl(\varphi(x_i) + \frac 1 2 \hbox{curl \!} \varphi(x_i) \times (x-x_i)\Bigr), \\
& \forall i \in \mathcal{B}_\eta, \: \forall x \in B_i, \quad \tilde{\psi}^2_n(x) = 0, \\
& \forall i \in \mathcal{G}_\eta, \: \forall x \in B_i, \quad \tilde{\psi}^3_n(x) = 0, \\
& \forall i \in \mathcal{B}_\eta, \: \forall x \in B_i, \quad \tilde{\psi}^3_n(x) = \varphi(x).
\end{align*}
We recall that the sum in \eqref{def_phi_app} is restricted to indices $i \in \mathcal{G}_\eta$ and that $\phi_{i,n}(x) = D\varphi(x_i) (x-x_i)$ for $x$ in $B_i$. This explains the distinction between $\tilde{\psi}^2_n$ and $\tilde{\psi}^3_n$.
\noindent
The control of $\tilde \psi^2_n$ is the simplest:
\begin{align} \label{psi_2n}
\|D \tilde \psi^2_n\|_{L^2(\cup B_i)} \le C ||D^2 \varphi||_{L^\infty} \Big( \sum_{i \in \mathcal{G}_\eta} \int_{B_i} |x-x_i|^2 dx \Bigr)^{1/2} \le C' \lambda^{1/2} r_n.
\end{align}
Hence,
\begin{equation} \label{lim_psi_2n}
\lim_{n \rightarrow +\infty} \|D \tilde \psi^2_n\|_{L^2(\cup B_i)} = 0.
\end{equation}
Next, we estimate $\tilde \psi^3_n$.
This term expresses the effect of the balls $B_i$, $i \in \mathcal{B}_\eta$, that are close to one another. By assumption \eqref{B2}, $\text{card} \, \mathcal{B}_\eta \le C \eta^\alpha n$.
Thus,
\begin{equation} \label{bound_psi_3n}
\begin{aligned}
\|D \tilde \psi^3_n\|_{L^2(\cup B_i)} \le C \|1_{\cup_{i \in \mathcal{B}_\eta} B_i}\|_{L^{\frac{2p}{2-p}}({\mathbb R}^3)} \|D\varphi\|_{L^{p'}(\cup_{i \in \mathcal{B}_\eta} B_i)}
\leq C'(\eta^\alpha \lambda)^{\frac {2-p} {2p}} \|{\nabla} \varphi\|_{L^{p'}}.
\end{aligned}
\end{equation}
The final step in the proof of Proposition \ref{prop_psi} is to establish bounds on $\tilde \psi^1_n$.
We have
\begin{align} \label{bound_psi_2n}
||D \tilde \psi^1_n||_{L^2(\cup B_i)} & \le C \Big( \|D \phi_{{\mathbb R}^3}\|_{L^2(\cup B_i)} + \Big(\sum_i \int_{B_i} \big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} D\phi_{j,n}\big|^2 \Big)^{1/2} \Big)
\end{align}
For any $r,s < +\infty$ with $\frac{1}{r} + \frac{1}{s} = \frac{1}{2}$, we obtain
\begin{equation} \label{bound_phi_R3}
\|D \phi_{{\mathbb R}^3}\|_{L^2(\cup B_i)} \: \le \: ||1_{\cup B_i}||_{L^r({\mathbb R}^3)} ||D \phi_{{\mathbb R}^3}||_{L^s({\mathbb R}^3)} \: \le C ||1_{\cup B_i}||_{L^r({\mathbb R}^3)} ||\lambda \rho D\varphi||_{L^s({\mathbb R}^3)}
\end{equation}
using standard $L^s$ estimate for system {\mathbb E}qref{eq_phi_R3}. Hence,
$$ \|D \phi_{{\mathbb R}^3}\|_{L^2(\cup B_i)} \le C' \lambda^{1+ \frac{1}{r}} ||D\varphi||_{L^s({\mathbb R}^3)}. $$
Note that we can choose any $s >2$, this lower bound coming from the requirement $\frac{1}{r} + \frac{1}{s} = \frac{1}{2}$. Introducing $p$ such that $s= p'$, we find that for any $p < 2$,
\begin{equation} \label{bound_Dphi_R3}
\|D \phi_{{\mathbb R}^3}\|_{L^2(\cup B_i)} \le C' \lambda^{\frac{1}{2} + \frac{1}{p}} ||D\varphi||_{L^{p'}({\mathbb R}^3)}.
\end{equation}
The treatment of the second term at the r.h.s. of \eqref{bound_psi_2n} is more delicate. We write, see \eqref{def_phi_in}:
\begin{align} \label{decompo_phi_jn}
D\phi_{j,n}(x) & = DV[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big) = \mathcal{V}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big) \: + \: \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)
\end{align}
where $\: \mathcal{V}[S] = D\Big( \frac{5}{2} S : (x \otimes x) \frac{x}{|x|^5} \Big)$, $\: \mathcal{W}[S] = D \Big( \frac{Sx}{|x|^5} - \frac{5}{2} (S : x \otimes x) \frac{x}{|x|^7} \Big)$.
\noindent
We have:
\begin{align*}
& \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx
\: \le \: C \, r_n^{10} \, \sum_i \int_{B_i} \Big( \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} |D\varphi(x_j)| \, |x-x_j|^{-5} \Big)^2 dx
\end{align*}
For all $i$, for all $j \in \mathcal{G}_\eta$ with $j \neq i$, and all $(x,y) \in B_i \times B(x_j, \frac{\eta}{4} n^{-\frac{1}{3}})$, we have for some absolute constants $c,c' > 0$:
$$|x-x_j| \: \ge \: c \, |x - y| \ge c' \, \eta n^{-\frac{1}{3}}. $$
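These inequalities follow from the definition of $\mathcal{G}_\eta$ and the lower bound $\eta \ge c\lambda^{1/3}$ of Proposition \ref{prop_psi}; the constants below are not optimized. Indeed, since $j \in \mathcal{G}_\eta$ and $j \neq i$, we have $|x_i - x_j| \ge \eta n^{-\frac13}$, so that for $x \in B_i$,
$$ |x - x_j| \ge |x_i - x_j| - r_n \ge \eta n^{-\frac13} - r_n \ge \tfrac12 \, \eta n^{-\frac13}, $$
provided the constant in the lower bound on $\eta$ is chosen so that $r_n = \big(\tfrac{3\lambda}{4\pi}\big)^{1/3} n^{-\frac13} \le \tfrac12 \eta n^{-\frac13}$. Then, for $y \in B(x_j, \tfrac{\eta}{4} n^{-\frac13})$,
$$ |x - y| \le |x - x_j| + \tfrac{\eta}{4} n^{-\frac13} \le \tfrac32 |x - x_j|, $$
which gives the displayed chain with, e.g., $c = \tfrac23$ and $c' = \tfrac16$.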
Denoting $B_j^* = B(x_j,\frac{\eta}{4} n^{-\frac{1}{3}})$, we deduce
\begin{align*}
& \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx \\
& \le C \, r_n^{10} \sum_i \int_{B_i}
\Big( \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \frac{1}{|B_j^*|} \int_{B_j^*} |x - y|^{-5} 1_{\{|x-y| > c' \eta n^{-\frac{1}{3}}\}}(x-y) |D\varphi(x_j)| dy \Big)^2 dx\\
& \le C' \, n^2 \frac{r_n^{10}}{\eta^6} \int_{{\mathbb R}^3} 1_{\cup B_i}(x) \Big( \int_{{\mathbb R}^3} |x - y|^{-5} 1_{\{|x-y| > c' \eta n^{-\frac{1}{3}}\}}(x-y) \sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*}(y) dy \Big)^2 dx
\end{align*}
Using H\"older and Young's convolution inequalities, we find that for all $r,s$ with $\frac{1}{r} + \frac{1}{s} = 1$,
\begin{align*}
& \int_{{\mathbb R}^3} 1_{\cup B_i}(x) \Big( \int_{{\mathbb R}^3} |x - y|^{-5} 1_{\{|x-y| > c' \eta n^{-\frac{1}{3}}\}}(x-y) \sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*}(y) dy \Big)^2 dx \\
& \le ||1_{\cup B_i}||_{L^r} \, || \big(|x|^{-5} 1_{\{|x| > c' \eta n^{-\frac{1}{3}}\}}\big) \star \sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*} ||_{L^{2s}}^2 \\
& \le ||1_{\cup B_i}||_{L^r} \, |||x|^{-5} 1_{\{|x| > c' \eta n^{-\frac{1}{3}}\}}||_{L^1}^2 \, ||\sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*} ||_{L^{2s}}^2 \\
& \le C \lambda^{\frac{1}{r}} \, (\eta n^{-\frac{1}{3}})^{-4} \, \Big( \sum_j |D\varphi(x_j)|^{2s} \eta^3 n^{-1} \Big)^{\frac{1}{s}}
\end{align*}
Note that, by \eqref{A0}, $\frac{1}{n} \sum_j |D\varphi(x_j)|^t \rightarrow \int_{{\mathbb R}^3} |D\varphi|^t \rho$ as $n \rightarrow +\infty$. We end up with
\begin{align*}
& \limsup_{n \to \infty} \, \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx
\le C \, \lambda^{\frac{10}{3} + \frac{1}{r}} \, \eta^{-10+\frac{3}{s}} ||D\varphi||^2_{L^{2s}(\mathcal{O})}.
\end{align*}
We can take any $s > 1$, which yields by setting $p$ such that $p'=2s$: for any $p < 2$
\begin{equation} \label{bound_mW}
\limsup_{n \to \infty} \, \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{W}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx
\le C \, \lambda^{\frac{10}{3} + \frac{2-p}{p}} \, \eta^{-4-\frac{6}{p}} ||D\varphi||^2_{L^{2s}(\mathcal{O})}.
\end{equation}
To treat the first term in the decomposition {\mathbb E}qref{decompo_phi_jn}, we write
$$\mathcal{V}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big) = r_n^3 \, \mathcal{M}(x-x_j) \, D\varphi(x_j) $$
for $\mathcal{M}$ a matrix-valued Calderon-Zygmund operator.
We use that for all $i$ and all $j \neq i$, $j \in \mathcal{G}_\eta$ we have for all $(x,y) \in B_i \times B_j^\ast$
\begin{align*}
|\mathcal{M}(x-x_j) - \mathcal{M}(x-y)| \leq C \eta n^{-1/3} |x- y|^{-4}
\end{align*}
Thus, by similar manipulations as before
\begin{align*}
& \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{V}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx \\
& \leq C r_n^6 \sum_i \int_{B_i}
\Big( \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \frac{1}{|B_j^*|} \int_{B_j^*} \mathcal{M}(x-y) 1_{\{|x-y| > c \eta n^{-\frac{1}{3}}\}}(x-y) |D\varphi(x_j)| dy \Big)^2 dx\\
& + C \frac{\eta^2}{n^{2/3}} r_n^6 \sum_i \int_{B_i}
\Big( \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \frac{1}{|B_j^*|} \int_{B_j^*} |x- y|^{-4} 1_{\{|x-y| > c \eta n^{-\frac{1}{3}}\}}(x-y) |D\varphi(x_j)| dy \Big)^2 dx \\
& \leq C n^2 \frac{r_n^6}{\eta^6} \, ||1_{\cup B_i}||_{L^r} \, || \big(\mathcal{M}(x) 1_{\{|x| > \eta n^{-\frac{1}{3}}\}}\big) \star \sum_{1 \le j \le n} \, |D\varphi(x_j)| 1_{B_j^*} ||_{L^{2s}}^2 \\
&+C n^2 \frac{\eta^2}{n^{2/3}} \frac{r_n^6}{\eta^6} ||1_{\cup B_i}||_{L^r} \, |||x|^{-4} 1_{\{|x| > c \eta n^{-\frac{1}{3}}\}}||_{L^1}^2 \, ||\sum_{1 \le j \le n} |D\varphi(x_j)| 1_{B_j^*} ||_{L^{2s}}^2
\end{align*}
As seen in \cite[Lemma 2.4]{DGV_MH}, the kernel $\mathcal{M}(x) 1_{\{|x| > c \eta n^{-\frac{1}{3}}\}}$ defines a singular integral that is continuous over $L^t$ for any $1 < t < \infty$, with operator norm bounded independently of the value $\eta n^{-\frac{1}{3}}$ (by scaling considerations). Applying this continuity property with $t=2s$, writing as before $p'=2s$, we get for all $p < 2$,
\begin{align*}
& \limsup_{n \to \infty} \sum_i \int_{B_i} \Big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} \mathcal{V}[D\varphi(x_j)]\Big(\frac{x-x_j}{r_n}\Big)\Big|^2 dx \le C \lambda^{2+ \frac{2-p}{p}} \eta^{-\frac{6}{p}} ||D\varphi||^2_{L^{2s}(\mathcal{O})}.
\end{align*}
Combining this last inequality with \eqref{decompo_phi_jn} and \eqref{bound_mW}, we finally get: for all $p < 2$,
\begin{align} \label{bound_sum_phi_jn}
& \limsup_{n \to \infty} \Big(\sum_i \int_{B_i} \big| \sum_{\substack{j \neq i, \\ j \in \mathcal{G}_\eta}} D\phi_{j,n}\big|^2 \Big)^{1/2} \le C' \lambda^{1+ \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} ||D\varphi||_{L^{p'}(\mathcal{O})}.
\end{align}
Finally, if we inject \eqref{bound_Dphi_R3} and \eqref{bound_sum_phi_jn} in \eqref{bound_psi_2n}, we obtain that for any $p < 2$,
\begin{align} \label{bound_psi_1n_final}
\limsup_{n \to \infty} ||D \tilde \psi^1_n||_{L^2(\cup_i B_i)} & \le C \lambda^{1+ \frac{2-p}{2p}} \, \eta^{-\frac{3}{p}} ||D\varphi||_{L^{p'}({\mathbb R}^3)}.
\end{align}
Here we used $\eta \leq 1$.
The desired estimate \eqref{bound.psi} follows from collecting \eqref{lim_psi_2n}, \eqref{bound_psi_1n_final} and \eqref{bound_psi_3n}.
This concludes the proof of Proposition \ref{prop_psi}.
\section{Discussion of assumption \texorpdfstring{\eqref{B1}}{(B1)}} \label{sec:B1}
In the light of the recent paper \cite{Duerinckx20}, we will show how condition \eqref{B1} can be replaced by the following assumption:
\begin{align} \label{B1'} \tag{B1'}
\forall i, \quad \rho_i := \inf_{ j \neq i} r_n^{-1} |x_i - x_j| - 2 > 0, \qquad \exists s > 1, \quad \limsup_{n \to \infty} \frac{1}{n} \sum_i \rho_i^{-s} < \infty.
\end{align}
We will argue that Theorem \ref{main} remains valid with $p_{\min}$ depending in addition on the power $s$ from \eqref{B1'}. More precisely, $p_{\min}$ in Remark \ref{rem:exponents} needs to be replaced by
\begin{align} \label{p_min}
p_{\min} = \min\left\{ 1 + \frac{\alpha}{6+\alpha}, 1 + \frac{s-1}{s+1}, \frac{3}{2}\right\}.
\end{align}
There are only two instances where we have used assumption \eqref{B1}, which are both contained in the proof of Proposition \ref{prop_estimate_gphi2n}: one is to prove the estimate \eqref{L2_estimate_Psi} for the solution $\psi$ to the system
\eqref{Sto_Psi}--\eqref{Sto2_Psi}, and the other one is to prove the analogous estimate \eqref{control_Ug}. The proof has been based on the construction of suitable functions $\Psi_i \in \dot H^1_0(B(x_i,M/2 \, r_n))$ with $D(\Psi_i) = D(\tilde \psi)$ in $B_i$. If we drop assumption \eqref{B1}, we can still replace the balls $B(x_i,M/2 \, r_n)$ by disjoint neighborhoods $B_i^+$ satisfying the assumptions of \cite[section 3.1]{Duerinckx20} (with $I_n, I_n^+$ replaced by $B_i, B_i^{+}$). By \cite[Lemma 3.3]{Duerinckx20}, it then follows that for all $r > 2$ and all $q \ge \max(2,\frac{6r}{5r-6})$, there exists $\Psi_i \in H^1_0(B_i^+)$, such that
$$ \| {\nabla} \Psi_i\|_{L^2(B_i^+)} \le C_{r} \, \rho_i^{\frac{2}{r} - \frac{3}{2}} r_n^{\frac{3}{2} - \frac{3}{q}} \| D(\tilde \psi)\|_{L^q(B_i)}. $$
Setting $\Psi = \sum_i \Psi_i$, we find that
\begin{equation} \label{estim_naPsi}
\|{\nabla} \Psi\|^2_{L^2} \leq C_{r} \sum_i \rho_i^{\frac{4}{r} - 3} r_n^{3-\frac{6}{q}} \| D(\tilde \psi)\|_{L^q(B_i)}^2 \leq C_{r} \lambda^{\frac{q-2}{q}} \left( \frac{1}{n} \sum_i
\left( \rho_i^{\frac{4}{r} - 3}\right)^\frac{q}{q-2} \right)^{\frac{q-2}{q}} \|D(\tilde \psi)\|_{L^q(\cup B_i)}^2
\end{equation}
Note that, for $s$ the exponent in \eqref{B1'} and any $q > 3$ with $\frac{q}{q-2} < s$, taking $r$ close enough to $2$ one can ensure that $q \ge \max(2,\frac{6r}{5r-6})$ and that the first factor on the right-hand side of \eqref{estim_naPsi} is finite. In conclusion, this argument shows that Proposition \ref{prop_estimate_gphi2n} remains valid under assumption \eqref{B1'} with the estimate \eqref{improvement} replaced by
\begin{align} \label{improvement'}
\left|\int_{{\mathbb R}^3} g \psi \right| \leq C_{g,q} \lambda^{\frac 1 2 + \frac{q - 2}{2q}} \| D \tilde{\psi} \|_{L^{q}(\cup B_i)}.
\end{align}
It is not difficult to check that this change of the estimate still allows us to conclude the argument in Section \ref{sec_rem} along the same lines as before. Indeed, whenever we used \eqref{improvement}, we also applied Hölder's estimate to replace $\| D \tilde{\psi} \|_{L^{2}(\cup B_i)}$ by a higher Lebesgue norm in order to gain powers in $\lambda$. One could say that the modified estimate \eqref{improvement'} has just partly anticipated Hölder's estimate. The additional restrictions on $q$ ($q > 3$, $\frac{q}{q-2} < s$) are the reason for the additional constraints in $p_{\min}$ in \eqref{p_min}. The estimates in Section \ref{sec_rem} where we use Proposition \ref{prop_estimate_gphi2n} concern the terms $\tilde \psi_n^i$, $i=1,2,3$.
First, in the estimate for $\tilde \psi_n^2$ corresponding to \eqref{psi_2n}, we can just use \eqref{improvement'} with $q= \infty$.
Second, for $\tilde \psi_n^3$, previously estimated in \eqref{bound_psi_3n},
we use \eqref{improvement'} with $q = p'$.
Finally, for $\tilde \psi_n^1$, if one carefully follows the estimates in Section \ref{sec_rem}, one observes that \eqref{improvement'} with $q= p'$ is again sufficient.
\section{Discussion of assumption \texorpdfstring{\eqref{B2}}{(B2)}} \label{sec:prob}
\subsection{Stationary ergodic processes}
Let $\Phi^\delta = \{y_i\}_i \subset {\mathbb R}^3$ be a stationary ergodic point process on ${\mathbb R}^3$ with intensity $\delta$ and hard-core radius $R$, i.e., $|y_i - y_j| \geq R$ for all $i \neq j$. An example of such a process is a hard-core Poisson point process, which is obtained from a Poisson point process upon deleting all points with a neighboring point closer than $R$. We refer to \cite[Section 8.1]{MR1950431} for the construction and properties of such processes.
\noindent
Assume that $\mathcal{O}$ is convex and contains the origin. For $\varepsilon > 0$, we consider the set
\begin{align*}
\varepsilon \Phi^\delta \cap \mathcal{O} =: \{ x^\varepsilon_i, i =1, \dots, n_\varepsilon\}.
\end{align*}
Let $r < R/2$, denote $r_\varepsilon = \varepsilon r$, and consider $B_i = \overline{ B(x^\varepsilon_i,r_\varepsilon)}$.
The volume fraction of the particles depends on $\varepsilon$ in this case.
However, it is not difficult to generalize our result to the case when the volume fraction converges to $\lambda$ and
this holds in the setting under consideration since
\begin{align*}
\frac{4 \pi}{3} n_\varepsilon r_\varepsilon^3 \to \frac{4 \pi}{3} \delta r^3 =: \lambda(r,\delta) \quad \text{almost surely as } \varepsilon \to 0.
\end{align*}
Clearly, $\lambda(r,\delta) \to 0$, both if $r \to 0$ and if $\delta \to 0$.
However, the process behaves fundamentally differently in those cases. Indeed, if we take $r \to 0$ (for $\delta$ and $R$ fixed), we find that condition \eqref{A1}, which implies \eqref{B2}, is satisfied
almost surely for $\varepsilon$ sufficiently small as
\begin{align*}
n_\varepsilon^{1/3} |x_i^\varepsilon - x_j^\varepsilon| \geq n_\varepsilon^{1/3} \varepsilon R \to \delta^{1/3} R.
\end{align*}
\noindent
In the case when we fix $r$ and consider $\delta \to 0$ (e.g. by randomly deleting points from a process), \eqref{A1} is in general not satisfied.
We want to characterize processes for which \eqref{B2} is still fulfilled almost surely as $\varepsilon \to 0$.
Indeed, using again the relation between $\varepsilon$ and $n_\varepsilon$, it suffices to show
\begin{align} \label{eq:B2.prob}
\forall \eta > 0, \quad \#\{i, \: \exists j \neq i, \: |x_i - x_j| \le \eta \varepsilon \} \le C \eta^\alpha \delta^{1 + \frac \alpha 3} \varepsilon^{-3}.
\end{align}
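Let us sketch why \eqref{eq:B2.prob} implies \eqref{B2} in this setting; this is only the bookkeeping between $\varepsilon$, $n_\varepsilon$ and $\delta$ already used above. Since $n_\varepsilon \sim \delta \varepsilon^{-3} |\mathcal{O}| = \delta \varepsilon^{-3}$ almost surely, we have $\eta n_\varepsilon^{-\frac13} \sim (\eta \delta^{-\frac13}) \, \varepsilon$, so that applying \eqref{eq:B2.prob} with $\eta \delta^{-\frac13}$ in place of $\eta$ yields
$$ \#\{i, \: \exists j \neq i, \: |x_i - x_j| \le \eta n_\varepsilon^{-\frac13}\} \le C (\eta \delta^{-\frac13})^\alpha \delta^{1+\frac\alpha3} \varepsilon^{-3} = C \eta^\alpha \delta \varepsilon^{-3} \sim C \eta^\alpha n_\varepsilon, $$
which is \eqref{B2} up to adjusting the constant for small $\varepsilon$.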
Let $\Phi^\delta_\eta$ be the process obtained from $\Phi^\delta$ by deleting those points $y$ with $B(y,\eta) \cap \Phi^\delta = \{y\}$.
Then, the process $\Phi^\delta_\eta$ is again stationary ergodic (since deleting those points commutes with translations\footnote{In detail: let $\mathcal E_\eta$ be the operator that erases all points without a neighboring point closer than $\eta$, and let $T_x$ denote a translation by $x$. Now, let $\mu$ be the measure for the original process $\Phi^\delta$. Then the measure for $\Phi^\delta_\eta$
is given by $\mu_\eta = \mu \circ \mathcal E_\eta^{-1}$. Since $\mathcal E_\eta T_x = T_x \mathcal E_\eta$ (for all $x$, in particular for $T_{-x} = T^{-1}_x$),
we have for any measurable set $A$ that $T_x \mathcal E_\eta^{-1} A = \mathcal E_\eta^{-1} T_x A$.
This immediately implies that the new process inherits stationarity and ergodicity.}), so that almost surely as $\varepsilon \to 0$
\begin{align*}
\varepsilon^3 \#\{i, \: \exists j \neq i, \: |x_i - x_j| \le \eta \varepsilon \} \to {\mathbb E}[\# \Phi^\delta_\eta \cap Q],
\end{align*}
where $Q = [0,1]^3$. Clearly,
$$ {\mathbb E}[\# \Phi^\delta_\eta \cap Q] \le {\mathbb E} \sum_{y \in \Phi^\delta \cap Q} \sum_{y' \neq y \in \Phi^\delta} 1_{B(0,\eta)}(y' - y). $$
We can express this expectation in terms of the 2-point correlation function $\rho^\delta_2(y,y')$ of $\Phi^\delta$ yielding
$$ {\mathbb E}[\# \Phi^\delta_\eta \cap Q] \le \int_{{\mathbb R}^6} 1_Q(y) 1_{B(0,\eta)}(y'-y) \rho^\delta_2(y,y') \, \mathrm{d} y \, \mathrm{d} y'. $$
Hence, \eqref{eq:B2.prob} and therefore also \eqref{B2} is in particular satisfied with $\alpha = 3$ if $\rho^\delta_2 \leq C \delta^2$,
which is the case for a (hard-core) Poisson point process.
Moreover, we observe that \eqref{B2} with $\alpha < 3$ is satisfied even for processes
that favor clustering: \eqref{eq:B2.prob} holds if $\rho_2^\delta(y,y') \leq C \delta^{1 + \frac{\alpha}3} |y - y'|^{\alpha - 3}$. This means that $\rho_2^\delta$ can be quite singular at the diagonal and of much higher intensity than $\delta^2$. Examples of such clustering point processes are Neyman-Scott processes (see e.g. \cite[Section 6.3]{MR1950431}).
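To make the last claim explicit, here is the computation, under the stated bound on $\rho_2^\delta$ (which is an assumption on the process):
$$ \int_{{\mathbb R}^6} 1_Q(y) 1_{B(0,\eta)}(y'-y) \rho^\delta_2(y,y') \, \mathrm{d} y \, \mathrm{d} y'
\le C \delta^{1+\frac\alpha3} \int_Q \int_{B(0,\eta)} |z|^{\alpha - 3} \, \mathrm{d} z \, \mathrm{d} y
= \frac{4\pi C}{\alpha} \, \delta^{1+\frac\alpha3} \, \eta^\alpha, $$
which is precisely the bound required for \eqref{eq:B2.prob}; for $\rho_2^\delta \le C\delta^2$ one recovers the case $\alpha = 3$ with the bound $\frac{4\pi}{3} C \delta^2 \eta^3$.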
\subsection{Independent and identically distributed particles}
Focusing on assumption \eqref{B2}, we neglect the non-overlapping condition \eqref{B1} in the following, which is not satisfied for i.i.d. particles. As in the case of hard-core Poisson point processes, it is nevertheless possible to construct a process that satisfies \eqref{B1} by deleting points which have too close a neighbor. As those points will be few for small volume fractions, this will not affect the discussion of \eqref{B2} qualitatively.
\noindent
We will show the following result: for $x_1, \dots, x_n$ i.i.d. with a law $\rho \in L^\infty$ ($\rho \geq 0$, $\int \rho = 1$), for all $\eta > 0$:
\begin{align} \label{eq:B2.iid}
n^{-1} \# \{ i, \: \exists j \neq i, \: |x_i - x_j| \le \eta n^{-1/3} \} \: \xrightarrow[n \rightarrow +\infty]{} \: 1 - \int_{{\mathbb R}^3} \rho(x) e^{-\rho(x) \frac{4\pi}{3} \eta^3} dx
\end{align}
in probability. This implies \eqref{B2} with $\alpha = 3$ in probability. We first set
$${\eta}_n := \eta n^{-1/3}, \quad B^n_j := B(x_j, \eta_n), \quad Y^n_{i} := \prod_{j\neq i} 1_{(B^n_j)^c}(x_i).$$
Note that the random variables $Y^n_{i}$ are identically distributed, but not independent. Note also that $ \displaystyle n^{-1} \# \{ i, \: \exists j \neq i, \: |x_i - x_j| \le \eta_n \} = \frac{1}{n} \sum_{i=1}^n (1-Y_i^n)$. Hence, we need to show that $\frac{1}{n} \sum_{i=1}^n Y_i^n$ converges to $\displaystyle I_{\rho, \eta} := \int_{{\mathbb R}^3} \rho(x) e^{-\rho(x) \frac{4\pi}{3} \eta^3} dx $ in probability.
\noindent {\em Step 1}. We show that $ \mathbb{E} Y_1^n \: \xrightarrow[n \rightarrow +\infty]{} \: I_{\rho,\eta}$. Indeed, by independence,
\begin{align*}
\mathbb{E} Y_1^n & = \int_{{\mathbb R}^3} \Big( \int_{{\mathbb R}^3} 1_{B(y,\eta_n)^c}(x)\, \rho(y)\, dy \Big)^{n-1} \rho(x)\, dx = \int_{{\mathbb R}^3} \Big( 1- \int_{B(x, \eta_n)} \rho(y)\, dy \Big)^{n-1} \rho(x)\, dx.
\end{align*}
At each Lebesgue point $x$ of $\rho$, one has $\frac{1}{|B(x, \eta_n)|} \int_{B(x, \eta_n)} \rho(y)\, dy \rightarrow \rho(x)$, so that
$$ \Big( 1- \int_{B(x, \eta_n)} \rho(y)\, dy \Big)^{n-1} \rightarrow e^{-\rho(x) \frac{4\pi}{3} \eta^3} \quad \text{for a.e. $x$,} $$
and the result follows by the dominated convergence theorem.
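(Spelled out: since $|B(x,\eta_n)| = \frac{4\pi}{3}\eta_n^3 = \frac{4\pi}{3}\frac{\eta^3}{n}$, we have
$$ \Big( 1- \int_{B(x, \eta_n)} \rho \Big)^{n-1} = \Big( 1- \frac{4\pi \eta^3}{3n}\, \Xint-_{B(x, \eta_n)} \rho \Big)^{n-1} \longrightarrow e^{-\rho(x) \frac{4\pi}{3} \eta^3} $$
at every Lebesgue point $x$, using that $(1-\frac{a_n}{n})^{n-1} \to e^{-a}$ whenever $a_n \to a$.)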
\noindent {\em Step 2}. We show that
$$\text{var} \Big( \frac{1}{n}\sum_{i=1}^n Y^n_i \Big) \rightarrow 0 \quad \text{as} \: n \rightarrow +\infty.$$
By Markov's inequality (applied to $|\frac{1}{n}\sum_{i} Y^n_i - \mathbb{E} Y_1^n|^2$) and Step 1, this implies \eqref{eq:B2.iid}.
\noindent
We have
\begin{align*}
\text{var}\Big( \frac{1}{n} \sum_{i=1}^n Y^n_i \Big) & = \frac{1}{n}\,\text{var}\, Y^n_1 +
\frac{n(n-1)}{n^2}\, \text{Cov}(Y_1^n, Y_2^n) = \text{Cov}(Y_1^n, Y_2^n) + O\Big(\frac{1}{n}\Big),
\end{align*}
using that $0 \le Y_1^n \le 1$. It remains to show that the covariance goes to zero. Using the independence of the $x_i$'s, we have the explicit formula
\begin{align*}
\mathbb{E} [Y_1^n Y_2^n] = \int_{{\mathbb R}^6} \Big( \int_{{\mathbb R}^3} 1_{|x-x_1|\ge \eta_n} 1_{|x-x_2|\ge \eta_n}\, \rho(x)\, dx \Big)^{n-2} 1_{|x_1-x_2|\ge \eta_n}\, \rho(x_1) \rho(x_2)\, dx_1\, dx_2.
\end{align*}
We have
\begin{align*}
&\Big( \int_{{\mathbb R}^3} 1_{|x-x_1|\ge \eta_n} 1_{|x-x_2|\ge \eta_n}\, \rho(x)\, dx \Big)^{n-2} \\
= & \Big( 1 - \int_{B(x_1,\eta_n)} \!\rho - \int_{B(x_2,\eta_n)}\!\rho + \Big(\int_{B(x_1,\eta_n) \cap B(x_2,\eta_n)} \!\! \rho\Big) \, 1_{|x_1-x_2|\le 2\eta_n}\Big)^{n-2}\\
= & e^{-\frac{4\pi}{3}\eta^3\Xint-_{B(x_1,\eta_n)} \rho}\, e^{-\frac{4\pi}{3}\eta^3\Xint-_{B(x_2,\eta_n)} \rho}\, e^{R_n(x_1,x_2)}, \quad |R_n(x_1,x_2)|\le C 1_{|x_1-x_2|\le 2\eta_n} + C n^{-1}.
\end{align*}
This quantity converges, for a.e. $(x_1,x_2)$, to $e^{-\frac{4\pi}{3}\eta^3\rho(x_1)}\, e^{-\frac{4\pi}{3}\eta^3\rho(x_2)}$, and it follows by the dominated convergence theorem that
\begin{align*}
\mathbb{E} [Y_1^n Y_2^n] \rightarrow \Big(\int_{{\mathbb R}^3} e^{-\frac{4\pi}{3}\eta^3\rho(x_1)}\, \rho(x_1)\, dx_1 \Big) \Big(\int_{{\mathbb R}^3} e^{-\frac{4\pi}{3}\eta^3\rho(x_2)}\,\rho(x_2)\, dx_2 \Big) = \lim_{n \rightarrow +\infty} (\mathbb{E} Y_1^n)^2,
\end{align*}
which yields the result.
\begin{thebibliography}{10}
\bibitem{AlBr}
Y.~Almog and H.~Brenner.
\newblock Global homogenization of a dilute suspension of spheres.
\newblock Preprint arXiv:2003.01480, 2020.
\bibitem{BaGr1}
G.~Batchelor and J.~Green.
\newblock The determination of the bulk stress in a suspension of spherical
particles at order $c^2$.
\newblock {\em J. Fluid Mech.}, 56:401--427, 1972.
\bibitem{MR1950431}
D.~J. Daley and D.~Vere-Jones.
\newblock {\em An introduction to the theory of point processes. {V}ol. {I}}.
\newblock Probability and its Applications (New York). Springer-Verlag, New
York, second edition, 2003.
\newblock Elementary theory and methods.
\bibitem{DuerinckxGloria19}
M. Duerinckx and A. Gloria.
\newblock Corrector equations in fluid mechanics: Effective viscosity of colloidal suspensions.
\newblock Preprint arXiv:1909.09625, 2019.
\bibitem{DuerinckxGloria20}
M. Duerinckx and A. Gloria.
\newblock On Einstein's effective viscosity formula.
\newblock Preprint arXiv:2008.03837, 2020.
\bibitem{Duerinckx20}
M. Duerinckx.
\newblock Effective viscosity of random suspensions without uniform separation.
\newblock Preprint arXiv:2008.13188, 2020.
\bibitem{Ein}
A.~Einstein.
\newblock Eine neue Bestimmung der Molek\"uldimensionen.
\newblock {\em Ann. Physik.}, 19:289--306, 1906.
\bibitem{Galdi}
G.~Galdi.
\newblock {\em An introduction to the mathematical theory of the Navier-Stokes equations: Steady-state problems.}
\newblock Springer Science \& Business Media, 2011.
\bibitem{DGV}
D.~G\'{e}rard-Varet.
\newblock A simple justification of effective models for conducting or fluid media with dilute spherical inclusions.
\newblock Preprint arXiv:1909.11931, 2019.
\bibitem{Gerard-Varet20}
D.~G\'{e}rard-Varet.
\newblock Derivation of Batchelor-Green formula for random suspensions.
\newblock Preprint arXiv:2008.06324, 2020.
\bibitem{DGV_MH}
D.~G\'{e}rard-Varet and M.~Hillairet.
\newblock Analysis of the viscosity of dilute suspensions beyond {E}instein's
formula.
\newblock {\em Arch. Rat. Mech. Anal.}, 238:1349--1411, 2020.
\bibitem{GerMec20}
D.~G\'{e}rard-Varet and A.~Mecherbet.
\newblock On the correction to Einstein's formula for the effective viscosity.
\newblock Preprint arXiv:2004.05601, 2020.
\bibitem{Guaz}
E. Guazzelli and O. Pouliquen.
\newblock Rheology of dense granular suspensions.
\newblock {\em J. Fluid Mech.}, 852, 2018.
\bibitem{MR2982744}
B.~M. Haines and A.~L. Mazzucato.
\newblock A proof of {E}instein's effective viscosity for a dilute suspension
of spheres.
\newblock {\em SIAM J. Math. Anal.}, 44(3):2120--2145, 2012.
\bibitem{HiWu}
M.~Hillairet and D.~Wu.
\newblock Effective viscosity of a polydispersed suspension.
\newblock {\em J. Math. Pures Appl.}, 138:413--447, 2020.
\bibitem{Hof2}
R.~M. H\"{o}fer.
\newblock Convergence of the method of reflections for particle suspensions in Stokes flows.
\newblock Preprint arXiv:1912.04388, 2019.
\bibitem{HoeferSchubert20}
R.~M. H\"{o}fer and R.~Schubert.
\newblock The influence of Einstein's effective viscosity on sedimentation at very small particle volume fraction.
\newblock Preprint arXiv:2008.04813, 2020.
\bibitem{MR1329546}
V.~V. Jikov, S.~M. Kozlov, and O.~A. Ole\u{\i}nik.
\newblock {\em Homogenization of differential operators and integral
functionals}.
\newblock Springer-Verlag, Berlin, 1994.
\newblock Translated from the Russian by G. A. Yosifian.
\bibitem{MR813657}
T.~L\'{e}vy and E.~S\'{a}nchez-Palencia.
\newblock Einstein-like approximation for homogenization with small
concentration. {II}. {N}avier-{S}tokes equation.
\newblock {\em Nonlinear Anal.}, 9(11):1255--1268, 1985.
\bibitem{NiSc}
B.~Niethammer and R.~Schubert.
\newblock A local version of {E}instein's formula for the effective viscosity
of suspensions.
\newblock {\em SIAM J. Math. Anal.}, 52(3):2561--2591, 2020.
\bibitem{MR813656}
E.~S\'{a}nchez-Palencia.
\newblock Einstein-like approximation for homogenization with small
concentration. {I}. {E}lliptic problems.
\newblock {\em Nonlinear Anal.}, 9(11):1243--1254, 1985.
\end{thebibliography}
\end{document}
\begin{document}
\title{On Light Spanners, Low-treewidth Embeddings and Efficient Traversing in Minor-free Graphs}
\begin{abstract}
Understanding the structure of minor-free metrics, namely shortest path metrics obtained
over a weighted graph excluding a fixed minor, has been an important research direction
since the fundamental work of Robertson and Seymour.
A fundamental idea that helps both to understand the structural properties of these metrics
and to obtain strong algorithmic results is to construct a ``small-complexity''
graph that approximately preserves distances between pairs of points of the metric. We show the following two structural results for minor-free metrics:
\begin{enumerate}
\item Construction of a \emph{light} subset spanner. Given a subset of vertices called terminals, and a parameter $\varepsilon$, in polynomial time we construct a subgraph that
preserves all pairwise distances between terminals up to a multiplicative $1+\varepsilon$ factor, of total weight at most $O_{\varepsilon}(1)$ times the weight of the minimal Steiner tree spanning the terminals.
\item Construction of a stochastic metric embedding into low-treewidth graphs with expected additive distortion $\varepsilon D$.
Namely, given a minor-free graph $G=(V,E,w)$ of diameter $D$, and a parameter $\varepsilon$, we construct a distribution $\mathcal{D}$ over dominating metric embeddings into treewidth-$O_{\varepsilon}(\log n)$ graphs such that $\forall u,v\in V$, $\mathbb{E}_{f\sim\mathcal{D}}[d_H(f(u),f(v))]\le d_G(u,v)+\varepsilon D$.
\end{enumerate}
One of our important technical contributions is a novel framework that allows us to reduce \emph{both problems} to problems on simpler graphs of \emph{bounded diameter} that we solve using a new decomposition. Our results have the following algorithmic consequences: (1) the first efficient approximation scheme for subset TSP in minor-free metrics; (2) the first approximation scheme for vehicle routing with bounded capacity in minor-free metrics; (3) the first efficient approximation scheme for vehicle routing with bounded capacity on bounded genus metrics. En route to the latter result, we design the first FPT approximation scheme for vehicle routing with
bounded capacity on bounded treewidth graphs (parameterized by the treewidth).
\end{abstract}
{\small \setcounter{tocdepth}{2} \tableofcontents}
\pagenumbering{arabic}
\section{Introduction}
Fundamental routing problems such as the Traveling Salesman Problem (TSP) and the Vehicle Routing Problem have been widely studied since the 50s.
Given a metric space, the goal is to find a minimum-weight collection of tours (only one for TSP) so as to meet a prescribed demand at some points of
the metric space. The research on these problems, from both practical and theoretical perspectives, has been part of the agenda of the operations research
and algorithm-design communities for many decades (see e.g.: \cite{HR85,AKTT97,CFN85,Salazar03,BMT13,ZTXL15,LN15,ZTXL16}).
Both problems have been the source of inspiration for many algorithmic breakthroughs and, quite frustratingly, remain good examples of the limits of the
power of our algorithmic methods.
Since both problems are APX-hard in general graphs \cite{PY93,AKTT97}
and since the best known approximation for TSP remains the 40-year old
$\frac{3}{2}$-approximation of Christofides \cite{Chr76}, it has been
a natural and successful research direction to focus on
\emph{structured} metric spaces. Initially, researchers focused on achieving polynomial-time approximation schemes (PTASs) for TSP
in planar-graphs~\cite{GKP95,AGKKW98} and Euclidean metrics~\cite{Arora97,Mitchell99}. Two themes emerged in the ensuing research: speed-ups and
generalization.
In the area of speed-ups, a long line of research on Euclidean TSP improved the running time $n^{O(1/\varepsilon)}$ of the initial algorithm by Arora to linear time~\cite{BartalG13}. In a parallel research thread, Klein \cite{Klein05,Klein08} gave the
first \emph{efficient PTAS}\footnote{A PTAS is an
\emph{efficient} PTAS (an EPTAS) if its running time is bounded by a
polynomial $n^c$ whose degree $c$ does not depend on $\varepsilon$.} for TSP in weighted planar graphs, a linear-time
algorithm.
In the area of generalization, a key question was whether these results
applied to more general (and more abstract) families of metrics. One
such generalization of Euclidean metrics is metrics of bounded
doubling dimension. Talwar~\cite{Tal04} gave a \emph{quasi}-polynomial-time approximation scheme (QPTAS) for this
problem which was then improved to an EPTAS~\cite{Gottlieb15}. In minor-free metrics, an important generalization of planar metrics, Grigni~\cite{Grigni00} gave a QPTAS for TSP which was recently improved to EPTAS by Borradaile et al.~\cite{BLW17}.
When the metric is that of a planar/minor-free graph, the problem of visiting
every vertex is not as natural as that of visiting a given subset of
vertices (the \emph{Steiner TSP} or \emph{subset TSP}) since the
latter cannot be reduced to the former without destroying the graph
structure. The latter problem turns out to be much harder than TSP in minor-free graphs, and
in fact no approximation scheme was known until the recent PTAS for subset TSP by Le~\cite{Le20}.
This immediately raises the question:
\begin{question}\label{question:subsetTSP}
Is there an EPTAS for subset TSP in minor-free graphs?
\end{question}
The purpose of this line of work is to understand which are the most general metrics
for which one can obtain approximation schemes for routing problems and, when this is the case,
how fast the approximation schemes can be made.
Toward this goal, minor-free metrics have been a testbed of choice for generalizing
the algorithmic techniques designed for planar or bounded-genus graphs. Indeed, while minor-free metrics
offer very structured decompositions, as shown by the celebrated work of Robertson and
Seymour~\cite{RS03}, Klein et al.~\cite{KPR93}, and Abraham et al.~\cite{AGGNT19}
(see also~\cite{FT03,Fil19padded}), they do not exhibit a strong topological structure.
Hence, various strong results for planar metrics, such as the efficient approximation schemes
for Steiner Tree~\cite{BKM09} or Subset TSP~\cite{Klein06}, are not known to exist in minor-free metrics.
\begin{wrapfigure}{r}{0.5\textwidth}
\resizebox{0.5\columnwidth}{!}{
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Space} & \textbf{Lightness} & \textbf{TSP runtime} & \textbf{Reference} \\ \hline
$(\mathbb{R}^{O(1)},\|\cdot\|_2)$ & $\varepsilon^{-O(1)}$ & $2^{\varepsilon^{-O(1)}}\cdot\tilde{O}(n)$ & \cite{RS98,LS19} \\ \hline
Doubling $O(1)$ & $\varepsilon^{-O(1)}$ & $2^{\varepsilon^{-O(1)}}\cdot\tilde{O}(n)$ & \cite{Gottlieb15,BLW19} \\ \hline
Planar & $O(1/\varepsilon)$ & $2^{O(1/\varepsilon^2)}\cdot O(n)$ & \cite{ADDJS93,Klein05} \\ \hline
$K_{O(1)}$ free & $\tilde{O}({1}/{\varepsilon^3})$ & $2^{\tilde{O}({1}/{\varepsilon^4})}\cdot n^{O(1)}$ & \cite{DHK11,BLW17} \\ \hline
\end{tabular}
}
\end{wrapfigure}
A common ingredient in designing efficient PTASs for TSP is the notion of a \emph{light spanner}: a weighted subgraph $H$
over the points of the original graph/metric space $G$ that preserves all pairwise distances up to some $1+\varepsilon$ multiplicative factor (i.e. $\forall u,v\in V(G),~d_H(u,v)\le (1+\varepsilon)\cdot d_G(u,v)$). The \emph{lightness} of the spanner $H$ is the ratio between the total weight of $H$ and that of the Minimum Spanning Tree (MST) of $G$. While significant progress has been made on understanding the structure of spanners (see the table), this is not the case for \emph{subset spanners}. A subset spanner $H$ w.r.t. a prescribed subset $K$ of vertices, called terminals, is a subgraph that preserves distances between terminals up to a $1+\varepsilon$ multiplicative factor (i.e. $\forall u,v\in K,~d_H(u,v)\le (1+\varepsilon)\cdot d_G(u,v)$). The \emph{lightness} of $H$ is the ratio between the weight of $H$ and the weight of a minimum Steiner tree\footnote{A Steiner tree is a connected subgraph containing all the terminals $K$. A minimum Steiner tree is a minimum-weight such subgraph; because cycles do not help in achieving connectivity, we can require that the subgraph be a tree.} w.r.t. $K$. While for light spanners the simple greedy algorithm is ``existentially optimal'' \cite{FS16}, in almost all settings no such ``universal'' algorithm is known for constructing light subset spanners. In planar graphs, Klein \cite{Klein06} constructed the first light subset spanner. Borradaile et al. \cite{BDT14} generalized Klein's construction to bounded-genus graphs. Unfortunately, generalizing these two results to minor-free metrics remained a major challenge, since both approaches relied heavily on topological arguments.
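For intuition only, the greedy algorithm referred to above (for ordinary spanners, not subset spanners) can be stated in a few lines. The sketch below assumes the NetworkX library, is purely illustrative, and is not the subset-spanner construction developed in this paper:
\begin{verbatim}
import networkx as nx

def greedy_spanner(G, eps):
    """Classic greedy (1+eps)-spanner: scan edges by non-decreasing weight
    and keep an edge only if the current spanner does not already provide
    a path of length at most (1+eps) times the edge weight."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for u, v, w in sorted(G.edges(data="weight"), key=lambda e: e[2]):
        try:
            d = nx.dijkstra_path_length(H, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > (1 + eps) * w:
            H.add_edge(u, v, weight=w)
    return H
\end{verbatim}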
Recently, Le \cite{Le20} gave the first polynomial-time algorithm for computing a subset spanner with lightness $\mathrm{poly}(\frac{1}{\varepsilon})\cdot\log|K|$ in $K_r$-minor-free graphs. However, the following question remains a fundamental open problem, often mentioned in the literature \cite{DHK11,BDT14,BLW17,Le20}.
\begin{question}\label{question:light-subset-spanners}
Does a subset spanner of lightness $\mathrm{poly}(\frac{1}{\varepsilon})$ exist in minor-free graphs?
\end{question}
A very related routing problem which is poorly understood even in structured metrics is the vehicle routing problem. Given a special vertex called the \emph{depot} and a
\emph{capacity} $Q$, the goal is to find a collection of subsets of the vertices of the graph each of size at most $Q+1$ such that (1) each vertex appears
in at least one subset, (2) each subset contains the depot, and (3) the sum of the lengths of the shortest tours visiting all the vertices of each subset is minimized.
This is a very classic routing problem, introduced in the late 50s by Dantzig and Ramser~\cite{dantzig1959truck}. While major progress
has been made on TSP during the 90s and 00s for planar and Euclidean metrics, the current understanding of vehicle routing is much less satisfactory.
In Euclidean space, the best known result is a QPTAS by Das and Mathieu \cite{DM15}, while the problem has been shown to be APX-hard for planar graphs
(in fact APX-hard for trees~\cite{Becker18})\footnote{More precisely, the problem where the demand at each vertex is arbitrary is known to be APX-hard
on trees.} unless
the capacity $Q$ is a fixed constant -- note that the problem remains NP-hard in that case too (see~\cite{AKTT97}).
Given the current success of delivery platforms, the problem with constant capacity is still of high importance from an operations research perspective.
Hence, Becker et al.~\cite{BKS17} have recently given a quasi-polynomial approximation scheme for planar graphs, which was subsequently improved
to a running time of $n^{(Q\varepsilon)^{-O(Q/\varepsilon)}}$~\cite{BKS19}. The next question is:
\begin{question}\label{question:vhr-eptas-planar}
Does the vehicle routing with bounded capacity problem admit an EPTAS in planar and bounded genus graphs?
\end{question}
Since the techniques in previous work~\cite{BKS17,BKS19} for the vehicle routing problem rely on topological arguments, they are not extensible to minor-free graphs. In fact, no nontrivial approximation scheme was known for this problem in minor-free graphs. We ask:
\begin{question}\label{question:vhr-qptas-minor}
Is it possible to design a QPTAS for the vehicle routing with bounded capacity problem in minor-free graphs?
\end{question}
The approach of Becker et al. (drawing on~\cite{EKM14}) is through \emph{metric embeddings},
similar to the celebrated work of Bartal~\cite{Bar96} and
Fakcharoenphol et al.~\cite{FRT04} who showed how to embed
any metric space into a simple tree-like structure.
Specifically, Becker et al. aim at embedding the input metric space into a
``simpler''
target space, namely a graph of bounded treewidth, while (approximately)
preserving all pairwise distances. A major constraint arising in this
setting is that for obtaining approximation schemes, the distortion
of the distance should be carefully controlled.
An ideal scenario would be to embed $n$-vertex minor free graphs into
graphs of treewidth at most $O_\varepsilon(\log n)$, while preserving the
pairwise distance up to a $1+\varepsilon$ factor.
Unfortunately, as implied by the work of Chakrabarti et al. \cite{CJLV08}, there are $n$-vertex planar graphs such that every (stochastic) embedding into $o(\sqrt{n})$-treewidth graphs must incur expected multiplicative distortion $\Omega(\log n)$ (see also~\cite{Rao99,KLMN04,AFGN18} for
embeddings into Euclidean metrics).
Bypassing the above roadblock,
Eisenstat {et al. \xspace} \cite{EKM14} and Fox-Epstein {et al. \xspace} \cite{FKS19} showed
how to embed planar metrics into bounded-treewidth graphs while preserving distances up
to a controlled \emph{additive} distortion.
Specifically, given a planar graph $G$ and a parameter $\varepsilon$, they showed how to construct a metric embedding into a graph $H$ of bounded treewidth such that all pairwise distances between pairs of vertices are preserved up to an additive $\varepsilon D$ term, where $D$ is the diameter of $G$.
While $\varepsilon D$ may look like a crude additive bound, it
is good enough for obtaining approximation schemes for some
classic problems such as $k$-center, and vehicle routing.
While Eisenstat et al. constructed an embedding into a graph of treewidth $\mathrm{poly}(\frac1\varepsilon)\cdot \log n$, Fox-Epstein et al. constructed an embedding into a graph of treewidth $\mathrm{poly}(\frac1\varepsilon)$, leading to the first PTAS for vehicle routing (with running time $n^{(Q/\varepsilon)^{O(Q/\varepsilon)}}$). Yet for minor-free graphs, or even bounded-genus graphs, obtaining such a result with any non-trivial bound on the treewidth is a major challenge; the embedding of Fox-Epstein et al.~\cite{FKS19} heavily relies on planarity (for example by using the face-vertex incidence graph). Therefore, prior to our work, the following question remained wide open.
\begin{question}\label{question:embedding}
Is it possible to (perhaps stochastically) embed a minor-free graph with diameter $D$ into a graph with treewidth $\mathrm{polylog}(n)$ and additive distortion at most $\varepsilon D$?
\end{question}
\subsection{Main contribution}
We answer all the above questions in the affirmative. Our first main contribution is a ``truly'' \emph{light subset spanner} for minor-free metrics that bridges the gap for spanners between planar and minor-free metrics; this completely settles \Cref{question:light-subset-spanners}.
In the following, the $O_r$ notation hides factors depending only on $r$; e.g., $x = O_r(m)$ means that $x \le m\cdot f(r)$ for some
computable function $f$; and $\mathrm{poly}(x)$ is (some) polynomial function of $x$.
\begin{theorem}\label{thm:tsp-spanner}
Given a $K_r$-minor-free graph $G$, a set of terminals $K\subseteq V(G)$, and a parameter $\varepsilon\in(0,1)$, there is a polynomial time algorithm that computes a subset spanner with distortion $1+\varepsilon$ and lightness $O_r(\mathrm{poly}(\frac{1}{\varepsilon}))$.
\end{theorem}
Our second main contribution is a
\emph{stochastic embedding} (see \Cref{def:stocastic}) of minor-free graphs into bounded-treewidth graphs with small expected additive distortion, obtaining
the first result of this kind for minor-free graphs and resolving \Cref{question:embedding} positively.
\begin{theorem}\label{thm:embedding-minor}
Given an $n$-vertex $K_r$-minor-free graph $G$ of diameter $D$, and a parameter $\varepsilon\in(0,1)$, in polynomial time one can construct a stochastic embedding from $G$ into graphs with treewidth $O_r(\frac{\log n}{\varepsilon^2})$, and expected additive distortion $\varepsilon D$.
\end{theorem}
While the embedding of planar graphs to low treewidth graphs by Fox-Epstein et al.~\cite{FKS19} is deterministic, our embedding in~\Cref{thm:embedding-minor} is stochastic. Thus, it is natural to ask whether randomness is necessary. We show in~\Cref{thm:LB} below that the embedding must be stochastic to guarantee (expected) additive distortion $\varepsilon D$, for small enough $\varepsilon$ (see \Cref{sec:emblowerbound} for details).
\begin{restatable}{theorem}{EmLowerBound}
\label{thm:LB}
There is an infinite graph family $\mathcal{H}$ of $K_6$-free graphs, such that for every $H\in \mathcal{H}$ with $n$ vertices and diameter $D$, every dominating embedding of $H$ into a treewidth-$o(\sqrt{n})$ graph has additive distortion at least $\frac{1}{20}\cdot D$.
\end{restatable}
For the more restricted case of a graph with genus $g$, we can construct a deterministic embedding without any dependence on the number of vertices.
\begin{theorem}\label{thm:embedding-genus}
Given a genus-$g$ graph $G$ of diameter $D$, and a parameter $\varepsilon\in(0,1)$, there exists an embedding $f$ from $G$ to
a graph $H$ of treewidth $O_{g}(\mathrm{poly}(\frac{1}{\varepsilon}))$ with additive distortion $\varepsilon D$.
\end{theorem}
Next we describe the algorithmic consequences of our results. First, we obtain an efficient PTAS for the subset TSP problem in $K_r$-minor-free graphs for any fixed $r$, thereby completely answering \Cref{question:subsetTSP}. (See \Cref{appendix:subsetTSP} for details.)
\begin{theorem}\label{thm:tsp-eptas}
Given a set of terminals $K$ in an $n$-vertex $K_r$-minor-free graph $G$, there exists an algorithm with running time
$2^{O_{r}(\mathrm{poly}(1/\varepsilon))} n^{O(1)}$ that can find a tour visiting every vertex in $K$ of length at most $1+\varepsilon$ times the length of
the shortest tour.
\end{theorem}
Second, we obtain the first polynomial-time approximation scheme for bounded-capacity vehicle routing in $K_r$-minor-free graphs.
\begin{theorem}\label{thm:vhr-qptas}
There is a randomized algorithm that,
given an $n$-vertex $K_r$-minor-free graph $G$ and an instance of bounded-capacity vehicle routing on $G$, in time $n^{O_{\varepsilon,Q,r}(\log\log n)}$
returns a solution with expected cost at most $1+\varepsilon$
times the cost of the optimal solution.
\end{theorem}
\Cref{thm:vhr-qptas} provides a definite answer to \Cref{question:vhr-qptas-minor}. En route to this result, we design a new dynamic program for bounded-capacity vehicle routing on bounded-treewidth graphs that constitutes
the first approximation scheme that is fixed-parameter tractable in the treewidth (and also in $\varepsilon$) for this class of graphs. For planar graphs and bounded-genus graphs, this yields a $2^{\mathrm{poly}(\frac{1}{\varepsilon})} n^{O(1)}$ approximation scheme and completely answers \Cref{question:vhr-eptas-planar}.
\begin{theorem}\label{thm:vhr-eptas}
There is a randomized algorithm that, given a graph $G$ with genus at most $g$ and an instance of bounded-capacity vehicle routing on $G$, in time $2^{O_{g,Q}(\mathrm{poly}(1/\varepsilon))} n^{O(1)}$ returns a solution whose expected cost is at most $1+\varepsilon$ times the cost of the optimal solution.
\end{theorem}
A major tool in our algorithm in \Cref{thm:vhr-eptas} is a new efficient dynamic program for approximating bounded-capacity vehicle routing in bounded treewidth graphs. The best exact algorithm known for bounded-treewidth graphs has running time
$n^{O(Q\mathrm{tw})}$~\cite{BKS18}.
\begin{theorem} \label{thm:vehiclerouting-tw-dp}
Let $\mathrm{tw},\varepsilon>0$. There is an algorithm that, for any instance of the vehicle routing problem
$(G,Q,s)$
such that $G$ has treewidth $\mathrm{tw}$ and $n$ vertices,
outputs a $(1+\varepsilon)$-approximate solution in time
$(Q\varepsilon^{-1}\log n)^{O(Q\mathrm{tw} /\varepsilon)} n^{O(1)}$.
\end{theorem}
We refer readers to \Cref{appendix:vhr} for details on \Cref{thm:vhr-qptas}, \Cref{thm:vhr-eptas}, and \Cref{thm:vehiclerouting-tw-dp}.
\subsection{Techniques} \label{sec:techniques}
In their seminal series of papers on graph minors, Robertson and Seymour showed how to decompose a minor-free graph into four ``basic components'': \emph{surface-embedded graphs}, \emph{apices}, \emph{vortices} and \emph{clique-sums} \cite{RS03} (see \Cref{subsec:RobertsonSeymour} for details and definitions).
Their decomposition suggested an algorithmic methodology, called the \emph{RS framework}, for solving a combinatorial optimization problem on minor-free graphs: solve the problem on planar graphs, then generalize to bounded-genus graphs, then to graphs embedded on a surface with few vortices, then deal with the apices, and finally extend to minor-free graphs. The RS framework has been successfully applied to many problems such as vertex cover, independent set and dominating set \cite{Grohe03,DHK05}. A common feature of these problems was that the graphs were unweighted and the problems rather ``local''. This success can be traced back to the pioneering work of Grohe \cite{Grohe03}, who showed how to handle graphs embedded on a surface with few vortices by showing that these graphs have linear local treewidth.
However, there is no analogous tool that can be applied to fundamental connectivity problems such as Subset TSP, Steiner tree, and survivable network design. Therefore, even though efficient PTASes for these problems were known for planar graphs \cite{Klein06,BKK07,BK08} for a long time, achieving similar results for any of them in minor-free graphs remains a major open problem. Inspired by the RS framework, we propose a multi-step framework for light subset spanner and embedding problems in minor-free graphs.
\paragraph{A multi-step framework} The fundamental building blocks in our framework are planar graphs, each with a \emph{single vortex}, of bounded diameter $D$, on which we solve the problems (Step 1 in our framework). We consider this a major conceptual contribution, as we overcome the barrier posed by vortices. We do so by introducing a hierarchical decomposition where each cluster in every level of the decomposition is separated from the rest of the graph by a constant number of shortest paths \emph{of the input graph}.
\footnote{One might hope that a similar decomposition can be constructed using the shortest-path separator of Abraham and Gavoille \cite{AG06} directly. Unfortunately, this is impossible as the length of the shortest paths in \cite{AG06} is unbounded w.r.t. $D$. Rather, they are shortest paths in different subgraphs of the original graph.}
Similar decompositions for planar graphs \cite{AGKKW98,Tho04} and bounded-genus graphs \cite{KKS11} have found many algorithmic applications \cite{AGKKW98,Thorup04,EKM14,KKS11}. Surprisingly, already for the rather restricted case of apex graphs,\footnote{A graph $G$ is an apex graph if there is a vertex $v$ such that $G\setminus v$ is a planar graph.} it is impossible to have such a decomposition. We believe that our decomposition is of independent interest.
While it is clear that the diameter parameter $D$ is relevant for the embedding problem, a priori it is unclear why it is useful for the light-subset-spanner problem. As we will see later, the diameter comes from a reduction to subset local spanners (Le \cite{Le20}), while the assumption is enabled by using sparse covers \cite{AGMW10}.
In Step 2, we generalize the results to $K_r$-minor-free graphs. Step 2 is broken into several mini-steps. In Mini-Step 2.1,\footnote{In the subset spanner problem, there is an additional step where we remove the constraint on the diameter of the graph, and this becomes Step 2.0.} we handle the case of planar graphs with more than one vortex; we introduce a \emph{vortex-merging operation} to reduce to the special case in Step 1. In Mini-Step 2.2, we handle graphs embedded on a surface with multiple vortices. The idea is to cut along vortex paths to reduce the genus one at a time until the surface-embedded part is planar (genus $0$), at which point Step 2.1 is applicable. In Mini-Step 2.3, we handle graphs embedded on a surface with multiple vortices and \emph{a constant number of apices}, a.k.a.\ \emph{nearly embeddable graphs}. In Mini-Step 2.4, we show how to handle general $K_r$-minor-free graphs by dealing with clique-sums.
In this multi-step framework, there are some steps that are simple to implement for one problem but challenging for the other. For example, implementing Mini-Step 2.3 is simple in the light subset spanner problem, while it is highly non-trivial for the embedding problem; removing apices can result in a graph with unbounded diameter. Novel ideas are typically needed to resolve these challenges; we refer the reader to \Cref{sec:tech-ideas} for more technical details.
We believe that our multi-step framework will find applications in designing PTASes for other problems in $K_r$-minor-free graphs, such as minimum Steiner tree or survivable network design.
\paragraph{An FPT approximation scheme for vehicle routing on low treewidth graphs} Our $(1+\varepsilon)$-approximation for vehicle routing with bounded
capacity in bounded treewidth graphs relies on a dynamic program
that proceeds along the clusters of a branch decomposition\footnote{For
simplicity, we work with branch decompositions}, namely
the subgraphs induced by the leaves of the subtrees of the
branch decomposition.
One key idea is to show that there exists a near-optimal solution such
that the number of tours entering (and leaving)
a given cluster with some fixed capacity $q \in [Q]$ can be rounded
to a power of $1+\tilde{\varepsilon}$, for some
$\tilde{\varepsilon}$ to be chosen later. To achieve this, we start from
the optimum solution and introduce
\emph{artificial paths}, namely paths that start at a vertex and go
to the depot (or from the depot to a vertex), without making any delivery
and whose only purpose is to help \emph{rounding} the number of
paths entering or
leaving a given cluster of the decomposition (i.e.: making it a power
of $1+\tilde{\varepsilon}$). This immediately reduces the number
of entries in the dynamic programming table we are using, reducing
the running time of the dynamic program to the desired complexity.
The main challenge becomes to bound the number of artificial paths
hence created so as to show that the obtained solution has cost
at most $1+\varepsilon$ times the cost of the optimum solution.
To do so, we design a charging scheme and prove that
every time a new path is created, its cost can be charged to the cost
of some $\tilde{\varepsilon}^{-1}$ paths of the original optimum solution.
Then, we ensure that each path of the original optimum solution does not
get charged more than $\varepsilon$ times. This is done by defining
that a path \emph{enters} (resp. \emph{leaves}) a cluster only if it
is making its next delivery (resp. it has made its last delivery)
to a vertex inside. This definition helps limit the number of times
a path gets charged to $\tilde{\varepsilon} = \varepsilon/(Q \log n)$
but it also separates the underlying shortest path
metric from the structure of the graph: A path from vertices
$s_1,\ldots,s_k$ should not be considered entering any cluster of the
branch decomposition containing $s_i$ if it does not pick up its
next delivery (or has picked up its last delivery) within the
cluster of $s_i$. This twist demands a very careful design of the
dynamic program by working with distances rather than explicit paths.
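For a rough sanity check of the parameter choice (this is only an illustrative accounting under assumptions we state explicitly, not the formal argument of the charging scheme): suppose each artificial path costs at most $\tilde{\varepsilon}$ times the total cost of the $\tilde{\varepsilon}^{-1}$ optimum paths it is charged to, and suppose each optimum path receives $O(Q\log n)$ such charges overall (say, one per capacity class and per relevant cluster on a root-to-leaf path of the branch decomposition). Then the total cost of the artificial paths is at most
$$ \tilde{\varepsilon}\cdot O(Q\log n)\cdot \mathrm{OPT} \;=\; O(\varepsilon)\cdot \mathrm{OPT} \qquad\text{for }\tilde{\varepsilon} = \varepsilon/(Q\log n), $$
which is of the claimed order.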
Then, our dynamic program works as follows: The algorithm computes
the best solution at a given cluster $C$ of the
decomposition, for any prescribed number of tours
(rounded to a power of $1+\tilde{\varepsilon}$)
entering and leaving $C$.
This is done by iterating over all pairs
of (pre-computed) solutions for the child clusters of $C$
that are consistent with (namely, that potentially \emph{can} lead to)
the prescribed number of tours entering and leaving at $C$.
Given consistent solutions for the child cluster,
the optimal cost of combining them
(given the constraints on the number of
tours entering at $C$) is then computed through a min-cost max-flow
assignment.
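To make the effect of the rounding concrete, the following illustrative snippet (ours, for exposition only; it is not part of the algorithm's formal description) lists the values a tour count in $\{0,\dots,n\}$ can take after being rounded up to a power of $1+\tilde{\varepsilon}$; only $O(\tilde{\varepsilon}^{-1}\log n)$ values remain, which is what keeps the dynamic-programming table small:
\begin{verbatim}
import math

def rounded_values(n_max, eps_tilde):
    """All values a tour count in {0, ..., n_max} can take once it is
    rounded up to an integer power of (1 + eps_tilde); the list has
    O(log(n_max) / eps_tilde) entries."""
    values, v = [0, 1], 1.0
    while v < n_max:
        v *= 1 + eps_tilde
        values.append(min(math.ceil(v), n_max))
    return sorted(set(values))

# e.g. with n_max = 10**6 and eps_tilde = 0.1 there are ~150 values,
# instead of a million possible exact counts.
print(len(rounded_values(10**6, 0.1)))
\end{verbatim}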
\section{Proof Overviews} \label{sec:tech-ideas}
\subsection{Light subset spanners for minor-free metrics}
In this section, we give a proof overview and review the main technical ideas for the proof of \Cref{thm:tsp-spanner}. A subgraph $H$ of a graph $G$ is called a \emph{subset} $L$-\emph{local} $(1+\varepsilon)$-spanner of $G$ with respect to a set $K$ of terminals if:
\begin{equation*}
\forall t_1,t_2\in K \mbox{ ~s.t.~ } d_G(t_1,t_2) \le L \qquad \mbox{it holds that ~~} d_H(t_1,t_2) \leq (1+\varepsilon)\cdot d_G(t_1,t_2)
\end{equation*}
\noindent Our starting point is the following reduction of Le~\cite{Le20}.
\begin{theorem}[Theorem 1.4 of~\cite{Le20}]\label{thm:reduction-Le}
Fix an $\varepsilon\in(0,1)$. Suppose that for any $K_r$-minor-free weighted graph $G=(V,E,w)$, subset $K\subseteq V$ of terminals, and parameter $L > 0$, there is a subset $L$-local $(1+\varepsilon)$-spanner w.r.t. $K$ of weight at most $O_{r}(|K|\cdot L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$. Then, for any terminal set, $G$ admits a subset $(1+\varepsilon)$-spanner with lightness $O_r(\mathrm{poly}(\frac{1}{\varepsilon}))$.
\end{theorem}
\noindent Our main focus is to construct a light subset $L$-local spanner.
\begin{restatable}{proposition}{MinorFreeSubset}
\label{prop:ell-close-spanner}
For any edge-weighted $K_r$-minor-free graph $G=(V,E,w)$, any subset $K\subseteq V$ of terminals, and any parameter $L > 0$, there is a subset $L$-local $(1+\varepsilon)$-spanner for $G$ with respect to $K$ of weight $O_{r}(|K|\cdot L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
\end{restatable}
\Cref{thm:tsp-spanner} follows directly by combining \Cref{thm:reduction-Le} with \Cref{prop:ell-close-spanner}. Our focus now is on proving \Cref{prop:ell-close-spanner}.
The proof is divided into two steps:
in step 1 we solve the problem on the restricted case of planar graphs with bounded diameter and a single vortex.
Then, in step 2, we reduce the problem from $K_r$-minor-free graphs to the special case solved in step 1.
\paragraph{Step 1: Single vortex with bounded diameter}
The main lemma in step 1 is stated below; the proof appears in \Cref{sec:oneVortex}.
We define a \emph{single-vortex} graph $G = G_{\Sigma} \cup W$ as a graph whose edge set can
be partitioned into two parts $G_{\Sigma}, W$ such that $G_{\Sigma}$ induces a plane graph and $W$ is a vortex of width\footnote{The \emph{width} of the vortex is the width
of its path decomposition.\label{foot:VortexWidth}} at most $h$ glued to some face of $G_{\Sigma}$.
\begin{restatable}[Single Vortex with Bounded Diameter]{lemma}{SingleVortexBoundedDiam}
\label{lm:one-vortex-Bounded-diam}
Consider a single-vortex
graph $G = G_{\Sigma}\cup W$ with diameter $D = O_h(L)$,
where $G_{\Sigma}$ is planar, and $W$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$.
For any terminal set $K$, there exists a subset $L$-local $(1+\varepsilon)$-spanner for $G$ with respect to $K$ of weight $O_{h}(|K|L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
\end{restatable}
The basic idea in constructing the spanner for \Cref{lm:one-vortex-Bounded-diam} is to use shortest-path
separators to recursively break down the graph into clusters while maintaining the distance from every terminal
to the boundaries of its cluster. Let $k$ be the number of terminals.
The idea is to construct a hierarchical tree of clusters of depth $O(\log k)$ where each terminal-to-boundary-vertex path
is well approximated. An elementary but inefficient approach to
obtain such a result is to add a single-source spanner (\Cref{lm:ss-spanner}) from each terminal $t$ to every shortest path
(at distance at most $L$) in each one of the separators in all the recursive levels. As a result, the spanner will consist
of $O_h(k\log k)$ single-source spanners of total weight $O_{h}(L\cdot k\log k\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$, as obtained in \cite{Le20}.
Thus, a very natural and basic question is whether minor-free graphs have enough structure so that we can avoid
the $\log k$ factors coming from the depth of the hierarchy. We show that this is indeed the case.
The problem with the previous approach is that for each hierarchical cluster $\varUpsilon$ in the decomposition, the total weight of the edges added to the spanner is proportional to the number of terminals in $\varUpsilon$ (and is thus $k\log k\cdot L$ in total, considering the entire process).
Our approach is the following: instead of adding single-source spanners from a terminal to paths in $O(\log k)$ separators, we add bipartite spanners (\Cref{lm:bipartite-spanner}) from the paths in a newly added separator to all the separator paths in the boundary of the current cluster.
A \emph{bipartite spanner} is a set of edges that preserves all pairwise distances between two paths, such that its weight is proportional to the distance between the paths and their lengths. The hope is to pay only $L$ for each hierarchical cluster $\varUpsilon$, regardless of the number of terminals it contains.
This approach has two main obstacles:
(1) the number of paths in the boundary of a cluster at depth $t$ of the recursion can be as large as $\Omega_h(t)$, implying that the total number of bipartite spanners added is $\Omega_h(k\log k)$ -- and we would not have gained anything compared to the elementary approach, and
(2) the weight of the shortest paths is unbounded. While initially the diameter and thus the length of shortest paths is bounded by $O_h(L)$, in the clusters created recursively, after deleting some paths there is no such bound. Note that in the approach that used the single-source spanners this was a non-issue, as for single-source spanner (\Cref{lm:ss-spanner}), the length of the shortest path $P$ does not matter. However, the weight of a bipartite spanner (\Cref{lm:bipartite-spanner}) depends on the weight of the paths it is constructed for.
We resolve both these issues by recursively constructing separators with a special structure. Following Abraham and Gavoille \cite{AG06}, a separator can be constructed using a fundamental vortex cycle between two vortex paths induced by an arbitrary tree (see \Cref{def:vortex-path}).
We construct a spanning tree $T$ as a shortest-path tree rooted in the perimeter vertices. Every separator then consists of two shortest paths from perimeter vertices to vertices in the embedded part of the graph, and at most two bags. The important property is that for every cluster $G_\Upsilon$ we encounter during the recursion, $T\cap G_\Upsilon$ is a spanning tree of $G_\Upsilon$. As a result, all the shortest paths we use for the separators throughout the process are actual shortest paths in $G$. In particular, their length is bounded, and thus issue (2) is resolved.
In order to resolve issue (1), we control the number of shortest paths in the boundary of a cluster in our decomposition using a more traditional approach.
Specifically, in some recursive steps, we aim for a reduction in the number of paths in the boundary instead of a reduction
in the number of terminals.
\paragraph{Step 2: From minor-free to single vortex with bounded diameter.}
We generalize the spanner construction of Step 1 to minor-free graphs using the Robertson-Seymour decomposition.
We have five sub-steps, each generalizing further (at the expense of increasing the weight of the spanner by an additive term $O_{h}(k L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$).
Thus, consider the construction proposed in Step 1.
In the first sub-step, we remove the assumption on the bounded diameter and make our spanner construction work for arbitrary
planar graphs with a single vortex.
The approach is as follows: Break a graph with unbounded diameter to overlapping clusters of diameter $O_h(L)$ such that every pair of vertices at
distance at most $L$ belongs to some cluster, and each vertex belongs to at most $O_h(1)$ clusters. This is done using the sparse covers of Abraham et al.
\cite{AGMW10}.
Then construct a spanner for each cluster separately by applying the approach of Step 1, namely \Cref{lm:one-vortex-Bounded-diam}, and
return the union of these spanners. More concretely, we prove the following lemma, whose proof appears in
\Cref{subsec:diameterReduction}.
\begin{restatable}[Single Vortex]{lemma}{LemmaOneVortexUbnoundedDiam}
\label{lm:one=Vortex-Unbounded-diam}
Consider a graph $G = G_{\Sigma}\cup W$
where $G_{\Sigma}$ is planar, and $W$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$. For any terminal set $K$, there exists a subset $L$-local $(1+\varepsilon)$-spanner for $G$ with respect to $K$ of weight $O_{h}(|K|L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
\end{restatable}
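For intuition, the weight accounting behind this reduction is short (a back-of-the-envelope reading of the sketch above; the details are in \Cref{subsec:diameterReduction}): let $\{C_t\}_t$ be the clusters of the sparse cover, so every pair of terminals at distance at most $L$ lies in a common cluster and every vertex lies in at most $O_h(1)$ clusters, whence $\sum_t |K\cap C_t| = O_h(|K|)$. Applying \Cref{lm:one-vortex-Bounded-diam} inside each cluster and summing gives
$$ \sum_t O_{h}\big(|K\cap C_t|\, L\cdot\mathrm{poly}(\tfrac{1}{\varepsilon})\big) \;=\; O_{h}\big(|K|\, L\cdot\mathrm{poly}(\tfrac{1}{\varepsilon})\big), $$
while the covering property guarantees that every terminal pair at distance at most $L$ is handled within some cluster.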
In the second sub-step, we generalize to planar graphs with at most $h$ vortices of width $^{\ref{foot:VortexWidth}}$ $h$.
The basic idea is to ``merge'' all vortices into a single vortex of width $O(h^2)$. This is done by repeatedly deleting a shortest path
between pairs of vortices, and ``opening up'' the cut to form a new face. The two vortices are then ``merged'' into a single vortex -- in other
words, they can be treated as a single vortex by the algorithm obtained at the first sub-step.
This is repeated until all the vortices have been ``merged'' into a single vortex, at which point \Cref{lm:one=Vortex-Unbounded-diam} applies.
Here we face a quite important technical difficulty: when opening up a shortest path between two vortices, we may alter shortest paths between
pairs of terminals (e.g.: the shortest path between two terminals intersects the shortest path between our two vortices, in which case
deleting the shortest path between the vortices destroys the shortest path between the terminals).
To resolve this issue, we compute a single-source spanner from each terminal to every nearby deleted path, thus controlling
the distance between such terminal pairs in the resulting spanner.
The above idea is captured in the following lemma, whose proof appears in \Cref{subsec:reducingVortices}.
\begin{restatable}[Multiple Vortices]{lemma}{NearlyEmbdablSpannerNoApicesNoGenus}
\label{lm:planar-with-many-vortices}
Consider a graph $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{h'}$, where $G_{\Sigma}$ is planar, $h'\le h$, and each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$.
For any terminal set $K$, there exists a subset $L$-local $(1+\varepsilon)$-spanner for $G$ with respect to $K$ of weight $O_{h}(|K|L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
\end{restatable}
In our third sub-step, we generalize to graphs of bounded genus with multiple vortices.
The main tool here is ``vortex paths'' from \cite{AG06}. Specifically, we can remove two vortex paths and reduce the genus by one (while increasing the number of vortices).
Here each vortex path consists of essentially $O_h(1)$ shortest paths.
We apply this genus reduction repeatedly until the graph has genus zero. The graph then has $O(g)$ new vortices. Next, we apply \Cref{lm:planar-with-many-vortices} to create a spanner.
The technical difficulty of the previous step arises here as well: There may be shortest paths between pairs of terminals that intersect
the vortex paths. We handle this issue in a similar manner.
The proof appears in \Cref{subsec:removingGenus}.
\begin{restatable}[Multiple Vortices and Genus]{lemma}{NearlyEmbdablSpannerNoApices}
\label{lm:nearly-embed-spanner-no-apices}
Consider a graph $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{h'}$
where $G_{\Sigma}$ is (cellularly) embedded on a surface $\Sigma$ of genus at most $g=O(h)$, $h'\le h$, and each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$. For any terminal set $K$, there exists a subset $L$-local $(1+\varepsilon)$-spanner for $G$ with respect to $K$ of weight $O_{h}(|K|L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
\end{restatable}
In our fourth sub-step, we generalize to nearly $h$-embeddable graphs.
That is, in addition to genus and vortices, we also allow $G$ to have at most $h$ apices.
The spanner is constructed by first deleting all the apices and applying \Cref{lm:nearly-embed-spanner-no-apices}. Then, in order to compensate for
the deleted apices, we add a shortest path from each apex to every terminal at distance at most $L$.
The proof appears in \Cref{subsec:removingApices}.
\begin{restatable}[Nearly $h$-Embeddable]{lemma}{NearlyEmbdablSpanner}
\label{lm:nearly-embed-spanner}
Consider a nearly $h$-embeddable graph $G$ with a set $K$ of $k$ terminals. There exists an $L$-local $(1+\varepsilon)$-spanner for $K$ of weight $O_{h}(kL\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
\end{restatable}
Finally, in our last sub-step, we generalize to minor-free graphs, thus proving \Cref{prop:ell-close-spanner}.
Recall that, according to \cite{RS03}, a minor-free graph admits a clique-sum decomposition, where each node in the decomposition
is nearly $h$-embeddable.
Our major step here is transforming the graph $G$ into a graph $G'$ that preserves all terminal distances in $G$,
while having at most $O(k)$ bags in its clique-sum decomposition.
This is done by first removing leaf nodes which are not ``essential'' for any terminal distance, and then shrinking long paths in the decomposition where all
internal nodes have degree two and (roughly) do not contain terminals.
Next, given $G'$, we make each vertex that belongs to one of the cliques in the clique-sum decomposition into a terminal. The new number of terminals is
bounded by $O_h(k)$. The last step is simply to construct an internal spanner for each
bag separately using \Cref{lm:nearly-embed-spanner}, and return the union of the constructed spanners.
The proof (of \Cref{prop:ell-close-spanner}) appears in \Cref{subsec:EliminatingCliqueSum}.
\subsection{Embedding into low-treewidth graphs}\label{subsec:tech-tw-emb}
At a high level, we follow the same approach as for the subset spanner. Due to the different nature of the constructed structures, and the different distortion guarantees, there are some differences that raise significant challenges.
Our first step is to generalize the result of Fox-Epstein et al.~\cite{FKS19} to graphs of bounded genus.
Basically, our approach is the same as for the subset spanner: we decompose the graph into simpler and simpler pieces by removing shortest paths. Here, instead of deleting a path, we will use a \emph{cutting lemma}. However, in this setting it is not clear how to use single-source or bipartite spanners to compensate for the changes to the shortest-path metric due to path deletions, since these spanners may have large treewidth.
Instead, we will \emph{portalize} the cut path. That is, we add an $\varepsilon D$-net\footnote{An $r$-net of a set $A$ is a set $N\subset A$ of vertices, all at distance at least $r$ from each other, such that every $v\in A$ has a net point $t\in N$ at distance at most $r$. If $A$ is a path of length $L$, then for every $r$-net $N$, $|N|=O(\frac{L}{r})$.\label{foot:net}}
of the path to every bag of the tree decomposition of the host graph. Clearly, this strategy has to be used cautiously since
it immediately increases the treewidth significantly.
Apices pose an interesting challenge. Standard techniques to deal with apices consist in removing them from the graph, solving the problem on
the remaining graph,
which is planar, and adding back the apices later~\cite{Grohe03,DHK05}. However, in our setting, removing apices can make the diameter of the resulting graph, say $G'$, become arbitrarily larger than $D$, and thus it seems hopeless to embed $G'$ into a low treewidth graph with an additive
distortion bounded by $D$. This is where randomness comes into play: we use padded decomposition~\cite{Fil19padded} to randomly partition $G'$ into pieces of (strong) diameter $D' = O(\frac{D}{\varepsilon})$. We then embed each part of the partition (which is planar) separately into graphs of bounded treewidth with additive distortion $\varepsilon^2 D' = O(\varepsilon D)$,
add back the apices by connecting them to all the vertices of all the bounded treewidth graphs
(and so adding all of them to each bag of each decomposition) and obtain a graph with
bounded treewidth and an \emph{expected} additive distortion $\varepsilon D$.
Our next stop on the road to minor-free metrics is to find bounded treewidth embeddings of clique-sums of bounded genus graphs with apices.
Suppose that $G$ is decomposed into clique-sums of graphs $G_1,G_2, \ldots, G_k$. We call each $G_i$ a \emph{piece}. A natural idea is to embed each $G_i$ into a low treewidth graph $H_i$, called the \emph{host graph}, with a tree decomposition $\mathcal{T}_i$, and then combine all the tree decompositions together. Suppose that $G_1$ and $G_2$ participate in the clique-sum decomposition of $G$ using the clique $Q$. To merge $G_1$ and $G_2$, we wish to have an embedding from $G_i$ to $H_i$, $i = 1,2$, that \emph{preserves} the clique $Q$ in the clique-sum of $G_1$ and $G_2$. That is, the set of vertices $\{ f_i(v) \mid v \in Q\}$ induces a clique in $H_i$ (so that there will be a bag in the tree decomposition of $H_i$ containing $f(Q)$). However, it is impossible to have such an embedding even if all $G_i$'s are planar
\footnote{To see this, suppose that $G$ is clique-sums of a graph $G_0$ with many other graphs $G_1,G_2\ldots, G_s$ in a star-like way, where $G_0$ has treewidth polynomial in $n$, and every edge of $G_0$ is used for some clique sum. If $H_0$ preserves all cliques, it contains $G_0$ and thus has treewidth polynomial in $n$.}.
To overcome this obstacle, we will allow each vertex in $G_i$ to have multiple images in $H_i$. Specifically, we introduce \emph{one-to-many} embeddings. Note that given a one-to-many embedding, one can construct a classic embedding by identifying each vertex with an arbitrary copy.
\begin{definition}[One-to-many embedding]
An embedding $f:G\rightarrow2^H$ of a graph $G$ into a graph $H$ is a \emph{one-to-many embedding} if for every $v\in G$, $f(v)$ is a non-empty set of vertices in $H$, where the sets $\{f(v)\}_{v\in G}$ are disjoint.
We say that $f$ is \emph{dominating} if for every pair of vertices $u,v\in G$, it holds that $d_G(u,v)\le \min_{u'\in f(u),v'\in f(v)}d_H(u',v')$.
We say that $f$ has additive distortion $\varepsilon D$ if it is dominating and for all $u,v\in G$ it holds that $\max_{u'\in f(u),v'\in f(v)}d_H(u',v')\le d_G(u,v)+\varepsilon D$.
Note that, as $d_G(v,v)=0$ for every vertex $v\in G$, having additive distortion $\varepsilon D$ implies that all the copies in $f(v)$ are at distance at most $\varepsilon D$ from each other.
A stochastic one-to-many embedding is a distribution $\mathcal{D}$ over dominating one-to-many embeddings. We say that a stochastic one-to-many embedding has expected additive distortion $\varepsilon D$ if for all $u,v\in G$ it holds that $\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_H(u',v')]\le d_G(u,v)+\varepsilon D$.
\end{definition}
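For concreteness, the conditions in this definition can be verified mechanically. The following sketch is our own illustration, with hypothetical inputs: \texttt{dG} and \texttt{dH} are dictionaries of shortest-path distances in $G$ and $H$, and \texttt{copies[u]} lists the images $f(u)$:
\begin{verbatim}
def additive_distortion(dG, dH, copies):
    """Return the additive distortion of a dominating one-to-many embedding.
    dG[u][v] and dH[a][b] are shortest-path distances; copies[u] is f(u)."""
    worst = 0.0
    for u in dG:
        for v in dG:
            dists = [dH[a][b] for a in copies[u] for b in copies[v]]
            assert min(dists) >= dG[u][v]   # dominating property
            worst = max(worst, max(dists) - dG[u][v])
    return worst   # at most eps*D if f has additive distortion eps*D
\end{verbatim}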
We can show that, in order to combine the different one-to-many embeddings of the pieces $G_1,\dots,G_s$, it is enough that for every clique $Q$
there is a bag $B$ containing at least one copy of each vertex in $Q$. Formally,
\begin{definition}[Clique-preserving embedding]
A one-to-many embedding $f:G\rightarrow2^H$ is called a clique-preserving embedding if for every clique $Q$ of $G$,
there is a clique $Q'$ in $H$ such that for every vertex $v \in Q$, $f(v)\cap Q'\ne\emptyset$.
\end{definition}
While it is impossible to preserve all cliques in a one-to-one embedding, it is possible to preserve all cliques in a one-to-many embedding; this is one of our major conceptual contributions. One might worry about the number of maximal cliques in $G$. However, since $G$ has constant degeneracy, the number of maximal cliques is linear~\cite{ELS10}.
Suppose that $f$ is clique-preserving, and let $\mathcal{T}$ be some tree-decomposition of $H$. Then for every clique $Q$ in $G$, there is a bag of $\mathcal{T}$ containing a copy of (the image of) $Q$ in $H$.
We now have the required definitions, and begin the description of the different steps in creating the embedding.
The most basic case we are dealing with directly is that of a planar graph with a single vortex and diameter $ D$ into a graph of treewidth $O(\frac{\log n}{\varepsilonlonilon})$ and additive distortion $\varepsilonlonilon D$. The high level idea is to use vortex-path separator to create a hierarchical partition tree $\tau$ as in \Cref{sec:oneVortex}. The depth of the tree will be $O(\log n)$. To accommodate for the damage caused by the separation, we \varnothingh{portalize} each vortex-path in the separator.
That is, for each such path $Q$, we pick an $\varepsilon D$-net$^{\ref{foot:net}}$
$N_Q$ of size $O(\frac{1}{\varepsilon})$. The vertices of $N_Q$ are called \emph{portals}. Since each node of $\tau$ is associated with a constant number of vortex-paths, there are at most $O(\frac{1}{\varepsilon})$ portals corresponding to each node of $\tau$. Thus, if we collect all portals along the path from a leaf to the root of $\tau$, there are $O(\frac{\log n}{\varepsilon})$ portals.
We create a bag for each leaf $\Upsilon$ of the tree $\tau$. In addition for each bag we add the portals corresponding to nodes along the path from the root to $\Upsilon$. The tree decomposition is then created w.r.t. $\tau$.
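For concreteness, the portalization of a single vortex-path can be carried out by the standard greedy net: walk along the path and open a new portal whenever the distance along the path from the last portal exceeds $\varepsilon D$. The sketch below is a minimal illustration of this step, under the assumption that the path is given as its list of consecutive edge weights; the function name and interface are ours.
\begin{verbatim}
def greedy_portals(edge_weights, eps_D):
    """edge_weights[i]: weight of the i-th edge along the vortex-path Q.
    Returns indices of the chosen portals, an eps_D-net along Q.
    When w(Q) = O_h(D), the number of portals is O_h(1/eps)."""
    portals = [0]                    # the first vertex of Q is always a portal
    dist_from_last = 0.0
    for i, w in enumerate(edge_weights, start=1):
        dist_from_last += w
        if dist_from_last > eps_D:   # too far from the last portal
            portals.append(i)
            dist_from_last = 0.0
    return portals
\end{verbatim}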
Finally, we need to make the embedding clique-preserving. Consider a clique $Q$: there is a leaf $\Upsilon_Q$ of $\tau$ containing a sub-clique $Q'\subseteq Q$, while all the vertices in $Q\setminus Q'$ belong to paths on the boundary of $\Upsilon_Q$. We will create a new bag containing (copies of) all the vertices in $Q$ and all the corresponding portals. The vertices of $Q'$ will have a single copy in the embedding, while the distortion for the vertices of $Q\setminus Q'$ will be guaranteed using a nearby portal.
\begin{restatable}[Single Vortex with Bounded Diameter]{lemma}{embPlanarVortex}
\label{lm:emb-planar-vortex}
Let $G = G_{\Sigma}\cup W$ be a single-vortex graph where the vortex $W$ has width $h$.
There is a one-to-many, clique-preserving embedding $f$ from $G$ to a graph $H$ with
treewidth $O(\frac{h\log n}{\varepsilon})$ and additive distortion $\varepsilon D$, where $D$ is the diameter of $G$.
\end{restatable}
We then can extend the embedding to planar graphs with multiple vortices using the \emph{vortex merging} technique (\Cref{subsec:reducingVortices}), and then to graphs embedded on a genus-$g$ surface with multiple vortices by cutting along vortex-paths.
The main tool here is a \emph{cutting lemma} described in \Cref{sec:cut}, which bounds the diameter blowup after each cutting step. At this point, the embedding is still deterministic. The proofs appear in \Cref{sec:EmbedManyVortices} and \Cref{sec:genus-minor}, respectively.
\begin{restatable}[Multiple Vortices]{lemma}{emVortexReduce}
\label{lm:em-vortexReduce}
Consider a graph $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{v(G)}$ of diameter $D$, where $G_{\Sigma}$ can be drawn on the plane, each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$, and $v(G)$ is the number of vortices in $G$. There is a one-to-many, clique-preserving embedding $f$ from $G$ to a graph $H$ of treewidth at most $\frac{h2^{O(v(G))}\log n}{\varepsilon}$ with additive distortion $\varepsilon D$.
\end{restatable}
\begin{restatable}[Multiple Vortices and Genus]{lemma}{embedGenusVortex}
\label{lm:embed-genus-vortex}
Consider a graph $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{v(G)}$ of diameter $D$, where $G_{\Sigma}$ is (cellularly) embedded on a surface $\Sigma$ of genus $g(G)$, and each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$. There is a one-to-many clique-preserving embedding $f$ from $G$ to a graph $H$ of treewidth at most $\frac{h2^{O(v(G)g(G))}\log n}{\varepsilon}$ with additive distortion $\varepsilon D$.
\end{restatable}
We then extend the embedding to graphs embedded on a genus-$g$ surface with multiple vortices and apices (a.k.a. nearly embeddable graphs). The problem with apices, as pointed out at the beginning of this section, is that the diameter of the graph after removing the apices could be unbounded in terms of the diameter of the original graph. Indeed, while the embedding in \Cref{lm:embed-genus-vortex} is deterministic, it is not clear how to \emph{deterministically} embed a nearly embeddable graph into a bounded-treewidth graph with additive distortion $\varepsilon D$. We use padded decompositions~\cite{Fil19padded} to decompose the graph into clusters of strong diameter $O(D/\varepsilon)$, embed each cluster separately, and then combine all the embeddings into a single graph. Note that separated nodes may suffer additive distortion as large as $2D$; however, this happens with probability at most $O(\varepsilon)$.
To make this embedding clique-preserving, we add to each cluster its neighborhood. Thus some small fraction of the vertices will belong to multiple clusters.
As a result, we obtain a one-to-many stochastic embedding with expected additive distortion $\varepsilon D$. The proof appears in \Cref{sec:EmbedApices}.
\begin{restatable}[Nearly $h$-Embeddable]{lemma}{embedNearlyEmbeddable}
\label{lm:embed-nearly-embeddable}
Given a nearly $h$-embeddable graph $G$ of diameter $D$, there is a one-to-many stochastic clique-preserving embedding into graphs with treewidth $O_h(\frac{\log n}{\varepsilon^2})$ and expected additive distortion $\varepsilon D$. Furthermore, every bag of the tree decomposition of every graph in the support contains (the image of) the apex set of $G$.
\end{restatable}
Finally we are in the case of general minor free graph $G = G_1 \oplus_h G_2 \oplus_h \ldots \oplus_h G_s$. We sample an embedding for each $G_i$ using \Cref{lm:embed-nearly-embeddable} to some bounded treewidth graph $H_i$. As all these embeddings are clique-preserving, there is a natural way to combine the tree decompositions of all the graphs $H_i$ together.
Here we run into another challenge: we need to guarantee that the additive distortion caused by merging tree decompositions is not too large. To explain this challenge, consider the clique-sum decomposition tree $\mathcal{T}$ of $G$: each node of $\mathcal{T}$ corresponds uniquely to $G_i$ for some $i$, and $G$ is obtained by clique-summing all adjacent graphs $G_i$ and $G_j$ in $\mathcal{T}$. Suppose that $\mathcal{T}$ has a (polynomially) long path $\mathcal{P}$ of hop-length $p$. Then, for a vertex $u$ in the graph corresponding to one end of $\mathcal{P}$ and a vertex $v$ in the graph corresponding to the other end of $\mathcal{P}$, the additive distortion between $u$ and $v$ could potentially be $p \varepsilon D$, since every time the shortest path between $u$ and $v$ goes through a graph $G_i$, we must pay additive distortion $\varepsilon D$ in the embedding of $G_i$. When $p$ is polynomially large, the additive distortion is polynomial in $n$. We resolve this issue by the following idea: (1) pick a \emph{separator piece} $G_i$ of $\mathcal{T}$ ($G_i$ is a separator of $\mathcal{T}$ if each component of $\mathcal{T}\setminus G_i$ contains at most $2/3$ of the pieces of $\mathcal{T}$), (2) recursively embed the pieces in the subtrees of $\mathcal{T}\setminus G_i$, and (3) add the join set between $G_i$ and each subtree, say $\mathcal{T}'$, of $\mathcal{T}\setminus G_i$ to all bags of the tree decomposition corresponding to $\mathcal{T}'$. We then show that this construction incurs only an additional additive term of $O_h(\log n)$ in the treewidth, while ensuring a total additive distortion of $\varepsilon D$. Hence the final tree decomposition has width $O(\frac{\log n}{\varepsilon^2})$.
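The separator piece in step (1) can be found by a standard centroid-style computation on $\mathcal{T}$: compute subtree sizes with respect to an arbitrary root and return any node all of whose removal components contain at most $2/3$ of the pieces (such a node always exists, e.g., a centroid). The following sketch, with an adjacency-list interface of our own choosing, illustrates only this step.
\begin{verbatim}
def separator_piece(tree_adj, root):
    """tree_adj: dict node -> list of neighbours in the decomposition tree T.
    Returns a node whose removal leaves components with <= 2/3 * |T| nodes each."""
    n = len(tree_adj)
    parent, order, stack = {root: None}, [], [root]
    while stack:                       # iterative DFS to get a traversal order
        v = stack.pop()
        order.append(v)
        for u in tree_adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    size = {v: 1 for v in tree_adj}    # subtree sizes, children before parents
    for v in reversed(order):
        if parent[v] is not None:
            size[parent[v]] += size[v]
    for v in tree_adj:                 # components of T \ {v}: child subtrees + parent side
        comps = [size[u] for u in tree_adj[v] if u != parent[v]]
        if parent[v] is not None:
            comps.append(n - size[v])
        if all(c <= 2 * n / 3 for c in comps):
            return v
\end{verbatim}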
The proof of \Cref{thm:embedding-minor} appears in \Cref{sec:EmbedGeneralMinor}.
An interesting consequence of our one-to-many embedding approach is that the host graphs $H$ contain Steiner points; that is, their vertex sets may be larger than $V$. We do not know whether it is possible to obtain the properties of \Cref{thm:embedding-minor} while embedding into $n$-vertex graphs.
In this context, the Steiner point removal problem studies whether it is possible to remove all Steiner points while preserving both pairwise distances and the topological structure \cite{Fil19SPR,Fil20scattering}. Unfortunately, in general, even if $G$ is a tree, a multiplicative distortion of $8$ is necessary \cite{CXKR06}.
Nevertheless, as Krauthgamer {et al. \xspace} \cite{KNZ14} proved, given a set $K$ of $k$ terminals in a graph $H$ of treewidth $\mathrm{tw}$, we can embed the terminal set $K$ isometrically (that is, with multiplicative distortion $1$) into a graph with $O(k\cdot\mathrm{tw}^3)$ vertices and treewidth $\mathrm{tw}$. It follows that we can ensure that all embeddings in the support of the stochastic embedding in \Cref{thm:embedding-minor} are into graphs with $O_r(n\cdot \frac{\log^3 n}{\varepsilon^6})$ vertices.
\section{Related work}\leftarrowbel{sec:related}
\paragraph{TSP in Euclidean and doubling metrics} Arora~\cite{Arora97} and Mitchell~\cite{Mitchell99} gave polynomial-time approximation schemes (PTASs) for TSP (Arora's algorithm is a PTAS for any fixed dimension). Rao and Smith~\cite{RS98} gave
an $O(n \log n)$ approximation scheme for bounded-dimension Euclidean
TSP, later improved to linear-time by Bartal and
Gottlieb~\cite{BartalG13}. For TSP in doubling metrics, Talwar~\cite{Tal04} gave a QPTAS; Bartal {et al. \xspace} \cite{BGK16} gave a PTAS; and Gottlieb~\cite{Gottlieb15} gave an efficient PTAS.
\paragraph{TSP and subset TSP in minor-closed families} For the TSP problem in planar graphs, Grigni {et al. \xspace} \cite{GKP95} gave the first (inefficient) PTAS for \emph{unweighted} graphs; Arora {et al. \xspace} \cite{AGKKW98} extended the result of Grigni {et al. \xspace} \cite{GKP95} to weighted graphs; Klein \cite{Klein05} designed the first EPTAS by introducing the contraction decomposition framework. Borradaile {et al. \xspace} \cite{BDT14} generalized Klein's EPTAS to bounded-genus graphs.
The first PTAS for $K_r$-minor-free graphs was designed by Demaine {et al. \xspace} \cite{DHK11}, improving upon the QPTAS by Grigni \cite{Grigni00}. Recently, Borradaile {et al. \xspace} \cite{BLW17} obtained an EPTAS for TSP in $K_r$-minor-free graphs by constructing light spanners; this work completed a long line of research on approximating classical TSP in $K_r$-minor-free graphs.
For subset TSP, Arora {et al. \xspace} \cite{AGKKW98} designed the first QPTAS for weighted planar graphs. Klein \cite{Klein06} obtained the first EPTAS for subset TSP in planar graphs by constructing a light planar subset spanner. Borradaile {et al. \xspace} \cite{BDT14} generalized Klein's subset spanner construction to bounded-genus graphs, thereby obtaining an EPTAS. Le \cite{Le20} designed the first (inefficient) PTAS for subset TSP in minor-free graphs. Our \Cref{thm:tsp-eptas} completes this line of research.
\paragraph{Light (subset) spanners} Light and sparse spanners were introduced for distributed computing \cite{Awerbuch85,PS89,ABP91}. Since then, spanners have attracted ever-growing interest; see~\cite{AGSHJKS19} for a survey. Over the years, light spanners with constant lightness have been shown to exist in Euclidean metrics~\cite{RS98,LS19}, doubling metrics~\cite{Gottlieb15,BLW19}, planar graphs~\cite{ADDJS93}, bounded-genus graphs~\cite{Grigni00} and minor-free graphs~\cite{BLW17}. For subset spanners, relevant results include subset spanners with constant lightness for planar graphs by Klein~\cite{Klein06} and for bounded-genus graphs by Borradaile {et al. \xspace}~\cite{BDT14}. Le~\cite{Le20} constructed subset spanners with lightness $O(\log |K|)$ for minor-free graphs.
\paragraph{Capacitated vehicle routing} There is a rich literature on the capacitated vehicle routing problem. When $Q$ is arbitrary, the problem becomes extremely difficult, as no PTAS is known for any non-trivial metric. For $\mathbb{R}^2$, there is a QPTAS by Mathieu and Das~\cite{DM15}, and for tree metrics there is a (tight) $\frac{4}{3}$-approximation algorithm by Becker~\cite{Becker18}. In general graphs, Haimovich and Rinnooy Kan~\cite{HR85} designed a $2.5$-approximation algorithm.
In Euclidean spaces, better results are known for restricted values of $Q$: PTASes in $\mathbb{R}^2$ for $Q = O( 2^{\log^{O_{\varepsilon}(1)}n})$ by a sequence of works~\cite{HR85,AKTT97,ACL09} and for $Q = \Omega(n)$ by Asano et al.~\cite{ AKTT97}, and a PTAS in $\mathbb{R}^d$ for $Q = O(\log n^{1/d})$ by Khachay and Dubinin~\cite{KD16}.
For constant $Q$, progress has been made on designing approximation schemes for various minor-closed families of graphs. In recent work, Becker et al.~\cite{BKS19} designed a PTAS for planar graphs. Becker et al.~\cite{BKS17} gave a QPTAS for planar and bounded-genus graphs.
Other relevant works include a PTAS for graphs of bounded highway dimension and constant $Q$~\cite{BKS18}, a bicriteria PTAS for tree metrics and arbitrary $Q$~\cite{BP19}, and an exact algorithm for treewidth-$\mathrm{tw}$ graphs with running time $O(n^{\mathrm{tw} Q})$~\cite{BKS18}.
\section{Preliminaries} \leftarrowbel{sec:prelim}
The $O_r$ notation hides factors depending on $r$, e.g., $O_r(m)=O(m)\cdot f(r)$ for some function $f$ of $r$.
We consider connected undirected graphs $G=(V,E)$ with edge weights
$w_G: E \to \mathbb{R}_{\ge 0}$. Additionally, we denote $G$'s vertex set and edge set by $V(G)$ and $E(G)$, respectively. Let $d_{G}$ denote the shortest path metric in
$G$, i.e., $d_G(u,v)$ equals the minimal weight of a path from $u$ to $v$. Given a vertex $v$ and a subset of vertices $S$, $d_G(v,S)=\min_{u \in S}d_G(v,u)$ is the distance between $v$ and $S$. If $v \in S$, then $d_G(v,S) = 0$. When the graph is clear from the context, we simply use $w$ to refer to $w_G$, and $d$ to refer to $d_G$.
$G[S]$ denotes the subgraph induced by $S$. We define the \emph{strong}\footnote{The \emph{weak} diameter of $S$ is $ \max_{u,v \in S}d_{G}(u,v)$.} diameter of $S$, denoted by $\mathrm{\textsc{Diam}}(S)$, to be $\max_{u,v \in S}d_{G[S]}(u,v)$.
For a subgraph $H$ of $G$, $w_G(H)=\sum_{e\in E(H)}w_G(e)$ denotes the total weight of all the edges in $H$.
For two paths $P_1,P_2$ such that the last vertex of $P_1$ is the first vertex of $P_2$, we denote by $P_1\circ P_2$ the
concatenation of $P_1$ and $P_2$. We denote by $P[u,v]$ the subpath of $P$ between $u$ and $v$.
We say that a subset of vertices $N$ is an \emph{$r$-net} of $G$ if the distance between any two vertices of $N$ is at least $r$ and for every $x \in V(G)$ there exists $y \in N$ such that $d_G(x,y) \leq r$.
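A simple greedy procedure produces such a net: scan the vertices in an arbitrary order and add a vertex whenever it is farther than $r$ from all vertices chosen so far. The sketch below assumes a shortest-path distance oracle \texttt{d(u, v)} (e.g., precomputed by Dijkstra) and is given only for illustration.
\begin{verbatim}
def greedy_net(vertices, d, r):
    """vertices: iterable of V(G); d(u, v): shortest-path distance oracle.
    Returns an r-net: chosen vertices are pairwise more than r apart,
    and every vertex of G has a net point within distance r."""
    net = []
    for v in vertices:
        if all(d(v, u) > r for u in net):
            net.append(v)
    return net
\end{verbatim}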
Given a subset $U\subseteq V$ of vertices, a \emph{Steiner tree} of $U$ is an acyclic subgraph of $G$ in which all the vertices in $U$ belong to the same connected component. A \emph{minimum Steiner tree} is a subgraph of minimum weight among all such subgraphs (it is not necessarily unique).
Given a subset $U$ of terminals, a \emph{subset $t$-spanner w.r.t. $U$} is a subgraph $H$ that preserves the distances between any pair of terminals up to a multiplicative factor of $t$, i.e., $\forall u,v\in U$, $d_H(u,v)\le t\cdot d_G(u,v)$. Note that as $H$ is a subgraph, it necessarily holds that $d_G(u,v)\le d_H(u,v)$. The lightness of $H$ is the ratio of its weight $w(H)$ to the weight of a minimum Steiner tree of $U$.
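Both quantities are easy to audit on a concrete candidate $H$. Since computing a minimum Steiner tree is NP-hard, the sketch below (using networkx; the interface is ours) compares $w(H)$ against the MST of the metric closure of $U$, which overestimates the minimum Steiner tree weight by at most a factor of $2$; hence the reported ratio underestimates the true lightness by at most a factor of $2$. It assumes positive edge weights and that $H$ connects all terminals.
\begin{verbatim}
import itertools
import networkx as nx

def stretch_and_lightness(G, H, terminals):
    """G: weighted nx.Graph (edge attribute 'weight'); H: a subgraph of G.
    Returns (max stretch over terminal pairs,
             w(H) / w(MST of the metric closure of the terminals))."""
    dG = dict(nx.all_pairs_dijkstra_path_length(G, weight='weight'))
    dH = dict(nx.all_pairs_dijkstra_path_length(H, weight='weight'))
    pairs = list(itertools.combinations(terminals, 2))
    stretch = max(dH[u][v] / dG[u][v] for u, v in pairs)
    closure = nx.Graph()                  # metric closure of the terminal set
    closure.add_weighted_edges_from((u, v, dG[u][v]) for u, v in pairs)
    mst_weight = nx.minimum_spanning_tree(closure).size(weight='weight')
    return stretch, H.size(weight='weight') / mst_weight
\end{verbatim}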
A metric embedding is a function $f:G\rightarrow H$ between two graphs $G$ and $H$.
We say that a metric embedding $f$ is \emph{dominating} if for every pair of vertices $u,v\in G$, it holds that $d_G(u,v)\le d_H(f(u),f(v))$.
\begin{definition}[Stochastic embedding]\label{def:stocastic}
\sloppy A stochastic embedding is a distribution $\mathcal{D}$ over dominating embeddings $f$. We say that a stochastic embedding has expected additive distortion $\varepsilon D$ if $\forall u,v\in G$ it holds that $\mathbb{E}_{f\sim\mathcal{D}}[d_H(f(u),f(v))]\le d_G(u,v)+\varepsilon D$.
\end{definition}
\subsection{Robertson-Seymour decomposition of minor-free graphs}\leftarrowbel{subsec:RobertsonSeymour}
In this section, we review notation used in graph minor theory by Robertson and Seymour. Readers who are familiar with Robertson-Seymour decomposition can skip this section. Basic definitions such as tree/path decomposition and treewidth/pathwidth are provided in \Cref{appendix:additionalNotation}.
Informally speaking, the celebrated theorem of Robertson and Seymour (\Cref{thm:RS}, \cite{RS03}) states that any minor-free graph can be decomposed into a collection of graphs \emph{nearly embeddable} in a surface of constant genus, glued together along a tree structure by taking \emph{clique-sums}. To formally state the Robertson-Seymour decomposition, we need additional notation.
A \emph{vortex} is a graph $G$ equipped with a path decomposition $\{X_1,X_2,\ldots, X_t\}$ and a sequence of $t$ designated vertices $x_1,\ldots, x_t$, called the \emph{perimeter} of $G$, such that $x_i \in X_i$ for all $1\leq i \leq t$. The \emph{width} of the vortex is the width of its path decomposition.
We say that a vortex $W$ is \emph{glued} to a face $F$ of a surface embedded graph $G$ if $W\cap F$ is the perimeter of $W$ whose vertices appear consecutively along the boundary of $F$.
\paragraph{Nearly $h$-embeddability} A graph $G$ is nearly $h$-embeddable if there is a set of at most $h$ vertices $A$, called \emph{apices}, such that $G\setminus A$ can be decomposed as $G_{\Sigma}\cup \{W_1, W_2,\ldots, W_{h}\}$ where $G_{\Sigma}$ is (cellularly) embedded on a surface $\Sigma$ of genus at most $h$ and each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$.
\paragraph{$h$-Clique-sum} A graph $G$ is an $h$-clique-sum of two graphs $G_1,G_2$, denoted by $G = G_1\oplus_h G_2$, if there are two cliques of size exactly $h$ each such that $G$ can be obtained by identifying the vertices of the two cliques and removing some of the clique edges of the resulting identification.
Note that clique-sum is not a well-defined operation, since the clique-sum of two graphs is not unique due to the clique-edge deletion step. We can now state the decomposition theorem.
\begin{theorem}[Theorem~1.3~\cite{RS03}] \label{thm:RS} There is a constant $h = O_r(1)$ such that any $K_r$-minor-free graph $G$ can be decomposed into a tree $\mathcal{T}$ where each node of $\mathcal{T}$ corresponds to a nearly $h$-embeddable graph such that $G = \cup_{X_iX_j \in E(\mathcal{T})} X_i \oplus_h X_j$.
\end{theorem}
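To make the $h$-clique-sum operation concrete, the sketch below (networkx; the interface and names are ours) forms one possible clique-sum of two graphs, given the two cliques as equal-length vertex lists identified position by position, followed by an optional deletion of clique edges.
\begin{verbatim}
import networkx as nx

def clique_sum(G1, G2, clique1, clique2, delete_edges=()):
    """clique1, clique2: lists of h vertices forming cliques of G1, G2,
    identified position by position; delete_edges: clique edges of the
    identification to remove afterwards."""
    assert len(clique1) == len(clique2)
    # rename G2: clique2[i] becomes clique1[i]; other vertices get a fresh tag
    rename = {v: ('G2', v) for v in G2.nodes}
    rename.update(dict(zip(clique2, clique1)))
    G2r = nx.relabel_nodes(G2, rename, copy=True)
    G = nx.compose(G1, G2r)             # glue along the identified clique
    G.remove_edges_from(delete_edges)   # optional clique-edge deletion
    return G
\end{verbatim}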
By slightly abusing notation, we use the term \varnothingh{nodes} of $\mathcal{T}$ to refer to both the nodes and the graphs corresponding to the nodes of $\mathcal{T}$. Note that nodes of $\mathcal{T}$ may not be
subgraphs of $G$, as in the clique-sum, some edges of a node, namely some edges of a nearly $h$-embeddable subgraph associated to a node,
may not be present in $G$. However, for any edge $(u,v)$ between two vertices of a node, say $X$, of $\mathcal{T}$, that are not present in $G$,
we add edge $(u,v)$ to $G$ and set its weight to be $d_G(u,v)$. It is immediate that this does not
change the Robertson-Seymour decomposition of the graph, nor its shortest path metric.
Thus, in the decomposition of the resulting graph, the clique-sum operation does not remove any edge.
This is an important point to keep in mind as in what follows, we will remove some nodes out of $\mathcal{T}$ while guaranteeing that
the shortest path metric between terminals is not affected.
\subsection{Vortex paths}
Throughout the paper, we will use the notion of a \emph{vortex-path}, which was first introduced by Abraham and Gavoille~\cite{AG06}.
\begin{definition}[Vortex-path~\cite{AG06}]\label{def:vortex-path} Given a vortex embedded graph $G = G_{\Sigma}\cup W_1\cup W_2\cup\ldots \cup W_{v(G)}$, a \emph{vortex-path} between two vertices $u,v$, denoted by $\mathcal{V}[u,v]$, is a subgraph of $G$ that can be written as $\mathcal{V}[u,v] = P_0\cup X_1\cup Y_1\cup P_1 \cup \ldots \cup X_\ell \cup Y_\ell\cup P_{\ell}$ such that:
\begin{itemize}[nolistsep,noitemsep]
\item[(a)] $P_i$ is a path of $G_{\Sigma}$ for all $0\leq i\leq \ell$.
\item[(b)] For all $1\leq i\leq \ell$, $X_i$ and $Y_i$ are two bags of the same vortex, denoted $\mathcal{W}(X_i)$.
\item[(c)] For any $1\leq i\not= j\leq \ell$, $\mathcal{W}(X_i) \not = \mathcal{W}(X_j)$.
\item[(d)] $P_0$ ($P_{\ell}$) is a path from $u$ ($v$) to a perimeter in $X_1$ ($Y_\ell$). $P_i$ is a path from a perimeter vertex in $Y_{i}$ to a perimeter vertex in $X_{i+1}$, $1\leq i\leq \ell-1$. No path $P_i$ contains a perimeter vertex as an internal vertex for any $i \in [0,\ell]$.
\end{itemize}
Each path $P_i$ is called a \emph{segment} of $\mathcal{V}$.
\end{definition}
\begin{figure}[]
\centering{\includegraphics[width=1\textwidth]{fig/vortexDef3}}
\caption{\label{fig:vortexDef}\small \it
Part (1) displays a planar graph with a single vortex. The vortex is embedded on a face colored in blue. The perimeter vertices are colored in red. The vertices belonging to the vortex but not to the planar graph are displayed by smaller dots (with their edges in gray).\newline
Part (2) displays a path $P$ from $v$ to $u$, in red.\newline
Part (3) displays the vortex path $\mathcal{V}[v,u]=P_0\cup X\cup Y\cup P_1$ induced by $P$. Here $P_0,P_1$ are displayed in green, where $P_0$ is the prefix of $P$ from $v$ to $x_2$, and $P_1$ is the suffix of $P$ from $x_9$ to $u$. $X$ (resp. $Y$), encircled by a dashed blue line, is the bag $X_2$ (resp. $X_9$) associated with the perimeter vertex $x_2$ (resp. $x_9$).\newline
Part (4) displays in red the projection $\bar{\mathcal{V}}$ of the vortex path $\mathcal{V}$. $\bar{\mathcal{V}}$ consists of $P_0,P_1$, and an imaginary edge $e$ (dashed) between $x_2$ and $x_9$. \newline
Part (5) displays a fundamental vortex cycle $C$, whose embedded part is in green. All the vertices in $\mathcal{W}(C)$ are encircled by a blue line.\newline
Part (6) displays the closed curve $\bar{C}$ induced by $C$. $e_C$, as well as the other imaginary edges, are displayed by red dashed lines. The interior of $\bar{C}$ is encircled by an orange dashed line, while the exterior is encircled by a purple dashed line.
}
\end{figure}
When the endpoints of a vortex-path $\mathcal{V}[u,v]$ are not relevant to our discussion, we omit the endpoints and simply denote it by $\mathcal{V}$.
The \emph{projection} of the vortex-path $\mathcal{V}[u,v]=P_0\cup X_1\cup Y_1\cup P_1 \cup \ldots \cup X_\ell \cup Y_\ell\cup P_{\ell}$, denoted by $\bar{\mathcal{V}}$, is the path formed by $P_0\circ e_1\circ P_1\circ e_2\circ\dots\circ e_{\ell}\circ P_\ell$, where $e_i$ is an (imaginary) extra edge added to $G_\Sigma$ between the perimeter vertex of $X_i$ and the perimeter vertex of $Y_i$, and embedded inside the cellular face upon which the vortex $W_i$ is glued. We observe that even though $\mathcal{V}$ may not be a path of $G$, its projection
$\bar{\mathcal{V}}$ is a curve of $\Sigma$. See \Cref{fig:vortexDef} for a simple example and see \Cref{fig:vortex-path-complex} in \Cref{appendix:additionalNotation} for a more complex one.
Consider a path $Q=(u=v_0,v_1,\dots,v_r=v)$ in $G$ with two endpoints in the embedded part $G_{\Sigma}$. $Q$ \emph{induces} a vortex path $\mathcal{V}[u,v]$ defined as follows:
Start a walk on $Q$ until you first encounter a vertex $v_{i_1}$ that belongs to a vortex $W_{i_1}$. Let $u_{i_1}$ be the last vertex in $Q$ belonging to $W_{i_1}$. Note that necessarily $v_{i_1},u_{i_1}$ are perimeter vertices in $W_{i_1}$, denote them $x^{i_1}_{j_1}
,x^{i_1}_{l_1}$ respectively. We continue and define $v_{i_2}$ to be the first vertex (after $u_{i_1}$) belonging to some vortex $W_{i_2}$, and $u_{i_2}$ being the last vertex in $Q\cap W_{i_2}$. $x^{i_2}_{j_2}
,x^{i_2}_{l_2}$ are defined in the natural manner. We iteratively define $\left(v_{i_3}=x^{i_3}_{j_3},W_{i_3},u_{i_3}=x^{i_3}_{l_3}\right),\left(v_{i_4}=x^{i_4}_{j_4},W_{i_4},u_{i_4}=x^{i_4}_{l_4}\right),\dots$ until the first index $i_s$ such that there is no vertex $v_{i_{s+1}}$ after $u_{i_s}$ belonging to a vortex.
The respective induced vortex path is defined as $P_0\cup X_1\cup Y_1\cup P_1 \cup \ldots \cup X_s \cup Y_s\cup P_{s}$ where $P_0=(v_0,\dots,v_{i_1})$, $P_q=(u_{i_q},\dots,v_{i_{q+1}})$, $X_q$ (resp. $Y_q$) is a bag in $W_{i_q}$ associated with the perimeter vertex $x^{i_q}_{j_q}$ (resp. $x^{i_q}_{l_q}$), and $P_{s}= Q[u_{i_s}, v]$. See \Cref{fig:vortexDef}.
Suppose next that $G$ has genus $0$. Specifically, that $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{h'}$, where $G_{\Sigma}$ can be drawn on the plane, $h'\le h$, and each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$.
Fix some drawing of $G_{\Sigma}$ on the plane, let $T_r$ be an arbitrary spanning tree of $G$ rooted at $r$.
A \emph{fundamental vortex cycle} $C$ of $T_r$ is a union of vortex paths $\mathcal{V}[r,v]\cup \mathcal{V}[r,u]$, induced by two paths $Q_1,Q_2$, both starting at the root $r$ and ending at $u,v\in G_\Sigma$, such that either $u,v$ are neighbors in $G_\Sigma$, or a curve could be added between $u$ and $v$ without intersecting any other curve in the drawing of $G_{\Sigma}$. Denote this edge/imaginary curve by $e_C$.
We call the union of the projections, $\bar{\mathcal{V}}[r,v]\cup \bar{\mathcal{V}}[r,u]$, the \emph{embedded} part of $C$.
Adding $e_C$ to the embedded part, $\bar{\mathcal{V}}[r,v]\cup \bar{\mathcal{V}}[r,u]\cup e_C$ induces a closed curve $\bar{C}$ which is associated with $C$.
Removing the fundamental vortex cycle $C$ from $G$ partitions $G\setminus C$ into two parts, interior $\mathcal{I}$ and exterior $\mathcal{E}$.
The embedded part $G_{\Sigma}$ is partitioned to interior $\mathcal{I}\cap G_\Sigma$ and exterior $\mathcal{E}\cap G_\Sigma$, w.r.t. the closed curve $\bar{C}$ associated with $C$. For every vertex $z$ belonging to the vortex only ($z\in W\setminus G_{\Sigma}$), which was not deleted, let $X_i$ be an arbitrary bag containing $z$. Note that $X_i$ is not one of the bags belonging to the fundamental vortex cycle. In particular, even though it might be deleted, the perimeter vertex $x_i$ belongs either to the interior or the exterior of $\bar{C}$.
If $x_i$ is in the interior, respectively exterior, part of $G \setminus C$, then
vertex $z$ joins the interior $\mathcal{I}$, respectively exterior $\mathcal{E}$, part of $G\setminus C$.
Note that the cycle vertices (those of $C$) belong neither to the interior nor to the exterior.
\begin{claim}\label{clm:CisSeperator}
$\mathcal{I},C,\mathcal{E}$ form a partition of $G$. Further, there are no edges between $\mathcal{I}$ and $\mathcal{E}$.
\end{claim}
\begin{proof}
Let $u\in \mathcal{E}$ and $v\in\mathcal{I}$. Assume for contradiction that they are neighbors in $G$, and denote $e=(u,v)$. We continue by case analysis.
\begin{OneLiners}
\item Suppose $e\in G_\Sigma$. In this case $e$ must cross the closed curve $\bar{C}$, a contradiction.
\item Suppose $e\notin G_\Sigma$. Then $u,v$ must belong to the same vortex $W$ (they might be perimeter vertices, however $e$ must belong to $W$).
Denote by $I_{u}=\{i\mid u\in X_i\}$ and $I_{v}=\{i\mid v\in X_i\}$ the set of indices belonging to bags containing $u$ and $v$, respectively.
By the definition of path decomposition, there are integers $a_u,b_u,a_v,b_v$ such that $I_{u}=[a_u,b_u]$ and $I_{v}=[a_v,b_v]$.
As $u\in \mathcal{E}$ and $v\in \mathcal{I}$, there are indices $i_u\in [a_u,b_u]$, $i_v\in [a_v,b_v]$ such that $x_{i_u}$ is outside $\bar{C}$, while $x_{i_v}$ is inside $\bar{C}$. W.l.o.g. $i_u\le i_v$. The curve $\bar{C}$ must intersect the path $(x_{i_u},x_{i_u+1},\dots, x_{i_v})$ at a perimeter vertex $x_{i_c}$, where the entire bag $X_{i_c}$ belongs to the fundamental vortex cycle $C$. As $u,v$ do not belong to $C$, it must hold that $i_c\notin [a_u,b_u]\cup[a_v,b_v]$, implying that $a_u\le i_u\le b_u<i_c<a_v\le i_v\le b_v$. Thus $I_{u}\cap I_{v}=\varnothingtyset$, a contradiction to the assumption that $u$ and $v$ are neighbors in $W$.
\end{OneLiners}
\end{proof}
We will use the following lemma, which is a generalization of the celebrated Lipton-Tarjan planar separator theorem \cite{LT79,Tho04}, to planar graphs with vortices.
This is a slightly different\footnote{Originally, \cite{AG06} used three vortex-paths to separate the graph into components of weight at most $\mathcal{W}/2$ each. Here we use two vortex-paths, but each component has weight at most $\frac{2\mathcal{W}}{3}$ instead. Additionally, \cite{AG06} is more general and holds for an arbitrary number of vortices.} version of Lemma 6 in~\cite{AG06} (Lemma 10 in the full version).
\begin{lemma}[\cite{AG06}]\label{lm:AG06one-vortex-separator} Consider a graph $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{h'}$, where $G_{\Sigma}$ can be drawn on the plane and $W_1,\dots, W_{h'}$ are vortices glued to faces of $G_{\Sigma}$. Let $T_r$ be a spanning tree of $G$ rooted at $r$, and let $\omega:V\rightarrow\mathbb{R}_+$ be a weight function over the vertices. Set $\mathcal{W}=\sum_{v\in V}\omega(v)$ to be the total vertex weight of $G$. Then there is a fundamental vortex cycle $C$ such that both the interior and the exterior of $G\setminus C$ have vertex weight at most $\frac{2\mathcal{W}}{3}$, i.e., $\sum_{v\in\mathcal{I}}\omega(v),\sum_{v\in\mathcal{E}}\omega(v)\le \frac{2\mathcal{W}}{3}$.
\end{lemma}
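Each candidate separator in \Cref{lm:AG06one-vortex-separator} is assembled from two root-to-vertex (vortex-)paths of the spanning tree, i.e., from a fundamental cycle. As intuition only, the sketch below extracts the ordinary fundamental cycle of a non-tree edge $(u,v)$ with respect to a rooted spanning tree; it ignores vortices and the interior/exterior weighing altogether.
\begin{verbatim}
def fundamental_cycle(parent, u, v):
    """parent: dict vertex -> parent in a rooted spanning tree (root -> None).
    Returns the vertex sequence of the fundamental cycle of the non-tree
    edge (u, v): the tree path u -> lca(u, v) -> v, closed by (u, v)."""
    def path_to_root(x):
        path = [x]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path
    pu, pv = path_to_root(u), path_to_root(v)
    ancestors_u = set(pu)
    lca = next(x for x in pv if x in ancestors_u)   # lowest common ancestor
    return pu[:pu.index(lca) + 1] + list(reversed(pv[:pv.index(lca)]))
\end{verbatim}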
\section{Light Subset Spanners for Minor-Free Metrics}
In our construction, we will use \emph{single-source spanners} and \emph{bipartite spanners} as black boxes. These concepts were introduced initially by Klein~\cite{Klein06} for planar graphs, and then generalized to general graphs by Le~\cite{Le20}.
\begin{lemma}[Single-source spanners~\cite{Le20}]\label{lm:ss-spanner} Let $p$ be a vertex and $P$ be a shortest path in an edge-weighted graph $G$. Let $d(p,P) = R$. There is a subgraph $H$ of $G$ of weight at most $8R\varepsilon^{-2}$ that can be computed in polynomial time such that:
\begin{equation} \label{eq:ss-spanner}
d_G(p,x) \leq d_{P\cup H}(p,x) \leq (1+\varepsilon) d_G(p,x) \qquad\forall x \in P
\end{equation}
\end{lemma}
\begin{lemma}[Bipartite spanners~\cite{Le20}]\label{lm:bipartite-spanner}
Let $W$ be a path and $P$ be a shortest path in an edge-weighted graph $G$. Let $R = \min_{v \in W}d_G(v, P)$ be the distance between $W$ and $P$. Then, there is a subgraph $H$ constructible in polynomial time such that $$d_{H\cup P}(p,q) \leq (1+\varepsilon)d_{G}(p,q) \qquad \forall p \in W, q \in P$$ and $w(H) = O(\varepsilon^{-3})w(W) + O(\varepsilon^{-2})R$.
\end{lemma}
Lemma~\ref{lm:ss-spanner} is extracted from Lemma 4.2 in~\cite{Le20}, and Lemma~\ref{lm:bipartite-spanner} is extracted from Corollary 4.3 in~\cite{Le20}. Given a shortest path $P$, we denote by $\textsc{SSP}(t,P, G)$ a single-source spanner from $t$ to $P$ with stretch $(1+\varepsilon)$ constructed using \Cref{lm:ss-spanner}.
Given a parameter $L>0$, denote
\[
\textsc{SSP}(t,P,G,L)=\begin{cases}
\textsc{SSP}(t,P,G) & d_{G}(t,P)\le L\\
\emptyset & d_{G}(t,P)>L
\end{cases}
\]
That is, in case $d_{G}(t,P)\le L$, $\textsc{SSP}(t,P,G,L)=\textsc{SSP}(t,P,G)$, while otherwise it is the empty set.
Similarly, given a path $W$ and a shortest path $P$, let $\mathtt{BS}(P,W,G)$ be a bipartite spanner between $P$ and $W$ with stretch $(1+\varepsilon)$
constructed using \Cref{lm:bipartite-spanner}.
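In code, the thresholded variant is simply a guard around the single-source spanner construction. The sketch below assumes hypothetical callables \texttt{ssp(t, P, G)} and \texttt{dist(G, t, P)} implementing \Cref{lm:ss-spanner} and the distance $d_G(t,P)$, respectively.
\begin{verbatim}
def ssp_thresholded(t, P, G, L, ssp, dist):
    """Return SSP(t, P, G) when d_G(t, P) <= L, and the empty set otherwise."""
    return ssp(t, P, G) if dist(G, t, P) <= L else set()
\end{verbatim}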
\subsection{Step (1): Planar graphs with a single vortex and bounded diameter, proof of \Cref{lm:one-vortex-Bounded-diam}}\label{sec:oneVortex}
We begin by restating the main lemma of the section:
\SingleVortexBoundedDiam*
This section contains a considerable amount of notation. \Cref{appendix:key} contains a summary of all the definitions and notation
used in the section. The reader is encouraged to refer to this index while reading.
Recall that a \emph{vortex} is a graph equipped with a path decomposition $\{X_1,X_2,\ldots, X_t\}$ and a sequence of $t$ designated vertices $x_1,\ldots, x_t$, called the \emph{perimeter} of the vortex, such that $\forall i\in[t]$, $x_i \in X_i$. The \emph{width} of the vortex is the width of its path decomposition.
We say that a vortex $W$ is \emph{glued} to a face $F$ of a surface embedded graph $G_{\Sigma}$ if $W\cap F$ is the perimeter of $W$ whose vertices appear consecutively along the boundary of $F$.
\paragraph{Graph preprocessing.} In order to simplify the spanner construction and its proof, we modify the graph as follows.
We add an auxiliary vertex $\tilde{x}$, with weight-$D$ edges to all the vertices in the vortex $W$, where $D$ is the diameter of the graph.
In the drawing $G_\Sigma$, we add an arc between the perimeter vertices $x_1,x_t$ and draw $\tilde{x}$ somewhere along this arc.
$\tilde{x}$ is added to the vortex, which is now considered to have perimeter $\tilde{x},x_1,\ldots, x_t$. Note that only the edges $\{\tilde{x},x_1\},\{\tilde{x},x_t\}$ are added to the embedded part.
The bag associated with $\tilde{x}$ is the singleton $\{\tilde{x}\}$. Every other perimeter vertex $x_i$ has an associated bag $X_i\cup \{\tilde{x}\}$.
See \Cref{fig:vortexModification} for an illustration of the modification.
As a result, we obtain a planar graph with a single vortex of width at most $h+1$. Note that the diameter is still bounded by $D$.
We abuse notation and call this graph $G$, its drawing (i.e. planar part) $G_\Sigma$, and its vortex $W$.
In the following, we show how to construct a subset spanner of this graph. A subset spanner of this graph immediately yields
a subset spanner of the original graph: we can simply discard $\tilde{x}$, and the resulting subset spanner would indeed be a subset spanner
of the original graph. To see this, observe that for any pair of terminals $t,t'$, their shortest path in the subset spanner of $G$
does not go through $\tilde{x}$, since otherwise their distance would be at least $2D\ge 2\cdot d_G(t,t')$.
\begin{figure}[]
\centering{\includegraphics[width=.8\textwidth]{fig/VortexDefModified2}}
\caption{\label{fig:vortexModification}\small \it
The modification step illustrated. On the left there is a planar graph with a single vortex, glued to a face colored in blue. The perimeter vertices are colored in red, while all the edges not in the embedded part are colored in gray. On the right, the modified graph is illustrated, where we add a new vertex $\tilde{x}$ with edges towards all other vortex vertices. The perimeter vertices, and the face upon which the vortex is glued, are updated accordingly.
}
\end{figure}
Next, we construct a tree that will be used later to create separators. Let
$T_\Sigma$ be a shortest path tree of $G_\Sigma$ rooted in $W$. Note that $T_\Sigma$ has $t+1$ connected components. Furthermore, every path in $T_\Sigma$ from a perimeter vertex $x_j$ to a vertex $v\in G_\Sigma$ will be fully included in $G_\Sigma$, and will have length at most $D=O_h(L)$.
We extend $T_\Sigma$ to $T_{\tilde{x}}$, a spanning tree of $G$, by adding an edge from $\tilde{x}$ to every vortex vertex; formally, $T_{\tilde{x}}=T_\Sigma\cup\{(\tilde{x},v)\mid v\in W\setminus\{\tilde{x}\}\}$. We think of $T_{\tilde{x}}$ as a spanning tree rooted at $\tilde{x}$.
This choice of root and tree induces a restricted structure on vortex paths and fundamental vortex cycles. Specifically, consider a path $Q=(\tilde{x}=v_0,\dots,v_q=v)$ from the root $\tilde{x}$ to a vertex $v\in G_\Sigma$ and let $\mathcal{V}[\tilde{x},v]=P_0\cup X\cup Y\cup P_1$ be the induced vortex path.
As there are no edges from $\tilde{x}$ towards $G_\Sigma\setminus W$, necessarily $v_1=x_i$ is a perimeter vertex (there are no other neighbors of $\tilde{x}$ on a path towards a vertex in $G_\Sigma$). It holds that $P_0=(\tilde{x})$ is a singleton path, $X=\tilde{X}$, $Y=X_i$, and $P_1=(x_i=v_1,v_2,\dots,v_q=v)$.
Furthermore, consider a fundamental vortex cycle $C$ that consists of the two vortex paths $\mathcal{V}[\tilde{x},v],\mathcal{V}[\tilde{x},u]$.
Then $C$ actually contains $\tilde{x}$, two vortex bags $X_i,X_j$, and two paths in $T_\Sigma$ from $x_i,x_j$
to vertices $v,u$ in $G_\Sigma$, of length $O_h(L)$.
\paragraph{Hierarchical tree construction}
We recursively apply \Cref{lm:AG06one-vortex-separator} to hierarchically divide the vertex set $V$ into disjoint subsets.
Specifically, we have a hierarchical tree $\tau$ of sets rooted at $V$. Each node in $\tau$ is associated with a subset $\Upsilon$ and a graph $G_\Upsilon$, which contains $\Upsilon$ and, in addition, vertices outside of $\Upsilon$.
We abuse notation and denote the tree node by $\Upsilon$. Within the same level of $\tau$, all the vertex sets are disjoint, while the same vertex might belong to many different subgraphs.
We maintain the following invariant:
\begin{invariant}\label{inv:SubgraphSingleVortex}
The vertex set of the graph $G_\Upsilon$ is a subset of $G$.
It contains a single vortex $W_\Upsilon$, and a drawing in the plane which coincides with that of $G_\Sigma$.
Each perimeter vertex $x^\Upsilon_i$ of $W_\Upsilon$ is also a perimeter vertex $x_{i'}$ in $G_\Sigma$.
Furthermore, the bag $X^\Upsilon_i$ associated with $x^\Upsilon_i$ equals $X_{i'}\cap G_\Upsilon$, the bag associated with $x_{i'}$.
Finally, there exists a set $\mathcal{E}$ of perimeter edges that have been added by the algorithm such that
the graph $G_\Upsilon - \mathcal{E}$ is an induced subgraph of $G$.
\end{invariant}
The root vertex $\tilde{x}$ belongs to all the subgraphs $G_\Upsilon$ of all the tree nodes $\Upsilon\in \tau$.
Consider the subgraph $T_\Upsilon=T_{\tilde{x}}\cap G_\Upsilon$ rooted at $\tilde{x}$.
We also maintain the following invariant:
\begin{invariant}\label{inv:tree}
$T_\Upsilon$ is a spanning tree of $G_\Upsilon$.
\end{invariant}
It follows from \Cref{inv:SubgraphSingleVortex} and \Cref{inv:tree} that, in a similar manner to $T_{\tilde{x}}$, every fundamental vortex
cycle consists of $\tilde{x}$, two bags, and two paths of length $O_h(L)$ originating at perimeter vertices.
Given a fundamental cycle $C$, we denote by $\mathcal{P}(C)$ the set of at most $2(h+1)+1$ paths from which $C$ is composed.
We abuse notation here and treat the vertices in the deleted bags as singleton paths.
In each hierarchical tree node $\Upsilon$, if it contains between $1$ and $2(h+1)$ terminals (that is, $1\le |\Upsilon\cap K|\le 2(h+1)$), it is defined
as a leaf node in $\tau$. Otherwise, we use \Cref{lm:AG06one-vortex-separator} to produce a fundamental vortex
cycle $C_\Upsilon$, w.r.t. $T_\Upsilon$ and a weight function to be specified later.
Using the closed curve $\bar{C}_\Upsilon$ induced by $C_\Upsilon$, the
set $\Upsilon$ is partitioned into an interior $\Upsilon^{\mathcal{I}}$ and an exterior $\Upsilon^{\mathcal{E}}$. $\Upsilon^{\mathcal{I}}$ and
$\Upsilon^{\mathcal{E}}$ are the children of $\Upsilon$ in $\tau$ (unless they contain no terminals, in which case they are discarded).
Note that the graph $G_\Upsilon$ contains vertices out of $\Upsilon$. Thus the exterior and interior of $C_\Upsilon$ in $G_\Upsilon$ may
contain vertices out of $\Upsilon$. Nonetheless, $\Upsilon^{\mathcal{E}},\Upsilon^{\mathcal{I}}$ are subsets of $\Upsilon$. Formally, they
are defined as the intersection of $\Upsilon$ with the exterior and interior of $C_\Upsilon$ in $G_\Upsilon$, respectively.
By the definition of $T_\Upsilon$ and assuming \Cref{inv:tree} indeed holds, we deduce:
\begin{observation}\label{obs:FundamentalPathLenght}
Every path $Q\in \mathcal{P}(C_\Upsilon)$ is one of the following:
\begin{enumerate}
\initOneLiners
\item A path $Q$ in $T_{\tilde{x}}$ from a perimeter vertex $x_i$ to a vertex $v\in G_\Upsilon\setminus W_\Upsilon$. In particular, $Q$ is a shortest path in $G$ of length $O_h(L)$.\label{type:simple}
\item A singleton vortex vertex $u\in W_\Upsilon$.\label{type:singleton}
\end{enumerate}
\end{observation}
Denote by $\mathcal{C}_\Upsilon$ the set of all the fundamental vortex cycles removed from the ancestors of $\Upsilon$ in $\tau$.
Denote by $\bar{\mathcal{C}}_\Upsilon$ the set of paths constituting the fundamental vortex cycles in $\mathcal{C}_\Upsilon$.
Note that \Cref{obs:FundamentalPathLenght} also holds for all the ancestors of $\Upsilon$ in $\tau$.
Each path $Q\in \bar{\mathcal{C}}_\Upsilon$ will have a \varnothingh{representative vertex} $v_Q$. Specifically, for a path $Q$ of type (\ref{type:simple}), set $v_Q=v$, while for a singleton path $Q=(u)$ (path of type (\ref{type:singleton})) set $v_Q=u$.
Finally, we define $\mathcal{P}_\Upsilon\subseteq \bar{\mathcal{C}}_\Upsilon$, the subset of shortest paths that are added to $G_\Upsilon$.
Intuitively, $Q\in \bar{\mathcal{C}}_\Upsilon$ joins $\mathcal{P}_\Upsilon$ if it has a neighbor in $\Upsilon$. However, we would like to avoid double counting that might appear due to intersecting paths.
Formally, this is a recursive definition. For $\Upsilon=V$, $\mathcal{P}_V=\emptyset$ and $G_V=G$. Consider $\mathcal{P}_{\Upsilon}$ and $G_{\Upsilon}$.
We define next $\mathcal{P}_{\Upsilon^{\mathcal{E}}}$ and $G_{\Upsilon^{\mathcal{E}}}$ (which will also imply the definition of $T_{\Upsilon^{\mathcal{E}}}$). $\mathcal{P}_{\Upsilon^{\mathcal{I}}}$ and $G_{\Upsilon^{\mathcal{I}}}$ are defined symmetrically.
$\mathcal{P}_{\Upsilon^{\mathcal{E}}}$ will contain all the paths in $\mathcal{P}(C_\Upsilon)$ (the at most $2(h+1)+1$ paths composing $C_\Upsilon$). In addition, we add to $\mathcal{P}_{\Upsilon^{\mathcal{E}}}$ every path $Q\in\mathcal{P}_{\Upsilon}$ such that the
representative vertex $v_Q$ belongs to the exterior of $C_\Upsilon$ in $G_\Upsilon$.
The graph $G_{\Upsilon^{\mathcal{E}}}$ is defined as the graph induced by the vertex set $\Upsilon^{\mathcal{E}}$ and all the vertices belonging to paths in $\mathcal{P}_{\Upsilon^{\mathcal{E}}}$.
In addition, in order to maintain the vortex intact, we add additional edges between the perimeter vertices. Specifically, suppose that the perimeter vertices in $G_\Upsilon$ are $\tilde{x},x_1,\dots,x_q$, while only $\tilde{x},x_{i_1},\dots,x_{i_l}$ belong to $G_{\Upsilon^{\mathcal{E}}}$. Then we add the edges $\{x_{i_1},x_{i_2}\},\{x_{i_2},x_{i_3}\},\dots, \{x_{i_{l-1}},x_{i_l}\}$ (unless these edges already exist),
where the weight of $\{x_{i_{j}},x_{i_{j+1}}\}$ is $d_G(x_{i_{j}},x_{i_{j+1}})$.
We maintain the following invariant:
\begin{invariant}\label{inv:holes}
$|\mathcal{P}_\Upsilon|\le 12\cdot (h+1)$.
\end{invariant}
It is straightforward that \Cref{inv:SubgraphSingleVortex} is maintained.
\begin{claim}\label{clm:InvTreeMaintained}
\Cref{inv:tree} is maintained.
\end{claim}
\hspace{-18pt}\emph{Proof.~}
We will show that $T_{\Upsilon^{\mathcal{E}}}$ is a tree, the argument for $T_{\Upsilon^{\mathcal{I}}}$ is symmetric.
It is clear that $T_{\Upsilon^{\mathcal{E}}}$ is acyclic, thus it will be enough to show that it is connected. As $\tilde{x}$ is part of the fundamental vortex cycle $C_\Upsilon$, $\tilde{x}\in G_{\Upsilon^{\mathcal{E}}}$, it thus belongs to $T_{\Upsilon^{\mathcal{E}}}$.
We show that every vertex of $G_{\Upsilon^{\mathcal{E}}}$ has a path towards $\tilde{x}$ in $T_{\Upsilon^{\mathcal{E}}}$.
Consider a vertex $u\in G_{\Upsilon^{\mathcal{E}}}$.
First note that if $u\in W_{\Upsilon^{\mathcal{E}}}$, then $(\tilde{x},u)\in T_{\Upsilon}\cap G_{\Upsilon^{\mathcal{E}}}=T_{\Upsilon^{\mathcal{E}}}$.
Next, if $u\in G_{\Upsilon^{\mathcal{E}}}\setminus \Upsilon^{\mathcal{E}}$, then $u$ is a part of some path $Q_u\in \mathcal{P}_{\Upsilon^{\mathcal{E}}}$ from $u$ to a perimeter vertex $x_{i_u}$. Here $Q_u\cup (\tilde{x},x_{i_u})\subseteq T_{\Upsilon}\cap G_{\Upsilon^{\mathcal{E}}}=T_{\Upsilon^{\mathcal{E}}}$, thus we are done.
For the last case ($u\in \Upsilon^{\mathcal{E}}\setminus W_{\Upsilon^{\mathcal{E}}}$), let $x_i,x_j$ be the two perimeter vertices such that the fundamental vortex cycle $C_\Upsilon$ contains the paths $Q_i,Q_j$ starting at $x_i,x_j$. \footnote{It is possible that $x_i=x_j$. In this case, we will abuse notation and treat $Q_i,Q_j$ as different paths.\label{foot:diffPaths}}
Let $Q_u\subseteq T_\Upsilon$ be a path from a perimeter vertex towards $u$ (exist by the induction hypothesis).
Note that the paths $\tilde{x}\circ Q_i,\tilde{x}\circ Q_j,\tilde{x}\circ Q_u$ are all paths in the tree $T_\Upsilon$. In particular, while they might have a common prefix, once they diverge, the paths will not intersect again.
Denote $\tilde{x}\circ Q_u=(\tilde{x}=v_0,v_1,\dots,v_q=u)$.
Let $v_s\in \tilde{x}\circ Q_u$ be the vertex with maximal index intersecting $Q_i\cup Q_j$.
All the vertices $(\tilde{x}=v_0,v_1,\dots,v_s)$ belong to the fundamental vortex cycle, implying that they belong to $G_{\Upsilon^{\mathcal{E}}}\cap T_{\Upsilon}$.
\begin{wrapfigure}{r}{0.19\textwidth}
\begin{center}
\includegraphics[width=0.19\textwidth]{fig/Invariant2}
\end{center}
\end{wrapfigure}
As $u$ is in the exterior of $\bar{C}_\Upsilon$ (the closed curve associated with $C_\Upsilon$), the entire path $(v_{s+1},\dots,v_q=u)$ is in the exterior of $\bar{C}_\Upsilon$.
First suppose that all the vertices $v_{s+1},\dots,v_q=u$ belong to $\Upsilon$. It follows that $v_{s+1},\dots,v_q=u$
belong to $\Upsilon^{\mathcal{E}}$, and therefore also to $G_{\Upsilon^{\mathcal{E}}}\cap T_{\Upsilon}=T_{\Upsilon^{\mathcal{E}}}$.
Finally, assume that not all of $\{v_{s+1},\dots,v_q=u\}$ belong to $\Upsilon$. Let $\alpha$ be the maximal index such that $v_\alpha\in G_\Upsilon\setminus\Upsilon$. It follows that there is a path $Q_\alpha\in\mathcal{P}_\Upsilon$ such that $v_\alpha\in Q_\alpha$. Note that the intersection of $Q_\alpha$ with the fundamental cycle $C_\Upsilon$ equals to $(\tilde{x}=v_0,v_1,\dots,v_\alpha)$. It follows that both $v_\alpha$ and $v_{Q_\alpha}$ (the representative vertex of $Q_\alpha$) belong to the exterior of $\bar{C}_\Upsilon$, implying $Q_\alpha\in \mathcal{P}_{\Upsilon^{\mathcal{E}}}$.
We conclude that all the vertices along $Q_u$ belong to $G_{\Upsilon^{\mathcal{E}}}$. The claim follows.\mbox{}
$\Box$\\
Next, we define the weight function $\omega$ that we use to invoke \Cref{lm:AG06one-vortex-separator}. There are two cases. First, if $|\mathcal{P}_\Upsilon|\le 10\cdot (h+1)$, then $\omega(t)=1$ iff $t$ is a terminal in $\Upsilon$, and otherwise $\omega(t)=0$.
In the second case (when $|\mathcal{P}_\Upsilon|> 10\cdot (h+1)$), initially the weight of all the vertices is $0$. For every $Q\in\mathcal{P}_\Upsilon$, the weight of $v_Q$, its representative vertex, is increased by $1$.
\begin{claim}\label{clm:InvHolesMaintained}
\Cref{inv:holes} is maintained. Furthermore, if $|\mathcal{P}_\Upsilon|> 10\cdot (h+1)$ then $|\mathcal{P}_{\Upsilon^{\mathcal{E}}}|,|\mathcal{P}_{\Upsilon^{\mathcal{I}}}|\le 10\cdot (h+1)$.
\end{claim}
\begin{proof}
Given that $|\mathcal{P}_{\Upsilon}|\le 12\cdot (h+1)$, we will prove that $|\mathcal{P}_{\Upsilon^{\mathcal{E}}}|\le 12\cdot (h+1)$.
The argument for $\mathcal{P}_{\Upsilon^{\mathcal{I}}}$ is symmetric.
For $\Upsilon=V$, $|\mathcal{P}_V|=0$ and $|\mathcal{P}_{V^{\mathcal{E}}}|\le2(h+1)+1$. For every other $\Upsilon$, as $\{\tilde{x}\}\in\mathcal{P}_\Upsilon$, it is clear that $$|\mathcal{P}_{\Upsilon^{\mathcal{E}}}|\le |\mathcal{P}_{\Upsilon}|+|\mathcal{P}(C_\Upsilon)\setminus\{\tilde{x}\}|\le |\mathcal{P}_{\Upsilon}|+2(h+1)~.$$
Thus in the first case, where $|\mathcal{P}_{\Upsilon}|\le 10\cdot (h+1)$, clearly $|\mathcal{P}_{\Upsilon^{\mathcal{E}}}|\le12\cdot (h+1)$.
Otherwise, the total weight of all the vertices is $|\mathcal{P}_{\Upsilon}|\le 12\cdot (h+1)$. By \Cref{lm:AG06one-vortex-separator}, the exterior of $C_\Upsilon$ in $G_\Upsilon$ contains at most $\frac{2}{3}\cdot |\mathcal{P}_{\Upsilon}|\le \frac{2}{3}\cdot 12\cdot (h+1)=8\cdot (h+1)$ representative vertices. As $\mathcal{P}(C_\Upsilon)$ contains at most $2(h+1)$ new paths, we conclude $|\mathcal{P}_{\Upsilon^{\mathcal{E}}}|\le 8\cdot (h+1)+2(h+1)= 10\cdot (h+1)$, as required.
\end{proof}
Before we turn to the construction of the spanner, we observe the following crucial fact regarding the graph $G_\Upsilon$.
\begin{claim}\label{clm:GUpsilon}
Consider a set $\Upsilon\in \tau$. Let $u$ be some vertex such that $u$ has a neighbor in $\Upsilon$. Then $u\in G_\Upsilon$.
\end{claim}
\begin{proof}
The proof is by induction on the construction of $\tau$.
Suppose the claim holds for $\Upsilon$, we will prove it for $\Upsilon^{\mathcal{E}}$ (the proof for $\Upsilon^{\mathcal{I}}$ is symmetric).
Consider a pair of neighboring vertices $v,u$ where $v\in \Upsilon^{\mathcal{E}}$. As $\Upsilon^{\mathcal{E}}\subseteq \Upsilon$, $v\in \Upsilon$. Thus by the induction hypothesis $u\in G_\Upsilon$.
If $u\in \Upsilon^{\mathcal{E}}$ or $u\in C_\Upsilon$, then trivially $u\in G_{\Upsilon^{\mathcal{E}}}$, and we are done.
Otherwise, according to \Cref{clm:CisSeperator}, there are no edges between the exterior and interior of $C_\Upsilon$. Thus $u$ belongs to the exterior of $C_\Upsilon$ in $G_\Upsilon$.
Further, as $u\notin \Upsilon^{\mathcal{E}}\cup C_\Upsilon$ it must be that $u\notin\Upsilon$. In particular, $u$ belongs to a path $Q_u\in \mathcal{P}_\Upsilon$.
We proceed by case analysis.
\begin{itemize}
\item First assume that $u$ belongs to the embedded part of $G_\Upsilon$.
Here $Q_u$ is a path from a perimeter vertex $x_l$ towards a representative vertex $v_{Q_u}$.
Let $x_i,x_j$ be the two perimeter vertices such that the fundamental vortex cycle $C_\Upsilon$
contains the paths $Q_i,Q_j$ starting at $x_i,x_j$. $^{\ref{foot:diffPaths}}$
Note that the paths $\tilde{x}\circ Q_i,\tilde{x}\circ Q_j,\tilde{x}\circ Q_u$ are all paths in the tree $T_\Upsilon$. In particular, while $Q_i, Q_j, Q_u$ might have a common prefix, once they diverge, the paths will not intersect again.
It follows that the suffix of the path $Q_u$ from $u$ to the representative $v_{Q_u}$ will not intersect $Q_i,Q_j$ (as otherwise $u\in C_\Upsilon$). As $u$ is in the exterior of $C_\Upsilon$, $v_{Q_u}$ will also belong to the exterior. Thus $u\in Q_u\subseteq G_{\Upsilon^{\mathcal{E}}}$.
\item Second, assume that $u$ does not belong to the embedded part. It follows that $Q_u=\{u\}$. As $v_{Q_u}=u$ belongs to the exterior of $C_\Upsilon$,
it follows that $u\in Q_u\subseteq G_{\Upsilon^{\mathcal{E}}}$.
\end{itemize}
\end{proof}
\paragraph{Construction of the spanner $H$, and bounding its weight}
For each node $\Upsilon\in \tau$ of the hierarchical tree, we will construct a spanner $H_\Upsilon$. The final spanner $H=\cup_{\Upsilon\in\tau}H_\Upsilon$ is the union of all these spanners.
We argue that $\tau$ contains $O(k)$ nodes, and that for every $\Upsilon\in \tau$, $w(H_\Upsilon)=O_{h}(L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$. It then follows
that $w(H)=O_{h}(kL\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
First consider a leaf node $\Upsilon\in \tau$, let $K_\Upsilon=\Upsilon\cap K$ be the set of terminals in $\Upsilon$. As $\Upsilon$ is a leaf, $|K_\Upsilon|= O(h)$.
For every shortest path $Q\in \mathcal{P}_{\Upsilon}$ and terminal $t\in K_\Upsilon$, we add to $H_\Upsilon$ a $(1+\varepsilon)$ single-source spanner from $t$ to $Q$ (w.r.t. $G$) using \Cref{lm:ss-spanner}. Additionally, for every pair of terminals $t,t'\in K_\Upsilon$, we add the shortest path from $t$ to $t'$ in $G$ to $H_\Upsilon$. Formally,
\[
H_{\Upsilon}=\cup_{t\in K_{\Upsilon}}\cup_{Q\in\ensuremath{\mathcal{P}_{\Upsilon}}\cup K_{\Upsilon}}\mathtt{SSP}(t,Q,G)~,
\]
where we abuse notation and treat vertices as singleton paths.
As each $Q \in\mathcal{P}_{\Upsilon}$ is a shortest path in $G$, and the distance from every $t\in K$ is bounded by the diameter $D=O_h(L)$ of $G$, by \Cref{lm:ss-spanner} $w(\mathtt{SSP}(t,Q,G))=O_{h}(L\cdot\mathrm{poly}(\frac{1}{\varepsilonlon}))$.
We conclude that $w(H_{\Upsilon})=\sum_{t\in K_{\Upsilon}}\sum_{Q\in\ensuremath{\mathcal{P}_{\Upsilon}}\cup K_{\Upsilon}}w\left(\mathtt{SSP}(t,Q,G)\right)=O_{h}(L\cdot\mathrm{poly}(\frac{1}{\varepsilonlon}))$, where we used \Cref{inv:holes} to bound the number of addends by $O(h^2)$.
For the general case ($\Upsilon$ is an internal node), recall that we have a fundamental vortex cycle $C_\Upsilon$, which consist of at most $O(h)$ paths $\mathcal{P}(C_\Upsilon)$.
First, we add all the paths in $\mathcal{P}(C_\Upsilon)$ to $H_\Upsilon$.
Next, for every pair of shortest paths $Q\in \mathcal{P}_{\Upsilon}\cup \mathcal{P}(C_\Upsilon)$ and $Q'\in \mathcal{P}(C_\Upsilon)$, we add to $H_\Upsilon$ a $(1+\varepsilon)$ bipartite spanner between $Q$ and $Q'$ (w.r.t. $G$) using \Cref{lm:bipartite-spanner}. Formally,
\[
H_{\Upsilon}=\mathcal{P}(C_{\Upsilon})\cup\left(\cup_{Q\in\mathcal{P}(C_{\Upsilon})}\cup_{Q'\in\ensuremath{\mathcal{P}_{\Upsilon}}\cup\mathcal{P}(C_{\Upsilon})}\mathtt{BS}(Q,Q',G)\right)~.
\]
As each path in the union is a shortest path in $G$, a graph with diameter $O_h(L)$, we have
by \Cref{lm:bipartite-spanner} that $\forall Q,Q',\,w\left(\mathtt{BS}(Q,Q',G)\right)=O_{h}(L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
\sloppy We conclude that
$w(H_{\Upsilon})\le\sum_{Q\in\mathcal{P}(C_{\Upsilon})}w\left(Q\right)+\sum_{Q\in\mathcal{P}(C_{\Upsilon})}\sum_{Q'\in\ensuremath{\mathcal{P}_{\Upsilon}}\cup\mathcal{P}(C_{\Upsilon})}w\left(\mathtt{BS}(Q,Q',G)\right)=O_{h}(L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))~,$
where we used \Cref{inv:holes} to bound the number of addends by $O(h^2)$.
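Schematically, the spanner piece of an internal node is a union over $O(h^2)$ path pairs. The sketch below (paths are vertex lists, and \texttt{bs(Q, Qp, G)} is a hypothetical routine returning the edge set of the bipartite spanner of \Cref{lm:bipartite-spanner}) mirrors the formula above.
\begin{verbatim}
def spanner_for_internal_node(paths_C, paths_Upsilon, G, bs):
    """paths_C: the O(h) paths composing C_Upsilon; paths_Upsilon: P_Upsilon.
    Returns the edge set of H_Upsilon."""
    H = set()
    for Q in paths_C:
        H |= set(zip(Q, Q[1:]))                 # the path Q itself
        for Qp in list(paths_C) + list(paths_Upsilon):
            H |= set(bs(Q, Qp, G))              # (1 + eps) bipartite spanner
    return H
\end{verbatim}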
Next, we bound the number of nodes in $\tau$.
We say that a node $\Upsilon'\in\tau$ is a grandchild of $\Upsilon\in \tau$ if there is a node $\Upsilon''$ which is the child of $\Upsilon$, and the parent of $\Upsilon'$.
Consider some node $\Upsilon$. It follows from \Cref{clm:InvHolesMaintained} that either $|\mathcal{P}_\Upsilon|\le 10\cdot (h+1)$, or for both its children $|\mathcal{P}_{\Upsilon^{\mathcal{E}}}|,|\mathcal{P}_{\Upsilon^{\mathcal{I}}}|\le 10\cdot (h+1)$. In particular, either in $\Upsilon$ or in both of its children, the separator is chosen w.r.t.\ terminal weights, and hence the number of terminals drops by a factor of $\frac{2}{3}$ in the respective children. We conclude that if $\Upsilon'$ is a grandchild of $\Upsilon$, then $|K_{\Upsilon'}|\le\frac23 |K_{\Upsilon}|$.
For the sake of analysis, we will divide $\tau$ into two trees, $\tau_{O}$ and $\tau_{E}$. $\tau_{E}$ (resp. $\tau_{O}$) contains all the nodes of even (resp. odd) depth. There is an edge between $\Upsilon$ and $\Upsilon'$ if $\Upsilon'$ is a grandchild of $\Upsilon$.
Consider $\tau_{E}$. Note that the number of leaves is bounded by $k$ (as they are all disjoint and contain at least one terminal). Further, if an internal node $\Upsilon$ (other than the root $V$) has degree $2$ in $\tau_E$, it follows that $\Upsilon$ has a single grandchild in $\tau$. There is some terminal in $\Upsilon\setminus\Upsilon'$ (as $|K_{\Upsilon'}|\le\frac23 |K_{\Upsilon}|$). As there are $k$ terminals, we conclude that the number of degree-$2$ nodes is bounded by $k$. It follows that $\tau_{E}$ has $O(k)$ nodes. A similar argument implies that $\tau_{O}$ has $O(k)$ nodes. It follows that $\tau$ has $O(k)$ nodes. We conclude
\[
w(H)\le\sum_{\Upsilon\in\tau}w(H_{\Upsilon})=O_{h}(kL\cdot\mathrm{poly}(\frac{1}{\varepsilon}))~.
\]
\paragraph{Bounding the stretch}
The following claim will be useful for bounding the stretch between terminals:
\begin{claim}\label{clm:TerminalToInactive}
Consider an internal node $\Upsilon\in\tau$ and a terminal vertex $t\in \Upsilon$. For every fundamental vortex cycle vertex $u\in C_{\Upsilon}$, it holds that $d_H(t,u)\le (1+\varepsilon)d_G(t,u)$.
\end{claim}
\begin{proof}
Let $Q^u\in \mathcal{P}(C_\Upsilon)$ be the shortest path belonging to the fundamental vortex cycle that contains $u$.
Let $Q_{t,u}=\{t=v_0,v_1,\dots,v_s=u\}$ be the shortest path from $t$ to $u$ in $G$.
The proof is by induction on $s$ (the number of hops in the path). We proceed by case analysis.
\begin{itemize}
\item Suppose that not all the vertices in $Q_{t,u}\setminus\{u\}$ belong to $\Upsilon$.
Let $v_i$ be the vertex with minimal index not in $\Upsilon$.
As $v_{i-1}\in \Upsilon$, by \Cref{clm:GUpsilon} $v_i\in G_\Upsilon$. In particular, there is some path $Q^{v_i}\in\mathcal{P}_\Upsilon$ such that $v_i\in Q^{v_i}$, where $Q^{v_i}$ is a path belonging to a fundamental vortex cycle removed in an ancestor of $\Upsilon$ in $\tau$. By the induction hypothesis, $d_H(t,v_i)\le (1+\varepsilonlon)d_G(t,v_i)$.
During the construction of $H_\Upsilon$, we added $\mathtt{BS}(Q^u,Q^{v_i},G)$, a bipartite spanner between the paths $Q^u,Q^{v_i}$, to $H_\Upsilon$. Thus
$d_H(t,u)\le d_H(t,v_i)+d_H(v_i,u)\le(1+\varepsilonlon)(d_G(t,v_i)+d_G(v_i,u))=(1+\varepsilonlon)d_G(t,u)$.
\item Otherwise (all vertices in $Q_{t,u}\setminus\{u\}$ belong to $\Upsilon$), suppose that there is some vertex $v_i \neq u$ that belongs to $C_{\Upsilon}$.
By the induction hypothesis, $d_H(t,v_i)\le (1+\varepsilon)d_G(t,v_i)$.
By the construction of $H_\Upsilon$, $d_{H_\Upsilon}(v_i,u)\le(1+\varepsilon)d_G(v_i,u)$. It follows that
$d_H(t,u)\le(1+\varepsilon)d_G(t,u)$.
\item Otherwise (all vertices in $Q_{t,u}\setminus\{u\}$ belong to $\Upsilon$, and $u$ is the only vertex in $Q_{t,u}$
belonging to $C_{\Upsilon}$), suppose there exists a future hierarchical step such that in a node $\Upsilon'$,
some vertices in $Q_{t,u}\setminus\{u\}$ belong to the fundamental vortex cycle $C_{\Upsilon'}$.
Then, let $\Upsilon'$ be the first such set (that is, the closest to $\Upsilon$ w.r.t. $\tau$).
Let $v_i\in C_{\Upsilon'}$. By the minimality of $\Upsilon'$, all the vertices $\{t=v_0,\dots,v_i,\dots,v_{s-1}\}$ belong to $\Upsilon'$.
By the induction hypothesis, $d_H(t,v_i)\le (1+\varepsilon)d_G(t,v_i)$. Further, as $v_{s-1}\in \Upsilon'$, by \Cref{clm:GUpsilon} $v_s\in G_{\Upsilon'}$. In particular, the spanner $H_{\Upsilon'}$ has stretch $(1+\varepsilon)$ between $v_i$ and $u$ (as we added a bipartite spanner between two paths containing $v_i,u$). It follows that $d_H(t,u)\le(1+\varepsilon)d_G(t,u)$.
\item Otherwise, (all vertices in $Q_{t,u}\setminus\{u\}$ belong to $\Upsilon$, and $u$ is the only vertex in $Q_{t,u}$ belonging to a fundamental cycle in $\Upsilon$, and in all future hierarchical steps).
Then, all vertices in $Q_{t,u}\setminus\{u\}$ belong to some leaf node $\Upsilon'\in \tau$.
By \Cref{clm:GUpsilon} $v_s\in G_{\Upsilon'}$. In particular $u$ belongs to some path in $\mathcal{P}_{\Upsilon'}$.
During the construction, we added
to $H_{\Upsilon'}$ a single source spanner from $t$ to this path. It follows that $d_H(t,u)\le(1+\varepsilonlon)d_G(t,u)$.
\end{itemize}
\end{proof}
Consider a pair of terminals $t,t'$ with a shortest path $Q_{t,t'}$ in $G$.
If $t$ and $t'$ end up together in a leaf node $\Upsilon\in \tau$, then we added a shortest path between them to $H_\Upsilon$. Thus $d_H(t,t')=d_G(t,t')$.
Otherwise, let $v\in Q_{t,t'}$ be the first vertex which was added to a fundamental vortex cycle during the construction of $\tau$ (first w.r.t. the order defined by $\tau$). By \Cref{clm:TerminalToInactive} it holds that
$$d_H(t,t')\le d_H(t,v)+d_H(v,t')\le(1+\varepsilon)(d_G(t,v)+d_G(v,t'))=(1+\varepsilon)d_G(t,t')~,$$
hence the bound on the stretch.
\subsection{Step (2.0): Unbounded diameter, proof of \Cref{lm:one=Vortex-Unbounded-diam}}\label{subsec:diameterReduction}
We start by restating the lemma we will prove in this subsection.
\LemmaOneVortexUbnoundedDiam*
The \emph{strong diameter} \footnote{On the other hand, the \emph{weak diameter} of a cluster $A\subseteq V$ equals the maximal distance between a pair of vertices $u,v\in A$ in the original graph. Formally, $\max_{\{u,v\in A\}}d_{G}(u,v)$. See \cite{Fil19padded,Fil20scattering} for further details on sparse covers and related notions.} of a cluster $A\subseteq V$ equals the maximal distance between a pair of vertices $u,v\in A$ in the induced graph $G[A]$. Formally, $\max_{\{u,v\in A\}}d_{G[A]}(u,v)$.
The main tool we will use here is \emph{sparse covers}.
\begin{definition}[Sparse Cover]
Given a weighted graph $G=(V,E,w)$, a collection of clusters $\mathcal{C} = \{C_1,..., C_t\}$ is called a $(\rho,s,\Delta)$-strong sparse cover if the following conditions hold.
\begin{enumerate}
\item Bounded diameter: The strong diameter of every $C_i\in\mathcal{C}$ is bounded by $\rho\cdot\Delta$.\label{condition:RadiusBlowUp}
\item Padding: For each $v\in V$, there exists a cluster $C_i\in\mathcal{C}$ such that $B_G(v,\Delta)\subseteq C_i$.
\item Overlap: For each $v\in V$, there are at most $s$ clusters in $\mathcal{C}$ containing $v$.
\end{enumerate}
We say that a graph $G$ admits a $(\rho,s)$-strong sparse cover scheme, if for every parameter $\Delta>0$ it admits a $(\rho,s,\Delta)$-strong sparse cover. A graph family $\mathcal{G}$ admits a $(\rho,s)$-strong sparse cover scheme, if every $G\in\mathcal{G}$ admits a $(\rho,s)$-strong sparse cover scheme.
\end{definition}
Abraham et al. \cite{AGMW10} constructed strong sparse covers.
\begin{theorem}[\cite{AGMW10}]\label{AGMW10Covers}
Every weighted graph excluding $K_{r,r}$ as a minor admits an $(O(r^2), 2^{O(r)}\cdot r!)$-strong sparse cover scheme constructible in polynomial time.
\end{theorem}
Consider a graph $G$ as in the lemma with terminal set $K$ and parameters $\varepsilon,L$. Note that $G$ excludes $K_{h+2,h+2}$ as a minor.
Using \Cref{AGMW10Covers}, let $\{C_1,..., C_t\}$ be an $(O(h^2), 2^{O(h)}\cdot h!,L)$-strong sparse cover for $G$. Note that each cluster $C_i$ has strong diameter $O(h^2)\cdot L=O_h(L)$. For every $i$, using \Cref{lm:one-vortex-Bounded-diam}, let $H_i$ be a $(1+\varepsilon)$-spanner for $G[C_i]$, w.r.t. terminal set $K_i=C_i\cap K$ and parameters $L,\varepsilon$.
Set $H=\bigcup H_i$. Then $H$ has weight
\[
w(H)\le\sum_{i}w(H_{i})\le\sum_{i}O_{h}(|K_{i}|\cdot L)=O_{h}(k\cdot L\cdot2^{O(h)}\cdot h!)=O_{h}(k\cdot L)~,
\]
where the first equality follows as every terminal is counted at most $2^{O(h)}\cdot h!$ times in the sum.
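For transparency, the double counting behind this equality reads
\[
\sum_{i}|K_{i}|=\sum_{t\in K}\left|\{i\,:\,t\in C_{i}\}\right|\le k\cdot 2^{O(h)}\cdot h!~,
\]
by the overlap property of the sparse cover.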
We argue that $H$ preserves all terminal distances up to $L$.
Consider a pair of terminals $t,t'$ such that $d_G(t,t')\le L$. There is a cluster $C_i$ such that
the ball of radius $L$ around $t$ is contained in $C_i$. In particular, the entire shortest path from $t$ to $t'$ is contained in $C_i$. We conclude
$$d_H(t,t')\le d_{H_i}(t,t')\le (1+\varepsilon)\cdot d_{G[C_i]}(t,t')=(1+\varepsilon)\cdot d_{G}(t,t')~.$$
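To summarize the reduction of this subsection, the following minimal sketch (Python-style) shows how a strong sparse cover with $\Delta=L$ localizes the construction. The routines \texttt{sparse\_cover} and \texttt{local\_spanner} are hypothetical placeholders standing for \Cref{AGMW10Covers} and \Cref{lm:one-vortex-Bounded-diam}, respectively; the sketch is an illustration and not part of the formal proof.
\begin{verbatim}
# Illustrative sketch only; the two helper routines are assumed placeholders.
def spanner_via_sparse_cover(G, K, L, eps, sparse_cover, local_spanner):
    H = set()                                  # edge set of the spanner H
    for C in sparse_cover(G, L):               # clusters of strong diameter O_h(L)
        K_C = [t for t in K if t in C]         # terminals inside the cluster
        H |= local_spanner(G, C, K_C, L, eps)  # edges of a (1+eps)-spanner of G[C]
    return H
\end{verbatim}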
\subsection{Step (2.1): Reducing vortices, proof of \Cref{lm:planar-with-many-vortices}}\label{subsec:reducingVortices}
We start by restating \Cref{lm:planar-with-many-vortices}:
\NearlyEmbdablSpannerNoApicesNoGenus*
This subsection is essentially devoted to proving the following lemma:
\begin{lemma}\label{lm:vorexReduce}
Consider a graph $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{\ell}$, where $G_{\Sigma}$ is drawn on the plane, and each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$.
Then given a terminal set $K$ of size $k$, and parameter $L>0$, there is an induced subgraph $G'$ of $G$ and a spanning subgraph $H_{vo}$ of $G$ such that:
\begin{OneLiners}
\item $G'$ can be drawn on the plane with a single vortex of width at most $h$.
\item $w(H_{vo})\le O(k\ell L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
\item For every pair of terminals $t,t'\in K$ at distance at most $L$, either $d_{G'}(t,t')=d_G(t,t')$ or $d_{H_{vo}}(t,t')\le (1+\varepsilon)d_G(t,t')$.
\end{OneLiners}
\end{lemma}
Given \Cref{lm:vorexReduce}, \Cref{lm:planar-with-many-vortices} easily follows.
\begin{proof}[Proof of \Cref{lm:planar-with-many-vortices}]
We begin by applying \Cref{lm:vorexReduce} on the graph $G$. As a result we receive the graphs $G',H_{vo}$, where $G'$ has a single vortex of width at most $h$, $H_{vo}$ has weight $O_{h}(k\ell L)\cdot\mathrm{poly}(\frac{1}{\varepsilon})$, and for every pair of terminals at distance up to $L$, either $d_{G'}(t,t')=d_G(t,t')$ or $d_{H_{vo}}(t,t')\le (1+\varepsilon)d_G(t,t')$.
Next, we apply \Cref{lm:one=Vortex-Unbounded-diam} on $G'$ and receive a $(1+\varepsilon)$-subset spanner $H'$ of weight $O_{\varepsilon}(kL\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$ that preserves all terminal distances up to $L$ (w.r.t. $G'$). Set $H=H'\cup H_{vo}$. Note that $H$ has weight $O_{h}(kL\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$. Let $t,t'\in K$ be a pair of terminals at distance at most $L$. Then either $d_{H'}(t,t')\le(1+\varepsilon)d_{G'}(t,t')=(1+\varepsilon)d_G(t,t')$, or $d_{H_{vo}}(t,t')\le (1+\varepsilon)d_G(t,t')$, implying the lemma.
\end{proof}
Before turning to the proof of \Cref{lm:vorexReduce}, we introduce the \emph{vortex merge} procedure.
Given two vortices $W,W'$, a \emph{proper vortex path} $P$ is a path in $G$ between a vertex $v\in W$ and a vertex $u\in W'$ such that all but the first and last vertices of $P$ belong to the planar part $G_\Sigma$, and $P$ does not contain any other vortex vertices. Given a proper vortex path $P=\{v=v_0,\dots,v_s=u\}$ from $v\in W$ to $u\in W'$, the following operation is called a \emph{vortex merge} w.r.t. $P$:
Let $X_1,\dots,X_t$ (respectively $Y_1,\dots,Y_{t'}$) be a path decomposition of $W$ (respectively $W'$) with perimeter $x_1,\dots,x_t$ ($y_1,\dots,y_{t'}$) such that $v=x_i$ ($u=y_{j}$).
Let $F_W$ and $F_{W'}$ be respectively the face containing the vertices of $W$ and $W'$ in the planar part.
The operation consists in \emph{cutting open} the surface along the path $P$ (see e.g.:~\cite{Klein08}
for a formal definition), resulting in two copies $P_1,P_2$ of the vertices
of the path. Then for each copy $P_i$, delete all the edges that have exactly one endpoint that belongs to
$P_i \cup \{x_{i-1}, x_{i+1}, y_{j-1}, y_{j+1}\}$, and finally contract all
the edges in $P_i$. This yields edges either between
$\{x_{i-1},y_{j-1}\}$ and $\{x_{i+1},y_{j+1}\}$, or between $\{x_{i-1},y_{j+1}\}$ and $\{x_{i+1},y_{j-1}\}$ -- in the following we assume that
the former happened.
Since contraction and deletion preserve the genus, the genus of the embedded part has not increased.
See \Cref{fig:VortexMerge} for illustration.
The process produces a new vortex, glued to the face $x_1,\dots,x_{i-1},y_{j-1},\dots,y_1,y_{t'},\dots,y_{j+1},x_{i+1}\dots,x_t$, which will be the perimeter vertices. The bags in the path decomposition will be $X_1,\dots,X_{i-1},Y_{j-1},\dots,Y_1,Y_{t'},\dots,Y_{j+1},X_{i+1}\dots,X_t$, respectively (the newly added edges will be part of the embedded graph).
\begin{figure}[]
\centering{\includegraphics[scale=0.9]{fig/VortexMerge3}}
\caption{\label{fig:VortexMerge}\small \it
The graph $G$, which has three vortices $W,W',W_1$, is illustrated on the left. The (purple) path $P$ is a proper vortex path between $W,W'$. On the right, we illustrate the vortex merge that creates a new graph $G'$ by deleting the path $P$ and adding two new edges.
}
\end{figure}
\begin{observation}\label{obs:vortexMerge}
Consider a graph $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{\ell}$, where $G_{\Sigma}$ is drawn on the plane
and each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$.
Let $P$ be a proper vortex path between $W_\ell$ and $W_{\ell-1}$, and consider the graph $G'$ created by a vortex merge w.r.t. $P$.
Then $G'$ has a planar drawing with vortices $W_1,\ldots, W_{\ell-2},\tilde{W}_{\ell-1}$, all of width at most $h$.
\end{observation}
We are now ready to prove the main lemma of the subsection:
\begin{algorithm}[h]
\caption{\texttt{Contracting Vortices}} \label{alg:vortex}
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Graph $G=(V,E,w)=G_\Sigma\cup W_1\cup\dots\cup W_\ell$, $k$ terminals $K\subseteq V$}
\Output{Induced graph $G'$ and subgraph $H_{vo}$ of $G$}
\BlankLine
Let $G_1 \leftarrow G$, $H_{vo}\leftarrow\emptyset$, $\textsc{Im}\leftarrow\emptyset$\;
\For{$j=1$ to $\ell-1$}{
Let $P_j$ be the shortest proper vortex path in $G_j$ between two distinct vortices $W,W'$, with endpoints $x_i\in W$ and $y_j\in W'$, and let $\mathcal{P}_j$ be the set of paths consisting of $P_j$ and all the singleton paths consisting of vertices in $X_i\cup Y_j$
\footnotemark \;
$H_{vo}\leftarrow H_{vo}\cup\left(\cup_{t\in K\cap G_j}\cup_{P\in\mathcal{P}_j} \textsc{SSP}(t,P,G_j,L)\right)$\;
Create a new graph $G_{j+1}$ by performing a vortex merge w.r.t. $P_j$ in $G_j$\;
Add to $\textsc{Im}$ the (at most $2$) new edges $G_{j+1}\setminus G_j$\;
}
Remove from $G_\ell$ and from $H_{vo}$ all the edges in $\textsc{Im}$\label{line:removeIm}\;
\Return $G_\ell$ and $H_{vo}$\;
\end{algorithm}
\footnotetext{$P_j$ can be picked as the minimal path having its endpoints in two different vortices. By minimality, $P_j$ necessarily will be a proper vortex path.}
\begin{proof}[Proof of \Cref{lm:vorexReduce}]
The algorithm for constructing $G'$ and $H_{vo}$ is described in \Cref{alg:vortex}. Consider the graphs $G_\ell, H_{vo}$ just before the removal of the set $\textsc{Im}$ in \Cref{line:removeIm}.
By induction and \cref{obs:vortexMerge}, we have that $G_\ell$ has a single vortex of width at most $h$.
Further, $G'=G_\ell\setminus\textsc{Im}$ can also be drawn on the plane with a single vortex of the same width.
Next we bound the weight of $H_{vo}$. In each of the $\ell-1\le h$ rounds of \Cref{alg:vortex} we added at most $k\cdot (2h+1)$ single source spanners, each of weight $O(L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$. It follows that $w(H_{vo})=O(k\ell L\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
Finally we bound the stretch. Consider a pair of terminals $t,t'\in K$ at distance at most $d_G(t,t')\le L$. Let $P_{t,t'}$ be the shortest path between $t$ to $t'$ in $G$.
If no vertex of $P_{t,t'}$ was deleted during the execution of \Cref{alg:vortex}, then $d_{G'}(t,t')=d_G(t,t')$ and we are done.
Otherwise, let $s$ be the minimal index where some vertex $v\in P_{t,t'}$ belongs to a path $P_v\in\mathcal{P}_s$, and thus was deleted. By minimality, it holds that $d_{G_s}(t,P_v)\le d_{G_s}(t,v)=d_{G}(t,v)\le L$. Similarly $d_{G_s}(t',P_v)\le L$. Using the properties of $\textsc{SSP}(t,P_v,G_s,L)\cup \textsc{SSP}(t',P_v,G_s,L)$, we conclude
\[
d_{H_{vo}}(t,t')\le d_{\textsc{SSP}(t,P_{v},G_{s},L)}(t,v)+d_{\textsc{SSP}(t',P_{v},G_{s},L)}(t',v)\le(1+\varepsilon)\left(d_{G_{s}}(t,v)+d_{G_{s}}(t',v)\right)=(1+\varepsilon)d_{G}(t,t')~.
\]
Note that no edge of $\textsc{Im}$ is contained in $H_{vo}$, as all these edges have weight $\Omega_\varepsilon(L)$. Thus the inequality holds also in $H_{vo}\setminus\textsc{Im}$. The lemma now follows.
\end{proof}
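For orientation, the control flow of \Cref{alg:vortex} can also be written as the following sketch. The helpers \texttt{find\_path} (returning the shortest proper vortex path together with the two bags $X_i,Y_j$), \texttt{merge} (the vortex merge) and \texttt{ssp} (the single source spanner $\textsc{SSP}$) are treated as black boxes, so this is an illustration rather than an implementation.
\begin{verbatim}
# Sketch of the Contracting Vortices loop; all helper routines are black boxes.
def contract_vortices(G, K, L, eps, ell, find_path, merge, ssp):
    H_vo, Im, G_j = set(), set(), G
    for _ in range(ell - 1):
        P, X_i, Y_j = find_path(G_j)            # shortest proper vortex path + bags
        family = [P] + [[v] for v in set(X_i) | set(Y_j)]   # the path family P_j
        for t in K:                             # terminals still present in G_j
            for Q in family:
                H_vo |= ssp(G_j, t, Q, L, eps)  # single source spanner edges
        G_j, added = merge(G_j, P)              # vortex merge w.r.t. P
        Im |= set(added)                        # the (at most 2) new edges
    # as in the algorithm, the edges of Im are finally removed from G_j and H_vo
    return G_j, H_vo - Im, Im
\end{verbatim}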
\subsection{Step (2.2): Cutting out genus, proof of \Cref{lm:nearly-embed-spanner-no-apices}}\label{subsec:removingGenus}
We begin by restating \Cref{lm:nearly-embed-spanner-no-apices}.
\NearlyEmbdablSpannerNoApices*
Given a graph consisting of a surface-embedded part together with some glued vortices of width at most $h$, we denote by $g(G)$ the genus of the surface-embedded part, and by $v(G)$ the number of vortices.
Most of this section is essentially devoted to proving the following lemma:
\begin{lemma}\label{lm:genus}
Consider a graph $G = G_{\Sigma}\cup W_1\cup\dots \cup W_{v(G)}$,
where $G_{\Sigma}$ is (cellularly) embedded on a surface $\Sigma$ of genus $g(G)$, and each $W_i$ is a vortex of width at most $h$ glued to a face of $G_{\Sigma}$. Then given a terminal set $K$ of size $k$ and parameter $L>0$, there is an induced subgraph $G'$ of $G$ and a spanning subgraph $H_g$ of $G$ such that:
\begin{OneLiners}
\item $G'$ has genus $g(G')=0$ and $v(G')\le v(G)+g(G)$ vortices, all of width at most $h$.
\item $w(H_g)\le O\left(kL\cdot g(G)\left(g(G)+v(G)\right)\right)\cdot\mathrm{poly}(\frac{1}{\varepsilon})$.
\item For every pair of terminals $t,t'\in K$ at distance at most $L$, either $d_{G'}(t,t')=d_G(t,t')$ or $d_{H_g}(t,t')\le (1+\varepsilon)d_G(t,t')$.
\end{OneLiners}
\end{lemma}
Given \Cref{lm:genus}, \Cref{lm:nearly-embed-spanner-no-apices} easily follows.
\begin{proof}[Proof of \Cref{lm:nearly-embed-spanner-no-apices}]
Apply \Cref{lm:genus} on $G$, and let $G',H_g$ be the output. Construct a $(1+\varepsilon)$-subset spanner $H'$ for $G'$ using \Cref{lm:planar-with-many-vortices}. Set $H=H'\cup H_g$. Then $H$ has weight $O_{h}(kL\cdot\mathrm{poly}(\frac{1}{\varepsilon}))$.
To bound the stretch, let $t,t'\in K$ be a pair of terminals at distance at most $L$. Then either $d_{H'}(t,t')\le(1+\varepsilon)d_{G'}(t,t')=(1+\varepsilon)d_G(t,t')$, or $d_{H_g}(t,t')\le (1+\varepsilon)d_G(t,t')$, implying the lemma.
\end{proof}
Recall the definition of vortex-path (\Cref{def:vortex-path}). Essentially one can think of a vortex-path as a union of $v(G)+1+2\cdot v(G)\cdot h=O(v(G)\cdot h)$ paths (see \Cref{sec:oneVortex} for details).
A vortex-path is called a \emph{shortest-vortex-path} if all the paths it contains are shortest paths in $G$.
We will use the following \emph{Cutting Lemma} of Abraham and Gavoille \cite[Lemma 6, full version]{AG06}.
\begin{lemma}[Cutting Lemma~\cite{AG06}]\label{lm:cuttingAG06} Given an $h$-nearly embeddable graph $G$, there are (efficiently computable) two shortest-vortex-paths $\mathcal{V}_1,\mathcal{V}_2$ of $G$ such that the graph $G' = G\setminus (\mathcal{V}_1\cup \mathcal{V}_2)$ is $h$-nearly embeddable and has $v(G') \leq v(G)+1$ and $g(G')\le g(G)-1$.
\end{lemma}
Intuitively, \Cref{lm:cuttingAG06} says that we can reduce the genus of $G$ by removing two vortex-paths at the expense of increasing the number of vortices by $1$, without affecting their width.
Given the characterization above, an immediate corollary is that given a graph $G$ that embeds on a surface of genus $g(G)$, with $v(G)$ vortices of width at most $h$, we can remove a set $\mathcal{P}$ of $O(v(G)\cdot h)$ shortest paths to obtain a graph $G'=G\setminus \mathcal{P}$, where $v(G') \leq v(G)+1$ and $g(G') \le g(G)-1$.
We proceed now to proving \Cref{lm:genus}.
\begin{proof}[Proof of \Cref{lm:genus}]
In \Cref{alg:genus} below we repeatedly apply \Cref{lm:cuttingAG06}, and remove paths until the remaining graph has genus $0$.
\begin{algorithm}[h]
\caption{\texttt{Genus Reduction}} \label{alg:genus}
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Embedded graph $G=(V,E,w)$ with genus $g(G)$, $v(G)$ vortices, and terminal set $K$}
\Output{Induced graph $G'$ and subgraph $H_g$ of $G$}
\BlankLine
Let $G_0 \leftarrow G$, $H_g\leftarrow\emptyset$, $j\leftarrow 0$\;
\While{$g(G_j)>0$}{
Let $\mathcal{P}_j$ be the set of $O(h\cdot v(G_j))$ shortest paths as guaranteed by \Cref{lm:cuttingAG06} w.r.t. $G_j$\;
Add to $H_g$ the set $\cup_{t\in K}\cup_{P\in \mathcal{P}_j}\textsc{SSP}(t,P, G_j, L)$\;
$G_{j+1}\leftarrow G_j\setminus \mathcal{P}_j$\;
$j\leftarrow j+1$\;
}
\Return $G'=G_j$ and $H_g$\;
\end{algorithm}
By induction it holds that $g(G_j)\le g(G)-j$ and $v(G_j)\le v(G)+j$. Let $J$ be the final value of $j$. Then $J\le g(G)$. Thus $g(G_J)=0$ and $v(G_J)\le v(G)+g(G)$. Furthermore,
\begin{align*}
w(H_{g})\le\sum_{j=0}^{J-1}\sum_{t\in K}\sum_{P\in\mathcal{P}_{j}}w\left(\textsc{SSP}(t,P,G_{j},L)\right) & \le\sum_{j=0}^{J-1}O(kv(G_{j})h\cdot L)\cdot\mathrm{poly}(\frac{1}{\varepsilon})\\
 & =O\left(kL\cdot g(G)\left(g(G)+v(G)\right)\right)\cdot\mathrm{poly}(\frac{1}{\varepsilon})~.
\end{align*}
Finally, we prove the stretch guarantee. Consider a pair of terminals $t,t'\in K$ such that $d_G(t,t')\le L$.
Let $P_{t,t'}$ be the shortest path from $t$ to $t'$ in $G$. If $P_{t,t'}\subseteq G'$, then $d_{G'}(t,t')=d_G(t,t')$ and we are done.
Otherwise, let $j$ be the minimal index such that $P_{t,t'}\cap \mathcal{P}_j\neq \emptyset$, and let $v\in P_{t,t'}$ be some vertex that belongs to a path $Q\in \mathcal{P}_j$.
By minimality, it holds that $d_{G_j}(t,Q)\le d_{G_j}(t,v)=d_{G}(t,v)\le L$. Similarly $d_{G_j}(t',Q)\le d_{G_j}(t',v)=d_{G}(t',v)\le L$. We conclude
\[
d_{H_{g}}(t,t')\le d_{\textsc{SSP}(t,Q,G_{j},L)}(t,v)+d_{\textsc{SSP}(t',Q,G_{j},L)}(t',v)\le(1+\varepsilon)\left(d_{G_{j}}(t,v)+d_{G_{j}}(t',v)\right)=(1+\varepsilon)d_{G}(t,t')~.
\]
\end{proof}
\subsection{Step (2.3): Removing apices, proof of \Cref{lm:nearly-embed-spanner}}\label{subsec:removingApices}
\NearlyEmbdablSpanner*
\begin{proof}
Consider an $h$-nearly embeddable graph $G$. Let $A$ be the set of apices. By definition $|A| \leq h$.
Set $H_A\leftarrow \emptyset$. For any apex $a\in A$ and terminal $t\in K$ such that $d_G(a,t)\le L$, we add the shortest $a-t$ path in $G$ to $H_A$. Then $w(H_A)\le k\cdot h\cdot L$.
Let $G'=G[V\setminus A]$ be the graph $G$ after we removed all the apices. Create a subset spanner $H'$ for $G'$ using \Cref{lm:nearly-embed-spanner-no-apices}.
Set $H=H'\cup H_A$ to be a subset spanner for $G$. Then $w(H)\le O_{h}(kL)\cdot\mathrm{poly}(\frac{1}{\varepsilon})+ khL =O_{h}(kL)\cdot\mathrm{poly}(\frac{1}{\varepsilon})$.
We argue that $H$ preserves all terminal distances up to $L$. Consider a pair of terminals $t,t'\in K$ such that $d_G(t,t')\le L$. If the shortest path between $t$ and $t'$ contains an apex $a$, then $d_H(t,t')\le d_{H_A}(t,a)+d_{H_A}(a,t')=d_{G}(t,a)+d_{G}(a,t')=d_{G}(t,t')$. Otherwise,
$d_H(t,t')\le d_{H'}(t,t')\le (1+\varepsilon)d_{G'}(t,t')= (1+\varepsilon)d_{G}(t,t')$.
\end{proof}
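The apex-handling step is simple enough to sketch directly. In the following Python-style sketch, \texttt{shortest\_path}, \texttt{induced\_subgraph} and \texttt{subset\_spanner\_no\_apices} are hypothetical placeholders (the last one standing for \Cref{lm:nearly-embed-spanner-no-apices}); terminals that are themselves apices are already handled by the paths collected in $H_A$.
\begin{verbatim}
# Sketch only: connect every apex to all nearby terminals, then recurse
# on the apex-free graph.  All three helpers are assumed placeholders.
def remove_apices(V, apices, K, L, eps,
                  shortest_path,               # (a, t) -> (distance, list of edges)
                  induced_subgraph,            # vertex set -> induced subgraph
                  subset_spanner_no_apices):   # spanner for the apex-free graph
    H_A = set()
    for a in apices:                           # at most h apices
        for t in K:                            # k terminals
            dist, path_edges = shortest_path(a, t)
            if dist <= L:                      # adds weight at most L per pair
                H_A |= set(path_edges)
    G_prime = induced_subgraph(set(V) - set(apices))
    return H_A | subset_spanner_no_apices(G_prime, set(K) - set(apices), L, eps)
\end{verbatim}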
\subsection{Step (2.4): Eliminating clique-sums, proof of \Cref{prop:ell-close-spanner}}\label{subsec:EliminatingCliqueSum}
In this subsection we generalize to minor-free graphs.
\MinorFreeSubset*
The main technical step here is in proving the following lemma, which will allow us to assume that the clique sum decomposition has only $O(k)$ nodes.
\begin{lemma}\label{lm:tree-reduction}
Consider a weighted graph $G=(V,E,w)$ with a set $K\subseteq V$ of $k$ terminals, and a clique-sum decomposition $\mathcal{T}$ of $G$ s.t. $G = \cup_{X_iX_j \in E(\mathcal{T})} X_i \oplus_h X_j$.
Then there is a graph $G'=(V',E',w')$ with $V'\subseteq V$ containing all the terminals, with a clique-sum decomposition $\mathcal{T}'$ of $G'$ s.t. $G' = \cup_{X_iX_j \in E(\mathcal{T}')} X_i \oplus_h X_j$. The weight function $w'$ of $G'$ gives weight $d_G(u,v)$ to each edge $\{u,v\}$. It holds that:
\begin{OneLiners}
\item The number of nodes in $\mathcal{T}'$ is $O(k)$.
\item For each node $X\in \mathcal{T}'$, either $X\in \mathcal{T}$ or $X$ contains at most $2h$ vertices.
\item For every pair of terminals $t,t'$, it holds that $d_{G}(t,t')=d_{G'}(t,t')$.
\end{OneLiners}
\end{lemma}
We proceed directly to proving \Cref{prop:ell-close-spanner}.
\begin{proof}[Proof of \Cref{prop:ell-close-spanner}]
According to \Cref{thm:RS} (\cite{RS03}), $G$ can be decomposed into a tree $\mathcal{T}$ where each node of $\mathcal{T}$ corresponds to an $h$-nearly embeddable subgraph, such that $G = \cup_{X_iX_j \in E(\mathcal{T})} X_i \oplus_h X_j$.
We apply \Cref{lm:tree-reduction} and receive a graph $G'$ with a clique-sum decomposition $\mathcal{T}'$ as above.
Set $K'$ to be the set of all vertices in $G'$ which belong to at least one clique in the set of clique-sums used by $\mathcal{T}'$.
Formally, $K'=\left\{v\in G'\mid \exists X_iX_j \in E(\mathcal{T}')\mbox{ such that }v\in X_i\cap X_j\right\}$. Note that we add at most $h$ vertices to $K'$ per edge of $\mathcal{T}'$; it thus holds that $|K'|\le h\cdot(|\mathcal{T}'|-1)=O_h(k)$. Set $\hat{K}=K\setminus K'$ to be the set of terminals outside of $K'$.
For each node $X\in\mathcal{T}'$, set $\hat{K}_X=\hat{K}\cap X$ and $K'_X=K'\cap X$.
If $X\in\mathcal{T}$ let $H_X$ be a subset spanner constructed using \Cref{lm:nearly-embed-spanner} w.r.t. the terminal set $K'_X\cup \hat{K}_X$. Note that $H_X$ is a subgraph of $G$, of weight $O_{h}(|K'_X\cup \hat{K}_X|\cdot L)\cdot\mathrm{poly}(\frac{1}{\varepsilon})=O_{h}(|\hat{K}_X|\cdot L+L)\cdot\mathrm{poly}(\frac{1}{\varepsilon})$.
Else ($X\notin\mathcal{T}$), then $X$ contains at most $2h$ vertices.
For every $v,u\in X$, if $d_G(u,v)\le L$, set $P^L_{v,u}$ to be an arbitrary shortest path from $u$ to $v$ in $G$; else ($d_G(u,v)> L$), set $P^L_{v,u}=\emptyset$.
Let $H_X=\cup_{u,v\in X}P^L_{v,u}$ be the subgraph of $G$ that contains all the shortest paths between $X$ vertices at distance at most $L$ in $G$. Note that the weight of $H_X$ is bounded by $O(h^2\cdot L)=O_h(L)$.
Set $H=\cup_{X\in\mathcal{T}'}H_X$.
It is clear that $H$ is a subgraph of $G$.
We first bound the weight of $H$,
\begin{align*}
w(H) & =\sum_{X\in\mathcal{T}'}w(H_{X})=\sum_{X\in\mathcal{T}'}O_{h}(|\hat{K}_{X}|\cdot L+L)\cdot\mathrm{poly}(\frac{1}{\varepsilon})\\
 & =O_{h}(|\mathcal{T}'|\cdot L)\cdot\mathrm{poly}(\frac{1}{\varepsilon})+O_{h}(L)\cdot\mathrm{poly}(\frac{1}{\varepsilon})\cdot\sum_{X\in\mathcal{T}'}|\hat{K}_{X}|=O_{h}(kL)\cdot\mathrm{poly}(\frac{1}{\varepsilon})~,
\end{align*}
where the last equality follows as $\cup_{X\in\mathcal{T}'}\hat{K}_{X}=\hat{K}$, and $\left\{ \hat{K}_{X}\right\} _{X\in\mathcal{T}'}$ are disjoint.
Next, we argue that $H$ preserves terminal distances up to $L$.
Consider a pair of terminals $t,t'\in K\cup K'$ at distance at most $L$.
Let $P_{t,t'}=\{t=v_0,v_1,\dots,v_s=t'\}$ be a shortest path between $t$ and $t'$ in $G'$. Let $\mathcal{I}=\{0=i_0<i_1<\dots<i_q=s\}\subseteq[0,s]$ be a minimal set of indices such that for every $j\in [q]$ there is a node $X_j\in\mathcal{T}'$ such that $\{v_{i_{j-1}},v_{i_{j-1}+1},\dots ,v_{i_{j}}\}\subseteq X_j$.
By the minimality of $\mathcal{I}$, it necessarily holds that $v_{i_1},v_{i_2},\dots, v_{i_{q-1}}\in K'$. Furthermore, as $P_{t,t'}$ is a shortest path it holds that
$d_{G'[X_{j}]}(v_{i_{j-1}},v_{i_{j}})=d_{G'}(v_{i_{j-1}},v_{i_{j}})$. As $w(P_{t,t'})\le L$, the spanner $H_{X_{j}}$ has distortion $1+\varepsilon$ w.r.t. the pair $v_{i_{j-1}},v_{i_{j}}$.
We conclude
\begin{align*}
d_{H}(t,t') & \le\sum_{j=1}^{q}d_{H}(v_{i_{j-1}},v_{i_{j}})\le\sum_{j=1}^{q}d_{H_{X_{j}}}(v_{i_{j-1}},v_{i_{j}})\\
 & \le(1+\varepsilon)\cdot\sum_{j=1}^{q}d_{G'[X_{j}]}(v_{i_{j-1}},v_{i_{j}})=(1+\varepsilon)\cdot d_{G'}(t,t')=(1+\varepsilon)\cdot d_{G}(t,t')~.
\end{align*}
\end{proof}
The rest of this subsection is devoted to proving \Cref{lm:tree-reduction}.
The modification of $G$ into $G'$, and of $\mathcal{T}$ into $\mathcal{T}'$ is described in \Cref{alg:cliqueSum}. Initially $\mathcal{T}'\leftarrow\mathcal{T}$ and $G'\leftarrow G$. The algorithm has two steps.
In the first step, we ensure that the number of leaves is bounded by $k$. This is done by repeatedly deleting non-essential leaf nodes from $\mathcal{T}'$.
In the second step, we bound the number of nodes of degree two in $\mathcal{T}'$. This is done by deleting redundant paths where all the internal nodes have degree two. Here, however, it will be necessary to add a new node to $\mathcal{T}'$ to compensate for the deleted ones.
\begin{algorithm}[h]
\caption{\texttt{Clique-Sum Modification}} \label{alg:cliqueSum}
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Graph $G=(V,E,w)$, terminals $K\subseteq V$, clique-sum decomposition $\mathcal{T}$ of $G$ into clique-sums}
\Output{Subgraph $G'$ of $G$ and a clique-sum decomposition $\mathcal{T}'$ of $G'$.}
\BlankLine
Let $\mathcal{T}' \leftarrow \mathcal{T}$, $G' \leftarrow G$, $\nu \leftarrow \emptyset$.\hspace{87pt} {\color{Darkgreen}\small /** $\nu:K\hookrightarrow\mathcal{T}$ is a function, initially undefined}\;
\While {there is a leaf $l\in\mathcal{T}'$ such that $\nu^{-1}(l)=\emptyset$}{
\If{$l$ contains a terminal $t$ such that $\nu(t)$ is undefined}
{Set $\nu(t)=l$\;
}
\Else{
Delete $l$ from $\mathcal{T}'$, and delete from $G'$ all the vertices that belong only to $l$
\;
}
}
\ForEach{terminal $t\in K$ for which $\nu(t)$ is undefined}
{Pick arbitrary node $X\in\mathcal{T}'$ containing $t$. Set $\nu(t)=X$.}
A node $X\in\mathcal{T}'$ is called \emph{redundant} if it has degree $2$ in $\mathcal{T}'$ and $\nu^{-1}(X)=\emptyset$.\; A path $P=\{X_0,X_1,\dots,X_s\}$ in $\mathcal{T}'$ is called \emph{redundant}, if all the internal nodes $\{X_1,\dots,X_{s-1}\}$ are redundant.\;
\ForEach{maximal redundant path $P=\{X_0,X_1,\dots,X_s\}$}
{Let $K_0\subseteq X_0$ (resp. $K_s\subseteq X_s$) be the set used for the clique sum $X_0\oplus_h X_1$ (resp. $X_{s-1}\oplus_h X_s$)\;
Remove $\{X_1,\dots,X_{s-1}\}$ from $\mathcal{T}'$. Delete all vertices $v$ in $G'$ that belong only to nodes in $\{X_1,\dots,X_{s-1}\}$.\;
Add a new node $X$ with $K_0\cup K_s$ as vertices, and the complete graph between them as edges (the weight of $\{v,u\}$ will be $d_G(u,v)$). Add to $\mathcal{T}'$ the edges $\{X_0,X\},\{X_s,X\}$.
Add the respective edges to $G'$.\;
}
\Return Graph $G'$ and clique-sum decomposition $\mathcal{T}'$\;
\end{algorithm}
By \Cref{alg:cliqueSum}, it is straightforward that $\mathcal{T}'$ is a decomposition of $G'$. The second property (that for every $X\in \mathcal{T}'$ either $X\in\mathcal{T}$ or $|X|\le 2h$) is also obvious.
In \Cref{clm:cliqueSumReductionDistancePreserved} we prove that all the terminal distances are preserved exactly by $\mathcal{T}'$, while in \Cref{clm:cliqueSumReductionNumberOfNodesBound} we prove that $\mathcal{T}'$ contains $O(k)$ nodes.
\begin{claim}\label{clm:cliqueSumReductionDistancePreserved}
For every pair of terminals $t,t'\in K$, $d_G(t,t')=d_{G'}(t,t')$.
\end{claim}
\begin{proof}
The proof is by induction on the construction of $G'$ and $\mathcal{T}'$ following \Cref{alg:cliqueSum}. Initially $G=G'$ so the claim obviously holds.
There are two types of modifications that occur: (1) deletion of a leaf node. (2) Replacement of a redundant path by a single new node. Let $G'$ be the graph at some stage with decomposition $\mathcal{T}'$. Suppose that $\tilde{G}$ with decomposition $\tilde{\mathcal{T}}$ is obtained from $G',\mathcal{T}'$ by a single modification step (of type (1) or (2)).
Following the algorithm, it is clear that no terminal vertex is ever deleted (as it necessarily belongs to some non-redundant node).
Furthermore, if a pair of neighboring vertices $v,u\in G'$ both belong to $\tilde{G}$, then they are also neighbors in $\tilde{G}$. This holds because if they both belong to a deleted node, then they are necessarily part of some clique of the clique-sum together, and this clique remains in $\tilde{G}$.
Let $P_{t,t'}=\{t=v_0,\dots,v_s=t'\}$ be the shortest path from $t$ to $t'$ in $G'$, with the minimal number of hops (that is minimizing $s$ among all shortest paths). By the induction hypothesis, $d_G(t,t')=d_{G'}(t,t')$.
If no vertex of $P_{t,t'}$ is deleted, then obviously $d_{\tilde{G}}(t,t')=d_{G'}(t,t')$, and we are done.
Else let $v_i,v_j\in P_{t,t'}$ be the vertices with the minimal and maximal indices among the deleted vertices, respectively.
We first deal with modification of type (1), deletion of a leaf node $X\in \mathcal{T}'$. We argue that no vertex of $P_{t,t'}$ is deleted.
Assume toward contradiction that this is not the case.
As $v_i,v_j$ are deleted, they belong to $X$ and no other node in $\mathcal{T}'$. By the minimality of $v_{i}$, $v_{i-1}$ was not deleted. Similarly, $v_{j+1}$ was also not deleted. However, as they are neighbors of deleted vertices, they necessarily belong to the clique-sum part in $X$. In particular, $v_{i-1},v_{j+1}$ are neighbors in $G'$, implying that the path $\{t=v_0,\dots,v_{i-1},v_{j+1},\dots,v_s=t'\}$ has weight at most $d_G(t,t')$ but fewer hops than $P_{t,t'}$, a contradiction.
Next, we deal with modification of type (2), deletion of a redundant path $P=\{X_0,X_1,\dots,X_s\}$.
As $v_{i-1},v_{j+1}$ were not deleted, but have deleted neighbors, they necessarily belong to the clique parts in $X_0$ or $X_s$ (the ones responsible for joining $X_1$ and $X_{s-1}$, respectively). In particular, $v_{i-1},v_{j+1}$ belong to the newly created node $X$, and are neighbors in $\tilde{G}$. We conclude that
$$d_{\tilde{G}}(t,t')\le d_{G'}(t,v_i)+d_{G'}(v_i,v_j)+d_{G'}(v_j,t')=d_{G'}(t,t')=d_{G}(t,t')~.$$
\end{proof}
\begin{claim}\label{clm:cliqueSumReductionNumberOfNodesBound}
The number of nodes in $\mathcal{T}'$ is $O(k)$.
\end{claim}
\begin{proof}
We use the following notation for $\mathcal{T}'$:
$N$ is the total number of nodes,
$l$ denotes the number of leaves,
$a$ denotes the number of degree $2$ nodes for which $\nu^{-1}(X)\ne\emptyset$,
$b$ denotes the number of degree $2$ nodes for which $\nu^{-1}(X)=\emptyset$, and
finally $c$ denotes the number of nodes of degree at least $3$.
Recall that for every leaf $X\in \mathcal{T}'$ it holds that $\nu^{-1}(X)\ne\emptyset$ (as otherwise it would've been removed). Thus $l+a\le k$.
Furthermore, we called nodes $X\in\mathcal{T}'$ of degree $2$ where $\nu^{-1}(X)=\emptyset$ \emph{redundant} and removed all paths consisting of such nodes. It follows that there are no pairs of adjacent redundant nodes. In particular, the number of redundant nodes is bounded by half the number of edges, thus $b\le \frac{N-1}{2}$.
Using the sum-of-degree formula we conclude,
\begin{align*}
& 2N-2=\sum_{X\in \mathcal{T}'}\deg(X)\ge l+2(a+b)+3c=l+2(a+b)+3(N-l-a-b)=3N-2l-a-b\\
& N\le l+(l+a)+b-2\le2k+\frac{N-1}{2}-2\\
& N\le4k-5~.
\end{align*}
\end{proof}
\paragraph{Remark.}
\begin{wrapfigure}{r}{0.2\textwidth}
\begin{center}
\includegraphics[width=0.2\textwidth]{fig/cliqueSumTight}
\end{center}
\end{wrapfigure}
The analysis of the number of nodes in the decomposition $\mathcal{T}'$ created by \Cref{alg:cliqueSum} is tight, as illustrated by the figure on the right.
Indeed, suppose that only the leaf nodes (colored in red) contain terminals, one each. Then the number of terminals is $k=6$. However, the number of nodes in the decomposition is $19=4\cdot k-5$, and none of them will be removed in \Cref{alg:cliqueSum}.
\section{Embedding Minor-Free Graphs into Small Treewidth Graphs} \label{sec:embedding}
We refer readers to~\Cref{subsec:tech-tw-emb} for an overview of the argument.
\subsection{Step (1): Planar graphs with a single vortex, proof of \Cref{lm:emb-planar-vortex}}\label{sec:embed-vortex}
We begin by restating the main lemma of the section:
\embPlanarVortex*
Our construction here follows the same steps as the proof of \Cref{lm:one-vortex-Bounded-diam} in \Cref{sec:oneVortex}.
The main difference is that we aim for a clique-preserving embedding, and thus the embedding will be one-to-many.
Our first step is to construct the same hierarchical partition tree $\tau$ as in \Cref{sec:oneVortex},
where the set of terminals is the entire set of vertices (i.e. $K=V$).
Recall that each node $\Upsilon\in \tau$ is associated with some cluster $\Upsilon\subseteq V$, and some subgraph $G_\Upsilon$ of $G$. There is a spanning tree $T_{\tilde{x}}$ of $G$, such that for every $\Upsilon\in\tau$, $T_\Upsilon=T_{\tilde{x}}\cap G_\Upsilon$ is a spanning tree of $G_\Upsilon$ (see \Cref{inv:SubgraphSingleVortex} and \Cref{inv:tree}).
We used a fundamental vortex cycle $C_\Upsilon$ in $G_\Upsilon$ w.r.t. $T_\Upsilon$ to partition $\Upsilon$ into two parts $\Upsilon^{\mathcal{I}},\Upsilon^{\mathcal{E}}$ and apply this recursively.
The fundamental cycle $C_\Upsilon$ consists of at most $2(h+1)$ shortest paths denoted $\mathcal{P}(C_\Upsilon)$, all of length at most $D$ (\Cref{obs:FundamentalPathLenght}).
By \Cref{clm:InvHolesMaintained}, and the choice of fundamental vortex cycles, the number of terminals, i.e. vertices, in a cluster $\Upsilon$ drops in every two steps of the hierarchy $\tau$. It follows that the depth of $\tau$ is bounded by $O(\log n)$. \footnote{Actually there is no need to control for the size of $\mathcal{P}_\Upsilon$ here. Thus a simpler rule for choosing $C_\Upsilon$ could be applied (compared to \Cref{sec:oneVortex}). For continuity considerations, we will not take advantage of this.}
For a node $\Upsilon\in \tau$, denote by $\tilde{\mathcal{P}}_\Upsilon$ the set of all paths, in all the fundamental vortex cycles $\mathcal{P}(C_{\Upsilon'})$ in all the ancestors $\Upsilon'$ of $\Upsilon$ in $\tau$. Note that the set $\mathcal{P}_\Upsilon$ defined during the proof of \Cref{lm:emb-planar-vortex} is only a subset of $\tilde{\mathcal{P}}_\Upsilon$.
As each fundamental vortex cycle consists of at most $2(h+1)$ paths, and $\tau$ has depth $O(\log n)$, it follows that $|\tilde{\mathcal{P}}_\Upsilon|=O(h\log n)$.
For each path $Q$ and a parameter $\delta > 0$, let $\mathtt{Portalize}(Q,\delta)$ be a $\delta$-net of $Q$. Vertices in $\mathtt{Portalize}(Q,\delta)$ are called \emph{$\delta$-portals} of $Q$. When $\delta$ is clear from the context, we drop the prefix $\delta$. For a collection of paths $\mathcal{P}$ we denote $\mathtt{Portalize}(\mathcal{P}, \delta) = \cup_{Q\in \mathcal{P}} \mathtt{Portalize}(Q,\delta)$.
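A $\delta$-net of a path can be computed greedily. The following minimal sketch is one possible implementation (it assumes, for this illustration only, that the path is given as a vertex list plus the list of consecutive edge weights) and returns at most $1+w(Q)/\delta$ portals.
\begin{verbatim}
def portalize(vertices, edge_weights, delta):
    """Greedy delta-net of a weighted path: every vertex of the path is within
    distance delta (along the path) of some returned portal."""
    portals = [vertices[0]]
    dist_since_last = 0.0
    for v, w in zip(vertices[1:], edge_weights):
        dist_since_last += w
        if dist_since_last > delta:            # too far from the last portal:
            portals.append(v)                  # make v itself a portal
            dist_since_last = 0.0
    return portals

assert portalize(list(range(6)), [1] * 5, 2.0) == [0, 3]   # tiny sanity check
\end{verbatim}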
\paragraph{Algorithm} We now describe a tree decomposition $\mathcal{T}$ for the vertex set $V$. For each bag $B$ in the decomposition we
add edges between all the vertices in the bag (where the weight of an edge $u,v$ is $d_G(u,v)$).
However, the bags containing a vertex $v$ do not necessarily induce a connected subgraph of $\mathcal{T}$. Thus our embedding is one-to-many. Specifically, in the sub-tree decomposition of $\mathcal{T}$ induced by bags containing $v$, each connected component corresponds to a different copy of $v$.
Note that the portal vertices have only a single copy, as all the bags containing such a vertex are connected.
It is thus straightforward that the embedding is dominating. What needs to be proven is that the additive distortion is at most
$\varepsilon$ times the diameter of the input graph and that the embedding is clique-preserving.
For a node $\Upsilon$, consider the graph $\tilde{G}_{\Upsilon}=G[\Upsilon\cup (\cup\tilde{\mathcal{P}}_{\Upsilon})]$ induced by $\Upsilon$ and all the fundamental cycles in all its ancestors (note that the graph $G_{\Upsilon}$ defined in \Cref{sec:oneVortex} is only a subgraph of $\tilde{G}_{\Upsilon}$). For each node $\Upsilon\in \tau$, we create a bag $B_\Upsilon$ that contains $\mathtt{Portalize}(\tilde{\mathcal{P}}_\Upsilon, \varepsilon D/2)$. The bags are connected in the same way as the $\tau$ nodes they represent. In addition, for each leaf node $\Upsilon_l\in \tau$, the bag $B_{\Upsilon_l}$ also contains all the vertices in $\Upsilon_l$. For every maximal clique \footnote{A set $Z$ of vertices is a maximal clique if $Z$ is a clique, and there is no clique that strictly contains $Z$. Since $\tilde{G}_{\Upsilon_l}$ is $O(h)$-degenerate, all maximal cliques of $\tilde{G}_{\Upsilon_l}$ can be found in time $O_h(n)$~\cite{ELS10}.} $Z$ in $\tilde{G}_{\Upsilon_l}$, we create a bag $B_Z$, connected in $\mathcal{T}$ only to $B_{\Upsilon_l}$, where $B_Z$ contains the vertices in $Z\cup\mathtt{Portalize}(\tilde{\mathcal{P}}_{\Upsilon_l}, \varepsilon D/2)$. This concludes the construction.
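The bag construction just described can be summarized by the following sketch. The hierarchy interface (\texttt{nodes}, \texttt{is\_leaf}, \texttt{vertices\_of}) as well as \texttt{ancestor\_paths}, \texttt{maximal\_cliques} and \texttt{portalize} are assumed placeholders, so this is an outline rather than a formal definition.
\begin{verbatim}
# Outline only: one bag per node of tau, plus one bag per maximal clique
# of (the graph associated with) each leaf.  All helpers are placeholders.
def build_bags(tau, ancestor_paths, maximal_cliques, portalize, eps, D):
    bags, clique_bags = {}, []
    for node in tau.nodes():
        portals = set()
        for Q in ancestor_paths(node):              # the family of ancestor paths
            portals |= set(portalize(Q, eps * D / 2))
        bags[node] = set(portals)
        if tau.is_leaf(node):
            bags[node] |= set(tau.vertices_of(node))   # leaves keep their vertices
            for Z in maximal_cliques(node):            # cliques at the leaf
                clique_bags.append((node, set(Z) | portals))  # bag B_Z off B_node
    return bags, clique_bags
\end{verbatim}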
First observe that the maximal size of a clique in $G$ is $O(h)$. Furthermore, according to the construction of $\tau$, every leaf node $\Upsilon_l$ contains at most $O(h)$ vertices. Finally, for every node $\Upsilon\in\tau$, $ \tilde{\mathcal{P}}_{\Upsilon}$ contains at most $O(h\log n)$ paths and every path in $\tilde{\mathcal{P}}_{\Upsilon}$ has weight at most $D$. Thus, it holds that $|\mathtt{Portalize}(\tilde{\mathcal{P}}_{\Upsilon}, \varepsilon D/2)|\le O(h\cdot\log n)\cdot O(\frac1\varepsilon)=O(\frac{h\log n}{\varepsilon})$. From the definition we immediately have,
\begin{observation}\label{obs:tw-leaf} The decomposition $\mathcal{T}$ is a valid tree-decomposition of width $O(\frac{h\log n}{\varepsilon})$.
\end{observation}
Next we argue that our embedding is clique-preserving. Let $Z$ be a clique in $G$. By induction, in every level of $\tau$, there is a node $\Upsilon$ such that $Z\subseteq\tilde{G}_\Upsilon$. That is, every vertex $v\in Z$ either belongs to $\Upsilon$, or belongs to a fundamental vortex cycle $C_{\Upsilon'}$ of an ancestor $\Upsilon'$ of $\Upsilon$. In particular, there is a leaf node $\Upsilon_l$ such that $Z\subseteq\tilde{G}_{\Upsilon_l}$. Let $Z'$ be some maximal clique in $\tilde{G}_{\Upsilon_l}$ containing $Z$. By the construction, there is a bag $B_{Z'}$ containing all the vertices in $Z'$, and in particular $Z$.
Finally we bound the distortion. Consider a pair of vertices $u,v$. We then show that for every two copies $u',v'$ it holds that $d_H(u',v')\le d_G(u,v)+\varepsilon D$.
Consider first the case where there is a leaf node $\Upsilon_l$ containing both $u,v$. In this case there is a single copy of $u,v$, which belongs to the same bag and therefore it holds that $d_H(u,v)=d_G(u,v)$.
Else, let $P_{u,v}$ be some shortest path between $u$ and $v$ in $G$.
Let $\Upsilon_{u,v}\in\tau$ be the first node such that its fundamental vortex cycle $C_{\Upsilon_{u,v}}$ intersects $P_{u,v}$ at some vertex $z$. Specifically, there is a path $Q_z\in \mathcal{P}(C_{\Upsilon_{u,v}})$ such that $z\in Q_z\cap P_{u,v}$. There is some portal $\hat{z}\in \mathtt{Pt}_{Q_z}$ such that $d_G(z,\hat{z})\le\frac\varepsilon2 D$. Further, by the construction of $H$, any bag associated with a descendant of $\Upsilon_{u,v}$ contains $\hat{z}$. In particular, every bag $B$ containing either a copy of $u$ or $v$, also contains $\hat{z}$. We conclude
\begin{align*}
d_{H}(u',v') & \leq d_{H}(u',\hat{z})+d_{H}(\hat{z},v')=d_{G}(u,\hat{z})+d_{G}(\hat{z},v)\\
 & \leq d_{G}(\hat{z},z)+d_{G}(u,z)+d_{G}(z,v)+d_{G}(z,\hat{z})\leq d_{G}(u,v)+\varepsilon D~.
\end{align*}
Note that this argument applies also to two copies $u_1,u_2$ of the same vertex $u$, where $P_{u,u}=\{u\}$.
\subsection{A Cutting Lemma}\label{sec:cut}
Let $H$ be a connected subgraph of an arbitrary graph $G$.
Let $I(H)$ be the set of all edges incident to vertices in $H$ that do not belong to $E(H)$. Let $LE(H)$ (left edges) and $RE(H)$ (right edges) be a partition of $I(H)$. We say a graph obtained from $G$ by \emph{cutting along $H$}, denoted by $G \text{ \ding{34} } H$, is the graph obtained by: (1) removing all the vertices of $H$ from $G$, (2) making two copies of $H$, say $H^l$ and $H^r$, (3)
adding an edge between the two copies of the endpoints of an edge $e$ in $H^l$ (resp. $H^r$) for every edge $e \in E(H)$, and (4) adding an edge $uv^{l}$ ($uv^r$) for each edge $uv \in LE(H)$ ($uv \in RE(H)$) where $v^{l}$ ($v^r$) is the copy of $v$ in $H^l$ ($H^r$). We say that $H$ is \emph{separating} if $G\text{ \ding{34} } H$ is disconnected, and \emph{non-separating} otherwise. The following lemma will be useful for our embedding framework.
\begin{lemma}\label{lm:cutting} Let $H$ be a non-separating connected subgraph of $G$. Then:
\mathtt{Bell}gin{equation*}
\mathrm{\textsc{Diam}}(G\text{ \ding{34} } H) \leq 4\mathrm{\textsc{Diam}}(G) + 2\mathrm{\textsc{Diam}}(H)
\end{equation*}
\end{lemma}
\begin{proof}
Let $\Delta = \mathrm{\textsc{Diam}}(G)$ and $L = \mathrm{\textsc{Diam}}(H)$. For every vertex $v\in V$, $d_G(v,H)\le\Delta$, which implies $\min(d_{G\text{ \ding{34} } H}(v,H^l), d_{G\text{ \ding{34} } H}(v,H^r)) \leq \Delta$ (go along the path realizing the distance $d_G(v,H)$ until we meet a vertex of $H^l\cup H^r$). Since $H$ is non-separating, there is a path $P=v_0,\dots,v_s$ from $v_0\in H^l$ to $v_s\in H^r$. Since $d_{G\text{ \ding{34} } H}(v_0,H^l)=0$ and $d_{G\text{ \ding{34} } H}(v_s,H^r)=0$, there is an index $i$ such that $d_{G\text{ \ding{34} } H}(v_i,H^l)\le\Delta$ and $d_{G\text{ \ding{34} } H}(v_{i+1},H^r)\le\Delta$. It follows that $d_{G\text{ \ding{34} } H}(H^l,H^r)\le 2\Delta+d_G(v_i,v_{i+1})\le3\Delta$. By the triangle inequality, for any two vertices $x,y \in H^l \cup H^r $, $d_{G\text{ \ding{34} } H}(x,y) \leq 2L + 3\Delta$.
Let $u$ and $v$ be two vertices in $G\text{ \ding{34} } H$, and let $P$ be a shortest path between $u$ and $v$ in $G$. If $P \cap H = \emptyset$, then $d_{G\text{ \ding{34} } H}(u,v) = w(P) \leq \mathrm{\textsc{Diam}}(G)$. Otherwise, let $v_i$ and $v_j$ be the first and last vertices belonging to $H^l\cup H^r$ when we follow $P$ from $u$ to $v$ in $G\text{ \ding{34} } H$. Then $d_{G\text{ \ding{34} } H}(u,v)\le d_{G\text{ \ding{34} } H}(u,v_i)+d_{G\text{ \ding{34} } H}(v_i,v_j)+d_{G\text{ \ding{34} } H}(v_j,v)\le\Delta+d_{G\text{ \ding{34} } H}(v_i,v_j)\le4\Delta+2L$.
\end{proof}
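For concreteness, the purely combinatorial content of the operation $G \text{ \ding{34} } H$ can be sketched as follows (edge weights and the embedding-based choice of $LE,RE$ are taken as given; this is an illustration, not the formal definition).
\begin{verbatim}
# Sketch: cut a graph (given as an edge list) along a subgraph with vertex
# set V_H, where (LE, RE) is the given partition of the incident edges.
def cut_along(edges, V_H, LE, RE):
    V_H = set(V_H)
    inside = [(u, v) for (u, v) in edges if u in V_H and v in V_H]
    outside = [(u, v) for (u, v) in edges if u not in V_H and v not in V_H]
    def copy(x, side):                       # tag H-vertices with a side label
        return (side, x) if x in V_H else x
    new_edges = list(outside)
    for side in ('l', 'r'):                  # duplicate H and its internal edges
        new_edges += [(copy(u, side), copy(v, side)) for (u, v) in inside]
    new_edges += [(copy(u, 'l'), copy(v, 'l')) for (u, v) in LE]   # left edges
    new_edges += [(copy(u, 'r'), copy(v, 'r')) for (u, v) in RE]   # right edges
    return new_edges
\end{verbatim}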
\subsection{Step (2.1): Planar graphs with more than one vortex, proof of \Cref{lm:em-vortexReduce}}\label{sec:EmbedManyVortices}
We begin by restating the main lemma of the section:
\emVortexReduce*
The embedding algorithm is presented in \Cref{alg:emb-planar-mult-vortex}. Parameter $s$ represents the step number of the recursion. Initially $s = 0$ and $G_0$ is the input graph with diameter $D$.
\begin{algorithm}[]
\caption{\texttt{EmbedPlanarMultipleVortices}} \label{alg:emb-planar-mult-vortex}
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Graph $G=(V,E,w)=G_\Sigma\cup W_1\cup\dots\cup W_{v(G)}$, parameter $\varepsilon$, and step number $s$}
\Output{An embedding $f$ to a graph $H$ with tree decomposition $\mathcal{T}$ with additive distortion $\varepsilon D$}
\BlankLine
\If{ $v(G)= 1$}
{
$\{f,H,\mathcal{T}\}\leftarrow \mathtt{EmbedPlanarOneVortex}(G,\frac{\varepsilon}{10^{v(G_0)-1}})$\label{line:EmbedVortexBaseCase}\;
return $\{f,H,\mathcal{T}\}$;
}
Let $P$ be the shortest proper vortex path in $G$ between two distinct vortices $W$ and $W'$, with endpoints $x_i\in W$ and $y_j\in W'$, and let $\mathcal{P}$ be the vortex path $X_i\cup Y_j\cup P$\label{line:EmbedVortexPdeff}\;
Let $K$ be $G[X_i]\cup G[Y_j]\cup P$\label{line:EmbedVortexKdeff}\;
$G'\leftarrow G\text{ \ding{34} } K$\;
$\{f',H',\mathcal{T}'\} \leftarrow \mathtt{EmbedPlanarMultipleVortices}(G',\varepsilonlon, s+1)$\leftarrowbel{line:EmbedVortexRecursion}\;
Add $\mathtt{Pt} = \mathtt{Portalize}(\mathcal{P},\varepsilon D/2)$ to every bag of $\mathcal{T}'$\label{line:EmbedVortexPortalize} \;
\For{each maximal clique $Q$ of $G$ such that $Q\cap K \neq\emptyset$\label{line:EmbedVortexFor}}{
Let $B$ be the bag that contains an image of $Q\setminus K$\label{line:EmbedVortexBagB}\;
Create a bag $B_Q = (Q\cap K) \cup B$ and make $B_Q$ adjacent to $B$\label{line:EmbedVortexNewBag}\;
}
Let $\mathcal{T} \leftarrow \mathcal{T}'$ and $\{f,H,\mathcal{T}\}$ be the resulting embedding;
\Return $f, H$ and $\mathcal{T}$\;
\end{algorithm}
Recall that a \emph{proper vortex path} $P$ is a path in $G$ between a vertex $v$ in a vortex $W$ and a vertex $u$ in a vortex $W'$ such that all the vertices on $P$ belong to the planar part $G_\Sigma$ and, other than the first and last vertices, $P$ does not contain any vortex vertices. The path $P$ picked in \cref{line:EmbedVortexPdeff} is the minimal path having its endpoints in two different vortices. By minimality, $P$ is necessarily a proper vortex path.
In \cref{line:EmbedVortexKdeff}, we cut $G$ along $K = G[X_i]\cup G[Y_j] \cup P$. In this cutting procedure, we define $LE(K), RE(K)$ as follows. For $P$, we define left edges $LE(P)$ and right edges $RE(P)$ of $P$ w.r.t the drawing of the planar part of $G$ as follows: $LE(P)$ ($RE(P)$) contains the edges drawn on the left (right) side of the path as we walk on the path from $x_i$ to $y_j$\footnote{It could be that an edge $ e = (u,v)$ where $u,v \in P$ is drawn on the left side of $v$ and on the right side of $u$. In this case, we simply assign $e$ to $LE(P)$.}. For $X_i$, assume that the path decomposition of the vortex containing $X_i$ is $\{X_1,\ldots, X_i, X_{i+1}, \ldots, X_t\}$ where the perimeter vertex $x_{i+1}$ of $X_{i+1}$ is incident to a right edge of $P$; in this case, we define $RE(X_i)$ (respectively $LE(X_i)$) to be the set of edges incident to vertices of $X_{i}$ in $G[X_{i+1} \cup \ldots \cup X_{t}]$ (respectively $G[X_{1} \cup \ldots \cup X_{i-1}]$). Similarly, assume that the path decomposition of the vortex containing $Y_j$ is $\{Y_1,\ldots, Y_j, Y_{j+1}, \ldots, Y_{t'}\}$ where the perimeter vertex $y_{j+1}$ of $Y_{j+1}$ is incident to a right edge of $P$; in this case, we define $RE(Y_j)$ ($LE(Y_j)$) to be the set of edges incident to vertices of $Y_{j}$ in $G[Y_{j+1} \cup \ldots \cup Y_{t'}]$ ($G[Y_{1} \cup \ldots \cup Y_{j-1}]$). Finally, we define $LE(K) = LE(X_i) \cup LE(P) \cup LE(Y_j)$ and $RE(K) = RE(X_i) \cup RE(P) \cup RE(Y_j)$. By cutting $G$ along $K$ we essentially \emph{merge} two vortices of $G$ into a single vortex (see \Cref{fig:embeddingMerge}).
Let $P^l = \{x_i^l = p^l_0, p^l_1, \ldots, p^l_k = y_j^l\}$ be the left copy of $P$ and $P^r = \{x_i^r = p^r_0, p^r_1, \ldots, p^r_k = y_j^r\}$ be the right copy of $P$. The path decomposition of the new vortex is
$$\{X_1,\ldots, X_{i-1}, X^{l}_{i}, p^l_1, \ldots, p^l_{k-1}, Y^l_{j}, Y_{j-1}, \ldots Y_1, Y_{t'}, \ldots, Y_{j+1}, Y^r_{j}, p^r_{k-1}, \ldots, p^r_1, X^r_{i}, X_{i+1}, \ldots, X_t\}$$
that has width at most $h$; here $X^l_i, X^r_i$ ($Y^l_j, Y^r_j$) are left and right copies of $X_i$ ($Y_j$), respectively.
\begin{figure}[]
\centering{\includegraphics[width=1.0\textwidth]{fig/Embedding-merge}}
\caption{\label{fig:embeddingMerge}\small \it
Cutting the left graph along the vortex path $X_i\cup P\cup Y_j$ (highlighted in purple) to obtain the graph in the right figure.
}
\end{figure}
Since the vortex merging step reduces the number of vortices by $1$, $v(G') = v(G) - 1$. Note that cutting $G$ along $K$ does not destroy the connectivity of $G$ as we can walk from the left copy of $x_i$ to a right copy of $x_i$ along the face where $W$ is attached to. We then recursively apply \Cref{alg:emb-planar-mult-vortex} to $G'$. To account for the damage caused by cutting $G$ along $K$, we add all the portals of each shortest path in $\mathcal{P}$ to every bag of $\mathcal{T}'$. In the for loop at \cref{line:EmbedVortexFor}, we make the embedding clique-preserving. We will show later by induction that $f'(\cdot)$ is clique-preserving and thus, bag $B$ in \cref{line:EmbedVortexBagB} indeed exists.
The base case is when $v(G)= 1$; in this case, we use the embedding in \Cref{lm:emb-planar-vortex} (\cref{line:EmbedVortexBaseCase}). This completes the embedding procedure.
\begin{claim}\label{clm:Diam-Gj} For any step $s$, $\mathrm{\textsc{Diam}}(G) \leq 10^{s} D$. Furthermore, $s \leq v(G_0)-1$.
\end{claim}
\begin{proof}
The path $P$ in \cref{line:EmbedVortexPdeff} has weight $w(P) \leq \mathrm{\textsc{Diam}}(G)$ since it is a shortest path, and since every edge between two vertices in a vortex has length at most $D$, $\mathrm{\textsc{Diam}}(K) \leq \mathrm{\textsc{Diam}}(G) + 2D\leq 3\mathrm{\textsc{Diam}}(G)$. Thus, by \Cref{lm:cutting}, $\mathrm{\textsc{Diam}}(G') \leq 4\mathrm{\textsc{Diam}}(G) + 2(3\mathrm{\textsc{Diam}}(G)) = 10\mathrm{\textsc{Diam}}(G)$. Hence, by induction, at step $s$, we have:
\begin{equation}\label{eq:dm-cutting-vortex}
\mathrm{\textsc{Diam}}(G) \leq 10^{s} \mathrm{\textsc{Diam}}(G_0) = 10^{s}D~.
\end{equation}
Finally, observe that the recursion has $v(G_0)-1$ steps since after every step, we reduce the number of vortices by $1$.
\end{proof}
\begin{proof}[Proof of~\Cref{lm:em-vortexReduce}]
We first bound the width of the tree decomposition output by the algorithm. Let $\mathrm{tw}(\mathcal{T}')$ be the treewidth of $\mathcal{T}'$ in \cref{line:EmbedVortexRecursion}. By \Cref{clm:Diam-Gj}, the size of the portal set $\mathtt{Pt}$ in \cref{line:EmbedVortexPortalize} is $O(h \frac{10^{s-1}}{\varepsilon} ) = \frac{h 2^{O(v(G_0))}}{\varepsilon}$. Since $G$ is $O(h)$-degenerate, every clique $Q$ in \cref{line:EmbedVortexFor} has size $O(h)$. Thus, $|B_Q| = \mathrm{tw}(\mathcal{T}') + \frac{h 2^{O(v(G_0))}}{\varepsilon} + O(h)$; this implies
\begin{equation}
\mathrm{tw}(\mathcal{T}) = \mathrm{tw}(\mathcal{T}') + \frac{h 2^{O(v(G_0))}}{\varepsilon} + O(h) = \mathrm{tw}(\mathcal{T}') + \frac{h 2^{O(v(G_0))}}{\varepsilon}
\end{equation}
In the base case, the treewidth of the embedding in \cref{line:EmbedVortexBaseCase} is $O(\frac{h2^{O(v(G_0))}\log n }{\varepsilon})$. Thus, the total treewidth after $v(G_0)-1$ steps of recursion is $O(v(G_0))\cdot O(\frac{h2^{O(v(G_0))}\log n }{\varepsilon}) = O(\frac{h2^{O(v(G_0))}\log n }{\varepsilon})$.
We next argue that the embedding is clique-preserving by induction. In the base case, where $v(G) = 1$, $f(\cdot)$ is clique-preserving by \Cref{lm:emb-planar-vortex}. By the induction hypothesis, we assume that $f'(\cdot)$ in \cref{line:EmbedVortexRecursion} is clique-preserving. Let $Q$ be a clique of $G$. If $K\cap Q = \emptyset$, then $Q$ is a clique in $G'$ and hence $f(\cdot)$ preserves $Q$ as $f'(\cdot)$ preserves $Q$. Otherwise, since $Q\setminus K$ is a clique in $G'$, there is a bag of $\mathcal{T}'$ containing an image of $Q\setminus K$. Thus there exists a bag $B\in\mathcal{T}'$ containing $Q\setminus K$ (\cref{line:EmbedVortexBagB}). This implies that $\mathcal{T}$ has a bag $B_Q$ containing an image of $Q$ (\cref{line:EmbedVortexNewBag}); we conclude that $f(\cdot)$ is clique-preserving.
It remains to bound the distortion. For the base case, since the diameter is at most $10^{v(G_0)-1} D$, the distortion is at most $\frac{\varepsilon}{10^{v(G_0)-1}} \cdot 10^{v(G_0)-1} D = \varepsilon D$. For the inductive step, suppose that we have additive distortion $\varepsilon D$ for $G'$.
The set $\mathtt{Pt}$ is contained in every bag of $\mathcal{T}$ (\cref{line:EmbedVortexPortalize}), and each vertex $v\in K$ has a portal vertex $t\in\mathtt{Pt}$ at distance at most $\frac\varepsilon2 D$. Thus, if a shortest path between $u$ and $v$ intersects $K$, then by rerouting the path through the nearest portal in $\mathtt{Pt}$, we obtain a new path with length at most $w(P(u,v)) + 2\varepsilon D/2 = w(P(u,v)) + \varepsilon D$ for every two copies $u',v'$ of $u,v$. Note that this argument holds in particular for all $K$ vertices. Otherwise, using the induction hypothesis the distance between any two copies of $u$ and $v$ is preserved with additive distortion $\varepsilon D$ (note that no new copies are created in this step).
\end{proof}
\subsection{Step (2.2): Cutting out genus, proof of \Cref{lm:embed-genus-vortex}}\label{sec:genus-minor}
We begin by restating the main lemma of the section:
\embedGenusVortex*
The main tool we will use in this section is to cut along a vortex cycle to reduce the genus of the surface embedded part of $G$ without disconnecting $G$. We assume that every face of $G_{\Sigma}$ is a simple cycle since we can always add more edges to $G_{\Sigma}$ without destroying the shortest path metric.
\subsubsection{Cutting Along a Vortex Cycle}
Let $F_i$ be the face that $W_i$ is glued to, $1\leq i \leq v(G)$. Let $K_{\Sigma}$ be the graph obtained from $G_{\Sigma}$ by, for any $i \in [1,v(G)]$, adding a virtual vertex $f_i$ to each face $F_i$ and connecting $f_i$ to every other vertex of $F_i$ by an edge of length $0$. Let $U = \{f_i\}_{i=1}^{v(G)}$ and $B = \cup_{i=1}^{v(G)}F_i$. For simplicity of presentation, we assume that every edge of $G$ has positive length. Pick a vertex $r\in V(G)\setminus (U\cup B)$ and compute a shortest path tree $T$ of $K_{\Sigma}$ from $r$.
Let $K^*_{\Sigma}$ be the \emph{dual graph} of $K_{\Sigma}$. A \emph{tree-cotree decomposition} w.r.t. $T$ is a partition of $E(K_{\Sigma})\setminus T$ into two sets $(C, X)$ such that $C^*$ is a spanning tree of $K^*_{\Sigma}$; by the Euler formula, $|X| = 2g$ if $\Sigma$ is orientable and $|X| = g$ if $\Sigma$ is non-orientable. For each $e \in E(K_{\Sigma})\setminus T$, define $C_e$ to be the fundamental cycle of $T$ w.r.t $e$. By Lemma 2 of~\cite{Eppstein03}, there is an edge $e \in X$ such that cutting $\Sigma$ along $C_e$ does not disconnect the resulting surface and that $\Sigma\text{ \ding{34} } C_e$ has smaller Euler genus. Since every face of $K_{\Sigma}$ is a simple cycle, we have:
\begin{claim} \label{clm:cut-nonseparating}
There is an edge $e \in X$ such that $K_{\Sigma}\text{ \ding{34} } C_e$ is connected and that $g(K_{\Sigma}\text{ \ding{34} } C_e) \leq g(K_{\Sigma})-1 $.
\end{claim}
In cutting $K_{\Sigma}$ along $C_e$, we define "left edges" and "right edges" w.r.t the embedding of $C_e$ on $\Sigma$ (see page 106-107 in the book by Mohar and Thomassen~\cite{MT01} for details on how to define left and right sides of a cycle embedded on a surface-embedded graph.).
Fix an edge $e$ in Claim~\ref{clm:cut-nonseparating}. (Such an edge can be found in polynomial time by trying all possible edges.) Let $C_e = T[r_0,u] \circ (u,v) \circ T[v,r_0]$ where $(u,v) = e$ and $r_0$ is the lowest common ancestor of $u$ and $v$ in $T$. We have the following claim whose proof is deferred to \Cref{app:cycle-intersect-vortex-face}.
\begin{restatable}[Single Vortex with Bounded Diameter]{claim}{CycleIntersectVortexFace}
\label{clm:cycle-intersect-vortex-face}
For $i \in [1,v(G)]$, $|C_e \cap F_i| \leq 2$ and if $|C_e \cap F_i| = 2$, then $\{x_1,f_i,x_2\}$ is a subpath of $C_e$ where $C_e\cap F_i = \{x_1,x_2\}$.
\end{restatable}
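The fundamental cycle $C_e = T[r_0,u]\circ(u,v)\circ T[v,r_0]$ is determined by the spanning tree alone; a minimal sketch (assuming, for illustration, that the tree is given via parent pointers with the root mapped to \texttt{None}) is the following.
\begin{verbatim}
def fundamental_cycle(parent, u, v):
    """Closed walk r0 -> u -> v -> r0 for the non-tree edge e = (u, v),
    where r0 is the lowest common ancestor of u and v."""
    def chain(x):                             # x, parent(x), ..., root
        path = [x]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path
    up_u, up_v = chain(u), chain(v)
    anc_v = set(up_v)
    r0 = next(x for x in up_u if x in anc_v)  # lowest common ancestor
    return (list(reversed(up_u[:up_u.index(r0) + 1]))   # T[r0, u]
            + up_v[:up_v.index(r0) + 1])                # then v, ..., r0

# Example: parent pointers of a small tree rooted at 1.
assert fundamental_cycle({1: None, 2: 1, 3: 1, 4: 2, 5: 3}, 4, 5) == [1, 2, 4, 5, 3, 1]
\end{verbatim}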
\paragraph{Induced vortex cycles} Fix an orientation of $C_e$ from $r_0$ to $u$, then $v$ and back to $r_0$. Let $F_{i_1}, \ldots, F_{i_k}$ be the sequence of faces such that $C_e \cap F_{i_j} \not= \emptyset$ for all $1\leq j \leq k$, ordered by the direction from $r_0$ along $C_e$ back to $r_0$. Let $\{x_j,y_j\}$ be the two vertices of $C_e\cap F_{i_j}$; if $|C_e\cap F_{i_j}| = 1$, then we let $x_{j} = y_{j} = C_e\cap F_{i_j}$. Cycle $C_e$ \emph{induces} a \emph{vortex cycle} $\mathcal{C}_e = Y_1\cup P_1 \cup \ldots \cup X_k \cup Y_k\cup P_k \cup X_1$ where:
\begin{itemize}
\item $P_j$ is $C_e[y_j,x_{j+1}]$, $1\leq j \leq k$ with convention $x_{k+1} = x_1$.
\item $X_j$ is the bag of $W_{i_j}$ attached to $x_j$ and $Y_j$ is the bag of $W_{i_j}$ attached to $y_j$.
\end{itemize}
\paragraph{Cutting along a vortex cycle} First, we perform a cut along each path $P_j$, $1\leq j \leq k$; each vertex of the path gets two copies. The left edges and right edges are defined w.r.t.\ the embedding of $G_{\Sigma}$. Now, for each vortex $W_{i_j} = \{X_1,\ldots, X_t\}$ that has (at most) two perimeter vertices $x_j$ and $y_j$ on $\mathcal{C}_e$, let $X_p$ and $X_q$ be the two bags containing $x_j$ and $y_j$, respectively. Assume that $ p \not = q$ (the case $p = q$ is handled in the same way). We make two copies $X^1_p$ and $X^2_p$ of $X_p$ and two copies $X^1_q$ and $X^2_q$ of $X_q$. Let $x^1_j, x^2_j$ be the two copies of $x_j$ such that the neighbor of $x^1_j$ in $F_{i_j}$ is the perimeter vertex of $X_{p-1}$. Let $y^1_j, y^2_j$ be the two copies of $y_j$ such that the neighbor of $y^2_j$ in $F_{i_j}$ is the perimeter vertex of $X_{q+1}$. We form a new path decomposition $W'_{i_j} = \{X_1,\ldots, X^1_p,X^2_p ,\ldots, X^1_q, X^2_q, \ldots, X_t\}$ where:
\begin{itemize}[noitemsep]
\item $X^i_p$ ($X^i_q$) is attached to $x^i_j$ ($y^i_j$) for $i = 1,2$.
\item Each vertex $x \in X_p$ that appears in $\{X_1, \ldots, X_{p-1}\}$ is replaced by its copy in $X^1_p$. Each vertex $y \in X_q$ that appears in $\{X_{q+1}, \ldots, X_{t}\}$ is replaced by its copy in $X^2_q$.
\item For each vertex $x \in X_p $, we replace every occurrence of $x$ in $\{X_p^2,\ldots, X_q^1\}$ by the copy of $x$ in $X_p^2$. For each vertex $y \in X_q\setminus X_p$, we replace every occurrence of $y$ in $\{X_p^2,\ldots, X_q^1\}$ by the copy of $y$ in $X_q^1$.
\end{itemize}
See \Cref{fig:cutting-vortex-path} for an illustration. We denote the graph obtained after cutting $G$ along $\mathcal{C}_e$ by $G\text{ \ding{34} } \mathcal{C}_e$. We show that:
\begin{lemma}\label{lm:cutting-formal}
There is a vortex cycle $\mathcal{C} = Y_1\cup P_1 \cup X_2 \cup \ldots \cup X_k \cup Y_k\cup P_k \cup X_1$ such that (a) $G\text{ \ding{34} } \mathcal{C}$ is connected, (b) $g(G\text{ \ding{34} } \mathcal{C}) \leq g(G)-1$, (c) $v(G\text{ \ding{34} } \mathcal{C}) \leq v(G) +1 $, (d) $w(P_i) \leq 3\mathrm{\textsc{Diam}}(G)$ for every $1\leq i\leq k$ and (e) $\mathrm{\textsc{Diam}}(G\text{ \ding{34} } \mathcal{C}) \leq 2^{O(v(G))}\mathrm{\textsc{Diam}}(G)$.
\end{lemma}
\begin{proof}
Let $G' = G\text{ \ding{34} } \mathcal{C}$. Properties (a) and (b) follow directly from \Cref{clm:cut-nonseparating}. Since cutting along a vortex cycle creates at most two independent copies of the vortex cycle, which become two new vortices, we have $v(G') \leq v(G)+1$.
For property (d), we observe that $P_i$ is either a shortest path of $G$, or is composed of two shortest paths of $G$ sharing the endpoint $r_0$, or of three shortest paths of $G$ containing the edge $(u,v)$. Thus, $w(P_i) \leq 3\mathrm{\textsc{Diam}}(G)$.
To prove (e), it is helpful to think of the process of cutting along $\mathcal{C}_e$ as cutting along each vortex path $Y_i \cup P_i\cup X_{i+1}$, for $i \in [1,v(G)]$, one by one. Since $\mathrm{\textsc{Diam}}(G[Y_i] \cup P_i \cup G[X_{i+1}])\leq 5\mathrm{\textsc{Diam}}(G)$, by \Cref{lm:cutting}, each cut increases the diameter of the graph by at most a constant factor (of $14$). Thus, cutting along at most $v(G)$ vortex paths increases the diameter of $G$ by a factor of at most $2^{O(v(G))}$.
\end{proof}
\begin{figure}[]
\centering{\includegraphics[width=0.8\textwidth]{fig/cutting-vortex-path}}
\caption{\label{fig:cutting-vortex-path}\small \it
Cutting along a fundamental vortex cycle to reduce the genus of the graph. The cutting operation induces two new vortices of the resulting graph.
}
\end{figure}
\subsubsection{The proof}
The embedding algorithm is presented in \Cref{alg:embed-genus-vortex}. Parameter $s$ represents the depth of the recursion. Initially $s = 0$ and $G_0$ is the input graph with diameter $D$. In \cref{line:EmbedGenusCut}, we cut $G$ along $\mathcal{C}$; the goal is to reduce the genus of the surface-embedded part of $G$ using \Cref{lm:cutting-formal}. In \cref{line:portalizeVortexCycle} we \emph{portalize} the vortex cycle $\mathcal{C}$; we regard $\mathcal{C}$ as a collection of paths where each vertex in a vortex bag is a singleton path. In the for loop at \cref{line:EmbedGenusFor} we make the embedding clique-preserving.
\begin{algorithm}[t]
\caption{\texttt{GenusReduction}} \label{alg:embed-genus-vortex}
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Graph $G=(V,E,w)$ with genus $g(G)$ and $v(G)$ vortices, parameter $\epsilon < 1$, and step number $s$}
\Output{An embedding $f$ to a graph $H$ with tree decomposition $\mathcal{T}$ with additive distortion $\epsilon D$}
\BlankLine
\If{ $g(G)= 0$}
{
$\{f,H,\mathcal{T}\}\leftarrow \mathtt{EmbedPlanarMultipleVortices}(G,\frac{\epsilon}{2^{c (g(G_0)v(G_0))}})$ for some big constant $c$\label{line:EmbedGenusBaseCase}\;
return $\{f,H,\mathcal{T}\}$;
}
Let $\mathcal{C}$ be the vortex cycle guaranteed by \Cref{lm:cutting-formal} w.r.t. $G$\;
$\mathtt{Pt}\leftarrow \mathtt{Portalize}(\mathcal{C},\epsilon D/2)$\label{line:portalizeVortexCycle}\;
$G'\leftarrow G\text{ \ding{34} } \mathcal{C}$\label{line:EmbedGenusCut}\;
$\{f',H',\mathcal{T}'\}\leftarrow \mathtt{GenusReduction}(G',\epsilon, s+1)$\label{line:genusReduction}\;
Add $\mathtt{Pt}$ to every bag of $\mathcal{T}'$\label{line:EmbedGenusPortalize}\;
\For{each maximal clique $Q$ of $G$ such that $Q\cap \mathcal{C} \neq\emptyset$\label{line:EmbedGenusFor}}{
Let $B$ be the bag that contains an image of $Q\setminus \mathcal{C}$ \label{line:bagClique}\;
Create a bag $B_Q = \left(Q\cap \mathcal{C}\right) \cup B$ and make $B_Q$ adjacent to $B$\;
}
Let $\mathcal{T} \leftarrow \mathcal{T}'$ and $\{f,H,\mathcal{T}\}$ be the resulting embedding\;
\Return $f, H$ and $\mathcal{T}$\;
\end{algorithm}
We first bound the treewidth of the embedding. Let $s_{\max}$ be the maximum recursion depth of the algorithm. We have $s_{\max} \leq g(G_0)$. By~\Cref{lm:cutting-formal}, $ v(G') \leq v(G)+1$ and hence for every step $s$,
\begin{equation}\label{eq:diam-G-everystep}
\mathrm{\textsc{Diam}}(G) \leq 2^{O(\sum_{i=1}^{s} v(G_0) + i)} \mathrm{\textsc{Diam}}(G_0) = 2^{O(v(G_0)g(G_0))}D
\end{equation}
Thus, we have:
\begin{equation*}
\begin{split}
|\mathtt{Pt}| &= |\mathtt{Portalize}(\mathcal{C},\epsilon D/2)| \\
& = 2 \left(O(v(G)h) + \frac{v(G) 2^{O(v(G_0)g(G_0))}}{\epsilon}\right) = \frac{h 2^{O(v(G_0) g(G_0))}}{\epsilon}\\
\end{split}
\end{equation*}
Since $G$ is $O(h + g(G))$-degenerate, every clique $Q$ in \cref{line:EmbedGenusFor} has size $O(h + g(G))$. Thus, $|B_Q| = \mathrm{tw}(\mathcal{T}') + \frac{h 2^{O(v(G_0) g(G_0))}}{\epsilon} + O(h + g(G))$; this implies
\begin{equation}\label{eq:tw-recursion}
\mathrm{tw}(\mathcal{T}) = \mathrm{tw}(\mathcal{T}') + \frac{h 2^{O(v(G_0) g(G_0))}}{\epsilon}
\end{equation}
For the base case when $s = s_{\max}$, $v(G) \leq v(G_0) + g(G_0)$ and hence, according to \Cref{lm:em-vortexReduce}, the treewidth of $\mathcal{T}$ in \cref{line:EmbedGenusBaseCase} is $O(\frac{h2^{O(v(G_0) g(G_0))}\log n }{\epsilon})$. Thus, by equation (\ref{eq:tw-recursion}), the total treewidth after $g(G_0)$ steps of recursion is $O(g(G_0) \frac{h2^{O(v(G_0) g(G_0))}\log n }{\epsilon}) = O(\frac{h2^{O(v(G_0)g(G_0))}\log n }{\epsilon})$.
To show that the embedding is clique-preserving, we use induction. In the base case where $g(G) = 0$, $f(\cdot)$ is clique-preserving by \Cref{lm:em-vortexReduce}. By the induction hypothesis, we may assume that $f'(\cdot)$ in \cref{line:genusReduction} is clique-preserving. Let $Q$ be a clique of $G$. If $\mathcal{C}\cap Q = \emptyset$, then $Q$ is a clique in $G'$ and hence $f(\cdot)$ preserves $Q$ as $f'(\cdot)$ preserves $Q$. Otherwise, since $Q\setminus \mathcal{C}$ is a clique in $G'$, there is a bag of $\mathcal{T}'$ containing an image of $Q\setminus \mathcal{C}$; hence bag $B$ in \cref{line:bagClique} exists. This implies that $\mathcal{T}$ has a bag $B_Q$ containing an image of $Q$, and thus $f(\cdot)$ is clique-preserving.
We now bound the distortion. In the base case, the diameter of $G$ is at most $ 2^{O(v(G_0)g(G_0))}D$ by equation (\ref{eq:diam-G-everystep}). Thus, the distortion is at most $\frac{\epsilon}{2^{c(v(G_0)g(G_0))}} 2^{O(v(G_0)g(G_0))}D = \epsilon D$ when $c$ is sufficiently large; recall that in \cref{line:EmbedGenusBaseCase} we apply \Cref{lm:em-vortexReduce} with parameter $\frac{\epsilon}{2^{c(v(G_0)g(G_0))}}$.
For the inductive case, we observe that $\mathtt{Pt}$ is contained in every bag of $\mathcal{T}$, since in \cref{line:EmbedGenusPortalize} we add $\mathtt{Pt}$ to every bag of $\mathcal{T}'$. Thus, if a shortest path between $u$ and $v$ intersects $\mathcal{C}$, then by rerouting the path through the nearest portal in $\mathtt{Pt}$, we obtain a new path of length at most $w(P(u,v)) + 2\epsilon D/2 = w(P(u,v)) + \epsilon D$. Otherwise, by induction, $d_G(u,v) \leq d_{H'}(u,v) + \epsilon D = d_{H}(u,v) + \epsilon D$.
\subsection{Step (2.3): Removing apices, proof of \Cref{lm:embed-nearly-embeddable}}\label{sec:EmbedApices}
Recall that a nearly $h$-embeddable graph $G$ has an apex set $A$ of size at most $h$ such that $G\setminus A = G_{\Sigma}\cup W_1 \cup \ldots\cup W_h$. In this section, we devise a \emph{stochastic} embedding of $G$ into a small treewidth graph with \emph{expected} additive distortion $\epsilon D$.
\embedNearlyEmbeddable*
The main tool we use in the proof of \Cref{lm:embed-nearly-embeddable} is \emph{padded decompositions}.
Given a partition $\mathcal{Q}$ of $V(G)$, $\mathcal{Q}(v)$ denotes the cluster containing $v$.
A partition $\mathcal{Q}$ of $V(G)$ is $\Delta$-bounded if every cluster $Q\in\mathcal{Q}$ satisfies $\mathrm{\textsc{Diam}}(Q) \leq \Delta$. Similarly, a distribution $\mathcal{D}$ is $\Delta$-bounded if every partition $\mathcal{Q}\in\mathrm{supp}(\mathcal{D})$ is $\Delta$-bounded.
We denote by $B_G(v,r)=\{u\in V(G)\mid d_G(u,v)\le r\}$ the ball of radius $r$ around $v$.
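For concreteness, the ball $B_G(v,r)$ in a weighted graph can be computed by a truncated Dijkstra search; a minimal sketch (our own notation, not part of the construction) is:
\begin{verbatim}
import heapq

def ball(adj, v, r):
    """B_G(v, r) = {u : d_G(u, v) <= r}; adj[u] is a list of
    (neighbor, nonnegative edge weight) pairs."""
    dist = {v: 0.0}
    heap = [(0.0, v)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for w, length in adj[u]:
            nd = d + length
            if nd <= r and nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return set(dist)
\end{verbatim}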
\begin{theorem}[Theorem 15~\cite{Fil19padded}]\label{thm:padded-minor} Consider a $K_r$-minor-free graph $G$, and parameter $\Delta$. There is an $O(r\Delta)$-bounded distribution over partitions $\mathcal{D}$, such that for every $v\in V(G)$ and $\gamma\in(0,1)$,
\begin{equation}\label{eq:padded-prob}
\Pr_{\mathcal{Q}\sim\mathcal{D}}[B_G(v,\gamma\Delta)\subseteq \mathcal{Q}(v)]\ge e^{-\gamma}
\end{equation}
\end{theorem}
Let $G^{-} = G\setminus A$. Note that the diameter of $G^{-}$ may be unboundedly greater than $D$.
Note that nearly $h$-embeddable graphs exclude $K_{r(h)}$ as a minor where $r(h)$ is some constant depending on $h$ only (indeed, $r(h) = O(h)$).
Thus we can apply \Cref{thm:padded-minor} with parameter $\Delta=\frac {8D}{\varepsilonlon}$ to obtain an $O(h\cdot\frac D\varepsilonlon)$-bounded partition $\mathcal{Q}$ of $G^{-}$.
For each cluster $C\in\mathcal{Q}$, let $N_G(C)$ be the set of vertices with a neighbor in $C$. Let $G^{-}_C \leftarrow G[C \cup N_G(C)]$ be the graph induced by $C \cup N_G(C)$. We apply \Cref{lm:embed-genus-vortex} to $G^{-}_C$ with accuracy parameter $\Omega(\frac{\varepsilonlon^2}{h})$. Thus we obtain a one-to-many, clique preserving embedding $f_C$ into $H_C$ which has tree decomposition $\mathcal{T}_C$ of width $O_h(\frac{\log n}{\varepsilonlon^2})$, and additive distortion $\Omega(\frac{\varepsilonlon^2}{h})\cdot O(h\cdot\frac D\varepsilonlon)\le \frac\varepsilonlon2 D$.
Next we combine all the graphs $\cup_{C\in\mathcal{Q}}H_C$ into a single graph $H$. This is done by adding the set of apices $A$ with edges towards all the other vertices (where the weight of the edge $\{u,v\}$ is $d_G(u,v)$).
Note that if a vertex $v$ belongs to both $G^{-}_{C}$ and $G^{-}_{C'}$, its copies in $H_{C}$ and $H_{C'}$ will not be merged.
The tree decomposition $\mathcal{T}$ of $H$ can be constructed by adding $A$ to all the bags, and connecting the tree decompositions of all the clusters $\cup_{C\in\mathcal{Q}}\mathcal{T}_C$ arbitrarily.
Finally we define the embedding $f$. For apex vertices it is straightforward, as there is a single copy. For a vertex $v\in V(G)\setminus A$, $f(v)=\cup_{C\mbox{ s.t. }v\in C}f_C(v)$.
See \Cref{alg:emb-nearly-embeddable} for illustration.
\begin{algorithm}[t]
\caption{\texttt{EmbedNearlyEmbeddableGraphs}} \label{alg:emb-nearly-embeddable}
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{A nearly $h$-embeddable graph $G=(V,E,w)$}
\Output{A stochastic embedding $f$ to a graph $H$ with tree decomposition $\mathcal{T}$ of width $O_h(\frac{\log n}{\epsilon^2})$ and expected additive distortion $\epsilon D$}
\BlankLine
Let $ G^- \leftarrow G\setminus A$ and $\Delta \leftarrow \frac{8D}{\epsilon}$\;
Sample a partition $\mathcal{Q}$ from the distribution promised by \Cref{thm:padded-minor} with parameter $\Delta$\;
\ForEach{cluster $C \in \mathcal{Q}$}
{$G^{-}_C \leftarrow G[C \cup N_G(C)]$ \hspace{87pt} {\color{Darkgreen}\small /** subgraph induced by $C$ and its neighbors in $G$}\label{line:extendNeighbors}\;
$\{f_C,H_C,\mathcal{T}_C\}\leftarrow \mathtt{GenusReduction}(G^{-}_{C},\Omega(\epsilon^2/h), 0)$ \hspace{87pt} {\color{Darkgreen}\small /** \Cref{alg:embed-genus-vortex}}\;
Add $A$ to all bags of $\mathcal{T}_C$\;
}
$\mathcal{T}$ obtained by connecting the tree decompositions $\mathcal{T}_C$ of all the clusters in $\mathcal{Q}$ arbitrarily\;
$H\leftarrow$ the graph induced by $\mathcal{T}$\;
Set $f(v) \leftarrow \cup_{C\mbox{ s.t. }v\in C}f_C(v)$ for each $v\in V\setminus A$, and $f(v)\leftarrow \{v\}$ for each $v\in A$\;
\Return $f, H$ and $\mathcal{T}$\;
\end{algorithm}
It is straightforward that $\mathcal{T}$ has treewidth $O_h(\frac{\log n}{\varepsilonlon^2})+|A|=O_h(\frac{\log n}{\varepsilonlon^2})$. Next, we argue that $f$ preserves cliques. Consider a clique $Q$. If $Q\subseteq A$ then every bag contains $Q$ and we are done. Otherwise, let $C$ be some cluster that contains a vertex $v\in Q\setminus A$. By definition of $N_G(C)$, it holds that $Q\setminus A\subseteq C\cup N_G(C)$. Thus there is a bag in $\mathcal{T}_C$ containing (copies of) all the vertices in $Q\setminus A$. In particular, by construction, there is a bag in $\mathcal{T}$ containing (copies of) all the vertices in $Q$.
It remains to bound the expected distortion of $f$.
First note that as all the vertices are connected to all the vertices in $A$ by edges of weight at most $D$, $H$ has diameter at most $2D$.
Let $u,v$ be two vertices of $G$ (it could be that $u=v$). Let $P(u,v)$ be (some) shortest path in $G$ between $u$ and $v$.
We consider two cases:
\textbf{Case 1.} $P(u,v)\cap A \neq \varnothingtyset$: Let $t\in P(u,v)\cap A$. Then for every $u'\in f(u),v'\in f(v)$, it holds that $d_{H}(u',v')\le d_{H}(u',t)+d_{H}(t,v')=d_{G}(u,t)+d_{G}(t,v)=d_{G}(u,v)$.
\textbf{Case 2.} $P(u,v)\cap A = \varnothingtyset$: In this case, $P(u,v)\subseteq G^{-}$ and hence $d_{G^{-}}(u,v)=d_{G}(u,v) \leq D$.
By equation (\ref{eq:padded-prob}),
$$\Pr_{\mathcal{Q}\sim\mathcal{D}}[B_{G^{-}}(v,2D)\subseteq \mathcal{Q}(v)]=\Pr_{\mathcal{Q}\sim\mathcal{D}}[B_{G^{-}}(v,\frac\varepsilonlon4\Delta)\subseteq \mathcal{Q}(v)]\ge e^{-\frac\varepsilonlon4}\ge1-\frac\varepsilonlon4~.$$
Denote the event that $B_{G^{-}}(v,2D)\subseteq \mathcal{Q}(v)$ by $\Phi$.
If $\Phi$ occurred, then both $u,v$, all their neighbors, and all the vertices in $P(u,v)$ belong to $\mathcal{Q}(v)$. It follows that $u$ and $v$ do not belong to any graph $G^{-}_C$ for $C\neq \mathcal{Q}(v)$, and further that $d_{G^{-}_{\mathcal{Q}(v)}}(u,v)=d_G(u,v)$. By the guarantee of \Cref{lm:embed-genus-vortex}, it holds that
$$\max_{u'\in f(u),v'\in f(v)}d_H(u',v')\le \max_{u'\in f_{\mathcal{Q}(v)}(u),v'\in f_{\mathcal{Q}(v)}(v)}d_{H_{\mathcal{Q}(v)}}(u',v')\le d_{G^-_{\mathcal{Q}(v)}}(u,v)+\frac\varepsilonlon2 D= d_G(u,v)+\frac\varepsilonlon2 D~.$$
On the other hand, if $\Phi$ did not occur, the maximal distance between any two copies of $u$ and $v$ is at most $2D$ (the diameter of $H$). We conclude:
\begin{align*}
& \mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',v')]\\
& \qquad=\Pr\left[\Phi\right]\cdot\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',v')\mid\Phi]+\Pr\left[\bar{\Phi}\right]\cdot\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',v')\mid\bar{\Phi}]\\
& \qquad\le1\cdot\left(d_{G}(u,v)+\frac{\varepsilonlon}{2}D\right)+\Pr\left[\bar{\Phi}\right]\cdot2D\\
& \qquad\le d_{G}(u,v)+\frac{\varepsilonlon}{2}D+\frac{\varepsilonlon}{4}\cdot2D=d_{G}(u,v)+\varepsilonlon D
\end{align*}
\subsection{Step (2.4): General minor-free graphs, proof of \Cref{thm:embedding-minor}}\leftarrowbel{sec:EmbedGeneralMinor}
In this section, we prove the following lemma that together with \Cref{lm:embed-nearly-embeddable} imply \Cref{thm:embedding-minor}.
Let $h(r)$ be a function of $r$, such that every $K_r$-minor-free graph can be decomposed into a clique-decomposition of nearly $h(r)$-embeddable graphs (\Cref{thm:RS}).
\begin{lemma}\label{lm:emb-clique-sum} If nearly $h$-embeddable graphs have a stochastic one-to-many embedding $f$ into treewidth-$\mathrm{tw}(h,\epsilon)$ graphs such that:
\begin{enumerate} [noitemsep]
\item[(1)] $f$ is clique-preserving, and
\item[(2)] the expected additive distortion of $f$ is at most $\epsilon D$,
\end{enumerate}
then there is a stochastic embedding of $K_r$-minor-free graphs into graphs of treewidth at most $\mathrm{tw}(h(r),\varepsilonlon) + h(r)\cdot\log n$ with expected additive distortion $\varepsilonlon D$.\\
Here $\mathrm{tw}(h,\varepsilonlon)$ is some function depending only on $h$ and $\varepsilonlon$.
\end{lemma}
\begin{proof}[Proof of \Cref{lm:emb-clique-sum}]
Consider a $K_r$-minor-free graph $G$, and let $\mathbb{T}$ be its clique-sum decomposition. That is
$G = \cup_{(G_i,G_j) \in E(\mathbb{T})}G_i \oplus_h G_j$
where each $G_i$ is a nearly $h(r)$-embeddable graph.
We call the clique involved in the clique-sum of $G_i$ and $G_j$ the \emph{joint set} of the two graphs.
The embedding of $G$ is defined recursively. Specifically, we will prove by induction that $G$ can be stochastically embedded into $\left(\mathrm{tw}(h(r),\varepsilonlon) + h(r)\cdot \log |\mathbb{T}|\right)$-treewidth graphs with expected additive distortion $\varepsilonlon D$.
Note that $\mathbb{T}$ is a tree. Let $\tilde{G}\in\mathbb{T}$ be the \emph{central piece} of $\mathbb{T}$, chosen using the following lemma.
\begin{lemma}[\cite{Jordan69}]\label{lm:tree-sep} Given a tree $T$ of $n$ vertices, there is a vertex $v$ such that every connected component of $T\setminus \{v\}$ has at most $\frac{n}{2}$ vertices.
\end{lemma}
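Such a vertex (a centroid of the tree) can be found in linear time by one traversal that computes subtree sizes; a minimal sketch (our own naming) follows.
\begin{verbatim}
def centroid(tree_adj):
    """Vertex whose removal leaves components of size <= n/2;
    tree_adj maps each vertex to the list of its tree neighbors."""
    n = len(tree_adj)
    root = next(iter(tree_adj))
    parent, order, stack = {root: None}, [], [root]
    while stack:                              # iterative DFS (preorder)
        x = stack.pop()
        order.append(x)
        for y in tree_adj[x]:
            if y != parent[x]:
                parent[y] = x
                stack.append(y)
    size = {x: 1 for x in tree_adj}
    for x in reversed(order):                 # children before parents
        if parent[x] is not None:
            size[parent[x]] += size[x]
    for v in tree_adj:
        comps = [size[y] for y in tree_adj[v] if y != parent[v]]  # subtrees below v
        comps.append(n - size[v])                                 # the remaining part
        if max(comps) <= n // 2:
            return v
\end{verbatim}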
Let $G_1,\dots,G_p$ be the neighbors of $\tilde{G}$ in $\mathbb{T}$. Note that $\mathbb{T}\setminus \tilde{G}$ contains $p$ connected components $\mathbb{T}_1,\dots,\mathbb{T}_p$, where $G_i\in \mathbb{T}_i$, and $\mathbb{T}_i$ contains at most $|\mathbb{T}|/2$ pieces.
We will abuse notation and refer to $\mathbb{T}_i$ also as the graph induced by the vertices in all the pieces in $\mathbb{T}_i$. Note that $\mathbb{T}_i$ is $K_r$-minor-free. Further, for every $u,v\in \mathbb{T}_i$ (or $u,v\in\tilde{G}$) it holds that $d_{\mathbb{T}_i}(u,v)=d_{G}(u,v)$
($d_{\tilde{G}}(u,v)=d_{G}(u,v)$)\footnote{To see this consider a shortest path $P(u,v):u=z_0,\dots,z_q=v$ from $u$ to $v$ in $G$. If $P(u,v)\subseteq \mathbb{T}_i$ then we are done. Else, let $a$ ($b$) be the minimal (maximal) index s.t. $z_a\notin \mathbb{T}_i$ ($z_b\notin \mathbb{T}_i$). Necessarily $z_{a-1},z_{b+1}\in C_i\subseteq\mathbb{T}_i$ where $C_i$ is the joint set between $G_i$ and $\tilde{G}$. Moreover, they are neighbors. Thus $z_0,\dots,z_{a-1},z_{b+1},\dots,z_q$ is a path from $u$ to $v$ in $\mathbb{T}_i$ of length $d_G(u,v)$. Similarly (with some additional steps) one can prove $d_{\tilde{G}}(u,v)=d_{G}(u,v)$ for $u,v\in\tilde{G}$.}.
In particular, the diameter of the graphs $\mathbb{T}_1,...,\mathbb{T}_p,\tilde{G}$ is bounded by $D$.
First we use the assumption on $\tilde{G}$, and sample a clique preserving, one-to-many embedding $\tilde{f}$ of $\tilde{G}$ into graph $\tilde{H}$ with tree decomposition $\tilde{\mathcal{T}}$ of width at most $\mathrm{tw}(h,\varepsilonlon)$, and expected additive distortion $\varepsilonlon D$.
Next, for every $i$, we use the inductive hypothesis on $\mathbb{T}_i$, and sample a clique preserving, one-to-many embedding $f_i$ of $\mathbb{T}_i$ into a graph $H_i$ with tree decomposition $\mathcal{T}_i$ of width at most $\mathrm{tw}(h,\epsilon)+h(r)\cdot\log|\mathbb{T}_i|$, and expected additive distortion $\epsilon D$.
Next, we create a single one-to-many embedding $f$ of $G$ into a graph $H$ with tree decomposition $\mathcal{T}$.
We combine the $p+1$ different embeddings as follows. Initially, we just take a disjoint union of all the graphs $\tilde{H},H_1,\dots, H_p$, keeping all copies of the different vertices separately. Next, we will identify some copies, and add some edges.
For each $i$, let $C_i$ be the joint set of $\tilde{G}$ and $G_i$, i.e., the clique used for their clique sum. As both $\tilde{f}$ and $f_i$ are clique-preserving, there are bags $\tilde{B}_{C_i}\in\tilde{\mathcal{T}}$ and $B_{C_i}\in\mathcal{T}_i$ containing copies of $C_i$. Identify the copies of $C_i$ in $\tilde{B}_{C_i}$ and $B_{C_i}$. Denote this copy by $\bar{C}_i$.
Add an edge in $H$ between every vertex $v'\in \bar{C}_i$ to every other vertex $u'\in H_i$. Here if $v'$ ($u'$) is a copy of $v\in V(G)$ ($u\in V(G)$) the weight of the edge $\{u',v'\}$ will be $d_G(u,v)$.
This finishes the construction of $H$. The embedding $f$ is defined naturally. For $v\in \mathbb{T}_i\setminus\tilde{G}$ let $f(v)=f_i(v)$. For $v\in\tilde{G}$ let $\mathcal{I}_v=\{i\mid v\in\mathbb{T}_i\}$ be the indices of the joint sets $v$ belongs to (it might be an empty set). Then $f(v)=\tilde{f}(v)\cup\bigcup_{i\in\mathcal{I}_v}f_i(v)$ (note that we identified the copies $\bar{C}_i$ for each $i$ previously).
Finally, the tree decomposition $\mathcal{T}$ of $H$ is constructed by first taking $\tilde{\mathcal{T}}$, and for every $i$, adding $\mathcal{T}_i$ to $\tilde{\mathcal{T}}$ via an edge between the bags $\tilde{B}_{C_i}$ and $B_{C_i}$. Further, the vertices $\bar{C}_i$ will be added to all the bags in $\mathcal{T}_i$.
It is straightforward to verify that $\mathcal{T}$ is a legal tree decomposition for $H$. The width of every bag in the central part $\tilde{\mathcal{T}}$ is at most $\mathrm{tw}(h,\varepsilonlon)$. While for every bag $B$ from $\mathcal{T}_i$, using the induction hypothesis its width is bounded by
\begin{align*}
\mathrm{tw}(h,\varepsilonlon)+h(r)\cdot\log|\mathbb{T}_{i}|+|\bar{C}_i| & \le\mathrm{tw}(h,\varepsilonlon)+h(r)\cdot\left(\log|\mathbb{T}|-1\right)+h(r)\\
& =\mathrm{tw}(h,\varepsilonlon)+h(r)\cdot\log|\mathbb{T}|~.
\end{align*}
It is straightforward that the one-to-many embedding $f$ is dominating (as $\tilde{f},f_1,\dots,f_p$ are dominating, and the newly added edges dominate the original distances). It is left to prove that $f$ has expected additive distortion $\epsilon D$.
Consider a pair of vertices $u,v\in G$. We proceed by case analysis:
\begin{itemize}
\item If there is an index $i$ such that $u,v\in\mathbb{T}_i\setminus\tilde{G}$, then by the induction hypothesis,
\[
\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',v')]\le\mathbb{E}[\max_{u'\in f_{i}(u),v'\in f_{i}(v)}d_{H_{i}}(u',v')]\le d_{\mathbb{T}_{i}}(u,v)+\varepsilonlon D=d_{G}(u,v)+\varepsilonlon D
\]
\item If both $u,v$ belong to $\tilde{G}$.
Consider some general vertex $z\in\tilde{G}$. If $z\in\mathbb{T}_i$, then there is some copy $\bar{z}_i\in\bar{C}_i$ such that we added an edge of weight $0$ from $\bar{z}_i$ to any copy in $f_i(z)$.
In other words, for every copy $z'\in f(z)$ there is a copy $\bar{z}\in\tilde{f}(z)$ at distance $0$.
Considering $u,v\in\tilde{G}$, by the assumption on $\tilde{f}$ it holds that,
\[
\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',v')]\le\mathbb{E}[\max_{\bar{u}'\in\tilde{f}(u),\bar{v}'\in\tilde{f}(v)}d_{\tilde{H}}(\bar{u}',\bar{v}')]\le d_{\tilde{G}}(u,v)+\varepsilonlon D=d_{G}(u,v)+\varepsilonlon D~.
\]
\item If there is an index $i$ such that $u\in\mathbb{T}_{i}\setminus\tilde{G}$ and $v\in \tilde{G}$ (the case $v\in\mathbb{T}_{i}\setminus\tilde{G}$ and $u\in \tilde{G}$ is symmetric). Then necessarily, there is a vertex $z\in C_i$ lying on a shortest path from $v$ to $u$ in $G$.
By construction, we added an edge between the copy $\bar{z}\in\bar{C}_i$ to any copy $u'\in f(u)$. It follows that
\begin{align*}
\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',v')] & \le\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',\bar{z})+d_{H}(\bar{z},v')]\\
 & \le d_{G}(u,z)+\mathbb{E}[\max_{v'\in f(v)}d_{H}(\bar{z},v')]\\
 & \le d_{G}(u,z)+d_{G}(z,v)+\epsilon D=d_{G}(u,v)+\epsilon D~,
\end{align*}
where the last inequality follows by the previous case.
\item If there are indices $i_u\ne i_v$ such that $u\in\mathbb{T}_{i_u}\setminus\tilde{G}$ and $v\in\mathbb{T}_{i_v}\setminus\tilde{G}$. Then necessarily, there are vertices $z_u\in C_{i_u}$ and $z_v\in C_{i_v}$ lying on a shortest path from $v$ to $u$ in $G$.
By construction, we added an edge between $\bar{z}_u\in \bar{C}_{i_u}$ ($\bar{z}_v\in \bar{C}_{i_v}$) and every copy in $f(u)$ ($f(v)$). It follows that
\begin{align*}
\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',v')] & \le\mathbb{E}[\max_{u'\in f(u),v'\in f(v)}d_{H}(u',\bar{z}_{u})+d_{H}(\bar{z}_{u},\bar{z}_{v})+d_{H}(\bar{z}_{v},v')]\\
 & \le d_{G}(u,z_{u})+\mathbb{E}[\max_{z_{u}'\in f(z_{u}),z_{v}'\in f(z_{v})}d_{H}(z_{u}',z_{v}')]+d_{G}(z_{v},v)\\
 & \le d_{G}(u,z_{u})+d_{G}(z_{u},z_{v})+\epsilon D+d_{G}(z_{v},v)=d_{G}(u,v)+\epsilon D~,
\end{align*}
where the last inequality follows from the second case.
\end{itemize}
\end{proof}
\paragraph{Remark} The stochastic embedding $f$ we constructed is one-to-many, but it can be made one-to-one by simply retaining (any) one vertex in the image of each vertex.
\subsection{Corollaries} \leftarrowbel{sec:cor}
In the case where there are no vortices, we can use our constructions, combined with the $\mathrm{poly}(\frac1\varepsilonlon)$-treewidth embedding of \cite{FKS19}
to generalize their bound to graphs of bounded genus, and to apex graphs (for the latter the embedding is stochastic).
\begin{theorem}[Theorem 1.3~\cite{FKS19}]\label{thm:embeddinng-planar}
Given a planar graph $G$ of diameter $D$ and a parameter $\epsilon < 1$, there exists a deterministic embedding $f$ from $G$ to a graph $H$ of treewidth $O(\mathrm{poly}(\frac{1}{\epsilon}))$ such that for every $x, y \in G$:
\begin{equation}
d_G(x,y) \leq d_H(f(x),f(y)) \leq d_G(x,y) + \epsilon D
\end{equation}
\end{theorem}
By using the same cutting approach (via \Cref{lm:cutting-formal}) as in \Cref{alg:embed-genus-vortex}, we can reduce the problem to the case where the graph $G$ in \cref{line:EmbedGenusBaseCase} of \Cref{alg:embed-genus-vortex} is planar. Thus, we can invoke the embedding algorithm of \Cref{thm:embeddinng-planar} to embed $G$ with additive distortion $\frac{\epsilon D}{2^{cg(G)}}$ and treewidth $O_{g}(\mathrm{poly}(\frac1\epsilon))$.
\begin{corollary}\label{cor:embedding-genus}
Given a graph $G$ with diameter $D$ embedded on a surface of genus $g$ and a parameter $\epsilon < 1$,
there exists an embedding $f$ from $G$ to a graph $H$ of treewidth at most $O_{g}(\mathrm{poly}(\frac{1}{\epsilon}))$ such that for every $x, y \in G$:
\begin{equation}
d_G(x,y) \leq d_H(f(x),f(y)) \leq d_G(x,y) + \epsilon D
\end{equation}
\end{corollary}
For bounded genus graphs with a constant number of apices, we can design a stochastic embedding with constant treewidth using padded decompositions~\cite{Fil19padded}, as in \Cref{alg:emb-nearly-embeddable}. In this case, since we do not need to preserve cliques and there are no vortices, we do not need to extend $G^-_C$ to include the neighbors $N_G(C)$ as in \cref{line:extendNeighbors} of \Cref{alg:emb-nearly-embeddable}.
\begin{corollary}\label{cor:embedding-apex}
Let $G$ be a graph of diameter $D$ that has a set of vertices $A$ called \emph{apices} such that $G\setminus A$ can be (cellularly) embedded on a surface of genus $g$. Given any parameter $\epsilon < 1$, there exists a stochastic embedding of $G$ into treewidth $O_g(\mathrm{poly}(\frac1\epsilon) )+ |A|$ graphs with expected additive distortion $\epsilon D$.
\end{corollary}
\section{Algorithmic Applications}\leftarrowbel{apendix:AlgApplications}
\newcommand{\text{in}}{\text{in}}
\newcommand{\text{cost}}{\text{cost}}
\newcommand{\text{out}}{\text{out}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\set}[1]{\{#1\}}
\newcommand{\text{distance}}{\text{distance}}
\subsection{EPTAS for subset TSP}\leftarrowbel{appendix:subsetTSP}
Given a subset spanner with lightness $\Psi(\epsilon)$, we can design an efficient PTAS for Subset TSP running in time $2^{O(\Psi(\epsilon)/\epsilon)}n^{O(1)}$ using the contraction decomposition framework of Demaine et al.~\cite{DHK11} (originally introduced by Klein~\cite{Klein05} for the planar case). The framework has four steps:
\begin{itemize}
\item \textbf{Step 1} Find a subset spanner $H$ with lightness $\Psi(\epsilon)$.
\item \textbf{Step 2} Partition the edge set of $H$ into $s = \Theta_h(\Psi(\epsilon)/\epsilon)$ sets $E_1,E_2,\ldots E_s$ such that for any set $E_i$, the graph $H_i = \nicefrac{H}{E_i}$ obtained by contracting all edges in $E_i$ has treewidth at most $O(s)$.
\item \textbf{Step 3} For each $i$, solve the Subset TSP problem in treewidth-$O(s)$ graphs in time $2^{O(s)}n^{O(1)}$ (see Appendix D in the full version of \cite{Le20}).
\item \textbf{Step 4} Lift the solution found in Step 3 for each $i \in [1,s]$ to a solution in $G$ and return the minimum solution over all $i$ (see Section 3 in \cite{Le20} for details of the lifting procedure).
\end{itemize}
The overall running time is hence $2^{O(s)}n^{O(1)} = 2^{O(\Psi(\epsilon)/\epsilon)}n^{O(1)}$. This in combination with \Cref{thm:tsp-spanner} implies \Cref{thm:tsp-eptas}. A schematic of this loop is sketched below.
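The following schematic driver makes the four-step loop explicit; every helper passed as an argument is a placeholder for the corresponding cited subroutine (it is a sketch, not an implementation of those subroutines).
\begin{verbatim}
def subset_tsp_via_contraction(H, s, partition_edges, contract,
                               solve_bounded_tw, lift, cost):
    """H: subset spanner (Step 1, given); s = Theta_h(Psi(eps)/eps).
    partition_edges(H, s) -> [E_1, ..., E_s] with tw(H/E_i) = O(s)      (Step 2)
    contract(H, E_i)      -> the contracted graph H/E_i
    solve_bounded_tw(H_i) -> Subset TSP tour of a bounded-treewidth graph (Step 3)
    lift(tour, E_i)       -> tour of the original graph                  (Step 4)
    cost(tour)            -> total length of a tour."""
    best = None
    for E_i in partition_edges(H, s):
        candidate = lift(solve_bounded_tw(contract(H, E_i)), E_i)
        if best is None or cost(candidate) < cost(best):
            best = candidate
    return best
\end{verbatim}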
\subsection{Approximation schemes for vehicle routing}\leftarrowbel{appendix:vhr}
We consider a uniform-capacity vehicle-routing problem.
An instance consists of:
\begin{itemize}
\item a graph $G=(V,E)$ and cost function $\text{cost}: E \longrightarrow \mathbb{R}_+$,
\item a \emph{capacity} $Q \in \mathbb{Z}_+$,
\item a function $d:\ V \longrightarrow \mathbb{Z}_+$, called the
  \emph{delivery requirement function},
  and
\item a vertex $r$, called the \emph{depot}.
\end{itemize}
To describe a solution, we introduce some terminology that will also
be helpful later, in describing the algorithm.
For each pair $u,v$ of vertices of $G$, fix a min-cost $u$-to-$v$ path.
We define a \emph{route} to be a sequence that alternates between
nonnegative integers and vertices of $G$. The \emph{start} of the route is the
first vertex, and the \emph{end} of the route is the last. For a
route $R$ consisting of vertices $v_0, \ldots, v_k$, and an edge
$e$, the multiplicity $m(e, R)$ of $e$ in $R$ is
$|\set{i\ :\ e \text{ is in the min-cost $v_{i-1}$-to-$v_i$
   path}}|$. The \emph{cost} of $R$ is
$\sum_{e\in E} m(e,R) \text{cost}(e)$, or equivalently
$\sum_{i=1}^k \text{distance}_G(v_{i-1}, v_i)$. The integers are used to
indicate deliveries: the integer appearing immediately after a vertex
$v$ indicates the number of deliveries made at that vertex. An
integer before the first vertex of a route is naturally interpreted as
representing deliveries that a vehicle makes \emph{before} following
that route but, for technical reasons to be apparent later, this is
not enforced. We refer to the integer in a route before the first
vertex as the route's \emph{pre-delivery number}. We refer to the sum
of the other integers as the route's \emph{internal delivery number}.
We refer to the sum of pre-delivery and internal delivery numbers as
the route's \emph{total delivery number}. A route is \emph{feasible}
if the total delivery number is at most $Q$, and if either the
internal delivery number is positive or the start or end is the depot.
We say a feasible route is a \emph{tour} if it starts and ends at the
depot.
A \emph{solution} is a multiset
of tours, each starting and ending at the depot, such that
for each vertex $v$ the total number of deliveries made at $v$ is $d(v)$.
The objective is to find a solution of minimum total cost.
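As a small illustration of the conventions above, the following sketch (our own notation) evaluates the cost and delivery numbers of a route from precomputed shortest-path distances; it assumes vertex labels are not integers, so that the interleaved delivery counts can be separated from the vertex sequence.
\begin{verbatim}
def route_cost_and_deliveries(route, dist):
    """route = [q_pre, v_0, q_0, v_1, q_1, ..., v_k, q_k];
    dist[u][v] = min-cost u-to-v distance."""
    vertices = [z for z in route if not isinstance(z, int)]
    counts = [z for z in route if isinstance(z, int)]
    cost = sum(dist[vertices[i - 1]][vertices[i]]
               for i in range(1, len(vertices)))
    pre, internal = counts[0], sum(counts[1:])
    return cost, pre, internal   # total delivery number = pre + internal
\end{verbatim}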
We prove the following theorem in
Section~\ref{subsec:fptas-vehiclerouting}.
\begin{theorem}
\label{thm:vehiclerouting}
Let $\epsilon>0$. There exists an algorithm that, for any instance of
the vehicle routing problem $(G,\text{cost}(\cdot),Q,d(\cdot),r)$, outputs a
$(1+O(\epsilon))$-approximate solution in time
$(Q\epsilon^{-1}\log n)^{O(wQ/\epsilon)} n^3$ where $n=|V(G)|$ and $w$ is
the treewidth of $G$.
\end{theorem}
Note that, for bounded width $w$ and bounded capacity $Q$, this is an
efficient PTAS.\footnote{Note that in Theorem~\ref{thm:vehiclerouting}
  the exponential dependencies on $w$ and $Q$ are needed to get a
  polynomial time approximation scheme, since the problem with
  arbitrary capacity $Q$ is APX-hard on trees~\cite{Becker18} and the problem with
  bounded capacity is APX-hard on graphs of arbitrary
  treewidth.}
Previously no efficient PTAS was known
for bounded width and bounded capacity.
From there, an immediate application of the framework of Becker,
Klein, and Schild~\cite{BKS19} yields a series of corollaries.
The framework of Becker, Klein, Schild implies that for any
graph $G$ in some graph class $\mathcal{G}$,
if there exists a polynomial time algorithm that computes
a stochastic embedding
into a dominating graph $H$ with treewidth $w$ such that:
\begin{equation*}
E_{f\sim\mathcal{D}}[d_H(f(u),f(v))] \leq d_G(u,v) +
\epsilon (d_G(s,u) + d_G(s,v)),
\end{equation*}
then there exists an approximation scheme for vehicle routing
with bounded capacity for $\mathcal{G}$ with running time
$n^{O(1)} + T(w, \varepsilonlonilon, n)$, where $T(w, \varepsilonlonilon, n)$ is the
best running time for an approximation scheme for $n$-vertex graphs of
treewidth at most $w$.
As a consequence, we obtain the following corollary:
\begin{corollary} There is an efficient PTAS for bounded-capacity
vehicle routing in planar metrics.
\end{corollary}
Moreover, again as a consequence of the results of Becker, Klein, and
Saulpic~\cite{BKS18}, we have:
\begin{corollary} There is an efficient PTAS for bounded-capacity
vehicle routing in metrics of bounded highway dimension.
\end{corollary}
As corollaries of Lemma~\ref{lm:root-embedding} and
Lemma~\ref{cor:root-embedding-gunus} below, we obtain the two following
results.
\begin{corollary} There is a QPTAS for bounded-capacity
vehicle routing in minor-free metrics.
\end{corollary}
\begin{corollary} There is an efficient PTAS for bounded-capacity
vehicle routing in bounded genus metrics.
\end{corollary}
\subsubsection{Reduction to the bounded treewidth case}
We first prove the following lemma.
\begin{lemma}\label{lm:root-embedding}Given an $n$-vertex edge-weighted $K_r$-minor-free graph $G$ and a distinguished vertex $s$, in polynomial time, one can embed $G$ stochastically into a dominating graph $H$ with treewidth $O_r(2^{\mathrm{poly}(\frac{1}{\epsilon})}\log n)$ such that:
\begin{equation}
E_{f\sim\mathcal{D}}[d_H(f(u),f(v))] \leq d_G(u,v) + \epsilon (d_G(s,u) + d_G(s,v))~.
\end{equation}
\end{lemma}
\begin{proof}
The proof closely follows the proof of Theorem 2 in Becker et al.~\cite{BKS19}. Assume that the minimum edge weight is $1$. Choose a random $x\in [0,1]$. We partition $V(G)$ into bands $\mathcal{B} = \{B_0,B_1,\ldots B_{m}\}$ such that (a small sketch of the band assignment follows the list):
\begin{itemize}[noitemsep]
\item $B_0 = \{u \mid d_G(s,u) \leq \left(\frac{1}{\epsilon}\right)^{\frac{x}{\epsilon}}\}$.
\item $B_i = \{ u \mid \left(\frac{1}{\epsilon}\right)^{\frac{i-x}{\epsilon}} \leq d_G(s,u) \leq \left(\frac{1}{\epsilon}\right)^{\frac{i + x}{\epsilon}}\}$ for $1\leq i\leq m$.
\item $\cup_{i=0}^m B_i = V(G)$.
\end{itemize}
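The following sketch (our own naming; vertices satisfying the defining inequalities of more than one band, or of none exactly, are assigned to the smallest index whose upper threshold covers them) makes the band assignment explicit for a vertex at distance $d = d_G(s,u)$.
\begin{verbatim}
def band_index(d, eps, x):
    """Band index of a vertex at distance d from s, following the
    definition above: B_0 if d <= (1/eps)^(x/eps), otherwise the
    smallest i >= 1 with d <= (1/eps)^((i+x)/eps)."""
    base = 1.0 / eps
    if d <= base ** (x / eps):
        return 0
    i = 1
    while base ** ((i + x) / eps) < d:
        i += 1
    return i
\end{verbatim}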
Let $\mathcal{B}(u)$ be the band containing vertex $u$. The key property of this random partition is that (Lemma 2~\cite{BKS19}):
\begin{equation}\label{eq:diffBandProb}
\Pr[\mathcal{B}(u) \not= \mathcal{B}(v)] \leq \epsilon \qquad \mbox{for $u,v$ s.t. }\epsilon d_G(s,v) \leq d_G(s,u) \leq d_G(s,v)
\end{equation}
Let $G_i$ be the graph that contains all pairwise shortest paths between vertices in $\{s\} \cup B_i$. Let $L_0 = 1$ and $L_i = \left(\frac{1}{\epsilon}\right)^{\frac{i-x}{\epsilon}}$ for $i \in [1,m]$ and $U_i = \left(\frac{1}{\epsilon}\right)^{\frac{i+x}{\epsilon}} $ for $i \in [0,m]$. Let $\delta_i = U_i/L_i$. Observe that for every $i \in [0,m]$
\begin{equation}
\delta_i \leq \left(\frac{1}{\epsilon}\right)^{\frac{2x}{\epsilon}} \leq \left(\frac{1}{\epsilon}\right)^{\frac{2}{\epsilon}}
\end{equation}
and that each graph $G_i$ has $\mathrm{\textsc{Diam}}(G_i) \leq 2U_i$. Since $G_i$ is a $K_r$-minor-free graph, we apply \Cref{thm:embedding-minor} to stochastically embed $G_i$ into $H_i$ with additive distortion $\frac{\epsilon}{\delta_i}U_i$ and treewidth:
\begin{equation}\label{eq:tw-Hi}
\mathrm{tw}(H_i) = O_r(\frac{2\delta_i\log n}{\epsilon}) = O_r(2^{\mathrm{poly}(\frac{1}{\epsilon})} \log n )
\end{equation}
We construct the graph $H$ and the corresponding embedding by adding an edge of weight $d_G(s,v)$ from $s$ to every vertex $v$ of $\cup_{i=1}^m H_i$ that has (one) preimage in $G$. Clearly $H$ has treewidth $\max_{i=1}^m \mathrm{tw}(H_i) +1 = O_r(2^{\mathrm{poly}(\frac{1}{\epsilon})} \log n )$.
It remains to bound the distortion between $u,v \in V(G)$. We assume w.l.o.g.\ that $d_G(s,u) \leq d_G(s,v)$.
\begin{itemize}[noitemsep]
\item \textbf{Case 1} $ d_G(s,u) < \epsilon d_G(s,v)$, then
\begin{equation*}
\begin{split}
\mathbb{E}[d_H(u,v)] &\leq \mathbb{E}[d_H(s,u) + d_H(s,v)] = d_G(s,u) + d_G(s,v) \\
&\leq d_G(s,u) + d_G(s,u) + d_G(u,v) \leq d_G(u,v) + 2\epsilon d_G(s,v)
\end{split}
\end{equation*}
\item \textbf{Case 2} $ \epsilon d_G(s,v) \leq d_G(s,u)$. Let $\Phi$ be the event that $u$ and $v$ are in the same band $B_i$ for some $i \in [0,m]$. By \Cref{eq:diffBandProb}, $\Pr[\bar{\Phi}] \leq \epsilon$. We have:
\begin{equation*}
\mathbb{E}[d_{H}(u,v)|\Phi] = \mathbb{E}[d_{H_i}(u,v)] \leq d_{G_i}(u,v) + \frac{\epsilon}{\delta_i}U_i = d_{G}(u,v) + \epsilon L_i \leq d_{G}(u,v) + \epsilon d_{G}(s,u)
\end{equation*}
and:
\begin{equation*}
\mathbb{E}[d_{H}(u,v)|\bar{\Phi}] \leq \mathbb{E}[d_{H}(s,u) + d_{H}(s,v)] = d_{G}(s,u) + d_{G}(s,v)
\end{equation*}
That implies:
\begin{equation*}
\begin{split}
\mathbb{E}[d_{H}(u,v)] &= \Pr[\Phi] \mathbb{E}[d_{H}(u,v)|\Phi] + \Pr[\bar{\Phi}]\mathbb{E}[d_{H}(u,v)|\bar{\Phi}] \\
&\leq \mathbb{E}[d_{H}(u,v)|\Phi] + \epsilon \mathbb{E}[d_{H}(u,v)|\bar{\Phi}] \\
& \leq d_{G}(u,v) + \epsilon d_{G}(s,u) + \epsilon (d_{G}(s,u) + d_{G}(s,v))\\
&\leq d_{G}(u,v) + 2\epsilon (d_{G}(s,u) + d_{G}(s,v))
\end{equation*}
\end{itemize}
In both cases, $ \mathbb{E}[d_{H}(u,v)] \leq d_{G}(u,v) + 2\epsilon (d_{G}(s,u) + d_{G}(s,v))$. By scaling $\epsilon \leftarrow \epsilon/2$, we obtain the desired distortion with the same treewidth bound.
\end{proof}
By applying the same argument to bounded genus graphs and using \Cref{cor:embedding-genus} to embed $G_i$, we obtain the following corollary:
\begin{corollary}\label{cor:root-embedding-gunus} Given an $n$-vertex edge-weighted graph $G$ of genus $g$ and a distinguished vertex $s$, in polynomial time, one can embed $G$ stochastically into a dominating graph $H$ with treewidth $O_g(2^{\mathrm{poly}(\frac{1}{\epsilon})})$ such that:
\begin{equation}
E_{f\sim\mathcal{D}}[d_H(f(u),f(v))] \leq d_G(u,v) + \epsilon (d_G(s,u) + d_G(s,v))~.
\end{equation}
\end{corollary}
\begin{proof}[Proof of \Cref{thm:vhr-qptas}]
To obtain an approximation scheme for the vehicle routing problem with
bounded capacity, namely where $Q$ is considered a fixed constant,
the idea is to embed an instance of the vehicle routing problem into
a graph $H$ using \Cref{lm:root-embedding} with parameter $\hat{\epsilon} = \epsilon/Q$, and hence
treewidth $\mathrm{tw} = O_r(\log n \cdot 2^{\mathrm{poly}(Q/\epsilon)})$.
We then solve the instance of the vehicle routing problem on $H$ and
``lift'' the solution to a solution of the original graph: for each tour $T_H$ in $H$ starting from $f(s)$ and covering an ordered sequence of points $f(v_1) ,f(v_2), \ldots, f(v_q)$, where $v_i$ is a vertex of $G$, $i \in [1,q]$, and $q\leq Q$, we convert it into a tour $T_G$ starting from $s$ and covering the points $v_1,\ldots, v_q$ in this order. The expected total cost of the lifted solution is at most $(1+\epsilon)$ times the optimal cost by Lemma 5 in~\cite{BKS19}. The running time, by \Cref{thm:vehiclerouting-tw-dp}, is:
\begin{equation}
\left( Q\epsilon^{-1} \log n\right)^{O_r(\log n \cdot 2^{\mathrm{poly}(\frac{Q}{\epsilon})}Q/\epsilon)}n^{O(1)} = n^{O_{\epsilon,r,Q}(\log \log n)}.
\end{equation}
\end{proof}
\begin{proof}[Proof of \Cref{thm:vhr-eptas}] The algorithm for genus-$g$ graphs is exactly the same. However, the treewidth of $H$ in this case is $O_g(2^{\mathrm{poly}(\frac{Q}{\epsilon})})$. Thus, the running time, by \Cref{thm:vehiclerouting-tw-dp}, is:
\begin{equation}
\left( Q\epsilon^{-1} \log n\right)^{O_g(2^{\mathrm{poly}(\frac{Q}{\epsilon})}Q/\epsilon)}n^{O(1)} = 2^{O_g(\mathrm{poly}(\frac{Q}{\epsilon}))} n^{O(1)}
\end{equation}
\noindent In the above equation, we use the following inequality:
\begin{equation*}
(\log n)^d n^{c} \leq n^{c+1} 2^{d^2} \qquad \mbox{when $n$ is sufficiently large}
\end{equation*}
\noindent which can be proved as follows. If $d \geq \log \log n$, then:
\begin{equation*}
d\log \log n + c\log n \leq d^2 + c\log n \leq (c+1)\log n + d^2
\end{equation*}
Otherwise,
\begin{equation*}
d\log \log n + c\log n \leq (\log \log n)^2+ c\log n \leq (c+1)\log n \leq (c+1)\log n + d^2
\end{equation*}
when $n$ is sufficiently big.
\end{proof}
\subsubsection{An FPT-approximation scheme for vehicle routing with bounded capacity in bounded treewidth graphs}
\leftarrowbel{subsec:fptas-vehiclerouting}
\input{vehicle-EPTAS}
\section{Lower bound for deterministic embedding into bounded treewidth graphs}\leftarrowbel{sec:emblowerbound}
Given an unweighted graph $H=(V,E)$, denote by $H_k$ the $k$-subdivision of $H$ (the graph where each edge is replaced by a $k$-path).
Our proof is based on the following lemma by Carroll and Goel \cite{CG04}:
\begin{lemma}[\cite{CG04} Lemma 1]\label{lem:CG04}
Let $G$ be a (possibly weighted) graph that excludes $H$ as a minor. Every dominating embedding from $H_k$ into $G$ has multiplicative distortion at least $\frac{k-3}{6}$.
\end{lemma}
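For concreteness, the $k$-subdivision used below can be generated as follows (a minimal sketch; the labels of the new internal vertices are our own convention, and a $k$-path is read as a path of $k$ edges).
\begin{verbatim}
def k_subdivision(edges, k):
    """Edge list of H_k: every edge {u, v} of H is replaced by a path of
    k unit-length edges through k - 1 new vertices (u,v,1), ..., (u,v,k-1)."""
    new_edges = []
    for (u, v) in edges:
        chain = [u] + [(u, v, i) for i in range(1, k)] + [v]
        new_edges += list(zip(chain[:-1], chain[1:]))
    return new_edges
\end{verbatim}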
We now prove~\Cref{thm:LB}; we start by restating the theorem.
\EmLowerBound*
\begin{proof}
Set $k=90$. For every $n\in \mathbb{N}$, let $H^n$ be the unweighted graph consisting of an $n\times n$ grid, with an additional vertex $\psi$ that is a common neighbor of all other vertices.
Define $\mathcal{H}=\{H^n_k\mid n\in \mathbb{N}\}$.
Note that $H^n_k$ has $|H^n_k|=\Theta(n^2\cdot k)=\Theta(n^2)$ vertices.
In addition, all the graphs in $\mathcal{H}$ exclude $K_6$ as a minor.
Furthermore, for each $n$, $H^n$ has diameter $2$, and thus $H^n_k$ has diameter $2k+2\left\lfloor \frac{k}{2}\right\rfloor \le3k$.
Consider some $H^n_k\in\mathcal{H}$; as $H^n_k$ contains the $n\times n$ grid as a minor, it has treewidth at least $n$ (see e.g. \cite{RS86}). Using \Cref{lem:CG04}, every embedding $f$ of $H^n_k$ into a graph $G$ with treewidth at most $o(\sqrt{|V(H_k^n)|})<n-1$ has multiplicative distortion at least $\frac{k-3}{6}$. In particular, there is an edge $(u,v)$ in $H^n_k$ with such a multiplicative distortion (as otherwise, using the triangle inequality, all pairs of vertices would have distortion smaller than $\frac{k-3}{6}$). We conclude:
\[
d_{G}(f(u),f(v))\ge\frac{k-3}{6}=d_{H_{k}^{n}}(u,v)+\frac{k-9}{6}\ge d_{H_{k}^{n}}(u,v)+\frac{k-9}{18k}\cdot D=d_{H_{k}^{n}}(u,v)+\frac{1}{20}\cdot D~.
\]
\end{proof}
\section{Conclusion}
We have proved two structural results for minor-free graphs: (a) a subset spanner with constant lightness exists and (b) there is a stochastic embedding of diameter-$D$ minor-free graphs into treewidth-$O(\frac{\log n}{\epsilon^2})$ graphs with additive distortion $\epsilon D$.
The results are obtained from a new multi-step framework for designing algorithms in minor-free graphs, which we believe is of independent interest. There are two major algorithmic applications of our structural results: an EPTAS for TSP and the first QPTAS for the vehicle routing with bounded capacity problem, both in minor-free graphs.
We also provide an efficient FPT approximation scheme for the vehicle routing with bounded capacity problem in bounded treewidth graphs. As corollaries, we obtain EPTASes for the same problem in planar graphs, bounded genus graphs, and graphs with bounded highway dimension. Major open problems arising from our work are:
\begin{enumerate}
\item Can a minor-free graph of diameter $D$ be stochastically embedded into a graph with treewidth $c(\epsilon)$ and distortion $\epsilon D$ in polynomial time, where $c(\epsilon)$ only depends on $\epsilon$? If the answer to this question is positive, one immediately gets a PTAS for the vehicle routing problem with bounded capacity in minor-free graphs.
\item Can one design a PTAS or QPTAS for the Steiner tree, Steiner forest, and survivable network design problems in minor-free graphs?
\end{enumerate}
\paragraph{Acknowledgement} Part of this work was initiated at the workshop \emph{The Traveling Salesman Problem: Algorithms $\&$ Optimization} at Banff International Research Station; Hung Le and Vincent Cohen-Addad thank the organizers of the workshop and Banff for their hospitality.
Arnold Filtser is supported by the Simons Foundation.
Philip Klein is supported by NSF Grant CCF-1841954. Hung Le is supported by an NSERC grant, a
PIMS postdoctoral fellowship, and a start-up grant from Umass Amherst.
\pagebreak
\appendix
\section{Additional Notation}\leftarrowbel{appendix:additionalNotation}
\begin{figure}[]
\centering{\includegraphics[width=0.8\textwidth]{fig/Vortex-path-multi}}
\caption{\label{fig:vortex-path-complex}\small \it
An example of a vortex path (highlighted in purple) $\mathcal{V}[u,v]=P_0\cup X_1\cup Y_1\cup P_1 \cup X_2 \cup Y_2\cup P_{2}$ induced by a path $P[u,v]$ between $u$ and $v$. It could be that a vortex $W_{i_3}$ contains a vertex of $P[u,v]$ but is disjoint from the vortex path $\mathcal{V}[u,v]$.
}
\end{figure}
\paragraph{Tree decomposition} A \emph{tree decomposition} of $G(V,E)$, denoted by $\mathcal{T}$, is a tree whose nodes are associated with subsets of $V$ (called bags) satisfying the following conditions:
\begin{enumerate} [noitemsep,nolistsep]
\item Each node $i \in V(\mathcal{T})$ corresponds to a bag $X_i \subseteq V$, and $\cup_{i \in V(\mathcal{T})}X_i = V$.
\item For each edge $uv \in E$, there is a bag $X_i$ containing both $u,v$.
\item For a vertex $v \in V$, all the bags containing $v$ make up a subtree of $\mathcal{T}$.
\end{enumerate}
The \emph{width} of a tree decomposition $\mathcal{T}$ is $\max_{ i \in V(\mathcal{T})}|X_i| -1$ and the treewidth of $G$, denoted by $\mathrm{tw}$, is the minimum width among all possible tree decompositions of $G$. A \emph{path decomposition} of a graph $G(V,E)$ is a tree decomposition where the underlying tree is a path. The pathwidth of $G$, denoted by $\mathrm{pw}$, is defined accordingly.
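The three conditions can be checked mechanically; the sketch below (our own notation) verifies them for a candidate decomposition whose tree is given by adjacency lists between bag indices, and its width is then $\max_i |X_i| - 1$.
\begin{verbatim}
def is_tree_decomposition(V, E, bags, tree_adj):
    """bags[i] = vertex set X_i of node i; tree_adj[i] = tree neighbors of i."""
    if set().union(*bags.values()) != set(V):          # condition 1
        return False
    for (u, v) in E:                                   # condition 2
        if not any(u in X and v in X for X in bags.values()):
            return False
    for v in V:                                        # condition 3
        nodes = {i for i, X in bags.items() if v in X}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            i = stack.pop()
            for j in tree_adj[i]:
                if j in nodes and j not in seen:
                    seen.add(j)
                    stack.append(j)
        if seen != nodes:                              # bags with v must be connected
            return False
    return True
\end{verbatim}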
\begin{multicols}{2}
\footnotesize
\subsection{Key notation for \Cref{sec:oneVortex} }\label{appendix:key}
\begin{description}
\item[$G=(V,E,w)$] : planar graph with a single vortex.
\item[$K$] : terminal set of size $k$.
\item[$D=O_h(L)$] : the diameter of $G$.
\item[$G_\Sigma$] : the embedded part.
\item[$W$] : vortex.
\item[$\{X_1,\dots,X_t\}$] : path decomposition of $W$ of width $h$.
\item[$\{x_1,\dots,x_t\}$] : perimeter vertices.
\item[{$\mathcal{V}\left[u,v\right]=P_0\cup X\cup Y\cup P_1$}] : vortex path.
\item[{$\bar{\mathcal{V}}[u,v]$}] : projection of a vortex path.
\item[$\tilde{x}$] : auxiliary perimeter vertex with bag $\tilde{X}=\{\tilde{x}\}$, $\tilde{x}$ is a neighbor of all the other vertices in $W$.
\item[$T_{\Sigma}$] : shortest path tree of $G_\Sigma$ rooted at $\{x_1,\dots,x_t\}$.
\item[$T_{\tilde{x}}=T_\Sigma\cup\{(\tilde{x},v)\mid v\in W\setminus\{\tilde{x}\}\}$] : spanning tree of $G$.
\item[{$C=\mathcal{V}_1[r,u]\cup \mathcal{V}_2[r,v]$}] : fundamental vortex cycle.
\item[$\mathcal{P}(C)$] : set of paths constituting $C$ ($|\mathcal{P}(C)|\le2(h+1)+1$).
\item[$\bar{C}$] : closed curve induced by $C$.
\item[$\mathcal{I},\mathcal{E}$] : interior and exterior of $C$.
\item[$\tau$] : hierarchical partition tree of $V$.
\item[$\Upsilon$] : subset of $V$, and node of $\tau$.
\item[$G_\Upsilon$] : graph associated with $\Upsilon$.
\item[$W_\Upsilon$] : the vortex of $G_\Upsilon$.
\item[$T_\Upsilon=T_{\tilde{x}}\cap G_\Upsilon$] : spanning tree of $G_\Upsilon$ rooted in $\tilde{x}$.
\item[$C_\Upsilon$] : a fundamental vortex cycle of $\Upsilon$ w.r.t. $T_\Upsilon$.
\item[$\bar{C}_\Upsilon$] : closed curve induced by $C_\Upsilon$.
\item[$\Upsilon^{\mathcal{E}}$, $\Upsilon^{\mathcal{I}}$] : exterior and interior of $C_\Upsilon$; also the children of $\Upsilon$ in $\tau$.
\item[{$\mathcal{P}(C)$}] : set of paths constituting fund. vor. cycle $C$.
\item[{$\mathcal{C}_\Upsilon$}] : the set of all the fundamental vortex cycles removed from the ancestors of $\Upsilon$ in $\tau$.
\item[$\bar{\mathcal{C}}_\Upsilon$] : the set of paths constituting $\mathcal{C}_\Upsilon$.
\item[$\mathcal{P}_\Upsilon\subseteq \mathcal{C}_\Upsilon$] : subset of shortest paths that is added to $G_\Upsilon$.
\item[$v_Q$] : representative vertex of a path $Q\in \mathcal{P}_\Upsilon$.
\item[$\omega$] : weight function over the vertices.
\item[$K_\Upsilon=\Upsilon\cap K$] : the set of terminals in $\Upsilon$.
\end{description}
\end{multicols}
\section{Missing Proofs}
\subsection{Proof of \Cref{clm:cycle-intersect-vortex-face}}\label{app:cycle-intersect-vortex-face}
We begin by restating the claim:
\CycleIntersectVortexFace*
\begin{proof}
First, we observe that
\begin{equation}\label{dist:boundary}
d(x,y) = 0 \qquad \forall x,y \in F_i, 1\leq i \leq v(G)
\end{equation}
since edges $(f_i,x),(f_i,y)$ have weight $0$.
\begin{observation}\label{clm:path-intersect-vortex-face}
For any $u \in V(K_{\Sigma})$ and any $i \in [1,v(G)]$, $|T[r,u] \cap F_i| \leq 2$ and if $|T[r,u] \cap F_i| = 2$, then $\{x_1,f_i,x_2\}$ is a subpath of $T[r,u]$ where $T[r,u]\cap F_i = \{x_1,x_2\}$.
\end{observation}
\begin{proof}
Suppose that $|T[r,u] \cap F_i| \geq 3$, then there must be two vertices $x_1,x_2 \in F_i$ such that $T[x_1,x_2] \subseteq T[r,u]$ does not go through $f_i$. Since $T[x_1,x_2]$ is a shortest path of positive length, this contradicts \Cref{dist:boundary}. This argument also implies that if $T[r,u]\cap F_i = \{x_1,x_2\}$, then $\{x_1,f_i,x_2\}$ must be a subpath of $T[r,u]$.
\end{proof}
If $r_0 \not= f_i$, then by \Cref{clm:path-intersect-vortex-face}, only one of the paths $T[r_0,u], T[r_0,v]$ can contain a vertex of $F_i$; otherwise, both paths would contain the vertex $f_i$, contradicting the fact that they share only the vertex $r_0$. If $r_0 = f_i$, then $|T[r_0,u]\cap F_i| = |T[r_0,v]\cap F_i| = 1$. Thus, the claim holds.
\end{proof}
\end{document} |
\begin{document}
\title{Stochastic Decomposition Method for Two-Stage Distributionally Robust Optimization}
{\bf Abstract.} In this paper, we present a sequential sampling-based algorithm for the two-stage distributionally robust linear programming (2-DRLP) models. The 2-DRLP models are defined over a general class of ambiguity sets with discrete or continuous probability distributions. The algorithm is a distributionally robust version of the well-known stochastic decomposition algorithm of Higle and Sen (Math. of OR 16(3), 650-669, 1991) for a two-stage stochastic linear program. We refer to the algorithm as the distributionally robust stochastic decomposition (DRSD) method. The key features of the algorithm include (1) it works with data-driven approximations of ambiguity sets that are constructed using samples of increasing size and (2) efficient construction of approximations of the worst-case expectation function that solves only two second-stage subproblems in every iteration. We identify conditions under which the ambiguity set approximations converge to the true ambiguity sets and show that the DRSD method asymptotically identifies an optimal solution, with probability one. We also computationally evaluate the performance of the DRSD method for solving distributionally robust versions of instances considered in stochastic programming literature. The numerical results corroborate the analytical behavior of the DRSD method and illustrate the computational advantage over an external sampling-based decomposition approach (distributionally robust L-shaped method).
\section{Introduction} \label{sect:intro}
Many applications of practical interest have been formulated as stochastic programming (SP) models. The models with recourse, particularly in a two-stage setting, have gained wide acceptance across application domains. These two-stage stochastic linear programs (2-SLPs) can be stated as follows:
\begin{align} \label{eq:2slp_master}
\min~ \{c^\top x + \expect{Q(x,\tilde{\omega})}{P^\star}~|~ x \in \set X\}.
\end{align}
One needs complete knowledge of the probability distribution $P^\star$ to formulate the above problem. Alternatively, one must have an appropriate means to simulate observations of the random variable so that a sample average approximation (SAA) problem with a finite number of scenarios can be formulated and solved. In many practical applications, the distribution associated with the random parameters of the optimization model is not precisely known. It either has to be estimated from data or constructed by expert judgments, which tend to be subjective. In any case, identifying a distribution using available information may be cumbersome at best. Stochastic min-max programming, which has gained significant attention in recent years under the name of \emph{distributionally robust optimization} (DRO), is intended to alleviate this ambiguity in distributional information.
In this paper, we study a particular manifestation of the DRO problem in the two-stage setting, viz., the two-stage distributionally robust linear programming (2-DRLP) problem. This problem is stated as:
\begin{align}
\min~ \{f(x) = c^\top x + \mathbb{Q}(x) ~|~ x \in \set{X} \}. \label{eq:2drlp_master}
\end{align}
Here, $c$ is the coefficient vector of a linear cost function and $\set X \subseteq \mathbb{R}^{d_x}$ is the feasible set of the first-stage decision vector. The feasible region $\set{X}$ takes the form of a compact polyhedron $\set{X} = \{x ~|~ Ax \geq b, x \geq 0\}$, where $A \in \mathbb{R}^{m_1 \times d_x}$ and $b \in \mathbb{R}^{m_1}$. The function $\mathbb{Q}(x)$ is the worst-case expected recourse cost, which is formally defined as follows:
\begin{align}
\mathbb{Q}(x) = \max_{P \in \mathfrak{P}}~ \bigg\{\mathcal{Q}(x; P) := \expect{Q(x,\tilde{\omega})}{P}\bigg\}. \label{eq:distrSeparation}
\end{align}
The random vector $\tilde{\omega} \in \mathbb{R}^{d_\omega}$ is defined on a measurable space $(\Omega, \set{F})$, where $\Omega$ is the sample space equipped with the sigma-algebra $\set{F}$ and $\mathfrak{P}$ is a set of continuous or discrete probability distributions defined on the measurable space $(\Omega, \set{F})$. The set of probability distributions is often referred to as the \emph{ambiguity set}. The expectation operation $\expect{\cdot}{P}$ is taken with respect to the probability distribution $P \in \mathfrak{P}$. For a given $x \in \mathcal{X}$, we refer to the optimization problem in \eqref{eq:distrSeparation} as the \emph{distribution separation problem}. For a given realization $\omega$ of the random vector $\tilde{\omega}$, the recourse cost in \eqref{eq:distrSeparation} is the optimal value of the following second-stage linear program:
\begin{align} \label{eq:2drlp_subproblem}
Q(x,\omega) := \min \quad & g(\omega)^\top y \\
\text{s.t.} \quad
& y \in \set Y(x,\omega) := \big\{W(\omega) y = r(\omega) - T(\omega) x,~ y \geq 0\big\} \subset \mathbb{R}^{d_y}. \notag
\end{align}
The second-stage parameters $g \in \mathbb{R}^{d_y}$, $W \in \mathbb{R}^{m \times d_y}$, $r \in \mathbb{R}^m$, and $T \in \mathbb{R}^{m\times d_x}$ can depend on uncertainty.
Most data-driven approaches for 2-SLP, such as SAA, tackle the problem in two steps. In the first step, an uncertainty representation is generated using a finite set of observations that serves as an approximation of $\Omega$ and the corresponding empirical distribution serves as an approximation of $P^\star$. For a given uncertainty representation, one obtains a deterministic approximation of \eqref{eq:2slp_master}. In the second step, the approximate problem is solved using deterministic optimization methods. Such a two-step approach may lead to poor out-of-sample performance, forcing the entire process to be repeated from scratch with an improved uncertainty representation. Since sampling is performed prior to the optimization step, the two-step approach is also referred to as the \emph{external sampling procedure}.
The data-driven approaches for DRO problems avoid working with a fixed approximation of $P^\star$ in the first step. However, the ambiguity set is still defined either over the original sample space $\Omega$ or a finite approximation of it. Therefore, the resulting ambiguity set is a deterministic representation of the true ambiguity set in \eqref{eq:2drlp_master}. Once again, deterministic optimization tools are employed to solve the DRO problem. In many data-driven settings, prior knowledge of the sample space may not be available, and using a finite sample to approximate the original sample space may result in similar out-of-sample performance as in the case of the external sampling approach for 2-SLP.
\subsection{Contributions} In light of the above observations regarding the two-step procedure for data-driven optimization, the main contributions of this manuscript are highlighted in the following.
\begin{enumerate}
\item \emph{A Sequential Sampling Algorithm}: We present a sequential sampling approach for solving 2-DRLP. We refer to this algorithm as the \emph{distributionally robust stochastic decomposition} (DRSD) algorithm following its risk-neutral predecessor, the two-stage stochastic decomposition (SD) method \cite{Higle1991}. The algorithm uses a sequence of ambiguity sets that evolve over the course of the algorithm due to the sequential inclusion of new observations. While the simulation step improves the representation of the ambiguity set, the optimization step improves the solution in an online manner. Therefore, the DRSD method concurrently performs simulation and optimization steps. Moreover, the algorithm design does not depend on any specific ambiguity set description, and hence, is suitable for a general family of ambiguity sets.
\item \emph{Convergence Analysis}: The DRSD method is an inexact bundle method that creates outer linearization for the dynamically evolving approximation of the first-stage problem. We provide the asymptotic analysis of DRSD and identify conditions on ambiguity sets under which the sequential sampling approach identifies an optimal solution to the 2-DRLP problem in \eqref{eq:2drlp_master} with probability one.
\item \emph{Computational Evidence of Performance}: We provide the first evidence that illustrates the advantages of a sequential sampling approach for DRO through computational experiments conducted on well-known instances in SP literature.
\end{enumerate}
\subsection{Related work}
The DRSD method has its roots in two-stage SP, in particular two-stage SD. In this subsection, we review the related two-stage SP literature along with decomposition and reformulation-based approaches for 2-DRLP.
For 2-SLP problems with finite support, including the SAA problem, the L-shaped method due to Van Slyke and Wets \cite{VanSlyke1969} has proven to be very effective. Other algorithms for 2-SLPs, such as the Dantzig-Wolfe decomposition \cite{Dantzig1960} and the progressive hedging (PH) algorithm \cite{Rockafellar1991}, also operate on problems with finite support. The well-established theory of SAA (see Chapter 5 in \cite{Shapiro2014}) supports the external sampling procedure for 2-SLP. The quality of the solution obtained by solving an SAA problem is assessed using the procedures developed, e.g., in \cite{Bayraksan2006}. When the quality of the SAA solution is not acceptable, a new SAA is constructed with a larger number of observations. Prior works, such as \cite{Bayraksan2011} and \cite{Royset2013}, provide rules for choosing the sequence of sample sizes in a sequential SAA procedure.
In contrast to the above, SD-based methods incorporate one new observation in every iteration to create approximations of the dynamically updated SAA of \eqref{eq:2slp_master}. First proposed in \cite{Higle1991}, this method has seen significant development in the past three decades with the introduction of a quadratic regularization term in \cite{Higle1994}, statistical optimality rules \cite{Higle1999}, and extensions to multistage stochastic linear programs \cite{Gangammanavar2020sdlp, Sen2014}. The DRSD method presented in this manuscript extends the sequential sampling approach (i.e., SD) for 2-SLPs to DRO problems. Since the simulation of new observations and the optimization steps are carried out in every iteration of SD-based methods, they can also be viewed as \emph{internal sampling methods}.
The concept of DRO dates back to the work of Scarf \cite{Scarf1958} and has gained significant attention in recent years. We refer the reader to \cite{rahimian2019distributionally} for a comprehensive treatment of various aspects of DRO. The algorithmic works on DRO follow either decomposition-based or reformulation-based approaches. The decomposition-based methods for 2-DRLP mimic the two-stage SP approach of using a deterministic representation of the sample space built from a finite number of observations. As a consequence, the SP solution methods, with suitable adaptation, can be applied to solve 2-DRLP problems. For instance, Breton and El Hachem \cite{Breton1995a, Breton1995b} apply the PH algorithm to a 2-DRLP model with a moment-based ambiguity set. Riis and Anderson \cite{Riis2005} extend the L-shaped method to 2-DRLP with continuous recourse and a moment-based ambiguity set. Bansal et al. \cite{bansal_DROdecomposition_2018} extend the algorithm in \cite{Riis2005}, which they refer to as the distributionally robust (DR) L-shaped method, to solve 2-DRLPs with an ambiguity set defined over a polytope in finitely many iterations. Further extensions of this decomposition approach are presented in \cite{bansal_DROdecomposition_2018} and \cite{bansal_DRO-Disjunctive_2019} for DRO with mixed-binary recourse and disjunctive programs, respectively.
Another predominant approach to solving 2-DRLP problems is to reformulate the distribution separation problem in \eqref{eq:distrSeparation} as a minimization problem, pose the problem in \eqref{eq:2drlp_master} as a single deterministic optimization problem, and use off-the-shelf deterministic optimization tools to solve the reformulation. For example, Shapiro and Kleywegt \cite{Shapiro2002} and Shapiro and Ahmed~\cite{Shapiro2004} provided approaches for the 2-DRLP problem with a moment matching set to derive an equivalent stochastic program with a certain reference distribution. Bertsimas et al. \cite{Bertsimas2010} provided tight semidefinite programming reformulations for 2-DRLP where the ambiguity set is defined using multivariate distributions with known first and second moments. Likewise, Hanasusanto and Kuhn \cite{Hanasusanto2018} provided a conic programming reformulation for the 2-DRLP problem where the ambiguity set comprises a 2-Wasserstein ball centered at a discrete distribution. Xie \cite{xie2019tractable} provided similar reformulations to tractable convex programs for 2-DRLP problems with the ambiguity set defined using the $\infty$-Wasserstein metric. Jiang and Guan \cite{Jiang2018} reduced the worst-case expectation in 2-DRLP, where the ambiguity set is defined using the $l_1$-norm on the space of all (continuous and discrete) probability distributions, to a convex combination of the CVaR and an essential supremum. By taking the dual of the inner maximization, Love and Bayraksan \cite{Bayraksan2015} demonstrated that 2-DRLP where the ambiguity set is defined using $\phi$-divergence over a finite sample space is equivalent to 2-SLP with a coherent risk measure. When reformulations result in equivalent stochastic programs (in \cite{Jiang2018, Bayraksan2015, ShaAhm04}, for instance), an SAA of the reformulation is used to obtain an approximate solution.
Data-driven approaches for DRO have been presented for specific ambiguity sets. In \cite{Delage2010}, problems with an ellipsoidal moment-based ambiguity set whose parameters are estimated using sampled data are addressed. Esfahani et al. tackled data-driven problems with Wasserstein metric-based ambiguity sets via convex reformulations in \cite{MohajerinEsfahani2018}. In both of these works, the authors provide finite-sample performance guarantees that probabilistically bound the gap between the approximate and true DRO problems. Sun and Xu presented an asymptotic convergence analysis of DRO problems with ambiguity sets that are based on moments and mixture distributions constructed using a finite set of observations in \cite{Sun2016}. A practical approach to incorporating the results of these works to identify a high-quality DRO solution would be similar to the sequential SAA procedure for SP in \cite{Bayraksan2011}. Such an approach would involve the following steps performed in sequence -- building a deterministic representation of the ambiguity set using sampled observations, applying an appropriate reformulation, and solving the resulting deterministic optimization problem. If the quality of the solution is deemed insufficient, then the entire sequence of steps is repeated with an improved representation of the ambiguity set (possibly with a larger number of observations).
\subsubsection*{Organization} The remainder of the paper is organized as follows. In \S\ref{sect:approximations}, we present the two key ideas of the DRSD, viz., the sequential approximation of the ambiguity set and of the recourse function. We provide a detailed description of the DRSD method in \S\ref{sect:algorithm}. We show the convergence of the value functions and solutions generated by the DRSD method in \S\ref{sect:convergence}. We present the results of our computational experiments in \S\ref{sect:computations}, and finally we conclude and discuss potential extensions of this work in \S\ref{sect:conclusion}.
\subsubsection*{Notations and Definitions}
We define the ambiguity sets over $\set{M}$, the set of all finite signed measures on the measurable space $(\Omega, \set{F})$. A nonnegative measure (written as $P \succeq 0$) that satisfies $P(\Omega) = 1$ is a probability measure. For probability distributions $P, P^\prime \in \mathfrak{P}$, we define \begin{align} \label{eq:zetaDistance}
\distance{P,P^\prime} := \sup_{F \in \mathcal{F}} \Big | \expect{F(\tilde{\omega})}{P} - \expect{F(\tilde{\omega})}{P^\prime} \Big |
\end{align}
as the uniform distance of expectation, where $\mathcal{F}$ is a class of measurable functions. The above is the distance with $\zeta$-structure that is used for the stability analysis in SP \cite{romisch2003stability}. The distance between a single probability distribution $P$ to a set of distributions $\mathfrak{P}$ is given as $\distance{P,\mathfrak{P}} = \inf_{P^\prime \in \mathfrak{P}} d(P,P^\prime)$. The distance between two sets of probability distributions $\mathfrak{P}$ and $\widehat{\mathfrak{P}}$ is given as
\begin{align*}
\setDistance{\mathfrak{P}, \widehat{\mathfrak{P}}} := \sup_{P \in \widehat{\mathfrak{P}}} \distance{P, \mathfrak{P}}.
\end{align*}
Finally, the Hausdorff distance between $\mathfrak{P}$ and $\widehat{\mathfrak{P}}$ is defined as
\begin{align*}
\hausdorffDistance{\mathfrak{P},\widehat{\mathfrak{P}}} := \max\{\setDistance{\mathfrak{P},\widehat{\mathfrak{P}}},~ \setDistance{\widehat{\mathfrak{P}}, \mathfrak{P}}\}.
\end{align*}
With suitable choices of the class $\mathcal{F}$, the distance in \eqref{eq:zetaDistance} recovers the bounded Lipschitz, the Kantorovich, and the $p$-th order Fortet--Mourier metrics (see \cite{romisch2003stability}).
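For finitely supported distributions and a finite class $\mathcal{F}$, the quantities above reduce to elementary maximizations. The following Python sketch, with purely illustrative inputs (probability vectors on a common finite support and a finite list of test functions), is included only to make the definitions concrete.
\begin{verbatim}
import numpy as np

def zeta_distance(P, P2, F_class, support):
    """d(P, P') = sup_{F in F_class} |E_P[F] - E_P'[F]| for probability
    vectors P, P2 on a common finite support."""
    return max(abs(np.array([F(w) for w in support]) @ (np.asarray(P) - np.asarray(P2)))
               for F in F_class)

def set_distance(set_A, set_B, F_class, support):
    """D(A, B) = sup_{P in B} inf_{P' in A} d(P, P') for finite collections."""
    return max(min(zeta_distance(P, P2, F_class, support) for P2 in set_A)
               for P in set_B)

def hausdorff_distance(A, B, F_class, support):
    """H(A, B) = max{ D(A, B), D(B, A) }."""
    return max(set_distance(A, B, F_class, support),
               set_distance(B, A, F_class, support))
\end{verbatim}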
\section{Approximating the Ambiguity Set and Recourse Function} \label{sect:approximations}
In this section, we present the building blocks for the DRSD method. Specifically, we present the procedures to obtain approximations of the ambiguity set $\mathfrak{P}$ and the recourse function $Q(x,\omega)$. These procedures will be embedded within a sequential sampling-based approach. Going forward we make the following assumptions on the 2-DRLP models:
\begin{enumerate}[label=(A\arabic{enumi})]
\item The first-stage feasible region $\set X$ is a non-empty and compact set. \label{assum:compactX}
\item The recourse function satisfies relatively complete recourse. The dual feasible region of the recourse problem is a nonempty, compact polyhedral set. The transfer (or technology) matrix satisfies $\sup_{P \in \mathfrak{P}} \expect{T(\tilde{\omega})}{P} < \infty$.\label{assum:completeRecourse}
\item The randomness affects the right-hand sides of constraints in \eqref{eq:2drlp_subproblem}. \label{assum:rhs}
\item The sample space $\Omega$ is a compact metric space and the ambiguity set $\mathfrak{P} \neq \emptyset$.\label{assum:compactOm}
\end{enumerate}
As a consequence of \ref{assum:completeRecourse}, the recourse function satisfies $Q(x,\tilde{\omega}) < \infty$ with probability one for all $x \in \set{X}$. It also implies that the second-stage feasible region, i.e., $\{y~:~ W y = r(\omega) - T(\omega)x,~ y\geq 0\}$, is non-empty for all $x \in \set X$ and every $\omega \in \Omega$. The non-empty dual feasible region $\Pi$ implies that there exists an $L > -\infty$ such that $Q(x,\tilde{\omega}) > L$. Without loss of generality, we assume that $Q(x,\tilde{\omega})\geq 0$. As a consequence of \ref{assum:rhs}, the cost coefficient vector $g$ and the recourse matrix $W$ are not affected by uncertainty; problems with a deterministic recourse matrix are said to have fixed recourse. Finally, the compactness of the support $\Omega$ in \ref{assum:compactOm} guarantees that every probability measure $P \in \mathfrak{P}$ is tight.
\subsection{Approximating the Ambiguity Set} \label{sect:ApproxAmbiguitySet}
The DRO approach assumes only partial knowledge about the underlying uncertainty, which is captured by a suitable description of the ambiguity set. An ambiguity set must capture the true distribution with absolute certainty or a high degree of certainty, and it must be computationally manageable. In this section we present the family of ambiguity sets that are of interest to us in this paper.
The computational aspects of solving a DRO problem rely heavily on the structure of the ambiguity set. The description of these structures involves parameters that are determined by the practitioner's risk preferences. The ambiguity set descriptions that are prevalent in the literature include moment-based ambiguity sets with linear constraints (e.g., \cite{Dupacova(1987)distributionRO}) or conic constraints (e.g., \cite{Delage2010}); Kantorovich distance or Wasserstein metric-based ambiguity sets \cite{Mehrotra2013}; $\zeta$-structure metrics \cite{Zhao2015}; $\phi$-divergences such as the $\chi^2$ distance and the Kullback-Leibler divergence \cite{Ben-tal2012}; and Prokhorov metrics \cite{Erdogan2006}, among others. Although the design of the DRSD method can work with any ambiguity set description defined over a compact sample space, we use 2-DRLPs with moment-based and Wasserstein distance-based ambiguity sets to illustrate the algorithm in detail.
In a data-driven setting, the parameters used in the description of ambiguity sets are estimated using a finite set of independent observations, which can either be past realizations of the random variable $\tilde{\omega}$ or be simulated by an oracle. We will denote such a sample by $\Omega^k \subseteq \Omega$ with $\Omega^k = \{\omega^j\}_{j=1}^k$. Naturally, we can view $\Omega^k$ as a random sample and define the empirical frequency
\begin{align}
\hat{p}^k(\omega^j) = \frac{\kappa(\omega^j)}{k} \qquad \text{for all } \omega^j \in \Omega^k,
\end{align}
where $\kappa(\omega^j)$ denotes the number of times $\omega^j$ is observed in the sample. Since, in the sequential sampling setting, the sample set is updated within the optimization algorithm, it is worth noting that the empirical frequency can be updated using the following recursion:
\begin{align} \label{eq:probUpdate}
\hat{p}^k(\omega) = \left \{ \begin{array}{ll}
\theta^k \hat{p}^{k-1}(\omega) & \text{if}~ \omega \in \Omega^{k-1}, \omega \neq \omega^k \\
\theta^k \hat{p}^{k-1}(\omega) + (1-\theta^k) & \text{if}~ \omega \in \Omega^{k-1}, \omega = \omega^k \\
(1-\theta^k) & \text{if}~ \omega \notin \Omega^{k-1}, \omega = \omega^k.
\end{array}\right.
\end{align}
where $\theta^k \in [0,1]$. We will succinctly denote the above operation using the mapping $\Theta^k: \mathbb{R}^{|\Omega^{k-1}|} \rightarrow \mathbb{R}^{|\Omega^k|}$.
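As an illustration, the recursion \eqref{eq:probUpdate} with $\theta^k = (k-1)/k$ can be implemented with a dictionary of frequencies; the following Python sketch uses hypothetical variable names and is not part of the formal development.
\begin{verbatim}
def update_empirical(p_hat, new_obs, k):
    """One step of the recursion with theta^k = (k-1)/k: shrink the old
    frequencies and assign mass (1 - theta^k) to the new observation omega^k."""
    theta = (k - 1) / k
    p_new = {w: theta * p for w, p in p_hat.items()}
    p_new[new_obs] = p_new.get(new_obs, 0.0) + (1.0 - theta)
    return p_new

# Example: after observing {a, b, a} the frequencies are {a: 2/3, b: 1/3}.
p = {}
for k, obs in enumerate(["a", "b", "a"], start=1):
    p = update_empirical(p, obs, k)
\end{verbatim}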
In the remainder of this section, we will present alternative descriptions of ambiguity sets and show the construction of what we will refer to as {\it approximate ambiguity sets}, denoted by $\widehat{\mathfrak{P}}^k$. Let $\set{F}^k = \sigma(\omega^j~|~ j \leq k)$ be the $\sigma$-algebra generated by the observations in the sample $\Omega^k$. Notice that $\set{F}^{k-1} \subseteq \set{F}^k$, and hence, $\{\set{F}^k\}_{k \geq 1}$ is a filtration. We will define the approximate ambiguity sets over the measurable space $(\Omega^k, \set{F}^k)$. These sets should be interpreted to include all distributions that could have been generated using the sample $\Omega^k$ and that share a certain relationship with the sample statistics. We will use $\set{M}^k$ to denote the set of finite signed measures on $(\Omega^k, \set{F}^k)$.
\subsubsection{Moment-based Ambiguity Sets} \label{sect:momentAmbiguity}
Given the first $q$ moments associated with the random variable $\tilde{\omega}$, the moment-based ambiguity set can be defined as
\begin{align} \label{eq:momentAmbuity}
\mathfrak{P}_{\text{mom}} = \left \{ P \in \set{M} \left \vert \begin{array}{l}
\int_{\Omega} dP(\tilde{\omega}) = 1, \\ \int_{\Omega} \psi_i(\tilde{\omega})\,dP(\tilde{\omega}) = b_i \qquad i = 1,\ldots,q
\end{array}\right. \right \}.
\end{align}
While the first constraint ensures that $P$ is a probability measure, the moment requirements are enforced by the second set of constraints. Here, $\psi_i$ denotes a real-valued measurable function on $(\Omega, \set{F})$ and $b_i \in \mathbb{R}$ is a scalar for $i = 1,\ldots,q$. Existence of the moments ensures that $b_i < \infty$ for all $i = 1,\ldots,q$. Notice that the description of the ambiguity set requires explicit knowledge of the following statistics: the support $\Omega$ and the moments $b_i$ for $i = 1,\ldots,q$. In the data-driven setting, the support is approximated by $\Omega^k$ and the sample moments $\hat{b}_i^k = (1/k)\sum_{j=1}^k \psi_i(\omega^j)$ are used to define the following approximate ambiguity set
\begin{align} \label{eq:momentAmbuity_approx}
\widehat{\probset}_{\textup{mom}}^k = \left \{ P \in \set{M}^k \left \vert \begin{array}{l}
\sum_{\omega \in \Omega^k} p(\omega) = 1, \\
\sum_{\omega \in \Omega^k} p(\omega) \psi_i(\omega) = \hat{b}_i^k \qquad i = 1,\ldots,q
\end{array}\right. \right \}.
\end{align}
The following result characterizes the relationship between distributions drawn from the above approximate ambiguity set, as well as asymptotic behavior of the sequence $\{\widehat{\probset}_{\textup{mom}}^k\}_{k \geq 1}$.
\begin{proposition} \label{prop:momentAmbiguity_property}
For any $P \in \widehat{\probset}_{\textup{mom}}^{k-1}$, we have $\Theta^k (P) \in \widehat{\probset}_{\textup{mom}}^k$ where $\theta^k = \frac{k-1}{k}$. Further, if $\widehat{\probset}_{\textup{mom}}^k \neq \emptyset$ for all $k \geq 0$, then $\hausdorffDistance{\widehat{\probset}_{\textup{mom}}^k, \mathfrak{P}_{\textup{mom}}} \rightarrow 0$ as $k \rightarrow \infty$, almost surely.
\end{proposition}
\begin{proof}
See Appendix \S\ref{sect:proofs}.
\end{proof}
In the context of DRO, similar ambiguity sets have been studied in \cite{Bertsimas2005,Dupacova(1987)distributionRO} where only the first moment (i.e., $q = 1$) was considered. The form of ambiguity set above also relates to those used in \cite{Delage2010, Riis2005, Scarf1958, Sun2016} where constraints were imposed only on the mean and covariance. In the data-driven setting of \cite{Delage2010} and \cite{Sun2016}, statistical estimates are used in constructing the approximate ambiguity set as in the case of \eqref{eq:momentAmbuity_approx}. However, the ambiguity sets in these previous works are defined over the original sample space $\Omega$, as opposed to the set $\Omega^k$ that is used in \eqref{eq:momentAmbuity_approx}. This marks a critical deviation in the way the approximate ambiguity sets are constructed.
\begin{remark}\label{rem:RiisandAndersonProp2.1}
When moment information about the underlying distribution $P^\star$ is available, an approximate moment-based ambiguity set with constant parameters in \eqref{eq:momentAmbuity_approx} (i.e., with $\hat{b}_i^k = b_i$ for all $k$) can be constructed. Such approximate ambiguity sets defined over $\Omega^k$ are studied in \cite{Riis2005}. Notice that these approximate ambiguity sets satisfy $\cup_{k\geq 1} \widehat{\mathfrak{P}}^k \subseteq \mathfrak{P}$ and $\widehat{\mathfrak{P}}^k\subseteq \widehat{\mathfrak{P}}^{k+1}$ for all $k\geq 1$.
\end{remark}
\subsubsection{Wasserstein distance-based Ambiguity Sets} \label{sect:wassersteinAmbiguity}
We next present approximations of another class of ambiguity sets that has gained significant attention in the DRO literature, viz., the Wasserstein distance-based ambiguity sets. Consider probability distributions $\mu_1, \mu_2 \in \set{M}$ and a function $\nu:\Omega \times \Omega \rightarrow \mathbb{R}_+ \cup \{\infty\}$ such that $\nu$ is symmetric, $\nu^{1/r}(\cdot)$ satisfies the triangle inequality for $1 \leq r < \infty$, and $\nu(\omega_1, \omega_2) = 0$ whenever $\omega_1 = \omega_2$. If $\Pi(\mu_1, \mu_2)$ denotes the set of joint distributions of random vectors $\omega_1$ and $\omega_2$ with marginals $\mu_1$ and $\mu_2$, respectively, then the Wasserstein metric of order $r$ is given by
\begin{align} \label{eq:wassersteinMetric}
d_{\text{w}}(\mu_1, \mu_2) = \inf_{\eta \in \Pi(\mu_1, \mu_2)} \bigg\{ \int_{\Omega \times \Omega} \nu(\omega_1, \omega_2)\eta(d\omega_1, d\omega_2) \bigg\}.
\end{align}
In the above definition, the decision variable $\eta \in \Pi(\mu_1,\mu_2)$ can be viewed as a plan to transport goods/mass from an entity whose spatial distribution is given by the measure $\mu_1$ to another entity with spatial distribution $\mu_2$. Therefore, $d_{\text{w}}(\mu_1,\mu_2)$ measures the optimal transport cost between the two measures. Notice that an arbitrary norm $\|\bullet\|^r$ on $\mathbb{R}^{d_\omega}$ satisfies the requirements on the function $\nu(\cdot)$. In this paper, we will use $\nu(\omega_1,\omega_2) = \|\omega_1 - \omega_2\|$, in which case we obtain the Wasserstein metric of order~1. Using this metric, we define an ambiguity set as follows:
\begin{align} \label{eq:wassersteinAmbiguity}
\mathfrak{P}_{\textup{w}} = \{P \in \set{M} ~|~ d_{\text{w}}(P,P^*) \leq \epsilon\}
\end{align}
for a given $\epsilon > 0$ and a reference distribution $P^*$. As was done in \S\ref{sect:momentAmbiguity}, we define approximate ambiguity sets over the measurable space $(\Omega^k, \set{F}^k)$ as follows:
\begin{align} \label{eq:wassersteinAmbiguityApprox}
\widehat{\mathfrak{P}}_{\textup{w}}^k = \{P \in \set{M}^k ~|~ d_{\text{w}}(P,\widehat{P}^k) \leq \epsilon\}.
\end{align}
For the above approximate ambiguity set, the distribution separation problem in \eqref{eq:distrSeparation} takes the following form:
\begin{subequations} \label{eq:distrSeparationWasserstein}
\begin{align}
\max~ & \sum_{\omega \in \Omega^k} p(\omega) Q(x,\omega) \\
\text{subject to}~&P \in \widehat{\mathfrak{P}}^k_{\textup{w}} = \left \{ P \in \set{M}^k \left \vert
\renewcommand{\arraystretch}{1.5}
\begin{array}{l}
\sum_{\omega \in \Omega^k} p(\omega) = 1 \\
\sum_{\omega^\prime \in \Omega^k} \eta(\omega,\omega^\prime) = p(\omega) \qquad \forall \omega \in \Omega^k, \\
\sum_{\omega \in \Omega^k} \eta(\omega,\omega^\prime) = \hat{p}^k(\omega^\prime) \qquad \forall \omega^\prime \in \Omega^k, \\
\sum_{(\omega, \omega^\prime) \in \Omega^k \times \Omega^k} \|\omega - \omega^\prime\| \eta(\omega,\omega^\prime) \leq \epsilon \\
\eta(\omega,\omega^\prime) \geq 0 \quad \forall \omega, \omega^\prime \in \Omega^k
\end{array}\right. \right \}. \label{eq:wassersteinAmbiguityApprox_full}
\end{align}
\end{subequations}
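For a finite sample, the problem \eqref{eq:distrSeparationWasserstein} is a linear program in the variables $p(\omega)$ and $\eta(\omega,\omega^\prime)$ and can be assembled directly. The following Python/SciPy sketch is only illustrative; the inputs \texttt{Q\_vals}, \texttt{p\_hat}, \texttt{support}, and \texttt{eps} are hypothetical placeholders for $\{Q^k(x,\omega)\}_{\omega \in \Omega^k}$, $\hat{p}^k$, $\Omega^k$, and $\epsilon$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def wasserstein_separation(Q_vals, p_hat, support, eps):
    """Distribution separation over the approximate Wasserstein ball:
    maximize sum_omega p(omega) Q_vals[omega] subject to the constraints
    above; decision vector is [p ; vec(eta)], with eta stored row-major."""
    k = len(support)
    n = k + k * k
    c = np.zeros(n)
    c[:k] = -np.asarray(Q_vals)               # maximize <=> minimize the negative
    A_eq, b_eq = [], []
    row = np.zeros(n); row[:k] = 1.0           # sum_omega p(omega) = 1
    A_eq.append(row); b_eq.append(1.0)
    for i in range(k):                         # sum_{omega'} eta(i, omega') = p(i)
        row = np.zeros(n); row[i] = -1.0
        row[k + i * k: k + (i + 1) * k] = 1.0
        A_eq.append(row); b_eq.append(0.0)
    for j in range(k):                         # sum_omega eta(omega, j) = p_hat(j)
        row = np.zeros(n); row[k + j::k] = 1.0
        A_eq.append(row); b_eq.append(p_hat[j])
    cost = np.zeros(n)                         # transport budget constraint
    for i in range(k):
        for j in range(k):
            cost[k + i * k + j] = np.linalg.norm(np.atleast_1d(support[i])
                                                 - np.atleast_1d(support[j]))
    res = linprog(c, A_ub=cost.reshape(1, -1), b_ub=[eps],
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq))  # p, eta >= 0 by default
    return res.x[:k], -res.fun                 # extremal distribution and its value
\end{verbatim}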
The following result characterizes the distributions drawn from the approximate ambiguity sets of the form in \eqref{eq:wassersteinAmbiguityApprox}, or equivalently \eqref{eq:wassersteinAmbiguityApprox_full}.
\begin{proposition} \label{prop:wassersteinAmbiguity_property}
The sequence of Wasserstein distance-based approximate ambiguity sets satisfies the following properties: (1) for any $P \in \widehat{\mathfrak{P}}_{\textup{w}}^{k-1}$, we have $\Theta^k(P) \in \widehat{\mathfrak{P}}_{\textup{w}}^{k}$ where $\theta^k = \frac{k-1}{k}$, and (2) $\mathbb{H}( \widehat{\mathfrak{P}}_{\textup{w}}^{k}, \mathfrak{P}_{\textup{w}}) \rightarrow 0$ as $k \rightarrow \infty$, almost surely.
\end{proposition}
\begin{proof}
See appendix \S\ref{sect:proofs}.
\end{proof}
The approximate ambiguity set in \cite{MohajerinEsfahani2018} is a ball constructed in the space of probability distributions defined over the sample space $\Omega$ whose radius shrinks as the number of observations increases. Using Wasserstein balls of shrinking radii, the authors of \cite{MohajerinEsfahani2018} show that the optimal value of the sequence of DRO problems converges to the optimal value of the expectation-valued SP problem in \eqref{eq:2slp_master} associated with the true distribution $P^\star$. The approximate ambiguity set in \eqref{eq:wassersteinAmbiguityApprox}, on the other hand, uses a constant radius for all $k \geq 1$. In this regard, we consider settings where the ambiguity is not necessarily resolved with an increasing number of observations. This is reflected in the approximate ambiguity sets \eqref{eq:momentAmbuity_approx} and \eqref{eq:wassersteinAmbiguityApprox}, which converge to the true ambiguity sets \eqref{eq:momentAmbuity} and \eqref{eq:wassersteinAmbiguity}, respectively.
\subsection{Approximating the Recourse Problem} \label{sect:recourseApprox}
Cutting plane methods for 2-SLPs use an outer linearization-based approximation of the first-stage objective function in \eqref{eq:2slp_master}. In such algorithms, the challenging aspect of computing the expectation is addressed by taking advantage of the structure of the recourse problem \eqref{eq:2drlp_subproblem}. Specifically, for a given $\omega$, the recourse value $Q(\cdot,\omega)$ is known to be convex in the right-hand side parameters, which include the first-stage decision vector $x$. Additionally, if the support of $\tilde{\omega}$ is finite and \ref{assum:completeRecourse} holds, then the function $Q(\cdot,\omega)$ is polyhedral. Under assumptions \ref{assum:completeRecourse} and \ref{assum:compactOm}, this convexity extends to the expected recourse value $\mathcal{Q}(x)$.
Due to strong duality of linear programs, the recourse value is also equal to the optimal value of the dual of \eqref{eq:2drlp_subproblem}, i.e.,
\begin{align} \label{eq:subproblemDual}
Q(x,\omega) = \max~& \pi^\top [r(\omega) - T(\omega)x] \\
\text{subject to}~& \pi \in \Pi := \{\pi~|~W^\top \pi \leq g\}. \notag
\end{align}
Due to \ref{assum:completeRecourse} and \ref{assum:rhs}, the dual feasible region $\Pi$ is a polytope that is not affected by the uncertainty. If $\mathrm{ext}(\Pi) \subset \Pi$ denotes the set of all extreme points of the polytope $\Pi$, then the recourse value can also be expressed as the pointwise maximum of affine functions generated by the elements of $\mathrm{ext}(\Pi)$, as given below.
\begin{align} \label{eq:recoursePolyhedralForm}
Q(x,\omega) = \max_{\pi \in \mathrm{ext}(\Pi)} \pi^\top [r(\omega) - T(\omega)x].
\end{align}
The outer linearization approaches approximate the above form of the recourse function by identifying extreme points (optimal solutions to \eqref{eq:subproblemDual}) at a sequence of candidate (or trial) solutions $\{x^k\}$ and generating the corresponding affine functions. If $\pi(x^k,\omega)$ is an optimal dual solution obtained by solving \eqref{eq:subproblemDual} with $x^k$ as input, then the affine function $\alpha^k(\omega) + (\beta^k(\omega))^\top x$ is obtained by computing the coefficients $\alpha^k(\omega) = (\pi(x^k,\omega))^\top r(\omega)$ and $\beta^k(\omega) = -T(\omega)^\top \pi(x^k,\omega)$. Following linear programming duality, notice that this affine function is a supporting hyperplane of $Q(\cdot,\omega)$ at $x^k$ and lower bounds the function at every other $x \in \set{X}$.
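As a small illustration of this construction, the following Python/SciPy sketch solves the dual subproblem \eqref{eq:subproblemDual} and forms the corresponding cut coefficients; the arguments \texttt{g}, \texttt{W}, \texttt{r}, and \texttt{T} are hypothetical stand-ins for the problem data.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def dual_solution_and_cut(x_k, omega, g, W, r, T):
    """Solve max pi^T [r(omega) - T(omega) x_k] s.t. W^T pi <= g, and return
    the optimal dual vertex together with alpha(omega) and beta(omega)."""
    rhs = r(omega) - T(omega) @ x_k
    res = linprog(c=-rhs, A_ub=W.T, b_ub=g,
                  bounds=[(None, None)] * W.shape[0])  # pi is a free variable
    pi = res.x                                         # -res.fun equals Q(x_k, omega)
    return pi, pi @ r(omega), -T(omega).T @ pi
\end{verbatim}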
If the support $\Omega$ is finite, then one can solve a dual subproblem for every $\omega \in \Omega$ with the candidate solution as input, generate the affine functions, and collate them to obtain an approximation of the first-stage objective function. This is the essence of the L-shaped method applied to the 2-SLP in \eqref{eq:2slp_master}. In each iteration of the L-shaped method, the affine functions generated using a candidate solution $x^k$ and information gathered from individual observations are weighted by the probability of the observation to update the approximate first-stage objective function. Notice that the L-shaped method can also be applied to an SAA of the first-stage objective function of the 2-SLP built using a sample $\Omega_N \subset \Omega$ of size $N$. A similar approximation strategy is used in the DR L-shaped method for 2-DRLP problems.
Alternatively, we can consider the following approximation of the recourse function expressed in the form given in \eqref{eq:recoursePolyhedralForm}:
\begin{align}\label{eq:recoursePolyhedralApprox}
Q^k(x,\omega) = \max_{\pi \in \Pi^k} \pi^\top[r(\omega) - T(\omega) x].
\end{align}
Notice that the above approximation is built using only a subset $\Pi^k \subset \Pi$ of extreme points, and therefore satisfies $Q^k(x,\omega) \leq Q(x,\omega)$. Since $Q(x,\omega) \geq 0$, we begin with $\Pi^0 = \{0\}$. Subsequently, we construct a nested sequence of sets $\Pi^0 \subseteq \Pi^1 \subseteq \ldots \subseteq \Pi^k \subseteq \ldots \subset \Pi$, which ensures that $Q^k(x,\omega)\geq 0$ for all $k$. The following result from \cite{Higle1991} captures the behavior of the sequence of approximations $\{Q^k\}$.
\begin{proposition}
The sequence $\{Q^k(x,\omega)\}_{k \geq 1}$ converges uniformly to a continuous function on $\set{X}$ for any $\omega \in \Omega$. \label{prop:uniformConvergenceApproxRecourse}
\end{proposition}
\begin{proof}
See Appendix \ref{sect:proofs}.
\end{proof}
Approximations of the form in \eqref{eq:recoursePolyhedralApprox} are one of the principal features of the SD algorithm (see \cite{Higle1991, Higle1994}). While the L-shaped and DR L-shaped methods require a finite support for $\tilde{\omega}$, SD is applicable even for problems with continuous support. The algorithm uses an ``incremental'' SAA of the first-stage objective function, adding one new observation in each iteration. Therefore, the first-stage objective function approximation used in SD is built using the recourse problem approximation in \eqref{eq:recoursePolyhedralApprox} and the incremental SAA. This approximation is given by:
\begin {align} \label{eq:saaIncremental}
\mathcal{Q}^k(x) = c^\top x + \frac{1}{k} \sum_{j=1}^k Q^k(x,\omega^j).
\end{align}
The affine functions generated in SD provide an outer linearization of the approximation in \eqref{eq:saaIncremental}. The monotonically growing sequence of sets $\{\Pi^k\}$ is generated by adding one new vertex to the previous set $\Pi^{k-1}$ to obtain the updated set $\Pi^k$. The newly added vertex is the optimal dual solution obtained by solving \eqref{eq:subproblemDual} with the most recent observation $\omega^k$ and the candidate solution $x^k$ as input.
We refer the reader to \cite{Birge2011}, \cite{bansal_DROdecomposition_2018,Riis2005}, and \cite{Higle1991,Higle1996} for a detailed exposition of the L-shaped, the DR L-shaped, and the SD methods, respectively, and note only the key differences between these methods. First, the sample used in the (DR) L-shaped method is fixed prior to optimization; in SD, the sample is updated dynamically throughout the course of the algorithm. Second, exact subproblem optimization for all observations in the sample is performed in every iteration of the (DR) L-shaped method. In SD, on the contrary, exact optimization is used only for the subproblems corresponding to the latest observation, and an ``argmax'' procedure (to be described in the next section) is used for observations encountered in earlier iterations.
\section{Distributionally Robust Stochastic Decomposition}\label{sect:algorithm}
In this paper, we focus on a setting where the ambiguity set $\mathfrak{P}$ is approximated by a sequence of ambiguity sets $\{\widehat{\mathfrak{P}}^k\}_{k > 0}$ such that the following properties are satisfied: ($i$) for any $P \in \widehat{\mathfrak{P}}^{k-1}$, there exists $\theta^k \in [0,1]$ such that $\Theta^k(P) \in \widehat{\mathfrak{P}}^{k}$, and ($ii$) $\mathbb{H}( \widehat{\mathfrak{P}}^{k}, \mathfrak{P}) \rightarrow 0$ as $k \rightarrow \infty$, almost surely. The moment-based ambiguity set $\mathfrak{P}_{\textup{mom}}$ and the Wasserstein distance-based ambiguity set $\mathfrak{P}_{\textup{w}}$ are two sets that satisfy these properties (Propositions \ref{prop:momentAmbiguity_property} and \ref{prop:wassersteinAmbiguity_property}, respectively). Recall that the approximate ambiguity set in iteration $k$ is constructed using a finite set of observations $\Omega^k$ that progressively grows in size. Note that the sequence of approximate ambiguity sets does not necessarily converge to a single distribution. In other words, we do not assume that an increasing sample size will overcome ambiguity asymptotically, as is the case in \cite{MohajerinEsfahani2018, Zhao2015}.
\begin{algorithm}[!ht]
\caption{Distributionally Robust Stochastic Decomposition}
\begin{algorithmic}[1]
\State {\bf Input:} Incumbent solution $\hat{x}^1 \in \set{X}$; initial sample $\Omega^0 \subseteq \Omega$; stopping tolerance $\tau > 0$; $\gamma \in (0,1]$, and minimum number of iterations $k^{\min}$.
\State {\bf Initialization:} Set iteration counter $k\leftarrow 1$; $\Pi^0 = \{0\}$; $\set{L}^0 = \emptyset$, and $f^0(x) = 0$.
\While{($k \leq k^{\min}$ or $f^{k-1}(\hat{x}^{k-1}) - f^{k-1}(x^k) \geq \tau f^{k-1}(\hat{x}^{k-1})$)}
\State Solve the master problem $\mathcal{M}^k$ \eqref{eq:2rdsd_master} to obtain a candidate solution $x^k$. \label{step:masterProblem}
\State Generate a scenario $\omega^k \in \Omega$ to get sample $\Omega^k \leftarrow \Omega^{k-1} \cup \{\omega^k\}$. \label{step:scenariogeneraration}
\For{$\omega \in \Omega^k$} \label{step:stocUpdates}
\If {$\omega = \omega^k$} \label{step:currentObs}
\State Solve the second-stage linear program \eqref{eq:2drlp_subproblem} with ($x^k,\omega$) as input;
\State Obtain the optimal value $Q(x^k,\omega)$ and optimal dual solution $\pi(x^k,\omega)$;
\State Update dual vertex set $\Pi^k \leftarrow \Pi^{k-1} \cup \{\pi(x^k,\omega)\}$. \label{step:currentObs_end}
\Else \label{step:oldObs}
\State Use the argmax procedure \eqref{eq:argmax} to identify a dual vertex $\pi(x^k,\omega)$;
\State Store $Q^k(x^k,\omega) = (\pi(x^k,\omega))^\top[r(\omega) - T(\omega) x^k]$.\label{step:oldObs_end}
\EndIf
\EndFor
\State Solve the distribution separation problem using the ambiguity set $\widehat{\mathfrak{P}}^k$ and $\{Q^k(x^k,\omega)\}_{\omega \in \Omega^k}$ to get an extremal distribution $P^k:= (p^k(\omega))_{\omega \in \Omega^k}$. \label{line:dissepalgo}
\State Derive the affine function ${\ell}_k^k(x) = {\alpha}_k^k + ({\beta}_k^k)^\top x$ using $\{\pi(x^k,\omega)\}_{\omega \in \Omega^{k}}$ and $P^k$ to obtain a lower bound approximation of $\mathbb{Q}^k(x)$ as in \eqref{eq:affineCoeff};\label{step:affinefunctionlowerbound}
\State Perform Steps \ref{step:stocUpdates}-\ref{step:affinefunctionlowerbound} with $\hat{x}^{k-1}$ (incumbent solution) to obtain $\hat{\ell}_k^k(\cdot)$. \label{step:repeatforincumbentsol}
\For{$\ell_j^{k-1} \in \set{L}^{k-1}$} \label{step:affinecoeffupdateini}
\State Update previously generated affine functions $\ell^{k-1}_j(x) = {\alpha}^{k-1}_j + ({\beta}^{k-1}_j)^\top x$: $$\alpha^k_j = \theta^k \alpha^{k-1}_j \text{ and } \beta^k_j = \theta^k \beta^{k-1}_j;$$
\State Set ${\ell}^k_j(x) = {\alpha}^k_j + ({\beta}^k_j)^\top x$, which provides a lower bound approximation of $\mathbb{Q}^k(x)$;
\EndFor \label{step:affinecoeffupdateend}
\State Build a collection of these affine functions, denoted by $\set{L}^k$;\label{step:buildLk}
\State Update approximation of the first-stage objective function:\label{step:update1stageapprox}
\begin{align*}
c^\top x + \mathbb{Q}^k (x) \geq f^k(x) = c^\top x + \max_{j \in \mathcal{L}^k}~ \{ \alpha^k_j + (\beta^k_j)^\top x \};
\end{align*}
\If{Incumbent update rule \eqref{eq:incumbUpdt} is satisfied} \label{step:incumbUpdate}
\State $\hat{x}^k \leftarrow x^k$ and $i_k \leftarrow k$;
\Else
\State $\hat{x}^k \leftarrow \hat{x}^{k-1}$ and $i_k \leftarrow i_{k-1}$;
\EndIf
\State Update the master problem by replacing $f^{k-1}(x)$ with $f^{k}(x)$ to obtain $\mathcal{M}^{k+1}$;
\State $k \leftarrow k+1$;\label{step:incrementcounter}
\EndWhile \label{step:incumbUpdate_end}
\State \Return {$x^k$} \label{return}
\label{Algo:return}
\end{algorithmic}
\label{Algo:GenTSSDP}
\end{algorithm}
The pseudocode of the DRSD method is given in Algorithm \ref{Algo:GenTSSDP}.
We present the main steps of the DRSD method in iteration $k$ (Steps \ref{step:masterProblem}-\ref{step:incrementcounter} of Algorithm~\ref{Algo:GenTSSDP}). At the beginning of iteration $k$, we have an approximation of the first-stage objective function that we denote by $f^{k-1}(x)$, a finite set of observations $\Omega^{k-1}$, and an incumbent solution $\hat{x}^{k-1}$. We use the term \emph{incumbent solution} to refer to the best solution discovered by the algorithm up to iteration $k$. The solution identified in the current iteration is referred to as the \emph{candidate solution} and is denoted by $x^k$ (without $\hat{\bullet}$).
Iteration $k$ begins by identifying the candidate solution by solving the following master problem (Step \ref{step:masterProblem}):
\begin{align}\label{eq:2rdsd_master}
x^k \in \arg\min~\{f^{k-1}(x) ~|~ x \in \mathcal{X}\},
\end{align}
denoted by $\mathcal{M}^k$. Following this, a new observation $\omega^k \in \Omega$ is realized, and added to the current sample of observations $\Omega^{k-1}$ to get $\Omega^k = \Omega^{k-1} \cup \{\omega^k\}$ (Step \ref{step:scenariogeneraration}).
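In epigraph form, $\mathcal{M}^k$ is a linear program over $x$ and an auxiliary scalar that majorizes the cut values. The sketch below (Python/SciPy) is only illustrative: \texttt{cuts} is a hypothetical list of pairs $(\alpha_j,\beta_j)$, and $(A,b)$ describe the polyhedron $\set{X}$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_master(c, A, b, cuts):
    """min c^T x + t  s.t.  A x >= b, x >= 0, t >= alpha_j + beta_j^T x for
    every stored cut; t >= 0 since the recourse approximation is nonnegative."""
    d = len(c)
    obj = np.append(c, 1.0)                              # variables [x ; t]
    A_ub, b_ub = [], []
    for alpha, beta in cuts:                             # alpha + beta^T x - t <= 0
        A_ub.append(np.append(beta, -1.0)); b_ub.append(-alpha)
    for row, rhs in zip(A, b):                           # A x >= b  <=>  -A x <= -b
        A_ub.append(np.append(-np.asarray(row), 0.0)); b_ub.append(-rhs)
    res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub))  # x, t >= 0 by default
    return res.x[:d], res.fun                            # candidate x^k and f^{k-1}(x^k)
\end{verbatim}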
In order to build the first-stage objective function approximation, we rely upon the recourse function approximation presented in Section \ref{sect:recourseApprox}. For the most recent observation $\omega^k$ and the candidate solution $x^k$, we evaluate the recourse function value $Q(x^k, \omega^k)$ by solving \eqref{eq:2drlp_subproblem}, and obtain the dual optimal solution $\pi(x^k, \omega^k)$. Likewise, we obtain the dual optimal solution $\pi(\hat{x}^{k-1}, \omega^k)$ by solving \eqref{eq:2drlp_subproblem} for the incumbent solution $\hat{x}^{k-1}$ (Steps \ref{step:currentObs}--\ref{step:currentObs_end}). These dual vectors are added to the set $\Pi^{k-1}$ of previously discovered optimal dual vectors. In other words, we recursively update $\Pi^k \leftarrow \Pi^{k-1}\cup \{\pi(x^k,\omega^k), \pi(\hat{x}^{k-1},\omega^k)\}$. For all other observations ($\omega \in \Omega^k,\ \omega \neq \omega^k$), we identify a dual vector in $\Pi^k$ that provides the best lower bounding approximation of $Q(x^k, \omega)$ using the following operation (Steps \ref{step:oldObs}--\ref{step:oldObs_end}):
\begin {equation} \label{eq:argmax}
\pi(x^k, \omega) \in \arg\max~ \{ \pi^{\top} [r(\omega) - T(\omega)x^k]~|~ \pi \in \Pi^k \}.
\end {equation}
Note that the calculations in \eqref{eq:argmax} are carried out only for previous observations, as $\pi(x^k, \omega^k)$ already provides the best lower bound on $Q(x^k, \omega^k)$. Further, notice that $$\pi(x^k, \omega)^\top[r(\omega) - T(\omega)x^k] = Q^k(x^k,\omega),$$ the approximate recourse function value at $x^k$ defined in \eqref{eq:recoursePolyhedralApprox}, for all $\omega \in \Omega^k$, and that $Q^k(x^k,\omega^k) = Q(x^k,\omega^k)$.
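The argmax procedure amounts to evaluating every stored dual vertex at the current right-hand side, as in the following minimal Python sketch (\texttt{dual\_set} plays the role of $\Pi^k$; \texttt{r} and \texttt{T} are hypothetical data callbacks).
\begin{verbatim}
import numpy as np

def argmax_dual(x, omega, dual_set, r, T):
    """Return the stored dual vertex that maximizes pi^T [r(omega) - T(omega) x]
    together with the resulting value Q^k(x, omega)."""
    rhs = r(omega) - T(omega) @ x
    vals = [pi @ rhs for pi in dual_set]
    j = int(np.argmax(vals))
    return dual_set[j], vals[j]
\end{verbatim}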
Using $\{Q^k(x^k,\omega^j)\}_{j=1}^k$, we solve a \emph{distribution separation problem} (in Step \ref{line:dissepalgo}):
\begin{align} \label{eq:distrSeparationApprox}
\mathbb{Q}^k(x^k) = \max~\bigg \{\sum_{\omega \in \Omega^k} p(\omega)Q^k(x^k,\omega)~|~ P = (p(\omega))_{\omega \in \Omega^k} \in \widehat{\mathfrak{P}}^k \bigg \}.
\end{align}
Let $P^k = (p^k(\omega))_{\omega \in \Omega^k}$ denote the optimal solution of the above problem, which we identify as the maximal/extremal probability distribution. Since the problem is solved over measures in $\set{M}^k$ that are defined only over the observed set $\Omega^k$, the maximal probability distribution has weights $p^k(\omega^j)$ for $\omega^j \in \Omega^k$, and $p^k(\omega) = 0$ for $\omega \in \Omega\setminus \Omega^k$. Notice that the problem in \eqref{eq:distrSeparation} differs from the distribution separation problem \eqref{eq:distrSeparationApprox} in that the latter uses the recourse function approximation $Q^k(\cdot)$ and the approximate ambiguity set $\widehat{\mathfrak{P}}^k$, as opposed to the true recourse function $Q(\cdot)$ and ambiguity set $\mathfrak{P}$, respectively. For the ambiguity sets in \S\ref{sect:ApproxAmbiguitySet} the distribution separation problem is a deterministic linear program. In general, the distribution separation problems associated with well-known ambiguity sets remain deterministic convex optimization problems \cite{rahimian2019distributionally}, and off-the-shelf solvers can be used to obtain the extremal distribution.
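For instance, for the moment-based set \eqref{eq:momentAmbuity_approx} the problem \eqref{eq:distrSeparationApprox} takes the form of the linear program sketched below (Python/SciPy); the arguments \texttt{psi\_list} and \texttt{b\_hat} are hypothetical containers for the functions $\psi_i$ and the sample moments $\hat{b}_i^k$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def moment_separation(Q_vals, support, psi_list, b_hat):
    """max sum_omega p(omega) Q^k(x, omega)  s.t.  sum_omega p(omega) = 1,
    sum_omega p(omega) psi_i(omega) = b_hat_i, and p >= 0."""
    A_eq = [np.ones(len(support))]
    b_eq = [1.0]
    for psi, bi in zip(psi_list, b_hat):
        A_eq.append(np.array([psi(w) for w in support])); b_eq.append(bi)
    res = linprog(-np.asarray(Q_vals), A_eq=np.array(A_eq), b_eq=np.array(b_eq))
    return res.x, -res.fun        # extremal distribution P^k and its objective value
\end{verbatim}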
In Step \ref{step:affinefunctionlowerbound} of Algorithm \ref{Algo:GenTSSDP}, we use the dual vectors $\{\pi(x^k,\omega^j)\}_{j \leq k}$ and the maximal probability distribution $P^k$ to generate a lower bounding affine function:
\begin{align} \label{eq:cutComputation}
\mathbb{Q}^k(x) = \max_{P \in \widehat{\mathfrak{P}}^k} \expect{Q^k(x,\tilde{\omega})}{P} \geq \sum_{\omega^j \in \Omega^k} p^k(\omega^j) \cdot (\pi(x^k,\omega^j))^{\top} [r(\omega^j) - T(\omega^j)x],
\end{align}
for the worst case expected recourse function measured with respect to the maximal probability distribution $P^k \in \widehat{\mathfrak{P}}^k$ which is obtained by solving the distribution separation problem \eqref{eq:distrSeparationApprox}. We denote the coefficients of the affine function on the right-hand side of \eqref{eq:cutComputation} by
\begin{align}\label{eq:affineCoeff}
\alpha_k^k = \sum_{\omega^j \in \Omega^k} p^k(\omega^j) \pi(x^k,\omega^j)^\top r(\omega^j)\text{ and }\beta_k^k = -\sum_{\omega^j \in \Omega^k} p^k(\omega^j) T(\omega^j)^\top \pi(x^k,\omega^j),
\end{align}
and succinctly write the affine function as $\ell_k^k(x) = \alpha_k^k + (\beta_k^k)^\top x$. Similar calculations are carried out using the incumbent solution $\hat{x}^{k-1}$ to identify a maximal probability distribution and a lower bounding affine function, resulting in the affine function $\hat{\ell}_k^k(x) = \hat{\alpha}_k^k + (\hat{\beta}_k^k)^\top x$.
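Computationally, the coefficients in \eqref{eq:affineCoeff} are probability-weighted sums over the observed sample, as the following Python sketch (with the same hypothetical data callbacks as before) illustrates.
\begin{verbatim}
import numpy as np

def build_cut(p_k, duals, support, r, T):
    """Coefficients of ell_k^k(x) = alpha + beta^T x:
    alpha = sum_j p^k(omega^j) pi_j^T r(omega^j),
    beta  = -sum_j p^k(omega^j) T(omega^j)^T pi_j."""
    alpha = sum(p * (pi @ r(w)) for p, pi, w in zip(p_k, duals, support))
    beta = -sum(p * (T(w).T @ pi) for p, pi, w in zip(p_k, duals, support))
    return alpha, beta
\end{verbatim}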
While the latest affine functions provide lower bounds on $\mathbb{Q}^k$, the affine functions generated in previous iterations are not guaranteed to lower bound $\mathbb{Q}^k$. To see this, let us consider the moment-based approximate ambiguity sets $\{\widehat{\mathfrak{P}}^k_{\text{mom}}\}_{k\geq 1}$. Let $P^j_{\text{mom}} \in \widehat{\mathfrak{P}}^j_{\text{mom}}$ be the maximal distribution identified in an iteration $j <k$ that was used to compute the affine function $\ell_j^j(x)$. By assigning $p^j(\omega) = 0$ to all new observations encountered after iteration $j$, i.e., $\omega \in \Omega^k \setminus \Omega^j$, we can construct a probability distribution $\bar{P} = ((p^j(\omega))_{\omega \in \Omega^j}, (0)_{\omega \in \Omega^k \setminus \Omega^j}) \in \mathbb{R}^{|\Omega^k|}_+$. This reconstructed distribution satisfies $\sum_{\omega \in \Omega^k} \bar{p}(\omega) = 1$. However, it satisfies $\sum_{\omega \in \Omega^k} \psi_i(\omega) \bar{p}(\omega) = \hat{b}^j_i$, and in general $\hat{b}^j_i \neq \hat{b}^k_i$ for some $i \in \{1,\ldots,q\}$. Therefore, $\bar{P} \notin \widehat{\mathfrak{P}}^k_{\text{mom}}$. In other words, while the coefficients $(\alpha_j^j, \beta_j^j)$ are $\set{F}^j$-measurable, the corresponding measure is not feasible to the approximate ambiguity set $\widehat{\mathfrak{P}}^k_{\text{mom}}$. Therefore, $\ell_j^j(x)$ is not a valid lower bound on $\mathbb{Q}^k$. The arguments for the Wasserstein distance-based approximate ambiguity set are more involved, but the persistence of a similar issue can be demonstrated.
In order to address this, we update the previously generated affine functions $\ell^{k-1}_j(x) = {\alpha}^{k-1}_j + ({\beta}^{k-1}_j)^\top x$ for $j<k$, as follows (Steps \ref{step:affinecoeffupdateini} - \ref{step:affinecoeffupdateend}):
\begin{align} \label{eq:affineCoeff_update}
\alpha_j^k = \theta^k \alpha_j^{k-1}, \ \ \beta_j^k = \theta^k \beta_j^{k-1}, \text{ and } {\ell}^k_j(x) = {\alpha}^k_j + ({\beta}^k_j)^\top x \qquad \text{ for all } j < k,
\end{align}
such that $\ell^k_j(x)$ provides a lower bound approximation of $\mathbb{Q}^k(x)$ for all $j \in \{1,\ldots, k-1\}$. Similarly, we update the affine functions $\hat{\ell}^k_j(x)$, $j<k$, associated with the incumbent solution (Step \ref{step:repeatforincumbentsol}). Recall that for $\widehat{\mathfrak{P}}_{\text{mom}}$ and $\widehat{\mathfrak{P}}_{\text{w}}$, the parameter $\theta^k = \frac{k-1}{k}$ (Propositions \ref{prop:momentAmbiguity_property} and \ref{prop:wassersteinAmbiguity_property}). The candidate and the incumbent affine functions ($\ell_k^k(x)$ and $\hat{\ell}_k^k(x)$, respectively), as well as the updated collection of previously generated affine functions, are used to build the set of affine functions that we denote by $\set{L}^k$ (Step \ref{step:buildLk}). The lower bounding property of this first-stage objective function approximation is captured in the following result.
\begin{theorem} \label{thm:lowerBoundingMinorants}
Under assumption \ref{assum:completeRecourse}, the first-stage objective function approximation in \eqref{eq:objfnApprox} satisfies
\begin{align*}
f^k(x) \leq c^\top x + \mathbb{Q}^k(x) \text{ for all } x \in \set{X} \text{ and } k \geq 1.
\end{align*}
\end{theorem}
\begin{proof}
For a non-empty approximate ambiguity set $\widehat{\mathfrak{P}}^1$ of the ambiguity set $\mathfrak{P}$, the construction of the affine function ensures that $\ell_1^1(x) \leq \mathbb{Q}^1(x)$. Assume that $\ell(x) \leq \mathbb{Q}^{k-1}(x)$ for all $\ell \in \set{L}^{k-1}$ and some $k>1$. The maximal nature of the probability distribution $P^k$ implies that:
\begin{align*}
\sum_{\omega \in \Omega^k} p^k(\omega) Q^k(x,\omega) &\geq \sum_{\omega \in \Omega^k} p(\omega) Q^k(x,\omega) \qquad \forall P \in \widehat{\mathfrak{P}}^k.
\end{align*}
Using above and the monotone property of the approximate recourse function, we have
\begin{align}
\sum_{\omega \in \Omega^k} p^k(\omega) Q^k(x,\omega) &\geq \sum_{\omega \in \Omega^k} p(\omega) Q^{k-1}(x,\omega) \notag \\
&= \sum_{\omega \in \Omega^k\setminus \{\omega^k\}} p(\omega) Q^{k-1}(x,\omega) + p(\omega^k) Q^{k-1}(x,\omega^k), \label{eq:thm_monotonicity}
\end{align}
for all $\{p(\omega)\}_{\omega \in \Omega^k} \in \widehat{\mathfrak{P}}^k$. Based on the properties of $\mathfrak{P}$ and $\{\widehat{\mathfrak{P}}^k\}_{k \geq 1}$ (cf.\ Propositions \ref{prop:momentAmbiguity_property} and \ref{prop:wassersteinAmbiguity_property}), we know that for every $P \in \widehat{\mathfrak{P}}^{k-1}$ we can construct a probability distribution in $\widehat{\mathfrak{P}}^k$ using the mapping $\Theta^k$ defined by \eqref{eq:probUpdate}. Considering a probability distribution $P' = \{p'(\omega)\}_{\omega \in \Omega^{k-1}} \in \widehat{\mathfrak{P}}^{k-1}$ such that $\Theta^k(P') \in \widehat{\mathfrak{P}}^k$, the inequality \eqref{eq:thm_monotonicity} reduces to
\begin{align*}
\sum_{\omega \in \Omega^k} p^k(\omega) Q^k(x,\omega) &\geq \hspace{-0.4em} \sum_{\omega \in \Omega^k\setminus \{\omega^k\}} [\theta^k p^\prime(\omega) Q^{k-1}(x,\omega)] + [\theta^k p^\prime(\omega^k) + (1-\theta^k)] Q^{k-1}(x,\omega^k) \\
&= \theta^k \bigg [\sum_{\omega \in \Omega^{k-1}} p^\prime(\omega) Q^{k-1}(x,\omega)\bigg] + (1-\theta^k) Q^{k-1}(x,\omega^k) \\
&\geq \theta^k \bigg [\sum_{\omega \in \Omega^{k-1}} p^\prime(\omega) Q^{k-1}(x,\omega)\bigg].
\end{align*}
The last inequality is due to assumption \ref{assum:completeRecourse}, i.e., $Q(x,\omega^k) \geq 0$, and the construction of the recourse function approximations described in \S\ref{sect:recourseApprox}, which guarantees $Q^{k-1}(x,\omega^k) \geq 0$. Since $\ell(x)$ lower bounds the bracketed term, we have
\begin{align*}
\sum_{\omega \in \Omega^k} p^k(\omega) Q^k(x,\omega) &\geq \theta^k \ell(x).
\end{align*}
Using the same arguments for all $\ell \in \set{L}^{k-1}$, together with the fact that $\ell_k^k(x)$ and $\hat{\ell}_k^k(x)$ are constructed as lower bounds on $\mathbb{Q}^k$, we have $f^k(x) \leq c^\top x + \mathbb{Q}^k(x)$. This completes the proof by induction.
\end{proof}
Using the collection of affine functions $\set{L}^k$, we update approximation of the first-stage objective function in Step~\ref{step:update1stageapprox}, as follows:
\begin{align}\label{eq:objfnApprox}
f^k(x) = c^\top x + \max_{j \in \mathcal{L}^k}~ \{ \alpha^k_j + (\beta^k_j)^\top x \}.
\end{align}
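Maintaining the collection $\set{L}^k$ and evaluating $f^k$ thus reduce to elementary operations on the stored coefficients; a minimal Python sketch, assuming the cuts are stored as pairs of NumPy arrays, is given below.
\begin{verbatim}
def scale_cuts(cuts, theta):
    """Update every stored cut (alpha, beta) to (theta * alpha, theta * beta)
    so that it remains a lower bound on the updated worst-case expectation."""
    return [(theta * alpha, theta * beta) for alpha, beta in cuts]

def evaluate_f(c, x, cuts):
    """f^k(x) = c^T x + max_j { alpha_j + beta_j^T x } over the stored cuts."""
    return c @ x + max(alpha + beta @ x for alpha, beta in cuts)
\end{verbatim}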
Once the approximation is updated, the performance of the candidate solution is compared relative to the incumbent solution (Steps \ref{step:incumbUpdate}--\ref{step:incumbUpdate_end}). This comparison is performed by verifying if the following inequality
\begin{align} \label{eq:incumbUpdt}
f^k(x^k) - f^k(\hat{x}^{k-1}) < \gamma[f^{k-1}(x^k) - f^{k-1}(\hat{x}^{k-1})],
\end{align}
where the parameter $\gamma \in (0,1]$, is satisfied. If so, the candidate solution is designated to be the next incumbent solution, i.e., $\hat{x}^{k} = x^k$. If the inequality is not satisfied, the previous incumbent solution is retained as $\hat{x}^k = \hat{x}^{k-1}$. This completes an iteration of the DRSD method.
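The incumbent test compares the decrease observed under the updated approximation with a $\gamma$-fraction of the decrease predicted by the previous one; a one-function Python sketch with illustrative names follows.
\begin{verbatim}
def update_incumbent(x_cand, x_inc, f_k, f_prev, gamma):
    """Accept the candidate as the new incumbent if
    f^k(x^k) - f^k(xhat) < gamma * [ f^{k-1}(x^k) - f^{k-1}(xhat) ]."""
    if f_k(x_cand) - f_k(x_inc) < gamma * (f_prev(x_cand) - f_prev(x_inc)):
        return x_cand
    return x_inc
\end{verbatim}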
\begin{remark}
The algorithm design can be extended to incorporate 2-DRLP where the relatively complete recourse assumption of \ref{assum:completeRecourse} and/or assumption \ref{assum:rhs} is not satisfied. For problems where relatively complete recourse condition is not met, a candidate solution may lead to one or more subproblems to be infeasible. In this case, the dual extreme rays can be used to compute a feasibility cut that is included in the first-stage approximation. The argmax procedure in \eqref{eq:argmax} is only valid when assumption \ref{assum:rhs} is satisfied. In problems where the uncertainty also affects the cost coefficients, the argmax procedure presented in \cite{gangammanavar2020stochastic} can be utilized. These algorithmic enhancements can be incorporated without affecting the convergence properties of DRSD that we present in the next section.
\end{remark}
\section{Convergence Analysis} \label{sect:convergence}
In this section we provide the convergence result of the sequential sampling-based approach to solve DRO problems. In order to facilitate the exposition of our theoretical results, we will define certain quantities for notational convenience that are not necessarily computed during the course of the algorithm in the form presented in the previous section. Our convergence results are built upon stability analyses presented in \cite{Sun2016} and convergence analysis of the SD algorithm in \cite{Higle1991}.
We define a function over the approximate ambiguity set using the true recourse function $Q(\cdot, \cdot)$, that is,
\begin{align} \label{eq:objfnApprox_trueRecourse}
g^k(x) := c^\top x + \max_{P \in \widehat{\mathfrak{P}}^k} \expect{Q(x,\tilde{\omega})}{P}
\end{align}
for a fixed $x \in \set{X}$. We begin by analyzing the behavior of the sequence $\{g^k\}_{k \geq 1}$ as $k \rightarrow \infty$. In particular, we will assess the sequence of function evaluations at a converging subsequence of first-stage solutions. The result is captured in the following proposition.
\sloppy
\begin{proposition}\label{prop:FnConvergenceApproxSet}
Suppose $\{\hat{x}^{k_n}\}$ denotes a subsequence of $\{\hat{x}^k\}$ such that $\hat{x}^{k_n} \rightarrow \bar{x}$, then $\lim_{n \rightarrow \infty} |g^{k_n}(\hat{x}^{k_n}) - f(\bar{x})| = 0$, with probability one.
\end{proposition}
\begin{proof}
Consider the ambiguity set $\widehat{\mathfrak{P}}^k$. For $i = 1,2$ and $x_i \in \mathcal{X}$, let $P(x_i) \in \mathrm{argmax}_{P \in \widehat{\mathfrak{P}}^k} \{\expect{Q(x_i,\tilde{\omega})}{P}\}$. Then,
\begin{align*}
g^k(x_1) =~& c^\top x_1 + \expect{Q(x_1,\tilde{\omega})}{P(x_1)} \\
\geq~& c^\top x_1 + \expect{Q(x_1,\tilde{\omega})}{P(x_2)} \\
=~& c^\top x_2 + \expect{Q(x_2,\tilde{\omega})}{P(x_2)} + c^\top (x_1 - x_2) + \\ & \hspace{3cm} \expect{Q(x_1,\tilde{\omega})}{P(x_2)} - \expect{Q(x_2,\tilde{\omega})}{P(x_2)} \\
=~& g^k(x_2) + c^\top (x_1 - x_2) + \expect{Q(x_1,\tilde{\omega})}{P(x_2)} - \expect{Q(x_2,\tilde{\omega})}{P(x_2)}.
\end{align*}
The inequality in the above follows from optimality of $P(x_1)$. The above implies that
\begin{align}
g^k(x_2) - g^k(x_1) \leq~& c^\top (x_2-x_1) + \expect{Q(x_2,\tilde{\omega})}{P(x_2)} - \expect{Q(x_1,\tilde{\omega})}{P(x_2)} \notag \\
\leq~& |c^\top (x_2-x_1)| + \bigg | \expect{Q(x_2,\tilde{\omega})}{P(x_2)} - \expect{Q(x_1,\tilde{\omega})}{P(x_2)} \bigg | \notag \\
\leq~& (\|c\| + C)\|x_2 - x_1\|. \label{eq:continuity_g1}
\end{align}
The second relationship is due to the triangle inequality. The third inequality follows from the uniform Lipschitz continuity of the recourse function $Q(x,\tilde{\omega})$, with probability one, under assumption \ref{assum:completeRecourse}, which implies that there exists a constant $C$ such that $| \expect{Q(x_1,\tilde{\omega})}{P} - \expect{Q(x_2,\tilde{\omega})}{P} | \leq C\|x_1 - x_2\|$ for any $P$. Therefore, the functions $g^k$ are Lipschitz continuous on $\set{X}$ with a constant independent of $k$, and hence the family $\{g^k\}$ is equicontinuous. Starting with $x_2$ and using the same arguments, we have
\begin{align}
g^k(x_1) - g^k(x_2) \leq~& (\|c\| + C)\|x_2 - x_1\|. \label{eq:continuity_g2}
\end{align}
Now consider ambiguity sets $\mathfrak{P}$ and $\widehat{\mathfrak{P}}^k$. Note that
\begin{align*}
|f(x) - g^k(x)| =&~ \bigg| \max_{P \in \mathfrak{P}} \expect{Q(x,\tilde{\omega})}{P} - \max_{P^\prime \in \widehat{\mathfrak{P}}^k} \expect{Q(x,\tilde{\omega})}{P^\prime} \bigg| \qquad \forall x \in \set{X}\\
\leq &~ \max_{P \in \mathfrak{P}} \min_{P^\prime \in \widehat{\mathfrak{P}}^k} \big| \expect{Q(x,\tilde{\omega})}{P} - \expect{Q(x,\tilde{\omega})}{P^\prime}\big| \qquad \forall x \in \set{X} \\
\leq&~ \max_{P \in \mathfrak{P}} \min_{P^\prime \in \widehat{\mathfrak{P}}^k} \sup_{x \in \set{X}} \big| \expect{Q(x,\tilde{\omega})}{P} - \expect{Q(x,\tilde{\omega})}{P^\prime}\big|.
\end{align*}
Using the definition of the deviation between the ambiguity sets $\mathfrak{P}$ and $\widehat{\mathfrak{P}}^k$ as well as its relationship with the Hausdorff distance, we have
\begin{align}
|f(x) - g^k(x)| \leq \mathbb{D}(\mathfrak{P}, \widehat{\mathfrak{P}}^k) \leq \mathbb{H}(\mathfrak{P}, \widehat{\mathfrak{P}}^k) \qquad \forall x \in \set{X} \label{eq:g_vs_f}.
\end{align}
For $\hat{x}^{k_n}$ and $\bar{x}$, combining \eqref{eq:continuity_g1}, \eqref{eq:continuity_g2}, and \eqref{eq:g_vs_f}, we have
\begin{align*}
|f(\bar{x}) - g^{k_n}(\hat{x}^{k_n})| \leq&~ |f(\bar{x}) - g^{k_n}(\bar{x})| + |g^{k_n}(\bar{x}) - g^{k_n}(\hat{x}^{k_n})| \\
\leq&~ \mathbb{H}(\mathfrak{P}, \widehat{\mathfrak{P}}^{k_n}) + (\|c\| + C) \|\bar{x} - \hat{x}^{k_n}\|.
\end{align*}
As $n \rightarrow \infty$, the family of approximate ambiguity sets satisfies $\mathbb{H}(\mathfrak{P}, \widehat{\mathfrak{P}}^{k_n}) \rightarrow 0$, and the hypothesis gives $\hat{x}^{k_n} \rightarrow \bar{x}$. Therefore, we conclude that $g^{k_n}(\hat{x}^{k_n}) \rightarrow f(\bar{x})$ as $n \rightarrow \infty$.
\end{proof}
Notice that the behavior of the approximate ambiguity sets defined in \S\ref{sect:ApproxAmbiguitySet}, in particular the condition $\mathbb{H}(\mathfrak{P}, \widehat{\mathfrak{P}}^k) \rightarrow 0$ as $k \rightarrow \infty$, plays a central role in the above proof. Recall that for the moment and Wasserstein distance-based ambiguity sets, this condition is established in Propositions \ref{prop:momentAmbiguity_property} and \ref{prop:wassersteinAmbiguity_property}, respectively. It is also worthwhile to note that under the foregoing conditions, \eqref{eq:g_vs_f} implies uniform convergence of the sequence $\{g^k\}$ to $f$, with probability one.
The above result applies to any algorithm that generates a convergent sequence of iterates $\{x^k\}$ and a corresponding sequence of extremal distributions; such an algorithm is guaranteed to converge to the optimal distributionally robust objective function value. Therefore, this result is applicable to the sequence of instances constructed using external sampling and solved, for example, using reformulation-based methods. Such an approach was adopted in \cite{Riis2005} and \cite{Sun2016}. The analysis in \cite{Riis2005} relies upon two rather restrictive assumptions. The first is that for every $P \in \mathfrak{P}$ there exists a sequence of measures $\{P^k\}$ with $P^k \in \widehat{\mathfrak{P}}^k$ that converges weakly to $P$. The second requires the approximate ambiguity sets to be strict subsets of the true ambiguity set, i.e., $\widehat{\mathfrak{P}}^k \subset \mathfrak{P}$. Both of these assumptions are very difficult to satisfy in a data-driven setting (also see Remark \ref{rem:RiisandAndersonProp2.1}).
The analysis in \cite{Sun2016}, on the other hand, does not make the above assumptions. Therefore, their analysis is more broadly applicable in settings where external sampling is used to generate $\tilde{\omega}set^k$. DRO instances are constructed based on statistics estimated using $\tilde{\omega}set^k$ and solved to optimality for each $k \geq 1$. They show the convergence of optimal objective function values and optimal solution sets of approximate problems to the optimal objective function value and solutions of the true DRO problem, respectively. In this regard, the result in Proposition \ref{prop:FnConvergenceApproxSet} can alternatively be derived using Theorem 1(i) in \cite{Sun2016}. While the above function is not computed during the course of the sequential sampling algorithm, it provides the necessary benchmark for our convergence analysis.
One of the main points of departure in our analysis stems from the fact that we use objective function approximations built upon the approximate recourse function in \eqref{eq:recoursePolyhedralApprox}. In order to study the piecewise affine approximation of the first-stage objective function, we introduce another benchmark function
\begin{align}\label{eq:objfnApprox_approxRecourse}
\phi^k(x) := c^\top x + \max_{P \in \widehat{\mathfrak{P}}^k} \expect{Q^k(x,\tilde{\omega})}{P}.
\end{align}
Notice that the above function uses the approximation for the ambiguity set (as in the case of \eqref{eq:objfnApprox_trueRecourse}) as well as the approximation for the recourse function. This construction ensures that $\phi^k(x) \leq g^k(x)$ for all $x \in \set{X}$ and $k \geq 1$, which follows from the fact that $Q^k(x,\tilde{\omega}) \leq Q(x,\tilde{\omega})$, almost surely. Further, the result in Theorem \ref{thm:lowerBoundingMinorants} ensures that $f^k(x) \leq \phi^k(x)$. Putting these together, we obtain the following relationship
\begin{align}
f^k(x) \leq \phi^k(x) \leq g^k(x) \qquad \forall x \in \set{X}, k \geq 1.
\end{align}
While the previous proposition focused on the uppermost function in the above relationship, the following results characterize the asymptotic behavior of the sequence $\{f^k\}$.
\begin{lemma}\label{lemma:asymptoticSupport}
Suppose $\{\hat{x}^{k_n}\}$ denotes a subsequence of $\{\hat{x}^k\}$ such that $\hat{x}^{k_n} \rightarrow \bar{x}$. Then $\lim_{n \rightarrow \infty} \big(f^{k_n}(\hat{x}^{k_n}) - f(\bar{x})\big) = 0$, with probability one.
\end{lemma}
\begin{proof}
From Proposition \ref{prop:FnConvergenceApproxSet}, we have $\lim_{n \rightarrow \infty} |f(\bar{x}) - g^{k_n}(\hat{x}^{k_n})| = 0$. Therefore, for any $\epsilon_1>0$ there exists $N_1 < \infty$ such that
\begin{align}\label{eq:support_1}
\bigg | \max_{P \in \mathfrak{P}} \expect{Q(\bar{x},\tilde{\omega})}{P} - \max_{P \in \widehat{\mathfrak{P}}^{k_n}} \expect{Q(\hat{x}^{k_n},\tilde{\omega})}{P} \bigg| < \epsilon_1/2 \quad \forall n > N_1.
\end{align}
Now consider,
\begin{align*}
\max_{P \in \widehat{\mathfrak{P}}^{k_n}} \expect{Q(\hat{x}^{k_n}, \tilde{\omega})}{P} &- \max_{P \in \widehat{\mathfrak{P}}^{k_n}} \expect{Q^{k_n}(\hat{x}^{k_n},\tilde{\omega})}{P} \\
= & \max_{P \in \widehat{\mathfrak{P}}^{k_n}} \big( \expect{Q(\hat{x}^{k_n}, \tilde{\omega})}{P} - \expect{Q^{k_n}(\hat{x}^{k_n},\tilde{\omega})}{P}\big) \\
\leq &~ \max_{P \in \widehat{\mathfrak{P}}^{k_n}} \big | \expect{Q(\hat{x}^{k_n},\tilde{\omega})}{P} - \expect{Q^{k_n}(\hat{x}^{k_n},\tilde{\omega})}{P} \big | \\
=&~\max_{P \in \widehat{\mathfrak{P}}^{k_n}} \expect{|Q(\hat{x}^{k_n},\tilde{\omega}) - Q^{k_n}(\hat{x}^{k_n},\tilde{\omega})|}{P}.
\end{align*}
The last equality follows from the fact that $Q(x,\tilde{\omega}) \geq Q^k(x,\tilde{\omega})$ for all $x \in \set{X}$ and $k \geq 1$, almost surely. Moreover, because of the uniform convergence of $\{Q^k\}$ (Proposition \ref{prop:uniformConvergenceApproxRecourse}), the sequence of approximate functions $\{\phi^k\}$ also converges uniformly. This implies that there exists $N_2 < \infty$ such that
\begin{align} \label{eq:support_2}
\bigg | \max_{P \in \widehat{\mathfrak{P}}^{k_n}} \expect{Q(\hat{x}^{k_n}, \tilde{\omega})}{P} - \max_{P \in \widehat{\mathfrak{P}}^{k_n}} \expect{Q^{k_n}(\hat{x}^{k_n},\tilde{\omega})}{P} \bigg | < \epsilon_1/2 \quad \forall n > N_2.
\end{align}
Let $N = \max\{N_1, N_2\}$. Using \eqref{eq:support_1} and \eqref{eq:support_2}, we have for all $n > N$
\begin{align*}
\bigg| \max_{P \in \mathfrak{P}} \expect{Q(\bar{x},\tilde{\omega})}{P} - \max_{P \in \widehat{\mathfrak{P}}^{k_n}} \expect{Q^{k_n}(\hat{x}^{k_n},\tilde{\omega})}{P} \bigg| < \epsilon_1.
\end{align*}
This implies that $|f(\bar{x}) - \phi^{k_n}(\hat{x}^{k_n})| \rightarrow 0$ as $n \rightarrow \infty$. Based on \eqref{eq:argmax}, we have $Q^{k_n}(\hat{x}^{k_n}, \omega) = (\pi(\hat{x}^{k_n},\omega))^\top [r(\omega) - T(\omega)\hat{x}^{k_n}]$ and $Q^{k_n}(x, \omega) \geq (\pi(\hat{x}^{k_n}, \omega))^\top [r(\omega) - T(\omega) x]$ for all $x \in \set{X}$ and $\omega \in \tilde{\omega}set^{k_n}$. Let
\begin{align*}
\alpha^{k_n}_{k_n} = \sum_{\omega \in \tilde{\omega}set^{k_n}} p^{k_n}(\omega) (\pi(\hat{x}^{k_n},\omega))^\top r(\omega)\text{ and }\beta^{k_n}_{k_n} = -\sum_{\omega \in \tilde{\omega}set^{k_n}} p^{k_n}(\omega) T(\omega)^\top \pi(\hat{x}^{k_n},\omega),
\end{align*}
where $\{p^{k_n}(\omega)\}_{\omega \in \tilde{\omega}set^{k_n}}$ is an optimal solution of the distributional separation problem \eqref{eq:distrSeparationApprox} where index $k$ is replaced by $k_n$. Then, the affine function $\alpha^{k_n}_{k_n} + (c+\beta^{k_n}_{k_n})^\top x$ provides a lower bound approximation for function $\phi^{k_n}(x)$, i.e.,
\begin{align*}
\phi^{k_n}(x) \geq \alpha^{k_n}_{k_n} + (c+\beta^{k_n}_{k_n})^\top x \qquad \text{ for all } x \in \set{X},
\end{align*}
with equality holding at $\hat{x}^{k_n}$. Therefore, using the definition of $f^k(x)$, we have $\lim_{n \rightarrow \infty} \big(\alpha^{k_n}_{k_n} + (c+\beta^{k_n}_{k_n})^\top \hat{x}^{k_n}\big) = \lim_{n \rightarrow \infty} f^{k_n}(\hat{x}^{k_n}) = \lim_{n \rightarrow \infty} \phi^{k_n}(\hat{x}^{k_n}) = f(\bar{x})$, almost surely. This completes the proof.
\end{proof}
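To make the construction used in the preceding proof concrete, the following is a minimal sketch of the argmax-based affine minorant: for each observation, the best stored dual vertex is selected at the current solution, and the coefficients $\alpha^{k}_{k}$ and $\beta^{k}_{k}$ are aggregated with the probabilities returned by the distribution separation problem. The data layout and names are assumptions for illustration; this is not the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def argmax_affine_minorant(x_hat, observations, probs, dual_vertices):
    """Sketch of the affine minorant alpha + (c + beta)^T x built in the proof:
    Q^k(x, omega) = max_{pi in Pi^k} pi^T (r(omega) - T(omega) x) is evaluated
    at x_hat via the argmax procedure, and the coefficients are weighted by
    the probabilities p^k(omega) from the distribution separation problem."""
    alpha = 0.0
    beta = np.zeros_like(np.asarray(x_hat, dtype=float))
    for (r, T), p in zip(observations, probs):
        vals = [pi @ (r - T @ x_hat) for pi in dual_vertices]  # argmax step
        pi_star = dual_vertices[int(np.argmax(vals))]
        alpha += p * (pi_star @ r)
        beta += -p * (T.T @ pi_star)
    return alpha, beta  # phi^k(x) >= alpha + (c + beta) @ x for all x in X
\end{verbatim}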
The above result characterizes the behavior of the sequence of affine functions generated during the course of the algorithm. In particular, the sequence $\{f^k(\hat{x}^k)\}_{k \geq 1}$ accumulates at the objective value of the original DRO problem \eqref{eq:2drlp_master}. Recall that the candidate solution $x^k$ is a minimizer of $f^{k-1}(x)$, and an affine function is generated at this point such that $f^k(x^k) = \phi^k(x^k)$ in all iterations $k \geq 1$. However, due to the update procedure in \eqref{eq:affineCoeff_update}, the quality of the approximation at $x^k$ gradually diminishes in subsequent iterations, leading to a large gap between $\phi^k$ and $f^k$ at this point as $k$ increases. This emphasizes the role of the incumbent solution and of computing the incumbent affine function $\hat{\ell}(x)$ during the course of the algorithm. By updating the incumbent solution and frequently reevaluating the affine functions at the incumbent solution, we can ensure that the approximation is ``sufficiently good'' in the neighborhood of the incumbent solution. In order to assess the improvement of approximation quality, we define
\begin{align}\label{eq:estError}
\delta^k := f^{k-1}(x^k) - f^{k-1}(\hat{x}^{k-1}) \leq 0 \qquad \forall k \geq 1.
\end{align}
The inequality follows from the optimality of $x^k$ with respect to the objective function $f^{k-1}$. The quantity $\delta^k$ measures the error in objective function estimate at the candidate solution with respect to the estimate at the current incumbent solution. The following result captures the asymptotic behavior of this error term.
\begin{lemma}\label{lemma:vanishingError}
Let $\mathcal{K}$ denote the sequence of iterations at which the incumbent solution changes. There exists a subsequence of iterations, denoted $\set{K}^* \subseteq \mathcal{K}$, such that $\lim_{k \in \set{K}^*} \delta^k = 0$, with probability one.
\end{lemma}
\begin{proof} We consider two cases depending on whether the set $\set{K}$ is finite or infinite. First, suppose that $\set{K}$ is infinite and write $\set{K} = \{k_1, k_2, \ldots\}$ with $k_n < k_{n+1}$. By the incumbent update rule and \eqref{eq:estError},
\begin{align*}
f^{k_n}(x^{k_n}) - f^{k_n}(\hat{x}^{k_n-1}) < \gamma [f^{k_n-1}(x^{k_n}) - f^{{k_n}-1}(\hat{x}^{k_n-1})] = \gamma \delta^{k_n} \leq 0 \qquad \forall k_n \in \set{K}.
\end{align*}
Consequently, we have $\limsup_{n \rightarrow \infty} \delta^{k_n} \leq 0$. Since $x^{k_n} = \hat{x}^{k_n}$ and $\hat{x}^{k_n-1} = \hat{x}^{k_{n-1}}$, we have
\begin{align*}
f^{k_n}(\hat{x}^{k_n}) - f^{k_n}(\hat{x}^{k_{n-1}}) \leq \gamma \delta^{k_n} \leq 0.
\end{align*}
The left-hand side of the above inequality captures the improvement in the objective function value at the current incumbent solution over the previous incumbent solution. Using the above, we can write the average improvement attained over $n$ incumbent changes as\sloppy
\begin{align*}
\frac{1}{n} \sum_{j = 1}^n \bigg[f^{k_j}(\hat{x}^{k_j}) - f^{k_j}(\hat{x}^{k_{j-1}}) \bigg] \leq \frac{1}{n} \sum_{j = 1}^n \gamma \delta^{k_j} \leq 0 \qquad \text{ for all } n.
\end{align*}
This implies that \sloppy
\begin{align*}
\frac{1}{n}\underbrace{\bigg(f^{k_n}(\hat{x}^{k_n}) - f^{k_1}(\hat{x}^{k_0}) \bigg)}_{(a)}+ \frac{1}{n}\bigg[\sum_{j=1}^{n-1} \underbrace{\bigg(f^{k_j}(\hat{x}^{k_j}) - f^{k_{j+1}}(\hat{x}^{k_j}) \bigg)}_{(b)} \bigg] \leq \frac{1}{n} \sum_{j = 1}^n \gamma \delta^{k_j} \leq 0, \ \ \forall n.
\end{align*}
Under the assumption that the dual feasible region is non-empty and bounded (which is ensured by relatively complete recourse, \ref{assum:completeRecourse}), $\{f^k\}$ is a sequence of Lipschitz continuous functions. This, along with the compactness of $\set{X}$ (\ref{assum:compactX}), implies that $f^{k_n}(\hat{x}^{k_n}) - f^{k_1}(\hat{x}^{k_0})$ is bounded. Hence, the term (a) vanishes as $n \rightarrow \infty$. The term (b) converges to zero, with probability one, due to the uniform convergence of $\{f^k\}$. Since $\gamma \in (0,1]$, we have
\begin{align*}
\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{j = 1}^n \delta^{k_j} = 0
\end{align*}
with probability one. Further,
\begin{align*}
\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{j = 1}^n \delta^{k_j} \leq \limsup_{n \rightarrow \infty} \delta^{k_n} \leq 0.
\end{align*}
Combining the last two displays yields $\limsup_{n \rightarrow \infty} \delta^{k_n} = 0$. Since $\delta^{k} \leq 0$ for all $k$, there exists a subsequence indexed by the set $\set{K}^*$ such that $\lim_{k \in \set{K}^*} \delta^k = 0$, with probability one.
Now suppose $\set{K}$ is finite. Then there exist $\hat{x}$ and $K < \infty$ such that $\hat{x}^k = \hat{x}$ for all $k \geq K$. Let $\set{K}^*$ index a subsequence of candidate solutions with $\lim_{k \in \set{K}^*} x^k = \bar{x}$; such a subsequence exists since $\set{X}$ is compact. The uniform convergence of the sequence $\{f^k\}$ and Lemma \ref{lemma:asymptoticSupport} then ensure that
\begin{subequations} \label{eq:limitApprox}\begin{align}
\lim_{k \in \set{K}^*} f^k(x^k) = \lim_{k \in \set{K}^*} f^{k-1}(x^k) = f(\bar{x}) \\
\lim_{k \in \set{K}^*} f^k(\hat{x}) = \lim_{k \in \set{K}^*} f^{k-1}(\hat{x}) = f(\hat{x}).
\end{align} \end{subequations}
Further, since the incumbent is not updated in iterations $k \geq K$, we must have from the update rule in \eqref{eq:incumbUpdt} that
\begin{align*}
f^{k}(x^k) - f^{k}(\hat{x}) \geq \gamma [f^{k-1}(x^k) - f^{k-1}(\hat{x})] = \gamma \delta^k \quad \text{ for all } k \geq K.
\end{align*}
Using \eqref{eq:limitApprox}, we have
\begin{align*}
\lim_{k \in \set{K}^*} \big( f^{k}(x^k) - f^{k}(\hat{x})\big) &\geq \gamma \lim_{k \in \set{K}^*} \big(f^{k-1}(x^k) - f^{k-1}(\hat{x})\big),
\end{align*}
which implies
\begin{align*}
f(\bar{x}) - f(\hat{x}) &\geq \gamma(f(\bar{x}) - f(\hat{x})).
\end{align*}
Moreover, since $\delta^k \leq 0$ for all $k$, \eqref{eq:limitApprox} also implies $f(\bar{x}) - f(\hat{x}) \leq 0$. Combining this with the above inequality and $\gamma \in (0,1]$, we must have $f(\bar{x}) - f(\hat{x}) = 0$. Hence, $\lim_{k \in \set{K}^*} \delta^k = f(\bar{x}) - f(\hat{x}) = 0$, with probability one.
\end{proof}
Equipped with the results in Lemmas \ref{lemma:asymptoticSupport} and \ref{lemma:vanishingError}, we state the main theorem, which establishes the existence of a subsequence of the incumbent solution sequence for which every accumulation point is an optimal solution to \eqref{eq:2drlp_master}.
\begin{theorem}
Let $\{x^k\}_{k=1}^\infty$ and $\{\hat{x}^k\}_{k=1}^\infty$ be the sequences of candidate and incumbent solutions generated by the algorithm. There exists a subsequence $\{\hat{x}^k\}_{k \in \set{K}}$ for which every accumulation point is an optimal solution of the 2-DRLP \eqref{eq:2drlp_master}, with probability one.
\end{theorem}
\begin{proof}
Let $x^* \in \set{X}$ be an optimal solution of \eqref{eq:2drlp_master}. Consider a subsequence indexed by $\set{K}$ such that $\lim_{k \in \set{K}} \hat{x}^k = \bar{x}$. Compactness of $\set{X}$ ensures the existence of an accumulation point $\bar{x} \in \set{X}$, and therefore,
\begin{align} \label{eq:lbMinorant_1}
f(x^*) \leq f(\bar{x}).
\end{align}
From Theorem \ref{thm:lowerBoundingMinorants}, we have
\begin{align*}
f^k(x) \leq~& c^\top x + \mathbb{Q}^k(x) \\
\leq~& c^\top x + \max_{P \in \widehat{\mathfrak{P}}^k} \expect{Q(x,\tilde{\omega})}{P} = g^k(x) \qquad \forall k, x \in \set{X}.
\end{align*}
Thus, using the uniform convergence of $\{g^k\}$ (Proposition \ref{prop:FnConvergenceApproxSet}) we have
\begin{align} \label{eq:lbMinorant_2}
\limsup_{k \in \set{K}^\prime} f^k(x^*) \leq \lim_{k \in \set{K}^\prime} g^k(x^*) = f(x^*)
\end{align}
for all subsequences indexed by $\set{K}^\prime \subseteq\{1,2,\ldots\}$, with probability one. Recall that,
\begin{align*}
\delta^{k} = f^{{k}-1}(x^{k}) - f^{{k}-1}(\hat{x}^{{k}-1})
&\leq f^{{k}-1}(x^*) - f^{{k}-1}(\hat{x}^{{k}-1}) \qquad \text{for all } k \geq 1.
\end{align*}
The inequality follows from the optimality of $x^{k}$ with respect to $f^{k-1}(x)$. Taking the limit over $\set{K}$, we have
\begin{align*}
\lim_{k \in \set{K}} \delta^{k} \leq~& \lim_{k \in \set{K}} \big(f^{k-1}(x^*) - f^{{k}-1}(\hat{x}^{{k}-1})\big) \\
\leq~& \limsup_{k \in \set{K}} f^{k-1}(x^*) - \liminf_{k \in \set{K}} f^{{k}-1}(\hat{x}^{{k}-1}) \\
\leq~& f(x^*) - f(\bar{x}).
\end{align*}
The last inequality follows from \eqref{eq:lbMinorant_2} and $\lim_{k \in \set{K}} f^{k-1}(\hat{x}^{k-1}) = f(\bar{x})$ (Lemma \ref{lemma:asymptoticSupport}). From Lemma \ref{lemma:vanishingError}, there exists a subsequence indexed by $\set{K}^* \subseteq \mathcal{K}$ such that $\lim_{k \in \set{K}^*} \delta^{k} = 0$. Therefore, if $\{\hat{x}^k\}_{k \in \set{K}^*} \rightarrow \bar{x}$, we have
\begin{align*}
f(x^*) - f(\bar{x}) \geq 0.
\end{align*}
Using \eqref{eq:lbMinorant_1} and the above inequality, we conclude that $\bar{x}$ is an optimal solution with probability one.
\end{proof}
\section{Computational Experiment} \label{sect:computations}
In this section, we evaluate the effectiveness and efficiency of the DRSD method in solving 2-DRLP problems. For our preliminary experiments, we consider 2-DRLP problems with moment-based ambiguity set $\mathfrak{P}_{\text{mom}}$ for the first two moments ($q=2$). We report results from the computational experiments conducted on four well-known SP test problems: {\tt CEP}, {\tt PGP}, {\tt BAA}, and {\tt STORM}. The test problems have supports of size $216$, $576$, $10^{18}$ and $10^{81}$, respectively.
We use an external sampling-based approach as a benchmark for comparison. The external sampling-based instances are constructed as approximate problems of the form \eqref{eq:objfnApprox_trueRecourse} with a pre-determined number of observations $N \in \{100, 250, 500, 1000, 2000\}$ (the observations are not necessarily distinct). The resulting instances are solved using the DR L-shaped method. We compare the solution quality and solution times of the two methods.
We conduct $30$ independent replications for each problem instance with different seeds for the random number generator. The algorithms are implemented in the C programming language, and the experiments are conducted on a $64$-bit machine with an Intel Core i7-4770 CPU ($3.4$ GHz $\times~8$) and $32$ GB of memory. All linear programs, i.e., the master problem, subproblems, and distribution separation problem, are solved using CPLEX 12.10 callable subroutines. The results from the experiments are presented in Tables \ref{tab:estimates} and \ref{tab:times}. The results for the instances with a finite sample size obtained from the DR L-shaped method are labeled DRLS-$N$, where $N$ denotes the number of observations used to approximate the ambiguity set. The DRSD method is run for a minimum of $256$ iterations, while no such limit is imposed on the DR L-shaped method.
\begin{table}[!t]\centering \renewcommand{\arraystretch}{1}
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Method & \# Iterations & Objective Estimate & \# Unique Obs. \\
\hline
\multicolumn{4}{|c|}{{\tt PGP}} \\ \hline
DRLS-100 & $17.867$ ($\pm 0.92$) & $457.610$ ($\pm 3.28$) & $38.233$ ($\pm 1.10$) \\
DRLS-250 & $20.467$ ($\pm 0.73$) & $462.922$ ($\pm 2.28$) & $53.267$ ($\pm 1.74$) \\
DRLS-500 & $20.467$ ($\pm 0.63$) & $464.704$ ($\pm 1.95$) & $68.667$ ($\pm 1.62$) \\
DRLS-1000 & $20.500$ ($\pm 0.84$) & $466.104$ ($\pm 1.78$) & $85.833$ ($\pm 1.78$) \\
DRLS-2000 & $20.900$ ($\pm 0.57$) & $469.579$ ($\pm 2.53$) & $102.867$ ($\pm 1.87$) \\
\hline
DRSD & $503.667$ ($\pm 687.01$) & $463.185$ ($\pm 16.28$) & $65.667$ ($\pm 10.04$) \\
\hline
\multicolumn{4}{|c|}{{\tt CEP}} \\ \hline
DRLS-100 & $2.600$ ($\pm 0.19$) & $658817.129$ ($\pm 14457.30$) & $80.700$ ($\pm 1.21$) \\
DRLS-250 & $2.267$ ($\pm 0.17$) & $680735.606$ ($\pm 10511.47$) & $147.900$ ($\pm 2.05$) \\
DRLS-500 & $2.067$ ($\pm 0.09$) & $683252.019$ ($\pm 5948.62$) & $195.167$ ($\pm 1.44$) \\
DRLS-1000 & $2.000$ ($\pm 0.00$) & $679665.728$ ($\pm 4926.88$) & $214.100$ ($\pm 0.47$) \\
DRLS-2000 & $2.000$ ($\pm 0.00$) & $680744.118$ ($\pm 3872.39$) & $215.967$ ($\pm 0.07$) \\
\hline
DRSD & $257.000$ ($\pm 0.00$) & $681772.359$ ($\pm 10348.32$) & $150.033$ ($\pm 2.12$) \\
\hline
\multicolumn{4}{|c|}{{\tt BAA}} \\ \hline
DRLS-100 & $247.000$ ($\pm 5.17$) & $248677.974$ ($\pm 1238.19$) & $100.000$ ($\pm 0.00$) \\
DRLS-250 & $240.233$ ($\pm 5.74$) & $249173.795$ ($\pm 770.38$) & $250.000$ ($\pm 0.00$) \\
DRLS-500 & $236.067$ ($\pm 6.28$) & $249827.472$ ($\pm 499.59$) & $500.000$ ($\pm 0.00$) \\
DRLS-1000 & $229.067$ ($\pm 6.03$) & $250640.029$ ($\pm 355.53$) & $1000.000$ ($\pm 0.00$) \\
DRLS-2000 & $219.900$ ($\pm 5.36$) & $251405.806$ ($\pm 243.54$) & $2000.000$ ($\pm 0.00$) \\
\hline
DRSD & $316.367$ ($\pm 31.53$) & $250235.142$ ($\pm 737.21$) & $315.367$ ($\pm 31.53$) \\
\hline
\multicolumn{4}{|c|}{{\tt STORM}} \\ \hline
DRLS-100 & $11.667$ ($\pm 0.51$) & $15742456.082$ ($\pm 12191.85$) & $100.000$ ($\pm 0.00$) \\
DRLS-250 & $11.167$ ($\pm 0.52$) & $15781724.510$ ($\pm 8753.59$) & $250.000$ ($\pm 0.00$) \\
DRLS-500 & $11.733$ ($\pm 0.59$) & $15797019.902$ ($\pm 5345.83$) & $500.000$ ($\pm 0.00$) \\
DRLS-1000 & $11.767$ ($\pm 0.52$) & $15806575.387$ ($\pm 3771.91$) & $1000.000$ ($\pm 0.00$) \\
DRLS-2000 & $11.900$ ($\pm 0.46$) & $15817039.917$ ($\pm 2502.88$) & $2000.000$ ($\pm 0.00$) \\
\hline
DRSD & $516.200$ ($\pm 108.53$) & $15786864.662$ ($\pm 9155.49$) & $515.200$ ($\pm 108.53$) \\
\hline
\end{tabular}
}
\caption{Comparison of results obtained from DR L-Shaped method and DRSD method}
\label{tab:estimates}
\end{table}
Table \ref{tab:estimates} shows the average number of iterations, the average objective function value, and the average number of unique observations in Columns 2, 3, and 4, respectively. The values in parentheses are the half-widths of the corresponding confidence intervals. Notice that for DRSD, the number of iterations is also equal to the number of observations used to approximate the ambiguity set. The results show the ability of DRSD to dynamically determine the number of observations by assessing the progress made during the algorithm. The objective function estimate obtained using DRSD is comparable to that obtained using the DR L-shaped method with a comparable sample size. For instance, the DRSD objective function estimate for {\tt STORM}, which is based upon a sample of size $516.2$ (on average), is within $0.1\%$ of the objective function value estimate of DRLS-500. These results show that the optimal objective function estimates obtained from DRSD are comparable to those obtained using an external sampling-based approach.
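For reference, the following is a minimal sketch of how the entries of Tables \ref{tab:estimates} and \ref{tab:times} can be summarized from the $30$ replications. The use of a Student-$t$ interval and the $95\%$ level are assumptions for illustration; the paper does not state the exact construction.
\begin{verbatim}
import numpy as np
from scipy import stats

def mean_and_half_width(samples, level=0.95):
    """Sample mean and half-width of a t-based confidence interval across
    independent replications (e.g., the 30 replications per instance)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    half_width = stats.t.ppf(0.5 + level / 2, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    return x.mean(), half_width
\end{verbatim}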
\begin{table}[!t]
\centering \renewcommand{\arraystretch}{1}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Method & Total & Master & Subproblem & Optimality & Argmax & Separation \\
\hline
\multicolumn{7}{|c|}{{\tt PGP}} \\ \hline
DRLS-100 & $0.0517$ & $0.0019$ & $0.0422$ & $0.0000$ & $-$ & $0.0023$ \\
DRLS-250 & $0.0770$ & $0.0021$ & $0.0651$ & $0.0000$ & $-$ & $0.0025$ \\
DRLS-500 & $0.0959$ & $0.0020$ & $0.0822$ & $0.0000$ & $-$ & $0.0026$ \\
DRLS-1000 & $0.1214$ & $0.0021$ & $0.1050$ & $0.0000$ & $-$ & $0.0030$ \\
DRLS-2000 & $0.1463$ & $0.0022$ & $0.1270$ & $0.0000$ & $-$ & $0.0033$ \\
\hline
DRSD & $0.3498$ & $0.1225$ & $0.0549$ & $0.0003$ & $0.0018$ & $0.0697$ \\
\hline
\multicolumn{7}{|c|}{{\tt CEP}} \\ \hline
DRLS-100 & $0.0154$ & $0.0002$ & $0.0086$ & $0.0000$ & $-$ & $0.0004$ \\
DRLS-250 & $0.0237$ & $0.0001$ & $0.0124$ & $0.0000$ & $-$ & $0.0004$ \\
DRLS-500 & $0.0283$ & $0.0001$ & $0.0138$ & $0.0000$ & $-$ & $0.0003$ \\
DRLS-1000 & $0.0285$ & $0.0001$ & $0.0134$ & $0.0000$ & $-$ & $0.0003$ \\
DRLS-2000 & $0.0286$ & $0.0001$ & $0.0137$ & $0.0000$ & $-$ & $0.0003$ \\
\hline
DRSD & $0.1154$ & $0.0299$ & $0.0262$ & $0.0001$ & $0.0009$ & $0.0292$ \\
\hline
\multicolumn{7}{|c|}{{\tt BAA}} \\ \hline
DRLS-100 & $2.6497$ & $0.0778$ & $2.1698$ & $0.0067$ & $-$ & $0.1837$ \\
DRLS-250 & $6.1376$ & $0.0756$ & $5.2157$ & $0.0078$ & $-$ & $0.3252$ \\
DRLS-500 & $11.1848$ & $0.0655$ & $9.7257$ & $0.0056$ & $-$ & $0.4624$ \\
DRLS-1000 & $21.1755$ & $0.0616$ & $18.5514$ & $0.0043$ & $-$ & $0.8286$ \\
DRLS-2000 & $44.2579$ & $0.0685$ & $39.2027$ & $0.0056$ & $-$ & $1.2710$ \\
\hline
DRSD & $2.0943$ & $0.1570$ & $0.0577$ & $0.0002$ & $0.0550$ & $0.9003$ \\
\hline
\multicolumn{7}{|c|}{\tt STORM} \\ \hline
DRLS-100 & $0.4336$ & $0.0022$ & $0.3197$ & $0.0001$ & $-$ & $0.0391$ \\
DRLS-250 & $1.0080$ & $0.0021$ & $0.7429$ & $0.0001$ & $-$ & $0.0885$ \\
DRLS-500 & $2.1167$ & $0.0025$ & $1.5629$ & $0.0001$ & $-$ & $0.1957$ \\
DRLS-1000 & $4.3179$ & $0.0027$ & $3.1438$ & $0.0001$ & $-$ & $0.4571$ \\
DRLS-2000 & $9.0015$ & $0.0027$ & $6.2653$ & $0.0001$ & $-$ & $1.3203$ \\
\hline
DRSD & $30.4394$ & $0.7815$ & $0.3084$ & $0.0003$ & $0.4781$ & $23.8544$ \\
\hline
\end{tabular}
}
\caption{Computational time comparison between DR L-shaped and DRSD}
\label{tab:times}
\end{table}
Table \ref{tab:times} shows the average total computational time (Column 2) for each instance. The table also includes the average time spent to solve the master problem (first-stage approximate problem) and the subproblems, to verify the optimality conditions, to complete the argmax procedure (only for DRSD), and to solve the distribution separation problem (Columns 3--7, respectively). The results for the small-scale instances ({\tt PGP} and {\tt CEP}) show that both DRSD and the DR L-shaped method take a fraction of a second, but the computational time for DRSD is higher than that of the DR L-shaped method for all $N$. This behavior can be attributed to the facts that (i) the subproblems are relatively easy to solve and the computational effort to solve all the subproblems does not increase significantly with $N$, and (ii) DRSD is run for a minimum number of iterations (256), thereby contributing to the total time taken to solve master problems and distribution separation problems. This observation is in line with our computational experience with the SD method for 2-SLPs. It is important to note that, while the computational time for the DR L-shaped method on an individual instance may be lower, the iterative procedure necessary to identify a sufficient sample size may require solving several instances of increasing sample size. This may result in a significantly higher cumulative computational time. The DRSD method, and the sequential sampling idea in general, mitigates the need for this iterative process.
On the other hand, for the large-scale problems ({\tt BAA} and {\tt STORM}), we observe a noticeable increase in the computational time for the DR L-shaped method as $N$ increases. A significant portion of this time is spent on solving the subproblems. Since DRSD solves only two subproblems in each iteration, the time it spends solving subproblems is significantly lower than that of the DR L-shaped method, which solves all subproblems corresponding to unique observations in each iteration, for all $N$. Notice that for {\tt STORM}, the average number of iterations taken by DRSD is at least $43$ times the average number of iterations taken by the DR L-shaped method for each $N$. This is reflected in the significant increase in the computational time for solving the master problems and the distribution separation problems. In contrast, for {\tt BAA}, the average number of DRSD iterations is only $28\%$ higher than the number of DR L-shaped iterations. As a result, the increase in computational time due to the master and distribution separation problems is overshadowed by the gains attained by solving only two subproblems in each iteration. Moreover, the computational time associated with solving the distribution separation problem can be reduced by using column-generation procedures that take advantage of the problem structure. Such an implementation was not undertaken for our current experiments and is a fruitful avenue for future research.
\section{Conclusions} \label{sect:conclusion}
We presented a new decomposition approach for solving two-stage distributionally robust linear programs (2-DRLPs) with a general ambiguity set that is defined using continuous and/or discrete probability distributions with a very large sample space. Since this approach extends the stochastic decomposition (SD) approach of Higle and Sen \cite{Higle1991}, which addresses 2-DRLPs with a singleton ambiguity set, we refer to it as the Distributionally Robust Stochastic Decomposition (DRSD) method. DRSD is a sequential sampling-based approach that allows sampling within the optimization step, where only two second-stage subproblems are solved in each iteration. In contrast, an external sampling procedure utilizes the distributionally robust L-shaped method~\cite{bansal_DROdecomposition_2018} for solving 2-DRLPs with a finite number of scenarios, where all subproblems are solved in each iteration. While the design of DRSD accommodates general ambiguity sets, we provided its asymptotic convergence analysis for a family of ambiguity sets that includes the well-known moment-based and Wasserstein metric-based ambiguity sets. Furthermore, we performed computational experiments to evaluate the efficiency and effectiveness of DRSD in solving distributionally robust variants of four well-known stochastic programming test problems with supports of size ranging from $216$ to $10^{81}$. Based on our results, we observed that the objective function estimates obtained using DRSD and the DR L-shaped method are statistically comparable, while DRSD provides computational improvements that are critical for large-scale problem instances.
\appendix
\section{Proofs} \label{sect:proofs}
In this appendix, we provide the proofs for the propositions related to the asymptotic behavior of the ambiguity sets approximations defined in \S\ref{sect:ApproxAmbiguitySet} and the recourse function approximation presented in \S\ref{sect:recourseApprox}.
\begin{proof} (Proposition \ref{prop:momentAmbiguity_property})
For $P \in \widehat{\probset}_{\textup{mom}}^{k-1}$, it is easy to verify that $p^\prime = \Theta P$ satisfies the support constraint, viz., $\sum_{\omega \in \tilde{\omega}set^k} p^\prime(\omega) = 1$. Now, for $i = 1,\ldots,q$, we have
\begin{align*}
\sum_{\omega \in \tilde{\omega}set^{k}} p^\prime(\omega) \psi_i(\omega) =~& \sum_{\omega \in \tilde{\omega}set^{k-1}, \omega \neq \omega^k} p^\prime(\omega) \psi_i(\omega) + p^\prime(\omega^k) \psi_i(\omega^k) \\
=~& \theta^k \sum_{\omega \in \tilde{\omega}set^{k-1}, \omega \neq \omega^k} p(\omega) \psi_i(\omega) + \theta^k p(\omega^k)\psi_i(\omega^k) + (1-\theta^k)\psi_i(\omega^k) \\
=~& \theta^k \sum_{\omega \in \tilde{\omega}set^{k-1}} p(\omega) \psi_i(\omega) + (1-\theta^k)\psi_i(\omega^k) \\
=~& \theta^k \hat{b}_i^{k-1} + (1-\theta^k)\psi_i(\omega^k) \\
=~& \theta^k \sum_{\omega \in \tilde{\omega}set^{k-1}} \hat{p}^{k-1}(\omega) \psi_i(\omega) + (1-\theta^k)\psi_i(\omega^k) = \hat{b}^k_i.
\end{align*}
This implies that $\Theta^k(P) \in \widehat{\probset}_{\textup{mom}}^k$.
Using Proposition 4 in \cite{Sun2016}, there exists a positive constant $\chi$ such that
\begin{align*}
0 \leq \hausdorffDistance{\widehat{\probset}_{\textup{mom}}^k, \mathfrak{P}_{\textup{mom}}} \leq \chi \|\hat{\mathbf{b}}^k - \mathbf{b}\|.
\end{align*}
Here, $\mathbf{b} = (b_i)_{i=1}^q$ and $\hat{\mathbf{b}}^k = (\hat{b}_i^k)_{i=1}^q$, and $\|\cdot\|$ denotes the Euclidean norm. Since the approximate ambiguity sets are constructed using independent and identically distributed samples of $\tilde{\omega}$, the law of large numbers gives $\hat{b}_i^k \rightarrow b_i$ almost surely for all $i = 1,\ldots,q$. This completes the proof.
\end{proof}
\begin{proof} (Proposition \ref{prop:wassersteinAmbiguity_property})
Consider approximate ambiguity sets $\widehat{\mathfrak{P}}^{k-1}_\text{w}$ and $\widehat{\mathfrak{P}}^k_\text{w}$ of the form given in \eqref{eq:wassersteinAmbiguityApprox_full}. Let $P^\prime = (p^\prime(\omega))_{\omega \in \Omega^{k-1}} \in \widehat{\mathfrak{P}}^{k-1}_\text{w}$, and let the reconstructed probability distribution be denoted by $P$. We can easily check that $P = \Theta(P^\prime)$ is indeed a probability distribution. With $P$ fixed, it suffices now to show that the polyhedron
\begin{align} \label{eq:etaPolyhedron}
\set{E}(P, \widehat{P}^k) = \left \{ \eta \in \mathbb{R}^{\tilde{\omega}set^k \times \tilde{\omega}set^k} \left \vert
\renewcommand{\arraystretch}{1.5}
\begin{array}{l}
\sum_{\omega^\prime \in \tilde{\omega}set^k} \eta(\omega,\omega^\prime) = p(\omega) \qquad \forall \omega \in \tilde{\omega}set^k, \\
\sum_{\omega \in \tilde{\omega}set^k} \eta(\omega,\omega^\prime) = \hat{p}^k(\omega^\prime) \qquad \forall \omega^\prime \in \tilde{\omega}set^k, \\
\sum_{(\omega, \omega^\prime) \in \tilde{\omega}set^k \times \tilde{\omega}set^k} \|\omega - \omega^\prime\| \eta(\omega,\omega^\prime) \leq \epsilon
\end{array}\right. \right \}
\end{align}
is non-empty. Since $P^\prime \in \widehat{\mathfrak{P}}^{k-1}_\text{w}$, there exist $\eta^\prime(\omega, \omega^\prime)$ for all $(\omega, \omega^\prime) \in \tilde{\omega}set^{k-1} \times \tilde{\omega}set^{k-1}$ such that the constraints in the description of the approximate ambiguity set in \eqref{eq:wassersteinAmbiguityApprox_full} are satisfied. We establish the non-emptiness of $\set{E}(P, \widehat{P}^k)$ by analyzing two possibilities.
\begin{enumerate}
\item We encounter a previously seen observation, i.e., $\omega^k \in \tilde{\omega}set^{k-1}$ and $\tilde{\omega}set^k = \tilde{\omega}set^{k-1}$. Let $\eta(\omega, \omega^\prime) = \theta^k \eta^\prime(\omega, \omega^\prime)$ for $\omega,\omega^\prime \in \tilde{\omega}set^{k-1}$ with $(\omega,\omega^\prime) \neq (\omega^k,\omega^k)$, and $\eta(\omega^k, \omega^k) = \theta^k \eta^\prime(\omega^k, \omega^k) + (1-\theta^k)$. We verify the feasibility of this choice by checking the three sets of constraints in \eqref{eq:etaPolyhedron}.
\begin{align*}
\sum_{\omega^\prime \in \tilde{\omega}set^k} \eta(\omega,\omega^\prime) =~& \sum_{\omega^\prime \in \tilde{\omega}set^k \setminus \{\omega^k\}} \eta(\omega, \omega^\prime) + \eta(\omega, \omega^k)\\
=~& \sum_{\omega^\prime \in \tilde{\omega}set^{k-1} \setminus \{\omega^k\}} \theta^k \eta^\prime(\omega, \omega^\prime) + \theta^k \eta^\prime(\omega, \omega^k) + \mathbf{1}_{\omega = \omega^k} (1-\theta^k) \\
=~& \theta^k \bigg ( \sum_{\omega^\prime \in \tilde{\omega}set^{k-1}} \eta^\prime(\omega, \omega^\prime)\bigg) + \mathbf{1}_{\omega = \omega^k} (1-\theta^k) \\
=~& \theta^k p^\prime (\omega) + \mathbf{1}_{\omega = \omega^k} (1-\theta^k) = p(\omega) \qquad \forall \omega \in \tilde{\omega}set^k.
\end{align*}
\begin{align*}
\sum_{\omega \in \tilde{\omega}set^k} \eta(\omega,\omega^\prime) =~& \sum_{\omega \in \tilde{\omega}set^k \setminus \{\omega^k\}} \eta(\omega,\omega^\prime) + \eta(\omega^k,\omega^\prime) \\
=~& \sum_{\omega \in \tilde{\omega}set^{k-1} \setminus \{\omega^k\}} \theta^k \eta^\prime(\omega,\omega^\prime) + \theta^k \eta^\prime(\omega^k,\omega^\prime) + \mathbf{1}_{\omega^\prime = \omega^k} (1-\theta^k) \\
=~& \theta^k \bigg(\sum_{\omega \in \tilde{\omega}set^{k-1}} \eta^\prime(\omega,\omega^\prime)\bigg) + \mathbf{1}_{\omega^\prime = \omega^k} (1-\theta^k) \\
=~& \theta^k \hat{p}^{k-1}(\omega^\prime) + \mathbf{1}_{\omega^\prime = \omega^k} (1-\theta^k) = \hat{p}^k(\omega^\prime) \qquad \forall \omega^\prime \in \tilde{\omega}set^k.
\end{align*}
Finally,
\begin{align*}
\sum_{(\omega, \omega^\prime) \in \tilde{\omega}set^k \times \tilde{\omega}set^k} \|\omega - \omega^\prime\| & \eta(\omega,\omega^\prime) \\
=~& \sum_{\substack{(\omega, \omega^\prime) \in \tilde{\omega}set^{k-1} \times \tilde{\omega}set^{k-1}\\(\omega,\omega^\prime) \neq (\omega^k,\omega^k)}} \theta^k \|\omega - \omega^\prime\| \eta^\prime(\omega,\omega^\prime) + \|\omega^k - \omega^k\| \eta(\omega^k,\omega^k) \\
=~& \theta^k \bigg ( \sum_{(\omega, \omega^\prime) \in \tilde{\omega}set^{k-1} \times \tilde{\omega}set^{k-1}} \|\omega - \omega^\prime\| \eta^\prime(\omega,\omega^\prime) \bigg ) \leq \theta^k \epsilon \leq \epsilon.
\end{align*}
Since all three constraints are satisfied, the chosen values of $\eta$ define an element of the polyhedron $\set{E}$, and therefore $\set{E} \neq \emptyset$.
\item We encounter a new observation, i.e., $\omega^k \notin \tilde{\omega}set^{k-1}$. Let $\eta(\omega,\omega^\prime) = \theta^k \eta^\prime(\omega, \omega^\prime)$ for $\omega, \omega^\prime \in \tilde{\omega}set^{k-1}$, $\eta(\omega^k,\omega^\prime) = 0 $ for $\omega^\prime \in \Omega^{k-1}$, $ \eta(\omega,\omega^k) = 0$ for $\omega \in \Omega^{k-1}$, and $\eta(\omega^k,\omega^k) = (1-\theta^k)$. Let us verify the three conditions defining \eqref{eq:etaPolyhedron} with this choice for $\eta$ variables.
\begin{align*}
\sum_{\omega^\prime \in \tilde{\omega}set^k} \eta(\omega,\omega^\prime) =~& \sum_{\omega^\prime \in \tilde{\omega}set^k\setminus \{\omega^k\}} \eta(\omega,\omega^\prime) + \eta(\omega,\omega^k) \\
=~& \sum_{\omega^\prime \in \tilde{\omega}set^{k-1}} \theta^k \eta^\prime(\omega, \omega^\prime) + \mathbf{1}_{\omega = \omega^k} (1-\theta^k) \\
=~& \theta^k p^\prime(\omega) + \mathbf{1}_{\omega = \omega^k} (1-\theta^k) = p(\omega). \\
\sum_{\omega \in \tilde{\omega}set^k} \eta(\omega,\omega^\prime) =~& \sum_{\omega \in \tilde{\omega}set^k \setminus \{\omega^k\}} \eta(\omega,\omega^\prime) + \eta(\omega^k,\omega^\prime) \\
=~& \sum_{\omega \in \tilde{\omega}set^{k-1}} \theta^k \eta^\prime(\omega, \omega^\prime) + \mathbf{1}_{\omega^\prime = \omega^k} (1-\theta^k) \\
=~& \theta^k \hat{p}^{k-1}(\omega^\prime) + \mathbf{1}_{\omega^\prime = \omega^k} (1-\theta^k) = \hat{p}^k(\omega^\prime).
\end{align*}
Consider,
\begin{align*}
\sum_{(\omega, \omega^\prime) \in \tilde{\omega}set^k \times \tilde{\omega}set^k} \|\omega - \omega^\prime\| & \eta(\omega,\omega^\prime) \\ =~& \sum_{(\omega, \omega^\prime) \in \tilde{\omega}set^{k-1} \times \tilde{\omega}set^{k-1}} \theta^k \|\omega - \omega^\prime\| \eta^\prime(\omega,\omega^\prime) + \|\omega^k - \omega^k\| \eta(\omega^k,\omega^k) \\
&\hspace{1cm} + \sum_{\omega \in \Omega^k}\|\omega - \omega^k\| \eta(\omega,\omega^k) + \sum_{\omega^\prime \in \Omega^k}\|\omega^k - \omega^\prime\| \eta(\omega^k,\omega^\prime) \\
\leq~& \theta^k \epsilon \leq \epsilon.
\end{align*}
Therefore, the chosen values of the $\eta$ variables satisfy the constraints and $\set{E} \neq \emptyset$. This implies that $P = \Theta(P^\prime) \in \widehat{\mathfrak{P}}_{\textup{w}}^k$.
\end{enumerate}
Next, let us consider a distribution $Q \in \widehat{\mathfrak{P}}_{\textup{w}}^{k}$. Then,
\begin{align*}
d_{\textup{w}}(Q,P^*) \leq~& d_{\textup{w}} (Q,\widehat{P}^k) + d_{\textup{w}}(\widehat{P}^k,P^*) \leq \epsilon + d_{\textup{w}}(\widehat{P}^k,P^*).
\end{align*}
The first inequality above is the triangle inequality for the Wasserstein distance, and the second follows since $Q \in \widehat{\mathfrak{P}}^k_{\textup{w}}$ implies $d_{\textup{w}}(Q,\widehat{P}^k) \leq \epsilon$.
Under the compactness assumption for $\tilde{\omega}set$, $d > 2$, and $\expect{\text{exp}(\|\tilde{\omega}\|^a)}{P^*} < \infty$, Theorem 2 in \cite{Fournier2015rate} guarantees
\begin{align*}
\text{Prob}\big[d_{\textup{w}}(\widehat{P}^k,P^*) \geq \delta\big] \leq \left\{
\begin{array}{ll} C~ \text{exp}(-ck\delta^d) & \text{if}~ \delta > 1 \\
C~ \text{exp}(-ck\delta^a) & \text{if}~\delta \leq 1
\end{array} \right .
\end{align*}
for all $k \geq 1$. This implies that $\lim_{k \rightarrow \infty} d_{\textup{w}}(\widehat{P}^k,P^*) = 0$, almost surely. Consequently, we obtain that $d_{\textup{w}}(Q,P^*) \leq \epsilon$ (or equivalently, $Q \in \mathfrak{P}_{\textup{w}}$) as $k \rightarrow \infty$, almost surely. This completes the proof.
\end{proof}
\begin{proof}(Proposition \ref{prop:uniformConvergenceApproxRecourse})
Recall that $\set{X} \times \tilde{\omega}set$ is a compact set because of Assumptions \ref{assum:compactX} and \ref{assum:compactOm}, and $\{Q^k\}$ is a sequence of continuous (piecewise linear and convex) functions. Further, the construction of the set of dual vertices satisfies $\Pi^0 = \{{\bf 0}\} \subseteq \ldots \subseteq \Pi^k \subseteq \Pi^{k+1} \subseteq \ldots \subseteq \Pi$, which ensures that $0 \leq Q^k(x,\omega) \leq Q^{k+1}(x,\omega) \leq Q(x,\omega)$ for all $(x,\omega)\in \set{X} \times \tilde{\omega}set$. Since $\{Q^k\}$ increases monotonically and is bounded by a finite function $Q$ (due to \ref{assum:completeRecourse}), the sequence converges pointwise to some function $\xi(x,\omega) \leq Q(x,\omega)$. Once again due to \ref{assum:completeRecourse}, we know that the set of dual vertices $\Pi$ is finite, and since $\Pi^k \subseteq \Pi^{k+1} \subseteq \Pi$, the set $\lim_{k\rightarrow\infty}\Pi^k := \overline{\Pi} ~(\subseteq \Pi)$ is also a finite set. Clearly,
\begin{align*}
\xi(x,\omega) = \lim_{k \rightarrow \infty} Q^k(x,\omega) = \max~\{\pi^\top [r(\omega) - T(\omega)x]~|~ \pi \in \overline{\Pi}\}
\end{align*}
is the optimal value of an LP and hence a continuous function. The compactness of $\set{X} \times \tilde{\omega}set$, together with the continuity, monotonicity, and pointwise convergence of $\{Q^k\}$ to $\xi$, guarantees that the sequence converges uniformly to $\xi$ (Dini's theorem; Theorem 7.13 in \cite{Rudin1976}).
\end{proof}
\end{document} |
\begin{document}
\title{When is a Squarefree Monomial Ideal of Linear Type?}
\begin{abstract}
In $1995$ Villarreal gave a combinatorial description of the equations of Rees algebras of quadratic squarefree monomial ideals.
His description was based on the concept of closed even walks in a graph. In this paper we will generalize his results to all squarefree monomial ideals
by defining even walks in a simplicial complex. We show that simplicial complexes with no even walks have facet ideals that are of linear type, generalizing
Villarreal's work.
\end{abstract}
\section{Introduction}
Rees algebras are of special interest in algebraic geometry and commutative algebra since they describe
the blowing up of the spectrum of a ring along the subscheme defined by an ideal.
The Rees algebra of an ideal can also be viewed as a quotient of a polynomial ring. If $I$ is an ideal of a ring $R$,
we denote the Rees algebra of $I$ by $R[It]$, and we can represent $R[It]$ as $S/J$ where $S$ is a polynomial
ring over $R$. The ideal $J$ is called the {\bf defining ideal} of $R[It]$. Finding generators of $J$ is difficult and crucial for a better understanding of $R[It]$. Many authors
have worked to get a better insight into these generators in special classes of ideals, such as those with special height, special embedding dimension and so on.
When $I$ is a monomial ideal, using methods from Taylor's thesis~\cite{Taylor1966} one can describe the generators of $J$ as binomials.
Using this fact, Villarreal \cite{Villarreal1995} gave a combinatorial
characterization of $J$ in the case of degree $2$ squarefree monomial ideals. His work led Fouli and Lin~\cite{fouli2013} to consider the question of characterizing generators of $J$ when $I$ is a
squarefree monomial ideal in any degree. With this purpose in mind we define simplicial even walks, and show that for all squarefree monomial ideals,
they identify generators of $J$ that may be obstructions to $I$ being of linear type. We show that in dimension $1$, simplicial even walks are the same as closed even walks of graphs.
We then further investigate properties of simplicial even walks, and reduce the problem of checking whether
an ideal is of linear type to identifying simplicial even walks. At the end of the paper we give a new proof for Villarreal's Theorem (Corollary~\ref{col:vill}).
\section{Rees algebras and their equations}
Let $I$ be a monomial ideal in a polynomial ring $R={\mathbb{K}}[x_1,\dots,x_n]$ over a field ${\mathbb{K}}$. We denote the {\bf Rees algebra} of $I=(f_1,\dots,f_{q})$ by $R[It]=R[f_1t,\dots,f_qt]$ and
consider the homomorphism $\psi$ of algebras
\begin{align*}
\psi:R[T_1,\dots,T_q]\longrightarrow R[It], \hspace{.06 in} T_i\mapsto f_{i}t.
\end{align*}
If $J$ is the kernel of $\psi$, we can identify the Rees algebra $R[It]$ with the quotient $R[T_1,\dots,T_q]/J$ of the polynomial ring $R[T_1,\dots,T_q]$.
The ideal $J$ is called the \textbf{defining ideal} of $R[It]$ and
its minimal generators are called the \textbf{Rees equations} of $I$.
These equations carry a lot of information about $R[It]$; see, for example,~\cite{Vasconcelos1994} for more details.
\begin{defn}
For integers $s,q\geq 1$ we define
$${\mathcal{I}}_{s}=\{(i_1,\dots,i_s):1\leq i_1\leq i_2\leq\dots\leq i_{s}\leq q\}\subset {\mathbb{N}}^{s}.$$
Let $\alpha=(i_1,\dots,i_s)\in{\mathcal{I}}_{s}$ and $f_1,\dots,f_q$ be monomials in $R$ and
$T_1,\dots,T_q$ be variables. We use the following notation for the rest of this paper. If $t\in\{1,\dots,s\}$
\begin{itemize}
\item $\ Supp \ (\alpha)=\{i_1,\dots,i_s\}$;
\item $\widehat{\alpha}_{i_t}=(i_1,\dots,\widehat{i}_{t},\dots,i_s)$;
\item ${T}_{\alpha}=T_{i_1}\dots T_{i_s}$ and $\ Supp \ (T_{\alpha})=\{T_{i_1},\dots ,T_{i_s}\}$;
\item ${f}_{\alpha}=f_{i_1}\dots f_{i_s}$;
\item $\displaystyle \widehat{f}_{\alpha_t}=f_{i_1}\dots\widehat{f}_{i_t}\dots f_{i_s}=\frac{f_{\alpha}}{f_{i_t}}$;
\item $\displaystyle \widehat{T}_{\alpha_t}=T_{i_1}\dots\widehat{T}_{i_t}\dots T_{i_s}=\frac{T_{\alpha}}{T_{i_t}}$;
\item $\alpha_{t}(j)=(i_1,\dots,i_{t-1},j,i_{t+1},\dots,i_s)$, for $j\in\{1,2,\dots,q\}$ and $s\geq 2$.
\end{itemize}
\end{defn}
For an ideal $I=(f_1,\dots,f_q)$ of $R$ the defining ideal $J$ of $R[It]$ is graded and
$$J=J^{\prime}_1\oplus J^{\prime}_2\oplus\cdots$$
where, for $s\geq 1$, $J^{\prime}_s$ is the $R$-module consisting of the elements of $J$ that are homogeneous of degree $s$ in the variables $T_1,\dots,T_q$.
The ideal $I$ is said to be \textbf{of linear type} if $J=(J^{\prime}_1)$; in other words, the defining ideal of $R[It]$ is generated by linear forms in the variables $T_1,\dots,T_q$.
\begin{defn}
Let $I=(f_1,\dots,f_q)$ be a monomial ideal, $s\geq 2$ and $\alpha,\beta\in{\mathcal{I}}_{s}$. We define
\begin{eqnarray}\label{eqn:lineartype}
T_{\alpha,\beta}(I)=\left(\frac{\operatorname{lcm}(f_{\alpha},f_{\beta})}{f_{\alpha}}\right)T_{\alpha}-
\left(\frac{\operatorname{lcm}(f_{\alpha},f_{\beta})}{f_{\beta}}\right)T_{\beta}.\label{equation:new}
\end{eqnarray}
When $I$ is clear from the context we use $T_{\alpha,\beta}$ to denote $T_{\alpha,\beta}(I)$.
\end{defn}
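The binomials $T_{\alpha,\beta}$ in \eqref{eqn:lineartype} can be computed mechanically from the exponent vectors of the $f_i$. The following is a small illustrative sketch (it is not part of the arguments of this paper): monomials are encoded as exponent dictionaries, and the edge ideal of the $4$-cycle, a closed even walk, is used as a hypothetical input.
\begin{verbatim}
from collections import Counter
from itertools import chain

def T_alpha_beta(f, alpha, beta):
    """Return the monomial coefficients (lcm(f_alpha,f_beta)/f_alpha,
    lcm(f_alpha,f_beta)/f_beta) of T_{alpha,beta}, each as an exponent
    dictionary.  Generators f are exponent dictionaries; indices are 1-based."""
    def product(indices):              # exponent vector of f_{indices}
        m = Counter()
        for i in indices:
            m.update(f[i - 1])
        return m
    fa, fb = product(alpha), product(beta)
    lcm = {x: max(fa[x], fb[x]) for x in set(chain(fa, fb))}
    coeff_a = {x: lcm[x] - fa[x] for x in lcm if lcm[x] > fa[x]}
    coeff_b = {x: lcm[x] - fb[x] for x in lcm if lcm[x] > fb[x]}
    return coeff_a, coeff_b

# Hypothetical input: edge ideal of the 4-cycle, f1=x1x2, f2=x2x3, f3=x3x4, f4=x4x1.
f = [Counter({'x1': 1, 'x2': 1}), Counter({'x2': 1, 'x3': 1}),
     Counter({'x3': 1, 'x4': 1}), Counter({'x4': 1, 'x1': 1})]
print(T_alpha_beta(f, (1, 3), (2, 4)))   # ({}, {}): both coefficients are 1
\end{verbatim}
For the $4$-cycle with $\alpha=(1,3)$ and $\beta=(2,4)$ one obtains $T_{\alpha,\beta}=T_1T_3-T_2T_4$, since $f_\alpha=f_\beta=x_1x_2x_3x_4$.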
\begin{prop}(D. Taylor~\cite{Taylor1966})\label{eqn:taylortheorem}
Let $I=(f_1,\dots,f_q)$ be a monomial ideal in $R$ and $J$ be the defining ideal of $R[It]$. Then for $s\geq 2$ we have
\begin{align*}\label{eqn:2}
J^{\prime}_s= \left\langle T_{\alpha,\beta}(I): \alpha,\beta\in {\mathcal{I}}_{s}
\right\rangle.
\end{align*}
Moreover, if $m=\gcd{(f_1,\dots,f_{q})}$ and $I^{\prime}=(f_1/m,\dots,f_{q}/m)$, then for every $\alpha,\beta\in {\mathcal{I}}_{s}$ we have
$$T_{\alpha,\beta}(I)=T_{\alpha,\beta}(I^{\prime})$$
and hence $R[It]=R[I^{\prime}t]$.
\end{prop}
In light of Proposition~\ref{eqn:taylortheorem}, we will always assume that if $I=(f_1,\dots,f_q)$ then
\begin{eqnarray*}
\gcd{(f_1,\dots,f_q)}=1.
\end{eqnarray*}
We will also assume $\ Supp \ (\alpha)\cap\ Supp \ (\beta)=\emptyset$, since otherwise $T_{\alpha,\beta}$ reduces to those with this property.
This is because if $t\in \ Supp \ (\alpha)\cap\ Supp \ (\beta)$ then we have
$T_{\alpha,\beta}=T_{t}T_{\widehat{\alpha}_{t},\widehat{\beta}_{t}}.$
For this reason we define
\begin{eqnarray}\label{eqn:2}
\displaystyle J_s= \left\langle T_{\alpha,\beta}(I): \alpha,\beta\in {\mathcal{I}}_{s},\ Supp \ (\alpha)\cap\ Supp \ (\beta)=\emptyset
\right\rangle
\end{eqnarray}
as an $R$-module. Clearly $J=J_1S+J_2S+\cdots$.
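Continuing the sketch above, the index pairs that appear in the definition of $J_s$ can be enumerated directly. The snippet below lists, for hypothetical values of $q$ and $s$, the unordered pairs $\alpha,\beta\in{\mathcal{I}}_s$ with disjoint supports; each such pair indexes one generator $T_{\alpha,\beta}$ of $J_s$, up to sign.
\begin{verbatim}
from itertools import combinations_with_replacement

def index_pairs(q, s):
    """Unordered pairs (alpha, beta) with alpha, beta in I_s and
    Supp(alpha) and Supp(beta) disjoint, as in the module J_s above."""
    I_s = list(combinations_with_replacement(range(1, q + 1), s))
    return [(a, b) for a in I_s for b in I_s if a < b and not set(a) & set(b)]

print(len(index_pairs(4, 2)))   # 21 pairs for q = 4, s = 2
\end{verbatim}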
\begin{defn}\label{den:def}
Let $I=(f_1,\dots,f_q)$ be a squarefree monomial ideal in $R$ and $J$ be the defining ideal of $R[It]$, $s\geq 2$, and $\alpha=(i_1,\dots,i_s),\beta=(j_1,\dots,j_s)\in {\mathcal{I}}_s$.
We call $T_{\alpha,\beta}$ {\bf redundant} if it is a redundant generator of $J$, coming from lower degree; i.e.
$$\displaystyle T_{\alpha,\beta}\in J_1S+\dots+ J_{s-1}S.$$
\end{defn}
\section{Simplicial even walks}
Villarreal~\cite{Villarreal1995} classified all Rees equations of edge ideals of graphs in terms of closed even walks in graph theory. In this section our goal is to define an even walk in a simplicial complex in order to classify all irredundant Rees equations of squarefree monomial ideals.
Motivated by the works of S. Petrović and D. Stasi in \cite{sonia2012} we generalize closed even walks from graphs to simplicial complexes.
We begin with basic definitions that we will need later.
\begin{defn}
A \textbf{simplicial complex} on vertex set $\mbox{V}=\left\{x_1,\dots,x_n\right\}$ is a collection $\Delta$ of subsets of $\mbox{V}$ satisfying
\begin{enumerate}
\item $\left\{x_i\right\}\in \Delta$ for all $i$,
\item $F \in \Delta , G\subseteq F \Longrightarrow G \in \Delta$.
\end{enumerate}
The set $\mbox{V}$ is called the {\bf vertex set} of $\Delta$ and we denote it by $\mbox{V}(\Delta)$.
The elements of $\Delta$ are called \textbf{faces} of $\Delta$ and the maximal faces under inclusion are called \textbf{facets}. We denote the simplicial complex $\Delta$
with facets $F_1,\dots,F_s$ by $\left\langle F_1,\dots,F_s\right\rangle$. We denote the set of facets of $\Delta$ with $\mbox{Facets}\left(\Delta\right)$.
A \textbf{subcollection} of a simplicial complex $\Delta$ is a simplicial complex whose facet set is a subset of the facet set of $\Delta$.
\end{defn}
\begin{defn}
Let $\Delta$ be a simplicial complex with at least three facets, ordered as $F_1,\dots,F_q$. Suppose $\bigcap F_i=\emptyset$.
With respect to this order $\Delta$ is a
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{enumerate}
\item {\bf extended trail} if we have
\begin{align*}
F_{i}\cap F_{i+1}\neq \emptyset\hspace{.3 in}\mbox{$i=1,\dots,q$}\hspace{.2 in}\mbox{mod $q$};
\end{align*}
\item {\bf special cycle}~\cite{herzog2008} if $\Delta$ is an extended trail in which we have
\begin{eqnarray*}
F_{i}\cap F_{i+1}\not\subset \bigcup_{j\notin\{i,i+1\}} F_j&\mbox{$i=1,\dots,q$}\hspace{.2 in}\mbox{mod $q$};
\end{eqnarray*}
\item {\bf simplicial cycle}~\cite{Faridi2007} if $\Delta$ is an extended trail in which we have
\begin{eqnarray*}
F_{i}\cap F_{j}\neq \emptyset\Leftrightarrow j\in\{i+1,i-1\}&\mbox{$i,j=1,\dots,q$, $i\neq j$}\hspace{.2 in}\mbox{mod $q$}.
\end{eqnarray*}
\end{enumerate}
\end{defn}
We say that $\Delta$ is an extended trail (or special or simplicial cycle) if there is an order on the facets of $\Delta$ such that the specified conditions hold on that order.
Note that
\begin{align*}
\{\mbox{Simplicial Cycles}\}\subseteq \{\mbox{Special Cycles}\}\subseteq \{\mbox{Extended Trails}\}.
\end{align*}
\begin{defn}[Simplicial Trees and Simplicial Forests \cite{Faridi2007}\& \cite{Faridi2002}]
A simplicial complex $\Delta$ is called a {\bf simplicial forest} if $\Delta$ contains no simplicial cycle.
If $\Delta$ is also connected, it is called a {\bf simplicial tree}.
\end{defn}
\begin{defn}[\cite{Zheng2004}, Lemma 3.10]\label{Zheng:1}
Let $\Delta$ be a simplicial complex. The facet $F$ of $\Delta$ is called a {\bf good leaf} of $\Delta$ if
the set $\left\{H\cap F; H\in \mbox{Facets}(\Delta)\right\}$ is totally ordered by inclusion.
\end{defn}
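Whether a facet is a good leaf can be tested directly from Definition~\ref{Zheng:1}: one only needs to check that its intersections with the other facets form a chain under inclusion. The following is a minimal illustrative sketch (facets as Python sets); it is an aid to the reader and not part of the arguments below.
\begin{verbatim}
def is_good_leaf(F, other_facets):
    """Return True if the intersections {H & F : H a facet distinct from F}
    are totally ordered by inclusion (adding F itself only contributes the
    top element F, so it does not affect the outcome)."""
    intersections = sorted((H & F for H in other_facets), key=len)
    return all(a <= b for a, b in zip(intersections, intersections[1:]))

# A path x1x2, x2x3, x3x4: the end facet {x1,x2} is a good leaf.
print(is_good_leaf({'x1', 'x2'}, [{'x2', 'x3'}, {'x3', 'x4'}]))   # True
\end{verbatim}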
Good leaves were first introduced by X. Zheng in her PhD thesis~\cite{Zheng2004} and later in~\cite{Faridi2007}. The existence of a good leaf in every tree was proved in
\cite{herzog2008} in 2008.
\begin{theorem}[\cite{herzog2008}, Corollary 3.4]~\label{theorem:herzog}
Every simplicial forest contains a good leaf.
\end{theorem}
\begin{defn}
Let $I=(f_1,\dots,f_q)$ be a squarefree monomial ideal in $R=\mathbb{K}[x_1,\dots,x_n]$. The \textbf{facet complex} $\mbox{Facets}F(I)$ associated to $I$ is the simplicial complex with
facets $F_1,\dots,F_q$, where for each $i$,
$$F_i=\left\{x_j:\hspace{.02 in} x_j | f_i,\hspace{.04 in} 1\leq j\leq n\right\}.$$
The \textbf{facet ideal} of a simplicial complex $\Delta$ is the ideal generated by the products of the variables labeling the vertices of each facet of $\Delta$; in other words
$$\mbox{Facets}F(\Delta)=\left( \prod_{x \in F}x : \ F \mbox{ is a facet of $\Delta$} \right).$$
\end{defn}
\begin{defn}[\bf{Degree}]\label{egn:defn}
Let $\Delta=\langle F_1,\dots, F_q\rangle$ be a simplicial complex, $\mbox{Facets}F(\Delta)=(f_1,\dots,f_{q})$ be its facet ideal and $\alpha=(i_1,\dots,i_s)\in {\mathcal{I}}_{s}$, $s\geq 1$.
We define the $\alpha$-{\bf degree for a vertex} $x$ of $\Delta$ to be
\begin{eqnarray*}
deg_{\alpha}(x)&=\max\{m:x^{m}|f_{\alpha}\}
\end{eqnarray*}
\end{defn}
\begin{example}\label{eqn:first}
Consider Figure [\ref{figure2}] where
\begin{eqnarray*}
&F_1=\{x_4,x_7,a_3\},F_2=\{x_4,x_5,a_1\},F_3=\{x_5,x_6,a_2\},\\
&F_4=\{x_2,x_3,a_2\},F_5=\{x_1,x_2,a_1\},F_6=\{x_6,x_7,a_1\}.
\end{eqnarray*}
If we consider $\alpha=(1,3,5)$ and $\beta=(2,4,6)$ then $deg_{\alpha}(a_1)=1$ and $deg_{\beta}(a_1)=2$.
\begin{figure}
\caption{Even walk}
\caption{Not an even walk}
\label{figure2}
\label{figure3}
\end{figure}
\end{example}
Suppose $I=(f_1,\dots,f_q)$ is a squarefree monomial ideal in $R$ with $\Delta=\left\langle F_1,\dots,F_q\right\rangle $
its facet complex and let $\alpha,\beta\in {\mathcal{I}}_{s}$ where $s\geq 2$ is an integer.
We set $\alpha=(i_1,\dots,i_s)$ and $\beta=(j_1,\dots,j_s)$ and consider the following sequence of not necessarily distinct facets of $\Delta$
$${\mathbb{C}}C_{\alpha,\beta}=F_{i_1},F_{j_1},\dots,F_{i_s},F_{j_s}.$$
Then (\ref{equation:new}) becomes
\begin{eqnarray}\label{eqn:lcmequation}
\displaystyle T_{\alpha,\beta}(I)=\left(\prod_{deg_{\alpha}(x) < deg_{\beta}(x)} x^{deg_{\beta}(x)-deg_{\alpha}(x)}\right)T_{\alpha}-
\left(\prod_{deg_{\alpha}(x) > deg_{\beta}(x)} x^{deg_{\alpha}(x) - deg_{\beta}(x)}\right)T_{\beta}
\end{eqnarray}
where the products vary over the vertices $x$ of ${\mathbb{C}}C_{\alpha,\beta}$.
\begin{defn}[\bf{Simplicial even walk}]\label{def:scew}
Let $\Delta=\langle F_1,\dots, F_q\rangle$ be a simplicial complex and let $\alpha=(i_1,\dots,i_s),\beta=(j_1,\dots,j_s)\in {\mathcal{I}}_{s}$, where $s\geq 2$.
The following sequence of not necessarily distinct facets of $\Delta$
\begin{eqnarray*}
{\mathbb{C}}C_{\alpha,\beta}= F_{i_1},F_{j_1},\dots,F_{i_s},F_{j_s}
\end{eqnarray*}
is called a {\bf simplicial even walk}, or simply an ``even walk'', if the following condition holds:
\begin{itemize}
\item For every $i\in \ Supp \ (\alpha)$ and $j\in \ Supp \ (\beta)$ we have
\begin{eqnarray*}
F_{i}\backslash F_{j}\not \subset \{x\in \mbox{V}(\Delta): deg_{\alpha}(x) > deg_{\beta}(x)\}&\mbox{and}&
F_{j}\backslash F_{i}\not \subset \{x\in \mbox{V}(\Delta): deg_{\alpha}(x) < deg_{\beta}(x)\}.
\end{eqnarray*}
\end{itemize}
If ${\mathbb{C}}C_{\alpha,\beta}$ is connected, we call the even walk ${\mathbb{C}}C_{\alpha,\beta}$ a {\bf connected} even walk.
\end{defn}
\begin{remark}
It follows from the definition that if ${\mathbb{C}}C_{\alpha,\beta}$ is an even walk, then $\ Supp \ (\alpha)\cap \ Supp \ (\beta)=\emptyset$.
\end{remark}
\begin{example}
In Figures [\ref{figure2}] and [\ref{figure3}], set $\alpha=(1,3,5)$ and $\beta=(2,4,6)$. Then ${\mathbb{C}}C_{\alpha,\beta}=F_1,\dots,F_6$ is an even walk in Figure [\ref{figure2}], but in Figure [\ref{figure3}] it is not an even walk because
$$F_1\backslash F_2=\{x_1,a_1\}=\{x:deg_{\alpha}(x)>deg_{\beta}(x)\}.$$
\end{example}
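The even walk condition can also be verified mechanically. The following sketch (facets as Python sets, $\alpha$-degrees computed as in Definition~\ref{egn:defn}) reproduces the computation for the complex of Figure~[\ref{figure2}] with $\alpha=(1,3,5)$ and $\beta=(2,4,6)$; it is an illustration only.
\begin{verbatim}
def deg(facets, idx, x):
    """alpha-degree of the vertex x, i.e., its exponent in f_alpha."""
    return sum(x in facets[i - 1] for i in idx)

def is_even_walk(facets, alpha, beta):
    """Check the condition of Definition (simplicial even walk): for every
    i in Supp(alpha) and j in Supp(beta), the set difference of F_i and F_j
    is not contained in {x : deg_alpha(x) > deg_beta(x)}, and the set
    difference of F_j and F_i is not contained in
    {x : deg_alpha(x) < deg_beta(x)}."""
    vertices = set().union(*facets)
    A = {x for x in vertices if deg(facets, alpha, x) > deg(facets, beta, x)}
    B = {x for x in vertices if deg(facets, alpha, x) < deg(facets, beta, x)}
    return all(not (facets[i - 1] - facets[j - 1]).issubset(A)
               and not (facets[j - 1] - facets[i - 1]).issubset(B)
               for i in alpha for j in beta)

# Facets of the complex in Figure 2:
F = [{'x4', 'x7', 'a3'}, {'x4', 'x5', 'a1'}, {'x5', 'x6', 'a2'},
     {'x2', 'x3', 'a2'}, {'x1', 'x2', 'a1'}, {'x6', 'x7', 'a1'}]
print(deg(F, (1, 3, 5), 'a1'), deg(F, (2, 4, 6), 'a1'))   # 1 2
print(is_even_walk(F, (1, 3, 5), (2, 4, 6)))              # True
\end{verbatim}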
\begin{remark}
A question which naturally arises here is whether a minimal even walk (an even walk that does not properly contain another even walk) can have repeated facets. The answer is positive: for instance, the bicycle graph in Figure~\ref{fig:bicycle} is a minimal even walk, by Theorem~\ref{them:graphwalk} below, yet it has a pair of repeated edges.
\begin{figure}
\caption{A minimal even walk with repeated facets}
\label{fig:bicycle}
\end{figure}
\end{remark}
\subsection{The structure of even walks}
\begin{prop}[\bf{Structure of even walks}]\label{eqn:luisatheorem}
Let
$C_{\alpha,\beta}=F_{1},F_{2},\dots,F_{2s}$ be an even walk. Then we have
\renewcommand{(\arabic{enumi})}{(\roman{enumi})}
\begin{enumerate}
\item If $i\in \operatorname{Supp}(\alpha)$ (or $i\in\operatorname{Supp}(\beta)$) there exist distinct $j,k\in\operatorname{Supp}(\beta)$ (or $j,k\in\operatorname{Supp}(\alpha)$) such that
\begin{eqnarray}\label{eqn:intersection}
F_i\cap F_{j}\neq \emptyset&\mbox{and}& F_i\cap F_{k}\neq \emptyset.
\end{eqnarray}
\item The simplicial complex $\langle C_{\alpha,\beta}\rangle$ contains an extended trail of even length labeled $F_{v_1},F_{v_2},\dots,F_{v_{2l}}$, where $v_1,v_3,\dots,v_{2l-1}\in\operatorname{Supp}(\alpha)$ and
$v_2,v_4,\dots,v_{2l}\in\operatorname{Supp}(\beta)$.
\end{enumerate}
\end{prop}
\begin{proof}
$(i)$ Let $i\in\operatorname{Supp}(\alpha)$, and consider the following set
\begin{align*}
{\mathcal{A}}_{i}= \{j\in\operatorname{Supp}(\beta): F_{i}\cap F_{j}\neq \emptyset\}.
\end{align*}
We only need to prove that $|{\mathcal{A}}_{i}|\geq 2$.
Suppose $|{\mathcal{A}}_{i}|=0$. Then for all $j\in\operatorname{Supp}(\beta)$ we have
$$F_i\backslash F_j=F_i\subseteq \{x\in\mbox{V}(C_{\alpha,\beta}):deg_{\alpha}(x)>deg_{\beta}(x)\}$$
because for each $x\in F_i\backslash F_j$ we have $deg_{\beta}(x)=0$ and $deg_{\alpha}(x)>0$; a contradiction.
Suppose $|{\mathcal{A}}_{i}|= 1$, so that there is exactly one $j\in\operatorname{Supp}(\beta)$ such that $F_i\cap F_{j}\neq \emptyset$. Then
for every $x\in F_i\backslash F_{j}$ we have $deg_{\beta}(x)=0$. Therefore, we have
$$F_i\backslash F_{j}\subseteq \{x\in \mbox{V}(C_{\alpha,\beta}):deg_{\alpha}(x)>deg_{\beta}(x)\},$$
again a contradiction. So we must have $|{\mathcal{A}}_{i}|\geq 2$.
$(ii)$ Pick $u_1\in \operatorname{Supp}(\alpha)$. By part $(i)$ there are $u_0,u_2\in\operatorname{Supp}(\beta)$, $u_0\neq u_2$, such that
\begin{eqnarray*}
F_{u_0}\cap F_{u_1}\neq\emptyset&\mbox{and}&F_{u_1}\cap F_{u_2}\neq\emptyset.
\end{eqnarray*}
By a similar argument there is $u_3\in\operatorname{Supp}(\alpha)$ such that $u_1\neq u_3$ and $F_{u_2}\cap F_{u_3}\neq\emptyset$.
We continue this process. Pick $u_4\in\operatorname{Supp}(\beta)$ such that
\begin{eqnarray*}
F_{u_4}\cap F_{u_3}\neq \emptyset&\mbox{and}& u_4\neq u_2.
\end{eqnarray*}
If $u_4=u_0$, then $F_{u_0},F_{u_1},F_{u_2},F_{u_3}$ is an extended trail of even length. If not, we continue this process, each time taking
$$F_{u_0},\dots,F_{u_n}$$
and picking $u_{n+1}\in\operatorname{Supp}(\alpha)$ (or $u_{n+1}\in\operatorname{Supp}(\beta)$) if $u_n\in\operatorname{Supp}(\beta)$ (or $u_n\in\operatorname{Supp}(\alpha)$) such that
\begin{eqnarray*}
F_{u_{n+1}}\cap F_{u_{n}}\neq \emptyset&\mbox{and}&u_{n+1}\neq u_{n-1}.
\end{eqnarray*}
If $u_{n+1}\in\{u_0,\dots,u_{n-2}\}$, say $u_{n+1}=u_m$, then the process stops and
$$F_{u_m},F_{u_{m+1}},\dots,F_{u_n}$$
is an extended trail. Its length is even since the indices $u_m,u_{m+1},\dots,u_n$
alternately belong to $\operatorname{Supp}(\alpha)$ and $\operatorname{Supp}(\beta)$ (which are disjoint by our assumption), and if $u_m\in\operatorname{Supp}(\alpha)$, then by construction $u_n\in\operatorname{Supp}(\beta)$ and vice-versa. So there is an even number of such indices and we are done.
If $u_{n+1}\notin \{u_0,\dots,u_{n-2}\}$,
we add it to the end of the sequence and repeat the same process for $F_{u_0},F_{u_1},\dots,F_{u_{n+1}}$. Since $C_{\alpha,\beta}$ has a finite number of facets, this process has to stop.
\end{proof}
\begin{col}\label{col:new}
An even walk has at least $4$ distinct facets.
\end{col}
In Corollary~\ref{col:mine}, we will see that every even walk must contain a simplicial cycle.
\begin{theorem}\label{theorem:goodleaf}
A simplicial forest contains no simplicial even walk.
\end{theorem}
\begin{proof}
Assume the forest $\Delta$ contains an even walk $C_{\alpha,\beta}$ where $\alpha,\beta\in{\mathcal{I}}_{s}$ and $s\geq 2$ is an integer.
Since $\Delta$ is a simplicial forest, so is its subcollection $\langle C_{\alpha,\beta}\rangle$, so by Theorem~\ref{theorem:herzog}
$\langle C_{\alpha,\beta}\rangle$ contains a good leaf $F_0$.
So we can consider the following order on the facets $F_0,\dots,F_q$
of $\langle C_{\alpha,\beta}\rangle$
\begin{eqnarray}\label{eqn:goodleaforder}
F_{q}\cap F_0\subseteq\dots \subseteq F_{2}\cap F_0\subseteq F_{1}\cap F_0.
\end{eqnarray}
Without loss of generality we suppose $0\in\operatorname{Supp}(\alpha)$. Since $\operatorname{Supp}(\beta)\neq \emptyset$, we can pick $j\in\{1,\dots,q\}$ to be the smallest index with $j\in\operatorname{Supp}(\beta)$.
Now if $x\in F_0\backslash F_j$, then by (\ref{eqn:goodleaforder}) we have $deg_{\alpha}(x)\geq 1$ and $deg_{\beta}(x)=0$ (indeed, every $k\in\operatorname{Supp}(\beta)$ satisfies $k\geq j$, so $F_k\cap F_0\subseteq F_j\cap F_0$ and hence $x\notin F_k$), which shows that
$$F_0\backslash F_j\subset\{x\in V(C_{\alpha,\beta}):deg_{\alpha}(x)>deg_{\beta}(x)\},$$
a contradiction.
\end{proof}
\begin{col}\label{col:mine}
Every simplicial even walk contains a simplicial cycle.
\end{col}
An even walk is not necessarily an extended trail, as the following example shows.
\begin{example}\label{ex,counterexample}
Let $\alpha=(1,3,5,7),\beta=(2,4,6,8)$ and $C_{\alpha,\beta}=F_1,\dots,F_8$ as in Figure~\ref{fig:newpicture}. It can easily be seen that $C_{\alpha,\beta}$ is an even walk of distinct facets but
$C_{\alpha,\beta}$ is not an extended trail.
\begin{figure}
\caption{An even walk which is not an extended trail.}
\label{fig:newpicture}
\end{figure}
The main point here is that we do not require that $F_{i}\cap F_{i+1}\neq \emptyset$ in an even walk, which is a necessary condition for
extended trails. For example $F_4\cap F_5= \emptyset$ in this case.
\end{example}
On the other hand, every even-length special cycle is an even walk.
\begin{prop}[\bf{Even special cycles are even walks}] \label{eqn:special cycle}
If $F_1,\dots, F_{2s}$ is a special cycle (under the written order) then it is an even walk under the same order.
\end{prop}
\begin{proof}
Let $\alpha=(1,3,\dots,2s-1)$ and $\beta=(2,4,\dots,2s)$, and set $C_{\alpha,\beta}=F_1,\dots,F_{2s}$. Suppose $C_{\alpha,\beta}$ is not an even walk,
so there are $i\in\operatorname{Supp}(\alpha)$ and $j\in \operatorname{Supp}(\beta)$ such that at least one of the following conditions holds
\begin{eqnarray}\label{eqn:one}
F_i\backslash F_j\subseteq\{x\in \mbox{V}(C_{\alpha,\beta}):deg_{\alpha}(x)>deg_{\beta}(x)\}\\
\nonumber
F_j\backslash F_i\subseteq\{x\in \mbox{V}(C_{\alpha,\beta}):deg_{\alpha}(x)<deg_{\beta}(x)\}.
\end{eqnarray}
Without loss of generality we can assume that the first condition holds.
Pick $h\in\{i-1,i+1\}$ such that $h\neq j$. Then by definition of special cycle there is a vertex $z\in F_i\cap F_{h}$ and $z\notin F_{l}$ for $l\notin \{i,h\}$. In particular,
$z\in F_i\backslash F_j$, but $deg_{\alpha}(z)=deg_{\beta}(z)=1$ which contradicts (\ref{eqn:one}).
\end{proof}
The converse of Proposition~\ref{eqn:special cycle} is not true: not every even walk is a special cycle; see for example Figure~\ref{figure2} or Figure~\ref{fig:newpicture}, which are not
even extended trails.
However, one can show that the converse does hold for even walks with four facets (see~\cite{Alilooee2014}).
\subsection{The case of graphs}
We demonstrate that Definition~\ref{def:scew} in dimension $1$ restricts to closed even walks in graph theory. For more details on the graph theory mentioned in
this section we refer the reader to \cite{west2001}.
\begin{defn}\label{def:graph}
Let $G=(\mbox{V},E)$ be a graph (not necessarily simple) where $\mbox{V}$ is a nonempty set of vertices and $E$ is a set of edges.
A {\bf walk} of length $n$ in $G$ is a list $e_{1},e_{2},\dots,e_{n}$ of not necessarily distinct edges such that
$$e_{i}=\{x_{i},x_{i+1}\}\in E\hspace{.4 in} \mbox{for each $i\in \{1,\dots,n\}$}.$$
A walk is called {\bf closed} if its endpoints are the same, i.e.\ $x_{1}=x_{n+1}$. The length of a walk $\mathcal{W}$ is denoted by
$\ell(\mathcal{W})$. A walk with no repeated edges is called a {\bf trail} and a walk with no repeated vertices or edges is called a {\bf path}.
A closed walk with no repeated vertices or edges, other than the repetition of the starting and ending vertex, is called a {\bf cycle}.
\end{defn}
\begin{lem}[Lemma $1.2.15$ and Remark~$1.2.16$~\cite{west2001}]\label{lem:west}
Let $G$ be a simple graph. Then we have
\begin{itemize}
\item Every closed odd walk contains a cycle.
\item Every closed even walk which has at least one non-repeated edge contains a cycle.
\end{itemize}
\end{lem}
Note that in the graph case the special and simplicial cycles are the ordinary cycles. But extended trails in our definition are not necessarily
cycles, or even trails, in the case of graphs. For instance the graph in Figure~\ref{fig:bergecycle} is an extended trail which is neither a cycle nor a trail,
but it contains a cycle; this is the case in general.
\begin{figure}
\caption{An extended trail which is neither a cycle nor a trail}
\label{fig:bergecycle}
\end{figure}
\begin{theorem}[\bf{Euler's Theorem}, \cite{west2001}]\label{thm:euler}
If $G$ is a connected graph, then the edges of $G$ can be arranged into a closed walk with no repeated edges if and only if the degree of every vertex of $G$ is even.
\end{theorem}
\begin{lem}\label{lem:newlem}
Let $G$ be a simple graph and let $C=e_{i_1},\dots,e_{i_{2s}}$ be a sequence of not necessarily distinct edges of $G$ where $s\geq 2$ and $e_{i}=\{x_{i},x_{i+1}\}$ and $f_i=x_ix_{i+1}$
for $1\leq i\leq 2s$.
Let $\alpha=(i_1,i_3,\dots,i_{2s-1})$ and $\beta=(i_2,i_4,\dots,i_{2s})$. Then
$C$ is a closed even walk if and only if $f_{\alpha}=f_{\beta}$.
\end{lem}
\begin{proof}
$(\Longrightarrow)$ This direction is clear from the definition of closed even walks.
$(\Longleftarrow)$ We can give each repeated edge in $C$ a new label and consider $C$ as a multigraph (a graph in which several edges may be incident to the same two vertices). The condition $f_{\alpha}=f_{\beta}$ implies that
every $x\in\mbox{V}(C)$ has even degree as a vertex of the multigraph $C$; indeed, the degree of $x$ in $C$ equals $deg_{\alpha}(x)+deg_{\beta}(x)=2\,deg_{\alpha}(x)$. Theorem~\ref{thm:euler} implies that $C$ is a closed even walk with no repeated edges. Now we revert back to the
original labeling of the edges of $C$ (so that repeated edges appear again) and then since $C$ has even length we are done.
\end{proof}
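To illustrate Lemma~\ref{lem:newlem} with a small example (the labels below are chosen only for this purpose), let $G$ be the $4$-cycle with edges $e_1=\{x_1,x_2\}$, $e_2=\{x_2,x_3\}$, $e_3=\{x_3,x_4\}$, $e_4=\{x_4,x_1\}$, and let $\alpha=(1,3)$, $\beta=(2,4)$. Then
$$f_{\alpha}=f_1f_3=x_1x_2x_3x_4=f_2f_4=f_{\beta},$$
and indeed $e_1,e_2,e_3,e_4$ is a closed even walk in $G$.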
To prove the main result of this section (Corollary~\ref{them:graphwalk}) we need the following lemma.
\begin{lem}\label{lem,mynewlemma}
Let $C=C_{\alpha,\beta}$ be a connected $1$-dimensional simplicial even walk with $\alpha,\beta\in{\mathcal{I}}_{s}$. If there is $x\in \mbox{V}(C)$ for which $deg_{\beta}(x)=0$
(or $deg_{\alpha}(x)=0$), then we have $deg_{\beta}(v)=0$ (respectively $deg_{\alpha}(v)=0$)
for all $v\in \mbox{V}(C)$.
\end{lem}
\begin{proof}
First we show the following statement.
\begin{eqnarray}\label{stat:new}
e_{i}=\{w_i,w_{i+1}\}\in E(C) &\mbox{and}& deg_{\beta}(w_i)=0\Longrightarrow deg_{\beta}(w_{i+1})=0,
\end{eqnarray}
where $E(C)$ is the edge set of $C$.
Suppose $deg_{\beta}(w_{i+1})\neq 0$. Then there is $e_{j}\in E(C)$ such that $j\in\operatorname{Supp}(\beta)$ and $w_{i+1}\in e_{j}$. On the other hand, since $w_i\in e_i$ and $deg_{\beta}(w_i)=0$ we can
conclude $i\in\operatorname{Supp}(\alpha)$ and thus $deg_{\alpha}(w_i)> 0$. Therefore, we have
$$e_i\backslash e_j=\{w_i\}\subseteq \{z:deg_{\alpha}(z)>deg_{\beta}(z)\},$$
which is a contradiction. So we must have $deg_{\beta}(w_{i+1})=0$.
Now we proceed to the proof of our statement.
Pick $y\in \mbox{V}(C)$ such that $y\neq x$. Since $C$ is connected we can conclude there is a path $\gamma=e_{i_1},\dots,e_{i_t}$ in $C$ in which we have
\begin{itemize}
\item $e_{i_j}=\{x_{i_j},x_{i_{j+1}}\}$ for $j=1,\dots,t$;
\item $x_{i_1}=x$ and $x_{i_{t+1}}=y$.
\end{itemize}
Since $\gamma$ is a path it has neither repeated vertices nor repeated edges. Now note that since $deg_{\beta}(x)=deg_{\beta}(x_{i_1})=0$
and $\{x_{i_1},x_{i_2}\}\in E(C)$ from (\ref{stat:new}) we have $deg_{\beta}(x_{i_2})=0$. By repeating a similar argument we have
\begin{eqnarray*}
deg_{\beta}(x_{i_j})=0&\mbox{for $j=1,2,\dots,t+1$.}
\end{eqnarray*}
In particular we have $deg_{\beta}(x_{i_{t+1}})=deg_{\beta}(y)=0$ and we are done.
\end{proof}
We now show that a simplicial even walk in a graph (considering a graph as a $1$-dimensional simplicial complex)
is a closed even walk in that graph as defined in Definition~\ref{def:graph}.
\begin{theorem}\label{col:newcol}
Let $G$ be a simple graph with edges $e_1,\dots,e_q$. Let
$e_{i_1},\dots,e_{i_{2s}}$ be a sequence of edges of $G$ such that $\left\langle e_{i_1},\dots,e_{i_{2s}}\right\rangle$ is a connected subgraph of
$G$ and $\{i_1,i_3,\dots,i_{2s-1}\}\cap\{i_2,i_4,\dots,i_{2s}\}=\emptyset$. Set $\alpha=(i_1,i_3,\dots,i_{2s-1})$ and $\beta=(i_2,i_4,\dots,i_{2s})$. Then
$e_{i_1},\dots,e_{i_{2s}}$ is a simplicial even walk
if and only if
$$\{x\in \mbox{V}(C_{\alpha,\beta}): deg_{\alpha}(x) > deg_{\beta}(x)\}=\{x\in \mbox{V}(C_{\alpha,\beta}): deg_{\alpha}(x) < deg_{\beta}(x)\}=\emptyset.$$
\end{theorem}
\begin{proof}
($\Longleftarrow$) is clear. To prove the converse we assume that $C_{\alpha,\beta}$ is a simplicial even walk. We only need to show
\begin{eqnarray*}
deg_{\alpha}(x)= deg_{\beta}(x)\hspace{.3 in}\mbox{for all $x\in \mbox{V}(C_{\alpha,\beta})$}.
\end{eqnarray*}
Suppose to the contrary that $deg_{\alpha}(x)\neq deg_{\beta}(x)$ for some $x$; without loss of generality $deg_{\alpha}(x)>deg_{\beta}(x)\geq 0$, so there exists $i\in \operatorname{Supp}(\alpha)$ such that $x\in e_i$. We set
$e_i=\{x,w_1\}$.
Suppose $deg_{\beta}(x)\neq 0$.
We can choose an edge $e_{k}$ in $C_{\alpha,\beta}$ where $k\in \operatorname{Supp}(\beta)$
such that $x\in e_{i}\cap e_{k}$. We consider two cases.
\renewcommand{(\arabic{enumi})}{(\arabic{enumi})}
\begin{enumerate}
\item If $deg_{\beta}(w_1)=0$, then since $deg_{\alpha}(w_1)\geq 1$ we have
$$e_{i}\backslash e_{k}=\{w_1\}\subseteq \{z\in \mbox{V}(G): deg_{\alpha}(z) > deg_{\beta}(z)\},$$
a contradiction.
\item If $deg_{\beta}(w_1)\geq 1$, then there exists $h\in \operatorname{Supp}(\beta)$ with $w_1\in e_{h}$. So we have
$$e_{i}\backslash e_{h}=\{x\}\subseteq \{z\in \mbox{V}(G): deg_{\alpha}(z) > deg_{\beta}(z)\},$$
again a contradiction.
\end{enumerate}
So we must have $deg_{\beta}(x)=0$. By Lemma~\ref{lem,mynewlemma} this implies that $deg_{\beta}(v)=0$ for every $v\in\mbox{V}(C_{\alpha,\beta})$, a contradiction, since
$\operatorname{Supp}(\beta)\neq\emptyset$.
\end{proof}
\begin{col}[\bf{$1$-dimensional simplicial even walks}]\label{them:graphwalk}
Let $G$ be a simple graph with edges $e_1,\dots,e_q$. Let
$e_{i_1},\dots,e_{i_{2s}}$ be a sequence of edges of $G$ such that $\left\langle e_{i_1},\dots,e_{i_{2s}}\right\rangle$ is a
connected subgraph of $G$ and $\{i_1,i_3,\dots,i_{2s-1}\}\cap\{i_2,i_4,\dots,i_{2s}\}=\emptyset$. Then
$e_{i_1},\dots,e_{i_{2s}}$ is a simplicial even walk
if and only if $e_{i_1},\dots,e_{i_{2s}}$ is a closed even walk in $G$.
\end{col}
\begin{proof}
Let $I(G)=(f_1,\dots,f_q)$ be the edge ideal of $G$ and $\alpha=(i_1,i_3,\dots,i_{2s-1})$ and $\beta=(i_2,i_4,\dots,i_{2s})$ so that
$C_{\alpha,\beta}=e_{i_1},\dots,e_{i_{2s}}$. Assume $C_{\alpha,\beta}$ is a closed even walk in $G$.
Then we have
\begin{align*}
f_{\alpha}=\prod_{x\in \mbox{V}(C_{\alpha,\beta})}x^{deg_{\alpha}(x)}=\prod_{x\in \mbox{V}(C_{\alpha,\beta})}x^{deg_{\beta}(x)}=f_{\beta},
\end{align*}
where the second equality follows from Lemma~\ref{lem:newlem}.
So for every $x\in \mbox{V}(C_{\alpha,\beta})$ we have $deg_{\alpha}(x)=deg_{\beta}(x)$. In other words we have
$$\{x\in \mbox{V}(C_{\alpha,\beta}): deg_{\alpha}(x) > deg_{\beta}(x)\}=\{x\in \mbox{V}(C_{\alpha,\beta}): deg_{\alpha}(x) < deg_{\beta}(x)\}=\emptyset,$$
and therefore we can say $C_{\alpha,\beta}$ is a simplicial even walk.
The converse follows directly from Theorem~\ref{col:newcol} and Lemma~\ref{lem:newlem}.
\end{proof}
We need the following proposition in the next sections.
\begin{prop}\label{prop:new}
Let $C_{\alpha,\beta}$ be a $1$-dimensional even walk, and $\langle C_{\alpha,\beta}\rangle=G$. Then every vertex of
$G$ has degree $>1$. In particular, $G$ is either an even cycle or contains
at least two cycles. \end{prop}
\begin{proof}
Suppose $G$ contains a vertex $v$ of degree $1$. Without
loss of generality we can assume $v\in e_{i}$ where $i\in\operatorname{Supp}(\alpha)$. So $deg_{\alpha}(v)=1$ and from Theorem~\ref{col:newcol}
we have $deg_{\beta}(v)=1$. Therefore, there is $j\in\operatorname{Supp}(\beta)$ such that $v\in e_j$. Since $deg(v)=1$ we must have $i=j$, a contradiction since
$\operatorname{Supp}(\alpha)$ and $\operatorname{Supp}(\beta)$ are disjoint.
Note that by Corollary~\ref{col:mine} $G$ contains a cycle. Now we show that $G$ contains at least two distinct cycles or it is an even cycle.
Suppose $G$ contains
only one cycle $C_n$. Then removing the edges of $C_n$
leaves a forest of $n$ components. Since every vertex of $G$ has degree $>1$,
each of these components must be a singleton graph (a null graph with only one vertex). So $G=C_n$. Therefore, by Corollary~\ref{them:graphwalk} and the fact
that $\operatorname{Supp}(\alpha)$ and $\operatorname{Supp}(\beta)$ are disjoint, $n$ must be even.
\end{proof}
\section{A necessary condition for a squarefree monomial ideal to be of linear type}
We are ready to state one of the main results of this paper which is a combinatorial method to detect irredundant Rees equations of squarefree monomial ideals.
We first show that these Rees equations come from even walks.
\begin{lem}\label{eqn:computationallemma}
Let $I=(f_1,\dots,f_q)$ be a squarefree monomial ideal in the polynomial ring $R$. Suppose $s,t,h$ are integers with $s\geq 2$, $1\leq h\leq q$ and $1\leq t \leq s$.
Let $0\neq\gamma\in R$, $\alpha=(i_1,\dots,i_s),\beta=(j_1,\dots,j_s)\in {\mathcal{I}}_s$. Then
\renewcommand{(\arabic{enumi})}{(\roman{enumi})}
\begin{enumerate}
\item $\operatorname{lcm}(f_{\alpha},f_{\beta})=\gamma f_{h}\widehat{f}_{\alpha_{t}}\Longleftrightarrow
T_{\alpha,\beta}=\lambda\widehat{T}_{\alpha_t} T_{(i_t),(h)}+\mu T_{\alpha_{t}(h),\beta}$ for some monomials $\lambda,\mu\in R$, $\lambda\neq 0$.
\item $\operatorname{lcm}(f_{\alpha},f_{\beta})=\gamma f_{h}\widehat{f}_{\beta_{t}}\Longleftrightarrow
T_{\alpha,\beta}=\lambda\widehat{T}_{\beta_t} T_{(h),(j_t)}+\mu T_{\alpha,\beta_t(h)}$ for some monomials $\lambda,\mu\in R$, $\lambda\neq 0$.
\end{enumerate}
\end{lem}
\begin{proof}
We only prove $(i)$; the proof of $(ii)$ is similar.
First note that if $h=i_t$ then $(i)$ becomes
$$\operatorname{lcm}(f_{\alpha},f_{\beta})=\gamma f_{\alpha}\Longleftrightarrow
T_{\alpha,\beta}=T_{\alpha,\beta}\hspace{.2 in}\text{(Setting $\mu=1$)}$$
and we have nothing to prove, so we assume that $h\neq i_t$.
If we have $\operatorname{lcm}(f_{\alpha},f_{\beta})=\gamma f_{h}\widehat{f}_{\alpha_{t}}$, then the monomial $\gamma f_{h}$ is divisible by $f_{i_t}$,
so there exists a nonzero monomial
$\lambda\in R$ such that
\begin{eqnarray}\label{eqn:equation4}
\lambda\operatorname{lcm}(f_{i_t},f_{h})=\gamma f_{h}.
\end{eqnarray}
It follows that
\begin{eqnarray}\label{eqn:T}
\nonumber
\displaystyle T_{\alpha,\beta}&=&\displaystyle\left(\frac{\operatorname{lcm}(f_{\alpha},f_{\beta})}{f_{\alpha}}\right)T_{\alpha}-
\left(\frac{\operatorname{lcm}(f_{\alpha},f_{\beta})}{f_{\beta}}\right)T_{\beta}=
\displaystyle \left(\frac{\gamma f_h}{f_{i_t}}\right)T_{\alpha}-\left(\frac{\operatorname{lcm}(f_{\alpha},f_{\beta})}{f_{\beta}}\right)T_{\beta}\\
\nonumber
\\
T_{\alpha,\beta}&=&\displaystyle\lambda\widehat{T}_{\alpha_t}T_{(i_t),(h)}+\left(\frac{\lambda\operatorname{lcm}(f_{i_t},f_{h})}{f_{h}}\right)
{T}_{\alpha_t(h)}-\left(\frac{\operatorname{lcm}(f_{\alpha},f_{\beta})}{f_{\beta}}\right)T_{\beta}.
\end{eqnarray}
On the other hand note that since we have
\begin{eqnarray}\label{eqn:equation5}
\operatorname{lcm}(f_{\alpha},f_{\beta})=\gamma f_{h}\widehat{f}_{\alpha_{t}}=\gamma f_{\alpha_t(h)},
\end{eqnarray}
we see $\operatorname{lcm}(f_{\alpha_{t}(h)},f_{\beta})$ divides $\operatorname{lcm}(f_{\alpha},f_{\beta})$. Thus there exists a monomial $\mu\in R$ such that
\begin{equation}\label{eqn:13}
\operatorname{lcm}(f_{\alpha},f_{\beta})=\mu\operatorname{lcm}(f_{\alpha_{t}(h)},f_{\beta}).
\end{equation}
By (\ref{eqn:equation4}), (\ref{eqn:equation5}) and (\ref{eqn:13}) we have
\begin{eqnarray}
\frac{\lambda\operatorname{lcm}(f_{i_t},f_{h})}{f_{h}}=\frac{\lambda\operatorname{lcm}(f_{i_t},f_{h})\widehat{f}_{\alpha_t}}{f_{\alpha_{t}(h)}}=
\frac{\gamma f_h\widehat{f}_{\alpha_t}}{f_{\alpha_{t}(h)}}=
\frac{\operatorname{lcm}(f_{\alpha},f_{\beta})}{f_{\alpha_{t}(h)}}=
\frac{\mu\operatorname{lcm}(f_{\alpha_{t}(h)},f_{\beta})}{f_{\alpha_{t}(h)}}.\label{eqn:newsara}
\end{eqnarray}
Substituting (\ref{eqn:13}) and (\ref{eqn:newsara}) in (\ref{eqn:T}) we get
$$T_{\alpha,\beta}=\lambda\widehat{T}_{\alpha_t}T_{(i_t),(h)}+\mu T_{\alpha_{t}(h),\beta}.$$
For the converse, since $h\neq i_t$, by comparing coefficients we have
\begin{eqnarray*}
\frac{\operatorname{lcm}{(f_{\alpha},f_{\beta})}}{f_{\alpha}}=\lambda\left(\frac{\operatorname{lcm}(f_{i_t},f_h)}{f_{i_t}} \right)=\lambda \prod_{x\in F_{h}\backslash F_{i_t}}x&\Longrightarrow&
\operatorname{lcm}{(f_{\alpha},f_{\beta})}=\lambda\left(\prod_{x\in F_{h}\backslash F_{i_t}}x\right)f_{\alpha}\\
&\Longrightarrow& \operatorname{lcm}{(f_{\alpha},f_{\beta})}=\lambda_0 f_{h}\hat{f}_{\alpha_t}
\end{eqnarray*}
where $0\neq \lambda_0\in R$. This concludes our proof.
\end{proof}
Now we show that there is a direct connection between redundant Rees equations and the above lemma.
\begin{theorem}\label{eqn:maintheorem}
Let $\Delta=\left\langle F_1,\dots,F_q\right\rangle $ be a simplicial complex, $\alpha,\beta\in {\mathcal{I}}_{s}$ and $s\geq 2$ an integer.
If $C_{\alpha,\beta}$
is not an even walk then
$$T_{\alpha,\beta}\in J_1S+J_{s-1}S.$$
\end{theorem}
\begin{proof}
Let $I=(f_1,\dots,f_q)$ be the facet ideal of $\Delta$ and let $\alpha=(i_1,\dots,i_s),\beta=(j_1,\dots,j_s)\in {\mathcal{I}}_{s}$.
If $C_{\alpha,\beta}$ is not an even walk, then by
Definition~\ref{def:scew} there exist
$i_t\in \operatorname{Supp}(\alpha)$ and $j_l\in \operatorname{Supp}(\beta)$ such that one of the following is true
\renewcommand{(\arabic{enumi})}{(\arabic{enumi})}
\begin{enumerate}
\item $F_{j_l}\backslash F_{i_t}\subseteq \{x\in \mbox{V}(\Delta): deg_{\alpha}(x) < deg_{\beta}(x)\}$;
\item $F_{i_t}\backslash F_{j_l}\subseteq \{x\in \mbox{V}(\Delta): deg_{\alpha}(x) > deg_{\beta}(x)\}$.
\end{enumerate}
Suppose $(1)$ is true. Then there exists a monomial $m\in R$ such that
\begin{eqnarray}\label{eqn:useful}
\frac{\operatorname{lcm}{(f_{\alpha},f_{\beta})}}{f_{\alpha}}=\prod_{deg_{\beta}(x) > deg_{\alpha}(x)} x^{deg_{\beta}(x)-deg_{\alpha}(x)}=m \prod_{x\in F_{j_l}\backslash F_{i_t}}x.
\end{eqnarray}
So we have
$$\operatorname{lcm}{(f_{\alpha},f_{\beta})}= mf_{\alpha} \prod_{x\in F_{j_l}\backslash F_{i_t}}x =m_0f_{j_l}\widehat{f}_{\alpha_{t}}$$
where $m_0\in R$. On the other hand by Lemma~\ref{eqn:computationallemma} there exist monomials $0\neq\lambda,\mu\in R$ such that
\begin{eqnarray*}
T_{\alpha,\beta}&=&\lambda\widehat{T}_{\alpha_t}T_{(i_t),(j_l)}+\mu T_{\alpha_t(j_l),\beta}\\
&=&\lambda\widehat{T}_{\alpha_t}T_{(i_t),(j_l)}+\mu T_{j_l} T_{\widehat{\alpha}_{t},\widehat{\beta}_{l}}\in J_1S+J_{s-1}S\hspace{.2 in}(\mbox{since $j_l\in\operatorname{Supp}(\beta)$}).
\end{eqnarray*}
If case $(2)$ holds, a similar argument settles our claim.
\end{proof}
\begin{col}\label{eqn:maintheorem2}
Let $\Delta=\left\langle F_1,\dots,F_q\right\rangle $ be a simplicial complex
and $s\geq 2$ be an integer. Then we have
$$J=J_1S+\left(\bigcup_{i=2}^{\infty}P_{i}\right)S$$
where $P_{i}=\{T_{\alpha,\beta}:\alpha,\beta\in{\mathcal{I}}_{i}\hspace{.05 in}\mbox{ and $C_{\alpha,\beta}$ is an even walk}\}$.
\end{col}
\begin{theorem}[{\bf Main Theorem}]\label{col:linear}
Let $I$ be a squarefree monomial ideal in $R$ and suppose the facet complex of $I$ has no even walk. Then $I$ is of linear type.
\end{theorem}
The following result can also be deduced by combining Theorem 1.14 in \cite{jahan2012} and Theorem 2.4 in \cite{conca1999}.
In our case, it follows directly from Theorem~\ref{col:linear} and Theorem~\ref{theorem:goodleaf}.
\begin{col}
The facet ideal of a simplicial forest is of linear type.
\end{col}
The converse of Theorem~\ref{eqn:maintheorem} is not true in general, as the following example shows.
\begin{example}\label{example:goodexample} Let $\alpha=(1,3),\beta=(2,4)$. In Figure~\ref{figure1} we see that $C_{\alpha,\beta}=F_1,F_2,F_3,F_4$ is an even walk
but
we have
$$T_{\alpha,\beta}=x_4x_8 T_1T_3-x_1x_6 T_2T_4=x_8T_3(x_4T_1-x_2T_5)+T_5(x_2x_8T_3-x_5x_6T_4)+x_6T_4(x_5T_5-x_1T_2)\in J_1S.$$
\begin{figure}\label{figure1}
\end{figure}
\end{example}
By Theorem~\ref{eqn:maintheorem}, all irredundant generators of $J$ of degree $>1$ correspond to even walks. However,
irredundant generators of $J$ do not necessarily correspond to minimal even walks in $\Delta$ (even walks that do not properly contain other even walks). For instance $C_{(1,3,5),(2,4,6)}$
as displayed in Figure~\ref{figure2} is an even walk which is not minimal (since $C_{(3,5),(2,4)}$ and $C_{(1,5),(2,6)}$ are even walks which are properly contained in
$C_{(1,3,5),(2,4,6)}$). But $T_{(1,3,5),(2,4,6)}\in J$ is an irredundant generator of $J$.
We can now state a simple necessary condition for a simplicial complex to be of linear type in terms of its line graph.
\begin{defn}
Let $\Delta=\left\langle F_1,\dots, F_n\right\rangle $ be a simplicial complex. The {\bf line graph} $L(\Delta)$ of $\Delta$
is a graph whose vertices are labeled with the facets of $\Delta$, and two vertices labeled $F_i$ and $F_j$ are adjacent if and only if $F_{i}\cap F_j\neq \emptyset$.
\end{defn}
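Note that when $\Delta$ is a graph $G$ (viewed as a $1$-dimensional simplicial complex), two distinct edges intersect exactly when they share an endpoint, so $L(\Delta)$ is the usual line graph of $G$ from graph theory.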
\begin{theorem}[\bf{A simple test for linear type}]\label{eqn:mycol}
Let $\Delta$ be a simplicial complex and suppose $L(\Delta)$ contains no even cycle. Then the facet ideal $F(\Delta)$ is of linear type.
\end{theorem}
\begin{proof}
We show that $\Delta$ contains no even walk $C_{\alpha,\beta}$. Otherwise by Proposition~\ref{eqn:luisatheorem}
$C_{\alpha,\beta}$ contains an even extended trail $B$, and $L(B)$ is then an even cycle contained in $L(\Delta)$, which is a
contradiction. Theorem~\ref{col:linear} settles our claim.
\end{proof}
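As a quick illustration of Theorem~\ref{eqn:mycol} (the complex below does not appear in our figures and is chosen only for this purpose), let $\Delta=\left\langle \{x_1,x_2,x_3\},\{x_3,x_4,x_5\},\{x_5,x_6,x_1\}\right\rangle$. Its three facets pairwise intersect, so $L(\Delta)$ is a triangle, which contains no even cycle; hence the facet ideal $F(\Delta)=(x_1x_2x_3,x_3x_4x_5,x_5x_6x_1)$ is of linear type.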
Theorem~\ref{eqn:mycol} generalizes results of Lin and Fouli~\cite{fouli2013}, where they showed that if $L(\Delta)$ is a tree or an odd cycle then the facet ideal of $\Delta$ is of linear type.
It must be noted, however, that the converse of Theorem~\ref{eqn:mycol} is not true. Below is an example of an ideal of linear type whose line graph contains an even cycle.
\begin{example}
In the following simplicial complex $\Delta$, $L(\Delta)$ contains an even cycle but its facet ideal $F(\Delta)$ is of
linear type.
\end{example}
By applying Theorem~\ref{col:linear} and Proposition~\ref{prop:new} we conclude the following statement, which was originally proved by Villarreal in \cite{Villarreal1995}.
\begin{col}\label{col:vill}
Let $G$ be a graph which is either a tree or contains a unique cycle and that cycle is odd. Then the edge ideal $I(G)$ is of linear type.
\end{col}
\section*{Acknowledgments}
This paper was prepared while the authors were visiting MSRI. We are grateful to Louiza Fouli, Elizabeth Gross and Elham Roshanbin for further discussions,
Jonathan Montano and Jack Jeffries for feedback and
to MSRI for their hospitality. We would also like to thank the anonymous referee for his or her helpful comments. The results of this paper were obtained
with the aid of the computer algebra software Macaulay2~\cite{Macaulay2}.
\end{document} |
\begin{document}
\author{Adrian Langer}
\date{\today}
\subjclass[2010]{Primary 14D20, Secondary 14G17, 14J60, 17B55}
\title{Moduli spaces of semistable modules over Lie algebroids}
{\sc Address:}\\
Institute of Mathematics, University of Warsaw,
ul.\ Banacha 2, 02-097 Warszawa, Poland\\
e-mail: {\tt [email protected]}
\begin{abstract}
We show a few basic results about moduli spaces of semistable modules over Lie algebroids.
The first result shows that such moduli spaces exist for relative projective morphisms of noetherian schemes, removing some earlier constraints. The second result proves a general separatedness theorem of Langton type for such moduli spaces. More precisely, we prove S-completeness of some moduli stacks of semistable modules.
In some special cases this result identifies closed points of the moduli space of Gieseker semistable sheaves on a projective scheme and of the Donaldson--Uhlenbeck compactification of the moduli space of slope stable locally free sheaves on a smooth projective surface.
The last result generalizes properness of Hitchin's morphism and shows properness of the so-called Hodge--Hitchin morphism, defined in positive characteristic on the moduli space of Gieseker semistable integrable $t$-connections in terms of the $p$-curvature morphism. This last result was proven in the curve case by de Cataldo and Zhang using completely different methods.
\end{abstract}
\section*{Introduction}
In this paper we continue the study of relative moduli spaces of semistable modules over Lie algebroids started in \cite{La}. The aim is to show three theorems about such moduli spaces. The first result says that such moduli spaces exist in a larger and more natural class of schemes than previously claimed. Namely,
\cite[Theorem 1.1]{La} asserts existence of such moduli spaces for projective families over a base of finite type over a universally Japanese ring. Here we relax these assumptions and prove existence of moduli spaces for projective families over any noetherian base (see Theorem \ref{general-moduli}).
The second result concerns \cite[Theorem 5.2]{La}, whose proof was omitted in \cite{La}. Here we prove the following much stronger version of this theorem:
\begin{Theorem}\label{main2}
Let $R$ be a discrete valuation ring with fraction field $K$ and residue field $k$. Let $X\to S=\mathop{\rm Spec \, } R$
be a projective morphism and let us fix a relatively ample line bundle on $X/S$. Let $L$ be a smooth ${\mathcal O}_S$-Lie algebroid on $X$ and let $E_1$ and $E_2$ be $L$-modules, which as ${\mathcal O}_X$-modules are coherent of relative dimension $d$ and flat over $S$. Assume that there exists an isomorphism $\varphi: (E_1)_K\to (E_2)_K$ of $L_K$-modules.
Then we have the following implications:
\begin{enumerate}
\item If $(E_1)_k$ and $(E_2)_k$ are Gieseker semistable then they are S-equivalent.
\item If $(E_1)_k$ and $(E_2)_k$ are Gieseker polystable then they are isomorphic.
\item If $(E_1)_k$ is stable and $(E_2)_k$ is Gieseker semistable then the
$L$-modules $E_1$ and $E_2$ are isomorphic.
\end{enumerate}
\end{Theorem}
This theorem is a strong generalization of Langton's \cite[Theorem, p.~99]{Lt} and it implies separatedness of the moduli space of Gieseker semistable modules over a smooth Lie algebroid. Whereas this result follows from the GIT construction
of such moduli spaces, we prove a much more general result (see Theorem \ref{slope-Langton2} or below for a simple example) that cannot be obtained in this way.
It is well known that S-equivalent sheaves correspond to the same point in the moduli space (see, e.g., \cite[Lemma 4.1.2]{HL}). So the above theorem implies that points of the moduli space of (Gieseker) semistable modules over a smooth Lie algebroid (or, in the special case, points of the moduli space of semistable sheaves) correspond to S-equivalence classes of semistable modules. This was the last missing step in Faltings's non-GIT construction of the moduli space of semistable sheaves on a semistable curve (see \cite[the last paragraph on p.~509]{Fa}). A different approach to this problem for the moduli stack of semistable vector bundles over a smooth projective curve defined over a field of characteristic zero was recently developed in \cite{AHLH} (see \cite[Lemma 8.4]{AHLH}). In their language, we prove an even stronger result than Theorem \ref{main2}, saying that the moduli stack of Gieseker semistable modules over a smooth Lie algebroid on a projective scheme is S-complete (see Theorem \ref{S-completness}). In case $k$ has characteristic zero this result follows from \cite[Proposition 3.47]{AHLH} and separatedness of the moduli space of Gieseker semistable modules but it is not so in positive characteristic. Recently, D. Greb and M. Toma pointed out to the author that in \cite[Section 4]{GT2} they gave a direct proof of a Langton-type separatedness criterion for Gieseker semistable sheaves. In their case the result does not follow from the GIT construction as they consider moduli spaces on complex projective varieties with (possibly non-ample) K\"ahler polarizations.
The original motivation to reconsider this problem was provided by Chen\-yang Xu during his talk on the ZAG seminar. Namely, when presenting the proof of an analogous result for ${\mathbb Q}$-Gorenstein log Fano varieties (see \cite[Theorem 1.1]{BX}), he said that there is no known direct proof of this fact for Gieseker semistable sheaves (as pointed out above, this was in fact known and proven in \cite{GT2}).
The proof that we provide here is modelled on Gabber's proof of an analogous fact for vector bundles with integrable connections on smooth complex varieties (see \cite[Variant 2.5.2]{Ka}). The differences come mainly from the fact that coherent modules with an integrable connection on complex varieties are locally free, whereas we need to study flatness and semistability of various modules appearing in the proof. Another difference is that irreducibility of vector bundles with an integrable connection corresponds to stability in our case.
Theorem \ref{main2} is stated for Gieseker semistability but we prove a much more general result that works also, e.g., for slope semistability. In this introduction we will formulate this result only in the simplest possible case, leaving the full generalization to Theorem \ref{slope-Langton2}.
Before formulating the result we need to slightly change the usual notion of slope semistability to allow non-torsion free sheaves.
Let $Y$ be a smooth projective scheme defined over an algebraically closed field $k$ and let us fix an ample polarization. Let $E$ be a coherent sheaf on $Y$ and let $T(E)$ denote the torsion part of $E$.
We say that $E$ is \emph{slope semistable} if $c_1 (T(E))=0$ and $E/T(E)$ is slope semistable in the usual sense (we allow $E/T(E)$ to be trivial).
We say that two slope semistable sheaves $E$ and $E'$ are \emph{strongly S-equivalent} if there exist filtrations $F_{\bullet }E$ of $E$ and $F'_{\bullet} E'$ of $E'$ such that the associated graded sheaves $\mathop{\rm Gr} ^{F}(E)$ and $ \mathop{\rm Gr} ^{F'}(E')$ are slope semistable and isomorphic to each other (see Definition \ref{semistable-filtr} and Lemma \ref{cor-Langton}). One can show that this induces an equivalence relation on slope semistable sheaves (see Corollary \ref{equivalence-relation}).
If $F_{\bullet }E$ is a filtration of $E$ then the Rees construction provides a deformation of $E$ to $\mathop{\rm Gr} ^{F}(E)$. In particular, strongly S-equivalent sheaves should correspond to the same point in the ``moduli space of slope semistable sheaves''. The following result describes closed points of such a ``moduli space of slope semistable sheaves''.
\begin{Theorem}\label{main3}
Let $R$ be a discrete valuation ring with fraction field $K$ and residue field $k$.
Let $E_1$ and $E_2$ be flat families of slope semistable sheaves on $Y$ parametrized by $S=\mathop{\rm Spec \, } R$.
If there exists an isomorphism $\varphi: (E_1)_K\to (E_2)_K$ then we have the following implications:
\begin{enumerate}
\item If $(E_1)_k$ and $(E_2)_k$ are slope semistable then they are strongly S-equivalent.
\item If $(E_1)_k$ and $(E_2)_k$ are slope polystable then they are isomorphic.
\item If $(E_1)_k$ is slope stable and torsion free and $(E_2)_k$ is slope semistable then the families $E_1$ and $E_2$ are isomorphic.
\end{enumerate}
\end{Theorem}
One can check that in the surface case strong S-equivalence classes correspond to closed points of the Donaldson--Uhlenbeck compactification of the moduli space (see Proposition \ref{Donaldson-Uhlenbeck}). In higher dimensions (in the characteristic zero case) there also exists an analogous construction of a projective ``moduli space of slope semistable sheaves'' due to D. Greb and M. Toma (see \cite{GT}). However, their moduli space identifies many
strong S-equivalence classes (see Example \ref{higher-dim-example}). So unlike in the surface case, closed points of their moduli spaces cannot be recovered by looking at families of (torsion free) slope semistable sheaves.
As in the previous case, the proof of Theorem \ref{main3} shows that the moduli stack of slope semistable
sheaves on $Y$ is S-complete. It was already known that the moduli stack of torsion free slope semistable sheaves is of finite type (see \cite{La0}) and universally closed (see \cite{Lt}). However, this stack is not S-complete (and even in characteristic zero it does not have a good moduli space, as automorphism groups of slope polystable sheaves need not be reductive). Allowing some torsion, we enlarge this stack so that it becomes S-complete and the stack of torsion free slope semistable sheaves is open in this new stack. Unfortunately, the obtained stack is no longer of finite type and it is not universally closed.
The last result was motivated by a question of Mark A. de Cataldo asking the author about properness of the so called Hodge--Hitchin morphism. In \cite{dCZ} the authors proved such properness by finding a projective completion of the moduli space of $t$-connections in positive characteristic. Here we reprove this result in a more general setting using Langton's type theorem for restricted Lie algebroids. In fact, for a general restricted Lie algebroid the $p$-curvature defines two different proper maps but with similar proofs of properness. The main difficulty is to find an extension of a semistable module over a restricted Lie algebroid from the general fiber to the special fiber.
In general, this is not possible and it fails, e.g., for vector bundles with integrable connection over complex varieties. However, using special characteristic $p$ features one can prove an appropriate result using arguments similar to properness of the Hitchin morphism.
Then we use \cite{BMR} to show that in the case of a special Lie algebroid related to the relative tangent bundle, the two morphisms are related and we give a precise construction of the Hodge--Hitchin morphism in higher dimensions. This implies the following result (see Corollary \ref{Hodge-Hitchin-properness}).
\begin{Theorem}
Let $f : X\to S$ be a smooth morphism of noetherian schemes of characteristic $p$ and let $P$ be a Hilbert polynomial
of rank $r$ sheaves on the fibers of $f$. Then
the Hodge--Hitchin morphism
$$M_{Hod} (X/S, P)\to \left( \bigoplus _{i=1}^r f_* \left(S^i \Omega_{X/S}\right) \right) \times {\mathbb A}^1$$
is proper.
\end{Theorem}
In the above theorem $M_{Hod} (X/S, P)$ denotes the moduli space of Gieseker semistable modules with an integrable $t$-connection and Hilbert polynomial $P$.
The paper is organized as follows. In Section 1 we prove general existence theorem for moduli spaces of semistable modules over Lie algebroids. In Section 2 we define and study strong S-equivalence. In Section 3 we prove Theorems \ref{main2} and \ref{main3}. In Section 4 we recall some facts about the $p$-curvature of modules over restricted Lie algebroids. Then in Section 5 we prove the results related to properness of the Hodge--Hitchin morphism in all
dimensions.
\subsection*{Notation}
Let ${\mathcal A}$ be an abelian category and let $E$ be an object of ${\mathcal A}$.
All filtrations $F_{\bullet}E$ in the paper are finite and increasing. In particular, they start with the zero object and finish with $E$.
We say that a filtration $F'_{\bullet}E$ of $E$ is a \emph{refinement} of a filtration $F_{\bullet}E$ if for every $F_iE$ there exists $j$ such that $F_{j}'E=F_iE$. In this case we say that $F_{\bullet}E$ is \emph{refined} to $F'_{\bullet}E$.
\section{Moduli space of semistable $\Lambda$-modules}
In this section we generalize the result on existence of moduli spaces of semistable modules
for projective families over a base of finite type over a universally Japanese ring to
an optimal setting of noetherian schemes. This generalization is obtained by replacing
the use of Seshadri's results \cite{Se} on GIT quotients with a more modern technology due
to V. Franjou and W. van der Kallen \cite{FvdK}.
Let $f: X\to S$ be a projective morphism of noetherian schemes and let ${\mathcal O}_X(1)$ be an $f$-very ample line bundle. Let $\Lambda$ be a sheaf of rings of differential operators on $X$ over $S$ (see \cite[Section 1]{La} for the definition).
Let $T$ be an $S$-scheme. A \emph{family of $\Lambda$-modules on the fibres of} $p_T:X_T=X\times_S T\to T$
(or a \emph{family of $\Lambda$-modules on $X$ parametrized by $T$})
is a $\Lambda_T$-module $E$ on $X_T$, which is quasi-coherent, locally finitely presented and $T$-flat as an ${\mathcal O}_{X_T}$-module. We say that $E$ is a \emph{family of Gieseker semistable $\Lambda$-modules on the fibres of} $p_T:X_T=X\times_S T\to T$ if $E$ is a family of $\Lambda$-modules on the fibres of $p_T$
and for every geometric point $t$ of $T$ the restriction of $E$ to the fibre $X_t$ is pure and Gieseker semistable as a $\Lambda_t$-module.
We introduce an equivalence relation $\sim $ on such families by
saying that $E\sim E'$ if and only if there exists an invertible
${\mathcal O}_T$-module $L$ such that {$E'\simeq E\otimes p_T^* L$.}
One defines the moduli functor
$${\underline M} ^{\Lambda}(X/S, P) : (\hbox{\rm Sch/}S) ^{o}\to \hbox{Sets} $$
from the category of schemes over $S$ to the
category of sets by
$${\underline M} ^{\Lambda}(X/S, P) (T)=\left\{
\aligned
&\sim\hbox{-equivalence classes of families of Gieseker}\\
&\hbox{semistable $\Lambda$-modules $E$ on the fibres of } X_T\to T,\\
&\hbox{such that for every point $t\in T$ the Hilbert polynomial}\\
&\hbox{of $E$ restricted to the fiber $X_t$ is equal to $P$.}\\
\endaligned
\right\} .$$
The reason to define the moduli functor on the category of all $S$-schemes, instead of locally noetherian $S$-schemes as in \cite{HL} or \cite{Ma}, is that the moduli stack needs to be defined in that generality and one wants to relate the moduli space to the moduli stack.
We have the following theorem generalizing earlier results of
C. Simpson, the author and many others (see \cite[Theorem 1.1]{La}).
\begin{Theorem} \label{general-moduli}
Let us fix a polynomial $P$. Then there exists a quasi-projective
$S$-scheme $M ^{\Lambda}(X/S, P)$ of finite type over $S$ and a
natural transformation of functors
$$\varphi :{\underline M} ^{\Lambda}(X/S, P)\to {\mathop{{\rm Hom}}} _S (\cdot, M ^{\Lambda}(X/S, P)),$$
which uniformly corepresents the functor ${\underline M} ^{\Lambda}(X/S, P)$.
For every geometric point $s\in S$ the induced map $\varphi (s)$
is a bijection. Moreover, there is an open scheme $M ^{\Lambda,
s}(X/S, P)\subset M ^{\Lambda}(X/S, P)$ that universally
corepresents the subfunctor of families of geometrically Gieseker
stable $\Lambda$-modules.
\end{Theorem}
\begin{proof}
Here we only sketch the changes that one needs to make in general.
The full proof in case $\Lambda ={\mathcal O}_X$ and $S$ is of finite type over
a universally Japanese ring is written down in the book \cite{Ma} (see \cite[Chapter 3, Theorem 9.4]{Ma} and \cite[Appendix A, Theorem 1.4]{Ma}). One of the differences between our theorem and the approach presented in \cite{Ma} is that our moduli functor is defined on all $S$-schemes and not only on locally noetherian $S$-schemes as, e.g., in \cite{Ma}. This requires a slightly different definition of families in the case of non-locally noetherian schemes, which comes from a now standard approach taken in the construction of Quot schemes (see, e.g., \cite[Theorem 1.5.4]{Ol}).
The boundedness of semistable sheaves with fixed Hilbert polynomial $P$
(see \cite{La0})
allows one to consider an open subscheme $R$ of some Quot scheme, whose geometric points contain as quotients all semistable sheaves with fixed $P$. This $R$ comes with a group action of a certain $\mathop{\rm GL} (V)$ and one constructs the moduli scheme $M ^{\Lambda}(X/S, P)$ as a GIT quotient of $R$ by $\mathop{\rm GL}(V)$.
The only place where one uses that $S$ is of finite type over a universally Japanese ring is via Seshadri's \cite[Theorem 4]{Se} on existence of quotients of finite type (see \cite[Chapter 3, Theorem 2.9]{Ma}). Here one should point out that Seshadri's definition of a universally Japanese ring is not exactly the same as the one currently used, as he also assumes noetherianity (see \cite[Theorem 2]{Se}). So the notion of a universally Japanese ring used in the formulation of existence of the moduli scheme is nowadays usually called a Nagata ring.
Let us recall that a reductive group scheme over a scheme $S$ (in the sense of SGA3) is a smooth affine group scheme over $S$ with geometric fibers that are connected and reductive.
We need to replace Seshadri's theorem by the following result (we quote only the most important properties)
that follows from \cite[Theorems 8 and 17]{vdK} (see also \cite[Theorems 3 and 12]{FvdK}):
\begin{Theorem}
Let $G$ be a reductive group scheme over a noetherian scheme $S$. Assume that $G$ acts on a projective $S$-scheme $f: X\to S$ and there exists a $G$-linearized $f$-very ample line bundle $L$ on $X$.
Then the following hold.
\begin{enumerate}
\item There is a $G$-stable open $S$-subscheme $X^{ss}(L)\subset X$, whose geometric points
are precisely the semistable points.
\item There is a $G$-invariant affine surjective morphism $\varphi: X^{ss}(L)\to Y$ of $S$-schemes and we have ${\mathcal O}_Y= \varphi _* ({\mathcal O}_{X^{ss}(L)})^G$.
\item $Y$ is projective over $S$.
\end{enumerate}
\end{Theorem}
The proof for general $\Lambda$ is analogous to that given in \cite{Si2}.
\end{proof}
\begin{Remarks}
\noindent
\begin{enumerate}
\item In \cite[Theorem 4.3.7]{HL} the authors add an assumption that $f: X\to S$ has geometrically connected fibers. This assumption is unnecessary.
\item Let us recall that $M ^{\Lambda}(X/S, P)$ {\it uniformly corepresents} ${\underline M} ^{\Lambda}(X/S, P)$ if for every flat morphism $T\to M ^{\Lambda}(X/S, P)$ the fiber product $ T\times _{M ^{\Lambda}(X/S, P)}{\underline M} ^{\Lambda}(X/S, P)$ is corepresented by $T$.
The definition given in \cite[p.~512]{La} is incorrect.
\item The need to consider moduli spaces for $X\to S$, where $S$ is of finite type over a non-Nagata ring, appeared already in some papers of de Cataldo and Zhang (see, e.g., \cite{dCZ}), where the authors consider $S$ of finite type over a discrete valuation ring. Such rings need not be Nagata rings (see \cite[Tag 032E, Example 10.162.17]{St}).
\end{enumerate}
\end{Remarks}
\section{Strong S-equivalence}\label{strong-S-section}
In this section we introduce strong S-equivalence and study its basic properties.
We fix the following notation.
Let $Y$ be a projective scheme over a field $k$ and let $L$ be a
$k$-Lie algebroid on $Y$. Let $\mathop{\rm Coh} ^L_d(Y)$ be the full subcategory of
the category of $L$-modules which are coherent as ${\mathcal O}_Y$-modules
and whose objects are sheaves supported in dimension $\le d$.
For any $d'\le d$ the subcategory $\mathop{\rm Coh} ^L_{d'-1}(Y)$ is a Serre subcategory and we can form
the quotient category $\mathop{\rm Coh} ^L_{d,d'}(Y)=\mathop{\rm Coh} ^L_{d}(Y)/\mathop{\rm Coh} ^L_{d'-1}(Y)$.
Let $S_{d'}$ be the class of morphisms $s: E\to F$ in $\mathop{\rm Coh} ^L_{d}(Y)$ that are isomorphisms in dimension $\le d'$,
i.e., such that $\ker s$ and $\mathop{\rm coker} s$ are supported in dimension $<d'$. This is a multiplicative system and
$\mathop{\rm Coh} ^L_{d, d'}(Y)$ is constructed as $S_{d'}^{-1}\mathop{\rm Coh} ^L_{d}(Y)$. So the objects of $\mathop{\rm Coh} ^L_{d, d'}(Y)$
are objects of $\mathop{\rm Coh} ^L_{d}(Y)$ and morphisms in $\mathop{\rm Coh} ^L_{d, d'}(Y)$ are equivalence classes of diagrams
$E\stackrel{s}{\leftarrow} F'\stackrel{f}{\rightarrow }F$ in which $s$ is a morphism from $S_{d'}$.
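For example, if $E\in \mathop{\rm Coh} ^L_{d}(Y)$ and $E'\subset E$ is an $L$-submodule supported in dimension $<d'$, then the quotient map $E\to E/E'$ belongs to $S_{d'}$ and hence becomes an isomorphism in $\mathop{\rm Coh} ^L_{d, d'}(Y)$.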
\subsection{Stability and S-equivalence}
Let us fix an ample line bundle ${\mathcal O}_Y(1)$ on $Y$.
For any $E\in \mathop{\rm Coh} ^L_d(Y)$ we write the Hilbert polynomial of $E$ as
$$P(E, m)= \chi (Y, E\otimes {\mathcal O}_Y(m))=\sum _{i=0}^d \alpha _i (E) \frac{m^i}{i!}.$$
Let ${\mathbb Q} [t]_d$ denote the space of polynomials in ${\mathbb Q} [t]$ of degree $\le d$.
For any object $E$ of $\mathop{\rm Coh} ^L_{d}(Y)$ of dimension $d$ we define its \emph{normalized Hilbert polynomial}
$p_{d, d'} (E)$ as an element $P(E)/ \alpha_d(E)$ of ${\mathbb Q} [T]_{d, d'}={\mathbb Q} [t]_d/ {\mathbb Q} [t]_{d'-1}$.
If $E$ is of dimension less than $d$ we set $p_{d, d'} (E)=0$.
This factors to
$$p_{d, d'}: {\mathop{\rm Coh} }^L_{d,d'}(Y) \to {\mathbb Q} [T]_{d, d'}={\mathbb Q} [t]_d/ {\mathbb Q} [t]_{d'-1}.$$
\begin{Definition}
We say that $E\in \mathop{\rm Coh} ^L_d(Y)$ is \emph{stable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$} if for
all proper subobjects $E'\subset E$ (in $\mathop{\rm Coh} ^L_{d,d'}(Y)$) we have
$$\alpha _d(E) \cdot P(E')< \alpha _d (E')\cdot P(E) \mathop{\rm mod} {\mathbb Q} [t]_{d'-1}.$$
We say that $E\in \mathop{\rm Coh} ^L_d(Y)$ is \emph{semistable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$} if it is either
of dimension $<d'$ or it has dimension $d$ and
for all proper subobjects $E'\subset E$ we have
$$\alpha _d(E) \cdot P(E')\le \alpha _d (E')\cdot P(E) \mathop{\rm mod} {\mathbb Q} [t]_{d'-1}.$$
\end{Definition}
\begin{Remark}
\begin{enumerate}
\item If $E\in \mathop{\rm Coh} ^L_d(Y)$ is stable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ then it is either isomorphic to $0$
in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ (so it is of dimension $<d'$) or it has dimension $d$. Otherwise, we can find a non-zero proper subobject $E'\subset E$ with $\alpha _d(E')=\alpha _d(E)=0$, contradicting the required inequality.
\item
In view of the above remark, adding the assumption on the dimension of $E$ in the definition of semistability is done because we would like semistable $E$ to have a filtration, whose quotients are stable.
\end{enumerate}
\end{Remark}
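For instance, assume that $Y$ is a smooth projective surface, $L$ is the trivial Lie algebroid, $d=2$ and $d'=1$ (this special case is meant only as an illustration). Working modulo ${\mathbb Q}[t]_0$ compares only the coefficients $\alpha_2$ and $\alpha_1$, and for a subobject $E'\subset E$ of dimension $2$ the semistability condition becomes
$$\alpha_2(E)\,\alpha_1(E')\le \alpha_2(E')\,\alpha_1(E),$$
which by the Riemann--Roch theorem is equivalent to the slope inequality $\mu(E')\le \mu(E)$ (with slopes computed with respect to the fixed polarization). Thus semistability in $\mathop{\rm Coh} ^L_{2,1}(Y)$ essentially recovers the modified slope semistability used in the introduction.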
We say that $E\in \mathop{\rm Coh} ^L_d(Y)$ is \emph{pure in $\mathop{\rm Coh} ^L_{d,d'}(Y)$}, if it has dimension $d$ and the maximal $L$-submodule $T(E)$ of $E$ of dimension $<d$ has dimension $<d'$.
Note that if $E\in \mathop{\rm Coh} ^L_d(Y)$ of dimension $d$ is {semistable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$} then it is pure in $\mathop{\rm Coh} ^L_{d,d'}(Y)$. Indeed, we have $\alpha _d(E) \cdot P(T(E))\le 0 \mathop{\rm mod} {\mathbb Q} [t]_{d'-1},$ so $P(T(E))\in {\mathbb Q} [t]_{d'-1}$, which
shows that $T(E)$ has dimension $\le d'-1$.
If $E\in \mathop{\rm Coh} ^L_d(Y)$ is semistable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ then there exists a Jordan--H\"older filtration
$$0=E_0\subset E_1\subset ...\subset E_m=E$$
by $L$-submodules such that the quotients $E_i/E_{i-1}$ are stable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ with the same normalized Hilbert polynomial $p_{d,d'}$ as $E$. The associated graded $\mathop{\rm Gr} ^{JH}(E)=\bigoplus E_{i}/E_{i-1} $ is polystable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ and its class in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ is
independent of the choice of a Jordan--H\"older filtration of $E$.
We say that two semistable objects $E$ and $E'$ of $\mathop{\rm Coh} ^L_{d,d'}(Y)$ are \emph{S-equivalent} if their
associated graded polystable objects $\mathop{\rm Gr} ^{JH}(E)$ and $\mathop{\rm Gr} ^{JH}(E')$ are isomorphic in $\mathop{\rm Coh} ^L_{d,d'}(Y)$.
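For instance, if $0\to G\to E\to F\to 0$ is a short exact sequence in $\mathop{\rm Coh} ^L_{d}(Y)$ with $G$ and $F$ stable of dimension $d$ in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ and $p_{d,d'}(G)=p_{d,d'}(F)=p_{d,d'}(E)$, then $E$ is semistable, $0\subset G\subset E$ is a Jordan--H\"older filtration and $\mathop{\rm Gr} ^{JH}(E)\simeq G\oplus F$, so $E$ is S-equivalent to the polystable object $G\oplus F$.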
\subsection{Basic definitions}
\begin{Definition}\label{semistable-filtr}
Let $E\in \mathop{\rm Coh} ^L_d(Y)$. We say that a filtration
$$0=F_0E\subset F_1E\subset ...\subset F_mE=E$$
by $L$-submodules is \emph{$d'$-semistable} if all the quotients $F_iE/F_{i-1}E$ are
semistable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ with $p_{d,d'}(F_iE/F_{i-1}E)$ equal to either $0$ or $p_{d,d'} (E)$.
We say that $F_{\bullet }E$ is \emph{$d'$-stable}, if it is $d'$-semistable and
all quotients $F_iE/F_{i-1}E$ are stable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$.
\end{Definition}
Clearly, if $E$ admits a $d'$-semistable filtration then it is semistable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$
and any $d'$-semistable filtration can be refined to a $d'$-stable filtration.
A $d'$-stable filtration generalizes slightly the notion of a Jordan--H\"older filtration. In particular, a filtration $F_{\bullet} E$ is $d'$-stable if and only if the associated graded $\mathop{\rm Gr} ^{F}(E)$ is isomorphic to $\mathop{\rm Gr} ^{JH}(E)$ in $\mathop{\rm Coh} ^L_{d,d'}(Y)$.
The important difference is that we allow quotients to be stable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ but with the zero
normalized Hilbert polynomial $p_{d,d'}$. So quotients in a $d'$-(semi)stable filtration can contain torsion (or be torsion) even if $E$ is torsion free as an ${\mathcal O}_Y$-module.
Any $d'$-stable filtration can be refined to a $d'$-stable filtration whose quotients are of dimension $<d'$ or pure on $Y$ but we will also use more general $d'$-stable filtrations.
\begin{Definition}\label{strongly-S-equivalent}
We say that $E\in \mathop{\rm Coh} ^L_d(Y)$ and $E'\in \mathop{\rm Coh} ^L_d(Y)$ are \emph{strongly S-equi\-va\-lent in $\mathop{\rm Coh} ^L_{d,d'}(Y)$} if there exist $d'$-semistable filtrations $F_{\bullet }E$ and $F'_{\bullet} E'$ whose quotients
are isomorphic up to a permutation, i.e., if both filtrations have the same length $m$ and there exists a permutation $\sigma$ of $\{1,..., m\}$ such that for all $i=1,...,m$ the $L$-modules $F_{i}E/F_{i-1}E$ and $F'_{\sigma(i)}E'/F'_{\sigma (i)-1}E'$ are isomorphic (on $Y$). In this case we write $E\simeq_{d'}E'$.
\end{Definition}
Note that if $E$ and $E'$ are strongly S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ then the associated graded objects $\mathop{\rm Gr} ^{F}(E)$ and $\mathop{\rm Gr} ^{F'}(E')$ are isomorphic in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ and after possibly refining the filtrations they are isomorphic to $\mathop{\rm Gr} ^{JH}(E)$ in $\mathop{\rm Coh} ^L_{d,d'}(Y)$. In particular, $E$ and $E'$ are S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(Y)$.
However, the opposite implication is false even if $L$ is a trivial Lie algebroid. For example,
Hilbert polynomials of strongly S-equivalent modules are equal but this does not need to be true for S-equivalent modules. Even if the Hilbert polynomials of S-equivalent $L$-modules are equal, they do not need to be strongly S-equivalent (see Section \ref{surfaces}).
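Let us also make the Rees construction mentioned in the introduction explicit in this context (a standard sketch). If $F_{\bullet}E$ is a filtration of $E$ by $L$-submodules, then the subsheaf
$$\mathcal{R}=\sum_{i\ge 0} t^i\, F_iE\subset E\otimes_k k[t]$$
on $Y\times {\mathbb A}^1$ is $t$-torsion free, hence flat over ${\mathbb A}^1$, with $\mathcal{R}/t\mathcal{R}\simeq \mathop{\rm Gr}^{F}(E)$ and with fibre isomorphic to $E$ over every point $t\neq 0$. In particular, $E$ degenerates to $\mathop{\rm Gr}^{F}(E)$, which is why strongly S-equivalent modules are expected to correspond to the same point of a moduli space.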
The following lemma gives a convenient reformulation of Definition \ref{strongly-S-equivalent}.
\begin{Lemma} \label{cor-Langton}
Let $E, E'\in \mathop{\rm Coh} ^L_{d}(Y) $ be semistable of dimension $d$ in $\mathop{\rm Coh} ^L_{d,d'}(Y)$. Then the following conditions are equivalent:
\begin{enumerate}
\item $E$ and $E'$ are strongly S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(Y)$.
\item There exist $d'$-semistable filtrations $F_{\bullet }E$ and $F'_{\bullet} E'$ such that $\mathop{\rm Gr} ^{F}(E)\simeq \mathop{\rm Gr} ^{F'}(E')$.
\end{enumerate}
\end{Lemma}
\begin{proof}
The implication $(1)\Rightarrow (2)$ follows immediately from the definition. Now let us assume that $(2)$ is satisfied.
Let us decompose $\mathop{\rm Gr} ^{F}(E)$ and $\mathop{\rm Gr} ^{F'}(E')$ into a direct sum of irreducible $L$-modules.
These decompositions induce $d'$-semistable refinements of the original filtrations.
But by the Krull--Remak--Schmidt theorem (see \cite[Theorem 2]{At}) we can find isomorphisms between the direct factors
of the decomposition, so quotients of the refined filtrations are isomorphic up to a permutation.
\end{proof}
\subsection{Properties of strong S-equivalence}
\begin{Lemma}\label{torsion-equivalence}
Let $E$ be an $L$-module, coherent as an ${\mathcal O}_Y$-module.
Then any two filtrations of $E$ by $L$-submodules can be refined to filtrations by $L$-submodules, whose quotients are isomorphic up to a permutation.
\end{Lemma}
\begin{proof}
Let us consider a natural ordering on the polynomials $P\in {\mathbb Q} [T]$ given by the lexicographic order of their coefficients. Assume that for any $L$-module $\tilde E$ with Hilbert polynomial $P (\tilde E)<P (E)$ and for any two filtrations of $\tilde E$ we can find refinements to filtrations whose quotients are isomorphic up to a permutation.
Let $F_{\bullet} E$ and $G_{\bullet} E$ be two filtrations of $E$ by $L$-submodules of lengths $m$ and $m'$, respectively. It is sufficient to show that these filtrations can be refined so that quotients of the refined filtrations are isomorphic up to a permutation (note that this sort of induction works because $P(E)\ge 0$ for any $L$-module $E$; it is a mixture of induction on the dimension of the support and on the multiplicity of an $L$-module).
The filtrations $F_{\bullet} E$ and $G_{\bullet} E$ induce the filtrations on $F_1E$ and $E/F_1E$.
By assumption these induced filtrations can be refined to filtrations, whose quotients are isomorphic up to a permutation. But these filtrations induce refinements of the filtrations $F_{\bullet} E$ and $G_{\bullet} E$,
which proves our claim.
\end{proof}
To simplify notation we say that $E$ is \emph{$d'$-refinable} if any two $d'$-semistable filtrations of $E$ can be refined to $d'$-semistable filtrations, whose quotients are isomorphic up to a permutation.
\begin{Proposition}\label{equivalence-lemma}
Any $L$-module $E\in \mathop{\rm Coh} ^L_d(Y)$ is $d'$-refinable.
\end{Proposition}
\begin{proof}
Let us consider a short exact sequence
$$0\to T(E)\to E\to E/T(E)\to 0.$$
$d'$-semistable filtrations of $E$ induce $d'$-semistable filtrations of $T(E)$ and $E/T(E)$. By
Lemma \ref{torsion-equivalence} the filtrations of $T(E)$ can be refined to filtrations by $L$-submodules, whose quotients are isomorphic up to a permutation. Such filtrations are automatically $d'$-semistable, so
$T(E)$ is $d'$-refinable and it is sufficient to prove that $E/T(E)$ is $d'$-refinable
and then use the corresponding filtrations to obtain the required filtrations of $E$.
So in the following we can assume that $E$ is pure of dimension $d$ on $Y$. Suitably refining the filtrations we can also assume that they are $d'$-stable.
Let $F_{\bullet} E$ and $G_{\bullet} E$ be $d'$-stable filtrations of $E$ and assume that any $L$-module $\tilde E$ pure of dimension $d$ with multiplicity $\alpha _d (\tilde E)<\alpha _d (E)$ is $d'$-refinable.
Since $E$ is pure of dimension $d$, $F_1E$ is also pure of dimension $d$ on $Y$.
Let us take the minimal $i$ such that $F_1E\subset G_iE$ and the composition $F_1E\to G_iE\to G_iE/G_{i-1}E$ is non-zero in dimension $d$ (clearly, such $i$ must exist as one can see starting from $i=m'$ and going down). Since $G_iE/G_{i-1}E$ is pure in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ and both $F_1E$ and $G_iE/G_{i-1}E$ are
stable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ with the same normalized Hilbert polynomial $p_{d,d'}$, the map
$F_1E\to G_iE/G_{i-1}E$ is an inclusion and an isomorphism in dimension $d'$.
So there exists a closed subset $Z\subset Y$ of dimension $\le d'-1$ such that
the canonical map
$$G_{i-1}E\oplus F_1E\to G_{i}E$$
is an isomorphism on the open subset $U=Y-Z\subset Y$. Since $G_{i-1}E\oplus F_1E$ is pure of dimension $d$, this map is also injective. Therefore the composition $G_{i-1}E\to E\to E/F_1E$ is also injective.
The quotient $\tilde E= E/F_1E$ has two natural $d'$-stable filtrations induced from $E$.
The first one is defined by $F'_{j}\tilde E=F_{j+1}E/F_1E$ and
the second one by $$G'_{j}\tilde E= \mathop{\rm im} ( G_{j}E\to E\to E/F_1E ).$$
Note that both filtrations are $d'$-stable. This is clear for $F'_{\bullet} \tilde E$.
Since $G_{i-1}E\to E/F_1E$ is injective
we have
$$G'_{j}\tilde E/G'_{j-1}\tilde E\simeq G_jE/G_{j-1}E$$
for $j<i$. Since $F_1E\subset G_iE$, we also have
$$G'_{j}\tilde E/G'_{j-1}\tilde E\simeq G_jE/G_{j-1}E$$
for $j>i$. Finally, $G'_{i}\tilde E/G'_{i-1}\tilde E$ is isomorphic to the cokernel of $G_{i-1}E\oplus F_1E\to G_{i}E$, so it is either $0$ or of dimension $\le d'-1$, which proves that the filtration $G'_{\bullet} \tilde E$
is $d'$-stable. Therefore we can apply the induction assumption to $\tilde E$ and then lift the corresponding filtrations to the required filtrations of $E$.
\end{proof}
\begin{Corollary}\label{equivalence-relation}
Strong S-equivalence in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ is an equivalence relation on $\mathop{\rm Coh} ^L_{d}(Y)$.
\end{Corollary}
\begin{proof}
To simplify notation we say that filtrations satisfy condition $(*)$ if their quotients
are isomorphic up to a permutation (see Definition \ref{strongly-S-equivalent}).
Let us consider $E, E', E''\in \mathop{\rm Coh} ^L_{d}(Y)$ and assume that $E\simeq_{d'}E'$
and $E'\simeq_{d'}E''$.
Then $E$ and $E'$ have the filtrations satisfying $(*)$, and $E'$ and $E''$ have the filtrations satisfying $(*)$.
By Proposition \ref{equivalence-lemma} the filtrations of $E'$ can be refined to filtrations satisfying
condition $(*)$. But these refined filtrations induce filtrations of $E$ and $E''$
that satisfy condition $(*)$, so $E\simeq_{d'}E''$. The remaining conditions are obvious.
\end{proof}
Thanks to the above corollary one can talk about strong S-equivalence classes of (semistable) $L$-modules.
\begin{Lemma}\label{S-equivalence-on-sequences}
Let us fix $P\in {\mathbb Q} [T]_{d, d'}$. Let
$$0\to E'\to E\to E''\to 0$$
and
$$0\to \tilde E'\to \tilde E\to \tilde E''\to 0$$
be short exact sequences of $L$-modules in $ \mathop{\rm Coh} ^L_d(Y)$ with normalized Hilbert polynomials $p_{d,d'}$ equal to either $0$ or $P$.
\begin{enumerate}
\item If $E'\simeq_{d'} \tilde E'$ and $E''\simeq_{d'} \tilde E''$ then $E\simeq_{d'} \tilde E$.
\item If $E\simeq_{d'} \tilde E$ and $E''\simeq_{d'} \tilde E''$ then $E'\simeq_{d'} \tilde E'$.
\item If $E\simeq_{d'} \tilde E$ and $E'\simeq_{d'} \tilde E'$ then $E''\simeq_{d'} \tilde E''$.
\end{enumerate}
\end{Lemma}
\begin{proof}
The first assertion follows from the fact that a $d'$-semistable filtration of $E$ induces $d'$-semistable filtrations on $E'$ and $E''$ and $d'$-semistable refinements of these filtrations induce a $d'$-semistable refinement of the original filtration of $E$.
To prove the second assertion let us consider two $d'$-semistable filtrations $F_{\bullet} E''$ and
$F_{\bullet} \tilde E''$, whose quotients are isomorphic up to a permutation.
Let $F_0 E=0$ and let $F_iE$ for $i>0$ be the preimage of $F_{i-1}E''$. This defines a $d'$-semistable
filtration $F_{\bullet} E$ of $E$. Similarly, we can define the filtration
$F_{\bullet} \tilde E$. By Proposition \ref{equivalence-lemma} and our assumption we can find $d'$-semistable refinements of $F_{\bullet} E$ and $F_{\bullet} \tilde E$ with quotients isomorphic up to a permutation.
These refinements define filtrations on $E'$ and $\tilde E'$. Possibly changing the permutations we see that the quotients of these filtrations are isomorphic up to a permutation, which shows that $E'$ and $\tilde E'$ are strongly S-equivalent in $ \mathop{\rm Coh} ^L_{d,d'}(Y)$.
The last assertion can be proven similarly to the second one.
\end{proof}
\subsection{Slope semistability on surfaces} \label{surfaces}
To better understand strong S-equivalence classes let us consider the surface case.
Let $Y$ be a smooth projective surface, let $d=2$, $d'=1$ and let $L={\mathcal O}_Y$ be the trivial Lie algebroid. In this case a coherent sheaf $E$ is semistable in $\mathop{\rm Coh} ^L_{2,1}(Y)$ if and only if $E$ is slope semistable
(of dimension $2$ or $0$).
\begin{Lemma}\label{independence}
Let $E$ be a slope semistable sheaf of dimension $2$ or $0$ on $Y$. Then
for any $1$-stable filtration $F_{\bullet}E$ the sheaf $({\mathop{\rm Gr}} ^{F}(E)) ^{**}$
and the function $l_E: Y\to {\mathbb Z}_{\ge 0}$ given by
$$l_E(y)={\mathrm {length}} \, ( \ker ({\mathop{\rm Gr}} ^{F}(E)\to ({\mathop{\rm Gr}} ^{F}(E))^{**})) _y + {\mathrm {length}} \, ( \mathop{\rm coker} ({\mathop{\rm Gr}} ^{F}(E)\to ({\mathop{\rm Gr}} ^{F}(E))^{**})) _y$$
do not depend on the choice of the filtration.
\end{Lemma}
\begin{proof}
If $E$ has dimension $0$ the assertion is clear as the reflexivization is trivial and $l_E(y)={\mathrm {length}} \, E_y$. So in the following we can assume that $E$ has dimension $2$.
In this proof we write $l_E^F$ for the function $l_E$ defined by the filtration $F_{\bullet} E$.
By Proposition \ref{equivalence-lemma} any two $1$-stable filtrations of $E$ can be refined to $1$-stable filtrations, whose quotients are isomorphic up to a permutation. So it is sufficient to prove that if $F'_{\bullet}E$ is a refinement of a $1$-stable filtration $F_{\bullet}E$ then
$({\mathop{\rm Gr}} ^{F}(E)) ^{**}\simeq ({\mathop{\rm Gr}} ^{F'}(E)) ^{**}$ and $l_E^F=l_E^{F'}$.
Then passing to the quotients, we can reduce to the situation when $F_{\bullet} E$ has length $1$, i.e.,
$E$ is slope stable of dimension $2$ or $0$ on $Y$. In the second case we already know the assertion, so we can assume that $E$ has dimension $2$.
Let us consider a short exact sequence
$$0\to E'\to E\to E''\to 0$$
in which one of the sheaves $E'$ and $E''$ is $0$-dimensional and the other one is $2$-dimensional and slope stable.
Let us first assume that $E'$ is $0$-dimensional. Then $E^{**}\simeq (E'')^{**}$,
$$\mathop{\rm coker} (E\to E^{**})\simeq \mathop{\rm coker} (E'\to (E'')^{**})$$
and we have a short exact sequence
$$0\to E'\to T(E)\to T(E'')\to 0.$$
Similarly, if $E''$ is $0$-dimensional we have $E^{**}\simeq (E')^{**}$,
$$\mathop{\rm coker} (E'\to (E')^{**}) \simeq \mathop{\rm coker} (E\to E^{**}) $$
and we have a short exact sequence
$$0\to T(E')\to T(E)\to E''\to 0.$$
So we see that $({\mathop{\rm Gr}} ^{F'}(E)) ^{**}\simeq E^{**}$ and $l_E^F=l_E^{F'}$ follows by induction on the length of the filtration $F'_{\bullet }E$.
\end{proof}
\begin{Corollary}\label{one-implication}
If $E$ and $E'$ are semistable and strongly S-equivalent in $\mathop{\rm Coh} ^L_{2,1}(Y)$ then $({\mathop{\rm Gr}} ^{JH}(E) )^{**} \simeq ({\mathop{\rm Gr}} ^{JH}(E') )^{**}$ and $l_E=l_{E'}.$
\end{Corollary}
\begin{proof}
Assume that $E\simeq_{1}E'$ and let $F_{\bullet }E$ and $F'_{\bullet} E'$ be $1$-stable filtrations whose quotients are isomorphic up to a permutation.
Since ${\mathop{\rm Gr}} ^{F}(E)\simeq {\mathop{\rm Gr}} ^{F'}(E')$, we have $({\mathop{\rm Gr}} ^{F}(E)) ^{**}\simeq({\mathop{\rm Gr}} ^{F'}(E')) ^{**}$ and $l_E=l_{E'}$. So the corollary follows from the fact that by Lemma \ref{independence} we have $({\mathop{\rm Gr}} ^{JH}(E)) ^{**}\simeq({\mathop{\rm Gr}} ^{F}(E)) ^{**}$ and $({\mathop{\rm Gr}} ^{JH}(E')) ^{**}\simeq({\mathop{\rm Gr}} ^{F'}(E')) ^{**}$.
\end{proof}
If $E$ is slope semistable and torsion free then we can find a (slope) Jordan--H\"older filtration $E^{{\mathop{\rm tf}}}_\bullet $ of $E$ such that the associated graded $\mathop{\rm Gr} ^{{\mathop{\rm tf}}}(E)$ is also torsion free.
Then for any $y\in Y$ we have
$$l_E(y)={\mathrm {length}} \, (({\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E))^{**}/ {\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E)) _y ,$$
so our function $l_E$ agrees with the one from \cite[Definition 8.2.10]{HL}.
Let $M^{\mu \mathrm {ss}}(r, \Lambda, c_2)$ be the moduli space of torsion free slope semistable sheaves $E$ of rank $r$
with $\det E\simeq \Lambda$ and $c_2(E)=c_2$ (see \cite[8.2]{HL}).
The following result shows that closed points of $M^{\mu \mathrm {ss}}(r, \Lambda, c_2)$ correspond to strong S-equivalence classes (see \cite[Theorem 8.2.11]{HL}).
\begin{Lemma} \label{Donaldson-Uhlenbeck}
Let $E$ and $E'$ be slope semistable torsion free sheaves on $Y$. Then $E$ and $E'$ are strongly
S-equivalent in $\mathop{\rm Coh} ^L_{2,1}(Y)$ if and only if $({\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E) )^{**} \simeq ({\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E') )^{**}$ and $l_E=l_{E'}.$
\end{Lemma}
\begin{proof}
One implication follows from Corollary \ref{one-implication}.
To prove the other implication let us assume that $({\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E)) ^{**}\simeq ({\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E') )^{**}$
and $l_E=l_{E'}$. Since $E\simeq _1{\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E)$ and $E'\simeq _1{\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E')$, by Corollary \ref{equivalence-relation} it is sufficient to prove that ${\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E) \simeq _1{\mathop{\rm Gr}} ^{{\mathop{\rm tf}}}(E')$.
So we can assume that $E$ and $E'$ are slope polystable.
Since $E^{**}/E$ and $(E')^{**}/E'$ are $0$-dimensional sheaves of the same length at every point of $Y$, we can find the filtrations of $E^{**}/E$ and $(E')^{**}/E'$ whose quotients are isomorphic. Therefore
$E^{**}/E\simeq_1(E')^{**}/E'$. Since by assumption $E^{**}\simeq _1(E')^{**}$,
Lemma \ref{S-equivalence-on-sequences} implies that $E\simeq_1 E'$.
\end{proof}
\subsection{Slope semistability in higher dimensions}
Let $Y$ be a smooth projective variety of dimension $d$. Let us consider $d'=d-1$ and the trivial Lie algebroid $L={\mathcal O}_Y$.
Then a coherent sheaf $E$ is semistable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ if and only if $E$ is slope semistable
(of dimension $d$ or $\le d-2$).
\begin{Example} \label{higher-dim-example}
Let $E$ be a slope stable vector bundle on $Y$, where $\dim Y=d\ge 3$. Let us consider the family of slope stable torsion free sheaves $\{E_y\} _{y\in Y}$ defined by $E_y=\ker (E\to E\otimes k(y))$. This can be seen to be a flat family parametrized by $Y$. Note that if $y_1$, $y_2$ are distinct $k$-points of $Y$ then $E_{y_1}$ and $E_{y_2}$ are not strongly S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(Y)$. This follows from Lemma \ref{S-equivalence-on-sequences}
and the fact that $E\otimes k(y_1) $ and $E\otimes k(y_2)$ are not strongly S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(Y)$ as they have different supports.
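Explicitly (spelling out this step), for $i=1,2$ we have short exact sequences
$$0\to E_{y_i}\to E\to E\otimes k(y_i)\to 0,$$
so if $E_{y_1}\simeq _{d'} E_{y_2}$ held then Lemma \ref{S-equivalence-on-sequences}(3), applied to these sequences together with the trivial equivalence $E\simeq _{d'}E$, would give $E\otimes k(y_1)\simeq _{d'} E\otimes k(y_2)$, a contradiction.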
Note however that by \cite[Lemma 5.7]{GT} all $E_y$ for $y\in Y(k)$ correspond to the same point in the moduli space of slope semistable sheaves constructed by D. Greb and M. Toma.
\end{Example}
\section{``Separatedness of the moduli space of semistable modules''}
In this section we fix the following notation.
Let $R$ be a discrete valuation ring with maximal ideal $m$
generated by $\pi \in R$. Let $K$ be the quotient field of $R$ and
let us assume that the residue field $k=R/m$ is algebraically
closed.
Let $X\to S=\mathop{\rm Spec \, } R$ be a projective morphism and let $L$
be a smooth ${\mathcal O}_S$-Lie algebroid on $X$. Let us fix a
relatively ample line bundle ${\mathcal O}_X (1)$ on $X/S$. In the following
stability of sheaves on the fibers of $X\to S$ is considered with respect to
this fixed polarization.
\subsection{Flatness lemma}
In the proof of the main theorem of this section we need the following lemma.
\begin{Lemma}\label{hom-flatness}
Let $E_1$ and $E_2$ be coherent ${\mathcal O}_X$-modules. If $E_2$ is flat over $S$ then
\begin{enumerate}
\item the $R$-module ${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2)$ is flat,
\item the sheaf ${\mathcal H}om _{{\mathcal O}_X} (E_1, E_2)$ is flat over $S$,
\item we have a canonical isomorphism
$${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2)\otimes _RK\mathop{\longrightarrow}^{\simeq} {\mathop{{\rm Hom}}} _{{\mathcal O}_{X_K}} ((E_1)_K, (E_2)_K) .$$
\end{enumerate}
\end{Lemma}
\begin{proof}
Since $E_2$ is $R$-flat, the canonical map $E_2\to (E_2)_K$ is an inclusion.
If $\varphi: E_1\to E_2$ is a non-zero ${\mathcal O}_X$-linear map and $\pi \varphi =0$ then $\varphi \pi =0$,
so $\varphi$ factors through $E_1/\pi E_1\to E_2$. But $E_1/\pi E_1$ is a torsion $R$-module and
$(E_2)_K$ is a torsion free $R$-module (since it is a $K$-vector space and $K$ is a torsion free $R$-module), so $E_1/\pi E_1\to E_2\subset (E_2)_K$ is the zero map and hence $\varphi =0$. It follows that ${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2)$ is a torsion free $R$-module. So it is also a free $R$-module (and in particular $R$-flat). Since $E_1$ and $E_2$ are ${\mathcal O}_X$-coherent, the canonical homomorphism
${\mathcal H}om _{{\mathcal O}_X} (E_1, E_2) _x\to {\mathop{{\rm Hom}}} _{{\mathcal O}_{X,x}} ((E_1)_x, (E_2)_x)$ is an isomorphism for every point $x\in X$. Then a local version of the same argument as above shows that ${\mathcal H}om _{{\mathcal O}_X} (E_1, E_2)$ is flat over $S$.
Since cohomology commutes with flat base change, we have an isomorphism
$${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2)\otimes _RK\mathop{\longrightarrow}^{\simeq} H^0(X_K, ({\mathcal H}om _{{\mathcal O}_X} (E_1, E_2))_K).$$
Since $X_K\subset X$ is open we have $({\mathcal H}om _{{\mathcal O}_X} (E_1, E_2))_K
\simeq {\mathcal H}om _{{\mathcal O}_{X_K}} ((E_1)_K, (E_2)_K)$ and hence
$${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2)\otimes _RK\mathop{\longrightarrow}^{\simeq} {\mathop{{\rm Hom}}} _{{\mathcal O}_{X_K}} ((E_1)_K, (E_2)_K) .$$
\end{proof}
\subsection{Langton type theorem for separatedness}
The following theorem is a far reaching generalization of \cite[Theorem 5.2]{La} and \cite[Theorem 5.4]{La} (unfortunately, the proof of the first part of \cite[Theorem 5.2]{La} was omitted). The second part of the proof is similar to Gabber's proof of \cite[Variant 2.5.2]{Ka} but we need to study semistability of various sheaves appearing in the proof.
\begin{Theorem}\label{slope-Langton2}
Let $E_1$ and $E_2$ be $R$-flat ${\mathcal O}_X$-coherent $L$-modules of relative dimension $d$.
Assume that there exists an isomorphism $\varphi: (E_1)_K\to (E_2)_K$ of $L_K$-modules.
Then we have the following implications:
\begin{enumerate}
\item If $(E_1)_k$ and $(E_2)_k$ are semistable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$ then they are strongly S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$.
\item If $(E_1)_k$ and $(E_2)_k$ are polystable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$ then they are isomorphic in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$.
\item If $(E_1)_k$ is stable and $(E_2)_k$ is semistable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$ and $(E_1)_k$ is pure then $\pi^n\varphi$ extends to an isomorphism of $L$-modules $E_1\to E_2$ for some integer $n$.
\end{enumerate}
\end{Theorem}
\begin{proof}
By Lemma \ref{hom-flatness} ${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2)$ is a free $R$-module and
$${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2)\otimes _RK\mathop{\longrightarrow}^{\simeq} {\mathop{{\rm Hom}}} _{{\mathcal O}_{X_K}} ((E_1)_K, (E_2)_K) .$$
So if we treat $E_i$ as an ${\mathcal O}_X$-submodule of $(E_i) _K$ then $\pi^n \varphi (E_1)\subset E_2$ for some integer $n$. Note that $\varphi'=\pi ^n\varphi : E_1\to E_2$ is a homomorphism of $L$-modules. More precisely, giving an $L$-module structure on $E_i$ is equivalent to an integrable $d_{\Omega _L}$-connection $\nabla_i: E_i\to E_i\otimes \Omega_L$. Under this identification $\varphi'$ is a homomorphism of $L$-modules
if and only if $\alpha= (\varphi'\otimes {\mathop{\rm id}})\circ \nabla _1-\nabla_2\circ \varphi '$ is the zero map.
But this map is ${\mathcal O}_X$-linear and $\alpha _K=0$, since $\varphi$ is a homomorphism of $L_K$-modules.
Since by Lemma \ref{hom-flatness} ${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2\otimes \Omega _L)$ is a free $R$-module
and
$${\mathop{{\rm Hom}}} _{{\mathcal O}_X} (E_1, E_2\otimes \Omega _L)\otimes _RK\mathop{\longrightarrow}^{\simeq} {\mathop{{\rm Hom}}} _{{\mathcal O}_{X_K}} ((E_1)_K, (E_2)_K\otimes \Omega _{L_K}), $$
we have $\alpha =0 $ as required.
Note that if we choose $n$ so that $\pi^n \varphi (E_1)\subset E_2$ but $\pi^{n-1} \varphi (E_1)$ is not contained in $E_2$ then $\varphi'_k: (E_1)_k\to (E_2)_k$ is a non-zero map of sheaves with the same Hilbert polynomial. In particular, if $(E_1)_k$ is stable and $(E_2)_k$ is semistable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$ and $(E_1)_k$ is pure, then $\varphi'_k$ is an isomorphism of $L_k$-modules, which implies that $\varphi '$ is an isomorphism of $L$-modules. This gives the last part of the theorem.
Now let us set $\psi=\varphi^{-1}:(E_2)_K\to (E_1)_K$. Then as above for some integer $m$ we get a homomorphism of $L$-modules $\psi'=\pi ^m\psi : E_2\to E_1$. So setting $E_2'=\psi (E_2)$
we have inclusions of $L$-modules $\pi^m E_2'\subset E_1$ and $\pi ^nE_1\subset E_2'$.
For every $(a,b)\in {\mathbb Z}^2$ let us set
$$E(a,b):= \mathop{\rm im} (E_1\oplus E_2' \mathop{\longrightarrow}^{\pi^a\oplus \pi ^b} (E_1)_K ). $$
Each $E(a,b)$ is an $L$-submodule of $(E_1)_K$, which as an ${\mathcal O}_X$-module is flat over $S$.
Moreover, we have $E(a,b)_K= (E_1)_K$.
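For later use, let us record the following elementary relations, which follow immediately from $E(a,b)=\pi^aE_1+\pi^bE_2'\subset (E_1)_K$:
$$E(a+1,b)\subset E(a,b),\qquad E(a,b+1)\subset E(a,b),\qquad \pi E(a,b)=E(a+1,b+1)\subset E(a+1,b);$$
in particular, the quotients appearing in the short exact sequences below are well defined.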
We claim that $E(a, b)_k$ is semistable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$.
Assume it is not semistable and let $E(a, b)_k\twoheadrightarrow F$ be the minimal destabilizing quotient.
Note that $E(a, b)_k$, $(E_1)_k$ and $(E_2')_k$ are special fibers of $S$-flat families with the same general fiber $(E_1)_K$. Therefore their Hilbert polynomials, and hence also reduced Hilbert polynomials, coincide.
Since
$$p((E_1)_k\oplus (E_2')_k)= p (E(a, b)_k)> p_{\min} (E(a,b)_k)=p (F) \mathop{\rm mod} {\mathbb Q} [t]_{d'-1}$$
and $(E_1)_k\oplus (E_2')_k$ is semistable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$, every map of $L_k$-modules $(E_1)_k\oplus (E_2')_k\to F$ is zero in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$. But we have a surjection $(E_1)_k\oplus (E_2')_k\to E(a, b)_k\to F$, a contradiction.
On $X_k$ we have short exact sequences
$$0\to E(a+1,b)/\pi E(a,b) \to E(a,b)/\pi E(a,b) \to E(a,b)/ E(a+1,b)\to 0$$
and
$$0\to \pi E(a,b)/\pi E(a+1,b) \to E(a+1,b)/\pi E(a+1,b)\to E(a+1,b)/ \pi E(a,b)\to 0.$$
Since $E(a,b)_k= E(a,b)/\pi E(a,b)$ and $E(a,b)_k$ is pure in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$, $E(a+1,b)/\pi E(a,b)$
is also pure in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$. Similarly, the second sequence implies that $E(a,b)/E(a+1,b)\simeq \pi E(a,b)/\pi E(a+1,b)$ is pure in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$.
If either $E(a+1,b)/\pi E(a,b)$ or $E(a,b)/E(a+1,b)$ has dimension $\le d'-1$, then the other sheaf is semistable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$ and it is clear that $E(a,b)_k$ and $E(a+1,b)_k$ are strongly S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$.
So we can assume that both $E(a+1,b)/\pi E(a,b)$ and $E(a,b)/E(a+1,b)$ have dimension $d$.
Then the first short exact sequence shows that
$$p_{\max} (E(a+1,b)/ \pi E(a,b))\le p (E(a, b)_k)\le p_{\min} (E(a,b)/ E(a+1,b)) \mathop{\rm mod} {\mathbb Q} [t]_{d'-1}.$$
Similarly, the second short exact sequence shows that
$$ p_{\max} (E(a,b)/ E(a+1,b)) \le p (E(a+1,b)_k)\le p_{\min} (E(a+1,b)/ \pi E(a,b)) \mathop{\rm mod} {\mathbb Q} [t]_{d'-1}.$$
But $p (E(a, b)_k)=p (E(a+1,b)_k)$ so $E(a+1,b)/\pi E(a,b)$ and $ E(a,b)/ E(a+1,b)$ are semistable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$ with the same normalized Hilbert polynomial in $ {\mathbb Q} [T]_{d, d'}$.
This shows that $E(a,b)_k$ and $E(a+1,b)_k$ are strongly S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$.
Similarly, $E(a,b)_k$ and $E(a,b+1)_k$ are strongly S-equivalent in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$.
Since $E_1=E(0,m)$ and $E_2'= E(n,0)$, Corollary \ref{equivalence-relation} implies the first part of the theorem. The second part follows immediately from the first one.
\end{proof}
\subsection{S-completeness}
Let us recall some definitions from \cite[Section 3.5]{AHLH}.
If $R$ is a discrete valuation ring with a uniformizer $\pi$ then one can consider the following quotient stack
$$ \overline{\mathop{\rm ST}}_R: =[\mathop{\rm Spec \, } (R[s,t]/(st-\pi)) /{\mathbb G}_m],$$
where $s$ and $t$ have ${\mathbb G}_m$-weights $1$ and $-1$.
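Let us also recall from \cite[Section 3.5]{AHLH} the following description, which we use below: the open substacks $\{s\neq 0\}$ and $\{t\neq 0\}$ of $\overline{\mathop{\rm ST}}_R$ are both isomorphic to $\mathop{\rm Spec \, } R$ and they intersect in $\mathop{\rm Spec \, } K$, so that
$$\overline{\mathop{\rm ST}}_R \setminus 0\simeq \mathop{\rm Spec \, } R\cup _{\mathop{\rm Spec \, } K}\mathop{\rm Spec \, } R .$$
In particular, a morphism $\overline{\mathop{\rm ST}}_R \setminus 0\to {\mathcal X}$ amounts to a pair of objects of ${\mathcal X}$ over $\mathop{\rm Spec \, } R$ together with an isomorphism of their restrictions to $\mathop{\rm Spec \, } K$; this is how it is used in the proof of Theorem \ref{S-completness}.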
\begin{Definition}
We say that a morphism $f: {\mathcal X} \to {\mathcal Y}$ of locally noetherian algebraic stacks is \emph{S-complete} if for any DVR $R$ and any commutative diagram
$$\xymatrix{
\overline{\mathop{\rm ST}}_R \setminus 0\ar[d]\ar[r]&{\mathcal X} \ar[d] \\
\overline{\mathop{\rm ST}}_R \ar@{-->}[ru]\ar[r]&{\mathcal Y}.\\
}$$
of solid arrows, there exists a unique dashed arrow filling in the diagram.
\end{Definition}
Let $Y$ be a projective scheme over an algebraically closed field $k$ and let ${\mathcal O}_Y(1)$ be an ample line bundle
(one can also consider the general relative situation as in Section \ref{moduli-section} but we state the results in the simplest possible case to simplify notation). Let us also fix a smooth $k$-Lie algebroid $L$ on $Y$.
Then we consider the moduli stack ${\mathcal M} ^{L, \rm{ss}}_{d,d'}(Y)$ of objects of $\mathop{\rm Coh} ^L_{d}(Y)$ that are semistable in $\mathop{\rm Coh} ^L_{d,d'}(Y)$.
\begin{Theorem}\label{S-completness}
The moduli stack ${\mathcal M}^{L, \rm{ss}}_{d,d'}(Y)$ is S-complete over $\mathop{\rm Spec \, } k$.
\end{Theorem}
\begin{proof}
Let us fix a commutative diagram
$$\xymatrix{
\overline{\mathop{\rm ST}}_R \setminus 0\ar[d]\ar[r]& {\mathcal M}^{L, \rm{ss}}_{d,d'}(Y)\ar[d] \\
\overline{\mathop{\rm ST}}_R \ar[r]&\mathop{\rm Spec \, } k \\
}$$
and let us consider $X=Y\times _k \mathop{\rm Spec \, } R \to S=\mathop{\rm Spec \, } R$.
Giving a morphism $\overline{\mathop{\rm ST}}_R \setminus 0\to {\mathcal M}^{L, \rm{ss}}_{d,d'}(Y)$ is equivalent to giving two $R$-flat ${\mathcal O}_X$-coherent $L$-modules $E_1$ and $E_2$ such that $(E_1)_k$ and $(E_2)_k$ are semistable in $\mathop{\rm Coh} ^L_{d,d'}(X_k)$ together with
an isomorphism $\varphi: (E_1)_K\to (E_2)_K$ of $L_K$-modules.
Let us use the notation from the proof of Theorem \ref{slope-Langton2} and let us set
$$F_j=\left\{ \begin{array}{cl}
E(-j, 0) & \hbox{for } j\le 0,\\
E(0, j) & \hbox{for } 1\le j.\\
\end{array} \right.$$
Let us now consider the diagram of maps
$$\xymatrix{
\cdots\ar@/^1pc/[r]^{1}& F_{-2}\ar@/^1pc/[r]^{1}\ar@/^1pc/[l]^{\pi} & F_{-1}\ar@/^1pc/[r]^{1}\ar@/^1pc/[l]^{\pi}&F_0\ar@/^1pc/[r]^{\pi}\ar@/^1pc/[l]^{\pi} &F_1\ar@/^1pc/[r]^{\pi}\ar@/^1pc/[l]^{1} &F_2\ar@/^1pc/[r]^{\pi }\ar@/^1pc/[l]^{1} &\cdots \ar@/^1pc/[l]^{1}\\
}$$
By assumption $\pi ^{n+m}E_1 \subset \pi^m E_2'\subset E_1$, so $n+m\ge 0$. Replacing $E_1$ with $E_2$ if necessary, we can therefore assume that $n\ge 0$. The proof of Theorem \ref{slope-Langton2} shows that we have $F_j=E_2'$ for $j\le -n$ and $F_j=E_1$ for $j\ge n+m$. By \cite[Remark 3.36]{AHLH} and the proof of Theorem \ref{slope-Langton2},
this gives the required map $\overline{\mathop{\rm ST}}_R \to {\mathcal M}^{L, \rm{ss}}_{d,d'}(Y)$.
\end{proof}
\section{Modules over Lie algebroids in positive characteristic}\label{modules-char-p}
Let $f: X\to S$ be a morphism of noetherian schemes, where $S$ is a scheme of characteristic $p>0$.
Let $L$ be a smooth restricted ${\mathcal O}_S$-Lie algebroid on $X$, i.e., a locally free ${\mathcal O}_X$-module $L$ equipped with a restricted ${\mathcal O}_S$-Lie algebra structure (i.e., an ${\mathcal O}_S$-Lie algebra structure with the $p$-th power operation) and an anchor map $L\to T_{X/S}$ compatible with $p$-th power map (see \cite[Definitions 2.1 and 4.2]{La}). By $\Lambda _L$ we denote the universal enveloping algebra of the Lie algebroid $L$ (see \cite[p.~515]{La}).
\subsection{$p$-curvature}
Let $F_X:X\to X$ denote the absolute Frobenius morphism.
Let us recall that for an $L$-module $M=(E, \nabla: L\to {\mathcal E}nd _{{\mathcal O}_S} E)$ we can define its $p$-curvature
$\psi (\nabla): L\to {\mathcal E}nd _{{\mathcal O}_S} E$ by sending $x\in L$ to $(\nabla (x))^{p}-\nabla (x^{[p]})$. In fact, this gives rise
to a map $\psi (\nabla) : F_X^*L\to {\mathcal E}nd _{{\mathcal O}_X} E$, that we also call the $p$-curvature of $M$ (see \cite[4.4]{La}).
In this case, $(E, \psi(\nabla))$ defines an $F_X^*L$-module, where $F_X^*L$ has a trivial ${\mathcal O}_S$-Lie algebroid structure
(equivalently, we get an $F_X^*L$-coHiggs sheaf $(E, E\to E\otimes _{{\mathcal O}_X} F_X^*\Omega_L)$). We will reinterpret this sheaf in terms of sheaves
on the total space ${\mathbb V}( F^*_XL)$ of $F_X^*L$ as follows.
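Before doing so, here is a basic example of the $p$-curvature (a sketch in the simplest hypothetical case: $X={\mathbb A}^1_S$ with coordinate $x$, $L=T_{X/S}$ with its standard restricted structure and $E={\mathcal O}_X$). Consider the connection with $\nabla (\partial _x)=\partial _x+a$ for some function $a$. Since $\partial _x^{[p]}=0$ (the $p$-th iterate of $\partial _x$ acts as zero on ${\mathcal O}_X$), the $p$-curvature acts as multiplication by
$$\psi (\nabla )(\partial _x)=(\partial _x+a)^p-\nabla (\partial _x^{[p]})=a^p+\partial _x^{p-1}(a);$$
for instance, for $p=2$ one checks directly that $(\partial _x+a)^2=\partial _x^2+a^2+\partial _x(a)$ acts on ${\mathcal O}_X$ as multiplication by $a^2+\partial _x(a)$.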
Let us recall the following corollary of \cite[Lemma 4.5]{La}, generalizing an earlier known result for the ring of differential operators:
\begin{Lemma} \label{BMR-lemma}
The map
$\imath : F^*_XL\to \Lambda_L$ sending $x\otimes 1\in F^*_XL= F^{-1}_XL\otimes_{F_X^{-1}{\mathcal O}_X}{\mathcal O}_X$
for $x\in L$ to $\imath(x\otimes 1):=x^p-x^{[p]}\in \Lambda_L $ is ${\mathcal O}_X$-linear and its
image is contained in the centralizer $Z_{\Lambda_L} ({\mathcal O}_X)$ of ${\mathcal O}_X$ in $\Lambda_L$. Moreover, $\imath$ extends to an inclusion of
the symmetric algebra $S^{\bullet}(F^*_XL)$ into $Z_{\Lambda_L} ({\mathcal O}_X)$.
\end{Lemma}
The above lemma shows that there exists a sheaf of (usually non-commutative) rings ${\tilde\Lambda}_{L}$
with an injective homomorphism of sheaves of rings ${\mathcal O}_{{\mathbb V} ( F^*_XL)}\to {\tilde\Lambda}_{L}$ such that
$$\pi _* {\tilde\Lambda}_{L}={\Lambda}_L ,$$
where $\pi : {\mathbb V}(F_X^*L)\to X$ denotes the canonical projection. By \cite[Lemma 4.6]{La}
${\tilde \Lambda}_L$ is locally free of finite rank both as a left and a right ${\mathcal O}_{{\mathbb V} (F_X^*L)}$-module.
In the following ${\mathrm {\mathop{{\rm QCoh}}} \, } (Y, {\mathcal A})$ denotes the category of (left) ${\mathcal A}$-modules, which are quasicoherent as ${\mathcal O}_Y$-modules.
\begin{Lemma}\label{equivalence}
We have an equivalence of categories
$${\mathrm {\mathop{{\rm QCoh}}} \, } (X, L) \simeq {\mathrm {\mathop{{\rm QCoh}}} \, } ({\mathbb V} (F_X^*L), {\tilde\Lambda}_{L})$$
such that if $M$ is an $L$-module and $\tilde M$ is the corresponding $\tilde \Lambda _{L}$-module then
$$\pi _* \tilde M=M.$$
Moreover, $M$ is coherent as an ${\mathcal O}_X$-module if and only if $\tilde M$ is coherent as an ${\mathcal O}_{{\mathbb V} (F_X^*L)}$-module
and its support is proper over $X$.
\end{Lemma}
\begin{proof}
Since $\pi $ is affine, we have the following equivalences of categories:
$${\mathrm {\mathop{{\rm QCoh}}} \, } (X, L) \simeq {\mathrm {\mathop{{\rm QCoh}}} \, } (X, \Lambda_ L)
\simeq {\mathrm {\mathop{{\rm QCoh}}} \, } ({\mathbb V} ( F^*_XL), {\tilde\Lambda}_{L}).$$
If $M$ is coherent then $\tilde M$ is coherent. Let us fix a relative compactification $Y$ of ${\mathbb V}(F_X^*L)$, e.g.,
$Y={\mathbb P} (F_X^*L\oplus {\mathcal O}_X)\to X$. The support of $\tilde M$ is quasi-finite over $X$, so it does not intersect
the divisor at infinity $D=Y- {\mathbb V}(F_X^*L)$. Since ${\mathrm {Supp}}\, \tilde M$ is closed in ${\mathbb V}(F_X^*L)$, it is also closed in $Y$ and hence it is proper over $X$. On the other hand, if $\tilde M$ is coherent and the support of $\tilde M$ is proper over $X$ then $M=\pi _* \tilde M$ is coherent.
\end{proof}
Let $M=(E, \nabla: L\to {\mathcal E}nd _{{\mathcal O}_S} E)$ be an $L$-module, quasi-coherent as an ${\mathcal O}_{X}$-module and let
$\tilde M$ be a $\tilde \Lambda _{L}$-module corresponding to $M$. Let $\tilde N$ denote $\tilde M$ considered as
an ${\mathcal O}_{{\mathbb V} (F_X^*L)}$-module. Using the standard equivalence
$${\mathrm {\mathop{{\rm QCoh}}} \, } (X, F_X^*L) \simeq {\mathrm {\mathop{{\rm QCoh}}} \, } ({\mathbb V} ( F^*_XL), {\mathcal O}_{{\mathbb V}( F^*_XL)}),$$
we get an $F_X^*L$-module $N=\pi _*\tilde N$. Lemma \ref{BMR-lemma} shows that this module is equal to $(E, \psi (\nabla))$, which gives another interpretation of the $p$-curvature.
\subsection{Modules on the Frobenius twist} \label{F-twist}
Now let us assume that $X/S$ is smooth of relative dimension $d$. Let $F_{X/S}: X\to X'$ denote the relative Frobenius morphism over $S$ and let
$L'$ be the pull back of $L$ via $X'\to X$. \cite[Lemma 4.5]{La} shows that $\imath :
S^{\bullet}(F^*_XL)=F^*_{X/S}S^{\bullet}(L')\to \Lambda_L$ induces a homomorphism of sheaves of
${\mathcal O}_{X'}$-algebras
$$S^{\bullet}(L')\to F_{X/S, *}(Z(\Lambda_L))\subset \Lambda_L':=F_{X/S, *}\Lambda_L .$$
In particular, it makes $\Lambda'_L$ into a quasi-coherent sheaf
of $S^{\bullet}(L')$-modules. This defines a
quasi-coherent sheaf of ${\mathcal O}_{{\mathbb V} (L')}$-algebras
${\tilde\Lambda}_{L}'$ on the total space ${\mathbb V}(L')$ of $L'$. By construction
$$\pi '_* {\tilde\Lambda}_{L}'={ F}_{X/S, *}{\Lambda}_L ,$$
where $\pi' : {\mathbb V}(L')\to X'$ denotes the canonical projection.
Let us recall that by \cite[Theorem 4.7]{La}
${\tilde \Lambda}_L'$ is a locally free ${\mathcal O}_{{\mathbb V} (L')}$-module of rank $p^{m+d}$, where $m$ is the rank of $L$.
The first part of the following lemma generalizes \cite[Lemma 2.8]{Gr}.
The second part is a generalization of \cite[Lemma 6.8]{Si2}.
\begin{Lemma}\label{equivalence2}
We have an equivalence of categories
$${{\mathop{{\rm QCoh}}} \, } (X, L) \simeq {\mathrm {\mathop{{\rm QCoh}}} \, } ({\mathbb V} (L'), {\tilde\Lambda}_{L}')$$
such that if $M$ is an $L$-module and $M'$ is the corresponding $\tilde \Lambda _{L}'$-module then
$$\pi '_* M'={ F}_{X/S, *}M.$$
Moreover, $M$ is coherent as an ${\mathcal O}_X$-module if and only if $M'$ is coherent as an ${\mathcal O}_{{\mathbb V} (L')}$-module
and its support is proper over $X'$ (or equivalently the closure of the support of $M'$ does not intersect the divisor at infinity).
\end{Lemma}
\begin{proof}
Since both $\pi '$ and $F_{X/S}$ are affine, we have the following equivalences of categories:
$${\mathrm {\mathop{{\rm QCoh}}} \, } (X, L) \simeq {\mathrm {\mathop{{\rm QCoh}}} \, } (X, \Lambda_ L)\simeq
{\mathrm {\mathop{{\rm QCoh}}} \, } (X', { F}_{X/S, *}{\Lambda}_L )={\mathrm {\mathop{{\rm QCoh}}} \, } (X',\pi '_* {\tilde\Lambda}_{L}')
\simeq {\mathrm {\mathop{{\rm QCoh}}} \, } ({\mathbb V} (L'), {\tilde\Lambda}_{L}').$$
The second part can be proven in the same way as Lemma \ref{equivalence}.
Namely, $M$ is coherent if and only if ${ F}_{X/S, *}M=\pi'_*M'$ is coherent, which is equivalent to the fact that $M'$ is coherent and its support is proper over $X'$.
\end{proof}
We have a cartesian diagram
$$\xymatrix{
{\mathbb V} (F_X^*L) \ar[d]^{\pi}\ar[r]^{\tilde F_{X/S}} &{\mathbb V} (L')\ar[d]^{\pi'}\\
X\ar[r]^{F_{X/S}}&X'\\
}$$
coming from the equality $F_{X/S}^*L'=F_X^*L$. By definition we have $\tilde F_{X/S, *} {\tilde\Lambda}_{L}={\tilde\Lambda}_{L}'$ and equivalences of Lemmas \ref{equivalence} and \ref{equivalence2}
are compatible with each other. More precisely, by the flat base change
if $M=(E, \nabla: L\to {\mathcal E}nd _{{\mathcal O}_S} E)$ is an $L$-module, $\tilde M$ is the corresponding $\tilde \Lambda _{L}$-module and $M'$ is the corresponding $\tilde \Lambda _{L}'$-module then
$$\tilde F_{X/S, *}\tilde M=M'.$$
The ${\mathcal O}_{{\mathbb V} (L')}$-module structure on $M'$ corresponds to the $S^{\bullet} (L')$-module structure on $E'=F_{X/S,*}E$. The corresponding map $L'\to {\mathcal E}nd _{{\mathcal O}_{X'}} E'$ is denoted by $\psi'(\nabla)$. If we interpret the $p$-curvature of $M$ as an ${\mathcal O}_X$-linear map $F^*_{X/S}L'\otimes E\to E$, take the push-forward by $F_{X/S}$ and use the projection formula, we get
an ${\mathcal O}_{X'}$-linear map $L'\otimes E'\to E'$ corresponding to $\psi '(\nabla)$.
Note that if $E$ has rank $r$ then $E'$ has rank $p^dr$ as an ${\mathcal O}_{X'}$-module and
$(E', \psi'(\nabla))$ is an $L'$-module, where $L'$ is considered with trivial Lie algebroid structure.
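For instance (a sketch in the simplest hypothetical case $S=\mathop{\rm Spec \, } k$ and $X={\mathbb A}^d_k=\mathop{\rm Spec \, } k[x_1,\dots ,x_d]$), one has $X'={\mathbb A}^d_k=\mathop{\rm Spec \, } k[x'_1,\dots ,x'_d]$ with $F_{X/S}^{\#}(x'_i)=x_i^p$, and
$$F_{X/S,*}{\mathcal O}_X=\bigoplus _{0\le i_1,\dots ,i_d\le p-1}{\mathcal O}_{X'}\cdot x_1^{i_1}\cdots x_d^{i_d}$$
is free of rank $p^d$ over ${\mathcal O}_{X'}$, in agreement with the rank count above for $r=1$.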
\section{Moduli stacks in positive characteristic}\label{moduli-section}
In this section we fix the following notation.
Let $f: X\to S$ be a flat projective morphism of noetherian schemes and let
${\mathcal O}_X (1)$ be a relatively ample line bundle. In the following
stability of sheaves on the fibers of $X\to S$ is considered with respect to
this fixed polarization. Assume also that $X/S$ is
a family of $d$-dimensional varieties satisfying Serre's condition $(S_2)$.
Let $L$ be a smooth ${\mathcal O}_S$-Lie algebroid on $X$ and let us set $\Omega_L=L^*$.
\subsection{Moduli stack of $L$-modules}
Let us fix a polynomial $P$. The moduli stack of $L$-modules ${\mathcal M}^{L} (X/S, P)$ is
defined as a lax functor from $({\mathop{{\rm Sch }}} /S) $ to the $2$-category of groupoids, where ${\mathcal M}^{L} (X/S, P)(T)$ is the category whose objects are $T$-flat families
of $L$-modules with Hilbert polynomial $P$ on the fibres of $X_T\to T$,
and whose morphisms are isomorphisms of quasi-coherent sheaves. ${\mathcal M}^{L} (X/S,
P)$ is an Artin algebraic stack for the fppf topology on $({\mathop{{\rm Sch }}} /S)$, which is
locally of finite type. It contains open substacks
${\mathcal M}^{L, \rm{tf}} (X/S, P)$ and ${\mathcal M}^{L, \rm{ss}} (X/S, P)$,
which correspond to families of ${\mathcal O}_X$-torsion free and Gieseker semistable $L$-modules, respectively.
\subsection{Hitchin's morphism for $L$-coHiggs
sheaves}
Let us fix some positive integer $r$ and consider the functor which to an $S$-scheme $h:
T\to S$ associates
$$\bigoplus _{i=1}^rH^0(X_T/T, {{S} }^i\Omega_{L,T}).$$
By \cite[Lemma 3.6]{La} this functor is representable by an
$S$-scheme ${\mathbb V} ^L (X/S, r)$.
Let us fix a polynomial $P$ of degree $d=\dim (X/S)$, corresponding to rank $r$ sheaves.
If we consider $L$ with a trivial ${\mathcal O}_S$-Lie algebroid structure (we will denote it by $L_{\rm triv}$), then the corresponding moduli stacks are denoted by ${\mathcal M}^L_{\rm Dol}(X/S, P)$, ${\mathcal M}^{L, \rm{tf}}_{\rm Dol} (X/S, P)$ etc. (the Dolbeault moduli stacks of $\Omega_L$-Higgs sheaves).
One can define Hitchin's morphism
$$H_L: {\mathcal M}^{L, \rm{tf}}_{\rm Dol}(X/S, P) \to {\mathbb V} ^L(X/S, r)$$
by evaluating elementary symmetric polynomials $\sigma _i$ on $E\to E\otimes \Omega_{L_T}$
corresponding to an $L_{\rm triv}$-module structure on a locally free part of $E$ (see \cite[3.5]{La}).
Alternatively, we can describe it as follows. Assume $E$ is a locally free ${\mathcal O}_{X_T}$-module.
An $L_{\rm triv}$-module structure on $E$ can be interpreted as
a section $s: {\mathcal O}_T\to {\mathcal E}nd E\otimes \Omega_{L_T}$. Locally this gives a matrix with values in $\Omega _{L_T}$
and we can consider the characteristic polynomial
$$\det (t\cdot I-s)=t^r+\sigma_1 (s) t^{r-1}+...+\sigma _r (s),$$
where $t$ is a formal variable.
To see that this makes sense one needs to use the integrability condition $s\wedge s=0$ (which is obtained from
the $L_{\rm triv}$-module structure).
These local sections glue to $$H_L((E,s))= (\sigma_1 (s), ..., \sigma _r (s))\in {\mathbb V} ^L(X/S, r) (T). $$
In general, one uses this construction on a big open subset on which $E$ is locally free and uniquely extends the sections using Serre's condition $(S_2)$.
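For example (a hypothetical rank two illustration), if $r=2$ then
$$\det (t\cdot I-s)=t^2-\mathrm{tr}(s)\, t+\det (s),$$
so with the sign convention of the displayed characteristic polynomial $H_L((E,s))=(\sigma _1(s),\sigma _2(s))=(-\mathrm{tr}(s),\det (s))$, where $\sigma _1(s)\in H^0(X_T/T, \Omega_{L,T})$ and $\sigma _2(s)\in H^0(X_T/T, {{S} }^2\Omega_{L,T})$.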
\subsection{Langton type properness theorem}
Let $R$ be a discrete valuation ring with maximal ideal $m$ and the quotient field $K$. Let us assume that the residue field $k=R/m$ is algebraically closed.
The following Langton's type theorem for modules over Lie algebroids is a special case of
\cite[Theorem 5.3]{La}.
\begin{Theorem}\label{slope-Langton}
Let $S=\mathop{\rm Spec \, } R $ and let $F$ be an $R$-flat ${\mathcal O}_X$-coherent $L$-module of relative
pure dimension $n$ such that the $L_K$-module $F_K=F\otimes _RK$
is Gieseker semistable. Then there exists an $R$-flat $L$-submodule $E\subset
F$ such that $E_K=F_K$ and $E_k$ is a Gieseker semistable
$L_k$-module on $X_k$.
\end{Theorem}
\subsection{Properness of the $p$-Hitchin morphism}
Assume that $S$ has characteristic $p>0$ and $L$ is a restricted ${\mathcal O}_S$-Lie algebroid.
Then the $p$-curvature gives rise to a morphism of moduli stacks
$$\begin{array}{cccc}
\Psi_L: & {\mathcal M}^L(X/S, P)&\to& {\mathcal M}^{F_X^*L}_{\rm Dol}(X/S, P)\\
&(E, \nabla)&\to&(E,\psi (\nabla)).
\end{array}
$$
If $(E, \psi(\nabla))$ is Gieseker semistable then $(E,\nabla)$ is Gieseker semistable. However, the opposite implication fails. For example, one can consider any semistable vector bundle $G$ on a smooth projective curve $X$ defined over a field of characteristic $p$ such that $F_X^*G$ is not semistable. Then $E=F_X^*G$ has a canonical connection $\nabla$
with vanishing $p$-curvature. In this case $(E, \nabla)$ is semistable but $(E, \psi(\nabla))$ is not semistable.
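Here the canonical connection can be described explicitly (the usual Frobenius-descent, or Cartier, connection): writing $E=F_X^*G={\mathcal O}_X\otimes _{F_X^{-1}{\mathcal O}_X}F_X^{-1}G$, it is given by
$$\nabla (a\otimes s)=da\otimes s \quad \hbox{for } a\in {\mathcal O}_X,\ s\in F_X^{-1}G,$$
and its $p$-curvature vanishes since $\nabla (\partial )^p(a\otimes s)=\partial ^p(a)\otimes s=\nabla (\partial ^{[p]})(a\otimes s)$ for every local derivation $\partial$.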
This shows that the morphism $\Psi_L$ does not restrict to a morphism of moduli stacks of semistable objects.
But since Gieseker semistable $L$-modules are torsion free, we can still consider $\Psi _L: {\mathcal M}^{L, ss} (X/S, P)\to {\mathcal M}^{F_X^*L, \rm{tf}}_{\rm Dol}(X/S, P)$. The composition of this morphism with Hitchin's morphism $H_{F_X^*L}: {\mathcal M}^{F_X^*L, \rm{tf}}_{\rm Dol}(X/S, P) \to {\mathbb V}^{ F_X^*L}(X/S,r)$ will be called a \emph{$p$-Hitchin morphism}.
\begin{Theorem}\label{p-Hitchin-properness}
Let us fix a polynomial $P$ of degree $d=\dim (X/S)$, corresponding to rank $r$ sheaves.
The $p$-Hitchin morphism $H_{L,p}: {\mathcal M}^{L, ss} (X/S, P)\to {\mathbb V}^{ F_X^*L}(X/S,r)$ is universally closed.
\end{Theorem}
\begin{proof}
Let us consider a commutative diagram
$$\xymatrix{
\mathop{\rm Spec \, } \, K \ar[d]\ar[r]&{\mathcal M} ^{L, ss}(X/S, P)\ar[d]\\
\mathop{\rm Spec \, } \, R \ar@{-->}[ru]\ar[r]&{\mathbb V} ^{ F_X^*L}(X/S,r).\\
}$$
We need to show existence of the dashed arrow making the diagram commutative. Taking a base change we can assume that $S=\mathop{\rm Spec \, } R$. Then we need to show that for a fixed semistable $L_K$-module $M$ on $X_K$ there exists an $R$-flat ${\mathcal O}_X$-coherent $L$-module $F$ such that $M\simeq F\otimes _RK$ and $F_k$ is Gieseker semistable.
\emph{Step 1.} Let us show that there exists an $R$-flat ${\mathcal O}_X$-quasicoherent $L$-module $F$
such that $M\simeq F\otimes _RK$.
By Lemma \ref{equivalence} there exists a $\tilde \Lambda _{L_K}$-module $\tilde M$ on ${\mathbb V} (F_{X_K}^*L_K)$ such that
$$(\pi_{X_K})_* \tilde M=M,$$
where $\pi : {\mathbb V} (F_X^* L)\to X$ is the canonical projection and $\pi_{X_K}$ is its restriction to the preimage of $X_K$.
By Lemma \ref{extension} there exists an
$\tilde \Lambda _{L}$-module $\tilde M '$, coherent as an ${\mathcal O}_{{\mathbb V}(F_X^*L)}$-module, which extends $\tilde M$ via
an open immersion ${\mathbb V}(F_{X_K}^*L _K)\hookrightarrow {\mathbb V}(F_X^*L)$. Then again by
Lemma \ref{equivalence} we get the required $L$-module $F=\pi _* \tilde M'$.
\emph{Step 2.} In this step we show that $F$ is coherent as an ${\mathcal O}_X$-module.
Let $\tilde N$ be the ${\mathcal O}_{{\mathbb V}(F_X^*L)}$-module structure on $\tilde M'$. By the results of Section \ref{modules-char-p}, $\tilde N$ corresponds to the $F_X^*L$-module $\Psi _L (F)$. Let us recall that we have the total spectral scheme ${\mathbb W} ^L(X/S,r)\subset
{\mathbb V}(F_X^*L)\times _S {\mathbb V} ^{F_X^*L}(X/S, r)$, which is finite and flat over
$X\times _S {\mathbb V} ^{F_X^*L} (X/S, r)$ (see \cite[p.~521]{La}). By construction the support of $\tilde N_K$ coincides set-theoretically with the spectral scheme of $\Psi _{F_{X_K}^*L_K}(M)$, so it is contained in the closed subscheme ${\mathbb W} ^{F_X^* L}(X/S,r)\times _{{\mathbb V} ^{F_X^*L}(X/S, r)}\mathop{\rm Spec \, } R $ of $ {\mathbb V}(F_X^*L)$ (here we use the existence of $\mathop{\rm Spec \, } R\to {\mathbb V} ^{ F_X^*L}(X/S,r)$ making the diagram at the beginning of the proof commutative). This subscheme does not intersect the divisor at infinity (when fixing an appropriate relative compactification of ${\mathbb V}(F_X^*L)$) and it contains the support of $\tilde N$. So the support of $\tilde N$ is proper over $\mathop{\rm Spec \, } R$, which implies that $F$ is coherent as an ${\mathcal O}_X$-module.
\emph{Step 3.} Now the required assertion follows from Theorem \ref{slope-Langton}.
\end{proof}
In the proof of the above theorem we used the following lemma.
\begin{Lemma}\label{extension}
Let $X$ be a quasi-compact and quasi-separated scheme and let $j: U\to X$ be
a quasi-compact open immersion of schemes. Let ${\mathcal A}$ be a sheaf of
associative and unital (possibly non-commutative) ${\mathcal O}_X$-algebras, which is locally free of finite rank as a (right)
${\mathcal O}_X$-module. Let $E$ be a left ${\mathcal A} _U$-module, which is quasi-coherent of finite type as an ${\mathcal O}_U$-module.
Then there exists a left ${\mathcal A}$-module $G$, which is quasi-coherent of finite type as an ${\mathcal O}_X$-module
and such that $G_U\simeq E$ as ${\mathcal A}_U$-modules.
\end{Lemma}
\begin{proof}
By \cite[Tag 01PE and {Tag 01PF}]{St} there exists a quasi-coherent ${\mathcal O}_X$-submod\-ule $E'\subset j_*E$ such that $E'|_U=E$ and $E'$ is of finite type. Note that $j_*E$ is a $j_*({\mathcal A}_U)$-module and hence, via the canonical map ${\mathcal A}\to j_*({\mathcal A}_U)$, also an ${\mathcal A}$-module. So we can consider $G:={\mathcal A}\cdot E' \subset j_*E$. Clearly, $G$ is an ${\mathcal A}$-module
of finite type (as it is the image of ${\mathcal A}\otimes E'$) and $G_U\simeq E$ as ${\mathcal A}_U$-modules.
\end{proof}
\begin{Remark}
Theorem \ref{p-Hitchin-properness} was stated in passing in \cite[p.~531, l.~2-3]{La} but without a full proof.
\end{Remark}
\subsection{Properness of the $p$-Hitchin morphism II}
Assume that $S$ has characteristic $p>0$ and $X/S$ is smooth of relative dimension $d$.
We assume that $L$ is a restricted ${\mathcal O}_S$-Lie algebroid and we use notation from Subsection \ref{F-twist}. We also fix a polynomial $P$ of degree $d=\dim (X/S)$, corresponding to rank $r$ sheaves.
Then the $p$-curvature gives rise to a morphism of moduli stacks
$$\begin{array}{cccc}
\Psi_L': & {\mathcal M}^L(X/S, P)&\to& {\mathcal M}^{L'}_{\rm Dol}(X'/S, P')\\
&(E, \nabla)&\to&(F_{X/S, *}E,\psi '(\nabla)),
\end{array}
$$
where $P'$ is the Hilbert polynomial of the corresponding push-forward by $F_{X/S}$.
As before we can consider the composition of $\Psi _L': {\mathcal M}^{L, ss} (X/S, P)\to {\mathcal M}^{L', \rm{tf}}_{\rm Dol}(X'/S, P')$ with Hitchin's morphism $H_{L'}: {\mathcal M}^{L', \rm{tf}}_{\rm Dol}(X'/S, P') \to {\mathbb V}^{ L'}(X'/S,p^d r)$. This will be called a
\emph{$p'$-Hitchin morphism}.
Essentially the same proof as that of Theorem \ref{p-Hitchin-properness} gives the following theorem:
\begin{Theorem}\label{Hodge-Hitchin-properness2}
Let us fix a polynomial $P$ of degree $d$, corresponding to rank $r$ sheaves.
The $p'$-Hitchin morphism $H_{L', p'}:{\mathcal M}^{L, ss} (X/S, P)\to {\mathbb V}^{ L'}(X'/S,p^d r)$ is universally closed.
\end{Theorem}
We have the following diagram
$$\xymatrix{
{\mathcal M}^{L, ss} (X/S, P)\ar[d]^{H_{L', p'}}\ar[r]^{H_{L, p}}& {\mathbb V}^{ F_{X/S}^* L'}(X/S,r) \\
{\mathbb V}^{ L'}(X'/S,p^d r) & {\mathbb V}^{ L'}(X'/S,r)\ar[u]^{F_{X/S}^*}\ar[l]^{(\cdot )^{p^d}}.\\
}$$
But there are no natural maps between ${\mathbb V}^{ F_{X/S}^* L'}(X/S, r)$ and
${\mathbb V}^{ L'}(X'/S,p^d r)$, and in general the maps $H_{L, p}$ and $H_{L', p'}$ do not factor through ${\mathbb V}^{ L'}(X'/S,r)$,
so Theorems \ref{p-Hitchin-properness} and \ref{Hodge-Hitchin-properness2} give different results.
However, these maps do factor through ${\mathbb V}^{ L'}(X'/S,r)$ in the following special case: $L={\mathbb T}_{X/S}$ is the canonical restricted Lie algebroid associated to $T_{X/S}$, with anchor $\alpha={\mathop{\rm id}}$, the usual Lie bracket of derivations and the $p$-th power operation $(\cdot)^{[p]}$ sending a derivation $D$ to the derivation acting as the differential operator $D^p$.
Let us recall that for this Lie algebroid by \cite[Proposition 2.2.2 and Theorem 2.2.3]{BMR} $\tilde \Lambda _L' =F_{X/S, *}\tilde \Lambda_L$ is a sheaf of Azumaya ${\mathcal O}_{{\mathbb V} (L')}$-algebras and we have a canonical isomorphism
$$\varphi: F_{X/S}^{*} F_{X/S, *}\tilde \Lambda_L\to {\mathcal E}nd_{{\mathcal O}_{{\mathbb V} ( F^*_{X/S}L')}}\tilde \Lambda_L$$
of sheaves of rings (note that our conventions of left and right modules are opposite to those in \cite{BMR}).
Let ${\mathcal A}$ be a sheaf of ${\mathcal O}_Y$-algebras. In the following ${\mathrm {QCoh} \, } _{\mathrm{fp}} (Y, {\mathcal A})$ denotes the category of left ${\mathcal A}$-modules, which are quasicoherent and locally finitely presented as ${\mathcal O}_Y$-modules. In the formulation of the following theorem we use the above isomorphism to identify the $ F_{X/S}^{*} F_{X/S, *}\tilde \Lambda_L$-module structure on $ F_{X/S}^{*} F_{X/S, *}\tilde M$
with the corresponding ${\mathcal E}nd_{{\mathcal O}_{{\mathbb V} ( F^*_{X/S}L')}}\tilde \Lambda_L$-module structure.
\begin{Theorem}\label{Laszlo-Pauly}
Let $Y\to T$ be a smooth morphism with $T$ of characteristic $p>0$ and let $L={\mathbb T}_{Y/T}$.
Then we have equivalences of categories
$$\tilde F_{Y/T}^{*} \tilde F_{Y/T, *} : {\mathrm {QCoh} \, } _{\mathrm{fp}} ({\mathbb V} (F_{Y/T}^*L'), \tilde \Lambda _L )\to
{\mathrm {QCoh} \, } _{\mathrm{fp}} ({\mathbb V} (F_{Y/T}^*L'), {\mathcal E}nd_{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}}\tilde \Lambda_L ) $$
and
$$ \tilde \Lambda_L ^*\otimes _{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}} : {\mathrm {QCoh} \, } _{\mathrm{fp}} ({\mathbb V} (F_{Y/T}^*L'), \tilde \Lambda _L )\to {\mathrm {QCoh} \, } _{\mathrm{fp}} ({\mathbb V} (F_{Y/T}^*L'), {\mathcal E}nd_{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}}\tilde \Lambda_L ).$$
Moreover, there exists a natural transformation
$$\varphi: \tilde F_{Y/T}^{*} \tilde F_{Y/T, *} \to \tilde \Lambda_L ^*\otimes _{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}},$$
which is an isomorphism of functors.
\end{Theorem}
\begin{proof}
The fact that $\tilde \Lambda_L ^*\otimes _{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}}$ is an equivalence of categories follows from the standard Morita equivalence. So to prove the theorem it is sufficient to show that $\varphi$ is an isomorphism of functors.
The natural transformation $\varphi$ is induced from the fact that $\tilde F_{Y/T}^{*} $ is left adjoint to
$\tilde F_{Y/T, *}$.
Let $M$ be an $L$-module, which is quasi-coherent and locally finitely presented as an ${\mathcal O}_Y$-module.
Let $\tilde M$ be the $\tilde \Lambda_L$-module corresponding to $M$.
Then the canonical
map $\tilde F_{Y/T}^{*} \tilde F_{Y/T, *}\tilde M\to \tilde M$ induces
$$\varphi: \tilde F_{Y/T}^{*} \tilde F_{Y/T, *} \tilde M\to \tilde \Lambda_L ^*\otimes _{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}} \tilde M$$ and we need to show that it is an isomorphism of ${\mathcal E}nd_{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}}\tilde \Lambda_L$-modules.
Note that this statement is local both in $T$ and $Y$, so we can assume that they are both affine and $\tilde M$
can be written as the cokernel of the homomorphism $\tilde \Lambda_L^{\oplus m}\to \tilde \Lambda_L^{\oplus n}$
of trivial $\tilde \Lambda_L$-modules of finite rank.
This induces a commutative diagram
$$\xymatrix{
\tilde F_{Y/T}^{*} \tilde F_{Y/T, *} (\tilde \Lambda_L^{\oplus m} )\ar[d]^{\simeq}\ar[r]&
\tilde F_{Y/T}^{*} \tilde F_{Y/T, *} (\tilde \Lambda_L^{\oplus n}) \ar[r]\ar[d]^{\simeq}
&\tilde F_{Y/T}^{*} \tilde F_{Y/T, *}\tilde M\ar[r]\ar[d] & 0 \\
\tilde \Lambda_L ^*\otimes _{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}} \tilde \Lambda_L^{\oplus m} \ar[r]&\tilde \Lambda_L ^*\otimes _{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}} \tilde \Lambda_L^{\oplus n} \ar[r] &\tilde \Lambda_L ^*\otimes _{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}} \tilde M\ar[r] & 0 ,\\
}$$
where the two vertical maps are isomorphisms by \cite[Proposition 2.2.2]{BMR}. This implies
that the last vertical map is also an isomorphism.
\end{proof}
\subsection{Properness of the Hodge--Hitchin morphism}
Let us define a restricted Lie algebroid $L={\mathbb T}_{X/S, {\mathbb A}^1}$ on $X\times {\mathbb A}^1/S\times {\mathbb A}^1$
by setting $L:=p_1^*T_{X/S}$ with Lie bracket given by $[\cdot ,
\cdot]_{L}:=p_1^*[\cdot , \cdot]_{{\mathbb T} _{X/S}}\otimes t $, the anchor map $\alpha :=p_1^*{\mathop{\rm id}} \otimes t$
and the $p$-th power operation given by $(\cdot)_L^{[p]}=p_1^*(\cdot)^{[p]}_{{\mathbb T} _{X/S}}\otimes t^{p-1}$.
Then ${\mathcal M}^{L, ss} (X/S, P)$ is the Hodge moduli stack, i.e., the moduli stack of semistable modules with $t$-connections,
and we denote it by ${\mathcal M}_{Hod} (X/S, P)$.
The following result follows easily from Theorem \ref{Laszlo-Pauly}. If $X$ is a smooth projective curve defined over an algebraically closed field of characteristic $p$ this result was proven in \cite[Proposition 3.2]{LP}.
\begin{Corollary}\label{corrected-commutativity}
If $L={\mathbb T}_{X/S, {\mathbb A}^1}$ then there exists a morphism
$$\tilde H_{p}:{\mathcal M}_{Hod} (X/S, P)\to {\mathbb V}^{ L'}(X'/S,r)= {\mathbb V}^{ T_{X'/S}}(X'/S,r)\times {\mathbb A}^1,$$
called the \emph{Hodge--Hitchin morphism}, making the diagram
$$\xymatrix{
{\mathcal M}_{Hod} (X/S, P)\ar[d]^{H_{L', p'}}\ar[r]^{H_{L, p}}\ar[rd]^{\tilde H_{L, p}}& {\mathbb V}^{ F_{X/S}^* L'}(X/S,r) \\
{\mathbb V}^{ L'}(X'/S,p^d r) & {\mathbb V}^{ L'}(X'/S,r)\ar[u]^{F_{X/S}^*}\ar[l]^{(\cdot )^{p^d}}\\
}$$
commutative.
\end{Corollary}
\begin{proof}
We have a cartesian diagram
$$\xymatrix{
{\mathbb V}^{ L'}(X'/S,r) \ar[d]^{(\cdot )^{p^d}} \ar[r]^{F_{X/S}^*}& {\mathbb V}^{ F_{X/S}^* L'}(X/S,r)\ar[d]^{(\cdot )^{p^d}} \\
{\mathbb V}^{ L'}(X'/S,p^d r) \ar[r]^{F_{X/S}^*}& {\mathbb V}^{ F_{X/S}^* L'}(X/S,p^d r),\\
}$$
so it is sufficient to show that the diagram
$$\xymatrix{
{\mathcal M}_{Hod} (X/S, P)\ar[d]^{H_{L', p'}}\ar[r]^{H_{L, p}}& {\mathbb V}^{ F_{X/S}^* L'}(X/S,r)\ar[d]^{(\cdot )^{p^d}} \\
{\mathbb V}^{ L'}(X'/S,p^d r) \ar[r]^{F_{X/S}^*}& {\mathbb V}^{ F_{X/S}^* L'}(X/S,p^d r),\\
}$$
is commutative. Note that $(H_{L, p})^{p^d}$ is given by sending an $ L_{X_T/T}$-module $M$ to the characteristic polynomial of $\tilde \Lambda_L ^*\otimes _{{\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}} \tilde M$. Here we use the proof of \cite[Lemma 4.6]{La}, which shows that over the inverse image of an open subset $U\subset T$ on which $L_T$ is a free ${\mathcal O}_{X_T}$-module, $\tilde \Lambda_{L}$ is a free ${\mathcal O}_{{\mathbb V} ( F^*_{Y/T}L')}$-module of rank $p^d$. Since $F_{X/S}^*\circ H_{L', p'}$ is given by sending $M$ to the characteristic polynomial of $\tilde F_{X_T/T}^{*} \tilde F_{X_T/T, *} \tilde M$, the corollary follows from Theorem \ref{Laszlo-Pauly}.
\end{proof}
\begin{Remark}\begin{enumerate}
\item If $X$ is a smooth projective variety defined over an algebraically closed field of characteristic $p$ and one restricts to $t=1$ (the de Rham case), then commutativity of the upper triangle in the diagram was mentioned in \cite[2.5]{EG} and attributed to \cite{LP}.
However, the proof of \cite{LP} uses an assumption that $X$ is a curve. Recently, M. de Cataldo, A. F. Herrero and S. Zhang noticed that even in that case the proof of \cite{LP} needs some additional arguments.
\item The above corollary also generalizes \cite[Theorem 2.17]{EG}. That theorem shows the existence of the diagonal map making the lower triangle of the diagram commutative, after restricting to the de Rham and locally free part.
\end{enumerate}
\end{Remark}
For a curve $X$ the following corollary is one of the main theorems of \cite{dCZ}.
\begin{Corollary}\label{Hodge-Hitchin-properness}
The Hodge--Hitchin morphism
$$\tilde H_{p}:{\mathcal M}_{Hod} (X/S, P)\to {\mathbb V}^{ T_{X'/S}}(X'/S,r)\times {\mathbb A}^1$$
is of finite type, universally closed and S-complete.
In particular, the Hodge moduli space of relative integrable $t$-connections on $X/S$ with fixed Hilbert polynomial $P$ is proper over ${\mathbb V}^{ T_{X'/S}}(X'/S,r)\times {\mathbb A}^1$.
\end{Corollary}
\begin{proof}
The fact that the moduli stack is of finite type follows from \cite{La0}. The fact that it is universally closed follows from the definition and Theorem \ref{p-Hitchin-properness}. S-completeness follows from Theorem \ref{S-completness}. The second part of the corollary follows from the first one.
\end{proof}
\end{document} |
\begin{document}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newcommand{\p}{p}
\newcommand{\s}{s}
\newcommand{\q}{q}
\newcommand{\F}[1][q]{\mathbb{F}_{#1}}
\newcommand{\mathbb{K}}{\mathbb{K}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\FX}[1][X]{\mathbb{F}_q[#1]}
\newcommand{\mathbb{K}X}[1][X]{\mathbb{K}[#1]}
\newcommand{\bmod}{\bmod}
\newcommand{\ff}[1][M]{
\ifthenelse{\equal{#1}{M}}{f}{}
\ifthenelse{\equal{#1}{X}}{f(X)}{}
\ifthenelse{\equal{#1}{\gamma_{i}}}{f(\gamma_{i})}{}
\ifthenelse{\equal{#1}{\beta_{i}}}{f(\beta_{i})}{}
\ifthenelse{\equal{#1}{\hh}}{f \circ \hh}{}
}
\renewcommand{\gg}[1][M]{
\ifthenelse{\equal{#1}{M}}{g}{}
\ifthenelse{\equal{#1}{X}}{g(X)}{}
\ifthenelse{\equal{#1}{\hh}}{g \circ \hh}{}
}
\newcommand{\hh}[1][M]{
\ifthenelse{\equal{#1}{M}}{h}{}
\ifthenelse{\equal{#1}{X}}{h(X)}{}
}
\newcommand{\degree}[1]{
\ifthenelse{\equal{#1}{\ff}}{d}{}
\ifthenelse{\equal{#1}{\gg}}{e}{}
\ifthenelse{\equal{#1}{\hh}}{d_{\hh}}{}
\ifthenelse{\equal{#1}{\f{n}{X}}}{d_{\ff}^{n}}{}
\ifthenelse{\equal{#1}{\f{n-1}{X}}}{d_{\ff}^{n-1}}{}
}
\newcommand{\n}{n}
\newcommand{\f}[2]{f^{(#1)}(#2)}
\newcommand{\g}[2]{g^{(#1)}(#2)}
\newcommand{\coefficient}[2]{
\ifthenelse{\equal{#2}{\ff}}{a_{#1}}{}
\ifthenelse{\equal{#2}{\gg}}{b_{#1}}{}
\ifthenelse{\equal{#2}{\hh}}{a_{#1}}{}
\ifthenelse{\equal{#2}{\f{n}{X}}}{\left (a_{\degree{\ff}}\right )^{\frac{\degree{\ff}^n-1}{\degree{\ff}-1}}}{}
\ifthenelse{\equal{#2}{\f{n-1}{X}}}{\left (a_{\degree{\ff}}\right )^{\frac{\degree{\ff}^{n-1}-1}{\degree{\ff}-1}}}{}
\ifthenelse{\equal{#2}{\f{n2}{X}}}{\left (a_{\degree{\ff}}\right )^{\frac{\degree{\ff}^{n-1}-1}{\degree{\ff}-1}}}{}
}
\newcommand{\Cf}[1]{
\ifthenelse{\equal{#1}{1}}{C_f}{C_{f^{#1}}}
}
\newcommand{\principal}[1]{\coefficient{\degree{#1}}{#1}}
\newcommand{\element}[1][\alpha]{#1}
\newcommand{\Res}[2]{\mathrm{Res}\left(#1,#2\right)}
\newcommand{\Disc}[1]{\mathrm{Disc}(#1)}
\newcommand{\mathrm{Tr}}{\mathrm{Tr}}
\renewcommand{\gamma}{\gamma}
\renewcommand{\left (}{\left (}
\renewcommand{\right )}{\right )}
\newcommand{\fa}[2]{F_{#2}(a_0,\ldots,a_{\degree{#1}})}
\newcommand{\fy}[2]{F_{#2}(Y,a_1,\ldots,a_{\degree{#1}x})}
\def\mathrm{Orb}(f){\mathrm{Orb}(f)}
\def\mathrm{Nm}{\mathrm{Nm}}
\newcommand{\comm}[1]{\marginpar{
\vskip-\baselineskip
\raggedright\footnotesize
\itshape\hrule\smallskip#1\par\smallskip\hrule}}
\title[Stable Polynomials over Finite Fields]
{Stable Polynomials over Finite Fields}
\author[D. G\'omez-P\'erez]{Domingo G\'omez-P\'erez}
\address{Department of Mathematics, University of Cantabria, Santander 39005, Spain}
\email{[email protected]}
\author[A. P. Nicol\'as]{Alejandro P. Nicol\'as}
\address{Departamento de Matemática Aplicada, Universidad de Valladolid, Spain}
\email{[email protected]}
\author[A. Ostafe]{Alina Ostafe}
\address{Department of Computing, Macquarie University, Sydney NSW 2109, Australia}
\email{[email protected]}
\author[D. Sadornil]{Daniel Sadornil}
\address{Department of Mathematics, University of Cantabria, Santander 39005, Spain}
\email{[email protected]}
\thanks{A. N. was supported by MTM2010-18370-C04-01, A.~O. was
supported by SNSF Grant 133399 and D. S. was supported by
MTM2010-21580-C02-02 and MTM2010-16051.}
\maketitle
\begin{abstract}
We use the theory of resultants to study the stability of an arbitrary polynomial $f$
over a finite field $\F$, that is, the property of having all its iterates irreducible.
This result partially generalises the quadratic polynomial case described
by R. Jones and N. Boston. Moreover, for $p=3$, we show
that certain polynomials of degree three are not stable.
We also use the Weil bound for multiplicative character
sums to estimate the number of stable polynomials of arbitrary degree
over finite fields of odd characteristic.
\end{abstract}
\section{Introduction}
For a polynomial $\ff$ of degree at least 2 and coefficients in a
field $\mathbb{K}$, we define the following sequence:
\begin{equation*}
\f{0}{X}=X,\quad \f{n}{X}=\f{n-1}{\ff[X]},\ n\ge 1.
\end{equation*}
A polynomial $\ff$ is \textit{stable} if $f^{(n)}$ is irreducible
over $\mathbb{K}$ for all $n\ge 1$.
In this article, $\mathbb{K}=\F$ is a finite field
with $q$ elements, where $q=p^{s}$ and
$p$ an odd prime.
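For concreteness, the following minimal computational sketch (assuming Python with sympy, and a prime $q$ so that $\F$ is a prime field) builds the iterates $\f{n}{X}$ by composition and tests their irreducibility; since only finitely many iterates can be checked, such a computation gives only a necessary condition for stability.
\begin{verbatim}
# Sketch: iterate a polynomial over F_q (q prime) and test irreducibility.
from sympy import symbols, Poly, GF

x = symbols('x')
q = 7                                     # an odd prime, so F_q is a prime field
f = Poly(x**2 + x + 3, x, domain=GF(q))   # an arbitrary test polynomial

iterate = Poly(x, x, domain=GF(q))        # f^{(0)}(X) = X
for n in range(1, 5):
    iterate = iterate.compose(f)          # f^{(n)} = f^{(n-1)}(f(X))
    print(n, iterate.degree(), iterate.is_irreducible)
\end{verbatim}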
Studying the stability of a polynomial is an exciting problem which has
attracted a lot of attention. However, only a few results are known and
the problem is far from being well understood.
The simplest case, when the polynomial is quadratic, has been studied
in several works. For example, some results concerning the stability
over $\F$ and $\mathbb{Q}$ can be found in \cite{Ali,Ayad,Danielson,Jon,Jones09}.
In particular, by~\cite[Proposition~2.3]{Jones09},
a quadratic monic polynomial $f \in \mathbb{K}[X]$ over a field $\mathbb{K}$
of odd characteristic and
with the unique critical point $\gamma$, is stable
if the set
$$\{-f(\gamma)\} \cup \{f^{(n)}(\gamma)\mid n\ge 2\}$$ contains no squares.
In the case when
$\mathbb{K}=\mathbb{F}_q$ is a finite field of odd characteristic,
this property is also necessary.
In~\cite{GomezNicolas10} an
estimate of the number of stable quadratic polynomials over the
finite field $\F$ of odd characteristic is given, while in~\cite{ALOS}
it is proved that almost
all monic quadratic polynomials $f\in\mathbb{Z}[X]$ are stable over $\mathbb{Q}$.
Furthermore, in~\cite{ALOS} it is shown that there are no stable quadratic
polynomials over finite fields of characteristic two. One might
expect the same to hold over any field of characteristic two,
but this is not true: \cite{ALOS} also gives an
example of a stable quadratic polynomial over a function field of
characteristic two.
The goal of this paper is to characterize the set of stable
polynomials of arbitrary degree and to devise a test for checking
the stability of polynomials.
Our techniques come from the theory of resultants and use the relation between
the irreducibility of a polynomial and properties of its discriminant.
Using these techniques, we partially generalize previous results
known for quadratic polynomials.
A test for stability of quadratic polynomials was given
in~\cite{Ostafe09}, where it was shown that checking the
stability of such polynomials can be done in time $q^{{3/4}+o(1)}$.
As in~\cite{Jones09}, for an arbitrary polynomial $f$ over $\mathbb{F}_q$,
the set defined by $$\{\f{n}{\gamma_1}\cdots\f{n}{\gamma_{k}}\ \mid \ n\geq 1\ \},$$
where $\gamma_i$, $i=1,\ldots,k$, are the roots of the
derivative of the polynomial $\ff$,
also plays an important role in checking the stability of $f$.
In particular, we use techniques based on resultants of polynomials
together with Stickelberger's theorem to prove our
results.
We introduce analogues of the
orbit sets defined in~\cite{Jones09} for polynomials of arbitrary degree $d\ge 2$.
As in~\cite{Ostafe09},
we obtain a nontrivial estimate for the cardinality of these sets for polynomials with irreducible derivative.
We also give an estimate for the
number of stable polynomials of arbitrary degree, which generalises the result
obtained in~\cite{GomezNicolas10} for quadratic stable polynomials.
The outline of the paper is the following: in Section~\ref{sec:preliminaries}
we introduce the preliminaries necessary to understand the paper.
These include basic results about resultants
and discriminants of polynomials. This section ends with Stickelberger's result.
Next, Section~\ref{sec:stabilityPolynomials}
is devoted to proving a necessary condition for the stability of a polynomial.
We define a set, which generalizes the orbit set for
a quadratic polynomial, and then we give an upper bound
on the number of elements of this set.
Section~\ref{sec:nonExistence} gives a new proof of the
result that appeared in~\cite{ALOS} for cubic polynomials when the characteristic is
equal to 3.
Finally, in Section~\ref{sec:numberStables} we give an estimate of the number of stable
polynomials for any degree.
For that, we relate the number of stable polynomials with estimates of
certain multiplicative character sums.
\section{Preliminaries}
\label{sec:preliminaries}
Before proceeding with the main results, it is necessary to introduce some
concepts related
to commutative algebra. Let $\mathbb{K}$ be any field and let $\ff\in\mathbb{K}[X]$ be
a polynomial of degree $d$
with leading coefficient $a_d$. The \textit{discriminant} of $\ff$,
denoted by $\Disc{\ff}$, is defined by
\begin{equation*}
\Disc{\ff}=a_d^{2d-2}\prod_{i<j}(\alpha_i-\alpha_j)^2,
\end{equation*}
where $\alpha_1,\ldots, \alpha_{d}$ are the roots of $\ff$ in
some extension of $\mathbb{K}$.
It is well known that for any polynomial $\ff\in\mathbb{K}[X]$, its discriminant
is an element of the field $\mathbb{K}$.
Alternatively, it is possible to compute $\Disc{f}$ using resultants.
We can define the resultant of two polynomials $\ff$ and $\gg$
over $\mathbb{K}$ of degrees $d$ and $e$, respectively,
with leading coefficients $a_d$ and $b_e$, as
\begin{equation*}
\Res{\ff}{\gg}=
a_d^{\,e}\,b_e^{\,d}\prod_{i,j} (\alpha_i-\beta_j),
\end{equation*}
where $\alpha_i, \beta_j$ are the roots of $\ff$ and $\gg$, respectively.
Like the discriminant, the resultant belongs to $\mathbb{K}$.
In the following lemmas we summarize several known results about resultants without proofs. The
interested reader can find them in~\cite{Cox07,LN97}.
\begin{lemma}
\label{lem:resultant_eval_root}
Let $\ff,\gg\in\mathbb{K}[X]$ be two polynomials of degrees $d\ge 1$ and $e\ge 1$
with leading coefficients $a_d$ and $b_e$, respectively. Let
$\beta_1,\ldots,\beta_{\degree{\gg}}$ be the roots of $\gg$ in an extension field of $\mathbb{K}$.
Then,
\begin{equation*}
\Res{\ff}{\gg}=
(-1)^{de}\,
b_e^{\,d}\prod_{i=1}^{e} \ff[\beta_{i}].
\end{equation*}
\end{lemma}
The behaviour of the resultant with respect to the multiplication is given
by the next result.
\begin{lemma}
\label{lem:resultant_multiplication}
Let $\mathbb{K}$ be any field. Let $\ff,\gg,\hh\in\mathbb{K}[X]$ be polynomials of degree greater than 1 and
$a\in\mathbb{K}$.
The following hold:
\begin{align*}
& \Res{\ff\gg}{\hh}=\Res{\ff}{\hh}\Res{\gg}{\hh},&\Res{ a\ff}{\gg}=a^{\degree{\gg}}\Res{\ff}{\gg},
\end{align*}
where $\deg g=e$.
\end{lemma}
The relation between $\Disc{\ff}$ and $\Res{\ff}{\ff'}$ is given by the next statement.
\begin{lemma}
\label{lem:resultant_discriminant}
Let $\mathbb{K}$ be any field and $\ff\in\mathbb{K}[X]$ be a polynomial of degree $d\geq
2$ with leading coefficient $a_d$, non-constant derivative $\ff'$ and $\deg \ff'=k\le d-1$. Then we have the relation
\[\Disc{\ff}= \Cf{1}\Res{\ff}{\ff '},\]
where $\Cf{1} =(-1)^{\frac{d(d-1)}{2}}a_d^{\,d-k-2}$.
\end{lemma}
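The relation of Lemma~\ref{lem:resultant_discriminant} can be illustrated numerically; the following minimal sketch (assuming Python with sympy, over the rationals, with an arbitrarily chosen example polynomial) verifies it.
\begin{verbatim}
# Check Disc(f) = C_f * Res(f, f') with C_f = (-1)^{d(d-1)/2} a_d^{d-k-2}.
from sympy import symbols, Poly, Rational

x = symbols('x')
f = Poly(3*x**3 + 2*x + 1, x)        # d = 3, a_d = 3
fp = f.diff()                        # f', of degree k = 2
d, k, a_d = f.degree(), fp.degree(), f.LC()

C_f = Rational(-1)**(d*(d - 1)//2) * Rational(a_d)**(d - k - 2)
assert f.discriminant() == C_f * f.resultant(fp)
\end{verbatim}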
One of the main tools used to prove our main result regarding the stability of arbitrary polynomials
is Stickelberger's result~\cite{Stickelberger97},
which gives the parity of the number of distinct irreducible factors of a polynomial over a finite field of odd characteristic.
\begin{lemma}
\label{thm:Stickelberger}
Suppose $\ff\in\FX$, $q$ odd, is a polynomial of degree $d\ge 2$ and is
the product of $r$ pairwise
distinct irreducible polynomials over $\F$.
Then $r\equiv d\bmod 2$ if and only
if $\Disc{\ff}$ is a square in $\F$.
\end{lemma}
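Lemma~\ref{thm:Stickelberger} is also easy to test numerically; the following minimal sketch (assuming Python with sympy and a prime $q$; the sample polynomials are arbitrary) compares the parity of the number of distinct irreducible factors with the quadratic character of the discriminant.
\begin{verbatim}
# Numerical check of Stickelberger's lemma over F_q for a prime q.
from sympy import symbols, Poly, GF

x = symbols('x')
q = 7

def is_square(a, q):
    # Euler's criterion in the prime field F_q (a nonzero mod q)
    return pow(a % q, (q - 1) // 2, q) == 1

samples = [(1, 0, 3, 1), (1, 2, 0, 5), (2, 1, 1, 6), (1, 0, 0, 0, 1, 3)]
for coeffs in samples:                       # coefficient lists, leading first
    disc = int(Poly(list(coeffs), x).discriminant()) % q
    if disc == 0:                            # not squarefree mod q: lemma void
        continue
    f = Poly(list(coeffs), x, domain=GF(q))
    r = len(f.factor_list()[1])              # distinct irreducible factors
    d = f.degree()
    assert ((r - d) % 2 == 0) == is_square(disc, q)
\end{verbatim}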
\section{Stability of arbitrary polynomials}
\label{sec:stabilityPolynomials}
In this section we give a necessary condition for the stability of arbitrary
polynomials. For this purpose, we use the
following general result known as Capelli's Lemma, see~\cite{FS}.
\begin{lemma}
\label{lem:Capelli} Let $\mathbb{K}$ be a field, $f,g\in\mathbb{K}[X]$, and let
$\beta\in\overline{\mathbb{K}}$ be any root of $g$. Then $g(f)$ is
irreducible over $\mathbb{K}$ if and only if both $g$ is irreducible over
$\mathbb{K}$ and $f-\beta$ is irreducible over $\mathbb{K}(\beta)$.
\end{lemma}
We now prove one of the main results about the stability of an arbitrary polynomial.
We note that our result partially generalises the quadratic polynomial
case presented in~\cite{Jones09}, which is known to be necessary
and sufficient over finite fields.
\begin{theorem}
\label{thm:criterium}
Let $q=p^s$, $p$ be an odd prime, and $\ff\in\FX$ a stable polynomial of degree $d\geq
2$ with leading coefficient $a_d$, non-constant derivative $\ff'$ and $\deg \ff'=k\le d-1$. Then the following hold:
\begin{enumerate}
\item if $d$ is even, then $\Disc{f}$
and $a_d^{\,k}\Res{f^{(n)}}{\ff '}$, $n\ge 2$, are nonsquares in $\F$;
\item if $d$ is odd, then $\Disc{f}$
and $(-1)^{\frac{d-1}{2}}a_d^{\,(n-1)k+1}\Res{f^{(n)}}{\ff '}$, $n\ge 2$, are squares in $\F$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\ff\in\FX$ be a stable polynomial. We assume first that
$d$ is even. We have that $f^{(n)}$ is irreducible for
any $n$, and thus, by Capelli's Lemma~\ref{lem:Capelli}, we know
that $f-\alpha$ is irreducible over
$\mathbb{F}_{q^{d^{n-1}}}$, where $\alpha$ is a root of $f^{(n-1)}$. By
Lemma~\ref{thm:Stickelberger} this means that $\Disc{f-\alpha}$ is
a nonsquare in $\mathbb{F}_{q^{d^{n-1}}}$. Now, taking the norm over $\F[q]$
and using Lemma~\ref{lem:resultant_discriminant}, we get
\begin{equation*}
\begin{split}
\mathrm{Nm}_{q^{d^{n-1}}|q} &\Disc{f-\alpha}\\
&=\prod_{\substack{\alpha\in\mathbb{F}_{q^{d^{n-1}}}\\ f^{(n-1)}(\alpha)=0}} \Disc{f-\alpha}=\prod_{\substack{\alpha\in\mathbb{F}_{q^{d^{n-1}}}\\ f^{(n-1)}(\alpha)=0}} C_{f} \Res{f-\alpha}{f'}\\
&=C_f^{d^{n-1}} \Res{\prod_{\substack{\alpha\in\mathbb{F}_{q^{d^{n-1}}}\\ f^{(n-1)}(\alpha)=0}} (f-\alpha)}{f'}\\
&=C_f^{d^{n-1}}\Res{\frac{f^{(n-1)}(f)}{A}}{f'}=A^{-k}C_f^{d^{n-1}}\Res{f^{(n)}}{f'},
\end{split}
\end{equation*}
where $C_f$ is defined by Lemma~\ref{lem:resultant_discriminant}, $A$ is the leading coefficient of
$f^{(n-1)}$ and
$\mathrm{Nm}_{q^{d^{n-1}}|q}$ is the norm map from $\mathbb{F}_{q^{d^{n-1}}}$
to $\mathbb{F}_q$.
As the norm $\mathrm{Nm}_{q^{d^{n-1}}|q}$ maps nonsquares to nonsquares,
we obtain that $A^{-k} C_f^{d^{n-1}} \Res{f^{(n)}}{f'}$ is a nonsquare, and taking into account
that $A=a_{d}^{\frac{d^{n}-1}{d-1}}$ and
the parity of the exponents involved, the result follows. The case of odd $d$ can be treated in a similar way.
\end{proof}
Theorem~\ref{thm:criterium}
is interesting because it gives a method for testing the
stability of a polynomial. Lemma~\ref{lem:resultant_eval_root} says that the
resultant is just the evaluation of $f^{(n)}$ at the roots of $\ff '$ multiplied by some constants. Taking into account this fact,
the quadratic character of $a_d$ and the exponents
which are involved in Theorem \ref{thm:criterium}, we have the following characterisation.
\begin{corollary}\label{cor:condition}
Let $q=p^s$, $p$ an odd prime, and $\ff\in\FX$ a stable polynomial
of degree $d\geq 2$ with leading coefficient $a_d$, non-constant derivative
$\ff'$, $\deg \ff'=k\le d-1$ and $a_{k+1}$ the coefficient of $X^{k+1}$ in $f$.
Let $\gamma_i$, $i=1,\ldots,k,$ be the roots of the derivative
$\ff'$. Then the following hold:
\begin{enumerate}
\item if $d$ is even, then
\begin{equation}\label{eq:even orbit}
\mathcal{S}_1=
\left\{a_d^{\,k} \prod_{i=1}^{k} \f{n}{\gamma_i}\ \mid \ n>1\right\}
\ \bigcup\ \left\{\ (-1)^{\frac{d}{2}}a_d^{\,k}\prod_{i=1}^{k}
\ff({\gamma_i})\ \right\}
\end{equation}
contains only nonsquares in $\F$;
\item if $d$ is odd, then
\begin{equation}\label{eq:odd orbit}
\mathcal{S}_2=\left\{\ (-1)^{\frac{(d-1)}{2}+k}(k+1)a_{k+1}
a_d^{\,(n-1)k+1}\prod_{i=1}^{k} \f{n}{\gamma_i}\ \mid \ n\ge1\right\}
\end{equation}
contains only squares in $\F$.
\end{enumerate}
\end{corollary}
\begin{proof}
The result follows directly from Theorem~\ref{thm:criterium}
and Lemma~\ref{lem:resultant_eval_root}.
\end{proof}
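The following minimal sketch (assuming Python with sympy and a prime $q$, so that squares in $\F$ can be recognised by Euler's criterion) implements the test suggested by Theorem~\ref{thm:criterium}: the case $n=1$ is handled through the discriminant and the cases $n\ge 2$ through the resultants $\Res{f^{(n)}}{\ff'}$. Only finitely many iterates are checked, so passing the test is merely consistent with stability, while any failure certifies that $f$ is not stable.
\begin{verbatim}
# Necessary condition for stability, following Theorem thm:criterium.
from sympy import symbols, Poly, GF

x = symbols('x')

def chi(a, q):
    # quadratic character of F_q (q prime); 0 on 0
    a %= q
    if a == 0:
        return 0
    return 1 if pow(a, (q - 1) // 2, q) == 1 else -1

def necessary_condition(f_expr, q, max_n=4):
    fZ = Poly(f_expr, x)                     # integer representative of f
    fq = Poly(f_expr, x, domain=GF(q))
    d, a_d = fq.degree(), int(fZ.LC()) % q
    k = fq.diff().degree()
    fprime = Poly(fq.diff().as_expr(), x)    # f' with integer coefficients
    want = -1 if d % 2 == 0 else 1           # nonsquares (even d) / squares (odd d)
    if chi(int(fZ.discriminant()), q) != want:   # the case n = 1
        return False
    iterate = fq
    for n in range(2, max_n + 1):
        iterate = iterate.compose(fq)        # f^{(n)}
        res = int(Poly(iterate.as_expr(), x).resultant(fprime)) % q
        if d % 2 == 0:
            elem = (pow(a_d, k, q) * res) % q
        else:
            sign = (-1) ** ((d - 1) // 2)
            elem = (sign * pow(a_d, (n - 1) * k + 1, q) * res) % q
        if chi(elem, q) != want:
            return False
    return True

print(necessary_condition(x**2 + x + 3, 7))
print(necessary_condition(x**3 + 2*x + 1, 7))
\end{verbatim}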
We note that the converse
of Corollary~\ref{cor:condition} is not true. Indeed, take
any $d$ with $\gcd(d,q-1)=\gcd(d,p)=1$, $\F$ an extension
of even degree of $\F[p]$ and $a_0$ a quadratic residue in $\F$.
Let us consider the polynomial
$f(X)=(X-a_0)^{d}+a_0\in\FX$. It is straightforward to see that
$\f{n}{X}=(X-a_0)^{d^n}+a_0$ and that the set~\eqref{eq:odd orbit} is
\begin{equation*}
\{(-1)^{\frac{d-1}{2}}\,d\, a_0^{d-1}\}.
\end{equation*}
We note that the polynomial $\ff$ is reducible.
Indeed, let the integer $1\le e\le q-1$ be such that $ed\equiv 1 \pmod {q-1}$.
Then $(a_0^e)^d=a_0$, and thus $-a_0^e+a_0$ is a root of $f$.
On the other hand, since $-1$ and $d$ are squares in
$\F$ because both elements belong to $\F[p]$ and $\F$ is an
extension of even degree, the set~\eqref{eq:odd orbit}
contains only squares.
We finish this section by showing that, when the derivative $f'$ of the stable polynomial $f$ is irreducible,
the sets~\eqref{eq:even orbit} and~\eqref{eq:odd orbit} are determined
by a short sequence of initial elements.
The proof follows exactly the same lines as the proof of~\cite[Theorem 1]{Ostafe09}. Indeed, assume $\deg f'=k$ and $\gamma_1,\ldots,\gamma_k\in\mathbb{F}_{q^k}$ are the roots of $f'$. Using Corollary~\ref{cor:condition} we see that the sets~\eqref{eq:even orbit} and~\eqref{eq:odd orbit} contain only nonsquares and squares, respectively, and thus the problem reduces to the case when the products $\f{n}{\gamma_1}\cdots\f{n}{\gamma_k}$ are either all squares or all nonsquares for $n\ge 1$. It is clear that, when $f'$ is irreducible, taking into account that
$\gamma_i=\gamma_1^{q^{i-1}}$, $i=1,\ldots,k$, we get for every $1\le n\le N$,
\begin{equation*}
\begin{split}
\f{n}{\gamma_1}\ldots\f{n}{\gamma_k} &=
\f{n}{\gamma_1}\ldots\f{n}{\gamma_1^{q^{k-1}}}\\
&=\f{n}{\gamma_1}\ldots\f{n}{\gamma_1}^{q^{k-1}}\ =
\mathrm{Nm}_{q^{k}|q}\f{n}{\gamma_1}.
\end{split}
\end{equation*}
Applying now the same technique with multiplicative character sums as in~\cite[Theorem 1]{Ostafe09} (as the argument does not depend on the degree of the polynomial $f$), we obtain the following estimate:
\begin{theorem}
\label{thm:UB} For any odd $q$ and any stable polynomial
$f \in\FX$ with irreducible derivative $f'$, $\deg f'=k$, there exists
$$
N= O\left (q^{3k/4}\right )
$$ such that for the sets~\eqref{eq:even orbit} and~\eqref{eq:odd orbit}
we have
\begin{equation*}
\begin{split}
\mathcal{S}_1&= \left\{a_d^{\,k} \prod_{i=1}^{k} \f{n}{\gamma_i}\ \mid \ 1< n \le N\right\}\
\bigcup\ \left\{\
(-1)^{\frac{d}{2}}a_d^{\,k}\prod_{i=1}^{k}
\ff({\gamma_i})\right\};\\
\mathcal{S}_2& =\left\{\ (-1)^{\frac{(d-1)}{2}+k}(k+1)a_{k+1}a_d^{\,(n-1)k+1}\prod_{i=1}^{k} \f{n}{\gamma_i}\ \mid \ 1\le n \le N\right\}.
\end{split}
\end{equation*}
\end{theorem}
\section{Non-existence of certain cubic stable polynomials when $p=3$}
\label{sec:nonExistence}
The existence of stable polynomials is difficult to prove. For $p=2$, there are
no stable quadratic polynomials, as shown in~\cite{A}, whereas
for $p>2$ there is a large number of them, as shown in~\cite{GomezNicolas10}.
In this section, we show that for certain polynomials of degree 3, $f^{(3)}$ is a reducible
polynomial when $p=3$.
This result also appears in~\cite{ALOS}, but we believe this approach uses new
ideas that could be of independent interest.
For this approach, we need the following
result which can be found in~\cite[Corollary 4.6]{Menezes93}.
\begin{lemma}
\label{lem:menezes93}
Let $q=p^s$ and $f(X)=X^{p}-a_1 X-a_0\in\FX$ with $a_1 a_0\neq 0$. Then $f$ is irreducible over
$\F$ if and
only if $a_1=b^{p-1}$ and $\mathrm{Tr}_{q|p}(a_0/b^{p})\neq 0.$
\end{lemma}
Based on this result, we can present an irreducibility criterion for polynomials
of degree 3 in characteristic $3$.
\begin{lemma}
\label{lem:appliedMenezes}
Let $p=3$ and $q=3^{s}$. Then $f(X)=X^3-a_2X^2-a_1X-a_0$ is irreducible over
$\F$ if and only if
\begin{enumerate}
\item $a_1=b^2$ and $\mathrm{Tr}_{q|3}(a_0/b^{3})\neq 0,$ if $a_2 =0$ and $a_1\neq 0$;
\item $a_2^4/(a_2^2a_1^2+a_1^3-a_0a_2^3)=b^2$ and
$\mathrm{Tr}_{q|3}(1/a_2b)\neq 0,$ if $a_2 \neq 0$,
\end{enumerate}
where $\mathrm{Tr}_{q|3}$ represents the trace map of $\F$ over $\F[3]$.
\end{lemma}
\begin{proof}
The case $a_2=0$ is a direct application of Lemma~\ref{lem:menezes93}.
In the other case, we take the polynomial
\begin{multline*}
f(X+a_1/a_2)=(X+a_1/a_2)^3-a_2(X+a_1/a_2)^2-a_1(X+a_1/a_2)-a_0=\\
X^3-a_2X^2-a_0+a_1^2/a_2+a_1^3/a_2^3= X^3-a_2X^2+(a_1^2a_2^2+a_1^3-a_0a_2^3)/a_2^3.
\end{multline*}
Notice that $f(X+a_1/a_2)$ is irreducible if and only if $f(X)$ is irreducible.
We denote $g(X)=f(X+a_1/a_2)$ to ease the notation and $g^*$
\textit{the reciprocal polynomial} of $g$, i. e.
\begin{equation*}
g^*(X)=X^3g\left (\frac{1}{X}\right ).
\end{equation*}
By \cite[Theorem 3.13]{LN97}, $g^*$ is irreducible if and only if $g$ is.
Applying Lemma~\ref{lem:menezes93}, we get the result.
\end{proof}
For simplicity, we proved an irreducibility criterion for monic polynomials;
however, the proof holds for
non-monic polynomials as well, taking into account the leading coefficient.
Using Lemma~\ref{lem:appliedMenezes} and following the same lines as in~\cite{A},
we can now prove the following result.
\begin{theorem}
\label{thm:deg3}
For any polynomial $f\in\FX$, where $q=3^{s}$, of the form $f(X)=a_3X^3-a_1X-a_0$, at least
one of the following polynomials $f,f^{(2)}$ or $ f^{(3)}$ is a reducible polynomial.
\end{theorem}
\begin{proof}
Suppose that $f,\ f^{(2)},\ f^{(3)}$ are all irreducible polynomials.
Using Lemma~\ref{lem:Capelli}, $f^{(3)}$ is irreducible if and only if
$f^{(2)}$ is irreducible over $\F$ and $f-\gamma$ is irreducible over $\F[q^9]$, where $\gamma$ is a root of $f^{(2)}$.
Thus, the monic polynomial $h=\frac{f-\gamma}{a_3}=X^3-\frac{a_1}{a_3}X-\frac{a_0+\gamma}{a_3}$ is irreducible over $\F[q^9]$
and we can now apply Lemma~\ref{lem:appliedMenezes}, from which we get that
$\mathrm{Tr}_{q^9|3}(\frac{a_0+\gamma}{a_3 b^3})\neq 0$, where $b^2=a_1/a_3$ and $b\in\F[q^9]$.
Notice that $b\in\F[q]$. Indeed, as $b$ is a root of the polynomial
$X^2-a_1/a_3$, then either $b\in\F[q]$ or $b\in\F[q^2]$. Since $b\in\F[q^9]$
we obtain that $b\in\F[q]$.
Using the
properties of the trace map we obtain
\begin{equation*}
\mathrm{Tr}_{q^9|3} \left (\frac{a_0+\gamma}{a_3 b^3}\right )=\mathrm{Tr}_{q^9|3}\left (\frac{\gamma}{a_3 b^3}\right ),
\end{equation*}
and from here we conclude that the right-hand side of the last equation is nonzero.
Using now the transitivity of the trace, see~\cite[Theorem 2.26]{LN97}, we get
\begin{equation*}
\mathrm{Tr}_{q^9|3}\left (\frac{\gamma}{a_3 b^3}\right )=\mathrm{Tr}_{q|3}\left (\mathrm{Tr}_{q^9|q}\left (\frac{\gamma}{a_3 b^3}\right )\right )=\mathrm{Tr}_{q|3}\left (\frac{\mathrm{Tr}_{q^9|q}(\gamma)}{a_3b^3}\right ).
\end{equation*}
Now, $f^{(2)}$ is an irreducible polynomial with roots
$\gamma, \gamma^q,\ldots,\gamma^{q^{8}}$. Thus, $\mathrm{Tr}_{q^9|q}(\gamma)$
is given, up to the sign and the leading coefficient, by the coefficient of the term $X^8$ in $f^{(2)}$, which is zero.
This shows that $\mathrm{Tr}_{q^9|q}(\gamma)=0$, so the trace above vanishes, which contradicts the fact that
$f^{(3)}$ is irreducible.
\end{proof}
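The statement is also easy to confirm by exhaustive search in the smallest case $q=3$; the following minimal sketch (assuming Python with sympy) checks all polynomials of the given shape over $\F[3]$.
\begin{verbatim}
# Brute-force check over F_3: for every f = a_3 X^3 - a_1 X - a_0 with a_3 != 0,
# at least one of f, f^{(2)}, f^{(3)} is reducible.
from itertools import product
from sympy import symbols, Poly, GF

x = symbols('x')
for a3, a1, a0 in product([1, 2], [0, 1, 2], [0, 1, 2]):
    f = Poly(a3*x**3 - a1*x - a0, x, domain=GF(3))
    f2 = f.compose(f)                 # f^{(2)}, degree 9
    f3 = f2.compose(f)                # f^{(3)}, degree 27
    assert not (f.is_irreducible and f2.is_irreducible and f3.is_irreducible)
\end{verbatim}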
We note that Theorem~\ref{thm:deg3} cannot be extended to infinite fields.
As in~\cite{ALOS}, let $\mathbb{K} = \mathbb{F}_3(T)$ be the rational function field
in $T$ over $\mathbb{F}_3$, where $T$ is transcendental
over $\mathbb{F}_3$. Take $f(X)=X^3+T\in \mathbb{K}[X]$. Then it is easy to see that
$$
f^{(n)}(X)=X^{3^n}+T^{3^{n-1}}+T^{3^{n-2}}+\cdots+T^3+T.
$$
Now from the Eisenstein criterion for function fields
(see~\cite[Proposition~III.1.14]{Sti}, for example),
it follows that for every $n\ge 1$, the polynomial $f^{(n)}$ is irreducible
over $\mathbb{K}$. Hence, $f$ is stable.
\section{On the number of stable polynomials}
\label{sec:numberStables}
In this section we obtain an estimate for the number of stable polynomials of
certain degree $d$. Note that, from Corollary \ref{cor:condition},
it suffices to estimate the number of nonsquares of the orbit \eqref{eq:even orbit}
for even $d$, or the number of squares of \eqref{eq:odd orbit}
for odd $d$.
For a given $d$, let
$\ff(X)=a_{d}X^{d}+a_{d-1}X^{d-1}+\cdots+a_1X+a_0\in\mathbb{F}_q[X]$
and define
\begin{equation*}
\fa{\ff}{l}=\prod_{i=1}^{k}\ff^{(l)}({\gamma_i}),
\end{equation*}
which is a polynomial in the variables $a_{0},\ldots,a_{d}$ with coefficients in
$\F$.
Following~\cite{Ostafe09}, the number of stable polynomials of degree $d$,
which will be denoted by $S_{d}$, satisfies the inequality
\begin{equation}\label{eq:sums}
S_{d}\le \frac{1}{2^{K}}\sum_{a_0\in\F}\cdots\sum_{a_{d}\in\F^*}
\prod_{l=1}^{K}(1\pm\chi(\fa{\ff}{l})),\ \forall K\in\mathbb{Z}^+,
\end{equation}
where $\chi$ is the multiplicative quadratic character of $\F$.
The sign of $\chi$ depends on $d$ and is chosen in order to count
the elements of the orbit of $\ff$ which satisfy the condition of stability.
Since the upper bound of $S_{d}$ is independent of this choice, let
us suppose from now on that $\chi$ is taken with $+$. If we expand and rearrange
the product, we obtain $2^K-1$ sums of the shape
\begin{equation*}
\sum_{a_0\in\F}\cdots\sum_{a_{d}\in\F^*}\chi
\left (\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}\right ),\, 1\le l_1<\cdots < l_{\mu}\le K,
\end{equation*}
with $\mu\ge 1$, plus one trivial sum corresponding to the term 1 in \eqref{eq:sums}.
The upper bound for $S_{d}$ will be obtained using the Weil bound for
character sums, which can be found in \cite[Lemma 1]{GomezNicolas10}. This result
can only be used when $\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}$ is not a square polynomial.
The next lemmas are used to estimate the number of values for $a_{1},\ldots,a_{d}$ such that
the resulting polynomial in $a_0$ is a square.
The first lemma is a bound on the number of zeros of two multivariate polynomials.
A more general inequality is given by the Schwartz-Zippel lemma. For a proof, we
refer the reader to~\cite{Gathen99}.
\begin{lemma}
\label{multiple_roots} Let $F(Y_0,Y_1,\ldots,Y_{d}),
G(Y_0,Y_1,\ldots,Y_{d})$ be two polynomials of degree
$d_1$ and $d_2$, respectively, in $d+1$ variables with
\begin{equation*}
\gcd\left(F(Y_0,Y_1,\ldots,Y_{d}), G(Y_0,Y_1,\ldots,Y_{d})\right)=1.
\end{equation*}
Then, the number of common zeros in $\F^{d+1}$ is bounded by
$d_1d_2q^{d-1}$.
\end{lemma}
The next lemma gives a bound for the number of ``bad'' choices of
$a_1,\ldots,a_{d}$, that is, the number of choices of
$a_1,\ldots,a_{d}$ such that
$\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}$ is a square polynomial in $a_0$.
\begin{lemma}
\label{lem:roots}
For fixed integers $l_1,\ldots, l_{\mu}$ such that $1\le l_1<\cdots < l_{\mu}\le K$,
the polynomial
\[
\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}
\]
is a square polynomial in the variable $a_0$, up to a multiplicative constant,
for at most $O(d^{2K}q^{d-1})$
choices of $a_1,\ldots,a_{d}$.
\end{lemma}
\begin{proof}
For even degree $d$ coprime to $p$, we consider the polynomial $f=(X-b)^{d}+c+b$, where
$b,c$ are considered as variables. Then $f'=d(X-b)^{d-1}$ and
\begin{equation*}
f^{(n)}(b)=b+H_n(c),
\end{equation*}
where $ \deg H_n(c)= d^{n-1}$.
For odd degree $d$ coprime to $p$, we consider the polynomial
$f= (X-b)^{d-1}(X-b+1)+c+b$ with the derivative
$f'=(X-b)^{d-2}(d(X-b)+d-1).$
Notice that, if the degree of this polynomial is coprime to the characteristic $p$,
then $f'$ has two different roots $ b, b+(1-d)d^{-1}$.
Substituting these in the polynomial $f$, we get
\begin{eqnarray*}
f^{(n)}(b) &=&b+H_n(c), \\
f^{(n)}( b+(1-d)d^{-1})&=&b+L_n(c),\\
\end{eqnarray*}
where $L_n\neq H_n$ and $\deg L_n(c)=\deg H_n(c)= d^{n-1}.$
In either of the two cases, we can compute the irreducible factors
of $\Res{f^{(n)}}{\ff '}$.
When the degree is not coprime to the characteristic,
take $f=(X-b)^{d}+(X-b)^2+c+b$; the proof is similar to the last two cases.
This proves that the polynomial
\[
\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}
\]
is not, up to a multiplicative constant, the square of a multivariate polynomial.
Let
$
\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}=G_1(a_0,\ldots, a_{d})^{d_1}
\cdots G_h(a_0,\ldots, a_{d})^{d_h}
$ be
the decomposition into a product of irreducible polynomials.
Without loss of generality, $d_1$ is not even
because $\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}$ is not a square of a polynomial
up to a multiplicative constant. Moreover, because $G_1$ is an irreducible
factor of the product $\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}$,
there exists $1\le j\le \mu$ such that $G_1$ is an irreducible factor of
$\fa{\ff}{l_{j}}$, which implies that $\deg G_1\le d^{K}.$
We use $G_1(a_0,\ldots, a_{d})$ to count the number of
choices for $a_1,\ldots, a_{d}$ such that
\begin{itemize}
\item the polynomial $\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}$ is a constant polynomial.
\item the polynomial $\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}$ is a square polynomial
up to a multiplicative constant.
\end{itemize}
There are at most $d^{K\mu}q^{d-1}$ different choices of $a_1,\ldots, a_{d}$ for which
the polynomial is constant.
Now, we consider in which cases the polynomial $\prod_{j=1}^{\mu}\fa{\ff}{l_{j}}$ is a square
of a polynomial
and how these cases will be counted. We have the following two possible situations:
\begin{itemize}
\item $G_1^{d_1}$ is a nonconstant square; because $d_1$ is not
even, $G_1$ must have
at least one multiple root. This is only possible if
$G_1$ and its first derivative with respect to the
variable $a_0$ have a common root.
$G_1$ is an irreducible polynomial, so Lemma~\ref{multiple_roots} applies.
We remark that the first derivative is a nonzero polynomial, since otherwise
$G_1$ would be reducible.
This can happen for at most $(\deg G_1)(\deg G_1-1)q^{d-1}$ choices.
\item $G_1$ and $G_j$ have a common root for some $2\le j\le h$. In this case, using the same argument, there are at most
$(\deg G_1)(\deg G_j)q^{d-1}$
possible values for $a_1,\ldots, a_{d}$ for which this happens.
\end{itemize}
\end{proof}
Now we are able to find a bound for $S_{d}$, the number of
stable polynomials of degree $d$.
\begin{theorem}
The number of stable polynomials $\ff\in\FX$ of degree $d$ is
$O(q^{d+1-1/\log(2d^2)})$.
\end{theorem}
\begin{proof}
The trivial summand of~\eqref{eq:sums} can be bounded by $O(q^{d+1}/2^K)$.
For the other terms, we can use the Weil bound,
as is given in \cite[Lemma 1]{GomezNicolas10}, for those polynomials which
are nonsquares. Since these polynomials have degree at most
$d^{K}$ in the indeterminate $a_0$ (see the proof of Lemma~\ref{lem:roots}),
we obtain $O(d^K q^{d+1/2})$ for this part.
For the rest, that is, the square polynomials, we can use the trivial bound.
Thus, from Lemma~\ref{lem:roots}, we get $O(d^{2K} q^{d})$.
Then,
\[
S_{d}=
O(q^{d+1}/2^{K}+d^Kq^{d+1/2}+
d^{2K}q^{d}).
\]
Choosing $K=\lceil(\log q/\log (2d^2))\rceil$ the result follows.
\end{proof}
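As a rough numerical illustration (a minimal sketch assuming Python with sympy; checking finitely many iterates gives only a necessary condition for stability), one can count the polynomials of small degree over a small prime field whose first few iterates are all irreducible and compare with the total number $(q-1)q^{d}$ of candidates.
\begin{verbatim}
# Count degree-d polynomials over F_q whose first `iterations' iterates
# are irreducible (a necessary condition for stability).
from itertools import product
from sympy import symbols, Poly, GF

x = symbols('x')
q, d, iterations = 7, 2, 3
count = 0
for coeffs in product(range(q), repeat=d + 1):   # leading coefficient first
    if coeffs[0] == 0:
        continue
    f = Poly(list(coeffs), x, domain=GF(q))
    g, ok = f, True
    for _ in range(iterations):
        if not g.is_irreducible:
            ok = False
            break
        g = g.compose(f)
    count += ok
print(count, "of", (q - 1) * q**d, "candidates pass", iterations, "rounds")
\end{verbatim}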
\end{document} |
\begin{document}
\title{Locating a robber with multiple probes}
\author{John Haslegrave\thanks{University of Warwick, Coventry, UK. {\tt [email protected]}}, Richard A. B. Johnson\thanks{The King's School, Canterbury, UK. {\tt [email protected]}}, Sebastian Koch\thanks{University of Cambridge, Cambridge, UK.}}
\maketitle
\begin{abstract}
We consider a game in which a cop searches for a moving robber on a connected graph using distance probes, which is a slight variation on one introduced by Seager (Seager, 2012). Carragher, Choi, Delcourt, Erickson and West showed that for any $n$-vertex graph $G$ there is a winning strategy for the cop on the graph $\gm$ obtained by replacing each edge of $G$ by a path of length $m$, if $m\geq n$ (Carragher et al., 2012). The present authors showed that, for all but a few small values of $n$, this bound may be improved to $m\geq n/2$, which is best possible (Haslegrave et al., 2016b). In this paper we consider the natural extension in which the cop probes a set of $k$ vertices, rather than a single vertex, at each turn. We consider the relationship between the value of $k$ required to ensure victory on the original graph with the length of subdivisions required to ensure victory with $k=1$. We give an asymptotically best-possible linear bound in one direction, but show that in the other direction no subexponential bound holds. We also give a bound on the value of $k$ for which the cop has a winning strategy on any (possibly infinite) connected graph of maximum degree $\Delta$, which is best possible up to a factor of $(1-o(1))$.
\end{abstract}
\section{Introduction}
Search games and pursuit games on graphs have been widely studied, beginning with a graph searching game introduced by Parsons \cite{Par76} in which a fixed number of searchers try to find a lost spelunker in a dark cave. The searchers cannot tell where the target is, and aim to move around the vertices and edges of the graph in such a way that one of them must eventually encounter him. The spelunker may move around the graph in an arbitrary fashion and at unlimited speed, and in the worst case may be regarded as an antagonist who knows the searchers' positions and is trying to escape them. Parsons \cite{Par78} subsequently introduced the \textit{search number}, $s(G)$, of a graph $G$, being the minimum number of searchers required to guarantee catching the spelunker. Megiddo, Hakimi, Garey, Johnson and Papadimitriou \cite{MHGJP} showed that computing the search number of a general graph is an NP-hard problem. From the subsequent proof by LaPaugh \cite{LaP93} that any number of searchers who can find the spelunker can do so while ensuring the spelunker cannot return to a previously searched edge, it follows that the decision problem is in NP (and hence NP-complete).
Robertson and Seymour \cite{RS83} introduced the concepts of path decompositions and the pathwidth, $\operatorname{pw}(G)$ of a graph $G$, which has deep connections to the theory of graph minors and to algorithmic complexity. Minor-closed families which have a forbidden forest are precisely those with bounded pathwidth \cite{RS83}, and many algorithmic problems which are difficult in general have efficient algorithms for graphs of bounded pathwidth \cite{Arn85}. Ellis, Sudborough and Turner \cite{EST83} independently defined the vertex separation number of a graph, and Kinnersley subsequently showed the equivalence of these two definitions \cite{Kin92}. Ellis, Sudborough and Turner showed that the search number of a graph is almost completely determined by its pathwidth, giving bounds of $\operatorname{pw}(G)\leq s(G)\leq\operatorname{pw}(G)+2$ \cite{EST94}.
Graph pursuit games date back to the classical Cops and Robbers game, introduced independently by Quilliot \cite{Qui78} and Nowakowski and Winkler \cite{NW83} (who attribute it to G. Gabor). This involves one or more cops and a robber moving around a fixed graph. The cops move simultaneously and alternate moves with the robber, all moves being to neighbouring vertices. The cops win if one of them occupies the robber's location. On a particular graph $G$ the question is whether a given number of cops have a strategy which is guaranteed to win, or whether there is a strategy for the robber which will allow him to evade capture indefinitely. The \textit{cop number} of a graph is the minimum number of cops that can guarantee to catch the robber. An important open problem is Meyniel's conjecture, published by Frankl \cite{Fra87}, that the cop number of any $n$-vertex connected graph is at most $O(\sqrt{n})$.
Models which focus on finding an invisible target rather than catching a visible one have been the focus of much recent work. In the Hunter versus Rabbit game, studied by Adler, R\"acke, Sivadasan, Sohler and V\"ocking \cite{ARSSV} and the Cop versus Gambler game, studied by Komarov and Winkler, \cite{KW16} the aim is to catch a randomly-moving target as quickly as possible; in both cases the searcher is restricted to moving on the edges of the underlying graph, but the target is not. Related models can be used to design protocols for ad-hoc mobile networks \cite{CNS01}. In Hunter versus Rabbit, the rabbit's strategy is unrestricted; Adler et al.\ showed that the hunter can achieve expected capture time $O(n\log n)$, while for some graphs the rabbit can achieve expected survival time $\Omega(n\log n)$ \cite{ARSSV}. In Cop versus Gambler, the gambler's strategy is simply a probability distribution on the vertices, and his location at different time steps is independent. In this setting Komarov and Winkler showed that a cop who knows this probability distribution can achieve expected capture time $n$, which is trivially best possible for the uniform distribution, and a cop who does not know the distribution can still achieve capture time $O(n)$ \cite{KW16}.
In the problem variously referred to as Cat and Mouse, Finding a Princess, and Hunter and Mole, it is the target which is constrained to moving around a graph, and the searcher probes vertices one by one in an unrestricted manner. In between any two probes the target must move to an adjacent vertex. The searcher wins if she probes the target's location, but gets no information otherwise. A complete classification of graphs on which the searcher can guarantee to win was given by Haslegrave \cite{Has11,Has14} and subsequently and independently by Britnell and Wildon \cite{BW13} and Komarov and Winkler \cite{KW13}.
In real-life searching we might gain information about how close an unsuccessful probe is. The simplest search model of this form is Graph Locating, independently introduced by Slater \cite{Sla75} and by Harary and Melter \cite{HM76}. In this model a set of vertices is probed and each probe reveals the graph distance to a stationary target vertex; the searcher wins if she can then determine the target's precise location. The minimum number of probes required to guarantee victory on the graph $G$ is its \textit{metric dimension}, $\mu(G)$. If the probed vertices are instead chosen sequentially, with each choice potentially depending on the results of previous probes, it may be possible to ensure victory with fewer probes; this is the Sequential Locating game, studied by Seager \cite{Sea13}.
In this paper we consider the Robber Locating game, introduced in a slightly different form by Seager \cite{Sea12} and further studied by Carragher, Choi, Delcourt, Erickson and West \cite{CCDEW}, as well as by the current authors \cite{HJKa, HJKb}. This combines features of the other games mentioned above. Like the Sequential Locating game, the aim is to deduce the target's location from distance probes, but, like Cops and Robbers or Cat and Mouse, the target moves around the graph in discrete steps. A single cop and robber take turns to act. For ease of reading we shall refer to the cop as female and the robber as male. The cop, who is not on the graph, can probe a vertex at her turn and is told the distance to the robber's current location. If at this point she can identify the robber's precise location, she wins. At the robber's turn he may move to an adjacent vertex. (The original version of the game also had the restriction that the robber may not move to the vertex most recently probed, but subsequent work has generally permitted such moves.)
In the Sequential Locating game the searcher can guarantee to win eventually, simply by probing every vertex, and the natural question is the minimum number of turns required to guarantee victory on a given graph $G$. In the Robber Locating game, by contrast, it is not necessarily true that the cop can guarantee to win in any number of turns. Consequently the primary question in this setting is whether, for a given graph $G$, the cop can guarantee to catch a robber who has full knowledge of how she will act. (On a finite graph, this is equivalent to asking whether there is some fixed number of turns in which she can guarantee victory.) We say that a graph is \textit{locatable} if she can do this and \textit{non-locatable} otherwise.
The main result of Carragher et al.\ \cite{CCDEW} is that for any graph $G$ a sufficiently large equal-length subdivision of $G$ is locatable. Formally, write $\gm$ for the graph obtained by replacing each edge of $G$ by a path of length $m$, adding $m-1$ new vertices for each such path. Carragher et al.\ proved that $\gm$ is locatable whenever $m\geq \min\{\abs{V(G)},1+\max\{\mu(G)+2^{\mu(G)},\Delta(G)\}\}$. In most graphs this bound is simply $\abs{V(G)}$, and they conjectured that this was best possible for complete graphs, i.e.\ that $K_n^{1/m}$ is locatable if and only if $m\geq n$. The present authors showed that in fact $K_n^{1/m}$ is locatable if and only if $m\geq n/2$, for every $n\geq 11$ \cite{HJKa}, and then subsequently that the same improvement on the upper bound may be obtained in general: provided $\abs{V(G)}\geq 23$, $\gm$ is locatable whenever $m\geq\abs{V(G)}/2$ \cite{HJKb}. This bound is best possible, since $K_n^{1/m}$ is not locatable if $m=(n-1)/2$, and some lower bound on $\abs{V(G)}$ is required for it to hold, since $K_{10}^{1/5}$ is not locatable \cite{HJKa}. These results fundamentally depend on taking equal-length subdivisions, and do not imply any results for unequal subdivisions, since subdividing a single edge of a locatable graph can result in a non-locatable graph (as observed independently by Seager \cite{Sea14} and in \cite{HJKa}); however, the present authors showed that an unequal subdivision is also locatable provided every edge is subdivided into a path of length at least $2\abs{V(G)}$ \cite{HJKb}.
\section{Multiple probes vs subdivisions}
Subdividing the edges of $G$ tends to favour the cop both by slowing down the robber's movement around the graph and by giving her extra vertices to probe. Another way in which we might make the game easier for the cop is to allow her to choose a set of $k$ vertices to probe at each turn, rather than a single vertex. Formally, we define the $k$-probe version of the game as follows. At the cop's turn she chooses a list $\seq uk$ of vertices, and is then told the corresponding list $d(v,u_1),\ldots,d(v,u_k)$ of distances to the robber's location, $v$. (Note that the cop must specify all $k$ vertices before being told any of the distances.) If, from this and previous information, she can deduce $v$, she wins. At the robber's turn, as before, he may move to a neighbouring vertex. We say $G$ is $k$-locatable if the cop can guarantee to win the $k$-probe version of the game, that is to say if she has a deterministic strategy which will succeed against any possible sequence of moves for the robber.
The game thus provides two natural graph invariants. Write $\operatorname{rl_{p}}(G)$ for the minimum value of $k$ such that $G$ is $k$-locatable, $\operatorname{rl_{s}}(G)$ for the smallest value such that $\gm$ is locatable whenever $m\geq\operatorname{rl_{s}}(G)$. (We use the subscripts p and s to indicate that the two quantities are the minimum number of probes and subdivisions respectively required for the cop to win.) We investigate the relationship between the two, showing that $\operatorname{rl_{s}}(G)\leq(2+o(1))\operatorname{rl_{p}}(G)$ (the factor of $2$ is best possible), but that no subexponential bound holds in the other direction. In Section~\ref{delta} we also bound $\operatorname{rl_{p}}(G)$ for connected, but not necessarily finite, graphs of maximum degree $\Delta$; our bound is best-possible up to a lower order error term, as shown by the infinite regular tree. We will always assume that the graph $G$ is connected; provided there are only countably many components the cop can always reduce the problem to the connected case by probing components one by one until the component containing the robber is identified.
Recall that $\gm$ is the graph obtained by replacing each edge of $G$ with a path of length $m$ through new vertices. Each such path is called a \textit{thread}, and a \textit{branch vertex} in $\gm$ is a vertex that corresponds to a vertex of $G$. We write $u\cdots v$ for the thread between branch vertices $u$ and $v$. We use ``a vertex on $u\cdots v$'' to mean any of the $m+1$ vertices of the thread, but ``a vertex inside $u\cdots v$'' excludes $u$ and $v$. When $m$ is even we use the term ``midpoint'' for the central vertex of a thread, and when $m$ is odd we will use the term ``near-midpoint'' for either of the vertices belonging to the central edge of a thread. We say that vertices $\seq vr$ form a \textit{resolving set} for $W\subseteq V(G)$ if the vectors $(d(w,v_1),\ldots,d(w,v_r))$ are different for all $w\in W$.
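For experimentation it is convenient to generate $\gm$ explicitly; the following minimal sketch (assuming Python with networkx; the vertex labels are an arbitrary choice) replaces every edge of $G$ by a thread of length $m$.
\begin{verbatim}
# Build the equal-length subdivision G^{1/m}: branch vertices keep their
# labels, internal thread vertices are labelled (u, v, i).
import networkx as nx

def subdivide(G, m):
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for u, v in G.edges():
        path = [u] + [(u, v, i) for i in range(1, m)] + [v]
        nx.add_path(H, path)
    return H

H = subdivide(nx.complete_graph(4), 5)
print(H.number_of_nodes(), H.number_of_edges())   # 28 nodes, 30 edges
\end{verbatim}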
We first note that $\operatorname{rl_{p}}(G)$ and $\operatorname{rl_{s}}(G)$ do not exist unless $G$ is countable. Even if $G$ is countable they may not exist, as shown by the infinite clique.
\begin{lem}\label{countable}For any integer $k$, if $G$ is $k$-locatable then $V(G)$ is countable.\end{lem}
\begin{proof}Suppose $G$ is $k$-locatable, and fix a winning strategy $\mathcal S$ for the cop. For each vertex $v$ let $t_v$ be the number of turns taken for
strategy $\mathcal S$ to succeed against a robber who is stationary at $v$, and let $\boldsymbol s_t^{(v)}$ be the vector of distances returned in the cop's $t$th turn, for $t=1,\ldots,t_v$. Then $(\boldsymbol s_1^{(v)},\ldots,\boldsymbol s_{t_v}^{(v)})$ is a finite sequence with terms in $\mathbb Z^k$, and so the set of possible sequences is countable. Since each sequence determines the robber's location, they must all be distinct, so $V(G)$ is countable.\end{proof}
\begin{rem}If $G$ is uncountable then $G^{1/m}$ is uncountable for any $m$, so neither $\operatorname{rl_{p}}(G)$ nor $\operatorname{rl_{s}}(G)$ exist. In fact, since we only consider a stationary robber, we have proved the stronger statement that there is no winning strategy for Seager's Sequential Locating game \cite{Sea13} on any uncountable graph.\end{rem}
\subsection{$\operatorname{rl_{s}}(G)$ is $O(\operatorname{rl_{p}}(G))$}
In this section we show that $\operatorname{rl_{s}}(G)\leq 2\operatorname{rl_{p}}(G)+2$ whenever $\operatorname{rl_{p}}(G)$ exists, and also show that this bound cannot be improved to $a\operatorname{rl_{p}}(G)+b$ for any $a,b\in\mathbb R$ with $a<2$ by giving examples of graphs $G_n$ with $\operatorname{rl_{p}}(G_n)=n$ and $\operatorname{rl_{s}}(G_n)\geq 2n-O(\log(n))$.
Note that $\operatorname{rl_{p}}(G)\leq\mu(G)$, since if the cop probes $\mu(G)$ vertices which form a resolving set for $V(G)$ she will locate the robber immediately. Thus \theorem{linear} implies that $\operatorname{rl_{s}}(G)\leq 2\mu(G)+2$, a considerable improvement of the bound $\operatorname{rl_{s}}(G)\leq 1+\max\{\mu(G)+2^{\mu(G)},\Delta(G)\}$ given by Carragher et al.\ \cite{CCDEW}.
\begin{thm}\label{linear}If $G$ is $k$-locatable then $\gm$ is locatable provided $m\geq 2k+2$.\end{thm}
\begin{proof}Suppose $G$ is $k$-locatable (and so, by \lemma{countable}, $V(G)$ is countable), and fix a winning strategy $\mathcal S$ in which the cop probes $k$ vertices of $G$ at each turn. We show how to use $\mathcal S$ to define a winning strategy for the cop with one probe per turn on $\gm$. In fact we can do this with the added restriction that the cop only probes branch vertices of $\gm$. Note that from the result of such a probe considered mod $m$ the cop can always determine the robber's distance to his nearest branch vertex; in particular, she can determine whether he is at a midpoint or near-midpoint, and whether he is at a branch vertex. Also, provided the robber is not at a midpoint, she can determine the distance between his nearest branch vertex and the branch vertex probed (being the nearest multiple of $m$ to the result of the probe), and hence the distance between the corresponding vertices of $G$.
\begin{clm}From any position, the cop can probe branch vertices of $\gm$ such that after finitely many turns either she locates the robber or he reaches a midpoint or near-midpoint.\end{clm}
\begin{poc}If the robber does not reach a midpoint or near-midpoint, his closest branch vertex, $v$, cannot change. By probing branch vertices in order, the cop will eventually identify $v$. Then she starts probing branch vertices in order again. If the robber reaches $v$ she will recognise that he is at a branch vertex, which must be $v$, and win. If not then he will remain on a single thread $v\cdots w$, and once the cop probes $w$ she will win.\end{poc}
\begin{clm}Fix a set $\{\seq ak\}$ of branch vertices. Suppose the robber is at a midpoint or near-midpoint. Then the cop may probe branch vertices of $\gm$ such that after finitely many turns either she has won or all of the following hold:
\begin{enumerate}[(a)]
\item the robber is at some branch vertex, $v$;
\item he has not reached a branch vertex in the interim; and
\item she has probed every $a_i$ after the robber's last visit to a (near-)midpoint, and hence while the robber was closer to $v$ than to any other branch vertex.
\end{enumerate}\end{clm}
\begin{poc}The cop proceeds as follows. She starts by probing any branch vertex, and every time the result of a probe indicates that the robber was at a (near-)midpoint, she probes a new branch vertex (using an ordering which includes every branch vertex). Once the robber is no longer at a (near-)midpoint she starts probing $\seq ak$ in turn. Every time that the robber returns to a (near)-midpoint she resumes probing new branch vertices, and every time that he leaves the (near-)midpoint(s) she restarts probing $\seq ak$, beginning with $a_1$. If she finishes probing $\seq ak$ she resumes probing new branch vertices. She continues this process until either she has won or the robber is at a branch vertex.
In this way, the cop will probe at least one new branch vertex every $k+1$ turns, so she will eventually probe both endpoints of the robber's thread, unless he reaches a branch vertex first. But if she probes both endpoints while the robber remains inside a thread, she identifies the thread and so wins. If the robber reaches a branch vertex first, then since $m\geq 2k+2$ the cop will have probed at least $k+1$ vertices since the robber was last at a (near-)midpoint, and therefore she will have probed all of $\seq ak$ in that time, as required.\end{poc}
Write $m_1$ for the first (near-)midpoint the robber reaches, $v_1$ for the next branch vertex he reaches after leaving $m_1$, $m_2$ for the next (near-)midpoint after leaving $v_1$, and so on. Note that $v_{i+1}$ is either equal to $v_i$ or adjacent to it in $G$, and so $v_1v_2\cdots$ is a possible trajectory for the robber in the $k$-probe game on $G$. We know that strategy $\mathcal S$ locates the robber in that game; write $A_i$ for the set of vertices probed by the cop at turn $i$ when playing strategy $\mathcal S$ against a robber following trajectory $v_1v_2\cdots$.
The cop will alternate between using Claim~1.1 to force the robber to $m_i$ and using Claim~1.2 with set $A_i$ to force him to $v_i$. We show that, assuming she has not yet won, she has enough information to do this by induction: she does for $i=1$ because $A_1$ is fixed; for $i>1$, $A_i$ depends only on the distances (in $G$) of vertices in $A_j$ to $v_j$ for $j<i$, which the cop will be able to deduce using (c) above. In this manner she ensures for each $i$ that either the robber reaches $v_i$ in finite time or he is caught before reaching it.
If the cop would have located the robber on her $t$th turn playing strategy $\mathcal S$ against a robber following trajectory $v_1v_2\cdots$ on $G$, then she has enough information to identify $v_t$ before the robber leaves it, and so locates him.\end{proof}
In fact the factor of $2$ in \theorem{linear} is best possible, as there are $k$-locatable graphs for which subdivisions of length at least $(2-o(1))k$ are required. We use the notation $[n]$ for the set $\{1,\ldots,n\}$ and $\level nk$ for the set of $k$-element subsets of $[n]$. Define the graph $G_{n,k}$, where $1\leq k<n$ as follows. Take as a vertex set $\{v_i\mid i\in[n]\}\cup\{w_A\mid A\in\level nk\}$; for each $A\neq B\in\level nk$, add the edge $w_Aw_B$, and for each $i\in[n]$ and $A\in\level nk$ such that $i\not\in A$, add the edge $v_iw_A$. Figure~1 shows $G_{4,2}$.
\begin{figure}
\caption{$G_{4, 2}$}
\end{figure}
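Before the formal argument, the following minimal sketch (assuming Python with networkx; the vertex labels are illustrative) constructs $G_{n,k}$ and checks the fact, used in the lemma below, that the distance vectors to $v_1,\ldots,v_{n-1}$ already separate the vertices $w_A$.
\begin{verbatim}
# Build G_{n,k} and verify that the distance vectors to v_1,...,v_{n-1}
# are pairwise distinct on the vertices w_A.
import networkx as nx
from itertools import combinations

def G_nk(n, k):
    G = nx.Graph()
    subsets = list(combinations(range(1, n + 1), k))
    G.add_nodes_from(('v', i) for i in range(1, n + 1))
    G.add_nodes_from(('w', A) for A in subsets)
    for A, B in combinations(subsets, 2):      # W is a clique
        G.add_edge(('w', A), ('w', B))
    for i in range(1, n + 1):                  # v_i ~ w_A iff i not in A
        for A in subsets:
            if i not in A:
                G.add_edge(('v', i), ('w', A))
    return G

n, k = 5, 2
G = G_nk(n, k)
dist = dict(nx.all_pairs_shortest_path_length(G))
probes = [('v', i) for i in range(1, n)]
vectors = [tuple(dist[p][('w', A)] for p in probes)
           for A in combinations(range(1, n + 1), k)]
assert len(set(vectors)) == len(vectors)
\end{verbatim}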
\begin{lem}Provided $k\leq n-2$, $\operatorname{rl_{p}}\bigl(G_{n,k}\bigr)=n-1$.\end{lem}
\begin{proof}Consider the effect of probing $\seq v{n-1}$. If the robber is at one of those vertices, he has certainly been located. If he is at $v_n$, each probe will return distance $2$, whereas if he is at some $w_A$ at least one probe will return $1$, so in the former case he is located. If he is at some $w_A$ then from the distances to each of $\seq v{n-1}$ the cop can deduce the distance to $v_n$, since exactly $n-k$ of these $n$ distances must be $1$, and so she can deduce whether $i\in A$ for each $i$, and hence determine $A$.
To complete the proof we show that the graph is not $(n-2)$-locatable. This is true even if the robber is confined to the set $W=\{w_S\mid S\in\level nk\}$: we show that for any set of $n-2$ probes there is some $A\neq B\in\level nk$ such that the probes fail to distinguish $w_A$ and $w_B$. This is certainly true if none of the vertices probed are among the $v_i$, since then there are at least two unprobed vertices in $W$, and these have distance $1$ from every probed vertex. Suppose that exactly $r>0$ of the $v_i$ and $n-r-2$ vertices in $W$ are probed, and let $t=\min\{k-1,r\}$. There are then $\binom{n-r}{k-t}\geq n-r$ vertices in $W$ which are not adjacent to the first $t$ of those $r$ vertices, but are adjacent to the remaining $r-t$. At least two of these vertices are unprobed, and these two vertices have the same distance as each other from every probed vertex.
Consequently, at each of the cop's turns she must leave some two vertices in $W$ undistinguished, and both of these are possible locations for the robber since from any vertex in $W$ he can reach either of them. So the cop cannot guarantee to locate the robber at any point.\end{proof}
\begin{lem}Provided $\binom{n-k-\floor{m/2}}k\geq 2m+2$, $\operatorname{rl_{s}}\bigl(G_{n,k}\bigr)>m$.\end{lem}
\begin{proof}We play the game on $G_{n,k}^{1/m}$ with some restrictions. The robber must stay within the $w$ section (i.e.\ $W$ together with all threads between vertices in $W$), and every time he gets to a branch vertex he must leave it at his next turn and proceed along some thread without stopping or turning round. Also, every time he leaves a branch vertex he will announce which one it was. We show that the cop cannot guarantee to identify the branch vertex the robber is approaching by the time he reaches it, and thus she cannot guarantee to locate the robber. When the robber leaves a branch vertex, say $w_A$, the cop has no information about the next branch vertex he is approaching (call this $w_B$). During the next $\ceil{m/2}$ probes, she can identify for each $i\in A$ whether or not $i\in B$, by probing $v_i$ or inside one of the threads leading from it, and she can additionally eliminate two possible candidates for $B$ per turn, by probing inside a thread of the form $w_C\cdots w_D$. Other probes are not helpful: probing $v_i$ for $i\not\in A$ gives no information, since the shortest path to the robber passes through $w_A$, and probing inside a thread meeting such a $v_i$ only eliminates one candidate for $B$. In the remaining $\floor{m/2}$ probes up to and including the time the robber is at $w_B$, she can determine whether $i\in B$ for $\floor{m/2}$ other values of $i$, and eliminate at most two other candidates for $B$ per turn. If none of the at most $\floor{m/2}+k$ values of $i$ the cop checks is in $B$, and none of the at most $2m$ candidates for $B$ she tests is correct, there are at least $\binom{n-k-\floor{m/2}}k-2m$ possibilities for $B$ remaining. If this is at least $2$, she cannot guarantee to catch the robber before he next leaves a branch vertex.\end{proof}
Suppose $m=2\ceil{n-a\log n}$ and $k=\floor{b\log n}$. Then
\begin{align*}\binom{n+1-k-m/2}{k}&\geq\binom{\ceil{(a-b)\log n}}{\floor{b\log n}}\\
&\geq\biggl(\bfrac{a-b}b^b+o(1)\biggr)^{\log n}\,.\end{align*}
If $\bfrac{a-b}b^b>\mathrm e$, as is the case when $a=3.6$ and $b=0.8$, then $\binom{{n+1}-k-m/2}{k}=\omega(n)$ and so for sufficiently large $n$
we have $\operatorname{rl_{s}}\bigl(G_{n+1,k}\bigr)>m$. Thus we have established the following result.
\begin{thm}For all sufficiently large $n$ there is a finite graph $G$ with $\operatorname{rl_{p}}(G)=n$ and $\operatorname{rl_{s}}(G)>2n-7.2\log n$.\end{thm}
\subsection{$\operatorname{rl_{p}}(G)$ is not $O(\operatorname{rl_{s}}(G))$}
In this section we show that $\operatorname{rl_{p}}(G)$ can be exponentially large in terms of $\operatorname{rl_{s}}(G)$. Naively we might expect a linear bound, since a successful
strategy for $\gm$ may only have $m$ turns between visits of the robber to different branch vertices. However, each vertex probed can depend on the results of previous ones, so the number of vertices which the strategy could potentially probe in those $m$ turns can be large.
Construct a graph $G_n$ as follows. There are four classes of vertices: $A, B, C$ and $D$. Each has $2^n$ vertices. Label the
vertices in $B$ as $b0...0, \ldots, b1...1$ ($b$ followed by all binary words of length $n$), and similarly for $C$. Label the vertices
of $A$ as $a, a1, a01, a11 \ldots$ ($a$ followed by all words of length at most $n$ which are either empty or end in 1), and
similarly for D. The only edges are between adjacent classes: $A$--$B$, $B$--$C$ or $C$--$D$. All vertices in $B$ are adjacent to all in $C$.
The vertices $ax$ and $by$ are adjacent if and only if $x$ is a prefix of $y$; similarly for $dx$ and $cy$. Figure~2 shows $G_2$.
\begin{figure}
\caption{$G_2$}
\end{figure}
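The construction of $G_n$ is straightforward to reproduce; the following minimal sketch (assuming Python with networkx; string labels such as a1 or b01 are illustrative) encodes the prefix adjacencies.
\begin{verbatim}
# Build G_n: classes A, B, C, D; all B--C edges; ax ~ by and dx ~ cy
# exactly when x is a prefix of y.
import networkx as nx
from itertools import product

def G_n(n):
    G = nx.Graph()
    words = [''.join(w) for w in product('01', repeat=n)]
    prefixes = [''] + [''.join(w) + '1'
                       for l in range(n) for w in product('01', repeat=l)]
    G.add_nodes_from('a' + x for x in prefixes)
    G.add_nodes_from('b' + y for y in words)
    G.add_nodes_from('c' + y for y in words)
    G.add_nodes_from('d' + x for x in prefixes)
    for y1 in words:
        for y2 in words:
            G.add_edge('b' + y1, 'c' + y2)
    for x in prefixes:
        for y in words:
            if y.startswith(x):
                G.add_edge('a' + x, 'b' + y)
                G.add_edge('d' + x, 'c' + y)
    return G

G = G_n(2)
print(G.number_of_nodes())   # 4 * 2^2 = 16
\end{verbatim}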
\begin{lem}For each $n\geq 2$, $\operatorname{rl_{s}}(G_n)=n+1$.\end{lem}
\begin{proof}The cop's strategy on $G_n^{1/m}$ for $m\geq n+1$ is as follows. First she probes branch
vertices one by one until either a probe results in a multiple of $m$, indicating the robber is at a branch vertex, or two different probes have
given responses of less than $m$. If the latter happens first then the robber has not passed through a branch vertex and the cop has
identified both ends of the thread he is on, so has located him.
Suppose that a probe has given a multiple of $m$, indicating that the robber is at a branch vertex. From the result of the probe mod $2m$, the cop will also know either that he is in $A\cup C$ or that he is in $B\cup D$; assume without loss of generality
the former. Probing $d$ next will tell her whether the robber was in $A$, was in $C$ and is still there, or was in $C$ and has moved.
In the latter case she next probes at $a$, which will tell her
whether the robber has returned to $C$ or not, and if not in whether he is on a thread between $B$ and $C$ or between $C$ and $D$. Now she probes branch vertices until the robber is at a branch vertex or is caught; if he reaches a branch vertex before being caught, the cop will know, from the information she had about his thread together with the result of her last probe mod $2m$, that he is at a branch vertex in a specific class.
\begin{case}The result of a probe establishes that the robber is in $A$, or establishes that the robber is in $D$.\end{case}
Without loss of generality we assume the cop knows that the robber is in $A$. She probes vertices in $A$ which are possible locations
for the robber, until she gets a response which is not a multiple of $m$, indicating that he has left $A$, or a response of $0$, which locates him. One of these must happen, since if he remains at his location in $A$ she will eventually probe it.
Once the robber moves away from $A$ he must be on a thread between a vertex in $A$ and one in $B$; the cop now
attempts to identify the endpoint in $B$. She does this by first probing $a1$ to find out whether it is of the form $b1x$ or $b0x$: a response less than $2m$ indicates that it is of the form $b1x$; a response greater than $2m$ indicates that it is of the form $b0x$, and a response of exactly $2m$ (or $0$) indicates that he has returned to $A$.
If the robber does not return to $A$, the cop has identified the first digit of the endpoint of his thread which is in $B$. She then probes either $a11$ or $a01$, depending on this digit, in order to determine the second digit. By continuing in this manner, either she will identify that the robber has returned to his original vertex in $A$ or she will identify the endpoint of his thread in $B$ while he is still inside the thread. In the latter case she then probes
vertices in $A$ in turn, allowing her to tell if he reaches the branch vertex in $B$ or to eventually find the other end of his current thread
if he remains inside it. If he returns to $A$ she restarts this case; since the robber must have returned to the same branch vertex in $A$ she reduces the set of possible locations in $A$ with each iteration of this case, so she will locate him within $2^n$ iterations.
\begin{case}The result of a probe establishes that the robber is in $B$, or establishes that the robber is in $C$.\end{case}
Without loss of generality we assume the cop knows that the robber is in $B$. She probes vertices at distance $1$ from $B$ which are on threads for which one endpoint is a possible location for the robber in $B$ and the other is $c0...0$. She does this until she gets a response which is not $\pm1$ mod $m$, indicating that the robber has moved. If he does not move she will eventually probe a vertex adjacent to him and win.
Once the robber leaves $B$, the cop will know whether he is on a thread leading to $c0...0$, since she will get a response of $0$ or $2m-2$ in this case and $2$ or $2m$ otherwise; since $m\geq n+1\geq 3$, $2m-2\neq 2$. If the robber is not on such a thread, the cop attempts to either identify the vertex in $C$ which is an endpoint of his thread or identify that no such vertex exists (i.e.\ he is on a thread between $A$ and $B$). She does this as in Case~1 by probing first $d1$, then $d01$ or $d11$, as appropriate, and so on. If she detects that the robber has returned
to $B$ (indicated by a result of exactly $2m$), she restarts this case; as before she reduces the set of possible locations in $B$ every time this happens, so at most $2^n$ iterations can occur.
If the robber does not return to $B$, and is on a thread between $B$ and $C$, the cop will have time to complete this process while he is still on that thread, and since the endpoint in $C$ is not $c0...0$, she will have probed at least one vertex at distance less than $2m$. Thus she will have identified both that he is on a thread between $B$ and $C$ and the endpoint of that thread in $C$, either while he is at that branch vertex or while he is still inside the thread. In the latter case she probes vertices in $B$ until she locates the other end of the thread or establishes that the robber has reached the known vertex in $C$.
If the robber does not return to $B$, and is on a thread between $A$ and $B$, again the cop will complete the process before he reaches $A$, and since in this case none of her probes will have given a result of less than $2m$, she will identify that he is on a thread between $A$ and $B$. If he has not reached $A$ when she identifies this, she probes vertices in $B$ in turn until she either locates him or identifies that he is in $A$, reducing to Case~1.
This completes the proof that $G_n^{1/m}$ is locatable for $m\geq n+1$.
Next we show that $G_n^{1/n}$ is not locatable. Again, we may restrict the robber: he may only visit branch vertices in $\{a,d\}\cup B\cup C$; each time he leaves a branch vertex he announces which vertex he has just left, and moves directly along a thread towards the next branch vertex. However, he may choose to remain stationary at a branch vertex. While the robber is permitted to move to $a$ or $d$, he will never actually do this; however, sometimes there will only be two possible locations which are consistent with the probe results, one of which is $a$.
Suppose the robber is at a branch vertex in $B\cup\{d\}$ (the case $C\cup\{a\}$ is equivalent), but the cop does not know which one (i.e.\ there are at least two possibilities). The robber might remain at this branch vertex, so the cop needs to probe some vertex which eliminates at least one possible branch vertex. So she must probe a vertex in $B\cup\{d\}$, or in $A$, or inside a thread between $A$ and $B$ or between $B\cup\{d\}$ and $C$. Suppose when she does this that the robber has just left a branch vertex, which is not $d$ (since there were at least two possibilities, this is always possible). He is heading towards a vertex in $C\cup\{a\}$. If she has just probed a vertex in $B$ she has no further information about where the robber is heading. If she has just probed $d$, or a vertex in $A$ or between $A$ and $B$, she may be able to tell whether or not he is heading towards $a$, but cannot distinguish destinations in $C$. If she has just probed a vertex in $C$ or between $B$ and $C$ she can tell whether he is heading to a specific vertex in $C$, but cannot distinguish other destinations in $C\cup\{a\}$. Finally, if she has just probed inside the thread $cx\cdots d$ for some $x$ then she can tell whether he is heading to $cx$, but cannot distinguish other vertices in $C\cup\{a\}$ (the shortest route from anywhere inside this thread to the vertex adjacent to $by$ on the thread to $a$ is via $by$). Consequently, some response to the probe is consistent with at least $2^n$ possible destinations.
In order to win before the robber reaches the next branch vertex, the cop must identify the thread he is on by the time he reaches the end of it, so she has $n-1$ turns remaining to do this. If she probes a vertex in $A\setminus\{a\}$ or $B$, or on a thread between the two, she gets no additional information. If she probes a vertex in $C\cup\{a\}$ or on a thread between $B$ and $C\cup\{a\}$ then she may eliminate one possible thread, but will not distinguish between the rest. If she probes a vertex on a thread between $C$ and $D$, or a vertex in $D$, then she may eliminate one possible thread, but all other possible locations for the robber will give one of two possible responses. Thus, if there were $k$ possibilities before the probe, and the response to the probe is that consistent with the greatest number of those possibilities, at least $\ceil{(k-1)/2}$ possibilities remain. Since there were $2^n$ possibilities, at least $2$ will remain after $n-1$ additional probes, and so the cop cannot guarantee to locate him.\end{proof}
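The engine of the cop's strategy in the two cases above is the digit-by-digit identification of the endpoint of the robber's thread. Stripped of the distance bookkeeping, that subroutine looks as follows (a sketch only, under the simplifying assumption that each probe reliably reports whether the guessed prefix is correct; the oracle \texttt{starts\_with} stands in for the distance responses described in the proof).
\begin{verbatim}
def identify_endpoint(n, starts_with):
    # Recover an unknown binary word y of length n, given an oracle
    # starts_with(p) reporting whether y begins with the prefix p.
    # This mirrors the cop's probes at a1, then a11 or a01, and so on.
    prefix = ""
    for _ in range(n):
        prefix += "1" if starts_with(prefix + "1") else "0"
    return prefix

# Example: recover the endpoint b101 from prefix queries in n = 3 probes.
assert identify_endpoint(3, "101".startswith) == "101"
\end{verbatim}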
\begin{lem}For each $n\geq 1$, $\operatorname{rl_{p}}(G_n)=2^n$.\end{lem}
\begin{proof}We show that if the cop is permitted $2^n-1$ probes per turn, the robber can escape forever provided at every turn he knows which vertices
the cop will probe at her next turn. Suppose he constrains himself to moving between $B$ and $C$. If he is in $B$, the cop will
get no information about which vertex in $B$ he is at unless she probes at least one vertex in $A\cup B$, so he need not move unless that happens.
Similarly if he is in $C$ he need not move unless the cop is about to probe at least one vertex in $C\cup D$. If he moves, he can get to
any vertex in the other class, so in order to win the cop must at some point probe a resolving set for $B$, say, together with some vertex in
$C\cup D$. Since the smallest resolving set for $B$ has size $2^n-1$, and another probe is needed, this is not possible.
If the cop is permitted $2^n$ probes per turn, she may win as follows. In the first turn she probes the set $\{ax,dx\mid x\text{ has length}<n\}$.
This will either locate the robber immediately or pin him down to a set of the form $\{by0, by1\}$ or $\{cy0, cy1\}$; assume without loss of generality he is in $\{by0, by1\}$.
On her next turn the cop probes $\{ay1\}\cup D\setminus\{d\}$. This either locates the robber, or shows that he is in $A\cup\{c0...0\}$. If the robber is in $A\cup\{c0...0\}$, next turn he must be in $A\cup B\cup\{c0...0,d\}$. Now the cop can win by probing $A$, since it is a resolving set for $A\cup B\cup\{c0...0,d\}$.\end{proof}
Thus we have shown the following result.
\begin{thm}For every $m\geq 3$ there is a finite graph $G$ with $\operatorname{rl_{s}}(G)=m$ and $\operatorname{rl_{p}}(G)=2^{m-1}$.\end{thm}
\section{Graphs of bounded degree}\label{delta}
In this section we obtain a general bound for $\operatorname{rl_{p}}(G)$ (and hence, using \theorem{linear}, a bound of the same order on $\operatorname{rl_{s}}(G)$) in terms of the maximum degree $\Delta(G)$. We allow graphs to be infinite (but connected) in this case. By considering infinite regular trees we show that our bound is tight up to a factor of $1+o(1)$. In the case $\Delta=3$ we show that $\operatorname{rl_{p}}(G)\leq3$, which is best possible. The case $\Delta(G)=2$ (i.e.\ $G$ is a finite cycle or path, or infinite ray or path) is trivial: any such graph is $2$-locatable, since any two adjacent vertices form a resolving set for $V(G)$, and this is best possible since $C_3$ is not $1$-locatable.
\subsection{General quadratic bounds}
In this section we give a quadratic upper bound on $\operatorname{rl_{p}}(G)$ in terms of $\Delta(G)$, and a lower bound on $\max\{\operatorname{rl_{p}}(G)\mid\Delta(G)=\Delta\}$ which differs from our upper bound only in lower-order terms.
\begin{thm}For any connected graph $G$ with $\Delta(G)=\Delta$, $\operatorname{rl_{p}}(G)\leq\floorf{(\Delta+1)^2}{4}+1$.\end{thm}
\begin{proof}We give a winning strategy for the cop using $\floorf{(\Delta+1)^2}{4}+1$ probes at each turn. On her first turn she probes arbitrary vertices; since $G$ is connected, all distances are finite.
Suppose that, from the results of probes at the cop's $t$th turn, she knows that the robber is at distance $d_t$ from some vertex $v_t$, and that the shortest path from $v_t$ to his location passes through one of $k_t$ neighbours of $v_t$, $\seq w{k_t}$. For each $i$ choose $\Delta-k_t$ neighbours of $w_i$, not including $v_t$. At her next turn the cop probes all these neighbours, together with $v_t$ and $\seq w{k_t}$. This is a total of at most
\[
k_t(\Delta-k_t+1)+1\leq\floorf{(\Delta+1)^2}{4}+1
\]
vertices, so she can always do this. If some probe returns a distance smaller than $d_t$, then taking $v_{t+1}$ to be that vertex gives $d_{t+1}<d_t$. If none of the distances returned is less than $d_t$, then at least one of the $w_i$ (say $w_1$) will return exactly $d_t$, and the shortest path from $w_1$ to the robber's location must not pass through $v_t$ or any of the $\Delta-k_t$ probed neighbours of $w_1$. So setting $v_{t+1}=w_1$, the cop is in the same position as before, with $d_{t+1}=d_t$ and $k_{t+1}<k_t$. Consequently $(d_t,k_t)$ is decreasing in the lexicographic ordering; since $k_t\leq\Delta$, the cop takes a bounded number of steps to catch the robber from any particular set of responses to the initial probe.
\end{proof}
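(As a routine check of the bound used in the proof: for fixed $\Delta$ the quantity $k_t(\Delta-k_t+1)$ is maximised over $1\leq k_t\leq\Delta$ at $k_t=\floorf{\Delta+1}{2}$, where it equals $\floorf{(\Delta+1)^2}{4}$. For example, for $\Delta=3$ the maximum is $2\cdot2=4=\floorf{16}{4}$, so the strategy above never needs more than $5$ probes per turn.)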
\begin{thm}For some connected graph $G$ with $\Delta(G)=\Delta$, $\operatorname{rl_{p}}(G)\geq\floorf{\Delta^2}{4}$.\end{thm}
\begin{proof}We show that on the infinite $\Delta$-regular tree, $T_{\Delta}$, if the cop probes fewer than $\floorf{\Delta^2}{4}$ vertices on each turn, it is possible that she never probes a vertex within $r$ of the robber's location, where $r$ is an arbitrarily large distance. In fact we claim that for any fixed $r$, it is possible that for every $t$, after the cop's $t$th turn there is some vertex $v_t$ such that $T_{\Delta}-v_t$ has at least $\ceilf{\Delta-1}{2}$ components which have never been probed, the robber's distance from $v_t$ is $r$, and any vertex at distance $r$ from $v_t$ in the unprobed components is possible. This is certainly possible after the first step, since there is some finite subtree containing all the vertices probed, so taking $v_1$ to be a leaf of that subtree, there are $\Delta-1$ components of $T_{\Delta}-v_1$ which have not been probed, and all vertices in these components which have distance $r$ from $v_1$ are possible locations for the robber which are not distinguished by the results of the cop's first turn. Suppose that the cop is in the required situation after her $t$th turn, and write $\seq wk$ for the neighbours of $v_t$ in unprobed components of $T_{\Delta}-v_t$. If, at the cop's $(t+1)$th turn, she probes vertices in at least $\floorf{\Delta+1}{2}$ of the components of $T_{\Delta}-w_i$ not containing $v_t$, for each $i$, then she makes at least
\[
\ceilf{\Delta-1}{2}\floorf{\Delta+1}{2}=\floorf{\Delta^2}{4}
\]
probes, since all these sets of vertices are disjoint. So this is not the case, and without loss of generality $T_{\Delta}-w_1$ has at least $\ceilf{\Delta-1}{2}$ unprobed components. Provided the robber was in one of these, and has moved further away from $v_t$, the same situation holds with $v_{t+1}=w_1$.\end{proof}
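(For instance, for $\Delta=4$ the identity used above reads $\ceilf{3}{2}\floorf{5}{2}=2\cdot2=4=\floorf{16}{4}$, so a cop using at most $3$ probes per turn can be evaded on $T_4$ indefinitely.)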
\subsection{Exact result for maximum degree $3$}
In this case we prove that all connected graphs with maximum degree $3$ are $3$-locatable. This is trivially best possible, since $\operatorname{rl_{p}}(K_4)=3$. Again, our bound applies even if the graph is infinite.
\begin{thm}For any connected graph $G$ with $\Delta(G)=3$, $\operatorname{rl_{p}}(G)\leq 3$.\end{thm}
\begin{proof}
If $G\cong K_{3,3}$, say with vertices $a,b,c$ in one class and $u,v,w$ in the other, the cop can win by probing $a,b,u$ on the first turn and, if necessary, $a,b,v$ on the second. Henceforth we assume $G\not\cong K_{3,3}$. We will also use the following observation several times.
\begin{obs}\label{square}Suppose the cop knows that at the time of her last probe the robber was at one of two vertices $p,q$ which have at least two neighbours $r,s$ in common. Write $p'$ for the third neighbour of $p$, and $q'$ for the third neighbour of $q$. If $p'=q$, or $p'$ is not adjacent to both $r$ and $s$, the cop can win by probing $q,r,s$. Otherwise $q'$ is not adjacent to both $r$ and $s$, so she can win by probing $p,r,s$.\end{obs}
We give a winning strategy for the cop. On her first turn she probes any three vertices, and since the graph is connected the robber's distance to each is finite. Suppose she has just probed a vertex and gotten a result of $r$. In the case $r\geq 2$, we show how she may get a result of less than $r$ from some probe within the next three time steps, and so by repeatedly employing this tactic she may eventually force a result of $1$. We then show how she may win from that position.
Suppose the cop probed $x_0$ which returned distance $r$. At the next turn she probes the neighbours of $x_0$; one of these probes must be at distance at most $r$ from the robber's new position. If it is less than $r$, we are done; otherwise from probing $x_1$, say, we get a result of $r$, and we know that the shortest path from $x_1$ to the robber's location must not go through $x_0$. Write $x_2,x_3$ for the other neighbours of $x_1$. Let $S=(\Gamma(x_2)\cup\Gamma(x_3))\setminus\{x_1,x_2,x_3\}$. At her next turn the cop probes as many vertices in $S$ as possible. If none of the responses is less than $r$ then we must have $\abs{S}=4$, and the one unprobed vertex of $S$ would have returned less than $r$. Now she probes all the neighbours of that vertex; one of them must be at distance less than $r$.
Continuing in this manner, the cop will either win or reach a position where she has just probed some vertex $v$ adjacent to the robber's position. We now show how the cop can win from this point.
Write $a,b,c$ for the neighbours of $v$. The cop next probes $a,b,c$. It is not possible for there to be three locations for the robber consistent with the results of these probes, since $G\not\cong K_{3,3}$, so either the cop has won or there are exactly two possible locations for the robber. Consequently, if at least two of the probes returned $1$, she can win using \observ{square}.
The only remaining possibility is that exactly one of the probes, say at $a$, returned $1$, and the robber could have been at either of the neighbours of $a$ which are not $v$, say $u$ and $w$. If $u$ and $w$ have another common neighbour, the cop can win using \observ{square}. If not, we may write $\Gamma(u)=\{a,x_1,x_2\}$ and $\Gamma(w)=\{a,y_1,y_2\}$, where $x_1,x_2,y_1,y_2$ are all distinct.
On her next turn, the cop will probe at $a$ and two of $x_1,x_2,y_1,y_2$; we will describe how she chooses which two to probe below. Note that the probe at $a$ will distinguish whether the robber is at $a$, in $\{u,w\}$ or in $\{x_1,x_2,y_1,y_2\}$, and the other probes will distinguish between $u$ and $w$, so the cop will win unless she is unable to distinguish between the two unprobed vertices in $\{x_1,x_2,y_1,y_2\}$ (and the robber is at one of them). We consider five cases depending on the local structure.
\begin{enumerate}[(i)]
\item One of $x_1,x_2,y_1,y_2$ (say $x_1$) is adjacent to exactly one of the others. In this case the cop probes $a$, $x_1$ and one of $x_2,y_1,y_2$ which is not adjacent to $x_1$.
\item One of $x_1,x_2,y_1,y_2$ (say $x_1$) is adjacent to two of the others. In this case the cop probes $a$, $x_1$ and one of $x_2,y_1,y_2$ which is adjacent to $x_1$.
\item Some pair of $x_1,x_2,y_1,y_2$ (say $x_1,y_1$) are at distance more than $2$. In this case the cop probes $a$, $x_1$ and $y_2$.
\item Some pair of $x_1,x_2,y_1,y_2$ have two common neighbours. In this case the cop probes $a$ and the other two vertices.
\item Every pair of $x_1,x_2,y_1,y_2$ have a unique common neighbour. In this case the cop probes $a$, $x_1$ and $x_2$.
\end{enumerate}
Cases (i)--(v) exhaust all possibilities. In cases (i), (ii) and (iii), the probe at $x_1$ will distinguish between the two unprobed vertices in $\{x_1,x_2,y_1,y_2\}$, and so the cop wins immediately. In case (iv), the cop wins unless the robber was at one of the two unprobed vertices; these vertices have two common neighbours so she can now win using \observ{square}. In case (v), the six common neighbours of the pairs must all be different; write $z_{i,j}$ for the common neighbour of $x_i$ and $y_j$. The cop wins immediately unless the robber was at $y_1$ or $y_2$. In this case his possible locations at her next probe will be $\{w,y_1,y_2,z_{1,1},z_{1,2},z_{2,1},z_{2,2}\}$. The cop can therefore win by probing $x_1$, $y_1$ and $y_2$.\end{proof}
\end{document} |
\begin{document}
\title{On the order of the Titchmarsh sum in the theory of the Riemann zeta-function and on the biquadratic effect in information theory}
\author{Jan Moser}
\address{Department of Mathematical Analysis and Numerical Mathematics, Comenius University, Mlynska Dolina M105, 842 48 Bratislava, SLOVAKIA}
\email{[email protected]}
\keywords{Riemann zeta-function}
\begin{abstract}
In this paper we obtain the solution of the classical problem (1934) on the order of the Titchmarsh sum. Simultaneously, we obtain a connection between this problem and the Kotelnikoff-Whittaker-Nyquist theorem
from information theory.
\end{abstract}
\section{Introduction}
This paper is an English translation of \cite{9}. In it we obtain the solution of the classical problem on the order of the complicated Titchmarsh sum
\begin{displaymath}
\sum_{\nu=M+1}^N Z^2(t_\nu)Z^2(t_{\nu+1}) .
\end{displaymath}
In connection with this we also obtain an analog of the biquadratic effect for
\begin{displaymath}
Z(t),\ t\in [T,2T] .
\end{displaymath}
It follows that the continuous signal defined by the function $Z(t)$ obeys the Kotelnikoff-Whittaker-Nyquist theorem from information theory. \\
Let us recall the definition of the Riemann zeta-function
\begin{displaymath}
\zeta(s)=\prod_p \frac{1}{1-\frac{1}{p^s}},\ s=\sigma+it,\ \sigma>1
\end{displaymath}
($p$ runs through the set of all primes), and the analytic continuation of this function to all $s\in\mathbb{C},\ s\not=1$. Riemann also defined the real-valued function
\begin{equation} \label{1.1}
\begin{split}
& Z(t)=e^{i\vartheta(t)}\zf,\ \vartheta(t)=-\frac t2\ln\pi+\text{Im}\ln\Gamma\left(\frac 14+i\frac t2\right)= \\
& =\frac t2\ln\frac{t}{2\pi}-\frac{t}{2}-\frac{\pi}{8}+\mathcal{O}\left(\frac 1t\right)
\end{split}
\end{equation}
(see \cite{10}, (35), (44), (62), \cite{11}, p. 98). From this it follows that the properties of the signal generated by Riemann's function are connected with the law of the distribution of the primes among
the positive integers, and this is to be regarded as a pleasant circumstance from the point of view of the Pythagorean philosophy of the Universe.
\section{The asymptotic formula for the Titchmarsh sum}
\subsection{}
In 1934 Titchmarsh presented the following hypothesis (see \cite{11}, p. 105):\ there is $A>0$ such that
\begin{displaymath}
\sum_{\nu=M+1}^N Z^2(t_\nu)Z^2(t_{\nu+1})=\mathcal{O}(N\ln ^A N)
\end{displaymath}
where $M$ is a sufficiently big fixed number and $\{ t_\nu\}$ is the sequence defined by the condition (comp. \cite{11}, p. 99)
\begin{equation} \label{2.1}
\vartheta(t_\nu)=\pi\nu ,\ \nu=1,2,\dots \ .
\end{equation}
In 1980 we proved this hypothesis with $A=4$ (see \cite{4}, (4)). The following estimate (see \cite{4}, (6))
\begin{equation} \label{2.2}
\sum_{\tilde{t}_{M+1}\leq \tilde{t}_\nu\leq T} Z^4(\tilde{t}_\nu)=\mathcal{O}(T\ln^5T)
\end{equation}
was the key to our proof. The sequence $\{\tilde{t}_\nu\}$ is defined by the formula (see \cite{4}, (5))
\begin{equation} \label{2.3}
\vartheta(\tilde{t}_\nu)=\frac{\pi}{2}\nu,\ \nu=1,2,\dots \ .
\end{equation}
In 1983 we improved the estimate (\ref{2.2}); namely, we proved the asymptotic formula (see \cite{7})
\begin{displaymath}
\sum_{\tilde{t}_{M+1}\leq \tilde{t}_\nu\leq T} Z^4(\tilde{t}_\nu)\sim\frac{1}{2\pi^3}T\ln^5T,\ T\to \infty .
\end{displaymath}
In this paper we obtain the solution of the classical Titchmarsh problem. In fact, we obtain general autocorrelation formulae for the
function $Z^2(t)$, and from these formulae the desired result follows as a special case. Namely, the following main Theorem holds true.
\begin{mydef11}
\begin{equation} \label{2.4}
\begin{split}
& \sum_{T\leq t_\nu\leq 2T}Z^2\{ t_\nu+k\rho_1(\nu)\}Z^2\{ t_\nu+l\rho_1(\nu)\}=\\
& =\left\{\begin{array}{lcl}\frac{3}{4\pi^5(k-l)^2}T\ln^5T+\mathcal{O}(MT\ln^4T) & , & k\not=l \\
\frac{1}{4\pi^3}T\ln^5T+\mathcal{O}\{(M+1)T\ln^4T\} & , & k=l \end{array} \right.,
\end{split}
\end{equation}
where
\begin{equation} \label{2.5}
\rho_1(\nu)=\frac{2\pi}{\ln\frac{t_\nu}{2\pi}},\ k,l=0,\pm 1,\pm 2,\dots ,\pm M,\ M=\mathcal{O}(\psi) ,
\end{equation}
and $\psi=\psi(T)$ is a function arbitrarily slowly increasing to $\infty$ as $T\to\infty$.
\end{mydef11}
\subsection{}
We obtain the final result on the order of Titchmarsh's sum from our Theorem 1 as follows. First of all we have (see (\ref{2.4}))
\begin{equation} \label{2.6}
\sum_{T\leq t_\nu\leq 2T}Z^2(t_\nu)Z^2\{ t_\nu+\rho_1(\nu)\}=\frac{3}{4\pi^5}T\ln^5T+\mathcal{O}(T\ln^4T) ,
\end{equation}
(here $k=0,\ l=1,\ M=1$). Since (see \cite{3}, (42))
\begin{equation} \label{2.7}
t_{\nu+1}-t_\nu=\frac{2\pi}{\ln\frac{t_\nu}{2\pi}}+\mathcal{O}\left(\frac{1}{t_\nu\ln t_\nu}\right)=\rho_1(\nu)+
\mathcal{O}\left(\frac{1}{t_\nu\ln^2 t_\nu}\right)
\end{equation}
then we obtain by the usual estimates
\begin{displaymath}
Z(t)=\mathcal{O}(t^{1/6}\ln t),\ Z'(t)=\mathcal{O}(t^{1/6}\ln ^2t)
\end{displaymath}
the following
\begin{equation} \label{2.8}
Z^2(t_{\nu+1})=Z^2\left\{t_\nu+\rho_1(\nu)+\mathcal{O}\left(\frac{1}{t_\nu\ln^2 t_\nu}\right)\right\}=
Z^2\{ t_\nu+\rho_1(\nu)\}+\mathcal{O}\left(\frac{\ln T}{T^{2/3}}\right) .
\end{equation}
Next, we obtain by (\ref{2.8}) and by the formula
\begin{displaymath}
\sum_{T\leq t_\nu\leq 2T} 1=\frac{1}{2\pi}T\ln T+\mathcal{O}(T)
\end{displaymath}
the following
\begin{equation} \label{2.9}
\begin{split}
& \sum_{T\leq t_\nu\leq 2T}Z^2(t_\nu)Z^2(t_{\nu+1})=\sum_{T\leq t_\nu\leq 2T}Z^2(t_\nu)Z^2\{ t_\nu+\rho_1(\nu)\}+\\
& +\mathcal{O}(T^{1/3}\ln^2T\cdot T^{-2/3}\ln T\cdot T\ln T)= \\
& =\sum_{T\leq t_\nu\leq 2T}Z^2(t_\nu)Z^2\{ t_\nu+\rho_1(\nu)\}+\mathcal{O}(T^{2/3}\ln^4T) .
\end{split}
\end{equation}
Hence, by (\ref{2.6}), (\ref{2.9}) we obtain
\begin{cor}
\begin{equation} \label{2.10}
\sum_{T\leq t_\nu\leq 2T}Z^2(t_\nu)Z^2(t_{\nu+1})=\frac{3}{4\pi^5}T\ln^5T+\mathcal{O}(T\ln^4T) .
\end{equation}
\end{cor}
\begin{remark}
The order of the Titchmarsh sum is determined by the asymptotic formula (\ref{2.10}).
\end{remark}
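The asymptotic formula (\ref{2.10}) can also be illustrated numerically. The following Python sketch is an illustration only: it assumes that the points $t_\nu$ of (\ref{2.1}) are the Gram points computed by mpmath's \texttt{grampoint} and that \texttt{siegelz} realises the function $Z$, and for moderate $T$ the error term $\mathcal{O}(T\ln^4T)$ is of course not negligible, so only rough agreement should be expected.
\begin{verbatim}
from mpmath import mp, grampoint, siegelz, siegeltheta, pi, log

mp.dps = 15

def titchmarsh_sum(T):
    # Sum Z^2(t_nu) Z^2(t_{nu+1}) over T <= t_nu <= 2T, where
    # vartheta(t_nu) = pi*nu, i.e. t_nu is the nu-th Gram point.
    nu = int(siegeltheta(T) / pi) + 1
    total, t = mp.mpf(0), grampoint(nu)
    while t <= 2 * T:
        t_next = grampoint(nu + 1)
        total += siegelz(t) ** 2 * siegelz(t_next) ** 2
        nu, t = nu + 1, t_next
    return total

T = 500
print(titchmarsh_sum(T), 3 / (4 * pi ** 5) * T * log(T) ** 5)
\end{verbatim}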
\section{Main lemmas and the conclusion of the proof of Theorem 1}
The following main lemmas hold true.
\begin{mydef5A}
\begin{equation} \label{3.1}
\begin{split}
& \sum_{T\leq \tilde{t}_\nu\leq 2T}Z^2\{ \tilde{t}_\nu+k\rho_2(\nu)\}Z^2\{ \tilde{t}_\nu+l\rho_2(\nu)\}= \\
& =\left\{\begin{array}{lcl}\frac{3}{2\pi^5(k-l)^2}T\ln^5T+\mathcal{O}(MT\ln^4T) & , & k\not=l \\
\frac{1}{2\pi^3}T\ln^5T+\mathcal{O}\{(M+1)T\ln^4T\} & , & k=l \end{array} \right.,
\end{split}
\end{equation}
where
\begin{equation} \label{3.2}
\rho_2(\nu)=\frac{2\pi}{\ln\frac{\tilde{t}_\nu}{2\pi}} ,
\end{equation}
and $k,l,M,\psi$ satisfy the conditions of Theorem 1.
\end{mydef5A}
\begin{mydef5B}
\begin{equation} \label{3.3}
\sum_{T\leq \tilde{t}_\nu\leq 2T}(-1)^\nu Z^2\{ \tilde{t}_\nu+k\rho_2(\nu)\}Z^2\{\tilde{t}_\nu+l\rho_2(\nu)\}=
\mathcal{O}\{(M+1)T\ln^4T\} .
\end{equation}
\end{mydef5B}
Using these lemmas we easily conclude the proof of Theorem 1. Namely, by adding (\ref{3.1}) and (\ref{3.3}) we have
\begin{equation} \label{3.4}
\begin{split}
& \sum_{T\leq \tilde{t}_{2\nu}\leq 2T}Z^2\{\tilde{t}_{2\nu}+k\rho_2(2\nu)\}Z^2\{\tilde{t}_{2\nu}+l\rho_2(2\nu)\}= \\
& =\left\{\begin{array}{lcl}\frac{3}{4\pi^5(k-l)^2}T\ln^5T+\mathcal{O}(MT\ln^4T) & , & k\not=l \\
\frac{1}{4\pi^3}T\ln^5T+\mathcal{O}\{(M+1)T\ln^4T\} & , & k=l \end{array} \right. .
\end{split}
\end{equation}
Since (see (\ref{2.1}), (\ref{2.3}), (\ref{2.5}), (\ref{3.2})), $\tilde{t}_{2\nu}=t_\nu,\ \rho_2(2\nu)=\rho_1(\nu)$, then
the formula (\ref{2.4}) follows from (\ref{3.4}). \\
The proofs of Lemmas A and B are given in parts 5--7 and 8--10, respectively.
\section{Biquadratic effect and the connection with the theorem of Kotelnikoff-Whittaker-Nyquist}
\subsection{}
Next, we obtain from (\ref{2.4}) with $k=l=M=0$ the following
\begin{cor}
\begin{equation} \label{4.1}
\sum_{T\leq t_\nu\leq 2T}Z^4(t_\nu)=\frac{1}{4\pi^3}T\ln^5T+\mathcal{O}(T\ln^4T) .
\end{equation}
\end{cor}
Since (see \cite{2}, p. 227, comp. \cite{12}, p. 125) we have
\begin{equation} \label{4.2}
\int_T^{2T} Z^4(t){\rm d}t=\frac{1}{2\pi^2}T\ln^4T+\mathcal{O}(T\ln^3T) ,
\end{equation}
and by (\ref{4.1})
\begin{equation} \label{4.3}
\frac{2\pi}{\ln T}\sum_{T\leq t_\nu\leq 2T}Z^4(t_\nu)=\frac{1}{2\pi^2}T\ln^4T+\mathcal{O}(T\ln^3T) ,
\end{equation}
then from (\ref{4.2}) by (\ref{4.3}) one obtains the following statement.
\begin{mydef12}
\begin{equation} \label{4.4}
\int_T^{2T}Z^4(t){\rm d}t\sim \frac{2\pi}{\ln T}\sum_{T\leq t_\nu\leq 2T}Z^4(t_\nu),\ T\to\infty .
\end{equation}
\end{mydef12}
We shall give another example of the relation of the type (\ref{4.4}). First of all, we have the Hardy-Littlewood mean-value theorem
\begin{equation} \label{4.5}
\int_T^{T+U}Z^2(t){\rm d}t\sim U\ln T,\ U=\sqrt{T}\ln T,\ T\to\infty .
\end{equation}
Next, we have proved a discrete analog of the formula (\ref{4.5}) (see \cite{5}, (6), comp. \cite{6}, (10); $H\to U,\ \tau^\prime=0$)
\begin{equation} \label{4.6}
\sum_{T\leq t_\nu\leq T+U}Z^2(t_\nu)\sim \frac{1}{2\pi}U\ln^2T,\ T\to\infty .
\end{equation}
The next statement follows immediately from the formulae (\ref{4.5}) and (\ref{4.6}).
\begin{mydef13}
\begin{equation} \label{4.7}
\int_T^{T+U}Z^2(t){\rm d}t\sim\frac{2\pi}{\ln T}\sum_{T\leq t_\nu\leq T+U}Z^2(t_\nu),\ T\to\infty .
\end{equation}
\end{mydef13}
\subsection{}
Let us recall some facts from information theory. For continuous signals the Kotelnikoff-Whittaker-Nyquist
theorem is the basic mathematical instrument. Namely, in radio engineering the following \emph{empirical} rule is used (see \cite{1}, pp. 81, 86, 96,
97): if the length of the signal $F(t)$ is approximately $T$ (for example, $t\in [0,T]$), the spectrum of the signal $F(t)$ is bounded
approximately by the frequency $w$, and $2Tw\gg 1$, then we have
\begin{equation} \label{4.8}
F(t)\approx \sum_{n=0}^{2Tw}\frac{\sin(2\pi wt-n\pi)}{2\pi wt-n\pi}F\left(\frac{n}{2w}\right),\ t\in [0,T] ,
\end{equation}
\begin{equation} \label{4.9}
\int_0^TF^2(t){\rm d}t\approx \frac{1}{2w}\sum_{n=0}^{2wT}F^2\left(\frac{n}{2w}\right) .
\end{equation}
The quantities
\begin{displaymath}
\frac{1}{2w},\quad \int_0^TF^2(t){\rm d}t
\end{displaymath}
are said to be the length of the Nyquist interval and the quadratic effect, respectively.
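A small numerical illustration of the empirical rules (\ref{4.8}) and (\ref{4.9}) (a sketch only; the test signal and the values of $T$ and $w$ are our own choices, and NumPy's \texttt{sinc} denotes $\sin(\pi x)/(\pi x)$, so that $\mathrm{sinc}(2wt-n)$ is exactly the kernel in (\ref{4.8})):
\begin{verbatim}
import numpy as np

T, w = 10.0, 4.0                      # signal length and approximate bandwidth

def F(t):                             # a test signal with frequencies below w
    return np.cos(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

n = np.arange(0, int(2 * T * w) + 1)  # sample indices 0, ..., 2Tw
samples = F(n / (2 * w))              # the samples F(n/(2w))

t = np.linspace(0, T, 2001)
recon = sum(samples[k] * np.sinc(2 * w * t - k) for k in n)   # formula (4.8)

energy_int = np.trapz(F(t) ** 2, t)          # left-hand side of (4.9)
energy_sum = np.sum(samples ** 2) / (2 * w)  # right-hand side of (4.9)
print(np.max(np.abs(recon - F(t))), energy_int, energy_sum)
\end{verbatim}
The agreement is only approximate, particularly near the endpoints of $[0,T]$, in line with the empirical character of (\ref{4.8}) and (\ref{4.9}).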
\begin{remark}
Since (see (\ref{2.7}))
\begin{displaymath}
t_{\nu+1}-t_\nu\sim\frac{2\pi}{\ln T},\ t_\nu\in [T,T+U];\ [T,2T],\ T\to\infty
\end{displaymath}
the asymptotic formulae (\ref{4.7}) and (\ref{4.4}) express the quadratic effect (comp. (\ref{4.9})) and the biquadratic effect,
respectively, of the signal defined by the function $Z(t)$. The following length of the Nyquist interval
\begin{displaymath}
\frac{1}{2w}\sim \frac{2\pi}{\ln T} ,\ T\to \infty
\end{displaymath}
corresponds with these effects.
\end{remark}
\begin{remark}
We have proved also the analog of the equation (\ref{4.8}) for the function $Z(t)$, in the sense of the discrete mean-square.
\end{remark}
\section{The formula for $Z^2\{\tilde{t}_\nu+k\rho_2(\nu)\}Z^2\{\tilde{t}_\nu+l\rho_2(\nu)\}$}
We use the Hardy-Littlewood formula (comp. \cite{4}, (24))
\begin{displaymath}
Z^2(t)=2\sum_{n\leq t_1}\frac{d(n)}{\sqrt{n}}\cos\{ 2\vartheta(t)-t\ln n\}+\mathcal{O}(\ln T),\ t_1=\frac{t}{2\pi} , \ t\in [T,2T]
\end{displaymath}
where $d(n)$ is the number of divisors of $n$. Then we have
\begin{equation} \label{5.1}
\begin{split}
& Z^2\{\tilde{t}_\nu+k\rho_2(\nu)\}=2\sum_{n\leq t_2}\frac{d(n)}{\sqrt{n}}\cos\{ 2\vartheta(\tilde{t}_\nu+k\rho_2(\nu))-\tilde{t}_\nu\ln n-k\rho_2(\nu)\ln n\}+ \\
& + \mathcal{O}(\ln T),\ t_2=\frac{\tilde{t}_\nu}{2\pi},\ \tilde{t}_\nu\in [T,2T] ,
\end{split}
\end{equation}
where the inequality $n\leq t_2$ follows from the estimate
\begin{displaymath}
\frac{\tilde{t}_\nu+k\rho_2(\nu)}{2\pi}-t_2=\frac{k\rho_2(\nu)}{2\pi}=\mathcal{O}\left(\frac{M+1}{\ln T}\right)=o(1) ,
\end{displaymath}
(see (\ref{2.5}), (\ref{3.2})). Since (see \cite{12}, p. 221)
\begin{equation} \label{5.2}
\vartheta'(t)=\frac 12\ln\frac{t}{2\pi}+\mathcal{O}\left(\frac{1}{t}\right),\quad \vartheta''(t)\sim \frac{1}{2t} ,
\end{equation}
then (see (\ref{2.3}))
\begin{equation} \label{5.3}
2\vartheta(\tilde{t}_\nu+k\rho_2(\nu))=\pi\nu+2k\pi+\mathcal{O}\left(\frac{M+1}{T\ln T}\right),\ \tilde{t}_\nu\in [T,2T] .
\end{equation}
Since the remainder in (\ref{5.3}) generates the error
\begin{displaymath}
\mathcal{O}\left(\frac{1}{T^{1/2-\epsilon}\ln T}\right)
\end{displaymath}
($\epsilon>0$ is arbitrarily small) in (\ref{5.1}), we obtain by (\ref{5.3}) the following
\begin{equation} \label{5.4}
\begin{split}
& Z^2\{\tilde{t}_\nu+k\rho_2(\nu)\} = \\
& =2(-1)^\nu\sum_{n\leq t_2}\frac{d(n)}{\sqrt{n}}\cos\{\tilde{t}_\nu\ln n+k\rho_2(\nu)\ln n\}+\mathcal{O}(\ln T),\ \tilde{t}_\nu\in [T,2T] .
\end{split}
\end{equation}
Consequently
\begin{equation} \label{5.5}
Z^2\{\tilde{t}_\nu+k\rho_2(\nu)\}Z^2\{\tilde{t}_\nu+l\rho_2(\nu)\}=S+R_1+R_2+\mathcal{O}(\ln^2 T)
\end{equation}
where
\begin{equation} \label{5.6}
\begin{split}
& S=2\sum_{n\leq t_2}\frac{d^2(n)}{n}\cos\{(k-l)\rho_2(\nu)\ln n\}+ \\
& + 2\ssum_{m,n\leq t_2,m\not=n} \frac{d(m)d(n)}{\sqrt{mn}}\cos\{\tilde{t}_\nu\ln\frac mn + k\rho_2(\nu)\ln m-l\rho_2(\nu)\ln n\}+ \\
& + 2\sum_{n\leq t_2}\frac{d^2(n)}{n}\cos\{ 2\tilde{t}_\nu\ln n+(k+l)\rho_2(\nu)\ln n\}+ \\
& + 2\ssum_{m,n\leq t_2,m\not=n}\frac{d(m)d(n)}{\sqrt{mn}}\cos\{\tilde{t}_\nu\ln (mn)+k\rho_2(\nu)\ln m+l\rho_2(\nu)\ln n\}= \\
& = S_1+S_2+S_3+S_4 ,
\end{split}
\end{equation}
and (see (\ref{5.4}))
\begin{equation} \label{5.7}
\begin{split}
& R_1=\mathcal{O}\left(\ln T\left|\sum_{n\leq t_2}\frac{d(n)}{\sqrt{n}}\cos\{\tilde{t}_\nu\ln n+k\rho_2(\nu)\ln n\}\right|\right)= \\
& = \mathcal{O}\left(Z^2\{\tilde{t}_\nu+k\rho_2(\nu)\}\ln T\right)+\mathcal{O}(\ln^2T) , \\
& R_2=\mathcal{O}\left(Z^2\{\tilde{t}_\nu+l\rho_2(\nu)\}\ln T\right)+\mathcal{O}(\ln^2T) .
\end{split}
\end{equation}
\section{The main term in the asymptotic formula (\ref{3.1})}
The following lemma holds true.
\begin{lemma}
\begin{equation} \label{6.1}
\sum_{T\leq \tilde{t}_\nu\leq 2T} S_1
=\left\{\begin{array}{lcl}\frac{3}{2\pi^5(k-l)^2}T\ln^5T+\mathcal{O}(T\ln^4T) & , & k\not=l \\
\frac{1}{2\pi^3}T\ln^5T+\mathcal{O}\{T\ln^4T\} & , & k=l \end{array} \right. .
\end{equation}
\end{lemma}
\begin{proof}
We have (see (\ref{5.6}))
\begin{equation} \label{6.2}
\begin{split}
& S_1=2S_{11} , \\
& S_{11}=\sum_{n\leq t_2}\frac{d^2(n)}{n}\cos(\alpha \ln n),\ t_2=\frac{\tilde{t}_\nu}{2\pi},\ \alpha=(k-l)\rho_2(\nu) .
\end{split}
\end{equation}
\begin{itemize}
\item[(A)] Let $k\not=l$, i.e. $\alpha\not=0$. \\
Using partial summation and Ramanujan's formula (see \cite{2}, p. 296)
\begin{displaymath}
D(x)=\sum_{n=1}^x d^2(n)=\frac{1}{\pi^2}x\ln^3 x+\mathcal{O}(x\ln^2x),\ x=\frac{\tilde{t}_\nu}{2\pi} ,
\end{displaymath}
we obtain ($D(0)=0$)
\begin{equation} \label{6.3}
\begin{split}
& S_{11}=\sum_{n=1}^{[x]}\{ D(n)-D(n-1)\}\frac 1n\cos(\alpha\ln n)= \\
& = \sum_{n=1}^{[x]}D(n)\left\{\frac{\cos(\alpha\ln n)}{n}-\frac{\cos(\alpha\ln(n+1))}{n+1}\right\}+\mathcal{O}(\ln^3x)=\\
& = \sum_{n=1}^{[x]}D(n)\int_n^{n+1}\{\cos(\alpha\ln v)+\alpha\sin(\alpha\ln v)\}\frac{{\rm d}v}{v^2}+\mathcal{O}(\ln^3x)= \\
& = \frac{1}{\pi^2}\int_1^x\ln^3v\{\cos(\alpha\ln v)+\alpha\sin(\alpha\ln v)\}\frac{{\rm d}v}{v}+\mathcal{O}(\ln^3x) = \\
& = \frac{1}{\pi^2}\int_0^{\ln x}\{ w^3\cos(\alpha w)+\alpha w^3\sin(\alpha w)\}{\rm d}w+\mathcal{O}(\ln^3 x) = \\
& = \frac{1}{\pi^2}F(x,\alpha)+\mathcal{O}(\ln^3x) .
\end{split}
\end{equation}
Next, by simple integration by parts, we obtain
\begin{equation} \label{6.4}
\begin{split}
& F(x,\alpha)=\left[\left(\frac{3w^2}{\alpha^2}-\frac{6}{\alpha^4}+\frac{6w}{\alpha^2}-w^3\right)\cos(\alpha w)+ \right. \\
& \left. +\left(\frac{w^3}{\alpha}-\frac{6w}{\alpha^3}+\frac{3w^2}{\alpha}-\frac{6}{\alpha^3}\right)\sin(\alpha w) \right]_{0}^{\ln x} = \\
& =\left(3\frac{\ln^2 x}{\alpha^2}-\frac{6}{\alpha^4}\right)\cos(\alpha \ln x)+\left(\frac{\ln^3x}{\alpha}-6\frac{\ln x}{\alpha^3}\right)
\sin(\alpha\ln x)+ \\
& + \frac{6}{\alpha^4}+\mathcal{O}(\ln^3T)=\frac{3}{4\pi^2(k-l)^2}\ln^4\frac{\tilde{t}_\nu}{2\pi}+\mathcal{O}(\ln^3T) ,
\end{split}
\end{equation}
since (see (\ref{3.2}), (\ref{6.2}))
\begin{displaymath}
\alpha\ln x=2\pi (k-l) .
\end{displaymath}
Consequently (see (\ref{6.2})-(\ref{6.4}))
\begin{equation} \label{6.5}
S_1=\frac{3}{2\pi^4(k-l)^2}\ln^4T+\mathcal{O}(\ln^3T) ,
\end{equation}
and, of course,
\begin{equation} \label{6.6}
\sum_{T\leq\tilde{t}_\nu\leq 2T}1=\frac 1\pi T\ln T+\mathcal{O}(T) .
\end{equation}
Hence, by (\ref{6.5}), (\ref{6.6}) the first formula in (\ref{6.1}) follows.
\item[(B)] Let $k=l$, i.e. $\alpha=0$. \\
Putting $\alpha=0$ in the fifth line of the formula (\ref{6.3}), we obtain
\begin{displaymath}
S_{11}=\frac{1}{4\pi^2}\ln^4\frac{T}{2\pi}+\mathcal{O}(\ln^3T) .
\end{displaymath}
This, together with (\ref{6.6}), gives the second formula in (\ref{6.1}).
\end{itemize}
\end{proof}
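Ramanujan's formula used in the proof above is easy to test numerically. The following pure-Python sketch (a rough check of the leading term only, so only order-of-magnitude agreement should be expected) compares $\sum_{n\leq x}d^2(n)$ with $\frac{1}{\pi^2}x\ln^3x$.
\begin{verbatim}
from math import log, pi

N = 10 ** 5
d = [0] * (N + 1)              # d[n] = number of divisors of n, by sieving
for i in range(1, N + 1):
    for j in range(i, N + 1, i):
        d[j] += 1

total, checkpoints = 0, {10 ** 3, 10 ** 4, 10 ** 5}
for n in range(1, N + 1):
    total += d[n] ** 2
    if n in checkpoints:
        print(n, total, n * log(n) ** 3 / pi ** 2)
\end{verbatim}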
\section{The estimates of the remaining terms}
\subsection{}
The following lemma holds true.
\begin{lemma}
\begin{equation} \label{7.1}
\sum_{T\leq\tilde{t}_\nu\leq 2T}S_2=\mathcal{O}(T\ln^4T) .
\end{equation}
\end{lemma}
\begin{proof}
Let
\begin{equation} \label{7.2}
\tilde{t}_k\leq 2T\leq \tilde{t}_{k+1},\ \tau=\max\{T,2\pi n,2\pi m\} .
\end{equation}
We have (see (\ref{5.6}))
\begin{equation} \label{7.3}
\begin{split}
& \sum_{T\leq\tilde{t}_\nu\leq 2T}S_2=2\ssum_{m,n\leq \tilde{t}_k/2\pi, m\not=n} \frac{d(m)d(n)}{\sqrt{mn}}U_2 , \\
& U_2=\sum_{\tau\leq\tilde{t}_\nu\leq 2T}\cos\{\tilde{t}_\nu\ln\frac mn +h_1(\nu)\},\ h_1(\nu)=k\rho_2(\nu)\ln m-l\rho_2(\nu)\ln n .
\end{split}
\end{equation}
First of all we put
\begin{equation} \label{7.4}
\begin{split}
& U_2=U_{21}-U_{22} , \\
& U_{21}=\sum_{\tau\leq \tilde{t}_\nu\leq 2T}\cos\{h_1(\nu)\}\cos\left\{\tilde{t}_\nu\ln\frac mn\right\} , \\
& U_{22}=\sum_{\tau\leq \tilde{t}_\nu\leq 2T}\sin\{h_1(\nu)\}\sin\left\{\tilde{t}_\nu\ln\frac mn\right\} .
\end{split}
\end{equation}
It is sufficient to estimate the term $U_{21}$. Since
\begin{displaymath}
h_1(\nu)=\mathcal{O}(M),\ \tilde{t}_\nu\in [T,2T],
\end{displaymath}
then
\begin{displaymath}
h_1(\nu)\in [-AM,AM] .
\end{displaymath}
Now we divide the segment $[-AM,AM]$ into $\mathcal{O}(M)$ parts in such a way that on each part the following is true: either
\begin{displaymath}
0\leq \cos\{h_1(\nu)\}\leq 1 ,
\end{displaymath}
or
\begin{displaymath}
0\leq -\cos\{h_1(\nu)\}\leq 1 ,
\end{displaymath}
and the sequences
\begin{displaymath}
\cos\{h_1(\nu)\},-\cos\{h_1(\nu)\}
\end{displaymath}
are monotone. Using Abel's transformation on each of these parts, we obtain
\begin{equation} \label{7.5}
|U_{21}|\leq AM\cdot\max_{\tau_1,\tau_2;\ \tau\leq \tau_1<\tau_2\leq 2T}
\left|\sum_{\tau_1\leq\tilde{t}_\nu\leq \tau_2}\cos\left\{\tilde{t}_\nu\ln\frac mn\right\}\right| ,
\end{equation}
Of course, instead of the sum in (\ref{7.5}) we may estimate the following sum
\begin{displaymath}
U_{211}=\sum_{\tau\leq \tilde{t}_\nu\leq \tau_1\leq 2T} \cos\left\{\tilde{t}_\nu\ln\frac mn\right\} ,
\end{displaymath}
and for this sum the method explained in \cite{4}, (30)-(37), is applicable. We then obtain from (\ref{7.5}) the estimate
\begin{displaymath}
U_{21}=\mathcal{O}\left(\frac{(M+1)\ln T}{\left|\ln\frac mn\right|}\right) .
\end{displaymath}
This estimate is valid also for $U_{22}$ and, by (\ref{7.4}), for $U_2$. Hence, from (\ref{7.3}) (comp. \cite{4}, (38), (39)) we obtain (\ref{7.1}).
\end{proof}
\subsection{}
On the basis of \cite{4}, (40)-(59), and \cite{7}, (5), (6), in a similar way we obtain
\begin{lemma}
\begin{equation} \label{7.6}
\sum_{T\leq\tilde{t}_\nu\leq 2T}S_4=\mathcal{O}(T\ln^4T) .
\end{equation}
\end{lemma}
Next, on the basis of \cite{7}, (8)-(12), we obtain
\begin{lemma}
\begin{equation} \label{7.7}
\sum_{T\leq\tilde{t}_\nu\leq 2T}S_3=\mathcal{O}(T\ln T) .
\end{equation}
\end{lemma}
Consequently, by (\ref{5.6}), (\ref{7.1}), (\ref{7.6}), (\ref{7.7}) we have
\begin{equation} \label{7.8}
\sum_{T\leq\tilde{t}_\nu\leq 2T}S=\sum_{T\leq\tilde{t}_\nu\leq 2T}S_1+ \mathcal{O}(T\ln^4T) .
\end{equation}
\subsection{}
From the Riemann-Siegel formula
\begin{displaymath}
Z(t)=2\sum_{n\leq t_3}\frac{1}{\sqrt{n}}\cos\{\vartheta(t)-t\ln n\}+\mathcal{O}(t^{-1/4}),\ t_3=\sqrt{\frac{t}{2\pi}}
\end{displaymath}
we easily obtain the estimate
\begin{equation} \label{7.9}
\sum_{T\leq\tilde{t}_\nu\leq 2T}Z^2\{\tilde{t}_\nu+k\rho_2(\nu)\}=\mathcal{O}(T\ln^2T) .
\end{equation}
Thus (see (\ref{5.7}), (\ref{6.6}), (\ref{7.9})) we have
\begin{equation} \label{7.10}
\sum_{T\leq\tilde{t}_\nu\leq 2T}\{ R_1+R_2+\mathcal{O}(\ln^2T)\}=\mathcal{O}(T\ln^3T) .
\end{equation}
Finally, from (\ref{5.5}) by (\ref{6.1}), (\ref{7.8}), (\ref{7.10}) we obtain (\ref{3.1}).
\section{The formula for $(-1)^\nu Z^2\{\tilde{t}_\nu+k\rho_2(\nu)\}Z^2\{\tilde{t}_\nu+l\rho_2(\nu)\}$}
First of all (see (\ref{5.5}))
\begin{equation} \label{8.1}
\begin{split}
& (-1)^\nu Z^2\{\tilde{t}_\nu+k\rho_2(\nu)\}Z^2\{\tilde{t}_\nu+l\rho_2(\nu)\}= \\
& = \bar{S}+(-1)^\nu (R_1+R_2)+\mathcal{O}(\ln^2T) ,
\end{split}
\end{equation}
where (see (\ref{5.6}))
\begin{equation} \label{8.2}
\bar{S}=(-1)^\nu S_1+(-1)^\nu S_2+(-1)^\nu S_3+(-1)^\nu S_4 .
\end{equation}
From (\ref{6.5}) we obtain
\begin{equation} \label{8.3}
\sum_{T\leq\tilde{t}_\nu\leq 2T}(-1)^\nu S_1=\mathcal{O}(T\ln^4T) .
\end{equation}
Next we have (see (\ref{5.6}), (\ref{7.3}))
\begin{displaymath}
\begin{split}
& \sum_{T\leq\tilde{t}_\nu\leq 2T}(-1)^\nu S_3=2\sum_{n\leq \tilde{t}_k/2\pi}\frac{d^2(n)}{n}\bar{U}_3 , \\
& \bar{U}_3=\sum_{\tau\leq\tilde{t}_\nu\leq 2T}\cos\{\pi\nu+2\tilde{t}_\nu\ln n+h_2(\nu)\} , \\
& h_2(\nu)=(k+l)\rho_2(\nu)\ln n .
\end{split}
\end{displaymath}
The estimation of the sum $\bar{U}_3$ may be reduced to the estimation of the following sums
\begin{displaymath}
\bar{U}_{311}(r)=\sum_{\tau\leq\tilde{t}_\nu\leq \tau_1\leq 2T}\cos\left\{\pi\nu+2\tilde{t}_\nu\ln n-\frac{\pi}{2}r\right\},\ r=0,1
\end{displaymath}
(comp. part 7.1). We obtain the estimates of these sums by van der Corput's lemma with the second derivative (see \cite{12}, p. 61). Consequently (comp. \cite{7}, (7)-(12)), we obtain
the estimate
\begin{equation} \label{8.4}
\sum_{T\leq\tilde{t}_\nu\leq 2T}(-1)^\nu S_3=\mathcal{O}(T\ln T) .
\end{equation}
Next, we have (see (\ref{5.7}), (\ref{7.10}), (\ref{8.1}))
\begin{equation} \label{8.5}
\sum_{T\leq\tilde{t}_\nu\leq 2T}\{ (-1)^\nu(R_1+R_2)+\mathcal{O}(\ln^2T)\}=\mathcal{O}(T\ln^3T) .
\end{equation}
Now, the proof of Lemma B rests on the following two lemmas.
\begin{lemma}
\begin{equation} \label{8.6}
\sum_{T\leq\tilde{t}_\nu\leq 2T} (-1)^\nu S_2=\mathcal{O}(T\ln^{7/2}T) .
\end{equation}
\end{lemma}
\begin{lemma}
\begin{equation} \label{8.7}
\sum_{T\leq\tilde{t}_\nu\leq 2T} (-1)^\nu S_4=\mathcal{O}(T\ln^{3}T) .
\end{equation}
\end{lemma}
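In the estimates of Section 8 and in the proofs of Lemmas 5 and 6 below we repeatedly use the standard first- and second-derivative estimates for exponential integrals; we recall them here in the rough form in which they are applied, with unspecified absolute constants in the $\mathcal{O}$-symbols (see \cite{12}, Chapter IV). If $\Phi$ is real on $[a,b]$, $\Phi'$ is monotonic and $|\Phi'(\nu)|\geq\lambda_1>0$, then
\begin{displaymath}
\int_a^b\cos\{ 2\pi\Phi(\nu)\}{\rm d}\nu=\mathcal{O}\left(\frac{1}{\lambda_1}\right) ;
\end{displaymath}
if $|\Phi''(\nu)|\geq\lambda_2>0$ on $[a,b]$, then the same integral is $\mathcal{O}(\lambda_2^{-1/2})$; and if $\Phi'$ is monotonic with $|\Phi'|\leq\frac 12-\delta$ for some fixed $\delta>0$, then $\sum_{a\leq\nu\leq b}\cos\{ 2\pi\Phi(\nu)\}$ differs from the corresponding integral by $\mathcal{O}(1)$.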
\section{Proof of Lemma 5}
We have (see (\ref{5.6}), (\ref{7.2}), (\ref{7.3}))
\begin{equation} \label{9.1}
\begin{split}
& W_2=\sum_{T\leq\tilde{t}_\nu\leq 2T}(-1)^\nu S_2=2\ssum_{m,n\leq \tilde{t}_k/2\pi, m\not=n}\frac{d(m)d(n)}{\sqrt{mn}}\bar{U}_2 , \\
& \bar{U}_2=\sum_{\tau\leq\tilde{t}_\nu\leq 2T}\cos\left\{\pi\nu-\tilde{t}_\nu\ln\frac mn -h_1(\nu)\right\} .
\end{split}
\end{equation}
Let us recall that the sequence $\{ g_\nu\}$ is defined by the formula (see \cite{8}, (6))
\begin{equation} \label{9.2}
\vartheta_1(g_\nu)=\frac{\pi}{2}\nu,\ \nu=1,2,\dots
\end{equation}
where
\begin{equation} \label{9.3}
\begin{split}
& \vartheta_1(t)=\frac t2\ln\frac{t}{2\pi}-\frac t2-\frac{\pi}{8},\ \vartheta'_1(t)=\frac 12\ln\frac{t}{2\pi},\ \vartheta_1''(t)=\frac{1}{2t} ,
\end{split}
\end{equation}
and (see (\ref{1.1}))
\begin{equation} \label{9.4}
\vartheta(t)=\vartheta_1(t)+\mathcal{O}\left(\frac 1t\right) .
\end{equation}
It is clear (comp. (\ref{5.2}) with (\ref{9.3})) that the sequence $\{ g_\nu\}$ is more convenient for the estimation of the sum $\bar{U}_2$. \\
Since (see (\ref{2.3}), (\ref{9.2}), (\ref{9.3}))
\begin{displaymath}
\mathcal{O}\left(\frac{1}{\tilde{t}_\nu}\right)=\vartheta_1(\tilde{t}_\nu)-\vartheta_1(g_\nu)=(\tilde{t}_\nu-g_\nu)\vartheta_1'(T_\nu),\ T_\nu\in (g_\nu,\tilde{t}_\nu); (\tilde{t}_\nu,g_\nu)
\end{displaymath}
then (see (\ref{9.3}))
\begin{equation} \label{9.5}
\tilde{t}_\nu-g_\nu=\mathcal{O}\left(\frac{1}{T\ln T}\right),\ \tilde{t}_\nu\in [T,2T] .
\end{equation}
Thus we obtain from (\ref{9.1}) by (\ref{6.6}), (\ref{9.5})
\begin{equation} \label{9.6}
\begin{split}
& \bar{U}_2=\bar{U}_{21}+\mathcal{O}(\ln T),\\
& \bar{U}_{21}=\sum_{\tau\leq g_\nu\leq 2T}\cos\left\{\pi\nu-g_\nu\ln\frac mn-h_1(\nu)\right\} .
\end{split}
\end{equation}
The estimation of the sum $\bar{U}_{21}$ may be reduced to the estimation of the following sums (comp. part 7.1).
\begin{equation} \label{9.7}
\bar{U}_{211}(r)=\sum_{\tau\leq g_\nu\leq\tau_1\leq 2T}\cos\{ 2\pi\Phi_2(\nu)\},\ r=0,1
\end{equation}
where
\begin{equation} \label{9.8}
\Phi_2(\nu)=\frac{\nu}{2}-\frac{g_\nu}{2\pi}\ln\frac mn - \frac r4 .
\end{equation}
Let $m>n$. Since (see (\ref{9.2}))
\begin{equation} \label{9.9}
\frac{{\rm d}g_\nu}{{\rm d}\nu}=\frac{\pi}{2\vartheta_1'(g_\nu)}=\frac{\pi}{\ln\frac{g_\nu}{2\pi}}
\end{equation}
(here and in similar cases we assume that $g_\nu$ is defined by (\ref{9.2}) for all $\nu\geq 1$) then
\begin{equation} \label{9.10}
\Phi_2'(\nu)=\frac 12-\frac{1}{2\ln\frac{g_\nu}{2\pi}}\ln\frac mn,\ \Phi_2''(\nu)>0 .
\end{equation}
Because
\begin{displaymath}
0<\frac{1}{2\ln\frac{g_\nu}{2\pi}}\ln\frac mn\leq \frac{\ln m}{2\ln\frac{T}{2\pi}}\leq \frac{\ln\frac T\pi}{2\ln\frac{T}{2\pi}}<\frac 12+\epsilon \ \Rightarrow \ |\Phi_2'(\nu)|<\frac 12 ,
\end{displaymath}
then (see \cite{12}, p. 65, Lemma 4.8)
\begin{displaymath}
\bar{U}_{211}(r)=\int_{\tau\leq g_\nu\leq\tau_1}\cos\{ 2\pi\Phi_2(\nu)\}{\rm d}\nu+\mathcal{O}(1) .
\end{displaymath}
Next, since $\Phi_2'(\nu)$ is increasing (see (\ref{9.10})), we have
\begin{displaymath}
\begin{split}
& \Phi_2'(\nu)\geq \frac 12-\frac{1}{2\ln\frac{\tau}{2\pi}}\ln\frac mn = \frac{1}{2\ln\frac{\tau}{2\pi}}\ln\frac{\tau n}{2\pi m};\ g_\nu\in [\tau,\tau_1] .
\end{split}
\end{displaymath}
Let $n\geq 2$. If $2\pi m>T$ then $\tau=2\pi m$ (see (\ref{7.2})), and
\begin{displaymath}
\Phi_2'(\nu)\geq \frac{\ln 2}{2\ln\tau}>\frac{A}{\ln T} .
\end{displaymath}
If $2\pi m\leq T$ then $\tau=T$, and
\begin{displaymath}
\Phi_2'(\nu)\geq \frac{1}{2\ln\frac{T}{2\pi}}\ln\frac{Tn}{2\pi m}\geq \frac{\ln 2}{2\ln\frac{T}{2\pi}}>\frac{A}{\ln T} .
\end{displaymath}
Thus, in the case $m>n\geq 2$ one obtains the following estimate (by \cite{12}, p. 61, Lemma 4.2)
\begin{displaymath}
\bar{U}_{211}(r)=\mathcal{O}(\ln T) .
\end{displaymath}
Consequently, for $\bar{U}_2$ (see (\ref{9.6}), comp. (\ref{7.5})), we have
\begin{displaymath}
\bar{U}_2=\mathcal{O}((M+1)\ln T) ,
\end{displaymath}
and for the corresponding part $W_{21}$ ($m>n\geq 2$) of the sum $W_2$ (see (\ref{9.1})) we obtain
\begin{equation} \label{9.11}
W_{21}=\mathcal{O}\left\{(M+1)\ln T\ssum_{m,n\leq T} \frac{d(m)d(n)}{\sqrt{mn}}\right\}=\mathcal{O}\{(M+1)T\ln^3T\}
\end{equation}
since (see \cite{2}, pp. 297, 298)
\begin{equation} \label{9.12}
\ssum_{m,n\leq x}\frac{d(m)d(n)}{\sqrt{mn}}=\mathcal{O}\{ x\ln^2x\} .
\end{equation}
Let $n=1$. Since (see (\ref{9.9}), (\ref{9.10}); $m\geq 2$)
\begin{displaymath}
\Phi_2''(\nu)=\frac{\pi}{2}\frac{\ln m}{g_\nu\ln^3\frac{g_\nu}{2\pi}} > \frac{A}{T\ln^3T} ,
\end{displaymath}
then we obtain, by the lemma with the second derivative,
\begin{displaymath}
\bar{U}_{211}(r;n=1)=\mathcal{O}(\sqrt{T}\ln^{3/2}T) ,
\end{displaymath}
i.e. (see (9.6))
\begin{displaymath}
\bar{U}_2(n=1)=\mathcal{O}\{(M+1)\sqrt{T}\ln^{3/2}T\} .
\end{displaymath}
Consequently, for the corresponding part $W_{22}$ of the sum $W_2$ (see (9.1)) we obtain
\begin{equation} \label{9.13}
\begin{split}
& W_{22}=\mathcal{O}\left\{(M+1)\sqrt{T}\ln^{3/2}T\sum_{n\leq T}\frac{d(n)}{\sqrt{n}}\right\}= \\
& = \mathcal{O}\left\{(M+1)\sqrt{T}\ln^{3/2}T\sqrt{T}\left(\sum_{n\leq T}\frac{d^2(n)}{n}\right)^{1/2}\right\} = \\
& = \mathcal{O}\{(M+1)T\ln^{7/2}T\}
\end{split}
\end{equation}
because (see \cite{2}, p. 296)
\begin{displaymath}
\sum_{n\leq x}\frac{d^2(n)}{n}=\frac{1}{4\pi^2}\ln^4x+\mathcal{O}(\ln^3x) .
\end{displaymath}
Thus in the case $m>n$ we have (see (9.11), (9.13))
\begin{equation} \label{9.14}
W_2(m>n)=\mathcal{O}\{(M+1)T\ln^{7/2}T\} .
\end{equation}
Now, let $n>m$. In this case we have (see (9.8))
\begin{displaymath}
\begin{split}
& 2\pi\Phi_2(\nu)=\pi\nu-g_\nu\ln\frac mn-\frac{\pi}{2}r=\pi\nu+g_\nu\ln\frac nm-\frac{\pi}{2}r= \\
& = 2\pi\nu-2\pi\left(\frac{\nu}{2}-\frac{g_\nu}{2\pi}\ln\frac nm+\frac r4\right)=2\pi\nu-2\pi\tilde{\Phi}_2 ,
\end{split}
\end{displaymath}
i.e. in this case the following estimate
\begin{equation} \label{9.15}
W_2(n>m)=\mathcal{O}\{(M+1)T\ln^{7/2}T\}
\end{equation}
holds true. Finally, from (9.1) by (9.14), (9.15) the estimate (8.6) follows.
\section{Proof of Lemma 6}
We have (see (5.6), (7.2))
\begin{equation} \label{10.1}
\begin{split}
& W_4=\sum_{T\leq\tilde{t}_\nu\leq 2T}(-1)^\nu S_4=2\ssum_{m,n\leq\tilde{t}_k/2\pi, m\not=n}\frac{d(m)d(n)}{\sqrt{mn}}\bar{U}_4 , \\
& \bar{U}_4=\sum_{\tau\leq\tilde{t}_\nu\leq 2T}\cos\{ \pi\nu-\tilde{t}_\nu\ln(mn)-h_3(\nu)\} , \\
& h_3(\nu)=k\rho_2(\nu)\ln m+l\rho_2(\nu)\ln n .
\end{split}
\end{equation}
The estimation of the sum $\bar{U}_4$ may be reduced to the estimation of the following sums (similarly to the case (9.6), (9.7))
\begin{displaymath}
\bar{U}_{411}(r)=\sum_{\tau\leq g_\nu\leq \tau_1\leq 2T} \cos\{ 2\pi\Phi_4(\nu)\}+\mathcal{O}(\ln T),\ r=0,1
\end{displaymath}
where
\begin{displaymath}
\Phi_4(\nu)=\frac{\nu}{2}-\frac{g_\nu}{2\pi}\ln(nm)+\frac r4 .
\end{displaymath}
First of all, we have
\begin{equation} \label{10.2}
\Phi_4'(\nu)=\frac 12-\frac{\ln(nm)}{2\ln\frac{g_\nu}{2\pi}} ,\quad \Phi_4''(\nu)>0 .
\end{equation}
Since
\begin{displaymath}
0<\frac{\ln(mn)}{2\ln\frac{g_\nu}{2\pi}}\leq \frac{\ln\left(\frac T\pi\right)^2}{2\ln\frac{T}{2\pi}}<1+\epsilon \ \Rightarrow \
|\Phi_4'(\nu)|\leq \frac 12+\epsilon
\end{displaymath}
then
\begin{equation} \label{10.3}
\bar{U}_{411}(r)=\int_{\tau\leq g_\nu\leq \tau_1}\cos\{ 2\pi\Phi_4(\nu)\}{\rm d}\nu+\mathcal{O}(\ln T) .
\end{equation}
\subsection{}
Let
\begin{displaymath}
mn<\frac{T}{2\pi} .
\end{displaymath}
Then $\tau=T$ and, since $\Phi_4'$ is increasing (see (10.2)), we have
\begin{displaymath}
\Phi_4'(\nu)\geq \frac 12-\frac{\ln(mn)}{2\ln\frac{T}{2\pi}}=\frac{1}{2\ln Q_1}\ln\frac{Q_1}{mn}>0,\ Q_1=\frac{T}{2\pi} ,
\end{displaymath}
\begin{displaymath}
m,n<Q_1,\ g_\nu\in [T,\tau_1] .
\end{displaymath}
Thus, by the lemma with the first derivative, we obtain (see (10.3))
\begin{equation} \label{10.4}
\bar{U}_{411}(r;mn<Q_1)=\mathcal{O}\left(\frac{\ln T}{\ln\frac{Q_1}{mn}}\right) .
\end{equation}
Let
\begin{displaymath}
\frac{T}{2\pi}-\alpha_1\leq mn<\frac{T}{2\pi} ,
\end{displaymath}
where $\alpha_1>0$ is a conveniently chosen number. Then we have the following contribution $\bar{W}$ to the sum $W_4$
\begin{displaymath}
\bar{W}=\mathcal{O}\left(\frac{T^{2\epsilon}}{\sqrt{T}}\alpha_1 T^{\epsilon}T\ln T\right)=\mathcal{O}(T^{1/2+4\epsilon}),\
\frac{T}{2\pi}-\alpha_1\leq mn<\frac{T}{2\pi} .
\end{displaymath}
\begin{remark}
If we assume that $Q_1\in\mathbb{N}$ (we use this assumption also in other similar cases) then
\begin{equation} \label{10.5}
W_{41}=W_4(mn<T/2\pi)=\mathcal{O}(T^{1/2+4\epsilon})+\bar{W}_{41}(mn<Q_1) .
\end{equation}
\end{remark}
We have (see (10.1), (10.4))
\begin{equation}\label{10.6}
\begin{split}
& \bar{W}_{41}=\mathcal{O}\left\{(M+1)\ln T\ssum_{mn<Q_1}\frac{d(m)d(n)}{\sqrt{mn}\ln\frac{Q_1}{mn}}\right\}= \\
& = \mathcal{O}\left\{(M+1)T^{2\epsilon}\ln T\ssum_{mn<Q_1}\frac{1}{\sqrt{mn}\ln\frac{Q_1}{mn}}\right\} = \\
& = \mathcal{O}\left\{(M+1)T^{3\epsilon}\ln T\sum_{q<Q_1}\frac{1}{\sqrt{q}\ln\frac{Q_1}{q}}\right\} = \\
& = \mathcal{O}\{(M+1)T^{3\epsilon}\ln T\sqrt{Q_1}\ln Q_1\}= \\
& = \mathcal{O}\{(M+1)T^{1/2+4\epsilon}\} .
\end{split}
\end{equation}
Hence (see (10.5), (10.6))
\begin{equation} \label{10.7}
W_{41}=\mathcal{O}\{(M+1)T^{1/2+4\epsilon}\} .
\end{equation}
\subsection{}
Let
\begin{displaymath}
mn>\frac{T}{\pi} .
\end{displaymath}
Since $g_\nu\leq 2T$, we have (see (10.2))
\begin{displaymath}
-\Phi_4'(\nu)=\frac{\ln(mn)}{2\ln\frac{g_\nu}{2\pi}}-\frac 12>0 ,
\end{displaymath}
i.e. $\{-\Phi_4'(\nu)\}$ is decreasing. Consequently,
\begin{displaymath}
-\Phi_4'(\nu)\geq \frac{\ln(mn)}{2\ln\frac{T}{\pi}}-\frac 12=\frac{1}{2\ln Q_2}\ln\frac{mn}{Q_2},\ Q_2=\frac{T}{\pi} ,
\end{displaymath}
and (comp. (10.4))
\begin{displaymath}
\bar{U}_{411}(r;mn>T/\pi)=\mathcal{O}\left(\frac{\ln T}{\ln\frac{mn}{Q_2}}\right),\ mn>Q_2 .
\end{displaymath}
Let
\begin{equation} \label{10.8}
W_{42}=W_4(mn>T/\pi)=W_{421}(Q_2<mn<2Q_2)+W_{422}(2Q_2\leq mn) .
\end{equation}
First of all we have (similarly to the part 10.1)
\begin{equation} \label{10.9}
W_{421}=\mathcal{O}\{(M+1)T^{1/2+4\epsilon}\}
\end{equation}
where the following known estimate was used
\begin{displaymath}
\sum_{Q_2<q<2Q_2}\frac{1}{\sqrt{q}\ln\frac{q}{Q_2}}=\mathcal{O}(\sqrt{Q_2}\ln Q_2) .
\end{displaymath}
Next, since
\begin{displaymath}
\ln\frac{mn}{Q_2}\geq \ln 2;\ mn\geq 2Q_2 ,
\end{displaymath}
we obtain (see (\ref{9.12}))
\begin{equation} \label{10.10}
\begin{split}
& W_{422}=\mathcal{O}\left\{(M+1)\ln T\ssum_{m,n<T/\pi}\frac{d(m)d(n)}{\sqrt{mn}}\right\}= \\
& = \mathcal{O}\{(M+1)T\ln^3T\}.
\end{split}
\end{equation}
Hence, (see (10.8)-(10.10)) we have
\begin{equation} \label{10.11}
W_{42}=\mathcal{O}\{(M+1)T\ln^3T\} .
\end{equation}
\subsection{}
Let
\begin{equation} \label{10.12}
\frac{T}{2\pi}\leq mn\leq \frac{T}{\pi} .
\end{equation}
Since $g_\nu\in [T,2T]$, in the case (10.12) the function $\Phi_4'(\nu)$ (see (10.2)) has only one zero $\bar{\nu}$ (because $\Phi_4'$ is
increasing).
\subsubsection{}
If $\tau\leq g_{\bar{\nu}}\leq \tau_1$ then (see (10.3))
\begin{equation} \label{10.13}
\begin{split}
& \bar{U}_{411}(r)=\int_{\tau\leq g_\nu\leq g_{\bar{\nu}}-A_1}+\int_{g_{\bar{\nu}}+A_2\leq g_\nu\leq \tau_1}+\mathcal{O}(1)+\mathcal{O}(\ln T) = \\
& = \bar{U}^1_{411}(r)+\bar{U}^2_{411}(r)+\mathcal{O}(\ln T)
\end{split}
\end{equation}
with evident adaptations if $g_{\bar{\nu}}=\tau,\tau_1$ ($0<A_1<A_2$ are constants). \\
Since $-\Phi_4'(\nu)>0$ is decreasing for $g_\nu\leq g_{\bar{\nu}}-A_1$, we have
\begin{displaymath}
-\Phi_4'(\nu)\geq \frac{1}{2\ln\frac{g_{\bar{\nu}}-A_1}{2\pi}}\ln\frac{2\pi mn}{g_{\bar{\nu}}-A_1}>0,\ mn>\frac{g_{\bar{\nu}}-A_1}{2\pi} ,
\end{displaymath}
and, consequently,
\begin{displaymath}
\bar{U}^1_{411}(r)=\mathcal{O}\left(\frac{\ln T}{\ln\frac{mn}{Q_3}}\right),\ \frac{g_{\bar{\nu}}-A_1}{2\pi}=Q_3<mn\leq \frac T\pi\leq 2Q_3 ,
\end{displaymath}
where $Q_3\in\mathbb{N}$ (see Remark 4). Thus we obtain (comp. part 10.2)
\begin{equation} \label{10.14}
W_{43}=\mathcal{O}\{(M+1)T^{1/2+\epsilon}\} ,
\end{equation}
where $W_{43}$ is the contribution of $\bar{U}^1_{411}(r)$ into the sum $W_4$. \\
Since $\Phi_4'(\nu)>0$ is increasing for $g_\nu\geq g_{\bar{\nu}}+A_2$, we have
\begin{displaymath}
\Phi_4'(\nu)\geq \frac{1}{2\ln\frac{g_{\bar{\nu}}+A_2}{2\pi}}\ln\frac{g_{\bar{\nu}}+A_2}{2\pi mn}>0,\ mn<\frac{g_{\bar{\nu}}+A_2}{2\pi} ,
\end{displaymath}
and consequently
\begin{displaymath}
\bar{U}^2_{411}(r)=\mathcal{O}\left(\frac{\ln T}{\ln\frac{Q_4}{mn}}\right),\ mn<Q_4=\frac{g_{\bar{\nu}}+A_2}{2\pi}
\end{displaymath}
where $Q_4\in\mathbb{N}$. Thus, we obtain (comp. the part 10.2)
\begin{equation}\label{10.15}
W_{44}=\mathcal{O}\{(M+1)T^{1/2+4\epsilon}\} ,
\end{equation}
where $W_{44}$ is the contribution of $\bar{U}^2_{411}$ into the sum $W_4$. \\
The contribution of the term $\mathcal{O}(\ln T)$ to $W_4$ is given by
\begin{equation} \label{10.16}
\begin{split}
& W_{45}=\mathcal{O}\left\{(M+1)\ln T\ssum_{T/2\pi\leq mn\leq T/\pi}\frac{d(m)d(n)}{\sqrt{mn}}\right\}= \\
& = \mathcal{O}\{(M+1)\ln T\frac{T^{2\epsilon}}{\sqrt{T}}T^\epsilon T\}=\mathcal{O}\{(M+1)T^{1/2+4\epsilon}\} .
\end{split}
\end{equation}
\subsubsection{}
If $\tau_1<g_{\bar{\nu}}\leq 2T$ then, similarly to the case $W_{41}$ (see part 10.1), we obtain
\begin{equation} \label{10.17}
W_{46}=\mathcal{O}\{(M+1)T^{1/2+4\epsilon}\} .
\end{equation}
Thus, from (10.1) by (10.7), (10.11), (10.14)-(10.17) the assertion (8.7) of Lemma 6 follows.
\subsubsection{}
Finally, we complete the proof of Lemma B. First of all, from (8.2) by (8.3), (8.4), (8.6), (8.7) we obtain
\begin{equation} \label{10.18}
\bar{S}=\mathcal{O}\{(M+1)T\ln^4T\} .
\end{equation}
Hence, from (8.1) by (8.5), (10.18) the assertion (3.3) of Lemma B follows.
\section{Some new classes of formulae generated by Theorem 1}
\subsection{}
For example, to Euler's series
\begin{equation}gin{displaymath}
\zeta(2)=\frac{\pi^2}{6}=\sum_{n=1}^\infty\frac{1}{n^2}=\sum_{n=1}^N\frac{1}{n^2}+\mathcal{O}\left(\frac 1M\right)
\end{displaymath}
where $N^2\leq M+1$ corresponds the class of formulae
\begin{displaymath}
\sum_{T\leq t_\nu\leq 2T}Z^2(t_\nu)\sum_{n=1}^N Z^2\{ t_\nu\pm n\rho_1(\nu)\}\sim \frac{1}{8\pi^3}T\ln^5T,\ T\to\infty
\end{displaymath}
for every random distribution of the signs $\pm$.
\subsection{}
In connection with the discrete analogue of the Hardy--Littlewood effect which we proved in \cite{8},
\begin{displaymath}
\frac{1}{N_1}\sum_{T\leq g_\nu\leq T+U}\left\{ \frac{1}{\bar{M}+1}\sum_{n=0}^{\bar{M}}Z(g_\nu+n\omega)\right\}^2\leq A\frac{\ln T}{\bar{M}},\
T\to\infty
\end{displaymath}
where
\begin{displaymath}
N_1=\sum_{T\leq g_\nu\leq T+U} 1,\ U=T^{5/12}\psi\ln^3T,\ \ln T<\bar{M}<\sqrt[3]{\psi}\ln T,\ \omega=\frac{\pi}{\ln\frac{T}{2\pi}} ,
\end{displaymath}
we obtain from formula (2.4) a more complicated, biquadratic, analogue of the Hardy--Littlewood effect
\begin{displaymath}
\begin{split}
&
\frac{1}{N_2}\sum_{T\leq g_\nu\leq 2T}\left\{ \frac{1}{M}\sum_{n=1}^{M}Z^2(t_\nu+n\rho_1(\nu))\right\}^2\sim
\frac 1\pi\frac{\ln^4T}{M},\ T\to\infty , \\
& N_2=\sum_{T\leq g_\nu\leq 2T} 1 .
\end{split}
\end{displaymath}
\subsection{}
Let $\varphi_n,\ n=1,2,\dots ,[t_1]$ be mutually independent random variables uniformly distributed within the segment $[-\pi,\pi]$. Next, let
$Z^2_\varphi(t)$ be the random process generated by the phase modulation with the random vector $(\varphi_1,\dots ,\varphi_{[t_1]})$ of the main term in the
Hardy-Littlewood's formula for $Z^2(t)$
\begin{equation} \label{11.1}
Z^2_\varphi(t)=2\sum_{n\leq[t_1]}\frac{d(n)}{\sqrt{n}}\cos\{ 2\vartheta(t)-t\ln n+\varphi_n\},\ t_1=\frac{t}{2\pi} .
\end{equation}
In this case, formula (2.4) (and all its consequences) remains valid for the class of all realizations $Z^2_{\bar{\varphi}}$ of the random process
(11.1). \\
\thanks{I would like to thank Michal Demetrian for helping me with the electronic version of this work.}
\begin{thebibliography}{29}
\bibitem{1}
S. Goldman, ``Information theory'', I.I.L., Moscow, 1957 (in Russian).
\bibitem{2}
A.E. Ingham, ``Mean-value theorems in the theory of the Riemann zeta-function'', Proc. Lond. Math. Soc. (2) 27 (1926), 273--300.
\bibitem{3}
J. Moser, ``On one sum in the theory of the Riemann zeta-function'', Acta Arith. 31 (1976), 31--43; 40 (1981), 97--107 (in Russian).
\bibitem{4}
J. Moser, ``Proof of the Titchmarsh hypothesis in the theory of the Riemann zeta-function'', Acta Arith. 36 (1980), 147--156 (in Russian).
\bibitem{5}
J. Moser, ``On an arithmetic analogue of one Hardy--Littlewood formula in the theory of the Riemann zeta-function'', Acta Math. Univ. Comen. 37 (1980),
109--120 (in Russian).
\bibitem{6}
J. Moser, ``On certain quasiorthogonal system of vectors in the theory of the Riemann zeta-function'', Acta Math. Univ. Comen. 38 (1981), 87--98 (in Russian).
\bibitem{7}
J. Moser, ``On a biquadratic sum in the theory of the Riemann zeta-function'', Acta Math. Univ. Comen. 42--43 (1983), 35--39 (in Russian).
\bibitem{8}
J. Moser, ``An improvement on a density theorem of Hardy and Littlewood on zeroes of $\zf$'', Acta Arith. 43 (1983), 21--47 (in Russian).
\bibitem{9}
J. Moser, ``On the order of Titchmarsh's sum in the theory of the Riemann zeta-function'', Czechoslovak Math. J. 41 (116) (1991), 663--684 (in Russian).
\bibitem{10}
C.L. Siegel, ``\"Uber Riemanns Nachlass zur analytischen Zahlentheorie'', Quellen und Studien zur Geschichte der Math., Astr. und Physik, Abt. B:
Studien, 2 (1932), 45--80.
\bibitem{11}
E.C. Titchmarsh, ``On van der Corput's method and the zeta-function of Riemann (IV)'', Quart. J. Math. 5 (1934), 98--105.
\bibitem{12}
E.C. Titchmarsh, ``The theory of the Riemann zeta-function'', Clarendon Press, Oxford, 1951.
\end{thebibliography}
\end{document}
\begin{document}
\title{There is no Diophantine $D(-1)$--quadruple}
\author[N. C. Bonciocat]{Nicolae Ciprian Bonciocat}
\address{Simion Stoilow Institute of Mathematics of the
Romanian Academy, Research unit nr. 7,
P.O. Box 1-764, RO-014700 Bucharest, Romania}
\email{[email protected]}
\author[M. Cipu]{Mihai Cipu}
\address{Simion Stoilow Institute of Mathematics of the
Romanian Academy, Research unit nr. 7,
P.O. Box 1-764, RO-014700 Bucharest, Romania}
\email{[email protected]}
\author[M. Mignotte]{Maurice Mignotte}
\address{Universit\'e de Strasbourg, U. F. R. de Math\'ematiques,
67084 Strasbourg, France}
\email{[email protected]}
\subjclass{11D09, 11D45, 11B37, 11J68}
\keywords{Diophantine $m$--tuples, Pell equations,
linear forms in logarithms
}
\thanks{This joint work has been initiated as a project
LEA Franco-Roumain Math-Mode and completed in the framework
of a project GDRI ECO-Math. The second author is grateful for
financial support allowing him to attend the ANTRA 2017 Conference
organized by RIMS Kyoto and the XVI Conference on Representation Theory,
Dubrovnik, 2019, where preliminary results were discussed.
}
\date{\today}
\begin{abstract}
A set of positive integers with the property that the
product of any two of them is the successor of a perfect
square is called Diophantine $D(-1)$--set. Such objects are
usually studied via a system of generalized Pell equations
naturally attached to the set under scrutiny. In this
paper, an innovative technique is introduced in the study
of Diophantine $D(-1)$--quadruples. The main novelty is the
uncovering of a quadratic equation relating various parameters
describing a hypothetical $D(-1)$--quadruple with integer
entries. In combination with extensive computations, this
idea leads to the confirmation of the conjecture according
to which there is no Diophantine $D(-1)$--quadruple.
\end{abstract}
\maketitle
\section{The strategy}\label{sec1}
In the third century, Diophantus of Alexandria found four
positive rationals such that the product of any two of them
increased by unity is a square, see, for
instance,~\cite{dic,diog,dior,hea}.
Fermat found the quadruple consisting of positive integers
$1$, $3$, $8$, $120$ with the same property. As Euler remarked, Fermat's
set can be enlarged by inserting $\frac{777480}{8288641}$
without losing the defining property. It was in 1969 that
Baker and Davenport~\cite{bad} proved that there is no quintuple of
positive integers containing Fermat's set and still having the
property of interest. On this occasion the authors introduced
an important tool, nowadays referred to as Baker--Davenport
lemma, for the effective resolution of Diophantine equations.
Diophantus also studied a problem that turned out to be closely
related to that mentioned before. Namely, he asked for numbers
such that the product of any two of them increased by the sum
of these two is a square. Since $ab+a+b=(a+1)(b+1)-1$, the
question boils down to finding sets with the property that
the product of any two of their elements is one more than a square.
The essence of both problems is captured by the next definition.
Let $m\ge 2$ and $n$ be integers. A set of $m$ positive integers
is called Diophantine $D(n)$--$m$--set if the product of any two
distinct elements increased by $n$ is a perfect square. In this terminology,
Fermat's example is a $D(1)$--quadruple, and the set $\{ 4, 9, 28 \}$
presented by Diophantus himself as an answer to the second problem
gives rise to the $D(-1)$--triple $\{5,10,29\}$. A more general
notion is obtained by considering elements of any commutative
ring instead of positive integers. However, many difficult,
interesting problems already occur in the setting fixed by the
given definition. In the rest of the paper we shall refer only
to this definition, even when we omit the adjective ``Diophantine''.
It is worth mentioning that the objects produced by this definition
with $n=0$ are not particularly interesting --- for each positive
$m$ there exist infinitely many $D(0)$--$m$--sets and even infinite
$D(0)$--sets. Therefore, when speaking of $D(n)$--$m$--sets we shall
always assume $n$ is nonzero.
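The defining property is, of course, trivial to test by machine. The following Python sketch (ours, added only as an illustration; the helper names are not taken from the literature) verifies the two classical examples just mentioned.
\begin{verbatim}
from math import isqrt

def is_square(x):
    # perfect-square test for nonnegative integers
    return x >= 0 and isqrt(x) ** 2 == x

def is_Dn_set(elements, n):
    # product of any two distinct elements, increased by n, must be a square
    elems = sorted(elements)
    return all(is_square(elems[i] * elems[j] + n)
               for i in range(len(elems))
               for j in range(i + 1, len(elems)))

assert is_Dn_set({1, 3, 8, 120}, 1)    # Fermat's D(1)-quadruple
assert is_Dn_set({5, 10, 29}, -1)      # Diophantus' D(-1)-triple
\end{verbatim}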
A natural question is how large a $D(n)$--set can be.
It is known~\cite{duca} that for $\vert n\vert <400$
one has $m<31$, and for other $n$ the cardinality of any
$D(n)$--$m$--tuple is at most $16 \log \vert n\vert$. Better
bounds are known for particular values of $n$. As noticed
in several papers (among which~\cite{br,gusi,mora}), for $n\equiv
2 \pmod 4$ there is no $D(n)$--quadruple. On the opposite side,
in~\cite{du4} it is showed that if $n \not \equiv 2 \pmod 4$
and $n\not \in S:=\{ -4,-3,-1,3,5,8,12,20\},$ then there exists
at least one $D(n)$--quadruple. In the same paper Dujella
expressed his confidence that this is all one can hope for.
\noindent
\textbf{Conjecture.} There exists no $D(n)$--quadruple for
$n\in \{ -4,-3,-1,3,5,8,12,20\}$.
According to a remarkable result of Dujella and Fuchs~\cite{dufu},
in any $D(-1)$--quadruple $(a,b,c,d)$ with $a<b<c<d$ one has $a=1$.
This readily implies the nonexistence of $D(-1)$--quintuples.
The same authors together with Filipin proved in~\cite{duff} that
there are at most finitely many $D(-1)$--quadruples.
The present authors obtained in~\cite{bcm} the bound $10^{71}$
for the number of $D(-1)$--quadruples, thus improving on the
previous bound $10^{356}$ found in~\cite{fifu}. Better estimates
have been given lately: in~\cite{eff} one finds the upper bound
$5\cdot 10^{60}$, successively strengthened to $3.01\cdot 10^{60}$
in~\cite{tim}, to $4.7\cdot 10^{58}$ in~\cite{lap1}. The best
bound we are aware of is $3.677\cdot 10^{58}$ found in~\cite{lap2}.
A basic technique in the study of $D(n)$--sets exploits a connection
with systems of generalized Pell equations. We explain the main
ideas of this approach in the framework of $D(-1)$--quadruples.
Suppose $(1,b,c,d)$ is a $D(-1)$--quadruple with $1<b<c<d$.
Then there are positive integers $r$, $s$, $t$, $x$, $y$, $z$
satisfying
\beq{ecbc}
b-1=r^2, \ c-1=s^2, \ bc-1=t^2,
\end{equation}
\beq{ecd}
d-1=x^2, \ bd-1=y^2, \ cd-1=z^2.
\end{equation}
Eliminating $d$ in Eq.~\eqref{ecd}, one obtains a system of
three generalized Pell equations
\beq{ecpc}
z^2-cx^2=c-1,
\end{equation}
\beq{ecpbc}
bz^2-cy^2=c-b,
\end{equation}
\beq{ecpb}
y^2-bx^2=b-1.
\end{equation}
By Theorem~1.2 in~\cite{bcm}, we may assume $c<2.5 \, b^6$. Then,
according to~\cite[Lemmata~1 and~5]{duff}, the positive integer
solutions of each of the above Pell equations are respectively
given by
\[
z+x\sqrt{c} =s(s+\sqrt{c})^{2m}, \quad m\ge 0,
\]
\[
z\sqrt{b}+y\sqrt{c}=(s\sqrt{b}+\rho r\sqrt{c})
(t+\sqrt{bc})^{2n},\quad n\ge 0,
\]
\[
y+x\sqrt{b}=r(r+\sqrt{b})^{2l}, \quad l\ge 0,
\]
for fixed $\rho \in \{-1,1 \}$.
Therefore, the triples $(x,y,z)$ of positive integers that
simultaneously satisfy Eqs.~\eqref{ecpc}--\eqref{ecpbc} are such that
\beq{ec12mn}
z=v_m=w_n,
\end{equation}
where the integer sequences $(v_p)_{p\ge 0}$, $(w_p)_{p\ge 0}$
are given by explicit formul\ae \
\beq{eqv}
v_{p}=\frac{s}{2} \left( (s+\sqrt{c})^{2p}+
(s-\sqrt{c})^{2p}\right)
\end{equation}
and respectively
\beq{eqw}
w_p=\frac{s\sqrt{b} +\rho r\sqrt{c}}{2\sqrt{b}}(t+\sqrt{bc})^{2p}
+\frac{s\sqrt{b} -\rho r\sqrt{c}}{2\sqrt{b}}(t-\sqrt{bc})^{2p}.
\end{equation}
These formul\ae \ give rise in the usual way to linear forms
in the logarithms of three algebraic numbers, for which upper
bounds are obtained directly, while lower bounds are given by
a general theorem of Matveev~\cite{mat}. Comparison of these
bounds results in inequalities for indices $m$ and $n$ in terms
of elementary functions in $b$ and $c$. In order to get reverse
inequalities, Dujella and Peth\H o introduced in~\cite{dup}
the congruence method. Their idea is to consider the recurrent
sequences modulo $8c^2$ and prove that suitable hypotheses entail
that these congruences are actually equalities. The best result
obtained by this approach is due to Dujella, Filipin and
Fuchs~\cite{duff}.
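To make the coincidence condition~\eqref{ec12mn} concrete, here is a small numerical illustration (ours, not taken from the literature). For the $D(-1)$--triple $\{1,5,10\}$ one has $r=2$, $s=3$, $t=7$; generating $(v_m)$ and $(w_n)$ by the second-order recurrences~\eqref{relv}--\eqref{relw} written out in the proof of Proposition~\ref{pr34} below, a brute-force search finds no common value beyond the trivial one $z=s=3$ (corresponding to $d=1$); by Theorem~\ref{tede} no non-trivial coincidence can in fact occur.
\begin{verbatim}
# D(-1)-triple {1, b, c} = {1, 5, 10}, so r, s, t = 2, 3, 7.
b, c, r, s, t = 5, 10, 2, 3, 7
LIMIT = 10 ** 80

def orbit(first, second, coef, limit):
    vals, x, y = [], first, second
    while x <= limit:
        vals.append(x)
        x, y = y, coef * y - x
    return vals

v = orbit(s, (2 * c - 1) * s, 4 * c - 2, LIMIT)
w = []
for rho in (1, -1):
    w += orbit(s, (2 * b * c - 1) * s + 2 * rho * r * t * c,
               4 * b * c - 2, LIMIT)

print(sorted(set(v) & set(w)))   # only the trivial value z = s = 3
\end{verbatim}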
\bet{tedff}
Let $(1,b,c,d)$ with $1<b<c<d$ be a $D(-1)$--quadruple.
Then $b>100$ and $ c< \min \{ 11\, b^6, 10^{491} \}$.
More precisely:
\begin{enumerate}
\item [a)] If $b^3\le c < 11 \, b^6$, then $c< 10^{238}$.
\item [b)] If $b^{1.1} \le c< b^{3}$, then $c< 10^{491}$.
\item [c)] If $3b \le c< b^{1.1}$, then $c< 10^{94}$.
\item [d)] If $b < c< 3b$, then $c< 10^{74}$.
\end{enumerate}
\end{theorem}
A variant of the congruence method has been introduced
in~\cite{bcm}. The new idea is to interpret an equivalence
$L \equiv R \pmod c$ as an equality $L-R=jc$ for a suitable
integer $j$. Instead of striving to get $j=0$, as did the
predecessors, all possibilities for the sign of $j$ have
been analysed. As a result of this study,
inequalities of the form $n>f(b,c)^{\alpha (j)}$ have been
established. Combined with another new idea, called
smoothification in~\cite{bcm}, and large-scale computations,
always performed with the help of the package
PARI/GP~\cite{pari2}, this yields much better results.
\bet{tenoi}
Let $(1,b,c,d)$ with $1<b<c<d$ be a $D(-1)$--quadruple.
Then $b>1.024\cdot 10^{13}$ and $ \max \{10^{14}b, b^{1.16} \}
< c< \min \{ 2.5\, b^6, 10^{148} \}$.
More precisely:
\begin{enumerate}
\item [i)] If $b^5\le c < 2.5 \, b^6$, then $c< 10^{100}$.
\item [ii)] If $b^4\le c <b^5$, then $c< 10^{82}$.
\item [iii)] If $b^{3.5}\le c< b^4$, then $c< 10^{66}$.
\item [iv)] If $b^3\le c< b^{3.5}$, then $c< 10^{57}$.
\item [v)] If $b^{2}\le c< b^3$, then $c< 10^{111}$.
\item [vi)] If $b^{1.5}\le c< b^2$, then $c< 10^{109}$.
\item [vii)] If $b^{1.4} \le c< b^{1.5}$, then $c< 10^{128}$.
\item [viii)] If $b^{1.3} \le c< b^{1.4}$, then $c< 10^{148}$.
\item [ix)] If $b^{1.2} \le c< b^{1.3}$, then $c< 10^{133}$.
\item [x)] If $b^{1.16} \le c< b^{1.2}$, then $c< 10^{107}$.
\end{enumerate}
\end{theorem}
More recently, Filipin and Fujita obtained an even better
relative bound for the third element of a hypothetical
$D(-1)$--quadruple. In~\cite{ff} they proved the remarkable
result quoted below. The proof is based on an improved variant
of Rickert's theorem~\cite{ric}.
\bet{tefifu}
Any $D(-1)$--quadruple $(1,b,c,d)$ with $1<b<c<d$ satisfies
$c< 9.6 \, b^4$.
\end{theorem}
\begin{figure}
\caption{Absolute bounds for $c$ given by Theorems~\ref{tedff}
and~\ref{tenoi}.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{A different view on the absolute bounds for $c$ given by
Theorem~\ref{tenoi}.}
\label{fig2}
\end{figure}
Figures~\ref{fig1} and~\ref{fig2} give a graphical representation
of these results.
The outer (inner) polygon in Figure~\ref{fig1} represents the
region where $c$ is confined according to Theorem~\ref{tedff}
(Theorem~\ref{tenoi}). From Theorem~\ref{tefifu} we see that
in the search of $D(-1)$--quadruples we can restrict ourselves
to the part of the polygon sitting in the half-plane $c< 9.6 \, b^4$.
Figure~\ref{fig2} contains an approximate illustration of
Theorem~\ref{tenoi} in polar coordinates. It is seen that $c$
is to be found in a region whose shape looks like a nonstandard
fan. We interpret the presence of inlets as a strong hint
that actually there are no $D(-1)$--quadruples whose third entry
is located in the nonconvex blades. Eliminating these blades has
the effect of ``partially closing the fan''. The aim of this paper
is to ``completely close the fan''. In this respect we will prove
the following result.
\bet{tede}
There is no Diophantine $D(-1)$--quadruple.
\end{theorem}
Since, by~\cite[Remark~3]{du4}, all elements of a $D(-4)$--quadruple
are even (so that halving its entries would yield a $D(-1)$--quadruple),
from Theorem~\ref{tede} we get for free another result
that provides partial confirmation of Dujella's conjecture.
\bet{tede4}
There is no Diophantine $D(-4)$--quadruple.
\end{theorem}
Our strategy is based on the following interpretation of
Eq.~\eqref{ecbc}: the initial triple $(1,b,c)$ of a hypothetical
$D(-1)$--quadruple with $1<b<c<d$ is associated to a member of a
two-parameter family of integers $b=r^2+1$, $c=s^2+1$ and $t$
witnesses the fact that we deal with a $D(-1)$--triple. We would
like to handle all these parameters simultaneously. This goal is
achieved by considering the integer
\[
f=t-rs,
\]
which is easily seen to be positive. It already appeared in several
proofs available in literature, see, for instance,~\cite{dufu,fuaa,furo,het}.
Until our work it always had a secondary role, sitting in the
background. Focusing on $f$ turned out to open new prospects for
the study of $D(-1)$--quadruples.
The developments below seem to have been overlooked in the
literature.
Squaring $f+rs=t$, one gets $f^2+2frs+r^2s^2=(r^2+1)(s^2+1)-1$,
whence
\beq{ecrs}
r^2+s^2=2frs+f^2.
\end{equation}
Our approach hinges on the study of solutions in positive
integers to the master equation~\eqref{ecrs} in its various
disguises. This study is much easier than the examination of
solutions to the system of generalized Pell equations~\eqref{ecpc}
to \eqref{ecpb}. As will be seen in Section~\ref{sec2}, rather
strong results are obtained by elementary proofs relying
on properties of solutions to equation~\eqref{ecrs}.
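As a concrete illustration (added here for convenience), consider the $D(-1)$--triple $\{1,5,65\}$: it has $r=2$, $s=8$, $t=18$ (note $5\cdot 65-1=18^2$), hence $f=t-rs=2$ and indeed
\[
r^2+s^2=4+64=68=2\cdot 2\cdot 2\cdot 8+2^2=2frs+f^2 .
\]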
Besides the emphasis on $f$ already mentioned, our treatment
introduces here yet another variant of the congruence method
by considering modulus $8f^6$ instead of the ``classical'' $8c^2$.
Also, we revisit published results and use them in a novel way.
The study is straightforward if $\gcd (r,s)=f$.
In the general case, we complement it with
considerations along the lines described in the next
paragraphs.
Let us denote by $J$ the set of pairs of integers $(r,s)$ such
that there exists a $D(-1)$--quadruple $(1,b,c,d)$ with
$1<b<c<d$ and $b=r^2+1$, $c=s^2+1$. For $(r,s)\in J$ we put
$s=r^\theta$ and define
\[
\theta^-=\inf_{(r,s)\in J} \theta, \quad \theta^+=\sup_{(r,s)\in J}
\theta.
\]
These numbers measure the size of $c$ with respect to $b$.
``Closing the fan'' means showing that for a putative
$D(-1)$--quadruple one has $\theta^+ < \theta^-$.
The main result of~\cite{bcm} (quoted in Theorem~\ref{tenoi})
gives in particular $c\le 3 \, b^6$. Using this upper bound,
a computer verification described in~\cite[Section~2]{bcm}
led to the conclusion that $r>32\times 10^5$, so that
$b>10^{13}$. Hence
\[
\theta^+ \le 6{.}1.
\]
The approach followed in~\cite{bcm} used, among other things,
the inequalities $c>3.999 \, f^2 b$ and $f> 10^7$ in order
to increase the lower bound $\theta^-$ from $1$ to $1.16$.
Pursuing this idea requires examining much higher values
of $f$, a process that becomes prohibitively time-consuming.
To give the reader a feeling of the difficulties encountered,
we mention that computations that allow exclusion of values
$f \le 10^7$ needed about two weeks (measured by wall-clock)
on a personal computer; to reach the level
$f > 10^8$, our program ran on a network of up to 6 computers
for about three months; further computations were performed
during other six months on as much as 30 computers.
Therefore, another course has been chosen: instead of shortening
the interval $[\theta^-, \theta^+]$, we looked for methods of
splitting it in such a way that parallel processing is possible.
The alternative approach was devised after it was observed that
there is no $D(-1)$--quadruple with $b^2 \le c \le 16\, b^3$
even when $\gcd (r,s) \ne f$; it requires identifying suitable
functions of the parameters already introduced in the formulation of
the problem. The breakthrough was realized after
making the choice $F:=s-2rf$. A short study revealed that
it is very helpful in separating values $c < 4b^2$ from values
$c > 4b^2$. It was a matter of days to reach the conclusion
that around $c =4b^2$ there is a large gap (a comparatively
long interval in which there is no third entry of a
$D(-1)$--quadruple).
Completion of the proof requires new explicit computations. Besides
those needed for use of results providing bounds for linear forms
in logarithms, a considerable amount of them was devoted to solving
many quadratic Diophantine equations and then to applying the reduction
procedure based on Baker--Davenport lemma. Further explanations and
full details are given in Section~\ref{sec4}.
A successful implementation of the strategy just sketched
requires paying attention to several aspects which will be
described in Section~\ref{secAleks}. Here we mention only one point.
It is clear that the smaller the upper bounds on $b$ and $c$
are, the faster the subsequent computations depending on them
are. To this end, Matveev's general theorem~\cite{mat} was first
replaced by a strengthening of it due to Aleksentsev~\cite{alex}
and next by an older result of Matveev~\cite{Mat98}.
This course of action is determined by our experience, according
to which a giant step is better replaced by successive small steps.
\section{A sufficient condition for the nonexistence of
$D(-1)$--quadruples}\label{sec2}
The aim of this section is to revisit results of our previous
work~\cite{bcm} in the light of the new guiding strategy. As it
turns out, a lot of information already available can be exploited
in a novel manner, producing unexpected results and suggesting
further developments.
The starting point of our study of solutions in positive integers
to the equation
\[
r^2+s^2=2frs+f^2,
\]
is the observation that $\gcd (r,s)$ is a divisor of $f$. As
extremal elements or circumstances are generally very interesting,
we consider solutions to Eq.~\eqref{ecrs} such that
\beq{ecmild}
f=\gcd (r,s).
\end{equation}
Then one has
\[
r=fu, \quad s=fv,
\]
for some positive integers $u$, $v$ satisfying
\[
(v-fu)^2-eu^2=1,
\]
with $e=f^2-1$.
Let $\gamma=f+\sqrt{e}$ be the fundamental solution to the Pell
equation $V^2-eU^2=1$ and $\ensuremath{\overline{\gamma}}=f-\sqrt{e}$ its algebraic
conjugate. According to Lemma~3.5 from~\cite{bcm},
$v>3.999^{1/2}f u$, so that all positive solutions to
Eq.~\eqref{ecrs} have the form
\beq{ec1}
r=\frac{f(\gamma ^k-\ensuremath{\overline{\gamma}} ^k)}{\gamma -\ensuremath{\overline{\gamma}}},
\quad s=\frac{f(\gamma ^{k+1}-\ensuremath{\overline{\gamma}} ^{k+1})}{\gamma -\ensuremath{\overline{\gamma}}},
\quad k\in \mathbb N,
\end{equation}
whence
\beq{ec2}
b=\frac{f^2(\gamma ^k+\ensuremath{\overline{\gamma}} ^k)^2-4}{(\gamma -\ensuremath{\overline{\gamma}})^2},
\quad
c=\frac{f^2(\gamma ^{k+1}+\ensuremath{\overline{\gamma}} ^{k+1})^2-4}{(\gamma -\ensuremath{\overline{\gamma}})^2}.
\end{equation}
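The parametrization~\eqref{ec1}--\eqref{ec2} is easy to explore numerically. The sketch below (ours) generates, for a given $f$, the first pairs $(r,s)$ through the integer recurrence underlying~\eqref{ec1} and checks that each resulting $(1,b,c)$ is a $D(-1)$--triple with $t-rs=f$ and $\gcd(r,s)=f$.
\begin{verbatim}
from math import gcd, isqrt

def triples_from_f(f, count=5):
    # U_0 = 0, U_1 = 1, U_{k+1} = 2f U_k - U_{k-1}; then r = f U_k, s = f U_{k+1}
    out, u_prev, u = [], 0, 1
    for _ in range(count):
        r, s = f * u, f * (2 * f * u - u_prev)
        b, c = r * r + 1, s * s + 1
        t = isqrt(b * c - 1)
        assert t * t == b * c - 1               # {1, b, c} is a D(-1)-triple
        assert t - r * s == f and gcd(r, s) == f
        out.append((r, s, b, c))
        u_prev, u = u, 2 * f * u - u_prev
    return out

print(triples_from_f(2))   # [(2, 8, 5, 65), (8, 30, 65, 901), ...]
\end{verbatim}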
Notice that the main result of He--Togb\'e~\cite{het} assures
$f\ge 2$, a piece of information we shall repeatedly use
without explicitly mentioning it. Later on, a more stringent
restriction, checked computationally, will be preferred.
With this notation fixed, we proceed to examine the properties
of solutions to Eq.~\eqref{ecrs} under the
condition~\eqref{ecmild}.
\begin{proposition}\label{pr21p}
One has $c<b \, \gamma ^2$. Moreover, for $k\ge 2$ it holds
\[
\gamma ^2-\frac{1}{2}<\frac{c}{b}.
\]
\end{proposition} \begin{proof}
The numerator of $b\gamma ^2-c$ is $(\gamma ^2-1)[f^2(2+
\ensuremath{\overline{\gamma}}^{2k}+ \ensuremath{\overline{\gamma}}^{2k+2})-4]$, which is manifestly positive.
The numerator of $2c-(2\gamma ^2-1)b$ is found to be
\[
4(2\gamma ^2-3)+f^2[ \gamma ^{2k}-2(2\gamma ^2-3)
+ \ensuremath{\overline{\gamma}} ^{2k}(1-2\gamma ^2+2\ensuremath{\overline{\gamma}} ^2)].
\]
Since $k\ge 2$ and $0<\ensuremath{\overline{\gamma}} <1$, it is sufficient to prove that
\[
\gamma ^4 > 2(2\gamma ^2-3)+2\gamma ^2-1.
\]
This inequality is obvious on noticing the identity
$\gamma ^4=4(f^2-1)\gamma ^2+2\gamma ^2-1$.
\end{proof}
We can bound from below $b$ by a power of $\gamma$.
\begin{proposition}\label{pr22p}
If $(1,b,c,d)$ is a $D(-1)$--quadruple with $10^{13} <b<c<d$
and $b$, $c$ given by formula~\eqref{ec2}, then
\[
\gamma ^{2k-1} <b.
\]
\end{proposition} \begin{proof}
The desired inequality is equivalent to $f^2(\gamma ^{2k}+2+
\ensuremath{\overline{\gamma}}^{2k})-4 > 4(f^2-1) \gamma ^{2k-1}$, which follows
from $ f^2\gamma > 4(f^2-1)$.
\end{proof}
We are now in a position to show that the third entry in a
$D(-1)$--quadruple restricted as in~\eqref{ecmild} is much
closer to the second one than was previously known.
\begin{proposition}\label{pr23p}
Any $D(-1)$--quadruple $(1,b,c,d)$ with $10^{13} <b<c<d$ and
$b$, $c$ given by~\eqref{ec2} satisfies $c<b^3$.
\end{proposition} \begin{proof}
Assuming $b^3\le c$, we deduce with the help of the previous
results
\[
\gamma ^{4k-2}<b^2\le \frac{c}{b}<\gamma^2,
\]
that is, $k<1$. Thus $k=0$, whence $r=0$ and $b=1$, which
is not possible.
\end{proof}
Propositions~\ref{pr21p} and~\ref{pr22p} have other important
consequences drawn from information made available by our previous
work.
For the sake of convenience, we recall an experimental
result obtained after two weeks of computer calculations
for the needs of Lemma 3.5 from~\cite{bcm}. Its proof is based
on the well-known structure of solutions to a Pellian equation
of the type
\beq{ecf1}
W^2-D U^2=N, \quad D >0 \ \mathrm{nonsquare}, \quad N\ne 0.
\end{equation}
The most familiar reference is Nagell's book~\cite{nag} in its
various editions but the results have been published already in
the 19th century by Chebyshev~\cite{ceb}. More details
are available in the proof of~\cite[Lemma~2.9]{bcm}.
\begin{proposition}\label{prexp} There are no $D(-1)$--quadruples $(1,b,c,d)$
with the corresponding $f$ less than or equal to $10^7$.
\end{proposition}
This result is used below in conjunction with the fact
that for any hypothetical $D(-1)$--quadruple one has $n<10^{19}$
(see Table~1 from~\cite{bcm}).
After these preparations, we proceed with the study of positive
solutions to Eq.~\eqref{ecrs} given by~\eqref{ec1}.
\begin{proposition}\label{pr34} There are no $D(-1)$--quadruples $(1,b,c,d)$
with $10^{13} <b<c<d$, $b^2\le c<b^3$ and $b$, $c$
given by~\eqref{ec2}.
\end{proposition} \begin{proof}
Suppose, by way of contradiction, that the thesis is false.
From
\[
\gamma ^{2k-1}<b\le \frac{c}{b}<\gamma^2
\]
we get $k<2$. Since $k\ge 1$, we conclude that $k=1$.
Eliminating $d$ in Eq.~\eqref{ecd} yields the system of
generalized Pell equations~\eqref{ecpc}--\eqref{ecpb}.
It is well known that $z$ appears in two second-order
linearly recurrent sequences. Thus (see, e.g.,~\cite{dufu}
or~\cite{duff}) $z=v_m=w_n$, with $m$, $n$ positive integers
of the same parity, and
\beq{relv}
v_0=s, \quad v_1=(2c-1)s, \quad v_{m+2}=(4c-2) v_{m+1}
-v_{m},
\end{equation}
\beq{relw}
w_0=s, \quad w_1=(2bc-1)s +2\rho rtc, \quad w_{n+2}=
(4bc-2) w_{n+1} -w_{n},
\end{equation}
where $\rho =\pm 1$. Since $k=1$, one has
$r=f$, $b=f^2+1$, $s=2f^2$, $c=4f^4+1$, $t=2f^3+f$, and
therefore
\[
v_{m+2}=2(8f^4+1) v_{m+1} -v_{m},
\]
\[
w_{n+2}=2(8f^6+8f^4+2f^2+1) w_{n+1} -w_{n}.
\]
Taken modulo $8f^6$, these recurrent relations readily give
\[
v_m\equiv 2f^2 \pmod{8f^6}
\]
and
\[
w_{n}\equiv 2(\rho n+1) f^2 + \bigl(\frac{4n^3+8n}{3}\rho +4n^2
\bigr) f^4 \pmod{8f^6} .
\]
Together with $v_m=w_n$, this implies
\beq{ec6}
\rho n + \left(\frac{2n^3+4n}{3}\rho +2n^2 \right) f^2
\equiv 0 \pmod{4f^4}.
\end{equation}
Note that $n$ must be even and use this information to deduce
$n\equiv 0 \pmod{4f^2}$,
so that $n=4f^2u$ for some positive integer $u$. Replace $n$
by $4f^2u$ in~\eqref{ec6} to get $u\equiv 0 \pmod{f^2}$, and
therefore $n\ge 4f^4$.
Using this inequality and Proposition~\ref{prexp} one gets
$n> 10^{28}$, in contradiction with the fact $n<10^{19}$ proved
in~\cite[Proposition~4.3]{bcm}.
\end{proof}
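As an arithmetic sanity check (ours) of the two congruences displayed above, take $f=2$, so that $b=5$, $s=8$, $c=65$, $t=18$ and $8f^6=512$; iterating the recurrences modulo $512$ reproduces the stated residues for both choices of $\rho$. (This is only an illustration of the congruences; by the proposition itself no such quadruple exists.)
\begin{verbatim}
f = 2
r, b, s, c, t = f, f**2 + 1, 2 * f**2, 4 * f**4 + 1, 2 * f**3 + f
MOD = 8 * f**6                                     # = 512

def check(rho, terms=12):
    v, v_next = s % MOD, ((2 * c - 1) * s) % MOD
    w, w_next = s % MOD, ((2 * b * c - 1) * s + 2 * rho * r * t * c) % MOD
    for n in range(terms):
        assert v == (2 * f**2) % MOD
        predicted = (2 * (rho * n + 1) * f**2
                     + ((4 * n**3 + 8 * n) // 3 * rho + 4 * n**2) * f**4) % MOD
        assert w == predicted
        v, v_next = v_next, (2 * (8 * f**4 + 1) * v_next - v) % MOD
        w, w_next = w_next, (2 * (8 * f**6 + 8 * f**4 + 2 * f**2 + 1) * w_next - w) % MOD

check(1)
check(-1)
print("congruences modulo 8f^6 verified for f = 2")
\end{verbatim}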
\begin{proposition}\label{pr35} There are no $D(-1)$--quadruples $(1,b,c,d)$
with $10^{13} <b<c<d$, $b^{1.5}\le c<b^2$, and $b$, $c$
given by~\eqref{ec2}.
\end{proposition} \begin{proof}
We reason by reduction to absurd. Suppose that $(1,b,c,d)$
is a $D(-1)$--quadruple satisfying $10^{13} <b<c<d$,
$b^{1.5}\le c<b^2$, and $b$, $c$ given by~\eqref{ec2}.
It is easy to prove the upper bound $2k< 5$ as above.
Assuming $k=1$, from $ c<b^2$ it results $4f^4+1<(f^2+1)^{2}$,
whence $f<1$, which is impossible.
So it is established that $k= 2$. Therefore, one has
$r=2f^2$, $s=4f^3-f$, and $t=8f^5-2f^3+f$. The solutions to
the system of Pellian equations~\eqref{ecpc}--\eqref{ecpb}
verify $y=U_n=u_l$, where $n$, $l$ are positive integers and
\[ u_{l+2}=(4b-2)u_{l+1}-u_l, \quad u_0=r, \quad u_1=(2b-1)r,
\]
\[
U_{n+2}=(4bc-2)U_{n+1}-U_n, \quad U_0=\rho r, \quad
U_1=(2bc-1)\rho r +2bst.
\]
Considering these recurrence relations modulo $r^3$, one readily gets
\[
u_l\equiv r \pmod{r^3}.
\]
A short inductive reasoning that takes into account the explicit
formul\ae \ giving $r$, $s$, $t$ in terms of $f$ results in the
congruence
\[
U_n \equiv (n^2r^2+r)\rho +\frac{(10n-n^3)}{3} r^2-n r \pmod{r^3},
\]
so that $u_l=U_n$ implies $r \rho -nr \equiv r \pmod{r^3}$. Therefore,
there exists an integer $\lambda$ such that
\[
n = \rho -1 + \lambda r.
\]
Since $n\ge 7$ by~\cite[Proposition~2.2]{bcm}, it follows that
$\lambda$ is positive. Introducing this formula for $n$ in the
congruence for $U_n$, one sees that in fact one has $\lambda \ge r-1$,
so that $n>0.9 r^2 > 3 f^4 >10^{19}$. As explained previously,
this contradicts~\cite[Proposition~4.3]{bcm}. The contradiction
is due to the assumption that there exists a $D(-1)$--quadruple
satisfying all the hypotheses of the present proposition.
\end{proof}
\begin{proposition}\label{pr36} There are no $D(-1)$--quadruples $(1,b,c,d)$
with $10^{13} <b<c<d$, $b^{1.4}\le c<b^{1.5}$,
and $b$, $c$ given by~\eqref{ec2}.
\end{proposition} \begin{proof}
As above, we reason by reduction to absurd. Suppose that
$(1,b,c,d)$ is a $D(-1)$--quadruple satisfying
$10^{13} <b<c<d$, $b^{1.4}\le c<b^{1.5}$, and $b$, $c$ given
by~\eqref{ec2}. As seen in the proof of the previous result,
one then has $k\ge 2$. For $k=2$, from $ c<b^{1.5}$ one gets
\[
c=16f^6-8f^4+f^2+1<(4f^4+1)^{1.5}<9f^6+1,
\]
that is, $f< 1$, a contradiction. Therefore, we
conclude that $k\ge 3$. This and Proposition~\ref{pr22p}
yield
\[
\frac{c}{b}\ge b^{0.4} > \gamma ^{0.4(2k-1)} \ge \gamma ^2,
\]
which contradicts Proposition~\ref{pr21p}.
\end{proof}
\begin{proposition}\label{pr37} There are no $D(-1)$--quadruples $(1,b,c,d)$
with $10^{13} <b<c<d$, $b^{1.3}\le c<b^{1.4}$,
and $b$, $c$ given by~\eqref{ec2}.
\end{proposition} \begin{proof}
From the last proof we retain that $k$ is at least $3$,
while from the chain of inequalities
\[
\gamma ^2> \frac{c}{b}\ge b^{0.3} > \gamma ^{0.3(2k-1)}
\]
we deduce that $k\le 3$. To obtain a bound for $f$, we
follow the reasoning in the proof of Proposition~\ref{pr34}.
Since $k=3$, one has
\[
v_m\equiv -4f^2+8f^4 \pmod{8f^6},
\]
\[
w_n\equiv -(2\rho n+4)f^2+\bigl(\frac{4n-4n^3}{3}\rho -8n^2
+8\bigr) f^4 \pmod{8f^6},
\]
whence again it follows $n\ge 4f^4$. As already seen, this
leads to a contradiction.
\end{proof}
\begin{proposition}\label{pr38} There are no $D(-1)$--quadruples $(1,b,c,d)$
with $10^{13} <b<c<d$, $b^{1.2}\le c<b^{1.3}$,
and $b$, $c$ given by~\eqref{ec2}.
\end{proposition} \begin{proof}
We adapt the reasoning used to establish Proposition~\ref{pr35}.
So let $(1,b,c,d)$ be a $D(-1)$--quadruple satisfying
$10^{13} <b<c<d$, $b^{1.2}\le c<b^{1.3}$,
and $b$, $c$ given by~\eqref{ec2}.
In the proof of Proposition~\ref{pr36} it was shown that
$k\ge 3$. For $k=3$, from $c<b^{1.3}$ it results
\[
16f^4(2f^2-1)^2+1 < \left(f^2(4f^2-1)^2+1\right)^{1.3}
< 40f^8+1,
\]
whence $f^2<3$, a contradiction. Therefore, one has
$k\ge 4$. From
\[
\gamma ^2> \frac{c}{b}\ge b^{0.2} > \gamma ^{0.2(2k-1)}
\]
we deduce that $k\le 5$. Assuming $k=5$, one obtains
$n\ge 4f^4$ as in the proof of Proposition~\ref{pr34},
so a contradiction appears in this case. It remains to examine
what happens when $k=4$, $r=8f^4-4f^2$, $s=16f^5-12f^3+f$,
and $t=128f^9-160f^7+56f^5-4f^3+f$.
A short study of the sequences $(u_l)_l$, $(U_n)_n$ introduced in
the proof of Proposition~\ref{pr35} gives
\[
U_n \equiv \bigl( (8-8n^2)f^4 -4f^2 \bigr) \rho +
\frac{(4n^3-100n)}{3} f^4 + 2 nf^2 \pmod{8f^6}.
\]
Proceeding as in the proof of Proposition~\ref{pr35},
one gets $n>3f^4$, whence the same contradiction emerges.
\end{proof}
\begin{proposition}\label{pr39} There are no $D(-1)$--quadruples $(1,b,c,d)$
with $10^{13} <b<c<d$, $b^{1.16}\le c<b^{1.2}$,
and $b$, $c$ given by~\eqref{ec2}.
\end{proposition} \begin{proof}
Assume the contrary.
By the previous result, $k\ge 4$. For $k=4$ one gets
\[
f^2(16f^4-12f^2+1)^2+1 < \left(16f^4(2f^2-1)^2+1\right)^{1.2}
< 169f^{10}+1,
\]
which is false for $f>1$. For $k=5$ one adapts the reasoning
introduced in the proof of Proposition~\ref{pr34} to obtain
$n> 10^{28}$, in contradiction with~\cite[Proposition~4.3]{bcm}.
As $0.16(2k-1) < 2$ yields $k\le 6$, it remains to consider
the possibility $k=6$. The argument indicated at the end of
the proof of Proposition~\ref{pr35} can be adapted to the
present context. One finally obtains $n>3f^4$, which is not
compatible with the existence of a $D(-1)$--quadruple
subject to all
constraints from the hypothesis of the present proposition.
\end{proof}
Summing up what has been done in this section and noticing
that condition~\eqref{ecmild} holds if $f$ has no prime
divisor congruent to $1$ modulo $4$, we get the next result.
\bet{tef}
There are no $D(-1)$--quadruples $(1,b,c,d)$ with
$10^{13} <b<c<d$ and $b$, $c$ given by~\eqref{ec2}.
In particular, there exists no $D(-1)$--quadruple for
which the corresponding
$f$ has no prime divisor congruent to $1$ modulo $4$.
\end{theorem}
An alternative proof, more familiar to experts in Diophantine
equations, is based on linear forms in logarithms. Here is the
sketch of such a reasoning.
For the rest of the paragraph we put
\[
\alpha=s+\sqrt{c}, \quad \beta = r+\sqrt{b}, \quad
\mathrm{and} \quad \gamma=\sqrt{\frac{s\sqrt{b}}{r\sqrt{c}}}.
\]
As in~\cite{het} and~\cite[Section~4]{bcm}, to a putative
$D(-1)$--quadruple $(1,b,c,d)$ it is associated a linear form
in logarithms
\[
\Lambda := m\log \alpha -l \log \beta +\log \gamma.
\]
We put
\[
\Delta:= (k+1)m-kl.
\]
Then
\[
(k+1)\Lambda=\log (\gamma ^{k+1} \alpha ^\Delta)-l\log (\beta ^{k+1}
\alpha ^{-k})
\]
can be considered as a linear form in the logarithms of two
algebraic numbers. An elementary study shows that one has
\[
\vert \log (\beta ^{k+1}\alpha ^{-k})\vert \le \frac{k+1}{f^2}
\quad \mathrm{and} \quad \vert \Delta \vert \le \frac{2l}{f^2 \log f}.
\]
Then Theorem~\ref{tef} follows from Laurent's estimates on linear
forms in two logarithms given in~\cite{lau} and our computations
which showed that $f>10^7$ and $b>10^{13}$ for each
$D(-1)$--quadruple.
Each approach has its own advantages over the other. The
former is more ``human-friendly'' (and consequently longer),
provides insight and has explanatory power, while the latter
is computer-intensive and therefore shorter yet less enlightening.
Since the former approach involves ideas that proved to be pivotal
for subsequent developments, we decided to expound it extensively.
The idea at the basis of the proof of Theorem~\ref{tef} can be
succinctly stated as ``reduce the master equation to a Pellian equation''.
The same paradigm can be applied for an arbitrary $D(-1)$--quadruple.
Write $f=f_1f_2$, with $f_1$ the product of all the prime
divisors of $f$ which are congruent to $1$ modulo $4$,
multiplicity included. Then in any solution $(r,s)$ to
\eqref{ecrs} one has
\[
r=f_2u, \quad s=f_2v,
\]
for some positive integers $u$, $v$ satisfying
\beq{ecuv}
u^2+v^2=2fuv+f_1^2 \Longleftrightarrow (v-fu)^2-(f^2-1)u^2=f_1^2.
\end{equation}
Below is a specialization of Frattini's theorems from~\cite{frat}
and~\cite{frat1} giving a representation for the nonnegative
solutions to the equation relevant for us
\beq{ecf111}
W^2-(f^2-1)U^2=f_1^2.
\end{equation}
\begin{proposition}\label{prfrat} The nonnegative solutions of Eq.~\eqref{ecf111} are
given by
\[
w +u \sqrt{f^2-1} =\bigl(w_0+u_0\sqrt{f^2-1} \,\bigr)
\bigl(f+\sqrt{f^2-1} \,\bigr)^k, \quad k \ge 0,
\]
or
\[
w +u \sqrt{f^2-1} =\bigl(w_0-u_0\sqrt{f^2-1} \,\bigr)
\bigl(f+\sqrt{f^2-1} \,\bigr)^k, \quad k \ge 1,
\]
where $(w_0,u_0)$ runs through the nonnegative solutions of
Eq.~\eqref{ecf111} with
\[
f_1\le w_0 \le f_1\sqrt{\frac{f+1}{2}}, \quad
0\le u_0 \le \frac{f_1}{\sqrt{2(f+1)}}.
\]
\end{proposition}
Let $\gamma=f+\sqrt{f^2-1}$ be the fundamental solution to the Pell
equation $X^2-(f^2-1)Y^2=1$ and $\eps=w_0 +\zeta u_0\sqrt{f^2-1}$,
where $\zeta \in \{-1,1\}$, a fundamental solution as described above.
According to Lemma~3.5 from~\cite{bcm}, $v>3.999^{1/2}f u$, so that
all positive solutions to Eq.~\eqref{ecuv} have the form
\beq{ec4p}
v-fu +u \sqrt{f^2-1}=\eps \gamma ^k, \quad k\ge (1 -\zeta)/2.
\end{equation}
Introducing the algebraic conjugates $\ensuremath{\overline{\gamma}}=f-\sqrt{f^2-1}$,
$\ensuremath{\overline{\eps}}=w_0 -\zeta u_0\sqrt{f^2-1}$, one readily obtains
\begin{align}
r &= \frac{f_2\bigl(\eps \, \gamma ^k-\ensuremath{\overline{\eps}} \, \ensuremath{\overline{\gamma}} ^k
\bigr)}{\gamma -\ensuremath{\overline{\gamma}}}, \label{formr} \\
s &= \frac{f_2\bigl(\eps \, \gamma ^{k+1}-\ensuremath{\overline{\eps}} \, \ensuremath{\overline{\gamma}} ^{k+1}
\bigr)}{\gamma -\ensuremath{\overline{\gamma}}}, \label{forms}
\end{align}
whence
\begin{align}
b &= \frac{f_2^2\bigl(\eps \, \gamma ^k+\ensuremath{\overline{\eps}} \, \ensuremath{\overline{\gamma}} ^k
\bigr)^2-4}{(\gamma -\ensuremath{\overline{\gamma}})^2}, \label{formb} \\
c &= \frac{f_2^2\bigl(\eps \, \gamma ^{k+1}+\ensuremath{\overline{\eps}} \, \ensuremath{\overline{\gamma}} ^{k+1}
\bigr)^2-4}{(\gamma -\ensuremath{\overline{\gamma}})^2}. \label{formc}
\end{align}
Note that when $u_0=0$ one has $\eps=\ensuremath{\overline{\eps}}=f_1$, so that
\eqref{formr} and \eqref{forms} coincide with~\eqref{ec1}
and~\eqref{ec2}, respectively.
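As a small illustration (ours) of Proposition~\ref{prfrat} and of the formul\ae~\eqref{formr}--\eqref{forms}, take $f=5$, so that $f_1=5$ and $f_2=1$. The sketch below enumerates the Frattini box, finds the fundamental solutions $(5,0)$ and $(7,1)$, and checks that every generated pair $(r,s)$ of positive integers satisfies the master equation~\eqref{ecrs}.
\begin{verbatim}
from math import isqrt

f, f1, f2 = 5, 5, 1      # f1 = part of f built from primes congruent to 1 mod 4
e = f * f - 1

# fundamental solutions (w0, u0) of W^2 - e U^2 = f1^2 inside the Frattini box
box = [(w0, u0)
       for u0 in range(isqrt(f1 * f1 // (2 * (f + 1))) + 1)
       for w0 in range(f1, isqrt(f1 * f1 * (f + 1) // 2) + 1)
       if w0 * w0 - e * u0 * u0 == f1 * f1]
print(box)               # [(5, 0), (7, 1)]

for w0, u0 in box:
    for zeta in (1, -1):
        R_prev, R = zeta * u0, w0 + zeta * u0 * f      # R_0 and R_1
        for _ in range(6):
            r, s = f2 * R_prev, f2 * R
            if r > 0 and s > 0:
                assert r * r + s * s == 2 * f * r * s + f * f   # master equation
            R_prev, R = R, 2 * f * R - R_prev
\end{verbatim}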
One major source of difficulties with this approach is the fact
that the components $w_0$, $u_0$ of a fundamental solution are
known only approximately, being confined to a box defined by the
inequalities stated in the last line of Proposition~\ref{prfrat}.
Another reason for complexity is the existence of positive solutions
to Eq.~\eqref{ecf111} for which $\zeta=-1$. We have succeeded in
overcoming all such complications and in proving Theorem~\ref{tede} along
these lines. Our attempts to simplify the proof and avoid intricate
arguments were successful as soon as we changed once more the
underlying paradigm.
Multiplication by a power of the minimal solution for the
associated Pell equation can be viewed as a vehicle to move
from a fundamental solution to Eq.~\eqref{ecf111} to a
solution of interest. Metaphorically speaking, one can say that
in the proof for Theorem~\ref{tede} presented in Section~\ref{sec4}
we travel backwards --- we examine to what extent information
about a specific solution is transferred to associated solutions.
Before making explicit the explanations alluded to above, we
present in the next section strengthened versions of some
technical results from~\cite{bcm}.
\section{Bounds for linear forms in logarithms} \label{secAleks}
Recall that for a nonzero algebraic number $\gamma$ of degree $D$
over $\mathbb Q$, with minimal polynomial $A\prod _{j=1}^D(X-\gamma^{(j)})$
over $\mathbb Z$, the absolute logarithmic height is defined by
\[
{\rm h} (\gamma) = \frac{1}{D}\left( \log A +\sum_{j=1}^D \log^+ \bigl|\gamma^{(j)}\bigr|\right),
\]
where $\log^+ x=\log \max (x,1)$.
Next we quote from~\cite{alex} a theorem giving very good lower
bounds for linear forms in the logarithms of three algebraic
numbers under hypotheses that are easily checked in the context
of interest here.
\bet{Alex} \emph{(Aleksentsev)}
Let $\Lambda_1$ be a linear form in logarithms of $n$ multiplicatively
independent totally real algebraic numbers $\beta_{1}, \ldots, \beta_{n}$,
with rational coefficients $b_{1}, \ldots, b_{n}$. Let ${\rm h}(\beta_{j})$
denote the absolute logarithmic height of $\beta_{j}$ for $1\leq j \leq n$.
Let $D$ be the degree of the number field
$\mathcal{K} = \mathbb Q (\beta_{1}, \ldots, \beta_{n})$, and let
$B_{j} = \max(D {\rm h}(\beta_{j}), |\log \beta_{j}|, 1)$. Finally, let
\beq{tail}
E = \max\left( \max_{1 \leq i, j \leq n} \left\{ \frac{|b_{i}|}{B_{j}} + \frac{|b_{j}|}{B_{i}}\right\}, 3\right).
\end{equation}
Then
\begin{equation*}\label{kidney}
\log |\Lambda_1| \geq - 5{.}3 n^{-n+1/2} (n+1)^{n+1}(n+8)^{2}(n+5)(31{.}44)^{n}
D^{2} (\log E)\log(3nD) \prod_{j=1}^n B_{j} .
\end{equation*}
\end{theorem}
We apply Theorem \ref{Alex} for $D=4$, $n=3$, and
\beq{eqlam1}
\Lambda_1 = 2m\log\beta_{1} - 2l\log\beta_{2} + \log\beta_{3},
\end{equation}
with the choices
\beq{cheek}
\beta_{1} = s+ \sqrt{c}, \quad \beta_{2} = r + \sqrt{b}, \quad
\beta_{3} = \frac{s\sqrt{b}}{r\sqrt{c} }, \quad
b_1 = 2m, \quad b_2 = -2l, \quad b_3 = 1.
\end{equation}
The required multiplicative independence readily follows by noting that
$\beta_{1}$ and $\beta_{2}$ are algebraic units while $\beta_{3}$
is not. Indeed, any possible relation of multiplicative dependence
has the shape $\beta_1^u = \beta_2^v$ for some positive integers $u$,
$v$. Note that $\mathbb Q (\beta_1) \cap \mathbb Q (\beta_2) =\mathbb Q$, as otherwise
$b$ and $c$ would have the same square-free part, so that $bc$ would
be a perfect square, in contradiction with $bc-1=t^2$. One concludes
that it holds $\beta_1^u \in \mathbb Q$, which is not possible because
$\beta_1$ is not a root of unity.
For compatibility with~\cite{bcm}, we introduce the notation
$\alpha=s+\sqrt{c}$, $\beta=r+\sqrt{b}$. It is clear that it holds
\[
{\rm h} (\beta_1) =\frac{1}{2} \log \alpha , \quad {\rm h} (\beta_2) =\frac{1}{2} \log \beta ,
\]
so that
\beq{eq:valB}
B_1 = 2\log \alpha , \quad B_2= 2\log \beta .
\end{equation}
The minimal polynomial for $\beta_3$ is $r^2cX^2-s^2b$ divided by
$\gcd (r^2c,s^2b)$, so that
\[
{\rm h} (\beta_3) = \frac{1}{2} \log \left(\frac{s^2 b}{\gcd (r^2c,s^2b)}\right).
\]
As the lower bound for $\log |\Lambda_1|$ given by Aleksentsev's
theorem decreases when $B_3$ increases, we can take
\[
B_3 =4\log \bigl(s\sqrt{b}\bigr).
\]
Combining the obvious relations $\alpha > \beta > \beta_3$, $B_3>B_1>B_2$
with $m\log \alpha < l\log \beta$ (proved in Lemma 3.3 from~\cite{het})
and its consequence $l>m$, one obtains
\[
E=\max \left( \frac{2l}{\log \beta}, 3 \right).
\]
Having in view that by Theorem~\ref{tenoi} one has $b<10^{148/1.16}$,
for $l \ge 250$ one gets
\[
E= \frac{2l}{\log \beta}.
\]
Since $\log \Lambda_1 < -4l\log \beta + \log (b)-\log (b-1)$ by Lemma~3.1
from~\cite{het}, one has
\[
l < 6.005171\cdot 10^{11} \log \alpha \log (s\sqrt{b}) \log \left(\frac{2l}{\log \beta} \right).
\]
Most of the previous work on $D(-1)$--quadruples has focused on the $z$-component
of the solutions to system~\eqref{ecpc}--\eqref{ecpb}. In order to
use the information already available in the literature, we shall derive
from the inequality above one involving $m$ and subsequently another one
in terms of $n$. In a first step towards this goal
we employ the elementary fact that the function $x\mapsto x/\log x$ is
increasing for $x>3$. By Lemma~3.3 from~\cite{het}, we thus get
\[
m < 6.005171\cdot 10^{11} \log \beta \log (s\sqrt{b})
\log \left(\frac{2m \log \alpha}{(\log \beta )^2} \right).
\]
A slight simplification is possible thanks to the following result.
\bel{lecomp}
$
\displaystyle \frac{\log (s+\sqrt{c})}{\log (r+\sqrt{b})} < \frac{\log c}{\log b}.
$
\end{lemma} \begin{proof}
Consider the real functions $ f_1(x)=\log (\sqrt{x}+\sqrt{x-1})$
and $f_2(x)=\log x$ defined for $x\ge 1$.
As $f_2 '(x) >0$ and $f_1 '(x)/f_2'(x)$ is decreasing for $x>1$,
by~\cite{avv} we know that
\[
\frac{\log (\sqrt{x}+\sqrt{x-1})}{\log x} = \frac{f_1(x)-f_1(1)}{f_2(x)-f_2(1)}
\]
is decreasing as well.
\end{proof}
Using this observation together with the obvious inequality
$bs^2 < bc-1 $, we get
\[
2m < 6.005171\cdot 10^{11} \log \beta \log (bc-1)
\log \left(\frac{2m \log c}{\log b \log \beta } \right).
\]
As explained above, for $m\ge 250$ one has $2m>3\log \beta$,
so one can apply the same reasoning to pass from $m$ to $n$
with the help of the inequality $m\log (4c) > n\log (bc-1)$
proved in~\cite[Lemma~2.7]{bcm}. The resulting formula is
\[
2n < 6.005171\cdot 10^{11} \log \beta \log (4c)
\log \left(\frac{2n \log (bc-1) \log c}{\log b \log (4c)\log \beta } \right).
\]
Since $2r < \beta < 2\sqrt{b} $, Theorem \ref{Alex} yields the following corollary.
\bec{aleup}
If $n\ge 250$, then
\[
n < 1.5002\cdot 10^{11} \log (4b) \log (4c)
\log \left(\frac{4n \log (bc)}{\log b \log (4b) } \right).
\]
\end{cor}
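In practice, the absolute bound on $n$ implied by Corollary~\ref{aleup} is extracted by a standard fixed-point iteration. The sketch below (ours; the values of $b$ and $c$ are purely illustrative) shows the mechanism.
\begin{verbatim}
from math import log

def n_bound(b, c, start=1e60, steps=50):
    # iterate the right-hand side of the corollary; starting above the fixed
    # point, the sequence decreases and settles quickly
    n = start
    for _ in range(steps):
        n = (1.5002e11 * log(4 * b) * log(4 * c)
             * log(4 * n * log(b * c) / (log(b) * log(4 * b))))
    return n

print(f"{n_bound(1e13, 1e50):.3e}")   # roughly 2e16 for these sample values
\end{verbatim}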
Upper bounds of this type are complemented by reverse inequalities.
Our next immediate goal is to sharpen some lower bounds for $n$
in terms of $b$ and $c$ established in~\cite{bcm}. To this end,
we shall use the positive integer $A$ introduced in~\cite{dufu}
by formula
\[
A=(2b-1)c-2rst.
\]
Routine calculations lead to the simpler
statement $A=f^2+b$. For the proof of our next results we recall
from Lemma~3.4 of~\cite{bcm} that $A$ satisfies the double inequality
\beq{eqet2}
\frac{c-5}{4b}+b < A < \frac{1}{3.999} \left(\frac{c}{b} +4b\right)
\end{equation}
as well as the congruence
\beq{congA}
2(bn^2-m^2) \equiv \pm An \pmod c.
\end{equation}
Occasionally we shall rewrite this as
\beq{eqA}
2(bn^2-m^2) +\rho An = jc
\end{equation}
for a fixed $\rho\in \{-1,1\}$ and a certain integer $j$.
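For completeness, the routine calculation behind $A=f^2+b$ (this verification is ours) uses only $b=r^2+1$, $c=s^2+1$, $t=rs+f$ and the master equation~\eqref{ecrs}:
\[
A=(2r^2+1)(s^2+1)-2rs(rs+f)=2r^2+s^2+1-2frs=2r^2+1+(f^2-r^2)=f^2+b .
\]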
Actually, slightly stronger upper bounds on $A$ are valid in the
context of interest in this paper.
\bel{letarA}
Let $(1,b,c,d)$ with $1<b<c<d$ be a $D(-1)$--quadruple.
Then:
\emph{a)} For $c<b^3$ one has
\[
A<\left(\frac{c}{ 4b}+b\right) \left(1+\frac{1 }{ b}\right).
\]
\emph{b)} $A < 2b$ for $c< 4\, b^2$.
\emph{c)} $A< c/(3.9999\, b)$ for $c> b^3$.
\end{lemma} \begin{proof}
For part a) we use the inequality
\[
A < b + \frac{1 }{ 4}\left( \frac{b-1}{ c-1}+\frac{c-1}{ b-1}\right) +\frac{1 }{ 2}
\]
established in the proof of Lemma~3.4 from~\cite{bcm}.
Since
\[
\frac{b-1}{ c-1} < \frac{b}{ c}
\]
because $b<c$, and
\[
\frac{c-1}{ b-1} < \frac{c}{ b-1}= \frac{c }{ b}+\frac{c }{ b^2}+\frac{c}{ b^2(b-1) },
\]
the result follows from the hypothesis $c<b^3$ and the estimate $b>10^{13}$.
When $c< 4\, b^2$, part a) yields $A< 2b+2$. The assumption $c<4b^2$
implies $s <2\, b$, so for $s=2b-1$ one sees from the definition
of $A$ that $2b-1$ divides $A$, so one necessarily has $A=2b-1$.
Hence, $c$ is odd, which is not possible with $s$ odd.
When $s=2b-2=2r^2$, one readily gets $t=2r^3+r$ and $f=t-sr=r$.
As we already know that there is no $D(-1)$--quadruple with
$\gcd (r,s)=f$, we conclude that $s\le 2b-3$. Therefore,
\[
A< \frac{c}{4b} +b +2 \le 2b -1+\frac{9}{4b} <2b,
\]
as claimed in b).
The inequality c) follows as soon as we show that it holds
\[
b+1+\frac{c-1}{4( b-1)} < \frac{c}{3.9999\, b}.
\]
This is a corollary of the slightly stronger inequality
\[
b+1 < \frac{bc-20000 \, c}{15996\, b(b-1)}
\]
valid because $c>b^3$ and $b>10^{13}$.
\end{proof}
Now we are in a position to give a simplified list of
lower bounds for $n$ in terms of $b$ and $c$. More precisely,
the constants appearing in these bounds improve upon those
provided by Lemmata~3.6,~3.9, and~4.2 from~\cite{bcm}.
\begin{proposition}\label{prmarg}
Let $(1,b,c,d)$ with $1<b<c<d$ be a $D(-1)$--quadruple
with $c > b^3$. Then $n > \min \{b,0.125 \, \sqrt{c/b} \}$.
\end{proposition} \begin{proof}
We reason by reduction to absurd. Assuming that the thesis is
false, with the help of Lemma~\ref{letarA} we get
\[
0 < An \le \frac{ c}{3.9999} < 0.2501 \, c,
\]
\[
0 < 2(bn^2-m^2) <2bn^2 \le \frac{ c}{32} < 0.0313\, c.
\]
Therefore, congruence~\eqref{congA} is actually an equality
$An=2(bn^2-m^2)$, whence $A < 2bn \le 0.25\, \sqrt{bc}$. This
inequality is not compatible with $A > c/(4b) $ when $c>b^3$.
\end{proof}
\begin{proposition}\label{prmarg2}
Let $(1,b,c,d)$ with $1<b<c<d$ be a $D(-1)$--quadruple
with $4b^2 < c < b^3$. Then $n > 0.125 \, c/b^2$.
\end{proposition} \begin{proof}
As before, we suppose that the conclusion is false. Note
that part a) of Lemma~\ref{letarA} entails $A<0.6 \, c/b$,
whence $An < 0.075 \, c^2/b^3$. Since $2bn^2 < 0.032 \, c^2/b^3$,
we conclude yet again that congruence~\eqref{congA} is actually
an equality. We therefore get
\[
\frac{c}{4b} < A<2 \, bn \le \frac{c}{4b},
\]
a blatant contradiction.
\end{proof}
The result just proved is not useful when $c$ is close to $4b^2$.
One way to eliminate this inconvenience follows. In the statement below
we refer to Eq.~\eqref{eqA}.
\begin{proposition}\label{pr3.8} Suppose $(1,b,c,d)$ with $1<b<c<d$ is a $D(-1)$--quadruple
with $ c < b^3$.
\emph{a)} If $\rho =1$, then $n>0.5 \sqrt{c/b}$.
\emph{b)} Let $\rho =-1$. Then $j$ is nonnegative. If $j$ is positive,
then $n>0.5 \sqrt{2c/b}$. If $j=0$, then $c> 7164532\, b^2 > b^{2.155}$ and
\[
n> \left\{\begin{array}{ll}
c^{2/11} & \mathrm{for} \ c\ge \max \{ b^{2.5}, 10^{50} \}, \\
0.214\, (c/b)^{1/3} & \mathrm{for} \ c< b^{2.5}.
\end{array}
\right.
\]
\end{proposition} \begin{proof}
The result is very close to Lemma~3.8 from \cite{bcm}. There are
two differences: the hypothesis $c<b^3$ (instead of $c<b^{2.75}$)
which allows one to employ part a) of the above Lemma~\ref{letarA}
and the conclusion $c> 7164532\, b^2$ (instead of $c> 51.99\, b^2$).
a) The proof given in~ \cite[Lemma~3.8]{bcm} is valid under the
present hypotheses.
b) In \emph{loc. cit.} it was shown that for $\rho =-1$ and $j=0$ one has
$n>0.214\, (c/b)^{1/3}$ when $ c< b^{2.5}$. Since $b>20^{10}$, one gets
\[
A = 2bn -2m^2/n >2(b-4)n > 0.4279 \bigl(51.99\cdot 20^{10} \bigr)^{1/3} b.
\]
As a consequence of Lemma~\ref{letarA} a) one also has
\[
A<1.0001 \left(\frac{c}{4b}+b \right).
\]
Comparison of the two bounds on $A$ results in the inequality $c>138703\, b^2$.
Resume the reasoning from the previous paragraph with this information
instead of $c> 51.99\, b^2$. The outcome is an improved lower bound on $c$.
After fourteen more iterations one obtains $c> 7164532\, b^2$. According
to~\cite[Proposition~4.1]{bcm}, for $b^2 <c<b^3$ one has $b<10^{44}$.
This readily gives $7164532 > b^{0.155}$, which ends the proof.
\end{proof}
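The bootstrapping performed in part b) is easy to reproduce approximately. The sketch below (ours; it follows the displayed inequalities, so the last digits may differ slightly from the more careful bookkeeping in the text) iterates the comparison of the two bounds on $A$.
\begin{verbatim}
# Compare A > 0.4279*(K * 20**10)**(1/3)*b with A < 1.0001*(c/(4b) + b),
# where K is the current constant in c > K*b**2; start from K = 51.99.
K = 51.99
for step in range(15):
    K = 4 * (0.4279 * (K * 20**10) ** (1 / 3) / 1.0001 - 1)
    print(step + 1, round(K))
# the values stabilize near 7.16e6, in line with c > 7164532 b^2 in the text
\end{verbatim}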
For hypothetical $D(-1)$--quadruples with $c<4\, b^2$ we also offer two
kinds of lower bounds for $n$.
\begin{proposition}\label{pr3.1}
Let $(1,b,c,d)$ with $1<b<c<d$ be a $D(-1)$--quadruple.
If $ c < 4\, b^{2}$, then $n > 0.707 \, \sqrt{c/b}$.
\end{proposition} \begin{proof}
The previous proposition gives $n> 0.5 \sqrt{2c/b}$ when $\rho=-1$,
which is slightly stronger than the claimed inequality. When
$\rho=1$ then one has $2(bn^2-m^2)+An \ge c$. In view of our
previous Lemma~\ref{letarA}, we get $2bn^2+2b n-c > 0$, so that
\[
2bn > -b +\sqrt{b^2+2bc}> 1.414 \, \sqrt{bc},
\]
where the last inequality holds because $c>3.999 \, f^2 b > 3.999
\cdot 10^{14}b$ by Lemma~3.5 of~\cite{bcm}.
\end{proof}
\begin{proposition}\label{pr3.9} Suppose $n\ge 1000$.
\emph{i)} If $b^{1.32} \le c <b^{1.40}$, then
$\displaystyle n \ge \left( \frac{15.927 \, b^2}{ c}\right)^{1/4}$ .
\emph{ii)} If $b^{1.27} \le c <b^{1.32}$, then
$\displaystyle n \ge \left( \frac{15.830 \, b^2}{ c}\right)^{1/4}$ .
\emph{iii)} If $b^{1.22} \le c <b^{1.27}$, then
$\displaystyle n \ge \left( \frac{15.387 \, b^2}{ c}\right)^{1/4}$ .
\emph{iv)} If $b^{1.16} \le c <b^{1.22}$, then
$\displaystyle n \ge \left( \frac{12.850 \, b^2}{ c}\right)^{1/4}$ .
\end{proposition} \begin{proof}
The new idea is to use the observation that for any
integers $L$, $M$, $R$, from $L\equiv R\pmod{2M}$ it follows
$L^2\equiv R^2\pmod{4M}$.
We put $e=f^2-1$, $\Delta=f^2$, then $A=b+\Delta$ and, as seen
in the proof of Lemma~\ref{letarA}, it follows that
\[
\Delta < \frac{c}{ 4b} + \frac{c}{4 b^2} + 1 < 0.2501\, \frac{c }{ b}.
\]
Notice that $b +e-1 = 2ft -c $, so that
\[
b^2+2(e-1)b +e^2-2e+1 \equiv c^2+4f^2t^2 \equiv c^2-4f^2=c^2-4(e+1) \pmod{4c}.
\]
It follows that
\[
b^2+2(e-1)b +e^2+2e+5 \equiv c^2 \pmod{4c}.
\]
The congruence method introduced in~\cite{dup} is based on the relation
$s(m^2-bn^2) \equiv \rho rtn \pmod{4c}$. Multiplying both sides by $2s$
one obtains
\[
2(bn^2-m^2) \equiv -\rho A n +cn \pmod {2c},
\]
equivalently
\[
b(2n^2+\rho n) \equiv 2m^2-\rho \Delta n +cn \pmod {2c}.
\]
From this we get
\[
b^2 (2n^2+\rho n)^2 \equiv (2m^2-\rho \Delta n +cn)^2 \pmod {4c}
\]
as well as
\[
2(e-1)b(2n^2+\rho n)^2 \equiv 2(e-1)(2n^2+\rho n) ( 2m^2-\rho \Delta n +cn) \pmod {4c}.
\]
By summation we get that $\bigl(b^2+2(e-1)b \bigr)(2n^2+\rho n)^2$
is congruent modulo $4c$ to
\[
( 2m^2-\rho \Delta n +cn)^2 +2(e-1)(2n^2+\rho n)(2m^2 - \rho \Delta n + cn),
\]
equivalently, again modulo $4c$,
\[
(c^2-e^2-2e-5) (2n^2+\rho n)^2 \equiv
( 2m^2-\rho \Delta n +cn)^2+ 2(e-1)(2n^2+ \rho n)
(2m^2 - \rho \Delta n + cn).
\]
Considering separately even and odd values of $n$, it is seen
\[
- (e^2+2e+5) (2n^2+\rho n)^2 \equiv
( 2m^2-\rho \Delta n )^2
+2(e-1)(2n^2+ \rho n)(2m^2 - \rho \Delta n ) \pmod {4c}.
\]
Now we want to find an upper bound for the expression
\[
\Phi := (e^2+2e+5) (2n^2+\rho n)^2 +
( 2m^2-\rho \Delta n )^2
+2(e-1)(2n^2+ \rho n) |2m^2 - \rho \Delta n | .
\]
We proceed piece by piece, taking into account the relative
size of $b$ and $c$ as well as the upper bound on $c$. We give
all details for part i), leaving the other cases to the reader.
According to Lemma 2.5 from~\cite{bcm}, we have
\[
\frac{2m-1}{2n} < \frac{\log \bigl(4\, c^{1+1/1.32} \bigr) }{\log \bigl(3.996\, c \bigr)}
< \frac{\log \bigl(4\cdot 10^{147.43\cdot 58/33} \bigr)}{\log \bigl(3.996\cdot 10^{147.43}\bigr)}
< 1.7545,
\]
whence
\[
m< 1.7545 \, n+ 1/2 \le 1.7555 \, n.
\]
Therefore,
\begin{align*}
|2m^2 - \rho \Delta n | & < 2 \times 1.7555^2 n^2 + 0.2501 \frac{cn }{ b}
= \left( 6.16005 + 0.2501 \frac{c}{ nb} \right) n^2, \\
|2m^2 - \rho \Delta n |^2 & < \left( 6.16005 + 0.2501 \frac{c}{ nb} \right)^2 n^4,
\end{align*}
\[
2(e-1)(2n^2+\rho n) |2m^2-\rho \Delta n| < \frac{0.5002 c }{ b}\times
2.001 \,n^2 \times \left( 6.16005 + 0.2501 \frac{c}{ nb} \right) n^2,
\]
hence
\[
2(e-1)(2n^2+\rho n) |2m^2-\rho \Delta n| < 1.001 \frac{c}{ b}
\left( 6.16005 + 0.2501 \frac{c}{ nb} \right) n^4.
\]
By
\begin{align*}
e^2+2e+5 =\Delta^2+4 & < \left( 0.2501^2 + \frac{4}{b^{0.64}} \right)\frac{c^2}{b^2}\\
& < \left( 0.2501^2 + \frac{4}{10^{13\cdot 0.64}} \right)\frac{c^2}{b^2}
< 0.062551 \, \frac{c^2}{b^2}
\end{align*}
we also have
\[
(e^2+2e+5 ) (2n^2+\rho n)^2 < 0.062551 \times \frac{c^2 }{ b^2} \times 2.001^2 n^4
< 0.250455 \, n^4 \, \frac{c^2 }{ b^2} .
\]
Collecting all these estimates we get that $\Phi $ is less than
\[
\left( 0.250455 + \left( 6.16005 \frac{b}{ c} + \frac{0.2501}{ n} \right)^2
+ 1.001 \left( 6.16005 \frac{b}{ c} + \frac{0.2501}{ n} \right) \right) \, n^4 \, \frac{c^2 }{ b^2}.
\]
Using the known lower bounds on $n$ and $c/b$, it is easy to verify that
\[
\Phi < 0.251134\, n^4 \, \frac{c^2 }{ b^2}
\]
and we see that
\[
n < \left( \frac{15.927 \, b^2}{ c}\right)^{1/4} \ \Longrightarrow\ \Phi < 4\, c.
\]
But, by a previous congruence, the nonnegative integer $\Phi$ is a
multiple of $4c$, so for $\Phi < 4\, c $ it holds
\[
(e^2+2e+5) (2n^2+\rho n)^2 +
( 2m^2-\rho \Delta n )^2+2(e-1)(2n^2+ \rho n)(2m^2 - \rho \Delta n )=0.
\]
The left hand side of the last equation is of the form
\[
(e^2+2e+5) X^2 +2(e-1) XY+Y^2, \quad {\rm with}\ X = (2n^2+\rho n),
\ Y = ( 2m^2-\rho \Delta n ),
\]
a quadratic form whose discriminant (equal to $- 4\, f^2$) is negative, so
that $\Phi$ is always positive when $n>0$, a contradiction which implies
$n \ge \left( \frac{15.927 \, b^2}{ c}\right)^{1/4}$.
\end{proof}
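The positivity can also be made completely explicit (this remark is ours): since $e^2+2e+5=(e-1)^2+4(e+1)=(e-1)^2+4f^2$, completing the square gives
\[
(e^2+2e+5) X^2 +2(e-1) XY+Y^2=\bigl((e-1)X+Y\bigr)^2+(2fX)^2 ,
\]
so the vanishing of the left hand side would force $X=2n^2+\rho n=0$, which is impossible for $n>0$.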
The results just proved serve to improve Theorem~\ref{tenoi}.
To this end we combine them with Aleksentsev's theorem
in conjunction with a similar result due to Matveev~\cite[Theorem 2.1]{Mat98}
applicable in the following context.
Let $\beta_1,\beta_2,\beta_3$ be real algebraic
numbers and denote $K:=\mathbb Q(\beta_1,\beta_2,\beta_3)$. Put $D:=[K:\mathbb Q]$.
Assume that $\beta_1,\beta_2,\beta_3$ satisfy the Kummer condition, that is,
\[
[K(\sqrt{\beta_1},\sqrt{\beta_2},\sqrt{\beta_3}):K]=8.
\]
Consider a linear form
$
\Lambda_1:=b_1 \log \beta_1+b_2 \log \beta_2+ b_3 \log \beta_3,
$
where $b_1,\,b_2,\,b_3$ are integers with $b_3 \ne 0$.
Put $A_j:=h(\beta_j)$ for $1 \le j \le 3$.
We take $E,\,E_1,\,C_3,\,C_1,\,C_2$ as follows:
\begin{align*}
E & \ge \frac{1}{3D} \max \left\{\left|\pm \frac{\log \beta_1}{A_1}\pm
\frac{\log \beta_2}{A_2} \pm \frac{\log \beta_3}{A_3}\right|\right\},\\
E_1&=\frac{1}{2D}\left(\frac{1}{A_1}+\frac{1}{A_2}+\frac{1}{A_3}\right),\\
C_3^*\exp(C_3^*)\frac{Ee}{2}&\ge e^3,\quad C_3=\max\{C_3^*,3\},\\
C_1&=\left(1+\frac{e^{-6}}{148}\right)(3 \log 2 + 2) \frac{4}{3C_3},\\
C_2&=16\left(6+\frac{5}{3 \log 2+2}\right)\frac{e^6}{3^{1/2}C_3}.
\end{align*}
We also put
\begin{align*}
\Omega&:=A_1A_2A_3,\\
\omega&:=\Omega \left(\frac{DC_1}{e}\right)^3C_3 \exp(C_3)\frac{Ee}{2}.
\end{align*}
Let $C_0$ be a real number satisfying
\[
C_0 \ge \max\left\{2C_3,\log\left(4C_2 \max
\left\{\frac{C_0\omega}{4C_1A_3},C_0,\frac{2E_1C_3}{C_1}\right\}\right)\right\}.
\]
Furthermore, put
\begin{align*}
B_0&:=\sum_{j=1}^2 \frac{|b_3|+|b_j|}{8\gcd(b_j,b_3)C_0C_2\omega},\\
B_1&:=\sum_{j=1}^2 \frac{1}{24\gcd(b_j,b_3)C_1}\left(\frac{|b_3|}{A_j}+\frac{|b_j|}{A_3}\right),\\
B_2&=\sum_{j=1}^2 \frac{|\log \beta_j|(|b_3|+|b_j|)}{8|b_3|C_0C_2\omega},\\
B_3&=\sum_{j=1}^2 \frac{|\log \beta_j|}{24|b_3|C_1}\left(\frac{|b_3|}{A_j}+\frac{|b_j|}{A_3}\right),
\end{align*}
and take a real number $W_0$ satisfying
\[
W_0 \ge \max\{2C_3,\log(e(1+B_0+B_1+B_2+B_3))\}.
\]
Now we are ready to state \cite[Theorem 2.1]{Mat98} in a form applicable to our situation.
\bet{thm:Mat98} \emph{(Matveev)}
Suppose that
\begin{align*}
2\omega\min\{C_0,W_0\} &\ge C_3,\\
\omega \min\{C_0,W_0\} & \ge 2C_1C_3 \max \{A_1, A_2,A_3\},\\
3(4C_1)^2 4C_0\Omega & \ge C_3\max \{A_1, A_2,A_3\}.
\end{align*}
Then,
\[
\log |\Lambda_1| > -11648C_2C_0W_0 \omega.
\]
\end{theorem}
In order to apply this result to the linear form~\eqref{eqlam1},
we need to check that the Kummer condition is valid.
First we show that $\sqrt{\beta_1} \not \in K$.
Assume on the contrary that $\sqrt{\beta_1} \in K$.
Then one may write
$
\sqrt{\beta_1} =l_0+l_1\sqrt{b}+l_2\sqrt{c}+l_3\sqrt{bc}
$
with $l_0,\,l_1,\,l_2,\,l_3 \in \mathbb Q$.
Squaring both sides yields
\begin{align*}
s+\sqrt{c}&=l_0^2+bl_1^2+cl_2^2+bcl_3^2+2(l_0l_1+cl_2l_3)\sqrt{b}\\
&\quad +2(l_0l_2+bl_1l_3)\sqrt{c}+2(l_0l_3+l_1l_2)\sqrt{bc},
\end{align*}
whence
$
s=l_0^2+bl_1^2+cl_2^2+bcl_3^2, \quad 1=2(l_0l_2+bl_1l_3).
$
The arithmetic mean -- geometric mean inequality yields
\[
s\ge 2|l_0l_2|\sqrt{c} +2|l_1l_3| b\sqrt{c}\ge 2(l_0l_2+bl_1l_3)\sqrt{c}
= \sqrt{c} > s,
\]
a contradiction.
Similarly one proves that $\sqrt{\beta_2} \not \in K$.
To check $\sqrt{\beta_3} \not \in K$, we suppose the contrary and
get $0=l_0^2+bl_1^2+cl_2^2+bcl_3^2$ for some $l_0,\,l_1,\,l_2,\,l_3 \in \mathbb Q$.
Since $b$, $c>0$, it follows that all $l_j$ are zero, so $\beta_3 =0$,
absurd.
Secondly, assume that $\sqrt{\beta_1} \in K(\sqrt{\beta_2})$.
Then one may write $\sqrt{\beta_1}=k_0+k_1\sqrt{\beta_2}$ for
some $k_0,\,k_1 \in K$, equivalently
$
s+\sqrt{c}=k_0^2+k_1^2(r+\sqrt{b})+2k_0k_1\sqrt{\beta_2}.
$
If $k_0k_1 \ne 0$, then this equation shows that $\sqrt{\beta_2} \in K$,
which is impossible as seen above. If $k_1=0$, then $s+\sqrt{c}=k_0^2$,
which contradicts $\sqrt{\beta_1} \not \in K$. There remains the case $k_0=0$, so that
$s+\sqrt{c}=k_1^2(r+\sqrt{b})$ with
$k_1=l_0+l_1\sqrt{b}+l_2\sqrt{c}+l_3\sqrt{bc}$ and
$l_0,\,l_1,\,l_2,\,l_3 \in \mathbb Q$. Identification of the coefficients of
$\sqrt{b}$ on the two sides of this equation, followed by application
of the arithmetic mean -- geometric mean inequality, results in
\begin{align*}
0 & = l_0^2+bl_1^2+cl_2^2+bcl_3^2 +2(l_0l_1+cl_2l_3)r \\
& \ge 2 \bigl(|l_0l_1| + c |l_2l_3|\bigr)\sqrt{b} +2(l_0l_1+cl_2l_3)r \\
& \ge 2 \bigl(|l_0l_1| + l_0l_1 +c (|l_2l_3|+ l_2l_3 )\bigr) r \ge 0.
\end{align*}
The middle inequality is strict unless $l_0l_1 = l_2l_3 =0$, in which
case all $l_j$ are zero. In either case we reached a contradiction.
Similarly one shows that $\sqrt{\beta_2} \not \in K(\sqrt{\beta_1})$.
It remains only to show that $\sqrt{\beta_3} \not \in K(\sqrt{\beta_1},\sqrt{\beta_2})$.
Assume the contrary and put
\[
\sqrt{\beta_3}=k_0+k_1\sqrt{\beta_1}+k_2\sqrt{\beta_2}+k_3\sqrt{\beta_1 \beta_2}
\]
with some $k_0,\,k_1,\,k_2,\,k_3 \in K$.
Squaring both sides, one has
\begin{align}\label{chi'2}
\beta_3 &=k_0^2+k_1^2 \beta_1+k_2^2 \beta_2+k_3^2 \beta_1 \beta_2+2(k_0k_1+k_2k_3 \beta_2)\sqrt{\beta_1}\\
&\quad +2(k_0k_2+k_1k_3 \beta_1)\sqrt{\beta_2}+2(k_0k_3+k_1k_2)\sqrt{\beta_1 \beta_2}. \notag
\end{align}
If $k_0k_1+k_2k_3\beta_2\ne 0$, with the help of $\sqrt{\beta_2} \not \in K$
one deduces first that $k_0k_1+k_2k_3\beta_2 + 2(k_0k_3+k_1k_2)\sqrt{\beta_2}\ne 0$
and next that $\sqrt{\beta_1} \in K(\sqrt{\beta_2})$. A similar
contradiction is reached assuming either $k_0k_3+k_1k_2 \ne 0$ or
$k_0k_2+k_1k_3 \beta_1 \ne 0$. Hence
\[
k_0k_1= -k_2k_3 \beta_2, \quad k_0k_2= -k_1k_3\beta_1, \quad k_0k_3= -k_1k_2,
\]
whence
\[
k_0k_1(k_2^2-\beta_1 k_3^2)=0, \quad k_0k_2(k_1^2-\beta_2 k_3^2)=0, \quad
k_1k_2(k_0^2-\beta_1 \beta_2 k_3^2)=0.
\]
In view of what we have already proved, it is readily seen that the last
three equations imply that precisely one of the $k_j$ is nonzero. Note that
$k_0 \ne 0$ gives $\sqrt{\beta_3} \in K$, absurd. For $k_1 \ne 0$
one has $s\sqrt{b} = k_1^2 (s+\sqrt{c}) r \sqrt{c}$. Passing to $\mathbb Q$
and comparing the coefficients of $1$ and $\sqrt{c}$ on the two sides,
one gets a linear system of equations $sX+cY=0$, $X+sY=0$, with
$X=l_0^2+bl_1^2+cl_2^2+bcl_3^2$, $Y=2(l_0l_2+bl_1l_3)$, and
$l_0,\,l_1,\,l_2,\,l_3 \in \mathbb Q$. Since the determinant of this system
is $s^2-c=-1$, it has only the trivial solution, which gives the
contradiction $k_1=0$. Similarly one can conclude that neither
$k_2 \ne 0$ nor $k_3 \ne 0$ is possible.
Now the verification that the Kummer condition holds for our $\Lambda_1$
is complete, so we can proceed to choose suitable values for the
parameters in the statement of Theorem~\ref{thm:Mat98}.
As discussed in connection with Theorem~\ref{Alex}, we take
\[
A_1 = \frac{1}{2}\log (s+\sqrt{c}) , \quad A_2= \frac{1}{2}\log (r+\sqrt{b}),
\quad A_3 =\log (s\sqrt{b}).
\]
Then
\[
E\ge \frac{1}{12}\max \left\{ \left \vert\pm 2 \pm 2 \pm
\frac{\log (s\sqrt{b}/r\sqrt{c})}{\log (s\sqrt{b})}\right\vert \right\}.
\]
From
\[
\frac{s\sqrt{b}}{r\sqrt{c}} =\sqrt{1+\frac{s^2-r^2}{(s^2+1)r^2}}
<\sqrt{1+\frac{1}{r^2}} < 1+\frac{1}{2r^2},
\]
we see that we can take
\[
E=\frac{4+10^{-15}}{12}.
\]
Thus, we may take $C_3^*=2.8$ and $C_3=3$.
In order to fix a value for $E_1$, we need lower bounds
for $\log \beta _j$. Using Theorem~\ref{tenoi}, it is readily
seen that a suitable value is
\[
E_1=0.033653.
\]
It is easy to see that $C_0$ should satisfy
\begin{align*}
C_0 \ge \log \left(\frac{C_0C_2\omega}{C_1A_3}\right)
=\log(C_0T)
\end{align*}
with $T=96Ee\,C_1^2 C_2 A_1A_2$, which allows us to take
\[
C_0=\log T+\log(\log T)+\log(\log(\log T))+2\log(\log(\log(\log T)))
\]
(note that $\log(\log(\log(\log T)))>0$).
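For concreteness, this choice of $C_0$ is easy to test numerically; the sketch below (Python, with a purely illustrative value of $T$ standing in for $96Ee\,C_1^2 C_2 A_1A_2$) verifies that the iterated-logarithm expression satisfies the required inequality $C_0\ge\log(C_0T)$.
\begin{verbatim}
from math import log

def C0_choice(T):
    # the iterated-logarithm choice of C_0 displayed above
    lt = log(T)
    return lt + log(lt) + log(log(lt)) + 2 * log(log(log(lt)))

T = 1.0e12                        # illustrative only; in the text T = 96*E*e*C1^2*C2*A1*A2
assert log(log(log(log(T)))) > 0  # the hypothesis noted above
C0 = C0_choice(T)
assert C0 >= log(C0 * T)          # the requirement C_0 >= log(C_0 T)
print(C0, log(C0 * T))
\end{verbatim}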
Since $m \log \beta_1 < l \log \beta_2$ and $A_1 > A_2$, one has
\begin{align*}
B_0+B_1+B_2+B_3&<\left(\frac{l}{2C_0C_2\omega}+\frac{1}{6C_1}
\left(\frac{1}{\log \beta_2}+\frac{l}{A_3}\right)\right)(1+\log \beta_1).
\end{align*}
We therefore take
\[
W_0=1+ \log\left( 1+\left( \frac{l}{2C_0C_2\omega}+\frac{1}{6C_1}
\left(\frac{1}{\log \beta_2}+\frac{l}{A_3}\right)\right)(1+\log \beta_1)
\right).
\]
Hence, combining the estimate in Theorem~\ref{thm:Mat98} with
$0 < \Lambda_1 < \bigl(8ac/(b-1)\bigr)\beta_2^{-4l}$ one gets
\beq{mal}
l < 69888 C_0 C_1^3 C_2 W_0 E e \log (s+\sqrt{c}) \log (s\sqrt{b})
+0.25 \log \left(\frac{b}{b-1} \right).
\end{equation}
As previously done, we pass from this inequality to one involving
$m$ and subsequently to one in terms of $n$. Assuming that
$b^{del} < c < b^{Del} $ holds for some real numbers $1.16< del < Del <4.1$,
we finally get
\beq{ineqM98n}
n < 17472 C_0 C_1^3 C_2 W_0 S e \log (4b) \log (4c),
\end{equation}
with
\[
S=1+ \log\left( 1+\left( (Del+1) \left(\frac{1}{2C_0C_2\omega}+
\frac{1}{6C_1A_3} \right) n + \frac{1}{6C_1\log \beta_2}\right)
(1+\log \beta_2) \right).
\]
At this point we have all the ingredients for the proof of the main
result of this section.
\bet{prale}
Let $(1,b,c,d)$ with $1<b<c<d$ be a $D(-1)$--quadruple.
Then $b>1.024\cdot 10^{13}$ and $ \max \{10^{14}b, b^{1.233} \}
< c< \min \{ b^{2.93}, 10^{99} \}$.
More precisely:
\begin{enumerate}
\item [i)] If $ b^{2}\le c < b^{2.93}$, then
$b< 6.89\cdot 10^{32}$ and $c< 8.48\cdot 10^{70}$.
\item [ii)] If $b^{1.5}\le c< b^2$, then
$b< 1.26\cdot 10^{49}$ and $c < 4.48\cdot 10^{73}$.
\item [iii)] If $b^{1.4} \le c< b^{1.5}$, then
$b< 2.07 \cdot 10^{62}$ and $c < 1.77\cdot 10^{87}$.
\item [iv)] If $b^{1.3} \le c< b^{1.4}$, then
$b< 6.26 \cdot 10^{73}$ and $c < 10^{99}$.
\item [v)] If $b^{1.233} \le c< b^{1.3}$, then
$b< 10^{69}$ and $c < 4.85\cdot 10^{89}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Each interval $b^\gamma < c < b^\delta$ has been covered by subintervals
$b^\mu < c < b^{\mu +0.0001}$. On each subinterval, Corollary~\ref{aleup}
and Proposition~\ref{prmarg} produce an upper bound on $b$, which
in turn leads to a bound on $S$. When $n<1000$, instead of
Proposition~\ref{prmarg} we apply similar results from~\cite{bcm}
valid for $n\ge 7$, which results in much sharper bounds on $b$.
Using the estimate on $S$ in~\eqref{ineqM98n}, an improved upper
bound on $b$ is obtained. Our computations show that $b<10^{13}$
whenever $c> b^{2.928}$; since every $D(-1)$--quadruple satisfies $b>10^{13}$,
it follows that no $D(-1)$--quadruple has $c> b^{2.928}$.
We also bound from above $f$ with the help
of the master equation, which shows that
\[
f<\frac{s}{2r}+\frac{r}{2s}.
\]
Since our computations yield that for $c \le b^{1.233}$ one has
$f< 10^7$, by Proposition~\ref{prexp} we conclude that there
exists no $D(-1)$-quadruple with $c$ so close to $b$.
\end{proof}
Comparison with Theorem~\ref{tenoi} reveals the superiority of
Theorem~\ref{prale}. However, it is also apparent that a great deal
of further work would be required to confirm the nonexistence of
$D(-1)$--quadruples using only the tools employed so far.
Therefore, completely different ideas are required for further
advancements. The next section details and clarifies the change
in viewpoint on the problem.
\section{A proof for the main theorem}\label{sec4}
Recall that we have denoted by $J$ the set of pairs of integers
$(r,s)$ such that there exists a $D(-1)$--quadruple $(1,b,c,d)$
with $1<b<c<d$ and $b=r^2+1$, $c=s^2+1$. For $(r,s)\in J$ we put
$s=r^\theta$ and define
\[
\theta^-=\inf_{(r,s)\in J} \theta, \quad
\theta^+=\sup_{(r,s)\in J} \theta.
\]
Using the upper bound $c\le 2.5 \, b^6$ (see Theorem~\ref{tenoi}),
a computer-aided search described in Section~2 of~\cite{bcm} led
to the conclusion that any hypothetical $D(-1)$--quadruple
satisfies $r>32\times 10^5$, so that $b>10^{13}$. Hence
\[
\theta^+ \le 6{.}04.
\]
Based on a refinement of a Diophantine approximation result of
Rickert~\cite{ric}, Filipin and Fujita proved in~\cite{ff} the
inequality $c \le 9.6\, b^4$, which yields
\[
\theta^+ \le 4{.}08.
\]
What we just proved in Theorem~\ref{prale} entails
\[
\theta^+< 3.
\]
The lower bound $\theta^- > 1{.}16$ has been obtained in~\cite{bcm}
and improved in Theorem~\ref{prale} above to
\[
\theta^- > 1{.}23.
\]
Further shortening of the interval $[\theta^-, \theta^+]$
along these lines becomes hopeless, so we introduce the
new approach mentioned in the Introduction.
Our next concern is to have a closer look at solutions of the
master equation compatible with the information gathered so far.
A convenient tool was suggested by the fact that any solution
$(x,y)$ to a Diophantine equation of the type $X^2-2fXY+Y^2=C$
gives rise to two other solutions, namely $(-x+2fy,y)$ and
$(x,2fx-y)$, see~\cite{own}. In this section we shall see how
changes of variables indeed allow one to transfer information
about one specific solution to an associated solution.
Introduce a new variable
\[F:=s-2rf.
\]
As we shall show shortly, it satisfies
\beq{ecesd}
F \sim \left\{ \begin{array}{rl}
\frac{1}{4} r^{\theta - 2}, & \mbox{ if} \quad \theta>2,
\\
- r^{2-\theta}, & \mbox{ if} \quad \theta<2.
\end{array} \right.
\end{equation}
Therefore, we hope to exploit this variable in order to split
the interval $[\theta^-,\theta^+]$ into two subintervals
having a common end-point about $2$.
We study $F$ with the help of the equation
\[
r^2+s^2=2frs+f^2
\]
or its equivalent forms
\beq{ec2d}
sF=f^2-r^2,
\end{equation}
\beq{ec3d}
F^2+2frF+r^2-f^2=0.
\end{equation}
From the master equation one obtains
\[
f=-rs+(r^2s^2+s^2+r^2)^{1/2}=\frac{s^2+r^2}{rs+(r^2s^2+s^2+r^2)^{1/2}}
\sim \frac{1}{2} r^{\theta-1}.
\]
It follows
\[
F = \frac{f^2-r^2}{s}
\sim \frac{1}{4}r^{\theta-2} -r^{2-\theta},
\]
whence estimate~\eqref{ecesd}.
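The asymptotic behaviour just derived is easily visualised; the following sketch (Python, with illustrative values of $r$ and $\theta$ that need not come from a $D(-1)$--quadruple) compares $F$, computed from the numerically stable form of $f$ and from Eq.~\eqref{ec2d}, with the approximation $\tfrac14 r^{\theta-2}-r^{2-\theta}$.
\begin{verbatim}
from math import sqrt

def F_exact(r, theta):
    s = r ** theta
    # stable form of f from the master equation, then F via Eq. (ec2d)
    f = (s * s + r * r) / (r * s + sqrt(r * r * s * s + r * r + s * s))
    return (f * f - r * r) / s      # equals s - 2*f*r, but avoids cancellation

def F_approx(r, theta):
    return 0.25 * r ** (theta - 2) - r ** (2 - theta)

for theta in (1.3, 2.0, 2.9):       # illustrative exponents on both sides of 2
    print(theta, F_exact(1.0e7, theta), F_approx(1.0e7, theta))
\end{verbatim}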
The first properties of $F$ are almost obvious.
\bel{le1d}
\emph{a)} $ \quad F=0 \iff s=2rf \iff f=r \iff s=2r^2 \iff s=2f^2 $.
\emph{b)} $ \quad F>0 \iff s> 2rf \iff f>r \iff s>2r^2 \iff s<2f^2 $.
\emph{c)} $ \quad F<0 \iff s<2rf \iff f<r \iff s<2r^2 \iff s>2f^2 $.
\end{lemma} \begin{proof}
\emph{b)} To prove ``$ \, F>0 \iff s<2f^2\, $'', notice that from
Eq.~\eqref{ecrs} one gets
\[
r=sf-\sqrt{s^2f^2-s^2+f^2}=\frac{s^2-f^2}{sf+
\sqrt{s^2f^2-s^2+f^2}},
\]
and the last expression is smaller than $s/(2f)$ precisely when
$(s^2-f^2)(s^2-4f^4)<0$.
For ``$\, F>0 \iff s>2r^2\, $'', use $f=-rs+\sqrt{r^2s^2+r^2+s^2}$.
\end{proof}
Observe that there are no $D(-1)$--quadruples for which the
corresponding $F$ is zero.
\bel{le3d}
$F\ne 0$.
\end{lemma} \begin{proof}
Suppose, by way of contradiction, that the thesis is false.
Since $F=0$ if and only if $s=2f^2$ and $r=f$, we are in a
situation we have dealt with in Section~\ref{sec2}. There
it was found that this is possible for no $D(-1)$--quadruple.
\end{proof}
From these results it readily follows
that if $f\ne r$, then $f$ is comparatively far away from $r$. The
quantitative statement is given by the next lemma.
\bel{le2d}
\emph{a)} If $ f>r$, then $ f>2r F\ge 2r$.
\emph{b)} If $ f<r $, then $0>F>-2f r$.
\end{lemma} \begin{proof}
Part a) follows from Eq.~\eqref{ec3d} rewritten as
$F^2+r^2=f^2-2rfF$.
b) In view of Lemma~\ref{le1d}, $F$ is negative when $ f<r $.
The lower bound for $F$ follows from~\eqref{ec3d} rewritten as
$F^2+2rfF=f^2-r^2$.
\end{proof}
Now we have all ingredients to show that the newly introduced
variable $F$ indeed serves to separate values of $c$ smaller
than $4b^2$ from those bigger than this threshold.
\bel{le4d}
$ f<r \iff c<4b^2$.
\end{lemma} \begin{proof}
We know that $ f<r$ holds if and only if $s\le 2r^2-1=2b-3$, which
in turn is equivalent to $c\le 4b^2-12b+10$. Hence, $c<4(b-1)^2$
for $ f<r$. To prove the converse implication, note that $c<4b^2$
is tantamount to $s\le 2r^2+1$. For $s=2r^2+1$, Eq.~\eqref{ecrs}
becomes a quadratic in $f$ without integer roots, having
discriminant $4r^6+8r^4+6r^2+1=(2r^3+2r)^2+2r^2+1=
(2r^3+2r+1)^2-4r^3+2r^2-4r$. Thus one has $s\le 2r^2$,
with equality prohibited by Lemmata~\ref{le1d} and~\ref{le3d}.
It remains $s<2r^2$, which, according to the last part of
Lemma~\ref{le1d}, means $f<r$.
\end{proof}
Our next result shows that the existence of $D(-1)$--quadruples is not
compatible with small values of $F$.
\bepr{prDmic}
There is no $D(-1)$--quadruple with $|s-2rf| \le 2\cdot 10^6$.
\end{prop} \begin{proof}
This claim can be established by the following algorithm.
Start by rewriting Eq.~\eqref{ec3d} in the form
\beq{eqDD}
X^2 - (F^2+1) Y^2 = F^2, \quad \mbox{where} \quad X=f-rF , \ Y =r.
\end{equation}
For any $F$, one obvious solution is $(F^2-F+1,F-1)$. A
conjecture of Dujella predicts that an equation $X^2-(a^2+1)Y^2=a^2$
has at most one positive solution with $0< Y < |a|-1$ (this readily
implies the nonexistence of $D(-1)$--quadruples, see~\cite{mrw}).
In~\cite{mrw}, this claim is checked for $|a| < 2^{50}$, so, for each
$F$ with absolute value up to $2\cdot 10^6$ we can find at most one
exceptional solution $(x_0,y_0)$ with $0< y_0 < |F|-1$.
Next we consider solutions $(x,y)$ to Eq.~\eqref{eqDD} associated
to either the obvious solution or to the exceptional one. Invert
the relations $x=f-rF$, $y=r$, $F=s-2fr$ to obtain $r=y$, $f=x+rF$,
$s=F+2rf$. Check if the resulting values for $r$, $s$, $f$ satisfy
the necessary conditions $r>20^5$, $r^{1.23} < s < r^3$, $f>10^7$.
Finally, apply the Baker--Davenport lemma to each solution surviving
the sieving step and derive a contradiction with a known fact.
We use this procedure for $| F| \le 2\cdot 10^6$. For the last step,
we performed computations with real numbers of 173 decimal digits.
In all cases, the outcome of the reduction step is $n=1$. This
contradicts~\cite[Proposition 2.2]{bcm}, where it was shown that
for no $D(-1)$--quadruple is $n<7$ possible.
\end{proof}
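A schematic version of the sieving step is sketched below (Python). For a given $F$, it runs through solutions of Eq.~\eqref{eqDD} lying in the class of the obvious solution by repeated multiplication with the unit $(2F^2+1,\,2F)$ of the Pell equation $U^2-(F^2+1)V^2=1$ (one standard way to enumerate a class of solutions, used here purely for illustration; the enumeration actually employed may differ), inverts them to candidate triples $(r,s,f)$, and retains those passing the necessary conditions. The exceptional solutions and the Baker--Davenport reduction are omitted.
\begin{verbatim}
def candidates(F, max_iter=25):
    # solutions (X, Y) of X^2 - (F^2+1) Y^2 = F^2 in the class of the
    # obvious solution, inverted to (r, s, f) and sieved
    D = F * F + 1
    u, v = 2 * F * F + 1, 2 * F        # fundamental solution of U^2 - D V^2 = 1
    x, y = F * F - F + 1, F - 1        # the obvious solution
    out = []
    for _ in range(max_iter):
        assert x * x - D * y * y == F * F
        r, f = y, x + y * F            # invert x = f - r F, y = r
        s = F + 2 * r * f              # invert F = s - 2 r f
        if r > 20 ** 5 and f > 10 ** 7 and r ** 1.23 < s < r ** 3:
            out.append((r, s, f))
        x, y = u * x + D * v * y, v * x + u * y    # next solution in the class
    return out

print(candidates(5)[:3])               # purely illustrative choice of F
\end{verbatim}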
Now we are in a position to halve the region where $\theta$ is confined.
\bepr{prb2cb3}
There is no $D(-1)$--quadruple with $4\, b^{2} \le c < b^3$.
\end{prop} \begin{proof}
The key new ingredient is the observation that for $ c < b^3$
one has
\[
2bn >A.
\]
Lemma~\ref{le2d} together with Lemma~\ref{le4d} imply
$A=f^2+b> 4r^2 F^2$. By Proposition~\ref{prDmic}, the right-hand
side is greater than $15\cdot 10^{12} b$. We have thus obtained
$n> 7\cdot 10^{12}$. However, explicit computations show that
for $c\ge 4 \, b^{2}$ one has $n<10^{12}$.
Turning to the proof of this claim, we note that it was
explicitly established in the case $\rho =1$ during the proof of
Lemma~3.8 from~\cite{bcm}. It is also shown there that when
$\rho =-1$ then $j\ge 0$, so Eq.~\eqref{eqA} gives
$2(bn^2-m^2) \ge An$, whence $2bn >A$.
\end{proof}
Further compression of the interval $[\theta^-,\theta^+]$ is
possible by examining other solutions of the master equation.
Let us define a sequence by the recurrence relation
\[
F_{i+1} = 2 fF_i-F_{i-1}, \quad i\ge 0, \quad F_{-1}=-s,
\quad F_0=-r.
\]
It is readily seen that $F_1=F$ and for any $i\ge -1$ it holds
\beq{eqF}
F_i^2-2fF_iF_{i+1}+F_{i+1}^2=f^2.
\end{equation}
Moreover, for $i\ge 0$ one has
\beq{eqFP}
F_i= P_{i-1}s -P_i r,
\end{equation}
where $P_{-1}=0$, $P_0=1$, and $P_{i+1}=2fP_{i}-P_{i-1}$ for any
nonnegative $i$.
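The identities~\eqref{eqF} and~\eqref{eqFP} follow by a routine induction; the short sketch below (Python, with the toy solution $(r,s,f)=(3,4,1)$ of the master equation, which of course does not come from a $D(-1)$--quadruple) checks them for the first few indices.
\begin{verbatim}
def check_identities(r, s, f, imax=8):
    # (r, s, f) must satisfy the master equation r^2 + s^2 = 2frs + f^2
    assert r * r + s * s == 2 * f * r * s + f * f
    F = [-s, -r]                       # F_{-1}, F_0
    P = [0, 1]                         # P_{-1}, P_0
    for _ in range(imax):
        F.append(2 * f * F[-1] - F[-2])
        P.append(2 * f * P[-1] - P[-2])
    for k in range(len(F) - 1):        # Eq. (eqF)
        assert F[k] ** 2 - 2 * f * F[k] * F[k + 1] + F[k + 1] ** 2 == f * f
    for k in range(1, len(F)):         # Eq. (eqFP), i.e. F_i = P_{i-1} s - P_i r
        assert F[k] == P[k - 1] * s - P[k] * r
    return True

print(check_identities(3, 4, 1))
\end{verbatim}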
As we shall see shortly, all terms of the sequence $(F_i)_{i\ge 1}$
have properties similar to those established above for $F=F_1$.
First we argue that all $F_i$ are nonzero. In view of Theorem~\ref{tef},
it is sufficient to prove the next result.
\bel{lenz}
Assume $F_i=0$ for some $i\ge 1$. Then $r=P_{i-1}f$ and $s=P_{i}f$.
\end{lemma} \begin{proof}
Since $F_i=0$ is tantamount to $sP_{i-1}=rP_{i}$, Eq.~\eqref{ecrs}
becomes
\beq{eq9rev}
(P_{i}^2+P_{i-1}^2-2fP_{i-1}P_{i})r^2=P_{i-1}^2f^2.
\end{equation}
As the expression within parentheses is
\[
P_{i}^2+P_{i-1}^2-2fP_{i-1}P_{i}= P_{i}^2-P_{i-1}P_{i+1}=P_{0}^2-P_{-1}P_{1}=1,
\]
the equality~\eqref{eq9rev} implies $r=P_{i-1}f$, whence $s=P_{i}f$.
\end{proof}
Note that from $F_i^2-F_{i-1} F_{i+1}=f^2$, for $i\ge 1$ one gets by induction
\[
F_{i} \sim \frac{1}{4}r^{i\theta-i-1} -r^{i+1-i\theta},
\]
so that
\[
F_{i} \sim \left\{ \begin{array}{rl}
\frac{1}{4} r^{i\theta - i-1}, & \mbox{ if} \quad \theta>(i+1)/i,
\\
- r^{i+1-i\theta}, & \mbox{ if} \quad \theta<(i+1)/i.
\end{array} \right.
\]
A reasoning similar to the proof of Lemma~\ref{le2d} yields
the following result.
\bel{le2f}
Assume $i\ge 0$. If $F_iF_{i-1} <0$, then $f>-2F_iF_{i-1}$.
If $F_{i-1}F_{i+1} >0$, then $F_i^2=F_{i-1}F_{i+1} +f^{2}>F_{i-1}F_{i+1} +10^{14}$.
\end{lemma} \begin{proof}
The desired inequalities are obtained by rewriting the master
equation in the equivalent forms $F_i^2+F_{i-1}^2 =f^2+2fF_iF_{i-1}$
and $F_i^2-F_{i-1}F_{i+1}=f^2$ and taking into account Proposition~\ref{prexp}.
\end{proof}
\bepr{prFmic}
For any $D(-1)$--quadruple it holds $|F_i| > 2\cdot 10^6$ for $2\le i\le 5$.
\end{prop} \begin{proof}
The algorithm described in the proof of Proposition~\ref{prDmic}
can be adapted for the present context. One necessary modification
is in the second step: now the inversion of the equations
$x=f+F_{i-1}F_i$, $y=F_{i-1}$, $F_i=P_{i-1}s -P_i r$ gives
$r=P_{i-2} F_i -P_{i-1}y$, $s=P_{i-1} F_i -P_{i}y$, $f=x-F_iy$,
because $P_{i-1}^2-P_{i-2}P_i=1$. The resulting values have to
satisfy $r^{1.23} < s < r^{2.05}$ by Theorem~\ref{prale} and
Proposition~\ref{prb2cb3}.
\end{proof}
The last two results have the following consequence.
\bec{coFpo}
If $c<4b^2$, then $F_i< -10^7$ for $-1 \le i\le 5$.
\end{cor} \begin{proof}
For $F_{-1}=-s< -r^{1.23}$ and $F_0=-r$, the desired conclusion follows
from Proposition~\ref{prexp} in conjunction with Lemma~\ref{le1d} c).
Explicit computations show that for any hypothetical $D(-1)$--quadruple
one has $f<3\cdot 10^{12}$. Together with Lemma~\ref{le2f} and
Proposition~\ref{prFmic}, this implies $F_i < -10^7$ for $1\le i \le 5$.
\end{proof}
A last new ingredient in the proof of our main result is
obtained by applying a specialization of the binomial theorem
\[
0< u< 1 \Longrightarrow \sqrt{1+u} =1+\frac{1}{2}u -\frac{1}{8} u^2 + \frac{L}{16}u^3,
\quad \mathrm{where} \quad 0<L<1,
\]
to the formula
\[
f=rs\left(-1+\sqrt{1+r^{-2} +s^{-2}} \right).
\]
It is perhaps worth mentioning that $L$ depends on $u$, but the only
property of the function $L$ used below is its boundedness.
When one uses the resulting expression
\[
2f=\frac{s}{r} +\frac{r}{s} -\frac{s}{4r^3}-\frac{1}{2rs} -\frac{r}{4s^3}
+\left( \frac{s}{8r^5}+\frac{3}{8r^3s} +\frac{3}{8rs^3} + \frac{r}{8s^5}\right) L
\]
in $F_1=s-2fr$, it gives
\[
F_1=\frac{s}{4r^2}-\frac{r^2}{s} +\frac{1}{2s} +\frac{r^2}{4s^3}
+\left(-\frac{s}{8r^4}-\frac{3}{8r^2s}-\frac{3}{8s^3} -\frac{r^2}{8s^5}\right) L.
\]
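As a sanity check on the bookkeeping, the following sketch (Python, with small illustrative values of $r$ and $s=r^\theta$ far below the actual lower bounds, used only to test the algebra) verifies that $F_1$ differs from the explicit part of the expansion by a quantity lying strictly between the coefficient of $L$ and $0$.
\begin{verbatim}
from math import sqrt

r, theta = 100.0, 1.7      # illustrative values, only to test the algebra
s = r ** theta

f = (s * s + r * r) / (r * s + sqrt(r * r * s * s + r * r + s * s))
F1 = (f * f - r * r) / s   # equals s - 2*f*r by Eq. (ec2d)

main = s/(4*r**2) - r**2/s + 1/(2*s) + r**2/(4*s**3)
Lpart = s/(8*r**4) + 3/(8*r**2*s) + 3/(8*s**3) + r**2/(8*s**5)

# The expansion above states F1 = main - L*Lpart with 0 < L < 1:
assert -Lpart < F1 - main < 0
print(F1 - main, -Lpart)
\end{verbatim}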
Similarly, from $F_2=2fF_1+r$ one gets
\begin{align*}
F_2 =\frac{s^2}{4r^3} & - \frac{r^3}{s^2} -\frac{s^2}{16r^5} +\frac{1}{r}
-\frac{1}{4r^3} +\frac{5r}{4s^2} -\frac{3}{8rs^2} +\frac{r^3}{2s^4}
-\frac{r}{4s^4}-\frac{r^3}{16s^6} \\
&+ LH_2 +L^2 J_2,
\end{align*}
with
\[
-\frac{s^2}{r^5}< H_2 < 0 , \quad -\frac{s^2}{r^9}< J_2 <0.
\]
The recurrence relation $F_{i+1} = 2 fF_i-F_{i-1}$ together with
the chain of inequalities
$s> r^{1.23} > 20^{1.15}r >31 r$ gives polynomial expressions in $L$ of
the form
\begin{align*}
F_3= & \frac{(16r^4 - 8r^2 + 1)s^3}{64r^8} + \frac{(32r^4 - 22r^2 + 3)s}{32r^6}
+ \frac{128r^4 - 96r^2 + 15}{64r^4s} \\
& {} +\frac{-16r^6 + 32r^4 - 26r^2 + 5}{16 r^2 s^3} +\frac{48r^4 - 56r^2 + 15}{64 s^5}
+ \frac{-6r^4 + 3r^2}{32 s^7} +\frac{r^4}{64 s^9}\\
& {} + L H_3+L^2 J_3 +L^3 K_3
\end{align*}
with
\[
-\frac{17s^3}{128r^6}< H_3 < 0 , \quad -\frac{9s^3}{256r^{10}}< J_3 <0,
\quad -\frac{s^3}{256r^{14}}< K_3 <0,
\]
and
\begin{align*}
F_4= & \frac{(64r^6 - 48r^4 + 12r^2 - 1)s^4}{256r^{11}} + \frac{(32r^6 - 36r^4 + 11r^2 - 1)s^2}{32r^9}
\\
& {} + \frac{128r^6 - 192r^4 + 69r^2 - 7}{64r^7} +\frac{96r^6 - 144r^4 + 60r^2 - 7}{32 r^5 s^2} \\
& {} +\frac{-128r^8 + 352r^6 - 504r^4 + 250r^2 - 35}{128 r^3 s^4}
+ \frac{32r^6 - 60r^4 + 39r^2 - 7}{32 rs^6} \\
& {} + \frac{-24r^5+ 27r^3 - 7r}{64 s^8} + \frac{2r^5 - r^3}{32 s^{10}} -\frac{r^5}{256s^{12}}\\
& {} + L H_4+L^2 J_4 +L^3 K_4 +L^4 M_4
\end{align*}
with
\[
-\frac{17s^4}{128r^7}< H_4 < 0 , \quad -\frac{25s^4}{256r^{11}}< J_4 <0,
\]
\[
-\frac{s^4}{128r^{15}} < K_4 <0, \quad -\frac{s^4}{2048r^{19}}< M_4 <0.
\]
In view of Corollary~\ref{coFpo}, it is clear that Theorem~\ref{tede}
is established as soon as we prove the next result.
\bepr{prsec}
Let $(1,b,c,d)$ be a $D(-1)$--quadruple with $1<b<c<d$,
$b=r^2+1$, $c=s^2+1$, and $s=r^\theta$.
Then the following statements hold:
\emph{a)} If $1.64 \le \theta <2.05$, then $|F_1| < 7\cdot 10^6$.
\emph{b)} If $1.40 \le \theta <1.64$, then $|F_2| < 3\cdot 10^6$.
\emph{c)} If $1.30 \le \theta <1.40$, then $|F_3| < 6\cdot 10^6$.
\emph{d)} If $1.23 \le \theta <1.30$, then $|F_4| < 2\cdot 10^6$.
\end{prop}
In its proof we use an elementary fact, proved here for the sake of
completeness.
\bel{leteta}
Keep the notation from Proposition~\ref{prsec}. For $1.2< \theta <2.05$
one has
\[
0 < \theta -\frac{\log c}{\log b} < \frac{1}{r^2}.
\]
\end{lemma}\begin{proof}
The left inequality follows directly from Bernoulli's inequality.
Indeed, if $\eta =\log c/\log b $, then
\[
c=r^{2\theta}+1 =(r^2+1)^\eta =r^{2\eta} \bigl( 1+r^{-2} \bigr)^\eta
> r^{2\eta} \bigl( 1+ \eta r^{-2} \bigr) > r^{2\eta} +1.
\]
In view of the well-known fact $b^{-1} < \log b -\log r^2 <r^{-2}$,
the right inequality is a consequence of $h(r^2)>\theta$, where
\[
h(x)=\frac{1}{x}+\log x+\frac{x}{x^\theta+1}.
\]
The numerator of $h'$ is found to be
$g(x)=(x-1)(x^\theta+1)^2+(1 -\theta) x^{\theta+2}+x^2$,
so that
\begin{align*}
g''(x)=(4\theta^2+2\theta)x^{2\theta -1} & -(4\theta^2-2\theta)x^{2\theta -2}
-(\theta^3+2\theta^2 -\theta -2)x^\theta \\
& +(2\theta^2+2\theta)x^{\theta -1}-(2\theta^2-2\theta)x^{\theta -2} +2.
\end{align*}
The sum of the last three terms in the above expression is obviously
positive and it is easily checked that the same is true for the sum of
the other terms. Therefore, for $x>2$ and $\theta <2.05$ one has
$g'(x)> g'(2)> g'(1) =8-\theta -\theta^2>0$ and
$g(x) >g(2)>(6-3\theta)2^\theta +\theta +6>0$. We conclude that the
function $h$ is increasing, so $h(r^2) > 13\log 10 >\theta$.
\end{proof}
\noindent
\emph{Proof of Proposition~\ref{prsec}.}
a)
As $\theta <2.05$, we can bound $F_1$ from above as follows:
\[
F_1 < \frac{s}{4r^2}-\frac{r^2}{s} +\frac{1}{2s} +\frac{r^2}{4s^3}
< \frac{s}{4r^2} <0.25 \, r^{0.05}.
\]
By Theorem~\ref{prale}, for $1.64\le \theta < 2.05$ it holds
$b< 10^{50}$. Therefore,
\[
F_1 < 0.25 \, \bigl( 10^{25} \bigr)^{0.05} < 5.
\]
We bound $F_1$ from below quite similarly:
\[
F_1> \frac{s}{4r^2}-\frac{r^2}{s} +\frac{1}{2s} +\frac{r^2}{4s^3}
-\frac{s}{8r^4}-\frac{3}{8r^2s}-\frac{3}{8s^3} -\frac{r^2}{8s^5}
> -\frac{r^2}{s} \ge -r^{0.36}.
\]
Our program for computing an absolute upper bound for $b$
iterates over $\log c/\log b$ rather than over $\theta$, which explains the
need for Lemma~\ref{leteta}. The computations show $b<10^{38}$
when $\theta$ is in the range $1.639< \theta <2.05$, so that
\[
F_1> -\bigl( 10^{19} \bigr)^{0.36} > -7\cdot 10^6.
\]
b) Under the current hypothesis we get
\[
F_2 < \frac{s^2}{4r^3} - \frac{r^3}{s^2} +\frac{2}{r} < \frac{s^2}{4r^3}
= 0.25 \, r^{2\theta -3}< 0.25 \, r^{0.28}.
\]
Using the bound on $b$ stated in Theorem~\ref{prale} results
in a bound on $F_2$ outside the desired range. Therefore we
split the interval where $\theta$ takes its values.
From the output of our program for computation of an absolute upper
bound for $b$ we see that $b<10^{49.3}$ when $1.499< \theta <1.64$,
so that
\[
F_2 < 0.25 \, \bigl( 10^{24.65} \bigr)^{0.28} < 3\cdot 10^6 \quad \mbox{when {$1.5 \le \theta <1.64$}}.
\]
For $1.40\le \theta <1.50$ one gets at once
\[
F_2< 0.25 \, r^0<1.
\]
We similarly bound $F_2$ from below:
\[
F_2 >\frac{s^2}{4r^3} - \frac{r^3}{s^2} -\frac{s^2}{16r^5}
-\frac{s^2}{r^5} -\frac{s^2}{r^9} > - \frac{r^3}{s^2} = -r^{3-2\theta}.
\]
When $1.5\le \theta <1.64$, this gives
\[
F_2 > -1,
\]
while on the subinterval $1.40\le \theta <1.50$ it implies
\[
F_2 > -r^{0.2} > -\bigl( 10^{31.25} \bigr)^{0.2} > -2\cdot 10^6
\]
because $b<10^{62.5}$ on this subinterval.
c) From the expression for $F_3$ we first obtain
\[
F_3< \frac{s^3}{4r^4}+\frac{s}{r^2} +\frac{2}{s}-\frac{r^4}{s^3} +\frac{2r^2}{s^3}
+\frac{3r^4}{4s^5} <\frac{s^3}{4r^4}+\frac{2s}{r^2}-\frac{r^4}{s^3}<\frac{s^3}{4r^4}.
\]
As seen from Theorem~\ref{prale}, one has $b< 6.26\cdot 10^{73}$ when
$ \theta \ge 1.299$. Therefore, the upper bound for $F_3$ just obtained
can be bounded from above as follows:
\[
0.25 \, r^{0.2}< 0.25 \, \left(62.6^{0.5}\cdot 10^{36} \right)^{0.2} < 6\cdot 10^6.
\]
To obtain a lower bound for $F_3$, we can ignore all fractions but the first, the fourth
and the sixth in its free term and replace the coefficients of positive powers of
$L$ by their respective lower bounds. We thus get
\[
F_3 > \frac{3s^3}{16r^4}+\frac{s}{2r^2}-\frac{r^4}{s^3}-\frac{3r^4}{16s^7}
-\frac{17s^3}{128r^6} -\frac{9s^3}{256r^{10}} -\frac{s^3}{256r^{14}}
> \frac{s}{2r^2}-\frac{r^4}{s^3}-\frac{3r^4}{16s^7} > -\frac{r^4}{s^3},
\]
whence
\[
F_3> -r^{0.1} > -\left(62.6^{0.5}\cdot 10^{36} \right)^{0.1} > -5000.
\]
d) We similarly see that it holds
\[
F_4< \frac{s^4}{4 r^5} < 0.25 \, r^{0.2}
< 0.25 \, \left(10^{34.5} \right)^{0.2} < 2 \cdot 10^6
\]
and
\[
F_4> -\frac{r^5}{s^4} > -r^{0.08} > - \left(10^{34.5} \right)^{0.08} > -600.
\]
Proposition~\ref{prsec} being established, the proof of the nonexistence
of $D(-1)$--quadruples is complete.
\textbf{Acknowledgments.} The results reported in this paper
would not have been obtained without the computations performed
on the computer network of IRMA and Department of Mathematics
and Computer Sciences of Universit\'e de Strasbourg. The authors
are grateful to Ryotaro Okazaki and Yasutsugu Fujita
for drawing our attention to Matveev's often overlooked result from~\cite{Mat98}.
\end{document} |
\begin{document}
\preprint{AIP/123-QED}
\title[]{Tangency bifurcation of invariant manifolds in a slow-fast system}
\author{Ian Lizarraga}
\email{[email protected].}
\affiliation{Center for Applied Mathematics, Cornell University, Ithaca, NY 14853}
\date{\today}
\begin{abstract}
We study a three-dimensional dynamical system in two slow variables and one fast variable. We analyze the tangency of the unstable manifold of an equilibrium point with ``the'' repelling slow manifold, in the presence of a stable periodic orbit emerging from a Hopf bifurcation. This tangency heralds complicated and chaotic mixed-mode oscillations. We classify these solutions by studying returns to a two-dimensional cross section. We use the intersections of the slow manifolds as a basis for partitioning the section according to the number and type of turns made by trajectory segments. Transverse homoclinic orbits are among the invariant sets serving as a substrate of the dynamics on this cross-section. We then turn to a one-dimensional approximation of the global returns in the system, identifying saddle-node and period-doubling bifurcations. These are interpreted in the full system as bifurcations of mixed-mode oscillations. Finally, we contrast the dynamics of our one-dimensional approximation to classical results of the quadratic family of maps. We describe the transient trajectory of a critical point of the map over a range of parameter values.
\end{abstract}
\pacs{ 05.45.-a, 05.45.Ac}
\keywords{Singular Hopf bifurcation, tangency bifurcation of invariant manifolds, mixed-mode oscillations}
\maketitle
\begin{quotation}
We study a three-dimensional multiple timescale system in five parameters. A startling variety of behaviors can be identified as its five parameters are varied. Organizing this variety are the interactions between classical invariant manifolds (including fixed points, periodic orbits, and their (un)stable manifolds) and locally invariant slow manifolds. Here we focus on the interaction between the two-dimensional unstable manifold of a saddle-focus equilibrium point and a two-dimensional repelling slow manifold, in the presence of a stable periodic orbit of small amplitude.
The images of global return maps, defined on carefully chosen two-dimensional cross-sections, are organized by the interactions of the attracting and repelling slow manifolds with these cross-sections. They are also influenced by the basin of attraction of the periodic orbit. We construct a symbolic map which partitions one such section according to the number and type of turning behaviors of the corresponding trajectories. We locate transverse homoclinic orbits to saddle points. On another cross-section, global returns are well-approximated by one-dimensional, nearly unimodal maps. We show that saddle-node bifurcations of periodic orbits and period-doubling cascades occur. Finally, we describe the dynamics of the critical point of the return map at carefully chosen parameters.
Taking a broader view, our numerical results continue to point to the fruitful connections that exist between multiple-timescale flows and low-dimensional maps.
\end{quotation}
\section{\label{sec:intro} Introduction}
We study slow-fast dynamical systems of the form
\begin{eqnarray*}
\eps\dot{x} &=& f(x,y,\eps)\\
\dot{y} &=& g(x,y,\eps),
\end{eqnarray*}
where $x \in R^m$ is the {\it fast} variable, $y \in R^n$ is the {\it slow} variable, $\eps$ is the {\it singular perturbation parameter} that characterizes the ratio of the timescales, and $f,g$ are sufficiently smooth. The {\it critical manifold} $C = \{f = 0\}$ is the manifold of equilibria of the fast subsystem defined by $\dot{x} = f(x,y,0)$. When $\eps > 0$ is sufficiently small, theorems of Fenichel\cite{fenichel1972} guarantee the existence of locally invariant {\it slow manifolds} that perturb from subsets of $C$ where the equilibria are hyperbolic. We may also project the vector field $\dot{y} = g(x,y,0)$ onto the tangent bundle $TC$. Away from folds of $C$, we may desingularize this projected vector field to define the {\it slow flow}. The desingularized slow flow is oriented to agree with the full vector field near stable equilibria of $C$. For sufficiently small values of $\eps$, trajectories of the full system can be decomposed into segments lying on the slow manifolds near $C$ together with fast jumps across branches of $C$. Trajectory segments lying near the slow manifolds converge to solutions of the slow flow as $\eps$ tends to 0.
\begin{figure*}
\caption{}
\label{fig:tangbif}
\end{figure*}
We now focus on the case of two slow variables and one fast variable ($m=1$, $n=2$). The critical manifold $C$ is two-dimensional and folds of $C$ form curves. Equilibria of the desingularized slow flow that lie on fold curves are called {\it folded singularities}. When the slow flow is two-dimensional, we use the terms ``folded node'', ``folded focus'', and ``folded saddle'' to denote folded singularities of node-, focus-, and saddle-type, respectively. In analogy to classical bifurcation theory, folded saddle-nodes are folded singularities having a zero eigenvalue. When they exist, folded saddle-nodes are differentiated by whether they persist as equilibria in the full system of equations. We are interested here in folded saddle-nodes of type II (FSNII), which are true equilibria of the full system. It can be shown that {\it singular Hopf bifurcations} occur generically at distances $O(\eps)$ from the FSNII bifurcation in parameter space.\cite{guckenheimer2008siam} At this bifurcation, a pair of eigenvalues of the linearization of the flow crosses the imaginary axis, and a small-amplitude periodic orbit is born at the bifurcation point.
Normal forms are used to study the local flow of full systems in neighborhoods of these folded singularities. Previous work by Guckenheimer\cite{guckenheimer2008chaos} analyzes the local flow maps and return maps of three-dimensional systems containing folded nodes and folded saddle-nodes. There, it is shown that the appearance of these folded singularities can give rise to complex and chaotic behavior. Characterizing the emergence of small-amplitude oscillations near a folded singularity has also been the subject of intense study. In the case of a folded node, Beno\^it\cite{benoit1990} and Wechselberger\cite{wechselberger2005} observed that the maximum number of small oscillations made by a trajectory passing through the folded node region is related to the ratio of eigenvalues of the folded node.
The present paper focuses on a dynamical system, defined in Sec. \ref{sec:shnf}, which contains folded singularities lying along a cubic critical manifold. The critical manifold serves as a global return mechanism. Parametric subfamilies of this dynamical system have served as important prototypical models of electrochemical oscillations, including the Koper model\cite{koper1992}. This system serves as a concrete, minimal example of a three-dimensional system having an $S$-shaped critical manifold as a global return mechanism. Trajectories leaving a neighborhood of the folded singularities do so by jumping between branches of the critical manifold, before ultimately being reinjected into the regions containing the folded singularities. This interplay between local and global mechanisms gives rise to {\it mixed-mode oscillations} (MMOs), which are periodic solutions of the dynamical system containing large and small amplitudes and a distinct separation between the two. These solutions may be characterized by their signatures, which are symbolic sequences of the form $L_1^{s_1}L_2^{s_2} \cdots$. This notation is used to indicate that a particular solution undergoes $L_1$ large oscillations, followed by $s_1$ small oscillations, followed by $L_2$ large oscillations, and so on. The distinction between `large' and `small' oscillations is dependent on the model. Nontrivial aperiodic solutions are referred to as {\it chaotic MMOs}, and may be characterized as limits of families of MMOs as the lengths of the signatures grow very large.
The classification of routes to MMOs with complicated signatures as well as chaotic MMOs continues to garner interest. Global bifurcations have been identified as natural starting points in this direction. Even so, the connection between these bifurcations and interactions of slow manifolds---which organize the global dynamics for small values of $\eps$---remains poorly understood. Period-doubling cascades, torus bifurcations,\cite{guckenheimer2008siam} and most recently, Shilnikov homoclinic bifurcations,\cite{guckenheimer2015} have been shown to produce MMOs with complex signatures. In the last case, one-dimensional approximations of return maps were used to analyze a Shilnikov bifurcation in a system which exhibits singular Hopf bifurcation.
In this paper, we use a similar technique to analyze a tangency of invariant manifolds. Our starting point is a study by Guckenheimer and Meerkamp\cite{guckenheimer2012siam}, which comprehensively classifies local and global unfoldings of singular Hopf bifurcation. We describe the changes in the phase space as the unstable manifold of the saddle-focus equilibrium point crosses the repelling slow manifold of the system. Our approach takes for granted the complicated crossings of these two-dimensional manifolds, instead focusing directly on the influence of these crossings on the global dynamics. The main tool in our analysis is the approximation of the two-dimensional return map by a map on an interval, which parametrizes trajectories beginning on the attracting slow manifold. We show that in the presence of a small-amplitude stable periodic orbit, the one-dimensional return map has a rich topology. The domain of the map is disconnected, with components separated by finite-length gaps. Intervals where the return map is undefined correspond to bands of initial conditions in the full system whose forward trajectories asymptotically approach the small-amplitude stable periodic orbit without making a large-amplitude passage. The first and second derivatives of the map grow very large outside of large subintervals where the map is unimodal.
We also interpret classical bifurcations of the one-dimensional map as routes to chaotic behavior in the full system. We show that a period-doubling cascade occurs in this map, which gives rise to chaotic MMOs. This cascade is reminiscent of the classical cascade in the family of quadratic maps, even though on small subsets, our return map is far from unimodal. Saddle-nodes of mixed-mode cycles, defined as fixed points of the return map with unit derivative, are also shown to occur. Finally, we identify a parameter set for which the full dynamics is close to the dynamics of a unimodal map with a critical point having dense forward orbit.
\section{\label{sec:shnf} Three-Dimensional System of Equations}
We study the following three-dimensional flow:
\begin{eqnarray}
\eps \dot{x} &=& y - x^2 - x^3 \nonumber\\
\dot{y} &=& z - x \label{eq:shnf}\\
\dot{z} &=& -\nu - ax -by - cz,\nonumber
\end{eqnarray}
where $x$ is the fast variable, $y,z$ are the slow variables, and $\eps,\nu,a,b,c$ are the system parameters. This system exhibits a singular Hopf bifurcation.\cite{braaksma1998,guckenheimer2008siam,guckenheimer2012dcds} The critical manifold is the S-shaped cubic surface $C = \{y = x^2 + x^3\}$ having two fold lines $L_0 := C \cap \{x = 0\}$ and $L_{-2/3} := C \cap \{ x = -2/3\}$. When $\eps > 0 $ is sufficiently small, nonsingular portions of $C$ perturb to families of slow manifolds: near the branches $C\cap \{x > 0\}$ (resp. $C \cap \{x < -2/3\}$), we obtain the {\it attracting slow manifolds} $S^{a+}_{\eps}$ (resp. $S^{a-}_{\eps}$) and near the branch $C\cap\{-2/3<x<0\}$ we obtain the {\it repelling slow manifolds} $S^r_{\eps}$. Nearby trajectories are exponentially attracted toward $S^{a\pm}_{\eps}$ and exponentially repelled from $S^r_{\eps}$. One derivation of these estimates uses the Fenichel normal form.\cite{jones1994} Within each family, these sheets are $O(\exp(-K/\eps))$ close for some $K>0$,\cite{jones1994,jones1995} so we refer to any member of a particular family as `the' slow manifold. This convention should not cause confusion.
We focus on parameters where forward trajectories beginning on $S^{a+}_{\eps}$ interact with a `twist region' near $L_0$, a saddle-focus equilibrium point $p_{eq}$, or both. A folded singularity $n =(0,0,0) \in L_0$ is the governing center of this twist region. The saddle-focus $p_{eq}$ has a two-dimensional unstable manifold $W^u$ and a one-dimensional stable manifold $W^s$. This notation disguises the dependence of these manifolds on the parameters of the system.
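All of the numerical experiments described below rest on integrations of Eq.~\eqref{eq:shnf}. For reference, a minimal integration sketch is given here (Python with SciPy; the parameter values are those quoted in the text, while the initial condition and integration time are merely illustrative). A stiff integrator is advisable because of the $1/\eps$ factor in the fast equation.
\begin{verbatim}
from scipy.integrate import solve_ivp

eps, nu, a, b, c = 0.01, 0.00648, -0.03, -1.0, 1.0   # values quoted in the text

def vf(t, w):
    x, y, z = w
    return [(y - x**2 - x**3) / eps,   # fast variable
            z - x,                     # slow variables
            -nu - a*x - b*y - c*z]

x0 = 0.27                              # start near S^{a+} above the fold
w0 = [x0, x0**2 + x0**3, 0.0]          # illustrative initial condition
sol = solve_ivp(vf, (0.0, 200.0), w0, method='LSODA', rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])
\end{verbatim}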
\section{Tangency bifurcation of invariant manifolds}
Guckenheimer and Meerkamp\cite{guckenheimer2012siam} drew bifurcation diagrams of the system \eqref{eq:shnf} in a two-dimensional slice of the parameter space defined by $\eps = 0.01$, $b = -1$, and $c = 1$. Codimension-one tangencies of $S^r_{\eps}$ and $W^u$ are represented in Figure 5.1 of their paper by smooth curves (labeled T) in $(\nu,a)$ space. For fixed $a$ and increasing $\nu$, this tangency occurs after $p_{eq}$ undergoes a supercritical Hopf bifurcation. A parametric family of stable limit cycles emerges from this bifurcation. Henceforth we refer to `the' small-amplitude stable periodic orbit $\Gamma$ to refer to the corresponding member of this family at a particular parameter set. The two-dimensional stable manifolds of $\Gamma$ interact with the other invariant manifolds of the system. Guckenheimer and Meerkamp identify a branch of period-doubling bifurcations as $\nu$ continues to increase after the first slow-manifold tangency. We show that the basin of attraction of the periodic orbit has a significant influence on the global returns of the system.
Fixing $a = -0.03$, the tangency occurs within the range $\nu \in \left[ 0.00647, 0.00648\right]$. The location of the tangency may be approximated by studying the asymptotics of orbits beginning high up on $S^{a+}_{\eps}$. Fix a section $\Sigma = S^{a+}_{\eps}\cap \{x = 0.27\}$. Before the tangency occurs, trajectories lying on and sufficiently near $W^u$ must either escape to infinity or asymptotically approach $\Gamma$; these trajectories cannot jump to the attracting branches of the slow manifold, as they must first intersect $S^r_{\eps}$ before doing so. Trajectories beginning in $\Sigma$ first flow very close to $p_{eq}$. As shown in Figure 1, these trajectories then leave the region close to $W^u$. We observe that before the tangency, $W^u$ forms a boundary of the basin of attraction of $\Gamma$. Therefore, all trajectories sufficiently high up on $S^{a+}_{\eps}$ must lie inside the basin of attraction (Figure \ref{fig:tangbif}a).
After the tangency has occurred, isolated trajectories lying in $W^u$ will also lie in $S^r_{\eps}$. These trajectories will bound sectors of trajectories which can now make large-amplitude passages. Trajectories within these sectors jump `to the left' toward $S^{a-}_{\eps}$ or `to the right' toward $S^{a+}_{\eps}$. Trajectories initialized in $\Sigma$ that leave neighborhoods of $p_{eq}$ near these sectors contain {\it canard} segments, which are solution segments lying along $S^r_{\eps}$. Examples of such trajectories are highlighted in green in Figure 1. We can now establish a dichotomy between those trajectories in $\Sigma$ that immediately flow to $\Gamma$ and never leave a small neighborhood of the periodic orbit, versus those that make a global return. In Figure \ref{fig:tangbif}b, only two of the thirty sample trajectories are able to make a global return. Near the boundaries of these subsets, trajectories can come arbitrarily close to $\Gamma$ before escaping and making one large return. Note however that such trajectories might still lie inside the basin of attraction of $\Gamma$, depending on where they return on $\Sigma$. Such trajectories escape via large-amplitude excursions at most finitely many times before tending asymptotically to $\Gamma$. We now focus on the parameter regime where the tangency has already occurred. In Figure 5.1 of the paper of Guckenheimer and Meerkamp, this corresponds to the region to the right of the $T$ (manifold tangency) curve.
\begin{figure}
\caption{}
\label{fig:retmap}
\end{figure}
\section{\label{sec:maps} Singular and Regular Returns}
\begin{figure}
\caption{}
\label{fig:retmap2}
\end{figure}
Approximating points on $\Sigma$ by their $z$-coordinates, the return map $R: \Sigma \to \Sigma$ is well-approximated by a one-dimensional map on an interval, also denoted $R$. In the presence of the small-amplitude stable periodic orbit $\Gamma$, we now compare our one-dimensional approximation to return maps in the case of folded nodes\cite{wechselberger2005} and folded saddle-nodes\cite{guckenheimer2008chaos,krupa2010}. Where the return map is defined, trajectories beginning in different components of the domain of $R$ make different numbers of small turns before escaping the local region. These subsets are somewhat analogous to the rotation sectors arising from twists due to a folded node.\cite{wechselberger2005} However, in the present case there is a folded singularity, a saddle-focus, and a small-amplitude periodic orbit. Each of these local objects plays a role in the twisting of trajectories that enter neighborhoods of the fold curve $L_0$.
When the small-amplitude stable periodic orbit exists, the domain of the return map is now disconnected, with components separated by finite-length gaps (Figure \ref{fig:retmap}). The gaps where $R$ is undefined correspond to those trajectories beginning on $S^{a+}_{\eps}$ that asymptotically approach $\Gamma$ without making a large-amplitude oscillation. The second difference concerns the extreme nonlinearity near the boundaries of the disconnected intervals where $R$ is defined (Figure \ref{fig:retmap2}a). Portions of the image lie below the local minima in these local concave segments, resulting in tiny regions near the boundaries where the derivative changes rapidly. These points arise from canard segments of trajectories resulting in a jump from $S^r_{\eps}$ to $S^{a+}_{\eps}$ and hence to $\Sigma$. Fixing the parameters and iteratively refining successively smaller intervals of initial conditions, this pattern of disconnected regions where the derivative changes rapidly seems to repeat up to machine accuracy. One consequence of this structure is the existence of large numbers of unstable periodic orbits, defined by fixed points of $R$ at which $|R'(z)| > 1$. This topological structure also appears to be robust to variations of the parameter $\nu$.
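A schematic recipe for tabulating this one-dimensional approximation is sketched below (Python, reusing \texttt{vf} and the parameters from the integration sketch of Sec.~\ref{sec:shnf}); the burn-in time, tolerances, and precise section are illustrative and differ from the choices used to produce the figures. Initial conditions whose forward trajectories are absorbed by $\Gamma$, or escape, yield no return and are reported as \texttt{None}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
# vf, eps, nu, a, b, c as in the integration sketch of Sec. II

def return_z(z0, t_burn=0.2, t_max=500.0):
    # schematic 1-D return map: start with z-coordinate z0 on the attracting
    # sheet, flow briefly off the section, then report z at the next
    # downward crossing of x = 0.27
    x0 = 0.27
    w0 = [x0, x0**2 + x0**3, z0]
    pre = solve_ivp(vf, (0.0, t_burn), w0, method='LSODA',
                    rtol=1e-10, atol=1e-12)
    hit = lambda t, w: w[0] - x0
    hit.terminal, hit.direction = True, -1.0       # downward crossings only
    sol = solve_ivp(vf, (t_burn, t_max), pre.y[:, -1], method='LSODA',
                    events=hit, rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][2] if sol.t_events[0].size else None

for z0 in np.linspace(0.02, 0.08, 4):              # illustrative grid
    print(round(float(z0), 3), return_z(float(z0)))
\end{verbatim}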
This complicated structure arises from the interaction between the basin of attraction of $\Gamma$, the twist region near the folded singularity and $W^{u,s}$. As an illustration of this complexity, consider an unstable fixed point $z \approx 0.05939079$ of the return map as defined in Figure \ref{fig:retmap2}(b), interpreted as an unstable periodic orbit in the full system of equations (Figure \ref{fig:retmap2}(c)-(d)). The orbit is approximately decomposed according to its interactions with the (un)stable manifolds of $p_{eq}$ and the slow manifolds. One possible forward-time decomposition of this orbit proceeds as follows:
\begin{itemize}
\item A segment (red) that begins on $S^{a+}_{\eps}$ and flows very close to $p_{eq}$ by remaining near $W^s$,
\item a segment (gray) that leaves the region near $p_{eq}$ along $W^u$, then jumping right from $S^r_{\eps}$ to $S^{a+}_{\eps}$,
\item a segment (green) that flows from $S^{a+}_{\eps}$ to $S^{r}_{\eps}$, making small-amplitude oscillations while remaining a bounded distance away from $p_{eq}$, then jumping right from $S^r_{\eps}$ to $S^{a+}_{\eps}$,
\item a segment (blue) that flows back down into the region near $p_{eq}$, making small oscillations around $W^s$, then jumping right from $S^r_{\eps}$ to $S^{a+}_{\eps}$,
\item a segment (magenta) with similar dynamics to the green segment, making small-amplitude oscillations while remaining a bounded distance away from $p_{eq}$, then jumping right from $S^r_{\eps}$ to $S^{a+}_{\eps}$, and
\item a segment (black) making a large-amplitude excursion by jumping left to $S^{a-}_{\eps}$, flowing to the fold $L_{-2/3}$, and then jumping to $S^{a+}_{\eps}$.
\end{itemize}
A linearized flow map can be constructed\cite{glendinning1984,silnikov1965} in small neighborhoods of the saddle-focus $p_{eq}$, which can be used to count the number of small-amplitude oscillations contributed by orbit segments approaching the equilibrium point. However, the small-amplitude periodic orbit and the twist region produce additional twists, as observed in the green and magenta segments of the example above.
\begin{figure}
\caption{}
\label{fig:z0}
\end{figure}
We will return to one-dimensional approximations of the return map in Sec. \ref{sec:ret}, but now we focus on two-dimensional maps, and show that we can illuminate key features of their small-amplitude oscillations. Let us fix a cross-section and define the geometric objects whose interactions organize the return dynamics. Define $\Sigma_0$ to be a compact subset of $\{ z = 0\}$ containing the first intersection (with orientation $\dot{z} >0$) of $W^s$ . Let $B_0$ denote the {\it immediate basin of attraction} of the stable periodic orbit $\Gamma$, which we define as the set of points in $\Sigma_0$ whose forward trajectories under the flow of Eq. \eqref{eq:shnf} asymptotically approach $\Gamma$ without returning to $\Sigma_0$, and let $\partial B_0$ denote its boundary. The periodic orbit itself does not intersect our choice of cross-section.
Since we wish to study trajectory segments that return to the cross-section, $B_0$ functions as an escape subset. Rigorously, the forward return map $R: \Sigma_0 \to \Sigma_0$ is undefined on the subset $B_0$, and points landing in $B_0$ under forward iterates of $R$ `escape'. Obviously the trajectories with initial conditions inside $\cup_{i=0}^{\infty}R^{-i}(B_0)$ are contained within the basin of attraction of $\Gamma$, and furthermore the $j$-th iterate of the return map $R^j$ is defined only on the subset $\Sigma_0 - \cup_{i=0}^{j-1} R^{-i}(B_0)$. Finally, we abuse notation slightly and denote by $S^{a+}_{\eps}$ (resp. $S^r_{\eps}$) the intersections of the corresponding slow manifolds with $\Sigma_0$. We also refer to the intersection of $S^{a+}_{\eps}$ (resp. $S^r_{\eps}$) with $\Sigma_0$ as the {\it attracting} (resp. {\it repelling}) {\it spiral} due to its distinctive shape (see Figure \ref{fig:z0}). The immediate basin of attraction $B_0$ is depicted by gray points in Figure \ref{fig:z0}(a). This implies that the basin of attraction of the periodic orbit contains at least a thickened spiral, which $S^{a+}_{\eps}$ intersects transversely in interval segments, accounting for the disconnected domains of the one-dimensional return maps.
The slow manifolds also intersect transversely. Segments of the attracting spiral can straddle both $B_0$ and the repelling spiral. In Fig. \ref{fig:z0}(b), we color initial conditions based on the maximum $y$-coordinate achieved by the corresponding trajectory before its return to $\Sigma_0$. Due to the Exchange Lemma, only thin bands of trajectories are able to remain close enough to $S^r_{\eps}$ to jump at an intermediate height.
We choose the maximum value of the $y$-coordinate to approximately parametrize the length of the canards. This parametrization heavily favors trajectories jumping left (from $S^r_{\eps}$ to $S^{a-}_{\eps}$) rather than right (from $S^r_{\eps}$ to $S^{a+}_{\eps}$), since trajectories jumping left can only return to $\Sigma_0$ by first following $S^{a-}_{\eps}$ to a maximal height, and then jumping from $L_{-2/3}$ to $S^{a+}_{\eps}$. This asymmetry is useful: in Figure \ref{fig:z0}, $S^r_{\eps}$ serves as a boundary between the (apparently discontinuous) blue and yellow regions, clearly demarcating those trajectories which turn right rather than left before returning to $\Sigma_0$. Summarizing, $\partial B_0$ and $S^r_{\eps}$ partition this section according to the behavior of orbits containing canards.
\begin{figure}
\caption{}
\label{fig:sao}
\end{figure}
Trajectories beginning in $\Sigma_0$ either follow $W^s$ closely and spiral out along $W^u$ or remain a bounded distance away from both the equilibrium point and $W^s$, instead making small-amplitude oscillations consistent with a folded node. Differences between these two types of small-amplitude oscillations have been observed in earlier work. The transition from the first kind of small-amplitude oscillation to the second is a function of the distance from the initial condition to the intersection of $W^s$ with the cross-section. Two initial conditions are chosen on a vertical line embedded in the section $\{z=0\}$, having the property that the resulting trajectory jumps right from $S^r_{\eps}$ at an intermediate height before returning to the section with orientation $\dot{z} < 0$. These initial conditions are found by selecting points in Figure \ref{fig:z0}(b) in the blue regions lying on a ray that extends outward from the center of the repelling spiral. The corresponding return trajectories are plotted in Figure \ref{fig:sao}. The production of small-amplitude oscillations is dominated by the saddle-focus mechanism: in the example shown, the red orbit exhibits four oscillations before the (relatively) large-amplitude return, whereas the blue orbit exhibits seven oscillations. We can select trajectories with increasing numbers of small-amplitude oscillations by picking points closer to $W^s \cap \{z=0\}$. A complication in this analysis is that jumps at intermediate heights, which are clearly shown to occur in these examples, blur the distinction between `large' and `small' oscillations in a mixed-mode cycle.
\begin{figure}
\caption{}
\label{fig:2dmap}
\end{figure}
\section{\label{sec:sao} Modeling Small-Amplitude Oscillations}
We now study some of the possible concatenations of small-amplitude oscillation segments as seen in Fig. \ref{fig:sao}. Tangencies of the vector field with the cross-section are given by curves which partition the section into disconnected subsets. The subset that does not contain the attracting and repelling spirals is mapped with full rank into the remaining subset in one return (Fig. \ref{fig:2dmap}(b)), allowing us to restrict our analysis to an invariant two-dimensional subset where the vector field is transverse everywhere.
Mixed-rank behavior occurs in this subset, as shown in Figure \ref{fig:2dmap}. Note that this set of figures is plotted at a slightly different parameter set from that in Figure \ref{fig:z0}, the main difference being that the line of vector field tangencies intersects a portion of the attracting spiral. Other features of the dynamics persist, including the fact that trajectories jumping left to $S^{a-}_{\eps}$ reach a greater maximal height ($y$-component) than the trajectories jumping right to $S^{a+}_{\eps}$. As shown in Figure \ref{fig:2dmap}(a), the region is partitioned according to three criteria: location with respect to the curve of tangency, location with respect to the repelling spiral (corresponding to left or right jumps), and winding number (defined later in this section).
We will refine this partition in order to create a symbolic model of the dynamics, but we can already state two significant results: \begin{itemize}
\item {\it Mixed-rank dynamics}. As shown in Fig. \ref{fig:2dmap}(c)-(d), the red and blue regions collapse to $S^{a+}_{\eps}$ within one return. This contraction includes those trajectories that return to the cross-section by first jumping right from $S^r_{\eps}$ to $S^{a+}_{\eps}$ at an intermediate height. Figure \ref{fig:2dmap}(b) shows that the yellow subset returns immediately to this low-rank region. The green subset returns either to the low-rank region or to the yellow region. But note that it does not intersect its image, and furthermore, it intersects the yellow region on a portion of the attracting spiral. Therefore, after at most two returns, the dynamics of the points beginning in $\Sigma_0$ (and which did not map to $B_0$) is characterized by the dynamics on the attracting spiral.
\item {\it Trajectories jumping left or right return differently}. Those trajectories jumping left to $S^{a-}_{\eps}$ return to a tiny segment very close to the center of the $S^{a+}_{\eps}$, as shown in Fig. \ref{fig:2dmap}(d). In contrast, the trajectories jumping right sample the entire spiral of $S^{a+}_{\eps}$, as shown in Fig. \ref{fig:2dmap}(c). Thus, multiple intermediate-height jumps to the right are a necessary ingredient in concatenating small- and medium-amplitude oscillations (arising from right jumps) between large-amplitude excursions (arising from left jumps).
\end{itemize}
We now construct a dynamical partition of the cross-section. First we define the winding of a trajectory. Let $s$ and $u$ denote a stable and unstable eigenvector, respectively, of the linearization of the flow at $p_{eq}$. Then consider a cylindrical coordinate system with basis $(u,s,n)$ centered at $p_{eq}$, where $n = u \times s$. The {\it winding} of a given trajectory is the cumulative angular rotation (divided by $2\pi$) of the projection of the trajectory onto the $(u,n)$-plane. The {\it winding number} (or simply {\it number of turns}) of a trajectory is the integer part of the winding.
The cumulative angular rotation depends on both the initial and stopping conditions of the trajectory, which in turn depend on the section used. Close to $p_{eq}$, the winding of a trajectory measures winding around $W^s$. This is desirable since most of the rotation occurs as trajectories enter small neighborhoods of $p_{eq}$ along $W^s$.
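A minimal sketch, assuming the trajectory is available as a sampled array and that approximate eigenvectors $u$ and $s$ have been computed, of how this winding can be evaluated numerically (all names are illustrative):
\begin{verbatim}
import numpy as np

def winding(traj, p_eq, u, s):
    # Cumulative rotation (in units of 2*pi) of the projection of the
    # trajectory onto the (u, n)-plane, where n = u x s, as defined above.
    # traj: (N, 3) array of points sampled along the trajectory.
    n = np.cross(u, s)
    u_hat = u / np.linalg.norm(u)
    n_hat = n / np.linalg.norm(n)
    rel = traj - p_eq                     # coordinates relative to p_eq
    a, b = rel @ u_hat, rel @ n_hat       # projection onto the (u, n)-plane
    angles = np.unwrap(np.arctan2(b, a))  # continuous angle along the trajectory
    return (angles[-1] - angles[0]) / (2 * np.pi)

# The winding number ("number of turns") is then int(winding(...)).
\end{verbatim}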
In Figure \ref{fig:turns}(a), we study the winding on a connected subset of the attracting spiral. On this connected subset we may parametrize the spiral by its arclength. The number of turns increases by approximately one whenever $S^{a+}_{\eps}$ intersects $S^{r}_{\eps}$ twice (these intersections occur in pairs since they correspond to bands of trajectories on $S^{a+}_{\eps}$ which leave the region by jumping left to $S^{a-}_{\eps}$). In between these intersections, there are gaps corresponding to regions where $S^{a+}_{\eps}$ intersects $B_0$.
\begin{figure}
\caption{\label{fig:turns}}
\end{figure}
Sets in the partition are defined according to each trajectory's winding number and jump direction. This partition uses the attracting and repelling spirals as a guide; small rectangles straddling the attracting spiral are contracted strongly transverse to the spiral and stretched along the attracting spiral, giving the dynamics a hyperbolic structure. In the next section we will compute a transverse homoclinic orbit, where this extreme contraction and expansion is shown explicitly.
We restrict ourselves to a subset $S \subset \Sigma_0$ where returns are sufficiently low-rank (i.e. the union of red and blue regions in Fig. \ref{fig:2dmap}(a)). Let $L_{n}\subset S$ (resp. $R_n \subset S$) denote those points whose forward trajectories make $n$ turns before jumping left to $S^{a-}_{\eps}$ (resp. right to $S^{a+}_{\eps}$). Then define $L_{tot} = \cup_{n=0}^{\infty} L_n$ and $R_{tot} = \cup_{n=0}^{\infty} R_n$. The collection $\mathcal{P} = \{L_i,R_j\}_{i,j=1}^{\infty}$ partitions $S$.
For a collection of sets $\mathcal{A}$, let $\sigma(\mathcal{A})$ denote the set of all one-sided symbolic sequences $x = x_0 x_1 x_2 \cdots$ with $x_i \in \mathcal{A}$. We can assign to each $x\in S$ a symbolic sequence in $\sigma(\mathcal{P}\cup \{S^c, B_0\})$, also labeled $x$. This sequence is constructed using the return map: $x = \{x_i\}$ is defined by $x_i = \iota(R^i(x))$, where $\iota: \Sigma_0 \to \mathcal{P}\cup\{B_0,S^c\}$ assigns to a point the element of the collection $\mathcal{P}\cup\{B_0,S^c\}$ that contains it. Note that some symbolic sequences have finite length, as $R$ is undefined over $B_0$. A portion of the partition is depicted in Fig. \ref{fig:turns}.
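A minimal sketch of this coding, assuming a numerical return map and a routine that assigns to a point its element of $\mathcal{P}\cup\{S^c,B_0\}$ (both are placeholders here):
\begin{verbatim}
def itinerary(x, return_map, classify, max_len=100):
    # Symbolic sequence x_i = iota(R^i(x)), truncated at max_len symbols.
    # classify(x) returns a label such as 'L7', 'R3', 'Sc' or 'B0'.
    symbols = []
    for _ in range(max_len):
        symbols.append(classify(x))
        if symbols[-1] == 'B0':   # R is undefined on B_0: the sequence ends here
            break
        x = return_map(x)
    return symbols
\end{verbatim}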
The results in figures \ref{fig:2dmap} and \ref{fig:turns} and the definition of $B_0$ constrain the allowed symbolic sequences. In particular:
\begin{itemize}
\item there exists a sufficiently large integer $N$ with $R(L_{tot}) \subset S^{a+}_{\eps} \cap \big(\cup_{n\geq N} (L_n \cup R_n) \cup B_0\big)$ (Figs. \ref{fig:2dmap}(d) and \ref{fig:turns}),
\item $R(R_{tot}) \subset S^{a+}_{\eps}\cap \Sigma_0$ (Fig. \ref{fig:2dmap}(c)),
\item $R(S^c) \subset S$ (Fig. \ref{fig:2dmap}(b)), and
\item the finite sequences are precisely those containing, and therefore ending in, $B_0$.
\end{itemize}
The first result implies that for any integer $n \geq 1$, the block $L_n \alpha_m$ (where $\alpha \in \{L,R\}$) is impossible when $m < N$, since $R(L_n)$ lands either in $B_0$ or in some $L_m$ or $R_m$ with $m \geq N$. For the parameter set used in Fig. \ref{fig:2dmap}, our numerics suggest a lower bound of $N = 13$. On the other hand, the second result reminds us that only right-jumping trajectories are able to sample the entire attracting spiral. The first two results then imply that blocks of type $R_i L_j$ or $R_i R_j$ are necessarily present in the symbolic sequences of orbits which concatenate small-amplitude oscillations with medium-amplitude oscillations as shown in Fig. \ref{fig:sao}: medium-amplitude oscillations arise precisely from those points on $\Sigma_0$ whose forward trajectories remain bounded away from the saddle-focus (i.e. those points in $\Sigma_0$ sufficiently far from the intersection of $W^s$ with $\Sigma_0$) and which jump right.
The first two results also imply that forward-invariant subsets lie inside the intersection of $S^{a+}_{\eps}$ with $\Sigma_0$. In terms of the full system, it follows that the trajectories corresponding to these points each contain segments which lie within a sheet of $S^{a+}_{\eps}$.
The attracting spiral and the nonsingular region $S^c$ have a nontrivial intersection. In view of the second result, it is possible for trajectories that we track to sometimes be mapped outside of the subset $S$ where our partition is defined. The third result implies that the symbolic sequence of a point $x\in S$ whose forward returns leave the subset $S$ must contain the block
\begin{eqnarray*}
x_{n_j-1} S^c x_{n_j},
\end{eqnarray*}
where the index $n_j$ is defined by the $j$-th instance when the orbit leaves $S$ and $x_{n_j-1} \in \mathcal{P}$. We can also constrain the possible symbols of $x_{n_j}$ as follows. Trajectories which have intersected $\Sigma_0$ in the singular region $S$ can only return to $S^c$ along the curve $S^{a+}_{\eps}$. Furthermore, our numerical results demonstrate that $R(S^c \cap S^{a+}_{\eps})$ nontrivially intersects subsets of $\mathcal{P}\cup\{B_0\}$ only in the subcollection $\mathcal{P}_c = \{L_3,L_4,L_5,R_3,R_4,R_5,B_0\}$. Therefore $x_{n_j} \in \mathcal{P}_c$ whenever $n_j$ is defined.
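The two block constraints just stated (the bound $m\geq N$ after a symbol $L_n$, and the restriction to $\mathcal{P}_c$ after $S^c$) can be checked on a finite symbolic word along the following lines; the string encoding of the symbols is an illustrative choice.
\begin{verbatim}
def admissible(word, N=13, after_Sc=('L3','L4','L5','R3','R4','R5','B0')):
    # word is a list of symbols such as ['R5', 'Sc', 'L4', 'R14', 'B0'].
    # Only the two constraints named in the text are checked here.
    for a, b in zip(word, word[1:]):
        if a == 'B0':                            # nothing can follow B_0
            return False
        if a[0] == 'L' and b[0] in 'LR' and int(b[1:]) < N:
            return False                         # forbidden block L_n alpha_m, m < N
        if a == 'Sc' and b not in after_Sc:
            return False                         # returns from S^c land in P_c
    return True
\end{verbatim}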
In view of the last result, for each $i \geq 1$ we define the $i$-th {\it escape subset} $E_i$ to be the set of length-$i$ sequences ending in $B_0$. Note that $E_i$ contains the symbol sequences of the points in $R^{-(i-1)}(B_0)$. Section \ref{sec:ret} provides a concrete numerical example of a point in $E_{n}$, where $n$ is at least 1284.
Let us summarize the main results of the symbolic dynamics. For points whose symbolic sequences are not eventually repeating (i.e. points which are neither fixed points nor members of cycles), the sequence either terminates in $B_0$, indicating that the trajectory tends asymptotically to the small-amplitude stable periodic orbit $\Gamma$, or is infinitely long. In either case, due to the apparent hyperbolicity of the map we can track neighborhoods of points until they leave the set $S$ where the partition is defined. However, those `lost' points on the orbit return to $S$ in one iterate and we can resume tracking them.
\section{\label{sec:homorbit2d} Invariant Sets of the Two-Dimensional Return Map}
\begin{figure}
\caption{\label{fig:fporbit}}
\end{figure}
The structure of the invariant sets of the two-dimensional return map is a key dynamical question, and is related to the intersection of the basin of attraction of the small-amplitude stable periodic orbit with $\Sigma_0$. In this section we focus on two types of invariant sets: fixed points and transverse homoclinic orbits.
Certain invariant sets of the map may be used to construct open sets of points all sharing the same initial block in their symbolic sequence. We briefly describe how the simplest kind of invariant set, a fixed point, implies that neighborhoods of points must have identical initial sequences of oscillations. In Figure \ref{fig:fporbit} we plot the saddle-type MMO corresponding to a saddle fixed point $p$ of the return map, whose location in the section $\{z = 0\}$ is plotted in Figures \ref{fig:turns} and \ref{fig:homorbit}. According to Figure \ref{fig:turns}, $p$ has symbolic sequence $R_5 R_5 R_5 \cdots$, in agreement with the time series shown in Figure \ref{fig:fporbit}(b). We also observe that the fixed point is sufficiently far away from $W^s$ (the stable manifold of the saddle-focus) that the oscillations remain bounded away from $p_{eq}$. Furthermore, the dynamics in small neighborhoods of $p$ is described by the linearization of the map $R$ near $p$. This implies that small neighborhoods of $p$ consist of points whose symbolic sequences begin with a block of $R_5$'s, where the length of this initial block can be made as large as desired by shrinking the neighborhood. We can relax the condition that this be the initial block by instead considering preimages of these neighborhoods.
From this case study we observe that arbitrarily long chains of small-amplitude oscillations can be constructed using immediate neighborhoods of fixed points, periodic points, and other invariant sets lying in $S^{a+}_{\eps} \cap \{z = 0\}$. These in turn correspond to complicated invariant sets in the full three-dimensional system. Consequently, the maximum number of oscillations produced by a periodic orbit having one large-amplitude return can be very large at a given parameter value, depending on the maximum possible number of returns to sections in the region containing these local mechanisms. This situation should be compared with earlier studies of folded nodes, in which trajectories with a given number of small-amplitude oscillations can be classified,\cite{wechselberger2005} and with the Shilnikov bifurcation in slow-fast systems, in which trajectories have unbounded numbers of small-amplitude oscillations as they approach the homoclinic orbit.\cite{guckenheimer2015}
In the present system, we observe numerically that the return map $R$ contracts two-dimensional subsets of the cross-section to virtually one-dimensional subsets of $S^{a+}_{\eps}$. Subsequent returns act on $S^{a+}_{\eps}$ by stretching and folding multiple times before further extreme contraction of points transverse to $S^{a+}_{\eps}$.
If horseshoes exist, the strong contraction will also make them appear numerically to break the diffeomorphic structure of the return map (although topologically the horseshoe is still given by two-dimensional intersections of forward images of sets with their inverse images under the return map). We can draw a useful analogy to the H\'enon family $H_{a,b}(x,y) = (1-ax^2+y,bx)$ when a strange attractor exists at parameter values $0 < b \ll 1$. The local structure of the strange attractor, outside of neighborhoods of its folds, is given by $C \times \mathbb{R}$, where $C$ is a Cantor set.
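For concreteness, a few lines of Python suffice to reproduce the flattening effect of a small Jacobian determinant in the H\'enon family; the parameter values below are illustrative and are not claimed to produce a strange attractor, only the extreme contraction onto a nearly one-dimensional set.
\begin{verbatim}
import numpy as np

def henon_orbit(a=1.4, b=0.01, n=20000, burn=1000):
    # Iterate H_{a,b}(x, y) = (1 - a*x**2 + y, b*x) and discard a transient.
    x, y = 0.1, 0.0
    pts = []
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= burn:
            pts.append((x, y))
    return np.array(pts)

# Since y_{k+1} = b*x_k and, for these illustrative values, the orbit remains
# bounded, every recorded point has |y| = O(b): the picture collapses onto a
# nearly one-dimensional set, which is the feature of the analogy used above.
\end{verbatim}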
\begin{figure}
\caption{\label{fig:homorbit}}
\end{figure}
Let us now provide a visual demonstration of these issues. Let $U$ be a small neighborhood of the saddle fixed point $p$ that we located in the previous section. In Fig. \ref{fig:homorbit} we plot $U$, $R(U)$, $U\cap R^{-1}(U)$, and $W^s(p)$ on the section $\Sigma_0$. The image is a nearly one-dimensional subset of $S^{a+}_{\eps}$ and the preimage is a thin strip which appears to be foliated by curves tangent to $S^r_{\eps}$. The subsets $R(U)$ and $R^{-1}(U)$ contain portions of $W^u(p)$ and $W^s(p)$, respectively. The transversal intersection of $R(U)$ with $W^s(p)$ is also indicated in this figure.
Numerically approximating the diffeomorphism $R^{-1}$ is a challenging problem. Trajectories which begin on the section and approach the attracting slow manifolds $S^{a\pm}_{\eps}$ in reverse time are strongly separated, analogous to the scenario where pairs of trajectories in forward time are strongly separated by $S^r_{\eps}$. This extreme numerical instability means that trajectories starting on the section and integrated backward in time often become unbounded. In order to compute $W^s(p)$, we therefore resort to a continuation algorithm which instead computes orbits in forward time. We take advantage of the singular behavior of the map to reframe this problem as a boundary value problem, with initial conditions beginning in a line on the section and ending `at' $p$. Beginning with a point $y_0$ along $W^s(p)$, we construct a sequence $\{y_0 , y_1, \cdots\}$ along $W^s(p)$ as follows.
(C1) {\it Prediction step}. Let $w_i = y_{i-1} + h v_i$, where $h$ is a fixed step-size and $v_i$ is a numerically approximated tangent vector to $W^s(p)$ at $y_{i-1}$.
(C2) {\it Correction step}. Construct a line segment $L_i$ of initial conditions perpendicular to $v_i$. Use a bisection method to locate a point $y_i \in L_i$ such that $|R(y_i) - p| < \eps$, where $\eps$ is a prespecified tolerance.
The relevant branch of $W^s(p)$ which intersects $R(U)$ lies inside the nearly singular region of the return map, so the segment $L_i$ can be chosen small enough that $R(L_i)$ is, to double-precision accuracy, a segment of $S^{a+}_{\eps}$ which straddles $p$. This justifies our correction step above.
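A minimal sketch of steps (C1)-(C2), assuming a numerical return map \texttt{R} on the two-dimensional section, the saddle fixed point \texttt{p}, a known point \texttt{y0} of $W^s(p)$ and an approximate tangent direction \texttt{v0} are available; the step size and tolerance are illustrative, and a simple grid-refinement search stands in for the bisection of the correction step.
\begin{verbatim}
import numpy as np

def continue_stable_manifold(y0, v0, R, p, h=1e-3, eps=1e-8, n_steps=50):
    pts = [np.asarray(y0, float)]
    v = np.asarray(v0, float) / np.linalg.norm(v0)
    for _ in range(n_steps):
        w = pts[-1] + h * v                      # (C1) predict along the tangent
        t = np.array([-v[1], v[0]])              # direction of the segment L_i
        dist = lambda s: np.linalg.norm(R(w + s * t) - p)
        s_vals = np.linspace(-h, h, 11)          # (C2) correct: refine along L_i
        for _ in range(40):
            k = int(np.argmin([dist(s) for s in s_vals]))
            s_vals = np.linspace(s_vals[max(k - 1, 0)],
                                 s_vals[min(k + 1, 10)], 11)
        y_new = w + s_vals[5] * t
        if dist(s_vals[5]) > eps:
            break                                # tolerance not met; stop
        v = (y_new - pts[-1]) / np.linalg.norm(y_new - pts[-1])
        pts.append(y_new)
    return np.array(pts)
\end{verbatim}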
It is usually not sufficient to assert the existence of a transverse homoclinic orbit from the intersection of the image sets. But in the present case, these structures are organized by the slow manifolds $S^{a+}_{\eps}$ and $S^r_{\eps}$. The strong contraction onto $S^{a+}_{\eps}$ in forward time implies that the discrete orbits comprising $W^u(p)$ must also lie along this slow manifold. The unstable manifold $W^u(p)$ lies inside a member of the $O(\exp(-c/\eps))$-close family which comprises $S^{a+}_{\eps}$, so the forward images serve as good proxies for subsets of $W^u(p)$ itself. On the other hand, when $U$ is sufficiently small, its preimage $R^{-1}(U)$ appears to be foliated by a family of curves tangent to $S^r_{\eps}$, such that one of the curves contains $W^s(p)$ itself.
The Smale-Birkhoff homoclinic theorem\cite{birkhoff1950,smale1965} then implies that there exists a hyperbolic invariant subset on which the dynamics is conjugate to a subshift of finite type. Note that while we expect fixed points to lie in $S^{a+}_{\eps}$ due to strong contraction, we do not expect fixed points to also lie in $S^r_{\eps}$ in the generic case. We end this discussion by commenting on the apparent degeneracy of the two-dimensional sets $U,R(U),$ and $R^{-1}(U)$. A classical proof of the Smale-Birkhoff theorem uses the set $V = R^k(U) \cap R^{-m}(U)$ (where $k,m\geq 0$ are chosen such that $V$ is nonempty) as the basis for constructing the Markov partition on which the shift is defined.\cite{guckenheimer1983} Here, $V$ is well-approximated by a curve segment.
Does a positive-measure set of initial conditions approach this hyperbolic invariant set? In other words, is the set an attractor? While it is difficult to assess the invariance of open sets with finite-time computations, our numerics support the conjecture that most initial conditions lying outside $B_0$ tend to the chaotic invariant set without tending asymptotically to $\Gamma$. In Figs. \ref{fig:homorbit}(b)-(c), we study the eventual fates of a grid of initial conditions beginning on $\Sigma_0$. Fig. \ref{fig:homorbit}(b) shows that even after a relatively long integration time of $t = 600$, most initial conditions in $B_0^c$ are able to return repeatedly to $\Sigma_0$. However, it may simply be that the measure of $(R^{-i} (B_0))^c$ decays extremely slowly to $0$ as $i$ tends to infinity. In Fig. \ref{fig:homorbit}(c), we plot the last recorded intersection with $\Sigma_0$ of those trajectories that do not tend to $\Gamma$ within $t = 600$. Even with a relatively sparse set of $10^4$ points, we observe that these intersections sample much of the attracting spiral. Many of the points are not visible at the scale of the figure because they sample the segment shown in Fig. \ref{fig:2dmap}(d) (i.e. the penultimate intersections resulted in the trajectory jumping left to $S^{a-}_{\eps}$).
We now turn to a section transverse to $S^{a+}_{\eps}$ and approximate returns to this section by a one-dimensional map. The advantage of this low-dimensional approximation is that we can readily identify classical bifurcations and routes to complex behavior. We may interpret invariant sets of these one-dimensional returns (such as fixed points, periodic orbits, and more complicated sets) as large-amplitude portions of the mixed-mode oscillations in the full system.
\begin{figure}
\caption{\label{fig:sn}}
\end{figure}
\section{\label{sec:bif} Bifurcations of the One-Dimensional Return Map}
Fixed points of a return map defined on the section $\Sigma_+ = \{x = 0.3\}$ are interpreted in the full system as the locations of mixed-mode oscillations, formed from trajectories making one large-amplitude passage after interacting with the local mechanisms near $L_0$. Similarly, periodic orbits of the (discrete) return map can be used to identify mixed-mode oscillations having more than one large-amplitude passage. We demonstrate common bifurcations associated with these invariant objects. First we locate a saddle-node bifurcation of periodic orbits, in which a pair of orbits coalesce and annihilate each other at a parameter value.
Figure \ref{fig:sn} demonstrates the existence of a fixed point $z = R(z)$ with unit derivative as $\nu$ is varied within the interval $\left[0.00801, 0.00802\right]$ (remaining parameters are as in Figure \ref{fig:retmap}). This parameter set lies on a generically codimension one branch in the parameter space. We also note that saddle-node bifurcations serve as a mechanism to produce stable cycles in the full system, which in turn may undergo torus bifurcations and period-doubling cascades as a parameter is varied.
\begin{figure}
\caption{\label{fig:bifdiag}}
\end{figure}
\begin{figure}
\caption{\label{fig:crit}}
\end{figure}
The beginning of a period-doubling cascade is identified in the return map $R$ as $\nu$ is varied in the interval $\left[ 0.008685, 0.0087013\right]$ (Figure \ref{fig:bifdiag}a). Within this range, period-3, period-5, and period-6 parameter windows are readily identifiable in Figure \ref{fig:bifdiag}b. The local unimodality of the return map suggests that our ($\nu$-parametrized) family of return maps share some universal properties with maps of the interval that exhibit period-doubling cascades,\cite{feigenbaum1978,coullet1978} despite the nonlinearity at the right boundary of the interval observed in Figure \ref{fig:retmap2}. The cascading structure is clearly robust to small boundary perturbations of the quadratic-like maps we consider.
We stress that these bifurcations produce additional {\it large-amplitude} oscillations of MMOs. As the parameter is varied between period-doubling events, more small-amplitude twists may be generated. The connection between Figs. \ref{fig:bifdiag} and \ref{fig:turns} is that between each large-amplitude passage, the number and type of small-amplitude twists is determined by the location of periodic points of the return map $R:\Sigma_0\to\Sigma_0$.
\section{\label{sec:ret} Returns of the critical point}
We recall a classical result of unimodal dynamics for the quadratic family $f_a(x) = 1 - ax^2$ near the critical parameter $a = 2$, where $f_a: I \to I$ is defined on its invariant interval $I$ (when $a = 2$, $I = \left[ -1,1\right]$). On positive measure sets of parameters near $a = 2$, the map $f_a$ admits absolutely continuous invariant measures with respect to Lebesgue measure.\cite{jakobson1981} These facts depend on the delicate interplay between stretching away from neighborhoods of the critical point and recurrence to arbitrarily small neighborhoods of the critical point as trajectories are `folded back' by the action of $f_a$. This motivates our current objective: to locate a parameter set for which (i) there exists a forward-invariant subset $\Sigma_u \subset \Sigma_+ $ where $R: \Sigma_u \to \Sigma_u$ has exactly one critical point $c \in \Sigma_u$, and (ii) $R^2(c)$ is a fixed point of $R$.
We were unable to locate a parameter set satisfying both (i) and (ii), but we did obtain a parameter set where $R$ has the topology of Figure \ref{fig:retmap2} (i.e. is unimodal over a sufficiently large interval) and admits a critical point satisfying (ii). This parameter set is numerically approximated using a two-step bisection algorithm. First, a bisection method is used to approximate the critical point $c$ by refining the region where $R'$ first changes sign, up to a fixed error term which we take to be $10^{-15}$. A second bisection method is then used to approximate the parameter value at which $|R^2(c) - R^3(c)|$ is minimized. We were able to reduce this distance to $2.5603\times 10^{-8}$ at the parameter value $\nu = 0.0087013381084$, where the remaining parameters are given in Figure \ref{fig:retmap}.
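A sketch of the two bisections, assuming the one-dimensional return map is available as a callable \texttt{R\_of(nu)} returning the map at parameter \texttt{nu}; the brackets, the finite-difference derivative, and the assumption that the signed quantity $R^2(c)-R^3(c)$ changes sign across the parameter bracket are illustrative choices.
\begin{verbatim}
def critical_point(R, a, b, tol=1e-15, h=1e-8):
    # Step 1: bisect on the sign of a finite-difference approximation of R'.
    dR = lambda z: (R(z + h) - R(z - h)) / (2 * h)
    for _ in range(200):                 # bounded iteration for safety
        if b - a <= tol:
            break
        m = 0.5 * (a + b)
        if dR(a) * dR(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def tune_parameter(R_of, nu_lo, nu_hi, c_lo, c_hi, iters=60):
    # Step 2: bisect nu so that R^2(c) - R^3(c) is driven toward zero,
    # i.e. so that R^2(c) becomes (approximately) a fixed point of R.
    def g(nu):
        R = R_of(nu)
        c = critical_point(R, c_lo, c_hi)
        return R(R(c)) - R(R(R(c)))
    for _ in range(iters):
        mid = 0.5 * (nu_lo + nu_hi)
        if g(nu_lo) * g(mid) <= 0:
            nu_hi = mid
        else:
            nu_lo = mid
    return 0.5 * (nu_lo + nu_hi)
\end{verbatim}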
Figure \ref{fig:crit}a depicts the forward trajectory of the critical point near the line of fixed points at this parameter value. The itinerary of $c$ is finite, eventually landing in a subinterval of $\Sigma_u$ where $R$ is undefined. Even so, its forward orbit is unpredictable and samples the interval $\left[ R(c), R^2(c)\right]$ with a nontrivial `transient' density for 1284 iterates (Figure \ref{fig:crit}b). The length of the itinerary is extremely sensitive to tiny ($O(10^{-14})$) perturbations of the parameter $b$, reflecting the sensitive dependence on initial conditions in the selected parameter neighborhood. However, the normalized distributions of the forward iterates behave much more regularly: they are all similar to that shown in Figure \ref{fig:crit}b. We conjecture that there exists a path in parameter space along which these distributions converge to a smooth distribution on the subset where $R$ is defined.
\section{Concluding remarks}
We have classified much of the complex dynamics arising from a tangency of a slow manifold with an unstable manifold of an equilibrium point. The key to this analysis has been the identification of global bifurcations in carefully-chosen return maps of the system. In particular, transverse homoclinic orbits and period-doubling cascades are identified as mechanisms leading to chaotic behavior in the present system. Our objective has not been to attempt rigorous proofs of the results. Instead, we show that these bifurcations can be identified with fairly standard numerical integration and bisection procedures. The challenge, which is typical in studies of systems with a strong timescale separation, is to use techniques which bypass the numerical instability that occurs in integrating in forward or reverse time. This is particularly relevant in the continuation procedure that is used to locate transverse homoclinic orbits for the two-dimensional return map, as well as to identify a section high up on $S^{a+}_{\eps}$ which admits an approximation by a one-dimensional map.
We also motivate the study of maps having the topology shown in Figure \ref{fig:retmap}(a)-(b). These maps are distinguished by two significant features: they admit small disjoint escape subsets, and they are unimodal over most---but not all---of the remainder of the subset over which the map is defined. Sections \ref{sec:bif} and \ref{sec:ret} can then be regarded retrospectively as an introduction to the dynamics of these maps, especially as they compare to the dynamics of unimodal maps. In particular, we observe that such maps undergo period-doubling cascades (Figure \ref{fig:bifdiag}) as a system parameter is varied. The forward trajectory of the critical point is also seen to have a transient density for a range of parameters (e.g., Figure \ref{fig:crit}). These results lead us to ask whether absolutely continuous invariant measures and universal bifurcations for unimodal maps persist weakly for the family of maps studied in this paper. The geometric theory of rank-one maps pioneered by Wang and Young\cite{wang2008} is a possible starting point for proving theorems in this direction. This theory has been used successfully to identify chaotic attractors in families of slow-fast vector fields with one fast and two slow variables.\cite{guckenheimer2006} Their technique is based upon approximating returns by one-dimensional maps.
\begin{acknowledgments}
This work was supported by the National Science Foundation (Grant No. 1006272). The author thanks John Guckenheimer for useful discussions.
\end{acknowledgments}
\end{document} |
\begin{document}
\title{\LARGE Power Series Expansions of Modular Forms\\
and Their Interpolation
Properties}
\author{\LARGE Andrea Mori\\
\large Dipartimento di Matematica\\
\large Universit\`a di Torino}
\date{}
\maketitle
\pagestyle{fancy}
\fancypagestyle{plain}{
\fancyhf{}
\renewcommand{\headrulewidth}{0pt}}
\fancyhead{}
\fancyfoot{\normalsize\thepage}
\fancyhead[RE,LO]{}
\fancyhead[RO]{Power series expansions of modular forms}
\fancyhead[LE]{A. Mori}
\fancyfoot[RO,LE]{}
\fancyfoot[RE,LO]{}
\renewcommand{\headrulewidth}{0pt}
\begin{abstract}
We define a power series expansion of a holomorphic modular
form $f$ in the $p$-adic neighborhood of a CM point $x$ of type $K$
for a split good prime $p$. The modularity group can be either a
classical congruence group or a group of norm 1 elements in an order
of an indefinite quaternion algebra. The expansion
coefficients are shown to be closely
related to the classical Maass operators and give $p$-adic
information on the ring of definition of $f$. By letting the CM
point $x$ vary in its Galois orbit, the $r$-th coefficients define a
$p$-adic $K^{\times}$-modular form in the sense of Hida. By coupling
this form with the $p$-adic avatars of algebraic Hecke characters
belonging to a suitable family and using a Rankin-Selberg type formula
due to Harris and Kudla along with some explicit computations of Watson and of Prasanna,
we obtain in the even weight case a
$p$-adic interpolation for the square roots of
a family of twisted special values of the automorphic
$L$-function associated with the base change of $f$ to $K$.
\noindent
2000 Mathematics Subject Classification 11F67
\end{abstract}
\section*{Introduction}
The idea that the power series expansion of a modular form at a CM
point with respect to a well-chosen local parameter should have an
arithmetic significance goes back to the author's thesis, \cite{Mori94}.
The goal of the thesis was to prove an expansion
principle, namely a characterization of the ring of algebraic $p$-adic
integers of definition of an elliptic modular form in terms of the
coefficients of the expansion. Such a result would be analogous to the
classical $q$-expansion principle based on the Fourier expansion
(e.g. \cite{Katz73}), with the advantage of being
generalizable in principle to groups of modularity without
parabolic elements where Fourier series are not available. The
simplest such situation is that of a Shimura curve attached to an
indefinite non-split quaternion algebra $D$ over $\mathbb Q$ (quaternionic
modular forms).
The basic idea in \cite{Mori94} was to consider a prime $p$ of good
reduction for the modular curve that is split
in the quadratic field of complex multiplications $K$ and use the
Serre-Tate deformation parameter to construct a local parameter at the
CM point $x$ corresponding to a fixed embedding of $K$ in the split quaternion algebra.
The coefficients of the resulting power series are
related to the values obtained evaluating the
$C^{\infty}$-modular forms ${\delta}_{k}^{(r)}f$ at a lift $\tau$
of $x$ in the complex upper half-plane, where $k$
is the weight of $f$ and ${\delta}_{k}^{(r)}$ is the $r$-th iterate, in the
automorphic sense, of the basic Maass operator
${\delta}_{k}=-\frac1{4\pi}\left(2i\frac{d}{dz}+\frac{k}{y}\right)$.
Our first goal in this paper is to prove a version of the expansion principle
valid also for quaternionic modular forms
without making use of the local complex geometry and completely
$p$-adic in nature.
The realization of modular forms as global sections of a line bundle ${\cal L}$
suitable for the Serre-Tate theory is subtler in the non-split case because
for Shimura curves the Kodaira-Spencer
map ${\rm KS}\colon\mathrm{Sym}^2\underline{{\omega}}\rightarrow{\Omega}^1_{{\cal X}}$ is not an isomorphism (for a trivial
reason: the push-forward $\underline{{\omega}}=\pi_{*}{\Omega}^{1}_{{\cal A}/{\cal X}}$ for the
universal family of ``false elliptic curves'' has rank 2). This motivates the introduction of
$p$-ordinary test triples (definition \ref{th:testpair}) that require moving to an
auxiliary quadratic extension.
The abelian variety of dimension $\leq2$
corresponding to the CM point $x$ defined over the ring of $p$-adic algebraic
integers ${\cal O}_{(v)}$ is either a CM curve $E$ with ${\rm End}_{0}(E)=K$ or an
abelian surface isogenous to a twofold product $E\times E$ of such a CM
curve. To it we associate a complex period
${\Omega}_{\infty}\in\mathbb C^{\times}$ and a $p$-adic period
${\Omega}_p\in{\cal O}_v^{{\rm nr},\times}$. If also the modular form is defined
over ${\cal O}_{(v)}$ and $\sum_{r=0}^{\infty}(b_{r}(x)/r!)T_{x}^{r}$ is
its expansion obtained from the Serre-Tate theory, we establish in
theorem \ref{thm:equality} an equality
\begin{equation}
c^{(r)}_{v}(x)=
{\delta}_{k}^{(r)}(f)(\tau){\Omega}_{\infty}^{-k-2r}=
b_{r}(x){\Omega}_{p}^{-k-2r}
\label{eq:intro1}
\end{equation}
of elements in ${\cal O}_{(v)}$. The expansion
principle, theorem \ref{thm:expanprinc}, asserts that if $f$ is a
holomorphic modular form such that the numbers $c^{(r)}_{v}(x)$
defined by the complex side of the equality \eqref{eq:intro1} are in
${\cal O}_{v}$ and the $p$-adic integers ${\Omega}_p^{2r}c^{(r)}_{v}(x)$ satisfy
the Kummer-Serre congruences, then $f$ is defined over the integral
closure of ${\cal O}_{(v)}$ in the compositum of all finite
extensions of the quotient field of ${\cal O}_{(v)}$ in which $v$ splits
completely.
Suppose again that the holomorphic modular form $f$ is defined over a ring
${\cal O}_{(v)}$ of $p$-adic integers. The numbers $c^{(r)}_{v}(x)$ are
related to the coefficients of a $p$-integral power series, i.e. to a
$p$-adic measure on $\mathbb Z_{p}$, naturally attached to $f$. One may
wonder about the interpolation properties of this measure. In the
introduction of \cite{HaTi01} Harris and Tilouine suggest that in the
case of an eigenform $f$ the author's techniques may be used in
conjunction with the results of Waldspurger \cite{Waldsp85} to
$p$-adically interpolate the square roots of the special values of the
automorphic $L$-functions $L(\pi_{K}\otimes\xi,s)$, where $\pi_{K}$ is
the base change to $K$ of the $\mathbb GL_{2}$-automorphic representation
$\pi$ associated to $f$ (possibly up to Jacquet-Langlands correspondence)
and $\xi$ belongs to a suitable family of Gr\"ossencharakters
for $K$.
Our second goal for this paper is to partially fulfill this expectation
when $f$ has even weight $2{\kappa}$.
A key observation (proposition \ref{th:meascrfx}) is that the set of
values $c^{(r)}_{v}(x)$ for $x$ ranging in a full set of
representatives of the copy of the generalized ideal class group
${K_{\A}^{\times}}/K^{\times}\mathbb C^{\times}\wh{{\cal O}}_{c}^\times$ embedded in the
modular (or Shimura) curve extends to a Hida \cite{Hida86} $p$-adic
$\mathbb GL_{1}(K)$-modular form $\hat{c}_{r}$, which is essentially the
$r$-th moment of a $p$-adic measure on $\mathbb Z_{p}$ with values in the
unit ball of the $p$-adic Banach space of such $p$-adic forms. The
scalar obtained by coupling the form $\hat{c}_{r}$ with the $p$-adic
avatar of a Gr\"ossencharakter $\xi_r$ for $K$ trivial on $\wh{\cal O}_{c}^\times$
and of suitable weight twisted by a power of the idelic norm is proportional to the integral
\begin{equation}
J_{r}(f,\xi_r,\tau)=
\int_{{K_{\A}^{\times}}/K^\times\mathbb R^\times}\phi_r(td_\infty)\xi_r(t)\,dt
\label{eq:intro2}
\end{equation}
where $\phi_r$ is the adelic lift of ${\delta}_{2{\kappa}}^{(r)}(f)$,
$\tau\in\mathfrak H$ represents $x$ and
$d_{\infty}\in{\rm SL}_{2}(\mathbb R)$ is the standard parabolic matrix such
that $d_{\infty}i=\tau$. When $\xi_r$ is of the form $\xi_r=\chi\xi^r$
and satisfies some technical conditions the value so obtained
is essentially the $r$-th moment of a $p$-adic measure $\mu(f,x;\chi,\xi)$ on $\mathbb Z_p$.
On the other hand, the square of the integral \eqref{eq:intro2}
is a special case of the generalized Fourier coefficients
$L_{\underline{\xi}}(\Phi)$ studied by Harris and Kudla in
\cite{HaKu91}. Building on results of Shimizu \cite{Shimi72} and
refining the techniques of Waldspurger \cite{Waldsp85}, Harris and
Kudla use the seesaw identity associated with the theta
correspondence between the similitude groups $\mathbb GL_{2}$ and $\mathbb GO(D)$
and the splitting $D=K\oplus K^{\perp}$ to express the generalized
Fourier coefficients $L_{\underline{\xi}}({\theta}_{\varphi}(F))$
where $F\in\pi$ and $\varphi$ is a split primitive Schwartz-Bruhat
function on $D_{\mathbb A}$ as a Rankin-Selberg Euler product.
Thus, we can use the explicit version of Shimizu's theory worked out by Watson \cite{Wat03},
the local non-archimedean computations of Prasanna \cite{Pra06} together with
some local archimedean computations to obtain a formula relating the square of
the $r$-th moment of $\mu(f,x;\chi,\xi)$ to the values $L(\pi_K\otimes\chi\xi^r,\frac12)$
whose local correcting terms are explicit outside the primes dividing the conductor
of the Gr\"ossencharakter and the primes dividing the non square-free part of the level
(theorem \ref{thm:maininterpolation}).
Some natural questions arise.
First of all, one would like to compute the special
values of the $p$-adic $L$-function attached to the measure $\mu(f,x;\chi,\xi)$.
Secondly, one may ask if the methods can be extended to treat different or
more general families of
Gr\"ossencharakters, in particular if one can control the interpolation as
the ramification at $p$ increases. Proposition
\ref{th:oldforms} implies that, if anything, this cannot be achieved
without moving the CM point. Thus, some kind of geometric construction
in the modular curve may be in order, with a possible link to the
question of the determination of the action of the Hecke operators on
the Serre-Tate expansions. Another question is whether the
reinterpretation of the integral \eqref{eq:intro2} as inner product in
the space of $p$-adic $\mathbb GL_{1}(K)$-modular forms can be used to
obtain an estimate of the number of non-vanishing special values
$L(\pi_{K}\otimes\xi,\frac12)$.
We hope to be able to attack these problems in a future paper.
\paragraph{Acknowledgements.} The idea that the power series
coefficients may be used to $p$-adically interpolate the special
values $L(\pi_{K}\otimes\xi,\frac12)$ arose a long time ago in
conversations with Michael Harris. I wish to thank Michael Harris for
sharing his intuitions and for many useful suggestions.
Also, I wish to thank the anonymous referee of a previous version of
the manuscript, whose suggestions helped greatly to remove some
unnecessary hypotheses.
\paragraph{Notations and Conventions.}
The symbols $\mathbb Z$, $\mathbb Q$, $\mathbb R$, $\mathbb C$ and $\mathbb F_q$ denote, as usual, the integer,
the rational, the real, the complex numbers and the field with $q$ elements respectively.
We fix once and for all an embedding $\imath\colon\overline{\mathbb Q}\rightarrow\mathbb C$ and
by a number field we mean a finite subextension of the field
$\overline{\mathbb Q}$ of algebraic numbers.
If $L$ is a number field, we denote ${\cal O}_{L}$ its ring of integers and
${\delta}_{L}$ its discriminant. If $L=\mathbb Q(\sqrt{d})$ is a quadratic field, for each
positive integer $c$ we denote
${\cal O}_{L,c}=\mathbb Z+c{\cal O}_{L}=\mathbb Z[c{\omega}_d]$ its order of conductor $c$,
with ${\omega}_d=\sqrt d$ if $d\equiv 2$, $3\bmod 4$ or ${\omega}_d=(1+\sqrt d)/2$ if $d\equiv 1\bmod 4$.
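For instance, for $L=\mathbb Q(\sqrt{-7})$ one has $d=-7\equiv1\bmod4$, hence ${\omega}_d=(1+\sqrt{-7})/2$, ${\cal O}_{L}=\mathbb Z[{\omega}_d]$ and ${\cal O}_{L,3}=\mathbb Z+3{\cal O}_{L}=\mathbb Z[3{\omega}_d]$.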
If $[L:\mathbb Q]=n$ we denote $I_L=\{{\sigma}_1,\dots,{\sigma}_n\}$ the set of embeddings ${\sigma}_i:L\rightarrow\mathbb C$ and we assume ${\sigma}_1=\imath_{|L}$.
If $p$ is a rational prime we denote $\mathbb Z_{p}$ and $\mathbb Q_{p}$ the $p$-adic
integers and the $p$-adic numbers respectively. By analogy, $\mathbb Q_\infty=\mathbb R$.
If $v|p$ is a place of the number field
$L$ corresponding to the maximal ideal $\mathfrak p_{v}\subset{\cal O}_{L}$, we denote
${\cal O}_{(v)}$, $L_{v}$, ${\cal O}_{v}$, $k(v)$ the localization of ${\cal O}_{L}$ at
$\mathfrak p_{v}$, the $v$-adic completion of $L$, the ring of
$v$-adic integers in $L_{v}$ and the residue field respectively.
The maximal ideal in ${\cal O}_{v}$ is
still denoted $\mathfrak p_{v}$. Also, we denote $\nr{L}_v$ the maximal
unramified extension of $L_{v}$ and $\nr{{\cal O}}_v$ its ring of
integers.
We denote $\widehat{\mathbb Z}$ the profinite completion of $\mathbb Z$ and for
each $\mathbb Z$-module $M$ we let $\widehat M=M\otimes\widehat\mathbb Z$. We
denote $\mathbb A$ the ring of rational adeles and $\mathbb A_{f}$ the finite
adeles, so that $\mathbb A=\mathbb R\times\mathbb A_{f}=\mathbb Q\mathbb R\widehat\mathbb Z$. For a number field $L$ we denote
$\mathbb A_L=\mathbb A\otimes L$ and $L_\mathbb A^\times$ the corresponding ring of adeles and group of id\`eles respectively. If $\mathfrak n\subseteq{\cal O}_{L}$ is an ideal, we let
$L^{\times}_{\mathfrak n}=\{{\lambda}\in L^{\times}\mbox{ such that }{\lambda}\equiv1\bmod\mathfrak n\}$
and denote ${\cal I}_{\mathfrak n}$ the group of fractional ideals of $L$ prime with $\mathfrak n$,
$P_{\mathfrak n}$ the subgroup of principal fractional ideals generated by the elements in
$L^{\times}_{\mathfrak n}$ and $U_{\mathfrak n}$ the subgroup of finite id\`eles that are products of local units congruent to $1\bmod\mathfrak n$.
We fix an additive character $\psi$ of $\mathbb A/\mathbb Q$, by asking that $\psi_\infty(x)=e^{2\pi i x}$
and $\psi_p$ is trivial on $\mathbb Z_p$ with $\psi_p(x)=e^{2\pi i x}$ for $x\in\mathbb Z[\inv p]$ and finite $p$. On $\mathbb A$ we fix the Haar measure $dx=\prod_{p\leq\infty}dx_p$ where the local Haar measures $dx_p$ are normalized so that the $\psi_p$-Fourier transform is autodual. For a quaternion algebra $D$ with reduced norm $\nu$, we fix on $D_\mathbb A$ the Haar measure $dx=\prod_{p\leq\infty}dx_p$ where the local Haar measures $dx_p$ are normalized so that the Fourier transform with respect to the norm form is autodual.
Let $(V,\scal{\,}{\,})$ be a quadratic space of dimension $d$ over $\mathbb Q$.
We denote ${\cal S}_{\mathbb A}(V)=\bigotimes_{p\leq\infty}{\cal S}_{p}$ the adelic
Schwartz-Bruhat space, where for $p$ finite, ${\cal S}_{p}$ is the space of
Bruhat functions on $V\otimes\mathbb Q_p$ and ${\cal S}_{\infty}$ is the space of Schwartz functions on
$V\otimes\mathbb R$ which are finite under the natural action of a (fixed) maximal compact subgroup of the similitude group $\mathbb GO(V)$. The Weil representation $r_\psi$ is the representation of ${\rm SL}_2(\mathbb A)$ on
${\cal S}_{\mathbb A}(V)$ which is explicitly described locally at $p\leq\infty$ by
\begin{subequations}\label{eq:Weilrep}
\begin{eqnarray}
r_\psi\left(\begin{array}{cc}1 & b \\0 & 1\end{array}\right)\varphi(x) & = & \psi_p\left(\frac12\scal{bx}{x}\right)\varphi(x),
\label{eq:Weilrep1} \\
r_\psi\left(\begin{array}{cc}a & 0 \\ 0 & \inv a\end{array}\right)\varphi(x) & = & \chi_V(a)\vass{a}_p^{d/2}\varphi(ax)
\label{eq:Weilrep2} \\
r_\psi\left(\begin{array}{cc}0 & 1 \\ -1 & 0\end{array}\right)\varphi(x) & = & {\gamma}_V\hat{\varphi}(x)
\label{eq:Weilrep3}
\end{eqnarray}
\end{subequations}
where ${\gamma}_V$ is an eighth root of unity and $\chi_V$ is a quadratic character, both computed in our cases of interest in \cite{JaLa70} (see also the table in \cite[\S3.4]{Pra06}), while the Fourier transform $\hat{\varphi}(x)=\int_{V\otimes\mathbb Q_p}\varphi(y)\psi_p(\scal xy)\,dy$ is computed with respect to a $\scal{\,}{\,}$-self-dual Haar measure on $V\otimes\mathbb Q_p$.
If $R$ is a ring and $M$ a $R$-module we denote $\dual M={\rm Hom}(M,R)$
the dual of $M$. The same notation applies to a sheaf of modules over a scheme.
If $G$ is a subgroup of units in $R$ we say that non-zero elements
$x$, $y\in M$ are $G$-equivalent and
write $x\sim_{G}y$ if there exists $r\in G$ such that $rx=y$.
The group ${\rm SL}_{2}(\mathbb R)$ acts on the complex upper half-plane $\mathfrak H$
by linear fractional transformations, if $g=\smallmat abcd$ then
$g\cdot z=\frac{az+b}{cz+d}$. The automorphy factor is defined to be
$j(g,z)=cz+d$. The action extends to an action of the
group $\mathbb GL^{+}_{2}(\mathbb R)$.
If $\mathbb Ga<{\rm SL}_{2}(\mathbb R)$ is a Fuchsian group of the first kind we shall denote
$M_{k}(\mathbb Ga)$ the space of modular forms of weight
$k\in\mathbb Z$ with respect to $\mathbb Ga$ i.e. the holomorphic functions $f$
on $\mathfrak H$ such that
$$
\mbox{$f({\gamma} z)=f(z)j({\gamma},z)^k$ for all $z\in\mathfrak H$ and
${\gamma}\in\mathbb Ga$}
$$
and extend holomorphically to a neighborhood of each cusp (when cusps
exist). The subspace of cuspforms, i.e. those modular forms that
vanish at the cusps, will be denoted $S_{k}(\mathbb Ga)$.
The request that a holomorphic function on $\mathfrak H$ extends
holomorphically to a neighborhood of a cusp $s$ is equivalent to a certain
growth condition as $z\to s$. Relaxing holomorphicity
but maintaining the growth condition yields the much bigger spaces of
$C^\infty$-modular and {}-cuspforms, which will be denoted
$M_{k}^{\infty}(\mathbb Ga)$ and $S_{k}^{\infty}(\mathbb Ga)$
respectively. We will denote
$$
M_{k,{\varepsilon}}({\Delta},N),\quad
S_{k,{\varepsilon}}({\Delta},N),\quad
M_{k,{\varepsilon}}^{\infty}({\Delta},N),\quad
S_{k,{\varepsilon}}^{\infty}({\Delta},N)
$$
the above spaces of modular or cuspforms with respect to the groups
$\mathbb Ga=\mathbb GaE({\Delta},N)$, ${\varepsilon}\in\{0,1\}$, defined in section \ref{se:curves}.
It is a well-known fact that $M_{k,{\varepsilon}}({\Delta},N)$ is always
finite-dimensional and trivial for $k<0$.
\section{Modular and Shimura curves}
\subsection{Quaternion algebras.}\label{se:quatalg}
Let $D$ be a quaternion algebra over $\mathbb Q$ with reduced norm $\nu$
and reduced trace ${\rm tr}$. For each place $\ell$ of $\mathbb Q$ let $D_\ell=D\otimes_Q\mathbb Q_\ell$.
Let ${\Sigma}_D$ be the set of places at which $D$ is \emph{ramified}, i.~e.
$D_\ell$ is the unique, up to isomorphism, quaternion division
algebra over $\mathbb Q_\ell$. If $\ell\notin{\Sigma}_D$ the algebra $D$ is
\emph{split} at $\ell$, i.~e. $D_\ell\simeq{\rm M}_2(\mathbb Q_\ell)$. The set ${\Sigma}_D$
is finite and even and determines completely the isomorphism class
of $D$. Moreover, every finite and even subset of places of $\mathbb Q$ is the set
of ramified places of some quaternion algebra over $\mathbb Q$
(for these and the other basic results on quaternion algebras the standard
reference is \cite{Vigner80}).
In particular, $M_2(\mathbb Q)$ is the only quaternion algebra up to
isomorphism which is \emph{split}, i.~e. split at all
places. The discriminant ${\Delta}={\Delta}_D$ of $D$ is the product of
the finite primes in ${\Sigma}_D$ if ${\Sigma}_D\neq\emptyset$, or ${\Delta}=1$
otherwise.
We shall henceforth assume that $D$ is \emph{indefinite},
i.~e. split at $\infty$, and fix an isomorphism
$\Phi_\infty\colon D_\infty\buildrel\sim\over\rightarrow{\rm M}_2(\mathbb R)$
which will be often left implicit.
There is a unique conjugacy class of maximal orders in $D$. Once and for
all, choose a maximal order ${\cal R}_{1}$ and fix isomorphisms
$\Phi_\ell\colon D_\ell\buildrel\sim\over\rightarrow{\rm M}_2(\mathbb Q_\ell)$ for
$\ell\notin{\Sigma}_D$ so that
$\Phi_\ell({\cal R}_{1})={\rm M}_2(\mathbb Z_\ell)$. For an integer $N$
prime to ${\Delta}$ let ${\cal R}_{N}$ be the level $N$
Eichler order of $D$ such that
$$
{\cal R}_{N}\otimes_\mathbb Z\mathbb Z_\ell=\inv{\Phi_\ell}
\left(\left\{\left(
\begin{array}{cc}
a & b \\
c & d
\end{array}\right)
\hbox{$\in{\rm M}_2(\mathbb Z_\ell)$ such that $c\equiv0\bmod N$}
\right\}\right)
$$
for $\ell\notin{\Sigma}_D$, and ${\cal R}_{N}\otimes\mathbb Z_\ell$ is the unique
maximal order in $D_\ell$ for $\ell\in{\Sigma}_D$.
If $D={\rm M}_2(\mathbb Q)$ we take ${\cal R}_{1}={\rm M}_2(\mathbb Z)$ and
${\cal R}_{N}=\left\{\smallmat abcd
\hbox{$\in{\rm M}_2(\mathbb Z)$ such that $c\equiv0\bmod N$}\right\}$.
There are exactly two homomorphisms
${\rm or}_\ell^{1},{\rm or}_\ell^{2}\colon{\cal R}_{N}\otimes\mathbb F_\ell\longrightarrow\mathbb F_{\ell^2}$
for each prime $\ell|{\Delta}$, and two homomorphisms
${\rm or}_\ell^{1},{\rm or}_\ell^{2}\colon{\cal R}_{N}\otimes\mathbb F_\ell\longrightarrow{\mathbb F_{\ell}}^2$
for each prime $\ell|N$. These maps are
called $\ell$-\emph{orientations} and the two $\ell$-orientations are
switched by the non-trivial automorphism of either $\mathbb F_{\ell^2}$ or
${\mathbb F_{\ell}}^{2}$. An orientation for ${\cal R}_{N}$ is the choice of an
$\ell$-orientation ${\rm or}_\ell$ for all primes $\ell|N{\Delta}$.
An involution $d\mapsto\invol d$ in $D$ is \emph{positive} if
${\rm tr}(d\invol d)>0$ for all $d\in D$. By the Skolem-Noether theorem
\begin{equation}
\invol d=\inv t\bar{d}t
\label{eq:involution}
\end{equation}
where $t\in D$ is some element such that $t^2\in\mathbb Q^{{}<0}$ and
$d\mapsto\bar{d}$ denotes quaternionic conjugation,
$d+\bar{d}={\rm tr}(d)$. If $t\in D$ is
such an element, let $B_t$ be the bilinear form on $D$ defined by
\begin{equation}
B_t(a,b)={\rm tr}(a\bar{b}t)={\rm tr}(at\invol b)\qquad
\hbox{for all $a, b\in D$}.
\label{eq:formEt}
\end{equation}
If ${\cal R}\subset D$ is an order, the involution $d\mapsto\invol d$
is called ${\cal R}$-\emph{principal} if $\invol{\cal R}={\cal R}$ and the bilinear form
$B_t$ is skew-symmetric, non-degenerate and $\mathbb Z$-valued on
${\cal R}\times{\cal R}$ with pfaffian equal to $1$.
When ${\Delta}>1$ an explicit model for the triple
$(D,{\cal R}_{N},d\mapsto\invol d)$ can be constructed as follows.
The condition $(n,-N{\Delta})_{\ell}=-1$ for all $\ell\in{\Sigma}_D$
on Hilbert symbols defines for $n$ a certain subset
of non-zero congruence classes modulo $N{\Delta}$.
Passing to classes modulo $8N{\Delta}$
and taking $n>0$ we may assume that
$(n,-N{\Delta})_{\infty}=(n,-N{\Delta})_{p}=1$ for all primes $p$
dividing $N$ and also $(n,-N{\Delta})_{2}=1$ if ${\Delta}$ is odd.
By Dirichlet's theorem of primes in arithmetic progressions there
exists a prime $p_{o}$ satisfying these conditions and the product
formula easily implies that
\begin{equation}
(p_{o},-N{\Delta})_{\ell}=-1\qquad\mbox{if and only if $\ell\in{\Sigma}_{D}$.}
\label{eq:condonHS}
\end{equation}
Let $a\in\mathbb Z$ such that $a^2N{\Delta}\equiv-1\bmod p_o$.
\begin{thm}[Hashimoto, \cite{Hashim95}]\label{th:Hashimoto}
Let $D$ be a quaternion algebra over $\mathbb Q$ of discriminant ${\Delta}$ and let
$t\in D$ such that $t^2\in\mathbb Q^{{}<0}$. Then:
\begin{enumerate}
\item $D$ is isomorphic to the quaternion algebra
$D_H=\mathbb Q\oplus\mathbb Q i\oplus\mathbb Q j\oplus\mathbb Q ij$, where
$i^2=-N{\Delta}$, $j^2=p_o$ and $ij=-ji$;
\item the order ${\cal R}_{H,N}=\mathbb Z{\epsilon}_1\oplus\mathbb Z{\epsilon}_2\oplus\mathbb Z{\epsilon}_3
\oplus\mathbb Z{\epsilon}_4$, where ${\epsilon}_1=1$, ${\epsilon}_2=(1+j)/2$,
${\epsilon}_3=(i+ij)/2$ and ${\epsilon}_4=(aN{\Delta} j+ij)/p_o$ is an Eichler
order of level $N$ in $D_H$;
\item the skew symmetric form $B_t$ on $D_H$ is $\mathbb Z$-valued on
${\cal R}_{H,N}$ if and only if $ti\in{\cal R}_{H,N}$.
Moreover, it defines a non-degenerate
pairing on ${\cal R}_{H,N}\times{\cal R}_{H,N}$ if and only if
$ti\in{\cal R}_{H,N}^\times$;
\item let $t=\inv i$. Then the elements
$\eta_1={\epsilon}_3-\frac12(p_o-1){\epsilon}_4$, $\eta_2=-aN{\Delta}-{\epsilon}_4$,
$\eta_3=1$ and $\eta_4={\epsilon}_2$ are a symplectic
$\mathbb Z$-basis of ${\cal R}_{H,N}$.
\end{enumerate}
\end{thm}
\noindent We call \emph{Hashimoto model} of a quaternion algebra endowed with
an Eichler order ${\cal R}$ of level $N$ and a ${\cal R}$-principal positive involution
the triple $(D_H,{\cal R}_{H,N},\inv i)$ given in the above theorem.
We can fix the isomorphism $\Phi_\infty$ for the Hashimoto model by declaring
that
$$
\Phi_\infty(i)=
\left(
\begin{array}{cc}
0 & -1 \\
N{\Delta} & 0
\end{array}
\right),\qquad
\Phi_\infty(j)=
\left(
\begin{array}{cc}
\sqrt{p_o} & 0 \\
0 & -\sqrt{p_o}
\end{array}
\right).
$$
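One checks directly that these matrices satisfy the defining relations of $D_H$:
$$
\Phi_\infty(i)^2=
\left(
\begin{array}{cc}
0 & -1 \\
N{\Delta} & 0
\end{array}
\right)^2=-N{\Delta}\cdot 1_2,\qquad
\Phi_\infty(j)^2=p_o\cdot 1_2,\qquad
\Phi_\infty(i)\Phi_\infty(j)=-\Phi_\infty(j)\Phi_\infty(i),
$$
so that $\Phi_\infty$ is compatible with the presentation $i^2=-N{\Delta}$, $j^2=p_o$, $ij=-ji$ of $D_H$.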
\subsection{Moduli spaces.}\label{se:curves}
Fix an ${\cal R}_{1}$-principal positive involution
$d\mapsto\invol d$ as in \eqref{eq:involution}.
We shall consider the groups
$$
\mathbb GaZ({\Delta},N)={\cal R}^{1}_{N}=\left\{
\hbox{${\gamma}\in{\cal R}_{N}$ such that $\nu({\gamma})=1$}
\right\}
$$
and
$$
\mathbb GaU({\Delta},N)=\left\{\hbox{${\gamma}\in\mathbb GaZ({\Delta},N)$ such that
${\rm or}^{{\epsilon}}_{\ell}({\gamma} r)={\rm or}^{{\epsilon}}_{\ell}(r)$
for all $r\in{\cal R}_{N}$, $\ell|N, {\epsilon}=1,2$}\right\}.
$$
When ${\Delta}=1$, $\mathbb GaZ(1,N)$ and $\mathbb GaU(1,N)$ are the classical
congruence subgroups
$$
\mathbb Ga_0(N)=\left\{
\left(
\begin{array}{cc}
a & b \\
c & d
\end{array}
\right)
\in{\rm SL}_2(\mathbb Z)\hbox{ such that $c\equiv0\bmod N$}\right\}
$$
and
$$
\mathbb Ga_1(N)=
\left\{
\left(
\begin{array}{cc}
a & b \\
c & d
\end{array}
\right)
\in{\rm SL}_2(\mathbb Z)\hbox{ such that $a,d\equiv1$ and $c\equiv0\bmod N$}\right\}
$$
respectively. Since $D$ is indefinite $\mathbb GaE({\Delta},N)$ for
${\varepsilon}\in\{0,1\}$ is, via $\Phi_\infty$, a discrete subgroup of
${\rm SL}_2(\mathbb R)$ acting on the complex upper half plane $\mathfrak H$.
When ${\Delta}>1$ the quotient $X_{{\varepsilon}}({\Delta},N)=\mathbb GaE({\Delta},N)\backslash\mathfrak H$ is a
compact Riemann surface, \cite[proposition~9.2]{ShiRed}. When ${\Delta}=1$
let $X_{\varepsilon}(N)$ be the standard cuspidal
compactification of $Y_{\varepsilon}(N)=\mathbb GaE(N)\backslash\mathfrak H$.
Each of these complete curves $X$ has a canonical model over $\mathbb Q$,
\cite{ShiRed}.
In fact, each $X$ can be reinterpreted as the set of complex
points of a scheme ${\cal X}$ which is the solution of a moduli
problem, defined over $\mathbb Z[1/{N{\Delta}}]$,
e.g. \cite{BerDar96, DelRap73, DiaIm95, Milne79, Robert89}.
When $D=M_2(\mathbb Q)$ and $N>3$, the functor
$F_1(N)\colon\hbox{\bf $\mathbb Z[1/N]$-Schemes}\rightarrow\hbox{\bf Sets}$ defined
by
$$
F_1(N)(S)=
\left\{
\sopra{\hbox{Isomorphism classes of generalized elliptic curves $E=E_{|S}$}}
{\hbox{with a section $P\colon S\rightarrow E$ of exact order $N$}},
\right\}
$$
is represented by a proper and smooth $\mathbb Z[\frac1{N}]$-scheme
${\cal X}_1(N)$ such that ${\cal X}_1(N)(\mathbb C)=X_{1}(N)$.
The complex elliptic curve with point $P$ of exact order $N$
corresponding to $z\in\mathfrak H$ is the torus $E_z=\mathbb C/\mathbb Z\oplus\mathbb Z z$
with $P=1/N \bmod\mathbb Z$. Denote
\begin{equation}
\pi_N\colon{\cal E}_{N}\longrightarrow{\cal X}_{1}(N)
\label{eq:univEC}
\end{equation}
the universal generalized elliptic curve attached to the representable functor
$F_{1}(N)$. The scheme ${\cal X}_0(N)$ quotient of ${\cal X}_1(N)$ by
the action of the group of diamond operators
$\langle a\rangle\colon{\cal X}_1(N)\rightarrow{\cal X}_1(N)$,
$\langle a\rangle(E,P)=(E,aP)$ for all $a\in(\mathbb Z/N\mathbb Z)^\times$,
is the coarse moduli scheme attached to the functor
$$
F_0(N)(S)=
\left\{
\sopra{\hbox{Isomorphism classes of generalized elliptic curves $E=E_{|S}$}}
{\hbox{with a cyclic subgroup $C\subset E$ of exact order $N$}}
\right\}
$$
and a smooth $\mathbb Z[1/N]$-model for the curve $X_0(N)$.
When ${\Delta}>1$ and $N>3$, $X_1({\Delta},N)={\cal X}_{1}({\Delta},N)(\mathbb C)$ for
the proper and smooth $\mathbb Z[1/{N{\Delta}}]$-scheme ${\cal X}_{1}({\Delta},N)$
representing the functor
$F_1({\Delta},N)\colon\hbox{\bf $\mathbb Z[1/{N{\Delta}}]$-Schemes}\rightarrow\hbox{\bf Sets}$
defined by
$$
F_1({\Delta},N)(S)=
\left\{
\sopra{
\sopra{\hbox{Isomorphism classes of compatibly principally polarized}}
{\hbox{ abelian surfaces $A=A_{|S}$ with a ring embedding}}}
{\sopra{\hbox{${\cal R}_{1}\hookrightarrow{\rm End}(A)$ and an equivalence class of}}
{\hbox{${\cal R}_N$-orientation preserving level $N$ structures}}}
\right\}.
$$
A level $N$ structure on an abelian surface $A$ with
${\cal R}_{1}\subset{\rm End}(A)$ is an isomorphism of (left)
${\cal R}_{1}$-modules $A[N]\simeq{\cal R}_{1}\otimes(\mathbb Z/N\mathbb Z)$. Two
such structures are declared equivalent if they coincide on
${\cal R}_N\otimes(\mathbb Z/N\mathbb Z)$ and induce the same
$\ell$-orientations on ${\cal R}_N$ for all $\ell|N$. The principal polarization is
compatible with the embedding ${\cal R}_{1}\subset{\rm End}(A)$ if the
involution $d\mapsto\invol d$ is the Rosati involution. The abelian
surfaces in $F_1({\Delta},N)(S)$ are called
\emph{abelian surfaces with quaternionic multiplications} (QM-abelian surfaces, for short) or
\emph{false elliptic curves}.
The complex QM-abelian surface corresponding to $z\in\mathfrak H$ is
\begin{equation}
A_z=D_\infty^z/{\cal R}_{1},
\label{eq:QMtori}
\end{equation}
where $D_\infty^z$ is the real vector space $D_\infty$ endowed with the
$\mathbb C$-structure defined by the identification
$\mathbb C^2=\Phi_\infty(D_\infty)\left(\sopra{z}{1}\right)$, i.e.
$A_z=\mathbb C^2/\Phi_\infty({\cal R}_{1})\vvec{z}{1}$. The complex
uniformization \eqref{eq:QMtori} defines a level structure
$\inv{N}{\cal R}_{1}/{\cal R}_{1}=(D/{\cal R}_{1})[N]\buildrel\sim\over\rightarrow(A_{z})[N]$
and the skew-symmetric form
$\scal{\Phi_\infty(a)\left(\sopra{z}{1}\right)}
{\Phi_\infty(b)\left(\sopra{z}{1}\right)}=B_t(a,b)$
for all $a,b\in D$,
where $B_t$ is as in \eqref{eq:formEt}, extended to $\mathbb C^{2}$ by
$\mathbb R$-linearity is the unique Riemann form on $A_{z}$ with Rosati
involution $d\mapsto\invol d$, \cite[lemma~1.1]{Milne79}. Denote
\begin{equation}
\pi_{{\Delta},N}\colon{\cal A}_{{\Delta},N}\longrightarrow{\cal X}_{1}({\Delta},N)
\label{eq:univQMAV}
\end{equation}
the universal QM abelian surface attached to the representable functor
$F_{1}({\Delta},N)$.
As with the split case, a smooth $\mathbb Z[1/{N{\Delta}}]$-model
${\cal X}_{0}({\Delta},N)$ of $X_{1}({\Delta},N)$ can be obtained as quotient of
${\cal X}_{1}({\Delta},N)$ by a suitable action of
$\frac{\mathbb GaZ({\Delta},N)}{\mathbb GaU({\Delta},N)}\simeq(\mathbb Z/N\mathbb Z)^{\times}$. It is the
coarse moduli space for the functor
$$
F_0({\Delta},N)(S)=
\left\{
\sopra{\sopra{\hbox{Isomorphism classes of compatibly principally polarized}}
{\hbox{abelian surfaces $A=A_{|S}$ with a ring embedding ${\cal R}_{1}\hookrightarrow{\rm End}(A)$}}}
{\hbox{and an ${\cal R}_{N}$-equivalence class
of level $N$ structures}}
\right\}
$$
where two level $N$ structures are ${\cal R}_{N}$-equivalent if they coincide on
${\cal R}_{N}\otimes(\mathbb Z/N\mathbb Z)$.
\begin{rem}
\rm In order to study the reduction of the modular and Shimura
curves at primes dividing $N{\Delta}$ one has to extend the moduli
problems described above to moduli problems defined over $\mathbb Z$,
see \cite{BoCa91, KatMaz85}.
The $\mathbb Z$-schemes thus obtained are proper but not smooth. We
shall not deal with primes of bad reduction and for the purposes
of this paper the above descriptions will suffice.
\end{rem}
\subsection{Subfields and CM points.}\label{se:CMpts}
Let $\mathbb Q\subseteq\pr L\subset L$ be a tower of fields with $[L:\pr L]=2$
and assume that $L$ splits $D$, i.~e. $D\otimes_\mathbb Q L{\sigma}meq{\rm M}_2(L)$ or,
equivalently, that $L$ admits an embedding in $D\otimes_\mathbb Q\pr L$.
An embedding $\jmath:L\hookrightarrow D\otimes_\mathbb Q\pr L$ endows $D\otimes_\mathbb Q\pr L$
with a structure of $L$-vector space. Scalar multiplication by
${\lambda}\in L$ is left multiplication by $\jmath({\lambda})$.
The opposite algebra $D^{\mathrm{op}}$ acts $L$-linearly on $D$ by right multiplication,
providing a direct identification
\begin{equation}
\label{eq:DasEnd}
D^{\mathrm{op}}\otimes L\stackrel{\sim}{\longrightarrow}{\rm End}_{L}(D\otimes\pr L).
\end{equation}
Let ${\sigma}$ be the non-trivial element in $\mathbb Gal(L/\pr L)$ and $\jmath^{\sigma}({\lambda})=\jmath({\lambda}^{\sigma})$
for all ${\lambda}\in L$. By the Skolem-Noether theorem
there exists $u\in(D\otimes_\mathbb Q\pr L)^\times$, well defined up to a
$L^\times$-multiple, such that $u\jmath({\lambda})=\jmath^{\sigma}({\lambda})u$
for all ${\lambda}\in L$ and $u^2\in\pr L$. Thus, with a slight abuse of notation,
the embedding $\jmath$ defines a splitting
\begin{equation}
D\otimes\pr L=L\oplus L u
\label{eq:Dsplit}
\end{equation}
which can be more intrinsically seen as the eigenspace decomposition
under right multiplication by $\jmath(L^\times)$.
Also, there is an isomorphism
\begin{equation}
D\buildrel\sim\over\longrightarrow D^{\mathrm{op}},\qquad
{\lambda}_1+{\lambda}_2u\mapsto{\lambda}_1+{\lambda}_2^{{\sigma}}u.
\label{eq:isoDDop}
\end{equation}
Let $L=\pr L({\alpha})$ with ${\alpha}^2=A\in\pr L$. The element
\begin{equation}
\label{eq:idempotent}
e_\jmath=\frac{1}{2}\left(1\otimes1+
\frac{1}{A}\jmath({\alpha})\otimes{\alpha}\right)\in D\otimes\pr L
\end{equation}
is an idempotent which is easily seen to be, under
\eqref{eq:DasEnd}, \eqref{eq:Dsplit} and \eqref{eq:isoDDop}, the projection onto $L$
with kernel $Lu$. If $L\subseteq\mathbb C$ the idempotent
$e_\jmath$ defines a projector in $D_\infty^z$ for all $z\in\mathfrak H$
by scalar extension.
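A direct computation confirms that \eqref{eq:idempotent} is indeed an idempotent: since $\jmath({\alpha})^{2}=\jmath(A)=A$ and ${\alpha}^{2}=A$, one has $\left(\jmath({\alpha})\otimes{\alpha}\right)^{2}=A^{2}(1\otimes1)$ and therefore
$$
e_\jmath^{2}=\frac{1}{4}\left(1\otimes1+\frac{2}{A}\jmath({\alpha})\otimes{\alpha}+
\frac{1}{A^{2}}\left(\jmath({\alpha})\otimes{\alpha}\right)^{2}\right)=
\frac{1}{2}\left(1\otimes1+\frac{1}{A}\jmath({\alpha})\otimes{\alpha}\right)=e_\jmath.
$$
Moreover, right multiplication by $\jmath({\alpha})$ acts as ${\alpha}$ on $L$ and as $-{\alpha}$ on $Lu$ (because $u\jmath({\alpha})=\jmath(-{\alpha})u$), so that under \eqref{eq:DasEnd} the element $e_\jmath$ acts as $\frac{1}{2}(1+\frac{{\alpha}^{2}}{A})=1$ on $L$ and as $\frac{1}{2}(1-\frac{{\alpha}^{2}}{A})=0$ on $Lu$, as claimed.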
An involution $d\rightarrowsto\invol d$ in $D$ extends by linearity to
$D\otimes\pr L$. If $\invol\jmath$ is the embedding
$\invol\jmath({\lambda})=\invol{\jmath({\lambda})}$, the explicit description
\eqref{eq:idempotent} implies at once that
$e_{\invol\jmath}=\invol{e_\jmath}$ and in particular
$$
\hbox{$\invol{e_\jmath}=e_\jmath$ if and only if
$\invol{\jmath(L)}=\jmath(L)$ pointwise.}
$$
When the involution is positive a fixed idempotent can be constructed as follows.
As an element of ${\rm End}(D)$ the involution \eqref{eq:involution}
has determinant $-1$. Since $\invol1=1$ and ${\rm tr}(\invol d)={\rm tr}(d)$
for all $d\in D$ its $(-1)$-eigenspace is a subspace of trace
$0$ elements of dimension either $1$ or $3$.
If the dimension is $3$ then the involution is the quaternionic conjugation,
contradicting the positivity assumption.
Therefore there exists a non-zero element $d$ of trace $0$ fixed by the involution.
The subalgebra $F=\mathbb Q(d)\subset D$
is a quadratic field fixed by the involution and the corresponding
idempotent $e\in D\otimes_\mathbb Q F$ has the desired property.
Note that the positivity of the involution implies further
that $F$ is real quadratic.
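For example, in the Hashimoto model recalled in the proof of proposition \ref{teo:anypworks} and in remark \ref{re:LoverC} below, the trace zero element $d=j\in D_{H}$ is fixed by the positive involution and $d^{2}=p_{o}>0$, so that $F=\mathbb Q(j){\sigma}meq\mathbb Q(\sqrt{p_{o}})$ is real quadratic and the corresponding idempotent is
$$
e=\frac{1}{2}\left(1\otimes1+\frac{1}{p_{o}}\,j\otimes\sqrt{p_{o}}\right)\in D_{H}\otimes\mathbb Q(\sqrt{p_{o}}),
$$
i.e. \eqref{eq:idempotent} with ${\alpha}=\sqrt{p_{o}}$ and $A=p_{o}$.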
The conductor of an embedding $\jmath\colon L\rightarrow D$ of the quadratic field $L$
relative to the order ${\cal R}_N$ is the integer $c=c_N>0$ such that
$\jmath({\cal O}_{L,c})=\jmath(L)\cap{\cal R}_{N}$.
Denote $\bar c$ the \textit{minimal conductor},
i.e. the conductor relative to the maximal order
${\cal R}_{1}$. It is clear that $c$ is a multiple of $\bar c$, in fact
$c/\bar c$ is a divisor of $N$ because ${\cal O}_{L,\bar c}/{\cal O}_{L,c}$ injects
into ${\cal R}_{1}/{\cal R}_{N}{\sigma}meq\mathbb Z/N\mathbb Z$. In the following result the embedding is left implicit to simplify the notation.
{\beta}gin{pro}{\lambda}bel{prop:decomporder}
Let $L\subset D$ be a quadratic subfield with associated decomposition $D=L\oplus Lu$.
Let ${\Lambda}=L\cap{\cal R}_{N}$ and ${\Lambda}^\prime=Lu\cap{\cal R}_{N}$. Then:
{\beta}gin{enumerate}
\item $D$ is split at the prime $p$ if and only if $(u^2,{\delta}_L)_p=1$;
\item if $p$ is unramified in $L$ and $\mcd pc=1$ then
${\cal R}_N\otimes\mathbb Z_p={\Lambda}\otimes\mathbb Z_p\oplus{\Lambda}^\prime\otimes\mathbb Z_p$. Moreover,
${\Lambda}^\prime\otimes\mathbb Z_p={\cal J} u$ for some fractional ideal ${\cal J}\subset L\otimes\mathbb Q_p$
such that ${\rm N}({\cal J})\nu(u)=(p^{\epsilon})$ with ${\epsilon}=1$ if $p|N{\Delta}$ and ${\epsilon}=0$ otherwise.
\end{enumerate}
\end{pro}
\par\noindent{\bf Proof. } Let $L=\mathbb Q(\sqrt d)$. Then $\{1,\sqrt d,u, \sqrt du\}$ is a $\mathbb Q$-basis of $D$ and the local invariants of the norm form are $\det=1$ and ${\epsilon}_p=(-1,-1)_p(u^2,d)_p=(-1,-1)_p(u^2,{\delta}_L)_p$, thus proving the first part.
For the second part, choose $u$ so that $u^2\in\mathbb Z$. Then there is an inclusion of orders
${\cal R}^\prime={\cal O}_{L,c}\oplus{\cal O}_{L,c}u\subseteq{\Lambda}\oplus{\Lambda}^\prime\subseteq{\cal R}_N$.
The elements $\{1,c{\omega}_d,u,c{\omega}_du\}$ are a $\mathbb Z$-basis of ${\cal R}^\prime$, so that ${\cal R}^\prime$ has reduced discriminant ${\delta}_Lcu^2$. We are thus reduced to checking that, when $p|u^2$ and
$\mcd p{c{\delta}_L}=1$, there is no element $x\in{\cal R}_N$ of the form $x=(r+r^\prime u)/p$ with $r$,
$r^\prime\in{\cal O}_{L,c}-p{\cal O}_{L,c}$. For such an element $x$ one must have $p|{\rm tr}(r)$ and $p|{\rm N}(r)$, from which one quickly derives a contradiction.
The last claim follows from the very same discriminant computation since ${\cal R}_N$ has reduced discriminant $N{\Delta}$.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
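As a quick illustration of the first criterion in the split case, take $D={\rm M}_2(\mathbb Q)$ and $L=\mathbb Q(\sqrt{-1})$ embedded by $\jmath(a+b\sqrt{-1})=\smallmat a{-b}ba$, with $u=\smallmat 1{}{}{-1}$: then $u\jmath({\lambda})=\jmath({\lambda}^{{\sigma}})u$, $u^{2}=1$ and
$$
(u^{2},{\delta}_L)_p=(1,-4)_p=1\qquad\hbox{for every prime $p$,}
$$
in agreement with the fact that ${\rm M}_2(\mathbb Q)$ is split at every place.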
Fix an imaginary quadratic field $K$ that splits $D$ and an embedding $\jmath\colon K\hookrightarrow D$.
Exactly one of the two conjugate embeddings $\jmath,\jmath^{{\sigma}}$ is normalized
in the sense of \cite[(4.4.5)]{ShiRed}.
The normalized embeddings correspond bijectively to a special subset
of points $\tau\in\mathfrak H$. More precisely, there is a bijection
$$
\left\{
\sopra{\displaystyle\hbox{normalized embeddings}}
{\displaystyle\jmath\colon K\hookrightarrow D}
\right\}
\longleftrightarrow
\mathbb CM_{{\Delta},K}=\left\{
\sopra{\displaystyle\hbox{$\tau\in\mathfrak H$ such that
$\Phi_\infty(\jmath(K^\times))=$}}
{\displaystyle\{{\gamma}\in\Phi_\infty(D^\times)
\cap\mathbb GL_2^+(\mathbb R)~|~{\gamma}\cdot\tau=\tau\}}
\right\}.
$$
The bijection is $\mathbb GaZ({\Delta},N)$-equivariant where $\mathbb GaZ({\Delta},N)$ acts by
conjugation on the left set and on $\mathbb CM_{{\Delta},K}$ via its action on
$\mathfrak H$.
Also, the correspondence $\jmath\leftrightarrow\tau$ is characterized
by the fact that the complex structure on $D_\infty$ induced by the
embedding $\jmath$ coincides with that of $D_{\infty}^{\tau}$.
In the split case $\mathbb CM_{1,K}=K\cap\mathfrak H$.
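For instance, for $K=\mathbb Q(\sqrt{-1})$ the embedding $\jmath(a+b\sqrt{-1})=\smallmat a{-b}ba$ satisfies
$$
\jmath({\lambda})\vvec{i}{1}={\lambda}\vvec{i}{1}\qquad\hbox{for all ${\lambda}\in K$}
$$
and fixes $i\in\mathfrak H$; it is the standard normalized embedding, denoted $\jmath^{\rm st}$ below, and under the above bijection it corresponds to the CM point $\tau=i\in K\cap\mathfrak H=\mathbb CM_{1,K}$.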
We shall denote $c_{\tau}=c_{\tau,N}$ the conductor relative to the order
${\cal R}_{N}$ of the embedding associated to the point $\tau\in\mathbb CM_{{\Delta},K}$
and $\bar c_{\tau}$ its minimal conductor.
{\beta}gin{pro}
Let $\tau$ and $\pr\tau\in\mathbb CM_{{\Delta},K}$ such that $\pr\tau={\gamma}\cdot\tau$
for some ${\gamma}\in\mathbb GaZ({\Delta},N)$.
Then $c_{\pr\tau,N}=c_{\tau,N}$.
\end{pro}
\par\noindent{\bf Proof. } Let $\jmath$ and $\pr\jmath$ be the embeddings corresponding
to $\tau$ and $\pr\tau$ respectively.
Then $\pr\jmath={\gamma}\jmath\inv{{\gamma}}$ and so
$\pr\jmath({\cal O}_{c_{\pr\tau,N}})=\pr\jmath(K)\cap{\cal R}_{N}=
{\gamma}\jmath(K)\inv{{\gamma}}\cap{\cal R}_{N}=
{\gamma}(\jmath(K)\cap{\cal R}_{N})\inv{\gamma}={\gamma}\jmath({\cal O}_{c_{\tau,N}})\inv{\gamma}=
\pr\jmath({\cal O}_{c_{\tau,N}})$.
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
{\beta}gin{dfn}
A point $x\in X_{0}({\Delta},N)$ is a CM point of type $K$ and conductor
$c=c_x$ if it is represented by a $\tau\in\mathbb CM_{{\Delta},K}$ with
$c_{\tau,N}=c$. Denote
$$
\mathbb CM({\Delta},N;{\cal O}_{K,c})=\{\mbox{CM points of $X_{0}({\Delta},N)$ of type $K$ and
conductor $c$}\}.
$$
\end{dfn}
The following result is \cite[Lemma~4.17]{Dar04}.
{\beta}gin{pro}{\lambda}bel{teo:existCM}
Let $c>0$ be an integer such that $\mcd c{N{\Delta}}=1$. Then the set
$\mathbb CM({\Delta},N;{\cal O}_{K,c})$ is non-empty if and only if
{\beta}gin{itemize}
\item all primes $\ell|{\Delta}$ are inert in $K$, and
\item all primes $\ell|N$ are split in $K$.
\end{itemize}
\end{pro}
For $\tau\in\mathbb CM_{1,K}$ the elliptic curve $E_{\tau}$ has complex
multiplications in the field $K$. When ${\Delta}>1$ and $\tau\in\mathbb CM_{{\Delta},K}$
the QM abelian surface $A=A_{\tau}={\cal A}(\mathbb C)$ contains the elliptic curve
$E=K\otimes\mathbb R/{\cal O}_{K,\bar c}$ and in fact is
isogenous to the product $E\times E$. In particular there is
an identification ${\rm End}^{o}(A){\sigma}meq D\otimes K$. Consider the
left ideal $\mathfrak e={\rm End}(A)\cap{\rm End}^{o}(A)(1-e_{\jmath})$ where $e_{\jmath}$ is the
idempotent \eqref{eq:idempotent}
attached to the embedding $\jmath:K\hookrightarrow D$ associated to $\tau$
and let ${\cal E}={\cal A}[\mathfrak e]^{o}$ be the connected component of the subgroup scheme
of ${\cal A}$ killed by $\mathfrak e$. Note that since
$\jmath({\cal O}_{K,\bar c})$ and $e_{\jmath}$ commute, the order ${\cal O}_{K,\bar c}$ acts
on ${\cal E}$.
{\beta}gin{pro}{\lambda}bel{teo:idgrsch}
${\cal A}={\cal E}\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1}$ as group schemes.
\end{pro}
\par\noindent{\bf Proof. } Let $S$ be any scheme of definition for $A$. Over any $S$-scheme
$T$ there is an obvious map
$({\cal E}\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1})(T)\rightarrow{\cal A}(T)$ which is surjective
because ${\cal E}\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1}$ contains two independent
abelian schemes of dimension 1, ${\cal E}$ and any translate of it by an
$r\in{\cal R}_{1}-{\cal O}_{K,\bar c}$. To show that the map is injective, it is
enough to do so over an algebraically closed field. Over $\mathbb C$ we have
${\cal E}(\mathbb C)=E$ and thus
$({\cal E}\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1})(\mathbb C)=E\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1}=
(K\otimes\mathbb R\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1})/{\cal R}_{1}=A$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
{\beta}gin{dfn}{\lambda}bel{th:testpair}
Let $p$ be an odd prime number, $\mcd p{N{\Delta}}=1$.
A \emph{$p$-ordinary test triple} for $\mathbb GaE({\Delta},N)$ is a
triple $(\tau, v, e)$, where $\tau\in\mathbb CM_{{\Delta},K}$,
$v$ is a finite place dividing $p$ in a finite extension $L\supseteq\mathbb Q$
and $e\in D\otimes F$ is the idempotent associated to a real
quadratic subfield $F\subset D$ pointwise fixed by the positive
involution, such that
{\beta}gin{enumerate}
\item $FK\subseteq L$;
\item the CM curve $E_\tau$ or QM-abelian surface $A_\tau$
has ordinary good reduction modulo $\mathfrak p_v$;
\item if $w$ is the restriction of $v$ to $F$ then
$e\in{\cal R}_{1}\otimes_{\mathbb Z}{\cal O}_{(w)}$.
\end{enumerate}
Furthermore, a $p$-ordinary test triple $(\tau, v, e)$ is said to be
\emph{split} if $p$ splits in $F$.
\end{dfn}
\noindent Let us observe that:
{\beta}gin{enumerate}
\item the ordinarity hypothesis implies that $p$ splits in $K$;
\item the idempotent $e$ plays no role in the split case and can be omitted in that case;
\item the explicit description \eqref{eq:idempotent} of $e$
shows that the third condition above is equivalent to
$\mcd p{\bar c{\delta}_{F}}=1$ where $\bar c$ is the minimal
conductor of $F$;
\item for a $p$-ordinary triple $(\tau,v,e)$ for $\mathbb GaU({\Delta},N)$ the point
$x\in X_1({\Delta},N)$ represented by $\tau$ is a smooth point in
${\cal X}_1({\cal O}_{(v)})$. This is clear for $D$ split and follows for
instance from \cite[Theorem~1.1]{Jord86} in the non-split case.
\end{enumerate}
{\beta}gin{pro}{\lambda}bel{teo:anypworks}
Let $p$ be an odd prime number, $\mcd p{N{\Delta}}=1$. There exist split
$p$-ordinary triples for $\mathbb GaE({\Delta},N)$.
\end{pro}
\par\noindent{\bf Proof. } Since any two positive involutions \eqref{eq:involution} are
conjugate in $D$, up to a different choice of maximal order we are
reduced to the Hashimoto model. Up to replacing $p_{o}$ in \eqref{eq:condonHS}
with another prime in its congruence class modulo $8N{\Delta} p$, we may also assume that
$\vvec{p_{o}}p=1$. Thus the subfield $F=\mathbb Q\oplus\mathbb Q j\subset D_{H}$ is
pointwise fixed by the involution, has discriminant prime to $p$ and $p$
splits in it. Finally, the minimal conductor of the embedding
$\sqrt{p_o}\rightarrowsto j\in F$ is prime to $p$ since $j\in{\cal R}_{H,N}$.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
The decomposition $D=K\oplus Ku$ associated to a choice of
$\tau\in\mathbb CM_{{\Delta},K}$ is also an orthogonal decomposition under
the non-degenerate pairing $(x,y)_{D}={\rm tr}(x\bar{y})$.
Note that here $u^{2}>0$ since the norm is indefinite.
We shall be concerned with the algebraic group of
similitudes of $(\cdot,\cdot)_{D}$, i.e.
$$
\mathbb GO(D)=\left\{\hbox{$g\in\mathbb GL(D)$ such that $(gx,gy)_{D}=\nu_{0}(g)(x,y)_{D}$
for all $x,y\in D$}\right\}.
$$
The structure of the group $\mathbb GO(D)$ is well understood, e.g.
\cite[\S1.1]{Harris93}, \cite[\S7]{HaKu91}. Let $\mathbf{t}\in\mathbb GO(D)$
be the involution $\mathbf{t}(d)=\bar{d}$. Then
$\mathbb GO(D)=\mathbb GO^{o}(D)\rtimes\langle\mathbf{t}\rangle$, where $\mathbb GO^{o}(D)$ is the
Zariski connected component described by the short exact sequence of
algebraic groups
{\beta}gin{equation}
1\longrightarrow\mathbb G_{m}\longrightarrow
D^{\times}\times D^{\times}\stackrel{\varrho}{\longrightarrow}
\mathbb GO^{o}(D)\longrightarrow1
{\lambda}bel{eq:sesGOD}
\end{equation}
where $\mathbb G_{m}$ is embedded diagonally and
$\varrho(d_{1},d_{2})(x)=d_{1}xd_{2}^{-1}$. The norm $\nu$ restricts
to $N_{K/\mathbb Q}$ and $-u^{2}N_{K/\mathbb Q}$ on $K$ and $Ku$ respectively, and
$\mathbb GO^o(K){\sigma}meq\mathbb GO^o(Ku){\sigma}meq R_{K/\mathbb Q}\mathbb G_{m,K}$ where the isomorphisms are
given by left multiplication. Thus, the subgroup of $\mathbb GO^{o}(D)$ that
preserves the splitting $D=K\oplus Ku$ can be identified with the group
$$
G(O(K)\times O(Ku))^o=
\left\{\hbox{$(k_{1},k_{2})\in(R_{K/\mathbb Q}\mathbb G_{m,K})^2$
such that $N_{K/\mathbb Q}(k_{1}\inv k_{2})=1$}\right\}
$$
and there is a commutative diagram
{\beta}gin{equation}
{\beta}gin{CD}
K^{\times}\times K^{\times} @>{{\alpha}}>> G(O(K)\times O(Ku))^o \\
@V{\jmath\times\jmath}VV @VVV \\
D^{\times}\times D^{\times} @>{\varrho}>> \mathbb GO^{o}(D)\\
\end{CD}
{\lambda}bel{eq:similitudes}
\end{equation}
where ${\alpha}(k_{1},k_{2})=(k_{1}k_{2}^{-1},k_{1}\bar{k}_{2}^{-1})$.
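The formula for ${\alpha}$ can be checked directly: identifying $K$ with $\jmath(K)$, for ${\lambda},\mu\in K$ one has
$$
\varrho(k_{1},k_{2})({\lambda})=k_{1}{\lambda} k_{2}^{-1}=(k_{1}k_{2}^{-1}){\lambda},
\qquad
\varrho(k_{1},k_{2})(\mu u)=k_{1}\mu uk_{2}^{-1}=(k_{1}\bar{k}_{2}^{-1})\mu u,
$$
the second equality because $uk_{2}^{-1}=\bar{k}_{2}^{-1}u$. Hence $\varrho(k_{1},k_{2})$ preserves the splitting $D=K\oplus Ku$ and acts on the two summands by left multiplication by $k_{1}k_{2}^{-1}$ and $k_{1}\bar{k}_{2}^{-1}$ respectively; since $N_{K/\mathbb Q}\big((k_{1}k_{2}^{-1})(k_{1}\bar{k}_{2}^{-1})^{-1}\big)=N_{K/\mathbb Q}(\bar{k}_{2}k_{2}^{-1})=1$, the pair ${\alpha}(k_{1},k_{2})$ does lie in $G(O(K)\times O(Ku))^o$.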
We will normalize the complex coordinates in
$D_{\infty}^{\tau}=(K\oplus Ku)\otimes\mathbb R$ as follows. The standard normalized
embedding
$\jmath^{\rm st}:\mathbb Q(\sqrt{-1})\hookrightarrow{\rm M}_2(\mathbb Q)$ with fixed point $i\in\mathfrak H$
defines a splitting $M_{2}(\mathbb R)^{i}=\mathbb C\oplus\mathbb C^\perp$ with
$\mathbb C=\mathbb R\smallmat 1{}{}1\oplus\mathbb R\smallmat{}{-1}1{}$ and
$\mathbb C^\perp=\mathbb C\smallmat {}11{}=\mathbb R\smallmat {}11{}\oplus\mathbb R\smallmat{-1}{}{}1$.
Define standard complex coordinates $z_1^{\rm st}$, $z_2^{\rm st}$ in
$D_{\infty}$ by the identity
{\beta}gin{equation}
\Phi_\infty(d)=z_1^{\rm st}+z_2^{\rm st}\smallmat {}11{}.
{\lambda}bel{eq:standardcoord}
\end{equation}
The $\mathbb R$-linear extensions of the embeddings $\jmath^{\rm st}$ and
$\Phi_\infty\circ\jmath$ are conjugated in $M_{2}(\mathbb R)$, namely
$\Phi_{\infty}\circ\jmath=d_\infty\jmath^{\rm st}d_\infty^{-1}$ where
$d_\infty=\smallmat{y^{1/2}}{sy^{-1/2}}{}{y^{-1/2}}$
and $\tau=s+iy$. So we define normalized coordinates $z_{1}$ and $z_{2}$ in
$D_\infty^\tau$ by the identity
$$
z_{i}(d)=z_{i}^{\rm st}
(\Phi_\infty^{-1}(d_\infty^{-1})d\Phi_\infty^{-1}(d_\infty)),
\qquad\hbox{for all $d\in D_{\infty}$,\quad $i=1$,$2$}.
$$
\section{Some differential operators}
\subsection{Preliminaries.}{\lambda}bel{se:KSprel}
We briefly review some basic facts about the Kodaira-Spencer map and
the Gau{\ss}-Manin connection. For more details see \cite{Katz70, KatOda68}.
The \emph{Kodaira-Spencer class} of a composition of smooth morphisms of schemes
$X\stackrel{\pi}{\rightarrow}S\rightarrow T$ is the element in
$H^1(X,\dual{({\Omega}^1_{X/S})}\otimes \pi^*{\Omega}^1_{S/T})$
arising from the canonical exact sequence
{\beta}gin{equation}
0\longrightarrow\pi^*{\Omega}^1_{S/T}\longrightarrow{\Omega}^1_{X/T}\longrightarrow{\Omega}^1_{X/S}\longrightarrow 0
{\lambda}bel{eq:canexseq}
\end{equation}
by local freeness of the sheaves ${\Omega}^1$.
The \emph{Kodaira-Spencer map} is the boundary map
$$
{\rm KS}:\pi_*{\Omega}^1_{X/S}\longrightarrow R^1\pi_*(\pi^*{\Omega}^1_{S/T}){\sigma}meq
{\Omega}^1_{S/T}\otimes R^1\pi_*{\cal O}_X
$$
in the long exact sequence of derived functors obtained from \eqref{eq:canexseq} by pushing down.
Under the natural maps
$H^1(X,\dual{({\Omega}^1_{X/S})}\otimes \pi^*{\Omega}^1_{S/T})\rightarrow
H^0(S,R^1\pi_*(\dual{({\Omega}^1_{X/S})}\otimes \pi^*{\Omega}^1_{S/T}))\rightarrow
H^0(S,{\Omega}^1_{S/T}\otimes R^1\pi_*{\cal O}_X\otimes\dual{(\pi_*{\Omega}^1_{X/S})})$ the Kodaira-Spencer class maps to the Kodaira-Spencer map.
The $q$-\emph{th relative de Rham cohomology sheaf} of $X/S$ is defined as
${\delta}rham{q}(X/S)=\mathbb R^q\pi_*({\Omega}^\bullet_{X/S})$ (hypercohomology). Following \cite{KatOda68}, the \emph{Gau{\ss}-Manin connection}
$$
\nabla\colon{\delta}rham{q}(X/S)\longrightarrow{\Omega}^1_{S/T}\otimes_{{\cal O}_S}{\delta}rham{q}(X/S)
$$
can be seen as the differential
$d_1^{0,q}\colon E_1^{0,q}\rightarrow E_1^{1,q}$ in the spectral sequence
defined by the finite filtration
$F^i{\Omega}^\bullet_{X/T}=\mathrm{Im}({\Omega}^{\bullet-i}_{X/T}
\otimes_{{\cal O}_X}\pi^*{\Omega}^i_{S/T}\longrightarrow{\Omega}^\bullet_{X/T})$,
with associated graded objects
$\mathrm{gr}^i({\Omega}^\bullet_{X/T})={\Omega}^{\bullet-i}_{X/T}
\otimes_{{\cal O}_X}\pi^*{\Omega}^i_{S/T}$.
If $X/S={\cal A}/S$ is an abelian scheme with $0$-section $e_{0}$
and dual ${\cal A}^t/S$,
denote
$\underline{{\omega}ega}=\underline{{\omega}ega}_{{\cal A}/S}=\pi_*{\Omega}^1_{{\cal A}/S}={e_0}^*{\Omega}^1_{{\cal A}/S}$
the sheaf on $S$ of translation invariant relative $1$-forms on ${\cal A}$.
The first de Rham sheaf ${\delta}rham{1}={\delta}rham{1}({\cal A}/S)$ is the
central term in a short exact sequence
{\beta}gin{equation}
0\longrightarrow\underline{{\omega}ega}\longrightarrow{\delta}rham{1}\longrightarrow R^1\pi_*{\cal O}_{{\cal A}}\longrightarrow0
{\lambda}bel{eq:Hodgeseq}
\end{equation}
(called the \emph{Hodge sequence}). By Serre duality
{\beta}gin{equation}
{\rm Hom}_{{\cal O}_S}(\pi_*{\Omega}^1_{{\cal A}/S}, R^1\pi_*(\pi^*{\Omega}^1_{S/T}))
{\sigma}meq
{\rm Hom}_{{\cal O}_S}(\underline{{\omega}ega}_{{\cal A}/S}\otimes\underline{{\omega}ega}_{{\cal A}^t/S},{\Omega}^1_{S/T})
{\lambda}bel{eq:KSmap}
\end{equation}
and the Kodaira-Spencer map can be seen as an element of
the latter group. It can be reconstructed from the Gau{\ss}-Manin connection as the composition
{\beta}gin{equation}
\underline{{\omega}ega}_{{\cal A}/S}\hookrightarrow{\delta}rham{1}\stackrel{\nabla}{\longrightarrow}{\delta}rham{1}\otimes{\Omega}^1_{S/T}
\longrightarrow\dual{\underline{{\omega}ega}_{{\cal A}^t/S}}\otimes{\Omega}^1_{S/T}.
{\lambda}bel{eq:KSfromGM}
\end{equation}
In fact, when ${\cal A}/S{\sigma}meq{\cal A}^t/S$ is principally polarized,
the Kodaira-Spencer map becomes a \emph{symmetric} map
${\rm KS}\colon\mathrm{Sym}^2(\underline{{\omega}ega})\rightarrow{\Omega}^1_{S/T}$, \cite[Section III.9]{FalCha90}.
Let $(\pr{S},i_o)$ be a smooth closed reduced subscheme of $S$
and consider the commutative pull-back diagram of $T$-schemes
$$
{\beta}gin{CD}
\pr{X} @>i>> X \\
@VV{\pr{\pi}}V @VV{\pi}V \\
\pr{S} @>{i_o}>> S \\
\end{CD}.
$$
Since also $\pr{\pi}$ is smooth, we can consider the Kodaira-Spencer class, or map, $\pr{{\rm KS}}$ attached to the morphisms $\pr{X}\stackrel{\pr{\pi}}{\rightarrow}\pr{S}\rightarrow T$.
When $X={\cal A}$ is a principally polarized abelian scheme,
$\pr{{\rm KS}}\in{\rm Hom}_{{\cal O}_{\pr{S}}}({\pr{\underline{{\omega}ega}}}^{\otimes 2},{\Omega}^1_{\pr{S}/T})$
as in \eqref{eq:KSmap}, where
${\pr{\underline{{\omega}ega}}}=\underline{{\omega}ega}_{\pr{{\cal A}}/\pr{S}}=\pi_*{\Omega}^1_{\pr{{\cal A}}/\pr{S}}=
{e^\prime_0}^*{\Omega}^1_{\pr{{\cal A}}/\pr{S}}$.
Since $i^*\pi^*{\Omega}^1_{S/T}={\pi^\prime}^*i_0^*{\Omega}^1_{S/T}$ and
$i^*{\Omega}^1_{X/S}{\sigma}meq{\Omega}^1_{\pr{X}/\pr{S}}$ canonically, applying $i^{*}$ to \eqref{eq:canexseq} yields an exact sequence
$$
0\longrightarrow{\pr{\pi}}^*i_0^*{\Omega}^1_{S/T}\longrightarrow
i^*{\Omega}^1_{X/T}\longrightarrow{\Omega}^1_{\pr{X}/\pr{S}}\longrightarrow 0,
$$
hence an element
${\rm KS}^*\in\mathrm{Ext}^1_{{\cal O}_{\pr{X}}}({\Omega}^1_{\pr{X}/\pr{S}},
{\pr{\pi}}^*i_0^*{\Omega}^1_{S/T})$. The composition
$\pr{S}\stackrel{i_0}{\rightarrow}S\rightarrow T$ defines a canonical surjective map
${\pr{\pi}}^*i_0^*{\Omega}^1_{S/T}\rightarrow{\pr{\pi}}^*{\Omega}^1_{\pr{S}/T}$.
In the same way, we get a surjective map $i^*{\Omega}^1_{X/T}\rightarrow{\Omega}^1_{\pr{X}/T}$.
These data define a commutative diagram of ${\cal O}_{X^\prime}$-modules
$$
{\beta}gin{CD}
0 @>>> {\pr{\pi}}^*i_0^*{\Omega}^1_{S/T} @>>> i^*{\Omega}^1_{X/T} @>>>
{\Omega}^1_{\pr{X}/\pr{S}} @>>> 0\\
@. @VVV @VVV @| \\
0 @>>> {\pr{\pi}}^*{\Omega}^1_{\pr{S}/T} @>>> {\Omega}^1_{\pr{X}/T} @>>>
{\Omega}^1_{\pr{X}/\pr{S}} @>>> 0\\
\end{CD}
$$
Standard diagram-chasing shows that
${\rm KS}^*\rightarrowsto\pr{{\rm KS}}$ under the canonical map of $\mathrm{Ext}^1$ groups.
The following result follows easily from the definitions.
{\beta}gin{pro}{\lambda}bel{th:KSforpullb}
Let ${\cal A}/S$ be an abelian scheme with $S$ smooth
over $T$, $(\pr{S},i_0)$ a
closed $T$-smooth subscheme of $S$ and
$\pr{{\cal A}}={\cal A}\times_S\pr{S}$. Let
${\rm KS}\colon\underline{{\omega}ega}^{\otimes2}\rightarrow{\Omega}^1_{S/T}$ and
${\rm KS}^\prime\colon{\pr{\underline{{\omega}ega}}}^{\otimes2}\rightarrow{\Omega}^1_{\pr{S}/T}$
be the corresponding Kodaira-Spencer maps. Then
$\pr{{\rm KS}}=\iota_0\circ i_0^*{\rm KS}$, where
$\iota_0\colon i_0^*{\Omega}^1_{S/T}\rightarrow{\Omega}^1_{\pr{S}/T}$
is the canonical pull-back map.
\end{pro}
Let again $X={\cal A}$ be an abelian scheme, and let $\phi\colon{\cal A}\rightarrow{\cal A}$
be an $S$-isogeny (i.~e., a surjective endomorphism such that $\pi\phi=\pi$). The pull-back
$\phi^*{\Omega}^\bullet_{{\cal A}/T}\rightarrow{\Omega}^\bullet_{{\cal A}/T}$ respects filtrations.
Thus we have maps $\phi^*(F^i/F^j)\rightarrow F^i/F^j$ for all $i\leq j$
because the sheaves $F^i$ are locally free. In particular, there is a
map of short exact sequences
$$
{\beta}gin{CD}
0 @>>> \phi^*\mathrm{gr}^{p+1} @>>> \phi^*(F^p/F^{p+2}) @>>>
\phi^*\mathrm{gr}^p @>>> 0 \\
@. @VVV @VVV @VVV \\
0 @>>> \mathrm{gr}^{p+1} @>>> F^p/F^{p+2} @>>> \mathrm{gr}^p @>>> 0 \\
\end{CD}
$$
where the bottom row is the tautological exact sequence of graded objects and the top
row is obtained by applying $\phi^*$ to it (again, it remains exact because the
sheaves are locally free).
Since $\phi$ is surjective, $\pi_*\phi^*=\pi_*$ as
functors and the previous diagram yields a map between the long exact sequences of derived functors
{\beta}gin{equation}
{\beta}gin{CD}
\ldots @>>> R^{p+q}\pi_*\mathrm{gr}^p @>>> R^{p+q+1}\pi_*\mathrm{gr}^{p+1} @>>>
\ldots \\
@. @VV{[\phi]_{p,q}}V @VV{[\phi]_{p+1,q}}V \\
\ldots @>>> R^{p+q}\pi_*\mathrm{gr}^p @>>> R^{p+q+1}\pi_*\mathrm{gr}^{p+1} @>>>
\ldots \\
\end{CD}
{\lambda}bel{cd:four}
\end{equation}
{\beta}gin{pro}{\lambda}bel{th:GMcomm}
Let ${\cal A}/S$ be an abelian scheme with $S$ smooth over $T$.
The algebra ${\rm End}_S({\cal A})$ acts linearly on the sheaves ${\delta}rham{q}({\cal A}/S)$.
If $\phi\in{\rm End}_S({\cal A})$ acts as $[\phi]$, then
$$
\nabla\circ[\phi]=(1\otimes[\phi])\nabla.
$$
\end{pro}
\par\noindent{\bf Proof. } Let $\phi\in{\rm End}_S({\cal A})$ be an isogeny. The endomorphism $[\phi]$ of
${\delta}rham{q}({\cal A}/S)$ attached to $\phi$ is the vertical map $[\phi]_{0,q}$ in diagram
\eqref{cd:four} at $R^q\pi_*\mathrm{gr}^0$. Under the identification
$R^{q+1}\pi_*\mathrm{gr}^1=R^{q+1}\pi_*(\pi^*{\Omega}^1_{S/T}\otimes_{{\cal O}_{{\cal A}}}{\Omega}^{\bullet-1}_{{\cal A}/T})=
{\Omega}^1_{S/T}\otimes_{{\cal O}_S}R^{q+1}\pi_*({\Omega}^{\bullet-1}_{{\cal A}/T})=
{\Omega}^1_{S/T}\otimes_{{\cal O}_S}{\delta}rham{q}({\cal A}/S)$
the Gau{\ss}-Manin connection is the connecting homomorphism for the
tautological exact sequence of graded objects, i.e. either horizontal connecting homomorphism in
\eqref{cd:four} at $p=0$. Moreover $[\phi]_{1,q}=1\otimes[\phi]_{0,q}$ since $\phi$ acts trivially on $S$, and the formula follows.
Let $s\in S$ be a geometric point and $A_s$ the fiber at $s$.
Without loss of generality we may assume that $S$ is connected; then Grothendieck's rigidity lemma \cite{Mum65} implies that the canonical map
${\rm End}_S({\cal A})\rightarrow{\rm End}(A_s)$ is injective. It follows that there exist
division algebras $D_1,\ldots,D_t$ such that
${\rm End}_S({\cal A})$ is identified with a subring of
${\rm M}_{n_1}(D_1)\times\cdots\times{\rm M}_{n_t}(D_t)$. The latter
algebra is spanned over $\mathbb Q$ by the invertible elements, so the result
follows by linearity. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
\subsection{Computations over $\mathbb C$}{\lambda}bel{se:KSoverC}
In order to compute explicitly the Kodaira-Spencer map for a complex family, i.e. when $T=\mathrm{Spec}(\mathbb C)$, it is more convenient to appeal to GAGA principles, work in the analytic category and follow \cite{Katz76, Harris81}. If ${\cal A}/S$ is a principally polarized family of abelian varieties
over the smooth complex variety $S$ and $U\subset S$ is an open set, the choice of a section
${\sigma}\in H^0(U,\dual{({\Omega}^1_S)})$ defines a map
$\varrho_{\sigma}\colon H^0(U,\underline{{\omega}ega})\rightarrow H^0(U,\dual\underline{{\omega}ega})$
by the composition
$$
H^0(U,\underline{{\omega}ega})\hookrightarrow
H^0(U,{\delta}rham{1})\stackrel{\nabla}{\rightarrow}
H^0(U,{\delta}rham{1}\otimes{\Omega}^1_{S/T})\stackrel{1\otimes{\sigma}}{\rightarrow}
H^0(U,{\delta}rham{1})\stackrel{\varrho}{\rightarrow}
H^0(U,\dual\underline{{\omega}ega}),
$$
where $\varrho$ is induced by the polarization pairing
$\scaldR\cdot\cdot\colon{\delta}rham{1}\otimes{\delta}rham{1}\rightarrow{\cal O}_S$.
The association ${\sigma}\rightarrowsto\varrho_{\sigma}$ defines a map
$\dual{({\Omega}^1_{S/T})}\rightarrow{\rm Hom}(\underline{{\omega}ega},\dual\underline{{\omega}ega})$
whose dual is the Kodaira-Spencer map ${\rm KS}$.
By \'{e}tale-ness, the actual computation of the map ${\rm KS}$ can be
obtained applying the above procedure to the pullback of the family
${\cal A}/S$ on the universal cover of $S$. For instance, for the universal family \eqref{eq:univEC}
{\beta}gin{equation}
{\rm KS}(d{\zeta}^{\otimes 2})=\frac{1}{2\pi i}dz,
{\lambda}bel{eq:Kodforell}
\end{equation}
where ${\zeta}$ is the standard complex coordinate on the elliptic curve $E_z=\mathbb C/(\mathbb Z\oplus\mathbb Z z)$, $z\in\mathfrak H$.
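This formula can also be recovered from the period matrix computation used in the proof of proposition \ref{th:KSoverC} below, specialized to relative dimension one: for a suitably oriented symplectic basis $\{r_1,r_2\}=\{z,1\}$ of $\mathbb Z\oplus\mathbb Z z$, so that $\Pi(z)=(z\;\;1)$, one gets
$$
{\rm KS}(d{\zeta}^{\otimes2})=\frac{1}{2\pi i}\,\frac{d\Pi}{dz}\,
\smallmat{}{1}{-1}{}\,{}^t\Pi\;dz=
\frac{1}{2\pi i}\,(1\;\;0)\smallmat{}{1}{-1}{}\vvec{z}{1}\;dz=\frac{1}{2\pi i}\,dz.
$$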
We follow this approach to compute the Kodaira-Spencer map for
the universal complex family of QM abelian surfaces over $X_\mathbb Ga$
using the Shimura family ${\cal A}^{\rm Sh}/\mathfrak H$ of
\eqref{eq:QMtori} in terms of the arithmetic of the maximal order
${\cal R}_{1}$.
Let $\underline{r}=\{r_1,\dots,r_4\}$ be a symplectic basis of ${\cal R}_{1}$.
By linear extension, the real dual basis $\{\dual{r_1},\dots,\dual{r_4}\}$
of $\dual{D_\infty}$ is a basis of
${\rm Hom}({\cal R}_{1}\otimes_\mathbb Z\mathbb C,\mathbb C){\sigma}meq{\Delta}rham{1}(D_\infty^z/{\cal R}_{1})$.
Thus, the elements $\dual{r_1},\dots,\dual{r_4}$ define global
$C^\infty$-sections of ${\delta}rham{1}({\cal A}^{\rm Sh}/\mathfrak H)$ with
constant periods, hence $\nabla$-horizontal. If $H$
denotes the $\mathbb C$-span of these sections, there is an isomorphism
${\delta}rham{1}({\cal A}^{\rm Sh}/\mathfrak H)=H\otimes_\mathbb C{\cal O}_{\mathfrak H}$.
In terms of this trivialization, $\nabla=1\otimes d$, where
$d$ is the exterior differentiation. Also,
{\beta}gin{equation}
\scaldR{\dual{r_i}}{\dual{r_j}}=\frac{1}{2\pi i}B_t(r_j,r_i),
\qquad i,j=1,\ldots,4,
{\lambda}bel{eq:derhampair}
\end{equation}
where the $2\pi i$ factor accounts for the difference of Tate twists
between singular and algebraic de Rham cohomology, e.~g. \cite[\S1]{Del82}.
Let ${\zeta}_1$ and ${\zeta}_2$ denote the standard coordinates in $\mathbb C^2$.
{\beta}gin{pro}{\lambda}bel{th:KSoverC}
$\left({\rm KS}(d{\zeta}_i\otimes d{\zeta}_j)\right)_{i,j=1,2}=
\frac1{2\pi i}\smallmat100{{\Delta}}\,dz.$
\end{pro}
\par\noindent{\bf Proof. } Write
$$
\left({\beta}gin{array}{c}
d{\zeta}_1 \\
d{\zeta}_2
\end{array}\right)=
\Pi_{\underline{r}}(z)
\left(
{\beta}gin{array}{c}
\dual{r_1} \\
\vdots \\
\dual{r_4}
\end{array}
\right)
$$
where $\Pi_{\underline{r}}(z)$ is the period matrix computed in terms of the
basis $\underline{r}$. Using \eqref{eq:derhampair} and the definitions we first obtain
$$
\varrho_{\sigma}\left({\beta}gin{array}{c}
d{\zeta}_1 \\
d{\zeta}_2
\end{array}\right)=
\frac1{2\pi i}\frac{d\Pi_{\underline{r}}(z)}{dz}
\left({\beta}gin{array}{cc}
0 & I_2 \\
-I_2 & 0
\end{array}\right)
\left({\beta}gin{array}{c}
r_1 \\
\vdots \\
r_4
\end{array}\right){\sigma}(dz),
$$
and finally
$$
\left({\rm KS}(d{\zeta}_i\otimes d{\zeta}_j)\right)_{i,j=1,2}=
\frac{1}{2\pi i}\frac{d\Pi_{\underline{r}}(z)}{dz}
\left({\beta}gin{array}{cc}
0 & I_2 \\
-I_2 & 0
\end{array}\right)
{}^t\Pi_{\underline{r}}(z)\,dz.
$$
To obtain the final formula, we make use of the Hashimoto model with
$N=1$.
In terms of the symplectic basis
$\underline{\eta}=\{\eta_1,\dots,\eta_4\}$
of theorem \ref{th:Hashimoto}
{\beta}gin{equation}
\Pi_{\underline{\eta}}(z)=\left(
{\beta}gin{array}{cccc}
\frac{\varpi^-}{2\sqrt{p_o}}({\alpha}^+a{\Delta} z+1) &
-\frac{1}{\sqrt{p_o}}({\alpha}^{+}a{\Delta} z+1)
& z & \frac12{\alpha}^+z \\
\\
\frac{{\alpha}^+}{2\sqrt{p_o}}a{\Delta} (z-{\alpha}^-) &
\frac{1}{\sqrt{p_o}}{\Delta}(-z+{\alpha}^-a) & 1 & \frac12{\alpha}^- \\
\end{array}\right)
{\lambda}bel{eq:permat}
\end{equation}
where ${\alpha}^{\pm}=1\pm\sqrt{p_o}$.
Plugging these values into the previous formula yields the result. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
\subsection{Maass operators.}{\lambda}bel{ss:maass}
When $D$ is split, the universal family
\eqref{eq:univEC} defines the
line bundle $\underline{{\omega}ega}=\underline{{\omega}ega}_{{\cal E}_N/{\cal Y}_1(N)}$
on the Zariski open set ${\cal Y}_1(N)$, the complement of the
cusp divisor $C$ in ${\cal X}_1(N)$. The Kodaira-Spencer map
${\rm KS}\colon\underline{{\omega}ega}^{\otimes2}\buildrel{\sigma}m\over\rightarrow{\Omega}^1_{{\cal Y}_1(N)}$
is an isomorphism.
{\beta}gin{thm}{\lambda}bel{th:KSextended}
The line bundle $\underline{{\omega}ega}$ extends uniquely to a line bundle, still
denoted
$\underline{{\omega}ega}$, on the complete curve ${\cal X}_1(N)$ and the Kodaira-Spencer
isomorphism extends to an isomorphism
$$
{\rm KS}\colon\underline{{\omega}ega}^{\otimes2}\buildrel{\sigma}m\over\longrightarrow{\Omega}^1_{{\cal X}_1(N)}(\log C).
$$
\end{thm}
\par\noindent{\bf Proof. } See \cite{Katz73} and also \cite[section~10.13]{KatMaz85} where the
extension property is discussed for a general representable
moduli problem. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
If $D$ is not split, the universal family \eqref{eq:univQMAV}
of QM-abelian surfaces defines the sheaf
$\underline{{\omega}ega}=\underline{{\omega}ega}_{{\cal A}_{{\Delta},N}/{\cal X}_1({\Delta},N)}$
and the Kodaira-Spencer map is a surjective map
${\rm KS}\colon\mathrm{Sym}^2\underline{{\omega}ega}\rightarrow{\Omega}^1_{{\cal X}_1({\Delta},N)}$.
Let $p$ be a prime such that $(p,N{\Delta})=1$ and let $v$ be a
place of a number field $L$ dividing $p$. The algebra
${\cal R}_{1}\otimes_\mathbb Z{\cal O}_v$ acts contravariantly and
${\cal O}_{v}$-linearly on $\underline{{\omega}ega}_{v}=\underline{{\omega}ega}\otimes{\cal O}_v$ by pull-back.
For any geometric point $s\in{\cal X}_1({\Delta},N)\otimes{\cal O}_v$
and any non-trivial idempotent $e\in{\cal R}_{1}\otimes_\mathbb Z{\cal O}_v$
there is a non-trivial decomposition $H^0(A_s,{\Omega}^1_{A_s/k(s)})=
eH^0(A_s,{\Omega}^1_{A_s/k(s)})\oplus(1-e)H^0(A_s,{\Omega}^1_{A_s/k(s)})$.
Therefore the subsheaf $e\underline{{\omega}ega}_{v}$ is a line subbundle.
Let $e\underline{{\omega}ega}_v\circ\invol{e}\underline{{\omega}ega}_v\subseteq\mathrm{Sym}^2\underline{{\omega}ega}_v$ be the line
bundle image
of $e\underline{{\omega}ega}_v\otimes\invol{e}\underline{{\omega}ega}_v$ under the natural
map $\underline{{\omega}ega}^{\otimes 2}_v\rightarrow\mathrm{Sym}^2{\underline{{\omega}ega}_v}$.
{\beta}gin{thm}{\lambda}bel{th:Ltbundles}
If $p$, $v$ and $e$ are as above, then the Kodaira-Spencer map defines an
isomorphism
$$
{\rm KS}\colon e\underline{{\omega}ega}_v\otimes\invol{e}\underline{{\omega}ega}_v\longrightarrow
{\Omega}^1_{{\cal X}_1({\Delta},N)/{\cal O}_v}
$$
of line bundles on $X_{1}({\Delta},N)$ defined over ${\cal O}_{v}$.
\end{thm}
\par\noindent{\bf Proof. }
We claim that the action of $r\otimes{\lambda}\in{\cal R}_{1}\otimes{\cal O}_v$ on the universal family \eqref{eq:univQMAV} base-changed to ${\cal O}_v$ gives rise to a commutative diagram
{\beta}gin{equation}
{\beta}gin{CD}
\underline{{\omega}ega} _v @>>> {\Omega}^1_{{\cal X}_1({\Delta},N)/{\cal O}_v}\otimes\dual{\underline{{\omega}ega}_v} \\
@VV{r\otimes{\lambda}}V @VV{1\otimes\invol r\otimes{\lambda}}V \\
\underline{{\omega}ega} _v @>>> {\Omega}^1_{{\cal X}_1({\Delta},N)/{\cal O}_v}\otimes\dual{\underline{{\omega}ega}_v}
\end{CD}
{\lambda}bel{cd:five}
\end{equation}
Indeed, under the Serre duality identification
$R^1\pi_*{\cal O}_{{\cal A}}{\sigma}meq\dual{\underline{{\omega}ega}_{{\cal A}/S}}$ for a principally polarized abelian scheme ${\cal A}/S$ the actions of ${\rm End}_S({\cal A})$ correspond up to Rosati involution. The
commutativity of the diagram \eqref{cd:five} follows from proposition
\ref{th:GMcomm} and \eqref{eq:KSfromGM}.
For an idempotent $e\in{\cal R}_{1}\otimes_\mathbb Z{\cal O}_v$, diagram
\eqref{cd:five} defines a map
$e\underline{{\omega}ega}_v\rightarrow{\Omega}^1\otimes\invol{e}(\dual{\underline{{\omega}ega}_v})$
which can be shown to be an isomorphism by the same deformation theory
argument as in \cite[Lemma 6]{DiaTay94}. This is enough to conclude,
because the sheaves $\invol{e}(\dual{\underline{{\omega}ega}_v})$
and $\invol{e}\underline{{\omega}ega}_v$ are dual of each other.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
{\beta}gin{rems}{\lambda}bel{re:LoverC}
\rm
{\beta}gin{enumerate}
\item We proved theorem \ref{th:Ltbundles} for $p$-adically complete
rings of scalars. In fact the projectors onto the quadratic
subfields of $D$ are defined over the $p$-adic localizations
of their rings of integers for almost all $p$.
Thus, in these cases, $e\underline{{\omega}ega}$ and the Kodaira-Spencer
isomorphism are defined over the subrings ${\cal O}_{(v)}\subset\mathbb C$.
\item If the idempotents $e$ and $\pr e=ded^{-1}$ are conjugate in ${\cal R}_1\otimes B$
for some ring $B$, then the action of $d$ on
$\underline{{\omega}ega}\otimes B$ defines an isomorphism of
$e\underline{{\omega}ega}$ with $\pr e\underline{{\omega}ega}$ over $B$.
\item In the complex case the isomorphism of theorem \ref{th:Ltbundles}
can be checked by a straightforward application of the
computation in section \ref{se:KSoverC}. For instance, in the
Hashi\-mo\-to model for $N=1$ of theorem \ref{th:Hashimoto} let
$d=ai+bj+cij\in D_{H}$ with
${\delta}=d^{2}=-a^{2}{\Delta}+b^{2}p_{o}+c^{2}{\Delta} p_{o}\in\mathbb Q$ and let
$e\in D\otimes_{\mathbb Q}\mathbb Q(\sqrt{{\delta}})$ be the idempotent giving
the projection onto $\mathbb Q(d)$. Then
$$
e=\frac{1}{2\sqrt{{\delta}}}
\left(
{\beta}gin{array}{cc}
\sqrt{{\delta}}+b\sqrt{p_{o}} & -a+c\sqrt{p_{o}} \\
(a+c\sqrt{p_{o}}){\Delta} & \sqrt{{\delta}}-b\sqrt{p_{o}}
\end{array}
\right)
$$
and
$$
\invol e=\frac{1}{2\sqrt{{\delta}}}
\left(
{\beta}gin{array}{cc}
\sqrt{{\delta}}+b\sqrt{p_{o}} & a+c\sqrt{p_{o}} \\
(-a+c\sqrt{p_{o}}){\Delta} & \sqrt{{\delta}}-b\sqrt{p_{o}}
\end{array}
\right).
$$
Therefore $e\underline{{\omega}ega}\circ\invol e\underline{{\omega}ega}$ is generated over $\mathfrak H$
by the global section
$$
(\sqrt{{\delta}}+b\sqrt{p_{o}})^{2}d{\zeta}_{1}\circ d{\zeta}_{1}+
(c^{2}p_{o}-a^{2})d{\zeta}_{2}\circ d{\zeta}_{2}+
2(\sqrt{{\delta}}+b\sqrt{p_{o}})c\sqrt{p_{o}}d{\zeta}_{1}\circ d{\zeta}_{2}
$$
whose image under the Kodaira-Spencer map ${\rm KS}$ is,
by proposition \ref{th:KSoverC},
{\beta}gin{equation}
\frac{1}{\pi i}(\sqrt{{\delta}}+b\sqrt{p_{o}})dz
\in\mathbb Ga(\mathfrak H,{\Omega}^{1}_{\mathfrak H}).
{\lambda}bel{eq:imkasec}
\end{equation}
Since ${\delta}\neq p_{o}b^2$ (else $p_o=(a/c)^2\in\mathbb Q$ which is impossible)
the section \eqref{eq:imkasec} does not vanish and the Kodaira-Spencer map is an isomorphism.
\end{enumerate}
\end{rems}
{\beta}gin{notat}
\rm We will denote ${\cal L}$ either the line bundle $\underline{{\omega}ega}_v$ on
${\cal Y}_{1}(N)$ or the line bundle $e\underline{{\omega}ega}_v$ on ${\cal X}_{1}({\Delta},N)$
for some choice of idempotent $e$ satisfying the hypotheses of
theorem \ref{th:Ltbundles} and such that $\invol e=e$. In either
case the Kodaira-Spencer map gives an isomorphism
$$
{\rm KS}\colon{\cal L}^{\otimes 2}\stackrel{{\sigma}m}{\longrightarrow}{\Omega}^{1}.
$$
With an abuse of notation we will denote also ${\cal L}$ the pullback of
the complexified bundle to $\mathfrak H$ under the natural quotient maps.
\end{notat}
If ${\gamma}\in\mathbb GaU({\Delta},N)$ the identities
$\mathbb Z^{2}\vvec{{\gamma}\cdot z}{1}=\inv{j({\gamma},z)}\mathbb Z^{2}\vvec{z}{1}$ and
$\Phi_\infty({\cal R}_{1})\vvec{{\gamma}\cdot z}{1}=
\inv{j({\gamma},z)}\Phi_\infty({\cal R}_{1})\vvec{z}{1}$
as subsets of $\mathbb C$ (in the split case) and of $\mathbb C^{2}$ (in the non-split
case) respectively, show that
the natural action of $\mathbb GaU({\Delta},N)$ on $\underline{{\omega}ega}$ over $\mathfrak H$ is scalar
multiplication by the automorphy factor. Thus the $\mathbb GaU({\Delta},N)$-action
extends to an ${\rm SL}_2(\mathbb R)$-homogeneous structure on $\underline{{\omega}ega}$,
and on $\mathrm{Sym}^2(\underline{{\omega}ega})$ as well.
Also, in the non-split case the fiber identifications
induced by the action are $D\otimes\mathbb C$-contravariant and
since the line bundle ${\cal L}$ is defined using the $D$ action on $\underline{{\omega}ega}$,
it is a homogeneous
line subbundle of $\mathrm{Sym}^2(\underline{{\omega}ega})$.
Let $n\in\mathbb Z$ and let $V_n$ be the $1$-dimensional representation of
$\mathbb C^\times$ given by the character $\chi_n(z)=z^n$. Let
${\cal V}_n=V_n\times\mathfrak H$
the homogeneous line bundle on $\mathfrak H$ with action
$g\cdot(v,z)=(\chi_n(j(g,z))v,g\cdot z)$. Since $-1\notin\mathbb GaU({\Delta},N)$ and
$\mathbb GaU({\Delta},N)$ has no elliptic elements, the
quotient $\mathbb GaU({\Delta},N)\backslash{\cal V}_n$ is a line bundle on $X_{1}({\Delta},N)$
which we shall
denote ${\cal V}_n$ again. Pick $v_n\in V_n$, $v_n\neq0$, and
let $\tilde{v}_n=(v_n,z)$ be the corresponding global constant
section of ${\cal V}_n$ over $\mathfrak H$. Also, let $s(z)$ be the global
section of ${\cal L}$ over $\mathfrak H$, defined up to a sign,
normalized so that
{\beta}gin{equation}
{\rm KS}(s(z)^{\otimes 2})=2\pi i\,dz.
{\lambda}bel{eq:kanormsec}
\end{equation}
Then ${\cal L}^{\otimes k}={\cal O}_{\mathfrak H}s(z)^{\otimes k}$ for all
$k\geq1$ and
there are identifications of homogeneous complex line bundles over
both $\mathfrak H$ and $X_1({\Delta},N)$
{\beta}gin{equation}
{\cal V}_2\buildrel{\sigma}m\over\longrightarrow{\Omega}^1,
\ \tilde{v}_2\rightarrowsto 2\pi i\,dz\qquad
\hbox{and}\qquad
{\cal V}_{k}\buildrel{\sigma}m\over\longrightarrow{{\cal L}}^{\otimes k},
\ \tilde{v}_{k}\rightarrowsto{s(z)}^{\otimes k}.
{\lambda}bel{eq:Eknsplit}
\end{equation}
These identifications preserve holomorphy
and are compatible with tensor products and
the Kodaira-Spencer isomorphisms. Note that $s(z)=\pm 2\pi i\,d{\zeta}$ in the split
case by \eqref{eq:Kodforell}, and see remark \ref{re:LoverC}.3 for the non-split case.
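In particular, writing sections of ${\cal V}_k$ over $\mathfrak H$ as $\phi\tilde{v}_k$ for a function $\phi$ on $\mathfrak H$, the section $\phi\tilde{v}_k$ descends to $X_1({\Delta},N)$ if and only if
$$
\phi({\gamma}\cdot z)=j({\gamma},z)^{k}\phi(z)\qquad
\hbox{for all ${\gamma}\in\mathbb GaU({\Delta},N)$,}
$$
so that, under \eqref{eq:Eknsplit}, holomorphic global sections of ${\cal L}^{\otimes k}$ correspond to holomorphic functions with the weight $k$ automorphy (in the split case, up to the usual conditions at the cusps).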
Following \cite{Katz76}, we shall define
differential operators associated to splittings of the Hodge sequence
\eqref{eq:Hodgeseq} where ${\cal A}/S$ is either the universal elliptic
curve ${\cal E}_N/{\cal Y}_1(N)$ or the universal QM-abelian surface
${\cal A}_{{\Delta},N}/{\cal X}_1({\Delta},N)$.
Over the associated differentiable manifold, i.e. after tensoring with the
sheaf of ${\cal O}_S$-algebras ${\cal O}^\infty_S={\cal C}^\infty(S^{\rm an})$
(a base change denoted with an $\infty$ subscript), the Hodge
decomposition
${\cal H}^1_\infty=\underline{{\omega}ega}_\infty\oplus\overline{\underline{{\omega}ega}}_\infty$
is a splitting of the Hodge sequence with projection
${\rm Pr}_{\infty}\colon{\cal H}^1_\infty\rightarrow\underline{{\omega}ega}_\infty$. For each $k\geq1$, let
${\Theta}_{k,\infty}^o$ be the operator defined by the composition
{\beta}gin{equation}
{\beta}gin{CD}
\mathrm{Sym}^k(\underline{{\omega}ega}_\infty)\subset\mathrm{Sym}^k({\delta}rham1)_\infty @>\nabla>>
\mathrm{Sym}^k({\delta}rham1)_\infty\otimes{\Omega}^1 @>{1\otimes\inv{{\rm KS}}}>>
\mathrm{Sym}^k({\delta}rham1)_\infty\otimes{\cal L}_{\infty} \\
@. @. @VV{{\rm Pr}_{\infty}^{\otimes k}\otimes1}V \\
@. @. \mathrm{Sym}^k(\underline{{\omega}ega})_\infty\otimes{\cal L}_{\infty} \\
\end{CD}
{\lambda}bel{eq:algmaassnsp}
\end{equation}
where the Gau{\ss}-Manin connection $\nabla$ extends to $\mathrm{Sym}^k$ by the
product rule. The composition
${{\cal L}}^{\otimes k}\subset\underline{{\omega}ega}^{\otimes k}\rightarrow\mathrm{Sym}^{k}(\underline{{\omega}ega})$ is injective;
let ${\Theta}_{k,\infty}$ be the restriction of ${\Theta}_{k,\infty}^o$ to
${{\cal L}}^{\otimes k}_{\infty}$.
{\beta}gin{pro}{\lambda}bel{th:operatorrestricts}
${\Theta}_{k,\infty}$ is an operator
${{\cal L}}^{\otimes k}_\infty\rightarrow{{\cal L}}^{\otimes k+2}_\infty$.
\end{pro}
\par\noindent{\bf Proof. } If $D$ is split then ${\cal L}^{\otimes k}=\underline{{\omega}ega}^{\otimes k}=\mathrm{Sym}^k(\underline{{\omega}ega})$ and
there is nothing to prove. If $D$ is non-split, the element
$e^{\otimes k}\in({\cal R}_{1}\otimes_{\mathbb Z}{\cal O}_{(v)})^{\otimes k}$ acting
componentwise defines a projection
$\underline{{\omega}ega}^{\otimes k}\rightarrow{\cal L}^{\otimes k}$ which factors through
$\mathrm{Sym}^k(\underline{{\omega}ega})$. By proposition \ref{th:GMcomm} the Gau{\ss}-Manin connection
$\nabla$ commutes with $e^{\otimes k}$ and also the Hodge projection
${\rm Pr}_{\infty}$ is the identity on ${\cal L}_{\infty}$ (in fact on $\underline{{\omega}ega}_{\infty}$).
The result follows. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
The operators ${\Theta}_{k,\infty}$ can be computed in terms of the complex
coordinate $z=x+iy\in\mathfrak H$ and the identifications \eqref{eq:Eknsplit}.
For any (say ${\cal C}^\infty$) function $\phi$ on $\mathfrak H$ let
$$
{\delta}_k(\phi)=\frac1{2\pi i}\left(\frac{d}{dz}+\frac{k}{2iy}\right)\phi.
$$
The operator ${\delta}_k$ was introduced, together with its higher
dimensional analogues, by Maass \cite{Maass53} and later extensively studied
by Shimura (see \cite[Ch.~10]{Hida93} and the references cited therein).
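We recall the standard equivariance property of ${\delta}_k$, which is the coordinate counterpart of the homogeneity discussed above: writing $(\phi|_{k}{\gamma})(z)=j({\gamma},z)^{-k}\phi({\gamma}\cdot z)$ for the weight $k$ action of ${\gamma}\in{\rm SL}_2(\mathbb R)$, a direct computation gives
$$
{\delta}_k(\phi|_{k}{\gamma})=({\delta}_k\phi)|_{k+2}{\gamma}.
$$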
{\beta}gin{pro}{\lambda}bel{th:maassincoord}
There are commutative diagrams of $C^\infty$-bundles and
differential operators
$$
{\beta}gin{CD}
{\cal V}_k @>{\sigma}m>> {\cal L}^{\otimes k} \\
@VV{\widetilde{{\delta}}_k}V @VV{{\Theta}_{k,\infty}}V \\
{\cal V}_{k+2} @>{\sigma}m>> {\cal L}^{\otimes k+2} \\
\end{CD}
$$
where $\widetilde{{\delta}}_n(\phi\tilde{v}_n)={\delta}_n(\phi)\tilde{v}_{n+2}$.
\end{pro}
\par\noindent{\bf Proof. } The diagram for $D$ split is but the simplest case (dimension
$1$) of \cite[theorem~6.5]{Harris81}. The computation in the non-split case
is very similar. Let $s$ be the ${\rm KS}$-normalized section of ${\cal L}$ as
in \eqref{eq:kanormsec},
$\underline{\eta}=\{\eta_1,\dots,\eta_4\}$ be Hashimoto's symplectic
basis of theorem \ref{th:Hashimoto}
and $\Pi=\Pi_{\underline{\eta}}(z)$ the period matrix as in \eqref{eq:permat}.
Since the sections $\dual{\eta_1},\ldots,\dual{\eta_4}$ are
$\nabla$-horizontal,
$$
\nabla\left(
{\beta}gin{array}{c}
d{\zeta}_1 \\
\\
d{\zeta}_2
\end{array}
\right)=d\Pi
\left({\beta}gin{array}{c}
\dual{\eta_1} \\
\vdots \\
\dual{\eta_4}
\end{array}\right)=
d\Pi\inv{\left(
{\beta}gin{array}{c}
\Pi \\
\\
\overline{\Pi}
\end{array}
\right)}
\left({\beta}gin{array}{c}
d{\zeta}_1 \\
d{\zeta}_2 \\
d\bar{{\zeta}}_1 \\
d\bar{{\zeta}}_2
\end{array}\right)=
\frac{\left(I_2,-I_2\right)}{z-\bar{z}}
\left({\beta}gin{array}{c}
d{\zeta}_1 \\
d{\zeta}_2 \\
d\bar{{\zeta}}_1 \\
d\bar{{\zeta}}_2
\end{array}\right)\otimes dz.
$$
Since $s$ is in the $\mathbb C$-span of $d{\zeta}_1$ and $d{\zeta}_2$,
$\nabla(s)=\left(\frac1{z-\bar{z}}s+s_0\right)\otimes dz$
with ${\rm Pr}_{\infty}(s_0)=0$. Plugging this into
${\Theta}_{k,\infty}(\phi s^{\otimes k})={\rm Pr}_{\infty}(1\otimes\inv{{\rm KS}})
\left(\frac{d\phi}{dz}s^{\otimes k}\otimes dz+
k\phi s^{\otimes k-1}\nabla(s)\right)$ yields the result. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
Let $B$ be a $p$-adic algebra with $(p,{\Delta} N)=1$ such that the idempotent
$e$ is defined over $B$ and the isomorphism of theorem \ref{th:Ltbundles}
holds for the sheaves base-changed to $B$.
Let ${\cal O}^{(p)}$ be the structure sheaf of the formal scheme
$S^{(p)}=\limproj{n}(S\otimes B/p^nB)^{p\mathrm{-ord}}$ obtained
taking out the non-ordinary points in characteristic $p$. Denote
${\cal M}^{(p)}$ the tensorization with ${\cal O}^{(p)}$ of the restriction to
$S^{(p)}$ of a sheaf ${\cal M}$.
In the split case the Dwork-Katz construction \cite[\S A2.3]{Katz73}
of the unique Frobenius-stable $\nabla$-horizontal
submodule ${\cal U}\subset{\delta}rham{1}\otimes B$ defines a splitting
$({\delta}rham{1})^{(p)}=\underline{{\omega}ega}^{(p)}\oplus{\cal U}$ with projection
${\rm Pr}_{p}\colon({\delta}rham{1})^{(p)}\rightarrow\underline{{\omega}ega}^{(p)}$. The construction can
be carried out in the non-split case as well. If $\pr{B}$ is a
$B$-algebra and $A_{\pr{B}}$ is a QM-abelian surface with ordinary reduction
and canonical subgroup $H$ (which, by ordinarity, is simply the
Cartier dual of the lift of the kernel of Verschiebung, an {\'e}tale group),
then $H\subset A[p]$ with $A[p]/H$ lifting an {\'e}tale group, and
$\phi(H)\subseteq H$ for every $\phi\in{\rm End}(A)$ by connectedness.
Thus $A/H$ is a QM-abelian
surface with a canonical embedding ${\cal R}_1\hookrightarrow{\rm End}(A/H)$ and the
construction of the Frobenius endomorphism of $({\delta}rham{1})^{(p)}$ and
its splitting follows. Assuming that the line bundle ${\cal L}$ is
defined over $B$, the same procedure as in
\eqref{eq:algmaassnsp} with the projection ${\rm Pr}_{\infty}$ replaced by ${\rm Pr}_{p}$
yields a differential operator
$$
{\Theta}_{k,p}^o\colon\mathrm{Sym}^k(\underline{{\omega}ega}^{(p)})\longrightarrow\mathrm{Sym}^k(\underline{{\omega}ega}^{(p)})\otimes{\cal L}^{(p)}.
$$
Let ${\Theta}_{k,p}$ be its restriction to $({\cal L}^{\otimes k})^{(p)}$.
{\beta}gin{pro}
${\Theta}_{k,p}$ is an operator
$({{\cal L}}^{\otimes k})^{(p)}\rightarrow({{\cal L}}^{\otimes k+2})^{(p)}$.
\end{pro}
\par\noindent{\bf Proof. } The argument is the same as in the proof of proposition
\ref{th:operatorrestricts}.
The action of the endomorphisms commutes with the pullback of forms
in the quotient $A\rightarrow A/H$ and so with the Frobenius endomorphism.
Since ${\cal U}$ is Frobenius-stable, the endomorphisms commute with the
projection ${\rm Pr}_{p}$.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
Let $\ast\in\{\infty,p\}$. The operators ${\Theta}_{k,\ast}$ can be iterated.
For all $r\geq1$ let
$$
{\Theta}_{k,\ast}^{(r)}=
{\Theta}_{k+2r-2,\ast}\circ\cdots\circ{\Theta}_{k,\ast}.
$$
Since the kernel of the projection ${\rm Pr}_\ast$ is $\nabla$-horizontal one has in fact
{\beta}gin{equation}
{\lambda}bel{eq:onlyonepr}
{\Theta}_{k,\ast}^{(r)}={\rm Pr}_\ast\left((1\otimes\inv{\rm KS})\nabla\right)^r.
\end{equation}
The operators ${\Theta}_{k,\infty}^{(r)}$ do not preserve holomorphy
because the Hodge projection ${\rm Pr}_{\infty}$ is not holomorphic. Similarly, the
operators ${\Theta}_{k,p}^{(r)}$ are only defined over
$p$-adically complete rings of integers.
Nonetheless, the operators ${\Theta}_{k,\ast}^{(r)}$ are algebraic over the
CM locus, in the following sense.
Let $x\in{\cal X}_{1}({\Delta},N)({\cal O}_{(v)})$ be represented by a $\tau\in\mathfrak H$ belonging to a
$p$-ordinary test triple $(\tau,v,e)$.
Let ${{\cal L}}(x)=x^*{{\cal L}}$ be the algebraic fiber at $x$.
The choice of an invariant form ${\omega}_o$ on
$A_x$ which generates either $H^0(A_x,{\Omega}^1\otimes{\cal O}_{(v)})$ (in the split case) or
$eH^0(A_x,{\Omega}^1\otimes {\cal O}_{(v)})$ (in the non-split case) over ${\cal O}_{(v)}$
identifies ${{\cal L}}(x)$ with a copy of ${\cal O}_{(v)}$.
{\beta}gin{pro}{\lambda}bel{teo:Thkalgebraic}
Let $x\in{\cal X}_{1}({\Delta},N)({\cal O}_{(v)})$ be a point represented by a
$p$-ordinary test triple and let ${\omega}_o$ be an invariant form on
$A_x$ as above. Then, for all
$r\geq1$, the operators ${\Theta}_{k,\ast}^{(r)}$ define maps
$$
{\Theta}_{k,\ast}^{(r)}(x)\colon
H^0({\cal X}_{1}({\Delta},N)\otimes{\cal O}_{(v)},{{\cal L}}^{\otimes k})\longrightarrow
{{\cal L}}^{\otimes k+2r}(x){\sigma}meq
{\cal O}_{(v)}{{\omega}_o}^{\otimes k+2r}.
$$
Moreover ${\Theta}_{k,\infty}^{(r)}(x)={\Theta}_{k,p}^{(r)}(x)$.
\end{pro}
\par\noindent{\bf Proof. } The result follows, as in \cite[theorem~2.4.5]{Katz78},
from the following observation. Let $A$ be an abelian
variety isogenous over ${\cal O}_{(v)}$ to a $g$-fold product of elliptic curves with
complex multiplications in the field $K$ and ordinary good reduction
modulo $v$. The CM splitting of the first de Rham group of $A$ is the splitting
${\Delta}rham{1}(A/{\cal O}_{(v)})=H_{{\sigma}_1}\oplus H_{{\sigma}_2}$ where
$H_{{\sigma}_i}$ is the ${\sigma}_i$-eigenspace under the action of complex
multiplications, $I_K=\{{\sigma}_1,{\sigma}_2\}$. The Hodge
decomposition ${\Delta}rham{1}(A)\otimes\mathbb C=H^{1,0}\oplus H^{0,1}$ and
the Dwork-Katz decomposition
${\Delta}rham{1}(A)\otimes B=H^0(A\otimes B,{\Omega}^1)\oplus U$ for some
$p$-adic ${\cal O}_{(v)}$-algebra $B$ are both obtained from the CM splitting by
a suitable tensoring. The result follows from the algebraicity of the
Gau{\ss}-Manin connection and the Kodaira-Spencer map, using the expression
\eqref{eq:onlyonepr}.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
For all $r\geq1$ write
$$
{\delta}_k^{(r)}={\delta}_{k+2r-2}\circ\cdots\circ{\delta}_k=
\left(\frac{1}{2\pi i}\right)^r\left(\frac{d}{dz}+\frac{k+2r-2}{2iy}\right)
\circ\cdots\circ\left(\frac{d}{dz}+\frac{k}{2iy}\right)
$$
and set ${\delta}_k^{(0)}(\phi)=\phi$.
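As a simple illustration, for the non-holomorphic function $y^{-s}$ one finds, using $\frac{\partial y}{\partial z}=\frac{1}{2i}$,
$$
{\delta}_k(y^{-s})=\frac{s-k}{4\pi}\,y^{-s-1},
\qquad\hbox{hence}\qquad
{\delta}_k^{(r)}(y^{-s})=\frac{(s-k)(s-k-1)\cdots(s-k-r+1)}{(4\pi)^{r}}\,y^{-s-r}.
$$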
\section{Expansions of modular forms}
\subsection{Serre-Tate theory.}{\lambda}bel{se:STtheory}
Let ${\mbox{\boldmath $k$}}$ be any field, $({\Lambda},\mathfrak m)$ a complete local noetherian ring
with residue field ${\mbox{\boldmath $k$}}$ and ${\cal C}$ the category of artinian local
${\Lambda}$-algebras with residue field ${\mbox{\boldmath $k$}}$. Let $\widetilde{A}$ be an abelian
variety over ${\mbox{\boldmath $k$}}$ of dimension $g$. By a fundamental result of
Grothendieck \cite[2.2.1]{Oort71}, the \emph{local moduli functor}
${\cal M}\colon{\cal C}\rightarrow\mbox{\bf Sets}$, which associates to each
$B\in{\rm Ob}\,{\cal C}$ the set of deformations of $\widetilde{A}$ to $B$, is
pro-represented by ${\Lambda}[[t_1,\ldots,t_{g^2}]]$.
When ${\mbox{\boldmath $k$}}$ is perfect of characteristic $p>0$ and ${\Lambda}=W_{\mbox{\boldmath $k$}}$ is the
ring of Witt vectors of ${\mbox{\boldmath $k$}}$,
deforming $\widetilde{A}$ is equivalent to deforming its formal group, as
made precise by Serre-Tate theory \cite[\S2]{Katz81}.
If ${\mbox{\boldmath $k$}}$ is algebraically closed and $\widetilde{A}$ is
\emph{ordinary}, an important consequence
of the Serre-Tate theory is that there is a canonical isomorphism of functors
$$
{\cal M}\buildrel{\sigma}m\over\longrightarrow{\rm Hom}(T_p\widetilde{A}\otimes T_p\widetilde{A}^t,\widehat{\mathbb G}_m),
$$
\cite[theorem 2.1]{Katz81}.
Write ${\cal M}=\mathrm{Spf}(\mathfrak R^u)$ with universal formal deformation
${\cal A}^u$ over $\mathfrak R^u$. The isomorphism endows ${\cal M}$ with a canonical
structure of formal torus and identifies its group of characters
$X({\cal M})={\rm Hom}({\cal M},\widehat{\mathbb G}_m)\subset\mathfrak R^u$
with the group $T_p\widetilde{A}\otimes T_p\widetilde{A}^t$. Denote $q_S$ the character corresponding to
$S\in T_p\widetilde{A}\otimes T_p\widetilde{A}^t$.
For a deformation ${\cal A}_{/B}$ of $\widetilde{A}$ with
$(B,\mathfrak m_B)\in{\rm Ob}\,\widehat{{\cal C}}$, let
$$
q({\cal A}_{/B};\cdot,\cdot)\colon
T_p\widetilde{A}\times T_p\widetilde{A}^t\longrightarrow\widehat{\mathbb G}_m(B)=1+\mathfrak m_B
$$
be the corresponding bilinear form.
When ${\mbox{\boldmath $k$}}$ is not algebraically
closed, the group structure on ${\cal M}\otimes\overline{{\mbox{\boldmath $k$}}}$
descends to a group structure on ${\cal M}$; for the details see
\cite[1.1.14]{Noot92}.
Let ${\cal N}\subset{\cal M}$ be a formal subgroup and
$\rho\colon X({\cal M})\rightarrow X({\cal N})$ the restriction map. The $\mathbb Z_p$-module
$N=\ker(\rho)$ is called the \emph{dual} of ${\cal N}$. Via Serre-Tate
theory, $N\subseteq T_p\widetilde{A}\otimes T_p\widetilde{A}^t$. Then
$$
{\cal N}\buildrel{\sigma}m\over\longrightarrow{\rm Hom}\left(\frac{T_p\widetilde{A}\otimes T_p\widetilde{A}^t}N,\widehat{\mathbb G}_m\right)
$$
and
{\beta}gin{eqnarray*}
\mbox{${\cal N}$ is a subtorus of ${\cal M}$} & \Longleftrightarrow &
\mbox{$X({\cal N}){\sigma}meq X({\cal M})/N$ is torsion-free} \\
& \Longleftrightarrow & \mbox{$N$ is a direct summand of $T_p\widetilde{A}\otimes T_p\widetilde{A}^t$.}
\end{eqnarray*}
\noindent To simplify some of the next statements, we shall henceforth assume that $p>2$.
{\beta}gin{pro}{\lambda}bel{th:maplift}
Let $\widetilde{f}\colon\widetilde{A}\rightarrow\widetilde{B}$ be a morphism of ordinary abelian
varieties
over ${\mbox{\boldmath $k$}}$. The morphism $\widetilde{f}$ lifts to a morphism
$f\colon{\cal A}\rightarrow{\cal B}$
of deformations over $B$ if and only if
$$
q({\cal A}_{/B};P,\widetilde{f}^t(Q))=q({\cal B}_{/B};\widetilde{f}(P),Q)\quad
\mbox{for all $P\in T_p\widetilde{A}$ and $Q\in T_p\widetilde{B}$}.
$$
In particular, if $(\widetilde{A},\wt{{\lambda}})$ is principally polarized,
the formal subscheme ${\cal M}^{\rm pp}$ that classifies
deformations of $\widetilde{A}$ with a
lifting ${\lambda}$ of the principal polarization is a subtorus whose group
of characters is
$$
X({\cal M}^{\rm pp})=\mathrm{Sym}^2(T_p\widetilde{A}).
$$
\end{pro}
\par\noindent{\bf Proof. } The first part of the statement is \cite[2.1.4]{Katz81}. For the
second part, the principal polarization $\wt{{\lambda}}$ identifies
$T_p\widetilde{A}{\sigma}meq T_p\widetilde{A}^t$. For a deformation ${\cal A}_{/B}$ let
$\pr{q}({\cal A}_{/B};P,\pr{P})=q({\cal A}_{/B};P,\wt{{\lambda}}(\pr{P}))$.
Then $\wt{{\lambda}}$ lifts to ${\cal A}$ if and only if $\pr{q}$ is symmetric,
and the submodule of symmetric maps is a direct summand.
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
The last part of the proposition can be rephrased by saying that there
is a commutative diagram
$$
{\beta}gin{CD}
T_p\widetilde{A}\otimes T_p\widetilde{A}^t @>{\sigma}m>> X({\cal M}) \\
@V{\sopra{\hbox{polarization}}{\hbox{$+$ quotient}}}VV
@VV\mbox{restriction}V \\
\mathrm{Sym}^2(T_p\widetilde{A}) @>{\sigma}m>> X({\cal M}^{\rm pp})
\end{CD}
$$
Concretely, if $\{P_1,\ldots,P_g\}$ and $\{P_1^t,\ldots,P_g^t\}$ are
$\mathbb Z_p$-bases of $T_p\widetilde{A}$ and of $T_p\widetilde{A}^t$ respectively,
the $g^2$ elements
$q_{i,j}=q({\cal A}^u_{/\mathfrak R^u};P_i,P_j^t)-1$ define an isomorphism
$\mathfrak R^u{\sigma}meq W_{{\mbox{\boldmath $k$}}}[[q_{i,j}]]$. If $\widetilde{A}$ is principally polarized
we may take $P_i=P_i^t$ under the identification
$T_p(\widetilde{A}){\sigma}meq T_p(\widetilde{A}^t)$. Then $q_{i,j}=q_{j,i}$ on
${\cal M}^{\rm pp}=\mathrm{Spf}(\mathfrak R^{\rm pp})$ and
$$
\mathfrak R^{\rm pp}{\sigma}meq W_{{\mbox{\boldmath $k$}}}[[q^{\rm pp}_{i,j}]],\qquad
\hbox{with $q^{\rm pp}_{i,j}={q_{i,j}}_{|{\cal M}^{\rm pp}}$, $1\leq i\leq j\leq g$.}
$$
More generally, if ${\cal N}=\mathrm{Spf}(\mathfrak R_{\cal N})$ is a subtorus with
$n={\rm rk}_{\mathbb Z_p}(N)$, a $\mathbb Z_p$-basis
$\{S_1,\ldots,S_n\}$ of $N$ can be completed to a basis
$\{S_1,\ldots,S_n,S_{n+1},\ldots,S_{g^2}\}$ of
$T_p\widetilde{A}\otimes T_p\widetilde{A}^t$. If $q_i=q_{S_i}({\cal A}^u_{/\mathfrak R^u})-1$ and
$q_i^{\cal N}={q_i}_{|{\cal N}}$, then $q^{\cal N}_{1}=\ldots=q^{\cal N}_{n}=0$ by construction
and $\mathfrak R_{\cal N}=W_{{\mbox{\boldmath $k$}}}[[q^{\cal N}_{n+1},\ldots,q^{\cal N}_{g^2}]]$.
Since $\widetilde{A}$ is ordinary, there is a canonical isomorphism
$T_p\widetilde{A}^t\buildrel{\sigma}m\over\rightarrow{\rm Hom}_B(\widehat{{\cal A}},\widehat{\mathbb G}_m)$
for any deformation ${\cal A}_{/B}$ of $\widetilde{A}$.
Composition with the pull-back of the standard invariant form $dT/T$ on
$\widehat{\mathbb G}_m$ yields a functorial $\mathbb Z_p$-linear homomorphism
${\omega}\colon T_p\widetilde{A}^t\rightarrow\underline{{\omega}ega}_{{\cal A}/B}$ which is compatible with morphisms
of abelian schemes, in the sense that if the morphism
$f\colon{\cal A}\rightarrow{\cal B}$ lifts the morphism $\widetilde{f}\colon\widetilde{A}\rightarrow\widetilde{B}$ of
abelian varieties over ${\mbox{\boldmath $k$}}$ then, \cite[lemma 3.5.1]{Katz81},
{\beta}gin{equation}
f^*({\omega}(P^t))={\omega}(\widetilde{f}^t(P^t)),
\qquad\mbox{for all $P^t\in T_p\widetilde{B}^t$.}
{\lambda}bel{eq:omcomp}
\end{equation}
By functoriality, the maps ${\omega}$ extend
to a well-defined $\mathbb Z_p$-linear homomorphism
$$
{\omega}^u\colon T_p\widetilde{A}^t\rightarrow\underline{{\omega}ega}_{{\cal A}^u/{\cal M}}
$$
whose $\mathfrak R^u$-linear extension
$T_p\widetilde{A}^t\otimes{\cal O}_{\cal M}\buildrel{\sigma}m\over\rightarrow\underline{{\omega}ega}_{{\cal A}^u/{\cal M}}$ is an isomorphism.
Thus, a choice of a $\mathbb Z_p$-basis $\{P_1^t,\ldots,P_g^t\}$ of
$T_p\widetilde{A}^t$ yields an identification
$\underline{{\omega}ega}_{{\cal A}^u/{\cal M}}=\left(\bigoplus_{i=1}^{g}\mathfrak R^u{\omega}_i\right)^{\rm sh}$
where ${\omega}_i={\omega}^u(P^t_i)$, $i=1,\ldots,g$ and the superscript
$(\ )^{\rm sh}$ denotes the sheafified module.
Suppose that $\widetilde{A}$ is principally polarized. Let
${\cal N}\subset{\cal M}^{\rm pp}$ be a subtorus with dual $N$
and let ${\cal A}_{\cal N}$ be the restriction over ${\cal N}$ of the
universal deformation ${\cal A}^u/{\cal M}$.
Let $\{S_1,\ldots,S_{\frac{g(g+1)}{2}}\}$ be a $\mathbb Z_p$-basis of
$\mathrm{Sym}^2(T_p\widetilde{A}^t)=\mathrm{Sym}^2(T_p\widetilde{A})$
such that $N=\bigoplus_{j=1}^n\mathbb Z_pS_j$. Let
${\omega}^{(2)}_i$ be the pullback to ${\cal A}_{\cal N}$ of $\mathrm{Sym}^2({\omega}^u)(S_i)$, and let
$q_i^{\rm pp}$ and $q_i^{\cal N}$ be the restrictions of the local
parameters $q_{S_i}$ constructed above, $i=1,\ldots,g(g+1)/2$.
{\beta}gin{pro}{\lambda}bel{th:KSforsub}
$$
{\rm KS}_{\cal N}({\omega}^{(2)}_i)=
\left\{
{\beta}gin{array}{ll}
0 & \mbox{for $i=1,\ldots,n$} \\
d\log(q_i^{\rm pp}+1)_{|{\cal N}} & \mbox{for $i=n+1,\ldots,g(g+1)/2$}
\end{array}\right.
$$
\end{pro}
\par\noindent{\bf Proof. } It is an immediate application of proposition
\ref{th:KSforpullb} to Katz's computations \cite[theorem 3.7.1]{Katz81}, because there are
identifications
$\mathrm{Sym}^2(\underline{{\omega}ega}_{{\cal A}_{\cal N}/{\cal N}})=\left(\bigoplus_{i=1}^{g(g+1)/2}\mathfrak R_{\cal N}{\omega}_i^{(2)}\right)^{\rm sh}$,
${\Omega}^1_{{\cal M}^{\rm pp}/W}=\left(\bigoplus_{i=1}^{g(g+1)/2}\mathfrak R^{\rm pp}dq_i^{\rm pp}\right)^{\rm sh}$,
${\Omega}^1_{{\cal N}/W}=\left(\bigoplus_{i=n+1}^{g(g+1)/2}\mathfrak R_{\cal N} dq_i^{\cal N}\right)^{\rm sh}$
and $dq_i^{\cal N}=0$ for $i=1,...,n$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
\paragraph{Elliptic curves and false elliptic curves.}
Let $(\tau,v,e)$ be a split $p$-ordinary test triple and denote
$A$ the corresponding CM curve (if $D$ is split) or
principally polarized QM-abelian surface (if $D$ is non split) with $\widetilde{A}$
its reduction modulo $p$.
Of the two embeddings $K\hookrightarrow\mathbb Q_p$, fix the one that makes the $K$ action on
$T_p\widetilde{A}\otimes\mathbb Q_p$ coincide with its natural $\mathbb Q_p$-vector space structure.
Let ${\cal M}$ be the local moduli functor corresponding to $\widetilde{A}$.
If $D$ is split, everything has already been implicitly described:
${\cal M}^{\rm pp}={\cal M}$
is a $1$-dimensional torus and $\mathfrak R^{\rm pp}=\mathfrak R^u=W[[q-1]]$ with
$q=q_{P\otimes P}$ for a $\mathbb Z_p$-generator $P$ of $T_p\widetilde{A}$. Also,
if ${\omega}_u={\omega}^u(P)$ then ${\omega}_u^{\otimes 2}=\mathrm{Sym}^2({\omega}^u)(P\circ P)$
and ${\rm KS}({\omega}_u^{\otimes 2})=d\log(q)$.
If $D$ is non-split,
let ${\cal N}={\cal N}_D$ be the subfunctor
$$
{\cal N}_D(B)=
\left\{
\sopra{\hbox{principally polarized deformations ${\cal A}_{/B}$ of $\widetilde{A}$
with a lift of}}
{\hbox{the endomorphisms given by elements of the
maximal order ${\cal R}_{1}$}}
\right\}
$$
The maximal order ${\cal R}_{1}$ acts naturally on $T_p\widetilde{A}$ and since
$e\in{\cal R}_{1}\otimes\mathbb Z_p$ we can find a $\mathbb Z_p$-basis $\{P,Q\}$ of $T_p\widetilde{A}$
such that $eP=P$ and $eQ=0$.
{\beta}gin{pro}{\lambda}bel{th:NR}
{\beta}gin{enumerate}
\item ${\cal N}=\mathrm{Spf}(\mathfrak R_{{\cal N}})$ is a $1$-dimensional subtorus of ${\cal M}^{\rm pp}$;
\item $\mathfrak R_{{\cal N}}=W_{{\mbox{\boldmath $k$}}}[[q-1]]$, where $q=q^{\cal N}_{P\circ P}$;
\item if ${\omega}_u$ denotes the pullback of ${\omega}^u(P)$ to ${\cal A}_{{\cal N}}$, then
${\rm KS}({\omega}_u^{\otimes 2})=d\log(q)$.
\end{enumerate}
\end{pro}
\par\noindent{\bf Proof. } It follows from proposition \ref{th:maplift} that ${\cal N}(B)$ is
identified with the set of the symmetric bilinear forms
$q\colon T_p\widetilde{A}\times T_p\widetilde{A}\rightarrow\widehat{\mathbb G}_m(B)$ such that
$q(rP_1,P_2)=q(P_1,\invol{r}P_2)$ for all $P_1,P_2\in T_p\widetilde{A}$ and $r\in{\cal R}_{1}$.
This makes clear that ${\cal N}$ is a subgroup, and that its
dual $N$ is the $\mathbb Z_p$-submodule generated by the elements
$$
\left\{
{\beta}gin{array}{l}
P_1\otimes P_2-P_2\otimes P_1 \\
rP_1\otimes P_2-P_1\otimes\invol{r}P_2
\end{array}\right.
\quad\mbox{for all $P_1,P_2\in T_p(\widetilde{A})$ and $r\in{\cal R}_{1}$.}
$$
Choose $u$ in the decomposition \eqref{eq:Dsplit} for the subfield
$F\subset D$ so that $u\in{\cal R}_{1}$, $\invol u=-u$ and
$\vass{\nu(u)}$ is minimal. In particular $u^{2}=-\nu(u)$ is a
square-free integer. Pick a basis of $D$ in ${\cal R}_{1}$ of
the form $\{1,r,u,ru\}$ with $r^{2}\in\mathbb Z$. From our choice of test
triple we can assume that $\mathbb Z[r]$ is an order in $F$ of conductor
prime to $p$, in particular $p$ does not divide $r^{2}$.
The submodule ${\cal R}=\mathbb Z\oplus\mathbb Z r\oplus\mathbb Z u\oplus\mathbb Z ru$ is actually an
order of discriminant $-16r^{4}u^{4}$ such that $\invol{\cal R}={\cal R}$.
Suppose that $p|u^{2}$ and let
$y=\frac1p\left(a+br+cu+dru\right)$ with $a$, $b$, $c$ and $d\in\mathbb Z$
be an element in ${\cal R}_{1}-{\cal R}$ such that $\bar y=y+{\cal R}$
generates the unique subgroup of order $p$ in ${\cal R}_{1}/{\cal R}$.
The conditions
${\rm tr}(y)\in\mathbb Z$, $\nu(y)\in\mathbb Z$, and $\invol{\bar y}=\pm\bar y$
easily imply that the coefficients $a$, $b$, $c$ and $d$ are all
divisible by $p$ and this is a contradiction. Thus
${\cal R}\otimes\mathbb Z_{p}={\cal R}_{1}\otimes\mathbb Z_{p}$ and this reduces
the set of generators for $N$ to
$$
{\beta}gin{array}{ccc}
P\otimes Q-Q\otimes P & rP\otimes Q-P\otimes rQ & uP\otimes
P+P\otimes uP \\
uP\otimes Q+P\otimes uQ & uQ\otimes Q+Q\otimes uQ &
ruP\otimes Q-P\otimes ruQ
\end{array}.
$$
From the relations $re=er$ and $ue=(1-e)u$ in ${\cal R}_{1}\otimes\mathbb Z_p$
we get that the elements $r$ and $u$ act on the basis $\{P,Q\}$ as the
matrices $\smallmat{{\alpha}}00{-{\alpha}}$ and $\smallmat 0{{\beta}}{{\gamma}}0$
respectively. Since ${\alpha}$, ${\beta}$ and ${\gamma}$ are $p$-units, finally $N$
turns out to be the $\mathbb Z_{p}$-module generated by
$$
P\otimes Q-Q\otimes P,\quad
{\gamma} P\otimes P+{\beta} Q\otimes Q,\quad
P\otimes Q+Q\otimes P
$$
and
$$
T_{p}\widetilde{A}\otimes T_{p}\widetilde{A}=N\oplus\mathbb Z_{p}(P\otimes P).
$$
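A quick check of the displayed decomposition, under the assumption that $p$ is odd:
the matrix expressing the three generators above together with $P\otimes P$ in the basis
$\{P\otimes P,P\otimes Q,Q\otimes P,Q\otimes Q\}$ has determinant $\pm2{\beta}$,
which is then a unit in $\mathbb Z_{p}$ because ${\beta}$ is a $p$-unit.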
This proves points 1 and 2, and the last part follows at once from
proposition \ref{th:KSforsub}.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
{\beta}gin{rem}{\lambda}bel{rm:OK}
\rm The fact that ${\cal N}$ is actually a subtorus can be reinterpreted,
as in \cite{Noot92}, in the more general context of Hodge and Tate
classes.
\end{rem}
\subsection{Power series expansion}
We shall now use the Serre-Tate theory to write a power
series expansion around an ordinary CM point of a modular form
$f\in M_{k,1}({\Delta},N)$ and compute the coefficients of this expansion in
terms of the Maass operators studied in section \ref{ss:maass}. We
assume that $N>3$.
Let $\hbox{\bf Sp}_D$ be the full subcategory of the category of rings
consisting of the rings $B$ such that ${\cal R}_{1}\otimes B{\sigma}meq M_{2}(B)$.
Note that if $(\tau,v,e)$ is a $p$-ordinary test triple, then ${\cal O}_v\in{\rm Ob}\hbox{\bf Sp}_D$.
For a ${\rm KS}$-normalized section $s(z)$ in \eqref{eq:kanormsec} the assignment
{\beta}gin{equation}
f(z)\rightarrowsto f^\ast(z)=f(z)s(z)^{\otimes k}
{\lambda}bel{eq:assignsplit}
\end{equation}
sets up an identification
$$
M_{k,1}({\Delta},N){\sigma}meq H^0(X_1({\Delta},N),{\cal L}^{\otimes k})
$$
defined up to a sign (the ambiguity obviously disappears for $k$ even).
The identification extends naturally to an identification of the bigger space
$M_{k,{\varepsilon}}^{\infty}({\Delta},N)$ of $C^\infty$-modular forms with the global sections of the associated $C^\infty$-bundle ${\cal L}^{\otimes k}_\infty$.
This ``geometric'' interpretation of modular forms can be used to endow
the space $M_{k,1}({\Delta},N)$ with a canonical $B$-structure for any subring
$B\subset\mathbb C$ of definition for ${\cal L}$ in $\hbox{\bf Sp}_D$.
In fact for \emph{any} ring $B$ in $\hbox{\bf Sp}_D$ such that
${\cal L}$ is defined over $B$
the space of modular forms defined over $B$ may be defined as
$$
M_{k,1}({\Delta},N;B)=H^0({\cal X}_1({\Delta},N)\otimes B,{\cal L}^{\otimes k}).
$$
Remark \ref{re:LoverC}.2 shows that this $B$-structure does not depend
on the choice of ${\cal L}$, i.e. on the choice of idempotent $e$.
If $\pr{B}$ is a flat $B$-algebra, the identification
$M_{k,1}({\Delta},N;B)\otimes\pr{B}=M_{k,1}({\Delta},N;\pr{B})$
follows from the usual properties of flat base change.
By smoothness, if $1/N{\Delta}\in B\subset\mathbb C$
then $M_{k,1}({\Delta},N;B)\otimes\mathbb C=M_{k,1}({\Delta},N;\mathbb C)=M_{k,1}({\Delta},N)$.
In fact the assignment \eqref{eq:assignsplit} is normalized so that
$f\in M_{k,1}({\Delta},N;B)$ if and only if its Fourier coefficients belong to $B$
($q$-expansion principle, e.g. \cite[Ch.~1]{Katz73},
\cite[theorem 4.8]{Harris81}).
Let ${\cal A}/{\cal X}$ be either universal family \eqref{eq:univEC}
or \eqref{eq:univQMAV}. Let $x\in{\cal X}({\cal O}_{(v)})$ be represented by a
point $\tau\in\mathbb CM_{{\Delta},K}$ in a split $p$-ordinary test triple
$(\tau,v,e)$. Denote $A_x$ the fiber of ${\cal A}/{\cal X}$ over $x$ and
$A_{\tau}$ the corresponding complex torus.
We will implicitly identify the ring $W_{\overline{k(v)}}$ of Witt
vectors of the algebraic closure of $k(v)$ with $\nr{\cal O}_{v}$.
For each $n\geq0$, let
$J_{x,n}={\cal O}_{{\cal X},x}/\mathfrak m_x^{n+1}$ and
$J_{x,\infty}=\limproj{n}J_{x,n}=\wh{{\cal O}}_{{\cal X},x}$.
By smoothness, there is a non-canonical isomorphism
$J_{x,\infty}{\sigma}meq{\cal O}_{(v)}[[u]]$.
For $n\in\mathbb N\cup\{\infty\}$ the family ${\cal A}/{\cal X}$ restricts to
abelian schemes ${\cal A}_{x,n}{}_{/J_{x,n}}$.
Tautologically $A_x={\cal A}_{x,0}$, and
${\cal A}_{x,n}={\cal A}_{x,\infty}\otimes J_{x,n}$ with respect to
the canonical quotient map $J_{x,\infty}\rightarrow J_{x,n}$.
Also, let $\nr{J}_{x,n}=J_{x,n}\wh{\otimes}\nr{\cal O}_{v}$ and
$\nr{{\cal A}}_{x,n}={\cal A}_{x,n}\otimes\nr{J}_{x,n}$.
Let ${\cal M}=\mathrm{Spf}({\cal R})$ be either the full local moduli functor (in
the split case) or its subtorus described in proposition \ref{th:NR}
(in the non-split case) associated with the reduction
$\widetilde{A}_x=A_x\otimes{\overline{k}}_v$ with universal formal deformation
${\cal A}_x/{\cal M}$. In either case ${\cal M}{\sigma}meq{\rm Hom}(T,\wh{\mathbb G}_m)$
where $T$ is a free $\mathbb Z_p$-module of rank 1.
Since the rings $\nr{J}_{x,n}$ are pro-$p$-Artinian,
there are classifying maps
$$
\phi_{x,n}\colon{\cal R}\longrightarrow\nr{J}_{x,n},\qquad
\hbox{for all $n\in\mathbb N\cup\{\infty\}$}
$$
such that $\nr{{\cal A}}_{x,n}={\cal A}_x\otimes_{\phi_{x,n}}\nr{J}_{x,n}$. Since
the abelian schemes ${\cal A}_{x,n}$ are the restriction of the
universal (global) family, the map $\phi_{x,\infty}$ is an isomorphism.
We will use it to transport the Serre-Tate parameter
$q_S-1\in{\cal R}$ and the formal sections ${\omega}_u$ constructed in section
\ref{se:STtheory} out of a choice of a $\mathbb Z_p$-generator $S$ of $T$
to the $p$-adic disc of points in ${\cal X}$ that reduce
modulo $\mathfrak p_v$ to the same geometric point in
${\cal X}\otimes{\overline{k}}_v$. Also, we can pull back the parameter along
the translation by $\inv x$ in ${\cal M}$ to obtain a local parameter $u_x$ at $x$ (depending on $S$), namely
$$
\nr{J}_{x,\infty}=\nr{{\cal O}}_v[[u_x]],\qquad
\hbox{with $u_x=\inv{q_S(x)}q_S-1$}.
$$
The complex uniformization of $A_x$ associated with the choice of
$\tau$ can be used to define transcendental periods. For any
${\omega}_o\in H^0(A_{x}(\mathbb C),{\cal L}(x))$ write
$$
{\omega}_o=p({\omega}_o,\tau)s(\tau)
\qquad p({\omega}_o,\tau)\in\mathbb C,
$$
under the isomorphism $A_x(\mathbb C){\sigma}meq A_\tau$.
For $f\in M_{k,1}({\Delta},N)$ define complex numbers
{\beta}gin{equation}
c^{(r)}(f,x,{\omega}_o)=
\frac{{\delta}_k^{(r)}(f)(\tau)}{p({\omega}_o,\tau)^{k+2r}}
\qquad r=0,1,2,\ldots.
{\lambda}bel{eq:defcrfx}
\end{equation}
The use of $x$ in the definition \eqref{eq:defcrfx} is justified by
the following fact.
{\beta}gin{pro}
Suppose that $f\in M_{k}(\mathbb Ga)$ for some Fuchsian group
of the first kind ${\cal R}_1^1\geq\mathbb Ga\geq\mathbb GaU({\Delta},N)$.
Then the numbers $c^{(r)}(f,x,{\omega}_o)\in\mathbb C$ do not depend on the
choice of $\tau$ in its $\mathbb Ga$-orbit.
\end{pro}
\par\noindent{\bf Proof. } For any ${\gamma}\in\mathbb Ga$, multiplication by $j({\gamma},\tau)^{-1}$ induces
an isomorphism of complex tori $A_\tau\buildrel{\sigma}m\over\rightarrow A_{{\gamma}\tau}$. Since $s$ is a global
constant section of ${\cal L}$ over $\mathfrak H$, under the standard
identifications of invariant forms, $s({\gamma}\tau)=s(\tau)j({\gamma},\tau)^{-1}$.
The assertion follows at once.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
The periods $p({\omega}_o,\tau)$ (and consequently the numbers
$c^{(r)}(f,x,{\omega}_o)$) can be normalized by choosing ${\omega}_o$ as in
proposition \ref{teo:Thkalgebraic}. For such a choice,
defined up to a $v$-unit, set
$$
{\Omega}_\infty={\Omega}_\infty(\tau)=p({\omega}_o,\tau),\qquad
c^{(r)}_v(f,x)=c^{(r)}(f,x,{\omega}_o).
$$
Also, define the \emph{$p$-adic period}
${\Omega}_p={\Omega}_p(x)\in{\cal O}_v^{{\rm nr},\times}$
(again defined up to a $v$-unit) as
$$
{\omega}_o={\Omega}_p{\omega}_u(x).
$$
Let $f\in M_{k,1}({\Delta},N;\nr{{\cal O}}_v)$. Over $\mathrm{Spf}(\nr{J}_{x,\infty})$
write $f^\ast=f_x{\omega}_u^{\otimes k}$ and
$$
{\rm jet}_x(f^\ast)=x^*{\rm jet}(f_x)\otimes{\omega}_u(x)^{\otimes k}
=\left(\sum_{n=0}^\infty\frac{b_n(f,x)}{n!}U_x^n\right)\,
{\omega}_u(x)^{\otimes k}
$$
where $f_x$ is expanded at $x$ in terms of the formal local parameter $U_x=\log(1+u_x)$.
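Here $U_x=\log(1+u_x)=u_x-\frac{u_x^{2}}2+\frac{u_x^{3}}3-\cdots$ as a formal series,
so that $U_x$ is again a formal local parameter at $x$ and, for instance, $b_0(f,x)=f_x(x)$.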
{\beta}gin{thm}{\lambda}bel{thm:equality}
Let $x\in{\cal X}_{{\Delta},N}({\cal O}_{(v)})$ be represented by a split
$p$-ordinary test triple $(\tau,v,e)$
and $f\in M_{k}({\Delta},N;{\cal O}_{(v)})$. Then, for all $r\geq0$,
$$
\frac{b_r(f,x)}{{\Omega}_p^{k+2r}}=c^{(r)}_v(f,x)\in{\cal O}_{(v)}.
$$
\end{thm}
\par\noindent{\bf Proof. } The case $r=0$ is clear, so let us assume that $r\geq1$.
We have $\nabla(f^\ast)=\nabla(f_x{\omega}_u^{\otimes k})=
df_x\otimes{\omega}_u^{\otimes k}+kf_x{\omega}_u^{\otimes k-1}\nabla({\omega}_u)$.
Since $\nabla({\omega}^u(P))\in H^0({\cal M},{\cal U})$ for each
$P\in T_p(\wt{A})$, \cite[theorem 4.3.1]{Katz81}, the term containing
$\nabla({\omega}_u)$ is killed by the projection ${\rm Pr}_{p}$.
Also, $dU_x=d\log(u_x+1)=d\log(q+1)$ doesn't depend on $x$ and we obtain
${\Theta}_{k,p}(f^\ast)=(df_x/dU_x){\omega}_u^{\otimes k+2}$. Iterating the
latter computation $r$ times and evaluating the result at $x$ yields
$$
{\Theta}_{k,p}^{(r)}(f^\ast)(x)=
\frac{d^rf_x}{d{U_x}^r}(x){\omega}_u^{k+2r}(x)=
\frac{b_r(f,x)}{{\Omega}_p^{k+2r}}{\omega}_o^{k+2r}.
$$
On the other hand, applying proposition \ref{th:maassincoord} $r$ times
and evaluating at $\tau$ yields
$$
{\Theta}_{k,\infty}^{(r)}(f^\ast)(x)=
{\delta}_k^{(r)}(f)(\tau)s(\tau)^{\otimes k+2r}=
c^{(r)}(f,x,{\omega}_o){\omega}_o^{k+2r}.
$$
The result follows from proposition \ref{teo:Thkalgebraic}. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
This result has a converse; to state it, we need the following preliminary discussion.
Let ${\cal D}$ be any domain of characteristic $0$ and field of quotients ${\cal K}$. The formal substitution $u=e^U-1=U+\frac1{2!}U^2+\frac1{3!}U^3+\cdots$ defines a bijection between the rings of formal power series ${\cal K}[[u]]$ and ${\cal K}[[U]]$. Under this bijection the ring ${\cal D}[[u]]$ is identified with a subring of the ring of \textit{Hurwitz series}, namely power series of the form
$$
\sum_{n=0}^\infty\frac{{\beta}_n}{n!}\,U^n,\qquad
\hbox{with ${\beta}_n\in{\cal D}$ for all $n=0,1,2,...$}.
$$
We say that a power series $\Phi(U)\in{\cal K}[[U]]$ is $u$-integral if $\Phi(U)=F(e^U-1)$ for some
$F(u)\in{\cal D}[[u]]$.
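For instance, if $F(u)=\sum_{n\geq0}a_nu^n\in{\cal D}[[u]]$, the first Hurwitz coefficients of
$\Phi(U)=F(e^U-1)$ are
$$
{\beta}_0=a_0,\qquad {\beta}_1=a_1,\qquad {\beta}_2=a_1+2a_2,\qquad {\beta}_3=a_1+6a_2+6a_3,
$$
and in general ${\beta}_n=\sum_{1\leq k\leq n}k!\,S(n,k)\,a_k$ for $n\geq1$, where the $S(n,k)$
are the Stirling numbers of the second kind.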
Denote $c_{n,r}$ the coefficients defined by the polynomial identity
$$
n!\vvec Xn=X(X-1)\cdots(X-n+1)=\sum_{r=0}^nc_{n,r}X^r.
$$
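For instance, $3!\vvec X3=X(X-1)(X-2)=X^{3}-3X^{2}+2X$, so that $c_{3,1}=2$, $c_{3,2}=-3$ and
$c_{3,3}=1$; the $c_{n,r}$ are the signed Stirling numbers of the first kind.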
The following possibly well known result is closely related to \cite[Th\'eor\`eme 13]{Serre73}.
{\beta}gin{thm}
A Hurwitz series $\Phi(U)=\sum_{n=0}^\infty\frac{{\beta}_n}{n!}\,U^n$ is $u$-integral if and only if
$\frac1{d!}(c_{d,1}{\beta}_1+c_{d,2}{\beta}_2+\cdots+c_{d,d}{\beta}_d)\in{\cal D}$ for all $d=1,2,...$.
\end{thm}
\par\noindent{\bf Proof. } Let $F(u)=\Phi(\log(1+u))\in{\cal K}[[u]]$. For any polynomial
$P(X)=p_0+p_1X+\cdots+p_dX^d\in{\cal K}[X]$ of degree $d$, an immediate chain rule computation yields
$$
\left.P\left((u+1)\frac d{du}\right)F(u)\right|_{u=0}=p_0{\beta}_0+p_1{\beta}_1+...+p_d{\beta}_d.
$$
Also
$\left.P\left((u+1)\frac d{du}\right)F(u)\right|_{u=0}=\left.P\left((u+1)\frac d{du}\right)F_d(u)\right|_{u=0}$
where $F_d(u)$ is the degree $d$ truncation of $F(u)$, i.e. $F(u)=F_d(u)+u^{d+1}H(u)$. On the space of polynomials of degree $\leq d$ the substitution $u=v-1$ is defined over ${\cal D}$ and
$\left.P\left((u+1)\frac d{du}\right)F_d(u)\right|_{u=0}=
\left.P\left(v\frac d{dv}\right)F_d(v-1)\right|_{v=1}$.
Since $P\left(v\frac d{dv}\right)v^k=P(k)v^k$, the argument shows that if $\Phi(U)$ is
$u$-integral, then the expression $p_0{\beta}_0+p_1{\beta}_1+...+p_d{\beta}_d$ is a ${\cal D}$-linear combination of the values $P(0)$, $P(1), ..., P(d)$.
On the other hand, the argument also shows that
$$
\frac1{d!}(c_{d,1}{\beta}_1+c_{d,2}{\beta}_2+\cdots+c_{d,d}{\beta}_d)=
\left.\vvec{v\,d/dv}{d}F_d(v-1)\right|_{v=1}
$$
is the coefficient of $u^d$ in $F(u)$.
Therefore, we obtain that $\Phi(U)$ is $u$-integral if and only if
$p_0{\beta}_0+p_1{\beta}_1+...+p_d{\beta}_d\in{\cal D}$ for every polynomial
$P(X)=p_0+p_1X+\cdots+p_dX^d\in{\cal K}[X]$ such that $P(0)$,
$P(1), ..., P(d)\in{\cal D}$. We conclude by observing that a degree $d$ polynomial $P(X)\in{\cal K}[X]$ such that $P(0)$, $P(1), ..., P(d)\in{\cal D}$ is necessarily \textit{numeric}, i.e. $P(\mathbb N)\subset{\cal D}$, and that the
${\cal D}$-module of numeric polynomials is free, generated by the binomial coefficients. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
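The criterion is easy to test numerically. The following short script is a toy check, not part
of the argument: we take ${\cal D}=\mathbb Z$, the sample coefficients and all names are ad hoc, and it
verifies the identity established in the proof, namely that
$\frac1{d!}\sum_{r=1}^{d}c_{d,r}{\beta}_r$ recovers the coefficient of $u^d$ in an integral series $F(u)$.
\begin{verbatim}
# Toy check of the u-integrality criterion for D = Z (ad hoc names and values).
from sympy import symbols, exp, series, expand, factorial, Poly

U, X = symbols('U X')
a = [3, -1, 4, 1, -5, 9, 2, 6]   # F(u) = sum_n a[n]*u^n, an arbitrary integral truncation
Dmax = len(a) - 1

# Hurwitz coefficients beta_n = n! * [U^n] Phi(U), where Phi(U) = F(e^U - 1)
u = exp(U) - 1
Phi = expand(series(sum(a[n]*u**n for n in range(len(a))), U, 0, Dmax + 1).removeO())
beta = [factorial(n) * Phi.coeff(U, n) for n in range(Dmax + 1)]

# c_{d,r} defined by X(X-1)...(X-d+1) = sum_r c_{d,r} X^r
for d in range(1, Dmax + 1):
    falling = 1
    for j in range(d):
        falling *= (X - j)
    c = Poly(expand(falling), X).all_coeffs()[::-1]   # ascending powers of X
    val = sum(c[r]*beta[r] for r in range(1, d + 1)) / factorial(d)
    assert val == a[d]   # by the proof, this is the coefficient of u^d in F
    print(d, val)
\end{verbatim}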
Note that when ${\cal D}$ is a ring of algebraic integers, or one of its non-archimedean completions, the conditions of the theorem can be readily rephrased in terms of congruences, known as \textit{Kummer-Serre congruences}.
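Concretely, the first conditions read
$$
\frac{{\beta}_2-{\beta}_1}{2}\in{\cal D},\qquad
\frac{{\beta}_3-3{\beta}_2+2{\beta}_1}{6}\in{\cal D},\qquad
\frac{{\beta}_4-6{\beta}_3+11{\beta}_2-6{\beta}_1}{24}\in{\cal D},
$$
so that for ${\cal D}=\mathbb Z_p$ they amount to congruences between the ${\beta}_d$ modulo the power of
$p$ dividing $d!$.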
Denote $L^{v, {\rm sc}}$ the compositum of all finite extensions
$L\subseteq F$ such that $v$ splits completely in $F$ and let ${\cal O}^{\rm sc}_{(v)}$
be the integral closure of ${\cal O}_{(v)}$ in $L^{v, {\rm sc}}$.
{\beta}gin{thm}[Expansion principle]{\lambda}bel{thm:expanprinc}
Let $f\in M_{k,1}({\Delta},N)$ and $x\in{\cal X}({\Delta},N)({\cal O}_{(v)})$ be represented
by a split $p$-ordinary test triple $(\tau,v,e)$ such that the numbers
$c^{(r)}_v(f,x)\in{\cal O}_{(v)}$ for all $r\geq0$ and
the $p$-adic numbers ${\Omega}_p^{2r}c^{(r)}_v(f,x)$ satisfy the Kummer-Serre
congruences. Then $f$ is defined over ${\cal O}^{\rm sc}_{(v)}$.
\end{thm}
\par\noindent{\bf Proof. } Choose a field embedding $\imath\colon\mathbb C\rightarrow\mathbb C_{p}$ to
view $f\in M_{k,1}({\Delta},N;\mathbb C_{p})$. For all $r\geq0$ set $c_r=c^{(r)}_v(f,x)$ and
${\beta}_{r}=c_r{\Omega}_{p}^{{\epsilon}_Dk+2r}\in\mathbb C_{p}$.
Unwinding the computations that led to the equality in theorem
\ref{thm:equality} shows that
${\rm jet}_x(f^\ast)=\left(\sum_{r\geq0}\frac{{\beta}_r}{r!}U_x^r\right)
{\omega}_u(x)^{\otimes k}\in(\nr{J}_{x,\infty}\otimes\mathbb C_p){\omega}_u(x)^{\otimes k}$.
We claim that ${\rm jet}_x(f^\ast)$ is defined over ${\cal O}_v$.
Write
$$
{\rm jet}_x(f^\ast)=
\left(\sum_{r\geq0}\frac{{\beta}_r}{r!}U_x^r\right){\omega}_u(x)^{\otimes k}=
\left(\sum_{r\geq0}\frac{c_r}{r!}({\Omega}_p^2U_x)^r\right){\omega}_o^{\otimes k}.
$$
Since the ${\beta}_r$ are $v$-integral and satisfy the Kummer-Serre
congruences, the first equality shows that ${\rm jet}_x(f^\ast)$ is $v$-integral.
Since the formal substitution $u=e^U-1$ preserves the field of
definition, the claim follows from the second equality if we check
that the formal local parameter ${\Omega}_p^2U_x$ is defined over $L_v$.
The group $\mathbb Aut(\mathbb C_p/L_v)$ acts on the section ${\omega}_u$ via the action
of its quotient $\mathbb Gal(\nr{L}_v/L_v){\sigma}meq\mathbb Gal(\overline{k}_v/k_v)$
on $T_p(\wt{A}_x)$ which is scalar because $\wt{A}_x$ is either an
elliptic curve or isogenous to a product of elliptic curves.
Thus, the section ${\Omega}_p{\omega}_u$, whose restriction at $x$ is
defined over $L_v$, is itself defined over $L_v$. Therefore
${\Omega}_p^2\,dU_x={\rm KS}({\Omega}_p^2{\omega}_u^{\otimes 2})$ is defined over
$L_v$, and so is ${\Omega}_p^2U_x$ because it is a priori defined over
$\nr{L}_v$ and its value at the point $x$, defined over
$L_v$, is $0$.
We can now use the very same arguments as in Katz's proof \cite{Katz73}
of the $q$-expansion principle
to conclude that the section $f^\ast$ is defined over ${\cal O}_v$.
Indeed, observe that the $q$-expansion of a modular form $f$ at the cusp $s$
multiplied by the right power of the canonical Tate form is
${\rm jet}_s(f^\ast)$. The specific nature of a cusp in the modular curve
plays no role in Katz's proof, which works just as well when the cusp
is replaced by any point of a smooth curve.
Since $f^\ast$ is defined over $\mathbb C$ and over ${\cal O}_v$, the modular
form $f$ is defined over the integral closure of ${\cal O}_{(v)}$ in the largest
subfield $F\subset\mathbb C$ such that $\imath(F)\subseteq L_v$. The
assertion follows from the arbitrariness of the choice of $\imath$,
since $L^{v, {\rm sc}}$ can be characterized as the largest subfield
of $\mathbb C$ whose image under all the embeddings $\mathbb C\rightarrow\mathbb C_p$ is contained
in $L_v$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
\section{$p$-adic interpolation}
\subsection{$p$-adic $K^{\times}$-modular forms}{\lambda}bel{se:padicforms}
A \textit{weight} for the quadratic imaginary field $K$ is a formal linear combination
${\underline{w}}=w_1{\sigma}_1+w_2{\sigma}_2\in\mathbb Z[I_K]$, which
will be also written ${\underline{w}}=(w_1,w_2)$. Following our conventions, write
$z^{\underline{w}}=z^{w_1}\bar{z}^{w_2}$ for all $z\in\mathbb C$. Also, let
$\bar{{\underline{w}}}=(w_{2},w_{1})$ and $\vass{\underline{w}}=w_1+w_2$, so that
${\underline{w}}+\bar{{\underline{w}}}=\vass{\underline{w}}\underline{1}$ with $\underline{1}=(1,1)$.
{\beta}gin{dfn}[Hida \cite{Hida86}]
Let $E\supseteq K$ be a subfield of $\mathbb C$. The space
$\widetilde{S}_{{\underline{w}}}(\mathfrak n;E)$ of $K^{\times}$-modular forms of weight ${\underline{w}}$ and level $\mathfrak n$ with values
in $E$ is the space of functions $\tilde{f}\colon{\cal I}_{\mathfrak n}\rightarrow E$ such that
$$
\tilde{f}(({\lambda})I)={\lambda}^{{\underline{w}}}\tilde{f}(I)
$$
for all ${\lambda}\in K^{\times}_{\mathfrak n}$.
\end{dfn}
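For instance, with these conventions the ideal norm $I\rightarrowsto N_{K/\mathbb Q}(I)^{k}$ belongs to
$\widetilde{S}_{(k,k)}(\mathfrak n;K)$ for every level $\mathfrak n$ and every $k\in\mathbb Z$, since the principal
ideal $({\lambda})$ has norm ${\lambda}\bar{{\lambda}}={\lambda}^{(1,1)}$ for all ${\lambda}\in K^{\times}_{\mathfrak n}$.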
\noindent A remarkable subset of $\widetilde{S}_{{\underline{w}}}(\mathfrak n)=\widetilde{S}_{{\underline{w}}}(\mathfrak n;\mathbb C)$ is the set of algebraic
Hecke characters of type {$\mathrm{A}_{0}$},
$$
\widetilde{\Xi}_{{\underline{w}}}(\mathfrak n)=\widetilde{S}_{{\underline{w}}}(\mathfrak n)\cap{\rm Hom}({\cal I}_{\mathfrak n},\mathbb C^{\times}).
$$
A well-known property noted by Weil \cite{Weil55}
is that for every $\tilde{\xi}\in\widetilde{\Xi}_{{\underline{w}}}(\mathfrak n)$ there exists a
number field $E_{\tilde{\xi}}$ such that $\tilde{\xi}\in\widetilde{S}_{{\underline{w}}}(\mathfrak n;E_{\tilde{\xi}})$.
A classical construction identifies the space $\widetilde{S}_{{\underline{w}}}(\mathfrak n)$
with the space $S_{{\underline{w}}}(\mathfrak n)$ of functions $f\colon{K_{\A}^{\times}}\rightarrow\mathbb C$ such that
{\beta}gin{equation}
f(s{\lambda} zu)=z^{-{\underline{w}}}f(s)
\quad
\hbox{for all ${\lambda}\in K^{\times}$, $z\in\mathbb C^{\times}$ and $u\in U_{\mathfrak n}$.}
{\lambda}bel{eq:ftilde}
\end{equation}
If $\tilde{f}\leftrightarrow{f}$ under this identification, then
{\beta}gin{equation}
\tilde{f}(I)={f}(s)\quad
\hbox{whenever $I=[s]$ and $s_{v}=1$ for $v=\infty$ and $v|\mathfrak n$.}
{\lambda}bel{eq:weilrel}
\end{equation}
This relation can be used to recognize $\widetilde{S}_{{\underline{w}}}(\mathfrak n;E)$ in ${S}_{{\underline{w}}}(\mathfrak n)$.
Since $U_{c{\cal O}_K}<\widehat{\calO}_{K,c}^\times$, the functions
${f}\colon{K_{\A}^{\times}}\rightarrow\mathbb C$ satisfying the relation in \eqref{eq:ftilde}
for all ${\lambda}\in K^{\times}$, $z\in\mathbb C^{\times}$ and
$u\in\widehat{\calO}_{K,c}^{\times}$ form a linear subspace
${S}_{{\underline{w}}}({\cal O}_{K,c})\subset{S}_{{\underline{w}}}(c{\cal O}_{K})$.
The subspace ${S}_{{\underline{w}}}({\cal O}_{K,c})$
includes the Hecke characters trivial on $\widehat{\calO}_{K,c}^\times$, namely
${\Xi}_{\underline{w}}({\cal O}_{K,c})={S}_{\underline{w}}({\cal O}_{K,c})\cap
{\rm Hom}({K_{\A}^{\times}}/K^\times\widehat{\calO}_{K,c}^\times,\mathbb C^\times)$.
Denote
$$
\mathbb CC_{\mathfrak n}={K_{\A}^{\times}}/K^{\times}\mathbb C^{\times}U_{\mathfrak n}{\sigma}meq
{\cal I}_{\mathfrak n}/P_{\mathfrak n},\qquad
\mathbb CC^{\sharp}_{c}={K_{\A}^{\times}}/K^\times\mathbb C^\times\widehat{\calO}_{K,c}^\times.
$$
and let $h_{\mathfrak n}=\vass{\mathbb CC_{\mathfrak n}}$ and
$h^{\sharp}_{c}=\vass{\mathbb CC^{\sharp}_{c}}$. Clearly $h^{\sharp}_{c}\mid h_{c{\cal O}_{K}}$.
{\beta}gin{lem}{\lambda}bel{th:charchar}
{\beta}gin{enumerate}
\item ${\Xi}_{\underline{w}}({\cal O}_{K})\neq\emptyset$ if and only if $({\cal O}_{K}^{\times})^{{\underline{w}}}=1$.
\item ${\Xi}_{\underline{w}}({\cal O}_{K,2})={\Xi}_{\underline{w}}(2{\cal O}_K)$ and they are non-empty if and only if
$\vass{\underline{w}}$ is even.
\item If $c>2$ then ${\Xi}_{\underline{w}}(c{\cal O}_{K})\neq\emptyset$ and
${\Xi}_{\underline{w}}({\cal O}_{K,c})\neq\emptyset$ if and only if $\vass{\underline{w}}$ is even.
\item ${\Xi}_{\underline{w}}({\cal O}_{K,c})$ and ${\Xi}_{\underline{w}}(c{\cal O}_{K})$ are
bases for ${S}_{\underline{w}}({\cal O}_{K,c})$ and ${S}_{\underline{w}}(c{\cal O}_{K})$ respectively.
\end{enumerate}
\end{lem}
\par\noindent{\bf Proof. } Let $U<U_1$ and $\mathbb CC_U={K_{\A}^{\times}}/K^{\times}\mathbb C^{\times}U$. Then there is a short exact sequence
$$
1\longrightarrow\frac{\mathbb C^{\times}}{H_U}\longrightarrow
\frac{{K_{\A}^{\times}}}{K^\times U}\longrightarrow\mathbb CC_U\longrightarrow1
$$
where $H_U=\mathbb C^\times\cap K^{\times}U$. The first three points follow from the observation that
$$
H_{U_c}=
{\beta}gin{cases}
{\cal O}_K^\times & \text{if $c=1$}, \\
\{\pm1\} & \text{if $c=2$}, \\
\{1\} & \text{if $c>2$},
\end{cases}
\qquad
H_{\widehat{\calO}_{K,c}^\times}=
{\beta}gin{cases}
{\cal O}_K^\times & \text{if $c=1$}, \\
\{\pm1\} & \text{if $c\geq2$}.
\end{cases}
$$
For the last part, observe that multiplication by any
$\xi\in{\Xi}_{\underline{w}}(c{\cal O}_K)$ defines, for every weight ${\underline{w}}^\prime$,
an isomorphism ${S}_{{\underline{w}}^\prime}(c{\cal O}_K)\buildrel{\sigma}m\over\rightarrow{S}_{{\underline{w}}+{\underline{w}}^\prime}(c{\cal O}_K)$
which identifies the respective sets of Hecke characters.
When $\xi\in\Xi_{\underline{w}}({\cal O}_{K,c})\neq\emptyset$, the isomorphism restricts to an isomorphism of the subspaces ${S}_{{\underline{w}}^\prime}({\cal O}_{K,c})\buildrel{\sigma}m\over\rightarrow{S}_{{\underline{w}}+{\underline{w}}^\prime}({\cal O}_{K,c})$.
So, we are reduced to checking the assertion in the case of the null weight
$\underline{0}=(0,0)$, which is clear because
${S}_{\underline{0}}(c{\cal O}_{K})$ and ${\Xi}_{\underline{0}}(c{\cal O}_{K})$
(respectively ${S}_{\underline{0}}({\cal O}_{K,c})$ and ${\Xi}_{\underline{0}}({\cal O}_{K,c})$)
are the set of functions on the finite abelian group
$\mathbb CC_c$ (respectively $\mathbb CC^{\sharp}_{c}$) and its Pontryagin dual.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
If $\mathfrak m|\mathfrak n$ the inclusion ${\cal I}_{\mathfrak n}<{\cal I}_{\mathfrak m}$ defines a
natural restriction map
{\beta}gin{equation}
\widetilde{S}_{{\underline{w}}}(\mathfrak m)\rightarrow\widetilde{S}_{{\underline{w}}}(\mathfrak n).
{\lambda}bel{eq:restr}
\end{equation}
{\beta}gin{lem}{\lambda}bel{th:injec}
The restriction maps \eqref{eq:restr} are injective.
\end{lem}
\par\noindent{\bf Proof. } We can assume that $\mathfrak n=\mathfrak m\mathfrak p$ with $\mathfrak p$ prime and
$(\mathfrak p,\mathfrak m)=1$. Let $\tilde{f}\in\widetilde{S}_{{\underline{w}}}(\mathfrak m)$ and suppose that $\tilde{f}(I)=0$
for all ideals $I\in{\cal I}_{\mathfrak n}$. Let ${\lambda}\in K^{\times}_{\mathfrak m}$ be such
that ${\lambda}{\cal O}_{\mathfrak p}=\mathfrak p{\cal O}_{\mathfrak p}$.
Then $\mathfrak p[{\lambda}^{-1}]\in{\cal I}_{\mathfrak n}$ and
$0=\tilde{f}(\mathfrak p[{\lambda}^{-1}])={\lambda}^{-{\underline{w}}}\tilde{f}(\mathfrak p)$, i.e. $\tilde{f}(\mathfrak p)=0$ proving
that $\tilde{f}=0$ identically. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
For ${f}\in{S}_{{\underline{w}}}(\mathfrak n)$ and ${g}\in{S}_{{\underline{w}}^{\prime}}(\mathfrak n)$ let
{\beta}gin{equation}
\scal{f}{g}=
\left\{
{\beta}gin{array}{ll}
h_{\mathfrak n}^{-1}\sum_{{\sigma}\in\mathbb CC_{\mathfrak n}}{f}(s_{{\sigma}}){g}(s_{{\sigma}})
=h_{\mathfrak n}^{-1}\sum_{{\sigma}\in\mathbb CC_{\mathfrak n}}\tilde{f}(I_{{\sigma}})\tilde{g}(I_{{\sigma}})
& \hbox{if ${\underline{w}}^{\prime}=-{\underline{w}}$} \\
& \\
0 & \hbox{if ${\underline{w}}^{\prime}\neq-{\underline{w}}$}
\end{array}
\right.,
{\lambda}bel{eq:padicpair}
\end{equation}
where $\{s_{{\sigma}}\}$ and $\{I_{{\sigma}}\}$ are full sets of representatives of
$\mathbb CC_{\mathfrak n}$ in ${K_{\A}^{\times}}$ and in ${\cal I}_{\mathfrak n}$ respectively.
The bilinear form $\scal{\cdot}{\cdot}$ extends
by linearity to a pairing on
${S}(\mathfrak n)=\bigoplus_{{\underline{w}}\in\mathbb Z[I_K]}{S}_{{\underline{w}}}(\mathfrak n)$,
or on the corresponding space
$\widetilde{S}(\mathfrak n)=\bigoplus_{{\underline{w}}\in\mathbb Z[I_K]}\widetilde{S}_{{\underline{w}}}(\mathfrak n)$,
compatible with the restriction maps \eqref{eq:restr}. Note that for
Hecke characters ${\xi}\in{\Xi}_{{\underline{w}}}(\mathfrak n)$ and
${\xi}^{\prime}\in{\Xi}_{-{\underline{w}}}(\mathfrak n)$ one has the orthogonality relation
$$
\scal{\xi}{\xi^\prime}=
\left\{
{\beta}gin{array}{ll}
1 & \hbox{if ${\xi}^{\prime}={\xi}^{-1}$} \\
0 & \hbox{otherwise}
\end{array}
\right..
$$
{\beta}gin{rem}
\rm It follows at once from the definition \eqref{eq:padicpair}
that the pairing $\scal{\cdot}{\cdot}$ takes values in $E$ on $E$-valued forms.
\end{rem}
Let $p$ be a prime number and let $F$ be a $p$-adic local field with
ring of integers ${\cal O}_{F}$.
Following \cite{Hida86, Tilo96}, the space of $p$-adic
$K^{\times}$-modular forms of level $\mathfrak n$ with coefficients in $F$ is the
space $\mathfrak S(\mathfrak n;F)={\cal C}^{0}(\mathfrak C_{\mathfrak n},F)$ of $F$-valued continuous
functions on
$\mathfrak C_{\mathfrak n}=\limproj{}{}_{r\geq0}\mathbb CC_{\mathfrak n p^{r}}$.
It is a $p$-adic Banach space under the sup norm
$\vvass{\phi}=\sup_{x\in\mathfrak C_{\mathfrak n}}\vass{\phi(x)}$
and we denote
$\mathfrak S(\mathfrak n;{\cal O}_{F})$ its unit ball.
Assume that $E$ is a subfield of $F$ (e.g. $F$ is the completion of
$E$ at a prime dividing $p$) and write
$\widetilde{S}_{{\underline{w}}}(\mathfrak n;F)=\widetilde{S}_{{\underline{w}}}(\mathfrak n;E)\otimes F$ and
$\widetilde{S}_{{\underline{w}}}({\cal O}_{K,c};F)=\widetilde{S}_{{\underline{w}}}({\cal O}_{K,c};E)\otimes F$.
{\beta}gin{pro}[\cite{Tilo96}]{\lambda}bel{th:padicembed}
For every ideal $\mathfrak m|\mathfrak n$ and for every ideal $\mathfrak q$ with support included in the set of primes dividing $p$ there is a natural embedding
$$
\widetilde{S}(\mathfrak m\mathfrak q;F)=
\bigoplus_{{\underline{w}}\in\mathbb Z[I_K]}\widetilde{S}_{{\underline{w}}}(\mathfrak m\mathfrak q;F)
\hookrightarrow\mathfrak S(\mathfrak n;F).
$$
\end{pro}
\par\noindent{\bf Proof. } We may use Lemma \ref{th:injec} to assume
that $\mathfrak m\mathfrak q=\mathfrak n p^{a}$ for some $a\geq1$.
Since $\bigcap_{r\geq0}P_{\mathfrak n p^{r}}=\{1\}$ the
group ${\cal I}_{\mathfrak n p}$ embeds as a dense subset in $\mathfrak C_{\mathfrak n}$.
The restriction of $\tilde{f}\in\widetilde{S}_{{\underline{w}}}(\mathfrak n p^{a};F)$
to a coset $I\cdot P_{\mathfrak n p^{a}}$ is the function
$I({\lambda})\rightarrowsto\tilde{f}(I){\lambda}^{{\underline{w}}}$. Since ${\underline{w}}\in\mathbb Z[I_{K}]$ the
character ${\lambda}^{{\underline{w}}}$ is continuous for the $p$-adic topology on
$K^{\times}$ and so extends to a character $\chi_{{\underline{w}}}$ of
$(K\otimes\mathbb Q_{p})^{\times}$. Therefore
$\tilde{f}$ extends locally to cosets of
$1+p^{a}(R_{K}\otimes\mathbb Z_{p})$ and globally to the whole of
$\mathfrak C_{\mathfrak n}$. The injectivity of the resulting map on the direct sum
$\widetilde{S}(\mathfrak n p^{a};F)$ follows
from the linear independence of characters. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
We shall denote $\wh{f}$ the $p$-adic modular form associated to the
$K^{\times}$-modular form $\tilde{f}$. If $\tilde{f}=\tilde{\xi}$ is a Hecke character, the
$p$-adic form $\wh{\xi}$ is again a character which is sometimes
called the \textit{$p$-adic avatar} of $\tilde{\xi}$ (or of ${\xi}$).
The density of $I_{\mathfrak n p}$ in $\mathfrak C_{\mathfrak n}$ implies also that the image of
$\widetilde{S}_{{\underline{w}}}(\mathfrak n p^{a};F)$ in $\mathfrak S(\mathfrak n;F)$ is characterized by the
functional relations $\tilde{f}({\lambda} s)={\lambda}^{{\underline{w}}}\tilde{f}(s)$ for all
${\lambda}\equiv1\bmod\mathfrak n p^{a}$. Thus, the association $\tilde{f}\rightarrowsto\wh{f}$
identifies $\widetilde{S}_{{\underline{w}}}(\mathfrak n p^{a};F)$ with the closed linear subspace
{\beta}gin{equation}
\mathfrak S_{{\underline{w}},a}(\mathfrak n;F)=\left\{\sopra
{\mbox{$\phi\in\mathfrak S(\mathfrak n;F)$ such that $\phi(sx)=\phi(s)\chi_{{\underline{w}}}(x)$}}
{\mbox{ for all $x\in1+p^{a}({\cal O}_{K}\otimes\mathbb Z_{p})$}}
\right\}
{\lambda}bel{eq.clospace}
\end{equation}
(when $a=0$ the domain for $x$ is $({\cal O}_{K}\otimes\mathbb Z_{p})^{\times}$).
Let $\overline{S}(\mathfrak n p^{a};F)=\wh{\bigoplus}_{{\underline{w}}}
\mathfrak S_{{\underline{w}},a}(\mathfrak n;F)$ be the closure of
$\widetilde{S}(\mathfrak n p^a;F)$ in $\mathfrak S(\mathfrak n;F)$.
Since $\mathfrak S_{{\underline{w}},a}(\mathfrak n;F)$ is closed, the projection onto the ${\underline{w}}$-th summand extends to a projection
$\pi_{{\underline{w}},a}\colon\overline{S}(\mathfrak n p^{a};F)\rightarrow\mathfrak S_{{\underline{w}},a}(\mathfrak n;F)$.
Define a pairing
$$
\scalq{\cdot}{\cdot}\colon
\overline{S}(\mathfrak n p^{a};F)\times\overline{S}(\mathfrak n p^{a};F)\longrightarrow F
$$
as the composition
$$
\overline{S}(\mathfrak n p^{a};F)\times\overline{S}(\mathfrak n p^{a};F)
\stackrel{m}{\longrightarrow}\overline{S}(\mathfrak n p^{a};F)
\stackrel{\pi_{\underline{0},a}}{\longrightarrow}
\mathfrak S_{\underline{0},a}(\mathfrak n;F)
\stackrel{\mu_{\mathrm{H}}}{\longrightarrow}F
$$
where $m$ is multiplication and $\mu_{\mathrm{H}}$ is the Haar
distribution which is bounded on the space $\mathfrak S_{\underline{0},a}(\mathfrak n;F)$.
{\beta}gin{pro}{\lambda}bel{th:spiden}
The pairing $\scal{\cdot}{\cdot}$ extends to a continuous pairing on
$\overline{S}(\mathfrak m\mathfrak q;F)$ which coincides with
$\scalq{\cdot}{\cdot}$.
\end{pro}
\par\noindent{\bf Proof. } The pairing $\scalq{\cdot}{\cdot}$ is continuous as composition
of continuous mappings. Thus it is enough to check the identity
$\scal{\tilde{f}}{\tilde{g}}=[{\wh{f}},{\wh{g}}]$ for
$\tilde{f}\in\widetilde{S}_{{\underline{w}}}(\mathfrak n p^{a};F)$ and $\tilde{g}\in\widetilde{S}_{{\underline{w}}^{\prime}}(\mathfrak n p^{a};F)$.
It follows from the definition \eqref{eq:padicpair} and the
density of ${\cal I}_{\mathfrak n p}$ in $\mathfrak C_{\mathfrak n}$, since on ${\cal I}_{\mathfrak n p}$
the restrictions of $\tilde{f}$ and $\wh{f}$ and of $\tilde{g}$ and $\wh{g}$ coincide.
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
Recall that a $p$-adic distribution on $\mathbb Z_p$ with values in the $p$-adic Banach space $W$ over $F$ is a linear operator ${\cal C}^0(\mathbb Z_{p},F)\rightarrow W$. Given two $p$-adic distributions on $\mathbb Z_{p}$ with values in
$\overline{S}(\mathfrak n p^{a};F)$
we construct a new distribution $\mu_{\scalq{\mu_1}{\mu_2}}$ with values in $F$ as the composition
$$
{\cal C}^0(\mathbb Z_{p},F)\stackrel{\mu_1\ast\mu_2}{\longrightarrow}
\overline{S}(\mathfrak n p^{a};F)
\stackrel{\pi_{\underline{0},a}}{\longrightarrow}
\mathfrak S_{\underline{0},a}(\mathfrak n;F)
\stackrel{\mu_{\mathrm{H}}}{\longrightarrow}F,
$$
where $\mu_1\ast\mu_2$ is the convolution product of $\mu_1$ and $\mu_2$. If $\mu_1$ and $\mu_2$ are measures (bounded distributions) $\mu_{\scalq{\mu_1}{\mu_2}}$ is not a measure in general since the map
$\pi_{\underline{0},a}$ is not bounded.
Denote by
$m_k(\mu)=\int_{\mathbb Z_p}x^k\,d\mu(x)$, $k\geq0$, the $k$-th moment of the distribution $\mu$.
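For instance the Dirac measure $\partial_t$ concentrated at $t\in\mathbb Z_p$ has
$m_k(\partial_t)=t^k$ for all $k\geq0$; note also that a bounded distribution is determined by its
moments, since the polynomials are dense in ${\cal C}^0(\mathbb Z_{p},F)$ by Mahler's theorem.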
{\beta}gin{lem}{\lambda}bel{th:measuremix}
Let $M\in\mathbb N\cup\{\infty\}$ and suppose that there exist
pairwise distinct weights $\{{\underline{w}}_{k}\}$ for $0\leq k<M$
such that
$m_{k}(\mu_{1})\in\widetilde{S}_{{\underline{w}}_{k}}(\mathfrak n p^{a};F)$ and
$m_{k}(\mu_{2})\in\wt S_{-{\underline{w}}_{k}}(\mathfrak n p^{a};F)$ for all
$0\leq k<M$. Then
$$
m_{k}(\mu_{\scalq{\mu_1}{\mu_2}})=
\left\{
{\beta}gin{array}{ll}
0 & \mbox{if $0\leq k<M$ is odd,} \\
\binom{2l}{l}\scalq{m_{l}(\mu_{1})}{m_{l}(\mu_{2})} &
\mbox{if $0\leq k=2l<M$ is even.}
\end{array}
\right.
$$
If $M=\infty$ the latter formulae characterize the distribution
$\mu_{\scalq{\mu_1}{\mu_2}}$ completely.
\end{lem}
\par\noindent{\bf Proof. } By direct computation
$m_k(\mu_{\scalq{\mu_1}{\mu_2}})=\mu_H\circ\pi_{\underline{0},a}
\left(\iint_{\mathbb Z_{p}^2}(x+y)^{k}\,d\mu_1(x) d\mu_2(y)\right)=
\sum_{i=0}^{k}\vvec{k}{i}\mu_H\circ\pi_{\underline{0},a}\left(m_i(\mu_1)m_{k-i}(\mu_2)\right)=
\sum_{i=0}^{k}\vvec{k}{i}\scalq{m_i(\mu_1)}{m_{k-i}(\mu_2)}.
$
The formula follows at once from the orthogonality relations in
\eqref{eq:padicpair} since ${\underline{w}}_{i}={\underline{w}}_{k-i}$ only if $k=2l$ is
even and $i=l$. The final assertion is also clear.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
Let $\mu$ be a $p$-adic distribution on $\mathbb Z_p$ with values in a $p$-adic space ${\cal S}$ of continuous $F$-valued functions on a profinite space $T$. For every $t\in T$, evaluation at $t$ defines an $F$-valued distribution $\mu(t)$ on $\mathbb Z_p$, $\mu(t)(\phi)=\mu(\phi)(t)$.
Conversely, a family $\{\mu_t\}_{t\in T}$ of $F$-valued distributions such that the function
$\mu(\phi)(t)=\mu_t(\phi)$ is in ${\cal S}$ for all $\phi\in{\cal C}^0(\mathbb Z_{p},F)$ defines a $p$-adic distribution
$\mu$ on $\mathbb Z_p$ with values in ${\cal S}$ and $\mu(t)=\mu_t$ for all $t\in T$, which is obviously unique for this property.
{\beta}gin{lem}{\lambda}bel{le:UnBoundPr}
Let $T$ be a profinite space, $\cal S$ a $p$-adic space of continuous $F$-valued functions and
$\mu$ a $p$-adic distribution on $\mathbb Z_p$ with values in $\cal S$. Then $\mu$ is a $p$-adic measure
if and only if $\mu(t)$ is a $p$-adic measure for all $t\in T$.
\end{lem}
\par\noindent{\bf Proof. } If $\mu$ is bounded, the distributions $\mu(t)$ are obviously bounded.
Suppose that $\mu(t)$ is bounded for all $t\in T$. Let $\{\phi_k\}$, $k=0,1,2,\ldots$, be functions in
${\cal C}^0(\mathbb Z_{p},F)$ with $\vvass{\phi_k}=1$ and let $\varphi_k=\mu(\phi_k)$.
If $\vvass{\varphi_k}=p^{r_k}$ choose $t_k\in T$ such that
$\vass{\varphi_k(t_k)}_p=p^{r_k}$.
If the set of values $\left\{p^{r_k}\right\}$ is not bounded we may assume without loss of generality that $r_1<r_2<r_3<\cdots$ and since each $\mu(t_k)$ is bounded also that $\left\{t_k\right\}$ is an infinite set.
By compactness of $T$, there exists $\bar{t}\in T$, ${\bar t}\neq t_k$ for all $k$, such that every
neighborhood of $\bar t$ meets $\left\{t_k\right\}$. This contradicts the boundedness of
$\mu({\bar t})$ since $\vass{\varphi(t)}_p$ is locally constant for all $\varphi\in\cal S$.
In particular, the sequence $\mu\left(\binom xk\right)$ is bounded and, since by Mahler's theorem the polynomials $\binom xk$, $k\geq0$, form an orthonormal basis of ${\cal C}^0(\mathbb Z_{p},F)$, $\mu$ is a measure. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
As an application, let $\tilde{\chi}\in\widetilde{\Xi}_{{\underline{w}}}(\mathfrak n p^a)$ and
$\tilde{\xi}\in\widetilde{\Xi}_{{\underline{w}}^{\prime}}(\mathfrak n p^a)$ be Hecke characters
taking values respectively in ${\cal O}_{F}^{\times}$ and
${\cal O}_{F^\prime}^{\times}$ where $\mathbb Q_p\subseteq F^\prime$ is a totally ramified subextension of $F$.
For every $x\in\mathfrak C_{\mathfrak n}$ the series
$$
\sum_{k=0}^{\infty}\frac{1}{k!}\wh{\chi}{\wh{\xi}}^{k}(x)Z^{k}
=\wh{\chi}(x)(1+T)^{\wh{\xi}(x)},\qquad
T=e^{Z}-1=Z+\frac12Z^{2}+\frac1{3!}Z^{3}+\cdots,
$$
has integral coefficients in the variable $T$ (the corresponding measure is
$\wh{\chi}(x)\partial_{\wh{\xi}(x)}$,
where $\partial_t$ denotes the Dirac measure concentrated at $t$).
Thus, there exists a unique measure $\mu_{\chi,\xi}$ on $\mathbb Z_{p}$ with values in
$\overline{S}(\mathfrak n;{\cal O}_{F})$ such that
$m_{k}(\mu_{\chi,\xi})=\wh{\chi}\wh{\xi}^{k}$. When
${\underline{w}}^{\prime}\neq\underline{0}$ the moments' weights are pairwise distinct.
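For each fixed $x$ this is an instance of the standard correspondence (the Amice transform)
between ${\cal O}_F$-valued measures on $\mathbb Z_p$ and power series in ${\cal O}_F[[T]]$, under which a
measure $\mu$ corresponds to
$$
\int_{\mathbb Z_p}(1+T)^{y}\,d\mu(y)=\sum_{n\geq0}\mu\Big(\binom yn\Big)T^{n};
$$
the displayed series is precisely the transform of the measure $\wh{\chi}(x)\partial_{\wh{\xi}(x)}$.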
\subsection{Expansions as distributions}
Let $\jmath\colon K\hookrightarrow D$ be a normalized embedding of conductor
$c=c_{\tau,N}$ with corresponding $\tau\in\mathbb CM_{{\Delta},K}$ and
$x\in\mathbb CM({\Delta},N;{\cal O}_{K,c})$. Let $y=\mathrm{Im}(\tau)$.
The embedding $\jmath$ defines by scalar extension a diagram
{\beta}gin{equation}
{\beta}gin{array}{ccccccc}
& & K_\mathbb A^\times/K^\times\mathbb R^\times & \longrightarrow &
D^\times\backslash D_\mathbb A^\times/Z_\infty \\
& & \downarrow & & \downarrow \\
& & K_\mathbb A^\times/K^\times\mathbb R^\times\wh{{\cal O}}_{K,c}^\times & \longrightarrow &
D^\times\backslash D_\mathbb A^\times/Z_\infty\wh{{\cal R}}_N^\times \\
& & \downarrow & & \downarrow \\
\mathbb CC^{\sharp}_{c} & {\sigma}meq & K_\mathbb A^\times/K^\times\mathbb C^\times\wh{{\cal O}}_{K,c}^\times & \longrightarrow &
D^\times\backslash D_\mathbb A^\times/\jmath(\mathbb C^\times)\wh{{\cal R}}_N^\times & {\sigma}meq &
\mathbb GaZ({\Delta},N)\backslash\mathfrak H
\end{array}
{\lambda}bel{eq:maindiag}
\end{equation}
where the vertical maps are the natural quotient maps and $Z_\infty$ is the center of $D_\infty^\times$.
Under the decomposition
{\beta}gin{equation}
D_\mathbb A^\times=D_\mathbb Q^\times\mathbb GL_2^+(\mathbb R)\wh{{\cal R}}_N^\times
{\lambda}bel{eq:decomp}
\end{equation}
the idele $d=d_{\mathbb Q}g_\infty u$ corresponds to the point represented by $g_\infty\tau$.
Class field theory provides an identification $\mathbb CC^{\sharp}_{c}{\sigma}meq\mathbb Gal(H_c/K)$
where $H_c$ is the ray class field of conductor $c$. It is also
well-known that the points in the image of the bottom map in
\eqref{eq:maindiag} are defined over $H_c$, so that
$\mathbb Gal(H_c/K)$ acts naturally on them, and that the two actions are
compatible (Shimura reciprocity law, \cite{ShiRed}). In particular,
if $s_{{\sigma}}\in{K_{\A}^{\times}}$ represents ${\sigma}\in\mathbb Gal(H_c/K)$, then $s_{{\sigma}}$
maps to $x^{{\sigma}}$ and $A_{x^{\sigma}}=A_x^{(\inv{\sigma})}$. Write
$A_x(\mathbb C)=A_{\tau}=\mathbb C^{{\epsilon}}/{\Lambda}_\tau$
with ${\Lambda}_\tau={\Lambda}\vvec{\tau}{1}$ where ${\epsilon}=1$ and
${\Lambda}=\mathbb Z^2\subset\mathbb C$ if $D$ is split and ${\epsilon}=2$ and
${\Lambda}=\Phi_{\infty}({\cal R}_{1})\subset\mathbb C^2$ if $D$ is non-split.
The theory of complex multiplication implies that
$A_{x^{\sigma}}(\mathbb C){\sigma}meq\mathbb C^{{\epsilon}}/s_{{\sigma}}{\Lambda}_\tau$ where
$s_{{\sigma}}{\Lambda}_\tau={\Lambda} d_{{\sigma}}^{-1}\vvec{\tau}{1}$ if
$s_{{\sigma}}^{-1}=d_{{\sigma}}g_{{\sigma}}u_{{\sigma}}$ under \eqref{eq:decomp}.
For a fixed prime $p$ one can choose representatives
$\left\{s_{{\sigma}}\right\}\subset{K_{\A}^{\times}}$ normalized as follows:
$$
\left\{{\beta}gin{array}{l}
s_{{\sigma},\infty}=1, \\
s_{{\sigma},v}\mbox{ is $v$-integral at all finite places $v$ and a
$v$-unit at the places $v|pc$.}
\end{array}\right.
$$
For each such representative $s$ there is a diagram of complex tori
$$
{\beta}gin{CD}
A_{g\tau}=\mathbb C^{{\epsilon}}/{\Lambda}_{g\tau} @>{j(g,\tau)}>>
\mathbb C^{{\epsilon}}/s{\Lambda}_{\tau} @>{\pi_{s}}>> \mathbb C^{{\epsilon}}/{\Lambda}_{\tau}\\
@. @| @| \\
{} @. A_{x^{\sigma}}(\mathbb C) @. A_x(\mathbb C)\\
\end{CD}
$$
where $s=dgu$ under \eqref{eq:decomp} and $\pi_{s}$ is the natural
quotient map arising from the inclusion
$s{\Lambda}_{\tau}\subset{\Lambda}_\tau$. The element
$g\in\mathbb GL_{2}^{+}(\mathbb R)$ is defined by $g\tau\in\mathfrak H$ only up to an
element in ${\cal O}_{K,c}^{\times}$.
Choose $p$ and a place $v$ over $p$ in a number field $L$
large enough so that for each $s$ the triple
$(g\tau,v,e)$ is a $p$-ordinary test triple and that the isogenies
$\pi_{s}$ are defined over $L$.
{\beta}gin{lem}{\lambda}bel{lem:compperiods}
With the above notations, it is possible to choose for every
${\sigma}\in\mathbb Gal(H_c/K)$ an invariant 1-form on $A_{x^{\sigma}}$ that generates
${{\cal L}}(x^{\sigma})\otimes{\cal O}_{(v)}$
and for which
{\beta}gin{enumerate}
\item ${\Omega}_{\infty}(g\tau){\sigma}m_{{\cal O}_{(v)}^\times}j(g,\tau){\Omega}_{\infty}(\tau)$;
\item ${\Omega}_{p}(x^{{\sigma}}){\sigma}m_{{\cal O}_{(v)}^\times}{\Omega}_{p}(x)$.
\end{enumerate}
\end{lem}
\par\noindent{\bf Proof. } Take ${\omega}_o\in H^0(A_{x}(\mathbb C),{\cal L}(x)\otimes\mathbb C)$. The quotient map $\pi_{s}$
is the identity on (co)tangent spaces and commutes with the action of
the endomorphisms. Thus
${\omega}_s=\pi_s^*({\omega}_o)\in H^0(A_{x^{\sigma}}(\mathbb C),{\cal L}(x^{\sigma}))$ and
$p({\omega}_s,g\tau)=j(g,\tau)p({\omega}_o,\tau)$. Furthermore, $p$ doesn't
divide the degree of $\pi_s$ and so $\pi_s^*$ is an isomorphism between
the natural $p$-adic structures on the spaces of invariant forms.
This proves part 1.
For part 2 observe that the reduction mod $p$ of the dual map
$\pi_s^t$ gives an isomorphism of the rank 1 Tate module quotient $T$ of \S 3.2.
Thus $\pi_s^*({\omega}_{u}(P))$
is a universal form on the deformations of $\widetilde{A}_{x^{\sigma}}$ by formula
\eqref{eq:omcomp} and the equality follows. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
If $s$ and $s^{\prime}=s{\lambda} zu$ with ${\lambda}\in K^{\times}$,
$z\in\mathbb C^{\times}$ and $u\in\wh{{\cal O}}_{K,c}^\times$ are two normalized
representatives of the same ${\sigma}\in\mathbb CC^\sharp_{c}$ a comparison of the
relations in lemma \ref{lem:compperiods} for the decompositions
$s=dgr$ and $s^{\prime}=({\lambda} d)(gz)(ru)$ shows that
${\omega}_{s^{\prime}}{\sigma}m_{{\cal O}_{K,c}^{\times}}z{\omega}_{s}$. Therefore the
construction of ${\omega}_{s}$ can be extended modulo
${\cal O}_{K,c}^{\times}$-equivalence to all $s\in{K_{\A}^{\times}}$ by setting
{\beta}gin{equation}
{\omega}_{s{\lambda} zu}{\sigma}m_{{\cal O}_{K,c}^{\times}}z{\omega}_{s}
\quad\mbox{for all ${\lambda}\in K^{\times}, z\in\mathbb C^{\times},
u\in\wh{{\cal O}}_{K,c}^\times$ and $s$ normalized.}
{\lambda}bel{eq:definoms}
\end{equation}
Let $f\in{\rm M}_{2{\kappa},0}({\Delta},N)$ and normalize the invariant form as in
proposition \ref{teo:Thkalgebraic}. For all integers $r\geq0$ such
that $({\cal O}_{K,c}^{\times})^{2({\kappa}+r)}=1$ define a function
${c}_{(r)}(f,x)\colon{K_{\A}^{\times}}\rightarrow\mathbb C$ as
$$
{c}_{(r)}(f,x)(s)=\frac{{\delta}_{2{\kappa}}^{(r)}f(g\tau)}{p({\omega}_{s},g\tau)^{2({\kappa}+r)}}
$$
where $s=dgu$ as above.
{\beta}gin{pro}{\lambda}bel{th:meascrfx}
Suppose that $f$ is defined over ${\cal O}_{(v)}$ and assume that
$({\cal O}_{K,c}^{\times})^{2({\kappa}+r)}=1$. Then
$c_{(r)}(f,x)\in{S}_{(2({\kappa}+r),0)}({\cal O}_{K,c})\cap\mathfrak S(c{\cal O}_{K};{\cal O}_{v})$.
\end{pro}
\par\noindent{\bf Proof. } The modular relation for $c_{(r)}(f,x)$ follows at once from
\eqref{eq:definoms} and the definition
since $gz\tau=g\tau$.
For an idele $s$ satisfying the conditions \eqref{eq:weilrel} for
$\mathfrak n=(pc)$ the
invariant form ${\omega}_{s}$ satisfies proposition \ref{teo:Thkalgebraic}
and then theorem \ref{thm:equality} together with lemma \ref{lem:compperiods}
shows that as a $p$-adic $K^{\times}$-modular form
${c}_{(r)}(f,x)$ has coefficients in
$L_{v}$ and in fact belongs to the unit ball. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
Assume that ${\cal O}_{K,c}^{\times}=\{\pm1\}$.
Let $\mu_{f,x}$ be the $p$-adic distribution on $\mathbb Z_p$ with values in
$\overline{S}(c{\cal O}_K;L_v)$ such that $m_r(\mu_{f,x})=\wh{c}_{(r)}(f,x)$
and let $\mu_{\chi,\xi}$ be the $p$-adic measure associated to a choice of Gr\"ossencharakters
$\chi\in\Xi_{(-2{\kappa},0)}({\cal O}_{K,c})$, $\xi\in\Xi_{(-2,0)}({\cal O}_{K,c})$ as in the discussion after
lemma \ref{le:UnBoundPr}.
{\beta}gin{thm}{\lambda}bel{th:measexists}
There exist a $p$-adic field $F$ and a $p$-adic measure $\mu(f,x;\chi,\xi)$ on
$\mathbb Z_p$ with values in ${\cal O}_F$ such that
$$
m_r\left(\mu_{[\mu_{f,x},\mu_{\chi,\xi}]}\right)
=\left\{
{\beta}gin{array}{ll}
0 & \mbox{if $0\leq r$ is odd,} \\
(h^\sharp_c)^{-1}{\Omega}_p^{-2({\kappa}+l)}\binom{2l}{l}m_l(\mu(f,x;\chi,\xi)) &
\mbox{if $0\leq r=2l$ is even,}
\end{array}
\right.
$$
\end{thm}
\par\noindent{\bf Proof. } Let $F$ be large enough to contain $L_v$, the field of values of $\chi$ and $\xi$
and the $p$-adic period ${\Omega}_p$.
The expression follows from Lemma \ref{th:measuremix} and the fact that for a suitable
choice of representatives for $\mathbb CC^{\sharp}_{c}$ we have, combining the definition
\eqref{eq:padicpair} with theorem \ref{thm:equality}, proposition \ref{th:spiden} and lemma \ref{lem:compperiods},
$\scalq{\wh{\chi}{\wh{\xi}}^{l}}{\wh{c}_{(l)}(f,x)}=
(h^\sharp_c)^{-1}{\Omega}_p^{-2({\kappa}+l)}\sum_{\sigma}\wh{\chi}{\wh{\xi}}^{l}(s)b_l(x^{\sigma})$.
Finally, each term ${\wh{\xi}}^{l}(s)b_l(x^{\sigma})$ is the $l$-th moment of a
suitable $p$-adic measure on $\mathbb Z_p$ because the identification
$\sum_{n=0}^\infty(b_n(x^{\sigma})/n!)T^n=\sum_{n=0}^\infty a_nU^n$ with $a_n\in{\cal O}_F$ through the substitution $U=e^T-1$ yields an identification
$\sum_{n=0}^\infty(b_n(x^{\sigma})/n!)z^nT^n=\sum_{n=0}^\infty a_nV^n$ where
$V=(U+1)^z-1$, and this substitution preserves ${\cal O}_F$-integrality when $z$ is a unit in a $p$-adic field with residue field $\mathbb F_p$. Conclude using the linearity of measures.
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
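The simplest instance of the last integrality claim is $z\in\mathbb Z_p$: then
$(U+1)^{z}-1=\sum_{n\geq1}\binom znU^{n}$ and $\binom zn\in\mathbb Z_p$ for all $n$, because
$\binom xn$ is a continuous function of $x\in\mathbb Z_p$ taking integer values on the dense subset $\mathbb N$.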
\subsection{Special $L$-values}
For $f\in M_{2{\kappa},0}^\infty({\Delta},N)$, let
$\phi_f\in L^2(D_\mathbb Q^\times\backslash D_\mathbb A^\times)$ be the usual
$\wh{{\cal R}}_N^\times$-invariant $C^\infty$ lift of $f$ to $D_\mathbb A^\times$.
Namely,
$\phi_f(d)=f(g_\infty\cdot i)j(g_\infty,i)^{-2{\kappa}}{\delta}t(g_\infty)^{\kappa}$ if
$d=d_{\mathbb Q}g_\infty u$ under \eqref{eq:decomp}.
The Lie algebra $\mathfrak g=\mathfrak g\mathfrak l_2{\sigma}meq{\rm Lie}(D_\infty^\times)$
acts on the $\mathbb C$-valued $C^\infty$ functions on $D_\mathbb A^\times$ by
$(A\cdot\varphi)(d)=\left.\frac{d}{dt}\varphi(de^{tA})\right|_{t=0}$.
By linearity and composition the action extends to the complexified
universal enveloping algebra $\mathfrak A(\mathfrak g)_{\mathbb C}$. Let
$$
I=\left(
{\beta}gin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right),
\qquad
H=\left(
{\beta}gin{array}{cc}
0 & -i \\
i & 0
\end{array}
\right),
\qquad
X^\pm=\frac12\left(
{\beta}gin{array}{cc}
1 & \pm i \\
\pm i & -1
\end{array}
\right)
$$
be the usual eigenbasis of $\mathfrak g_{\mathbb C}$ for the adjoint action
of the maximal compact subgroup
$$
{\rm SO}(2)=\left\{\hbox{$r({\theta})=\left(
{\beta}gin{array}{cc}
\cos{\theta} & -{\sigma}n{\theta} \\
{\sigma}n{\theta} & \cos{\theta}
\end{array}
\right)$ such that ${\theta}\in\mathbb R$}\right\}.
$$
Since ${\rm Ad}(r({\theta}))X^\pm=e^{\mp 2i{\theta}}X^\pm$, we have
$X^\pm\cdot\phi_{f}\in M_{2{\kappa}\pm2,0}^\infty({\Delta},N)$.
A standard computation (e.g. \cite[\S\S2.1--2]{Bu96})
links the Lie action to the Maass operators of
\S\ref{ss:maass}, namely
$$
X^+\cdot\phi_f=-4\pi\phi_{{\delta}_{2{\kappa}}f}.
$$
For $r\geq0$ let
{\beta}gin{equation}
\phi_r=\left(-\frac{1}{4\pi}X^{+}\right)^{r}\cdot\phi_f=
\phi_{{\delta}_{2{\kappa}}^{r}f}.
{\lambda}bel{eq:defphir}
\end{equation}
{\beta}gin{dfn}{\lambda}bel{th:Jintegral}
\rm
Let $f\in M_{2{\kappa},0}({\Delta},N)$, $\xi\in{\Xi}_{\underline{w}}(c{\cal O}_{K})$ for a
weight ${\underline{w}}$ such that $\vass{{\underline{w}}}=0$
and $\tau=t+iy\in\mathbb CM_{{\Delta},K}$ with $c_{\tau,N}=c$ and associated normalized
embedding $\jmath$. For each $r\geq0$, let
$$
J_r(f,\xi,\tau)=\int_{{K_{\A}^{\times}}/K^\times\mathbb R^\times}\phi_r(\jmath(t)d_\infty)\xi(t)\,dt
$$
where $d_\infty=\smallmat{y^{1/2}}{ty^{1/2}}0{y^{-1/2}}$ and $dt$ is the Haar measure on
${K_{\A}^{\times}}$ whose archimedean component is normalized so that
$\mathrm{vol}(\mathbb C^\times/\mathbb R^\times)=\pi$ and such that the local groups of units have volume 1
(hence $m_c=\mathrm{vol}(\wh{{\cal O}}_{K,c}^\times)=
[({\cal O}_K/c{\cal O}_K)^\times\colon(\mathbb Z/c\mathbb Z)^\times]^{-1}$).
\end{dfn}
We show that $J_r(f,\xi,\tau)$ can be expressed in terms
of the pairing introduced in \S\ref{se:padicforms}.
Write $w_{K,c}=\vass{{\cal O}_{K,c}^{\times}}$.
{\beta}gin{thm}{\lambda}bel{th:compJ}
Let $f\in M_{2{\kappa},0}({\Delta},N)$ and $\xi\in\Xi_{(w,-w)}({\cal O}_{K,c})$.
Assume that $({\cal O}_{K,c}^{\times})^{2w}=1$.
Then
$$
J_r(f,\xi,\tau)=
\frac{\pi m_c}{w_{K,c}}h_{c}^{\sharp}y^{-w}{\Omega}_\infty(\tau)^{-2w}
{\scal{c_{(r)}(f,x)}{\xi\vvass{N_{K/\mathbb Q}}^{-w}}}.
$$
\end{thm}
\par\noindent{\bf Proof. } Since the integrand function is right $\wh{{\cal O}}^\times_{K,c}$-invariant, we have
$J_r(f,\xi,\tau)=m_c\int_{{K_{\A}^{\times}}/K^\times\mathbb R^\times\wh{{\cal O}}^\times_{K,c}}
\phi_r(\jmath(t)d_\infty)\xi(t)\,dt$. For a chosen set of
representatives $\{s_{\sigma}\}$ of $\mathbb CC^{\sharp}_{c}$ there is a decomposition
$$
{K_{\A}^{\times}}/K^\times\mathbb R^\times\wh{{\cal O}}^\times_{K,c}=
\bigcup_{{\sigma}\in\mathbb CC^{\sharp}_{c}}\mathbb C^\times s_{\sigma}/\mathbb R^\times{\cal O}_{K,c}^{\times}
\qquad
\mbox{(disjoint union).}
$$
Therefore,
$J_r(f,\xi,\tau)=m_c\sum_{{\sigma}\in\mathbb CC^{\sharp}_{c}}\xi(s_{\sigma})\int_{\mathbb C^\times/\mathbb R^\times{\cal O}_{K,c}^{\times}}
\phi_r(\jmath(s_{\sigma} z)d_\infty)\xi_\infty(z)\,d^\times z$.
Since the standard normalized embedding of $\mathbb C$ in ${\rm M}_2(\mathbb R)$ is
$\rho e^{i{\theta}}\rightarrowsto\rho r({\theta})$, we can write
$D_\infty^\times\ni\jmath(z)=\rho d_\infty r({\theta})d_\infty^{-1}$.
Therefore
$\int_{\mathbb C^\times/\mathbb R^\times{\cal O}_{K,c}^{\times}}\phi_r(\jmath(s_{\sigma} z)d_\infty)
\xi_\infty(z)\,d^\times z=w_{K,c}^{-1}\int_0^{\pi}\phi_r(\jmath(s_{\sigma} d_\infty)r({\theta}))
\xi_\infty(e^{i{\theta}})\,d{\theta}=w_{K,c}^{-1}\phi_r(\jmath(s_{\sigma} d_\infty))
\int_0^{\pi}e^{-2i({\kappa}+r){\theta}}e^{-2iw{\theta}}\,d{\theta}$
and
{\beta}gin{equation}
J_r(f,\xi,\tau)=\left\{
{\beta}gin{array}{ll}
\pi m_cw_{K,c}^{-1}\sum_{{\sigma}\in\mathbb CC^{\sharp}_{c}}
\xi(s_{\sigma})\phi_r(\jmath(s_{\sigma} d_\infty)) & \hbox{if $w=-{\kappa}-r$} \\
0 & \hbox{otherwise}
\end{array}\right..
{\lambda}bel{eq:formforJ}
\end{equation}
Note that this proves the claimed formula when $w\neq-{\kappa}-r$ since the inner product in its right hand side vanishes in this case. Thus, we may now assume that $w=-{\kappa}-r$. Put
$I_{\sigma}=\xi(s_{\sigma})\phi_r(s_{\sigma} d_\infty)$ and write
$s=s_{\sigma}=d_sg_su_s$ under
\eqref{eq:decomp} and $\tau_s=g_s\tau$. Note that
$\vvass{N_{K/\mathbb Q}(s)}={\delta}t(g_s)$. Under the hypothesis
$({\cal O}_{K,c}^{\times})^{2({\kappa}+r)}=1$ we have
{\beta}gin{eqnarray*}
I_{\sigma} & = & \xi(s){\delta}_{2{\kappa}}^{(r)}f(g_sd_\infty\cdot i)j(g_s d_\infty,i)^{-2({\kappa}+r)}{\delta}t(g_s)^{{\kappa}+r} \\
& = & y^{{\kappa}+r}\xi(s){\delta}_{2{\kappa}}^{(r)}f(\tau_{s})j(g_s,\tau )^{-2({\kappa}+r)}\vvass{N_{K/\mathbb Q}(s)}^{{\kappa}+r} \\
& = & y^{{\kappa}+r}\xi(s)c_{(r)}(f,x)(s)
p({\omega}_s,\tau_{s})^{2({\kappa}+r)}j(g_s,\tau )^{-2({\kappa}+r)}\vvass{N_{K/\mathbb Q}(s)}^{{\kappa}+r} \\
& = & y^{{\kappa}+r}\xi(s)c_{(r)}(f,x)(s)
p({\omega}_{s_o},\tau_{s_o})^{2({\kappa}+r)}j(g_{s_o},\tau )^{-2({\kappa}+r)}\vvass{N_{K/\mathbb Q}(s)}^{{\kappa}+r} \\
& = & y^{{\kappa}+r}{\Omega}_\infty(\tau)^{2({\kappa}+r)}\xi(s)c_{(r)}(f,x)(s)\vvass{N_{K/\mathbb Q}(s)}^{{\kappa}+r} \\
\end{eqnarray*}
where $s_o$ is a normalized representative.
It is now clear that the formula follows. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
{\beta}gin{dfn}
Let $M$ be a proper divisor of $N$, $x\in\mathbb CM({\Delta},N;{\cal O}_{K,c})$ and
$\pr x\in\mathbb CM({\Delta},M;{\cal O}_{K,\pr c})$ the image of $x$ under the natural
quotient map.
A character $\xi\in\Xi_{{\underline{w}}}({\cal O}_{K,c})$
is called $(x,M)$-primitive if it is
not trivial on $\wh{{\cal O}}_{K,\pr c}^{\times}$.
\end{dfn}
For a divisor $d$ of $N/M$ there is an embedding
$\iota_{{\Delta},d}:M_{2{\kappa},0}({\Delta},M)\longrightarrow M_{2{\kappa},0}({\Delta},N)$.
When ${\Delta}=1$ the embedding is simply $f(z)\rightarrowsto f(dz)$.
When ${\Delta}>1$ the explicit description of
$\iota_{{\Delta},d}$ is less immediate, see e.g. \cite[\S3]{MoTe99}. We
denote by $M_{2{\kappa},0}({\Delta},N)^{M-\mathrm{old}}$ the span of the
images of the embeddings $\iota_{{\Delta},d}$ for all $d$.
After theorem \ref{th:compJ} the following result can be read
as an orthogonality statement between primitive characters and
$K^{\times}$-modular forms arising from oldforms.
\begin{pro}\label{th:oldforms}
Let $\tau\in\mathbb CM_{{\Delta},K}$ and $x\in\mathbb CM({\Delta},N;{\cal O}_{K,c})$
be the point represented by $\tau$. Let $f\in M_{2{\kappa},0}({\Delta},N)^{M-\mathrm{old}}$
and suppose that $\xi\in\Xi_{(-{\kappa}-r,{\kappa}+r)}({\cal O}_{K,c})$ is $(x,M)$-primitive.
Then $J_r(f,\xi,\tau)=0$.
\end{pro}
\par\noindent{\bf Proof. } Consider again the first expression in \eqref{eq:formforJ}.
Let $\pr x\in\mathbb CM({\Delta},M;{\cal O}_{K,\pr c})$ be the image point of $x$ and choose a
system of representatives $\{s_{\pr{\sigma}}\}$ of $\mathbb CC^{\sharp}_{\pr c}$
and a system of representatives $\{r_{i}\}$ of
$\wh{{\cal O}}^\times_{K,\pr c}/\wh{{\cal O}}^\times_{K,c}$. Then the set of products
$\{s_{\pr{\sigma}}r_{i}\}$ is a system of representatives of $\mathbb CC^{\sharp}_{c}$
and since $d_{\infty}$ commutes with each $r_{i}$ and $f$ is $M$-old
we obtain the expression
$$
J_{r}(f,\xi,\tau)=\frac{\pi m_c}{w_{K,c}}
\left(\sum_{\wh{{\cal O}}^\times_{K,\pr c}/\wh{{\cal O}}^\times_{K,c}}
\xi(r_{i})\right)\left(\sum_{\pr{\sigma}\in\mathbb CC^{\sharp}_{\pr c}}
\xi(s_{\pr{\sigma}})\phi_r(s_{\pr{\sigma}}d_\infty)\right)
$$
which vanishes because $\xi$ is non trivial on
$\wh{{\cal O}}^\times_{K,\pr c}/\wh{{\cal O}}^\times_{K,c}$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
We shall assume from now on that the modular form $f$ is a
holomorphic newform with associated automorphic representation
$\pi^{D}=\pi_{f}$. Let $\pi$ be the automorphic representation of
$\mathbb GL_{2}(\mathbb A)$ corresponding to $\pi^{D}$ under the Jacquet-Langlands
correspondence.
Besides the Weil representation $r_{\psi}$ of ${\rm SL}_{2}(\mathbb A)$, the adelic
Schwartz-Bruhat space ${\cal S}_{\mathbb A}(D)=\bigotimes_{p\leq\infty}{\cal S}_{p}$
supports the unitary representation of $\mathbb GO(D)(\mathbb A)$ given by
$$
L(h)\varphi(x)=\vvass{\nu_{0}(h)}_{\mathbb A}^{-1}\varphi(h^{-1}x),
\qquad x\in D_{\mathbb A}.
$$
We assume that the archimedean space ${\cal S}_{\infty}$ consists
only of the Schwartz functions on $D_{\infty}$ which are
$K^{1}_{\infty}\times K^{1}_{\infty}$-finite under the action
of $D^{\times}\times D^{\times}$ via the group $\mathbb GO(D)$
(\S\ref{se:CMpts}). Here $K^{1}_{\infty}$ is the maximal compact subgroup of
$\jmath(K^{\times}\otimes\mathbb R)\subset D_{\infty}^{\times}\simeq\mathbb GL_{2}(\mathbb R)$.
As explained in \cite[\S5]{HaKu92}, the two representations combine into
a single representation, still denoted $r_{\psi}$, of the group
$R(D)=
\{\hbox{$(g,h)\in\mathbb GL_{2}\times\mathbb GO(D)$ such that $\det(g)=\nu_{0}(h)$}\}$
given by $r_{\psi}(g,h)\varphi=r_{\psi}(g_{1})L(h)\varphi$ where
$g_{1}=g\smallmat{1}{}{}{\nu_{0}(h)}^{-1}$. Note that
\begin{itemize}
\item the assignment $(g,h)\mapsto(g_{1},h)$ sets up an isomorphism
$R(D)\stackrel{\sim}{\rightarrow}{\rm SL}_{2}\ltimes\mathbb GO(D)$;
\item the group $R(D)$ is naturally a subgroup of the symplectic
group ${\rm Sp}(W)$, where $W=P\otimes D$ with $P$ the standard
hyperbolic plane, via $(g,h)x\otimes y=gx\otimes h^{-1}y$.
\end{itemize}
The groups $({\rm SL}_{2},{\rm O}(D))$ form a dual reductive pair in
${\rm Sp}(W)$ and the extended Weil representation $r_{\psi}$ makes it possible to realize the
theta correspondence between the similitude groups. The theta kernel
associated to a choice of $(g,h)\in R(D)$ and $\varphi\in{\cal S}_{\mathbb A}(D)$ is
$\vartheta(g,h;\varphi)=\sum_{d\in D}r_{\psi}(g,h)\varphi(d)$.
The theta lift to $\mathbb GO(D)$ of a cuspidal automorphic form $F$ on
$\mathbb GL_{2}(\mathbb A)$ is the automorphic form on $\mathbb GO(D)(\mathbb A)$ given by
\begin{equation}
\theta_{\varphi}(F)(h)=\int_{{\rm SL}_{2}(\mathbb Q)\backslash{\rm SL}_{2}(\mathbb A)}
\vartheta(gg^{\prime},h;\varphi)F(gg^{\prime})\,dg^{\prime}
\label{eq:thetalift}
\end{equation}
where $\det(g)=\nu_{0}(h)$ and $dg^{\prime}$ is induced by a choice
of a Haar measure $dg=\prod dg_{p}$ on $\mathbb GL_{2}(\mathbb A)$.
A straightforward substitution yields
\begin{equation}
\theta_{r_{\psi}(g_1,h_1)\varphi}(F)(h)=
\theta_{\varphi}(\pi(g_1^{-1})F)(hh_1),
\qquad
\forall (g_1,h_1)\in R(D).
\label{eq:tlautom}
\end{equation}
An automorphic form $\Phi$ on $\mathbb GO(D)(\mathbb A)$ pulls back via the map
$\varrho$ of \eqref{eq:similitudes} to an automorphic form
$\widetilde{\Phi}$ on the
product group $D^{\times}\times D^{\times}$. Let $\widetilde{\Theta}(\pi)$
be the space of automorphic forms on $D^{\times}\times D^{\times}$ which are
pull-backs of theta lifts \eqref{eq:thetalift} with $F\in\pi$.
If $\check{\pi}^{D}$ denotes the contragredient
representation of $\pi^{D}$ the crucial result is,
with a slight abuse of notation, the following, \cite{Shimi72}.
\begin{thm}[Shimizu]\label{th:Shimizu}
$\widetilde{\Theta}(\pi)=\pi^{D}\otimes\check{\pi}^{D}$.
\end{thm}
\begin{rems}\label{rm:onchoice}
\rm
\begin{enumerate}
\item In our case of interest $\pi^D=\check{\pi}^{D}$.
\item The Schwartz functions, hence the theta lifts
$\widetilde{\theta}_{\varphi}(F)$, are
$K^{1}_{\infty}\times K^{1}_{\infty}$-finite.
Thus, in Shimizu's theorem the representation space $\pi^{D}$ consists of
$K^{1}_{\infty}$-finite automorphic forms. Note that the functions
$\pi(d_{\infty})\phi_{r}$ are $K^{1}_{\infty}$-finite.
\item An explicit version of Shimizu's theorem has been worked out by Watson \cite{Wat03},
see also \cite[\S3.2]{Pra06} and \cite[\S12]{HaKu92}. Namely, if
$\varphi=\otimes_{p\leq\infty}\varphi_p$ is chosen as
\begin{equation}
\varphi_\infty(z_1,z_2)=
\frac{(-1)^{\kappa}}{\pi} z_2^{2{\kappa}}e^{-2\pi(z_{1}\bar{z}_{1}+z_{2}\bar{z}_{2})},
\quad\varphi_p=
\frac{ \mathrm{ch}_{{\cal R}_{N}\otimes\mathbb Z_p}}{\mathrm{vol}(({\cal R}_{N}\otimes\mathbb Z_p)^{\times})}
\label{eq:choiceofphi}
\end{equation}
where $z_1$ and $z_2$ are the complex coordinates in $D_\infty$ of \S1.3, then
$$
\pi(d_{\infty})\phi_{f}\otimes\pi(d_{\infty})\phi_{f}=\widetilde{\theta}_\varphi(F)
$$
where $F\in\pi$ is the adelic lift of an eigenform normalized so as to have an equality of
Petersson norms
$\langle\pi(d_{\infty})\phi_{f},\pi(d_{\infty})\phi_{f}\rangle=\langle F,F\rangle$.
\end{enumerate}
\end{rems}
Let $\underline{\xi}=(\xi,\xi^{\prime})\in\Xi_{\underline{w}}(c{\cal O}_{K})\times\Xi_{{\underline{w}}^{\prime}}(c{\cal O}_{K})$
thought of as a character of the torus ${K_{\A}^{\times}}\times{K_{\A}^{\times}}$.
Let $\tilde{H}(t)$ be any function on ${K_{\A}^{\times}}\times{K_{\A}^{\times}}$ such that
$\tilde{H}(t)\underline{\xi}(t)$ is $(K^{\times}\mathbb R^{\times})^{2}$-invariant.
Following \cite[\S14]{HaKu91} \cite[\S1.4]{Harris93} we let
$$
L_{\underline{\xi}}(\tilde{H})=
\int_{(K^{\times}\mathbb R^{\times}\backslash{K_{\A}^{\times}})^{2}}
\tilde{H}(t)\underline{\xi}(t)\,dt.
$$
In particular, for $\xi$ as in definition \ref{th:Jintegral},
$$
L_{(\xi,\xi)}( \pi(d_{\infty})\phi_r\otimes\pi(d_{\infty})\phi_r)=
J_r(f,\xi,\tau)^2.
$$
When $\xi=\xi^\prime$ is unitary, $\vass{{\underline{w}}}=\vass{{\underline{w}}^{\prime}}=0$,
the integral $L_{\underline{\xi}}(\widetilde{\theta}_{\varphi}(F))$
can also be read, via the map ${\alpha}$ of \eqref{eq:similitudes},
as the Petersson scalar product of two automorphic forms
on the similitude group $T=G({\rm O}(K)\times{\rm O}(K^{\perp}))$
associated with the decomposition $D=K\oplus K^\perp$, namely
$L_{(\xi,\xi)}(\widetilde{\theta}_{\varphi}(F))=\int_{T(\mathbb Q)T(\mathbb R)\backslash T(\mathbb A)}
\widetilde{\theta}_{\varphi}(F)((a,b))\xi(b)\,d^\times ad^\times b$,
where $\alpha(t)=(a,b)$.
Thus the seesaw identity
\cite{Kudla84} associated with the seesaw dual pair
$$
\begin{array}{ccc}
\mathbb GL_{2}\times\mathbb GL_{2} & & \mathbb GO(D) \\
\uparrow & \mbox{{\Huge $\times$}} & \uparrow \\
\mathbb GL_{2} & & G({\rm O}(K)\times{\rm O}(K^{\perp}))
\end{array}
$$
identifies, up to a renormalization of the Haar measures,
the value $L_{(\xi,\xi)}(\widetilde{\theta}_{\varphi}(F))$ with a scalar
product on $\mathbb GL_{2}$,
\begin{equation}
L_{\underline{\xi}}(\widetilde{\theta}_{\varphi}(F))=
\int_{\mathbb GL_{2}(\mathbb Q)\mathbb A^{\times}\backslash\mathbb GL_{2}(\mathbb A)}
F(g)\theta_{\varphi}^t(1,\xi)(g,g)\,dg,
\label{eq:RankinSelberg}
\end{equation}
where $\theta_{\varphi}^t$ denotes the theta lift to
$\mathbb GL_{2}\times\mathbb GL_{2}$. If $\varphi$ is
split and primitive, i.e. admits a decomposition
$\varphi=\varphi_{1}\otimes\varphi_{2}$ under
$D_{\infty}=(K\oplus K^\perp)\otimes\mathbb R$ and
each component decomposes in a product of local factors,
$\varphi_{i}=\bigotimes_{p\leq\infty}\varphi_{i,p}$ for $i=1,2$,
then $\theta_{\varphi}^t(1,\xi)$ splits as a product of two separate lifts. In fact
$$
\theta_{\varphi}^t(1,\xi)(g_{1},g_{2})=
E(0,\Phi,g_{1})\theta_{\varphi_{2}}(\tilde{\xi})(g_{2})
$$
where:
\begin{itemize}
\item $E(0,\Phi,g)$ is the value at $s=0$ of the holomorphic
Eisenstein series attached to the unique flat section (\cite[\S3.7]{Bu96})
extending the function $\Phi(g)=r_\psi(g,k)\varphi_1(0)$
where $k\in{K_{\A}^{\times}}$ is such that $N(k)=\det(g)$ and
$r_\psi$ denotes here the extended adelic Weil representation
attached to $K$ as a normed space (Siegel-Weil formula),
\item $\theta_{\varphi_{2}}(\xi)(g)$ is a binary form in the automorphic representation
$\pi(\xi)$ of $\mathbb GL_{2}$ attached to $\xi$.
\end{itemize}
This expression yields a relation between the right hand side
of \eqref{eq:RankinSelberg} and the value at the centre of symmetry of a
Rankin-Selberg convolution integral. If the Whittaker function $W_F$
of $F$ decomposes as a product of local Whittaker functions,
the Rankin-Selberg integral admits an Euler
decomposition \cite{Ja72} and $L_{(\xi,\xi)}(\widetilde{\theta}_{\varphi}(F))$
is equal to the value at $s=\frac12$ of the analytic continuation of
$$
\frac 1{h_K}\prod_{q\leq\infty}L_q(\varphi_q,\xi_q,s),
$$
where
\begin{multline}
L_q(\varphi_q,\xi_q,s)=
\int_{K_q}\int_{\mathbb Q_q^\times}
W^{\psi_q}_{F,q}\left(\left(\begin{array}{cc}a & 0 \\0 & 1\end{array}\right)k\right)
W^{\psi_q}_{\theta_{\varphi_{2},q}}\left(\left(\begin{array}{cc}-a & 0 \\0 & 1\end{array}\right)k\right)\cdot\\
\Phi^s_{\varphi_1,q}\left(\left(\begin{array}{cc}a & 0 \\0 & 1\end{array}\right)k\right)\inv{\vass{a}}\,
d^\times a\,dk_q.
\label{eq:localterm}
\end{multline}
The local measures are normalized so that $K_\infty={\rm SO}_2(\mathbb R)$ has volume $2\pi$ and
$K_q=\mathbb GL_2(\mathbb Z_q)$ has volume $1$ for finite $q$.
Also $W_{\theta_{\varphi_{2}}}$ is the Whittaker function
and $\Phi^s(g)=\vvass{a}^{s-\frac12}\Phi(g)$ if $g=nak$ under the $NAK$-decomposition
where $\vvass{\smallmat a{}{}b}=\vass{a/b}$.
Since the local term \eqref{eq:localterm} does not vanish and for almost all $q$ is the local Euler factor of some automorphic $L$-function, one obtains, as in \cite{Harris93, HaKu91}, a version of Waldspurger's result \cite{Waldsp85}. Namely,
$$
L_{\underline{\xi}}(\widetilde{{\theta}eta}_{\varphi}(F))=\left.
\Lambda(\varphi,\xi,s)L(\pi_K\otimes\xi,\frac s2)L(\eta_K,2s)^{-1}\right|_{s=1/2},
$$
where $\Lambda(\varphi,\xi,s)$ is a finite product of local integrals, $\pi_K$ is the base change to $K$ of the automorphic representation $\pi$ and $L(\eta_K,2s)$ is the Dirichlet $L$-function attached to
$\eta_K$, the quadratic character associated to $K$.
When $\varphi^f=\bigotimes_{p<\infty}\varphi_p$ and $F$ are chosen as in Remark \ref{rm:onchoice}.3
the local non-archimede\-an terms in the Rankin-Selberg integral have been explicitly computed by Prasanna \cite[\S3]{Pra06} under the simplifying assumptions that $N$ is squarefree, $c=1$ and $\xi$ is unramified. The effect of these assumptions is that
\begin{enumerate}
\item the local component of $\xi$ can be written either as
$\xi_q=(\xi_q^{\rm sp},(\xi_q^{\rm sp})^{-1})$ for some unramified character
$\xi_q^{\rm sp}$ of $\mathbb Q_q^\times$ at a prime $q$ split in $K$ under the isomorphism
$(K\otimes\mathbb Q_q)^\times\simeq\mathbb Q_q^\times\times\mathbb Q_q^\times$, or as
$\xi_q=\xi^{\rm in}_q\circ{\rm N}_{K_q/\mathbb Q_q}$ for an unramified character
$\xi_q^{\rm in}$ of $\mathbb Q_q^\times$ at a prime $q$ inert in $K$, or as
$\xi_q=\xi^{\rm rm}_q\circ{\rm N}_{K_q/\mathbb Q_q}$ at a ramified prime $q$ where
$\xi_q^{\rm rm}$ is the unramified character of $\mathbb Q_q^\times$ obtained by a trivial extension;
\item at a prime $q|N{\Delta}$ the local component $\pi_q$ is equivalent to the special representation
${\sigma}(\vass{\cdot}^{\frac12+it_q},\vass{\cdot}^{-\frac12+it_q})$ with $q^{2it_q}=1$.
\end{enumerate}
Note that the former condition remains true for a split prime $q$ that does not divide $c$ and
$\xi\in\Xi_{\underline{w}}(R_{K,c})$ with $\vass{\underline{w}}=0$. Thus we can apply Prasanna's computations to this more general case to write down a formula in which only the local factors at primes in ${\Sigma}=\left\{q|cN^\prime\right\}$ are left implicit, where
$N=N_{\rm sf}(N^\prime)^2$ and $N_{\rm sf}$ is square-free.
Namely,
\begin{equation}
L_{\underline{\xi}}(\widetilde{\theta}_{\varphi_\infty\otimes\varphi^f}(F))
=\left.\frac{V_N}{h_K}{\lambda}_\infty(\varphi_\infty,\xi_\infty,s)\left(\prod_{q\leq\infty}\nu_q(\xi_q)\right)
L(\pi_K\otimes\xi,\frac s2)L(\eta_K,2s)^{-1}\right|_{s=1/2}
\label{eq:finalexp}
\end{equation}
where $V_N=\prod_q{\mathrm{vol}(({\cal R}_{N}\otimes\mathbb Z_q)^{\times})}$,
${\lambda}_\infty(\varphi_\infty,\xi_\infty,s)=\vass{{\rm N}u}^{\frac12}_\infty\xi_\infty(z_u)^{-1}
L_\infty(\varphi_\infty,\xi_\infty,\frac s2)$ ($z_u$ denotes the complex coordinate of $u$ in the chosen identification $(Ku)\otimes\mathbb R{\sigma}meq\mathbb C$)
and
$$
\begin{cases}
\nu_q(\xi_q)=\xi_q^{\rm sp}(q)^{n_1-n_2} & \text{if $q$ splits, $q\notin{\Sigma}$, $(q,N_{\rm sf})=1$ }, \\
\nu_q(\xi_q)=-\frac{1}{q+1}q^{-\frac12+t_q+s}\xi_q^{\rm sp}(q)^{n_1-n_2} & \text{if $q|N_{\rm sf}$}, \\
\nu_q(\xi_q)=\xi_q^{\rm in}(q)^{-2n} & \text{if $q$ is inert, $q\notin{\Sigma}$}, \\
\nu_q(\xi_q)=\xi_q^{\rm rm}(-{\rm N}\inv u) & \text{if $q$ ramifies}, \\
\nu_\infty(\xi_\infty)=\xi_\infty(z_u) & \\
\end{cases}
$$
where the ideal ${\cal J}$ of proposition \ref{prop:decomporder} in $K\otimes\mathbb Q_q$ is generated by $q^{-n}$ when $q$ is inert and decomposes as $q^{-n_{q,1}}\mathbb Z_q\times q^{-n_{q,2}}\mathbb Z_q$ under
$K\otimes\mathbb Q_q\simeq\mathbb Q_q\times\mathbb Q_q$ when $q$ is split.
\begin{rem}
\rm It is clear that $\nu_q(\xi_q)=1$ for almost all $q$.
The local terms ${\lambda}_q(\nu_q)$ do depend
on the choice of $u$ in \eqref{eq:Dsplit} (replacing $u$ with $xu$
the local Whittaker function $W^{\psi_q}_{\theta_{\varphi_{2},q}}$ gets modified by the factor
$\vass{{\rm N}x}^{-\frac12}_q\xi_q(x)^{-1}$), but the quantity
$$
\nu(\xi,\tau,s)=\prod_{q\leq\infty}\nu_q(\xi_q)
$$
depends only on $\xi$ and the chosen embedding $\jmath:K\rightarrow D$.
\end{rem}
For a pair of non-negative integers $(l,q)$ consider the function of two complex variables
$\varphi^{(l,q)}(z_1,z_2)=(z_1\bar{z}_1)^{l}{z}_2^{q}e^{-2\pi(z_1\bar{z}_1+z_2\bar{z}_2)}$.
\begin{lem}
Let $\varphi(z)=(z\bar z)^le^{-2\pi z\bar z}$. Then the Fourier transform of $\varphi$ is
$$
\hat\varphi(w_1+w_2i)=e^{-2\pi(w_1^2+w_2^2)}\sum_{0\leq{\alpha}+{\beta}\leq l}{\gamma}({\alpha},{\beta};l)w_1^{2{\alpha}}w_2^{2{\beta}},
$$
where
$$
{\gamma}({\alpha},{\beta};l)=\sum_{\substack{j+k=l\\ {\alpha}\leq j, {\beta}\leq k}}(-4\pi)^{{\alpha}+{\beta}-l}\binom lj\binom{2j}{2{\alpha}}\binom{2k}{2{\beta}}
(2j-2{\alpha}-1)!!(2k-2{\beta}-1)!!
$$
\end{lem}
\par\noindent{\bf Proof. }
One has
$\hat\varphi(w_1+w_2i)=2\int_{\mathbb R^2}e^{4\pi i(w_1x_1+w_2x_2)}(x_1^2+x_2^2)^le^{-2\pi(x_1^2+x_2^2)}\,dx_1dx_2=
2\sum_{j+k=l}\binom lj\left(\int_{\mathbb R}e^{4\pi iw_1x_1}x_1^{2j}e^{-2\pi x_1^2}\,dx_1\right)
\left(\int_{\mathbb R}e^{4\pi iw_2x_2}x_2^{2k}e^{-2\pi x_2^2}\,dx_2\right)$
and the result follows from
$ \int_{\mathbb R}e^{4\pi itx-2\pi x^2}x^{2v}\,dx=\frac1{\sqrt{2}}e^{-2\pi t^2}\sum_{i=0}^v(-4\pi)^{-i}\binom{2v}{2i}(2i-1)!!\,t^{2(v-i)}$.
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
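The Gaussian moment evaluation behind the last display can be checked numerically. The following short script (our own sanity check, not part of the argument; the helper name is ours) compares $\int_{\mathbb R}x^{2v}e^{-2\pi x^{2}}\,dx$ with $(2v-1)!!\,(4\pi)^{-v}/\sqrt{2}$ for small $v$.
\begin{verbatim}
# Numerical sanity check (not from the paper): even Gaussian moments
#   int_R x^(2v) exp(-2 pi x^2) dx = (2v-1)!! / ((4 pi)^v sqrt(2)),
# the ingredient used to evaluate the last displayed integral.
import math
from scipy.integrate import quad

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

for v in range(5):
    numeric, _ = quad(lambda x, v=v: x**(2*v) * math.exp(-2*math.pi*x**2),
                      -math.inf, math.inf)
    closed = double_factorial(2*v - 1) / ((4*math.pi)**v * math.sqrt(2))
    print(v, numeric, closed)
\end{verbatim}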
\begin{lem}\label{le:localarch}
Let $r\geq l\geq0$ be integers, $F$ the lift of a weight $2{\kappa}$ eigenform
and $\xi_\infty$ the character
$\xi_\infty(z)=(z/\bar{z})^{{\kappa}+r}$ of $\mathbb C^\times$. Then
$$
L_\infty(\varphi^{(l,2({\kappa}+r))},\xi_\infty,s)=
\begin{cases}
0 & \text{if $l<r$}, \\
\frac{(-1)^{r}2\pi(-{\rm N}\inv u)^\frac12\xi_\infty(z_u)r!}{(4\pi)^{s+2({\kappa}+r)-\frac12}}\Gamma(s+2{\kappa}+r-\frac12) &
\text{if $l=r$}.
\end{cases}
$$
\end{lem}
\par\noindent{\bf Proof. }
It is well known that
$W_F^{\psi_\infty}\left(\smallmat a001r({\theta})\right)=a^{\kappa}{\rm ch}_{\mathbb R^+ }(a)e^{-2\pi a-2{\kappa} i{\theta}}$.
We compute the other two terms in the integrand of \eqref{eq:localterm} separately with
$\varphi_1(z_1)=(z_1\bar{z}_1)^le^{-2\pi z_1\bar{z}_1}$ and
$\varphi_2(z_2)=\bar{z}_2^{2({\kappa}+r)}e^{-2\pi z_2\bar{z}_2}$.
\begin{enumerate}
\item To compute $\Phi^s_{\varphi_1}\left(\smallmat a001 r({\theta})\right)=
\vass{a}^sr_{\psi_\infty}(r(-{\theta}))\varphi_1(0)$
we use the definitions \eqref{eq:Weilrep} together with the decomposition
\begin{equation}
r({\theta})=\smallmat{1}{-\tan{\theta}}{0}{1}\smallmat{0}{-1}{1}{0}
\smallmat{1}{-\sin{\theta}\cos{\theta}}{0}{1}\smallmat{0}{1}{-1}{0}\smallmat{1/\cos{\theta}}{0}{0}{\cos{\theta}}.
\label{eq:decrth}
\end{equation}
A straightforward computation yields
$r_{\psi_\infty}(r({\theta}))\varphi_1(0)=(-\cos{\theta})\varphi_1^\sharp(0)$ where
$\varphi_1^\sharp(z)$ is the Fourier transform of
$e^{-2\pi i\cos{\theta}\sin{\theta}\vass{z}^2}\hat\varphi_1((\cos{\theta})z)$.
Since
\begin{align*}
\varphi_1^\sharp(0) &= \int_{\mathbb C}e^{-2\pi i\sin{\theta}\cos{\theta}\vass{z}^2}{\hat\varphi}_1((\cos{\theta})z)\,dz\\
\intertext{(for $z=x+yi$ and from the previous lemma)}
&= 2\sum_{0\leq{\alpha}+{\beta}\leq l}{\gamma}({\alpha},{\beta};l)(\cos{\theta})^{2({\alpha}+{\beta})}
\int_{\mathbb R^2}e^{-2\pi(i\sin{\theta}\cos{\theta}+(\cos{\theta})^2)(x^2+y^2)}x^{2{\alpha}}y^{2{\beta}}\,dxdy\\
&=-\sum_{0\leq{\alpha}+{\beta}\leq l}{\gamma}({\alpha},{\beta};l)\frac{(2{\alpha}-1)!!\, (2{\beta}-1)!!}{(4\pi)^{{\alpha}+{\beta}}}
\frac{(\cos{\theta})^{2({\alpha}+{\beta})}}{(-\sin{\theta}\cos{\theta}-(\cos{\theta})^2)^{{\alpha}+{\beta}+1}},
\end{align*}
we finally obtain
\begin{multline*}
\Phi^s_{\varphi_1}\left(\smallmat a001 r({\theta})\right)=\\
-\vass{a}^s\sum_{0\leq{\alpha}+{\beta}\leq l}\frac{{\gamma}({\alpha},{\beta};l)(2{\alpha}-1)!!(2{\beta}-1)!!}{(-4\pi)^{{\alpha}+{\beta}}}
(\cos{\theta})^{{\alpha}+{\beta}}e^{-({\alpha}+{\beta}+1)i{\theta}}.
\end{multline*}
\item To compute $W^{\psi_\infty}_{\theta_{\varphi_{2},\infty}}\left(\smallmat{-a}001r({\theta})\right)$ we need
to use again
\eqref{eq:Weilrep} together with the decomposition \eqref{eq:decrth}.
Indeed, it should be noted that this time the
norm in $(Ku)\otimes\mathbb R\simeq\mathbb C$ is $-{\rm N}_{\mathbb C/\mathbb R}$ (in particular, negative definite)
and the main involution is $z\mapsto -z$. Thus, we get
$W^{\psi_\infty}_{\theta_{\varphi_{2},\infty}}\left(\smallmat{-a}001r({\theta})\right)=
e^{i(2{\kappa}+2r+1){\theta}}W^{\psi_\infty}_{\theta_{\varphi_{2},\infty}}\left(\smallmat{-a}001\right)$.
On the other hand, for a choice of $h\in\mathbb C^\times$ such that ${\rm N}h=-a{\rm N}\inv u>0$,
\begin{align*}
W^{\psi_\infty}_{\theta_{\varphi_{2},\infty}}\left(\smallmat{-a}001\right)
&= \frac1{2\pi}\int_{S^1}r_{\psi_\infty}\left(\smallmat{-a{\rm N}\inv u}001,h{\vartheta}\right)
\varphi_2(u)\xi_\infty(h{\vartheta})\,d{\vartheta}\\
&= \frac{(-a{\rm N}\inv u)^\frac12}{2\pi}\int_{S^1}\varphi_2(-a{\rm N}\inv u\inv{(h{\vartheta})}u)
\xi_\infty(h{\vartheta})\,d{\vartheta}\\
&= \frac{(-a{\rm N}\inv u)^\frac12}{2\pi}\int_{S^1}\varphi_2(\bar{h}{\inv{\vartheta}}u)\xi_\infty(h{\vartheta})\,d{\vartheta}\\
&= \frac{(-a{\rm N}\inv u)^\frac12}{2\pi}\int_{S^1}(\bar{h}\inv{\vartheta}{z}_u)^{2({\kappa}+r)}e^{-2\pi a}
(h{\vartheta})^{{\kappa}+r}({\bar h}\inv{\vartheta})^{-{\kappa}-r}\,d{\vartheta}\\
&= (-{\rm N}\inv u)^\frac12\xi_\infty(z_u)a^{{\kappa}+r+\frac12}e^{-2\pi a}
\end{align*}
\end{enumerate}
Putting all the ingredients together
\begin{align*}
L_\infty&(\varphi^{(l,2({\kappa}+r))},\xi_\infty,s)=\\
&=- (-{\rm N}\inv u)^\frac12\xi_\infty(z_u)\int_{\mathbb R^{>0}}\int_{S^1}
a^{s+2{\kappa}+r-\frac12}e^{-4\pi a}e^{i(2{\kappa}+2r+1){\theta}}\\
& \qquad\qquad\times\left(\sum_{0\leq{\alpha}+{\beta}\leq l}\frac{{\gamma}({\alpha},{\beta};l)(2{\alpha}-1)!!(2{\beta}-1)!!}{(-4\pi)^{{\alpha}+{\beta}}}
(\cos{\theta})^{{\alpha}+{\beta}}e^{-({\alpha}+{\beta}+1)i{\theta}}\right)\,d^\times ad{\theta}\\
& =-(-{\rm N}\inv u)^\frac12\xi_\infty(z_u)\int_{\mathbb R^{>0}}a^{s+2{\kappa}+r-\frac12}e^{-4\pi a}\,d^\times a\\
& \qquad\qquad\times\sum_{0\leq{\alpha}+{\beta}\leq l}\frac{{\gamma}({\alpha},{\beta};l)(2{\alpha}-1)!!(2{\beta}-1)!!}{(-4\pi)^{{\alpha}+{\beta}}}
\int_{S^1}(\cos{\theta})^{{\alpha}+{\beta}}e^{-({\alpha}+{\beta}+1)i{\theta}}\,d{\theta}
\end{align*}
Since ${\alpha}+{\beta}\leq l\leq r$ we have
\begin{multline*}
\int_{S^1}(\cos{\theta})^{{\alpha}+{\beta}}e^{i(2r-{\alpha}-{\beta}+1){\theta}}\,d{\theta}=\\
\frac1{2^{{\alpha}+{\beta}}}\sum_{j=0}^{{\alpha}+{\beta}}\binom{{\alpha}+{\beta}}{j}\int_{S^1}e^{2i(r-j){\theta}}\,d{\theta}=
\begin{cases}
2^{1-r}\pi & \text{if ${\alpha}+{\beta}=l=r$ }, \\
0 & \text{otherwise}.
\end{cases}
\end{multline*}
Hence $L_\infty(\varphi^{(l,2({\kappa}+r))},\xi_\infty,s)=0$ if $l<r$. When $l=r$, since ${\gamma}(j,r-j;r)=\binom rj$ and
$\sum_{j=0}^r\binom rj(2j-1)!!(2r-2j-1)!!=2^rr!$ as readily proved by induction, we have
\begin{align*}
L_\infty(\varphi^{(r,2({\kappa}+r))},\xi_\infty,s)&=\frac{2\pi(-{\rm N}\inv u)^\frac12\xi_\infty(z_u)r!}{(-4\pi)^r}
\int_{\mathbb R^{>0}}a^{s+2{\kappa}+r-\frac12}e^{-4\pi a}\,d^\times a\\
&=\frac{(-1)^r2\pi(-{\rm N}\inv u)^\frac12\xi_\infty(z_u)r!}{(4\pi)^{s+2({\kappa}+r)-\frac12}}\Gamma(s+2{\kappa}+r-\frac12).
\end{align*}
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
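Two ingredients of the proof above can be checked independently of the main argument: the double factorial identity $\sum_{j=0}^{r}\binom rj(2j-1)!!(2r-2j-1)!!=2^{r}r!$ and the evaluation $\int_{0}^{\infty}a^{z}e^{-4\pi a}\,d^{\times}a=(4\pi)^{-z}\Gamma(z)$ used in the last display. The following is a small numerical sketch of both (our own verification, not part of the proof).
\begin{verbatim}
# Sanity checks (not from the paper) for two facts used in the proof above.
import math
from scipy.integrate import quad
from scipy.special import gamma

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

# (i) sum_j C(r,j) (2j-1)!! (2r-2j-1)!! = 2^r r!
for r in range(8):
    lhs = sum(math.comb(r, j) * double_factorial(2*j - 1)
              * double_factorial(2*r - 2*j - 1) for j in range(r + 1))
    assert lhs == 2**r * math.factorial(r)

# (ii) int_0^inf a^z exp(-4 pi a) da/a = (4 pi)^(-z) Gamma(z), for real z > 0
for z in (1.5, 3.0, 4.75):
    numeric, _ = quad(lambda a, z=z: a**(z - 1) * math.exp(-4*math.pi*a),
                      0, math.inf)
    closed = (4*math.pi)**(-z) * gamma(z)
    assert abs(numeric / closed - 1) < 1e-8
print("both identities check out numerically")
\end{verbatim}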
We shall now state and prove the main result of this section.
\begin{thm}\label{thm:maininterpolation}
Let $N$ be a positive integer and fix a decomposition
$N={\Delta} N_{o}$ with ${\Delta}$ a product of an even number of distinct
primes and $({\Delta},N_{o})=1$.
Let $\pi$ be an automorphic cuspidal representation for $\mathbb GL_{2}$ of
conductor $N$ such that
\begin{enumerate}
\item $\pi_{\infty}\simeq{\sigma}(\mu_{1},\mu_{2})$, the discrete series
representation with $\mu_{1}\mu_{2}^{-1}(t)=t^{2{\kappa}-1}{\rm sgn}(t)$.
\item $\pi_{\ell}$ is special for each $\ell|{\Delta}$.
\end{enumerate}
Let $K$ be a quadratic imaginary field such that all $\ell|{\Delta}$
are inert in $K$ and all $\ell|N_{o}$ are split in $K$. Let $c$ be a positive integer
with $(c,N)=1$ and $p$ an odd prime number not dividing
$N$ that splits in $K$. Assume that ${\cal O}_{K,c}^{\times}=\{\pm1\}$.
Suppose that there exist Gr\"ossencharakters $\chi\in\Xi_{(-2{\kappa},0)}({\cal O}_{K,c})$
and $\xi\in\Xi_{(-2,0)}({\cal O}_{K,c})$ such that the $p$-adic avatar $\wh\xi$ takes values
in a totally ramified extension of $\mathbb Q_p$.
Then, there exists $x\in\mathbb CM({\Delta},N;{\cal O}_{K,c})$ represented by
$\tau=t+yi\in\mathbb CM_{{\Delta},K}$ with associated periods ${\Omega}_\infty$ and ${\Omega}_p$
such that for all $r\geq0$
\begin{multline*}
{\Omega}_{p}^{-4({\kappa}+r)}\int_{\mathbb Z_p}z^r\,d\mu(f,x;\chi,\xi)= \\
\frac{2\varpi V_Nw_{K,c}^2}{m_{c}^2h_K}
\frac{(-1)^{{\kappa}+r}r!(2{\kappa}+r)!}
{4^{2{\kappa}+3r}\pi^{2({\kappa}+r+1)}y^{2({\kappa}+r)}{\Omega}_\infty^{4({\kappa}+r)}}
\nu(\xi_r,\tau,\frac12)L(\pi_K\otimes\xi_r,\frac12)L(\eta_K,1)^{-1}
\end{multline*}
where $\xi_r=\xi\chi^r\vvass{N_{K/\mathbb Q}}^{-{\kappa}-r}$ and $\varpi$ is a (fixed) ratio of Petersson norms.
\end{thm}
\par\noindent{\bf Proof. }
Let $D$ be the quaternion algebra with ${\Delta}_{D}={\Delta}$. By hypothesis
the representation $\pi$ is the
image of an automorphic representation $\pi^{D}$ of $D^{\times}$ under
the Jacquet-Langlands correspondence and let
$f\in S_{2{\kappa},0}({\Delta},N_{o})$ be a holomorphic newform in $\pi^{D}$.
For all integers $r\geq0$ let $\phi_{r}$ be as in \eqref{eq:defphir}.
By proposition \ref{teo:existCM} there exists $x\in\mathbb CM({\Delta},N;{\cal O}_{K,c})$;
choose a split $p$-ordinary test triple $(\tau,v,e)$, $\tau=t+iy$,
representing $x$, with corresponding $d_\infty\in{\rm SL}_2(\mathbb R)$.
By taking ${\cal O}_{v}$ and $F$ large enough, we can
assume that $f$ is defined over ${\cal O}_{(v)}\subset{\cal O}_F$ and that the measure
$\mu(f,x;\chi,\xi)$ has values in ${\cal O}_{F}$.
By remark \ref{rm:onchoice}.3 we can write
$\pi(d_{\infty})\phi_{0}\otimes\pi(d_{\infty})\phi_{0}=\varpi\widetilde{\theta}_{\varphi}(F)$
with $\varphi=\varphi_\infty\otimes\varphi^f$ as in \eqref{eq:choiceofphi},
$F$ the adelization of the normalized eigenform in $\pi$
and $\varpi\in\mathbb C^\times$ a Petersson normalization constant.
We claim that for all $r\geq0$
\begin{equation}
\pi(d_{\infty})\phi_{r}\otimes\pi(d_{\infty})\phi_{r}=
\varpi\frac{(-1)^{\kappa}}{4^r\pi}
\widetilde{\theta}_{\phi^{r,2({\kappa}+r)}\otimes\varphi^{f}}(F)
+\sum_{l=0}^{r-1}a_{r,l}\,\widetilde{\theta}_{\phi^{l,2({\kappa}+r)}\otimes\varphi^{f}}(F)
\label{eq:thetaclaim}
\end{equation}
where $a_{r,l}\in\varpi\mathbb Z[\pi]$.
Indeed, the short exact sequence \eqref{eq:sesGOD}
gives a Lie algebras identification
$\mathfrak g\mathfrak o(D)\simeq(D_{\infty}\times D_{\infty})/\mathbb R$
and in particular
$\mathfrak o(D)=\{(A,B)\in D_{\infty}\times D_{\infty}\,|\,{\rm tr} A={\rm tr} B\}/\mathbb R
\simeq\mathfrak s\mathfrak l_{2}\times\mathfrak s\mathfrak l_{2}$.
Under this identification, differentiating \eqref{eq:tlautom} yields
$$
\widetilde{\theta}_{H\varphi}(F)=\left.\frac{d}{dt}
\widetilde{\theta}_{\varphi}(F)(h\exp(tH))\right|_{t=0}
\hbox{ with }
H\varphi(x)=\left.\frac{d}{dt}\varphi(e^{-tH_{1}}xe^{tH_{2}})\right|_{t=0}
$$
for all $H=(H_{1},H_{2})\in{\rm Lie}(\mathrm{O}(D))$. If
$A\in\mathfrak s\mathfrak l_{2}$, a repeated application of the last formula with
$\pr{A}=(A,0)$ and $\pr{A}{}^{\prime}=(0,A)$ shows that the diagonal action
of $A$ on $\pi^{D}\otimes\pi^{D}$ corresponds to the action
of the second order operator
$A_{2}=\pr{A}\pr{A}{}^{\prime}=\pr{A}{}^{\prime}\pr{A}
\in\mathfrak A({\rm Lie}(\mathrm{O}(D)))$ on Schwartz functions, i.e.
$$
A_{2}\varphi(x)=\left.\frac{\partial^{2}}{\partial u\partial
v}\varphi(e^{-uA}xe^{vA})\right|_{u=v=0}.
$$
We are interested in the expression of the operator $A_{2}$ in the
normalized coordinates for
$A=d_{\infty}X^{+}d_{\infty}^{-1}$. Up to conjugation,
this amounts to computing the second order operator associated
to $A=X^{+}$ under the standard coordinates \eqref{eq:standardcoord}.
A straightforward computation using the obvious real coordinates associated to the underlying
real decomposition
$D_{\infty}= \mathbb R\smallmat 1{}{}1\oplus\mathbb R\smallmat{}{-1}1{}\oplus
\mathbb R\smallmat {}11{}\oplus\mathbb R\smallmat{-1}{}{}1$
shows that
$\pr{A}=-i\left(z_2\frac{\partial}{\partial{\bar z}_1}+z_1\frac{\partial}{\partial{\bar z}_2}\right)$
and
$\pr{A}{}^\prime=i\left(z_2\frac{\partial}{\partial z_1}+{\bar z}_1\frac{\partial}{\partial{\bar z}_2}\right)$, so that
$$
A_2=z_2^2\frac{\partial^2}{\partial z_1\partial{\bar z}_1}+
{\bar z}_1{z}_2\frac{\partial^2}{\partial{\bar z}_1\partial{\bar z}_2}+
z_1z_2\frac{\partial^2}{\partial z_1\partial{\bar z}_2}+
z_1{\bar z}_1\frac{\partial^2}{{\partial{\bar z}_2}^2}+
z_2\frac{\partial}{\partial{\bar z}_2}.
$$
Since
$$
A_2\phi^{m,q}=
\begin{cases}
-2\pi\phi^{0,q+2}+4\pi^2\phi^{1,q+2} & \text{if $m=0$}, \\
m^2\phi^{m-1,q+2}-(4m-2)\pi\phi^{m,q+2}+4\pi^2\phi^{m+1,q+2} & \text{if $m\geq1$},
\end{cases}
$$
formula \eqref{eq:thetaclaim} follows from an $r$-fold iteration using the linearity of the theta lift and the definitions \eqref{eq:defphir} and \eqref{eq:choiceofphi} of $\phi_r$ and $\varphi_\infty$ respectively.
Let $\chi_r$ be a Gr\"ossencharakter of weight $(-2({\kappa}+r),0)$
and trivial on $\wh{R}_{c}^{\times}$ such that
$\xi_r=\chi_r\vvass{N_{K/\mathbb Q}}^{-{\kappa}-r}$ is unitary.
Combining \eqref{eq:thetaclaim} and \eqref{eq:finalexp} with lemma \ref{le:localarch} we get
\begin{equation}
\label{eq:end}
J_{r}(f,\xi_r,\tau)^{2} =
\frac{(-1)^{{\kappa}+r}2\varpi V_Nr!(2{\kappa}+r)!}{4^{2{\kappa}+3r}h_K\pi^{2{\kappa}+2r}}
\nu(\xi_r,\tau,\frac12)L(\pi_K\otimes\xi_r,\frac12)L(\eta_K,1)^{-1}.
\end{equation}
On the other hand, from theorem \ref{th:compJ},
$$
J_{r}(f,\xi_{r},\tau)^{2}=
\frac{m_c^2\pi^2}{w_{K,c}^2}(h_c^\sharp)^2y^{2({\kappa}+r)}{\Omega}_\infty^{4({\kappa}+r)}
\scal{c_{(r)}(f,x)}{\chi_r}^{2}.
$$
When $\chi_r=\chi\xi^r$ we use proposition \ref{th:spiden} to rewrite the last formula as
$$
{\Omega}_p^{-4({\kappa}+r)}m_r(\mu(f,x;\chi,\xi))^2=
\frac{w_{K,c}^2}{m_c^2\pi^2}{\Omega}_\infty^{-4({\kappa}+r)}y^{-2({\kappa}+r)}J_{r}(f,\xi_r,\tau)^{2}.
$$
Substituting \eqref{eq:end} into the latter formula proves the theorem. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par
{\beta}gin{thebibliography}{10}
\bibitem{BerDar96}
M.~Bertolini and H.~Darmon.
\newblock Heegner points on {M}umford-{T}ate curves.
\newblock {\em Inventiones Math.}, 126:413--456, 1996.
\bibitem{BoCa91}
J.~F. Boutot and H.~Carayol.
\newblock Uniformisation $p$-adique des courbes de {S}himura: les theoremes de
{C}erednik et de {D}rinfeld.
\newblock In {\em Courbes modulaires et courbes de {S}himura}, volume 196 of
{\em Ast{\'e}risque}, pages 45--158, 1991.
\bibitem{Bu96}
D.~Bump.
\newblock {\em Automorphic {F}orms and {R}epresentations}, volume~55 of {\em
Cambridge Studies in Adv. Math..}
\newblock Cambridge University Press, 1996.
\bibitem{Dar04}
H.~Darmon.
\newblock {\em Rational {P}oints on {M}odular {E}lliptic {C}urves}, volume~101 of {\em
Regional Conf. Ser. Math..}
\newblock American Mathematical Society, 2004.
\bibitem{Del82}
P.~Deligne.
\newblock Hodge cycles on abelian varieties.
\newblock In {\em Hodge cycles, {M}otives, and {S}himura varieties}, volume 900
of {\em Lecture Notes in Math.}, pages 9--100, 1982.
\bibitem{DelRap73}
P.~Deligne and M.~Rapoport.
\newblock Les sch{\'e}mas de modules de courbes elliptiques.
\newblock In {\em Modular Functions of One Variable II}, volume 349 of {\em
Lecture Notes in Math.}, pages 143--316, 1973.
\bibitem{DiaIm95}
F.~Diamond and J.~Im.
\newblock Modular forms and modular curves.
\newblock In {\em Seminar on Fermat's Last Theorem (Toronto, ON, 1993--1994)},
volume~17 of {\em CMS Conf. Proc.}, pages 39--133. American Math. Soc., 1995.
\bibitem{DiaTay94}
F.~Diamond and R.~Taylor.
\newblock Non-optimal levels of mod $l$ representations.
\newblock {\em Inventiones Math.}, 115:435--462, 1994.
\bibitem{FalCha90}
G.~Faltings and C.-L. Chai.
\newblock {\em Degenerations of Abelian Varieties}, volume~22 of {\em
Ergebnisse der Math.}
\newblock Springer, 1990.
\bibitem{Harris79}
M.~Harris.
\newblock A note on three lemmas of {S}himura.
\newblock {\em Duke Math. J.}, 46:871--879, 1979.
\bibitem{Harris81}
M.~Harris.
\newblock Special values of zeta functions attached to {S}iegel modular forms.
\newblock {\em Ann. Sc. {\'E}c. Norm. Sup.}, 14:77--120, 1981.
\bibitem{Harris93}
M.~Harris.
\newblock Non-vanishing of ${L}$-functions on $2\times2$ unitary groups.
\newblock {\em Forum Math.}, 5:405--419, 1993.
\bibitem{HaKu91}
M.~Harris and S.~Kudla.
\newblock The central critical value of a triple product ${L}$-function.
\newblock {\em Annals Math.}, 133:605--672, 1991.
\bibitem{HaKu92}
M.~Harris and S.~Kudla.
\newblock Arithmetic automorphic forms for the nonholomorphic discrete series
of ${\mathbb GSp}(2)$.
\newblock {\em Duke Math. J.}, 66:59--121, 1992.
\bibitem{HaTi01}
M.~Harris and J.~Tilouine.
\newblock $p$-adic measures and square roots of special values of triple
product ${L}$-functions.
\newblock {\em Math. Ann.}, 320:127--147, 2001.
\bibitem{Hashim95}
K.~Hashimoto.
\newblock Explicit form of quaternion modular embeddings.
\newblock {\em Osaka J. Math.}, 32:533--546, 1995.
\bibitem{Hida86}
H.~Hida.
\newblock Hecke algebras for ${\mathbb GL}_1$ and ${\mathbb GL}_2$.
\newblock In {\em Seminaire de theorie des nombres, Paris 1983--84}, volume~63
of {\em Progr. Math.}, pages 133--146, 1986.
\bibitem{Hida93}
H.~Hida.
\newblock {\em Elementary Theory of ${L}$-functions and Eisenstein Series},
volume~26 of {\em Student Texts}.
\newblock London Mathematical Society, 1993.
\bibitem{Howe89}
R.~Howe.
\newblock Transcending classical invariant theory.
\newblock {\em J. Amer. Math. Soc}, 2:536--552, 1989.
\bibitem{Ja72}
H.~Jacquet.
\newblock {\em Automorphic forms on $\mathbb GL(2)$, II}, volume 278 of {\em Lecture Notes
in Math}.
\newblock Springer-Verlag, 1972.
\bibitem{JaLa70}
H.~Jacquet and R.~P. Langlands.
\newblock {\em Automorphic forms on $\mathbb GL(2)$}, volume 114 of {\em Lecture Notes
in Math}.
\newblock Springer-Verlag, 1970.
\bibitem{Jord86}
B.~Jordan.
\newblock Points on {S}himura curves rational over number fields.
\newblock {\em J. reine u. angew. Math.}, 371:92--114, 1986.
\bibitem{Katz70}
N.~Katz.
\newblock Nilpotent connections and the monodromy theorem: applications of a
result of {T}urrittin.
\newblock {\em Publ. Math. IHES}, 39:355--412, 1970.
\bibitem{Katz73}
N.~Katz.
\newblock $p$-adic properties of modular schemes and modular forms.
\newblock In {\em Modular Functions of One Variable III}, volume 350 of {\em
Lecture Notes in Math.}, pages 70--189, 1973.
\bibitem{Katz76}
N.~Katz.
\newblock $p$-adic interpolation of real analytic {E}isenstein series.
\newblock {\em Annals of Math.}, 104:459--571, 1976.
\bibitem{Katz78}
N.~Katz.
\newblock $p$-adic ${L}$-function for {C}{M} fields.
\newblock {\em Inventiones Math.}, 49:199--297, 1978.
\bibitem{Katz81}
N.~Katz.
\newblock Serre-{T}ate local moduli.
\newblock In {\em Surfaces Algebraiques}, volume 868 of {\em Lecture Notes in
Math.}, pages 138--202, 1981.
\bibitem{KatMaz85}
N.~Katz and B.~Mazur.
\newblock {\em Arithmetic moduli of elliptic curves}, volume 108 of {\em Annals
of Math. Studies}.
\newblock Princeton Univ. Press, 1985.
\bibitem{KatOda68}
N.~Katz and T.~Oda.
\newblock On the differentiation of de {R}ham cohomology classes with respect
to parameters.
\newblock {\em J. Math. Kyoto Univ.}, 8(2):199--213, 1968.
\bibitem{Kudla84}
S.~Kudla.
\newblock Seesaw dual reductive pairs.
\newblock In {\em Automorphic forms of several variables (Katata 1983)},
volume~46 of {\em Progr. Math}, pages 244--268. Birkh\"{a}user Boston, 1984.
\bibitem{Maass53}
H.~Maass.
\newblock Die {D}ifferentialgleichungen in der {T}heorie der {S}iegelschen
{M}odulfunktionen.
\newblock {\em Math. Ann.}, 126:44--68, 1953.
\bibitem{Milne79}
J.~S. Milne.
\newblock Points on {S}himura varieties $\bmod p$.
\newblock In {\em Automorphic {F}orms, {R}epresentations and ${L}$-functions},
volume~33 of {\em Proc. Symp. Pure Math.}, pages 165--183, 1979.
\bibitem{Mori94}
A.~Mori.
\newblock A characterization of integral elliptic automorphic forms.
\newblock {\em Ann. Sc. Norm. Sup. Pisa}, IV(21):45--62, 1994.
\bibitem{MoTe99}
A.~Mori and L.~Terracini.
\newblock A canonical map between {H}ecke algebras.
\newblock {\em Boll. Un. Mat. It. Sez. B}, 8(2):429--452, 1999.
\bibitem{Mum65}
D.~Mumford.
\newblock {\em Geometric Invariant Theory}.
\newblock Springer-Verlag, Heidelberg, 1965.
\bibitem{Noot92}
R.~Noot.
\newblock {\em Hodge classes, {T}ate classes, and local moduli of abelian
varieties}.
\newblock PhD thesis, Rijksuniversiteit Utrecht, 1992.
\bibitem{Oort71}
F.~Oort.
\newblock Finite group schemes, local moduli for abelian varieties and lifting
problems.
\newblock {\em Compositio Math.}, 23:265--296, 1971.
\bibitem{Pra06}
K.~Prasanna.
\newblock {\em Integrality of a ratio of {P}etersson norms and level-lowering congruences}.
\newblock {\em Ann. Math.}, 163:901--967, 2006.
\bibitem{Robert89}
D.~Roberts.
\newblock {\em Shimura curves analogous to ${X}_0({N})$}.
\newblock PhD thesis, Harvard University, 1989.
\bibitem{Serre62}
J.-P. Serre.
\newblock Endomorphismes completement continus des espaces de {B}anach
$p$-adiques.
\newblock {\em IHES Publ. Math.}, 12:69--85, 1962.
\bibitem{Serre73}
J.-P. Serre.
\newblock Formes modulaires et fonctions z\^eta $p$-adiques.
\newblock In {\em Modular Functions of One Variable, III}, volume 350 of {\em Lecture Notes in
Math.}, pages 191--268, 1973.
\bibitem{Shimi72}
H.~Shimizu.
\newblock Theta series and automorphic forms on ${\mathbb GL}_2$.
\newblock {\em J. Math. Soc. Japan}, 24:638--683, 1972.
\bibitem{ShiRed}
G.~Shimura.
\newblock {\em Introduction to the Arithmetic Theory of Automorphic Functions}.
\newblock Iwanami Shoten and Princeton University Press, 1971.
\bibitem{Tilo96}
J.~Tilouine.
\newblock {\em Deformations of {G}alois representations and {H}ecke algebras}.
\newblock Narosa Publ. House, 1996.
\bibitem{Vigner80}
M.-F. Vigneras.
\newblock {\em Arithm{\'e}tique des Alg{\`e}bres de Quaternions}, volume 800 of
{\em Lecture Notes in Math.}
\newblock Springer-Verlag, 1980.
\bibitem{Waldsp85}
J.-L. Waldspurger.
\newblock Sur les valeurs des certaines fonctions ${L}$ automorphes en leur
centre de sym{\'e}trie.
\newblock {\em Compositio Math.}, 54:173--242, 1985.
\bibitem{Wat03}
T.~Watson.
\newblock {\em Rankin triple products and quantum chaos }.
\newblock PhD thesis, Princeton University, 2003.
\bibitem{Weil55}
A.~Weil.
\newblock On a certain type of characters of the idele-class group of an
algebraic number-field.
\newblock In {\em Proceedings of the international symposium on algebraic
number theory, Tokyo \& Nikko, 1955}, pages 1--7, 1956.
\end{thebibliography}
\end{document}
\begin{document}
\title[]{Some Remarks on Nonlinear Hyperbolic Equations}
\author{Kamal N. Soltanov}
\address{{\small Department of Mathematics, }\\
{\small Faculty of Sciences, Hacettepe University, }\\
{\small Beytepe, Ankara, TR-06532, TURKEY} }
\email{[email protected]}
\date{}
\subjclass[2010]{Primary 35G25, 35B65, 35L70; Secondary 35K55, 35G20}
\keywords{Nonlinear hyperbolic and parabolic equations, Neumann problem, a
priori estimation, smoothness}
\begin{abstract}
Here a mixed problem for a nonlinear hyperbolic equation with a Neumann
boundary condition is investigated, and a priori estimates for the
possible solutions of the considered problem are obtained. These results
demonstrate that any solution of this problem possesses certain smoothness
properties.
\end{abstract}
\maketitle
\section*{Introduction}
In this article we consider a mixed problem for a nonlinear hyperbolic
equation and study, in a certain sense, the smoothness of possible solutions
of the problem. We obtain some new a priori estimates for solutions of the
considered problem.
It is known that, up to now, the solvability of a nonlinear
hyperbolic equation with a nonlinearity of this type has not been established when $
\Omega \subset R^{n}$, $n\geq 2$. It should also be noted that the a priori
estimates obtainable by the known methods are not sufficient to prove
solvability in this case. Consequently, no solvability results are available
for the mixed problem for equations of the following type
\begin{equation*}
\frac{\partial ^{2}u}{\partial t^{2}}-\overset{n}{\underset{i=1}{\sum }}D_{i}
\left[ a_{i}\left( t,x\right) \left\vert D_{i}u\right\vert ^{p-2}D_{i}u
\right] =h\left( t,x\right) ,\quad \left( t,x\right) \in Q_{T}
\end{equation*}
\begin{equation*}
Q_{T}\equiv \left( 0,T\right) \times \Omega ,\quad \Omega \subset
R^{n},\quad T>0,\quad p>2.
\end{equation*}
As is known, the investigation of mixed problems for nonlinear hyperbolic
equations of this type in Sobolev-type spaces when $\Omega \subset
R^{n}$, $n\geq 2$, is connected with many difficulties (see, for example, the
works of Leray, Courant, Friedrichs, Lax, F. John, Garding, Ladyzhenskaya,
J.-L. Lions, H. Levine, Rozdestvenskii and also [2, 7 - 11, 14 - 16, 18,
19], etc.). Furthermore, the possible solutions of this problem may exhibit
a gradient catastrophe. Only in the case $n=1$ has it been possible to prove
solvability theorems for problems of this type (and essentially by
using the Riemann invariants).
However, certain classes of nonlinear hyperbolic equations have recently been
investigated and results on the solvability of the considered problems in a
more generalized sense were obtained (see, for example, [13] and its references);
a result about dense solvability was obtained as well ([20]).
Furthermore, there are special classes of nonlinear hyperbolic
equations whose solvability has been studied under some additional
conditions (see [3, 4, 13, 17, 23] and the references therein), for example under
some geometrical conditions.
Here we investigate a mixed problem with Neumann boundary-value conditions
for equations of a certain class. First, a mixed problem for a
nonlinear parabolic equation with a similar nonlinearity and similar conditions
is studied and the existence of strong solutions of this problem
is proved. Then some a priori estimates for a possible solution of the
considered problem are obtained in the hyperbolic case with the use of the result
on the parabolic problem studied above. These results demonstrate that any
solution of our main problem possesses certain smoothness properties, which
might help in the proof of some existence theorems.\footnote{
Unfortunately, I could not use the obtained estimates for this aim.}
\section{Formulation of Problem}
Consider the problem
\begin{equation}
\frac{\partial ^{2}u}{\partial t^{2}}-\overset{n}{\underset{i=1}{\sum }}
D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) =h\left(
t,x\right) ,\quad \left( t,x\right) \in Q_{T},\quad p>2, \tag{1.1}
\end{equation}
\begin{equation}
u\left( 0,x\right) =u_{0}\left( x\right) ,\quad \frac{\partial u}{\partial t}
\ \left\vert ~_{t=0}\right. =u_{1}\left( x\right) ,\quad x\in \Omega \subset
R^{n},\quad n\geq 2, \tag{1.2}
\end{equation}
\begin{equation}
\frac{\partial u}{\partial \widehat{\nu }}\left\vert {}~_{\Gamma }\right.
\equiv \overset{n}{\underset{i=1}{\sum }}\left\vert D_{i}u\right\vert
^{p-2}D_{i}u\cos \left( \nu ,x_{i}\right) =0,\quad \left( x^{\prime
},t\right) \in \Gamma \equiv \partial \Omega \times \left[ 0,T\right] ,
\tag{1.3}
\end{equation}
here $\Omega \subset R^{n}$, $n\geq 2$, is a bounded domain with sufficiently
smooth boundary $\partial \Omega $; $u_{0}\left( x\right) $, $u_{1}\left(
x\right) $, $h\left( t,x\right) $ are functions such that $u_{0},u_{1}\in
W_{p}^{1}\left( \Omega \right) $, $h\in L_{p}\left( 0,T;W_{p}^{1}\left(
\Omega \right) \right) $, and $\nu $ denotes the unit outward normal to $\partial
\Omega $ (see [10, 12]).
Introduce the following class of functions $u:Q\longrightarrow R$:
\begin{equation*}
V\left( Q\right) \equiv W_{2}^{1}\left( 0,T;L_{2}\left( \Omega \right)
\right) \cap L^{\infty }\left( 0,T;W_{p}^{1}\left( \Omega \right) \right)
\cap L_{p-1}\left( 0,T;\widetilde{S}_{1,2\left( p-2\right) ,2}^{1}\left(
\Omega \right) \right) \cap
\end{equation*}
\begin{equation*}
\left\{ u\left( t,x\right) \left\vert \ \frac{\partial ^{2}u}{\partial t^{2}}
,\ \underset{i=1}{\overset{n}{\sum }}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}^{2}u\right) \in L_{1}\left( 0,T;L_{2}\left( \Omega \right)
\right) \right. \right\} \cap
\end{equation*}
\begin{equation*}
\left\{ u\left( t,x\right) \left\vert \ \overset{n}{\underset{i=1}{\sum }}
\underset{0}{\overset{t}{\int }}\left\vert D_{i}u\right\vert
^{p-2}D_{i}ud\tau \in W_{\infty }^{1}\left( 0,T;L_{q}\left( \Omega \right)
\right) \cap L^{\infty }\left( 0,T;W_{2}^{1}\left( \Omega \right) \right)
\right. \right\} \cap
\end{equation*}
\begin{equation}
\left\{ u\left( t,x\right) \left\vert \ u\left( 0,x\right) =u_{0}\left(
x\right) ,\ \frac{\partial u}{\partial t}\ \left\vert ~_{t=0}\right.
=u_{1}\left( x\right) ,\ \frac{\partial u}{\partial \widehat{\nu }}
\left\vert ~_{\Gamma }\right. =0\right. \right\} \tag{DS}
\end{equation}
where
\begin{equation*}
\widetilde{S}_{1,\alpha ,\beta }^{1}\left( \Omega \right) \equiv \left\{
u\left( t,x\right) \left\vert ~\left[ u\right] _{S_{1,\alpha ,\beta
}^{1}}^{\alpha +\beta }=\left\Vert u\right\Vert _{\alpha +\beta }^{\alpha
+\beta }+\underset{i=1}{\overset{n}{\sum }}\left\Vert D_{i}u\right\Vert
_{\alpha +\beta }^{\alpha +\beta }+\right. \right.
\end{equation*}
\begin{equation*}
\left. \left\Vert \underset{i,j=1}{\overset{n}{\sum }}\left\vert
D_{i}u\right\vert ^{\frac{\alpha }{\beta }}D_{j}D_{i}u\right\Vert _{\beta
}^{\beta }<\infty \right\} ,\quad \alpha \geq 0,\ \beta \geq 1.
\end{equation*}
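For orientation, the quantity $[u]_{S_{1,\alpha ,\beta }^{1}}^{\alpha +\beta }$ defined above can be transliterated into a crude finite-difference computation. The sketch below (our own illustration, with an arbitrary sample function; it plays no role in the arguments that follow) evaluates the three summands for a grid function on the unit square.
\begin{verbatim}
# Illustrative finite-difference evaluation (not from the paper) of the
# discrete analogue of [u]^{alpha+beta} in the space S^1_{1,alpha,beta}:
# ||u||^{a+b} + sum_i ||D_i u||^{a+b} + ||sum_{i,j} |D_i u|^{a/b} D_j D_i u||_b^b
import numpy as np

def S_seminorm(u, h, alpha, beta):
    ab = alpha + beta
    D = np.gradient(u, h)                        # central differences: D_1 u, D_2 u
    total = np.sum(np.abs(u) ** ab) * h**2       # ||u||_{a+b}^{a+b}
    total += sum(np.sum(np.abs(Di) ** ab) * h**2 for Di in D)
    inner = sum(np.abs(Di) ** (alpha / beta) * np.gradient(Di, h)[j]
                for Di in D for j in range(2))   # sum_{i,j} |D_i u|^{a/b} D_j D_i u
    total += np.sum(np.abs(inner) ** beta) * h**2
    return total

# example: u(x,y) = sin(pi x) cos(pi y) sampled on a 101 x 101 grid
x = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.cos(np.pi * Y)
print(S_seminorm(u, x[1] - x[0], alpha=2.0, beta=2.0))
\end{verbatim}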
Thus, we understand a solution of the problem in the following sense:
a function $u\left( t,x\right) \in V\left( Q\right) $ is called a solution of
problem (1.1) - (1.3) if $u\left( t,x\right) $ satisfies the following
equality
\begin{equation*}
\left[ \frac{\partial ^{2}u}{\partial t^{2}},v\right] -\left[ \overset{n}{
\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) ,v\right] =\left[ h,v\right]
\end{equation*}
for any $v\in W_{q}^{1}\left( 0,T;L_{2}\left( \Omega \right) \right) \cap
L^{\infty }\left( Q\right) $, where $\left[ \circ ,\circ \right] \equiv
\underset{Q}{\int }\circ \times \circ ~dxdt$.
Our aim in this article is to prove
\begin{theorem}
Under the conditions of this section each solution of problem (1.1)-(1.3)
belongs to a bounded subset of the space $V\left( Q\right) $ defined in (DS).
\end{theorem}
For the investigation of the posed problem we first study
two problems which are connected with the considered problem. One of these
problems follows immediately from problem (1.1)-(1.3) and has the form:
\begin{equation}
\frac{\partial u}{\partial t}-\overset{n}{\underset{i=1}{\sum }}D_{i}\overset
{t}{\underset{0}{\int }}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) d\tau =H\left( t,x\right) +u_{1}\left( x\right) ,
\tag{1.4}
\end{equation}
where $H\left( t,x\right) =\underset{0}{\overset{t}{\int }}h\left( \tau
,x\right) d\tau $.
Consequently, if $u\left( t,x\right) $ is a solution of problem (1.1) -
(1.3), then $u\left( t,x\right) $ is a solution of equation (1.4) for which
the following conditions are fulfilled:
\begin{equation}
u\left( 0,x\right) =u_{0}\left( x\right) ,\quad \frac{\partial u}{\partial
\widehat{\nu }}\left\vert _{\Gamma }\right. =0. \tag{1.5}
\end{equation}
From here it follows that problems (1.1) - (1.3) and (1.4) - (1.5) are
equivalent.
The other problem is the nonlinear parabolic problem
\begin{equation}
\frac{\partial u}{\partial t}-\ \overset{n}{\underset{i=1}{\sum }}
D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) =h\left(
t,x\right) ,\quad \left( t,x\right) \in Q,\ p>2,\ n\geq 2, \tag{1.6}
\end{equation}
\begin{equation}
u\left( 0,x\right) =u_{0}\left( x\right) ,\quad x\in \Omega ,\ \quad \frac{
\partial u}{\partial \widehat{\nu }}\left\vert ~_{\Gamma }\right. =0,
\tag{1.7}
\end{equation}
where $u_{0}\in W_{p}^{1}\left( \Omega \right) $, $h\in L_{2}\left(
0,T;W_{2}^{1}\left( \Omega \right) \right) $ and $p>2$.
We first study the solvability of this problem, for which a
general result is used; therefore we begin with this result.
\section{Some General Solvability Results}
Let $X,Y$ be locally convex vector topological spaces, $B\subseteq Y$ be a
Banach space and $g:D\left( g\right) \subseteq X\longrightarrow Y$ be a
mapping. Introduce the following subset of $X$:
\begin{equation*}
\mathcal{M}_{gB}\equiv \left\{ x\in X\left\vert ~g\left( x\right) \in
B,\right. \func{Im}g\cap B\neq \varnothing \right\} .
\end{equation*}
\begin{definition}
A subset $\mathcal{M}\subseteq X$ is called a $pn-$space (i.e. pseudonormed
space) if $\mathcal{M}$ is a topological space and there is a function $
\left[ \cdot \right] _{\mathcal{M}}:\mathcal{M}\longrightarrow R_{+}^{1}\equiv \left[
0,\infty \right) $ (which is called the $p$-norm of $\mathcal{M}$) such that
qn) $\left[ x\right] _{\mathcal{M}}\geq 0$, $\forall x\in \mathcal{M}$ and $
0\in \mathcal{M}$, $x=0\Longrightarrow \left[ x\right] _{\mathcal{M}}=0$;
pn) \ $\left[ x_{1}\right] _{\mathcal{M}}\neq \left[ x_{2}\right] _{\mathcal{
M}}\Longrightarrow x_{1}\neq x_{2}$, for $x_{1},x_{2}\in \mathcal{M}$, and $
\left[ x\right] _{\mathcal{M}}=0\Longrightarrow x=0$;
\end{definition}
The following conditions are often fulfilled in the spaces $\mathcal{M}_{gB}$.
N) There exist a convex function $\nu :R^{1}\longrightarrow \overline{
R_{+}^{1}}$ and number $K\in \left( 0,\infty \right] $ such that $\left[
\lambda x\right] _{\mathcal{M}}\leq \nu \left( \lambda \right) \left[ x
\right] _{\mathcal{M}}$ for any $x\in \mathcal{M}$ and $\lambda \in R^{1}$, $
\left\vert \lambda \right\vert <K$, moreover $\underset{\left\vert \lambda
\right\vert \longrightarrow \lambda _{j}}{\lim }\frac{\nu \left( \lambda
\right) }{\left\vert \lambda \right\vert }=c_{j}$, $j=0,1$ where $\lambda
_{0}=0$, $\lambda _{1}=K$ and $c_{0}=c_{1}=1$ or $c_{0}=0$, $c_{1}=\infty $,
i.e. if $K=\infty $ then $\lambda x\in \mathcal{M}$ for any $x\in \mathcal{M}$ and $
\lambda \in R^{1}.$
Let $g:D\left( g\right) \subseteq X\longrightarrow Y$ be a mapping such that
$\mathcal{M}_{gB}\neq \varnothing $ and the following conditions are
fulfilled:
(g$_{\text{1}}$) $g:D\left( g\right) \longleftrightarrow \func{Im}g$ is a
bijection and $g\left( 0\right) =0$;
(g$_{\text{2}}$) there is a function $\nu :R^{1}\longrightarrow \overline{
R_{+}^{1}}$ satisfying condition N such that
\begin{equation*}
\left\Vert g\left( \lambda x\right) \right\Vert _{B}\leq \nu \left( \lambda
\right) \left\Vert g\left( x\right) \right\Vert _{B},\ \forall x\in \mathcal{
M}_{gB},\ \forall \lambda \in R^{1};
\end{equation*}
If the mapping $g$ satisfies conditions (g$_{1}$) and (g$_{2}$), then $\mathcal{M
}_{gB}$ is a $pn-$space with $p-$norm defined in the following way: there is
a one-to-one function $\psi :R_{+}^{1}\longrightarrow R_{+}^{1}$, $\psi
\left( 0\right) =0$, $\psi ,\psi ^{-1}\in C^{0}$ such that $\left[ x\right]
_{\mathcal{M}_{gB}}\equiv \psi ^{-1}\left( \left\Vert g\left( x\right)
\right\Vert _{B}\right) $. In this case $\mathcal{M}_{gB}$ is a metric space
with a metric: $d_{\mathcal{M}}\left( x_{1};x_{2}\right) \equiv \left\Vert
g\left( x_{1}\right) -g\left( x_{2}\right) \right\Vert _{B}$. In what follows we
consider only $pn-$spaces of this type.
\begin{definition}
The $pn-$space $\mathcal{M}_{gB}$ is called weakly complete if $g\left(
\mathcal{M}_{gB}\right) $ is weakly closed in $B.$ The pn-space $\mathcal{M}
_{gB}$ is "reflexive" if each bounded weakly closed subset of $\mathcal{M}
_{gB}$ is weakly compact in $\mathcal{M}_{gB}$.
\end{definition}
It is clear that if $B$ is a reflexive Banach space and $\mathcal{M}_{gB}$
is a $pn-$space, then $\mathcal{M}_{gB}$ is "reflexive". Moreover, if $B$ is
a separable Banach space, then $\mathcal{M}_{gB}$ is separable (see, for
example, [21, 22] and their references).
Now, consider a nonlinear equation in the general form. Let $X,Y$ be Banach
spaces with dual spaces $X^{\ast },Y^{\ast }$ respectively, $\mathcal{M}
_{0}\subseteq X$ is a weakly complete $pn-$space, $f:D\left( f\right)
\subseteq X\longrightarrow Y$ be a nonlinear operator. Consider the equation
\begin{equation}
f\left( x\right) =y,\quad y\in Y. \tag{2.1}
\end{equation}
\begin{notation}
It is clear that (2.1)$\mathit{\ }$is equivalent to the following functional
equation:
\begin{equation}
\left\langle f\left( x\right) ,y^{\ast }\right\rangle =\left\langle
y,y^{\ast }\right\rangle ,\quad \forall y^{\ast }\in Y^{\ast }. \tag{2.2}
\end{equation}
\end{notation}
Let $f:D\left( f\right) \subseteq X\longrightarrow Y$ be a nonlinear bounded
operator for which the following conditions hold:
1) $f:\mathcal{M}_{0}\subseteq D\left( f\right) \longrightarrow Y$ is a
weakly compact (weakly "continuous") mapping, i.e. for any weakly
convergent sequence $\left\{ x_{m}\right\} _{m=1}^{\infty }\subset \mathcal{
M}_{0}$ in $\mathcal{M}_{0}$ (i.e. $x_{m}\overset{\mathcal{M}_{0}}{
\rightharpoonup }x_{0}\in \mathcal{M}_{0}$) there is a subsequence $\left\{
x_{m_{k}}\right\} _{k=1}^{\infty }\subseteq \left\{ x_{m}\right\}
_{m=1}^{\infty }$ such that $f\left( x_{m_{k}}\right) \overset{Y}{
\rightharpoonup }f\left( x_{0}\right) $ weakly in $Y$ (or for a general
sequence if $\mathcal{M}_{0}$ is not a separable space) and $\mathcal{M}_{0}$
is a weakly complete $pn-$space;
2) there exist a mapping $g:X_{0}\subseteq X\longrightarrow Y^{\ast }$ and a
continuous function $\varphi :R_{+}^{1}\longrightarrow R^{1}$, nondecreasing
for $\tau \geq \tau _{0}\geq 0$ and with $\varphi \left( \tau _{1}\right) >0$ for
some number $\tau _{1}>0$, such that $g$ generates a "coercive" pair with $f$
in a generalized sense on the topological space $X_{1}\subseteq X_{0}\cap
\mathcal{M}_{0}$, i.e.
\begin{equation*}
\left\langle f\left( x\right) ,g\left( x\right) \right\rangle \geq \varphi
\left( \lbrack x]_{\mathcal{M}_{0}}\right) [x]_{\mathcal{M}_{0}},\quad
\forall x\in X_{1},
\end{equation*}
where $X_{1}$ is a topological space such that $\overline{X_{1}}
^{X_{0}}\equiv X_{0}$ and $\overline{X_{1}}^{\mathcal{M}_{0}}\equiv \mathcal{
M}_{0}$, and $\left\langle \cdot ,\cdot \right\rangle $ is the
duality pairing of the pair $\left( Y,Y^{\ast }\right) $. Moreover one
of the following conditions $\left( \alpha \right) $ or $\left( \beta
\right) $ holds:
$\left( \alpha \right) $ if $g\equiv L$ is a linear continuous operator,
then $\mathcal{M}_{0}$ is a \textquotedblright reflexive\textquotedblright\
space (see [21, 22]), $X_{0}\equiv X_{1}\subseteq \mathcal{M}_{0}$ is a
separable topological vector space which is dense in $\mathcal{M}_{0}$ and $
\ker L^{\ast }=\left\{ 0\right\} $.
$\left( \beta \right) $ if $g$ is a bounded operator (in general,
nonlinear), then $Y$ is a reflexive separable space, $g\left( X_{1}\right) $
contains an everywhere dense linear manifold of $Y^{\ast }$ and $g^{-1}$ is
a weakly compact (weakly continuous) operator from $Y^{\ast }$ to $\mathcal{M
}_{0}$.
\begin{theorem}
\textit{Let conditions 1 and 2 hold. Then equation (2.1) (or (2.2))\ is
solvable in }$\mathcal{M}_{0}$\textit{\ for any} $y\in Y$ \textit{satisfying
the following condition: there exists} $r>0$ such that\textit{\ the
inequality}
\begin{equation}
\varphi \left( \lbrack x]_{\mathcal{M}_{0}}\right) [x]_{\mathcal{M}_{0}}\geq
\left\langle y,g\left( x\right) \right\rangle ,\text{ for}\quad \forall x\in
X_{1}\quad \text{with}\quad \lbrack x]_{\mathcal{M}}\geq r. \tag{2.3}
\end{equation}
holds.
\end{theorem}
\begin{proof}
Assume that conditions 1 and 2 ($\alpha $) are fulfilled and let $y\in
Y$ be such that (2.3) holds. We use Galerkin's approximation
method. Let $\left\{ x^{k}\right\} _{k=1}^{\infty }$ be a complete system in
the (separable) space $X_{1}\equiv X_{0}$. We look for
approximate solutions in the form $x_{m}=\overset{m}{\underset{k=1}{\sum }}
c_{mk}x^{k},$ where $c_{mk}$ are unknown coefficients, which can be
determined from the system of algebraic equations
\begin{equation}
\Phi _{k}\left( c_{m}\right) :=\left\langle f\left( x_{m}\right) ,g\left(
x^{k}\right) \right\rangle -\left\langle y,g\left( x^{k}\right)
\right\rangle =0,\quad k=1,2,...,m \tag{2.4}
\end{equation}
where $c_{m}\equiv \left( c_{m1},c_{m2},...,c_{mm}\right) $.
We observe that the mapping \ $\Phi \left( c_{m}\right) :=\left( \Phi
_{1}\left( c_{m}\right) ,\Phi _{2}\left( c_{m}\right) ,...,\Phi _{m}\left(
c_{m}\right) \right) $ is continuous by virtue of condition 1. Inequality (2.3) implies
the existence of $r=r\left( \left\Vert y\right\Vert _{Y}\right) >0$
such that the \textquotedblleft acute angle\textquotedblright\ condition is
fulfilled for all $x_{m}$ with $\left[ x_{m}\right] _{\mathcal{M}_{0}}\geq r$,
i.e. for any $c_{m}\in S_{r_{1}}^{R^{m}}\left( 0\right) \subset R^{m}$, $
r_{1}\geq r$ the inequality
\begin{equation*}
\overset{m}{\underset{k=1}{\sum }}\left\langle \Phi _{k}\left( c_{m}\right)
,c_{mk}\right\rangle \equiv \left\langle f\left( x_{m}\right) ,g\left(
\overset{m}{\underset{k=1}{\sum }}c_{mk}x^{k}\right) \right\rangle
-\left\langle y,g\left( \overset{m}{\underset{k=1}{\sum }}c_{mk}x^{k}\right)
\right\rangle =\quad
\end{equation*}
\begin{equation*}
\left\langle f\left( x_{m}\right) ,g\left( x_{m}\right) \right\rangle
-\left\langle y,g\left( x_{m}\right) \right\rangle \geq 0,\quad \forall
c_{m}\in
\mathbb{R}
^{m},\left\Vert c_{m}\right\Vert _{
\mathbb{R}
^{m}}=r_{1}.
\end{equation*}
holds. The solvability of system (2.4) for each $m=1,2,\ldots $ follows from
the well-known \textquotedblleft acute angle\textquotedblright\ lemma ([10, 21
- 23]), which is equivalent to Brouwer's fixed-point theorem (a standard
formulation of this lemma is recalled after the proof). Thus, $
\left\{ x_{m}\left\vert ~m\geq \right. 1\right\} $ is a sequence of
approximate solutions contained in a bounded subset of the space $
\mathcal{M}_{0}$. Further arguments are analogous to those from [10, 23];
therefore we omit them. It remains to pass to the limit in (2.4) as
$m\rightarrow \infty $ and to use the weak convergence of a subsequence of
the sequence $\left\{ x_{m}\left\vert ~m\geq \right. 1\right\} $, the weak
compactness of the mapping $f$, and the completeness of the system $\left\{
x^{k}\right\} _{k=1}^{\infty }$ in the space $X_{1}$.
Hence we get the limit element $x_{0}=w-\underset{j\nearrow \infty }{\lim }
x_{m_{j}}\in S_{0}$ which is the solution of the equation
\begin{equation}
\left\langle f\left( x_{0}\right) ,g\left( x\right) \right\rangle
=\left\langle y,g\left( x\right) \right\rangle ,\quad \forall x\in X_{0},
\tag{2.5}
\end{equation}
or of the equation
\begin{equation}
\left\langle g^{\ast }\circ f\left( x_{0}\right) ,x\right\rangle
=\left\langle g^{\ast }\circ y,x\right\rangle ,\quad \forall x\in X_{0}.
\tag{2.5'}
\end{equation}
In the second case, i.e. when conditions 1 and 2 ($\beta $) are fulfilled
and $y\in Y$ is such that (2.3) holds, we seek the approximate
solutions in the form
\begin{equation}
x_{m}=g^{-1}\left( \overset{m}{\underset{k=1}{\sum }}c_{mk}y_{k}^{\ast
}\right) \equiv g^{-1}\left( y_{\left( m\right) }^{\ast }\right) ,
\tag{2.6}
\end{equation}
where $\left\{ y_{k}^{\ast }\right\} _{k=1}^{\infty }\subset Y^{\ast }$ is a
complete system in the (separable) space $Y^{\ast }$ that belongs to $g\left(
X_{1}\right) $. In this case the unknown coefficients $c_{mk}$ are
determined from the system of algebraic equations
\begin{equation}
\widetilde{\Phi }_{k}\left( c_{m}\right) :=\left\langle f\left( x_{m}\right)
,y_{k}^{\ast }\right\rangle -\left\langle y,y_{k}^{\ast }\right\rangle
=0,\quad k=1,2,...,m \tag{2.7}
\end{equation}
with $c_{m}\equiv \left( c_{m1},c_{m2},...,c_{mm}\right) $, from which,
under our conditions, we get
\begin{equation}
\left\langle f\left( x_{m}\right) ,y_{k}^{\ast }\right\rangle -\left\langle
y,y_{k}^{\ast }\right\rangle =\left\langle f\left( g^{-1}\left( y_{\left(
m\right) }^{\ast }\right) \right) ,y_{k}^{\ast }\right\rangle -\left\langle
y,y_{k}^{\ast }\right\rangle =0, \tag{2.7'}
\end{equation}
for $k=1,2,...,m$.
As above we observe that the mapping \
\begin{equation*}
\widetilde{\Phi }\left( c_{m}\right) :=\left( \widetilde{\Phi }_{1}\left(
c_{m}\right) ,\widetilde{\Phi }_{2}\left( c_{m}\right) ,...,\widetilde{\Phi }
_{m}\left( c_{m}\right) \right)
\end{equation*}
is continuous by virtue of conditions 1 and 2 ($\beta $). Inequality (2.3)
implies the existence of $\widetilde{r}>0$ such that the \textquotedblleft
acute angle\textquotedblright\ condition is fulfilled for all $y_{\left(
m\right) }^{\ast }$ with $\left\Vert y_{\left( m\right) }^{\ast }\right\Vert
_{Y^{\ast }}\geq \widetilde{r}$, i.e. for any $c_{m}\in
S_{\widetilde{r}_{1}}^{\mathbb{R}^{m}}\left( 0\right) \subset \mathbb{R}^{m}$, $\widetilde{r}_{1}\geq
\widetilde{r}$, the inequality
\begin{equation*}
\overset{m}{\underset{k=1}{\sum }}\left\langle \widetilde{\Phi }_{k}\left(
c_{m}\right) ,c_{mk}\right\rangle \equiv \left\langle f\left( x_{m}\right) ,
\overset{m}{\underset{k=1}{\sum }}c_{mk}y_{k}^{\ast }\right\rangle
-\left\langle y,\overset{m}{\underset{k=1}{\sum }}c_{mk}y_{k}^{\ast
}\right\rangle =\quad
\end{equation*}
\begin{equation*}
\left\langle f\left( g^{-1}\left( y_{\left( m\right) }^{\ast }\right)
\right) ,y_{\left( m\right) }^{\ast }\right\rangle -\left\langle y,y_{\left(
m\right) }^{\ast }\right\rangle =\left\langle f\left( x_{m}\right) ,g\left(
x_{m}\right) \right\rangle -\left\langle y,g\left( x_{m}\right)
\right\rangle \geq 0,
\end{equation*}
\begin{equation*}
\forall c_{m}\in
\mathbb{R}
^{m},\left\Vert c_{m}\right\Vert _{
\mathbb{R}
^{m}}=\widetilde{r}_{1}.
\end{equation*}
holds by virtue of the conditions. Consequently, the solvability of the
system (2.7) (or (2.7')) for each $m=1,2,\ldots $ follows from the
\textquotedblleft acute angle\textquotedblright\ lemma, as above. Thus, $
\left\{ y_{\left( m\right) }^{\ast }\left\vert ~m\geq \right.
1\right\} $ is a sequence of approximate solutions of system (2.7')
that is contained in a bounded subset of $Y^{\ast }$. Hence there
is a subsequence $\left\{ y_{\left( m_{j}\right) }^{\ast }\right\}
_{j=1}^{\infty }$ of the sequence $\left\{ y_{\left( m\right) }^{\ast
}\left\vert ~m\geq \right. 1\right\} $ that is weakly convergent in $
Y^{\ast }$, and consequently the sequence $\left\{ x_{m_{j}}\right\}
_{j=1}^{\infty }\equiv \left\{ g^{-1}\left( y_{\left( m_{j}\right) }^{\ast
}\right) \right\} _{j=1}^{\infty }$ weakly converges in the space $\mathcal{M
}_{0}$ by condition 2 ($\beta $) (possibly after passing to a further
subsequence). It remains to pass to the limit in (2.7') as $j\rightarrow
\infty $ and to use the weak convergence of a subsequence of the sequence
$\left\{ y_{\left( m\right) }^{\ast }\left\vert ~m\geq \right. 1\right\} $,
the weak compactness of the mappings $f$ and $g^{-1}$, and the completeness
of the system $\left\{ y_{k}^{\ast }\right\} _{k=1}^{\infty }$ in the space $Y^{\ast }$.
Hence we get the limit element $x_{0}=w-\underset{j\nearrow \infty }{\lim }
x_{m_{j}}$ $=w-\underset{j\nearrow \infty }{\lim }g^{-1}\left( y_{\left(
m_{j}\right) }^{\ast }\right) \in \mathcal{M}_{0}$ and it is the solution of
the equation
\begin{equation}
\left\langle f\left( x_{0}\right) ,y^{\ast }\right\rangle =\left\langle
y,y^{\ast }\right\rangle ,\quad \forall y^{\ast }\in Y^{\ast }. \tag{2.8}
\end{equation}
Q.E.D.\footnote{
See also: Soltanov K.N., On Noncoercive Semilinear Equations, Nonlinear
Analysis: Hybrid Systems, (2008), 2, 2, 344-358.}
\end{proof}
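For the reader's convenience we recall a standard formulation of the
\textquotedblleft acute angle\textquotedblright\ lemma used in the proof
above (the precise statement in [10, 21 - 23] may differ slightly in form):
if $\Phi :\mathbb{R}^{m}\longrightarrow \mathbb{R}^{m}$ is continuous and
\begin{equation*}
\left\langle \Phi \left( c\right) ,c\right\rangle \geq 0\quad \text{for all}
\quad c\in \mathbb{R}^{m}\ \text{with}\ \left\Vert c\right\Vert _{\mathbb{R}^{m}}=r,
\end{equation*}
then there exists $c_{0}\in \mathbb{R}^{m}$ with $\left\Vert c_{0}\right\Vert
_{\mathbb{R}^{m}}\leq r$ such that $\Phi \left( c_{0}\right) =0$.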
\begin{remark}
\textit{It is obvious that if there exists a function }$\psi
:R_{+}^{1}\longrightarrow R_{+}^{1}$, $\psi \in C^{0}$, \textit{such that }$
\psi \left( \xi \right) =0\Longleftrightarrow \xi =0$\textit{\ and the
inequality }$\left\Vert x_{1}-x_{2}\right\Vert
_{X}\leq \psi \left( \left\Vert f\left( x_{1}\right) -f\left( x_{2}\right)
\right\Vert _{Y}\right) $\textit{\ holds for all }$x_{1},x_{2}\in \mathcal{M}_{0}$,
\textit{\ then the solution of equation (2.2) is unique. }
\end{remark}
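For instance (an illustrative choice only, not required by the remark), one
may take a linear function
\begin{equation*}
\psi \left( \xi \right) =c\,\xi ,\quad c>0,
\end{equation*}
i.e. the solution of equation (2.2) is unique whenever $\left\Vert
x_{1}-x_{2}\right\Vert _{X}\leq c\left\Vert f\left( x_{1}\right) -f\left(
x_{2}\right) \right\Vert _{Y}$ holds for all $x_{1},x_{2}\in \mathcal{M}_{0}$.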
\begin{notation}
It should be noted that spaces of $pn$-space type often arise in
nonlinear problems with nonlinear main parts, for example:
1) the equation of nonlinear filtration or diffusion, which has the
form
\begin{equation*}
\frac{\partial u}{\partial t}-\nabla \cdot \left( g\left( u\right) \nabla
u\right) +h\left( t,x,u\right) =0,\quad u\left\vert \ _{\partial \Omega
\times \left[ 0,T\right] }\right. =0,
\end{equation*}
\begin{equation*}
u\left( 0,x\right) =u_{0}\left( x\right) ,\quad x\in \Omega \subset
\mathbb{R}
^{n},\quad n\geq 1
\end{equation*}
where $g:
\mathbb{R}
\longrightarrow
\mathbb{R}
_{+}$ is a convex function (e.g. $g\left( s\right) \equiv \left\vert s\right\vert
^{\rho }$, $\rho >0$) and $h\left( t,x,s\right) $ is a Carath\'{e}odory
function; in this case one needs to investigate the space
\begin{equation*}
S_{1,\rho ,2}\left( \Omega \right) \equiv \left\{ u\in L^{1}\left( \Omega
\right) \left\vert \ \underset{\Omega }{\dint }g\left( u\left( x\right)
\right) \left\vert \nabla u\right\vert ^{2}dx\right. <\infty ;\ \
u\left\vert \ _{\partial \Omega }\right. =0\right\} ;
\end{equation*}
2) the equation of Prandtl-von Mises type, which has the
form
\begin{equation*}
\frac{\partial u}{\partial t}-\left\vert u\right\vert ^{\rho }\Delta
u+h\left( t,x,u\right) =0,\quad u\left\vert \ _{\partial \Omega \times \left[
0,T\right] }\right. =0,
\end{equation*}
\begin{equation*}
u\left( 0,x\right) =u_{0}\left( x\right) ,\quad x\in \Omega \subset
\mathbb{R}
^{n},\quad n\geq 1
\end{equation*}
where $\rho >0$ and $h\left( t,x,s\right) $ is a Carath\'{e}odory function; in
this case one needs to investigate spaces of the
type $S_{1,\mu ,q}\left( \Omega \right) $ ($\mu \geq 0,q\geq 1$) and
\begin{equation*}
S_{\Delta ,\rho ,2}\left( \Omega \right) \equiv \left\{ u\in L^{1}\left(
\Omega \right) \left\vert \ \underset{\Omega }{\dint }\left\vert u\left(
x\right) \right\vert ^{\rho }\left\vert \Delta u\right\vert ^{2}dx\right.
<\infty ;\ \ u\left\vert \ _{\partial \Omega }\right. =0\right\}
\end{equation*}
etc. \footnote{
Theorem and the spaces of such type were used earlier in many works; see, for
example, [10, 21], and also the following articles and the references therein:
\par
Ju. A. Dubinskii, Weak convergence in nonlinear elliptic and parabolic
equations, Matem. Sborn., (1965), 67, n. 4; Soltanov K.N., On solvability of
some nonlinear parabolic problems with nonlinearity growing more quickly than
polynomial functions, Matematicheskie Zametki, 32, 6, 1982.
\par
Soltanov K.N.: Some embedding theorems and their applications to nonlinear
equations, Differentsial'nye Uravneniya, 20, 12, 1984; On nonlinear equations
of the form $F\left( x,u,Du,\Delta u\right) =0$, Matem. Sb. Ac. Sci. Russ.,
1993, v. 184, n. 11 (Russian Acad. Sci. Sb. Math., 80, (1995), 2); Solvability of
nonlinear equations with operators of the form of a sum of pseudomonotone and
weakly compact operators, Soviet Math. Dokl., 1992, v. 324, n. 5, 944-948; Nonlinear
equations in nonreflexive Banach spaces and fully nonlinear equations,
Advances in Mathematical Sciences and Applications, 1999, v. 9, n. 2, 939-972
(joint with J. Sprekels); On some problem with free boundary, Trans. Russian
Ac. Sci., ser. Math., 2002, 66, 4, 155-176 (joint with Novruzov E.).}
\end{notation}
\begin{corollary}
Assume that the conditions of Theorem 2 are fulfilled and \textit{there is a
continuous function }$\varphi _{1}:R_{+}^{1}\longrightarrow R_{+}^{1}$ such
that $\left\Vert g\left( x\right) \right\Vert _{Y^{\ast }}\leq \varphi
_{1}\left( [x]_{S_{0}}\right) $ for any $x\in X_{0}$, and\textit{\ }$\varphi
\left( \tau \right) \nearrow +\infty $, $\frac{\varphi \left(
\tau \right) \tau }{\varphi _{1}\left( \tau \right) }\nearrow +\infty $
\textit{\ as }$\tau \nearrow +\infty $. \textit{Then equation
(2.2) is solvable in }$\mathcal{M}_{0}$, \textit{for any }$y\in Y$.
\end{corollary}
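An illustrative example of admissible functions in Corollary 1 (a hypothetical
choice, given only for orientation) is the power-type pair
\begin{equation*}
\varphi \left( \tau \right) =\tau ^{\rho },\quad \varphi _{1}\left( \tau
\right) =\tau ^{\sigma },\qquad \rho >0,\ 0\leq \sigma <\rho +1,
\end{equation*}
for which $\varphi \left( \tau \right) \nearrow +\infty $ and $\frac{\varphi
\left( \tau \right) \tau }{\varphi _{1}\left( \tau \right) }=\tau ^{\rho
+1-\sigma }\nearrow +\infty $ as $\tau \nearrow +\infty $.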
\section{Solvability of Problem (1.6) - (1.7)}
A solution of problem (1.6) - (1.7) will be understood in the following sense.
\begin{definition}
A function $u\left( t,x\right) $ of the space ${\Large P}_{1,\left(
p-2\right) q,q,2}^{1}\left( Q\right) $ is called a solution of problem (1.6)
- (1.7) if $u\left( t,x\right) $ satisfies the following equality
\begin{equation}
\left[ \frac{\partial u}{\partial t},v\right] -\left[ \overset{n}{\underset{
i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) ,v
\right] =\left[ h,v\right] ,\quad \forall v\in L_{p}\left( Q\right) ,
\tag{3.1}
\end{equation}
where
\begin{equation*}
{\Large P}_{1,\left( p-2\right) q,q,2}^{1}\left( Q\right) \equiv L_{p}\left(
0,T;\overset{0}{S}{}_{1,\left( p-2\right) q,q}^{1}\left( \Omega \right)
\right) \cap W_{2}^{1}\left( 0,T;L_{2}\left( \Omega \right) \right)
\end{equation*}
\begin{equation*}
S_{1,\alpha ,\beta }^{1}\left( \Omega \right) \equiv \left\{ u\left(
t,x\right) \left\vert ~\left[ u\right] _{S_{1,\alpha ,\beta }^{1}}^{\alpha
+\beta }=\underset{i=1}{\overset{n}{\sum }}\left\Vert D_{i}u\right\Vert
_{\alpha +\beta }^{\alpha +\beta }+\right. \right.
\end{equation*}
\begin{equation*}
\left. \underset{i,j=1}{\overset{n}{\sum }}\left\Vert \left\vert
D_{i}u\right\vert ^{\frac{\alpha }{\beta }}D_{j}D_{i}u\right\Vert _{\beta
}^{\beta }<\infty \right\} ,\quad \alpha \geq 0,\ \beta \geq 1.
\end{equation*}
and $\left[ \cdot ,\cdot \right] $ denotes the dual form for the pair $\left(
L_{q}\left( Q\right) ,L_{p}\left( Q\right) \right) $, as in Section 1.
\end{definition}
For the study of problem (1.6) - (1.7) we use Theorem 2 and Corollary 1 of
the previous section. To apply these results to problem (1.6) - (1.7),
we choose the corresponding spaces and mappings:
\begin{equation*}
\mathcal{M}_{0}\equiv {\Large P}_{1,\left( p-2\right) q,q,2}^{1}\left(
Q\right) \equiv L_{p}\left( 0,T;\overset{0}{S}{}_{1,\left( p-2\right)
q,q}^{1}\left( \Omega \right) \right) \cap W_{2}^{1}\left( 0,T;L_{2}\left(
\Omega \right) \right) ,
\end{equation*}
\begin{equation*}
\Phi \left( u\right) \equiv -\ \overset{n}{\underset{i=1}{\sum }}D_{i}\left(
\left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) ,\quad \gamma _{0}u\equiv
u\left( 0,x\right) ,
\end{equation*}
\noindent {}
\begin{equation*}
f\left( \cdot \right) \equiv \left\{ \frac{\partial \cdot }{\partial t}+\Phi
\left( \cdot \right) ;\ \gamma _{0}\cdot \right\} ,\quad g\left( \cdot
\right) \equiv \left\{ \frac{\partial \cdot }{\partial t}-\Delta \cdot
;\quad \gamma _{0}\cdot \right\} ,
\end{equation*}
\begin{equation*}
X_{0}\equiv W_{p}^{1}\left( 0,T;L_{p}\left( \Omega \right) \right) \cap
\widetilde{X};\
\end{equation*}
\begin{equation*}
X_{1}\equiv X_{0}\cap \left\{ u\left( t,x\right) \left\vert \frac{\partial u
}{\partial \widehat{\nu }}\left\vert \ _{\Gamma }\right. =0\right. \right\} ;
\end{equation*}
\begin{equation*}
Y\equiv L_{q}\left( Q\right) ,\ q=p^{\prime },\widetilde{X}\equiv
L_{p}\left( 0,T;W_{p}^{2}\left( \Omega \right) \right) \cap \left\{ u\left(
t,x\right) \left\vert \frac{\partial u}{\partial \nu }\left\vert \ _{\Gamma
}\right. =0\right. \right\}
\end{equation*}
here
\begin{equation*}
\overset{0}{S}{}_{1,\left( p-2\right) q,q}^{1}\left( \Omega \right) \equiv
S_{1,\left( p-2\right) q,q}^{1}\left( \Omega \right) \cap \left\{ u\left(
t,x\right) \left\vert \frac{\partial u}{\partial \widehat{\nu }}\left\vert \
_{\partial \Omega }\right. =0\right. \right\}
\end{equation*}
Thus, as we can see from the above notation, the mapping $f$ is defined by
problem (1.6)-(1.7) and the mapping $g$ is defined by the following problem
\begin{equation}
\frac{\partial u}{\partial t}-\Delta u=v\left( t,x\right) ,\quad \left(
t,x\right) \in Q, \tag{3.2}
\end{equation}
\begin{equation}
\gamma _{0}u\equiv u\left( 0,x\right) =u_{0}\left( x\right) ,\ \frac{
\partial u}{\partial \nu }\left\vert \ _{\Gamma }\right. =0. \tag{3.3}
\end{equation}
As is known (see [1, 5, 6, 12]), problem (3.2)-(3.3) is solvable in the space
\begin{equation*}
X_{0}\equiv W_{p}^{1}\left( 0,T;L_{p}\left( \Omega \right) \right) \cap
L_{p}\left( 0,T;W_{p}^{2}\left( \Omega \right) \right) \cap \left\{ u\left(
t,x\right) \left\vert \frac{\partial u}{\partial \nu }\left\vert \ _{\Gamma
}\right. =0\right. \right\}
\end{equation*}
for any $v\in L_{p}\left( Q\right) $, $u_{0}\in W_{p}^{1}\left( \Omega
\right) $.
Now we will demonstrate that all conditions of Theorem 2 and also of
Corollary 1 are fulfilled.
\begin{proposition}
The mappings $f$ and $g$ defined above generate a \textquotedblleft
coercive\textquotedblright\ pair on $X_{1}$ in the generalized sense;
moreover, the statement of Corollary 1 is valid.
\end{proposition}
\begin{proof}
Let $u\in X_{1}$, i.e.
\begin{equation*}
u\in X_{0}\cap \left\{ u\left( t,x\right) \left\vert \frac{\partial u}{
\partial \widehat{\nu }}\left\vert \ _{\Gamma }\right. =0\right. \right\} .
\end{equation*}
Consider the dual form $\left\langle f\left( u\right) ,g\left( u\right)
\right\rangle $ for any $u\in X_{1}$. More precisely, it is enough to
consider it in the form
\begin{equation}
\underset{0}{\overset{t}{\int }}\underset{\Omega }{\int }f\left( u\right) \
g\left( u\right) \ dxd\tau \equiv \left[ f\left( u\right) ,\ g\left(
u\right) \right] _{t} \tag{*}
\end{equation}
Hence, considering the above expression and integrating by parts with
respect to $x$, in view of the boundary conditions we get
\begin{equation*}
\left[ f\left( u\right) ,\ g\left( u\right) \right] _{t}\equiv \left[ \frac{
\partial u}{\partial t},\ \frac{\partial u}{\partial t}\right] _{t}\ +\left[
\Phi \left( u\right) ,\ \frac{\partial u}{\partial t}\right] _{t}-
\end{equation*}
\begin{equation*}
\left[ \frac{\partial u}{\partial t},\ \Delta u\right] _{t}-\left[ \Phi
\left( u\right) ,\ \Delta u\right] _{t}+\underset{\Omega }{\int }u_{0}\
u_{0}\ dx=
\end{equation*}
\begin{equation*}
=\underset{0}{\overset{t}{\int }}\left\Vert \frac{\partial u}{\partial t}
\right\Vert _{2}^{2}d\tau +\overset{n}{\underset{i=1}{\sum }}\left[ \frac{1}{
p}\left\Vert D_{i}u\right\Vert _{p}^{p}\left( t\right) +\frac{1}{2}
\left\Vert D_{i}u\right\Vert _{2}^{2}\left( t\right) \right] +
\end{equation*}
\begin{equation*}
\left\Vert u_{0}\right\Vert _{2}^{2}+\left( p-1\right) \overset{n}{\underset{
i,j=1}{\sum }}\ \underset{0}{\overset{t}{\int }}\left\Vert \left\vert
D_{i}u\right\vert ^{\frac{p-2}{2}}D_{i}D_{j}u\right\Vert _{2}^{2}-
\end{equation*}
\begin{equation}
\overset{n}{\underset{i=1}{\sum }}\left[ \frac{1}{p}\left\Vert
D_{i}u_{0}\right\Vert _{p}^{p}+\frac{1}{2}\left\Vert D_{i}u_{0}\right\Vert
_{2}^{2}\right] \tag{3.4}
\end{equation}
here and in (3.5) we denote $\left\Vert \cdot \right\Vert _{p_{1}}\equiv
\left\Vert \cdot \right\Vert _{L_{p_{1}}\left( \Omega \right) }$, $p_{1}\geq
1$.
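For instance, the term $\left[ \Phi \left( u\right) ,\ \frac{\partial u}{
\partial t}\right] _{t}$ above is computed as follows (a sketch, assuming
enough smoothness to justify the integration by parts; the boundary terms
vanish by virtue of the boundary conditions):
\begin{equation*}
\left[ \Phi \left( u\right) ,\ \frac{\partial u}{\partial t}\right]
_{t}=\overset{n}{\underset{i=1}{\sum }}\underset{0}{\overset{t}{\int }}
\underset{\Omega }{\int }\left\vert D_{i}u\right\vert ^{p-2}D_{i}u\ D_{i}
\frac{\partial u}{\partial t}\ dxd\tau =\overset{n}{\underset{i=1}{\sum }}
\frac{1}{p}\left( \left\Vert D_{i}u\right\Vert _{p}^{p}\left( t\right)
-\left\Vert D_{i}u_{0}\right\Vert _{p}^{p}\right) ,
\end{equation*}
which yields the corresponding terms of (3.4); the remaining terms are
treated similarly.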
From (3.4) it follows that
\begin{equation*}
\left[ f\left( u\right) ,\ g\left( u\right) \right] \geq c\left( \left\Vert
\frac{\partial u}{\partial t}\right\Vert _{L_{2}\left( Q\right) }^{2}+
\overset{n}{\underset{i=1}{\sum }}\left\Vert \left\vert D_{i}u\right\vert ^{
\frac{p-2}{2}}D_{i}u\right\Vert _{L_{2}\left( Q\right) }^{2}\right) -
\end{equation*}
\begin{equation*}
c_{1}\left\Vert u_{0}\right\Vert _{W_{p}^{1}}^{p}-c_{2}\geq \widetilde{c}
\left( \left\Vert \frac{\partial u}{\partial t}\right\Vert _{L_{2}\left(
Q\right) }^{2}+\left[ u\right] _{L_{p}\left( S_{1,\left( p-2\right)
q,q}\right) }^{p}\right) -
\end{equation*}
\begin{equation*}
c_{1}\left\Vert u_{0}\right\Vert _{W_{p}^{1}}^{p}-c_{2}\geq \widetilde{c}
\left[ u\right] _{\mathbf{P}_{1,\left( p-2\right) q,q,2}^{1}\left( Q\right)
}^{2}-c_{1}\left\Vert u_{0}\right\Vert _{W_{p}^{1}}^{p}-\widetilde{c}_{2},
\end{equation*}
which demonstrates that the statement of Corollary 1 is fulfilled\footnote{
From the definitions of these spaces it is easy to see that $S_{1,p-2,2}^{1}\left(
\Omega \right) \subset S_{1,\left( p-2\right) q,q}\left( \Omega \right) $.}.
Consequently, Proposition 1 is true.
\end{proof}
Further, for the right-hand side of the dual form we obtain, under the
conditions of Proposition 1 (arguing in the same way as in the above proof),
\begin{equation*}
\left\vert \underset{0}{\overset{t}{\int }}\underset{\Omega }{\int }h\left(
\frac{\partial u}{\partial t}-\Delta u\right) \ dxd\tau \right\vert \leq
C\left( \varepsilon \right) \underset{0}{\overset{t}{\int }}\left\Vert
h\right\Vert _{2}^{2}\ d\tau +
\end{equation*}
\begin{equation}
\varepsilon \underset{0}{\overset{t}{\int }}\left\Vert \frac{\partial u}{
\partial t}\right\Vert _{2}^{2}\ d\tau +C\left( \varepsilon _{1}\right)
\underset{0}{\overset{t}{\int }}\left\Vert h\right\Vert _{W_{q}^{1}}^{q}\
d\tau +\varepsilon _{1}\overset{n}{\underset{i=1}{\sum }}\underset{0}{
\overset{t}{\int }}\left\Vert D_{i}u\right\Vert _{p}^{p}\ d\tau . \tag{3.5}
\end{equation}
It is not difficult to see that the mapping $g$ defined by problem (3.2)-(3.3)
satisfies the conditions of Theorem 2, i.e. $g\left( X_{1}\right) $
contains an everywhere dense linear manifold of $L_{p}\left( Q\right) $ and $
g^{-1}$ is a weakly compact operator from $L_{p}\left( Q\right) $ to $\mathcal{
M}_{0}\equiv L_{p}\left( 0,T;\overset{0}{S}{}_{1,\left( p-2\right)
q,q}^{1}\left( \Omega \right) \right) $ $\cap $ $W_{2}^{1}\left(
0,T;L_{2}\left( \Omega \right) \right) $.
Thus we have that all conditions of Theorem 2 and Corollary 1 are fulfilled
for the mappings and spaces corresponding to the studied problem.
Consequently, using Theorem 2 and Corollary 1 we obtain the solvability of
problem (1.6)-(1.7) in the space ${\Large P}_{1,\left( p-2\right)
q,q,2}^{1}\left( Q\right) $ for any $h\in L_{2}\left( 0,T;W_{2}^{1}\left(
\Omega \right) \right) $ and $u_{0}\in W_{p}^{1}\left( \Omega \right) $.
Furthermore, from here it follows that the solution of this problem
possesses additional smoothness, i.e. $\overset{n}{\underset{i=1}{
\sum }}D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) \in
L_{2}\left( Q\right) $, since we have $\frac{\partial u}{\partial t}\in
L_{2}\left( Q\right) $ and $h\in L_{2}\left( 0,T;W_{2}^{1}\left( \Omega
\right) \right) $ by virtue of the conditions of the considered problem.
So the following result is proved.
\begin{theorem}
Under the conditions of this section, problem (1.6) - (1.7) is solvable in $
{\Large P}\left( Q\right) $ for any $u_{0}\in W_{p}^{1}\left( \Omega \right)
$ and $h\in L_{2}\left( 0,T;W_{2}^{1}\left( \Omega \right) \right) $ where
\begin{equation*}
{\Large P}\left( Q\right) \equiv L_{p}\left( 0,T;\overset{0}{\widetilde{S}}
{}_{1,p-2,2}^{1}\left( \Omega \right) \right) \cap W_{2}^{1}\left(
0,T;L_{2}\left( \Omega \right) \right) \cap {\Large P}_{1,\left( p-2\right)
q,q,2}^{1}\left( Q\right) .
\end{equation*}
\end{theorem}
\section{A priori Estimations for Solutions of Problem (1.4) - (1.5)}
Now we can investigate the main problem of this article, which is posed for
problem (1.4)-(1.5). We introduce notation for the mappings $A$ and $f$
that are generated by problems (1.4)-(1.5) and (1.6)-(1.7), respectively.
\begin{theorem}
Under the conditions of Section 1, any solution $u\left( t,x\right) $ of
problem (1.4)-(1.5) belongs to a bounded subset of the function class $
\widetilde{P}\left( Q\right) $ defined in the form
\begin{equation*}
u\in L^{\infty }\left( 0,T;W_{p}^{1}\left( \Omega \right) \right) ;\quad
\frac{\partial u}{\partial t}\in L^{\infty }\left( 0,T;L_{2}\left( \Omega
\right) \right) ;
\end{equation*}
\begin{equation}
\overset{n}{\underset{i=1}{\sum }}\overset{t}{\underset{0}{\int }}\left\vert
D_{i}u\right\vert ^{p-2}D_{i}ud\tau \in W_{\infty }^{1}\left(
0,T;L_{q}\left( \Omega \right) \right) \cap L^{\infty }\left(
0,T;W_{2}^{1}\left( \Omega \right) \right) \tag{4.1}
\end{equation}
that satisfies the conditions determined by the data of problem (1.4)-(1.5).
\end{theorem}
\begin{proof}
Consider the dual form $\left\langle A\left( u\right) ,f\left( u\right)
\right\rangle $ for any $u\in {\Large P}\left( Q\right) $, which is defined
by virtue of Theorem 3. We proceed as in the proof of Proposition 1 and
consider only the integral with respect to $x$. Then, after standard
manipulations, we have
\begin{equation*}
\underset{\Omega }{\int }\frac{\partial u}{\partial t}\ \frac{\partial u}{
\partial t}\ dx\ +\underset{\Omega }{\int }\ \left( \overset{t}{\underset{0}{
\int }}\overset{n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert
D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \right) \left( \overset{n}{
\underset{j=1}{\sum }}D_{j}\left( \left\vert D_{j}u\right\vert
^{p-2}D_{j}u\right) \right) \ dx-
\end{equation*}
\begin{equation*}
\underset{\Omega }{\int }\ \left( \overset{t}{\underset{0}{\int }}\overset{n}
{\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) d\tau \right) \frac{\partial u}{\partial t}dx-\underset{
\Omega }{\int }\frac{\partial u}{\partial t}\left( \overset{n}{\underset{i=1}
{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) \right)
dx\geq
\end{equation*}
\begin{equation*}
\frac{1}{2}\left\Vert \frac{\partial u}{\partial t}\right\Vert _{L_{2}\left(
\Omega \right) }^{2}+\frac{1}{2}\frac{\partial }{\partial t}\left\Vert
\overset{t}{\underset{0}{\int }}\overset{n}{\underset{i=1}{\sum }}
D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau
\right\Vert _{L_{2}\left( \Omega \right) }^{2}+
\end{equation*}
\begin{equation}
\frac{1}{p}\frac{\partial }{\partial t}\overset{n}{\underset{i=1}{\sum }}
\left\Vert D_{i}u\right\Vert _{L_{p}\left( \Omega \right) }^{p}-\frac{1}{2}
\left\Vert \overset{t}{\underset{0}{\int }}\overset{n}{\underset{i=1}{\sum }}
D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau
\right\Vert _{L_{2}\left( \Omega \right) }^{2}\left( t\right) \tag{4.2}
\end{equation}
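Here we used the elementary identity (with the shorthand $w:=\overset{n}{
\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) $, introduced only for this remark)
\begin{equation*}
\underset{\Omega }{\int }\left( \overset{t}{\underset{0}{\int }}w\,d\tau
\right) w\ dx=\frac{1}{2}\frac{\partial }{\partial t}\left\Vert \overset{t}{
\underset{0}{\int }}w\,d\tau \right\Vert _{L_{2}\left( \Omega \right) }^{2}.
\end{equation*}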
Now consider the right-hand side of the dual form, i.e. $\left\langle
H,f\right\rangle $, in order to determine the bounded subset to which
the solutions of the problem belong (and to obtain the a priori
estimates). Then we get
\begin{equation*}
\left\vert \underset{\Omega }{\int }H\ \frac{\partial u}{\partial t}\ dx-
\underset{\Omega }{\int }H\overset{n}{\underset{j=1}{\sum }}D_{j}\left(
\left\vert D_{j}u\right\vert ^{p-2}D_{j}u\right) \ dx\right\vert \leq
C\left( \varepsilon \right) \left\Vert H\right\Vert _{L_{2}\left( Q\right)
}^{2}+
\end{equation*}
\begin{equation}
\varepsilon \left\Vert \frac{\partial u}{\partial t}\right\Vert
_{L_{2}\left( \Omega \right) }^{2}\left( t\right) +C\left( \varepsilon
_{1}\right) \left\Vert H\right\Vert _{L_{p}\left( W_{p}^{1}\right)
}^{p}+\varepsilon _{1}\overset{n}{\underset{j=1}{\sum }}\left\Vert
D_{j}u\right\Vert _{L_{p}\left( \Omega \right) }^{p}\left( t\right) .
\tag{4.3}
\end{equation}
From (4.2) and (4.3) it follows
\begin{equation*}
0=\underset{\Omega }{\int }\left( A\left( u\right) -H\right) \ f\left(
u\right) \ dx\geq \frac{1}{2}\left\Vert \frac{\partial u}{\partial t}
\right\Vert _{L_{2}\left( \Omega \right) }^{2}+\frac{1}{p}\frac{\partial }{
\partial t}\overset{n}{\underset{j=1}{\sum }}\left\Vert D_{j}u\right\Vert
_{L_{p}\left( \Omega \right) }^{p}+
\end{equation*}
\begin{equation*}
\frac{1}{2}\frac{\partial }{\partial t}\left\Vert \overset{t}{\underset{0}{
\int }}\overset{n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert
D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \right\Vert _{L_{2}\left( \Omega
\right) }^{2}-\frac{1}{2}\left\Vert \overset{t}{\underset{0}{\int }}\overset{
n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) d\tau \right\Vert _{L_{2}\left( \Omega \right) }^{2}-
\end{equation*}
\begin{equation*}
\varepsilon _{1}\overset{n}{\underset{i=1}{\sum }}\left\Vert
D_{i}u\right\Vert _{L_{p}\left( \Omega \right) }^{p}-C\left( \varepsilon
\right) \left\Vert H\right\Vert _{L_{2}\left( Q\right) }^{2}-\varepsilon
\left\Vert \frac{\partial u}{\partial t}\right\Vert _{L_{2}\left( \Omega
\right) }^{2}-C\left( \varepsilon _{1}\right) \left\Vert H\right\Vert
_{L_{p}\left( W_{p}^{1}\right) }^{p}
\end{equation*}
or, choosing the parameters $\varepsilon >0$ and $\varepsilon _{1}>0$
sufficiently small, we have
\begin{equation*}
c\left\Vert \frac{\partial u}{\partial t}\right\Vert _{L_{2}\left( \Omega
\right) }^{2}+\frac{1}{p}\frac{\partial }{\partial t}\overset{n}{\underset{
i=1}{\sum }}\left\Vert D_{i}u\right\Vert _{L_{p}\left( \Omega \right) }^{p}+
\end{equation*}
\begin{equation*}
\frac{1}{2}\frac{\partial }{\partial t}\left\Vert \overset{t}{\underset{0}{
\int }}\overset{n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert
D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \right\Vert _{L_{2}\left( \Omega
\right) }^{2}\leq C\left( \left\Vert H\right\Vert _{L_{2}\left( Q\right)
},\left\Vert H\right\Vert _{L_{p}\left( W_{p}^{1}\right) }\right) +
\end{equation*}
\begin{equation}
\frac{1}{p}\overset{n}{\underset{i=1}{\sum }}\left\Vert D_{i}u\right\Vert
_{L_{p}\left( \Omega \right) }^{p}+\frac{1}{2}\left\Vert \overset{t}{
\underset{0}{\int }}\overset{n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert
D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \right\Vert _{L_{2}\left( \Omega
\right) }^{2}. \tag{4.4}
\end{equation}
Inequality (4.4) shows that we can apply Gronwall's lemma. Consequently we get
\begin{equation*}
\overset{n}{\underset{i=1}{\sum }}\left\Vert D_{i}u\right\Vert _{L_{p}\left(
\Omega \right) }^{p}\left( t\right) +\left\Vert \overset{t}{\underset{0}{
\int }}\overset{n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert
D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \right\Vert _{L_{2}\left( \Omega
\right) }^{2}\left( t\right) \leq
\end{equation*}
\begin{equation}
C\left( \left\Vert H\right\Vert _{L_{2}\left( Q\right) },\left\Vert
H\right\Vert _{L_{p}\left( W_{p}^{1}\right) },\left\Vert u_{0}\right\Vert
_{W_{p}^{1}\left( \Omega \right) }\right) \tag{4.5}
\end{equation}
holds for a.e. $t\in \left[ 0,T\right] $.
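In more detail (a sketch of the Gronwall step; the function $E$ is a
shorthand introduced only here): setting
\begin{equation*}
E\left( t\right) :=\frac{1}{p}\overset{n}{\underset{i=1}{\sum }}\left\Vert
D_{i}u\right\Vert _{L_{p}\left( \Omega \right) }^{p}\left( t\right) +\frac{1
}{2}\left\Vert \overset{t}{\underset{0}{\int }}\overset{n}{\underset{i=1}{
\sum }}D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau
\right\Vert _{L_{2}\left( \Omega \right) }^{2}\left( t\right) ,
\end{equation*}
and dropping the nonnegative term $c\left\Vert \frac{\partial u}{\partial t}
\right\Vert _{L_{2}\left( \Omega \right) }^{2}$ on the left-hand side of
(4.4), we get $E^{\prime }\left( t\right) \leq C+E\left( t\right) $ for a.e.
$t$, with $E\left( 0\right) =\frac{1}{p}\overset{n}{\underset{i=1}{\sum }}
\left\Vert D_{i}u_{0}\right\Vert _{L_{p}\left( \Omega \right) }^{p}$, whence
$E\left( t\right) \leq e^{T}\left( E\left( 0\right) +C\right) $ on $\left[
0,T\right] $, which gives (4.5).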
Thus, for any solution of problem (1.4)-(1.5), the following estimates
\begin{equation*}
\left\Vert u\right\Vert _{W_{p}^{1}\left( \Omega \right) }\left( t\right)
\leq C\left( \left\Vert H\right\Vert _{L_{p}\left( W_{p}^{1}\right)
},\left\Vert u_{0}\right\Vert _{W_{p}^{1}\left( \Omega \right) }\right) ,
\end{equation*}
\begin{equation*}
\left\Vert \overset{t}{\underset{0}{\int }}\overset{n}{\underset{i=1}{\sum }}
D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau
\right\Vert _{L_{2}\left( \Omega \right) }\left( t\right) \leq C\left(
\left\Vert H\right\Vert _{L_{p}\left( W_{p}^{1}\right) },\left\Vert
u_{0}\right\Vert _{W_{p}^{1}\left( \Omega \right) }\right) ,
\end{equation*}
\begin{equation*}
\left\Vert \frac{\partial u}{\partial t}\right\Vert _{L_{2}\left( \Omega
\right) }\left( t\right) \leq C\left( \left\Vert H\right\Vert _{L_{p}\left(
W_{p}^{1}\right) },\left\Vert u_{0}\right\Vert _{W_{p}^{1}\left( \Omega
\right) }\right)
\end{equation*}
hold for a.e. $t\in \left[ 0,T\right] $ by virtue of inequalities (4.2) -
(4.5). In other words, any solution of problem (1.4)-(1.5)
belongs to a bounded subset of the following class
\begin{equation*}
u\in L^{\infty }\left( 0,T;W_{p}^{1}\left( \Omega \right) \right) ;\quad
\frac{\partial u}{\partial t}\in L^{\infty }\left( 0,T;L_{2}\left( \Omega
\right) \right) ;
\end{equation*}
\begin{equation*}
\frac{\partial }{\partial t}\left( \overset{n}{\underset{i=1}{\sum }}\overset
{t}{\underset{0}{\int }}\left\vert D_{i}u\right\vert ^{p-2}D_{i}ud\tau
\right) \in L^{\infty }\left( 0,T;L_{q}\left( \Omega \right) \right)
\end{equation*}
\begin{equation}
\overset{t}{\underset{0}{\int }}\overset{n}{\underset{i=1}{\sum }}
D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \in
L^{\infty }\left( 0,T;L_{2}\left( \Omega \right) \right) , \tag{4.6}
\end{equation}
for each given $u_{0},u_{1}\in W_{p}^{1}\left( \Omega \right) $, $h\in
L_{p}\left( 0,T;W_{p}^{1}\left( \Omega \right) \right) $.
From here it follows that all solutions of this problem belong to a bounded
subset of the space $\widetilde{P}\left( Q\right) $, which is defined by (4.1).
Indeed, first it is easy to see that the following inequality holds
\begin{equation*}
\left\Vert \overset{n}{\underset{i=1}{\sum }}\overset{t}{\underset{0}{\int }}
\left\vert D_{i}u\right\vert ^{p-2}D_{i}ud\tau \right\Vert _{L_{q}\left(
\Omega \right) }^{q}\leq C\overset{n}{\underset{i=1}{\sum }}\left\Vert
\left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right\Vert _{L_{q}\left( \Omega
\right) }^{q}\leq
\end{equation*}
\begin{equation*}
C\left( T,mes\ \Omega \right) \left\Vert u\right\Vert _{W_{p}^{1}\left(
\Omega \right) }^{q}\left( t\right) \Longrightarrow \overset{t}{\underset{0}{
\int }}\overset{n}{\underset{i=1}{\sum }}\left\vert D_{i}u\right\vert
^{p-2}D_{i}ud\tau \in L^{\infty }\left( 0,T;L_{q}\left( \Omega \right)
\right) ,
\end{equation*}
and second, the following equalities hold
\begin{equation*}
\underset{\Omega }{\int }\left( \overset{t}{\underset{0}{\int }}\overset{n}{
\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) d\tau \right) ^{2}dx\equiv \left\Vert \overset{t}{
\underset{0}{\int }}\overset{n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert
D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \right\Vert _{2}^{2}\equiv
\end{equation*}
\begin{equation*}
\left\langle \overset{t}{\underset{0}{\int }}\overset{n}{\underset{i=1}{\sum
}}D_{i}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau ,
\overset{t}{\underset{0}{\int }}\overset{n}{\underset{j=1}{\sum }}
D_{j}\left( \left\vert D_{j}u\right\vert ^{p-2}D_{j}u\right) d\tau
\right\rangle =
\end{equation*}
\begin{equation*}
\overset{n}{\underset{i,j=1}{\sum }}\left\langle \overset{t}{\underset{0}{
\int }}D_{j}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau ,
\overset{t}{\underset{0}{\int }}D_{i}\left( \left\vert D_{j}u\right\vert
^{p-2}D_{j}u\right) d\tau \right\rangle =
\end{equation*}
\begin{equation*}
\overset{n}{\underset{i,j=1}{\sum }}\left\langle D_{j}\overset{t}{\underset{0
}{\int }}\left\vert D_{i}u\right\vert ^{p-2}D_{i}ud\tau ,D_{i}\overset{t}{
\underset{0}{\int }}\left\vert D_{j}u\right\vert ^{p-2}D_{j}ud\tau
\right\rangle ,
\end{equation*}
and also
\begin{equation*}
\overset{n}{\underset{j=1}{\sum }}\left\Vert D_{j}\overset{t}{\underset{0}{
\int }}~\overset{n}{\underset{i=1}{\sum }}\left( \left\vert
D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \right\Vert _{2}^{2}=
\end{equation*}
\begin{equation*}
\overset{n}{\underset{j=1}{\sum }}\left\langle D_{j}\overset{t}{\underset{0}{
\int }}~\overset{n}{\underset{i=1}{\sum }}\left\vert D_{i}u\right\vert
^{p-2}D_{i}ud\tau ,D_{j}\overset{t}{\underset{0}{\int }}~\overset{n}{
\underset{i=1}{\sum }}\left\vert D_{i}u\right\vert ^{p-2}D_{i}ud\tau
\right\rangle .
\end{equation*}
These demonstrate that the function
\begin{equation*}
v\left( t,x\right) \equiv \overset{t}{\underset{0}{\int }}\ \overset{n}{
\underset{i=1}{\sum }}\left\vert D_{i}u\right\vert ^{p-2}D_{i}u\ d\tau
\end{equation*}
belongs to a bounded subset of the space
\begin{equation*}
L^{\infty }\left( 0,T;L_{q}\left( \Omega \right) \right) \cap \left\{
v\left( t,x\right) \left\vert ~Dv\in \right. L^{\infty }\left(
0,T;L_{2}\left( \Omega \right) \right) \right\} .
\end{equation*}
Therefore, in order to prove (4.1), it remains to use the
Nirenberg-Gagliardo-Sobolev inequality
\begin{equation}
\left\Vert D^{\beta }v\right\Vert _{p_{2}}\leq C\left( \underset{\left\vert
\alpha \right\vert =m}{\sum }\left\Vert D^{\alpha }v\right\Vert
_{p_{0}}^{\theta }\right) \left\Vert v\right\Vert _{p_{1}}^{1-\theta },\quad
0\leq \left\vert \beta \right\vert =l\leq m-1, \tag{4.7}
\end{equation}
which holds for each $v\in W_{p_{0}}^{m}\left( \Omega \right) $, $\Omega
\subset R^{n}$, $n\geq 1$, with $C\equiv C\left( p_{0},p_{1},p_{2},l,m\right) $
and $\theta $ such that $\frac{1}{p_{2}}-\frac{l}{n}=\left( 1-\theta \right)
\frac{1}{p_{1}}+\theta \left( \frac{1}{p_{0}}-\frac{m}{n}\right) $. Indeed,
in inequality (4.7) it is enough for us to choose $p_{2}=2$, $l=0$, $p_{1}=q$,
$p_{0}=2$ (and $m=1$); then we get
\begin{equation*}
\frac{1}{2}=\left( 1-\theta \right) \frac{p-1}{p}+\theta \left( \frac{1}{2}-
\frac{1}{n}\right) \Longrightarrow \theta \left( \frac{1}{2}-\frac{1}{n}-
\frac{p-1}{p}\right) =\frac{1}{2}-\frac{p-1}{p}\Longrightarrow
\end{equation*}
$\theta =\frac{n\left( p-2\right) }{n\left( p-2\right) +2p}$ for $p>2$, and
so (4.1) is correct.
\end{proof}
\begin{corollary}
Under the conditions of Theorem 4, each solution of problem (1.1)-(1.3)
belongs to the bounded subset of the class $\mathbf{V}\left( Q\right) $
defined in (DS).
\end{corollary}
\begin{proof}
From (4.1) it follows
\begin{equation*}
\overset{n}{\underset{i=1}{\sum }}\overset{t}{\underset{0}{\int }}\left\vert
D_{i}u\right\vert ^{p-2}D_{i}ud\tau \in L^{\infty }\left( 0,T;\overset{0}{W}
\ _{2}^{1}\left( \Omega \right) \right) \cap W_{\infty }^{1}\left(
0,T;L_{q}\left( \Omega \right) \right) ,
\end{equation*}
moreover
\begin{equation*}
\overset{t}{\underset{0}{\int }}\overset{n}{\underset{i,j=1}{\sum }}
D_{j}\left( \left\vert D_{i}u\right\vert ^{p-2}D_{i}u\right) d\tau \in
L^{\infty }\left( 0,T;L_{2}\left( \Omega \right) \right) ,
\end{equation*}
and is bounded in this space. Then, taking into account the properties of
the Lebesgue integral, we obtain
\begin{equation*}
\overset{t}{\underset{0}{\int }}\ \left\{ \underset{\Omega }{\int }\left[
\overset{n}{\underset{i,j=1}{\sum }}D_{j}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) \right] ^{2}dx\right\} ^{\frac{1}{2}}d\tau \leq C,\quad
C\neq C\left( t\right)
\end{equation*}
from which we get
\begin{equation*}
\overset{n}{\underset{i,j=1}{\sum }}D_{j}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) \in L_{1}\left( 0,T;L_{2}\left( \Omega \right) \right) ,
\end{equation*}
and so
\begin{equation}
\overset{n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) \in L_{1}\left( 0,T;L_{2}\left( \Omega \right) \right)
\tag{4.8}
\end{equation}
and it is bounded there.
If we consider equation (1.1), and take into account that it is solvable in
the generalized sense and $\frac{\partial u}{\partial t}\in W_{\infty
}^{1}\left( 0,T;L_{2}\left( \Omega \right) \right) $ (by (4.1)) then from
Definition 1 it follows that
\begin{equation*}
\left[ \frac{\partial ^{2}u}{\partial t^{2}},v\right] -\left[ \overset{n}{
\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) ,v\right] =\left[ h,v\right]
\end{equation*}
holds for any $v\in W_{\widetilde{p}}^{1}\left( 0,T;L_{2}\left( \Omega
\right) \right) $, $\widetilde{p}>1$.
Hence
\begin{equation}
\left[ \frac{\partial ^{2}u}{\partial t^{2}},v\right] =\left[ \overset{n}{
\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) +h,v\right] \tag{4.9}
\end{equation}
holds for any $v\in L^{\infty }\left( Q\right) $.
Thus we obtain $\frac{\partial ^{2}u}{\partial t^{2}}\in L_{1}\left(
0,T;L_{2}\left( \Omega \right) \right) $ by virtue of (4.1) and (4.8), since
\begin{equation*}
\overset{n}{\underset{i=1}{\sum }}D_{i}\left( \left\vert D_{i}u\right\vert
^{p-2}D_{i}u\right) +h\in L_{1}\left( 0,T;L_{2}\left( \Omega \right) \right) .
\end{equation*}
\end{proof}
\end{document}
\begin{document}
\title{Combinatorial $R$ matrices for a family
of crystals : $B^{(1)}_n$, $D^{(1)}_{n}$, $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ cases}
\begin{abstract}
For coherent families of crystals of affine Lie algebras
of type $B^{(1)}_n$, $D^{(1)}_{n}$, $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$
we describe the combinatorial $R$ matrix using
column insertion algorithms for $B,C,D$ Young tableaux.
This is a continuation of \cite{HKOT}.
\end{abstract}
\section{Introduction}
\label{sec:intro}
A combinatorial $R$ matrix
is the $q = 0$ limit of the quantum $R$ matrix for a quantum affine algebra
$U_q(\mathfrak{g})$,
where $q$ is the deformation parameter and $q=1$ means non-deformed.
It is defined on the tensor
product of two affine crystals $\mbox{\sl Aff}(B)\otimes\mbox{\sl Aff}(B')$
(See Section \ref{sec:crystals} for notations), and consists of
an isomorphism and an energy function.
It was first introduced in \cite{KMN1} for the {\em homogeneous} case
where one has $B=B'$.
In this case the isomorphism is trivial.
The energy function was used to
describe the path realization of the crystals of highest weight
representations of quantum affine algebras.
The definition of the energy function
was extended in \cite{NY} to the {\em inhomogeneous} case,
i.e. $B\neq B'$, to study the charge of the
Kostka-Foulkes polynomials \cite{Ma,LS,KR}.
In \cite{KKM} the theory of
coherent families of perfect crystals
was developed for quantum affine algebras
of type $A^{(1)}_n,A^{(2)}_{2n-1}, A^{(2)}_{2n}$,
$B^{(1)}_n, C^{(1)}_n$, $D^{(1)}_n$ and $D^{(2)}_{n+1}$.
An element of these crystals is written as
an array of nonnegative integers
and an explicit description of the
energy functions is given
in terms of piecewise linear functions
of its entries for the homogeneous case.
Unfortunately this description is not applicable to
the inhomogeneous cases.
The purpose of this paper is to give an explicit
description of the isomorphism and energy function
for the inhomogeneous cases.
The main tool of our description
is an insertion algorithm, that is, a certain procedure on Young tableaux.
The insertion algorithm itself was invented
in the context of the Robinson-Schensted correspondence \cite{F}
long before the crystal basis theory was initiated, and
was subsequently generalized in, e.g. \cite{Ber,P,Su}.
As far as $A_n^{(1)}$ crystals are concerned,
the isomorphisms and
energy functions were obtained
in terms of usual (type $A$)
Young tableaux and insertion algorithms thereof \cite{S,SW}.
In contrast,
no similar description for the combinatorial $R$ matrix had been made
for other quantum affine algebras, since
an insertion algorithm suitable for the $B,C,D$ tableaux
given in \cite{KN} was known only recently \cite{B1,B2,L}.
In the previous work \cite{HKOT} the authors gave a description for
type $C_n^{(1)}$ and $A_{2n-1}^{(2)}$.
There we used the $\mathfrak{sp}$-version of semistandard tableaux
defined in \cite{KN}
and the column insertion algorithm presented in \cite{B1} on these tableaux.
In this paper we study the remaining types,
$A^{(2)}_{2n}, D^{(2)}_{n+1}, B^{(1)}_n$ and $D^{(1)}_n$.
We use $\mathfrak{sp}$- and $\mathfrak{so}$- versions of semistandard tableaux
defined in \cite{KN}
and the column insertion algorithms presented in \cite{B1,B2}
on these tableaux.
The layout of this paper is as follows.
In Section \ref{sec:crystals} we give a brief review of the basic notions
in the theory of crystals and give the definition of combinatorial $R$ matrix.
We first give the description for types
$A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ in Section \ref{sec:Cx}.
In Sections \ref{subsec:twista} and \ref{subsec:twistd}
we recall the definitions of crystal $B_l$ for
type $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ respectively,
and give a description of its elements in terms of one-row tableaux.
We introduce the map $\omega$ from these crystals to the crystal
of type $C^{(1)}_n$, hence the procedure is reduced to that of
the latter case which we have already developed in \cite{HKOT}.
In Section \ref{subsec:ccis} we list the elementary operations of
column insertions and their inverses
for type $C$ tableaux with at most two rows.
In Section \ref{subsec:ruleCx} we give the main theorem
and give the description of the isomorphism and energy function
for type $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$, and
in Section \ref{subsec:exCx} we give examples.
The $B^{(1)}_n$ and $D^{(1)}_n$ cases are treated in Section \ref{sec:bd}.
The layout is parallel to Section \ref{sec:Cx}.
In Sections \ref{subsec:cib} and \ref{subsec:cid}, however, we also
prove the column bumping lemmas
(Lemmas \ref{lem:bcblxx} and \ref{lem:cblxx}) for $B$ and $D$
tableaux, since a route in the tableau made from inserted letters
(bumping route) has some importance in the main theorem.
\section{\mathversion{bold}Crystals and combinatorial $R$ matrix}
\label{sec:crystals}
Let us recall basic notions in the theory of crystals.
See \cite{KMN1,KKM} for details.
Let $I=\{0,1,\cdots,n\}$ be the index set.
Let $B$ be a $P_{\scriptstyle \mbox{\scriptsize \sl cl}}$-weighted
crystal, i.e. $B$ is a finite set equipped with the {\em crystal structure} that is given by the
maps $\tilde{e}_i$ and $\tilde{f}_i$ from $B\sqcup \{0\}$ to $B \sqcup \{0\}$
and maps $\varepsilon_i$ and $\varphi_i$ from $B$ to ${\mathbb Z}_{\ge 0}$.
It is always assumed that $\tilde{e}_i 0 = \tilde{f}_i 0 = 0$ and $\tilde{f}_i b = b'$ means
$\tilde{e}_i b' = b$.
The crystal $B$ is identified with a colored oriented graph ({\em crystal graph})
if one draws an arrow as $b \stackrel{i}{\rightarrow} b'$ for $\tilde{f}_i b = b'$.
Such an arrow is called $i$-arrow. Pick any $i$ and neglect all the $j$-arrows
with $j\neq i$. One then finds that all the connected
components are {\em strings} of finite lengths,
i.e. there is no loop or branch.
Fix a string and take any node $b$ in the string.
Then the maps $\varepsilon_i(b),\varphi_i(b)$ have the following meaning.
Along the string you can go forward by $\varphi_i(b)$ steps to an end
following the arrows and backward by $\varepsilon_i(b)$ steps against the arrows.
Given two crystals $B$ and $B'$, let $B \otimesimes B'$ be a crystal
defined as follows. As a set it is identified with $B\times B'$.
The actions of the operators $\et{i},\ft{i}$
on $B\otimes B'$ are given by
\begin{eqnarray*}
\et{i}(b\otimes b')&=&\left\{
\begin{array}{ll}
\et{i} b\otimes b'&\mbox{ if }\varphi_i(b)\ge\varepsilon_i(b')\\
b\otimes \et{i} b'&\mbox{ if }\varphi_i(b) < \varepsilon_i(b'),
\end{array}\right. \\
\ft{i}(b\otimes b')&=&\left\{
\begin{array}{ll}
\ft{i} b\otimes b'&\mbox{ if }\varphi_i(b) > \varepsilon_i(b')\\
b\otimes \ft{i} b'&\mbox{ if }\varphi_i(b)\le\varepsilon_i(b').
\end{array}\right.
\end{eqnarray*}
Here $0\otimes b'$ and $b\otimes 0$ should be understood as $0$.
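For example, as an immediate illustration of the rule above, if $\varphi_i(b)=
\varepsilon_i(b')=1$ then
\begin{equation*}
\et{i}(b\otimes b')=\et{i}b\otimes b', \qquad
\ft{i}(b\otimes b')=b\otimes \ft{i}b',
\end{equation*}
i.e. $\et{i}$ acts on the left component while $\ft{i}$ acts on the right one.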
All the crystals $B$ under consideration and their tensor products $B\otimes B'$
are connected as graphs.
Let $\mbox{\sl Aff} (B)=\left\{ z^d b | b \in B,\,d \in {\mathbb Z} \right\}$ be an
affinization of $B$ \cite{KMN1},
where $z$ is an indeterminate.
The crystal
$\mbox{\sl Aff} (B)$ is equipped with the crystal structure, where
the actions of $\et{i},\ft{i}$ are defined as
$\et{i}\cdot z^d b=z^{d+\delta_{i0}}(\et{i}b),\,
\ft{i}\cdot z^d b=z^{d-\delta_{i0}}(\ft{i}b)$.
The {\em combinatorial $R$ matrix}
is given by
\begin{eqnarray*}
R\;:\;\mbox{\sl Aff}(B)\otimes\mbox{\sl Aff}(B')&\longrightarrow&\mbox{\sl Aff}(B')\otimes\mbox{\sl Aff}(B)\\
z^d b\otimes z^{d'} b'&\longmapsto&z^{d'+H(b\otimes b')}\tilde{b}'\otimes z^{d-H(b\otimes b')}\tilde{b},
\end{eqnarray*}
where $\iota (b\otimes b') = \tilde{b}'\otimes\tilde{b}$ under the isomorphism $\iota: B\otimes B'
\stackrel{\sim}{\rightarrow}B'\otimes B$.
$H(b\otimes b')$ is called the
{\em energy function} and determined up to a global additive constant by
\[
H(\et{i}(b\otimes b'))=\left\{
\begin{array}{ll}
H(b\otimes b')+1&\mbox{ if }i=0,\ \varphi_0(b)\geq\varepsilon_0(b'),\
\varphi_0(\tilde{b}')\geq\varepsilon_0(\tilde{b}),\\
H(b\otimes b')-1&\mbox{ if }i=0,\ \varphi_0(b)<\varepsilon_0(b'),\
\varphi_0(\tilde{b}')<\varepsilon_0(\tilde{b}),\\
H(b\otimes b')&\mbox{ otherwise},
\end{array}\right.
\]
since $B\otimes B'$ is connected.
By definition $\iota$ satisfies $\et{i} \iota = \iota \et{i}$ and
$\ft{i} \iota = \iota \ft{i}$ on $B \otimes B'$.
The definition of the energy function
assures the intertwining property of
$R$, i.e. $\et{i} R = R \et{i}$ and $\ft{i} R = R \ft{i}$
on $\mbox{\sl Aff}(B)\otimes\mbox{\sl Aff}(B')$.
In the remaining part of this paper we do not
stick to the formalism on $\mbox{\sl Aff}(B)\otimes\mbox{\sl Aff}(B')$
and rather treat the isomorphism and
energy function separately.
\section{\mathversion{bold}$U_q'(A_{2n}^{(2)})$ and $U_q'(D_{n+1}^{(2)})$
crystal cases}
\label{sec:Cx}
\subsection{\mathversion{bold}Definitions : $U_q'(A_{2n}^{(2)})$ case}
\label{subsec:twista}
Given a positive integer $l$,
we consider a $U_q'(A_{2n}^{(2)})$ crystal denoted by $B_l$,
that is defined
in \cite{KKM}.
$B_l$'s are the crystal bases
of the irreducible finite-dimensional representations
of the quantum affine algebra $U_q'(A_{2n}^{(2)})$.
As a set $B_{l}$ reads
$$
B_{l} = \left\{(
x_1,\ldots, x_n,\overline{x}_n,\ldots,\overline{x}_1) \Biggm|
x_i, \overline{x}_i \in {\mathbb Z}_{\ge 0},
\sum_{i=1}^n(x_i + \overline{x}_i) \in \{l,l-1,\ldots ,0\} \right\}.
$$
For its crystal structure see \cite{KKM}.
$B_{l}$ is isomorphic to
$\bigoplus_{0 \leq j \leq l} B(j \Lambda_1)$ as
a $U_q(C_n)$ crystal, where $B(j \Lambda_1)$ is
the crystal associated with the irreducible representation of $U_q(C_n)$
with highest weight $j \Lambda_1$.
The $U_q(C_n)$ crystal $B(j \Lambda_1)$ has a description in terms of
semistandard $C$ tableaux \cite{KN}.
The entries are $1,\ldots ,n$ and $\ol{1}, \ldots ,\ol{n}$ with the
total order:
\begin{displaymath}
1 < 2 < \cdots < n < \ol{n} < \cdots < \ol{2} < \ol{1}.
\end{displaymath}
For an element $b$ of $B(j\Lambda_1)$ let us
denote by $\mathcal{T}(b)$ the
tableau associated with $b$.
Thus for
$b= (x_1, \ldots, x_n, \overline{x}_n,\ldots,\overline{x}_1) \in
B(j \Lambda_1)$
the tableau $\mathcal{T}(b)$ is depicted by
\begin{equation}
\label{eq:tabtwistax}
\mathcal{T}(b)=\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{x_1}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{x_n}\!
\overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{\ol{x}_n}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{\ol{x}_1}.
\end{equation}
The length of this one-row tableau is equal to $j$, namely
$\sum_{i=1}^n(x_i + \overline{x}_i) =j$.
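To illustrate this description with a simple example: for $n=2$ and
$b=(1,1,0,1) \in B(3 \Lambda_1)$ one has
\begin{displaymath}
\mathcal{T}(b)=\fbox{$\vphantom{\ol{1}}1$}\!\fbox{$\vphantom{\ol{1}}2$}\!\fbox{$\ol{1}$},
\end{displaymath}
a one-row tableau of length $j=3$.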
Here and in the remaining part of this paper we denote
$\overbrace{\fbox{$\vphantom{\ol{1}}i$} \fbox{$\vphantom{\ol{1}}i$} \fbox{$\vphantom{\ol{1}}\cdots$} \fbox{$\vphantom{\ol{1}}i$}}^{x}$ by
\par\noindent
\setlength{\unitlength}{5mm}
\begin{picture}(22,3)(-6,0)
\put(0,0){\makebox(10,3)
{$\overbrace{\fbox{$\vphantom{\ol{1}} i \cdots i$}}^{x}$ or
more simply by }}
\put(10,0.5){\line(1,0){3}}
\put(10,1.5){\line(1,0){3}}
\put(10,0.5){\line(0,1){1}}
\put(13,0.5){\line(0,1){1}}
\put(10,0.5){\makebox(3,1){$i \cdots i$}}
\put(10,1.5){\makebox(3,1){${\scriptstyle x}$}}
\put(13,0){\makebox(1,1){.}}
\end{picture}
\par\noindent
To describe our rule for the combinatorial $R$ matrix
we shall depict the elements of
$B_{l}$
by one-row tableaux with length $2l$.
We do this by duplicating each letter and then supplying
pairs of $0$ and $\ol{0}$.
Adding $0$ and $\ol{0}$ into the set of entries of the
tableaux, we assume
the total order $0 < 1 < \cdots < \ol{1} < \ol{0}$.
Let us introduce the map $\omega$ from the $U_q'(A_{2n}^{(2)})$
crystal $B_l$ to the $U_q'(C_{n}^{(1)})$ crystal\footnote{Here we adopted
the notation $B_{2l}$ that we have used in the previous work \cite{HKOT}.
This $B_{2l}$ was originally denoted by $B_l$ in \cite{KKM}.}
$B_{2l}$.
This $\omega$ sends
$b= (x_1, \ldots, x_n, \overline{x}_n,\ldots,\overline{x}_1)$ to
$\omega (b) = (2x_1, \ldots, 2x_n ,$ $2\overline{x}_n ,\ldots,2\overline{x}_1)$.
On the other hand let us introduce the symbol
${\mathbb T} (b')$ for
a $U_q'(C_{n}^{(1)})$ crystal
element $b' \in B_{l'}$ \cite{HKOT},
that represents a one-row tableau with length $l'$.
Putting these two symbols together we have
\begin{equation}
\label{eq:tabtwistaxx}
{\mathbb T} (\omega(b))=\overbrace{\fbox{$\vphantom{\ol{1}}0 \cdots 0$}}^{x_\emptyset}\!
\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{2x_1}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{2x_n}\!
\overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{2\ol{x}_n}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{2\ol{x}_1}\!
\overbrace{\fbox{$\ol{0} \cdots \ol{0}$}}^{\ol{x}_\emptyset},
\end{equation}
where $x_\emptyset = \overline{x}_\emptyset
= l-\sum_{i=1}^n (x_i + \overline{x}_i)$.
We shall use this tableau in our description of the
combinatorial $R$ matrix (Rule \ref{rule:Cx}).
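As a simple illustration of this correspondence, let $n=2$, $l=2$ and
$b=(1,0,0,0)\in B_{2}$. Then $\omega(b)=(2,0,0,0)$,
$x_\emptyset = \ol{x}_\emptyset = 1$, and
\begin{displaymath}
{\mathbb T} (\omega(b))=\fbox{$\vphantom{\ol{1}}0$}\!\fbox{$\vphantom{\ol{1}}1$}\!
\fbox{$\vphantom{\ol{1}}1$}\!\fbox{$\ol{0}$},
\end{displaymath}
a one-row tableau of length $2l=4$.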
{}From now on we shall devote ourselves to describing several important
properties of the map $\omega$.
Our goal here is Lemma \ref{lem:1} that
our description of the combinatorial $R$ matrix relies on.
For this purpose
we also use the symbol $\omega$ for the following map for $\et{i},\ft{i}$.
\begin{align*}
\omega (\tilde{e}_i ) & = (\tilde{e}_i')^{2 - \delta_{i,0}} ,\\
\omega (\tilde{f}_i ) & = (\tilde{f}_i')^{2 - \delta_{i,0}}.
\end{align*}
Hereafter
we attach prime $'$ to the notations for
the $U_q'(C_{n}^{(1)})$ crystals, e.g. $\tilde{e}_i', \varphi_i'$ and so on.
\begin{lemma}
\label{lem:01}
\begin{align*}
\omega (\tilde{e}_i b) & = \omega (\tilde{e}_i) \omega (b), \\
\omega (\tilde{f}_i b) & = \omega (\tilde{f}_i) \omega (b),
\end{align*}
i.e. the $\omega$ commutes with actions of the operators on $B_l$.
\end{lemma}
\noindent
Let us give a proof in the $\tilde{e}_0$ case.
(For the other $\tilde{e}_i$'s, and also for the $\tilde{f}_i$'s, the proof is similar.)
\begin{proof}
Let $b=(x_1,\ldots,\ol{x}_1) \in B_l$ ($U_q'(A_{2n}^{(2)})$ crystal).
We have \cite{KKM}
\begin{displaymath}
\tilde{e}_0 b =
\begin{cases}
(x_1-1,x_2,\ldots,\ol{x}_1) & \text{if $x_1 \geq \ol{x}_1+1$,} \\
(x_1,\ldots,\ol{x}_2,\ol{x}_1+1) & \text{if $x_1 \leq \ol{x}_1$.}
\end{cases}
\end{displaymath}
This means that
\begin{displaymath}
\omega (\tilde{e}_0 b) =
\begin{cases}
(2x_1-2,2x_2,\ldots,2\ol{x}_1) & \text{if $2x_1 \geq 2\ol{x}_1+2$,} \\
(2x_1,\ldots,2\ol{x}_2,2\ol{x}_1+2) & \text{if $2x_1 \leq 2\ol{x}_1$.}
\end{cases}
\end{displaymath}
On the other hand,
let $b'=(x'_1,\ldots,\ol{x}'_1) \in B'_{l'}$ ($U_q'(C_{n}^{(1)})$ crystal).
We have \cite{KKM}
\begin{displaymath}
\tilde{e}'_0 b' =
\begin{cases}
(x'_1-2,x'_2,\ldots,\ol{x}'_1) & \text{if $x'_1 \geq \ol{x}'_1+2$,} \\
(x'_1-1,x'_2,\ldots,\ol{x}'_2,\ol{x}'_1+1) & \text{if $x'_1 = \ol{x}'_1+1$,} \\
(x'_1,\ldots,\ol{x}'_2,\ol{x}'_1+2) & \text{if $x'_1 \leq \ol{x}'_1$.}
\end{cases}
\end{displaymath}
Thus putting $l'=2l$ and $b'=\omega (b)$ we obtain
$\omega (\tilde{e}_0) \omega (b) = \tilde{e}'_0 b' = \omega (\tilde{e}_0 b) $.
(The second choice in the above equation does not occur.)
\end{proof}
Let us denote $\omega (b_1) \otimes \omega (b_2)$ by $\omega(b_1 \otimes b_2)$
for $b_1 \otimes b_2 \in B_l \otimes B_k$.
\begin{lemma}
\label{lem:02}
\begin{align*}
\omega (\tilde{e}_i (b_1 \otimes b_2)) &=
\omega (\tilde{e}_i) \omega(b_1 \otimes b_2), \\
\omega (\tilde{f}_i (b_1 \otimes b_2)) &=
\omega (\tilde{f}_i) \omega(b_1 \otimes b_2).
\end{align*}
Namely the $\omega$ commutes with actions of
the operators on $B_l \otimes B_k$.
\end{lemma}
\begin{proof}
Let us check the latter.
Suppose we have $\varphi_i(b_1) \geq \varepsilon_i(b_2)+1$.
Then $\tilde{f}_i (b_1 \otimes b_2) = (\tilde{f}_i b_1) \otimes b_2$.
In this case we have $\varphi'_i (\omega (b_1))
\geq \varepsilon'_i (\omega (b_2))+2-\delta_{i,0}$,
since
\begin{align*}
\varphi_i'(\omega (b)) & = (2 - \delta_{i,0}) \varphi_i (b) ,\\
\varepsilon_i'(\omega (b)) & = (2 - \delta_{i,0}) \varepsilon_i (b) .
\end{align*}
Therefore we obtain
\begin{align*}
\omega (\tilde{f}_i) \omega(b_1 \otimes b_2) &=
(\tilde{f}_i')^{2-\delta_{i,0}} (\omega (b_1) \otimes \omega (b_2)) \\
&= ((\tilde{f}_i')^{2-\delta_{i,0}} \omega (b_1)) \otimes \omega (b_2) \\
&= (\omega (\tilde{f}_i) \omega (b_1)) \otimes \omega (b_2) \\
&= \omega (\tilde{f}_i b_1) \otimes \omega (b_2) \\
&= \omega (\tilde{f}_i b_1 \otimes b_2).
\end{align*}
The other case when $\varphi_i(b_1)\le\varepsilon_i(b_2)$ is similar.
\end{proof}
Finally we obtain the following important properties
of the map $\omega$.
\begin{lemma}
\label{lem:1}
\par\noindent
\begin{enumerate}
\renewcommand{(\roman{enumi})}{(\roman{enumi})}
\item If $b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the
isomorphism of the $U_q'(A_{2n}^{(2)})$ crystals $B_l \otimes B_k \simeq B_k \otimes B_l$,
then $\omega(b_1) \otimes \omega(b_2)$ is mapped to
$\omega(b'_2) \otimes \omega(b'_1)$ under the
isomorphism of the $U_q'(C_{n}^{(1)})$ crystals
$B_{2l} \otimes B_{2k} \simeq B_{2k} \otimes B_{2l}$.
\item Up to a global additive constant,
the value of the energy function $H_{B_lB_k}(b_1 \otimes b_2)$
for the $U_q'(A_{2n}^{(2)})$ crystal $B_l \otimes B_k$
is equal to the value of the energy function
$H'_{B_{2l}B_{2k}}(\omega(b_1) \otimes \omega(b_2))$ for the $U_q'(C_{n}^{(1)})$ crystal
$B_{2l} \otimes B_{2k}$.
\end{enumerate}
\end{lemma}
\begin{proof}
First we consider (i).
Since the crystal graph of $B_l \otimes B_k$ is connected,
it suffices to check (i) for a single specific element in $B_l \otimes B_k \simeq
B_k \otimes B_l$.
We can do it by taking
$(l,0,\ldots,0)\otimes (k,0,\ldots,0) \stackrel{\sim}{\mapsto}
(k,0,\ldots,0)\otimes (l,0,\ldots,0) $ as the specific element,
for which (i) certainly holds.
We proceed to (ii).
We can set
\begin{displaymath}
H_{B_l B_k}((l,0,\ldots,0)\otimes (k,0,\ldots,0) ) =
H'_{B_{2l} B_{2k}}(\omega((l,0,\ldots,0))\otimes \omega((k,0,\ldots,0)) ).
\end{displaymath}
Suppose $\tilde{e}_i (b_1 \otimes b_2) \ne 0$.
Recall the defining relations of
the energy function $H_{B_l B_k}$.
\begin{displaymath}
H_{B_l B_k} (\tilde{e}_i (b_1 \otimes b_2)) =
\begin{cases}
H_{B_l B_k} (b_1 \otimes b_2)+1 &
\text{if $i=0, \varphi_0(b_1) \geq \varepsilon_0(b_2), \varphi_0(b_2') \geq \varepsilon_0(b_1')$,} \\
H_{B_l B_k} (b_1 \otimes b_2)-1 &
\text{if $i=0, \varphi_0(b_1) < \varepsilon_0(b_2), \varphi_0(b_2') < \varepsilon_0(b_1')$,} \\
H_{B_l B_k} (b_1 \otimes b_2) &
\text{otherwise.}
\end{cases}
\end{displaymath}
Claim (ii) holds if
for any $i$ and $b_1 \otimes b_2$ with $\tilde{e}_i (b_1 \otimes b_2) \ne 0$, we have
\begin{eqnarray}
&& H'_{B_{2l} B_{2k}}(\omega(\tilde{e}_i (b_1 \otimes b_2)))-
H'_{B_{2l} B_{2k}}(\omega(b_1 \otimes b_2)) \nonumber\\
&& \quad =
H_{B_l B_k} (\tilde{e}_i (b_1 \otimes b_2))-
H_{B_l B_k} (b_1 \otimes b_2).
\label{eq:hfuncdif}
\end{eqnarray}
The $i=0$ case is verified as follows.
Since $\omega$ commutes with crystal actions
we have $\omega (\tilde{e}_0 (b_1 \otimes b_2)) =
\omega (\tilde{e}_0) (\omega(b_1) \otimes \omega(b_2)) =
\tilde{e}'_0 (\omega(b_1 \otimes b_2))$.
On the other hand $\omega$
preserves the inequalities in the classification
conditions in the defining relations of the energy
function, i.e. $\varphi'_0(\omega (b_1)) \geq
\varepsilon'_0(\omega(b_2)) \Leftrightarrow
\varphi_0(b_1) \geq \varepsilon_0(b_2)$, and so on.
Thus (\ref{eq:hfuncdif}) follows from the defining relations of the
$H'_{B_{2l} B_{2k}}$.
The $i\neq0$ case is easier.
This completes the proof.
\end{proof}
Since $\omega$ is injective we obtain the converse of (i).
\begin{coro}
If $\omega(b_1) \otimes \omega(b_2)$ is mapped to
$\omega(b'_2) \otimes \omega(b'_1)$ under the
isomorphism of the $U_q'(C_{n}^{(1)})$ crystals
$B_{2l} \otimes B_{2k} \simeq B_{2k} \otimes B_{2l}$, then
$b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the
isomorphism of the $U_q'(A_{2n}^{(2)})$ crystals $B_l \otimes B_k \simeq B_k \otimes B_l$.
\end{coro}
\subsection{\mathversion{bold}Definitions : $U_q'(D_{n+1}^{(2)})$ crystal case}
\label{subsec:twistd}
Given a positive integer $l$,
we consider a $U_q'(D_{n+1}^{(2)})$ crystal denoted by $B_l$
that is defined
in \cite{KKM}.
The $B_l$'s are the crystal bases
of irreducible finite-dimensional representations
of the quantum affine algebra $U_q'(D_{n+1}^{(2)})$.
As a set $B_{l}$ reads
$$
B_{l} = \left\{(
x_1,\ldots, x_n,x_{\circ},\overline{x}_n,\ldots,\overline{x}_1) \Biggm|
\begin{array}{l}
x_{\circ}=\mbox{$0$ or $1$}, \quad x_i, \overline{x}_i \in {\mathbb Z}_{\ge 0}\\
x_{\circ}+\sum_{i=1}^n(x_i + \overline{x}_i) \in \{l,l-1,\ldots ,0\}
\end{array}
\right\}.
$$
For its crystal structure see \cite{KKM}.
$B_{l}$ is isomorphic to
$\bigoplus_{0 \leq j \leq l} B(j \Lambda_1)$ as a $U_q(B_n)$ crystal, where $B(j \Lambda_1)$ is
the crystal associated with the irreducible representation of $U_q(B_n)$
with highest weight $j \Lambda_1$.
The $U_q(B_n)$ crystal $B(j \Lambda_1)$ has a description in terms of semistandard $B$-tableaux \cite{KN}.
The entries are $1,\ldots ,n$, $\ol{1}, \ldots ,\ol{n}$ and $\circ$ with the
total order:
\begin{displaymath}
1 < 2 < \cdots < n < \circ < \ol{n} < \cdots < \ol{2} < \ol{1}.
\end{displaymath}
In this paper we use the symbol $\circ$ for the entry of
the semistandard $B$ tableaux that is conventionally denoted by $0$.
For $b= (x_1, \ldots, x_n, x_{\circ}, \overline{x}_n,\ldots,\overline{x}_1) \in
B(j \Lambda_1)$
the tableau $\mathcal{T}(b)$ is depicted by
\begin{displaymath}
\mathcal{T}(b)=\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{x_1}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{x_n}\!
\overbrace{\fbox{$\vphantom{\ol{1}}\hphantom{1}\circ\hphantom{1}$}}^{x_{\circ}}\!
\overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{\ol{x}_n}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{\ol{x}_1}.
\end{displaymath}
The length of this one-row tableau is equal to $j$, namely
$x_{\circ}+\sum_{i=1}^n(x_i + \overline{x}_i) =j$.
To describe our rule for the combinatorial $R$ matrix
we shall depict the elements of $B_{l}$
by one-row $C$ tableaux with length $2l$.
We introduce the map $\omega$ from the $U_q'(D_{n+1}^{(2)})$
crystal $B_l$ to the $U_q'(C_{n}^{(1)})$ crystal $B_{2l}$.
$\omega$ sends
$b= (x_1, \ldots, x_n, x_{\circ}, \overline{x}_n,\ldots,\overline{x}_1)$ to
$\omega (b) = (2x_1, \ldots, 2x_{n-1},$ $2x_n + x_{\circ}, 2\overline{x}_n + x_{\circ},2\overline{x}_{n-1},\ldots,2\overline{x}_1)$.
By using the symbol ${\mathbb T}$
introduced in the previous subsection
the tableau ${\mathbb T} (\omega(b))$ is depicted by
\begin{displaymath}
{\mathbb T} (\omega (b))=\overbrace{\fbox{$\vphantom{\ol{1}} 0 \cdots 0$}}^{x_\emptyset}\!
\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{2x_1}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{2x_n+x_{\circ}}\!
\overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{2\ol{x}_n+x_{\circ}}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{2\ol{x}_1}\!
\overbrace{\fbox{$\ol{0} \cdots \ol{0}$}}^{\ol{x}_\emptyset},
\end{displaymath}
where $x_\emptyset = \overline{x}_\emptyset
= l-x_{\circ}-\sum_{i=1}^n (x_i + \overline{x}_i)$.
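For the reader's convenience we also record the coordinate form of $\omega$
algorithmically. The following sketch is only illustrative and is ours, not
taken from the cited references; the function name \texttt{omega\_D} and the
encoding of $b$ as a tuple of nonnegative integers are our own conventions.
\begin{verbatim}
# Illustrative sketch (ours): the map omega from the U_q'(D_{n+1}^(2))
# crystal B_l to the U_q'(C_n^(1)) crystal B_{2l}, in the coordinate
# form displayed above.  b = (x_1,...,x_n, x_circ, xbar_n,...,xbar_1).
def omega_D(b):
    n = (len(b) - 1) // 2
    x, x_circ, xbar = list(b[:n]), b[n], list(b[n+1:])
    out = [2 * xi for xi in x[:-1]]           # 2x_1, ..., 2x_{n-1}
    out.append(2 * x[-1] + x_circ)            # 2x_n + x_circ
    out.append(2 * xbar[0] + x_circ)          # 2xbar_n + x_circ
    out.extend(2 * xi for xi in xbar[1:])     # 2xbar_{n-1}, ..., 2xbar_1
    return tuple(out)

# For n = 2 and b = (1, 0, 1, 0, 2) this gives omega_D(b) = (2, 1, 1, 4).
\end{verbatim}
Note that the coordinate sum is doubled, in accordance with the passage from
$B_l$ to $B_{2l}$.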
Our description of the combinatorial $R$ matrix (Theorem \ref{th:main1})
is based on the following lemma.
\begin{lemma}
\label{lem:2}
\par\noindent
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item If $b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the
isomorphism of the $U_q'(D_{n+1}^{(2)})$ crystals $B_l \otimes B_k \simeq B_k \otimes B_l$,
then $\omega(b_1) \otimes \omega(b_2)$ is mapped to $\omega(b'_2) \otimes
\omega(b'_1)$ under the
isomorphism of the $U_q'(C_{n}^{(1)})$ crystals
$B_{2l} \otimes B_{2k} \simeq B_{2k} \otimes B_{2l}$.
\item Up to a global additive constant,
the value of the energy function $H_{B_lB_k}(b_1 \otimes b_2)$
for the $U_q'(D_{n+1}^{(2)})$ crystal $B_l \otimes B_k$
is equal to the value of the energy function
$H'_{B_{2l}B_{2k}}(\omega(b_1) \otimes \omega(b_2))$ for the $U_q'(C_{n}^{(1)})$ crystal
$B_{2l} \otimes B_{2k}$.
\end{enumerate}
\end{lemma}
\begin{proof}
To distinguish the two settings,
we attach a prime $'$ to the notations for
the $U_q'(C_{n}^{(1)})$ crystals.
Then we have
\begin{align*}
\varphi_i'(\omega (b)) & = (2 - \delta_{i,0}- \delta_{i,n}) \varphi_i (b) ,\\
\varepsilon_i'(\omega (b)) & = (2 - \delta_{i,0}- \delta_{i,n}) \varepsilon_i (b) .
\end{align*}
Let us define the action of $\omega$ on the operators by
\begin{align*}
\omega (\tilde{e}_i ) & = (\tilde{e}_i')^{2 - \delta_{i,0}- \delta_{i,n}} ,\\
\omega (\tilde{f}_i ) & = (\tilde{f}_i')^{2 - \delta_{i,0}- \delta_{i,n}}.
\end{align*}
By repeating an argument similar to that in the previous subsection
we obtain formally the same assertions as those of Lemmas
\ref{lem:01}, \ref{lem:02} and \ref{lem:1}.
This completes the proof.
\end{proof}
Since $\omega$ is injective we obtain the converse of (i).
\begin{coro}
If $\omega(b_1) \otimes \omega(b_2)$ is mapped to $\omega(b'_2) \otimes
\omega(b'_1)$ under the
isomorphism of the $U_q'(C_{n}^{(1)})$ crystals
$B_{2l} \otimes B_{2k} \simeq B_{2k} \otimes B_{2l}$, then
$b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the
isomorphism of the $U_q'(D_{n+1}^{(2)})$ crystals $B_l
\otimes B_k \simeq B_k \otimes B_l$.
\end{coro}
\subsection{\mathversion{bold} Column insertion and inverse insertion for
$C_n$}
\label{subsec:ccis}
Set an alphabet $\mathcal{X}=\mathcal{A} \sqcup \bar{\mathcal{A}},\,
\mathcal{A}=\{0, 1,\dots,n\}$ and
$\bar{\mathcal{A}}=\{\overline{0}, \overline{1},\dots,\overline{n}\}$,
with the total order
$0 < 1 < 2 < \dots < n < \overline{n} < \dots <
\overline{2} < \overline{1} < \overline{0}$.\footnote{We also introduce $0$ and $\ol{0}$ in the
alphabet. Compare with \cite{B1,KN}.}
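In explicit computations it is convenient to realize this total order by an
integer encoding. The following small sketch is ours and is not part of
\cite{B1,KN}; it merely fixes one possible encoding, and comparisons of
letters can then be performed as ordinary integer comparisons.
\begin{verbatim}
# Illustrative encoding (ours) of the alphabet X:
#   unbarred x in {0,...,n}  ->  x
#   barred   x in {0,...,n}  ->  2n + 1 - x
# so that 0 < 1 < ... < n < bar(n) < ... < bar(1) < bar(0).
def key(letter, n):
    kind, x = letter               # letter = ('plain', x) or ('bar', x)
    return x if kind == 'plain' else 2 * n + 1 - x
\end{verbatim}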
\subsubsection{Semi-standard $C$ tableaux}
\label{subsubsec:ssct}
Let us consider a {\em semistandard $C$ tableau}
made of letters from this alphabet.
We follow \cite{KN} for its definition.
We present the definition here, but restrict ourselves to
the special cases that are sufficient for our purpose.
Namely we consider only
those tableaux whose shapes have at most two rows.
Thus they have one of the forms
\begin{equation}
\label{eq:pictureofsst}
\setlength{\unitlength}{5mm}
\begin{picture}(6,1.5)(1.5,0)
\put(1,0){\line(1,0){5}}
\put(1,1){\line(1,0){5}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(3,0){\line(0,1){1}}
\put(5,0){\line(0,1){1}}
\put(6,0){\line(0,1){1}}
\put(1,0){\makebox(1,1){$\alpha_1$}}
\put(2,0){\makebox(1,1){$\alpha_2$}}
\put(3,0){\makebox(2,1){$\cdots$}}
\put(5,0){\makebox(1,1){$\alpha_j$}}
\end{picture}
\mbox{or}
\setlength{\unitlength}{5mm}
\begin{picture}(12,2.5)
\put(2,0){\line(1,0){5}}
\put(2,1){\line(1,0){10}}
\put(2,2){\line(1,0){10}}
\put(2,0){\line(0,1){2}}
\put(3,0){\line(0,1){2}}
\put(4,0){\line(0,1){2}}
\put(6,0){\line(0,1){2}}
\put(7,0){\line(0,1){2}}
\put(9,1){\line(0,1){1}}
\put(11,1){\line(0,1){1}}
\put(12,1){\line(0,1){1}}
\put(2,0){\makebox(1,1){$\beta_1$}}
\put(2,1){\makebox(1,1){$\alpha_1$}}
\put(3,0){\makebox(1,1){$\beta_2$}}
\put(3,1){\makebox(1,1){$\alpha_2$}}
\put(4,0){\makebox(2,1){$\cdots$}}
\put(4,1){\makebox(2,1){$\cdots$}}
\put(6,0){\makebox(1,1){$\beta_i$}}
\put(6,1){\makebox(1,1){$\alpha_i$}}
\put(7,1){\makebox(2,1){$\alpha_{i+1}$}}
\put(9,1){\makebox(2,1){$\cdots$}}
\put(11,1){\makebox(1,1){$\alpha_j$}}
\end{picture},
\end{equation}
and the letters inside the boxes should obey the following conditions:
\begin{equation}
\label{eq:notdecrease}
\alpha_1 \leq \cdots \leq \alpha_j,\quad \beta_1 \leq \cdots \leq \beta_i,
\end{equation}
\begin{equation}
\label{eq:strictincrease}
\alpha_a < \beta_a,
\end{equation}
\begin{equation}
\label{eq:zerozerobar}
(\alpha_a,\beta_a) \ne (0,\ol{0}),
\end{equation}
\begin{equation}
\label{eq:absenceofxxconf}
(\alpha_a,\alpha_{a+1},\beta_{a+1}) \ne (x,x,\ol{x}),\quad
(\alpha_a,\beta_{a},\beta_{a+1}) \ne (x,\ol{x},\ol{x}).
\end{equation}
Here we assume $1 \leq x \leq n$.
The last conditions (\ref{eq:absenceofxxconf}) are referred to as the
absence of the $(x,x)$-configurations.
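The four conditions above are easy to check mechanically. The following
sketch is ours and only illustrative; it uses the integer encoding of the
alphabet given after the definition of $\mathcal{X}$, with each row
represented as a list of encoded letters.
\begin{verbatim}
# Illustrative checker (ours) for a semistandard C tableau with at most
# two rows; letters are encoded as integers: x -> x, bar(x) -> 2n+1-x.
def is_semistandard_C(row1, row2, n):
    bar = lambda x: 2 * n + 1 - x
    i, j = len(row2), len(row1)
    if i > j:
        return False
    if any(row1[a] > row1[a + 1] for a in range(j - 1)):   # rows weakly increase
        return False
    if any(row2[a] > row2[a + 1] for a in range(i - 1)):
        return False
    for a in range(i):
        if not row1[a] < row2[a]:                          # columns strictly increase
            return False
        if (row1[a], row2[a]) == (0, bar(0)):              # no column (0, bar 0)
            return False
    for x in range(1, n + 1):                              # no (x,x)-configurations
        for a in range(i - 1):
            if (row1[a], row1[a + 1], row2[a + 1]) == (x, x, bar(x)):
                return False
            if (row1[a], row2[a], row2[a + 1]) == (x, bar(x), bar(x)):
                return False
    return True
\end{verbatim}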
\subsubsection{\mathversion{bold}Column insertion for
$C_n$ \cite{B1}}
\label{subsubsec:insc}
We give a list of column insertions on
semistandard $C$ tableaux that are sufficient for our
purpose (Rule \ref{rule:Cx}).
First of all let us explain the relation between the {\em insertion} and
the {\em inverse insertion}.
Since we are deliberately avoiding the occurrence of the
{\em bumping sliding transition}
(\cite{B1}), the situation is basically the same as that for the usual
tableau case (\cite{F}, Appendix A.2).
Namely, when a letter $\alpha$ is inserted into the tableau $T$,
we obtain a new tableau $T'$ whose shape has one more box than
the shape of $T$.
If we know the location of the new box we can reverse the insertion
process to retrieve the original tableau $T$ and letter $\alpha$.
This is the inverse insertion process.
These processes go on column by column.
Thus, from now on let us focus our attention on a particular column $C$ in the
tableau.
Suppose we have inserted a letter $\alpha$ into $C$.
Suppose then we have
obtained a column $C'$ and a letter $\alpha'$
bumped out from the column.
If we inversely insert the letter $\alpha'$ into the column
$C'$, we retrieve the original column $C$ and letter $\alpha$.
For the alphabet $\mathcal{X}$, we follow the convention
that Greek letters $ \alpha, \beta, \ldots $ belong to
$\mathcal{X}$ while Latin
letters $x,y,\ldots$ (resp. $\overline{x},\overline{y},\ldots$) belong to
$\mathcal{A}$ (resp. $\bar{\mathcal{A}}$).
The pictorial equations in the list should be interpreted as follows.
(We take up two examples.)
\begin{itemize}
\item In Case B0, the letter\footnote{By abuse of notation
we identify a letter with the one-box tableau having the letter in it.}
$\alpha$ is inserted into the column
with only one box that has letter $\beta$ in it.
The $\alpha$ is set in the box and the $\beta$ is bumped out to
the right-hand column.
\item In Case B1, the letter $\beta$ is inserted into the column
with two boxes that have letters $\alpha$ and $\gamma$ in them.
The $\beta$ is set in the lower box and the $\gamma$ is bumped out to
the right-hand column.
\end{itemize}
Other equations should be interpreted in a similar way\footnote{This interpretation
also applies to the lists of
type $B$ and $D$ cases in Sections \ref{subsubsec:insb} and
\ref{subsubsec:insd}.}.
We note that no two cases in the list overlap.
Note also that the list does not exhaust all patterns of column insertion
of a letter into a column with at most two boxes.
For instance it does not cover the case of insertion
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){${\scriptstyle \ol{2}}$}}
\put(1,0){\makebox(1,1){${\scriptstyle \rightarrow}$}}
\multiput(2,0)(1,0){2}{\line(0,1){2}}
\multiput(2,0)(0,1){3}{\line(1,0){1}}
\put(2,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(2,1){\makebox(1,1){${\scriptstyle 1}$}}
\end{picture}
\end{math}.
In Rule \ref{rule:Cx} we do not encounter such a case.
\par
\noindent
\ToBOX{A0}{\alpha}
\raisebox{1.25mm}{,}
\noindent
\ToDOMINO{A1}{\alpha}{\beta}{\alpha}{\beta}
\raisebox{4mm}{if $\alpha < \beta$ ,}
\noindent
\ToYOKODOMINO{B0}{\beta}{\alpha}{\beta}{\alpha}
\raisebox{1.25mm}{if $\alpha \le \beta$,}
\noindent
\ToHOOKnn{B1}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}{\beta}
\raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\ToHOOKnn{B2}{\beta}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}
\raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\ToHOOKll{B3}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}{\beta}
\raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne 0$,}
\noindent
\ToHOOKln{B4}{\beta}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}
\raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n$.}
\noindent
\subsubsection{Column insertion and $U_q (\ol{\mathfrak{g}})$ crystal morphism}
In this subsection we illustrate the relation between column
insertion and the crystal morphism that was given by Baker \cite{B1,B2}.
A crystal morphism is a (not necessarily one-to-one)
map between two crystals
that commutes with the actions of the crystal operators.
See, for instance \cite{KKM} for a precise definition.
A $U_q (\ol{\mathfrak{g}})$ crystal morphism is a morphism
that commutes with the actions of
$\tilde{e}_i$ and $\tilde{f}_i$ for $i \ne 0$.
For later use we also include
semistandard $B$ and $D$ tableaux in our discussion;
therefore we assume $\ol{\mathfrak{g}} =B_n, C_n$ or $D_n$.
See section \ref{subsubsec:ssbt} (resp.~\ref{subsubsec:ssdt})
for the definition of semistandard $B$ (resp.~$D$) tableaux.
Let $T$ be a semistandard $B$, $C$ or $D$ tableau.
For this $T$ we denote by $w(T)$ the
Japanese reading word of $T$, i.e. $w(T)$ is a sequence of letters that is created by
reading all letters on $T$ from the rightmost column
to the leftmost one,
and in each column, from top to bottom.
For instance,
\par\noindent
\setlength{\unitlength}{5mm}
\begin{picture}(22,1.5)(-5,0)
\put(0,0){\makebox(1,1){$w($}}
\put(1,0){\line(1,0){5}}
\put(1,1){\line(1,0){5}}
\put(1,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(3,0){\line(0,1){1}}
\put(5,0){\line(0,1){1}}
\put(6,0){\line(0,1){1}}
\put(1,0){\makebox(1,1){$\alpha_1$}}
\put(2,0){\makebox(1,1){$\alpha_2$}}
\put(3,0){\makebox(2,1){$\cdots$}}
\put(5,0){\makebox(1,1){$\alpha_j$}}
\put(6,0){\makebox(6,1){$)=\alpha_j \cdots \alpha_2 \alpha_1,$}}
\end{picture}
\par\noindent
\setlength{\unitlength}{5mm}
\begin{picture}(22,2.5)(-2,0)
\put(0,0){\makebox(2,2){$w \Biggl($}}
\put(2,0){\line(1,0){5}}
\put(2,1){\line(1,0){10}}
\put(2,2){\line(1,0){10}}
\put(2,0){\line(0,1){2}}
\put(3,0){\line(0,1){2}}
\put(4,0){\line(0,1){2}}
\put(6,0){\line(0,1){2}}
\put(7,0){\line(0,1){2}}
\put(9,1){\line(0,1){1}}
\put(11,1){\line(0,1){1}}
\put(12,1){\line(0,1){1}}
\put(2,0){\makebox(1,1){$\beta_1$}}
\put(2,1){\makebox(1,1){$\alpha_1$}}
\put(3,0){\makebox(1,1){$\beta_2$}}
\put(3,1){\makebox(1,1){$\alpha_2$}}
\put(4,0){\makebox(2,1){$\cdots$}}
\put(4,1){\makebox(2,1){$\cdots$}}
\put(6,0){\makebox(1,1){$\beta_i$}}
\put(6,1){\makebox(1,1){$\alpha_i$}}
\put(7,1){\makebox(2,1){$\alpha_{i+1}$}}
\put(9,1){\makebox(2,1){$\cdots$}}
\put(11,1){\makebox(1,1){$\alpha_j$}}
\put(12,0){\makebox(14,2){$\Biggr) =\alpha_j \cdots \alpha_{i+1}
\alpha_i \beta_i \cdots
\alpha_2 \beta_2 \alpha_1 \beta_1.$}}
\end{picture}
\par\noindent
Let $T$ and $T'$ be two tableaux.
We define the product tableau $T * T'$ by
\begin{displaymath}
T * T' = (\tau_1 \to \cdots (\tau_{j-1} \to ( \tau_j \to T ) ) \cdots )
\end{displaymath}
where
\begin{displaymath}
w(T') = \tau_j \tau_{j-1} \cdots \tau_1.
\end{displaymath}
The symbol $\to$ represents the column insertions in \cite{B1,B2} which
we partly describe in sections \ref{subsubsec:insc},
\ref{subsubsec:insb} and \ref{subsubsec:insd}.
(Note that the author of \cite{B1,B2} uses $\leftarrow$ instead of $\to$.)
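The reading word and the product tableau can be phrased algorithmically as
follows. The sketch is ours; the helper \texttt{column\_insert}, which is to
perform a single column insertion $\tau \to T$ according to the rules of
Sections \ref{subsubsec:insc}, \ref{subsubsec:insb} and \ref{subsubsec:insd},
is deliberately left as a placeholder.
\begin{verbatim}
# Illustrative sketch (ours).  A tableau T is a list of rows, each row a
# list of letters, as in the pictures above.  column_insert(tau, T) is a
# placeholder for a single column insertion tau -> T in the sense of
# [B1, B2]; it is not implemented here.
def reading_word(T):
    # read the columns from right to left, each column from top to bottom
    width = len(T[0])
    word = []
    for c in reversed(range(width)):
        for row in T:
            if c < len(row):
                word.append(row[c])
    return word

def product_tableau(T, Tprime, column_insert):
    # T * T' = (tau_1 -> ( ... (tau_{j-1} -> (tau_j -> T)) ... )),
    # where w(T') = tau_j tau_{j-1} ... tau_1, so tau_j is inserted first.
    for tau in reading_word(Tprime):
        T = column_insert(tau, T)
    return T
\end{verbatim}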
For a dominant integral weight $\lambda$ of the $\ol{\mathfrak{g}}$ root system,
let $B(\lambda)$ be the $U_q(\ol{\mathfrak{g}})$ crystal
associated with the irreducible highest weight representation $V(\lambda)$.
The elements of $B(\lambda)$ can be represented by
semistandard $\ol{\mathfrak{g}}$ tableaux of shape $\lambda$ \cite{KN}.
\begin{proposition}[\cite{B1,B2}]
\label{pr:morphgen}
Let $B(\mu) \otimes B(\nu) \simeq \bigoplus_j B(\lambda_j)^{\oplus m_j}$
be the tensor product decomposition of crystals. Here $\lambda_j$'s are
distinct highest weights and $m_j(\ge1)$ is the multiplicity of $B(\lambda_j)$.
Forgetting the multiplicities we have the canonical morphism from
$B(\mu) \otimes B(\nu)$ to $\bigoplus_j B(\lambda_j)$.
Define $\psi$ by
\begin{displaymath}
\psi(b_1 \otimes b_2) = b_1 * b_2.
\end{displaymath}
Then $\psi$ gives the unique $U_q(\ol{\mathfrak{g}})$ crystal morphism from
$B(\mu) \otimes B(\nu)$ to $\bigoplus_j B(\lambda_j)$.
\end{proposition}
\noindent
See Examples \ref{ex:morC1}, \ref{ex:morC2}, \ref{ex:morB}, \ref{ex:morD1}
and \ref{ex:morD2}.
\subsubsection{Column insertion and $U_q (C_n)$ crystal morphism}
To illustrate Proposition \ref{pr:morphgen},
let us check a morphism of the $U_q(C_2)$ crystal
$B(\Lambda_2) \otimes B(\Lambda_1)$ by taking two examples.
Let $\psi$ be the map that sends
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,0){\makebox(1,1){${\scriptstyle \beta}$}}
\put(0,1){\makebox(1,1){${\scriptstyle \alpha}$}}
\put(1,0.5){\makebox(1,1){${\scriptstyle \otimes}$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\put(2,0.5){\makebox(1,1){${\scriptstyle \gamma}$}}
\end{picture}
\end{math}
to the tableau which is made by the column insertion
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){${\scriptstyle \gamma}$}}
\put(1,0){\makebox(1,1){${\scriptstyle \rightarrow}$}}
\multiput(2,0)(1,0){2}{\line(0,1){2}}
\multiput(2,0)(0,1){3}{\line(1,0){1}}
\put(2,0){\makebox(1,1){${\scriptstyle \beta}$}}
\put(2,1){\makebox(1,1){${\scriptstyle \alpha}$}}
\end{picture}
\end{math}.
\begin{example}
\label{ex:morC1}
\begin{displaymath}
\begin{CD}
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$\ol{2}$}}
\put(0,1){\makebox(1,1){$2$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\put(2,0.5){\makebox(1,1){$2$}}
\end{picture}
@>\text{$\tilde{e}_1$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$\ol{2}$}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\put(2,0.5){\makebox(1,1){$2$}}
\end{picture}
\\
@VV\text{$\psi$}V @VV\text{$\psi$}V \\
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,0){\makebox(1,1){$2$}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,1){\makebox(1,1){$\ol{1}$}}
\end{picture}
@>\text{$\tilde{e}_1$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,0){\makebox(1,1){$2$}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,1){\makebox(1,1){$\ol{2}$}}
\end{picture}
\end{CD}
\end{displaymath}
\vskip3ex
\noindent
Here the left (resp.~right) $\psi$ is given by Case B3 (resp.~B1) column insertion.
\end{example}
\begin{example}
\label{ex:morC2}
\begin{displaymath}
\begin{CD}
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{1}$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$1$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
@>\text{$\tilde{f}_1$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{1}$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$2$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
\\
@VV\text{$\psi$}V @VV\text{$\psi$}V \\
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$2$}}\put(1,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{2}$}}
\end{picture}
@>\text{$\tilde{f}_1$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$2$}}\put(1,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
\end{CD}
\end{displaymath}
\vskip3ex
\noindent
Here the left (resp.~right) $\psi$ is given by Case B4 (resp.~B2) column insertion.
\end{example}
\subsubsection{\mathversion{bold}Inverse insertion for
$C_n$ \cite{B1}}
\label{subsubsec:invinsc}
In this subsection we give a list of inverse column
insertions on semistandard $C$ tableaux that are sufficient
for our purpose (Rule \ref{rule:Cx}).
The pictorial equations in the list should be interpreted as follows.
(We take two examples.)
\begin{itemize}
\item In Case C0, the letter $\beta$ is inversely inserted into the column
with only one box that has letter $\alpha$ in it.
The $\beta$ is set in the box and the $\alpha$ is bumped out to
the left-hand column.
\item In Case C1, the letter $\gamma$ is inversely inserted into the column
with two boxes that have letters $\alpha$ and $\beta$ in them.
The $\gamma$ is set in the lower box and the $\beta$ is bumped out to
the left-hand column.
\end{itemize}
Other equations illustrate analogous procedures.
\par
\noindent
\FromYOKODOMINO{C0}{\beta}{\alpha}{\beta}{\alpha}
\raisebox{1.25mm}{if $\alpha \le \beta$,}
\noindent
\FromHOOKnn{C1}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}{\beta}
\raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\FromHOOKnn{C2}{\beta}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}
\raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\FromHOOKnl{C3}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}{\beta}
\raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n$,}
\noindent
\FromHOOKll{C4}{\beta}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}
\raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne 0$.}
\subsection{\mathversion{bold}Main theorem : $A^{(2)}_{2n}$ and
$D^{(2)}_{n+1}$ cases}
\label{subsec:ruleCx}
Fix $l, k \in {\mathbb Z}_{\ge 1}$.
Given $b_1 \otimes b_2 \in B_{l} \otimes B_{k}$,
we define a $U'_q(C^{(1)}_n)$ crystal element
$\tilde{b}_2 \otimes \tilde{b}_1 \in B_{2k} \otimes B_{2l}$
and $l',k', m \in {\mathbb Z}_{\ge 0}$ by the following rule.
\begin{rules}\label{rule:Cx}
\par\noindent
Set $z = \min(\sharp\,\fbx{0} \text{ in }{\mathbb T}(\omega(b_1)),\,
\sharp\,\fbx{0} \text{ in }{\mathbb T}(\omega(b_2)))$.
Thus ${\mathbb T}(\omega(b_1))$ and ${\mathbb T}(\omega(b_2))$ can be depicted by
\begin{eqnarray*}
{\mathbb T}(\omega (b_1)) &=&
\setlength{\unitlength}{5mm}
\begin{picture}(10.5,1.4)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){10}}
\put(0,0){\line(0,1){1}}
\put(3,0){\line(0,1){1}}
\put(7,0){\line(0,1){1}}
\put(10,0){\line(0,1){1}}
\put(0,0){\makebox(3,1){$0\cdots 0$}}
\put(3,0){\makebox(4,1){$T_*$}}
\put(7,0){\makebox(3,1){$\ol{0}\cdots \ol{0}$}}
\multiput(0,0.9)(7,0){2}{\put(0,0){\makebox(3,1){$z$}}}
\end{picture},\\
{\mathbb T}(\omega(b_2)) &=&
\setlength{\unitlength}{5mm}
\begin{picture}(9.5,2)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){9}}
\multiput(0,0)(3,0){4}{\line(0,1){1}}
\put(3.9,0){\line(0,1){1}}
\put(5,0){\line(0,1){1}}
\put(0,0){\makebox(3,1){$0\cdots 0$}}
\put(3,0){\makebox(1,1){$v_{1}$}}
\put(4,0){\makebox(1,1){$\cdots$}}
\put(5,0){\makebox(1,1){$v_{k'}$}}
\put(6,0){\makebox(3,1){$\ol{0}\cdots \ol{0}$}}
\multiput(0,0.9)(6,0){2}{\put(0,0){\makebox(3,1){$z$}}}
\end{picture}.
\end{eqnarray*}
Set $l' = 2l-2z$ and $k' =2k-2z$,
hence $T_*$ is a one-row tableau of length $l'$.
Apply the column insertions for semistandard $C$ tableaux
and define
\begin{displaymath}
T^{(0)} := (v_1 \longrightarrow ( \cdots ( v_{k'-1} \longrightarrow (
v_{k'} \longrightarrow T_* ) ) \cdots ) ).
\end{displaymath}
It has the form:
\setlength{\unitlength}{5mm}
\begin{picture}(20,4)
\put(5,1.5){\makebox(3,1){$T^{(0)}=$}}
\put(8,1){\line(1,0){3.5}}
\put(8,2){\line(1,0){9}}
\put(8,3){\line(1,0){9}}
\put(8,1){\line(0,1){2}}
\put(11.5,1){\line(0,1){1}}
\put(12.5,2){\line(0,1){1}}
\put(17,2){\line(0,1){1}}
\put(12.5,2){\makebox(4.5,1){$i_{m+1} \;\cdots\; i_{l'}$}}
\put(8,1){\makebox(3,1){$\;\;i_1 \cdots i_m$}}
\put(8.5,2){\makebox(3,1){$\;\;j_1 \cdots\cdots j_{k'}$}}
\end{picture}
\noindent
where $m$ is the length of the second row, hence that of the first
row is $l'+k'-m$ ($0 \le m \le k'$).
Next we bump out $l'$ letters from
the tableau $T^{(0)}$ by the type $C$ reverse bumping
algorithm in section \ref{subsubsec:invinsc}.
In general, an inverse column insertion starts at the rightmost box of a row.
After an inverse column insertion we obtain a tableau whose shape
has one box deleted, namely the box at which the reverse bumping
started is removed from the original shape.
The boxes at which we start the inverse column insertions are those
labeled $i_{l'}, i_{l'-1}, \ldots, i_1$ in the above
tableau; we start with the box containing $i_{l'}$, then the one containing
$i_{l'-1}$, and so on.
Correspondingly, let $w_{1}$ be the first letter that is bumped out from
the leftmost column and $w_2$ be the second and so on.
Denote by $T^{(i)}$ the resulting tableau when $w_i$ is bumped out
($1 \le i \le l'$).
Note that $w_1 \le w_2 \le \cdots \le w_{l'}$.
Now the $U'_q(C^{(1)}_n)$ crystal elements
$\tilde{b}_1 \in B_{2l}$ and $\tilde{b}_2 \in B_{2k}$ are uniquely specified by
\begin{eqnarray*}
{\mathbb T}(\tilde{b}_2) &=&
\setlength{\unitlength}{5mm}
\begin{picture}(9.5,1.4)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){9}}
\multiput(0,0)(3,0){4}{\line(0,1){1}}
\put(0,0){\makebox(3,1){$0\cdots 0$}}
\put(3,0){\makebox(3,1){$T^{(l')}$}}
\put(6,0){\makebox(3,1){$\ol{0}\cdots \ol{0}$}}
\multiput(0,0.9)(6,0){2}{\put(0,0){\makebox(3,1){$z$}}}
\end{picture},\\
{\mathbb T}(\tilde{b}_1) &=&
\begin{picture}(10.5,2)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){10}}
\multiput(0,0)(3,0){2}{\line(0,1){1}}
\multiput(4.25,0)(1.5,0){2}{\line(0,1){1}}
\multiput(7,0)(3,0){2}{\line(0,1){1}}
\put(0,0){\makebox(3,1){$0\cdots 0$}}
\put(3,0){\makebox(1.25,1){$w_{1}$}}
\put(4.25,0){\makebox(1.5,1){$\cdots$}}
\put(5.75,0){\makebox(1.25,1){$w_{l'}$}}
\put(7,0){\makebox(3,1){$\ol{0}\cdots \ol{0}$}}
\multiput(0,0.9)(7,0){2}{\put(0,0){\makebox(3,1){$z$}}}
\end{picture}.
\end{eqnarray*}
\end{rules}
(End of the Rule)
\vskip3ex
We normalize the energy function as $H_{B_l B_k}(b_1 \otimes b_2)=0$
for
\begin{math}
\mathcal{T}(b_1) =
\setlength{\unitlength}{5mm}
\begin{picture}(3,1.5)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\multiput(0,0)(3,0){2}{\line(0,1){1}}
\put(0,0){\makebox(3,1){$1\cdots 1$}}
\put(0,1){\makebox(3,0.5){$\scriptstyle l$}}
\end{picture}
\end{math}
and
\begin{math}
\mathcal{T}(b_2) =
\setlength{\unitlength}{5mm}
\begin{picture}(3,1.5)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\multiput(0,0)(3,0){2}{\line(0,1){1}}
\put(0,0){\makebox(3,1){$\ol{1}\cdots \ol{1}$}}
\put(0,1){\makebox(3,0.5){$\scriptstyle k$}}
\end{picture}
\end{math}
irrespective of whether $l < k$ or $l \ge k$.
Our main result for $A^{(2)}_{2n}$ and
$D^{(2)}_{n+1}$ is
\begin{theorem}\label{th:main1}
Given $b_1 \otimes b_2 \in B_l \otimes B_k$, find
the $U'_q(C^{(1)}_n)$ crystal element
$\tilde{b}_2 \otimes \tilde{b}_1 \in B_{2k} \otimes B_{2l}$
and $l', k', m$ by Rule \ref{rule:Cx}.
Let $\iota: B_l \otimes B_k \stackrel{\sim}{\rightarrow} B_k \otimes B_l$ be the isomorphism of
$U'_q(A^{(2)}_{2n})$ (or $U'_q(D^{(2)}_{n+1})$) crystals.
Then $\tilde{b}_2 \otimes \tilde{b}_1$ is in the image of
$B_k \otimes B_l$ under the injective map $\omega$ and
we have
\begin{align*}
\iota(b_1\otimes b_2)& = \omega^{-1}(\tilde{b}_2 \otimes \tilde{b}_1),\\
H_{B_l B_k}(b_1 \otimes b_2) &= \min(l',k')- m.
\end{align*}
\end{theorem}
\noindent
This theorem follows immediately from Theorem 3.4 of \cite{HKOT}
(the corresponding theorem for the $C^{(1)}_n$ case), combined with
Lemmas \ref{lem:1}, \ref{lem:2} and their corollaries.
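Before turning to the examples, we summarize Rule \ref{rule:Cx} and the energy
formula of Theorem \ref{th:main1} in the following schematic sketch. It is
ours and only illustrative; \texttt{omega}, \texttt{column\_insert} and
\texttt{inverse\_column\_insert} are placeholders for the map $\omega$
(returning the padded one-row tableau ${\mathbb T}(\omega(b))$ as a list of
letters), the type $C$ column insertions of Section \ref{subsubsec:insc}, and
the inverse insertions of Section \ref{subsubsec:invinsc}; letters are encoded
by integers as before.
\begin{verbatim}
# Schematic sketch (ours) of the rule and of the energy formula.  The
# helpers omega, column_insert, inverse_column_insert are placeholders;
# inverse_column_insert is assumed to perform one inverse insertion,
# starting at the boxes labeled i_{l'}, ..., i_1 in turn, and to return
# the smaller tableau together with the letter bumped out on the left.
def R_and_energy(b1, b2, l, k, n, omega, column_insert, inverse_column_insert):
    ZERO, BARZERO = 0, 2 * n + 1                  # encoded letters 0 and bar 0
    T1, T2 = list(omega(b1)), list(omega(b2))     # one-row tableaux, lengths 2l, 2k
    z = min(T1.count(ZERO), T2.count(ZERO))
    lp, kp = 2 * l - 2 * z, 2 * k - 2 * z         # l' and k'
    T_star = T1[z: 2 * l - z]                     # drop z 0's and z bar-0's
    v      = T2[z: 2 * k - z]                     # v_1, ..., v_{k'}
    T = [T_star]                                  # build T^(0)
    for tau in reversed(v):                       # insert v_{k'} first, v_1 last
        T = column_insert(tau, T)
    m = len(T[1]) if len(T) > 1 else 0            # length of the second row
    w = []                                        # bumped letters w_1 <= ... <= w_{l'}
    for _ in range(lp):
        T, letter = inverse_column_insert(T)
        w.append(letter)
    btilde2 = [ZERO] * z + T[0] + [BARZERO] * z   # T(btilde_2) in B_{2k}
    btilde1 = [ZERO] * z + w    + [BARZERO] * z   # T(btilde_1) in B_{2l}
    H = min(lp, kp) - m                           # H_{B_l B_k}(b_1 (x) b_2)
    return btilde2, btilde1, H
\end{verbatim}
For instance, in the first example of the next subsection one finds $l'=6$,
$k'=4$ and $m=4$, whence $H=\min(6,4)-4=0$, as stated there.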
\subsection{Examples}
\label{subsec:exCx}
\begin{example}
Let us consider $B_3 \otimes B_2 \simeq B_2 \otimes B_3$ for $A^{(2)}_4$.
Let $b$ be an element of $B_3$ (resp.~$B_2$). It is depicted by a one-row
tableau $\mathcal{T} (b)$ with length 0, 1, 2 or 3 (resp.~0, 1 or 2).
\begin{displaymath}
\begin{array}{ccccccc}
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){1}}
\multiput(1,0)(0,1){2}{\line(1,0){1}}
\put(1,0){\makebox(1,1){$1$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0.5,0)(1,0){3}{\line(0,1){1}}
\multiput(0.5,0)(0,1){2}{\line(1,0){2}}
\put(0.5,0){\makebox(1,1){$1$}}
\put(1.5,0){\makebox(1,1){$2$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0.5,0)(1,0){2}{\line(0,1){1}}
\multiput(0.5,0)(0,1){2}{\line(1,0){1}}
\put(0.5,0){\makebox(1,1){$2$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0.5,0)(1,0){2}{\line(0,1){1}}
\multiput(0.5,0)(0,1){2}{\line(1,0){1}}
\put(0.5,0){\makebox(1,1){$1$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0.5,0)(1,0){3}{\line(0,1){1}}
\multiput(0.5,0)(0,1){2}{\line(1,0){2}}
\put(0.5,0){\makebox(1,1){$2$}}
\put(1.5,0){\makebox(1,1){$2$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$2$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$2$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){1}}
\multiput(1,0)(0,1){2}{\line(1,0){1}}
\put(1,0){\makebox(1,1){$2$}}
\end{picture}
\end{array}
\end{displaymath}
Here we have picked three samples.
One can check that they are mapped to each other under the
isomorphism of the $U_q'(A^{(2)}_4)$ crystals by explicitly writing down
the crystal graphs of $B_3 \otimes B_2$ and $B_2 \otimes B_3$ .
First we shall show that the use of the
tableau $\mathcal{T} (b)$ given by (\ref{eq:tabtwistax})
is not enough for our purpose, while the more elaborate
tableau ${\mathbb T} (\omega (b))$ given by (\ref{eq:tabtwistaxx}) does suffice.
Recall that by neglecting its zero arrows any $U_q'(A^{(2)}_4)$ crystal graph
decomposes into $U_q(C_2)$ crystal graphs.
Thus if $b_1 \otimes b_2$ is mapped to $b_2' \otimes b_1'$
under the isomorphism of the $U_q'(A^{(2)}_4)$ crystals,
they should also be mapped to each other under an
isomorphism of $U_q(C_2)$ crystals.
In this example this $U_q(C_2)$ crystal isomorphism
can be checked in terms of the tableau $\mathcal{T} (b)$ in the following way.
Given $b_1 \otimes b_2$ let us construct the product tableau
$\mathcal{T} (b_1) * \mathcal{T} (b_2)$
according to the original insertion rule in \cite{B1} where
in particular we have
\begin{math}
(\ol{1} \longrightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(1,1)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$1$}}
\end{picture}
) = \emptyset.
\end{math}\footnote{See the first footnote of subsection \ref{subsec:ccis}.}
One can see that
both sides of the
above three mappings then yield a common tableau
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(2,2)(0,0.5)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(2,1){\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(0,0){\line(1,0){1}}
\put(0,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(1,1){\makebox(1,1){${\scriptstyle 2}$}}
\put(0,0){\makebox(1,1){${\scriptstyle 2}$}}
\end{picture}
\end{math}.
This means that they are mapped to each other under isomorphisms of
$U_q(C_2)$ crystals.
Thus we see that they satisfy the necessary condition for
the isomorphism of $U_q'(A^{(2)}_4)$ crystals.
However, we also see that this method of
constructing $\mathcal{T} (b_1) * \mathcal{T} (b_2)$ is not strong enough to determine
the $U_q'(A^{(2)}_4)$ crystal isomorphism.
Theorem \ref{th:main1} asserts that we are able to determine
the $U_q'(A^{(2)}_4)$ crystal
isomorphism by means of the tableau ${\mathbb T} (\omega (b_1))* {\mathbb T} (\omega (b_2))$.
Namely the above three mappings are embedded into the following
mappings in
$B_6 \otimes B_4 \simeq B_4 \otimes B_6$ for the $U'_q(C^{(1)}_2)$
crystals.
\begin{displaymath}
\begin{array}{ccccccc}
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$0$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$1$}}
\put(4,0){\makebox(1,1){$\ol{0}$}}
\put(5,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$2$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$1$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$2$}}
\put(4,0){\makebox(1,1){$\ol{1}$}}
\put(5,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$2$}}
\put(4,0){\makebox(1,1){$2$}}
\put(5,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$2$}}
\put(4,0){\makebox(1,1){$2$}}
\put(5,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$1$}}
\put(4,0){\makebox(1,1){$2$}}
\put(5,0){\makebox(1,1){$2$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$\ol{1}$}}
\put(3,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$2$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$0$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$2$}}
\put(4,0){\makebox(1,1){$\ol{0}$}}
\put(5,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\end{array}
\end{displaymath}
We adopt the rule that
the column insertion
\begin{math}
(\ol{1} \longrightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(1,1)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$1$}}
\end{picture}
)
\end{math}
does not vanish \cite{HKOT}.
Accordingly, both sides of
the first mapping give the tableau
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(6,2)(0,0.5)
\multiput(0,0)(1,0){5}{\line(0,1){2}}
\multiput(5,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){4}}
\put(0,1){\makebox(1,1){${\scriptstyle 0}$}}
\put(1,1){\makebox(1,1){${\scriptstyle 0}$}}
\put(2,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(3,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(4,1){\makebox(1,1){${\scriptstyle \ol{0}}$}}
\put(5,1){\makebox(1,1){${\scriptstyle \ol{0}}$}}
\put(0,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(1,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(2,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(3,0){\makebox(1,1){${\scriptstyle 2}$}}
\end{picture}
\end{math}.
By deleting a $0,\ol{0}$ pair,
those of the second one give the tableau
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(4,2)(0,0.5)
\multiput(0,0)(1,0){3}{\line(0,1){2}}
\multiput(3,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){4}}
\put(0,0){\line(1,0){2}}
\put(0,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(1,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(2,1){\makebox(1,1){${\scriptstyle 2}$}}
\put(3,1){\makebox(1,1){${\scriptstyle 2}$}}
\put(0,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(1,0){\makebox(1,1){${\scriptstyle 2}$}}
\end{picture}
\end{math}.
Those of the third one give the tableau
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(6,2)(0,0.5)
\multiput(0,0)(1,0){5}{\line(0,1){2}}
\multiput(5,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){4}}
\put(0,1){\makebox(1,1){${\scriptstyle 0}$}}
\put(1,1){\makebox(1,1){${\scriptstyle 0}$}}
\put(2,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(3,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(4,1){\makebox(1,1){${\scriptstyle 2}$}}
\put(5,1){\makebox(1,1){${\scriptstyle 2}$}}
\put(0,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(1,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(2,0){\makebox(1,1){${\scriptstyle \ol{0}}$}}
\put(3,0){\makebox(1,1){${\scriptstyle \ol{0}}$}}
\end{picture}
\end{math}.
They are distinct.
The right hand side is uniquely
determined from the left hand side.
Second, let us illustrate in more detail the procedure of Rule \ref{rule:Cx}.
Take the last example.
{}From the left hand side we perform the column insertions as follows.
\begin{align*}
\ol{1} &\rightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$1$}}
\put(4,0){\makebox(1,1){$2$}}
\put(5,0){\makebox(1,1){$2$}}
\end{picture}
\quad = \quad
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(2,1)(1,0){5}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){1}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$1$}}
\put(4,1){\makebox(1,1){$2$}}
\put(5,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
\\
\ol{1} &\rightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(2,1)(1,0){5}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){1}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$1$}}
\put(4,1){\makebox(1,1){$2$}}
\put(5,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
\quad = \quad
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){3}{\line(0,1){2}}
\multiput(3,1)(1,0){4}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){2}}
\put(0,1){\makebox(1,1){$0$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$1$}}
\put(4,1){\makebox(1,1){$2$}}
\put(5,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{1}$}}
\put(1,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\\
2 &\rightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){3}{\line(0,1){2}}
\multiput(3,1)(1,0){4}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){2}}
\put(0,1){\makebox(1,1){$0$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$1$}}
\put(4,1){\makebox(1,1){$2$}}
\put(5,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{1}$}}
\put(1,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\quad = \quad
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){4}{\line(0,1){2}}
\multiput(4,1)(1,0){3}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){3}}
\put(0,1){\makebox(1,1){$0$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$1$}}
\put(4,1){\makebox(1,1){$2$}}
\put(5,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$\ol{1}$}}
\put(2,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\\
2 &\rightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){4}{\line(0,1){2}}
\multiput(4,1)(1,0){3}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){3}}
\put(0,1){\makebox(1,1){$0$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$1$}}
\put(4,1){\makebox(1,1){$2$}}
\put(5,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$\ol{1}$}}
\put(2,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\quad = \quad
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){5}{\line(0,1){2}}
\multiput(5,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){4}}
\put(0,1){\makebox(1,1){$0$}}
\put(1,1){\makebox(1,1){$0$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$1$}}
\put(4,1){\makebox(1,1){$2$}}
\put(5,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$\ol{0}$}}
\put(3,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\end{align*}
\vskip3ex
\noindent
The reverse bumping procedure goes as follows.
\begin{align*}
T^{(0)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){5}{\line(0,1){2}}
\multiput(5,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){4}}
\put(0,1){\makebox(1,1){$0$}}
\put(1,1){\makebox(1,1){$0$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$1$}}
\put(4,1){\makebox(1,1){$2$}}
\put(5,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$\ol{0}$}}
\put(3,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \\
T^{(1)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){5}{\line(0,1){2}}
\put(5,1){\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){5}}
\put(0,0){\line(1,0){4}}
\put(0,1){\makebox(1,1){$0$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$1$}}
\put(3,1){\makebox(1,1){$2$}}
\put(4,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$\ol{0}$}}
\put(3,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
&,w_1 = 0 \\
T^{(2)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){5}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){4}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$2$}}
\put(3,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$\ol{0}$}}
\put(3,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
&,w_2 = 0 \\
T^{(3)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){4}{\line(0,1){2}}
\put(4,1){\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){4}}
\put(0,0){\line(1,0){3}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$2$}}
\put(3,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$\ol{0}$}}
\put(2,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
&,w_3 = 2 \\
T^{(4)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){3}{\line(0,1){2}}
\multiput(3,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){4}}
\put(0,0){\line(1,0){2}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$2$}}
\put(3,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{0}$}}
\put(1,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
&,w_4 = 2 \\
T^{(5)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.8)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(2,1)(1,0){3}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){4}}
\put(0,0){\line(1,0){1}}
\put(0,1){\makebox(1,1){$1$}}
\put(1,1){\makebox(1,1){$1$}}
\put(2,1){\makebox(1,1){$2$}}
\put(3,1){\makebox(1,1){$2$}}
\put(0,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
&,w_5 = \ol{0} \\
T^{(6)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(6,2)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$2$}}
\end{picture}
&, w_6 = \ol{0}
\end{align*}
Thus we obtain the right hand side.
We assign $H_{B_3 B_2}=0$ to this element since we have $l'=6$, $k'=4$ and $m=4$,
hence $\min(l',k')-m=0$ in this case.
\end{example}
\begin{example}
$B_3 \otimes B_2 \simeq B_2 \otimes B_3$ for $D^{(2)}_3$.
\begin{displaymath}
\begin{array}{ccccccc}
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){1}}
\multiput(1,0)(0,1){2}{\line(1,0){1}}
\put(1,0){\makebox(1,1){$1$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$\circ$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0.5,0)(1,0){3}{\line(0,1){1}}
\multiput(0.5,0)(0,1){2}{\line(1,0){2}}
\put(0.5,0){\makebox(1,1){$1$}}
\put(1.5,0){\makebox(1,1){$\circ$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0.5,0)(1,0){2}{\line(0,1){1}}
\multiput(0.5,0)(0,1){2}{\line(1,0){1}}
\put(0.5,0){\makebox(1,1){$2$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0.5,0)(1,0){2}{\line(0,1){1}}
\multiput(0.5,0)(0,1){2}{\line(1,0){1}}
\put(0.5,0){\makebox(1,1){$1$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0.5,0)(1,0){3}{\line(0,1){1}}
\multiput(0.5,0)(0,1){2}{\line(1,0){2}}
\put(0.5,0){\makebox(1,1){$2$}}
\put(1.5,0){\makebox(1,1){$\circ$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$\circ$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$\circ$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){1}}
\multiput(1,0)(0,1){2}{\line(1,0){1}}
\put(1,0){\makebox(1,1){$2$}}
\end{picture}
\end{array}
\end{displaymath}
Here we have picked three samples.
According to the rule of the type $B$ column insertion in \cite{B2} we obtain
\begin{math}
(\ol{1} \longrightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(1,1)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$1$}}
\end{picture}
) = \emptyset
\end{math}.
In this rule we find that
both sides of the
above three mappings give a common tableau
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(2,2)(0,0.5)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(2,1){\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(0,0){\line(1,0){1}}
\put(0,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(1,1){\makebox(1,1){${\scriptstyle \circ}$}}
\put(0,0){\makebox(1,1){${\scriptstyle 2}$}}
\end{picture}
\end{math}.
Theorem \ref{th:main1} asserts that we are able to determine
the isomorphism of $U'_q(D^{(2)}_3)$ crystals by means of the tableau ${\mathbb T} (\omega (b))$.
The above three mappings are embedded into the following
mappings in
$B_6 \otimes B_4 \simeq B_4 \otimes B_6$ for the $U'_q(C^{(1)}_2)$
crystals.
\begin{displaymath}
\begin{array}{ccccccc}
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$0$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$1$}}
\put(4,0){\makebox(1,1){$\ol{0}$}}
\put(5,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$\ol{2}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$1$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$\ol{2}$}}
\put(4,0){\makebox(1,1){$\ol{1}$}}
\put(5,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$2$}}
\put(4,0){\makebox(1,1){$\ol{2}$}}
\put(5,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$2$}}
\put(4,0){\makebox(1,1){$\ol{2}$}}
\put(5,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$1$}}
\put(3,0){\makebox(1,1){$1$}}
\put(4,0){\makebox(1,1){$2$}}
\put(5,0){\makebox(1,1){$\ol{2}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$2$}}
\put(1,0){\makebox(1,1){$2$}}
\put(2,0){\makebox(1,1){$\ol{1}$}}
\put(3,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(4,1)(0,0.3)
\multiput(0,0)(1,0){5}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){4}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$\ol{2}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(6,1)(0,0.3)
\multiput(0,0)(1,0){7}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\makebox(1,1){$0$}}
\put(1,0){\makebox(1,1){$0$}}
\put(2,0){\makebox(1,1){$2$}}
\put(3,0){\makebox(1,1){$2$}}
\put(4,0){\makebox(1,1){$\ol{0}$}}
\put(5,0){\makebox(1,1){$\ol{0}$}}
\end{picture}
\end{array}
\end{displaymath}
Both sides of
the first mapping give the tableau
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(6,2)(0,0.5)
\multiput(0,0)(1,0){5}{\line(0,1){2}}
\multiput(5,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){4}}
\put(0,1){\makebox(1,1){${\scriptstyle 0}$}}
\put(1,1){\makebox(1,1){${\scriptstyle 0}$}}
\put(2,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(3,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(4,1){\makebox(1,1){${\scriptstyle \ol{0}}$}}
\put(5,1){\makebox(1,1){${\scriptstyle \ol{0}}$}}
\put(0,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(1,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(2,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(3,0){\makebox(1,1){${\scriptstyle \ol{2}}$}}
\end{picture}
\end{math}.
By deleting a $0,\ol{0}$ pair,
those of the second one give the tableau
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(4,2)(0,0.5)
\multiput(0,0)(1,0){3}{\line(0,1){2}}
\multiput(3,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){4}}
\put(0,0){\line(1,0){2}}
\put(0,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(1,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(2,1){\makebox(1,1){${\scriptstyle 2}$}}
\put(3,1){\makebox(1,1){${\scriptstyle \ol{2}}$}}
\put(0,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(1,0){\makebox(1,1){${\scriptstyle 2}$}}
\end{picture}
\end{math}.
Those of the third one give the tableau
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(6,2)(0,0.5)
\multiput(0,0)(1,0){5}{\line(0,1){2}}
\multiput(5,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){4}}
\put(0,1){\makebox(1,1){${\scriptstyle 0}$}}
\put(1,1){\makebox(1,1){${\scriptstyle 0}$}}
\put(2,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(3,1){\makebox(1,1){${\scriptstyle 1}$}}
\put(4,1){\makebox(1,1){${\scriptstyle 2}$}}
\put(5,1){\makebox(1,1){${\scriptstyle \ol{2}}$}}
\put(0,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(1,0){\makebox(1,1){${\scriptstyle 2}$}}
\put(2,0){\makebox(1,1){${\scriptstyle \ol{0}}$}}
\put(3,0){\makebox(1,1){${\scriptstyle \ol{0}}$}}
\end{picture}
\end{math}.
They are distinct.
The right-hand side is uniquely
determined by the left-hand side.
\end{example}
\section{\mathversion{bold}$U_q'(B_n^{(1)})$ and $U_q'(D_n^{(1)})$ crystal cases}
\label{sec:bd}
\subsection{\mathversion{bold}Definitions : $U_q'(B_n^{(1)})$ crystal case}
\label{subsec:typeB}
Given a positive integer $l$,
let us denote by $B_{l}$ the $U_q'(B_n^{(1)})$ crystal
defined in \cite{KKM}.
As a set $B_{l}$ reads
$$
B_{l} = \left\{(
x_1,\ldots, x_n,x_\circ,\overline{x}_n,\ldots,\overline{x}_1) \Biggm|
x_\circ=\mbox{$0$ or $1$}, x_i, \overline{x}_i \in {\mathbb Z}_{\ge 0},
x_\circ+\sum_{i=1}^n(x_i + \overline{x}_i) = l \right\}.
$$
For its crystal structure see \cite{KKM}.
$B_{l}$ is isomorphic to
$B(l \Lambda_1)$ as a $U_q(B_n)$ crystal.
We depict the element
$b= (x_1, \ldots, x_n, x_\circ,\overline{x}_n,\ldots,\overline{x}_1) \in
B_{l}$ by the tableau
\begin{displaymath}
\mathcal{T}(b)=\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{x_1}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{x_n}\!
\overbrace{\fbox{$\vphantom{\ol{1}}\hphantom{1}\circ\hphantom{1}$}}^{x_\circ}\!
\overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{\ol{x}_n}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{\ol{x}_1}.
\end{displaymath}
The length of this one-row tableau is equal to $l$, namely
$x_\circ+\sum_{i=1}^n(x_i + \overline{x}_i) =l$.
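For instance, for $n=2$ and $l=3$ the element $b=(1,0,1,1,0) \in B_3$
(so $x_1=1$, $x_\circ=1$, $\ol{x}_2=1$) is depicted by the one-row tableau
$\mathcal{T}(b)=\fbox{$\vphantom{\ol{1}}1$}\!\fbox{$\vphantom{\ol{1}}\circ$}\!\fbox{$\ol{2}$}$.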
\subsection{\mathversion{bold}Definitions : $U_q'(D_n^{(1)})$ crystal case}
\label{subsec:typeD}
Given a positive integer $l$,
let us denote by $B_{l}$ the $U_q'(D_n^{(1)})$ crystal
defined in \cite{KKM}.
As a set $B_{l}$ reads
$$
B_{l} = \left\{(
x_1,\ldots, x_n,\overline{x}_n,\ldots,\overline{x}_1) \Biggm|
\mbox{$x_n=0$ or $\overline{x}_n=0$}, x_i, \overline{x}_i \in {\mathbb Z}_{\ge 0},
\sum_{i=1}^n(x_i + \overline{x}_i) = l \right\}.
$$
For its crystal structure see \cite{KKM}.
$B_{l}$ is isomorphic to
$B(l \Lambda_1)$ as a $U_q(D_n)$ crystal.
We depict the element
$b= (x_1, \ldots, x_n,\overline{x}_n,\ldots,\overline{x}_1) \in
B_{l}$ by the tableau
\begin{displaymath}
{\mathcal T} (b)=\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{x_1}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{x_n}\!
\overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{\ol{x}_n}\!
\fbox{$\vphantom{\ol{1}}\cdots$}\!
\overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{\ol{x}_1}.
\end{displaymath}
The length of this one-row tableau is equal to $l$, namely
$\sum_{i=1}^n(x_i + \overline{x}_i) =l$.
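For instance, for $n=2$ and $l=2$ the element $b=(0,1,0,1) \in B_2$ is depicted by
$\mathcal{T}(b)=\fbox{$\vphantom{\ol{1}}2$}\!\fbox{$\ol{1}$}$;
note that $(0,1,1,0)$ is excluded, since $x_2$ and $\ol{x}_2$ cannot both be nonzero.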
\subsection{\mathversion{bold}Column insertion and inverse insertion for
$B_n$}
\label{subsec:cib}
Set an alphabet $\mathcal{X}=\mathcal{A} \sqcup \{\circ\} \sqcup \bar{\mathcal{A}},\,
\mathcal{A}=\{ 1,\dots,n\}$ and
$\bar{\mathcal{A}}=\{\overline{1},\dots,\overline{n}\}$,
with the total order
$1 < 2 < \dots < n < \circ < \overline{n} < \dots < \overline{2} < \overline{1}$.
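(For instance, for $n=2$ this reads $1<2<\circ<\ol{2}<\ol{1}$.)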
\subsubsection{Semistandard $B$ tableaux}
\label{subsubsec:ssbt}
Let us consider a {\em semistandard $B$ tableau}
made by the letters from this alphabet.
We follow \cite{KN} for its definition.
We present the definition here, but restrict ourselves to
special cases that are sufficient for our purpose.
Namely we consider only
those tableaux that have no more than two rows
in their shapes.
Thus they have the forms as in (\ref{eq:pictureofsst}),
with the letters inside the boxes now chosen from the alphabet
given as above.
The letters obey the conditions
(\ref{eq:notdecrease}) and the absence of the $(x,x)$-configuration
(\ref{eq:absenceofxxconf}) where we now assume $1 \leq x <n$.
They also obey the following conditions:
\begin{equation}
\alpha_a < \beta_a \quad \mbox{or} \quad (\alpha_a,\beta_a) = (\circ,\circ),
\end{equation}
\begin{equation}
\label{eq:absenceofoneonebar}
(\alpha_a,\beta_a) \ne (1,\ol{1}),
\end{equation}
\begin{equation}
\label{eq:absenceofnnconf}
(\alpha_a,\beta_{a+1}) \ne (n,\ol{n}),(n,\circ),(\circ,\circ),(\circ,\ol{n}).
\end{equation}
The last conditions (\ref{eq:absenceofnnconf}) are referred to as the
absence of the $(n,n)$-configurations.
\subsubsection{\mathversion{bold}Column insertion for
$B_n$ \cite{B2}}
\label{subsubsec:insb}
We give a partial list of patterns of column insertions on the
semistandard $B$ tableaux that are sufficient for our purpose.
For the alphabet $\mathcal{X}$, we follow the convention
that Greek letters $ \alpha, \beta, \ldots $ belong to
$\mathcal{X}$ while Latin
letters $x,y,\ldots$ (resp. $\overline{x},\overline{y},\ldots$) belong to
$\mathcal{A}$ (resp. $\bar{\mathcal{A}}$).
For the interpretation of the pictorial equations in the list,
see the remarks in Section \ref{subsubsec:insc}.
Note that this list does not exhaust all cases.
It does not contain, for instance, the insertion
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){${\scriptstyle \circ}$}}
\put(1,0){\makebox(1,1){${\scriptstyle \rightarrow}$}}
\multiput(2,0)(1,0){2}{\line(0,1){2}}
\multiput(2,0)(0,1){3}{\line(1,0){1}}
\put(2,0){\makebox(1,1){${\scriptstyle \circ}$}}
\put(2,1){\makebox(1,1){${\scriptstyle \circ}$}}
\end{picture}
\end{math}.
As far as this case is concerned, we see that
in Rule \ref{rule:typeB} neither $\mathcal{T} (b_1)$ nor $\mathcal{T} (b_2)$
contains more than one $\circ$.
Thus we do not encounter a situation where more than two $\circ$'s
appear in the procedure.
(See Proposition \ref{pr:atmosttworows}.)
\noindent
\ToBOX{A0}{\alpha}
\raisebox{1.25mm}{,}
\noindent
\ToDOMINO{A1}{\alpha}{\beta}{\alpha}{\beta}
\raisebox{4mm}{if $\alpha < \beta$ or $(\alpha,\beta)=(\circ,\circ)$,}
\noindent
\ToYOKODOMINO{B0}{\beta}{\alpha}{\beta}{\alpha}
\raisebox{1.25mm}{if $\alpha \le \beta$ and $(\alpha,\beta) \ne (\circ,\circ)$,}
\noindent
\ToHOOKnn{B1}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}{\beta}
\raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$
and $(\beta,\gamma) \ne (\circ,\circ)$,}
\noindent
\ToHOOKnn{B2}{\beta}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}
\raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$
and $(\alpha,\beta) \ne (\circ,\circ)$,}
\noindent
\ToHOOKnn{B3}{\circ}{\overline{x}}{\circ}{\overline{x}}{\circ}{\circ}
\raisebox{4mm}{,}
\noindent
\ToHOOKnn{B4}{\circ}{\circ}{x}{\circ}{x}{\circ}
\raisebox{4mm}{,}
\noindent
\ToHOOKll{B5}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}{\beta}
\raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne 1$,}
\noindent
\ToHOOKln{B6}{\beta}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}
\raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n$,}
\noindent
\ToHOOKnn{B7}{\circ}{\overline{n}}{n}{\overline{n}}{n}{\circ}
\raisebox{4mm}{.}
\noindent
\subsubsection{Column insertion and $U_q (B_n)$ crystal morphism}
To illustrate Proposition \ref{pr:morphgen}
let us check a morphism of the $U_q(B_3)$ crystal
$B(\Lambda_2) \otimes B(\Lambda_1)$ by taking an example.
Let $\psi$ be the map defined similarly to
the type $C$ case in Section \ref{subsubsec:insc}.
\begin{example}
\label{ex:morB}
\begin{displaymath}
\begin{CD}
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$\circ$}}
\put(0,0){\makebox(1,1){$\ol{3}$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$\circ$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
@>\text{$\tilde{e}_3$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$\circ$}}
\put(0,0){\makebox(1,1){$\ol{3}$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$3$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
@>\text{$\tilde{e}_3$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$\circ$}}
\put(0,0){\makebox(1,1){$\circ$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$3$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
@>\text{$\tilde{e}_3$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$3$}}
\put(0,0){\makebox(1,1){$\circ$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$3$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
\\
@VV\text{$\psi$}V @VV\text{$\psi$}V @VV\text{$\psi$}V @VV\text{$\psi$}V \\
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$\circ$}}\put(1,1){\makebox(1,1){$\ol{3}$}}
\put(0,0){\makebox(1,1){$\circ$}}
\end{picture}
@>\text{$\tilde{e}_3$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$\ol{3}$}}
\put(0,0){\makebox(1,1){$\circ$}}
\end{picture}
@>\text{$\tilde{e}_3$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$\circ$}}
\put(0,0){\makebox(1,1){$\circ$}}
\end{picture}
@>\text{$\tilde{e}_3$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$3$}}
\put(0,0){\makebox(1,1){$\circ$}}
\end{picture}
\end{CD}
\end{displaymath}
\vskip3ex
\noindent
Here the $\psi$'s are given by Case B3, B7, B4 and B2 column insertions,
respectively from left to right.
\end{example}
\subsubsection{\mathversion{bold}Inverse insertion for
$B_n$ \cite{B2}}
\label{subsubsec:invinsb}
We give a list of inverse column insertions on
semistandard $B$ tableaux that are sufficient for our purpose.
For the interpretation of the pictorial equations in the list,
see the remarks in Section
\ref{subsubsec:invinsc}.
\par
\noindent
\FromYOKODOMINO{C0}{\beta}{\alpha}{\beta}{\alpha}
\raisebox{1.25mm}{if $\alpha \le \beta$ and $(\alpha,\beta) \ne (\circ,\circ)$,}
\noindent
\FromHOOKnn{C1}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}{\beta}
\raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$
and $(\beta,\gamma) \ne (\circ,\circ)$,}
\noindent
\FromHOOKnn{C2}{\beta}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}
\raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$
and $(\alpha,\beta) \ne (\circ,\circ)$,}
\noindent
\FromHOOKnn{C3}{\overline{x}}{\circ}{\circ}{\circ}{\overline{x}}{\circ}
\raisebox{4mm}{,}
\noindent
\FromHOOKnn{C4}{\circ}{x}{\circ}{\circ}{\circ}{x}
\raisebox{4mm}{,}
\noindent
\FromHOOKnl{C5}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}{\beta}
\raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n$,}
\noindent
\FromHOOKll{C6}{\beta}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}
\raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne 1$,}
\noindent
\FromHOOKnn{C7}{\overline{n}}{n}{\circ}{\circ}{\overline{n}}{n}
\raisebox{4mm}{.}
\subsubsection{\mathversion{bold} Column bumping lemma for
$B_n$}
\label{subsubsec:cblB}
The aim of this subsection is to give a simple result on
successive insertions of two letters into a
tableau (Corollary \ref{cor:bcblxxx}).
This result will be
used in the proof of the main theorem (Theorem \ref{th:main3}).
This corollary follows from Lemma \ref{lem:bcblxx}.
This lemma is (a special case of) the {\em column bumping lemma},
whose claim is almost the same as that for the original lemma
for the usual tableaux (\cite{F}, Exercise 3 of
Appendix A).
We restrict ourselves to situations where the column insertions produce only
semistandard $B$ tableaux with at most two rows.
We consider the column insertion of a letter $\alpha$ into a tableau $T$.
We insert $\alpha$ into the leftmost column of $T$.
According to the rules, $\alpha$ is set in that column and bumps a
letter if possible.
The bumped letter is then inserted into the next column to the right.
The procedure continues until we come to Case A0 or A1.
When a letter is inserted into a tableau, we can define a {\em bumping route}.
It is the collection of boxes in the new tableau whose letters were
set by the insertion.
In each column there is at most one such box.
Thus we regard the bumping route as a path that goes from left to
right.
In the classification of the column insertions, we regard
the inserted letter as being set in the first row in Cases A0, B0, B2, B4, B6 and
B7, and in the second row in the other cases.
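The left-to-right structure of this procedure, together with the recording of
the bumping route, may be summarized by the following schematic sketch in
Python.  The helper \texttt{place\_and\_bump} is an assumption of the sketch:
it has to encode the full case analysis A0--B7 above (including the special
treatment of $\circ$ and of the pairs $(x,\ol{x})$); the naive stand-in given
here merely replaces the topmost entry that is not smaller than the inserted
letter (compare Case B0) and is {\em not} the genuine type $B$ rule.
\begin{verbatim}
def naive_place_and_bump(column, letter):
    # Toy stand-in for Cases A0--B7 over a totally ordered alphabet:
    # the inserted letter bumps the topmost entry >= it; if there is
    # none, it is appended at the bottom (Cases A0/A1).
    # Returns (new_column, bumped_letter_or_None, row_index_used).
    for row, entry in enumerate(column):
        if entry >= letter:
            new_column = list(column)
            new_column[row] = letter
            return new_column, entry, row
    return column + [letter], None, len(column)

def column_insert(tableau, letter, place_and_bump=naive_place_and_bump):
    # Insert `letter` into `tableau` (a list of columns, leftmost first,
    # each column a list of entries from top to bottom), bumping to the
    # right until nothing is bumped any more.  Also returns the bumping
    # route as a list of (column, row) pairs, at most one box per column.
    tableau = [list(col) for col in tableau]
    route, col, current = [], 0, letter
    while current is not None:
        if col == len(tableau):
            tableau.append([])
        tableau[col], current, row = place_and_bump(tableau[col], current)
        route.append((col, row))
        col += 1
    return tableau, route
\end{verbatim}
For example, \texttt{column\_insert([[1, 3], [2]], 2)} returns the tableau
\texttt{[[1, 2], [2, 3]]} together with the route \texttt{[(0, 1), (1, 1)]};
with the genuine type $B$ rules substituted for the stand-in, the same loop
structure underlies the example below.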
\begin{example}
Here we give an example of column insertion and its resulting
bumping route in a $B_3$ tableau.
\begin{displaymath}
\setlength{\unitlength}{5mm}
\begin{picture}(4,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\multiput(0,2)(0,1){3}{\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$\circ$}}
\put(3,3){\makebox(1,1){$\ol{3}$}}
\put(0,2){\makebox(1,1){$3$}}
\put(1,2){\makebox(1,1){$3$}}
\put(2,2){\makebox(1,1){$\ol{3}$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(0,1){\makebox(1,1){$\uparrow$}}
\put(0,0){\makebox(1,1){$2$}}
\end{picture}
\quad
\Rightarrow
\quad
\begin{picture}(4,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\multiput(0,2)(0,1){3}{\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$\circ$}}
\put(3,3){\makebox(1,1){$\ol{3}$}}
\put(0,2){\makebox(1,1){$2$}}
\put(0,2){\makebox(1,1){$\bigcirc$}}
\put(1,2){\makebox(1,1){$3$}}
\put(2,2){\makebox(1,1){$\ol{3}$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(1,1){\makebox(1,1){$\uparrow$}}
\put(1,0){\makebox(1,1){$3$}}
\end{picture}
\quad
\Rightarrow
\quad
\begin{picture}(4,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\multiput(0,2)(0,1){3}{\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$\circ$}}
\put(3,3){\makebox(1,1){$\ol{3}$}}
\put(0,2){\makebox(1,1){$2$}}
\put(0,2){\makebox(1,1){$\bigcirc$}}
\put(1,2){\makebox(1,1){$3$}}
\put(1,2){\makebox(1,1){$\bigcirc$}}
\put(2,2){\makebox(1,1){$\ol{3}$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(2,1){\makebox(1,1){$\uparrow$}}
\put(2,0){\makebox(1,1){$3$}}
\end{picture}
\quad
\Rightarrow
\quad
\begin{picture}(4,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\multiput(0,2)(0,1){3}{\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$3$}}
\put(2,3){\makebox(1,1){$\bigcirc$}}
\put(3,3){\makebox(1,1){$\ol{3}$}}
\put(0,2){\makebox(1,1){$2$}}
\put(0,2){\makebox(1,1){$\bigcirc$}}
\put(1,2){\makebox(1,1){$3$}}
\put(1,2){\makebox(1,1){$\bigcirc$}}
\put(2,2){\makebox(1,1){$\circ$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(3,1){\makebox(1,1){$\uparrow$}}
\put(3,0){\makebox(1,1){$\ol{3}$}}
\end{picture}
\end{displaymath}
\begin{displaymath}
\quad
\Rightarrow
\quad
\setlength{\unitlength}{5mm}
\begin{picture}(5,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\multiput(0,2)(0,1){3}{\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$3$}}
\put(2,3){\makebox(1,1){$\bigcirc$}}
\put(3,3){\makebox(1,1){$\ol{3}$}}
\put(3,3){\makebox(1,1){$\bigcirc$}}
\put(0,2){\makebox(1,1){$2$}}
\put(0,2){\makebox(1,1){$\bigcirc$}}
\put(1,2){\makebox(1,1){$3$}}
\put(1,2){\makebox(1,1){$\bigcirc$}}
\put(2,2){\makebox(1,1){$\circ$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(4,1){\makebox(1,1){$\uparrow$}}
\put(4,0){\makebox(1,1){$\ol{3}$}}
\end{picture}
\quad
\Rightarrow
\quad
\begin{picture}(5,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\put(5,3){\line(0,1){1}}
\multiput(0,3)(0,1){2}{\line(1,0){5}}
\put(0,2){\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$3$}}
\put(2,3){\makebox(1,1){$\bigcirc$}}
\put(3,3){\makebox(1,1){$\ol{3}$}}
\put(3,3){\makebox(1,1){$\bigcirc$}}
\put(0,2){\makebox(1,1){$2$}}
\put(0,2){\makebox(1,1){$\bigcirc$}}
\put(1,2){\makebox(1,1){$3$}}
\put(1,2){\makebox(1,1){$\bigcirc$}}
\put(2,2){\makebox(1,1){$\circ$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(4,3){\makebox(1,1){$\ol{3}$}}
\put(4,3){\makebox(1,1){$\bigcirc$}}
\end{picture}
\end{displaymath}
\end{example}
\begin{lemma}
\label{lem:bcblx}
The bumping route does not move down.
\end{lemma}
\begin{proof}
It suffices to consider the bumping processes occurring
on pairs of neighboring columns
in the tableau.
Our strategy is as follows.
We are to show that if the inserted letter is set in the first row of the left
column, then the same occurs in the right column as well.
Let us classify the situations on the neighboring columns into five cases.
\begin{enumerate}
\item
Suppose that in the following column insertion Case
B0 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(5,1.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\multiput(2.5,0)(1,0){3}{\line(0,1){1}}
\multiput(2.5,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\put(2.5,0){\makebox(1,1){$\beta$}}
\put(3.5,0){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Case B0 occurs and Case A1 does not happen.
The semistandard condition for $B$ tableaux imposes that
$(\beta,\gamma) \ne (\circ,\circ)$ and $\beta \leq \gamma$.
Thus if $\beta$ is bumped out from the left column, it certainly bumps
$\gamma$ out of the right column.
\item
Suppose that in the following column insertion one of the Cases
B2, B4 or B6 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(2.5,2.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\multiput(2.5,1)(0,1){2}{\line(1,0){2}}
\multiput(2.5,0)(1,0){2}{\line(0,1){2}}
\put(2.5,0){\line(1,0){1}}
\put(4.5,1){\line(0,1){1}}
\put(2.5,0){\makebox(1,1){$\delta$}}
\put(2.5,1){\makebox(1,1){$\beta$}}
\put(3.5,1){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Case B0 occurs and Case A1 does not happen.
The reason is as follows.
Whichever one of the B2, B4 or B6 may have occurred in the first column,
the letter bumped out from the first column is always $\beta$.
And again we have
the semistandard condition between $\beta$ and $\gamma$.
\item
In the following column insertion Case
B7 occurs in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(2.5,2.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$n$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\multiput(2.5,1)(0,1){2}{\line(1,0){2}}
\multiput(2.5,0)(1,0){2}{\line(0,1){2}}
\put(2.5,0){\line(1,0){1}}
\put(4.5,1){\line(0,1){1}}
\put(2.5,0){\makebox(1,1){$\ol{n}$}}
\put(2.5,1){\makebox(1,1){$\circ$}}
\put(3.5,1){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Case B0 occurs and Case A1 does not happen.
The letter bumped out from the first column is $\ol{n}$.
Due to the semistandard condition we have $\gamma \geq \ol{n}$, hence the claim
follows.
\item
Suppose that in the following column insertion one of the Cases
B2, B4 or B6 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(2.5,2.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\multiput(2.5,0)(1,0){3}{\line(0,1){1}}
\multiput(2.5,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\multiput(2.5,0)(0,1){3}{\line(1,0){2}}
\multiput(2.5,0)(1,0){3}{\line(0,1){2}}
\put(2.5,0){\makebox(1,1){$\delta$}}
\put(2.5,1){\makebox(1,1){$\beta$}}
\put(3.5,0){\makebox(1,1){$\varepsilon$}}
\put(3.5,1){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Cases B1, B3 and B5 do not happen.
The reason is as follows.
The letter bumped out from the first column is always $\beta$.
Since $\beta \leq \gamma$, B1 does not happen.
Since $(\beta,\gamma) \ne (\circ,\circ)$, B3 does not happen.
B5 does not happen since $(\beta,\gamma,\varepsilon) \ne (x,x,\ol{x})$, i.e. due to
the absence of the $(x,x)$-configuration (\ref{eq:absenceofxxconf}).
\item
In the following column insertion Case
B7 occurs in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(2.5,2.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\multiput(2.5,0)(1,0){3}{\line(0,1){1}}
\multiput(2.5,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$n$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\multiput(2.5,0)(0,1){3}{\line(1,0){2}}
\multiput(2.5,0)(1,0){3}{\line(0,1){2}}
\put(2.5,0){\makebox(1,1){$\ol{n}$}}
\put(2.5,1){\makebox(1,1){$\circ$}}
\put(3.5,0){\makebox(1,1){$\varepsilon$}}
\put(3.5,1){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Cases B1, B3 and B5 do not happen.
The letter bumped out from the first column is $\ol{n}$.
Due to the semistandard condition we have $\gamma \geq \ol{n}$,
hence the claim follows.
\end{enumerate}
\end{proof}
\begin{lemma}
\label{lem:bcblxx}
Let $\alpha' \leq \alpha$ and $(\alpha,\alpha') \ne (\circ,\circ)$.
Let $R$ be the bumping route that is made when $\alpha$ is inserted into $T$,
and $R'$ be the bumping route that is made when
$\alpha'$ is inserted into $\left( \alpha \longrightarrow T \right)$.
Then $R'$ does not lie below $R$.
\end{lemma}
\begin{proof}
First we consider the case where the bumping route
lies only in the first row.
Suppose that, when $\alpha$ was inserted
into the tableau $T$, it was set in the first row
in the first column.
We are to show that when $\alpha'$ is inserted,
it will be also set in the first row in the first column.
If $T$ is an empty set (resp.~has only one row), the insertion of
$\alpha$ should have been A0 (resp.~B0).
In either case we have B0 when $\alpha'$ is inserted, hence the claim is true.
Suppose $T$ has two rows.
By assumption B2, B4, B6 or B7 has occurred when $\alpha$ was inserted.
We see that,
if B4, B6 or B7 has occurred,
then B2 will occur when $\alpha'$ is inserted.
Thus it is enough to show that
if B2 has occurred,
then B1, B3 or B5 does not happen when $\alpha'$ is inserted.
Since $\alpha' \leq \alpha$, B1 does not happen.
Since $(\alpha,\alpha') \ne (\circ,\circ)$, B3 does not happen.
B5 does not happen, since the first column does not have the entry
${x \atop \overline{x}}$ as the result of B2 type insertion of
$\alpha$.
Second we consider the case where the bumping route $R$ lies across
the first and the second rows.
Suppose that from the leftmost column to
the $(i-1)$-th column the bumping route
lies in the second row, and from
the $i$-th column to the rightmost column it lies in the first row.
Let us call the position of the vertical line between the
$(i-1)$-th and the $i$-th columns the {\em crossing point} of $R$.
It is unique due to Lemma \ref{lem:bcblx}.
We call an analogous position of $R'$ its crossing point.
We are to show that the crossing point of $R'$ does not locate
strictly right to the crossing point of $R$.
Let the situation around the crossing point of $R$ be
\begin{equation}
\label{eq:crptb}
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\xi$}}
\put(2,1){\makebox(1,1){$\eta$}}
\end{picture}
\quad
\mbox{or}
\quad
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){2}}
\put(3,1){\line(0,1){1}}
\multiput(1,1)(0,1){2}{\line(1,0){2}}
\put(1,0){\line(1,0){1}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,1)(0,1){2}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\xi$}}
\put(2,1){\makebox(1,1){$\eta$}}
\end{picture}
\,
\mbox{.}
\end{equation}
During the insertion of $\alpha$ that led to these configurations,
let $\eta'$ denote the letter that was bumped out of the
left column.
Claim 1: $\xi \leq \eta' \le \eta$ and $(\xi,\eta) \ne (\circ,\circ)$.
To see this note that
in the left column,
B1, B3 or B5 has occurred when $\alpha$ was inserted.
We have $\xi \leq \eta'$ and $(\xi,\eta') \ne (\circ,\circ)$ (B1),
or $\xi < \eta'$ (B3, B5).
In the right column
A0, B0, B2, B4, B6 or B7 has subsequently occurred.
We have $\eta' = \eta$ (A0, B0, B2, B4, B7), or $\eta' < \eta$ (B6).
In any case we have $\xi \le \eta' \le \eta$ and $(\xi,\eta) \ne (\circ,\circ)$.
Claim 2: In (\ref{eq:crptb}) the following configurations do not exist.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\ol{x}$}}
\put(1,1){\makebox(1,1){$x$}}
\put(2,1){\makebox(1,1){$\ol{x}$}}
\end{picture}
\,
\mbox{,}
\,
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){2}}
\put(3,1){\line(0,1){1}}
\multiput(1,1)(0,1){2}{\line(1,0){2}}
\put(1,0){\line(1,0){1}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,1)(0,1){2}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\ol{x}$}}
\put(1,1){\makebox(1,1){$x$}}
\put(2,1){\makebox(1,1){$\ol{x}$}}
\end{picture}
\,
\mbox{,}
\,
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$x$}}
\put(2,0){\makebox(1,1){$\ol{x}$}}
\put(2,1){\makebox(1,1){$x$}}
\end{picture}
\,
\mbox{,}
\,
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,1){\makebox(1,1){$\circ$}}
\end{picture}
\,
\mbox{or}
\,
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){2}}
\put(3,1){\line(0,1){1}}
\multiput(1,1)(0,1){2}{\line(1,0){2}}
\put(1,0){\line(1,0){1}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,1)(0,1){2}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,1){\makebox(1,1){$\circ$}}
\end{picture}
\,
\mbox{.}
\label{eq:bforbiddenconfs}
\end{equation}
Due to Claim 1,
the first and the second configurations can exist only if
B1 with $\alpha= x, \gamma=\eta'$ happens in the left column
and $\xi = \eta'= \eta = \overline{x}$.
But $(\alpha,\gamma) = (x,\overline{x})$ is not compatible with B1.
The third configuration can exist only if
B6 happens in the right column and $\xi= \eta'= \eta= x$ by Claim 1.
But we see from the proof of Claim 1 that B6 actually happens only when $\eta' < \eta$.
The fourth and the fifth are forbidden since
$(\xi,\eta) \neq (\circ,\circ)$ by Claim 1.
Claim 2 is proved.
Let the situation around the crossing point of $R$ be one of (\ref{eq:crptb})
excluding (\ref{eq:bforbiddenconfs}).
When inserting $\alpha'$,
suppose that in the column to the left of the crossing point B1, B3 or B5 has occurred.
Let $\xi'$ be the letter bumped out therefrom.
Claim 3: $\xi' \leq \eta$ and $(\xi',\eta) \ne (\circ,\circ)$.
We divide the check into two cases.
a) If B1 or B3 has occurred in the left column, we have $\xi' = \xi$.
Thus the assertion follows from Claim 1.
b) If B5 has occurred, the left column had the entry ${x \atop \ol{x}}$ and
we have $\xi' = \ol{x-1}$, $\xi = \ol{x}$.
Claim 1 tells $\xi = \ol{x} \le \eta$, and Claim 2 does $\eta \neq \ol{x}$.
Therefore we have $\xi' = \ol{x-1} \le \eta$.
$(\xi',\eta) \ne (\circ,\circ)$ is obvious.
Claim 3 is proved.
Now we are ready to finish the proof of the main assertion.
Assume the same situation as Claim 3.
We should verify that A1, B1, B3 and B5 do not occur in the right column.
Claim 3 immediately prohibits A1, B1 and B3 in the right column.
Suppose that B5 happens in the right column.
It means that
$\eta \in \{1,\ldots, n\}$, $\xi' \geq \eta$ and the right column had the entry
${\eta \atop \ol{\eta}}$.
Since $\xi' \leq \eta$ by Claim 3, we find $\xi' = \eta$, therefore
$\xi' \in \{1,\ldots, n\}$.
Such a $\xi'$ can be bumped out of the left column only by a B1 process,
and not by B3 or B5.
It follows that $\xi' = \xi$.
This leads to the third configuration in (\ref{eq:bforbiddenconfs}),
hence a contradiction.
Finally we consider the case where the bumping route $R$
lies only in the second row.
If $R'$ were to lie below $R$, the tableau would have
more than two rows, which is prohibited by
Proposition \ref{pr:atmosttworows}.
\end{proof}
\begin{coro}
\label{cor:bcblxxx}
Let $\alpha' \leq \alpha$ and $(\alpha,\alpha') \ne (\circ,\circ)$.
Suppose that a new box is added at the end of the first row
when $\alpha$ is inserted into $T$.
Then a new box is added also at the end of the first row
when $\alpha'$ is inserted into $\left( \alpha \longrightarrow T \right)$.
\end{coro}
\subsection{\mathversion{bold}Column insertion and inverse insertion for
$D_n$}
\label{subsec:cid}
Set an alphabet $\mathcal{X}=\mathcal{A} \sqcup \bar{\mathcal{A}},\,
\mathcal{A}=\{ 1,\dots,n\}$ and
$\bar{\mathcal{A}}=\{\overline{1},\dots,\overline{n}\}$,
with the partial order
$1 < 2 < \dots < {n \atop \ol{n}} < \dots < \overline{2} < \overline{1}$.
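(For instance, for $n=2$ this reads $1 < {2 \atop \ol{2}} < \ol{1}$;
the letters $2$ and $\ol{2}$ are incomparable.)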
\subsubsection{Semistandard $D$ tableaux}
\label{subsubsec:ssdt}
Let us consider a {\em semistandard $D$ tableau}
made by the letters from this alphabet.
We follow \cite{KN} for its definition.
We present the definition here, but restrict ourselves to
special cases that are sufficient for our purpose.
Namely we consider only
those tableaux that have no more than two rows
in their shapes.
Thus they have the forms as in (\ref{eq:pictureofsst}),
with the letters inside the boxes being chosen from the alphabet
given as above.
The letters obey the conditions
(\ref{eq:notdecrease}),\footnote{Note that there is no order between $n$ and $\ol{n}$.}
(\ref{eq:absenceofoneonebar})
and the absence of the $(x,x)$-configuration
(\ref{eq:absenceofxxconf}) where we now assume $1 \leq x <n$.
They also obey the following conditions:
\begin{equation}
\alpha_a < \beta_a \quad \mbox{or} \quad (\alpha_a,\beta_a) = (n,\ol{n})
\quad \mbox{or} \quad (\alpha_a,\beta_a) = (\ol{n},n),
\end{equation}
\begin{equation}
(\alpha_a,\alpha_{a+1},\beta_a,\beta_{a+1}) \ne (n-1,n,n,\ol{n-1}),
(n-1,\ol{n},\ol{n},\ol{n-1}),
\end{equation}
\begin{equation}
\label{eq:absenceofnnconfd}
(\alpha_a,\beta_{a+1}) \ne (n,n),(n,\ol{n}),(\ol{n},n),(\ol{n},\ol{n}).
\end{equation}
The conditions (\ref{eq:absenceofnnconfd}) are referred to as the
absence of the $(n,n)$-configurations.
\subsubsection{\mathversion{bold}Column insertion for
$D_n$ \cite{B2}}
\label{subsubsec:insd}
We give a list of column insertions on
semistandard $D$ tableaux that are sufficient for our purpose.
For the alphabet $\mathcal{X}$, we follow the convention
that Greek letters $ \alpha, \beta, \ldots $ belong to
$\mathcal{X}$ while Latin
letters $x,y,\ldots$ (resp. $\overline{x},\overline{y},\ldots$) belong to
$\mathcal{A}$ (resp. $\bar{\mathcal{A}}$).
For the interpretation of the pictorial equations in the list,
see the remarks in Section
\ref{subsubsec:insc}.
Note that this list does not exhaust all cases.
It does not contain, for instance, the insertion
\begin{math}
\setlength{\unitlength}{3mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){${\scriptstyle n}$}}
\put(1,0){\makebox(1,1){${\scriptstyle \rightarrow}$}}
\multiput(2,0)(1,0){2}{\line(0,1){2}}
\multiput(2,0)(0,1){3}{\line(1,0){1}}
\put(2,0){\makebox(1,1){${\scriptstyle \ol{n}}$}}
\put(2,1){\makebox(1,1){${\scriptstyle n}$}}
\end{picture}
\end{math}.
(See Proposition \ref{pr:atmosttworows}.)
\noindent
\ToBOX{A0}{\alpha}
\raisebox{1.25mm}{,}
\noindent
\ToDOMINO{A1}{\alpha}{\beta}{\alpha}{\beta}
\raisebox{4mm}{if $\alpha < \beta$ or $(\alpha,\beta)=(n,\overline{n})$ or $(\overline{n},n)$,}
\noindent
\ToYOKODOMINO{B0}{\beta}{\alpha}{\beta}{\alpha}
\raisebox{1.25mm}{if $\alpha \le \beta$,}
\noindent
\ToHOOKnn{B1}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}{\beta}
\raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\ToHOOKnn{B2}{\beta}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}
\raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\ToHOOKll{B3}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}{\beta}
\raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne n,1\,$,}
\noindent
\ToHOOKln{B4}{\beta}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}
\raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n\!-\!1,n\,$,}
\noindent
\ToHOOKnn{B5}{\mu_1}{\mu_2}{x}{\mu_1}{x}{\mu_2}
\raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$ and
$x \ne n$,}
\noindent
\ToHOOKnn{B6}{\mu_1}{\overline{x}}{\mu_2}{\overline{x}}{\mu_1}{\mu_2}
\raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$ and
$\overline{x} \ne \overline{n}$,}
\noindent
\ToHOOKllnn{B7}{\mu}{\overline{n\!-\!1}}{n\!-\!1}{\mu}{\mu}{\overline{\mu}}
\raisebox{4mm}{if $\mu = n$ or $\overline{n}\;\;
(\overline{\mu}:=n$ if $\mu=\overline{n}$),}
\noindent
\ToHOOKll{B8}{\mu_1}{\mu_2}{\mu_2}{\overline{n\!-\!1}}{n\!-\!1}{\mu_2}
\raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$.}
\noindent
\subsubsection{Column insertion and $U_q (D_n)$ crystal morphism}
To illustrate Proposition \ref{pr:morphgen}
let us check a morphism of the $U_q(D_4)$ crystal
$B(\Lambda_2) \otimes B(\Lambda_1)$ by taking two examples.
Let $\psi$ be the map defined similarly to
the type $C$ case in Section \ref{subsubsec:insc}.
\begin{example}
\label{ex:morD1}
\begin{displaymath}
\begin{CD}
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$\ol{4}$}}
\put(0,0){\makebox(1,1){$4$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$3$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
@>\text{$\tilde{f}_4$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$\ol{4}$}}
\put(0,0){\makebox(1,1){$\ol{3}$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$3$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
@>\text{$\tilde{f}_4$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$\ol{4}$}}
\put(0,0){\makebox(1,1){$\ol{3}$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$\ol{4}$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
\\
@VV\text{$\psi$}V @VV\text{$\psi$}V @VV\text{$\psi$}V \\
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$\ol{4}$}}
\put(0,0){\makebox(1,1){$4$}}
\end{picture}
@>\text{$\tilde{f}_4$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$\ol{4}$}}\put(1,1){\makebox(1,1){$\ol{4}$}}
\put(0,0){\makebox(1,1){$4$}}
\end{picture}
@>\text{$\tilde{f}_4$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$\ol{4}$}}\put(1,1){\makebox(1,1){$\ol{4}$}}
\put(0,0){\makebox(1,1){$\ol{3}$}}
\end{picture}
\end{CD}
\end{displaymath}
\vskip3ex
\noindent
Here the $\psi$'s are given by Case B5, B7 and B2 column insertions,
respectively.
\end{example}
\begin{example}
\label{ex:morD2}
\begin{displaymath}
\begin{CD}
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$\ol{4}$}}
\put(0,0){\makebox(1,1){$\ol{3}$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$4$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
@>\text{$\tilde{e}_4$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$\ol{4}$}}
\put(0,0){\makebox(1,1){$4$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$4$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
@>\text{$\tilde{e}_4$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(3,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){1}}
\put(0,1){\makebox(1,1){$3$}}
\put(0,0){\makebox(1,1){$4$}}
\put(1,0.5){\makebox(1,1){$\otimes$}}
\put(2,0.5){\makebox(1,1){$4$}}
\multiput(2,0.5)(1,0){2}{\line(0,1){1}}
\multiput(2,0.5)(0,1){2}{\line(1,0){1}}
\end{picture}
\\
@VV\text{$\psi$}V @VV\text{$\psi$}V @VV\text{$\psi$}V \\
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$\ol{4}$}}\put(1,1){\makebox(1,1){$\ol{3}$}}
\put(0,0){\makebox(1,1){$4$}}
\end{picture}
@>\text{$\tilde{e}_4$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$\ol{3}$}}
\put(0,0){\makebox(1,1){$4$}}
\end{picture}
@>\text{$\tilde{e}_4$}>>
\setlength{\unitlength}{5mm}
\begin{picture}(2,2)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\put(0,0){\line(1,0){1}}
\multiput(0,1)(0,1){2}{\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$4$}}
\put(0,0){\makebox(1,1){$4$}}
\end{picture}
\end{CD}
\end{displaymath}
\vskip3ex
\noindent
Here the $\psi$'s are given by Case B6, B8 and B1 column insertions,
respectively.
\end{example}
\subsubsection{\mathversion{bold} Inverse insertion for
$D_n$ \cite{B2}}
\label{subsubsec:invinsd}
We give a list of inverse column insertions on
semistandard $D$ tableaux that are sufficient for our purpose.
For the interpretation of the pictorial equations in the list,
see the remarks in Section
\ref{subsubsec:invinsc}.
\par
\noindent
\FromYOKODOMINO{C0}{\beta}{\alpha}{\beta}{\alpha}
\raisebox{1.25mm}{if $\alpha \le \beta$,}
\noindent
\FromHOOKnn{C1}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}{\beta}
\raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\FromHOOKnn{C2}{\beta}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}
\raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\FromHOOKnl{C3}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}{\beta}
\raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n\!-\!1,n\,$,}
\noindent
\FromHOOKll{C4}{\beta}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}
\raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne n,1\,$,}
\noindent
\FromHOOKnn{C5}{\mu_1}{x}{\mu_2}{\mu_1}{\mu_2}{x}
\raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$ and
$x \ne n$,}
\noindent
\FromHOOKnn{C6}{\overline{x}}{\mu_1}{\mu_2}{\mu_1}{\overline{x}}{\mu_2}
\raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$ and
$\overline{x} \ne \overline{n}$,}
\noindent
\FromHOOKll{C7}{\mu_1}{\mu_1}{\mu_2}{\mu_1}{\overline{n\!-\!1}}{n\!-\!1}
\raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$,}
\noindent
\FromHOOKllnn{C8}{\overline{n\!-\!1}}{n\!-\!1}{\mu}{\overline{\mu}}{\mu}{\mu}
\raisebox{4mm}{if $\mu = n$ or $\overline{n} \;\;
(\overline{\mu}:=n$ if $\mu=\overline{n}$).}
\subsubsection{\mathversion{bold} Column bumping lemma for
$D_n$}
\label{subsubsec:cblD}
The aim of this subsection is to give a simple result on
successive insertions of two letters into a
tableau (Corollary \ref{cor:cblxxx}).
This result will be
used in the proof of the main theorem (Theorem \ref{th:main3}).
This corollary follows from the column bumping lemma
(Lemma \ref{lem:cblxx}).
We restrict ourselves to situations where the column insertions produce only
semistandard $D$ tableaux with at most two rows.
In the classification of the column insertions, we regard
the inserted letter as being set in the first row in
Cases A0, B0, B2, B4, B5 and
B7, and in the second row in the other cases.
Then the bumping route is defined in the same way as
in Section~\ref{subsubsec:cblB}.
\begin{lemma}
\label{lem:cblx}
The bumping route does not move down.
\end{lemma}
\begin{proof}
It is enough to consider the following three cases.
\begin{enumerate}
\item
Suppose that in the following column insertion Case
B0 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(5,1.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\multiput(2.5,0)(1,0){3}{\line(0,1){1}}
\multiput(2.5,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\put(2.5,0){\makebox(1,1){$\beta$}}
\put(3.5,0){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Case B0 occurs and Case A1 does not happen.
The semistandard condition for $D$ tableaux imposes that
$(\beta,\gamma) \ne (n,\ol{n}), (\ol{n},n)$ and $\beta \leq \gamma$.
Thus if $\beta$ is bumped out from the left column, it certainly bumps
$\gamma$ out of the right column.
\item
Suppose that in the following column insertion one of the Cases
B2, B4, B5 or B7 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(2.5,2.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\multiput(2.5,1)(0,1){2}{\line(1,0){2}}
\multiput(2.5,0)(1,0){2}{\line(0,1){2}}
\put(2.5,0){\line(1,0){1}}
\put(4.5,1){\line(0,1){1}}
\put(2.5,0){\makebox(1,1){$\delta$}}
\put(2.5,1){\makebox(1,1){$\beta$}}
\put(3.5,1){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Case B0 occurs and Case A1 does not happen.
The reason is as follows.
Whichever one of the B2, B4, B5 or B7 may have occurred in the first column,
the letter bumped out from the first column is always $\beta$. And we have
the semistandard condition between $\beta$ and $\gamma$.
\item
Suppose that in the following column insertion one of the Cases
B2, B4, B5 or B7 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(2.5,2.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\multiput(2.5,0)(1,0){3}{\line(0,1){1}}
\multiput(2.5,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\multiput(2.5,0)(0,1){3}{\line(1,0){2}}
\multiput(2.5,0)(1,0){3}{\line(0,1){2}}
\put(2.5,0){\makebox(1,1){$\delta$}}
\put(2.5,1){\makebox(1,1){$\beta$}}
\put(3.5,0){\makebox(1,1){$\varepsilon$}}
\put(3.5,1){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Cases B1, B3, B6 and B8 do not happen.
The reason is as follows.
As in the previous case
the letter bumped out from the first column is always $\beta$.
Since $\beta \leq \gamma$, B1 does not happen.
Since $(\beta,\gamma) \ne (n,\ol{n}), (\ol{n},n)$, B6 and B8 do not happen.
B3 does not happen since $(\beta,\gamma,\varepsilon) \ne (x,x,\ol{x})$, i.e. due to
the absence of the $(x,x)$-configuration (\ref{eq:absenceofxxconf}).
\end{enumerate}
\end{proof}
\begin{lemma}
\label{lem:cblxx}
Let $\alpha' \leq \alpha$,
in particular $(\alpha,\alpha') \ne (n,\ol{n}),(\ol{n},n)$.
Let $R$ be the bumping route that is made when $\alpha$ is inserted into $T$,
and $R'$ be the bumping route that is made when
$\alpha'$ is inserted into $\left( \alpha \longrightarrow T \right)$.
Then $R'$ does not lie below $R$.
\end{lemma}
\begin{proof}
First we consider the case where the bumping route
lies only in the first row.
Suppose that, when $\alpha$ was inserted
into the tableau $T$, it was set in the first row
in the first column.
We are to show that when $\alpha'$ is inserted,
it will be also set in the first row in the first column.
If $T$ is an empty set (resp.~has only one row), the insertion of
$\alpha$ should have been A0 (resp.~B0).
In either case we have B0 when $\alpha'$ is inserted, hence the claim is true.
Suppose $T$ has two rows.
By assumption B2, B4, B5 or B7 has occurred when $\alpha$ was inserted.
We see that:
a) If B7 has occurred,
then B5 will occur when $\alpha'$ is inserted;
b) If B5 or B4 has occurred,
then B2 will occur when $\alpha'$ is inserted.
Thus it is enough to show that
if B2 has occurred,
then B1, B3, B6 and B8 do not happen when $\alpha'$ is inserted.
Since $\alpha' \leq \alpha$, B1 does not happen.
Since $(\alpha , \alpha') \ne (n,\ol{n}),(\ol{n},n)$, B6 and B8 do not happen.
B3 does not happen, since the first column does not have the entry
${x \atop \overline{x}}$ as the result of B2 type insertion of
$\alpha$.
Second we consider the case where the bumping route $R$ lies across
the first and the second rows.
Suppose that from the leftmost column to
the $(i-1)$-th column the bumping route
lies in the second row, and from
the $i$-th column to the rightmost column it lies in the first row.
As in the type $B$ case
let us call the position of the vertical line between the
$(i-1)$-th and the $i$-th columns the crossing point of $R$.
It is unique due to Lemma \ref{lem:cblx}.
We call an analogous position of $R'$ its crossing point.
We are to show that the crossing point of $R'$ does not locate
strictly right to the crossing point of $R$.
Let the situation around the crossing point of $R$ be
\begin{equation}
\label{eq:crptd}
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\xi$}}
\put(2,1){\makebox(1,1){$\eta$}}
\end{picture}
\quad
\mbox{or}
\quad
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){2}}
\put(3,1){\line(0,1){1}}
\multiput(1,1)(0,1){2}{\line(1,0){2}}
\put(1,0){\line(1,0){1}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,1)(0,1){2}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\xi$}}
\put(2,1){\makebox(1,1){$\eta$}}
\end{picture}
\,
\mbox{.}
\end{equation}
During the insertion of $\alpha$ that led to these configurations,
let $\eta'$ denote the letter that was bumped out of the
left column.
Claim 1: $\xi \leq \eta$ and $(\xi,\eta) \ne (n,\ol{n}),(\ol{n},n)$.
To see this note that
in the left column,
B1, B3, B6 or B8 has occurred when $\alpha$ was inserted.
We have $\xi \leq \eta'$ (B1) or $\xi < \eta'$ (B3, B6, B8).
In the right column
A0, B0, B2, B4, B5 or B7 has subsequently occurred.
We have $\eta' = \eta$ (A0, B0, B2, B5), or $\eta' < \eta$ (B4, B7).
In any case we have $\xi \leq \eta$ and $(\xi,\eta)
\ne (n,\ol{n}),(\ol{n},n)$.
Claim 2: In (\ref{eq:crptd}) the following configurations do not exist.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\ol{x}$}}
\put(1,1){\makebox(1,1){$x$}}
\put(2,1){\makebox(1,1){$\ol{x}$}}
\end{picture}
\quad
\mbox{,}
\quad
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){2}}
\put(3,1){\line(0,1){1}}
\multiput(1,1)(0,1){2}{\line(1,0){2}}
\put(1,0){\line(1,0){1}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,1)(0,1){2}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\ol{x}$}}
\put(1,1){\makebox(1,1){$x$}}
\put(2,1){\makebox(1,1){$\ol{x}$}}
\end{picture}
\quad
\mbox{,}
\quad
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$x$}}
\put(2,0){\makebox(1,1){$\ol{x}$}}
\put(2,1){\makebox(1,1){$x$}}
\end{picture}
\quad
\mbox{or}
\quad
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{
\multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\ol{n}$}}
\put(2,0){\makebox(1,1){$n$}}
\put(2,1){\makebox(1,1){$\ol{n}$}}
\end{picture}
\,
\mbox{.}
\label{eq:forbiddenconfs}
\end{equation}
Due to Claim 1,
the first and the second configurations can exist only if
B1 with $\alpha= x, \gamma=\eta'$ happens in the left column
and $\xi = \eta'= \eta = \overline{x}$.
But $(\alpha,\gamma) = (x,\overline{x})$ is not compatible with B1.
The third (resp.~fourth) configuration can exist only if
B4 (resp.~B7) happens in the right column and $\xi= \eta'= \eta= x
\mbox{(resp.~$=\ol{n}$)}$ by Claim 1.
But we see from the proof of Claim 1 that B4 (resp.~B7) actually happens only when $\eta' < \eta$.
Claim 2 is proved.
Let the situation around the crossing
point of $R$ be one of (\ref{eq:crptd})
excluding (\ref{eq:forbiddenconfs}).
When inserting $\alpha'$,
suppose that in the column to the left of the crossing point
B1, B3, B6 or B8 has occurred.
Let $\xi'$ be the letter bumped out therefrom.
Claim 3: $\xi' \leq \eta$ and $(\xi',\eta) \ne (n,\ol{n}),(\ol{n},n)$.
We divide the check into three cases.
a) If B1 or B6 has occurred in the left column, we have $\xi' = \xi$.
Thus the assertion follows from Claim 1.
b) If B3 has occurred, the left column had the entry ${x \atop \ol{x}}$ and
we have $\xi' = \ol{x-1}$, $\xi = \ol{x}$.
Claim 1 tells $\xi = \ol{x} \le \eta$, and Claim 2 does $\eta \neq \ol{x}$.
Therefore we have $\xi' = \ol{x-1} \le \eta$.
c) If B8 has occurred we have $\xi' = \ol{n-1}$ for
either $\xi = n$ or $\xi = \ol{n}$.
If $\xi = n \mbox{ (resp. $\ol{n}$)}$ the entry on the left of $\eta$
was $\ol{n} \mbox{ (resp. $n$)}$,
therefore $\eta \geq \ol{n} \mbox{ (resp. $n$)}$.
On the other hand Claim 1 tells $(\xi,\eta) \ne (n,\ol{n}),(\ol{n},n)$.
Thus we have $\eta \geq \ol{n-1}$.
Claim 3 is proved.
Now we are ready to finish the proof of the main assertion.
Assume the same situation as Claim 3.
We should verify that A1, B1, B3, B6 and B8 do not
occur in the right column.
Claim 3 immediately prohibits A1, B1, B6 and B8 in the right column.
Suppose that B3 happens in the right column.
It means that
$\eta \in \{1,\ldots, n\}$, $\xi' \geq \eta$ and the
right column had the entry
${\eta \atop \ol{\eta}}$.
Since $\xi' \leq \eta$ by Claim 3, we find $\xi' = \eta$, therefore
$\xi' \in \{1,\ldots, n\}$.
Such a $\xi'$ can be bumped out of the left column only by a B1 process,
and not by B3, B6 or B8.
It follows that $\xi' = \xi$.
This leads to the third configuration in (\ref{eq:forbiddenconfs}),
hence a contradiction.
Finally we consider the case where the bumping route $R$
lies only in the second row.
If $R'$ were to lie below $R$, the tableau would have
more than two rows, which is prohibited by
Proposition \ref{pr:atmosttworows}.
\end{proof}
\begin{coro}
\label{cor:cblxxx}
Let $\alpha' \leq \alpha$, in particular $(\alpha,\alpha') \ne
(n,\ol{n}),(\ol{n},n)$.
Suppose that a new box is added at the end of the first row
when $\alpha$ is inserted into $T$.
Then a new box is added also at the end of the first row
when $\alpha'$ is inserted into $\left( \alpha \longrightarrow T \right)$.
\end{coro}
\subsection{\mathversion{bold}Main theorem :
$B^{(1)}_n$ and $D^{(1)}_n$ cases}
\label{subsec:isoruletypeb}
Given $b_1 \otimes b_2 \in B_{l} \otimes B_{k}$,
we define the element
$b'_2 \otimes b'_1 \in B_{k} \otimes B_{l}$
and $l',k', m \in {\mathbb Z}_{\ge 0}$ by the following rule.
\begin{rules}\label{rule:typeB}
\par\noindent
Set $z = \min(\sharp\,\fbx{1} \text{ in }{\mathcal T}(b_1),\,
\sharp\,\fbx{\ol{1}} \text{ in }{\mathcal T}(b_2))$.
Thus ${\mathcal T}(b_1)$ and ${\mathcal T}(b_2)$ can be depicted by
\[
{\mathcal T}(b_1) =
\setlength{\unitlength}{5mm}
\begin{picture}(7.5,1.4)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){7}}
\put(0,0){\line(0,1){1}}
\put(3,0){\line(0,1){1}}
\put(7,0){\line(0,1){1}}
\put(3,0){\makebox(4,1){$T_*$}}
\put(0,0){\makebox(3,1){$1\cdots 1$}}
\put(0,0.9){\makebox(3,1){$z$}}
\end{picture},\;\;
{\mathcal T}(b_2) =
\setlength{\unitlength}{5mm}
\begin{picture}(6.5,1.4)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(0,1){1}}
\put(0.9,0){\line(0,1){1}}
\put(2,0){\line(0,1){1}}
\put(3,0){\line(0,1){1}}
\put(6,0){\line(0,1){1}}
\put(0,0){\makebox(1,1){$v_{1}$}}
\put(1,0){\makebox(1,1){$\cdots$}}
\put(2,0){\makebox(1,1){$v_{k'}$}}
\put(3,0){\makebox(3,1){$\ol{1}\cdots\ol{1}$}}
\put(3,0.9){\makebox(3,1){$z$}}
\end{picture}.
\]
Let $l' = l-z$ and $k' = k-z$,
so that $T_*$ is a one-row tableau of length $l'$.
Operate the column insertions and define
\begin{equation}
\label{eq:prodtab}
T^{(0)} := (v_1 \longrightarrow ( \cdots ( v_{k'-1} \longrightarrow (
v_{k'} \longrightarrow T_* ) ) \cdots ) ).
\end{equation}
It has the following form (see Proposition \ref{pr:atmosttworows}):
\begin{equation}
\label{eq:prodtabpic}
\setlength{\unitlength}{5mm}
\begin{picture}(20,4)
\put(5,1.5){\makebox(3,1){$T^{(0)}=$}}
\put(8,1){\line(1,0){3.5}}
\put(8,2){\line(1,0){9}}
\put(8,3){\line(1,0){9}}
\put(8,1){\line(0,1){2}}
\put(11.5,1){\line(0,1){1}}
\put(12.5,2){\line(0,1){1}}
\put(17,2){\line(0,1){1}}
\put(12.5,2){\makebox(4.5,1){$i_{m+1} \;\cdots\; i_{l'}$}}
\put(8,1){\makebox(3,1){$\;\;i_1 \cdots i_m$}}
\put(8.5,2){\makebox(3,1){$\;\;j_1 \cdots\cdots j_{k'}$}}
\end{picture}
\end{equation}
where $m$ ($0 \le m \le k'$) is the length of the second row; hence the
length of the first row is $l'+k'-m$.
Next we bump out $l'$ letters from
the tableau $T^{(0)}$ by the reverse bumping
algorithm, applied to the boxes containing $i_{l'}, i_{l'-1}, \ldots, i_1$
in the above tableau: we do it first for $i_{l'}$, then for $i_{l'-1}$,
and so on.
Correspondingly, let $w_{1}$ be the first letter that is bumped out from
the leftmost column and $w_2$ be the second and so on.
Denote by $T^{(i)}$ the resulting tableau when $w_i$ is bumped out
($1 \le i \le l'$).
Now $b'_1 \in B_l$ and $b'_2 \in B_k$ are uniquely specified by
\[
{\mathcal T}(b'_2) =
\setlength{\unitlength}{5mm}
\begin{picture}(6.5,1.4)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(0,1){1}}
\put(3,0){\line(0,1){1}}
\put(6,0){\line(0,1){1}}
\put(3,0){\makebox(3,1){$T^{(l')}$}}
\put(0,0){\makebox(3,1){$1\cdots 1$}}
\put(0,0.9){\makebox(3,1){$z$}}
\end{picture},\;\;
{\mathcal T}(b'_1) =
\setlength{\unitlength}{5mm}
\begin{picture}(7.5,1.4)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){7}}
\put(0,0){\line(0,1){1}}
\put(1.25,0){\line(0,1){1}}
\put(2.75,0){\line(0,1){1}}
\put(4,0){\line(0,1){1}}
\put(7,0){\line(0,1){1}}
\put(0,0){\makebox(1.25,1){$w_{1}$}}
\put(1.25,0){\makebox(1.5,1){$\cdots$}}
\put(2.75,0){\makebox(1.25,1){$w_{l'}$}}
\put(4,0){\makebox(3,1){$\ol{1}\cdots\ol{1}$}}
\put(4,0.9){\makebox(3,1){$z$}}
\end{picture}.
\]
\end{rules}
(End of the Rule)
\vskip3ex
We normalize the energy function as $H_{B_l B_k}(b_1 \otimes b_2)=0$
for
\begin{math}
\mathcal{T}(b_1) =
\setlength{\unitlength}{5mm}
\begin{picture}(3,1.5)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\multiput(0,0)(3,0){2}{\line(0,1){1}}
\put(0,0){\makebox(3,1){$1\cdots 1$}}
\put(0,1){\makebox(3,0.5){$\scriptstyle l$}}
\end{picture}
\end{math}
and
\begin{math}
\mathcal{T}(b_2) =
\setlength{\unitlength}{5mm}
\begin{picture}(3,1.5)(0,0.3)
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\multiput(0,0)(3,0){2}{\line(0,1){1}}
\put(0,0){\makebox(3,1){$\ol{1}\cdots \ol{1}$}}
\put(0,1){\makebox(3,0.5){$\scriptstyle k$}}
\end{picture}
\end{math}
irrespective of whether $l < k$ or $l \ge k$.
Our main result for $U'_q(B^{(1)}_n)$ and $U'_q(D^{(1)}_n)$ is the following.
\begin{theorem}\label{th:main3}
Given $b_1 \otimes b_2 \in B_l \otimes B_k$, find $b'_2 \otimes b'_1 \in B_k \otimes B_l$
and $l', k', m$ by Rule \ref{rule:typeB}
with type $B$ (resp.~type $D$) insertion.
Let $\iota: B_l \otimes B_k \stackrel{\sim}{\rightarrow} B_k \otimes B_l$ be the isomorphism of
$U'_q(B^{(1)}_n)$ (resp.~ $U'_q(D^{(1)}_n)$) crystal.
Then we have
\begin{align*}
\iota(b_1\otimes b_2)& = b'_2 \otimes b'_1,\\
H_{B_l B_k}(b_1 \otimes b_2) &= 2\min(l',k')- m.
\end{align*}
\end{theorem}
\noindent
Before giving a proof of this theorem
we present two propositions associated with Rule \ref{rule:typeB}.
Let the product tableau ${\mathcal T}(b_1) * {\mathcal T}(b_2)$
be given by $T^{(0)}$
in eq.~(\ref{eq:prodtab}) of Rule \ref{rule:typeB}.
We assume that it is indeed a (semistandard $B$ or $D$) tableau.
\begin{proposition}
\label{pr:atmosttworows}
The product tableau ${\mathcal T}(b_1) *{\mathcal T}(b_2)$
made by (\ref{eq:prodtab}) has no more than two rows.
\end{proposition}
\begin{proof}
Let $T_{\bullet}$ be a tableau that appears at
an intermediate step of the sequence of
column insertions (\ref{eq:prodtab}).
Assume that $T_{\bullet}$ has two rows.
We denote by $\alpha$ the letter which we are going to insert
into $T_{\bullet}$ in the next step of the sequence, and denote by $\beta$
the letter which resides in the second row
of the leftmost column of $T_{\bullet}$.
It suffices to show that $\alpha$ does not
create a new box in the third row of the leftmost column.
In other words it suffices to show that $\alpha \leq \beta$ and
$(\alpha,\beta) \ne (\circ,\circ)$
(resp.~and in particular $(\alpha,\beta) \ne (n,\ol{n}),(\ol{n},n)$)
in $B_n$ (resp.~$D_n$) case.
Let us first consider $B_n$ case.
We divide the proof into two cases: (i)
$\beta = \circ$; (ii) $\beta \ne \circ$.
In case (i) either this $\beta=\circ$ was
originally contained in ${\mathcal T}(b_2)$ or
this $\beta=\circ$ was made by Case B7 in section \ref{subsubsec:insb}.
In any case we see $\alpha \leq n$ (thus $\alpha < \beta$) because of the
original arrangement of the letters
in ${\mathcal T}(b_2)$.
(Note that ${\mathcal T}(b_2)$ did not have more than one $\circ$.)
In case (ii) either this $\beta$ was
originally contained in ${\mathcal T}(b_2)$ or
this $\beta$ is an $\ol{x+1}$ which had
originally been an $\ol{x}$ in ${\mathcal T}(b_2)$ and then
transformed into $\ol{x+1}$ by Case B6 in section \ref{subsubsec:insb}.
In any case we see $\alpha \leq \beta$
and $(\alpha,\beta) \ne (\circ,\circ)$.
Second we consider $D_n$ case.
We divide the proof into two cases: (i) $\beta = n, \ol{n}$; (ii)
$\beta \ne n, \ol{n}$.
In case (i) either this $\beta=n, \ol{n}$ was
originally contained in ${\mathcal T}(b_2)$ or
this $\beta$ was made by Case B7 in section \ref{subsubsec:insd}.
In any case we see $\alpha \leq \beta$,
in particular $(\alpha,\beta) \ne (n,\ol{n}),(\ol{n},n)$,
because of the original arrangement of the letters
in ${\mathcal T}(b_2)$.
(Note that ${\mathcal T}(b_2)$
did not contain $n$ and $\ol{n}$ simultaneously.)
In case (ii) either this $\beta$ was
originally contained in ${\mathcal T}(b_2)$ or
this $\beta$ is an $\ol{x+1}$ which had
originally been an $\ol{x}$ in ${\mathcal T}(b_2)$ and then
transformed into $\ol{x+1}$ by Case B4 in section \ref{subsubsec:insd}.
In any case we see $\alpha \leq \beta$
and $(\alpha,\beta) \ne (n,\ol{n}),(\ol{n},n)$.
\end{proof}
Let $\mathfrak{g} = B^{(1)}_n \mbox{ or } D^{(1)}_n$
and $\ol{\mathfrak{g}} = B_n \mbox{ or } D_n$.
By neglecting zero arrows, the crystal graph of
$B_l \otimes B_k$
decomposes into $U_q(\ol{\mathfrak{g}})$ crystals,
where only arrows with indices $i=1,\ldots,n$ remain.
Let us regard $b_1 \in B_l$ as an element of
$U_q(\ol{\mathfrak{g}})$ crystal $B(l \Lambda_1)$, and
regard $b_2 \in B_k$ as an element of $B(k \Lambda_1)$.
Then $b_1 \otimes b_2$ is regarded as an element of $U_q(\ol{\mathfrak{g}})$ crystal
$B(l \Lambda_1) \otimes B(k \Lambda_1)$.
On the other hand the tableau
${\mathcal T}(b_1) *{\mathcal T}(b_2)$
specifies an element of $B(\lambda)$
which we shall denote by $b_1 * b_2$,
where $B(\lambda)$ is a $U_q(\ol{\mathfrak{g}})$ crystal
that appears in the decomposition of $B(l \Lambda_1) \otimes B(k \Lambda_1)$.
\begin{proposition}
\label{pr:crysmorpb}
The map $\psi : b_1 \otimes b_2 \mapsto b_1 * b_2$ is a
$U_q(\ol{\mathfrak{g}})$ crystal morphism, i.e. the
actions of $\tilde{e}_i$ and $\tilde{f}_i$
for $i=1,\ldots,n$
commute with the map $\psi$.
\end{proposition}
\noindent
This proposition is a special case of Proposition~\ref{pr:morphgen}.
Note that, although
we have removed the $z$ pairs of $1$'s and $\ol{1}$'s from the tableaux
by hand, this elimination of the letters
is also part of the column insertion rule (i.e.
\begin{math}
(\ol{1} \longrightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(1,1)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$1$}}
\end{picture}
) = \emptyset
\end{math}),
followed by the sliding (jeu de taquin) rules \cite{B1,B2}.
\par\noindent
\begin{proof}[Proof of Theorem \ref{th:main3}]
First we consider the isomorphism.
We are to show:
\begin{enumerate}
\item If $b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under
the isomorphism, then the product tableau ${\mathcal T}(b_1) *
{\mathcal T}(b_2)$ is equal to the product tableau
${\mathcal T}(b'_2) * {\mathcal T}(b'_1)$.
\item If $k$ and $l$ are specified, we can recover
${\mathcal T}(b'_2) $ and ${\mathcal T}(b'_1)$ from their
product tableau by using the algorithm shown in Rule \ref{rule:typeB}.
In other words, we can retrieve them once we know
the locations $i_{l'},\ldots,i_1$ of
the boxes in (\ref{eq:prodtabpic}) from
which we start the reverse bumpings.
\end{enumerate}
Claim 2 is verified from
Corollary \ref{cor:bcblxxx} or \ref{cor:cblxxx}.
We consider Claim 1 in the following.
The value of the energy function will be settled at the same time.
Thanks to the $U_q(\ol{\mathfrak{g}})$ crystal
morphism (Proposition \ref{pr:crysmorpb}),
it suffices to prove the theorem
for any element in each connected component of the
$U_q(\ol{\mathfrak{g}})$ crystal.
We take up the $U_q(\ol{\mathfrak{g}})$ highest weight element
as such a particular element.
For the special extreme $U_q(\ol{\mathfrak{g}})$ highest weight element we find
\begin{equation}
\iota : (l,0,\ldots,0) \otimes (k,0,\ldots,0)
\stackrel{\sim}{\mapsto}
(k,0,\ldots,0) \otimes (l,0,\ldots,0),
\label{eqn:ultrahighest}
\end{equation}
here the two sides are obviously mapped to each other
under the $U'_q(\mathfrak{g})$ isomorphism,
and the image of the map is also obviously
obtained by Rule \ref{rule:typeB}.
Let us assume $l \geq k$.
(The other case can be treated in a similar way.)
Suppose that $b_1 \otimes b_2 \in B_l \otimes B_k$
is a $U_q(\ol{\mathfrak{g}})$ highest weight element.
In general, it has the form:
$$b_1\otimes b_2 = (l,0,\ldots,0) \otimes (x_1,x_2,0,\ldots,0,\ol{x}_1),$$
where $x_1, x_2$ and $\ol{x}_1$ are arbitrary
as long as $k = x_1 + x_2 + \ol{x}_1$.
We are to obtain its image under the isomorphism.
Applying
\begin{eqnarray*}
&&\et{0}^{\ol{x}_1} \et{2}^{x_2+\ol{x}_1} \cdots
\et{n-1}^{x_2+\ol{x}_1} \et{n}^{2x_2+2\ol{x}_1}
\et{n-1}^{x_2+\ol{x}_1}
\cdots \et{2}^{x_2+\ol{x}_1} \et{0}^{x_2+\ol{x}_1}
\; \mbox{for $\mathfrak{g} = B^{(1)}_n$}\\
&&\et{0}^{\ol{x}_1} \et{2}^{x_2+\ol{x}_1} \cdots
\et{n-1}^{x_2+\ol{x}_1} \et{n}^{x_2+\ol{x}_1}
\et{n-2}^{x_2+\ol{x}_1}
\cdots \et{2}^{x_2+\ol{x}_1} \et{0}^{x_2+\ol{x}_1}
\; \mbox{for $\mathfrak{g} = D^{(1)}_n$}
\end{eqnarray*}
to both sides of (\ref{eqn:ultrahighest}),
we find
\begin{displaymath}
\iota :
(l,0,\ldots,0) \otimes (x_1,x_2,0,\ldots,0,\ol{x}_1)
\stackrel{\sim}{\mapsto}
(k,0,\ldots,0) \otimes (x_1',x_2,0,\ldots,0,\ol{x}_1).
\end{displaymath}
Here $x_1' = l-x_2-\ol{x}_1$.
In the course of the application of $\tilde{e}_i$'s,
the value of the energy function has changed as
$$
H\left((l,0,\ldots,0) \otimes (x_1,x_2,0,\ldots,0,\ol{x}_1)\right) =
H\left((l,0,\ldots,0) \otimes (k,0,\ldots,0)\right) - x_2 - 2 \ol{x}_1.
$$
(We have omitted the subscripts of the energy function.)
Thus according to our normalization we have
$H(b_1 \otimes b_2)=2(k- \ol{x}_1)-x_2 $.
(Note that the $z$ in Rule \ref{rule:typeB}
is now equal to $\ol{x}_1$, hence
we have $k' = k-\ol{x}_1$.)
On the other hand
for this highest element
the column insertions lead to a common tableau
\begin{displaymath}
\setlength{\unitlength}{5mm}
\begin{picture}(7,2)
\put(0,0.5){\makebox(3,1){$T^{(0)}=$}}
\put(3,2){\line(1,0){4}}
\put(3,1){\line(1,0){4}}
\put(3,0){\line(1,0){3}}
\put(3,0){\line(0,1){2}}
\put(6,0){\line(0,1){1}}
\put(7,1){\line(0,1){1}}
\put(3,1){\makebox(4,1){$1\cdots\cdots 1$}}
\put(3,0){\makebox(3,1){$2\cdots 2$}}
\end{picture}
\end{displaymath}
whose second row has length $x_2$
(and whose first row has length $l+k-x_2-2\ol{x}_1$).
Since $l \ge k$ gives $\min(l',k') = k' = k-\ol{x}_1$, and the length of the
second row gives $m = x_2$, the value $2(k-\ol{x}_1)-x_2$ obtained above
equals $2\min(l',k')-m$, in agreement with the theorem.
This completes the proof.
\end{proof}
\subsection{Examples}
\label{subsec:exBD}
\begin{example}
$B_5 \otimes B_3 \simeq B_3 \otimes B_5$ for $B^{(1)}_5$.
\begin{displaymath}
\begin{array}{ccccccc}
\setlength{\unitlength}{5mm}
\begin{picture}(5,1)(0,0.3)
\multiput(0,0)(1,0){6}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){5}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$5$}}
\put(2,0){\makebox(1,1){$\circ$}}
\put(3,0){\makebox(1,1){$\ol{5}$}}
\put(4,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(5,1)(0,0.3)
\multiput(0,0)(1,0){6}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){5}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$5$}}
\put(2,0){\makebox(1,1){$\circ$}}
\put(3,0){\makebox(1,1){$\ol{5}$}}
\put(4,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(5,1)(0,0.3)
\multiput(0,0)(1,0){6}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){5}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$5$}}
\put(2,0){\makebox(1,1){$\ol{5}$}}
\put(3,0){\makebox(1,1){$\ol{5}$}}
\put(4,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$4$}}
\put(1,0){\makebox(1,1){$4$}}
\put(2,0){\makebox(1,1){$\circ$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$\circ$}}
\put(1,0){\makebox(1,1){$\ol{5}$}}
\put(2,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(5,1)(0,0.3)
\multiput(0,0)(1,0){6}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){5}}
\put(0,0){\makebox(1,1){$4$}}
\put(1,0){\makebox(1,1){$4$}}
\put(2,0){\makebox(1,1){$5$}}
\put(3,0){\makebox(1,1){$5$}}
\put(4,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(5,1)(0,0.3)
\multiput(0,0)(1,0){6}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){5}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$\circ$}}
\put(3,0){\makebox(1,1){$\ol{5}$}}
\put(4,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$\circ$}}
\put(1,0){\makebox(1,1){$\ol{1}$}}
\put(2,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(3,1)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$1$}}
\put(1,0){\makebox(1,1){$1$}}
\put(2,0){\makebox(1,1){$\circ$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(5,1)(0,0.3)
\multiput(0,0)(1,0){6}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){5}}
\put(0,0){\makebox(1,1){$\circ$}}
\put(1,0){\makebox(1,1){$\ol{5}$}}
\put(2,0){\makebox(1,1){$\ol{5}$}}
\put(3,0){\makebox(1,1){$\ol{1}$}}
\put(4,0){\makebox(1,1){$\ol{1}$}}
\end{picture}
\end{array}
\end{displaymath}
Here we have picked three samples that are specific to type $B$.
The corresponding values of the energy function are
3, 5 and 1, respectively.
Let us illustrate in more detail the procedure of Rule \ref{rule:typeB}
by taking the first example.
Starting from the left hand side, we perform the column insertions as follows.
\begin{align*}
\ol{5} &\rightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(5,1)(0,0.3)
\multiput(0,0)(1,0){6}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){5}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$5$}}
\put(2,0){\makebox(1,1){$\circ$}}
\put(3,0){\makebox(1,1){$\ol{5}$}}
\put(4,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
\quad = \quad
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(2,1)(1,0){4}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){5}}
\put(0,0){\line(1,0){1}}
\put(0,1){\makebox(1,1){$5$}}
\put(1,1){\makebox(1,1){$5$}}
\put(2,1){\makebox(1,1){$\circ$}}
\put(3,1){\makebox(1,1){$\ol{5}$}}
\put(4,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
\\
\circ &\rightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(2,1)(1,0){4}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){5}}
\put(0,0){\line(1,0){1}}
\put(0,1){\makebox(1,1){$5$}}
\put(1,1){\makebox(1,1){$5$}}
\put(2,1){\makebox(1,1){$\circ$}}
\put(3,1){\makebox(1,1){$\ol{5}$}}
\put(4,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
\quad = \quad
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){3}{\line(0,1){2}}
\multiput(3,1)(1,0){3}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){5}}
\put(0,0){\line(1,0){2}}
\put(0,1){\makebox(1,1){$4$}}
\put(1,1){\makebox(1,1){$5$}}
\put(2,1){\makebox(1,1){$\circ$}}
\put(3,1){\makebox(1,1){$\ol{5}$}}
\put(4,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$\circ$}}
\put(1,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
\\
5 &\rightarrow
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){3}{\line(0,1){2}}
\multiput(3,1)(1,0){3}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){5}}
\put(0,0){\line(1,0){2}}
\put(0,1){\makebox(1,1){$4$}}
\put(1,1){\makebox(1,1){$5$}}
\put(2,1){\makebox(1,1){$\circ$}}
\put(3,1){\makebox(1,1){$\ol{5}$}}
\put(4,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$\circ$}}
\put(1,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
\quad = \quad
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){4}{\line(0,1){2}}
\multiput(4,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){5}}
\put(0,0){\line(1,0){3}}
\put(0,1){\makebox(1,1){$4$}}
\put(1,1){\makebox(1,1){$5$}}
\put(2,1){\makebox(1,1){$\circ$}}
\put(3,1){\makebox(1,1){$\ol{5}$}}
\put(4,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
\end{align*}
\vskip3ex
\noindent
The reverse bumping procedure goes as follows.
\begin{align*}
T^{(0)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){4}{\line(0,1){2}}
\multiput(4,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){5}}
\put(0,0){\line(1,0){3}}
\put(0,1){\makebox(1,1){$4$}}
\put(1,1){\makebox(1,1){$5$}}
\put(2,1){\makebox(1,1){$\circ$}}
\put(3,1){\makebox(1,1){$\ol{5}$}}
\put(4,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
& \\
T^{(1)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){4}{\line(0,1){2}}
\put(4,1){\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){4}}
\put(0,0){\line(1,0){3}}
\put(0,1){\makebox(1,1){$4$}}
\put(1,1){\makebox(1,1){$\circ$}}
\put(2,1){\makebox(1,1){$\ol{5}$}}
\put(3,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
&, w_1 = 5 \\
T^{(2)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){4}{\line(0,1){2}}
\multiput(0,0)(0,1){3}{\line(1,0){3}}
\put(0,1){\makebox(1,1){$4$}}
\put(1,1){\makebox(1,1){$\circ$}}
\put(2,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$\circ$}}
\put(1,0){\makebox(1,1){$\ol{5}$}}
\put(2,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
&, w_2 = 5 \\
T^{(3)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){3}{\line(0,1){2}}
\put(3,1){\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){3}}
\put(0,0){\line(1,0){2}}
\put(0,1){\makebox(1,1){$4$}}
\put(1,1){\makebox(1,1){$\circ$}}
\put(2,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$\ol{5}$}}
\put(1,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
&, w_3 = \circ \\
T^{(4)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.8)
\multiput(0,0)(1,0){2}{\line(0,1){2}}
\multiput(2,1)(1,0){2}{\line(0,1){1}}
\multiput(0,1)(0,1){2}{\line(1,0){3}}
\put(0,0){\line(1,0){1}}
\put(0,1){\makebox(1,1){$5$}}
\put(1,1){\makebox(1,1){$\circ$}}
\put(2,1){\makebox(1,1){$\ol{5}$}}
\put(0,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
&, w_4 = \ol{5} \\
T^{(5)} &=
\setlength{\unitlength}{5mm}
\begin{picture}(5,2)(0,0.3)
\multiput(0,0)(1,0){4}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){3}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$\circ$}}
\put(2,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
&, w_5 = \ol{5}
\end{align*}
Thus we obtain the right hand side.
We have $H_{B_5 B_3}=3$, since $l'=5$, $k'=3$ and $m=3$, giving
$2\min(l',k')-m = 2\cdot 3 - 3 = 3$.
\end{example}
\begin{example}
$B_2 \otimes B_1 \simeq B_1 \otimes B_2$ for $D^{(1)}_5$.
\begin{displaymath}
\begin{array}{ccccccc}
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$4$}}
\put(1,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(1,1)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$5$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(1,1)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$5$}}
\put(1,0){\makebox(1,1){$5$}}
\end{picture}
\\
& & & & & & \\
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$\ol{5}$}}
\put(1,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(1,1)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$5$}}
\end{picture}
& \stackrel{\sim}{\mapsto} &
\setlength{\unitlength}{5mm}
\begin{picture}(1,1)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$\ol{5}$}}
\end{picture}
& \otimes &
\setlength{\unitlength}{5mm}
\begin{picture}(2,1)(0,0.3)
\multiput(0,0)(1,0){3}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$4$}}
\put(1,0){\makebox(1,1){$\ol{4}$}}
\end{picture}
\end{array}
\end{displaymath}
Here we have picked up two samples that are specific to type $D$.
\end{example}
\noindent
{\bf Acknowledgements} \hspace{0.1cm}
It is a pleasure to thank T.H. Baker for many helpful discussions
and correspondence.
\end{document}
\begin{document}
\title{Distance-Uniform Graphs with Large Diameter}
\begin{abstract}
An $\epsilon$-distance-uniform graph is one in which from every vertex, all but an $\epsilon$-fraction of the remaining vertices are at some fixed distance $d$, called the critical distance. We consider the maximum possible value of $d$ in an $\epsilon$-distance-uniform graph with $n$ vertices. We show that for $\frac1n \le \epsilon \le \frac1{\log n}$, there exist $\epsilon$-distance-uniform graphs with critical distance $2^{\Omega(\frac{\log n}{\log \epsilon^{-1}})}$, disproving a conjecture of Alon et al.\ that $d$ can be at most logarithmic in $n$. We also show that our construction is best possible, in the sense that an upper bound on $d$ of the form $2^{O(\frac{\log n}{\log \epsilon^{-1}})}$ holds for all $\epsilon$ and $n$.
\end{abstract}
\section{Introduction}
We say that an $n$-vertex graph is \emph{$\epsilon$-distance-uniform} for some parameter $\epsilon>0$ if there is a value $d$, called the \emph{critical distance}, such that, for every vertex $v$, all but at most $\epsilon n$ of the other vertices are at distance exactly $d$ from $v$. Distance-uniform graphs exist for some, but not all, possible triplets $(n, \epsilon, d)$; a trivial example is the complete graph $K_n$, which is distance-uniform with $\epsilon = \frac1n$ and $d=1$. So it is natural to try to characterize which triplets $(n,\epsilon, d)$ are realizable as distance-uniform graphs.
The notion of distance uniformity was introduced by Alon, Demaine, Hajiaghayi, and Leighton in \cite{alon13}, motivated by the analysis of network creation games. It turns out that equilibria in a certain network creation game can be used to construct distance-uniform graphs. As a result, understanding distance-uniform graphs tells us which equilibria are possible.
\subsection{From network creation games to distance uniformity}
The use of the Internet has grown significantly in the last few decades. This fact has motivated theoretical studies that try to capture properties of Internet-like networks in models. Fabrikant et al.\ \cite{fabrikant03} proposed one of the first such models, the so-called \emph{sum classic network creation game} (abbreviated sum classic), from which variations (like \cite{bilo15}, \cite{ehsani15}) and extensions (like \cite{bilo15max}, \cite{brandes08}) have been considered in subsequent years.
Although these models try to capture different aspects of the Internet, all of them can be identified as \emph{strategic games}: every agent or node (every player in the game) buys some links (every player picks a strategy) in order to be connected in the network formed by all the players (the configuration formed by combining the strategies of every player), and tries to minimize a cost function modeling its needs and interests.
All these models together with their results constitute a whole subject inside game theory and computer science that stands on its own: the field of \emph{network creation games}. Some of the most relevant concepts discussed in network creation games are \emph{optimal network}, \emph{Nash equilibria} and the \emph{price of anarchy}, among others.
An optimal network is the outcome of a configuration having minimum overall cost, that is, one in which the sum of the costs of all players attains the minimum possible value. A Nash equilibrium is a configuration in which no player can strictly decrease his cost function given that the strategies of the other players are fixed. The price of anarchy quantifies the loss of efficiency between the worst Nash equilibrium (one having maximum overall cost) and an optimal network (one having minimum overall cost).
The sum classic is specified by a set of players $N = \left\{ 1,\ldots,n\right\}$ and a parameter $\alpha > 0$ representing the cost of establishing a link. Every player $i\in N$ wishes to be connected in the resulting network, and the strategy $s_i \in \mathcal{P}(N \setminus \left\{i \right\})$ represents the subset of players to which $i$ establishes links. Given the tuple of strategies $s=(s_1,\ldots,s_n)$ (called a \emph{strategy profile}), the \emph{communication network} associated to $s$, denoted $G[s]$, is defined as the undirected graph having $N$ as its set of vertices and an edge $(i,j)$ iff $i \in s_j $ or $j \in s_i$. The communication network represents the network obtained after taking into account the links bought by every node. The cost function for a strategy profile $s = (s_1,\ldots,s_n)$ has two components: the \emph{link cost} and the \emph{usage cost}. The link cost for a player $i \in N$ is $\alpha |s_i|$, and it quantifies the cost of buying $|s_i|$ links. In contrast, the usage cost for a player $i$ is $\sum_{j \neq i} d_{G[s]}(i,j)$. Therefore, the total cost incurred by player $i$ is $c_i(s)=\alpha |s_i|+\sum_{j \neq i} d_{G[s]}(i,j)$.
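To make the cost function concrete, the following Python sketch (the function names are ours and are not part of the model; it assumes the communication network is connected) computes $c_i(s)$ from a strategy profile:
\begin{verbatim}
from collections import deque

def communication_network(s):
    # undirected graph G[s]: edge (i, j) iff i in s[j] or j in s[i];
    # s maps each player to the set of players it buys links to
    adj = {i: set() for i in s}
    for i, bought in s.items():
        for j in bought:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def cost(i, s, alpha):
    # c_i(s) = alpha * |s_i| + sum of distances from i to the other players
    adj = communication_network(s)
    dist = {i: 0}
    queue = deque([i])
    while queue:                      # breadth-first search from i
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return alpha * len(s[i]) + sum(d for j, d in dist.items() if j != i)

# a path 1 - 2 - 3 in which players 1 and 2 each buy one link:
s = {1: {2}, 2: {3}, 3: set()}
print(cost(1, s, alpha=2))   # 2*1 + (1 + 2) = 5
\end{verbatim}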
On the other hand, a given undirected graph $G$ in the \emph{sum basic network creation game} (or abbreviated sum basic) is said to be in equilibrium iff, for every edge $(i,j) \in E(G)$ and every other player $k$, the player $i$ does not strictly decrease the sum of distances to the other players by swapping the edge $(i,j)$ for the edge $(i,k)$.
At first glance, the sum basic could be seen as the model obtained from the sum classic when considering only deviations that consist in swapping individual edges. However, in any Nash equilibrium for the sum classic, only one of the endpoints of an edge has bought that edge, so only that endpoint can perform a swap of it. Therefore, one must be careful when trying to translate a property or result from the sum basic to the sum classic.
In the sum classic game it has been conjectured that the price of anarchy is constant (asymptotically) for any value of $\alpha$. Until now this conjecture has been proved true for $\alpha = O(n^{1-\epsilon})$ with $\epsilon \geq 1/\log n $ (\cite{demaine12}) and for $\alpha >9n$ (\cite{alvarez17}). In \cite{demaine12} it is proved that the price of anarchy is upper bounded by the diameter of any Nash equilibrium. This is why the diameter of equilibria in the sum basic is studied.
In \cite{alon13}, the authors show that sufficiently large graph powers of an equilibrium graph in the sum basic model will result in distance-uniform graphs; if the critical distance is large, then the original equilibrium graph in the sum basic model imposed a high total cost on its nodes. In particular, it follows that if $\epsilon$-distance-uniform graphs had diameter $O(\log n)$, the diameter of equilibria for the sum basic would be at most $O(\log^3 n)$.
\subsection{Previous results on distance uniformity}
This application motivates the already natural question: in an $\epsilon$-distance-uniform graph with $n$ vertices and critical distance $d$, what is the relationship between the parameters $\epsilon$, $n$, and $d$? Specifically, can we derive an upper bound on $d$ in terms of $\epsilon$ and $n$? Up to a constant factor, this is equivalent to finding an upper bound on the diameter of the graph, which must be between $d$ and $2d$ as long as $\epsilon < \frac12$.
Random graphs provide one example of distance-uniform graphs. In \cite{bollobas81}, Bollob\'as shows that for sufficiently large $p = p(n)$, the diameter of the Erd\H{o}s--R\'enyi random graph $\mathcal G_{n,p}$ is asymptotically almost surely concentrated on one of two values. In fact, from every vertex $v$ in $\mathcal G_{n,p}$, the breadth-first search tree expands by a factor of $O(np)$ at every layer, reaching all or almost all vertices after about $\log_r n$ steps, where $r = (n-1)p$ denotes the average degree. Such a graph is also expected to be distance-uniform: the biggest layer of the breadth-first search tree will be much bigger than all previous layers.
More precisely, suppose that we choose $p(n)$ so that the average degree $r = (n-1)p$ satisfies two criteria: that $r \gg (\log n)^3$, and that for some $d$, $r^d/n - 2 \log n$ approaches a constant $C$ as $n \to \infty$. Then it follows from Lemma~3 in \cite{bollobas81} that (with probability $1-o(1)$) for every vertex $v$ in $\mathcal G_{n,p}$, the number of vertices at each distance $k < d$ from $v$ is $O(r^k)$. It follows from Theorem~6 in \cite{bollobas81} that the number of vertex pairs in $\mathcal G_{n,p}$ at distance $d+1$ from each other is Poisson with mean $\frac12 e^{-C}$, so there are only $O(1)$ such pairs with probability $1-o(1)$. As a result, such a random graph is $\epsilon$-distance-uniform with $\epsilon = O(\frac{\log n}{r})$, and critical distance $d = \log_r n + O(1)$.
This example provides a compelling image of what distance-uniform graphs look like: if the breadth-first search tree from each vertex grows at the same constant rate, then most other vertices will be reached in the same step. In any graph that is distance-uniform for a similar reason, the critical distance $d$ will be at most logarithmic in $n$. In fact, Alon et al.\ conjecture that all distance-uniform graphs have diameter $O(\log n)$.
Alon et al.\ prove an upper bound of $O(\frac{\log n}{\log \epsilon^{-1}})$ in a special case: for $\epsilon$-distance-uniform graphs with $\epsilon<\frac14$ that are Cayley graphs of Abelian groups. In this case, if $G$ is the Cayley graph of an Abelian group $A$ with respect to a generating set $S$, one form of Pl\"unnecke's inequality (see, e.g., \cite{tao06}) says that the sequence
\[
|\underbrace{S + S + \dots + S}_k|^{1/k}
\]
is decreasing in $k$. Since $S, S+S, S+S+S, \dots$ are precisely the sets of vertices which can be reached by $1, 2, 3, \dots$ steps from 0, this inequality quantifies the idea of constant-rate growth in the breadth-first search tree; Theorem~15 in~\cite{alon13} makes this argument formal.
\subsection{Our results}
In this paper, we disprove Alon et al.'s conjecture by constructing distance-uniform graphs that do not share this behavior, and whose diameter is exponentially larger than these examples. We also prove an upper bound on the critical distance (and diameter) showing our construction to be best possible in one asymptotic sense. Specifically, we show the following two results:
\begin{theorem}
\label{thm:intro-upper}
In any $\epsilon$-distance-uniform graph with $n$ vertices, the critical distance $d$ satisfies
\[
d = 2^{O\left(\frac{\log n}{\log \epsilon^{-1}}\right)}.
\]
\end{theorem}
\begin{theorem}
\label{thm:intro-lower}
For any $\epsilon$ and $n$ with $\frac1n \le \epsilon \le \frac1{\log n}$, there exists an $\epsilon$-distance-uniform graph on $n$ vertices with critical distance
\[
d = 2^{\Omega\left(\frac{\log n}{\log \epsilon^{-1}}\right)}.
\]
\end{theorem}
Note that, since a $\frac1{\log n}$-distance-uniform graph is also $\frac12$-distance-uniform, Theorem~\ref{thm:intro-lower} also provides a lower bound of $d = 2^{\Omega(\frac{\log n}{\log \log n})}$ for any $\epsilon > \frac1{\log n}$.
Combined, these results prove that the maximum critical distance is $2^{\Theta(\frac{\log n}{\log \epsilon^{-1}})}$ whenever they both apply. A small gap remains for sufficiently large $\epsilon$: for example when $\epsilon$ is constant as $n \to \infty$. In this case, Theorem~\ref{thm:intro-upper} gives an upper bound on $d$ which is polynomial in $n$, while the lower bound of Theorem~\ref{thm:intro-lower} grows slower than any polynomial.
The family of graphs used to prove Theorem~\ref{thm:intro-lower} is interesting in its own right. We give two different interpretations of the underlying structure of these graphs. First, we describe a combinatorial game, generalizing the well-known Tower of Hanoi puzzle, whose transition graph is $\epsilon$-distance-uniform and has large diameter. Second, we give a geometric interpretation, under which each graph in the family is the skeleton of the convex hull of an arrangement of points on a high-dimensional sphere.
\section{Upper bound}
\newcommand{\Nexact}[2]{\Gamma_{#1}(#2)}
\newcommand{\Natmost}[2]{N_{#1}(#2)}
For a vertex $v$ of a graph $G$, let $\Nexact{r}{v}$ denote the set $\{w \in V(G) \mid d(v,w) = r\}$: the vertices at distance exactly $r$ from $v$. In particular, $\Nexact{0}{v} = \{v\}$ and $\Nexact{1}{v}$ is the set of all vertices adjacent to $v$. Let
\[
\Natmost{r}{v} = \bigcup_{i=0}^r \Nexact{i}{v}
\]
denote the set of vertices within distance at most $r$ from $v$.
Before proceeding to the proof of Theorem~\ref{thm:intro-upper}, we begin with a simple argument that is effective for an $\epsilon$ which is very small:
\begin{lemma}
\label{lemma:min-degree}
The minimum degree $\delta(G)$ of an $\epsilon$-distance-uniform graph $G$ satisfies $\delta(G) \ge \epsilon^{-1} - 1$.
\end{lemma}
\begin{proof}
Suppose that $G$ is $\epsilon$-distance-uniform, $n$ is the number of vertices of $G$, and $d$ is the critical distance: for any vertex $v$, at least $(1-\epsilon)n$ vertices of $G$ are at distance exactly $d$ from $v$.
Let $v$ be an arbitrary vertex of $G$, and fix an arbitrary breadth-first search tree $T$, rooted at $v$. We define the \emph{score} of a vertex $w$ (relative to $T$) to be the number of vertices at distance $d$ from $v$ which are descendants of $w$ in the tree $T$.
There are at least $(1-\epsilon)n$ vertices at distance $d$ from $v$, and all of them are descendants of some vertex in the neighborhood $\Nexact{1}{v}$. Therefore the total score of all vertices in $\Nexact{1}{v}$ is at least $(1-\epsilon)n$.
On the other hand, if $w \in \Nexact{1}{v}$, each vertex counted by the score of $w$ is at distance $d-1$ from $w$. Since at least $(1-\epsilon)n$ vertices are at distance $d$ from $w$, at most $\epsilon n$ vertices are at distance $d-1$, and therefore the score of $w$ is at most $\epsilon n$.
In order for $|\Nexact{1}{v}|$ scores of at most $\epsilon n$ to sum to at least $(1-\epsilon)n$, $|\Nexact{1}{v}|$ must be at least $\frac{(1-\epsilon)n}{\epsilon n} = \epsilon^{-1} - 1$.
\end{proof}
This lemma is enough to show that in a $\frac1{\sqrt n}$-distance-uniform graph, the critical distance is at most $2$. Choose a vertex $v$: all but at most $\sqrt n$ of the vertices of $G$ are at the critical distance $d$ from $v$, and at least $\sqrt n - 1$ of the vertices are at distance $1$ from $v$ by Lemma~\ref{lemma:min-degree}. The remaining uncounted vertex is $v$ itself. It is impossible to have $d \ge 3$, as that would leave no vertices at distance $2$ from $v$.
For larger $\epsilon$, the bound of Lemma~\ref{lemma:min-degree} becomes ineffective, but we can improve it by a more general argument of which Lemma~\ref{lemma:min-degree} is just a special case.
\begin{lemma}
\label{lemma:arnau}
Let $G$ be an $\epsilon$-distance-uniform graph with critical distance $d$. Suppose that for some $r$ with $2r+1 \le d$, we have $|\Natmost{r}{v}| \ge N$ for each $v \in V(G)$. Then we have $|\Natmost{3r+1}{v}| \ge N\epsilon^{-1}$ for each $v \in V(G)$.
\end{lemma}
\begin{proof}
Let $v$ be any vertex of $G$, and let $\{w_1, w_2, \dots, w_t\}$ be a maximal collection of vertices in $\Nexact{2r+1}{v}$ such that $d(w_i, w_j) \ge 2r+1$ for each $i \ne j$ with $1 \le i,j \le t$.
We claim that for each vertex $u \in \Nexact{d}{v}$---for each vertex $u$ at the critical distance from $v$---there is some $i$ with $1 \le i \le t$ such that $u \in \Natmost{d-1}{w_i}$. To see this, consider any shortest path from $v$ to $u$, and let $u_\pi \in \Nexact{2r+1}{v}$ be the $(2r+1)$\textsuperscript{th} vertex along this path. (Here we use the assumption that $2r+1 \le d$.) From the maximality of $\{w_1, w_2, \dots, w_t\}$, it follows that $d(w_i, u_\pi) \le 2r$ for some $i$ with $1 \le i \le t$. But then,
\[
d(w_i, u) \le d(w_i, u_\pi) + d(u_\pi, u) \le 2r + (d - 2r-1) = d-1.
\]
So $u \in \Natmost{d-1}{w_i}$.
To state this claim differently, the sets $\Natmost{d-1}{w_1}, \dots, \Natmost{d-1}{w_t}$ together cover $\Nexact{d}{v}$. These sets are all small while the set they cover is large, so there must be many of them:
\[
(1-\epsilon)n \le |\Nexact{d}{v}| \le \sum_{i=1}^t |\Natmost{d-1}{w_i}| \le \sum_{i=1}^t \epsilon n = t \epsilon n,
\]
which implies that $t \ge \frac{(1-\epsilon)n}{\epsilon n} = \epsilon^{-1} - 1$.
The vertices $v, w_1, w_2, \dots, w_t$ are each at distance at least $2r+1$ from each other, so the sets $\Natmost{r}{v}, \Natmost{r}{w_1}, \dots, \Natmost{r}{w_t}$ are disjoint.
By the hypothesis of this lemma, each of these sets has size at least $N$, and we have shown that there are at least $\epsilon^{-1}$ sets. So their union has size at least $N\epsilon^{-1}$. Their union is contained in $\Natmost{3r+1}{v}$, so we have $|\Natmost{3r+1}{v}| \ge N\epsilon^{-1}$, as desired.
\end{proof}
We are now ready to prove Theorem~\ref{thm:intro-upper}. The strategy is to realize that the lower bounds on $|\Natmost{r}{v}|$, which we get from Lemma~\ref{lemma:arnau}, are also lower bounds on $n$, the number of vertices in the graph. By applying Lemma~\ref{lemma:arnau} iteratively for as long as we can, we can get a lower bound on $n$ in terms of $\epsilon$ and $d$, which translates into an upper bound on $d$ in terms of $\epsilon$ and $n$.
More precisely, set $r_1 = 1$ and $r_k = 3r_{k-1} + 1$, a recurrence which has closed-form solution $r_k = \frac{3^k - 1}{2}$. Lemma~\ref{lemma:min-degree} tells us that in an $\epsilon$-distance-uniform graph $G$ with critical distance $d$, $|\Natmost{r_1}{v}| \ge \epsilon^{-1}$. Lemma~\ref{lemma:arnau} is the inductive step: if, for all $v$, $|\Natmost{r_k}{v}| \ge \epsilon^{-k}$, then $|\Natmost{r_{k+1}}{v}| \ge \epsilon^{-(k+1)}$, as long as $2r_k + 1 \le d$.
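(The closed form can be checked by adding $\frac12$ to both sides of the recurrence: $r_k + \frac12 = 3\left(r_{k-1} + \frac12\right) = \cdots = 3^{k-1}\left(r_1 + \frac12\right) = \frac{3^k}{2}$, so $r_k = \frac{3^k-1}{2}$.)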
The largest $k$ for which $2r_k + 1 \le d$ is $k = \floor{\log_3 d}$. So we can inductively prove that
\[
n \ge |\Natmost{r_{k+1}}{v}| \ge \epsilon^{-(\floor{\log_3 d} + 1)}
\]
which can be rearranged to get
\[
\frac{\log n}{\log \epsilon^{-1}} -1 \ge \floor{\log_3 d}.
\]
This implies that
\[
d \le 3^{\frac{\log n}{\log \epsilon^{-1}}} = 2^{O\left(\frac{\log n}{\log \epsilon^{-1}}\right)},
\]
proving Theorem~\ref{thm:intro-upper}.
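As a quick sanity check, for $\epsilon = \frac1{\sqrt n}$ we have $\log \epsilon^{-1} = \frac12 \log n$, so this bound gives $d \le 3^2 = 9$; this is consistent with, though weaker than, the direct argument following Lemma~\ref{lemma:min-degree}, which gave $d \le 2$ in that case.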
\section{Lower bound}
To show that this bound on $d$ is tight, we need to construct an $\epsilon$-distance-uniform graph with a large critical distance $d$. We do this by defining a puzzle game whose state graph has this property.
\subsection{The Hanoi game}
We define a \emph{Hanoi state} to be a finite sequence of nonnegative integers $\vec x = (x_1, x_2, \dots, x_k)$ such that, for all $i > 1$, $x_i \ne x_{i-1}$. Let
\[
{\mathcal H}_{r,k} = \big\{ \vec x \in \{0,1,\dots, r\}^k : \vec x \mbox{ is a Hanoi state}\big\}.
\]
For convenience, we also define a \emph{proper Hanoi state} to be a Hanoi state $\vec x$ with $x_1 \ne 0$, and ${\mathcal H}_{r,k}^* \subset {\mathcal H}_{r,k}$ to be the set of all proper Hanoi states. While everything we prove will be equally true for Hanoi states and proper Hanoi states, it is more convenient to work with ${\mathcal H}_{r,k}^*$, because $|{\mathcal H}_{r,k}^*| = r^k$.
In the \emph{Hanoi game on ${\mathcal H}_{r,k}$}, an initial state $\vec a \in {\mathcal H}_{r,k}$ and a final state $\vec b \in {\mathcal H}_{r,k}$ are chosen. The state $\vec a$ must be transformed into $\vec b$ via a sequence of moves of two types:
\begin{enumerate}
\item An \emph{adjustment} of $\vec x \in {\mathcal H}_{r,k}$ changes $x_k$ to any value in $\{0,1,\dots, r\}$ other than $x_{k-1}$. For example, $(1,2,3,4)$ can be changed to $(1,2,3,0)$ or $(1,2,3,5)$, but not $(1,2,3,3)$.
\item An \emph{involution} of $\vec x \in {\mathcal H}_{r,k}$ finds the longest tail segment of $\vec x$ on which the values $x_k$ and $x_{k-1}$ alternate, and swaps $x_k$ with $x_{k-1}$ in that segment. For example, $(1,2,3,4)$ can be changed to $(1,2,4,3)$, or $(1,2,1,2)$ to $(2,1,2,1)$.
\end{enumerate}
We define the Hanoi game on ${\mathcal H}_{r,k}^*$ in the same way, but with the added requirement that all states involved should be proper Hanoi states. This means that involutions (or, in the case of $k=1$, adjustments) that would change $x_1$ to $0$ are forbidden.
The name ``Hanoi game'' is justified because its structure is similar to the structure of the classical Tower of Hanoi puzzle. In fact, though we have no need to prove this, the Hanoi game on ${\mathcal H}_{3,k}^*$ is isomorphic to a Tower of Hanoi puzzle with $k$ disks.
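For concreteness, the two types of moves can be implemented directly from the definitions above. In the following Python sketch, the helper names are ours, states are represented as tuples, we assume $k \ge 2$, and arbitrary (not necessarily proper) Hanoi states are allowed; the sketch lists the states reachable from $\vec x$ in one move.
\begin{verbatim}
def adjustments(x, r):
    # change the last entry to any value in {0, ..., r} other than x[-2]
    # (the no-op of keeping the last entry unchanged is omitted)
    return [x[:-1] + (v,) for v in range(r + 1)
            if v != x[-2] and v != x[-1]]

def involution(x):
    # swap a = x[-2] and b = x[-1] along the longest tail segment
    # on which the entries alternate between a and b
    a, b = x[-2], x[-1]
    i = len(x) - 1
    while i > 0 and {x[i - 1], x[i]} == {a, b}:
        i -= 1
    return x[:i] + tuple(a if v == b else b for v in x[i:])

def neighbours(x, r):
    # all states reachable from the Hanoi state x in one move
    return adjustments(x, r) + [involution(x)]
\end{verbatim}
For example, \texttt{involution((1,2,3,4))} returns \texttt{(1,2,4,3)} and \texttt{involution((1,2,1,2))} returns \texttt{(2,1,2,1)}, matching the examples above.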
It is well-known that the $k$-disk Tower of Hanoi puzzle can be solved in $2^k-1$ moves, moving a stack of $k$ disks from one peg to another. In \cite{hinz92}, a stronger statement is shown: only $2^k-1$ moves are required to go from any initial state to any final state. A similar result holds for the Hanoi game on ${\mathcal H}_{r,k}$:
\begin{lemma}
\label{lemma:hanoi-diameter}
The Hanoi game on ${\mathcal H}_{r,k}$ (or ${\mathcal H}_{r,k}^*$) can be solved in at most $2^k-1$ moves for any initial state $\vec a$ and final state $\vec b$.
\end{lemma}
\begin{proof}
We induct on $k$ to show the following stronger statement: for any initial state $\vec a$ and final state $\vec b$, a solution of length at most $2^k-1$ exists for which any intermediate state $\vec x$ has $x_1 = a_1$ or $x_1 = b_1$. This auxiliary condition also means that if $\vec a, \vec b \in {\mathcal H}_{r,k}^*$, all intermediate states will also stay in ${\mathcal H}_{r,k}^*$.
When $k=1$, a single adjustment suffices to change $\vec a$ to $\vec b$, which satisfies the auxiliary condition.
For $k>1$, there are two possibilities when changing $\vec a $ to $\vec b$:
\begin{itemize}
\item If $a_1 = b_1$, then consider the Hanoi game on ${\mathcal H}_{r,k-1}$ with initial state $(a_2, a_3, \dots, a_k)$ and final state $(b_2, b_3, \dots, b_k)$. By the inductive hypothesis, a solution using at most $2^{k-1} - 1$ moves exists.
Apply the same sequence of adjustments and involutions in ${\mathcal H}_{r,k}$ to the initial state $\vec a$. This has the effect of changing the last $k-1$ entries of $\vec a$ to $(b_2, b_3, \dots, b_k)$. To check that we have obtained $\vec b$, we need to verify that the first entry is left unchanged.
The auxiliary condition of the inductive hypothesis tells us that all intermediate states have $x_2 = a_2$ or $x_2 = b_2$. Any move that leaves $x_2$ unchanged also leaves $x_1$ unchanged. A move that changes $x_2$ must be an involution swapping the values $a_2$ and $b_2$; however, $x_1 = a_1 \ne a_2$, and $x_1 = b_1 \ne b_2$, so such an involution also leaves $x_1$ unchanged.
Finally, the new auxiliary condition is satisfied, since we have $x_1 = a_1 = b_1$ for all intermediate states.
\item If $a_1 \ne b_1$, begin by taking $2^{k-1}-1$ moves to change $\vec a$ to $(a_1, b_1, a_1, b_1, \dots)$ while satisfying the auxiliary condition, as in the first case.
An involution takes this state to $(b_1, a_1, b_1, a_1, \dots)$; this continues to satisfy the auxiliary condition.
Finally, $2^{k-1}-1$ more moves change this state to $\vec b$, as in the first case, for a total of $2^k-1$ moves.\qedhere
\end{itemize}
\end{proof}
If we obtain the same results as in the standard Tower of Hanoi puzzle, why use the more complicated game in the first place? The reason is that in the classical puzzle, we cannot guarantee that every starting state has a final state $2^k-1$ moves away. With the rules we define, as long as the parameters are chosen judiciously, each state $\vec a \in {\mathcal H}_{r,k}$ is part of many pairs $(\vec a, \vec b)$ for which the Hanoi game requires $2^k-1$ moves to solve.
The following lemma almost certainly does not characterize such pairs, but provides a simple sufficient condition that is strong enough for our purposes.
\begin{lemma}
\label{lemma:hanoi-game}
The Hanoi game on ${\mathcal H}_{r,k}$ (or ${\mathcal H}_{r,k}^*$) requires exactly $2^k-1$ moves to solve if $\vec a$~and~$\vec b$ are chosen with disjoint support: that is, $a_i \ne b_j$ for all $i$ and $j$.
\end{lemma}
\begin{proof}
Since Lemma~\ref{lemma:hanoi-diameter} proved an upper bound of $2^k-1$ for all pairs $(\vec a, \vec b)$, we only need to prove a lower bound in this case.
Once again, we induct on $k$. When $k=1$, a single move is necessary to change $\vec a$ to $\vec b$ if $\vec a \ne \vec b$, verifying the base case.
Consider a pair $\vec a, \vec b \in {\mathcal H}_{r,k}$ with disjoint support, for $k > 1$. Moreover, assume that $\vec a$ and $\vec b$ are chosen so that, of all pairs with disjoint support, $\vec a$ and $\vec b$ require the least number of moves to solve the Hanoi game. (Since we are proving a lower bound on the number of moves necessary, this assumption is made without loss of generality.)
In a shortest path from $\vec a$ to $\vec b$, every other move is an adjustment: if there were two consecutive adjustments, the first adjustment could be skipped, and if there were two consecutive involutions, they would cancel out and both could be omitted. Moreover, the first move is an adjustment: if we began with an involution, then the involution of $\vec a$ would be a state closer to $\vec b$ yet still with disjoint support to $\vec b$, contrary to our initial assumption. By the same argument, the last move must be an adjustment.
Given a state $\vec x \in {\mathcal H}_{r,k}$, let its \emph{abbreviation} be $\vec x' = (x_1, x_2, \dots, x_{k-1}) \in {\mathcal H}_{r,k-1}$. An adjustment of $\vec x$ has no effect on $\vec x'$, since only $x_k$ is changed. If $x_k \ne x_{k-2}$, then an involution of $\vec x$ is an adjustment of $\vec x'$, changing its last entry $x_{k-1}$ to $x_k$. Finally, if $x_k = x_{k-2}$, then an involution of $\vec x$ is also an involution of $\vec x'$.
Therefore, if we take a shortest path from $\vec a$ to $\vec b$, omit all adjustments, and then abbreviate all states, we obtain a solution to the Hanoi game on ${\mathcal H}_{r,k-1}$ that takes $\vec a'$ to $\vec b'$. By the inductive hypothesis, this solution contains at least $2^{k-1} - 1$ moves, since $\vec a'$ and $\vec b'$ have disjoint support. Therefore the shortest path from $\vec a$ to $\vec b$ contains at least $2^{k-1}-1$ involutions. Since the first, last, and every other move is an adjustment, there must be $2^{k-1}$ adjustments as well, for a total of $2^k-1$ moves.
\end{proof}
Now let the \emph{Hanoi graph $G_{r,k}^*$} be the graph with vertex set ${\mathcal H}_{r,k}^*$ and edges joining each state to all the states that can be obtained from it by a single move. Since an adjustment can be reversed by another adjustment, and an involution is its own inverse, $G_{r,k}^*$ is an undirected graph.
For any state $\vec a \in {\mathcal H}_{r,k}^*$, there are at least $(r-k)^k$ states whose support is disjoint from that of $\vec a$, out of $|{\mathcal H}_{r,k}^*| = r^k$ states in total, forming at least a $\left(1 - \frac{k}{r}\right)^k \ge 1 - \frac{k^2}{r}$ fraction of all the states (by Bernoulli's inequality). By Lemma~\ref{lemma:hanoi-game}, each such state $\vec b$ is at distance $2^k-1$ from $\vec a$ in the graph $G_{r,k}^*$, so $G_{r,k}^*$ is $\epsilon$-distance-uniform with $\epsilon = \frac{k^2}{r}$, $n = r^k$ vertices, and critical distance $d = 2^k-1$.
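As a sanity check, not needed for any of the proofs, Lemma~\ref{lemma:hanoi-game} can be verified on small instances by a breadth-first search in $G_{r,k}^*$, reusing the \texttt{neighbours} sketch above and discarding improper states:
\begin{verbatim}
from collections import deque

def hanoi_distance(a, b, r):
    # BFS distance from a to b in the Hanoi graph on proper states
    dist = {a: 0}
    queue = deque([a])
    while queue:
        x = queue.popleft()
        if x == b:
            return dist[x]
        for y in neighbours(x, r):
            if y[0] != 0 and y not in dist:   # keep only proper states
                dist[y] = dist[x] + 1
                queue.append(y)
    return None

# (1, 2) and (3, 4) have disjoint support, so by the lemma above
# their distance is 2**2 - 1 = 3:
print(hanoi_distance((1, 2), (3, 4), 6))
\end{verbatim}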
Having established the graph-theoretic properties of $G_{r,k}^*$, we now prove Theorem~\ref{thm:intro-lower} by analyzing the asymptotic relationship between these parameters.
\begin{proof}[Proof of Theorem~\ref{thm:intro-lower}]
Begin by assuming that $n = 2^{2^m}$ for some $m$. Choose $a$ and $b$ such that $a+b=m$ and
\[
\frac{2^{2b}}{2^{2^a}} \le \epsilon < \frac{2^{2(b+1)}}{2^{2^{a-1}}},
\]
which is certainly possible since $\frac{2^0}{2^{2^m}} = \frac1n \le \epsilon$ and $\frac{2^{2m}}{2^{2^0}} > 1 \ge \epsilon$. Setting $r = 2^{2^a}$ and $k = 2^b$, the Hanoi graph $G_{r,k}^*$ has $n$ vertices and is $\epsilon$-distance uniform, since $\frac{k^2}{r} \le \epsilon$. Moreover, our choice of $a$~and~$b$ guarantees that $\epsilon < \frac{4k^2}{\sqrt{r}}$, or $\log \epsilon^{-1} \ge \frac12 \log r - 2 \log 2k$. Since $n = r^k$, $\log n = k \log r$, so
\[
\log \epsilon^{-1} \ge \frac{1}{2k} \log n - 2 \log 2k.
\]
We show that $k \ge \frac{\log n}{6 \log \epsilon^{-1}}$. Since $\epsilon \le \frac1{\log n}$, this is automatically true if $k \ge \frac{\log n}{6 \log \log n}$, so assume that $k < \frac{\log n}{6 \log \log n}$. Then
\[
\frac{1}{3k} \log n > 2 \log \log n > 2 \log 2k,
\]
so
\[
\log \epsilon^{-1} \ge \frac1{2k} \log n - 2 \log 2k > \frac{1}{2k} \log n - \frac1{3k} \log n = \frac1{6k} \log n,
\]
which gives us the desired inequality $k \ge \frac{\log n}{6 \log \epsilon^{-1}}$. The Hanoi graph $G_{r,k}^*$ has critical distance $d = 2^k - 1 = 2^{\Omega(\frac{\log n}{\log \epsilon^{-1}})}$, so the proof is finished in the case that $n$ has the form $2^{2^m}$ for some $m$.
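(For illustration: with $n = 2^{16}$ and $\epsilon = 2^{-4} = \frac1{\log n}$ one may take $a=3$ and $b=1$, that is, $r = 2^{2^3} = 256$ and $k = 2$; indeed $\frac{2^{2b}}{2^{2^a}} = 2^{-6} \le \epsilon < 1 = \frac{2^{2(b+1)}}{2^{2^{a-1}}}$. The graph $G_{256,2}^*$ has $256^2 = 2^{16}$ vertices, is $\frac{k^2}{r} = \frac1{64}$-distance-uniform, hence $\epsilon$-distance-uniform, and has critical distance $2^2 - 1 = 3$.)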
For a general $n$, we can choose $m$ such that $2^{2^m} \le n < 2^{2^{m+1}} = \left(2^{2^m}\right)^2$, which means in particular that $2^{2^m} \ge \sqrt n$. If $\epsilon < \frac{2}{\sqrt n}$, then the requirement of a critical distance of $2^{\Omega(\frac{\log n}{\log \epsilon^{-1}})}$ is only a constant lower bound, and we may take the graph $K_n$. Otherwise, by the preceding argument, there is a $\frac{\epsilon}{2}$-distance-uniform Hanoi graph with $2^{2^m}$ vertices; its critical distance $d$ satisfies
\[
d \ge 2^{\Omega\left(\frac{\log \sqrt{n}}{\log (\epsilon/2)^{-1}}\right)} = 2^{\Omega\left(\frac{\log n}{\log \epsilon^{-1}}\right)}.
\]
To extend this to an $n$-vertex graph, take the blow-up of the $2^{2^m}$-vertex Hanoi graph, replacing every vertex by either $\lfloor n/2^{2^m} \rfloor$ or $\lceil n/2^{2^m} \rceil$ copies.
Whenever $v$ and $w$ were at distance $d$ in the original graph, the copies of $v$ and $w$ will be at distance $d$ in the blow-up. The difference between floor and ceiling may slightly ruin distance uniformity, but the graph started out $\frac{\epsilon}{2}$-distance-uniform, and $\lceil n/2^{2^m} \rceil$ differs from $\lfloor n/2^{2^m} \rfloor$ at most by a factor of 2. Even in the worst case, where for some vertex $v$ the $\frac{\epsilon}{2}$-fraction of vertices not at distance $d$ from $v$ all receive the larger number of copies, the resulting $n$-vertex graph will be $\epsilon$-distance-uniform.
\end{proof}
\subsection{Points on a sphere}
In this section, we identify $G_{r,k}$, the graph of the Hanoi game on ${\mathcal H}_{r,k}$ (defined just as $G_{r,k}^*$, but on all Hanoi states rather than only the proper ones), with a graph that arises from a geometric construction.
Fix a dimension $r$. We begin by placing $r+1$ points on the $r$-dimensional unit sphere arbitrarily in general position (though, for the sake of symmetry, we may place them at the vertices of an equilateral $r$-simplex). We identify these points with a graph by taking the 1-skeleton of their convex hull. In this starting configuration, we simply get $K_{r+1}$.
Next, we define a truncation operation on a set of points on the $r$-sphere. Let $\delta>0$ be sufficiently small that a sphere of radius $1-\delta$, concentric with the unit sphere, intersects each edge of the 1-skeleton in two points. The set of these intersection points is the new arrangement of points obtained by the truncation; they all lie on the smaller sphere, and for convenience, we may scale them so that they are once again on the unit sphere. An example of this is shown in Figure~\ref{fig:truncation}.
\begin{figure}
\centering
% graphics omitted in this version: (a) a tetrahedron, (b) a truncated tetrahedron
\caption{An example of truncation: a tetrahedron and a truncated tetrahedron.}
\label{fig:truncation}
\end{figure}
\begin{prop}
Starting with a set of $r+1$ points on the $r$-dimensional sphere and applying $k-1$ truncations produces a set of points such that the 1-skeleton of their convex hull is isomorphic to the graph $G_{r,k}$.
\end{prop}
\begin{proof}
We induct on $k$. When $k=1$, no truncation has been applied and the graph we get is $K_{r+1}$, which is isomorphic to $G_{r,1}$.
From the geometric side, we add an auxiliary statement to the induction hypothesis: given points $p, q_1, q_2$ such that, in the associated graph, $p$ is adjacent to both $q_1$ and $q_2$, there is a 2-dimensional face of the convex hull containing all three points. This is easily verified for $k=1$.
Assuming that the induction hypotheses are true for $k-1$, fix an isomorphism of $G_{r,k-1}$ with the set of points after $k-2$ truncations, and label the points with the corresponding vertices of $G_{r,k-1}$. We claim that the graph produced after one more truncation has the following structure:
\begin{enumerate}
\item A vertex that we may label $(\vec x, \vec y)$ for every ordered pair of adjacent vertices of $G_{r,k-1}$.
\item An edge between $(\vec x, \vec y)$ and $(\vec y, \vec x)$.
\item An edge between $(\vec x, \vec y)$ and $(\vec x, \vec z)$ whenever both are vertices of the new graph.
\end{enumerate}
The first claim is immediate from the definition of truncation: we obtain two vertices from the edge between $\vec x$ and $\vec y$. We choose to give the name $(\vec x, \vec y)$ to the vertex closer to $\vec x$. The edge between $\vec x$ and $\vec y$ remains an edge, and now joins the vertices $(\vec x, \vec y)$ and $(\vec y, \vec x)$, verifying the second claim.
By the auxiliary condition of the induction hypothesis, the vertices labeled $\vec x$, $\vec y$, and $\vec z$ lie on a common 2-face whenever $\vec x$ is adjacent to both $\vec y$ and $\vec z$. After truncation, $(\vec x, \vec y)$ and $(\vec x, \vec z)$ will also be on this 2-face; since they are adjacent along the boundary of that face, and extreme points of the convex hull, they are joined by an edge, verifying the third claim.
To finish the geometric part of the proof, we verify that the auxiliary condition remains true. There are two cases to check. For a vertex labeled $(\vec x, \vec y)$, if we choose the neighbors $(\vec x, \vec z)$ and $(\vec x, \vec w)$, then any two of them are joined by an edge, and therefore they must lie on a common 2-dimensional face. If we choose the neighbors $(\vec x, \vec z)$ and $(\vec y, \vec x)$, then the points continue to lie on the 2-dimensional face inherited from the face through $\vec x$, $\vec y$, and $\vec z$ of the previous convex hull.
Now it remains to construct an isomorphism between the 1-skeleton graph of the truncation, which we will call $T$, and $G_{r,k}$. We identify the vertex $(\vec x, \vec y)$ of $T$ with the vertex $(x_1, x_2, \dots, x_{k-1}, y_{k-1})$ of $G_{r,k}$. Since $x_{k-1} \ne y_{k-1}$ after any move in the Hanoi game, this $k$-tuple really is a Hanoi state. Conversely, any Hanoi state $\vec z \in {\mathcal H}_{r,k}$ corresponds to a vertex of $T$: let $\vec x = (z_1, z_2, \dots, z_{k-1})$, and let $\vec y$ be the state obtained from $\vec x$ by either an adjustment of $z_{k-1}$ to $z_k$, if $z_k \ne z_{k-2}$, or else an involution, if $z_k = z_{k-2}$. Therefore the map we define is a bijection between the vertex sets.
Both $T$ and $G_{r,k}$ are $r$-regular graphs, so it suffices to show that each edge of $T$ corresponds to an edge in $G_{r,k}$. Consider an edge joining $(\vec x, \vec y)$ with $(\vec x, \vec z)$ in $T$. This corresponds to vertices $(x_1, x_2, \dots, x_{k-1}, y_{k-1})$ and $(x_1, x_2, \dots, x_{k-1}, z_{k-1})$ in $G_{r,k}$; these are adjacent, since we can obtain one from the other by an adjustment.
Next, consider an edge joining $(\vec x, \vec y)$ to $(\vec y, \vec x)$. If $\vec x$ and $\vec y$ are related by an adjustment in $G_{r,k-1}$, then they have the form $(x_1, \dots, x_{k-2}, x_{k-1})$ and $(x_1, \dots, x_{k-2}, y_{k-1})$. The vertices corresponding to $(\vec x, \vec y)$ and $(\vec y, \vec x)$ in $G_{r,k}$ are $(x_1, \dots, x_{k-2}, x_{k-1}, y_{k-1})$ and $(x_1, \dots, x_{k-2}, y_{k-1}, x_{k-1})$, and one can be obtained from the other by an involution.
Finally, if $\vec x$ and $\vec y$ are related by an involution in $G_{r,k-1}$, then that involution swaps $x_{k-1}$ and $y_{k-1}$. Therefore such an involution in $G_{r,k}$ will take $(x_1, \dots, x_{k-1}, y_{k-1})$ to $(y_1, \dots, y_{k-1}, x_{k-1})$, and the vertices corresponding to $(\vec x, \vec y)$ and $(\vec y, \vec x)$ are adjacent in $G_{r,k}$.
\end{proof}
\end{document} |
\begin{document}
\begin{center}
\textsc{{ \Large A fractional order cubic differential equation. Applications to natural resource management}\\[0pt]}
\hspace{0.2cm} Melani Barrios$^{1,2}$ \hspace{0.5cm} Gabriela Reyero$^{1}$ \hspace{0.5cm} Mabel Tidball$^{3}$ \\[0pt]
\end{center}
\scriptsize
$\,^{1}$ Departamento de Matem\'atica, Facultad de Ciencias Exactas, Ingeniería y Agrimensura, Universidad Nacional de Rosario, Avda. Pellegrini $250$, S$2000$BTP Rosario, Argentina.
$\,^{2}$ CONICET, Departamento de Matem\'atica, Facultad de Ciencias Exactas, Ingeniería y Agrimensura, Universidad Nacional de Rosario, Avda. Pellegrini $250$, S$2000$BTP Rosario, Argentina.
$\,^{3}$ CEE-M, Universidad de Montpellier, CNRS, INRA, SupAgro, Montpellier, France.\\
\normalsize
Correspondence should be addressed to [email protected]\\
\begin{abstract}
An analysis of a fractional cubic differential equation, which generalizes different versions of fractional logistic equations, is presented, with the aim of obtaining simple numerical methods that unify and extend results already available and allow a comparison between several models. The existence and uniqueness of the solution of the fractional cubic problem subject to an initial value are demonstrated, a stability analysis is performed and, based on the implementation of numerical methods, a comparison is made between the different logistic models.
\end{abstract}
\textbf{Keywords} Fractional derivatives and integrals, Fractional differential equations, Environmental economics, Stability.\\
\textbf{Mathematics Subject Classification} 26A33, 34A08, 34D20, 91B76.
\section{Introduction}
Predicting the future size of a population is one of the most important requirements for its good management. This problem has been treated by several well-known methods, one of them being the development of a mathematical model that describes the population growth. The model generally takes the form of a differential equation, or a system of differential equations, according to the complexity of the underlying properties of the population. The most widely used growth models are those whose solution is sigmoid in time, including the Gompertz and Verhulst (logistic) equations. The logistic equation is commonly used in population growth models, epidemic disease propagation, and social networks \cite{AmNuAnSu, Cl}.
Motivated by its applications in different scientific areas (electricity, magnetism, mechanics, fluid dynamics, medicine, etc. \cite{Alm, BaRe2, GoRe, Goo, Hil, Kil}), fractional calculus has experienced great growth in recent decades. The fractional derivative is a nonlocal operator \cite{Die, Pod}, which makes fractional differential equations good candidates for modeling situations in which it is important to take into account the history of the phenomenon under study \cite{FeSa}, unlike models with the classical derivative, where this history plays no role. There are several definitions of fractional derivatives; the most commonly used are the Riemann-Liouville and the Caputo fractional derivatives. It is worth noting that, while the Riemann-Liouville fractional derivative \cite{Old} is historically the most studied approach to fractional calculus, the Caputo fractional derivative is more popular among physicists and scientists, because the formulation of initial value problems with this type of derivative is closer to the classical formulation.
Different fractional versions of the logistic equation have been proposed, among them the usual fractional logistic equation \cite{EsEmEs, KSQB} and the fractional logistic equation with the Allee effect \cite{SyMaSh}. In this paper, the usual fractional logistic equation and the one with the Allee effect are analyzed, considering in both cases the harvest of the studied resource. To this end, a fractional cubic equation, which generalizes these different versions of fractional logistic equations, is studied. In the first part of the work, the existence and uniqueness of the solution of the fractional cubic problem subject to an initial value is proved; in the second part a stability analysis is performed; and finally, from the implementation of numerical methods, a comparison between the different logistic models is presented.
\section{Preliminaries}\label{sec2}
\subsection{Different types of fractional logistic equations and fractional cubic equation}
\begin{itemize}
\item \textbf{Fractional logistic equation:}
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=rx(t)\left(1-\frac{x(t)}{K}\right),
\label{eclogistica}
\end{equation}
where $r>0$ is the intrinsic growth rate and $K>0$ the carrying capacity of the resource.
\item \textbf{Fractional logistic equation with harvest:}
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=rx(t)\left(1-\frac{x(t)}{K}\right)-Ex(t),
\label{eclogisticacosecha}
\end{equation}
where $E>0$ is the effort to be made to extract a proportion of the resource studied.
\item \textbf{Fractional logistic equation with Allee effect:}
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=rx(t)\left(1-\frac{x(t)}{K}\right)(x(t)-m),
\label{eclogisticaallee}
\end{equation}
where $m>0$ is the Allee threshold, that is, the minimum population density required for the growth of certain species, below which the population dies out (the population growth rate is positive only within the range $m<x<K$ and negative outside this range).
\item \textbf{Fractional logistic equation with Allee effect with harvest:}
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=rx(t)\left(1-\frac{x(t)}{K}\right)(x(t)-m)-Ex(t).
\label{eclogisticaalleecos}
\end{equation}
\end{itemize}
The objective of this paper is to study a fractional cubic equation which generalizes these different versions of fractional logistic equations; consider, then, the equation given by:
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=ax^3(t)+bx^2(t)+cx(t),
\label{cubicafrac}
\end{equation}
where $a,b,c \in \mathbb{R}$.
It is necessary to introduce the meaning of fractional derivative and some of its properties.
\subsection{Fractional Calculus} \label{sec:frac}
\begin{definition}
The Gamma function, $\Gamma: (0, \infty)\rightarrow \mathbb{R}$, is defined by:
\begin{equation}
\Gamma(t) = \int_{0}^{\infty} s^{t-1} e^{-s} \, ds.
\label{gamma}
\end{equation}
\end{definition}
\begin{definition}
The Riemann-Liouville fractional integral operator of order $\alpha \in \mathbb{R}^{+}_{0}$ is defined in $L^1[a,b]$ by:
\begin{equation}
\,_{a}I_{t}^{\alpha} [f] (t) = \dfrac{1}{\Gamma(\alpha)} \int_{a}^{t} (t-s)^{\alpha -1} f(s) \, ds.
\label{frac1}
\end{equation}
\end{definition}
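As a simple illustration (added here; it is not part of the original text), the fractional integral of a power function follows directly from (\ref{frac1}): for $a=0$ and $f(s)=s^{\beta}$ with $\beta>-1$,
\[
\,_{0}I_{t}^{\alpha} [f] (t) = \dfrac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha -1} s^{\beta} \, ds = \dfrac{\Gamma(\beta+1)}{\Gamma(\beta+\alpha+1)}\, t^{\beta+\alpha},
\]
which is obtained with the substitution $s=tu$ and the Beta integral $\int_{0}^{1}(1-u)^{\alpha-1}u^{\beta}\,du=\frac{\Gamma(\alpha)\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}$.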
\begin{definition}\label{defi}
The Caputo fractional derivative operator of order $\alpha \in \mathbb{R}^{+}_{0}$ is defined by:
\begin{equation}
\,_{a}^{C}D_{t}^{\alpha}[f] (t)=\left(\,_{a}I_{t}^{n-\alpha} \circ \dfrac{d^{n}}{dt^{n}}\right)[f] (t)
\label{frac5}
\end{equation}
as long as $\dfrac{d^{n}f}{dt^{n}} \in L^1[a,b]$, and $n=\left\lceil \alpha \right\rceil$.
\end{definition}
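For instance (a standard computation, added here for illustration), for $0<\alpha\leq 1$ the Caputo derivative of a constant vanishes, $\,_{0}^{C}D_{t}^{\alpha}[1](t)=0$, while for $f(s)=s^{\beta}$ with $\beta>0$,
\[
\,_{0}^{C}D_{t}^{\alpha}[f](t)=\,_{0}I_{t}^{1-\alpha}\!\left[\beta s^{\beta-1}\right](t)=\dfrac{\Gamma(\beta+1)}{\Gamma(\beta+1-\alpha)}\, t^{\beta-\alpha};
\]
in particular, $\,_{0}^{C}D_{t}^{\alpha}[s](t)=\frac{t^{1-\alpha}}{\Gamma(2-\alpha)}$, which reduces to the classical derivative when $\alpha=1$.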
In \cite{OdKuShAlHa} the following result is proved:
\begin{theorem} [Generalized Mean Value Theorem] \label{thm:tvmg}
Let $f(t)\in AC[0,b]$ (the set of absolutely continuous functions on $[0,b]$). Then, for $0<\alpha\leq 1$:
\[f(t)=f(0)+\frac{1}{\Gamma(\alpha+1)} \,_{0}^{C}D_{t}^{\alpha}[f](\xi)t^{\alpha},\]
\noindent with $0\leq\xi\leq t, \; \forall t\in[0,b]$.
\end{theorem}
\begin{remark}
When $\alpha=1$, the generalized mean value theorem reduces to the classical mean value theorem.
\end{remark}
\begin{corollary} \label{cor:crec}
Suppose that $f(t) \in AC[0,b]$ and $\,_{0}^{C}D_{t}^{\alpha}[f](t) \in C(0,b]$ for $0<\alpha\leq 1$. If $\,_{0}^{C}D_{t}^{\alpha}[f](t)\geq 0 \, \left(\,_{0}^{C}D_{t}^{\alpha}[f](t)> 0\right)$, $\forall t \in (0,b)$, then $f(t)$ is non-decreasing (increasing) and if $\,_{0}^{C}D_{t}^{\alpha}[f](t)\leq 0 \, \left(\,_{0}^{C}D_{t}^{\alpha}[f](t)< 0\right)$, $\forall t \in (0,b)$, then $f(t)$ is non-increasing (decreasing) for all $t \in [0,b]$.
\end{corollary}
\section{Study of the fractional cubic equation}
\subsection{Existence and uniqueness of the solution}\label{sec:uniq}
\,\\
Consider the following initial value problem
\begin{equation}
\left\{ \begin{array}{l}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=ax^3(t)+bx^2(t)+cx(t), \\
\\ x(0)=x_0, \\
\end{array}
\right.
\label{problemappal}
\end{equation}
with $t>0$ and $0<\alpha\leq 1$.
\begin{definition}
The Bielecki norm is defined by:
\[\left\|x\right\|_N=\sup_{t \in I} \left|e^{-Nt}x(t)\right|\]
with $N>0$ and $I=\left[0,T\right]$, \cite{Me}.
\end{definition}
\begin{definition} A function $x(t)$ is a solution of the initial value problem (\ref{problemappal}) if:
\begin{enumerate}
\item $(t,x(t)) \in I\times H$ with $I=\left[0,T\right]$ and $H=\left[-h,h\right]$.
\item $x(t)$ satisfies (\ref{problemappal}).
\end{enumerate}
\end{definition}
\begin{theorem}
Let $S=L^1\left[0,T\right]$ endowed with the norm $\left\|x\right\|=\left\|e^{-Nt}x(t)\right\|_{L^1}$. If $N>0$ is such that $N^{\alpha}>3\left|a\right|h^2+2\left|b\right|h+\left|c\right|$, then the problem (\ref{problemappal}) has a unique solution $x \in C(I)$, with $x' \in S$.
\end{theorem}
\begin{proof}
By definition \ref{defi},
\[ \,_0^{C} D_t^{\alpha}\left[x\right](t)=\,_0 I_t^{1-\alpha} \frac{d}{dt}\left[x\right](t)=ax^3(t)+bx^2(t)+cx(t).\]
Applying $\,_0 I_t^{\alpha}$ to both sides, we obtain
\begin{equation}
x(t)=x_0+\,_0 I_t^{\alpha} \left[ax^3+bx^2+cx\right](t).
\label{imp1}
\end{equation}
Let $G:C(I)\rightarrow C(I)$ be the operator defined by $G(x(t))= x_0+\,_0 I_t^{\alpha} \left[ax^3+bx^2+cx\right](t)$. \\
To see that $G$ has a unique fixed point, it suffices to prove that $G$ is contractive with respect to the Bielecki norm $\left\|\cdot\right\|_N$.
\[e^{-Nt}(Gx-Gy)=e^{-Nt} \,_0 I_t^{\alpha} \left[a(x^3-y^3)+b(x^2-y^2)+c(x-y)\right]\]
\[=\int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} e^{-Nt} \left[a(x^3(s)-y^3(s))+b(x^2(s)-y^2(s))+c(x(s)-y(s))\right]ds\]
\[=\displaystyle\int_0^t \dfrac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} e^{-Nt+Ns} e^{-Ns} \left[a(x^3(s)-y^3(s))+b(x^2(s)-y^2(s))+c(x(s)-y(s))\right]ds\]
\[=\displaystyle\int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} e^{-N(t-s)} \left[e^{-Ns}(x(s)-y(s))\right] \left[a(x^2(s)+x(s)y(s)+y^2(s))+b(x(s)+y(s))+c\right]ds\]
\[\leq \left\|x-y\right\|_{N} \displaystyle\int_0^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)} e^{-N(t-s)} \left[a(x^2(s)+x(s)y(s)+y^2(s))+b(x(s)+y(s))+c\right]ds;\]
with the change of variable $u=t-s$, this becomes
\[\left\|x-y\right\|_{N} \int_0^t \frac{u^{\alpha-1}}{\Gamma(\alpha)} e^{-Nu} \left[a(x^2(t-u)+x(t-u)y(t-u)+y^2(t-u))+b(x(t-u)+y(t-u))+c\right]du.\]
Taking absolute values on both sides and using $\left|x(s)\right|\leq h$, $\left|y(s)\right|\leq h$, we obtain
\[\left|e^{-Nt}(Gx-Gy)\right|\leq \left\|x-y\right\|_N \left(3\left|a\right|h^2+2\left|b\right|h+\left|c\right|\right)
N^{-\alpha}\int_0^t N^{\alpha}\frac{u^{\alpha-1}}{\Gamma(\alpha)} e^{-Nu}du.\]
Extending the integral to $[0,\infty)$ and using the definition of the Gamma function, we obtain
\[\left|e^{-Nt}(Gx-Gy)\right|\leq \left\|x-y\right\|_N \frac{3\left|a\right|h^2+2\left|b\right|h+\left|c\right|}{N^{\alpha}}
\frac{\Gamma(\alpha)}{\Gamma(\alpha)},\]
then
\[\left|e^{-Nt}(Gx-Gy)\right|\leq \left\|x-y\right\|_N \frac{3\left|a\right|h^2+2\left|b\right|h+\left|c\right|}{N^{\alpha}}.\]
Choosing $N$ such that $N^{\alpha}>3\left|a\right|h^2+2\left|b\right|h+\left|c\right|$, we get $\left\|Gx-Gy\right\|_N< \left\|x-y\right\|_N$; therefore the operator $G$ is contractive and, by the Banach fixed point theorem, the problem has a unique solution.\\
To see that $x \in C(I)$ and $x' \in S$, from equation (\ref{imp1}),
\[x(t)=x_0+\,_0 I_t^{\alpha} \left[ax^3+bx^2+cx\right](t),\]
and it can be written as
\[x(t)=x_0+\,_0 I_t^{\alpha} \left[\,_0 I_t^{1}\frac{d}{dt}\left[ax^3+bx^2+cx\right](t)+ax_0^3+bx_0^2+cx_0\right].\]
Applying linearity and the definition of the Riemann-Liouville integral, we obtain
\[x(t)=x_0+\frac{t^{\alpha}}{\Gamma(\alpha+1)}\left(ax_0^3+bx_0^2+cx_0\right)+\,_0 I_t^{\alpha+1}
\left[3a(x)^2x'+2bxx'+cx'\right](t),\]
then $x \in C(I)$. Differentiating with respect to $t$,
\[x'(t)=\frac{t^{\alpha-1}}{\Gamma(\alpha)}\left(ax_0^3+bx_0^2+cx_0\right)+\,_0 I_t^{\alpha}
\left[3a(x)^2x'+2bxx'+cx'\right](t),\]
which means that,
\[e^{-Nt}x'(t)=e^{-Nt}\left(\frac{t^{\alpha-1}}{\Gamma(\alpha)}\left(ax_0^3+bx_0^2+cx_0\right)+\,_0 I_t^{\alpha}
\left[3a(x)^2x'+2bxx'+cx'\right](t)\right),\]
from which it follows that $e^{-Nt}x'(t) \in L^{1}[0,T]$ and therefore $x' \in S$.
To prove that the expression of $x(t)$ given by (\ref{imp1}) satisfies the problem (\ref{problemappal}), we differentiate with respect to $t$:
\[\frac{d}{dt}x(t)=\dfrac{d}{dt}\,_0 I_t^{\alpha} \left[ax^3+bx^2+cx\right](t).\]
Applying $\,_0 I_t^{1-\alpha}$,
\[\,_0 I_t^{1-\alpha} \frac{d}{dt}x(t)=\,_0 I_t^{1-\alpha}\frac{d}{dt}\,_0 I_t^{\alpha}\left[ax^3+bx^2+cx\right](t).\]
By the definition of the Caputo derivative (Definition \ref{defi}),
\[\,_0^C D_t^{\alpha} x(t)= \frac{d}{dt}\,_0 I_t^{1-\alpha}\,_0 I_t^{\alpha} \left[ax^3+bx^2+cx\right](t),\]
\[\,_0^C D_t^{\alpha} x(t)= \frac{d}{dt}\,_0 I_t^{1} \left[ax^3+bx^2+cx\right](t),\]
which is precisely the fractional cubic equation
\[\,_0^C D_t^{\alpha} x(t)=\left[ax^3+bx^2+cx\right](t).\]
Finally, to verify the initial condition, from the expression in (\ref{imp1}),
\[x(0)=x_0+\underbrace{\,_0 I_0^{\alpha} \left[ax^3+bx^2+cx\right](t)}_{=0}\]
then,
\[x(0)=x_0.\]
Accordingly, integral equation (\ref{imp1}) is equivalent to problem (\ref{problemappal}) and the theorem is proved.
\end{proof}
\subsubsection{Existence of solution for fractional logistic equations}
\,\\
Next, it will be shown that the condition on $N$, namely $N^{\alpha}>3\left|a\right|h^2+2\left|b\right|h+\left|c\right|$, reduces to the corresponding existence conditions for the fractional logistic equations.
\begin{itemize}
\item \textbf{Fractional logistic equation:}
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=rx(t)\left(1-\frac{x(t)}{K}\right).
\end{equation}
This is equation (\ref{problemappal}) with $a=0$, $b=-\tfrac{r}{K}$ and $c=r$.
Then $N^{\alpha}>2\frac{r}{K}h+r=r\left(\frac{2h}{K}+1\right)$. This result is analogous to that obtained in \cite{EsEmEs}.
\item \textbf{Fractional logistic equation with harvest:}
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=rx(t)\left(1-\frac{x(t)}{K}\right)-Ex(t).
\end{equation}
This is equation (\ref{problemappal}) with $a=0$, $b=-\tfrac{r}{K}$ and $c=r-E$.
Then $N^{\alpha}>2\frac{r}{K}h+(r-E)$.
\item \textbf{Fractional logistic equation with Allee effect:}
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=rx(t)\left(1-\frac{x(t)}{K}\right)(x(t)-m).
\end{equation}
This is equation (\ref{problemappal}) with $a=-\tfrac{r}{K}$, $b=\left(\tfrac{m}{K}+1\right)r$ and $c=-rm$.
Then $N^{\alpha}>3\frac{r}{K}h^2+2\left(\tfrac{m}{K}+1\right)r h+rm=r\left(m+\left(\tfrac{m}{K}+1\right)2h+3\frac{h^2}{K}\right)$.
\item \textbf{Fractional logistic equation with Allee effect with harvest:}
\begin{equation}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=rx(t)\left(1-\frac{x(t)}{K}\right)(x(t)-m)-Ex(t).
\end{equation}
This is equation (\ref{problemappal}) with $a=-\tfrac{r}{K}$, $b=\left(\tfrac{m}{K}+1\right)r$ and $c=-rm-E$.
Then $N^{\alpha}>3\frac{r}{K}h^2+2\left(\tfrac{m}{K}+1\right)r h+rm+E=r\left(m+\left(\tfrac{m}{K}+1\right)2h+3\frac{h^2}{K}\right)+E$.
\end{itemize}
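As a concrete numerical illustration (added here; the value of $h$ is our own assumption, chosen so that $H=[-h,h]$ contains the initial conditions used in the numerical experiments below): for the Allee-with-harvest model with $r=0.5$, $K=10$, $m=1$, $E=0.2$ and $h=12$, the bound reads $N^{\alpha}>3\cdot 0.05\cdot 144+2\cdot 0.55\cdot 12+0.7=35.5$, so for $\alpha=0.5$ any $N$ with $N>35.5^{2}\approx 1260$ makes the operator $G$ contractive.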
\subsection{Stability analysis of the fractional cubic equation}
\subsubsection{General case}
\,\\
Consider the following fractional initial value problem,
\begin{equation}
\left\{ \begin{array}{l}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=f(x(t)), \\
\\ x(0)=x_0, \\
\end{array}
\right.
\label{cfgeneral}
\end{equation}
with $t>0$ and $0<\alpha\leq 1$.
To find the equilibrium points $x_{eq}$ of this equation, the condition $\,_0^{C} D_t^{\alpha}\left[x\right](t)=0$ is imposed, that is,
\begin{equation}
f(x_{eq})=0.
\label{geq}
\end{equation}
To study the stability of each equilibrium point, consider the Jacobian matrix $A_{eq}$ of $f$ evaluated at the equilibrium point,
\[A_{eq}=\left[\left.f'(x(t))\right|_{x_{eq}}\right].\]
The eigenvalues $\lambda_{eq}$ of $A_{eq}$ are calculated for each equilibrium point.
The following theorem can be seen in \cite{AhESES} and \cite{Die}.
\begin{theorem}
Let $\lambda_{eq}$ be the eigenvalues of $A_{eq}$, the Jacobian matrix associated with each equilibrium point $x_{eq}$. Then
\begin{itemize}
\item if $arg(\lambda_{eq})< \frac{\alpha\pi}{2}$, then the equilibrium point $x_{eq}$ is locally unstable (U),
\item if $arg(\lambda_{eq})\geq \frac{\alpha\pi}{2}$, then the equilibrium point $x_{eq}$ is locally stable (S), being locally asymptotically stable (AS) if ${arg(\lambda_{eq})> \frac{\alpha\pi}{2}}$.
\end{itemize}
\end{theorem}
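\begin{remark}
(This observation is added here for convenience.) In the scalar problems considered in this paper, $A_{eq}$ reduces to the real number $\lambda_{eq}=f'(x_{eq})$, so $arg(\lambda_{eq})$ is either $0$ or $\pi$ and, for every $0<\alpha\leq 1$, the criterion simplifies to: $x_{eq}$ is (U) if $f'(x_{eq})>0$ and (AS) if $f'(x_{eq})<0$, while the case $f'(x_{eq})=0$ is inconclusive.
\end{remark}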
Now, the equilibrium points and their stability of the fractional cubic equation will be examined.
\subsubsection{Fractional cubic equation case}
Consider $0<\alpha \leq 1$ and $a,b,c \in \Bbb{R}$ for the following problem
\begin{equation}
\left\{ \begin{array}{l}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=f(x(t))=ax^3(t)+bx^2(t)+cx(t), \\
\\ x(0)=x_0. \\
\end{array}
\right.
\label{problemappal2}
\end{equation}
In Table \ref{table:pointsofequilibrium}, the different equilibrium points and the stability of each one can be observed. For the complete analysis, refer to the appendix or \cite{BaReTid}.
\begin{table}[!h]
\centering
\caption{Points of equilibrium \label{table:pointsofequilibrium}}
\begin{tabular}{|l|l|l|l|l|}
\hline
\multirow{10}{*}{$a\neq 0$} & \multirow{2}{*}{$x_1 = 0$} & \multicolumn{2}{c|}{$c>0$ } & $x_1$ is (U)\\
& & \multicolumn{2}{c|}{$c<0$ } & $x_1$ is (AS)\\ \cline{2-5}
& \multirow{4}{*}{$x_2=\frac{-b+\sqrt{b^2-4ac}}{2a}$} & \multirow{2}{*}{$a>0$} & $b<\sqrt{b^2-4ac}$ & $x_2$ is (U)\\
& & & $b>\sqrt{b^2-4ac}$ & $x_2$ is (AS)\\ \cline{3-5}
& & \multirow{2}{*}{$a<0$} & $b<\sqrt{b^2-4ac}$ & $x_2$ is (AS)\\
& & & $b>\sqrt{b^2-4ac}$ & $x_2$ is (U)\\ \cline{2-5}
& \multirow{4}{*}{$x_3=\frac{-b-\sqrt{b^2-4ac}}{2a}$} & \multirow{2}{*}{$a>0$} & $b<-\sqrt{b^2-4ac}$ & $x_3$ is (AS)\\
& & & $b>-\sqrt{b^2-4ac}$ & $x_3$ is (U)\\ \cline{3-5}
& & \multirow{2}{*}{$a<0$} & $b<-\sqrt{b^2-4ac}$ & $x_3$ is (U)\\
& & & $b>-\sqrt{b^2-4ac}$ & $x_3$ is (AS)\\ \hline
\hline
\multirow{4}{*}{$a = 0$} & \multirow{2}{*}{$x_1 = 0$} & \multicolumn{2}{c|}{$c>0$ } & $x_1$ is (U)\\
& & \multicolumn{2}{c|}{$c<0$ } & $x_1$ is (AS)\\ \cline{2-5}
& \multirow{2}{*}{$x_2 = -\frac{c}{b}$} & \multicolumn{2}{c|}{$c>0$ } & $x_2$ is (AS)\\
& & \multicolumn{2}{c|}{$c<0$ } & $x_2$ is (U)\\ \hline
\end{tabular}
\end{table}
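As a cross-check (an illustrative sketch added here, not taken from the original; the function name \texttt{classify\_equilibria} is ours), the classification in Table \ref{table:pointsofequilibrium} can be reproduced numerically from the stability criterion:
\begin{verbatim}
# Numerically classify the equilibria of a*x^3 + b*x^2 + c*x
# using the condition on arg(f'(x_eq)) versus alpha*pi/2.
import numpy as np

def classify_equilibria(a, b, c, alpha):
    coeffs = [a, b, c, 0.0] if a != 0 else [b, c, 0.0]
    roots = np.roots(coeffs)
    real_eqs = sorted({round(float(z.real), 12) + 0.0 for z in roots
                       if abs(z.imag) < 1e-12})
    result = []
    for x in real_eqs:
        lam = 3 * a * x**2 + 2 * b * x + c   # eigenvalue f'(x_eq)
        if lam == 0:
            result.append((x, "inconclusive"))
        elif abs(np.angle(lam)) < alpha * np.pi / 2:
            result.append((x, "U"))
        else:
            result.append((x, "AS"))
    return result

# Allee effect with r=0.5, K=10, m=1:  a=-r/K, b=(m/K+1)r, c=-r*m
print(classify_equilibria(-0.05, 0.55, -0.5, 0.5))
# equilibria 0, 1 and 10, classified as AS, U and AS respectively
\end{verbatim}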
\subsubsection{Equilibrium and stability applied to fractional logistic equations}
\,\\
Next, it will be shown that the results found in the previous subsection coincide with the known results for the fractional logistic equations (\ref{eclogistica}), (\ref{eclogisticacosecha}), (\ref{eclogisticaallee}) and (\ref{eclogisticaalleecos}).
\begin{itemize}
\item \textbf{Fractional logistic equation:}
This is equation (\ref{problemappal}) with $a=0$, $b=-\tfrac{r}{K}$ and $c=r$.
Then, $a=0$, $b<0$ and $c>0$, therefore the equilibrium points are $x_1=0$ which is (U) and $x_2=K$ which is (AS).
This result is the same as that obtained in \cite{EsEmEs}.
\item \textbf{Fractional logistic equation with harvest:}
This is equation (\ref{problemappal}) with $a=0$, $b=-\tfrac{r}{K}$ and $c=r-E$. If the intrinsic growth rate is larger than the harvest coefficient $(r>E)$, then $a=0$, $b<0$ and $c>0$, so the equilibrium points are $x_1=0$, which is (U), and $x_2=K\left(1-\tfrac{E}{r}\right)$, which is (AS). If the intrinsic growth rate is smaller than the harvest coefficient $(r<E)$, then $a=0$, $b<0$ and $c<0$, so $x_1=0$ is (AS) and $x_2=K\left(1-\tfrac{E}{r}\right)$ is (U).
\item \textbf{Fractional logistic equation with Allee effect:}
This is equation (\ref{problemappal}) with $a=-\tfrac{r}{K}$, $b=\left(\tfrac{m}{K}+1\right)r$ and $c=-rm$.
Then $a<0$, $b>0$ and $c<0$, therefore the equilibrium points are $x_1=0$ which is (AS), $x_2=m$ which is (U), because $b>\sqrt{b^2-4ac}$, and $x_3=K$ which is (AS), because $b>-\sqrt{b^2-4ac}$. This result is the same as that obtained in \cite{SyMaSh}.
\item \textbf{Fractional logistic equation with Allee effect with harvest:}
This is equation (\ref{problemappal}) with $a=-\tfrac{r}{K}$, $b=\left(\tfrac{m}{K}+1\right)r$ and $c=-rm-E$.
Then $a<0$, $b>0$ and $c<0$. The equilibrium points are classified according to the value of $E$. If $E<\frac{r}{4K}(K-m)^2$ then they are $x_1=0$ which is (AS), $x_2=\frac{1}{2}\left(m+K+\sqrt{(K-m)^2-4\tfrac{KE}{r}}\right)$ which is (U), because $b>\sqrt{b^2-4ac} $, and $x_3=\frac{1}{2}\left(m+K-\sqrt{(K-m)^2-4\tfrac{KE}{r}}\right)$ which is (AS), because $b>-\sqrt{b^2-4ac}$, while if $E \geq \frac{r}{4K}(K-m)^2$ the only equilibrium point is $x_1=0$ which is (AS), leading to the extinction of the species.
\end{itemize}
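For completeness, the threshold $E<\frac{r}{4K}(K-m)^2$ used above can be derived in one line (this short derivation is added here and is not part of the original text): the nonzero equilibria of (\ref{eclogisticaalleecos}) solve
\[
-\frac{r}{K}x^{2}+\Big(\frac{m}{K}+1\Big)r\,x-(rm+E)=0
\quad\Longleftrightarrow\quad
x^{2}-(K+m)\,x+Km+\frac{KE}{r}=0,
\]
whose discriminant $(K+m)^{2}-4\big(Km+\tfrac{KE}{r}\big)=(K-m)^{2}-\tfrac{4KE}{r}$ is nonnegative precisely when $E\leq\frac{r}{4K}(K-m)^{2}$; its roots are the expressions given above for $x_{2}$ and $x_{3}$.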
\subsection{Numerical results}
\,\\
To perform the numerical implementations, a predictor-corrector method is used. The fractional forward Euler method is used to compute $u_{n+1}^{P}$ (predictor), and then the fractional trapezoidal rule is used to compute $u_{n+1}$ (corrector), which leads to the fractional Adams method \cite{BaDiScTru, LiZe}:
\[ \left\{ \begin{array}{ll}
u_{n+1}^{P}&=\displaystyle \sum_{j=0}^{m-1}\frac{t_{n+1}^{j}}{j!}u_{0}^{j}+\sum_{j=0}^{n}b_{j,n+1}f(t_j,u_j),\\
u_{n+1}&=\displaystyle \sum_{j=0}^{m-1}\frac{t_{n+1}^{j}}{j!}u_{0}^{j}+\sum_{j=0}^{n}a_{j,n+1}f(t_j,u_j)+a_{n+1,n+1}f(t_{n+1},u_{n+1}^{P}).\\
\end{array}\right.\]
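For concreteness, the scheme can be implemented in a few lines. The following Python sketch (not the authors' code; the function names \texttt{fractional\_adams} and \texttt{cubic\_rhs}, the step count and the sample call are our own choices) implements the predictor-corrector pair above with the standard Adams-Bashforth-Moulton weights for $0<\alpha\leq 1$ and applies it to the cubic right-hand side of (\ref{cubicafrac}).
\begin{verbatim}
# Sketch of the fractional Adams (predictor-corrector) method for
#   C-D^alpha x(t) = f(t, x(t)),  x(0) = x0,  with 0 < alpha <= 1.
import numpy as np
from math import gamma

def fractional_adams(f, x0, alpha, T, n_steps):
    h = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    x = np.zeros(n_steps + 1); x[0] = x0
    fx = np.zeros(n_steps + 1); fx[0] = f(t[0], x[0])
    for n in range(n_steps):
        j = np.arange(n + 1)
        # predictor (fractional forward Euler) weights b_{j,n+1}
        b = (h**alpha / alpha) * ((n + 1 - j)**alpha - (n - j)**alpha)
        xp = x0 + b.dot(fx[:n + 1]) / gamma(alpha)
        # corrector (fractional trapezoidal rule) weights a_{j,n+1}
        a = np.empty(n + 1)
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        k = np.arange(1, n + 1)
        a[1:] = ((n - k + 2)**(alpha + 1) + (n - k)**(alpha + 1)
                 - 2.0 * (n - k + 1)**(alpha + 1))
        x[n + 1] = x0 + (h**alpha / gamma(alpha + 2)) * (
            a.dot(fx[:n + 1]) + f(t[n + 1], xp))
        fx[n + 1] = f(t[n + 1], x[n + 1])
    return t, x

def cubic_rhs(a, b, c):
    return lambda t, x: a * x**3 + b * x**2 + c * x

# example: logistic equation with harvest (a=0, b=-r/K, c=r-E),
# r = 0.5, K = 10, E = 0.2, alpha = 0.5, x0 = 4
t, x = fractional_adams(cubic_rhs(0.0, -0.05, 0.3), 4.0, 0.5, 500.0, 5000)
\end{verbatim}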
Different plots of $x(t)$ are presented, taking into account different variations of the parameters $r,\,K,\,m,\,E,\,x_0,\,\alpha$ and $T$.
\begin{itemize}
\item In the following plots, the fractional logistic equation with harvest considered is
\[\,_0^{C} D_t^{0.5}\left[x\right](t)= 0.5\, x(t) \left(1-\frac{x(t)}{10}\right)-E \,x(t)\]
where $r=0.5$ is the same value as in \cite{EsEmEs}, but with a carrying capacity $K=10$ and $\alpha=0.5$. Each plot shows the approximate solution of the fractional equation for a given harvest value $E$, with varying initial values $x(0)=x_0$. The chosen final time is $T=500$, since in general the solutions of the fractional logistic equation take a long time to reach equilibrium.
\begin{figure}
\caption{Fractional logistic equation solutions with $E=0, \, E=0.05, \, E=0.2$ and $E=0.5$}
\label{Figlogcosecha1}
\end{figure}
It can be noticed in Figure \ref{Figlogcosecha1} that if the harvest of the species is zero, the solutions stabilize at $K=10$, analogously to the behavior seen in \cite{EsEmEs}, where the solutions also stabilize at the carrying capacity. As the harvest increases, the solutions stabilize at intermediate values, until the species becomes extinct under over-exploitation, as shown in the last plot. This is similar to the integer-order case; the difference lies in the way in which the solutions reach that equilibrium.
\item In the following plots, the fractional logistic equation with harvest considered is
\[\,_0^{C} D_t^{\alpha}\left[x\right](t)= 0.5\, x(t) \left(1-\frac{x(t)}{10}\right)-0.2 \,x(t)\]
where again $r=0.5$, $K=10$ and $E=0.2$. In this case, each plot shows the approximate solution of the fractional equation for a given initial value $x(0)=x_0$, with varying values of $\alpha$. Again, the chosen final time is $T=500$.
\begin{figure}
\caption{Fractional logistic equation with harvest solutions with $x_0=0.1, \, x_0=4, \, x_0=8$ and $x_0=12$}
\label{Figlogcosecha2}
\end{figure}
It can be noticed in Figure \ref{Figlogcosecha2} that for different values of $\alpha$ the same equilibrium is achieved, but in a different way. The shapes of the solutions $x(t)$ are different, and the times at which each one reaches the equilibrium value are different as well: as the value of $\alpha$ decreases, the solutions reach equilibrium more slowly.
\item In the following plots, the fractional logistic equation with Allee effect with harvest considered is
\[\,_0^{C} D_t^{0.5}\left[x\right](t)= 0.5\, x(t) \left(1-\frac{x(t)}{10}\right)\left(x(t)-1\right)-Ex(t)\]
where $r=0.5$, $K=10$, $m=1$ are the same parameters as in \cite{SyMaSh}, and $\alpha=0.5$. In this case, each plot shows the approximate solution of the fractional equation for a given harvest value $E$, with varying initial values $x(0)=x_0$. The chosen final time is $T=25$, because the solutions of the equation with Allee effect reach equilibrium faster than in the previous cases without Allee effect.
\begin{figure}
\caption{Fractional logistic equation with Allee effect with harvest solutions with $E=0$ and increasing harvest values up to $E=1.5$}
\label{Figalleecosecha1}
\end{figure}
It can be noticed in Figure \ref{Figalleecosecha1} that if the harvest is zero, the solutions stabilize at the carrying capacity $K=10$, analogously to \cite{SyMaSh}. As the harvest increases, the solutions stabilize at intermediate values, until the species becomes extinct when the harvest value is $E=1.5>1.0125=\frac{r}{4K}(K-m)^2$. This is similar to the integer-order case; the difference is the way in which these solutions reach this equilibrium, which is slower as the value of $\alpha$ decreases.
\item In the following plots, the same fractional logistic equation with Allee effect with harvest is considered:
\[\,_0^{C} D_t^{\alpha}\left[x\right](t)= 0.5\, x(t) \left(1-\frac{x(t)}{10}\right)\left(x(t)-1\right)-0.2\, x(t)\]
where again $r=0.5$, $K=10$, $m=1$ and $E=0.2$. In this case, each plot shows the approximate solution of the fractional equation for a given initial value $x(0)=x_0$, with varying values of $\alpha$. Again, the chosen final time is $T=25$.
\begin{figure}
\caption{Fractional logistic equation with Allee effect with harvest solutions with $x_0=0.1, \, x_0=4, \, x_0=8$ and $x_0=12$}
\label{Figalleecosecha2}
\end{figure}
It can be noticed in Figure \ref{Figalleecosecha2} that for different values of $\alpha$ the same equilibrium is reached, but in a different way. The shapes of the solutions $x(t)$ are different, and the times at which each one reaches the equilibrium value are different as well.
\end{itemize}
\section{Conclusions}
\label{sec:conclusions}
In this paper, an analysis of a fractional cubic equation, which is a generalization of different versions of fractional logistic equations, has been presented. In the first part of the work, the existence and uniqueness of the solution of the fractional cubic problem subject to an initial value were demonstrated. In the second part, a stability analysis was performed, and it was seen that this analysis is similar to that of the fractional logistic equations. Finally, the fractional Adams method was implemented numerically, which made it possible to extend results already obtained and to compare the different fractional logistic models with harvest.
\begin{comment}
\section{Mathematical tools}
\subsection{Introduction to fractional calculus}
In this section, we present some definitions and properties of the Caputo and Riemann-Liouville fractional calculus. For more details on the subject and applications, we refer the reader to \cite{Die, Old, Pod}.
\begin{definition}
The Mittag Leffler function with parameters $\alpha , \, \beta$, is defined by
\begin{equation}
E_{\alpha,\beta} (z) = \displaystyle \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + \beta)}
\label{mittag}
\end{equation}
for all $z\in \mathbb{C}$.
\end{definition}
\begin{definition}
The Gamma function, $\Gamma: (0, \infty)\rightarrow \mathbb{R}$, is defined by
\begin{equation}
\Gamma(x) = \int_{0}^{\infty} s^{x-1} e^{-s} \, ds.
\label{gamma}
\end{equation}
\end{definition}
\begin{definition}
The Riemann-Liouville fractional integral operator of order ${\alpha \in \mathbb{R}^{+}_{0}}$ is defined in $L^1[a,b]$ by
\begin{equation}
\,_{a}I_{x}^{\alpha} [f] (x) = \dfrac{1}{\Gamma(\alpha)} \int_{a}^{x} (x-s)^{\alpha -1} f(s) \, ds.
\label{frac1}
\end{equation}
\end{definition}
\begin{definition}
If $f \in L^1[a,b]$, the left and right Riemann-Liouville fractional derivatives of order $\alpha \in \mathbb{R}^{+}_{0}$ are defined, respectively, by
\[ \,^{RL}_{a}D_{x}^{\alpha}[f](x)= \dfrac{1}{\Gamma(n-\alpha)}\dfrac{d^{n}}{dx^{n}}\int_a^{x}(x-s)^{n-1-\alpha}f(s)ds\]
and
\[ \,^{RL}_{x}D_{b}^{\alpha}[f](x)= \dfrac{(-1)^{n}}{\Gamma(n-\alpha)}\dfrac{d^{n}}{dx^{n}}\int_x^{b}(s-x)^{n-1-\alpha}f(s)ds,\]
with $n=\left\lceil \alpha \right\rceil$.
\label{defRL}
\end{definition}
\begin{definition}
If $\tfrac{d^{n}f}{dx^{n}} \in L^1[a,b]$, the left and right Caputo fractional derivatives of order $\alpha \in \mathbb{R}^{+}_{0}$ are defined, respectively, by
\[ \,_{a}^{C}D_{x}^{\alpha}[f](x)= \dfrac{1}{\Gamma(n-\alpha)}\int_a^{x}(x-s)^{n-1-\alpha}\dfrac{d^{n}}{ds^{n}}f(s)ds\]
and
\[ \,_{x}^{C}D_{b}^{\alpha}[f](x)= \dfrac{(-1)^{n}}{\Gamma(n-\alpha)}\int_x^{b}(s-x)^{n-1-\alpha}\dfrac{d^{n}}{ds^{n}}f(s)ds,\]
with $n=\left\lceil \alpha \right\rceil$.
\end{definition}
Now some different properties of the Riemann-Liouville and Caputo derivatives will be seen.
\begin{remark}(Relation between the Riemann-Liouville and the Caputo fractional derivatives)\\
Considering $0<\alpha<1$ and assuming that $f$ is such that $\,^{RL}_{a} D_{x}^{\alpha}[f], \, \,^{RL}_{x} D_{b}^{\alpha}[f], \, \,_{a}^{C}D_{x}^{\alpha}[f]$ and $\,_{x}^{C}D_{b}^{\alpha}[f]$ exist, then
\[\,_{a}^{C}D_{x}^{\alpha}[f](x)=\,^{RL}_{a} D_{x}^{\alpha}[f](x)-\dfrac{f(a)}{1-\alpha} (x-a)^{-\alpha}\]
and
\[\,_{x}^{C}D_{b}^{\alpha}[f](x)=\,^{RL}_{x} D_{b}^{\alpha}[f](x)-\dfrac{f(b)}{1-\alpha} (b-x)^{-\alpha}.\]
If $f(a)=0$ then
\[\,_{a}^{C}D_{x}^{\alpha}[f](x)=\,^{RL}_{a} D_{x}^{\alpha}[f](x)\]
and if $f(b)=0$ then
\[\,_{x}^{C}D_{b}^{\alpha}[f](x)=\,^{RL}_{x} D_{b}^{\alpha}[f](x).\]
\end{remark}
\begin{remark}An important difference between Riemann-Liouville derivatives and Caputo derivatives is that, being K an arbitrary constant,
\[ \,_{a}^{C}D_{x}^{\alpha} K= 0 \ ,\hspace{1cm} \,_{x}^{C}D_{b}^{\alpha}K=0,\]
however
\[ \,^{RL}_{a}D_{x}^{\alpha}K=\dfrac{K}{\Gamma (1- \alpha)}(x-a)^{-\alpha}, \hspace{1cm} \,^{RL}_{x}D_{b}^{\alpha}K=\dfrac{K}{\Gamma (1- \alpha)}(b-x)^{-\alpha},\]
\[ \,^{RL}_{a}D_{x}^{\alpha} (x-a)^{\alpha-1}=0, \hspace{1cm} \,^{RL}_{x}D_{b}^{\alpha}(b-x)^{\alpha-1}=0.\]
In this sense, the Caputo fractional derivatives are similar to the classical derivatives.
\label{remark}
\end{remark}
\begin{theorem} (Integration by parts. See \cite{Kil})\\
Let $0<\alpha<1$. Let $f \in C^{1}([a,b])$ and $g \in L^1([a,b])$. Then,
\[\int_a^b g(x) \,_a^CD_x^{\alpha}f(x)\,dx=\int_a^b f(x) \,^{RL}_xD_b^{\alpha}g(x)\,dx+\left[ \,_xI_b^{1-\alpha}g(x)f(x)\right]\left|_a^b\right.\]
and
\[\int_a^b g(x) \,_x^CD_b^{\alpha}f(x)\,dx=\int_a^b f(x) \,^{RL}_aD_x^{\alpha}g(x)\,dx- \left[\,_aI_x^{1-\alpha}g(x)f(x)\right]\left|_a^b\right..\]
Moreover, if $f(a)=f(b)=0$, we have that
\[\int_a^b g(x) \,_a^CD_x^{\alpha}f(x)\,dx=\int_a^b f(x) \,^{RL}_xD_b^{\alpha}g(x)\,dx\]
and
\[\int_a^b g(x) \,_x^CD_b^{\alpha}f(x)\,dx=\int_a^b f(x)\,^{RL}_aD_x^{\alpha}g(x)\,dx.\]
\label{parts}
\end{theorem}
\subsection{Fractional variational problems}
Consider the following problem of the fractional calculus of variations which consists in finding a function $y \in \,_{a}^{\alpha}E$ that optimizes (minimizes or maximizes) the functional
\begin{equation}
J(y)= \int^{b}_{a} L(x,y,\,_{a}^{C}D_{x}^{\alpha}y) \, dx
\label{prob}
\end{equation}
with a Lagrangian $L \in C^1([a,b]\times \mathbb{R}^2)$ and
\[\,_{a}^{\alpha}E=\{ y: [a,b] \rightarrow \mathbb{R}: y \in C^1([a,b]), \, \,_{a}^{C}D_{x}^{\alpha}y \in C([a,b])\},\]
subject to the boundary conditions: $y(a)=y_a \, ,\, \, y(b)=y_{b}$.
Now Euler-Lagrange equations for this problem will be stated. In the first one appears both Caputo and Riemann-Liouville derivatives (Theorem \ref{teoELcrl}), meanwhile the second one only depends on Caputo derivatives (Theorem \ref{teoELcc}).
The proof of the following theorem is in \cite{Mal4}.
\begin{theorem}
If $y$ is a local optimizer to the above problem, then $y$ satisfies the next Euler-Lagrange equation:
\begin{equation}
\dfrac{\partial{L}}{\partial y}+\,^{RL}_{x}D_{b}^{\alpha}\dfrac{\partial{L}}{\partial \,_{a}^{C}D_{x}^{\alpha}y}=0.
\label{eccrl}
\end{equation}
\label{teoELcrl}
\end{theorem}
\begin{remark} Equation (\ref{eccrl}) is said to involve Caputo and Riemann-Liouville derivatives.
This is a consequence of the Lagrange method to optimize functionals: the application of integration by parts (Theorem \ref{parts}) for Caputo derivatives in the Gateaux derivative of the functional relates Caputo with Riemann-Liouville derivatives.
\end{remark}
\begin{remark} Equation (\ref{eccrl}) is only a necessary condition to existence of the solution. We are now interested in finding sufficient conditions. Typically, some conditions of convexity over the Lagrangian are needed.
\end{remark}
\begin{definition} We say that $f(\underline{x},y,u)$ is convex in $S\subseteq \mathbb{R}^{3}$ if $f_{y}$ and $f_{u}$ exist and are continuous, and the condition
\[
f(x,y+y_{1},u+u_{1})-f(x,y,u) \geq f_{y}(x,y,u)y_{1}+f_{u}(x,y,u)u_{1},
\]
holds for every $(x,y,u),(x,y+y_{1},u+u_{1}) \in S.$
\end{definition}
The following theorem is valid only for the solution of the Euler-Lagrange equation involving Riemann-Liouville and Caputo derivatives (\ref{eccrl}). Its proof can be seen at \cite{AlTo}.
\begin{theorem}
Suppose that the function $L(\underline{x},y,u)$ is convex in $[a,b]\times\mathbb{R}^{2}$.
Then each solution $y$ of the fractional Euler–Lagrange equation (\ref{eccrl}) minimizes (\ref{prob}), when restricted to the boundary conditions $y(a)=y_a$ and $y(b)=y_b$.
\label{teosuficienteRL}
\end{theorem}
Following \cite{LaTo}, in the below theorem, we will see an Euler-Lagrange fractional differential equation only depending on Caputo derivatives.
\begin{theorem}
Let $y$ be an optimizer of (\ref{prob}) with $L\in C^{2}\left([a,b]\times \mathbb{R}^{2}\right)$ subject to boundary conditions $y(a)=y_a \, ,\, \, y(b)=y_{b}$, then $y$ satisfies the fractional Euler-Lagrange differential equation
\begin{equation}
\dfrac{\partial{L}}{\partial y}+\,_{x}^{C}D_{b}^{\alpha}\dfrac{\partial{L}}{\partial \,_{a}^{C}D_{x}^{\alpha} y}=0.
\label{EulerLagrange}
\end{equation}
\label{teoELcc}
\end{theorem}
\begin{remark}
We can see that the equation (\ref{EulerLagrange}) depends only on the Caputo derivatives. It is worth noting the importance that $L\in C^{2}\left([a,b]\times\mathbb{R}^{2}\right)$, without this the result would not be valid. As we remarked before, the advantage of this new formulation is that Caputo derivatives are more appropriate for modeling problems than the Riemann-Liouville derivatives and makes the calculations easier to solve because, in some cases, its behavior is similar to the behavior of classical derivatives.
From now on, when we work with the Euler-Lagrange equation that uses derivatives of Caputo and Riemann-Liouville (\ref{eccrl}), we will abbreviate it with C-RL and when we use the Euler-Lagrange equation that uses only derivatives of Caputo (\ref{EulerLagrange}), we will abbreviate it with C-C.
\end{remark}
\begin{remark}
Unlike the equation (\ref{eccrl}), at the moment, there are not sufficient conditions for the equation (\ref{EulerLagrange}) which only involves Caputo derivatives.
\end{remark}
Now we present an example that we are going to solve using these two different methods, in order to make comparisons.
\section{Example}
The scope of this section is to present two different candidates to be a solution for a particular problem that arise from solving the two Euler-Lagrange equations presented in the previous section.
First, we are going to solve the classical case, where only appears an integer derivative, and then we are going to deal with the fractional case.
\subsection{Classical case}
The classical problem consists in finding a function $y \in \,_{a}E'$ that optimizes (minimizes or maximizes) the functional
\[J(y)=\int_{0}^{1}\left(\left(y'(x)\right)^2-24\, y(x)\right) dx,\]
\[y(0)=0 \,,\, y(1)=0,\]
where $\,_{a}E'=\{ y: [a,b] \rightarrow \mathbb{R}: y \in C^1([a,b]) \}$.
To solve this (refer to \cite{Van}), we consider the Lagrangian
\begin{equation}
L(x,y,y')=\left(y'\right)^2-24\, y.
\label{lagrangianoclasico}
\end{equation}
Its Euler-Lagrange equation is
\[\frac{\partial L}{\partial y}- \frac{\partial}{\partial x} \left(\frac{\partial L}{\partial y'}\right)=0, \]
that is,
\[y''(x)=-12.\]
Solving this equation and taking into account that $y(0) =y(1)=0$, we obtain the solution
\begin{equation}
y(x)=-6x^2+6x.
\label{solclasica}
\end{equation}
\subsection{Fractional case}
The fractional problem consists in finding a function $y \in \,_{a}^{\alpha}E$ that optimizes (minimizes or maximizes) the functional
\[J(y)=\int_{0}^{1}\left( \left(\,^{C}_{0} D^{\alpha}_{x}\left[y\right](x)\right)^2-24\, y(x)\right) dx,\]
\[y(0)=0 \,,\, y(1)=0,\]
where $\,_{a}^{\alpha}E=\{ y: [a,b] \rightarrow \mathbb{R}: y \in C^1([a,b]), \, \,_{a}^{C}D_{x}^{\alpha}y \in C([a,b]) \}$.
To solve this we consider the Lagrangian
\begin{equation}
L(x,y, \,^{C}_{0} D^{\alpha}_{x}\left[y\right])= \,^{C}_{0} D^{\alpha}_{x}\left[y\right]^2-24\, y.
\label{lagrangiano}
\end{equation}
Like we said before, we are going to solve it using two methods, one with the C-RL Euler-Lagrange (\ref{eccrl}) and the other one with the C-C Euler-Lagrange equation (\ref{EulerLagrange}).
\subsubsection{Resolution by C-RL equation}
Applying the equation (\ref{eccrl}), we obtain
\begin{equation*}
\begin{array}{r l}
\dfrac{\partial L}{\partial y} + \,^{RL}_{x} D^{\alpha}_{1}\left(\dfrac{\partial L}{\partial \,^{C}_{0} D^{\alpha}_{x}\left[y\right] }\right)&=0\\
-24+\,^{RL}_{x} D^{\alpha}_{1}\left(2 \,^{C}_{0} D^{\alpha}_{x}\left[y\right]\right)&=0\\
\,^{RL}_{x} D^{\alpha}_{1}\left( \,^{C}_{0} D^{\alpha}_{x}\left[y\right]\right)&=12.
\end{array}
\end{equation*}
By definition,
\[\,^{RL}_{x} D^{\alpha}_{1}\left[(1-x)^{\beta}\right]=\frac{\Gamma (1+\beta)}{\Gamma(1+\beta-\alpha)}(1-x)^{\beta-\alpha}\]
and the property
\[\,^{RL}_{x} D^{\alpha}_{1}\left[ (1-x)^{\alpha-1} \right]=0,\]
which we have seen on remark \ref{remark}, considering $\beta=\alpha$ we can conclude
\begin{equation}
\begin{array}{r l}
\,^{C}_{0} D^{\alpha}_{x}\left[y\right](x)&= \dfrac{12}{\Gamma(1+\alpha)} (1-x)^{\alpha} + c_1\, (1-x)^{\alpha-1}
\end{array}
\label{change1rl}
\end{equation}
where $c_1 \in \mathbb{R}$.
Taking into account the following equalities
\begin{equation*}
\begin{aligned}
(-1)^n \prod_{j=0}^{n-1} (\alpha-j)&=\dfrac{\Gamma(n-\alpha)}{\Gamma(-\alpha)},\\
(-1)^n \prod_{j=0}^{n-1} (\alpha-1-j)&=\dfrac{\Gamma(n-\alpha+1)}{\Gamma(-\alpha+1)},
\end{aligned}
\end{equation*}
we can write
\[\begin{array}{r l}
(1-x)^\alpha&=\sum \limits_{n=0}^{\infty} \dfrac{(-1)^n \prod_{j=0}^{n-1} (\alpha-j)}{n!} x^n\\
&=\sum \limits_{n=0}^{\infty} \dfrac{\Gamma(n-\alpha)}{\Gamma(-\alpha)} \dfrac{x^n}{n!},\\
\,\\
(1-x)^{\alpha-1}&=\sum \limits_{n=0}^{\infty} \dfrac{(-1)^n \prod_{j=0}^{n-1} (\alpha-1-j)}{n!} x^n\\
&=\sum \limits_{n=0}^{\infty} \dfrac{\Gamma(n-\alpha+1)}{\Gamma(-\alpha+1)} \dfrac{x^n}{n!},\\
\end{array}\]
replacing this in (\ref{change1rl}),
\[ \,^{C}_{0} D^{\alpha}_{x}\left[y\right](x)= \frac{12}{\Gamma(1+\alpha)} \sum \limits_{n=0}^{\infty} \frac{\Gamma(n-\alpha)}{\Gamma(-\alpha)} \frac{x^n}{n!} + c_1\, \sum \limits_{n=0}^{\infty} \frac{\Gamma(n-\alpha+1)}{\Gamma(-\alpha+1)} \frac{x^n}{n!}.
\]
Considering $\,^{C}_{0} D^{\alpha}_{x}\left[ x^{\beta}\right]= \tfrac{\Gamma (1+\beta)}{\Gamma(1+\beta-\alpha)}x^{\beta-\alpha}$ and the linearity of the Caputo derivative, we obtain
\begin{equation*}
\begin{array}{r l}
y(x)&=\dfrac{12}{\Gamma(1+\alpha)^2} x^{\alpha} \sum \limits_{n=0}^{\infty} \dfrac{\Gamma(n+1)\Gamma(n-\alpha)\Gamma(1+\alpha)}{\Gamma(1)\Gamma(-\alpha)\Gamma(1+n+\alpha)}\dfrac{x^n}{n!}+\\
\,\\
&\qquad +\dfrac{c_1}{\Gamma(1+\alpha)} x^{\alpha} \sum \limits_{n=0}^{\infty} \dfrac{\Gamma(n+1)\Gamma(n-\alpha+1)\Gamma(1+\alpha)}{\Gamma(1)\Gamma(1-\alpha)\Gamma(1+n+\alpha)}\dfrac{x^n}{n!} +c_2,
\end{array}
\end{equation*}
where $c_2 \in \mathbb{R}$.
Using the definition of the Hypergeometric function of parameters $a$, $b$, $c$ \cite{Erl}:
\begin{equation}
\,_{2} F _{1}(a,b,c,x)=\sum \limits_{n=0}^{\infty} \frac{\Gamma(a+n)\Gamma(b+n)\Gamma(c)}{\Gamma(a)\Gamma(b)\Gamma(c+n)}\frac{x^n}{n!},
\label{hiper}
\end{equation}
we can rewrite the solution as
\[
\begin{aligned}
y(x)&=\frac{12}{\Gamma(1+\alpha)^2} x^{\alpha}\, \,_{2}F_{1}(1,-\alpha,1+\alpha,x)+\\
&\qquad +\frac{c_1}{\Gamma(1+\alpha)}x^{\alpha}\,_2F_1(1,1-\alpha,1+\alpha,x)+c_2.
\end{aligned}
\]
Taking into account that $y(0)=y(1)=0$, we obtain
\begin{equation}
\begin{aligned}
y_{RL}(x)&=\frac{12}{\Gamma(1+\alpha)^2} x^{\alpha}\, \,_{2}F_{1}(1,-\alpha,1+\alpha,x)-\\
&\qquad -\frac{6}{\Gamma(1+\alpha)^2}\frac{x^{\alpha}}{\,_2F_1(1,1-\alpha,1+\alpha,1)} \,_2F_1(1,1-\alpha,1+\alpha,x).\\
\end{aligned}
\label{solC-RL}
\end{equation}
\begin{remark}
This solution is valid only for $\alpha>0.5$ since otherwise the solution tends to infinity and does not satisfy the terminal condition.
\label{soldivergencia}
\end{remark}
Finally, since $L(x,y,u)=u^2-24y$ is a convex function,
indeed
\[
\begin{array}{r l}
L(x,y+y_1,u+u_1)-&L(x,y,u) =(u+u_1)^2-24(y-y_1)-u^2+24y=\\
&=u^2+2uu_1+u_{1}^{2}-24y-24y_1-u^2+24y=\\
&= u_1^{2}+2uu_1-24y_1\\
& \geq -24y_1+2 u u_1= \partial_{2}L(x,y,u,v)y_{1}+\partial_{3}L(x,y,u,v)u_{1},
\end{array}
\]
it is verified for every $(x,y,u),(x,y+y_{1},u+u_1) \in [0,1]\times \mathbb{R}^2$, applying the theorem \ref{teosuficienteRL}, $y_{RL}$ minimizes the problem for $0.5< \alpha \leq 1$.
\begin{remark}
We can notice that Theorem \ref{teosuficienteRL} only works for functions $y$ that satisfy the C-RL Euler-Lagrange equation, but furthermore they must satisfy the boundary conditions. In the case of not satisfying the boundary conditions (as in the case of $0<\alpha<0.5$), the theorem does not work.
\end{remark}
\begin{remark}
We can observe that the solution (\ref{solC-RL}) tends to (\ref{solclasica}) when $\alpha$ tends to 1. This means that when $\alpha=1$, we recover the solution of the classical problem.
\end{remark}
\subsubsection{Resolution by C-C equation}
As the Lagrangian (\ref{lagrangiano}) $L \in C^{2}\left([0,1]\times \mathbb{R}^{2}\right)$, we can apply the Theorem \ref{teoELcc}. Then using the equation (\ref{EulerLagrange}), we obtain
\begin{equation*}
\begin{array}{r l}
\dfrac{\partial L}{\partial y} + \,^{C}_{x} D^{\alpha}_{1}\left(\dfrac{\partial L}{\partial \,^{C}_{0} D^{\alpha}_{x}\left[y\right] }\right)&=0\\
-24+\,^{C}_{x} D^{\alpha}_{1}\left(2 \,^{C}_{0} D^{\alpha}_{x}\left[y\right]\right)&=0\\
\,^{C}_{x} D^{\alpha}_{1}\left( \,^{C}_{0} D^{\alpha}_{x}\left[y\right]\right)&=12.
\end{array}
\end{equation*}
By definition,
\[\,^{C}_{x} D^{\alpha}_{1}\left[(1-x)^{\beta}\right]=\frac{\Gamma (1+\beta)}{\Gamma(1+\beta-\alpha)}(1-x)^{\beta-\alpha}\]
and the property in remark \ref{remark} that, unlike the Riemann-Liouville derivative,
\[\,^{C}_{x} D^{\alpha}_{1}\left[ d_1 \right]=0,\]
for every $d_1 \in \mathbb{R}$, considering $\beta=\alpha$ we can conclude
\begin{equation}
\begin{array}{r l}
\,^{C}_{0} D^{\alpha}_{x}\left[y\right](x)&= \dfrac{12}{\Gamma(1+\alpha)} (1-x)^{\alpha} + d_1.
\end{array}
\label{change1c}
\end{equation}
Note that in this step this equation is different from (\ref{change1rl}), and that is why we are going to obtain two different solutions.
Now we can write
\[\frac{12}{\Gamma(1+\alpha)} (1-x)^{\alpha}= \frac{12}{\Gamma(1+\alpha)} \sum \limits_{n=0}^{\infty} \frac{(-1)^n \prod_{j=0}^{n-1} (\alpha-j)}{n!} x^n.\]
Taking into account the following equality
$$(-1)^n \prod_{j=0}^{n-1} (\alpha-j)=\frac{\Gamma(n-\alpha)}{\Gamma(-\alpha)},$$
we obtain
\[\frac{12}{\Gamma(1+\alpha)} (1-x)^{\alpha}= \frac{12}{\Gamma(1+\alpha)} \sum \limits_{n=0}^{\infty} \frac{\Gamma(n-\alpha)}{\Gamma(-\alpha)} \frac{x^n}{n!}. \]
Replacing this in (\ref{change1c}),
\[\begin{array}{r l}
\,^{C}_{0} D^{\alpha}_{x}\left[y\right](x)&=\dfrac{12}{\Gamma(1+\alpha)} \sum \limits_{n=0}^{\infty} \dfrac{\Gamma(n-\alpha)}{\Gamma(-\alpha)} \dfrac{x^n}{n!} + d_1.
\end{array}\]
Considering $\,^{C}_{0} D^{\alpha}_{x}\left[ x^{\beta}\right]=\tfrac{\Gamma (1+\beta)}{\Gamma(1+\beta-\alpha)}x^{\beta-\alpha}$ and the linearity of the Caputo derivative, we obtain
\[y(x)=\frac{12}{\Gamma(1+\alpha)} \sum \limits_{n=0}^{\infty} \frac{\Gamma(n-\alpha)}{\Gamma(-\alpha)} \frac{\Gamma(1+n)}{\Gamma(1+n+\alpha)} \frac{x^{n+\alpha}}{n!}+ d_1 x^{\alpha}+d_2,\]
where $d_2 \in \mathbb{R}$.
Using the definition (\ref{hiper}) of the Hypergeometric function of parameters $a$, $b$, $c$, we can rewrite the solution as
\[y(x)=\frac{12}{\Gamma(1+\alpha)^2} x^{\alpha} \, \,_{2}F_{1}(1,-\alpha,1+\alpha,x)+d_1 x^{\alpha}+d_2.\]
Taking into account that $y(0)=y(1)=0$, we obtain
\begin{equation}
y_{C}(x)=\frac{12}{\Gamma(1+\alpha)^2} x^{\alpha}\, \,_{2}F_{1}(1,-\alpha,1+\alpha,x)-\frac{6}{\Gamma(1+\alpha)^2}{x^{\alpha}}.
\label{solC-C}
\end{equation}
\begin{remark}
Unlike $y_{RL}$ in (\ref{solC-RL}), $y_{C}$ is valid for every $0<\alpha\leq 1$. However, we can not ensure that it is a minimum of the problem because there are no sufficient conditions theorem for the C-C Euler-Lagrange equation.
\end{remark}
\begin{remark}
We can observe that the solution (\ref{solC-C}) tends to (\ref{solclasica}) when $\alpha$ tends to 1. This means that both solutions of each Euler-Lagrange equations, $y_{RL}$ and $y_{C}$ tend to the solution of the classical Euler-Lagrange equation when $\alpha=1$.
\end{remark}
\subsection{Comparison between methods}
In this section we are going to show some graphics in order to compare the solutions obtained from the different methods.
Figure \ref{figconvergence} presents the convergence of both solutions $y_{RL}$ (\ref{solC-RL}) in the left and $y_C$ (\ref{solC-C}) in the right, when we take limit as $\alpha$ approaches one. We can see that both converge to the classical solution $y$ (\ref{solclasica}).
\begin{figure}
\caption{Convergence of $y_{RL}$ and $y_{C}$ to the classical solution as $\alpha$ tends to 1}
\label{figconvergence}
\end{figure}
\begin{remark}
This figure shows us the difference between the shapes of the solutions obtained from the different methods. We can clearly see how the shapes of the solutions $y_{C}$ are more similar to the classical solution in contrast to the shapes of the solutions $y_{RL}$.
\end{remark}
Figure \ref{figcomparison} presents a comparison between both solutions $y_{RL}$ (\ref{solC-RL}) and $y_C$ (\ref{solC-C}), for different values of $\alpha$.
\begin{figure}
\caption{Comparison of the C-RL and C-C solutions}
\label{figcomparison}
\end{figure}
\begin{remark}
In this figure we can see how the difference between the shapes of both solutions becomes more remarkable when $\alpha$ approaches 0.5, where the solutions $y_{RL}$ diverge as we saw in Remark \ref{soldivergencia}.
\end{remark}
Figure \ref{figsolCalfa04} presents the solution $y_C$ (\ref{solC-C}) for $\alpha=0.4$.
\begin{figure}
\caption{Solution C-C for $\alpha=0.4$}
\label{figsolCalfa04}
\end{figure}
\begin{remark}
While the C-RL Euler-Lagrange equation does not provide us with solutions for the cases $0<\alpha \leq 0.5$, the C-C equation does.
\end{remark}
Table \ref{table:minimum} presents the values obtained in each case. To calculate the integrand we approximate the Caputo fractional derivatives of both $y_{C}$ and $y_{RL}$. For this we use a method of $L_1$ type that can be seen in \cite{BaDiScTru, LiZe}. This method consists of making a regular partition of the interval $[0,1]$ as ${0=x_0 \leq x_1 \leq }$ ... $ \leq x_m=1 $, of size $h>0 $ sufficiently small, and then approximating the Caputo derivative as follows:
\[\,^{C}_{0} D^{\alpha}_{x}\left[y\right](x_{m}) =\displaystyle \sum_{k=0}^{m-1} b_{m-k-1}(y(x_{k+1})-y(x_k)),\]
where
$$b_k=\frac{h^{-\alpha}}{\Gamma(2-\alpha)}\left[(k+1)^{1-\alpha}-k^{1-\alpha}\right].$$
Then, to calculate the integrals, we use the Riemann sums approximation.
\begin{table}[H]
\begin{center}
\begin{tabular}{| c | c | c |}
\hline
$\alpha$ & C-RL & C-C \\ \hline
1 & -12.1752 & -12.1752 \\
0.95 & -16.4431 & -14.3133 \\
0.9 & -17.3685 & -16.7006 \\
0.8 & -36.6555 & -22.2567 \\
0.7 & -60.2608 & -28.9016 \\
0.55 & -127.9983 & -40.9804 \\
0.4 & the solution does not exist & -55.5863 \\ \hline
\end{tabular}
\caption{Values obtained with C-RL and C-C \label{table:minimum}}
\end{center}
\end{table}
\begin{remark}
In Table \ref{table:minimum} we can see that, as we get closer to $\alpha=0.5$, the difference between the values becomes very large, the minimum being attained by the solution of the C-RL equation, while for values $0< \alpha \leq 0.5$ we obviously only have the solution of the C-C equation, since it is the only one that satisfies the boundary conditions.
\end{remark}
\section{Conclusions}
In this article, two theorems of necessary conditions to solve fractional variational problems were studied: an Euler-Lagrange equation which involves Caputo and Riemann-Liouville fractional derivatives (C-RL), and other Euler-Lagrange equation that involves only Caputo derivatives (C-C).
A particular example was presented in order to make a comparison between both conditions.
We were able to get several conclusions. The first thing is that we were able to verify that for $0.5< \alpha \leq 1$, the minimum was obtained from the solution of the C-RL Euler-Lagrange equation, as suggested by the Theorem \ref{teosuficienteRL}. Now, for $0<\alpha \leq 0.5$, the C-RL Euler-Lagrange equation did not provide us with a solution, while C-C equation did. However, we wonder, can we ensure that the solution of the C-C equation ($y_C$) is the optimal solution for the problem at least for these values of $\alpha$? The answer to this question is NO. Although in \cite{LaTo} it was shown that the solution $y_C$ is a critical solution of the problem, and we also saw that its shape is graphically more similar to the shape of the classical solution, we cannot ensure that $y_C$ is an optimal solution or even for the cases in which $0<\alpha \leq 0.5$. If there was a Theorem of Sufficient Conditions for the C-C equations, our example would not verify it, because if it did,
this Theorem would be in contradiction with the Theorem of Sufficient Conditions for the C-RL equations (Theorem \ref{teosuficienteRL}). This means that the convexity conditions over the Lagrangian does not reach to obtain sufficient conditions for the C-C equations. Then, if there were other conditions and such a Theorem existed, in order not to contradict Theorem \ref{teosuficienteRL}, it should depends on the value of $\alpha$.
On the other hand, we can observe that when $\alpha=1$, $y_C$ was the classical solution and it was the minimum of the problem, but when $\alpha$ decreased, for $0.5< \alpha<1$, these solutions were no longer minima, because the $y_{RL}$ solutions were. There is a discontinuity in the $y_C$ solution when $\alpha$ goes to 1. Then, why would $y_C$ be the minimum solution when $0<\alpha<0.5$? If these solutions were minima, there would be another discontinuity of the solutions with respect to $\alpha$.
In conclusion, while working with C-C equations make the work easier when it comes to calculations, many times we have to be careful with the implementation of this method since C-RL Euler-Lagrange equations are the ones that truly provide us with the optimal solution.
\end{comment}
\section*{Acknowledgments}
This work was partially supported by Universidad Nacional de Rosario through the project ING568 ``Problemas de Control \'Optimo Fraccionario''. The first author was also supported by CONICET through a PhD fellowship.
\section*{Appendix}
Consider $0<\alpha \leq 1$ and $a,b,c \in \mathbb{R}$ for the following problem,
\begin{equation}
\left\{ \begin{array}{l}
\,_0^{C} D_t^{\alpha}\left[x\right](t)=f(x(t))=ax^3(t)+bx^2(t)+cx(t), \\
\\ x(0)=x_0. \\
\end{array}
\right.
\label{problemappal20}
\end{equation}
To find the equilibrium points of equation (\ref{problemappal20}), the condition $\,_0^{C} D_t^{\alpha}\left[x\right](t)=0$ is imposed, therefore
\[ax^3(t)+bx^2(t)+cx(t)=0.\]
$\diamond$ If $a\neq 0$, three equilibrium points are obtained:
\[x_1=0 \, \, \text{ and } \, \, x_{2,3}=\frac{-b\pm \sqrt{b^2-4ac}}{2a}.\]
By analyzing the sign of $\frac{\partial f}{\partial x}(x_{i}), \ i=1,2,3$, the stability of the equilibrium points is obtained.
\begin{itemize}
\item \underline{Equilibrium point $x_1=0$}: as $\frac{\partial f}{\partial x}(x_{1})=c$, it is concluded that
\begin{itemize}
\item If $c > 0$, then $arg(\frac{\partial f}{\partial x}(x_{1}))=0< \tfrac{\alpha \pi}{2}$, so $x_1$ is (U).
\item If $c<0$, then $arg(\frac{\partial f}{\partial x}(x_{1}))=\pi>\tfrac{\alpha \pi}{2}$, so $x_1$ is (AS).
\item If $c=0$, there is no conclusion because $\frac{\partial f}{\partial x}(x_{1})=0$.
\end{itemize}
\item \underline{Equilibrium point $x_2=\frac{-b+\sqrt{b^2-4ac}}{2a}$}: as $\frac{\partial f}{\partial x}(x_{2})=\frac{b^2-b\sqrt{b^2-4ac}-4ac}{2a}$, it is concluded that
\begin{itemize}
\item If $a>0$ and $b<\sqrt{b^2-4ac}$, then $arg(\frac{\partial f}{\partial x}(x_{2}))=0<\tfrac{\alpha \pi}{2}$, so $x_2$ is (U).
\item If $a<0$ and $b>\sqrt{b^2-4ac}$, then $arg(\frac{\partial f}{\partial x}(x_{2}))=0<\tfrac{\alpha \pi}{2}$, so $x_2$ is (U).
\item If $a>0$ and $b>\sqrt{b^2-4ac}$, then $arg(\frac{\partial f}{\partial x}(x_{2}))=\pi>\tfrac{\alpha \pi}{2}$, so $x_2$ is (AS).
\item If $a<0$ and $b<\sqrt{b^2-4ac}$, then $arg(\frac{\partial f}{\partial x}(x_{2}))=\pi>\tfrac{\alpha \pi}{2}$, so $x_2$ is (AS).
\item If $b=\sqrt{b^2-4ac}$ or $b^2-4ac< 0$, then $x_2=x_1=0$, which was already analyzed.
\item If $b^2-4ac= 0$, there is no conclusion because $\frac{\partial f}{\partial x}(x_{2})=0$.
\end{itemize}
\item \underline{Equilibrium point $x_3=\frac{-b-\sqrt{b^2-4ac}}{2a}$}: as $\frac{\partial f}{\partial x}(x_{3})=\frac{b^2+b\sqrt{b^2-4ac}-4ac}{2a}$, it is concluded that
\begin{itemize}
\item If $a>0$ and $b<-\sqrt{b^2-4ac}$, then $arg(\frac{\partial f}{\partial x}(x_{3}))=\pi>\tfrac{\alpha \pi}{2}$, so $x_3$ is (AS).
\item If $a<0$ and $b>-\sqrt{b^2-4ac}$, then $arg(\frac{\partial f}{\partial x}(x_{3}))=\pi>\tfrac{\alpha \pi}{2}$, so $x_3$ is (AS).
\item If $a>0$ and $b>-\sqrt{b^2-4ac}$, then $arg(\frac{\partial f}{\partial x}(x_{3}))=0<\tfrac{\alpha \pi}{2}$, so $x_3$ is (U).
\item If $a<0$ and $b<-\sqrt{b^2-4ac}$, then $arg(\frac{\partial f}{\partial x}(x_{3}))=0<\tfrac{\alpha \pi}{2}$, so $x_3$ is (U).
\item If $b=-\sqrt{b^2-4ac}$ or $b^2-4ac<0$, then $x_3=x_1=0$, which was already analyzed.
\item If $b^2-4ac=0$, there is no conclusion because $\frac{\partial f}{\partial x}(x_{3})=0$.
\end{itemize}
\end{itemize}
$\diamond$ If $a=0$, the equilibrium points are
\[x_1=0 \,\, \text{ and } \,\, x_{2}=-\frac{c}{b},\]
assuming that $b\neq 0$ and $c \neq 0$, since otherwise $x_1=0$ would be the only equilibrium point.\\
\noindent Analyzing the sign of $\frac{\partial f}{\partial x}(x_{i}), \ i=1,2$, the stability of the equilibrium points will be obtained.
\begin{itemize}
\item \underline{Equilibrium point $x_1=0$}: analogous to the previous case.
\item \underline{Equilibrium point $x_2=-\frac{c}{b}$}: since $\frac{\partial f}{\partial x}(x_{2})=-c$, it is concluded that
\begin{itemize}
\item If $c>0$, then $\arg(\frac{\partial f}{\partial x}(x_{2}))=\pi>\tfrac{\alpha \pi}{2}$, so $x_2$ is (AS).
\item If $c<0$, then $\arg(\frac{\partial f}{\partial x}(x_{2}))=0\leq\tfrac{\alpha \pi}{2}$, so $x_2$ is (U).
\end{itemize}
\end{itemize}
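The classification above rests on the criterion that an equilibrium $x_i$ is asymptotically stable when $|\arg(\frac{\partial f}{\partial x}(x_{i}))|>\tfrac{\alpha \pi}{2}$, and unstable when the argument does not exceed this threshold. A minimal Python sketch of this classification is given below; it is only an illustration, and the values of $a$, $b$, $c$ and $\alpha$ used at the end are ours, not taken from the text.
\begin{verbatim}
# Illustrative check of the stability classification of the equilibria of
# f(x) = a x^3 + b x^2 + c x under the fractional stability criterion.
# The parameter values below are examples chosen by us.
import numpy as np

def classify_equilibria(a, b, c, alpha):
    coeffs = [a, b, c, 0.0] if a != 0 else [b, c, 0.0]
    results = []
    for x in np.roots(coeffs):
        if abs(x.imag) > 1e-12:          # complex roots are not real equilibria
            continue
        x = x.real
        lam = 3*a*x**2 + 2*b*x + c       # df/dx at the equilibrium
        if abs(lam) < 1e-12:
            results.append((x, "no conclusion (df/dx = 0)"))
        elif abs(np.angle(lam)) > alpha*np.pi/2:
            results.append((x, "asymptotically stable (AS)"))
        else:
            results.append((x, "unstable (U)"))
    return results

print(classify_equilibria(a=1.0, b=-3.0, c=2.0, alpha=0.8))
\end{verbatim}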
\end{document} |
\begin{document}
\title{Identifiability of Open Quantum Systems}
\author{Daniel Burgarth}
\affiliation{Department of Mathematics and Physics, Aberystwyth University, SY23 3BZ Aberystwyth, United Kingdom}
\author{Kazuya Yuasa}
\affiliation{Department of Physics, Waseda University, Tokyo 169-8555, Japan}
\date[]{January 20, 2014}
\begin{abstract}
We provide a general framework for the identification of open quantum systems.
By looking at the input-output behavior, we try to identify the system inside a black box in which some Markovian time-evolution takes place.
Due to the generally irreversible nature of the dynamics, it is difficult to assure full controllability over the system.
Still, we show that the system is identifiable up to similarity under a certain rank condition.
The framework also covers situations relevant to standard quantum process tomography, where we do not have enough control over the system but have a tomographically complete set of initial states and observables.
Remarkably, the similarity cannot in general be reduced to unitarity even for unitary systems, and the spectra of Hamiltonians are not identifiable without additional knowledge.
\end{abstract}
\pacs{
03.67.-a,
02.30.Yy,
03.65.-w
}
\maketitle
\textit{Introduction.---}
It is well known that quantum process tomography \cite{ref:QuantumEstimation} becomes inefficient
as the dimension of the underlying system increases. In particular,
highly precise controls of the system are required for state preparation
and measurement. It is interesting to consider the case where such
controls are not given, and therefore, in general, the system cannot
be fully characterized.
In \cite{ref:QSI}, we characterized the unitary quantum
systems which cannot be distinguished by their input-output behavior,
provided that the dynamics of the control system is rich enough to
be ``accessible'' (that is, the system Lie algebra has the maximal rank, and therefore in the unitary case the system is fully controllable).
We found that under such conditions the quantum system can be identified
up to unitary equivalence. This implies that for accessible
unitary systems the relevant mathematical object which characterizes how
well they can be identified in principle is the Lie algebra $\mathfrak{l}_\text{known}$
generated by a priori information, i.e., known measurements, states, or
Hamiltonians. An accessible unitary system is fully identifiable if
and only if $\mathfrak{l}_\text{known}$ is irreducible. If it is
not irreducible, its commutant characterizes the unidentifiable parameters.
The previous work \cite{ref:QSI} left unanswered such characterization
for open quantum systems, as well as for non-accessible cases, which is relevant for quantum process tomography. In
the present paper we aim at such generalizations. We focus on non-unital
Markovian dynamics and find that under a certain rank condition systems
with equal input-output behavior are related through similarity
transformations. The rank condition describes accessible systems, tomographically complete systems, or all intermediate situations. Remarkably, the similarity cannot in general be reduced to unitarity, even for unitary systems.
We provide examples for such situations and discuss its physical consequences.
\begin{figure}
\caption{Our problem is to identify the system $\sigma=(\mathcal{L}_0,\mathcal{L}_k,\rho_0,M_\ell)$ inside the black box from its input-output behavior.}
\label{fig:BlackBox}
\end{figure}
\textit{Setup.---}
We consider a black box with $N_i$ inputs and $N_o$ outputs (Fig.\ \ref{fig:BlackBox}) \cite{ref:QSI,ref:Sontag,ref:GutaYamamoto}.
Inside the black box, some quantum-mechanical dynamics takes place.
Our goal is to find a model for the black box by looking at its input-output behavior.
In \cite{ref:QSI}, we studied this problem under the restriction that the dynamics occurring in the black box is known to be unitary.
We here relax this condition and establish a framework that allows us to deal with open quantum systems.
More specifically, we consider a $d$-dimensional system whose dynamics is governed by a master equation
\begin{equation}
\dot{\rho}(t)=\mathcal{L}_{0}(\rho(t))+\sum_{k=1}^{N_{i}}f_{k}(t)\mathcal{L}_{k}(\rho(t)),\quad
\rho(0)=\rho_0,
\label{master}
\end{equation}
that is, the system is assumed to undergo a Markovian time-evolution from some initial state $\rho_0$.
The functions $f_k(t)$ ($k=1,\ldots,N_i$) are our inputs, which are non-negative
\cite{note:1}
and piecewise constant, while $\mathcal{L}_k$ ($k=0,1,\ldots,N_i$) are some generators of the Lindblad form \cite{ref:DynamicalMap-Alicki}.
The outputs are the expectation values $g_\ell(t)=\mathop{\text{tr}}\nolimits\{M_\ell\rho(t)\}$ of some observables $M_\ell$ ($\ell=1,\ldots,N_o$)
\cite{note:2}.
We assume that many copies of the system are at hand, so that we can measure the expectation values $g_\ell(t)$ without taking the measurement back-action into account.
Our objective is to identify the system $\sigma=(\mathcal{L}_0,\mathcal{L}_k,\rho_0,M_\ell)$, by looking at the response $\{g_\ell(t)\}$ of the black box to the inputs $\{f_k(t)\}$.
It is in general not possible to fully identify the system $\sigma$ if one does not know any of its elements \cite{ref:Sontag}.
In \cite{ref:QSI}, it is proved for the case where the dynamics in the black box is unitary that the system is identifiable up to unitary equivalence, provided that the system is fully \emph{controllable}.
What if the dynamics is not unitary but governed by the master equation (\ref{master})?
We will clarify to what extent we can identify the open quantum system and which conditions are required for it.
\textit{Accessibility of open quantum systems.---}
For a fixed choice of the control functions $f_k(t)$, we can formally write the solution to the master equation (\ref{master}) as $
\rho(t)=\mathcal{C}_{t}(\rho_{0})
$, where $\mathcal{C}_{t}$ is a completely positive trace-preserving (CPT) linear map \cite{ref:DynamicalMap-Alicki}.
Since it takes over the role of the time-evolution operator from closed systems, we call the system \emph{operator controllable} if all time-dependent Markovian CPT maps can be realized by steering $f_k(t)$ \cite{note:DirrHelmke}.
Note that such a set of maps is unequal to the set of all CPT maps \cite{ref:Mixing-Wolf}. Due to the
generally irreversible nature of the dynamics, the operator controllability
is not a very useful concept for open systems
\cite{note:3}.
A more useful concept is \emph{accessibility}, which states, roughly
speaking, that all directions can be explored, even if only irreversibly \cite{note:DirrHelmke}.
Accessibility is rather complicated when talking about directions
of \emph{states}, but has an easy Lie-algebraic characterization when
talking about \emph{operator accessibility}.
One could naively write the time-evolution superoperator $\mathcal{C}_{t}$ and its Lie algebra as complex-valued $d^{2}\times d^{2}$ matrices and define operator
accessibility if a system can explore all $d^{4}$ directions. This
however does not give rise to accessible systems, because directions such
as ``changing the trace'' and ``making Hermitian matrices non-Hermitian''
can never be realized.
To take account of these constraints one usually introduces
an orthogonal basis of Hermitian matrices $(I,\lambda_{1},\ldots,\lambda_{d^2-1})$,
in which density matrices are uniquely expressed as
\begin{equation}
\rho=\frac{1}{d}I
+\sqrt{\frac{d-1}{2d}}
\sum_{k=1}^{d^2-1}r_{k}\lambda_{k}.
\label{coherence}
\end{equation}
Here, $I$ is the $d\times d$ identity matrix, and the orthogonality
of the traceless Hermitian matrices $\lambda_{j}$ is with respect to the usual
Hilbert-Schmidt product, $\mathop{\text{tr}}\nolimits\{\lambda_{i}^{\dagger}\lambda_{j}\}=2\delta_{ij}.$
$D=d^{2}-1$ is the effective dimension of the density matrices, and the real $r_{k}$ make up a $D$-dimensional vector $\vec{r}$ called \emph{coherence vector} \cite{note:CoherenceVector}.
In terms of the coherence vector, the time-evolution
$\rho(t)=\mathcal{C}_{t}(\rho_{0})$
is represented by an affine transformation
\begin{equation}
\vec{r}(t)=V(t)\vec{r}_0+\vec{v}(t),\label{affine}
\end{equation}
where $V(t)$ is a real invertible
\cite{note:4}
$D\times D$ matrix, while $\vec{v}(t)$ is a real vector.
Again, not every pair $(V(t),\vec{v}(t))$ represents a valid time-evolution since $\mathcal{C}_{t}$ needs to be CPT\@.
A trick is to embed such an affine transformation (\ref{affine}) in a matrix as
\begin{equation}
\left(\begin{array}{c}
\vec{r}(t)\\
1
\end{array}\right)
=\left(\begin{array}{cc}
V(t) & \vec{v}(t)\\
\vec{0}\,^T & 1
\end{array}\right)
\left(\begin{array}{c}
\vec{r}_0\\
1
\end{array}\right).
\end{equation}
Then, the master equation (\ref{master}) is represented by
\begin{equation}
\frac{d}{dt}
\left(\begin{array}{c}
\vec{r}(t)\\
1
\end{array}\right)
=\left(\begin{array}{cc}
A(t) & \vec{b}(t)\\
\vec{0}\,^T & 0
\end{array}\right)
\left(\begin{array}{c}
\vec{r}(t)\\
1
\end{array}\right),
\end{equation}
where
$
A(t) = A_{0}+\sum_{k}f_{k}(t)A_{k}
$
and
$
\vec{b}(t) = \vec{b}_{0}+\sum_{k}f_{k}(t)\vec{b}_{k}
$
correspond to the Lindblad generators in (\ref{master}).
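As a rough illustration (ours, not part of the paper), the following Python sketch computes the blocks $A$ and $\vec{b}$ of a given generator for a single qubit ($d=2$, $D=3$), where the Pauli matrices play the role of the $\lambda_k$ and, with the normalization of (\ref{coherence}), $A_{ij}=\mathop{\text{tr}}\nolimits\{\sigma_{i}\mathcal{L}(\sigma_{j})\}/2$ and $b_{i}=\mathop{\text{tr}}\nolimits\{\sigma_{i}\mathcal{L}(I)\}/2$. The amplitude-damping generator used at the end is our own example.
\begin{verbatim}
# Sketch: affine (coherence-vector) representation of a qubit Lindbladian.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
paulis = [sx, sy, sz]

def lindblad(rho, H, jumps):
    """L(rho) = -i[H, rho] + sum_k (V rho V+ - {V+V, rho}/2)."""
    out = -1j * (H @ rho - rho @ H)
    for V in jumps:
        VdV = V.conj().T @ V
        out += V @ rho @ V.conj().T - 0.5 * (VdV @ rho + rho @ VdV)
    return out

def affine_blocks(H, jumps):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for j, sj in enumerate(paulis):
        Lsj = lindblad(sj, H, jumps)
        for i, si in enumerate(paulis):
            A[i, j] = np.trace(si @ Lsj).real / 2
    LI = lindblad(I2, H, jumps)
    for i, si in enumerate(paulis):
        b[i] = np.trace(si @ LI).real / 2
    return A, b

# Example (ours): amplitude damping plus a sigma_z Hamiltonian.
H = 0.5 * sz
jumps = [np.array([[0, 1], [0, 0]], dtype=complex)]   # sigma_- jump operator
A, b = affine_blocks(H, jumps)
print(np.round(A, 3), np.round(b, 3))
\end{verbatim}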
The system Lie algebra is now defined by
\begin{equation}
\mathfrak{l}=\left\langle
\left(\begin{array}{cc}
A_{0} & \vec{b}_{0}\\
\vec{0}\,^T & 0
\end{array}\right),
\left(\begin{array}{cc}
A_{1} & \vec{b}_{1}\\
\vec{0}\,^T & 0
\end{array}\right),
\ldots,
\left(\begin{array}{cc}
A_{N_{i}} & \vec{b}_{N_{i}}\\
\vec{0}\,^T & 0
\end{array}\right)
\right\rangle _{[\cdot,\cdot]},
\end{equation}
and the system is called operator accessible iff
\begin{equation}
\mathfrak{l}
=
\left(\begin{array}{cc}
\mathbb{R}^{D\times D} & \mathbb{R}^{D}\\
\vec{0}\,^T & 0
\end{array}\right).
\label{eq:access}
\end{equation}
Just
like ``almost all'' systems are controllable in closed dynamics, it
can be shown that the set of accessible open quantum systems
is open and dense. Therefore accessibility, as opposed to controllability, is a useful premise for considering identifiability (for closed systems these notions coincide).
Below, we will generalize the notion of accessibility to include cases
where, roughly speaking, $\mathfrak{l}$ is smaller, but instead we
have more measurements and state preparations at hand, as would be the case in quantum process tomography.
\textit{Input-output equivalence.---}
In this representation, our outputs $g_\ell(t)$ are expressed as
\begin{equation}
g_\ell(t)
=\mathop{\text{tr}}\nolimits\{M_{\ell}\rho(t)\}
=\left(\begin{array}{cc}
\vec{m}_{\ell}^T&
m_\ell^{(0)}
\end{array}\right)
\left(\begin{array}{c}
\vec{r}(t)\\
1
\end{array}\right),
\end{equation}
where $m_\ell^{(0)}=\mathop{\text{tr}}\nolimits\{M_{\ell}\}/d$ and $\vec{m}_\ell=\sqrt{(d-1)/2d}\mathop{\text{tr}}\nolimits\{M_\ell\vec{\lambda}\}$.
The systems are characterized by $\sigma=(\mathcal{L}_{0},\mathcal{L}_{k},\vec{r}_{0},\vec{m}_{\ell},m^{(0)}_\ell)$.
Now, we call two systems equivalent $\sigma\equiv\sigma'$ if they provide
the same outputs whenever the inputs are the same.
This means that when $f_{k}(t)=f'_{k}(t)$ for all $t$ we have $g_\ell(t)=g_\ell'(t)$ for all $t$, i.e.,
\begin{equation}
\left(\begin{array}{cc}
\vec{m}_{\ell}^T&m^{(0)}_\ell
\end{array}\right)
\left(\begin{array}{c}
\vec{r}(t)\\
1
\end{array}\right)
=
\left(\begin{array}{cc}
\vec{m}_{\ell}^{\prime\, T}&
m^{\prime(0)}_\ell
\end{array}\right)
\left(\begin{array}{c}
\vec{r}\,'(t)\\
1
\end{array}\right),
\ \ \forall t.
\label{eqn:InOutEq}
\end{equation}
As in \cite{ref:QSI} there is an algebraic version of this. Slightly stretching
the notation, let us use the same symbol for the generators in (\ref{master})
and their matrix representation,
\begin{equation}
\mathcal{L}_{\bm{\alpha}}
=\left(\begin{array}{cc}
A_{\bm{\alpha}} & \vec{b}_{\bm{\alpha}}\\
\vec{0}\,^T & 0
\end{array}\right).
\label{eqn:LindbladAffine}
\end{equation}
We denote $\mathcal{L}_{\bm{\alpha}}\equiv\mathcal{L}_{\alpha_{K}}\mathcal{L}_{\alpha_{K-1}}\cdots\mathcal{L}_{\alpha_{1}}$,
where $\bm{\alpha}$ is a multi-index of length $K$ with entries
$\alpha_{j}=0,1,\ldots,N_{i}$. Further, we include the case $K=0$
as the identity matrix and denote the corresponding $\bm{\alpha}=\emptyset$.
Then, the condition (\ref{eqn:InOutEq}) algebraically amounts to
\begin{equation}
\left(\begin{array}{cc}
\vec{m}_{\ell}^T&
m^{(0)}_\ell
\end{array}\right)
\mathcal{L}_{\bm{\alpha}}
\left(\begin{array}{c}
\vec{r}_{0}\\
1
\end{array}\right)
=
\left(\begin{array}{cc}
\vec{m}_{\ell}^{\prime T}&
m^{\prime(0)}_\ell
\end{array}\right)
\mathcal{L}_{\bm{\alpha}}'
\left(\begin{array}{c}
\vec{r}_{0}^{\,\prime}\\
1
\end{array}\right).
\label{eq:io}
\end{equation}
\textit{Similarity.---}
We are now ready to extend the arguments in \cite{ref:QSI} for open quantum systems.
Accessibility allows one to expand any vector as
\begin{equation}
\left(\begin{array}{c}
\vec{r}\\
1
\end{array}\right)
=\sum_{\bm{\alpha}}\lambda_{\bm{\alpha}}\mathcal{L}_{\bm{\alpha}}
\left(\begin{array}{c}
\vec{r}_0\\
1
\end{array}\right),
\label{eq:accessibility}
\end{equation}
and we define a linear mapping $T$ by
\begin{equation}
T\left(\begin{array}{c}
\vec{r}\\
1
\end{array}\right)
=\sum_{\bm{\alpha}}\lambda_{\bm{\alpha}}\mathcal{L}'_{\bm{\alpha}}
\left(\begin{array}{c}
\vec{r}_0^{\,\prime}\\
1
\end{array}\right),
\end{equation}
where $\lambda_{\bm{\alpha}}\in\mathbb{R}$ and $\lambda_{\emptyset}=1$.
It is possible to show, under the input-output equivalence (\ref{eq:io}) and the accessibility, that this mapping $T$ is well-defined, even though the decomposition (\ref{eq:accessibility}) is not unique.
We just need to show that for two different decompositions the image of $T$ is the same, and that $T$ is invertible.
We then reach the conclusion that the two systems $\sigma$ and $\sigma'$ which are indistinguishable by the input-output behavior are similar, $\sigma\sim\sigma'$, related by the similarity transformation $T$ \cite{ref:Sontag,ref:QSI}.
In other words, by looking at the input-output behavior, one can identify the system up to similarity.
In standard quantum process tomography, however, the accessibility is not assumed.
Instead, one tries various initial states and measures various observables to characterize the system.
We can generalize the above scheme in this way, relaxing the accessibility condition.
To this end, assume that the system can be initialized in several
(unknown but fixed) states $\vec{r}_{j}$.
Even if the system is not accessible, it is possible to expand any vector as
\begin{equation}
\left(\begin{array}{c}
\vec{r}\\
1
\end{array}\right)
=\sum_{\bm{\alpha},j}\lambda_{\bm{\alpha},j}\mathcal{L}_{\bm{\alpha}}
\left(\begin{array}{c}
\vec{r}_{j}\\
1
\end{array}\right)
\label{eq:rank1}
\end{equation}
with $\lambda_{\bm{\alpha},j}\in\mathbb{R}$ and $\lambda_{\emptyset,j}\ge0$, $\sum_j\lambda_{\emptyset,j}=1$, if a sufficient variety of the initial states $\vec{r}_j$ are available.
\begin{comment}
(in which case one $\vec{r}_{j}$ suffices, let us consider without
loss of generality $\vec{r}_{0}$): Since we do not have any constraints
on the $c_{\bm{\alpha}},$ if the system is accessible then
\begin{equation}
\left(\begin{array}{cc}
\mathbb{R}^{D\times D} & \mathbb{R}^{D}\\
0 & 0
\end{array}\right)\subset\left\{ \sum_{\bm{\alpha}}c_{\bm{\alpha}}\mathcal{L}_{\bm{\alpha}}\right\} ,
\end{equation}
as any element from the Lie algebra can be expanded as a linear combination
of products of generators (the converse is not true). If $\vec{r}_{0}=\vec{0},$
we can write
\begin{equation}
\left(\begin{array}{c}
\vec{r}\\
1
\end{array}\right)=\left(\begin{array}{cc}
0 & \vec{r}\\
0 & 0
\end{array}\right)\left(\begin{array}{c}
\vec{0}\\
1
\end{array}\right)+\left(\begin{array}{cc}
I & \vec{0}\\
0 & 1
\end{array}\right)\left(\begin{array}{c}
\vec{0}\\
1
\end{array}\right).
\end{equation}
Otherwise, using letting $A=(\vec{r}-\vec{r}_{0})\vec{r}_{0}{}^{T}/\left|\vec{r}_{0}\right|^{2}\in\mathbb{R}^{D\times D}$
we have
\begin{equation}
\left(\begin{array}{c}
\vec{r}\\
1
\end{array}\right)=\left(\begin{array}{cc}
A & \vec{0}\\
0 & 0
\end{array}\right)\left(\begin{array}{c}
\vec{r}_{0}\\
1
\end{array}\right)+\left(\begin{array}{cc}
I & \vec{0}\\
0 & 1
\end{array}\right)\left(\begin{array}{c}
\vec{r}_{0}\\
1
\end{array}\right).
\end{equation}
we obtain the result. If our system is only unitaly accessible we
need to have $\vec{r}_{0}\neq0,$ which is physically clear as such
a system does not evolve at all. For the time being, let us focus
on the non-unitary case.
\end{comment}
We also need sufficiently many different observables $M_\ell$ that allow one to
write an arbitrary measurement as
\begin{equation}
\left(\begin{array}{cc}
\vec{m}^T&
m^{(0)}
\end{array}\right)
=\sum_{\bm{\alpha},\ell}\mu_{\bm{\alpha},\ell}
\left(\begin{array}{cc}
\vec{m}_{\ell}^T&
m^{(0)}_\ell
\end{array}\right)
\mathcal{L}_{\bm{\alpha}}.
\label{eq:rank2}
\end{equation}
We then define $T$ by
\begin{equation}
T\left(\begin{array}{c}
\vec{r}\\
1
\end{array}\right)
=\sum_{\bm{\alpha},j}\lambda_{\bm{\alpha},j}\mathcal{L}'_{\bm{\alpha}}
\left(\begin{array}{c}
\vec{r}_{j}^{\,\prime}\\
1
\end{array}\right),
\end{equation}
for the vector $\vec{r}$ expanded as (\ref{eq:rank1}), and it is possible to prove the same statement as the one for the accessible case that the two systems $\sigma$ and $\sigma'$ which are indistinguishable by the input-output behavior are related by the similarity transformation $T$.
Summarizing, what we need for identifying a system up to similarity is that both sets
\begin{equation}
\mathcal{L}_{\bm{\alpha}}
\left(\begin{array}{c}
\vec{r}_{j}\\
1
\end{array}\right)
\qquad\text{and}\qquad
\left(\begin{array}{cc}
\vec{m}_{\ell}^T&
m_{\ell}^{(0)}
\end{array}\right)
\mathcal{L}_{\bm{\alpha}}
\label{eq:dualrank}
\end{equation}
have full rank, which is achieved by the accessibility of the system, by a sufficient variety of the initial states $\vec{r}_{j}$ and the measurements $M_\ell$ at hand, or by a scenario between these extremes.
\textit{Structure of $T$.---}
It is easy to see that the similarity transformation $T$ must have the structure
\begin{equation}
T=\left(\begin{array}{cc}
T_{1} & \vec{t}_{2}\\
\vec{0}\,^T & 1
\end{array}\right).\label{eq:structure}
\end{equation}
\begin{comment}
Sontag further assumes $\vec{r}_{0}=0$ in which case $\vec{t}_{2}=0,$
but in open quantum systems this does not apply.
\end{comment}
Since $T$ is invertible, $T_{1}$ must be invertible. Now we come to an essential
difference from the unitary case. At the moment, $T$ transforms
a specific fixed set of Lindbladians into another set of Lindbladians.
In the unitary case, we could use controllability
to conclude that in fact \emph{all} unitary
evolutions are transformed into unitary ones, and the similarity transformation $T$ is further constrained to be unitary \cite{ref:QSI}. In the present case, on the other hand, since the
Lindbladian structure is not preserved when generating a Lie algebra,
it is not clear if we can conclude that \emph{all} Lindbladians must
be transformed into Lindbladians.
We give a negative answer by providing
an explicit example below, in fact showing that the structure of $T$
in (\ref{eq:structure}) cannot be constrained further to unitary.
\textit{Example.---}
Let us consider the following two Lindbladians for the master equation (\ref{master}) for a qubit,
\begin{align}
\mathcal{L}_{0}
&=\left(
\begin{array}{cccc}
-\frac{1}{2} & -1\\
1 & -\frac{1}{2}\\
& & -0.9 & -0.89\\
\\
\end{array}
\right),
\\
\mathcal{L}_{1}
&=\left(
\begin{array}{cccc}
-\frac{1}{2} & -1 & -1\\
1 & -\frac{1}{2}\\
1 & & -0.9 & -0.89\\
\\
\end{array}
\right).
\end{align}
Notice that not all the matrices of the structure (\ref{eqn:LindbladAffine}) are valid Lindbladians: the matrix $\Gamma_{ij}$ (we call it Kossakowski matrix) of a generator for a qubit,
$
\mathcal{L}(\rho)
=-i\sum_{i=1}^3[h_i\sigma_i,\rho]-\frac{1}{2}\sum_{i,j=1}^3\Gamma_{ij}(\sigma_i\sigma_j\rho+\rho\sigma_i\sigma_j-2\sigma_j\rho\sigma_i)
$ [with $\sigma_i$ ($i=1,2,3$) being Pauli matrices], must be positive semi-definite for complete positivity \cite{ref:DynamicalMap-Alicki}.
The eigenvalues of the Kossakowski matrices of the above generators $\mathcal{L}_0$ and $\mathcal{L}_1$ are both given by $\gamma_{0}=\gamma_{1}=\{0.0025,0.0250,0.4475\}$, which are all positive.
Roughly speaking, the first Lindbladian $\mathcal{L}_0$ describes an amplitude damping process in the presence of a magnetic field in the $z$ direction (but with the elements $-0.9$ and $-0.89$ instead of $-1$) \cite{ref:NielsenChuang}, while the other one $\mathcal{L}_1$ has an additional magnetic field in the $y$ direction.
This additional field
guarantees that the system is accessible, as is easily confirmed by generating
their Lie algebra numerically (which is $12$-dimensional). Now, consider
the similarity transformation
\begin{equation}
T=\left(\begin{array}{cccc}
1 & 0.01\\
& 1\\
& & 1 & -0.01\\
&&&1\\
\end{array}\right).
\label{eqn:Texample}
\end{equation}
The Lindbladians $\mathcal{L}_0$ and $\mathcal{L}_1$ are transformed by this $T$ to generators whose Kossakowski spectra are given by $\gamma_{0}'=\{0.0047,0.0250,0.4453\}$ and $\gamma_{1}'=\{0.0044,0.0253,0.4453\}$, respectively, which means that they remain valid Lindbladians.
Since the Kossakowski spectra have changed, it is
clear that the two systems cannot be connected by a unitary transformation.
On the other hand, a purely unitary component such as
\begin{equation}
\mathcal{L}_{2}=\left(\begin{array}{cccc}
& & -1&\\
\\
1\\
\\
\end{array}\right)
\label{eqn:L3}
\end{equation}
is transformed to a generator with a Kossakowski spectrum $\{-0.0035,0,0.0035\}$, which is not physical.
If we know in advance that we have the control $\mathcal{L}_{2}$, the similarity transformation $T$ in (\ref{eqn:Texample}) is rejected and the similarity transformation connecting the two systems is restricted.
If not, however, we cannot exclude $T$ in (\ref{eqn:Texample}).
To complete the picture, we should also check whether initial states $\vec{r}_{j}$ remain valid under the similarity transformation $T$.
There actually exist such vectors: the completely mixed state is transformed to a valid state,
$\vec{r}_j=(0,0,0)\xrightarrow{\ T\ }(0,0,-0.01)$,
and so are the states around it.
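The accessibility claim above can also be reproduced numerically. The following Python sketch (ours) generates iterated commutators of the two affine matrices $\mathcal{L}_0$ and $\mathcal{L}_1$ quoted above and returns the dimension of their real linear span; the paper reports the value $12$, i.e., the full algebra (\ref{eq:access}) for $D=3$.
\begin{verbatim}
# Sketch: dimension of the Lie algebra generated by the example Lindbladians.
import numpy as np

L0 = np.array([[-0.5, -1.0,  0.0,  0.00],
               [ 1.0, -0.5,  0.0,  0.00],
               [ 0.0,  0.0, -0.9, -0.89],
               [ 0.0,  0.0,  0.0,  0.00]])
L1 = np.array([[-0.5, -1.0, -1.0,  0.00],
               [ 1.0, -0.5,  0.0,  0.00],
               [ 1.0,  0.0, -0.9, -0.89],
               [ 0.0,  0.0,  0.0,  0.00]])

def lie_algebra_dim(generators, tol=1e-9):
    n = generators[0].shape[0]
    stacked = np.array([g.flatten() for g in generators])
    _, s, vt = np.linalg.svd(stacked, full_matrices=False)
    basis = vt[s > tol]                      # orthonormal basis of the span
    while True:
        elems = [row.reshape(n, n) for row in basis]
        brackets = [(g @ e - e @ g).flatten() for g in generators for e in elems]
        _, s, vt = np.linalg.svd(np.vstack([basis] + brackets), full_matrices=False)
        new_basis = vt[s > tol]
        if new_basis.shape[0] == basis.shape[0]:
            return basis.shape[0]
        basis = new_basis

print(lie_algebra_dim([L0, L1]))   # expected: 12
\end{verbatim}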
\begin{comment}
\begin{equation}
\vec{m}_{0}=\left(\begin{array}{c}
0.5\\
0\\
0
\end{array}\right)\longrightarrow\left(\begin{array}{c}
0.5\\
0\\
-0.01
\end{array}\right).
\end{equation}
\end{comment}
In summary, there actually exists a similarity transformation $T$ connecting two valid, accessible open systems with equal input-output behavior, which are unitarily inequivalent.
\textit{Unitary dynamics without control.---}
Given the rich structure of noisy quantum dynamics, the above fact that indistinguishable open systems are generally not unitarily equivalent is perhaps not surprising. What is remarkable however is that in general this remains the case even under the premise of unitary dynamics.
Assume for simplicity that we apply no control to the system, except for deciding the initial states, run times, and measurements, with the rank condition (\ref{eq:dualrank}) fulfilled by the available states and measurements.
The system is not accessible, and the protocol is similar to standard quantum process tomography, with the main difference that
the reference states and the measurements themselves are not known.
Consider then a Liouvillian $\mathcal{L}_0$ whose spectrum is given by
\begin{equation}
\{0^6,\pm1^2,\pm2^2,\pm3,\pm4,\pm5^2,\pm6^2,\pm7,\pm8,\pm9,\pm10,\pm11\},
\label{eqn:TurnpikeLiouvilleSpectrum}
\end{equation}
where the superscripts denote multiplicities. According to our theorem, each valid black-box model has to have this spectrum, as the models are all related by similarity transformations.
Let us now assume that we are sure that the dynamics occurring in the black box is unitary, i.e., the true Liouvillian has the structure $\mathcal{L}_0 = -i [H_0,{}\cdot{}\,]$.
Is it possible
to identify the Hamiltonian $H_0$ up to unitarity?
The answer is no.
In fact, two Hamiltonians $H_0$ and $H_0'$ whose spectra are given by
\begin{equation}
\{E_n\}=\{0,1,2,6,8,11\},\ \
\{E'_n\}=\{0,1,6,7,9,11\}
\end{equation}
give rise to the same Liouvillian spectrum (\ref{eqn:TurnpikeLiouvilleSpectrum}).
This is an example of the non-uniqueness of the ``turnpike problem'' \cite{TURNPIKE} (interestingly, there are no such systems of dimension smaller than $6$).
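The coincidence of the two Liouvillian spectra boils down to the fact that the two energy lists have the same multiset of pairwise differences $E_m-E_n$; this is quickly verified, for instance, by the following Python snippet (our own sanity check, not part of the argument).
\begin{verbatim}
# Check that {0,1,2,6,8,11} and {0,1,6,7,9,11} have identical difference sets,
# i.e., give the same Liouvillian spectrum -i(E_m - E_n).
from collections import Counter

E, Ep = [0, 1, 2, 6, 8, 11], [0, 1, 6, 7, 9, 11]
diffs = lambda spec: Counter(a - b for a in spec for b in spec)
print(diffs(E) == diffs(Ep))   # True
\end{verbatim}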
We cannot discriminate these two Hamiltonians, which are not unitarily related to each other, from the input-output behavior of the black box.
Yet, having the same spectrum, there must exist a similarity transformation connecting the two Liouvillians $\mathcal{L}_0=-i[H_0,{}\cdot{}\,]$ and $\mathcal{L}_0'=-i[H_0',{}\cdot{}\,]$.
Consider the case where $H_0$ and $H_0'$ are diagonal in the same basis $\{|{n}\rangle\}$.
Then, $\mathcal{L}_0$ and $\mathcal{L}_0'$ are both diagonal on the same operator basis $|{m}\rangle\langle{n}|$ with eigenvalues $-i(E_{m}-E_{n})$ and $-i(E'_{m}-E'_{n})$, respectively.
Because these spectra are equal, they can be related by a permutation of double indices $(m,n)\leftrightarrow(m',n')$.
Furthermore, because the spectrum of Liouvillians is symmetric about zero, $(m,n)\leftrightarrow(m',n')$ implies $(n,m)\leftrightarrow(n',m')$, and in addition the permutation can be chosen such that $(n,n)\leftrightarrow(n,n)$.
Then, the action of this similarity transformation $\mathcal{P}$ on density matrices can easily be seen to be trace-preserving, Hermiticity-preserving,
and unital, $\mathcal{P}(I)=\sum_n\mathcal{P}(|n\rangle\langle n|)=\sum_n|n\rangle\langle n|=I$.
This implies that a ball of states around the maximally mixed state is mapped into another ball of states.
This shows the existence of a valid similarity transformation $\mathcal{P}$ connecting the two systems with the Hamiltonians $H_0$ and $H_0'$.
The non-uniqueness of the turnpike problem means that in the standard framework
of unitary quantum process tomography, without additional knowledge, the input-output behavior only determines the spectrum of the Liouvillian, but not that of the Hamiltonian. The Hamiltonian formalism becomes meaningful only in the presence of further controls used to estimate the system, where the transformation can be represented by a unitary \cite{ref:QSI}.
Finally, we remark that the above considerations imply that spectral data cannot be uniquely identified from transition frequencies. This is similar to the non-uniqueness discussed in \cite{ref:SchirmerOi}. However, there the ambiguity arises entirely from not knowing the multiplicity of the transition frequencies.
\textit{Conclusions.---}
We have provided a general framework for the identification of Markovian open quantum systems. The examples disclosed a rich and intriguing structure. In particular we found that systems with different strengths and types of noise can nevertheless display the same input-output behavior, and that even unitary systems with different Hamiltonians can give the same observable data, despite accessibility. Our work sets the frame for further generalizations to non-Markovian systems and systems with feedback dynamics \cite{ref:GutaYamamoto}.
An interesting application of our results would be to identify the input-output behavior of the Fenna-Matthews-Olson complex, which is a noisy system in which ultrafast control seems promising \cite{ref:Hoyer}. The question of whether quantum effects can be observed in this system could be answered unambiguously and without further assumptions if the system is found to be accessible and if each equivalent representation, as characterized by similarity transformations, turns out to be nonclassical.
\textit{Acknowledgements.---}
DB would like to thank David Gross for pointing out reference \cite{TURNPIKE}.
This work is partially supported by the Erasmus Mundus-BEAM Program, and by the Grant for Excellent Graduate Schools from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.
KY is supported by a Waseda University Grant for Special Research Projects (2013B-147).
\end{document} |
\begin{document}
\title{Polynomial representation of Fermat's Last Theorem}
\begin{abstract}We propose a new approach to the solution of Fermat\rq{}s Last Theorem (FLT): to each FLT equation we associate a polynomial of the same degree. The study of the roots of this polynomial allows us to investigate the validity of FLT. This technique, certainly within the reach of Fermat himself, allows us to infer that this is the \textit{marvelous proof} that Fermat claimed to have.\\
\\
Keywords: Fermat's Last Theorem, Diophantine equations, polynomials
\end{abstract}
\setcounter {section}{0}
\section{Introduction}
In 1637 Pierre de Fermat wrote in the margin of a copy of Diophantus's \textit{Arithmetica}, the book in which he used to record many of his famous observations~[1]:
\begin{quotation}
\textit{"It is impossible to separate a cube into two cubes or a fourth power into two fourth powers or, in general, all the major powers of two as the sum of the same power. I have discovered a truly marvelous proof of this theorem, which can\rq{}t be contained in the too narrow page margin".}
\end{quotation}
In other words, the previous statement can be condensed as follows: \\
the equation:
\begin{equation} \label{eq.1}
{A^n+B^n=C^n}
\end{equation}
has no solutions, other than trivial ones\footnote{The trivial solution is a solution with at least one of the integers A, B and C equal to zero}, in integers $A, B, C$ for any integer $n>2$.
Equation (\ref{eq.1}) is known as Fermat's Last Theorem (FLT): \lq\lq{}Last\rq\rq{} not because it was Fermat's final work in a chronological sense, but because it remained unproved for over 350 years. In fact, although Fermat stated the unsolvability of (1), he never provided a complete proof (perhaps it was lost), leaving a proof only of the case n = 4. Strictly speaking, therefore, it would be more correct to speak of Fermat\rq{}s conjecture.
Today, many mathematicians are of the opinion that Fermat was wrong and that he did not have a genuine complete proof. Others think that Fermat did have such a proof, or at least that he had glimpsed the road to it, but that, as was his custom, he was careless enough to let it be lost.
In any case, whichever position one takes, the fact remains that for over 350 years the greatest mathematicians tried to find such a proof without success.
Only in 1994, after seven years of complete dedication to the problem, did Andrew Wiles, who had been fascinated by the theorem since childhood and had dreamed of solving it, finally manage to give a proof. Since then, we may properly refer to (\ref{eq.1}) as Fermat's theorem.
However, Wiles used elements of modern mathematics and algebra [2] that Fermat could not have known: the proof that Fermat claimed to have, if it was correct, must therefore have been quite different.
In this paper we try to give our contribution by proposing a proof of Fermat\rq{}s Last Theorem that uses a technique certainly within the reach of Fermat himself, and we then infer that this is the {\itshape marvelous proof} that Fermat claimed to have. In keeping with Fermat\rq{}s supposed knowledge, we also avoid using procedures and notation proper to modern algebra.
\section{First considerations [3][4]}
Before going into the proof in depth, we recall some well-known\footnote{See [3] pag.2} facts relating to (1).
a) In the usual parlance, saying that Fermat's theorem is true is equivalent to saying that (1) is never satisfied. Nevertheless, the trivial solution is a genuine solution that we have to take into account, as we\rq{}ll see later.
b) It is sufficient to prove (1) for the exponent n = 4 and for every odd prime n.
As mentioned, the case n = 4 was proved directly by Fermat.
c) A, B, C must be such that their greatest common divisor $(GCD)$ equals 1 when they are taken in pairs, i.e.: \\
$GCD(A,B)=GCD(A,C)=GCD(B,C)=1$ and also $GCD(A,B,C)=1$.
d) An important corollary of the previous property is that the three variables $A, B, C$ can\rq{}t all have the same parity and, moreover, only one of them can be even, according to the following scheme:
\small
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& A & B & C \\
\hline
1 & odd & odd & even \\
\hline
2 & odd & even & odd \\
\hline
3 & even & odd & odd \\
\hline
\end{tabular} ~ Tab.1
\end{center}
\normalsize
e) Another important corollary of c) is: \\
$GCD (A+B, C-A) = 1$\\
$GCD (A+B, C-B) = 1$\\
$GCD (C-A, C-B) = 1$
\section{Demonstration of FLT}
Here we consider case 1 of Tab.1, that is, $A$ and $B$ both odd. \\
Cases 2 and 3 of Tab.1 will be discussed in Appendix A.\\
Let
\begin{equation}
D = C-A = odd\ integer
\end{equation}
$$E = C-B = odd\ integer$$
then, by the considerations in point e) above, we have $GCD (D,E) = 1$. \\
From (1) and (2) we obtain
\begin{equation} \label{eq.3}
{A^n+B^n=(C-D)^n+(C-E)^n=C^n}
\end{equation}
where $n$ = prime number $\geq$ 3, then developing the powers of binomials
\begin{equation}
(C-D)^n=\sum_{k=0}^{n}(-1)^k {n \choose k} D^k C^{n-k}
=C^n+\sum_{k=1}^{n}(-1)^k {n \choose k} D^k C^{n-k}
\end{equation}
\begin{equation}
(C-E)^n=\sum_{k=0}^{n}(-1)^k {n \choose k} E^k C^{n-k}
=C^n+\sum_{k=1}^{n}(-1)^k {n \choose k} E^k C^{n-k}
\end{equation}
where, for convenience, we have pulled the $k=0$ term out of the summation.
Substituting (4) and (5) in (3) we obtain the fundamental relationship
\begin{equation}
P_{(C,n)}=C^n+ \sum_{k=1}^{n}(-1)^k {n \choose k} \left ( D^k +E^k\right) C^{n-k}=0
\end{equation}
Equation (6) defines a polynomial (which we call the \textit{associated polynomial}) in the unknown $C$, complete, of degree n and with integer coefficients. The fundamental theorem of algebra assures us that (6) has n roots, which can be distinct, (partially) coincident, integer, irrational or complex conjugate\footnote{The polynomial in (6), being monic and with integer coefficients, cannot have rational non-integer roots [5]. Moreover, as in the case of complex roots, the irrational roots must appear in conjugate pairs, that is, if $a+\sqrt{b}$ is an irrational root of (6) then also $a-\sqrt{b}$ is a root, where $a$ and $b$ are integer numbers and $\sqrt{b}$ is irrational. See Appendix C}.
Whatever the type of the roots, what interests us is the existence of possible integer roots of (6): indeed, given any integers $D, E$ and $n$, if we could find at least one integer solution, other than the trivial one, in the full set \{$\Gamma_i$\} of its roots, then, using the relations (2), we could recover $A$ and $B$ and disprove Fermat\rq{}s Theorem.
In other words, if we can prove that (6) has no integer solutions in $C$, however $D, E$ and $n$ are chosen, then the theorem can never be disproved and therefore Fermat was right, that is, (1) has no solution in the ring of integers.
Equivalently we can state the following
\newtheorem{teorema}{Lemma}
\begin{teorema}
given any integers D, E, and n such that $GCD (D, E)=1$ and n$\geq$3, showing that (6) does not admit any integer solution, other than the trivial one, for the unknown variable C is equivalent to proving that Fermat\rq{}s Last Theorem is true.
\end{teorema}
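To make the role of Lemma 1 concrete, the following small computational illustration (ours, and not part of the argument) searches for integer roots of the associated polynomial (6) for a given choice of $D$, $E$ and $n$; since the polynomial is monic with integer coefficients, any integer root must divide the constant term $D^n+E^n$, so it suffices to test its divisors.
\begin{verbatim}
# Illustrative search for integer roots of the associated polynomial (6).
# The values D = 1, E = 3, n = 3 are an example of ours.
from math import comb

def associated_poly(C, D, E, n):
    return C**n + sum((-1)**k * comb(n, k) * (D**k + E**k) * C**(n - k)
                      for k in range(1, n + 1))

def integer_roots(D, E, n):
    const = D**n + E**n
    divisors = {d for d in range(1, abs(const) + 1) if const % d == 0}
    candidates = sorted(divisors | {-d for d in divisors})
    return [c for c in candidates if associated_poly(c, D, E, n) == 0]

print(integer_roots(D=1, E=3, n=3))   # [] : no integer root for this choice
\end{verbatim}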
The fundamental theorem of algebra assures us that the polynomial in (6) can be expressed as
\begin{equation}
P_{(C,n)}=\prod_{i=1}^{n}\left ( C-\Gamma_i \right)=0
\end{equation}
where $\Gamma_i$ are the roots of (6).
For (6) and (7) to be equal, it is necessary and sufficient that the coefficients of the terms of the same degree in $C$ be equal.
Expanding (6) and (7) in their terms, we get:
\begin{eqnarray*}
P_{(C,n)} & = & C^n-{n \choose 1} \left (D+E\right) C^{n-1}+{n \choose 2} \left ( D^2 +E^2\right) C^{n-2}-... \nonumber \\
& + & {n \choose {n-1}} \left ( D^{n-1} +E^{n-1}\right) C- {n \choose n}\left ( D^n +E^n\right)=0 \hspace{1.6cm} (6a) \\
\end{eqnarray*}
\begin{eqnarray*}
P_{(C,n)}& = & \prod_{i=1}^{n}\left ( C-\Gamma_i \right)=\left ( C-\Gamma_1 \right)\left ( C-\Gamma_2 \right) ... \left ( C-\Gamma_{n-1} \right)\left ( C-\Gamma_n\right)=\nonumber \\
& =&C^n-\left (\sum_{{i_1}=1}^{n}\Gamma_{i_1}\right)C^{n-1}+\left (\sum_{1\leq{i_1}<{i_2}}^{n}\Gamma_{i_{1}}\Gamma_{i_{2}}\right)C^{n-2}-...\nonumber \\
& +& \left (\sum_{1\leq{i_1}<{i_2}<...<{i_{n-1}}}^{n}\Gamma_{i_{1}}\Gamma_{i_{2}}...\Gamma_{i_{n-1}}\right)C-\left (\Gamma_1\Gamma_2...\Gamma_n\right)=0 \hspace{1.2cm} (7a) \\
\end{eqnarray*}
Equation (7a) shows that the expansion of (7) leads to an expression which is a sum of terms with decreasing powers of $C$, whose ${(n-k)th}$ coefficient is given by the sum of all possible combinations, without repetition, of the n roots taken k at a time.
Equating the coefficients of the terms of equal degree in (6a) and (7a), we arrive at the following fundamental system of equations (also known as Vi\`ete\rq{}s formulas):
$$
{\mathcal \ }
\left\{
\begin{array}{llll}
\displaystyle {n \choose 1}\left (D+E\right)={\sum_{{i_1}=1}^{n}}\Gamma_{i_1}=\\
\displaystyle \hspace{1.0cm}=\Gamma_1+(\Gamma_2+...+\Gamma_{n-1}+\Gamma_n ) \hspace{2.3cm}=\Gamma_1+t_1\hspace{1.3cm}(8a) \\
\\
\displaystyle {n \choose 2}\left (D^2+E^2\right) ={{\sum_{1\leq{i_1}<{i_2}}^{n}}\left(\Gamma_{i_1}\Gamma_{i_2}\right)=}\\
\\
\hspace{1.0cm}=\Gamma_1(\Gamma_2+\Gamma_3+...+\Gamma_n)+
\displaystyle {{\sum_{2\leq{i_2}<{i_3}}^{n}}\left(\Gamma_{i_2}\Gamma_{i_3}\right)}
\hspace{0.2cm}=\Gamma_1t_1+t_2 \hspace{1.0 cm}(8b)\\
\\
\cdots \cdots\\
\\
\displaystyle {n \choose {n-1}}\left (D^{n-1}+E^{n-1}\right) =
\displaystyle
{\sum_{1\leq{i_1}<{i_2}<...<{i_{n-1}}}^{n}}\left(\Gamma_{i_1}\Gamma_{i_2}...\Gamma_{i_{n-1}}\right)= \\
\\
\displaystyle \hspace{1.0cm}
=\Gamma_1\left( {\sum_{2\leq{i_2}<...<{i_{n-1}}}^{n}}\Gamma_{i_2}\Gamma_{i_3}...\Gamma_{i_{n-1}}\right)+\\
\\
\displaystyle \hspace{1.5cm}+\Gamma_2\Gamma_3...\Gamma_{n-1}\Gamma_n
\hspace{4.1cm}=\Gamma_1t_{n-2}+t_{n-1}\hspace{0.2cm}(8c)\\
\\
\displaystyle {n \choose n}\left (D^n+E^n\right)=\Gamma_1\left(\Gamma_2...\Gamma_{n-1}\Gamma_n
\right)\hspace{2.3cm}=\Gamma_1t_{n-1}\hspace{1.4cm}(8d)
\end{array}
\right.
$$
\\
where, $\displaystyle t_1 = (\Gamma_2+...+ \Gamma_{n-1} + \Gamma_n), \
t_2 ={\sum_{2\leq{i_2}<{i_3}}^{n}}\left(\Gamma_{i_2}\Gamma_{i_3}\right)=(\Gamma_2\Gamma_3+\dots+
\Gamma_2\Gamma_n+\dots+\Gamma_{n-1}\Gamma_n)$ and so on, and in particular $\displaystyle t_{n-1} = (\Gamma_2 \Gamma_3...\Gamma_{n-1} \Gamma_n)$. Moreover, let $\Gamma_1$ be, without loss of generality, the trivial integer root; we will show that it is the only possible integer root.
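For concreteness (this illustration is ours), in the smallest case $n=3$ the associated polynomial (6a) reads
\[P_{(C,3)}=C^3-3(D+E)C^2+3(D^2+E^2)C-(D^3+E^3)=0,\]
and the system (8) reduces to
\begin{eqnarray*}
3(D+E)&=&\Gamma_1+(\Gamma_2+\Gamma_3)=\Gamma_1+t_1,\\
3(D^2+E^2)&=&\Gamma_1(\Gamma_2+\Gamma_3)+\Gamma_2\Gamma_3=\Gamma_1t_1+t_2,\\
D^3+E^3&=&\Gamma_1\Gamma_2\Gamma_3=\Gamma_1t_2.
\end{eqnarray*}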
From equations (8) follows the important
\begin{teorema}
If D and E have the same parity then the terms on the right side of each equation in (8) must have an even integer value.
\end{teorema}
We will see that the condition that $D$ and $E$ are both odd is incompatible with fulfilling all the relations (8).
According to Lemma 1, to prove the FLT we have to show that the associated polynomial admits one, and only one, integer root, namely the trivial solution\footnote{Here, we impose on the trivial solution only the constraint of being an integer. It is straightforward to verify in (6) that the cases in which ABC=0 imply $P_{(C,n)}=0$.}.\\
\\
{\bfseries Proof:}
We begin by observing that in equation (6a) all terms, except the first one, contain the even factor $(D^k+E^k)$; therefore, if an integer root exists, it must be even.
\\
Now, by Lemma 2, the term on the right side of each equation in the system (8) must be an even integer; then:
\begin{itemize}
\item The case in which any of the $t_i$ is not an integer is obviously ruled out.
\item Since $\Gamma_1$ is an even root of (6a), $t_1$ and all the $t_i$ must also be even integers\footnote{The recursive form of (8) implies that the parity of the $t_i$ propagates along all the equations. In fact, in (8a) $\Gamma_1$ and $t_1$ must have the same parity in order for their sum to be even; then in (8b) $t_2$ must also have the parity of $t_1$, and so on. On the other hand, $\Gamma_1$ cannot be odd, because otherwise the term $\Gamma_1t_{n-1}$ in (8d) would also be odd, in contradiction with Lemma 2.}.
\item The left side of equation (8c)\footnote{Equation (8c) is in general the penultimate equation of any system with n=p equations. On the left side of this equation there is always the sum of two even powers (namely n-1) of odd terms. On the right side, due to the construction of Vi\`ete\rq{}s formulas, there is always the sum of terms made up of all possible combinations, without repetition, of the n roots taken in groups of n-1 elements.} can never be divisible by 4 (see Appendix B); so, $\Gamma_1$ and $t_{n-2}$ being both even, if $t_{n-1}$ were divisible by 4 then the right side would be a multiple of 4, and therefore this case is also excluded.
\item The term $t_{n-1}$ is the product of the (n-1) roots of equation (6a) and, as already said, these can be even integers, irrational or complex. In the last two cases they must appear in conjugate pairs\footnote{See Appendix C}.
\end{itemize}
Therefore we have the following two cases:
\begin{description}
\item[a)] If all the roots $\Gamma_2, \Gamma_3, \ldots \Gamma_{n-1}, \Gamma_n$ are conjugate pairs (irrational or complex) then, even if they fulfill all the relations (8), by Lemma 1 the FLT is proved, because there is no integer root other than the trivial one $\Gamma_1$.
\item[b)] If some of the $\Gamma_i$ (i=2, 3, \ldots n) were integers, then they would have to be even and occur at least in a pair, thus carrying a factor of 4 into equation (8c); but this is ruled out by Appendix B.
\end{description}
This exhausts all possible cases, showing that (6a) does not admit integer solutions, other than the trivial one, for any odd integers D and E and for all $n = odd \ primes$; then, by Lemma 1, Fermat\rq{}s Last Theorem is proved.
\section{Conclusions}
In the previous sections we have demonstrated the validity of FLT for n $\geq$ 3 when $A$ and $B$ are both odd (case 1 in Tab.1).
In Appendix A we show that the FLT also holds in the cases in which $A$ and $B$ have opposite parity (cases 2 and 3 in Tab.1).
In conclusion, we have proved the validity of Fermat\rq{}s Last Theorem by a procedure that, without doubt, Fermat himself could have known, and we can therefore infer that this is the {\itshape marvelous proof}, probably lost, that he claimed to possess. The procedure described in this paper does not allow us to prove\footnote{See Appendix A} the case n = 4, which explains why Fermat took care to demonstrate it in another way.\\
We note that Andrew Wiles proved the FLT only indirectly. In fact, Wiles proved the validity of the Taniyama-Shimura conjecture, which asserts that every elliptic curve must be related to a modular form. Gerhard Frey had previously devised a mechanism that links the FLT to elliptic curves and thus indirectly to the Taniyama-Shimura conjecture.
The proof of FLT presented in this work, besides verifying the validity of FLT itself, would then allow one to say, through Frey\rq{}s mechanism, that the Taniyama-Shimura conjecture is verified without the use of Wiles\rq{}s proof.
\\
\appendix
\section{Appendix }
Here we want to analyze cases 2 and 3 of Tab.1, that is, $A$ and $B$ having opposite parity. Of course it is enough to discuss only case 2, since case 3 can be reduced to case 2 by exchanging the variables $A$ and $B$. \\
We start again from relation (1), where we now consider $A=odd$ and $B=even$ and therefore $C=odd$; then
\begin{eqnarray*}
A^n+B^n &=& C^n \ \ \ \ \ \ \ \ or \hspace{7.3cm} (A1a) \\
A^n-C^n &=& -B^n \hspace{8.4cm} (A1b)
\end{eqnarray*}
Let us define the two variables\footnote{Considerations similar to those made for $D$ and $E$ in the \lq\lq{}First considerations\rq\rq{} paragraph bring us to conclude that $GCD (F,G)=1$.} (similar to $D$ and $E$)
\begin{eqnarray*}
F=-B-A =odd \ number \hspace{7cm}(A2a) \\
G=-B+C=odd \ number \hspace{7cm}(A2b)
\end{eqnarray*}
From (A1b), (A2a) and (A2b) we have
$$
A^n-C^n = (-B-F)^n - (B+G)^n = -B^n \ \ \ \ \ \ \ \ \ \ or \hspace{4cm} (A3a)
$$
$$
(B+F)^n+(B+G)^n = B^n \hspace{7.7cm} (A3b)
$$
\textbf{Note}: the step from (A3a) to (A3b), due to the negative signs inside the first parentheses, can be done only if $n$ is an odd number\footnote{Assuming that the proof given in this paper is actually the {\itshape marvelous proof} that Fermat claimed to have, and that was probably lost, we can understand why he took care to demonstrate the case n = 4 by other means.}.\\
\\
Developing the powers of binomials in (A3b), we get:
\begin{eqnarray*}
(B+F)^n &=& \sum_{k=0}^{n} {n \choose k} F^k B^{n-k}=B^n+\sum_{k=1}^{n} {n \choose k} F^k B^{n-k} \hspace{2.2cm} (A4) \\
(B+G)^n &=& \sum_{k=0}^{n} {n \choose k} G^k B^{n-k}=B^n+\sum_{k=1}^{n} {n \choose k} G^k B^{n-k} \hspace{2.2cm} (A5) \\
\end{eqnarray*}
where, for convenience, we have pulled the $k=0$ term out of the summation.
Substituting (A4) and (A5) in (A3b) we obtain the fundamental relationship
\begin{eqnarray*}
P_{(B,n)}&=& B^n+ \sum_{k=1}^{n} {n \choose k} \left ( F^k +G^k\right) B^{n-k}=0 \hspace{4.cm} (A6) \\
\end{eqnarray*}
Equation (A6) is the expression of a polynomial in the unknown $B$, completely equivalent, except for the factor $(-1)^k$, to equation (6); it therefore leads to an equation similar to (6a), namely:
\begin{eqnarray*}
P_{(B,n)} & = & B^n+{n \choose 1} \left (F+G\right) B^{n-1}+{n \choose 2} \left ( F^2 +G^2\right) B^{n-2}+... \nonumber \\
& + & {n \choose {n-1}} \left ( F^{n-1} +G^{n-1}\right) B+ \left ( F^n +G^n\right)=0 \hspace{2.2cm} (A6a)
\end{eqnarray*}
Now we can borrow all the considerations made in the \lq\lq{}Demonstration of FLT\rq\rq{} section, thus showing that equation (A6a) cannot admit integer roots other than the trivial one, and therefore proving that Fermat\rq{}s Last Theorem is valid also in cases 2 and 3 of Tab.1.
\section{Appendix }
\newtheorem{Theorem}{Theorem}
\begin{Theorem}
Let $X$ and $Y$ be two odd positive integers and $n$ even; then the quantity $X^n+Y^n$ is never divisible by 4.
\end{Theorem}
{\bfseries Proof:}\\
Let n=2; then, since $X$ is odd, either ${X\equiv1\pmod{4}}$ or ${X\equiv3\pmod{4}}$, and hence ${X^2\equiv1\pmod{4}}$ for any odd $X$.
Moreover, ${X^4=X^2X^2\equiv1\pmod{4}}$, so, by induction, ${X^n=X^2X^{n-2} \equiv1\pmod{4}}$ for any even n and odd $X$.\\
In conclusion, ${(X^n+Y^n)\equiv2\pmod{4}}$, and therefore the sum is never divisible by 4.
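A quick numerical sanity check of Theorem 1 over a small range of values (ours, purely illustrative) can be carried out as follows.
\begin{verbatim}
# Check X^n + Y^n == 2 (mod 4) for odd X, Y and even n over a small range.
print(all((X**n + Y**n) % 4 == 2
          for X in range(1, 50, 2)
          for Y in range(1, 50, 2)
          for n in range(2, 12, 2)))   # True
\end{verbatim}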
\section{Appendix }
Actually, equations (8a) and (8c) pose strong constraints on the values of the roots $\Gamma_j$. Indeed,
suppose the $\Gamma_{j}$, $j>1$, are irrational or integer numbers, and write $\Gamma_j=\gamma_{j}+\delta_j$ (with $ j>1$), where $\gamma_{j}$ is the integer part of $\Gamma_j$ and $0\leq \delta_j < 1$ is its irrational decimal part.
In order for the sum in (8a) to be an integer, we must have
\[t_1=\sum_{j=2}^{n} \Gamma_{j} = \sum_{j=2}^{n}\left( \gamma_{j} +\delta_j\right)= \sum_{j=2}^{n} \gamma_{j} +\sum_{j=2}^{n} \delta_j \tag{C1}\label{eq1}\]
where $\displaystyle\sum_{j=2}^{n} \delta_j$ itself must be either an integer or zero and $n=odd \ prime$.\\
In a similar way, from (8c) we have
\[
t_{n-1}=\prod_{j=2}^{n} \Gamma_j= \prod_{j=2}^{n}\left (\gamma_{j} +\delta_j\right)\tag{C2}\label{eq2}
\]
in order that both expressions (C1) and \eqref{eq2} give integer values, the $\Gamma_j$ must take conjugate values in pairs\footnote{
We begin by considering only two terms $\Gamma_j$ and $\Gamma_{j+1}$, then we must have (from 8a)\\
$S_j=\Gamma_j+\Gamma_{j+1}=
\left(\gamma_{j}+\delta_j\right)+\left(\gamma_{j+1}+\delta_{j+1}\right)=integer$
and therefore $\delta_j+\delta_{j+1}=0$, that is, $\delta=\delta_j=-\delta_{j+1}$; moreover (from 8c)\\
$M_j=\Gamma_j\Gamma_{j+1}=\left(\gamma_{j}+\delta\right)\left(\gamma_{j+1}-\delta\right)=
\gamma_{j}\gamma_{j+1}+\left(\gamma_{j+1}-\gamma_{j}\right)\delta-\delta^2=k$
with $k$ an integer; it then follows\\
$\delta=\frac{\gamma_{j+1}-\gamma_{j}}{2}\pm\frac{\sqrt{\left(\gamma_{j+1}-\gamma_{j}\right)^2-4\lambda}}{2}$ where $\lambda=k-\gamma_j\gamma_{j+1}$ then getting the positive sign only\\
\\
$\Gamma_j=\left(\gamma_{j}+\delta\right)=\gamma_{j}+\frac{\gamma_{j+1}-\gamma_{j}}{2}+\frac{\sqrt{\left(\gamma_{j+1}-\gamma_{j}\right)^2-4\lambda}}{2}=\alpha_j+\sqrt{\beta_j}$ and\\
\\
$\Gamma_{j+1}=\left(\gamma_{j+1}-\delta\right)=\gamma_{j+1}-\frac{\gamma_{j+1}-\gamma_{j}}{2}-\frac{\sqrt{\left(\gamma_{j+1}-\gamma_{j}\right)^2-4\lambda}}{2}=\alpha_j-\sqrt{\beta_j}$ where\\
\\
$\alpha_j=\frac{\gamma_{j}+\gamma_{j+1}}{2}$ and $\beta_j=\left(\frac{\gamma_{j+1}-\gamma_{j}}{2}\right)^2-\lambda$ \ \ \ therefore\\
\\
$\ S_j=\Gamma_j+\Gamma_{j+1}=2\alpha_j=\gamma_j+\gamma_{j+1}$ \\
$M_j=\Gamma_j\Gamma_{j+1}={\alpha_j}^2-\beta_j=\gamma_j\gamma_{j+1}+\lambda$ \\
Taking into account more terms $\Gamma_i$ (i=4, 6,...,n), we obtain similar results, where the $\alpha_i$ and $\beta_i$ are functions of the corresponding $\gamma_i$, always taken in pairs.
}, i.e.:
\\
$\Gamma_j=\alpha_j+\sqrt{\beta_j} $\\
$\Gamma_{j+1}=\alpha_j-\sqrt{\beta_j}$ \\
where $\alpha_j=\frac{\gamma_{j}+\gamma_{j+1}}{2}$ and $\beta_j=\left[\frac{\gamma_{j+1}-\gamma_{j}}{2}\right]^2-\lambda$, with $\lambda$ a suitable integer and $(j=2,4,...,n)$.\\
\begin{flushleft}
\addcontentsline{toc}{chapter}{Reference}
\end{flushleft}
\end{document} |
\begin{document}
\title{Applications of a duality between generalized trigonometric and hyperbolic functions II
\footnote{The work of S.T. was supported by JSPS KAKENHI Grant Number 17K05336.}}
\author{Hiroki Miyakawa and Shingo Takeuchi \\
Department of Mathematical Sciences\\
Shibaura Institute of Technology
\thanks{307 Fukasaku, Minuma-ku,
Saitama-shi, Saitama 337-8570, Japan. \endgraf
{\it E-mail address\/}: [email protected] \endgraf
{\it 2010 Mathematics Subject Classification.}
33B10 (26D05 26D07 31C45 34A34)}}
\date{}
\maketitle
\begin{abstract}
Generalized trigonometric functions and generalized hyperbolic functions can be
converted to each other by the duality formulas previously discovered by the authors.
In this paper, we apply the duality formulas to prove dual pairs of Wilker-type
inequalities, Huygens-type inequalities, and (relaxed) Cusa-Huygens-type inequalities
for the generalized functions. In addition, multiple- and double-angle formulas
not previously obtained are also given.
\end{abstract}
\textbf{Keywords:}
Generalized trigonometric functions,
Generalized hyperbolic functions,
Mitrinovi\'{c}-Adamovi\'{c} inequalities,
Wilker inequalities,
Huygens inequalities,
Cusa-Huygens inequalities,
Multiple-angle formulas,
Double-angle formulas,
$p$-Laplacian.
\section{Introduction}
Generalized trigonometric functions (GTFs) and generalized hyperbolic
functions (GHFs) are natural mathematical generalizations of the trigonometric
and hyperbolic functions, respectively. They have been applied not only to
generalize $\pi$ and the complete elliptic integrals, but also to analyze
nonlinear differential equations involving $p$-Laplacian (see monographs
\cites{Dosly2005,Lang2011} and survey \cite{YHWL2019}, and the references
given there).
Although GTFs and GHFs have been actively studied,
they have been treated separately (e.g.,
\cites{Dosly2005,Klen,MSZH,Neuman2015,YHQ,YHWL2019}). In our
previous work \cite{Miyakawa-Takeuchi}, we proved duality formulas that can transform GTFs and GHFs into
each other. As an application, we were able to generalize the classical Mitrinovi\'{c}-Adamovi\'{c} inequalities to GTFs and GHFs in such a way that the resulting inequalities form dual pairs in the sense explained in the next section.
In this paper, following \cite{Miyakawa-Takeuchi}, we will generalize the old and
vigorously studied Wilker inequalities, Huygens inequalities and (relaxed) Cusa-
Huygens inequalities to GTFs and GHFs. In fact, previous works, e.g.,
\cites{Klen,MSZH,Neuman2015,YHQ,YHWL2019}, have made various generalizations of
these inequalities, but the trigonometric and hyperbolic versions are not in dual pairs.
On the other hand, the pairs we create in the present paper are dual to each other.
This paper is organized as follows. Section \ref{sec:preparation} summarizes the
definitions of GTFs and GHFs and their properties, including the duality formulas
obtained in \cite{Miyakawa-Takeuchi}.
Here, the conditions imposed on the parameters contained in these functions are
more extended than usual. This extension reveals the duality between both
generalized functions. In Section \ref{sec:inequalities}, we generalize the classical
Wilker inequalities, Huygens inequalities, and (relaxed) Cusa-Huygens inequalities to
GTFs and GHFs. It should be noted that the pairs of inequalities obtained there are
dual to each other. In Section \ref{sec:formulas}, as a further application of the duality
formulas, we provide multiple- and double-angle formulas for GTFs and GHFs.
Although some formulas have already been obtained in previous studies
(cf. \cite{Takeuchi2016} and Table \ref{hyo} in Section \ref{sec:formulas}), we give
formulas for parameters for which no formulas were previously known.
\section{Preparation}
\label{sec:preparation}
In this section, we summarize the definitions and
some properties of GTFs and GHFs (see \cite{Miyakawa-Takeuchi} for more details).
The relationship between GTFs and GHFs can be seen by making the range of
parameters in the functions wider than the conventional definition.
Let us assume
\begin{equation}
\label{eq:pq}
\frac{q}{q+1}<p<\infty,\quad 0<q<\infty,
\end{equation}
and
$$F_{p,q}(y):=\int_0^y \frac{dt}{(1-t^q)^{1/p}}, \quad y \in [0,1).$$
We will denote by $\sin_{p,q}$ the inverse function of $F_{p,q}$, i.e.,
$$\sin_{p,q}{x}:=F_{p,q}^{-1}(x).$$
Clearly, $\sin_{p,q}{x}$ is monotone increasing on
$[0,\pi_{p,q}/2)$ onto $[0,1)$,
where
\begin{align*}
\pi_{p,q}:&=2F_{p,q}(1)=2\int_0^1 \frac{dt}{(1-t^q)^{1/p}}\\
&=
\begin{cases}
(2/q)B(1-1/p,1/q), & 1<p<\infty,\\
\infty, & q/(q+1)<p \leq 1,
\end{cases}
\end{align*}
and $B$ is the beta function.
In almost all literature dealing with GTFs,
the parameters $p,\ q$ are assumed to be $p,\ q>1$,
but we here allow them to be $p,\ q \leq 1$.
Note that the condition $q/(q+1)<p \leq 1$ implies that $\sin_{p,q}{x}$ is monotone
increasing on the \textit{infinite} interval
$[0,\infty)$ and no longer similar to $\sin{x}$, but to $\tanh{x}$
(Figure \ref{fig:sin}).
\begin{figure}
\caption{The graphs of $\sin_{p,q}{x}$.}
\end{figure}
Since $\sin_{p,q}{x} \in C^1(0,\pi_{p,q}/2)$,
we also define $\cos_{p,q}{x}$ by
$$\cos_{p,q}{x}:=\frac{d}{dx}(\sin_{p,q}{x}).$$
Then, it follows that
\begin{equation}
\label{eq:Tpythagoras}
\cos_{p,q}^p{x}+\sin_{p,q}^q{x}=1.
\end{equation}
In case $(p,q)=(2,2)$, it is obvious that $\sin_{p,q}{x},\ \cos_{p,q}{x}$
and $\pi_{p,q}$ are reduced to the ordinary $\sin{x},\ \cos{x}$ and $\pi$,
respectively. Therefore these functions and the constant are called
\textit{generalized trigonometric functions} (GTFs)
and the \textit{generalized $\pi$}, respectively.
It is easy to check that $u=\sin_{p,q}{x}$ is a solution of
the initial value problem of $p$-Laplacian
\begin{equation}
\label{eq:ivp}
(|u'|^{p-2}u')'+\frac{(p-1)q}{p} |u|^{q-2}u=0, \quad u(0)=0,\ u'(0)=1,
\end{equation}
which is closely related to
the eigenvalue problem of $p$-Laplacian.
In a similar way, we assume \eqref{eq:pq} and
$$G_{p,q}(y):=\int_0^y \frac{dt}{(1+t^q)^{1/p}}, \quad y \in [0,\infty).$$
We will denote by $\sinh_{p,q}$ the inverse function of $G_{p,q}$, i.e.,
$$\sinh_{p,q}{x}:=G_{p,q}^{-1}(x).$$
Clearly, $\sinh_{p,q}{x}$ is monotone increasing
on $[0,\pi_{r,q}/2)$ onto $[0,\infty)$,
where $r$ is the positive constant determined by
\begin{equation}
\label{eq:rdefi}
\frac{1}{p}+\frac{1}{r}=1+\frac{1}{q}, \quad \mbox{i.e.}, \quad
r=\frac{pq}{pq+p-q}.
\end{equation}
Indeed, by $1+t^q=1/(1-s^q)$,
\begin{align*}
\lim_{y \to \infty}G_{p,q}(y)=
\int_0^\infty \frac{dt}{(1+t^q)^{1/p}}
=\int_0^1 \frac{ds}{(1-s^q)^{1/r}}
=\frac{\pi_{r,q}}{2}.
\end{align*}
The important point to note here is that for a fixed $q \in (0,\infty)$,
if $r=r_q(p)$ is regarded as a function of $p$, then
\begin{gather}
\mbox{$r_q$ is bijective from $(q/(q+1),\infty)$ to itself, and}
\label{eq:r(p)}\\
r_q(r_q(p))=p. \label{eq:r(r)=p}
\end{gather}
In particular, $\pi_{r,q}$ has been defined under \eqref{eq:pq}.
If $r>1$, i.e., $p<q$, then $\sinh_{p,q}{x}$ is defined in
the \textit{bounded} interval $[0,\pi_{r,q}/2)$ with
$\lim_{x \to \pi_{r,q}/2}\sinh_{p,q}{x}=\infty$ and
no longer similar to $\sinh{x}$, but to $\tan{x}$
(Figure \ref{fig:sinh}).
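For example, for $(p,q)=(1,2)$ we have $r=r_2(1)=2>1$ and $G_{1,2}(y)=\arctan{y}$, so that
$$\sinh_{1,2}{x}=\tan{x}, \quad x\in[0,\pi_{2,2}/2)=[0,\pi/2),$$
which illustrates this behaviour; note that $r_2(2)=1$, in accordance with \eqref{eq:r(r)=p}.
Similarly, for $q=3$ one finds $r_3(2)=6/5$ and $r_3(6/5)=2$, values which will reappear in Section \ref{sec:formulas}.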
\begin{figure}
\caption{The graphs of $\sinh_{p,q}{x}$.}
\label{fig:sinh}
\end{figure}
Since $\sinh_{p,q}{x} \in C^1(0,\pi_{r,q}/2)$,
we also define $\cosh_{p,q}{x}$ by
$$\cosh_{p,q}{x}:=\frac{d}{dx}(\sinh_{p,q}{x}).$$
Then, it follows that
\begin{equation}
\label{eq:Hpythagoras}
\cosh_{p,q}^p{x}-\sinh_{p,q}^q{x}=1.
\end{equation}
In case $(p,q)=(2,2)$, it is obvious that $\sinh_{p,q}{x},\ \cosh_{p,q}{x}$
and the interval $[0,\pi_{r,q}/2)$ are reduced to
$\sinh{x},\ \cosh{x}$ and $[0,\infty)$, respectively.
Therefore these functions are called
\textit{generalized hyperbolic functions} (GHFs).
Just as $\sin_{p,q}{x}$ satisfies \eqref{eq:ivp},
$u=\sinh_{p,q}{x}$ is a solution of
the initial value problem for the $p$-Laplacian
$$(|u'|^{p-2}u')'-\frac{(p-1)q}{p} |u|^{q-2}u=0, \quad u(0)=0,\ u'(0)=1.$$
We generalize the tangent and hyperbolic tangent functions in two ways.
These functions are often generalized as
$$\tan_{p,q}{x}:=\frac{\sin_{p,q}{x}}{\cos_{p,q}{x}}, \quad
\tanh_{p,q}{x}:=\frac{\sinh_{p,q}{x}}{\cosh_{p,q}{x}}$$
(e.g.,
\cites{Dosly2005,Edmunds2012,Klen,Lang2011,MSZH,Neuman2015,YHQ,YHWL2019}).
However, for practical purposes,
the following modified functions are more convenient than
the functions above:
$$\operatorname{tam}_{p,q}x:=\frac{\sin_{p,q}x}{\cos_{p,q}^{p/q}x}, \quad
\operatorname{tam}h_{p,q}x:=\frac{\sinh_{p,q}x}{\cosh_{p,q}^{p/q}x}.$$
These modified functions were first introduced in \cites{Miyakawa-Takeuchi,Takeuchi2016}
with the symbols $\tau_{p,q},\ \tilde{\tau}_{p,q}$, respectively.
Note that if $p=q$, then $\operatorname{tam}_{p,q}{x}=\tan_{p,q}{x}$ and
$\operatorname{tam}h_{p,q}{x}=\tanh_{p,q}{x}$.
In \cite{Miyakawa-Takeuchi},
we proved the following duality properties between GTFs and GHFs.
This property remains important in the present paper.
\begin{thm}[\cite{Miyakawa-Takeuchi}]\label{lem:GHFGTFrelation}
Let $p$ and $q$ satisfy \eqref{eq:pq} and $r$ be the positive number defined as
\eqref{eq:rdefi}. Then, for $x\in[0,\pi_{p,q}/2)$,
\begin{align*}
&\sin_{p,q}x=\frac{\sinh_{r,q}x}{\cosh_{r,q}^{r/q}x}=\operatorname{tam}h_{r,q}x,\\
&\cos_{p,q}x=\frac{1}{\cosh_{r,q}^{r/p}x},\\
&\operatorname{tam}_{p,q}x=\sinh_{r,q}x.
\end{align*}
\end{thm}
\begin{thm}[\cite{Miyakawa-Takeuchi}]\label{lem:GTFGHFrelation}
Let $p$ and $q$ satisfy \eqref{eq:pq} and $r$ be the positive number defined as
\eqref{eq:rdefi}. Then, for $x\in[0,\pi_{r,q}/2)$,
\begin{align*}
&\sinh_{p,q}x=\frac{\sin_{r,q}x}{\cos_{r,q}^{r/q}x}=\operatorname{tam}_{r,q}x,\\
&\cosh_{p,q}x=\frac{1}{\cos_{r,q}^{r/p}x},\\
&\operatorname{tam}h_{p,q}x=\sin_{r,q}x.
\end{align*}
\end{thm}
\begin{rem}
In \cite{Miyakawa-Takeuchi}, we assumed that $1<q<\infty$.
However, the proofs therein remain valid for all $0<q<\infty$.
The same is true for Theorem \ref{thm:MA} below.
\end{rem}
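For instance, for $(p,q)=(2,2)$ we have $r=1$; recalling that $\sinh_{1,2}{x}=\tan{x}$, and hence $\cosh_{1,2}{x}=\sec^2{x}$, Theorem \ref{lem:GHFGTFrelation} reduces to the elementary identities
$$\sin{x}=\frac{\tan{x}}{\sec{x}},\quad \cos{x}=\frac{1}{\sec{x}},\quad \tan{x}=\sinh_{1,2}{x},$$
and Theorem \ref{lem:GTFGHFrelation}, applied with $(p,q)=(1,2)$, expresses the same identities in the opposite direction.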
Theorems \ref{lem:GHFGTFrelation} and \ref{lem:GTFGHFrelation} tell us the
counterparts to GHFs of the properties already known for GTFs, and vice versa.
For example, Theorem \ref{lem:GHFGTFrelation} immediately converts
\eqref{eq:Tpythagoras} into \eqref{eq:Hpythagoras} (with $p$ replaced by $r$);
that is,
\begin{equation}
\label{eq:p+q=1'}
\cos_{p,q}^p{x}+\sin_{p,q}^q{x}=1
\end{equation}
into
\begin{equation}
\label{eq:p-q=1'}
\cosh_{r,q}^r{x}-\sinh_{r,q}^q{x}=1;
\end{equation}
and Theorem \ref{lem:GTFGHFrelation} (with \eqref{eq:r(r)=p}) converts the latter back into the former.
Hence, it follows from \eqref{eq:r(p)} that
\eqref{eq:p+q=1'} and \eqref{eq:p-q=1'} correspond one-to-one.
In this sense, we say that the identities \eqref{eq:p+q=1'} and \eqref{eq:p-q=1'}
(i.e., \eqref{eq:Hpythagoras} with $p$ replaced by $r$)
are
\textit{dual} to each other.
Moreover, using our theorems, the authors \cite{Miyakawa-Takeuchi}*{Theorem 1.3}
have shown a generalization
of the Mitrinovi\'{c}-Adamovi\'{c} inequalities.
The generalized inequalities will be applied in the next section, so we discuss
them now.
The classical Mitrinovi\'{c}-Adamovi\'{c} inequalities are as follows:
\begin{gather*}
\cos^{1/3}{x}<\frac{\sin{x}}{x}<1, \quad x \in \left(0,\frac{\pi}{2}\right),\\
\cosh^{1/3}{x}<\frac{\sinh{x}}{x}<\cosh{x}, \quad x \in \left(0,\infty\right).
\end{gather*}
The latter is also called the Lazarevi\'c inequality.
Kl\'{e}n et al. \cite{Klen}*{Theorems 3.6 and 3.8}
extend these inequalities to the one-parameter case: for $p \in (1,\infty)$,
\begin{gather*}
\cos_p^{1/(p+1)}{x}<\frac{\sin_p{x}}{x}<1, \quad x \in \left(0,\frac{\pi_p}{2}\right),\\
\cosh_p^{1/(p+1)}{x}<\frac{\sinh_p{x}}{x}<\cosh_p{x}, \quad x \in \left(0,\infty\right),
\end{gather*}
where $\sin_p{x}:=\sin_{p,p}{x}$ and the other functions are defined in the same way.
Moreover, Ma et al. \cite{MSZH}*{Lemma 2} obtain
the inequalities for the two-parameter case: for $p,\ q \in (1,\infty)$,
\begin{gather}
\cos_{p,q}^{1/(q+1)}{x}<\frac{\sin_{p,q}{x}}{x}<1, \quad x \in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:MAtrigo'}\\
\cosh_{p,q}^{1/(q+1)}{x}<\frac{\sinh_{p,q}{x}}{x}<\cosh_{p,q}{x}
\quad \mbox{for appropriate $x$}.
\label{eq:MAhyper'}
\end{gather}
The proofs of Kl\'en et al. \cite{Klen} and Ma et al. \cite{MSZH}
are similar: both treat the trigonometric case and the hyperbolic case
separately, by the same method.
However, \eqref{eq:MAtrigo'} and \eqref{eq:MAhyper'} (with $p$ replaced by $r$)
are not dual to each other.
A dual pair of Mitrinovi\'{c}-Adamovi\'{c}-type inequalities is as follows:
\begin{thm}[Mitrinovi\'{c}-Adamovi\'{c}-type inequalities with duality, \cite{Miyakawa-Takeuchi}]\label{thm:MA}
Let $p$ and $q$ satisfy \eqref{eq:pq} and $r$ be the positive number defined as \eqref{eq:rdefi}.
Then,
\begin{gather}
\cos_{p,q}^{1/(q+1)}x<\frac{\sin_{p,q}x}{x}<1, \quad x\in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:MAtrigo}\\
\cosh_{p,q}^{1/(q+1)}x<\frac{\sinh_{p,q}x}{x}<\cosh_{p,q}^{p/q}x,
\quad x\in \left(0,\frac{\pi_{r,q}}{2}\right).\label{eq:MAhyper}
\end{gather}
Moreover, \eqref{eq:MAtrigo} and \eqref{eq:MAhyper} (with $p$ replaced by $r$)
are dual to each other.
\end{thm}
\begin{rem}
If $p=q$, then \eqref{eq:MAtrigo} and \eqref{eq:MAhyper} are
equal to \eqref{eq:MAtrigo'} and \eqref{eq:MAhyper'}; hence, to
the one-parameter ones above.
\end{rem}
In our approach in \cite{Miyakawa-Takeuchi},
Theorem \ref{thm:MA} allows us to obtain the inequalities
over the wider range \eqref{eq:pq} of parameters,
and \eqref{eq:MAhyper} immediately follows from \eqref{eq:MAtrigo}
by Theorem \ref{lem:GHFGTFrelation}.
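Indeed, substituting $\sin_{p,q}{x}=\sinh_{r,q}{x}/\cosh_{r,q}^{r/q}{x}$ and $\cos_{p,q}{x}=\cosh_{r,q}^{-r/p}{x}$ into \eqref{eq:MAtrigo} and multiplying through by $\cosh_{r,q}^{r/q}{x}$ yields
$$\cosh_{r,q}^{\frac{r}{q}-\frac{r}{p(q+1)}}{x}<\frac{\sinh_{r,q}{x}}{x}<\cosh_{r,q}^{r/q}{x}, \quad x\in\left(0,\frac{\pi_{p,q}}{2}\right),$$
and a direct computation with \eqref{eq:rdefi} shows that $\frac{r}{q}-\frac{r}{p(q+1)}=\frac{1}{q+1}$, so that this is exactly \eqref{eq:MAhyper} with $p$ replaced by $r$.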
\section{Dual pairs of inequalities}
\label{sec:inequalities}
In this section,
we generalize the Wilker, Huygens, and (relaxed) Cusa-Huygens inequalities
for GTFs and GHFs to a form with duality
using our duality formulas (Theorems \ref{lem:GHFGTFrelation} and \ref{lem:GTFGHFrelation}),
just as we generalized the Mitrinovi\'{c}-Adamovi\'{c} inequalities
as Theorem \ref{thm:MA}.
\subsection{Wilker-type inequalities}
The classical Wilker inequalities are as follows:
\begin{gather*}
\left(\frac{\sin{x}}{x}\right)^2+\frac{\tan{x}}{x}>2, \quad x \in \left(0,\frac{\pi}{2}\right),\\
\left(\frac{\sinh{x}}{x}\right)^2+\frac{\tanh{x}}{x}>2, \quad x \in \left(0,\infty\right).
\end{gather*}
Kl\'{e}n et al. \cite{Klen}*{Corollary 3.19} and Yin et al. \cite{YHQ}*{Theorem 3.1}
extend these inequalities to the one-parameter case: for $p \in (1,\infty)$,
\begin{gather*}
\left(\frac{\sin_p{x}}{x}\right)^p+\frac{\tan_p{x}}{x}>2, \quad x \in \left(0,\frac{\pi_p}{2}\right),\\
\left(\frac{\sinh_p{x}}{x}\right)^p+\frac{\tanh_p{x}}{x}>2, \quad x \in \left(0,\infty\right).
\end{gather*}
Moreover, Neuman \cite{Neuman2015}*{Corollary 6.3 (6.13)} obtains
the inequalities for the two-parameter case: for $p,\ q \in (1,\infty)$,
\begin{gather}
\left(\frac{\sin_{p,q}{x}}{x}\right)^q+\frac{\tan_{p,q}{x}}{x}>2, \quad x \in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:wilker}\\
\left(\frac{\sinh_{p,q}{x}}{x}\right)^q+\frac{\tanh_{p,q}{x}}{x}>2 \quad
\mbox{for appropriate $x$}.
\label{eq:wilkerh}
\end{gather}
However, \eqref{eq:wilker} and \eqref{eq:wilkerh}
(with $p$ replaced by $r$) are not dual to each other.
A dual pair of Wilker-type inequalities is as follows:
\begin{thm}[Wilker-type inequalities with duality]\label{thm:Wil2pMiya}
Let $p$ and $q$ satisfy \eqref{eq:pq} and $r$ be the positive number defined as \eqref{eq:rdefi}.
Then,
\begin{gather}
\left(\frac{\sin_{p,q}x}{x}\right)^p+\left(\frac{\operatorname{tam}_{p,q}x}{x}\right)^r>2, \quad
x \in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:wilkertrigo}\\
\left(\frac{\sinh_{p,q}x}{x}\right)^p+\left(\frac{\operatorname{tam}h_{p,q}x}{x}\right)^r>2, \quad
x \in \left(0,\frac{\pi_{r,q}}{2}\right). \label{eq:wilkerhyper}
\end{gather}
Moreover, \eqref{eq:wilkertrigo} and \eqref{eq:wilkerhyper} (with $p$ replaced by $r$)
are dual to each other.
\end{thm}
\begin{rem}
If $p=q$, then \eqref{eq:wilkertrigo} and \eqref{eq:wilkerhyper} are
equal to \eqref{eq:wilker} and \eqref{eq:wilkerh}; hence,
to the one-parameter ones above, respectively.
\end{rem}
\begin{proof}
We prove \eqref{eq:wilkertrigo}. From the inequality of arithmetic and geometric
means and \eqref{eq:MAtrigo} in Theorem \ref{thm:MA}, it follows that
\begin{align*}
\left(\frac{\sin_{p,q}x}{x}\right)^p+\left(\frac{\operatorname{tam}_{p,q}x}{x}\right)^r
& \geq 2\left(\frac{\sin_{p,q}x}{x}\right)^{p/2}\left(\frac{\operatorname{tam}_{p,q}x}{x}\right)^{r/2}\\
& =2\left(\frac{\sin_{p,q}x}{x}\right)^{p/2+r/2}\left(\frac{1}{\cos_{p,q}^{p/q}x}\right)^{r/2}\\
& >2\left(\frac{\sin_{p,q}x}{x}\right)^{p/2+r/2}\left(\frac{\sin_{p,q}x}{x}\right)^{-rp(q+1)/2q}\\
& =2.
\end{align*}
The last equality holds because $\frac{p}{2}+\frac{r}{2}=\frac{rp(q+1)}{2q}$, which is a restatement of \eqref{eq:rdefi}.
Next we show \eqref{eq:wilkerhyper}.
We have proved \eqref{eq:wilkertrigo} for every $x \in (0,\pi_{p,q}/2)$.
Then, Theorem \ref{lem:GHFGTFrelation} gives the dual inequality to \eqref{eq:wilkertrigo}:
$$\left(\frac{\operatorname{tam}h_{r,q}x}{x}\right)^p+\left(\frac{\sinh_{r,q}x}{x}\right)^r>2.$$
Owing to \eqref{eq:r(p)} and \eqref{eq:r(r)=p},
this means \eqref{eq:wilkerhyper}.
\end{proof}
\subsection{Huygens-type inequalities}
The classical Huygens inequalities are as follows:
\begin{gather*}
\frac{2\sin{x}}{x}+\frac{\tan{x}}{x}>3, \quad x \in \left(0,\frac{\pi}{2}\right),\\
\frac{2\sinh{x}}{x}+\frac{\tanh{x}}{x}>3, \quad x \in \left(0,\infty\right).
\end{gather*}
Kl\'{e}n et al. \cite{Klen}*{Theorem 3.16}
extend these inequalities to the one-parameter case: for $p \in (1,\infty)$,
\begin{gather*}
\frac{p\sin_p{x}}{x}+\frac{\tan_p{x}}{x}>p+1, \quad x \in \left(0,\frac{\pi_p}{2}\right),\\
\frac{p\sinh_p{x}}{x}+\frac{\tanh_p{x}}{x}>p+1, \quad x \in \left(0,\infty\right).
\end{gather*}
Moreover, Neuman \cite{Neuman2015}*{Corollary 6.3 (6.14)} obtains
the inequalities for the two-parameter case: for $p,\ q \in (1,\infty)$,
\begin{gather}
\frac{q\sin_{p,q}{x}}{x}+\frac{\tan_{p,q}{x}}{x}>q+1, \quad x \in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:cusa}\\
\frac{q\sinh_{p,q}{x}}{x}+\frac{\tanh_{p,q}{x}}{x}>q+1 \quad
\mbox{for appropriate $x$}.
\label{eq:cusah}
\end{gather}
However, \eqref{eq:cusa} and \eqref{eq:cusah} (with $p$ replaced by $r$)
are not dual to each other.
A dual pair of Huygens-type inequalities is as follows:
\begin{thm}[Huygens-type inequalities with duality]\label{thm:HuyMiya}
Let $p$ and $q$ satisfy \eqref{eq:pq} and $r$ be the positive number defined as \eqref{eq:rdefi}.
Then,
\begin{gather}
\frac{p\sin_{p,q}x}{x}+\frac{r\operatorname{tam}_{p,q}x}{x}>p+r, \quad
x \in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:Huygenstrigo}\\
\frac{p\sinh_{p,q}x}{x}+\frac{r\operatorname{tam}h_{p,q}x}{x}>p+r, \quad
x \in \left(0,\frac{\pi_{r,q}}{2}\right).\label{eq:Huygenshyper}
\end{gather}
Moreover, \eqref{eq:Huygenstrigo} and \eqref{eq:Huygenshyper}
(with $p$ replaced by $r$) are dual to each other.
\end{thm}
\begin{rem}
If $p=q$, then \eqref{eq:Huygenstrigo} and \eqref{eq:Huygenshyper} are
equal to \eqref{eq:cusa} and \eqref{eq:cusah}; hence,
to the one-parameter ones above, respectively.
\end{rem}
\begin{proof}
Let $\alpha,\ \beta$ be
$$\alpha=\frac{p(q+1)}{pq+p-q}=1+\frac{r}{p},\quad \beta=\frac{p(q+1)}{q}=1+\frac{p}{r}.$$
Since $\alpha,\ \beta >1$ and $1/\alpha+1/\beta=1$,
the following form of Young's inequality holds for positive numbers $A,\ B$:
$$A+B \geq (\alpha A)^{1/\alpha}(\beta B)^{1/\beta}.$$
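This form follows from the usual Young inequality $st\leq s^{\alpha}/\alpha+t^{\beta}/\beta$ for $s,\ t>0$, applied with $s=(\alpha A)^{1/\alpha}$ and $t=(\beta B)^{1/\beta}$.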
Therefore, by this inequality and \eqref{eq:MAtrigo} in Theorem \ref{thm:MA},
\begin{align*}
\frac{p\sin_{p,q}x}{x}+\frac{r\operatorname{tam}_{p,q}x}{x}
& \geq \left(\alpha \frac{p\sin_{p,q}x}{x}\right)^{1/\alpha}
\left(\beta \frac{r\operatorname{tam}_{p,q}x}{x}\right)^{1/\beta}\\
& = (\alpha p)^{1/\alpha} (\beta r)^{1/\beta}
\frac{\sin_{p,q}x}{x} \left(\frac{1}{\cos_{p,q}^{p/q}x}\right)^{1/\beta}\\
& > (p+r)^{1/\alpha} (r+p)^{1/\beta}
\frac{\sin_{p,q}x}{x} \left(\frac{\sin_{p,q}x}{x}\right)^{-\frac{p(q+1)}{\beta q}}\\
&=p+r.
\end{align*}
Next we show \eqref{eq:Huygenshyper}.
We have proved \eqref{eq:Huygenstrigo} for every $x \in (0,\pi_{p,q}/2)$.
Then, Theorem \ref{lem:GHFGTFrelation} gives the dual inequality to \eqref{eq:Huygenstrigo}:
$$\frac{p\operatorname{tam}h_{r,q}x}{x}+\frac{r\sinh_{r,q}x}{x}>p+r.$$
Owing to \eqref{eq:r(p)} and \eqref{eq:r(r)=p},
this means \eqref{eq:Huygenshyper}.
\end{proof}
\subsection{Relaxed Cusa-Huygens-type inequalities}
The classical Cusa-Huygens inequalities are as follows:
\begin{gather*}
\frac{\sin{x}}{x}<\frac{2+\cos{x}}{3}, \quad x \in \left(0,\frac{\pi}{2}\right),\\\frac{\sinh{x}}{x}<\frac{2+\cosh{x}}{3}, \quad x \in \left(0,\infty\right).
\end{gather*}
Ma et al. \cite{MSZH}*{Theorems 2 and 3} obtain
the inequalities for the two-parameter case: for $p,\ q \in (1,2]$,
\begin{gather}
\frac{\sin_{p,q}{x}}{x}<\frac{q+\cos_{p,q}{x}}{q+1}, \quad x \in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:cusahuygens}\\
\frac{\sinh_{p,q}{x}}{x}<\frac{q+\cosh_{p,q}{x}}{q+1} \quad
\mbox{for appropriate $x$}.
\label{eq:cusahuygensh}
\end{gather}
The inequalities for the one-parameter case $p=q \in (1,2]$ are given by
Kl\'{e}n et al. \cite{Klen}*{Theorems 3.22 and 3.24}.
Unfortunately, these generalized inequalities are shown only for $p,\ q \in (1,2]$,
and \eqref{eq:cusahuygens} and \eqref{eq:cusahuygensh} (with $p$ replaced by $r$) are
not dual to each other.
We wish to find inequalities that hold for all $p,\ q$ satisfying \eqref{eq:pq}
and are dual to each other.
Therefore, we consider the following relaxed inequalities instead of the classical Cusa-Huygens inequalities:
\begin{gather*}
\frac{\sin{x}}{x}<\sqrt{\frac{2+\cos^2{x}}{3}}, \quad x \in \left(0,\frac{\pi}{2}\right),\\\frac{\sinh{x}}{x}<\sqrt{\frac{2+\cosh^2{x}}{3}}, \quad x \in \left(0,\infty\right).
\end{gather*}
Neuman \cite{Neuman2015}*{Theorem 6 (6.7), (6.9)} generalizes
the inequalities to the two-parameter case: for $p,\ q \in (1,\infty)$,
\begin{gather}
\frac{\sin_{p,q}{x}}{x}<\left(\frac{q+\cos_{p,q}^p{x}}{q+1}\right)^{1/p}, \quad x \in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:cusa'}\\
\frac{\sinh_{p,q}{x}}{x}<\left(\frac{q+\cosh_{p,q}^p{x}}{q+1}\right)^{1/p} \quad
\mbox{for appropriate $x$}.
\label{eq:cusah'}
\end{gather}
However, \eqref{eq:cusa'} and \eqref{eq:cusah'} (with $p$ replaced by $r$) are
not dual to each other. A dual pair of Cusa-Huygens-type inequalities is as follows:
\begin{thm}[Relaxed Cusa-Huygens-type inequalities with duality]\label{thm:CusaMiya}
Let $p$ and $q$ satisfy \eqref{eq:pq} and $r$ be the positive number defined as \eqref{eq:rdefi}.
Then,
\begin{gather}
\frac{\sin_{p,q}x}{x}<\left(\frac{p+r\cos_{p,q}^px}{p+r}\right)^{1/q}, \quad
x \in \left(0,\frac{\pi_{p,q}}{2}\right), \label{eq:CusaGTF}\\
\frac{\sinh_{p,q}x}{x}<\left(\frac{p+r\cosh_{p,q}^px}{p+r}\right)^{1/q}, \quad
x \in \left(0,\frac{\pi_{r,q}}{2}\right). \label{eq:CusaGHF}
\end{gather}
Moreover, \eqref{eq:CusaGTF} and \eqref{eq:CusaGHF} (with $p$ replaced by $r$) are dual to each other.
\end{thm}
\begin{rem}
If $p=q$, then \eqref{eq:CusaGTF} and \eqref{eq:CusaGHF} are
equal to \eqref{eq:cusa'} and \eqref{eq:cusah'}, respectively.
\end{rem}
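In particular, for $(p,q)=(2,2)$, so that $r=1$, the inequalities \eqref{eq:CusaGTF} and \eqref{eq:CusaGHF} become the relaxed Cusa-Huygens inequalities displayed above, the hyperbolic one on $(0,\pi_{1,2}/2)=(0,\infty)$.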
\begin{proof}
We prove
$$\frac{(p+r+rx^q)\sin_{p,q}^qx}{x^q}<p+r,$$
which is equivalent to \eqref{eq:CusaGTF}; indeed, by \eqref{eq:Tpythagoras} we have $p+r\cos_{p,q}^px=p+r-r\sin_{p,q}^qx$, so \eqref{eq:CusaGTF} amounts to $(p+r)\sin_{p,q}^qx/x^q<p+r-r\sin_{p,q}^qx$, which rearranges to the displayed inequality.
Let $f(x):=p+r-(p+r+rx^q)\sin_{p,q}^qx/x^q$. Then,
\begin{align*}
f'(x)=\frac{q\sin_{p,q}^qx}{x^{q+1}}\left(p+r-\frac{(p+r)x\cos_{p,q}x}{\sin_{p,q}x}-\frac{rx^{q+1}\cos_{p,q}x}{\sin_{p,q}x}\right).
\end{align*}
From \eqref{eq:MAtrigo} in Theorem \ref{thm:MA}, it follows that
$$\frac{x\cos_{p,q}x}{\sin_{p,q}x}<\cos_{p,q}^{q/(q+1)}x,\ \ \ \frac{x^{q+1}\cos_{p,q}x}{\sin_{p,q}x}<1-\cos_{p,q}^px.$$
Therefore,
\begin{align*}
f'(x)>\frac{q\sin_{p,q}^qx}{x^{q+1}}\left(p-(p+r)\cos_{p,q}^{q/(q+1)}x+r\cos_{p,q}^{p}x\right).
\end{align*}
Now let $g(t)=p-(p+r)t^{q/(q+1)}+rt^p$. Then,
\begin{align*}
g'(t)=prt^{-1/(q+1)}(t^{p-q/(q+1)}-1)
\end{align*}
Since $q/(q+1)<p$, we see that $g'(t)<0$ for $t\in(0,1)$. Therefore, $g(t)>\lim_{t \to 1-0}g(t)=0$ on $(0,1)$ and
$$f'(x)>\frac{q\sin_{p,q}^qx}{x^{q+1}}g(\cos_{p,q}x)>0.$$
Moreover, $f(x)>\lim_{x\to+0}f(x)=0$, which means \eqref{eq:CusaGTF}.
Next we show \eqref{eq:CusaGHF}.
We have proved \eqref{eq:CusaGTF} for every $x \in (0,\pi_{p,q}/2)$.
Then, Theorem \ref{lem:GHFGTFrelation} gives the dual inequality to \eqref{eq:CusaGTF}:
$$\frac{\sinh_{r,q}x}{x\cosh_{r,q}^{r/q}x}<\left(\frac{p+r\cosh_{r,q}^{-r}x}{p+r}\right)^{1/q};$$
hence,
$$\frac{\sinh_{r,q}x}{x}<\left(\frac{p\cosh_{r,q}^{r}x+r}{p+r}\right)^{1/q}.$$
Owing to \eqref{eq:r(p)} and \eqref{eq:r(r)=p},
this means \eqref{eq:CusaGHF}.
\end{proof}
\section{Multiple- and double-angle formulas}
\label{sec:formulas}
Several multiple- and double-angle formulas for GTFs and GHFs are already known
(see \cite{Miyakawa-Takeuchi}*{Theorems 1.4 and 1.6}
and \cite{Takeuchi2016}*{Theorem 1.1} for multiple-angle formulas;
Table \ref{hyo} for double-angle formulas).
In this section,
we apply the duality formulas (Theorems \ref{lem:GHFGTFrelation} and \ref{lem:GTFGHFrelation}) to obtain
multiple- and double-angle formulas for these generalized functions
which are not covered in \cite{Miyakawa-Takeuchi} and \cite{Takeuchi2016}.
\begin{table}[htb]
\begin{center}
\caption{The parameters for which double-angle formulas of GTFs have been obtained.}\label{hyo}
{\footnotesize
\begin{tabular}{clll} \hline
$q$ & $(q/(q-1),2)$ & $(2,q)$ & $(q/(q-1),q)$\\ \hline
$2$ & $(2,2)$ Abu al-Wafa & $(2,2)$ Abu al-Wafa & $(2,2)$ Abu al-Wafa\\
$3$ & $(3/2,2)$ Miyakawa-Takeuchi \cite{Miyakawa-Takeuchi} & $(2,3)$ Cox-Shurman \cite{Cox2005} & $(3/2,3)$ Dixon \cite{Dixon1890} \\
$4$ & $(4/3,2)$ Sato-Takeuchi \cite{Sato-Takeuchi2020} & $(2,4)$ Fagnano & $(4/3,4)$ Edmunds\ et\ al. \cite{Edmunds2012} \\
$6$ & $(6/5,2)$ Takeuchi \cite{Takeuchipre} & $(2,6)$ Shinohara \cite{Shinohara2017} & $(6/5,6)$ Takeuchi \cite{Takeuchipre}\\\hline\hline
$q$ & $(2q/(2+q),q/2)$&$(2q/(2+q),q)$ & $(q/2,q)$\\ \hline
$2$ & $(1,1)$ Napier&$(1,2)$ V. Riccati &$(1,2)$ V. Riccati\\
$3$ & $(6/5,3/2)$ \textbf{Theorem \ref{thm:6/53/2double}}&$(6/5,3)$ Miyakawa-Takeuchi \cite{Miyakawa-Takeuchi} &$(3/2,3)$ Dixon \cite{Dixon1890}\\
$4$ & $(4/3,2)$ Sato-Takeuchi \cite{Sato-Takeuchi2020} &$(4/3,4)$ Edmunds\ et\ al. \cite{Edmunds2012} &$(2,4)$ Fagnano\\
$6$ & $(3/2,3)$ Dixon \cite{Dixon1890} &$(3/2,6)$ Miyakawa-Takeuchi \cite{Miyakawa-Takeuchi} &$(3,6)$ Miyakawa-Takeuchi \cite{Miyakawa-Takeuchi}\\\hline
\end{tabular}
}
\end{center}
\end{table}
The multiple-angle formulas in the following theorem show that GTFs for $(2q/(2+q),q/2)$
can be represented in terms of GTFs for $(2q/(2+q),q)$.
Moreover, the counterparts for GHFs are obtained as their dual formulas.
\begin{thm}\label{thm:sin_new_double2}
Let $0<q<\infty$. Then, for $x\in[0,\pi_{2q/(2+q),q/2}/(2^{2/q+1}))=[0,\pi_{2q/(2+q),q}/2)$,
\begin{align*}
\sin_{2q/(2+q),q/2}(2^{2/q}x)=&\frac{2^{2/q}\sin_{2q/(2+q),q}x}{(1+\sin_{2q/(2+q),q}^{q/2}x)^{2/q}},\\
\cos_{2q/(2+q),q/2}(2^{2/q}x)=&\left(\frac{1-\sin_{2q/(2+q),q}^{q/2}x}{1+\sin_{2q/(2+q),q}^{q/2}x}\right)^{1/q+1/2}.
\end{align*}
Moreover, for the same $x$,
\begin{align*}
\sinh_{2q/(2+q),q/2}(2^{2/q}x)&=2^{2/q}\sinh_{2,q}x(\cosh_{2,q}x+\sinh_{2,q}^{q/2}x)^{2/q},\\
\cosh_{2q/(2+q),q/2}(2^{2/q}x)&=(\cosh_{2,q}x+\sinh_{2,q}^{q/2}x)^{2/q+1}.
\end{align*}
\end{thm}
\begin{proof}
The first half is shown as follows. Let $y\in[0,\infty)$. Setting $t^q=u^q/(4(1-u^{q/2}))$ in
$$\sinh_{2,q}^{-1}y=\int^y_0\frac{dt}{\sqrt{1+t^q}},$$
we have
\begin{align*}
\sinh_{2,q}^{-1}y=&2^{-2/q-1}\int^{2^{2/q}y/(y^{q/2}+\sqrt{y^q+1})^{2/q}}_0\frac{2(1-u^{q/2})^{1/2}}{2-u^{q/2}}\cdot\frac{2-u^{q/2}}{(1-u^{q/2})^{1/q+1}}\ du\\
=&2^{-2/q}\int^{2^{2/q}y/(y^{q/2}+\sqrt{y^q+1})^{2/q}}_0\frac{du}{(1-u^{q/2})^{1/q+1/2}};
\end{align*}
that is,
\begin{align}
\sinh_{2,q}^{-1}y=2^{-2/q}\sin_{2q/(2+q),q/2}^{-1}\left(\frac{2^{2/q}y}{(y^{q/2}+\sqrt{y^q+1})^{2/q}}\right).\label{eq:sinh2qsin2q2+q}
\end{align}
Letting $y\to\infty$ in \eqref{eq:sinh2qsin2q2+q} and using $r_q(2)=2q/(2+q)$, we get
$$\frac{\pi_{2q/(2+q),q}}{2}=\frac{\pi_{2q/(2+q),q/2}}{2^{2/q+1}}.$$
From \eqref{eq:sinh2qsin2q2+q}, we see that for $x\in[0,\pi_{2q/(2+q),q/2}/(2^{2/q+1}))=[0,\pi_{2q/(2+q),q}/2)$,
\begin{align*}
\sin_{2q/(2+q),q/2}(2^{2/q}x)=&\frac{2^{2/q}\sinh_{2,q}x}{(\sinh_{2,q}^{q/2}x+\cosh_{2,q}x)^{2/q}}=\frac{2^{2/q}\operatorname{tam}h_{2,q}x}{(\operatorname{tam}h_{2,q}^{q/2}x+1)^{2/q}}.
\end{align*}
Theorem \ref{lem:GTFGHFrelation} with $r_q(2)=2q/(2+q)$ shows that the right-hand side becomes
\begin{align*}
\frac{2^{2/q}\sin_{2q/(2+q),q}x}{(\sin_{2q/(2+q),q}^{q/2}x+1)^{2/q}}.
\end{align*}
The formula of $\cos_{2q/(q+2),q/2}$ immediately follows from \eqref{eq:Tpythagoras}.
The second half is proved as follows. By Theorem \ref{lem:GTFGHFrelation}
with $r_{q/2}{(2q/(2+q))}=2q/(2+q)$ and the former half,
\begin{align*}
\sinh_{2q/(2+q),q/2}(2^{2/q}x)=\frac{\sin_{2q/(2+q),q/2}(2^{2/q}x)}{\cos_{2q/(2+q),q/2}^{4/(2+q)}(2^{2/q}x)}=\frac{2^{2/q}\sin_{2q/(2+q),q}x}{(1-\sin_{2q/(2+q),q}^{q/2}x)^{2/q}}.
\end{align*}
Theorem \ref{lem:GHFGTFrelation} with $r_q(2q/(2+q))=2$ shows that the right-hand side becomes
$$2^{2/q}\sinh_{2,q}x(\cosh_{2,q}x+\sinh_{2,q}^{q/2}x)^{2/q}.$$
The formula of $\cosh_{2q/(q+2),q/2}$ immediately follows from \eqref{eq:Hpythagoras}.
\end{proof}
\begin{rem}
If $q=2$, then
the formulas of $\sin_{2q/(2+q),q/2}$ and $\sinh_{2q/(2+q),q/2}$ are
\begin{gather*}
1-e^{-2x}=\frac{2\tanh{x}}{1+\tanh{x}},\\
e^{2x}-1=2\sinh{x}(\cosh{x}+\sinh{x}).
\end{gather*}
\end{rem}
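Both identities are readily verified:
$$\frac{2\tanh{x}}{1+\tanh{x}}=\frac{2\sinh{x}}{\cosh{x}+\sinh{x}}=\frac{e^{x}-e^{-x}}{e^{x}}=1-e^{-2x},\quad
2\sinh{x}(\cosh{x}+\sinh{x})=(e^{x}-e^{-x})e^{x}=e^{2x}-1.$$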
The following double-angle formula is proved by
\cite{Miyakawa-Takeuchi}*{Theorem 3.8}.
\begin{lem}[\cite{Miyakawa-Takeuchi}]\label{lem:sin6/53double}
For $x\in[0,\pi_{6/5,3}/4)$,
$$\sin_{6/5,3}(2x)=\frac{4\cos_{6/5,3}^{1/5}x(3\cos_{6/5,3}^{3/5}x+1)(1-\cos_{6/5,3}^{3/5}x)^{1/3}}{\left(16\cos_{6/5,3}^{3/5}x+(3\cos_{6/5,3}^{3/5}x+1)^3(1-\cos_{6/5,3}^{3/5}x)\right)^{2/3}}.$$
\end{lem}
Now, we are in a position to show the double-angle formula of $\sin_{6/5,3/2}$.
\begin{thm}\label{thm:6/53/2double}
For $x\in[0,\pi_{6/5,3/2}/4)$,
$$\sin_{6/5,3/2}(2x)=(\Theta \circ \Phi \circ \Psi)(\cos_{6/5,3/2}{x}),$$
where
\begin{gather*}
\Theta(x)=\left(\frac{2x}{1+x}\right)^{2/3},\\
\Phi(x)=\frac{8\sqrt{x(3x+1)^3(1-x)}}{16x+(3x+1)^3(1-x)},\\
\Psi(x)=\frac{2x^{3/5}}{1+x^{6/5}}.
\end{gather*}
\end{thm}
\begin{proof}
From Theorem \ref{thm:sin_new_double2} with $q=3$, for $x\in[0,\pi_{6/5,3/2}/(2^{5/3}))=[0,\pi_{6/5,3}/2)$,
$$\sin_{6/5,3/2}(2^{2/3}x)=\frac{2^{2/3}\sin_{6/5,3}x}{(1+\sin_{6/5,3}^{3/2}x)^{2/3}}
=\Theta(\sin_{6/5,3}^{3/2}x);$$
hence,
$$\sin_{6/5,3}^{3/2}x=\Theta^{-1}(\sin_{6/5,3/2}(2^{2/3}x))
=\frac{\sin_{6/5,3/2}^{3/2}(2^{2/3}x)}{2-\sin_{6/5,3/2}^{3/2}(2^{2/3}x)}.$$
Thus, from \eqref{eq:Tpythagoras},
\begin{align}
\cos_{6/5,3}^{3/5}x=\frac{2\cos_{6/5,3/2}^{3/5}(2^{2/3}x)}{1+\cos_{6/5,3/2}^{6/5}(2^{2/3}x)}
=\Psi(\cos_{6/5,3/2}{(2^{2/3}x)}). \label{eq:c6/32to6/53/2}
\end{align}
Now,\ let $x\in[0,\pi_{6/5,3/2}/4)$ and $y:=x/(2^{2/3})$.
It follows from Theorem \ref {thm:sin_new_double2} with $q=3$
that since $2y\in[0,\pi_{6/5,3/2}/(2^{5/3}))=[0,\pi_{6/5,3}/2)$, we get
\begin{align}
\sin_{6/5,3/2}(2x)=\sin_{6/5,3/2}(2^{2/3}\cdot 2y)=\Theta(\sin_{6/5,3}^{3/2}{(2y)}).
\label{eq:s6/53/2(2x)Tos6/532y}
\end{align}
Here,\ Lemma \ref{lem:sin6/53double} and \eqref{eq:c6/32to6/53/2} yield
\begin{align*}
\sin_{6/5,3}^{3/2}(2y)=\Phi(\cos_{6/5,3}^{3/5}y)=\Phi(\Psi(\cos_{6/5,3/2}(2^{2/3}y)))
=\Phi(\Psi(\cos_{6/5,3/2}x)).
\end{align*}
Therefore, from \eqref{eq:s6/53/2(2x)Tos6/532y}, we have
\begin{align*}
\sin_{6/5,3/2}(2x)=\Theta(\Phi(\Psi(\cos_{6/5,3/2}x)))
=(\Theta \circ \Phi \circ \Psi)(\cos_{6/5,3/2}x).
\end{align*}
The proof is completed.
\end{proof}
\end{document} |
\begin{document}
\title{Metastable Densities for the Contact Process on Power Law Random Graphs}
\author{Thomas Mountford\textsuperscript{1}, Daniel Valesin\textsuperscript{2,3}
and Qiang Yao\textsuperscript{4}}
\footnotetext[1]{\'Ecole Polytechnique F\'ed\'erale de Lausanne,
D\'epartement de Math\'ematiques,
1015 Lausanne, Switzerland}
\footnotetext[2]{University of British Columbia, Department of Mathematics, V6T1Z2 Vancouver, Canada}
\footnotetext[3]{Research funded by the Post-Doctoral Research Fellowship of the Government of Canada}
\footnotetext[4]{Department of Statistics and Actuarial Science,
East China Normal University,
Shanghai 200241, China}
\date{December 06, 2012}
\maketitle
\begin{abstract}
We consider the contact process on a random graph with fixed degree distribution given by a power law. We follow the work of Chatterjee and Durrett \cite{CD}, who showed that for arbitrarily small infection parameter $\lambda$, the survival time of the process is larger than a stretched exponential function of the number of vertices, $n$. We obtain sharp bounds for the typical density of infected sites in the graph, as $\lambda$ is kept fixed and $n$ tends to infinity. We exhibit three different regimes for this density, depending on the tail of the degree law.
\\
\noindent MSC: 82C22, 05C80. Keywords:
contact process, random graphs.
\end{abstract}
\bigskip
\section{Introduction}
\label{Int}
In this paper we study the contact process on a random graph with a fixed degree distribution equal to a power law. Let us briefly describe the contact process and the random graph we consider.
The contact process is an interacting particle system that is commonly taken as a model for the spread of an infection in a population. Given a locally finite graph $G = (V, E)$ and $\lambda > 0$, the contact process on $G$ with infection rate $\lambda$ is a Markov process $(\xi_t)_{t \geq 0}$ with configuration space $\{0,1\}^V$. Vertices of $V$ (also called sites) are interpreted as individuals, which can be either healthy (state 0) or infected (state 1). The infinitesimal generator for the dynamics is
\begin{equation}\Omega f(\xi) = \sum_{x \in V}\left(f(\phi_x\xi) - f(\xi) \right) + \lambda \sum_{\substack{{(x,y):}\\{\{x, y\} \in E}}} \left(f(\phi_{(x,y)}\xi)-f(\xi) \right),\label{eq:gen}\end{equation}
where
$$\phi_x\xi(z) = \left\{\begin{array}{ll}\xi(z),&\text{if } z \neq x;\\0,&\text{if } z = x;\end{array}\right.\qquad \phi_{(x,y)}\xi(z) = \left\{\begin{array}{ll}\xi(z), &\text{if }z\neq y;\\I_{\left\{\max\left(\xi(x),\;\xi(y)\right) = 1\right\}},&\text{if } z = y. \end{array} \right.$$
Here and in the rest of the paper, $I$ denotes the indicator function. Given $A \subset V$, we will write $(\xi^A_t)$ to denote the contact process with the initial configuration $I_A$. If $A = \{x\}$, we write $(\xi_t^x)$. Sometimes we abuse notation and identify the configuration $\xi_t$ with the set $\{x: \xi_t(x) = 1\}$.
We refer the reader to \cite{lig85} and \cite{lig99} for an elementary treatment of the contact process and proofs of the basic properties that we will now review.
The dynamics given by the generator (\ref{eq:gen}) has two forms of transition. First, infected sites become healthy at rate 1; a recovery is then said to have occurred. Second, given an ordered pair of sites $(x, y)$ such that $x$ is infected and $y$ is healthy, $y$ becomes infected at rate $\lambda$; this is called a transmission.
We note that the configuration in which all individuals are healthy is absorbing for the dynamics. The random time at which this configuration is reached, $\inf\{t: \xi_t = \varnothing\}$ is called the extinction time of the process. A fundamental question for the contact process is: is this time almost surely finite? The answer to this question depends of course on the underlying graph $G$ and on the rate $\lambda$, but not on the initial configuration $\xi_0$, as long as $\xi_0$ contains a finite and non-zero quantity of infected sites. If, for one such $\xi_0$ (and hence all such $\xi_0$), the extinction time is almost surely finite, then the process is said to die out; otherwise it is said to survive. Using the graphical construction described below, it is very simple to verify that on finite graphs, the contact process dies out.
In order to make an analogy with the contact process on the random graphs we are interested in, it will be useful for us to briefly look at known results for the contact process on the $d$-dimensional lattice $\Z^d$ and on finite boxes of $\Z^d$. The contact process on $\Z^d$ exhibits a phase transition: there exists $\lambda_c(\Z^d)\in (0,\infty)$ such that the process dies out if and only if $\lambda \leq \lambda_c$. The process is said to be subcritical, critical and supercritical respectively if $\lambda < \lambda_c,\;\lambda = \lambda_c$ and $\lambda > \lambda_c$. In the supercritical case, if the process is started with every site infected, then as $t \to \infty$ its distribution converges to a non-trivial invariant measure on $\{0,1\}^{\Z^d}$ called the upper invariant distribution; we denote it by $\bar \pi$.
Interestingly, this phase transition is also manifest for the contact process on finite subsets of the lattice. Let $\Gamma_n = \{1,\ldots, n\}^d$ and consider the contact process on $\Gamma_n$ with parameter $\lambda$ starting from all infected, $(\xi^{\Gamma_n}_t)$. As mentioned above, this process almost surely dies out. However, the expected extinction time grows logarithmically with $n$ when $\lambda < \lambda_c(\Z^d)$ and exponentially with $n$ when $\lambda > \lambda_c(\Z^d)$ (\cite{durliu}, \cite{tommeta}). In the latter case, metastability is said to occur, because the process persists for a long time in an equilibrium-like state which resembles the restriction of $\bar \pi$ to the box $\Gamma_n$, and eventually makes a quick transition to the true equilibrium - the absorbing state. In particular, if $(t_n)$ is a sequence of (deterministic) times that grows to infinity slower than the expected extinction times of $(\xi^{\Gamma_n}_t)_{t\geq 0}$, we have
\begin{equation}\frac{|\xi^{\Gamma_n}_{t_n}|}{n^d} \;\;\stackrel{n \to \infty}{\xrightarrow{\hspace*{0.8cm}}}\;\; \bar \pi\left(\left\{\xi\in\{0,1\}^{\Z^d}: \xi(0) = 1\right\}\right) \text{ in probability,}\label{eq:motivate}\end{equation}
where $|\cdot|$ denotes cardinality. This means that the density of infected sites in typical times of activity is similar to that of infinite volume.
The main theorem in this paper is a statement analogous to (\ref{eq:motivate}) for the contact process on a class of random graphs, namely the configuration model with power law degree distribution, as described in \cite{NSW} and \cite{remco}. Let us define these graphs. We begin with a probability measure $p$ on $\mathbb{N}$; this will be our degree distribution. We assume it satisfies
\begin{eqnarray}
&p(\{0, 1, 2\}) = 0;&\label{eq:ele1}\\[0.3cm]
&\text{for some }a > 2,\;{\displaystyle 0 < \liminf_{m \to \infty}m^a\;p(m) \leq \limsup_{m \to \infty}m^ap(m) < \infty.}&\label{eq:ele2}\end{eqnarray}
The first assumption, that $p$ is supported on integers larger than 2, is made to guarantee that the graph is connected with probability tending to 1 as $n \to \infty$ (\cite{CD}). The second assumption, that $p$ is a power law with exponent $a$, is based on the empirical verification that real-world networks have power law degree distributions; see \cite{Dur} for details.
For fixed $n \in \N$, we will construct the random graph $G_n = (V_n, E_n)$ on the set of $n$ vertices $V_n = \{v_1, v_2,\ldots, v_n\}$. To do so, let $d_1, \ldots, d_n$ be independent with distribution $p$. We assume that $\sum_{i=1}^nd_i$ is even; if it is not, we add 1 to one of the $d_i$, chosen uniformly at random; this change will not have any effect on what follows, so we will ignore it. For $1 \leq i \leq n$, we endow $v_i$ with $d_i$ \textit{half-edges} (sometimes also called \textit{stubs}). Pairs of half-edges are then matched so that edges are formed; since $\sum_{i=1}^n d_i$ is even, it is possible to match all half-edges, and an edge set is thus obtained. We choose our edge set $E_n$ uniformly among all edge sets that can be obtained in this way. We denote by $\P_{p,n}$ a probability measure under which $G_n$ is defined. If, additionally, a contact process with parameter $\lambda$ is defined on the graph, we write $\P^\lambda_{p,n}$.
\noindent \textbf{Remark.}
$G_n$ may have loops (edges that start and finish at the same vertex) and multiple edges between two vertices. As can be read from the generator in (\ref{eq:gen}), loops can be erased with no effect in the dynamics, and when vertices $x$ and $y$ are connected by $k$ edges, an infection from $x$ to $y$ (or from $y$ to $x$) is transmitted with rate $k\lambda$.
In \cite{CD}, Chatterjee and Durrett studied the contact process $(\xi^{V_n}_t)$ on $G_n$, and obtained the surprising result that it is ``always supercritical'': for any $\lambda > 0$, the extinction time grows quickly with $n$ (it was shown to be larger than a stretched exponential function of $n$). This contradicted predictions in the Physics literature to the effect that there should be a phase transition in $\lambda$ similar to the one we described for finite boxes of $\Z^d$. In \cite{MMVY}, the result of \cite{CD} was improved and the extinction time was shown to grow as an exponential function of $n$.
As already mentioned, our main theorem is concerned with the density of infected sites on the graph at times in which the infection is still active. The main motivation in studying this density is shedding some light into the mechanism through which the infection manages to remain active for a long time when its rate is very close to zero. In particular, our result shows that this mechanism depends on the value of $a$, the exponent of the degree distribution.
\begin{theorem}
\label{thm:main} There exist $c, C > 0$ such that, for $\lambda>0$ small enough and $(t_n)$ with $t_n \to \infty$ and $\log t_n = o(n)$, we have
$$\P_{p, n}^\lambda \left(c\rho_a(\lambda) \leq \frac{|\xi^{V_n}_{t_n}|}{n} \leq C\rho_a(\lambda) \right) \stackrel{n \to \infty}{\xrightarrow{\hspace*{0.8cm}}} 1,$$
where $\rho_a(\lambda)$ is given by
$$\rho_a(\lambda) = \left\{\begin{array}{ll} \lambda^{\frac{1}{3-a}} &\text{if } 2 < a \leq 2\frac{1}{2};
\\\frac{\lambda^{2a-3}}{\log^{a-2}\left(\frac{1}{\lambda}\right)} &\text{if } 2\frac{1}{2} < a \leq 3;
\\\frac{\lambda^{2a-3}}{\log^{2a-4}\left(\frac{1}{\lambda}\right)} &\text{if } a > 3.\end{array} \right.$$
\end{theorem}
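Note that the exponent of $\lambda$ in $\rho_a(\lambda)$ is continuous in $a$ across the boundaries of the three regimes: at $a = 2\frac{1}{2}$ both the first and the second expression give the power $\lambda^{2}$, and at $a=3$ the second and the third both give $\lambda^{3}$; the regimes differ only in the logarithmic corrections.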
This theorem solves the open problem of \cite{CD}, page 2337, for $a > 2$. For $2 < a \leq 3$, the result is new, as no estimates were previously available. For $a > 3$, it is an improvement of the non-optimal bounds that were obtained in \cite{CD}: there, it was proved that for $a > 3$, the density is between $\lambda^{2a-3+\epsilon}$ and $\lambda^{a-1-\epsilon}$ for any $\epsilon > 0$, when $\lambda$ is small.
Very recently (\cite{remco2}), Dommers, Giardin\`a and van der Hofstad studied the ferromagnetic Ising model on random trees and locally tree-like random graphs with power law degree distributions (in particular, their results cover the class of graphs we consider in this paper). In their context, the above theorem, which is about the exponent of the metastable density of the contact process when $\lambda \to 0$, translates to studying the exponent of the magnetization of the Ising model as $\beta \to \beta_c$, where $\beta$ is the inverse temperature and $\beta_c$ its critical value. Similarly to the above theorem, they showed how this exponent depends on the exponent of the degree distribution, and this dependence also exhibits different regimes.
By a well-known property of the contact process called duality (see \cite{lig85}, Section III.4), for any $t > 0$ and $1 \leq i \leq n$ we have
\begin{equation}\P_{p,n}^\lambda\left(\xi^{V_n}_t(v_i) = 1 \right) = \P_{p,n}^\lambda\left(\xi^{v_i}_t\neq \varnothing \right).\label{eq:dualint}\end{equation}
On the right-hand side, we have the probability that the contact process started at $v_i$ is still active at time $t$. In the study of this probability, we are required to understand the local structure of $G_n$ around a typical vertex. This is given, in the limit as $n \to \infty$, by a two-stage Galton-Watson tree.
In order to precisely state this, let $q$ be the size-biased distribution associated to $p$, that is, the measure on $\N$ given by $q(m) = (\sum_{i \geq 0} i \cdot p(i))^{-1}\cdot m\cdot p(m)$ (note that the assumption that $a > 2$ implies that $\sum_{i \geq 0} i \cdot p(i) < \infty$). Let $\Q_{p,q}$ be a probability measure under which a Galton-Watson tree is defined with degree distribution of the root given by $p$ and degree distribution of all other vertices given by $q$. Note that, since $p(\{0,1,2\}) = q(\{0,1,2\}) =0$, this tree is infinite. We emphasize that we are giving the \textit{degree} distribution of vertices, and not their \textit{offspring} distribution, which is more commonly used for Galton-Watson trees. The following Proposition then holds; see \cite{CD} and Chapter 3 of \cite{Dur} for details. For a graph $G$ with vertex $x$ and $R > 0$, we denote by $B_G(x, R)$ the ball in $G$ with center $x$ and radius $R$.
\begin{proposition}
\label{lem:kGW}
For any $k \in \mathbb{N}$ and $R > 0$, as $n \to \infty$, the $k$ balls $B_{G_n}(v_1, R),\ldots, B_{G_n}(v_k, R)$ under $\P_{p,n}$ are disjoint with probability tending to 1. Moreover, they jointly converge in distribution to $k$ independent copies of $B_\T(o, R)$, where $\T$ is a Galton-Watson tree (with root $o$) sampled from the probability $\Q_{p,q}$.
\end{proposition}
(Obviously, in the above, there is nothing special about the vertices $v_1,\ldots, v_k$ and the result would remain true if, for each $n$, they were replaced by $v_{i_{n,1}}, \ldots, v_{i_{n,k}}$, with $1 \leq i_{n,1} < \cdots < i_{n,k} \leq n$).
With this convergence at hand, in \cite{CD}, the right-hand side of (\ref{eq:dualint}) (and then, by a second moment argument, the density of infected sites) is shown to be related to the probability of survival of the contact process on the random tree given by the measure $\Q_{p,q}$. In this paper, we make this relation more precise, as we now explain. We denote by $\Q_{p,q}^\lambda$ a probability measure under which the two-stage Galton-Watson tree described above is defined and a contact process of rate $\lambda$ is defined on the tree. Typically this contact process will be started from only the root infected, and will thus be denoted by $(\xi_t^o)_{t \geq 0}$. Let $\upgamma_p(\lambda)$ denote the survival probability for this process, that is,
$$\upgamma_p(\lambda) = \Q_{p,q}^\lambda \left(\xi^o_t \neq \varnothing \;\forall t \right).$$
As we will shortly discuss in detail, this quantity turns out to be positive for every $\lambda > 0$. Here we prove
\begin{theorem}
\label{thm:reduc}
For any $\lambda > 0,\;\epsilon > 0$ and $(t_n)$ with $t_n \to \infty$ and $\log t_n = o(n)$, we have
$$\P_{p,n}^\lambda \left(\left|\frac{|\xi^{V_n}_{t_n}|}{n} - \upgamma_p(\lambda) \right| > \epsilon\right) \stackrel{n \to \infty}{\xrightarrow{\hspace*{0.8cm}}} 0.$$
\end{theorem}
The above result was conjectured in \cite{CD}, page 2336, and is an improvement of their Theorem 1 (also note that we do not assume that $\lambda$ is small). Since the proof is essentially a careful rereading of the arguments in \cite{CD}, we postpone it to the Appendix. Our main focus in the paper will be finding the asymptotic behaviour of $\upgamma_p(\lambda)$ as $\lambda \to 0$:
\begin{proposition}
\label{prop:main}
There exist $c, C > 0$ such that $c\rho_a(\lambda)\leq \upgamma_p(\lambda) \leq C\rho_a(\lambda)$ for $\lambda$ small enough, where $\rho_a(\lambda)$ is the function defined in the statement of Theorem \ref{thm:main}.\end{proposition}
Theorem \ref{thm:main} immediately follows from Theorem \ref{thm:reduc} and Proposition \ref{prop:main}. Since we concentrate our efforts in proving Proposition \ref{prop:main}, in all the remaining sections of the paper (except the Appendix) we do not consider the random graph $G_n$. Rather, we study the contact process on the Galton-Watson tree started with the root infected, $(\xi^o_t)_{t \geq 0}$.
We will now describe the ideas behind the proof of the above proposition.
\noindent $\bullet$ \textbf{Case} ${\bm a\;} {\bf > 3}$. Fix a small $\lambda > 0$. On the one hand, we show that a tree in which every vertex has degree smaller than $\frac{1}{8\lambda^2}$ is a ``hostile environment'' for the spread of the infection, in the sense that an infection started at vertex $x$ eventually reaches vertex $y$ with probability smaller than $(2\lambda)^{d(x,y)}$, where $d$ denotes graph distance (see Lemma \ref{lembound}). On the other hand, we show that if a vertex has degree much larger than $\frac{1}{\lambda^2}$, then it can sustain the infection for a long time; roughly, if $\deg(x) > \frac{K}{\lambda^2}$, then the infection survives on the star graph defined as $x$ and its neighbours for a time larger than $e^{cK}$ with high probability, where $c$ is a universal constant (see Lemma \ref{basic}). This is due to a ``bootstrap effect'' that occurs in this star, in which whenever $x$ becomes infected, it transmits the infection to several of its neighbours, and whenever it recovers, it receives the infection back from many of them.
Let us now call a vertex \textit{small} or \textit{big} depending on whether its degree is below or above the $\frac{1}{8\lambda^2}$ threshold, respectively (this terminology is only used in this Introduction). It is natural to imagine that the infection can propagate on the infinite tree by first reaching a big vertex, then being maintained around it for a long time and, during this time, reaching another vertex of still higher degree, and so on. However, when $a > 3$, big vertices are typically isolated and at distance of the order of $\log \frac{1}{\lambda}$ from each other. This suggests the introduction of another degree threshold, which turns out to be of the order of $\frac{1}{\lambda^2}\log^2 \frac{1}{\lambda}$ -- let us call a vertex \textit{huge} if its degree is above this threshold. The point is that if a vertex is big but not huge, then although it maintains the infection for a long time, this time is not enough for the infection to travel distances comparable to $\log \frac{1}{\lambda}$, and hence not enough to reach other big sites. Huge vertices, in comparison, do maintain the infection for a time that is enough for distances of order $\log\left( \frac{1}{\lambda}\right)$ to be overcome.
With these ideas in mind, we define a key event $E^* = \{$the root has a huge neighbour $x^*$ that eventually becomes infected$\}$ (again, this terminology is exclusive to this Introduction). We think of $E^*$ as the ``best strategy'' for the survival of the infection. Indeed, in Section \ref{s:lower} we show that if $E^*$ occurs, then the infection survives with high probability and in Section \ref{s:upper} we show that every other way in which the infection could survive has probability of smaller order, as $\lambda \to 0$, than that of $E^*$. The probability of $E^*$ is roughly $q\left(\left[\frac{1}{\lambda^2}\log^2 \frac{1}{\lambda} ,\;\infty\right) \right)\cdot \lambda$, the first term corresponding to the existence of the huge neighbour and the second term to its becoming infected. Since $p(m) \asymp m^{-a}$ (that is, $m^a\cdot p(m)$ is bounded from above and below), we have $q(m) \asymp m^{-(a-1)}$ and $q([m,\infty)) \asymp m^{-(a-2)}$; using this, we see that, modulo constants, $q\left(\left[ \frac{1}{\lambda^2}\log^2 \frac{1}{\lambda} ,\;\infty\right) \right)\cdot \lambda$ is $\frac{\lambda^{2a-3}}{\log^{2a-4}\left(\frac{1}{\lambda}\right)}$, which is the definition of $\rho_a(\lambda)$ when $a > 3$.
\noindent $\bullet$ \textbf{Case} ${\bf 2\frac{1}{2} <}$ ${\bm a}$ ${\bf\leq 3}$. This case is very similar to the previous one. The main difference is that now, when growing the tree from the root, if we find a vertex of large degree $K$, then with high probability, it will have a child with degree larger than $K$ (or a grandchild in the case $a = 3$). As a consequence, defining small and big vertices as before, big vertices will no longer be in isolation, but will rather be close to each other. For this reason, once the infection reaches a big site, the distance it needs to overcome to reach another big site is small, and we have to modify the ``big-huge'' threshold accordingly. The new threshold is shown to be $\frac{1}{\lambda^2}\log \frac{1}{\lambda}$. The key event $E^*$ is then defined in the same way as before, and shown to have probability of the order of $\rho_a(\lambda)$.
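Indeed, with the threshold $\frac{1}{\lambda^2}\log \frac{1}{\lambda}$ in place of $\frac{1}{\lambda^2}\log^2 \frac{1}{\lambda}$, the computation of the previous case gives
$$q\left(\left[\frac{1}{\lambda^2}\log \frac{1}{\lambda},\;\infty\right)\right)\cdot \lambda \;\asymp\; \frac{\lambda^{2(a-2)}}{\log^{a-2}\left(\frac{1}{\lambda}\right)}\cdot \lambda \;=\; \frac{\lambda^{2a-3}}{\log^{a-2}\left(\frac{1}{\lambda}\right)},$$
which is the definition of $\rho_a(\lambda)$ in the range $2\frac{1}{2} < a \leq 3$.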
\noindent $\bullet$ \textbf{Case} ${\bf 2 <}$ ${\bm a}$ ${\bf\leq 2\frac{1}{2}}$. In both previous cases, the ``bootstrap effect'' that we have described is crucial. Interestingly, it does not play an important role in the regime in which the tree is the largest, that is, $2 < a \leq 2\frac{1}{2}$. In this case, the survival of the infection does not at all depend on vertices of high degree sustaining the infection around them for a long time, as we now explain. We define a comparison process $(\eta_t)$ which is in all respects identical to the contact process $(\xi_t^o)$, with the only exception that once sites become infected and recover for the first time, they cannot become infected again. Thus, $(\xi^o_t)$ stochastically dominates $(\eta_t)$, that is, both processes can be constructed in the same probability space satisfying the condition $\xi_t^o(x) \geq \eta_t(x)$ for all $x$ and $t$. The process $(\eta_t)$ is much easier to analyse than the contact process, and we find a lower bound for the probability that it remains active at all times (and thus a lower bound for the survival probability of the contact process). We then give an upper bound for the survival probability of $(\xi^o_t)$ that matches that lower bound. This implies that, in the regime $2 < a \leq 2\frac{1}{2}$, as $\lambda \to 0$, the survival probabilities of $(\xi^o_t)$ and $(\eta_t)$ are within multiplicative constants of each other.
Finally, let us describe the organization of the paper. Section \ref{s:setup} contains a description of the graphical construction of the contact process and of the notation we use. In Section \ref{s:survestimate} we establish a lower bound for the survival time of the process on star graphs (Lemma \ref{basic}) and, as an application, a result that gives a condition for the process to go from one vertex $x$ of high degree to another vertex $y$ on a graph (Lemma \ref{lem:infNei}). In Section \ref{s:lower}, we use these results to prove the lower bound in Proposition \ref{prop:main}. In Section \ref{s:extestimate}, we give an upper bound on the probability that the process spreads on a tree of bounded degree (depending on $\lambda$), and in Section \ref{s:upper} we apply this to obtain the upper bound in Proposition \ref{prop:main}. In the Appendix, we prove Theorem \ref{thm:reduc}.
\section{Setup and notation}
\label{s:setup}
\subsection{Graphical construction of the contact process}
In order to fix notation, we briefly describe the graphical construction of the contact process. Let $G=(V,E)$ be a graph and $\lambda > 0$. We take a probability measure $P^\lambda_G$ under which we have a family $H$ of independent Poisson point processes on $[0,\infty)$ as follows:
$$\begin{aligned}
&\{D_x: x \in V\} \text{ with rate 1};\\
&\{D_{x,y}: \{x, y\} \in E\}\text{ with rate } \lambda.
\end{aligned}$$
The elements of the random sets $D_x$ are called recoveries, and those of the sets $D_{x,y}$ are called transmissions. The collection $H$ is called a graphical construction for the contact process on $G$ with rate $\lambda$. Given $x, y \in V$ and $0 \leq t_1 \leq t_2$, an infection path from $(x,t_1)$ to $(y, t_2)$ is a piecewise constant, right-continuous function $\gamma:[t_1,t_2] \to V$ satisfying $\gamma(t_1) = x,\; \gamma(t_2) = y$ and, for all $t$, \\
$\bullet\;$ if $\gamma(t-) \neq \gamma(t), \text{ then } \{\gamma(t-),\gamma(t)\} \in E \text{ and }t \in D_{\gamma(t-), \gamma(t)};$\\
$\bullet\;$ if $\gamma(t) = z,$ then $t \notin D_z$.\\
If such a path exists, we write $(x,t_1)\; \lra\; (y,t_2)$. Given $A,B \subset V$, $J_1, J_2 \subset [0,\infty)$, we write $A \times J_1 \;\lra\; B \times J_2$ if $(x, t_1) \;\lra\; (y, t_2)$ for some $x \in A,\; y\in B,\; t_1 \in J_1$ and $t_2 \in J_2$ with $t_1 \leq t_2$. Given a set $U \subset V$ with $A, B \subset U$, we say that $A \times J_1 \;\lra\; B \times J_2$ inside $U$ if $A \times J_1 \;\lra\; B \times J_2$ by an infection path that only visits vertices of $U$.
For $A \subset V$, by letting $\xi^A_t(x) = I_{\{A\times\{0\} \;\lra\;(x,t)\}}$ for each $t \geq 0$, we get a process $(\xi^A_t)_{t\geq 0}$ that has the same distribution as the contact process with initial configuration $I_A$, as defined by the generator (\ref{eq:gen}). A significant advantage of this construction is that, in a single probability space, we obtain contact processes with all initial configurations, $\left((\xi^A_t)_{t \geq 0} \right)_{A \subset V}$ with the property that for every $A$, $\xi^A_t = \cup_{x \in A}\;\xi^x_t$ and in particular, if $A \subset B$, we have $\xi^A_t \subset \xi^B_t$ for every $t$.
\subsection{Remarks on the laws $p$ and $q$}
Recall that our assumptions on the degree distribution $p$ are that $p(\{0,1,2\}) = 0$ and, for some $a > 2,\;c_0,\; C_0 > 0$ and large enough $k$, we have $c_0 k^{-a} < p(k) < C_0 k^{-a}$. The fact that $a > 2$ implies that
$\mu:=\sum_{k=1}^\infty kp(k) < \infty$
and that the size-biased distribution $q$ is well-defined. In case $a > 3$ we also have $\nu := \sum_{k=1}^\infty kq(k) < \infty$. We may and often will assume that the constants $c_0, C_0$ also satisfy, for large enough $k$,
\begin{eqnarray}
\label{c0a-1} &&p[k, \infty),\; q(k) \in (c_0k^{-(a-1)},\;C_0k^{-(a-1)});\\
\label{c0a-2} &&q[k, \infty) \in (c_0k^{-(a-2)},\;C_0k^{-(a-2)});\\
\label{c03-a} &&\sum_{k=0}^m k q(k) \in \left\{\begin{array}{ll}\;(c_0m^{3-a},\;C_0m^{3-a})&\text{if } 2 < a < 3;
\\\;(c_0\log(m),\;C_0\log(m)) &\text{if } a = 3.\end{array}\right.
\end{eqnarray}
\subsection{Notation}
For ease of reference, here we summarize our notation. Some of the points that follow were already mentioned earlier in the Introduction.
Given a graph $G$ and $\lambda > 0$, $P_G^\lambda$ denotes a probability measure for a graphical construction of the contact process on $G$ with rate $\lambda$. Under this measure, we can consider the contact process $(\xi^A_t)_{t\geq 0}$ on $G$ with any initial configuration $I_A$.
Unless otherwise stated, Galton-Watson trees are denoted by $\T$ and their root by $o$. Their degree distribution (or distributions in the case of two-stage trees) will be clear from the context. The probability measure is denoted $\Q_r$ if the degree distribution of all vertices is $r$ and $\Q_{r,s}$ if the root has degree distribution $r$ and other vertices have degree distribution $s$. If on top of the tree, a graphical construction for the contact process with rate $\lambda$ is also defined, we write $\Q^\lambda_r$ and $\Q^\lambda_{r,s}$.
$G_n$ denotes the random graph on $n$ vertices with fixed degree distribution $p$, as described above, and $\P_{p,n}$ a probability measure for a space in which it is defined. $\P_{p,n}^\lambda$ is used when a graphical construction with rate $\lambda$ is defined on the random graph.
The distribution $p$ has exponent $a$, as in (\ref{eq:ele2}), and its mean is denoted by $\mu$. If the size-biased distribution $q$ has finite mean, this mean is denoted by $\nu$. Since $q$ may have infinite expectation, we may sometimes have to consider its truncation, that is, for $m > 0$, the law
\begin{equation}\overline q_m(k) = \left\{\begin{array}{ll}q(m,\infty) &\text{if } k = 1;\\q(k),&\text{if }1 < k \leq m;\\ 0,&\text{if }k > m.\end{array}\right.\label{eq:defhatq}\end{equation}
(since $q$ is used as a \textit{degree} distribution for vertices of a tree, we set the minimum value of its truncation to 1).
On a graph $G$, $d(x, y)$ denotes graph distance and $B(x,R) = \{y: d(x, y) \leq R\}$. For a set $A$, we denote by $|A|$ the number of elements of $A$.
\section{A survival estimate on star graphs}
\label{s:survestimate}
We start looking at the contact process on star graphs, that is, graphs in which all vertices except a privileged one (called the hub) have degree 1. A first result to the effect that the contact process survives for a long time on a large star was Lemma 5.3 in \cite{BBCS}, which showed that, for a star $S$, if $\lambda$ is small and $\lambda^2|S|$ is larger than a universal constant, then the infection survives for a time that is exponential in $\lambda^2|S|$. The following result adds some more detail to that picture.
\begin{lemma} \label{basic}
There exists $c_1 > 0$ such that, if $\lambda < 1$ and $S$ is a star with hub $o$,
\\
$(i.)\; \displaystyle{P_S^\lambda\left(|\xi^o_1| > \frac{1}{4e}\cdot\lambda\deg(o)\right) \geq \frac{1}{e}(1-e^{-c_1 \lambda \deg(o)})};$
\\
$(ii.)$ if $\lambda^2\deg(o) > 64e^2$ and $|\xi_0| >\frac{1}{16e} \cdot \lambda\deg(o)$, then $P_S^\lambda\left(\xi_{e^{c_1 \lambda^2\deg(o)}} \neq \varnothing\right) \geq 1-e^{-c_1 \lambda^2\deg(o)};$\\[0.3cm]
$(iii.)\;$ as $|S| \to \infty,\;P_S^\lambda\left(\exists t:\; |\xi^o_t| >\frac{1}{4e}\cdot \lambda\deg(o) \right) \to 1$.
\end{lemma}
\begin{proof}
Here and in the rest of the paper, we use the following fact, which is a consequence of the Markov inequality: for any $n \in \N$ and $r \in [0,1]$, if $X \sim \mathsf{Bin}(n, r)$ we have
\begin{eqnarray}\label{mark}\forall \alpha > 0 \;\exists \theta > 0:\; \P(|X-\E X| > \alpha n r) \leq e^{-\theta n r}.
\end{eqnarray}
For the event in $(i.)$ to occur, it is sufficient that there is no recovery at $o$ in $[0,1]$ and, for at least $\frac{\lambda}{4e}\deg(o)$ leaves, there is no recovery in $[0,1]$ and a transmission is received from $o$. Also using the inequality $1-e^{-\lambda} \geq \lambda/2$ for $\lambda < 1$, the probability in $(i.)$ is more than
$$\begin{aligned}&e^{-1} \cdot \P\left(\mathsf{Bin}\left(\deg(o),\; e^{-1}(1-e^{-\lambda})\right) >\frac{\lambda }{4e}\deg(o)\right) \\&\geq e^{-1} \cdot \P\left(\mathsf{Bin}\left(\deg(o),\; \frac{\lambda}{2e}\right) >\frac{\lambda }{4e}\deg(o)\right)\geq e^{-1}(1-e^{-c \lambda \deg(o)})\end{aligned}$$
for some $c > 0$, by (\ref{mark}). $(i.)$ is now proved.
For $j \geq 0$, define
$$\begin{aligned}&\Gamma_j = \{y \in S\backslash \{o\}: D_y \cap [j, j+1] = \varnothing\},\\
&\Psi_j = \{y \in \Gamma_j: \xi^o_j(y) = 1\}.\end{aligned}$$
$\Psi_j$ is thus the set of leaves of the star that are infected at time $j$ and do not heal until time $j + 1$. For each $j \geq 0$, we will now define an auxiliary process $(Z^j_t)_{j \leq t \leq j+1}$. We put $Z^0_t \equiv 1$ and, for $j \geq 1$, put
\begin{itemize}
\item $Z^j_j = 0;$
\item for each $t \in [j, j+1]$ such that for some $y \in \Psi_j$ we have $t \in D_{y,o}$, put $Z^j_t = 1$;
\item for each $t \in [j, j+1] \cap D_o$, put $Z^j_t = 0$;
\item complete the definition of $Z^j_t$ by making it constant by parts and right-continuous.
\end{itemize}
It is then clear that
\begin{equation} \label{hjEq1}Z^j_t \leq \xi^o_t(o) \; \forall j, t.\end{equation}
Consider the events, for $j \geq 0$:
$$\begin{aligned}
&A_{1,j} = \{|\Gamma_j| > \deg(o)/2e\};\\
&A_{2,j} = \{|\Psi_j| > \lambda \deg(o)/32e^2\};\\
&A_{3,j} = \left\{\int_j^{j+1} I_{\{Z_t = 1\}} \; dt > 1/2\right\};\\
&A_{4,j} = \left\{\left| \left\{\begin{array}{c}y \in \Gamma_j: \text{for some $t \in [j,j+1],$}\\Z^j_t = 1 \text{ and } t \in D_{o,y} \end{array}\right\}\right| > \frac{\lambda \deg(o)}{16e} \right\}.
\end{aligned}$$
Notice that, by (\ref{hjEq1}) and the definition of $A_{3,j}$,
\begin{equation}\label{hjEq2}\left\{\frac{1}{N} \int_0^N I_{\{\xi^o_t(o) = 1\}}\; dt > \frac{1}{2} \right\} \supset {\mathop\cap_{j=0}^{N-1}}\; A_{3,j} \qquad \forall N \in \N. \end{equation}
We have
\begin{equation}P_S^\lambda\left((A_{1,j})^c\right) \leq \P\left(\;\mathsf{Bin}(\deg(o), 1/e) \leq \deg(o)/2e\;\right) \leq e^{-\theta \deg(o)/e}. \label{hjB2}\end{equation}
We now want to bound $P_S^\lambda\left((A_{4,j})^c\;|\; A_{1,j} \cap A_{3,j}\right)$, for $j \geq 0$. By the definition of $(Z^j)$, the event $A_{1,j} \cap A_{3,j}$ depends only on $\xi^o_j$, $(D_y \cap [j, j+1])_{y \in S \backslash \{o\}}$ and $(D_{y,o} \cap [j,j+1])_{y \in S \backslash \{o\}}$. Therefore, conditioning on $A_{1,j} \cap A_{3,j}$ does not affect the law of $(D_{o,y} \cap [j, j+1])_{y \in S \backslash \{o\}}$, the set of arrows from the hub to the leaves at times in $[j, j+1]$.
\begin{flalign}\nonumber &P_S^\lambda\left((A_{4,j})^c\;|\; A_{1,j} \cap A_{3,j}\right)\leq \P\left(\mathsf{Bin}(\deg(o)/2e,\;1-e^{-\lambda/2}) \leq \lambda \deg(o)/16e\right)& \\&\qquad\qquad\qquad \leq \P\left(\;\mathsf{Bin}(\deg(o)/2e,\; \lambda/4)\leq \lambda \deg(o)/16e \right) \leq e^{-\theta \lambda \deg(o)/8e} \label{hjB3} \quad \forall j \geq 0.& \end{flalign}
Let us now bound $P^\lambda_S((A_{3,j})^c\;|\; A_{2,j})$. Define the continuous-time Markov chains $(Y_t)_{t \geq 0},\;(Y'_t)_{t \geq 0}$ with state space $\{0,1\}$ and infinitesimal parameters
$$\begin{array}{ll}
q_{01} = \frac{\lambda^2\deg(o)}{32e^2},&q_{10} = 1;
\\
q'_{01} = 1,& q'_{10} = \frac{32e^2}{\lambda^2\deg(o)}.
\end{array}$$
Now, if $32e^2/(\lambda^2\deg(o)) < 1/2$ we have
$$\begin{aligned}
P^\lambda_S\left((A_{3,j})^c\;|\; A_{2,j}\right) \leq \P\left(\int_0^1 I_{\{Y_t = 0\}}\; dt \geq \frac{1}{2}\right) &= \P\left(\frac{32e^2}{\lambda^2\deg(o)}\int_0^{\frac{\lambda^2\deg(o)}{32e^2}}I_{\{Y'_t = 0\}}\;dt \geq \frac{1}{2}\right).
\end{aligned}$$
Denoting by $\pi$ the invariant measure for $Y'$, we have $\pi_0 = \frac{\frac{32e^2}{\lambda^2\deg(o)}}{1+\frac{32e^2}{\lambda^2\deg(o)}} < \frac{1}{3}$. Then, by the large deviations principle for Markov chains (see for example \cite{dembzeit}), we get
\begin{equation} P^\lambda_S\left((A_{3,j})^c\;|\; A_{2,j}\right) \leq e^{-c\lambda^2\deg(o)}\label{hjB4}\end{equation}
for some $c > 0$.
Finally, for any $j \geq 1$ we have
\begin{equation} \label{hjB5}P^\lambda_S\left((A_{2,j})^c\;|\;A_{4,j-1}\right) \leq \P\left(\mathsf{Bin}(\lambda \deg(o)/16e,\; 1/e) \leq \lambda \deg(o)/32e^2\right) \leq e^{-\theta \lambda^2\deg(o)/16\lambda e^2}\end{equation}
and similarly, $P^\lambda_S\left((A_{2,0})^c\right) \leq e^{-\theta \lambda^2\deg(o)/16\lambda e^2}$.
Putting together (\ref{hjEq2}), (\ref{hjB2}), (\ref{hjB3}), (\ref{hjB4}) and (\ref{hjB5}) and applying a union bound over $j \leq e^{c_1\lambda^2\deg(o)}$ (each of the failure probabilities above being exponentially small in $\lambda^2\deg(o)$, it suffices to decrease the value of $c_1$ if necessary), we get the desired result.
Statement $(iii.)$ can be proved by arguments similar to (and simpler than) those used for $(ii.)$, so for brevity we omit a full proof.
\end{proof}
As an application of the previous result, for two vertices $x$ and $y$ of a connected graph, we give a condition on $\deg(x)$ and $d(x,y)$ that guarantees that, with high probability, the infection is maintained long enough around $x$ to produce a path that reaches $y$.
\begin{lemma}
\label{lem:infNei}
There exists $\lambda_0 > 0$ such that, if $0 < \lambda < \lambda_0$, the following holds. If $G$ is a connected graph and $x, y$ are distinct vertices of $G$ with $$\deg(x) > \frac{3}{c_1}\frac{1}{\lambda^2}\log\left(\frac{1}{\lambda}\right) \cdot d(x, y) \quad \text{ and }\quad \frac{|\xi_0\;\cap\; B(x,1)|}{\lambda \cdot|B(x,1)|} > \frac{1}{16e},$$ then
$$P_G^\lambda\left(\exists t: \frac{|\xi_t\;\cap\; B(y,1)|}{\lambda \cdot|B(y,1)|} > \frac{1}{16e} \right)>1-2e^{-c_1\lambda^2\deg(x)}.$$
\end{lemma}
\begin{proof}
Let $r = 2d(x,y)$ and $L = \lfloor \frac{\exp(c_{1} \lambda^2 \deg(x))}{r} \rfloor$. Define the event $$A^2_1 = \{\forall s \leq Lr,\; \exists z \in B(x,1): \xi^x_s(z) = 1 \}.$$
By Lemma \ref{basic} we have $P_{G}^\lambda(A^2_1) \geq 1 - e^{-c_1\lambda^2 \deg(x)}.$
Further define the events
$$A^2_{2,i} = \{\exists z \in B(x,1): \xi^x_{ir}(z) = 1\}, \quad i= 0, \ldots, L-1$$
so that $A^2_1 \subset \cap_{i=0}^{L-1} A^2_{2,i}$.
On $A^2_{2,i}$, we can choose $Z_i \in B(x,1)$ such that $\xi^x_{ir}(Z_{i}) = 1$ and a sequence $\gamma_{i,0} = Z_i,\; \gamma_{i,1}, \ldots, \gamma_{i,k_i} = y$ such that $d(\gamma_{i,j}, \gamma_{i,j+1}) = 1 \; \forall j$ and $k_i \leq d(x,y)+1 \leq r$. Define
$$A^2_{3,i} = A^2_{2,i} \cap \left\{ \exists s \in [ir,\; (i+1)r - 1): (Z_i, ir) \;\lra\; (y,s)\text{ and } \frac{|\xi_{s+1} \;\cap \;B(y,1)|}{\lambda \cdot|B(y,1)|} > \frac{1}{16e} \right\},$$
We claim that
\begin{equation}\label{eqnPathToy}P_{G}^\lambda\left( A^2_{3,i}\;|\;A^2_{2,i},\;(\xi_t)_{0\leq t \leq ir}\right) \geq \left(e^{-1}(1-e^{-\lambda})\right)^r\cdot e^{-1}(1-e^{-c_1\lambda\deg(y)}).\end{equation}
To see this, note that an infection path from $(Z_i, ir)$ to $\{y\} \times [ir, \; (i+1)r - 1)$ can be obtained by imposing that, for $0 \leq j < k_i$, there is no recovery in $\{\gamma_{i,j}\} \times [ir + j,\; ir + j + 1)$ and at least one transmission from $\gamma_{i,j}$ to $\gamma_{i, j+1}$ at some time in $[ir + j,\; ir+j + 1)$. This explains the term $(e^{-1}(1-e^{-\lambda}))^r$ in the right-hand side of (\ref{eqnPathToy}). The other term comes from Lemma \ref{basic}(i.).
The right-hand side of (\ref{eqnPathToy}) is larger than $\left(\frac{\lambda}{3}\right)^{r} \cdot \frac{c_1\lambda}{2e}$ when $\lambda$ is small.
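To verify this elementary estimate: since $1 - e^{-\lambda} \geq \lambda - \frac{\lambda^2}{2}$, we have
$$e^{-1}(1-e^{-\lambda}) \geq \frac{\lambda}{e}\Big(1 - \frac{\lambda}{2}\Big) \geq \frac{\lambda}{3} \qquad \text{and} \qquad e^{-1}\big(1 - e^{-c_1\lambda \deg(y)}\big) \geq e^{-1}\big(1-e^{-c_1\lambda}\big) \geq \frac{c_1\lambda}{2e}$$
once $\lambda$ is small enough.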
We then have
$$\begin{aligned}P_{G}^\lambda\left(A^2_1 \cap (\cup_{i=0}^{L-1}\; A^2_{3,i})^c\right) &\leq P_{G}^\lambda\left((\cap_{i=0}^{L-1} \; A^2_{2,i}) \cap (\cup_{i=0}^{L-1}\; A^2_{3,i})^c\right)\\
&\leq P_{G}^\lambda\left((\cap_{i=0}^{L-1} \; A^2_{2,i}) \cap (\cup_{i=0}^{L-2}\; A^2_{3,i})^c\right) \cdot \left(1-(c_1\lambda/2e)\left(\lambda/3\right)^{r}\right)\\
&\leq P_{G}^\lambda\left((\cap_{i=0}^{L-2} \; A^2_{2,i}) \cap (\cup_{i=0}^{L-2}\; A^2_{3,i})^c\right) \cdot \left(1-(c_1\lambda/2e)\left(\lambda/3\right)^{r}\right)\end{aligned}$$
and iterating, this is less than
$$\begin{aligned} \left(1-\frac{c_1\lambda}{2e}\left(\frac{\lambda}{3}\right)^{r} \right)^{L} \leq \exp\left\{-\frac{c_1\lambda}{2e}\left( \frac{\lambda}{3}\right)^{2d(x,y)} \cdot \frac{1}{2d(x,y)} \cdot \exp\left\{c_1\lambda^2\deg(x) \right\} \right\},
\end{aligned}$$
which is smaller than $e^{-c_1\lambda^2\deg(x)}$ if $\lambda$ is small enough, since $\deg(x) > \frac{3}{c_1}\frac{1}{\lambda^2}\log\frac{1}{\lambda}$. This completes the proof.
\end{proof}
\section{Proof of Proposition \ref{prop:main}: lower bounds}
\label{s:lower}
Given a random graph $G$ (which will be either a Galton-Watson tree or the random graph $G_n$), a vertex $x$ of $G$ and $R,K > 0$, define the event
\begin{equation}
\mathcal{M}(x,R,K) = \{\exists y: d(x,y) \leq R,\; \deg(y) > K\}.\label{eq:defM}
\end{equation}
We will need the following simple result on Galton-Watson trees.
\begin{lemma}\label{lem:auxFinal}
If $2 < a \leq 3$, then ${\displaystyle\;\; \liminf_{K \to \infty}\; \Q_q\left(\;\mathcal{M}(o, 2, K\log K)\; \left|\;\deg(o) = K \right.\right) > 0}$.
\end{lemma}
\begin{proof}
Assume $\deg(o) = K$ and define
$$A=\left\{|\{x: d(o,x) = 2\}| > \frac{c_0}{2}K\log K\right\}.$$
Let $z_1, \ldots, z_K$ be the neighbours of the root and $Z_i = \deg(z_i)-1$ for $1 \leq i \leq K$, so that $\sum Z_i = |\{x: d(o,x) = 2\}|$. Note that the law of the $Z_i$ is given by $k \mapsto q(k+1)$ and thus stochastically dominates the distribution $k \mapsto \hat q(k):=\overline q_{K}(k+1)$, where $\overline q_{K}$ is the truncation of $q$, as defined in (\ref{eq:defhatq}). Let $Y_1, Y_2, \ldots$ be i.i.d. with distribution $\hat q$. We then have, by (\ref{c03-a}),
$$\E(Y_1) > c_0 \log(K), \qquad \text{Var}(Y_1) \leq \sum_{k\leq K} k^2q(k+1) \leq \bar C_0 K$$
where $\bar C_0 > 0$ is a constant that depends only on $p$. Then,
$$\begin{aligned}
&\Q_{q}(A\;|\;\deg(o) = K) = \Q_q\left(\left.\sum_{k \leq K} Z_k > \frac{c_0}{2} \;K\log K\;\right|\;\deg(o) = K\right) \geq \P\left(\sum_{k \leq K} Y_k > \frac{c_0}{2} \;K\log K\right) \\&\qquad\qquad\qquad\qquad> 1 - \P\left(\left|\sum_{k \leq K} Y_k - K\cdot \E(Y_1)\right| > \frac{c_0}{2} K\log K \right) > 1 - \frac{\bar C_0 \cdot K^2}{\left(\frac{c_0}{2} K\log K\right)^2} \stackrel{K \to \infty}{\xrightarrow{\hspace*{0.8cm}}} 1.
\end{aligned}$$
Now, if $A$ occurs, then there are at least $\frac{c_0}{2}K\log K$ vertices at distance 2 from $o$. Each of these vertices has degree larger than $K\log K$ with probability $q(K\log K,\;\infty) \geq c_0(K \log K)^{-(a-2)} \geq c_0(K \log K)^{-1}$, since $a \leq 3$. We thus get
$$\begin{aligned}&\Q_{q}(\mathcal{M}(o, 2, K\log K)\;|\;\{\deg(o) = K\}\;\cap\;A) \geq 1 - \left(1-c_0(K\log K)^{-1}\right)^{\frac{c_0}{2}K\log K} \\&> 1 - \exp \left\{-c_0\left(K\log K\right)^{-1} \cdot \frac{c_0}{2}K\log K \right\} = 1 - \exp\left\{-\frac{(c_0)^2}{2}\right\}.\end{aligned}$$
This completes the proof.
\end{proof}
Again assume that $G$ is a random graph, and that we have a graphical construction for the contact process with parameter $\lambda$ on $G$. For a vertex $x$ of $G$ and $R > 0$, let
$$\chi_t = \{y: (x,0) \lra (y,t) \text{ inside } B(x,R)\}.$$
Then, define
\begin{equation}
\mathcal{N}(x,R,K) = \left\{\exists y, t: d(x,y) \leq R,\;\deg(y) > K,\;\frac{|\chi_t \;\cap\;B(y,1)|}{|B(y,1)|} > \frac{\min(\lambda,\lambda_0)}{16e} \right\},\label{eq:defN}
\end{equation}
where $\lambda_0$ is as in Lemma \ref{lem:infNei}. In words, in the contact process started from $x$ infected, a proportion larger than $\frac{\min(\lambda, \lambda_0)}{16e}$ of the neighbours of $y$ become infected at some time $t$, and this occurs through infection paths contained in the ball $B(x, R)$.
In the rest of this section, we will assume that $\lambda < \lambda_0$, as in Lemma \ref{lem:infNei}, and will often state conditions that require $\lambda$ to be sufficiently small.
\subsection{Case $2\frac{1}{2} < a \leq 3$}\label{ss:lower}
Define
$$\begin{array}{lll}
K_1 = \frac{12}{c_1} \frac{1}{\lambda^2} \log \left(\frac{1}{\lambda}\right), &K_2 = \frac{18a}{c_1\log 2} \frac{1}{\lambda^2} \log^2 \left(\frac{1}{\lambda}\right),&K_i = \frac{1}{\lambda^3} + i - 3,\; i\geq 3;
\\
R_1 = 1,&R_2 = 3,&R_i = \lceil a\log_2 K_i\rceil,\; i \geq 3,
\end{array}$$
where $c_1$ is as in Lemma \ref{basic}. We will show that, for some $c > 0$ and $\lambda$ small enough,
\begin{eqnarray}
&&\Q_{p,q}\left({\mathop \cap_{i=1}^\infty} \; \mathcal{M}(o, R_i, K_i)\right) > c\; \left(\frac{\lambda^2}{\log \left(\frac{1}{\lambda}\right)}\right)^{a-2} \text{ and}\label{eqn:eqRK1}\\
&&\Q_{p,q}^\lambda \left({\mathop \cap_{i=1}^\infty}\; \mathcal{N}(o, R_i, K_i)\;\left|\;{\mathop \cap_{i=1}^\infty}\;\mathcal{M}(o, R_i, K_i) \right.\right) > c\lambda. \label{eqn:eqRK2}
\end{eqnarray}
Since $\{\xi^o_t \neq \varnothing\;\forall t\} \supset \cap_{i=1}^\infty\;\mathcal{N}(o, R_i, K_i)$, these inequalities will give us the desired result.
To prove (\ref{eqn:eqRK2}) we assume $\cap_{i=1}^\infty\;\mathcal{M}(o, R_i, K_i)$ occurs and let $y_1, y_2, \ldots$ denote sites with $\deg(y_i) > K_i$ and $d(o, y_i) \leq R_i$ (so that $d(y_i, y_{i+1}) \leq 2R_{i+1}$). With probability $\frac{\lambda}{1+\lambda}$, the root infects its neighbour $y_1$ before recovering (unless the root itself is equal to $y_1$, in which case this probability is 1). Then, by Lemma \ref{basic}$(i.)$, with probability larger than $e^{-1}(1-e^{-c_1\lambda\deg(y_1)})$, we have $\frac{|\xi^o_t \;\cap\;B(y_1,1)|}{\lambda \cdot|B(y_1,1)|} > \frac{1}{16e}$ for some $t > 0$, so that $\mathcal{N}(o, R_1, K_1)$ occurs. Since, for each $i$,
$$\deg(y_i) > K_i > \frac{3}{c_1}\left(\frac{1}{\lambda^2} \log \frac{1}{\lambda}\right)\cdot 2R_{i+1} \geq \frac{3}{c_1} \left(\frac{1}{\lambda^2} \log \frac{1}{\lambda}\right)\cdot d(y_i, y_{i+1}),$$
we can repeatedly use Lemma \ref{lem:infNei} to guarantee that, with probability larger than $1-2\sum_{i=1}^\infty\;e^{-c_1\lambda^2K_i}$, for each $i$ there exists $t > 0$ such that $\frac{|\xi^o_t \;\cap\;B(y_i,1)|}{\lambda \cdot|B(y_i,1)|} > \frac{1}{16e}$. This shows that
$$\Q_{p,q}^\lambda\left({\mathop \cap_{i=1}^\infty \;\mathcal{N}(o, R_i, K_i)} \left| {\mathop \cap_{i=1}^\infty \;\mathcal{M}(o, R_i, K_i)}\right.\right) > \frac{\lambda}{1 + \lambda} \cdot (e^{-1}(1-e^{-c_1\lambda K_1}))\cdot \left(1 - 2\sum_{i=1}^\infty e^{-c_1\lambda^2K_i}\right) > \frac{\lambda}{3}$$
when $\lambda$ is small enough.
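To justify the last inequality, note that with the above choices we have $c_1\lambda K_1 = \frac{12}{\lambda}\log\frac{1}{\lambda} \to \infty$, $e^{-c_1\lambda^2 K_1} = \lambda^{12}$, $e^{-c_1\lambda^2 K_2} = e^{-\frac{18a}{\log 2}\log^2\left(\frac{1}{\lambda}\right)}$ and
$$\sum_{i=3}^\infty e^{-c_1\lambda^2 K_i} = \frac{e^{-c_1/\lambda}}{1 - e^{-c_1\lambda^2}} \leq \frac{2e^{-c_1/\lambda}}{c_1\lambda^2},$$
so that, as $\lambda \to 0$, the three factors in the last display are at least $\frac{\lambda}{1+\lambda}$, $e^{-1}(1-o(1))$ and $1-o(1)$ respectively, and their product exceeds $\frac{\lambda}{3}$ for $\lambda$ small.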
We now turn to (\ref{eqn:eqRK1}). By (\ref{c0a-2}) we have
\begin{equation}\label{eq:lowboundEq31}\Q_{p,q}(\mathcal{M}(o, R_1, K_1)) \geq c_0\left(\frac{12}{c_1} \frac{1}{\lambda^2}\log \frac{1}{\lambda} \right)^{-(a-2)}.\end{equation}
On the event $\mathcal{M}(o, R_1, K_1) = \mathcal{M}(o, 1, K_1)$, again let $y_1$ denote a vertex in $B(o, 1)$ with degree larger than $K_1$. By Lemma \ref{lem:auxFinal}, we have
$$\Q_{p,q}\left(\mathcal{M}(y_1, 2, K_1\log K_1)\;|\;\mathcal{M}(o, 1, K_1) \right) > \bar c$$
for some $\bar c > 0$ that does not depend on $\lambda$. Since $K_1 \log K_1 > K_2$ for $\lambda$ small enough, this implies that
\begin{equation}\label{eq:lowboundEq32}\Q_{p,q}\left(\mathcal{M}(o, 3, K_2)\;|\;\mathcal{M}(o, R_1, K_1) \right) > \bar c.\end{equation}
To give a lower bound for the probability of $\mathcal{M}(o, R_i, K_i)$ when $i \geq 3$, we observe that there are at least $2^{R_i-1}$ vertices at distance $R_i-1$ from the root, by the fact that the degrees of all vertices are at least 3. Thus,
$$\begin{aligned} \Q_{p,q}(\mathcal{M}(o, R_i, K_i)) &\geq 1 - (1 - c_0K_i^{-(a-2)})^{2^{R_i-1}} \\&\geq 1 -\exp\left\{-\frac{c_0}{2}\cdot K_i^{-(a-2)}\cdot K_i^{a} \right\} = 1 - \exp\left\{-\frac{c_0}{2}\left(\frac{1}{\lambda^3} + i - 3\right)^2\right\}.\end{aligned}$$
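Here we have used the elementary facts that, by the definition of $R_i$,
$$2^{R_i - 1} \geq \tfrac{1}{2}\, 2^{\,a\log_2 K_i} = \tfrac{1}{2} K_i^{\,a} \qquad \text{and} \qquad \left(1 - c_0 K_i^{-(a-2)}\right)^{2^{R_i-1}} \leq \exp\left\{-c_0 K_i^{-(a-2)}\cdot 2^{R_i - 1}\right\}.$$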
We then get
\begin{equation}\label{eq:RKfinal}
\Q_{p,q}\left(\left({\mathop \cap_{i=3}^\infty}\; \mathcal{M}(o, R_i, K_i)\right)^c \right) < \sum_{i=3}^\infty e^{-\frac{c_0}{2}\left(\frac{1}{\lambda^3} + i -3\right)^2}
\end{equation}
and, as $\lambda \to 0$, the right-hand side converges to 0 faster than any power of $\lambda$. Inequality (\ref{eqn:eqRK1}) now follows from (\ref{eq:lowboundEq31}), (\ref{eq:lowboundEq32}) and (\ref{eq:RKfinal}).
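(To see that the right-hand side of (\ref{eq:RKfinal}) is indeed negligible: since $\left(\frac{1}{\lambda^3}+i-3\right)^2 \geq \frac{1}{\lambda^6} + (i-3)^2$, it is at most
$$e^{-\frac{c_0}{2}\lambda^{-6}}\sum_{j=0}^\infty e^{-\frac{c_0}{2}j^2} \leq C\, e^{-\frac{c_0}{2}\lambda^{-6}},$$
which is smaller than any fixed power of $\lambda$ once $\lambda$ is small.)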
\subsection{Case $a > 3$}
This case is very similar to the previous one, only simpler. The proof can be repeated with the constants now given by
$$\begin{array}{ll}K_1 = \frac{12a}{\log2} \cdot \frac{1}{\lambda^2} \log^2 \left(\frac{1}{\lambda}\right),& K_i = \frac{1}{\lambda^3}+i-2,\;i\geq 2;
\\R_1 = 1,&R_i = \lceil a\log_2K_i \rceil,\; i \geq 2. \end{array}$$
It is thus shown that
$$\Q_{p,q}\left({\mathop \cap_{i=1}^\infty}\mathcal{M}(o, R_i, K_i) \right) > c\;\left(\frac{\lambda^2}{\log^2\left(\frac{1}{\lambda}\right)}\right)^{a-2}\text{and }\;\; \Q_{p,q}^\lambda \left( {\mathop \cap_{i=1}^\infty} \mathcal{N}(o, R_i, K_i)\;\left|\;{\mathop \cap_{i=1}^\infty} \mathcal{M}(o, R_i, K_i) \right. \right) > c\lambda.$$
\subsection{Case $2 < a \leq 2\frac{1}{2}$}
Recall the definition of $\hat q$ in (\ref{eq:defhatq}). We will show that
\begin{equation}\label{eq:afred2}\Q^\lambda_{\hat q}\left(\xi^o_t \neq \varnothing \; \forall t\right) > c \lambda^{\frac{1}{3-a} - 1}. \end{equation}
This will give the desired result since
$$\Q^\lambda_{p,q}\left(\xi^o_t \neq \varnothing \; \forall t\right) \geq \frac{\lambda}{1+\lambda} \cdot\Q^\lambda_{\hat q}\left(\xi^o_t \neq \varnothing \; \forall t\right).$$
In order to study the contact process $(\xi^o_t)_{t \geq 0}$ on $\T$ (a tree sampled from $\Q_{\hat q}$) and started from only the root infected, we introduce a comparison process $(\eta_t)_{t \geq 0}$, started from the same initial condition. $(\eta_t)$
will be a modification of the contact process: sites will become permanently set to value $0$ the first time (if ever) that they return to value $0$ after having taken value $1$. Consequently, sites cannot infect sites closer to the root than themselves.
More precisely, $(\eta_t)_{t \geq 0}$ is defined as follows. Suppose we are given a tree $\T$ and a graphical construction $\{(D_x)_{x \in \T},\; (D_{x,y})_{x,y \in \T,\; x \sim y}\}$ with parameter $\lambda > 0$. Let $\sigma_o = \inf D_o$ be the first recovery time at the root and set $\eta_t(o) = I_{[0, \sigma_o)}(t)$ for all $t \geq 0$. Now assume $(\eta_t(x))_{t \geq 0}$ has been defined for all $x$ at distance $m$ or less from the root, and fix $y$ with $d(o,y) = m+1$. Let $z$ be the parent of $y$, that is, $d(o, z) = m$ and $d(z, y) = 1$. Let $\tau_y = \inf\left(\{t: \eta_t(z) = 1\} \cap D_{z, y}\right)$ and, if $\tau_y < \infty$, let $\sigma_y = \inf\left([\tau_y, \infty) \cap D_y \right)$. Now, if $\tau_y < \infty$, set $\eta_t(y) = I_{[\tau_y, \sigma_y)}(t)$ for all $t$ and otherwise set $\eta_t(y) = 0$ for all $t$.
Define
$$X_m:=\left|\{z:d(o,z) = m, \exists t < \infty \text{ with } \eta_t(z) = 1 \}\right|$$
for $m=0,1,2,\ldots$ Then $(X_m)_{m\geq0}$ is a branching process and is in principle easy to analyze. We start with the following lemma, which gives
a lower bound for the probability $\Q_{\hat q}^\lambda(X_1\geq
k)$.
\begin{lemma}\label{l:1}
There exists $c_{2.1}$ such that, for $\lambda \in (0,1)$ and all $k \geq 1$,
$$\Q_{\hat q}^\lambda(X_1\geq k)\geq c_{2.1}\left(\lambda/k\right)^{a-2}.$$
\end{lemma}
\begin{proof} For $k\geq1$, we have
$$\begin{aligned}\Q_{\hat q}^\lambda(X_1 \geq k) &\geq \Q_{\hat q}^\lambda\left(X_1 \geq k,\; \sigma_o \geq 1,\; \deg(o) \geq \frac{2k}{1-e^{-\lambda}}\right)\\&\geq \Q_{\hat q}^\lambda \left(\sigma_o \geq 1,\; \deg(o) \geq \frac{2k}{1-e^{-\lambda}} \right)\cdot \P\left(\mathsf{Bin}\left(\left\lceil\frac{2k}{1-e^{-\lambda}}\right \rceil,\;1-e^{-\lambda} \right) \geq k\right)\\
&=e^{-1}\cdot \hat q\left[\frac{2k}{1-e^{-\lambda}},\;\infty \right)\cdot \P\left(\mathsf{Bin}\left(\left \lceil\frac{2k}{1-e^{-\lambda}}\right \rceil,\;1-e^{-\lambda} \right) \geq k\right) \\&\geq C \cdot \hat q\left[\frac{2k}{1-e^{-\lambda}},\;\infty \right) \geq c_{2.1} \left(\frac{\lambda}{k} \right)^{a-2}
.\end{aligned}$$\end{proof}
As a consequence of the above result, $X_1$ has infinite expectation, so, with positive probability, $X_n \to \infty$ as $n \to \infty$. We define the generating function for the law of $X_1$:
$$\Psi_\lambda(s) = \sum_{n=0}^\infty \Q_{\hat q}^\lambda(X_1 = n) \cdot s^n \qquad (s \in (0,1]).$$
We can use Lemma \ref{l:1} to get the following estimate for $\Psi_\lambda(s)$, where the infection parameter $\lambda > 0$ is fixed.
\begin{lemma}
\label{l:2} There exists $c_{2.2} > 0$ such that, for $\lambda \in (0,1)$ and $s \in [1/2, 1]$,
$$\Psi_\lambda(s) \leq 1 - c_{2.2}(\lambda(1-s))^{a-2}.$$
\end{lemma}
\begin{proof}
By monotonicity of $s^m$ in $m$, we have for any positive integer $k$
$$\Psi_\lambda(s) \leq \sum_{i=0}^k \Q_{\hat q}^\lambda(X_1 = i) + s^k \sum_{i = k+1}^\infty \Q_{\hat q}^\lambda(X_1 = i) = 1 - \Q_{\hat q}^\lambda(X_1 \geq k)\cdot (1-s^k).$$
We choose $k$ equal to $\left \lfloor\frac{1}{1-s}\right \rfloor$ (the case $s = 1$ being trivial), which gives the desired inequality, since $s \mapsto 1 - s^{\lfloor \frac{1}{1-s}\rfloor}$ is bounded away from zero for $s \in [1/2, 1)$.
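Indeed, for $s \in [1/2, 1)$ we have $\left\lfloor\frac{1}{1-s}\right\rfloor \geq \frac{1}{1-s} - 1 = \frac{s}{1-s}$ and $\log s \leq -(1-s)$, so
$$s^{\lfloor\frac{1}{1-s}\rfloor} \leq \exp\left\{\frac{s\log s}{1-s}\right\} \leq e^{-s} \leq e^{-1/2},$$
whence $1 - s^{\lfloor\frac{1}{1-s}\rfloor} \geq 1 - e^{-1/2}$.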
\end{proof}
From Lemma \ref{l:2} we can easily get the following
\begin{corollary}
There exists $c_{2.3} > 0$ such that, for $\lambda > 0$ small enough,
$$\Q_{\hat q}^\lambda(X_n \neq 0 \; \forall n) \geq c_{2.3}\;\lambda^{\frac{a-2}{3-a}}.$$
\end{corollary}
\begin{proof}
We know that $X_1$ has infinite expectation and so from the standard theory of branching processes (see e.g. \cite{durprob}), we have that the survival probability $\beta$ satisfies
$$\beta = 1 - \Psi_\lambda(1-\beta).$$
If $\beta > 1/2$, the claimed bound holds trivially for $\lambda$ small. Otherwise $1 - \beta \in [1/2, 1]$, so by Lemma \ref{l:2} the right-hand side is larger than $c_{2.2}(\lambda \beta)^{a-2}$; hence $\beta > c_{2.2}(\lambda\beta)^{a-2}$ and therefore $\beta >(c_{2.2})^\frac{1}{3-a}\cdot \lambda^\frac{a-2}{3-a}.$
\end{proof}
Since $\frac{a-2}{3-a}=\frac{1}{3-a} - 1$ and $\{\xi^o_t \neq \varnothing \; \forall t\} \supset \{X_n \neq 0\; \forall n\}$, (\ref{eq:afred2}) is now proved.
\section{Extinction estimates on star graphs and trees}
\label{s:extestimate}
Our main objective in this section is to establish estimates that allow us to say, under certain conditions, that the contact process does not spread too much and does not survive too long. In Lemma \ref{lembound}, we obtain upper bounds for the probability of existence of certain infection paths on finite trees of bounded degree. In Lemma \ref{lem:extStar}, we obtain a result for star graphs that works in the reverse direction as that of Lemma \ref{basic}: with high probability, the contact process on a star graph $S$ does not survive for longer than $e^{C\lambda^2|S|}$, for some large $C > 0$.
\begin{lemma} \label{lembound}
\noindent Let $\lambda <\frac{1}{2}$ and $T$ be a finite tree with maximum degree bounded by $\frac{1}{8\lambda^2}$. Then, for any $x, y \in T$ and $0 < t < t'$,
\\
$(i.)\;P_T^\lambda\left(\;(x,0) \;\lra\; \{y\} \times \R_+\;\right) \leq (2\lambda)^{d(x,y)};$
\\
$(ii.)\; P_{T}^\lambda\left(\;(x,0) \;\lra\; \{y\} \times [t, \infty)\;\right) \leq (2\lambda)^{d(x,y)}\cdot e^{-t/4};$
\\
$(iii.)\;P_{T}^\lambda \left( \xi^T_t \ne \emptyset \right) \ \leq \ |T|^2\cdot e^{-t/4};$
\\
$(iv.)\;P_{T}^\lambda\left(\;\{x\} \times [0, t] \;\lra\; \{y\} \times \R_+\;\right) \leq (t+1)\cdot(2\lambda)^{d(x,y)}$;
\\
If $x \neq y$,
\\
$(v.)\;P_{T}^\lambda\left(\;\exists \ell < \ell':\;(x,0) \;\lra\; (y, \ell) \;\lra\; (x,\ell') \;\right) \leq (2\lambda)^{2d(x,y)};$
\\
$(vi.)\;P_{T}^\lambda\left(\;\exists \ell:\;\{x\}\times [0,t] \;\lra\; (y, \ell) \text{ and } (y, \ell) \;\lra\; \{x\} \times [\ell, \infty)\;\right) \leq (t+1)\cdot(2\lambda)^{2d(x,y)}.$
\end{lemma}
\begin{proof}
\noindent $(i.)$ For $u > 0$, let $M_u = \sum_{z \in T}\; \xi^x_u(z) \cdot (2\lambda)^{d(z,y)}$. We claim that $(M_u)_{u \geq 0}$ is a supermartingale. To check this, notice that, for fixed $u \geq 0$ and $\xi \in \{0,1\}^T$,
\begin{eqnarray}\nonumber&&\frac{d}{dr} E_{T}^\lambda\left(M_{u+r} \;|\; \xi_u = \xi\right)\vert_{r=0+} = \sum_{\substack{z \in T:\;\xi(z) = 1}} \left(\left(\lambda \cdot \sum_{\substack{w: w \sim z,\;\xi(w) = 0}} (2\lambda)^{d(w,y)}\right) - (2\lambda)^{d(z,y)}\right)\\
&&\qquad\qquad\nonumber\leq \sum_{\substack{z \in T:\;\xi(z) = 1}} \left(2^{d(z,y)-1} \cdot \lambda^{d(z,y)} + \frac{1}{8\lambda^2} \cdot 2^{d(z,y) + 1} \cdot \lambda^{d(z,y) + 2} - (2\lambda)^{d(z,y)}\right)\\
&&\nonumber \qquad\qquad\leq \sum_{\substack{z \in T:\;\xi(z)=1}} \lambda^{d(z,y)}\left(2^{d(z,y) -1} + 2^{d(z,y)-2} - 2^{d(z,y)}\right) \\
&&\qquad \qquad= -\frac{1}{4} \sum_{z \in T} \xi(z) \cdot (2\lambda)^{d(z,y)}.\label{eq:14mart}\end{eqnarray}
Then, if $0 \leq s < u$,
$$\begin{aligned}&\frac{d}{dr}E_{T}^\lambda\left(M_{u+r}\;|\; \xi_{s'}: 0 \leq s' \leq s\right)\big\vert_{r=0+} \\&\quad= \sum_\xi \frac{d}{dr}E_{T}^\lambda\left(M_{u+r}\;|\; \xi_u = \xi\right)\vert_{r=0+} \cdot P_{T}^\lambda\left(\xi_u = \xi\;|\; \xi_{s'}:0 \leq s' \leq s\right) < 0.\end{aligned}$$
In addition, the function $u \in [s,\infty) \mapsto E_{T}^\lambda\left(M_u\;|\;\xi_{s'}: 0 \leq s'\leq s\right)$ is continuous. Consequently, it is decreasing, so $E_{T}^\lambda(M_u\;|\; \xi_{s'}:0\leq s'\leq s) \leq M_s.$
Now, let $\tau = \inf\{u > 0: \xi^x_u(y) = 1\}$, and note that $M_\tau \geq 1$ on $\{\tau < \infty\}$, since $\xi^x_\tau(y) = 1$ there. By the optional sampling theorem (which may be applied since $M$ is a c\`adl\`ag supermartingale), we get
$$\begin{aligned}P_{T}^\lambda((x,0) \lra \{y\} \times \R_+) &= P_{T}^\lambda(\tau < \infty) \\&\leq E_{T}^\lambda(M_\tau;\; \tau < \infty) \leq E_{T}^\lambda(M_0) = (2\lambda)^{d(x,y)}.\end{aligned}$$
\noindent $(ii.)$ Since by (\ref{eq:14mart}) for any $u$ we have $$\frac{d}{dr} E_{T}^\lambda\left(M_{u+r}\;|\; \xi_u\right)\vert_{r=0+} \leq -\frac{1}{4}M_u,$$ the process $\tilde M_u = e^{u/4} \cdot M_u$ is a supermartingale. Now define $\sigma_t = \inf\{u \geq t: \xi^x_u(y) = 1\}$. The optional sampling theorem gives
$$\begin{aligned}P_{T}^\lambda\left((x,0) \;\lra\; \{y\}\times [t,\infty)\right) &\leq e^{-t/4}\cdot E_{T}^\lambda\left(\tilde M_{\sigma_t}\cdot I_{\{\sigma_t < \infty\}}\right) \\&\leq e^{-t/4}\cdot E_{T}^\lambda\left(\tilde M_0\right) = e^{-t/4}\cdot (2\lambda)^{d(x,y)},\end{aligned}$$
completing the proof.
\noindent $(iii.)$ Again applying the optional sampling theorem to the supermartingale $(\tilde M_u)$ defined above, we get
\begin{equation} \label{eqnxry}P_{T}^\lambda\left(\xi^x_u(y) = 1\right) \leq e^{-u/4} \cdot (2\lambda)^{d(x,y)} \qquad \forall u. \end{equation}
Applying (\ref{eqnxry}) and the fact that $\lambda < 1/2$,
$$P_{T}^\lambda\left(\xi^T_t \neq \emptyset\right) \leq \sum_{x,y\in T}\; P_{T}^\lambda\left(\xi^x_t(y) = 1\right) \leq |T|^2\cdot e^{-t/4}.$$
\noindent $(iv.)$ For $u > 0$ and $z \in T$, define $\zeta_u(z) = I_{\{\{x\} \times [0, t] \;\lra\; (z,u)\}}$. $(\zeta_u)_{u \geq 0}$ is thus a process that evolves as $(\xi_u)_{u \geq 0}$, with the difference that site $x$ is ``artificially'' kept at state 1 until time $t$. Next, define for $u > 0$
$$N_u = \max(t+1-u,\; 1) \cdot \zeta_u(x) \cdot (2\lambda)^{d(x,y)} + \sum_{z \neq x} \zeta_u(z)\cdot (2\lambda)^{d(z,y)}.$$
We claim that $(N_u)_{u \geq 0}$ is a supermartingale. As in the previous parts, this is proved from
\begin{equation}\label{eqnCad} \frac{d}{dr}E_{T}^\lambda\left(N_{u+r}\;|\;\zeta_u\right)\vert_{r=0+} < -\frac{1}{4}N_u < 0. \end{equation}
In case $u \geq t$, (\ref{eqnCad}) is proved exactly as in the first computation in the proof of part $(i.)$. In case $u < t$, we note that
$$\begin{aligned}
&\frac{d}{dr} E_{T}^\lambda\left((t+1-u-r)\cdot \zeta_{u+r}(x) \cdot (2\lambda)^{d(x,y)}\;|\; \zeta_u\right)\big|_{r=0+} = -(2\lambda)^{d(x,y)},
\end{aligned}$$
so the same computation can again be employed and (\ref{eqnCad}) follows. The result is now obtained from the optional sampling theorem and the fact that $N_0 = (t+1)\cdot(2\lambda)^{d(x,y)}$.
\noindent $(v.)$ The proofs of $(v.)$ and $(vi.)$ are similar but $(v.)$ is easier, so we only present $(vi.)$.
\noindent $(vi.)$ For $u \geq 0$ and $z \in T$, define
$$\begin{aligned}&\eta_u(z) = I\{\{x\} \times [0,t] \lra (z,u) \text{ by a path that does not pass through }y\};\\
&\eta'_u(z) = I\{\{x\} \times [0, t] \lra (z,u) \text{ by a path that passes through }y\}.
\end{aligned}$$
Notice that, in particular, $\eta_u(x) = 1 \; \forall u \leq t$, $\eta_u(y) = 0 \; \forall u$ and
$$\left\{\begin{array}{c}\exists t':\;\{x\}\times [0,t] \lra (y, t')\\ \text{ and } (y, t') \lra \{x\} \times [t', \infty)\end{array} \right\} = \{\exists s:\;\eta'_s(x) = 1\}.$$
Also define, for $u \geq 0$,
$$\begin{aligned}L_u = \max\left((t+1-u),\; 1\right)\cdot \eta_u(x) \cdot(2\lambda)^{2d(x,y)} &+ \sum_{\substack{z\in T:\\z \neq x}}\;\eta_u(z)\cdot(2\lambda)^{d(z,y) + d(y,x)}\\
&+ \sum_{z\in T}\; \eta'_u(z) \cdot (2\lambda)^{d(z,x)}. \end{aligned}$$
Proceeding as in the previous parts (and again treating separately the cases $u<t$ and $u \geq t$), we can show that $(L_u)_{u \geq 0}$ is a supermartingale. The result then follows from the optional sampling theorem (consider the stopping time $\inf\{s:\; \eta'_s(x) = 1\}$) and the fact that $L_0 = (t+1)(2\lambda)^{2d(x,y)}$.
\end{proof}
\begin{lemma}
\label{lem:extStar}
If $\lambda < 1/4$ and $S$ is a star,
$$P^\lambda_S\left(\xi^S_{3\log\left(\frac{1}{\lambda}\right)} = \varnothing\right) \geq \frac{1}{4}\;e^{-16\lambda^2|S|}.$$
\end{lemma}
\begin{proof}
Let $(\zeta^S_{t})_{t\geq 0}$ be the process with state space $\{0,1\}^S$, starting from full occupancy, and with the same dynamics as those of the contact process, with the only difference that recovery marks at the hub $o$ have no effect, so that $o$ is permanently in state 1. $(\xi^S_t)$ and $(\zeta^S_t)$ can obviously be jointly constructed with a single graphical construction, with the property that $\xi^S_t \leq \zeta^S_t$ for all $t$. Also note that the processes $(\zeta^S_t(x))_{t \geq 0}$, $x \in S$, are independent and, if $x \neq o$, the function $t \mapsto P_S^\lambda(\zeta^S_t(x) = 1)$ is a solution of $f'(t) = \lambda(1-f(t)) - f(t)$, so
$$P^\lambda_S\left(\zeta^S_t(x) = 1\right) = \frac{1}{1+\lambda}\left(\lambda + e^{-(1+\lambda)t}\right).$$
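Indeed, rewriting the equation as $f'(t) = \lambda - (1+\lambda)f(t)$, its solution with $f(0) = 1$ is the convex combination of the equilibrium value $\frac{\lambda}{1+\lambda}$ and the initial value,
$$f(t) = \frac{\lambda}{1+\lambda} + \left(1 - \frac{\lambda}{1+\lambda}\right)e^{-(1+\lambda)t},$$
which simplifies to the expression above.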
Let $\sigma = \inf\left(D_o \cap \left[\log \frac{1}{\lambda},\;\infty \right)\right)$ be the first recovery time at the hub after time $\log \frac{1}{\lambda}$. Also define the events
$$\begin{aligned}
&B^1_1 = \left\{\sigma < 2\log \frac{1}{\lambda}\right\};\\
&B^1_2 = \left\{|\zeta^S_\sigma| \leq 4\lambda|S|\right\};\\
&B^1_3 = \left\{\begin{array}{c}\text{For all } x \in \zeta^S_\sigma,\; D_x \cap \left[\sigma,\; \sigma + \log \frac{1}{\lambda}\right] \neq \varnothing
\\ \text{and } \inf\left(D_x \cap [\sigma,\;\infty)\right) < \inf\left(D_{x,o} \cap [\sigma,\;\infty)\right) \end{array}\right\}.
\end{aligned}$$
We then have $\left\{\xi^S_{3\log\frac{1}{\lambda}} = \varnothing\right\} \supset B^1_1 \cap B^1_2 \cap B^1_3$. To see this, assume that the three events occur. By the definition of $\sigma$, we have $\xi^S_\sigma(o) = 0$ and $|\xi^S_\sigma| \leq |\zeta^S_\sigma| \leq 4\lambda|S|$, and every vertex that is infected at this time recovers without reinfecting the hub by time $\sigma + \log(1/\lambda) < 3\log(1/\lambda)$.
We have
$$P_S^\lambda(B^1_1) \geq P^\lambda_S\left(D_o \cap \left[\log \frac{1}{\lambda},\;2\log \frac{1}{\lambda}\right] \neq \varnothing\right)\geq 1 - e^{-\log\frac{1}{\lambda}} = 1 -\lambda.$$
For $x \neq o,\; P_S^\lambda\left(\zeta^S_\sigma(x) = 1\right) \leq \frac{1}{1+\lambda}\left(\lambda + e^{-(1+\lambda)\log\frac{1}{\lambda}}\right) \leq 2\lambda,$ so
$$P_S^\lambda(B^1_2) \geq \P\big(\mathsf{Bin}(|S|,\;2\lambda) \leq 4\lambda|S| \big) \geq 1/2.$$
Also, for $x \neq o$,
$$P_S^\lambda\left(\begin{array}{c}D_x \cap \left[\sigma,\; \sigma + \log \frac{1}{\lambda}\right] \neq \varnothing \text{ and}
\\ \inf\left(D_x \cap [\sigma,\;\infty)\right) < \inf\left(D_{x,o} \cap [\sigma,\;\infty)\right) \end{array} \right) \geq 1 - e^{-\log\frac{1}{\lambda}}-\frac{\lambda}{1+\lambda} > 1 - 2\lambda,$$
so
$$P^\lambda_S\left(B^1_3\;|\;B^1_1 \cap B^1_2\right) \geq (1-2\lambda)^{4\lambda|S|} \geq e^{-2\cdot 2\lambda \cdot 4\lambda|S|} = e^{-16\lambda^2|S|}$$
since $1-\alpha \geq e^{-2\alpha}$ for $\alpha < 1/2$.
In conclusion,
$$P^\lambda_S\left(\xi^S_{3\log \frac{1}{\lambda}} = \varnothing\right) \geq \P^\lambda_S\left(B^1_3\;|\;B^1_1 \cap B^1_2 \right)\cdot P^\lambda_S\left(B^1_1 \cap B^1_2\right) \geq e^{-16\lambda^2|S|}\cdot\left(1-\lambda - \frac{1}{2}\right) \geq \frac{1}{4}\;e^{-16\lambda^2|S|}.$$
\end{proof}
Applying the above result and Lemma \ref{lembound}, we get a bound on the probability of extinction of the contact process on trees in which, apart from one vertex, all degrees are bounded by $\frac{1}{8\lambda^2}$.
\begin{lemma}
\label{lem:extTree}
For $\lambda > 0$ small enough, the following holds. If $T$ is a tree with root $o,\;|T| < \frac{1}{\lambda^3}$ and $\deg(x) \leq \frac{1}{8\lambda^2}$ for all $x \neq o$, then
$$P^\lambda_T\left(\xi^T_{100\log\frac{1}{\lambda}} = \varnothing\right) \geq \frac{1}{8}\;e^{-16\lambda^2\deg(o)}.$$
\end{lemma}
\begin{proof}
Let $S$ be the star graph containing $o$ and its neighbours, let $T' = T\backslash \{o\}$ be the disconnected graph obtained by removing $o$ and all edges incident to it from $T$, and let $L = \frac{100}{3}\log\frac{1}{\lambda}$. We introduce three basic comparison processes, all generated with the same graphical construction on $T$ that is used to define $(\xi^T_t)_{t\geq 0}$.
\\
$\bullet\;\left(\xi^{T',1}_t\right)_{t \geq 0}$ is the contact process on $T'$ started from full occupancy, that is,
$$\xi^{T',1}_{t} = {\left\{x: T'\times \{0\} \;\lra\;(x,t) \text{ inside } T'\right\}};$$
$\bullet\;\left(\eta^S_t\right)_{t \geq L}$ is the contact process on $S$, started from full occupancy at time $L$, that is,
$$\eta^S_t = \left\{x: S \times \{L\} \;\lra\;(x,t) \text{ inside } S\right\};$$
$\bullet\;\left(\xi^{T',2}_t\right)_{t \geq 2L}$ is the contact process on $T'$ started from full occupancy at time $2L$, that is,
$$\xi^{T',2}_t = \left\{x: T' \times \{2L\} \;\lra\; (x, t) \text{ inside } T' \right\}.$$
The event $\left\{\xi^T_{3L} = \varnothing \right\}$ contains the intersection of the following events:
$$\begin{aligned}
&B^2_1 = \left\{\xi^{T',1}_L = \varnothing \right\};\qquad B^2_2 = \left\{\eta^S_{2L} = \varnothing\right\};\qquad
B^2_3 = \left\{\xi^{T',2}_{3L} = \varnothing \right\};\\
&B^2_4 = \left\{\nexists(x,s): d(o,x) \geq 2,\; \{o\} \times [0,\;3L] \;\lra\; (x,s) \;\lra\; \{o\}\times [s,\;\infty)\right\}.
\end{aligned}$$
Let us prove this. If $B^2_1$ occurs, then for any $s \geq L$ and $x\in T,\;\xi^T_s(x) = 1$ implies that $\{o\}\times [0,s] \;\lra\;(x,s)$. Thus, if $B^2_1 \cap B^2_4$ occurs, then for any $s \geq L,\;\eta^S_s(o) = 0$ implies $\xi^T_s(o) = 0$, so that, if $B^2_1 \cap B^2_2 \cap B^2_4$ occurs, we have $\xi^T_s(o) = 0$ for $s \geq 2L$. It then follows that $\xi^T_{3L} =\varnothing$ if all four events occur.
We now note that the four events are decreasing with respect to the partial order on graphical constructions defined by setting, for graphical constructions $H$ and $H',\; H \prec H'$ if $H'$ contains more transmissions and fewer recoveries than $H$. Thus, by the FKG inequality, $P^\lambda_T(\cap_{i=1}^4 B^2_i) \geq \prod_{i=1}^4 P^\lambda_T(B^2_i)$.
By Lemma \ref{lembound}$(vi.)$, we have $P^\lambda_T(B^2_4) \geq 1 - (3L+1)\cdot(2\lambda)^4\cdot \frac{1}{\lambda^3}$. Applying Lemma \ref{lembound}$(iii.)$ to bound $P^\lambda_T(B^2_1)$ and $P^\lambda_T(B^2_3)$ from below, it is then easy to verify that, for $\lambda$ small, $P^\lambda_T(B^2_1)\cdot P^\lambda_T(B^2_3)\cdot P^\lambda_T(B^2_4) > \frac{1}{2}$. Combining this with the lower bound for $P^\lambda_T(B^2_2)$ provided by Lemma \ref{lem:extStar}, the desired estimate follows.
\end{proof}
\section{Proof of Proposition \ref{prop:main}: upper bounds}
\label{s:upper}
We now want to apply the estimates of the previous section in proving the upper bound of Proposition \ref{prop:main}. Since Lemma \ref{lembound} must be applied to finite trees, our first step is defining truncations of infinite trees: given the distance threshold $r$ and the size threshold $m$, vertices of degree larger than $m$ and vertices at distance $r$ from the root will be turned into leaves.
Let $r, m \in \N$ and $T$ be a tree with root $o$. Define the $r,m$-truncated tree
$$\overline T_{r,m} = \{o\} \cup \left\{x \in T: d(o, x) \leq r,\; \deg(y) \leq m \; \forall y \text{ in the geodesic from $o$ to $x$},\; y \notin \{o, x\}\right\}.$$
Also define, for $1 \leq i < r$,
$$S^{i}_{r,m}(T) =\left\{\begin{array}{c}x \in T: d(o,x) = i,\;\deg(x) > m,\\ \deg(y) \leq m \; \forall y \text{ in the geodesic from $o$ to $x$},\; y \notin \{o, x\}\end{array}\right\}$$
and, finally,
$$S^r_{r,m}(T) = \{x \in \overline T_{r,m}: d(o, x) = r \}.$$
We want to think of $\overline T_{r,m}$ as the result of inspecting $T$ upwards from the root until generation $r$: whenever a vertex $x$ of degree larger than $m$ is found, the whole subtree that descends from it is deleted, so that $x$ becomes a leaf.
Note that, if $\T$ is a tree sampled from the probability $\Q_{p,q}$, then $\overline \T_{r,m}$ is a Galton-Watson tree of $r$ generations in which the degree distribution of the root is $p$ and that of other vertices is $\overline q_m(k)$, as in (\ref{eq:defhatq}).
In particular, using (\ref{c0a-2}) and (\ref{c03-a}), for $1 \leq i < r$ we have the upper bound
\begin{equation}
\label{eq:expSRM1}
\E_{\Q_{p,q}}(\;|S^i_{r,m}|\;) \leq \mu\cdot \left(\sum_{k=1}^m kq(k)\right)^{i-1} \cdot q(m, \infty) \leq \left\{\begin{array}{ll}C_0\mu\cdot (C_0m^{3-a})^{i-1}\cdot m^{-(a-2)} & \text{if } 2 < a < 3;
\\ C_0\mu\cdot (C_0\log m)^{i-1}\cdot m^{-1} &\text{if } a = 3;
\\C_0\mu\cdot \nu^{i-1}\cdot m^{-(a-2)} &\text{if } a> 3. \end{array} \right.
\end{equation}
Similarly, for $1 \leq i \leq r$,
\begin{equation}
\label{eq:expSRM2}
\E_{\Q_{p,q}}\left(\left|\left\{\begin{array}{c}x \in \overline \T_{r,m}:\\ d(o,x) = i\end{array}\right\}\right|\right) \leq \mu \cdot \left(\sum_{k=1}^m kq(k)\right)^{i-1} \leq \left\{\begin{array}{ll}C_0\mu\cdot (C_0m^{3-a})^{i-1} &\text{if } 2 < a< 3;\\C_0\mu\cdot (C_0\log m)^{i-1}&\text{if } a=3;\\C_0\mu\cdot \nu^{i-1}&\text{if } a > 3. \end{array}\right.
\end{equation}
Throughout the following subsections, we will take degree and distance thresholds, $M$ and $R$, which will depend on $\lambda$ and $a$, as follows:
$$M = \left\{ \begin{array}{cl}\left(\frac{1}{8C_0\lambda}\right)^{\frac{1}{3-a}} &\text{if } 2 < a \leq 2\frac{1}{2};
\\
\frac{1}{8\lambda^2} &\text{if } a > 2\frac{1}{2};
\end{array}\right. \qquad\qquad R = \left\{\begin{array}{cl}\big \lceil \frac{2\log 4}{3-a} \log\left(\frac{1}{\lambda}\right) \big \rceil &\text{if } 2 < a \leq 2\frac{1}{2};
\\ \big \lceil {\frac{2a+1}{2a-5}} \big \rceil&\text{if } 2\frac{1}{2} < a \leq 3;
\\ 2a+1 &\text{if } a > 3. \end{array} \right.$$
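For later reference, a direct computation with these choices gives
$$2\lambda\cdot C_0 M^{3-a} = \tfrac{1}{4} \quad\text{and}\quad M^{-(a-2)} = (8C_0\lambda)^{\frac{a-2}{3-a}} \quad \text{if } 2 < a \leq 2\tfrac{1}{2}, \qquad\qquad M^{-(a-2)} = (8\lambda^2)^{a-2} \quad \text{if } a > 2\tfrac{1}{2};$$
these quantities appear repeatedly in the estimates below.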
\subsection{Case $2 < a \leq 2\frac{1}{2}$}
\label{ss:a>2}
The treatment of this regime is very simple. We start defining the event that the root has degree above $M$,
$$B^3_1 = \left\{\deg(o) > M\right\},$$
and the event that the root has degree below $M$ and the infection reaches a leaf of the truncated tree,
$$B^3_2 = \left\{\deg(o) \leq M,\; (o,0) \lra \left(\mathop{\cup}_{i=1}^R S_{R,M}^{i} \right) \times \R_+ \text{ inside } \overline \T_{R,M}\right\}.$$
We observe that $\{\xi^o_t \neq \varnothing \; \forall t\} \subset B^3_1 \cup B^3_2$. We wish to show that both events have probability smaller than $C\rho_a(\lambda)$ for some universal constant $C$. For the first event, this is immediate:
$$\Q_{p,q}(B^3_1) \leq C_0(8\lambda)^{\frac{a-1}{3-a}} < \lambda^{\frac{1}{3-a}}$$
when $\lambda$ is small. For the second,
\begin{equation}\label{eq:sepdeg} \Q_{p,q}^\lambda(B^3_2) \leq \sum_{i=1}^R \sum_{k=1}^\infty \Q_{p,q}^\lambda \left(\begin{array}{c}(o,0) \lra S^i_{R,M} \times \R_+
\\ \text{ inside } \overline \T_{R,M} \end{array} \left| \begin{array}{c}\deg(o) \leq M,
\\ |S^i_{R,M}| = k \end{array} \right. \right) \cdot \Q_{p,q}(\;|S^i_{R,M}| = k\;). \end{equation}
Since the degrees of vertices of $\overline \T_{R,M}$ are bounded by $\left(\frac{1}{8\lambda C_0}\right)^{\frac{1}{3-a}} < \frac{1}{8\lambda^2}$, Lemma \ref{lembound}$(i.)$ implies that the conditional probability inside the sum is less than $k(2\lambda)^i$. (\ref{eq:sepdeg}) is thus less than $\sum_{i=1}^R (2\lambda)^i \; \E_{\Q_{p,q}}(\;|S^i_{R,M}|\;)$. Using (\ref{eq:expSRM1}) and (\ref{eq:expSRM2}), we have
$$\begin{aligned}
&\E_{\Q_{p,q}}(\;|S^i_{R,M}|\;) \leq C_0\mu\cdot (C_0M^{3-a})^{i-1} \cdot M^{-(a-2)} \text{ for } i < R \text{ and }\\
&\E_{\Q_{p,q}}(\;|S^R_{R,M}|\;) \leq C_0\mu \cdot (C_0M^{3-a})^{R-1}.
\end{aligned}$$
Thus,
\begin{equation}\sum_{i=1}^R(2\lambda)^i\;\E_{\Q_{p,q}}(\;|S^i_{R,M}|\;) \leq C_0\mu\cdot 2\lambda\left(M^{-(a-2)}\sum_{i=1}^{R-1} (2\lambda\cdot C_0 M^{3-a})^{i-1} + (2\lambda\cdot C_0 M^{3-a})^{R-1} \right).\nonumber\end{equation}
By the definition of $M$, $2\lambda \cdot C_0M^{3-a} < \frac{1}{2}$. Using the definition of $R$, we also have $$(2\lambda \cdot C_0 M^{3-a})^{R-1} < \left(\frac{1}{2}\right)^{R-1} < \left(\frac{1}{\lambda}\right)^\frac{2-a}{3-a}.$$
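Indeed, $R \geq \frac{2\log 4}{3-a}\log\left(\frac{1}{\lambda}\right) = \frac{4\log 2}{3-a}\log\left(\frac{1}{\lambda}\right)$, so
$$\left(\frac{1}{2}\right)^{R-1} \leq 2\exp\left\{-\frac{4(\log 2)^2}{3-a}\log\left(\frac{1}{\lambda}\right)\right\} = 2\,\lambda^{\frac{4(\log 2)^2}{3-a}},$$
which is smaller than $\left(\frac{1}{\lambda}\right)^{\frac{2-a}{3-a}} = \lambda^{\frac{a-2}{3-a}}$ for $\lambda$ small, since $4(\log 2)^2 > a-2$.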
In conclusion,
$$\Q_{p,q}^\lambda(B^3_2) \leq 2 C_0 \mu \cdot \lambda\left(C_0 \lambda^{\frac{a-2}{3-a}}\sum_{i=1}^{R-1} \left(\frac{1}{2}\right)^{i-1} + \left(\frac{1}{\lambda}\right)^{\frac{2-a}{3-a}} \right) < C\lambda^{1+\frac{a-2}{3-a}} = C\lambda^\frac{1}{3-a}.$$
\subsection{Case $a > 2\frac{1}{2}$}
\label{ss:a>212}
If we merely repeated the computation of the previous subsection with the new value of the threshold $M = \frac{1}{8\lambda^2}$ and the same events, that would yield a correct upper bound, but it would not be optimal in this case. So we will need to consider more events, taking a closer look at the truncated tree and ways in which the infection can leave it.
Our first two events are similar to those of the previous subsection:
$$\begin{aligned}
&B^4_1 = \{\deg(o) > M \};
\\
&B^4_2 = \left\{\deg(o) \leq M,\;(o,0)\lra \left(\mathop{\cup}_{i=2}^R S_{R,M}^{i}\right)\times \R_+\text{ inside } \overline \T_{R,M}\right\}\end{aligned}$$
The difference from the previous subsection is that $B^4_2$ only includes leaves at distance two or more from the root. We will have to treat the leaves neighbouring the root separately. We first consider the case in which there are at least two leaves neighbouring the root and at least one of them becomes infected:
$$B^4_3 = \{\deg(o) \leq M,\; |S_{R,M}^{1}| \geq 2,\;(o,0) \;\lra\; S_{R,M}^{1} \times \R_+ \text{ inside } \overline \T_{R,M}\}.$$
Next, if the root has only one neighbour that is a leaf (that is, if $|S_{R,M}^1|=1$), then call this neighbour $o^*$. Let us distinguish two ways in which $o^*$ may receive the infection initially present from $o$. We say that $o^*$ becomes infected \textit{directly} if a transmission from $o$ to $o^*$ occurs before the first recovery time at $o$. We say that $o^*$ becomes infected \textit{indirectly} if there are infection paths starting at $(o,0)$ and ending at $\{o^*\} \times [0,\infty)$, but all of them must visit at least one vertex different from $o$ and $o^*$. We then define
$$\begin{aligned}&B^4_4 = \{\deg(o) \leq M,\; |S_{R,M}^{1}| = 1,\;\text{$o^*$ becomes infected indirectly}\},\\[0.25cm]
&B^4_5 = \left\{\begin{array}{c}\deg(o) \leq M,\; |S_{R,M}^{1}| = 1,\;\text{there exists $t^*$ such that}
\\\text{ $o^*$ becomes infected directly at time $t^*$ and}
\\(o^*,t^*)\;\lra\;\T\times\{t\}\text{ for all } t\geq t^*\end{array}\right\}.
\end{aligned}$$
Thus, in event $B^4_5$, there is a transmission from $o$ to $o^*$ at some time $t^*$ before the first recovery at $o$, and the infection generated from this transmission then survives for all times in the (non-truncated) tree $\T$.
The reason we make the distinction between $o^*$ becoming infected directly or indirectly is subtle; let us explain it. In our treatment of $B^4_5$, we will re-root the tree at $o^*$ and study the distribution of this re-rooted tree, so that we can find estimates for the infection that is transmitted from $(o^*, t^*)$. This study will be possible because, when we are told that a direct transmission has occurred, we only obtain information concerning the recovery process $D_o$ and the transmission process $D_{o,o^*}$, so the distribution of the degrees of other vertices in the tree is unaffected. In contrast, if the transmission is indirect, we have information concerning the portion of the tree that descends from $o$ through vertices different from $o^*$, so the study of the re-rooted tree is compromised and we have to follow a different approach.
We now have $\{\xi^o_t \neq \varnothing \; \forall t\} \subset \cup_{i=1}^5 B^4_i$. We wish to show that $\Q_{p,q}^\lambda (B^4_i) < C\rho_a(\lambda)$ for each $i$.
\noindent\textbf{1) Event $B^4_1$.} As in the previous subsection, we have $$\Q_{p,q}^\lambda(B^4_1) \leq C_0M^{-(a-1)} \leq 2C_0\cdot (8\lambda)^{2(a-1)} < \rho_a(\lambda)$$ when $\lambda$ is small.
\noindent \textbf{2) Event $B^4_2$.} As in our treatment of $B^3_2$ in the previous subsection, we have
\begin{equation}\Q_{p,q}^\lambda(B^4_2) \leq \sum_{i=2}^R (2\lambda)^i \cdot \E_{\Q_{p,q}}(\;|S^i_{R,M}|\;) = 2\lambda \sum_{i=2}^R (2\lambda)^{i-1} \cdot \E_{\Q_{p,q}}(\;|S^i_{R,M}|\;).\label{eq:3casesExp0}\end{equation}
Using (\ref{eq:expSRM1}), for $2 \leq i < R$, we have
\begin{equation}\nonumber (2\lambda)^{i-1}\cdot \E_{\Q_{p,q}}(\;|S^i_{R,M}|\;)\label{eq:3casesExp}\leq \left\{\begin{array}{ll}CM^{-(a-2)}\cdot (C'\lambda M^{3-a})^{i-1} & \\\leq C\lambda^{2a-4}\cdot(C'\lambda^{2a-5})^{i-1} &\text{if } 2\frac{1}{2} < a < 3;
\\ CM^{-(a-2)}\cdot(C'\lambda \log M)^{i-1}&\\\leq C\lambda^{2a-4}\cdot(C'\lambda \log \frac{1}{\lambda})^{i-1}&\text{if } a = 3;
\\CM^{-(a-2)}\cdot\nu^{i-1}&\\\leq C\lambda^{2a-4}\cdot \nu^{i-1}&\text{if } a > 3. \end{array}\right.
\end{equation}
We then have
\begin{equation}\nonumber \sum_{i=2}^{R-1} (2\lambda)^{i-1}\cdot \E_{\Q_{p,q}}(\;|S^{i}_{R,M}|\;) \leq \left\{\begin{array}{ll}C\lambda^{2a-4} \cdot \lambda^{2a-5}\cdot \sum_{i=2}^\infty (C'\lambda^{2a-5})^{i-2}&\text{if } 2\frac{1}{2} < a < 3;
\\ C\lambda^{2a-4}\cdot \lambda \log\frac{1}{\lambda}\cdot \sum_{i=2}^\infty (C'\lambda\log\frac{1}{\lambda})^{i-2}&\text{if } a = 3;
\\ C\lambda^{2a-4}\cdot \lambda \cdot \sum_{i=2}^\infty (C'\lambda)^{i-2}&\text{if } a > 3, \end{array} \right. \end{equation}
for constants $C, C'$ that do not depend on $\lambda$. Then,
\begin{equation}\label{eq:3casesExp2}
\sum_{i=2}^{R-1} (2\lambda)^{i-1}\cdot \E_{\Q_{p,q}}(\;|S^{i}_{R,M}|\;) \leq \lambda^{2a - 4 + \delta}
\end{equation}
when $\lambda$ is small enough, for some $\delta > 0$ that depends on $a$ but not on $\lambda$.
For $i = R$, using (\ref{eq:expSRM2}) we get
\begin{equation}\nonumber
(2\lambda)^{R-1}\cdot \E_{\Q_{p,q}}(\;|S^{R}_{R,M}|\;) \leq \left\{\begin{array}{ll}C\left(C'\lambda^{2a-5}\right)^{R-1}&\text{if } 2\frac{1}{2} < a < 3;
\\ C\left(C' \lambda \log \frac{1}{\lambda}\right)^{R-1} &\text{if } a = 3;
\\ C(C' \lambda)^{R-1} &\text{if } a > 3. \end{array} \right.
\end{equation}
By the choice of $R$ in each case, when $\lambda$ is small we get
\begin{equation}
\label{eq:3casesExp3}
(2\lambda)^{R-1}\cdot \E_{\Q_{p,q}}(\;|S^{R}_{R,M}|\;) \leq \lambda^{2a}.
\end{equation}
Using (\ref{eq:3casesExp2}) and (\ref{eq:3casesExp3}) in (\ref{eq:3casesExp0}), we conclude that, if $\lambda$ is small,
$$\Q_{p,q}^\lambda(B^4_2) \leq C\lambda^{2a-3+\delta} < \rho_a(\lambda).$$
\noindent \textbf{3) Event $B^4_3$.} We bound
\begin{equation}\Q_{p,q}^\lambda(B^4_3) \leq \sum_{k=3}^\infty p(k) \cdot 2\lambda \cdot \E_{\Q_{p,q}}\left(\;|S^1_{R,M}| \cdot I_{\{|S^1_{R,M}| \geq 2\}} \; \big| \deg(o) = k\;\right)\label{eq:ests1rm}.\end{equation}
Under $\Q_{p,q}(\;\cdot |\deg(o) = k\;)$, $|S^1_{R,M}|$ is $\mathsf{Bin}(k, q(M,\infty))$. If $X \sim \mathsf{Bin}(n,p)$, then
\begin{equation}\label{eq:binn2}\E(X\cdot I_{\{X\geq 2\}}) = np - np(1-p)^{n-1} =np(1-(1-p)^{n-1}) < (np)^2,\end{equation}
since, by Bernoulli's Inequality, $(1-p)^{n-1} > 1-(n-1)p >1 -np$. Using the bound (\ref{eq:binn2}) for $k \leq M$ and the bound $\E(X \cdot I_{\{X\geq 2\}}) < np$ for $k > M$, (\ref{eq:ests1rm}) is less than
$$\begin{aligned}
&2\lambda\left( \sum_{k=3}^ M p(k)\cdot k^2\cdot q(M,\infty)^2 + \sum_{k=M+1}^\infty p(k)\cdot k\cdot q(M,\infty)\right)\\
&\leq C\lambda \left( M^{-2(a-2)}\cdot\sum_{k=3}^M p(k)\cdot k^2 + M^{-(a-2)}\cdot\sum_{k=M+1}^\infty p(k)\cdot k\right)\\
&\leq C\lambda \left(M^{-2(a-2)}\cdot M^{3-a} + M^{-(a-2)}\cdot M^{-(a-2)} \right)\\&\leq C\lambda \left(M^{-3a+7} + M^{-2a+4} \right) \leq C\lambda(\lambda^{6a - 14} + \lambda^{4a - 8}) < \rho_a(\lambda)
\end{aligned}$$
when $\lambda$ is small, since $6a-13, 4a-7 > 2a-3$ when $a > 2\frac{1}{2}$.
\\
In order to bound the probabilities of $B^4_4$ and $B^4_5$, we will need the following result, whose proof is omitted.
\begin{lemma}\label{lem:compTrees0}
The degrees of the vertices of $\T$ under $\Q_{p,q}(\;\cdot\;\big|\;\deg(o) \leq M,\;|S_{R,M}^{1}| = 1\;)$ are distributed as follows:\\
$(i.)$\ First, $\deg(o)$ is chosen with distribution
\begin{flalign}\label{eq:lawTrunc}&k \in [0,M] \mapsto \frac{\frac{p(k)}{p[0,M]}\cdot\Q_{p,q}(|S^1_{R,M}|=1 \;\big|\; \deg(o)=k)}{\sum_{w=1}^M\;\frac{p(w)}{p[0, M]}\cdot \Q_{p,q}(|S^1_{R,M}|=1 \;\big|\; \deg(o)=w)}.\end{flalign}
$(ii.)$\ Given the choice of $\deg(o)$, the degrees of $o^*$ and the remaining neighbours of $o$ are chosen independently: $\deg(o^*)$ with law
\begin{equation}\label{eq:lawTrunc1}k \in (M, \infty) \mapsto \left(q(M,\infty)\right)^{-1}q(k) \end{equation}
and the remaining degrees with law
\begin{equation}\label{eq:lawTrunc2}k \in [0,M] \mapsto \left(q[0,M]\right)^{-1}q(k). \end{equation}
$(iii.)$\ All other vertices in the tree have degrees chosen independently with distribution $q$.
\end{lemma}
\noindent \textbf{Remark.} The distribution in (\ref{eq:lawTrunc}) is equal to
\begin{eqnarray}&&k \mapsto \frac{p(k) \cdot k \cdot q(M,\infty)\cdot \left(q[0,M]\right)^{k-1}}{\sum_{w=1}^M p(w) \cdot w \cdot q(M, \infty) \cdot \left(q[0,M] \right)^{w-1}}\nonumber\\&&\qquad \qquad=\left(\sum_{w=1}^M p(w)\cdot w \cdot \left(q[0,M]\right)^{w-1}\right)^{-1} p(k)\cdot k\cdot \left(q[0,M]\right)^{k-1};\nonumber\end{eqnarray}
hence, it is stochastically dominated by $q$. Obviously, the distribution in (\ref{eq:lawTrunc2}) is also dominated by $q$.
\noindent \textbf{4) Event $B^4_4$.}
\begin{equation} \Q_{p,q}^\lambda(B^4_4) \leq \Q_{p,q}(|S^1_{R,M}|>0) \cdot \Q_{p,q}^\lambda\left(\begin{array}{c}\exists y \in \overline \T_{R,M},\; 0<s<t:\\(o,0) \lra (y,s) \lra (o,t) \text{ inside } \overline \T_{R,M} \end{array}\left| \begin{array}{c}\deg(o) \leq M,\\|S^1_{R,M}| = 1 \end{array}\right.\right).\nonumber\end{equation}
The first probability on the right-hand side is less than $\sum_{k=3}^\infty p(k)\cdot k \cdot q(M,\infty) \leq C\lambda^{2(a-2)}$. By Lemma \ref{lembound}$(v.)$, the second probability is less than
\begin{equation}\label{eq:B24red}\sum_{i=1}^R \lambda^{2i} \cdot \E_{\Q_{p,q}}\left(|\{x \in \overline \T_{R,M}: d(o,x) = i\}|\;\big|\; \deg(o) \leq M,\; |S^1_{R,M}| = 1 \right). \end{equation}
By Lemma \ref{lem:compTrees0} and the remark that follows it, this conditional expectation is bounded by
$$\E_{\Q_q}\left(\left|\left\{x \in \overline \T_{R,M}:d(o,x) = i\right\} \right|\right) \leq \left\{\begin{array}{ll}(C_0M^ {3-a})^{i}&\text{if } 2\frac{1}{2} < a < 3;
\\(C_0\log M)^i &\text{if } a = 3;
\\ \nu^i &\text{if } a > 3. \end{array}\right.$$
It is then easy to check that the sum in (\ref{eq:B24red}) is less than $\lambda^{1+\delta}$ for some $\delta > 0$. In conclusion, $\Q_{p,q}^\lambda(B^4_4) \leq \lambda^{2(a-2)+1+\delta} < \rho_a(\lambda)$ when $\lambda$ is small.
\\
\noindent \textbf{5) Event $B^4_5$.}
This is the bound that requires most effort. We start with
\begin{flalign}\nonumber&\Q_{p,q}^\lambda(B^4_5)& \\&\leq C\lambda^{2(a-2)} \cdot \frac{\lambda}{1+\lambda} \cdot \Q_{p,q}^\lambda\left(\;(o^*, t^*) \;\lra \;\T\times [t, \infty) \; \forall t > t^*\;\left|\;\deg(o) \leq M,\; |S_{R,M}^{1}| = 1,\; t^* < \inf D_o\right.\right)&\nonumber\\
\label{eq:B25red}&=C\lambda^{2(a-2)} \cdot \frac{\lambda}{1+\lambda} \cdot \Q_{p,q}^\lambda\left(\;(o^*, 0) \;\lra \;\T\times [t, \infty) \; \forall t > 0\;\left|\;\deg(o) \leq M,\; |S_{R,M}^{1}| = 1\right.\right).&\end{flalign}
In order to deal with the conditioning in (\ref{eq:B25red}), we need the following, which is a consequence of Lemma \ref{lem:compTrees0} and the remark that follows it.
\begin{lemma}\label{lem:compTrees}
Let $\hat \T$ be the random rooted tree obtained by
\\
$\bullet$\ sampling $\T$ under law $\Q_{p,q}(\;\cdot\;|\deg(o)\leq M,\; |S^1_{R,M}|= 1)$;
\\
$\bullet$ repositioning the root at $o^*$, the unique vertex in $S_{R,M}^{1}$.
\\
Then, $\hat \T$ is stochastically dominated by the distribution $\Q_{q}(\;\cdot\;|\deg(o) > M)$.
\end{lemma}
As a consequence of Lemma \ref{lem:compTrees} and attractiveness of the contact process, we get
$$\begin{aligned}&\Q_{p,q}^\lambda\left((o^*, 0) \;\lra \;\T\times \{t\} \; \forall t > 0\;\left|\;\deg(o) \leq M,\; |S_{R,M}^{1}| = 1\right.\right)\leq \Q_q^\lambda\left(\xi^{o}_t \neq \varnothing \; \forall t\;|\; \deg(o) > M\right).\end{aligned}$$
Using this in (\ref{eq:B25red}), we get
\begin{equation}\Q_{p,q}^\lambda(B^4_5) \leq C\lambda^{1+2(a-2)}\cdot \Q_q^\lambda\left(\xi^{o}_t \neq \varnothing \; \forall t\;|\; \deg(o) > M\right).\label{eq:redTom0}\end{equation}
In treating the last term of (\ref{eq:redTom0}), we will obtain the logarithmic term in the definition of $\rho_a(\lambda)$. This is encompassed in the following proposition.
\begin{proposition}
There exists $C > 0$ such that
\begin{equation}\label{eq:redTom}\Q_q^\lambda\left(\xi^{o}_t \neq \varnothing \; \forall t\;|\; \deg(o) > M\right) \leq \left\{\begin{array}{ll}C\log^{-(a-2)}\left(\frac{1}{\lambda}\right) &\text{if } 2\frac{1}{2} < a \leq 3;
\\C\log^{-2(a-2)}\left(\frac{1}{\lambda}\right) &\text{if } a > 3. \end{array}\right.\end{equation}
\label{prop:redTom}\end{proposition}
Define
$$M' = \left\{\begin{array}{ll}\lceil \epsilon_1 \frac{1}{\lambda^2}\log\left(\frac{1}{\lambda}\right)\rceil &\text{if } 2\frac{1}{2} < a \leq 3;
\\\lceil \epsilon_2 \frac{1}{\lambda^2}\log^2\left(\frac{1}{\lambda}\right)\rceil &\text{if } a > 3, \end{array}\right.$$
where $\epsilon_1,\; \epsilon_2$ are constants to be chosen later, depending on $a$ but not on $\lambda$. Our approach to prove Proposition \ref{prop:redTom} starts with the following:
$$\begin{aligned}
&\Q_q^\lambda\left(\xi^{o}_t \neq \varnothing \; \forall t \;|\; \deg(o) > M\right) \\&\quad= \sum_{m = \lceil M\rceil}^\infty \Q_q^\lambda\left(\xi^{o}_t \neq \varnothing \; \forall t \;|\; \deg(o) = m\right)\cdot \Q_q\left(\deg(o) = m\;|\; \deg(o) > M\right)\\
&\quad \leq \Q_q^\lambda\left(\xi^{o}_t \neq \varnothing \; \forall t \;|\; \deg(o) = M'\right)+\Q_q\left(\deg(o) > M'\;|\: \deg(o)> M\right) \\
&\quad \leq \Q_q^\lambda\left(\xi^{o}_t \neq \varnothing \; \forall t \;|\; \deg(o) = M'\right) + \frac{q[M', \infty)}{q[M, \infty)}.
\end{aligned}$$
Now, by (\ref{c0a-2}), the term $\frac{q[M', \infty)}{q[M, \infty)}$ is bounded from above by the expression in the right-hand side of (\ref{eq:redTom}), for some $C > 0$. Proposition \ref{prop:redTom} will thus follow from
\begin{lemma}\label{lem:redTom}
If $a > 2\frac{1}{2}$, then there exists $\delta > 0$ such that
$$\Q_q^\lambda\left(\xi^{o}_t \neq \varnothing \;\forall t\;|\: \deg(o) = M'\right) < \lambda^\delta.$$
\end{lemma}
In the next two subsections, we prove Lemma \ref{lem:redTom} separately for the cases $2\frac{1}{2} < a \leq 3$ and $a > 3$.
\subsection{Completion of proof for $2\frac{1}{2} < a \leq 3$}
\label{ss:ca>212}
In this subsection and the next, we will consider the probability measure $\Q_{q}^\lambda(\;\cdot\;|\deg(o) = M')$, so $\T$ will be a tree with root degree equal to $M'$. Here we will give the proof in detail for the case $2\frac{1}{2} < a < 3$; the case $a = 3$ is treated similarly and we will omit it for brevity.
Let $\epsilon_1' = \frac{2a-5}{2}$ and $\epsilon_1 = \frac{\epsilon_1'}{64}$; this is the constant that appears in the definition of $M'$. Also let $L_1 = \lambda^{-\epsilon_1'/2}$ and fix an integer $R'$ large enough that $(2a-5)(R'-1) - 1 > 2a-5$. We will be particularly interested in the contact process on $B_{\T}(o, R')$ in the time interval $[0, L_1]$.
We will need the quantities
$$\begin{aligned}
&\upphi(\T) = \sum_{i=1}^{R'} (2\lambda)^i\cdot|S^i_{R',M}(\T)|;\\
&\uppsi(\T) = \sum_{i=2}^{R'} (2\lambda)^{2i} \cdot |\{x \in \overline \T_{R',M}: d(x, o) = i\}|.
\end{aligned}$$
We define two environment events, which are simply
$$B^5_1 = \left\{\upphi(\T) > \lambda^{\epsilon_1'}\right\};\qquad B^5_2 = \left\{\uppsi(\T) > \lambda^{\epsilon_1'}\right\},$$
and then define three events involving the contact process:
$$\begin{aligned}
&B^5_3 = (B^5_1 \cup B^5_2)^c \cap \left\{\{o\} \times [0,L_1] \;\lra\;\left(\mathop{\cup}_{i=1}^{R'} S_{R',M}^{i} \right) \times \R_+\right\};\\
&B^5_4 = (B^5_1 \cup B^5_2)^c \cap \left\{\exists z \in \overline \T_{R',M},\;s>0: \{o\}\times [0,L_1] \;\lra\;(z,s)\;\lra\; \{o\}\times [s, \infty) \text{ inside }\overline \T_{R',M} \right\};\\
&B^5_5 = \left\{B(o,1) \times \{0\} \;\lra\; B(o,1) \times \{L_1\} \text{ inside } B(o,1)\right\}.
\end{aligned}$$
We claim that $\left\{\xi^o_t \neq \varnothing \;\forall t\right\} \subset \cup_{i=1}^5 B^5_i$. Since the contact process restricted to the finite tree $\overline \T_{R',M}$ almost surely dies out, on $\left\{\xi^o_t \neq \varnothing \;\forall t\right\}$ there must exist an infection path $t \mapsto \gamma(t) \in \T$ with $\gamma(0) = o$ that reaches some point of $\cup_{i=1}^{R'} S^i_{R',M}$; it thus suffices to show that the existence of such a path forces one of the five events to occur. Let $t^* = \inf\{t: \gamma(t) \in \cup_{i=1}^{R'} S^i_{R',M}\}$ and $t^{**} = \sup\{t \leq t^*: \gamma(t) = o\}$. If $t^{**} \leq L_1$, then $B^5_3$ occurs. If $t^{**} > L_1$ and $\gamma(t) \in B(o,1)$ for all $t \in [0, t^{**}]$, then $B^5_5$ occurs. Otherwise, $B^5_4$ occurs.
We now want to show that the probability of each of the five events is less than $\lambda^\delta$ when $\lambda$ is small, for some $\delta > 0$.
\\
\noindent \textbf{1) Event $B^5_1$.} Bounding as in (\ref{eq:expSRM1}), we have
\begin{eqnarray}
&&\E_{\Q_q}\left(\upphi(\T)\;|\; \deg(o) = M' \right) \nonumber\\&&\leq \sum_{i=1}^{R'-1}(2\lambda)^i \cdot M'\cdot (C_0M^{3-a})^{i-1} \cdot C_0M^{-(a-2)} + (2\lambda)^{R'}\cdot M' \cdot (C_0M^{3-a})^{R'-1}.\label{eq:lastAux}
\end{eqnarray}
The first term in (\ref{eq:lastAux}) is less than
$$2\lambda \cdot \epsilon_1\;\frac{1}{\lambda^2}\;\log\left(\frac{1}{\lambda}\right) \cdot C_0 (8\lambda^2)^{a-2}\cdot \sum_{i=1}^{R'-1}\left(C_0\left(\frac{1}{8\lambda^2}\right)^{3-a} \cdot 2\lambda\right)^{i-1} \leq C\lambda^{2a-5}\cdot \log\left(\frac{1}{\lambda}\right).$$
The second term in (\ref{eq:lastAux}) is less than
$$2\lambda \cdot \epsilon_1\;\frac{1}{\lambda^2}\;\log\left( \frac{1}{\lambda}\right)\cdot \left(C_0 \left( \frac{1}{8\lambda^2}\right)^{3-a} \cdot 2\lambda\right)^{R'-1}= 2\epsilon_1 \cdot \frac{1}{\lambda}\log \left(\frac{1}{\lambda}\right)\cdot \left(C_0\cdot \frac{1}{8^{3-a}}\cdot \lambda^{2a-5} \right)^{R'-1}$$
and this is also less than $C\lambda^{2a-5}\log(1/\lambda)$ by the choice of $R'$. This shows that
\begin{equation}
\nonumber\E_{\Q_q}\left(\upphi(\T)\;|\;\deg(o) = M' \right) \leq C \lambda^{2a-5}\cdot \log(1/\lambda)
\end{equation}
Thus, by the Markov inequality,
$$\Q_{q}\left(B^5_1\;|\;\deg(o) = M'\right) \leq \frac{C\lambda^{2a-5}\log(1/\lambda)}{\lambda^{(2a-5)/2}} < \lambda^{(2a-5)/4}.$$\\
\noindent \textbf{2) Event $B^5_2$.} Bounding as in (\ref{eq:expSRM2}),
$$\begin{aligned}
&\E_{\Q_q}\left(\uppsi(\T)\;|\;\deg(o) = M'\right) \leq M'\cdot \sum_{i=2}^{R'} (2\lambda)^{2i}\cdot (C_0M^{3-a})^{i-1}\\
&\leq \epsilon_1 \; \frac{1}{\lambda^2} \log\left(\frac{1}{\lambda}\right)\cdot \lambda^{3} \cdot\sum_{i=2}^{R'}(2\lambda)^{2i-3}\cdot\left(C_0 \left(\frac{1}{8\lambda^2}\right)^{3-a} \right)^{i-1} < C\lambda \log \left(\frac{1}{\lambda}\right),
\end{aligned}$$
since the exponent of $\lambda$ inside the sum, $2i - 3 - 2(3-a)(i-1) = (2a-4)i + 3 -2a$, is positive when $i \geq 2$. The desired bound now follows from the Markov inequality as above.
\\
\noindent \textbf{3) Event $B^5_3$.} For $x \in \T,\;x \neq o$, let $s(x)$ denote the neighbour of $o$ in the geodesic from $o$ to $x$, and let $\T(x)$ be the subtree of $\T$ with vertex set $$\{o\} \cup \{y\in \T: \text{the geodesic from $o$ to $y$ contains $s(x)$}\}$$ and edge set $\{\{z,w\}:z \sim w \text{ in } \T,\;z, w \in \T(x)\}$.
For $B^5_3$ to occur, there must exist $x \in \cup_{i=1}^{R'} S^i_{R',M}$ so that $\{o\} \times [0, L_1] \;\lra\;\{x\}\times \R_+$ inside $\T(x) \cap \overline \T_{R',M}$. For a fixed $x$, the probability of such a path is less than $(L_1+1)\cdot(2\lambda)^{d(o,x)}$ by Lemma \ref{lembound}$(iv.)$, since $\T(x) \cap \overline \T_{R',M}$ is a tree in which all degrees are bounded by $M$. Summing over all $x$, this yields
$$P_{\T}^\lambda\left(\{o\} \times [0,L_1] \; \lra \;\mathop{\cup_{i=1}^{R'}}S^i_{R',M} \times \R_+ \right) \leq (L_1+1)\cdot \upphi(\T).$$
If $B^5_1$ does not occur, then the right-hand side is less than $(L_1+1)\cdot \lambda^{\epsilon'_1} = (\lambda^{-\epsilon_1'/2}+ 1)\cdot \lambda^{\epsilon_1'}$. Thus,
$$\Q_q^\lambda(B^5_3\;|\;(B^5_1)^c) < \lambda^{\epsilon_1'/2}+ \lambda^{\epsilon_1'}.$$\\
\noindent \textbf{4) Event $B^5_4$.} This is treated similarly to the previous event; here we use Lemma \ref{lembound}$(vi)$ to conclude that
$$\Q_q^\lambda(B^5_4\;|\;(B^5_2)^c) < (L_1 + 1)\cdot \lambda^{\epsilon_1'} = \lambda^{\epsilon_1'/2}+ \lambda^{\epsilon_1'}.$$\\
\noindent \textbf{5) Event $B^5_5$.} For $i \geq 0$, let
$$E_i = \left\{B(o, 1) \times \{i\cdot 3\log(1/\lambda)\} \;\nleftrightarrow B(o,1) \times \{(i+1)\cdot 3\log(1/\lambda)\}\;\right\}.$$
These events are independent and, by Lemma \ref{lem:extStar},
\begin{equation}\Q^\lambda_q(E_i\;|\;\deg(o) = M') \geq (1/4)e^{-16\lambda^2M'} = (1/4)\lambda^{16\epsilon_1} = (1/4)\lambda^{\epsilon_1'/4}. \label{eq:compareStarl1}\end{equation}
If $B(o,1) \times \{0\} \; \lra \; B(o,1) \times \{L_1\}$, then $E_i$ cannot occur for
\begin{equation} 0 \leq i \leq \lfloor L_1/(3\log(1/\lambda)) \rfloor = \lfloor \lambda^{-\epsilon_1'/2}/(3\log(1/\lambda)) \rfloor.
\label{eq:compareStarl2}\end{equation}
Comparing (\ref{eq:compareStarl1}) and (\ref{eq:compareStarl2}), it is easy to see that $\Q_q^\lambda(B^5_5)$ is smaller than any power of $\lambda$ as $\lambda \to 0$.
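To spell out this last step (our own elaboration of the comparison; constants are not optimized): on $B^5_5$ none of the independent events $E_0, E_1,\ldots$ with indices in the range (\ref{eq:compareStarl2}) occurs, so by (\ref{eq:compareStarl1}),
$$\Q_q^\lambda\big(B^5_5\;\big|\;\deg(o)=M'\big) \;\leq\; \left(1-\tfrac{1}{4}\lambda^{\epsilon_1'/4}\right)^{\lfloor \lambda^{-\epsilon_1'/2}/(3\log(1/\lambda))\rfloor+1} \;\leq\; \exp\left(-\frac{\lambda^{-\epsilon_1'/4}}{12\,\log(1/\lambda)}\right),$$
which indeed vanishes faster than any power of $\lambda$ as $\lambda \to 0$.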
\subsection{Completion of proof for $a > 3$}
\label{ss:ca>3}
Fix $\epsilon_2' > 0$ with $\epsilon_2' < \min\left((4\log\nu)^{-1},\; a-3\right)$ and set $\epsilon_2 = \frac{\epsilon_2'}{18}$; this is the constant that appears in the definition of $M'$. Also define $R' = \lceil\epsilon_2' \log \frac{1}{\lambda} \rceil$ and $L_2=\lambda^{-17\epsilon_2\log\frac{1}{\lambda}}$. We will be particularly interested in the contact process on $B_\T(o, R')$ in the time interval $[0, L_2]$.
This time, our environment events correspond to violations of the properties required for Lemma \ref{lem:extTree} to be applied:
$$\begin{aligned}
&B^6_1 = \left\{|B(o, R')| > \lambda^{-3}\right\};\\
&B^6_2 = \{\exists x \in \T\backslash\{o\}: d(o,x) \leq R',\;\deg(x) > M\}.\end{aligned}$$
The first event involving the contact process is the existence of an infection path starting on $\{o\}\times[0,L_2]$ and reaching vertices at distance more than $R'$ from the root,
$$B^6_3 = (B^6_1 \cup B^6_2)^c \cap \left\{\{o\}\times \left[0, L_2\right] \;\lra\; B(o, R')^c \times \R_+\right\},$$
The second event is the infection surviving up to time $L_2$ without leaving the ball $B(o, R')$,
$$B^6_4 = (B^6_1 \cup B^6_2)^c \cap \left\{B(o, R') \times \{0\} \;\lra \; B(o, R') \times \{L_2\} \text{ inside } B(o, R')\right\}.
$$
Again we have $\left\{\xi^o_t \neq \varnothing \; \forall t\right\} \subset \cup_{i=1}^4 B^6_i$. We proceed to show that each of these events has probability smaller than $\lambda^\delta$, for some $\delta > 0$.
\\
\noindent \textbf{1) Event $B^6_1$.} Using Markov's inequality,
$$\begin{aligned}\Q_q(B^6_1\;|\;\deg(o) = M')&\leq \lambda^3\cdot \E_{\Q_q}\left(|B(o, R')|\;\big|\;\deg(o) = M'\right)\\
&\leq \lambda^3\cdot R'\cdot \E_{\Q_q}\left(|\{x \in \T: d(o,x) = R'\}|\; \big|\;\deg(o) = M'\right)\\
&\leq \lambda^3 \cdot R' \cdot M' \cdot \nu^{R'}\\
&\leq \lambda^3 \cdot 2\epsilon_2'\log \frac{1}{\lambda} \cdot 2\epsilon_2 \frac{1}{\lambda^2}\log^2\left(\frac{1}{\lambda}\right) \cdot \nu^{\frac{1}{4\log \nu}\log \frac{1}{\lambda}} < \lambda^{1/2}
\end{aligned}$$
when $\lambda$ is small.
\\
\noindent \textbf{2) Event $B^6_2$.}
$$\begin{aligned}\Q_q(B^6_2\;|\;\deg(o)=M') &\leq \sum_{i=1}^{R'} \epsilon_2 \frac{1}{\lambda^2}\log\frac{1}{\lambda}\cdot \nu^{i-1}\cdot q(M,\infty) \\&\leq \lambda^{2(a-2)-2}\cdot \log \frac{1}{\lambda} \cdot \nu^{R'+1} < \lambda^{2a-6-2\epsilon_2'};\end{aligned}$$
by the choice of $\epsilon_2'$, $2a-6-2\epsilon_2' > 0$, so we are done.
\\
\noindent \textbf{3) Event $B^6_3$.} Assume $(B^6_1 \cup B^6_2)^c$ occurs, so that $\T$ is such that $|B_\T(o, R')| \leq \lambda^{-3}$ and, with the exception of the root $o$, the degrees of all vertices in $B_\T(o, R')$ are less than $M$. Recall from the previous subsection (in the treatment of the event $B^5_3$) the definition of $\T(x)$ for a vertex $x \neq o$. Note that presently, for any $x \in B_\T(o, R')$, $\T(x)$ is a tree in which all degrees are bounded by $M$.
If $\{o\} \times [0,L_2] \;\lra\;B_\T(o, R')^c \times \R_+$, then there must exist $x$ with $d(o,x) = R'$ so that $\{o\} \times [0, L_2] \;\lra\;\{x\} \times \R_+$ inside $\T(x)$. For a fixed $x$, the probability of this is bounded by $(L_2 + 1)\cdot (2\lambda)^{R'}$, by Lemma \ref{lembound}$(iv.)$, so
$$P^\lambda_\T\left(\{o\} \times [0,L_2] \;\lra\; B(o, R')^c \times \R_+\right) \leq |\{x: d(o,x) = R'\}|\cdot (L_2+1)\cdot(2\lambda)^{R'} \leq \lambda^{-3}\cdot (L_2+1)\cdot(2\lambda)^{R'},$$
so that
$$\Q_q^ \lambda(B^6_3\;|\;(B^6_1 \cup B^6_2)^c,\;\deg(o) = M') \leq \lambda^{-3}\cdot \left(\lambda^{-17\epsilon_2\log\frac{1}{\lambda}}+1\right)\cdot (2\lambda)^ {\epsilon_2'\log\frac{1}{\lambda}},$$
so, using the fact that $\epsilon_2 = \frac{\epsilon_2'}{18},$ we are done.
\\
\noindent \textbf{4) Event $B^6_4$.} Again assume that $(B^6_1 \cup B^6_2)^c$ occurs. For $i \geq 0$, let
$$F_i = \left\{B(o, R')\times \left\{i\cdot 100\log(1/\lambda)\right\}\nleftrightarrow B(o, R') \times \left\{(i+1)\cdot 100\log(1/\lambda)\right\}\right\}.$$
These events are independent and, by Lemma \ref{lem:extTree},
\begin{equation} P^\lambda_\T(F_i) \geq (1/8)e^{-16\lambda^2M'} = (1/8)\lambda^{16\epsilon_2\log(1/\lambda)}\label{eq:lastComp1}.\end{equation} If $B(o, R') \times \{0\} \;\lra\; B(o, R') \times \{L_2\}$, then $F_i$ cannot occur for
\begin{equation} 0 \leq i \leq \lfloor L_2/(100\log(1/\lambda)) \rfloor = \lfloor \lambda^{-17\epsilon_2\log(1/\lambda)}/(100\log(1/\lambda)) \rfloor.\label{eq:lastComp2}\end{equation}
Comparing (\ref{eq:lastComp1}) and (\ref{eq:lastComp2}), it is easy to see that $\Q_q^\lambda(B^6_4\;|\;(B^6_1\cup B^6_2)^c)$ is smaller than any power of $\lambda$ as $\lambda \to 0$.
\section{Appendix: Proof of Theorem \ref{thm:reduc}}\label{s:appendix}
Recall the definition of $\mathcal{M}(x, R, K)$ and $\mathcal{N}(x, R, K)$ in (\ref{eq:defM}) and (\ref{eq:defN}). We will also need
$$\mathcal{L}(x,R) = \left\{(x,0) \;\lra\; B(x, R)^c \times \R_+ \right\}.$$
\begin{lemma}
\label{lem:aphelp}
For any $\epsilon > 0$ and $\lambda > 0$ there exists $K_0 > 0$ such that, for any $K > K_0$,
\begin{equation}\Q^\lambda_{q}\left(\left.{\mathop\cap_{i=K+1}^{\infty}} \;\mathcal{N}\left(o, a\log_2(i), i\right)\; \right|\;\deg(o) = K\right) > 1 - \epsilon.\label{eq:auxappend}\end{equation}
\end{lemma}
Clearly, it is enough to prove the above for $\lambda$ small enough. This is done using Lemma \ref{basic}$(iii.)$ and Lemma \ref{lem:infNei}; since the proof is essentially a repetition of the ideas of Subsection \ref{ss:lower}, we omit it.
The point of the following lemma is approximating the event $\{\xi^o_t \neq \varnothing \;\forall t\}$ on the infinite tree $\T$ by events involving the contact process on a finite ball around the root, $B(o, R)$.
\begin{lemma}
For any $\epsilon > 0,\; \lambda > 0$ and $R_0 > 0$, there exists $R > R_0$ such that
\begin{equation}\Q_{p,q}^\lambda\left(\mathcal{L}(o,R)\right) - \epsilon < \Q_{p,q}^\lambda\left(\xi^o_t \neq \varnothing \; \forall t \right) < \Q_{p,q}^\lambda\left(\mathcal{N}(o,R,R^2)\right) + \epsilon.\nonumber\end{equation}\label{lem:compFinEv}
\end{lemma}
\begin{proof} Fix $\epsilon,\;\lambda$ and $R_0$. The existence of $R$ such that the first inequality is satisfied is a direct consequence of
$\left\{\xi^o_t \neq \varnothing \; \forall t \right\} = \cap_{R=1}^\infty\; \mathcal{L}(o,R).$
Let us now deal with the second inequality. For the process $(\xi^o_t)_{t\geq 0}$, let $\sigma_i$ be the first time the infection reaches a vertex at distance $i$ from the root, and $X_i$ the vertex that becomes infected at this time. Define $N_k = \inf\{i: \deg(X_i) > k\}$. Since
$$\begin{aligned}
&\lim_{k \to \infty} \Q^\lambda_{p,q}\left(\left.N_k < \infty \;\right|\;\exists t: \xi^o_t = \varnothing \right) = 0 \text{ and }\\
&\lim_{r\to\infty}\Q^\lambda_{p,q}\left(\left.N_k < \infty,\;d(o,X_{N_k}) < r \;\right|\;\xi^o_t \neq \varnothing \;\forall t\right) = 1 \text{ for all } k,
\end{aligned}$$
we can choose $r_0, k_0$ such that
$$\Q_{p,q}^\lambda(N_{k_0} < \infty,\; d(o, X_{N_{k_0}}) < r_0) > \Q_{p,q}^\lambda\left(\xi^o_t \neq \varnothing \; \forall t \right) - \epsilon.$$
Also assume $k_0$ is large enough that (\ref{eq:auxappend}) is satisfied when $K = k_0 - 1$.
Choose $k_1$ large enough that $\left(r_0 + a \log_2 k_1\right)^2 < k_1$ and $r_0 + a \log_2 k_1 > R_0$. We define the event $\mathcal{N}'\left(X_{N_{k_0}}, a\log_2 k_1, k_1 \right)$ as the event $\mathcal{N}\left(X_{N_{k_0}}, a\log_2 k_1, k_1 \right)$ with time shifted so that $\sigma_{N_{k_0}}$ becomes the time origin (so that the infection starts at the space-time point $(X_{N_{k_0}},\;\sigma_{N_{k_0}})$). By the definition of $N_{k_0}$ and the choice of $k_0$,
$$\begin{aligned}&\Q_{p,q}^\lambda\left(\left.\mathcal{N}'\left(X_{N_{k_0}}, a\log_2 k_1, k_1 \right) \right| \;N_{k_0} < \infty,\;d(o, X_{N_{k_0}}) < r_0 \right)
\\
&\geq \Q^\lambda_q\left( \;\mathcal{N}(o, a\log_2 k_1, k_1 )\;|\;\deg(o) = k_0 - 1 \right) > 1 - \epsilon.\end{aligned}$$
We have thus shown that, with probability larger than $(1-\epsilon)\left(\Q^\lambda_{p,q}\left(\xi^o_t \neq \varnothing \;\forall t\right) -\epsilon \right)$, the infection reaches a site $X_{N_{k_0}}$ of degree larger than $k_0$ and distance less than $r_0$ from the root and then reaches a site $y$ of degree larger than $k_1$ and distance less than $a\log_2 k_1$ from $X_{N_{k_0}}$. All this occurs through infection paths through vertices whose distance from the root is never more than $R := r_0 + a \log_2 k_1$, so that $R > R_0$ and $R^2 < k_1$ as required. Since $\epsilon$ is arbitrary, the proof is complete.
\end{proof}
\begin{lemma}
\label{lem:NGn}
For any $\end{proposition}silon > 0,\; \lambda > 0$ and $(t_n)$ with $\log t_n = o(n)$, there exists $R>0$ such that
$$\liminf_{n \to \infty}\; \P^\lambda_{p, n}\left(\xi^{v_1}_{t_n} \neq \varnothing \;|\;\mathcal{N}(v_1, R, R^2)\right) > 1 - \end{proposition}silon.$$
\end{lemma}
Since the proof of this lemma requires several preliminary results, we will postpone it. With Lemmas \ref{lem:compFinEv} and \ref{lem:NGn} at hand, we are ready for our main proof.
\begin{proof}[Proof of Theorem \ref{thm:reduc}]
Fix $\lambda >0,\;\epsilon >0$ and $(t_n)$ with $t_n \to \infty$ and $\log t_n = o(n)$. We will write $\upgamma = \Q_{p,q}^\lambda \left(\xi^o_t \neq \varnothing \;\forall t\right)$.
By Lemmas \ref{lem:compFinEv} and \ref{lem:NGn}, we can choose $R > 0$ such that
\begin{equation} \begin{array}{l}
\Q_{p,q}^\lambda\left(\mathcal{L}(o,R)\right) - \epsilon < \upgamma < \Q_{p,q}^\lambda\left(\mathcal{N}(o,R,R^2)\right) + \epsilon;
\\
{\displaystyle \limsup_{n \to \infty}}\;\P_{p,n}^\lambda \left(\xi^{v_1}_{t_n} = \varnothing \;|\;\mathcal{N}(v_1, R, R^2) \right) < \epsilon^2.
\end{array}
\label{eq:auxCompLem}
\end{equation}
For the contact process with parameter $\lambda$ on $G_n$, define:
$$X_{n,i} = I_{\left\{\xi^{v_i}_{t_n} \neq \varnothing \right\}},\qquad \overline X_{n,i} = I_{\mathcal{L}(v_i, R)},\qquad Y_{n,i} = I_{\mathcal{L}(v_i, R)^c\;\cap \;\left\{\xi^{v_i}_{t_n} \neq \varnothing\right\}},\qquad 1\leq i \leq n.$$
Under $\P_{p,n}^\lambda$, $(X_{n,1},\ldots, X_{n,n}),\;(\overline X_{n,1},\ldots, \overline X_{n,n})$ and $(Y_{n,1},\ldots, Y_{n,n})$ are exchangeable random vectors with
$X_{n,i} \leq \overline X_{n,i} + Y_{n,i}$ for each $i$ and, by Proposition \ref{lem:kGW},
\begin{equation}\label{eq:XXY2}
\begin{array}{l}{\displaystyle \lim_{n \to \infty}}\;\P_{p,n}^\lambda\left(\overline X_{n,1} = 1\right) = \Q_{p,q}^\lambda(\mathcal{L}(o,R)) < \upgamma + \epsilon;
\\ {\displaystyle \lim_{n \to \infty}}\;\P_{p,n}^\lambda\left(Y_{n,1} = 1\right) = 0;
\\{\displaystyle \lim_{n \to \infty}}\;\mathsf{Cov}(\overline X_{n,1}, \overline X_{n,2}) = 0.\end{array}
\end{equation}
Using duality for the contact process, we have
$$\begin{aligned}\P_{p,n}^\lambda\left(|\xi^{V_n}_{t_n} | > (\upgamma + 3\epsilon)n \right)&= \P_{p,n}^\lambda\left(\sum_{i=1}^n X_{n,i} > (\upgamma + 3\epsilon)n \right)\\
&\leq\P_{p,n}^\lambda\left(\sum_{i=1}^n \overline X_{n,i} > (\upgamma + 2\epsilon)n \right) + \P_{p,n}^\lambda\left(\sum_{i=1}^n Y_{n,i} > \epsilon n \right) \stackrel{n \to \infty}{\xrightarrow{\hspace*{0.8cm}}}0
\end{aligned}$$
by (\ref{eq:XXY2}).
We now define
$$\overline Z_{n,i} = I_{\mathcal{N}(v_i, R, R^2)},\qquad W_{n,i} = I_{\mathcal{N}(v_i, R, R^2) \; \cap \;\left\{\xi^{v_i}_{t_n} = \varnothing\right\}}, \qquad 1 \leq i \leq n.$$
Again, we get exchangeable random vectors and $X_{n,i} \geq \overline Z_{n,i} - W_{n,i}$. By Proposition \ref{lem:kGW} and (\ref{eq:auxCompLem}),
\begin{equation}
\label{eq:ZZW2}\begin{array}{l}
{\displaystyle \lim_{n \to \infty}}\P_{p,n}^\lambda\left(\overline Z_{n,1} = 1 \right) = \Q_{p,q}^\lambda\left(\mathcal{N}(o, R, R^2)\right) > \upgamma - \epsilon,
\\
{\displaystyle \limsup_{n\to\infty}}\;\P_{p,n}^\lambda\left(W_{n,1} = 1\right) < \epsilon^2,
\\
{\displaystyle \lim_{n \to \infty}}\;\mathsf{Cov}(\overline Z_{n,1}, \overline Z_{n,2}) = 0.
\end{array}
\end{equation}
We then have
$$\begin{aligned}\P_{p,n}^\lambda\left(|\xi^{V_n}_{t_n}| < (\upgamma - 3\epsilon)n\right) &=\P_{p,n}^\lambda\left(\sum_{i=1}^n X_{n,i} < (\upgamma - 3\epsilon)n \right)\\
&\leq \P_{p,n}^\lambda\left(\sum_{i=1}^n \overline Z_{n,i} < (\upgamma - 2\epsilon)n \right) + \P_{p,n}^\lambda\left(\sum_{i=1}^n W_{n,i} > \epsilon n \right).
\end{aligned}$$
The first term in the right-hand side vanishes as $n \to \infty$; by Markov's inequality, the second term is less than
$$\frac{n\E_{n,p}^\lambda\left(W_{n,1}\right)}{\epsilon n} \leq \frac{2\epsilon^2n}{\epsilon n} = 2\epsilon$$
when $n$ is large. Since $\epsilon$ is arbitrary, the proof is now complete.
\end{proof}
We now turn to the proof of Lemma \ref{lem:NGn}. Let us first explain our approach. We want the infection started at $v_1$ to survive until time $t_n$. Lemma \ref{lem:SurCD} below guarantees that, to this end, it is enough to show that the infection reaches a vertex of degree $n^\delta$. Lemmas \ref{lem:echainb} and \ref{lem:echain} show that with high probability, there exists a ``bridge'' of vertices of increasing degree that can take the infection from $v_1$ to a site of degree larger than $n^\delta$. In order to cross this bridge, the infection needs some ``initial strength'', which is provided by the event $\mathcal{N}(v_1, R, R^2)$ in the conditioning in the probability in Lemma \ref{lem:NGn}.
The following result was proved in \cite{CD} for $t_n = e^{n^\beta}$, where $\beta < 1$. Applying the exponential extinction time result of \cite{MMVY}, it is easy to improve this to $t_n$ satisfying $\log(t_n) = o(n)$.
\begin{lemma}
\label{lem:SurCD}
For any $\delta,\; \epsilon,\;\lambda > 0$ and $(t_n)$ with $\log t_n = o(n)$, we have
$$\P_{p,n}\left(\min_{v \in V_n:\; \deg(v) \geq n^\delta} \;P_{G_n}^\lambda\left(\xi^v_{t_n} \neq \varnothing\right) > 1 - \end{proposition}silon \right) \stackrel{n \to \infty}{\xrightarrow{\hspace*{0.8cm}}}1.$$
\end{lemma}
\noindent In words: as $n$ becomes large, the probability of the following converges to 1: the graph $G_n$ is such that, starting the $\lambda$-contact process with a single infection at any site of degree larger than $n^\delta$, with probability larger than $1-\epsilon$ the process will still be active by time $t_n$.
In order to prove the two following lemmas, we describe an alternate, algorithmic construction of the random graph $G_n$. Let $d_1, \ldots,d_n$ be independent with law $p$ and, by adding a half-edge to some vertex if necessary, assume that $\sum_{i=1}^n d_i$ is even. We will match pairs of half-edges, one pair at a time. Let $\mathcal{H}$ denote the set of half-edges. To start, we select a half-edge $h_1$ in any way we want and then choose a half-edge $h_2$ uniformly at random from $\mathcal{H}\backslash\{h_1\}$. We then match $h_1$ and $h_2$ to form an edge. Next, we select a half-edge $h_3$ from $\mathcal{H} \backslash \{h_1, h_2\}$, match it to a half-edge $h_4$ uniformly chosen from $\mathcal{H} \backslash \{h_1, h_2, h_3\}$, and so on, until there are no more half-edges to select. With a moment's reflection, we see that the random graph produced from this procedure is $G_n$.
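As a purely illustrative aid (not part of the proof), the following sketch implements this half-edge matching procedure; the function name and the way the degree sequence is supplied are our own choices.
\begin{verbatim}
import random

def configuration_model(degrees, seed=0):
    """Match half-edges uniformly at random, as in the construction of G_n.

    degrees: the list d_1, ..., d_n; if the total degree is odd, one extra
    half-edge is attached to the last vertex, as in the text."""
    rng = random.Random(seed)
    degrees = list(degrees)
    if sum(degrees) % 2 == 1:
        degrees[-1] += 1
    # a half-edge is a pair (vertex index, local index of the half-edge)
    unmatched = [(v, j) for v, d in enumerate(degrees) for j in range(d)]
    edges = []
    while unmatched:
        h1 = unmatched.pop()                 # select any unmatched half-edge
        i = rng.randrange(len(unmatched))    # uniform among the remaining ones
        h2 = unmatched.pop(i)
        edges.append((h1[0], h2[0]))         # the matched pair forms an edge
    return edges
\end{verbatim}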
\begin{lemma}
\label{lem:echainb}
There exists $\kappa = \kappa(a) > 0$ such that, with probability tending to 1 as $n \to \infty$, no cycle is formed when less than $n^\kappa$ matchings of half-edges are made.
\end{lemma}
\begin{proof}
Let $\sigma = \frac{1}{4(a-1)},\; \kappa = \frac{a-2}{9(a-1)}$. Define
$$A = \left\{\sum_{i=1}^n d_i > \frac{n\mu}{2},\; |\{i: d_i > n^\sigma\}| > n^{1-2\sigma(a-1)},\;\sum_{i:\; d_i > n^\sigma} d_i \leq n^{1-\frac{\sigma}{2}(a-2)} \right\}.$$
Using the Law of Large Numbers, (\ref{c0a-1}) and (\ref{c0a-2}), we get $\P_{p,n}(A) \to 1$ as $n \to \infty$. Assume $A$ occurs and we have matched $j$ pairs of half-edges, with $j < n^\kappa$. Let $J$ be the set of vertices associated to half-edges that were matched; we have $|J| \leq 2j < 2n^\kappa < n^{1-2\sigma(a-1)} = \sqrt{n}$ since $\kappa <1/2$. Suppose we now choose a half-edge uniformly at random from the set of half-edges that have not yet been matched. The probability that the chosen half-edge belongs to a vertex that is not in $J$ is
$$\frac{\sum_{i:\;v_i \notin J} d_i}{\sum_{i=1}^n d_i - 2j} = 1 -\frac{\sum_{i:\;v_i \in J}d_i - 2j}{\sum_{i=1}^n d_i - 2j} \geq 1 - \frac{\sum_{i:\;v_i \in J}d_i}{n\mu/4}$$
since $\sum_{i=1}^n d_i > n\mu/2$ and $j \ll n$. Since $|\{i: d_i > n^\sigma\}| > |J|$, the right-hand side is larger than
$$1-\frac{\sum_{i:\;d_i >n^\sigma}d_i}{n\mu/4} > 1 - \frac{n^{1-\frac{\sigma}{2}(a-2)}}{n\mu/4} = 1 -\frac{4}{\mu n^{\frac{\sigma}{2}(a-2)}}.$$
So the probability of forming a cycle in $\lfloor n^\kappa \rfloor$ matchings is less than
$n^\kappa\cdot \frac{4}{\mu n^{(\sigma/2)(a-2)}} \stackrel{n \to \infty}{\xrightarrow{\hspace*{0.8cm}}} 0$
since $\kappa < \frac{\sigma}{2}(a-2)$.
\end{proof}
\begin{lemma}
\label{lem:echain}
There exists $\delta = \delta(a) > 0$ such that the following holds. For any $\epsilon > 0$, there exists $K_0$ such that, for any $K > K_0$ and $n$ large enough,
$$\P_{p,n}\left({\mathop\cap_{k=K}^{\lceil n^\delta \rceil}}\;\mathcal{M}(v_1, a\log_2 k, k)\right) > 1 - \epsilon.$$
\end{lemma}
\begin{proof}
Let $\kappa$ be as in the above lemma; set $\delta = \frac{\kappa}{a}$ and $N_n = \lceil n^\delta \rceil$. Define the event
$$A'(K) = {\mathop \cap_{k=K}^{N_n}}\left\{\frac{\sum_{i:\; d_i \geq k}\; d_i}{\sum_{i=1}^n d_i} > \frac{1}{k^{a-1}} \right\}.$$
We have
$$\P_{p,n}(A'(K)) \geq 1 - \P_{p,n}\left(\sum_{i=1}^n d_i > 2n\mu\right) - \sum_{k=K}^{N_n} \P_{p,n}\left(\sum_{i:\;d_i \geq k} d_i \leq \frac{2n\mu}{k^{a-1}}\right).$$
For fixed $k$, we have
$$\P_{p,n}\left(\sum_{i:\;d_i \geq k} d_i \leq \frac{2n\mu}{k^{a-1}} \right) \leq \P_{p,n}\left(|\{i: d_i \geq k\}|\leq \frac{2n\mu}{k^a} \right).$$
Now, letting $X \sim \mathsf{Bin}(n, p[k, \infty))$, the probability in the right-hand side is less than
$$\P\left(X \leq 2n\mu/k^{a}\right) \leq e^{-c\frac{n}{k^{a}}}$$
by (\ref{c0a-1}) and (\ref{mark}). We have thus shown that ${\displaystyle\liminf_{n \to \infty}}\; \P_{p,n}(A'(K)) \geq 1 - \sum_{k=K}^{N_n}e^{-cnk^{-a}} > 1 - \epsilon$ when $K$ is large enough. Fix one such $K$.
We now start matching half-edges; we first match all half-edges incident to $v_1$, then the half-edges incident to the neighbours of $v_1$, and so on. We continue until either a cycle is formed with the edges that we have built (call this a \textit{failed exploration}) or we have revealed more than $n^\kappa$ vertices (a \textit{successful exploration}). By the above lemma, as $n \to \infty$, with high probability we have a successful exploration. We remark that, since all vertices have degree larger than 2, in a successful exploration we reveal at least $2^i$ vertices at distance $i$ from $v_1$, for $0 \leq i \leq \lfloor \log_2 n^\kappa \rfloor$.
Assume $A'(K)$ occurs and let $k \in [K, N_n]$. If at some point in the exploration, $j$ matchings have already been made and no vertex of degree larger than $k$ has been found, then the probability that the next revealed vertex has degree larger than $k$ is larger than $\left(\sum_{i:\;\deg(v_i) \geq k}d_i\right)/\left(\sum_{i=1}^n d_i - 2j\right) \geq k^{-(a-1)}$. Thus,
$$\begin{aligned}&\P_{p,n}\left( \mathcal{M}(v_1, a\log_2 k, k)\;\left|\;A'(K) \cap \left\{\text{Successful exploration}\right\}\right. \right)\\
&\geq \P_{p,n}\left(\begin{array}{c}\text{A vertex of degree larger than $k$ is}\\\text{found in $k^a$ steps in the exploration} \end{array}\left|\;A'(K)\cap\left\{\begin{array}{c}\text{Successful}\\\text{exploration} \end{array} \right\} \right.\right)\\
&\geq 1-(1-k^{-(a-1)})^{k^a} \geq 1-e^{-k}.
\end{aligned}$$
This completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:NGn}]
Fix $\epsilon, \lambda$ and $(t_n)$ as in the statement of the lemma. Since for $R$ large enough, ${\displaystyle\lim_{n \to \infty}}\; \P^\lambda_{p,n}\left(\mathcal{N}(v_1, R, R^2)\right) = \Q_{p,q}^\lambda\left(\mathcal{N}(o, R, R^2)\right) > \frac{1}{2}\;\Q_{p,q}\left(\xi^o_t \neq \varnothing\; \forall t \right) > 0,$
the lemma will follow if we prove that for $R$ large enough,
\begin{equation}\limsup_{n \to \infty} \;\P_{p,n}^\lambda\left(\mathcal{N}(v_1, R, R^2)\cap \left\{\xi^{v_1}_{{t_n}}= \varnothing\right\} \right) < \epsilon. \label{eq:auxSuf1}\end{equation}
Recall that, if $\mathcal{N}(v_1, R, R^2)$ occurs, then there exist $y^*, t^*$ so that $d(v_1, y^*)<R,\;\deg(y^*) > R^2$ and $\frac{|\xi^{v_1}_{t^*}\;\cap\; B(y^*,1)|}{|B(y^*,1)|} > \frac{\min(\lambda, \lambda_0)}{16e}$. So, to prove (\ref{eq:auxSuf1}) it suffices to prove that for $R$ large enough,
\begin{equation}
\liminf_{n \to \infty}\; \P_{p,n}\left(\begin{array}{c}
\text{for all $y^* \in B(v_1, R)$ with $\deg(y^*) > R^2$},
\\\frac{|\xi_{0}\;\cap\; B(y^*,1)|}{|B(y^*,1)|} > \frac{\min(\lambda, \lambda_0)}{16e} \Longrightarrow P^\lambda_{G_n}\left(\xi^{y^*}_{t_n} \neq \varnothing\right)> 1-\epsilon
\end{array}\right) > 1-\epsilon.\label{eq:auxNew}
\end{equation}
Also, it is enough to prove (\ref{eq:auxNew}) under the assumption that $\lambda$ is small enough, so we take $\lambda < \lambda_0$, where $\lambda_0$ is as in Lemma \ref{lem:infNei}.
Fix $\delta > 0$ and $K_0$ corresponding to $\epsilon/2$ in Lemma \ref{lem:echain}. Then take $K \geq K_0$ such that
\begin{equation} 2\sum_{k=K}^\infty e^{-c_1\lambda^2 k} < \frac{\epsilon}{3}, \qquad k > \frac{3}{c_1} \cdot\left(\frac{1}{\lambda} \log \frac{1}{\lambda} \right)\cdot 2a\log_2(k+1) \; \forall k \geq K.\label{eq:pNGn1}\end{equation}
Next, choose $R > 0$ such that
\begin{equation}2e^{-c_1\lambda^2R^2} < \frac{\epsilon}{3},\qquad R^2>\frac{3}{c_1}\cdot \frac{1}{\lambda^2}\log\frac{1}{\lambda}\cdot (R+a\log_2K). \label{eq:pNGn2}\end{equation}
Now define the events for the graph $G_n$:
$$B_1 = \left\{\min_{v \in V_n:\; \deg(v) \geq n^\delta} P^\lambda_{G_n}\left(\xi^v_{t_n} \neq \varnothing\right) > 1-\frac{\epsilon}{3} \right\};\qquad B_2 ={\mathop \cap_{k=K}^{\lceil n^\delta\rceil}}\;\mathcal{M}(v_1, a\log_2 k, k).$$
By Lemma \ref{lem:SurCD} and the choice of $K_0$, when $n$ is large enough we have $\P_{p,n}(B_1 \cap B_2) > 1-\epsilon$. Assume $B_1\cap B_2$ occurs and fix $y^*$ with $\deg(y^*)>R^2$ and $d(v_1, y^*) < R$; let us now prove that, if $\frac{|\xi_0 \cap B(y^*,1)|}{\lambda\cdot |B(y^*,1)|} > \frac{1}{16e}$, then $P^\lambda_{G_n}(\xi^{y^*}_{t_n} \neq \varnothing) > 1-\epsilon$. Since $B_2$ occurs, we can take $z_K^*, z_{K+1}^*,\ldots, z^*_{\lceil n^\delta \rceil}$ such that $\deg(z^*_k) \geq k,\; d(v_1, z^*_k) \leq a\log_2 k$. Now, by (\ref{eq:pNGn1}) and (\ref{eq:pNGn2}),
$$\deg(y^*) > \frac{3}{c_1}\cdot \frac{1}{\lambda^2}\log\frac{1}{\lambda} \cdot d(y^*, z_K^*) \text{ and } \deg(z^*_k) > \frac{3}{c_1}\cdot \frac{1}{\lambda^2}\log\frac{1}{\lambda} \cdot d(z^*_k, z^*_{k+1}) \text{ for all } k \geq K,$$
so Lemma \ref{lem:infNei} can be used repeatedly to guarantee that the infection is transmitted from $y^*$, through $z^*_K, z^*_{K+1},\ldots$ until $z^*_{\lceil n^\delta \rceil}$ with probability larger than
$$1-2e^{-c_1\lambda^2\deg(y^*)} -2\sum_{k=K}^{\lceil n^\delta \rceil } e^{-c_1\lambda^2\deg(z^*_k)} > 1-\frac{2}{3}\epsilon.$$
To conclude, by the definition of $B_1$, the infection then survives until time $t_n$ with probability larger than $1-\epsilon$.
\end{proof}
\end{document} |
\begin{document}
\title{Modeling Dependence via Copula of Functionals of Fourier Coefficients}
\author{Charles Fontaine \thanks{Corresponding author. e-mail: [email protected]} \and Ron D. Frostig \and Hernando Ombao}
\institute{C. Fontaine \and H. Ombao \at
Statistics Program, King Abdullah University of Science and Technology (KAUST),
23955 Thuwal (Saudi Arabia)
\and
R. D. Frostig \at
Departments of Neurobiology and Behavior, Biomedical Engineering, and the Center for Neurobiology of Learning and Memory, University of California-Irvine, Irvine, CA, 92697 U.S.A.}
\maketitle
\begin{abstract}
The goal of this paper is to develop a measure for characterizing complex dependence between stationary time series that cannot be captured by traditional measures such as correlation and coherence. Our approach is to use copula models of functionals of the Fourier coefficients, which generalizes coherence. Here, we use standard parametric copula models with a single parameter, from both the elliptical and Archimedean families. Our motivating application is to analyze changes in local field potentials in the rat cortex prior to and immediately following the onset of stroke. We present the necessary theoretical background, the multivariate models and an illustration of our methodology on these local field potential data. Simulations with non-linearly dependent data reveal information that would be missed by not taking into account dependence at specific frequencies. Moreover, these simulations demonstrate how our proposed method captures more complex non-linear dependence between time series. Finally, we illustrate our copula-based approach in the analysis of local field potentials of rats.
\end{abstract}
\noindent
{\it Keywords:} Coherence, Dependence, Fourier transform, Parametric copulas, Ranks, Time series.
\section{Introduction}
\label{sec:intro}
Consider an experimental setting where multichannel brain signals are recorded continuously from an animal (rat, monkey, human) over a certain period of time. The key scientific questions being addressed often center
on brain connectivity, that is, how different brain regions interact. In particular, the emphasis on these finely-sampled brain electrophysiological signals (e.g., local field potentials (LFP)) is on interactions between oscillatory components extracted from each channel. Methods for
analyzing dependence between brain signals have been developed in the
literature (see, e.g., \cite{FiecasPCoh}
and \cite{Ombaobook}) and a
more formal and general treatment of spectral analysis is discussed in
\cite{ShumwayStoffer}. However, classical spectral metrics
(e.g., coherence and partial coherence) are limited in that they can
capture only the strength of the linear dependence between the Fourier coefficients. The goal of this paper is to develop a rigorous approach that can comprehensively model general dependence structures between oscillatory activity of time series via copulas but using spectral features such as functionals of the Fourier coefficients.
In Figure \ref{exampleIntro1}, one observes three dependence structures having the same correlation measure (or value). The difference between these cases cannot be captured by a linear dependence measure (e.g. correlation, partial correlation, coherence, partial coherence). Hence, our goal in this paper is to present a copula-based framework to deal with these complex-structured dependencies in the spectral domain, for particular fundamental Fourier frequencies. We present a semi-parametric copula-based methodology to express these complex dependencies. The novelty here is that we develop a new approach that incorporates major statistical features (Kendall's tau, empirical cumulative distribution function) in the semi-parametric copula inference in order to handle spectrally represented data.
\begin{figure}
\caption{Illustration of cases of non-linear dependence \textit{(middle and right)}}
\label{exampleIntro1}
\end{figure}
We now present an overview of this work. Consider a $d$-dimensional time series segmented into $R$ possibly over-lapping epochs (periods of $1$ second) with $T$ observations within each epoch. We think of this data as signals recorded in $d$ locations on the brain. The time series across all epochs will be assumed to be stationary so that the dependence structure between variables remains constant over the course of all $R$ epochs. We denote the time series for the $r$-th epoch
to be the $T \times d$ matrix ${\bf X}^{(r)} = [X_1^{(r)}, \ldots, X_{d}^{(r)}]$ and $X^{(r)}_\ell$, $\ell=1,...,d$, to be the vector $[X_\ell^{(r)}(1),...,X_\ell^{(r)}(T)]'$ corresponding to recordings in channel $\ell$. The time domain approach to model dependence between time series
is directly via ${\bf X}^{(r)}$. However, if one wants to represent the dependence in the spectral domain at the fundamental Fourier frequencies, the main approach is based on \textit{coherence} between channels $X_\ell$ and $X_{\ell '}$, $\ell,\ell '=1,...,d$, at
frequency $\omega_k$, which is approximately equal to the expected value of the squared
absolute correlation between the Fourier coefficients. In this paper, we will examine more general (non-linear) dependence between oscillatory components by modeling copulas of functionals of Fourier coefficients.
The motivation behind this work is the following experiment. At the neurobiology laboratory at the University of California-Irvine (Principal Investigator: Frostig, second author; see \citet{wann2017}), stroke was artificially induced in an experimental rat model of ischemic stroke by severing the medial cerebral artery (MCA). Brain activity prior to and after the stroke was recorded through the local field potentials (LFPs) from $d=32$ microelectrodes placed directly within the rat cortex. The recording time window covered $5$ minutes pre-stroke and $5$ minutes post-stroke. For each second, $T=1000$ time points were recorded and analyzed. Figure \ref{RatBrain} shows the placement of the microelectrodes. The key scientific question being addressed by neuroscientists is focused on stroke-induced changes in brain connectivity (i.e., communication patterns between neuronal populations). In particular, the emphasis on these local field potentials is on interactions between oscillatory components extracted from each microelectrode. Thus, the statistical interest here was to develop a measure that can characterize the complex nature of dependence (particularly in the spectral domain) between the signals recorded by the microelectrodes and to develop a method that can detect changes in dependence following a shock to the brain system (such as a stroke). The experimental setup described above will be detailed in Section \ref{lfp}.
\begin{figure}
\caption{Placement of the $32$ microelectrodes in the cortex of the rat. There are $8$ columns (blue) of microelectrodes, each column having $4$ layers (red) that span most of the cortical depth. }
\label{RatBrain}
\end{figure}
Methods for analyzing dependence between brain signals have been developed in the
literature (see, e.g., \citet{matousek1973}, \citet{FiecasPCoh}, \citet{Ombaobook}, \citet{guevara1996}, \citet{shaw1981} and \citet{shaw1984}) and a more formal and general treatment of spectral analysis is discussed in \citet{ShumwayStoffer}. However, classical spectral metrics
(e.g., coherence and partial coherence) are limited in that they can capture only the strength of the linear dependence between the Fourier coefficients. Thus, these measures may miss some complex (or non-linear) dependence structures between two frequencies. For example, in the well-known work on changes in dependence between brain channels in the spectral domain (\citet{long2004}, \citet{purdon2001}, \citet{nunez2006} and \citet{gotman1982}), the authors constrained their analyses to linear dependence between signals (or frequencies). Hence, the proposed approach in this paper is an attempt to provide a new tool for capturing more general (beyond linear) dependence between signals.
We denote the microelectrode by $\ell=1,...,d$ and the epoch by $r=1,...,R$, with each epoch having $T$ time points (for this specific data set, we have $d=32$, $R=600$ and $T=1000$). Thus, the time domain variable representing these microelectrodes on the $r$-th epoch is $X^{(r)}_\ell$. As we are interested in working in the spectral domain in order to capture dependence between oscillatory activities, we compute the discrete Fourier transform of each epoch to obtain the Fourier coefficients $$f_{\ell, \omega_k}^{(r)} = \frac{1}{\sqrt{T}} \sum_{t=1}^{T} X_\ell^{(r)}(t)
\exp(-i 2 \pi \omega_k t)$$ where $\omega_k = \frac{k}{T}$,
\ $k = 0, 1, \ldots, T-1$ \ are the fundamental Fourier
frequencies. As we work in this paper with the magnitudes of the Fourier coefficients (or square roots of periodograms), let $\delta_{\ell,\omega_k}=[\delta_{\ell,\omega_k}^{(1)},...,\delta_{\ell,\omega_k}^{(R)} ]'=[ | f_{\ell,\omega_k}^{(1)}|,...,|f_{\ell,\omega_k}^{(R)}|]'$ be the vector of magnitudes of the Fourier coefficients for microelectrode $\ell=1,...,d$ and frequency $\omega_k$ over the $R$ epochs. Let ${\bm{\delta}}_{\omega_k}=[\delta_{1,\omega_k},...,\delta_{d,\omega_k}]$ be the $R \times d$ matrix collecting these vectors for the $d$ microelectrodes. Let $F_{1,\omega_k},...,F_{d, \omega_k}$ be the marginal distributions associated with ${\bm{\delta}}_{\omega_k}$, each admitting a density. Obviously, when components of
${\bm{\delta}}_{\omega_k}$ are independent, the joint cumulative distribution
function (cdf) is simply the product of its marginal cumulative
distributions. However,
when there is dependence between the different components of the time series,
\citet{sklar1959} provides an explicit form of the joint cdf $H({\bm{\delta}}_{\omega_k})$ by
\begin{eqnarray} \label{sklar}
H\left( \delta_{1,\omega_k},...,\delta_{d,\omega_k} \right) &=& C_{1,...,d;\omega_k} \left( F_{1,\omega_k}(\delta_{1,\omega_k}),...,F_{d,\omega_k}(\delta_{d,\omega_k}) \right)
\end{eqnarray}
where $C_{1,...,d;\omega_k}$ is a copula: a cumulative distribution function expressing the mapping $[0,1]^d \to [0,1]$. In practice, a copula is characterized by a model which may be either parametric or non-parametric. Under a correct specification, one
clear advantage of this approach is the flexibility of $C$ in characterizing changes in the nature and strength of
dependence and yet still retain its general structure. Indeed, if we look at the basic example in Figure \ref{exampleIntro1} to consider both structure and strength, we see evidence that the dependence between $Variable \text{ }1$ and $Variable\text{ }2$ differs in the three cases. Using Pearson's correlation, the equivalent of coherence on the real-valued domain, the estimated strength of dependence is $\tau=0.70$ in each case, which is not truly reflective of the actual process. The copula model captures the information about the non-linear structure of dependence; in this particular example, there is a distinct difference in the tail dependence. Indeed, one observes a strong linear dependence in the lower tail in the middle figure and a strong linear dependence in the upper tail in the right one.
Copulas have been used to model dependence between random
variables (e.g., \citet{aas2009} and \citet{joe1997}). However, a
straightforward application of a standard parametric copula model
cannot fully capture the essence of certain forms of dependence. Hence, many
copula-based approaches have been developed to deal with these
complex dependence issues. For example, the empirical multivariate approaches
(e.g. \citet{deheuvels1979}), the kernel-based approaches
(e.g. \citet{gijbels1990}) or approaches based on Bernstein
polynomials approximations (e.g. \citet{li1998} and
\citet{sancetta2004}) have
the flexibility to capture more complex dependence. However, while these models are more robust,
they typically suffer from lower power.
As a remark, in our work, $C_{\omega_k}$ is assumed to be constant across all epochs $r=1,...,R$, and thus data from all epochs (functionals of Fourier coefficients) will be used to estimate this common dependence structure. Some studies have dealt with this frequency-spectral dependence between two or more channels, particularly from an inference perspective. For example, \citet{ibragimov2005} and \citet{lowin2010} theorized the modeling principle of the Fourier copula, based on the work of \citet{delapena2003}, where the bivariate copula is expressed by $C(u,v)=\int_0^u \int^v_0 (1+g(u,v))dudv,$ for $g(u,v)$ being simply a global
measure across the entire frequency range. They proposed to estimate this copula by an empirical joint cdf and showed the asymptotic convergence of the related empirical copula process. While seemingly attractive, a major drawback is that we are interested in expressing the dependence at specific frequency bands rather than over the entire frequency range. In
that approach, it is not possible to model the strength of both full and partial dependence through a parameter, as one can in a standard
elliptical copula, where Pearson's correlation matrix plays the role of
this parameter.
In this paper, we develop an approach to model dependence while keeping the robustness of parametric copulas
(seen through the expression of a parameter structure) and using the advantages of the decomposition of time series data into
band-specific frequency oscillations. The main feature of our methodology is that it uses the coherence or the Kendall rank-based coherence as measures to express the strength of the dependence in a parametric model where margins are expressed from the magnitude of Fourier coefficients. In Section \ref{sec:models}, we present an inferential framework for both elliptical and Archimedean families of copulas. In Section \ref{simuls}, we illustrate the potential of our approach through some simulations on specific cases of idiosyncratic dependencies. Finally, we apply in Section \ref{lfp} the presented methodology to local field potentials (LFP) of experimental rats.
\section{Models}
\label{sec:models}
Any parametric copula consists of three main components: the marginal distributions, the dependence parameter(s) and the copula structure itself. Here, we assume that we use a straightforward copula structure in its classical form. Hence, we have to express both the margins and the dependence spectrally, with regard to their analytic properties, in order to infer, first, the marginal cdfs and, second, the dependence coefficient(s), the latter being functionals of spectral-domain analogues of the Pearson or Kendall measures.
Before presenting our models and inferential framework, we review and set some necessary notation. Let $\delta_{\ell, \omega_k}^{(r)}$ be the magnitude of the Fourier coefficient $f_{\ell, \omega_k}^{(r)}$ for the microelectrode (brain channel) $\ell=1,...,d$ at frequency $\omega_k=\frac{k}{T},k=0,...,T-1$ for epoch $r=1,...,R$. The collection of all these magnitudes over all the possible epochs is denoted by the vector $\delta_{\ell, \omega_k}=[\delta_{\ell, \omega_k}^{(1)},...,\delta_{\ell, \omega_k}^{(R)} ]'$ and any subvector for $1 \leq r < s \leq R$ will be denoted by $\delta^{(r:s)}_{\ell, \omega_k}=[\delta_{\ell, \omega_k}^{(r)},...,\delta_{\ell, \omega_k}^{(s)} ]'$. Let $\xi_{\ell,\omega_k}$ be the cumulative distribution function of $\delta_{\ell, \omega_k}$, which admits a density.
\subsection{Marginals inference}
We note that our goal is to model dependence directly on the magnitude
of the Fourier coefficients. Thus, we express the distribution of the vector $\delta_{\ell,\omega_k}$ at any
channel $\ell=1,...,d$, by the definition of cdf, i.e.
$\mathbb{P}(\delta_{\ell,\omega_k} \leq y)=\xi_{\ell,\omega_k}(y). $
A natural method to estimate this distribution is to apply
the empirical cumulative distribution function (ecdf) estimator directly
to the components of $\delta_{\ell,\omega_k}$, where the data length is the number of epochs $R$, that is, we compute
\begin{eqnarray}
\widehat{\xi}_{\ell,\omega_k}(y) &=& \frac{1}{R}\sum_{r=1}^{R} \mathbb{I}(\delta_{\ell,\omega_k}^{(r)} \leq y)
\end{eqnarray}
on a single frequency over all epochs; thus the asymptotic convergence to the true cdf is preserved by the Glivenko-Cantelli theorem (see \citet{vandervaart1998}). One remark is that the latter formula holds only when we are interested in the dependence at a particular frequency and when this dependence is assumed to be constant across all epochs. In the situation where we are interested in a frequency \textit{band} (rather than a single frequency), we apply filtering on each epoch and then compute the sum across the frequencies in that band within a single epoch; that is, one should sum over $k\in \mathcal{K}$, where $\mathcal{K}$ is the set of frequencies in the band of interest (e.g., the delta frequency band $\mathcal{K}_\Delta=(0,4)$ Hertz).
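For illustration only, a minimal computational sketch of this estimator (array shapes and function names are our own; we assume the epochs of one channel are stored as the rows of an $R \times T$ array):
\begin{verbatim}
import numpy as np

def fourier_magnitudes(X, k):
    """X has shape (R, T): the R epochs of one channel.
    Returns delta[r] = |f_{omega_k}^{(r)}| with the 1/sqrt(T) normalization."""
    R, T = X.shape
    t = np.arange(T)
    basis = np.exp(-2j * np.pi * k * t / T) / np.sqrt(T)
    return np.abs(X @ basis)          # vector of length R

def ecdf(delta):
    """Empirical cdf of the magnitudes over the R epochs."""
    s = np.sort(delta)
    return lambda y: np.searchsorted(s, y, side="right") / len(s)
\end{verbatim}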
\subsection{Dependence parametrization}
We present here two measures of dependence that will serve, through functionals of them, to estimate the dependence parameter of any standard parametric copula.
\subsubsection{Coherence}
The analogue of the Pearson correlation for the spectral domain
at frequency $\omega_k$ is coherency which is the ratio of the
cross-spectrum
(or covariance between ${\bm{f}}_{\ell,\omega_k}$ and ${\bm{f}}_{\ell',\omega_k}$, $\ell,\ell' =1,...,d$; where ${\bm{f}}_{\ell,\omega_k}=[f_{\ell,\omega_k}^{(1)},...,f_{\ell,\omega_k}^{(R)} ]'$) over the square
root of the
product of their autospectra at $\omega_k$ (see \citet{ShumwayStoffer}).
Coherency is complex-valued and lies inside the unit circle
(i.e., its magnitude is less than or equal to $1$). Here, we
consider \textit{coherence}, denoted by $\kappa_{\ell,\ell'; \omega_k}$, which
is the squared modulus of the coherency and thus lies in
$[0,1]$.
We consider two approaches to estimating $\kappa_{\ell,\ell'; \omega_k}$.
In the first case, when
the dependence between microelectrodes $\ell$ and $\ell'$, $\ell,\ell'=1,...,d$, is constant across all epochs, we have
the estimator
\begin{eqnarray*}
\widehat{\kappa}_{\ell,\ell'; \omega_k}&:=& \frac{ \left| \sum_{r=1}^{R}f_{\ell,\omega_k}^{(r)}f_{\ell',\omega_k}^{\star (r)} \right|^2}
{\sum_{r=1}^{R}\left( f_{\ell,\omega_k}^{(r)}f_{\ell,\omega_k}^{\star (r)} \right) \sum_{s=1}^{R}\left( f_{\ell',\omega_k}^{(s)}f_{\ell',\omega_k}^{\star (s)} \right)}
\end{eqnarray*}
where $f^{\star (r)}$ refers to the complex conjugate of $f^{(r)}$.
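A computational sketch of this pooled-over-epochs estimator (our own illustration; names are not from the paper):
\begin{verbatim}
import numpy as np

def coherence_over_epochs(X_l, X_lp, k):
    """X_l, X_lp: arrays of shape (R, T) with the epochs of channels l and l'.
    Returns the estimate of kappa_{l,l'; omega_k} obtained by summing the
    cross- and auto-periodograms over the R epochs."""
    T = X_l.shape[1]
    f_l = np.fft.fft(X_l, axis=1)[:, k] / np.sqrt(T)
    f_lp = np.fft.fft(X_lp, axis=1)[:, k] / np.sqrt(T)
    cross = np.abs(np.sum(f_l * np.conj(f_lp))) ** 2
    auto = np.sum(np.abs(f_l) ** 2) * np.sum(np.abs(f_lp) ** 2)
    return cross / auto
\end{verbatim}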
In practice, we are interested in estimating the dependence over a
{\it band of frequencies} rather than single-valued frequencies. Thus,
the second case is justified: we assume that we are interested in the dependence on a fixed epoch $r=1,...,R$; we compute the latter over frequencies (i.e. over the frequencies of a given band in the set $\mathcal{K}:=\{ \Delta:[0,4)\text{Hertz},$ $ \theta:[4,8)\text{ Hertz},$ $ \alpha:[8,12)\text{ Hertz},$ $ \beta:[12,30)\text{ Hertz},$ $ \gamma \geq 30\text{ Hertz}
\}$). Hence, the estimator is
\begin{eqnarray*}
\widetilde{\kappa}_{\ell,\ell'; \mathcal{K}_l}^{(r)}&=& \frac{ \left| \sum_{\omega_k \in \mathcal{K}_l}f_{\ell,\omega_k}^{(r)}f_{\ell',\omega_k}^{\star (r)} \right|^2}
{\sum_{\omega_k \in \mathcal{K}_l}\left( f_{\ell,\omega_k}^{(r)}f_{\ell,\omega_k}^{\star (r)} \right) \sum_{\omega_k \in \mathcal{K}_l}
\left( f_{\ell',\omega_k}^{(r)}f_{\ell',\omega_k}^{\star (r)} \right) }
\end{eqnarray*}
where $\mathcal{K}_l$, $l=1,...,5$ represents a particular frequency band. We remark that, indeed, the cardinality of $\mathcal{K}_l$ depends on how large $T$ is. Thus, for the rest of this article, we consider the first case. One remarks that these estimators are analogues, for spectral domain analysis, of the sample estimator of Pearson's correlation. Thus, we make the following assumption.
\begin{assu}
\label{assuCoherence}
(Convergence in probability of the coefficient of dependence): The asymptotic behavior of $\widehat{\kappa}_{\ell,\ell'; \omega_k}$ (or of $\widetilde{\kappa}_{\ell,\ell'; \mathcal{K}_l}^{(r)}$) relative to $\kappa_{\ell,\ell'; \omega_k}$ is analogous to that of the sample Pearson's correlation relative to the true Pearson's correlation. Hence, if the second joint moment of the variables on which the correlation is measured is finite (e.g., $\mathbb{E}\left[ (\delta_{\ell,\omega_k})^2(\delta_{\ell',\omega_k})^2\right] < \infty$), then $\widehat{\kappa}_{\ell,\ell'; \omega_k} \xrightarrow{P} \kappa_{\ell,\ell'; \omega_k}$ as $R \to \infty$.
\end{assu}
\begin{prop}
(Convergence in probability of the copula model): Let $\xi_{\ell,\omega_k}$ and $\xi_{\ell',\omega_k}$ be two continuous cumulative distribution functions, $\theta$ (properly denoted $\theta_{\ell,\ell';\omega_k}$, which we simplify for readability) be a monotone function of $\kappa_{\ell,\ell'; \omega_k}$, $\widehat{\theta}$ be the same monotone function applied to $\widehat{\kappa}_{\ell,\ell'; \omega_k}$; and $u,v$ be two observations lying on the unit interval. Let $C_\theta$ be the parametric copula model of $C_{\ell,\ell';\omega_k}$. Thus,
$C_{\widehat{\theta}} (\hat{\xi}_{\ell,\omega_k}(u),\hat{\xi}_{\ell',\omega_k}(v)) \xrightarrow{P} C_\theta (\xi_{\ell,\omega_k}(u),\xi_{\ell',\omega_k}(v))$ when $R \to \infty$.
\end{prop}
\noindent \textit{Justification:}
Using the Glivenko-Cantelli Lemma, it is obvious that $C_\theta (\widehat{\xi}_{\ell,\omega_k}(u),\widehat{\xi}_{\ell',\omega_k}(v))$ converges almost surely to $C_\theta (\xi_{\ell,\omega_k}(u),\xi_{\ell',\omega_k}(v))$. Thus, it implies that $C_\theta (\widehat{\xi}_{\ell,\omega_k}(u),\widehat{\xi}_{\ell',\omega_k}(v))$ converges in probability to $C_\theta (\xi_{\ell,\omega_k}(u),\xi_{\ell',\omega_k}(v))$. Now, we have to show that $C_{\widehat{\theta}} (\widehat{\xi}_{\ell,\omega_k}(u),\widehat{\xi}_{\ell',\omega_k}(v))$ converges in probability to $C_\theta (\widehat{\xi}_{\ell,\omega_k}(u),\widehat{\xi}_{\ell',\omega_k}(v))$. In other words, we must show that $\lim_{R \to \infty} \mathbb{P}\left( \left| \widehat{\kappa}_{\ell,\ell'; \omega_k}-\kappa_{\ell,\ell'; \omega_k} \right| > \epsilon \right)=0$ for all $\epsilon >0$, which is the same as showing that $\lim_{R \to \infty} \mathbb{P}\left( \left| \widehat{\theta}-\theta \right| > \epsilon \right)=0$ for all $\epsilon >0$, due to the fact that $\theta$ is a one-to-one function of $\kappa_{\ell,\ell'; \omega_k}$. This follows directly from Assumption \ref{assuCoherence}.
\subsubsection{Kendall's rank-based coherence}
Since coherence measures only linear associations between a pair of signals,
it is important to look into other approaches that could express
dependence via non-linear measures that can be used in inference of
association parameters for non-elliptical copulas. For this reason, we
consider rank-based dependence measures; especially because their direct relations with Archimedean copulas are well studied in the literature. We propose here a rank-based coherence measure which is the direct
analogue of Kendall's tau applied to the spectral domain. To the best
of our knowledge, such a nonparametric measure of rank correlation has never been studied nor proposed before. We note that the same approach for a rank correlation in the sense of Spearman's rho is possible. However, for certain copula families, Kendall's approach leads to a closed analytic form of the copula dependence parameter while Spearman's approach
does not.
Let the rank-based coherence be computed over the $R$ epochs, between channels $\ell$ and $\ell'$; $\ell,\ell'=1,...,d$. Hence, one estimates $\mathbb{K}_{\ell,\ell';\omega_k}$ by $\widehat{\mathbb{K}}_{\ell,\ell';\omega_k}=\frac{\mathcal{C}_{\ell,\ell';\omega_k} - \mathcal{D}_{\ell,\ell';\omega_k}}{R(R-1)/2}$ where
{\small
\begin{eqnarray*}
\mathcal{C}_{\ell,\ell';\omega_k}&=&\sum_{r=1}^{R} \sum_{s=1}^R \sum_{t=r}^{R} \sum_{w=s+1}^{R}
\left[\mathbb{I}\left( \delta_{\ell,\omega_k}^{(r)} < \delta_{\ell',\omega_k}^{(s)}, \delta_{\ell,\omega_k}^{(t)} < \delta_{\ell',\omega_k}^{(w)}\right)
+\mathbb{I}\left( \delta_{\ell,\omega_k}^{(r)} > \delta_{\ell',\omega_k}^{(s)}, \delta_{\ell,\omega_k}^{(t)} > \delta_{\ell',\omega_k}^{(w)}\right) \right] \\
& + &
\sum_{r=1}^{R} \sum_{s=1}^R \sum_{t=r+1}^{R} \sum_{w=1}^{s}
\left[\mathbb{I}\left( \delta_{\ell,\omega_k}^{(r)} < \delta_{\ell',\omega_k}^{(s)}, \delta_{\ell,\omega_k}^{(t)} < \delta_{\ell',\omega_k}^{(w)}\right)
+\mathbb{I}\left( \delta_{\ell,\omega_k}^{(r)} > \delta_{\ell',\omega_k}^{(s)}, \delta_{\ell,\omega_k}^{(t)} > \delta_{\ell',\omega_k}^{(w)}\right) \right],
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{D}_{\ell,\ell';\omega_k} &=& \sum_{r=1}^{R} \sum_{s=1}^R \sum_{t=r}^{R} \sum_{w=s+1}^{R}
\left[\mathbb{I}\left( \delta_{\ell,\omega_k}^{(r)} < \delta_{\ell'\omega_k}^{(s)}, \delta_{\ell,\omega_k}^{(t)} > \delta_{\ell',\omega_k}^{(w)}\right)
+\mathbb{I}\left( \delta_{\ell,\omega_k}^{(r)} > \delta_{\ell',\omega_k}^{(s)}, \delta_{\ell,\omega_k}^{(t)} < \delta_{\ell',\omega_k}^{(w)}\right) \right]\\
&+& \sum_{r=1}^{R} \sum_{s=1}^R \sum_{t=r+1}^{R} \sum_{w=1}^{s}
\left[\mathbb{I}\left( \delta_{\ell,\omega_k}^{(r)} < \delta_{\ell'\omega_k}^{(s)}, \delta_{\ell,\omega_k}^{(t)} > \delta_{\ell',\omega_k}^{(w)}\right)
+\mathbb{I}\left( \delta_{\ell,\omega_k}^{(r)} > \delta_{\ell',\omega_k}^{(s)}, \delta_{\ell,\omega_k}^{(t)} < \delta_{\ell',\omega_k}^{(w)}\right) \right]
\end{eqnarray*}}
are the concordances and discordances, respectively. To infer for a single epoch, conditioning on frequencies (or on a specific band), the estimation is, analogously to the coherence measure, a summation over the frequencies of a given band $\mathcal{K}_l$ instead of over the epochs. It is interesting to note that, following similar arguments to those for the coherence measure, the asymptotic convergence of $\hat{\mathbb{K}}_{\ell,\ell';\omega_k}$ may be shown based on the original work of \citet{kendall1948}.
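As a simplified computational sketch (our own; it uses the standard pairwise Kendall concordance over epochs, which conveys the spirit of the definition above rather than reproducing the quadruple sums verbatim):
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau

def rank_based_coherence(X_l, X_lp, k):
    """Kendall-type rank coherence between channels l and l' at frequency
    index k, computed across the R epochs (rows of X_l and X_lp)."""
    T = X_l.shape[1]
    delta_l = np.abs(np.fft.fft(X_l, axis=1)[:, k]) / np.sqrt(T)
    delta_lp = np.abs(np.fft.fft(X_lp, axis=1)[:, k]) / np.sqrt(T)
    tau, _ = kendalltau(delta_l, delta_lp)
    return tau
\end{verbatim}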
\subsection{Model-examples}
\subsubsection{Elliptical family}
We extend our model to the field of elliptical copulas (see \citet{nelsen2007}), by applying to this well-known family the spectral-based elements discussed above. One of the main advantages in the use of elliptical copulas is related to the dependence parameter: for this family, the dependence is expressed through Pearson's correlation matrix. With the analogy between coherence and correlation, moving to the spectral domain, we can directly use the coherence measure, since the Fourier transforms are expressed in terms of periodograms, similarly to coherency being expressed in terms of coherence.
One observation here is that, due to the range of the coherence measure, $[0,1]$, we can model only positive dependence when
elliptical copulas are used within our proposed framework.
For instance, let $\Phi$ be the standard Gaussian cdf and $\Phi^{-1}$ its inverse. Let $u=\widehat{\xi}_{\ell,\omega_k}(x_\ell)$ and $v=\widehat{\xi}_{\ell',\omega_k}(x_{\ell'})$. Then, the bivariate semi-parametric estimator of the coherence-based Gaussian copula computed at a frequency $\omega_k$ is expressed by
\begin{eqnarray*}
\widehat{C}_{\widehat{\kappa}_{\ell,\ell'}}^{gaussian}(\hat{\xi}_{\ell,\omega_k}(x_\ell),\hat{\xi}_{\ell',\omega_k}(x_{\ell'})) &=& \int_{-\infty}^{\Phi^{-1}(\hat{\xi}_{\ell,\omega_k}(x_\ell))} \int_{-\infty}^{\Phi^{-1}(\widehat{\xi}_{\ell',\omega_k}(x_{\ell'}))} \frac{1}{2\pi (1-\widehat{\kappa}_{\ell,\ell';\omega_k}^2)^{1/2}} \times \\ & & \hspace{3cm}\exp \left\{ -\frac{s^2-2 \widehat{\kappa}_{\ell,\ell';\omega_k} st +t^2}{2(1-\widehat{\kappa}_{\ell,\ell';\omega_k}^2)} \right\}ds\text{ }dt \\
&=& \Phi_{\widehat{\kappa}_{\ell,\ell';\omega_k}}\left( \Phi^{-1}(\widehat{\xi}_{\ell,\omega_k}(x_\ell)),\Phi^{-1}(\widehat{\xi}_{\ell',\omega_k}(x_{\ell'})) \right)
\end{eqnarray*}
where one estimates a single copula using data from all epochs.
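A sketch of how this estimator can be evaluated numerically (our own illustration; \texttt{kappa\_hat} is the pooled coherence estimate and \texttt{u}, \texttt{v} are the ecdf-transformed magnitudes, assumed to lie strictly inside $(0,1)$):
\begin{verbatim}
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_from_coherence(u, v, kappa_hat):
    """Evaluate the coherence-based Gaussian copula C(u, v), with the
    estimated coherence used as the correlation parameter."""
    z = np.array([norm.ppf(u), norm.ppf(v)])
    cov = np.array([[1.0, kappa_hat], [kappa_hat, 1.0]])
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(z)
\end{verbatim}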
\subsubsection{Archimedean family}
With respect to the inference approach for elliptical copulas, the main difference for Archimedean copulas concerns the dependence parameter: instead of working with a direct expression of the coherence as the parameter, we consider functionals of the rank-based coherence (see \citet{genest1993}). In fact, we consider the same functionals as for the case of inference using Kendall's tau. Hence, taking the example of a Clayton copula, one obtains
\begin{eqnarray*}
\widehat{C}_{\widehat{\theta}_{\ell,\ell'}}^{Clayton}(\hat{\xi}_{\ell,\omega_k}(x_\ell),\hat{\xi}_{\ell',\omega_k}(x_{\ell'})) &=& \left[(\hat{\xi}_{\ell,\omega_k}(x_\ell))^{-\widehat{\theta}}+(\hat{\xi}_{\ell',\omega_k}(x_{\ell'}))^{-\widehat{\theta}} - 1 \right]^{-1/\widehat{\theta}}
\end{eqnarray*}
where the function linking $\theta$ to $\mathbb{K}_{\ell,\ell'; \omega_k}$ is $\mathbb{K}=\frac{\theta}{\theta+2}$, that is, $\theta=\frac{2\mathbb{K}}{1-\mathbb{K}}$. Again, it is important to reiterate that we estimate the copula using data from all epochs.
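Correspondingly, a sketch of the Clayton evaluation (our own illustration; \texttt{K\_hat} is the rank-based coherence estimate, assumed to lie in $(0,1)$, and the relation $\theta = 2\mathbb{K}/(1-\mathbb{K})$ is the standard Kendall inversion for the Clayton family):
\begin{verbatim}
def clayton_copula_from_rank_coherence(u, v, K_hat):
    """Evaluate the Clayton copula with parameter theta obtained by
    inverting K = theta / (theta + 2)."""
    theta = 2.0 * K_hat / (1.0 - K_hat)
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)
\end{verbatim}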
\section{Simulations}
\label{simuls}
These simulations are divided into two parts. In the first part, we show that working on a specific frequency instead of on the original data themselves may lead to the detection of strong dependencies that are hidden in the spectral domain. It may also lead to a more robust multivariate probability model, since the dependence captured in this way is less sensitive to noise and less likely to be due to random chance. In the second part, we show that our approach captures non-linear dependencies that standard linearity-based methods do not.
\subsection{Simulation 1: Dependencies in the spectral domain instead of in the time domain}\label{SimuSec1}
We illustrate a single feature: our methodology captures strong dependence hidden at specific frequencies. We do so by comparing the rank-based coherence, measured over epochs at a single frequency, to Kendall's tau measured on the original data.
\noindent
Let $Z_t^{(r)}$ be a latent signal following a second-order autoregressive
(AR(2)) process $Z_t^{(r)} = 1.989 Z_{t-1}^{(r)} - 0.990 Z_{t-2}^{(r)} + W_t^{(r)}$,
where $W_t^{(r)}$ is a white noise sequence. Let the sampling rate be $1500 \text{ Hertz}$. The roots of this AR(2) polynomial are complex-valued with magnitude
$1.005$ and phase $2 \pi \frac{12}{1500}$, so that the spectrum
of this latent process has power concentrated around the corresponding frequency of $12$ Hertz.
The observed time series $X_t^{(r)}$ and $Y_t^{(r)}$ are defined by
$X_t^{(r)} = 0.90 Z_{t-1}^{(r)} + \epsilon_{X,t}^{(r)}; Y_t^{(r)} = 0.85 Z_t^{(r)} +
\epsilon_{Y,t}^{(r)}$
where $\epsilon_{X,t}^{(r)}$ and $\epsilon_{Y,t}^{(r)}$ are independent
of each other and each is a white noise with identical variance
$\sigma_{\epsilon}^2$. There were a total of $2000$ replicated
datasets. Each dataset consisted of $R=1000$ epochs
and the total number of observations for each epoch was $T=1500$.
For each of these $2000$ datasets, we computed a rank-based measure (Kendall's tau) on all data, and we computed our proposed rank-based measure (rank-based coherence) on a single frequency. For this example, rank-based coherence was
calculated only for frequency $2 \pi \frac{12}{1500}$ which is the location of the peak of the spectrum of $Z_t^{(r)}$.
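A minimal sketch of this data-generating and measuring process is given below (our own illustrative code; the number of epochs is reduced and the noise variance is set to one for simplicity).
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
T, R = 1500, 200        # epoch length and number of epochs (reduced from R = 1000)
freq_bin = 12           # with a 1500 Hz sampling rate, FFT bin 12 corresponds to 12 Hz

def ar2_epoch(T, burn=500):
    """Latent AR(2) process Z_t = 1.989 Z_{t-1} - 0.990 Z_{t-2} + W_t."""
    z = np.zeros(T + burn)
    w = rng.standard_normal(T + burn)
    for t in range(2, T + burn):
        z[t] = 1.989 * z[t - 1] - 0.990 * z[t - 2] + w[t]
    return z[burn:]

amp_x, amp_y, x_all, y_all = [], [], [], []
for r in range(R):
    z = ar2_epoch(T)
    x = 0.90 * np.roll(z, 1) + rng.standard_normal(T)   # X_t = 0.90 Z_{t-1} + noise
    y = 0.85 * z + rng.standard_normal(T)                # Y_t = 0.85 Z_t   + noise
    amp_x.append(np.abs(np.fft.rfft(x)[freq_bin]))       # Fourier amplitude at 12 Hz
    amp_y.append(np.abs(np.fft.rfft(y)[freq_bin]))       # one value per epoch
    x_all.append(x)
    y_all.append(y)

tau_time, _ = kendalltau(np.concatenate(x_all), np.concatenate(y_all))
rank_coherence, _ = kendalltau(amp_x, amp_y)
print("Kendall's tau on the original data:", tau_time)
print("Kendall rank-based coherence at 12 Hz:", rank_coherence)
\end{verbatim}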
\begin{figure}
\caption{\textbf{Left:}}
\label{Sim}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|l||*{5}{c|}}
\hline
Dependence & mean & median & variance & minimum & maximum \\
measure & & & & & \\
\hline \hline
Kendall's tau & $0.2942$ & $0.2943$ & $7.47 \times 10^{-6}$ & $0.2876$ & $0.3025$ \\
on original data $(\tau)$ & & & & & \\
\hline
Rank-based & $0.8694$ & $0.8707$ & $2.15 \times 10^{-4}$ & $0.8214$ & $0.8998$ \\
coherence $(\mathbb{K})$& & & & & \\
\hline
\end{tabular}
\caption{Top row: Kendall's tau measure based on the original time series. Bottom row: rank-based coherence based on frequency 12 Hertz. }
\label{tableKendall}
\end{center}
\end{table}
The goal of this first part was to contrast the strength of dependence measured on the original data by Kendall's tau, where epochs are treated simply as segments of the dataset under the stationarity assumption, with that measured by the Kendall rank-based coherence on the amplitudes of the Fourier transforms at 12 Hz across epochs. Results are shown in Table \ref{tableKendall}. The two distributions are clearly separated: the rank-based coherence at 12 Hz is far larger than Kendall's tau on the original data, so the dependence captured in the spectral domain is much stronger.
\subsection{Simulation 2: Assessment of non-linear dependencies}
For this part, we generated two types of latent signals from AR(2) processes whose spectra are concentrated at the frequencies of interest. For $r=1,...,500$ and $t=1,...,1000$ we consider the following latent signals: $Z_\alpha^{(r)}(t) \sim AR(2)$ with a characteristic polynomial having complex-valued roots of phase $p_\alpha=\pm 12/1000\times 2 \pi$ (so that the spectrum is concentrated around $12$ Hertz), and $Z_\beta^{(r)}(t) \sim AR(2)$ with roots of phase $p_\beta=\pm 40/1000\times 2 \pi$. From these latent signals, we generated the two following observed signals:
\begin{itemize}
\item $X_1^{(r)}(t)=Z_\alpha^{(r)}(t)+Z_\beta^{(r)}(t)+\epsilon_t^{(r)}$
\item $X_2^{(r)}(t)=\frac{3}{2}Z_\alpha^{(r)}(t)+\eta \left( Z_\beta^{(r)}(t) \right)^4 \sin( Z_\beta^{(r)}(t)) + \epsilon_t^{(r)}$
\end{itemize}
where $\eta = 1/100000$, $r=1,...,500$, $t=1,...,1000$ and $\epsilon_t^{(r)}\sim \mathcal{N}(0,\,0.01\,\mathbb{V}(Z_\beta^{(r)}(t)))$. We replicated this simulation scenario $B=10000$ times. We denote by $\delta_{\ell, \omega_k}=[\delta_{\ell, \omega_k}^{(1)},...,\delta_{\ell, \omega_k}^{(500)}]'$, $\ell=1,2$, the vector of length $500$ of the Fourier coefficients at a fundamental frequency $\omega_k$, computed on the observed signal $X_\ell^{(r)}(t)$. Figure \ref{Simuls} displays the dependence between the two observed signals for the Fourier coefficients filtered at $12$ and $40$ Hertz. Clearly, a linearity-based dependence measure is adequate for the filtering at $12$ Hertz. However, for the filtering at $40$ Hertz, any linearity-based dependence measure will fail to capture this non-linear dependence, while a well-specified copula will. We selected the copula model for these simulations using AIC \cite{akaike1987}. Indeed, \citet{jordanger2014} have shown that AIC, a penalized-likelihood criterion based on the number of model parameters, differs very little from other copula-based information criteria when copulas are estimated on large samples.
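The sketch below (illustrative code of ours, with reduced sizes and unit-variance driving noise; the copula fitting and AIC selection step is omitted) reproduces this data-generating mechanism and compares, at $40$ Hertz, a linear measure (Pearson correlation) with the Kendall rank-based coherence on the Fourier amplitudes.
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau, pearsonr

rng = np.random.default_rng(1)
T, R, eta = 1000, 500, 1e-5

def ar2(phase, T, root_mag=1.005, burn=500):
    """AR(2) process whose backshift-polynomial roots have modulus root_mag and
    phase `phase`, so that the spectrum peaks near the corresponding frequency."""
    a1, a2 = 2 * np.cos(phase) / root_mag, -1.0 / root_mag**2
    z = np.zeros(T + burn)
    w = rng.standard_normal(T + burn)
    for t in range(2, T + burn):
        z[t] = a1 * z[t - 1] + a2 * z[t - 2] + w[t]
    return z[burn:]

d1_40, d2_40 = [], []                    # Fourier amplitudes at 40 Hz, one per epoch
for r in range(R):
    za = ar2(2 * np.pi * 12 / T, T)      # latent signal concentrated around 12 Hz
    zb = ar2(2 * np.pi * 40 / T, T)      # latent signal concentrated around 40 Hz
    x1 = za + zb + 0.1 * rng.standard_normal(T)
    x2 = 1.5 * za + eta * zb**4 * np.sin(zb) + 0.1 * rng.standard_normal(T)
    d1_40.append(np.abs(np.fft.rfft(x1)[40]))
    d2_40.append(np.abs(np.fft.rfft(x2)[40]))

print("Pearson correlation at 40 Hz:       ", pearsonr(d1_40, d2_40)[0])
print("Kendall rank-based coherence, 40 Hz:", kendalltau(d1_40, d2_40)[0])
\end{verbatim}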
Table \ref{tableSimuls} reports the results of these simulations: the mean of $\mathbb{K}_{1,2;\omega_k}^b$ (the Kendall-based coherence measure for simulation $b=1,...,10000$ at frequency $\omega_k$ between variables $X_1$ and $X_2$), the most frequently selected copula model over all simulations, and its associated dependence parameter. Table \ref{tableFreqSim} shows how the selected copula models are distributed over the $10000$ simulations.
\begin{figure}
\caption{\textbf{Left:}}
\label{Simuls}
\end{figure}
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{|l||*{2}{c|}}
\hline
Fundamental & $\bar{\mathbb{K}}_{1,2; \omega_k}^b$ & Selected copula model \\
frequency & & $\bar{\hat{\theta}}$ \\
\hline \hline
$12$ Hertz & $0.8397$ & Frank \\
& & $23.45$ \\
\hline
$40$ Hertz & $0.4172$ & Gumbel \\
& & $1.65$ \\
\hline
\end{tabular}
\caption{Averaged Kendall-based coherence measure, most frequently selected copula model, and average dependence parameter of that model, over the 10 000 simulations. }
\label{tableSimuls}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{|l||*{7}{c|}}
\hline
Fundamental & Independent & Gaussian & Student & Clayton & Gumbel & Frank & Joe \\
frequency & & & & & & & \\
\hline \hline
$12$ Hertz & 0& 0 & 0& 0& 27 & 9973 & 0 \\
\hline
$40$ Hertz & 0 & 63 & 0 & 0 & 9628 & 0 & 309 \\
\hline
\end{tabular}
\caption{Number of times each copula model was selected for the dependence between $\delta_{1,\omega_k}$ and $\delta_{2,\omega_k}$, for each frequency, over the 10 000 simulations. }
\label{tableFreqSim}
\end{center}
\end{table}
\section{Application: Local field potential of rats} \label{lfp}
We apply our copula-based approach to model dependence in local field potentials (LFP) of rats, based on the experiments of Frostig and co-workers (see \citet{wann2017}). Microelectrodes were inserted at $32$ locations in the rat cortex (4 layers, respectively at $300 \mu m,$ $700\mu m,$ $1100 \mu m$ and $1500\mu m$; 8 microelectrodes lined up in each layer). From these microelectrodes, $T=1000$ time points were recorded per second. As we assume stationary behavior within each second, we treat each second as a distinct epoch $r$. A total of $R=600$ epochs were recorded. Midway through this period (at epoch $r=300$), a stroke was mechanically induced in each rat.
Since the scope of this paper is not to assess differences between two copulas, but rather to model these copulas adequately with respect to the idiosyncrasies of the data, we limit ourselves to illustrating the copula-based modeling of the LFP data. We thus present three different situations, each of which might lead to further analysis. We remind the reader that copula model selection is not the topic of this paper; we therefore base our selection on a well-accepted criterion in the copula literature, AIC (\citet{akaike1987}), among the following copula models: Independent, Gaussian, Student, Clayton, Gumbel, Frank and Joe.
\subsection{Modeling the copula between two microelectrodes for a given frequency}
We are interested here in modeling the dependence between two different microelectrodes, at a frequency of $12$ Hertz, over the whole course of the experiment (the LFP recordings are considered for the entire $600$ seconds). Because the nature of the dependence between electrodes is intrinsic to each rat, we apply our method to a single rat: rat id $141020$.
\subsubsection{First case: Highly-dependent microelectrodes}
We consider the case where the dependence between two microelectrodes (channels) is high from the Kendall rank-based coherence perspective (in the alpha frequency band). We considered the dependence between microelectrodes $1$ and $2$ (the two microelectrodes in columns $1$ and $2$ of the first layer), for which $\mathbb{K}_{1,2; 12Hz}=0.753$. Figure \ref{lfp1_2} shows (left) the dependence between the empirical cdfs of the two microelectrodes. Based on the Akaike information criterion \cite{akaike1987}, this dependence is well represented by a Gumbel copula (right) with parameter $4.12$. We also include a plot of the empirical copula (middle; see \citet{deheuvels1979}) to show that, visually, the empirical copula is close to the one chosen by AIC. In Figure \ref{lfp1_2_prepost}, this dependence is displayed separately for the pre-stroke period (epochs $r=1,...,300$) and the post-stroke period (epochs $r=301,...,600$).
\begin{figure}
\caption{\textbf{Left:}}
\label{lfp1_2}
\end{figure}
\begin{figure}
\caption{\textbf{Left:}}
\label{lfp1_2_prepost}
\end{figure}
\begin{figure}
\caption{\textbf{Left:}}
\label{lfp1_9_prepost}
\end{figure}
\subsubsection{Second case: Independent microelectrodes}\label{ind}
Still working with microelectrode $1$, we model its dependence with its other neighbor in the same column: microelectrode $9$. We computed the Kendall rank-based coherence and obtained $\mathbb{K}_{1,9; 12Hz}=0.041$, which is small enough to suggest no strong evidence of dependence between these microelectrodes. We therefore applied the Kendall-based independence test for bivariate samples (for more about this test, see \citet{genest2007}), whose null hypothesis is independence between the microelectrodes and whose test statistic is
$$\text{\textit{Test statistic}}:= \mathbb{K}_{1,9;\,12Hz}\sqrt{\frac{9R(R-1)}{2(2R+5)}}$$
where $R=600$ is the number of epochs. The \textit{p-value} for this test is $0.13$, which indicates insufficient evidence against independence. Thus, the independence copula (simply the product of the margins) is the adequate choice here. In Figure \ref{lfp1_9}, this independence can be observed both in the scatterplot (left) and in the related copula (right), which is nothing more than the product of the margins. In this case too, the dependence for the pre-stroke period and for the post-stroke period is shown in Figure \ref{lfp1_9_prepost}.
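For completeness, the following one-function sketch (our own code) computes the above test statistic and its asymptotic two-sided \textit{p-value}; with $\mathbb{K}_{1,9;12Hz}=0.041$ and $R=600$ it reproduces the reported \textit{p-value} of about $0.13$.
\begin{verbatim}
from math import sqrt
from scipy.stats import norm

def kendall_independence_test(K, R):
    """Asymptotic independence test based on Kendall's tau / rank-based coherence:
    under H0, K * sqrt(9R(R-1) / (2(2R+5))) is approximately standard normal."""
    stat = K * sqrt(9 * R * (R - 1) / (2 * (2 * R + 5)))
    pvalue = 2 * (1 - norm.cdf(abs(stat)))
    return stat, pvalue

print(kendall_independence_test(0.041, 600))   # p-value close to the reported 0.13
\end{verbatim}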
\begin{figure}
\caption{\textbf{Left:}}
\label{lfp1_9}
\end{figure}
\subsection{Modeling the copula between an epoch during pre-stroke and an epoch during post-stroke}
The principle here is to model the dependence structure, for a given Fourier fundamental frequency and a given microelectrode, between the pre-stroke period and the post-stroke period. Since the rat's brain activity is perturbed by the induced stroke, one observes quasi-independence for many microelectrodes and many frequencies. We therefore present a case of very small dependence, but one for which the independence test (described in Section \ref{ind}), applied between $\delta_{9,2Hz}^{(1:300)}$ and $\delta_{9,2Hz}^{(301:600)}$, gives a \textit{p-value}$<0.05$. We introduce epoch superscripts $(s:t)$, $s<t$, writing $\delta_{\ell,\omega_k}^{(s:t)}$ for the vector of Fourier coefficients of microelectrode $\ell$ computed on epochs $s$ to $t$, and $\mathbb{K}_{\ell,\ell';\omega_k}^{(s:t),(s':t')}$ for the corresponding rank-based coherence between epochs $s$ to $t$ and epochs $s'$ to $t'$. We considered microelectrode $9$ for the simple reason that it was difficult to find a microelectrode and a frequency for which the independence test does not indicate independence between the pre-stroke and post-stroke epochs. For a frequency of $2$ Hertz, the Kendall rank-based coherence between $\delta_{9,2Hz}^{(1:300)}$ and $\delta_{9,2Hz}^{(301:600)}$ is $\mathbb{K}_{9,9; 2Hz}^{(1:300),(301:600)}=0.116$. Using AIC, we selected a Gumbel copula with parameter $1.113$. Figure \ref{lfp9} shows this weak dependence (left), where one observes slight upper tail dependence; on the right, a perspective plot of the fitted copula is shown.
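The upper tail dependence just mentioned can be quantified through the standard formula $\lambda_U = 2-2^{1/\theta}$ for the upper tail-dependence coefficient of a Gumbel copula (a known fact, not stated above); for the fitted parameter this gives a small but non-zero value, as the following one-line check shows.
\begin{verbatim}
theta = 1.113                      # fitted Gumbel parameter for microelectrode 9 at 2 Hz
lambda_U = 2 - 2 ** (1 / theta)    # upper tail-dependence coefficient of a Gumbel copula
print(lambda_U)                    # approximately 0.14: weak but non-zero upper-tail dependence
\end{verbatim}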
\begin{figure}
\caption{\textbf{Left:}}
\label{lfp9}
\end{figure}
\section{Discussion}
\label{conclusion}
This paper proposed a new approach to express dependence between two time series at a given frequency using copulas in the spectral domain. We provided the necessary methodological framework and proposed a rank-based coherence, strongly inspired by Kendall's tau, in order to adequately infer a semi-parametric copula function (a parametric copula model with non-parametric margins) for data represented in the spectral domain. Our simulations show that
the copula fitted to two raw time series (i.e., time-domain
data) and the one fitted to the moduli of the Fourier transforms
at specific frequencies (i.e., spectral-domain data) can be two entirely different objects. Crucially, even when the dependence between two time series appears to be weak, we can build dedicated probability functions (i.e., copulas) at specific frequencies of their spectrum, and thereby model much stronger dependence. Finally, we illustrated the applicability of our methodology on local field potentials of rats.
This work opens the way to modeling, for example, dependence between channels (represented by microelectrodes in this paper) of local field potentials, which may lead to a better understanding of brain connectivity. Non-linear dependence is not a rare phenomenon in neuroimaging data represented in the spectral domain, yet very few papers have dealt with this specific problem; our hope is that this paper makes a contribution on this front.
\end{document} |
\begin{document}
\title{Infinitely many cyclic solutions to the Hamilton-Waterloo problem with odd length cycles
\thanks{Work performed under the auspices of the G.N.S.A.G.A. of the C.N.R. (National Research Council) of Italy and supported by the M.I.U.R. project ``Disegni combinatorici, grafi e loro applicazioni, PRIN 2008''. The second author is supported by a fellowship of INdAM.}}
\author{Francesca Merola\thanks{Dipartimento di Matematica e Fisica,
Universit\`a Roma Tre, Largo Murialdo 1, 00146 Rome, Italy,
email: [email protected]} \quad
Tommaso Traetta \thanks{Dipartimento di Matematica e Informatica,
Universit\`a di Perugia, via Vanvitelli 1, 06123 Perugia, Italy,
email: [email protected], [email protected]}}
\date{}
\maketitle
\begin{abstract}
\noindent
It is conjectured that for every pair $(\ell,m)$ of odd integers greater than 2 with
$m \equiv 1\; \pmod{\ell}$, there exists a cyclic two-factorization of $K_{\ell m}$
having exactly $(m-1)/2$ factors of type $\ell^m$ and all the others of type $m^{\ell}$.
The authors prove the conjecture in the affirmative when
$\ell \equiv 1\; \pmod{4}$ and $m \geq \ell^2 -\ell + 1$.
\end{abstract}
\noindent
{\bf Keywords:} two-factorization; Hamilton-Waterloo problem; Skolem
sequence; group action.
\section{Introduction}
A {\it $2$-factorization} of order $v$ is a set $\mathcal{F}$ of spanning $2$--regular subgraphs of $K_v$ (the {\it complete graph} of order $v$)
whose edges partition the edge set of $K_v$ or $K_v - I$
(the complete graph minus a $1$-factor $I$)
according to whether $v$ is odd or even.
We refer the reader to \cite{We96} for the standard terminology and notation of elementary graph theory.
Note that every spanning $2$-regular subgraph $F$ of $K_v$ determines a partition
$\pi=[\ell_1^{n_1}, \ell_2^{n_2},\ldots, \ell_t^{n_t}]$ of the integer $v$ where
$\ell_1, \ell_2,\ldots,\ell_t$ are the distinct lengths of the cycles of $F$ and $n_i$ is the number of cycles in $F$ of length $\ell_i$ (briefly, $\ell_i$-cycles).
We will refer to $F$ as a {\it $2$--factor of $K_v$} of {\it type $\pi$}. Of course, $2$--factors of the same type are pairwise isomorphic, and vice versa.
In this paper we deal with the well-known Hamilton-Waterloo problem (in short, HWP) which can be formulated as follows:
given two non-isomorphic $2$-factors $F, F'$ of $K_v$ and two positive integers $r, r'$ summing to $\lfloor (v-1)/2\rfloor$, the HWP asks for a $2$-factorization of order $v$ consisting of $r$ copies of $F$ and $r'$ copies of $F'$.
Denoting by $\pi$ and $\pi'$ the types of $F$ and $F'$, respectively,
this problem will be denoted by $HWP(v;\pi, \pi'; r, r')$.
The case where $F$ and $F'$ are $2$-factors of the same type $\pi$ is known as the Oberwolfach problem, $OP(v; \pi)$.
It was formulated much earlier, by Ringel in 1967 for $v$ odd, while the case of $v$ even was considered later in \cite{HuKoRo79}.
Apart from OP$(6;[3^2])$, OP$(9;[4,5])$, OP$(11;[3^2,5])$ and OP$(12;[3^4])$, none of which has a solution, the problem is conjectured to be always solvable: evidence supporting this conjecture can be found in \cite{BrRo07}. We mention only some of the most important results on the Oberwolfach problem achieved recently: OP($v;\pi$) is solvable for an infinite set of primes $v \equiv 1 \pmod{96}$ \cite{BrSch09}, when $\pi$ has exactly two terms \cite{Tr13},
and when every term of $\pi$ is even \cite{BrDa11, Ha85}.
However, although the literature is rich in solutions of many infinite classes of OP($v;\pi$), these only solve a small fraction of the problem which still remains open.
As one would expect, there is much less literature on the Hamilton-Waterloo problem.
Apart from some non-solvable instances of small order \cite{BrRo07}, the Hamilton-Waterloo problem HWP($v;\pi,\pi';r,r'$) is known to have a solution when $v$ is odd and $\leq 17$ \cite{AdBr06, FrHoRo04, FrRo00}, and when $v$ is even and $\leq 10$ \cite{AdBr06, An07}.
When all terms of $\pi$ and $\pi'$ are even, with $r,r' >1$, a complete solution has been given in \cite{BrDa11}. Surprisingly, very little is known when $\pi$ or $\pi'$ contain odd terms, even when all terms of $\pi$ and all terms of $\pi'$ coincide.
For example, HWP($v;[4^{v/4}],[\ell^{v/\ell}];r,r'$) has been dealt with in \cite{KeOz13, OdOz16} and completely solved only
when $\ell=3$ in \cite{BoBu, DaQuSt09, WangChenCao}, while HWP($v;[3^{v/3}],[v]; \lfloor v/2 \rfloor-1$, 1) is still open (see, \cite{DiLi091, DiLi092, HoNeRo04}). Other results can be found in \cite{AdBiBrEl02, BuRi05}. In this paper we make significant headway with the most challenging case of the Hamilton-Waterloo problem, that is, the one in which the two partitions contain only odd terms.
Further progress on this case has been recently made in \cite{BuDaTr}.
An effective method to determine a $2$-factorization $\mathcal{F}$ solving a given Oberwolfach or Hamilton-Waterloo problem is to require that $\mathcal{F}$ has
a suitable {\it automorphism group} $G$, that is, a group of permutations of the vertices which leaves $\mathcal{F}$ invariant.
One usually requires that $G$ fixes $k$ vertices and has $r\geq 1$ regular orbits on the remaining vertices.
If $k=1$, then $\mathcal{F}$ is said to be {\it $r$-rotational} (under $G$) \cite{BuRi08}; if $r=1$ and $k\geq 1$, then $\mathcal{F}$ is $k$-pyramidal \cite{BoMaRi09} -- note that $1$-pyramidal means $1$-rotational. Finally, $\mathcal{F}$ is {\it sharply transitive} or {\it regular}
(under $G$) if $(k,r)=(0,1)$.
The $1$-rotational approach and the $2$-pyramidal one have proved to be successful \cite{BuRi08, BuTr13, Tr13} in solving infinitely many cases of the Oberwolfach problem.
On the other hand, all solutions of small order given in \cite{DeFrHuMeRo10} turn out to be $r$-rotational for some suitable small $r$.
A $2$-factorization $\mathcal{F}$ of order $v$ is regular under $G$ if we can label the vertices with the elements of $G$ so that for any $2$--factor $F\in \mathcal{F}$ and any $g\in G$, the translate $F+g$ is also in $\mathcal{F}$. One usually speaks of a {\it cyclic}
$2$-factorization when $G$ is the cyclic group.
The few known facts on regular solutions to the Oberwolfach problem only concern
cyclic groups \cite{BuDel04, BuRaZu05, JoMo08},
elementary abelian groups and Frobenius groups
\cite{BuRi05}.
Concerning the Hamilton-Waterloo problem, there are only two known infinite classes of solutions having a cyclic automorphism group; specifically, there exists a cyclic solution to
HWP$(18n+3;[3^{6n+1}],[(6 n+1)^3];3n,6n+1)$ \cite{BuRi05} and
HWP$(50n+5;[5^{10n+1}],[(10n+1)^5];5n,20n+2)$ \cite{BuDa} for any $n\geq1$.
These are instances of the following much more general problem.
\begin{prob}\label{prob} Given $\ell\geq3$ odd and given $n>0$, establish if there exists a cyclic solution to
HWP$(\ell(2\ell n+1);[\ell^{2\ell n+1}],[(2\ell n+1)^\ell];\ell n,
{\frac{(\ell-1)(2\ell n+1)}{2}})$.
\end{prob}
The above mentioned solutions settle the problem for $\ell=3$ and $\ell=5$.
In this paper, we build on the techniques of \cite{BuDa}
and consider the case $\ell=4k+1$; we manage to solve the problem for \textit{all} such $\ell$ and \textit{all} $n\ge (\ell -1)/2$, always providing cyclic solutions.
Our main result is the following.
\begin{thm}\label{mainth}
If $\ell \equiv 1\pmod{4}$, then Problem \ref{prob} admits a solution provided that $n\geq (\ell-1)/2$.
\end{thm}
We believe that techniques similar to those developed in this paper may
be used to settle the case $\ell\equiv 3\;\pmod{4}$. However, in view of the complexity of this article, we postpone
the study of this case to a future work.
We finally point out that regular
solutions under non-cyclic groups can be found in \cite{BuRi05}: for example,
given a positive integer $j$ and two odd primes $\ell, m$ such that $m^j \equiv 1\;\pmod{\ell}$,
there exists a regular solution to HWP$(\ell m^j; [\ell^{(m^j)}], [m^{\ell m^{j-1}}];$
$(\ell-1)m^j/2, (m^j-1)/2)$
under a Frobenius group of order $\ell m^j$.
\section{Some preliminaries}
The techniques used in this paper are based on those in the papers by Buratti and Rinaldi \cite{BuRi05} and Buratti and Danziger \cite{BuDa}; we collect here some preliminary notation and definitions, more details can be found in these references.
In what follows, we shall label the vertices of the complete graph $K_v$ with the elements of $\mathbb{Z}_v$;
if $\Gamma$ is a subgraph of $K_v$, we define the list of differences of $\Gamma$ to be the
multiset $\Delta\Gamma$ of all possible differences $\pm(a-b)$ between a pair
$(a,b)$ of adjacent vertices of $\Gamma$. More generally, the list of differences of a collection
$\{\Gamma_1,\dots,\Gamma_t\}$ of subgraphs of $K_v$ is the multiset union of
the lists of differences of all the $\Gamma_i$s.
A cycle $C$ of length $d$ in $K_v$ is called {\it transversal} if $d|v$ and the vertices of $C$ form a complete system of representatives for the residue classes modulo $d$, namely a transversal for the cosets of the subgroup of $\mathbb{Z}_v$ of index $d$.
A fundamental result we shall use in our construction is the following theorem,
a consequence of a more general result proved in \cite{BuRi05} (see also \cite{BuDa}).
\begin{thm}\label{BR}
There exists a cyclic solution to HWP$(\ell m;[\ell^m],[m^\ell];r,r')$ if and only if the following conditions hold:
\begin{itemize}
\item $(r,r')=(\ell x,my)$ for a suitable pair $(x,y)$ of positive integers such that $2\ell x+2my=\ell m-1$;
\item there exist $x$ transversal $\ell$-cycles $\{A_1,\dots,A_x\}$ of $K_{\ell m}$ and $y$ transversal $m$-cycles
$\{B_1,\dots,B_y\}$ of $K_{\ell m}$ whose lists of differences cover, altogether, $\mathbb{Z}_{\ell m}\setminus\{0\}$ exactly once.
\end{itemize}
\end{thm}
The reason for looking for a cyclic solution to the HWP with the particular parameter set of Problem \ref{prob} stems from the fact that for $\ell$ an odd integer and $m=2\ell n+1$ with $n$ a positive integer, taking $x=n$ and
$y={\frac{\ell-1}{2}}$ will give $2\ell x+2my=\ell m-1$; thus Theorem \ref{BR} ensures that the problem will be solved if we construct $n$ transversal $\ell$-cycles of $K_{\ell(2\ell n+1)}$ and $(\ell -1)/2$ transversal $(2\ell n+1)$-cycles
of $K_{\ell(2\ell n+1)}$ whose lists of differences cover, altogether, $\mathbb{Z}_{\ell(2\ell n+1)}\setminus\{0\}$ exactly once.
For the sake of brevity, throughout the paper we will refer to an $\ell$--cycle or a $(2\ell n +1)$--cycle as a {\it short} cycle or a {\it long} cycle, respectively. Also, using the Chinese remainder theorem, we will identify $\mathbb{Z}_{\ell(2\ell n +1)}$ with $\mathbb{Z}_{2\ell n +1}\times \mathbb{Z}_{\ell}$. We point out to the reader that, after such an identification, a long (short) cycle $C$ of $K_{\ell(2\ell n +1)}$ is transversal if and only if the first (second) components of the vertices of $C$ are all distinct.
By $(c_0, c_1, \ldots, c_{\ell})$ we will denote both the path of length $\ell$ with edges
$(c_0, c_1),$ $(c_1, c_2), \ldots, (c_{\ell-1}, c_{\ell})$
and the cycle that we get from it by joining $c_0$ and $c_{\ell}$. To avoid misunderstandings, we will always specify whether we are dealing with a path or a cycle. Finally, given two integers $a \leq b$, we will denote by $[a,b]$ the {\it interval} containing the integers $a, a+1, a+2, \ldots, b$. Of course, if $a<b$, then $[b,a]$ will be the empty set.
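For concreteness, the following sketch (our own illustrative code, with toy inputs) computes the list of differences of a cycle in $\mathbb{Z}_v$ and checks the transversality criterion just described.
\begin{verbatim}
from collections import Counter

def differences(gamma, v):
    """List of differences of a cycle Gamma with vertices in Z_v: the multiset of
    +-(a - b) over all edges {a, b} of Gamma."""
    diffs = Counter()
    n = len(gamma)
    for i in range(n):
        d = (gamma[(i + 1) % n] - gamma[i]) % v
        diffs[d] += 1
        diffs[(-d) % v] += 1
    return diffs

def is_transversal(cycle, v):
    """A d-cycle in K_v with d | v is transversal iff its vertices form a complete
    system of representatives of the residue classes modulo d."""
    d = len(cycle)
    return v % d == 0 and len({x % d for x in cycle}) == d

print(differences([0, 2, 5], 7))   # the triangle (0, 2, 5) in Z_7 has Delta = {+-2, +-2, +-3}
# After identifying Z_{ell*(2*ell*n+1)} with Z_{2*ell*n+1} x Z_ell, a short cycle is
# transversal iff the second components of its vertices are pairwise distinct.
print(is_transversal([0, 10, 20, 30, 40, 50, 60, 70, 80], 9 * 91))   # True: toy 9-cycle in Z_819
\end{verbatim}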
\section{The general construction}
In this section we shall sketch the steps needed to prove
the main result of this paper, Theorem \ref{mainth}. Let us start with some notation.
From now on $\ell$ will be an odd positive integer with $\ell \equiv 1 \pmod{4}$.
We will usually write $\ell=4k+1$: {we shall assume that $k>1$, since the case $k=1$, that is $\ell=5$, has been covered in \cite{BuDa}}. For $n$ a positive integer,
we denote by $\mathbb{Z}_{2\ell n+1}^*$ and $\mathbb{Z}_\ell ^*$ the sets of nonzero elements of $\mathbb{Z}_{2\ell n+1}$ and $\mathbb{Z}_\ell$, respectively. Also, $\mathbb{Z}_{2\ell n+1}^-$ will denote the set $\mathbb{Z}_{2\ell n+1}^* \setminus \{\pm 1, \pm \ell n\}$.
We point out to the reader that given a subset $S$ of $\mathbb{Z}_m$ $(m>1)$ and a subset $T$ of $\mathbb{Z}$, we write $S=T$ whenever
$T$ is a complete set of representatives for the residue classes $\pmod{m}$ in $S$.
For example, $\mathbb{Z}_{m}=[0,m-1]$ or $\mathbb{Z}_{m}=[-\frac{m-1}{2}, \frac{m-1}{2}]$ when $m$ is odd.
To explain the construction used in what follows we first need to define a specific set $D$ of $2k-2$ positive integers which will play a crucial role in building the cycles we need.
\begin{defn}\label{setD}
For $\ell=4k+1$ and for {an integer $n>0$},
we denote by $D=\{d_{i1}, d_{i2} | i=1,\ldots,k-1\}$ the set of positive integers defined as follows:
\[
d_{i1}=
\begin{cases}
4i-2 &\text{if $n$ is odd},\\
4i &\text{if $n$ is even},\\
\end{cases} \quad\text{and}\quad
d_{i2}= 4i+1.
\]
For the case $n-2k \equiv 2,3 \pmod{4}$, we replace the last pair $(d_{k-1,1}, d_{k-1,2})$ with
\[
d_{k-1,1}=(4k+1)n-2k-1
\quad \text{and}\quad
d_{k-1,2}= (4k+1)n-2k+3.\\
\]
{Also, we set $\overline{D}= [2, \ell n-1]\setminus D$.}
\end{defn}
\begin{ex}
All through this work, we shall use our construction to build explicitly a solution for $\ell=9$ and $n=5$; in this case we have $D=\{2,5\}$
{and $\overline{D}=\{3,4\}\cup[6,44]$}.
\end{ex}
Let us now look at how to prove Theorem \ref{mainth}; we need to
produce a set $\cal B$ of base cycles as required in Theorem \ref{BR}, so we
will build $(\ell -1)/2$ {transversal} long cycles and $n$ {transversal} short cycles whose list of differences will cover $\mathbb{Z}_{\ell(2\ell n+1)}\setminus\{0\}$ exactly once.
First, in Section \ref{lc} we shall construct a set $\cal L$ consisting of $(\ell -5)/2$, that is all but two, of the {transversal} long cycles of length
$2\ell n+1$. The list of differences provided by $\cal L$ will be
\begin{equation}\label{diff1}
\Delta {\mathcal{L}} = \; (\mathbb{Z}_{2\ell n+1}^* \times (\mathbb{Z}_\ell \setminus \{0,\pm1,\pm2\})) \ \cup \
\{(d,f(d)) \;|\; d \in D \ \cup \ -D\}
\end{equation}
with $f(d)=\pm 1$ for any $d \in D \ \cup \ -D$.
We point out that the differences not covered by
$\mathcal{L}$ are chosen in such a way as to facilitate the construction of the {transversal} short cycles.
Then in Section \ref{sc} we shall construct
a set $\cal S$ of $n$ transversal short cycles such that
\begin{equation}\label{diff2}
\Delta \mathcal{S} = (\{0\}\times \mathbb{Z}_{\ell}^*) \ \cup \ \{(x, \varphi(x)) \;|\; x \in \mathbb{Z}_{2\ell n+1}^-\setminus (D \ \cup \ -D)\}
\end{equation}
where $\varphi: \mathbb{Z}_{2\ell n+1}^-\setminus (D\ \cup \ -D) \rightarrow \{\pm 1, \pm 2\}$ is a map with some additional properties. More precisely,
\begin{enumerate}
\item we construct (Lemma \ref{skolem}) a set
$\mathcal{A}=\{A_1,\dots,$ $A_{n-{2k}}\}$ of
$\ell$-cycles and a set
$\mathcal{B}=\{B_1,\ldots,B_{2k}\}$
of $(\ell-1)$-cycles with vertices in $\mathbb{Z}_{2\ell n +1}$ such that
$\Delta \mathcal{A} \ \cup\ \Delta \mathcal{B} = \mathbb{Z}_{2\ell n+1}^-\setminus (D\ \cup \ -D)$.
The construction of the set $\cal A$ requires the crucial use of Skolem sequences.
Also, the cycles in $\mathcal{B}\setminus\{B_1\}$ will have a particular structure called alternating
(see Definition \ref{alternating}).
\item we ``lift'' (Proposition \ref{shortcycles}) the cycles in $\cal A \ \cup \ \cal B$ to obtain a set
$\cal S$ of {transversal}
$\ell$-cycles with vertices in
$\mathbb{Z}_{2\ell n +1} \times \mathbb{Z}_{\ell}$; to lift these cycles to $K_{\ell(2\ell n+1)}$ means first
to add a vertex to the cycles in $\cal B$ (so that they become
$\ell$-cycles) and then to add a second coordinate to the vertices of each cycle, in such a way that $\Delta \cal S$
satisfies \eqref{diff2}.
\end{enumerate}
Let us point out that in Proposition \ref{shortcycles} we construct a set of {transversal}
short cycles whose list of differences covers, in particular,
$\{0\}\times \mathbb{Z}^*_{4k+1}$. More precisely, the differences of the form
$\pm(0,1)$, $\pm(0,2), \ldots, \pm(0,2k)$ are covered by $2k$ distinct {transversal} short cycles.
Thus our construction requires at least $2k$
{transversal} short cycles, and this is the reason why in Theorem \ref{mainth} we {require}
$n\geq 2k=(\ell-1)/2$.
To complete the set of base cycles, we need to provide two more {transversal} long cycles $C$ and $C'$.
The construction of these two missing $(2\ell n+1)$-cycles is described in Section \ref{main}; clearly
$C$ and $C'$ are built so as to cover the set of the remaining differences, that is, the union of the
following disjoint sets:
\[\mathbb{Z}_{2\ell n+1}^*\times\{0\}; \quad
\{\pm1,\pm\ell n\}\times \{\pm1,\pm2\};\quad
\bigcup_{i\in\mathbb{Z}_{2\ell n+1}^-} \{i\}\times (\{\pm1, \pm2\}\setminus\{F(i)\}).
\]
where $F:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$ is the map obtained by glueing together the maps $f$ and
$\varphi$, that is, $F(x)=f(x)$ or $F(x)=\varphi(x)$ according to whether $x\in D \ \cup \ -D$ or not.
For example, to cover the differences of the form $(i,-F(i))$, the cycle $C$ will contain the path
$P=(c_1,c_2, \ldots, c_{\ell n-1})$ where $c_{i} = (x_i,y_i)$ with
$(x_1, x_2,\ldots, x_{\ell n-1}) = (1,-1,2,-2,3,-3, \ldots)$ and
$(y_1, y_2,\ldots, y_{\ell n-1}) = (1, 1+F(2), 1+F(2)-F(3), 1+F(2)-F(3)+F(4), \ldots)$.
The reader can check that $\Delta P = \pm\{(i,-F(i))\;|\;2\leq i\leq \ell n-1\}$.\\
In order to obtain the remaining differences, we need $y_{\ell n -1} \equiv 1\; \pmod{\ell}$,
and since $y_{\ell n -1} = 1+ \sum F$ with $\sum F = \sum_{j=2}^{\ell n -1} (-1)^jF(j)$, we must have
$\sum F \equiv 0\;\pmod{\ell}$. Recall that $F$ is obtained from the maps $f$ and $\varphi$; we will show that
$\sum F \equiv 0\;\pmod{\ell}$ by choosing suitably the map $\varphi$, relying on the fact,
proved in Proposition \ref{shortcycles}.1, that the alternating sums for $\varphi$ can take any value in $\mathbb{Z}_\ell$.
Putting it all together, the set ${\cal B}= {\cal L} \ \cup \ {\cal S} \ \cup\ \{C, C'\}$
is a set of base cycles as required in Theorem \ref{BR}.
\section{Transversal long cycles}\label{lc}
In this section we construct (Proposition \ref{longcycles}) a set $\mathcal{L}$ of $(\ell-5)/2$ transversal long cycles whose list of differences $\Delta \mathcal{L}$ has no repeated elements. All
{the remaining differences}
not lying in $\Delta \mathcal{L}$ will be covered by the
two {transversal} long cycles given in Section \ref{main} together with the {transversal} short cycles constructed in Section \ref{sc}.
The {transversal} long cycles that we are going to build will be obtained as union of paths with specific vertex-set
and list of differences. More precisely,
given four integers $a\leq b < c \leq d$ with $|(b-a)-(d-c)|\leq 1$, we denote by
$\mathcal{P}([a, b], [c,d])$ the family of all paths $P$ with vertex-set
$[a,b]\ \cup \ [c,d]$ which satisfy the following properties:
\begin{enumerate}
\item[(1)] for each edge $\{u,u'\}$ of $P$, with $u<u'$, we have that
$u\in[a, b]$ and $u'\in[c,d]$;
\item[(2)] $\Delta P = \pm [c-b, d-a]$.
\end{enumerate}
In other words, $\mathcal{P}([a, b], [c, d])$ contains all paths whose {two partite sets} are
$[a, b]$ and $[c, d]$, and
having the interval $\pm [c-b, d-a]$ as list of differences.
We point out {the connection to $\alpha$-labelings:} a member of $\mathcal{P}([0, b], [b+1, d])$ can be seen
as an {\it $\alpha$-labeling} of a path on $d+1$ vertices
(see \cite{Ga13}).
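For concreteness, the natural ``zig-zag'' path, which alternates between the two parts starting from a part of maximum size, is already a member of $\mathcal{P}([a,b],[c,d])$, although it gives no control on the end-vertices; the sketch below (our own illustration, not used in the sequel) constructs it and checks its list of differences.
\begin{verbatim}
def zigzag_path(a, b, c, d):
    """A member of P([a,b], [c,d]) (assuming |(b-a)-(d-c)| <= 1): every edge joins the
    two parts [a, b] and [c, d], and the edge differences are exactly c-b, ..., d-a."""
    low, high = list(range(a, b + 1)), list(range(d, c - 1, -1))
    first, second = (low, high) if len(low) >= len(high) else (high, low)
    path = []
    for i in range(len(first)):
        path.append(first[i])
        if i < len(second):
            path.append(second[i])
    return path

P = zigzag_path(5, 8, 9, 13)                                     # parts [5, 8] and [9, 13]
diffs = sorted(abs(P[i + 1] - P[i]) for i in range(len(P) - 1))
print(P, diffs)                                                  # differences 1, 2, ..., 8 = [c-b, d-a]
\end{verbatim}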
The following lemma generalizes a result by Abrham \cite{Ab93} concerning the
existence of an $\alpha$-labeling of a path with a given end-vertex.
\begin{lem}\label{paths}
Let $a\leq b < c \leq d$ with $|(b-a)-(d-c)| \leq 1$; also, set $\gamma_1=b-a$ and $\gamma_2=d-c$.
Then, there exists a path
of $\mathcal{P}([a,b], [c,d])$ with end vertices $w$ and $w'$ for each of the following values of the pair $(w,w')$:
\begin{enumerate}
\item $(w,w')=(a+i, b-i)$
if $\gamma_1 = \gamma_2+1$ and $i\in[0,\gamma_1]\setminus\{\gamma_1/2\}$;
\item $(w,w')=(a+i, c+i)$ if $\gamma_1= \gamma_2$ and $i\in[0,\gamma_1]$;
\item $(w,w')=(c+i, d-i)$ if $\gamma_1 = \gamma_2-1$ and $i\in[0,\gamma_2]\setminus\{\gamma_2/2\}$.
\end{enumerate}
\end{lem}
\begin{proof}
Set $I_1=[0, \gamma_1]$ and $I_2=[\gamma_1+1, \gamma_1+\gamma_2 + 1]$.
The existence of a path $P\in\mathcal{P}(I_1, I_2)$
whose end-vertices $w$ and $w'$ satisfy the assertion is proven in \cite{Ab93}
by using the terminology of $\alpha$-labelings
(see also \cite{Tr13}).
Now, let $f:V(P)\rightarrow \mathbb{Z}$ be the map defined as follows:
\[
f(x)=
\begin{cases}
x+a & \text{if $x \in I_1$},\\
x+c-(b-a+1) & \text{if $x\in I_2$}
\end{cases}
\]
and let $Q=f(P)$ be the path obtained from $P$ by replacing each vertex,
say $x$, with $f(x)$. It is clear that
the {two partite sets} of
$Q$ are $f(I_1)=[a,b]$ and $f(I_2)=[c,d]$.
By recalling the properties of a path in $\mathcal{P}(I_1, I_2)$, we have that
for any $d\in [1, \gamma_1+\gamma_2 + 1]$, there exists an edge $\{u,u'\}$ of $P$,
with $u\leq \gamma_1<u'$ such that $u'-u=d$.
By construction, the edge
$\{f(u), f(u')\}=\{u+a, u'+c -(b-a+1) \}$ lies in $Q$ and it gives rise to the differences
$\pm(d+c-b-1)$; hence,
$\Delta Q = \pm [c-b, d-a]$. In other words, $Q\in\mathcal{P}([a,b],[c,d])$, and it is not difficult to check that its end-vertices $f(w)$
and $f(w')$ satisfy the assertion.
\end{proof}
We use the above result to construct pairs of $(2t+1)$-cycles with vertices in $\mathbb{Z}\times\mathbb{Z}$ satisfying the conditions of the following lemma.
\begin{lem}\label{longcy}
Let $d_1, d_2$, and $t\geq2$ be integers having the same
parity, with $d_1, d_2 \in [1,t-1]\setminus
\{\lceil t/2\rceil \}$. For any $x,y\in \mathbb{Z}$, there exist two $(2t+1)$-cycles $C_1$ and $C_2$ with vertices in $\mathbb{Z} \times \mathbb{Z}$
such that
\begin{enumerate}
\item[$(i)$] the projection of $V(C_i)$ on the first components is $[0,2t]$ for $i=1,2$,
\item[$(ii)$] $\Delta C_1 \ \cup \ \Delta C_2 = \; \pm ([1, 2t] \times \{x,y\}) \ \cup \
\ \{\pm (d_1,x-y), \pm (d_2,y-x)\}.$
\end{enumerate}
\end{lem}
\begin{proof}
Let $d_1, d_2,t,x,y$ be integers satisfying the assumptions.
Also, set $J=[1,2t]\setminus I$ where $I=[1,t]$ or $[1,t+1]$ according to whether $t$ is odd or even.
We are going to show that for any integer $d \in [1,t-1]\setminus
\{\lceil t/2\rceil \}$ with $d \equiv t \pmod{2}$, there exists a cycle $C(d)$ with vertices in $\mathbb{Z} \times \mathbb{Z}$ satisfying the following two properties:
\begin{enumerate}
\item[$(1)$] the projection of $V(C(d))$ on the first components is $[0,2t]$,
\item[$(2)$] $\Delta C(d) = \; \pm (I\times \{x\}) \ \cup \ \pm (J\times \{y\}) \ \cup \
\ \{\pm (d,y-x)\}.$
\end{enumerate}
By setting $C_1=C(d_1)$ and letting $C_2$ be the cycle obtained from $C(d_2)$ by exchanging the roles of
$x$ and $y$, we obtain a pair $\{C_1, C_2\}$ of cycles which clearly satisfy the assertion.
We first deal with the case $t\geq3$ odd. Thus, set $t=2m-1$ and let $d=2k+1$
with $0\leq k \leq m-2$.\\
By Lemma \ref{paths} with $[a,b]=[m,2m-2]$,
$[c,d]=[2m-1,3m-2]$ and $i=k$, we obtain a path
$U=(u_1, \ldots, u_{2m-1})$ of $\mathcal{P}([a,b], [c,d])$ whose end-vertices are
$u_1=3m-2-k$ and $u_{2m-1}=2m-1+k$.
Again by Lemma \ref{paths} with $[a,b]=[0,m-1]$, $[c,d]=[3m-1,4m-2]$ and $i=k$, there exists
a path $W=(w_0, w_1, \ldots, w_{2m-1})$ of $\mathcal{P}([a,b], [c,d])$ whose end-vertices are
$w_0=k$ and $w_{2m-1}= 3m-1+k$.
Now, let $U'$ and $W'$ be the following paths:
\begin{align*}
& U' = ((u_1,x),(u_2,0),(u_3,x), \ldots, (u_{2m-3},x),(u_{2m-2},0),(u_{2m-1},x)),\\
& W' = ((w_0,0),(w_1,y), (w_2,0), \ldots, (w_{2m-3},y), (w_{2m-2},0), (w_{2m-1},y)).
\end{align*}
and let $C(d)=U' \ \cup \ ((w_{2m-1},y), (u_1,x)) \ \cup \ W' \ \cup \ ((u_{2m-1},x), (w_0,0))$ be the $(4m-1)$-cycle obtained by joining $U'$ and $W'$. Since the projection of $V(C(d))$ on the first coordinates is $V(U) \ \cup \ V(W)=[0,2t]$, condition (1) holds.\\
Now note that $u_i - u_j>0$ ($w_i -w_j>0$) for any $i$ odd and $j$ even.
Therefore, by recalling the list of differences of a path in $\mathcal{P}([a, b], [c,d])$, we have that
\begin{align*}
\Delta U' = \; \pm ([1, 2m-2] \times \{x\}),
\quad\text{and} \quad
\Delta W' = & \; \pm ([2m, 4m-2] \times \{y\}).
\end{align*}
Since $\Delta C(d) = \Delta U' \ \cup \ \Delta W' \ \cup \ \{\pm (w_{2m-1}-u_1, y-x), \pm (u_{2m-1}-w_0,x)\}$ where $w_{2m-1}-u_1 = d$ and $u_{2m-1}-w_0 = 2m-1$, then (2) is satisfied.
We proceed in a similar way when $t\geq 2$ is even. So, let $t=2m$ and let
$d = 2k$ with $1\leq k \leq m-1$ and $k \neq m/2$.\\
We apply Lemma \ref{paths} with $[a,b]=[m,2m-1]$,
$[c,d]=[2m,3m]$ and $i=k$ to obtain a path
$U=(u_1, \ldots, u_{2m+1})$ of $\mathcal{P}([a, b], [c,d])$ whose end-vertices are
$u_1=3m-k$ and $u_{2m+1}=2m+k$.
Again, we apply Lemma \ref{paths} with $[a,b]=[0,m-1]$, $[c,d]=[3m+1,4m]$ and $i=k-1$ and obtain
a path $W=(w_0, w_1, \ldots, w_{2m-1})$ of $\mathcal{P}([a, b],[c,d])$ whose end-vertices are
$w_0=k-1$ and $w_{2m-1}= 3m+k$.
Now, let $U'$ and $W'$ be the following paths:
\begin{align*}
& U' = ((u_1,x), (u_2,0),(u_3,x), \ldots, (u_{2m-1},x), (u_{2m},0), (u_{2m+1},x)),\\
& W' = ((w_0,0),(w_1,y),(w_2,0), \ldots, (w_{2m-3},y),(w_{2m-2},0),(w_{2m-1},y)).
\end{align*}
and let $C(d)=U' \ \cup \ ((w_{2m-1},y), (u_1,x)) \ \cup \ W' \ \cup \ ((u_{2m+1},x), (w_0,0))$ be the $(4m+1)$-cycle obtained by joining $U'$ and $W'$. As before, one can check that $C(d)$ satisfies (1) and (2). This completes the proof.
\end{proof}
\begin{ex} \label{exlongcy} Let $(d_1, d_2,t) = (1,43,45)$, and let $(x,y)=(3,4)$.
Below are two $91$-cycles $C_1$ and $C_2$ satisfying Lemma \ref{longcy}:
\begin{scriptsize}
\begin{align*}
C_{1} & = ((0,0),(90,3),(1,0),(89,3), \dots, (i,0), (90-i,3), \ldots, (22,0),(68,3),\\
&(67,4), (23,0),(66,4),\dots,(67-i,4),(23+i,0), \ldots, (46,4),(44,0),(45,4)), \\
C_{2} & = ( (21,0),(69,4),(22,0),(68,4), \dots, (21-2i,0),(69+2i,4),(22-2i,0),(68+2i,4), \ldots, \\
& (3,0), (87,4), (4,0), (86,4), (1,0), (88,4), (2,0), (90,4), (0,0), (89,4),\\
& (46,3),(43,0),(45,3),(44,0),\dots, (46+2i,3),(43-2i,0),(45+2i,3),(44-2i,0), \ldots, \\
& (62,3), (27,0), (61,3), (28,0),
(64,3), (24,0), (65,3), (26,0), (63,3), (25,0), (67,3), (23,0), (66,3))
\end{align*}
\end{scriptsize}\noindent
It is straightforward to check that $\Delta C_1 = \pm ([1,45]\times\{4\}) \ \cup \
\pm ([46,90]\times \{3\}) \ \cup \ \{\pm (1,-1)\}$ and $\Delta C_2 = \pm ([1,45]\times\{3\}) \ \cup \
\pm ([46,90]\times \{4\}) \ \cup \ \{\pm (43,1)\}$.
\end{ex}
We are now able to construct the required set ${\cal L}$ of $(\ell-5)/2$ transversal long cycles; we shall use the set $D$ of Definition \ref{setD}.
\begin{prop}\label{longcycles} For $\ell \equiv 1 \pmod{4}$ and $\ell\geq 5$
there exists a set $\mathcal{L}$ of $(\ell-5)/2$ transversal $(2\ell n+1)$-cycles of
$K_{\ell(2\ell n+1)}$ such that
\[\Delta {\mathcal{L}} = \; (\mathbb{Z}_{2\ell n+1}^* \times (\mathbb{Z}_\ell \setminus \{0,\pm1,\pm2\})) \ \cup \
\{(d,f(d)) \;|\; d \in D \ \cup \ -D\}
\]
where $f(d)=\pm 1$ for any $d \in D \ \cup \ -D$.
\end{prop}
\begin{proof}
We shall construct a set of transversal long cycles $\mathcal{L}$ by applying repeatedly Lemma \ref{longcy}.
We will use the following straightforward property of an integer $d \in D$:
$d \equiv 1,2 \pmod{4}$ for $n$ odd, and $d \equiv 0,1 \pmod{4}$ for $n$ even.
Set $t=\ell n$, $k=(\ell-1)/4$ and let $D=\{d_{i1}, d_{i2} \;|\; i=1,\ldots,k-1\}$. For any $d\in D$ we define the integer $d'$ as follows:
\[
\text{$d' = d$ when $d\equiv 0,2 \pmod{4}$, and $d'=2t+1-d$ when $d\equiv 1 \pmod{4}$,}
\]
and set $D' = \{d' \;|\; d\in D\}$.
Of course, $D \ \cup \ -D \equiv D' \ \cup \ -D' \pmod{2t+1}$; also, all integers in $D'$ are congruent to $0$ or $2 \mod{4}$ according to whether $t$ is even or odd; it then follows that all integers in $D'/2$ have the same parity as
$t$. Recalling how $D$ is defined (Definition \ref{setD}),
it is easy to check that $D'/2 \subseteq [2, t-1]\setminus\{\lceil t/2 \rceil\}$.
For $i=1,\dots,k-1$ set $x=2i+1,y=2i+2$ and take $d_1={d'_{i1}}/2$ and $d_2={d'_{i2}}/{2}$. In view of the considerations above, all the assumptions of Lemma \ref{longcy} are satisfied. Therefore, there exist two cycles $C_{i1}, C_{i2}$ for $i=1,\dots,k-1$ with vertices in $\mathbb{Z}\times \mathbb{Z} $ such that
\[\text{the projection of $C_{i1}$ (resp. $C_{i2}$) on the first coordinates is $[0,2\ell n]$}, \text{and}
\]
\[\Delta C_{i1} \ \cup \ \Delta C_{i2} = \; \pm \left([1,2\ell n] \times [2i+1,2i+2]\right) \
\cup \ \{\pm (d'_{i1}/2,-1),
\pm (d'_{i2}/2,1)\}.\]
Now, consider the vertices of $C_{i1}$ and $C_{i2}$ (and also their differences), as elements of $\mathbb{Z}_{2\ell n+1} \times \mathbb{Z}_{\ell}$. We denote by $\psi$
the permutation of $\mathbb{Z}_{2\ell n+1}\times \mathbb{Z}_{\ell}$ defined as
$\psi(a,b)=(2a,b)$, and set $C'_{ij}=\psi(C_{ij})$ for $i\in[1,k-1]$ and $j=1,2$.
It is clear that $C'_{ij}$ is a transversal $(2\ell n+1)$-cycle of $K_{\ell(2\ell n+1)}$; moreover,
\[\Delta C'_{i1} \ \cup \ \Delta C'_{i2} = \; \pm \left(\mathbb{Z}^*_{2\ell n+1} \times \{2i+1,2i+2\}\right) \
\cup \ \{\pm (d'_{i1},-1),
\pm (d'_{i2},1)\}.\]
Finally, set ${\mathcal{L}}=\{C'_{i1}, C'_{i2}\;|\; i\in[1,k-1]\}$; of course,
$\Delta {\mathcal{L}} = \bigcup_i\Delta \{C'_{i1}, C'_{i2}\}$.
By recalling how the $d'_{ij}$s are defined, we get
\begin{align*}
\Delta {\mathcal{L}} &=
(\mathbb{Z}_{2\ell n+1}^* \times (\mathbb{Z}_\ell \setminus \{0,\pm1,\pm2\})) \ \cup \
\{\pm(d_{ij},f(d_{ij})) \;|\; i\in[1,k-1],\, j=1,2\}.
\end{align*}
where $f(d_{ij})=1$ or $-1$ for any $i\in[1,k-1]$ and $j=1,2$. This shows that $\mathcal{L}$ is the required set of transversal long cycles.
\end{proof}
\begin{ex} \label{exlongcycle}
Let $\ell=9$ and $n=5$; then $k=2$ and the set $D$ is $\{2,5\}$. In this case, the set
${\cal L}$ of Proposition \ref{longcycles} consists of two {transversal} long cycles, ${\cal L}=\{C_1', C_2'\}$, that we will construct by following
the proof. Note that $D'=\{2,86\}$, and hence $D'/2=\{1,43\}$.
Now, set $(d_1, d_2, t) = (1,43,45)$, $(x,y)=(3,4)$ and apply Lemma \ref{longcy} to obtain two $91$-cycles $C_1$ and $C_2$: for example, we will take the cycles of Example \ref{exlongcy}.
To obtain $C_1'$ and $C_2'$, it is enough to multiply by $2$ the first component
in the cycles $C_{1}$ and $C_{2}$ and reduce modulo $91$, that is,
\begin{small}
\begin{align*}
\small
C'_{1} & = ((0,0),(89,3),(2,0),(87,3), \ldots, (45,3), (43,4) , \ldots, (1,4),(88,0),(90,4)), \\
C'_{2} & = ((42,0),(47,4),(44,0),(45,4), \dots, (87,4),(1,3), \ldots, (43,3), (46,0), (41,3)).
\end{align*}
\end{small}
Taking into account the list of differences of $C_1$ and $C_2$
(Example \ref{exlongcy}), we can see that
$\Delta {\cal L} = \mathbb{Z}_{91}^* \times \{\pm3,\pm4\} \ \cup \ \{\pm(2,-1), \pm (5,-1)\}.$
\end{ex}
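The effect of the relabelling $\psi(a,b)=(2a,b)$ on the leftover differences of Example \ref{exlongcy} can be checked mechanically; the short sketch below (our own code) does so.
\begin{verbatim}
m, ell = 91, 9                       # here 2*ell*n + 1 = 91 and ell = 9

def psi(diff):
    """The relabelling psi(a, b) = (2a mod m, b) acts on differences in the same way."""
    a, b = diff
    return (2 * a % m, b % ell)

def normalize(diff):
    """Represent +-(a, b) by the pair whose first component lies in [0, (m-1)/2]."""
    a, b = diff
    return diff if a <= (m - 1) // 2 else ((-a) % m, (-b) % ell)

# leftover differences of C_1 and C_2: (1, 8) stands for +-(1,-1), (43, 1) for +-(43,1)
for d in [(1, 8), (43, 1)]:
    print(d, "->", normalize(psi(d)))
# output: (2, 8) i.e. +-(2,-1), and (5, 8) i.e. +-(5,-1), matching Delta L above
\end{verbatim}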
\section{Transversal short cycles}\label{sc}
In this section we show that there exists a set $\mathcal{S}$
of $n$ transversal short cycles for all $n \geq (\ell-1)/2$, whose list of differences is disjoint from $\Delta \mathcal{L}$, where $\mathcal{L}$ is the set of
{transversal} long cycles constructed in Section \ref{lc}.
We first provide a set $\mathcal{A}$ of $n-2k$ cycles of length $\ell=4k+1$ and a set $\mathcal{B}$ of $2k$ cycles of length $\ell-1$ (Lemma \ref{skolem}), with vertices in $\mathbb{Z}_{2\ell n+1}$. After that, in Proposition \ref{shortcycles} we first inflate the cycles in $\mathcal{B}$ to get $\ell$--cycles; then, we lift the cycles in $\mathcal{A}$ and $\mathcal{B}$ to $\mathbb{Z}_{\ell(2\ell n+1)}$ by adding a second coordinate to the vertices of the cycles
(this will be done
by following Lemmas \ref{pathi} and \ref{pathx}). All these steps will give us
the required set $\mathcal{S}$.
\subsection{Building the short cycles in $\mathbb{Z}_{2\ell n +1}$}
First, we need the following definition which describes the structure of the cycles in the set $\mathcal{B}$
we shall build in Lemma \ref{skolem}.
\begin{defn}\label{alternating} Let $\ell=4k+1$ and let
$B=(b_0=0, b_1, \ldots, b_{4k-1})$ be a $4k$-cycle
with vertices in {$[-\ell n, \ell n]$}.
For any $i\in[1,4k-1]$ set $\delta_i = (-1)^i (b_i-b_{i-1})$
and $\delta_{4k} = b_0-b_{4k-1}$. Then $B$ is said to be {\it alternating} if the following conditions
are satisfied:
\begin{enumerate}
\item $\delta_i \in [1,\ell n]$ for any $i\in[1,4k]$;
\item $\delta_i \equiv i+1 \pmod{2}$ for $i \in [1,2k]$;
\item $\delta_i \equiv i \pmod{2}$ for $i \in [2k+1,4k]$.
\end{enumerate}
\end{defn}
\begin{ex}\label{exalternating}
Let $\ell=9$, $n=5$. Denote by $B=(b_0, b_1, \ldots, b_7)$ the $8$-cycle with vertices in
$[-45,45]$ defined as follows:
$B= (0,-30,1, -31, 2, -33, 3,-34)$.
Now, note that
$(\delta_1, \delta_2, \ldots, \delta_8)=(30,31, 32, 33, 35, 36, 37, 34)$; also each $\delta_h$ lies in $[1,45]$. Therefore,
conditions 1, 2, and 3 of the above definition are satisfied, and hence $B$ is alternating.
Note that $-B = (0,30,-1, 31, -2, 33, -3,34)$ has the same list of differences as $B$, but it is not alternating, since the conditions of Definition \ref{alternating} are not all satisfied.
\end{ex}
We start building the short cycles. First, we need a result providing a sufficient condition for a set of $(\ell-1)$ positive integers $U$ to be obtained as the list of differences of an $(\ell-1)$-cycle, possibly alternating.
\begin{lem}\label{4kgons} Let $\ell \equiv 1 \;\pmod{4}$ and
let $U\subseteq[1,\ell n]$ be a set of size $(\ell-1)$.
If $U$ can be partitioned into pairs of consecutive integers, then there exists an
$(\ell-1)$--cycle $C$ with $V(C)\subseteq \mathbb{Z}_{2\ell n+1}$ and $\Delta C = \pm U$.
Also, if $U=[u,u+\ell-2]$ with $u$ even, then $C$ is alternating.
\end{lem}
\begin{proof} Let $\ell = 4k+1$ and let $(\delta_1, \delta_2, \ldots, \delta_{2k}, x, \delta_{2k+1}, \ldots, \delta_{4k-1})$ be
the increasing sequence of the integers in $U$. By assumption, $U$ can be partitioned into pairs
of consecutive integers, that is, $\delta_{2i-1}+1 = \delta_{2i}$, $x+1=\delta_{2k+1}$, and
$\delta_{2j}+1=\delta_{2j+1}$, for $i\in [1,k]$ and $j \in [k+1,2k-1]$.
Therefore, it is not difficult to check that
$x = -\sum_{i=1}^{4k-1}(-1)^{i} \delta_{i}$.
Now, set $b_0=0$ and $b_i = \sum_{h=1}^{i}(-1)^{h}\delta_{h}$ for $i\in [1,4k-1]$.
Since the sequence $(\delta_1, \delta_2, \ldots, \delta_{4k-1})$ is increasing,
it is straightforward to check that all $b_i$s are pairwise distinct.
Then, we can consider the $4k$-cycle $C=(b_0, b_{1},\dots,b_{4k-1})$.
Note that $\delta_i = (-1)^i(b_i-b_{i-1})$ for $i \in [1, 4k-1]$. Also, $b_{4k-1} = -x$ hence,
$x = b_0-b_{4k-1}$. It then follows that
$\Delta C = \pm U$.
The final part of the assertion follows by taking into account that $(\delta_1, \ldots,$ $\delta_{2k},$ $x, \delta_{2k+1}, \ldots, \delta_{4k-1})$ is
the increasing sequence of the integers in $U$.
\end{proof}
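The construction in the proof above is entirely explicit; the following sketch (our own illustrative code) implements it and, run on $U=[10,17]$, returns the cycle $B_2$ appearing in Example \ref{exskolem} below.
\begin{verbatim}
def cycle_from_pairs(U):
    """Construction from the proof above: U is a set of 4k positive integers that can be
    partitioned into pairs of consecutive integers; the returned 4k-cycle (read in
    Z_{2*ell*n+1}) has list of differences +-U."""
    s = sorted(U)
    k = len(s) // 4
    x = s[2 * k]                               # the element removed from the increasing sequence
    deltas = s[:2 * k] + s[2 * k + 1:]         # delta_1 < delta_2 < ... < delta_{4k-1}
    b, cycle = 0, [0]
    for h, delta in enumerate(deltas, start=1):
        b += (-1) ** h * delta                 # b_h = sum_{j <= h} (-1)^j delta_j
        cycle.append(b)
    assert abs(cycle[0] - cycle[-1]) == x      # the wrap-around edge realises the removed element x
    return cycle

U = set(range(10, 18))                         # U = [10, 17]
C = cycle_from_pairs(U)
edge_diffs = {abs(C[(i + 1) % len(C)] - C[i]) for i in range(len(C))}
print(C, edge_diffs == U)                      # [0, -10, 1, -11, 2, -13, 3, -14]  True
\end{verbatim}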
As in \cite{BuDa,BuRi05}, we also need the crucial use of {\it Skolem sequences}.
A Skolem sequence of order $n$ can be viewed as a sequence
$S=(s_1,\dots,s_n)$ of positive integers such that $\bigcup_{i=1}^n\{s_i,s_i+i\}=\{1,2,\dots,2n\}$
or $\{1,2,\dots,2n+1\}\setminus\{2n\}$. One speaks of an {\it ordinary Skolem sequence} in the first
case and
of a {\it hooked Skolem sequence} in the second.
It is well known (see \cite{Sha07}) that there exists a Skolem sequence of order $n$ for every positive integer $n$; it is ordinary for $n\equiv0$ or $1 \pmod{4}$ and hooked for $n\equiv2$ or $3 \pmod{4}$.
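A small checker for this formulation of Skolem sequences (our own sketch; the two example sequences are standard small instances, not taken from the paper) is given below.
\begin{verbatim}
def skolem_type(S):
    """Classify S = (s_1, ..., s_n): 'ordinary' if the pairs {s_i, s_i + i} partition
    {1, ..., 2n}; 'hooked' if they partition {1, ..., 2n+1} minus {2n}; None otherwise."""
    n = len(S)
    covered = sorted(x for i, s in enumerate(S, start=1) for x in (s, s + i))
    if covered == list(range(1, 2 * n + 1)):
        return "ordinary"
    if covered == [x for x in range(1, 2 * n + 2) if x != 2 * n]:
        return "hooked"
    return None

print(skolem_type((1, 5, 3, 4)))   # 'ordinary' (order 4): pairs {1,2}, {5,7}, {3,6}, {4,8}
print(skolem_type((1, 3)))         # 'hooked'   (order 2): pairs {1,2}, {3,5}
\end{verbatim}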
\begin{lem}\label{skolem} For $\ell=4k+1$ and $n\geq 2k$, there exist a set
$\mathcal{A}=\{A_1,\dots,$ $A_{n-{2k}}\}$ of $\ell$-cycles and a set
$\mathcal{B}=\{B_1,\ldots,B_{2k}\}$
of $(\ell-1)$-cycles with vertices in $\mathbb{Z}_{2\ell n +1}$ satisfying the following properties:
\begin{enumerate}
\item $\Delta \mathcal{A} \ \cup\ \Delta \mathcal{B} = \mathbb{Z}_{2\ell n+1}^-\setminus (D\ \cup \ -D)$;
\item for $i\geq 2$, the cycle $B_i$ is alternating and
$\Delta B_i=\pm [u_i,u_i+4k-1]$, with $u_i=2(i-1)n$.
\end{enumerate}
\end{lem}
\begin{proof} In the proof we will start by constructing the cycles of $\cal A$ so that
the set of integers in $\mathbb{Z}_{2\ell n+1}^-\setminus(D \ \cup \ -D)$ not covered by $\Delta \cal A$ can be split up into even-length intervals.
A suitable choice of $4k$-gons as in Lemma \ref{4kgons} will then make up the set $\cal B$ and take care of the remaining differences.
Let us fix a Skolem sequence $S=(s_1,...,s_{n-2k})$ of order $n-2k$.
So $S$ is ordinary for $n-2k \equiv 0$ or $1$ $\pmod{4}$ and hooked otherwise.
We start with the non-hooked case, so let $n\equiv2k$ or $2k+1 \pmod{4}$.
Let us construct the cycles of $\cal A$; for $1\leq i\leq n-2k$, let $A_i=(a_{i0}, a_{i1}, \ldots,$ $a_{i,4k})$,
where:
\[
a_{ij}=
\begin{cases}
(4k-2-j)n & \text{for $1 \leq j \leq 4k-3$, $j$ odd},\\
jn+i-2k & \text{for $0 \leq j \leq 2k-2$, $j$ even}, \\
jn+i-1+2k & \text{for $2k \leq j \leq 4k-2$, $j$ even}, \\
-2k & \text{for $j=4k-1$},\\
s_i+i+(4k-1)n-1 &\text{for $j=4k$}.
\end{cases}
\]
It is tedious but straightforward to check that
$\Delta {\cal A} = I_0 \cup I_1 \dots \cup I_{2k-1}$
where
\[I_\alpha=
\begin{cases}
[4k, 2n-1] +2n\cdot \alpha & \text{for $\alpha=0, \ldots, 2k-2$} \\
[(4k-2)(n+1)+2,(4k+1)n-2k-1] & \text{for $\alpha=2k-1$}.
\end{cases}
\]
We now apply Lemma \ref{4kgons} to construct $\cal B$ such that
$\Delta {\cal B}=\mathbb{Z}_{2\ell n+1}^-\setminus (\pm D \ \cup \ \Delta{\cal A})$.
Let $J_{\beta}$ be the interval between $I_{\beta-1}$ and $I_{\beta}$ for $1\leq\beta\leq 2k-1$;
each such $J_\beta$ has even length $4k$.
Also, set $J_{0}=[2,4k-1]$ and $J_{2k}=[(4k+1)n-2k,(4k+1)n-1]$. Note that the $I_{\alpha}$'s and $J_{\beta}$'s are pairwise disjoint and cover altogether the integers from $2$ to $(4k+1)n-1$, namely:
\[
[2,(4k+1)n-1]= \bigcup_{\alpha=0}^{2k-1}I_{\alpha}\ \cup \ \bigcup_{\beta=0}^{2k}J_{\beta}.
\]
It is easy to check that $D\subseteq J_0$ and that $J_{0} \setminus D$ is the union of $k$ disjoint pairs of consecutive integers. In view of Lemma \ref{4kgons}, there exist $2k$ cycles $C_{0}, C_{1}, \ldots, C_{2k-1}$ of length $4k$ with vertices in $\mathbb{Z}_{2\ell n+1}$ whose lists of differences are the following:
\[\Delta C_{\beta}=
\begin{cases}
\pm (J_{0}\setminus D) \ \cup \ \pm J_{2k}& \text{for $\beta=0$} \\
\pm J_{\beta} & \text{for $1\leq \beta\leq2k-1$}.
\end{cases}
\]
Note that for any $\beta \in [1,2k-1]$ the smallest integer in $J_{\beta}$ is even, hence $C_{\beta}$ is alternating by
Lemma \ref{4kgons}.
We now consider the hooked case, so let $n\equiv2k+2$ or $2k+3 \pmod{4}$.
Let $\cal A$ be the cycle--set constructed earlier.
Since now the Skolem sequence is hooked, then the list of differences of $\cal A$ has the following form:
$
\Delta {\cal A} = I_0 \ \cup \ I_1 \dots \ \cup \ I_{2k-2} \ \cup \ I^*_{2k-1}
$
where $I_\alpha=[4k, 2n-1] +2n\cdot \alpha$ for $\alpha=0, \ldots, 2k-2$ and
\[
I^*_{2k-1} = [(4k-2)(n+1)+2,(4k+1)n-2k-2] \ \cup \ \{(4k+1)n-2k\}.
\]
We apply Lemma \ref{4kgons} to construct $\cal B$ such that
$\Delta {\cal B}= \mathbb{Z}_{2\ell n+1}^-\setminus (\pm D \ \cup \ \Delta{\cal A})$.
Let $J_{\beta}$ be the interval between $I_{\beta-1}$ and $I_{\beta}$ for $1\leq\beta\leq 2k-1;$
each such $J_\beta$ has even length $4k$.
Also, set $J_{0}=[2,4k-1]$ and $J_{2k}=[(4k+1)n-2k-1,(4k+1)n-1]\setminus \{(4k+1)n-2k\}$.
Note that the $I_{\alpha}$'s and $J_{\beta}$'s are pairwise disjoint and cover altogether the integers from $2$ to $(4k+1)n-1$, namely:
\[
[2,(4k+1)n-1]= \bigcup_{\alpha=0}^{2k-1}I_{\alpha}\ \cup \ \bigcup_{\beta=0}^{2k}J_{\beta}.
\]
It is easy to check that $D\subseteq J_0 \ \cup \ J_{2k}$ and that $J_{0} \setminus D$ is the union of $k+1$ disjoint pairs of consecutive integers. Also,
$J_{2k}\setminus D=[(4k+1)n-2k+1, (4k+1)n-2k+2]\ \cup \ [(4k+1)n-2k+4, (4k+1)n-1]$; hence, $J_{2k}\setminus D$ is the union of a $2$--set and a $(2k-4)$--set, both made of consecutive integers.
In view of Lemma \ref{4kgons} there exist $2k$ cycles $C_{0}, C_{1}, \ldots, C_{2k-1}$ of length $4k$ and vertices in $\mathbb{Z}_{2\ell n+1}$ whose lists of differences are the following:
\[\Delta C_{\beta}=
\begin{cases}
\pm (J_{0}\setminus D) \ \cup \ \pm(J_{2k}\setminus D)& \text{for $\beta=0$} \\
\pm J_{\beta} & \text{for $1\leq \beta\leq2k-1$}.
\end{cases}
\]
As before, we point out that for any $\beta \in [1,2k-1]$ the smallest integer in $J_{\beta}$ is even, hence $C_{\beta}$ is alternating by Lemma \ref{4kgons}.
We shall obtain the set $\cal B$ required in the proof by setting, for instance, $B_{i+1}=C_i$ for $i\in [0,2k-1]$.
\end{proof}
\begin{ex}\label{exskolem}
Let $n=5$ and $\ell=9$, hence $k=2$.
In this case, $\mathbb{Z}^-_{91}=\pm[2,44]$ and the set $D$ of Definition \ref{setD} has only two elements: $D=\{2,5\}$.
Below, we provide the sets
of short cycles ${\cal A} = \{A_1\}$ and ${\cal B} = \{B_1,B_2,B_3,B_4\}$ of Lemma \ref{skolem}.
\begin{align*}
A_1 &=(-3,25,7,15,24,5,34,-4,36), \\
B_1 &= (0,-3, 1, -5, 2, -40, 3,-41), \;\;
B_2 = (0,-10,1, -11, 2, -13, 3,-14), \\
B_3 &= (0,-20,1, -21, 2, -23, 3,-24), \;\;
B_4 = (0,-30,1, -31, 2, -33, 3,-34).
\end{align*}
First, note that $\Delta A_1 = \pm\{8,9,18,19,28,29,38,39,40\}$ and
$\Delta B_1=\pm\{3,4,6,7,41,42,43,44\}$.
Now, for $i=2,3,4$, let $B_i=(b_{0,i}, b_{1,i}, \ldots, b_{4k-1,i})$, and
set $\delta_{h,i} = (-1)^h (b_{h,i} - b_{h-1,i})$ for $h\in [1,8]$, with $b_{8,i}=b_{0,i}$.
It is easy to check that
\[
(\delta_{1,i}, \delta_{2,i}, \ldots, \delta_{4k,i}) =
\begin{cases}
(10,11, 12, 13, 15, 16, 17, 14) & \text{for $i=2$},\\
(20,21, 22, 23, 25, 26, 27, 24) & \text{for $i=3$},\\
(30,31, 32, 33, 35, 36, 37, 34) & \text{for $i=4$}.\\
\end{cases}
\]
It follows that the cycles $B_2, B_3, B_4$ are alternating. Also,
$\Delta B_2= \pm[10,17]$, $\Delta B_3= \pm[20,27]$, and $\Delta B_4= \pm[30,37]$.
Therefore, $\Delta {\cal A} \ \cup \ \Delta {\cal B} = \mathbb{Z}^-_{91}\setminus (\pm D)$.
\end{ex}
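This covering property can be verified mechanically; the sketch below (our own code) recomputes the lists of differences of $A_1, B_1,\ldots, B_4$ and checks that they cover $\mathbb{Z}^-_{91}\setminus(\pm D)$ exactly once.
\begin{verbatim}
from itertools import chain

A1 = [-3, 25, 7, 15, 24, 5, 34, -4, 36]
B1 = [0, -3, 1, -5, 2, -40, 3, -41]
B2 = [0, -10, 1, -11, 2, -13, 3, -14]
B3 = [0, -20, 1, -21, 2, -23, 3, -24]
B4 = [0, -30, 1, -31, 2, -33, 3, -34]

def diffs(cycle, v=91):
    """All differences +-(a - b), as elements of Z_v, over the edges of a cycle."""
    out = []
    for i in range(len(cycle)):
        d = (cycle[(i + 1) % len(cycle)] - cycle[i]) % v
        out += [d, (-d) % v]
    return out

all_diffs = sorted(chain.from_iterable(diffs(c) for c in (A1, B1, B2, B3, B4)))
target = sorted((s * x) % 91 for s in (1, -1) for x in range(2, 45) if x not in (2, 5))
print(all_diffs == target)   # True: Delta A union Delta B covers Z_91^- minus (+-D) exactly once
\end{verbatim}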
\subsection{Lifting the short cycles to $\mathbb{Z}_{\ell(2\ell n+1)}$}
As mentioned at the beginning of this section, to obtain the set $\mathcal{S}$ of transversal short cycles we need to lift the cycles in ${\cal A}$ and $\cal B$
by adding a second component to each vertex. Special attention must be devoted to the cycles of $\cal B$. Indeed,
given an $(\ell-1)$-cycle $B\in \cal B$, say $B=(b_0, b_1, \ldots, b_{4k-1})$,
we will consider as a lift of $B$ any cycle $B'$ of the following form:
\[B'=((b_0,p_0), (b_1,p_1), \ldots, (b_{4k-1},p_{4k-1}), (b_{4k}, p_{4k}))\]
whose vertices are elements of
$\mathbb{Z}_{2\ell n +1} \times \mathbb{Z}_{\ell}$ and either $b_{4k}=b_{0}$ or $b_{4k}=b_{4k-1}$.
If $\Delta B=\pm[u, u']$ with $0<u\leq u'$, then $\Delta B'$ has the following form:
\[
\Delta B' = \{\pm(i, \varphi(i))\;|\; i\in [u,u']\}\ \cup \
\{\pm(0, x)\}
\]
for a suitable map $\varphi: [u,u'] \rightarrow \mathbb{Z}_{\ell}$, and $x=p_{4k} - p_0$ or $x=p_{4k} - p_{4k-1}$ according to whether $b_{4k}=b_{0}$ or $b_{4k-1}$.
As will become clear later on, we will need to determine the
\emph{partial sum} $\sum_{i=u}^{u'} (-1)^i\varphi(i)$ related to $B'$.
The following lemma shows that the partial sum related to the lift $B'$ of an alternating cycle $B$
depends only on some of the labelings of the second components. More precisely,
\begin{lem}\label{specialcycles}
Let $\ell=4k+1$ and let
$B=(0, b_{1},\dots,b_{4k-1})$ be a $4k$-cycle with vertices in $\mathbb{Z}_{2\ell n+1}$.
Denote by $B'$ the $\ell$-cycle with vertices in
$\mathbb{Z}_{2\ell n+1}\times \mathbb{Z}_{\ell}$ obtained from $B$ as follows:
$B'=((0,0), (b_1,p_1), \ldots, (b_{4k-1},$ $p_{4k-1}),$ $(b_{4k},p_{4k}))$ where $b_{4k}=0$ or $b_{4k-1}$.
If $B$ is alternating and $\Delta B \ \cap\ [1,\ell n] = [u,u']$, then the map
$\varphi:[u, u'] \rightarrow \mathbb{Z}_{\ell}$ such that $(i,\varphi(i))\in \Delta B'$
satisfies the relation:
\[
\sum_{i=u}^{u'} (-1)^i\varphi(i) =
\begin{cases}
p_{4k}-2p_{2k} & \text{if $b_{4k}=0$}, \\
p_{4k-1}-p_{4k}-2p_{2k} & \text{if $b_{4k}=b_{4k-1}$}.
\end{cases}
\]
\end{lem}
\begin{proof} For brevity, set $\sum = \sum_{i=u}^{u'} (-1)^i\varphi(i)$. Also, as in Definition \ref{alternating}, for any $h\in[1,4k-1]$ set
$\delta_h = (-1)^h (b_h-b_{h-1})$, where $b_0=0$, and set $\delta_{4k} = -b_{4k-1}$. Since $B$ is alternating,
$\delta_h \in [1,\ell n]$ for any $h\in[1,4k]$; also, by assumption,
$\Delta B \ \cap\ [1, \ell n] = [u,u']$.
Therefore, $[u,u']=\{\delta_1, \delta_2, \ldots, \delta_{4k}\}$ hence,
$\sum = \sum_{h=1}^{4k} (-1)^{\delta_h}\varphi(\delta_h)$.
For any $h \in [1,4k-1]$ we have that $\varphi(\delta_h) = (-1)^h (p_h-p_{h-1})$, where $p_0=0$;
also, it is not difficult to check that
\[
\varphi(\delta_{4k})=
\begin{cases}
p_{4k}-p_{4k-1} & \text{if $b_{4k}=0$},\\
-p_{4k} & \text{if $b_{4k}=b_{4k-1}$}.
\end{cases}
\]
Now set
$\sum' = \sum_{h=1}^{2k} (-1)^{\delta_h}\varphi(\delta_h)$ and
$\sum'' = \sum_{h=2k+1}^{4k-1} (-1)^{\delta_h}\varphi(\delta_h)$.
Since by Definition \ref{alternating} we have $\delta_h \equiv h+1 \pmod{2}$ for $h \in [1,2k]$, then
\[
{\sum}' = \sum_{h=1}^{2k} (-1)^{h+1}\varphi(\delta_h) =
- \sum_{h=1}^{2k} (p_h-p_{h-1}) = p_0 - p_{2k} = - p_{2k}.
\]
On the other hand, $\delta_h \equiv h \pmod{2}$ for $h \in [2k+1,4k]$, therefore
\[
{\sum}'' = \sum_{h=2k+1}^{4k-1} (-1)^{h}\varphi(\delta_h) =
\sum_{h=2k+1}^{4k-1} (p_h-p_{h-1}) = p_{4k-1} - p_{2k}.
\]
Moreover, $\delta_{4k}\equiv 4k\equiv 0\pmod{2}$, so that $(-1)^{\delta_{4k}}\varphi(\delta_{4k}) = \varphi(\delta_{4k})$. Since
$\sum = \sum' + \sum'' +$ $\varphi(\delta_{4k})$, the assertion easily follows.
\end{proof}
\begin{ex}\label{exspecialcycles}
Let once more $n=5$ and $\ell=9$ (i.e. $k=2$). Consider the $8$-cycle
$B_4 = (0,-30,1, -31, 2, -33, 3,-34)$ of Example \ref{exskolem}. It is
alternating and $\Delta B_4 \ \cap\ [1,45] = [30,37]$. Therefore, the assumptions of
Lemma \ref{specialcycles} are satisfied.
Now, denote by $B'$ the cycle with vertices in $\mathbb{Z}_{91} \times \mathbb{Z}_9$ obtained from $B_4$ by repeating the last vertex and then adding the second components $(p_0, p_1, \ldots, p_8) = (0,2,3,4,5,7,8,6,1)$, that is,
\begin{align*}
B' = (&(0,0),(-30,2),(1,3),(-31,4),(2,5),(-33,7),
(3,8),(-34,6),(-34,1)).
\end{align*}
It is easy to check that $\Delta B' = \pm(\{31,33,36\}\times\{1\})
\ \cup \ \pm(\{32,34\}\times\{-1\})
\ \cup \ \pm(\{30,35\}\times\{-2\})
\ \cup \ \{\pm (37,2), \pm (0,4)\}$.
Therefore, we can define the map
$\varphi:[30,37] \rightarrow \mathbb{Z}_{\ell}$ such that $(i,\varphi(i))\in \Delta B'$. It is easy to check that
\[
\sum_{i=30}^{37} (-1)^i\varphi(i) = 4 =
p_{7}-p_{8}-2p_{4}.
\]
\end{ex}
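This computation, too, can be delegated to a computer; the sketch below (ours, for illustration only) rebuilds $B'$ from $B_4$ and the chosen second components, recomputes the map $\varphi$, and checks the partial sum against Lemma \ref{specialcycles}.
\begin{verbatim}
# Recompute Delta B' and the partial sum of the lemma for this lift (m = 4 data).
M, L = 91, 9
B4 = (0, -30, 1, -31, 2, -33, 3, -34)
p  = (0, 2, 3, 4, 5, 7, 8, 6, 1)                # second components p_0, ..., p_8
Bp = [(B4[i], p[i]) for i in range(8)] + [(B4[7], p[8])]   # last vertex repeated

phi = {}
for (a, x), (b, y) in zip(Bp, Bp[1:] + Bp[:1]):
    d, e = (b - a) % M, (y - x) % L
    if d > M // 2:                              # take the representative in [0, 45]
        d, e = (-d) % M, (-e) % L
    if d != 0:                                  # skip the difference (0, 4)
        phi[d] = e

assert sorted(phi) == list(range(30, 38))       # Delta B_4 meets [1, 45] in [30, 37]
S = sum((-1) ** i * phi[i] for i in range(30, 38)) % L
assert S == (p[7] - p[8] - 2 * p[4]) % L == 4
print("partial sum =", S)
\end{verbatim}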
The following two lemmas tell us how to label the second components of the vertices in the cycles of Proposition \ref{skolem}, in order to get the set $\mathcal{S}$ of short cycles.
\begin{lem}\label{pathi}
Let $\ell=4k+1$; for any $i \in [1, 2k-1]$ there is an $\ell$-cycle $Q_i=(0,q_1,\dots, q_{4k})$ with vertex set $\mathbb{Z}_\ell$
such that
$q_j=j$ for $j=1,2,\dots,4k-i$, $q_{4k}=-i$, and $\Delta Q_i\setminus\{\pm i\}$ contains only $\pm1$s and $\pm2$s.
\end{lem}
\begin{proof}
The assertion is easily verified: consider for instance the cycle
\begin{align*}
Q_i=(&0, 1, 2, \dots, 4k-i, \\
&4k-i+2,4k-i+4,4k-i+6,\dots,4k,\\
&4k-1,4k-3,4k-5,\dots,4k-i+1)
\end{align*}
for $i$ even, and for $i$ odd the cycle
\begin{align*}
Q_i=(&0,1,2,\dots,4k-i,\\
&4k-i+2,4k-i+4,4k-i+6,\dots,4k-1,\\
&4k,4k-2,4k-4,\dots, 4k-i+1).
\end{align*}
\end{proof}
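Readers who prefer a mechanical confirmation may run the following sketch (ours, illustrative only), which builds the cycles $Q_i$ exactly as in the proof above and tests the properties claimed in the lemma for several values of $k$.
\begin{verbatim}
# Build Q_i as in the proof above and check the properties of the lemma.
def Q(i, k):
    head = list(range(4 * k - i + 1))                    # 0, 1, ..., 4k-i
    if i % 2 == 0:
        mid  = list(range(4 * k - i + 2, 4 * k + 1, 2))  # 4k-i+2, ..., 4k
        tail = list(range(4 * k - 1, 4 * k - i, -2))     # 4k-1,   ..., 4k-i+1
    else:
        mid  = list(range(4 * k - i + 2, 4 * k, 2))      # 4k-i+2, ..., 4k-1
        tail = list(range(4 * k, 4 * k - i, -2))         # 4k,     ..., 4k-i+1
    return head + mid + tail

for k in range(2, 7):
    L = 4 * k + 1
    for i in range(1, 2 * k):
        q = Q(i, k)
        assert sorted(q) == list(range(L))               # vertex set Z_l
        assert q[:4 * k - i + 1] == list(range(4 * k - i + 1))   # q_j = j
        assert q[4 * k] % L == (-i) % L                  # q_{4k} = -i
        diffs = {min((q[(j + 1) % L] - q[j]) % L, (q[j] - q[(j + 1) % L]) % L)
                 for j in range(L)}
        assert diffs <= {1, 2, i}                        # only +-1, +-2 besides +-i
print("Q_i verified for k = 2, ..., 6")
\end{verbatim}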
We point out to the reader that the cycles $Q_i$s of the above lemma are unique.
\begin{lem}\label{pathx}
Let $\ell=4k+1$; for any $m\in \mathbb{Z}_\ell$ there is an $\ell$-cycle $P_m=(0,p_1,\dots,p_{4k})$ with vertex set
$\mathbb{Z}_{\ell}$ such that
\begin{enumerate}
\item for $m \neq \pm 2k$, we have $p_{4k}=2k$ and $p_{4k}-2p_{2k}=m$,
\item for $m=\pm 2k$, we have $p_{4k}-p_{4k-1}=m$ and $p_{2k} = -m$,
\item for any $m \in \mathbb{Z}_{\ell}$, $\Delta P_m\setminus\{\pm 2k\}$ contains only $\pm1$s and $\pm2$s.
\end{enumerate}
\end{lem}
\begin{proof} For a clear description of our construction, we
need to work first on the integers. The required cycles will be then obtained by reducing $\pmod{\ell}$.
Let $m \in [0,4k]$ with $m\neq 2k,2k+1$ and let $x$ be the integer in $[0, 4k]$ such that $x \equiv (2k-m)/2 \pmod{\ell}$; of course, $x \neq 0,2k$.
We first work with the case $0<x<2k$. If $x$ is odd, we may take
\begin{align*}
P_m=(&0,4k-1,4k-3,\dots,2k+x+2,\\
&2k+x+1, 2k+x+3, 2k+x+5,\dots, 4k, \\
&1,2,3,\dots,x,x+1,\dots,2k-1,\\
&2k+1,2k+3,2k+5,\dots, 2k+x,\\
&2k+x-1,2k+x-3,2k+x-5, \dots, 2k).
\end{align*}
Similarly, for $x$ even, we may take
\begin{align*}
P_m=(&0,4k-1,4k-3,\dots,2k+x+1,\\
&2k+x+2, 2k+x+4, 2k+x+6, \dots, 4k, \\
&1,2,\dots,x,x+1,\dots,2k-1,\\
&2k+1,2k+3,2k+5,\dots,2k+x-1 \\
&2k+x,2k+x-2,2k+x-4,\dots, 2k).
\end{align*}
Suppose now that $2k+1\le x \le 4k$, and say $x=2k+x'$. If $x$ is even, we take
\begin{align*}
P_m=(&0,2,4,\dots,x'-2,\\
&x'-1,x'-3,x'-5,\dots, 1,\\
&4k,4k-1,4k-2,\dots,x,x-1,\dots,2k+1,\\
&2k-1,2k-3,2k-5,\dots, x'+1,\\
&x',x'+2,x'+4, \dots, 2k).
\end{align*}
If $x\ne2k+1$ is odd, then
\begin{align*}
P_m=(&0,2,4,\dots,x'-1,\\
&x'-2,x'-4, x'-6, \dots,1,\\
&4k,4k-1,4k-2,\dots, x, x-1, \dots,2k+1\\
&2k-1,2k-3,2k-5,\dots,x', \\
&x'+1, x'+3, x'+5,\dots, 2k),
\end{align*}
and for $x=2k+1$, that is for $m=2k-1$, we take:
\begin{align*}
P_{2k-1}=(&0, 4k, 4k-1, \ldots, 2k+1, \\
&2k-1, 2k-3, 2k-5, \ldots,1,\\
&2,4,6,\ldots, 2k).
\end{align*}
In all the above cases we have $p_{2k}=x$.
Finally, the cycle $P_{2k}$ is given as follows:
\begin{align*}
P_{2k}=(&0, 2,3,4, \ldots, p_{2k}=2k+1, \\
& 2k+3, 2k+5, 2k+7, \ldots, 4k-1,\\
& 4k,4k-2,4k-4,\ldots, 2k+2, 1),
\end{align*}
and $P_{-2k} = - P_{2k}$.
It is straightforward to check that after reducing modulo $\ell$, all cycles just defined satisfy the requirements of the lemma.
\end{proof}
\begin{ex} For $\ell=9$ the cycles constructed in the two previous lemmas are as follows.
\begin{scriptsize}
\begin{align*}
Q_1=&(0,1,2,3,4,5,6,7,8) &Q_2&=(0,1,2,3,4,5,6,8,7) & Q_3&=(0,1,2,3,4,5,7,8,6)\\
P_0 = & (0,7,8,1,2,3,5,6,4) & P_1&= (0,1,8,7,6,5,3,2,4) & P_2& = (0,7,6,8,1,2,3,5,4)\\
P_3= & (0,8,7,6,5,3,1,2,4) &P_4 &= (0,2,3,4,5,7,8,6,1) & P_5&= (0,7,6,5,4,2,1,3,8) \\
P_6 = & (0,2,3,1,8,7,6,5,4) & P_7&= (0,8,1,2,3,5,7,6,4) &P_8 &= (0,2,1,8,7,6,5,3,4).
\end{align*}
\end{scriptsize}
\end{ex}
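The nine cycles $P_m$ can be checked against the statement of Lemma \ref{pathx} by machine; the sketch below (ours, purely illustrative) does so for the cycles listed here.
\begin{verbatim}
# Check the listed P_m (l = 9, k = 2) against the three conditions of the lemma.
L, k = 9, 2
P = {0: (0,7,8,1,2,3,5,6,4), 1: (0,1,8,7,6,5,3,2,4), 2: (0,7,6,8,1,2,3,5,4),
     3: (0,8,7,6,5,3,1,2,4), 4: (0,2,3,4,5,7,8,6,1), 5: (0,7,6,5,4,2,1,3,8),
     6: (0,2,3,1,8,7,6,5,4), 7: (0,8,1,2,3,5,7,6,4), 8: (0,2,1,8,7,6,5,3,4)}

for m, p in P.items():
    assert sorted(p) == list(range(L))                   # vertex set Z_9
    if m in (2 * k, L - 2 * k):                          # m = +-2k
        assert (p[4 * k] - p[4 * k - 1]) % L == m and p[2 * k] == (-m) % L
    else:                                                # m != +-2k
        assert p[4 * k] == 2 * k and (p[4 * k] - 2 * p[2 * k]) % L == m
    diffs = {min((p[(j + 1) % L] - p[j]) % L, (p[j] - p[(j + 1) % L]) % L)
             for j in range(L)}
    assert diffs - {2 * k} <= {1, 2}                     # only +-1, +-2 besides +-2k
print("all nine cycles P_m satisfy the lemma")
\end{verbatim}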
We are now able to construct the required set $\mathcal{S}$ of $n$ transversal short cycles in such a way that $\Delta \mathcal{S} \ \cup \ \Delta\mathcal{L}$ has no repeated differences.
\begin{prop}\label{shortcycles}
Let $\ell \equiv 1 \pmod{4}$. For every integer $n\geq (\ell-1)/2$ and for every $s \in \mathbb{Z}_{\ell}$, there exists a set $\mathcal{S}$ of $n$ transversal $\ell$--cycles of $K_{\ell(2\ell n+1)}$ whose list of differences is of the form
\[
\Delta \mathcal{S} = (\{0\}\times \mathbb{Z}_{\ell}^*) \ \cup \ \{(x, \varphi(x)) \;|\; x \in \mathbb{Z}_{2\ell n+1}^-\setminus (D \ \cup \ -D)\}
\]
where $\varphi: \mathbb{Z}_{2\ell n+1}^-\setminus (D\ \cup \ -D) \rightarrow \{\pm 1, \pm 2\}$ is a map such that:
\begin{enumerate}
\item \label{allvalues}
$\sum_{i\in \overline{D}} (-1)^i \varphi(i)=s$, where $\overline{D} = [2,\ell n-1]\setminus D$;
\item \label{uni}
the set $\{i \in \overline{D} \;|\;\varphi(i)=(-1)^{i+1}\}$
has size $\geq (\ell-5)(\ell-1)/4$.
\end{enumerate}
\end{prop}
\begin{proof}
Consider the sets $\cal A$ and $\cal B$, constructed in Lemma \ref{skolem}, containing cycles of $K_{2\ell n+1}$.
We have to ``lift'' them to cycles of $K_{\ell(2\ell n+1)}$. Since the vertices of the cycles in $\cal A \ \cup \ B$ lie in $\mathbb{Z}_{2\ell n +1}$, while the vertex-set of $K_{\ell(2\ell n+1)}$ has been identified with
$\mathbb{Z}_{2\ell n +1} \times \mathbb{Z}_{\ell}$, to lift these cycles to $K_{\ell(2\ell n+1)}$ means to add a second coordinate to each of their vertices. We will also add a vertex to the cycles in $\cal B$ so that they become $\ell$-cycles.
This lift is easily done for the set $\cal A$; from each cycle $A\in \cal A$, $A=(a_0,a_1,\dots,a_{4k})$, we obtain the cycle $A'=(a_0',\dots,a'_{4k})$ by setting
$a'_i=(a_i,i),$ and we set $\cal A'$ to be the set $\{A' | A\in \cal A\}$. It is important to note that
\begin{align}\label{A}
& \text{the projection of} \; \Delta A'_i \; \text{on}\; \mathbb{Z}_{2\ell n+1} \; \text{is}\; \Delta A_i,\\
& \text{the projection of} \; \Delta A'_i \; \text{on}\; \mathbb{Z}_{\ell} \; \text{is a list of $\pm1$'s},
\end{align}
for $i=1,\dots, n-2k$.
We will now work with the set $\mathcal{B}=\{B_1,\ldots,B_{2k}\}$, where
$B_i=(b_{0,i},$ $b_{1,i}, \ldots,$ $b_{4k-1,i})$ with $b_{0,i}=0$ for $i=1, \ldots, 2k$.
When dealing with the set $\mathcal{B}$ we need to add an extra vertex to each $B_i$ to have an $\ell$-cycle of $K_{2\ell n+1}$ {\em and} a second coordinate to obtain cycles in $K_{\ell(2\ell n+1)}$.
The lift will work differently in the case
$i=2k$, so we shall start by describing the situation for $i=1,\dots,2k-1$.
Consider the cycles $Q_i=(0,q_{1,i},\dots q_{4k,i})$ constructed in Lemma \ref{pathi}, and consider the $\ell$-cycle
$B'_i= ( (0, 0), (b_{1,i},q_{1,i}), \dots, (b_{4k-1,i},q_{4k-1,i}), (0,q_{4k,i}))$.
By following this construction to lift $B_i$ we have that
\begin{align}
&\text{the projection of} \; \Delta B'_i \; \text{on}\; \mathbb{Z}_{2\ell n+1} \; \text{is}\;
\Delta B_i\ \cup \ \{0,0\}, \label{condiz1}\\
&\text{the projection of} \; \Delta B'_i \; \text{on}\; \mathbb{Z}_{\ell} \; \text{is}\;
\Delta Q_i. \label{condiz2}
\end{align}
By recalling that $q_{4k,i}=-i$ and
$\Delta Q_i\setminus\{\pm q_{4k,i}\}$ contains only $\pm1$s and $\pm2$s (Lemma \ref{pathi}),
we then have that for any $i=1,\dots, 2k-1$,
\begin{align}
&\text{the differences of}\; B'_i \;\text{in}\; \{0\} \times \mathbb{Z}_\ell
\;\text{are}\; \pm(0,q_{4k,i}) = \pm(0,i) \label{condiz3}\\
&\text{the differences of}\; B'_i \;\text{in}\; \mathbb{Z}_{2\ell n+1}^*\times\mathbb{Z}_{\ell} \;
\text{lie in} \; \Delta B_i \times \{\pm1,\pm2\}. \label{condiz4}
\end{align}
A similar, but slightly more complicated approach will be used when lifting $B_{2k}$;
to obtain the result, we need $\ell$ possible lifts for
$B_{2k} = (0,b_{1,2k},$ $\dots,b_{4k-1,2k})$,
say $B'_{2k,m}$ for $m \in \mathbb{Z}_\ell$,
corresponding to the fact that we want condition (\ref{allvalues})
to be satisfied for all $s\in\mathbb{Z}_\ell$.
Once more, we add a vertex to $B_{2k}$, and a second coordinate using now the cycles $P_m=(0,p_{1,m},\dots p_{4k,m})$ built in Lemma \ref{pathx}.
For $m \in \mathbb{Z}_\ell$, $m\ne \pm 2k$, set
$$B'_{2k,m}=( (0, 0), (b_{1,2k},p_{1,m}), \dots, (b_{4k-1,2k},p_{4k-1,m}), (0,p_{4k,m}));$$
on the other hand, for $m = \pm 2k$ the difference with $0$ as first coordinate will arise from the edge between positions $4k-1$ and $4k$; in this case we set
$$B'_{2k,m}=( (0, 0), (b_{1,2k},p_{1,m}), \dots, (b_{4k-1,2k},p_{4k-1,m}), (b_{4k-1,2k},p_{4k,m})).$$
We first note that $B'_{2k,m}$ satisfies conditions equivalent to \eqref{condiz1}-\eqref{condiz2}.
Also, by Lemma \ref{pathx} we have that
$p_{4k,m}=2k$ for $m\neq \pm 2k$, $p_{4k,m}-p_{4k-1,m}=m$ for $m=\pm 2k$ and
$\Delta P_m \setminus\{\pm 2k\}$ contains only $\pm1$s and $\pm2$s.
It then follows by the structure of $B'_{2k,m}$ that for any $m\in \mathbb{Z}_{\ell}$
\begin{align}
&\text{the differences of}\; B'_{2k,m} \;\text{in}\; \{0\} \times \mathbb{Z}_\ell
\;\text{are}\; \pm(0,2k) \label{condiz5}\\
&\text{the differences of}\; B'_{2k,m} \;\text{in}\; \mathbb{Z}_{2\ell n+1}^*\times\mathbb{Z}_{\ell} \;
\text{lie in} \; \Delta B_{2k} \times \{\pm1,\pm2\}. \label{condiz6}
\end{align}
In this way, taking into account \eqref{condiz3}-\eqref{condiz6},
any set $\mathcal{S}$ consisting of the lifts $A'_i$ of the $A_i$'s together with one of the possible lifts of the $B_i$'s described above
will satisfy the first part of the assertion, that is,
\begin{equation}\label{C}
\begin{aligned}
\Delta \mathcal{S} = &(\{0\}\times \mathbb{Z}_{\ell}^*) \ \cup \ \{(x, \varphi_{\mathcal{S}}(x)) \;|\; x \in \mathbb{Z}_{2\ell n+1}^-\setminus (D\ \cup \ -D)\}\\
&\text{where}\; \varphi_{\mathcal{S}}: \mathbb{Z}_{2\ell n+1}^-\setminus (D \ \cup \ -D) \rightarrow \{\pm 1, \pm 2\}.
\end{aligned}
\end{equation}
Finally, consider the set ${\cal S}_{m}=\{A'_1,\dots,A'_{n-2k},B'_1,\dots,B'_{2k-1}, B'_{2k,m}\}$
consisting of the lifts of the cycles in $\mathcal{A}$ and $\mathcal{B}$ just described. In view of the above considerations, for $m\in \mathbb{Z}_{\ell}$ the set
$\mathcal{S}_m$ satisfies \eqref{C}.
For the sake of readability, we set $\varphi_{m} = \varphi_{\mathcal{S}_{m}}$ and
${\sum}_m = \sum_{i\in \overline{D}} (-1)^i\varphi_{m}(i)$ where $\overline{D}=[2, \ell n-1]\setminus D$.
We want to evaluate the contribution of the cycle $B'_{2k,m}$ to the quantity
$\sum_m$; to do so, note that by Lemma \ref{skolem} we have
$\Delta B_{2k}=\pm[u,u']$ where $u=2(2k-1)n$ and $u'=u+4k-1$, and set
\[\displaystyle S_{2k,m}=\sum_{i=u}^{u'} (-1)^i\varphi_{m}(i) \;\;\;\text{for any $m \in \mathbb{Z}_{\ell}$}.\]
On the other hand, the contribution given by ${\cal S}_m \setminus \{B'_{2k,m}\}$ to $\sum_m$ does not depend on $m$, so we can write ${\sum}_{m} = S_{2k,m} + S'$ for some fixed $S'$.
Since $B_{2k}$ is alternating (Lemma \ref{skolem}), we have that
$B'_{2k,m}$ satisfies the assumption of Lemma \ref{specialcycles}
for any $m \in \mathbb{Z}_\ell$. By Lemma \ref{pathx} we have that
$p_{4k} - 2p_{2k}=m$ for $m\neq \pm 2k$;
also, $p_{4k-1} - p_{4k} - 2p_{2k}=m$ for $m = \pm 2k$. It then
follows by Lemma \ref{specialcycles} that $S_{2k,m}=m$ for any $m \in \mathbb{Z}_\ell$
hence, ${\sum}_{m} = m + S'$.
As $m$ runs over $\mathbb{Z}_\ell$, the sums ${\sum}_{m}$ cover all residues in $\mathbb{Z}_\ell$; hence we may choose $m$ so that ${\sum}_{m}=s$, and
condition \ref{allvalues} is proven.
We are left to show that condition \ref{uni} holds. For
$i\in [2, 2k-1]$
we recall that $Q_i=(0,q_{1,i}, \ldots, q_{4k,i})$ is the cycle of Lemma \ref{pathi} hence,
$q_{h,i}=h$ for $h\in[1,2k]$. Therefore, the lift $B_i'$ of the cycle $B_i$ has the following form:
$B'_i= ((0,0),(b_{1,i},1),\ldots, (b_{2k,i}, 2k),\ldots)$.
Since $B_i$ is a cycle as in Lemma \ref{skolem} then,
$B_i$ is alternating and for any $h \in [1,4k]$ we have
that $\delta_{h,i} = (-1)^h (b_{h,i}-b_{h-1,i})\in \overline{D}$.
By Definition \ref{alternating}, $\delta_{h,i} \equiv h+1 \pmod{2}$ and $q_{h,i}-q_{h-1,i}=1$ for $h \in [1,2k]$, hence
$\varphi(\delta_{h,i})= (-1)^h(q_{h,i}-q_{h-1,i}) = (-1)^{\delta_{h,i}+1}$ for $h\in [1,2k]$.
In other words, setting $X=\{\delta_{h,i} \; | \; (h,i)\in [1,2k]\times[2,2k-1]\}$, we have that
$\varphi(x)=(-1)^{x+1}$ for any $x\in X$, where $X\subseteq \overline{D}$ has size $(2k-2)\cdot 2k=(\ell-5)(\ell-1)/4$; this proves condition \ref{uni}.
\end{proof}
\begin{ex}\label{exshortcycles}
Once again, let $\ell=9$ and $n=5$, so that $D=\{2,5\}$.
Here, we explicitly construct a set ${\cal S}$ of
{transversal} short cycles as in Proposition \ref{shortcycles}
with $s=0$.
First, we consider the sets ${\cal A} = \{A_1\}$ and ${\cal B} = \{B_1, B_2, B_3, B_4\}$
of short cycles with vertices in $\mathbb{Z}_{91}$ from Example \ref{exskolem}:
\begin{align*}
A_1 &=(-3,25,7,15,24,5,34,-4,36), \\
B_1 &= (0,-3, 1, -5, 2, -40, 3,-41), \;\;
B_2 = (0,-10,1, -11, 2, -13, 3,-14), \\
B_3 &= (0,-20,1, -21, 2, -23, 3,-24), \;\;
B_4 = (0,-30,1, -31, 2, -33, 3,-34).
\end{align*}
We lift these cycles to $\mathbb{Z}_{9\cdot91}$ according to Proposition \ref{shortcycles}.
First we have that
$A'_1=((-3,0),(25,1),(7,2),(15,3),(24,4),(5,5),(34,6),(-4,7),(36,8)).$\\
Let us ``expand and lift'' the first three cycles of $\cal B$. We will need the three cycles from Lemma \ref{pathi} which in our case are $Q_1=(0,1,2,3,4,5,6,7,8)$, $Q_2=(0,1,2,3,4,5,6,8,7)$ and $Q_3=(0,1,2,3,4,5,7,8,6)$, so that the new cycles are
\begin{align*}
B'_1 = & ((0,0), (-3,1),(1,2), (-5,3),(2,4),(-40,5),(3,6),(-41,7),(0,8)),\\
B'_2 = & ((0,0),(-10,1),(1,2),(-11,3),(2,4),(-13,5),(3,6),(-14,8),(0,7)), \\
B'_3 = & ((0,0),(-20,1),(1,2),(-21,3),(2,4),(-23,5),(3,7),(-24,8),(0,6)).
\end{align*}
For the cycle $B_4$ we need 9 different lifts $B'_{4,m}$ obtained through the cycles $P_m$, $m=0,1,\ldots,8$, of Lemma \ref{pathx}:
\begin{align*}
P_0 = & (0,7,8,1,2,3,5,6,4) & P_1= & (0,1,8,7,6,5,3,2,4) \\
P_2 = & (0,7,6,8,1,2,3,5,4) & P_3= & (0,8,7,6,5,3,1,2,4) \\
P_4 = & (0,2,3,4,5,7,8,6,1) & P_5= & (0,7,6,5,4,2,1,3,8) \\
P_6 = & (0,2,3,1,8,7,6,5,4) & P_7= & (0,8,1,2,3,5,7,6,4) \\
P_8 = &(0,2,1,8,7,6,5,3,4).
\end{align*}
For example, when $m=6$, the lift $B'_{4,6}$ of $B_4$ via $P_6$ is:
\[
B'_{4,6}= ((0, 0),(-30, 2),(1, 3), (-31, 1), (2, 8), (-33, 7), (3, 6), (-34,5),(0, 4)).
\]
Finally, we set ${\cal S}_m = \{A'_1, B'_1, B'_2, B'_3, B'_{4,m}\}$
for any $m \in \mathbb{Z}_9$. Of course,
$\Delta {\cal S}_m = \Delta A'_1 \ \cup \ $ $\Delta B'_1 \ \cup \ \ldots \ \cup \
\Delta B'_{4,m}$ where:
\begin{equation}\label{deltas}
\begin{aligned}
& \Delta A'_1 = \pm(\{8,9,28,29,40\}\times\{1\}) \ \cup \
\pm(\{18,19,38,39\}\times\{-1\}), \\
& \Delta B'_1 = \pm(\{4,7,41,43\}\times\{1\}) \ \cup \
\pm(\{3,6,42,44\}\times\{-1\}) \ \cup \ \{\pm(0,1)\}, \\
& \Delta B'_2 =
\pm(\{11,13,16\}\times\{1\}) \ \cup \ \pm(\{10,12,14,15\}\times\{-1\}) \\
& \quad\quad\quad\; \cup \ \{\pm(0,2), \pm(17,-2)\}, \\
& \Delta B'_3 =
\pm(\{21,23\}\times\{1\}) \cup \ \pm(\{20,22,25,27\}\times\{-1\}) \\
& \quad\quad\quad\; \cup \ \{\pm(0,3), \pm(24,-2), \pm(26,2)\}.
\end{aligned}
\end{equation}
In view of Lemma \ref{pathx}, the reader can check that
$\Delta B'_{4,m} \subset \Delta B_4 \times \{\pm1, \pm2\} \ \cup \ \{(0,\pm4)\}$
for any $m\in \mathbb{Z}_9$. For example, when $m=6$ we have
\begin{align*}
& \Delta B'_{4,6} =
\pm(\{31,35,37\}\times\{1\}) \cup \ \pm(\{34,36\}\times\{-1\})
\cup \ \pm(\{30,33\}\times\{-2\})\\
& \quad\quad\quad\; \cup \ \{\pm(0,4), \pm(32,2)\}.
\end{align*}
Since the projection of $\Delta {\cal S}_m$ on $\mathbb{Z}^*_{91}$ is the
set $\Delta A_1 \ \cup \ $ $\Delta B_1 \ \cup \ \ldots \ \cup \
\Delta B_{4}$ which therefore does not have repeated elements, we are guaranteed
that there exists a map $\varphi_{m}:
\mathbb{Z}_{91}^-\setminus (D \ \cup \ -D) \rightarrow \{\pm 1, \pm 2\}$,
where $D=\{2, 5\}$, that allows us to describe $\Delta {\cal S}_m$ as follows:
\begin{align*}
\Delta \mathcal{S}_m = &(\{0\}\times \mathbb{Z}_{9}^*) \ \cup \
\{(x, \varphi_{m}(x)) \;|\; x \in \mathbb{Z}_{91}^-\setminus (D\ \cup \ -D)\}.
\end{align*}
Note that $\varphi_{m}(-x) = -\varphi_{m}(x)$ for any
$x \in \mathbb{Z}_{91}^-\setminus (D\ \cup \ -D)$.
In particular, for $m=6$ the map $\varphi_m$ acts on
$[2,44]\setminus D$ as follows:
\begin{align*}
&\{(x, \varphi_{6}(x))\mid x \in [3,44]\setminus \{5\}\} =
\{(3, -1), (4, 1), (6, -1), (7, 1),(8, 1),\\
&\;\; (9, 1), (10, -1), (11, 1), (12, -1), (13, 1), (14, -1), (15, -1), (16, 1), (17, -2),\\
&\;\; (18, -1), (19, -1), (20, -1), (21, 1),(22, -1), (23, 1), (24, -2), (25, -1), (26, 2),\\
&\;\; (27, -1), (28, 1), (29, 1), (30, -2), (31, 1), (32, 2), (33, -2), (34, -1), (35, 1),\\
&\;\; (36, -1), (37, 1), (38, -1), (39, -1),(40, 1), (41, 1), (42, -1), (43, 1), (44, -1)\}.
\end{align*}
By \eqref{deltas}, we can easily see that condition 2 of Proposition \ref{shortcycles} is satisfied.
We are then left to compute the sum ${\sum}_m= \sum_{i\in \overline{D}} (-1)^i\varphi_{m}(i)$ where $\overline{D}=[2,44]\setminus D$. Note that the projection of
$\Delta {B'_{4,m}}$ on $\overline{D}$ is $[30,37]$.
Therefore, ${\sum}_{m} = S' + S_{4,m}$ where $S'$ and $S_{4,m}$ are the contributions
given by ${\cal S}_m \setminus \{B'_{4,m}\}$ and $B'_{4,m}$ to $\sum_m$, respectively.
In other words, for any $m\in\mathbb{Z}_9$ we have
\[
S'= \sum_{i\in\overline{D} \setminus [30,37]} (-1)^i\varphi_{m}(i) = 3
\quad\text{and}\quad
S_{4,m}=\sum_{i=30}^{37} (-1)^i\varphi_{m}(i).
\]
Note that $S'$ does not depend on $m$.
According to Lemmas \ref{specialcycles} and \ref{pathx} we have that $S_{4,m}=m$: the reader can easily check that
$S_{4,6}=6$ (check also Example \ref{exspecialcycles} for the case $m=4$).
Therefore, ${\sum}_{m} = m + S'$ for any $m \in \mathbb{Z}_9$; in particular, ${\sum}_{6} = 0$.
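The whole computation of this example can also be replayed mechanically; the following sketch (ours, purely illustrative) rebuilds the lifted cycles for $m=6$, recovers $\varphi_6$ from their differences and confirms that ${\sum}_{6}=0$.
\begin{verbatim}
# Rebuild the lifts of this example for m = 6 and check that sum_6 = 0.
M, L = 91, 9
A1 = (-3, 25, 7, 15, 24, 5, 34, -4, 36)
B  = [(0,-3,1,-5,2,-40,3,-41), (0,-10,1,-11,2,-13,3,-14),
      (0,-20,1,-21,2,-23,3,-24), (0,-30,1,-31,2,-33,3,-34)]
Q  = [(0,1,2,3,4,5,6,7,8), (0,1,2,3,4,5,6,8,7), (0,1,2,3,4,5,7,8,6)]
P6 = (0,2,3,1,8,7,6,5,4)

def lift(base, second):                     # append the extra vertex (0, *)
    return [(base[j], second[j]) for j in range(8)] + [(0, second[8])]

S6 = [[(A1[j], j) for j in range(9)],
      lift(B[0], Q[0]), lift(B[1], Q[1]), lift(B[2], Q[2]), lift(B[3], P6)]

phi = {}                                    # x -> phi_6(x), read off the differences
for c in S6:
    for (a, x), (b, y) in zip(c, c[1:] + c[:1]):
        d, e = (b - a) % M, (y - x) % L
        if d != 0:
            phi[d], phi[(-d) % M] = e, (-e) % L

Dbar = [i for i in range(2, 45) if i not in (2, 5)]
assert sorted(phi) == sorted({d % M for i in Dbar for d in (i, -i)})
assert sum((-1) ** i * phi[i] for i in Dbar) % L == 0
print("sum_6 = 0, so S_6 realises s = 0")
\end{verbatim}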
\end{ex}
\section{The main result}\label{main}
In this section we finish proving our main result, Theorem \ref{mainth}. We are trying to build a set of base cycles such that its list of differences is $\mathbb{Z}_{\ell\cdot(2\ell n+1)} \setminus \{0\}$; this will be done by completing the set
${\cal L} \ \cup \ {\cal S}$ of {transversal} cycles we have built in the two previous sections with two more {transversal} long cycles $C$ and $C'$ providing the $4\cdot(2\ell n+1)$ missing differences. These differences are a particular subset of elements of the form $\mathbb{Z}_{2\ell n+1}^*\times\{\pm 1,\pm 2\}$, together with the elements of the form $\mathbb{Z}_{2\ell n+1}^*\times\{0\}$.
Much of the work in this section will be to ensure that the differences from
$\mathbb{Z}_{2\ell n+1}^*\times\{\pm 1,\pm 2\}$ that appear in $C$ and $C'$
have not been already covered in ${\cal L} \ \cup \ {\cal S}$:
to this end, first we build an auxiliary function $G$.
Given a map $F:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$, we will briefly denote with $\sum F$ the integer
$\displaystyle\sum_{i=2}^{\ell n-1} (-1)^i F(i)$.
Also, recall that
$\overline{D}=[2, \ell n-1]\setminus D$.
\begin{lem}\label{G} Let $F:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$ be a map such that
\[
\sum F \equiv 0 \pmod{\ell}, \quad \quad F(-x)=-F(x) \;\; \text{for every} \; x\in \mathbb{Z}_{2\ell n+1}^-,
\]
and {assume that} the set $\{x \in \overline{D} \;|\; F(x)=(-1)^{x+1}\}$ has size $\geq \ell-1$.
Then, for every integer $\rho$ there exists a map $G:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$ such that:
\begin{enumerate}
\item $\sum G\equiv \rho\pmod{\ell}$;
\item $|G(x)| = 1$ (or $2$) if and only if $|F(x)| = 2$ (or $1$);
\item $G(-x)=-G(x)$ for every $x\in \mathbb{Z}_{2\ell n+1}^-.$
\end{enumerate}
\end{lem}
\begin{proof}
Let $g:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$ be the map defined as follows:
\[
g(x)=
\begin{cases}
2F(x) & \text{if $F(x)=\pm 1$},\\
\frac{F(x)}{2} & \text{if $F(x)=\pm 2$}.
\end{cases}
\]
We will get the map required in the statement by slightly modifying $g$. \\
Let $t\in [0,\ell-1]$ be such that $t \equiv (\sum g - \rho)(\ell-1)/4 \pmod{\ell}$.
By assumption, there is a set $X \subseteq \overline{D}$ of size $t$ such that $F(x)=(-1)^{x+1}$ for any $x \in X$.
We define a map $G:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$ as follows
\[
G(x)=
\begin{cases}
-g(x) & \text{if $x\in X \ \cup \ -X$},\\
g(x) & \text{otherwise}.
\end{cases}
\]
Note that $g(x) = (-1)^{x+1} \cdot 2$ for $x\in X$, hence
\begin{equation}\label{g+4}
-g(x) = g(x) + (-1)^x\cdot 4 \;\;\;\text{for any}\;\;\; x\in X.
\end{equation}
It is easily seen, keeping in mind the definition of $g$ and the fact that $|G(x)|=|g(x)|$ for $x \in \mathbb{Z}_{2\ell n+1}^-$, that
properties 2 and 3 hold. Let us prove that the first property is also satisfied.
Let $S_1, S_2,$ and $S_3$ be the partial sums defined below:
\[S_1= \displaystyle\sum_{i\in \overline{X}} (-1)^{i}G(i), \;\;\; S_2=\displaystyle\sum_{i\in X_e} G(i), \;\;\; \mbox{and} \;\;\; S_3=-\displaystyle\sum_{i\in X_o} G(i).\]
where $X_e$ (resp. $X_o$) denotes the set of even (resp. odd) integers in $X$, and $\overline{X}=[2,\ell n-1]\setminus X$; of course,
\[\sum G = S_1 + S_2 + S_3 \;\;\; \text{and}\;\;\; t=|X| = |X_e|+|X_o|.\]
Taking \eqref{g+4} into account and recalling how $G$ is defined, it is straightforward to check that we have:
\[
S_1 = \displaystyle\sum_{i\in \overline{X}} (-1)^i g(i), \;\;\;\;\;
S_2 = \displaystyle\sum_{i\in X_e} -g(i) = \displaystyle\sum_{i\in X_e} (g(i) + 4) =
\displaystyle\sum_{i\in X_e} g(i) + 4\cdot|X_e|,
\]
\[
\mbox{and} \;\;\; S_3 = \displaystyle\sum_{i\in X_o} g(i) = \displaystyle\sum_{i\in X_o} (-g(i) + 4) =
\displaystyle\sum_{i\in X_o} -g(i) + 4\cdot|X_o|.
\]
By recalling how $t$ is defined, it follows that
\[
\begin{aligned}
\sum G = &\sum_{i=2}^{\ell n-1} (-1)^ig(i) + 4\cdot (|X_e| + |X_o|) = \\
&\sum g + 4t \equiv \ell(\sum g - \rho) + \rho \equiv \rho \pmod{\ell}.
\end{aligned}
\]
We have therefore proven that property 1 holds, and this completes the proof.
\end{proof}
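The construction in the proof is entirely explicit and may be experimented with; the sketch below (ours, illustrative only) implements it verbatim and tests the three properties for every $\rho\in\mathbb{Z}_9$, using an artificial test map $F$ chosen solely so that the hypotheses of the lemma hold.
\begin{verbatim}
# The construction of G from F in the proof above, exercised on a test map F.
def build_G(F, ell, n, Dbar, rho):
    rng = range(2, ell * n)                             # 2, ..., ell*n - 1
    g = {x: 2 * F[x] if abs(F[x]) == 1 else F[x] // 2 for x in F}
    sum_g = sum((-1) ** i * g[i] for i in rng) % ell
    t = ((sum_g - rho) * ((ell - 1) // 4)) % ell
    X = [x for x in Dbar if F[x] == (-1) ** (x + 1)][:t]
    assert len(X) == t                                  # enough such elements exist
    return {x: (-g[x] if (x in X or -x in X) else g[x]) for x in g}

# test data: ell = 9, n = 5, D = {2, 5}; F is odd, has values in {+-1, +-2},
# satisfies sum F = 0 (mod 9), and F(x) = (-1)^(x+1) on far more than ell-1 elements
ell, n, D = 9, 5, {2, 5}
F = {}
for x in range(2, ell * n):
    F[x] = 2 if x in (3, 7) else (-1) ** (x + 1)
    F[-x] = -F[x]
Dbar = [x for x in range(2, ell * n) if x not in D]
assert sum((-1) ** i * F[i] for i in range(2, ell * n)) % ell == 0

for rho in range(ell):
    G = build_G(F, ell, n, Dbar, rho)
    assert sum((-1) ** i * G[i] for i in range(2, ell * n)) % ell == rho    # (1)
    assert all({abs(F[x]), abs(G[x])} == {1, 2} for x in F)                 # (2)
    assert all(G[-x] == -G[x] for x in range(2, ell * n))                   # (3)
print("properties 1-3 verified for every rho in Z_9")
\end{verbatim}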
We are now able to prove the main result of this paper.\\
\noindent
{\bf Theorem \ref{mainth}.}\,
{\it
HWP$(\ell(2\ell n+1);[\ell^{2\ell n+1}],[(2\ell n+1)^\ell];\ell n,
{\frac{(\ell-1)(2\ell n+1)}{2}})$
admits a cyclic solution for any $\ell \equiv 1 \pmod{4}$ and $n\geq (\ell-1)/2$.
}
\begin{proof}
Let $\ell=4k+1$ and assume $n\geq 2k$.
Once more we assume $k\geq 2$, since the assertion has been proven in \cite{BuDa} for $k=1$.
We first take a set ${\cal L}$ of $2k-2$ transversal $(2\ell n +1)$-cycles as in Proposition \ref{longcycles}, and set $s=-\sum_{i\in D} (-1)^i \ f(i)$, where $f$ is the map from Proposition \ref{longcycles}.
Then we take a set ${\cal S}$ of $n$ transversal $\ell$--gons as in Proposition \ref{shortcycles}, choosing $s$ as above:
it then follows that
$s= \sum_{i\in \overline{D}} (-1)^i \varphi(i)$ where $\overline{D}=[2,\ell n-1]\setminus D$
and $\varphi$ is the map from Proposition \ref{shortcycles}.
To apply Theorem \ref{BR} we have to find two transversal $(2\ell n+1)$-cycles $C$ and $C'$ of $K_{\ell(2\ell n+1)}$ whose differences coincide with the complement of $\Delta{\cal S} \ \cup \ \Delta{\cal L}$ in $(\mathbb{Z}_{2\ell n+1}\times\mathbb{Z}_\ell)\setminus\{(0,0)\}$.
Let $F$ be the map $F:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$ obtained by glueing together the maps $f$ and
$\varphi$:
\begin{equation}\label{mapF}
F(x)=
\begin{cases}
f(x) & \text{if $x\in D \ \cup \ -D$},\\
\varphi(x) & \text{if $x\in \overline{D} \ \cup \ -\overline{D}$}.
\end{cases}
\end{equation}
Note that we can write $\Delta \mathcal{S} \ \cup \ \Delta \mathcal{L}$ as the disjoint union of the following sets:
\begin{equation}
\Delta_1 = \{0\} \ \times \ \mathbb{Z}_{\ell}^*; \quad\quad
\Delta_2= \mathbb{Z}^*_{2\ell n+1} \ \times \ \bigl(\mathbb{Z}_{\ell}\setminus\{0,\pm1,\pm2\}\bigr);
\end{equation}
\[\displaystyle \Delta_3 = \bigcup_{i\in \mathbb{Z}_{2\ell n+1}^-} \{i\} \ \times \ \{F(i)\}.
\]
One can check that $F$ satisfies the assumptions of Lemma \ref{G}. In fact,
\begin{equation}\label{sumF}
\sum F = s + \sum_{i\in D} (-1)^i \ f(i) \equiv 0\pmod{\ell}.
\end{equation}
Moreover, it is clear that the map $F$ has the property that $F(-x)= -F(x)$ for every $x \in \mathbb{Z}_{2\ell n+1}^-$ since $\Delta \mathcal{S} \ \cup \ \Delta \mathcal{L}$ is obviously symmetric.
Finally, Proposition \ref{shortcycles}.(2) ensures that $\{x \in \overline{D} \;|\; F(x)=(-1)^{x+1}\}$
has size $\geq \ell-1$. Then, by Lemma \ref{G} there is a map $G:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$ such that
\begin{equation}\label{sumG}
\sum G \equiv -1\pmod{\ell};
\end{equation}
\begin{equation}\label{GandF}
|G(x)| = 1 \;\; (\text{or}\; 2) \quad \text{if and only if} \quad |F(x)| = 2 \;\;(\text{or}\; 1);
\end{equation}
\begin{equation}\label{oddmap}
G(-x)=-G(x)\quad \text{for every}\quad x\in \mathbb{Z}_{2\ell n+1}^-.
\end{equation}
Let us construct the {transversal}
$(2\ell n+1)$-cycle $C=(c_0,c_1,\dots,c_{2\ell n})$ of $K_{\ell(2\ell n+1)}$ with $c_i=(x_i,y_i) \in \mathbb{Z}_{2\ell n+1}\times\mathbb{Z}_\ell$ defined as follows:
\[x_i=(-1)^{i+1}\biggl\lfloor{\frac{i+1}{2}}\biggr\rfloor\quad\mbox{for }0\leq i\leq 2\ell n;\]
\[
y_i=
\begin{cases}
1+\displaystyle\sum_{j=2}^i (-1)^j F(j) & \text{for $i\in [2,\ell n-1]$,}\\
1+\displaystyle\sum_{j=\ell n+2}^i (-1)^j G(j) & \text{for $i \in [\ell n+2, 2\ell n-1]$;}
\end{cases}
\]
$$(y_0,y_1,y_{\ell n},y_{\ell n+1},y_{2\ell n})=(0,1,2,1,-2).$$
Now construct another {transversal} $(2\ell n+1)$-cycle $C'=(c'_0,c'_1,\dots,c'_{2\ell n})$
of $K_{\ell(2\ell n+1)}$ with $c'_i=(x'_i,y'_i)$ defined as follows.
1st case: $n$ is even.
\[
x'_i=
\begin{cases}
x_i &\mbox{for $i \in [0, \ell n]$,} \\
-x_i &\mbox{for $i \in [\ell n+1, 2\ell n]$;}
\end{cases}\quad\quad
y'_i=
\begin{cases}
0 & \mbox{for $i\in [0,\ell n]$,} \\
y_i &\mbox{for $i \in [\ell n+1, 2\ell n]$.}
\end{cases}
\]
2nd case: $n$ is odd.
\[
x'_i=
\begin{cases}
x_i &\mbox{for $i \in [0, \ell n-1]$,} \\
-x_i &\mbox{for $i \in [\ell n, 2\ell n]$;}
\end{cases}\quad\quad
y'_i=
\begin{cases}
0 & \mbox{for $i\in [0,\ell n-1]$,} \\
1 & \mbox{for $i=\ell n$,} \\
y_i &\mbox{for $i \in [\ell n+1, 2\ell n]$.}
\end{cases}
\]
Let $\pi(C)$ and $\pi(C')$ be the projections of $C$ and $C'$ on $\mathbb{Z}_{2\ell n+1}$. It is clear that
\[\pi(C)=(0,1,-1,2,-2,\dots,\ell n,-\ell n).\]
The expression of $\pi(C')$ is similar but there is a twist in the middle, namely,
\[ \pi(C')=(0,1,-1,2,-2,\dots,\nu,-\nu,-(\nu+1),\nu+1,\dots,-\ell n,\ell n),
\quad\text{where $\nu = \left\lfloor\frac{\ell n}{2}\right\rfloor$}.
\]
In any case the above remark guarantees that both $C$ and $C'$ are transversals. Now we need to
calculate the lists of differences of $C$ and $C'$. This is easily done, but it is important to note first that we have
\begin{equation}\label{f implies}
y_{\ell n-1}=1 \quad\quad \text{and} \quad\quad y_{2\ell n-1}=0.
\end{equation}
Indeed, by definition, we have $y_{\ell n-1}=1+\sum F$.
Again by definition, we have $y_{2\ell n-1}=1+\sigma$ with
$\sigma=\sum_{i=\ell n+2}^{2\ell n-1}(-1)^iG(i)$.
Taking \eqref{oddmap} into account, it is straightforward to check that
$\sigma = \sum_{i=2}^{\ell n-1}(-1)^{i+1}G(-i) = \sum G$. Hence, in view of \eqref{sumF} and \eqref{sumG}, we have that (\ref{f implies}) holds.
Let us consider the following subpaths of the cycle $C$:
\[
P_1=(c_1,c_2,\dots,c_{\ell n-1});\quad
P_2=(c_{\ell n-1},c_{\ell n},c_{\ell n+1});
\]
\[
P_3=(c_{\ell n+1},c_{\ell n+2},\dots,c_{2\ell n-1});
\quad P_4=(c_{2\ell n-1},c_{2\ell n},c_{0},c_1).
\]
Taking (\ref{f implies}) into account, it is straightforward to check that we have:
\begin{description}
\item $\Delta P_1=\pm\{(i,-F(i)) \ | \ 2\leq i\leq \ell n-1\}$;
\item $\Delta P_2= \pm\{(\ell n,-1),(\ell n+1,-1)\} =
\pm\{(\ell n,-1),(\ell n,1)\}$;
\item $\Delta P_3 = \pm\{(i,-G(i)) \ | \ \ell n+2\leq i\leq 2\ell n-1\}
= \pm\{(i,-G(i)) \ | \ 2\leq i\leq \ell n-1\}$;
\item $\Delta P_4=\pm\{(1,-2),(\ell n,2),(1,1)\}$.
\end{description}
Now consider the following subpaths of the cycle $C'$:
\[
P'_1=(c'_0,c'_1,\dots,c'_{\ell n});\quad
P'_2=(c'_{\ell n},c'_{\ell n+1});
\]
\[
P'_3=(c'_{\ell n+1},c'_{\ell n+2},\dots,c'_{2\ell n-1});\quad P'_4=(c'_{2\ell n-1},c'_{2\ell n},c'_{0}).
\]
Also here, in view of (\ref{f implies}), we can easily check that we have:
\begin{description}
\item
$\Delta P_1'=
\begin{cases}
\mathbb{Z}_{2\ell n+1}^*\times\{0\} & \text{for $n$ even;}\\
(\mathbb{Z}_{2\ell n+1}^*\setminus\{\pm\ell n\})\times\{0\} \ \cup \ \pm\{(1,-1)\} & \text{for $n$ odd;}
\end{cases}
$
\item
$
\Delta P_2'=
\begin{cases}
\pm\{(1,-1)\} & \mbox{for $n$ even;}\\
\pm\{(\ell n,0)\}& \mbox{for $n$ odd;}
\end{cases}
$
\item $\Delta P_3'=\pm\{(i,G(i)) \ | \ 2\leq i\leq \ell n-1\}$; \quad $\Delta P_4'=\pm\{(1,2),(\ell n,-2)\}$.
\end{description}
It is evident that $C$ is the union of the pairwise edge-disjoint paths $P_i$s and that $C'$ is the union of the edge-disjoint paths $P'_i$s; we can therefore write:
$$\Delta\{C,C'\}=\Delta\{P_1,P_2,P_3,P_4,P'_1,P'_2,P'_3,P'_4\}.$$
In this way we see that $\Delta\{C,C'\}$ is the union of the following pairwise disjoint lists:
\[\Delta_4=\mathbb{Z}_{2\ell n+1}^*\times\{0\}; \quad\quad
\Delta_5=\{1,\ell n,\ell n+1,2\ell n\}\times \{\pm1,\pm2\};
\]
\[
\Delta_6=\bigcup_{i\in\mathbb{Z}_{2\ell n+1}^-} \{i\}\times\{-F(i),G(i),-G(i)\}.
\]
Note that $\{1,\ell n,\ell n+1,2\ell n\}=\mathbb{Z}_{2\ell n+1}^*\setminus \mathbb{Z}_{2\ell n+1}^-$.
Also, taking \eqref{GandF} into account we can write $\{-F(i),G(i),-G(i)\}=\{\pm1,\pm2\}\setminus\{F(i)\}$ for every $i\in\mathbb{Z}_{2\ell n+1}^-$.
Therefore, recalling that $\Delta_3 = \bigcup_{i\in \mathbb{Z}_{2\ell n+1}^-} \{i\} \ \times \ \{F(i)\}$, we have that
\[\Delta_3 \ \cup \ \Delta_5 \ \cup \ \Delta_6 = \mathbb{Z}^*_{2\ell n+1} \ \times \ \{\pm1, \pm2\}. \]
We then conclude that $\Delta\{C,C'\} \ \cup \ \Delta{\cal S} \ \cup \ \Delta{\cal L} = \mathbb{Z}_{2\ell n+1}\times\mathbb{Z}_{\ell}\setminus\{(0,0)\}$, namely, $\Delta\{C,C'\}$ is exactly the complement of
$\Delta{\cal S} \ \cup \ \Delta{\cal L}$ in $\mathbb{Z}_{2\ell n+1}\times\mathbb{Z}_{\ell}\setminus\{(0,0)\}$.
This means that ${\cal S}$ and ${\cal L}'={\cal L}\cup\{C,C'\}$ are two sets of short and long transversal cycles, respectively, as required by Theorem \ref{BR},
and the assertion follows.
\end{proof}
\begin{ex}
Here, we solve HWP$(9\cdot 91;[9^{91}],[91^9];45, 364)$
by showing the existence of two sets
of short and long transversal cycles as required by Theorem \ref{BR}.
Let $\ell=9$ and $n=5$. Note that $D=\{2,5\}$ and $\overline{D}=[2,44]\setminus\{2,5\}$;
also, $\mathbb{Z}^-_{91} =\pm [2,44]$.
The set ${\cal L}=\{C_1',C_2'\}$ built in Example \ref{exlongcycle} according to Proposition \ref{longcycles} is such that
\begin{equation}\label{fD}
\Delta {\cal L} = \bigl(\mathbb{Z}_{91}^* \times (\mathbb{Z}_9 \setminus \{0,\pm1,\pm2\})\bigr) \ \cup \
\{\pm(2,f(2)), \pm (5,f(5))\},
\end{equation}
where
$f(2) = f(5) = -1$.
Now set $s=-\sum_{i\in D} (-1)^i \ f(i) = 0$. We need a set ${\cal S}$ of $5$ transversal $9$--gons as in Proposition \ref{shortcycles}, choosing $s$ as above. It is enough to take ${\cal S}$ to be the set of
{transversal} short cycles constructed in Example \ref{exshortcycles} for $m=6$.
Let $F:\mathbb{Z}_{2\ell n+1}^- \rightarrow \{\pm 1, \pm2\}$ be the map \eqref{mapF}
obtained by glueing
together the maps $f$ and $\varphi$. Recall that $F(-i)=-F(i)$ for any $i\in \mathbb{Z}^-_{91}$; also,
by collecting the values of $\varphi$ from Example \ref{exshortcycles}, we have that
\begin{footnotesize}
\begin{align*}
\{ (i, F(i)) \;|\; i \in [2,44] \} =
\{(2,8), (3, 8), (4, 1), (5,8), (6, 8), (7, 1), (8, 1), (9, 1) &, \\
(10, 8), (11, 1), (12, 8), (13, 1), (14, 8), (15, 8), (16, 1), (17, 7), (18, 8) &,\\
(19, 8), (20, 8), (21, 1), (22, 8), (23, 1), (24, 7), (25, 8), (26, 2), (27, 8) &,\\
(28, 1), (29, 1), (30, 7), (31, 1), (32, 2), (33, 7), (34, 8), (35, 1), (36, 8) &,\\
(37, 1), (38, 8), (39, 8), (40, 1),(41, 1), (42, 8), (43, 1), (44, 8) & \}.
\end{align*}
\end{footnotesize}
Note that $\Delta \mathcal{S} \ \cup \ \Delta \mathcal{L}$ is the disjoint union of the following sets:
\[
\Delta_1 = \{0\} \times \mathbb{Z}_{9}^*; \quad
\Delta_2= \mathbb{Z}^*_{91} \times \bigl(\mathbb{Z}_{9}\setminus\{0,\pm1,\pm2\}\bigr); \quad
\Delta_3 = \{(i,F(i)) \;|\;i\in\mathbb{Z}_{91}^-\}.
\]
We are going to build two remaining transversal 91-cycles $C$ and $C'$ of
$K_{9\cdot91}$ whose differences coincide with the complement of
$\Delta{\cal S} \ \cup \ \Delta{\cal L}$ in $(\mathbb{Z}_{91}\times\mathbb{Z}_9)\setminus\{(0,0)\}$, that is,
$\Delta C \ \cup \ \Delta C' = \Delta_4 \ \cup \ \Delta_5 \ \cup \ \Delta_6$ where:
\[\Delta_4=\mathbb{Z}_{91}^*\times\{0\}; \quad
\Delta_5=\{\pm1,\pm45\}\times \{\pm1,\pm2\};
\]
\[
\Delta_6=\bigcup_{i\in\mathbb{Z}_{91}^-} \{i\}\times(\{\pm1,\pm2\}\setminus\{F(i)\}).
\]
As a result, we obtain two sets ${\cal S}$ and ${\cal L}'={\cal L}\cup\{C,C'\}$
of short and long transversal cycles, respectively, as required by Theorem \ref{BR}.
This guarantees the existence of a solution to
HWP$(9\cdot 91;[9^{91}],[91^9];45, 364)$.
Let us start with $C=((x_0,y_0),(x_1,y_1),\dots,(x_{90},y_{90}))$. By construction,
$(x_0, x_1, \ldots,x_{90}) = (0,1,-1,2,-2,\ldots, 45,-45)$; also,
it is possible to check that the sequence
$(y_2,y_3,\dots,y_{44})$ of second components is the string
\begin{align*}
(0, 1, 2, 3, 2, 1, 2, 1, 0, 8, 7, 6, 5, 6, 7, 0, 8, 0, 8, 7, 6, 5, 3, 4, 6, 7, 8, 7, \\
5, 4, 6, 8, 7, 6, 5, 4, 3, 4, 5, 4, 3, 2, 1)
\end{align*}
with $y_{44}=1$ as required in (\ref{f implies}).
We have that $(y_0,y_1,y_{45},y_{46},y_{90})$ are defined in the construction to be $(0,1,2,1,7)$.
It remains to apply Lemma \ref{G}, with $\rho=-1$, to find the function $G$ and thus
$(y_{47},\dots,y_{89})$.
The quantity $t$ required in the proof of the lemma is $t \equiv 2(\sum g + 1)\;\pmod{9}$,
where $g:\mathbb{Z}^-_{91} \rightarrow \{\pm1, \pm2\}$ is the map defined as follows: $g(x) = 2\cdot F(x)$ or $F(x)/2$ according to whether $F(x)=\pm1$ or $F(x)= \pm2$. One can check that $t = 8$. Now, a set $X\subset[2,44]$ of size $t$
with the property that $F(x) = (-1)^{x+1}$ for all $x\in X$ can be taken to be $X= \{10,11,12,13,14,18,20,21\}$. We can now obtain
the required map $G:\mathbb{Z}^-_{91} \rightarrow \{\pm1, \pm2\}$ defined as follows: $G(x)=-g(x)$ or $g(x)$ according to whether $x \in X \ \cup \ -X$ or not. One can check that $G(-x) = -G(x)$ for $x \in \mathbb{Z}^-_{91}$ and
\begin{footnotesize}
\begin{align*}
\{ (i, G(i)) \;|\; i \in [47,89] \} =
\{(47,2),(48,7),(49,2),(50,7),(51,7),(52,2),(53,2),(54,7) &,\\
(55,2),(56,7),(57,2),(58,1),(59,8),(60,7),(61,1),(62,7),(63,7) &,\\
(64,2),(65,8),(66,2),(67,1),(68,7),(69,2),(70,2),(71,7),(72,2) &,\\
(73,7),(74,1),(75,7),(76,2),(77,7),(78,2),(79,7),(80,2),(81,7) &,\\
(82,7),(83,7),(84,7),(85,2),(86,2),(87,7),(88,2),(89,2) &\}.
\end{align*}
\end{footnotesize}The remaining list of second components $(y_{47},\dots,y_{89})$ can then be computed, giving us the list
\begin{align*}
(8,6,4,2,4,6,4,2,0,7,5,6,7,5,4,2,4,6,7,0,8,6,4,6,8,\\
1,3,4,6,8,1,3,5,7,0,7,0,7,5,7,0,2,0)
\end{align*}
so that $y_{89}=0$ as needed in (\ref{f implies}). Some minor tweaks are required for the cycle
$C'=((x'_0,y'_0),(x'_1,y'_1),\dots,(x'_{90},y'_{90}))$;
here the first components $(x'_0,x'_1,\dots,x'_{90})$ are $(0,1,-1,2,-2,\dots,22,-22,$ $-23,23,-24,24, \ldots, -45,45)$, and the second components are simply
$y'_0=y'_1=\dots=y'_{44}=0$,
$y'_{45}=1$, and
$y'_{46}=y_{46}, y'_{47}=y_{47}, \dots, y'_{90}=y_{90}$. We can finally check that:
\[\Delta C = \{(i,-F(i)), (i,-G(i)) \;|\; i\in\mathbb{Z}^-_{91}\} \ \cup \
\pm\{(45,\pm1), (1,-2), (45,2), (1,1)\}, \]
\[\Delta C' = \mathbb{Z}^*_{91}\times\{0\} \ \cup \ \{(i,G(i)) \;|\; i\in\mathbb{Z}^-_{91}\} \ \cup \
\pm\{(1,-1), (1,2), (45,-2)\}.\]
Therefore $\Delta C \ \cup \ \Delta C' = \Delta_4 \ \cup \ \Delta_5 \ \cup \ \Delta_6$,
since $\pm\{F(i), G(i)\} = \pm\{1, 2\}$ for any $i\in \mathbb{Z}^-_{91}$.
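The numerical bookkeeping of this example can be re-done by machine. The sketch below (ours, illustrative only) encodes the values of $F$ listed above, reconstructs $G$ exactly as in Lemma \ref{G} with the set $X$ chosen above, and confirms $t=8$ together with condition (\ref{f implies}), that is $y_{44}=1$ and $y_{89}=0$.
\begin{verbatim}
# Re-check the key numbers of this example: t = 8, y_44 = 1, y_89 = 0.
ell, M = 9, 91
res = [8,8,1,8,8,1,1,1,8,1,8,1,8,8,1,7,8,8,8,1,8,1,7,8,2,8,1,
       1,7,1,2,7,8,1,8,1,8,8,1,1,8,1,8]                # F(2), ..., F(44) mod 9
F = {i + 2: (v if v <= 2 else v - 9) for i, v in enumerate(res)}
F.update({-x: -v for x, v in list(F.items())})

g = {x: 2 * v if abs(v) == 1 else v // 2 for x, v in F.items()}
sum_g = sum((-1) ** i * g[i] for i in range(2, 45)) % ell
t = (2 * (sum_g + 1)) % ell                            # rho = -1
assert t == 8

X = {10, 11, 12, 13, 14, 18, 20, 21}
assert len(X) == t and all(F[x] == (-1) ** (x + 1) for x in X)
G = {x: (-g[x] if (x in X or -x in X) else g[x]) for x in g}

y44 = (1 + sum((-1) ** j * F[j] for j in range(2, 45))) % ell
y89 = (1 + sum((-1) ** j * G[j - M] for j in range(47, 90))) % ell
assert (y44, y89) == (1, 0)                            # condition (f implies)
print("t =", t, "; y_44 =", y44, "; y_89 =", y89)
\end{verbatim}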
\end{ex}
\section*{Acknowledgments}
The authors are very grateful to the anonymous referees for the careful reading,
and for the many comments and suggestions that improved the readability of this paper.
\end{document} |
\begin{document}
\title[Systems of diagonal cubic forms]{The Hasse principle for systems\\ of diagonal cubic forms}
\author[J\"org Br\"udern]{J\"org Br\"udern}
\address{Mathematisches Institut, Bunsenstrasse 3--5, D-37073 G\"ottingen, Germany}
\email{[email protected]}
\author[Trevor D. Wooley]{Trevor D. Wooley}
\address{School of Mathematics, University of Bristol, University Walk, Clifton, Bristol BS8 1TW, United Kingdom}
\email{[email protected]}
\subjclass[2010]{11D72, 11P55, 11E76}
\keywords{Cubic Diophantine equations, Hardy-Littlewood method.}
\thanks{The authors are grateful to the Hausdorff Research Institute for Mathematics in
Bonn for excellent working conditions that made the writing of this paper feasible. The
support of the Akademie der Wissenschaften zu G\"ottingen is also gratefully
acknowledged.}
\date{}
\begin{abstract} We establish the Hasse Principle for systems of $r$ simultaneous diagonal cubic equations
whenever the number of variables exceeds $6r$ and the associated coefficient matrix contains no singular
$r\times r$ submatrix, thereby achieving the theoretical limit of the circle method for such systems.
\end{abstract}
\maketitle
\section{Introduction} The Diophantine analysis of systems of diagonal equations was pioneered by
Davenport and Lewis with a pivotal contribution on pairs of cubic forms \cite{DL1966}, followed by work
on more general systems \cite{DL1969}. For natural numbers $r,s$ and an $r\times s$ integral matrix
$(c_{ij})$, they applied the circle method to the system
\begin{equation}\label{1.1}
\sum_{j=1}^sc_{ij}x_j^3=0\quad (1\le i\le r),
\end{equation}
and when $s\ge 27r^2\log 9r$ were able to show that
(\ref{1.1}) has infinitely many primitive integral solutions. Even a casual practitioner in the field will
acknowledge that the implicit use of mean values demands at least $6r+1$ variables in the system for the
circle method to be applicable. We now attain this theoretical limit, surmounting the obstacles encountered
by previous writers.
\begin{theorem}\label{theorem1.1} Let $s>6r$ and suppose that the matrix $(c_{ij})$ contains no singular
$r\times r$ submatrix. Then, whenever the system (\ref{1.1}) has non-zero $p$-adic solutions for all
primes $p$, it has infinitely many primitive integral solutions.
\end{theorem}
The conclusion of Theorem \ref{theorem1.1} may be interpreted as a Hasse principle for
systems of diagonal cubic forms in general position. As we remark in \S4, the condition on the matrix of
coefficients can be relaxed considerably. Should the local solubility conditions be met, our methods show
that the number $N(P)$ of integral solutions of (\ref{1.1}) with ${\mathbf x}\in [-P,P]^s$ satisfies
$N(P)\gg P^{s-3r}$. We note that work of the first author joint with Atkinson and Cook \cite{ABC1992}
implies that for $p>9^{r+1}$ the $p$-adic solubility hypothesis in Theorem \ref{theorem1.1} is void.
\par
Early work on this subject concentrated on methods designed to disentangle the system so as to invoke
results on single equations. The most recent such contribution is Br\"udern and Cook \cite{BC1992} where
the condition $s>7r$ is imposed on the number of variables. Such methods are incapable of establishing the
conclusion of Theorem \ref{theorem1.1} unless one is prepared to invoke conditional mean value estimates
that depend on speculative Riemann hypotheses for global Hasse-Weil $L$-functions (see Hooley
\cite{Hoo1986, Hoo1996} and Heath-Brown \cite{HB1998}).\par
When $r=1$, the conclusion of Theorem \ref{theorem1.1} is due to Baker \cite{Bak1989}. For $r\ge 2$,
the present authors \cite{BW2002} identified features of fully entangled systems of equations which permit
highly efficient use of divisor estimates in bounding associated multidimensional mean values. These allow
treatment of systems in $6r+3$ variables. By a method special to the case $r=2$, we established that case
of Theorem \ref{theorem1.1} in more general form (see \cite{BW2007}). In this paper we instead develop
a recursive process that relates mean values associated with the original system to a one-dimensional sixth
moment of a smooth Weyl sum on the one hand, and on the other to another system of the shape
(\ref{1.1}), but of much larger format. The new system is designed in such a way that the methods of
\cite{BW2002} provide very nearly square-root cancellation. By comparison with older routines, we are
forced to incorporate the losses implied by the use of a sixth moment of a smooth Weyl sum only once, as
opposed to $r$ times (in \cite{BC1992}, for example).\par
Our recursive process relies on an analytic inequality that is simple to describe. Suppose that $1\le r<R$,
and that $G(\alpha_1,\ldots ,\alpha_r)$ and $F(\alpha_1,\ldots ,\alpha_R)$ are exponential sums, and consider the
integral
$$I(F,G)=\int_0^1\ldots \int_0^1 G(\alpha_1,\ldots ,\alpha_r)F(\alpha_1,\ldots ,\alpha_R)\,{\rm d}\alpha_1 \ldots \,{\rm d}\alpha_R .
$$
Then by Schwarz's inequality, one finds that $I(F,G)^2\le I_1I_2$, with
\begin{equation}\label{1.2}
I_1=\int_0^1\ldots \int_0^1\biggl| \int_0^1\ldots \int_0^1 F(\alpha_1,\ldots ,\alpha_R)\,{\rm d}\alpha_{r+1}\ldots
\,{\rm d}\alpha_R \biggr|^2\,{\rm d}\alpha_1\ldots \,{\rm d}\alpha_r
\end{equation}
and
\begin{equation}\label{1.3}
I_2=\int_0^1\ldots \int_0^1 |G(\alpha_1,\ldots ,\alpha_r)|^2\,{\rm d}\alpha_1\ldots \,{\rm d}\alpha_r.
\end{equation}
In our application of this inequality, the integral $I(F,G)$ will count the number of solutions of a system of
$R$ linear equations to be solved in integral cubes. We shall take $r=1$ and $G(\alpha_1)$ equal to an
exponential sum related to sums of three cubes. Then, the mean square (\ref{1.3}) is a sixth moment of
cubic Weyl sums for which strong bounds are available. Also, on opening the square in $I_1$, a Diophantine
interpretation of (\ref{1.2}) with $2R-r$ equations is induced. It transpires that this procedure can be
repeated, achieving a satisfactory bound for $I(F,G)$ whenever a good bound for the mean square
(\ref{1.3}) is partnered with good control for the high-dimensional iterates of $I_1$ that arise from the
recursion. While inspired by the work of Gowers \cite{Gow2001}, the procedure sketched here is in
principle very flexible. For example, variants may be developed involving higher moments.\par
This paper is organised as follows. We begin in \S2 by describing the linked block matrices underpinning
our new mean value estimates. By using an argument motivated by our earlier work \cite{BW2002}, we
derive strong estimates associated with Diophantine systems having six times as many variables as
equations. Next, in \S3, by repeated application of Schwarz's inequality, we transform an initial system of
equations into a more complicated system of the type just analysed. Thus, a powerful mean value estimate
is obtained that leads in \S4 via the circle method to the proof of Theorem \ref{theorem1.1}.\par
Our basic parameter is $P$, a sufficiently large positive number. In this paper, implicit constants in
Vinogradov's notation $\ll$ and $\gg$ may depend on $s$, $r$ and $\varepsilon$, as well as ambient coefficients.
Whenever $\varepsilon$ appears in a statement, either implicitly or explicitly, we assert that the statement holds
for each $\varepsilon>0$. We employ the convention that whenever $G:[0,1)^k\rightarrow {\mathbb C}$ is integrable,
then
$$\oint G({\boldsymbol \alpha})\,{\rm d}{\boldsymbol \alpha} =\int_{[0,1)^k}G({\boldsymbol \alpha})\,{\rm d}{\boldsymbol \alpha} .$$
Here and elsewhere, we use vector notation in the natural way. Finally, we write $e(z)$ for $e^{2\pi iz}$
and put $\|\theta\|=\min\{|\theta-m|:m\in {\mathbb Z}\}$.
\section{Auxiliary equations} We begin by defining a strong form of non-singularity satisfied by almost all
coefficient matrices. We refer to an $r\times s$ matrix $A$ as {\it highly non-singular} when any subset of
at most $r$ columns of $A$ is linearly independent. For example, the matrix
$$B={\tiny{\left( \arraycolsep=1.6pt\begin{array}{*{9}c}
7&1&4&8&8&4&9&8&1\\
7&5&6&3&3&7&1&7&8\\
9&4&5&7&1&6&5&3&6\\
6&3&3&8&8&6&9&9&3\end{array}\right)}}$$
is highly non-singular, as the reader may care to check.
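Readers preferring to delegate this check may use the following sketch (ours, purely illustrative): it verifies that every set of four columns of $B$ is linearly independent, equivalently that no $4\times 4$ submatrix of $B$ is singular, which is the content of the highly non-singular condition for this matrix.
\begin{verbatim}
# Verify that the 4 x 9 matrix B above is highly non-singular.
from itertools import combinations

B = [[7, 1, 4, 8, 8, 4, 9, 8, 1],
     [7, 5, 6, 3, 3, 7, 1, 7, 8],
     [9, 4, 5, 7, 1, 6, 5, 3, 6],
     [6, 3, 3, 8, 8, 6, 9, 9, 3]]

def det(m):
    """Integer determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

bad = [cols for cols in combinations(range(9), 4)
       if det([[row[j] for j in cols] for row in B]) == 0]
assert not bad                       # no four columns of B are linearly dependent
print("checked all 126 column quadruples: none singular")
\end{verbatim}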
\begin{lemma}\label{lemma2.1} Suppose that the matrix $A$ is highly non-singular. Then the submatrix
obtained by deleting a column is highly non-singular. Also, if a column of $A$ contains just one non-zero
element, then the submatrix obtained by deleting the column and row containing this element is highly
non-singular.\end{lemma}
\begin{proof} Both conclusions follow from the definition of highly non-singular.\end{proof}
Next we describe linked block matrices critical to our arguments. Even to describe the
shape of these matrices takes some effort. When $n$ is a non-negative integer and
$0\le l\le n$, consider natural numbers $r_l,s_l$ and an $r_l\times s_l$ matrix $A_l$
having non-zero columns. Let $\text{diag}(A_0,A_1,\ldots ,A_n)$ be the conventional
diagonal block matrix with the lower right hand corner of $A_l$ sited at $(i_l,j_l)$. For
$1\le l\le n$, append a row to the top of the matrix $A_l$, giving an $(r_l+1)\times s_l$
matrix $B_l$. Next, consider the matrix $D=(d_{ij})$ obtained from
$\text{diag}(A_0,\ldots ,A_n)$ by replacing $A_l$ by $B_l$ for $1\le l\le n$, with the
lower right hand corner of $B_l$ still sited at $(i_l,j_l)$. This new {\it linked-block} matrix
$D$ should be thought of as a matrix with additional entries by comparison to
$\text{diag}(A_0,\ldots ,A_n)$, with the property that adjacent blocks are glued together
by a shared row sited at index $i_l$, for $0\le l<n$.
\begin{definition}\label{definition2.2} We say that the linked block matrix $D$ is {\it congenial of type}
$(n,r;\rho,u,t)$ when it has the shape described above, and
\begin{itemize}
\item[(a)] $A_l$ and $B_l$ are highly non-singular, with $B_l$ of format $r\times 3(r-1)$,
for $1\le l\le n$;
\item[(b)] $A_0$ is a $\rho \times t$ matrix having the following properties:
\begin{itemize}
\item[(i)] when $\rho\ge 2$, its first $u$ columns define a subspace of dimension $1$
distinct from the $\rho$-th coordinate axis;
\item[(ii)] the matrix of its last $t-u+1$ columns is highly non-singular;
\item[(iii)] if $u\ge 3$, then $t\ge 3\rho$.
\end{itemize}
\end{itemize}
\end{definition}
As a helping hand to the reader, we illustrate this definition with an example. Thus the
matrix\footnote{Henceforth we adopt the convention that zero entries in a matrix are left
blank.}
$${\tiny{\left( \arraycolsep=1.6pt\begin{array}{*{36}c}
1&3&3&3&7&1&7&8&&&&&&&&&&&&&&&&&&&&&&&&\\
&&7&1&6&5&3&6&&&&&&&&&&&&&&&&&&&&&&&&\\
&&8&8&6&9&9&3&7&1&4&8&8&4&9&8&1&&&&&&&&&&&&&&&&\\
&&&&&&&&7&5&6&3&3&7&1&7&8&&&&&&&&&&&&&&&&\\
&&&&&&&&9&4&5&7&1&6&5&3&6&&&&&&&&&&&&&&&&\\
&&&&&&&&6&3&3&8&8&6&9&9&3&7&1&4&8&8&4&9&8&1&&&&&&&&\\
&&&&&&&&&&&&&&&&&7&5&6&3&3&7&1&7&8&&&&&&&&\\
&&&&&&&&&&&&&&&&&9&4&5&7&1&6&5&3&6&&&&&&&&\\
&&&&&&&&&&&&&&&&&6&3&3&8&8&6&9&9&3&7&1&4&8&8&4&9&8&1\\
&&&&&&&&&&&&&&&&&&&&&&&&&&7&5&6&3&3&7&1&7&8\\
&&&&&&&&&&&&&&&&&&&&&&&&&&9&4&5&7&1&6&5&3&6\\
&&&&&&&&&&&&&&&&&&&&&&&&&&6&3&3&8&8&6&9&9&3
\end{array}\right) }}$$
is congenial of type $(3,4;3,2,8)$. In terms of the description above, one sees that
$$A_0={\tiny{ \left( \arraycolsep=1.6pt\begin{array}{*{8}c}
1&3&3&3&7&1&7&8\\
&&7&1&6&5&3&6\\
&&8&8&6&9&9&3\end{array}\right) }}$$
and $B_1=B_2=B_3=B$, and further $A_1=A_2=A_3=A$, with
$$A={\tiny{\left( \arraycolsep=1.6pt\begin{array}{*{9}c}
7&5&6&3&3&7&1&7&8\\
9&4&5&7&1&6&5&3&6\\
6&3&3&8&8&6&9&9&3\end{array}\right)}}.$$
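The gluing just described can also be carried out mechanically; the following sketch (ours, illustrative only) assembles the displayed linked block matrix from $A_0$ and three copies of $B$, and recovers its format.
\begin{verbatim}
# Assemble the linked block matrix above from A_0 and B_1 = B_2 = B_3 = B.
A0 = [[1, 3, 3, 3, 7, 1, 7, 8],
      [0, 0, 7, 1, 6, 5, 3, 6],
      [0, 0, 8, 8, 6, 9, 9, 3]]
B  = [[7, 1, 4, 8, 8, 4, 9, 8, 1],          # appended top row of each B_l;
      [7, 5, 6, 3, 3, 7, 1, 7, 8],          # the three rows below it form A_l = A
      [9, 4, 5, 7, 1, 6, 5, 3, 6],
      [6, 3, 3, 8, 8, 6, 9, 9, 3]]

n, r = 3, 4
rho, t = len(A0), len(A0[0])
R, S = n * (r - 1) + rho, 3 * n * (r - 1) + t
D = [[0] * S for _ in range(R)]             # blank entries are zeros

for i, row in enumerate(A0):                # A_0 sits in the top left corner
    D[i][:t] = row
for l in range(1, n + 1):                   # B_l shares its top row with block l-1
    top, left = rho - 1 + (l - 1) * (r - 1), t + (l - 1) * 3 * (r - 1)
    for i, row in enumerate(B):
        D[top + i][left:left + 3 * (r - 1)] = row

assert (R, S) == (12, 35)
assert D[2][:t] == A0[2] and D[2][t:t + 9] == B[0]    # the first shared row
print("congenial matrix of type (3, 4; 3, 2, 8), format", R, "x", S)
\end{verbatim}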
Some additional remarks are in order to clarify this definition. With an inductive argument
in mind, we allow the possibility that $n=0$, in which case the parameter $r$ plays no
role. Note that when $n\ge 1$, the definition is non-empty only when $r\ge 2$. In
preparation for our inductive argument, once again, we allow the possibility that $A_0$ is
the empty matrix formally considered to have format $1\times 0$, and we accommodate
this situation by identifying congenial matrices (formally) of type $(n,r;1,0,0)$ with those
of type $(n-1,r;r,1,3r-3)$. Here, as we shall see, there is no loss of generality in assuming
that the columns in the first non-empty block of $D$ have been permuted in order to
ensure that its first column is distinct from the $r$-th coordinate axis. When $t\ge 1$, we
insist that $u\ge 1$, consistent with the hypothesis that the last $t-u+1$ columns of $A_0$
be highly non-singular. Also, we note that when $\rho=1$, the conditions imposed in the
preamble to Definition \ref{definition2.2} require that $d_{1j}\ne 0$ for $1\le j\le t$, and
(b) is then satisfied for all $1\le u\le t$. When $\rho\ge 2$, meanwhile, the value of $u$ is
uniquely determined by the conditions in (b).\par
Our goal in this section is to obtain mean value estimates corresponding to auxiliary
equations having congenial coefficient matrices. Let $D$ be an integral congenial matrix of
type $(n,r;\rho,u,t)$. Then $D$ is an $R\times S$ matrix, where $S=3n(r-1)+t$ and
$R=n(r-1)+\rho$. Define the linear forms $\gamma_j=\gamma_j({\boldsymbol \alpha})$ by putting
$$\gamma_j({\boldsymbol \alpha})=\sum_{i=1}^Rd_{ij}\alpha_i\quad (1\le j\le S),$$
and the Weyl sum
$$f(\alpha)=\sum_{|x|\le P}e(\alpha x^3).$$
Our main lemma provides an estimate for the mean value
\begin{equation}\label{2.1}
I(P;D)=\oint |f(\gamma_1)\ldots f(\gamma_S)|^2\,{\rm d}{\boldsymbol \alpha} .
\end{equation}
By considering the underlying Diophantine system, one finds that $I(P;D)$ is unchanged by
elementary row operations on $D$, and likewise by permutations of its columns. Thus, in
the discussion to come we may always pass to a convenient matrix row equivalent to $D$.
\par
When $\rho$, $w$, $u$ and $t$ are non-negative integers, define
\begin{equation}\label{bw1}
\delta(\rho,w)=\begin{cases} 1,&\text{when $w=3\rho$,}\\
\max\{0,w-3\rho\},&\text{otherwise,}
\end{cases}
\end{equation}
and then put
\begin{equation}\label{bw2}
\delta^*(\rho,u,t)=\begin{cases} \delta(\rho-1,t-u)+u-3,&\text{when $\rho\ge 2$ and
$u\ge 3$,}\\
t-2,&\text{when $\rho=1$ and $t\ge 3$,}\\
\delta(\rho,t),&\text{otherwise.}
\end{cases}
\end{equation}
\begin{lemma}\label{lemma2.3}
When $\rho\ge 2$, $u\le t<3\rho$ and $u\le 2$, one has
\begin{equation}\label{bw3}
\max\{\delta^*(\rho,u,t-1),\delta^*(\rho-1,u,t-1)-1\}\le \delta^*(\rho,u,t).
\end{equation}
Meanwhile, when $\rho\ge 2$, $t\ge 3\rho$ and $2\le u\le t$, one has
\begin{equation}\label{bw4}
\max\{\delta^*(\rho,\max\{u-2,1\},t-2)+1,\delta^*(\rho-1,1,t-u)+u-3\}\le \delta^*(\rho,u,t).
\end{equation}
Finally, when $\rho\ge 3$ and $t\le u+\rho-1$, one has $\delta^*(\rho-1,u,t)\le
\delta^*(\rho,u,t)$.
\end{lemma}
\begin{proof} We first establish (\ref{bw3}). By (\ref{bw2}), the inequality to be
confirmed reads
\begin{equation}\label{bw5}
\max\{ \delta(\rho,t-1),\delta(\rho-1,t-1)-1\}\le \delta(\rho,t).
\end{equation}
Since $t\le 3\rho-1$, we have $\delta(\rho,t-1)=\delta(\rho,t)=0$, and since
$t-1\le 3(\rho-1)+1$, we have also $\delta(\rho-1,t-1)\le 1$. The desired conclusion
(\ref{bw5}) follows.\par
Suppose next that $\rho\ge 2$, $t\ge 3\rho$ and $t\ge u=2$. In such circumstances, the
inequality (\ref{bw4}) to be confirmed reads
$$\max\{\delta(\rho,t-2)+1,\delta(\rho-1,t-2)-1\}\le \delta(\rho,t).$$
By considering the cases $t\in \{ 3\rho,3\rho+1\}$, $t=3\rho+2$, and $t\ge 3\rho+3$, in
turn, the desired conclusion follows directly from (\ref{bw1}). When instead
$u\in \{3,4\}$, the inequality (\ref{bw4}) reads
$$\max\{\delta(\rho,t-2)+1,\delta(\rho-1,t-u)+u-3\}\le \delta(\rho-1,t-u)+u-3,$$
and one has only to verify that $\delta(\rho,t-2)\le \delta(\rho-1,t-u)+u-4$. By considering the
cases $t\in \{3\rho,3\rho+1\}$, $t=3\rho+2$, and $t\ge 3\rho+3$, in turn, the desired
conclusion follows directly from (\ref{bw1}). Finally, when $t\ge u\ge 5$, the inequality
(\ref{bw4}) is trivial. We have now confirmed (\ref{bw4}) in all cases.\par
In our proof of the final claim of the lemma, we may assume that $\rho\ge 3$. Thus
$\rho+1\le 3(\rho-1)-1$ and $\rho-1\le 3(\rho-2)-1$, and hence $\delta(\rho-1,\rho+1)=0$
and $\delta(\rho-2,\rho-1)=0$. Since $t\le u+\rho-1$, it follows that when $u\le 2$, one has
$\delta(\rho-1,t)\le \delta(\rho-1,\rho+1)\le \delta(\rho,t)$. Likewise, when $u\ge 3$ it follows
that
$$\delta(\rho-2,t-u)\le \delta(\rho-2,\rho-1)\le \delta(\rho-1,t-u).$$
The desired conclusion now follows in both cases from (\ref{bw2}), completing the proof
of the lemma.\end{proof}
For future use we record the elementary inequality
\begin{equation}\label{2.3}
|z_1\ldots z_n|\le |z_1|^n+\ldots +|z_n|^n.
\end{equation}
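This may be seen, for example, by means of the inequality of the arithmetic and geometric means, which gives
$$|z_1\ldots z_n|=\bigl(|z_1|^n\ldots |z_n|^n\bigr)^{1/n}\le \frac{1}{n}\bigl(|z_1|^n+\ldots +|z_n|^n\bigr).$$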
\begin{lemma}\label{lemma2.4}
Let $D$ be an integral congenial matrix of type $(n,r;\rho,u,t)$. Then
\begin{equation}\label{2.4}
I(P;D)\ll P^{S+\delta^*(\rho,u,t)+\varepsilon}.
\end{equation}
\end{lemma}
\begin{proof} We proceed by induction. Write $\widehat H_{n,r}^{\rho,u,t}$ to denote the
hypothesis that the bound (\ref{2.4}) holds for all congenial matrices of type
$(n,r;\rho,u,t)$, and $H_{n,r}^{\rho,u,t}$ to denote the hypothesis that
$\widehat H_{n',r'}^{\rho',u',t'}$ holds for all $n'\le n$, $r'\le r$, $\rho'\le \rho$, $u'\le u$,
$t'\le t$. Our outer induction is on $n$, with an inner induction on $r$, $\rho$, $u$ and
$t$. The basis for this induction is provided by Hua's Lemma (see
\cite[Lemma 2.5]{Vau1997}). This establishes that
$$\int_0^1|f(\alpha)|^{2u}{\,{\rm d}}\alpha \ll P^{u+\delta(1,u)+\varepsilon}\quad (u=1,2,4).$$
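Here the cases $u=2$ and $u=4$ are the familiar estimates $\int_0^1|f(\alpha)|^4{\,{\rm d}}\alpha \ll P^{2+\varepsilon}$ and
$\int_0^1|f(\alpha)|^8{\,{\rm d}}\alpha \ll P^{5+\varepsilon}$, in accordance with the values $\delta(1,2)=0$ and $\delta(1,4)=1$
supplied by (\ref{bw1}).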
Thus it follows from H\"older's inequality and the trivial estimate $|f(\alpha)|\le 2P+1$ that,
when $n=0$ and $\rho=1$, then
\begin{equation}\label{2.fx}
I(P;D)=\int_0^1|f(\gamma_1)\ldots f(\gamma_u)|^2{\,{\rm d}}{\boldsymbol \alpha} \ll P^{u+\delta(1,u)+\varepsilon}\quad
(u\ge 1),
\end{equation}
and one obtains $H_{0,r}^{1,u,u}$ for all $u\ge 1$. Given a congenial matrix $D$ of type
$(0,r;\rho,u,u)$ with $\rho\ge 2$, meanwhile, one has either $u\le 2$ or $u\ge 3\rho$. It
follows by applying elementary row operations that $D$ is row equivalent to a matrix $D'$
whose first row entries are all non-zero. By considering the system of equations underlying $I(P;D')$, and discarding every equation
except that corresponding to the first row of $D'$, one finds that $I(P;D)\le I(P;D'')$,
where $D''$ is a congenial matrix of type $(0,r;1,u,u)$. But $\delta^*(\rho,u,u)\ge \delta(1,u)$
for $u\le 2$ and also for $u\ge 6$, and thus we deduce from (\ref{2.fx}) that
$\widehat H_{0,r}^{\rho,u,u}$ holds for all natural numbers $\rho$ and $u$.\par
Our strategy for proving the lemma involves two steps. We confirm below that when
$\rho\ge 2$ and $t>u$, one has
\begin{equation}\label{2.5}
H_{n,r}^{\rho,u,t-1}\quad \text{implies}\quad \widehat H_{n,r}^{\rho,u,t}.
\end{equation}
Notice that when $\rho=1$, then since $\delta^*(1,t,t)=\delta^*(1,u,t)$, there is no loss of
generality in supposing that $t=u$. Since $u$ (possibly zero) is the smallest value that $t$
can assume in a congenial matrix of type $(n,r;\rho,u,t)$, it therefore suffices to establish
$\widehat H_{n,r}^{\rho,u,u}$ $(u\ge 1)$. We show below that when $\rho\ge 1$, then
\begin{equation}\label{rv2.12}
(\text{$\widehat H_{n-1,r}^{r,1,3(r-1)}$ and $\widehat H_{n,r}^{1,u,u}$})\quad
\text{implies}\quad \widehat H_{n,r}^{\rho,u,u}.
\end{equation}
Since there is no loss in supposing that a congenial matrix of type $(n,r;1,u,u)$ is also of
type $(n-1,r;r,\max\{u,1\},3(r-1)+u)$, one finds via (\ref{rv2.12}) that
$H_{n-1,r}^{r,u+1,3r+u}$ implies $\widehat H_{n,r}^{1,u,u}$, and hence also
$\widehat H_{n,r}^{\rho,u,u}$. We note in this context that
$\delta^*(1,u,u)=\delta^*(r,\max\{u,1\},3(r-1)+u)$. In view of (\ref{2.5}), one sees that
whenever $H_{n-1,r}^{\sigma,v,v}$ holds for all $\sigma$ and $v$, then one has
$\widehat H_{n,r}^{\rho,u,u}$ for all $\rho$ and $u$, and hence also $H_{n,r}^{\rho,u,t}$ for
all $\rho$, $u$ and $t$. We have already established $\widehat H_{0,r}^{\sigma,v,v}$ for all
$\sigma$ and $v$, and hence the conclusion of the lemma follows by induction on $n$.\par
We begin by confirming (\ref{rv2.12}). Let $D$ be congenial of type $(n,r;\rho,u,u)$, and
suppose $\widehat H_{n-1,r}^{r,1,3(r-1)}$ and $\widehat H_{n,r}^{1,u,u}$. We may suppose that
$\rho\ge 2$, for otherwise (\ref{rv2.12}) is
trivial. Since the first $u$ columns of $D$ define a subspace of dimension $1$ distinct from
the $\rho$-th coordinate axis, the matrix $D$ has non-zero entries populating one of its
first $\rho-1$ rows in the first $u$ columns. The matrix $D$ is consequently row
equivalent to one of separated block form, with one block $D_0$ of format $1\times u$
(trivially) congenial of type $(0,r;1,u,u)$, and the second block $D_1$ of format
$(R-\rho+1)\times (S-u)$. There is no loss of generality in supposing $D_1$ to be
congenial of type $(n-1,r;r,1,3(r-1))$. On considering the underlying Diophantine systems,
we therefore find that $I(P;D)=I(P;D_0)I(P;D_1)$. We may assume (\ref{2.fx}) and
$\widehat H_{n-1,r}^{r,1,3(r-1)}$. Thus we deduce via (\ref{bw1}) and (\ref{bw2}) that
\begin{align*}
I(P;D)&\ll P^{u+\delta(1,u)+\varepsilon}\cdot P^{S-u+\delta^*(r,1,3(r-1))+\varepsilon}\\
&= P^{S+\delta(1,u)+\delta(r,3(r-1))+2\varepsilon}=P^{S+\delta(1,u)+2\varepsilon}.
\end{align*}
When $\rho\ge 2$, the congeniality of $D$ ensures that either $u\le 2$ or $u\ge 6$, and
hence $\delta^*(\rho,u,u)=\delta(1,u)$. Thus $I(P;D)\ll P^{S+\delta^*(\rho,u,u)+\varepsilon}$,
confirming (\ref{rv2.12}).\par
We now commence the proof of (\ref{2.5}). Let $D$ be a matrix of type $(n,r;\rho,u,t)$
with $\rho\ge 2$ and $u<t$, and suppose $H_{n,r}^{\rho,u,t-1}$. Should the first
$\rho-1$ rows of the matrix $D$ be linearly dependent, then by applying elementary row
operations on these rows, we may suppose that $D$ is congenial with one of these rows
zero. Thus $t-u+1<\rho$ and $\rho\ge 3$, and on deleting this row and applying the
final conclusion of Lemma \ref{lemma2.3}, it is apparent that (\ref{2.4}) will be confirmed
provided that we establish the bound $I(P;D)\ll P^{S+\delta^*(\rho-1,u,t)+\varepsilon}$.
Repeated use of this simplification permits us to condition the first $\rho-1$ rows of $D$
to be linearly independent. We divide into cases according to whether $t<3\rho$ or
$t\ge 3\rho$.\par
We first establish (\ref{2.5}) in the situation where $t<3\rho$. One then has $u\le 2$. We
distinguish three cases. When $t=u+1$, it follows from the conditioned congeniality of $D$
that $\rho=2$ or $3$. In such circumstances, we say that $D$ has type I when
$\gamma_t=d_{\rho,t}\alpha_\rho$ with $d_{\rho,t}\ne 0$. Note that the conditioned
congeniality of $D$ then implies that $\rho=2$. When $t=u+1$ and $D$ is not of type I,
we apply elementary row operations to ensure that $\gamma_t=d_{1,t}\alpha_1$ with
$d_{1,t}\ne 0$, and also that $d_{2,j}\ne 0$ for $1\le j\le u$. We describe the resulting
matrix as having type~II. A conditioned congenial matrix $D$ not of type I or II we
describe as having type III. For such matrices, one has $\rho\ge 3$ and $t\ge u+2$.\par
Consider first a matrix $D$ of type I. By performing elementary row operations, one may
suppose that $\gamma_j=d_{1,j}\alpha_1$, with $d_{1,j}\ne 0$, for $1\le j\le u$. The matrix
$D$ is of separated block form, with one block $D_0$ of format $1\times u$ (trivially)
congenial of type $(0,1;1,u,u)$, and the second block $D_1$ of format $(R-1)\times (S-u)$
congenial of type $(n,r;1,1,1)$. On considering the underlying Diophantine systems, we
find that $I(P;D)\ll I(P;D_0)I(P;D_1)$. We may assume (\ref{2.fx}) and
$\widehat H_{n,r}^{1,1,1}$, and thus
$$I(P;D)\ll P^{u+\delta(1,u)+\varepsilon}\cdot P^{S-u+\delta^*(1,1,1)+\varepsilon}=
P^{S+2\varepsilon}\ll P^{S+\delta^*(2,u,u+1)+\varepsilon},$$
confirming the estimate (\ref{2.4}) in this case.\par
Next consider a matrix $D$ of type III. The last $t-u+1$ columns of $A_0$ span a linear
space of dimension $\min\{t-u+1,\rho\}\ge 3$. Hence, by permuting the last $t-u$
columns of $A_0$, we may suppose that the $t$-th column of $A_0$ is not contained in
the linear space generated by the first $u$ columns and the $\rho$-th coordinate vector.
By applying elementary row operations, we may arrange that the conditioned matrix $D$
is congenial with $\gamma_t=d_{1,t}\alpha_1$ and $d_{1,t}\ne 0$. We note that the linear
space spanned by the first $u$ columns of $D$ is now distinct from both the first and the
$\rho$-th coordinate axis. Let $D_0$ denote the matrix obtained from $D$ by deleting
column $t$, and let $D_1$ denote the matrix obtained by instead deleting row $1$ and
column $t$. Lemma \ref{lemma2.1} shows the $R\times (S-1)$ matrix $D_0$ to be
congenial of type $(n,r;\rho,u,t-1)$, and the $(R-1)\times (S-1)$ matrix $D_1$ to be
congenial of type $(n,r;\rho-1,u,t-1)$. Observe that when $D$ is a matrix of type II, then
$\rho=2$ or $3$, and this same conclusion holds.\par
We may now consider matrices $D$ of types II and III together. We have
$\gamma_t=d_{1,t}\alpha_1$ with $d_{1,t}\ne 0$.
Weyl differencing (see \cite[equation (2.6)]{Vau1997}) yields
$$|f(\gamma_t)|^2\ll P+\sum_{0<|h|\le 16P^3}c_he(\gamma_th),$$
where the integers $c_h$ satisfy $c_h=O(|h|^\varepsilon)$. We therefore find from (\ref{2.1})
that
\begin{equation}\label{2.7}
I(P;D)\ll PT(0)+\sum_{0<|h|\le 16P^3}c_hT(h),
\end{equation}
where
\begin{equation}\label{2.8}
T(h)=\oint \prod_{\substack{1\le i\le S\\ i\ne t}}|f(\gamma_i)|^2e(\gamma_th){\,{\rm d}}{\boldsymbol \alpha} .
\end{equation}
The contribution of the terms with $h\ne 0$ in (\ref{2.7}) is given by
\begin{equation}\label{2.9}
\sum_{0<|h|\le 16P^3}c_hT(h)\ll P^\varepsilon \oint \prod_{\substack{1\le i\le S\\ i\ne t}}|
f({\widehat \gamma}_i)|^2{\,{\rm d}}{\widehat {\boldsymbol \alpha}},
\end{equation}
where ${\widehat {\boldsymbol \alpha}}=(\alpha_2,\ldots ,\alpha_R)$ and
${\widehat \gamma}_m=\gamma_m(0,\alpha_2,\ldots ,\alpha_R)$. On considering the underlying
Diophantine systems, we discern on the one hand from (\ref{2.8}) that $T(0)=I(P;D_0)$,
and on the other that the integral on the right hand side of (\ref{2.9}) is equal to
$I(P;D_1)$. Thus
$$I(P;D)\ll PI(P;D_0)+P^\varepsilon I(P;D_1).$$
We may assume $H_{n,r}^{\rho,u,t-1}$, and thus Lemma \ref{lemma2.3} yields the
estimate
$$I(P;D)\ll P^{S+\delta^*(\rho,u,t-1)+\varepsilon}+P^{S-1+\delta^*(\rho-1,u,t-1)+2\varepsilon}
\ll P^{S+\delta^*(\rho,u,t)+2\varepsilon}.$$
This confirms the bound (\ref{2.4}), and hence (\ref{2.5}) holds whenever $t<3\rho$.\par
We turn next to the situation with $t\ge 3\rho$. Recall that $\rho\ge 2$ and $u\ge 1$. By
relabelling the first $t$ columns of $D$, we may assume without loss that the conditioned
congenial matrix $D$ has the property that, should any one of these columns lie on the
$\rho$-th coordinate axis, then this is the $t$-th column. Then, applying the bound
(\ref{2.3}) within (\ref{2.1}), one finds that with $j=1$ or $2$, one has
\begin{equation}\label{2.xy}
I(P;D)\ll \oint|f(\gamma_j)^4f(\gamma_3)^2\ldots f(\gamma_S)^2|{\,{\rm d}}{\boldsymbol \alpha} .
\end{equation}
Thus, by symmetry, we may suppose that $j=1$ and $u\ge 2$. Since the first $u$ columns
of $D$ lie in a subspace of dimension $1$ distinct from the $\rho$-th coordinate axis, by
applying elementary row operations, we see that there is no loss of generality in assuming
that the congenial matrix $D$ satisfies the condition that $\gamma_j=d_{1,j}\alpha_1$ with
$d_{1,j}\ne 0$ for $1\le j\le u$.\par
We first examine the situation in which $\rho\ge 2$, $2\le u\le 4$ and $t\ge 3\rho\ge 6$.
By Weyl differencing (see \cite[equation (2.6)]{Vau1997}), one has
$$|f(\gamma_1)|^4\ll P^3+P\sum_{0<|h|\le 32P^3}b_he(\gamma_1h),$$
where the integers $b_h$ satisfy $b_h=O(|h|^\varepsilon)$. We therefore find from (\ref{2.xy})
that
\begin{equation}\label{2.11}
I(P;D)\ll P^3U(0)+P\sum_{0<|h|\le 32P^3}b_hU(h),
\end{equation}
where
\begin{equation}\label{2.12}
U(h)=\oint |f(\gamma_3)\ldots f(\gamma_S)|^2e(\gamma_1h){\,{\rm d}}{\boldsymbol \alpha} .
\end{equation}
The contribution of the terms with $h\ne 0$ in (\ref{2.11}) is given by
\begin{equation}\label{2.13}
P\sum_{0<|h|\le 32P^3}b_hU(h)\ll P^{1+\varepsilon}\oint |f({\widehat \gamma}_3)\ldots
f({\widehat \gamma}_S)|^2{\,{\rm d}}{\widehat {\boldsymbol \alpha} }.
\end{equation}
Let $D_0$ now denote the matrix obtained from $D$ by deleting the first two columns,
and let $D_1$ denote the matrix obtained by instead deleting row $1$ and the first $u$
columns. Since $t\ge 6$, Lemma \ref{lemma2.1} shows the $R\times (S-2)$ matrix $D_0$
to be congenial of type $(n,r;\rho,\max\{u-2,1\},t-2)$, and the $(R-1)\times (S-u)$ matrix
$D_1$ to be congenial of type $(n,r;\rho-1,1,t-u)$. On considering the underlying
Diophantine systems, we find on the one hand from (\ref{2.12}) that $U(0)=I(P;D_0)$,
and on the other that the integral on the right hand side of (\ref{2.13}) is equal to
$P^{2u-4}I(P;D_1)$. Here, we have made use of the fact that
${\widehat \gamma}_m({\boldsymbol \alpha})=0$ for $3\le m\le u$. Thus
$$I(P;D)\ll P^3I(P;D_0)+P^{2u-3+\varepsilon}I(P;D_1).$$
We may assume $H_{n,r}^{\rho,u,t-1}$, and thus Lemma \ref{lemma2.3} delivers the
estimate
\begin{align*}
I(P;D)&\ll P^{S+1+\delta^*(\rho,\max\{u-2,1\},t-2)+\varepsilon}+
P^{S+u-3+\delta^*(\rho-1,1,t-u)+2\varepsilon}\\
&\ll P^{S+\delta^*(\rho,u,t)+2\varepsilon}.
\end{align*}
Since $\delta^*(\rho,1,t)=\delta^*(\rho,2,t)$, we obtain (\ref{2.4}) even when the case
$u=1$ was simplified to that with $u=2$.\par
Finally, suppose that $u\ge 5$. Recall that $\gamma_j=d_{1,j}\alpha_1$, with $d_{1,j}\ne 0$,
for $1\le j\le u$. Let $D_0$ denote the matrix obtained from $D$ by deleting all but the
first row and all but the first $u$ columns, and let $D_1$ denote the matrix obtained by
instead deleting the first row and first $u$ columns. Then the $1\times u$ matrix $D_0$ is
(trivially) congenial of type $(0,1;1,u,u)$, and the $(R-1)\times (S-u)$ matrix $D_1$ is
congenial of type $(n,r;\rho-1,1,t-u)$. On considering the underlying Diophantine systems
and applying the triangle inequality, we find via (\ref{2.fx}) that
$$I(P;D)\ll I(P;D_0)I(P;D_1)\ll P^{2u-3+\varepsilon}I(P;D_1),$$
which as above confirms the estimate (\ref{2.4}) in this final case. Hence we have
completed the proof of (\ref{2.5}) when $t\ge 3\rho$, completing the proof of the
inductive step. The conclusion of the lemma now follows.
\end{proof}
We extract a simple consequence from this lemma for future use.
\begin{corollary}\label{corollary2.5}
Let $r\ge 2$, suppose that $D$ is an integral congenial matrix of type $(n,r;r,3,3r)$, and
write $w=(n+1)r-n$. Then $I(P;D)\ll P^{3w+1+\varepsilon}$.
\end{corollary}
\begin{proof} We have only to note that $\delta^*(r,3,3r)=\delta(r-1,3r-3)=1$, and that in this
instance $S=3n(r-1)+3r=3w$, so that Lemma \ref{lemma2.4} delivers the claimed bound.
\end{proof}
\section{Complification} Before describing the process which leads from the basic mean value to the more
complicated ones described in the previous section, we introduce some additional Weyl sums. When
$2\le R\le P$, we put
$${\mathcal A}(P,R)=\{n\in [-P,P]\cap {\mathbb Z}: \text{$p$ prime and $p|n$} \Rightarrow p\le R\},$$
and then define the exponential sum $g(\alpha)=g(\alpha;P,R)$ by
$$g(\alpha;P,R)=\sum_{x\in {\mathcal A}(P,R)}e(\alpha x^3).$$
We find it convenient to write $\tau$ for any positive number satisfying $\tau^{-1}>852+16\sqrt{2833}=1703.6\ldots $, and then put $\xi=\tfrac{1}{4}-\tau$.
\begin{lemma}\label{lemma3.1} When $\eta$ is sufficiently small and $2\le R\le P^\eta$, one has
$$\int_0^1|g(\alpha;P,R)|^6{\,{\rm d}}\alpha \ll P^{3+\xi}.$$
\end{lemma}
\begin{proof} The conclusion follows from \cite[Theorem 1.2]{Woo2000} by considering the underlying
Diophantine equations.\end{proof}
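We remark that, since $852+16\sqrt{2833}=1703.59\ldots$, the exponent $\xi=\tfrac{1}{4}-\tau$ is constrained only to
satisfy $\tfrac{1}{4}-\xi<(852+16\sqrt{2833})^{-1}$, so that $\xi$ may be taken to be any fixed number in the interval
$(0.2495,\tfrac{1}{4})$, say.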
Next we establish an auxiliary lemma that executes the complification process. Let $n$ and $r$ be
non-negative integers with $r\ge 2$, and write $R=n(r-1)$ and $S=3R$. Let $B=(b_{i,j})$ be an integral
$(R+1)\times (S+2)$ matrix, write ${\mathbf b}_j$ for the column vector $(b_{i,j})_{1\le i\le R+1}$, and define
${\mathbf b}_j^*$ to be the column vector $(b_{R+2-i,j})_{1\le i\le R+1}$ in which the entries of
${\mathbf b}_j$ are flipped upside down. Also, define $\beta_j=\beta_j({\boldsymbol \alpha})$ by putting
\begin{equation}\label{3.1}
\beta_j({\boldsymbol \alpha})=\sum_{i=1}^{R+1}b_{ij}\alpha_i\quad (0\le j\le S+1).
\end{equation}
We say that the matrix $B$ is {\it bicongenial of type} $(n,r)$ when (i) the column vectors
${\mathbf b}_0,{\mathbf b}_1,\ldots ,{\mathbf b}_S$ and ${\mathbf b}^*_{S+1},{\mathbf b}^*_S,\ldots ,{\mathbf b}^*_1$ both form
congenial matrices having type $(n-1,r;r,1,3r-2)$, and (ii) one has
$\beta_0({\boldsymbol \alpha})=b_{1,0}\alpha_1$ and $\beta_{S+1}({\boldsymbol \alpha})=b_{R+1,S+1}\alpha_{R+1}$. At
this point, we introduce the mean value
\begin{equation}\label{3.2}
J(P;B)=\oint |g(\beta_0)^3f(\beta_1)^2\ldots f(\beta_S)^2g(\beta_{S+1})^3|{\,{\rm d}}{\boldsymbol \alpha} .
\end{equation}
Finally, we fix $\eta>0$ to be sufficiently small in the context of Lemma \ref{lemma3.1}.
\begin{lemma}\label{lemma3.2} Suppose that $B$ is an integral bicongenial matrix of type
$(n,r)$. Then there exists an integral bicongenial matrix $B^*$ of type $(2n,r)$ for which
$$J(P;B)\ll (P^{3+\xi})^{1/2}J(P;B^*)^{1/2}.$$
\end{lemma}
\begin{proof} Define the linear forms $\beta_j$ as in (\ref{3.1}). Also, define
$$T(P;B)=\int_0^1\left( \oint |g(\beta_0)^3f(\beta_1)^2\ldots f(\beta_S)^2|
{\,{\rm d}}{\widehat {\boldsymbol \alpha}}_R\right)^2{\,{\rm d}}\alpha_{R+1},$$
where ${\,{\rm d}}{\widehat {\boldsymbol \alpha}}_R$ denotes ${\,{\rm d}}\alpha_1\ldots {\,{\rm d}}\alpha_R$. Then Schwarz's inequality
leads from (\ref{3.2}) to the bound
\begin{equation}\label{3.3}
J(P;B)\le \Bigl( \int_0^1|g(\beta_{S+1})|^6{\,{\rm d}}\alpha_{R+1}\Bigr)^{1/2}T(P;B)^{1/2}.
\end{equation}
By expanding the square inside the outermost integration, we see that
$$T(P;B)=\oint |g(\beta_0^*)^3f(\beta_1^*)^2\ldots f(\beta_{2S}^*)^2g(\beta_{2S+1}^*)^3|
{\,{\rm d}}{\widehat {\boldsymbol \alpha}}_{2R+1},$$
where $\beta_i^*=\beta_i^*({\boldsymbol \alpha})$ is defined by
$$\beta_i^*({\boldsymbol \alpha})=\begin{cases}
\beta_i(\alpha_1,\ldots ,\alpha_{R+1}),&\text{when $0\le i\le S$,}\\
\beta_{2S+1-i}(\alpha_{2R+1},\ldots ,\alpha_{R+1}),&\text{when $S+1\le i\le 2S+1$.}
\end{cases}$$
The integral $(2R+1)\times (2S+2)$ matrix $B^*=(b_{ij}^*)$ defining the linear forms
$\beta_0^*,\ldots ,\beta_{2S+1}^*$ is bicongenial of type $(2n,r)$, and one has $T(P;B)=J(P;B^*)$. The
conclusion of the lemma therefore follows from (\ref{3.3}) and Lemma \ref{lemma3.1}.
\end{proof}
While Lemma \ref{lemma3.2} bounds $J(P;B)$ in terms of a mean value almost twice the original
dimension, superficially {\it complicating} the task at hand, the higher dimension in fact {\it simplifies} the
problem of obtaining close to square root cancellation. Hence our use of the term {\it complification}.\par
Consider an $r\times s$ integral matrix $C=(c_{ij})$, write ${\mathbf c}_j$ for the column vector
$(c_{ij})_{1\le i\le r}$, and put
\begin{equation}\label{3.4}
\gamma_j=\sum_{i=1}^rc_{ij}\alpha_i\quad (1\le j\le s).
\end{equation}
Also, when $s\ge 3$, write
\begin{equation}\label{3.5}
K(P;C)=\oint |g(\gamma_1)g(\gamma_2)g(\gamma_3)f(\gamma_4)\ldots f(\gamma_s)|^2{\,{\rm d}}{\boldsymbol \alpha} .
\end{equation}
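We note that, by orthogonality, $K(P;C)$ counts the integral solutions of the system
$\sum_{j=1}^sc_{ij}(x_j^3-y_j^3)=0$ $(1\le i\le r)$ in which $x_j,y_j\in {\mathcal A}(P,R)$ for $1\le j\le 3$ and
$|x_j|,|y_j|\le P$ for $4\le j\le s$.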
\begin{theorem}\label{theorem3.3}
Suppose that $r\ge 2$ and that the $r\times 3r$ integral matrix $C$ is highly non-singular. Then
$K(P;C)\ll P^{3r+\xi+\varepsilon}$.
\end{theorem}
\begin{proof} Write $s=3r$. Since the $r\times s$ matrix $C$ is highly non-singular with
$r\ge 2$, we may apply elementary row operations to $C$ in such a manner that
$c_{1,1}\ne 0$, $c_{r,2}\ne 0$, and $c_{1,3}c_{r,3}\ne 0$. On considering the underlying
Diophantine system, it is apparent from (\ref{3.5}) that these operations leave the mean value
$K(P;C)$ unchanged. Next, by applying the elementary relation (\ref{2.3}) within
(\ref{3.5}), one finds by symmetry that there is no loss in supposing that
$$K(P;C)\ll \oint |g(\gamma_1)^3f(\gamma_4)^2\ldots f(\gamma_s)^2g(\gamma_2)^3|{\,{\rm d}}{\boldsymbol \alpha} .$$
By relabelling the linear forms, we infer that $K(P;C)\ll J(P;B_0)$, where $B_0$ is the matrix
with columns ${\mathbf c}_1,{\mathbf c}_4,{\mathbf c}_5,\ldots ,{\mathbf c}_{s-1},{\mathbf c}_s,{\mathbf c}_2$. From here, by applying
elementary row operations, which amounts to making a non-singular change of variable
within (\ref{3.5}), we may suppose that $\gamma_1=c_{1,1}\alpha_1$ and
$\gamma_2=c_{r,2}\alpha_r$. Since the $r\times s$ matrix $C$ is highly non-singular, Lemma
\ref{lemma2.1} shows that $B_0$ is bicongenial of type $(1,r)$.\par
We show by induction that for each non-negative integer $l$, there exists an integral
bicongenial matrix of type $(2^l,r)$ having the property that
\begin{equation}\label{3.6}
K(P;C)\ll (P^{3+\xi})^{1-2^{-l}}J(P;B_l)^{2^{-l}}.
\end{equation}
This bound holds when $l=0$ as a trivial consequence of the upper bound $K(P;C)\ll J(P;B_0)$ just
established. Suppose then that the estimate (\ref{3.6}) holds for $0\le l\le L$. By applying Lemma
\ref{lemma3.2}, we see that there exists an integral bicongenial matrix $B_{L+1}$ of type $(2^{L+1},r)$
having the property that
$$J(P;B_L)\ll (P^{3+\xi})^{1/2}J(P;B_{L+1})^{1/2}.$$
Substituting this estimate into the case $l=L$ of (\ref{3.6}), one confirms that (\ref{3.6}) holds with
$l=L+1$. The bound (\ref{3.6}) therefore follows for all $l$ by induction.\par
We now prepare to apply the bound just established. Let $\delta$ be any small positive number, and choose
$l$ large enough that $2^{1-l}(1-\xi)<\delta$. We have shown that an integral bicongenial matrix
$B_l=(b_{ij})$ exists for which (\ref{3.6}) holds. The matrix $B_l$ is of format $(R+1)\times (S+2)$,
where $R=2^l(r-1)$ and $S=3R$. Define the linear forms $\beta_j$ as in (\ref{3.1}) and recall (\ref{3.2}).
Applying (\ref{2.3}), invoking symmetry, and considering the underlying Diophantine system, we find that
there is no loss in supposing that
$$J(P;B_l)\ll \oint |f(\beta_0)^6f(\beta_1)^2\ldots f(\beta_S)^2|{\,{\rm d}}{\boldsymbol \alpha} .$$
Let $D$ be the integral matrix underlying the $S+3$ forms $\beta_0,\beta_0,\beta_0,\beta_1,\ldots ,\beta_S$. Then $D$ is congenial of
type $(2^l-1,r;r,3,3r)$, and one has $J(P;B_l)\ll I(P;D)$. Substituting the bound $J(P;B_l)\ll
P^{3R+4+\varepsilon}$ that follows from Corollary \ref{corollary2.5} into (\ref{3.6}), we obtain the estimate
$$K(P;C)\ll (P^{3+\xi})^{1-2^{-l}}(P^{3(2^l(r-1)+1)+1+\varepsilon})^{2^{-l}}\ll
P^{3r+\xi+(1-\xi)2^{-l}+\varepsilon}.$$
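Here the final exponent arises from the elementary computation
$$(3+\xi)(1-2^{-l})+2^{-l}\bigl(3(2^l(r-1)+1)+1\bigr)=3r+\xi+(1-\xi)2^{-l}.$$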
In view of our assumed upper bound $2^{1-l}(1-\xi)<\delta$, one therefore sees that
$$K(P;C)\ll P^{3r+\xi+\frac{1}{2}\delta+\varepsilon}\ll P^{3r+\xi+\delta}.$$
The conclusion of the theorem now follows by taking $\delta$ sufficiently small.
\end{proof}
\section{The Hardy-Littlewood method} In this section we turn to the proof of Theorem \ref{theorem1.1}.
Let $(c_{ij})$ denote an integral $r\times s$ highly non-singular matrix with $r\ge 2$ and $s\ge 6r+1$.
We define the linear forms $\gamma_j=\gamma_j({\boldsymbol \alpha})$ as in (\ref{3.4}), and for concision put
$g_j=g(\gamma_j({\boldsymbol \alpha}))$ and $f_j=f(\gamma_j({\boldsymbol \alpha}))$. When ${\mathfrak B}\subseteq [0,1)^r$ is measurable, we
then define
$$N(P;{\mathfrak B})=\int_{\mathfrak B} g_1\ldots g_6f_7\ldots f_s{\,{\rm d}}{\boldsymbol \alpha} .$$
By orthogonality, it follows from this definition that $N(P;[0,1)^r)$ counts the number of integral solutions
of the system (\ref{1.1}) with $x_1,\ldots ,x_6\in {\mathcal A}(P,R)$ and $x_7,\ldots ,x_s\in [-P,P]$. In this section
we prove the lower bound $N(P;[0,1)^r)\gg P^{s-3r}$, subject to the hypothesis that the system
(\ref{1.1}) has non-zero $p$-adic solutions for all primes $p$. The conclusion of Theorem
\ref{theorem1.1} then follows.\par
In pursuit of the above objective, we apply the Hardy-Littlewood method. Let ${\mathfrak M}$ denote the union of
the intervals
$${\mathfrak M}(q,a)=\{ \alpha\in [0,1):|q\alpha -a|\le (6P^2)^{-1}\},$$
with $0\le a\le q\le P$ and $(a,q)=1$, and let ${\mathfrak m}=[0,1)\setminus {\mathfrak M}$. In addition, write
$L=\log \log P$, denote by ${\mathfrak N}$ the union of the intervals
$${\mathfrak N}(q,a)=\{\alpha \in [0,1):|q\alpha -a|\le LP^{-3}\},$$
with $0\le a\le q\le L$ and $(a,q)=1$, and put ${\mathfrak n}=[0,1)\setminus {\mathfrak N}$. We summarise some useful
estimates in this context in the form of a lemma.
\begin{lemma}\label{lemma4.1}
One has
$$\int_{{\mathfrak M}\setminus {\mathfrak N}}|f(\alpha)|^5{\,{\rm d}}\alpha \ll P^2L^{\varepsilon-1/3}\quad \text{and}\quad
\int_0^1|f(\alpha)|^8{\,{\rm d}}\alpha \ll P^5.$$
\end{lemma}
\begin{proof} The first estimate follows as a special case of \cite[Lemma 5.1]{Vau1989}, and the second is
immediate from \cite[Theorem 2]{Vau1986}, by orthogonality.
\end{proof}
Next we introduce a multi-dimensional set of arcs. Let $Q=L^{10r}$, and define the narrow set of major
arcs ${\mathfrak P}$ to be the union of the boxes
$${\mathfrak P}(q,{\mathbf a})=\{ {\boldsymbol \alpha}\in [0,1)^r:|\alpha_i-a_i/q|\le QP^{-3}\ (1\le i\le r)\},$$
with $0\le a_i\le q\le Q$ $(1\le i\le r)$ and $(a_1,\ldots ,a_r,q)=1$.
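We note for future reference that ${\mathfrak P}$ comprises $O(Q^{r+1})$ boxes ${\mathfrak P}(q,{\mathbf a})$, each of measure
$(2QP^{-3})^r$, so that the measure of ${\mathfrak P}$ is $O(Q^{2r+1}P^{-3r})$.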
\begin{lemma}\label{lemma4.2} Suppose that the system (\ref{1.1}) admits non-zero $p$-adic solutions
for each prime number $p$. Then one has $N(P;{\mathfrak P})\gg P^{s-3r}$.
\end{lemma}
\begin{proof}We begin by defining the auxiliary functions
$$S(q,a)=\sum_{r=1}^qe(ar^3/q)\quad \text{and}\quad v(\beta)=\int_{-P}^Pe(\beta \gamma^3){\,{\rm d}}\gamma .$$
For $1\le j\le s$, put $S_j(q,{\mathbf a})=S(q,\gamma_j({\mathbf a}))$ and $v_j({\boldsymbol \beta})=v(\gamma_j({\boldsymbol \beta}))$, and define
\begin{equation}\label{4.1}
A(q)=\underset{(q,a_1,\ldots ,a_r)=1}{\sum_{a_1=1}^q\cdots \sum_{a_r=1}^q}
q^{-s}\prod_{j=1}^sS_j(q,{\mathbf a})\quad \text{and}\quad V({\boldsymbol \beta})=\prod_{j=1}^sv_j({\boldsymbol \beta}).
\end{equation}
Finally, write ${\mathcal B}(X)$ for $[-XP^{-3},XP^{-3}]^r$, and define
$${\mathfrak J}(X)=\int_{{\mathcal B}(X)}V({\boldsymbol \beta}){\,{\rm d}}{\boldsymbol \beta} \quad \text{and}\quad {\mathfrak S}(X)=\sum_{1\le q\le X}A(q).$$
\par We prove first that there exists a positive constant $C$ with the property that
\begin{equation}\label{4.2}
N(P;{\mathfrak P})-C{\mathfrak S}(Q){\mathfrak J}(Q)\ll P^{s-3r}L^{-1}.
\end{equation}
It follows from \cite[Lemma 8.5]{Woo1991} (see also \cite[Lemma 5.4]{Vau1989}) that there exists a
positive constant $c=c(\eta)$ such that whenever ${\boldsymbol \alpha} \in {\mathfrak P}(q,{\mathbf a})\subseteq {\mathfrak P}$, then
$$g(\gamma_j({\boldsymbol \alpha}))-cq^{-1}S_j(q,{\mathbf a})v_j({\boldsymbol \alpha}-{\mathbf a}/q)\ll P(\log P)^{-1/2}.$$
Under the same constraints on ${\boldsymbol \alpha}$, one finds from \cite[Theorem 4.1]{Vau1997} that
$$f(\gamma_j({\boldsymbol \alpha}))-q^{-1}S_j(q,{\mathbf a})v_j({\boldsymbol \alpha}-{\mathbf a}/q)\ll \log P.$$
Thus, whenever ${\boldsymbol \alpha}\in {\mathfrak P}(q,{\mathbf a})\subseteq {\mathfrak P}$, one has
$$g_1\ldots g_6f_7\ldots f_s-c^6q^{-s}\prod_{j=1}^sS_j(q,{\mathbf a})v_j({\boldsymbol \alpha}-{\mathbf a}/q)\ll
P^s(\log P)^{-1/2}.$$
The measure of the major arcs ${\mathfrak P}$ is $O(Q^{2r+1}P^{-3r})$, so that on integrating over ${\mathfrak P}$, we
confirm the relation (\ref{4.2}) with $C=c^6$.\par
We next discuss the singular integral ${\mathfrak J}(Q)$. By applying (\ref{2.3}), we find that
\begin{equation}\label{4.2bw}
V({\boldsymbol \beta})\ll \sum_{1\le j_1<\ldots <j_r\le s}|v_{j_1}({\boldsymbol \beta})\ldots v_{j_r}({\boldsymbol \beta})|^{s/r}.
\end{equation}
Recall from \cite[Theorem 7.3]{Vau1997} that $v(\beta)\ll P(1+P^3|\beta|)^{-1/3}$. Since $(c_{ij})$ is
highly non-singular and $s\ge 6r+1$, a change of variables reveals that $V({\boldsymbol \beta})$ is integrable, that the
limit ${\mathfrak J}=\underset{X\rightarrow \infty}\lim {\mathfrak J}(X)$ exists, and that ${\mathfrak J}\ll P^{s-3r}$. Write
${\widehat {\mathcal B}}(X)={\mathbb R}^r\setminus {\mathcal B}(X)$. Then by applying (\ref{4.2bw}), we discern that there are
distinct indices $j_1,\ldots ,j_r$ such that
$${\mathfrak J}-{\mathfrak J}(X)=\int_{{\widehat {\mathcal B}}(X)}V({\boldsymbol \beta}){\,{\rm d}}{\boldsymbol \beta} \ll \int_{{\widehat {\mathcal B}}(X)}|v_{j_1}
({\boldsymbol \beta})\ldots v_{j_r}({\boldsymbol \beta})|^{s/r}{\,{\rm d}}{\boldsymbol \beta} .$$
The linear independence of the $\gamma_j$ ensures that whenever ${\boldsymbol \beta} \in {\widehat {\mathcal B}}(X)$, then
for some index $l$ with $1\le l\le r$, one has $|\gamma_{j_l}({\boldsymbol \beta})|>X^{1/2}P^{-3}$. Consequently, the
hypothesis $s\ge 6r+1$ again ensures via a change of variables that
\begin{align*}
{\mathfrak J}-{\mathfrak J}(X)&\ll \Bigl( \sup_{{\boldsymbol \beta}\in {\widehat {\mathcal B}}(X)}|v_{j_1}({\boldsymbol \beta})\ldots v_{j_r}({\boldsymbol \beta})|\Bigr)
\int_{{\widehat {\mathcal B}}(X)}|v_{j_1}({\boldsymbol \beta})\ldots v_{j_r}({\boldsymbol \beta})|^{(s-r)/r}{\,{\rm d}}{\boldsymbol \beta} \\
&\ll P^sX^{-1/6}\int_{{\mathbb R}^r}\prod_{i=1}^r(1+P^3|\theta_i|)^{-(s-r)/(3r)}{\,{\rm d}}{\boldsymbol \theta} \ll P^{s-3r}X^{-1/6}.
\end{align*}
The system of equations (\ref{1.1}) possesses a non-zero real solution in $[-1,1]^s$, and this must be
non-singular since $(c_{ij})$ is highly non-singular. An application of Fourier's integral formula (see
\cite[Chapter 4]{Dav2005} and \cite[Lemma 30]{DL1969}) therefore leads to the lower bound
${\mathfrak J}\gg P^{s-3r}$. Thus we may conclude that
\begin{equation}\label{4.3}
{\mathfrak J}(Q)\gg P^{s-3r}+O(P^{s-3r}Q^{-1/6})\gg P^{s-3r}.
\end{equation}
\par We turn next to the singular series ${\mathfrak S}(Q)$. It follows from \cite[Theorem 4.2]{Vau1997} that
whenever $(q,a)=1$, one has $S(q,a)\ll q^{2/3}$. Given a summand ${\mathbf a}$ in the formula for $A(q)$
provided in (\ref{4.1}), write $h_j=(q,\gamma_j({\mathbf a}))$. Then we find that
$$A(q)\ll \underset{(q,a_1,\ldots ,a_r)=1}{\sum_{a_1=1}^q\cdots \sum_{a_r=1}^q}q^{-s/3}
(h_1\ldots h_s)^{1/3}.$$
By hypothesis, we have $s/(3r)\ge 2+1/(3r)$. The proof of \cite[Lemma 23]{DL1969} is therefore easily
modified to show that $A(q)\ll q^{-1-1/(6r)}$. Thus, the series ${\mathfrak S}=\underset{X\rightarrow \infty}\lim
{\mathfrak S}(X)$ is absolutely convergent and
$${\mathfrak S}-{\mathfrak S}(Q)\ll \sum_{q>Q}q^{-1-1/(6r)}\ll Q^{-1/(6r)}\ll L^{-1}.$$
The system (\ref{1.1}) has non-zero $p$-adic solutions for each prime $p$, and these are non-singular
since $(c_{ij})$ is highly non-singular. A modification of the proof of \cite[Lemma 31]{DL1969} therefore
shows that ${\mathfrak S}>0$, whence ${\mathfrak S}(Q)={\mathfrak S}+O(L^{-1})>0$. The proof of the lemma is completed by
recalling (\ref{4.3}) and substituting into (\ref{4.2}) to obtain the bound $N(P;{\mathfrak P})\gg
P^{s-3r}+O(P^{s-3r}L^{-1})$.
\end{proof}
In order to prune a wide set of major arcs down to the narrow set ${\mathfrak P}$ just considered, we introduce the
auxiliary sets of arcs
$${\mathfrak M}_j=\{{\boldsymbol \alpha} \in [0,1)^r:\gamma_j({\boldsymbol \alpha})\in {\mathfrak M}+{\mathbb Z}\},$$
and we put ${\mathfrak V}={\mathfrak M}_7\cap{\mathfrak M}_{8}\cap\ldots \cap {\mathfrak M}_s$. In addition, we define
${\mathfrak m}_j=[0,1)^r\setminus {\mathfrak M}_j$ $(7\le j\le s)$, and write ${\mathfrak v}=[0,1)^r\setminus {\mathfrak V}$. Finally, for any
positive integer $n$, when
${\boldsymbol \omega}\in [1,s]^n$, we define
$${\mathfrak K}_{\boldsymbol \omega}=\{{\boldsymbol \alpha}\in {\mathfrak V}\setminus {\mathfrak P}:\gamma_{\omega_m}({\boldsymbol \alpha})\in {\mathfrak n}+{\mathbb Z}\ (1\le m\le n)\}.
$$
\begin{lemma}\label{lemma4.3} One has $N(P;{\mathfrak V}\setminus {\mathfrak P})\ll P^{s-3r}L^{-1/4}$.
\end{lemma}
\begin{proof} Let ${\boldsymbol \alpha}\in {\mathfrak V}\setminus {\mathfrak P}$, and suppose temporarily that $\gamma_{j_m}\in
{\mathfrak N}+{\mathbb Z}$ for $r$ distinct indices $j_m\in [7,s]$. For each $m$ there is a natural number $q_m\le L$
having the property that $\|q_m\gamma_{j_m}\|\le LP^{-3}$. With $q=q_1\ldots q_r$, one has $q\le L^r$
and $\|q\gamma_{j_m}\|\le L^rP^{-3}$. Next eliminating between $\gamma_{j_1},\ldots ,\gamma_{j_r}$ in
order to isolate $\alpha_1,\ldots ,\alpha_r$, one finds that there is a positive integer $\kappa$, depending at most
on $(c_{ij})$, such that $\|\kappa q\alpha_l\|\le L^{r+1}P^{-3}$ $(1\le l\le r)$. Since $\kappa q\le L^{r+1}$, it
follows that ${\boldsymbol \alpha}\in {\mathfrak P}$, yielding a contradiction to our hypothesis that ${\boldsymbol \alpha}\in {\mathfrak V}\setminus {\mathfrak P}$.
Thus $\gamma_\nu({\boldsymbol \alpha})\in {\mathfrak n}+{\mathbb Z}$ for at least $s-6-r\ge 5(r-1)$ of the suffices $\nu$ with $7\le \nu
\le s$. Then for some tuple ${\boldsymbol \nu}=(\nu_1,\ldots ,\nu_{5r-5})$ of distinct integers $\nu_m\in [7,s]$, one
has
$$N(P;{\mathfrak V}\setminus {\mathfrak P})\ll \int_{{\mathfrak K}_{\boldsymbol \nu}}|g_1\ldots g_6f_7\ldots f_s|{\,{\rm d}}{\boldsymbol \alpha} .$$
\par By symmetry, we may suppose that ${\boldsymbol \nu}=(9,\ldots ,5r+3)$. Let $k_l$ denote $g_l$ when
$1\le l\le 6$, and $f_l$ when $l=7,8$. Then combining (\ref{2.3}) with a trivial estimate for $|f(\alpha)|$,
one finds that for some tuple $(\sigma_1,\ldots ,\sigma_{r-1})$ of distinct integers $\sigma_m\in [9,5r+3]$, and
some integer $l$ with $1\le l\le 8$, one has
$$N(P;{\mathfrak V}\setminus {\mathfrak P})\ll P^{s-5r-3}\int_{{\mathfrak K}_{\boldsymbol \sigma}}|k_l^8f_{\sigma_1}^5
\ldots f_{\sigma_{r-1}}^5|{\,{\rm d}}{\boldsymbol \alpha} .$$
By changing variables, considering the underlying Diophantine equations, and applying Lemma
\ref{lemma4.1}, we deduce that
\begin{align*}
N(P;{\mathfrak V}\setminus {\mathfrak P})&\ll P^{s-5r-3}\Bigl( \int_0^1|f(\alpha)|^8{\,{\rm d}}\alpha \Bigr)
\Bigl( \int_{{\mathfrak M}\setminus {\mathfrak N}}|f(\alpha)|^5{\,{\rm d}}\alpha \Bigr)^{r-1}\\
&\ll P^{s-5r-3}(P^5)(P^2L^{\varepsilon-1/3})^{r-1}\ll P^{s-3r}L^{-1/4},
\end{align*}
\end{align*}
and the proof of the lemma is complete.
\end{proof}
\begin{lemma}\label{lemma4.4} There is a positive number $\delta$ such that $N(P;{\mathfrak v})\ll P^{s-3r-\delta}$.
\end{lemma}
\begin{proof} If ${\boldsymbol \alpha}\in {\mathfrak v}$, then for some index $j$ with $7\le j\le s$, one has
$\gamma_j({\boldsymbol \alpha})\not\in {\mathfrak M}+{\mathbb Z}$, and so ${\boldsymbol \alpha}\in {\mathfrak m}_j$. Thus, combining (\ref{2.3}) with a trivial
estimate for $|f(\alpha)|$, we find that for some suffix $j\in [7,s]$, and some tuple $(j_1,\ldots,j_{3r})$ with
$$1\le j_1<j_2<j_3\le 6<j_4<\ldots <j_{3r}\le s,$$
one has
\begin{equation}\label{4.4}
N(P;{\mathfrak v})\ll P^{s-6r-1}\sup_{{\boldsymbol \alpha}\in {\mathfrak m}_j}|f(\gamma_j({\boldsymbol \alpha}))|\oint
|g_{j_1}g_{j_2}g_{j_3}f_{j_4}\ldots f_{j_{3r}}|^2{\,{\rm d}}{\boldsymbol \alpha} .
\end{equation}
The matrix underlying the linear forms $\gamma_{j_1},\ldots ,\gamma_{j_{3r}}$ is highly non-singular, and so
we may apply Theorem \ref{theorem3.3} to estimate the integral on the right hand side of (\ref{4.4}).
Moreover, by Weyl's inequality (see \cite[Lemma 2.4]{Vau1997}), one has
$$\sup_{{\boldsymbol \alpha} \in {\mathfrak m}_j}|f(\gamma_j({\boldsymbol \alpha}))|\le \sup_{\beta \in {\mathfrak m}}|f(\beta)|\ll P^{3/4+\varepsilon}.$$
We therefore conclude that for some positive number $\delta$, one has
$$N(P;{\mathfrak v})\ll P^{s-6r-1}(P^{3/4+\varepsilon})(P^{3r+\xi+\varepsilon})\ll P^{s-3r-\delta}.$$
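Indeed, since $\tfrac{3}{4}+\xi=1-\tau$, the exponent occurring here is $s-3r-\tau+2\varepsilon$, so that any fixed
$\delta<\tau$ is admissible once $\varepsilon$ is taken sufficiently small.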
This completes the proof of the lemma.\end{proof}
By combining Lemmata \ref{lemma4.2}, \ref{lemma4.3} and \ref{lemma4.4}, we infer that whenever
the system (\ref{1.1}) possesses a non-zero $p$-adic solution, one has
\begin{align*}
N(P)&=N(P;{\mathfrak P})+N(P;{\mathfrak V}\setminus {\mathfrak P})+N(P;{\mathfrak v})\\
&\gg P^{s-3r}+O(P^{s-3r}L^{-1/4}+P^{s-3r-\delta})\gg P^{s-3r}.
\end{align*}
This completes our proof of Theorem \ref{theorem1.1}.
We remark that the condition in Theorem \ref{theorem1.1} that $(c_{ij})$ be highly
non-singular can certainly be relaxed. Let us refer to the number of columns lying in a given
one dimensional subspace of the column space of $(c_{ij})$ as the {\it multiplicity} of that
subspace. The discussion of \S\S2 and 3 would suffer no ill consequences were $(c_{ij})$ to
satisfy the condition that the maximum multiplicity be $2$. In order to see this, one has
simply to note that in such circumstances, the mean value estimates relevant to the application
of the Hardy-Littlewood method can be related, via H\"older's inequality, to mean values of the
shape (\ref{3.5}). We note in this context that the matrix $(c_{ij})$ occurring in Theorem
\ref{theorem1.1} is of course different from that occurring in Theorem \ref{theorem3.3}. With
rather greater effort in a more cumbersome argument, this maximum multiplicity $2$ could
be increased to $3$, and even several multiplicities of $4$ can be tolerated. This and further
refinements are topics that we intend to pursue on a future occasion.
\noindent {\bf Acknowledgement.} The authors are grateful to the referees for the
extreme care taken in reviewing this paper, and in particular for numerous suggestions
which have clarified our exposition and prompted significant corrections.
\end{document}
\begin{document}
\title{On Sj\"{o}strand's skew sign-imbalance identity}
\begin{abstract}
Recently, Sj\"{o}strand gave an identity for the sign-imbalance of
skew shapes. We give a quick proof of this using the skew domino
Cauchy identity and some sign analysis for skew shapes.
\end{abstract}
\maketitle
\section{The Theorem}
Let $T$ be a standard Young tableau with skew shape $\lambda/\mu$.
We will use the English notation for our tableaux, so that
partitions are top-left justified. The {\it reading word} $r(T)$ of
$T$ is obtained by reading each row from left to right, starting
with the top row. The {\it sign} ${\rm sign}(T)$ of $T$ is the sign of
$r(T)$ as a permutation. The {\it sign-imbalance}
$I_{\lambda/\mu}$ of $\lambda/\mu$ is given by
$$
I_{\lambda/\mu} = \sum_{T} {\rm sign}(T),
$$
where the summation is over all tableaux $T$ with shape
$\lambda/\mu$.
For a partition $\lambda$, let $v(\lambda) = \sum_i \lambda_{2i}$
denote the sum of the even-indexed parts. Generalizing an earlier
conjecture of Stanley~\cite{Sta}, Sj\"{o}strand~\cite{S} proved the
following identity.
\begin{thm}[\cite{S}]
\label{thm:S} Let $\alpha$ be a partition and let $n \in {\mathbb
N}$ be even. Then
$$
\sum_{\lambda/\alpha \vdash n} (-1)^{v(\lambda)}
I_{\lambda/\alpha}^2 = \sum_{\alpha/\mu \vdash n} (-1)^{v(\mu)}
I_{\alpha/\mu}^2.
$$
\end{thm}
The aim of this note is to give a quick derivation of
Theorem~\ref{thm:S} using the techniques developed in~\cite{Lam} and
the {\it skew domino Cauchy identity}. Let ${\mathcal G}_{\lambda/\mu}(X;q) =
\sum_D q^{{\rm spin}(D)}x^{{\rm weight}(D)}$ be the spin-weight generating
function of domino tableaux with shape $\lambda/\mu$; see for
example~\cite{Lam}. Here we will use the convention that ${\rm spin}(D)$
is equal to {\it half} the number of vertical dominoes in $D$.
Though not stated explicitly, the following identity is a
straightforward generalization of the ``domino Cauchy identity''
proved in any of~\cite{Lam,Lam1,vL}.
\begin{thm}
\label{thm:skewcauchy} Let $\alpha, \beta$ be two fixed partitions.
Then
$$
\sum_{\lambda} {\mathcal G}_{\lambda/\alpha}(X;q) {\mathcal G}_{\lambda/\beta}(Y;q) =
\prod_{i,j}\frac{1}{(1-x_iy_j)(1-qx_iy_j)} \sum_{\mu}
{\mathcal G}_{\alpha/\mu}(X;q) {\mathcal G}_{\beta/\mu}(Y;q).
$$
\end{thm}
\section{The Proof}
Let $D$ be a standard domino tableau with shape $\lambda/\mu$. The
sign ${\rm sign}(D)$ is equal to ${\rm sign}(T)$ where $T$ is the standard
Young tableau obtained from $D$, also with shape $\lambda/\mu$, by
replacing the domino labeled $i$ by the numbers $2i-1$ and $2i$. The
following result follows from a sign-reversing
involution~\cite{Lam,S,Sta}.
\begin{lem}
\label{lem:first} If $\lambda/\mu$ has an even number of squares,
then its sign-imbalance is given by
$$
I_{\lambda/\mu} = \sum_D {\rm sign}(D),
$$
where the summation is over all standard domino tableaux with shape
$\lambda/\mu$.
\end{lem}
Let $\delta \in D$ be a vertical domino occupying squares in rows
$i-1$ and $i$. We call $\delta$ {\it nice} if the number of squares
contained in $\lambda/\mu$ to the left of $\delta$ in row $i$ is odd. In
other words, if $\delta$ lies in column $j$ then it is nice if and
only if $j - \mu_i$ is even. Let ${\rm nv}(D)$ denote the number of nice
(and thus vertical) dominoes in $D$. Let ${\rm bv}(D)$ denote the number
of non-nice vertical dominoes in $D$. Then we have ${\rm spin}(D) =
\frac{{\rm nv}(D) + {\rm bv}(D)}{2}$.
\begin{lem}
\label{lem:nv} Let $D$ be a domino tableau of shape $\lambda/\mu$.
Then ${\rm sign}(D) = (-1)^{{\rm nv}(D)}$.
\end{lem}
\begin{proof}
This follows from the same argument as in the proof of
\cite[Proposition 21]{Lam}.
\end{proof}
\begin{lem}
\label{lem:nvbv} Let $D$ be a domino tableau of shape $\lambda/\mu$.
The number ${\rm nv}(D) - {\rm bv}(D)$ depends only on the shape
$\lambda/\mu$.
\end{lem}
\begin{proof}
The number ${\rm nv}(D)-{\rm bv}(D)$ only depends on the tiling of
$\lambda/\mu$ by dominoes. It is well known (see for
example~\cite{Pak}) that every two such domino tilings can be
connected by a number of moves of the form shown in
Figure~\ref{fig:localmove}. These moves do not change the value of
${\rm nv}(D)-{\rm bv}(D)$.
\end{proof}
\begin{figure}
\caption{The ``local move'' which connects all domino tilings.}
\label{fig:localmove}
\end{figure}
For each skew shape $\lambda/\mu$, define $v'(\lambda/\mu) := {\rm nv}(D)
- {\rm bv}(D)$ where $D$ is any domino tableau with shape $\lambda/\mu$.
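In particular, since ${\rm spin}(D) = \frac{{\rm nv}(D)+{\rm bv}(D)}{2}$, one has
${\rm nv}(D) = {\rm spin}(D) + v'(\lambda/\mu)/2$; this is the form in which $v'$ enters the
proof of Proposition~\ref{prop:main} below.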
\begin{lem}
\label{lem:mod2} We have $v(\lambda)+v(\mu) \equiv v'(\lambda/\mu)
\mod 2$.
\end{lem}
\begin{proof}
This is straightforward to prove by induction on the size of
$\lambda$, starting with $\lambda = \mu$ and adding dominoes.
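For instance, appending a horizontal domino to row $i$ increases $\lambda_i$ by $2$ and
leaves ${\rm nv}(D)-{\rm bv}(D)$ unchanged, so both sides of the congruence are unaffected,
while appending a vertical domino in rows $i-1$ and $i$ increases $v(\lambda)$ by exactly $1$
and changes ${\rm nv}(D)-{\rm bv}(D)$ by $\pm 1$.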
\end{proof}
\begin{proposition}
\label{prop:main} Let $\lambda/\mu$ have an even number of squares.
Then
$$
I_{\lambda/\mu}^2 = (-1)^{v(\lambda)+v(\mu)} \left(\sum_{D}
(-1)^{{\rm spin}(D)} \right)^2,
$$
where the summation is over all domino tableaux of shape
$\lambda/\mu$.
\end{proposition}
\begin{proof}
We have
\begin{align*}
I_{\lambda/\mu}^2 & = \left( \sum_D {\rm sign}(D)\right)^2 & \mbox{by
Lemma~\ref{lem:first}} \\&= \left( \sum_D (-1)^{{\rm nv}(D)} \right)^2 &
\mbox{by
Lemma~\ref{lem:nv},} \\
&= \left( \sum_D (-1)^{{\rm spin}(D) + v'(\lambda/\mu)/2}
\right)^2 & \mbox{using ${\rm spin}(D) = \frac{{\rm nv}(D) + {\rm bv}(D)}{2}$,} \\
&= (-1)^{v(\lambda)+v(\mu)} \left(\sum_{D} (-1)^{{\rm spin}(D)} \right)^2 &
\mbox{by Lemma~\ref{lem:mod2}}.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:S}]
In Theorem~\ref{thm:skewcauchy}, let $\alpha = \beta$ and $q = -1$.
Then take the coefficient of $x_1x_2\cdots x_n y_1 y_2 \cdots y_n$ on
both sides. Using Proposition~\ref{prop:main}, this gives exactly
Theorem~\ref{thm:S}.
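Indeed, at $q=-1$ the product over $i,j$ equals $\prod_{i,j}(1-x_i^2y_j^2)^{-1}$, which contains only
even powers of each variable and hence contributes only its constant term to the coefficient of the
square-free monomial $x_1\cdots x_n y_1\cdots y_n$; the coefficient of $x_1\cdots x_n$ in
${\mathcal G}_{\lambda/\alpha}(X;-1)$ is $\sum_D (-1)^{{\rm spin}(D)}$, summed over standard domino
tableaux $D$ of shape $\lambda/\alpha$, and Proposition~\ref{prop:main} converts the resulting squares
into the signed sign-imbalances appearing in Theorem~\ref{thm:S}, the common factor $(-1)^{v(\alpha)}$
cancelling from both sides.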
\end{proof}
\end{document}
\begin{document}
\title{Tavis-Cummings model beyond the rotating wave approximation:
Inhomogeneous coupling}
\author{Lijun Mao}
\affiliation{Institute of Theoretical Physics, Shanxi University, Taiyuan 030006, P. R.
China}
\author{Sainan Huai}
\affiliation{Institute of Theoretical Physics, Shanxi University, Taiyuan 030006, P. R.
China}
\author{Yunbo Zhang}
\email{[email protected]}
\affiliation{Institute of Theoretical Physics, Shanxi University, Taiyuan 030006, P. R.
China}
\begin{abstract}
We present the analytical solution of the Tavis-Cummings (TC) model for more
than one qubit inhomogeneously coupled to a single mode radiation field
beyond the rotating-wave approximation (RWA). The significant advantage of
the displaced oscillator basis is that it enables us to apply the same truncation
techniques adopted in the single qubit Jaynes-Cummings (JC) model to the
multi-qubit system. The derived analytical spectrum matches perfectly the
exact diagonalization numerical solutions of the inhomogeneous TC model
in the parameter regime where the qubit transition frequencies are far
off-resonance with the field frequency and the interaction strengths reach
the ultra-strong coupling regime. The two-qubit TC model is quasi-exactly
solvable because part of the spectrum can be determined exactly in the
homogeneous coupling case with two identical qubits or with symmetric (asymmetric)
detuning. By means of the fidelity of quantum states we identify several
nontrivial level crossing points in the same parity subspace, which implies that
the homogeneously coupled two-qubit TC model with $\omega_1=\omega_2$ or
$\omega_1\pm\omega_2=2\omega_c$ is integrable.
We further explore the time evolution of the qubit's population inversion
and the entanglement behavior, taking two qubits as an example. The
analytical methods provide unexpectedly accurate results in describing the
dynamics of the qubit in the present experimentally accessible coupling
regime, showing that the collapse-revival phenomena emerge, survive, and are
finally destroyed when the coupling strength increases beyond the
ultra-strong coupling regime. The inhomogeneous coupling system exhibits new
dynamics, which are different from homogeneous coupling case. The suggested
procedure applies readily to the multiple qubits system such as the GHZ
state entanglement evolution and quantum entanglement between a single
photon and superconducting qubits of particular experiment interest.
\end{abstract}
\pacs{42.50.Pq, 42.50.Md, 03.65.Ud}
\maketitle
\section{Introduction}
The Jaynes-Cummings (JC) model with the rotating wave approximation (RWA),
first introduced in 1963 \cite{Jaynes}, is the simplest model that describes
the interaction between a two-level atom and a single mode quantized
radiation field. The RWA is applicable when the applied electromagnetic
field frequency $\omega _{c}$ is near resonance with the atom transition
frequency $\omega _{j}$ and the interaction between the atom and the
radiation field is weak. The reason is that the contribution of the
counter-rotating terms of the system is very small. A new era of
experiments witnesses the breakdown of the JC model in terms of both coupling
and detuning, including circuit QED experiments with superconducting qubits
coupled to LC and waveguide resonators \cite{Schuster, Hofheinz, Forn-Diaz,
Niemczyk} and Cooper-pair boxes or Josephson phase qubits coupled to
nanomechanical resonators \cite{Bouchiat, Wallraff, Nakamura, LaHaye,
Connell}. Each of these artificial atoms has an internal degree of freedom
(d.o.f.) that can be either up or down, creating a spin-1/2 system. These systems
generally allow coupling strengths up to $g_{j}/\omega _{c}\simeq 0.1$ in
the so-called ultrastrong coupling regime, or the qubit transition frequency
far-detuned from the field frequency \cite{Fedorov, Abdumalikov, Higgins,
Irish1, Amico, Irish2, Chen1, Werlang, Zheng, Zhang, Liu, He, He2, Wolf}. An
adiabatic approximation approach \cite{Irish1} was proposed to treat the
parameter regime outside the near-resonance and weak-coupling assumption of
RWA based on the displaced oscillator basis. In this basis the Hamiltonian
is truncated to a block-diagonal form and the blocks are solved
individually. The time evolution of the two-level-system occupation
probability with thermal and coherent state initial conditions for the
oscillator exhibits clear signals of collapse and revival.
The quantum Rabi model \cite{Rabi}, or JC model without the RWA, was
recently declared solved exactly in \cite{Braak, Solano}. By means of the
representation of bosonic operators in the Bargmann space Braak argued that
the regular spectrum of the Rabi model was given by the zeros of a
transcendental function, which is given as an infinite power series. Chen
et al.\ mapped the model to a polynomial equation with a single variable in
terms of tunable extended bosonic coherent states \cite{Chen2}. They recover
Braak's exact solution in an alternative, more physical way and point out
that both methods have one thing in common: the spectrum cannot be obtained
without truncation in the power series \cite{Chen4}. Thus the Rabi model is
quasi-exactly solvable in the sense that only a finite part of the spectrum
can be obtained in closed form and the remaining part of the spectrum can
only be determined by numerical means \cite{Moroz1, Moroz2, Zhang2, Peng}.
About 1350 calculable energy levels in each parity subspace are
obtained in double precision by an elementary stepping algorithm, up to two
orders of magnitude higher than Braak's solution \cite{Moroz2}. Quantum
integrability is, according to Braak's criterion, equivalent to the
existence of quantum numbers that classify eigenstates uniquely. The Rabi
model is integrable because it has two d.o.f. and the eigenstates
can be uniquely labelled by two quantum numbers associated with the energy
and the parity, respectively. Moreover, examples of nonintegrable but
(quasi-)exactly solvable systems are given with broken parity symmetry or
vanishing level splitting of an additional qubit \cite{Braak, Peng}. This
theoretical progress has renewed the interest in the Rabi and related
models. Analytical solutions of these models have brought clarity and
intuition to several important problems and experimental results in
contemporary quantum information. Furthermore, it is expected that the
experiments could reach the deep strong coupling regime \cite{Casanova,
Liberato} where the ratio of the coupling strength to the relevant
frequencies exceeds unity. Perturbative methods and the concept of Rabi
oscillations should be superseded by novel physics such as parity chains and
photon number wave packets.
To describe the collective behavior of multiple atomic dipoles interacting
with an electromagnetic field mode, the Dicke model \cite{Dicke} was
introduced where the Pauli operators are summed and transformed into a
bosonic operator. Though very successful in treating the system of an
alkaline atomic ensemble \cite{Baumann} in an optical microcavity with the
number of atoms over $10^5$, it does not apply well to the case of
multiqubit superconducting circuits with $N \le 10$. Theoretical studies for
a finite number $N$ of qubits in the system often employ the Tavis-Cummings
(TC) model \cite{Tavis} under RWA, where all the spins are grouped into a
total large spin. The TC model has received much attention and has been
involved in both experiments and theories \cite{Agarwal, Lopez, Ian, Chen3,
Chilingaryan, Braak1, Wilson, Wang, Lambert, Fink, DiCarlo}. Being exactly
solvable only when the coupling is homogeneous and when the eigenfrequencies
between the qubits and the photon mode are equal, it was recently \cite
{Agarwal} extended beyond the RWA for quasi-degenerate qubits in the
parameter regime in which the frequencies of the qubits are much smaller
than the oscillator frequency and the coupling strength is allowed to be
ultrastrong. The case of inhomogeneous coupling is drastically different:
there is no such straightforward way to access the Hilbert space. The
extension to the inhomogeneous coupling system is limited to RWA \cite
{Lopez, Ian} or numerical exact approach on the entanglement evolution of
two independent JC atoms \cite{Chen3}. It is worth
noticing that more effort has been devoted to the system composed of two nonidentical
qubits \cite{Chilingaryan}, or the $N = 3$ Dicke model which couples three
qubits to a single radiation mode and constitutes the simplest
quantum-optical system allowing for Greenberger-Horne-Zeilinger (GHZ) states
\cite{Braak1}.
We in this paper solve the TC model beyond the RWA with $N$ qubits coupled
to a single oscillator mode by comprehensively considering the recently
developed approaches in solving the JC model with extension to the case of
inhomogeneous coupling. A systematic truncation method is developed to find
the exact wave functions of the model with $N$ discrete and one continuous d.o.f.
In the basis of the displaced operators we construct the Hamiltonian by primitive
building blocks and additionally allow transitions between adjacent blocks. We mainly take
two arbitrary qubits interacting with a bose field as an example. The
analytical eigensolutions are derived in the zeroth-order and first-order
approximation to the exact wave function in deep strong coupling parameter
regimes. The procedure is easily extended to systems with more than two
qubits. We further examine the solvability and integrability of the system;
level crossing points in the energy spectrum within the same parity subspace
are related to a hidden symmetry of the system. Then, starting
from any initial state of the system we are able to derive the time
evolution properties of the qubits by using a linear combination of the
analytical eigensolutions and tracing over the oscillator field. In other
words, some general techniques are applied to investigate the time evolution
of the rather complicated multiqubit-field system. Subsequently, we can
apply the approximated eigensolutions to study the dynamical evolution of
the entanglement between the two qubits as a fundamental consequence of
quantum mechanics and as a resource for communication and information
processing \cite{Bell, Wootters, Lee, Coffman, Lopez1}.
The paper is organized as follows. In Sec. II the analytical eigensolutions
of the TC model beyond the RWA are given after introducing the general
procedure of constructing the determinant of the secular equation. Sec. III
is devoted to the dynamical behaviors of the qubits, in which the analytical
eigensolutions are employed to approximately describe the time evolution of
the probability of finding both qubits in their initial state and the
population inversion when the quantum field is prepared initially in the
displaced coherent state. In Sec. IV we further study the entanglement
dynamics for two qubits starting from the Bell state and the field in a
coherent state. Finally, in Sec. V we discuss the characteristics of the
case of unequal coupling strengths for the two qubits emerging from the present
study.
\section{Eigensolutions of the TC model beyond the RWA}
The system we consider here consists of $N$ qubits inhomogeneously
interacting with a single-mode bose field. It is described by the TC
Hamiltonian beyond the RWA (we set $\hbar =1 $) \cite{Tavis}
\begin{equation}
H=\omega _{c}a^{\dagger }a+\sum\limits_{j=1}^{N}\left( -\frac{\omega _{j}}{2}
\hat{\sigma} _{j}^{x}+g_{j}\left( a^{\dagger }+a\right) \hat{\sigma}
_{j}^{z}\right), \label{H}
\end{equation}
where $a^{\dagger }$ ($a$) is the bosonic creation (annihilation) operator
of the single bosonic mode with frequency $\omega_c$, $\omega _{j}$ denotes
the energy splitting of $j$-th qubit described by Pauli matrices $\hat{\sigma
}_{j}^{k}$ ($k=x,y,z$), and $g_{j}$ is the dipole-field coupling strength
between the qubit $j$ and the field. Here we have rotated the system around
the $y$-axis by an angle $\pi /4$ realized through a unitary transformation
\cite{Chen3} $V=\exp \left( \frac{i\pi }{4}\sum_{j}\hat{\sigma}
_{j}^{y}\right) $. Basically the calculation here applies to arbitrary
non-identical two-level atoms and non-uniform coupling strengths $g_j$ in
any form. We also note that the Hamiltonian (\ref{H}) conserves the global
parity operator defined as $\Pi =\prod\limits_{j=1}^{N}\hat{\sigma}
_{j}^{x}\exp (i\pi a^{\dagger }a)$, i.e. $\left[ H,\Pi \right] =0$. In the
following we use the parity operator to decompose the Hilbert space into
even and odd subspaces.
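As a minimal numerical illustration of this parity symmetry (a sketch of our own; the Python helper name, the Fock-space cutoff and the parameter values below are arbitrary illustrative choices, not taken from any figure), one can build truncated matrix representations of $H$ in Eq. (\ref{H}) and of $\Pi$ for $N=2$ and check that they commute:
\begin{verbatim}
import numpy as np

def build_H_Pi(g1, g2, w1, w2, wc=1.0, n_max=20):
    """Truncated matrices for the two-qubit Hamiltonian (1) and the parity operator Pi."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)            # annihilation operator
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.diag([1.0, -1.0])
    I2, If = np.eye(2), np.eye(n_max)
    op = lambda q1, q2, f: np.kron(np.kron(q1, q2), f)      # qubit 1 x qubit 2 x field
    H = op(I2, I2, wc * a.T @ a) \
        + g1 * op(sz, I2, a + a.T) + g2 * op(I2, sz, a + a.T) \
        - 0.5 * w1 * op(sx, I2, If) - 0.5 * w2 * op(I2, sx, If)
    Pi = op(sx, sx, np.diag((-1.0) ** np.arange(n_max)))    # sigma_1^x sigma_2^x exp(i pi n)
    return H, Pi

H, Pi = build_H_Pi(g1=0.3, g2=0.2, w1=0.25, w2=0.25)
print(np.max(np.abs(H @ Pi - Pi @ H)))                      # ~ 1e-15, i.e. [H, Pi] = 0
\end{verbatim}
The helper defined here is reused in a later sketch to check a quasi-exact eigenstate.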
For the convenience of description, we take $N=2$ as an illustrative
example. Denote the upper and lower eigenstates of $\hat{\sigma}_{j}^{z}$ as
$\left\vert 1\right\rangle _{j}$ and $\left\vert 0\right\rangle _{j}$
respectively. Formally we treat qubit 2 as a new member added to the Rabi model
\cite{Rabi,Braak} and unfold the dimension of the qubit space from 2 to 4. In the
basis of product space of the two qubits, i.e. $\left\vert 11\right\rangle
=\left\vert 1\right\rangle _{2}\otimes \left\vert 1\right\rangle _{1},$ $
\left\vert 10\right\rangle =\left\vert 1\right\rangle _{2}\otimes \left\vert
0\right\rangle _{1},$ $\left\vert 01\right\rangle =\left\vert 0\right\rangle
_{2}\otimes \left\vert 1\right\rangle _{1}$, and $\left\vert 00\right\rangle
=\left\vert 0\right\rangle _{2}\otimes \left\vert 0\right\rangle _{1}$, we
may write the Hamiltonian into a matrix form
\begin{widetext}
\begin{eqnarray}
H=\left(
\begin{array}{cccc}
\omega_{c} \left( a^{\dagger }a+\beta _{1}\left( a^{\dagger }+a\right) \right) &
-\frac{\omega _{1}}{2} & -\frac{\omega _{2}}{2} & 0 \\
-\frac{\omega _{1}}{2} & \omega_{c} \left( a^{\dagger }a+\beta _{2}\left(
a^{\dagger }+a\right) \right) & 0 & -\frac{\omega _{2}}{2} \\
-\frac{\omega _{2}}{2} & 0 & \omega_{c} \left( a^{\dagger }a+\beta _{3}\left(
a^{\dagger }+a\right) \right) & -\frac{\omega _{1}}{2} \\
0 & -\frac{\omega _{2}}{2} & -\frac{\omega _{1}}{2} & \omega_{c} \left(
a^{\dagger }a+\beta _{4}\left( a^{\dagger }+a\right) \right)
\end{array}
\right) \label{4}
\end{eqnarray}
\end{widetext}We notice that the original $2\times 2$ Hamiltonian \cite{Liu}
as a primitive block is shifted along the diagonal line with $g_{1}$ and $
g_{2}$ being recombined into 4 coupling parameters $\beta _{i}$ with
relations $\beta _{1}=-\beta _{4}=\left( g_{2}+g_{1}\right) /\omega _{c}$
and $\beta _{2}=-\beta _{3}=\left( g_{2}-g_{1}\right) /\omega _{c}$, while
the transition frequency $\omega _{2}$ always appears in the off-diagonal
blocks of the matrix. A similar procedure can be applied when we add one more
qubit to the system. The basis of $N$ qubits is $2^{N}$-dimensional and we
have $2^{N-1}$ independent dimensionless coupling parameters $\beta _{i}$.
A special case is the system of two identical qubits ($\omega _{1}=\omega
_{2}$) coupling with a common oscillator mode, which has been extended
beyond the RWA in Ref. \cite{Agarwal}. By assuming that the coupling
parameters are larger than the transition frequencies, i.e. in the
deep-strong-coupling regime, $g_{1},g_{2}\gg \omega _{1},\omega _{2}$, the
eigenvalues have been calculated up to the second-order perturbation
correction \cite{Chilingaryan}. The main results for these limiting cases
may be readily reproduced from the method in this paper.
Let us first introduce the displacement operators \cite{Schweber,Irish1} $
\hat{D}\left( \beta _{i}\right) =\exp \left[ \beta _{i}\left( a^{\dagger
}-a\right) \right] $, which will translate the field operators $a^{\dagger }$
and $a$ by a distance $\beta_i$ and give rise to $A_{i}^{\dagger } =\hat{D}
^{\dagger }\left( \beta _{i}\right) a^{\dagger }\hat{D}\left( \beta
_{i}\right) =a^{\dagger }+\beta _{i}$ and $A_{i}=\hat{D}^{\dagger }\left(
\beta _{i}\right) a\hat{D}\left( \beta _{i}\right) =a+\beta _{i}$, and will
transform the number state $\left\vert n\right\rangle$ defined as $
a^{\dagger }a\left\vert n\right\rangle =n\left\vert n\right\rangle $ into
the so-called displaced Fock number state defined as $A_{i}^{\dagger
}A_{i}\left\vert n\right\rangle _{A_{i}} =n\left\vert n\right\rangle
_{A_{i}} $, i.e. $\hat{D}^{\dagger }\left( \beta _{i}\right) \left\vert
n\right\rangle =\left\vert n\right\rangle _{A_{i}}$. The states $\left\vert
n\right\rangle _{A_{i}}$ are orthogonal for the same index $i$ and
non-orthogonal for different subspaces $i$ and $j$. The lack of
orthogonality between states with different displacements leads to the
unusual results in the dynamics of population inversion and entanglement
which will be shown later in this work.
The diagonal elements of the Hamiltonian are in this way reconstructed as $
\omega _{c}\left( A_{i}^{\dagger }A_{i}-\beta _{i}^{2}\right) $ with the
off-diagonal $\omega _{j}$ unchanged. The Hilbert space of the diagonal
Hamiltonian is now of the form of a combination of qubit basis and displaced
oscillator basis, e.g. $\left\vert 111...\right\rangle \left\vert
n\right\rangle _{A_{1}}$, which can be taken as a starting point to expand
the eigenfunction of the total Hamiltonian $H$. For the two-qubit system we
suppose that
\begin{eqnarray}
\left\vert \psi \right\rangle &=&\sum_{n=0}^{\infty }\left( d_{1n}\left\vert
11\right\rangle \left\vert n\right\rangle _{A_{1}}+d_{2n}\left\vert
10\right\rangle \left\vert n\right\rangle _{A_{2}}\right. \notag \\
&&\left. +d_{3n}\left\vert 01\right\rangle \left\vert n\right\rangle
_{A_{3}}+d_{4n}\left\vert 00\right\rangle \left\vert n\right\rangle
_{A_{4}}\right) \label{eigenfunction}
\end{eqnarray}
which in addition should be the eigenfunction of the parity operator $\Pi $,
i.e. $\Pi \left\vert \psi \right\rangle =\kappa \left\vert \psi
\right\rangle $ with $\kappa =+,-$ for even and odd parity respectively.
Actually we find that the $\sigma $'s operators in $\Pi $ transform the
qubit basis from $\left\vert 11\right\rangle $ ($\left\vert 10\right\rangle $
) to $\left\vert 00\right\rangle $ ($\left\vert 01\right\rangle $) and vice
versa, while the field operators set up links between the displaced Fock
state $\left\vert n\right\rangle _{A_{1}}$ ($\left\vert n\right\rangle
_{A_{2}}$) and its symmetric counterpart $\left\vert n\right\rangle _{A_{4}}$
($\left\vert n\right\rangle _{A_{3}}$). Thus the coefficients are related to
each other through $d_{4n}=\kappa \left( -1\right) ^{n}d_{1n}$ and $
d_{3n}=\kappa \left( -1\right) ^{n}d_{2n}$. This symmetry separates the
state space into two different invariant subspaces which can be labeled by the
eigenvalues of the operator $\Pi $ with $\kappa =+,-$. In accordance with
the Schr\"{o}dinger equation we find that the number of equations is reduced
by half
\begin{eqnarray}
\varepsilon _{1m}d_{1m}+\sum_{n=0}^{\infty }\Omega _{mn}^{\kappa }d_{2n}
&=&Ed_{1m}, \label{Schrdinger equation1} \\
\varepsilon _{2m}d_{2m}+\sum_{n=0}^{\infty }W_{mn}^{\kappa }d_{1n}
&=&Ed_{2m}, \label{Schrdinger equation2}
\end{eqnarray}
where $\varepsilon _{im}=\left( m-\beta _{i}^{2}\right) \omega _{c}$ and the
off-diagonal terms describe the transitions between states belonging to
different displaced Fock spaces
\begin{eqnarray*}
\Omega _{mn}^{\kappa } &=&-\frac{\omega _{1}}{2}\left[ _{A_{1}}\left\langle
m|n\right\rangle _{A_{2}}\right] -\kappa \left( -1\right) ^{n}\frac{\omega
_{2}}{2}\left[ _{A_{1}}\left\langle m|n\right\rangle _{A_{3}}\right] , \\
W_{mn}^{\kappa } &=&-\frac{\omega _{1}}{2}\left[ _{A_{2}}\left\langle
m|n\right\rangle _{A_{1}}\right] -\kappa \left( -1\right) ^{n}\frac{\omega
_{2}}{2}\left[ _{A_{2}}\left\langle m|n\right\rangle _{A_{4}}\right] .
\end{eqnarray*}
Clearly the symmetry of the coefficients $d_{im}$ reduces the number of
equations by half. This effectively folds the already expanded Hilbert space
to a diagonal block, in which the anti-diagonal elements are occupied by the
newly added $\omega _{N}$ together with the parity $\kappa $ as in the
expressions for $\Omega $ and $W$ above. The nonzero off-diagonal elements
originate from the non-orthogonality of displaced Fock states \cite{Liu}
\begin{equation}
_{A_{i}}\left\langle m|n\right\rangle _{A_{j}}=e^{\frac{-\beta _{ij}^{2}}{2}
}\sum_{l=0}^{\min (m,n)}\frac{(-1)^{n-l}\sqrt{m!n!}}{l!(n-l)!(m-l)!}\beta
_{ij}^{m+n-2l} \label{Xmn}
\end{equation}
with $\beta _{ij}=\beta _{i}-\beta _{j}$. This implies that the interchange
of $i$ and $j$, that of $m$ and $n$, or the inversion of $\beta _{i}$ to $
-\beta _{i}$ will introduce a factor $(-1)^{m+n}$ in eq. (\ref{Xmn}). Then
in terms of expressions about $\Omega _{mn}^{\kappa }$ and $W_{mn}^{\kappa }$
, we find the symmetry relation
\begin{equation}
\Omega _{mn}^{\kappa }=W_{nm}^{\kappa }, \label{OmegaW}
\end{equation}
which turns the Hamiltonian in the Hilbert space into a real symmetric
matrix and assures that all coefficients $d_{im}$ are real. This allows us
to keep only $\Omega $ in the rest of the paper.
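The overlap (\ref{Xmn}) is straightforward to evaluate numerically; the following sketch (continuing the illustrative Python code above) implements it and cross-checks it against the matrix exponential of the displacement operator in a truncated Fock space:
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.linalg import expm

def overlap(m, n, beta_ij):
    """Displaced-Fock overlap <m|_{A_i}|n>_{A_j} = <m|D(beta_i - beta_j)|n>, Eq. (Xmn)."""
    s = sum((-1)**(n - l) * np.sqrt(factorial(m) * factorial(n))
            / (factorial(l) * factorial(n - l) * factorial(m - l))
            * beta_ij**(m + n - 2 * l)
            for l in range(min(m, n) + 1))
    return np.exp(-beta_ij**2 / 2) * s

# Cross-check against the matrix exponential of the displacement operator
# (the Fock cutoff is an illustrative choice; keep m, n well below it).
n_cut, beta_ij = 40, 0.45
a = np.diag(np.sqrt(np.arange(1, n_cut)), 1)
D = expm(beta_ij * (a.T - a))
print(overlap(3, 5, beta_ij), D[3, 5])   # the two numbers agree
\end{verbatim}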
\begin{figure}
\caption{(Color Online) The ED numerical solution of the energy levels as a
function of coupling strength $\protect\beta_{1}$.}
\label{Fig1}
\end{figure}
The $2^{N-1}$ equations for $N$ qubits take similar forms as eqs. (\ref
{Schrdinger equation1}) and (\ref{Schrdinger equation2}) with the diagonal
terms $\varepsilon _{im}d_{im}$, and in each equation the off-diagonal terms indicate
transitions between Fock
states displaced in different directions and by different distances. It is easy to show
that the eigenvalue spectrum of the $N$ qubits system is unaltered when any
of the coupling strengths changes its sign, e.g. $g_{1}\rightarrow -g_{1}$
or $g_{2}\rightarrow -g_{2}$ so that it suffices to discuss the energy
spectrum for positive values of $g_{i}$. The equations (\ref{Schrdinger
equation1}) and (\ref{Schrdinger equation2}) are solved by means of exact
diagonalization (ED) and the energy spectrum is shown in Fig. \ref{Fig1} as a
function of $\beta _{1}$ for $g_{1}=0.3\omega _{c}$ and $\omega _{1}=\omega
_{2}=0.25\omega _{c}$. As mentioned earlier, the power series (\ref
{eigenfunction}) has to be truncated in order to obtain the spectrum. Here
we set the truncation number $n_{tr}=48$ such that the calculation is done
in a closed subspace $\left\vert n\right\rangle _{A_{i}}$ with $\left(
n=0,1,2\cdots ,n_{tr}\right) $ and the off-diagonal elements $\Omega _{mn}$
for $m,n>n_{tr}$ are less than $10^{-6}$. The lowest 6 levels are
illustrated in Fig. \ref{Fig1} for even and odd parities respectively and we
find that, similar to the single qubit case, strong coupling tends to
lower the eigenenergies of the system and level crossing occurs for
different parities, while levels with the same parity prefer to avoid this.
Furthermore, the energy spectrum is symmetric about the vertical dashed line
$\beta _{1}=0.3$ in the shadowed area (correspondingly $g_{2}/\omega _{c}$
takes the value between $-0.3$ and $0.3$).
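The ED step itself can be sketched compactly (again as illustrative Python, reusing the overlap helper above; the coupling values are arbitrary examples while $n_{tr}$ follows the truncation used here): one assembles the parity-resolved matrix of Eqs. (\ref{Schrdinger equation1}) and (\ref{Schrdinger equation2}) and diagonalizes it:
\begin{verbatim}
# Parity-resolved ED of the folded Schrodinger equations; reuses overlap() defined above.
wc, w1, w2 = 1.0, 0.25, 0.25
g1, g2 = 0.3, 0.2                          # illustrative couplings
b1, b2 = (g2 + g1) / wc, (g2 - g1) / wc    # beta_1, beta_2 (beta_3 = -beta_2, beta_4 = -beta_1)
ntr = 48                                   # truncation number n_tr

def tc_levels(kappa):
    ms = np.arange(ntr + 1)
    e1, e2 = (ms - b1**2) * wc, (ms - b2**2) * wc
    Om, W = np.zeros((ntr + 1, ntr + 1)), np.zeros((ntr + 1, ntr + 1))
    for i in range(ntr + 1):
        for j in range(ntr + 1):
            Om[i, j] = -0.5 * w1 * overlap(i, j, b1 - b2) \
                       - kappa * (-1)**j * 0.5 * w2 * overlap(i, j, b1 + b2)
            W[i, j]  = -0.5 * w1 * overlap(i, j, b2 - b1) \
                       - kappa * (-1)**j * 0.5 * w2 * overlap(i, j, b2 + b1)
    M = np.block([[np.diag(e1), Om], [W, np.diag(e2)]])
    return np.sort(np.linalg.eigvalsh(M))

print(tc_levels(+1)[:3], tc_levels(-1)[:3])  # lowest even- and odd-parity levels
\end{verbatim}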
In any case, obtaining analytically the eigenvalues and
eigenvectors for the $N$-qubit system is very difficult, if not impossible,
so approximation techniques have to be employed. Here we illustrate how to
identify the building blocks of the determinant by taking the two-qubit
system as an example, and the procedure of solving the eigen-equations up to
any order of approximations to the exact result of the wave function. The
condition for the existence of non-trivial solutions of $d_{im}$ is the
secular equation described by the determinant
\begin{widetext}
\begin{eqnarray}
\left\vert
\begin{array}{cccccc}
\!\cdots\! & \vdots & \vdots & \vdots & \vdots & \!\cdots\! \\
\!\cdots\! & \!\varepsilon _{1m}\!\!-\!E & \Omega _{mm}^{\kappa } & 0 & \Omega _{m\left(
m+1\right) }^{\kappa } & \cdots \\
\!\cdots\! & \Omega_{mm}^{\kappa } & \!\varepsilon _{2m}\!\!-\!E & \Omega_{\left( m+1\right)m }^{\kappa }
& 0 & \!\cdots\! \\
\!\cdots\! & 0 & \Omega _{\left( m+1\right) m}^{\kappa } & \!\varepsilon _{1\left(
m+1\right) }\!\!-\!E & \Omega _{\left( m+1\right)\left( m+1\right) }^{\kappa } &
\!\cdots\! \\
\!\cdots\! & \Omega_{m\left( m+1\right) }^{\kappa } & 0 & \Omega_{\left( m+1\right)\left( m+1\right) }^{\kappa } & \!\varepsilon _{2\left( m+1\right) }\!\!-\!E & \!\cdots \!\\
\!\cdots\! & \vdots & \vdots & \vdots & \vdots & \!\cdots\!
\end{array}
\right\vert =0. \label{Det}
\end{eqnarray}
\end{widetext}
We see that in the two-qubit case the primitive building block of the
determinant is $2 \times 2$ and in the first-order approximation two blocks $
m$ and $m+1$ are involved, the transitions between which are determined by
the off-diagonal elements $\Omega_{m(m+1)}$ and $\Omega_{(m+1)m}$. The
primitive building block for $N$ qubits is $2^{N-1}\times 2^{N-1}$, while
higher-order approximation involves more identical blocks with $m$ increased
with step $1$ and those off-diagonal $\Omega $ terms induce transition
between blocks with different $m$.
\subsection{Zeroth-order approximation}
The key property of the inhomogeneously coupled $N$-qubit system is
exhibited by the zeroth-order approximation of equation (\ref{Det}), which
neglects all transitions between states with different $m$. This is often
called the adiabatic approximation. A similar approximate solution to the JC
model without the RWA is shown to be valid when the transition frequency of
the qubit $\omega _{0}$ is much smaller than the frequency of the
single-mode bose field $\omega _{c}$ and it is very efficient for coupling
strengths $g$ up to or larger than the oscillator frequency \cite{Irish1}.
For the $N$-qubit system we first consider the zeroth-order approximation
and truncate the determinant Eq. (\ref{Det}) to the lowest order. This
leaves us with a block with the same index $m$, the diagonal terms of which read
as $\varepsilon _{im}-E$ with $i=1,2,\ldots,2^{N-1}$ and the off-diagonal
transition terms are those coefficients $_{A_{i}}\left\langle
m|m\right\rangle _{A_{j}}$. For the two-qubit system the zeroth-order
approximation for the determinant takes the following block form
\begin{equation}
\left\vert
\begin{array}{cc}
\varepsilon _{1m}-E & \Omega _{mm}^{\kappa } \\
\Omega _{mm}^{\kappa } & \varepsilon _{2m}-E
\end{array}
\right\vert =0. \label{hanglieshi2}
\end{equation}
For convenience, we denote $\Omega _{mm}^{\kappa }$ as $\Omega _{m}^{\kappa
} $. Consequently the solutions for the eigenenergies are
\begin{equation}
E_{m}^{\kappa \pm }=m\omega _{c}-\left( \beta _{1}^{2}+\beta _{2}^{2}\right)
\omega _{c}/2\pm \theta _{m}^{\kappa } \label{zero app}
\end{equation}
with $\theta _{m}^{\kappa }=\sqrt{\left( \Omega _{m}^{\kappa }\right)
^{2}+\omega _{c}^{2}\left( \beta _{1}^{2}-\beta _{2}^{2}\right) ^{2}/4}$.
Based on the symmetry of the coefficients $d_{im}$, the eigenstates of the
system that satisfy the orthogonality and completeness conditions have the
form
\begin{equation}
\left\vert \psi _{m}^{\kappa \pm }\right\rangle =\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
d_{1m}^{\kappa \pm } \\
d_{2m}^{\kappa \pm } \\
(-1)^{m}\kappa d_{2m}^{\kappa \pm } \\
(-1)^{m}\kappa d_{1m}^{\kappa \pm }
\end{array}
\right) , \label{function}
\end{equation}
where
\begin{equation*}
d_{1m}^{\kappa \pm }=\xi _{m}^{\kappa \pm }\sqrt{\frac{1}{1+\left( \xi
_{m}^{\kappa \pm }\right) ^{2}}},d_{2m}^{\kappa \pm }=-\sqrt{\frac{1}{
1+\left( \xi _{m}^{\kappa \pm }\right) ^{2}}},
\end{equation*}
with $\xi _{m}^{\kappa \pm }=\Omega _{m}^{\kappa }/\left( \left( \beta
_{2}^{2}-\beta _{1}^{2}\right) \omega _{c}/2\mp \theta _{m}^{\kappa }\right)
$.
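A compact evaluation of Eq. (\ref{zero app}), reusing the overlap helper and the parameters of the ED sketch above, reads:
\begin{verbatim}
# Zeroth-order energies, Eq. (zero app); reuses overlap(), wc, w1, w2, b1, b2 from above.
def zeroth_order(m, kappa):
    Om = -0.5 * w1 * overlap(m, m, b1 - b2) \
         - kappa * (-1)**m * 0.5 * w2 * overlap(m, m, b1 + b2)
    theta = np.sqrt(Om**2 + wc**2 * (b1**2 - b2**2)**2 / 4)
    base = m * wc - (b1**2 + b2**2) * wc / 2
    return base - theta, base + theta      # E_m^{kappa,-}, E_m^{kappa,+}

for m in range(3):
    print(m, zeroth_order(m, +1), zeroth_order(m, -1))
\end{verbatim}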
A special case is the system with two completely identical qubits
homogeneously coupled to a bose field, which means that $\beta _{2}=0$ and $
\omega _{1}=\omega _{2}$. This simplifies the transition frequency as $
\Omega _{m}^{\kappa }\sim \left( 1+\kappa (-1)^{m}\right) $, i.e. the parity
of the Hamiltonian ($\kappa $) and that of the displaced Fock space ($m$)
together decide whether $\Omega _{m}^{\kappa }$ is zero or not. When $\Omega
_{m}^{\kappa }=0$ we have the eigenenergies
\begin{equation}
E_{m}^{\kappa +}=m\omega _{c},E_{m}^{\kappa -}=\left( m-\beta
_{1}^{2}\right) \omega _{c}, \label{zero2}
\end{equation}
and the corresponding eigenfunctions $\left\vert \psi _{m}^{\kappa
+}\right\rangle =\left\vert \psi _{1}\right\rangle $, $\left\vert \psi
_{m}^{\kappa -}\right\rangle =\left\vert \psi _{2}\right\rangle ,$ with
\begin{equation}
\left\vert \psi _{1}\right\rangle =\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
0 \\
1 \\
-1 \\
0
\end{array}
\right) ,\left\vert \psi _{2}\right\rangle =\frac{1}{\sqrt{2}}\left(
\begin{array}{c}
1 \\
0 \\
0 \\
-1
\end{array}
\right) . \label{singlet}
\end{equation}
For nonzero $\Omega _{m}^{\kappa }$ we assume $\left\vert \Omega
_{m}^{\kappa }\right\vert /\omega _{c}\gg \beta _{1}^{2}$ as in Ref. \cite
{Agarwal}; the eigenenergies can then be expressed as
\begin{equation}
E_{m}^{\kappa +}=m\omega _{c}+\Omega _{m}^{\kappa },E_{m}^{\kappa -}=m\omega
_{c}-\Omega _{m}^{\kappa }, \label{zero app2}
\end{equation}
with eigenfunctions $\left\vert \psi _{m}^{\kappa +}\right\rangle
=\left\vert \psi _{3}\right\rangle $, $\left\vert \psi _{m}^{\kappa
-}\right\rangle =\left\vert \psi _{4}\right\rangle ,$ with
\begin{equation}
\left\vert \psi _{3}\right\rangle =\frac{1}{2}\left(
\begin{array}{c}
1 \\
1 \\
1 \\
1
\end{array}
\right) ,\left\vert \psi _{4}\right\rangle =\frac{1}{2}\left(
\begin{array}{c}
1 \\
-1 \\
-1 \\
1
\end{array}
\right) . \label{zero1}
\end{equation}
\begin{figure}
\caption{(Color Online) The zeroth-order approximation of the energy levels
with even and odd parities as a function of coupling strength $g_{2}$.}
\label{Fig2}
\end{figure}
We note that in writing down these four eigenstates of $H$ the qubit basis
is chosen as the uncoupled representation of the spin operators $\hat{\sigma}
_{1,2}^{z}$ in order to solve the qubit system with different frequencies
and coupling strengths. Three of the states are alternatively \cite{Agarwal}
expanded in terms of the triplet states of the total spin component $S_{x}$
of the two identical qubits in the case of homogeneous coupling. The spin
singlet state is exactly $\psi _{1}$ in (\ref{singlet}), which is itself an
eigenstate of the Hamiltonian. By including the singlet state into the
eigenvectors, we are able to treat the dynamics of any initial state of
the system. States such as $\left\vert 10\right\rangle $ or $\left\vert
01\right\rangle $, i.e. when the two qubits are respectively put in the
upper and lower eigenstates of their $\hat{\sigma}^{z}$s, are out of reach
in the triplet manifold, as will be seen in the next section. In
Fig. \ref{Fig2}, we can see that the analytical results in the zeroth-order
approximation already agree with the exact solutions very well in the
ultra-strong coupling case $g_{2}\sim 0.2\omega _{c}$ $(g_{1}=0.1\omega
_{c}) $, where every four eigenenergies with the same index $m$ bundle into a
group corresponding to the four qubit basis states $\left\vert 00\right\rangle
,\left\vert 01\right\rangle ,\left\vert 10\right\rangle ,\left\vert
11\right\rangle $ in the absence of coupling. In each group, the parity of
each eigenstate is fixed, with the lowest level always of even parity. For
even $m$, two odd parity levels are held between two even parity ones, or
vice versa for odd $m$. This can be compared to the single-qubit case where
the energy levels are arranged as (even, odd), (odd, even), (even, odd),
etc., because the qubit basis states are instead $\left\vert 0\right\rangle
,\left\vert 1\right\rangle $ and adding one photon alters the parity.
\subsection{First-Order Approximation}
Next we consider the first-order approximation for the determinant (\ref{Det}
) and permit transitions between blocks $m$ and $m+1$ \cite{Liu}. In the
case of two-qubit system we consequently make a cut-off in Eq. (\ref{Det})
such that
\begin{equation}
\left\vert
\begin{array}{cccc}
\varepsilon _{1m}\!-\!E & \Omega _{m}^{\kappa } & 0 & \Omega _{m\left(
m\!+\!1\right) }^{\kappa } \\
\Omega _{m}^{\kappa } & \varepsilon _{2m}\!-\!E & \Omega _{\left(
m\!+\!1\right) m}^{\kappa } & 0 \\
0 & \Omega _{\left( m\!+\!1\right) m}^{\kappa } & \varepsilon _{1(m+1)}-E &
\Omega _{m+1}^{\kappa } \\
\Omega _{m\left( m\!+\!1\right) }^{\kappa } & 0 & \Omega _{m+1}^{\kappa } &
\varepsilon _{2(m+1)}\!-\!E
\end{array}
\right\vert =\!0. \label{hanglieshi}
\end{equation}
The equation can be solved analytically because it leads to a quartic
equation in the form of $E^{4}+bE^{3}+cE^{2}+dE+e=0$. The coefficients in
the quartic equation depend on the parity $\kappa $ and the index $m$ and are
expressed as (for simplicity we drop the superscripts and subscripts)
\begin{eqnarray*}
b &=&-\varepsilon _{1m}-\varepsilon _{2m}-\varepsilon _{1(m+1)}-\varepsilon
_{2(m+1)}, \\
c &=&\left( \varepsilon _{1m}+\varepsilon _{2m}\right) \left( \varepsilon
_{1(m+1)}+\varepsilon _{2(m+1)}\right) \\
&+&\varepsilon _{1m}\varepsilon _{2m}+\varepsilon _{1\left( m+1\right)
}\varepsilon _{2\left( m+1\right) }-(\Omega _{m}^{\kappa })^{2} \\
&-&(\Omega _{m+1}^{\kappa })^{2}-(\Omega _{m(m+1)}^{\kappa })^{2}-(\Omega
_{\left( m\!+\!1\right) m}^{\kappa })^{2}, \\
d &=&\left( \varepsilon _{1m}\!+\!\varepsilon _{2\left( m\!+\!1\right)
}\right) (\Omega _{\left( m\!+\!1\right) m}^{\kappa })^{2} \\
&+&\left( \varepsilon _{1\left( m\!+\!1\right) }\!+\!\varepsilon
_{2m}\right) (\Omega _{m(m+1)}^{\kappa })^{2} \\
&+&((\Omega _{m}^{\kappa })^{2}-\varepsilon _{1m}\varepsilon
_{2m})(\varepsilon _{1(m+1)}+\varepsilon _{2(m+1)}) \\
&+&(\varepsilon _{1m}+\varepsilon _{2m})((\Omega _{m+1}^{\kappa
})^{2}-\varepsilon _{1(m+1)}\varepsilon _{2(m+1)}),
\end{eqnarray*}
and
\begin{equation*}
e=\left\vert
\begin{array}{cccc}
\varepsilon _{1m} & \Omega _{m}^{\kappa } & 0 & \Omega _{m(m+1)}^{\kappa }
\\
\Omega _{m}^{\kappa } & \varepsilon _{2m} & \Omega _{\left( m\!+\!1\right)
m}^{\kappa } & 0 \\
0 & \Omega _{\left( m\!+\!1\right) m}^{\kappa } & \varepsilon _{1(m+1)} &
\Omega _{m+1}^{\kappa } \\
\Omega _{m(m+1)}^{\kappa } & 0 & \Omega _{m+1}^{\kappa } & \varepsilon
_{2(m+1)}
\end{array}
\right\vert .
\end{equation*}
For each given parity $\kappa $ and block index $m$ we get in general four
analytical solutions for the eigenenergies of Hamiltonian $H$
\begin{equation}
E_{m}^{\kappa }=-\frac{b}{4}\pm _{\gamma }Y\pm _{s}\frac{1}{2}\sqrt{-\left(
4Y^{2}+2p\pm _{\gamma }\frac{q}{Y}\right) }, \label{first app}
\end{equation}
where the two occurrences of $\pm _{\gamma }$ must denote the same sign,
while $\pm _{s}$ can take its sign independently. The notations relating to
the coefficients of the quartic equation are defined as $p=c-3b^{2}/8$, $
q=\left( b^{3}-4bc+8d\right) /8$, $\Delta _{0}=c^{2}-3bd+12e$, $\Delta
_{1}=2c^{3}-9bcd+27b^{2}e+27d^{2}-72ce$, and
\begin{eqnarray*}
Y &=&\frac{1}{2}\sqrt{-\frac{2p}{3}+\frac{Q+\Delta _{0}/Q}{3}}, \\
Q &=&\sqrt[3]{\frac{\Delta _{1}+\sqrt{\Delta _{1}^{2}-4\Delta _{0}^{3}}}{2}}.
\end{eqnarray*}
The first-order approximation improves the analytical results, making them applicable
even in the deep coupling region $g_{2}>\omega _{c}$ as shown in Fig. \ref{Fig3}, where we set $
g_{1}=0.3\omega _{c}$ and $\omega_{1}=\omega_{2}=0.25 \omega_c$. According
to the assumption of the wavefunction (\ref{eigenfunction}), the dimension
of the Hilbert space depends on the truncation of the displaced Fock number
state as $4\left( n_{tr}+1\right)$. Correspondingly, for each $m$ we only
have four genuine solutions. The zeroth-order approximation permits exactly
four eigensolutions for each $m$ as shown above, while in the first-order
approximation one has eight eigensolutions for each combination $\left(
m,m+1\right) $ including four even parity and four odd parity solutions.
\begin{figure}
\caption{(Color Online) The first-order approximation solution of the energy
levels as a function of coupling strength $g_{2}$.}
\label{Fig3}
\end{figure}
Obviously the analytical results of the first-order approximation show energy
level crossings between states with the same parity or different parities.
The level crossings of states with different parities are accidental and we
will focus on those with the same parity. To decide whether or not
energy levels can cross by changing parameters we give an argument in terms
of the von Neumann and Wigner non-crossing theorem: for a real symmetric
(hermitian) matrix, we need to tune two (three) parameters to get a level
crossing \cite{Neumann}. In the case of two-qubit system, the Hamiltonian in
the Hilbert space is a real symmetric matrix. To determine the crossing or
anti-crossing we calculate the fidelity between appropriate states before
and after the crossing which is a measure of the "closeness" of two quantum
states and defined as $F=\left\vert \left\langle \phi |\varphi \right\rangle
\right\vert ^{2}$ for pure states \cite{Fidelity, Zanardi, Quan, Gu, ShuChen}
. We remark that (1) for fixed $m$ the energy levels can never cross by
changing one parameter $g_{2}$; (2) However, for different $m$ the energy
level crossings may occur at some isolated points because two parameters $m$
and $g_{2}$ are tuned. In the left inset (A) of Fig. \ref{Fig3}, we zoom in on
a particular anti-crossing point of the exact energy levels. Four
analytical levels (deep-colored lines) in the first-order approximation
match the exact results, which avoid crossing, while the remaining four curves (light-colored
lines) mismatch and cross each other and should be discarded.
Taking the above aspects into consideration, for each invariant parity
subspace we must rule out half of the solutions of the first-order approximation by
keeping only solutions (\ref{first app}) with $\pm _{\gamma }$ opposite to $
\pm _{s}$ for each combination $\left( m,m+1\right) $ because each $m$ has
been used twice in our calculation. An exception is the combination $(0,1)$,
in which case we only drop the solution with both signs positive. In this
way the pseudo solutions are removed and the analytical eigenvalues for the
two-qubit system agree perfectly with the exact results in the deep coupling
limit $g_{2}\sim 1.5\omega _{c}$ $(g_{1}=0.3\omega _{c})$ in Fig. \ref{Fig3}
. The genuine solutions are two-fold degenerate in the deep strong coupling
regime $g_{2}>1$. This degeneracy has been found in the quantum Rabi model
for two qubits \cite{Chilingaryan}, though for both coupling parameters $
g_{1},g_{2}$ larger than the transition frequencies of the qubits.
Moreover, in the right inset (B) of Fig. \ref{Fig3}, as $g_{2}$ increases
across the next level anti-crossing point all the solutions of the
first-order approximation mismatch the exact results, which suggests that a
higher-order approximation is needed. Our procedure applies readily to the
second-order approximation, in which case one has 12 solutions for each
combination of three blocks $(m,m+1,m+2)$ and the degeneracy grows rapidly.
In this way the approximated solutions will match the exact results in the
right inset of Fig. \ref{Fig3}. In particular, we notice that the
transitions would be suppressed if the off-diagonal matrix elements are much
smaller than the energy difference between the states belonging to different
blocks. Specifically, all elements $\left\vert \Omega _{nm}^{\kappa
}\right\vert $, $\left\vert \Omega _{mn}^{\kappa }\right\vert $ with $n\neq
m $ are set to zero in the zeroth-order approximation, while those with $
n\neq m,m\pm 1$ are negligible in the first-order approximation.
\subsection{Solvability and Integrability}
In quantum mechanics there exist potentials for which it is possible to find
a finite number of exact eigenvalues and associated eigenfunctions in
closed form. These systems are said to be quasi-exactly solvable. The Rabi
model is a typical example distinguished by the fact that part of its
eigenvalues and corresponding eigenfunctions can be determined algebraically
for special values of the energy splitting of the qubit $\omega $ and the
coupling strength $g$ \cite{Moroz1,Moroz2,Zhang2}. Known as Judd's isolated
solutions \cite{Judd}, this exceptional spectrum with energy eigenvalues $
E=n-g^{2}/\omega _{c}^{2}$ constitutes the exact part of the Rabi model and is
doubly degenerate with respect to parity.
Here we show that the two-qubit TC model provides another example of
quasi-exactly solvable models, i.e., part of the spectrum of the model can be
obtained exactly in some special parameter regions. First of all, in the homogeneous
coupling case $g_{1}=g_{2}$, there always exists a constant solution $
E=\omega _{c}$ corresponding to either the even parity eigenstate
\begin{equation*}
\left\vert \psi _{e}\right\rangle =\left( q_{e}\left( \left\vert
01\right\rangle -\left\vert 10\right\rangle \right) |1\rangle +\left\vert
11\right\rangle |0\rangle \right) /\sqrt{2q_{e}^{2}+1},
\end{equation*}
for the symmetric detunings with $\omega _{1}+\omega _{2}=2\omega _{c}$
(suppose $\omega _{1}>\omega _{2}$), or the odd parity eigenstate
\begin{equation*}
\left\vert \psi _{o}\right\rangle =\left( q_{o}\left( \left\vert
00\right\rangle -\left\vert 11\right\rangle \right) |1\rangle +\left\vert
01\right\rangle |0\rangle \right) /\sqrt{2q_{o}^{2}+1},
\end{equation*}
for the asymmetric detunings with $\omega_{1}-\omega _{2}=2\omega _{c}$ with
$q_{e,o}=2g/\left( \omega _{1}\mp \omega _{2}\right) $. Secondly, for two
completely identical qubits homogeneously coupled to the bose field, i.e. $
g_{1}=g_{2}$ and $\omega_{1}=\omega _{2}$, it is easy to prove that the state $
\left\vert \psi_{1}\right\rangle =\left( \left\vert 10\right\rangle
-\left\vert 01\right\rangle \right) |m\rangle /\sqrt{2}$ in (\ref{singlet})
for any $m$ is exactly an eigenstate of $H$ with eigenvalue $E_{m}=m\omega
_{c}$. The state has even (odd) parity for odd (even) $m$.
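This quasi-exact eigenstate is easy to confirm numerically; reusing the truncated Hamiltonian constructed in the first sketch (with illustrative parameter values of our own), one finds that $H$ acting on $\left( \left\vert 10\right\rangle -\left\vert 01\right\rangle \right) |m\rangle /\sqrt{2}$ reproduces the state up to round-off:
\begin{verbatim}
# Quasi-exact eigenstate check for g1 = g2 and w1 = w2, reusing build_H_Pi() from the
# first sketch: the singlet times |m> is an eigenstate of H with eigenvalue m*wc.
n_max = 20
H, _ = build_H_Pi(g1=0.2, g2=0.2, w1=0.25, w2=0.25, n_max=n_max)
e1, e0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(e1, e0) - np.kron(e0, e1)) / np.sqrt(2)
for m in (0, 1, 2):
    v = np.kron(singlet, np.eye(n_max)[m])
    print(m, np.max(np.abs(H @ v - m * 1.0 * v)))           # ~ 0 (wc = 1 here)
\end{verbatim}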
Very recently, an alternative form of analytical solution has been given for the quantum
Rabi model with two identical qubits in
a similar way, which is, however, essentially different from the Juddian solutions
with doubly degenerate eigenvalues in the one-qubit quantum Rabi model \cite
{HuiWang}. In short, for the TC model with two qubits a finite part of the
spectrum can be obtained in closed form and the remaining part of the
spectrum is only numerically accessible. So we conclude that the TC model
with two qubits is quasi-exactly solvable.
\begin{figure}
\caption{(Color Online) The first-order approximation and numerical solution
for the homogeneous coupling system ($g_{1}=g_{2}$).}
\label{Fig4}
\end{figure}
\begin{figure}
\caption{(Color Online) Typical level crossing points for homogeneous
coupled non-identical qubits with symmetric detuning $\protect\omega_1+
\protect\omega_2=2\protect\omega_c$ in the even parity space(upper panel),
and asymmetric detuning $\protect\omega_1-\protect\omega_2=2\protect\omega_c$
in the odd parity space (lower panel). The exact solutions $E=\protect\omega
_c$ are shown as red-dashed and blue-dotted horizontal lines. The
frequencies of the qubits and the oscillator are labeled on the right side
of the figure.}
\label{Fig5}
\end{figure}
In Fig. \ref{Fig4}, we show the energy spectrum of the homogeneous coupling
system as a function of the total coupling strength $\beta _{1}$ for $
g_{1}=g_{2} $ and $\omega _{1}=\omega _{2}=0.25\omega _{c}$. In the
decoupling limit $\beta _{1}=0$, the qubits are set free from the field, and
we find that the two odd (even) parity states, i.e. $|01\rangle |m\rangle $ and $
|10\rangle |m\rangle $, are degenerate for even (odd) $m$, where $m$ now means
the photon number of the free bosonic field. The first-order approximation
is valid for the entire region $0<\beta _{1}<1.2$ and it shows that the
energy levels of the same parity would never cross with each other until $
\beta _{1}\sim 1$. The constant solutions $E_{m}=m\omega _{c}$ are shown as
two horizontal lines $m=0$ for odd parity and $m=1$ for even parity in Fig.
\ref{Fig4}. Level crossing occurs in the same parity space when other
solutions with different $m$ sweep across them, denoted by two black dots at
$\beta _{1}\simeq 0.9898$ and $1.0004$. The fidelity of both the lower and upper
states on the two sides of the
crossing points is calculated to be exactly zero, which proves the existence
of the level crossing. In Fig. \ref{Fig4} two parameters $m$ and $\beta _{1}$
are tuned to get these level crossing points in the first-order
approximation, which is consistent with the von Neumann and Wigner
non-crossing theorem.
Besides those shown in Fig. \ref{Fig4}, nontrivial level crossing points may
appear in the fixed parity subspace for non-identical qubits. In the
homogeneous coupling case ($g_{1}=g_{2}$) a level crossing has been found in
the even parity subspace for two inequivalent qubits with symmetric
detunings $\omega _{1}+ \omega _{2}=2\omega _{c}$ \cite{Chilingaryan}
denoted as a black point in the upper panel of Fig. \ref{Fig5}. We here show
that this is only half of the story. It is actually found that the level crossing in the case of
homogeneous coupling occurs for $\omega _{1}\pm \omega _{2}=2\omega _{c}$.
The reason is that the constant solution $E=\omega _{c}$ holds for either
symmetric or asymmetric detuning conditions. Other levels would inevitably
run across it with increasing coupling strength. In Fig. \ref{Fig5}, we
numerically confirm that the level crossing appears at $\beta _{1}\simeq
1.0251$ for $\omega _{1}=1.3\omega _{c},\omega _{2}=0.7\omega _{c}$ in even
parity subspace \cite{Chilingaryan}, and at $\beta _{2}\simeq 0.9442$ for $
\omega _{1}=2.7\omega _{c},\omega _{2}=0.7\omega _{c}$ in odd parity
subspace. Level crossings in quantum theory are often related to symmetry.
In the case of inhomogeneous coupling of nonidentical qubits with the
oscillator mode no level crossing in the same parity space is found, while $g_{1}=g_{2}$ enlarges the
symmetry so that the level crossing appears on the line $E=\omega _{c}$, and
$\omega _{1}=\omega _{2}$ enlarges the symmetry even further and more
crossing points appear on $E=m\omega _{c}$. This is similar to what happens
in the JC model: the enlarged symmetry due to the RWA leads to two level crossing
points in the even parity space (Fig. 3 in \cite{Braak}), which are not found in the Rabi model.
The appearance of level crossing and associated symmetry are essentially
related to the integrability of our model, which we shall address in the
following. According to the criterion of quantum integrability proposed by
Braak \cite{Braak}, integrability is equivalent to the existence of $f$
numbers to classify the eigenstates uniquely with $f$ the sum of discrete and
continuous d.o.f. The Rabi model has two d.o.f. and at
the same time can be uniquely labelled by two quantum numbers associated
with the energy level and the parity, respectively. So the Rabi model is
considered to be integrable \cite{Braak, Peng}. We argue that the
homogeneously coupled two-qubit TC model with $\omega_1=\omega_2$ or $
\omega_1\pm\omega_2=2\omega_c$ is integrable, based on the following three
reasons:
(1) In general a system is not integrable in the whole parameter region, but
it may be integrable under some special conditions. A generalization of the Rabi
model with an additional term $\epsilon \sigma_x$ (Eq. (7) of Ref. \cite
{Braak}) breaking the parity symmetry is constructed as the first example
of a nonintegrable but exactly solvable system. Clearly we recover the
integrable Rabi model for $\epsilon=0$. Another example is the spinor
Bose-Einstein condensate of alkali gases \cite{Spinor}. Generally considered
to be nonintegrable, the 1D homogeneous spinor bosons can be exactly solved
with Bethe ansatz (BA) method \cite{Cao} along two integrable lines $c_2=0$
and $c_0=c_2$. In spite of the nonintegrability of our model in the whole
parameter regime, there do exist special situations for the parameters, i.e. $
\omega_1=\omega_2$ or $\omega_1\pm\omega_2=2\omega_c$, in which case the
homogeneously coupled TC model becomes integrable.
(2) The system is nonintegrable if the total number of d.o.f. exceeds the
number of quantum numbers needed to label the eigenstates uniquely. In the case of
Braak's generalized Rabi model the absence of any level crossing in the
spectral graph is sufficient to rule out its integrability. There are two
d.o.f. and one quantum number, the energy, is sufficient to label the
eigenstates uniquely. In the case of the two-qubit TC model, for the simplified
case of $\omega_2=0$ no level crossing is found in the spectral graph \cite
{Peng}, while for inhomogeneous coupling accidental level crossing occurs
only for different parities (Fig. \ref{Fig1}). In both cases the model is
nonintegrable because we have three d.o.f. while the two quantum numbers,
energy and parity, are already enough to label the states uniquely.
(3) Level crossing in the same parity subspace is nontrivial because we need
another quantum number other than parity to label the degenerate states,
which renders the system integrable. As a typical integrable system,
each eigenstate of the hydrogen atom is assigned three quantum numbers $n,
l, m$, which characterize the quantization of the radial, angular and orientational motion
of the electronic orbit. None of them can be omitted in picking out the state
of this three-d.o.f. system. This parallels the characterization of each
eigenstate of the Rabi model through two quantum numbers, the parity quantum
number $n_0$ and the $n_1$th zero of the transcendental function $G_{\pm}(x)$,
corresponding to two d.o.f. For the JC model with its enlarged $U(1)$ symmetry,
level crossing occurs in the same parity space. However, we are lucky that
the operator $C$ can be used for a further decomposition of the subspaces
with fixed parity. The state space entails a second possibility to label
the states uniquely through $C$ and a two-valued index $n_0$, with the
parity being a redundant quantum number. Similarly, the level crossing
appearing in the homogeneously coupled two-qubit TC model implies an enlarged
hidden symmetry for $\omega_1= \omega_2$ or $\omega_1\pm\omega_2=
2\omega_c$. What we need to do is to find a $C$-like conserved quantity to
decompose the even and odd subspaces further. Consequently we would have
three quantum numbers (parity, the two-valued index $n_0$ and the $C$-like number)
to uniquely label the states of the three-d.o.f. system. Though not an exact result, in the
zeroth- and first-order approximations we have shown a possible scheme of labeling
the states with parity, $\pm $ and $m$. In summary the homogeneously coupled
model is integrable for two identical qubits or with (a)symmetric detuning, though further
exploration of the conserved quantity and hidden symmetry is needed.
\section{Population inversion dynamics}
To better understand the quantum behavior in the prototypical problem of cavity
electrodynamics with more than one qubit involved, we study the dynamical
properties of a two-qubit system strongly coupled to a high-frequency
quantum oscillator. The eigenvectors and eigenvalues of the system derived
in Sec. II can be taken as a complete set, upon which the time evolution of
wave function can be expanded. We discuss here the probability of finding
the two qubits remaining in the initial state \cite{Irish1}, which is
essentially the fidelity between the wave function at subsequent time $t$
and the initial state.
The simplest dynamical behavior is considered when we put the qubits
initially in any one of the four product states and the initial state of the
oscillator is prepared in the displaced Fock basis corresponding to them. In
the zeroth-order approximation, these initial states can conversely be written as linear
combinations of the eigenvectors (\ref{function}) of the Hamiltonian and are
expressed respectively as follows
\begin{eqnarray*}
\left\vert 11\right\rangle \left\vert m\right\rangle _{A_{1}} &=&\frac{1}{
\sqrt{2}}\sum\limits_{\kappa ,\gamma =\pm }d_{1m}^{\kappa \gamma }\left\vert
\psi _{m}^{\kappa \gamma }\right\rangle \\
\left\vert 10\right\rangle \left\vert m\right\rangle _{A_{2}} &=&\frac{1}{
\sqrt{2}}\sum\limits_{\kappa ,\gamma =\pm }d_{2m}^{\kappa \gamma }\left\vert
\psi _{m}^{\kappa \gamma }\right\rangle \\
\left\vert 01\right\rangle \left\vert m\right\rangle _{A_{3}} &=&\frac{1}{
\sqrt{2}}\sum\limits_{\kappa ,\gamma =\pm }(-1)^{m}\kappa d_{2m}^{\kappa
\gamma }\left\vert \psi _{m}^{\kappa \gamma }\right\rangle \\
\left\vert 00\right\rangle \left\vert m\right\rangle _{A_{4}} &=&\frac{1}{
\sqrt{2}}\sum\limits_{\kappa ,\gamma =\pm }(-1)^{m}\kappa d_{1m}^{\kappa
\gamma }\left\vert \psi _{m}^{\kappa \gamma }\right\rangle ,
\end{eqnarray*}
which in the special case of two completely identical qubits reduce to
\begin{eqnarray*}
\left\vert 11\right\rangle \left\vert m\right\rangle _{A_{1}} &=&\frac{1}{2}
\left( \sqrt{2}\left\vert \psi _{2}\right\rangle +\left\vert \psi
_{3}\right\rangle +\left\vert \psi _{4}\right\rangle \right) \\
\left\vert 10\right\rangle \left\vert m\right\rangle _{A_{2}} &=&\frac{1}{2}
\left( \sqrt{2}\left\vert \psi _{1}\right\rangle -\left\vert \psi
_{3}\right\rangle -\left\vert \psi _{4}\right\rangle \right) \\
\left\vert 01\right\rangle \left\vert m\right\rangle _{A_{3}} &=&-\frac{1}{2}
\left( \sqrt{2}\left\vert \psi _{1}\right\rangle -\left\vert \psi
_{3}\right\rangle -\left\vert \psi _{4}\right\rangle \right) \\
\left\vert 00\right\rangle \left\vert m\right\rangle _{A_{4}} &=&-\frac{1}{2}
\left( \sqrt{2}\left\vert \psi _{2}\right\rangle +\left\vert \psi
_{3}\right\rangle +\left\vert \psi _{4}\right\rangle \right) .
\end{eqnarray*}
As an example, we study the system dynamics with only one qubit, say qubit
2, being excited to the upper level, i.e. $\Psi \left( 0\right) =\left\vert
10\right\rangle \left\vert m\right\rangle _{A_{2}}$. The probability of
finding the two qubits in any possible product states is easily obtained and
we are interested in the fidelity to the initial state
\begin{equation}
P_{10}\left(m, t\right) =\left\vert _{A_{2}}\left\langle m\right\vert
\left\langle 10|\Psi \left( t\right) \right\rangle \right\vert ^{2}
\label{FP}
\end{equation}
with
\begin{equation*}
\Psi \left( t\right) =\frac{1}{\sqrt{2}}\sum\limits_{\kappa ,\gamma =\pm
}d_{2m}^{\kappa \gamma }\left\vert \psi _{m}^{\kappa \gamma }\right\rangle
e^{-iE_{m}^{\kappa \gamma }t}.
\end{equation*}
It is easy to show that $\xi _{m}^{\kappa +}\xi _{m}^{\kappa -}=-1,\left(
d_{1m}^{\kappa +}\right) ^{2}=\left( d_{2m}^{\kappa -}\right) ^{2}$. By
means of these, we find that the probability of the two qubits staying in their
initial state consists of four oscillating terms, the frequencies of which
are all possible combinations of $\theta _{m}^{\pm }$, i.e.
\begin{eqnarray}
&&P_{10}\left(m, t\right) =\frac{1}{2}\left\{ 1+\sum\limits_{\kappa =\pm
}\left( c_{1m}^{\kappa }\right) ^{2}\left( \cos \left( 2\theta _{m}^{\kappa
}t\right) -1\right) \right. \notag \\
&&+\left. \left( 1-c_{2m}\right) \cos \left( \theta _{m}^{+}-\theta
_{m}^{-}\right) t+c_{2m}\cos \left( \theta _{m}^{+}+\theta _{m}^{-}\right)
t\right\} \notag \\
&& \label{population1}
\end{eqnarray}
with the two coefficients defined as
\begin{eqnarray*}
c_{1m}^{\kappa } &=&\frac{\xi _{m}^{\kappa +}}{1+\left( \xi _{m}^{\kappa
+}\right) ^{2}} , \\
c_{2m} &=&\frac{\left( \xi _{m}^{++}\right) ^{2}+\left( \xi _{m}^{-+}\right)
^{2}}{\left( 1+\left( \xi _{m}^{++}\right) ^{2}\right) \left( 1+\left( \xi
_{m}^{-+}\right) ^{2}\right) }.
\end{eqnarray*}
For homogeneous coupling $\beta _{2}=0$ and $\omega _{1}=\omega _{2}$, eq. (
\ref{population1}) is reduced to
\begin{eqnarray*}
&&P_{10}\left(m, t\right) =\frac{1}{8}\left\{ 2+\cos \left( 2\Omega
_{m}^{+}t\right) +\cos \left( 2\Omega _{m}^{-}t\right) \right. \\
&&\left. +2\left( \cos \left( \Omega _{m}^{+}+\Omega _{m}^{-}\right) t+\cos
\left( \Omega _{m}^{+}-\Omega _{m}^{-}\right) t\right) \right\},
\end{eqnarray*}
which consists essentially of two oscillating terms because $\Omega_m^+=0$ for
odd $m$ and $\Omega_m^-=0$ for even $m$.
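For reference, Eq. (\ref{population1}) can be evaluated directly with the helpers introduced above (a sketch of ours; the coefficients follow the zeroth-order expressions of the previous section):
\begin{verbatim}
# Evaluation of Eq. (population1); reuses overlap(), wc, w1, w2, b1, b2 from above.
def zeroth_coeffs(m):
    thetas, xis = {}, {}
    for kappa in (+1, -1):
        Om = -0.5 * w1 * overlap(m, m, b1 - b2) \
             - kappa * (-1)**m * 0.5 * w2 * overlap(m, m, b1 + b2)
        thetas[kappa] = np.sqrt(Om**2 + wc**2 * (b1**2 - b2**2)**2 / 4)
        xis[kappa] = Om / ((b2**2 - b1**2) * wc / 2 - thetas[kappa])  # xi_m^{kappa,+}
    c1 = {k: xis[k] / (1 + xis[k]**2) for k in (+1, -1)}
    c2 = (xis[+1]**2 + xis[-1]**2) / ((1 + xis[+1]**2) * (1 + xis[-1]**2))
    return thetas, c1, c2

def p10_m(m, t):
    th, c1, c2 = zeroth_coeffs(m)
    return 0.5 * (1 + sum(c1[k]**2 * (np.cos(2 * th[k] * t) - 1) for k in (+1, -1))
                  + (1 - c2) * np.cos((th[+1] - th[-1]) * t)
                  + c2 * np.cos((th[+1] + th[-1]) * t))
\end{verbatim}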
\begin{figure*}
\caption{(Color Online) Probability $P_{10}(z,t)$ as a function of time.}
\label{Fig6}
\end{figure*}
Consider now the harmonic oscillator in the state which most closely
approaches the classical limit, that is, we choose the oscillator to begin in
the displaced coherent state. The initial state is thus given by
\begin{equation}
\left\vert \Psi \left( 0\right) \right\rangle =\sum\limits_{m=0}^{\infty }
\frac{e^{-|z|^{2}/2}z^{m}}{\sqrt{m!}}\left\vert 10\right\rangle \left\vert
m\right\rangle _{A_{2}}. \label{initial state}
\end{equation}
The probability of two qubits remaining in their initial state $\left\vert
10\right\rangle $ is calculated by tracing over all Fock states of the
oscillator as follows
\begin{equation}
P_{10}\left( z,t\right) =\left\langle 10|Tr_{A}\rho (z,t)|10\right\rangle
=\sum\limits_{m=0}^{\infty }p\left( m\right) P_{10}\left( m,t\right) ,
\label{population}
\end{equation}
where $\rho (z,t)=|\Psi (t)\rangle \langle \Psi (t)|$ is the density matrix
of the system and the normalized Poisson distribution is defined as
\begin{equation*}
p\left( m\right) \!=\!\frac{e^{-\left\vert z\right\vert ^{2}}\left\vert
z\right\vert ^{2m}}{m!}.
\end{equation*}
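A sketch of this Poisson-weighted sum (with the couplings reset to illustrative ultrastrong values of our own choosing and the cutoff in $m$ placed where $p(m)$ becomes negligible, as for Fig. \ref{Fig6} below) reads:
\begin{verbatim}
from math import factorial
# Eq. (population): Poisson-weighted sum over displaced Fock states.  The couplings are
# reset here to illustrative ultrastrong values; p(m) is negligible beyond m = 30 for z = 3.
g1, g2 = 0.1 * wc, 0.2 * wc
b1, b2 = (g2 + g1) / wc, (g2 - g1) / wc
z, m_max = 3.0, 30
p = np.array([np.exp(-z**2) * z**(2 * m) / factorial(m) for m in range(m_max + 1)])

def P10(t):
    return sum(p[m] * p10_m(m, t) for m in range(m_max + 1))

times = np.linspace(0.0, 800.0, 2000)
signal = [P10(t) for t in times]   # time trace used to inspect the collapse-revival behavior
\end{verbatim}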
We find that the probabilities of the two qubits populating the four product
states oscillate with the same characteristic frequencies. For the states $
\left\vert 10\right\rangle $ or $\left\vert 01\right\rangle $ the
oscillation is around an equilibrium position $(1-B)/2$, while for the
states $\left\vert 11\right\rangle $ or $\left\vert 00\right\rangle $ the
oscillation equilibrium is $B/2$ with $B=\sum\limits_{m=0}^{\infty
}\sum\limits_{\kappa =\pm }p\left( m\right) \left( c_{1m}^{\kappa }\right)
^{2}$. For the homogeneous coupling case and $\omega _{1}=\omega _{2}$, we
recover the analytical result established previously \cite{Agarwal} by
keeping only three terms $l=m,m-1,m-2$ in the summation of $\Omega _{m}^{\pm
}$ (eq. {\ref{Xmn}}) and replacing the Poisson distribution by a Gaussian
one for large enough $|z|$
\begin{equation}
P_{10}\left( z,t\right) =\frac{3}{8}+\frac{1}{2}S\left( t,\omega _{1}\right)
+\frac{1}{8}S\left( t,2\omega _{1}\right) , \label{population3}
\end{equation}
where
\begin{equation*}
S\left( t,\omega _{1}\right) =Re\left[ \sum\limits_{k=0}^{\infty }\bar{S}
_{k}\left( t,\omega _{1}\right) \right]
\end{equation*}
and
\begin{equation}
\bar{S}_{k}\left( t,\omega _{1}\right) =\frac{\exp \left( \Phi _{Re}+i\Phi
_{Im}\right) }{\left( 1+\left( \pi kf\right) ^{2}\right) ^{1/4}} \label{sk}
\end{equation}
with
\begin{eqnarray*}
\Phi _{Re} &=&\frac{-\left( \mu -\mu _{k}\right) ^{2}f\beta _{1}^{2}}{
2\left( 1+\left( \pi kf\right) ^{2}\right) }, \\
\Phi _{Im} &=&\frac{\tan ^{-1}\left( \pi kf\right) }{2}+\mu -\left\vert
z\right\vert ^{2}\left( \mu \beta _{1}^{2}-2\pi k\right) .
\end{eqnarray*}
Here we have defined $f=\left\vert z\right\vert ^{2}\beta _{1}^{2},\mu
=\omega _{1}te^{-\beta _{1}^{2}/2},\mu _{k}=2\pi k\left( 1+f/2\right) /\beta
_{1}^{2}$. It is obvious that the revival in $S\left( t,\omega _{1}\right)
$ occurs around each time $\mu =\mu _{k}$, the envelope and the fast
oscillation of which are determined by the exponential and cosine terms,
respectively.
We restrict our discussion to the regime of large detuning $\omega _{c}\gg
\omega _{j}$ and ultrastrong coupling strength $g_{j}\sim 0.1\omega _{c}$.
Fig. \ref{Fig6} shows the results of the time evolution of the probability (
\ref{population}) by means of the zeroth-order approximation analytical
method compared with the numerically exact solution, where we have made a
cutoff for $m$ to a maximum value $30$ because $p\left( m\right) \approx 0$
for $z=3$ and $m\geq 30$. Our approximated results prove to be unexpectedly
powerful, giving the dynamics accurately in the present experimentally
accessible coupling regime. Here we assume that one of the two
qubits reaches the ultrastrong coupling regime $g_{1}/\omega _{c}=0.1$, while
the coupling strength to the other qubit $\left( g_{2}/\omega _{c}\right) $
changes from small to large. In Fig. \ref{Fig6}(a) $g_{2}$ is much smaller than
$g_{1}$ and no collapse-revival phenomenon appears in the evolution of the
probability. As $g_{2}$ increases, the collapses and revivals emerge
gradually: the first collapse becomes faster and the revival
signal more distinct in Fig. \ref{Fig6}(b-c). In Fig. \ref{Fig6}
(d) the two coupling strengths are equal, resulting in the most regular
oscillation of the probability, whose peaks become periodic in
time. Finally, in Fig. \ref{Fig6}(e), where $g_{2}$ is larger than $g_{1}$,
collapses and revivals persist for a while before the oscillation becomes
visibly irregular. Because $\left( c_{1m}^{\kappa }\right) ^{2}<c_{2m}$,
the revivals with smaller amplitude are mainly determined by the second term
of Eq. (\ref{population1}) containing $c_{1m}^{\kappa }$, whereas the
revivals with larger amplitude depend on the third and fourth terms
containing $c_{2m}$. The above analysis suggests that the
collapse-revival phenomena in the evolution of the probability are sensitive
to the coupling strengths, and that the evolution is periodic only for $
g_{1}=g_{2} $ and $\omega _{1}=\omega _{2}$. We also find that the
probabilities of the two qubits populating the four product states exhibit
a similar envelope of the revival signal.
It is sometimes more convenient to measure the population inversion of one
of the qubits, that is, to observe the time evolution of the expectation
value of the Pauli operator of qubit 1 defined as $\sigma
_{1}^{z}=\left( \left\vert 1\right\rangle \left\langle 1\right\vert
-\left\vert 0\right\rangle \left\langle 0\right\vert \right) _{1}$ and
related to the probabilities through $\left\langle \sigma
_{1}^{z}\right\rangle =P_{11}(z,t)+P_{01}(z,t)-P_{10}(z,t)-P_{00}(z,t)$. In
doing this, we again fix the value of $g_{1}=0.1\omega _{c}$ and study how
the presence of qubit 2 changes the dynamics of qubit 1. With the
initial state (\ref{initial state}) the expectation value of $\sigma
_{1}^{z} $ is calculated as
\begin{eqnarray}
\left\langle \sigma _{1}^{z}\right\rangle &=&\sum\limits_{m=0}^{\infty
}p\left( m\right) \left\{ \left( D_{m}-1\right) \cos \left( \theta
_{m}^{+}-\theta _{m}^{-}\right) t\right. \notag \\
&&\left. -D_{m}\cos \left( \theta _{m}^{+}+\theta _{m}^{-}\right) t\right\} ,
\label{inversion}
\end{eqnarray}
with $D_{m}=2c_{1m}^{+}c_{1m}^{-}+c_{2m}$. In the case of $\beta _{2}=0$
this reduces to
\begin{equation}
\left\langle \sigma _{1}^{z}\right\rangle =-S\left( t,\omega _{1}\right) .
\end{equation}
In Fig. \ref{Fig7} we show the time-dependent inversion of qubit
1 for different coupling strengths between qubit 2 and the bosonic field. Due to
the excellent agreement with the numerically exact solution, we only show
the analytical result in Fig. \ref{Fig7} and the parameters are the same as
in Fig. \ref{Fig6}. In the absence of qubit 2, the dynamics of a single
qubit already exhibits the collapse-revival phenomena in the strong coupling
regime $g_{1}\sim 0.1$. The population inversion given in (\ref{inversion})
shows that the revival signal is robust against weak coupling to the second
qubit: we cannot even distinguish the single-qubit dynamics from that
for a coupling strength $g_{2}=0.01$, and the revival signals in the
weak coupling cases are only slightly delayed in Fig. \ref{Fig7}
(a-c). As $g_{2}$ increases to the same magnitude as $g_{1}$ the
revival signal is destroyed, indicating that qubit 2 influences
qubit 1 by interacting with the optical field.
This behavior can be understood qualitatively as follows. For each $m$,
besides a common factor $p\left( m\right) $ Eq. (\ref{inversion}) consists
now of two cosine terms, whose amplitudes are determined by $D_{m}$. For $
g_{2}<g_{1}$, we can show numerically that $D_{m}$ is always smaller than $
0.5$ and can be neglected for larger $m$. The dynamics thus depends mainly
on the difference, instead of the summation, of $\theta _{m}^{\pm }$ as in
the first term in (\ref{inversion}). This gives the periodicity of revivals
in Fig. \ref{Fig7}, which would persist even for the homogeneous coupling
case when the two terms in (\ref{inversion}) are comparable. In Fig. \ref
{Fig7}(e) the interference of the two revival signal terms with almost equal
amplitude leads to the irregular oscillation of population inversion.
\begin{figure}
\caption{(Color Online) The time-dependent inversion of qubit 1 as a function
of $\protect\omega _{1}t$ for different coupling strengths $g_{2}$; the parameters are the same as in Fig. \ref{Fig6}.}
\label{Fig7}
\end{figure}
\section{Entanglement behaviors}
Quantum entanglement can be used in studies of fundamental quantum phenomena
and the on-chip entanglement of solid-state qubits provides a key building
block for the solid-state realization of quantum optical networks. It has
attracted much attention in connection with Bell's inequality \cite{Bell,
Wootters, Coffman}. However, realization of long-distance entanglement based
on solid-state systems coupled to an optical field is an outstanding
challenge. In the homogeneous coupling case with equal strengths to two
identical qubits the entanglement properties have recently been studied \cite
{Agarwal}. In this section we aim to describe the entanglement properties by
considering different coupling strengths to the two qubits. Thus it would be
very interesting to study more general quantum correlations between two
qubits. We suppose an initial entanglement of the two qubits in the form of
a familiar Bell state and the oscillator in a coherent state, which is
expressed as
\begin{equation}
\left\vert \Psi \left( 0\right) \right\rangle =\frac{1}{\sqrt{2}}\left(
\left\vert 11\right\rangle +\left\vert 00\right\rangle \right) \left\vert
z\right\rangle . \label{intial state1}
\end{equation}
As a good approximation in the case of small $\beta _{1}$ we may expand the
state $\left\vert m\right\rangle$ in terms of the displaced Fock basis, and
the most important contributions in the summation over $m$ are the terms with
the same $m$, which is equivalent to taking $\left\vert m\right\rangle \approx
\left\vert m\right\rangle _{A_{i}}$ \cite{Agarwal}. Thus we obtain
\begin{equation}
\left\vert \Psi \left( 0\right) \right\rangle \!=\!\frac{1}{\sqrt{2}}
\!\sum\limits_{m=0}^{\infty }\!\frac{e^{-\left\vert z\right\vert ^{2}/2}z^{m}
}{\sqrt{m!}}\left( \left\vert 11\right\rangle \!\left\vert m\right\rangle
\!_{A_{1}}\!+\!\left\vert 00\right\rangle \!\left\vert m\right\rangle
\!_{A_{4}}\right) . \label{initial state3}
\end{equation}
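This truncation can be motivated by a standard displaced-Fock-state
identity: assuming the displacement defining $\left\vert m\right\rangle
_{A_{i}}$ has a magnitude $\beta $ comparable to the dimensionless couplings
$\beta _{i}$, the diagonal overlap reads
\begin{equation*}
\left\langle m\right\vert D\left( \beta \right) \left\vert m\right\rangle
=e^{-\beta ^{2}/2}L_{m}\left( \beta ^{2}\right) ,
\end{equation*}
with $D$ the displacement operator and $L_{m}$ the Laguerre polynomial;
this stays close to unity for $m\beta ^{2}\ll 1$, while the off-diagonal
overlaps are of order $\beta \sqrt{m}$.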
The initially entangled state of two qubits evolves into $\left\vert \Psi
\left( t\right) \right\rangle $ which is given by
\begin{equation}
\left\vert \Psi \left( t\right) \right\rangle =\sum\limits_{m=0}^{\infty
}\sum\limits_{\kappa =\pm }\left( e^{-iE_{m}^{\kappa \gamma }t}\left\vert
\psi _{m}^{\kappa \gamma }\right\rangle \left\langle \psi _{m}^{\kappa
\gamma }|\Psi \left( 0\right) \right\rangle \right) . \label{time state3}
\end{equation}
\begin{figure}
\caption{(Color Online) Plots of the concurrence evolution as a function of $
\protect\omega _{1}t$ for different coupling strengths $g_{2}$.}
\label{Fig8}
\end{figure}
To quantify the entanglement of a two-qubit system, we need to calculate the
reduced density operator by tracing out the quantum field. The result is
given by
\begin{eqnarray}
\hat{\rho}_{Q}\left( t\right) &=&\sum\limits_{m}\left\langle m|\Psi \left(
t\right) \right\rangle \left\langle \Psi \left( t\right) |m\right\rangle
\notag \\
&=&\sum\limits_{m=0}^{\infty }\sum\limits_{\kappa =\pm }\frac{p\left(
m\right) \left( 1+\kappa \left( -1\right) ^{m}\right) }{4}\left(
q_{1m}^{\kappa +}\left\vert ee\right\rangle \left\langle ee\right\vert
\right. \notag \\
&+&\left. q_{2m}^{\kappa +}\left\vert ee\right\rangle \left\langle gg\right\vert
+q_{2m}^{\kappa -}\left\vert gg\right\rangle \left\langle ee\right\vert
+q_{1m}^{\kappa -}\left\vert gg\right\rangle \left\langle gg\right\vert
\right) ,  \label{density}
\end{eqnarray}
where the calculation is done in the eigenbasis $\left\vert e\right\rangle$
and $\left\vert g\right\rangle$ of $\sigma^x$ with eigenvalues $\pm1/2$
respectively. The coefficients $q$'s are defined as
\begin{eqnarray*}
q_{1m}^{\kappa \pm }&=&1\mp \frac{4\xi _{m}^{\kappa +}\left( \left( \xi
_{m}^{\kappa +}\right) ^{2}-1\right) \sin ^{2}\left( \theta _{m}^{\kappa
}t\right) }{\left( \left( \xi _{m}^{\kappa +}\right) ^{2}+1\right) ^{2}}, \\
q_{2m}^{\kappa \pm }&=&1-\frac{8\left( \xi _{m}^{\kappa +}\right) ^{2}\sin
^{2}\left( \theta _{m}^{\kappa }t\right) }{\left( \left( \xi _{m}^{\kappa
+}\right) ^{2}+1\right) ^{2}}\pm i\frac{2\xi _{m}^{\kappa +}\sin \left(
2\theta _{m}^{\kappa }t\right) }{\left( \xi _{m}^{\kappa +}\right) ^{2}+1},
\end{eqnarray*}
which are all unity at $t=0$. This means that the qubits are initially
prepared in a pure state, but as time evolves, the reduced state of the
qubits becomes mixed. Obviously the reduced density matrix in the eigenspace
of the spin product operators $\sigma_{1}^{x}\otimes \sigma _{2}^{x}$ with
the standard two-qubit basis $\left\vert ee\right\rangle ,\left\vert
eg\right\rangle ,\left\vert ge\right\rangle ,\left\vert gg\right\rangle $
belongs to a special class of density matrices (X-matrices) with only
diagonal and anti-diagonal elements. It is thus more convenient to quantify
the entanglement using concurrence, which in our case takes a very simple
form \cite{Agarwal, Wootters}
\begin{equation}
C\left( t\right) =\left\vert \sum\limits_{m=0}^{\infty }\sum\limits_{\kappa
=\pm }\frac{p\left( m\right) \left( 1+\kappa \left( -1\right) ^{m}\right) }{2
}q_{2m}^{\kappa +}\right\vert . \label{C}
\end{equation}
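Equation (\ref{C}) can be read off from the well-known closed form of the
concurrence of an X-state, i.e. a density matrix with nonzero entries only
on the diagonal and anti-diagonal in the basis $\left\{ \left\vert
ee\right\rangle ,\left\vert eg\right\rangle ,\left\vert ge\right\rangle
,\left\vert gg\right\rangle \right\} $,
\begin{equation*}
C=2\max \left\{ 0,\left\vert \rho _{14}\right\vert -\sqrt{\rho _{22}\rho
_{33}},\left\vert \rho _{23}\right\vert -\sqrt{\rho _{11}\rho _{44}}\right\} .
\end{equation*}
Since the populations and coherences of $\left\vert eg\right\rangle $ and $
\left\vert ge\right\rangle $ vanish in Eq. (\ref{density}), the concurrence
reduces to twice the modulus of the $\left\vert ee\right\rangle \left\langle
gg\right\vert $ coherence, which is precisely Eq. (\ref{C}).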
It is also worthwhile to mention that for homogeneous coupling $\beta _{2}=0$
we immediately have $q_{1m}^{\kappa \pm }=1$ and $q_{2m}^{\kappa \pm
}=e^{\pm i2\Omega^{\kappa}_m t}$. Then the concurrence (\ref{C}) reduces to
the homogeneous result as in \cite{Agarwal}
\begin{equation*}
\!C\left( t\right) \!\approx \!\sum_{k=0}^{\infty }\!\left\vert \bar{S}
_{k}\left( t,2\omega _{1}\right) \right\vert =\!\sum_{k=0}^{\infty }\!\frac{
\exp \left( \frac{\!-\!\left( 2\mu \!-\!\mu _{k}\right) ^{2}f\beta _{1}^{2}}{
2\left( 1\!+\!\left( \pi kf\right) ^{2}\right) }\right) }{\left(
1\!+\!\left( \pi kf\right) ^{2}\right) ^{1/4}}.
\end{equation*}
In Fig. \ref{Fig8} we plot the time evolution of the concurrence of the
two qubits, coupled to the bosonic field with different coupling strengths. It
is interesting to examine how the entanglement changes when one of two
qubits reaches the ultrastrong coupling regime while the other coupling
parameter varies. We observe a variety of qualitative features
such as entanglement birth, death, and rebirth, with revivals that
initially appear periodically. Moreover, the periodicity of the revivals is lost after
a period of time and the duration of the death intervals becomes shorter
over time. The revival period gets shorter and the
periodicity of the revivals is lost more quickly with increasing $
g_{2}$. The zeroth-order approximation reproduces the evolution quite accurately at
short times but fails to describe the long-time behavior when the coupling
strengths are sufficiently large.
\section{Conclusion}
In conclusion, we have developed a systematic truncated subspace approach
for solving the TC model beyond RWA by using the displaced Fock basis,
parity operator subspace and truncation in the power series \cite{Irish1, Liu,
Braak}. This provides a straightforward way to access the Hilbert space of
the inhomogeneous coupling system. In principle we are able to
solve the inhomogeneously coupled $N$-qubit-oscillator model to obtain an
analytical result to any order by constructing the $2^{N}$ displacement
operators. The complexity of the solutions depends on the determinant of the
secular equation, the primitive building blocks of which involve transition
between Fock states displaced in different directions and distances.
As an example of particular experimental interest, the two-qubit TC model
exhibits many new features of the qubit-oscillator system, and our main
findings are the following:
(1) The analytical energy spectrum
of the two-qubit inhomogeneously coupled TC model is given in the
zeroth-order and first-order approximations. The zeroth-order results already agree
with the numerical solutions very well in the ultra-strong coupling case
$\beta_{1}\sim 0.2\omega _{c}$, while the first-order approximation improves
the analytical eigenenergies, making them applicable even in the deep coupling regime
$\beta _{1}\sim \omega _{c}$ after half of the pseudo solutions are ruled out.
(2) The TC model consisting of two qubits is quasi-exactly solvable, that is,
a finite number of exact eigenvalues and associated eigenfunctions are given
in the closed form. Specifically, in the homogeneous coupling case, $E=\omega_c$
is always a solution corresponding to even(odd) parity for symmetric(asymmetric)
detuning $\omega_1\pm\omega_2=2\omega_c$. For two completely identical
qubits homogeneously coupled to the bose field, the singlet state $\left( \left\vert
10\right\rangle -\left\vert 01\right\rangle \right) |m\rangle /\sqrt{2}$ for any $m$
is an exact eigenstate with eigenvalue $E_m = m\omega_c$. The remaining part of
the spectrum is accessible only numerically through the truncated subspace approach.
(3) Several nontrivial level crossing points in the same parity subspace are
identified by means of the fidelity between states before and after the crossing.
This implies an enlarged hidden symmetry, and we show that the homogeneously
coupled two-qubit TC model with $\omega _{1}=\omega _{2}$ or $\omega _{1}
\pm \omega _{2}=2\omega _{c}$ is integrable.
(4) The quantum dynamics of the TC model beyond the RWA is investigated in
the adiabatic approximation, with special attention paid to unequal coupling
strengths for the two qubits. The probability of the two qubits staying in their initial state
is characterized by four oscillation frequencies, which is distinct from that of the single
qubit system and the homogeneous coupling system. The approximate results for the
population inversion are surprisingly accurate in describing the dynamics of the qubit,
showing that the collapse-revival phenomena emerge, survive, and
are finally destroyed when the coupling strength increases beyond the deep
coupling regime. This provides a method to control the revival signal of one
qubit by means of the involvement of another one, which imprints its
influence on the system by interacting with the optical field.
(5) The entanglement evolution of the two qubits as a principal measure of
intrinsically quantum coherence is examined with an initial inter-qubit
entanglement in the form of a familiar Bell state and the oscillator in a
coherent state. Analytical results are obtained for the concurrence in the
inhomogeneous coupling case by tracing out the quantum field in the reduced
density matrix.
Our approximation approach is applicable to
arbitrary two-qubit systems satisfying $\left( \left\vert g_{1}\right\vert
+\left\vert g_{2}\right\vert \right) \leq 0.2\omega _{c}$ and $\omega
_{c}\gg \omega _{j}$. The time evolution of the two qubits reproduces
perfectly the special case of two completely identical qubits
homogeneously coupled to a common oscillator mode, i.e. $g_{1}=g_{2}$ and $
\omega _{1}=\omega _{2}$, as in Ref. \cite{Agarwal}.
Interestingly, there is still further work to do on multi-qubit-oscillator
systems in the ultrastrong regime, e.g. the entanglement
evolution of GHZ states, quantum entanglement between the polarization of a single optical
photon and solid-state qubits, and the analysis of decoherence in an
external environment.
\begin{acknowledgments}
This work is supported by the NSF of China under Grant Nos. 11234008,
11104171 and 11074153, the National Basic Research Program of China (973
Program) under Grant Nos. 2010CB923103, 2011CB921601. We thank D. Braak, Tao
Liu, Yuxi Liu, Li Wang and Qinghu Chen for helpful discussions.
\end{acknowledgments}
\end{document} |
\begin{document}
\twocolumn[
\icmltitle{Efficient Algorithms for Adversarial Contextual Learning}
\icmlauthor{Vasilis Syrgkanis}{[email protected]}
\icmladdress{Microsoft Research,
641 Avenue of the Americas, New York, NY 10011 USA}
\icmlauthor{Akshay Krishnamurthy}{[email protected]}
\icmladdress{Microsoft Research,
641 Avenue of the Americas, New York, NY 10011 USA}
\icmlauthor{Robert E. Schapire}{[email protected]}
\icmladdress{Microsoft Research,
641 Avenue of the Americas, New York, NY 10011 USA}
\vskip 0.3in
]
\begin{abstract}
\input{abstract}
\end{abstract}
\section{Introduction}\label{sec:intro}
\input{intro}
\section{Online Learning with Oracles}\label{sec:oracles}
\input{general-oracles}
\section{Adversarial Contextual Learning}\label{sec:cont-lin}
\input{complete-info}
\section{Linear Losses and Semi-Bandit Feedback}
\label{sec:contextual-bandits}
\input{bandit-info}
\section{Switching Policy Regret}\label{sec:switching}
\input{switching}
\section{Efficient Path Length Regret Bounds}\label{sec:path-length}
\input{optimistic}
\section{Discussion}
In this work we give fully oracle efficient algorithms for adversarial online learning problems including contextual experts, contextual bandits, and problems involving linear optimization or switching experts.
Our main algorithmic contribution is a new Follow-The-Perturbed-Leader style algorithm that adds perturbed low-dimensional statistics.
We give a refined analysis for this algorithm that guarantees sublinear regret for all of these problems.
All of our results hold against adaptive adversaries, both with full and partial feedback.
While our algorithms achieve sublinear regret in all problems we consider, we do not always match the regret bounds attainable by inefficient alternatives.
An interesting direction for future work is whether fully oracle-based algorithms can achieve optimal regret bounds in the settings we consider.
Another interesting direction focuses on a deeper understanding of the small-separator condition and whether it enables efficient non-transductive learning in other settings.
We look forward to studying these questions in future work.
\appendix
\onecolumn
\input{appendix}
\end{document} |
\begin{document}
\title{Strong Singularity for Subfactors}
\begin{abstract}
We examine the notion of $\alpha$-strong singularity for subfactors of a ${\textrm{II}}_1\ $ factor, which is a metric quantity that relates the distance between a unitary in the factor and a subalgebra with the distance between that subalgebra and its unitary conjugate. Through planar algebra techniques, we demonstrate the existence of a finite index singular subfactor of the hyperfinite ${\textrm{II}}_1\ $ factor that cannot be strongly singular with $\alpha=1$, in contrast to the case for masas. Using work of Popa, Sinclair, and Smith, we show that there exists an absolute constant $0<c<1$ such that all singular subfactors are $c$-strongly singular. Under the hypothesis of $2$-transitivity, we prove that finite index subfactors are $\alpha$-strongly singular with a constant that tends to $1$ as the Jones index tends to infinity and infinite index subfactors are $1$-strongly singular. Finally, we give a proof that proper finite index singular subfactors do not have the weak asymptotic homomorphism property relative to the containing factor.
\end{abstract}
\section{Introduction}
The study of subfactors of a ${\textrm{II}}_1\ $ factor was initiated by Vaughan Jones in \cite{Jones.Index}, where he defined the index $[M:N]$ of a subfactor inclusion $N\subseteq M$ to be the dimension of $L^2(M)$ as a left Hilbert $N$-module. He showed that the spectrum of possible index values contains both a continuous and a discrete part: while the index can take any value greater than or equal to $4$, values less than $4$ are necessarily of the form $4\cos^2(\frac{\pi}n)$ for an integer $n \geq 3$. He also showed that all admissible index values are realized by subfactors of the hyperfinite ${\textrm{II}}_1\ $ factor.
Since the introduction of the index, invariants of subfactors both numerical and otherwise have been a rich area of study. It was shown in \cite{Jones.Index} that if $[M:N]<\infty$, then $N'\cap \langle M, e_N\rangle$ is finite dimensional, and by repeating the Jones construction one obtains a double sequence of inclusions of finite dimensional algebras
\[
\begin{array}{ccccccc} N'\cap M & \subseteq & N'\cap \langle M, e_N \rangle & \subseteq & N'\cap \langle \langle M,e_N\rangle , e_M \rangle & \subseteq & \dots \\ \begin{sideways}$\subseteq$\end{sideways} & & \begin{sideways}$\subseteq$\end{sideways} & & \begin{sideways}$\subseteq$\end{sideways} & & \\\mathbb{C}I & \subseteq & M'\cap \langle M,e_N\rangle & \subseteq & M'\cap \langle \langle M,e_N\rangle , e_M \rangle & \subseteq & \dots \end{array}
\]
called the standard invariant. Popa showed in \cite{Popa.CBMSNotes} that when $[M:N]\leq 4$ and $M$ is hyperfinite, the subfactor can be reconstructed from the standard invariant. He also proved that this holds in general for a larger class of \emph{strongly amenable} subfactors. In \cite{Dietmar.Index6} it was shown that there exist infinitely many nonisomorphic inclusions of index 6 subfactors of the hyperfinite ${\textrm{II}}_1\ $ factor with the same standard invariant. It is a major open question in subfactor theory to decide whether all possible standard invariants occur for subfactors of the hyperfinite ${\textrm{II}}_1\ $ factor. A subproblem, still open, is whether all index values can be obtained for irreducible subfactors, those with $N'\cap M=\mathbb{C}I$. Singular subfactors fall under the umbrella of this latter problem.
The concept of singularity for a subalgebra of a ${\textrm{II}}_1\ $ factor $M$ dates back to Jacques Dixmier \cite{Dixmier.Masa} in the context of maximal abelian *-subalgebras (masas) $A$ of $M$. If $\mathcal{U}(M)$ is the group of unitaries in $M$, and
\[
\mathcal{N}_M (A)=\{u\in \mathcal{U}(M):uAu^*=A\},
\]
$A$ is said to be singular if $\mathcal{N}_M(A)=\mathcal{U}(A)$. Dixmier provided examples of singular masas in the hyperfinite ${\textrm{II}}_1\ $ factor $R$.
In general, it is difficult to tell whether a given masa is singular or not. This difficulty led Sinclair and Smith to define the notion of $\alpha$-strong singularity in \cite{Sinclair.strongsing} as an analytic quantity which would imply the algebraic condition of singularity for masas in a ${\textrm{II}}_1\ $ factor $M$. The definition was extended to subalgebras $B$ in \cite{Sinclair.StrongSing2} by Sinclair, Smith, and Robertson.
In \cite{singular.al}, it was shown that singularity and strong singularity for masas are equivalent to a formally stronger property which first appeared in \cite{Sinclair.StrongSing2} and has come to be known as the weak asymptotic homomorphism property, or WAHP, in $M$. Using the equivalence between the WAHP and singularity, it was shown in \cite{singular.al} that the tensor product of singular masas in ${\textrm{II}}_1\ $ factors is again a singular masa in the tensor product factor.
It is natural to ask what relationships these properties have for arbitrary subalgebras of $M$. Herein, we will consider the case where $B=N$ is a subfactor of $M$. Understanding singular subfactors of the hyperfinite ${\textrm{II}}_1\ $ factor would shed light on (and possibly decide) whether all index values occur for irreducible subfactors for subfactors of the hyperfinite ${\textrm{II}}_1\ $ factor. It is worth noting that all ``small'' noninteger index values (between 3 and 4) yield singular subfactors.
One might guess that in this situation, the other extreme from masas, it would be possible to deduce strong singularity from singularity. The main result of this paper is that this is not the case. Using planar algebra techniques, we produce an example of a finite index singular subfactor of the hyperfinite ${\textrm{II}}_1\ $ factor $\mathcal{R}$ that is no more than $\sqrt{2({\sqrt{2}-1})}$-strongly singular in $\mathcal{R}$. The supremum over all admissible numbers $\alpha$ appearing in the strong singularity inequality then represents a new, nontrivial numerical invariant for singular subfactors of ${\textrm{II}}_1\ $ factors under unitary conjugacy.
The paper is organized as follows: Section \ref{prelims} establishes notation and general background. In Section \ref{mainstuff}, we establish positive results for strong singularity constants. Theorem \ref{relativecomm} shows that when the higher relative commutant $N' \cap \langle M,e_N\rangle$ is $2$-dimensional, proper finite index subfactors of $M$ are $\alpha$-strongly singular in $M$ where $\alpha=\sqrt{\frac {[M:N]-2}{[M:N]-1}}$. As this constant tends to one as the index tends to infinity, this suggests that infinite index singular subfactors are strongly singular. Indeed, the methods of proof for the finite index case yield strong singularity for an infinite index inclusion $N\subseteq M$ when $N'\cap \langle M, e_N \rangle$ is $2$-dimensional. Using results from \cite{Sinclair.PertSubalg}, we obtain an absolute constant $c=\frac 1 {13}$ for which all singular subfactors are $c$-strongly singular.
In Section \ref{counterex}, we give the example described above as the unique (up to conjugacy) subfactor of index $2+\sqrt{2}$ of the hyperfinite ${\textrm{II}}_1\ $ factor, and prove the aforementioned upper bound on $\alpha$. Using Theorem \ref{relativecomm}, we can obtain a lower bound of $\sqrt{2-\sqrt{2}}$. Finally, we give a simple proof in Section \ref{nowahp} of the fact, due to Popa, that no proper singular finite index subfactor of $M$ has the WAHP. Thus, these properties cannot be equivalent in general.
Let us briefly discuss existence questions for singular subfactors. Since technically $M$ is a strongly singular subfactor of itself (with the WAHP), existence questions for singular (or strongly singular) subfactors must be qualified. Recently, Stefaan Vaes has proved that there exists a factor $M$ such that every finite index irreducible subfactor is equal to $M$ \cite{Vaes.nontrivialsubfactor}. This implies that there exist factors with no proper finite index singular subfactors. On the other hand, Popa has shown in \cite{Popa.SingularMasas} that there always exist singular masas in separable ${\textrm{II}}_1\ $ factors. The correct analog of the question for masas, then, is to ask whether there always exist infinite index hyperfinite singular or $\alpha$-strongly singular subfactors of any separable ${\textrm{II}}_1\ $ factor.
An example in the hyperfinite ${\textrm{II}}_1\ $ factor of an infinite index subfactor with the WAHP was provided in \cite{Sinclair.StrongSing2}. In \cite{Popa.MaxInjective}, Popa remarks that by results from \cite{Popa.Kadison}, every separable ${\textrm{II}}_1\ $ factor has a semi-regular masa that is contained in some (necessarily irreducible) hyperfinite subfactor, and so by Zorn's Lemma has an irreducible maximal hyperfinite subfactor. Such an object is then a maximal hyperfinite subalgebra of $M$, and as Popa observes, any maximal hyperfinite subalgebra is singular \cite{Popa.MaxInjective}. Therefore, any separable ${\textrm{II}}_1\ $ factor has an infinite index hyperfinite singular subfactor. Maximal hyperfinite subfactors in any ${\textrm{II}}_1\ $ factor were first exhibited in \cite{Fuglede.MaxInjective}. Whether there exist strongly singular hyperfinite subfactors or hyperfinite subfactors with the WAHP in any ${\textrm{II}}_1\ $ factor remains an open question.
\section{Preliminaries and Notation}\label{prelims}
Throughout, $M$ will denote a ${\textrm{II}}_1\ $ factor and $N$ a subfactor of $M$. Unless otherwise noted, $M$ shall be regarded as faithfully represented on the Hilbert space $L^2(M)=L^2(M, \tau)$, where $\tau$ denotes the unique normal, faithful, tracial state on $M$. Elements of $M$ considered as a subspace of $L^2(M)$ shall be denoted by $\hat{x}$ or $x\hat{I}$ for $x$ in $M$ and $I$ the identity element of $M$. The element $\hat{I}$ is a cyclic and separating vector for $M\subseteq B(L^2(M))$. If $J$ denotes the isometric involution on $L^2(M)$ defined by
\begin{equation}
J(x\hat{I})= x^*\hat{I}
\end{equation}
on $M\hat{I}$, then $JMJ=M'$.
If $e_N$ is the orthogonal projection from $L^2(M)$ onto $L^2(N)$, then the von Neumann algebra $\langle M, e_N\rangle$ generated by $M$ and $e_N$ is a factor of type II equal to $JN'J$, and so is of type ${\textrm{II}}_1\ $ if and only if $N'$ is. We will denote by $\mathbb{E}_N$ the unique normal, faithful, trace-preserving conditional expectation from $M$ onto $N$, which can be thought of as the restriction of $e_N$ to $M\hat{I}$. The factor $\langle M, e_N\rangle$ possesses a unique normal, faithful, semifinite tracial weight ${\textrm{Tr}}$ such that for all $x,y$ in $M$,
\begin{enumerate}
\item ${\textrm{Tr}}(I)=[M:N]$;
\item $e_N\langle M,e_N \rangle e_N=Ne_N$;
\item ${\textrm{Tr}} (xe_Ny)=\tau(xy)$;
\item $e_Nxe_N=\ce{N}{x}e_N=e_N\ce{N}{x}$.
\end{enumerate}
If $[M:N]<\infty$, then for every element $x$ in $\langle M, e_N \rangle$, there is a unique element $y$ in $M$ with $xe_N=ye_N$. Proofs of these facts may be found in \cite{Jones.SubfactorsBook} or \cite{Jones.Index}. We shall denote by Aut$(N)$, $\mathcal{U}(N)$, and $\mathcal{N}_M(N)$ the groups of automorphisms of $N$, unitaries in $N$, and normalizing unitaries of $N$ in $M$, respectively.
A von Neumann subalgebra $B$ of $M$ is $\alpha$-strongly singular \cite{Sinclair.strongsing} if there is a constant $0< \alpha\leq 1$ such that for all unitaries $u\in M$,
\begin{equation}\label{ss}
\alpha \Vert u-\ce{B}{u}\Vert_2 \leq \Vert \mathbb{E}_B-\mathbb{E}_{uBu^*}\Vert_{\infty, 2}
\end{equation}
where $\Vert T\Vert _{\infty,2}=\displaystyle\sup_{\genfrac{}{}{0cm}{1}{x\in M}{\Vert x\Vert \leq 1}}\Vert Tx\Vert_2.$ If $\alpha=1$, then $B$ is said to be strongly singular.
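Note that inequality \eqref{ss} implies singularity of $B$: if $u\in \mathcal{N}_M(B)$, then $uBu^*=B$ and so $\mathbb{E}_{uBu^*}=\mathbb{E}_B$, whence the right-hand side of \eqref{ss} vanishes and $u=\ce{B}{u}\in \mathcal{U}(B)$; thus $\mathcal{N}_M(B)=\mathcal{U}(B)$.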
\section{Positive Strong Singularity Results for Subfactors}\label{mainstuff}
Under the assumption that $N'\cap \langle M,e_N\rangle $ is 2-dimensional, we can establish strong singularity constants for singular subfactors. In \cite{G.J}, this condition is referred to as 2-transitivity. Note that if $[M:N]>2$ and $N'\cap \langle M,e_N\rangle$ is 2-dimensional, then $N$ is automatically singular in $M$, as any $u$ in $\mathcal{N}_M(N)\backslash \mathcal{U}(N)$ yields the projection $ue_Nu^*$ in $N'\cap \langle M, e_N\rangle$. This projection is not $e_N$ since $\{e_N\}'\cap M=N$ and it is also not $e_N^{\perp}$ since
\[
{\textrm{Tr}} (e_N^{\perp})>{\textrm{Tr}} (e_N)={\textrm{Tr}}(ue_Nu^*).
\]
By Goldman's Theorem (\cite{Goldman.Index2} or \cite{Jones.SubfactorsBook}), all index 2 subfactors are regular. Any subfactor of index strictly between 3 and 4 is 2-transitive, and therefore singular. There also exist 2-transitive hyperfinite subfactors for every integer index $\geq 3$.
\renewcommand{\labelenumi}{\arabic{enumi})}
\begin{thm}\label{relativecomm}
Let $N\subseteq M$ be a singular subfactor with $N'\cap \langle M,e_N\rangle$ 2-transitive. If $[M:N]<\infty$, then $N$ is $\sqrt{\frac {[M:N]-2}{[M:N]-1}}$-strongly singular in $M$. If $[M:N]=\infty$, then $N$ is strongly singular in $M$.
\end{thm}
\begin{proof} Let $N\subseteq M$ be a singular inclusion of subfactors and suppose $N'\cap \langle M, e_N \rangle $ is 2-dimensional. Let $u$ be a unitary in $M$ and define $C$ to be the weakly closed convex hull of the set $\{we_Nw^* : w\in uNu^*\}$ where $w$ is unitary. Then $C$ admits a unique element of minimal $2$-norm denoted by $h$ which has the following properties, detailed in \cite{Sinclair.PertSubalg} :
\begin{enumerate}
\item $h\in (uNu^*)'\cap \langle M, e_N\rangle$;
\item Tr$(e_Nh)={\textrm{Tr}}(h^2)$;
\item Tr$(h)=1$;
\item $1-{\textrm{Tr}} (e_Nh)\leq \Vert \mathbb{E}_N-\mathbb{E} _{uNu^*}\Vert ^2 _{\infty, 2}$.
\end{enumerate}
Now $(uNu^*)'\cap \langle M, e_N \rangle$ has basis $ue_Nu^*$ and $ue_N^{\perp}u^*$, so that $h=\alpha (ue_Nu^*)+\beta (ue_N^{\perp}u^*)$ for some scalars $\alpha$ and $\beta$. By 3),
\[
1={\textrm{Tr}} (h)=\alpha +\lambda \beta,
\]
where $\lambda=[M:N]-1$. If $[M:N]=\infty$, then ${\textrm{Tr}} (e_N ^{\perp})=\infty$, which implies that $\beta =0$, and therefore $\alpha=1$. If $[M:N]$ is finite, then $\alpha=1-\lambda \beta$, and expanding ${\textrm{Tr}} (e_Nh)$ yields
\begin{align*}
{\textrm{Tr}}(e_Nh)=&\alpha {\textrm{Tr}} (e_Nue_Nu^*)+\beta {\textrm{Tr}}(e_N-e_Nue_Nu^*)\\
=&\alpha {\textrm{Tr}} (e_N\mathbb{E}_N(u)\mathbb{E}_N(u^*))+\beta ({\textrm{Tr}}(e_N)-{\textrm{Tr}} (e_N\mathbb{E}_N (u)\mathbb{E}_N (u^*)))\\
=&\alpha \tau (\mathbb{E}_N (u)\mathbb{E}_N (u^*))+\beta (1-\tau (\mathbb{E}_N(u) \mathbb{E}_N (u^*)))\\
=&\alpha \Vert \mathbb{E}_N(u)\Vert^2 _2+\beta \Vert u-\mathbb{E}_N(u)\Vert ^2 _2,
\end{align*}
since $1=\Vert u\Vert^2 _2= \Vert \mathbb{E}_N(u)\Vert^2 _2+\Vert u-\mathbb{E}_N(u)\Vert^2 _2$. Setting $k=\Vert u-\mathbb{E}_N(u)\Vert^2 _2$ and substituting the formula for $\alpha$ gives
\[
{\textrm{Tr}} (e_Nh)=(1-\lambda \beta)(1-k)+\beta k.
\]
Using 2), we have
\[
(1-\lambda \beta)(1-k)+\beta k={\textrm{Tr}} (e_Nh)={\textrm{Tr}} (h^2)=(1-\lambda \beta)^2+\lambda \beta ^2,
\]
and so $(\lambda^2+\lambda)\beta^2-(\lambda +k+\lambda k)\beta +k=0$. We may then solve for $\beta$ in terms of $\lambda$ and $k$, obtaining the roots $\beta=\dfrac k \lambda$ and $\beta =\dfrac 1 {1+\lambda}$.
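Indeed, the quadratic factors as
\[
(\lambda^2+\lambda)\beta^2-(\lambda +k+\lambda k)\beta +k=(\lambda \beta -k)\left((\lambda +1)\beta -1\right),
\]
which exhibits these two roots directly.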
Suppose that $\beta=\dfrac 1 {1+\lambda}$. Then
\[
\alpha=1-\frac {\lambda}{1+\lambda}=\frac 1 {1+\lambda}=\beta,
\]
and so
\[
h=\beta (ue_Nu^*)+\beta (ue_N^{\perp}u^*)=\beta I=\dfrac 1 {1+\lambda}I.
\]
Since $h$ is an element of $C$, there exist natural numbers $\{n_j\}_{j=1} ^{\infty}$, positive reals $\{\gamma_i ^{(j)}\}_{i=1} ^{n_j}$ with $\sum_{i=1}^{n_j}\gamma_i^{(j)}=1$ and unitaries $\{w_i ^{(j)}\}_{i=1} ^{n_j}$ in $N$ with
\[
\lim_{j\to \infty}\sum_{i=1} ^{n_j}\gamma_i^{(j)}uw_i^{(j)}u^*e_Nu(w_i^{(j)})^*u^*=\frac 1 {1+\lambda} I
\]
in WOT.
Then also
\[
\lim_{j\to \infty}\sum_{i=1} ^{n_j}\gamma_i^{(j)}w_i^{(j)}u^*e_Nu(w_i^{(j)})^*= \frac 1 {1+\lambda} I
\]
in WOT and
\[
\lim_{j\to \infty}e_N\left(\sum_{i=1} ^{n_j}\gamma_i^{(j)}w_i^{(j)}u^*e_Nu(w_i^{(j)})^*\right)= \frac {e_N} {1+\lambda}
\]
in WOT. Taking the trace of both sides yields
\[
{\textrm{Tr}} \left( \lim_{j\to \infty}e_N\left(\sum_{i=1} ^{n_j}\gamma_i^{(j)}w_i^{(j)}u^*e_Nu(w_i^{(j)})^*\right)\right)= {\textrm{Tr}} \Big{(}\frac {e_N} {1+\lambda}\Big{)}
=\frac 1 {1+\lambda}.
\]
However, for any $n_j$, $1\leq j< \infty$,
\begin{align*}
{\textrm{Tr}} \left(e_N\left(\sum_{i=1} ^{n_j}\gamma_i^{(j)}w_i^{(j)}u^*e_Nu(w_i^{(j)})^*\right)\right)&= {\textrm{Tr}} \left(e_N\left(\sum_{i=1} ^{n_j}\gamma_i^{(j)}w_i^{(j)}\ce{N}{u^*}\ce{N}{u}(w_i^{(j)})^*\right)\right)\\
&=\tau \left(\sum_{i=1} ^{n_j}\gamma_i^{(j)}w_i^{(j)}\ce{N}{u^*}\ce{N}{u}(w_i^{(j)})^*\right)=\Vert \ce{N}{u}\Vert^2 _2,
\end{align*}
and so $\dfrac 1{1+\lambda} =\Vert \ce{N}{u}\Vert^2 _2$. We obtain that
\[
k=1-\Vert \ce{N}{u}\Vert^2 _2=1-\dfrac 1{1+\lambda}=\dfrac{\lambda}{1+\lambda}.
\]
Then $\dfrac k \lambda=\dfrac 1 {1+\lambda}$, and so the only instance where $\beta=\dfrac 1 {1+\lambda}$ is when $k=\dfrac {\lambda}{1+\lambda}$, and there the two roots are identical.
We may then take $\beta=\dfrac k \lambda$ and so $\alpha=1-\lambda\beta=1-k$ when $[M:N]$ is finite. Hence
\begin{equation}
h=(1-k)(ue_Nu^*)+\dfrac k \lambda (ue_N^{\perp}u^*).
\end{equation}
By 4),
\[
\Vert \mathbb{E}_N-\mathbb{E} _{uNu^*}\Vert ^2 _{\infty, 2}\geq 1-{\textrm{Tr}}(e_Nh)=1-\Big{(}(1-k)^2+\frac{k^2}{\lambda}\Big{)}=k\Big{(}2-\Big{(}1+\frac 1 \lambda \Big{)}k\Big{)}
\]
and therefore
\begin{equation}\label{quad}
\Vert u-\ce{N}{u}\Vert^2_2=k\leq \dfrac 1{2-\Big{(}1+\frac 1 \lambda\Big{)}k}\Vert \mathbb{E}_N-\mathbb{E} _{uNu^*}\Vert ^2 _{\infty, 2}.
\end{equation}
As $k\leq 1$,
\[
2-\Big(1+\dfrac 1 \lambda \Big{)}k\geq 2-\Big{(}1+\dfrac 1 \lambda \Big{)}=1-\dfrac 1 \lambda,
\]
and it follows that
\begin{align*}
\Vert u-\ce{N}{u}\Vert^2_2\leq \frac 1 {1-\frac 1 \lambda}\Vert \mathbb{E}_N-\mathbb{E} _{uNu^*}\Vert ^2 _{\infty, 2}&=\frac \lambda {\lambda -1}\Vert \mathbb{E}_N-\mathbb{E} _{uNu^*}\Vert ^2 _{\infty, 2}\\
&=\frac {[M:N]-1}{[M:N]-2}\Vert \mathbb{E}_N-\mathbb{E} _{uNu^*}\Vert ^2 _{\infty, 2}.
\end{align*}
Hence $N$ is $\displaystyle \sqrt{\frac {[M:N]-2}{[M:N]-1}}$-strongly singular in $M$.
If $[M:N]=\infty$, then as previously noted, $\alpha=1$ and so $h=ue_Nu^*$. Therefore,
\[
\Vert u-\ce{N}{u}\Vert^2_2=1-{\textrm{Tr}} (e_Nh)\leq \Vert \mathbb{E}_N-\mathbb{E} _{uNu^*}\Vert ^2 _{\infty, 2}
\]
so that $N$ is strongly singular in $M$ and the proof is complete. \end{proof}
In the situation of Theorem \ref{relativecomm}, we may immediately show that when unitaries are close to a finite index singular subfactor in $2$-norm, they satisfy the equation for strong singularity.
\begin{cor} Under the hypotheses of Theorem \ref{relativecomm}, if $[M:N]<\infty$ and
\[
\Vert u-\ce{N}{u}\Vert_2\leq \sqrt{\frac {[M:N]-1} {[M:N]}},
\]
then
\[
\Vert u-\ce{N}{u}\Vert_2\leq \Vert\mathbb{E}_N-\mathbb{E}_{uNu^*}\Vert_{\infty,2}.
\]
\end{cor}
\begin{proof} Recall $k=\Vert u-\ce{N}{u}\Vert_2^2$ and $\lambda=[M:N]-1$. If $k\leq \frac {\lambda} {\lambda +1}$, then
\[
2-\left(1+\frac 1 \lambda\right)k=2-\left(\frac {\lambda +1}{\lambda}\right)k\geq 2-\left(\frac {\lambda +1}{\lambda}\right)\left(\frac {\lambda}{\lambda+1}\right)=1.
\]
Using equation \eqref{quad},
\[
\Vert u-\ce{N}{u}\Vert_2^2=k\leq k\left(2-\left(1+\frac 1 \lambda\right)k\right)\leq \Vert \mathbb{E}_N-\mathbb{E} _{uNu^*}\Vert ^2 _{\infty, 2}.
\]
\end{proof}
We end this section by producing an absolute constant $\alpha$ for which all singular subfactors of $M$ are $\alpha$-strongly singular. First, we need a lemma dealing with the form of certain partial isometries. This fact is well-known, but we include a proof for completeness.
\begin{lem}\label{partialiso} (Dye's Theorem for subfactors) Let $M$ be a {\rm ${\textrm{II}}_1\ $}factor and let $N$ be a subfactor of $M$. Suppose $v$ is a partial isometry in $M$ with $vNv^*\subseteq N$ and $v^*v\in N$. Then there exists a unitary $u\in M$ with $uNu^*\subseteq N$ and $v=uv^*v$. If $v^*Nv\subseteq N$ as well, then $u$ can be chosen to be in $\mathcal{N}_M (N)$.
\end{lem}
\begin{proof}
Suppose $v^*v\ne 0$ and set $p_1=v^*v$. Let $n$ be the integer such that $(n-1)\cdot\tau(p_1)<1\leq n\cdot\tau(p_1)$, and let $\{p_i\}_{i=2}^{n-1}\subseteq N$ be pairwise orthogonal projections subordinate to $p_1^{\perp}$ with $\tau(p_i)=\tau(p_1)$. Set $p_n=I-\sum_{i=1}^{n-1}p_i$.
For each $2\leq i\leq n$ take $w_i$ to be a partial isometry in $N$ with $w_i^*w_i=p_i$ and $w_iw_i^*\leq p_1$. Set $w_1=p_1$. Then the projections $vw_ip_iw_i^*v^*\in N$ for all $1\leq i\leq n$. With $q_1=vv^*$, choose a collection of pairwise orthogonal projections $\{q_i\}_{i=2}^{n}\subseteq N$ in the same manner as for $p_1$. Take partial isometries $\{v_i\}_{i=1}^n\subseteq N$ so that $v_1=q_1$, $v_iv_i^*=q_i$ and $v_i^*v_i=vw_iw_i^*v^*\leq q_1$ for $2\leq i\leq n$. Then $\displaystyle u=\sum_{i=1}^nv_ivw_i$ is a unitary in $M$, and if $x\in N$,
\[
uxu^*=\left(\sum_{i=1}^nv_ivw_i\right)x\left(\sum_{j=1}^nw_j^*v^*v_j^*\right)=\sum_{i,j=1}^nv_iv(w_ixw_j^*)v^*v_j\in N
\]
as $w_ixw_j^*\in N$, $vNv^*\subseteq N$, and $v_i\in N$ for all $1\leq i,j\leq n$. Therefore $uNu^*\subseteq N$. Since $w_ip_1=\delta_{i,1}p_1$, we obtain that $v=up_1=uv^*v$.
We have shown that if $vNv^*\subseteq N$ and $v^*v\in N$, then $v=uv^*v$ for some unitary $u\in M$ with $uNu^*\subseteq N$. If in addition $v^*Nv\subseteq N$, then one can check that $u^*Nu\subseteq N$, so that $u$ normalizes $N$. \end{proof}
The following result is Theorem 5.4 in \cite{Sinclair.PertSubalg}.
\begin{thm}\label{pert}{\rm(Popa, Sinclair, \& Smith)}
Suppose $\delta >0$ and $N$, $N_0$ are two subfactors of $M$ with $\Vert \mathbb{E}_{N}-\mathbb{E}_{N_0}\Vert_{\infty, 2}\leq\delta$. Then there exist projections $q_0\in N_0$, $q\in N$, $q_0'\in N_0'\cap M$, $q'\in N'\cap M$, $p_0=q_0q_0'$, $p=qq'$, and a partial isometry $v$ in $M$ such that $vp_0N_0p_0v^*=pNp$, $vv^*=p$, $v^*v=p_0$, and
\begin{equation}\label{isom}
\Vert 1-v\Vert_2\leq 13\delta, \quad \tau(p)=\tau(p_0)\geq 1-67\delta^2.
\end{equation}
\end{thm}
There is a similar theorem for arbitrary subalgebras of $M$, also in \cite{Sinclair.PertSubalg}. As a direct consequence of Theorem \ref{pert} and Lemma \ref{partialiso}, we obtain
\begin{cor} Let $N$ be a singular subfactor in $M$. Then $N$ is $\frac 1 {13}$-strongly singular in $M$. \end{cor}
Theorem \ref{relativecomm} shows that the constant $\frac 1 {13}$ is not always optimal even for proper finite index singular subfactors.
\section{A Singular Subfactor that is Not Strongly Singular}\label{counterex}
In this section we describe an example, suggested to the authors by Vaughan Jones, of a subalgebra of the hyperfinite II$_1$ factor which is singular but not strongly singular. The subalgebra is in fact the unique (up to conjugacy) subfactor with index $2+\sqrt{2}$. To show it is not strongly singular, we will estimate both sides of inequality \ref{ss} for a specific unitary.
In \cite{G.J} an irreducible quadrilateral of hyperfinite II$_1$ factors
$\begin{array}{ccc}
P & \subset& M \\
\cup & & \cup \\
N &\subset &Q
\end{array}$ was constructed such that each of the four elementary subfactors $$N \subset P, N \subset Q, P \subset M, Q \subset M $$ has index $2+\sqrt{2}$. The principal graph of $N\subseteq M$ is given as\\
\includegraphics[width=2in]{principalgraph.pdf}
\\
It is shown in \cite{G.J} that such a quadrilateral is unique; in particular, it is isomorphic to its dual quadrilateral $\begin{array}{ccc}
\bar{P} & \subset& M_1 \\
\cup & & \cup \\
M &\subset &\bar{Q}
\end{array}$, where $$N \subset M \subset M_1, P \subset M \subset \bar{P}, Q \subset M \subset \bar{Q} $$ are each the basic construction. There is also present another intermediate subfactor $N \subset R \subset M$ with index $[M:R]=2 $.
In this quadrilateral $P$ and $Q$ are inner conjugate, and in fact a specific unitary $u \in N' \cap M_1$ which conjugates $\bar{P} $ onto $\bar{Q} $ is given by $u=2e_R-1 $, where $e_R$ is the biprojection in $M_1$ associated to $R$ \cite[Corollary 7.3.4]{G.J}.
We shall require the following results from \cite{G.J}, which hold for any two intermediate subfactors $P$ and $Q$ of a finite index irreducible inclusion $N\subseteq M$:
\begin{thm}\label{Landau}(Landau) $\displaystyle e_P\circ e_Q=\frac{{\textrm{Tr}}(e_Pe_Q)}{\delta}e_{PQ}$
\end{thm}
where $PQ$ is the (necessarily) strongly-closed subspace of $M$ generated by sums of products of elements in $P$ followed by elements of $Q$ and $\delta $ is $[M:N]^{\frac{1}{2}}$.
\begin{prop}\label{traces} ${\textrm{Tr}}(e_{PQ}){\textrm{Tr}}(e_Pe_Q)={\textrm{Tr}}(e_P){\textrm{Tr}}(e_Q).$
\end{prop}
Using these facts, we can then prove:
\begin{lem}
Let $\begin{array}{ccc}
P & \subset& M \\
\cup & & \cup \\
N &\subset &Q
\end{array}$ be the unique irreducible quadrilateral of hyperfinite II$_1$ factors such that the index of each elementary subfactor is $2+\sqrt{2} $. There exists a unitary $v \in M $ such that $vPv^*=Q $ and $E_{P}(v)=0$.
\end{lem}
\begin{proof}
By self-duality of the quadrilateral, it suffices to show that $E_{\bar{P}}(u)=0 $ for the $u$ described above. Let $e_{\bar{P}} \in M_2$ be the biprojection associated to $\bar{P} $. We then have $E_{\bar{P}}(u)e_{\bar{P}}=e_{\bar{P}}ue_{\bar{P}} $. It suffices to show that this quantity is zero, which we will do using the pictorial calculus of planar algebras.
Recall that an element $x$ in the relative commutant $N' \cap M_1 $ is represented by the ``$2$-box'' \vpic{2box.pdf}{.3in}, and more generally, an element of $N' \cap M_k $ by the ``$k$-box'' \vpic{kbox.pdf}{.65in}. For details on planar algebras, see \cite{Jones.PA}. (Note that we follow the convention of \cite{G.J}, omitting the ``outer boundary''.) If \vpic{eP.pdf}{.3in} is the $2$-box representing $e_P$, then by \cite[Lemma 3.2.6]{G.J} $e_{\bar{P}}=\displaystyle \frac{\delta}{{\textrm{Tr}}(e_P)}\phi(e_P)=\displaystyle \frac{\delta}{{\textrm{Tr}}(e_P)} $\vpic{co_ep.pdf}{.5in}$\in M'\cap M_2$ , where the modulus of the planar algebra $\delta $ is given by $[M:N]^{\frac{1}{2}}=2+\sqrt{2}=\frac{{\textrm{Tr}}(e_P)}{\delta} $ and $\phi$ is the linear isomorphism $x\to\delta^3\ce{M'}{xe_1e_2}$. Using Bisch's exchange relation for biprojections \cite{Bisch.Note}
\begin{center}
\vpic{ex1.pdf} {1 in} $=$ \vpic{ex2.pdf} {1 in}
\end{center}
we compute:
\begin{center}
\vpic{co_epuco_ep.pdf}{1in}=\vpic{co_epuco_ep2.pdf}{1.3in}
\end{center}
$=e_{\bar{P}}\cdot (u \circ e_P)$, where $\circ $ is the comultiplication in the planar algebra. By Theorem \ref{Landau} and Proposition \ref{traces},
\begin{align*}
u \circ e_P=2e_R \circ e_P - 1 \circ e_P&=\frac 1 {\delta} \left(2{\textrm{Tr}}(e_Pe_R)e_{RP}-{\textrm{Tr}}(e_P)I\right)\\
&=\frac 1 {\delta}\left(2\left(\frac{{\textrm{Tr}}(e_R){\textrm{Tr}}(e_P)}{{\textrm{Tr}}(e_{RP})}\right)e_{RP}-{\textrm{Tr}}(e_P)I\right).
\end{align*}
Since $R$ and $P$ cocommute (\cite[Lemma 7.3.1]{G.J}), by \cite[Theorem 3.2.1]{G.J} we have $RP=MP=M$ and so $e_{RP}=I$. Finally, ${\textrm{Tr}}(e_R)=\frac{\delta^2}2$, so that
\[
u \circ e_P=\frac 1 {\delta}\left(2\frac{\delta^2{\textrm{Tr}}(e_P)}{2\delta^2}-{\textrm{Tr}}(e_P)\right)I=0,
\]
therefore $E_{\bar{P}}(u)=0$ as well.
\end{proof}
\begin{lem}
Let $x \in M $ with $||x||_2=1$. Then $||\ce{P}{x}-\ce{Q}{x}||_2 \leq \sqrt{2\sqrt{2}-2} $.
\end{lem}
\begin{proof}
Consider the decomposition of $M$ into $N-N$ bimodules, which can be computed from the principal graph of $N \subset M $. We have $M \cong N \oplus 2V_1 \oplus 2V_2 \oplus V_3 $, where $N, V_1, V_2, V_3 $ are distinct irreducible $N-N$ bimodules (reference). Moreover, we have $P \cong Q \cong N \oplus V_1 $. Without loss of generality, we may assume that $x$ lies in the $V_1 \oplus V_1$ component of $M$, since any component of $x$ contained in $N$ would be preserved under both $E_P$ and $E_Q$ and any component of $x$ contained in $V_2 \oplus V_2 \oplus V_3$ would vanish under both $E_P$ and $E_Q$.
On the Hilbert space $L^2(V_1) \oplus L^2(V_1) $, the operators $e_P$ and $e_Q$ can be represented (after a suitable unitary transformation) in block form as the $2\times2$ matrices $A=\left( \begin{array}{cc}
1 & 0 \\
0 & 0 \end{array} \right)$ and $B=\left( \begin{array}{cc}
\lambda & \sqrt{\lambda(1-\lambda)} \\
\sqrt{\lambda(1-\lambda)} & 1-\lambda \end{array} \right)$, where the parameter $\lambda$ is the square of the cosine of the angle between these two projections. This angle is the same as the angle between the subfactors $P$ and $Q$, computed in \cite{G.J} as $Ang(P,Q)=\cos^{-1}(\sqrt{2}-1) $. So $\lambda=3-2\sqrt{2} $, and the norm of $e_P-e_Q $ on $L^2(V_1) \oplus L^2(V_1) $ is given by the positive eigenvalue of the matrix $A-B$, which is $\sqrt{1-\lambda}=\sqrt{2\sqrt{2}-2} $.
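Explicitly, $\det(A-B-tI)=(1-\lambda-t)(\lambda-1-t)-\lambda(1-\lambda)=t^2-(1-\lambda)$, so the eigenvalues of $A-B$ are $\pm\sqrt{1-\lambda}$.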
\end{proof}
\begin{cor}
$||\mathbb{E}_P-\mathbb{E}_{vPv^*}||_{\infty,2} \leq \sqrt{2\sqrt{2}-2}$, where $v$ is the unitary of the previous lemma.
\end{cor}
\begin{proof}
If $x$ in $M$ satisfies $||x||_{\infty}\leq 1 $, then $||x||_{2} \leq ||x||_{\infty} \leq 1 $
so $||\mathbb{E}_P(x)-\mathbb{E}_Q(x)||_2 \leq \sqrt{2\sqrt{2}-2} $.
\end{proof}
\begin{thm}\label{final}
Let $A \subset M $ be the unique (up to conjugacy) subfactor of the hyperfinite II$_1$ factor with index $2+ \sqrt{2}$. Then $A$ is singular in $M$ but is not strongly singular.
\end{thm}
\begin{proof}
Since $[M:A]=2+\sqrt{2}$ is between $3$ and $4$, $A \subseteq M$ is singular. Identifying $A$ with the intermediate subfactor $P$ of the quadrilateral (possible by uniqueness), the previous two lemmas provide a unitary $v\in M$ with $\mathbb{E}_A(v)=0$, so that $\Vert v-\mathbb{E}_A(v)\Vert_2=1$, while $\Vert \mathbb{E}_A-\mathbb{E}_{vAv^*}\Vert_{\infty,2}\leq \sqrt{2\sqrt{2}-2}<1$. Hence the inequality \ref{ss} does not hold for $\alpha=1$, so the subfactor is not strongly singular.
\end{proof}
Note that by combining Theorem \ref{final} with Theorem \ref{relativecomm}, we obtain that the optimal value for $\alpha$ in equation \eqref{ss} is between $\sqrt{\sqrt{2}(\sqrt{2}-1)}$ and $\sqrt{2(\sqrt{2}-1)}$.
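Numerically, $\sqrt{\sqrt{2}(\sqrt{2}-1)}=\sqrt{2-\sqrt{2}}\approx 0.765$, in agreement with the lower bound $\sqrt{2-\sqrt{2}}$ quoted in the introduction, while $\sqrt{2(\sqrt{2}-1)}\approx 0.910$.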
\section{The WAHP and finite index singular subfactors}\label{nowahp}
A subalgebra $B$ is said to have the weak asymptotic homomorphism property (WAHP) if for every $\varepsilon >0$ and for all $x_1,\dots, x_n$, $y_1,\dots, y_m$ in $M$, there exists a unitary $u$ in $B$ with
\begin{equation}\label{wahp}
\Vert \ce{B}{x_iuy_j}-\ce{B}{x_i}u\ce{B}{y_j}\Vert_2<\varepsilon
\end{equation}
for every $1\leq i\leq n$, $1\leq j\leq m$.
As a consequence of Popa's Intertwining Theorem \cite{Popa.StrongrigidityI}, a subalgebra $B$ of a ${\textrm{II}}_1\ $ factor $M$ will have the WAHP if and only if there are no nonzero finite projections in $B'\cap \langle M,e_B\rangle$ subordinate to $e_B^{\perp}$. We include a simple proof showing that no finite index subfactor may have the WAHP.
Before beginning the proof, recall from \cite{Jones.SubfactorsBook} or \cite{Popa.Entropy} that a Pimsner-Popa basis for a finite index inclusion of subfactors $N\subseteq M$ is a collection of elements $\lambda_1, \dots ,\lambda _k$ in $M$, with $k$ any integer greater than or equal to $[M:N]$, such that every $x\in M$ may be represented as $\sum^k _{j=1} \lambda_j \ce{N}{\lambda^* _jx}$ and $\sum^k _{j=1} \lambda _j e_N\lambda _j^*=1$.
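A standard illustrative example: for the index-$2$ inclusion $N\subseteq M=N\rtimes \mathbb{Z}/2\mathbb{Z}$ arising from an outer action with implementing unitary $w$, the pair $\{1,w\}$ is such a basis. Indeed, every $x=a+bw$ with $a,b\in N$ satisfies $x=1\cdot \ce{N}{x}+w\ce{N}{w^*x}$, while $e_N+we_Nw^*=1$ because $we_Nw^*$ is a projection of trace $1$ orthogonal to $e_N$ (as $e_Nwe_N=\ce{N}{w}e_N=0$) and ${\textrm{Tr}}(I)=[M:N]=2$.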
\begin{thm}\label{nwahp}
If $N\subseteq M$ is a {\rm{${\textrm{II}}_1\ $}}factor with $1<[M:N]<\infty$, then $N$ does not have the WAHP in $M$.
\end{thm}
\begin{proof} It will be advantageous to use a Pimsner-Popa basis obtained by first choosing $k$ to be the least integer greater than or equal to $[M:N]$. We then select a collection of orthogonal projections $\{p_j\}_{j=1}^k$ in $\langle M,e_N\rangle$ with $p_1=e_N$, $\displaystyle \sum_{j=1} ^k p_j=1$ and ${\textrm{Tr}}(p_j)\leq 1$ with equality except possibly for $j=k$.
Let $v_1,v_2,\dots ,v_k$ be partial isometries in $\langle M,e_N\rangle$ such that $e_N=v_1$, $v_jv^* _j=p_j$ and $v_j ^*v_j\leq e_N$. The desired basis is given by the unique elements $\lambda_j\in M$ with the property that
\[
\lambda_je_N=v_je_N.
\]
Observe that $\lambda_1=1$. Since for $i\ne j$,
\begin{align*}
\ce{N}{\lambda_i^* \lambda_j}e_N&=e_N\lambda_i^*\lambda_je_N=e_Nv_i^*v_je_N\\
&=e_Nv_i^*p_ip_jv_je_N=0,
\end{align*}
we have that $\ce{N}{\lambda_i^*\lambda_j}=0$ for $i\ne j$. In particular, $\ce{N}{\lambda_j}=0$ for all $1<j\leq k$. It is worth noting that this is the original construction in \cite{Popa.Entropy}.
Now suppose $1<[M:N]<\infty$ and $\lambda_1,\dots ,\lambda_k$ are chosen as indicated. We will show that the WAHP fails for the sets $\{ x_i=\lambda_i\}$ and $\{ y_j=\lambda_j^*\}$, $1<i,j\leq k$. Let $u$ be any unitary in $N$. Then since
\[
\tau (\ce{N}{x})=\tau (x)={\textrm{Tr}}(e_Nx)
\]
for all $x$ in $M$,
\begin{align*}
\sum_{i,j=2} ^k \Vert \ce{N}{\lambda_i ^*u\lambda_j}\Vert^2 _2&=\sum_{i,j=2} ^k \tau(\ce{N}{\lambda_j^* u^*\lambda_i}\ce{N}{\lambda_i^* u\lambda_j})=\sum_{i,j=2} ^k \tau(\lambda_j^* u^*\lambda_i\ce{N}{\lambda_i^* u\lambda_j})\\
&=\sum_{i,j=2} ^k {\textrm{Tr}} (e_N\lambda_j^* u^*\lambda_i\ce{N}{\lambda_i^* u\lambda_j})=\sum_{i,j=2} ^k {\textrm{Tr}} (e_N\lambda_j^* u^*\lambda_ie_N{\lambda_i^* u\lambda_j}e_N).
\end{align*}
Using this equality, the fact that $u$ commutes with $e_N$, and $\sum_{j=1}^k\lambda_je_N\lambda_j^*=1$, we get
\begin{align*}
\sum_{i,j=2} ^k \Vert \ce{N}{\lambda_i ^*u\lambda_j}\Vert^2 _2&=\sum_{i,j=2} ^k {\textrm{Tr}} (e_N\lambda_j^* u^*\lambda_ie_N{\lambda_i^* u\lambda_j}e_N)=\sum_{i,j=2} ^k {\textrm{Tr}}(u^*\lambda_ie_N\lambda_i^*u\lambda_je_N\lambda_j^*)\\
&={\textrm{Tr}} (u^*(1-e_N)u(1-e_N))={\textrm{Tr}} ((1-e_N)u^*u)=[M:N]-1>0.
\end{align*}
This implies that for any given unitary $u$ in $N$, there are indices $1<i,j\leq k$ with
\[
\Vert \ce{N}{\lambda^* _iu\lambda_j}\Vert_2\geq \frac {\sqrt{[M:N]-1}}{k-1},
\]
and so the WAHP fails to hold.
\end{proof}
Combining the previous theorem with the discussion at the beginning of this section, we immediately get
\begin{cor}
There exist singular subfactors that do not have the WAHP.
\end{cor}
\begin{bibdiv}
\begin{biblist}
\bib{Dietmar.Index6}{article}{
author={Bisch, Dietmar},
author={Nicoara, Remus},
author={Popa, Sorin},
title={Continuous families of hyperfinite subfactors with the same
standard invariant},
date={2007},
ISSN={0129-167X},
journal={Internat. J. Math.},
volume={18},
number={3},
pages={255--267},
review={\MR{MR2314611 (2008k:46188)}},
}
\bib{Bisch.Note}{article}{
author={Bisch, Dietmar},
title={A note on intermediate subfactors},
date={1994},
ISSN={0030-8730},
journal={Pacific J. Math.},
volume={163},
number={2},
pages={201--216},
review={\MR{MR1262294 (95c:46105)}},
}
\bib{Dixmier.Masa}{article}{
author={Dixmier, J.},
title={Sous-anneaux ab\'eliens maximaux dans les facteurs de type fini},
date={1954},
ISSN={0003-486X},
journal={Ann. of Math. (2)},
volume={59},
pages={279--286},
review={\MR{MR0059486 (15,539b)}},
}
\bib{Fuglede.MaxInjective}{article}{
author={Fuglede, Bent},
author={Kadison, Richard~V.},
title={On a conjecture of {M}urray and von {N}eumann},
date={1951},
journal={Proc. Nat. Acad. Sci. U. S. A.},
volume={37},
pages={420--425},
review={\MR{MR0043390 (13,255a)}},
}
\bib{Goldman.Index2}{article}{
author={Goldman, M.},
title={On subfactors of factors of type {${\rm II}\sb 1$}},
date={1960},
journal={Mich. Math. J.},
volume={7},
pages={167--172},
}
\bib{G.J}{article}{
author={Grossman, Pinhas},
author={Jones, Vaughan F.~R.},
title={Intermediate subfactors with no extra structure},
date={2007},
ISSN={0894-0347},
journal={J. Amer. Math. Soc.},
volume={20},
number={1},
pages={219--265 (electronic)},
review={\MR{MR2257402 (2007h:46077)}},
}
\bib{Jones.SubfactorsBook}{book}{
author={Jones, V.},
author={Sunder, V.~S.},
title={Introduction to subfactors},
series={London Mathematical Society Lecture Note Series},
publisher={Cambridge University Press},
address={Cambridge},
date={1997},
volume={234},
ISBN={0-521-58420-5},
review={\MR{MR1473221 (98h:46067)}},
}
\bib{Jones.Index}{article}{
author={Jones, V. F.~R.},
title={Index for subfactors},
date={1983},
ISSN={0020-9910},
journal={Invent. Math.},
volume={72},
number={1},
pages={1--25},
review={\MR{84d:46097}},
}
\bib{Jones.PA}{unpublished}{
author={Jones, V.F.R.},
title={Planar algebras, {I}},
date={1999},
note={Preprint, arXiv:math/9909027v1},
}
\bib{Popa.Entropy}{article}{
author={Pimsner, Mihai},
author={Popa, Sorin},
title={Entropy and index for subfactors},
date={1986},
ISSN={0012-9593},
journal={Ann. Sci. \'Ecole Norm. Sup. (4)},
volume={19},
number={1},
pages={57--106},
review={\MR{MR860811 (87m:46120)}},
}
\bib{Sinclair.PertSubalg}{article}{
author={Popa, Sorin},
author={Sinclair, Allan~M.},
author={Smith, Roger~R.},
title={Perturbations of subalgebras of type {II{$\sb 1$}} factors},
date={2004},
ISSN={0022-1236},
journal={J. Funct. Anal.},
volume={213},
number={2},
pages={346--379},
review={\MR{MR2078630}},
}
\bib{Popa.Kadison}{article}{
author={Popa, Sorin},
title={On a problem of {R}. {V}. {K}adison on maximal abelian {$\ast
$}-subalgebras in factors},
date={1981/82},
ISSN={0020-9910},
journal={Invent. Math.},
volume={65},
number={2},
pages={269--281},
review={\MR{MR641131 (83g:46056)}},
}
\bib{Popa.MaxInjective}{article}{
author={Popa, Sorin},
title={Maximal injective subalgebras in factors associated with free
groups},
date={1983},
ISSN={0001-8708},
journal={Adv. in Math.},
volume={50},
number={1},
pages={27--48},
review={\MR{MR720738 (85h:46084)}},
}
\bib{Popa.SingularMasas}{article}{
author={Popa, Sorin},
title={Singular maximal abelian {$\ast $}-subalgebras in continuous von
{N}eumann algebras},
date={1983},
ISSN={0022-1236},
journal={J. Funct. Anal.},
volume={50},
number={2},
pages={151--166},
review={\MR{MR693226 (84e:46065)}},
}
\bib{Popa.CBMSNotes}{book}{
author={Popa, Sorin},
title={Classification of subfactors and their endomorphisms},
series={CBMS Regional Conference Series in Mathematics},
publisher={Published for the Conference Board of the Mathematical Sciences,
Washington, DC},
date={1995},
volume={86},
ISBN={0-8218-0321-2},
review={\MR{MR1339767 (96d:46085)}},
}
\bib{Popa.StrongrigidityI}{article}{
author={Popa, Sorin},
title={Strong rigidity of {$\rm II\sb 1$} factors arising from malleable
actions of {$w$}-rigid groups. {I}},
date={2006},
ISSN={0020-9910},
journal={Invent. Math.},
volume={165},
number={2},
pages={369--408},
review={\MR{MR2231961 (2007f:46058)}},
}
\bib{Sinclair.StrongSing2}{article}{
author={Robertson, Guyan},
author={Sinclair, Allan~M.},
author={Smith, Roger~R.},
title={Strong singularity for subalgebras of finite factors},
date={2003},
ISSN={0129-167X},
journal={Internat. J. Math.},
volume={14},
number={3},
pages={235\Vertdash 258},
review={\MR{2004c:22007}},
}
\bib{Sinclair.strongsing}{article}{
author={Sinclair, A.~M.},
author={Smith, R.~R.},
title={Strongly singular masas in type {$\rm II\sb 1$} factors},
date={2002},
ISSN={1016-443X},
journal={Geom. Funct. Anal.},
volume={12},
number={1},
pages={199\Vertdash 216},
review={\MR{2003i:46066}},
}
\bib{singular.al}{article}{
author={Sinclair, Allan~M.},
author={Smith, Roger~R.},
author={White, Stuart~A.},
author={Wiggins, Alan},
title={Strong singularity of singular masas in {${\rm II}\sb 1$}
factors},
date={2007},
ISSN={0019-2082},
journal={Illinois J. Math.},
volume={51},
number={4},
pages={1077\Vertdash 1084},
review={\MR{MR2417416}},
}
\bib{Vaes.nontrivialsubfactor}{unpublished}{
author={Vaes, Stefaan},
title={Factors of type {II{$\sb 1$}} without non-trivial finite index
subfactors},
date={2006},
note={Preprint, arXiv:math.OA/0610231},
}
\end{biblist}
\end{bibdiv}
\section*{Author Addresses}
\begin{tabular*}{\textwidth}{l@{\hspace*{2cm}}l}
Pinhas Grossman&Alan Wiggins\\
Department of Mathematics&Department of Mathematics\\
1326 Stevenson Center&1326 Stevenson Center\\
Vanderbilt University&Vanderbilt University\\
Nashville, TN, 37209&Nashville, TN, 37209\\
USA&USA\\
\texttt{[email protected]}&\texttt{[email protected]}
{}\\\\
\end{tabular*}
\end{document} |
\begin{document}
\title{Data re-uploading with a single qudit}
\author[1]{\fnm{Noah L.} \sur{Wach}}
\author[2,3]{\fnm{Manuel S.} \sur{Rudolph}}
\author[1,4]{\fnm{Fred} \sur{Jendrzejewski}}
\author[5]{\fnm{Sebastian} \sur{Schmitt}}
\affil[1]{\orgdiv{Kirchhoff-Institut f\"ur Physik}, \orgname{Universit\"at Heidelberg}, \orgaddress{\street{Im Neuenheimer Feld 227}, \city{Heidelberg}, \postcode{69120}, \country{Germany}}}
\affil[2]{\orgname{Zapata Computing Canada Inc.}, \orgaddress{\street{25 Adelaide St E}, \city{Toronto}, \postcode{M5C3A1}, \country{Canada}}}
\affil[3]{\orgdiv{Institute of Physics}, \orgname{Ecole Polytechnique Fédérale de Lausanne (EPFL)}, \orgaddress{\street{Station 3}, \city{Lausanne}, \postcode{1015}, \country{Switzerland}}}
\affil[4]{\orgname{Alqor UG (haftungsbeschr\"ankt)}, \orgaddress{\street{Alexanderstr. 65}, \city{Frankfurt am Main}, \postcode{60489}, \country{Germany}}}
\affil[5]{\orgname{Honda Research Institute Europe GmbH}, \orgaddress{\street{Carl-Legien-Str.\ 30}, \city{Offenbach}, \postcode{63073}, \country{Germany}}}
\abstract{
Quantum two-level systems, i.e. qubits, form the basis for most quantum machine learning approaches that have been proposed throughout the years. However, in some cases, higher dimensional quantum systems may prove to be advantageous. Here, we explore the capabilities of multi-level quantum systems, so-called qudits, for their use in a quantum machine learning context.
We formulate classification and regression problems with the data re-uploading approach and demonstrate that a quantum circuit operating on a single qudit is able to successfully learn highly non-linear decision boundaries of classification problems such as the MNIST digit recognition problem.
We demonstrate that the performance strongly depends on the relation between the qudit states representing the labels and the structure of labels in the training data set.
Such a bias can lead to substantial performance improvement over qubit-based circuits in cases where the labels and qudit states are well-aligned.
Furthermore, we elucidate the influence of the choice of the elementary operators and show that the non-linear squeezing operator is necessary to achieve good performance.
We also show that there exists a trade-off for qudit systems between the number of circuit-generating operators in each processing layer and the total number of layers needed to achieve a given accuracy.
Finally, we compare classification results from numerically exact simulations and their equivalent implementation on actual IBM quantum hardware.
The findings of our work support the notion that qudit-based algorithms exhibit attractive traits and constitute a promising route to increasing the computational capabilities of quantum machine learning approaches.
}
\keywords{Quantum machine learning, qudits, parameterized quantum circuits, data re-uploading}
\maketitle
\section{Introduction}
In recent years, the field of
quantum machine learning has attracted much attention. There, quantum circuits are employed as central processing units for data-driven applications~\citep{biamonte_quantum_2017,schuld_supervised_2018,dunjko_machine_2018}.
While it is currently not clear whether or not quantum processing can provide a benefit on practical machine learning problems~\citep{schuld_is_2022,schuld_supervised_2021}, there has been some evidence that quantum machine learning models can outperform classical models in certain tasks~\citep{liu_rigorous_2021,sweke_quantum_2021,gyurik_establishing_2022, gyurik_towards_2022}.
While most studies, theoretical~\citep{bharti_noisy_2022,montanaro_quantum_2016} as well as experimental~\citep{graham_multi-qubit_2022, pino_demonstration_2021, kjaergaard_superconducting_2020}, focus on quantum systems consisting of two-level quantum systems, i.e., quantum bits (qubits), quantum computing hardware and algorithms can also be based on $d$-level systems, which are typically
called qudits~\citep{wang_qudits_2020,ringbauer_universal_2022}. Such qudit systems were shown to have advantages in specific contexts~\citep{cozzolino_highdimensional_2019,sheridan_security_2010} and have already been applied to several tasks~\citep{bravyi_hybrid_2022,deller_quantum_2022, weggemans_solving_2022}. However, a full evaluation on application-relevant tasks is still lacking.
Currently, it is not clear whether there is a fundamental advantage or disadvantage of utilizing qudit systems over their qubit counterparts for quantum machine learning or other application domains such as quantum optimization. However, qudits provide a complementary route to increasing the Hilbert space size in pursuit of better computational performance.
This is an alternative direction to qubit systems, where the route to larger Hilbert spaces is to increase the number of qubits.
Among other challenges, increasing the number of qubits requires engineering high-fidelity interactions between separate qubits, which is technologically challenging and typically involves larger gate errors than single-qubit operations.
In that regard, expanding the local Hilbert space dimension by utilization of qudit systems seems very promising, since certain long-range qubit operations could instead be mapped to local qudit operations, which might prove to be more efficient.
We discuss several such possible advantages of qudit systems at various points in this work.
We explore the usage of qudit systems in the context of the data re-uploading~\citep{perez-salinas_data_2020} quantum machine learning approach to build regression and multi-class classification models. In this scheme, a single qudit provides a natural way of encoding multiple classes by representing each class label as an orthogonal basis state.
This paper is structured as follows: In Sec.~\ref{sec:qudits}, we introduce the mathematical description of qudits. Sec.~\ref{sec:data_re-uploading} illustrates the implementation of the data re-uploading algorithm \citep{perez-salinas_data_2020} with qudits and shows the circuit structure as well as the chosen loss functions used during training.
Sec.~\ref{sec:experimental_setup} presents the training procedure, which is carried out numerically and on IBMQ hardware. To be able to run our model on qubit-based quantum hardware, we present a way to encode qudits with multiple qubits. In Sec.~\ref{sec:results}, we verify the expressivity of our model by testing it on a simple regression problem. We then test our model on multi-class classification problems, where we show an intrinsic bias between the qudit state representation and the data structure. Finally, we present numerical results of the model when trained on the MNIST handwritten digits data set~\citep{lecun_mnist_2005}. We additionally investigate an equivalent qubit-based implementation on IBMQ hardware and the effect of entangling operations on the performance of the model.
\section{Qudits}\label{sec:qudits}
$d$-level quantum systems, typically called qudits, are a generalization of qubits to $ d>2 $ and can serve as a basis for quantum information processing.
The Hilbert space is spanned by $d$ orthonormal basis vectors, denoted by $ \ket{0}, \ket{1}, \dots, \ket{d-1} $, and
arbitrary qudit states can be represented by the superposition $ \ket{\psi} = \sum_{k=0}^{d-1}c_k\ket{k} $ with
the normalization condition $ \sum_{k=0}^{d-1}\vert c_k\vert^2 = 1 $.
Inspired by cold atom systems \citep{kasper_universal_2022}, we interpret a $d$-level qudit as a spin with total angular momentum $\ell=\tfrac{d-1}{2}$, such that the basis state $\ket{k}$ corresponds to the spin eigenstate with angular momentum $m=\tfrac{2k-d+1}{2}$.
We employ the angular momentum operators $\{L_x, L_y, L_z\}$, which generate rotations around the corresponding axes and which obey the canonical commutation relations of the special unitary group $SU(2)$, $[L_i,L_j]=i \epsilon_{ijk}L_k$.
The action of the angular momentum operators on the qudit basis states is given by
\begin{align}
\label{eq:angularmometum}
L_x\ket{k} & = \tfrac12\big(\gamma_{d,k}\ket{k+1}+\gamma_{d,k-1}\ket{k-1}\big) \\
L_y\ket{k} & = \tfrac1{2i}\big(\gamma_{d,k}\ket{k+1}-\gamma_{d,k-1}\ket{k-1}\big) \\
L_z\ket{k} & = \tfrac{2k-d+1}{2}\ket{k}
\end{align}
where $k\in[0,d-1]$ and with $\gamma_{d,k}=\sqrt{(d-k-1)(k+1)}$.
There are various ways to define a universal gate set to realize actions in the qudit Hilbert space, see e.g.\ \cite{wang_qudits_2020} and \cite{luo_geometry_2014}.
For a single qudit, as it is considered in this work, we choose the two angular momentum operators $L_x$ and $L_z$ and the nonlinear squeezing or one-axis twisting operator $L_{z^2}=L_z^2$. This set is sufficient to generate any state by (possibly many) repeated finite rotations as detailed in \cite{kasper_universal_2022} and \cite{giorda_universal_2003}.
In analogy to qubits, the gates for qudit circuits are then generated by exponentiation of the basic operators, which implements the rotations
\begin{align}
R_j(\theta) = e^{-i \theta L_j} ,
\end{align}
with $j \in \{x, z, z^2\}$ and where $\theta\in\mathbb R$ is a free parameter.
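For concreteness, a minimal NumPy sketch of these objects is given below. It is meant purely as an illustration; the function names and conventions are not part of the implementation used for the numerical experiments in this work.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def angular_momentum_ops(d):
    """Matrices of L_x, L_z and L_{z^2} = L_z^2 in the basis |0>, ..., |d-1>."""
    k = np.arange(d)
    gamma = np.sqrt((d - k - 1.0) * (k + 1.0))   # gamma_{d,k}
    L_plus = np.diag(gamma[:-1], -1)             # <k+1| L_+ |k> = gamma_{d,k}
    L_x = 0.5 * (L_plus + L_plus.T)
    L_z = np.diag((2.0 * k - d + 1.0) / 2.0)
    return L_x, L_z, L_z @ L_z

def rotation(L, theta):
    """R_j(theta) = exp(-i * theta * L_j)."""
    return expm(-1j * theta * L)

# example: apply R_x(0.7) to the lowest state |0> of a qutrit (d = 3)
Lx, Lz, Lz2 = angular_momentum_ops(3)
psi0 = np.zeros(3, dtype=complex); psi0[0] = 1.0
psi = rotation(Lx, 0.7) @ psi0
\end{verbatim}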
\begin{figure*}
\caption{a: Schematic representation of $d=7$ qudit states on a generalized Bloch sphere. b: Schematic illustration of the action of the three operators $R_x$ (left), $R_z$ (middle) and $R_{z^2}$ (right).}
\label{fig:data_reuploading}
\end{figure*}
In Fig.~\ref{fig:data_reuploading} we illustrate the qudit states (panel A) and the action of the elementary operators by showing the Husimi-Q quasi-probability distribution \citep{husimi_formal_1940} on a generalized Bloch sphere in panel B. The pure rotation does not deform the probability distributions, but applying the squeezing operation leads to a deformation of the state, indicating its nonlinear action.
\section{Data re-uploading with a single qudit}\label{sec:data_re-uploading}
We utilize a single $d$-level qudit and implement the quantum machine learning model as a data re-uploading quantum circuit \citep{perez-salinas_data_2020,jerbi_quantum_2023}.
The quantum circuit in its general form is built up from $L$ layers and encodes a quantum state, which depends on the input data $\mathbf{x}$ as
\begin{align}\label{eq:state_unitary}
\ket{\mathbf{x},\boldsymbol{\omega},\boldsymbol{\theta}} &
=\prod_{l=1}^LU(\mathbf{x}, \boldsymbol{\omega}^{(l)},\boldsymbol{\theta}^{(l)}) \ket{0}\,,
\end{align}
where $\ket0$ is the initial state. The characteristic feature of the data re-uploading architecture is that the unitary operation of each layer $l$ encompasses data-dependent unitaries, which are parametrized by the scaling parameters $\boldsymbol{\omega}^{(l)}$, and data-independent operations with free parameters $\boldsymbol{\theta}^{(l)}$.
We tested several layer structures and found that they all produce similar results. In the following we report results for two architectures. The first one is inspired by classical Euler rotations, where the unitaries of each layer have the structure:
\begin{align}
\label{eq:DRUL_euler1}
U(\mathbf{x}, \boldsymbol{\omega}^{(l)},\boldsymbol{\theta}^{(l)}) = W(\boldsymbol{\theta}^{(l)}) S(\mathbf{x}, \boldsymbol{\omega}^{(l)}) \,.
\end{align}
The data encoding block $S$ of each layer consists of alternating $x$ and $z$ rotations,
\begin{align}
\label{eq:DRUL_euler2}
S(\mathbf{x}, \boldsymbol{\omega}^{(l)}) = R_\alpha(x_D\omega_D^{(l)} )\cdots {R}_z(x_2\omega_2^{(l)}){R}_x(x_1\omega_1^{(l)})
\end{align}
where the number of rotations is determined by the dimensionality $D$ of the input data and consequently $\alpha=x$ ($\alpha=z$) in case $D$ is odd (even).
The alternation of $x$ and $z$ rotations is chosen to ensure that the encoding rotations are non-commuting. This is necessary to distinguish between the individual dimensions of the data vector. If only $x$ (or $z$) rotations were chosen to encode the data, the classifier would only be able to learn the sum of the input data.
The data-independent block in each layer is composed of a sequence of three rotations followed by a squeezing gate $R_{z^2}$ as the last operation, i.e.\
\begin{align}\label{eq:DRUL_euler3}
W(\boldsymbol{\theta}^{(l)}) = {R}_{z^2}(\theta^{(l)}_4){R}_{x}(\theta^{(l)}_3){R}_{z}(\theta^{(l)}_2){R}_{x}(\theta^{(l)}_1)
\end{align}
The first three operations give rise to Euler angles, starting from $\ket{0}$, and thus provide the freedom to create overlap with an arbitrary qudit state.
The total number of adjustable parameters of an $L$-layer model is given by $(4+D) L$.
The action of this circuit structure is illustrated in Fig.~\ref{fig:data_reuploading} C.
The second architecture considered here is inspired by the simplified form presented by \cite{perez-salinas_data_2020} and has the structure:
\begin{align}
\label{eq:simleCirc}
U(\mathbf{x}, \boldsymbol{\omega}^{(l)},\boldsymbol{\theta}^{(l)}) = e^{-i\sum_{j}^D (\theta_j^{(l)}+\omega_j^{(l)} x_j) L_{c(j)} -i \theta_{D+1}^{(l)} L_{z^2} }\,.
\end{align}
Here the first term in the exponent is the sum over the three angular momentum operators, i.e.\ the generators of the $SU(2)$, and the function $c(j)=(j\mod 3)$ selects one of them. The second term in the exponent is a generalization of the original simplified architecture by including the squeezing operator in each layer. The total number of adjustable parameters for an $L$-layer model of this structure is given by $(2D+1) L$.
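To illustrate how the Euler-type architecture of Eqs.~\eqref{eq:DRUL_euler1}-\eqref{eq:DRUL_euler3} builds up the state of Eq.~\eqref{eq:state_unitary}, the following sketch assembles the circuit layer by layer. It reuses the helpers \texttt{angular\_momentum\_ops} and \texttt{rotation} from the sketch in Sec.~\ref{sec:qudits} and is again only an illustration of the construction, not the code used for our experiments.
\begin{verbatim}
def encoding_block(x, omega, Lx, Lz):
    """S(x, omega^(l)): one rotation per data dimension, alternating R_x, R_z."""
    S = np.eye(Lx.shape[0], dtype=complex)
    for j, (xj, wj) in enumerate(zip(x, omega)):
        L = Lx if j % 2 == 0 else Lz             # x_1 -> R_x, x_2 -> R_z, ...
        S = rotation(L, xj * wj) @ S
    return S

def trainable_block(theta, Lx, Lz, Lz2):
    """W(theta^(l)) = R_{z^2}(th_4) R_x(th_3) R_z(th_2) R_x(th_1)."""
    return (rotation(Lz2, theta[3]) @ rotation(Lx, theta[2])
            @ rotation(Lz, theta[1]) @ rotation(Lx, theta[0]))

def circuit_state(x, omegas, thetas, d):
    """|x, omega, theta>: apply L layers of U = W S to the initial state |0>."""
    Lx, Lz, Lz2 = angular_momentum_ops(d)
    psi = np.zeros(d, dtype=complex); psi[0] = 1.0
    for omega_l, theta_l in zip(omegas, thetas):  # omegas: (L, D), thetas: (L, 4)
        psi = (trainable_block(theta_l, Lx, Lz, Lz2)
               @ encoding_block(x, omega_l, Lx, Lz) @ psi)
    return psi
\end{verbatim}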
In our work we approach supervised classification and regression tasks by using qudit-based quantum circuits.
In a supervised learning setting, there exists a set of $N$ training samples $(\mathbf{x}_i,y_i)$ with $i=1,\dots, N$, which consist of pairs of input samples $\mathbf{x}$ with corresponding output values $y$.
The input samples are real-valued $D$-dimensional vectors $\mathbf{x}\in\mathbb{R}^D$ while the output values are either real numbers or a finite set of integers for regression and classification tasks, respectively.
For classification problems the output values $y\in\{0, 1,\dots,d-1\}$ indicate to which of the $d$ classes the data sample $\textbf{x}$ belongs.
In the quantum formulation, each class label is represented by a basis state of the $d$-level qudit $\ket{y}$ with $y=0,\dots,d-1$. The model prediction, i.e., the probability that a data sample $\mathbf{x}$ belongs to class $y$, can then conveniently be calculated from the overlap of the label state and the qudit wave function obtained from the quantum circuit with input $\mathbf{x}$,
\begin{align}
\label{eq:overlap}
P(y\vert \mathbf{x},\boldsymbol{\omega},\boldsymbol{\theta}) = \vert \braket{ y\vert \psi(\mathbf{x},\boldsymbol{\omega},\boldsymbol{\theta}) }\vert ^2\,.
\end{align}
Training the quantum circuit is achieved by minimizing a loss function over the given training data set $\mathcal{D} = \{ (\textbf{x}_i,y_i)\}_{i=1,\dots, N}$.
The overlap of Eq.~\eqref{eq:overlap} is the basis for formulating the mean squared error (MSE) loss function,
\begin{align}\label{eq:mse}
\mathcal{L}_{\text{MSE}}(\boldsymbol{\omega},\boldsymbol{\theta}) &=\frac{1}{N}\sum_{i=1}^N \big(\braket{ \bar{y}_i}- y_i\big)^2
\end{align}
with the average predicted label of the quantum model
\begin{align}\label{eq:averageoutput}
\braket{ \bar{y}_i} = \sum_{y=0}^{d-1} y \,P(y \vert \mathbf{x}_i,\boldsymbol{\omega},\boldsymbol{\theta})\,.
\end{align}
Another popular choice is the overlap loss as used in \cite{perez-salinas_data_2020},
\begin{align}
\mathcal{L}_\text{overlap}(\boldsymbol{\omega},\boldsymbol{\theta} ) =
\sum_{i=1}^N \big(1- P(y_i \vert \mathbf{x}_i,\boldsymbol{\omega},\boldsymbol{\theta} )\big).
\end{align}
The learning procedure amounts to adjusting the parameters of the quantum circuit $(\boldsymbol{\omega},\boldsymbol{\theta})$ in order to minimize the loss function, which is done by running a classical optimization algorithm.
After the quantum circuit has been trained its accuracy is evaluated by analyzing its predictions of the output variables on a test data set, i.e.\ a set of data samples not used during the training procedure.
For a classification task the predicted output label is given by the basis state with the highest probability in the quantum state obtained from the corresponding input data sample, i.e.
\begin{align}
\label{eq:predictedclassification}
y_i^\m{predicted}=\m{argmax}_yP(y\vert \mathbf{x}_i,\boldsymbol{\omega}_\m{opt},\boldsymbol{\theta}_\m{opt})\,,
\end{align}
where $(\boldsymbol{\omega}_\m{opt},\boldsymbol{\theta}_\m{opt})$ are the optimized values of the circuit parameters.
The accuracy of the trained model can then be evaluated by calculating the fraction of correctly predicted labels in the test data set,
\begin{align}
\text{Accuracy} = \frac1{N_\m{test}}\sum_{i\in\mathcal{D}_\text{test}} \delta_{y_i^\m{predicted},y_i}
\,,
\end{align}
where $\mathcal{D}_\text{test}$ denotes the test data set which contains $N_\m{test}$ data samples and $\delta_{a,b}$ is the Kronecker delta.
The MSE loss of Eq.~\eqref{eq:mse} is also suitable to learn regression tasks. In that case the output variables are finite-range continuous variables, $y\in[0,d-1]$. The resulting qudit state of the trained quantum circuit is then a superposition of basis states and the predicted output value for input $\mathbf{x}_i$ is calculated as the expectation value of Eq.~\eqref{eq:averageoutput},
\begin{align}
\label{eq:predictedregression}
\braket{\bar {y}_i}^\m{predicted}=\sum_{y=0}^{d-1} y \,P( y \vert \mathbf{x}_i,\boldsymbol{\omega}_\m{opt},\boldsymbol{\theta}_\m{opt})\,.
\end{align}
For simulations running on actual quantum hardware, the probability distribution of Eq.~\eqref{eq:overlap} is estimated by performing a finite number of measurement shots and recording the measurement results as a histogram. This approximate distribution is then used to calculate the predicted output values of Eqs.~\eqref{eq:predictedclassification} and \eqref{eq:predictedregression}.
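The quantities introduced in this section can be summarized in a few lines of code. The following sketch builds on the \texttt{circuit\_state} helper defined above and evaluates the class probabilities of Eq.~\eqref{eq:overlap}, the MSE loss of Eq.~\eqref{eq:mse} and the classification accuracy; it is again only an illustration of the definitions.
\begin{verbatim}
def probabilities(x, omegas, thetas, d):
    """P(y | x) = |<y| psi(x)>|^2 for the class labels y = 0, ..., d-1."""
    return np.abs(circuit_state(x, omegas, thetas, d)) ** 2

def mse_loss(X, y, omegas, thetas, d):
    """Mean squared error between the average predicted label and the targets."""
    labels = np.arange(d)
    y_bar = np.array([probabilities(x, omegas, thetas, d) @ labels for x in X])
    return np.mean((y_bar - y) ** 2)

def accuracy(X, y, omegas, thetas, d):
    """Fraction of samples for which argmax_y P(y|x) equals the true label."""
    y_pred = np.array([np.argmax(probabilities(x, omegas, thetas, d)) for x in X])
    return np.mean(y_pred == y)
\end{verbatim}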
\section{Experimental setup}\label{sec:experimental_setup}
We perform the training of the quantum circuit utilizing exact numerical simulations of the qudit states and quantum gates.
The parameter values of $\boldsymbol{\omega}$ and $\boldsymbol{\theta}$ are initialized randomly in the range $[-\pi, \pi]$. As the classical optimization algorithm for minimizing the loss function we employ the ADAM optimizer~\citep{kingma_adam_2014} paired with the exact gradients calculated by the automatic differentiation package JAX~\citep{bradbury_jax_2018}. Additionally, we used the L-BFGS-B and Powell optimization approaches from the \texttt{scipy} library \citep{virtanen_scipy_2020}, for which we do not utilize automatic differentiation. Apart from differences in the run time of the training, the results obtained with all approaches were always comparable.
If not stated otherwise, all models are trained on randomly selected data sets with size $N=750$. The performance is evaluated on a separate randomly selected test data set, which contains $\tfrac N3=250$ data samples.
In order to obtain reliable statistics of the results, we run 60 different simulations for each setting, each time randomly varying the data set and the model initialization. To visualize the decision boundaries of the trained classifiers, we utilize the visualization library \textit{orqviz}~\citep{rudolph_orqviz_2021}.
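For illustration, a minimal gradient-free training loop in the spirit of the \texttt{scipy}-based optimization mentioned above is sketched below. It reuses the \texttt{mse\_loss} helper from Sec.~\ref{sec:data_re-uploading}; the concrete choices (Powell, random seed, parameter layout) are illustrative and do not reproduce the ADAM/JAX setup with exact gradients used for the reported results.
\begin{verbatim}
from scipy.optimize import minimize

def train(X, y, d, n_layers, D, seed=0):
    """Fit the (omega, theta) parameters of the Euler-type circuit."""
    rng = np.random.default_rng(seed)
    p0 = rng.uniform(-np.pi, np.pi, size=(4 + D) * n_layers)  # random init in [-pi, pi]

    def unpack(p):
        p = p.reshape(n_layers, 4 + D)
        return p[:, 4:], p[:, :4]                 # omegas (L, D), thetas (L, 4)

    def objective(p):
        omegas, thetas = unpack(p)
        return mse_loss(X, y, omegas, thetas, d)

    result = minimize(objective, p0, method="Powell")  # derivative-free optimizer
    return unpack(result.x)
\end{verbatim}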
In addition to the numerically exact simulation, in Sec.~\ref{sec:ibmq} we re-train and evaluate the qudit quantum circuit on the IBM \texttt{ibmq\_lima} hardware using the qiskit \cite{treinish_qiskitqiskit_2023} framework. This allows us to estimate the impact of gate errors and noise of actual NISQ hardware on the learning performance.
As the IBM hardware naturally operates on qubits, we employ a mapping of the $d$-level qudit Hilbert space to $d-1$ qubits, which is inspired by cold atom systems~\citep{kasper_universal_2022}.
The qudit basis state $\ket{k}$ is represented by the qubit Dicke state $\ket{D^{d-1}_k}$, i.e.\ $\ket k \to\ket{D^{d-1}_k}$ \citep{gasieniec_deterministic_2019}. The $k$-th Dicke state $\ket{D^{d-1}_k}$ of $d-1$ qubits is given by the equal superposition of all states which have $k$ qubits in the state $\ket{1}$ and $d-1-k$ in the state $\ket 0$, i.e.\
\begin{align}
\ket{D^{d-1}_k}=\begin{pmatrix}d-1\\k\end{pmatrix}^{-\tfrac{1}{2}}\sum_{x\in\{0,1\}^{d-1}, hw(x)=k } \ket{x}\,,
\end{align}
where $hw(x)$ indicates the Hamming weight of string $x$, i.e.\ the number of 1's in $x$.
The angular momentum operators for the $d-1$ qubit states are defined as the sum over the single-qubit operators,
$L^\text{tot}_a=\sum_{j=1}^{d-1} L_a^j$ with $a\in\{x,y,z\}$, and they act as described by Eqs.~\eqref{eq:angularmometum} on the Dicke states.
In particular, the Dicke state $\ket{D^{d-1}_k}$ and the qudit state $\ket{k}$ have the same $z$-component of the angular momentum, i.e.\ $ L^\text{tot}_z\ket{D^{d-1}_k}=\tfrac{2k-d+1}{2} \ket{D^{d-1}_k}$.
In this representation the squeezing operation consists of all pairwise qubit interactions,
\begin{align}
L^\text{tot}_{z^2}=\big(L^\text{tot}_{z}\big)^2=\sum_{i,j} L_z^i L_z^j
\end{align}
which is implemented as two-qubit $zz$-rotation gates on the hardware~\citep{treinish_qiskitqiskit_2023}.
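The Dicke states entering this encoding can be written down directly, as in the following small sketch (an illustration of the state vectors only; deterministic preparation circuits are discussed in \citep{gasieniec_deterministic_2019}).
\begin{verbatim}
from itertools import combinations

def dicke_state(n_qubits, k):
    """Equal superposition of all n-qubit basis states with Hamming weight k."""
    vec = np.zeros(2 ** n_qubits)
    for ones in combinations(range(n_qubits), k):  # positions of qubits in |1>
        vec[sum(1 << q for q in ones)] = 1.0
    return vec / np.linalg.norm(vec)

# the basis states |k> of a d = 5 qudit mapped to Dicke states of d - 1 = 4 qubits
encoded_basis = [dicke_state(4, k) for k in range(5)]
\end{verbatim}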
On the qubit device, we start from the optimized parameters found by the training with the exact simulation and re-train the model on the actual hardware. There, we perform 512 measurement shots for each qiskit circuit evaluation.
For comparison we also report the accumulated results on classification tasks of standard classical machine learning approaches, namely the \texttt{scikit-learn} \citep{pedregosa_scikit-learn_2018} implementation of the random forest (RF) classifier with 100 estimators, a $k$-nearest neighbor classifier with $k=3$ ($k$nn) and a support vector classifier (SVC). The performance of these approaches was always comparable and we report the cumulative results of running each algorithm 50 times on randomized data sets.
\section{Results}\label{sec:results}
\begin{figure}
\caption{Results of trained data re-uploading models with structure of Eqs.~\eqref{eq:DRUL_euler1}-\eqref{eq:DRUL_euler3} on the one-dimensional regression problem of Eq.~\eqref{eq:simple1D}, for $L=1$ (left) and $L=2$ (right) layers.}
\label{Fig:03}
\end{figure}
\subsection{Expressivity in one dimension}
First, we examine the expressivity of the data re-uploading circuit with a single qudit on a continuous one-dimensional regression problem analogous to \cite{schuld_effect_2021}.
We train a model with one qutrit, i.e.\ a $d=3$ qudit, to learn the simple function
\begin{align}\label{eq:simple1D}
f(x)=\tfrac12 \big(\cos{(1.5 x)} + \cos{(2.5 x)}\big)
\end{align}
with \(x \in [-\pi, \pi]\).
As the model output we use the expectation value of Eq.~\eqref{eq:averageoutput} but shift it to match the data range, i.e.\ $f_i^\m{predicted}=\braket{\bar{y_i}}-1$. For training the quantum circuit we employ the
MSE loss function of Eq.~\eqref{eq:mse} and use a training set of 100 linearly distributed samples.
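A minimal setup of this regression experiment, reusing the training sketch from Sec.~\ref{sec:experimental_setup}, reads as follows; the hyperparameters shown here are illustrative and not necessarily those used for the figure.
\begin{verbatim}
x_train = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)  # 100 linearly spaced samples
f_true = 0.5 * (np.cos(1.5 * x_train[:, 0]) + np.cos(2.5 * x_train[:, 0]))
y_train = f_true + 1.0                        # shift targets into the qutrit range [0, 2]

omegas, thetas = train(x_train, y_train, d=3, n_layers=2, D=1)
labels = np.arange(3)
f_pred = np.array([probabilities(x, omegas, thetas, 3) @ labels
                   for x in x_train]) - 1.0   # shift the expectation value back
\end{verbatim}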
We show the results of a $L=1$ layer model in the left panel of Fig.~\ref{Fig:03}.
It is evident that this model is not able to learn the function properly. This confirms the previous insight for qubit circuits~\citep{schuld_effect_2021}, that data re-uploading models learn truncated Fourier series, and that a circuit containing only one angle parametrized by the input can only encode one Fourier component.
In the right panel we show the results of a model with $L=2$ layers. As expected, this model is now able to learn the function with two Fourier coefficients exactly. The derivation is analogous to the qubit case~\citep{schuld_effect_2021}, but is not shown in this work.
Apart from the results obtained by numerically exact simulations, we also include results from the evaluation on the IBM \texttt{ibmq\_lima} hardware (green dots in the plot). These predicted values are generally noisy, which results from the inherent errors in the quantum circuit, as well as from the shot noise due to the finite number of samples used to estimate the probability distribution of Eq.~\eqref{eq:overlap}.
\subsection{Two-dimensional classification tasks}
\begin{figure*}
\caption{Results of the seven-class horizontal stripe classification problem with circuit structure of Eqs.~\eqref{eq:DRUL_euler1}-\eqref{eq:DRUL_euler3}. Upper row: class labels aligned with the qudit states; lower row: class labels randomly assigned to the qudit states.}
\label{fig:StraightLines}
\end{figure*}
Inspired by the original benchmark of the data re-uploading algorithm~\citep{perez-salinas_data_2020}, we investigate the performance of the qudit circuit with the structure of Eqs.~\eqref{eq:DRUL_euler1}-\eqref{eq:DRUL_euler3} on various two-dimensional multi-class classification problems.
The data samples are located on a two-dimensional square $\mathbf{x}=(x_1,x_2) \in [-1, 1]^2$, where each sample is associated with one out of $d$ classes.
In the first problem setting, the classes are arranged in parallel horizontal stripes.
Fig.~\ref{fig:StraightLines} shows the results for seven classes, where we used a $d=7$ qudit to represent the classes in the quantum circuit.
In the upper row of the figure, we show results for the case where we align the qudit states with the labels in such a way that the $z$-components of the spin are ordered according to the class labels. In that case, adjacent classes are represented by qudit states with adjacent $z$-spin values. Under these circumstances, the model has a strong inductive bias towards the data set.
This is illustrated in the leftmost figure where we draw the qudit states on the generalized Bloch sphere.
The lowest qudit state $\ket 0$ is associated with the class of the lowest (blue) stripe, the second lowest qudit state $\ket 1$ with the second lowest stripe, and so on.
The corresponding results show that the data re-uploading circuit can predict this data set almost perfectly with three or more layers, $L\geq3$ (blue graph in top right plot).
Remarkably, the performance of the data re-uploading circuit is even better than the classical machine learning models shown on the far right of the right panel.
However, the learned classes and decision boundaries, as shown in the middle panels of the figure, can still differ from the ground truth. This is due to the small size of the training data set, which necessarily leads to small random variations of the learned decision boundaries, depending on the precise location of the training data close to the decision boundaries.
The importance of the squeezing operation $R_{z^2}$ in the $W$ operator of Eq.~\eqref{eq:DRUL_euler3} is highlighted by observing the massively degraded performance of the same circuits without this gate (green graph in the top right panel of Fig.~\ref{fig:StraightLines}). Without squeezing, the median accuracy saturates at around $0.7$ while with squeezing the median accuracy reaches $0.95$ and higher.
This performance difference is attributed to the fact that the squeezing operator is necessary to allow for the representation of arbitrary unitary operations in the qudit Hilbert space.
\begin{figure*}
\caption{Learned classification regions of a single qudit classifier circuit with $L=5$ layers trained on the first two (a), three (b) and five (c) classes (i.e.\ digits) of the MNIST data set, which was compressed down to two dimensions using PCA.
Each background color indicates one region associated with the same label. The training data samples are shown in the plots and colored according to their true label.
(d) Accuracy of the qudit classifier as a function of the number of layers for various numbers of classes. RF denotes the random forest classifier for comparison.
}
\label{fig:orqviz_mnist_pca}
\end{figure*}
The second row of Fig.~\ref{fig:StraightLines} shows the performance when the class labels are randomly assigned to the qudit states.
This removes the inductive bias of the model and makes the problem much harder to learn for the qudit quantum circuit. Consequently, the accuracy drops significantly and many more layers are necessary to recover the performance level of the scenario with aligned labels.
The plots of the accuracies also show the accumulated results of the three classical machine learning classifiers RF, SVC and $k$nn for comparison. The performance of the quantum circuits including squeezing with randomized assignment of qudit states to labels is comparable to the classical approaches. For aligned labels the quantum circuits even slightly outperform the classical approaches.
We also considered several other two-dimensional multi-class classification problems and trained data re-uploading models on several horizontal stripe data sets with varying number of stripes (i.e.\ classes), as well as data sets where the stripes are not horizontal but rotated by an angle.
We also investigated the models on data sets where the class regions are given by concentric rings with approximately the same width and where the center of the rings is somewhere in the plane.
The results (not shown) on those models were always qualitatively similar to the ones presented here. Squeezing is always necessary to achieve good performance and when the label order is aligned with the qudit states, the performance is consistently higher.
The overall performance was slightly reduced for the tilted stripes and the concentric rings and the variance in the results was also slightly larger as compared to the horizontal stripes cases presented here.
\subsection{Classifying MNIST data}
As the next application example, we train the model of Eqs.~\eqref{eq:DRUL_euler1}-\eqref{eq:DRUL_euler3} on subsets of a \texttt{scikit-learn}~\citep{pedregosa_scikit-learn_2018} version of the MNIST handwritten digits data set~\citep{lecun_mnist_2005}.
This version includes down-sampled images with $8\times 8$ pixels instead of the $28\times 28$ pixels of the original MNIST data set.
Since it is computationally very demanding to encode $8\times8=64$-dimensional data samples into the quantum circuit, we reduce the input dimension further using principal component analysis (PCA)~\citep{jolliffe_principal_2016}.
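This preprocessing can be done with standard \texttt{scikit-learn} tools, as in the sketch below; the concrete choices (class subset, test split, random seed) are illustrative assumptions and not necessarily the exact settings used for the figures.
\begin{verbatim}
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

digits = load_digits()                 # scikit-learn digits: 8x8 images, 64 features
mask = digits.target < 5               # e.g. keep only the first five digit classes
X2 = PCA(n_components=2).fit_transform(digits.data[mask])
y = digits.target[mask]
X_train, X_test, y_train, y_test = train_test_split(X2, y, test_size=0.25,
                                                    random_state=0)
\end{verbatim}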
In Fig.~\ref{fig:orqviz_mnist_pca}(a)-(c) we visualize the classification boundaries of an $L=5$ layer circuit for two, three and five classes, where the input dimension is reduced to $D=2$ using a PCA. The plots show that the data re-uploading classifier is able to learn highly non-linear decision boundaries. It can also be seen that the classifier tends to produce disconnected classification regions due to the oscillatory nature of parametrized quantum circuits. The statistics of the accuracy for various numbers of classes are shown in Fig.~\ref{fig:orqviz_mnist_pca} (d) as a function of the number of quantum circuit layers.
It can be observed that the accuracy, as expected, increases with more layers, and that increasing the number of classes in the problem reduces the accuracy. For more classes, the reduction to two dimensions using PCA leads to more spatial overlap between classes, which makes the problems inherently noisy and limits the overall achievable accuracy. The overlapping classes can be directly observed in panel (c).
Additionally, panel (d) shows the result from 50 runs of a random forest (RF) classifier.
The accuracy of the quantum-based classifier approaches the values of the RF model with increasing number of layers.
However, the variance in the results is significantly larger for the data re-uploading circuit. One reason for this is the increasingly complex classical optimization problem as the number of parameters in the quantum circuit grows.
\subsection{Qudit vs Qubit}
The quantum circuits we employed up to this point used a single $d$-level qudit to solve classification problems with $d$ classes, where each basis state encodes one class label.
While it may appear less natural, multi-class classification problems can also be learned with a data re-uploading circuit operating with a single qubit~\citep{perez-salinas_data_2020}.
In those approaches, the different classes are represented by single-qubit quantum states which are chosen to be maximally orthogonal. Unless a two-class classification problem is considered, where the label states are $\ket{0}$ and $\ket{1}$, these states cannot be fully orthogonal to each other. As a $d=6$ class example, we choose the eigenstates of the three spin operators, i.e.\
$\ket{y} \in \{\ket{0}, \ket{1}, \tfrac{1}{\sqrt{2}}(\ket{0}+\ket{1}),\tfrac{1}{\sqrt{2}}(\ket{0}-\ket{1}),\tfrac{1}{\sqrt{2}}(\ket{0}+i\ket{1}),\tfrac{1}{\sqrt{2}}(\ket{0}-i\ket{1})\}$,
as the maximally orthogonal label states.
Note that in this situation, the overlap of Eq.~\eqref{eq:overlap} as a function of the labels is no longer a proper probability distribution since $\sum_y P(y)>1$.
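This can be checked directly: for the six label states above, the overlaps with any normalized qubit state sum to $3$, as the following small illustration shows.
\begin{verbatim}
s = 1 / np.sqrt(2)
label_states = np.array([[1, 0], [0, 1],              # |0>, |1>
                         [s, s], [s, -s],             # (|0> +/- |1>)/sqrt(2)
                         [s, 1j * s], [s, -1j * s]],  # (|0> +/- i|1>)/sqrt(2)
                        dtype=complex)

psi = np.array([1.0, 0.0], dtype=complex)   # a normalized qubit state, here |0>
P = np.abs(label_states.conj() @ psi) ** 2  # overlaps |<y|psi>|^2 with the six labels
print(P.sum())                              # 3.0 > 1: not a probability distribution
\end{verbatim}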
The natural question which arises is whether there is any difference when using qubit or qudit data re-uploading circuits for multi-class classification problems. Therefore, we study and compare the performance of a single qubit as well as a single qudit on several problems.
For the single-qubit approach, we utilize the simplified data re-uploading structure of Eq.~\eqref{eq:simleCirc}.
Here, the squeezing operator can be removed from the circuit since it is proportional to the identity, $L_{z^2}=\tfrac{\mathds{1}}{4}$, and therefore only applies a global phase.
\begin{figure}
\caption{
Comparison of $d=6$ multi-class classification performance of qubit and qudit data re-uploading circuits with the simplified structure of Eq.~\eqref{eq:simleCirc}. Left: the tilted-stripes data set; right: the MNIST data set reduced to six classes and two input dimensions.}
\label{fig:qubitVSqudit}
\end{figure}
Fig.~\ref{fig:qubitVSqudit} shows the results where the qubit and qudit models were trained on two different classification problems with $d=6$ classes and $D=2$ input dimensions. The left panel of Fig.~\ref{fig:qubitVSqudit} shows the classification accuracy as a function of the circuit layers for the tilted stripes data set, where the horizontal stripes from Fig.~\ref{fig:StraightLines} are rotated by an angle of $27^\circ$.
It can be seen that the accuracies of the qubit and qudit circuits are comparable for the case where the class labels were randomly assigned to the qudit basis states.
However, for labels aligned with the qudit states, the performance is consistently higher for the qudit circuits.
The right panel shows the results of the qubit and qudit data re-uploading circuits for the MNIST handwritten digits data set limited to six randomly selected digits (i.e.\ classes) and where each image is reduced to two input dimensions using a PCA.
Here, qubit and qudit approaches are comparable in their performance, while the qubit architectures seem to outperform the qudit ones for smaller circuit depths.
These results highlight the importance and potential strength of the bias exhibited by the qudit-based model. When applied mindfully, this bias can be leveraged to enhance training performance and possibly generalization capabilities.
In our simulations, the loss function and the classification accuracy are calculated numerically exactly with full access to the quantum state.
When running on real quantum hardware, this cannot be done and one needs to prepare and sample, i.e.\ measure, the quantum state multiple times in order to estimate the probabilities/overlaps of Eq.~\eqref{eq:overlap}.
For the qudit system, where each class label is associated with an orthogonal basis state in the measurement basis, this is straightforward since the statistics of the measurement outcomes directly translate into estimates of the probabilities.
However, for qubit systems with non-orthogonal label states (i.e.\ for $d>2$), this necessitates full quantum state tomography, or techniques such as classical shadows~\citep{huang_predicting_2020} with appropriately chosen measurements. Therefore, more measurement samples will typically be required for qubit systems.
Additionally, and more importantly, when the different label states overlap, the difference between the overlap with the true label state and the overlap with an undesired label state is less pronounced.
For qudits, this difference is always 1 since $P_\text{qudit}(y\vert y')=\delta_{y,y'}$ for label states $y,y'$ (here, basis states). In the qubit representation, the difference is $\leq1$ since $P_\text{qubit}(y\vert y')=\delta_{y,y'} + \sum_{y''\neq y'} c_{y''} \delta_{y,y''}$ with $c_{y}\geq 0$.
For example, in the case of $d=6$ classes considered here, $P_\text{qubit}(0\vert y')=\vert\braket{ 0\vert y' }\vert^2=\tfrac{1}{2}$ for $y'\in\{2,3,4,5\}$.
This leads to a significantly reduced training signal in the overlap loss function in Eq.~\eqref{eq:overlap}, since even contributions from incorrect label states have non-zero overlap and therefore reduce the loss.
Obtaining the overlaps using a finite number of measurement samples necessarily adds noise to the estimates,
which in turn makes the problem of discriminating between correctly and wrongly predicted labels more difficult.
As a consequence,
the shot noise
is expected to have a more pronounced negative effect on the training performance for multi-class problems when overlapping label states are used.
\subsection{Circuit structure and basic operators}
For qubit circuits, the spin-$\frac12$ Pauli matrices make it possible to represent arbitrary single-qubit unitary operations inside each layer.
For $d$-level qudits, one instead needs to include all $d^2-1$ generators of the special unitary group in $d$ dimensions, $SU(d)$, to achieve the same arbitrary control.
However, as indicated above and shown in \cite{kasper_universal_2022} and \cite{giorda_universal_2003}, the three operators $L_x$, $L_z$ and $L_{z^2}$ are sufficient to represent any unitary operation by repeated finite rotations with multiple layers.
This leads to the question of whether there is a benefit when more operators than this reduced set of three are used in the data re-uploading circuit. We test this hypothesis by adding the following two types of operators,
\begin{align}
\label{eq:extOps}
X_{j}&=\ket{0}\bra{0} - \ket{j}\bra{j} \qquad (1\leq j\leq d-1) \nonumber\\
Y_{j}&=\ket{0}\bra{j} + \ket{j}\bra{0} \, .
\end{align}
The motivation behind this choice is that these operators directly couple the initial state $\ket{0}$ to all other states $\ket{j}$ and thus may allow for more efficient learning. We then use the simplified structure of Eq.~\eqref{eq:simleCirc} and replace the squeezing operator by the sum over all operators in Eq.~\eqref{eq:extOps}.
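The construction of these operators is straightforward; the following sketch is an illustration only.
\begin{verbatim}
def extended_ops(d):
    """X_j = |0><0| - |j><j| and Y_j = |0><j| + |j><0| for j = 1, ..., d-1."""
    e = np.eye(d)
    ops = []
    for j in range(1, d):
        ops.append(np.outer(e[0], e[0]) - np.outer(e[j], e[j]))   # X_j
        ops.append(np.outer(e[0], e[j]) + np.outer(e[j], e[0]))   # Y_j
    return ops
\end{verbatim}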
The results of the three types of circuit structures on the six-class reduced MNIST data set are shown in Fig.~\ref{fig:extOps}.
The left panel shows the accuracies as a function of the number of circuit layers. First, one can observe that there is no significant difference between the simplified structure of Eq.~\eqref{eq:simleCirc} and the circuit structure of Eqs.~\eqref{eq:DRUL_euler1}-\eqref{eq:DRUL_euler3}. On the other hand, the performance of the circuits with the extended set of operators is significantly better.
At first glance this seems to support the hypothesis that adding more operators to the set of generators does in fact enhance the trainability of qudit quantum circuits.
However, adding more operators also introduces more free parameters to the quantum circuit for a given number of layers, which allows for more flexibility in the trainable circuit.
When comparing the accuracies as a function of the number of free parameters, as shown in the right panel of Fig.~\ref{fig:extOps}, one can observe that the performance is statistically the same for all circuit structures and numbers of operators.
Therefore, we conclude that it is not the number of operators which determines the trainability, but rather the number of trainable parameters.
This reveals a tradeoff between the complexity of each layer, i.e.\ the number of operators and free parameters in each layer, and the total number of layers which are necessary to achieve a certain accuracy.
Including more operators in each layer allows one to achieve good performance with fewer layers. On quantum hardware, where gate errors play an important role, the freedom to choose the elementary operations provides additional flexibility. And because different hardware may natively support different gate sets, this allows one to choose configurations which result in running variational quantum circuits with fewer errors.
For qubits, this is not possible since the three Pauli matrices form a basis and no additional operators can be constructed.
\begin{figure}
\caption{
Comparison of results of qudit data re-uploading circuits of different structures on the $d=6$ class MNIST data set with input dimension $D=2$.
The left panel shows the results as a function of the number of layers, while the right panel shows them as a function of the number of free parameters in the circuit (note that the $x$-axis is not to scale in the right plot).
}
\label{fig:extOps}
\end{figure}
\subsection{Re-training on IBM hardware}\label{sec:ibmq}
Finally, we compare the classification performance of the model of Eqs.~\eqref{eq:DRUL_euler1}-\eqref{eq:DRUL_euler3} with and without the squeezing operation between numerically exact results and the case where the re-training and evaluation is done on actual \texttt{ibmq\_lima} hardware. For the qubit implementation of the qudit circuits we use the Dicke-state encoding described in Sec.~\ref{sec:experimental_setup}.
The left panel of Fig.~\ref{fig:MNIST} shows the accuracy as a function of the quantum circuit layers for the first five digits, i.e.\ qudit dimension $d=5$, and with input dimension $D=5$.
The numerically exact simulations show the same trend as previously described for two input dimensions and approach the values of the traditional RF classifier with increasing circuit depth. However, when re-trained and evaluated on actual hardware, the performance is comparable only for the first two layers; after that it saturates and then decreases strongly.
This is due to the noise and infidelities in the actual hardware realization of the entangling gates used to implement the squeezing operations.
Removing the squeezing operations from the circuit reveals this explicitly, since the accuracies of both approaches are then comparable as shown in the right panel of Fig.~\ref{fig:MNIST}.
\begin{figure}
\caption{
Results for the classification of the first five digits of the MNIST data set with input dimension $D=5$ with (left) and without (right) squeezing as a function of the number of layers in the quantum circuit operating with a $d=5$ qudit. The numerically exact simulations are shown in blue, whereas the results obtained on the \texttt{ibmq\_lima} hardware are shown for comparison.}
\label{fig:MNIST}
\end{figure}
This is an explicit example of a situation where it is more efficient to increase the local Hilbert space dimension and work with qudits, instead of increasing the number of qubits.
\section{Discussion}
In this work, we demonstrated that multi-level qudit systems are well-suited to be applied to multi-class classification problems, as each qudit basis state can naturally encode one class of the data. We implemented data re-uploading quantum circuits, where we used the angular momentum and the squeezing operators to build a universal gate set. We illustrated the capabilities of the qudit-based approach on regression and classification benchmarks. Owing to the ability to learn highly non-linear classification boundaries, the models were able to successfully learn on various data sets and achieve performances comparable to standard classical machine learning models.
Interestingly, the achievable performance was strongly dependent on the qudit states representing the class labels and their relation to the structure of the labels in the data set.
This intrinsic bias due to the label alignment can boost the performance of qudit circuits substantially, which might be beneficial for certain types of application problems. One may therefore conclude that there exist tasks where variational circuits based on qudits might be the better choice compared to their qubit counterparts.
We also studied the influence of the choice of the elementary operations and the layer structure in qudit quantum circuits. There
we found that the structure, i.e.\ the particular sequence and types of rotations, does not appear to have a significant influence on the performance, as long as the number of parameters is accounted for.
This reveals a trade-off between the number of elementary operators in the gate set and the number of layers needed to achieve the same performance. In some situations it might be advantageous to utilize the minimum set of operators and to employ more layers, while in others, more operators and fewer layers might be the better choice.
This is especially interesting on quantum hardware where different gates typically have different error rates, and the freedom to choose between several configurations may allow to minimize the influence of errors on the resulting performance.
It should be noted that we did not employ any of the more sophisticated techniques to improve the performance of single-qudit data re-uploading circuits, for example, employing different classical optimizers~\citep{deller_quantum_2022,lavrijsen_classical_2020}, fine-tuning hyperparameter settings~\citep{pascal_hyperparameter_2022}, and finding better parameter initializations~\citep{sack_quantum_2021,grant_initialization_2019,egger_warm-starting_2021}. All of these techniques have been shown to be very beneficial in related contexts and can be used in the future to improve the current approach.
A necessary next step is to investigate the performance of multi-qudit circuits, since here we only investigated single-qudit circuits.
Interesting research questions include how the intrinsic bias of a single qudit influences a multi-qudit circuit and its learning performance, and what the role of the set of elementary operators is in these cases. Crucially, the effect of multi-qudit entangling gates needs to be elucidated.
In summary, our results and discussion support the conclusion that qudit systems offer a promising alternative quantum computing architecture. There are several differences to qubit-based systems which could be leveraged and thus provide a practical benefit for quantum algorithms and quantum machine learning tasks in particular.
\bmhead{Acknowledgments}
NLW acknowledges funding from the Honda Research Institute Europe GmbH for attending the conference on Quantum Techniques in Machine Learning (QTML) 2022. NLW would like to thank the Van der Waals-Zeeman Institute in Amsterdam for the extended hospitality.
SS acknowledges funding by the European Union under Horizon Europe Programme -- Grant Agreement 101080086 -- NeQST.
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Climate, Infrastructure and Environment Executive Agency (CINEA). Neither the European Union nor the granting authority can be held responsible for them.
\section*{Declarations}
\bmhead{Conflict of interest} The authors declare no competing interests.
\end{document} |
\begin{document}
\title[Nonlinear Schr\"odinger equations with third order dispersion]{Sharp global well-posedness for the cubic nonlinear Schr\"odinger equation with third order dispersion}
\author{X. Carvajal}
\address{Instituto de Matem\'atica, UFRJ, 21941-909, Rio de Janeiro, RJ, Brazil}
\email{[email protected]}
\author{M. Panthee}
\address{Department of Mathematics, University of Campinas\\
13083-859, Campinas, S\~ao Paulo, SP, Brazil}
\email{[email protected]}
\maketitle
\begin{abstract} We consider the
initial value problem (IVP) associated to the cubic nonlinear Schr\"odinger equation with third-order dispersion
\begin{equation*}
\partial_{t}u+i\alpha
\partial^{2}_{x}u- \partial^{3}_{x}u+i\beta|u|^{2}u = 0,
\quad x,t \in \mathbb R,
\end{equation*}
for given data in the Sobolev space $H^s(\mathbb R)$. This IVP is known to be locally well-posed for given data with Sobolev regularity $s>-\frac14$ and globally well-posed for $s\geq 0$ \cite{XC-04}. For given data in $H^s(\mathbb R)$, $0>s> -\frac14$ no global well-posedness result is known. In this work, we derive an {\em almost conserved quantity} for such data and obtain a sharp global well-posedness result. Our result answers the question left open in \cite{XC-04}.
\end{abstract}
\noindent
Key-words: Schr\"{o}dinger equation, Korteweg-de Vries equation, Initial value problem, Local and global well-posedness, Sobolev spaces, Almost conservation law.
\section{Introduction}
In this work we consider the initial value problem (IVP) associated to the cubic nonlinear Schr\"odinger equation with third-order dispersion
\begin{equation}\label{e-nls}
\begin{cases}
\partial_{t}u+i\alpha
\partial^{2}_{x}u- \partial^{3}_{x}u+i\beta|u|^{2}u = 0,
\quad x,t \in \mathbb R, \\
u(x,0) = u_0(x),
\end{cases}
\end{equation}
where $\alpha,\beta\in \mathbb R $ and $u = u(x, t)$ is a complex valued function.
The equation in \eqref{e-nls}, also known as the extended nonlinear Schr\"odinger (e-NLS) equation, arises in the description of several physical phenomena, such as nonlinear pulse propagation in an optical fiber and nonlinear modulation of a capillary gravity wave on water; for more details we refer to \cite{Agr-07}, \cite{XC-04}, \cite{DT-21}, \cite{HK-81}, \cite{MT-18}, \cite{Oikawa-93}, \cite{Tsutsumi-18} and references therein. In some literature, this model is also known as the third order Lugiato-Lefever equation \cite{MT-17} and can also be considered as a particular case of the higher order nonlinear Schr\"odinger (h-NLS) equation proposed by Hasegawa and Kodama in \cite{[H-K]} and \cite{[Ko]} to describe
the nonlinear propagation of pulses in optical fibers
\begin{equation*}\label{honse}
\begin{cases}
\partial_{t}u-i\alpha
\partial^{2}_{x}u+ \partial^{3}_{x}u-i\beta|u|^{2}u+\gamma
|u|^{2}\partial_{x}u+\delta\partial_{x}(|u|^2)u = 0,
\quad x,t \in \mathbb R,\\
u(x,0) = u_0(x),
\end{cases}
\end{equation*}
where $\alpha,\beta, \gamma\in \mathbb R $, $ \delta
\in \mathbb{C}$ and $u = u(x, t)$ is a complex valued function.
The well-posedness issues and other properties of solutions of the IVP \eqref{e-nls} posed on $\mathbb R$ or $\mathbb T$ have been extensively studied by several authors, see for example \cite{XC-04}, \cite{DT-21}, \cite{MT-17}, \cite{OTT-19}, \cite{CP-22} and references therein. As far as we know, the best local well-posedness result for the IVP~\eqref{e-nls} with given data in the $L^2$-based Sobolev spaces $H^s(\mathbb R)$, $s>-\frac14$, is obtained by the first author in \cite{XC-04}. More precisely, the following result was obtained in \cite{XC-04}.
\begin{theorem} \cite{XC-04}\label{LocalTh}
Let $u_0\in H^s(\mathbb R)$ and $s>-\frac14$. Then there exist
$\delta = \delta(\|u_0\|_{H^s})$ (with $\delta(\rho)\to \infty$ as $\rho\to 0$) and a unique solution to the IVP \eqref{e-nls} in the time interval $[0, \delta]$. Moreover, the solution satisfies the estimate
\begin{equation}\label{loc-est-2}
\|u\|_{X_{\delta}^{s, b}}\lesssim \|u_0\|_{H^s},
\end{equation}
where the norm $\|u\|_{X_{\delta}^{s, b}}$ is as defined in \eqref{xsb-rest}.
\end{theorem}
To obtain this result, the author in \cite{XC-04} derived a trilinear estimate
\begin{equation}
\label{tri-xc}
\|u_1u_2\bar{u_3}\|_{X^{s,b'}}\lesssim \prod_{j=1}^3\|u_j\|_{X^{s,b}}, \quad 0\geq s>-\frac14,\;\;
b>\frac{7}{12}, \;\: b'<\frac{s}3,
\end{equation}
where, for $s,b\in\mathbb R$, $X^{s,b}$ is the Fourier transform restriction norm space introduced by Bourgain \cite{B-93} with norm
\begin{equation}
\label{X-norm}
\|u\|_{X^{s,b}}:=\|\langle\xi\rangle^s\langle\tau-\phi(\xi)\rangle^b\widehat{u}(\xi, \tau)\|_{L^2_{\xi}L^2_{\tau}},
\end{equation}
where $\langle x\rangle:=1+|x|$ and $\phi(\xi) $ is the phase function associated to the e-NLS equation \eqref{e-nls} (for the detailed definition, see \eqref{xsb} below). The author in \cite{XC-04} also showed that the crucial trilinear estimate \eqref{tri-xc} fails for $s<-\frac14$. Further, it has been proved that the data-to-solution map fails to be $C^3$ at the origin if $s<-\frac14$, see Theorem 1.3, iv) in \cite{XC-13}. In this sense, the local well-posedness result given by Theorem \ref{LocalTh} is sharp for this method.
\begin{remark}
We note that the following quantity
\begin{equation}
\label{conserved-1}
E(u): = \int_{\mathbb R}|u(x,t)|^2 dx,
\end{equation}
is conserved by the flow of \eqref{e-nls}. Using this conserved quantity, the local solution given by Theorem \ref{LocalTh} can be extended globally in time, thereby proving the global well-posedness of the IVP \eqref{e-nls} in $H^s(\mathbb R)$, whenever $s\geq 0$.
\end{remark}
Looking at the local well-posedness result given by Theorem \ref{LocalTh} and the Remark above, it is clear that there is a gap between the local and the global well-posedness results. In other words, one may ask the following natural question: can the local solution given by Theorem \ref{LocalTh} be extended globally in time for $ 0>s>-\frac14$?
The main objective of this work is to answer the question raised in the previous paragraph, which has been left open since \cite{XC-04} in 2004. In other words, the main focus of this work is to investigate the global well-posedness issue of the IVP \eqref{e-nls} for given data in the low regularity Sobolev spaces $H^s(\mathbb R)$, $0>s>-\frac14$. No conserved quantities are available for data with regularity below $L^2$ that would allow the classical method to extend the local solution globally in time. To overcome this difficulty we use the famous {\em I-method} introduced by Colliander et al \cite{CKSTT1, CKSTT2, CKSTT} and derive an {\em almost conserved quantity} to obtain the global
well-posedness result for given data in the low regularity Sobolev spaces. More precisely, the main result of this work is the following.
\begin{theorem}
\label{Global-Th}
The IVP \eqref{e-nls} is globally well-posed for any initial data $ u_0\in H^s(\mathbb R)$, $s>-\frac14$.
\end{theorem}
\begin{remark}
In the proof of this theorem, an almost conservation of the second generation of the modified energy, viz.,
$$|E^2_I(u(\delta))|\leq |E^2_I(\phi)| + C N^{-\frac74}\|Iu\|_{X^{0, \frac12+}_{\delta}}^6$$
plays a crucial role. The decay $N^{-\frac74}$ is more than enough to get the required result. Behind the proof of the almost conservation law there are decay estimates of the multipliers involved. The structure of the multipliers in our case is different from that of the ones appearing in the case of the KdV or the NLS equations, see for example \cite{XC-06}, \cite{CKSTT2} and \cite{CKSTT}. This fact creates some extra difficulties, as can be seen in the proof of Proposition \ref{prop3.3}.
\end{remark}
The well-posedness issues of the IVP \eqref{e-nls} posed on the periodic domain $\mathbb T:=\mathbb R/2\pi\mathbb Z$ are also considered by several authors in recent time. The authors in \cite{MT-17} studied the IVP \eqref{e-nls} considering that $\frac{2\alpha}3\notin \mathbb Z$ with data $u_0\in L^2(\mathbb T)$ and obtained the global existence of the solution. They also obtained the global attractor in $L^2(\mathbb T)$. The local existence result obtained in \cite{MT-17} is further improved in \cite{MT-18} for given data in the Sobolev spaces $H^s(\mathbb T)$ with $s>-\frac16$ (see also \cite{Tsutsumi-18}) with the same consideration.
Taking into consideration the results in \cite{MT-17} and \cite{MT-18}, there is a gap between the local and the global well-posedness results in the periodic case too. In other words, one has the following natural question: can the local solution to the IVP \eqref{e-nls} posed on the periodic domain $\mathbb T$ be extended globally in time for given data in $H^s(\mathbb T)$, $0>s>-\frac16$? Although this is a very good question, deriving {\em almost conserved quantities} in the periodic setting is more demanding and we will not consider it here.
In recent time, other properties of solutions of the IVP \eqref{e-nls} have also been studied in the literature. The authors in \cite{OTT-19} proved that the mean-zero Gaussian measures on Sobolev spaces $H^s(\mathbb T)$ are quasi-invariant under the flow whenever $s >\frac34$. This result is further improved in \cite{DT-21} on Sobolev spaces $H^s(\mathbb T)$ for $s>\frac12$. Quite recently, in \cite{CP-22}, we considered the IVP \eqref{e-nls} with given data in the modulation spaces $M_s^{2,p}(\mathbb R) $ and obtained the local well-posedness result for $s> -\frac14$ and $2\leq p<\infty$.
Now we present the organization of this work. In Section
\ref{sec-2}, we define function spaces and provide some preliminary results. In Section \ref{sec-3} we introduce multilinear estimates and an {\em almost conservation law} that is fundamental to prove the main result of this work. In Section \ref{sec-4} we provide the proof of the main result of this paper.
We finish this section recording some standard notations that will be used throughout this work.\\
\noindent
{\textbf{Notations:}} We use $c$ to denote various constants whose exact values are immaterial and may
vary from one line to the next. We use $A\lesssim B$ to denote an estimate
of the form $A\leq cB$ and $A\sim B$ if $A\leq cB$ and $B\leq cA$. Also, we
use the notation $a+$ to denote $a+\epsilon$ for $0< \epsilon \ll 1$.
\section{Function spaces and preliminary results}\label{sec-2}
We start this section by introducing some function spaces that will be used throughout this work. For $f:\mathbb R\times [0, T] \to \mathbb R$ we define the mixed
$L_x^pL_T^q$-norm by
\begin{equation*}
\|f\|_{L_x^pL_T^q} = \left(\int_{\mathbb R}\left(\int_0^T |f(x, t)|^q\,dt
\right)^{p/q}\,dx\right)^{1/p},
\end{equation*}
with the usual modifications when $p = \infty$. We replace $T$ by $t$ in this notation when the time integration is taken over the whole real line $\mathbb R$.
We use $\widehat{f}(\xi)$ to denote the Fourier transform of $f(x)$ defined by
$$
\widehat{f}(\xi) = c \int_{\mathbb R}e^{-ix\xi}f(x)dx$$
and
$\widetilde{f}(\xi, \tau)$ to denote the space-time Fourier transform of $f(x,t)$ defined by
$$
\widetilde{f}(\xi, \tau) = c \int_{\mathbb R^2}e^{-i(x\xi+t\tau)}f(x,t)dxdt.$$
We use $H^s$ to denote the $L^2$-based Sobolev space of order $s$ with norm
$$\|f\|_{H^s(\mathbb R)} = \|\langle \xi\rangle^s \widehat{f}\|_{L^2_{\xi}},$$
where $\langle \xi\rangle = 1+|\xi|$.
In order to simplify the presentation we consider the following gauge transform considered in \cite{Takaoka}
\begin{equation}\label{gauge}
u(x,t) := v(x-d_1t, -t)e^{i(d_2x+d_3t)}.
\end{equation}
Using this transformation the IVP \eqref{e-nls} turns out to be
\begin{equation}\label{e-nls1}
\!\!\!\!\!\!\!\begin{cases}
\partial_{t}v+ \partial^{3}_{x}v-i(\alpha -3d_2)
\partial^{2}_{x}v+(d_1+2\alpha d_2-3d_2^2)\partial_x v -i(d_2^3-\alpha d_2^2 +d_3) v - i\beta|v|^2v = 0,\\
v(x,0) = v_0(x) := u_0(x)e^{-id_2x}.
\end{cases}
\end{equation}
If one chooses $d_1=-\frac{\alpha^2}3$, $d_2= \frac{\alpha}3$ and $d_3=\frac{2\alpha^3}{27}$ the third, fourth and fifth terms in the first equation in \eqref{e-nls1} vanish. Also, we note that
$$\|u_0\|_{H^s}\sim \|v_0\|_{H^s}.
$$
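Indeed, a direct substitution confirms the cancellation: with $d_2=\frac{\alpha}{3}$ one has $\alpha-3d_2=0$, while
$$
d_1+2\alpha d_2-3d_2^2=d_1+\frac{2\alpha^2}{3}-\frac{\alpha^2}{3}=d_1+\frac{\alpha^2}{3}=0
\quad\textrm{and}\quad
d_2^3-\alpha d_2^2+d_3=\frac{\alpha^3}{27}-\frac{\alpha^3}{9}+d_3=-\frac{2\alpha^3}{27}+d_3=0
$$
for the stated values of $d_1$ and $d_3$.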
So from now on, we will consider the IVP \eqref{e-nls} with $\alpha = 0$, more precisely,
\begin{equation}\label{e-nlsT}
\begin{cases}
\partial_{t}u+ \partial^{3}_{x}u-i\beta|u|^{2}u = 0,
\quad x,t \in \mathbb R, \\
u(x,0) = u_0(x).
\end{cases}
\end{equation}
This simplification allows us to work in the Fourier transform restriction norm space adapted to the cubic curve $\tau=\xi^3$. In what follows we formally introduce the Fourier transform restriction norm space, commonly known as Bourgain's space.
For $s, b \in \mathbb R$, we define the Fourier transform restriction norm space $X^{s,b}(\mathbb R\times\mathbb R)$ with norm
\begin{equation}\label{xsb}
\|f\|_{ X^{s, b}} = \|(1+D_t)^b U(t)f\|_{L^{2}_{t}(H^{s}_{x})} = \|\langle\tau-\xi^3\rangle^b\langle \xi\rangle^s \widetilde{f}\|_{L^2_{\xi,\tau}},
\end{equation}
where $U(t) = e^{-t\partial^{3}_{x}}$ is the unitary group.
If $b> \frac12$, the Sobolev embedding implies that $ X^{s, b} \subset C(\mathbb R ; H^s_x(\mathbb R)).$ For any interval $I$, we define the localized spaces $X^{s,b}_I:= X^{s,b}(\mathbb R\times I)$ with norm
\begin{equation}\label{xsb-rest}
\|f\|_{ X^{s, b}(\mathbb R\times I)} = \inf\big\{\|g\|_{X^{s, b}};\; g |_{\mathbb R\times I} = f\big\}.
\end{equation}
Sometimes we use the notation $X^{s,b}_{\delta}:= X^{s, b}(\mathbb R\times [0, \delta])$, with norm $\|f\|_{X^{s,b}_{\delta}}:=\|f\|_{ X^{s, b}(\mathbb R\times [0, \delta])}$.\\
We define a cut-off function $\psi_1 \in C^{\infty}(\mathbb R;\; \mathbb R^+)$ which is even, such that $0\leq \psi_1\leq 1$ and
\begin{equation}\label{cut-1}
\psi_1(t) = \begin{cases} 1, \quad |t|\leq 1,\\
0, \quad |t|\geq 2.
\end{cases}
\end{equation}
We also define $\psi_T(t) = \psi_1(t/T)$, for $0< T\leq 1$.
In the following lemma we list some estimates that are crucial in the proof of the local well-posedness result; their proofs can be found in \cite{GTV}.
\begin{lemma}\label{lemma1}
For any $s, b \in \mathbb R$, we have
\begin{equation}\label{lin.1}
\|\psi_1U(t)\phi\|_{X^{s,b}}\leq C \|\phi\|_{H^s}.
\end{equation}
Further, if $-\frac12<b'\leq 0\leq b<b'+1$ and $0\leq \delta\leq 1$, then
\begin{equation}\label{nlin.1}
\|\psi_{\delta}\int_0^tU(t-t')f(u(t'))dt'\|_{X^{s,b}}\lesssim \delta^{1-b+b'}\|f(u)\|_{X^{s, b'}}.
\end{equation}
\end{lemma}
As mentioned in the introduction, our main objective is to prove the global well-posedness result for the low regularity data. Using the $L^2$ conservation law \eqref{conserved-1} we have the global well-posedness of the IVP \eqref{e-nlsT} for given data in
$H^s(\mathbb R),$ $s\geq 0$. So, from now on we suppose $0>s>-\frac14$ throughout
this work. \\
Our aim is to derive an {\em almost conserved quantity} and use it to prove Theorem \ref{Global-Th}. For this, we use the {\em I-method} introduced in \cite{CKSTT} and define the Fourier
multiplier operator $I$ by,
\begin{equation}\label{I-1}
\widehat{Iu}(\xi) = m(\xi) \widehat{u}(\xi),
\end{equation}
where $m(\xi)$ is a smooth, radially symmetric and nonincreasing function given by
\begin{equation}\label{m-1}
m(\xi) =
\begin{cases}
1, \quad \quad \quad\,\,\,\,\,\,\,\;\;|\xi|< N, \\
N^{-s}|\xi|^s, \quad \quad |\xi|\geq 2N,
\end{cases}
\end{equation}
with $N\gg 1$ to be fixed later.
Note that $I$ is the identity operator on low frequencies, $\{\xi : |\xi|< N\}$, and acts as a smoothing operator of order $|s|$ on high frequencies. It commutes with differential operators and satisfies the following property.
\begin{lemma} \label{gwplem1}
Let $0>s>-\frac14$ and $N\geq 1$. Then the operator $I$ maps $H^s(\mathbb R)$ to
$L^2(\mathbb R)$ and
\begin{equation}
\label{gwlem12}
\|I f\|_{L^2(\mathbb R)} \lesssim N^{-s}\|f\|_{H^s(\mathbb R)}.
\end{equation}
\end{lemma}
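For completeness, we include a short justification of \eqref{gwlem12} (a standard computation). By Plancherel's theorem,
$$
\|I f\|_{L^2(\mathbb R)} = \|m(\xi)\langle\xi\rangle^{-s}\,\langle\xi\rangle^{s}\widehat{f}\,\|_{L^2_\xi}
\leq \Big(\sup_{\xi\in\mathbb R} m(\xi)\langle\xi\rangle^{-s}\Big)\, \|f\|_{H^s(\mathbb R)},
$$
and the supremum is $\lesssim N^{-s}$: for $|\xi|\leq 2N$ one has $m(\xi)\leq 1$ and $\langle\xi\rangle^{-s}\lesssim N^{-s}$, while for $|\xi|\geq 2N$ one has $m(\xi)\langle\xi\rangle^{-s}= N^{-s}\big(|\xi|/\langle\xi\rangle\big)^{s}\lesssim N^{-s}$.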
We now record a variant of the local well-posedness result for initial data $u_0 \in H^s$, $0>s>-\frac14$, such that $Iu_0\in L^2$. More precisely, we have the following result, which will be very useful in the proof of the global well-posedness theorem.
\begin{theorem}\label{local-variant}
Let $0>s>-\frac14$, then for any $u_0$ such that $Iu_0\in L^2$, there exist
$\delta = \delta(\|Iu_0\|_{L^2})$ (with $\delta(\rho)\to \infty$ as $\rho\to 0$) and a unique solution to the IVP \eqref{e-nlsT} in the time interval $[0, \delta]$. Moreover, the solution satisfies the estimate
\begin{equation}\label{variant-2}
\|Iu\|_{X_{\delta}^{0, b}}\lesssim \|Iu_0\|_{L^2},
\end{equation}
and the local existence time $\delta$ can be chosen satisfying
\begin{equation}\label{delta-var}
\delta \lesssim \|Iu_0\|_{L^2}^{-\theta},
\end{equation}
where $\theta>0$ is some constant.
\end{theorem}
\begin{proof}
As the operator $I$ commutes with the differential operators, the linear estimates in Lemma \ref{lemma1} necessary in the contraction mapping principle hold true after applying $I$ to equation \eqref{e-nlsT}. Since the operator $I$ does not commute with the nonlinearity, the trilinear estimate is not straightforward. However, applying the interpolation lemma (Lemma 12.1 in \cite{CKSTT-5}) to \eqref{tri-xc} we obtain, under the same assumptions on the parameters $s$, $b$ and $b'$ that
\begin{equation}\label{tlin-I}
\|I(u^3)_x\|_{X^{0, b'}}\lesssim \|Iu\|_{X^{0,b}}^3,
\end{equation}
where the implicit constant does not depend on the parameter $N$ appearing in the definition of the operator $I$.
Now, using the trilinear estimate \eqref{tlin-I} and the linear estimates the proof of this theorem follows exactly as in the proof of Theorem \ref{LocalTh}. So, we omit the details.
\end{proof}
We finish this section recording some known results that will be useful in our work.
First we record the following double mean value theorem (DMVT).
\begin{lemma}[DMVT]
Let $f\in C^2(\mathbb R)$, and $\max\{|\eta|,|\lambda|\}\ll |\xi|$, then
$$
|f(\xi+\eta +\lambda)-f(\xi+\eta )-f(\xi +\lambda)+f(\xi)|\lesssim |f''(\theta)|\,|\eta|\,|\lambda|,
$$
where $|\theta| \sim |\xi|$.
\end{lemma}
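For completeness, we recall the elementary argument behind the DMVT: setting $g(\lambda):=f(\xi+\eta+\lambda)-f(\xi+\lambda)$, two applications of the usual Mean Value Theorem give
$$
f(\xi+\eta +\lambda)-f(\xi+\eta )-f(\xi +\lambda)+f(\xi)= g(\lambda)-g(0)=\lambda\, g'(\lambda_1)=\lambda\,\eta\, f''(\theta),
$$
for some $\lambda_1$ between $0$ and $\lambda$ and some $\theta$ between $\xi+\lambda_1$ and $\xi+\eta+\lambda_1$; the assumption $\max\{|\eta|,|\lambda|\}\ll |\xi|$ then guarantees $|\theta|\sim|\xi|$.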
The following Strichartz's type estimates will also be useful.
\begin{lemma}\label{Lema36} For any $s_1 \geq -\frac14$, $s_2 \geq 0$ and $b>1/2$, we have
\begin{align}
\|u\|_{L_x^5 L_t^{10}} &\lesssim \|u\|_{X^{s_2,b}}\label{eqx1},\\
\|u\|_{L_x^{20/3} L_t^5} &\lesssim \|u\|_{X^{s_1,b}}\label{eqx2},\\
\|u\|_{L_x^\infty L_t^\infty} &\lesssim \|u\|_{X^{s_2,b}}\label{eqx3},\\
\|u\|_{L_x^2 L_t^2} &\lesssim \|u\|_{X^{0,0}}\label{eqx4},\\
\|u\|_{L_t^\infty L_x^2 } &\lesssim \|u\|_{X^{0,b}}\label{eqx5}.
\end{align}
\end{lemma}
\begin{proof}
The estimates \eqref{eqx1} and \eqref{eqx2} follow from
$$
\|U(t)u_0\|_{L_x^5 L_t^{10}}\lesssim \|u_0\|_{L^2} \quad \textrm{and} \quad \|D_x^{\frac14}U(t)u_0\|_{L_x^{20/3} L_t^{5}}\lesssim \|u_0\|_{L^2},
$$
whose proofs can be found in \cite{[KPV1]}.
The estimates \eqref{eqx3} and \eqref{eqx5} follow by embedding, and inequality \eqref{eqx4} is obvious.
\end{proof}
\begin{lemma}
Let $n\geq 2$ be an even integer, $f_1,\dots,f_n \in \mathbf{S}(\mathbb R)$, then
$$
\int_{\xi_1+\cdots +\xi_n=0}\widehat{f_1}(\xi_1)\widehat{\overline{f_2}}(\xi_2)\cdots\widehat{f_{n-1}}(\xi_{n-1})\widehat{\overline{f_{n}}}(\xi_{n})=\int_{\mathbb R}f_1(x)\overline{f_2}(x)\cdots f_{n-1}(x)\overline{f_{n}}(x).
$$
\end{lemma}
\section{Almost conservation law}\label{sec-3}
\subsection{Modified energy}
Before introducing the modified energy functional, we define the notions of $n$-multiplier and $n$-linear functional.
Let $n\geq 2$ be an even integer. An $n$-multiplier $M_n(\xi_1, \dots, \xi_n)$ is a function defined on the hyperplane $\Gamma_n:= \{(\xi_1, \dots, \xi_n);\;\xi_1+\dots +\xi_n =0\}$, which we endow with the measure induced by the Dirac delta $\delta(\xi_1+\cdots +\xi_n)$.
If $M_n$ is an $n$-multiplier and $f_1, \dots, f_n$ are functions on $\mathbb R$, we define an $n$-linear functional, as
\begin{equation}\label{n-linear}
\Lambda_n(M_n;\; f_1, \dots, f_n):= \int_{\Gamma_n}M_n(\xi_1, \dots, \xi_n)\prod_{j=1}^{n}\widehat{f_j}(\xi_j).
\end{equation}
When $f$ is a complex function and $\Lambda_n$ is applied to the $n$ copies of the same function $f$, we write $$\Lambda_n(M_n)\equiv \Lambda_n(M_n; f):=\Lambda_n(M_n;\; f,\bar{f},f,\bar{f},\dots,f,\bar{f}).$$
For $1\leq j\leq n$ and $k\geq 1$, we define the elongation ${\bf{X}}_j^k(M_n)$ of the multiplier $M_n$ to be the multiplier of order $n+k$ given by
\begin{equation}\label{elong}
{\bf{X}}_j^k(M_n)(\xi_1, \cdots , \xi_{n+k}) := M_n(\xi_1,\cdots,\xi_{j-1}, \xi_j+\cdots+\xi_{j+k}, \xi_{j+k+1}, \cdots, \xi_{n+k}).
\end{equation}
Using Plancherel identity, the energy $E(u)$ defined in \eqref{conserved-1} can be written in terms of the $n$-linear functional as
\begin{equation}\label{e.2}
E(u)= \Lambda_2(1).
\end{equation}
In what follows we record a lemma that expresses the time derivative of the $n$-linear functional evaluated at a solution $u$ of the IVP \eqref{e-nlsT}.
\begin{lemma}\label{derivative}
Let $u$ be a solution of the IVP \eqref{e-nlsT} and let $M_n$ be an $n$-multiplier. Then
\begin{equation}\label{der.1}
\frac{d}{dt}\Lambda_n(M_n;u) = i\Lambda_n(M_n\gamma_n; u)+i\Lambda_{n+2}\Big(\sum_{j=1}^n\gamma_j^{\beta}{\bf{X}}_j^2(M_n); u\Big),
\end{equation}
where $\gamma_n = \xi_1^3+\cdots +\xi_n^3$, $\gamma_j^{\beta}=(-1)^{j-1}\beta$ and ${\bf{X}}_j^2(M_n)$ is as defined in \eqref{elong}.
\end{lemma}
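For the reader's convenience, we briefly indicate where the two terms in \eqref{der.1} come from (a sketch only, with $\beta$ a real constant and up to the normalizing constants of the Fourier transform). On the Fourier side, \eqref{e-nlsT} reads
$$
\partial_t\widehat{u}(\xi)= i\xi^3\,\widehat{u}(\xi)+i\beta\,\widehat{|u|^2u}(\xi),
\qquad
\partial_t\widehat{\bar u}(\xi)= i\xi^3\,\widehat{\bar u}(\xi)-i\beta\,\widehat{|u|^2\bar u}(\xi).
$$
Differentiating $\Lambda_n(M_n;u)$ in $t$, the linear terms produce the factor $i(\xi_1^3+\cdots+\xi_n^3)=i\gamma_n$, while the cubic term in the $j$-th factor replaces the corresponding Fourier transform by a threefold convolution, which elongates $M_n$ to ${\bf{X}}_j^2(M_n)$ and carries the alternating sign $\gamma_j^{\beta}=(-1)^{j-1}\beta$; this yields \eqref{der.1}.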
Now we introduce the first modified energy
\begin{equation}\label{mod-1}
E^1_I(u):= E(Iu),
\end{equation}
where $I$ is the Fourier multiplier operator defined in \eqref{I-1} with $m$ given by \eqref{m-1}. Note that for $m\equiv 1$, $E^1_I(u)= \|u\|_{L^2}^2 = \|u_0\|_{L^2}^2$.
Using Plancherel identity, we can write the first modified energy in terms of the $n$-linear functional as
\begin{equation}\label{mod-12}
\begin{split}
E^1_I(u)&= \int m(\xi)\widehat{u}(\xi)m(\xi)\bar{\widehat{u}}(\xi)d\xi\\
&=\int_{\xi_1+\xi_2=0} m(\xi_1)m(\xi_2)\widehat{u}(\xi_1)\widehat{\bar{u}}(\xi_2)\\
&=\Lambda_2(M_2; u),
\end{split}
\end{equation}
where $M_2=m_1m_2$ with $m_j =m(\xi_j)$, $j=1, 2$.
We define the second generation of the modified energy as
\begin{equation}\label{sec-m1}
E^2_I(u):= E^1_I(u)+\Lambda_4(M_4;u),
\end{equation}
where the multiplier $M_4$ is to be chosen later.
Now, using the identity \eqref{der.1}, we get
\begin{equation}\label{sec-m2}
\begin{split}
\frac{d}{dt} E^2_I(u) &= i\Lambda_2\Big(M_2\gamma_2;u\Big)+i\Lambda_4\Big(\sum_{j=1}^2\gamma_j^{\beta}{\bf{X}}_j^2(M_2);u\Big)\\
&\qquad +i\Lambda_4\Big(M_4\gamma_4;u\Big) + i\Lambda_6\Big(\sum_{j=1}^4\gamma_j^{\beta}{\bf{X}}_j^2(M_4); u\Big).
\end{split}
\end{equation}
Note that $\Lambda_2\big(M_2\gamma_2;u\big)=0$, since $\gamma_2=\xi_1^3+\xi_2^3$ vanishes on $\Gamma_2$, where $\xi_2=-\xi_1$.
If we choose $M_4$ in such a way that
$$M_4\gamma_4+\sum_{j=1}^2\gamma_j^{\beta}{\bf{X}}_j^2(M_2)=0,$$
i.e.,
\begin{equation}\label{m.4}
M_4(\xi_1, \xi_2, \xi_3, \xi_4) = -\frac{\sum_{j=1}^2\gamma_j^{\beta}{\bf{X}}_j^2(M_2)}{\gamma_4},
\end{equation}
then the $\Lambda_4$ contributions in \eqref{sec-m2} cancel as well.
So, for the choice of $M_4$ in \eqref{m.4}, we have
\begin{equation}\label{second-m3}
\frac{d}{dt} E^2_I(u) =\Lambda_6(M_6),
\end{equation}
where
\begin{equation}\label{m.6}
M_6= \sum_{j=1}^4\gamma_j^{\beta}{\bf{X}}_j^2(M_4),
\end{equation}
with $M_4$ given by \eqref{m.4}.
We recall that on $\Gamma_n$ ($n=4,6$), one has $\xi_1+\cdots+\xi_n =0$. Let us introduce the notations $\xi_i+\xi_j=\xi_{ij}$, $\xi_{ijk} = \xi_i+\xi_j+\xi_k$ and so on.
Using the fact that $m$ is an even function, we can symmetrize the multiplier $M_4$ given by \eqref{m.4}, to obtain
\begin{equation}\label{m.4s}
\delta_4\equiv\delta_4(\xi_1, \xi_2, \xi_3, \xi_4):=[M_4]_{\mathrm{sym}} = \frac{\beta(m_1^2-m_2^2+m_3^2-m_4^2)}{6\xi_{12}\xi_{13}\xi_{14}},
\end{equation}
where we have used the identity $\xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3= 3\xi_{12}\xi_{13}\xi_{14}$, valid on the hyperplane $\xi_1+\xi_2+\xi_3+\xi_4=0$.
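For completeness, this identity can be checked directly: writing $\xi_4=-(\xi_1+\xi_2+\xi_3)$ and using $(a+b+c)^3=a^3+b^3+c^3+3(a+b)(b+c)(c+a)$, we obtain
$$
\xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3=-3(\xi_1+\xi_2)(\xi_2+\xi_3)(\xi_1+\xi_3)=3\,\xi_{12}\,\xi_{13}\,\xi_{14},
$$
since $\xi_2+\xi_3=-(\xi_1+\xi_4)=-\xi_{14}$ on $\Gamma_4$.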
Using the multiplier $[M_4]_{\mathrm{sym}}$ given by \eqref{m.4s} in \eqref{m.6} we obtain $[M_6]_{\mathrm{sym}}$ in the symmetric form as follows
\begin{equation}\label{m.6s}
\begin{split}
&\delta_6\equiv \delta_6(\xi_1, \xi_2, \xi_3, \xi_4, \xi_5, \xi_6) :=[M_6]_{\mathrm{sym}}\\
&= \frac{\beta}{36}\sum_{\substack{\{k,m,o\}=\{1,3,5\}\\\{l,n,p\}=\{2,4,6\} }}\left[ \delta_4(\xi_{klm}, \xi_n, \xi_o, \xi_p)- \delta_4(\xi_k,\xi_{lmn}, \xi_o, \xi_p)+\delta_4(\xi_k,\xi_l,\xi_{mno}, \xi_p)-\delta_4(\xi_k, \xi_l, \xi_m, \xi_{nop})\right].
\end{split}
\end{equation}
\begin{remark}
In the case $k=1$, $l=2$, $m=3$, $n=4$, $o=5$, $p=6$, the corresponding contribution to the symmetric multiplier $[M_6]_{\mathrm{sym}}$ can be written in extended form as
\begin{equation*}
\begin{split}
&\frac{\beta^2}{36}\Big[-\frac{m^2(\xi_{123})-m^2(\xi_4)+ m^2(\xi_5)-m^2(\xi_6)}{\xi_{56}\xi_{46}\xi_{45}}+\frac{m^2(\xi_1)-m^2(\xi_{234})+m^2(\xi_5)-m^2(\xi_6)}{\xi_{56}\xi_{15}\xi_{16}}\\
&\qquad -\frac{m^2(\xi_1)-m^2(\xi_2)+m^2(\xi_{345})-m^2(\xi_{6})}{\xi_{12}\xi_{26}\xi_{16}} +\frac{m^2(\xi_1)-m^2(\xi_2)+m^2(\xi_3)-m^2(\xi_{456})}{\xi_{12}\xi_{13}\xi_{23}}\Big].
\end{split}
\end{equation*}
However, for our purposes the expression of $\delta_6$ in terms of $\delta_4$ given by \eqref{m.6s} is enough to obtain the required estimates, see Proposition \ref{prop3.3} below.
\end{remark}
\subsection{Multilinear estimates}
In this subsection we derive some multilinear estimates associated to the symmetric multipliers $\delta_4$ and $\delta_6$ and use them to get some local estimates in Bourgain's spaces that will be useful to obtain an {\em almost conserved quantity}.
From here onwards we will use the notation $|\xi_i|=N_i$ and $m(N_i)=m_i$. Given four numbers $N_1, N_2, N_3, N_4$ and $\mathcal{C}=\{N_1, N_2, N_3, N_4\}$, we will denote $N_s=\max\mathcal{C}$, $N_a=\max\mathcal{C}\setminus\{N_s\}$, $N_t=\max\mathcal{C}\setminus\{N_s, N_a\}$, $N_b=\min\mathcal{C}$. Thus
$$
N_s \geq N_a\geq N_t \geq N_b.
$$
\begin{proposition}\label{prop3.3}
Let $m$ be as defined in \eqref{m-1}
\\
1) If $|\xi_{1j}| \gtrsim N_s$ for all $j=2,3,4$ and $N_b\ll N_s$, then
\begin{equation}\label{delta4.1}
|\delta_4| \sim \dfrac{m^2 (N_b)}{N_s^3}.
\end{equation}
2) If $|\xi_{1j}| \gtrsim N_s$ for all $j=3,4$ and $|\xi_{12}|\ll N_s$, then
\begin{equation}\label{delta4.2}
|\delta_4| \lesssim \dfrac{m^2 (N_b)}{\max\{N_t,N\} \,N_s^2}.
\end{equation}
3) If $|\xi_{1j}| \ll N_s$ for $j=2,3$, $a\geq0$, $b\geq0$, $a+b=1$, then
\begin{equation}\label{delta4.3}
|\delta_4| \lesssim \dfrac{m^2 (N_s)}{N_s^2|\xi_{12} |^{a}|\xi_{13} |^{b}}.
\end{equation}
4) In the other cases, we have
\begin{equation}\label{delta4.4}
|\delta_4| \lesssim \dfrac{m^2 (N_s)}{N_s^3}.
\end{equation}
\end{proposition}
\begin{proof}
Let $f(\xi):=m^2(\xi)$; it is an even function, nonincreasing in $|\xi|$. From the definition of $m(\xi)$, we have $|f'(\xi) |\sim \frac{m^2(\xi)}{|\xi|}$ if $|\xi|>N$.
Without loss of generality we can assume $N_s=|\xi_1|$ and $N_a=|\xi_2|$. As $N_s=|\xi_2+\xi_3+\xi_4|$, we have $N_a \sim N_s$. Also, by symmetry we can assume $|\xi_{12}|\leq |\xi_{14}|$.
By the definition of $\delta_4$, if $N_s\leq N$ then $\delta_4=0$. Thus, from now on throughout the proof we will consider $N_s>N$. Depending on the frequency regimes we divide the proof into two different cases, viz., $|\xi_{13}| \gtrsim N_s$ and $|\xi_{14}| \gtrsim N_s$; and $|\xi_{14}| \ll N_s$ or $|\xi_{13}| \ll N_s$.\\
\noindent
{\bf Case A. {\underline{$|\xi_{13}| \gtrsim N_s$ and $|\xi_{14}| \gtrsim N_s$:}}} We further divide this case in two sub-cases.\\
\noindent
{\bf Sub-case A1. {\underline{$|\xi_{12}| \ll N_s$:}}}
Using the standard Mean Value Theorem, we have
\begin{equation}\label{eqA1}
|m^2 (\xi_1)-m^2 (\xi_2)|= |f (\xi_1)-f (-\xi_2)|
= |f'(\xi_{\theta_1})|\,|\xi_{12}|
\end{equation}
where $\xi_{\theta_1}= \xi_1-\theta_1\xi_{12}$ with $\theta_1\in (0,1)$.
Since $|\xi_{12}|\ll N_s$ we have $|\xi_{\theta_1}|\sim |\xi_1|\sim N_s$ and consequently $|f'(\xi_{\theta_1})|\sim \dfrac{m^2 (N_s)}{N_s} $. Using this in \eqref{eqA1}, we obtain
\begin{equation}\label{eqA2}
\dfrac{|m^2 (\xi_1)-m^2 (\xi_2)|}{|\xi_{12}| |\xi_{13}| |\xi_{14}|}\lesssim \dfrac{m^2 (N_s)}{N_s^3}.
\end{equation}
Now, we move to estimate $|m^2(\xi_3)-m^2(\xi_4)|$. First note that, if $N_t \leq N$, then we have $|m^2(\xi_3)-m^2(\xi_4)|=0$. Thus we will assume that $|\xi_3|=N_t > N$. We divide in two cases.\\
\noindent
{\bf Case 1. {\underline{$|\xi_{34}|\ll N_t$:}}} Using the Mean Value Theorem, we get
\begin{equation}\label{eqA3}
|m^2 (\xi_3)-m^2 (\xi_4)|= |f (\xi_3)-f (-\xi_4)|=|f'(\xi_{\theta_2})|\,|\xi_{34}|,
\end{equation}
where $\xi_{\theta_2}= \xi_3-\theta_2\xi_{34}$ with $\theta_2\in (0,1)$.
Since $|\xi_{34}|\ll N_t$ we have $|\xi_{\theta_2}|\sim |\xi_3|\sim N_t$ and consequently $|f'(\xi_{\theta_2})|\sim \dfrac{m^2 (N_t)}{N_t} $. Using this in \eqref{eqA3}, we obtain
\begin{equation}\label{eqA4}
\dfrac{|m^2 (\xi_3)-m^2 (\xi_4)|}{{|\xi_{12}| |\xi_{13}| |\xi_{14}|}}
\lesssim \dfrac{m^2 (N_t)}{N_tN_s^2}.
\end{equation}
\noindent
{\bf Case 2. {\underline{$|\xi_{34}|\gtrsim N_t$:}}} In this case, using the triangle inequality and the fact that the function $f(\xi)=m^2(\xi)$ is nonincreasing in $|\xi|$, we obtain from the
definition of $\delta_4$ that
\begin{equation}\label{eqA5}
\dfrac{|m^2(\xi_3)-m^2(\xi_4)|}{|\xi_{12}|\,|\xi_{13}|\,|\xi_{14}|\,}\lesssim\dfrac{m^2 (N_b)}{N_t\,N_s^2}.
\end{equation}
Now, combining \eqref{eqA2}, \eqref{eqA4} and \eqref{eqA5}, we obtain from the definition of $\delta_4$ in \eqref{m.4s} that
\begin{equation*}
|\delta_4| \sim \dfrac{|f(\xi_1)-f(\xi_2)+f(\xi_3)-f(\xi_4)|}{|\xi_{12}|\,|\xi_{13}|\,|\xi_{14}|\,}\lesssim\dfrac{m^2 (N_b)}{\max\{N_t,N\}\,N_s^2}.
\end{equation*}
\noindent
{\bf Sub-case A2. {\underline{$|\xi_{12}| \gtrsim N_s$:}}} Here also, we divide in two different sub-cases.\\
\noindent
{\bf Sub-case A21. \underline{$N_b \gtrsim N_s$:}} In this case we have $N_b\sim N_t\sim N_a\sim N_s$. Without loss of generality we can assume $\xi_1 > 0$. Since $\xi_1+\cdots+\xi_4=0$, the two largest frequencies must have opposite signs; we claim that $\xi_2<0$. Indeed, suppose $\xi_2 \geq 0$. Then we have $\xi_{1}+\xi_{2}=:M>N_s$ and $\xi_{3}+\xi_{4}=-M<-N_s<0$. In this situation one has $\xi_3 \xi_4>0$, otherwise
$$
\xi_{3}^2+\xi_{4}^2=M^2-2\xi_3 \xi_4\geq M^2>\xi_{1}^2+\xi_{2}^2,
$$
which is a contradiction. As $\xi_3+\xi_4<0$, we conclude that $\xi_{3}<0$ and $\xi_{4}<0$. Now, the frequency ordering $|\xi_2|\geq |\xi_3|$ implies
\begin{equation*}
\xi_2=M-\xi_1\geq |\xi_3|=-\xi_3 = M+\xi_4,
\end{equation*}
and consequently $\xi_{14} \leq 0$. On the other hand, $|\xi_1|\geq |\xi_4| \implies \xi_1\geq -\xi_4 \implies \xi_{14}\geq 0$. Therefore, we get $\xi_{14}=0$ contradicting the hypothesis $|\xi_{14}| \gtrsim N_s$ of this case.
Now, for $\xi_1>0$ and $\xi_2 < 0$, we have
\begin{equation}\label{eqA21}
|m^2 (\xi_1)-m^2 (\xi_2)|= |f (\xi_1)-f (-\xi_2)|=|f'(\xi_\theta)|\,|\xi_{12}|,
\end{equation}
where $ \xi_1 \geq \xi_\theta\geq -\xi_2$, so that $\xi_\theta \sim N_s$ and consequently $|f'(\xi_\theta)|\sim \dfrac{m^2(N_s)}{N_s}$. Using this in \eqref{eqA21}, we get
\begin{equation}\label{eqA22}
|m^2 (\xi_1)-m^2 (\xi_2)|\lesssim m^2 (N_s).
\end{equation}
Similarly, one can also obtain
\begin{equation}\label{eqA23}
|m^2 (\xi_3)-m^2 (\xi_4)|
\lesssim m^2 (N_s).
\end{equation}
Thus, taking into consideration \eqref{eqA22} and \eqref{eqA23}, from the definition of $\delta_4$ we get
$$|\delta_4| \lesssim\dfrac{m^2 (N_s)}{N_s^3}.$$
\noindent
{\bf Sub-case A22. \underline{$N_b \ll N_s$:}} Without loss of generality we can assume $|\xi_4|=N_b$. In this case $|\xi_3|=|\xi_{12}+\xi_4|\sim |\xi_{12}|\sim N_s\sim |\xi_1| \sim |\xi_2|$. It follows that
\begin{equation*}
|m^2 (\xi_1)-m^2 (\xi_2)+m^2 (\xi_3)-m^2 (\xi_4)|\sim |m^2 (\xi_4)|= |m^2 (N_b)|.
\end{equation*}
Therefore in this case
$$
|\delta_4| \sim\dfrac{m^2 (N_b)}{N_s^3}.
$$
\noindent
{\bf Case B. \underline{$|\xi_{14}| \ll N_s$ or $|\xi_{13}| \ll N_s$:}} We divide in two sub-cases.\\
\noindent
{\bf Sub-case B1. \underline{$|\xi_{14}| \ll N_s$:}} We move to find estimates considering two different sub-cases\\
{\bf Sub-case B11. \underline {$|\xi_{13}|\gtrsim N_s$:}} In this case we necessarily have $|\xi_{12}| \ll N_s$. Indeed, if $|\xi_{12}| \gtrsim N_s$, then using the assumption $|\xi_{12}|\leq|\xi_{14}|$ made at the beginning of the proof, we get
$$
|\xi_{14}|\gtrsim|\xi_{12}|\gtrsim N_s,
$$
but this contradicts the defining condition $|\xi_{14}| \ll N_s$ of {\bf Case B1}.
Now, for $|\xi_{12}| \ll N_s$
using the Double Mean Value Theorem with $\xi:=-\xi_1$, $\eta:=\xi_{12}$ and $\lambda:=\xi_{14}$, we have
\begin{equation*}
\begin{split}
|f (\xi+\lambda+\eta)-f (\xi+\eta)-f (\xi+\lambda)+f(\xi)|&\lesssim|f''(\xi_\theta)|\, |\xi_{12}|\,|\xi_{14}|\\
&\lesssim \dfrac{m^2 (N_s)|\xi_{12}|\,|\xi_{14}|}{N_s^2}.
\end{split}
\end{equation*}
Hence,
$$
|\delta_4| \lesssim\dfrac{m^2 (N_s)|\xi_{12}|\,|\xi_{14}|}{N_s^2}\dfrac{1}{|\xi_{13}|\,|\xi_{12}|\,|\xi_{14}|}\sim \dfrac{m^2 (N_s)}{N_s^3}.
$$
\noindent
{\bf Sub-case B12. {\underline{$|\xi_{13}|\ll N_s$:}}}
Without loss of generality we can assume $\xi_1\geq 0$. Recall that, in this Sub-case $|\xi_{12}|\leq |\xi_{14}| \ll N_s$. As $N_s=\xi_1$, we have
\begin{equation*}
\begin{cases}
|\xi_{12}|\ll N_s &\Longrightarrow \,\,\xi_2 <0\quad \textrm{and} \quad |\xi_2|\sim N_s,\\
|\xi_{13}|\ll N_s &\Longrightarrow \,\,\xi_3 <0\quad \textrm{and} \quad |\xi_3|\sim N_s,\\
|\xi_{14}|\ll N_s &\Longrightarrow \,\,\xi_4 <0\quad \textrm{and} \quad |\xi_4|\sim N_s.
\end{cases}
\end{equation*}
Combining this information, we get
$$
N_s\gg |\xi_{13}|=|\xi_{24}|=|\xi_2|+|\xi_4|\sim N_s,
$$
which is a contradiction. Consequently this case is not possible.
\\
\noindent
{\bf Sub-case B2. \underline{$|\xi_{13}| \ll N_s$:}} Taking in consideration {\bf Sub-case B1}, we will assume that $|\xi_{14}| \gtrsim N_s$. In this case too, we will analyse considering two different sub-cases.
\\
\noindent
{\bf Sub-case B21. {\underline{$|\xi_{12}|\ll N_s$:}}}
In this case we have $|\xi_1| \sim |\xi_2|\sim |\xi_3| \sim N_s$. Furthermore
$
|\xi_4|=|\xi_{12}+ \xi_3|\sim N_s
$.
Hence
$$
|\xi_1|\sim |\xi_2|\sim|\xi_3|\sim|\xi_4|\sim N_s.
$$
Observe that, $N_a=|\xi_2|\geq |\xi_3|$ implies
$|\xi_{12}|\leq |\xi_{13}|$.
In fact, if $\xi_1 > 0$, then
\begin{equation*}
\begin{cases}
|\xi_{12}|\ll N_s &\Longrightarrow \,\,\xi_2 <0,\\
|\xi_{13}|\ll N_s &\Longrightarrow \,\,\xi_3 <0,
\end{cases}
\end{equation*}
and it follows that $\xi_{13}\geq \xi_{12}\geq 0$.
If $\xi_1 < 0$, then
\begin{equation*}
\begin{cases}
|\xi_{12}|\ll N_s &\Longrightarrow \,\,\xi_2 >0,\\
|\xi_{13}|\ll N_s &\Longrightarrow \,\,\xi_3 >0,
\end{cases}
\end{equation*}
and it follows that $0\geq \xi_{12}\geq \xi_{13}$. Hence $|\xi_{12}|\leq |\xi_{13}|$.
On the other hand using the Mean Value Theorem, we obtain
\begin{equation*}
\begin{split}
|f(\xi_{1})-f(\xi_{2})+f(\xi_{3})-f(\xi_{4})| &=|f(\xi_{1})-f(-\xi_{2})+f(\xi_{3})-f(-\xi_{4})| \\
&=|\xi_{12} f'(-\xi_2+\theta_1 \xi_{12})+\xi_{34} f'(-\xi_4+\theta_2 \xi_{34})|\\
&=|\xi_{12} | \,|f'(-\xi_2+\theta_1 \xi_{12})-f'(-\xi_4+\theta_2 \xi_{34})|\\
&\lesssim |\xi_{12} |\,|f'(N_s)|\\
&\lesssim |\xi_{12} |\,\dfrac{m^2(N_s)}{N_s},
\end{split}
\end{equation*}
where $|\theta_j|\leq 1$, $j=1,2$. From this we deduce
\begin{equation*}
|\delta_4| \lesssim \dfrac{|\xi_{12} |\, m^2 (N_s)}{N_s}\cdot\dfrac{1}{|\xi_{12} |\, |\xi_{13}|\, |\xi_{14}|}\leq\dfrac{ m^2 (N_s)}{N_s^2|\xi_{12} |^{a}|\xi_{13} |^{b}}.
\end{equation*}
\\
{\bf Sub-case B22. {\underline{$|\xi_{12}|\gtrsim N_s$:}}}
As $N_a=|\xi_2|\sim |\xi_1|=N_s\sim |\xi_3|$, one has $N_s\sim |\xi_2|=|\xi_{13}+\xi_4| $. Thus $|\xi_4| \sim N_s$ and $|\xi_j| \sim N_s$, $j=1,2,3,4$. Also
\begin{equation}\label{sign1324}
|\xi_{24}|=|\xi_{13}|\ll N_s \Longrightarrow \,\, \xi_3\,\xi_1 <0\quad \textrm{and} \quad\xi_2\,\xi_4 <0.
\end{equation}
Let
\begin{equation}\label{eq1234}
\epsilon:=\xi_{13}=-\xi_{24}.
\end{equation}
We consider the following cases.
\\
\noindent
{\bf Case 1. {\underline{$\epsilon >0$}:}} In this case if $\xi_1<0$, then $\xi_3=\epsilon +|\xi_1|> |\xi_1|$ which is a contradiction because $|\xi_3|\leq|\xi_1|$. Similarly by \eqref{sign1324} and \eqref{eq1234} if $\xi_2>0$, then $|\xi_4|=\epsilon +|\xi_2|> |\xi_2|$ which is a contradiction. Therefore we can assume $\xi_1>0$ and $\xi_2<0$ and by \eqref{sign1324} $\xi_3<0$ and $\xi_4>0$. One has that
$$
\xi_1\geq -\xi_2\geq -\xi_3\geq \xi_4>0,
$$
and using \eqref{eq1234}
\begin{equation}\label{eq1234-10}
\xi_1\geq \xi_4+\epsilon\geq \xi_1-\epsilon \geq \xi_4>0.
\end{equation}
Let $b:=\xi_4-N_s$, using \eqref{eq1234-10}, we have
$$
N_s\geq N_s+b+\epsilon\geq N_s-\epsilon \geq N_s+b>0,
$$
which implies that $-2\epsilon\leq b\leq -\epsilon$. Consequently by \eqref{eq1234}, $\xi_2=-N_s-b-\epsilon$ and therefore using the condition of this {\bf Sub-case B22}
$$
N_s \lesssim \xi_{12}=-b-\epsilon,
$$
which is a contradiction. So, this case is not possible.
\\
\noindent
{\bf Case 2. {\underline{$\epsilon <0$}:}} Similarly as above, if $\xi_1>0$, then $|\xi_3|=|\epsilon| +\xi_1>|\xi_1|$ which is a contradiction. Similarly by \eqref{sign1324} if $\xi_2<0$, then $\xi_4=|\epsilon| +|\xi_2|> |\xi_2|$ which is a contradiction. Therefore we can assume $\xi_1<0$ and $\xi_2>0$ and by \eqref{sign1324} $\xi_3>0$ and $\xi_4<0$. Using \eqref{eq1234} one has that
\begin{equation}\label{eq1234-1}
-\xi_1\geq |\epsilon|-\xi_4\geq N_s-|\epsilon| \geq -\xi_4>0.
\end{equation}
Let $b:=\xi_4+N_s$, using \eqref{eq1234-1}, we have
$$
N_s\geq N_s-b+|\epsilon|\geq N_s-|\epsilon| \geq N_s-b>0,
$$
which implies that $2|\epsilon|\geq b\geq |\epsilon|$. Consequently $\xi_2=N_s-b+|\epsilon|$ and
$$
N_s \lesssim \xi_{12}=|\epsilon|-b,
$$
which is a contradiction. Therefore, this case also does not exist.
Combining all the cases, we finish the proof of the proposition.
\end{proof}
\begin{remark}
Let $0<\epsilon\ll N_s$. An example for the {\bf Sub-case A1} is
$$
\xi_1=N_s, \quad\xi_2=-N_s+\epsilon, \quad\xi_3=-\dfrac{\epsilon}{2} , \quad\xi_4=-\dfrac{\epsilon}{2},
$$
other example is
$$
\xi_1=N_s, \quad\xi_2=-N_s+\epsilon, \quad\xi_3=\dfrac{N_s}2-\dfrac{\epsilon}{2} , \quad\xi_4=-\dfrac{N_s}2-\dfrac{\epsilon}{2}.
$$
An example for the {\bf Sub-case A21} with $\xi_1 \geq 0$ and $\xi_2 \leq 0$ is
$$
\xi_1=N_s, \quad\xi_2=-\dfrac{N_s}2, \quad\xi_3=-\dfrac{N_s}4, \quad\xi_4=-\dfrac{N_s}4.
$$
An example for the {\bf Sub-case A22} is
$$
\xi_1=N_s, \quad\xi_2=-\dfrac{N_s}{2}-\epsilon, \quad\xi_3=-\dfrac{N_s}{2} , \quad\xi_4=\epsilon.
$$
An example for the {\bf Sub-case B21} is
$$
\xi_1=N_s, \quad\xi_2=-N_s+\dfrac{\epsilon}2, \quad\xi_3=-N_s+\dfrac{\epsilon}{2} , \quad\xi_4=N_s-\epsilon.
$$
\end{remark}
\begin{proposition}
Let $u\in \mathbf{S}(\mathbb R \times \mathbb R)$, $0>s>-\frac14$ and $b>\frac12$. Then we have
\begin{equation}\label{lambda4}
\left| \Lambda_4(\delta_4; u(t) ) \right| \lesssim \dfrac{1}{N^{(\frac54-3s)}}\,\|Iu\|^4_{L^2},
\end{equation}
and
\begin{equation}\label{lambda6}
\left| \int_0^\delta \Lambda_6(\delta_6; u(t) ) dt\right| \lesssim N^{-\frac74}\|Iu\|^6_{X_\delta^{0,b}}.
\end{equation}
\end{proposition}
\begin{proof}
To prove \eqref{lambda4}, following ideas from \cite{CKSTT2, CKSTT}, we first perform a Littlewood-Paley decomposition of the four factors $u$ in $\Lambda_4(\delta_4;u)$ so that the $\xi_j$ are essentially constants $N_j$, $j=1,2,3,4$. To recover the sum at the end we borrow a factor $N_s^{-\epsilon}$ from the large denominator $N_s$; often this will not be mentioned. Also, without loss of generality, we can suppose that the Fourier transforms involved in the multipliers are all positive.
Recall that for $N_s\leq N$ one has $m(\xi_j)=1$ for all $j=1,2,3,4$ and consequently the multiplier $\delta_4$ vanishes. Therefore, we will consider $N_s> N$.\\
In view of the estimates obtained in Proposition \ref{prop3.3}, we divide the proof of \eqref{lambda4} in two different parts.\\
\noindent
{\bf First part: Cases 1), 2) and 4) of Proposition \ref{prop3.3}}. We observe that $N_s^{\frac14}m_s \gtrsim N^{-s}$. In fact, if $N_s\in [N,2N]$, then $m_s \sim 1$ and $N_s^{\frac14}m_s \gtrsim N_s^{-s}\gtrsim N^{-s}$. If $N_s>2N$, then from the definition of $m$ and the fact that $s>-\frac14$, we arrive at $N_s^{\frac14}m_s =N_s^{\frac14}\dfrac{N^{-s}}{N_s^{-s}}=N_s^{\frac14+s}N^{-s}\gtrsim~N^{-s}$. Furthermore, we observe that $\frac{1}{\max\{N_t,\, N\}} \leq \frac{1}{N}$. Thus
\begin{equation}\label{x1lambda45}
\begin{split}
\left| \Lambda_4(\delta_4; u(t) ) \right|& =\left|\int_{\xi_1+\cdots +\xi_4=0}\delta_4(\xi_1, \dots, \xi_4)\widehat{u_1}(\xi_1)\cdots\widehat{\overline{u_4}}(\xi_4)\right|\\
&\lesssim \int_{\xi_1+\cdots +\xi_4=0}\dfrac{m^2(N_b)}{N\,N_s^2}\dfrac{\widehat{Iu_1}(\xi_1)\cdots\widehat{I\overline{u_4}}(\xi_4)}{m_1\cdots m_4}\\
&\lesssim \int_{\xi_1+\cdots +\xi_4=0}\dfrac{N_s}{N\,N_s^2m_s^3}\widehat{D_x^{-\frac14}Iu_1}(\xi_1)\cdots\widehat{D_x^{-\frac14}I\overline{u_4}}(\xi_4)\\
&\lesssim \int_{\xi_1+\cdots +\xi_4=0}\dfrac{1}{N\,N_s^{\frac14-3s}}\widehat{D_x^{-\frac14}Iu_1}(\xi_1)\cdots\widehat{D_x^{-\frac14}I\overline{u_4}}(\xi_4)\\
&\lesssim \dfrac{1}{N\,N_s^{\frac14-3s}}\|D_x^{-1/4} Iu\|^4_{L^4}\\
&\lesssim \dfrac{1}{N^{(\frac54-3s)}}\|Iu\|^4_{L^2},
\end{split}
\end{equation}
where in the fourth line we used the following estimate
$$
\dfrac{N_s}{N_s^2 m_s^3}=\dfrac{1}{N_s^{\frac14}(N_s^{\frac14}m_s)^3}\lesssim\dfrac{1}{N_s^{\frac14-3s}}.
$$
\noindent
{\bf Second part. Case 3) of Proposition \ref{prop3.3}}. Recall from the first part, we have $N_s^{\frac14}m_s \gtrsim N^{-s}$.
Using \eqref{delta4.3} with $a=1$ and $b=0$, and recalling the fact that $|\xi_{12}|= |\xi_{34}|$, we get
\begin{equation}\label{x1lambda4}
\begin{split}
\left| \Lambda_4(\delta_4; u(t) ) \right|& =\left|\int_{\xi_1+\cdots +\xi_4=0}\delta_4(\xi_1, \dots, \xi_4)\widehat{u_1}(\xi_1)\cdots\widehat{\overline{u_4}}(\xi_4)\right|\\
&=\left|\int_{\xi_1+\cdots +\xi_4=0}\delta_4(\xi_1, \dots, \xi_4)\dfrac{\widehat{Iu_1}(\xi_1)\cdots\widehat{\overline{Iu_4}}(\xi_4)}{m_1\cdots m_4}\right|\\
&\lesssim\int_{\xi_1+\cdots +\xi_4=0}\dfrac{1}{N_s^2 m_s^2}|\xi_{12}|^{-\frac12}\widehat{Iu_1}(\xi_1)\widehat{\overline{Iu_2}}(\xi_2)\,|\xi_{34}|^{-\frac12}\widehat{Iu_3}(\xi_3)\widehat{\overline{Iu_4}}(\xi_4)
\\
&\lesssim \int_{\mathbb R}\dfrac{1}{N_s^{3/2-2s}}D_x^{-\frac12}(Iu_1Iu_2)\,D_x^{-\frac12}(Iu_3Iu_4)\\
&\lesssim \dfrac{1}{N_s^{\frac32-2s}}\|D_x^{-\frac12}(Iu_1Iu_2)\|_{L^2}\|D_x^{-\frac12}(Iu_3Iu_4)\|_{L^2},
\end{split}
\end{equation}
where in the second last line we used
$$
\dfrac{1}{N_s^2 m_s^2}=\dfrac{1}{N_s^{\frac32}(N_s^{\frac14}m_s)^2}\lesssim\dfrac{1}{N_s^{3/2-2s}}.
$$
Now, applying the Hardy-Littlewood-Sobolev inequality, we obtain from \eqref{x1lambda4} that
\begin{equation}\label{x1lambda4.1}
\begin{split}
\left| \Lambda_4(\delta_4; u(t) ) \right| &\lesssim \dfrac{1}{N^{(\frac32-2s)}}
\|Iu_1Iu_2\|_{L^1}\|Iu_3Iu_4\|_{L^1}\\
&\lesssim \dfrac{1}{N^{(\frac32-2s)}}\|Iu\|^4_{L^2}.
\end{split}
\end{equation}
Observe that the condition $s>-\frac14$ implies that $\frac54-3s<\frac32-2s$ and this completes the proof of \eqref{lambda4}.
Now we move to prove \eqref{lambda6}. As in the proof of \eqref{lambda4}, we first perform a Littlewood-Paley decomposition of the six factors $u$ in $\Lambda_6(\delta_6;u)$ so that the $\xi_j$ are essentially constants $N_j$, $j=1, \cdots, 6$. Recall that for $N_s\leq N$ one has $m(\xi_j)=1$ for all $j=1,\cdots,6$ and consequently the multiplier $\delta_6$ vanishes. Therefore, we will consider $N_s> N$. Since $N_a\sim N_s>N$, it follows that
$$
m_sN_s\gtrsim N\quad \textrm{and}\quad m_aN_a\gtrsim N.
$$
Without loss of generality we will consider only the term $\delta_4(\xi_{123}, \xi_4, \xi_5, \xi_6)$ in the symmetrization of $\delta_6(\xi_1, \dots, \xi_6)$, see \eqref{m.6s}. The estimates for the other terms are similar.
Here also, we will provide a proof of \eqref{lambda6} dividing in two parts.\\
\noindent
{\bf First part. Cases 1), 2) and 4) in Proposition \ref{prop3.3}.} In these cases, we have
\begin{equation}\label{x2lambda4}
\begin{split}
J&:=\left| \int_0^\delta \Lambda_6(\delta_6; u(t) ) \right| = \left|\int_0^\delta \int_{\xi_1+\cdots +\xi_6=0}\delta_4(\xi_{123}, \xi_4, \xi_5, \xi_6)\widehat{u_1}(\xi_1)\cdots\widehat{\overline{u_6}}(\xi_6)\right|\\
&\lesssim \int_0^\delta \int_{\xi_1+\cdots +\xi_6=0}\dfrac{m_b^2}{\max\{N_t, N\} \,N_s^2}\,\cdot\,\dfrac{m_a m_s\widehat{u_1}(\xi_1)\cdots\widehat{\overline{u_6}}(\xi_6)}{m_a m_s}\\
&\lesssim \int_0^\delta \int_{\mathbb R}\dfrac{1}{\max\{N_t, N\}\,N^2 }Iu_s Iu_a Iu_b u_t u_5 u_6\\
&\lesssim \int_0^\delta \int_{\mathbb R}\dfrac{N_t^{\frac14}}{\max\{N_t, N\}\,N^2}Iu_s Iu_a Iu_b( D_x^{-\frac14}u_t) u_5 u_6\\
&\lesssim \dfrac{1}{N^{11/4}}\|Iu_s\|_{L_x^2 L_t^2} \|Iu_a\|_{L_x^\infty L_t^\infty}\|Iu_b\|_{L_x^\infty L_t^\infty}\|D_x^{-\frac14}u_t\|_{L_x^5 L_t^{10}}\|u_5\|_{L_x^{20/3} L_t^{5}}\|u_6\|_{L_x^{20/3} L_t^5}.
\end{split}
\end{equation}
Using estimates from Lemma \ref{Lema36}, we obtain from \eqref{x2lambda4} that
\begin{equation}\label{x2lambda4.1}
\begin{split}
J&\lesssim \dfrac{1}{N^{\frac{11}4}}\|Iu_s\|_{X^{0,b}_{\delta}} \|Iu_a\|_{X^{0,b}_{\delta}}\|Iu_b\|_{X^{0,b}_{\delta}}\|u_t\|_{X^{-\frac14,b}_{\delta}}\|u_5\|_{X^{-\frac14,b}_{\delta}}\|u_6\|_{X^{-\frac14,b}_{\delta}}\\
&\lesssim \dfrac{1}{N^{\frac{11}4}}\|Iu_s\|_{X^{0,b}_{\delta}} \|Iu_a\|_{X^{0,b}_{\delta}}\|Iu_b\|_{X^{0,b}_{\delta}}\|Iu_t\|_{X^{0,b}_{\delta}}\|Iu_5\|_{X^{0,b}_{\delta}}\|Iu_6\|_{X^{0,b}_{\delta}}\\
&\lesssim \dfrac{1}{N^{\frac{11}4}}\|Iu\|_{X^{0,b}_{\delta}}^6.
\end{split}
\end{equation}
\noindent
{\bf Second part. Case 3) in Proposition \ref{prop3.3}.} Without loss of generality we can assume that $|\xi_{123}|=N_s$, $|\xi_4|=N_a$, $|\xi_5|=N_t$ and $|\xi_6|=N_b$. Notice that $m_s^2\leq m_t m_b$ and $|\xi_j|\sim N_s$ for some $j=1,2,3$. So, we can assume $|\xi_3|\sim N_s$. Using \eqref{delta4.3} in Proposition \ref{prop3.3} with $a=1$ and $b=0$, we can obtain
\begin{equation}\label{x3lambda4}
\begin{split}
J:=&\left| \int_0^\delta \Lambda_6(\delta_6; u(t) ) \right| = \left|\int_0^\delta \int_{\xi_1+\cdots +\xi_6=0}\delta_4(\xi_{123}, \xi_4, \xi_5, \xi_6)\widehat{u_1}(\xi_1)\cdots\widehat{\overline{u_6}}(\xi_6)\right|\\
\lesssim &\int_0^\delta \int_{\xi_1+\cdots +\xi_6=0}\dfrac{m_t m_b}{N_s^2|\xi_{1234}|^{\frac12}\, |\xi_{56}|^{\frac12}}\,\cdot\,\dfrac{m_a \widehat{u_1}(\xi_1)\cdots\widehat{\overline{u_6}}(\xi_6)}{m_a }\\
\lesssim &\int_0^\delta \int_{\xi_1+\cdots +\xi_6=0}\dfrac{N_s^{\frac14}}{N N_s}|\xi_{1234}|^{-\frac12}
(\widehat{u_1}(\xi_1)\widehat{\overline{u_2}}(\xi_2)|\xi_3|^{-\frac14}\widehat{u_3}(\xi_3)\widehat{\overline{Iu_4}}(\xi_4))
|\xi_{56}|^{-\frac12}(\widehat{Iu_5}(\xi_5)\widehat{\overline{Iu_6}}(\xi_6))\\
\lesssim &\dfrac{1}{N^{\frac74}}\int_0^\delta \int_{\mathbb R}D_x^{-\frac12}(u_1 u_2 (D_x^{-\frac14}u_3) Iu_a) D_x^{-\frac12}(Iu_t Iu_b)\\
\lesssim &\dfrac{1}{N^{\frac74}}\int_0^\delta \| D_x^{-\frac12}(u_1 u_2 (D_x^{-\frac14}u_3) Iu_a) \|_{L^2_x}\|D_x^{-\frac12}(Iu_t Iu_b)\|_{L^2_x}.
\end{split}
\end{equation}
Now, applying the Hardy-Littlewood-Sobolev inequality followed by estimates from Lemma \ref{Lema36}, we obtain from \eqref{x3lambda4} that
\begin{equation}\label{x3lambda4.1}
\begin{split}
J&\lesssim \dfrac{1}{N^{\frac74}}\int_0^\delta \| u_1 u_2(D_x^{-\frac14}u_3) Iu_a \|_{L^1_x}\|Iu_t Iu_b\|_{L^1_x}\\
&\lesssim \dfrac{1}{N^{\frac74}} \|u_1\|_{ L_x^{20/3}L_t^5}\|u_2\|_{ L_x^{20/3}L_t^5}\|D_x^{-\frac14}u_3\|_{L_x^5 L_t^{10}}\|I u_a\|_{L_x^{2} L_t^{2}}\|Iu_t\|_{L_t^\infty L_x^2 }\|Iu_b\|_{L_t^\infty L_x^2 }\\
&\lesssim \dfrac{1}{N^{\frac74}}\|u_1\|_{X^{-\frac14,b}_{\delta}} \|u_2\|_{X^{-\frac14,b}_{\delta}}\|u_3\|_{X^{-\frac14,b}_{\delta}}\|Iu_a\|_{X^{0,b}_{\delta}}\|Iu_t\|_{X^{0,b}_{\delta}}\|Iu_b\|_{X^{0,b}_{\delta}}\\
&\lesssim \dfrac{1}{N^{\frac74}}\|Iu_1\|_{X^{0,b}_{\delta}} \|Iu_2\|_{X^{0,b}_{\delta}}\|Iu_3\|_{X^{0,b}_{\delta}}\|Iu_a\|_{X^{0,b}_{\delta}}\|Iu_t\|_{X^{0,b}_{\delta}}\|Iu_b\|_{X^{0,b}_{\delta}}\\
&\lesssim \dfrac{1}{N^{\frac74}}\|Iu\|_{X^{0,b}_{\delta}}^6.
\end{split}
\end{equation}
\end{proof}
\subsection{Almost conserved quantity}
We use the estimates proved in the previous subsection to obtain the following almost conservation law for the second generation of the energy.
\begin{proposition}\label{prop-almost}
Let $u$ be the solution of the IVP \eqref{e-nlsT} given by Theorem \ref{local-variant} in the interval $[0, \delta]$. Then the second generation of the modified energy satisfies the following estimate
\begin{equation}
\label{almost-CL}
|E^2_I(u(\delta))|\leq |E^2_I(\phi)| + C N^{-\frac74}\|Iu\|_{X^{0, \frac12+}_{\delta}}^6.
\end{equation}
\end{proposition}
\begin{proof}
The proof follows combining \eqref{second-m3} and \eqref{lambda6}.
\end{proof}
\section{Proof of the main result}\label{sec-4}
In this section we provide the proof of the main result of this work.
\begin{proof}[Proof of Theorem \ref{Global-Th}] Let $u_0\in H^s(\mathbb R)$, $0>s>-\frac14$. Given any $T>0$, we are interested in extending the local solution to the IVP \eqref{e-nlsT} to the interval $[0, T]$.
To simplify the analysis we use a scaling argument. If $u(x,t)$ solves the IVP \eqref{e-nlsT} with initial data $u_0(x)$, then for $1<\lambda<\infty$ so does $u^{\lambda}(x,t)$ with initial data $u_0^{\lambda}(x)$, where $u^{\lambda}(x,t)= \lambda^{-\frac32} u(\frac x\lambda, \frac t{\lambda^3})$ and $u_0^{\lambda}(x)=\lambda^{-\frac32}u_0(\frac x\lambda)$.
Our interest is in extending the rescaled solution $u^{\lambda}$ to the bigger time interval $[0, \lambda^3T]$.
Observe that
\begin{equation}\label{g-1}
\|u_0^{\lambda}\|_{{H}^s}\lesssim \lambda^{-1-s}\|u_0\|_{{H}^s}.
\end{equation}
From this observation and \eqref{gwlem12} we have that
\begin{equation}\label{g-2}
E^1_I(u_0^{\lambda})=\|Iu_0^{\lambda}\|_{L^2}^2\lesssim N^{-2s}\lambda^{-2(1+s)}\|u_0\|_{H^s}^2.
\end{equation}
The number $N\gg 1$ will be chosen later suitably. Now we choose the parameter $\lambda=\lambda(N)$ in such a way that $E^1_I(u_0^{\lambda})=\|Iu_0^{\lambda}\|_{L^2}^2$ becomes as small as we please. In fact, for arbitrary $\epsilon>0$, if we choose
\begin{equation}\label{g-4}
\lambda \sim N^{-\frac{s}{1+s}},
\end{equation}
we can obtain
\begin{equation}\label{g-5}
E^1_I(u_0^{\lambda})=\|Iu_0^{\lambda}\|_{L^2}^2\leq \epsilon.
\end{equation}
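Indeed, in view of \eqref{g-2}, the requirement $N^{-2s}\lambda^{-2(1+s)}\|u_0\|_{H^s}^2\leq \epsilon$ amounts to
$$
\lambda \gtrsim N^{-\frac{s}{1+s}}\Big(\frac{\|u_0\|_{H^s}^2}{\epsilon}\Big)^{\frac{1}{2(1+s)}},
$$
and, for fixed $\epsilon$ and $u_0$, this is exactly the choice \eqref{g-4}; note that $1+s>0$ since $s>-\frac14$.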
From \eqref{g-5} and the variant of the local well-posedness result \eqref{delta-var}, we can guarantee that the rescaled solution $u^{\lambda}$ exists in the time interval $[0, 1]$.
Moreover, for this choice of $\lambda$, from \eqref{sec-m1}, \eqref{lambda4} and \eqref{g-5}, in the time interval $[0, 1]$, we have
\begin{equation}\label{g-6}
|E^2_I(u_0^{\lambda})|\leq |E^1_I(u_0^{\lambda})| +|\Lambda_4(M_4; u_0^{\lambda})|\lesssim \|Iu_0^{\lambda}\|_{L^2}^2 + \|Iu_0^{\lambda}\|_{L^2}^4\lesssim \epsilon+\epsilon^2\lesssim \epsilon.
\end{equation}
Using the almost conservation law \eqref{almost-CL} for the modified energy, \eqref{variant-2}, \eqref{g-5} and \eqref{g-6}, we obtain
\begin{equation}\label{g-7}
\begin{split}
|E^2_I(u^{\lambda})(1)|&\lesssim |E^2_I(u_0^{\lambda})| +N^{-\frac74}\|Iu^{\lambda}\|_{X_1^{0,{\frac12+}}}^6\\
&\lesssim \epsilon+N^{-\frac74}\epsilon^3\\
&\lesssim \epsilon+N^{-\frac74}\epsilon.
\end{split}
\end{equation}
From \eqref{g-7}, it is clear that we can iterate this process $N^{\frac74}$ times before doubling the modified energy $|E^2_I(u^{\lambda})|$. Therefore, by taking $N^{\frac74}$ time steps of size $O(1)$, we can extend the rescaled solution to the interval $[0, N^{\frac74}]$. As we are interested in extending the solution to the interval $[0, \lambda^3T]$, we must select $N=N(T)$ such that $\lambda^3T\leq N^{\frac74}$. Therefore, with the choice of $\lambda$ in \eqref{g-4}, we must have
\begin{equation}\label{g-8}
TN^{\frac{-7-19s}{4(1+s)}}\leq c.
\end{equation}
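Indeed, with $\lambda \sim N^{-\frac{s}{1+s}}$ the condition $\lambda^3T\leq N^{\frac74}$ amounts to
$$
T\lesssim N^{\frac74+\frac{3s}{1+s}}=N^{\frac{7(1+s)+12s}{4(1+s)}}=N^{\frac{7+19s}{4(1+s)}},
$$
which is precisely \eqref{g-8}.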
Hence, for arbitrary $T>0$ and large $N$, \eqref{g-8} is possible if $s>-\frac7{19}$, which is true because we have considered $s>-\frac14$. This completes the proof of the theorem.
\end{proof}
\begin{remark}
From the proof of Theorem \ref{Global-Th} it can be seen that the global well-posedness result might hold for initial data with Sobolev regularity below $-\frac14$ as well, provided a local solution exists. But, as shown in \cite{XC-04}, one cannot obtain the local well-posedness result for such data because the crucial trilinear estimate fails for $s<-\frac14$.
\end{remark}
\noindent
{\bf Acknowledgments.}
The first author extends thanks to the Department of Mathematics, UNICAMP, Campinas for the kind hospitality where a significant part of this work was developed. The second author acknowledges the grants from FAPESP (2020/14833-8) and CNPq (307790/2020-7). \\
\end{document} |
\begin{document}
\title[Polynomial Interpretations Revisited]
{Polynomial Interpretations over the Natural, Rational and Real Numbers
Revisited}
\author[F.~Neurauter]{Friedrich Neurauter\rsuper a}
\address{{\lsuper a}TINETZ-Stromnetz Tirol AG}
\email{[email protected]}
\author[A.~Middeldorp]{Aart Middeldorp\rsuper b}
\address{{\lsuper b}Institute of Computer Science \newline
University of Innsbruck, Austria}
\email{[email protected]}
\keywords{term rewriting, termination, polynomial interpretations}
\begin{abstract}
Polynomial interpretations are a useful technique for proving
termination of term rewrite systems. They come in various flavors:
polynomial interpretations with real, rational and integer coefficients.
As to their relationship with respect to termination proving power,
Lucas managed to prove in 2006 that there are rewrite systems that can
be shown polynomially terminating by polynomial interpretations with real
(algebraic) coefficients, but cannot be shown polynomially terminating
using polynomials with rational coefficients only.
He also proved the corresponding statement regarding the use of rational
coefficients versus integer coefficients.
In this article we extend these results, thereby giving the full picture
of the relationship between the aforementioned variants of polynomial
interpretations. In particular, we show that polynomial
interpretations with real or rational coefficients do not subsume
polynomial interpretations with integer coefficients.
Our results hold also for incremental termination proofs with
polynomial interpretations.
\end{abstract}
\maketitle
\section{Introduction}
\label{sect:intro}
Polynomial interpretations are a simple yet useful technique for proving
termination of term rewrite systems (TRSs, for short). While originally
conceived in the late seventies by
Lankford \cite{L79} as a means for establishing direct termination
proofs,
polynomial interpretations are nowadays often used in the context of the
dependency pair (DP) framework \cite{AG00,GTSF06,HM07}.
In the classical approach of Lankford, one considers polynomials with
integer coefficients inducing polynomial algebras over the well-founded
domain of the natural numbers.
To be precise, every $n$-ary function symbol $f$ is interpreted by a
polynomial $P_f$ in $n$ indeterminates with
integer coefficients, which induces a mapping or \emph{interpretation}
from terms to integer numbers in the obvious way. In order to conclude
termination of a given TRS, three conditions have to be satisfied.
First, every polynomial must be \emph{well-defined}, i.e., it must induce
a well-defined polynomial function
$f_\mathbb{N}\colon \mathbb{N}^n \to \mathbb{N}$ over the natural numbers. In addition, the
interpretation functions $f_\mathbb{N}$ are required to be
\emph{strictly monotone} in all arguments. Finally, one has to show
\emph{compatibility} of the interpretation with the given TRS. More
precisely,
for every rewrite rule $\ell \to r$, the polynomial $P_\ell$ associated
with the left-hand side must be greater than $P_r$, the corresponding
polynomial of the right-hand side, i.e., $P_\ell > P_r$ for all values
of the indeterminates.
Already back in the seventies, an alternative approach using polynomials
with real coefficients instead of integers was proposed by
Dershowitz~\cite{D79}. However, as the real numbers $\mathbb{R}$ equipped
with the standard order $>_\mathbb{R}$ are not well-founded,
a subterm property is explicitly required to ensure well-foundedness.
It was not until 2005 that this limitation was overcome, when
Lucas \cite{L05} presented a framework for proving polynomial termination
over the real numbers, where well-foundedness is basically
achieved by replacing $>_\mathbb{R}$ with a new ordering $>_{\mathbb{R},\delta}$
requiring comparisons between terms to not be below a given positive real
number $\delta$. Moreover, this framework also facilitates polynomial
interpretations over the rational numbers.
Thus, one can distinguish three variants of polynomial interpretations,
polynomial interpretations with real, rational and integer coefficients,
and the obvious question is:
what is their relationship with regard to termination proving power?
For Knuth-Bendix orders it is known~\cite{KV03,L01} that extending the
range of the underlying weight function from natural numbers to
non-negative reals does not result in an increase in termination proving
power. In 2006 Lucas~\cite{L06} proved that there are TRSs that can be
shown polynomially terminating by polynomial interpretations with
rational coefficients, but cannot be shown polynomially terminating using
polynomials with integer coefficients only. Likewise, he proved that
there are TRSs that can be handled by polynomial interpretations with
real (algebraic)
coefficients, but cannot be handled by polynomial interpretations
with rational coefficients.
In this article we extend these results and give a complete comparison
between the various notions of polynomial termination.\footnote{
Readers familiar with Lucas~\cite{L06} should note that we use a
different definition of polynomial termination over the reals and
rationals, cf.\ Remark~\ref{Lucas}.}
In general, the situation turns out to be
as depicted in Figure~\ref{fig:summary}, which illustrates both
our results and the earlier results of Lucas~\cite{L06}.
\begin{figure}
\caption{Comparison.}
\label{fig:summary}
\end{figure}
In particular, we prove that polynomial interpretations with real
coefficients
subsume polynomial interpretations with rational coefficients.
Moreover, we show that polynomial interpretations with real or rational
coefficients do not subsume polynomial interpretations with integer
coefficients by exhibiting the TRS $\mathcal{R}_1$ in
Section~\ref{sect:nvsr}.
Likewise, we prove that there are TRSs
that can be shown terminating by polynomial interpretations with real
coefficients as well as by polynomial interpretations with integer
coefficients, but cannot be shown terminating using polynomials with
rational coefficients only, by exhibiting the TRS $\mathcal{R}_2$ in
Section~\ref{sect:nrvsq}. The TRSs $\mathcal{R}_3$ and
$\mathcal{R}_4$ can be found in Section~\ref{sect:incrementality}.
The remainder of this article is organized as follows. In
Section~\ref{sect:prelim}, we introduce some preliminary definitions
and terminology concerning polynomials and polynomial interpretations.
In Section~\ref{sect:qvsr}, we show that polynomial
interpretations with real coefficients subsume polynomial interpretations
with rational coefficients. We further show that for polynomial
interpretations over the reals, it suffices to consider real algebraic
numbers as interpretation domain.
Section~\ref{sect:nvsr} is dedicated to showing that polynomial
interpretations with real or rational coefficients do not subsume
polynomial interpretations with integer coefficients.
Then, in Section~\ref{sect:nrvsq}, we present a TRS
that can be handled by a polynomial interpretation with real coefficients
as well as by a polynomial interpretation with integer coefficients, but
cannot be handled using polynomials with rational coefficients.
In Section~\ref{sect:incrementality}, we show that the relationships
in Figure~\ref{fig:summary} remain true if incremental termination
proofs with polynomial interpretations are considered.
We conclude in Section~\ref{sect:conclusion}.
This paper is an extended version of \cite{FNAM-RTA10}, which
contained the result of Section~\ref{sect:nvsr}.
The results in Sections~\ref{sect:qvsr}, \ref{sect:nrvsq}
and~\ref{sect:incrementality} are new.
\section{Preliminaries}
\label{sect:prelim}
As usual, we denote by $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{Q}$ and $\mathbb{R}$ the sets of natural,
integer, rational and real numbers, respectively. An \emph{irrational}
number is a real number that is not in $\mathbb{Q}$. Given some
$D \in \{ \mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{R} \}$ and $m \in D$,
$>_D$ denotes the standard order of the respective domain and
$D_m := \{ x \in D \mid x \geqslant m \}$.
A sequence of real numbers $(x_n)_{n \in \mathbb{N}}$ \emph{converges} to the
\emph{limit} $x$ if for every real number $\varepsilon > 0$ there exists
a natural number $N$ such that the absolute distance $|x_n - x|$ is less
than $\varepsilon$ for all $n > N$; we denote this by
$\lim_{n \to \infty} x_n = x$. As convergence
in $\mathbb{R}^k$ is equivalent to componentwise convergence, we use the same
notation also for limits of converging sequences of vectors of real
numbers $(\vec{x}_n \in \mathbb{R}^k)_{n \in \mathbb{N}}$.
A real function $f\colon \mathbb{R}^k \to \mathbb{R}$ is \emph{continuous} in $\mathbb{R}^k$
if for every converging sequence $(\vec{x}_n \in \mathbb{R}^k)_{n \in \mathbb{N}}$
it holds that
$\lim_{n \to \infty} f(\vec{x}_n) = f(\lim_{n \to \infty} \vec{x}_n)$.
Finally, as $\mathbb{Q}$ is dense in $\mathbb{R}$, every real number is a
rational number or the limit of a converging sequence of rational
numbers.
\subsection*{Polynomials}
For any ring $R$ (e.g. $\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}$), we denote the associated
\emph{polynomial ring} in $n$ \emph{indeterminates} $\seq{x}$ by
$R[\seq{x}]$, the elements of which are finite sums of products
of the form $c\cdot x_1^{i_1}x_2^{i_2}\cdots x_n^{i_n}$, where the
\emph{coefficient} $c$ is an element of $R$ and the exponents $\seq{i}$
in the \emph{monomial} $x_1^{i_1}x_2^{i_2}\cdots x_n^{i_n}$
are non-negative integers.
If $c \neq 0$, we call a product
$c\cdot x_1^{i_1}x_2^{i_2}\cdots x_n^{i_n}$ a \emph{term}.
The \emph{degree} of a monomial
is just the sum of its exponents, and the degree of a term is
the degree of its monomial.
An element $P \in R[\seq{x}]$ is called an
\emph{($n$-variate) polynomial with coefficients in $R$}.
For example, the polynomial $2x^2-x+1$ is an element of $\mathbb{Z}[x]$,
the ring of all univariate polynomials with integer coefficients.
In the special case $n = 1$, a polynomial $P \in R[x]$ can be written as
follows: $P(x) = \sum_{k=0}^d {a_k x^k}$ ($d \geqslant 0$).
For the largest $k$ such that $a_k \neq 0$, we call $a_k x^k$ the
\emph{leading term} of $P$, $a_k$ its \emph{leading coefficient}
and $k$ its \emph{degree}, which we denote by $\deg(P) = k$.
A polynomial $P \in R[x]$ is said to be \emph{linear} if $\deg(P) = 1$,
and \emph{quadratic} if $\deg(P) = 2$.
\subsection*{Polynomial Interpretations}
We assume familiarity with the basics of term rewriting and polynomial
interpretations (e.g.\ \cite{BN98,TeReSe}).
The key concept for establishing (direct) termination of TRSs
via polynomial interpretations is the notion of well-founded
monotone algebras as they induce reduction orders on terms.
\begin{defi}
\label{def:ma}
Let $\mathcal{F}$ be a signature, i.e., a set of function symbols equipped
with fixed arities.
An $\mathcal{F}$-\emph{algebra} $\mathcal{A}$ consists of a non-empty
\emph{carrier} set $A$ and a collection of
interpretation functions $f_A\colon A^n \to A$ for
each $n$-ary function symbol $f \in \mathcal{F}$.
The \emph{evaluation} or \emph{interpretation} $[\alpha]_\mathcal{A}(t)$
of a term $t \in \mathcal{T}(\mathcal{F},\mathcal{V})$ with respect to a
variable
assignment $\alpha\colon \mathcal{V} \to A$ is inductively defined as
follows:
\[
[\alpha]_\mathcal{A}(t) =
\begin{cases}
\alpha(t) & \text{if $t \in \mathcal{V}$} \\
f_A([\alpha]_\mathcal{A}(t_1),\dots,[\alpha]_\mathcal{A}(t_n)) &
\text{if $t = f(\seq{t})$}
\end{cases}
\]
Let $\sqsupset$ be a binary relation on $A$.
For $i \in \{ 1, \dots, n \}$, an interpretation function
$f_A \colon A^n \to A$ is \emph{monotone in its $i$-th argument}
with respect to $\sqsupset$ if $a_i \sqsupset b$ implies
\[
f_A(a_1,\dots,a_i,\dots,a_n) \sqsupset f_A(a_1,\dots,b,\dots,a_n)
\]
for all $\seq{a}, b \in A$. It is said to be \emph{monotone} with respect
to $\sqsupset$ if it is monotone in all its arguments.
We define $s \sqsupset_A t$ as
$[\alpha]_\mathcal{A}(s) \sqsupset [\alpha]_\mathcal{A}(t)$
for all assignments $\alpha$.
\end{defi}
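To illustrate the evaluation of terms, consider (purely as an example; the signature and interpretation below are not used elsewhere in this paper) a signature containing a binary symbol $\mathsf{f}$ and a unary symbol $\mathsf{s}$, interpreted over the carrier $A = \mathbb{N}$ by $\mathsf{f}_A(x,y) = x + y$ and $\mathsf{s}_A(x) = x + 1$. For the term $t = \mathsf{f}(x,\mathsf{s}(x))$ and the assignment $\alpha(x) = 2$ we obtain
\[
[\alpha]_\mathcal{A}(t) = \mathsf{f}_A(\alpha(x),\mathsf{s}_A(\alpha(x))) = 2 + (2 + 1) = 5 .
\]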
In order to pave the way for incremental polynomial termination in
Section~\ref{sect:incrementality}, the following definition is more
general than what is needed for direct termination proofs.
\begin{defi}
\label{def:wma}
Let $(\mathcal{A}, >, \geqslant)$ be an $\mathcal{F}$-algebra together
with two binary relations $>$ and $\geqslant$ on $A$. We say that
$(\mathcal{A},>,\geqslant)$ and a TRS $\mathcal{R}$ are
\emph{(weakly) compatible} if
$\ell >_\mathcal{A} r$ ($\ell \geqslant_\mathcal{A} r$) for each rewrite
rule $\ell \to r \in \mathcal{R}$.
An interpretation function $f_A$ is called strictly (weakly)
monotone if it is monotone with respect to $>$ ($\geqslant$).
The triple $(\mathcal{A}, >, \geqslant)$ (or just $\mathcal{A}$ if $>$
and $\geqslant$ are clear from the context) is a
\emph{weakly (strictly) monotone $\mathcal{F}$-algebra}
if $>$ is well-founded,
${> \cdot \geqslantslant} \subseteq {>}$ and for each $f \in \mathcal{F}$,
$f_A$ is \emph{weakly (strictly) monotone}.
It is said to be an
\emph{extended monotone $\mathcal{F}$-algebra} if it is both weakly
monotone and strictly monotone.
Finally, we call $(\mathcal{A}, >, \geqslantslant)$ a
\emph{well-founded monotone $\mathcal{A}$-algebra} if $>$ is a
well-founded order on $A$, $\geqslantslant$ is its
reflexive closure, and each interpretation function
is strictly monotone.
\end{defi}
It is well-known that well-founded monotone algebras provide a complete
characterization of termination.
\begin{thm}
A TRS is terminating if and only if it is compatible with a well-founded
monotone algebra.
\qed
\end{thm}
\begin{defi}
\label{def:PolyInt_N}
A \emph{polynomial interpretation over $\mathbb{N}$} for a signature
$\mathcal{F}$ consists of a polynomial $f_\mathbb{N} \in \mathbb{Z}[\seq{x}]$ for every
$n$-ary function symbol $f \in \mathcal{F}$ such that for all
$f \in \mathcal{F}$ the following two properties are satisfied:
\begin{enumerate}
\item
\emph{well-definedness}:
$f_\mathbb{N}(\seq{x}) \in \mathbb{N}$ for all $\seq{x} \in \mathbb{N}$,
\item
\emph{strict monotonicity} of $f_\mathbb{N}$ in all arguments with respect
to $>_\mathbb{N}$, the standard order on $\mathbb{N}$.
\end{enumerate}
Due to well-definedness, each of the polynomials $f_\mathbb{N}$ induces
a function from $\mathbb{N}^n$ to $\mathbb{N}$. Hence, the pair
$\mathcal{N} = (\mathbb{N}, \{ f_\mathbb{N} \}_{f \in \mathcal{F}})$ constitutes an
$\mathcal{F}$-algebra over the carrier $\mathbb{N}$.
Now $(\mathcal{N}, >_\mathbb{N}, \geqslant_\mathbb{N})$,
where $\geqslant_\mathbb{N}$ is the reflexive closure of $>_\mathbb{N}$, constitutes a
well-founded monotone algebra, and we say that a polynomial
interpretation over $\mathbb{N}$ is \emph{compatible} with a TRS
$\mathcal{R}$ if the well-founded monotone algebra
$(\mathcal{N}, >_\mathbb{N}, \geqslant_\mathbb{N})$
is compatible with $\mathcal{R}$. Finally, a TRS is
\emph{polynomially terminating over $\mathbb{N}$} if it admits a
compatible polynomial interpretation over $\mathbb{N}$.
\end{defi}
In the sequel, we often identify a polynomial
interpretation with its associated $\mathcal{F}$-algebra.
\begin{rem}
\label{rem:isom}
In principle, one could take any set $\mathbb{N}_m$ (or even $\mathbb{Z}_m$) instead of
$\mathbb{N}$ as the carrier for polynomial interpretations. However, it is
well-known~\cite{TeReSe,CMTU05} that all these sets are
order-isomorphic to $\mathbb{N}$ and hence do not change the class
of polynomially terminating TRSs.
In other words, a TRS $\mathcal{R}$ is polynomially terminating over $\mathbb{N}$
if and only if it is polynomially terminating over $\mathbb{N}_m$.
Thus, we can restrict to $\mathbb{N}$ as carrier without loss of generality.
\end{rem}
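To make the order isomorphism mentioned in the remark explicit (a
standard observation, recalled here for convenience), the map
$x \mapsto x - m$ is an order isomorphism from $(\mathbb{N}_m, >_\mathbb{N})$ to
$(\mathbb{N}, >_\mathbb{N})$, and given an interpretation $\{ f_{\mathbb{N}_m} \}_{f \in \mathcal{F}}$
over the carrier $\mathbb{N}_m$ that is compatible with $\mathcal{R}$, the
polynomials
\[
f_\mathbb{N}(\seq{x}) = f_{\mathbb{N}_m}(x_1 + m, \dots, x_n + m) - m
\]
constitute a compatible polynomial interpretation over $\mathbb{N}$, and vice
versa.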
The following simple criterion for strict monotonicity of a
univariate quadratic polynomial will be used in
Sections~\ref{sect:nvsr} and \ref{sect:nrvsq}.
\begin{lem}
\label{lem:nquadmon}
The quadratic polynomial $f_\mathbb{N}(x) = ax^2 + bx + c$
with $a, b, c \in \mathbb{Z}$ is strictly monotone and well-defined if and only
if $a > 0$, $c \geqslant 0$, and $a + b > 0$.
\qed
\end{lem}
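To illustrate the criterion, consider the polynomial $2x^2 - x$, which
will reappear in Section~\ref{sect:nvsr}: here $a = 2 > 0$,
$c = 0 \geqslant 0$ and $a + b = 1 > 0$, so it is well-defined and
strictly monotone over $\mathbb{N}$; indeed,
\[
f_\mathbb{N}(x+1) - f_\mathbb{N}(x) = 2(x+1)^2 - (x+1) - 2x^2 + x = 4x + 1 \geqslant 1
\]
for all $x \in \mathbb{N}$.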
Now if one wants to extend the notion of polynomial interpretations to
the rational or real numbers, the main problem one is confronted with
is the non-well-foundedness of these domains with respect to the standard
orders $>_\mathbb{Q}$ and $>_\mathbb{R}$. In \cite{H01,L05}, this problem is
overcome by replacing these orders with new non-total orders
$>_{\mathbb{R},\delta}$ and
$>_{\mathbb{Q},\delta}$, the first of which is defined as
follows: given some fixed positive real number $\delta$,
\[
x >_{\mathbb{R},\delta} y \quad:\iff\quad
x-y \geqslant_\mathbb{R} \delta \quad \text{for all $x, y \in \mathbb{R}$.}
\]
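For instance, for $\delta = \frac{1}{2}$ we have
\[
2 >_{\mathbb{R},\delta} \frac{3}{2}
\quad\text{since}\quad 2 - \frac{3}{2} \geqslant_\mathbb{R} \frac{1}{2},
\qquad\text{whereas neither}\quad
\frac{7}{4} >_{\mathbb{R},\delta} \frac{3}{2}
\quad\text{nor}\quad
\frac{3}{2} >_{\mathbb{R},\delta} \frac{7}{4}
\]
holds; in particular, $>_{\mathbb{R},\delta}$ is indeed not total.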
Analogously, one defines $>_{\mathbb{Q},\delta}$ on $\mathbb{Q}$. Since every step with
respect to $>_{\mathbb{R},\delta}$ ($>_{\mathbb{Q},\delta}$) decreases the value by at
least $\delta$, these orders are well-founded on subsets of $\mathbb{R}$ ($\mathbb{Q}$)
that are bounded from below. Therefore, any set $\mathbb{R}_m$ ($\mathbb{Q}_m$) could be used
as carrier for polynomial interpretations over $\mathbb{R}$ ($\mathbb{Q}$).
However, without loss of generality we may restrict to
$\mathbb{R}_0$ ($\mathbb{Q}_0$) because the main argument of
Remark~\ref{rem:isom} also applies to polynomials over $\mathbb{R}$
($\mathbb{Q}$), as is already mentioned in \cite{L05}.
\begin{defi}
\label{def:PolyInt_R}
A \emph{polynomial interpretation over $\mathbb{R}$} for a signature
$\mathcal{F}$ consists of a polynomial $f_\mathbb{R} \in \mathbb{R}[\seq{x}]$ for every
$n$-ary function symbol $f \in \mathcal{F}$ and a positive real
number $\delta$ such that
$f_\mathbb{R}$ is well-defined over $\mathbb{R}_0$, i.e.,
$f_\mathbb{R}(\seq{x}) \in \mathbb{R}_0$ for all $\seq{x} \in \mathbb{R}_0$.
\end{defi}
Analogously, one defines polynomial interpretations over $\mathbb{Q}$
by the obvious adaptation of the definition above.
Let $D \in \{ \mathbb{Q}, \mathbb{R} \}$.
As for polynomial interpretations over $\mathbb{N}$,
the pair $\mathcal{D} = (D_0, \{ f_D \}_{f \in \mathcal{F}})$ constitutes
an $\mathcal{F}$-algebra over the carrier $D_0$ due to the
well-definedness of all interpretation functions.
Together with $>_{D_0,\delta}$ and $\geqslant_{D_0}$,
the restrictions of $>_{D,\delta}$ and $\geqslant_D$ to $D_0$,
we obtain an algebra
$(\mathcal{D}, >_{D_0,\delta}, \geqslant_{D_0})$, where
$>_{D_0,\delta}$ is well-founded (on $D_0$) and
${>_{D_0,\delta} \cdot \geqslant_{D_0}} \subseteq {>_{D_0,\delta}}$.
Hence, if for each $f \in \mathcal{F}$, $f_D$ is weakly (strictly)
monotone,
that is, monotone with respect to $\geqslant_{D_0}$ ($>_{D_0,\delta}$),
then $(\mathcal{D}, >_{D_0,\delta}, \geqslant_{D_0})$
is a weakly (strictly) monotone $\mathcal{F}$-algebra.
However, unlike for polynomial interpretations over $\mathbb{N}$,
strict monotonicity of $(\mathcal{D}, >_{D_0,\delta}, \geqslant_{D_0})$
does not entail weak monotonicity as it can very well be the
case that an interpretation function is monotone with respect to
$>_{D_0,\delta}$ but not with respect to $\geqslant_{D_0}$.
\begin{defi}
Let $D \in \{ \mathbb{Q}, \mathbb{R} \}$. A polynomial interpretation over $D$
is said to be \emph{weakly (strictly) monotone} if the algebra
$(\mathcal{D}, >_{D_0,\delta}, \geqslant_{D_0})$ is weakly (strictly)
monotone. Similarly, we say that a polynomial interpretation over $D$ is
\emph{(weakly) compatible} with a TRS $\mathcal{R}$ if
the algebra $(\mathcal{D}, >_{D_0,\delta}, \geqslant_{D_0})$ is (weakly)
compatible with~$\mathcal{R}$.
Finally, a TRS $\mathcal{R}$ is \emph{polynomially terminating over $D$}
if there exists a polynomial interpretation over $D$ that is both
compatible with $\mathcal{R}$ and strictly monotone.
\end{defi}
We conclude this section with a more useful characterization of
monotonicity with respect to the orders $>_{\mathbb{R}_0,\delta}$ and
$>_{\mathbb{Q}_0,\delta}$ than the one obtained by specializing
Definition~\ref{def:wma}.
To this end, we note that a function $f\colon \mathbb{R}_0^n \to \mathbb{R}_0$ is
strictly monotone in its $i$-th argument with respect to
$>_{\mathbb{R}_0,\delta}$ if and only if
$f(x_1,\dots,x_i+h,\dots,x_n) - f(x_1,\dots,x_i,\dots,x_n) \geqslant_\mathbb{R} \delta$
for all $\seq{x}, h \in \mathbb{R}_0$ with $h \geqslant_\mathbb{R} \delta$.
From this and from the analogous characterization of
$>_{\mathbb{Q}_0,\delta}$-monotonicity, it is easy to derive the following
lemmata, which will be used in Sections~\ref{sect:nrvsq}
and~\ref{sect:incrementality}.
\begin{lem}
\label{lem:rlinmon}
\label{lem:qlinmon}
\label{lem:qrlinmon}
For $D \in \{ \mathbb{Q}, \mathbb{R} \}$ and $\delta \in D_0$ with $\delta > 0$,
the linear polynomial
$f_D(\seq{x}) = a_nx_n + \cdots + a_1x_1 + a_0$
in $D[\seq{x}]$ is monotone in all arguments with respect to
$>_{D_0,\delta}$ and well-defined if and only if $a_0 \geqslant 0$ and
$a_i \geqslant 1$ for all $i \in \{ 1, \dots, n \}$.
\qed
\end{lem}
\begin{lem}
\label{lem:qrquadmon}
For $D \in \{ \mathbb{Q}, \mathbb{R} \}$ and $\delta \in D_0$ with $\delta > 0$,
the quadratic polynomial $f_D(x) = ax^2 + bx + c$ in $D[x]$ is
monotone with respect to $>_{D_0,\delta}$ and
well-defined if and only if $a > 0$, $c \geqslant 0$,
$a\delta + b \geqslant 1$, and $b \geqslant 0$ or $4ac - b^2 \geqslant 0$.
\qed
\end{lem}
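For instance, with $\delta = 1$ the two quadratic polynomials employed in
Section~\ref{sect:nrvsq} pass this test: for $x^2$ we have $a = 1 > 0$,
$c = 0 \geqslant 0$, $a\delta + b = 1 \geqslant 1$ and $b = 0 \geqslant 0$,
while for $3x^2 - 2x + 1$ we have $a = 3 > 0$, $c = 1 \geqslant 0$,
$a\delta + b = 1 \geqslant 1$ and $4ac - b^2 = 8 \geqslant 0$.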
In the remainder of this article we will sometimes use the term
``polynomial interpretations with integer coefficients'' as a synonym
for polynomial interpretations over $\mathbb{N}$. Likewise, the term
``polynomial interpretations with real (rational) coefficients'' refers
to polynomial interpretations over $\mathbb{R}$ ($\mathbb{Q}$).
\begin{rem}
\label{Lucas}
Lucas~\cite{L06,L07} considers a different definition of
polynomial termination over $\mathbb{R}$ ($\mathbb{Q}$). He allows an
arbitrary subset $A \subseteq \mathbb{R}$ ($A \subseteq \mathbb{Q}$) as
interpretation domain,
provided it is bounded from below and unbounded from above.
The definition of well-definedness is modified accordingly.
According to his definition, polynomial termination over $\mathbb{N}$
trivially implies polynomial termination over $\mathbb{R}$ (and $\mathbb{Q}$) since
one can take $A = \mathbb{N} \subseteq \mathbb{R}$ and $\delta = 1$, in which
case the induced order $>_{A,\delta}$ is the same as the standard
order on $\mathbb{N}$. Our definitions are based on the understanding
that the interpretation domain together with the underlying order
determine whether one speaks of polynomial interpretations over the
reals, rationals, or integers.
As a consequence, several of the new results obtained in this paper
do not hold in the setting of \cite{L06,L07}.
\end{rem}
\section{Polynomial Termination over the Reals vs.\ the Rationals}
\label{sect:qvsr}
In this section we show that polynomial termination over $\mathbb{Q}$
implies polynomial termination over $\mathbb{R}$. The proof is based upon
the fact that polynomials induce continuous functions, whose
behavior at irrational points is completely defined by the values
they take at rational points.
\begin{lem}
\label{lem:continuity}
Let $f\colon \mathbb{R}^k \to \mathbb{R}$ be continuous on $\mathbb{R}^k$.
If $f(\seq[k]{x}) \geqslant 0$ for all $\seq[k]{x} \in \mathbb{Q}_0$, then
$f(\seq[k]{x}) \geqslant 0$ for all $\seq[k]{x} \in \mathbb{R}_0$.
\end{lem}
\proof
Let $\vec{x} = (\seq[k]{x}) \in \mathbb{R}_0^k$ and let
$(\vec{x}_n)_{n \in\mathbb{N}}$ be a sequence of vectors of non-negative
rational numbers $\vec{x}_n \in \mathbb{Q}_0^k$ whose limit is
$\vec{x}$. Such a sequence exists because $\mathbb{Q}^k$ is dense in
$\mathbb{R}^k$. Then
\[
f(\vec{x}) = f(\lim_{n \to \infty} \vec{x}_n) =
\lim_{n \to \infty} f(\vec{x}_n)
\]
by continuity of $f$. Thus, $f(\vec{x})$ is the limit of
$(f(\vec{x}_n))_{n \in \mathbb{N}}$, which is a sequence of non-negative real
numbers by assumption. Hence, $f(\vec{x})$ is non-negative, too.
\qed
\begin{thm}
\label{thm:qvsr}
If a TRS is polynomially terminating over $\mathbb{Q}$, then it is
also polynomially terminating over $\mathbb{R}$.
\end{thm}
\proof
Let $\mathcal{R}$ be a TRS over the signature $\mathcal{F}$ that is
polynomially terminating over $\mathbb{Q}$.
So there exists some polynomial interpretation $\mathcal{I}$ over $\mathbb{Q}$
consisting of a positive rational number $\delta$ and a
polynomial $f_\mathbb{Q} \in \mathbb{Q}[\seq{x}]$ for every $n$-ary function symbol
$f \in \mathcal{F}$ such that:
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item
for all $n$-ary $f \in \mathcal{F}$,
$f_\mathbb{Q}(\seq{x}) \geqslant 0$ for all $\seq{x} \in \mathbb{Q}_0$,
\item
for all $f \in \mathcal{F}$,
$f_\mathbb{Q}$ is strictly monotone with respect to $>_{\mathbb{Q}_0,\delta}$ in
all arguments,
\item
for every rewrite rule $\ell \to r \in \mathcal{R}$,
$P_\ell >_{\mathbb{Q}_0,\delta} P_r$ for all
$\seq[m]{x} \in \mathbb{Q}_0$.
\end{enumerate}
Here $P_\ell$ ($P_r$) denotes the polynomial
associated with $\ell$ ($r$) and the variables
$\seq[m]{x}$ are those occurring in $\ell \to r$.
Next we note that all three conditions can be expressed as quantified
polynomial inequalities of the shape
``$P(\seq[k]{x}) \geqslant 0$ for all $\seq[k]{x} \in \mathbb{Q}_0$'' for
some polynomial $P$ with rational coefficients. This is easy to see
for the first and third condition. As to the second condition,
the function $f_\mathbb{Q}$ is strictly monotone in its $i$-th argument with
respect to $>_{\mathbb{Q}_0,\delta}$ if and only if
$f_\mathbb{Q}(x_1,\dots,x_i+h,\dots,x_n) - f_\mathbb{Q}(x_1,\dots,x_i,\dots,x_n)
\geqslant \delta$ for all $\seq{x}, h \in \mathbb{Q}_0$ with $h \geqslant \delta$,
which is equivalent to
\[
f_\mathbb{Q}(x_1,\dots,x_i+\delta+h,\dots,x_n) - f_\mathbb{Q}(x_1,\dots,x_i,\dots,x_n)
- \delta \geqslant 0
\]
for all $\seq{x}, h \in \mathbb{Q}_0$. From
Lemma~\ref{lem:continuity} and the fact that polynomials induce continuous
functions we infer that all these polynomial inequalities do not only
hold in $\mathbb{Q}_0$ but also in $\mathbb{R}_0$.
Hence, the polynomial interpretation $\mathcal{I}$
proves termination over $\mathbb{R}$.
\qed
\begin{rem}\label{rem:qvsr}
Not only does the result established above
show that polynomial termination over $\mathbb{Q}$ implies polynomial termination
over $\mathbb{R}$, but it even reveals that the same interpretation applies.
\end{rem}
We conclude this section by showing that
for polynomial interpretations over $\mathbb{R}$ it suffices to consider real
\emph{algebraic}\footnote{A real number is said to be algebraic if
it is a root of a non-zero polynomial in one variable with rational
coefficients.}
numbers as interpretation domain.
Concerning the use of real algebraic numbers in polynomial
interpretations, in \cite[Section~6]{L07} it is shown that it suffices to
consider polynomials with real algebraic coefficients as interpretations
of function symbols.
Now the obvious question is whether it is also sufficient to consider
only the (non-negative) real algebraic numbers $\mathbb{R}_{\m{alg}}$ instead of
the entire set $\mathbb{R}$ of real numbers as interpretation domain. We give an
affirmative answer to this question by extending the result of \cite{L07}.
\begin{thm}
A finite TRS is polynomially terminating over $\mathbb{R}$ if and only if
it is polynomially terminating over $\mathbb{R}_{\m{alg}}$.
\end{thm}
\proof
Let $\mathcal{R}$ be a TRS over the signature $\mathcal{F}$ that is
polynomially terminating over $\mathbb{R}$.
There exists a positive real number
$\delta$ and a polynomial $f_\mathbb{R} \in \mathbb{R}[\seq{x}]$ for every
$n$-ary function symbol $f \in \mathcal{F}$ such that:
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item
for all $n$-ary $f \in \mathcal{F}$,
$f_\mathbb{R}(\seq{x}) \geqslant 0$ for all $\seq{x} \in \mathbb{R}_0$,
\item
for all $f \in \mathcal{F}$,
$f_\mathbb{R}$ is strictly monotone with respect to $>_{\mathbb{R}_0,\delta}$ in
all arguments,
\item
for every rewrite rule $\ell \to r \in \mathcal{R}$,
$P_\ell >_{\mathbb{R}_0,\delta} P_r$ for all
$\seq[m]{x} \in \mathbb{R}_0$.
\end{enumerate}
Next we treat $\delta$ as a variable and replace all coefficients of
the polynomials in $\{ f_\mathbb{R} \mid f \in \mathcal{F} \}$
by distinct variables
$\seq[j]{c}$. Thus, for each $n$-ary function symbol $f \in \mathcal{F}$,
its interpretation function is a parametric polynomial
$f_\mathbb{R} \in \mathbb{Z}[\seq{x},\seq[j]{c}] \subseteq
\mathbb{Z}[\seq{x},\seq[j]{c},\delta]$, where all non-zero
coefficients are $1$. As a consequence, we claim that all three
conditions listed above can be expressed as (conjunctions of) quantified
polynomial inequalities of the shape
\begin{equation}
\label{eq:polyconstraints}
p(\seq{x},\seq[j]{c},\delta) \geqslant 0
\quad
\text{for all $\seq{x} \in \mathbb{R}_0$}
\end{equation}
for some polynomial $p \in \mathbb{Z}[\seq{x},\seq[j]{c},\delta]$. This is
easy to see for the first condition.
For the third condition it is a direct consequence of the
nature of the interpretation functions and the usual closure properties
of polynomials. For the second condition we additionally need the fact
that $f_\mathbb{R}$ is strictly monotone in its $i$-th argument with
respect to $>_{\mathbb{R}_0,\delta}$ if and only if
$
f_\mathbb{R}(x_1,\dots,x_i+\delta+h,\dots,x_n) - f_\mathbb{R}(x_1,\dots,x_i,\dots,x_n)
- \delta \geqslant 0
$
for all $\seq{x}, h \in \mathbb{R}_0$. Now any
of the quantified inequalities~\eqref{eq:polyconstraints} can readily
be expressed as a formula in the language of ordered fields with
coefficients in $\mathbb{Z}$, where $\seq[j]{c}$ and $\delta$ are the only
free variables. By taking the conjunction of all these formulas,
existentially quantifying $\delta$ and adding the conjunct $\delta > 0$,
we obtain a formula $\Phi$ in the language of ordered fields with free
variables $\seq[j]{c}$ and coefficients in $\mathbb{Z}$ (as $\mathcal{R}$
and $\mathcal{F}$ are assumed to be finite). By assumption
there are coefficients
$\seq[j]{C} \in \mathbb{R}$ such that $\Phi(\seq[j]{C})$ is true in $\mathbb{R}$, i.e.,
there exists a
satisfying assignment for $\Phi$ in $\mathbb{R}$ mapping its free
variables $\seq[j]{c}$ to $\seq[j]{C} \in \mathbb{R}$. In order to prove the
theorem, we first show that there also exists a satisfying assignment
mapping each free variable to a real algebraic number. We
reason as follows. Because real closed fields admit
quantifier elimination (\cite[Theorem~2.77]{BPR06}), there exists a
quantifier-free formula $\Psi$ with free variables $\seq[j]{c}$ and
coefficients in $\mathbb{Z}$ that is $\mathbb{R}$-equivalent to $\Phi$, i.e.,
for all $\seq[j]{y} \in \mathbb{R}$, $\Phi(\seq[j]{y})$ is true in $\mathbb{R}$ if
and only if $\Psi(\seq[j]{y})$ is true in $\mathbb{R}$. Hence, by assumption,
$\Psi(\seq[j]{C})$ is true in $\mathbb{R}$.
Therefore, the sentence $\exists c_1 \cdots \exists c_j\,\Psi$
is true in $\mathbb{R}$ as well.
Since both $\mathbb{R}$ and $\mathbb{R}_{\m{alg}}$ are real closed fields with
$\mathbb{R}_{\m{alg}} \subset \mathbb{R}$ and all coefficients in this sentence
are from $\mathbb{Z} \subset \mathbb{R}_{\m{alg}}$, we may apply the Tarski-Seidenberg
transfer principle (\cite[Theorem~2.80]{BPR06}), from which we infer that
this sentence is true in $\mathbb{R}$ if
and only if it is true in $\mathbb{R}_{\m{alg}}$. So there exists an assignment
for $\Psi$ in $\mathbb{R}_{\m{alg}}$ mapping its free variables $\seq[j]{c}$ to
$\seq[j]{C'} \in \mathbb{R}_{\m{alg}}$ such that $\Psi(\seq[j]{C'})$ is true
in $\mathbb{R}_{\m{alg}}$, and hence also in $\mathbb{R}$ as $\Psi$ is a boolean
combination of atomic formulas in the variables $\seq[j]{c}$ with
coefficients in $\mathbb{Z}$. But then $\Phi(\seq[j]{C'})$ is true in $\mathbb{R}$
as well because of the $\mathbb{R}$-equivalence of $\Phi$ and $\Psi$.
Another application of the Tarski-Seidenberg transfer principle reveals
that $\Phi(\seq[j]{C'})$ is true in $\mathbb{R}_{\m{alg}}$,
and therefore the TRS $\mathcal{R}$ is polynomially terminating over
$\mathbb{R}_{\m{alg}}$ (whose formal definition is the obvious specialization of
Definition~\ref{def:PolyInt_R}). This shows that polynomial termination
over $\mathbb{R}$ implies polynomial termination over $\mathbb{R}_{\m{alg}}$.
As the reverse implication can be shown to hold by the same
technique, we conclude that polynomial termination over $\mathbb{R}$ is
equivalent to polynomial termination over $\mathbb{R}_{\m{alg}}$.
\qed
\section{Polynomial Termination over the Reals vs.\ the Integers}
\label{sect:nvsr}
Concerning the relative power of polynomial interpretations with real,
rational and integer coefficients for proving termination,
Lucas~\cite{L06} proved the following two
theorems.\footnote{The results of \cite{L06} are actually stronger, cf.\
Remark~\ref{Lucas}.}
\begin{thm}[Lucas, 2006]
\label{thm:qvsn}
There are TRSs that are polynomially terminating over $\mathbb{Q}$ but not
over $\mathbb{N}$.
\qed
\end{thm}
\begin{thm}[Lucas, 2006]
\label{thm:rvsq}
There are TRSs that are polynomially terminating over $\mathbb{R}$ but not
over $\mathbb{Q}$ or $\mathbb{N}$.
\qed
\end{thm}
Hence, extending the coefficient domain from the integers to the
rational numbers makes it possible to prove polynomial termination of
TRSs that cannot be handled with integer coefficients alone. Moreover, a
similar statement holds for the extension of the coefficient domain from
the rational numbers to the real numbers.
Based on these results and the
fact that we have the strict inclusions $\mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R}$,
it is tempting to believe that polynomial interpretations with real
coefficients properly subsume polynomial interpretations with rational
coefficients, which in turn properly subsume polynomial interpretations
with integer coefficients.
Indeed, the former proposition holds according to
Theorems~\ref{thm:qvsr} and~\ref{thm:rvsq}.
However, the latter proposition does not hold,
as will be shown in this section.
In particular, we present a TRS that can be proved
terminating by a polynomial interpretation with integer coefficients, but
cannot be proved terminating by a polynomial interpretation
over the reals or rationals.
\subsection{Motivation}
In order to motivate the construction of this particular TRS,
let us first observe that from the viewpoint of number theory there is a
fundamental difference between the integers and the real or rational
numbers. More precisely, the integers are an example of a discrete domain,
whereas both the real and rational numbers are \emph{dense}\footnote{
Given two distinct real (rational) numbers $a$ and $b$, there exists a
real (rational) number $c$ in between.}
domains. In the context of polynomial interpretations,
the consequences of this major distinction are best explained by an
example. To this end, we consider the polynomial function
$x \mapsto 2x^2 - x$ depicted in
Figure~\ref{fig:nvsrpoly} and assume that we want to use it as
the interpretation of some unary function symbol. Now the point is that
this function is permissible in a polynomial interpretation over $\mathbb{N}$
as it is both non-negative and strictly monotone over the natural
numbers. However, viewing it
as a function over a
real (rational) variable, we observe that non-negativity is violated
in the open interval $(0,\frac{1}{2})$ (and monotonicity requires a
properly chosen value for $\delta$). Hence, the polynomial function
$x \mapsto 2x^2 - x$ is not permissible in any polynomial interpretation
over $\mathbb{R}$ ($\mathbb{Q}$).
\begin{figure}
\caption{The polynomial function $x \mapsto 2x^2 - x$.}
\label{fig:nvsrpoly}
\end{figure}
Thus, the idea is to design a TRS that enforces an
interpretation of this shape for some unary function symbol, and the tool
that can be used to achieve this is polynomial interpolation.
To this end, let us consider the following scenario, which is
fundamentally based on the assumption that some unary function symbol
$\m{f}$ is interpreted by a quadratic polynomial
$\m{f}(x) = ax^2+bx+c$ with (unknown) coefficients $a$, $b$ and $c$.
Then, by polynomial interpolation, these coefficients are uniquely
determined by the values of $\m{f}$ at three pairwise distinct
points; in this way the interpolation constraints
$\m{f}(0) = 0$, $\m{f}(1) = 1$ and $\m{f}(2) = 6$
enforce the interpretation $\m{f}(x) = 2x^2 - x$. Next we encode these
constraints in terms of the TRS $\mathcal{R}$
consisting of the following rewrite rules, where $\m{s}^n(x)$ abbreviates
$
\smash{\underbrace{\m{s}(\m{s}(\cdots\m{s}}_{\text{$n$-times}}(x)\cdots))}
$,
\begin{xalignat*}{2}
\m{s}(\m{0}) &\to \m{f}(\m{0}) \\
\m{s}^2(\m{0}) &\to \m{f}(\m{s}(\m{0})) &
\m{f}(\m{s}(\m{0})) &\to \m{0} \\
\m{s}^7(\m{0}) &\to \m{f}(\m{s}^2(\m{0})) &
\m{f}(\m{s}^2(\m{0})) &\to \m{s}^5(\m{0})
\end{xalignat*}
and consider the following two cases:
polynomial interpretations over $\mathbb{N}$ on the one hand
and polynomial interpretations over $\mathbb{R}$ on the other hand.
In the context of polynomial interpretations over $\mathbb{N}$, we observe that
if we equip the function symbols $\m{s}$ and $\m{0}$ with the (natural)
interpretations $\m{s}_{\mathbb{N}}(x) = x+1$ and $\m{0}_{\mathbb{N}} = 0$, then the TRS
$\mathcal{R}$ indeed implements the above interpolation
constraints.\footnote{In fact, one can even show that
$\m{s}_{\mathbb{N}}(x) = x+1$ is sufficient for this purpose.}
For example, the constraint
$\m{f}_\mathbb{N}(1) = 1$ is expressed by $\m{f}(\m{s}(\m{0})) \to \m{0}$ and
$\m{s}^2(\m{0}) \to \m{f}(\m{s}(\m{0}))$. The former encodes
$\m{f}_\mathbb{N}(1) > 0$,
whereas the latter encodes $\m{f}_\mathbb{N}(1) < 2$. Moreover, the rule
$\m{s}(\m{0}) \to \m{f}(\m{0})$ encodes $\m{f}_\mathbb{N}(0) < 1$, which is
equivalent to $\m{f}_\mathbb{N}(0) = 0$ in the domain of the natural numbers.
Thus, this interpolation constraint can be expressed by a single rewrite
rule, whereas the other two constraints require two rules each. Summing
up, by virtue of the method of polynomial interpolation, we have reduced
the problem of enforcing a specific interpretation for some unary
function symbol to the problem of enforcing natural semantics for the
symbols $\m{s}$ and $\m{0}$.
Next we elaborate on the ramifications of considering the
TRS $\mathcal{R}$ in the context of polynomial interpretations over
$\mathbb{R}$. To this end, let us assume that the symbols $\m{s}$ and $\m{0}$
are interpreted by $\m{s}_{\mathbb{R}}(x) = x + s_0$ and $\m{0}_{\mathbb{R}} = 0$, so
that $\m{s}$ has some
kind of \emph{successor function} semantics. Then the TRS $\mathcal{R}$
translates to the following constraints:
\begin{xalignat*}{2}
s_0 - \delta &\geqslant_\mathbb{R} \m{f}_\mathbb{R}(0) \\
2 s_0 - \delta &\geqslant_\mathbb{R} \m{f}_\mathbb{R}(s_0) &
\m{f}_\mathbb{R}(s_0) &\geqslant_\mathbb{R} 0 + \delta \\
7 s_0 - \delta &\geqslant_\mathbb{R} \m{f}_\mathbb{R}(2 s_0) &
\m{f}_\mathbb{R}(2 s_0) &\geqslant_\mathbb{R} 5 s_0 + \delta
\end{xalignat*}
Hence, $\m{f}_\mathbb{R}(0)$ is confined to the closed interval
$[0,s_0 - \delta]$, whereas $\m{f}_\mathbb{R}(s_0)$ is confined to
$[0 + \delta,2 s_0 - \delta]$ and
$\m{f}_\mathbb{R}(2 s_0)$ to $[5 s_0 + \delta,7 s_0 - \delta]$.
Basically, this means that these constraints do not uniquely determine
the function $\m{f}_\mathbb{R}$. In other words, the method of polynomial
interpolation does not readily apply to the case of polynomial
interpretations over $\mathbb{R}$. However, we can make it work. To this end, we
observe that if $s_0 = \delta$, then the above system of inequalities
actually turns into the following system of equations, which can be
viewed as a set of interpolation constraints (parameterized by $s_0$)
that uniquely determine $\m{f}_\mathbb{R}$:
\begin{xalignat*}{3}
\m{f}_\mathbb{R}(0) &= 0 &
\m{f}_\mathbb{R}(s_0) &= s_0 &
\m{f}_\mathbb{R}(2 s_0) &= 6 s_0
\end{xalignat*}
Clearly, if $s_0 = \delta = 1$, then the symbol $\m{f}$ is fixed to the
interpretation $2x^2 - x$, as was the case in the context of polynomial
interpretations over $\mathbb{N}$ (note that in the latter case
$\delta = 1$ is implicit because of the equivalence
$x >_\mathbb{N} y \Longleftrightarrow x \geqslant_\mathbb{N} y+1$).
Hence, we conclude that once we can manage to design a TRS that enforces
$s_0 = \delta$, we can again leverage the method of polynomial
interpolation to enforce a specific interpretation for some unary
function symbol.
Moreover, we remark that the actual value of $s_0$ is irrelevant
for achieving our goal. That is to say that $s_0$ only serves as a scale
factor in the interpolation constraints determining $\m{f}_\mathbb{R}$. Clearly,
if $s_0 \neq 1$, then $\m{f}_\mathbb{R}$ is not fixed to the interpretation
$2x^2 - x$; however, it is still fixed to an interpretation of the same
(desired) shape, as will become clear in the proof of
Lemma~\ref{lem:mainlemma1}.
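Concretely, writing $\m{f}_\mathbb{R}(x) = ax^2 + bx + c$, the parameterized
interpolation constraints read
\[
c = 0, \qquad a s_0^2 + b s_0 = s_0, \qquad 4 a s_0^2 + 2 b s_0 = 6 s_0,
\]
and dividing the latter two equations by $s_0 \neq 0$ yields
$a s_0 + b = 1$ and $2 a s_0 + b = 3$, hence $a s_0 = 2$ and $b = -1$.
Thus $\m{f}_\mathbb{R}(x) = \frac{2}{s_0}\,x^2 - x$, a scaled copy of $2x^2 - x$
that is negative precisely on the open interval $(0, \frac{s_0}{2})$.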
\subsection{Main Theorem}
In the previous subsection we have presented the basic method that we
use in order to show that polynomial interpretations with real or rational
coefficients do not properly subsume polynomial interpretations with
integer coefficients.
The construction presented there was based on several assumptions, the
essential ones of which are:
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
\item \label{Ass1}
The symbol $\m{s}$ had to be interpreted by
a linear polynomial of the shape $x + s_0$.
\item \label{Ass2}
The condition $s_0 = \delta$ was required to hold.
\item \label{Ass3}
The function symbol $\m{f}$ had to be interpreted by a quadratic
polynomial.
\end{enumerate}
Now the point is that one can get rid of all these assumptions
by adding suitable rewrite rules to the TRS $\mathcal{R}$. The
resulting TRS will be referred to as $\mathcal{R}_1$, and it
consists of the rewrite rules given in Table~\ref{tab:trs_r1}.
\begin{table}[tb]
\begin{center}
\fbox{\begin{minipage}[t]{72mm}
\begin{align}
\m{s}(\m{0}) &\to \m{f}(\m{0}) \label{l1} \\
\m{s}^2(\m{0}) &\to \m{f}(\m{s}(\m{0})) \label{l2} \\
\m{s}^7(\m{0}) &\to \m{f}(\m{s}^2(\m{0})) \label{l3} \\
\m{f}(\m{s}(\m{0}))\vphantom{^2} &\to \m{0} \label{l4} \\
\m{f}(\m{s}^2(\m{0})) &\to \m{s}^5(\m{0}) \label{l5} \\
\m{f}(\m{s}^2(x)) &\to \m{h}(\m{f}(x),\m{g}(\m{h}(x,x))) \label{l6}
\end{align}
\end{minipage}
\begin{minipage}[t]{72mm}
\begin{align}
\m{f}(\m{g}(x)) &\to \m{g}(\m{g}(\m{f}(x))) \label{r1} \\
\m{g}(\m{s}(x))\vphantom{^2} &\to \m{s}(\m{s}(\m{g}(x))) \label{r2} \\
\m{g}(x)\vphantom{^2} &\to \m{h}(x,x) \label{r3} \\
\m{s}(x)\vphantom{^2} &\to \m{h}(\m{0},x) \label{r4} \\
\m{s}(x)\vphantom{^2} &\to \m{h}(x,\m{0}) \label{r5} \\
\m{h}(\m{f}(x),\m{g}(x))\vphantom{^2} &\to \m{f}(\m{s}(x)) \label{r6}
\end{align}
\end{minipage}}
\end{center}
\caption{The TRS $\mathcal{R}_1$.}
\label{tab:trs_r1}
\end{table}
The rewrite rules \eqref{r1} and \eqref{r2} serve
the purpose of ensuring the first of the above items. Informally,
\eqref{r2} constrains the interpretation of the symbol $\m{s}$ to a
linear polynomial by simple reasoning about the degrees of the left- and
right-hand side polynomials, and \eqref{r1} does the same thing with
respect to $\m{g}$. Because both interpretations are linear, compatibility
with \eqref{r2} can only be achieved if the leading coefficient of
the interpretation of $\m{s}$ is one.
Concerning item~\eqref{Ass3} above, we remark that the tricky part is
to enforce the upper bound of two on the degree of the polynomial
$\m{f}_\mathbb{R}$
that interprets the symbol $\m{f}$. To this end, we make the
following observation. If
$\m{f}_\mathbb{R}$
is at most quadratic, then
the function
$\m{f}_\mathbb{R}(x + s_0) - \m{f}_\mathbb{R}(x)$
is at most
linear; i.e., there is a linear function
$\m{g}_\mathbb{R}(x)$
such that
$\m{g}_\mathbb{R}(x) > \m{f}_\mathbb{R}(x + s_0) - \m{f}_\mathbb{R}(x)$,
or equivalently,
$\m{f}_\mathbb{R}(x) + \m{g}_\mathbb{R}(x) > \m{f}_\mathbb{R}(x + s_0)$,
for all values of $x$. This can be encoded in terms of
rule~\eqref{r6}
as soon as the interpretation of $\m{h}$ corresponds to addition of two
numbers. And this is exactly the purpose of rules \eqref{r3},
\eqref{r4} and \eqref{r5}. More precisely, by linearity of the
interpretation of $\m{g}$, we infer from \eqref{r3} that the
interpretation of $\m{h}$ must
have the linear shape $h_1 x + h_2 y + h_0$.
Furthermore, compatibility with \eqref{r4} and \eqref{r5} implies
$h_1 = h_2 = 1$ due to item~\eqref{Ass1} above. Hence, the interpretation
of $\m{h}$ is $x + y + h_0$, and it really models addition of two numbers
(modulo adding a constant).
Next we comment on how to enforce the second of
the above assumptions. To this end, we remark that the hard part is to
enforce the condition $s_0 \leqslant \delta$. The idea is as
follows. First, we consider rule~\eqref{l2}, observing that if
$\m{f}$ is interpreted by a quadratic polynomial $\m{f}_\mathbb{R}$ and $\m{s}$
by the linear polynomial $x + s_0$, then (the interpretation of) its
right-hand side will eventually become larger than its left-hand side
with growing $s_0$, thus violating compatibility.
In this way, $s_0$ is bounded from above, and the faster the growth of
$\m{f}_\mathbb{R}$, the lower the bound. The problem with
this statement, however, is that it is only true if $\m{f}_\mathbb{R}$ is fixed
(which is a priori not the case);
otherwise, for any given value of $s_0$, one can always find a quadratic
polynomial $\m{f}_\mathbb{R}$
such that compatibility with \eqref{l2} is satisfied.
The parabolic curve associated with $\m{f}_\mathbb{R}$ only has to be
flat enough. So in order to prevent this, we have to somehow control the
growth of $\m{f}_\mathbb{R}$. Now that is where rule~\eqref{l6}
comes into play, which basically expresses that if one increases the
argument of $\m{f}_\mathbb{R}$
by a certain amount (i.e., $2 s_0$), then the value
of the function is guaranteed to increase by a certain minimum amount,
too. Thus, this rule establishes a lower bound on the growth of
$\m{f}_\mathbb{R}$. And it turns out that if $\m{f}_\mathbb{R}$
has just the right amount of growth,
then we can readily establish the desired upper bound $\delta$ for $s_0$.
Finally, having presented all the relevant details of our construction,
it remains to formally prove our main claim that the TRS $\mathcal{R}_1$
is polynomially terminating over $\mathbb{N}$ but not over $\mathbb{R}$ or $\mathbb{Q}$.
\begin{lem}
The TRS $\mathcal{R}_1$ is polynomially terminating over $\mathbb{N}$.
\end{lem}
\proof
We consider the following interpretation:
\[
\m{0}_\mathbb{N} = 0
\qquad \m{s}_\mathbb{N}(x) = x + 1
\qquad \m{f}_\mathbb{N}(x) = 2x^2 - x
\qquad \m{g}_\mathbb{N}(x) = 4 x + 4
\qquad \m{h}_\mathbb{N}(x,y) = x + y
\]
Note that the polynomial $2x^2 - x$ is a permissible interpretation
function as it is both non-negative and strictly monotone over the natural
numbers by Lemma~\ref{lem:nquadmon}
(cf.~Figure~\ref{fig:nvsrpoly}). The
rewrite rules of $\mathcal{R}_1$ are compatible with this interpretation
because the resulting inequalities
\begin{xalignat*}{2}
1 &>_\mathbb{N} 0 &
32x^2 + 60x + 28 &>_\mathbb{N} 32x^2 - 16x + 20 \\
2 &>_\mathbb{N} 1 &
4x + 8 &>_\mathbb{N} 4x + 6 \\
7 &>_\mathbb{N} 6 &
4x+4 &>_\mathbb{N} 2x \\
1 &>_\mathbb{N} 0 &
x+1 &>_\mathbb{N} x \\
6 &>_\mathbb{N} 5 &
x+1 &>_\mathbb{N} x \\
2x^2 + 7x + 6 &>_\mathbb{N} 2x^2 + 7x + 4 &
2x^2 + 3x + 4 &>_\mathbb{N} 2x^2 + 3x + 1
\end{xalignat*}
are clearly satisfied for all natural numbers $x$.
\qed
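As the above computations are routine but error-prone by hand, the
following Python sketch (ours, not part of the proof; it assumes the
SymPy library is available) re-checks them mechanically. For every rule
it verifies that the polynomial $P_\ell - P_r - 1$ has non-negative
coefficients only, which is a sufficient condition for
$P_\ell >_\mathbb{N} P_r$ for all $x \in \mathbb{N}$; well-definedness and strict
monotonicity of the interpretation functions were addressed above.
\begin{verbatim}
# Sanity check of the interpretation of R_1 over N (illustrative only).
from sympy import symbols, expand, Poly

x = symbols('x')

zero = 0
s = lambda t: t + 1            # s_N(x) = x + 1
f = lambda t: 2*t**2 - t       # f_N(x) = 2x^2 - x
g = lambda t: 4*t + 4          # g_N(x) = 4x + 4
h = lambda t, u: t + u         # h_N(x,y) = x + y

def it(fun, n, t):             # n-fold application of fun to t
    for _ in range(n):
        t = fun(t)
    return t

rules = [
    (s(zero), f(zero)),                   # s(0) -> f(0)
    (it(s, 2, zero), f(s(zero))),         # s^2(0) -> f(s(0))
    (it(s, 7, zero), f(it(s, 2, zero))),  # s^7(0) -> f(s^2(0))
    (f(s(zero)), zero),                   # f(s(0)) -> 0
    (f(it(s, 2, zero)), it(s, 5, zero)),  # f(s^2(0)) -> s^5(0)
    (f(s(s(x))), h(f(x), g(h(x, x)))),    # f(s^2(x)) -> h(f(x),g(h(x,x)))
    (f(g(x)), g(g(f(x)))),                # f(g(x)) -> g(g(f(x)))
    (g(s(x)), s(s(g(x)))),                # g(s(x)) -> s^2(g(x))
    (g(x), h(x, x)),                      # g(x) -> h(x,x)
    (s(x), h(zero, x)),                   # s(x) -> h(0,x)
    (s(x), h(x, zero)),                   # s(x) -> h(x,0)
    (h(f(x), g(x)), f(s(x))),             # h(f(x),g(x)) -> f(s(x))
]

for lhs, rhs in rules:
    coeffs = Poly(expand(lhs - rhs - 1), x).all_coeffs()
    assert all(c >= 0 for c in coeffs)
print("all 12 rules of R_1 are strictly oriented over N")
\end{verbatim}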
\begin{lem}
\label{lem:mainlemma1}
The TRS $\mathcal{R}_1$ is not polynomially terminating over $\mathbb{R}$.
\end{lem}
\proof
Let us assume that $\mathcal{R}_1$ is polynomially terminating over $\mathbb{R}$
and derive a contradiction.
Compatibility with rule \eqref{r2} implies
\[
\deg(\m{g}_\mathbb{R}(x)) \cdot \deg(\m{s}_\mathbb{R}(x)) \geqslant
\deg(\m{s}_\mathbb{R}(x)) \cdot \deg(\m{s}_\mathbb{R}(x)) \cdot \deg(\m{g}_\mathbb{R}(x))
\]
As a consequence, $\deg(\m{s}_\mathbb{R}(x)) \leqslant 1$, and because
$\m{s}_\mathbb{R}$ and $\m{g}_\mathbb{R}$ must be strictly monotone, we conclude
$\deg(\m{s}_\mathbb{R}(x)) = 1$. The same reasoning applied to rule \eqref{r1}
yields $\deg(\m{g}_\mathbb{R}(x)) = 1$. Hence, the symbols $\m{s}$
and $\m{g}$ must be interpreted by linear polynomials. So
$\m{s}_\mathbb{R}(x) = s_1 x + s_0$ and $\m{g}_\mathbb{R}(x) = g_1 x + g_0$ with
$s_0, g_0 \in \mathbb{R}_0$ and, due to Lemma~\ref{lem:rlinmon},
$s_1 \geqslant_\mathbb{R} 1$ and $g_1 \geqslant_\mathbb{R} 1$.
Then the compatibility constraint imposed by rule \eqref{r2} gives rise
to the inequality
\begin{equation}
\label{eq:comp}
g_1 s_1 x + g_1 s_0 + g_0
>_{\mathbb{R}_0,\delta}
s_1^2 g_1 x + s_1^2 g_0 + s_1 s_0 + s_0
\end{equation}
which must hold for all non-negative real numbers $x$. This implies
the following condition on the respective leading coefficients:
$g_1 s_1 \geqslant_\mathbb{R} s_1^2 g_1$. Because of $s_1 \geqslant_\mathbb{R} 1$ and
$g_1 \geqslant_\mathbb{R} 1$, this can only hold if $s_1 = 1$. Hence,
$\m{s}_\mathbb{R}(x) = x + s_0$. This result simplifies \eqref{eq:comp} to
$g_1 s_0 >_{\mathbb{R}_0,\delta} 2 s_0$, which implies $g_1 s_0 >_\mathbb{R} 2 s_0$.
From this, we conclude that $s_0 >_\mathbb{R} 0$ and $g_1 >_\mathbb{R} 2$.
Now suppose that the function symbol $\m{f}$ were also interpreted by
a linear polynomial $\m{f}_\mathbb{R}$. Then we could apply the same
reasoning to rule \eqref{r1} because it is structurally
equivalent to \eqref{r2}, thus inferring $g_1 = 1$.
However, this would contradict $g_1 >_\mathbb{R} 2$;
therefore, $\m{f}_\mathbb{R}$ cannot be linear.
Next we turn our attention to the rewrite rules
\eqref{r3}, \eqref{r4} and \eqref{r5}.
Because $\m{g}_\mathbb{R}$ is linear, compatibility with \eqref{r3} constrains
the function $h\colon\mathbb{R}_0 \to \mathbb{R}_0, x\mapsto \m{h}_\mathbb{R}(x,x)$
to be at most linear. This can only be the case if
$\m{h}_\mathbb{R}$ contains no terms of degree two or higher.
In other words, $\m{h}_\mathbb{R}(x,y) = h_1 \cdot x + h_2 \cdot y + h_0$,
where $h_0 \in \mathbb{R}_0$, $h_1 \geqslant_\mathbb{R} 1$
and $h_2 \geqslant_\mathbb{R} 1$ (cf.\ Lemma~\ref{lem:rlinmon}).
Because of $\m{s}_\mathbb{R}(x) = x + s_0$, compatibility with
\eqref{r5} implies $h_1 = 1$, and compatibility with
\eqref{r4} implies $h_2 = 1$; thus, $\m{h}_\mathbb{R}(x,y) = x + y + h_0$.
Using the obtained information in the compatibility constraint associated
with rule \eqref{r6}, we get
\[
\m{g}_\mathbb{R}(x) + h_0 >_{\mathbb{R}_0,\delta} \m{f}_\mathbb{R}(x + s_0) - \m{f}_\mathbb{R}(x)
\quad \text{for all $x \in \mathbb{R}_0$.}
\]
This implies that
$\deg(\m{g}_\mathbb{R}(x) + h_0) \geqslant \deg(\m{f}_\mathbb{R}(x + s_0) - \m{f}_\mathbb{R}(x))$,
which simplifies to
$1 \geqslant \deg(\m{f}_\mathbb{R}(x)) - 1$ because $s_0 \neq 0$.
Consequently, $\m{f}_\mathbb{R}$ must be a quadratic polynomial.
Without loss of generality, let $\m{f}_\mathbb{R}(x) = ax^2 + bx +c$, subject
to the constraints: $a >_\mathbb{R} 0$ and $c \geqslant_\mathbb{R} 0$ because of
non-negativity (for all $x \in \mathbb{R}_0$), and $a\delta + b \geqslant_\mathbb{R} 1$
because $\m{f}_\mathbb{R}(\delta) >_{\mathbb{R}_0,\delta} \m{f}_\mathbb{R}(0)$
due to strict monotonicity of $\m{f}_\mathbb{R}$.
Next we consider the compatibility constraint associated with rule
\eqref{l6}, from which we deduce an important auxiliary result.
After unraveling the definitions of
$>_{\mathbb{R}_0,\delta}$ and the interpretation functions, this constraint
simplifies to
\[
4as_0x + 4a s_0^2 + 2bs_0 \geqslant_\mathbb{R} 2g_1 x + g_1 h_0 + g_0 + h_0 + \delta
\quad \text{for all $x \in \mathbb{R}_0$,}
\]
which implies the following condition on the respective leading
coefficients:
$4 a s_0 \geqslant_\mathbb{R} 2 g_1$; from this and $g_1 >_\mathbb{R} 2$, we conclude
\begin{equation}
\label{eq:as0}
a s_0 >_\mathbb{R} 1
\end{equation}
and note that $a s_0 = \m{f}'_\mathbb{R}(\frac{s_0}{2}) - \m{f}'_\mathbb{R}(0)$.
Hence, $a s_0$
expresses the change of the slopes of the tangents to $\m{f}_\mathbb{R}$ at the
points $(0,\m{f}_\mathbb{R}(0))$ and $(\frac{s_0}{2},\m{f}_\mathbb{R}(\frac{s_0}{2}))$,
and thus
\eqref{eq:as0} actually sets a lower bound on the growth of $\m{f}_\mathbb{R}$.
Now let us consider the combined compatibility constraint imposed by the
rules \eqref{l2} and \eqref{l4}, namely
$\m{0}_\mathbb{R} + 2 s_0 >_{\mathbb{R}_0,\delta} \m{f}_\mathbb{R}(\m{s}_\mathbb{R}(\m{0}_\mathbb{R}))
>_{\mathbb{R}_0,\delta} \m{0}_\mathbb{R}$, which implies
$\m{0}_\mathbb{R} + 2 s_0 \geqslant_\mathbb{R} \m{0}_\mathbb{R} + 2 \delta$
by definition of $>_{\mathbb{R}_0,\delta}$.
Thus, we conclude $s_0 \geqslant_\mathbb{R} \delta$.
In fact, we even have $s_0 = \delta$, which can be derived from
the compatibility constraint of rule \eqref{l2} using the
conditions $s_0 \geqslant_\mathbb{R} \delta$,
$a\delta + b \geqslant_\mathbb{R} 1$ and
$a s_0 + b \geqslant_\mathbb{R} 1$, the combination of the former
two conditions:
\begin{eqnarray*}
\m{0}_\mathbb{R} + 2 s_0
& >_{\mathbb{R}_0,\delta} &
\m{f}_\mathbb{R}(\m{s}_\mathbb{R}(\m{0}_\mathbb{R})) \\
\m{0}_\mathbb{R} + 2 s_0 -\delta
& \geqslant_\mathbb{R} &
\m{f}_\mathbb{R}(\m{s}_\mathbb{R}(\m{0}_\mathbb{R})) \\
& = &
a(\m{0}_\mathbb{R}+s_0)^2 + b(\m{0}_\mathbb{R} + s_0) + c \\
& = &
a\m{0}_\mathbb{R}^2 + \m{0}_\mathbb{R}(2a s_0 + b) + a s_0^2 + b s_0 + c \\
& \geqslant_\mathbb{R} &
a\m{0}_\mathbb{R}^2 + \m{0}_\mathbb{R} + a s_0^2 + b s_0 + c \\
& \geqslant_\mathbb{R} &
\m{0}_\mathbb{R} + a s_0^2 + b s_0 \\
& \geqslant_\mathbb{R} & \m{0}_\mathbb{R} + a s_0^2 + (1 - a\delta)s_0 \\
& = &
\m{0}_\mathbb{R} + a s_0(s_0 - \delta) + s_0
\end{eqnarray*}
Hence, $\m{0}_\mathbb{R} + 2 s_0 - \delta \geqslant_\mathbb{R}
\m{0}_\mathbb{R} + a s_0(s_0 - \delta) + s_0$, or equivalently,
$s_0 - \delta \geqslant_\mathbb{R} a s_0(s_0 - \delta)$. But because of
\eqref{eq:as0} and $s_0 \geqslant_\mathbb{R} \delta$, this inequality can only be
satisfied if:
\begin{equation}
\label{eq:s0=delta}
s_0 = \delta
\end{equation}
This result has immediate consequences concerning the interpretation of
the constant $\m{0}$. To this end, we consider the compatibility
constraint of rule \eqref{r4}, which simplifies to
$s_0 \geqslant_\mathbb{R} \m{0}_\mathbb{R} + h_0 + \delta$.
Because of \eqref{eq:s0=delta} and the fact that $\m{0}_\mathbb{R}$ and $h_0$
must be non-negative, we conclude $\m{0}_\mathbb{R} = h_0 = 0$.
Moreover, condition \eqref{eq:s0=delta} is the key to the proof of this
lemma. To this end, we consider the compatibility constraints
associated with the five rewrite rules \eqref{l1}~--~\eqref{l5}:
\begin{xalignat*}{2}
s_0 &>_{\mathbb{R}_0,s_0} \m{f}_\mathbb{R}(0) \\
2 s_0 &>_{\mathbb{R}_0,s_0} \m{f}_\mathbb{R}(s_0) &
\m{f}_\mathbb{R}(s_0) &>_{\mathbb{R}_0,s_0} 0 \\
7 s_0 &>_{\mathbb{R}_0,s_0} \m{f}_\mathbb{R}(2 s_0) &
\m{f}_\mathbb{R}(2 s_0) &>_{\mathbb{R}_0,s_0} 5 s_0
\end{xalignat*}
By definition of $>_{\mathbb{R}_0,s_0}$,
these inequalities give rise to the following system of equations:
\begin{xalignat*}{3}
\m{f}_\mathbb{R}(0) &= 0 &
\m{f}_\mathbb{R}(s_0) &= s_0 &
\m{f}_\mathbb{R}(2 s_0) &= 6 s_0
\end{xalignat*}
After unraveling the definition of $\m{f}_\mathbb{R}$ and substituting
$z := a s_0$, we get a system of linear equations in the unknowns $z$, $b$
and $c$
\begin{xalignat*}{3}
c &= 0 &
z + b &= 1 &
4z + 2b &= 6
\end{xalignat*}
which has the unique solution $z = 2$, $b = -1$ and $c = 0$.
Hence, $\m{f}_\mathbb{R}$ must have the shape
$\m{f}_\mathbb{R}(x) = ax^2-x = ax(x - \frac{1}{a})$ in
every compatible polynomial interpretation over $\mathbb{R}$.
However, this function is not a permissible interpretation for the
function symbol $\m{f}$ because it is not non-negative for all
$x \in \mathbb{R}_0$. In particular, it is negative in the open interval
$(0,\frac{1}{a})$; e.g. $\m{f}_\mathbb{R}(\frac{1}{2a}) = -\frac{1}{4a}$.
Hence, $\mathcal{R}_1$
is not compatible with any polynomial interpretation over $\mathbb{R}$.
\qed
\begin{rem}
In this proof the interpretation of $\m{f}$ is fixed to
$\m{f}_\mathbb{R}(x) = ax^2-x$, which violates well-definedness in $\mathbb{R}_0$.
However, this function is obviously well-defined in $\mathbb{R}_m$ for
a properly chosen negative real number $m$. So what happens if we take this
$\mathbb{R}_m$ instead of $\mathbb{R}_0$ as the carrier of a polynomial interpretation?
To this end, we observe that $\m{f}_\mathbb{R}(0) = 0$ and
$\m{f}_\mathbb{R}(\delta) = \delta(a \delta - 1) = \delta(a s_0 - 1) = \delta$.
Now let us consider some negative real number $x_0 \in \mathbb{R}_m$.
We have $\m{f}_\mathbb{R}(x_0) >_\mathbb{R} 0$ and thus
$\m{f}_\mathbb{R}(\delta) - \m{f}_\mathbb{R}(x_0) <_\mathbb{R} \delta$, even though
$\delta - x_0 \geqslant_\mathbb{R} \delta$, which means that $\m{f}_\mathbb{R}$ violates
monotonicity with respect to the order $>_{\mathbb{R}_m,\delta}$.
\end{rem}
The previous lemma, together with Theorem~\ref{thm:qvsr}, yields the
following corollary.
\begin{cor}
The TRS $\mathcal{R}_1$ is not polynomially terminating over $\mathbb{Q}$.
\qed
\end{cor}
Finally, combining the material presented in this section,
we establish the following theorem, the main result of this section.
\begin{thm}
\label{thm:nvsr}
There are TRSs that can be proved polynomially terminating over $\mathbb{N}$,
but cannot be proved polynomially terminating over $\mathbb{R}$ or $\mathbb{Q}$.
\qed
\end{thm}
We conclude this section with a remark on the actual choice of the
polynomial serving as the interpretation of the function symbol $\m{f}$.
\begin{rem}
As explained at the beginning of this section,
the TRS $\mathcal{R}_1$ was designed to enforce an interpretation for
$\m{f}$, which is permissible in a polynomial interpretation over $\mathbb{N}$
but not over $\mathbb{R}$ ($\mathbb{Q}$). The interpretation of our choice was the
polynomial $2x^2 - x$.
However, we could have chosen any other polynomial as long as it is
well-defined and strictly monotone over $\mathbb{N}$ but not over $\mathbb{R}$ ($\mathbb{Q}$).
The methods introduced in this section are
general enough to handle any such polynomial. So the actual choice
is not that important.
\end{rem}
\section{Polynomial Termination over the Integers and Reals vs.\ the
Rationals}
\label{sect:nrvsq}
This section is devoted to showing that polynomial termination over $\mathbb{N}$
and $\mathbb{R}$ does not imply polynomial termination over $\mathbb{Q}$.
The proof is constructive, so we give a concrete TRS
having the desired properties.
In order to motivate the construction underlying this particular system,
let us consider the following quantified polynomial inequality
\begin{equation*}\label{eq:motiv}
\tag{$\ast$}
\forall\,x \quad (2x^2 - x) \cdot P(a) \geqslant 0
\end{equation*}
where $P \in \mathbb{Z}[a]$ is a polynomial with integer coefficients, all of
whose roots are irrational and which is positive for some
non-negative integer value of $a$. To be concrete, let us take
$P(a) = a^2 - 2$ and try to satisfy \eqref{eq:motiv} in $\mathbb{N}$, $\mathbb{Q}_0$
and $\mathbb{R}_0$, respectively.
First, we observe that $a := \sqrt{2}$ is a satisfying assignment in
$\mathbb{R}_0$. Moreover, \eqref{eq:motiv} is also satisfiable in $\mathbb{N}$ by
assigning $a := 2$, for example, and observing that
the polynomial $2 x^2 - x$ is non-negative for all $x \in \mathbb{N}$.
However, \eqref{eq:motiv} cannot be satisfied in $\mathbb{Q}_0$ as
non-negativity of $2 x^2 - x$ does not hold for all $x \in \mathbb{Q}_0$ and $P$
has no rational roots. To sum up, \eqref{eq:motiv} is
satisfiable in $\mathbb{N}$ and $\mathbb{R}_0$ but not in $\mathbb{Q}_0$. Thus, the basic
idea now is to design a TRS containing some rewrite rule whose
compatibility constraint reduces to a polynomial inequality similar
in nature to \eqref{eq:motiv}.
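In more detail,
\[
2 \cdot \left(\frac{1}{4}\right)^2 - \frac{1}{4} = -\frac{1}{8} < 0
\qquad\text{and}\qquad
2 \cdot 1^2 - 1 = 1 > 0,
\]
so the factor $2x^2 - x$ changes sign on $\mathbb{Q}_0$, while the factor
$a^2 - 2$ is never zero for $a \in \mathbb{Q}_0$ because $\sqrt{2}$ is
irrational; hence no rational value of $a$ satisfies \eqref{eq:motiv}
for all $x \in \mathbb{Q}_0$.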
To this end, we rewrite the inequality
$(2x^2 - x) \cdot (a^2 - 2) \geqslant 0$ to
\[
2 a^2 x^2 + 2 x \geqslant 4 x^2 + a^2 x
\]
because now both the left- and right-hand side can be
viewed as a composition of several functions, each of which is strictly
monotone and well-defined. In particular, we identify the following
constituents: $\m{h}(x,y) = x + y$, $\m{r}(x) = 2 x$, $\m{p}(x) = x^2$
and $\m{k}(x) = a^2 x$. Thus, the above inequality can be written in the
form
\begin{equation*}\label{eq:motiv2}
\tag{$\ast\ast$}
\m{h}(\m{r}(\m{k}(\m{p}(x))),\m{r}(x)) \geqslant
\m{h}(\m{r}(\m{r}(\m{p}(x))),\m{k}(x))
\end{equation*}
which can easily be modeled as a rewrite rule. (Note that
$\m{r}(x)$ is not strictly necessary as $\m{r}(x) = \m{h}(x,x)$, but it
gives rise to a shorter encoding.) And then we also need rewrite rules
that enforce the desired interpretations for the function symbols
$\m{h}$, $\m{r}$, $\m{p}$ and $\m{k}$. For this purpose, we leverage the
techniques presented in the previous section, in particular the method of
polynomial interpolation. The resulting TRS will be referred to as
$\mathcal{R}_2$, and it consists of the rewrite rules
given in Table~\ref{tab:trs_r2}.
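Unfolding the constituents listed above confirms that \eqref{eq:motiv2}
is just the rewritten inequality in disguise:
\[
\m{h}(\m{r}(\m{k}(\m{p}(x))),\m{r}(x)) = 2 a^2 x^2 + 2 x
\qquad\text{and}\qquad
\m{h}(\m{r}(\m{r}(\m{p}(x))),\m{k}(x)) = 4 x^2 + a^2 x .
\]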
\begin{table}[tb]
\begin{center}
\begin{tabular}{@{}c@{\qquad}c@{}}
\begin{minipage}[t]{80mm}
\fbox{\begin{minipage}[t]{80mm}
\begin{align}
\m{f}(\m{g}(x)) &\to \m{g}^2(\m{f}(x)) \label{rule1} \\
\m{g}(\m{s}(x)) &\to \m{s}^2(\m{g}(x)) \label{rule2} \\
\m{s}(x) &\to \m{h}(\m{0},x) \label{rule4} \\
\m{s}(x) &\to \m{h}(x,\m{0}) \label{rule5} \\
\m{f}(\m{0}) &\to \m{0} \label{rulel1} \\
\m{s}^3(\m{0}) &\to \m{f}(\m{s}(\m{0})) \label{rulel2} \\
\m{f}(\m{s}(\m{0})) &\to \m{s}(\m{0}) \label{rulel4} \\
\makebox[18mm][r]{$\m{h}(\m{f}(x),\m{g}(x))$}
&\to \m{f}(\m{s}(x)) \label{rule6} \\
\m{g}(x) &\to
\makebox[36mm][l]{$\m{h}(\m{h}(\m{h}(\m{h}(x,x),x),x),x)$} \label{rulel7}
\\
\m{f}(\m{s}^2(x)) &\to \m{h}(\m{f}(x),\m{g}(\m{h}(x,x))) \label{rulel6}
\end{align}
\end{minipage}~} \\[1ex]
\fbox{\begin{minipage}[t]{55mm}
\begin{align}
\makebox[18mm][r]{$\m{s}(\m{0})$} &\to
\m{r}(\m{0}) \label{rr1} \\
\m{s}^3(\m{0}) &\to \m{r}(\m{s}(\m{0})) \label{rr2} \\
\m{r}(\m{s}(\m{0})) &\to \m{s}(\m{0}) \label{rr3} \\
\m{g}(x) &\to \m{r}(x) \label{rr4}
\end{align}
\end{minipage}~}
\end{minipage}
&
\begin{minipage}[t]{60mm}
\fbox{\begin{minipage}[t]{60mm}
\begin{align}
\m{s}(\m{0}) &\to \m{p}(\m{0}) \label{p1} \\
\m{s}^2(\m{0}) &\to \m{p}(\m{s}(\m{0})) \label{p2} \\
\m{p}(\m{s}(\m{0})) &\to \m{0} \label{p3} \\
\m{s}^5(\m{0}) &\to \m{p}(\m{s}^2(\m{0})) \label{p4} \\
\m{p}(\m{s}^2(\m{0})) &\to \m{s}^3(\m{0}) \label{p5} \\
\makebox[20mm][r]{$\m{h}(\m{p}(x),\m{g}(x))$} &\to
\makebox[20mm][l]{$\m{p}(\m{s}(x))$} \label{p6}
\end{align}
\end{minipage}~} \\[3.3ex]
\fbox{\begin{minipage}[t]{60mm}
\begin{align}
\m{s}(\m{0}) &\to \m{k}(\m{0}) \label{k1} \\
\makebox[20mm][r]{$\m{s}^2(\m{p}^2(\m{a}))$} &\to
\makebox[20mm][l]{$\m{s}(\m{k}(\m{p}(\m{a})))$} \label{k2} \\
\m{s}(\m{k}(\m{p}(\m{a}))) &\to \m{p}^2(\m{a}) \label{k3} \\
\m{g}(x) &\to \m{k}(x) \label{k4} \\
\m{a} &\to \m{0} \label{a1}
\end{align}
\end{minipage}~} \\[3.3ex]
\hspace*{-25mm}\fbox{\begin{minipage}[t]{85mm}
\begin{align}
\hspace{-2mm}
\m{s}(\m{h}(\m{r}(\m{k}(\m{p}(x))),\m{r}(x))) &\to
\makebox[25mm][l]{$\m{h}(\m{r}^2(\m{p}(x)),\m{k}(x))$} \label{mainrule}
\end{align}
\end{minipage}~}
\end{minipage} \\
\end{tabular}
\end{center}
\caption{The TRS $\mathcal{R}_2$.}
\label{tab:trs_r2}
\end{table}
Each of the blocks serves a specific purpose. The largest block
consists of the rules \eqref{rule1}~--~\eqref{rulel6}
and is basically a slightly modified version of
the TRS $\mathcal{R}_1$ of Table~\ref{tab:trs_r1}.
These rules ensure that the symbol $\m{s}$ has the semantics of a
successor function $x \mapsto x + s_0$. Moreover, for any
compatible polynomial interpretation
over $\mathbb{Q}$ ($\mathbb{R}$), it is guaranteed that $s_0$ is equal to $\delta$,
the minimal step width of the order $>_{\mathbb{Q},\delta}$.
In Section~\ref{sect:nvsr},
these conditions were identified as the key requirements
for the method of polynomial interpolation to work in this setting.
Finally, this block also enforces $\m{h}(x,y) = x + y$.
The next block, consisting of the rules \eqref{rr1}~--~\eqref{rr4}, makes
use of polynomial interpolation to achieve $\m{r}(x) = 2 x$. Likewise,
the block consisting of the rules \eqref{p1}~--~\eqref{p6} equips the
symbol $\m{p}$ with the semantics of a squaring function.
And the block \eqref{k1}~--~\eqref{a1} enforces the desired semantics
for the symbol $\m{k}$, i.e., a linear function $x \mapsto k_1 x$
whose slope $k_1$ is proportional to the square of the interpretation of
the constant $\m{a}$.
Finally, the rule \eqref{mainrule} encodes the main idea presented at the
beginning of this section (cf.\ \eqref{eq:motiv2}).
\begin{lem}\label{lem:nrvsq1}
The TRS $\mathcal{R}_2$ is polynomially terminating over $\mathbb{N}$ and $\mathbb{R}$.
\end{lem}
\proof
For polynomial termination over $\mathbb{N}$, the following interpretation
applies:
\begin{gather*}
\m{0}_\mathbb{N} = 0 \qquad
\m{s}_\mathbb{N}(x) = x + 1 \qquad
\m{f}_\mathbb{N}(x) = 3x^2 - 2x + 1 \qquad
\m{g}_\mathbb{N}(x) = 6x + 6 \\
\m{h}_\mathbb{N}(x,y) = x + y \qquad
\m{p}_\mathbb{N}(x) = x^2 \qquad
\m{r}_\mathbb{N}(x) = 2x \qquad
\m{k}_\mathbb{N}(x) = 4x \qquad
\m{a}_\mathbb{N} = 2
\end{gather*}
Note that the polynomial $3x^2 - 2x + 1$ is a permissible
interpretation function by Lemma~\ref{lem:nquadmon}.
Rule \eqref{mainrule} gives rise to the constraint
\[
8x^2 + 2x + 1 >_\mathbb{N} 4x^2 + 4x
\qquad\iff\qquad
4x^2 - 2x + 1 >_\mathbb{N} 0
\]
which holds for all $x \in \mathbb{N}$.
For polynomial termination over $\mathbb{R}$, we let $\delta = 1$
but we have to modify the interpretation as
$4x^2 - 2x + 1 >_{\mathbb{R}_0,\delta} 0$ does not hold for all $x \in \mathbb{R}_0$.
Taking $\m{a}_\mathbb{R} = \sqrt{2}$, $\m{k}_\mathbb{R}(x) = 2x$ and
the above interpretations for the other function symbols
establishes polynomial termination over $\mathbb{R}$. Note that the constraint
$
4x^2 + 2x + 1 >_{\mathbb{R}_0,\delta} 4x^2 + 2x
$
associated with rule \eqref{mainrule} trivially holds.
Moreover, the functions
$\m{f}_\mathbb{R}(x) = 3x^2 - 2x + 1$ and $\m{p}_\mathbb{R}(x) = x^2$ are
strictly monotone with respect to $>_{\mathbb{R}_0,\delta}$ due to
Lemma~\ref{lem:qrquadmon}.
\qed
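Analogously to the check for $\mathcal{R}_1$, the following Python sketch
(ours, illustrative only; it assumes SymPy) re-checks compatibility of
$\mathcal{R}_2$ with the interpretation over $\mathbb{R}_0$ given in the proof
above, using the sufficient criterion that $P_\ell - P_r - \delta$ has
non-negative coefficients only (for the rule $\m{a} \to \m{0}$ the single
coefficient is $\sqrt{2} - 1 \geqslant 0$); monotonicity and
well-definedness are not covered by this check.
\begin{verbatim}
# Sanity check of the interpretation of R_2 over R_0 with delta = 1.
from sympy import symbols, sqrt, expand, Poly

x = symbols('x')
delta = 1

zero = 0
a = sqrt(2)                     # a_R = sqrt(2)
s = lambda t: t + 1
f = lambda t: 3*t**2 - 2*t + 1
g = lambda t: 6*t + 6
h = lambda t, u: t + u
p = lambda t: t**2
r = lambda t: 2*t
k = lambda t: 2*t               # k_R(x) = 2x

rules = [
    (f(g(x)), g(g(f(x)))),
    (g(s(x)), s(s(g(x)))),
    (s(x), h(zero, x)),
    (s(x), h(x, zero)),
    (f(zero), zero),
    (s(s(s(zero))), f(s(zero))),
    (f(s(zero)), s(zero)),
    (h(f(x), g(x)), f(s(x))),
    (g(x), h(h(h(h(x, x), x), x), x)),
    (f(s(s(x))), h(f(x), g(h(x, x)))),
    (s(zero), r(zero)),
    (s(s(s(zero))), r(s(zero))),
    (r(s(zero)), s(zero)),
    (g(x), r(x)),
    (s(zero), p(zero)),
    (s(s(zero)), p(s(zero))),
    (p(s(zero)), zero),
    (s(s(s(s(s(zero))))), p(s(s(zero)))),
    (p(s(s(zero))), s(s(s(zero)))),
    (h(p(x), g(x)), p(s(x))),
    (s(zero), k(zero)),
    (s(s(p(p(a)))), s(k(p(a)))),
    (s(k(p(a))), p(p(a))),
    (g(x), k(x)),
    (a, zero),
    (s(h(r(k(p(x))), r(x))), h(r(r(p(x))), k(x))),
]

for lhs, rhs in rules:
    coeffs = Poly(expand(lhs - rhs - delta), x).all_coeffs()
    assert all(c >= 0 for c in coeffs)
print("all 26 rules of R_2 are strictly oriented over R_0")
\end{verbatim}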
\begin{lem}
\label{lem:nrvsq2}
The TRS $\mathcal{R}_2$ is not polynomially terminating over $\mathbb{Q}$.
\end{lem}
\proof
Let us assume that $\mathcal{R}_2$ is polynomially terminating over $\mathbb{Q}$
and derive a contradiction. Adapting the reasoning in the proof of
Lemma~\ref{lem:mainlemma1},
we infer from compatibility with the rules
\eqref{rule1}~--~\eqref{rulel7} that $\m{s}_\mathbb{Q}(x) = x + s_0$,
$\m{g}_\mathbb{Q}(x) = g_1 x + g_0$, $\m{h}_\mathbb{Q}(x,y) = x + y + h_0$, and
$\m{f}_\mathbb{Q}(x) = ax^2 + bx +c$, subject to the following constraints:
\[
s_0 >_\mathbb{Q} 0 \qquad
g_1 >_\mathbb{Q} 2 \qquad
g_0, h_0 \in \mathbb{Q}_0 \qquad
a >_\mathbb{Q} 0 \qquad
c \geqslant_\mathbb{Q} 0 \qquad
a\delta + b \geqslant_\mathbb{Q} 1
\]
Next we consider the compatibility constraints associated with the rules
\eqref{rulel7} and \eqref{rulel6}, from which we deduce an important
auxiliary
result. Compatibility with rule \eqref{rulel7} implies the condition
$g_1 \geqslant_\mathbb{Q} 5$ on the respective leading coefficients
since $\m{h}_\mathbb{Q}(x,y) = x + y + h_0$, and compatibility with rule
\eqref{rulel6} simplifies to
\[
4as_0x + 4a s_0^2 + 2bs_0 \geqslant_\mathbb{Q} 2g_1 x + g_1 h_0 + g_0 + h_0 + \delta
\quad \text{for all $x \in \mathbb{Q}_0$,}
\]
from which we infer $4 a s_0 \geqslant_\mathbb{Q} 2 g_1$; from this and
$g_1 \geqslant_\mathbb{Q} 5$, we conclude $a s_0 >_\mathbb{Q} 2$.
Now let us consider the combined compatibility constraint imposed by
the rules \eqref{rulel2} and \eqref{rulel4}, namely
$\m{0}_\mathbb{Q} + 3 s_0 >_{\mathbb{Q}_0,\delta} \m{f}_\mathbb{Q}(\m{s}_\mathbb{Q}(\m{0}_\mathbb{Q}))
>_{\mathbb{Q}_0,\delta} \m{0}_\mathbb{Q} + s_0$, which implies
$\m{0}_\mathbb{Q} + 3 s_0 \geqslant_\mathbb{Q} \m{0}_\mathbb{Q} + s_0 + 2 \delta$
by definition of $>_{\mathbb{Q}_0,\delta}$.
Thus, we conclude $s_0 \geqslant_\mathbb{Q} \delta$.
In fact, we even have $s_0 = \delta$, which can be derived from
the compatibility constraint of rule \eqref{rulel2} using the
conditions $s_0 \geqslant_\mathbb{Q} \delta$,
$a\delta + b \geqslant_\mathbb{Q} 1$, $a s_0 + b \geqslant_\mathbb{Q} 1$, the combination of the
former
two conditions, and $\m{f}_\mathbb{Q}(\m{0}_\mathbb{Q}) \geqslant_\mathbb{Q} \m{0}_\mathbb{Q} + \delta$,
the compatibility constraint of rule \eqref{rulel1}:
\begin{eqnarray*}
\m{0}_\mathbb{Q} + 3 s_0 -\delta
& \geqslant_\mathbb{Q} &
\m{f}_\mathbb{Q}(\m{s}_\mathbb{Q}(\m{0}_\mathbb{Q}))
~=~ \m{f}_\mathbb{Q}(\m{0}_\mathbb{Q}) + 2a \m{0}_\mathbb{Q} s_0 + a s_0^2 + b s_0 \\
& \geqslant_\mathbb{Q} &
\m{0}_\mathbb{Q} + \delta + a s_0^2 + b s_0 \\
& \geqslant_\mathbb{Q} &
\m{0}_\mathbb{Q} + \delta + a s_0^2 + (1 - a \delta) s_0
~=~ \m{0}_\mathbb{Q} + s_0 + \delta + a s_0 (s_0 - \delta)
\end{eqnarray*}
Hence, $\m{0}_\mathbb{Q} + 3 s_0 - \delta \geqslant_\mathbb{Q}
\m{0}_\mathbb{Q} + s_0 + \delta + a s_0 (s_0 - \delta)$, or equivalently,
$2 (s_0 - \delta) \geqslant_\mathbb{Q} a s_0(s_0 - \delta)$. But since
$a s_0 >_\mathbb{Q} 2$
and $s_0 \geqslant_\mathbb{Q} \delta$, this inequality can only hold if
\begin{equation}
\label{eq:s0=delta1}
s_0 = \delta
\end{equation}
This result has immediate consequences concerning the interpretation of
the constant $\m{0}$. To this end, we consider the compatibility
constraint of rule \eqref{rule4}, which simplifies to
$s_0 \geqslant_\mathbb{Q} \m{0}_\mathbb{Q} + h_0 + \delta$.
Because of \eqref{eq:s0=delta1} and the fact that $\m{0}_\mathbb{Q}$ and $h_0$
must be non-negative, we conclude $\m{0}_\mathbb{Q} = h_0 = 0$.
Moreover, as in the proof of Lemma~\ref{lem:mainlemma1},
condition \eqref{eq:s0=delta1}
is the key to the proof of the lemma at hand.
Next, we consider the compatibility constraints
associated with the rules \eqref{p1}~--~\eqref{p5}.
By definition of $>_{\mathbb{Q}_0,s_0}$,
these constraints give rise to the following system of equations:
\begin{xalignat*}{3}
\m{p}_\mathbb{Q}(0) &= 0 &
\m{p}_\mathbb{Q}(s_0) &= s_0 &
\m{p}_\mathbb{Q}(2 s_0) &= 4 s_0
\end{xalignat*}
Viewing these equations as polynomial interpolation constraints,
we conclude that no linear polynomial can satisfy them
(because $s_0 \neq 0$). Hence, $\m{p}_\mathbb{Q}$ must at least be quadratic.
Moreover, by rule \eqref{p6}, $\m{p}_\mathbb{Q}$ is at most quadratic (using the
same reasoning as for rule \eqref{rule6},
cf.\ the proof of Lemma~\ref{lem:mainlemma1}).
So we let $\m{p}_\mathbb{Q}(x) = p_2 x^2 + p_1 x + p_0$ in the equations above
and infer the (unique) solution $p_0 = p_1 = 0$ and $p_2 s_0 = 1$,
i.e., $\m{p}_\mathbb{Q}(x) = p_2 x^2$ with $p_2 \neq 0$.
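For the reader's convenience, we spell out this elementary interpolation step:
the first equation gives $p_0 = 0$, and the remaining two equations become
\[
p_2 s_0^2 + p_1 s_0 = s_0 \qquad\text{and}\qquad 4 p_2 s_0^2 + 2 p_1 s_0 = 4 s_0.
\]
Subtracting twice the first of these from the second yields
$2 p_2 s_0^2 = 2 s_0$, hence $p_2 s_0 = 1$ (as $s_0 \neq 0$), and substituting
back gives $p_1 s_0 = 0$, i.e., $p_1 = 0$.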
Next we consider the compatibility constraints associated with the rules
\eqref{rr1}~--~\eqref{rr3}, from which we deduce the interpolation
constraints $\m{r}_\mathbb{Q}(0) = 0$ and $\m{r}_\mathbb{Q}(s_0) = 2 s_0$.
Because $\m{g}_\mathbb{Q}$ is linear, $\m{r}_\mathbb{Q}$ must be linear, too, for
compatibility with rule \eqref{rr4}. Hence, by polynomial interpolation,
$\m{r}_\mathbb{Q}(x) = 2 x$.
Likewise, $\m{k}_\mathbb{Q}$ must be linear for
compatibility with rule \eqref{k4}, i.e., $\m{k}_\mathbb{Q}(x) = k_1 x + k_0$.
In particular, $k_0 = 0$ due to compatibility with rule \eqref{k1}, and
then the compatibility constraints associated with rule
\eqref{k2} and rule \eqref{k3} yield
$p_2^3 \m{a}_\mathbb{Q}^4 + 2 s_0 - \delta \geqslant_\mathbb{Q} k_1 p_2 \m{a}_\mathbb{Q}^2 + s_0
\geqslant_\mathbb{Q} p_2^3 \m{a}_\mathbb{Q}^4 + \delta$. But $s_0 = \delta$, hence
$k_1 p_2 \m{a}_\mathbb{Q}^2 = p_2^3 \m{a}_\mathbb{Q}^4$, and since $\m{a}_\mathbb{Q}$ cannot be
zero due to compatibility with rule \eqref{a1}, we obtain
$k_1 = p_2^2 \m{a}_\mathbb{Q}^2$.
In other words, $\m{k}_\mathbb{Q}(x) = p_2^2 \m{a}_\mathbb{Q}^2 x$.
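In more detail, this is an elementary squeezing argument: substituting
$s_0 = \delta$ makes both outer terms of the above chain equal to
$p_2^3 \m{a}_\mathbb{Q}^4 + s_0$, so that
\[
p_2^3 \m{a}_\mathbb{Q}^4 + s_0 \geqslant_\mathbb{Q} k_1 p_2 \m{a}_\mathbb{Q}^2 + s_0
\geqslant_\mathbb{Q} p_2^3 \m{a}_\mathbb{Q}^4 + s_0,
\]
which forces the equality $k_1 p_2 \m{a}_\mathbb{Q}^2 = p_2^3 \m{a}_\mathbb{Q}^4$ used above.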
Finally, we consider the compatibility constraint associated with rule
\eqref{mainrule}, which simplifies to
\[
(2 p_2 x^2 - x)((p_2 \m{a}_\mathbb{Q})^2 - 2) \geqslant_\mathbb{Q} 0 \quad
\text{for all $x \in \mathbb{Q}_0$.}
\]
However, this inequality is unsatisfiable as the polynomial $2 p_2 x^2 - x$
is negative for some $x \in \mathbb{Q}_0$ and
$(p_2 \m{a}_\mathbb{Q})^2 - 2$ cannot be zero because both $p_2$ and $\m{a}_\mathbb{Q}$
must be rational numbers.
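To spell out the elementary argument behind this step: since $p_2 s_0 = 1$ and
$s_0 = \delta > 0$, we have $p_2 > 0$, so
$2 p_2 x^2 - x = x(2 p_2 x - 1)$ is negative for $0 < x < \tfrac{1}{2 p_2}$ and
positive for $x > \tfrac{1}{2 p_2}$. Moreover, $(p_2 \m{a}_\mathbb{Q})^2 = 2$ would
imply $p_2 \m{a}_\mathbb{Q} = \sqrt{2} \notin \mathbb{Q}$ because
$p_2, \m{a}_\mathbb{Q} > 0$. Hence $(p_2 \m{a}_\mathbb{Q})^2 - 2$ has a fixed
non-zero sign, and the product above is negative for suitable $x \in \mathbb{Q}_0$.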
\qed
Combining the previous two lemmata, we obtain
the main result of this section.
\begin{thm}
\label{thm:nrvsq}
There are TRSs that can be proved polynomially terminating over both $\mathbb{N}$
and $\mathbb{R}$, but cannot be proved polynomially terminating over $\mathbb{Q}$.
\qed
\end{thm}
\section{Incremental Polynomial Termination}
\label{sect:incrementality}
In this section, we consider the possibility of establishing termination
by using polynomial interpretations in an incremental way.
In this setting, which goes back to Lankford~\cite[Example~3]{L79}, one
weakens the compatibility requirement of the interpretation and
the TRS $\mathcal{R}$ under consideration to $P_\ell \geqslant P_r$ for every
rewrite rule $\ell \to r$ of $\mathcal{R}$ and $P_\ell >_\delta P_r$ for
at least one rewrite rule $\ell \to r$ of $\mathcal{R}$. After removing
those rules of $\mathcal{R}$ satisfying the second condition, one is
free to choose a different interpretation for the remaining rules. This
process is repeated until all rewrite rules have been removed.
\begin{defi}
\label{def:polyint_sn_inc}
For $D \in \{ \mathbb{N}, \mathbb{Q}, \mathbb{R}_{\m{alg}}, \mathbb{R} \}$ and $n \geqslant 1$,
a TRS $\mathcal{R}$ is said to be
\emph{polynomially terminating over $D$ in $n$ steps} if
either
$n = 1$ and $\mathcal{R}$ is polynomially terminating over $D$ or
$n > 1$ and there exists a polynomial interpretation $\mathcal{P}$ over
$D$ and a non-empty subset $\mathcal{S} \subsetneq \mathcal{R}$ such that
\begin{enumerate}
\item
$\mathcal{P}$ is weakly and strictly monotone,
\item
$\mathcal{R} \subseteq {\geqslant_\mathcal{P}}$ and
$\mathcal{S} \subseteq {>_\mathcal{P}}$, and
\item
\label{def_polyint_sn_3}
$\mathcal{R} \setminus \mathcal{S}$ is polynomially terminating over $D$
in $n-1$ steps.
\end{enumerate}
Furthermore, we call a TRS $\mathcal{R}$
\emph{incrementally polynomially terminating over $D$}
if there exists some
$n \geqslant 1$, such that $\mathcal{R}$ is polynomially
terminating over $D$ in $n$ steps.
\end{defi}
In Section~\ref{subsec:relship_nrvsq_inc} we show that incremental
polynomial termination
over $\mathbb{N}$ and $\mathbb{R}$ does not imply incremental polynomial termination
over $\mathbb{Q}$.
In Section~\ref{subsec:relship_nvsr_inc} we show that incremental
polynomial termination over $\mathbb{N}$ does not imply incremental polynomial
termination over $\mathbb{R}$. Below we show that the TRSs
$\mathcal{R}_1$ and $\mathcal{R}_2$ cannot be used for this purpose.
We moreover extend Theorems~\ref{thm:qvsr},
\ref{thm:qvsn}, and \ref{thm:rvsq} to incremental polynomial termination.
\begin{thm}
Let $D \in \{ \mathbb{N}, \mathbb{Q}, \mathbb{R}_\m{alg}, \mathbb{R} \}$, and let $\mathcal{R}$ be a
TRS. If $\mathcal{R}$ is incrementally polynomially terminating
over $D$, then it is terminating.
\end{thm}
\proof
Note that the polynomial interpretation $\mathcal{P}$ in
Definition~\ref{def:polyint_sn_inc} is an extended monotone algebra that
establishes relative termination of $\mathcal{S}$ with respect to
$\mathcal{R}$, cf.\ \cite[Theorem~3]{EWZ08}. The result follows by an
easy induction on the number of steps $n$ in
Definition~\ref{def:polyint_sn_inc}.
\qed
For weak monotonicity of univariate quadratic polynomials we use the
following obvious criterion.
\begin{lem}
\label{lem:qrquadwmon}
For $D \in \{ \mathbb{Q}, \mathbb{R} \}$,
the quadratic polynomial $f_D(x) = ax^2 + bx + c$ in $D[x]$ is
weakly monotone if and only if $a >_D 0$ and $b, c \geqslant_D 0$.
\qed
\end{lem}
We give the full picture of the relationship
between the three notions of incremental polynomial termination
over $\mathbb{N}$, $\mathbb{Q}$ and $\mathbb{R}$, showing that it is essentially
the same as the one depicted in Figure~\ref{fig:summary} for direct
polynomial termination.
However,
we have to replace the TRSs $\mathcal{R}_1$ and $\mathcal{R}_2$ as
the proofs of Lemmata~\ref{lem:mainlemma1} and~\ref{lem:nrvsq2} break
down if we allow incremental termination proofs. In more detail, the
proof of Lemma~\ref{lem:mainlemma1} does not extend because the
TRS $\mathcal{R}_1$ is incrementally polynomially terminating over
$\mathbb{Q}$.
\begin{lem}
\label{lem:R1_ipt_Q}
The TRS $\mathcal{R}_1$ is incrementally polynomially terminating over
$\mathbb{Q}$.
\end{lem}
\proof
This can be seen by considering the interpretation
\[
\m{0}_\mathbb{Q} = 0
\quad \m{s}_\mathbb{Q}(x) = x + 1
\quad \m{f}_\mathbb{Q}(x) = x^2 + x
\quad \m{g}_\mathbb{Q}(x) = 2 x + \tfrac{5}{2}
\quad \m{h}_\mathbb{Q}(x,y) = x + y
\]
with $\delta = 1$. The rewrite rules of $\mathcal{R}_1$ give rise to
the following inequalities:
\begin{xalignat*}{2}
1 &\geqslant_\mathbb{Q} 0 &
4x^2 + 12x + \tfrac{35}{4} &\geqslant_\mathbb{Q} 4x^2 + 4x + \tfrac{15}{2} \\
2 &\geqslant_\mathbb{Q} 2 &
2x + \tfrac{9}{2} &\geqslant_\mathbb{Q} 2x + \tfrac{9}{2} \\
7 &\geqslant_\mathbb{Q} 6 &
2x+\tfrac{5}{2} &\geqslant_\mathbb{Q} 2x \\
2 &\geqslant_\mathbb{Q} 0 &
x+1 &\geqslant_\mathbb{Q} x \\
6 &\geqslant_\mathbb{Q} 5 &
x+1 &\geqslant_\mathbb{Q} x \\
x^2 + 5x + 6 &\geqslant_\mathbb{Q} x^2 + 5x + \tfrac{5}{2} &
x^2 + 3x + \tfrac{5}{2} &\geqslant_\mathbb{Q} x^2 + 3x + 2
\end{xalignat*}
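For instance, the constraint
$4x^2 + 12x + \tfrac{35}{4} \geqslant_\mathbb{Q} 4x^2 + 4x + \tfrac{15}{2}$
even remains true after strengthening $\geqslant_\mathbb{Q}$ to
$>_{\mathbb{Q}_0,\delta}$: with $\delta = 1$ it amounts to
$8x + \tfrac{1}{4} \geqslant_\mathbb{Q} 0$, which clearly holds for all
$x \in \mathbb{Q}_0$, so the corresponding rule is removed in the first step.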
Removing the rules from $\mathcal{R}_1$ for which the corresponding
constraint remains true after strengthening $\geqslant_\mathbb{Q}$ to
$>_{\mathbb{Q}_0,\delta}$, leaves us with \eqref{l2}, \eqref{r2} and
\eqref{r6}, which are easily handled, e.g.\ by the interpretation
\[
\m{0}_\mathbb{Q} = 0
\qquad \m{s}_\mathbb{Q}(x) = x + 1
\qquad \m{f}_\mathbb{Q}(x) = x
\qquad \m{g}_\mathbb{Q}(x) = 3 x
\qquad \m{h}_\mathbb{Q}(x,y) = x + y + 2
\qquad \delta = 1
\]
\qed
Similarly, the TRS $\mathcal{R}_2$ of
Table~\ref{tab:trs_r2}
can be shown to be incrementally polynomially terminating over $\mathbb{Q}$.
The following result strengthens Theorem~\ref{thm:qvsn}.
\begin{thm}
There are TRSs that are incrementally polynomially terminating over
$\mathbb{Q}$ but not over $\mathbb{N}$.
\end{thm}
\proof
Consider the TRS $\mathcal{R}_3$ consisting of the single rewrite
rule
\[
\m{f}(\m{a}) \to \m{f}(\m{g}(\m{a}))
\]
It is easy to see that
$\mathcal{R}_3$ cannot be polynomially terminating over $\mathbb{N}$: any
strictly monotone interpretation satisfies $\m{g}_\mathbb{N}(x) \geqslant x$
for all $x \in \mathbb{N}$, hence
$\m{f}_\mathbb{N}(\m{a}_\mathbb{N}) \leqslant \m{f}_\mathbb{N}(\m{g}_\mathbb{N}(\m{a}_\mathbb{N}))$,
contradicting compatibility.
As the notions of polynomial termination and incremental polynomial
termination coincide for one-rule TRSs, $\mathcal{R}_3$ is not
incrementally polynomially terminating over $\mathbb{N}$.
The following interpretation establishes polynomial termination over
$\mathbb{Q}$:
\begin{gather*}
\delta = 1 \qquad
\m{a}_\mathbb{Q} = \tfrac{1}{2} \qquad
\m{f}_\mathbb{Q}(x) = 4x \qquad
\m{g}_\mathbb{Q}(x) = x^2
\end{gather*}
To this end, we note that the compatibility constraint associated
with the single rewrite rule gives rise to the inequality
$2 >_{\mathbb{Q}_0,1} 1$, which holds by definition of $>_{\mathbb{Q}_0,1}$.
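Spelled out, the two sides of the rule evaluate to
$\m{f}_\mathbb{Q}(\m{a}_\mathbb{Q}) = 4 \cdot \tfrac{1}{2} = 2$ and
$\m{f}_\mathbb{Q}(\m{g}_\mathbb{Q}(\m{a}_\mathbb{Q})) = 4 \cdot \bigl(\tfrac{1}{2}\bigr)^2 = 1$,
and indeed $2 \geqslant_\mathbb{Q} 1 + \delta$.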
Further note that the interpretation functions are
well-defined and monotone with respect to $>_{\mathbb{Q}_0,1}$
as a consequence of Lemmata~\ref{lem:qrlinmon} and~\ref{lem:qrquadmon}.
\qed
In fact, the TRS $\mathcal{R}_3$ proves the stronger statement that
there are TRSs which are polynomially terminating over $\mathbb{Q}$ but not
incrementally polynomially terminating over $\mathbb{N}$.
Our proof is both
shorter and simpler than the original proof of Theorem~\ref{thm:qvsn}
in \cite[pp.~62--67]{L06}, but see Remark~\ref{Lucas}.
In analogy to Theorem~\ref{thm:qvsr}, incremental polynomial
termination over $\mathbb{Q}$ implies incremental polynomial termination
over $\mathbb{R}$.
\begin{thm}
\label{cor:qvsr_inc}
If a TRS is incrementally polynomially terminating over $\mathbb{Q}$, then
it is also incrementally polynomially terminating over $\mathbb{R}$.
\end{thm}
\proof
The proof of Theorem~\ref{thm:qvsr}
can be extended with the following statements, which also follow from
Lemma~\ref{lem:continuity}:
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item
weak monotonicity of $f_\mathbb{Q}$ with respect to $\geqslant_\mathbb{Q}$ implies
weak monotonicity on $\mathbb{R}_0$ with respect to $\geqslant_\mathbb{R}$,
\item
$P_\ell \geqslant_\mathbb{Q} P_r$ for all $\seq[m]{x} \in \mathbb{Q}_0$ implies
$P_\ell \geqslant_\mathbb{R} P_r$ for all $\seq[m]{x} \in \mathbb{R}_0$.
\end{enumerate}
Hence the result follows.
\qed
To show that the converse of Theorem~\ref{cor:qvsr_inc} does not hold,
we consider the TRS $\mathcal{R}_4$
consisting of the rewrite rules of Table~\ref{tab:trs_r4}.
\begin{table}[tb]
\begin{center}
\fbox{
\begin{minipage}[t]{48mm}
\begin{align}
\m{f}(\m{g}(x)) &\to \m{g}(\m{g}(\m{f}(x))) \tag*{\eqref{r1}} \\
\m{g}(\m{s}(x))\vphantom{^2} &\to \m{s}(\m{s}(\m{g}(x)))
\tag*{\eqref{r2}} \\
\m{g}(x)\vphantom{^2} &\to \m{h}(x,x) \tag*{\eqref{r3}} \\
\m{s}(x)\vphantom{^2} &\to \m{h}(\m{0},x) \tag*{\eqref{r4}}
\end{align}
\end{minipage}
\quad
\begin{minipage}[t]{72mm}
\begin{align}
\m{s}(x)\vphantom{^2} &\to \m{h}(x,\m{0}) \tag*{\eqref{r5}} \\
\phantom{\m{s}(x)\vphantom{^2}} & \notag \\
\m{k}(\m{k}(\m{k}(x)))\vphantom{^2} &\to \m{h}(\m{k}(x),\m{k}(x))
\label{eq:intrule6} \\
\m{s}(\m{h}(\m{k}(x),\m{k}(x)))\vphantom{^2} &\to \m{k}(\m{k}(\m{k}(x)))
\label{eq:intrule7}
\end{align}
\end{minipage}}
\end{center}
\caption{The TRS $\mathcal{R}_4$.}
\label{tab:trs_r4}
\end{table}
\begin{lem}
\label{lem:trs_R_4_sn}
The TRS $\mathcal{R}_4$ is polynomially terminating over $\mathbb{R}$.
\end{lem}
\proof
We consider the following interpretation:
\begin{gather*}
\delta = 1
\qquad \m{0}_\mathbb{R} = 0
\qquad \m{s}_\mathbb{R}(x) = x + 4
\qquad \m{f}_\mathbb{R}(x) = x^2 \\
\m{g}_\mathbb{R}(x) = 3x + 5
\qquad \m{h}_\mathbb{R}(x,y) = x + y
\qquad \m{k}_\mathbb{R}(x) = \sqrt{2} x + 1
\end{gather*}
The rewrite rules of $\mathcal{R}_4$ are compatible with this
interpretation because the resulting inequalities
\begin{xalignat*}{2}
9x^2 + 30x + 25 &>_{\mathbb{R}_0,\delta} 9x^2 + 20 &
x + 4 &>_{\mathbb{R}_0,\delta} x \\
3x + 17 &>_{\mathbb{R}_0,\delta} 3x + 13 \\
3x + 5 &>_{\mathbb{R}_0,\delta} 2x &
2\sqrt{2}x + \sqrt{2} + 3 &>_{\mathbb{R}_0,\delta} 2\sqrt{2}x + 2 \\
x + 4 &>_{\mathbb{R}_0,\delta} x &
2\sqrt{2}x + 6 &>_{\mathbb{R}_0,\delta} 2\sqrt{2}x + \sqrt{2} + 3
\end{xalignat*}
are clearly satisfied for all $x \in \mathbb{R}_0$.
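In particular, for the two rules involving $\m{k}$ we have
$\m{k}_\mathbb{R}(\m{k}_\mathbb{R}(\m{k}_\mathbb{R}(x))) = 2\sqrt{2}x + \sqrt{2} + 3$ and
$\m{h}_\mathbb{R}(\m{k}_\mathbb{R}(x),\m{k}_\mathbb{R}(x)) = 2\sqrt{2}x + 2$, so the
differences between the respective left- and right-hand sides are
$\sqrt{2} + 1$ and $3 - \sqrt{2}$, both of which are at least $\delta = 1$.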
\qed
It remains to show
that $\mathcal{R}_4$ is not incrementally polynomially terminating over
$\mathbb{Q}$.
We also show that it is neither incrementally polynomially
terminating over $\mathbb{N}$. But first we present the following
auxiliary result on a subset of its rules.
\begin{lem}
\label{lem:trs_succadd}
Let $D \in \{ \mathbb{N}, \mathbb{Q}, \mathbb{R} \}$, and let $\mathcal{P}$ be a strictly
monotone
polynomial interpretation over $D$ that is weakly compatible with the
rules \eqref{r1}~--~\eqref{r5}. Then the interpretations of the
symbols $\m{s}$, $\m{h}$ and $\m{g}$ have the shape
\begin{gather*}
\m{s}_D(x) = x + s_0
\qquad \m{h}_D(x,y) = x + y + h_0
\qquad \m{g}_D(x) = g_1 x + g_0
\end{gather*}
where all coefficients are non-negative and $g_1 \geqslant 2$. Moreover,
the interpretation of the symbol $\m{f}$ is at least quadratic.
\end{lem}
\proof
Let the unary symbols $\m{f}$, $\m{g}$ and $\m{s}$ be
interpreted by non-constant polynomials $\m{f}_D(x)$, $\m{g}_D(x)$
and $\m{s}_D(x)$. (Note that strict monotonicity of $\mathcal{P}$
obviously
implies these conditions.) Then the degrees of these polynomials
must be at least $1$, such that weak compatibility with \eqref{r2}
implies
\[
\deg(\m{g}_D(x)) \cdot \deg(\m{s}_D(x)) \geqslant
\deg(\m{s}_D(x)) \cdot \deg(\m{s}_D(x)) \cdot \deg(\m{g}_D(x))
\]
which simplifies to $\deg(\m{s}_D(x)) \leqslant 1$. Hence, we obtain
$\deg(\m{s}_D(x)) = 1$ and, by applying the same reasoning to \eqref{r1},
$\deg(\m{g}_D(x)) = 1$.
So the function symbols $\m{s}$ and $\m{g}$ must be interpreted by
linear polynomials $\m{s}_D(x) = s_1 x + s_0$ and
$\m{g}_D(x) = g_1 x + g_0$, where $s_0, s_1, g_0, g_1 \in D_0$ due to
well-definedness over $D_0$ and $s_1, g_1 > 0$ to make them non-constant.
Then the weak compatibility constraint imposed by \eqref{r2}
gives rise to the inequality
\begin{equation}
\label{eq:comp2}
g_1 s_1 x + g_1 s_0 + g_0
\geqslant_{D_0}
s_1^2 g_1 x + s_1^2 g_0 + s_1 s_0 + s_0
\end{equation}
which must hold for all $x \in D_0$. This implies the following condition
on the respective leading coefficients: $g_1 s_1 \geqslant s_1^2 g_1$.
Due to $s_1, g_1 > 0$, this can only hold if $s_1 \leqslant 1$.
Now suppose that the function symbol $\m{f}$ were also interpreted by a
linear polynomial $\m{f}_D$. Then we could apply the same reasoning to
the rule \eqref{r1} because it is structurally equivalent to \eqref{r2},
thus inferring $g_1 \leqslant 1$. So $\m{f}_D$ cannot
be linear if $g_1 > 1$.
Next we consider the rewrite rules \eqref{r3}, \eqref{r4} and \eqref{r5}.
As $\m{g}_D$ is linear, weak compatibility with \eqref{r3}
implies that the function $\m{h}_D(x,x)$ is at most linear as well.
This can only be the case if the interpretation $\m{h}_D$
is a linear polynomial function $\m{h}_D(x,y) = h_1 x + h_2 y + h_0$,
where $h_0, h_1, h_2 \in D_0$ due to well-definedness over $D_0$. Since
$\m{s}_D(x) = s_1 x + s_0$, weak compatibility with \eqref{r5}
implies $s_1 \geqslant h_1$, and weak compatibility with \eqref{r4}
implies $s_1 \geqslant h_2$. Similarly, we obtain
$g_1 \geqslant h_1 + h_2$ from weak compatibility with \eqref{r3}.
Now if $s_1, h_1, h_2 \geqslant 1$, conditions that are implied by
strict monotonicity of $\m{s}_D$ and $\m{h}_D$
(using Lemma~\ref{lem:qrlinmon}
for $D \in \{ \mathbb{Q}, \mathbb{R} \}$),
then we obtain $s_1 = h_1 = h_2 = 1$
and $g_1 \geqslant 2$,
such that
\begin{gather*}
\m{s}_D(x) = x + s_0
\qquad \m{h}_D(x,y) = x + y + h_0
\qquad \m{g}_D(x) = g_1 x + g_0
\end{gather*}
with $g_1 \geqslant 2$, which shows that $\m{f}_D$ cannot be linear.
Due to the fact that all of the above assumptions (on the interpretations
of the symbols $\m{f}$, $\m{g}$, $\m{h}$ and $\m{s}$) follow from strict
monotonicity of $\mathcal{P}$, this concludes the proof.
\qed
With the help of this lemma it is easy to show that the
TRS $\mathcal{R}_4$ is not incrementally polynomially terminating
over $\mathbb{Q}$ or $\mathbb{N}$.
\begin{lem}
\label{lem:trs_R_4}
The TRS $\mathcal{R}_4$ is not incrementally polynomially terminating
over $\mathbb{Q}$ or $\mathbb{N}$.
\end{lem}
\proof
Let $D \in \{ \mathbb{N}, \mathbb{Q} \}$, and let $\mathcal{P}$ be a strictly monotone
polynomial interpretation over $D$ that is weakly compatible
with $\mathcal{R}_4$. Then, by Lemma~\ref{lem:trs_succadd},
the interpretations of the symbols $\m{s}$, $\m{h}$ and $\m{g}$ have
the shape
\begin{gather*}
\m{s}_D(x) = x + s_0
\qquad \m{h}_D(x,y) = x + y + h_0
\qquad \m{g}_D(x) = g_1 x + g_0
\end{gather*}
As the interpretations of the symbols $\m{s}$ and $\m{h}$ are linear,
weak compatibility with \eqref{eq:intrule7} implies that the
interpretation of $\m{k}$ is at most linear as well. Then, letting
$\m{k}_D(x) = k_1 x + k_0$, the weak
compatibility constraints associated with \eqref{eq:intrule6}
and \eqref{eq:intrule7} give rise to the following conditions on the
respective leading coefficients: $2 \geqslant k_1^2 \geqslant 2$.
Hence, $k_1 = \sqrt{2}$, which is not a rational number.
So we conclude that there is no strictly monotone polynomial
interpretation over $\mathbb{N}$ or $\mathbb{Q}$ that is weakly compatible with
the TRS $\mathcal{R}_4$. This implies that $\mathcal{R}_4$ is not
incrementally polynomially terminating over $\mathbb{N}$ or $\mathbb{Q}$.
\qed
Combining Lemmata~\ref{lem:trs_R_4_sn} and~\ref{lem:trs_R_4},
we obtain the following result.
\begin{cor}
\label{cor:rvsq_inc}
There are TRSs that are incrementally polynomially terminating over
$\mathbb{R}$ but not over $\mathbb{Q}$ or $\mathbb{N}$.
\qed
\end{cor}
As a further consequence of Lemmata~\ref{lem:trs_R_4_sn}
and~\ref{lem:trs_R_4}, we see that the TRS $\mathcal{R}_4$ is
polynomially terminating over $\mathbb{R}$ but not over $\mathbb{Q}$ or $\mathbb{N}$,
which provides an alternative proof of Theorem~\ref{thm:rvsq}.
\subsection{\texorpdfstring{Incremental Polynomial Termination
over $\mathbb{N}$ and $\mathbb{R}$ vs.\ $\mathbb{Q}$}{Incremental Polynomial Termination
over N and R vs.\ Q}}
\label{subsec:relship_nrvsq_inc}
Next we establish the analogue of Theorem~\ref{thm:nrvsq}
in the incremental setting. That is, we show that incremental polynomial
termination over $\mathbb{N}$ and $\mathbb{R}$ does not imply incremental
polynomial termination over $\mathbb{Q}$. Again, we give a concrete TRS
having the desired properties, but unfortunately, as was already
mentioned in the introduction of this section, we cannot reuse the
TRS $\mathcal{R}_2$ directly. Nevertheless, we can and do reuse the
principal idea underlying the construction of $\mathcal{R}_2$
(cf.~\eqref{eq:motiv2}). However, we use a different
method than polynomial interpolation in order to enforce
the desired interpretations for the involved function symbols.
To this end, let us consider the (auxiliary) TRS $\mathcal{S}$
consisting of the
rewrite rules given in Table~\ref{tab:trs_s}.
\begin{table}[tb]
\begin{center}
\fbox{\begin{minipage}[t]{48mm}
\begin{align}
\m{f}(\m{g}(x)) &\to \m{g}(\m{g}(\m{f}(x))) \tag*{\eqref{r1}} \\
\m{g}(\m{s}(x))\vphantom{^2} &\to \m{s}(\m{s}(\m{g}(x)))
\tag*{\eqref{r2}} \\
\m{g}(x)\vphantom{^2} &\to \m{h}(x,x) \tag*{\eqref{r3}} \\
\m{s}(x)\vphantom{^2} &\to \m{h}(\m{0},x) \tag*{\eqref{r4}} \\
\m{s}(x)\vphantom{^2} &\to \m{h}(x,\m{0}) \tag*{\eqref{r5}}
\end{align}
\end{minipage}}
\quad
\fbox{\begin{minipage}[t]{72mm}
\begin{align}
\m{k}(x) &\to \m{h}(x,x)
\label{srule1} \\
\m{s}^3(\m{h}(x,x)) &\to \m{k}(x)
\label{srule2} \\
\m{h}(\m{f}(x),\m{k}(x))\vphantom{^3} &\to \m{f}(\m{s}(x))
\label{srule3} \\
\m{f}(\m{s}^2(x))\vphantom{^3} &\to \m{h}(\m{f}(x),\m{k}(\m{h}(x,x)))
\label{srule4} \\
\m{f}(\m{s}(x))\vphantom{^3} &\to \m{h}(\m{f}(x),\m{s}(\m{0}))
\label{srule5} \\
\m{s}^2(\m{0})\vphantom{^3} &\to \m{h}(\m{f}(\m{s}(\m{0})),\m{s}(\m{0}))
\label{srule6}
\end{align}
\end{minipage}}
\end{center}
\caption{The auxiliary TRS $\mathcal{S}$.}
\label{tab:trs_s}
\end{table}
The purpose of this TRS is to equip the symbol $\m{s}$ ($\m{f}$) with
the semantics of a successor (squaring) function and to ensure that
the interpretation of the symbol $\m{h}$ corresponds to the addition
of two numbers. Moreover, this TRS will be helpful not only in this
subsection but also in the next one.
\begin{lem}
\label{lem:trs_S}
Let $D \in \{ \mathbb{N}, \mathbb{Q}, \mathbb{R} \}$, and let $\mathcal{P}$ be a strictly
monotone
polynomial interpretation over $D$ that is weakly compatible with the
TRS $\mathcal{S}$. Then
\begin{gather*}
\m{0}_D = 0
\quad \m{s}_D(x) = x + s_0
\quad \m{h}_D(x,y) = x + y \\
\quad \m{g}_D(x) = g_1 x + g_0
\quad \m{k}_D(x) = 2 x + k_0
\quad \m{f}_D(x) = a x^2
\end{gather*}
where $a s_0 = 1$, $g_1 \geqslant 2$ and all coefficients are
non-negative.
\end{lem}
\proof
By Lemma~\ref{lem:trs_succadd}, the interpretations of the
symbols $\m{s}$, $\m{h}$ and $\m{g}$ have the shape
$\m{s}_D(x) = x + s_0$, $\m{h}_D(x,y) = x + y + h_0$ and
$\m{g}_D(x) = g_1 x + g_0$,
where all coefficients are non-negative and $g_1 \geqslantslant 2$.
Moreover, the interpretation of $\m{f}$ is at least quadratic.
Applying this partial interpretation in \eqref{srule1}
and \eqref{srule2}, we obtain, by weak compatibility, the
inequalities
\[
2 x + h_0 + 3 s_0 \geqslant_{D_0} \m{k}_D(x) \geqslant_{D_0}
2 x + h_0 \quad\text{for all $x \in D_0$,}
\]
which imply $\m{k}_D(x) = 2 x + k_0$ with $k_0 \geqslant 0$ (due
to well-definedness over $D_0$).
Next we consider the rule \eqref{srule4} from which we infer that
$\m{s}_D(x) \neq x$ because otherwise weak compatibility would be
violated; hence, $s_0 > 0$. Then, by weak compatibility
with \eqref{srule3}, we obtain the inequality
\[
\m{k}_D(x) + h_0 \geqslant_{D_0} \m{f}_D(x + s_0) - \m{f}_D(x)
\quad\text{for all $x \in D_0$.}
\]
Now this can only be the case if $\deg(\m{k}_D(x) + h_0) \geqslant
\deg(\m{f}_D(x + s_0) - \m{f}_D(x))$, which simplifies to
$1 \geqslant \deg(\m{f}_D(x)) - 1$ since $s_0 \neq 0$ and $\m{f}_D$
is at least quadratic (hence not constant).
Consequently, $\m{f}_D$ must be a quadratic polynomial function, that
is, $\m{f}_D(x) = ax^2 + bx +c$ with $a > 0$ (due to well-definedness
over $D_0$). Then the inequalities arising from weak
compatibility with \eqref{srule3} and \eqref{srule4}
simplify to
\begin{align*}
2 x + k_0 + h_0 &\geqslant_{D_0} 2 a s_0 x + a s_0^2 + b s_0 \\
4 a s_0 x + 4 a s_0^2 + 2 b s_0 &\geqslant_{D_0} 4 x + 3 h_0 + k_0
\end{align*}
both of which must hold for all $x \in D_0$. Hence, comparing the
leading coefficients yields $2 \geqslant 2 a s_0$ and $4 a s_0 \geqslant 4$, so $a s_0 = 1$. Furthermore,
weak compatibility with \eqref{srule5} is satisfied if
and only if the inequality
\[
2 a s_0 x + a s_0^2 + b s_0 \geqslant_{D_0} \m{0}_D + s_0 + h_0
\]
holds for all $x \in D_0$. For $x = 0$, and using the condition
$a s_0 = 1$, we conclude that
$b s_0 \geqslant_{D_0} \m{0}_D + h_0 \geqslant_{D_0} 0$, which
implies that $b \geqslant 0$ as $s_0 > 0$.
Using all the information gathered above, the compatibility constraint
associated with \eqref{srule6} gives rise to the inequality
$0 \geqslant_{D_0} \m{f}_D(\m{0}_D) + 2\, \m{0}_D + b s_0 + h_0$,
all of whose summands on the right-hand side are non-negative as
$b \geqslant 0$ and all interpretation functions must be
well-defined over $D_0$. Consequently, we must have
$\m{0}_D = h_0 = b = c = \m{f}_D(\m{0}_D) = 0$.
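For completeness, the evaluation behind this constraint is as follows, using
$a s_0 = 1$ twice:
\begin{align*}
\m{f}_D(\m{0}_D + s_0) + \m{0}_D + s_0 + h_0
&= \m{f}_D(\m{0}_D) + 2\, \m{0}_D\, a s_0 + a s_0^2 + b s_0 + \m{0}_D + s_0 + h_0 \\
&= \m{f}_D(\m{0}_D) + 3\, \m{0}_D + 2 s_0 + b s_0 + h_0,
\end{align*}
and comparing this with the interpretation $\m{0}_D + 2 s_0$ of the left-hand
side of \eqref{srule6} and cancelling $\m{0}_D + 2 s_0$ yields the displayed
inequality.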
\qed
In order to establish the main result of this subsection, we extend the
TRS $\mathcal{S}$ by the rewrite rules given in Table~\ref{tab:trs_r5},
calling the resulting system $\mathcal{R}_5$.
\begin{table}[tb]
\begin{center}
\begin{tabular}{@{~~}c@{\qquad}c@{}}
\begin{minipage}[t]{85mm}
\fbox{\begin{minipage}[t]{80mm}
\begin{align}
\m{k}(x) &\to \m{r}(x) \label{r5r1} \\
\m{s}(\m{r}(x)) &\to \m{h}(x,x) \label{r5r2} \\
\m{h}(\m{0},\m{0}) &\to \m{r}(\m{0}) \label{r5r3}
\end{align}
\end{minipage}} \\[2.3ex]
\fbox{\begin{minipage}[t]{80mm}
\begin{align}
\hspace*{-1.5ex} \m{h}(\m{r}(\m{q}(\m{f}(x))),\m{r}(x)) &\to
\m{h}(\m{r}^2(\m{f}(x)),\m{q}(x)) \label{r5mainrule}
\end{align}
\end{minipage}}
\end{minipage}
&
\begin{minipage}[t]{65mm}
\hspace*{-5ex}
\fbox{\begin{minipage}[t]{63mm}
\begin{align}
\m{g}^2(x) &\to \m{q}(x) \label{r5r4} \\
\m{h}(\m{0},\m{0}) &\to \m{q}(\m{0}) \label{r5r5} \\
\m{f}(\m{f}(\m{m})) &\to
\m{q}(\m{f}(\m{m})) \label{r5r6} \\
\hspace*{-1.5ex} \m{h}(\m{0},\m{q}(\m{f}(\m{m}))) &\to
\m{h}(\m{f}(\m{f}(\m{m})),\m{0}) \label{r5r7} \\
\m{m} &\to \m{s}(\m{0}) \label{r5r8}
\end{align}
\end{minipage}}
\end{minipage}
\\
\end{tabular}
\end{center}
\caption{The TRS $\mathcal{R}_5$ (without the $\mathcal{S}$-rules).}
\label{tab:trs_r5}
\end{table}
As in Section~\ref{sect:nrvsq}, each block serves a
specific purpose. The one made up of \mbox{\eqref{r5r1}~--~\eqref{r5r3}}
enforces the desired semantics
for the symbol $\m{r}$, that is, a linear function $x \mapsto 2 x$
that doubles its input, while the block \eqref{r5r4}~--~\eqref{r5r8}
enforces a linear function $x \mapsto q_1 x$ for the symbol $\m{q}$
whose slope $q_1$ is proportional to the square of the interpretation
of the constant $\m{m}$. Finally, \eqref{r5mainrule} encodes
the main idea of the construction, as mentioned above.
\begin{lem}
\label{lem:trs_R_5_sn}
The TRS $\mathcal{R}_5$ is incrementally polynomially terminating over
$\mathbb{N}$ and $\mathbb{R}$.
\end{lem}
\proof
For incremental polynomial termination over $\mathbb{N}$, we start with the
interpretation
\begin{gather*}
\m{0}_\mathbb{N} = 0 \quad
\m{s}_\mathbb{N}(x) = x + 1 \quad
\m{f}_\mathbb{N}(x) = x^2 \quad
\m{g}_\mathbb{N}(x) = 3x + 5 \\
\m{h}_\mathbb{N}(x,y) = x + y \quad
\m{k}_\mathbb{N}(x) = 2x + 2 \quad
\m{q}_\mathbb{N}(x) = 4x \quad
\m{r}_\mathbb{N}(x) = 2x \quad
\m{m}_\mathbb{N} = 2
\end{gather*}
All interpretation functions are well-defined over $\mathbb{N}$ and strictly
monotone (i.e.,
monotone with respect to $>_\mathbb{N}$) as well as weakly monotone (i.e.,
monotone with respect to $\geqslant_\mathbb{N}$). Moreover, it is easy to
verify that this interpretation is weakly compatible
with $\mathcal{R}_5$. In particular, the rule \eqref{r5mainrule} gives
rise to the constraint
\[
8x^2 + 2x \geqslant_\mathbb{N} 4x^2 + 4x
\qquad\iff\qquad
2x^2 - x \geqslant_\mathbb{N} 0
\]
which holds for all $x \in \mathbb{N}$.
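To illustrate why some rules nevertheless survive this first step, consider
rule \eqref{r5r6}: both sides evaluate to the same value, namely
$\m{f}_\mathbb{N}(\m{f}_\mathbb{N}(\m{m}_\mathbb{N})) = (2^2)^2 = 16$ and
$\m{q}_\mathbb{N}(\m{f}_\mathbb{N}(\m{m}_\mathbb{N})) = 4 \cdot 2^2 = 16$, so this rule is
only weakly oriented and cannot be removed yet.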
After removing the rules from $\mathcal{R}_5$ for which (strict)
compatibility holds, we are left with the rules \eqref{srule5},
\eqref{srule6}, \eqref{r5r3}, \eqref{r5mainrule} and
\eqref{r5r5}~--~\eqref{r5r7}, all of which can be handled (that
is, removed at once) by the following linear interpretation:
\begin{gather*}
\m{0}_\mathbb{N} = 0 \quad
\m{s}_\mathbb{N}(x) = 7x + 2 \quad
\m{h}_\mathbb{N}(x,y) = x + 2y + 1 \\
\m{f}_\mathbb{N}(x) = 4x + 2 \quad
\m{q}_\mathbb{N}(x) = 4x \quad
\m{r}_\mathbb{N}(x) = x \quad
\m{m}_\mathbb{N} = 0
\end{gather*}
For incremental polynomial termination over $\mathbb{R}$, we consider the
interpretation
\begin{gather*}
\delta = 1 \quad
\m{0}_\mathbb{R} = 0 \quad
\m{s}_\mathbb{R}(x) = x + 1 \quad
\m{f}_\mathbb{R}(x) = x^2 \quad
\m{g}_\mathbb{R}(x) = 3x + 5 \\
\m{h}_\mathbb{R}(x,y) = x + y \quad
\m{k}_\mathbb{R}(x) = 2x + 2 \quad
\m{q}_\mathbb{R}(x) = 2x \quad
\m{r}_\mathbb{R}(x) = 2x \quad
\m{m}_\mathbb{R} = \sqrt{2}
\end{gather*}
which is both weakly and strictly monotone according to
Lemmata~\ref{lem:qrquadmon} and~\ref{lem:qrquadwmon}.
So all interpretation functions are
well-defined over $\mathbb{R}_0$ and monotone with respect to $>_{\mathbb{R}_0,\delta}$
and $\geqslant_{\mathbb{R}_0}$. Moreover, one easily verifies that this
interpretation is weakly compatible with $\mathcal{R}_5$. In particular,
the constraint $4x^2 + 2x \geqslant_{\mathbb{R}_0} 4x^2 + 2x$ associated
with \eqref{r5mainrule} trivially holds. After removing the rules
from $\mathcal{R}_5$ for which (strict) compatibility holds (i.e.,
for which the corresponding constraint remains true after
strengthening $\geqslant_{\mathbb{R}_0}$ to $>_{\mathbb{R}_0,\delta}$), we are left
with \eqref{srule5}, \eqref{srule6}, \eqref{r5r3}, \eqref{r5mainrule}
and \eqref{r5r5}~--~\eqref{r5r8}, all of which can be removed at once
by the following linear interpretation:
\begin{gather*}
\delta = 1 \quad
\m{0}_\mathbb{R} = 0 \quad
\m{s}_\mathbb{R}(x) = 6x + 2 \quad
\m{f}_\mathbb{R}(x) = 3x + 2 \\
\m{h}_\mathbb{R}(x,y) = x + 2y + 1 \quad
\m{q}_\mathbb{R}(x) = 2x \quad
\m{r}_\mathbb{R}(x) = x \quad
\m{m}_\mathbb{R} = 3\rlap{\hbox to 83 pt{
\qEd}}
\end{gather*}
\begin{lem}
\label{lem:trs_R_5}
The TRS $\mathcal{R}_5$ is not incrementally polynomially
terminating over $\mathbb{Q}$.
\end{lem}
\proof
Let $\mathcal{P}$ be a strictly monotone polynomial interpretation over
$\mathbb{Q}$ that is weakly compatible with $\mathcal{R}_5$. According to
Lemma \ref{lem:trs_S}, the symbols $\m{0}$,
$\m{s}$, $\m{f}$, $\m{g}$, $\m{h}$ and $\m{k}$ are interpreted as
follows:
\begin{gather*}
\m{0}_\mathbb{Q} = 0
\quad \m{s}_\mathbb{Q}(x) = x + s_0
\quad \m{h}_\mathbb{Q}(x,y) = x + y \\
\quad \m{g}_\mathbb{Q}(x) = g_1 x + g_0
\quad \m{k}_\mathbb{Q}(x) = 2 x + k_0
\quad \m{f}_\mathbb{Q}(x) = a x^2
\end{gather*}
where $s_0, g_1, a > 0$ and $g_0, k_0 \geqslant 0$.
As the interpretation of $\m{k}$ is linear, weak compatibility with the
rule \eqref{r5r1} implies that the interpretation of $\m{r}$ is at most
linear as well, i.e., $\m{r}_\mathbb{Q}(x) = r_1 x + r_0$ with
$r_0 \geqslant 0$ and $2 \geqslant r_1 \geqslant 0$. We also have
$r_1 \geqslant 2$ due to weak compatibility with \eqref{r5r2}
and $0 \geqslant r_0$ due to weak compatibility with \eqref{r5r3};
hence, $\m{r}_\mathbb{Q}(x) = 2 x$.
Similarly, by linearity of $\m{g}_\mathbb{Q}$ and weak compatibility
with \eqref{r5r4}, the interpretation of $\m{q}$ must have the
shape $\m{q}_\mathbb{Q}(x) = q_1 x + q_0$. Then weak compatibility
with \eqref{r5r5} yields $0 \geqslant q_0$; hence,
$\m{q}_\mathbb{Q}(x) = q_1 x$, $q_1 \geqslant 0$.
Next we note that weak compatibility with \eqref{r5r6}
and \eqref{r5r7} implies that
$\m{f}_\mathbb{Q}(\m{f}_\mathbb{Q}(\m{m}_\mathbb{Q})) = \m{q}_\mathbb{Q}(\m{f}_\mathbb{Q}(\m{m}_\mathbb{Q}))$,
which evaluates to $a^3 \m{m}_\mathbb{Q}^4 = a\, q_1 \m{m}_\mathbb{Q}^2$. From this
we infer
that $q_1 = a^2 \m{m}_\mathbb{Q}^2$ as $a > 0$ and $\m{m}_\mathbb{Q} \geqslant s_0 > 0$
due to weak compatibility with \eqref{r5r8}; i.e.,
$\m{q}_\mathbb{Q}(x) = a^2 \m{m}_\mathbb{Q}^2 x$.
Finally, we consider the weak compatibility constraint associated
with \eqref{r5mainrule}, which simplifies to
\[
(2 a x^2 - x)((a\, \m{m}_\mathbb{Q})^2 - 2) \geqslant 0 \quad
\text{for all $x \in \mathbb{Q}_0$.}
\]
However, this inequality is unsatisfiable as the polynomial
$2 a x^2 - x$ is negative for some $x \in \mathbb{Q}_0$ and
$(a\, \m{m}_\mathbb{Q})^2 - 2$ cannot be zero because both $a$
and $\m{m}_\mathbb{Q}$ must be rational numbers.
So we conclude that there is
no strictly monotone polynomial interpretation over $\mathbb{Q}$
that is weakly compatible with the TRS $\mathcal{R}_5$.
This implies that $\mathcal{R}_5$ is not incrementally polynomially
terminating over $\mathbb{Q}$.
\qed
Together, Lemma~\ref{lem:trs_R_5_sn} and Lemma~\ref{lem:trs_R_5}
yield the main result of this subsection.
\begin{cor}
\label{cor:nrvsq_inc}
There are TRSs that are incrementally polynomially terminating over
$\mathbb{N}$ and $\mathbb{R}$ but not over $\mathbb{Q}$.
\qed
\end{cor}
\subsection{\texorpdfstring{Incremental Polynomial Termination
over $\mathbb{N}$ vs.\ $\mathbb{R}$}{Incremental Polynomial Termination
over N vs.\ R}}
\label{subsec:relship_nvsr_inc}
In this subsection, we show that there are TRSs that are incrementally
polynomially terminating over $\mathbb{N}$ but not over $\mathbb{R}$. For this
purpose, we extend the TRS $\mathcal{S}$ of
Table~\ref{tab:trs_s} by the single rewrite rule
\[
\m{f}(x) \to x
\]
and call the resulting system $\mathcal{R}_6$.
\begin{lem}
\label{lem:trs_R_6_sn}
The TRS $\mathcal{R}_6$ is incrementally polynomially terminating
over $\mathbb{N}$.
\end{lem}
\proof
First, we consider the interpretation
\begin{gather*}
\m{0}_\mathbb{N} = 0 \qquad
\m{s}_\mathbb{N}(x) = x + 1 \qquad
\m{f}_\mathbb{N}(x) = x^2 \\
\m{h}_\mathbb{N}(x,y) = x + y \qquad
\m{g}_\mathbb{N}(x) = 3x + 5 \qquad
\m{k}_\mathbb{N}(x) = 2x + 2
\end{gather*}
which is both weakly and strictly monotone
as well as weakly compatible
with $\mathcal{R}_6$. In particular, the constraint
$x^2 \geqslant_\mathbb{N} x$ associated with $\m{f}(x) \to x$
holds for all $x \in \mathbb{N}$.
Removing the rules from $\mathcal{R}_6$ for which (strict)
compatibility holds leaves us with the rules \eqref{srule5},
\eqref{srule6} and $\m{f}(x) \to x$,
which are easily handled, e.g.~by the linear interpretation
\begin{gather*}
\m{0}_\mathbb{N} = 0 \qquad
\m{s}_\mathbb{N}(x) = 3x + 2 \qquad
\m{f}_\mathbb{N}(x) = 2x + 1 \qquad
\m{h}_\mathbb{N}(x,y) = x + y\rlap{\hbox to 59 pt{
\qEd}}
\end{gather*}
\begin{lem}
\label{lem:trs_R_6}
The TRS $\mathcal{R}_6$ is not incrementally polynomially terminating over
$\mathbb{R}$ or $\mathbb{Q}$.
\end{lem}
\proof
Let $D \in \{ \mathbb{Q}, \mathbb{R} \}$, and let $\mathcal{P}$ be a polynomial
interpretation
over $D$ that is weakly compatible with $\mathcal{R}_6$, and in which the
interpretation of the function symbol $\m{f}$ has the shape
$\m{f}_D(x) = a x^2$ with $a > 0$. Then the weak compatibility constraint
$a x^2 \geqslant_{D_0} x$ associated with $\m{f}(x) \to x$
does not hold for all $x \in D_0$ because the polynomial
$a x^2-x = ax\left(x - \tfrac{1}{a}\right)$ is negative in the open
interval $\left(0,\tfrac{1}{a}\right)$.
As the above assumption on the interpretation of $\m{f}$ follows from
Lemma~\ref{lem:trs_S} if $\mathcal{P}$ is strictly monotone, we conclude
that there is no strictly monotone polynomial interpretation over $\mathbb{R}$
or $\mathbb{Q}$ that is weakly compatible with the TRS $\mathcal{R}_6$.
This implies that $\mathcal{R}_6$ is not incrementally polynomially
terminating over $\mathbb{R}$ or $\mathbb{Q}$.
\qed
Together, Lemma~\ref{lem:trs_R_6_sn} and Lemma~\ref{lem:trs_R_6}
yield the main result of this subsection.
\begin{cor}
\label{cor:nvsr_inc}
There are TRSs that are incrementally polynomially terminating over
$\mathbb{N}$ but not over $\mathbb{R}$ or $\mathbb{Q}$.
\qed
\end{cor}
The results presented in this section can be summarized by
stating that the relationships expressed in Figure~\ref{fig:summary}
remain true for incremental polynomial termination, after
replacing $\mathcal{R}_1$ by $\mathcal{R}_6$ and
$\mathcal{R}_2$ by $\mathcal{R}_5$.
\section{Concluding Remarks}
\label{sect:conclusion}
In this article, we investigated the relationship of polynomial
interpretations with real, rational and integer coefficients with
respect to termination proving power.
In particular, we presented three new results, the first of which
shows that polynomial interpretations
over the reals
subsume polynomial interpretations
over the rationals,
the second of which shows that polynomial interpretations
over the reals or rationals
do not properly subsume polynomial interpretations over the
integers,
a result that is somewhat unexpected, and the third of which shows
that there are TRSs that can be proved terminating by polynomial
interpretations over the naturals or the reals but not
over the rationals.
These results were extended to incremental termination proofs.
In~\cite{FN12} it is shown how to adapt the results to the dependency
pair framework~\cite{GTSF06,HM07}.
We conclude this article by reviewing
our results in the context of automated termination
analysis, where linear polynomial interpretations, i.e., polynomial
interpretations with all interpretation functions being linear,
play an important role. This naturally raises the question as
to what extent the restriction to linear polynomial interpretations
influences the hierarchy depicted in Figure~\ref{fig:summary},
and in what follows we shall see that it changes considerably.
More precisely, the areas inhabited by the TRSs $\mathcal{R}_1$ and
$\mathcal{R}_2$ become empty, such that polynomial termination by a
linear polynomial interpretation over $\mathbb{N}$ implies
polynomial termination by a linear polynomial interpretation over $\mathbb{Q}$,
which in turn implies
polynomial termination by a linear polynomial interpretation over $\mathbb{R}$.
The latter follows directly from Theorem~\ref{thm:qvsr} and
Remark~\ref{rem:qvsr}, whereas the former is shown below.
\begin{lem}
Polynomial termination by a linear polynomial interpretation over $\mathbb{N}$
implies polynomial termination by a linear polynomial interpretation
over $\mathbb{Q}$.
\end{lem}
\proof
Let $\mathcal{R}$ be a TRS that is compatible with a linear
polynomial interpretation $\mathcal{I}$ over $\mathbb{N}$, where every
$n$-ary function symbol~$\m{f}$ is associated
with a linear polynomial $a_n x_n + \cdots + a_1 x_1 + a_0$.
We show that the same interpretation also establishes polynomial
termination over $\mathbb{Q}$ with the value of $\delta$ set to one.
To this end, we note that in order to guarantee
strict monotonicity and well-definedness over $\mathbb{N}$,
the coefficients of the respective interpretation functions
have to satisfy the following conditions:
$a_0 \geqslant 0$ and $a_i \geqslant 1$ for all $i \in \{1,\ldots,n\}$.
Hence, by Lemma~\ref{lem:qlinmon}, we also have well-definedness
over $\mathbb{Q}_0$ and strict monotonicity with respect to
the order $>_{\mathbb{Q}_0,1}$.
(Strict monotonicity also follows from \cite[Theorem~2]{L05}.)
Moreover, as $\mathcal{R}$ is compatible with $\mathcal{I}$,
each rewrite rule $\ell \to r \in \mathcal{R}$ satisfies
\begin{equation}
\label{eq:linpoly}
P_\ell - P_r >_\mathbb{N} 0 \quad \text{for all $\seq[m]{x} \in \mathbb{N}$,}
\end{equation}
where $P_\ell$ ($P_r$) denotes the polynomial associated with $\ell$ ($r$)
and the variables $\seq[m]{x}$ are those occurring in $\ell \to r$.
Since linear functions are closed under composition, the polynomial
$P_\ell - P_r$ is a linear polynomial $c_m x_m + \cdots + c_1 x_1 + c_0$
with integer coefficients, so that \eqref{eq:linpoly} holds if and only if
$c_0 \geqslant 1$ and $c_i \geqslant 0$ for all $i \in \{1,\ldots,m\}$
(evaluating at $x_1 = \dots = x_m = 0$ gives $c_0 \geqslant 1$, while a
negative $c_i$ would make the value negative for sufficiently large $x_i$).
However, then we also have
\[
P_\ell - P_r >_{\mathbb{Q}_0,1} 0 \quad \text{for all $\seq[m]{x} \in \mathbb{Q}_0$,}
\]
which shows that $\mathcal{R}$ is compatible with the linear polynomial
interpretation $(\mathcal{I},\delta) = (\mathcal{I},1)$ over $\mathbb{Q}$.
\qed
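As a toy illustration of this transfer (the rewrite rule and the interpretation
below are chosen ad hoc and do not occur elsewhere in this article), consider
the single rule $\m{f}(\m{s}(x)) \to \m{s}(\m{f}(x))$ together with
$\m{f}_\mathbb{N}(x) = 2x + 1$ and $\m{s}_\mathbb{N}(x) = x + 1$. Then
$P_\ell - P_r = (2x + 3) - (2x + 2) = 1$, i.e., $c_1 = 0$ and $c_0 = 1$, and
the very same interpretation, read over $\mathbb{Q}_0$ with $\delta = 1$,
orients the rule via $2x + 3 >_{\mathbb{Q}_0,1} 2x + 2$.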
Hence, linear polynomial interpretations over $\mathbb{R}$ subsume
linear polynomial interpretations over $\mathbb{Q}$, which in turn
subsume linear polynomial interpretations over $\mathbb{N}$, and these
subsumptions are proper due to the results of \cite{L06}, which
were obtained using linear polynomial interpretations.
\end{document} |
\begin{document}
\title[Annihilators of local cohomology and Ext]{Some bounds for the annihilators of local cohomology and Ext modules}
\author[A.~Fathi]{Ali Fathi}
\address{Department of Mathematics, Zanjan Branch,
Islamic Azad University, Zanjan, Iran.}
\email{fathi\[email protected], [email protected]}
\keywords{ Local cohomology modules, Ext modules, annihilator, primary decomposition.}
\subjclass[2010]{13D45, 13D07.}
\begin{abstract}
Let $\mathfrak a$ be an ideal of a commutative Noetherian ring $R$, let $t$ be a non-negative integer, and let $M$ and $N$ be two finitely generated $R$-modules. In certain cases, we give bounds, with respect to inclusion, for the annihilators of $\operatorname{Ext}^t_R(M, N)$ and
$\operatorname{H}^t_{\mathfrak a}(M)$ in terms of a minimal primary decomposition of the zero submodule of $M$; these bounds are independent of the choice of minimal primary decomposition. Then, using these bounds, we compute the annihilators of local cohomology and Ext modules in certain cases.
\end{abstract}
\maketitle
\setcounter{section}{-1}
\section{\bf Introduction}
Throughout the paper, $R$ is a commutative Noetherian ring with nonzero identity.
The $i$-th local cohomology of an $R$-module $M$ with respect to an ideal $\fa$ was defined by Grothendieck as follows:
$$\h i{\fa}M={\underset{n}{\varinjlim}\operatorname{Ext}^i_R\left(R/\fa^n, M\right)};$$
see \cite{bs, bh, h} for more details.
In this section, we assume $M$ is a non-zero finitely generated $R$-module, $N$ is a Gorenstein $R$-module, $0=M_1\cap\hdots\cap M_n$ is a minimal primary decomposition of the zero submodule of $M$ with $\ass_R(M/M_i)=\{\fp_i\}$ for all $1\leq i\leq n$ and $\fa$ is an ideal of $R$. We refer the reader to \cite[Sec. 6]{mat} for basic properties of primary decomposition of modules and to \cite{s, s2} for more details about the Gorenstein modules (see also the paragraph before Lemma \ref{gor}).
We denote, for an $R$-module $M$, $\sup\{i\in\N_0: \h i{\fa}M\neq 0\}$ by $\cd {\fa}M$. Assume $d=\dim_R(M)<\infty$. By Grothendieck's Vanishing Theorem, $\cd{\fa}M\leq d$. If $\cd{\fa}M=d$, then we have
\begin{equation}\label{eq1}
\operatorname{Ann}_R\left(\h d{\fa}M\right)=\operatorname{Ann}_R\left(M/\bigcap_{\cd{\fa}{R/\fp_i}=d}M_i\right).
\end{equation}
This equality is proved by Lynch \cite[Theorem 2.4]{l} whenever $R$ is a complete local ring and $M=R$.
In \cite[Theorem 2.6]{bag} Bahmanpour {\it et al.} proved that $\operatorname{Ann}_R\left(\h d{\fa}M\right)=\operatorname{Ann}_R(M/T_R(\fa, M))$ whenever $\fa=\fm$ and $R$ is a complete local ring, where $T_R(\fa, M)$ denotes the largest submodule $N$ of $M$ such that $\cd \fa N<\cd \fa M$. Then Bahmanpour
\cite[Theorem 3.2]{ba} extended the result of Lynch to the $R$-module $M$. Next, Atazadeh {\it et al.} \cite{asn1} proved this equality whenever $R$ is a local ring (not necessarily complete), and finally, in \cite{asn2}, they extended it to the non-local case.
(Note that $T_R(\fa, M)=\bigcap_{\cd{\fa}{R/\fp_i}=\cd {\fa}M}M_i$ \cite[Remark 2.5]{asn1}, also, if $(R, \fm)$ is a complete local ring and $\fp\in\ass_R(M)$, then, by the Lichtenbaum-Hartshorne Vanishing Theorem, $\cd {\fa}{R/\fp}=d$ if and only if $\dim_R(R/\fp)=d$ and $\sqrt{\fa+\fp}=\fm$).
In the first section (see Theorem \ref{annh1} and Remark \ref{rem1}), for an arbitrary integer $t$, we give a bound for the annihilator of $\ext tRMN$ in terms of minimal primary decomposition of the zero submodule of $M$. More precisely, we show that
\begin{equation}\label{eq2}
\operatorname{Ann}_R\left(M/\bigcap_{\fp_i\in\Delta(t)}M_i\right)\subseteq \operatorname{Ann}_R\left(\ext tRMN\right)\subseteq\operatorname{Ann}_R\left(M/\bigcap_{\fp_i\in\Sigma(t)}M_i\right)
\end{equation}
where $\Delta(t)=\{\fp\in\ass_R(M)\cap\supp_R(N): \Ht R\fp\leq t\}$, $\Sigma(t)=\{\fp\in\mass_R(M)\cap\supp_R(N): \Ht R\fp=t\}$ and $\mass_R(M)$ denotes the set of minimal elements of $\ass_R(M)$.
If $t=\grad {\operatorname{Ann}_R(M)}N<\infty$, then the above index sets are equal and we can compute the annihilator of $\ext tRMN$. Note that, in general, for an arbitrary integer $t$, there need not exist a subset $\Sigma$ of $\ass_R(M)$ such that $\operatorname{Ann}_R\left(\ext tRMN\right)=\operatorname{Ann}_R\left(M/\bigcap_{\fp_i\in\Sigma}M_i\right)$; see Example \ref{exa1}.
In the second section, we consider the annihilators of local cohomology modules. By using the above bound on the annihilators of Ext modules, when $(R, \fm)$ is a local ring, we show, in Theorem \ref{annh}, that \begin{equation}\label{eq3}
\operatorname{Ann}_R\left(M/\bigcap_{\fp_i\in\Delta'(t)}M_i\right)\subseteq \operatorname{Ann}_R\left(\h t{\fm}M\right)\subseteq\operatorname{Ann}_R\left(M/\bigcap_{\fp_i\in\Sigma'(t)}M_i\right),
\end{equation}
where $\Delta'(t)=\{\fp\in\ass_R(M): \dim_R(R/\fp)\geq t\}$ and $\Sigma'(t)=\{\fp\in\mass_R(M): \dim_R(R/\fp)=t\}$. Next, whenever $R$ is not necessarily local, in Theorem \ref{annh2}, we give a bound for the annihilator of the top local cohomology module $\h {\cd{\fa}M}{\fa}M$ which implies equality (\ref{eq1}) when $d=\cd{\fa}M$. Finally, for each $t$, in Theorem \ref{annh3}, we provide a bound for the annihilator of $\h t{\fa}M$ when $M$ is Cohen-Macaulay, and also we compute its annihilator at $t=\grad{\fa}M$. All the given bounds are independent of the choice of minimal primary decomposition. We adopt the convention that the intersection of empty family of submodules of an $R$-module $M$ is $M$.
\section{\bf Bounds for the annihilators of Ext-modules}
Assume $M, N$ are finitely generated $R$-modules such that $N$ is a Gorenstein module, and $0=M_1\cap\hdots\cap M_n$ is a minimal primary decomposition of the zero submodule of $M$ with $\ass_R(M/M_i)=\{\fp_i\}$ for all $1\leq i\leq n$. We refer the reader to
\cite[Sec. 6]{mat} for basic properties and unexplained terminologies about the primary decomposition of modules and to \cite{s,s2} for more details about the Gorenstein modules. In this section [Theorem \ref{annh1}], for each integer $t$, we give a bound for the annihilator of $\ext tRMN$ in terms of minimal primary decomposition of the zero submodule of $M$ which is independent of the choice of minimal primary decomposition. As an application, in the case where $t=\grad {\operatorname{Ann}_R(M)}N$, we compute the annihilator of $\ext t RMN$. More precisely, for $t=\grad {\operatorname{Ann}_R(M)}N$, we have
$$\operatorname{Ann}_R\left(\ext t RMN\right)= \operatorname{Ann}_R\left(M/\bigcap_{\fp_i\in\Sigma(t)} M_i\right)$$
where $\Sigma(t) =\{\fp\in\mass_R(M)\cap\supp_R(N): \Ht R\fp=t\}$; see Theorem \ref{annh1} and Remark \ref{rem1}.
Note that, in general, for an arbitrary integer $t$, there need not exist a subset $\Sigma$ of $\ass_R(M)$ such that $\operatorname{Ann}_R\left(\ext t RMN\right)= \operatorname{Ann}_R\left(M/\bigcap_{\fp_i\in\Sigma} M_i\right)$; see Example \ref{exa1}. These results will be used in the second section to compute the annihilators of local cohomology modules.
Before proving these results, we need some lemmas.
\begin{lem}[{\cite[Theorem 6.8]{mat}}]\label{ass} Let $M$ be a non-zero finitely generated $R$-module. Let
$\ass_R(M)=\{\fp_1,\dots,\fp_n\}$, and $0=M_1\cap\hdots\cap M_n$ be a minimal primary decomposition of the zero submodule of $M$ with $\ass_R(M/M_i)=\{\fp_i\}$ for all $1\leq i\leq n$. Assume $\Phi$ is a subset of $\ass_R(M)$ and $N=\bigcap_{\fp_i\in\Phi}M_i$. Then $$\ass_R(M/N)=\Phi \textrm{, and } \ass_R(N)=\ass_R(M)\setminus\Phi.$$
\end{lem}
Assume $N$ is a submodule of an $R$-module $M$. For any multiplicatively closed subset $S$ of $R$, we denote the contraction of $S^{-1}N$ under the canonical map $M\rightarrow S^{-1}M$ by $S_M(N)$. Assume $\Sigma\subseteq\ass_R(M)$. We say that $\Sigma$ is an isolated subset of $\ass_R(M)$ if it satisfies the following condition: if $\fq\in \ass_R(M)$ and $\fq\subseteq\fp$ for some $\fp\in\Sigma$, then $\fq\in\Sigma$.
The following lemma is well-known, but we prove it for the readers' convenience.
\begin{lem}[See {\cite[Theorem 4.10 and Exercise 4.23]{at}}]\label{primary} Let $M$ be a finitely generated $R$-module, and $N$ a proper submodule of $M$. Let $N=\bigcap_{i=1}^nN_i$ be a minimal primary decomposition of $N$ in $M$ with $\ass_R(M/N_i)=\{\fp_i\}$ for all $1\leq i\leq n$. Assume $\Sigma$ is an isolated subset of $\ass_R(M/N)$. Then
$$\bigcap_{\fp_i\in\Sigma}N_i=S_M(N),$$
where $S=R\setminus \bigcup_{\fp\in\Sigma}\fp$.
In particular, $\bigcap_{\fp_i\in\Sigma}N_i$ is independent of the choice of minimal primary decomposition of $N$ in $M$.
\end{lem}
\begin{proof}
Assume $\Sigma\subseteq\ass_R(M/N)$ is an isolated subset of $\ass_R(M/N)$ and $S=R\setminus\bigcup_{\fp\in\Sigma}\fp$.
If $S^{-1}\left({M}/{\bigcap_{\fp_i\in\ass_R(M/N)\setminus\Sigma }N_i}\right)\neq 0$, then there exists
$$\fq\in \ass_R\left({M}/{\bigcap_{\fp_i\in\ass_R(M/N)\setminus\Sigma }N_i}\right)=\ass_R(M/N)\setminus\Sigma$$
such that $\fq\cap S=\emptyset$. Since $\fq\cap S=\emptyset$, by the Prime Avoidance Theorem, $\fq\subseteq\fp$ for some $\fp\in\Sigma$. But $\Sigma$ is an isolated subset of $\ass_R(M/N)$ and so $\fq\in \Sigma$,
which is a contradiction. Hence
$S^{-1}\left(\bigcap_{\fp_i\in\ass_R(M/N)\setminus\Sigma }N_i\right)=S^{-1}M$. It follows that $S^{-1}N=\bigcap_{\fp_i\in\Sigma}S^{-1}N_i$. Contracting both sides under the canonical map $M\rightarrow S^{-1}M$ we obtain
$(S^{-1}N)^c=\bigcap_{\fp_i\in\Sigma}(S^{-1}N_i)^c$. Now, assume $\fp_i\in\Sigma$. It is clear that $N_i\subseteq(S^{-1}N_i)^c$. Conversely, if $m\in (S^{-1}N_i)^c$, then $m/1\sim n/s$ for some $n\in N_i$ and $s\in S $. Hence $tsm=tn\in N_i$ for some $t\in S$. Since $N_i$ is a $\fp_i$-primary submodule of $M$ and $ts\notin\fp_i$, we have $m\in N_i$. Therefore $N_i=(S^{-1}N_i)^c$, and hence $(S^{-1}N)^c=\bigcap_{\fp_i\in\Sigma}N_i$. This completes the proof.
\end{proof}
\begin{rem}\label{rem} Let the situation and notations be as in the above lemma. Assume, in addition, that $\Sigma=\emptyset$; we consider the above lemma in this special case separately. It is clear that $\Sigma$ is an isolated subset of $\ass_R(M/N)$ and that $\bigcap_{\fp_i\in\Sigma}N_i=M$, because the intersection of the empty family of submodules of $M$ is $M$. On the other hand, we have
$S=R\setminus\bigcup_{\fp\in\Sigma}\fp=R$. Since $0\in S$, we obtain $S^{-1}(N)=S^{-1}(M)=0$, and so the contraction of $S^{-1}(N)$ under the map $M\rightarrow S^{-1}(M)$ is $M$. Therefore we have $S_M(N)=M=\bigcap_{\fp_i\in\Sigma}N_i$ in this case.
\end{rem}
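To illustrate Lemma \ref{primary} and the above remark, consider the following standard example (included only for illustration). Let $R=k[x,y]$ be a polynomial ring over a field $k$, $M=R$ and $N=(x^2, xy)$. Then
$$N=(x)\cap(x^2, y)=(x)\cap(x, y)^2$$
are two minimal primary decompositions of $N$ in $M$ with $\ass_R(M/N)=\{(x), (x, y)\}$. The embedded, $(x, y)$-primary component is not unique; however, for the isolated subset $\Sigma=\{(x)\}$ we have $S=R\setminus(x)$ and $S_M(N)=(x)$, the $(x)$-primary component, which is common to both decompositions, in accordance with Lemma \ref{primary}.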
Let $(R, \fm)$ be a local ring. A non-zero finitely generated $R$-module $G$ is said to be Gorenstein if $$\depth_R(G)=\dim_R(G)=\operatorname{inj\,dim}_R(G)=\depth_R(R)=\dim_R(R)$$
(so $R$ is Cohen-Macaulay) or equivalently $\ext iR{R/\fm}G$ is non-zero only at $i=\dim_R(G)$; see \cite[Theorem 3.11]{s}. More generally, if $R$ is not necessarily local, a non-zero finitely generated $R$-module $G$ is said to be Gorenstein if $G_\fp$ is a Gorenstein $R_\fp$-module for all $\fp\in\supp_R(G)$; see \cite[corollary 3.7]{s}.
When $(R, \fm)$ is a complete local ring, the Gorenstein $R$-modules are, up to isomorphism, precisely the non-zero finite direct sums of copies of the canonical module \cite[Corollary 2.7]{s2}.
The following property of Gorenstein modules is needed in the proof of the main theorem of this section.
\begin{lem}\label{gor} Let $G$ be a Gorenstein $R$-module, and $\fp$ a prime ideal of $R$. Then
$\fp\in\supp_R(G)$ if and only if $G\neq\fp G$.
\end{lem}
\begin{proof}
Assume $\fp\in\supp_R(G)$. Hence $G_\fp\neq 0$ and consequently $G_\fp\neq\fp R_{\fp}G_\fp$ by Nakayama's Lemma. It follows that $G\neq\fp G$.
Conversely, assume $G\neq\fp G$. Thus there exists $\fq\in\supp_R(G)$ such that $G_\fq\neq \fp R_\fq G_\fq$. Therefore $\fp \subseteq\fq$ (otherwise $\fp R_\fq=R_\fq$), and hence \cite[Corollary 4.14]{s} implies that $\fp\in\supp_R(G)$.
\end{proof}
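For instance, if $R$ is a Gorenstein local ring, then $G=R$ is a Gorenstein $R$-module with $\supp_R(G)=\spec(R)$, and indeed $G\neq \fp G$ for every prime ideal $\fp$ of $R$, simply because $\fp$ is a proper ideal.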
Now we are ready to state and prove the main theorem of this section, which provides bounds for the annihilators of Ext modules. The local version of this theorem, Remark \ref{rem1}(1.1), will be used to compute the annihilators of local cohomology modules in the next section.
\begin{thm}\label{annh1} Let $M, N$ be non-zero finitely generated $R$-modules,
and let $0=M_1\cap\hdots\cap M_n$ be a minimal primary decomposition of the zero submodule of $M$ with $\ass_R(M/M_i)=\{\fp_i\}$ for all $1\leq i\leq n$. Let $t\in\N_0$ and set $\Delta(t)=\{ \fp\in\ass_R(M): \grad{\fp}N\leq t\}$, $\Sigma(t)=\{ \fp\in\mass_R(M): \grad{\fp}N=t\}$, $S^t=R\setminus\bigcup_{\fp\in\Delta(t)}\fp$, and $T^t=R\setminus\bigcup_{\fp\in\Sigma(t)}\fp$.
Then
\begin{enumerate}[\rm(i)]
\item $\bigcap_{\fp_i\in\Delta(t)} M_i=S^t_M(0)$ and $\bigcap_{\fp_i\in\Sigma(t)} M_i=T^t_M(0)$. In particular, $\bigcap_{\fp_i\in\Delta(t)} M_i$ and $\bigcap_{\fp_i\in\Sigma(t)} M_i$ are independent of the choice of minimal primary decomposition of the zero submodule of $M$.
\item $S^t_M(0)$ is the largest submodule $L$ of $M$ such that $\ext iRLN=0$ for all $i\leq t$.
\item $$\operatorname{Ann}_R({M}/{S^t_M(0)})\subseteq\operatorname{Ann}_R\left(\ext t RMN\right).$$
If, in addition, $N$ is a Gorenstein module, then
$$\operatorname{Ann}_R\left(\ext t RMN\right)\subseteq \operatorname{Ann}_R({M}/{T^t_M(0)}).$$
\item If $N$ is a Gorenstein module such that $\supp_R(M)\cap\supp_R(N)\neq\emptyset$ and $t=\grad{\operatorname{Ann}_R(M)}N$, then $\Delta(t)=\Sigma(t)$ and
$$\operatorname{Ann}_R\left(\ext t RMN\right)= \operatorname{Ann}_R({M}/{T^t_M(0)}).$$
\end{enumerate}
\end{thm}
\begin{proof} Set $S=S^t_M(0)$ and $T=T^t_M(0)$.
i) Since $\Delta(t)$ and $\Sigma(t)$ are isolated subsets of $\ass_R(M)$, (i) is an immediate consequence of Lemma \ref{primary}.
ii) By Lemma \ref{ass}, in view of \cite[Proposition 1.2.10]{bh}, we have
\begin{align*}
\grad{\operatorname{Ann}_R(S)}N&=\grad{\sqrt{\operatorname{Ann}_R(S)}}N=\grad{\bigcap_{\fp\in\ass_R(S)}\fp}N\\
&=\min_{\fp\in\ass_R(S)}\grad{\fp}N=\min_{\fp\in\ass_R(M)\setminus\Delta(t)}\grad{\fp}N>t.
\end{align*}
Since $\grad{\operatorname{Ann}_R(S)}N>t$, we have $\ext iR{S}N=0$ for all $i\leq t$ by \cite[Proposition 1.2.10(e)]{bh}. Also, we note that, if $\Delta(t)=\ass_R(M)$, then $S=0$ and $\grad{\operatorname{Ann}_R(S)}N=\grad RN=\infty$. Now, assume $L$ is a submodule of $M$ such that $\ext iRLN=0$ for all $i\leq t$. Suppose, for the sake of contradiction, that $L\nsubseteq S$. Then
$$0\neq L/(L\cap S)\cong (L+S)/S\subseteq M/S.$$
Thus $\emptyset\neq\ass_R(L/(L\cap S))\subseteq \Delta(t)$. Hence, there exists $\fp\in\ass_R(L/(L\cap S))\subseteq V(\operatorname{Ann}_R(L))$ such that $\grad {\fp}N\leq t$. But this is impossible, because, by our assumption, $\grad{\operatorname{Ann}_R(L)}N>t$; see again \cite[Proposition 1.2.10(e)]{bh}. Hence $L\subseteq S$ and the proof of (ii) is completed.
iii) Since $\ext tRSN=0$, the exact sequence $0\rightarrow S\rightarrow M\rightarrow M/S\rightarrow 0$ induces the epimorphism
$\ext tR{M/S}N\rightarrow\ext tRMN.$
It follows that $$\operatorname{Ann}_R(M/S)\subseteq\operatorname{Ann}_R\left(\ext tR{M/S}N\right)\subseteq\operatorname{Ann}_R\left(\ext tR{M}N\right)$$
and hence the first inclusion in (iii) holds. To prove the second inclusion in (iii), assume that $N$ is a Gorenstein module. If $\Sigma(t)=\emptyset$, then $T=M$ by Remark \ref{rem} and there is nothing to prove. Hence, suppose that $\Sigma(t)\neq\emptyset$, $\fp_i\in\Sigma(t)$ and $y\in\operatorname{Ann}_R\left(\ext tRMN\right)$. Since $\grad{\fp_i}N=t<\infty$, we have $\fp_iN\neq N$ and so, by Lemma \ref{gor}, $\fp_i\in\supp_R(N)$. Hence $N_{\fp_i}$ is a Gorenstein ${R_{\fp_i}}$-module \cite[Corollary 3.7]{s}. Because $N$ is Cohen-Macaulay, we have $\dim_{R_{\fp_i}}( N_{\fp_i})=\grad {\fp_i}N=t$ and so, by \cite[Theorem 4.12]{s}, we have $\dim_{R_{\fp_i}}(R_{\fp_i})=\dim_{R_{\fp_i}} (N_{\fp_i})=t$. We proved that $N_{\fp_i}$ is a Gorenstein $R_{\fp_i}$-module of dimension $t$, and hence, in view of the faithfully flatness of completion, we can deduce that $\widehat{N_{\fp_i}}$ is also a Gorenstein $\widehat{R_{\fp_i}}$-module of dimension $t$. Hence $\widehat{N_{\fp_i}}\cong\omega^{\alpha}_{\widehat{R_{\fp_i}}}$ for some $\alpha\in\N$ \cite[Corollary 2.7]{s2}, where $\omega_{\widehat{R_{\fp_i}}}$ denotes the canonical module of $\widehat{R_{\fp_i}}$. Since $\widehat{R_{\fp_i}}$ is a Cohen-Macaulay complete local ring of dimension $t$, by the Local Duality Theorem \cite[Theorem 11.2.8]{bs} and \cite[Remarks 10.2.2(ii)]{bs}, we have
\begin{align*}
\operatorname{Ann}_{R_{\fp_i}}\left(\ext t{R_{\fp_i}}{ M_{\fp_i}}{N_{\fp_i}}\right)
&=R_{\fp_i}\cap\operatorname{Ann}_{\widehat{R_{\fp_i}}}\left(\ext t{\widehat{R_{\fp_i}}}{\widehat{ M_{\fp_i}}}{\widehat{N_{\fp_i}}}\right)\\
&=R_{\fp_i}\cap\operatorname{Ann}_{\widehat{R_{\fp_i}}}\left(\ext t{\widehat{R_{\fp_i}}}{\widehat{ M_{\fp_i}}}{\omega_{\widehat{R_{\fp_i}}}}\right)\\
&=R_{\fp_i}\cap\operatorname{Ann}_{\widehat {R_{\fp_i}}}\left(\gam {\widehat{\fp_i{R_{\fp_i}}}}{\widehat{M_{\fp_i}}}\right)\\
&=\operatorname{Ann}_{R_{\fp_i}}\left(\gam {\fp_i{R_{\fp_i}}}{M_{\fp_i}}\right)=\operatorname{Ann}_{R_{\fp_i}}{(M_{\fp_i})}
\end{align*}
(note that since $\fp_i$ is a minimal element of $\ass_R(M)$, it follows that $\dim_{R_{\fp_i}}(M_{\fp_i})=0$ and hence
$\gam {\fp_i{R_{\fp_i}}}{M_{\fp_i}}={M_{\fp_i}}$).
Now, if $1\leq j\neq i\leq n $, then $(M/M_j)_{\fp_i}=0$, because $\ass_R(M/M_j)=\{\fp_j\}$ and $\fp_i$ is a minimal element of $\ass_R(M)$. Thus
$(M_j)_{\fp_i}= M_{\fp_i}$ for all $1\leq j\neq i\leq n$, and so $\left(\bigcap_{j=1}^nM_j\right)_{\fp_i}\cong(M_i)_{\fp_i}$. Since $M_{\fp_i}\cong (M/0)_{\fp_i}\cong \left(M/{\bigcap_{j=1}^nM_j}\right)_{\fp_i}\cong(M/M_i)_{\fp_i}$, we have $y/1\in(\operatorname{Ann}_R(M/M_i))_{\fp_i}$, and hence $y/1\sim z/s$ for some $z\in\operatorname{Ann}_R(M/M_i)$, $s\in R\setminus\fp_i$. Thus $rsy=rz\in \operatorname{Ann}_R(M/M_i)$ for some
$r\in R\setminus\fp_i$. Hence $rsyM\subseteq M_i$. Since $M_i$ is a $\fp_i$-primary submodule of $M$, it follows from $rs\notin\fp_i$ that $yM\subseteq M_i$.
Because $\fp_i$ is an arbitrary element of $\Sigma(t)$, $yM\subseteq \bigcap_{\fp_i\in\Sigma(t)}M_i$, and by part (i), this implies that $yM\subseteq T$.
Thus $\operatorname{Ann}_R\left(\ext tRMN\right)\subseteq\operatorname{Ann}_R(M/T)$.
iv) Assume $N$ is a Gorenstein module such that $\supp_R(M)\cap\supp_R(N)\neq\emptyset$. Thus $N/(\operatorname{Ann}_R(M))N\neq 0$, and so $t=\grad{\operatorname{Ann}_R(M)}N<\infty$; see \cite[Definition 1.2.6]{bh}. It is clear that $\Sigma(t)\subseteq\Delta(t)$. To prove the reverse inclusion, let $\fp\in\Delta(t)$. Since $\operatorname{Ann}_R(M)\subseteq \fp$, we obtain $\grad{\operatorname{Ann}_R(M)}N\leq\grad{\fp}N$ and consequently $\grad{\operatorname{Ann}_R(M)}N=\grad{\fp}N$. Now, let $\fq\in\supp_R(M)$ be such that $\fq\subseteq\fp$. It follows from $\grad {\fp}N=t<\infty$ that $\fp\in\supp_R(N)$, and so $\fq\in\supp_R(N)$ by \cite[Corollary 4.14]{s}. Hence, by \cite[Theorem 2.1.3 (b)]{bh} and \cite[Theorem 4.12]{s}, we have
\begin{align*}
t&=\grad{\operatorname{Ann}_R(M)}N\leq\grad{\fq}N=\dim_{R_\fq}(N_{\fq})\\
&=\dim_{(R_\fp)_{\fq R_\fp}}(N_\fp)_{\fq R_\fp}=\dim_{R_\fp}(N_{\fp})-\dim_{R_\fp}(N_{\fp}/(\fq R_{\fp})N_\fp)\\
&=\grad{\fp}N-\dim_{R_\fp}(R_{\fp}/\fq R_{\fp})=t-\dim_{R_\fp}(R_{\fp}/\fq R_{\fp})
\end{align*}
Therefore $\dim_{R_\fp}(R_{\fp}/\fq R_{\fp})=0$ or equivalently $\fq=\fp$. Hence $\fp\in\mass_R(M)$ and consequently $\Delta(t)\subseteq\Sigma(t)$.
\end{proof}
For an integer $t$ and an $R$-module $M$, we denote, respectively, the sets $\{\fp\in\ass_R(M): \dim_R (R/\fp)=t\}$ and $\{\fp\in\ass_R(M): \dim_R (R/\fp)\geq t\}$ by $\ass_R^t(M)$ and $\ass_R^{\geqslant t}(M)$.
Similarly, $\mass_R^t(M)$ and $\mass_R^{\geqslant t}(M)$ are defined as above by replacing $\ass_R(M)$ by $\mass_R(M)$. Also, when $\dim_R(M)<\infty$, the set of prime ideals in $\ass_R(M)$ of the highest possible dimension $\{\fp\in\ass_R(M): \dim_R (R/\fp)=\dim_R(M)\}$ is denoted by $\assh_R(M)$.
\begin{rem}\label{rem1} Let the situation and notation be as in the above theorem. Let $N$ be a Gorenstein $R$-module, and $\fp$ a prime ideal of $R$. Then $\grad {\fp}N<\infty$ if and only if $\fp N\neq N$, or equivalently, $\fp\in\supp_R(N)$. Also, if $\fp\in\supp_R(N)$, then $N_\fp$ is a Gorenstein module over the local ring $R_\fp$ and, in view of \cite[Theorem 4.12]{s}, we have
$$\grad{\fp}N=\dim_{R_\fp}(N_{\fp})=\dim_{R_\fp}(R_{\fp})=\Ht R\fp.$$
Hence $$\Delta(t)=\{\fp\in\ass_R(M)\cap\supp_R(N): \Ht R\fp\leq t\},$$
$$\Sigma(t)=\{\fp\in\mass_R(M)\cap\supp_R(N): \Ht R\fp=t\}.$$
In the remainder of this remark, assume in addition that $R$ is a local ring of dimension $d$. Then $\Ht R\fp=d-\dim_R(R/\fp)$ and $\supp_R(N)=\spec (R)$. Thus the above theorem states that
\begin{align}\label{ext1}
\operatorname{Ann}_R\left({M}/{\bigcap_{\fp_i\in\ass_R^{\geqslant d-t}(M)}M_i}\right)&\subseteq\operatorname{Ann}_R\left(\ext tRMN\right)\\
\nonumber &\subseteq \operatorname{Ann}_R\left({M}/{\bigcap_{\mass_R^{d-t}(M)}M_i}\right).
\end{align}
In particular, if $M\neq 0$, then $$\grad{\operatorname{Ann}_R(M)}N=\dim_R(N)-\dim_R(N/(\operatorname{Ann}_R(M))N)=d-\dim_R(M)$$ and the equality in Theorem \ref{annh1}(iv) can be rewritten as follows
\begin{equation}\label{ext2}
\operatorname{Ann}_R\left(\ext {d-\dim_R(M)}RMN\right)=\operatorname{Ann}_R\left({M}/{\bigcap_{\fp_i\in\assh_R(M)}M_i}\right).
\end{equation}
\end{rem}
These results are needed in the proof of the main theorem of the next section (Theorem \ref{annh}) which provides some bounds for the annihilators of local cohomology modules.
We end this section with two examples that show how to compute the above bounds for the annihilators of Ext modules. Moreover, these examples show that, in order to improve the upper bound in \eqref{ext1}, we cannot replace the index set $\mass_R^{d-t}(M)$ by the larger sets $\mass_R^{\geqslant d-t}(M)$, $\ass_R^{d-t}(M)$ or $\ass_R^{\geqslant d-t}(M)$, and that, in order to improve the lower bound in \eqref{ext1}, we cannot replace the index set $\ass_R^{\geqslant d-t}(M)$ by the smaller set $\ass_R^{d-t}(M)$. Also, in general, for an arbitrary integer $t$ there is no subset $\Sigma$ of $\ass_R(M)$ such that
$\operatorname{Ann}_R\left(\ext tRMN\right)=\operatorname{Ann}_R\left(M/(\bigcap_{\fp_i\in\Sigma}M_i)\right)$.
Let $U$ be a subset of an $R$-module $M$. We use $\langle U\rangle$ to denote the submodule of $M$ generated by $U$. If $U=\{m_1,\dots, m_n\}$, then we write $\langle m_1,\dots, m_n\rangle$ for $\langle U\rangle$.
\begin{exa} \label{exa1} Let $K$ be a field and let $R=K[[X, Y]]$ be the ring of formal power series over $K$ in indeterminates $X, Y$.
Set $M=R/\langle X^2, XY\rangle$, $M_1=\langle X\rangle/\langle X^2, XY\rangle$, and $M_2=\langle X^2, Y \rangle/\langle X^2, XY \rangle$. Then $0=M_1\cap M_2$ is a minimal primary decomposition of the zero submodule of $M$ with $\ass_R(M/M_1)=\{\fp_1=\langle X\rangle\}$ and $\ass_R(M/M_2)=\{\fp_2=\langle X, Y\rangle\}$. So $\ass_R(M)=\{\fp_1, \fp_2\}$ and $\mass_R(M)=\{\fp_1\}$.
Hence, we have
$$\ass_R^{\geqslant 2-t}(M)=\left\{\begin{array}{ll}
\emptyset&{\rm if }\ t=0,\\
\{\fp_1\}&{\rm if }\ t=1,\\
\{\fp_1,\fp_2\}&{\rm if }\ t=2
\end{array}\right.$$
and
$$\mass_R^{ 2-t}(M)=\left\{\begin{array}{ll}
\emptyset&{\rm if }\ t=0, 2,\\
\{\fp_1\}&{\rm if }\ t=1.
\end{array}\right.$$
It follows that
$$\operatorname{Ann}_R\left({M}/{\bigcap_{\fp_i\in\ass_R^{\geqslant 2-t}(M)}M_i}\right)=\left\{\begin{array}{ll}
R&{\rm if }\ t=0,\\
\langle X\rangle&{\rm if }\ t=1,\\
\langle X^2, XY \rangle&{\rm if }\ t=2
\end{array}\right.$$ and
$$ \operatorname{Ann}_R\left({M}/{\bigcap_{\fp_i\in\mass_R^{2-t}(M)}M_i}\right)=\left\{\begin{array}{ll}
R&{\rm if }\ t=0, 2,\\
\langle X\rangle&{\rm if }\ t=1.
\end{array}\right.$$
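(For instance, the entries for $t=1$ arise as follows: in both cases the intersection runs only over $M_1$, and $M/M_1\cong R/\fp_1$, so the annihilator is $\langle X\rangle$; for $t=2$ in the first display the intersection is $M_1\cap M_2=0$, so the annihilator equals $\operatorname{Ann}_R(M)=\langle X^2, XY\rangle$.)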
Hence, Remark \ref{rem1} implies that $$\Hom RMR=0,\ \operatorname{Ann}_R\left(\ext 1RMR\right)=\langle X\rangle$$ and $$\langle X^2, XY\rangle\subseteq\operatorname{Ann}_R\left(\ext 2RMR\right)\subseteq R.$$
Also, since $\operatorname{inj\,dim}_R(R)=2$, we deduce that $\ext tRMR=0$ for all $t>2$.
Now, we directly compute $\operatorname{Ann}_R\left(\ext tRMR\right)$ for all $t$ (especially for $t=2$). It is straightforward to see that
$$\textbf{P}: 0\longrightarrow R\stackrel{d_2}{\longrightarrow} R^2\stackrel{d_1}{\longrightarrow} R\stackrel{\epsilon}{\longrightarrow} M\longrightarrow 0$$ with $\epsilon(f)=f+\langle X^2, XY\rangle,\, d_1(f, g)=X^2f+XYg,\, d_2(f)=(Yf, -Xf)$ for all $f, g\in R$ is a projective resolution of $M$. Applying the functor $\Hom R{\cdot}R$ to the deleted projective resolution $\textbf{P}_M$, we obtain the following commutative diagram
\begin{displaymath} \xymatrix{
0\ar[r]&\Hom RRR\ar[d]^{\alpha}_{\cong}\ar[r]^{d_1^*}&\Hom R{R^2}R\ar[d]^{\beta}_{\cong}\ar[r]^{d_2^*}&\Hom RRR\ar[d]^{\gamma}_{\cong}\ar[r]&0\\
0\ar[r]&R\ar[r]^{\delta_1} &R^2\ar[r]^{\delta_2} &R\ar[r] &0,
} \end{displaymath}
where $\alpha, \beta, \gamma$ are natural isomorphisms, $\delta_1(f)=(X^2f, XYf)$, and $ \delta_2(f,g)=Yf-Xg$ for all $f, g\in R$. Hence
$$ \ext 1RMR\cong{\ker\delta_2}/{\im\delta_1}=\langle(X, Y)\rangle/\langle(X^2, XY)\rangle\cong R/\langle X\rangle,$$ and
$$ \ext 2RMR\cong {R}/{\langle X, Y\rangle} \ \text{and} \ \ext tRMR=0 \ \text{for all}\ t\neq 1, 2$$
(note that by our notation $\ker\delta_2$ and $\im\delta_1$ are cyclic $R$-modules generated by the elements $(X, Y)$ and $(X^2, XY)$ of $R^2$ respectively).
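For the reader's convenience, here is one way to verify the description of $\ker\delta_2$ used above. If $\delta_2(f, g)=Yf-Xg=0$, then $Yf\in\langle X\rangle$; since $Y$ is a non-zerodivisor on $R/\langle X\rangle$, we get $f=Xh$ for some $h\in R$, and then $g=Yh$. Hence
$$\ker\delta_2=\{(Xh, Yh): h\in R\}=\langle(X, Y)\rangle,\qquad \im\delta_1=\{(X^2h, XYh): h\in R\}=\langle(X^2, XY)\rangle,$$
and the isomorphism $R\cong\langle(X, Y)\rangle$, $h\mapsto(Xh, Yh)$, carries $\langle X\rangle$ onto $\langle(X^2, XY)\rangle$, which explains the isomorphism $\ker\delta_2/\im\delta_1\cong R/\langle X\rangle$.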
It follows that $$\operatorname{Ann}_R \left(\ext 1RMR\right)=\langle X\rangle \ \text{and} \ \operatorname{Ann}_R \left(\ext 2RMR\right)=\langle X, Y\rangle.$$
Thus, there is no subset $\Sigma$ of $\ass_R(M)$ such that
$\operatorname{Ann}_R\left(\ext 2RMR\right)=\operatorname{Ann}_R\left(M/(\bigcap_{\fp_i\in\Sigma}M_i)\right)$.
Moreover, for $t=2$, this example shows that in the second inclusion of \eqref{ext1} in Remark \ref{rem1}, to obtain a better upper bound (with respect to inclusion) for $\operatorname{Ann}_R\left(\ext tRMR\right)$, we cannot replace the index set $\mass_R^{d-t}(M)$ by the larger sets $\mass_R^{\geqslant d-t}(M)$, $\ass_R^{d-t}(M)$ or $\ass_R^{\geqslant d-t}(M)$.
\end{exa}
\begin{exa}\label{exa2} Let $K$ be a field and let $R=K[[X, Y, Z, W]]$ be the ring of formal power series over $K$ in indeterminates $X, Y, Z, W$. Then $R$ is a local ring with maximal ideal $\fm=\langle X, Y, Z, W\rangle$. Set $\fp_1=\langle X, Y\rangle$, $\fp_2=\langle Z, W\rangle$, and $M=R/(\fp_1\cap\fp_2)$. Then $\depth_R{(R/\fp_1)}=\depth_R{(R/\fp_2)}=2$, and hence $\h i{\fm}{R/\fp_1}=\h i{\fm}{R/\fp_2}=0$ for $i=0, 1$. Now, the exact sequence $$0\rightarrow M\rightarrow {R}/{\fp_1}\oplus {R}/{\fp_2}\rightarrow {R}/{\fm}\rightarrow 0$$ induces the exact sequence
\begin{align*}
0&\rightarrow \h 0{\fm}M\rightarrow \h 0{\fm}{{R}/{\fp_1}}\oplus \h 0{\fm}{{R}/{\fp_2}}\rightarrow \h 0{\fm}{{R}/{\fm}}
\rightarrow \h 1{\fm}M\\&\rightarrow \h 1{\fm}{{R}/{\fp_1}}\oplus \h 1{\fm}{{R}/{\fp_2}}
\end{align*}
of local cohomology modules. It follows that $\h 0{\fm}M=0$ and $\h 1{\fm}M\cong R/\fm$. Since $R$ is a regular ring,
it is Gorenstein \cite[Proposition 3.1.20]{bh}, and hence $R$ is the canonical module of $R$ \cite[Theorem 3.3.7]{bh}. Therefore, by the Grothendieck duality \cite[Theorem 11.2.8]{bs}, we have $\Hom{R}{\ext 3RMR}{E(R/\fm)}\cong\h 1{\fm}M$.
Thus $\operatorname{Ann}_R\left(\ext 3RMR\right)=\fm$.
On the other hand, if $M_1=\fp_1/(\fp_1\cap\fp_2)$ and $M_2=\fp_2/(\fp_1\cap\fp_2)$, then $0=M_1\cap M_2$ is a minimal primary decomposition of the zero submodule of $M$. Since $\ass^1_R(M)=\emptyset$, we have $$R=\operatorname{Ann}_R \left({M}/{\bigcap_{\fp_i\in\ass^1_R(M)}M_i}\right)\nsubseteq \operatorname{Ann}_R\left(\ext 3RMR\right).$$
Therefore, in the first inclusion of \eqref{ext1} in Remark \ref{rem1}, to obtain a better lower bound for $\operatorname{Ann}_R\left(\ext tRMR\right)$,
we cannot replace the index set $\ass_R^{\geqslant d-t}(M)$ by the smaller set $\ass_R^{d-t}(M)$.
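We also note what formula \eqref{ext2} yields in this example: $N=R$ is a Gorenstein $R$-module, $\dim_R(R)-\dim_R(M)=4-2=2$, $\assh_R(M)=\{\fp_1, \fp_2\}$ and $M_1\cap M_2=0$, so \eqref{ext2} gives $\operatorname{Ann}_R\left(\ext 2RMR\right)=\operatorname{Ann}_R(M)=\fp_1\cap\fp_2$.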
\end{exa}
\section{\bf Bounds for the annihilators of local cohomology modules}
In this section we investigate the annihilators of local cohomology modules. For an $R$-module $M$, we denote $\sup\{i\in\N_0: \h i{\fa}M\neq 0\}$ by $\cd {\fa}M$. Let $\fa$ be a proper ideal of $R$, $M$ a non-zero finitely generated $R$-module of dimension $d$, and $0=M_1\cap\hdots \cap M_n$ a minimal primary
decomposition of the zero submodule of $M$ with $\ass_R(M/M_i)=\{\fp_i\}$ for all $1\leq i\leq n$. If $\cd{\fa}M=d<\infty$, then
$$\operatorname{Ann}_R\left(\h d{\fa}M\right)=\operatorname{Ann}_R\left(M/\bigcap_{\cd{\fa}{R/\fp_i}=d}M_i\right);$$
see \cite{asn2} and the Introduction for more details.
For an arbitrary integer $t$, when $(R, \fm)$ is a local ring, we give bounds for $\operatorname{Ann}_R\left(\h t{\fm}M\right)$; see Theorem \ref{annh}. Also,
when $R$ is not necessarily local, we provide in Theorem \ref{annh2} a bound for $\operatorname{Ann}_R\left(\h {\cd {\fa}M}{\fa}M\right)$ which implies the above equality when $\cd{\fa}M=\dim_R(M)$. Finally, when $M$ is Cohen-Macaulay, a bound for $\operatorname{Ann}_R\left(\h t{\fa}M\right)$ is given, and for $t=\grad{\fa}M$ this annihilator is computed exactly in Theorem \ref{annh3}.
Assume $(R, \fm)$ is a local ring. The $\fm$-adic completion $\hat R$ of $R$ is a faithfully flat $R$-module (see \cite[Theorem 8.14]{mat}), and so $R\subseteq \hat R$. Applying \cite[Theorem 23.2]{mat} to the ring homomorphism $\varphi :R\rightarrow \hat R$ we obtain the following lemma.
\begin{lem}[See {\cite[Theorem 23.2]{mat}} ]\label{ass2} Let $(R, \fm)$ be a local ring, and $M$ an $R$-module. Then
\begin{enumerate}[\rm(i)]
\item If $\fp\in\spec(R)$ and $\fP\in\ass_{\hat R}(\hat R/\fp \hat R)$, then $R\cap\fP=\fp$.
\item $$\ass_{\hat R}(M\otimes_{R}\hat R)=\bigcup_{\fp\in\ass_R(M)}\ass_{\hat R}(\hat R/\fp \hat R).$$
\end{enumerate}
\end{lem}
This lemma is used in the proof of the following theorem which is the main theorem of this section.
\begin{thm} \label{annh}Let $(R, \fm)$ be a local ring and $t\in\N_0$. Let $M$ be a non-zero finitely generated $R$-module, and
$0=M_1\cap\hdots\cap M_n$ a minimal primary decomposition of the zero submodule of $M$ with $\ass_R(M/M_i)=\{\fp_i\}$ for all $1\leq i\leq n$.
Then
\begin{enumerate}[\rm(i)]
\item $\bigcap_{\fp_i\in\ass_R^{\geqslant t}(M)}M_i=S^t_M(0)$ and $\bigcap_{\fp_i\in\mass_R^{t}(M)}M_i=T^t_M(0)$, where $S^t=R\setminus\bigcup_{\fp\in\ass_R^{\geqslant t}(M)}\fp$ and $T^t=R\setminus\bigcup_{\fp\in\mass_R^t(M)}\fp$. In particular, $\bigcap_{\fp_i\in\ass_R^{\geqslant t}(M)}M_i$ and $\bigcap_{\fp_i\in\mass_R^{t}(M)}M_i$ are independent of the choice of minimal primary decomposition of the zero submodule of $M$.
\item $S^t_M(0)$ is the largest submodule $N$ of $M$ such that $\dim_R(N)<t$.
\item $$ \operatorname{Ann}_R\left({M}/{S^t_M(0)}\right)\subseteq\operatorname{Ann}_R\left(\h t{\fm}M\right)
\subseteq \operatorname{Ann}_R\left({M}/{T^t_M(0)}\right).$$
In particular, for $t=\dim_R(M)$, there are the equalities $S^t_M(0)=T^t_M(0)={\bigcap_{\fp_i\in\assh_R(M)}M_i}$, and
$$\operatorname{Ann}_R\left(\h {\dim_R(M)}{\fm}M\right)=\operatorname{Ann}_R\left({M}/{\bigcap_{\fp_i\in\assh_R(M)}M_i}\right).$$
\end{enumerate}
\end{thm}
\begin{proof}
Set $S=S^t_M(0)$ and $T=T^t_M(0)$. It is clear that $\ass_R^{\geqslant t}(M)$ and $\mass_R^t(M)$ are isolated subsets of $\ass_R(M)$, and hence (i) follows from Lemma \ref{primary}.
To prove (ii), first note that $\ass_R(S)=\ass_R(M)\setminus\ass_R^{\geqslant t}(M)$ by Lemma \ref{ass} and hence $\dim_R(S)<t$.
Now, assume that $N$ is a submodule of $M$ such that $\dim_R(N)<t$. Suppose, for the sake of contradiction, that $N\nsubseteq S$. Then
$$0\neq {N}/({N\cap S})\cong({N+S})/{S}\subseteq {M}/{S}.$$ Hence
$$\emptyset\neq\ass_R\left( {N}/{\left(N\cap S\right)}\right)\subseteq\ass_R\left( {M}/{S}\right)=\ass_R^{\geqslant t}(M)$$
which is impossible, because $\dim_R \left({N}/\left({N\cap S}\right)\right)\leq\dim_R(N)<t$. This proves (ii).
Now, we prove (iii). In the case when $t=\dim_R(M)$, it is clear that
$$\mass^t_R(M)=\ass^{\geqslant t}_R(M)=\assh_R(M),$$ and so $S^t_M(0)=T^t_M(0)={\bigcap_{\fp_i\in\assh_R(M)}M_i}$. Therefore the first part of (iii) yields the equality
$\operatorname{Ann}_R\left(\h {\dim_R(M)}{\fm}M\right)=\operatorname{Ann}_R\left({M}/{\bigcap_{\fp_i\in\assh_R(M)}M_i}\right)$ whenever $t=\dim_R(M)$.
Also, we saw in (ii) that $\dim_R(S)<t$, and so we obtain $\h t{\fm}M\cong\h t{\fm}{M/S}$. Therefore
$$\operatorname{Ann}_R\left({M/S}\right)\subseteq\operatorname{Ann}_R\left(\h t{\fm}{M/S}\right)=\operatorname{Ann}_R\left(\h t{\fm}M\right).$$
Thus, to complete the proof of (iii), it only remains to prove that
$$\operatorname{Ann}_R\left(\h t{\fm}M\right)\subseteq\operatorname{Ann}_R\left(M/T\right).$$
Set $d=\dim_R(R)$. First, assume that $R$ is complete. By Cohen's structure theorem for complete local rings \cite[Theorem 29.4(ii)]{mat}, there is a complete regular local ring $R'$ such that $R=R'/I$ for some ideal $I$ of $R'$. Now, let $h=\Ht {R'}I$ and let $x_1,\ldots,x_h$ be a maximal $R'$-sequence in $I$. Set ${R''}=R'/(x_1,\ldots,x_h)$ and
$J=I/(x_1,\ldots,x_h)$. Then ${R''}$ is a local Gorenstein ring of dimension $d$ \cite[Corollary 3.1.15]{bh} and $R\cong {R''}/J$.
Now, let $\fn$ be the maximal ideal of ${R''}$. Then $\fm\cong\fn/J$. By the Grothendieck duality for Gorenstein rings \cite[Theorem 11.2.5]{bs}, there is the following isomorphism of $R''$-modules
$$
\h t{\fn}M\cong \Hom {R''}{\ext {d-t}{R''}M{R''}}{E_{R''}({R''}/\fn)}.
$$
Also, by using the Independence Theorem under the ring homomorphism $R''\rightarrow R''/J\cong R$, we obtain
the following isomorphism of $R''$-modules
$$\h t{\fn}M\cong\h t{\fn (R''/J)}M\cong\h t{\fm}M$$
(we recall that $\fn/J\cong\fm$).
We refer the reader to \cite[Theorem 4.2.1]{bs} or \cite[Proposition 2.11(2)]{h} for the Independence Theorem. Also, we note that any $R$-module $N$ has an $R''$-module structure given by $r''\cdot x=(r''+J)x=\psi(r''+J)x$ for all $r''\in R''$ and $x\in N$, where $\psi$ denotes the ring isomorphism from $R''/J$ to $R$. Hence, by \cite[Remarks 10.2.2(ii)]{bs}, we have
$$\operatorname{Ann}_{R''}\left(\h t{\fm}M\right)=\operatorname{Ann}_{R''}\left(\h t{\fn}M\right)=\operatorname{Ann}_{R''}\left(\ext {d-t}{R''}M{R''}\right).$$
For each $1\leq i\leq n$, let $P_i$ be the contraction of $\fp_i$ to ${R''}$ under the ring homomorphism $R''\rightarrow R''/J\cong R$. Then $\ass_{R''}(M)=\{P_1,\ldots,P_n\}$ and there is a bijective correspondence between the sets $\ass_{R''}(M)$ and $\ass_R(M)$ given by $P_i\leftrightarrow\fp_i$. Also,
$0=M_1\cap\hdots\cap M_n$ is a minimal primary decomposition of the zero submodule of $M$ as ${R''}$-modules with $\ass_{R''}(M/M_i)=\{P_i\}$ for all $1\leq i\leq n$. Since ${R''}$ is Gorenstein, by Remark \ref{rem1}(\ref{ext1}), we obtain \begin{align*}
\operatorname{Ann}_{R''}\left(\h t{\fm}M\right)\subseteq \operatorname{Ann}_{R''}\left({M}/{\bigcap_{P_i\in\mass^{t}_{R''}(M)}M_i}\right).
\end{align*}
For any $R$-module $N$, we have $J\subseteq \operatorname{Ann}_{R''}(N)$ and so $\operatorname{Ann}_{{R''}/J}(N)=(\operatorname{Ann}_{R''}(N))/J$. Therefore the above inclusion proves the claimed inclusion in the case where $R$ is complete.
Now, suppose that $R$ is not necessarily complete. Assume $0=\bigcap_{k\in K}\mathcal{M}_k$ is a minimal primary decomposition of the zero submodule of the $\hat R$-module $\hat M$ with $\ass_{\hat R}(\hat M/\mathcal {M}_k)=\{\fP_k\}$. Since $\ass_{\hat R}(\hat M)=\bigcup_{i=1}^n\ass_{\hat R}(\hat R/\fp_i \hat R)$, there exist subsets $K_1,\dots, K_n$ of $K$ such that $K=\bigcup_{i=1}^n {K_i}$, and for each $i$, $\ass_{\hat R}(\hat R/\fp_i \hat R)=\{\fP_k: k\in K_i\}$. Also, the subsets $K_1,\dots, K_n$ of $K$ are disjoint by Lemma \ref{ass2}(i).
Assume $x\in\operatorname{Ann}_R\left(\h t{\fm}M\right)$ and $\fp_i\in\mass_R^{t}(M)$. By the complete case,
\begin{equation}\label{eqann}
x\hat R\subseteq\operatorname{Ann}_{\hat R}\left(\h t{\fm\hat R}{\hat M}\right)\subseteq\operatorname{Ann}_{\hat R}\left({\hat M}/{\bigcap_{\fP_k\in\mass_{\hat R}^{t}(\hat M)}\mathcal{M}_k}\right).
\end{equation}
Now, suppose that $k\in K_i$ and $\fP_k\in\assh_{\hat R}(\hat M/\widehat{M_i})$ (note that, by Lemma \ref{ass2}, $\ass_{\hat R}(\hat M/\widehat{M_i})=\ass_{\hat R}(\hat R/\fp_i\hat R)$). We have
$$\dim_{\hat R}(\hat R/\fP_k)=\dim_{\hat R}(\hat M/\widehat{M_i})=\dim_R(M/M_i)=\dim_R(R/\fp_i)=t.$$
We show that $\fP_k$ is a minimal element of $\ass_{\hat R}(\hat M)$.
Assume that $1\leq i'\leq n$, $k'\in K_{i'}$, and $\fP_{k'}\subseteq \fP_k$. Then $\fp_{i'}=\fP_{k'}\cap R\subseteq \fP_k\cap R=\fp_i$. Since $\fp_i$ is a minimal element of $\ass_R(M)$ and $K_1,\dots,K_n$ are disjoint sets, we deduce that $i=i'$. It follows that both $\fP_k$ and $\fP_{k'}$ are elements of $\ass_{\hat R}(\hat M/\widehat{M_i})$. Therefore $$\dim_{\hat R}(\hat M/\widehat{M_i})=\dim_{\hat R}(\hat R/\fP_k)\leq\dim_{\hat R}(\hat R/\fP_{k'})\leq\dim_{\hat R}(\hat M/\widehat{M_i}),$$
and hence $\fP_k=\fP_{k'}$. Thus $\fP_k\in\mass_{\hat R}^{t}(\hat M)$ and inclusion (\ref{eqann}) yields $x\hat M\subseteq \mathcal{M}_k$.
Since $\fP_k$ is a minimal element of $\ass_{\hat R}(\hat M/\widehat{M_i})$, it follows that the contraction of $(\widehat{M_i})_{\fP_k}$ under the canonical map $\hat M\rightarrow \hat M_{\fP_k}$, denoted by $\mathcal N_k$, is the $\fP_k$-primary component of each minimal primary decomposition of $\widehat{M_i}$ in $\hat M$ (see Lemma \ref{primary} or \cite[Theorem 6.8.3(iii)]{mat}). Hence $\mathcal{N}_k/\widehat{M_i}$ is the $\fP_k$-primary component of each minimal primary decomposition of $0$ in $\hat M/\widehat{M_i}$. Also, we have $\mathcal M_k\subseteq\mathcal N_k$ because $\mathcal M_k$ is the contraction of the zero submodule under the map $\hat M\rightarrow \hat M_{\fP_k}$. Therefore $x(\hat M/\widehat{M_i})\subseteq \mathcal{N}_k/\widehat{M_i}$.
Since $\fP_k$ is an arbitrary element of $\assh_{\hat R}(\hat M/\widehat{M_i})$, we have
$x(\hat M/\widehat{M_i})\subseteq \bigcap_{\fP_k\in\assh_{\hat R}(\hat M/\widehat{M_i})}\mathcal{N}_k/\widehat{M_i}.$
Hence, by Lemma \ref{ass}, $\ass_{\hat R}(x({\hat M}/{\widehat{M_i}}))\subseteq\ass_{\hat R}({\hat M}/{\widehat{M_i}})\setminus \assh_{\hat R}({\hat M}/{\widehat{M_i}})$. This yields
$$\dim_R\left(x(M/M_i)\right)=\dim_{\hat R}(x({\hat M}/{\widehat{M_i}}))<\dim_{\hat R}(\hat M/\widehat{M_i})=t.$$ Therefore $\fp_i\notin\ass_R(x(M/M_i))$ and hence $\ass_R(x(M/M_i))=\emptyset$ or equivalently $xM\subseteq M_i$. This proves the claimed inclusion and completes the proof.
\end{proof}
Now, in the following theorem, we give a bound for the annihilator of the top local cohomology module without assuming that $R$ is local. Before that, we need the following lemma.
\begin{lem}[{\cite[Theorem 2.2]{dnt}}]\label{cd}
Let $\fa$ be an ideal of $R$ and $M, N$ two finitely generated $R$-modules such that $\supp_R(M)\subseteq\supp_R(N)$. Then $\cd {\fa}M\leq\cd {\fa}N$.
\end{lem}
Assume $\fa$ is an ideal of $R$ and $M$ is a finitely generated $R$-module. Since $\supp_R(M)=\supp_R\left(\bigoplus_{\fp\in\ass_R(M)}R/\fp\right)$, the above lemma implies that $$\cd{\fa}M=\operatorname{cd}_{R}\left(\fa, \bigoplus_{\fp\in\ass_R(M)}R/\fp\right)=\sup\{\cd{\fa}{R/\fp}: \fp\in\ass_R(M)\}.$$
By \cite[Exercise 6.2.6 and Theorem 6.2.7]{bs}, $\h i{\fa}M$ is zero for all $i$ if and only if $M=\fa M$, and so, in this case, we have $\cd{\fa}M=\sup\emptyset=-\infty$. On the other hand, if $\fa$ is generated by $t\in\N_0$ elements, then $\cd{\fa}M\leq t<\infty$; see \cite[Theorem 3.3.1]{bs}. Hence $\cd{\fa}M$ is a non-negative integer if and only if $M\neq\fa M$.
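For a simple illustration: if $K$ is a field, $R=K[[X, Y]]$ and $\fa=\langle X\rangle$, then $\h 0{\fa}R=0$ because $R$ is a domain, $\h 1{\fa}R\cong R_X/R\neq 0$ and $\h i{\fa}R=0$ for all $i\geq 2$ because $\fa$ is principal, so that $\cd{\fa}R=1$.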
\begin{thm} \label{annh2} Let $M$ be a non-zero finitely generated $R$-module and $\fa$ an ideal of $R$ such that $M\neq\fa M$. Let $c=\cd{\fa}M$
and $0=M_1\cap\hdots\cap M_n$ a minimal primary decomposition of the zero submodule of $M$ with $\ass_R(M/M_i)=\{\fp_i\}$ for all $1\leq i\leq n$. Set $\Delta=\{\fp\in\ass_R(M): \cd{\fa}{R/\fp}=c\}$ and $\Sigma=\{\fp\in\ass_R(M): \cd{\fa}{R/\fp}=\dim_R (R/\fp)=c\}$.
Then
\begin{enumerate}[\rm(i)]
\item $\bigcap_{\fp_i\in\Delta}M_i=S_M(0)$, where $S=R\setminus\bigcup_{\fp\in\Delta}\fp$. In particular, $\bigcap_{\fp_i\in\Delta}M_i$ is independent of the choice of minimal primary decomposition of the zero submodule of $M$.
\item $S_M(0)$ is the largest submodule $N$ of $M$ such that $\cd{\fa}N<c$.
\item $$\operatorname{Ann}_R\left({M}/{\bigcap_{\fp_i\in\Delta}M_i}\right)\subseteq\operatorname{Ann}_R(\h c{\fa}M)
\subseteq \operatorname{Ann}_R\left({M}/{\bigcap_{\fp_i\in\Sigma}M_i}\right).$$
In particular, when $c=\dim_R(M)$, there are the equalities $\Delta=\Sigma$ and
$$\operatorname{Ann}_R\left(\h {\dim_R(M)}{\fa}M\right)=\operatorname{Ann}_R\left({M}/{S_M(0)}\right).$$
\end{enumerate}
\end{thm}
\begin{proof} Set $S=\bigcap_{\fp_i\in\Delta}M_i$ and $T=\bigcap_{\fp_i\in\Sigma}M_i$.
(i) If $\fq\in\ass_R (M)$ and $\fq\subseteq \fp$ for some $\fp\in\Delta$, then, by Lemma \ref{cd}, $$c=\cd{\fa}{R/\fp}\leq\cd{\fa}{R/\fq}\leq\cd{\fa}M=c.$$
It follows that $\fq\in\Delta$, and hence $\Delta$ is an isolated subset of $\ass_R (M)$. Therefore (i) follows from Lemma \ref{primary}.
(ii) Lemma \ref{ass} implies that $\ass_R (S)=\{\fp\in\ass_R (M): \cd{\fa}{R/\fp}<c\}$. Hence, by Lemma \ref{cd}, $\cd {\fa}{S}<c$. Also, if $N$ is a submodule of $M$ such that $\cd{\fa}N<c$, then
$$\ass_R ({N}/({N\cap S}))=\ass_R (({N+S})/{S})\subseteq\ass_R ({M}/{S})=\Delta.$$
Thus, if $\ass_R ({N}/({N\cap S}))\neq\emptyset$, then $c=\cd{\fa}{{N}/({N\cap S})}\leq\cd{\fa}N$, which is impossible. Therefore
$N\subseteq S$ and the proof of (ii) is completed.
(iii) We proved in (ii) that $\cd{\fa}{S}<c$. Therefore $\h c{\fa}M\cong\h c{\fa}{M/S}$ and hence
$$\operatorname{Ann}_R(M/S)\subseteq\operatorname{Ann}_R(\h c{\fa}{M/S})=\operatorname{Ann}_R(\h c{\fa}M).$$
This proves the first inclusion. Now, we prove the second inclusion claimed in (iii).
{\bf Case 1:} Assume that $c=\dim_R(M)$ and $(R, \fm)$ is a complete local ring. For each prime ideal $\fp$, in view of Grothendieck's Vanishing Theorem
\cite[Theorem 6.1.2]{bs}, we have $\cd{\fa}{R/\fp}\leq\dim_R (R/\fp)$. It follows that $\Delta=\Sigma$, and so $S=T$. Also, we have $\Delta=\{\fp\in\assh_R(M): \sqrt{\fa+\fp}=\fm\}$ by the Lichtenbaum-Hartshorne Theorem. Therefore
$$\sqrt{\fa+\operatorname{Ann}_R(M/S)}=\sqrt{\fa+\bigcap_{\fp\in\ass_R (M/S)}\fp}=\sqrt{\fa+\bigcap_{\fp\in\Delta}\fp}.$$
Since $M$ is a finitely generated $R$-module, the set $\Delta$ is finite and so
$$\sqrt{\fa+\bigcap_{\fp\in\Delta}\fp}=\sqrt{\bigcap_{\fp\in\Delta}(\fa+\fp)}=\bigcap_{\fp\in\Delta}\sqrt{\fa+\fp}=\fm.$$
(Note that for ideals $\fa, \fb, \fc$ and prime ideal $\fq$, we have $(\fa+\fb)\cap(\fa+\fc)\subseteq\fq$ if and only if $\fa+(\fb\cap\fc)\subseteq\fq$. Therefore
$\sqrt{(\fa+\fb)\cap(\fa+\fc)} =\sqrt{\fa+(\fb\cap\fc)}$). Hence $\sqrt{\fa+\operatorname{Ann}_R(M/S)}=\fm$ and we deduce from the Independence Theorem
$$\h c{\fa}M\cong\h c{\fa}{M/S}\cong\h c{\fm}{M/S}.$$
Also, since $\ass_R(M/S)=\Delta=\Sigma\subseteq \assh_R(M)$ and $\Delta$ is not empty, we have $\dim_R(M/S)=\dim_R(M)=c$ and $\assh_R(M/S)=\ass_R(M/S)$. Therefore the previous theorem yields
$$\operatorname{Ann}_R\left(\h c{\fa}M\right)=\operatorname{Ann}_R\left(\h c{\fm}{M/S}\right)=\operatorname{Ann}_R(M/S).$$
{\bf Case 2:} Assume that $c=\dim_R(M)$ and $R$ is not necessarily local. As in the previous case, we have $\Delta=\Sigma$ and $S=T$. To prove
$\operatorname{Ann}_R(\h c{\fa}M)\subseteq\operatorname{Ann}_R(M/S)$, assume that $x\in R$ and
$x M\nsubseteq S$; we show that $x\h c{\fa}{M}\neq 0$. By (ii), $\h c{\fa}{xM}\neq 0$. Thus there exists a prime ideal $\fm$ such that $\h c{\fa R_\fm}{xM_\fm}\neq 0$ and consequently $\h c{\fa\widehat{R_\fm}}{x\widehat{M_\fm}}\neq 0$. Therefore $c\leq\operatorname {cd}_{\widehat{R_\fm}}({\fa\widehat{R_\fm}}, {x\widehat{M_\fm}})$. It follows from Lemma \ref{cd} and Grothendieck's Vanishing Theorem that
$$c\leq \operatorname {cd}_{\widehat{R_\fm}}({\fa\widehat{R_\fm}}, {x\widehat{M_\fm}})\leq \operatorname {cd}_{\widehat{R_\fm}}({\fa\widehat{R_\fm}}, {\widehat{M_\fm}})\leq\dim_{\widehat{R_\fm}} (\widehat{M_\fm})\leq\dim_R(M)=c.$$
Hence
$\dim_{\widehat{R_\fm}} (\widehat{M_\fm})=\operatorname {cd}_{\widehat{R_\fm}}({\fa\widehat{R_\fm}}, {\widehat{M_\fm}})=c$. Since $\h c{\fa\widehat{R_\fm}}{x\widehat{M_\fm}}\neq 0$, we obtain $x{\widehat{M_\fm}}\nsubseteq S'$, where $S'$ is the largest submodule of ${\widehat{M_\fm}}$ such that $\operatorname {cd}_{\widehat{R_\fm}}({\fa\widehat{R_\fm}}, S')<c$. So, by the complete case, we have $x\h c{\fa\widehat{R_\fm}}{\widehat{M_\fm} }\neq 0$ and therefore
$x\h c{\fa}{M}\neq 0$. This proves the claimed inclusion (in fact equality) in the case where $c=\dim_R(M)$.
{\bf Case 3:} Assume $c<\dim_R(M)$. If $\Sigma=\emptyset$, then $T=M$ and there is nothing to prove. Assume $\Sigma\neq\emptyset$. Since $\cd{\fa}{T}\leq c$, the short exact sequence
$$0\rightarrow T\rightarrow M\rightarrow M/T\rightarrow 0$$ induces the epimorphism
$\h c{\fa}M\rightarrow\h c{\fa}{M/T}.$
It follows that $\operatorname{Ann}_R(\h c{\fa}M)\subseteq\operatorname{Ann}_R(\h c{\fa}{M/T})$. Since $\ass_R (M/T)=\Sigma\neq\emptyset$, we have
$$\cd{\fa}{M/T}=\max_{\fp\in\ass_R(M/T)}\cd{\fa}{R/\fp}=\max_{\fp\in\Sigma}\ \cd{\fa}{R/\fp}=c$$
and
$$\dim_R(M/T)=\max_{\fp\in\ass_R(M/T)}\dim_R(R/\fp)=\max_{\fp\in\Sigma}\ \dim_R(R/\fp)=c.$$
Thus $\dim_R (M/T)=\cd{\fa}{M/T}=c$, and so $\operatorname{Ann}_R(\h c{\fa}{M/T})=\operatorname{Ann}_R({M/T})$ by the previous case. This completes the proof.
\end{proof}
When $(R, \fm)$ is a Cohen-Macaulay local ring, $\fa$ is a non-zero proper ideal of $R$ and $t=\grad{\fa}R$, Bahmanpour calculated the annihilator of $\h t{\fa}R$ in \cite[Theorem 2.2]{ba}. The following theorem generalizes his result to Cohen-Macaulay modules over a ring $R$ that is not necessarily local.
\begin{lem}[{\cite[Theorem 2.1]{ctt}}] Let $\fa$ be an ideal of $R$, and $M$ a finitely generated $R$-module such that $\fa M\neq M$. Then
$$\ass_R\left(\h {\grad{\fa}M}{\fa}M\right)=\{\fp\in V(\fa): \depth_{R_\fp}{M_\fp}=\grad{\fa}M\}. $$
\end{lem}
Let $M$ be an $R$-module. For $\fp\in\supp_R(M)$, the $M$-height of $\fp$, denoted $\Ht M{\fp}$, is the supremum of the lengths $t$ of strictly descending chains $$\fp=\fp_0\supset\fp_1\supset\cdots\supset\fp_t$$
of prime ideals in $\supp_R(M)$. For an arbitrary ideal $\fa$, we define the $M$-height of $\fa$, denoted $\Ht M{\fa}$, by
$$\Ht{M}{\fa}=\inf\{\Ht M{\fp}: \fp\in\supp_R(M)\cap \operatorname{V}(\fa)\}.$$
In particular, if $\supp_R(M)\cap \operatorname{V}(\fa)=\emptyset$, then $\Ht{M}{\fa}=\inf\emptyset=\infty$.
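To illustrate the definition in the situation of Example \ref{exa2}: there $\supp_R(M)=\operatorname{V}(\fp_1)\cup\operatorname{V}(\fp_2)$, so $\Ht M{\fm}=2$ while $\Ht R{\fm}=4$, and for the ideal $\fa=\langle X, Z\rangle$ one checks that $\Ht M{\fa}=1$.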
\begin{thm}\label{annh3} Let $\fa$ be an ideal of $R$, $M$ a non-zero finitely generated Cohen-Macaulay $R$-module, and
$0=M_1\cap\hdots\cap M_n$ with $\ass_R(M/M_i)=\{\fp_i\}$ for all $1\leq i\leq n$ a minimal primary decomposition of the zero submodule of $M$. Then, for each $t\in\N_0$,
$$\operatorname{Ann}_R\left(\h t{\fa}M\right)\subseteq\operatorname{Ann}_R\left({M}/{\bigcap_{\Ht{M}{\fa+\fp_i}=t } M_i}\right).$$
Moreover, if $M\neq\fa M $ and $t=\grad {\fa}M$, then equality holds.
\end{thm}
\begin{proof}
Set $\Sigma(t)=\{\fp\in\ass_R(M): \Ht{M}{\fa+\fp}=t\}.$ To prove the claimed inclusion, assume that $x\in R$ and $x\notin\operatorname{Ann}_R({M}/{\bigcap_{\fp_i\in\Sigma(t)} M_i})$; we show that $x\notin\operatorname{Ann}_R(\h t{\fa}M)$. Hence $xM\nsubseteq M_i$ for some $\fp_i\in\Sigma(t)$. Therefore $\ass_R(x(M/M_i))=\ass_R(M/M_i)=\{\fp_i\}$. Suppose that $\fq$ is a minimal prime ideal of $\fa+\fp_i$ such that $\Ht{M}{\fq}=\Ht{M}{\fa+\fp_i}=t$. Then
$$\ass_{R_\fq}(x(M/M_i)_\fq)=\ass_{R_\fq}(M/M_i)_\fq=\{\fp_iR_\fq\}.$$
Therefore
\begin{align*}
\sqrt{\fa R_\fq+\operatorname{Ann}_{R_\fq}(M/M_i)_\fq}
=\sqrt{\fa R_\fq+\fp_i R_\fq}=\fq R_\fq.
\end{align*}
Also, since $M_\fq$ is Cohen-Macaulay and $\fp_i R_\fq\in\ass_{R_\fq}(M_\fq)$, we have $\dim_{R_\fq}(R_\fq/\fp_i R_\fq)=\dim_{R_\fq}(M_\fq)=t$. Hence
$\dim_{R_\fq}\left((M/M_i)_\fq\right)=t$
and, by Theorem \ref{annh} in view of the Independence Theorem, we have
$$\operatorname{Ann}_{R_\fq}\left(\h t{\fa R_\fq}{(M/M_i)_\fq}\right)=\operatorname{Ann}_{R_\fq}\left(\h t{\fq R_\fq}{(M/M_i)_\fq}\right)=\operatorname{Ann}_{R_\fq}(M/M_i)_\fq.$$
Thus $x\h t{\fa R_\fq}{(M/M_i)_\fq}\neq 0$ because $x(M/M_i)_\fq\neq 0$.
On the other hand, the exact sequence $$0\rightarrow (M_i)_\fq\rightarrow M_\fq\rightarrow (M/M_i)_\fq\rightarrow 0$$
induces the epimorphism $\h t{\fa R_\fq}{M_\fq}\rightarrow\h t{\fa R_\fq}{(M/M_i)_\fq}.$ Thus $x\h t{\fa R_\fq}{M_\fq}\neq 0$ and consequently $x \h t{\fa}M\neq 0$. This proves the claimed inclusion.
Finally, assume $t=\grad{\fa}M$; we prove the reverse inclusion. Let $x\in R$ be such that $x\h t{\fa}M\neq 0$. Hence, there exists $\fq\in\ass_R(\h t{\fa}M)\subseteq \supp_R(M/\fa M)$ such that $x\h t{\fa R_\fq}{M_\fq}\neq 0$. By the above lemma, $\Ht {M}\fq=\Ht M\fa$, and hence $\fq$ is a minimal prime ideal of $\fa+\operatorname{Ann}_R(M)$. Since $M_\fq$ is a Cohen-Macaulay module of dimension $t$, Theorem \ref{annh} and the Independence Theorem yield
$$\operatorname{Ann}_{R_\fq}\left(\h t{\fa R_\fq}{M_\fq}\right)=\operatorname{Ann}_{R_\fq}\left(\h t{\fq R_\fq}{M_\fq}\right)=\operatorname{Ann}_{R_\fq}(M_\fq),$$
and so we have $xM_\fq\neq 0$.
If $\fq\in\supp_R\left({\bigcap_{\fp_i\in\Sigma(t)} M_i}\right)$, then there is a $\fp\in\ass_R\left({\bigcap_{\fp_i\in\Sigma(t)} M_i}\right)=\ass_R(M)\setminus \Sigma(t)$ such that $\fp\subseteq\fq$. Therefore $$t=\Ht M\fa\leq\Ht M{\fa+\fp}\leq\Ht M\fq=t.$$
Hence $\Ht M{\fa+\fp}=t$, and so $\fp\in\Sigma(t)$, a contradiction. Thus $\left({\bigcap_{\fp_i\in\Sigma(t)} M_i}\right)_\fq=0$. It follows that $xM_\fq\nsubseteq \left({\bigcap_{\fp_i\in\Sigma(t)} M_i}\right)_\fq$ and consequently $xM\nsubseteq {\bigcap_{\fp_i\in\Sigma(t)} M_i}$. This proves the claimed equality in the case where $t=\grad {\fa}M$ and completes the proof.
\end{proof}
\end{document} |
\begin{document}
\title{Interior H\"older continuity for singular-degenerate porous medium type equations with an application to a biofilm model}
\begin{abstract}
We show interior H\"older continuity for a class of quasi-linear degenerate reaction-diffusion equations.
The diffusion coefficient in the equation has a porous medium type degeneracy and its primitive has a singularity.
The reaction term is locally bounded except in zero.
The class of equations we analyse is motivated by a model that describes the growth of biofilms.
Our method is based on the original proof of interior H\"older continuity for the porous medium equation.
We do not restrict ourselves to solutions that are limits in the weak topology of a sequence of approximate continuous solutions of regularized problems, which is a common assumption.
\end{abstract}
\textbf{Keywords:} Quasilinear parabolic equations; Degenerate and singular diffusion, Regularity, Interior H\"older continuity, Biofilm
\textbf{MSC:} 35K57, 35K59, 35B65, 35K65, 35K67
\section*{Introduction}
We study the regularity of local solutions of equations of the form
\begin{equation}\label{eq:SD-PME}
\solA_t=\Delta \phi(\solA)+f({\,\cdot\,},\solA)\quad\text{in }\Omega\times(0,T],
\end{equation}
where $\Omega\subseteq{\mathbb{R}}^N$ is open, $0<T\leq\infty$ and $\solA$ takes values in $[0,1)$.
The key assumptions on $\phi$ are that it possesses a singularity $\phi(1)=\infty$ and a degeneracy $\phi'(0)=0$.
The degeneracy in $0$ is of the same type as observed in the porous medium equation $\solA_t=\Delta\solA^m$.
Therefore, we call \eqref{eq:SD-PME} a \emph{singular-degenerate equation of porous medium type}.
Our main result is that solutions of \eqref{eq:SD-PME} that are bounded away from $1$ are H\"older continuous in the interior of $\Omega\times(0,T]$.
Our motivation for the study of \eqref{eq:SD-PME} is the biofilm growth model introduced and numerically studied in \cite{Eberl2000} and rigorously analysed in \cite{Efendiev2009}.
Biofilms are communities of microorganisms in a moist environment in which cells stick to each other and often to a surface as well.
These cells are embedded in a slimy matrix of extracellular polymeric substances produced by the cells within the biofilm.
On the mesoscale, fully grown biofilms form complex, heterogeneously shaped structures that the model in \cite{Eberl2000} is capable of predicting.
The model consists of two reaction-diffusion equations that are coupled via the reaction terms.
The first equation describes the biomass density $M$.
It is a quasilinear equation with a diffusion coefficient that vanishes whenever the biomass density is zero and blows up as the biomass density approaches its maximal value.
The second equation is a classical semilinear equation describing the growth limiting nutrient concentration $C$.
The variables are normalized; $C$ is scaled with respect to the bulk concentration and $M$ with respect to the maximum biomass density.
The biofilm growth model is given by the system
\begin{equation}\label{eq:Biofilm}
\left\{\begin{aligned}
\partial_t M&=d_2\nabla\cdot\left(D(M)\nabla M\right)-K_2M+K_3\frac{CM}{K_4+C}&\\
\partial_t C&=d_1\Delta C-K_1\frac{CM}{K_4+C}&
\end{aligned}\right.
\quad \text{in}\ \Omega\times(0,T],
\end{equation}
where the biomass-dependent diffusion coefficient $D$ is given by
\begin{align*}
D(M)=\frac{M^b}{(1-M)^a}, \qquad a\geq 1, \ b>0.
\end{align*}
The constants $d_1$, $d_2$ and $K_4$ are strictly positive and $K_1$, $K_2$ and $K_3$ are non-negative.
The biomass diffusion coefficient $D$ has a degeneracy in $0$ known from the porous medium equation.
Moreover, $D$ becomes singular as $M$ approaches $1$ so that spatial spreading becomes very large whenever $M$ is close to $1$. This ensures, heuristically, that the biomass density remains bounded by its maximum value.
As a consequence, we do not need any boundedness assumption for the reaction terms.
Finally, observe that the equation for the biofilm density is included in the class of equations we study.
Indeed, by setting
\begin{equation}\label{eq:nonlin.Biofilm}
\phi(\solA)=\int_0^\solA\frac{z^b}{(1-z)^a}\mathrm{d} z
\end{equation}
we recognize that the first equation of \eqref{eq:Biofilm} is a particular case of \eqref{eq:SD-PME}.
In \eqref{eq:Biofilm} the actual biofilm is the subregion of $\Omega$ where $M$ is positive, that is,
\[
\Omega_{M}(t)=\{x\in\Omega\mid M(x,t)>0\}.
\]
This region and its boundary are well-defined provided that the function $M$ is continuous.
Therefore, the study of continuity of solutions is of fundamental importance for the viability of the biofilm model.
Moreover, H\"older regularity of solutions ensures the convergence of certain more efficient numerical schemes as well.
The main result of this paper is that any weak solution $\solA$ of \eqref{eq:SD-PME} is H\"older continuous in the interior of $\Omega\times(0,T]$, provided that $\solA$ is bounded away from $1$.
This builds upon our earlier work \cite{HissMull-Sonn22}, where we showed the well-posedness of \eqref{eq:SD-PME} and \eqref{eq:Biofilm} on bounded Lipschitz domains for initial and boundary value problems with mixed boundary conditions.
Indeed, we infer from our main result that the solutions of \eqref{eq:SD-PME} obtained in \cite{HissMull-Sonn22} are H\"older continuous in the interior of the domain.
The same holds for the first equation of \eqref{eq:Biofilm} implying interior H\"older continuity for $M$.
Moreover, the second equation of \eqref{eq:Biofilm} is non-degenerate so classical results on interior H\"older continuity can be applied to obtain interior H\"older continuity for $C$, see for example \cite{lady1968}.
\Cref{eq:SD-PME} falls within a larger class of equations for which a weaker regularity result is available.
By setting $\beta=\phi^{-1}$ we see that \eqref{eq:SD-PME} is an example of a Stefan problem, which is an equation of the form
\begin{equation} \label{eq:Stefan}
\partial_t\beta(\solB)=\Delta\solB + f({\,\cdot\,},\beta(\solB))
\end{equation}
for some non-decreasing $\beta:{\mathbb{R}}\to{\mathbb{R}}$.
Bounded classical solutions of \eqref{eq:Stefan} have a modulus of continuity in the interior of $\Omega_T$ depending only on $\norm{\solB}_{\infty}$ and the structure of the equation, see \cite{Sacks83}.
This modulus of continuity carries over to bounded weak solutions provided that they can be approximated by classical solutions.
The techniques we use to study \eqref{eq:SD-PME} can also be applied to the corresponding Stefan problem \eqref{eq:Stefan} to conclude interior H\"older continuity for $\solB=\phi(\solA)$.
Then, H\"older continuity of $\solA$ can be inferred provided that $\beta:=\phi^{-1}$ is H\"older continuous.
This was the original method applied to the porous medium equation, see \cite{DiBen1984}.
In fact, this is the natural approach, since a typical property of Stefan problems is that their solutions $\solB$ enjoy better continuity properties than $\solA=\beta(\solB)$, see \cite{Sacks83}.
Therefore, if H\"older continuity for $\phi(\solA)$ cannot be deduced, then it is unlikely that H\"older continuity for $\solA$ can be obtained.
This can be made rigorous if $\phi$ and $\beta:=\phi^{-1}$ are both H\"older continuous.
However, in our case $\phi$ is not H\"older continuous due to the singularity $\phi(1)=\infty$.
We remark that we chose to study the original equation \eqref{eq:SD-PME}, since it describes the quantity of interest in applications.
We show interior H\"older continuity for solutions $\solA$ of \eqref{eq:SD-PME} that are bounded away from $1$ and this requirement is to be expected in view of the interpretation of \eqref{eq:SD-PME} as a Stefan problem of the form \eqref{eq:Stefan}.
Indeed, observe that this requirement for $\solA$ is equivalent to stating that the solution $\solB=\phi(\solA)$ is bounded.
The latter condition is needed to obtain a modulus of continuity for $\solB$, see \cite{Sacks83}, which is a weaker property than H\"older continuity.
Moreover, the boundedness of $\solB$ assumed in \cite{Sacks83} is a necessary condition; it can be deduced from assuming a modulus of continuity on $\solB$.
Returning to the original equation \eqref{eq:SD-PME}, we note that the condition on $\solA$ is not restrictive either; in \cite{HissMull-Sonn22} we have shown that solutions are bounded away from $1$ provided that the initial data and Dirichlet boundary conditions satisfy this assumption, which is a standard assumption in the typical problems considered in applications.
Interior H\"older continuity of solutions of non-degenerate parabolic equations is well-understood.
Moser adapted the techniques of De Giorgi for uniformly elliptic linear equations to uniformly parabolic linear equations by studying the oscillation of solutions in a family of shrinking space-time cylinders reflecting the scaling invariance of the equation, see \cite{Mos60,Mos64}.
These methods were subsequently generalized by Lady{\ifmmode\check{z}\else\v{z}\fi}enskaja, Solonnikov and Ural'ceva to quasi-linear non-degenerate parabolic equations, see \cite{lady1968}.
Our study of regularity of solutions of \eqref{eq:SD-PME} uses so-called \emph{intrinsic scaling} techniques.
This method was originally developed by DiBenedetto and Friedman in order to prove interior H\"older continuity for the $p$-Laplace equation and the porous medium equation, see \cite{DiBen1984}.
These techniques are widely used, see for example \cite{DiBenedetto1993,DiBenedetto2012,Urb08} and the references therein.
The key idea of intrinsic scaling is to change the scaling of the aforementioned space-time cylinders in a manner that depends on the solution itself.
Although some technical details are needed, the method allows us to carry over the techniques for non-degenerate parabolic equations to degenerate equations.
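To convey the idea in the model case $\solA_t=\Delta\solA^m$ (a heuristic only): writing the equation as $\solA_t=\mathrm{div}(m\solA^{m-1}\nabla\solA)$, the diffusivity at solution values of order $\omega$, where $\omega$ denotes the local oscillation of the solution, is comparable to $\omega^{m-1}$, so the natural time length attached to a ball of radius $R$ is of order $\omega^{1-m}R^2$ rather than $R^2$. Working, up to constants and with conventions that vary between references, in space-time cylinders scaled in this intrinsic way, the equation behaves like a non-degenerate one.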
In the study of the regularity of solutions of porous medium type equations it is often assumed that solutions can be constructed as weak limits of sufficiently smooth solutions to a regularized problem, see for example \cite{DiBenedetto1983,DiBenedetto-Vespri1995}, or it is assumed that the solution is a classical solution, see \cite{Sacks83}.
This is done to justify some of the computations and it is always stressed that the obtained modulus of continuity solely relies on the data and is independent of the approximation.
This assumption is not considered to be restrictive in view of typical existence proofs and is therefore often omitted, for instance, see \cite{DiBen1984,Urb08}.
On the other hand, an important class of solutions is constructed by a time discretization and Galerkin approximation scheme, see \cite{Alt-Luck1983}, which is needed when dealing with mixed Neumann-Dirichlet boundary conditions on Lipschitz domains.
The well-posedness proof in \cite{HissMull-Sonn22} is based on these methods.
For this class of solutions it is not immediately clear that the approximation assumption holds, since it does not follow from the existence proof presented in \cite{Alt-Luck1983} in a natural manner.
It can be shown that there exists a sequence of approximate continuous solutions, which would be sufficient for our purposes.
Nevertheless, this approach relies on the well-posedness of an appropriate initial-boundary value problem rather than on the definition of local solutions alone.
In this paper, we opt for a self-contained approach and ask for slightly more time regularity in our solution concept than usual in the literature on regularity theory, see \cite{DiBenedetto1993,DiBenedetto2012,lady1968,Liao19,Urb08}.
We assume that locally the time derivative of a solution is given as a bounded linear functional on the space of test functions.
It allows us to prove a chain rule for the term involving the time derivative used to obtain the necessary estimates.
In particular, we do not assume that solutions are the limit of sufficiently regular solutions of appropriate approximate problems.
Our assumption is not restrictive and is in line with the solution concept used in \cite{Alt-Luck1983} and \cite{HissMull-Sonn22}.
Interestingly, the assumption that solutions are weak limits of more regular solutions is not needed when studying the $p$-Laplace equation $\solA_t=\mathrm{div}(\abs{\nabla\solA}^{p-2}\nabla\solA)$.
Indeed, the natural functional space used in the solution concept is $L^p(0,T;W^{1,p}(\Omega))$.
For this space smoothing techniques such as Steklov averaging apply in a straightforward manner, because the gradient of the solution exists.
This does not hold for porous medium type equations, since only the existence of the gradient of $\phi(\solA)$ is assumed.
On the reaction term in \eqref{eq:SD-PME} we impose that it is locally bounded with respect to $\solA \in (0,\infty)$ and allow for a singularity in zero.
In particular, we ask that $\norm{f({\,\cdot\,},z)}_\infty \leq L z^{-m_0}$, $L\geq 0$, for all $z\in[0,1)$ for a certain $m_0\geq 0$ depending on the structure of the degeneracy of $\phi$ in zero.
Motivated by the application to the biofilm growth model and to simplify the analysis, we had first imposed that the reaction term is bounded on the interval $[0,1)$.
We were able to show interior H\"older continuity under this assumption.
Later, we observed that our proof allowed for even more general reaction terms that might be singular in $0$, so we include those in our result.
It differs from standard assumptions in the literature on regularity theory, see \cite{DiBen1984,Sacks83,Urb08,DiBenedetto2012}.
We can show that the more general solution concept used in \cite{DiBenedetto2012} has a time derivative that can be interpreted as a bounded linear functional and is therefore included in our solution concept, provided that $f({\,\cdot\,},\solA)\in L^2_{\mathrm{loc}}(\Omega\times (0,T))$.
For details, see Proposition 3.19 in \cite{HissMull-Sonn22}.
Our proof follows closely the original proof of interior H\"older continuity for the model case $\solA_t=\Delta\solA^m$ in \cite{DiBen1984}.
Still, we do need to modify the proof to accommodate the reaction term.
For instance, we replace the typical `oscillation is large' estimate $\omega^{m-1}\geq R^\varepsilon$ by $\omega^m\geq R$, see \eqref{eq:oscillation.Really.Large} below.
Consequently, we also change some of the parameters in the subsequent iterative scheme.
Again, note that we do not rewrite the equation as a Stefan problem and prove H\"older continuity of $\solB=\phi(\solA)$, as was done in \cite{DiBen1984}.
Instead, we work with \Cref{eq:SD-PME} itself.
This leads to some difficulty in obtaining local energy estimates near the degeneracy in $\solA=0$.
We solve this issue in the same manner as was done in \cite{Urb08}, Chapter 6.
There is a different approach to obtain H\"older continuity for degenerate equations using \emph{expansion of positivity}, see \cite{Liao19}.
The main feature of this method is that neither the logarithmic integral estimate nor the analysis of two alternatives in the key steps of the original proof are needed.
The expansion of positivity is a fundamental ingredient in proving a Harnack inequality, see \cite{DiBenedetto2012}.
We refrain from using these techniques so that our proof stays self-contained.
Furthermore, it is also not clear whether this approach allows for the reaction term we are considering.
More recently, intrinsic scaling has been applied successfully to doubly nonlinear parabolic equations whose prototype is $\partial_t (\abs{\solA}^{p-2}\solA) = \mathrm{div}(\abs{\nabla \solA}^{p-2} \nabla\solA)$, $p>1$, see \cite{BOG21,BOG21Part2,Liao22}.
This equation is a combination of the porous medium equation and the $p$-Laplace equation.
The expansion of positivity plays a fundamental role in the proofs.
The outline of this paper is as follows.
In \Cref{sect:MainHypothesis.and.THM} the main hypotheses, the solution concept and the main result on interior H\"older continuity are stated.
\Cref{sect:Geometry.Holder} introduces some further notation and a geometric setting in which the cornerstone of the proof of the main result is given, the so-called \emph{De Giorgi-type Lemma}.
Based on this lemma, we prove H\"older continuity of solutions via an iterative scheme of intrinsically scaled shrinking space-time cylinders.
The next two sections are dedicated to proving the De Giorgi-type Lemma, which is where the main technicalities lie.
In particular, \Cref{sect:Int.est.and.Aux} covers interior integral estimates and some auxiliary technical statements and \Cref{sect:Proof.of.DeGiorgiLemma} contains the actual proof.
\section{Assumptions and main result}\label{sect:MainHypothesis.and.THM}
Let $\Omega\subseteq {\mathbb{R}}^N$ be open, $0<T\leq\infty$ and write $\Omega_T:=\Omega\times(0,T]$.
The function $\phi:[0,1)\to[0,\infty)$ satisfies the structural assumptions:
\newcounter{counterHypotheses}
\begin{enumerate}[label=\upshape(H\arabic*),ref=\upshape H\arabic*]
\item $\phi$ is continuous and strictly increasing; \label{itm:nonlin.basic.assumption}
\item $\phi$ is surjective; \label{itm:nonlin.blow-up.assumption}
\item $\phi\in C^1([0,1))$, $\phi'>0$ in $(0,1)$ and a \emph{porous-medium type degeneracy} holds, that is, $\phi'$ satisfies
\[
c_1z^{m-1}\leq \phi'(z)\leq c_2z^{m-1}
\]
for all $z\in [0,\varepsilon]$ for certain constants $c_1,c_2>0$, $\varepsilon\in(0,1)$ and $m>1$. \label{itm:nonlin.PME-like.degeneracy}
\setcounter{counterHypotheses}{\value{enumi}}
\end{enumerate}
Heuristically, \eqref{itm:nonlin.blow-up.assumption} encodes the singularity $\phi(1)=\infty$ and \eqref{itm:nonlin.PME-like.degeneracy} the degeneracy $\phi'(0)=0$.
The conditions \eqref{itm:nonlin.basic.assumption} and \eqref{itm:nonlin.blow-up.assumption} are used to prove well-posedness in \cite{HissMull-Sonn22}.
The newly added assumption \eqref{itm:nonlin.PME-like.degeneracy} provides a connection to the porous medium equation and it is instrumental to prove H\"older continuity.
Of course, \eqref{itm:nonlin.basic.assumption} is implied by \eqref{itm:nonlin.PME-like.degeneracy}, but we mention both assumptions, because they play different roles in the study of \eqref{eq:SD-PME}.
It is important to point out that $\phi'\geq \lambda > 0$ on $[\varepsilon,1)$ for some $\lambda>0$.
Indeed, \eqref{itm:nonlin.blow-up.assumption} implies that $\lim_{z\to1}\phi(z)=\infty$, and therefore $\lim_{z\to 1} \phi'(z) = \infty$ as well so that $\phi'$ attains a minimum in $[\varepsilon,1)$.
This minimum cannot be $0$ due to the assumption $\phi'>0$ in $(0,1)$.
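In particular, the biofilm nonlinearity \eqref{eq:nonlin.Biofilm} satisfies \eqref{itm:nonlin.basic.assumption}--\eqref{itm:nonlin.PME-like.degeneracy}: the derivative $\phi'(z)=z^b(1-z)^{-a}$ is positive on $(0,1)$, $\phi(z)\to\infty$ as $z\to 1$ because $a\geq 1$, and for $z\in[0,\varepsilon]$ we have
\[
z^b\leq\phi'(z)\leq(1-\varepsilon)^{-a}z^b,
\]
so \eqref{itm:nonlin.PME-like.degeneracy} holds with $m=b+1>1$, $c_1=1$ and $c_2=(1-\varepsilon)^{-a}$ for any choice of $\varepsilon\in(0,1)$.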
On the reaction term $f:\Omega_T\times [0,1)\to{\mathbb{R}}$ we impose:
\newcounter{counterHypothesesReactionTerms}
\begin{enumerate}[label=\upshape(R\arabic*),ref=\upshape R\arabic*]
\item\label{itm:source.Lipschitz.assumption} $f$ is measurable and there exists a $L\geq 0$ such that
\[
\norm{f({\,\cdot\,},z)}_{L^\infty(\Omega_T)}\leq L z^{-{m_0}} \quad \text{for all}\ z\in[0,1),
\]
where $m_0\in[0,m)$ with $m$ as in \eqref{itm:nonlin.PME-like.degeneracy}.
\setcounter{counterHypothesesReactionTerms}{\value{enumi}}
\end{enumerate}
It is important to remark that \eqref{itm:source.Lipschitz.assumption} is not restrictive.
Indeed, solutions take values in the interval $[0,1)$, hence reaction terms such as
\[
f(x,t,\solA)=g(x,t) \solA \quad \text{and} \quad f(\solA) = \solA^p(1-\solA)^q + c
\]
are included for any bounded function $g$ and constants $p>-m$, $q\geq 0$ and $c\in{\mathbb{R}}$.
The first example appears in the equation for $M$ of the biofilm growth model \eqref{eq:Biofilm} (provided that the function $C$ is known).
The latter example is used in the Porous-Fisher equation $\solA_t=\Delta \solA^m +\solA(1-\solA)$, studied e.g.\ in \cite{mccue2019hole}.
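To check \eqref{itm:source.Lipschitz.assumption} in these two cases: for $f(x,t,\solA)=g(x,t)\solA$ we have $\abs{f(x,t,z)}\leq\norm{g}_{L^\infty(\Omega_T)}$ for all $z\in[0,1)$, so \eqref{itm:source.Lipschitz.assumption} holds with $m_0=0$ and $L=\norm{g}_{L^\infty(\Omega_T)}$; for $f(\solA)=\solA^p(1-\solA)^q+c$ and $z\in(0,1)$ we estimate
\[
\abs{f(z)}\leq z^{p}+\abs{c}\leq(1+\abs{c})\,z^{-\max\{-p,0\}},
\]
so \eqref{itm:source.Lipschitz.assumption} holds with $m_0=\max\{-p,0\}<m$ and $L=1+\abs{c}$.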
We employ the following notation.
Given a measurable set $K\subseteq\Omega$, let $\pinprod{{\,\cdot\,}}{{\,\cdot\,}}$ denote the pairing of $H^{-1}(K)$ with $H^1_0(K)$ and let $\inprod{{\,\cdot\,}}{{\,\cdot\,}}$ denote the $L^2(K)$-inner product.
We use the following solution concept.
\begin{definition}\label{def:local.solution}
A measurable function $\solA:\Omega_T\to[0,1)$ is called a local solution of \eqref{eq:SD-PME} if for any compact subset $K \subset \Omega$ we have that
\begin{enumerate}[label=(\roman*),font=\itshape]
\item $\solA\in W^{1,2}_{\mathrm{loc}}(0,T;H^{-1}({K}))$, $\phi(\solA)\in L^2_{\mathrm{loc}}(0,T;H^1({K}))$ and $f({\,\cdot\,},\solA)\in L_{{\mathrm{loc}}}^2(0,T;L^2({K}))$, and
\item the identity
\begin{equation}\label{eq:SD-PME.local.solution.including.time-derivative.id}
\displayindent0pt
\displaywidth\textwidth
\begin{aligned}
\pinprod{\solA_t}{\eta}+\inprod{\nabla\phi(\solA)}{\nabla\eta}=\inprod{f({\,\cdot\,},\solA)}{\eta}
\end{aligned}
\end{equation}
holds a.e.\ in $(0,T)$ for all $\eta\in H^1_0({K})$.
\end{enumerate}
\end{definition}
Define the \emph{parabolic boundary} of $\Omega_T$ by $\Gamma=\overline{\Omega}\times\{0\}\cup \partial\Omega\times(0,T)$ and the \emph{parabolic distance} of a compact set $K\subset \Omega_T$ to $\Gamma$ by
\[
\mathrm{dist}(K;\Gamma)=\inf\left\{\abs{x-y}+\abs{t-s}^{\frac{1}{2}}\, : \, (x,t)\in K,\ (y,s)\in \Gamma\right\}.
\]
The following statement is the main result of this paper.
\begin{theorem}[Interior H\"older continuity of solutions]\label{thm:holder}
Let \eqref{itm:nonlin.basic.assumption}, \eqref{itm:nonlin.blow-up.assumption}, \eqref{itm:nonlin.PME-like.degeneracy} and \eqref{itm:source.Lipschitz.assumption} be satisfied and let $\solA$ be a local solution of \eqref{eq:SD-PME} that is bounded away from $1$, that is, there exists a $\mu \in(0,1)$ such that $\solA\leq 1-\mu$.
Then, there exist constants $C\geq 0$ and $\alpha\in(0,1)$ depending only on $N$, $c_1$, $c_2$, $\varepsilon$, $m$, $L$, $m_0$, ${M}:=\max_{0\leq z\leq 1-\mu}\phi'(z)$ and $\lambda:=\min_{\varepsilon\leq z<1}\phi'(z)$ such that
\begin{equation}\label{eq:holder}
\abs{\solA(x_0,t_0)-\solA(x_1,t_1)}\leq C\left(\frac{\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}}{d(K;\Gamma)}\right)^\alpha
\end{equation}
for all $(x_0,t_0),(x_1,t_1)\in K$ for any compact $K\subset\Omega_T$, where $d(K;\Gamma):=\min \left\{ \mathrm{dist}(K;\Gamma),1 \right\}$.
\end{theorem}
\begin{remark}\label{rem:reaction.term.cond}
Let us provide some remarks on the assumption \eqref{itm:source.Lipschitz.assumption} for the reaction term.
\begin{itemize}
\item The condition \eqref{itm:source.Lipschitz.assumption} has not been considered in the literature to the author's knowledge.
The condition requires some changes in the classical proof for the prototype porous medium equation $\solA_t=\Delta\solA^m$ in \cite{DiBen1984}.
It is more general than what we would need in view of the application to the biofilm model \eqref{eq:Biofilm} we have in mind.
\item Conditions on the reaction terms for the porous medium equation that have been covered include the assumption
\[
\abs{f({\,\cdot\,},z)}\leq \varphi_1
\]
for some non-negative function $\varphi_1\in L^{q}(0,T;L^p(\Omega))$, see page 48 in \cite{Urb08}, and the assumption
\[
\abs{f({\,\cdot\,},z)}\leq \abs{z}^m\varphi_2,
\]
for some non-negative function $\varphi_2$ such that $\varphi_2^2\in L^{q}(0,T;L^p(\Omega))$, see page 261 in \cite{DiBenedetto2012}.
In both cases $p$ and $q$ satisfy $\frac{1}{q}+\frac{N}{2p}\in (0,1)$.
\item The condition \eqref{itm:source.Lipschitz.assumption} is not covered by our previous work on the well-posedness of \eqref{eq:SD-PME}, see \cite{HissMull-Sonn22}.
However, the condition that $f$ is bounded on $\Omega_T\times[0,1)$ is included.
In particular, the biofilm growth model \eqref{eq:Biofilm} is covered by both the well-posedness result and our current result on H\"older continuity.
\end{itemize}
\end{remark}
\begin{remark}\label{rem:extending.PME-like.degeneracy}
Let us discuss some additional observations regarding the hypothesis \eqref{itm:nonlin.PME-like.degeneracy}.
\begin{itemize}
\item Without loss of generality, we may assume that $1-\mu\leq\varepsilon$ in \eqref{itm:nonlin.PME-like.degeneracy} so that the solution $\solA$ in \Cref{thm:holder} satisfies
\begin{equation}\label{eq:extending.PME-like.degeneracy}
c_1 \solA^{m-1}\leq \phi'(\solA)\leq c_2 \solA^{m-1}.
\end{equation}
Indeed, suppose $\varepsilon < 1-\mu$ and observe that the function $\phi'$ satisfies $\phi'\leq {M}$ on $[\varepsilon , 1-\mu]$ and $\phi'\geq \lambda$ on $[\varepsilon,1)$.
Then there exist constants $d_1$, $d_2>0$ such that $d_1z^{m-1}\leq\phi'(z)\leq d_2 z^{m-1}$ for all $z\in [\varepsilon,1-\mu]$.
For instance, one may take $d_1=\lambda$ and $d_2={M}\varepsilon^{1-m}$.
We pick new constants $\tilde{c}_1=\min\{c_1,d_1\}$ and $\tilde{c}_2=\max\{c_2,d_2\}$, which depend on $c_1$, $c_2$, $\varepsilon$, ${M}$ and $\lambda$.
Now \eqref{itm:nonlin.PME-like.degeneracy} holds with $c_1$, $c_2$ and $\varepsilon$ replaced by $\tilde{c}_1$, $\tilde{c}_2$ and $1-\mu$, respectively.
\item We point out that the previous argument is the only place where $\varepsilon$, ${M}$ and $\lambda$ play a role in the proof of \Cref{thm:holder}.
From now on we always assume that $\varepsilon\geq 1-\mu$ holds in \eqref{itm:nonlin.PME-like.degeneracy} and we no longer mention the dependency of the constants on $\varepsilon$, ${M}$ and $\lambda$.
The symbols $\varepsilon$ and $\lambda$ are used again below in different contexts, which is justified by this remark.
\item Observe that \eqref{itm:nonlin.PME-like.degeneracy} implies that
\begin{equation}\label{eq:nonlin.PME-like.degeneracy.integrated}
c_1z^{m}\leq m\phi(z)\leq c_2z^{m}.
\end{equation}
This can be seen by integrating the estimate over $z$, multiplying by $m$ and observing that \eqref{itm:nonlin.basic.assumption} and \eqref{itm:nonlin.blow-up.assumption} imply $\phi(0)=0$; see the short computation after this list.
\end{itemize}
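In detail: for $z$ in the range covered by \eqref{itm:nonlin.PME-like.degeneracy}, integrating that estimate and using $\phi(0)=0$ gives
\[
\frac{c_1}{m}z^{m}\leq \phi(z)=\int_0^z\phi'(\tilde{z})\,\mathrm{d}\tilde{z}\leq \frac{c_2}{m}z^{m},
\]
and multiplying by $m$ yields \eqref{eq:nonlin.PME-like.degeneracy.integrated}.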
\end{remark}
\begin{remark}\label{rem:defined.all.time}
It is important to point out that $\solA(t)\in L^1(K)$ is well-defined for all $t\in(0,T)$.
Indeed, for any $t_1,t_2\in(0,T)$, $t_1<t_2$, we have that $\solA\in W^{1,2}(t_1,t_2;H^{-1}(K))\subset C([t_1,t_2];H^{-1}(K))$.
Therefore, $\solA(t)\in L^1(K)\cap L^\infty(K) \subset L^2(K) \subset H^{-1}(K)$ is uniquely determined.
Moreover, $\solA\in L^\infty(t_1,t_2;L^1(K))$ with $\norm{\solA(t)}_{L^1(K)}\leq \abs{K}$ for all $t\in(0,T)$ due to $0\leq\solA<1$.
\end{remark}
\begin{remark}
Let us also comment on the use of the parabolic distance.
\begin{itemize}
\item In the literature, see \cite{DiBenedetto2012} or \cite{Urb08}, the intrinsic parabolic $m$-distance is introduced to quantify the dependency of the constants of \Cref{thm:holder} on the compact subset $K\subset\Omega_T$ and $\norm{\solA}_{L^\infty(\Omega_T)}$.
In our case we know that $0\leq\solA\leq 1$, so the standard parabolic distance suffices.
\item In the denominator of \eqref{eq:holder} we use $d(K;\Gamma)$ instead of $\mathrm{dist}(K;\Gamma)$.
This is due to the lower order term in the equation.
To justify the estimates in the computations, we need to bound the size of the appropriate space-time cylinders.
Without the reaction term the full parabolic distance $\mathrm{dist}(K;\Gamma)$ can be used and a Liouville-type result can be deduced.
However, this is not possible in our case.
\end{itemize}
\end{remark}
\begin{remark}
We mention a few additional points concerning the condition that the solution has to be bounded away from $1$.
\begin{itemize}
\item We can replace it by the requirement that the solution is \emph{locally} bounded away from $1$, that is, for every compact subset $K\subset\Omega_T$ there exists a $\mu$ such that $\solA\leq 1-\mu$ in $K$.
In this case, $C$ and $\alpha$ in \Cref{thm:holder} do depend on $K$.
\item The assumption is not restrictive and builds upon our earlier work \cite{HissMull-Sonn22}.
There we showed that $\solA \leq 1-\mu$ provided that the initial data $\solA_0$ satisfies $\solA_0\leq 1-\theta$, where $\mu$ depends on $\theta$.
This argument relies on the blow-up behaviour of $\phi$ encoded in \eqref{itm:nonlin.blow-up.assumption} and on the domain being bounded and having a Lipschitz boundary.
Now, the interior H\"older continuity of $\solA$ only depends on the initial data $\solA_0$ through $\mu$.
\item The assumption implies that the singularity of $\phi$ in \eqref{itm:nonlin.blow-up.assumption} is not attained.
Indeed, we assume that $\solA\leq 1-\mu$ so the behaviour of $\phi$ in $(1-\mu,1)$ does not matter, and therefore we could remove \eqref{itm:nonlin.blow-up.assumption} provided that we impose that $\phi'\geq \lambda>0$ on $[\varepsilon,1-\mu]$ for some $\lambda>0$ and $\phi(0)=0$.
Consequently, our result also holds for degenerate equations without a singularity such as the Porous-Fisher equation, see \cite{mccue2019hole}.
However, we aim to prove H\"older continuity for the solutions obtained in our earlier work on well-posedness, see \cite{HissMull-Sonn22}, where \eqref{itm:nonlin.blow-up.assumption} is a key ingredient to obtain the bound $\solA \leq 1-\mu$.
Moreover, \eqref{itm:nonlin.blow-up.assumption} plays a fundamental role to obtain the uniform bound $\solA<1$ and it is necessary in order to formulate \eqref{eq:SD-PME} as a Stefan problem of the form \eqref{eq:Stefan}, since otherwise $\phi$ might not be invertible.
Therefore, we keep the hypothesis \eqref{itm:nonlin.blow-up.assumption}.
\end{itemize}
\end{remark}
We use the following notation throughout.
Given $k\in{\mathbb{R}}$, we write
\[
\left[\solA < k \right] = \left\{(x,t)\in \Omega_T \, :\, \solA(x,t)<k \right\}\quad \text{and} \quad \left[\solA(t) < k\right] = \left\{ x \in \Omega \, :\, \solA(x,t)<k \right\}
\]
for fixed $t\in(0,T)$ and we extend the notation for $\leq$, $>$ and $\geq$ in an obvious manner.
Both sets are defined up to a subset of measure zero, the latter for all $t\in(0,T)$ by \Cref{rem:defined.all.time}, hence their measures are well-defined and their characteristic functions exist almost everywhere.
\section{Geometry for the equation and proof of the main result}\label{sect:Geometry.Holder}
In this section we prove \Cref{thm:holder} based on an intermediary result called the De Giorgi-type Lemma, which we prove in \Cref{sect:Proof.of.DeGiorgiLemma}.
The outline of this section is as follows.
First, we discuss the method of \emph{intrinsic scaling} heuristically.
Then, we rigorously define the intrinsic geometrical scaling tailored for our equation, where we introduce the necessary notation and assumptions concerning intrinsically scaled space-time cylinders.
After this we state the De Giorgi-type Lemma.
Next, we provide an iterative scheme in which we consider a shrinking sequence of cylinders and show, by applying the De Giorgi-type Lemma iteratively, that the oscillation of $\solA$ in each cylinder decreases proportionally to its size.
In the next step of the proof of \Cref{thm:holder} we consider the case where the De Giorgi-type Lemma cannot be applied.
In this situation we can use classical estimates for non-degenerate equations.
Finally, we combine both cases in one final Corollary, which we then use to prove \Cref{thm:holder}.
Let us begin with a short informal discussion on intrinsic scaling.
The proof of interior H\"older continuity for non-degenerate parabolic equations relies on estimates of the oscillation of the solution $\solA$ in a family of shrinking space-time cylinders $\{Q_n\}_{n=0}^\infty$, see \cite{lady1968}.
These cylinders are roughly of the form $Q_n=B_{R_n}(x_0) \times (-R_n^2+t_0,t_0)$ and their space-time scaling reflects the structure of the equation.
For instance, the heat equation $\solA_t=\Delta\solA$ is scale invariant under coordinate transformations of the type $(x,t)\mapsto (\lambda x,\lambda^2 t)$, $\lambda>0$, that is, if $\solA$ is a solution of the equation then $\solA_\lambda$ given by $\solA_\lambda(x,t)=\solA(\lambda x,\lambda^2 t)$ is a solution as well.
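Indeed (a one-line check, recorded only for the reader's convenience),
\[
\partial_t\solA_\lambda(x,t)=\lambda^2\solA_t(\lambda x,\lambda^2 t)=\lambda^2\Delta\solA(\lambda x,\lambda^2 t)=\Delta\solA_\lambda(x,t).
\]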
The method of shrinking cylinders with this parabolic scaling fails to hold for degenerate equations such as the porous medium equation $\solA_t=\Delta \solA^m$.
However, we can rewrite this equation as
\[
\frac{1}{m} \frac{1}{\solA^{m-1}}(\solA^m)_t=\Delta \solA^m,
\]
which is a non-degenerate equation except for the factor $\solA^{1-m}$.
Therefore, we should consider space-time cylinders of the form $Q_n=B_{R_n}(x_0) \times (-\solA^{1-m}R^2_n+t_0,t_0)$ instead.
These cylinders are defined in terms of the solution itself so we say that they are intrinsically scaled.
The techniques for non-degenerate equations can be applied along this set of shrinking cylinders, since we now consider the equation in its own geometry.
This method is only relevant if $\solA$ is small, because then the equation becomes degenerate.
Consequently, the intrinsically scaled cylinders are stretched along the temporal axis to \emph{accommodate the degeneracy}.
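To make the time scale explicit in the model case (this normalisation serves only as motivation and is not used in the proofs below; the function $v$ is introduced solely for this aside): if $\solA$ solves $\solA_t=\Delta\solA^m$ and we set $v(y,s):=\omega^{-1}\solA(x_0+Ry,\,t_0+\omega^{1-m}R^2 s)$, then
\[
v_s=\omega^{-1}\,\omega^{1-m}R^2\,\solA_t=\omega^{-m}R^2\,\Delta_x\solA^m=\Delta_y v^m,
\]
so on a cylinder of spatial radius $R$ and temporal length $\omega^{1-m}R^2$ on which $\solA$ is of size $\omega$, the rescaled function $v$ is of size one and solves the normalised porous medium equation.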
\Cref{fig:intrinsicscalingsolplotwithcylinders} highlights the difference in the types of scaling.
Here, $\solA$ is a solution of \eqref{eq:SD-PME} and we consider points $(x_0,t_0)$ and $(\tilde{x}_0,\tilde{t}_0)$.
We pick $(x_0,t_0)$ near the degeneracy and $(\tilde{x}_0,\tilde{t}_0)$ away from the degeneracy.
In the first case we use intrinsically scaled cylinders and in the latter case we use classically scaled cylinders.
\begin{figure}
\caption{Intrinsically scaled and classically scaled cylinders.}
\label{fig:intrinsicscalingsolplotwithcylinders}
\end{figure}
Let us now provide the intrinsic scaling in a rigorous manner.
First, assume that $\solA$ is a local solution of \eqref{eq:SD-PME} that is bounded away from $1$.
Let $(x_0,t_0)\in \Omega_T$; by translating the axes we may and do assume that $(x_0,t_0)=(0,0)$.
Note that only the reaction term $f$ is not invariant under such a translation, but since we only rely on global estimates of this term, the assumption is harmless.
Given $R_1,R_2>0$ we define the space-time cylinder
\begin{equation*}
Q(R_1,R_2)=B_{R_1}(0)\times(-R_2,0),
\end{equation*}
and given $\omega,R>0$ we define the \emph{intrinsically scaled cylinder}
\[
Q_\omega(R):=Q(R,\omega^{1-m}R^2).
\]
For now we assume that
\begin{equation}\label{eq:cond.intrinsic.scaled.cylinder.in.domain}
Q_\omega(R)\subseteq Q_\omega(2R)\subseteq\Omega_T.
\end{equation}
We define $\mu_-,\mu_+\in[0,1-\mu]$ and the essential oscillation of $\solA$ by
\[
\mu_-:=\essinf_{Q_\omega(R)}\solA,\quad \mu_+:=\esssup_{Q_\omega(R)}\solA,\quad\essosc_{Q_\omega(R)}\solA:=\mu_+-\mu_-.
\]
We assume that
\begin{equation}\label{eq:cond.oscillation.recursion.hypothesis}
\essosc_{Q_\omega(R)}\solA\leq \omega,
\end{equation}
so that $\omega$ can be viewed as (an upper bound of) the essential oscillation of $\solA$ in $Q_\omega(R)$.
Further, we assume that
\begin{equation}\label{eq:cond.inf.small}
\mu_-\leq \frac{\omega}{4}
\end{equation}
holds as well.
Note that \eqref{eq:cond.inf.small} implies that $\mu_+\leq \omega+\mu_-\leq \frac{5}{4}\omega$.
\begin{remark}
Heuristically, \eqref{eq:cond.inf.small} means that $\solA$ takes values close to $0$ and this specific estimate gives us a means to quantify this in terms of the oscillation.
We use it primarily to estimate $\mu_+$ in terms of $\omega$.
If \eqref{eq:cond.inf.small} does not hold, then $\solA$ is bounded away from $0$ in $Q_\omega(R)$.
In this case we are effectively dealing with a non-degenerate equation and we can invoke standard estimates, see the text below \Cref{prop:iterative.scheme}.
\end{remark}
We use the notation
\[
Q^{\nu_0}_\omega(R)=Q\left(R,\frac{\nu_0}{2}\omega^{1-m}R^2\right),
\]
for a given $\nu_0\in(0,1)$.
\begin{proposition}[De Giorgi-type Lemma]\label{lem:DeGiorgi-type}
Assume that \eqref{itm:nonlin.basic.assumption}, \eqref{itm:nonlin.blow-up.assumption}, \eqref{itm:nonlin.PME-like.degeneracy} and \eqref{itm:source.Lipschitz.assumption} are satisfied and let $\solA$ be a local solution of \eqref{eq:SD-PME} that is bounded away from $1$.
Then, there exist constants $R_{\mathrm{max}},\nu_0\in(0,1)$ and $n_0\in{\mathbb{N}}$ depending only on $N$, $c_1$, $c_2$, $m$, $L$ and $m_0$ such that the following holds.
If $R\in(0,R_{\mathrm{max}}]$ and $\omega>0$ are such that \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis}, \eqref{eq:cond.inf.small} and
\begin{equation}\label{eq:oscillation.Really.Large}
\omega\geq R^{\frac{1}{m}}
\end{equation}
are satisfied, then we have the dichotomy:
\begin{enumerate}[label=(\roman*),font=\itshape]
\item If
\begin{equation}\label{eq:DeGiorgi-type.alternative.I}
\abs{Q_\omega(R)\cap\left[\solA<\mu_-+\frac{\omega}{2}\right]}<\nu_0\abs{Q_\omega(R)},
\end{equation}
then $\solA>\mu_-+\frac{\omega}{4}$ a.e.\ in $Q_\omega(\frac{R}{2})$.
\item If
\begin{equation}\label{eq:DeGiorgi-type.alternative.II}
\abs{Q_\omega(R)\cap\left[\solA\geq\mu_-+\frac{\omega}{2}\right]}\leq(1-\nu_0)\abs{Q_\omega(R)},
\end{equation}
then $\solA<\mu_-+\left(1-\frac{1}{2^{n_0}}\right)\omega$ a.e.\ in $Q^{\nu_0}_\omega(\frac{R}{2})$.
\end{enumerate}
\end{proposition}
\Cref{sect:Proof.of.DeGiorgiLemma} is dedicated to proving \Cref{lem:DeGiorgi-type}.
The constants $R_{\mathrm{max}}$ and $\nu_0$ are defined explicitly by \eqref{eq:define.Rmax} and \eqref{eq:def.nu_0}, respectively, in terms of $N$, $c_1$, $c_2$, $m$, $L$ and $m_0$.
\begin{remark}
Comparing to previous results, see \cite{Urb08}, we add condition \eqref{eq:oscillation.Really.Large} in \Cref{lem:DeGiorgi-type}.
We also introduce $R_{\mathrm{max}}$ to ensure that $R$ is small enough in a certain sense.
Both are needed to derive suitable estimates for the reaction term in the proof of \Cref{lem:DeGiorgi-type}.
This is necessary due to the assumption \eqref{itm:source.Lipschitz.assumption}, which differs from the standard assumptions.
\end{remark}
\begin{remark}
\Cref{lem:DeGiorgi-type} provides a way to quantitatively improve the oscillation of the solution in a smaller cylinder, a role which is played by the Harnack inequality in the analysis of the heat equation.
Informally speaking, \Cref{lem:DeGiorgi-type} states that if $\solA$ mostly takes values in the upper half / lower half of $[\mu_-,\mu_+]$, then $\solA$ is bounded away from $\mu_-$ / from $\mu_+$, respectively, in a smaller cylinder.
\end{remark}
Assume \Cref{lem:DeGiorgi-type} is proven.
We set
\begin{equation}\label{eq:defining.improvement.of.oscillation.parameter}
{\eta_0} = \max\left\{ \frac{3}{4},1-\frac{1}{2^{n_0}} \right\}.
\end{equation}
By noting that $Q^{\nu_0}_\omega(R)\subset Q_\omega(R)$ we conclude
\begin{align}
\essosc_{Q^{\nu_0}_\omega(\frac{R}{2})}\solA&\leq\left\{\begin{array}{cl}
\mu_+-\mu_--\frac{\omega}{4} &\quad\text{if \eqref{eq:DeGiorgi-type.alternative.I} holds},\\
\mu_-+(1-\frac{1}{2^{n_0}})\omega-\mu_- &\quad\text{if \eqref{eq:DeGiorgi-type.alternative.II} holds}
\end{array}\right. \nonumber \\
&\leq {\eta_0} \omega \label{eq:improvement.of.oscillation.parameter.estimate}
\end{align}
for any $R\in(0,R_{\mathrm{max}}]$ and $\omega>0$ satisfying \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis}, \eqref{eq:cond.inf.small} and \eqref{eq:oscillation.Really.Large}.
Estimate \eqref{eq:improvement.of.oscillation.parameter.estimate} is the key implication of \Cref{lem:DeGiorgi-type}; it shows that the oscillation of $\solA$ is improved in a smaller cylinder.
We proceed with the proof of \Cref{thm:holder} by considering a decreasing sequence of numbers $\{\omega_n\}_{n=0}^\infty$ and of shrinking intrinsically scaled space-time cylinders $\{Q_n\}_{n=0}^\infty$ along which we can iteratively apply \Cref{lem:DeGiorgi-type}.
It allows us to describe the oscillation of $\solA$ in $Q_n$ in terms of the radius of $Q_n$, which is an essential ingredient to prove H\"older continuity.
For the starting cylinder we assume that \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis} and \eqref{eq:oscillation.Really.Large} hold.
Then, we show that \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis} and \eqref{eq:oscillation.Really.Large} are satisfied in each subsequent step of the iteration.
Only \eqref{eq:cond.inf.small} might fail to hold at some step, in which case we have to stop the iterative procedure.
This leads to the split into two cases in the following proposition.
\begin{proposition}[The iterative scheme]\label{prop:iterative.scheme}
Suppose $R_0\in(0,R_{\mathrm{max}}]$ and $\omega_0>0$ are such that \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis} and \eqref{eq:oscillation.Really.Large} hold (substituting $R=R_0$ and $\omega=\omega_0$).
Define sequences $\{R_n\}_{n=1}^\infty$ and $\{\omega_n\}_{n=1}^\infty$ recursively by
\[
R_{n+1}=aR_n,\quad\omega_{n+1}=\eta_0 \omega_n,
\]
where ${\eta_0}$ is given by \eqref{eq:defining.improvement.of.oscillation.parameter} and
\begin{equation}\label{eq:defining.proportionality.R_n}
a:=\frac{1}{2}\sqrt{\frac{\nu_0}{2}} \eta_0^{m},
\end{equation}
and define the cylinders $Q_n=Q_{\omega_n}(R_n)$.
Then at least one of the following statements holds.
\begin{enumerate}[label=(\roman*), font=\itshape]
\item The estimate
\begin{equation}\label{eq:oscillation.intrisinc.scales.with.cylinder}
\essosc_{Q_{\omega_n}(R_n)}\solA\leq \omega_0 \left(\frac{R_n}{R_0}\right)^\alpha
\end{equation}
holds for all $n\in{\mathbb{N}}$, where $\alpha\in(0,1)$ is given by
\begin{equation}\label{eq:definition.holder.exp}
\alpha=\frac{\log(\eta_0)}{\log(a)}.
\end{equation}
\item There exists an integer $n_*\in{\mathbb{N}}$ such that \eqref{eq:oscillation.intrisinc.scales.with.cylinder} holds for all $n\leq n_*$ and the estimate
\begin{equation}\label{eq:cond.inf.large}
\essinf_{Q_{{n_*}}}\solA\geq \frac{1}{4}\omega_{n_*}
\end{equation}
holds.
\end{enumerate}
\end{proposition}
\begin{proof}
By assumption, $\omega_0$ and $R_0$ satisfy \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis} and \eqref{eq:oscillation.Really.Large}.
Suppose that $\omega_0$ and $R_0$ satisfy \eqref{eq:cond.inf.small} as well, because otherwise case \textit{(ii)} holds and there is nothing left to prove.
First, we show that \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis} and \eqref{eq:oscillation.Really.Large} hold for $\omega_1$ and $R_1$.
Certainly $R_1\leq R_0\leq R_{\mathrm{max}}$ and we observe that
\[
\omega_1^{1-m}R_1^2\leq ({\eta_0}\omega_0)^{1-m}a^2R_0^2 = \frac{\nu_0}{2} \eta_0^{m+1} \omega_0^{1-m}\left(\frac{R_0}{2}\right)^2\leq \frac{\nu_0}{2}\omega_0^{1-m}\left(\frac{R_0}{2}\right)^2,
\]
hence $Q_{1}\subseteq Q^{\nu_0}_{\omega_0}\left(\frac{R_0}{2}\right)\subseteq Q_{0}$.
Replacing $R_0$ and $R_1$ by $2R_0$ and $2R_1$ shows that $Q_{\omega_1}(2R_1)\subset Q_{\omega_0}(2R_0)$ and therefore \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain} holds.
Additionally, by \Cref{lem:DeGiorgi-type} we know that
\[
\essosc_{Q^{\nu_0}_{\omega_0}\left(\frac{R_0}{2}\right)}\solA \stackrel{\eqref{eq:improvement.of.oscillation.parameter.estimate}}{\leq} \eta_0\omega_0\leq\omega_1,
\]
so $\essosc_{Q_{1}}\solA\leq \omega_{1}$, i.e.\ \eqref{eq:cond.oscillation.recursion.hypothesis} holds.
Finally, we compute that
\[
R_1=\frac{1}{2}\sqrt{\frac{\nu_0}{2}} \eta_0^{m}R_0\leq \eta_0^{m}R_0 \stackrel{\eqref{eq:oscillation.Really.Large}}{\leq} \eta_0^{m}\omega_0^{m}=\omega_1^{m},
\]
so \eqref{eq:oscillation.Really.Large} is valid for $\omega_1$ and $R_1$ as well.
We can continue the arguments recursively as long as \eqref{eq:cond.inf.small} holds at each step, where we redefine
\[
\mu_-=\essinf_{Q_{n-1}}\solA
\]
at each step $n$.
If this is the case, then we obtain the inclusions $Q_n\subset Q_{n-1}\subset\dots\subset Q_0\subset\Omega_T$ and the estimates $\essosc_{Q_{n}}\solA\leq \omega_n$ for each $n$ and it follows that
\[
\essosc_{Q_{n}}\solA\leq \omega_n=\eta_0^n\omega_0=\omega_0\left(\frac{R_n}{R_0}\right)^\alpha.
\]
Here, we used that $\left(R_n/R_0\right)^{\alpha}=a^{n\alpha}= \eta_0^n$.
Indeed, $\alpha$ is given by \eqref{eq:definition.holder.exp}, hence $a^\alpha=\eta_0$.
Moreover, $\alpha\in(0,1)$ because $\frac{3}{4}\leq \eta_0 < 1$ and $a=\frac{1}{2}\sqrt{\frac{\nu_0}{2}}\, \eta_0^{m}\leq \frac{3}{4}\eta_0<\eta_0$, so that $0<\abs{\log(\eta_0)}<\abs{\log(a)}$.
This concludes case \textit{(i)} and the first statement of case \textit{(ii)}.
Suppose that \eqref{eq:cond.inf.small} fails to hold at step $n_*+1$, then
\[
\essinf_{Q_{n_*}}\solA\geq \frac{1}{4}\omega_{n_*},
\]
i.e.\ case \textit{(ii)} holds.
\end{proof}
\begin{remark}
The factor $a$ given in \eqref{eq:defining.proportionality.R_n} is proportional to $\eta_0^m$, which differs from the factor used for the porous medium equation, where it is proportional to $\eta_0^{m-1}$.
This difference is due to the assumption \eqref{itm:source.Lipschitz.assumption} on the reaction term.
Indeed, we define $a$ by \eqref{eq:defining.proportionality.R_n} to make sure \eqref{eq:oscillation.Really.Large} is satisfied in each step of the iterative scheme and condition \eqref{eq:oscillation.Really.Large} is introduced due to this growth assumption.
We could have picked $a$ proportional to $\eta_0^{m-1}$ and checked whether \eqref{eq:oscillation.Really.Large} holds at each step, but \eqref{eq:defining.proportionality.R_n} simplifies the arguments.
\end{remark}
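For orientation only, we record a purely illustrative computation; the values of $\nu_0$ and $n_0$ used here are hypothetical, since the actual ones are only fixed in \Cref{sect:Proof.of.DeGiorgiLemma}. If, say, $\nu_0=\tfrac{1}{2}$, $n_0=2$ and $m=2$, then \eqref{eq:defining.improvement.of.oscillation.parameter} and \eqref{eq:defining.proportionality.R_n} give $\eta_0=\tfrac{3}{4}$ and $a=\tfrac{1}{2}\sqrt{\tfrac{1}{4}}\left(\tfrac{3}{4}\right)^{2}=\tfrac{9}{64}$, so that \eqref{eq:definition.holder.exp} yields
\[
\alpha=\frac{\log(3/4)}{\log(9/64)}\approx 0.15,
\]
illustrating that the H\"older exponent produced by the iterative scheme is typically rather small.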
To obtain the appropriate estimates for the oscillation of $\solA$ in $Q_{n_*}$ in case \textit{(ii)} of \Cref{prop:iterative.scheme} we use known results for uniformly parabolic quasi-linear equations to replace the role of \Cref{lem:DeGiorgi-type}.
This is possible, because \eqref{eq:cond.inf.large} implies that $\solA$ is bounded away from $0$ in $Q_{n_*}$, so \eqref{eq:SD-PME} is a non-degenerate quasilinear parabolic equation in $Q_{n_*}$.
First, we introduce a change of variables (stretching the space variable) to rewrite \eqref{eq:SD-PME} as a uniformly parabolic equation whose ellipticity condition does not depend on $\solA$ (through $\mu_{\pm}$ or $\omega_{n_*}$).
We set
\[
\bar{x}= \frac{x}{\sigma^{1/2} \rho},\quad \bar{t}= \frac{t}{\sigma},
\]
where $\rho^2:=\mu_-^{m-1}$, $\sigma:=\mu_-^{m_0}$ and $\mu_-:=\essinf_{Q_{n_*}}\solA\geq \frac{1}{4}\omega_{n_*}$ due to \eqref{eq:cond.inf.large}.
We also write $\bar{\solA}(\bar{x},\bar{t})=\solA(x,t)=\solA(\sigma^{1/2}\rho\bar{x},\sigma \bar{t})$, $\bar{f}(\bar{x},\bar{t},z)=f(x,t,z)=f(\sigma^{1/2}\rho\bar{x},\sigma \bar{t},z)$ and $\bar{R}:= \sigma^{-1/2}\rho^{-1} {R}$, so that the cylinder $Q(R_1,R_2)$ transforms into $\bar{Q}(R_1,R_2):=B_{\bar{R}_1}\times(-\sigma^{-1} R_2,0)$; in particular, $Q_\omega(R)=Q(R,\omega^{1-m}R^2)$ transforms into $B_{\bar{R}}\times(-\omega^{1-m}\rho^{2}\bar{R}^{2},0)$.
Moreover, \eqref{eq:SD-PME} transforms into $\sigma^{-1}\bar{\solA}_{\bar{t}}=\sigma^{-1}\rho^{-2}\Delta_{\bar{x}}\phi(\bar{\solA})+\bar{f}({\,\cdot\,},\bar{\solA})$, which we multiply by $\sigma$ to obtain the equation
\begin{equation}\label{eq:SD-PME.stretched.space.variable}
\bar{\solA}_{\bar{t}} = \frac{1}{\rho^2} \Delta_{\bar{x}}\phi(\bar{\solA})+\sigma\bar{f}({\,\cdot\,},\bar{\solA}).
\end{equation}
The uniformly parabolic equation \eqref{eq:SD-PME.stretched.space.variable} has an ellipticity condition that only depends on $c_1$, $c_2$ and $m$, because
\[
\frac{\phi'(\bar{\solA})}{\rho^2}\geq c_1\frac{\mu_-^{m-1}}{\rho^2}= c_1,
\]
and
\begin{align*}
\frac{\phi'(\bar{\solA})}{\rho^2}&\leq c_2\frac{\mu_+^{m-1}}{\rho^2}\leq c_2 5^{m-1}\frac{\mu_-^{m-1}}{\rho^2}=5^{m-1}c_2,
\end{align*}
where we used that $\mu_+:=\esssup_{Q_{n_*}}\solA\leq \omega_{n_*}+\mu_-\leq 5\mu_-$ due to \eqref{eq:cond.inf.large}.
Moreover, the reaction term in \eqref{eq:SD-PME.stretched.space.variable} is uniformly bounded, because
\[
|\sigma{\bar{f}({\,\cdot\,},\bar{\solA})} | \leq L \sigma\mu_-^{-m_0}=L
\]
due to \eqref{itm:source.Lipschitz.assumption}.
We can now apply known results on uniformly parabolic quasi-linear equations to \eqref{eq:SD-PME.stretched.space.variable}.
In particular, from \cite{lady1968}, Lemma 7.4 of Chapter II, page 119, we obtain the following.
\begin{lemma}\label{lem:Ladyzenskaja.non.deg.improv.osc}
Suppose statement \textit{(ii)} of \Cref{prop:iterative.scheme} holds.
Then there exist constants $\theta,\eta\in(0,1)$ depending only on $N$, $c_1$, $c_2$, $m$ and $L$ such that for any $\bar{R}>0$ with
\[
Q^\theta(\bar{R}):=Q(\bar{R},\theta\bar{R}^2)\subseteq \bar{Q}_{n_*}
\]
at least one of the following estimates is valid: either
\begin{equation}\label{eq:Ladyzenskaja.non.deg.improv.osc.case1}
\essosc_{Q^\theta(\frac{\bar{R}}{4})} \bar{\solA} \leq 2(1-\eta)^{-1} \bar{R}
\end{equation}
or
\begin{equation}\label{eq:Ladyzenskaja.non.deg.improv.osc.case2}
\essosc_{Q^\theta(\frac{\bar{R}}{4})} \bar{\solA} \leq \eta \essosc_{Q^\theta(\bar{R})}\bar{\solA}.
\end{equation}
Moreover, $\theta$ may be chosen arbitrarily small (in particular, $\theta\leq 4^{1-m}$).
\end{lemma}
\begin{remark}
Historically, the intrinsic scaling introduced in \cite{DiBen1984} was inspired by \Cref{lem:Ladyzenskaja.non.deg.improv.osc} and it leads to such a result for degenerate parabolic equations.
Indeed, the De Giorgi-type Lemma and the subsequent estimate \eqref{eq:improvement.of.oscillation.parameter.estimate} provide the analogous result, where \eqref{eq:improvement.of.oscillation.parameter.estimate} and the negation of \eqref{eq:oscillation.Really.Large} correspond to \eqref{eq:Ladyzenskaja.non.deg.improv.osc.case2} and \eqref{eq:Ladyzenskaja.non.deg.improv.osc.case1}, respectively.
\end{remark}
We consider an iterative scheme for case \textit{(ii)} of \Cref{prop:iterative.scheme}, where \Cref{lem:Ladyzenskaja.non.deg.improv.osc} replaces the role of \Cref{lem:DeGiorgi-type}.
We take $R_0$ and $\omega_0$ as in \Cref{lem:DeGiorgi-type} and we define decreasing sequences $\{R_k\}_{k=1}^\infty$ and $\{\omega_k\}_{k=1}^\infty$ recursively by
\begin{equation}\label{eq:sequences.R_k.and.omega_k}
R_{k+1}=\left\{ \begin{array}{rl}
aR_{k} &\quad\text{if}\ k< n_*,\\
\frac{1}{4}R_k &\quad\text{if}\ k\geq n_*,
\end{array}\right.
\quad
\omega_{k+1}=\left\{ \begin{array}{rl}
\eta_0 \omega_{k} &\quad\text{if}\ k< n_*,\\
\eta \omega_k &\quad\text{if}\ k\geq n_*,
\end{array}\right.
\end{equation}
where $a$ and $\eta_0$ are given by \eqref{eq:defining.proportionality.R_n} and \eqref{eq:defining.improvement.of.oscillation.parameter}, respectively.
Recall that $\rho^2=\mu_-^{m-1}\geq 4^{1-m}\omega_{n_*}^{m-1}$ due to \eqref{eq:cond.inf.large}, so
\[
\bar{Q}_{n_*}=B_{\bar{R}_{n_*}}\times(-\omega_{n_*}^{1-m}\rho^2\bar{R}_{n_*}^2,0)\supset B_{\bar{R}_{n_*}}\times(-4^{1-m}\bar{R}_{n_*}^2,0)\supset Q^\theta(\bar{R}_{n_*}),
\]
where we assume that $\theta\leq 4^{1-m}$.
Therefore, if \eqref{eq:Ladyzenskaja.non.deg.improv.osc.case2} holds for $\bar{R}_{n_*}$, then
\[
\essosc_{Q^\theta(\bar{R}_{n_*+1})}\bar{\solA}\leq\eta\essosc_{\bar{Q}_{n_*}}\bar{\solA}\leq \eta\omega_{n_*}=\omega_{n_*+1}.
\]
If \eqref{eq:Ladyzenskaja.non.deg.improv.osc.case1} holds for $\bar{R}_k$, $k\geq n_*$, then
\[
\essosc_{Q^\theta(\bar{R}_{{k+1}})}\bar{\solA}\leq 2(1-\eta)^{-1} \bar{R}_{k}\leq \frac{8}{\sigma^{1/2} \rho}(1-\eta)^{-1}R_{k+1}\leq C R_{n_*}^{-\frac{m-1+m_0}{2m}}R_{k+1}\leq C R_{k+1}^{1-\frac{m-1+m_0}{2m}},
\]
where we used that $\sigma^{1/2}\rho\geq 2^{-(m-1+m_0)}\omega_{n_*}^{({m-1+m_0})/{2}}\geq 2^{-(m-1+m_0)}R_{n_*}^{({m-1+m_0})/{2m}}$ due to \eqref{eq:cond.inf.large} and \eqref{eq:oscillation.Really.Large} and where we absorbed some factors into the constant $C$.
Let us define $\tilde{\alpha}=\min\left\{ -\log_4(\eta), 1-\frac{m-1+m_0}{2m}, \alpha \right\}$ and from now on replace $\alpha$ by $\tilde{\alpha}$; since $\tilde{\alpha}\leq\alpha$ and $R_n\leq R_0$, estimate \eqref{eq:oscillation.intrisinc.scales.with.cylinder} remains valid with this smaller exponent.
Combining \eqref{eq:Ladyzenskaja.non.deg.improv.osc.case1} and \eqref{eq:Ladyzenskaja.non.deg.improv.osc.case2}, for $k>n_*$ we now have
\begin{align*}
\essosc_{Q^\theta(\bar{R}_{k})} \bar{\solA} &\leq \max\left\{CR_k^{\alpha},\omega_k\right\}=\max\left\{CR_k^{\alpha},\eta^{k-n_*}\eta_0^{n_*}\omega_0\right\}\\
&\leq\max\left\{ CR_k^{\alpha},\omega_0\left(\frac{{R}_k}{{R}_{n_*}}\right)^\alpha\left(\frac{{R}_{n_*}}{{R}_{0}}\right)^\alpha\right\} \leq C \left( \frac{{R}_k}{{R}_{0}} \right)^\alpha, \label{eq:Ladyzenskaja.non.deg.improv.osc.max}
\end{align*}
where we used that $\eta^{k-n_*}\leq 4^{-\alpha(k-n_*)}=\left({R_k}/{R_{n_*}}\right)^{\alpha}$, that $\eta_0^{n_*}\leq\left({R_{n_*}}/{R_{0}}\right)^{\alpha}$ by \eqref{eq:definition.holder.exp} and the choice of $\tilde{\alpha}$, and that $R_0\leq 1$; the remaining bounded factors, including $\omega_0$ (which equals $1$ in the application in the proof of \Cref{thm:holder}), are absorbed into the constant $C$.
Transforming $Q^\theta(\bar{R}_{k})$ back to the original coordinates gives $Q(R_k,\theta
\rho^{-2}R_k^2)$, which contains $Q^\theta(R_k)$ because $\rho^2=\mu_-^{m-1}\leq 1$.
Therefore,
\begin{equation}
\essosc_{Q^\theta({R}_{k})} {\solA} \leq C(N,c_1,c_2,m,L,m_0)\left(\frac{{R}_k}{{R}_{0}}\right)^\alpha \label{eq:Ladyzenskaja.non.deg.oscillation.scales.with.cylinder}
\end{equation}
for all $k>n_*$.
Moreover, observe that $Q^\theta({R}_{k})\subset Q_{\omega_k}(R_k)$ for any $k\in{\mathbb{N}}$, so \eqref{eq:oscillation.intrisinc.scales.with.cylinder} implies that \eqref{eq:Ladyzenskaja.non.deg.oscillation.scales.with.cylinder} is valid for all $k\in{\mathbb{N}}$.
We collect both cases of \Cref{prop:iterative.scheme} and our conclusion \eqref{eq:Ladyzenskaja.non.deg.oscillation.scales.with.cylinder} in the following result.
\begin{corollary}\label{prop:iterative.scheme.Continuous.version}
Suppose $R\in(0,R_{\mathrm{max}}]$ and $\omega>0$ are such that \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis} and \eqref{eq:oscillation.Really.Large} hold.
Then at least one of the following estimates holds:
\begin{align}
\essosc_{Q_\omega(r)}\solA&\leq C(N,c_1,c_2,m,L,m_0)\,\omega\left(\frac{r}{R}\right)^{\alpha} & & \text{for all}\ r\in[0,R], \label{eq:oscillation.bounded.r/R} \\
\essosc_{Q^\theta(r)}{\solA}&\leq C(N,c_1,c_2,m,L,m_0)\,\left(\frac{r}{R}\right)^\alpha & & \text{for all}\ r\in[0,R], \label{eq:oscillation.bounded.r/R.non-degen}
\end{align}
where $\theta$ is given in \Cref{lem:Ladyzenskaja.non.deg.improv.osc} and $\alpha>0$ depends on $N$, $c_1$, $c_2$, $m$, $L$ and $m_0$.
\end{corollary}
\begin{proof}
The statements hold trivially for $r=0$, so assume $r>0$.
Set $R_0=R$, $\omega_0=\omega$, recall that $a$ and $\eta_0$ are defined in \eqref{eq:defining.proportionality.R_n} and \eqref{eq:defining.improvement.of.oscillation.parameter}, respectively, and define $R_n$, $\omega_n$ and $Q_n$ as in \Cref{prop:iterative.scheme}.
Next, let $r\in (0,R]$ and pick $n\in{\mathbb{N}}$ such that $R_{n+1}\leq r\leq R_n$.
Note that $Q_{\omega}(r)\subseteq Q_{n}$, because $\omega\geq \omega_n$ and $r\leq R_n$, so $\omega^{1-m}r^2\leq \omega_{n}^{1-m}R_{n}^2$.
Suppose case \textit{(i)} of \Cref{prop:iterative.scheme} holds, then it follows that
\[
\essosc_{Q_{\omega}(r)}\solA\leq \essosc_{Q_{n}}\solA\leq\omega\left(\frac{R_{n}}{R}\right)^\alpha=a^{-\alpha} \omega \left(\frac{R_{n+1}}{R}\right)^\alpha\leq C\,\omega \left(\frac{r}{R}\right)^\alpha,
\]
where $C=a^{-\alpha}$.
Suppose case \textit{(ii)} of \Cref{prop:iterative.scheme} holds.
Define $R_k$ and $\omega_k$ by \eqref{eq:sequences.R_k.and.omega_k}.
Pick $k\in {\mathbb{N}}$ such that $R_{k+1}\leq r< R_{k}$ and note that $Q^\theta(r)\subseteq Q^\theta(R_k)$.
Then \eqref{eq:Ladyzenskaja.non.deg.oscillation.scales.with.cylinder} implies
\begin{equation*}
\essosc_{Q^\theta(r)}{\solA}\leq \essosc_{Q^\theta(R_k)}{\solA} \leq C\,\left(\frac{R_k}{R}\right)^\alpha\leq C\,\left(\frac{R_{k+1}}{R}\right)^\alpha\leq C\,\left(\frac{r}{R}\right)^\alpha,
\end{equation*}
where we absorbed the factor $\max\left\{ a^{-\alpha},4^{\alpha} \right\}$ into the constant $C$.
\end{proof}
Now we are in a position to prove the main result.
\begin{proof}[Proof of \Cref{thm:holder}]
Let $K\subset\Omega_T$ be compact and let $(x_0,t_0),(x_1,t_1)\in K$.
Assume $t_0\neq t_1$ and suppose, without loss of generality, that $t_0>t_1$.
Let $R>0$ and consider the cylinder
\[
Q=(x_0,t_0)+Q(R,R^2).
\]
Let us pick
\[
2R=R_{\mathrm{max}}\cdot d(K;\Gamma)
\]
so that $Q\subset (x_0,t_0)+Q(2R,(2R)^2)\subseteq\Omega_T$ and $R\in(0,R_{\mathrm{max}}]$.
We consider the following two cases.
$\bullet$ Suppose $(x_1,t_1)\notin Q$, then
\[
\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}\geq R,
\]
hence
\begin{align*}
\abs{\solA(x_0,t_0)-\solA(x_1,t_1)}
&\leq 2\leq 2 \left(\frac{\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}}{R}\right)^\alpha = 2\left(\frac{2}{R_{\mathrm{max}}}\right)^{\alpha}\left(\frac{\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}}{d(K;\Gamma)}\right)^\alpha\\
&\leq C\left(\frac{\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}}{d(K;\Gamma)}\right)^\alpha,
\end{align*}
where $C={4} / {R_{\mathrm{max}}}$.
$\bullet$ Suppose $(x_1,t_1)\in Q$.
Set $\omega=1$, then, by the choice of $R$, \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain} is satisfied for the cylinder $(x_0,t_0)+Q_\omega(R)=Q$.
Moreover, \eqref{eq:cond.oscillation.recursion.hypothesis} and \eqref{eq:oscillation.Really.Large} hold trivially, so the hypotheses of \Cref{prop:iterative.scheme.Continuous.version} are satisfied.
Suppose \eqref{eq:oscillation.bounded.r/R} holds.
Define $r\in(0,R]$ by
\[
r=\max\left\{\abs{x_0-x_1},\abs{t_0-t_1}^{\frac{1}{2}}\right\},
\]
then $(x_1,t_1)\in (x_0,t_0)+Q_\omega(r)\subseteq Q$ and \eqref{eq:oscillation.bounded.r/R} implies that
\begin{align*}
\abs{\solA(x_0,t_0)-\solA(x_1,t_1)}&\leq \essosc_{Q_{\omega}(r)} \solA\leq C\left(\frac{r}{R}\right)^\alpha\leq C\left(\frac{\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}}{d(K;\Gamma)}\right)^{\alpha}.
\end{align*}
Suppose \eqref{eq:oscillation.bounded.r/R.non-degen} holds.
If $(x_1,t_1)\notin Q^\theta(R)$, then
\[
\abs{x_0-x_1}+\theta^{-\frac{1}{2}}\abs{t_0-t_1}^\frac{1}{2}\geq R,
\]
hence, as before,
\begin{align*}
\abs{\solA(x_0,t_0)-\solA(x_1,t_1)} &\leq 2\leq 2 \left(\frac{\abs{x_0-x_1}+\theta^{-\frac{1}{2}}\abs{t_0-t_1}^{\frac{1}{2}}}{R}\right)^{\alpha}\leq 2\theta^{-\frac{\alpha}{2}}\left(\frac{\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}}{R}\right)^{\alpha}\\
&\leq \frac{4}{\theta^{\frac{1}{2}}R_{\mathrm{max}}}\left(\frac{\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}}{d(K;\Gamma)}\right)^{\alpha}.
\end{align*}
Suppose $(x_1,t_1)\in Q^\theta(R)$ and define $r\in(0,R]$ by
\[
r=\max\left\{\abs{x_0-x_1},\theta^{-\frac{1}{2}}\abs{t_0-t_1}^{\frac{1}{2}}\right\},
\]
then $(x_1,t_1)\in (x_0,t_0)+Q^\theta(r)\subseteq Q$ and \eqref{eq:oscillation.bounded.r/R.non-degen} implies
\begin{align*}
\abs{\solA(x_0,t_0)-\solA(x_1,t_1)}&\leq \essosc_{Q^\theta(r)} \solA\leq C\left(\frac{r}{R}\right)^\alpha\leq C\left(\frac{\abs{x_0-x_1}+\abs{t_0-t_1}^{\frac{1}{2}}}{d(K;\Gamma)}\right)^{\alpha},
\end{align*}
where we absorbed $\theta^{-\frac{\alpha}{2}}$ into $C$.
Finally, suppose $t_0=t_1$.
Pick any compact set $K'\supset K$ such that $d(K';\Gamma)\geq \frac{1}{2}d(K;\Gamma)$ and such that $\{x_0,x_1\}\times[t_0,t_0+\tau]\subset K'$ for some $\tau>0$.
Pick a sequence $\{\tau_n\}_{n=1}^\infty$ in $(t_0,t_0+\tau]$ with $\tau_n\to t_0$.
The estimate \eqref{eq:holder} has already been established for each pair $(x,t),(y,s)\in K'$ with $t\neq s$, hence $\lim_{n\to\infty}\solA(x_i,\tau_n)=\solA(x_i,t_0)$ for $i=0,1$.
In the limit $n\to\infty$ we obtain \eqref{eq:holder} for $(x_0,t_0), (x_1,t_0)\in K$, where we absorbed the factor $2^\alpha$ into $C$.
\end{proof}
\section{Interior integral estimates and technical lemmas}\label{sect:Int.est.and.Aux}
In this section we prove auxiliary lemmas and discuss known technical results that we use in the proof of \Cref{lem:DeGiorgi-type} in \Cref{sect:Proof.of.DeGiorgiLemma}.
First, we prove a chain rule for the time derivative satisfied by any solution of \eqref{eq:SD-PME} in the sense of \Cref{def:local.solution}.
Next, we show two interior energy estimates and an interior logarithmic estimate involving truncations of the solution, where we use the aforementioned chain rule.
Finally, we discuss an additional functional space with a related estimate, a Poincar\'e type inequality incorporating truncated functions and a technical lemma on the convergence of certain sequences.
\subsection{Chain rule}
We prove a chain rule for the term involving the time derivative in \eqref{eq:SD-PME}.
The method we use is similar to the one used to prove Proposition 3.10 in \cite{HissMull-Sonn22}.
In the cited reference the initial data provides a key bound to pass to the limit, but here we can use the bound $\solA<1$ instead.
In this subsection we work with the original equation \eqref{eq:SD-PME}, i.e.\ we have not performed the translation of the axes mentioned in the third paragraph of \Cref{sect:Geometry.Holder}.
\begin{lemma}[Chain rule]\label{lem:chainrule_local_solution}
Let $\solA:\Omega_T\to[0,1)$ be a measurable function such that $\solA \in W^{1,2}_{\mathrm{loc}} (0,T;H^{-1}(K))$ and $\phi(\solA) \in L^2_{\mathrm{loc}}(0,T;H^1(K))$ for any compact subset $K\subset\Omega$.
Suppose $\psi:[0,\infty)\to {\mathbb{R}}$ is a continuous piece-wise continuously differentiable function with bounded derivative and let $\zeta\in C^\infty_c(\Omega\times{\mathbb{R}})$ be non-negative.
Then the mapping $t\mapsto \inprod{\Psi(\solA(t))}{\zeta^2(t)}$ is absolutely continuous on any compact subinterval of $(0,T)$ with
\[
\frac{\mathrm{d}}{\mathrm{d} t}\inprod{\Psi(\solA)}{\zeta^2}=\pinprod{\solA_t}{\psi(\phi(\solA))\zeta^2}+2\inprod{\Psi(\solA)}{\zeta\zeta_t}
\]
a.e.\ in $(0,T)$, where $\Psi(z):=\int_{l}^z\psi(\phi(\tilde{z}))\mathrm{d} \tilde{z}$ for an arbitrary but fixed ${l}\in [0,1]$.
\end{lemma}
\begin{proof}
Without loss of generality we may assume that $\psi(0)=0$, since the statement is linear in $\psi$.
Indeed, first suppose $\psi=c$ for some constant $c$, then $\Psi(\solA)=c(\solA-l)$ and \Cref{lem:chainrule_local_solution} is clearly valid.
In general, if $\psi(0)=c$, then define ${\psi}_0:=\psi-c$, which satisfies $\psi_0(0)=0$.
If we prove the proposition for $\psi_0$, then adding the constant function $c$ to $\psi_0$ shows that the result holds for $\psi$ as well.
First, suppose $\Psi$ is convex, i.e.\ $\psi$ is non-decreasing, then one readily checks that
\begin{equation}\label{eq:transformed.function.estimates}
\psi(\phi(z_2))(z_1-z_2)\leq\Psi(z_1)-\Psi(z_2)\leq \psi(\phi(z_1))(z_1-z_2)
\end{equation}
for all $z_1,z_2\in [0,1)$.
In particular, for $z_2=0$ we have that $0\leq \Psi(z)\leq \psi(\phi(z))z$ for any $z\in[0,1)$.
Moreover, $\psi'$ is bounded, so $\psi(\phi(\solA))\in L^2_{\mathrm{loc}}(0,T;H^1_{\mathrm{loc}}(\Omega))$ and therefore $\Psi(\solA)\in L^2_{\mathrm{loc}}(\Omega_T)$.
Let $[t_1,t_2]\subset (0,T)$ be a compact subinterval.
Given $0<h<t_1$, let $\solA^h$ denote the \emph{backward Steklov average}, that is, using the Bochner integral we define
\[
\solA^h(t):=\frac{1}{h}\int_{t-h}^{t}\solA(s)\mathrm{d} s\quad\text{for}\ t\in[t_1,t_2].
\]
From known results on Steklov averaging, see the Appendix in \cite{HissMull-Sonn22}, we have $\solA^h\to \solA$ in $H^1(t_1,t_2;H^{-1}({K}))$ as $h\to 0$ and $\solA^h\in H^1(t_1,t_2;L^2(K))$ with $\partial_t\solA^h(t)=\frac{1}{h}(\solA(t)-\solA(t-h))$, for any compact subset $K \subset \Omega$.
Using the first estimate of \eqref{eq:transformed.function.estimates} and then the second estimate we obtain
\begin{align*}
\inprod{\partial_t\solA^h(t)}{\psi(\phi(\solA(t)))\zeta^2(t)}
&=\frac{1}{h}\inprod{\solA(t)-\solA(t-h)}{\psi(\phi(\solA(t)))\zeta^2(t)}\\
&\geq \frac{1}{h}\inprod{\Psi(\solA(t))-\Psi(\solA(t-h))}{\zeta^2(t)}\\
&=\inprod{\partial_t[\Psi(\solA(t))]^h}{\zeta^2(t)}\\
&\geq \inprod{\partial_t\solA^h(t)}{\psi(\phi(\solA(t-h)))\zeta^2(t)}.
\end{align*}
We also note that $\inprod{\partial_t[\Psi(\solA)]^h(t)}{\zeta^2(t)}=\frac{\mathrm{d}}{\mathrm{d} t}\inprod{[\Psi(\solA)]^h(t)}{\zeta^2(t)}-2\inprod{[\Psi(\solA)]^h(t)}{[\zeta\zeta_t](t)}$ by the product rule for the weak derivative in Bochner spaces.
Integrate the estimates above over $t\in [t_1,t_2]$ and observe that the left- and right-hand side converge to $\int_{t_1}^{t_2}\pinprod{\solA_t}{\psi(\phi(\solA))\zeta^2}$ as $h\to 0$, since
$\psi(\phi(\solA))\zeta^2\in L^2(t_1,t_2;H^1(\Omega))$ and
\[
\int_{t_1}^{t_2}\norm{\psi(\phi(\solA(t-h)))\zeta^2(t)-\psi(\phi(\solA(t)))\zeta^2(t)}_{H^1(\Omega)}\mathrm{d} t\to 0
\]
as $h\to 0$.
It follows that
\begin{align*}
\left[\inprod{\Psi(\solA(t))}{\zeta^2(t)}\right]_{t_1}^{t_2}=\int_{t_1}^{t_2}\Bigl(\pinprod{\partial_t\solA}{\psi(\phi(\solA))\zeta^2}+2\inprod{\Psi(\solA)}{\zeta\zeta_t}\Bigr),
\end{align*}
where we used that $[\Psi(\solA)]^h\to \Psi(\solA)$ in $L^2_{\mathrm{loc}}(\Omega_T)$ as $h\to 0$.
This proves \Cref{lem:chainrule_local_solution} provided that $\Psi$ is convex.
Finally, suppose $\Psi$ is not convex.
Define $\theta_0(z)=\int_0^z(\Psi'')_-$ and $\Psi_0(z)=\int_0^z\theta_0$ for $z\in{\mathbb{R}}$; then $\Psi_0''=(\Psi'')_-\geq 0$ and $[\Psi+\Psi_0]''=(\Psi'')_+\geq 0$, so both functions are convex.
Clearly $\Psi_0$ has the same regularity as $\Psi$ and $\Psi_0'(0)=0$, so $\Psi_0$ and $\Psi+\Psi_0$ satisfy the hypotheses of the convex case.
We can therefore apply the previous arguments to $\Psi_0$ and to $\Psi+\Psi_0$; since the statement of \Cref{lem:chainrule_local_solution} is linear with respect to $\Psi$, this completes the proof.
\end{proof}
\subsection{Interior integral estimates}\label{subsect:Int.int.estimates}
In this subsection we show that local solutions of \eqref{eq:SD-PME} satisfy three interior integral estimates.
Our arguments rely on the chain rule proved in \Cref{lem:chainrule_local_solution}.
Let $\solA$ be a local solution of \eqref{eq:SD-PME}, let $(x_0,t_0)\in \Omega_T$ and perform the translation of the axes mentioned in the third paragraph of \Cref{sect:Geometry.Holder} so that $(x_0,t_0)=(0,0)$.
We assume that $\omega>0$ and $R\in(0,1]$ are such that \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain} holds.
Further, we use the following notation:
\[
{\bar{t}_0} := -\omega^{1-m}R^2,\quad {\solA_{(l)}} := \max\left\{ \solA , l \right\} \quad \text{and} \quad {\solA^{(l)}}:=\min\left\{ \solA , l \right\},
\]
given $l\in{\mathbb{R}}$.
We also write $\solA_{+}=\max\left\{ \solA,0 \right\}$ and $\solA_{-}=(-\solA)_{+}$.
From now on $\zeta$ will always denote a cut-off function, so in particular
\[
0\leq\zeta\leq 1.
\]
Observe that for $l>0$ we have that ${\solA_{(l)}}\in L^2_{\mathrm{loc}}(0,T;H^1_{\mathrm{loc}}(\Omega))$.
Indeed, ${\solA_{(l)}}=\beta\left(\max\left\{ \phi(\solA),\phi(l) \right\}\right)$, where $\beta:=\phi^{-1}$, and $\beta$ restricted to $[\phi(l),\infty)$ is a continuously differentiable function with a bounded derivative.
By the same argument, $(\solA-l)_+ \in L^2_{\mathrm{loc}}(0,T;H^1_{\mathrm{loc}}(\Omega))$.
\begin{proposition}[Interior energy estimate - lower truncation]\label{prop:interior.energy.estimate.lower.trunc}
Let $k\geq l>0$ and $\zeta\in C^\infty_c({B_R}\times({\bar{t}_0},\infty))$.
Then we have the estimate
\begin{equation*}
\begin{aligned}
&\norm{({\solA_{(l)}}-{k})_{-}\zeta}_{L^\infty({\bar{t}_0},{0};L^2({B_R}))}^2+c_1l^{m-1}\norm{\nabla({\solA_{(l)}}-{k})_{-}\zeta}_{L^2({Q_\omega(R)})}^2\\
&\quad\leq C\Biggl( (k-l)(k+l)\iint_{{Q_\omega(R)}}\abs{\zeta_t}\chi_{[\solA<k]}+c_2(k-l)^2k^{m-1}\iint_{{Q_\omega(R)}}\abs{\nabla\zeta}^2\chi_{[l<\solA< k]} \\
&\qquad\qquad\qquad\qquad\qquad + (k-l) \frac{l^m}{m} \iint_{Q_\omega(R)}\abs{\Delta\zeta}\chi_{[\solA<l]}+L l^{-m_0} (k-l) \iint_{Q_\omega(R)}\chi_{[\solA<k]} \Biggr)
\end{aligned}
\end{equation*}
for some constant $C\geq 0$.
\end{proposition}
\begin{remark}\label{rem:gradient.interchanging.with.cut-off.function}
The second term in the estimate is written ambiguously in view of the placement of $\zeta$; it could either mean the norm of $(\nabla({\solA_{(l)}}-{k})_{-})\zeta$ or $\nabla(({\solA_{(l)}}-{k})_{-}\zeta)$.
This is on purpose, since the estimate holds for both interpretations.
Indeed, the second case yields the extra term $c_1l^{m-1}\norm{({\solA_{(l)}}-k)_-\nabla\zeta}^2_{L^2({Q_\omega(R)})}$, which is estimated by $c_2k^{m-1}(k-l)^2\iint_{{Q_\omega(R)}}\abs{\nabla\zeta}^2\chi_{[\solA<k]}$.
Therefore it can be absorbed by the second term on the right-hand side.
\end{remark}
\begin{proof}
We may assume without loss of generality that $l\leq \mu_+$, because otherwise ${(\solA_{(l)}-k)_-}$ vanishes and the estimate certainly holds.
We may also assume that $k\leq \mu_+$, because the left-hand side of the estimate does not change if $k>\mu_+$ compared to $k=\mu_+$ while the right-hand side increases.
Observe that
\[
\eta=- {(\solA_{(l)}-k)_-} \zeta^2
\]
is an admissible test function in \eqref{eq:SD-PME.local.solution.including.time-derivative.id}.
Indeed, define $\psi:[0,\infty)\to{\mathbb{R}}$ by
\[
\psi(\solB)=-\left(\max\left\{ \beta(\solB),l \right\} -k \right)_-
\]
and observe that $\beta:=\phi^{-1}$ is continuous on $[0,\infty)$ and $C^1$-regular on $(0,\infty)$.
We conclude that $\psi$ is continuous and piece-wise $C^1$-regular on $[0,\infty)$ and $\psi'$ is bounded, which implies that $-{(\solA_{(l)}-k)_-}=\psi(\phi(\solA))$ is in $L^2_{\mathrm{loc}}(0,T;H^1_{\mathrm{loc}}(\Omega))$.
Applying \eqref{eq:SD-PME.local.solution.including.time-derivative.id} to this test function and integrating over $t\in[{\bar{t}_0},\tau]$ for some fixed $\tau\in[{\bar{t}_0},0]$ gives
\begin{equation}\label{eq:interior.energy.estimate.lower.trunc.test_function_identity}
\int_{{\bar{t}_0}}^{\tau}\left[-\pinprod{\solA_t}{{(\solA_{(l)}-k)_-}\zeta^2}-\inprod{\nabla\phi(\solA)}{\nabla\left({(\solA_{(l)}-k)_-}\zeta^2\right)}\right]=-\int_{{\bar{t}_0}}^{\tau}\inprod{f({\,\cdot\,},\solA)}{{(\solA_{(l)}-k)_-}\zeta^2}.
\end{equation}
We study each of the three terms in \eqref{eq:interior.energy.estimate.lower.trunc.test_function_identity} separately.
For the first term we apply \Cref{lem:chainrule_local_solution}, where we observe that
\begin{align*}
\Psi(z)&=-\int_{l}^{z}(\max\left\{\tilde{z},l\right\}-k)_-\mathrm{d}\tilde{z}
=\begin{cases}
(k-l)(l-z) &\text{if}\ 0\leq z<{l},\\
\frac{1}{2}(z-k)_-^2-\frac{1}{2}(k-l)^2 &\text{if}\ z\geq l
\end{cases}\\
&=(k-l)(z-l)_-+\tfrac{1}{2}(z_{(l)}-k)_-^2-\tfrac{1}{2}(k-l)^2,
\end{align*}
and we note that $\zeta({\bar{t}_0})=0$, to obtain
\begin{align*}
-\int_{{\bar{t}_0}}^{\tau}\pinprod{\solA_t}{{(\solA_{(l)}-k)_-}\zeta^2}
&=\inprod{\Psi(\solA({\tau}))}{\zeta^2({\tau})}-2\int_{{\bar{t}_0}}^{{\tau}}\inprod{\Psi(\solA)}{\zeta_t\zeta}\\
&=\frac{1}{2}\inprod{{(\solA_{(l)}-k)_-}^2(\tau)}{\zeta^2(\tau)}-\int_{{\bar{t}_0}}^{{\tau}}\inprod{{(\solA_{(l)}-k)_-}^2}{\zeta_t\zeta}\\
&\quad+(k-l)\inprod{(\solA(\tau)-l)_-}{\zeta^2(\tau)}-2(k-l)\int_{{\bar{t}_0}}^{{\tau}}\inprod{(\solA-l)_-}{\zeta_t\zeta}\\
&\quad-\frac{1}{2}\inprod{(k-l)^2}{\zeta^2(\tau)}+\int_{{\bar{t}_0}}^{{\tau}}\inprod{(k-l)^2}{\zeta\zeta_t}.
\end{align*}
The third term is non-negative and the last two terms cancel.
Furthermore, we estimate the second and fourth term using ${{(\solA_{(l)}-k)_-}^2}{\zeta_t\zeta}\geq -(k-l)^2\zeta\abs{\zeta_t}\chi_{[\solA<k]}$ and $-2(k-l) (\solA-l)_-\zeta\zeta_t\geq -2l(k-l)\zeta\abs{\zeta_t}\chi_{[\solA<k]}$ to obtain the lower bound
\begin{equation}
-\int_{{\bar{t}_0}}^{\tau}\pinprod{\solA_t}{{(\solA_{(l)}-k)_-}\zeta^2}\geq \frac{1}{2}\inprod{{(\solA_{(l)}-k)_-}^2(\tau)}{\zeta^2(\tau)}-(k-l)(k+l)\int_{{\bar{t}_0}}^{{\tau}}\inprod{\zeta}{\abs{\zeta_t}\chi_{[\solA<k]}}.
\end{equation}
For the second term in \eqref{eq:interior.energy.estimate.lower.trunc.test_function_identity} we compute
\begin{align*}
&-\inprod{\nabla\phi(\solA)}{\left(\nabla{(\solA_{(l)}-k)_-}\zeta^2\right)\left\{\chi_{[\solA\geq l]}+\chi_{[\solA< l]}\right\}}\\
&\quad=\inprod{\phi'(\solA)|\nabla{(\solA_{(l)}-k)_-}|^2}{\zeta^2}+\inprod{\phi'(\solA){(\solA_{(l)}-k)_-}\nabla{(\solA_{(l)}-k)_-}}{\nabla\zeta^2}\\
&\qquad-(k-l)\inprod{\nabla\phi(\solA^{(l)})}{\nabla\zeta^2}\\
&\quad\geq \frac{1}{2}\inprod{\phi'(\solA)|\nabla{(\solA_{(l)}-k)_-}|^2}{\zeta^2}-2\inprod{\phi'(\solA){(\solA_{(l)}-k)_-}^2}{\abs{\nabla\zeta}^2\chi_{[l<\solA<k]}}\\
&\qquad+(k-l)\inprod{\phi(\solA^{(l)})}{\Delta\zeta^2}
\end{align*}
a.e.\ in $({\bar{t}_0},{\tau})$, where we used Young's inequality to estimate the second term from below and absorbed one of the resulting terms in the first term.
Further, we applied integration by parts in space / the definition of the weak derivative to the last term.
Next, we compute $\Delta\zeta^2=2\abs{\nabla\zeta}^2+2\zeta\Delta\zeta$, of which the first term is non-negative, so we find the lower bound
\begin{align*}
&-\inprod{\nabla\phi(\solA)}{\left(\nabla{(\solA_{(l)}-k)_-}\zeta^2\right)}\\
&\quad\geq \frac{1}{2}\inprod{\phi'(\solA)|\nabla{(\solA_{(l)}-k)_-}|^2}{\zeta^2}-2\inprod{\phi'(\solA){(\solA_{(l)}-k)_-}^2}{\abs{\nabla\zeta}^2\chi_{[l<\solA<k]}}\\
&\qquad\qquad-2(k-l)\inprod{\phi(\solA^{(l)})}{\abs{\Delta\zeta}\zeta} \\
&\; \stackrel{\eqref{eq:nonlin.PME-like.degeneracy.integrated},\eqref{eq:extending.PME-like.degeneracy}}{\geq}
\frac{1}{2}c_1l^{m-1}\inprod{|\nabla{(\solA_{(l)}-k)_-}|^2}{\zeta^2}-2c_2k^{m-1}(k-l)^2\inprod{\abs{\nabla\zeta}^2}{\chi_{[l<\solA<k]}}\\
&\qquad\qquad-2\frac{c_2}{m}l^{m}(k-l)\inprod{\abs{\Delta\zeta}}{\zeta\chi_{[\solA< l]}}
\end{align*}
a.e.\ in $({\bar{t}_0},{\tau})$.
For the last term we simply recall that $f$ satisfies \eqref{itm:source.Lipschitz.assumption}, so we obtain
\[
-\int_{{\bar{t}_0}}^{\tau}\inprod{f({\,\cdot\,},\solA)}{{(\solA_{(l)}-k)_-}\zeta^2}\leq L l^{-m_0}\int_{{\bar{t}_0}}^{\tau}\inprod{({\solA_{(l)}}-k)_{-}}{\zeta^2}\leq L l^{-m_0} (k-l)\iint_{Q_\tau}\chi_{[\solA<k]},
\]
where ${Q_\tau}:={B_R}\times({\bar{t}_0},{\tau})$.
Combining the three estimates and taking the support of $\zeta$ into account yields
\begin{equation*}
\begin{aligned}
&\int_{B_R\times\{{\tau}\}}({\solA_{(l)}}-{k})_{-}^2\zeta^2+c_1 l^{m-1}\norm{\zeta\nabla({\solA_{(l)}}-{k})_{-}}_{L^2({Q_\tau})}^2\\
&\quad\leq C(k-l)(k+l)\iint_{{Q_\tau}}\abs{\zeta_t}\chi_{[\solA<k]}+Cc_2(k-l)^2k^{m-1}\iint_{{Q_\tau}}\abs{\nabla\zeta}^2\chi_{[l<\solA< k]} \\
&\qquad+ C\frac{c_2}{m}(k-l)l^{m}\iint_{Q_\tau}\abs{\Delta\zeta}\chi_{[\solA<l]}+CL l^{-m_0} (k-l)\iint_{Q_\tau}\chi_{[\solA<k]}.
\end{aligned}
\end{equation*}
Taking the supremum over $\tau\in[{\bar{t}_0},0]$ yields an estimate for the first term on the left-hand side, where we estimate the right-hand side uniformly with respect to $\tau$ by taking integrals over the larger domain $Q_\omega(R)$ instead, i.e.\ we put $\tau=0$.
Moreover, putting $\tau=0$ we obtain the same bound for the second term on the left-hand side.
Adding the two inequalities and absorbing a factor $2$ into $C$ yields the desired estimate.
\end{proof}
Recall that $\mu_+:=\esssup_{{Q_\omega (R)}}\solA$.
\begin{proposition}[Interior energy estimate - upper truncation]\label{prop:interior.energy.estimate.upper.trunc}
Suppose $k>0$ and $\zeta\in C^\infty_c({B_R}\times({\bar{t}_0},\infty))$.
Then we have the estimate
\begin{equation}\label{eq:interior.energy.estimate.upper.trunc}
\begin{aligned}
&\norm{(\solA-{k})_{+}\zeta}_{L^\infty({\bar{t}_0},{0};L^2({B_R}))}^2+c_1k^{m-1}\norm{\nabla(\solA-{k})_{+}\zeta}_{L^2({Q_\omega (R)})}^2\\
&\quad\leq C\Bigl( (\mu_+
-k)^2\iint_{{Q_\omega (R)}}\abs{\zeta_t}\chi_{[\solA>k]}+c_2(\mu_+-k)^2\mu_+^{m-1}\iint_{{Q_\omega (R)}}\abs{\nabla\zeta}^2\chi_{[\solA>k]} \\
&\qquad\qquad+L k^{-m_0} (\mu_+-k)\iint_{Q_\omega (R)}\chi_{[\solA>k]}\Bigr)
\end{aligned}
\end{equation}
for some constant $C\geq 0$.
\end{proposition}
\begin{remark}
An analogous statement as in \Cref{rem:gradient.interchanging.with.cut-off.function} holds, that is, the estimate holds for both interpretations of $\nabla(\solA-k)_+\zeta$.
\end{remark}
\begin{proof}
Without loss of generality we may assume that $k<\mu_+$, because $(\solA-k)_+$ vanishes otherwise and the estimate then certainly holds.
Observe that
\[
\eta={(\solA-k)_{+}}\zeta^2
\]
is an admissible test function in \eqref{eq:SD-PME.local.solution.including.time-derivative.id} by similar arguments as for the test function in the proof of \Cref{prop:interior.energy.estimate.lower.trunc}, where we now put
\[
\psi(\solB)=(\beta(\solB)-k)_+,\quad\text{for}\ \solB\geq 0.
\]
Again, $\psi$ is continuous and piece-wise $C^1$-regular on $[0,\infty)$ and $\psi'$ is bounded.
We apply \eqref{eq:SD-PME.local.solution.including.time-derivative.id} to this test function, which we integrate over $t\in[{\bar{t}_0},{\tau}]$ for some fixed ${\tau}\in[{\bar{t}_0},0]$.
We study each of the three terms in the resulting identity separately.
For the first term we apply \Cref{lem:chainrule_local_solution}, where we first observe that
\[
\Psi(z)=\int_k^z(\tilde{z}-k)_+\mathrm{d}\tilde{z}=
\frac{1}{2}(z-k)_+^2
\]
and recall that $\zeta({\bar{t}_0})=0$, to obtain
\[
\begin{aligned}
\int_{{\bar{t}_0}}^{{\tau}}\pinprod{\solA_{t}}{{(\solA-k)_{+}}\zeta^2}&=\frac{1}{2}\inprod{(\solA({\tau})-k)_+^2}{\zeta^2}-\int_{\bar{t}_0}^{\tau}\inprod{{(\solA-k)_{+}}^2}{\zeta\zeta_t}.
\end{aligned}
\]
For the second term we use Young's inequality to obtain
\begin{align*}
&\int_{\bar{t}_0}^{\tau}\inprod{\nabla\phi(\solA)}{\nabla\left({(\solA-k)_{+}}\zeta^2\right)}=\int_{\bar{t}_0}^{\tau}\inprod{\nabla\phi(\solA)}{\left(\nabla{(\solA-k)_{+}}\right)\zeta^2+{(\solA-k)_{+}}\nabla\zeta^2}\\
&\geq \frac{1}{2}\phi'(k)\iint_{Q_\tau}|\nabla{(\solA-k)_{+}}|^2\zeta^2-2\phi'(\mu_+)\iint_{Q_\tau}{(\solA-k)_{+}}^2\abs{\nabla\zeta}^2,
\end{align*}
where ${Q_\tau}:={B_R}\times({\bar{t}_0},{\tau})$.
We use \eqref{eq:extending.PME-like.degeneracy} to estimate $\phi'(k)\geq c_1 k^{m-1}$ and $\phi'(\mu_+)\leq c_2 \mu_+^{m-1}$.
For the last term we note that $f$ satisfies \eqref{itm:source.Lipschitz.assumption}, hence as in the proof of \Cref{prop:interior.energy.estimate.lower.trunc} we have the upper bound
\[
\int_{{\bar{t}_0}}^{{\tau}}\inprod{f({\,\cdot\,},\solA)}{{(\solA-k)_{+}}\zeta^2} \leq L k^{-m_0} \iint_{Q_\tau}{(\solA-k)_{+}}\leq L k^{-m_0} (\mu_+-k)\iint_{Q_\tau}\chi_{[\solA>k]}.
\]
Combining the three estimates and taking the supremum over ${\tau}\in[{\bar{t}_0},0]$ as in the final step in the proof of \Cref{prop:interior.energy.estimate.lower.trunc} shows that the desired estimate holds.
\end{proof}
Recall that $\mu_+:=\esssup_{{Q_\omega(R)}}\solA$, $\mu_-:=\essinf_{{Q_\omega(R)}}\solA$ and assume $\omega\geq \mu_+-\mu_-$.
\begin{proposition}[Interior logarithmic estimate]\label{prop:int.log.estimate}
Let $k,l\in{\mathbb{N}}$, $l>k$, let $t,\tau\in [{\bar{t}_0},0]$, $t\leq \tau$, and let $\zeta\in C^\infty_c({B_R})$.
Then
\begin{equation*}
\begin{aligned}
&(l-k-1)^2\int_{{B_R}\times\{\tau\}}\zeta^2\chi_{[\solA>\mu_-+\omega-\frac{\omega}{2^{l}}]}\\
&\leq (l-k)^2\int_{{B_R}\times\{t\}}\zeta^2\chi_{[\solA>\mu_-+\omega-\frac{\omega}{2^k}]} + C\, c_2 (l-k)\mu_+^{m-1}\frac{R^2}{\omega^{m-1}}\int_{B_R}\abs{\nabla\zeta}^2 \\
&\quad +C\, L \left( \frac{\omega}{2} \right)^{-m_0} (l-k)\, 2^{l}\frac{R^2}{\omega^{m}}\abs{{B_R}}
\end{aligned}
\end{equation*}
for some constant $C\geq 0$.
\end{proposition}
\begin{proof}
Consider the function $\varphi:[0,\mu_+]\to[0,\infty)$ given by
\[
\begin{aligned}
\varphi(z)&=\log^+\left(\frac{\frac{\omega}{2^k}}{\frac{\omega}{2^k}-(z-(\mu_-+\omega-\frac{\omega}{2^k}))_+ +\frac{\omega}{2^l}}\right)\\
&=\left\{\begin{array}{cl}
\log\left(\frac{\frac{\omega}{2^k}}{\mu_-+\omega-z+\frac{\omega}{2^l}}\right)&\text{if}\ z\geq\mu_-+\omega-\frac{\omega}{2^k}+\frac{\omega}{2^l},\\
0&\text{if}\ z<\mu_-+\omega-\frac{\omega}{2^k}+\frac{\omega}{2^l}.
\end{array}\right.
\end{aligned}
\]
Note that $\varphi$ is a continuous, piece-wise smooth function with bounded derivative.
We compute
\begin{align*}
\varphi'(z)&=\frac{1}{\mu_-+\omega-z+\frac{\omega}{2^l}},\quad\varphi''(z)=\frac{1}{\left(\mu_-+\omega-z+\frac{\omega}{2^l}\right)^2}=(\varphi'(z))^2
\end{align*}
for all $z\geq \mu_-+\omega-\frac{\omega}{2^k}+\frac{\omega}{2^l}$, so
\[
(\varphi^2)''=2((\varphi)'^2+\varphi\varphi'')=2(1+\varphi)(\varphi')^2.
\]
We use the function $\varphi$ to define the test function
\[
\eta=(\varphi^2)'(\solA)\zeta^2,
\]
which is admissible in \eqref{eq:SD-PME.local.solution.including.time-derivative.id}, because $\psi:[0,\infty)\to[0,\infty)$,
\[
\psi(\solB):=(\varphi^2)'\left(\min\left\{ \beta(\solB) , \mu_+ \right\}\right),
\]
is a continuous, piece-wise $C^1$-regular function with bounded derivative.
In particular,
\begin{equation}
\nabla \psi(\phi(\solA)) = (\varphi^2)''(\solA)\nabla(\solA-\tfrac{\omega}{2})_+
=2(1+\varphi(\solA))\varphi'(\solA)^2\nabla(\solA-\tfrac{\omega}{2})_+.\label{eq:int.log.estimate.gradient.of.test_function}
\end{equation}
We apply \eqref{eq:SD-PME.local.solution.including.time-derivative.id} to this test function and integrate over $[t,\tau]$ to obtain
\begin{equation*}
\int_{{t}}^{\tau}\left[\pinprod{\solA_t}{(\varphi^2)'(\solA)\zeta^2}+\inprod{\nabla\phi(\solA)}{\nabla\left((\varphi^2)'(\solA)\zeta^2\right)}\right]=\int_{{t}}^{\tau}\inprod{f({\,\cdot\,},\solA)}{(\varphi^2)'(\solA)\zeta^2}.
\end{equation*}
We study each of the three terms separately.
To simplify the notation, we write
\[
\varphi(x,t)=\varphi(\solA(x,t)),\quad\varphi'(x,t)=\varphi'(\solA(x,t))\quad\text{and}\quad \varphi''(x,t)=\varphi''(\solA(x,t)).
\]
For the first term we use \Cref{lem:chainrule_local_solution}.
First we note that
\[
\Psi(\solA)=\int_0^\solA (\varphi^2)'(z)\mathrm{d} z=\varphi^2(\solA)
\]
and that $\zeta_t=0$, since $\zeta$ does not depend on $t$.
We conclude that
\begin{align*}
\int_{{t}}^{\tau}\pinprod{\solA_t}{(\varphi^2)'(\solA)\zeta^2}=\inprod{\varphi^2(\solA({\tau}))}{\zeta^2}-\inprod{\varphi^2(\solA({t}))}{\zeta^2}.
\end{align*}
For the second term we compute
\begin{align*}
&\int_{{t}}^{{\tau}}\inprod{\nabla\phi(\solA)}{\nabla\left((\varphi^2)'\zeta^2\right)}
\stackrel{\eqref{eq:int.log.estimate.gradient.of.test_function}}{=}\int_{{t}}^{{\tau}}\inprod{\nabla\phi(\solA)}{2(1+\varphi)\varphi'^2\nabla(\solA-\tfrac{\omega}{2})_+\zeta^2+(\varphi^2)'\nabla\zeta^2}\\
&\quad=2\int_{{t}}^{{\tau}}\inprod{\phi'(\solA)(1+\varphi)\varphi'^2\abs{\nabla(\solA-\tfrac{\omega}{2})_+}^2}{\zeta^2}+\int_{{t}}^{{\tau}}\inprod{\phi'(\solA)(\varphi^2)'\nabla(\solA-\tfrac{\omega}{2})_+}{\nabla\zeta^2}\\
&\quad\geq 2\int_{{t}}^{{\tau}}\inprod{\phi'(\solA)(1+\varphi)\varphi'^2\abs{\nabla(\solA-\tfrac{\omega}{2})_+}^2}{\zeta^2}-4\int_{{t}}^{{\tau}}\inprod{\phi'(\solA)\varphi\varphi'\abs{\nabla(\solA-\tfrac{\omega}{2})_+}}{\zeta\abs{\nabla\zeta}}\\
&\quad \geq 2\int_{{t}}^{{\tau}}\inprod{\phi'(\solA)(1+\varphi)\varphi'^2\abs{\nabla(\solA-\tfrac{\omega}{2})_+}^2}{\zeta^2}-2\int_{{t}}^{{\tau}}\inprod{\phi'(\solA)\varphi\varphi'^2\abs{\nabla(\solA-\tfrac{\omega}{2})_+}^2}{\zeta^2}\\
&\qquad -2\int_{{t}}^{{\tau}}\inprod{\phi'(\solA)\varphi}{\abs{\nabla\zeta}^2}\\
&\quad \geq -2\int_{{t}}^{{\tau}}\inprod{\phi'(\solA)\varphi}{\abs{\nabla\zeta}^2} \stackrel{\eqref{eq:extending.PME-like.degeneracy}}{\geq} -2c_2 \mu_+^{m-1}\int_{{t}}^{{\tau}}\inprod{\varphi}{\abs{\nabla\zeta}^2},
\end{align*}
where we used Young's inequality to obtain the second estimate.
For the last term we simply recall \eqref{itm:source.Lipschitz.assumption} to obtain the estimate
\begin{equation*}
\int_{{t}}^{{\tau}}\inprod{f({\,\cdot\,},\solA)}{(\varphi^2)'\zeta^2}\leq 2 L \left(\frac{\omega}{2}\right)^{-m_0}\int_{{t}}^{{\tau}}\inprod{\varphi \varphi'}{\zeta^{2}},
\end{equation*}
where we used that $\solA\geq \frac{\omega}{2}$ in the support of $\varphi$.
Combining the three estimates, we have that
\begin{equation*}
\begin{aligned}
\int_{{B_R}}\varphi^2(\solA({\tau}))\zeta^2&\leq\int_{{B_R}}\varphi^2(\solA({t}))\zeta^2+2c_2\mu_+^{m-1}\iint_{Q^\tau}\varphi(\solA)\abs{\nabla\zeta}^2+2L \left(\frac{\omega}{2}\right)^{-m_0} \iint_{Q^\tau}{\varphi(\solA)\varphi'(\solA)\zeta^2},
\end{aligned}
\end{equation*}
where ${Q^\tau}:={B_R}\times({t},{\tau})$.
Note that $\varphi$ and $\varphi'$ are non-decreasing functions, so evaluating at $\mu_+$ we find the upper bounds
\begin{gather*}
\varphi(\solA)\leq\log\left(\frac{\frac{\omega}{2^k}}{\omega-(\mu_+-\mu_-)+\frac{\omega}{2^l}}\right)\leq\log\left(\frac{\frac{\omega}{2^k}}{\frac{\omega}{2^l}}\right)=(l-k)\log2,\\
\varphi'(\solA)\leq\frac{1}{\omega-(\mu_+-\mu_-)+\frac{\omega}{2^l}}\leq\frac{2^l}{\omega}.
\end{gather*}
Similarly, on the set ${B_R}\cap\left[\solA>\mu_-+\omega-\frac{\omega}{2^{l}}\right]$ the function $\varphi(\solA)$ has the lower bound
\[
\varphi(\mu_-+\omega-\tfrac{\omega}{2^{l}})=\log\left(\frac{\frac{\omega}{2^k}}{\frac{\omega}{2^{l}}+\frac{\omega}{2^l}}\right)=\log\left(\frac{2^{l-1}}{2^k}\right)=(l-k-1)\log 2.
\]
Substituting these estimates into the inequality and dividing both sides by $(\log(2))^2$, we obtain
\begin{equation*}
\begin{aligned}
&(l-k-1)^2\int_{{B_R}\times\{{\tau}\}}\zeta^2\chi_{[\solA>\mu_-+\omega-\frac{\omega}{2^l}]}\\
&\leq (l-k)^2\int_{{B_R}\times\{{t}\}}\zeta^2\chi_{[\solA>\mu_-+\omega-\frac{\omega}{2^k}]}+C\, c_2(l-k)\mu_+^{m-1}\iint_{Q^\tau}\abs{\nabla\zeta}^2 \\
&\quad + C\, L \left(\frac{\omega}{2}\right)^{-m_0} (l-k)\frac{2^l}{\omega}\abs{{Q^\tau}}.
\end{aligned}
\end{equation*}
Finally, note that $\abs{{Q^\tau}}\leq\frac{R^2}{\omega^{m-1}}\abs{{B_R}}$ and substitute this in the last two terms.
\end{proof}
\subsection{Embeddings of parabolic spaces and technical lemmas}
We recall the definition of the functional spaces that we will use in the sequel.
\begin{definition}[Parabolic spaces]\label{def:parab.space}
The Banach spaces $V^2(\Omega_T)$ and $V^2_0(\Omega_T)$ are given by
\begin{align*}
V^2(\Omega_T)&=L^\infty(0,T;L^2(\Omega))\cap L^2(0,T;H^1(\Omega)),\\
V^2_0(\Omega_T)&=L^\infty(0,T;L^2(\Omega))\cap L^2(0,T;H^1_0(\Omega)).
\end{align*}
Both spaces are equipped with the norm
\[
\norm{\solC}_{V^2(\Omega_T)}=\esssup_{0\leq t\leq T}\norm{\solC(t)}_{L^2(\Omega)}+\norm{\nabla\solC}_{L^2(\Omega_T)}
\]
where $\solC\in V^2(\Omega_T)$.
\end{definition}
The following result shows that the space $V^2_0(\Omega_T)$ is embedded into $L^2(\Omega_T)$.
\begin{lemma}\label{lem:parab.space.embedding}
Let $\Omega\subset {\mathbb{R}}^N$ be any bounded domain and $0<T<\infty$.
Then there exists a constant $C\geq 0$ depending only on $N$ such that
\begin{equation}\label{eq:parab.space.embedding}
\norm{\solC}_{L^2(\Omega_T)}\leq C\abs{\Omega_T\cap\left[\solC\neq 0\right]}^{\frac{1}{N+2}}\norm{\solC}_{V^2(\Omega_T)}
\end{equation}
for all $\solC\in V^2_0(\Omega_T)$.
\end{lemma}
\begin{proof}
This inequality is given in \cite{lady1968}, Equation (3.7) on page 76.
\end{proof}
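For orientation, here is a sketch of how \eqref{eq:parab.space.embedding} is typically obtained, granting the parabolic Sobolev embedding $\norm{\solC}_{L^{2\frac{N+2}{N}}(\Omega_T)}\leq C(N)\norm{\solC}_{V^2(\Omega_T)}$ for $\solC\in V^2_0(\Omega_T)$, which we take as given here: by H\"older's inequality on the set $\left[\solC\neq 0\right]$,
\[
\norm{\solC}_{L^2(\Omega_T)}^2=\iint_{\Omega_T}\abs{\solC}^2\chi_{[\solC\neq 0]}\leq \norm{\solC}_{L^{2\frac{N+2}{N}}(\Omega_T)}^{2}\abs{\Omega_T\cap\left[\solC\neq 0\right]}^{\frac{2}{N+2}},
\]
and taking square roots yields \eqref{eq:parab.space.embedding}.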
Let us mention the following Poincar\'e type inequality that is concerned with truncated functions. Actually, this is a lemma due to De Giorgi, see Lemma 2.3 in \cite{DiBenedetto1982}.
\begin{lemma}\label{lem:Poincare-type.ineq.tracking.levelsets}
Let $\solC\in W^{1,1}(B_R)$ and let $l,k$ be any reals such that $l>k$.
Then there exists a constant $C\geq 0$ depending only on dimension $N$ such that
\begin{align*}
(l-k)\abs{B_R\cap\left[\solC>l\right]}^{1-\frac{1}{N}}\leq C\frac{R^N}{\abs{B_R\cap\left[\solC\leq k\right]}}\int_{B_R\cap\left[l>\solC\geq k\right]}\abs{\nabla \solC}\mathrm{d} x.
\end{align*}
\end{lemma}
\begin{proof}
This statement is proven in \cite{ladyUral1968}, Lemma 3.5 on p.\ 55.
\end{proof}
Finally, we have the following lemma on fast geometric convergence.
\begin{lemma}\label{lem:fast.geometric.convergence}
Assume that $\{y_n\}_{n=1}^\infty$ is a non-negative sequence such that
\begin{align*}
y_{n+1}\leq C\, b^ny_n^{1+{a}}
\end{align*}
for all $n\in\mathbb{N}$, where $C>0$, $b>1$ and ${a}\in(0,1)$ are constants. If $y_0\leq \theta$, then
\begin{align*}
y_n\leq \theta\, b^{-n{a}^{-1}},
\end{align*}
where $\theta:= C^{-{a}^{-1}}b^{-{a}^{-2}}$.
\end{lemma}
\begin{proof}
A simple proof by induction can be found in \cite{ladyUral1968}, Lemma 4.7 on page 66.
\end{proof}
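For completeness, we record the induction step behind this bound with the above choice of $\theta$: if $y_n\leq\theta\,b^{-n{a}^{-1}}$, then
\[
y_{n+1}\leq C\,b^{n}\left(\theta\, b^{-\frac{n}{a}}\right)^{1+a}
=\left(C\,\theta^{a}\right)\theta\, b^{-\frac{n}{a}},
\qquad
C\,\theta^{a}=C\cdot C^{-1}b^{-{a}^{-1}}=b^{-{a}^{-1}},
\]
so that $y_{n+1}\leq\theta\, b^{-(n+1){a}^{-1}}$.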
\section{Proof of the De Giorgi-type Lemma} \label{sect:Proof.of.DeGiorgiLemma}
We prove each of the two alternatives in \Cref{lem:DeGiorgi-type} separately.
\subsection{The first alternative}
We assume the hypotheses of \Cref{lem:DeGiorgi-type} are satisfied, i.e.\ $\omega>0$ and $R\in(0,R_{\mathrm{max}}]$ such that \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, \eqref{eq:cond.oscillation.recursion.hypothesis}, \eqref{eq:cond.inf.small} and \eqref{eq:oscillation.Really.Large} hold.
Note that \eqref{eq:cond.oscillation.recursion.hypothesis} and \eqref{eq:cond.inf.small} imply $\mu_+\leq \frac{5}{4}\omega$.
\begin{proposition}\label{prop:DeGiorgi.first.alternative}
There exists a $\nu_0\in (0,1)$ depending on $N$, $c_1$, $c_2$, $m$ and $L$ such that, if \eqref{eq:DeGiorgi-type.alternative.I} holds, then $\solA> \mu_- + \frac{\omega}{2}$ a.e.\ in $Q_\omega(\frac{R}{2})$.
\end{proposition}
\begin{proof}
We define sequences
\[
R_n=\frac{R}{2}+\frac{R}{2^{n+1}},\quad k_n=\mu_-+\frac{\omega}{4}+\frac{\omega}{2^{n+2}}
\]
and construct the family of nested and shrinking cylinders ${Q}_n=Q_\omega(R_n)$.
Let $\zeta_n$ be a smooth cut-off function corresponding to the inclusion ${Q}_{n+1}\subset {Q}_{n}$, i.e.\ $0\leq\zeta_n\leq 1$, $\zeta_n$ vanishes on the parabolic boundary of ${Q}_{n}$, $\zeta_n\equiv 1$ in ${Q}_{n+1}$ and
\[
\abs{\nabla\zeta_n}\leq\frac{2^{n+2}}{R},\quad\abs{\Delta\zeta_n}\leq \frac{2^{2(n+2)}}{R^2},\quad 0\leq \zeta_{n,t}\leq 2^{2(n+2)}\frac{\omega^{m-1}}{R^2}.
\]
We use \Cref{prop:interior.energy.estimate.lower.trunc}, substituting $R=R_n$, $k=k_n$ and $l=\mu_-+\frac{\omega}{4}$, and we write $\solA_\omega={\solA_{(l)}}=\max\left\{ \solA,\mu_-+\frac{\omega}{4} \right\}$.
Moreover, we have the following inequalities:
\begin{gather*}
\frac{\omega}{4}\leq l; \quad
(k-l)(k+l)=\frac{\omega}{2^{n+2}}(2\mu_-+\frac{\omega}{2}+\frac{\omega}{2^{n+2}})\leq\frac{\omega^2}{2^{n}}; \\
(k-l)^2k^{m-1} \leq \frac{\omega^2}{2^{2(n+2)}}\omega^{m-1}\leq \frac{\omega^{m+1}}{2^{2n}}; \; (k-l)k^m\leq \frac{\omega}{2^{n+2}}\omega^{m}\leq \frac{\omega^{m+1}}{2^{n}}; \\
L l^{-m_0} (k-l) \leq L 4^{m_0}\omega^{-m_0} \frac{\omega}{2^{n}}.
\end{gather*}
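For instance, the last of these inequalities follows from the first one together with $k-l=\frac{\omega}{2^{n+2}}$:
\[
L\, l^{-m_0}(k-l)\leq L\left(\frac{\omega}{4}\right)^{-m_0}\frac{\omega}{2^{n+2}}\leq L\,4^{m_0}\,\omega^{-m_0}\frac{\omega}{2^{n}}.
\]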
After absorbing $4^{m-1}$, $c_1^{-1}$, $c_2$, $m^{-1}$, $L$ and $4^{m_0}$ into $C$, the interior energy inequality in \Cref{prop:interior.energy.estimate.lower.trunc} reads
\begin{equation*}
\begin{aligned}
&\norm{(\solA_\omega-{k_n})_{-}{\zeta}_n}_{L^\infty({t},{\tau};L^2(B(R_{n})))}^2+\omega^{m-1}\norm{\nabla((\solA_{\omega}-{k_n})_{-}{\zeta}_n)}_{L^2(Q_{n})}^2\\
&\quad\leq C(c_1,c_2,m,L,m_0)\Bigl( 2^{n}\frac{\omega^{m+1}}{R^2}\iint_{{Q}_n}\chi_{[\solA<k_n]}+\frac{\omega^{m+1}}{R^2}\iint_{{Q}_n}\chi_{[\solA< k_n]} \\
&\qquad\qquad\qquad\qquad\qquad +2^{n}\frac{\omega^{m+1}}{R^2}\iint_{{Q}_n}\chi_{[\solA<\mu_-+\frac{\omega}{4}]}+\frac{\omega^{1-m_0}}{2^{n}}\iint_{{Q}_n}\chi_{[\solA<k_n]} \Bigr) \\
&\quad\leq C(c_1,c_2,m,L,m_0)\left(2^{n}\frac{\omega^{m+1}}{R^2}\iint_{{Q}_n}\chi_{[\solA<k_n]}+ 2^{n}\frac{\omega^{m+1}}{R^2} \left(\frac{R^2}{\omega^{m+m_0}}\right)\iint_{{Q}_n}\chi_{[\solA<k_n]}\right)\\
&\quad\leq C(c_1,c_2,m,L,m_0)\ 2^{n}\frac{\omega^{m+1}}{R^2}\iint_{{Q}_n}\chi_{[\solA<k_n]},
\end{aligned}
\end{equation*}
where the second inequality is obtained by combining the first three terms and the fact that $2^{-n}\leq 1 \leq 2^n$.
The third inequality is derived by noting that \eqref{eq:oscillation.Really.Large} implies that
\[
\frac{R^2}{\omega^{m+m_0}}\leq R^{1-\frac{m_0}{m}} \leq 1
\]
due to $0<m_0<m$ by assumption.
We introduce the change of variables
\[
\tau=\omega^{m-1}t,
\]
denote by ${\tilde{Q}_n}$ the transformed cylinder $B_{R_n}\times(-R_n^2,0)$ and define the transformed function
\[
\solAvar_\omega(x,\tau):=\solA_\omega(x,\omega^{1-m}\tau), \quad \tilde{\zeta}_n(x,\tau):=\zeta_n(x,\omega^{1-m}\tau).
\]
The above estimate transforms into
\begin{equation*}
\begin{aligned}
&\norm{(\solAvar_\omega-{k_n})_{-}\tilde{\zeta}_n}_{L^\infty(-R_{n}^2,0;L^2(B(R_{n})))}^2+\norm{\nabla((\solAvar_{\omega}-{k_n})_{-}\tilde{\zeta}_n)}_{L^2(\tilde{Q}_{n})}^2\\
&\quad\leq C(c_1,c_2,m,L,m_0)\ 2^{n}\frac{\omega^2}{R^2}\iint_{\tilde{Q}_n}\chi_{[\solAvar<k_n]},
\end{aligned}
\end{equation*}
in other words
\[
\norm{(\solAvar_\omega-{k_n})_{-}\tilde{\zeta}_n}_{V^2(\tilde{Q}_{n})}^2\leq C(c_1,c_2,m,L,m_0)\ 2^{n}\frac{\omega^2}{R^2}|{\tilde{Q}_{n}\cap[\solAvar<k_n]}|.
\]
We apply \Cref{lem:parab.space.embedding} to the function $\solC=(\solAvar_\omega-{k_n})_{-}\tilde{\zeta}_n\in V^2_0(\tilde{Q}_n)$ to estimate its $L^2$-norm in terms of its $V^2$-norm to obtain
\begin{align*}
&\frac{\omega^2}{2^{2(n+3)}}|{\tilde{Q}_{n+1}\cap[\solAvar<k_{n+1}]}|=(k_{n}-k_{n+1})^2|{\tilde{Q}_{n+1}\cap[\solAvar<k_{n+1}]}|\\
&\leq \norm{(\solAvar_\omega-{k_n})_{-}\tilde{\zeta}_n}_{L^2(\tilde{Q}_{n})}^2\leq C(N) |\tilde{Q}_{n}\cap[\solAvar<k_n]|^{\frac{2}{N+2}}\norm{(\solAvar_\omega-{k_n})_{-}\tilde{\zeta}_n}_{V^2(\tilde{Q}_{n})}^2\\
&\leq C(N,c_1,c_2,m,L,m_0)\ 2^{n}\frac{\omega^2}{R^2}|\tilde{Q}_{n}\cap[\solAvar<k_n]|^{1+\frac{2}{N+2}}.
\end{align*}
Next, let us set ${y}_n=\frac{|\tilde{Q}_{n}\cap[\solAvar<k_n]|}{|\tilde{Q}_{n}|}$ and note that $|\tilde{Q}_{n}|=\abs{B_1}R_n^{N+2}$, where $B_1$ is the $N$-dimensional unit ball, so
\[
\begin{aligned}
\frac{|\tilde{Q}_{n}|^{1+\frac{2}{N+2}}}{|\tilde{Q}_{n+1}|}&={\abs{B_1}}^{\frac{2}{N+2}}\frac{(R_n^{N+2})^{1+\frac{2}{N+2}}}{R_{n+1}^{N+2}}=C(N)\ R_n^2\left(\frac{R_n}{R_{n+1}}\right)^{N+2}\\
&=C(N)\ R_n^2\left(\frac{\frac{1}{2}+\frac{1}{2^{n+1}}}{\frac{1}{2}+\frac{1}{2^{n+2}}}\right)^{N+2}\leq C(N)\ 2^{N+2}R^2.
\end{aligned}
\]
We conclude that
\[
y_{n+1}\leq C(N,c_1,c_2,m,L,m_0)\ 2^{3n} y_{n}^{1+\frac{2}{N+2}}.
\]
By \Cref{lem:fast.geometric.convergence}, if
\[
y_0\leq C(N,c_1,c_2,m,L,m_0)^{-\frac{N+2}{2}}2^{-(N+2)^2},
\]
then $y_n\to 0$ as $n\to\infty$.
Let us pick
\begin{equation}\label{eq:def.nu_0}
\nu_0:= C(N,c_1,c_2,m,L,m_0)^{-\frac{N+2}{2}}2^{-(N+2)^2},
\end{equation}
then the statement is proven, since $y_n\to0$ implies that
\[
\left| {{Q}_n\cap \left[\solA<\mu_-+\frac{\omega}{4}+\frac{\omega}{2^{n+2}}\right]} \right|\to 0
\]
as $n\to \infty$, and therefore $\abs{{Q_\omega}\left( \frac{R}{2} \right) \cap[\solA\leq\mu_-+\frac{\omega}{2}]}=0$.
\end{proof}
\subsection{The second alternative}
As in the previous subsection we assume that the hypotheses of \Cref{lem:DeGiorgi-type} are satisfied.
Observe that \eqref{eq:DeGiorgi-type.alternative.I} is violated if and only if \eqref{eq:DeGiorgi-type.alternative.II} is satisfied, because
\begin{equation*}
\abs{Q_\omega(R)\cap\left[\solA<\mu_-+\frac{\omega}{2}\right]}\geq\nu_0\abs{Q_\omega(R)}\\
\end{equation*}
if and only if
\begin{equation*}
\begin{aligned}
\abs{Q_\omega(R)\cap\left[\solA\geq\mu_-+\frac{\omega}{2}\right]}&=\abs{Q_\omega(R)}-\abs{Q_\omega(R)\cap\left[\solA<\mu_-+\frac{\omega}{2}\right]}\\
&\leq(1-\nu_0)\abs{Q_\omega(R)}.
\end{aligned}
\end{equation*}
This justifies calling the two alternatives in \Cref{lem:DeGiorgi-type} a dichotomy.
We prove the second alternative of \Cref{lem:DeGiorgi-type} in two steps.
In the first step we generalize the condition \eqref{eq:DeGiorgi-type.alternative.II}.
In particular, we show that we may replace the factor $1-\nu_0$ by any $\nu\in(0,1)$ on the right-hand side of \eqref{eq:DeGiorgi-type.alternative.II} at the cost of shrinking the set involved in the left-hand side appropriately.
This is necessary due to the fact that $\nu_0$ has been fixed by \Cref{prop:DeGiorgi.first.alternative}.
Indeed, $\nu_0$ was chosen such that \Cref{lem:fast.geometric.convergence} could be applied.
In the second alternative we do not have this freedom of choice and therefore this additional step is required.
Here, we use the interior logarithmic estimate, that is, \Cref{prop:int.log.estimate}.
In the second step we prove the second alternative in the same manner as in \Cref{prop:DeGiorgi.first.alternative}.
We start with the first step by proving an auxiliary estimate.
Recall that ${\bar{t}_0}=-\omega^{1-m}R^2$.
\begin{lemma}\label{lem:Alternative.condition.specific.tau}
Let $\nu_0$ be given by \Cref{prop:DeGiorgi.first.alternative} and suppose \eqref{eq:DeGiorgi-type.alternative.II} holds.
Then there exists $\tau\in [{\bar{t}_0},{\frac{\nu_0}{2}\bar{t}_0}]$ such that
\[
\abs{{B_R}\cap\left[\solA(\tau)>\mu_-+\frac{\omega}{2}\right]}\leq \frac{1-\nu_0}{1-\frac{\nu_0}{2}}\abs{{B_R}}.
\]
\end{lemma}
\begin{proof}
Suppose, to the contrary, that the inequality fails for every $\tau\in[{\bar{t}_0},{\frac{\nu_0}{2}\bar{t}_0}]$. Then
\begin{align*}
\abs{{Q_\omega(R)}\cap\left[\solA>\mu_-+\frac{\omega}{2}\right]}&\geq \int_{{\bar{t}_0}}^{{\frac{\nu_0}{2}\bar{t}_0}}\abs{{B_R}\cap\left[\solA(\tau)>\mu_-+\frac{\omega}{2}\right]}\mathrm{d} \tau\\
&>(1-\tfrac{\nu_0}{2})\omega^{-(m-1)}R^2\frac{1-\nu_0}{1-\frac{\nu_0}{2}}\abs{{B_R}}=(1-\nu_0)\abs{{Q_\omega(R)}}.
\end{align*}
This inequality implies \eqref{eq:DeGiorgi-type.alternative.I}, since
\begin{align*}
&\abs{{Q_\omega(R)}\cap\left[\solA<\mu_-+\frac{\omega}{2}\right]}\leq\abs{{Q_\omega(R)}\cap\left[\solA\leq\mu_-+\frac{\omega}{2}\right]}\\
&\quad=\abs{{Q_\omega(R)}}-\abs{{Q_\omega(R)}\cap\left[\solA>\mu_-+\frac{\omega}{2}\right]}<\nu_0\abs{{Q_\omega(R)}},
\end{align*}
so we have a contradiction with the assumption \eqref{eq:DeGiorgi-type.alternative.II}, which proves the lemma.
\end{proof}
Next, we aim to extend \Cref{lem:Alternative.condition.specific.tau} to the interval $[{\frac{\nu_0}{2}\bar{t}_0},0]$ instead of only a specific $\tau$ by reducing the size of the set on the left-hand side.
\begin{corollary}\label{cor:Alternative.condition.time-interval}
Let $\nu_0$ be given by \Cref{prop:DeGiorgi.first.alternative} and suppose \eqref{eq:DeGiorgi-type.alternative.II} holds.
There exists an integer $n_*$ depending on $N$, $c_1$, $c_2$, $m$ and $L$ such that
\[
\abs{{B_R}\cap\left[\solA(t)\geq\mu_-+\omega-\frac{\omega}{2^{n_*}}\right]}\leq \left(1-\left(\frac{\nu_0}{2}\right)^2\right)\abs{{B_R}}
\]
for all $t\in[{\frac{\nu_0}{2}\bar{t}_0},0]$.
\end{corollary}
\begin{proof}
We use the interior logarithmic estimate, i.e.\ \Cref{prop:int.log.estimate}.
Let $\lambda\in(0,1)$ be a parameter to be determined later. Consider the ball $B_{(1-\lambda)R}$ and let $\zeta$ be the corresponding cut-off function, i.e.\ $\zeta\in C^\infty_c({B_R})$ such that
\[
0\leq\zeta\leq 1,\quad\zeta\equiv 1\ \text{in}\ B_{(1-\lambda)R},\quad\abs{\nabla\zeta}\leq\frac{C}{\lambda R}.
\]
Now, let $k,l\in{\mathbb{N}}$, $l>k$.
Then \Cref{prop:int.log.estimate} gives
\begin{equation*}
\begin{aligned}
&(l-k-1)^2\abs{B_{(1-\lambda)R}\cap\left[\solA(t)>\mu_-+\omega-\tfrac{\omega}{2^l}\right]}\\
&\leq (l-k)^2\abs{{B_R}\cap\left[\solA(\tau)>\mu_-+\omega-\tfrac{\omega}{2^k}\right]}+ C\left( \frac{(l-k)}{\lambda^2}\abs{{B_R}}+ L \left(\frac{\omega}{2}\right)^{-m_0} 2^{l}\frac{R^2}{\omega^m}\abs{{B_R}}\right),
\end{aligned}
\end{equation*}
where we used that $\mu_+^{m-1}\frac{1}{\omega^{m-1}}\leq C$ by the assumption $\mu_+\leq\frac{5}{4}\omega$.
Moreover, $\mu_-+\omega-\frac{\omega}{2^k}\geq \mu_-+\frac{\omega}{2}$, so \Cref{lem:Alternative.condition.specific.tau} implies that
\[
\begin{aligned}
&(l-k-1)^2\abs{B_{(1-\lambda)R}\cap\left[\solA(t)>\mu_-+\omega-\tfrac{\omega}{2^l}\right]}\\
&\leq (l-k)^2\frac{1-\nu_0}{1-\frac{\nu_0}{2}}\abs{{B_R}}+C\ \frac{(l-k)}{\lambda^2}\abs{{B_R}}+C(L,m_0)\, 2^{l+1}\frac{R^2}{\omega^{m+m_0}}\abs{{B_R}}.
\end{aligned}
\]
Next, note that $\abs{{B_R}\backslash B_{(1-\lambda)R}}=\abs{{B_R}}-\abs{B_{(1-\lambda)R}}\leq\lambda N\abs{{B_R}}$, because the volume of an $N$-dimensional spherical shell of width $\lambda R$ is proportional to
\begin{align*}
R^N-(1-\lambda)^NR^N&=(1-b^N)R^N=(1-b)(1+b+b^2+\ldots+b^{N-1})R^N\\
&\leq(1-b) NR^N=\lambda NR^N,\quad b:=1-\lambda.
\end{align*}
We apply this to obtain the estimate
\begin{equation*}
\begin{aligned}
&\abs{{B_R}\cap\left[\solA(t)>\mu_-+\omega-\tfrac{\omega}{2^l}\right]} \leq \\
& \left(\left(\frac{l-k}{l-k-1}\right)^2\frac{1-\nu_0}{1-\frac{\nu_0}{2}}+C \frac{(l-k)}{(l-k-1)^2}\frac{1}{\lambda^2}+ \frac{C(L,m_0) \, 2^{l+1}}{(l-k-1)^2}\frac{R^2}{\omega^{m+m_0}}+C N\lambda\right)\abs{{B_R}}.
\end{aligned}
\end{equation*}
Let us set $k=1$ and pick $\lambda$ and $l$ in an appropriate manner to obtain the desired estimate.
First, we choose $\lambda\in(0,1)$ small enough such that
\[
C N\lambda<\frac{1}{4}\nu_0^2.
\]
Then, we pick $l$ large enough such that
\[
\left(\frac{l-1}{l-2}\right)^2\leq\left(1-\frac{\nu_0}{2}\right)(1+\nu_0),
\]
which is possible, because $\left(1-\frac{\nu_0}{2}\right)(1+\nu_0)>1$ and $\left(\frac{l-1}{l-2}\right)^2\to 1$ as $l\to\infty$.
So we can bound the first term by
\[
(1-\nu_0^2)\abs{{B_R}}=\left(1-\left(\frac{\nu_0}{2}\right)^2\right)\abs{{B_R}}-\frac{3}{4}\nu_0^2\abs{{B_R}}.
\]
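Indeed, with $k=1$ and this choice of $l$, the coefficient of the first term satisfies
\[
\left(\frac{l-1}{l-2}\right)^2\frac{1-\nu_0}{1-\frac{\nu_0}{2}}\leq\left(1-\frac{\nu_0}{2}\right)(1+\nu_0)\,\frac{1-\nu_0}{1-\frac{\nu_0}{2}}=(1+\nu_0)(1-\nu_0)=1-\nu_0^2.
\]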
If needed, we pick $l$ larger such that
\[
C \frac{(l-1)}{(l-2)^2}\frac{1}{\lambda^2}\leq\frac{1}{4}\nu_0^2.
\]
Finally, we have
\[
C(L,m_0) \, \frac{2^{l+1}}{(l-2)^2}\frac{R^2}{\omega^{m+m_0}}\leq \frac{1}{4}\nu_0^2,
\]
because we assume that $R\leq R_\mathrm{max}$.
Indeed, recall that \eqref{eq:oscillation.Really.Large} is satisfied by assumption, that is, $\omega^{-m-m_0} R^2\leq R^{(m-m_0)/m}$, so we set
\begin{equation}\label{eq:define.Rmax}
R_{\mathrm{max}}=\left(\frac{\nu_0^2}{4C}\frac{(l-2)^2}{2^{l}}\right)^{\frac{m}{m-m_0}}
\end{equation}
to ensure that this bound holds.
Set $n_*=l$ to finish the proof.
\end{proof}
The conclusion of the first step in the proof of the second alternative of \Cref{lem:DeGiorgi-type}, i.e.\ the desired generalization of the condition \eqref{eq:DeGiorgi-type.alternative.II}, is given by the following lemma.
Recall that
\[
Q^{\nu_0}_\omega(R):=Q\left( \tfrac{\nu_0}{2}\omega^{1-m}R^2 , R \right).
\]
\begin{lemma}\label{lem:alternative.condition.refined}
Let $\nu_0$ be given by \Cref{prop:DeGiorgi.first.alternative} and suppose \eqref{eq:DeGiorgi-type.alternative.II} holds.
For any $\nu\in(0,1)$ there exists an $n_0>n_*$ such that
\begin{equation*}
\abs{{Q^{\nu_0}_\omega(R)}\cap \left[\solA>\mu_-+\omega-\tfrac{1}{2^{n_0}}\omega\right]} < \nu\abs{{Q^{\nu_0}_\omega(R)}}.
\end{equation*}
\end{lemma}
Before we prove \Cref{lem:alternative.condition.refined}, we need the following auxiliary lemma that allows us to estimate the gradient of the truncated solution.
It is an immediate consequence of the interior energy inequality given in \Cref{prop:interior.energy.estimate.upper.trunc}.
Here, the second inclusion of \eqref{eq:cond.intrinsic.scaled.cylinder.in.domain}, that is, $Q_\omega(2R)\subseteq\Omega_T$, is a key ingredient.
\begin{lemma}\label{lem:Gradient.Estimate.via.Energy.Ineq}
Let $\nu_0\in(0,1)$ and $n\in{\mathbb{N}}$. Then we have the estimate
\[
\begin{aligned}
\norm{\nabla(\solA-(\mu_-+(1-\tfrac{1}{2^n})\omega))_+}_{L^2({Q^{\nu_0}_\omega(R)})}^2
\leq C(N,c_1,c_2,m,L,m_0)\ \frac{\omega^2}{2^{2n}R^2} \abs{Q^{\nu_0}_\omega(R)}.
\end{aligned}
\]
\end{lemma}
\begin{proof}
Fix $n\in{\mathbb{N}}$.
Without loss of generality we may assume that $\omega$ is small enough such that
\[
k:=\mu_-+\left( 1-\frac{1}{2^n} \right) \omega<\mu_+.
\]
Otherwise, the left-hand side of the estimate vanishes and the lemma holds trivially.
It follows that
\[
(\mu_+-k)^2=(\mu_+-\mu_--(1-\frac{1}{2^{n}})\omega)^2\leq \frac{\omega^2}{2^{2n}}.
\]
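Here we spell out the middle step; it uses the oscillation bound $\mu_+-\mu_-\leq\omega$, which we assume to be part of the hypotheses of \Cref{lem:DeGiorgi-type}:
\[
\mu_+-k=(\mu_+-\mu_-)-\left(1-\frac{1}{2^{n}}\right)\omega\leq\omega-\left(1-\frac{1}{2^{n}}\right)\omega=\frac{\omega}{2^{n}},
\]
and $0\leq\mu_+-k$ since $k<\mu_+$, so squaring gives the stated bound.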
We also have that $\frac{1}{2}\omega\leq k$ and $\mu_+\leq \frac{5}{4}\omega\leq 2\omega$.
Let $\zeta$ be a smooth cut-off function corresponding to the inclusion ${Q^{\nu_0}_\omega(R)}\subset Q^{\nu_0}_\omega(2R)$, then
\[
\abs{\nabla\zeta}\leq\frac{4}{R},\quad 0\leq\zeta_t\leq\frac{8\omega^{m-1}}{\nu_0R^2}
\]
and $\zeta$ vanishes on the parabolic boundary of $Q^{\nu_0}_\omega(R)$.
We use \Cref{prop:interior.energy.estimate.upper.trunc}, substituting $\omega^{1-m}$ by $\frac{\nu_0}{2}\omega^{1-m}$ and $R$ by $2R$
to obtain an estimate of the $L^2$-norm of the gradient, namely
\begin{align*}
&c_1\left(\frac{\omega}{2}\right)^{m-1}\norm{\nabla(\solA-k)_+}_{L^2({Q^{\nu_0}_\omega(R)})}^2 \\
&\quad \leq C \left(\frac{\omega^{m+1}}{\nu_02^{2n}R^2}+c_2\frac{(2\omega)^{m+1}}{2^{2n}R^2}+L \left(\frac{\omega}{2}\right)^{-m_0} \frac{\omega}{2^{n}}\right) \iint_{Q^{\nu_0}_{\omega}(2R)}\chi_{[\solA>k]}.
\end{align*}
Multiplying the left- and right-hand side by $\omega^{1-m}$ and absorbing the constants into $C$ we obtain
\[
\norm{\nabla(\solA-k)_+}_{L^2({Q^{\nu_0}_\omega(R)})}^2\leq C(N,c_1,c_2,m,L,m_0)\ \left(1+\frac{R^2}{\omega^{m+m_0}}\right)\frac{\omega^2}{2^{2n}R^2} \abs{Q^{\nu_0}_\omega(R)}.
\]
By \eqref{eq:oscillation.Really.Large} we know that $\omega^{-m-m_0}R^2\leq R^{1-\frac{m_0}{m}}\leq 1$, hence the estimate is proved.
\end{proof}
\begin{proof}[Proof of \Cref{lem:alternative.condition.refined}]
We apply \Cref{lem:Poincare-type.ineq.tracking.levelsets} with $\solC=\solA(t)$, $t\in (-\tfrac{\nu_0}{2}\omega^{1-m}R^2,0)$,
\[
l=\mu_-+(1-\tfrac{1}{2^{n+1}})\omega \quad \text{and} \quad k=\mu_-+(1-\tfrac{1}{2^n})\omega,
\]
where $n>n_*$ is fixed and $n_*$ is given by \Cref{cor:Alternative.condition.time-interval}.
We multiply the left- and right-hand side of the estimate by $\abs{{B_R}\cap[\solC>l]}^{\frac{1}{N}}\leq C(N) R$ and we bound the integral on the right-hand side of the resulting estimate by
\begin{equation}\label{eq:alternative.condition.refined.Holder.ineq.result}
\int_{{B_R}}\abs{\nabla(\solA(t)-k)_+}\chi_{[k\leq\solC<l]} \leq \left(\int_{{B_R}}\abs{\nabla(\solA(t)-k)_+}^2\chi_{[k\leq\solC<l]}\right)^{\frac{1}{2}}\abs{{B_R}\cap\left[k\leq\solA(t)\leq l\right]}^{\frac{1}{2}},
\end{equation}
which holds by H\"older's inequality.
By \Cref{cor:Alternative.condition.time-interval} we know that
\[
\abs{{B_R}\cap\left[\solA(t)\leq k\right]}\geq C(N)\ \left(\tfrac{\nu_0}{2}\right)^2 R^N,
\]
hence $R^{N+1}{\abs{{B_R}\cap[\solA(t)\leq k]}}^{-1}\leq C(N)\ R{\nu_0}^{-2}$.
Integrating the resulting inequality over $t\in (-\tfrac{\nu_0}{2}\omega^{1-m}R^2,0)$ yields
\begin{equation*}
\begin{aligned}
&\frac{\omega}{2^{n}}\abs{{Q^{\nu_0}_\omega(R)}\cap\left[\solA>\mu_-+\left(1-\tfrac{1}{2^{n+1}}\right)\omega\right]}\\
&\leq C(N)\ \frac{R}{{\nu_0}^2} \norm{\nabla(\solA-(\mu_-+(1-\tfrac{1}{2^n})\omega))_+}_{L^2({Q^{\nu_0}_\omega(R)})}\abs{{Q^{\nu_0}_\omega(R)}\cap\left[k\leq\solA<l\right]}^{\frac{1}{2}}.
\end{aligned}
\end{equation*}
We multiply the estimate by $\left(\frac{\omega}{2^{n}}\right)^{-1}$, use the inequality in \Cref{lem:Gradient.Estimate.via.Energy.Ineq} and square the resulting estimate to obtain
\begin{equation*}
\begin{aligned}
&\abs{{Q^{\nu_0}_\omega(R)}\cap\left[\solA>\mu_-+\left(1-\tfrac{1}{2^{n+1}}\right)\omega\right]}^2\\
&\quad\leq C(N,c_1,c_2,m,L,m_0)\ \nu_0^{-4}\abs{Q^{\nu_0}_\omega(R)}\\
&\qquad\cdot\abs{{Q^{\nu_0}_\omega(R)}\cap\left[\mu_-+\left(1-\tfrac{1}{2^{n}}\right)\omega\leq\solA<\mu_-+\left(1-\tfrac{1}{2^{n+1}}\right)\omega\right]}.
\end{aligned}
\end{equation*}
Sum this estimate over $n=n_*,n_*+1,\ldots, n_0-1$ for some $n_0\in{\mathbb{N}}$ and observe that
\[
\sum_{n=n_*}^{n_0-1}\abs{{Q^{\nu_0}_\omega(R)}\cap\left[\mu_-+(1-\tfrac{1}{2^{n}})\omega\leq\solA<\mu_-+(1-\tfrac{1}{2^{n+1}})\omega\right]}\leq \abs{{Q^{\nu_0}_\omega(R)}},
\]
so that
\begin{equation*}
(n_0-n_*)\abs{{Q^{\nu_0}_\omega(R)}\cap\left[\solA>\mu_-+\left(1-\tfrac{1}{2^{n_0}}\right)\omega\right]}^2\leq C(N,c_1,c_2,m,L,m_0)\ \nu_0^{-4}\abs{{Q^{\nu_0}_\omega(R)}}^2.
\end{equation*}
Pick $n_0$ large enough such that
\[
\frac{C(N,c_1,c_2,m,L,m_0)}{(n_0-n_*)\nu_0^4}\leq \nu^2
\]
to finish the proof.
\end{proof}
Now we can prove the second alternative of \Cref{lem:DeGiorgi-type} in the same manner as the first alternative, i.e.\ \Cref{prop:DeGiorgi.first.alternative}.
\begin{proposition}\label{prop:DiGiorgi.SecondAlternative}
Let $\nu_0$ be given by \Cref{prop:DeGiorgi.first.alternative} and suppose \eqref{eq:DeGiorgi-type.alternative.II} holds.
Then there exists $n_0\in{\mathbb{N}}$ depending on $N$, $c_1$, $c_2$, $m$ and $L$ such that
\[
\solA< \mu_-+(1-\tfrac{1}{2^{n_0}})\omega
\]
a.e.\ in $Q^{\nu_0}_\omega\left(\tfrac{R}{2}\right)$.
\end{proposition}
\begin{proof}
Define the sequences
\[
R_n=\frac{R}{2}+\frac{R}{2^{n+1}},\quad k_n=\mu_-+(1-\frac{1}{2^{n_0}})\omega-\frac{\omega}{2^{n+1}},
\]
where $n_0$ is fixed and will be chosen later depending solely on $N$, $c_1$, $c_2$, $m$ and $L$.
Construct the family of nested and shrinking cylinders $Q_n=Q^{\nu_0}_\omega\left(R_n\right)$.
Let $\zeta_n$ be the smooth parabolic cut-off function corresponding to the inclusion ${Q}_{n+1}\subseteq{Q}_{n}$ and note that
\[
\abs{\nabla\zeta_n}\leq \frac{2^{n+2}}{R},\quad 0\leq\zeta_{n,t}\leq C\ \omega^{m-1}\frac{2^{2n+2}}{\nu_0R^2}.
\]
Without loss of generality we may assume that
\[
\mu_-+(1-\frac{1}{2^{n_0}})\omega \leq \mu_+.
\]
Otherwise \Cref{prop:DiGiorgi.SecondAlternative} holds trivially, since $\solA\leq\mu_+$ holds by definition.
First, we observe that the following inequalities hold:
\begin{gather*}
\frac{\omega}{4}\leq k_n;\quad (\mu_+-k_n)^2= (\mu_+-\mu_--(1-\frac{1}{2^{n_0}}-\frac{1}{2^{n+1}})\omega)^2\leq (\mu_+-\mu_-)^2 \leq \omega^2;\\
\mu_+\leq \frac{5}{4}\omega; \quad k_n^{-m_0}(\mu_+-k_n)\leq \left(\frac{\omega}{2}\right)^{-m_0} 2\omega.
\end{gather*}
We use \Cref{prop:interior.energy.estimate.upper.trunc} substituting $R$ by $R_n$, $k$ by $k_n$ and $\omega^{m-1}$ by $\tfrac{\nu_0}{2}\omega^{m-1}$ (recall that we set ${\bar{t}_0}=-\frac{\nu_0}{2}\omega^{1-m}R^2$) and we absorb all constants depending on $c_1$, $c_2$ and $m$ into $C$ to obtain
\begin{equation*}
\begin{aligned}
&\norm{(\solA-k_n)_{+}\zeta_n}_{L^\infty({\bar{t}_0},{0};L^2(B_{R_{n+1}}))}^2+\omega^{m-1}\norm{\nabla(\solA-{k_n})_{+}\zeta_n}_{L^2({Q}_{n+1})}^2\\
&\quad\leq C\ 2^{2n}\frac{\omega^{m+1}}{\nu_0R^2}\iint_{{Q}_{n}}\chi_{[\solA>k_n]}+C\ 2^{2n}\frac{\omega^{m+1}}{R^2}\iint_{{Q_n}}\chi_{[\solA>k_n]} \\
&\qquad+C\ L2^{-m_0}\frac{\omega^{m+1}}{R^2}\left(\frac{R^2}{\omega^{m+m_0}}\right)\iint_{Q_n}\chi_{[\solA>k_n]}\\
&\quad\leq C(c_1,c_2,m,L,m_0)\ 2^{2n}\frac{\omega^{m+1}}{R^2}\iint_{Q_n}\chi_{[\solA>k_n]},
\end{aligned}
\end{equation*}
where we used that $\omega^{-m-m_0} R^2\leq R^{(m-m_0)/m}\leq 1$ by \eqref{eq:oscillation.Really.Large}.
Introduce the change of variables
\[
\tau=\omega^{m-1}t,
\]
denote by $\tilde{Q}_n$ the transformed cylinder $B_{R_n}\times(-\frac{\nu_0}{2}R_n^2,0)$ and define the functions
\[
\solAvar(x,\tau)=\solA(x,\omega^{1-m}\tau),\quad\tilde{\zeta}_n(x,\tau)=\zeta_n(x,\omega^{1-m}\tau).
\]
The transformed estimate reads
\begin{equation*}
\begin{aligned}
\norm{(\solAvar-k_n)_{+}\tilde{\zeta}_n}_{V^2(\tilde{Q}_{n+1})}^2\leq C(c_1,c_2,m,L,m_0)\ 2^{2n}\frac{\omega^{2}}{R^2}\iint_{\tilde{Q}_{n}}\chi_{[\solAvar>k_n]}.
\end{aligned}
\end{equation*}
We apply \Cref{lem:parab.space.embedding} to $(\solAvar-k_n)_{+}\tilde{\zeta}_n\in V^2_0$ and use the resulting estimate to conclude that
\begin{align*}
&\frac{\omega^2}{2^{2(n+2)}} |\tilde{Q}_{n+1}\cap[\solAvar>k_{n+1}] | = (k_{n+1}-k_n)^2|\tilde{Q}_{n+1}\cap[\solAvar>k_{n+1}] |\\
&\quad\leq \norm{(\solAvar-k_n)_+\tilde{\zeta}_n}^2_{L^2(\tilde{Q}_n)}\leq C(N)\ |\tilde{Q}_{n}\cap[\solAvar>k_{n}] |^{\frac{2}{N+2}}\norm{(\solAvar-k_n)_+\tilde{\zeta}_n}^2_{V^2(\tilde{Q}_n)}\\
&\quad\leq C(N,c_1,c_2,m,L,m_0)\ 2^{2n}\frac{\omega^{2}}{R^2}|\tilde{Q}_{n}\cap[\solAvar>k_{n}] |^{1+\frac{2}{N+2}}.
\end{align*}
Note that $|\tilde{Q}_n|= \abs{B_1}\frac{\nu_0}{2} R_n^{N+2}$, so
\[
\frac{|\tilde{Q}_n|^{1+\frac{2}{N+2}}}{|\tilde{Q}_{n+1}|}\leq C(N)\ \nu_0^{\frac{2}{N+2}}2^{N+2}R^2.
\]
Next, we set $y_n=\frac{\abs{\tilde{Q}_n\cap[\solAvar>k_n]}}{|\tilde{Q}_n|}$ and we conclude that
\[
y_{n+1}\leq C(N,c_1,c_2,m,L,m_0)\ 2^{4n}y_n^{1+\frac{2}{N+2}}.
\]
By \Cref{lem:fast.geometric.convergence}, if
\[
y_0\leq C(N,c_1,c_2,m,L,m_0)^{-\frac{N+2}{2}}(2^4)^{-\left(\frac{N+2}{2}\right)^2},
\]
then
\[
y_n\leq C(N,c_1,c_2,m,L,m_0)^{-\frac{N+2}{2}}(2^4)^{-\left(\frac{N+2}{2}\right)^2} (2^4)^{-\frac{N+2}{2}n},
\]
so in this case $y_n\to 0$ as $n\to \infty$.
Let us set
\[
\nu= C(N,c_1,c_2,m,L,m_0)^{-\frac{N+2}{2}}(2^4)^{-\left(\frac{N+2}{2}\right)^2}
\]
and use \Cref{lem:alternative.condition.refined} to pick an $n_0$ corresponding to this $\nu$.
Then we have that
\[
y_0=\frac{\abs{\tilde{Q}_0\cap[\solAvar>k_0]}}{|\tilde{Q}_0|}=\frac{\abs{Q^{\nu_0}_\omega(R)\cap[\solA>\mu_-+(1-\frac{1}{2^{n_0}})\omega]}}{\abs{Q^{\nu_0}_\omega(R)}}\leq \nu
\]
by \Cref{lem:alternative.condition.refined}, so $y_n\to 0$ as $n\to \infty$.
In particular,
\[
\abs{Q_n\cap[\solA>\mu_-+(1-\frac{1}{2^{n_0}}-\frac{1}{2^{n+1}})\omega]}\to 0
\]
as $n\to\infty$, so $\abs{Q^{\nu_0}_\omega(\frac{R}{2})\cap[\solA\geq\mu_-+(1-\frac{1}{2^{n_0}})\omega]}=0$.
\end{proof}
The De Giorgi-type Lemma, that is, \Cref{lem:DeGiorgi-type}, follows by combining \Cref{prop:DeGiorgi.first.alternative,prop:DiGiorgi.SecondAlternative}.
\vspace*{0.2 cm}
\noindent\textbf{Declaration of competing interest.}
None.
\vspace*{0.2 cm}
\noindent\textbf{Data availability statement.}
My manuscript has no associated data.
\vspace*{0.2 cm}
\noindent\textbf{Acknowledgement.}
The author thanks S.\ Sonner for introducing the topic and for the support and feedback during the writing process and N.\ Liao for the discussions on the regularity of Stefan problems.
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
In this short note, we prove Strichartz estimates for Schr\"odinger operators with slowly decaying singular potentials in dimension two. This is a generalization of the recent results by Mizutani, which are stated for dimension greater than two. The main ingredient of the proof is a variant of Kato's smoothing estimate with a singular weight.
\end{abstract}
\section{Introduction}
The purpose of this note is to prove Strichartz estimates for Schr\"odinger equations with slowly decaying singular potentials in dimension two. Such results have recently been proved by Mizutani \cite{M2} in dimension greater than two. Throughout this paper, we assume $0<\mu<2$. We consider the two-dimensional admissible condition:
\begin{align}\label{admissible}
p\geq 2,\quad \frac{2}{p}+\frac{2}{q}=1,\quad (p,q)\neq (2,\infty).
\end{align}
The main result of this note is the following theorem:
\begin{thm}\label{twoSt}
Let $n=2$, $Z>0$ and $H_1=-\Delta+Z|x|^{-\mu}+\varepsilon V_S(x)$, where $V_S\in C^{\infty}(\mathbb{R}^2;\mathbb{R})$ satisfies
\begin{align*}
|\partial_x^{\alpha}V_S(x)|\leq C_{\alpha}(1+|x|)^{-1-\mu-|\alpha|}
\end{align*}
for all $\alpha\in\mathbb{Z}_+^2$.
Then there exists $\varepsilon_*>0$ such that for all $\varepsilon\in [0,\varepsilon_*)$ and $(p,q), (\tilde{p},\tilde{q})$ satisfying $(\ref{admissible})$, there exists $C>0$ such that
\begin{align}
\|e^{-itH_1}u_0\|_{L^p(\mathbb{R};L^q(\mathbb{R}^2))}\leq& C\|u_0\|_{L^2(\mathbb{R}^2)},\label{hom}\\
\|\int_0^te^{-i(t-s)H_1}F(s)ds\|_{L^p(\mathbb{R};L^q(\mathbb{R}^2))}\leq& C\|F\|_{L^{\tilde{p}^*}(\mathbb{R};L^{\tilde{q}^*}(\mathbb{R}^2))}\label{Inhom}
\end{align}
for all $u_0\in L^2(\mathbb{R}^2)$ and $F\in L^{\tilde{p}^*}(\mathbb{R};L^{\tilde{q}^*}(\mathbb{R}^2))$.
\end{thm}
Moreover, we introduce a family of smooth, slowly decaying potentials satisfying repulsive conditions.
Assume that a real-valued function $V$ satisfies:
\noindent $(i)$ $|\partial_x^{\alpha}V(x)|\leq C_{\alpha}(1+|x|)^{-\mu-|\alpha|}$ for all multi-indices $\alpha$.
\noindent $(ii)$ There exists $C>0$ such that $V(x)\geq C(1+|x|)^{-\mu}$.
\noindent $(iii)$ There exist $C>0$ and $R>0$ such that $-x\cdot \partial_xV(x)\geq C|x|^{-\mu}$ for $|x|>R$.
\noindent Moreover, we set
\begin{align*}
H=H_0+V,\quad H_0=-\Delta.
\end{align*}
As stated above, Theorem \ref{twoSt} is proved in \cite[Theorem 1.2]{M2} for dimension $n\geq 3$. Its proof is based on the Strichartz estimates for the propagator $e^{-itH}$ and the smooth perturbation method developed in \cite{BM} and \cite{RodSc}.
To control the local singularity $|x|^{-\mu}$, we need Kato's smoothing estimates with the weight $\chi(x)|x|^{-\mu}$, where $\chi\in C_c^{\infty}(\mathbb{R}^n)$.
For $n\geq 3$, such estimates follow from the endpoint Strichartz estimates for $e^{-itH}$ and the result in \cite{BM} (see \cite[Remark 1.6]{M2} and the proof of \cite[Theorem 1.2]{M2}). However, since the endpoint Strichartz estimates might fail in dimension two, we cannot use this strategy for $n=2$. In this note, we supply the space-time $L^2$-estimate stated in \cite[Remark 1.6]{M2} for $n=2$. More strongly, we have the following theorem.
\begin{thm}\label{main}
Suppose $n=2$ and let $\chi\in C_c^{\infty}(\mathbb{R}^2)$. Then we have
\begin{align*}
\sup_{z\in\mathbb{C}\setminus \mathbb{R}}\|\chi(x)|x|^{-\frac{\mu}{2}}(H-z)^{-1}|x|^{-\frac{\mu}{2}}\chi(x)\|<\infty.
\end{align*}
\end{thm}
\begin{rem}
Using the method described in the next section, we can prove a local smoothing estimate:
\begin{align}\label{locsmooth}
\sup_{z\in\mathbb{C}\setminus \mathbb{R}}\|\jap{x}^{-\gamma}\jap{D_x}^{\frac{1}{2}}(H-z)^{-1}\jap{D_x}^{\frac{1}{2}} \jap{x}^{-\gamma}\|<\infty,
\end{align}
for $\gamma>\frac{1}{2}+\frac{\mu}{4}$ and for $n\geq 1$, that is, $\jap{x}^{-\gamma}\jap{D_x}^{\frac{1}{2}}$ is $H$-supersmooth. In fact, $(\ref{locsmooth})$ follows from $(\ref{semicl})$ and $(\ref{generalbd1})$, which are true for each $n\geq 1$. We also recall that $\jap{x}^{-1}\jap{D_x}^{\frac{1}{2}}$ is $H_0$-supersmooth for $n\geq 3$ (\cite{KY}).
\end{rem}
By the smooth perturbation theory \cite[Theorem XIII.25]{RS}, we obtain a global-in-time estimate for the propagator $e^{-itH}$.
\begin{cor}\cite[Conjectured in Remark 1.6]{M2}\label{corsmooth}
Suppose $n=2$ and let $\chi\in C_c^{\infty}(\mathbb{R}^2)$. Then we have
\begin{align*}
\|\chi(x)|x|^{-\frac{\mu}{2}}e^{-itH}u_0\|_{L^2(\mathbb{R}^{3})}\leq C\|u_0\|_{L^2(\mathbb{R}^2)},\quad u_0\in L^2(\mathbb{R}^2).
\end{align*}
\end{cor}
\begin{rem}
By $(\ref{locsmooth})$, we have $\|\jap{x}^{-\gamma}\jap{D_x}^{\frac{1}{2}}e^{-itH}u_0\|_{L^2(\mathbb{R}^{n+1})}\leq C\|u_0\|_{L^2(\mathbb{R}^n)}$ for $u_0\in L^2(\mathbb{R}^n)$, $n\geq 1$ and $\gamma>\frac{1}{2}+\frac{\mu}{4}$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{twoSt} assuming Corollary \ref{corsmooth}] The inhomogeneous estimates $(\ref{Inhom})$ follow from the homogeneous estimates $(\ref{hom})$ and the Christ--Kiselev lemma \cite{CK}. Thus, we only need to prove the homogeneous estimates $(\ref{hom})$. Let $\chi\in C_c^{\infty}(\mathbb{R}^n;[0,1])$ be such that $\chi(x)=1$ on the unit ball.
Set $W=Z|x|^{-\mu}+\varepsilon V_S$. Following the proof of \cite[Theorem 1.2]{M2}, write $W(x)=V_1(x)+V_2(x)$, where
\begin{align*}
V_1(x)=\chi(x)^2+(1-\chi(x)^2)W(x),\quad V_2(x)=\chi(x)^2(W(x)-1).
\end{align*}
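Note that this is indeed a decomposition of $W$ (a direct check, recorded here for convenience):
\begin{align*}
V_1(x)+V_2(x)=\chi(x)^2+W(x)-\chi(x)^2W(x)+\chi(x)^2W(x)-\chi(x)^2=W(x).
\end{align*}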
Then it turns out that $V_1$ satisfies the conditions $(i)$, $(ii)$ and $(iii)$. Set $B=|V_2|^{\frac{1}{2}}$ and $A=|V_2|^{\frac{1}{2}}\mathrm{sgn}\, V_2$. By virtue of the Strichartz estimates for $e^{-it(H_0+V_1)}$ (\cite[Theorem 1.1]{M2}) and \cite[Theorem 4.1]{RodSc}, it suffices for $(\ref{hom})$ to prove that $B$ is $(H_0+V_1)$-smooth and $A$ is $(H_0+W)$-smooth. Since $|A|, |B|\leq C\chi(x)|x|^{-\frac{\mu}{2}}$ with a constant $C>0$, we only need to prove that $\chi(x)|x|^{-\frac{\mu}{2}}$ is both $(H_0+V_1)$-smooth and $(H_0+W)$-smooth. The former follows from Corollary \ref{corsmooth} and the latter follows from \cite[Proposition 5.1]{M2} with $\varepsilon>0$ small enough, where we note that \cite[Proposition 5.1]{M2} holds for $n=2$ since its proof is based on \cite[Corollary 2.21]{BM}. This completes the proof.
\end{proof}
In the rest of this paper, we prove Theorem \ref{main}.
We use the following notation throughout this paper. For Banach spaces $X$ and $Y$, $B(X,Y)$ denotes the set of all bounded linear operators from $X$ to $Y$. We denote the norm of a Banach space $X$ by $\|\cdot\|_X$. We write $\jap{x}=(1+|x|^2)^{\frac{1}{2}}$ and $D_x=(2\pi i)^{-1}\nabla_x$.
We also denote $\|\cdot\|_{p\to q}=\|\cdot\|_{B(L^p,L^q)}$ and $\|\cdot\|=\|\cdot \|_{2\to 2}$.
\noindent\textbf{Acknowledgment.}
The author would like to thank Haruya Mizutani for suggesting this problem.
\section{Proof of Theorem \ref{main}}
\subsection{Preliminaries}
\begin{lem}
Let $\psi\in C_c^{\infty}(\mathbb{R})$ and $s, k\in \mathbb{R}$. Then we have
\begin{align}\label{Bound1}
\chi(x)|x|^{-\frac{\mu}{2}}\jap{D_x}^{-\frac{\mu}{2}}\jap{x}\in B(L^2(\mathbb{R}^2)),\quad \jap{x}^{-k}\jap{D_x}^{s}\psi(H)\jap{x}^k\in B(L^2(\mathbb{R}^2)).
\end{align}
\end{lem}
\begin{proof}
By Hardy's inequality, we have $|x|^{-\frac{\mu}{2}}\jap{D_x}^{-\frac{\mu}{2}}\in B(L^2(\mathbb{R}^2))$, where we note that $\frac{\mu}{2}<1=\frac{n}{2}$ for $n=2$. Moreover, a simple commutator argument gives $\jap{D_x}^{\frac{\mu}{2}}\chi(x)\jap{D_x}^{-\frac{\mu}{2}}\jap{x}\in B(L^2(\mathbb{R}^2))$.
Writing $\chi(x)|x|^{-\frac{\mu}{2}}\jap{D_x}^{-\frac{\mu}{2}}\jap{x}=|x|^{-\frac{\mu}{2}}\jap{D_x}^{-\frac{\mu}{2}}\cdot \jap{D_x}^{\frac{\mu}{2}}\chi(x)\jap{D_x}^{-\frac{\mu}{2}}\jap{x}$, we obtain $\chi(x)|x|^{-\frac{\mu}{2}}\jap{D_x}^{-\frac{\mu}{2}}\jap{x}\in B(L^2(\mathbb{R}^2))$. The boundedness of $\jap{x}^{-k}\jap{D_x}^{s}\psi(H)\jap{x}^{k}$ immediately follows from a simple commutator argument.
\end{proof}
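For clarity, one standard formulation of Hardy's inequality that suffices for the first step is the fractional bound
\begin{align*}
\||x|^{-s}u\|_{L^2(\mathbb{R}^2)}\leq C_s\||D_x|^{s}u\|_{L^2(\mathbb{R}^2)},\quad 0<s<1,
\end{align*}
applied with $s=\frac{\mu}{2}$; since the symbol of $|D_x|^{\frac{\mu}{2}}\jap{D_x}^{-\frac{\mu}{2}}$ is bounded by one, this gives $|x|^{-\frac{\mu}{2}}\jap{D_x}^{-\frac{\mu}{2}}\in B(L^2(\mathbb{R}^2))$.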
\begin{lem}
We have
\begin{align}\label{semicl}
\sup_{z\in\mathbb{C}\setminus \mathbb{R},\,\,|z|\geq 1}\|\jap{x}^{-\gamma}\jap{D_x}^{s}(H-z)^{-1}\jap{D_x}^{s} \jap{x}^{-\gamma}\|<\infty,
\end{align}
for $0\leq s\leq \frac{1}{2}$ and $\gamma>\frac{1}{2}$.
\end{lem}
\begin{proof}
This lemma directly follows from high-energy resolvent estimates, which hold due to assumption $(i)$ on the potential.
\end{proof}
\begin{prop}
We have
\begin{align}\label{generalbd1}
\sup_{z\in\mathbb{C}\setminus \mathbb{R},\,\,|z|\leq 1}\|\jap{x}^{-\gamma}\jap{D_x}^s(H-z)^{-1}\jap{D_x}^s \jap{x}^{-\gamma}\|<\infty,
\end{align}
for $0\leq s\leq 1$ and $\gamma>\frac{1}{2}+\frac{\mu}{4}$.
\end{prop}
\begin{rem}
In our proof of Theorem \ref{main}, the repulsive conditions $(ii)$ and $(iii)$ are used for this proposition only.
\end{rem}
\begin{proof}
Let $\psi\in C_c^{\infty}(\mathbb{R})$ satisfy $\psi(t)=1$ on $|t|\leq 2$.
By $(\ref{Bound1})$ and Nakamura's result (\cite[Theorem 1.8]{N}):
\begin{align*}
\sup_{z\in \mathbb{C}\setminus \mathbb{R}}\|\jap{x}^{-\gamma}(H-z)^{-1}\jap{x}^{-\gamma}\|<\infty,
\end{align*}
it turns out that
\begin{align}\label{generalbd2}
\sup_{z\in\mathbb{C}\setminus \mathbb{R}}\|\jap{x}^{-\gamma}\jap{D_x}^{s}\psi(H)(H-z)^{-1}\psi(H) \jap{D_x}^{s} \jap{x}^{-\gamma}\|<\infty.
\end{align}
We note that $(H-z)^{-1}(1-\psi(H)^2)$ is bounded from $H^{-s}(\mathbb{R}^2)$ to $H^{s}(\mathbb{R}^2)$ and that its operator norm is uniformly bounded in $|z|\leq 1$. Thus $(\ref{generalbd1})$ follows from the estimate $(\ref{generalbd2})$.
\end{proof}
\subsection{The case $0<\mu\leq 1$}
In this subsection, we assume $0<\mu\leq 1$. By virtue of $(\ref{Bound1})$, for proving Theorem \ref{main}, we only need to show
\begin{align}\label{Bound2}
\sup_{z\in \mathbb{C}\setminus \mathbb{R}}\|\jap{x}^{-1}\jap{D_x}^{\frac{\mu}{2}}(H-z)^{-1}\jap{D_x}^{\frac{\mu}{2}} \jap{x}^{-1}\|<\infty.
\end{align}
This bound directly follows from $(\ref{semicl})$ and $(\ref{generalbd1})$, since $\frac{\mu}{2}\leq \frac{1}{2}$ and $1>\frac{1}{2}+\frac{\mu}{4}$ for $\mu<2$.
This completes the proof.
\subsection{The case $1<\mu< 2$}
In this subsection, we assume $1<\mu< 2$. In this case, the inequality $(\ref{Bound2})$ might be false even if $H$ is replaced by $H_0$; the problematic regime is the high-energy case $|z|\geq 1$.
By $(\ref{Bound1})$ and $(\ref{generalbd1})$, for proving Theorem \ref{main}, it suffices to show that
\begin{align}\label{Bound3}
\sup_{z\in \mathbb{C}\setminus \mathbb{R},\,|z|\geq 1}\|\chi(x)|x|^{-\frac{\mu}{2}}(H-z)^{-1}|x|^{-\frac{\mu}{2}}\chi(x)\|<\infty.
\end{align}
We denote $R(z)=(H-z)^{-1}$ and $R_0(z)=(H_0-z)^{-1}$. We shall prove the more general estimates
\begin{align}\label{Bound4}
\sup_{z\in \mathbb{C}\setminus \mathbb{R},\,|z|\geq 1}\|R(z)\|_{p\to p^*}<\infty,\quad \text{for}\quad 1<p\leq \frac{6}{5},
\end{align}
which are proved in \cite{IS} for the middle-energy case $|z|\sim 1$.
In fact, since $\chi(x)|x|^{-\frac{\mu}{2}}\in L^q(\mathbb{R}^2)$ for some $2<q\leq 3$, the estimate $(\ref{Bound3})$ follows from H\"older's inequality and $(\ref{Bound4})$.
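One way to arrange the exponents in this H\"older argument is the following sketch; here we assume that $p^*$ denotes the exponent conjugate to $p$. Pick $q\in(2,3]$ as above and set $\frac{1}{p}=\frac{1}{2}+\frac{1}{q}$, so that $p\in(1,\frac{6}{5}]$ and $\frac{1}{p^*}=\frac{1}{2}-\frac{1}{q}$. Then, for $u\in L^2(\mathbb{R}^2)$,
\begin{align*}
\|\chi|x|^{-\frac{\mu}{2}}R(z)|x|^{-\frac{\mu}{2}}\chi u\|_{L^2}\leq \|\chi|x|^{-\frac{\mu}{2}}\|_{L^q}\|R(z)\|_{p\to p^*}\||x|^{-\frac{\mu}{2}}\chi u\|_{L^p}\leq \|\chi|x|^{-\frac{\mu}{2}}\|_{L^q}^2\|R(z)\|_{p\to p^*}\|u\|_{L^2}.
\end{align*}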
By the resolvent identity, we have
\begin{align*}
R(z)=R_0(z)-R_0(z)VR_0(z)+R_0(z)VR(z)VR_0(z),
\end{align*}
which implies
\begin{align}
\|R(z)\|_{p\to p^*}\leq& \|R_0(z)\|_{p\to p^*}+\|R_0(z)\|_{B(\mathcal{B},L^{p^*})}\|V\|_{B(\mathcal{B}^*, \mathcal{B})}\|R_0(z)\|_{B(L^p,\mathcal{B}^*)}\nonumber\\
&+\|R_0(z)\|_{B(\mathcal{B},L^{p^*})}\|R(z)\|_{B(\mathcal{B}, \mathcal{B}^*)}\|V\|_{B(\mathcal{B}^*, \mathcal{B})}^2\|R_0(z)\|_{B(L^p,\mathcal{B}^*)},\label{Bound5}
\end{align}
where we denote $D_j=\{x\in \mathbb{R}^2\mid |x|\in [2^{j-1},2^j]\}$ for $j\geq 1$, $D_0=\{x\in \mathbb{R}^2\mid |x|\leq 1\}$ and
\begin{align*}
\mathcal{B}=\{u\in L^2_{loc}(\mathbb{R}^2) \mid \sum_{j=0}^{\infty}2^{j/2}\|u\|_{L^2(D_j)}<\infty\},\,\, \mathcal{B}^*=\{u\in L^2_{loc}(\mathbb{R}^{2})\mid \sup_{j\geq 0}2^{-j/2}\|u\|_{L^2(D_j)}<\infty\}.
\end{align*}
Since $\jap{x}^{-s}L^2(\mathbb{R}^2)\subset \mathcal{B}\subset \mathcal{B}^*\subset \jap{x}^{s}L^2(\mathbb{R}^2)$ for $s>\frac{1}{2}$ and $|V(x)|\leq C\jap{x}^{-\mu}$ with $\mu>1$, we have $V\in B(\mathcal{B}^*, \mathcal{B})$. The bounds for the free resolvent
\begin{align*}
\sup_{z\in \mathbb{C}\setminus \mathbb{R},\,|z|\geq 1}\|R_0(z)\|_{p\to p^*},\,\, \sup_{z\in \mathbb{C}\setminus \mathbb{R},\,|z|\geq 1}\|R_0(z)\|_{B(\mathcal{B},L^{p^*})},\,\,\sup_{z\in \mathbb{C}\setminus \mathbb{R},\,|z|\geq 1}\|R_0(z)\|_{B(L^p,\mathcal{B}^*)}<\infty
\end{align*}
are well known (\cite{IS}, \cite{KRS}, \cite{RV}). Moreover, the standard limiting absorption principle between Besov spaces gives
\begin{align*}
\sup_{z\in \mathbb{C}\setminus \mathbb{R}}\|R(z)\|_{B(\mathcal{B}, \mathcal{B}^*)}<\infty.
\end{align*}
Hence the right-hand side of $(\ref{Bound5})$ is uniformly bounded in $|z|\geq 1$, which implies $(\ref{Bound4})$.
\begin{rem}
In the proof above, we can avoid the use of Besov spaces. In fact, if we take $s>\frac{1}{2}$ such that $V\in B(\jap{x}^{s}L^2(\mathbb{R}^2), \jap{x}^{-s}L^2(\mathbb{R}^2))$ and replace $\mathcal{B}$ by $\jap{x}^{-s}L^2(\mathbb{R}^2)$ and $\mathcal{B}^*$ by $\jap{x}^{s}L^2(\mathbb{R}^2)$, the preceding argument remains valid.
\end{rem}
\begin{thebibliography}{99}
\bibitem{BM} J. Bouclet and H. Mizutani, Uniform resolvent and Strichartz estimates for Schr\"odinger equations with scaling critical potentials, Trans. Amer. Math. Soc. 370 (2018), 7293--7333.
\bibitem{CK} M. Christ and A. Kiselev, Maximal functions associated to filtrations, J. Funct. Anal. 179 (2001), 409--425.
\bibitem{IS} A. D. Ionescu and W. Schlag, Agmon--Kato--Kuroda theorems for a large class of perturbations, Duke Math. J. 131 (2006), no. 3, 397--440.
\bibitem{KY} T. Kato and K. Yajima, Some examples of smooth operators and the associated smoothing effect, Rev. Math. Phys. 1 (1989), no. 4, 481--496.
\bibitem{KRS} C. Kenig, A. Ruiz and C. Sogge, Uniform Sobolev inequalities and unique continuation for second order constant coefficient differential operators, Duke Math. J. 55 (1987), 329--347.
\bibitem{M2} H. Mizutani, Strichartz estimates for Schr\"odinger equations with slowly decaying potentials, J. Funct. Anal. 279 (2020), 108789.
\bibitem{N} S. Nakamura, Low energy asymptotics for Schr\"odinger operators with slowly decreasing potentials, Comm. Math. Phys. 161 (1994), 63--76.
\bibitem{RS} M. Reed and B. Simon, \textit{Methods of Modern Mathematical Physics}, Vol. I--IV, Academic Press, 1972--1980.
\bibitem{RV} A. Ruiz and L. Vega, On local regularity of Schr\"odinger equations, Internat. Math. Res. Notices (1993), 13--27.
\bibitem{RodSc} I. Rodnianski and W. Schlag, Time decay for solutions of Schr\"odinger equations with rough and time-dependent potentials, Invent. Math. 155 (2004), no. 3, 451--513.
\end{thebibliography}
\end{document}
\begin{document}
\title{The Fixed Point of the Composition of Derivatives
\thanks{This research was completed during the Workshop on
Geometric and Dynamical aspects of
Measure Theory in R\'evf\"ul\"op, which was supported by the Erd\H os Center.}}
\begin{abstract}
We give an affirmative answer to a question of K. Ciesielski by showing
that the composition $f\circ g$ of two derivatives $f,g:[0,1]\to[0,1]$
always has a fixed point. Using Maximoff's theorem we obtain that the
composition of two $[0,1]\to[0,1]$ Darboux Baire-1 functions must also
have a fixed point.
\end{abstract}
\section*{Introduction}
In \cite{GN} R. Gibson and T. Natkaniec mentioned the question of
K. Ciesielski, which asks whether the composition $f\circ g$ of two
derivatives $f,g:[0,1]\to[0,1]$ must always have a fixed point.
Our main result is an affirmative answer to this question.
(An alternative proof has also been found by M. Cs\"ornyei,
T. C. O'Neil and D. Preiss, see \cite{CNP}.)
\begin{theorem}\label{thm:fo}
Let $f$ and $g:[0,1]\to[0,1]$ be derivatives. Then $f\circ g$ has a fixed
point.
\end{theorem}
Since any bounded approximately continuous function is a derivative
(see e.g. \cite{Br}) we get the following:
\begin{corollary}\label{appr}
Let $f$ and $g:[0,1]\to[0,1]$ be approximately continuous functions.
Then $f\circ g$ has a fixed point.
\end{corollary}
Now suppose that $f,g:[0,1]\to[0,1]$ are Darboux Baire-1 functions.
Then, by Maximoff's theorem (\cite{Ma}, see also in \cite{Pr}),
there exist homeomorphisms
$h,k:[0,1]\to[0,1]$ such that $f\circ h$ and $g\circ k$ are approximately
continuous functions. Then clearly
$\tilde f = k^{\textrm{-}1}\circ f\circ h$ and
$\tilde g = h^{\textrm{-}1}\circ g\circ k$ are also $[0,1]\to[0,1]$
approximately continuous functions and $f\circ g$ has a fixed point if and
only if $\tilde f\circ \tilde g$ has a fixed point. Thus from
Corollary~\ref{appr} we get the following:
\begin{corollary}\label{DB1}
Let $f$ and $g:[0,1]\to[0,1]$ be Darboux Baire-1 functions. Then $f\circ
g$ has a fixed point.
\end{corollary}
\section*{Preliminaries}
We shall denote the square $[\textrm{-}1,2]\times[\textrm{-}1,2]$ by
$Q$. The partial derivatives of a function $h:\mathbb{R}^2\to\mathbb{R}$ with respect to
the $i$-th variable ($i=1,2$) will be denoted by $\partial_i h$. The sets
$\{h=c\}$, $\{h\neq c\}$, $\{h>c\}$, etc.\ will denote the appropriate
level sets of the function $h$. The notations $\cl A, \interior A$ and
$\partial A$ stand for the closure, interior and boundary of a set $A$,
respectively. By \emph{component} we shall always mean connected component.
\section{Skeleton of the proof of Theorem~\ref{thm:fo}}\label{skeleton}
Let $f,g:[0,1]\to[0,1]$ be the derivatives of $F$ and $G$, respectively. We
can extend $F$ and $G$ linearly to the complement of the unit interval such
that they remain differentiable and such that the derivatives of the
extended functions are still between 0 and 1. Therefore we may assume that
$f$ and $g$ are derivatives defined on the whole real line and $0\le f,g\le
1$.
The key step of the proof is the following. Let
\begin{equation}\label{formula}
H(x,y)=F(x)+G(y)-xy\ \ \ (\ (x,y) \in\mathbb{R}^2\ ),
\end{equation}
\begin{center}
$\textrm{where } F'=f,\ G'=g:\mathbb{R}\to [0,1] \textrm{ are derivatives}.$
\end{center}
Then $H$ is differentiable as a function of two variables, and its gradient
is
\begin{equation}\label{gradient}
H'(x,y) = (f(x)-y,g(y)-x).
\end{equation}
Thus the gradient vanishes at $(x_0,y_0)$ if and only if $x_0$ is a fixed
point of $g\circ f$ and $y_0$ is a fixed point of $f\circ g$. Moreover, the
gradient cannot be zero outside the closed unit square, since $0\le f,g\le
1$. Therefore it is enough to prove that there exists a point in $\mathbb{R}^2$
where the gradient vanishes.
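To spell this out: $H'(x_0,y_0)=(0,0)$ means $f(x_0)=y_0$ and $g(y_0)=x_0$, and hence
\[
(f\circ g)(y_0)=f(x_0)=y_0,\qquad (g\circ f)(x_0)=g(y_0)=x_0.
\]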
We argue by contradiction, so throughout the paper we shall suppose that
the gradient of $H$ nowhere vanishes.
Let us now examine the behavior of $H$ on the edges of $Q$. By
(\ref{gradient}) the partial derivative $\partial_1 H$ is clearly negative
on the top edge of $Q$ and positive on the bottom edge, and similarly
$\partial_2 H$ is positive on the left edge and negative on the right
edge. This implies the following.
\begin{lemma}\label{monotone}
$H$ is strictly decreasing on the top and right edges, and strictly
increasing on the bottom and left edges of $Q$.
\end{lemma}
Consequently,
\[
\max\left(H(\textrm{-}1,\textrm{-}1),H(2,2)\right)<
\min\left(H(\textrm{-}1,2), H(2,\textrm{-}1)\right).
\]
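To see this, compare the values of $H$ at the corners along the edges of $Q$, using Lemma~\ref{monotone}:
\[
H(\textrm{-}1,\textrm{-}1)<H(\textrm{-}1,2),\quad
H(\textrm{-}1,\textrm{-}1)<H(2,\textrm{-}1),\quad
H(2,2)<H(\textrm{-}1,2),\quad
H(2,2)<H(2,\textrm{-}1),
\]
where the first two inequalities follow from the monotonicity along the left and bottom edges, and the last two from the monotonicity along the top and right edges.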
This inequality suggests that there must be a kind of `saddle point' in
$Q$. Therefore define
\[
c=\inf\set{d}{(\textrm{-}1,\textrm{-}1) \text{ and } (2,2) \text{ are in
the same component of } \{H\leq d\} \cap Q }
\]
as a candidate for the value of $H$ at a saddle point.
At a typical saddle point $q$ of some smooth function $K:\mathbb{R}^2\to\mathbb{R}$
such that $K(q)=d$, the level set in a neighborhood of $q$ consists of two
smooth curves intersecting each other at $q$. In Section~\ref{proof:c} we
shall prove the following theorem, which states that the level set
$\{H=c\}$ behaves indeed in a similar way.
\begin{theorem}\label{c}
$\{H=c\} \cap Q$ intersects each edge of $Q$ at exactly one point (which is
not an endpoint of the edge). Moreover, this level set cuts $Q$ into four
pieces, that is, $\{H\neq c\} \cap Q$ has four components, and each of these
components contains one of the corners of $Q$. The values of $H$ are
greater than $c$ in the components containing the top left and the bottom
right corners, and less than $c$ in the components containing the bottom left
and the top right corner.
\end{theorem}
This theorem suggests the picture that the level set $\{H=c\} \cap Q$ looks
like the union of two arcs crossing each other, one from the left to the
right and one from the top to the bottom.
We are particularly interested in the existence of this `crossing point',
because at such a point four components of $\{H\neq c\} \cap Q$ should
meet. Indeed, we shall prove the following (see
Section~\ref{proof:branching}).
\begin{theorem}\label{branching}
There exists a point $p \in \{H=c\} \cap \interior Q$ that is in the
closure of more than two components of $\{H\neq c\} \cap Q$.
\end{theorem}
\begin{rem}
It is in fact not much harder to see that there exists a point $p \in
\{H=c\} \cap \interior Q$ in the intersection of the closure of exactly
four components.
\end{rem}
But on the other hand we shall also prove (see
Section~\ref{proof:non-branching}) the following theorem about the local
behavior of the level set.
\begin{theorem}\label{non-branching}
Every $p \in \{H=c\} \cap \interior Q$ has a neighborhood that intersects
exactly two components of $\{H\neq c\} \cap Q$.
\end{theorem}
However, the last two theorems clearly contradict each other, which will
complete the proof of Theorem~\ref{thm:fo}.
\section{Proof of Theorem~\ref{c}}\label{proof:c}
Recall that we defined
\[
c=\inf\set{d}{(\textrm{-}1,\textrm{-}1) \text{ and } (2,2) \text{ are in
the same component of } \{H\leq d\} \cap Q }.
\]
\begin{lemma}\label{min}
In fact, $c$ is a minimum; that is,
$(\textrm{-}1,\textrm{-}1) \text{ and } (2,2)$ are in
the same component of $\{H\leq c\} \cap Q$.
\end{lemma}
\begin{proof}
Let $K_n$ be the component of
$\lset{H\leq c+1/n} \cap Q$ containing both $(\textrm{-}1,\textrm{-}1)$ and $(2,2).$
Then $K_n$ is clearly a decreasing sequence of compact connected sets.
A well known theorem (see e.g. \cite[I. 9. 4]{Wh}) states
that the intersection of such a sequence is connected.
Hence $\bigcap_n K_n\subset
\lset{H\leq c}$ is connected and contains both $(\textrm{-}1,\textrm{-}1)$ and
$(2,2)$, which shows that
$(\textrm{-}1,\textrm{-}1) \text{ and } (2,2)$ are in
the same component of $\{H\leq c\} \cap Q$.
\end{proof}
\begin{lemma}
$\max\lset{H(\textrm{-}1,\textrm{-}1),H(2,2)}<c<\min\lset{H(\textrm{-}1,2),H(2,\textrm{-}1)}.$
\end{lemma}
\begin{proof}
To show $H(2,2)<c$ consider the polygon $P$ with vertices
$(\textrm{-}1,2)$, $(1,2)$, $(2,1)$ and $(2,\textrm{-}1)$.
Since $\partial_1 H(x,y)=f(x)-y<0$ if $y>1$ and
$\partial_2 H(x,y)=g(y)-x<0$ if $x>0$, it is easy to check that
$H$ is strictly bigger than $H(2,2)$ on all of $P$.
Therefore $P$ connects $(\textrm{-}1,2)$ and $(2,\textrm{-}1)$ in $\lset{H>H(2,2)}$,
thus $(\textrm{-}1,\textrm{-}1)$ and $(2,2)$ cannot be in the same component of
$\lset{H\le H(2,2)}$. By the previous Lemma this implies that
$H(2,2)<c$. Proving that $H(\textrm{-}1,\textrm{-}1)<c$ is similar.
To show that $c<H(\textrm{-}1,2)$ consider the polygon $P'$
with vertices $(\textrm{-}1,\textrm{-}1)$, $(\textrm{-}1,1)$, $(0,2)$ and
$(2,2)$. Like above one
can show that $H$ is strictly less than $H(\textrm{-}1,2)$ on the whole $P'$,
so denoting the maximum
of $H$ on $P'$ by $d$, we have $d<H(\textrm{-}1,2)$.
Since $P'$ connects $(\textrm{-}1,\textrm{-}1)$
and $(2,2)$ in $\{H\le d\}\cap Q$ we also have $c\le d$, therefore
we have $c<H(\textrm{-}1,2)$. Proving that $c<H(2,\textrm{-}1)$ is similar.
\end{proof}
The previous Lemma together with Lemma~\ref{monotone} clearly implies
the first statement of Theorem~\ref{c}: $\lset{H=c}\cap Q$ intersects
each edge of $Q$ at exactly one point (which is not an endpoint of
the edge). Since $4$ points cut $\partial Q$ into four components
we also get that $\lset{H\neq c}\cap \partial Q$ has $4$ components
and each of them contains a vertex of $Q$. Therefore for completing
the proof of Theorem~\ref{c} we have to show that every component
of $\lset{H\neq c}\cap Q$ intersects $\partial Q$ and that
the vertices of $Q$ belong to different components of
$\lset{H\neq c}\cap Q$.
The first claim is clear since if $C$ were a component of
$\lset{H\neq c}\cap Q$ contained in the interior of $Q$ then one of the (global) extrema
of $H$ on the (compact)
closure of $C$ could not be on the boundary of $C$ (where $H=c$),
so this would be a local extremum but a function with non-vanishing
gradient cannot have a local extremum.
To prove the second claim first note that the components of
$\lset{H\neq c}\cap Q$ are the components of $\lset{H<c}\cap Q$
and the components of $\lset{H>c}\cap Q$.
By the previous Lemma,
$(\textrm{-}1,\textrm{-}1)$ and $(2,2)$ are in $\lset{H<c}\cap Q$ while
$(\textrm{-}1,2)$ and $(2,\textrm{-}1)$ are in $\lset{H>c}\cap Q$, so it is enough to
prove that the opposite vertices belong to different components.
Assume that $(\textrm{-}1,\textrm{-}1)$ and $(2,2)$ are in the
same component of $\lset{H<c}\cap Q$. Then there is a continuous curve
in $\lset{H<c}\cap Q$ that connects $(\textrm{-}1,\textrm{-}1)$ and $(2,2).$
Let $d$ be the maximum of $H$ on this curve. Then on one hand
$d<c$; on the other hand,
$(\textrm{-}1,\textrm{-}1) \text{ and } (2,2)$ are in
the same component of $\{H\leq d\} \cap Q$,
so $c\leq d$, which is a contradiction.
Finally, assume that $(\textrm{-}1,2)$ and $(2,\textrm{-}1)$ are in the same component of
$\lset{H>c}\cap Q$. Then there is a continuous curve
in $\lset{H>c}\cap Q$ that connects $(\textrm{-}1,2)$ and $(2,\textrm{-}1)$, so it
also separates $(\textrm{-}1,\textrm{-}1)$ and $(2,2)$. But this is a contradiction
with Lemma~\ref{min}.
\section{Proof of Theorem~\ref{branching}}\label{proof:branching}
The following topological lemma is surely well known. However, we were
unable to find it in the literature, so we sketch a proof here.
\begin{lemma}\label{Zoretti}
Let $M$ be a closed subset of $Q$ and $p\neq q\in M\cap\partial Q.$ Denote
by $E_1$ and $E_2$ the two connected components of $\partial
Q\setminus\{p,q\}$. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $p$ and $q$ are not in the same component of $M$.
\item[(ii)] there are points $e_i\in E_i \ (i=1,2)$ that can be connected
by a polygon in $Q\setminus M$.
\end{itemize}
\end{lemma}
\begin{proof}
The implication (ii) $\mathbb{R}ightarrow$ (i) is obvious. The implication (i)
$\mathbb{R}ightarrow$ (ii) easily follows from Zoretti's Theorem (see \cite{Wh}
Ch. VI., Cor. 3.11), which states that if $K$ is a component of a compact
set $M$ in the plane and $\eps>0$, then there exists a simple closed
polygon $P$ in the $\eps$-neighborhood of $K$ that encloses $K$ and is
disjoint from $M$.
\end{proof}
Now we turn to the proof of Theorem~\ref{branching}. Let us denote by $C_p$
the component of $\{H\neq c\} \cap Q$ that contains the corner $p$ of $Q$
(Theorem~\ref{c} shows that $p\in\{H \neq c\} \cap Q$). Again by
Theorem~\ref{c} we have $\{H\neq c\} \cap Q = C_{(\textrm{-}1,\textrm{-}1)}
\cup C_{(2,2)} \cup C_{(\textrm{-}1,2)} \cup C_{(2,\textrm{-}1)}$, and $H$
is less than $c$ in the first two and greater than $c$ in the last two
components.
\begin{proposition}\label{r}
There exists a point $r \in \cl{C_{(\textrm{-}1,\textrm{-}1)}} \cap
\cl{C_{(2,2)}}$.
\end{proposition}
\begin{proof}
Suppose, on the contrary, that there is no such point. Then the components
of $\{H\leq c\}\cap Q$ are $\cl{C_{(\textrm{-}1,\textrm{-}1)}}$ and
$\cl{C_{(2,2)}}$, hence $(\textrm{-}1,\textrm{-}1)$ and $(2,2)$ are in
different components. Therefore, if we apply the
previous lemma to $\{H\leq c\}\cap Q$ with
$p=(\textrm{-}1,\textrm{-}1)$ and $q=(2,2)$, then we obtain a polygon in
$\{H>c\} \cap Q$ joining either the left or the top edge to either the
right or the bottom edge of $Q$. Let us now join the endpoints of this
polygon to the points $(\textrm{-}1,\textrm{-}1)$ and $(2,2)$ by two
segments. By Lemma~\ref{monotone} these segments must also be in $\{H > c\}
\cap Q$, and we thus obtain a polygon in $\{H > c\} \cap Q$ connecting
$(\textrm{-}1,\textrm{-}1)$ and $(2,2)$, which contradicts Theorem~\ref{c}.
\end{proof}
Our next aim is to show that such a point $r$ cannot be on the edges of
$Q$. The other cases being similar we only consider the case of the left
edge.
\begin{lemma}
$([\textrm{-}1,0) \times [\textrm{-}1,2]) \cap C_{(2,2)} = \emptyset$.
\end{lemma}
\begin{proof}
We have to prove that if $(x,y)\in [\textrm{-}1,0) \times [\textrm{-}1,2]$
is such that $H(x,y)<c$, then $(x,y)\in C_{(\textrm{-}1,\textrm{-}1)}$. In
order to do this, it is sufficient to show that the vertical segment $S_1$
between $(x,y)$ and $(x,\textrm{-}1)$ and the horizontal segment $S_2$
between $(x,\textrm{-}1)$ and $(\textrm{-}1,\textrm{-}1)$ together form a
polygon in $\lset{H<c}$ connecting $(x,y)$ to
$(\textrm{-}1,\textrm{-}1)$. But by (\ref{gradient}) in
Section~\ref{skeleton} the partial derivative $\partial_2 H$ is positive on
$S_1$, thus all values of $H$ are less than $H(x,y)$ here, hence less than
$c$, and $\partial_1 H$ is positive on $S_2$, thus the values of $H$ here
are less than $H(x,\textrm{-}1)$, which is again less than $c$.
\end{proof}
Let $r$ be a point provided by Proposition~\ref{r}. If we repeat the
argument of the previous lemma for the other three edges, then we obtain
that $r$ must be in the interior of $Q$. To complete the proof of
Theorem~\ref{branching} it is enough to show that there exists a component
in which $H$ is greater than $c$ (that is either component
$C_{(\textrm{-}1,2)}$ or $C_{(2,\textrm{-}1)}$) such that the closure of
the component contains $r$. But $r\in \interior Q$, and then this statement
follows from the fact that the gradient of $H$ nowhere vanishes, so $H$
attains no local extremum.
\section{Proof of Theorem~\ref{non-branching}}\label{proof:non-branching}
\begin{lemma}\label{teglalap}
Let $x_1,x_2,y_1,y_2\in\mathbb{R}$ be such that $x_1<x_2$
and $y_1<y_2$. Then $H(x_1,y_2)+H(x_2,y_1)-H(x_1,y_1)-H(x_2,y_2)>0$.
\end{lemma}
\begin{proof}
Using the definition (\ref{formula}) of $H$, a straightforward computation
shows that the value of the above sum in fact equals
$(x_2-x_1)(y_2-y_1)>0.$
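For concreteness, assuming (as the definition (\ref{formula}) used above indicates) that $H(x,y)=F(x)+G(y)-xy$ up to an additive constant, the $F$- and $G$-terms cancel in pairs and
\[
H(x_1,y_2)+H(x_2,y_1)-H(x_1,y_1)-H(x_2,y_2)
= -x_1y_2-x_2y_1+x_1y_1+x_2y_2
= (x_2-x_1)(y_2-y_1).
\]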
\end{proof}
\begin{lemma}\label{Tamas}
Let $(x_0,y_0)\in\mathbb{R}^2$ be such that $H(x_0,y_0)=0$ and $\partial_2
H(x_0,y_0)\neq 0$. Then there exists $\eps>0$ such that if
$x_1\in[x_0-\eps,x_0+\eps]$ and $y_1,
y_2\in[y_0-\eps,y_0+\eps]$ and $H(x_1,y_1)=H(x_1,y_2)=0$ then one
of the following holds:
\begin{rlist}
\item $y_1=y_2=y_0$,
\item $y_1<y_0$ and $y_2<y_0$,
\item $y_1>y_0$ and $y_2>y_0$.
\end{rlist}
\end{lemma}
\begin{proof}
From $H(x_0,y_0)=0$ and (\ref{formula})
we get $G(y_0)=x_0 y_0 - F(x_0)$, while
$\partial_2 H(x_0,y_0)\neq 0$ means
$G'(y_0)\neq x_0$. These imply that for a
sufficiently small $\eps$ if $y_0-\eps<y_1\le y_0\le y_2<y_0+\eps$ and
$y_1\neq y_2$ then the slope of the segment between $(y_1,G(y_1))$ and
$(y_2,G(y_2))$ is not in $[x_0-\eps,x_0+\eps]$.
But if $H(x_1,y_1)=H(x_1,y_2)=0$ then $G(y_1)=x_1 y_1 - F(x_1)$ and
$G(y_2)=x_1 y_2 - F(x_1)$, so the slope of the segment between
$(y_1,G(y_1))$ and $(y_2,G(y_2))$ is $x_1$.
Therefore, with this $\eps$, the conditions of the lemma imply that
one of (i),(ii) or (iii) must hold.
\end{proof}
Before turning to the proof of Theorem~\ref{non-branching}
observe that if
$H$ is given by equation (\ref{formula}) then $H_1(x,y)=-H(1-x,y)$ is
also of the form (\ref{formula}).
Indeed, taking $F_1(x)=-F(1-x)$ and $G_1(y)=y-G(y)$, we get the
derivatives $f_1(x)=F'_1(x)=f(1-x)$ and $g_1(y)=G'_1(y)=1-g(y) : [0,1]\to[0,1]$,
and $H_1(x,y)=F_1(x)+G_1(y)-xy= -H(1-x,y).$
It is also clear that if $H$ is of the form (\ref{formula}) then
$H_2(x,y)=H(y,x)$ is also of the form (\ref{formula}). This means
that when we study the local behavior of the function $H$ around an
arbitrary point $(x,y)\in\lset{H=0}\cap Q$ we can assume that
$\partial_2 H(x,y)>0$ as this is
true (at the appropriate point)
for at least one of the functions we can get using these
symmetries.
By adding a constant to $H$ we can also assume that $c=0$.
Therefore for proving Theorem~\ref{non-branching}
we can assume these restrictions,
so it is enough to prove the following.
\begin{claim}\label{two}
Let $(x_0,y_0)\in\interior Q$ be such that $H(x_0,y_0)=0$ and
$\partial_2 H(x_0,y_0)>0$.
Then there exists a
neighbourhood of $(x_0,y_0)$ that intersects exactly two components of
$\{H\neq 0\}\cap Q$.
\end{claim}
\begin{proof}
Every neighbourhood of $(x_0,y_0)$
has to meet at least two components, otherwise
$(x_0,y_0)$ would be a local extremum, but we assumed
that the gradient of $H$ is nowhere vanishing.
In order to show that a small neighborhood of $(x_0,y_0)$ meets at most
two components it is enough to construct a polygon in $Q$
around the point which intersects $\{H=0\}$ at exactly two points. Indeed,
in this case there can be at most two components that intersect the
polygon
(since two points cut a polygon into two components).
On the other hand, every
component of $\{H\neq 0\}\cap Q$ that intersects the interior of the polygon
has to
meet the
polygon itself,
since by Theorem~\ref{c} each component of $\{H\neq 0\}\cap Q$
contains a vertex of $Q$.
To construct such
a polygon it is enough to find two rectangles in $Q$, one to the left and
one to
the right from $(x_0,y_0)$, which contain this point on vertical edges (but
not as a vertex) and which both intersect $\{H=0\}$ in just one additional
point. (See Fig.~\ref{fig:polygon}.)
\begin{figure}\label{fig:polygon}
\end{figure}
Since $\partial_2 H(x_0,y_0)>0$ we can find a small
open sector in $Q$
with vertex $(x_0,y_0)$ with vertical axis upwards, where $H$ is
positive and a similar negative sector
downwards. (See Fig. \ref{fig:harom}. and Fig. \ref{fig:negy}.)
Choose the radius of the sectors smaller than the $\eps$ we get in
Lemma~\ref{Tamas}. Choose $x_1<x_0$ and $x_2>x_0$ such that
the vertical lines through them intersect the two sectors.
\begin{figure}\label{fig:harom}
\end{figure}
First we construct the rectangle on the right hand side. (See Fig. \ref{fig:harom}.)
If (i) holds in
Lemma \ref{Tamas} for the point $(x_0,y_0)$ and for $x_2$ then we are done,
so as the other two cases are similar we assume that (ii) holds. This means
that even for the maximum $y_1$ of the zero set of $H$ between the two
sectors on this vertical line, $y_1<y_0$ holds.
We choose the leftmost point $(x_3,y_1)$ of the zero set on the segment
$[x_0,x_2]\times\{y_1\}$ as the lower right corner of the rectangle and we
choose the upper right corner from the upper sector.
Clearly there are only negative values on the bottom edge, so
what remains to show is that all values are positive above $(x_3,y_1)$ on
the vertical line until it reaches the upper sector. If $x_3=x_2$ then we
are done, otherwise this is an easy consequence of Lemma \ref{teglalap}
once we apply it to $x_3,x_2,y_1$ and any number $y_2>y_1$ such that
$(x_3,y_2)$ is still between the two sectors.
\begin{figure}\label{fig:negy}
\end{figure}
Now we turn to the left hand side. (See Fig. \ref{fig:negy}.)
We assume again that (ii) of
Lemma~\ref{Tamas} holds for
$x_1$, and choose a maximal $y_3$ such that $(x_1,y_3)$ is between the two
sectors and $H(x_1,y_3)=0$.
Note that, by (ii), we have $y_3<y_0$.
Let $(x_4,y_3)$ be the rightmost point of the zero set of $H$ on the segment
$[x_1,x_0]\times\{y_3\}$.
Let us now define the lower left corner $(x_4,y_4)$
of the rectangle as the point of maximal $y$ coordinate on the vertical
line $x=x_4$ between the sectors, where $H$ vanishes.
By Lemma \ref{Tamas}, $y_4<y_0$
and clearly all values on the vertical half-line are positive until it
reaches the upper sector. Thus we only have to show that $H(x,y_4)<0$ for
all $x_4<x<x_0$, which easily follows from Lemma \ref{teglalap} once we
apply it to $x_4,x,y_3$ and $y_4$.
\end{proof}
\begin{rem}
Theorem~\ref{non-branching} is not true for every $\mathbb{R}^2\to\mathbb{R}$
differentiable function with nowhere vanishing gradient. For an example
see \cite{Bu}.
\end{rem}
\end{document} |
\begin{document}
\title{One Hundred and Twelve Point Three Degree Theorem}
\author{George Tokarsky, Jacob Garber, Boyan Marinov, Kenneth Moore\\University of Alberta}
\maketitle
\section{Introduction}
It has been known since Fagnano in 1775 that an acute triangle always has a periodic billiard path, namely the orthic triangle. In 1993, Holt \cite{holt} showed that every right triangle has a periodic path using simple arguments. It is currently unknown whether every obtuse triangle has a periodic path, and there don't appear to be any simple arguments. In between, there have been various partial results. In 1986, Masur \cite{masur} proved that every rational obtuse triangle has a periodic path, and in 2006, Schwartz \cite{schwartz} showed that every obtuse triangle with obtuse angle at most 100 degrees has a periodic path using a computer assisted proof. The aim of this paper is to show that every obtuse triangle with obtuse angle at most 112.3 degrees has a periodic path using a different computer assisted proof.
\section{Side, Code and Alphabet Sequences}
Drawing inspiration from the game of billiards, consider a frictionless particle moving about the interior of a triangle. If the particle encounters a side, it bounces off according to the law of reflection, and if the particle hits a vertex, it is considered to end there. A \textbf{billiard trajectory} or \textbf{poolshot} is the path this particle traces out as it moves within the triangle. A poolshot is said to be \textbf{periodic} if the particle eventually returns to its starting point and repeats the same path over again.
\subsection{Side Sequences}
To begin classifying billiard trajectories, we first need some notation. Given a triangle ABC with angles \( x \), \( y \), and \( z \), we label the side AB opposite \( z \) with 1, the side BC opposite \( x \) with 2, and the side AC opposite \( y \) with 3. For any periodic billiard trajectory in the triangle, we then define its \textbf{side sequence} to be the list of consecutive sides that are hit during one period. For example, for the periodic billiard trajectory shown in Figure 1, if we begin at side 2 and continue to side 1, it has the side sequence 213132313 of \textbf{length} 9. Observe that no two consecutive positive integers from 1,2,3 are the same, including the first and the last; a finite sequence of integers from 1,2,3 with this property is what we formally call a \textbf{legal side sequence}. We call a finite side sequence \textbf{repeating} if the last integer in it is followed by the first integer and then the integers keep repeating.
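The defining property of a legal side sequence is easy to test mechanically. The following is a minimal sketch in Python (ours, not part of the paper); the function name is hypothetical.
\begin{verbatim}
# A sketch (ours): a list of sides from {1,2,3} is a legal side sequence
# iff no two cyclically consecutive entries coincide (the first and the
# last entry count as consecutive).
def is_legal_side_sequence(sides):
    n = len(sides)
    return n > 0 and all(sides[i] != sides[(i + 1) % n] for i in range(n))

# is_legal_side_sequence([2, 1, 3, 1, 3, 2, 3, 1, 3])  ->  True
\end{verbatim}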
\begin{figure}
\caption{Periodic Billiard Trajectory 213132313}
\label{fig:traj}
\end{figure}
There are other equivalent side sequences for the trajectory in Figure \ref{fig:traj} that we could consider. Given the periodic nature of the trajectory, the initial starting side and direction are both arbitrary, so we could instead start on side 3 and continue to side 1, giving the side sequence 313213132. These choices correspond to rotations and reversals of the original sequence, respectively. Likewise, we could continue around the triangle for as many periods as we wish, leading to the infinite family of side sequences 313213132313213132, 313213132313213132313213132, etc. All of these periodic sequences represent the same trajectory, so it is convenient to pick one among them as being canonical.
A periodic side sequence is said to be in \textbf{standard form} if it has one period, and is lexicographically least among all its rotations and reversals. All side sequences can be converted to an equivalent one in standard form. For example, the standard form of the side sequence 213132313 is 123132313. In this way, every periodic billiard trajectory in triangle ABC can be uniquely identified by a side sequence in standard form.
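Computing the standard form is a finite search over all rotations and reversals. A short sketch (ours, not the authors' code) follows; the same routine applies verbatim to the code sequences introduced below.
\begin{verbatim}
# A sketch (ours): the standard form is the lexicographically least
# sequence among all rotations of the sequence and of its reversal.
def standard_form(seq):
    n = len(seq)
    candidates = []
    for s in (list(seq), list(reversed(seq))):
        candidates.extend(s[i:] + s[:i] for i in range(n))
    return min(candidates)

# standard_form([2, 1, 3, 1, 3, 2, 3, 1, 3]) -> [1, 2, 3, 1, 3, 2, 3, 1, 3],
# i.e. 123132313 as in the text.
\end{verbatim}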
A periodic side sequence is said to be in \textbf{extra standard form} if we renumber the sides to make the side sequence minimal. For example the side sequence 3132 can be minimized to 1213 by renumbering the sides. In this way any side sequence can be written in the form 12...23 or 12...13.
Note that given a legal side sequence, there may not exist any triangle with a periodic billiard trajectory of that type. We call such side sequences \textbf{empty}.
Observe also that any periodic path in a triangle must hit all three sides since it will eventually bounce out of any angle of the triangle and hence any periodic side sequence must include all three integers 1, 2 and 3.
\subsection{Code Sequences}
Long side sequences quickly become cumbersome to work with, so it is helpful to introduce a more compact notation. For each pair of adjacent integers in the side sequence (including the last and the first), write out the angle between the corresponding sides. For example in the side sequence 123231323, this is \(y z z z x x z z x \), where the first y is the angle between the first 1 and 2, and so on up to the last x, which is the angle between the last 3 and the first 1. Then, group the consecutive angles together, shuffling identical angles from the back to the front if necessary, and count the number of angles in each group. This gives \( 1y\ 3z\ 2x\ 2z\ 1x \). The sequence of integers 1 3 2 2 1 with spaces is called the \textbf{code sequence}, consisting of five individual positive integers each called a \textbf{code number}, and the sequence of angles \( y\ z\ x\ z\ x \) with spaces is called the \textbf{angle sequence}. The \textbf{length} of the code is the number of code numbers, and the code is called an \textbf{odd code} or an \textbf{even code} according to the parity of this length; the \textbf{sum} of the code is the sum of the code numbers, which is the same as the length of its corresponding side sequence. Given a code and angle sequence, it is possible to recover the original side sequence if we start it with the first two integers 12... as in our example.
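The conversion from a side sequence to its code and angle sequences can be carried out mechanically. The sketch below (ours, not from the paper) assumes the input is the side sequence of a periodic path; the dictionary ANGLE records that the angle between sides 1 and 2 is y, between 2 and 3 is z, and between 1 and 3 is x.
\begin{verbatim}
# A sketch (ours) of the construction above.
ANGLE = {frozenset({1, 2}): 'y', frozenset({2, 3}): 'z', frozenset({1, 3}): 'x'}

def code_and_angle_sequence(sides):
    n = len(sides)
    # angle between each pair of adjacent sides, including last-to-first
    angles = [ANGLE[frozenset({sides[i], sides[(i + 1) % n]})] for i in range(n)]
    # shuffle identical angles from the back to the front if necessary,
    # so that no group of equal angles wraps around the end
    while len(set(angles)) > 1 and angles[-1] == angles[0]:
        angles.insert(0, angles.pop())
    code, angle_seq, i = [], [], 0
    while i < n:                      # count the angles in each group
        j = i
        while j < n and angles[j] == angles[i]:
            j += 1
        code.append(j - i)
        angle_seq.append(angles[i])
        i = j
    return code, angle_seq

# code_and_angle_sequence([1,2,3,2,3,1,3,2,3])
#   -> ([1, 3, 2, 2, 1], ['y', 'z', 'x', 'z', 'x'])
\end{verbatim}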
Alternately observe that the first code number 1 represents how many times the integers 1 and 2 are interchanged in a row at the start of the side sequence, the second code number 3 then represents the following number of successive interchanges of 2 and 3, the third code number 2 then represents the following number of successive interchanges of 3 and 1 and so forth. In this process assuming we are dealing with a side sequence of a periodic path, we always view the last integer as followed by the first integer and thus in this example there is just one interchange of 3 and 1 at the end. Also note that any code number by itself plus one yields the number of two successive alternate integers in the side sequence. For example the code number 2 corresponds to a side sequence of the form aba which is symmetric about the middle b.
Note that specifying any two consecutive angles in the angle sequence will completely determine the rest of it as long as we know the code sequence. For example, for the sequence 1 3 2 2 1, consider the angles \( y\ z \) for the 1 and the 3. After bouncing once across angle \( y \), the pool ball will bounce 3 times across angle \( z \), and 3 being odd, will end at the opposite side from where it started. It will then bounce twice across angle \( x \), and 2 being even, will end up at the same side as where it started, and then bounce back across the previous angle \( z\). Continuing this pattern will produce the rest of the angles in the angle sequence and, crucially, wrap-around to match the last \( x \) with the first \( y \) we started with.
As with side sequences, a code sequence is in \textbf{standard form} if it is lexicographically least among its rotations and reversals. For example, the standard form of 1 1 3 2 2 is 1 1 2 2 3.
We will call a finite string of positive integers separated by spaces a \textbf{legal code sequence} if it is the \textbf{code sequence of a legal side sequence} and hence potentially represents a periodic path in a triangle.
\textbf{Important comment one:} We will only use the code sequence notation when dealing with a \textbf{repeating} side sequence corresponding to that code sequence.
\noindent
\textbf{Important comment two:} Observe that if a side sequence starts 13 and ends in 2, then given its corresponding code sequence, we can recover the original side sequence by starting the sequence with 13... since successive integers are completely determined by the code numbers. If we choose to recover the side sequence by starting it with 12... then we will recover it with the 3's and 2's interchanged which means it will now end in a 3. This is not a problem as it just represents a relabeling of the sides of triangle ABC. Similarly we can start the side sequence 23 or any combination of 1,2 and 3 with the corresponding relabelling of the triangle. Caution: If the original side sequence is not in standard form and for example starts 13 and ends in 3, then we will recover some relabelling of the original side sequence from the code.
\textbf{Important comment three:} Given a code sequence representing a legal side sequence, we can always write the side sequence in the form ab...bc or ab...ac where a,b and c are distinct.
We also make the convention that if there are three dots in front of (or following) a sequence of code numbers then this means there is at least one \textbf{code number} preceding (or following) that sequence in which case we will call it a \textbf{subcode}. For example ...2 4... is a subcode of the code sequence 2 2 4 4. Caution: A subcode need not be a legal code sequence by itself.
\subsection{Alphabet Code Sequences}
We write each code sequence in terms of odd and even integers without spaces. For instance, the \textbf{alphabet code sequence} of 1 1 2 2 3 is OOEEO. An alphabet code sequence is said to be in \textbf{standard form} if it is alphabetically least among all its rotations and reversals. For example, the standard form of OOEEO is EEOOO.
\noindent
It is easy to determine whether a code sequence is legal or not using the automaton below.
\noindent
\textbf{Algorithm:} An alphabet code sequence is legal if and only if it forms a closed path of E's and O's as in the automaton.
\noindent
\begin{proof}
Each circle of the automaton corresponds to a side of the triangle together with the two angles adjacent to it, say x and y, and an alphabet code sequence determines a walk through these circles following its O's and E's. Every E ends up at the same side but in the opposite direction, while every O ends up at the next side in the same direction. This means a periodic path can only start at some side (the corresponding circle) and must return to it continuing in the same direction; equivalently, there must be a closed path of O's and E's in the automaton. One can then check that a closed path can be reduced to the empty word $\phi$ by successively eliminating strings of type EE, OOO, OEOE or EOEO, OOEOOE, OEOOEO or EOOEOO.
\end{proof}
\begin{figure}
\caption{Automaton for Alphabet Code Sequences}
\label{fig:dfa}
\end{figure}
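A minimal sketch (ours) of this check encodes the automaton directly as a walk on the six states (side, direction): an E keeps the side and reverses the direction, an O advances to the next side in the current direction, and the sequence is legal precisely when the walk is closed. The function name is hypothetical.
\begin{verbatim}
# A sketch (ours) of the algorithm above; sides are 0, 1, 2 taken mod 3.
def is_legal_code(code):
    side, direction = 0, +1
    for c in code:
        if c % 2 == 0:                       # E: same side, opposite direction
            direction = -direction
        else:                                # O: next side, same direction
            side = (side + direction) % 3
    return (side, direction) == (0, +1)      # legal iff the walk is closed

# is_legal_code([1, 1, 1]) -> True    (OOO, the orthic triangle)
# is_legal_code([1, 1])    -> False   (OO is not a closed path)
\end{verbatim}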
Observe from the automaton that a code sequence always has an even number of even numbers. It follows that the length and sum of a code sequence have the same parity.
\section{The Map of Triangles}
We will suppose that $m\angle A=x$, $m\angle B=y$ and $m\angle C=z$, and since $z=180-x-y$, the angles of the triangle are completely determined by the coordinates $(x,y)$ and we will plot this point in an $X-Y$ coordinate system \textbf{only if that triangle has a periodic path}. Note this is independent of the size of the triangle. Observe that if the triangle corresponding to $(a,b)$ has a periodic path, then so do the triangles corresponding to $(a,180-a-b)$, $(b,a)$, $(b,180-a-b)$, $(180-a-b,a)$ and $(180-a-b,b)$. To prove that every triangle has a periodic path amounts to plotting every point $(x,y)$ with $x,y>0$ and $x+y<180$ or equivalently to plotting every point in the region $0<x\leq y\leq z$ as in Figure \ref{fig:mapb}. To prove that all obtuse triangles have a periodic path amounts to plotting every point $(x,y)$ with $x+y<90$ and $x\leq y$ which means it is enough to fill in the shaded region of Figure \ref{fig:mapc}.
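In practice this symmetry lets one plot six points at once; a one-line sketch (ours, with a hypothetical function name):
\begin{verbatim}
# A sketch (ours): the angle pairs obtained from (a, b) by relabelling
# the vertices of the triangle.
def equivalent_points(a, b):
    c = 180 - a - b
    return {(a, b), (a, c), (b, a), (b, c), (c, a), (c, b)}
\end{verbatim}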
Our goal in this paper is to shade in the region bounded by $x=0$, $y=0$, $x+y=80$ and $x+y=67.7$ as shown in Figure \ref{fig:mapd} or equivalently the region bounded by $x=0$, $x\leq y$, $x+y=80$ and $x+y=67.7$ which in conjunction with the other known results will prove that every triangle whose largest angle is 112.3 degrees or less has a periodic path. We call this the 112.3 degree theorem.
\begin{figure*}
\caption{Acute Vs. Obtuse regions}
\caption{Angle Proportion Regions}
\caption{All Unique Obtuse Triangles}
\caption{The 112.3 Degree Theorem}
\caption{The Map of all Triangles}
\label{fig:mapa}
\label{fig:mapb}
\label{fig:mapc}
\label{fig:mapd}
\end{figure*}
\section{Tower of Mirror Images of a Triangle}
\noindent
Let triangle ABC be oriented counterclockwise from A to B to C. A finite sequence of mirror images in the sides of triangle ABC will be called a \textbf{tower} if successive mirror images are in different sides of the triangle. The \textbf{length} of a tower is the number of triangles in it. We will make the convention that the first mirror image is not in side AB which will form the \textbf{base} of the tower. If the last mirror image is in side UV, then either of the other two sides can be viewed as the \textbf{top} of the tower. It is a \textbf{parallel tower} if the last mirror image in side AC or side BC makes side AB parallel to the base and pointing in the same direction as we go from A to B. AB is then the top of the tower.
If we take a successive subset of these mirror images then we will call it a \textbf{subtower} of the given tower which is a tower in its own right allowing its base to be any one of the three sides.
If we number the triangles consecutively the first or starting triangle will always be oriented counterclockwise and so will every odd numbered triangle while every even numbered triangle will be oriented clockwise. All successive mirror images of the vertices A,B and C will be called A,B, and C points. Successive "A" points are successively labelled $A_{0}$ to $A_{n}$, successive "B" points $B_{0}$ to $B_{m}$ and successive "C" points $C_{0}$ to $C_{p}$. This means the original triangle is labelled $A_{0}$$B_{0}$$C_{0}$ and each vertex in the tower has a unique label.
Also note that a tower can overlap itself and that there may or may not be any poolshot associated with it as discussed in the next section. Further the A,B,C points are in fact ordered in an increasing order that follows the ordering of the formation of the sequence of mirror images of the tower. So for example, $i<j$ if and only if $C_{i}$ was formed before $C_{j}$ in the sequence of mirror images. Caution: If the tower overlaps itself, some vertices could have multiple labels which is not a problem as they would belong to different triangles.
We will color the vertices of the tower according to the following rule. $A_{0}$ is a \textbf{blue point} and $B_{0}$ is a \textbf{black point}. If the first reflection is in side AC then C is a black point and if the first reflection is in side BC then C is a blue point. Inductively if vertex U has color blue(black) and the next reflection is in side UV, then V has the opposite color black(blue).
\begin{figure}
\caption{Tower of Mirror Images}
\label{fig:tower}
\end{figure}
We can also use the side sequence notation to describe a tower where the first integer say 1 represents its base and successive integers represent the successive sides in which the mirror images are taken. For example the tower in Figure \ref{fig:tower} can be described by the non-repeating and hence \textbf{non-legal side sequence} 131212121312121313131.
\noindent \textbf{Important comment:} If a side sequence is symmetric about some integer say the integer "i" corresponding to side UV, then the corresponding subtower is symmetric about that same side UV. For example 312123\textbf{1}321213 is symmetric about the bolded 1. As a consequence, the line joining corresponding vertices of symmetric sides will be perpendicular to UV.
\section{Poolshot Towers}
Given triangle ABC oriented counterclockwise from A to B to C and given a finite billiard trajectory or poolshot starting say at side AB and which doesn't hit a vertex, if we \textbf{straighten} out this poolshot upwards by successive reflections in the sides that the poolshot hits then we get a corresponding finite \textbf{tower} of mirror images in which the poolshot is now a straight line segment. All vertices occurring on one side of the straightened poolshot will be blue points while all vertices occurring on the other side will be black points.
It follows that the convex hull of the blue points and the convex hull of the black
points are disjoint. This further means we cannot have a $blue-black-blue$ collinear situation where a black point is between two blue points. Similarly we cannot have a $black-blue-black$ collinear situation.
Since the poolshot starts at AB, if we straighten it out upwards with the base AB placed horizontal with A to the left of B and C above the base which we call \textbf{standard position}, then all blue points are on the $left$ $side$ and all black points are on the $right$ $side$ of the straightened poolshot and we get a \textbf{poolshot tower}. It is worth noting that a parallel poolshot tower must have an even number of triangles in it which is not necessarily the case for a parallel tower.
\begin{figure}
\caption{Poolshot Tower}
\end{figure}
\noindent
\textbf{Convention:} Any straightened poolshot can be viewed as forming the positive Y coordinate axis in an XY coordinate system by introducing a perpendicular X axis through the starting point of the poolshot on side AB. With this convention all blue points will be truly on the left side and all black points will be truly on the right side of the poolshot in this coordinate system.
\section{Periodic Poolshot Towers}
If the poolshot forms a periodic path, then the corresponding poolshot tower is called a \textbf{periodic poolshot tower}. If it starts at side $A_{0}B_{0}$ and is periodic of even length, then it finishes at $A_{n}B_{m}$ for some n and m, where $A_{n}B_{m}$ is \textbf{parallel} to $A_{0}B_{0}$, $A_{0}$ and $A_{n}$ are blue points, and $B_{0}$ and $B_{m}$ are black points, so we have a parallel poolshot tower. If the periodic poolshot is of odd length and finishes at $A_{n}B_{m}$, then $A_{n}B_{m}$ is \textbf{antiparallel} (in the sense that interior angles on the same side of the straightened poolshot are equal) to $A_{0}B_{0}$, with $A_{0}$ a blue point and $A_{n}$ a black point. It follows that if we double the length of the poolshot and go around the periodic path twice, then $A_{2n}B_{2m}$ will be parallel to $A_{0}B_{0}$, both $A_{0}$ and $A_{2n}$ will be on the same side of the straightened poolshot and both will be blue points, and again we end with a parallel poolshot tower.
\begin{figure}
\caption{Periodic Poolshot Tower}
\label{fig:periodicpoolshot}
\end{figure}
\noindent \textbf{Important comment:} In a periodic poolshot tower, the side which is at the top of the tower is completely determined and is the same as the base.
\section{The Poolshot Tower Test}
As previously noted given a poolshot tower the convex hulls of the blue and black points must be disjoint. Conversely if the convex hulls of the blue and black points respectively of a tower are disjoint then by a well known separation theorem there is a line separating the two sets and since A is a blue point and B a black point, that line must go through the base AB of the tower (and also through every segment joining a blue point to a black point) and hence there must be a straightened poolshot which produces the tower.
\textbf{The Poolshot Tower Test:} A tower is a poolshot tower if and only if the convex hulls of the blue and black points don't intersect.
\section{The Periodic Poolshot Tower Test}
If we are given a periodic poolshot tower of \textbf{even length} then as stated previously the base $A_{0}B_{0}$ and the final side $A_{n}B_{m}$ of the tower are parallel line segments. The periodic poolshot that produces the tower will leave some point $P=P_{0}$ on $A_{0}$$B_{0}$ at an angle $\theta$ where $0<\theta\leq90$ and return to that point $P=P_{q}$ on $A_{n}$$B_{m}$ also at the angle $\theta$ but on the other side of the straightened poolshot. Since $A_{0}B_{0}$ and $A_{n}B_{m}$ are parallel, this is the same acute angle between the line $A_{0}A_{n}$ and the base $A_{0}B_{0}$ or between the line $B_{0}B_{m}$ and the base $A_{0}B_{0}$. Indeed $A_{0}A_{n}$ and $B_{0}B_{m}$ are both parallel to the straightened poolshot $P_{0}P_{q}$.
Now observe that any line segment between a blue point and a black point must cross the line through $P_{0}$ and $P_{q}$. Vectorially if $v=(a,b)$ is a non-zero vector from any blue point to any black point and $w=(c,d)$ is a non-zero vector along the straightened poolshot or equivalently along $A_{0}$$A_{n}$ or $B_{0}$$B_{m}$ called the \textbf{shooting vector}, then $bc<ad$.
Conversely, suppose we are given a parallel tower of even length in which $A_{0}B_{0}$ is parallel to $A_{n}B_{m}$ and where $bc<ad$ for every vector from a blue point to a black point. Let U be a blue point farthest to the right (reminding the reader that we are orienting the coordinate system so that $A_{0}A_{n}$ is vertical) and let V be a black point farthest to the left. Since $bc<ad$, there must be a band of non-zero width between the two points, inside which there is a periodic poolshot which produces the given tower, so the tower must be a periodic poolshot tower. Hence we get the
\textbf{Periodic Poolshot Tower Test I:} A parallel tower of even length with base $A_{0}B_{0}$ parallel to $A_{n}B_{m}$ is a periodic poolshot tower if and only if $bc<ad$ (or equivalently $ad-bc>0$) for all vectors $v$ where $v=(a,b)$ is a non-zero vector from any blue point to any black point and $w=(c,d)$ is a vector from $A_{0}$ to $A_{n}$.
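In computations this test is a simple sign check. The following minimal sketch (ours, not the authors' program) assumes the blue and black points of the unfolded tower have already been computed as planar coordinates; the function name is our own.
\begin{verbatim}
# A sketch (ours) of Periodic Poolshot Tower Test I.  blue and black are
# lists of (x, y) coordinates; w = (c, d) is the vector from A_0 to A_n.
def passes_test_I(blue, black, w):
    c, d = w
    for (px, py) in blue:
        for (qx, qy) in black:
            a, b = qx - px, qy - py      # vector from a blue to a black point
            if a * d - b * c <= 0:       # Test I requires ad - bc > 0
                return False
    return True
\end{verbatim}
The tests and notes below restrict the point lists that actually have to be examined.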
\noindent As with repeating side sequences we can also use a code sequence to describe a repeating tower of mirror images of triangle ABC be it a periodic poolshot tower or not. For example the periodic poolshot tower in figure \ref{fig:periodicpoolshot} can be described by the side sequence 131212313121212312 (the sequence of reflected sides) or the code sequence 2 3 1 3 5 1 1 2. Since by convention the first triangle in a tower is oriented counterclockwise, we make a similar convention as before that the first 1 in the side sequence just represents the first appearance of triangle ABC oriented counterclockwise and after that the integers represent the sequence of reflected sides and the resulting triangle that is produced. For example the last 2 in the 14th spot represents a reflection in side BC which produces the 14th triangle in the tower which must be oriented clockwise. Note that corresponding to any subcode, there is a corresponding sequence of mirror images of triangle ABC which is a subtower.
\section{Fans}
A \textbf{fan} is a tower of mirror images of a triangle in which all successive mirror images alternate between the same two sides which means that all triangles of the fan intersect at the same vertex which we will call its \textbf{center}. We can further classify the fans as \textbf{blue fans} if the center is black and all other vertices are blue points or \textbf{black fans} if its center is blue and all other vertices are black. The \textbf{central angle} of a blue or black fan is the maximal angle at its center produced by the blue or black vertices in the tower.
Any tower can be viewed as a succession of fans and in particular any poolshot tower can be viewed as a succession of alternating black and blue fans as cut off by the straightened poolshot. Even more, a blue(black) fan will consist of a black(blue) center, corresponding for example to vertex A, and two circular arcs of blue(black) points on the other side of the straightened poolshot, one arc corresponding to vertex B and one arc corresponding to vertex C. The end vertices of these two \textbf{blue(black) arcs} will be called the \textbf{key blue(black) points} of this blue(black) fan. Any blue or black fan contains at most 4 key points.
\begin{figure}
\caption{One black Fan}
\caption{A Black and Blue Fan}
\caption{Fan}
\end{figure}
\textbf{Fan Fact I}: The center of a fan is a key point of the following and preceding fan of opposite color assuming it has a following or preceding fan.
\textbf{Fan Fact II}: Given a blue fan in a poolshot tower, the key points of each blue arc are the points on the arc closest to the straightened poolshot. Similarly for black fans. This is a consequence of the fact that $\sin\theta$ attains its minimum at the endpoints of the interval $[a,b]$ where $0< a \leq b <180$, or equivalently that $\cos\theta$ attains its minimum at the endpoints of the interval $[a,b]$ where $-90< a \leq b <90$, and that in a poolshot tower the central angle of any fan is less than 180 degrees.
\textbf{Fan Fact III}: Every blue(black) vertex lies on some blue(black) arc whose endpoints are key blue(black) points and whose center is black(blue).
\noindent
\textbf{Labelling of the Centers and Key points in a periodic poolshot tower}
We will assume that the base is $A_{0}B_{0}$ and the first reflection is in side $A_{0}C_{0}$ , then the black point $B_{0}$ has the label $L_{(1,0)}$ and the blue point $A_{0}$ the label $L_{(2,0)}$. Now as the centers alternate between black and blue points the labels increase by one. Observe that all black centers have odd labels $L_{(2i-1,0)}$ and all blue centers have even labels $L_{(2i,0)}$ for $i\geq1$. If the tower has 2m fans in it, then the last labels are $L_{(2m+2,0)}$ and $L_{(2m+1,0)}$ which belong to the last A and B vertices in the tower respectively.
As to the key points, if any, belonging to the fan with center $L_{(k,0)}$: if there are four of them, the first one appearing in the tower after $L_{(k-1,0)}$ is labelled $L_{(k,1)}$ and the second $L_{(k,2)}$, whereas if there are three key points, the one after $L_{(k-1,0)}$ is labelled $L_{(k,1)}$.
Note 1: The number of fans in a periodic poolshot tower in standard form equals the number of code numbers which is always even where we remind the reader that periodic paths of odd side sequence length are doubled in order to get a parallel tower.
Note 2: The first B and the last A vertices can be considered as centers of degenerate fans involving no triangles and are key points and are not counted in the number of fans.
\begin{figure}
\caption{Labelled Periodic Poolshot Tower}
\end{figure}
\textbf{Periodic Poolshot Tower Test II:} A parallel tower of even length with base $A_{0}B_{0}$ parallel to $A_{n}B_{m}$ is a periodic poolshot tower if and only if $bc<ad$ or equivalently $ad-bc>0$ where $v=(a,b)$ is a non-zero vector from any key blue point to any key black point and $w=(c,d)$ is a vector from $A_{0}$ to $A_{n}$ \textbf{provided the central angle of any blue or black fan is less than 180 degrees}.
\begin{proof}
Choose our coordinate system so that $A_{0}A_{n}$ is vertical. Let V be a black point with the smallest X coordinate in our system and let $L_{2}$ be a vertical line through V and let U be a blue point with the largest X coordinate and let $L_{1}$ be a vertical line through U. Then our tower is a periodic poolshot tower if and only if U lies to the strict left of V.
If the tower is a periodic poolshot tower then by Test I $ad-bc>0$ for all vectors from blue points to black points and hence this is certainly true for all vectors from key blue points to key black points.
On the other hand to show the other direction, it is enough to show that U and V are both key points since then the condition that $ad-bc>0$ will guarantee that U lies to the strict left of V. Now since V is black, it must lie on some black arc whose center is blue. If that blue center lies to the left of or on $L_{2}$, then V must be a key black point otherwise since the central angle is less than 180 degrees one of the black endpoints of the black arc through V would lie to the left of V. If that blue center lies to the right of $L_{2}$, then since that center is a key blue point and the endpoints of the black arc through V are key black points, the condition $ad-bc>0$ would force the central angle of that black arc to be greater than 180 degrees which is impossible. Hence this case can't arise and V is a key black point. Similarly U is a key blue point.
\end{proof}
Note 1: We can disregard the last blue and black points from this calculation since if $ad-bc>0$ using a vector from the first blue point $A_{0}$ to some black point (not the last), then it follows that $ad-bc>0$ using a vector from the last blue point $A_{n}$ to that same black point since both vectors are on the same side of $A_{0}A_{n}$.
Also we can disregard using a vector from $A_{n}$ to $B_{m}$, since this works if and only if the vector from $A_{0}$ to $B_{0}$ works since these are the same vectors.
Note 2: If two blue points form a vector parallel to the shooting vector then using either blue point and any fixed black point produces the same sign for $ad-bc$. This means we need only choose one blue point and disregard the others if they form a vector parallel to the shooting vector.
Note 3: If two or more blue points determine a vector parallel to the shooting vector and two or more black points determine a vector parallel to the shooting vector, then we need only choose one blue point and one black point to find the sign of $ad-bc$.
\noindent
Conclusion: In our computer calculations to show that a periodic path exists in a given triangle we use this second test taking into account the notes above.
\noindent
\textbf{Important comment:} Alternating successive code numbers can be taken to represent alternating blue and black centers of alternating black and blue fans, and we can call them \textbf{blue or black code numbers}. Observe that a blue(black) code number gives one less than the number of black(blue) vertices in the corresponding fan, and that the sum of the blue(black) codes plus one is the number of black(blue) vertices in the tower. Alternately, each code number gives the number of triangles in each fan and the sum of the code numbers gives the number of triangles in the corresponding tower.
With successive code numbers in any code sequence, we can associate the X,Y or Z angles used in the central angles of the fans of the corresponding tower. Note that we use X and x, Y and y, and Z and z to represent the same angles. \textbf{Notationally we like to use X,Y,Z when dealing with the alternating angles of the fans in a tower and x,y,z when dealing with the angles of an individual triangle.}
Algorithm to do this:
Rule 1. Let the first code number correspond to X and the second code number correspond to Y (Note X and Y can be replaced by any of X,Y or Z)
Rule 2. Now consider any two successive code numbers $C_{i}$ and $C_{i+1}$ that have angles say X and Y assigned to them. Then if $C_{i+1}$ is even, then $C_{i+2}$ has the same angle as $C_{i}$ (X in this example) whereas if $C_{i+1}$ is odd then $C_{i+2}$ has an angle different from that of $C_{i}$ or $C_{i+1}$ (Z in this example).
Notationally we will write these angles successively above and below the corresponding code numbers starting with X say on top. We will call these the \textbf{top angles and the bottom angles}.
\noindent
Example:
\[
\begin{array}{cccccc}
X &   & Z &   & Y &   \\
1 & 3 & 3 & 1 & 3 & 3 \\
  & Y &   & X &   & Z
\end{array}
\]
Observe that the center angles of the fans are then successively 1X, 3Y, 3Z, 1X, 3Y, 3Z, and these centers alternate from one side to the other of the associated tower. These center angles are of the form code number times appropriate angle of the triangle.
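A short sketch (ours, not from the paper) of Rules 1 and 2; assign\_angles is our own name, and the default choice in Rule 1 is X for the first code number and Y for the second.
\begin{verbatim}
# A sketch (ours): assign an angle X, Y or Z to each code number; the
# entries in even positions (0, 2, 4, ...) are the top angles and the
# entries in odd positions are the bottom angles.
def assign_angles(code, first='X', second='Y'):
    out = [first, second]                    # Rule 1 (first != second)
    for i in range(2, len(code)):
        if code[i - 1] % 2 == 0:             # previous code number even:
            out.append(out[i - 2])           # repeat the angle two steps back
        else:                                # previous code number odd:
            out.append(({'X', 'Y', 'Z'} - {out[i - 2], out[i - 1]}).pop())
    return out

# assign_angles([1, 3, 3, 1, 3, 3]) -> ['X', 'Y', 'Z', 'X', 'Y', 'Z'],
# i.e. top angles X, Z, Y and bottom angles Y, X, Z as in the example.
\end{verbatim}
Any other choice in Rule 1 amounts to a relabelling of the three angles.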
Also observe that if the sum of the top angles times its code number equals the sum of the bottom angles times its code number and if the base of the tower is AB, then the side of the triangle at the top of the tower is parallel to the base although not necessarily the same side as the base.
Finally observe that if a periodic poolshot tower has an even number of triangles in it and the above is satisfied then the top and the base are parallel and of the form AB where A is a blue point and B is a black point.
Given a periodic poolshot tower with corresponding code sequence n m ... then one can determine the successive acute angles \textbf{(the shooting angles)} as the straightened poolshot crosses each side of a triangle in the tower. If the first shooting angle from the base of the tower is $\theta$ where $0<\theta<90$ and the first fan it crosses involves the angle X and has central angle nX and if n=2k+1 then the successive shooting angles are $\theta$, $\theta + x$, $\theta + 2x$, ...
, $\theta + kx$, $180-\theta -(k+1) x$, $180-\theta -(k+2) x$, ... , $180-\theta -(2k+1) x$. If n=2k, then the successive shooting angles are $\theta$, $\theta + x$, $\theta + 2x$, ...
, $\theta + (k-1)x$, $\theta + kx$//$180 - \theta - kx$, $180-\theta -(k+1) x$, $180-\theta -(k+2) x$, ... , $180-\theta -2kx$ where the // indicates that either $\theta + kx$ or $180 - \theta - kx$ is the acute angle.
Note that the first shooting angle cannot be 90 since in any fan that contains the 90 degree shooting angle, the side with the 90 degree shooting angle is also right in the center of the fan. Also observe that all the angles are integer linear combinations of x, y, 180 and $\theta$ and that as the poolshot passes from one fan to the next all the shooting angles are completely determined. In particular, if $\theta$ can be expressed as an integer linear combination of x,y and 90 then so too can every shooting angle.
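A small numerical sketch (ours) of these formulas for a single fan follows; the handling of the middle crossing for even n, taking the acute one of $\theta+kx$ and $180-\theta-kx$, is our reading of the // notation above.
\begin{verbatim}
# A sketch (ours): successive shooting angles across one fan of central
# angle n*x, entered at the acute angle theta (all angles in degrees).
def fan_shooting_angles(theta, x, n):
    k = n // 2
    if n % 2 == 1:                                       # n = 2k + 1
        return ([theta + i * x for i in range(k + 1)] +
                [180 - theta - i * x for i in range(k + 1, n + 1)])
    middle = min(theta + k * x, 180 - theta - k * x)     # the acute one
    return ([theta + i * x for i in range(k)] + [middle] +
            [180 - theta - i * x for i in range(k + 1, n + 1)])
\end{verbatim}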
\section{Classifying Codes}
\subsection{Stable and Unstable}
A code sequence is \textbf{stable} if the sum of all the top angles involving X, each multiplied by its code number, equals the corresponding sum of all the bottom angles involving X, and similarly for all the angles involving Y and all the angles involving Z.
\noindent
There are stable codes for which there are no triangles (x,y) which have a periodic path corresponding to that code sequence.
\noindent
\textbf{Convention:} We will only call a code sequence stable if there is a triangle (x,y) which has a periodic path corresponding to that code sequence.
If that is the case, then since the code is stable there is a finite periodic path within the triangle whose straightened trajectory is a positive minimum distance from all vertices. This means we can always change the coordinates (x,y) by a small amount and have another different triangle with the same periodic path. The upshot of this is that a region corresponding to a stable code is an open non-empty set in the plane and would cover a positive area. We will call it the \textbf{region} corresponding to that stable code sequence.
\noindent
\textbf{Convention:} We will call a code sequence \textbf{unstable} if it is not stable and the sum of the top angles times its code number equals the sum of the bottom angles times its code number and there exists a triangle (x,y) which has a periodic path corresponding to that code sequence.
For example 1 2 1 2 is an unstable code sequence, since along the line 2X=2Y+2Z, that is X=Y+Z, i.e. X=90 (or, after relabelling, Y=90 or X+Y=90), there is a periodic path of type 1 2 1 2. In fact it can be easily shown that every right triangle has a periodic path of this type.
Observe that an unstable code sequence corresponds to a \textbf{region} which is a finite open line segment whose equation is determined as above.
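Both notions are easy to test mechanically. A small sketch (ours): given a code sequence together with the angle labels produced by Rules 1 and 2 (for instance by the assign\_angles sketch above), the code is stable exactly when all three signed sums below vanish, and for an unstable code the nonzero sums give the linear equation of its region.
\begin{verbatim}
# A sketch (ours): labels[i] is the angle 'X', 'Y' or 'Z' attached to
# code[i] by Rules 1 and 2; top entries (even positions) count with a
# plus sign and bottom entries (odd positions) with a minus sign.
def angle_balance(code, labels):
    balance = {'X': 0, 'Y': 0, 'Z': 0}
    for i, (c, a) in enumerate(zip(code, labels)):
        balance[a] += c if i % 2 == 0 else -c
    return balance

# angle_balance([1, 2, 1, 2], ['X', 'Y', 'X', 'Z'])
#   -> {'X': 2, 'Y': -2, 'Z': -2}, i.e. the line 2X = 2Y + 2Z of the text;
# the code is stable exactly when all three values are 0.
\end{verbatim}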
\subsection{The Five Code Types}
There are exactly five code types. Four have even length, classified by whether they are stable or unstable and whether they contain a 90 degree reflection or not. The fifth code type is of odd length; it must be stable and cannot contain a 90 degree reflection.\newline
\noindent \textbf{CS codes:} These are stable even codes which contain a 90 degree reflection. The first such example is 1 1 1 1 2 1 1 1 1 2\newline
\noindent
Properties:
\noindent
1. Their side sequence length is a multiple of 4 and code sequence length is even and the code sequence passes the stable test.
\noindent
2. They are stable codes of the form $E_1$ $C_1$ $C_2$ ... $C_k$ $E_2$ $C_k$ ... $C_2$ $C_1$ or some cyclic permutation of the above, where $E_1$ and $E_2$ are even code numbers. When converted to side sequence form, each of $E_1$ and $E_2$ corresponds to a block of odd length whose middle number represents the side at which the poolshot hits at 90. For example if $E_1=4$ then the corresponding block of the side sequence is of the form uvuvu and the poolshot hits the middle "u" at 90 and the rest of the shooting angles follow. Note that this means there are two special parallel sides corresponding to $E_1$ and $E_2$ in the corresponding tower which are perpendicular to the poolshot and half the length of the tower apart. We will call these the two \textbf{special perpendiculars}. It follows that these are the only sides of triangles in the tower which are perpendicular to the poolshot.
\noindent
3. The corresponding region covers a finite non-zero area in the plane and is an open set in the plane.
\noindent
4. One can determine the \textbf{first} shooting angle of each fan by the following algorithm, which we illustrate with an example. Consider the CS code sequence
\[
\begin{array}{cccccccccc}
X &   & Z &   & Y &   & Z &   & X &   \\
1 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 2 \\
  & Y &   & X &   & X &   & Y &   & Z
\end{array}
\]
Then, assuming the first one is $\theta$, the first shooting angles of the successive fans are\newline
$\theta$\newline
$180-\theta-1X$\newline
$\theta + 1X-1Y$\newline
$180-\theta-1X+1Y-1Z$\newline
$\theta + 1X-1Y+1Z-1X$\newline
$180-\theta-1X+1Y-1Z+1X-2Y$\newline
$\theta + 1X-1Y+1Z-1X+2Y-1X$\newline
$180-\theta-1X+1Y-1Z+1X-2Y+1X-1Z$\newline
$\theta + 1X-1Y+1Z-1X+2Y-1X+1Z-1Y$\newline
$180-\theta-1X+1Y-1Z+1X-2Y+1X-1Z+1Y-1X$\newline
\noindent Note that if we continued this pattern then the next shooting angle would be
$\theta + 1X-1Y+1Z-1X+2Y-1X+1Z-1Y+1X-2Z=\theta$, since $1X-1Y+1Z-1X+2Y-1X+1Z-1Y+1X-2Z=0$ by the stability of the code sequence.
\noindent Further since there is a 90 angle associated with the first 2, the first shooting angle of that fan is $(180-2Y)/2$=90-Y and we can determine that $\theta$=X+Y-90 and hence express all shooting angles in terms of X,Y and 90.
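This bookkeeping can be mechanised. A sketch (ours): by the pattern visible in the lists of this and the following examples, the first shooting angle of fan $i+1$ is $180$ minus the first shooting angle of fan $i$ minus $c_i a_i$, where $c_i$ is the $i$-th code number and $a_i$ its assigned angle, so keeping the coefficients of $\theta$, X, Y, Z and the constant term symbolically reproduces the list above.
\begin{verbatim}
# A sketch (ours): first shooting angle of every fan, kept symbolically
# as coefficients of theta, X, Y, Z and a constant (in degrees); labels
# is the Rule 1/2 angle assignment of the code sequence.
def first_shooting_angles(code, labels):
    s = {'theta': 1, 'X': 0, 'Y': 0, 'Z': 0, 'const': 0}   # fan 1 enters at theta
    out = [dict(s)]
    for c, a in zip(code[:-1], labels[:-1]):
        s = {k: -v for k, v in s.items()}                  # next = 180 - s - c*a
        s['const'] += 180
        s[a] -= c
        out.append(dict(s))
    return out
\end{verbatim}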
\noindent
5. The first shooting angle of the first fan can be used to get the vector (c,d)=(-$cos\theta$, $sin\theta$) used in the periodic poolshot tower test.\newline
\begin{figure}
\caption{CS Periodic Path}
\end{figure}
\noindent \textbf{CNS codes:} These are unstable even codes which contain a 90 degree reflection. The first example is 2 2\newline
\noindent
Properties:
\noindent
1. Their side sequence length is a multiple of 2 and code sequence length is even and the code sequence doesn't pass the stable test.
\noindent
2. As above they are of the form $E_1$ $C_1$ $C_2$ ... $C_k$ $E_2$ $C_k$ ... $C_2$ $C_1$ or some cyclic permutation of the above where $E_1$ and $E_2$ are even code numbers with the difference that they are not stable codes. The corresponding tower would also contain two special perpendiculars.
\noindent
3. The corresponding region is a straight line segment with equation found by taking the sum of the top angles times their code numbers minus the sum of the bottom angles times their code numbers and setting this equal to zero. For the 2 2 CNS this becomes 2X-2Y=0 or Y=X.
\noindent
4. One can determine the first shooting angle of each fan similarly to the above, which we illustrate with the following example. Consider the CNS code sequence
\[
\begin{array}{cccc}
Y &   & Y &   \\
1 & 2 & 1 & 6 \\
  & Z &   & X
\end{array}
\]
\noindent
Then, assuming the first one is $\theta$, the first shooting angles of the successive fans are\newline
$\theta$\newline
$180-\theta-1Y$\newline
$\theta + 1Y-2Z$\newline
$180-\theta-1Y+2Z-1Y$\newline
\noindent Note that if we continued this pattern then the next shooting angle would be
$\theta + 1Y-2Z+1Y-6X=\theta$, since $1Y-2Z+1Y-6X=0$, that is $Y=90+X$, is the equation associated with this code sequence.
\noindent Further since there is a 90 angle associated with the first 2, the first shooting angle of that fan is $(180-2Z)/2$=90-Z=X+Y-90 and we can determine that $\theta=270-X-2Y=90-3X$ and hence express all shooting angles in terms of X and 90.
\noindent
5. The first shooting angle of the first fan can be used to get the vector (c,d)=(-$cos\theta$, $sin\theta$) used in the periodic poolshot tower test.
\begin{figure}
\caption{CNS Periodic Path}
\end{figure}
\noindent \textbf{OSO codes:} These are stable odd code sequences, whose sum is an odd number or some multiple of an odd number, and which never contain a 90 degree reflection. The first example is 1 1 1.\newline
\noindent
1. The side sequence length of an OSO with minimum period is odd and the code sequence length is also odd.
\noindent
2. In order to create a parallel tower from an OSO, one has to double the code; in the example above the tower has to correspond to 1 1 1 1 1 1. This means OSO's as a parallel tower are of the form $C_1$ $C_2$ ... $C_k$ $C_1$ $C_2$ ... $C_k$ or multiples thereof where $C_1$ + $C_2$ + ... + $C_k$ is odd.
\noindent
3. Since it is stable the corresponding region covers a finite non-zero area in the plane and is an open set in the plane.
\noindent
4. One can determine the first shooting angle of each fan similarly to the above, which we illustrate with the following example. Consider the OSO code sequence (note we don't need to double its length in this calculation)
\[
\begin{array}{ccccc}
Z &   & X &   & X \\
1 & 1 & 2 & 2 & 5 \\
  & Y &   & Y &
\end{array}
\]
\noindent
Then, assuming the first one is $\theta$, the first shooting angles of the successive fans are\newline
$\theta$\newline
$180-\theta-1Z$\newline
$\theta + 1Z-1Y$\newline
$180-\theta-1Z+1Y-2X$\newline
$\theta + 1Z-1Y+2X-2Y$\newline
\noindent Note that if we continued this pattern then the next shooting angle would be
$180-\theta-1Z+1Y-2X+2Y-5X$=$\theta$ from which we can solve for $\theta$ to get $\theta$=$(180-1Z+1Y-2X+2Y-5X)/2$.
\noindent
5. The first shooting angle of the first fan can be used to get the vector $(c,d)=(-cos\theta$, $sin\theta)$ used in the periodic poolshot tower test.
\begin{figure}
\caption{OSO Periodic Path}
\end{figure}
\noindent \textbf{ONS codes:} These are the unstable even codes which don't contain a 90 degree reflection. One of the first examples is 1 1 2 1 3 2.\newline
\noindent
Properties:
\noindent
1. Their side sequence length is a multiple of 2 and code sequence length is even and the code sequence doesn't pass the stable test.
\noindent
2. As above they are non stable codes of the form $C_1$ $C_2$ ... $C_k$ which are not of the CNS form.
\noindent
3. The corresponding region is a straight line segment with equation found by taking the sum of the top angles times their code numbers minus the sum of the bottom angles times their code numbers and setting this equal to zero. For the 1 1 2 1 3 2 ONS this becomes 4X-2Y=0 or Y=2X.
\noindent
4. One can determine the first shooting angle of each fan similarly to the above, which we illustrate with the following example. Consider the ONS code sequence
\[
\begin{array}{cccc}
X &   & X &   \\
2 & 2 & 4 & 4 \\
  & Y &   & Y
\end{array}
\]
\noindent
Then, assuming the first one is $\theta$, the first shooting angles of the successive fans are\newline
$\theta$\newline
$180-\theta-2X$\newline
$\theta + 2X-2Y$\newline
$180-\theta-2X+2Y-4X$\newline
\noindent Note that if we continued this pattern then the next shooting angle would be
$\theta+2X-2Y+4X-4Y$=$\theta$ and we \textbf{cannot} solve for $\theta$ since on the corresponding linear tile Y=X and then $2X-2Y+4X-4Y$=0. It can be shown that the shooting angles are not integer or even rational linear combinations of X,Y and 90.
\noindent
5. Because of the above, the vector (c,d)=(-$cos\theta$, $sin\theta$) used in the periodic poolshot tower test is calculated another way as in the next section.
\noindent 6. There is a way to eliminate $\theta$ to form a bounding polygon which is discussed later and which contains the straight line segment region corresponding to the ONS code.
\begin{figure}
\caption{ONS Periodic Path}
\end{figure}
\noindent \textbf{OSNO codes:} These are the stable even codes which don't contain a 90 degree reflection. The first example is 1 1 2 2 1 1 3 3.\newline
\noindent
Properties:
\noindent
1. Their side sequence length is a multiple of 2 and code sequence length is even and the code sequence passes the stable test.
\noindent
2. They are stable codes of the form $C_1$ $C_2$ ... $C_k$ which are not of the CS or even multiples of the OSO form.
\noindent
3. The corresponding region covers a finite nonzero area in the plane and is an open set in the plane.
\noindent
4. One can determine the first shooting angle of each fan similarly to the above, which we illustrate with the following example. Consider the OSNO code sequence\newline
X~~~Y~~~Y~~Z
1 1 2 2 1 1 3 3
~~~Z~~~Z~~~X~~~Y\newline
\noindent
Then, assuming the first one is $\theta$, the first shooting angles of the successive fans are\newline
$\theta$
$180-\theta-1X$
$\theta + 1X-1Z$
$180-\theta-1X+1Z-2Y$
$\theta + 1X-1Z+2Y-2Z$
$180-\theta-1X+1Z-2Y+2Z-1Y$
$\theta + 1X-1Z+2Y-2Z+1Y-1X$
$180-\theta-1X+1Z-2Y+2Z-1Y+1X-3Z$\newline
\noindent Note that if we continued this pattern then the next shooting angle would be
$\theta + 1X-1Z+2Y-2Z+1Y-1X+3Z-3Y=\theta$, since $1X-1Z+2Y-2Z+1Y-1X+3Z-3Y=0$ because the code sequence is stable.
As above, it can be shown that the shooting angles are not integer or even rational linear combinations of X, Y and 90.
\noindent
5. Because of the above, the vector $(c,d)=(-\cos\theta, \sin\theta)$ used in the periodic poolshot tower test is calculated another way, as described in the next section.
\noindent 6. There is a way to eliminate $\theta$ to form a bounding polygon which is discussed later and which contains the open region corresponding to the OSNO code.
\begin{figure}
\caption{OSNO Periodic Path}
\end{figure}
\noindent There are faster ways to test a parallel tower for being a periodic poolshot tower if it is of the form $E_1$ $C_1$ $C_2$ ... $C_k$ $E_2$ $C_k$ ... $C_2$ $C_1$, or some cyclic permutation of the above, where $E_1$ and $E_2$ are even code numbers and the sum of the top angles times their code numbers equals the sum of the bottom angles times their code numbers. This means the two special sides corresponding to $E_1$ and $E_2$ are parallel and perpendicular to $A_{0}A_{n}$.
\textbf{Periodic Poolshot Tower Test III:} A parallel tower of code sequence form $E_1$ $C_1$ $C_2$ ... $C_k$ $E_2$ $C_k$ ... $C_2$ $C_1$ (of CS or CNS form), or some cyclic permutation of the above, where $E_1$ and $E_2$ are even code numbers and the sum of the top angles times their code numbers equals the sum of the bottom angles times their code numbers, with base $A_{0}B_{0}$ parallel to $A_{n}B_{m}$, and where $UV$ and $WZ$ are the two special perpendiculars associated with $A_{0}A_{n}$, is a periodic poolshot tower if and only if $bc<ad$, or equivalently $ad-bc>0$, where $v=(a,b)$ is a non-zero vector from any key blue point to any key black point which lie between or on the two special perpendiculars and $w=(c,d)$ is a vector from $A_{0}$ to $A_{n}$, \textbf{provided the angle of any blue or black fan is less than 180 degrees}.
\noindent
\section{Previous results on periodic paths:}
\noindent
1. Acute triangles always have a periodic path of OSO code type 1 1 1, which has length 3, namely the orthic triangle whose vertices are the feet of the altitudes.
\begin{figure}
\caption{Orthic Triangle}
\end{figure}
\noindent
2. Right triangles always have a periodic path of CNS code type 1 2 1 2.
\begin{figure}
\caption{Right Triangle with CNS Periodic Path 1 2 1 2}
\end{figure}
\noindent
3. Isosceles triangles always have a periodic path of CNS code type 2 2.
\begin{figure}
\caption{Isosceles Triangle with CNS Periodic Path 2 2}
\end{figure}
\noindent
4. Rational triangles always have a periodic path of type CNS. A triangle is \textbf{rational} if all of its angles can be expressed as integer multiples of an angle x which divides 90, say $m<A=nx$, $m<B=mx$ and $m<C=px$. We can prove this from the following facts about rational triangles.
\noindent
Fact I: If a poolshot leaves a side at 90 degrees and $90=qx$ for some integer $q$, then it bounces off any side at some integer multiple of x.
Conclusion: There are at most $q$ angles involved as a poolshot bounces off the sides of the triangle.
\noindent
Fact II: If a poolshot leaves a side at 90 degrees and $90=qx$ for some integer $q$, and the poolshot hits a vertex, say vertex B, then it must enter B along a ray making an angle tx where $0<t<m$.
Conclusion: There are at most n+m+p-3 perpendicular poolshots which hit a vertex in a rational triangle. This means that if a 90 degree poolshot leaves a side and doesn't hit a vertex, there is an open band around that poolshot of finite width $\delta>0$ which leaves that side at 90, never hits a vertex, and for which $\delta$ is maximal. This is only possible if the band hits another side at 90 and becomes periodic, since otherwise the band must hit some side, say side AB, infinitely often at some angle jx for a fixed integer j less than q. But each time this band hits this side, it hits on some open interval $(a_i,b_i)$ of width $\delta$, no two of which can intersect without contradicting the maximality of $\delta$. Since AB is finite, this is impossible.
\noindent
\section{Calculating the coordinates of the vertices in a code tower}
Given a code tower of even length, let us assume that the ordering of the angles in the code tower is such that the first top angle is $U_{1}=X$ and that the last bottom angle is $U_{2k}=Z$ as shown.\newline
$U_{1}$~~~~~~~$U_{3}$
$C_1$ $C_2$ $C_3$... $C_{2k}$
~~~~$U_{2}$~~~~~~~~~~$U_{2k}$\newline
Then the base of the tower is AB with $m<A=x$, $m<B=z$ and $m<C=y$. If we let $AB=siny$, $BC=sinx$ and $AC=sinz=sin(x+y)$ (since $z=180-x-y$), we can recursively calculate the coordinates of each fan center $L_{(i,0)}$ as follows. Let $a_{i}=U_{i}C_i$ be the center angle of the fan with corresponding label $L_{(i+1,0)}$ for $i\geq 1$, and let $u_{i}$ be the length of the side between $(x_{i},y_{i})$ and $(x_{i+1},y_{i+1})$, where $(x_{i},y_{i})$ are the coordinates of the center of the fan at $L_{(i,0)}$. Then recursively let $x_{1}=sin y$, $y_{1}=0$, $x_{2}=0$, $y_{2}=0$ and
\newline
\noindent$x_{2n}=x_{2n-1}-u_{2n-1}cos(a_{2n-2}-a_{2n-3}+a_{2n-4}...-a_{3}+a_{2}-a_{1})$
\noindent$y_{2n}=y_{2n-1}+u_{2n-1}sin(a_{2n-2}-a_{2n-3}+a_{2n-4}...-a_{3}+a_{2}-a_{1})$
\noindent$x_{2n+1}=x_{2n}+u_{2n}cos(a_{2n-1}-a_{2n-2}+a_{2n-3}...+a_{3}-a_{2}+a_{1})$
\noindent$y_{2n+1}=y_{2n}+u_{2n}sin(a_{2n-1}-a_{2n-2}+a_{2n-3}...+a_{3}-a_{2}+a_{1})$
\newline
\newline
\noindent
An example of this process is worked through in Appendix A; a small numerical sketch of the recursion follows.
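\noindent The following Python sketch (ours, not the program described later) implements the recursion above numerically. The center angles $a_i$ and the side lengths $u_i$ are assumed to be supplied; in the usage example we take them from the Appendix A code tower 1 1 2 3 3 2 and an arbitrary sample triangle with $x=y=40$ and $z=100$ degrees.
\begin{verbatim}
import math

def fan_centers(y_deg, a, u, n_points):
    # a[i] = U_i * C_i (degrees) and u[i] are 1-indexed: index 0 is unused.
    x = [None, math.sin(math.radians(y_deg)), 0.0]   # (x_1, y_1) = (sin y, 0)
    y = [None, 0.0, 0.0]                             # (x_2, y_2) = (0, 0)
    def alt(last):
        # alternating sum a_last - a_{last-1} + a_{last-2} - ... down to a_1
        return sum((-1) ** k * a[last - k] for k in range(last))
    for i in range(3, n_points + 1):
        s = math.radians(alt(i - 2))
        if i % 2 == 0:      # x_{2n} = x_{2n-1} - u_{2n-1} cos(...)
            x.append(x[i - 1] - u[i - 1] * math.cos(s))
        else:               # x_{2n+1} = x_{2n} + u_{2n} cos(...)
            x.append(x[i - 1] + u[i - 1] * math.cos(s))
        y.append(y[i - 1] + u[i - 1] * math.sin(s))   # y always adds sin(...)
    return list(zip(x, y))[1:]                        # (x_1, y_1), ..., (x_n, y_n)

# Data from Appendix A (code 1 1 2 3 3 2, angles X Y Z Y X Z), with a sample
# triangle x = y = 40, z = 100 degrees (any rational triangle would do here).
xd, yd, zd = 40.0, 40.0, 100.0
sx, sy, sz = (math.sin(math.radians(t)) for t in (xd, yd, zd))
a = [None, 1 * xd, 1 * yd, 2 * zd, 3 * yd, 3 * xd, 2 * zd]
u = [None, None, sz, sx, sx, sz, sy, sy]
print(fan_centers(yd, a, u, 8))
\end{verbatim}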
\section{The Prover}
The basic idea behind the prover is the Mean Value Theorem in two dimensions.\newline
\noindent
\textbf{The Mean Value Theorem:} Let $f(x,y)$ be a differentiable function of two variables, then
$f(b_{1}, b_{2}) - f(a_{1}, a_{2}) = f_x(c_{1}, c_{2})(b_{1} - a_{1}) + f_y(c_{1}, c_{2})(b_{2} - a_{2})$ for some
($c_{1}, c_{2}$) on the line between ($a_{1}, a_{2}$) and ($b_{1}, b_{2}$).\newline
\noindent
\textbf{Fact I:} If $f(x,y)=\sum_{i=1}^{k} \pm u_icos(m_{i}x + n_{i}y)$, then $f_x=\sum_{i=1}^{k} \mp$$m_{i}u_{i}$sin($m_{i}x+ n_{i}y$) and $f_y$=$\sum_{i=1}^{k} \mp$$n_{i}u_{i}$sin($m_{i}x$ + $n_{i}y$) and hence
$\lvert f_x\rvert\leq\sum_{i=1}^{k} $$\lvert m_{i}u_{i}\rvert=M$ and $\lvert f_y\rvert\leq\sum_{i=1}^{k} $$\lvert n_{i}u_{i}\rvert=N$
\noindent
A similar bound holds if $f(x,y)=\sum_{i=1}^{k} \pm u_isin(m_{i}x + n_{i}y)$.\newline
\noindent
\textbf{Fact II:} If $f(b_{1}, b_{2}) >0$ and $f(a_{1}, a_{2}) \leq0$, then by the Mean Value Theorem
$f(b_{1}, b_{2}) - f(a_{1}, a_{2}) = f_x(c_{1}, c_{2})(b_{1} - a_{1}) + f_y(c_{1}, c_{2})(b_{2} - a_{2})$$\geq$$f(b_{1}, b_{2})$
and since $ f_x(c_{1}, c_{2})(b_{1} - a_{1}) + f_y(c_{1}, c_{2})(b_{2} - a_{2})\leq M\lvert b_{1} - a_{1}\rvert + N\lvert b_{2} - a_{2}\rvert$, then $f(b_{1}, b_{2})$$\leq$$M\lvert b_{1} - a_{1}\rvert + N\lvert b_{2} - a_{2}\rvert$\newline
\noindent
\textbf{Conclusion:} If $f(b_{1}, b_{2})> M\lvert b_{1} - a_{1}\rvert + N\lvert b_{2} - a_{2}\rvert$, then $f(a_{1}, a_{2}) >0$\newline
\noindent
\textbf{Fact III:} Let $(b_{1}, b_{2})$ be the center of a square of side $2r>0$. Then for any $(a_{1}, a_{2})$ in or on the boundary of the square we must have $\lvert b_{1} - a_{1}\rvert \leq r$ and $\lvert b_{2} - a_{2}\rvert \leq r$ and hence
$M\lvert b_{1} - a_{1}\rvert + N\lvert b_{2} - a_{2}\rvert \leq (M+N)r$\newline
\noindent
\textbf{Conclusion I:} If $f(b_{1}, b_{2}) > (M+N)r \geq M\lvert b_{1} - a_{1}\rvert + N\lvert b_{2} - a_{2}\rvert$
then $f(a_{1}, a_{2})>0$\newline
\noindent
\textbf{Conclusion II:} If $f(b_{1}, b_{2})>0$ and $0<r<f(b_{1}, b_{2})/(M+N)$, then all points $(a_{1}, a_{2})$ in the square centered at $(b_{1}, b_{2})$ and of side 2r must also satisfy $f(a_{1}, a_{2})>0$.\newline
\noindent
\textbf{The Gradient Algorithm:} For every function $f_j(x,y)=\sum_{i=1}^{k} \pm u_icos(m_{i}x + n_{i}y)$ or
\noindent$f_j(x,y)=\sum_{i=1}^{k} \pm u_isin(m_{i}x + n_{i}y)$ that forms one of the boundary equations of a stable code region:\newline
\noindent
1. Calculate $G_j=\sum_{i=1}^{k} \lvert u_i\rvert (\lvert m_i\rvert + \lvert n_i\rvert )$.
\noindent
2. If a square centered at $(b_1,b_2)$ with side length $2r$ satisfies $f_j(b_1,b_2)>0$ and
$f_j(b_1,b_2) - rG_j >0$ for all $f_j$, then every point in or on the boundary of the square satisfies $f_j>0$ and that square lies completely within the given code region.\newline
Note: If we are dealing with a CNS or ONS code and its corresponding linear region, then the algorithm is exactly the same as long as $(b_1,b_2)$ is the center of the linear region intersecting the square.
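\noindent The following Python sketch (ours) illustrates the gradient test in ordinary floating point arithmetic; the actual prover performs the same comparisons with interval arithmetic, as described in the section on the program and proof.
\begin{verbatim}
import math

def evaluate(terms, x, y, use_sin):
    # terms: list of (sign, u_i, m_i, n_i) standing for sign*u_i*cos(m_i*x + n_i*y)
    # (or sin when use_sin is True); x and y are in radians, as in the prover.
    fn = math.sin if use_sin else math.cos
    return sum(s * u * fn(m * x + n * y) for (s, u, m, n) in terms)

def gradient_bound(terms):
    # G_j = sum |u_i|(|m_i| + |n_i|), a bound on |f_x| + |f_y|
    return sum(abs(u) * (abs(m) + abs(n)) for (_, u, m, n) in terms)

def square_certified(fs, b1, b2, r):
    # fs: list of (terms, use_sin) describing the boundary functions f_j of a
    # stable code region; (b1, b2) is the centre and 2r the side of the square.
    # Returns True if every f_j passes the test of Conclusion II.
    for terms, use_sin in fs:
        val = evaluate(terms, b1, b2, use_sin)
        if not (val > 0 and val - r * gradient_bound(terms) > 0):
            return False
    return True
\end{verbatim}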
\noindent
\textbf{The Triple Rule:} Suppose two code regions $R_1$ and $R_2$ intersect along a common boundary line segment from $(x_1,y_1)$ to $(x_2,y_2)$. This can happen if $R_1$ is defined by the equations $f_i(x,y)>0$ and $f(x,y)>0$ and $R_2$ is defined by the equations $g_i(x,y)>0$ and $g(x,y)>0$ and the equations $f(x,y)=0$ and $g(x,y)=0$ have a common factor of the form $sin(ax+by)$ or $cos(ax+by)$. Now observe that $sin(ax+by)=0$ if and only if $ax+by=180k$ and $cos(ax+by)=0$ if and only if $ax+by=90+180k$ for some integer k.
\begin{figure}
\caption{Two Stable Regions Sharing a Line Segment}
\end{figure}
\noindent Let us illustrate with the sine case, and suppose further that $R_1$ lies between the parallel lines $ax+by=180(k-1)$ and $ax+by=180k$, that $R_2$ lies between the parallel lines $ax+by=180k$ and $ax+by=180(k+1)$, and that $sin(ax+by)>0$ between the first two parallels and $sin(ax+by)<0$ between the second two parallels. Let $f(x,y)=sin(ax+by)u(x,y)$ and $g(x,y)=sin(ax+by)v(x,y)$.
\noindent Now consider a square with sides parallel to the coordinate axes, whose vertex coordinates are all rational numbers, which lies between the first and third parallels, and which may or may not intersect the second parallel. It is worth noting that if this square lies inside either code region then it cannot intersect any of the three parallel lines above, since $sin(ax+by)$ is zero there. We can use the following to decide if every point in the square, including its boundary, has a periodic path.
\noindent
\textbf{Triple Rule Algorithm:} Using interval arithmetic, if we can prove \newline
1. that each of the four corners $(x,y)$ of the square satisfies $180(k+1)>ax+by>180(k-1)$,
2. that the center of the square $(x_0,y_0)$ satisfies $f_i(x,y)>0$, $g_i(x,y)>0$, $u(x,y)>0$ and $-v(x,y)>0$ (noting that we use $-v$ since we assume $sin(ax+by)<0$ between the second pair of parallels), and that each of these equations satisfies the Gradient Algorithm with respect to the given square; it then follows that all points in the square and on its boundary satisfy these inequalities,
3. that each point on the common boundary line segment $ax+by=180k$ from $(x_1,y_1)$ to $(x_2,y_2)$ has a periodic path corresponding to a CNS or ONS code which includes this line segment and runs from $(x_3,y_3)$ to $(x_4,y_4)$, and that each of the four corners of the given square lies between the lines $x=x_3$ and $x=x_4$ or between the lines $y=y_3$ and $y=y_4$.\newline
Then that square must lie within $R_1$ union $R_2$ union $R_3$ where $R_3$ is the linear region corresponding to the CNS or ONS code from 3 and every point in that union has a periodic path.
\noindent Proof: By condition 2, every point of the square satisfies $f_i>0$, $g_i>0$, $u>0$ and $-v>0$. Hence any point of the square with $sin(ax+by)>0$ satisfies $f>0$ and lies in $R_1$, and any point with $sin(ax+by)<0$ satisfies $g>0$ and lies in $R_2$. Any remaining point satisfies $sin(ax+by)=0$ and so lies on the middle parallel $ax+by=180k$; by conditions 1 and 3 it lies on the line segment from $(x_3,y_3)$ to $(x_4,y_4)$ and hence in $R_3$. Therefore the square lies within $R_1$ union $R_2$ union $R_3$, and every point of it has a periodic path.
\noindent
QED
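\noindent The following schematic Python sketch (ours) checks conditions 1 and 2 of the Triple Rule Algorithm, reusing the function square\_certified from the sketch after the Gradient Algorithm; condition 3, which concerns the periodic path along the common segment and the position of the corners relative to it, is geometric and is checked separately in the actual program. All angles are taken in radians, so the three parallels become $ax+by=(k-1)\pi$, $k\pi$ and $(k+1)\pi$.
\begin{verbatim}
import math

def negate(spec):
    # flip the sign of every term of a function, giving the terms of its negative
    terms, use_sin = spec
    return ([(-s, u, m, n) for (s, u, m, n) in terms], use_sin)

def triple_rule_square_test(a, b, k, corners, fs_i, gs_i, u_spec, v_spec, b1, b2, r):
    # Condition 1: every corner of the square lies strictly between the two
    # outer parallels a*x + b*y = (k-1)*pi and (k+1)*pi.
    in_strip = all((k - 1) * math.pi < a * cx + b * cy < (k + 1) * math.pi
                   for (cx, cy) in corners)
    # Condition 2: f_i, g_i, u and -v are positive at the centre and pass the
    # gradient test over the whole square (square_certified defined earlier).
    checks = list(fs_i) + list(gs_i) + [u_spec, negate(v_spec)]
    return in_strip and square_certified(checks, b1, b2, r)
\end{verbatim}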
\section{The Two Infinite Patterns}
\noindent There are two infinite patterns which converge to the line segment x=0, $67.5<y<90$ as follows.
\begin{figure}
\caption{Infinite Pattern 1}
\caption{Infinite Pattern 2}
\caption{The two infinite patterns}
\end{figure}
\textbf{Infinite Pattern I:}
Given a triangle ABC with $m<A=x$, $m<B=y$ where $(n+1)x+2y<180<(n+2)x+2y$ and $0<x<90/(2n+2)$ for $n\geq1$, it contains a CS periodic path 1 1 $2n+1$ 1 2 1 $2n+1$ 1 1 $4n+2$. Note: Since $x<22.5$ and $y+(n+2)x/2>90$ and $(n+2)x/2<(n+2)22.5/(2n+2)<22.5$, we get $y>67.5$. Also observe that since $n\geq1$, we have $2x+2y\leq(n+1)x+2y<180$, which means that $x+y<90$. Finally observe that the successive regions determined by these conditions share the boundary line $(n+1)x+2y=180$ for $n\geq2$.
\begin{figure}
\caption{$n\geq2$ even case}
\caption{$n\geq2$ even case}
\caption{$n\geq2$ even case}
\caption{Infinite Pattern I}
\label{fig:infa}
\end{figure}
\begin{proof}
For the $n\geq2$ even case, take a wedge of acute angle x and vertex A with arms $l_1$ and $l_2$ as shown on figure \ref{fig:infa} and pick a point P on $l_1$ and shoot a poolball at an acute angle $0<270-(n+2)x -2y<90$ as shown. Now since $270-(n+2)x -2y+x<180$, it hits $l_2$ at an angle $0<(n+1)x+2y-90<90$ and continues bouncing all on the wedge at angles $nx+2y-90>(n-1)x+2y-90>...>x+2y-90>2y-90>0$ as shown. If we consider the ray from P in the other direction, it bounces off the sides at the angles $270-(n+3)x-2y>270-(n+4)x -2y>...>270-(2n+3)x -2y>0$ where the last inequality holds since $(n+1)x+2y<180$ and $(n+2)x<(2n+2)x<90$. Now let W be the intersection of the last two rays and draw a line through W hitting $l_1$ at B, $l_2$ at C and such that the angle at B as shown is y. Observe that B lies between the last two reflections on $l_1$ since $y>2y-90$ since $y>45$ and $180-y>270-(2n+2)x -2y$ since $(2n+2)x+y>90$ where this last inequality holds since $(2n+2)x+y>(n+2)x/2+y>90$. The last ray from $l_2$ hits BC at W at the angle $90-y$ and bounces to hit AB at 90 whereas the last ray from $l_1$ hits W at $(2n+2)x+y-90$ and since $(2n+2)x+y-90$ +$180-x-y$ = $90+(2n+1)x<90+(2n+2)x<180$, it reflects off BC and hits $l_2$ at $90-(2n+1)x$. Observing that $90-(2n+1)x>x$ since $(2n+2)x<90$, the ray then bounces off $l_1$ and $l_2$ until it hits at 90 producing a CS periodic path 1 1 $2n+1$ 1 2 1 $2n+1$ 1 1 $4n+2$.
The odd case is handled similarly interchanging $l_1$ and $l_2$.
\end{proof}
\textbf{Infinite Pattern II:}
Given a triangle ABC with $m<A=x$, $m<B=y$ where $(n+1)x+2y=180$ and $0<x<90/n$ for $n\geq1$, it contains a CNS periodic path 1 2 1 2$n$.
\noindent
Note: These are just the boundary lines (extended) between the regions of Infinite Pattern I.
\begin{figure}
\caption{Infinite Pattern 2 Even Case}
\caption{Infinite Pattern 2 Odd Case}
\caption{Infinite Pattern II}
\label{fig:infd}
\label{fig:infe}
\end{figure}
\begin{proof}
For the odd integer case $n=2k+1$, $k\geq0$, take a wedge of angle x and vertex A with arms $l_1$ and $l_2$ as shown in figure \ref{fig:infd} and shoot a poolball at 90 degrees from $l_1$ which then bounces off the sides at angles $90-x>90-2x>...>90-(2k+1)x>0$ noting that $(2k+1)x<90$. Now on the last ray leaving $l_2$ at angle $90-(2k+1)x$, choose any point W between $l_1$ and $l_2$ and draw a line through W hitting $l_1$ at B at an acute angle $y=90-(k+1)x>0$ (and so $(n+1)x+2y=180$) and hitting $l_2$ at C. Observe that triangle ABC has a periodic path of type 1 2 1 2$n$.
For the even integer case n=2k, $k\geq1$, again take a wedge of angle x and vertex A with arms $l_1$ and $l_2$ as shown in figure \ref{fig:infe} and shoot a poolball at 90 degrees from $l_2$ which then bounces off the sides at angles $90-x>90-2x>...>90-2kx>0$ noting that $2kx<90$. On the last ray leaving $l_2$ at angle $90-2kx$, choose any point W between $l_1$ and $l_2$ and draw a line through W hitting $l_1$ at B at an acute angle $y=90-(2k+1)x/2>0$ (and so $(n+1)x+2y=180$) and hitting $l_2$ at C. Observe that triangle ABC has a periodic path of type 1 2 1 2$n$.
\end{proof}
\begin{figure}
\caption{Infinite Patterns}
\caption{Region Covered}
\caption{Infinite Pattern Cover}
\end{figure}
\noindent
\textbf{IMPORTANT CONCLUSION:} Given an obtuse triangle ABC with $0<x<22.5$, $67.5<y<90$ and $x+y<90$, that triangle has a periodic path.
\section{Bounding Polygons}
\noindent A \textbf{bounding polygon} is a convex polygon which includes a code region. Every code region has a bounding polygon for example the region bounded by $0<x+y<180$. It is useful in our calculations to find a bounding polygon with rational vertices which is as small as possible, the smallest being the convex hull of the region. There are up to six bounding polygons for each code, one corresponding to each permutation of the angles. For each code, we will assume we have a fixed order of the code angles $X$,$Y$ and $Z$.
\noindent
\textbf{The corner bounding polygon:} This is the polygon determined by the conditions that $0<nX<180$, $0<mY<180$ and $0<pZ<180$ where $nX,mY,pZ$ are the code angles corresponding to the largest $X$,$Y$,$Z$ code numbers in the code sequence.
\noindent \textbf{The angle bounding polygon:} Given a periodic side sequence leaving side AB of triangle ABC at an angle $T$ where $0<T \leq 90$, then we can calculate its successive angles as it reflects off each side. Since $z=180-x-y$, these reflecting angles will be linear combinations of $x$, $y$, 90 and $T$ with integer coefficients. If $T$ can be expressed in terms of $x$, $y$ and 90 with integer coefficients, then so can all reflecting angles. This is the case for the OSO, CS and CNS periodic paths but not for the OSNO or ONS periodic paths. An example is given below.
\begin{figure}
\caption{CS Code 1 3 6 3 1 7 3 1 8 1 3 7}
\caption{OSO Code 1 1 2 2 3 1 2 1 4}
\caption{OSNO Code 1 1 2 2 3 1 3 5}
\caption{Angle Bounding Polygons}
\end{figure}
\noindent Now observe that in the OSO, CS and CNS cases each reflecting angle $\theta$ must satisfy $0<\theta\leq90$ and if we omit the 90 degree angles, then the set of $(x,y)$ satisfying $0<\theta<90$ forms a bounding polygon which contains the region determined by the given periodic path. In the CNS case, it is a bounding line segment.
\noindent On the other hand, in the OSNO and ONS cases we must have $0<\theta<90$, since there are no 90 degree angles. However, since these reflecting angles are expressed in terms of $x$, $y$, 90 and $T$ with integer coefficients, in order to form the bounding polygon we must eliminate $T$. This can be done as follows. For each reflecting angle of the form $mx+ny+p90+T$, we can also say that $0<90-mx-ny-p90-T<90$, and similarly for reflecting angles of the form $mx+ny+p90-T$. We then get two sets of angles involving either $T$ or $-T$, and if we add each equation with $T$ to each equation with $-T$ and divide by 2, we end up with a set of linear combinations of $x$, $y$ and 90 with rational coefficients which lie between 0 and 90 and hence produce a bounding polygon in these cases. In the ONS case it is a bounding line segment. These are the bounding polygons
that we usually use in our calculations.
It is worth noting that the corner bounding polygon equations are included amongst the angle bounding polygon equations. This is a consequence of the fact that if a poolshot enters a corner $A$ where $m<A=x$ at an angle $\theta$ and bounces n times before it leaves then the angles involved are $\theta$, $\theta +x$, $\theta +2x$, ... , $180- \theta - (n-2)x$, $180- \theta - (n-1)x$ , $180- \theta - nx$ and then since $0< \theta <90$ and $0< 180- \theta - nx <90$, we must have $0< 180- nx <180$ or $0< nx <180$ which is one of the corner equations.
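\noindent The elimination of $T$ described above is mechanical, and the following Python sketch (ours) carries it out on a list of reflecting angles given as integer coefficient tuples.
\begin{verbatim}
from fractions import Fraction as F

def eliminate_T(reflecting_angles):
    # reflecting_angles: list of (m, n, p, t), each standing for the inequality
    # 0 < m*x + n*y + p*90 + t*T < 90 with t = +1 or -1.
    plus, minus = [], []
    for (m, n, p, t) in reflecting_angles:
        (plus if t == 1 else minus).append((m, n, p))
        # the complementary inequality 0 < 90 - (m*x + n*y + p*90 + t*T) < 90
        (minus if t == 1 else plus).append((-m, -n, 1 - p))
    # adding each +T inequality to each -T inequality and halving cancels T;
    # each returned triple (m, n, p) stands for 0 < m*x + n*y + p*90 < 90
    return [(F(m1 + m2, 2), F(n1 + n2, 2), F(p1 + p2, 2))
            for (m1, n1, p1) in plus for (m2, n2, p2) in minus]
\end{verbatim}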
\section{The Program and Proof}
Because of the complexity and quantity of these equations, there is a dire need to automate the process of proving the codes work. Thus, we have written a program to crunch the numbers. In these calculations, each $G_j$ is an integer and is exact. On the other hand, $r$, $b_{1}$, and $b_{2}$ are in radians and in fact are rational multiples of $\pi/2$, so they too are exact. The $f_j$ involve evaluating sines and cosines, so they are not exact. All these calculations are done by computer and have a certain degree of accuracy. We need to make sure that when we calculate that $f_j>0$, it is indeed true. To do this we use interval arithmetic, which can show that $f_j$ lies within an interval $[u,v]$ with $u>0$. The interval that we use for $\pi/2$ correct to 7 decimal places is (1.57079631,1.57079637). This precision can be increased as required.
\noindent
We mainly use the arithmetical operations of\newline
addition: $[x_1,x_2]+[y_1,y_2]=[x_1+y_1,x_2+y_2]$
subtraction: $[x_1,x_2]-[y_1,y_2]=[x_1-y_2,x_2-y_1]$
multiplication: $[x_1,x_2][y_1,y_2]=[min(x_1y_1,x_1y_2,x_2y_1,x_2y_2),max(x_1y_1,x_1y_2,x_2y_1,x_2y_2)]$\newline
\noindent Note, we don't use any division operations.\newline
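\noindent As an illustration only, the three operations can be transcribed directly into Python; note that plain floating point endpoints are not outward-rounded here, whereas the actual prover obtains rigorous enclosures through the libraries listed below.
\begin{verbatim}
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

# The interval used for pi/2, correct to 7 decimal places, as in the text.
HALF_PI = (1.57079631, 1.57079637)
print(iadd(HALF_PI, HALF_PI))   # an interval containing pi
\end{verbatim}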
\noindent
In doing the calculations in the prover, we rely on the following libraries for arbitrary precision arithmetic.
\noindent
1. GMP: https://gmplib.org/
\noindent
2. MPFR: http://www.mpfr.org/ \cite{mpfr}
\noindent
We also rely on the Boost multiprecision interface to these libraries.\newline
\noindent
Along with this paper, you can get our program and instructions here: \href{https://bitbucket.org/gtokarsky/billiardviewersm/src/master/}{Billiard Viewer}\newline
In the instructions of this program, we detail how you can view the proof. You can go square by square to see all the equations that produce the given code region and the interval arithmetic lower bound of the calculations that show a square satisfies the prover. Because of how massive the proof is, there isn't a good way to present all of the calculations at once, so to trust that this program is doing the calculations properly, we recommend that you read the code.
In Appendix B, we have an enumeration of the 134 code regions that cover the rest of the total region between $z=75$ and $z=80$ and which are not part of the two infinite patterns. The endpoints of this region in clockwise order are (37.5, 37.5), (40, 40),
(12.5, 67.5), (7.5, 67.5). We used 13,862 squares at 7 decimal accuracy, starting from a single square, and had to subdivide any square at most 20 times to be able to prove that these 134 code regions do cover this region. Our interval arithmetic calculations showed that the smallest lower bound on the prover at any equation used on a covering square was $6.65023 \times 10^{-9}$.
The enumeration shows the code type, a 2-tuple consisting of the code length and the side sequence length followed by the code sequence.
In this same program, you can find listed the 2,439 single codes, 278,131 squares and 21 triples that are used to prove from 105 to 110, the 38,132 single codes, 4,994,538 squares and 310 triples used to prove from 110 to 112, and the 118,809 single codes, 27,783,085 squares and 1,115 triples used to prove from 112 to 112.3.
\begin{figure}
\caption{112.3 Cover}
\end{figure}
\begin{appendices}
\section{Example of Calculating Coordinates of a Code Tower}
Here we will calculate the vertices in the code tower of the following ONS code sequence along the line $x=y$.
\newline
\newline
X~~~Z~~~X
\noindent
1 1 2 3 3 2
\noindent
~~Y~~~Y~~~Z
\noindent
Then
\noindent
$x_{1}=siny$, $y_{1}=0$ which are the coordinates of $L_{(1,0)}$
\noindent
$x_{2}$=0, $y_{2}=0$ which are the coordinates of $L_{(2,0)}$
\noindent
$x_{3}=sinzcosX$,
\noindent
$y_{3}=sinzsinX$ which are the coordinates of $L_{(3,0)}$
\noindent
$x_{4}=sinzcosX-sinxcos(Y-X)$,
\noindent$y_{4}=sinzsinX+sinxsin(Y-X)$ which are the coordinates of $L_{(4,0)}$
\noindent
$x_{5}=sinzcosX-sinxcos(Y-X)+sinxcos(2Z+X-Y)$,
\noindent$y_{5}=sinzsinX+sinxsin(Y-X)+sinxsin(2Z+X-Y)$ which are the coordinates of $L_{(5,0)}$
\noindent
$x_{6}=sinzcosX-sinxcos(Y-X)+sinxcos(2Z+X-Y)-sinzcos(3Y-2Z+Y-X)$,
\noindent$y_{6}=sinzsinX+sinxsin(Y-X)+sinxsin(2Z+X-Y)+sinzsin(3Y-2Z+Y-X)$ which are the coordinates of $L_{(6,0)}$
\noindent
$x_{7}=sinzcosX-sinxcos(Y-X)+sinxcos(2Z+X-Y)-sinzcos(3Y-2Z+Y-X)+sinycos(3X-3Y+2Z-Y+X)$,
\noindent$y_{7}=sinzsinX+sinxsin(Y-X)+sinxsin(2Z+X-Y)+sinzsin(3Y-2Z+Y-X)+sinysin(3X-3Y+2Z-Y+X)$ which are the coordinates of $L_{(7,0)}$
\noindent
$x_{8}=sinzcosX-sinxcos(Y-X)+sinxcos(2Z+X-Y)-sinzcos(3Y-2Z+Y-X)+sinycos(3X-3Y+2Z-Y+X)-sinycos(2Z-3X+3Y-2Z+Y-X)$,
\noindent$y_{8}=sinzsinX+sinxsin(Y-X)+sinxsin(2Z+X-Y)+sinzsin(3Y-2Z+Y-X)+sinysin(3X-3Y+2Z-Y+X)+sinysin(2Z-3X+3Y-2Z+Y-X)$ which are the coordinates of $L_{(8,0)}$
\newline
\newline
\noindent We then use standard trig identities together with the conditions that $z=180-x-y$ and $y=x$ from the code pattern.
\newline
$sin(-A)=-sinA$
$cos(-A)=cosA$
$sin(180-A)=sinA$
$cos(180-A)=-cosA$
$sinA+sinB=2sin(\frac{A+B}{2})cos(\frac{A-B}{2})$
$sinA-sinB=2sin(\frac{A-B}{2})cos(\frac{A+B}{2})$
$cosA+cosB=2cos(\frac{A+B}{2})cos(\frac{A-B}{2})$
$cosA-cosB=-2sin(\frac{A+B}{2})sin(\frac{A-B}{2})$
$2cosAcosB=cos(A+B)+cos(A-B)$
$2sinAcosB=sin(A+B)+sin(A-B)=sin(A+B)-sin(B-A)$
$2sinAsinB=cos(A-B)-cos(A+B)$
\newline\newline
\noindent
The shooting vector is $(c,d)$ where $(c,d)$ is the vector from $L_{(2,0)}$ to $L_{(8,0)}$
\noindent
$c= sin(y)cos(180)+sin(z)cos(x)+sin(x)cos(x-y+180)+sin(x)cos(-x-3y+4(180))+sin(z)cos(-x-6y+5(180))+sin(y)cos(2x-6y+6(180))$
\noindent
$d= sin(y)sin(180)+sin(z)sin(x)+sin(x)sin(x-y+180)+sin(x)sin(-x-3y+4(180))+sin(z)sin(-x-6y+5(180))+sin(y)sin(2x-6y+6(180))$
\noindent
\textbf{CONVENTION:} In using the last three trig identities to simplify to sums of sines and cosines, we multiply all coordinates by 2 so that all coefficients are integers.
\noindent
The shooting vector $(c,d)$ then further becomes
\noindent
$c= -2sin(y)-sin(3y)+sin(5y)-sin(2x-7y)+sin(2x-5y)-sin(2x-y)+sin(2x+y)+sin(2x+3y)-sin(2x+7y)$ a sum of sines.
\noindent
$d=-cos(3y)+cos(5y)+cos(2x-7y)-cos(2x-5y)+cos(2x-y)-cos(2x+y)+cos(2x+3y)-cos(2x+7y)$ a sum of cosines.
\noindent To calculate the coordinates of the key points in the tower which are not centers of fans, if any, we illustrate with the same example. Suppose we look at the coordinates of $L_{(4,1)}$. It is found by starting with the coordinates of $L_{(4,0)}$ and, since the corresponding code is 2, adding one more reflection of the given triangle and proceeding as before.
\noindent
$x_{4}=sinzcosX-sinxcos(Y-X)$,
\noindent$y_{4}=sinzsinX+sinxsin(Y-X)$
becomes
\noindent
$x_{(4,1)}=sinzcosX-sinxcos(Y-X)+ sinycos(Z-Y+X)$
\noindent
$y_{(4,1)}=sinzsinX+sinxsin(Y-X)+ sinysin(Z-Y+X)$
\noindent and then simplifying to sums of sines or cosines.
\noindent To find the coordinates of $L_{(6,2)}$, we would start with the coordinates of $L_{(6,0)}$ and, since the corresponding code is 3, add two more reflections of the given triangle, which corresponds to using the angle $2X-3Y+2Z-Y+X$, and we get
\noindent
$x_{(6,2)}=x_{6}+sinzcos(2X-3Y+2Z-Y+X )=x_{6}+sinzcos(x-6y)$
\noindent
$y_{(6,2)}=y_{6}+sinzsin(2X-3Y+2Z-Y+X)=y_{6}+sinzsin(x-6y)$
\noindent and then simplifying to sums of sines or cosines.
\noindent
It is now a simple matter to calculate a vector $(a,b)$ from any key blue point to any key black point. As an example the blue-black vector from $L_{(6,0)}$ to $L_{(5,0)}$ is given by
$a=sinzcos(3Y-2Z+Y-X)=sinzcos(6y+x)=sin(7y+2x)-sin5y=sin9x-sin5x$, since $y=x$ for this code, and
$b=-sinzsin(3Y-2Z+Y-X)=-sinzsin(6y+x)=cos(7y+2x)-cos5y=cos9x-cos5x$.
\section{The Single Codes for the 105 theorem}
\begin{enumerate}[noitemsep]
\item OSO (3, 7) 1 3 3
\item OSO (5, 11) 1 1 2 2 5
\item OSO (5, 15) 1 1 4 2 7
\item OSO (5, 15) 1 3 2 6 3
\item OSO (5, 17) 1 1 4 2 9
\item OSO (5, 21) 1 1 6 2 11
\item OSO (5, 23) 1 1 6 2 13
\item OSO (7, 15) 1 1 3 1 2 1 6
\item OSO (7, 17) 1 1 3 1 2 1 8
\item OSO (7, 19) 1 1 2 2 6 2 5
\item OSO (7, 21) 1 1 2 2 8 2 5
\item OSO (7, 23) 1 1 4 2 6 2 7
\item OSO (7, 29) 1 1 6 2 8 2 9
\item OSO (7, 17) 1 2 1 2 1 3 7
\item OSNO (8, 18) 1 1 2 2 3 1 3 5
\item OSNO (8, 22) 1 1 4 2 3 1 3 7
\item OSO (9, 25) 1 1 2 2 6 2 4 2 5
\item CS (10, 20) 1 1 3 1 2 1 3 1 1 6
\item OSNO (10, 24) 1 1 3 1 2 1 5 1 1 8
\item CS (10, 28) 1 1 5 1 2 1 5 1 1 10
\item OSNO (10, 32) 1 1 5 1 2 1 7 1 1 12
\item CS (10, 36) 1 1 7 1 2 1 7 1 1 14
\item OSNO (10, 40) 1 1 7 1 2 1 9 1 1 16
\item CS (10, 44) 1 1 9 1 2 1 9 1 1 18
\item OSNO (10, 48) 1 1 9 1 2 1 11 1 1 20
\item OSNO (10, 26) 1 1 2 2 7 1 1 4 2 5
\item OSNO (10, 38) 1 1 4 2 11 1 1 6 2 9
\item OSO (11, 29) 1 1 2 1 1 7 2 4 1 1 8
\item OSO (11, 25) 1 1 2 1 1 6 1 1 2 2 7
\item OSNO (12, 38) 1 1 2 2 8 2 3 1 3 8 2 5
\item OSNO (14, 40) 1 1 3 1 2 1 7 2 4 1 1 7 2 7
\item OSNO (14, 44) 1 1 3 1 2 1 7 2 6 1 1 9 2 7
\item OSNO (14, 52) 1 1 3 1 2 1 11 2 6 1 1 11 2 9
\item OSNO (14, 36) 1 1 2 2 7 1 2 1 3 1 1 7 2 5
\item CS (14, 36) 1 2 1 5 3 1 3 2 4 2 3 1 3 5
\item CS (16, 44) 1 1 2 1 1 7 3 1 3 2 6 2 3 1 3 7
\item CS (16, 52) 1 1 4 1 1 9 3 1 3 2 8 2 3 1 3 9
\item OSNO (18, 42) 1 1 2 1 1 6 1 1 2 2 7 1 2 1 2 1 3 7
\item CS (18, 44) 1 1 2 2 5 1 2 1 5 2 2 1 1 5 2 4 2 5
\item CS (18, 52) 1 1 4 2 5 1 2 1 5 2 4 1 1 7 2 4 2 7
\item OSO (19, 57) 1 1 2 1 1 6 1 2 1 3 1 1 7 2 8 2 8 2 7
\item CS (20, 48) 1 1 3 1 2 1 6 1 2 1 3 1 1 8 1 1 4 1 1 8
\item CS (20, 56) 1 1 3 1 2 1 10 1 2 1 3 1 1 10 1 1 4 1 1 10
\item CS (20, 64) 1 1 5 1 2 1 8 1 2 1 5 1 1 12 1 1 6 1 1 12
\item CS (20, 88) 1 1 9 1 2 1 8 1 2 1 9 1 1 18 1 1 10 1 1 18
\item OSNO (20, 74) 1 1 2 1 1 7 1 1 13 2 7 1 2 1 7 1 1 13 2 9
\item CS (20, 52) 1 1 2 1 1 7 2 2 1 1 5 2 6 2 5 1 1 2 2 7
\item CS (20, 60) 1 1 2 1 1 7 2 4 1 1 7 2 6 2 7 1 1 4 2 7
\item CS (20, 68) 1 1 4 1 1 9 2 4 1 1 7 2 8 2 7 1 1 4 2 9
\item CS (20, 92) 1 1 6 1 1 13 2 6 1 1 11 2 10 2 11 1 1 6 2 13
\item CS (20, 60) 1 1 2 2 9 1 1 4 2 8 2 4 1 1 9 2 2 1 1 6
\item CS (20, 92) 1 1 6 2 13 1 1 8 2 10 2 8 1 1 13 2 6 1 1 12
\item CS (20, 60) 1 1 4 2 6 2 4 1 1 7 2 4 1 1 8 1 1 4 2 7
\item OSO (21, 49) 1 1 1 1 2 1 7 2 5 1 1 2 1 1 7 2 5 1 2 1 4
\item OSNO (22, 54) 1 1 1 1 3 7 1 1 3 1 2 1 5 2 6 2 2 1 1 5 2 5
\item OSNO (22, 82) 1 1 2 1 1 7 1 1 12 1 1 4 1 1 11 2 8 1 1 13 2 9
\item OSNO (22, 58) 1 1 2 2 7 1 2 1 3 2 7 1 1 3 1 2 1 5 2 6 2 5
\item OSNO (22, 64) 1 1 2 2 7 1 2 1 3 2 6 2 4 1 1 7 2 4 2 6 2 5
\item OSNO (22, 60) 1 1 4 2 7 1 2 1 4 1 2 1 5 1 1 9 2 3 1 2 1 8
\item OSNO (22, 74) 1 1 2 2 8 2 4 2 6 2 7 1 1 4 2 6 2 6 2 6 2 5
\item OSO (23, 55) 1 1 2 1 1 5 1 1 8 1 2 1 2 1 2 1 8 1 1 4 1 1 8
\item CS (24, 64) 1 1 1 1 2 1 9 3 1 2 1 3 9 1 2 1 1 1 1 5 2 8 2 5
\item OSNO (24, 60) 1 1 3 1 2 1 5 2 3 1 3 6 2 3 1 3 7 1 1 3 1 2 1 6
\item OSNO (24, 88) 1 1 3 1 2 1 11 2 9 1 2 1 5 1 1 12 1 1 6 1 1 13 2 9
\item OSNO (24, 68) 1 1 3 1 2 1 8 1 1 4 1 1 9 2 4 1 1 9 2 3 1 2 1 8
\item CS (24, 64) 1 1 2 1 1 5 1 1 10 1 1 6 1 1 10 1 1 5 1 1 2 1 1 8
\item CS (24, 76) 1 1 4 2 6 2 3 1 3 8 3 1 3 2 6 2 4 1 1 7 2 4 2 7
\item CS (26, 64) 1 1 1 1 2 1 7 2 4 2 7 1 2 1 1 1 1 5 2 5 1 2 1 5 2 5
\item CS (26, 104) 1 1 5 1 2 1 9 2 12 2 9 1 2 1 5 1 1 13 2 7 1 2 1 7 2 13
\item OSNO (26, 72) 1 1 2 1 1 5 1 1 8 1 2 1 3 2 9 1 1 4 2 9 1 1 4 1 1 8
\item CS (26, 64) 1 1 2 1 1 6 1 1 2 1 1 7 3 1 3 2 5 1 2 1 5 2 3 1 3 7
\item CS (26, 124) 1 1 6 2 12 2 7 1 2 1 7 2 12 2 6 1 1 11 2 8 2 12 2 8 2 11
\item CS (28, 80) 1 1 3 1 2 1 7 2 4 1 1 8 1 1 4 2 7 1 2 1 3 1 1 7 2 6 2 7
\item CS (28, 112) 1 1 3 1 2 1 11 2 8 1 1 14 1 1 8 2 11 1 2 1 3 1 1 9 2 12 2 9
\item OSNO (28, 118) 1 1 3 1 2 1 11 2 8 2 10 2 11 1 1 6 2 12 2 8 2 11 1 1 4 1 1 10
\item CS (28, 76) 1 1 3 1 2 1 10 1 1 4 1 1 10 1 2 1 3 1 1 8 1 2 1 6 1 2 1 8
\item CS (28, 80) 1 1 2 1 1 5 1 1 9 2 4 2 9 1 1 5 1 1 2 1 1 9 3 1 2 1 3 9
\item OSNO (28, 74) 1 1 2 1 1 5 2 6 2 5 1 1 2 2 7 1 2 1 3 2 6 2 5 1 1 2 2 7
\item OSNO (28, 68) 1 1 2 2 6 2 5 1 1 2 2 7 1 1 3 1 2 1 6 1 1 3 1 2 1 5 2 5
\item CS (28, 88) 1 1 4 2 10 2 4 1 1 7 2 9 1 1 5 1 2 1 4 1 2 1 5 1 1 9 2 7
\item CS (28, 92) 1 1 4 2 6 2 6 2 6 2 4 1 1 7 2 6 2 4 1 1 8 1 1 4 2 6 2 7
\item OSNO (30, 76) 1 1 2 2 5 1 2 1 4 1 2 1 5 3 1 2 1 3 6 3 1 3 2 4 2 4 2 4 2 5
\item OSO (31, 87) 1 1 3 1 2 1 6 1 2 1 3 1 1 7 2 6 2 4 1 1 7 2 5 1 2 1 5 2 6 2 7
\item OSNO (32, 84) 1 1 2 1 1 5 1 1 8 1 1 2 1 1 5 2 6 2 7 1 1 4 2 7 1 1 2 1 1 5 2 7
\item CS (32, 76) 1 1 2 1 1 5 1 1 8 1 2 1 2 1 2 1 8 1 1 5 1 1 2 1 1 8 1 1 4 1 1 8
\item CS (32, 76) 1 1 2 1 1 6 1 2 1 3 1 1 8 1 1 4 1 1 8 1 1 4 1 1 8 1 1 3 1 2 1 6
\item CS (32, 80) 1 1 2 2 5 1 2 1 5 3 1 2 1 3 5 1 2 1 5 2 2 1 1 5 2 4 2 4 2 4 2 5
\item OSNO (34, 80) 1 1 1 1 3 7 1 1 3 1 2 1 5 2 5 1 2 1 3 2 7 1 1 3 1 2 1 5 2 5 1 2 1 4
\item CS (34, 96) 1 1 3 1 2 1 5 2 6 2 4 2 6 2 5 1 2 1 3 1 1 7 2 4 2 5 1 2 1 5 2 4 2 7
\item CS (36, 84) 1 1 2 1 1 5 1 1 8 1 1 2 1 1 6 1 1 2 1 1 8 1 1 5 1 1 2 1 1 8 1 1 4 1 1 8
\item CS (36, 108) 1 1 2 1 1 7 1 1 12 1 1 4 1 1 10 1 1 4 1 1 12 1 1 7 1 1 2 1 1 10 1 1 4 1 1 10
\item CS (36, 132) 1 1 4 1 1 9 1 1 14 1 1 6 1 1 12 1 1 6 1 1 14 1 1 9 1 1 4 1 1 12 1 1 6 1 1 12
\item CS (36, 84) 1 1 2 1 1 6 1 1 2 1 1 6 1 1 2 1 1 7 3 1 3 2 5 1 2 1 4 1 2 1 5 2 3 1 3 7
\item CS (36, 84) 1 1 2 1 1 6 1 1 2 2 7 1 2 1 2 1 3 7 1 1 2 1 1 7 3 1 2 1 2 1 7 2 2 1 1 6
\item OSNO (36, 102) 1 1 2 1 1 6 1 2 1 5 2 7 1 1 3 1 2 1 6 1 2 1 5 2 6 2 6 2 7 1 1 4 2 6 2 7
\item CS (36, 112) 1 1 2 2 9 1 1 4 1 1 9 3 1 3 2 8 2 4 2 8 2 3 1 3 9 1 1 4 1 1 9 2 2 1 1 6
\item CS (38, 100) 1 1 2 1 1 6 1 1 2 1 1 7 2 7 1 1 3 1 2 1 7 2 5 1 2 1 5 2 7 1 2 1 3 1 1 7 2 7
\item OSNO (40, 96) 1 1 1 1 3 7 1 1 3 1 2 1 5 2 5 1 1 2 2 7 1 1 3 1 2 1 5 2 6 2 2 1 1 5 2 5 1 2 1 4
\item OSNO (40, 128) 1 1 3 1 2 1 10 1 1 5 1 2 1 7 2 13 1 1 7 1 2 1 6 1 2 1 7 2 13 1 1 7 1 2 1 6 1 2 1 8
\item OSNO (40, 120) 1 1 3 1 2 1 8 1 1 4 2 8 2 5 1 2 1 5 1 1 9 2 5 1 2 1 5 2 8 2 4 1 1 9 2 3 1 2 1 8
\item CS (40, 148) 1 1 2 1 1 7 1 1 13 2 9 1 2 1 5 1 1 13 2 8 2 13 1 1 5 1 2 1 9 2 13 1 1 7 1 1 2 1 1 10
\item CS (40, 120) 1 1 2 1 1 7 2 4 1 1 7 2 6 2 6 2 6 2 7 1 1 4 2 7 1 1 2 1 1 7 2 4 1 1 8 1 1 4 2 7
\item CS (40, 100) 1 1 2 1 1 6 1 1 3 1 2 1 5 2 6 2 5 1 2 1 3 1 1 6 1 1 2 1 1 7 2 4 1 1 8 1 1 4 2 7
\item OSNO (42, 116) 1 1 2 1 1 5 1 1 9 2 4 2 9 1 1 4 2 9 1 1 4 1 1 8 1 2 1 3 1 1 8 1 1 2 1 1 6 1 1 2 2 9
\item OSNO (44, 118) 1 1 2 1 1 7 2 4 1 1 8 1 1 2 1 1 7 2 4 1 1 8 1 2 1 3 2 9 1 1 5 1 2 1 4 1 2 1 7 2 4 1 1 8
\item CS (48, 140) 1 1 1 1 3 9 1 1 2 1 1 5 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 5 1 1 2 1 1 9 3 1 1 1 1 5 2 8 2 5
\item CS (48, 136) 1 1 2 1 1 6 1 1 2 1 1 7 2 2 1 1 5 2 6 2 4 2 7 1 1 4 2 6 2 4 2 6 2 4 1 1 7 2 4 2 6 2 5 1 1 2 2 7
\item CS (50, 156) 1 1 1 1 3 9 1 1 4 2 8 2 5 1 2 1 5 2 8 2 5 1 2 1 5 2 8 2 4 1 1 9 3 1 1 1 1 5 2 8 2 5 1 2 1 5 2 8 2 5
\item OSNO (50, 156) 1 1 2 1 1 7 2 6 2 6 2 8 2 4 1 1 9 2 3 1 2 1 8 1 1 3 1 2 1 8 1 1 4 2 8 2 6 2 6 2 7 1 2 1 3 1 1 7 2 7
\item CS (56, 132) 1 1 1 1 2 1 7 2 4 1 1 8 1 1 4 2 7 1 2 1 1 1 1 4 1 2 1 5 2 7 1 1 3 1 2 1 6 1 1 2 1 1 6 1 2 1 3 1 1 7 2 5 1 2 1 4
\item CS (56, 160) 1 1 2 1 1 5 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 5 1 1 2 1 1 9 3 1 2 1 3 9 1 1 2 1 1 6 1 1 2 1 1 9 3 1 2 1 3 9
\item OSNO (56, 216) 1 1 4 1 1 9 2 11 1 1 5 1 2 1 8 1 2 1 5 1 1 12 1 1 6 1 1 13 2 6 2 13 1 1 6 2 13 1 1 6 1 1 12 1 1 4 1 1 9 2 12 2 6 1 1 12
\item CS (56, 216) 1 1 2 1 1 8 1 1 2 1 1 9 2 12 2 9 1 2 1 5 1 1 13 2 7 1 2 1 7 2 13 1 1 6 1 1 13 2 7 1 2 1 7 2 13 1 1 5 1 2 1 9 2 12 2 9
\item OSNO (56, 142) 1 1 2 1 1 6 1 2 1 3 1 1 7 2 5 1 2 1 4 1 2 1 6 1 1 2 1 1 7 2 5 1 1 2 2 8 2 5 1 1 2 2 9 1 1 5 1 2 1 4 1 2 1 5 2 7
\item OSNO (56, 170) 1 1 2 1 1 6 1 2 1 5 2 8 2 4 1 1 8 1 2 1 3 1 1 9 2 2 1 1 5 1 1 9 2 4 1 1 9 2 4 2 8 2 6 2 6 2 7 1 2 1 3 1 1 7 2 7
\item CS (56, 172) 1 1 2 2 9 1 1 4 1 1 9 2 2 1 1 6 1 1 2 2 9 1 1 4 1 1 9 3 1 3 2 8 2 4 2 8 2 4 2 8 2 3 1 3 9 1 1 4 1 1 9 2 2 1 1 6
\item OSNO (58, 158) 1 1 2 1 1 6 1 2 1 5 2 7 1 2 1 3 1 1 7 2 7 1 1 2 1 1 6 1 2 1 5 2 8 2 4 1 1 9 2 2 1 1 5 1 1 9 2 4 1 1 8 1 2 1 3 1 1 8
\item OSNO (60, 162) 1 1 2 1 1 5 1 1 9 2 4 2 9 1 1 4 2 9 1 1 4 1 1 9 2 3 1 2 1 8 1 2 1 2 1 3 9 1 1 2 1 1 6 1 1 2 1 1 8 1 1 2 1 1 6 1 1 2 2 9
\item CS (60, 144) 1 1 2 1 1 5 1 1 8 1 2 1 2 1 2 1 8 1 1 4 1 1 8 1 2 1 2 1 2 1 8 1 1 5 1 1 2 1 1 8 1 1 4 1 1 8 1 2 1 2 1 2 1 8 1 1 4 1 1 8
\item CS (60, 180) 1 1 2 1 1 7 2 4 1 1 7 2 6 2 6 2 6 2 6 2 6 2 7 1 1 4 2 7 1 1 2 1 1 7 2 4 1 1 8 1 1 4 2 7 1 1 2 1 1 7 2 4 1 1 8 1 1 4 2 7
\item OSNO (60, 186) 1 1 2 1 1 7 2 6 2 6 2 8 2 4 1 1 8 1 2 1 3 1 1 8 1 2 1 3 2 9 1 1 4 2 8 2 6 2 6 2 7 1 1 2 1 1 7 2 6 2 7 1 2 1 3 1 1 7 2 7
\item CS (64, 156) 1 1 2 1 1 6 1 1 2 1 1 8 1 1 3 1 2 1 7 2 6 2 7 1 2 1 3 1 1 8 1 1 2 1 1 6 1 1 2 1 1 9 2 2 1 1 6 1 1 2 1 1 8 1 1 2 1 1 6 1 1 2 2 9
\item CS (66, 200) 1 1 1 1 3 9 1 1 2 1 1 5 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 5 1 1 2 1 1 9 3 1 1 1 1 5 2 8 2 5 1 2 1 5 2 8 2 5
\item CS (66, 208) 1 1 1 1 3 9 1 1 4 2 8 2 5 1 2 1 5 2 8 2 5 1 2 1 5 2 8 2 5 1 2 1 5 2 8 2 4 1 1 9 3 1 1 1 1 5 2 8 2 5 1 2 1 5 2 8 2 5 1 2 1 5 2 8 2 5
\item OSNO (66, 230) 1 1 2 1 1 5 1 1 9 2 4 1 1 9 2 4 2 8 2 6 2 6 2 8 2 4 1 1 9 2 4 2 8 2 6 2 6 2 8 2 4 2 9 1 1 4 1 1 9 2 4 1 1 9 2 4 2 9 1 1 5 1 1 2 2 9
\item CS (68, 188) 1 1 2 1 1 6 1 1 2 1 1 8 1 2 1 3 2 9 1 1 4 2 8 2 5 1 2 1 6 1 1 2 1 1 8 1 1 4 2 8 2 4 1 1 8 1 1 2 1 1 6 1 2 1 5 2 8 2 4 1 1 9 2 3 1 2 1 8
\item CS (72, 216) 1 1 2 1 1 7 2 4 1 1 8 1 1 2 1 1 7 2 4 2 8 2 4 2 8 2 4 2 7 1 1 2 1 1 8 1 1 4 2 7 1 1 2 1 1 7 2 5 1 2 1 5 2 8 3 1 3 2 8 2 3 1 3 8 2 5 1 2 1 5 2 7
\item OSNO (74, 220) 1 1 2 1 1 6 1 2 1 5 2 8 2 4 2 9 1 1 4 2 8 2 6 2 6 2 7 1 1 2 1 1 7 2 7 1 1 3 1 2 1 7 2 6 2 7 1 1 3 1 2 1 7 2 5 1 2 1 5 1 1 9 2 4 1 1 8 1 2 1 3 1 1 8
\item CS (80, 324) 1 1 2 1 1 7 2 13 1 1 8 2 10 2 9 1 2 1 5 1 1 12 1 1 6 2 13 1 1 6 1 1 12 1 1 6 2 12 2 8 2 12 2 6 1 1 12 1 1 6 1 1 13 2 6 1 1 12 1 1 5 1 2 1 9 2 10 2 8 1 1 13 2 7 1 1 2 1 1 10
\item CS (84, 260) 1 1 1 1 3 9 1 1 2 1 1 5 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 5 1 1 2 1 1 9 3 1 1 1 1 5 2 8 2 5 1 2 1 5 2 8 2 5 1 2 1 5 2 8 2 5
\item CS (92, 252) 1 1 1 1 3 9 1 1 5 1 1 2 1 1 8 1 1 4 1 1 8 1 2 1 2 1 2 1 8 1 1 4 1 1 8 1 1 2 1 1 5 1 1 9 3 1 1 1 1 5 2 6 2 7 1 2 1 2 1 2 1 8 1 1 4 2 8 2 4 2 8 2 4 2 8 2 4 1 1 8 1 2 1 2 1 2 1 7 2 6 2 5
\item CS (100, 300) 1 1 2 1 1 7 2 4 1 1 7 2 6 2 6 2 6 2 6 2 6 2 6 2 6 2 6 2 6 2 7 1 1 4 2 7 1 1 2 1 1 7 2 4 1 1 8 1 1 4 2 7 1 1 2 1 1 7 2 4 1 1 8 1 1 4 2 7 1 1 2 1 1 7 2 4 1 1 8 1 1 4 2 7 1 1 2 1 1 7 2 4 1 1 8 1 1 4 2 7
\item CS (102, 320) 1 1 1 1 3 9 1 1 2 1 1 5 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 4 1 1 9 2 4 2 9 1 1 5 1 1 2 1 1 9 3 1 1 1 1 5 2 8 2 5 1 2 1 5 2 8 2 5 1 2 1 5 2 8 2 5 1 2 1 5 2 8 2 5
\item OSNO (116, 326) 1 1 2 1 1 6 1 2 1 5 2 7 1 2 1 4 1 2 1 6 1 2 1 5 2 7 1 1 2 1 1 8 1 1 4 2 8 2 5 1 2 1 5 2 8 2 5 1 2 1 5 2 9 1 1 5 1 2 1 5 2 8 2 5 1 2 1 5 2 8 2 4 1 1 8 1 1 2 1 1 7 2 5 1 2 1 6 1 2 1 4 1 2 1 7 2 5 1 2 1 5 1 1 9 2 4 1 1 8 1 2 1 3 1 1 8
\end{enumerate}
\end{appendices}
\end{document} |
\begin{document}
\pagestyle{plain}
\title{An Epistemic Perspective on Consistency of Concurrent Computations}
\author{Klaus v. Gleissenthall\inst{1} \and Andrey Rybalchenko\inst{1,2}}
\institute{Technische Universität M\"unchen \and Microsoft Research Cambridge}
\maketitle
\begin{abstract}
Consistency properties of concurrent computations, e.g., sequential
consistency, linearizability, or eventual consistency, are
essential for devising correct concurrent algorithms. In this paper,
we present a logical formalization of such consistency properties that
is based on a standard logic of knowledge.
Our formalization provides a declarative perspective on what is
imposed by consistency requirements and provides some interesting
unifying insight on differently looking properties.
\end{abstract}
\section{Introduction}
\label{sec-intro}
Writing correct distributed algorithms is notoriously difficult.
While in the sequential case, various techniques for proving
algorithms correct exist \cite{Hoare69,OHearn01}, in the
concurrent setting, due to the nondeterminism induced by
scheduling decisions and transmission failures,
it is not even obvious what correctness actually means.
Over the years, a variety of different \emph{consistency properties}
restricting the amount of tolerated nondeterminism have been
proposed~\cite{Herlihy08,Herlihy87,Herlihy90,Lamport79,Papadimitriou79,shapiro09}. These
properties range from simple properties like sequential
consistency~\cite{Lamport79} or
linearizability~\cite{Herlihy87,Herlihy90} to complex conditions like
eventual consistency~\cite{shapiro09}, a distributed systems
condition.
Reasoning about these properties is a difficult, yet important task
since their implications are often surprising.
Currently, the study of consistency properties and the
development of reasoning tools and techniques for such
properties~\cite{Burckhardt10,Vafeiadis10,Qadeer09} is done for each property individually, i.e., on a
per property basis. To some extent, this trend might be traced back to
the way consistency properties are formulated.
Typically, they explicitly require existence of certain computation
traces that are obtained by rearrangement of the trace that is to be
checked for consistency,
i.e., these descriptions of consistency properties do not rely on a
logical formalism. While such an approach provides fruitful grounds for the design of
specialized algorithms and efficient tools, it leaves open important
questions such as how various properties relate to each other or
whether advances in dealing with one property can be leveraged for
dealing with other properties.
In contrast to the trace-based definitions found in the literature, we propose
to study consistency conditions in terms of epistemic
logic~\cite{Fagin03,Halpern90}.
Here we can rely on a distributed knowledge modality~\cite{Fagin03},
which is a natural fit for describing distributed computation.
In this logic, an application $D_G (\varphi)$ of the distributed
knowledge modality to a formula $\varphi$ denotes the fact that a
group $G$ \emph{knows} that a formula $\varphi$ holds.
We present a logical formalization of three consistency properties:
the classical sequential consistency~\cite{Lamport79} and
linearizability~\cite{Herlihy87,Herlihy90}, as well as a recently
proposed formulation~\cite{evCons} of eventual
consistency for distributed databases~\cite{shapiro09}.
Our characterizations show that moving the viewpoint from reasoning
about traces (models) to reasoning about knowledge (logic) can lead to
new insights.
When formulated in the logic of knowledge, these differently looking
properties agree on a common schematic form: $\neg D_G (\neg
\mathit{correct})$.
According to this schematic form, a computation satisfies a
consistency property if and only if a group $G$ of its participants,
i.e., threads or distributed nodes, \emph{do not know that} the
computation violates a specification~$\mathit{correct}$ that describes
computations from the \emph{sequential} perspective, i.e., without
referring to permutations thereof.
For example, when formalising sequential consistency of a concurrent
register $\mathit{correct}$ only states that the first read operation
returns zero and any subsequent read operation returns the value
written by the latest write operation.
The common form of our characterizations exposes the differences
between the consistency properties in a formal way.
A key difference lies in the group of participants that provides
knowledge for validating the specification~$\mathit{correct}$.
For example, a computation is sequentially consistent if it satisfies
the formula $\neg D_\textsc{Threads}(\neg \mathit{correct})$, i.e., the group
$G$ of agents needed to validate the sequential specification
comprises the group of threads $\textsc{Threads}$ accessing the shared
memory.
Surprisingly, the same group of agents is needed to validate eventual
consistency, since in our logic it is characterized by the formula
$\neg D_\textsc{Threads}(\neg \mathit{correctEVC})$.
This reveals an insight that eventual consistency is actually not an
entirely new consistency condition, but rather an instance of
sequential consistency that is determined by a particular choice
of~$\mathit{correct}$.
In contrast to the two above properties, the threads' knowledge is not
enough to validate linearizability.
To capture linearizability, the set of participants $G$ needs to go
beyond the participating threads $\textsc{Threads}$ and include an
additional observer thread~$\mathit{obs}$ as well.
The observer only acquires knowledge of the relative order between
returns and calls. As logical characterization of linearizability, we
obtain $\neg D_{\textsc{Threads}\cup\{\mathit{obs}\}}(\neg
\mathit{correct})$.
We show that including the observer induces a different kind of
knowledge, i.e., it weakens the modal system from \emph{S5} to
\emph{S4}~\cite{Ditmarsch08}.
As a consequence, the agents lose certainty about their
decision whether or not a trace is consistent.
For sequential consistency ($\mathit{seqCons}$) the agents know whether or not a trace is
sequentially consistent, i.e., the formula $ (\mathit{seqCons} \leftrightarrow
D_\textsc{Threads}(\mathit{seqCons})) \mathrel{\land} (\neg\mathit{seqCons}
\leftrightarrow D_\mathit{Threads}(\neg\mathit{seqCons}))$ is valid.
In contrast, for linearizability ($\mathit{Lin}$) the threads cannot be sure whether a trace they
validate as linearizable is indeed linearizable, i.e., there exists a
trace that satisfies $\mathit{Lin} \wedge \neg D_{\textsc{Threads} \uplus \{ \mathit{obs} \}} (\mathit{Lin})$.
The discovery that eventual consistency can be reduced to sequential
consistency is facilitated by a generalization of classical sequential
consistency that follows naturally from taking the epistemic
perspective.
Our formalization of $\mathit{correct}$ for eventual consistency is given
by $\mathit{correctEVC}$ that requires nodes to keep consistent logs, i.e.,
whenever a transaction is received by a distributed node, the
transaction must be inserted into the node's logs in a way that is
consistent with the other nodes' recordings.
We allow $\mathit{correctEVC}$ not only to refer to events that are performed by
the nodes that take part in the computation, but also to auxiliary
events that model the \emph{environment} that interacts with nodes.
We use the environment to model transmission of updates from one
distributed node to another. Our knowledge characterization then
implicitly quantifies over the order of occurrence of such events,
which serves as a correctness certificate for a given trace.
\paragraph{Contributions}
In summary, our paper makes the following
contributions.
We provide characterizations for sequential consistency (Section
\ref{sec:seqCons}), eventual consistency (Section \ref{sec:evCons})
and linearizability (Section \ref{sec:lin}), which we prove correct
with respect to their standard definitions. Our characterizations reveal a
remarkable similarity between consistency properties that is not
apparent in their standard formulations.
Through our characterizations, we identify a natural generalization of sequential consistency that
allows us to reduce eventual consistency, a complex property usually
defined by the existence of two partial-ordering relations, to
sequential consistency. In contrast to this reduction, we show that
linearizability requires a different kind of knowledge than
sequential consistency and prove a theorem (Section
\ref{sec:detect}) illustrating the ramifications of this difference.
\section{Examples}
\label{sec:illu}
In this section, before providing technical details, we give an informal overview of our characterizations.
\subsection{Sequential Consistency}
\paragraph*{Trace-Based Definition}
The most fundamental consistency condition that concurrent computations are intuitively expected to satisfy is sequential
consistency \cite{Lamport79}. Its original definition reads:
\begin{quote}
The result of any execution is the same as if the operations of all
the processors were executed in \emph{some sequential order}, and the
operations of each individual processor appear in the sequence in the
\emph{order specified by its program}.
\end{quote}
Equivalently, this more formal version can be found in the literature (cf., \cite{Attiya94}): For a trace $E$ to be sequentially consistent, it needs to satisfy two conditions: (1) $E$ must be \emph{equivalent} to a witness trace $E'$
and (2) trace $E'$ needs to be \emph{correct} with respect to some specification.
To be equivalent, two traces need to be permutations of one another that preserve the local order of events for each thread.
\begin{example}
\label{ex:seq}
Consider the following traces representing threads $t_1$ and $t_2$ storing and loading values on a shared register.
For the purpose of this example, we assume the register to be initialized with value~$0$. We use ``$:=$'' to abbreviate ``equals by definition''.
\begin{equation*}
\begin{array}{rl}
E_1 :=& (t_2,\mathit{ld}(0)) \; (t_2,\mathit{ld}(1)) \; (t_1,\mathit{st}(1))\\[\jot]
E_2 :=& (t_2,\mathit{ld}(0)) \; (t_1,\mathit{st}(1)) \; (t_2,\mathit{ld}(1))\\[\jot]
E_3 :=&(t_2,\mathit{ld}(0)) \; (t_1,\mathit{st}(1)) \; (t_2,\mathit{ld}(2))
\end{array}
\end{equation*}
Trace $E_1$ is sequentially consistent, because it is
equivalent to $E_2$ and $E_2$ meets the specification of a shared
register, i.e., each load returns the last value stored. In contrast,
$E_3$ is not sequentially consistent, because no appropriate witness
can be found. In no equivalent trace, $t_2$'s load of $2$ is
preceded by an appropriate store operation.
\end{example}
\paragraph{Logic} In this paper, in contrast to the above
trace-based formulation, we investigate consistency from the
perspective of \emph{epistemic logic}. Epistemic logic is a formalism used for reasoning
about the knowledge distributed nodes/threads acquire in a distributed
computation. For example, in trace $E_1$ thread $t_2$
\emph{knows} it first loaded value~$0$ and then value~$1$ while $t_1$
\emph{knows} it stored~$1$. When we consider the knowledge acquired
by the threads $t_1$ and $t_2$ together as a group, we say that the
group of threads $\{t_1, t_2\}$ \emph{jointly knows} $t_2$ first
loaded~$0$ and then~$1$ while $t_1$ stored~$1$. We denote the
fact that a group $G$ jointly knows that a formula $\varphi$ holds
by~$D_G(\varphi)$, which is an application of the distributed
knowledge modality.
According to our logical characterization of sequential consistency:
$\neg D_\textsc{Threads}(\neg \mathit{correct})$,
a trace is sequentially consistent, if the group of all threads accessing the shared data-structure does not jointly know that the trace is not correct.
\addtocounter{example}{-1}
\begin{example}[continued]
This means trace $E_1$ is sequentially consistent. In trace $E_1$, the threads know that $t_2$ first loaded $0$ and then $1$ and that $t_1$ stored $1$; however, they do not know in which order these events were scheduled. This means that, for all they know, $t_1$ could have stored $1$ after $t_2$ loaded~$0$ and before $t_2$ loaded~$1$, which would meet the specification. In contrast, $E_3$ is not sequentially consistent. The threads know that $t_2$ loaded $2$ although no thread stored it. This means $E_3$ cannot have met the specification.
\end{example}
\paragraph{Indistinguishability} We formalize this notion of knowledge in
terms of the local perspective individual threads have on the
computation. We extract this perspective by a function
$\downarrow$ such that $E \downarrow t$ projects trace $E$
onto the local events of thread $t$.
If two traces do not differ from the local perspective of thread $t$,
we say that they are indistinguishable for $t$. We write $E \sim_t E'$
to denote that for thread $t$, trace $E$ is indistinguishable from
trace $E'$. Combining their abilities to distinguish traces, a group
of threads can distinguish two traces whenever there is a thread in
the group that can. We write $E \sim_G E'$ to denote that for
group~$G$ trace $E$ is indistinguishable from trace $E'$.
Indistinguishability allows us to define the knowledge of a group.
A group $G$ knows a fact $\varphi$ if this fact holds on all traces
that the threads in $G$ cannot distinguish from the actual trace. We
write $E \models \varphi$ to say that trace $E$ satisfies formula
$\varphi$. Formally (see Section \ref{def:semantics}):
$E \models D_G (\varphi)$ :iff for all $E'$ s.t. $E \sim_G E'$: $E' \models \varphi$,
where we use ":iff" to abbreviate "by definition, if and only if".
\addtocounter{example}{-1}
\begin{example}[continued] For trace $E_1$, the thread-local
projections are: $E_1 \downarrow t_1 = (t_1,\mathit{st}(1))$ and $E_1
\downarrow t_2 = (t_2,\mathit{ld}(0)) (t_2,\mathit{ld}(1))$. We
get the same projections for $E_2$, and $E_3 \downarrow t_1 =
(t_1,\mathit{st}(1))$ and $E_3 \downarrow t_2 = (t_2,\mathit{ld}(0))
(t_2,\mathit{ld}(2))$. From these projections, we get $E_1 \sim_{t_1} E_2 \sim_{t_1} E_3$
and $E_1 \sim_{t_2} E_2$ but $E_1 \not \sim_{t_2} E_3$ and $E_2 \not
\sim_{t_2} E_3$. For groups of threads, we have: $E_1 \sim_{\{ t_1,t_2 \}} E_2$
but $E_2 \not\sim_{\{ t_1,t_2 \}} E_3$, because $E_2 \not\sim_{t_2}
E_3$.
We write $E \models \mathit{correctREG}$ to say $E$ is correct with respect
to the specification of a shared register. Then $E_1 \models \neg
D_{\textsc{Threads}} (\neg \mathit{correctREG})$, $E_2 \models \neg
D_{\textsc{Threads}} (\neg \mathit{correctREG})$ and $E_3 \models D_{\textsc{Threads}} (\neg
\mathit{correctREG})$.
\end{example}
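The characterization can be checked by brute force on small traces. The following Python sketch (ours, not part of the paper's development) implements the projection $\downarrow$, the indistinguishability relations $\sim_t$ and $\sim_G$, and the register specification, and then tests $\neg D_\textsc{Threads}(\neg \mathit{correctREG})$ by enumerating all traces indistinguishable from the given one; the enumeration over permutations is exponential and purely illustrative.
\begin{verbatim}
from itertools import permutations

def project(trace, t):
    # the projection of the trace onto the local events of thread t
    return [e for e in trace if e[0] == t]

def indistinguishable(E1, E2, group):
    # E1 ~_G E2: no thread in the group can tell the two traces apart
    return all(project(E1, t) == project(E2, t) for t in group)

def correct_register(trace, init=0):
    # sequential specification: every load returns the last value stored
    val = init
    for (_, op, v) in trace:
        if op == 'st':
            val = v
        elif op == 'ld' and v != val:
            return False
    return True

def sequentially_consistent(E):
    # E satisfies ~D_Threads(~correctREG): some trace that the threads cannot
    # distinguish from E meets the register specification
    threads = {t for (t, _, _) in E}
    return any(indistinguishable(E, list(p), threads) and correct_register(list(p))
               for p in permutations(E))

E1 = [('t2', 'ld', 0), ('t2', 'ld', 1), ('t1', 'st', 1)]
E3 = [('t2', 'ld', 0), ('t1', 'st', 1), ('t2', 'ld', 2)]
print(sequentially_consistent(E1), sequentially_consistent(E3))   # True False
\end{verbatim}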
\paragraph{Knowledge in the Trace-Based Formulation} Interestingly, the notion of equivalence found in the trace-based formulation of sequential consistency precisely corresponds to $\sim_\textsc{Threads}$, the indistinguishability relation of all threads accessing the shared data-structure. This suggests that the knowledge-based formulation of consistency already lies buried in the original definition. Similarly, the phrase ``The result of any execution is the same as if ...'', found in the original definition, alludes to the possibility of a fact $\varphi$, which, in epistemic logic, is represented by the dual modality of knowledge $\neg D_G (\neg \varphi)$.
\subsection{Eventual Consistency}
Eventual consistency~\cite{shapiro09} is a correctness condition for distributed database systems, such as those employed in modern geo-replicated internet services. In such systems, threads (distributed nodes) keep local working-copies (repositories) of the database which they may update by performing a commit operation. Queries and updates have revision ids, representing the current state of the local copy.
Whenever a thread commits, it broadcasts local changes to its repository and receives changes made by other threads.
After the commit, a new revision id is assigned. As the underlying network is unreliable, committed changes may however be delayed or lost before reaching other threads.
In this setting, weaker guarantees on consistency than in a multi-processor environment are required, as network partitions are unavoidable, causing updates to be delayed or lost. Consequently, eventual consistency is a prototypical example for what is called ``weak''-consistency. We present a recent, partial-order based definition drawn from the literature~\cite{evCons} in Section \ref{sec:evCons}.
Taking the knowledge perspective on eventual consistency reveals a remarkable insight. Eventual consistency is actually not a new, weaker consistency condition, but just sequential consistency -- with an appropriate sequential specification.
In our logical characterization, eventual consistency is defined by the formula:
$E \models \neg D_{\textsc{Threads}} (\neg \mathit{correctEVC}$).
That is, to be eventually consistent, a trace needs to be sequentially consistent with respect to a sequential specification $\mathit{correctEVC}$. Our formula for $\mathit{correctEVC}$ uses the past time modality $\boxminus (\varphi)$ (see Section \ref{def:semantics}), representing the fact that so far, formula $\varphi$ was true.
We specify $\mathit{correctEVC}$ by:
\begin{equation*}
\mathit{correctEVC} :=
\begin{array}[t]{l@{}}
\begin{array}[t]{l@{}}
\forall t \forall q \forall r ( \boxminus( \mathit{query}(t,q,r) \rightarrow
\exists \mathcal{L}( \mathcal{L} \; \mathit{validLog} \; t \; \wedge \mathit{result}(q,\mathcal{L},r)))) \\ [\jot]
\wedge \; \mathit{atomicTrans} \wedge \mathit{alive} \wedge \mathit{fwd}
\end{array}
\end{array}
\end{equation*}
This formula says that for all threads, queries and results, so far, whenever a thread~$t$ posed a query~$q$ to its local repository, producing result~$r$, thread~$t$ must be able to present a valid log $\mathcal{L}$, such that the result of posing query~$q$ on a machine that performed only the operations logged in log~$\mathcal{L}$ matches the recorded result~$r$. The additional conjuncts $\mathit{atomicTrans}$, $\mathit{alive}$ and $\mathit{fwd}$ specify further requirements on the way updates may be propagated in the network.
In our characterization, a log~$\mathcal{L}$ is a sequence of actions (i.e., queries, updates and commits). The formula $\mathit{validLog}$ describes the conditions a log has to satisfy to be valid for a thread~$t$:
\begin{center}
$\mathcal{L} \; \mathit{validLog} \; t \; := \; \forall a(a \; \mathit{in} \; \mathcal{L} \leftrightarrow t \; k_{\mathit{log}} \; a ) \; \wedge \mathit{consistent}(\mathcal{L})$
\end{center}
This formula requires that for all actions~$a$, $a$ is logged in~$\mathcal{L}$ (represented by the infix-predicate $\mathit{in}$) if and only if thread~$t$ knows about action~$a$. A thread knows about all the actions that it performed itself and the actions performed in revisions that were forwarded to it. The formula $\mathit{consistent}({\mathcal{L}})$ ensures that all actions in the log~$\mathcal{L}$ appear in an order consistent with the actual order of events.
\paragraph{Environment Events}
To make this result possible, we make a generalization that comes naturally in the knowledge setting. We allow traces to contain \emph{environment} events that represent actions not controlled by the threads participating in the computation. In our characterization, environment events mark positions where updates were successfully forwarded from one client to another. By allowing $\mathit{correctEVC}$ to refer to these events, we implicitly encode an existential quantification over all possible positions for them. That means a trace is eventually consistent if any number of such events could have occurred such that the specification $\mathit{correctEVC}$ is met.
\begin{example}
Consider the following traces of a simple database that allows clients to update and query the integer variable $x$:
\begin{equation*}
\begin{array}{rl}
E_{4} :=&(t_1, \mathit{up}(0,x:=0)) \; (t_1,\mathit{com}(0))\; (t_1,\mathit{up}(1,x:=1))
(t_1,\mathit{com}(1)) \\ [\jot]
& (t_2,\mathit{qu}(0,x,0)) (t_2,\mathit{com}(0)) \; (t_2,\mathit{qu}(1,x,1)) \\ [\jot]
E_{5} :=& (t_1, \mathit{up}(0,x:=0)) \; (t_1,\mathit{com}(0)) \; (\mathit{env},\mathit{fwd}(t_1,t_2,0))
(t_1,\mathit{up}(1,x:=1)) \\ [\jot]
& (t_1,\mathit{com}(1)) \; (t_2,\mathit{qu}(0,x,0))
(t_2,\mathit{com}(0)) \; (\mathit{env},\mathit{fwd}(t_1,t_2,1)) \\ [\jot]
& (t_2,\mathit{qu}(1,x,1))
\end{array}
\end{equation*}
Updates are of the form $\mathit{up}(\mathit{id},u)$, where $\mathit{id}$ is the revision-id and $u$ the actual update. In our example, updates are variable assignments $x:=v$ meaning that a variable~$x$ is assigned value $v$. Queries are of the form $\mathit{qu}(\mathit{id},q,r)$, where $\mathit{id}$ stands for the revision-id, $q$ for the query, and $r$ for the result. Queries in our example consist only of variables, i.e., a query returns the current value assigned. The action $\mathit{com}(\mathit{id})$ represents the act of committing, that is, sending revision $\mathit{id}$ over the network and checking for updates.
Forwarding actions are performed by the environment $\mathit{env}$. The event $(\mathit{env},\mathit{fwd}(t,t',\mathit{id}))$ represents the environment forwarding the changes made in revision $\mathit{id}$ from thread~$t$ to thread~$t'$.
In trace $E_{5}$, when thread~$t_2$ queries the value of $x$ in revision $0$, thread~$t_2$ can present the log
$\mathcal{L} := \mathit{up}(0,x:=0) \; \mathit{com}(0) \; \mathit{qu}(0,x,0)$
as evidence of the correctness of the result $0$. Since, by the time of $t_2$'s query, only revision $0$ has been forwarded from $t_1$ to $t_2$, thread $t_2$ only knows about $t_1$'s first update and its own query. Querying $x$ after the update $x:=0$ yields $0$, so $\mathit{result}(x,\mathcal{L},0)$ holds.
When thread~$t_2$ queries $x$ in revision $1$, thread~$t_1$'s second update has been forwarded, so $t_2$ can present the log
$\mathcal{L} := \mathit{up}(0,x:=0) \; \mathit{com}(0) \; \mathit{up}(1,x:=1) \; \mathit{com}(1) \;
\mathit{qu}(0,x,0) \; \mathit{com}(0) \; \mathit{qu}(1,x,1)$.
Since $t_2$ received $t_1$'s revision~$1$, the log contains the second update $x:=1$, and $t_2$'s query of $x$ returns $1$. This means $E_5 \models \mathit{correctEVC}$.
As a consequence, we have $E_4 \models \neg D_{\textsc{Threads}} (\neg \mathit{correctEVC})$, because $E_4 \sim_{\textsc{Threads}} E_5$ and $E_5 \models \mathit{correctEVC}$. The forwarding events in $E_5$ mark positions where the transmission of updates through the network could have occurred to make the computation meet $\mathit{correctEVC}$.
\end{example}
\subsection{Linearizability}
While the threads' knowledge characterizes sequential consistency and eventual consistency, it is not strong enough to define linearizability. Linearizability extends sequential consistency by the requirement that each method call must take its visible effect on the shared data at some point between its invocation and its return. Such a point is called the \emph{linearization point} of the method.
To characterize linearizability, we introduce another agent, called \emph{the observer}, that tracks the available information on linearization points. To do this, the observer monitors the order of non-overlapping (sequential) method calls in a trace: its view of a trace is exactly this order, represented as a set of pairs of return and invoke events such that the return took place before the invocation. We extract this order by a projection function $\mathit{obs}(\cdot)$.
\begin{example}
Consider the following traces where method calls are split into invocation- and return events:
\begin{equation*}
\begin{array}{rl}
E_6:=&(t_2, \mathit{inv} \; \mathit{ld}()) \; (t_2, \mathit{ret} \; \mathit{ld}(1)) \; (t_1, \mathit{inv} \; \mathit{st}(1)) \;
(t_1,\mathit{ret} \; \mathit{st}(\mathit{true}))\\[\jot]
E_7:=&(t_2, \mathit{inv} \; \mathit{ld}()) \; (t_1, \mathit{inv} \; \mathit{st}(1)) \; (t_2, \mathit{ret} \; \mathit{ld}(1)) \;
(t_1,\mathit{ret} \; \mathit{st}(\mathit{true})) \\[\jot]
E_8 :=&(t_1, \mathit{inv} \; \mathit{st}(1)) \; (t_1,\mathit{ret} \; \mathit{st}(\mathit{true})) \; (t_2, \mathit{inv} \; \mathit{ld}()) \; (t_2, \mathit{ret} \; \mathit{ld}(1))
\end{array}
\end{equation*}
For trace $E_6$, the observer's projection function $\mathit{obs}(\cdot)$ yields:
$\mathit{obs}(E_6) = \{ ( \; (t_2, \mathit{ret} \; \mathit{ld}(1)) , (t_1, \mathit{inv} \; \mathit{st}(1)) \; ) \}$.
This means the observer sees that $t_2$'s load returned before $t_1$'s store was invoked. In trace $E_7$, the method calls overlap. Consequently, the observer knows nothing about this trace:
$\mathit{obs}(E_7) = \varnothing$.
For $E_8$, we get $\mathit{obs}(E_8) = \{ ( \; (t_1, \mathit{ret} \; \mathit{st}(\mathit{true})) , (t_2, \mathit{inv} \; \mathit{ld}()) \; ) \}$.
\end{example}
The observer's view tracks the available information on linearization points. In trace $E_6$, thread $t_2$'s linearization point for the call to load must have occurred before the linearization point of $t_1$'s call to store. This follows from the fact that $t_2$'s load returned before $t_1$'s store was invoked and that a linearization point must occur somewhere between a method's invocation and its return. In trace $E_7$, the linearization points may have occurred in either order, as the method calls overlap.
To the observer, a trace $E$ is indistinguishable from a trace $E'$ if the order of linearization points in $E$ is preserved in $E'$, possibly with an order between additional linearization points being fixed (see Section \ref{sec:indist}):
$E \preceq_{obs} E' \; \text{:iff} \; \mathit{obs}({E}) \subseteq \mathit{obs}(E')$.
A trace $E$ is linearizable if the threads together with the observer do not know that the trace is incorrect:
$E \models \neg D_{\textsc{Threads} \uplus \{ \mathit{obs} \}} (\neg \mathit{correct})$.
\addtocounter{example}{-1}
\begin{example}[Continued]
We have
$E_6 \not \preceq_{obs} E_7 \; \text{, but} \; E_7 \preceq_{obs} E_6$.
Trace $E_7$ is linearizable since $E_7 \sim_{\textsc{Threads} \uplus \{ \mathit{obs}\}} E_8$ and $E_8 \models \mathit{correctREG}$. However, trace $E_6$ is not linearizable since there is no indistinguishable trace that meets the specification. Note that the threads without the observer could not have detected this violation of the specification, i.e., $E_6 \models \neg D_{\textsc{Threads}} \neg \mathit{correctREG}$.
\end{example}
\subsection{Knowledge about Consistency}
As we describe sequential consistency in a standard logic of
knowledge, the corresponding axioms apply (see, e.g., \cite[Chapter
2.2]{Ditmarsch08}).
For example, everything a group of threads knows is also true:
$\text{(T)} := \; \models D_G (\varphi) \rightarrow \varphi \; \text{(Truth axiom)}$,
groups of threads know what they know:
$\text{(4)} := \; \models D_G (\varphi) \rightarrow D_G (D_G (\varphi)) \; \text{ (positive introspection)}$
and groups of threads know what they do not know:
$\text{(5)} := \; \models \neg D_G (\varphi) \rightarrow D_G (\neg D_G (\varphi)) \; \text{ (negative introspection)}$.
For a complete axiomatization of a similar epistemic logic with time, see \cite{Belardinelli08}.
Interestingly, adding the observer not only strengthens the threads' ability to distinguish traces but also changes the \emph{kind} of knowledge agents acquire about a computation. Whereas $\sim_\textsc{Threads}$ is an \emph{equivalence relation}, $\sim_{\textsc{Threads} \uplus \{ \mathit{obs}\}}$ is only a \emph{partial order}. As a consequence, $D_{\textsc{Threads}}$ corresponds to the modal system \emph{S5}, whereas $D_{\textsc{Threads} \uplus \{ \mathit{obs} \}}$ corresponds to the weaker system \emph{S4} \cite{Ditmarsch08}. This means that $D_{\textsc{Threads} \uplus \{ \mathit{obs} \}}$ does not satisfy the axiom of negative introspection (5).
It seems natural to ask whether the difference in the type of knowledge between sequential consistency and linearizability affects the ability to detect violations of the specification. In Section \ref{sec:detect}, we show that the difference made by the lack of axiom (5) lies in the certainty threads have about their decision. For sequential consistency ($\mathit{seqCons}:= \neg D_{\textsc{Threads}} (\neg \mathit{correct})$), whenever the threads decide that a trace is sequentially consistent, they can be sure that the trace is indeed sequentially consistent:
$(\mathit{seqCons} \leftrightarrow D_\textsc{Threads}(\mathit{seqCons}))$
For linearizability ($\mathit{Lin} := \neg D_{\textsc{Threads} \uplus \{\mathit{obs} \}} (\neg \mathit{correct})$), by contrast, it can happen that the threads together with the observer decide that a trace is linearizable but cannot be sure that it really is:
$ \mathit{Lin} \wedge \neg D_{\textsc{Threads} \uplus \{ \mathit{obs} \}} (\mathit{Lin})$.
\section{Logic Of Knowledge}
\label{sec:log}
In this section we present a standard logic of knowledge (see
\cite{Halpern90}) that we use for our characterizations.
We follow the exposition of \cite{kramer10}. We define the set
$\mathcal{E}$ of events as $\mathcal{E} \ni e := (t,\mathit{act})$, representing $t \in
\textsc{Threads} \uplus \{ \mathit{env}\}$ performing an action $\mathit{act} \in
\mathcal{A}$. The environment $\mathit{env}$ can perform
synchronization events that go unseen by the threads. In our
characterization of eventual consistency, the environment forwards
transactions from one node to another.
We define the generic set of actions:
$\mathcal{A} \ni \mathit{act}:= \; \mathit{inv}(m,v) \; | \; \mathit{ret}(m,v)$.
Threads can invoke or return from methods $m \in \textsc{Methods}$ with $v \in
\textsc{Values}$. For our characterization of
eventual consistency, we instantiate $\mathcal{A}$
with application-specific actions. These can easily be translated back into
the generic form by splitting up events into separate invocation-
and return-parts.
\subsection{Preliminaries}
\label{sec:prelim}
We denote by $\mathcal{E}^\ast$ the set of finite-, and by $\mathcal{E}^\omega$ the set of infinite sequences over $\mathcal{E}$.
We denote the empty sequence by $\epsilon$. Let $\mathcal{E}^\infty :=
\mathcal{E}^\ast \; \uplus \; \mathcal{E}^\omega$ and $E
\in\mathcal{E}^\infty$. Then $E \downharpoonright i$ denotes the
finite prefix of $E$ up to and including position $i$. We let $E@i$ be the element of
sequence $E$ at position $i$. We define $\mathit{len}(E)$ to be the
length of $E$, where $\mathit{len}(\epsilon)=0$ and $\mathit{len}(E)=
\omega$ if $E \in \mathcal{E}^\omega$. For $e \in \mathcal{E}$, we say that $\mathit{pos}(e,E)= j$ if $E@j=e$, and $\mathit{pos}(e,E)=\omega$ otherwise. Hence, we write $e \in E$ if $\mathit{pos}(e,E) < \omega$.
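For readers who prefer executable notation, the following Python sketch (our own illustration; all names are ad hoc and not part of the formal development) renders these primitives on finite traces with unique events, modelling events as pairs of a thread and an action:
\begin{verbatim}
# Sketch of the trace primitives on finite traces (lists of events).
# OMEGA stands for the value used when an event does not occur in E.
OMEGA = float('inf')

def prefix(E, i):
    """E |^ i : finite prefix of E up to and including position i (1-based)."""
    return E[:i]

def at(E, i):
    """E @ i : the element of E at position i (1-based)."""
    return E[i - 1]

def pos(e, E):
    """pos(e, E): position of event e in E, or OMEGA if e does not occur."""
    for i, ev in enumerate(E, start=1):
        if ev == e:
            return i
    return OMEGA

E = [("t1", "inv st(1)"), ("t1", "ret st(true)")]
assert pos(("t1", "ret st(true)"), E) == 2   # hence this event "is in" E
\end{verbatim}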
\label{sec:indist}
We formally define projection functions and indistinguishability
relations. A thread's view of a computation trace is the part of the trace it can
observe. We define this part by a projection function that extracts
the respective events. We use this projection function to define an
indistinguishability relation for each thread.
\paragraph*{Thread Indistinguishability Relation}
For a thread $t \in$ \textsc{Threads} the indistinguishability
relation $ \sim_t \; \subseteq (\mathcal{E}^\omega \times (\mathbb{N}
\uplus \{ \omega \}))^2$ is defined such that:
$ (E,i) \sim_t (E',i') \mbox{ :iff } (E \downharpoonright i)
\downarrow t = (E' \downharpoonright i') \downarrow t$
where $\downarrow : (\mathcal{E}^\infty \times \textsc{Threads}) \to
\mathcal{E}^\infty$ designates a projection function onto $t$'s local
perspective. $E \downarrow t$ is the projection on events in the set
$\{ (t,\mathit{act}) \; | \; \mathit{act} \in \mathcal{A} \}$, i.e.,
the sequence obtained from $E$ by erasing all events that are not in
the above set.
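As a small illustration (not part of the formal development), the projection $\downarrow$ and the relation $\sim_t$ can be sketched in Python on finite traces as follows; the joint relation of a group of threads, defined below, is then just the conjunction of the per-thread checks:
\begin{verbatim}
# Sketch of the thread projection E|t and thread indistinguishability.
def project(E, t):
    """E | t : subsequence of the events of E performed by thread t."""
    return [e for e in E if e[0] == t]

def indist_thread(E, i, F, j, t):
    """(E, i) ~_t (F, j): t's local view of both prefixes coincides."""
    return project(E[:i], t) == project(F[:j], t)

def indist_group(E, i, F, j, group):
    """Joint indistinguishability: no thread in the group can tell them apart."""
    return all(indist_thread(E, i, F, j, t) for t in group)
\end{verbatim}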
\paragraph{Observer Indistinguishability Relations}
The observer's view of a trace is the order of non-overlapping method
calls.
We let $\textsc{Inv} \ni \mathit{in} :=
(t, \mathit{inv}(m,v))$ and $\textsc{Ret} \ni r := (t,
\mathit{ret}(m,v))$. The indistinguishability
relation of the observer $ \preceq_{\mbox{obs}} \; \subseteq
(\mathcal{E}^\omega \times \mathbb{N})^2$ is given by:
for all $(E,i),(E',i') \in \mathcal{E}^\omega \times \mathbb{N}$:
$(E,i) \preceq_{\mathit{obs}} (E',i')\mbox{ :iff }\mathit{obs}(E,i)
\subseteq \mathit{obs}(E' , i')$, where $\mathit{obs}: (\mathcal{E}^\omega \times
\mathbb{N}) \to \mathcal{P}(\mathcal{E}^2)$ designates a projection
onto the observer's local view, such that:
$\mathit{obs}(E,i) =\{ (r,\mathit{in}) \in \textsc{Ret} \times \textsc{Inv} \; | \; \mathit{pos}(r,E) < \mathit{pos}(\mathit{in},E) \leq i \}$.
We abbreviate $\mathit{obs}(E):= \mathit{obs}(E,\mathit{len}(E))$.
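The following Python sketch (illustrative only; the event encoding is an assumption of ours) computes $\mathit{obs}$ and $\preceq_{\mathit{obs}}$ on finite traces and reproduces the value of $\mathit{obs}(E_6)$ from the example above:
\begin{verbatim}
# Sketch of the observer projection: pairs (return, invocation) such that
# the return occurs strictly before the invocation, up to position i.
def obs(E, i=None):
    i = len(E) if i is None else i
    pairs = set()
    for k in range(1, i + 1):                 # position of a candidate invocation
        inv = E[k - 1]
        if inv[1] != "inv":
            continue
        for ret in E[:k - 1]:                 # events strictly before position k
            if ret[1] == "ret":
                pairs.add((ret, inv))
    return pairs

def obs_leq(E, i, F, j):
    """(E, i) <=_obs (F, j): the observer's information about E also holds in F."""
    return obs(E, i) <= obs(F, j)

E6 = [("t2", "inv", "ld()"), ("t2", "ret", "ld(1)"),
      ("t1", "inv", "st(1)"), ("t1", "ret", "st(true)")]
assert obs(E6) == {(("t2", "ret", "ld(1)"), ("t1", "inv", "st(1)"))}
\end{verbatim}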
\paragraph{Joint Indistinguishability Relations}
Joint indistinguishability relations link pairs of traces that a group
of threads can distinguish if they share their knowledge. Whenever a
thread in the group can tell the difference between two traces, the
group can. Let $G \subseteq \textsc{Threads}$.
We define the joint indistinguishability relation of group $G$ to
be $\sim_{G} := (\bigcap_{t \in G} \sim_t)$ and $\sim_{G \uplus \{
\mathit{obs}\}} := \sim_{G} \cap \preceq_\mathit{obs}$. For any
indistinguishability relation $\sim$, we write $E \sim E'$ as an
abbreviation for $(E,\mathit{len}(E)) \sim (E',\mathit{len}(E'))$.
\subsection{Syntax}
\label{sec:syntax}
A formula $\psi$ takes the form:
\begin{equation*}
\begin{array}[t]{r@{\;::=\;}l}
\psi & D_G (\varphi) \; | \; D_G (\psi) \; | \; \psi \mathrel{\land} \psi \; | \;
\lnot \psi \\[\jot]
\varphi & p \; | \; \varphi \wedge \varphi \; | \; \neg \varphi
\; | \; \varphi S \varphi \; | \; \varphi U \varphi \; | \; \forall x ( \varphi)
\end{array}
\end{equation*}
with $G \subseteq \textsc{Threads} \uplus \{ \mathit{obs} \}$ and $p \in \textsc{Predicates}$, which we instantiate for each of our characterizations.
The logic provides the temporal modalities $\varphi S \psi$
representing the fact that since $\psi$ occurred, $\varphi$ has held, and
the modality $\varphi U \psi$ representing the fact that until $\psi$ occurs, $\varphi$
holds. Additionally, it provides the \emph{distributed knowledge} modality $D_G$ and first-order quantification.
Let $\Phi$ denote the set of all formulae in the logic.
\subsection{Semantics}
\label{def:semantics}
We now define the satisfaction relation $\models \; \subseteq
(\mathcal{E}^\omega \times (\mathbb{N} \uplus \omega)) \times \Phi$. We let:
\begin{equation*}
\begin{array}[t]{@{}l@{\models}l@{\;\text{:iff}\;}l@{}}
(E,i)&\varphi \wedge \psi & (E,i) \models \varphi \text{ and } (E,i) \models \psi\\
(E,i)&\neg \varphi & \text{not} \; (E,i) \models \varphi \\
\end{array}
\end{equation*}
We define the temporal modalities by:
\begin{equation*}
\begin{array}[t]{@{}l@{\models}l@{\;\text{:iff}\;}l@{}}
(E,i) & \varphi S \psi &
\begin{array}[t]{ll}
\text{ there is } j \leq i \text{ s.t. } (E,j) \models \psi \text{ and }\\
\text{ for all } j < k \leq i: (E,k)\models \varphi\\
\end{array}\\
(E,i) & \varphi U \psi &
\begin{array}[t]{ll}
\text{ there is } j \geq i \text{ s.t. } (E,j) \models \psi \text{ and} \\
\text{ for all } i \leq k < j: (E,k) \models \varphi
\end{array}
\end{array}
\end{equation*}
We define distributed knowledge as:
$(E,i) \models D_G (\varphi)$ :iff for all $(E',i')$: if $(E,i) \sim_{G}
(E',i')$ then $(E',i') \models \varphi$, with $G \subseteq
\textsc{Threads} \uplus \{ \mathit{obs} \}$.
Let $\textsc{D}$ be the domain of quantification. We define first-order quantification:
$(E,i) \models \forall x(\varphi)$ :iff for all $d \in \textsc{D}: (E,i) \models \varphi [d/x]$.
By $\varphi [d/x]$, we denote the term $\varphi$ with all occurrences
of $x$ replaced by $d$. We define $\textsc{D}$ as the disjoint union
of all quantities used in the definition of a condition.
We write $E \models \varphi$ as an abbreviation for $(E,\mathit{len}(E)) \models \varphi$.
\paragraph{Additional Definitions}
\label{app:macro} For convenience, we define the following standard operators in terms
of our above definitions: $\varphi \vee \psi:= \neg (\neg \varphi \wedge
\neg \psi)$, $\varphi \rightarrow \psi := \neg \varphi \vee \psi$,
$\top := (p \vee \neg p)$ for some atomic predicate $p$,
$\diamondminus \varphi := \top S \varphi$ ("once $\varphi$"),
$\boxminus \varphi := \neg \diamondminus \neg \varphi$ ("so far
$\varphi$"), $\Diamond \varphi := \top U \varphi$ (``eventually
$\varphi$''), $\Box \varphi := \neg \Diamond \neg \varphi$ (``always
$\varphi$''), $\varphi W \psi := \varphi U \psi \vee \Box \varphi$
(``weak until''), $\exists x(\varphi) := \neg \forall x(\neg
\varphi)$.
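To make the temporal part concrete, the following Python sketch (our own illustration; formulae are encoded as Python predicates on a trace and a position, which is a convention of ours) evaluates $S$ and the derived past operators on finite traces as defined above:
\begin{verbatim}
# Sketch: evaluating Since and the derived operators "once" and "so far".
def since(phi, psi):
    """phi S psi at (E, i): psi held at some j <= i and phi held on (j, i]."""
    def holds(E, i):
        return any(psi(E, j) and all(phi(E, k) for k in range(j + 1, i + 1))
                   for j in range(1, i + 1))
    return holds

def once(phi):            # <>- phi := true S phi
    return since(lambda E, i: True, phi)

def so_far(phi):          # [-] phi := not <>- (not phi)
    o = once(lambda E, i: not phi(E, i))
    return lambda E, i: not o(E, i)

# Example: "so far, every event belongs to thread t1".
E = [("t1", "a"), ("t1", "b"), ("t2", "c")]
only_t1 = lambda E, i: E[i - 1][0] == "t1"
assert so_far(only_t1)(E, 2) and not so_far(only_t1)(E, 3)
\end{verbatim}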
\section{Sequential Consistency}
\label{sec:seqCons}
We present a trace-based definition of sequential consistency (cf.,
\cite{Attiya94}) and prove our logical characterization
equivalent. Our definition of sequential consistency generalizes the original
definition \cite{Lamport79} by allowing non-sequential specifications.
\begin{definition}[Sequential Consistency]
\label{def:seq_cons}
Let $\textsc{Spec} \subseteq \mathcal{E}^\ast$ be a specification of
the shared data-structure.
A trace $E \downharpoonright i$ is sequentially consistent, written
$\mathit{seqCons}(E,i)$, if and only if there is $(E',i') \in \mathcal{E}^\omega
\times \mathbb{N}$ s.t.
for all $t \in \textsc{Threads}$:
\begin{center}
$(E \downharpoonright i) \downarrow
t$ = $(E' \downharpoonright i') \downarrow t$ and $E'
\downharpoonright i' \in \textsc{Spec}$
\end{center}
\end{definition}
\paragraph*{Basic Predicates} For our logical characterization, we define the predicate $\mathit{correct}$ representing the fact that a trace meets the specification:
\begin{center}
$(E,i) \models \mathit{correct}$ :iff
$E \downharpoonright i \in \textsc{Spec}$\\
\end{center}
\begin{theorem}[Logical Characterization Sequential Consistency]
\label{thm:seq_cons}
A trace~$E \downharpoonright i $ is sequentially consistent if and
only if the threads do not jointly know that it is incorrect:
$\mathit{seqCons}(E,i)$ iff $(E,i) \models \neg D_\textsc{Threads}(
\neg correct)$.
\end{theorem}
\begin{proof}
By expanding the definitions in Section \ref{def:semantics}.
\end{proof}
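Definition \ref{def:seq_cons} and Theorem \ref{thm:seq_cons} suggest a purely illustrative, exponential decision procedure for finite traces without environment events: enumerate the reorderings that every thread finds indistinguishable and test them against the specification. A minimal Python sketch, with an ad hoc register specification standing in for \textsc{Spec}:
\begin{verbatim}
# Brute-force sketch of the sequential consistency check.
from itertools import permutations

def project(E, t):
    return [e for e in E if e[0] == t]

def seq_cons(E, spec, threads):
    """spec is a predicate on finite traces, playing the role of Spec."""
    for F in map(list, permutations(E)):
        if spec(F) and all(project(E, t) == project(F, t) for t in threads):
            return True
    return False

def reg_spec(F):
    """Toy register: a load must return the value of the latest preceding store."""
    val = 0
    for (_, op, arg) in F:
        if op == "st":
            val = arg
        elif op == "ld" and arg != val:
            return False
    return True

T = [("t1", "st", 1), ("t2", "ld", 1), ("t2", "ld", 0)]
assert not seq_cons(T, reg_spec, {"t1", "t2"})
\end{verbatim}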
\section{Eventual Consistency}
\label{sec:evCons}
We define the set of actions for eventual consistency as:
\begin{equation*}
\mathcal{A} \ni \mathit{act}:= \; \mathit{qu}(\mathit{id},q,r) \;
\; | \; \; \mathit{up}(\mathit{id},u) \;
\; | \; \; \mathit{com}(\mathit{id}) \;
\; | \; \; \mathit{fwd}(t,t',\mathit{id}).
\end{equation*}
Threads may pose a query ($\mathit{qu}$) $q \in \textsc{Queries}$ with result $r \in
\textsc{Values}$, issue an update ($\mathit{up}$) $u \in
\textsc{Updates}$, or commit ($\mathit{com}$)
their local changes. Queries, updates and commits get assigned a
revision-id $\mathit{id} \in \textsc{Identifiers}$, representing the
current state of the local database copy. We assume that if a thread
commits, the committed revision id matches the revision id of the
previous queries and updates, and that thread-revision-id pairs~$(t,\mathit{id})$ are unique. Again, this is no restriction. To fulfill the requirement, the threads can just increment their local revision id whenever they commit.
As updates may get lost in the network, we represent by $ \mathit{fwd}(t,t',\mathit{id})$ the successful forwarding of the updates made by thread $t$ in revision $id$ to thread $t'$.
\paragraph*{Preliminaries}
We let $\mathit{set}(E)= \{ e \; | \; e \in E \}$, i.e., the set of events
in trace $E$. On a fixed trace $E$, we define the program order
$\prec_{p}$ by $e \prec_{p} e'$ :iff there is $t$ such that
$\mathit{pos}(e,E \downarrow t) < \mathit{pos}(e',E \downarrow t)$.
We let "$\_$" represent irrelevant, existential quantification.
Let $e \equiv_t e'$ if and only if there is $id \in \textsc{Identifiers}$ such that $e=(t,\_(\mathit{id},\_) )$ and $e'=(t,\_(\mathit{id},\_))$, i.e., if the events belong to the same revision of thread $t$. A relation $\preceq$ factors over $\equiv_t$ if $x \preceq y$, $x \equiv_t x'$ and $y \equiv_t y'$ imply $x' \preceq y'$. Updates are interpreted in terms of states, i.e., we assume there is an interpretation function $u^{\#}: \textsc{States} \to \textsc{States}$, for each $u \in \textsc{Updates}$, and a designated initial state $s_0 \in \textsc{States}$. For each query $q \in \textsc{Queries}$, there is an interpretation function $q^{\#}: \textsc{States} \to \textsc{Values}$.
For a finite set of events $E_S$, a total order~$\prec$ over the events in $E_S$, and a state $s$, we let $\mathit{apply}(E_S,\prec,s)$ be the result of applying all updates in $E_S$ to $s$, in the order specified by $\prec$.
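As a concrete reading of $\mathit{apply}$ and of the interpretation functions $u^{\#}$ and $q^{\#}$, the following Python sketch may be helpful; it is a toy of ours in which updates are variable assignments and queries are variable reads, as in the example traces $E_4$ and $E_5$ above:
\begin{verbatim}
# Sketch of apply(E_S, <, s0): fold the updates of an event set over a state
# in the order given by <, plus a toy query interpretation q#.
def apply_updates(events, order_key, s0):
    state = dict(s0)
    for (_, kind, payload) in sorted(events, key=order_key):
        if kind == "up":                    # payload is (variable, value)
            var, val = payload
            state[var] = val
    return state

def query_value(q, state):
    return state.get(q, 0)                  # q# : read the variable's value

evs = {("t1", "up", ("x", 0)), ("t1", "up", ("x", 1))}
order = {("t1", "up", ("x", 0)): 1, ("t1", "up", ("x", 1)): 2}
assert query_value("x", apply_updates(evs, lambda e: order[e], {})) == 1
\end{verbatim}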
\begin{definition}[Eventual Consistency]
We use the definition presented in \cite{evCons}.
A trace~$E \in \mathcal{E}^\infty$ is eventually consistent ($\mathit{evCons}(E)$) if and only if there exist a partial order $\prec_{v}$ (visibility order), and a total order $\prec_{a}$ (arbitration order) on the events in $\mathit{set}(E)$ such that:
\begin{itemize}
\item $\prec_{v} \subseteq \prec_{a}$ (arbitration extends visibility).
\item $\prec_{p} \subseteq \prec_{v}$ (visibility is compatible with program-order).
\item for each $e_q = (t,\mathit{qu}(\mathit{id},q,r)) \in E$, we have r = $\mathit{apply}(\{e \; | \; e \prec_{v} e_q \},\prec_{a},s_0)$ (consistent query results).
\item $\prec_{a}$ and $\prec_{v}$ factor over $\equiv_t$ (atomic revisions).
\item if $(t, \mathit{com}(\mathit{id})) \not\in E$ and $(t,\_(\mathit{id},\_)) \prec_{v} (t',\_)$ then $t=t'$ (uncommitted updates).
\item if $e= (t,\mathit{com}(\mathit{id})) \in \mathit{E}$ then there are only finitely many $e' := (t',\mathit{com}(\mathit{id}'))$ such that $e' \in E$ and $e \not\prec_{v} e'$ (eventual visibility).
\end{itemize}
\end{definition}
\subsection{Logical Characterization}
\paragraph*{Basic Predicates}
We represent queries and updates by predicates
$\mathit{query}(t,q,r,\mathit{id})$ and
$\mathit{update}(t,u,\mathit{id})$, representing $t \in
\textsc{Threads}$, issuing $q \in \textsc{Queries}$ with result $r
\in \textsc{Values}$ on revision $id \in \textsc{Identifiers}$, and
$t$ performing $u \in \textsc{Updates}$ on revision $\mathit{id}$,
respectively. As threads work on their local copies, revision ids
mark the version of data the threads work with. We represent commits
by the predicate $\mathit{commit}(t,\mathit{id})$, representing $t$
committing its state in revision $id$. After performing a commit, a
new revision id is assigned. We define:
\begin{equation*}
\begin{array}[t]{@{}l@{\models}l@{\;\text{:iff}\;}l@{}}
(E,i) & \mathit{query}(t,q,r,\mathit{id}) & E@i=(t,\mathit{qu}(\mathit{id},q,r))\\[\jot]
(E,i) & \mathit{update}(t,u,\mathit{id})& E@i=(t,\mathit{up}(\mathit{id},u))\\[\jot]
(E,i) & \mathit{commit}(t,\mathit{id}) & E@i=(t, \mathit{com}(\mathit{id}))
\end{array}
\end{equation*}
We let $\mathit{query}(t,q,r) \; := \; \exists
\mathit{id}(\mathit{query}(t,q,r,\mathit{id}))$. Upon commit, a thread forwards all the information from its local repository to the database system and receives updates from other threads. Committed updates may however be delayed or lost by the network. By predicate forward$(t,t',id)$ we mark the event that the environment forwarded the updates $t$ performed in revision $id$ to $t'$. We let:
$(E,i) \models \mathit{forward}(t,t',\mathit{id})$ :iff $E@i=(\mathit{env},\mathit{fwd}(t,t',\mathit{id}))$.
Eventual consistency requires all threads to keep valid logs. Logs
are finite sequences of actions, i.e., $\mathcal{L} \in \mathcal{A}^\ast$. We represent log validity by the formula:
$\mathcal{L} \; \mathit{validLog} \; t \; := \; \forall a(t \; k_{\mathit{log}} \; a \leftrightarrow a \; \mathit{in} \; \mathcal{L}) \; \wedge \mathit{consistent}(\mathcal{L})$.
That is, to be a valid log for thread~$t$, log~$\mathcal{L}$ must contain exactly the actions that $t$ knows of, and these actions must be arranged in an order consistent with the actual order of events. We let:
$(E,i) \models \; a \; \mathit{in} \; \mathcal{L}$ :iff $a \in \mathcal{L}$.
The predicate $t \; k_{\mathit{log}} \; a$ represents the fact that
$t$ knows about action $a$. The predicate~$k_{\mathit{log}}$ represents individual
knowledge, i.e., knowledge in the sense of knowing about an action in contrast to knowing that a fact is true \cite{kramer10}. We let:
\begin{center}
\begin{tabular}{lll}
$(E,i) \models$&$t \; k_{\mathit{log}} \; a$ :iff there is $j \leq i: (E@j= (t,a)$ or \\[\jot]
&($(E,j) \models \mathit{forward}(t',t,id)$ and there is $l<j: \; (E,l) \models \mathit{commit}(t',id)$\\[\jot]
&and $(E,l) \models t' \; k_{log} \; a$))
\end{tabular}
\end{center}
That is, threads know an action if they performed it themselves, or
they received an update containing it. Upon commits, threads pass on all
actions they know about.
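The recursion behind $k_{\mathit{log}}$ can be made explicit by the following Python sketch (illustrative; the event encoding and function names are ours), which computes the set of actions a thread knows about after a given prefix:
\begin{verbatim}
# Sketch of k_log: knowledge about actions propagates via commits and forwards.
# Events: (t, ("act", a)) own actions, (t, ("com", rid)) commits,
# ("env", ("fwd", t_from, t_to, rid)) environment forwards.
def knows(E, i, t):
    known = set()
    for j in range(1, i + 1):
        who, ev = E[j - 1]
        if who == t:
            known.add(ev)                                  # own actions and commits
        if who == "env" and ev[0] == "fwd" and ev[2] == t:
            t_from, rid = ev[1], ev[3]
            for l in range(1, j):                          # matching earlier commit
                if E[l - 1] == (t_from, ("com", rid)):
                    known |= knows(E, l, t_from)           # inherit sender's knowledge
    return known

E = [("t1", ("act", "x:=0")), ("t1", ("com", 0)),
     ("env", ("fwd", "t1", "t2", 0)), ("t2", ("act", "qu x"))]
assert ("act", "x:=0") in knows(E, 4, "t2")
\end{verbatim}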
A log $\mathcal{L}$ is consistent if the actions in the log occur in
the same order as the actions in the real trace. This means the sequence of actions in $\mathcal{L}$ must be a subsequence of the actions in the real trace. A sequence $a= a_1 a_2 \ldots a_n$ is a subsequence of a sequence $b=b_1 b_2 \ldots b_m$ ($a \preceq b$), if and only if there exist $1 \leq i_1 < i_2 < \ldots < i_n \leq m$ such that for all $1 \leq j \leq n: a_j = b_{i_j}$. We project a sequence of events to a sequence of actions by the function $\mathit{act}: \mathcal{E}^\ast \to \mathcal{A}^\ast$, such that
$\mathit{act}((t_1,a_1) (t_2,a_2) \ldots (t_n,a_n)) = a_1a_2 \ldots a_n$.
We define:
$(E,i) \models \mathit{consistent}(\mathcal{L})$ :iff $ \mathcal{L} \preceq \mathit{act}(E \downharpoonright i)$.
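The subsequence test behind $\mathit{consistent}(\mathcal{L})$ admits a very short realization; the following Python sketch (illustrative only) uses the standard iterator idiom:
\begin{verbatim}
# Sketch of the subsequence check: the log's actions must occur in the trace
# in the same relative order.
def is_subsequence(log_actions, trace_actions):
    it = iter(trace_actions)
    return all(a in it for a in log_actions)   # each 'in' consumes the iterator

trace = ["up(0,x:=0)", "com(0)", "up(1,x:=1)", "com(1)", "qu(0,x,0)"]
assert is_subsequence(["up(0,x:=0)", "qu(0,x,0)"], trace)
assert not is_subsequence(["qu(0,x,0)", "up(0,x:=0)"], trace)
\end{verbatim}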
\paragraph{Query Results}
All queries that threads issue must return the correct result with
respect to the logged operations. That is, the query's result must
match the result the query would yield when issued on a database that
performed all the updates in the log. We represent the fact that
query~$q$ would yield result $r$ on log $\mathcal{L}$ by the predicate result$(q,\mathcal{L},r)$.
We define the order of actions in a log $\mathcal{L}$ by the relation $<_\mathcal{L}$. We let $a <_\mathcal{L} a'$ :iff $pos(a,\mathcal{L}) < pos(a',\mathcal{L}) < \omega$.
We define:
$(E,i) \models \mathit{result}(q,\mathcal{L},r)$ :iff $r=q^\#(\mathit{apply}(\mathit{set}(\mathcal{L}),<_\mathcal{L},s_0))$.
\paragraph*{Network Assumptions}
We pose additional requirements on the network: updates in the same revision must be sent as atomic bundles ($\mathit{atomicTrans}$); only committed updates can be forwarded ($\mathit{fwd}$); and active threads must eventually receive all committed updates ($\mathit{alive}$). We formalize these requirements in Appendix \ref{app:EVC}.
We represent $\mathit{correctEVC}$ by the formula:
\begin{equation*}
\mathit{correctEVC} :=
\begin{array}[t]{@{}l@{}}
\forall t \forall q \forall r \\[\jot]
\begin{array}[t]{@{}l@{}}
( \boxminus( \mathit{query}(t,q,r) \rightarrow
\exists \mathcal{L}( \mathcal{L} \; \mathit{validLog} \; t \; \wedge \mathit{result}(q,\mathcal{L},r)))) \\ [\jot]
\wedge \mathit{atomicTrans} \wedge \mathit{alive} \wedge \mathit{fwd}
\end{array}
\end{array}
\end{equation*}
\begin{theorem}[Logical Characterization Eventual
Consistency]
\label{thm:evCons}
A trace is eventually consistent if and only if the threads do not know
that it violates $\mathit{correctEVC}$. For all traces $E \in \mathcal{E}^\infty$:
\begin{center}
$\mathit{evCons}(E)$ if and only if $E \models \neg D_{\textsc{Threads}} (\neg \mathit{correctEVC})$.
\end{center}
\end{theorem}
\section{Linearizability}
\label{sec:lin}
Linearizability refines sequential consistency by guaranteeing that each method call takes its effect at exactly one point between its invocation and its return.
For our definition of linearizability, we follow \cite{gotsman11}. As for sequential consistency, our definition generalizes the original notion \cite{Herlihy87,Herlihy90} by allowing non-sequential specifications. We define the real-time precedence order $\preceq_{real}$ $\subseteq (\mathcal{E}^\omega \times \mathbb{N})^2$:
$(E,i)$ $\preceq_{real}$ $(E',i')$ :iff there is a bijection $\pi: \{1,\ldots,i\} \rightarrow \{1,\ldots,i'\} $ s.t
for all $j \in \mathbb{N}$ such that $j \leq i: E@j = E'@\pi(j)$, i.e., $E' \downharpoonright i'$ is a permutation of $E \downharpoonright i$, and
for all $j,k \in \mathbb{N}$ such that $j < k \leq i:$ if $E@j \in \textsc{Ret} \mbox{ and } E@k \in \textsc{Inv} \mbox{ then } \pi(j) < \pi(k)$, i.e., when permuting the events in $E$, invocations are never pulled before preceding returns.
\begin{definition}[Linearizability]
\label{def:lin}
A trace $(E,i)$ is linearizable ($\mathit{lin}(E,i)$) if and only if there is $(E',i') \in \mathcal{E}^\omega \times \mathbb{N}$ such that
(1) for all $t \in \textsc{Threads}$: $(E \downharpoonright i) \downarrow t$ = $(E' \downharpoonright i') \downarrow t$,
(2) $(E,i) \preceq_{real} (E',i')$, and
(3) $E' \downharpoonright i' \in \textsc{Spec}$.
\end{definition}
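Analogously to the sequential consistency sketch above, Definition \ref{def:lin} yields a brute-force check for finite traces; the extra ingredient is the preservation of the real-time order, which relies on the uniqueness assumption stated below. Again, this Python fragment is an illustration of ours, not an efficient algorithm:
\begin{verbatim}
# Brute-force sketch of the linearizability check.
from itertools import permutations

def project(E, t):
    return [e for e in E if e[0] == t]

def preserves_real_time(E, F):
    """Returns that precede invocations in E must do so in F as well."""
    pos_F = {e: k for k, e in enumerate(F)}     # needs unique events
    for j, ej in enumerate(E):
        for ek in E[j + 1:]:
            if ej[1] == "ret" and ek[1] == "inv" and pos_F[ej] > pos_F[ek]:
                return False
    return True

def linearizable(E, spec, threads):
    for F in map(list, permutations(E)):
        if (spec(F) and preserves_real_time(E, F)
                and all(project(E, t) == project(F, t) for t in threads)):
            return True
    return False
\end{verbatim}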
To characterize linearizability, we make the assumption that each event occurs only once in a trace. This is not a restriction as we could add a unique time-stamp or a sequence number to each event.
\begin{theorem}[Logical Characterization Linearizability]
\label{thm:lin}
A trace $E \downharpoonright i \in \mathcal{E}^\ast$ is linearizable if and only if the threads together with the observer do not know that it is incorrect:
\begin{center}
$\mathit{lin}(E,i)$ iff $(E,i) \models \neg D_{\textsc{Threads} \uplus \{ \mathit{obs} \} } \neg correct$.
\end{center}
\end{theorem}
\section{Knowledge about consistency}
\label{sec:detect}
We write $\models \varphi$ as an abbreviation for:
for all $E \in \mathcal{E}^\infty$: $E\models \varphi$.
Let $\mathit{seqCons} := \neg D_{\textsc{Threads}} (\neg \mathit{correct})$.
\begin{theorem}[Detection Sequential Consistency]
\label{thm:dec_seqs}
Threads can decide whether a trace is sequentially consistent or not:
$ \models (\mathit{seqCons} \leftrightarrow D_\mathit{Threads}(\mathit{seqCons})) \mathrel{\land}
(\neg\mathit{seqCons} \leftrightarrow D_\mathit{Threads}(\neg\mathit{seqCons}))$.
\end{theorem}
Let $\mathit{Lin} := \neg D_{\textsc{Threads} \uplus \{\mathit{obs} \}} \neg \mathit{correct}$.
\begin{theorem}[Detection Linearizability]
\label{thm:dec_lin}
There is $E \in \mathcal{E}^\infty$ such that
$(E,i) \models \mathit{Lin} \land \neg D_\mathit{Threads \uplus \{
\mathit{obs}\}} (\mathit{Lin})$.
As in sequential consistency, the threads together with the observer can spot if a trace is not linearizable:
$\models \neg \mathit{Lin} \leftrightarrow D_\mathit{Threads \uplus \{ \mathit{obs}\}}(\neg\mathit{Lin})$.
\end{theorem}
\section{Related Work}
\label{sec:rel}
The only applications of epistemic logic to concurrent computations that we are aware of are a logical characterization of wait-free
computations by Hirai \cite{Hirai10} and a knowledge based analysis of cache-coherence by Baukus et al. \cite{Baukus04}.
\appendix
\section{Additional Definitions Eventual Consistency}
\label{app:EVC}
We define the helper predicate:
\begin{center}
$\mathit{rev}(t,id)$ := $\exists q \exists r (\mathit{query}(t,q,r,id)) \vee \exists u(\mathit{update}(t,u,\mathit{id}))$
\end{center}
representing the fact that the current action belongs to revision $\mathit{id}$ of thread $t$. We specify the requirement that updates made in the same revision must be sent bundled as indivisible transactions by the formula:
\begin{equation*}
\mathit{atomicTrans} :=
\forall t \forall id
(\boxminus (\mathit{rev}(t,\mathit{id})\rightarrow
(\mathit{rev}(t,\mathit{id}) \; W \; \mathit{commit}(t,\mathit{id})) ))
\end{equation*}
That is, queries and updates from revision $id$ are only followed by
other queries and updates from the same revision, or a commit. We enforce that only committed revisions can be forwarded by:
\begin{equation*}
\mathit{fwd} :=
\forall t \forall t' \forall id
(\boxminus (\mathit{forward}(t,t',\mathit{id})\rightarrow
\Box (\neg \mathit{commit}(t,\mathit{id}) ) ))
\end{equation*}
Threads that make progress, i.e., that commit infinitely often, must eventually receive all committed updates. We formalize this as:
\begin{equation*}
\mathit{alive} :=
\begin{array}[t]{ll}
\forall t \forall t' \forall \mathit{id} \\
( \boxminus (\mathit{commit}(t,\mathit{id}) \wedge \Box \Diamond (\exists \mathit{id}'( \mathit{commit}(t',id')) ) \rightarrow \\[\jot]
\Diamond \mathit{forward}(t,t',\mathit{id}))) \\[\jot]
\end{array}
\end{equation*}
That is, if thread~$t'$ commits infinitely often, then every revision committed by a thread~$t$ must eventually be forwarded to~$t'$.
\section{Proofs}
\subsection{Linearizability (Theorem \ref{thm:lin})}
\label{app:lin_proof}
A trace $E \downharpoonright i \in \mathcal{E}^\ast$ is linearizable if and only if the threads together with the observer do not know that it is incorrect:
\begin{center}
$\mathit{lin}(E,i)$ iff $(E,i) \models \neg D_{\textsc{Threads} \uplus \{ \mathit{obs} \} } \neg correct$
\end{center}
We formalize our assumption that each event occurs at most once in a trace:
\begin{center}
\begin{tabular}{l l}
unique := & for all E $\in \mathcal{E}^\omega$ and all $j,k \in \mathbb{N}$:\\
& if $E@j=E@k$ then $j=k$.\\
\end{tabular}
\end{center}
\label{sec:proofLin}
\begin{definition}[Eventset]
\label{def:setproj}
Let $\llbracket \cdot \rrbracket$ : $\mathcal{E}^\ast \to
\mathcal{P}(\mathcal{E})$ denote a function that transforms a trace into the set of events it contains, that is: \\
For all $E \in \mathcal{E}^\ast: \llbracket E \rrbracket = \{ e \; |
\; e \in E \}$.\\
\end{definition}
\begin{proposition}[Union of Thread-Eventsets]
\label{prop:disjoint}
For all $E \in \mathcal{E}^\ast$: $\llbracket E \rrbracket = \uplus_{t} \llbracket E \downarrow t \rrbracket$.
\end{proposition}
\begin{proof}
By induction on $\mathit{len}(E)$.\\
For $\mathit{len}(E)=0$, we have $E=\epsilon$ and $\varnothing=\varnothing$.\\
Let $E \cdot E'$ denote the concatenation of traces $E$ and $E'$.
For $\mathit{len}(E) = n+1$, we have $E = e \cdot E'$ for some $e = (t,\mathit{act}) \in \mathcal{E}$ and $E' \in \mathcal{E}^\ast$ with $\mathit{len}(E')=n$.\\
Then $\llbracket E \rrbracket = \llbracket E' \rrbracket \uplus \{ e \}$ and, by the induction hypothesis, $\llbracket E' \rrbracket = \uplus_{t'} \llbracket E' \downarrow t' \rrbracket$. Since $\llbracket E \downarrow t \rrbracket = \llbracket E' \downarrow t \rrbracket \uplus \{ e \}$ and $\llbracket E \downarrow t' \rrbracket = \llbracket E' \downarrow t' \rrbracket$ for all $t' \neq t$, we obtain $\uplus_{t'} \llbracket E \downarrow t' \rrbracket = \uplus_{t' \neq t} \llbracket E' \downarrow t' \rrbracket \uplus \llbracket E' \downarrow t \rrbracket \uplus \{ e \} = \llbracket E' \rrbracket \uplus \{ e \} = \llbracket E \rrbracket$.
\end{proof}
\begin{lemma}
\label{lemma:card}
For all $(E,i),(E',i') \in \mathcal{E}^\omega \times \mathbb{N}$: if unique and $(E,i) \sim_{\textsc{Threads}} (E',i')$ then $i=i'$.
\end{lemma}
\begin{proof}
For a proof by contradiction we assume $i>i'$ without loss of generality. Then, by unique, there is an $e \in \mathcal{E}$ such that $e \in \llbracket E \downharpoonright i \rrbracket$ and $e \notin \llbracket E' \downharpoonright i' \rrbracket$. By Proposition \ref{prop:disjoint}: $e \in \llbracket (E \downharpoonright i)\downarrow t \rrbracket$ for some $t \in \textsc{Threads}$ but $e \notin \llbracket (E' \downharpoonright i')\downarrow t \rrbracket$. But by $(E,i) \sim_{\textsc{Threads}} (E',i')$, we have $(E \downharpoonright i) \downarrow t $ = $(E' \downharpoonright i') \downarrow t$ and thus $\llbracket (E \downharpoonright i)\downarrow t \rrbracket $ = $\llbracket (E' \downharpoonright i') \downarrow t \rrbracket$, from which we get the contradiction.
$\Box$
\end{proof}
\begin{lemma}
\label{lemma:exist}
For all $(E,i),(E',i') \in \mathcal{E}^\omega \times \mathbb{N}$: if $(E,i) \sim_{\textsc{Threads}} (E',i')$ and $E@j=e$ for some $j \in \mathbb{N}$ such that $j \leq i$ then there is $j' \in \mathbb{N}$ such that $1 \leq j' \leq i'$ and $E'@j' = e$.
\end{lemma}
\begin{proof}
Suppose $j \leq i$, $E@j=e$ and $(E,i) \sim_{\textsc{Threads}}
(E',i')$. By Proposition \ref{prop:disjoint}, $e \in \llbracket (E
\downharpoonright i)\downarrow t \rrbracket$ for some $t \in
\textsc{Threads}$. Then because $(E \downharpoonright i) \downarrow t
$ = $(E' \downharpoonright i') \downarrow t$ we have $\llbracket (E
\downharpoonright i)\downarrow t \rrbracket $ = $\llbracket (E'
\downharpoonright i') \downarrow t \rrbracket $ and thus by
proposition \ref{prop:disjoint}: $e \in \llbracket (E' \downharpoonright i') \rrbracket $ and thus by definition \ref{def:setproj}, $E'@j'=e$ for some j' with $ j' \leq i'$.
$\Box$
\end{proof}
\begin{lemma}[Existence of a Bijection]
\label{lemma:func}
For all $(E,i),(E',i') \in \mathcal{E}^\omega \times \mathbb{N}$: if unique and $(E,i) \sim_{\textsc{Threads}} (E',i')$ then there exists a bijective function $\pi: \{1,\ldots,i\} \rightarrow \{1,\ldots,i'\} $ and
for all $j \in \mathbb{N}$ such that $j \leq i: E@j = E'@\pi(j)$.
\end{lemma}
\begin{proof}
Let $j \in \mathbb{N}$ such that $j \leq i$ and $E@j=e$. By lemma \ref{lemma:exist} we know that $E'@j'= e$ for some $j' \in \mathbb{N}$ with $j' \leq i'$. We will now show that the mapping from j to j' is a function. Suppose there was $k' \neq j'$ with $k' \in \mathbb{N}$ such that $E'@k'=e$ and $k' \leq i'$. This cannot be, since by unique each event occurs at most once in each trace. Let us denote that mapping by $\pi$. We now need to show that $\pi$ is a bijection.
By lemma \ref{lemma:card}, $i=i'$ and we have $\pi: \{ 1 \ldots i \} \to \{ 1 \ldots i \}$. This means it suffices to show that $\pi$ is injective. Now for a contradiction suppose that for $j,k \in \mathbb{N}$ with $ j,k \leq i$: $E@j= e$ and $E@k=e'$ for some $e,e' \in \mathcal{E}$
with $j \neq k$ and thus by unique $e \neq e'$. Now let $\pi(j)=j'$ and $\pi(k)=j'$ for some $j' \in \mathbb{N}$ with $1 \leq j' \leq i'$. Then $E'@j'=e=e'$, contradicting $e \neq e'$.
\end{proof}
\begin{landscape}
\begin{table*}
\footnotesize
\caption{Equivalence proof for linearizability: ($\rightarrow$)}
\label{proof:ltr}
\begin{center}
\fbox{
\begin{tabular}{l l l}
& Let $\preceq_{lin} \; := \; \sim_{\textsc{Threads}} \cap \preceq_{real}$ and $\mathcal{A} := \textsc{Threads} \uplus
\{ obs \}$. \\
Show:&for all $(E,i) \in \mathcal{E}^\omega \times \mathbb{N}$: \\
&if lin (E,i) then there is $(E',i') \in (\mathcal{E}^\omega \times \mathbb{N}):$ s.t. $(E,i)\sim_{\mathcal{A}}(E',i')$ and $(E',i') \models correct$.\\
\\
1.& $(E,i) \in \mathcal{E}^\omega \times \mathbb{N}$
hyp.\\
2.& \hspace*{1.5em} \hspace*{1.5em} lin (E,i)
hyp. \\
3.& \hspace*{1.5em} \hspace*{1.5em} $(E',i') \in \mathcal{E}^\omega \times \mathbb{N}$ and $(E,i) \preceq_{lin} (E',i')$ and $(E',i') \models $ correct
2,def. lin\\
4.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $(e,e') \in obs(E , i)$
hyp. \\
5.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $j,k \in \mathbb{N}$ and $E@j=e$ and $E@k=e'$ and $e \in \textsc{Ret}$ and $e' \in \textsc{Inv}$ and $j < k \leq i$ \hspace{0.5cm}
4,def.obs,def.pos\\
6.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $\pi: \{1,\ldots,i\} \rightarrow \{1,\ldots,i'\}$ is a bijective function and $E'@\pi(j)=e$ and \\
& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $E'@\pi(k)=e'$ and $\pi(j)<\pi(k)$
3,5,def. $\preceq_{lin}$,def. $\preceq_{real}$ \\
7.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $\pi(j) < \pi(k) \leq i'$
5,6,def. $\pi$ \\
8.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $(e,e') \in obs(E', i')$
5,6,7,def. obs \\
9.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $obs(E, i) \subseteq obs(E', i')$
4,8,def. $\subseteq$ \\
10.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $(E,i) \sim_{obs} (E',i')$
9, def. $\sim_{obs}$ \\
11.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $(E,i) \sim_{{\textsc{Threads}}} (E',i')$
3, def. $\preceq_{lin}$\\
12.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $(E,i) \sim_{\textsc{A}} (E',i')$
10,11,def. $ \sim_{\textsc{A}}$\\
13.& \hspace*{1.5em} \hspace*{1.5em} there is $(E',i') \in (\mathcal{E}^\omega \times \mathbb{N})$ s.t. $(E,i)\sim_{\mathcal{A}}(E',i')$ and $(E',i') \models correct$
3,12\\
14.& for all $(E,i) \in \mathcal{E}^\omega \times \mathbb{N}$: \\
&if lin (E,i) then there is $(E',i') \in (\mathcal{E}^\omega \times \mathbb{N}):$ s.t. $(E,i)\sim_{\mathcal{A}}(E',i')$ and $(E',i') \models correct$.
1,2,13\\
\end{tabular}
}
\end{center}
\end{table*}
\end{landscape}
\begin{landscape}
\begin{table*}
\footnotesize
\caption{Equivalence proof for linearizability: ($\leftarrow$)}
\label{proof:rtl}
\begin{center}
\fbox{
\begin{tabular}{l l l}
Show:& For all $(E,i) \in \mathcal{E}^\omega \times \mathbb{N}$:\\
&if unique then if there is $(E',i') \in \mathcal{E}^\omega \times \mathbb{N}:$ s.t. $(E,i)\sim_{\mathcal{A}}(E',i')$ and $(E',i') \models correct$ then lin (E,i). \\
\\
1.& $(E,i) \in \mathcal{E}^\omega \times \mathbb{N}$
hyp.\\
2.& \hspace*{1.5em} for all E $\in \mathcal{E}^\omega$ and all $j,k \in \mathbb{N}$ if $E@j=E@k$ then $j=k$.
hyp.\\
3.& \hspace*{1.5em} \hspace*{1.5em} $(E',i') \in \mathcal{E}^\omega \times \mathbb{N}$ and $(E,i)\sim_{\mathcal{A}}(E',i')$ and $(E',i') \models correct$
hyp.\\
4.& \hspace*{1.5em} \hspace*{1.5em} $(E,i) \sim_{\textsc{Threads}}(E',i')$
3,def. $\sim_{\mathcal{A}}$\\
\\
5.& \hspace*{1.5em} \hspace*{1.5em} $\pi: \{1,\ldots,i\} \rightarrow \{1,\ldots,i'\}$ is a bijective function and \\
& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} for all $j \in \mathbb{N}$ such that $1 \leq j \leq i: E@j = E'@\pi(j)$
1,3,4,lemma \ref{lemma:func}\\
6.&\hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $j,k \in \mathbb{N}$ and $E@j \in \textsc{Ret}$ and $E@k \in \textsc{Inv}$ and $j < k \leq i$
hyp.\\
7.&\hspace*{1.5em} \hspace*{1.5em}\ \hspace*{1.5em} \hspace*{1.5em} $e \in \textsc{Ret}$ and $e' \in \textsc{Inv}$ and pos(e,E) = j and pos(e',E) = k
6 \\
8.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em}$(e,e') \in obs(E,i)$
6,7 \\
9.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em}$obs(E,i) \subseteq obs(E', i')$
3, def. $\sim_{\mathcal{A}}$, def. $\sim_{obs}$\\
10.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em}$(e,e') \in obs(E', i')$
8,9\\
11.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} pos(e,E') $<$ pos(e',E') $\leq i'$
10\\
12.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} for all $j \in \{ 1, \ldots ,i'\}$ there is $k \in \{ 1, \ldots, i\}$: $j = \pi(k)$
5,$\pi$ surjective\\
13.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $j',k' \in \{ 1, \ldots, i\}$ and $E'@\pi(j')=e$ and $E'@\pi(k')=e'$ and $\pi(j') < \pi(k') \leq i'$ \hspace{1cm}
11,12\\
14.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $E'@\pi(j)=e$ and $E'@\pi(k)=e'$
5,7 \\
15.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $E'@\pi(j')= E'@\pi(j)$ and $E'@\pi(k')=E'@\pi(k)$
13,14 \\
16.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $\pi(j')= \pi(j)$ and $\pi(k') = \pi(k)$
2,15\\
17.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} $\pi(j) < \pi(k) \leq i'$
13,16\\
18.& \hspace*{1.5em} \hspace*{1.5em} \hspace*{1.5em} for all $j,k \in \mathbb{N}$ if $E@j \in \textsc{Ret}$ and $E@k \in \textsc{Inv}$ and $j < k$ then $\pi(j) < \pi(k)$
6,17\\
19.& \hspace*{1.5em} \hspace*{1.5em} $(E,i)\preceq_{real} (E',i')$
5,18\\
20.& For all $(E,i) \in \mathcal{E}^\omega \times \mathbb{N}$: \\
& if unique then if there is $(E',i') \in \mathcal{E}^\omega \times \mathbb{N}:$ s.t. $(E,i)\sim_{\mathcal{A}}(E',i')$ and $(E',i') \models correct$ then lin (E,i).\hspace{0.5cm}
1-4,19
\end{tabular}
}
\end{center}
\end{table*}
\end{landscape}
\subsection{Eventual Consistency (Theorem \ref{thm:evCons})}
\label{proof:evCons}
The axiomatic and the logical description of eventual consistency are equivalent:
\begin{equation*}
\text{Show } \mathit{evCons}(E) \text{ iff } (E,\omega) \models \neg D_{\textsc{Threads}} \neg \mathit{correctEVC}
\end{equation*}
We restate the axiomatic definition for reference:
\begin{definition}[Eventual Consistency]
\begin{enumerate}
\item $\prec_{v} \subseteq \prec_{a}$ (arbitration extends visibility).
\item $\prec_{p} \subseteq \prec_{v}$ (visibility is compatible with program-order).
\item for each $e_q = (t,\mathit{qu}(\mathit{id},q,r)) \in E$, we have r = $\mathit{apply}(\{e \; | \; e \prec_{v} e_q \},\prec_{a},s_0)$ (consistent query results).
\item $\prec_{a}$ and $\prec_{v}$ factor over $\equiv_t $ ( atomic revisions).
\item if $(t, \mathit{com}(\mathit{id})) \not\in E$ and $(t,\_(\mathit{id},\_)) \prec_{v} (t',\_)$ then $t=t'$ (uncommitted updates).
\item if $e= (t,\mathit{com}(\mathit{id})) \in \mathit{E}$ then there are only finitely many $e' := (t',\mathit{com}(\mathit{id}'))$ such that $e' \in E$ and $e \not\prec_{v} e'$ (eventual visibility).
\end{enumerate}
\end{definition}
We will reference the parts of the definition by their numbers.
\begin{proof}
We write that $e$ is before $e'$ in $E$ if $\mathit{pos}(e,E) < \mathit{pos}(e',E) < \omega$. Let $\mathit{rev}(e)=(t,\mathit{id})$ :iff $e=(t,\_(\mathit{id},\_))$. The revision of an event $e$ performed by thread $t$ is determined by the next commit event $e':=(t,\mathit{com}(\mathit{id}))$ of $t$ in $E$. We lift the relations $\prec_a$ and $\prec_v$ to revisions, i.e., $(t,\mathit{id}) \prec (t',\mathit{id'})$ if for the respective commit events $e=(t,\mathit{com}(\mathit{id}))$ and $e'=(t',\mathit{com}(\mathit{id'}))$ it holds that $e \prec e'$. Let $\mathit{align}(E,<)$ be a function that arranges a set of events $E$ according to order $<$.\\
\\
Show " $\rightarrow$": \\
Let $ E':=\mathit{align}(E,\prec_a)$, where we erase all $(\_,\mathit{fwd}(\_))$ events. Because $\prec_p \subseteq \prec_v \subseteq \prec_a$, we have $E \sim_\textsc{Threads} E'$. We now show that $(E',\omega) \models \mathit{correctEVC}$.
For each revision $(t,\mathit{id})$ and for all $t' \neq t$, insert $(\mathit{env},\mathit{fwd}(t,t',\mathit{id}))$ directly before the first event of revision $(t',\mathit{id'})$ if and only if $(t',\mathit{id'})$ is the minimal revision with respect to $\preceq_a$ such that $(t,\mathit{id}) \preceq_v (t',\mathit{id'})$ and such a revision exists. If there are several forward events, order them by $\preceq_a$ with respect to the revisions that were forwarded.\\
$(E',\omega) \models \mathit{atomicTrans}$ holds by (4), by our assumption that committed revision ids match the revision ids of previous actions, and by our construction, as we add forward events only directly after commit events.\\
$(E',\omega) \models \mathit{fwd}$ follows by (1): because the visibility order must not contradict the arbitration order, a revision becomes visible only after being committed.
\\
Show $(E',\omega) \models \mathit{alive}$: Assume $(t,\mathit{com}(\mathit{id})) \in E'$ and there is $t'$ s.t. $t'$ commits infinitely often. Suppose that $(t,\mathit{id})$ is never forwarded to $t'$, i.e., $(\mathit{env},\mathit{fwd}(t,t',\mathit{id})) \notin E'$. But then, by construction of $E'$, there are infinitely many $(t',\_)$ such that $(t,\mathit{id}) \not\preceq_v (t',\_)$. This cannot be by condition (6).\\
\\
Show $(E',\omega) \models \mathit{correctEVC}$:\\
Assume $(E',k) \models \mathit{query}(t,q,r)$. Let $\mathcal{L}:= \mathit{act}(\mathit{align}(\{ e \; | \; e \preceq_v E'@k \}, \preceq_a))$. We need to show that:
\begin{equation*}
(E',k) \models \mathcal{L} \; \mathit{validLog} \; t \wedge \mathit{result}(q,\mathcal{L},r)
\end{equation*}
We show $(E',k) \models \mathcal{L} \; \mathit{validLog} \; t$.
We get $(E',k) \models \mathit{consistent}(\mathcal{L})$ by the fact that we align with respect to $\prec_a$.
Show $(E',k) \models \forall a ( \;t \; k_{log} \; a \rightarrow a \in \mathcal{L} )$. Assume $(E',k) \models \;t \; k_{log} \; a $, and that $E'@k$ has revision id $(t,\mathit{id})$. Then either there is $j$ such that $E'@j =(t,a)$, and $a \in \mathcal{L}$ by (2), or there is $j$ such that $E'@j=(\mathit{env},\mathit{fwd}(t',t,\mathit{id'}))$ and there is $l < j$ such that $E'@l= (t',\mathit{com}(\mathit{id'}))$, and $(E',l) \models t' \; k_{log} \; a$. By our construction, we have $(t',\mathit{id'}) \prec_v (t,\mathit{id})$. Then, by the same argument and the transitivity of $\prec_v$, we have $a \in \mathcal{L}$.\\
\\
Show $(E',k) \models \forall a (a \in \mathcal{L} \rightarrow \;t \; k_{log} \; a)$.
Assume $e \preceq_a a$ and $e \preceq_v a$. Let $a$ be in revision $(t,\mathit{id})$, and $e$ in revision $(t',\mathit{id'})$. Assume $t=t'$. Then $(E',k) \models t \; k_{log} \; a$ by (2) and the definition of $k_{log} $. Assume $t \neq t'$.
Then either $(t,\mathit{id})$ is the earliest revision of $t$ such that $(t',\mathit{id}') \preceq_v (t,\mathit{id})$, and we have inserted $(\mathit{env},\mathit{fwd}(t',t,\mathit{id'}))$ before the first event of $(t,\mathit{id})$, or there is an earlier such revision, in which case we inserted $(\mathit{env},\mathit{fwd}(t',t,\mathit{id'}))$ before it. By the definition of $k_{log}$, we have $(E',k) \models t \; k_{log} \; a$.\\
\\
Having established that $\mathcal{L}$ is valid, $(E',k) \models \mathit{result}(q,\mathcal{L},\mathit{r})$ follows by definition.\\
\\
Show "$\leftarrow$":\\
We have $E \sim_{\textsc{Threads}} E'$ and $E' \models \mathit{correctEVC}$. Let $e \preceq_a e'$ iff $\mathit{pos}(e,E') < \mathit{pos}(e',E')$, and let $e:=(\_,a) \preceq_v e':=(t,\_)$ iff there is $\mathcal{L}$ such that $(E',\mathit{pos}(e',E')) \models \mathcal{L} \; \mathit{validLog} \; t \wedge \mathit{result}(q,\mathcal{L},r)$ and $a \in \mathcal{L}$. \\
Show $\preceq_a$ is a total order: This follows from the definition of $\mathit{pos}$ and the fact that revision ids occur only once. Show $\preceq_v$ is a partial order: (a) Show $e \preceq_v e$: This follows from the definition of $k_{log}$, as threads know their own actions. (b) Show antisymmetry: follows from the definition of $\mathit{pos}$ and the fact that transactions occur only once. (c) Show transitivity: follows from the transitivity of $\prec_p$, and the recursive definition of $k_{log}$. We now prove the individual parts of the definition.
$(1)$: Follows by $(E',\mathit{pos}(e')) \models \mathit{consistent}(\mathcal{L})$.
$(2)$: By our definition of $k_{log}$ i.e. the fact that threads know their own actions.
$(3):$ Follows by $(E',\mathit{pos}(e')) \models \mathit{result}(q,\mathcal{L},\mathit{res})$.
$(4):$ Follows from $E' \models \mathit{atomicTrans}$.
$(5):$ by $E' \models \mathit{fwd}$, as updates are only forwarded after being committed.
$(6):$ Follows from $E' \models \mathit{alive}$.
\end{proof}
\subsection{Knowledge about Consistency (Theorems \ref{thm:dec_seqs}
and \ref{thm:dec_lin})}
We restate the relevant axioms for reference:
Everything a group of threads knows is also true:
$\text{(T)} := \; \models D_G \varphi \rightarrow \varphi \; \text{(Truth axiom)}$,
groups of threads know what they know:
$\text{(4)} := \; \models D_G \varphi \rightarrow D_G D_G \varphi \; \text{ (positive introspection)}$
and groups of threads know what they do not know:
$\text{(5)} := \; \models \neg D_G \varphi \rightarrow D_G \neg D_G \varphi \; \text{ (negative introspection)}$.
Threads can decide whether a trace is sequentially consistent or not:
\begin{equation*}
\begin{array}[t]{@{}l@{}}
\models (\mathit{seqCons} \leftrightarrow D_\mathit{Threads}(\mathit{seqCons})) \mathrel{\land}\\[\jot]
(\neg\mathit{seqCons} \leftrightarrow D_\mathit{Threads}(\neg\mathit{seqCons}))
\end{array}
\end{equation*}
\begin{proof}
By instantiating axiom (5) with $\varphi:= \neg \mathit{correct} $ and $G:=\textsc{Threads}$, we get:
\[1. \models \neg D_\textsc{Threads} (\neg \mathit{correct})
\mathrel{\rightarrow}
D_\textsc{Threads} (\neg D_\textsc{Threads} (\neg \mathit{correct})) \]
We get $2.$ from $1.$ by applying the definition of $\mathit{seqCons}$ (i.e., $\mathit{seqCons}:= \neg D_{\textsc{Threads}} (\neg \mathit{correct})$):
\[ 2. \models \mathit{seqCons} \rightarrow D_{\textsc{Threads}} (\mathit{seqCons}) \]
We get $3$ by instantiating (T) with $\varphi :=
\mathit{seqCons}$ and $G:=\textsc{Threads}$:
\[ 3. \models D_{\textsc{Threads}} (\mathit{seqCons}) \rightarrow \mathit{seqCons}. \]
This proves the first conjunct. The second conjunct is proved in a
similar way, by instantiating (4).
\end{proof}
There are linearizable traces on which the threads together with the observer do not know that they are linearizable. There is $E \in \mathcal{E}^\infty$ such that
\begin{equation*}
\begin{array}[t]{@{}l@{}}
(E,i) \models \mathit{Lin} \land \neg D_\mathit{Threads \uplus \{ \mathit{obs}\}}(\mathit{Lin})
\end{array}
\end{equation*}
As in sequential consistency, the threads together with the observer can spot when a trace is not linearizable.
\begin{equation*}
\models \neg \mathit{Lin} \leftrightarrow D_\mathit{Threads \uplus \{ \mathit{obs}\}}(\neg\mathit{Lin})
\end{equation*}
\begin{proof}
As $\preceq_\mathit{obs}$ is only a partial order, axiom (5) for negative introspection is not a validity; that means its negation is satisfiable. The proof of the second claim is analogous to the case of sequential consistency.
\end{proof}
\end{document} |
\begin{document}
\title{On the quasi-Ablowitz-Segur and quasi-Hastings-McLeod solutions of the inhomogeneous Painlev\'{e} II equation}
\date{\today}
\author{Dan Dai$^{\ast}$ and Weiying Hu$^{\dag}$}
\maketitle
\begin{abstract}
We consider the quasi-Ablowitz-Segur and quasi-Hastings-McLeod solutions of the inhomogeneous Painlev\'{e} II equation
$$
u''(x)=2u^3(x)+xu(x)-\alpha \qquad \textrm{for } \alpha \in \mathbb{R} \textrm{ and } |\alpha| > \frac{1}{2}.
$$
These solutions are obtained from the classical Ablowitz-Segur and Hastings-McLeod solutions via the B\"{a}cklund transformation, and satisfy the same asymptotic behaviors when $x \to \pm \infty$. For $|\alpha| > 1/2$, we show that the quasi-Ablowitz-Segur and quasi-Hastings-McLeod solutions possess $ [ \, |\alpha| + \frac{1}{2} \, ] $ simple poles on the real axis, which rigorously justifies the numerical results in Fornberg and Weideman (\emph{Found. Comput. Math.}, \textbf{14} (2014), no. 5, 985--1016).
\end{abstract}
\noindent 2010 \textit{Mathematics Subject Classification}. 33E17, 34M55.
\noindent \textit{Keywords and phrases}: Painlev\'{e} II equation; Ablowitz-Segur solutions; Hastings-McLeod solutions; B\"{a}cklund transformation.
\hrule width 65mm
\begin{description}
\item \hspace*{5mm}$\ast$ Department of Mathematics, City University of
Hong Kong, Hong Kong. \\
Email: \texttt{[email protected]}
\item \hspace*{5mm}$\dag$ Department of Mathematics, City University of
Hong Kong, Hong Kong. \\
Email: \texttt{[email protected]} (corresponding author)
\end{description}
\section{Introduction and statement of results}
\subsection{Ablowitz-Segur and Hastings-McLeod solutions}
We consider the following \emph{inhomogeneous} Painlev\'e II equation (PII)
\begin{equation}\label{PII-def}
u''(x)=2u^3+xu-\alpha, \qquad \alpha \in \mathbb{R} \setminus \{0\}.
\end{equation}
When $\alpha = 0$, the above equation reduces to the \emph{homogeneous} PII. It is well known that PII possesses two families of special solutions which are real and pole-free on the real axis: one family is oscillatory and bounded, namely the \emph{Ablowitz-Segur} (AS) solutions; the other family is smooth and nonoscillatory, namely the \emph{Hastings-McLeod} (HM) solutions. Both families of solutions decay like $\alpha/x$ as $ x \to +\infty$. More precisely, they have the following behaviors as $x \to \pm \infty$.
\noindent \textbf{Ablowitz-Segur solutions: $\alpha \in (-1/2,1/2)$.}
Let $k_{\alpha} \in (-\cos \pi \alpha, \cos \pi \alpha)$ be a real parameter. The AS solutions $u_{\textrm{AS}}(x;\alpha)$ form a one-parameter family of solutions of the inhomogeneous PII \eqref{PII-def}; each of them is continuous on the real line and has the following asymptotic behaviors:
\begin{eqnarray}
u_{\textrm{AS}}(x;\alpha)&=& B(x; \alpha) + k_{\alpha}\Ai(x)(1+O(x^{-3/4})), \qquad \textrm{as} \,\, x \to +\infty,\label{asy-pos-AS}\\
u_{\textrm{AS}}(x;\alpha)&=&\frac{d}{(-x)^{1/4}}\cos\{\frac{2}{3}(-x)^{3/2}-\frac{3}{4}d^2\ln(-x) + \phi\} +O(|x|^{-1}), \label{asy-neg-AS} \\
&&\hspace{6.5cm} \quad \textrm{as}\,\, x\to -\infty,\nonumber
\end{eqnarray}
where $\Ai(x)$ is the Airy function,
\begin{equation} \label{series-B}
B(x; \alpha)\sim \frac{\alpha}{x}\sum_{n=0}^{\infty} \frac{a_n}{x^{3n}}, \qquad a_0 = 1
\end{equation}
and $a_{n+1}=(3n+1)(3n+2)a_n - 2\alpha^2 \sum_{k+l+m =n} a_k a_l a_m.$
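As a quick consistency check of this recurrence (a direct substitution, not a formula quoted from the references), the first two coefficients are
\begin{equation*}
a_1=1\cdot 2\, a_0-2\alpha^2 a_0^3=2-2\alpha^2, \qquad a_2=4\cdot 5\, a_1-2\alpha^2\cdot 3a_0^2a_1=(20-6\alpha^2)(2-2\alpha^2).
\end{equation*}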
The constants $d$ and $\phi$ in \eqref{asy-pos-AS} and \eqref{asy-neg-AS} satisfy the following connection formulas
\begin{eqnarray}
d(k_{\alpha}) &=& \frac{1}{\sqrt{\pi}} \sqrt{-\ln(\cos^2(\pi \alpha)-k_{\alpha}^2)}, \label{d-k}\\
\phi(k_{\alpha}) &=& -\frac{3}{2} d^2 \ln 2 +\arg \Gamma\biggl(\frac{1}{2}id^2\biggr)- \frac{\pi}{4} -\arg (-\sin \pi \alpha -k_{\alpha}i). \label{phi-k}
\end{eqnarray}
\noindent \textbf{Hastings-McLeod solutions: $\alpha \in \mathbb{R}$.}
The HM solutions $u_{\textrm{HM}}(x;\alpha)$ of the inhomogeneous PII \eqref{PII-def} are continuous on the real axis and have the following asymptotic behaviors
\begin{eqnarray}
u_{\textrm{HM}}(x;\alpha)&=& B(x; \alpha) + \sigma \cos(\pi \alpha) \Ai(x)(1+O(x^{-3/4})), \qquad \textrm{as} \,\, x \to +\infty,\label{asy-pos-HM} \\
u_{\textrm{HM}}(x;\alpha)&=&\sigma \sqrt{\frac{-x}{2}} - \frac{\alpha}{2x}+O((-x)^{-3/2}),\hspace{2.5cm} \qquad \textrm{as}\,\, x\to -\infty, \label{asy-neg-HM}
\end{eqnarray}
where $\sigma\in \{+1, -1\}$ and the series $B(x; \alpha)$ is given in \eqref{series-B}.
In \eqref{asy-pos-HM} and \eqref{asy-neg-HM}, the coefficient $\sigma$ depends on $\alpha$ as follows:
\begin{equation}\label{condition-HM-pole-free}
\textrm{(i) if } \alpha > -1/2: \quad \sigma =+1; \qquad \textrm{(ii) if } \alpha < 1/2: \quad \sigma = -1.
\end{equation}
From the above formulas, one immediately sees that, when $\alpha \in (-\infty, -1/2] \cup [1/2, +\infty) $, there is a \emph{unique} HM solution for each $\alpha$; while when $\alpha \in (-\frac{1}{2}, \frac{1}{2})$, there exist \emph{two} HM solutions. Depending on whether they are monotonic on the whole real axis, the solutions can be separated into two families, which are called the \emph{primary Hastings-McLeod solutions} (pHM) and \emph{secondary Hastings-McLeod solutions} (sHM) in Fornberg and Weideman \cite{Fornberg2014}; see Figure \ref{phm-shm} for a sketch of their properties. They satisfy the asymptotics in \eqref{asy-pos-HM} and \eqref{asy-neg-HM} with the parameter $\sigma$ given by
\begin{figure}[h]
\centering
\subfigure{
\includegraphics[width=6cm]{phm.jpg}}
\subfigure{
\includegraphics[width=6cm]{shm.jpg}}
\caption{The pHM (left) $u_{\textrm{pHM}}(x;\alpha)$ and sHM (right) $-u_{\textrm{sHM}}(x;\alpha)$ solutions of PII with the same parameter $0<\alpha <1/2$.}\label{phm-shm}
\end{figure}
\begin{align}
\textrm{pHM (monotonic):} \qquad & \sigma = \begin{cases}
\textrm{sgn}(\alpha), & \textrm{if } \alpha \neq 0, \\ 1 \textrm{ or } -1, & \textrm{if } \alpha = 0,
\end{cases} \label{pHM-sigma-value} \\
\textrm{sHM (not monotonic):} \qquad & \sigma = -\textrm{sgn}(\alpha), \qquad \textrm{if } \alpha \in (-1/2, 0) \cup (0, 1/2). \label{sHM-sigma-value}
\end{align}
From \eqref{asy-pos-HM}-\eqref{sHM-sigma-value}, one can see that the pHM solutions have the same sign in their asymptotic behaviors, i.e., $u_{\textrm{pHM}}(x;\alpha) \sim \alpha/x$ as $x \to +\infty$ and $u_{\textrm{pHM}}(x;\alpha)\sim\textrm{sgn}(\alpha) \sqrt{-x/2}$ as $x \to -\infty$ for $\alpha \neq 0$.
This is similar to the classical HM solution $u_{\textrm{HM}}(x;0)$ of the homogeneous PII, whose asymptotic behaviors are $u_{\textrm{HM}}(x;0) \sim \Ai(x)$ as $x \to +\infty$ and $u_{\textrm{HM}}(x;0)\sim \sqrt{-x/2}$ as $x \to -\infty$. It is well-known that the HM solution $u_{\textrm{HM}}(x;0)$ is monotonic on the real axis and possesses a unique inflexion point $x_0$ where $u_{\textrm{HM}}''(x_0;0) = 0$; see Hastings and McLeod \cite{HM1980}. From the numerical evidence in Fornberg and Weideman \cite[Fig. 10]{Fornberg2014}, the pHM solutions satisfy similar properties. The sHM solutions, instead, have different signs in their asymptotics as $x \to \pm \infty$ and are no longer monotonic. Recently, these properties have been proved rigorously in Clerc et al. \cite{Clerc2017} and Troy \cite{Troy2018}.
The formulas \eqref{pHM-sigma-value} and \eqref{sHM-sigma-value} indicate that there exists a family of pHM solutions for any $\alpha \in \mathbb{R}$; and there is one additional family of sHM solutions for $\alpha \in (-1/2, 0) \cup (0, 1/2)$. This result is actually proved in Claeys, Kuijlaars and Vanlessen \cite[Theorem 1.1]{Claeys2008}. In \cite{Claeys2008}, the authors showed that, for $\alpha > - 1/2$, there exist the HM solutions which are pole-free on the real line and uniquely determined by the following asymptotic behaviors
\begin{equation*}
u(x;\alpha) \sim \alpha /x , \quad \textrm{as } x \to +\infty \quad \textrm{and} \quad u(x;\alpha) \sim \sqrt{-x/2} , \quad \textrm{as } x \to -\infty.
\end{equation*}
With the classification in \eqref{pHM-sigma-value} and \eqref{sHM-sigma-value}, one can see that the above solutions are indeed the pHM and sHM solutions when $\alpha > 0$ and $-1/2<\alpha <0$, respectively. Using the following symmetry relation
\begin{equation}\label{sym-relation}
u(x;\alpha)=-u(x;-\alpha),
\end{equation}
one immediately gets the pHM and sHM solutions for $\alpha < 0$ and $0<\alpha <1/2$.
The AS and HM solutions for the homogeneous PII were first discovered by Ablowitz and Segur in \cite{Ablowitz1977-asymptotic,Segur1981} and Hastings and McLeod in \cite{HM1980}, respectively. For the inhomogeneous PII, these solutions were obtained later by McCoy and Tang \cite{mccoy:Tang1986}, Its and Kapaev \cite{Its2003} and Kapaev \cite{Kapaev2004}. The rigorous justification of the asymptotic behaviors in \eqref{asy-pos-AS}-\eqref{asy-neg-AS} and \eqref{asy-pos-HM}-\eqref{asy-neg-HM}, as well as the connection formulas \eqref{d-k}-\eqref{phi-k}, has attracted a lot of research interest in the literature; see for example \cite{Bas:Cla:Law:McL1998,Clarkson1988,Deift1995,Kapaev1992} and the monograph by Fokas et al. \cite{Fokas2006}. All the AS and HM solutions are pole-free on the real axis; see Claeys, Kuijlaars and Vanlessen \cite{Claeys2008} and Dai and Hu \cite{Dai:Hu2017}. It is very interesting to note that these pHM and sHM solutions for the inhomogeneous PII play an important role in the study of nematic liquid crystals; see \cite{Clerc2017, Troy2018}. Recently, some novel solutions similar to the AS and HM solutions are obtained by Fornberg and Weideman \cite{Fornberg2014}. They are no longer pole-free but have finitely many poles on the real axis. We will discuss them in the coming section.
\subsection{Quasi-Ablowitz-Segur and quasi-Hastings-McLeod solutions}
It is a well-known fact that PII transcendents for different parameters $\alpha$ are related to each other through the following B\"{a}cklund transformation:
\begin{equation}\label{B-transfmn}
u(x;\alpha)=-u(x;\alpha-1)+\frac{2\alpha-1}{2u^{2}(x;\alpha-1)-2u'(x;\alpha-1)+x};
\end{equation}
see \cite{Clarkson:NIST}. From the previous section, we know that the AS and sHM solutions exist only when $|\alpha| < 1/2$. Applying the B\"{a}cklund transformation to the AS and sHM solutions, we will get solutions for $|\alpha| > 1/2$. It is very interesting to see that the asymptotic behaviors as $x \to \pm \infty$ are preserved under the B\"{a}cklund transformation \eqref{B-transfmn}. The only difference is that, due to properties of the denominator
\begin{equation}\label{f-def}
f(x;\alpha):=2u^{2}(x;\alpha)-2u'(x;\alpha)+x,
\end{equation}
the solutions after the B\"{a}cklund transformation may not be pole-free on the real line. Fornberg and Weideman first observed such solutions and named them the \emph{quasi-Ablowitz-Segur} (qAS) and \emph{quasi-Hastings-McLeod} (qHM) solutions; see \cite[Sec. 4.3]{Fornberg2014}.
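A minimal illustration of this mechanism, using the exact rational solutions of PII rather than the qAS or qHM solutions: starting from the trivial seed $u(x;0)\equiv 0$, the B\"{a}cklund transformation \eqref{B-transfmn} with $\alpha=1$ gives
\begin{equation*}
f(x;0)=x \qquad \textrm{and} \qquad u(x;1)=-0+\frac{2\cdot 1-1}{x}=\frac{1}{x},
\end{equation*}
so the simple zero of the denominator at the origin produces a simple real pole of $u(x;1)$ with residue $+1$.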
Let us define the qAS and qHM solutions in more detail below. Due to the symmetry relation \eqref{sym-relation}, we may assume $\alpha >0$.
\bigskip
\noindent \textbf{qAS solutions: $\alpha \in (n-1/2,n+1/2),$ $n \in \mathbb{N}$.}
Let $k_{\alpha}$ be a real parameter and $k_{\alpha} \in (-|\cos \pi \alpha|, |\cos \pi \alpha|)$. The qAS solution $u_{\textrm{qAS}}(x;\alpha)$ is a one-parameter family of solutions of inhomogeneous PII \eqref{PII-def}, which satisfies the asymptotic behaviors in \eqref{asy-pos-AS} and \eqref{asy-neg-AS}, as well as the connection formulas \eqref{d-k} and \eqref{phi-k}.
\noindent \textbf{qHM solutions: $\alpha \in (n-1/2,n+1/2),$ $n \in \mathbb{N}$.}
The qHM solutions $u_{\textrm{qHM}}(x;\alpha)$ of the inhomogeneous PII \eqref{PII-def} have the following asymptotic behaviors
\begin{eqnarray}
u_{\textrm{qHM}}(x;\alpha)&=& B(x; \alpha) - \cos(\pi \alpha) \Ai(x)(1+O(x^{-3/4})), \qquad \textrm{as} \,\, x \to +\infty, \label{asy-pos-qHM} \\
u_{\textrm{qHM}}(x;\alpha)&=&-\sqrt{\frac{-x}{2}}- \frac{\alpha}{2x} +O((-x)^{-3/2}), \hspace{2.5cm}\quad \textrm{as}\,\, x\to -\infty, \label{asy-neg-qHM}
\end{eqnarray}
where the series $B(x; \alpha)$ is given in \eqref{series-B}.
\begin{rmk}\label{qHM-pHM}
The qHM solutions distinguish themselves from the pHM solutions in \eqref{pHM-sigma-value} for $\alpha > 1/2$ by having different signs in their asymptotics as $x \to \pm \infty$.
\end{rmk}
\begin{rmk}\label{HM-qHM}
If one applies the B\"{a}cklund transformation to the HM solutions $u_{\textrm{HM}}(x;\alpha)$ (including all the pHM, sHM and qHM solutions) to get a solution $u(x;\alpha+1)$, it is straightforward to verify that the leading term $\sigma\sqrt{-x/2}$ of the asymptotics at $-\infty$ is unchanged, while the leading term of the asymptotics at $+\infty$ becomes $(\alpha+1)/x$. Therefore, since for $\alpha>0$ the sign pattern of the asymptotics of $u(x;\alpha+1)$ as $x\to \pm \infty$ is the same as that of the seed solution, we have
\begin{equation} \label{HM-transform1}
\begin{array}{l}
u_{\textrm{pHM}}(x;\alpha) \mapsto u_{\textrm{pHM}}(x;\alpha+1), \\
u_{\textrm{sHM}}(x;\alpha) \textrm{ and } u_{\textrm{qHM}}(x;\alpha) \mapsto u_{\textrm{qHM}}(x;\alpha+1),
\end{array}
\qquad \textrm{for } \alpha > 0.
\end{equation}
Since the leading term at $+\infty$ changes sign under $\alpha /x \mapsto(\alpha+1)/x$ when $-1/2<\alpha<0$, we obtain
\begin{equation}
u_{\textrm{pHM}}(x;\alpha) \mapsto u_{\textrm{qHM}}(x;\alpha+1), \quad u_{\textrm{sHM}}(x;\alpha) \mapsto u_{\textrm{pHM}}(x;\alpha+1) \quad \textrm{for } -1/2<\alpha<0.
\end{equation}
For the case $\alpha = 0$, if we choose the following pHM solution with $\sigma = -1$ in \eqref{pHM-sigma-value}
\begin{equation} \label{HMsol-1}
u_{\textrm{pHM}}(x;0)\sim - \Ai(x), \quad \textrm{as } x \to +\infty \quad \textrm{and} \quad u_{\textrm{pHM}}(x;0)\sim - \sqrt{-x/2}, \quad \textrm{as } x \to -\infty,
\end{equation}
then the B\"{a}cklund transformation \eqref{B-transfmn} gives us
\begin{equation} \label{HM-transform3}
u(x;1)\sim 1/x, \quad \textrm{as } x \to +\infty \quad \textrm{and} \quad u(x;1)\sim - \sqrt{-x/2}, \quad \textrm{as } x \to -\infty,
\end{equation}
which is $u_{\textrm{qHM}}(x;1)$. Of course, if we put $\sigma = 1$, we get $u_{\textrm{pHM}}(x;1)$.
\end{rmk}
\begin{rmk}
When applying the B\"{a}cklund transformation to get the qAS solutions, we have
\begin{equation} \label{BT-coeff}
\alpha \mapsto \alpha +1 \qquad \textrm{and} \qquad k_{\alpha} \mapsto k_{\alpha + 1} \ \textrm{ with } k_{\alpha+1} = -k_{\alpha},
\end{equation}
where $k_\alpha$ is given in \eqref{asy-pos-AS}. A similar sign change of the coefficient of the $\Ai(x)$ term also occurs when obtaining the qHM solutions, where $\cos(\pi \alpha)$ is mapped to $\cos(\pi (\alpha+1))$ in \eqref{asy-pos-HM} and \eqref{asy-pos-qHM}.
\end{rmk}
It is interesting to note that any qAS and qHM solutions defined above admit a B\"{a}cklund transformation \eqref{B-transfmn} with the
Ablowitz-Segur or Hastings-McLeod solution as the seed solution.
To see this, one can study the B\"{a}cklund transformation \eqref{B-transfmn} through Riemann-Hilbert (RH) problems; see Fokas et al. \cite[Sec. 6.1]{Fokas2006}. The AS(qAS) solutions correspond to the following special Stokes multipliers
\begin{equation}
s_1 = - \sin(\pi \alpha) - i k_{\alpha}, \quad s_2 =0, \quad s_3 = - \sin(\pi \alpha) + i k_{\alpha}, \quad k_{\alpha} \in (-|\cos \pi \alpha|, |\cos \pi \alpha|)
\end{equation}
see \cite[(2.20)]{Dai:Hu2017}. When we change the parameters as in \eqref{BT-coeff}, the Stokes multipliers become $s_{k} \mapsto -s_{k}$, $k = 1,3$. Studying the difference between the two associated RH problems, we get the B\"{a}cklund transformation \eqref{B-transfmn}. Moreover, once the Stokes multipliers are fixed for the qAS solutions $u_{\textrm{qAS}}(x;\alpha)$, the asymptotics in \eqref{asy-pos-AS} and \eqref{asy-neg-AS} are uniquely determined and can be derived by using the Deift-Zhou nonlinear steepest descent method; see for example \cite{Dai:Hu2017,Deift1995,Its2003,Kapaev2004}. Similar arguments also work for the qHM solutions $u_{\textrm{qHM}}(x;\alpha)$. Since we focus on the pole properties of the qAS and qHM solutions, we will not go into the detailed RH analysis in this paper.
\subsection{Our main results}
We will prove the following results about the poles of qAS and qHM solutions.
\begin{thm} \label{thm-pole-number}
For $\alpha\in (n-\frac{1}{2}, n+\frac{1}{2})$ with $n \in \mathbb{N}$, let the qAS solutions $u_{\textrm{qAS}}(x;\alpha)$ and qHM solutions $u_{\textrm{qHM}}(x;\alpha)$ be the solutions of PII \eqref{PII-def} with asymptotic behaviors given in \eqref{asy-pos-AS}-\eqref{asy-neg-AS} and \eqref{asy-pos-qHM}-\eqref{asy-neg-qHM}, respectively. Then, the qAS and qHM solutions have $n$ real poles.
\end{thm}
\begin{rmk}
Pole numbers of the qAS and qHM solutions on the real line have been predicted by Fornberg and Weideman based on the numerical computations in \cite{Fornberg2014}. In the past few years, Fornberg and Weideman \cite{Fornberg2011,Fornberg2015} have successfully developed the pole field solver (PFS) to compute the Painlev\'e transcendents in the complex plane efficiently and accurately. Recently, they have further extended the PFS to study multivalued Painlev\'e transcendents on their Riemann surfaces; see Fasondini, Fornberg and Weideman \cite{Fas:Forn:Weid2017}.
\end{rmk}
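To complement these numerical observations, the pole pattern described in Theorems \ref{thm-pole-number} and \ref{main-thm-AS} can also be checked symbolically in a simple explicit case. The rational solutions of PII are not qAS or qHM solutions (their asymptotics at $\pm\infty$ are different), but they are generated by the same B\"{a}cklund transformation and already display the same residue pattern. The following short SymPy sketch, included only as an illustration and not used anywhere in the proofs, verifies that the seed $u(x;1)=1/x$ produces via \eqref{B-transfmn} a solution with $\alpha=2$ having exactly two real poles whose residues appear as $+1$, $-1$ from left to right.
\begin{verbatim}
# Illustrative check (not part of the paper's arguments): apply the
# Backlund transformation to the rational seed u(x;1) = 1/x and inspect
# the real poles and residues of the resulting solution with alpha = 2.
import sympy as sp

x = sp.symbols('x', real=True)
alpha = 2

u1 = 1 / x                                  # seed: solves PII with alpha = 1
f1 = 2 * u1**2 - 2 * sp.diff(u1, x) + x     # denominator f(x;1)
u2 = sp.cancel(-u1 + (2 * alpha - 1) / f1)  # Backlund transformation

# residual of PII, u'' - 2u^3 - x*u + alpha, must vanish identically
print(sp.simplify(sp.diff(u2, x, 2) - 2 * u2**3 - x * u2 + alpha))  # 0

num, den = sp.fraction(u2)
poles = sorted([p for p in sp.solve(den, x) if p.is_real], key=float)
residues = [num.subs(x, p) / sp.diff(den, x).subs(x, p) for p in poles]
print(poles)      # [-2**(2/3), 0], i.e. x = -4^(1/3) and x = 0
print(residues)   # [1, -1]
\end{verbatim}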
Besides the pole numbers, our analysis gives more properties about the poles on the real axis. It is well-known that the PII transcendents are meromorphic functions whose poles are all simple with residue $\pm 1$; see Gromak et al. \cite[Sec. 2]{Gromak2002}. Our second result shows the dynamics of these poles with respect to the parameter $\alpha$.
\begin{thm}\label{main-thm-AS}
For $\alpha\in (n-\frac{1}{2}, n+\frac{1}{2})$ with $n \in \mathbb{N}$, let $p_{i,\pm1}(\alpha)$ be the $i$-th real pole of $u_{\textrm{qAS}}(x;\alpha)$ or $u_{\textrm{qHM}}(x;\alpha)$ counting from the negative real axis, where the subscript $\pm 1$ indicates that the residue of the pole is $+1$ or $- 1$. Then, the poles $p_{i,\pm1}(\alpha)$ satisfy the following properties:
\begin{itemize}
\item[(a)] The residue of the smallest pole must be $+1$. Moreover, the smallest pole is strictly decreasing with respect to $\alpha$, i.e.,
\begin{equation}
p_{1,+1}(\alpha) > p_{1,+1}(\alpha+1) > p_{1,+1}(\alpha+2) > \cdots .
\end{equation}
\item[(b)] The poles with residue $\pm 1$ interlace on the real axis, that is,
\begin{equation}
\begin{split}
&p_{1, +1}(\alpha)<p_{2, -1}(\alpha)< \cdots < p_{n, +1}(\alpha), \quad \textrm{if $n$ is odd,} \\
&p_{1, +1}(\alpha)<p_{2, -1}(\alpha)<\cdots<p_{n, -1}(\alpha) , \quad \textrm{if $n$ is even.}
\end{split}
\end{equation}
\item[(c)] All poles of $u(x;\alpha)$ with residue $+1$ become poles of $u(x;\alpha+1)$ with residue $-1$ via the B\"{a}cklund transformation, i.e. $p_{i+1, -1}(\alpha+1) = p_{i, +1}(\alpha)$; while all poles of $u(x;\alpha)$ with residue $-1$ are regular points of $u(x;\alpha+1)$.
\item[(d)] The residue of the largest pole is $+1$ when $n$ is odd and $-1$ when $n$ is even. Moreover, the largest poles are increasing with respect to $\alpha$:
\begin{equation}
\begin{split}
& p_{n,+1}(\alpha) = p_{n+1,-1}(\alpha+1) < p_{n+2,+1}(\alpha+2) = p_{n+3,-1}(\alpha+3) < \cdots , \quad \textrm{if $n$ is odd}, \\
& p_{n,-1}(\alpha) < p_{n+1,+1}(\alpha+1) = p_{n+2,-1}(\alpha+2) < \cdots , \hspace{3.1cm}\quad \textrm{if $n$ is even}.
\end{split}
\end{equation}
\end{itemize}
\end{thm}
The properties in the above theorems can be summarized in the following figure.
\begin{figure}[h]
\begin{center}
\includegraphics[width=15cm]{poles.jpg}
\end{center}
\caption{The pole locations of the qAS and qHM solutions of PII on $\mathbb{R}$.}\label{poles}
\end{figure}
The rest of this paper is arranged as follows. In Section \ref{sec-pre-proof}, some properties for the general PII transcendent $u(x;\alpha)$ and the function $f(x;\alpha)$ in \eqref{f-def} are provided. Then, in Section \ref{sec-main-proof}, we prove our main results by mathematical induction.
\section{Properties of the general PII transcendents}\label{sec-pre-proof}
First, let us derive some relations between a general solution $u(x;\alpha)$ and the denominator $f(x;\alpha)$ in the B\"{a}cklund transformation \eqref{B-transfmn}.
\begin{lemma} \label{lemma-u&f}
The functions $u(x;\alpha)$ and $f(x;\alpha)$ satisfy the following relations:
\begin{itemize}
\item[(i)] If $u(x;\alpha)$ is continuous and differentiable at $x_0$ such that $f(x_0;\alpha)=0$, then $f'(x_0;\alpha) = 2\alpha +1$.
\item[(ii)] In any interval where $u(x;\alpha)$ is continuous, $f(x;\alpha)$ has at most one simple zero when $\alpha \neq -1/2$.
\end{itemize}
\end{lemma}
\begin{proof}
From the definition of $f(x;\alpha)$ in \eqref{f-def}, we have
\begin{equation*}
f'(x;\alpha) = 4 u'(x;\alpha) u(x;\alpha ) - 2 u''(x;\alpha) + 1.
\end{equation*}
As $u(x;\alpha )$ satisfies the PII equation \eqref{PII-def}, it follows from the above formula that
\begin{equation}\label{f-der}
f'(x;\alpha)=-2u(x;\alpha)f(x;\alpha)+2\alpha+1.
\end{equation}
This immediately gives us part (i) of the lemma.
We will prove the second part by contradiction. Suppose that there are two adjacent zeros $x_0$ and $\tilde{x}_0$ of $f(x;\alpha)$ in the interval $I$ where $u(x;\alpha)$ is continuous. According to the definition of $f(x;\alpha)$ in \eqref{f-def}, we know $f(x;\alpha)$ is continuous and differentiable in $I$.
If $\alpha \neq -1/2$, then the zeros of $f(x;\alpha)$ must be simple, since at any such zero part (i) gives
\begin{equation}
f'(x;\alpha)= 2\alpha+1 \neq 0.
\end{equation}
Thus, we have
\begin{equation*}
f(x_0;\alpha)=f(\tilde{x}_0;\alpha)=0 \quad \textrm{and} \quad f'(x_0;\alpha)f'(\tilde{x}_0;\alpha)<0.
\end{equation*}
However, part (i) of the lemma also tells us
\begin{equation}
f'(x_0;\alpha)=f'(\tilde{x}_0;\alpha)= 2\alpha+1,
\end{equation}
which yields a contradiction. Therefore, $f(x;\alpha)$ has at most one zero in $I$.
\end{proof}
Next, we study the pole properties under the B\"{a}cklund transformation.
\begin{lemma} \label{lemma-bt-u}
Let the solutions $u(x;\alpha -1)$ and $u(x;\alpha)$ be related via the B\"{a}cklund transformation \eqref{B-transfmn}. Then, we have
\begin{itemize}
\item[(i)] The poles of $u(x;\alpha -1)$ with residue $+1$ are poles of $u(x;\alpha)$ with residue $-1$.
\item[(ii)] If $p_{-1}$ is a pole of $u(x;\alpha -1)$ with residue $-1$, then $p_{-1}$ is a regular point of $u(x;\alpha)$.
\end{itemize}
\end{lemma}
\begin{proof}
To prove part (i), let us assume $p_{+1}$ is a pole of $u(x; \alpha -1)$ with residue $+1$. Then we have the following expansion near $p_{+1}$
\begin{equation}\label{u(a-1)-pole-res(+1)}
u(x;\alpha-1)= \frac{1}{x-p_{+1}}-\frac{p_{+1}}{6}(x-p_{+1})+\frac{\alpha-2}{4}(x-p_{+1})^2+O((x-p_{+1})^3), \qquad \textrm{as } x\to p_{+1}.
\end{equation}
Using the definition of $f(x;\alpha)$ in \eqref{f-def} and the above formula, we can see that $p_{+1}$ is a double pole of $f(x;\alpha -1)$:
\begin{equation}\label{f(a-1)-pole-res(+1)}
f(x;\alpha -1)=\frac{4}{(x-p_{+1})^2}+O(1), \qquad \textrm{as } x\to p_{+1}.
\end{equation}
From the B\"{a}cklund transformation \eqref{B-transfmn}, it is easy to obtain
\begin{equation}
u(x; \alpha) = -\frac{1}{x-p_{+1}}+\frac{p_{+1}}{6}(x-p_{+1})+ \frac{\alpha+1}{4}(x-p_{+1})^2+O((x-p_{+1})^3), \qquad \textrm{as } x\to p_{+1}.
\end{equation}
Therefore, $p_{+1}$ is a pole of $u(x;\alpha)$ with residue $-1$.
Similarly, we can prove part (ii). If $p_{-1}$ is a pole of $u(x; \alpha -1)$ with residue $-1$, the following expansion near $p_{-1}$ holds:
\begin{equation}
u(x;\alpha-1)= \frac{-1}{x-p_{-1}}+\frac{p_{-1}}{6}(x-p_{-1})+\frac{\alpha}{4}(x-p_{-1})^2+O((x-p_{-1})^3), \qquad \textrm{as } x\to p_{-1}.
\end{equation}
Then, $p_{-1}$ is a simple zero of $f(x;\alpha -1)$:
\begin{equation}\label{f(a-1)-zero-res(-1)}
f(x;\alpha -1)=(-2\alpha+1)(x-p_{-1})+O((x-p_{-1})^2), \qquad \textrm{as } x\to p_{-1}.
\end{equation}
As a consequence, the second term in the B\"{a}cklund transformation \eqref{B-transfmn} induces a simple pole with residue $-1$, which cancels the pole contribution from the first term. Thus, $p_{-1}$ is a regular point of $u(x;\alpha)$.
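Explicitly (a one-line expansion included here for the reader's convenience), \eqref{f(a-1)-zero-res(-1)} gives
\begin{equation*}
\frac{2\alpha-1}{f(x;\alpha -1)}=\frac{2\alpha-1}{(-2\alpha+1)(x-p_{-1})\bigl(1+O(x-p_{-1})\bigr)}=-\frac{1}{x-p_{-1}}+O(1), \qquad \textrm{as } x\to p_{-1},
\end{equation*}
while $-u(x;\alpha-1)=\frac{1}{x-p_{-1}}+O(x-p_{-1})$, so that the singular parts in \eqref{B-transfmn} cancel.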
\end{proof}
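A concrete example of the lemma (not taken from the references, and rational rather than of qAS or qHM type) is provided by the seed $u(x;1)=1/x$, for which
\begin{equation*}
f(x;1)=\frac{2}{x^{2}}+\frac{2}{x^{2}}+x=\frac{x^{3}+4}{x^{2}} \qquad \textrm{and} \qquad u(x;2)=-\frac{1}{x}+\frac{3x^{2}}{x^{3}+4}.
\end{equation*}
The residue-$(+1)$ pole of $u(x;1)$ at the origin becomes a residue-$(-1)$ pole of $u(x;2)$, in accordance with part (i), while a new residue-$(+1)$ pole of $u(x;2)$ appears at the real zero $x=-4^{1/3}$ of $f(x;1)$, where $f'(-4^{1/3};1)=3=2\cdot 1+1$, in accordance with Lemma \ref{lemma-u&f}.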
\section{Proof of Theorems \ref{thm-pole-number} and \ref{main-thm-AS}}\label{sec-main-proof}
\subsection{Properties of real poles of qAS solutions}
For $\alpha\in (n-\frac{1}{2}, n+\frac{1}{2})$, we will study the qAS solutions for $n=1,2,3$, which possess all properties listed in Theorems \ref{thm-pole-number} and \ref{main-thm-AS}. Then, we will prove our results by mathematical induction for all $n \in \mathbb{N}$. First, we show that, for $\alpha\in (\frac{1}{2}, \frac{3}{2})$, the qAS solutions have only one pole on the real line.
\begin{prop}\label{thm-one-pole}
For $\alpha\in (\frac{1}{2}, \frac{3}{2})$, the qAS solutions of \eqref{PII-def} have only one pole on the real line with residue $+1$.
\end{prop}
\begin{proof}
For $\alpha\in (\frac{1}{2}, \frac{3}{2})$, $u_{\textrm{AS}}(x;\alpha-1)$ is the pole-free AS solution on the real line. To get the unique pole of $u_{\textrm{qAS}}(x;\alpha)$, it is enough to show that $f(x;\alpha-1)$ in the B\"{a}cklund transformation \eqref{B-transfmn} has only one zero on the real line. Recalling the asymptotics of $u_{\textrm{AS}}(x;\alpha-1)$ in \eqref{asy-pos-AS} and \eqref{asy-neg-AS}, we have from \eqref{f-def}
\begin{equation}
f(x;\alpha-1)\sim x, \qquad \textrm{as}\,\, x\to \pm \infty.
\end{equation}
Moreover, since $u_{\textrm{AS}}(x; \alpha-1)$ is smooth on the real line, $f(x;\alpha-1)$ is also continuous on the real line. The above asymptotics imply that $f(x;\alpha-1)$ has at least one zero on the real line.
According to Lemma \ref{lemma-u&f}, there is a unique point $x_1$ such that $f(x_1; \alpha-1)=0$ and $f'(x_1;\alpha-1)=2\alpha -1$. Then, using the B\"{a}cklund transformation \eqref{B-transfmn}, we conclude that $u_{\textrm{qAS}}(x;\alpha)$ has only one pole on the real line with residue $+1$.
\end{proof}
\begin{prop}\label{thm-two-poles}
For $\alpha\in (\frac{3}{2}, \frac{5}{2})$, the qAS solutions of \eqref{PII-def} have two poles on the real line with residues $\pm 1$. Moreover, we have $p_{1,+1}(\alpha) < p_{2,-1}(\alpha)$.
\end{prop}
\begin{proof}
For $\alpha\in (\frac{3}{2}, \frac{5}{2})$, let $x_{1, +1}$ be the unique pole of $u_{\textrm{qAS}}(x;\alpha-1)$ with residue $+1$. Then, from part (i) in Lemma \ref{lemma-bt-u}, $x_{1, +1}$ is the pole of $u_{\textrm{qAS}}(x;\alpha)$ with residue $-1$. To find the other pole of $u_{\textrm{qAS}}(x;\alpha)$, we make use of the behavior of $u_{\textrm{qAS}}(x;\alpha-1)$ near $x_{1, +1}$ given in \eqref{u(a-1)-pole-res(+1)}.
Combining the formulas \eqref{asy-pos-AS}-\eqref{asy-neg-AS}, we have from \eqref{f-def}
\begin{equation}\label{f(a-1)-infty-asy-1}
f(x;\alpha-1)\sim x,\qquad \textrm{as}\,\, x \to \pm\infty \quad \textrm{and} \quad f(x;\alpha-1)\to +\infty, \qquad \textrm{as}\,\, x \to x_{1, +1}.
\end{equation}
By an analysis similar to that in Proposition \ref{thm-one-pole}, we get that $f(x;\alpha-1)$ has only one zero $x_0$ in $(-\infty, x_{1, +1})$ and $f'(x_0; \alpha-1)= 2\alpha-1$. Therefore, $x_0$ is the unique pole of $u_{\textrm{qAS}}(x;\alpha)$ in $(-\infty, x_{1, +1})$ with residue $+1$.
Finally, to show that there are no other poles, we verify that $u_{\textrm{qAS}}(x;\alpha)$ has no pole in $(x_{1, +1}, +\infty)$. Otherwise, $f(x;\alpha-1)$ would have zeros in $(x_{1, +1}, +\infty)$. Due to the asymptotics in \eqref{f(a-1)-infty-asy-1}, $f(x;\alpha-1)$ tends to $+\infty$ at both endpoints of the interval $(x_{1, +1}, +\infty)$. Then, $f(x;\alpha-1)$ would have at least two zeros in $(x_{1, +1}, +\infty)$, where we use the fact that all zeros of $f(x;\alpha-1)$ are simple. So, we arrive at a contradiction with part (ii) of Lemma \ref{lemma-u&f}.
This completes the proof of our proposition.
\end{proof}
\begin{prop}\label{thm-three-poles}
For $\alpha\in (\frac{5}{2}, \frac{7}{2})$, the qAS solutions of \eqref{PII-def} have three poles on the real line with residues $\pm 1$. Moreover, we have $p_{1,+1}(\alpha) < p_{2,-1}(\alpha)< p_{3,+1}(\alpha)$.
\end{prop}
\begin{proof}
For $\alpha\in (\frac{5}{2}, \frac{7}{2})$, let $x_{1, +1}<x_{2, -1}$ be the two poles of $u_{\textrm{qAS}}(x;\alpha-1)$ with residues $+1$ and $-1$, respectively. Then, from part (i) in Lemma \ref{lemma-bt-u}, $x_{1, +1}$ is a pole of $u_{\textrm{qAS}}(x;\alpha)$ with residue $-1$. By an analysis similar to that in Proposition \ref{thm-two-poles}, it is easy to show that there is a unique pole of $u_{\textrm{qAS}}(x;\alpha)$ in $(-\infty, x_{1, +1})$ with residue $+1$. To find the last pole of $u_{\textrm{qAS}}(x;\alpha)$, let us study the properties of $f(x;\alpha-1)$ in $(x_{1, +1}, +\infty)$. By a computation similar to \eqref{f(a-1)-infty-asy-1}, we have
\begin{eqnarray}
f(x;\alpha-1)\to +\infty, \quad \textrm{as } x\to x_{1, +1}, \quad \textrm{and} \quad
f(x;\alpha-1)\to +\infty, \quad \textrm{as } x\to +\infty.
\end{eqnarray}
Note that, although $x_{2, -1} \in (x_{1, +1}, +\infty)$, it is a regular point of $u_{\textrm{qAS}}(x;\alpha)$; see part (ii) of Lemma \ref{lemma-bt-u}. Moreover, we have from \eqref{f(a-1)-zero-res(-1)}
\begin{equation}
f(x_{2, -1};\alpha-1) = 0 \quad \textrm{and} \quad f'(x_{2, -1};\alpha-1)=-2\alpha+1<0.
\end{equation}
As $f(x;\alpha-1)$ is continuous and differentiable on $(x_{1, +1}, +\infty)$, the above two formulas yield that there must exist $x_3>x_{2, -1}$ such that $f(x_3;\alpha-1)=0$ and $f'(x_3;\alpha-1)>0$. Note that $u_{\textrm{qAS}}(x;\alpha-1)$ is continuous on $(x_{2, -1}, +\infty)$. According to Lemma \ref{lemma-u&f}, $x_3$ is the unique zero of $f(x;\alpha-1)$ in $(x_{2, -1}, +\infty)$ and $f'(x_3;\alpha-1)= 2\alpha -1$. Therefore, $x_3$ must be a pole of $u_{\textrm{qAS}}(x;\alpha)$ with residue $+1$. By Lemma \ref{lemma-u&f} again, $u_{\textrm{qAS}}(x;\alpha)$ has no pole in $(x_{1, +1}, x_{2, -1})$.
This completes the proof of our proposition.
\end{proof}
Finally, we prove the statements involving the qAS solutions in Theorems \ref{thm-pole-number} and \ref{main-thm-AS} by mathematical induction.
\noindent\emph{Proof of Theorems \ref{thm-pole-number} and \ref{main-thm-AS}.} The above three propositions indicate that Theorems \ref{thm-pole-number} and \ref{main-thm-AS} are true for $n=1,2,3$. Assume the results hold for $n= m$; let us consider the case $n=m+1$. We denote the poles of $u_{\textrm{qAS}}(x;\alpha)$ and $u_{\textrm{qAS}}(x;\alpha+1)$ by $p_{i,\pm1}(\alpha)$ and $\widetilde{p}_{j,\pm1}(\alpha+1)$, respectively.
As the residue of the smallest pole of $u_{\textrm{qAS}}(x;\alpha)$ is $+1$, an analysis similar to that in Proposition \ref{thm-two-poles} shows that there exists a unique pole $x_0$ of $u_{\textrm{qAS}}(x;\alpha+1)$ in $(-\infty, p_{1,+1}(\alpha))$ with residue $+1$. This proves part (a) of Theorem \ref{main-thm-AS}.
Let $p_{k,+1}(\alpha)< p_{k+1,-1}(\alpha) < p_{k+2,+1}(\alpha)$ be three consecutive poles of $u_{\textrm{qAS}}(x;\alpha)$. According to Lemma \ref{lemma-bt-u}, they are mapped to $\widetilde{p}_{j, -1}(\alpha+1)< \eta_0 < \widetilde{p}_{j+2, -1}(\alpha+1)$, where $\eta_0$ is the zero of $f(x;\alpha)$ and a regular point of $u_{\textrm{qAS}}(x;\alpha+1)$. Since $p_{k,+1}(\alpha)$ and $p_{k+2,+1}(\alpha)$ are poles of $u_{\textrm{qAS}}(x;\alpha)$ with residue $+1$, we have from \eqref{f(a-1)-pole-res(+1)}
\begin{equation}
f(x;\alpha)\to +\infty, \quad \textrm{as } x\to p_{k,+1}(\alpha) \textrm{ and } x \to p_{k+2,+1}(\alpha).
\end{equation}
By an analysis similar to that in Proposition \ref{thm-three-poles}, there exists a unique point $x^*\in (p_{k+1,-1}(\alpha), p_{k+2,+1}(\alpha))$ such that $f(x^*;\alpha)=0$ and $f'(x^*;\alpha)= 2\alpha +1>0$. This shows that $x^*$ is the unique pole of $u_{\textrm{qAS}}(x;\alpha+1)$ with residue $+1$ in $(\eta_0, \widetilde{p}_{j+2, -1}(\alpha+1))$. Therefore, we obtain three consecutive poles of $u_{\textrm{qAS}}(x;\alpha+1)$: $\widetilde{p}_{j, -1}(\alpha+1)< \widetilde{p}_{j+1, +1}(\alpha+1) < \widetilde{p}_{j+2, -1}(\alpha+1)$ with $\widetilde{p}_{j+1, +1}(\alpha+1) = x^*$. Thus, we prove the interlacing property of poles with residue $\pm 1$, i.e., part (b) of Theorem \ref{main-thm-AS}. Part (c) of Theorem \ref{main-thm-AS} is exactly the content of Lemma \ref{lemma-bt-u}.
The proof of the pole numbers and of the largest pole also follows from the arguments above. Lemma \ref{lemma-bt-u} and the arguments in the previous paragraph imply that, for the interval $I= [p_{k_1,+1}(\alpha), p_{k_2,+1}(\alpha)] $ with any $k_1<k_2$, $u_{\textrm{qAS}}(x;\alpha+1)$ has the same number of poles as $u_{\textrm{qAS}}(x;\alpha)$ in $I$. When $m$ is odd, as the residues of both the smallest and the largest pole of $u_{\textrm{qAS}}(x;\alpha)$ are $+1$, $u_{\textrm{qAS}}(x;\alpha+1)$ has $m$ poles in $[p_{1,+1}(\alpha), p_{m,+1}(\alpha)]$ with $\widetilde{p}_{2,-1}(\alpha+1)=p_{1,+1}(\alpha)$ and $\widetilde{p}_{m+1,-1}(\alpha+1)=p_{m,+1}(\alpha)$. Recalling that there is one more pole with residue $+1$ in $(-\infty, p_{1,+1}(\alpha))$, we see that $u_{\textrm{qAS}}(x;\alpha+1)$ has $m+1$ poles with the largest pole $\widetilde{p}_{m+1,-1}(\alpha+1)=p_{m,+1}(\alpha)$. When $m$ is even, the situation is similar. Now, $u_{\textrm{qAS}}(x;\alpha+1)$ has $m$ poles in $(-\infty , p_{m-1,+1}(\alpha)]$. By an analysis similar to that in Proposition \ref{thm-three-poles}, there is one more pole with residue $+1$ in $(p_{m,-1}(\alpha), + \infty)$. Thus, $u_{\textrm{qAS}}(x;\alpha+1)$ has $m+1$ poles with the largest pole $\widetilde{p}_{m+1,+1}(\alpha+1)>p_{m,-1}(\alpha)$. This proves Theorem \ref{thm-pole-number} and part (d) of Theorem \ref{main-thm-AS}.
This finishes the proof of the results involving the qAS solutions in Theorems \ref{thm-pole-number} and \ref{main-thm-AS}.
$\Box$
\subsection{Properties of real poles of qHM solutions}
Using the same idea as in the previous subsection, we first show that, for $\alpha\in (\frac{1}{2}, \frac{3}{2})$, the qHM solutions corresponding to the asymptotics in \eqref{asy-pos-qHM} and \eqref{asy-neg-qHM} have only one pole on the real line.
\begin{prop}\label{thm-one-pole-HM}
For $\alpha\in (\frac{1}{2}, \frac{3}{2})$, the qHM solutions of \eqref{PII-def} have only one pole on the real line with residue $+1$.
\end{prop}
\begin{proof}
For $\alpha\in (\frac{1}{2}, \frac{3}{2})$, the qHM solutions $u_{\textrm{qHM}}(x;\alpha)$ are transformed from the pHM and sHM solutions (cf. \eqref{HM-transform1}-\eqref{HM-transform3}), which are continuous on the real line. Using the asymptotics of these solutions in \eqref{asy-pos-HM}, \eqref{asy-neg-HM} and \eqref{HMsol-1}, we obtain from \eqref{f-def}
\begin{equation}
f(x;\alpha-1)\sim x, \quad \textrm{as}\,\, x\to +\infty \quad \textrm{and} \quad f(x;\alpha-1)\sim \frac{1-2\alpha}{2}\left(\frac{-x}{2}\right)^{-1/2}, \quad \textrm{as}\,\, x\to -\infty.
\end{equation}
As $f(x;\alpha-1)$ is continuous on the real line, it has at least one zero on the real line. By the same argument as in Proposition \ref{thm-one-pole}, we know that $u_{\textrm{qHM}}(x;\alpha)$ has only one pole on the real line with residue $+1$.
\end{proof}
Following the analysis in the previous subsection, it is easy to see that Propositions \ref{thm-two-poles} and \ref{thm-three-poles} also hold for the qHM solutions. Using mathematical induction again, we obtain the statements involving the qHM solutions in Theorems \ref{thm-pole-number} and \ref{main-thm-AS}.
\end{document}
\begin{document}
\title{\CORR{Effective interface conditions for a porous medium type problem}}
\begin{abstract}
Motivated by biological applications on tumour invasion through thin membranes, we study a porous-medium type equation where the density of the cell population evolves under Darcy's law, assuming continuity of both the density and flux velocity on the thin membrane which separates two domains. The drastically different scales and mobility rates between the membrane and the adjacent tissues lead us to consider the limit as the thickness of the membrane approaches zero. We are interested in recovering the \textit{effective interface problem}
and the transmission conditions on the limiting zero-thickness surface, formally derived by Chaplain \textit{et al.} (2019), which are compatible with nonlinear generalized Kedem-Katchalsky ones. Our analysis relies on \textit{a priori} estimates and compactness arguments as well as on the construction of a suitable extension operator which allows us to deal with the degeneracy of the mobility rate in the membrane, as its thickness tends to zero.
\end{abstract}
\begin{flushleft}
\noindent{\makebox[1in]\hrulefill}
\end{flushleft}
2010 \textit{Mathematics Subject Classification.} 35B45; 35K57; 35K65; 35Q92; 76N10; 76S05;
\newline\textit{Keywords and phrases.} Membrane boundary conditions; Effective interface; Porous medium equation; Nonlinear reaction-diffusion equations; Tumour growth models\\[-2.em]
\begin{flushright}
\noindent{\makebox[1in]\hrulefill}
\end{flushright}
\section{Introduction}
We consider a model of cell movement through a membrane where the \CORR{population density} ${u=u(t,x)}$ is driven by porous medium dynamics. We assume the domain to be an open and bounded set $\Omega\subset\mathbb{R}^3$. This domain $\Omega$ is divided into three open subdomains, $\Omega_{i,\es}$ for $i=1,2,3$, where $\es>0$ is the thickness of the intermediate membrane, $\Omega_{2,\es}$, see Figure~\ref{fig:domains}. In the three domains, the cells are moving with different constant mobilities, $\mu_{i,\es}$, for $i=1,2,3$, and they are allowed to cross the adjacent boundaries of these domains which are $\Gamma_{1,2,\es}$ (between $\Omega_{1,\es}$ and $\Omega_{2,\es}$) and $\Gamma_{2,3,\es}$ (between $\Omega_{2,\es}$ and $\Omega_{3,\es}$).
\CORR{Then, we write $\Omega= \Omega_{1,\es}\cup \Omega_{2,\es} \cup \Omega_{3,\es}$, with $\Gamma_{1,2,\es}=\partial{\Omega}_{1,\es}\cap \partial{\Omega}_{2,\es}$, and $\Gamma_{2,3,\es}=\partial{\Omega}_{2,\es}\cap \partial{\Omega}_{3,\es}$.} The system reads as
\begin{equation}\label{epspb}
\left\{
\begin{array}{rlll}
&\partial_t u_{i,\es} - \mu_{i,\es} \nabla \cdot ( u_{i,\es} \nabla p_{i,\es}) = u_{i,\es} G(p_{i,\es}) & \text{ in } (0,T)\times\Omega_{i,\es}, & i=1,2,3,\\[1em]
&\mu_{i,\es} u_{i,\es} \nabla p_{i,\es}\cdot \boldsymbol{n}_{i,i+1} = \mu_{i+1,\es} u_{i+1,\es} \nabla p_{i+1,\es}\cdot \boldsymbol{n}_{i,i+1} &\text{ on } (0,T)\times\Gamma_{i,i+1,\es}, & i = 1,2,\\[1em]
&u_{i,\es} = u_{i+1,\es} &\text{ on } (0,T) \times\Gamma_{i,i+1,\es}, & i = 1,2,\\[1em]
&u_{i,\es}=0 &\text{ on } (0,T) \times\partial\Omega.
\end{array}
\right.
\end{equation}
We denote by $p_{i,\es}$ the density-dependent pressure, which is given by the following power law
\begin{equation*}
p_{i,\es} = u_{i,\es}^{\gamma}, \quad \text{ with } \; \gamma > 1.
\end{equation*}
In this paper, we are interested in studying the convergence of System~\eqref{epspb} as $\es\rightarrow 0$. When the thickness of the thin layer decreases to zero, the membrane collapses to a limiting interface, $\tilde{\Gamma}_{1,3}$, which separates two domains denoted by $\tilde\Omega_1$ and $\tilde\Omega_3$, see Figure~\ref{fig:domains}. \CORR{Then, the domain turns out to be $\Omega=\tilde\Omega_1 \cup \tilde{\Gamma}_{1,3} \cup\tilde\Omega_3$.}
We derive in a rigorous way the \textit{effective problem}~\eqref{effectivepb}, and in particular, the transmission conditions on the limit density, $\tilde{u}$, across the effective interface. Assuming that \CORR{the mobility} coefficients satisfy $\mu_{i,\es} >0 $ for $i=1,3$
and
\begin{equation*}
\lim_{\es\to 0}\mu_{1,\es}=\tilde\mu_{1}\in (0,+\infty), \quad \qquad \lim_{\es\to 0}\frac{\mu_{2,\es}}{\es}=\tilde\mu_{1,3} \in (0,+\infty), \quad \qquad\lim_{\es\to 0} \mu_{3,\es}=\tilde\mu_{3}\in (0,+\infty),
\end{equation*}
we prove that\CORR{, in a weak sense, solutions of Problem~\eqref{epspb} converge to solutions of the following system }
\begin{equation}\label{effectivepb}
\left\{
\begin{array}{rlll}
&\partial_t \tilde{u}_{i} - \tilde\mu_{i} \nabla \cdot ( \tilde{u}_{i} \nabla \tilde{p}_{i}) = \tilde{u}_{i} G(\tilde{p}_{i}) & \text{ in } (0,T) \times\tilde\Omega_{i},& i=1,3,\\[1em]
&\tilde\mu_{1,3} \llbracket \Pi \rrbracket= \tilde\mu_{1} \tilde{u}_{1} \nabla\tilde{p}_1 \cdot \boldsymbol{\tilde{n}}_{1,3} = \tilde\mu_{3} \tilde{u}_{3} \nabla\tilde{p}_3\cdot \boldsymbol{\tilde{n}}_{1,3} &\text{ on } (0,T)\times\tilde\Gamma_{1,3},\\[1em]
&\tilde{u}=0 &\text{ on } (0,T)\times\partial\Omega,
\end{array}
\right.
\end{equation}
where $\Pi$ satisfies $\Pi'(u)= u p'(u)$, namely
\begin{equation*}
\Pi(u) := \frac{\gamma}{\gamma +1} u ^{\gamma+1}.
\end{equation*}
We use the symbol $\llbracket(\cdot) \rrbracket$ to denote the jump across the interface $\tilde\Gamma_{1,3}$, \emph{i.e.}\;
\begin{equation}
\llbracket \Pi \rrbracket:=\frac{\gamma}{\gamma+1}(\tilde{u}^{\gamma+1})_3- \frac{\gamma}{\gamma+1}(\tilde{u}^{\gamma+1})_{1},
\end{equation}
where the subscript indicates that $(\cdot)$ is evaluated as the limit to a point of the interface coming from the subdomain $\tilde\Omega_1$, $\tilde\Omega_3$, respectively.
\begin{figure}
\caption{We represent here the bounded cylindrical domain $\Omega$ of length $L$. On the left, we can see the subdomains $\Omega_{i,\es}$, $i=1,2,3$, and, on the right, the limit subdomains $\tilde\Omega_1$ and $\tilde\Omega_3$ separated by the effective interface $\tilde\Gamma_{1,3}$.}
\label{fig:domains}
\end{figure}
\paragraph{Motivations and previous works.}
Nowadays, a huge literature can be found on the mathematical modeling of tumour growth {\color{black}, see, for instance, \cite{perthame,LFJCMWC, RCM, PT}}, on a domain $\Omega \subseteq \mathbb{R}^d$ (with $d=2,3$ for {\it in vitro} experiments, $d=3$ for {\it in vivo} tumours). In the study of tumour evolution, a crucial and challenging scenario is represented by cancer cell invasion through thin membranes. In particular, one of the most difficult barriers for the cells to cross is the \textit{basement membrane}. This kind of membrane separates the epithelial tissue from the connective one \CORR{(mainly consisting of extracellular matrix, ECM)}, providing a barrier that isolates malignant cells from the surrounding environment.
At the early stage, cancer cells proliferate locally in the epithelial tissue, giving rise to a carcinoma {\it in situ}. Unfortunately, cancer cells may mutate and acquire the ability to migrate by producing \textit{matrix metalloproteinases} (MMPs), specific enzymes which degrade the basement membrane\CORR{, allowing cancer cells to penetrate into it, invading the adjacent tissue.}
A specific study can be done on the relation between MMP and their inhibitors as in Bresch
\textit{et al.} \cite{bresch}. Instead, we are interested in modeling the cancer transition from the {\it in situ} stage to the invasive phase. \CORR{ This transition is described both by System~\eqref{epspb} and by System~\eqref{effectivepb}. In fact, for both of them, the left domain can be interpreted as the domain in which the primary tumour lives, whereas the one on the right is the connective tissue. Between them, the basal membrane is penetrated by cancer cells either with a mobility coefficient (in the case of a nonzero thickness membrane) or with particular membrane conditions, in the case of a zero-thickness interface. }
Since in biological systems the membrane is often much smaller than the size of the other components, it is then convenient and reasonable to approximate the membrane as a zero-thickness one, as done in \cite{gallinato, giverso}, differently from \cite{bresch}. In particular, it is possible to mathematically describe cancer invasion through a zero thickness interface considering a limiting problem defined on two domains. The system is then closed by \textit{transmission conditions} on the effective interface which generalise the classical Kedem-Katchalsky conditions. The latter were first formulated in \cite{KK} and are used to describe different diffusive phenomena, such as, for instance, the transport of molecules through the cell/nucleus membrane \cite{cangiani, dimitrio, serafini}, solutes absorption processes through the arterial wall \cite{QVZ}, the transfer of chemicals through thin biological membranes \cite{calabro}, or the transfer of ions through the interface between two different materials \cite{bulicek}. \CORR{In our description, the transmission conditions define continuity of cells density flux through the effective interface $\tilde\Gamma_{1,3}$ and their proportionality to the jump of a term linked to cells pressure. The coefficient of proportionality is related to the permeability of the effective interface with respect to a specific population.}
For these reasons, studying the convergence as the thickness of the membrane tends to zero represents a relevant and interesting problem both from a biological and mathematical point of view.
In the literature, this limit has been studied in different fields of applications other than tumour invasion, such as, for instance, thermal, electric or magnetic conductivity, \cite{sanchez-palencia, LRWZ}, or transport of drugs and ions through an heterogeneous layer, \cite{neuss-radu}. Physical, cellular and ecological applications characterised the bulk-surface model and the dynamical boundary value problem, derived in \cite{wang} in the context of boundary adsorption-desorption of diffusive substances between a bulk (body) and a surface. Another class of limiting systems is offered by \cite{LiWang}, in the case in which the diffusion in the thin membrane is not as small as its thickness. Again, this has a very large application field, from thermal barrier coatings (TBCs) for turbine engine blades to the spreading of animal species, from commercial pathways accelerating epidemics to cell membrane.
As it is now well-established, see for instance \cite{BD}, living tissues behave like compressible fluids. Therefore, in the last decades, mathematical models have been more and more focusing on the fluid mechanical aspects of tissue and tumour development, see for instance \cite{BC96, Green, BD, giverso, perthame, BCGR}. Tissue cells move through a porous embedding, such as the extra-cellular matrix (ECM). This nonlinear and degenerate diffusion process is well captured by filtration-type equations like the following, rather than the classical heat equation,
\begin{equation}\label{filtration}
\partial_t u + \nabla \cdot (u {\bf v}) = F(u), \quad \text{ for } t>0, \; x \in \Omega.
\end{equation}
Here $F(u)$ represents a generic density-dependent reaction term and the model is closed with the velocity field equation
\begin{equation}\label{eq: darcy}
{\bf v}:=-\mu \nabla p,
\end{equation}
and a density-dependent law of state for the pressure $p:=f(u).$
The function $\mu=\mu(t,x)\ge 0$ represents the cell mobility coefficient and the velocity field equation corresponds to the Darcy law of fluid mechanics. This relation between the velocity of the cells and the pressure gradient reflects the tendency of the cells to move away from regions of high compression.
Our model is based on the one by Chaplain \textit{et al.} \cite{giverso}, where the authors formally recover the \textit{effective interface problem}, analogous to System~\eqref{effectivepb}, as the limit of a transmission problem, (or \textit{thin layer problem}) \emph{cf.}\; System~\eqref{epspb}, when the thickness of the membrane converges to zero. They also validate through \CORR{ simulations the numerical equivalence between the two models.}
When shrinking the membrane $\Omega_{2,\es}$ to an infinitesimal region, $\tilde\Gamma_{1,3}$, (\emph{i.e.}\; when passing to the limit $\es\rightarrow 0$, where $\es$ is proportional to the thickness of the membrane), it is important to guarantee that the effect of the thin membrane on \CORR{cell invasion} remains preserved.
To this end, it is essential to make the following assumption on the mobility coefficient in the subdomain $\Omega_{2,\es}$,
\begin{equation*}
\mu_{2,\es} \xrightarrow{\es\to 0}0 \qquad \text{ such that } \qquad \frac{\mu_{2,\es}}{\es}\xrightarrow{\es\to 0} \tilde\mu_{1,3}.
\end{equation*}
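The simplest admissible example, mentioned here only for illustration, is $\mu_{2,\es}=\es\,\tilde\mu_{1,3}$ with a fixed constant $\tilde\mu_{1,3}>0$.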
This condition implies that, when shrinking the pores of the membrane, the local permeability of the layer decreases to zero proportionally with respect to the local shrinkage. The function $\tilde\mu_{1,3}$ represents the \textit{effective permeability coefficient} of the limiting interface $\tilde\Gamma_{1,3}$, \emph{i.e.}\; the permeability of the zero-thickness membrane. We refer the reader to \cite[Remark 2.4]{giverso} for the derivation of the analogous assumption in the case of a fluid flowing through a porous medium. In \cite{giverso}, the authors derive the effective transmission conditions on the limiting interface, $\tilde\Gamma_{1,3}$, which relates the jump of the quantity $\Pi:= \Pi(u)$, defined by $\Pi'(u)=u f'(u)$ and the normal flux across the interface, namely
\begin{equation*}
\tilde\mu_{1,3} \llbracket \Pi \rrbracket= \tilde\mu_i \tilde{u}_i \nabla f( \tilde{u}_i) \cdot \tilde{\boldsymbol{n}}_{1,3} = \tilde\mu_i \nabla \Pi(\tilde{u}_i) \cdot \tilde{\boldsymbol{n}}_{1,3}, \quad \text{ for } i=1,3 \quad\text{ on } \tilde\Gamma_{1,3}. \ \footnote{This equation is reported in \cite[Proposition 3.1]{giverso}, where we adapted the notation to that of our paper.}
\end{equation*}
These conditions turn out to be the well-known Kedem-Katchalsky interface conditions when $f(u):= \ln(u)$, for which $\Pi'(u)=u f'(u)=1$ and hence $\Pi(u)= u + C$, $C\in \mathbb{R}$, \emph{i.e.}\; the linear diffusion case.
In this paper, we provide a rigorous proof to the derivation of these limiting transmission conditions, for a particular choice of the pressure law. To the best of our knowledge, this question has not been addressed before in the literature for a non-linear and degenerate model such as System~\eqref{epspb}. Although our system falls into the class of models formulated by Chaplain \textit{et al.}, we consider a less general case, making some choices on the quantities of interest. First of all, for the sake of simplicity, we assume the mobility coefficients $\mu_{i,\es}$ to be positive constants, hence they do not depend on time and space as in \cite{giverso}. We take a reaction term of the form $u G(p)$, where $G$ is a pressure-penalized growth rate. Moreover, we take a power-law as pressure law of state, \emph{i.e.}\; $p= u^\gamma$, with $\gamma\ge 1$. Hence, our model turns out to be in fact a porous medium type model, since Equations~(\ref{filtration}, \ref{eq: darcy}) read as follows
\begin{equation*}
\partial_t u - \frac{\gamma}{\gamma+1}\Delta u^{\gamma+1} = u G(p), \quad \text{ for } t>0, \; x \in \Omega.
\end{equation*}
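For the reader's convenience, we recall the elementary identity behind this rewriting, which is a direct consequence of the power law $p=u^\gamma$:
\begin{equation*}
u \nabla p = \gamma u^{\gamma}\nabla u = \frac{\gamma}{\gamma+1}\nabla \left( u^{\gamma+1}\right), \qquad \text{so that} \qquad \nabla \cdot \left(u \nabla p\right) = \frac{\gamma}{\gamma+1}\Delta u^{\gamma+1}.
\end{equation*}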
The nonlinearity and the degeneracy of the porous medium equation (PME) bring several additional difficulties to its analysis compared to its linear and non-degenerate counterpart. In particular, the main challenge is represented by the emergence of a free boundary, which separates the region where $u>0$ from the region of vacuum. On this interface the equation degenerates, affecting the control and the regularity of the main quantities. For example, it is well-known that the density can develop jump singularities, therefore preventing any control of the gradient in $L^2$, in contrast to the case of linear diffusion. On the other hand, using the fundamental change of variables of the PME, $p=u^\gamma$, and studying the equation on the pressure rather than the equation on the density, turns out to be very useful when searching for better regularity of the gradient. Nevertheless, since the pressure presents ``corners'' at the free boundary, it is not possible to bound its Laplacian in $L^2$ (uniformly on the entire domain).
For these reasons, we could not straightforwardly apply some of the methods previously used in the literature in the case of linear diffusion. For instance, the result in \cite{BCF} is based on proving $H^2$-\textit{a priori} bounds, which do not hold in our case. The authors consider elliptic equations in a domain divided into three subdomains, each one contained into the interior of the other. The coefficients of the second-order terms are assumed to be piecewise continuous with jumps along the interior interfaces. Then, the authors study the limit as the thickness of the interior reinforcement tends to zero.
In \cite{sanchez-palencia}, Sanchez-Palencia studies the same problem in the particular case of a lens-shaped region, $I_\es$, which shrinks to a smooth surface in the limit, also addressing the parabolic case. The approach is based on $H^1$-\textit{a priori} estimates, namely the $L^2$-boundedness of the gradient of the unknown. Considering the variational formulation of the problem, the author is able to pass to the limit upon applying an extension operator. In fact, if the mobility coefficient in $I_\es$ converges to zero proportionally with respect to $\es$, it is only possible to establish uniform bounds outside of $I_\es$. The extension operator allows one to ``truncate'' the solution and then ``extend'' it into $I_\es$ reflecting its profile from outside. Therefore, making use of the uniform control outside of the $\es$-thickness layer, the author is able to pass to the limit in the variational formulation.
Let us also mention that, in the literature, one can find different methods and strategies for reaction-diffusion problems with a thin layer. For instance, in \CORR{\cite{maruvsic} the notion of two-scale convergence for thin domains is introduced, which allows
the rigorous derivation of lower dimensional models. Some other papers have investigated the case of a heterogeneous membrane. We cite} \cite{neuss-radu}, \CORR{where} the authors develop a multiscale method which combines classical compactness results based on \textit{a priori estimates} and weak-strong two-scale convergence results in order to be able to pass to the limit in a thin heterogeneous membrane. \CORR{In \cite{gahn2022}, a transmission problem involving nonlinear diffusion in the thin layer is treated
and an effective model is derived. Finally, in \cite{gahn2021}, the accuracy of the effective approximations for processes through thin layers is studied by proving estimates for the difference between the original and the
effective quantities.
}
The passage to the limit allows us to infer the existence of weak solutions for the effective Problem~\eqref{effectivepb}\CORRdeux{, thanks to the existence result for the $\es$-problem provided in Appendix \ref{appendix:existence}}. In the case of linear diffusion, the existence of global weak solutions for the effective problem with the Kedem-Katchalsky conditions is provided by \cite{ciavolella-perthame}. In particular, the authors prove it under weaker hypotheses such as $L^1$ initial data and reaction terms with sub-quadratic growth in an $L^1$-setting.
\paragraph{Outline of the paper.}
The paper is organised as follows. In Section~\ref{hp}, we introduce the assumptions and notations, including the definition of weak solution of the original problem, System~\eqref{epspb}. In Section~\ref{sec: a priori}, \textit{a priori} estimates that will be useful to pass to the limit are proven.
Section~\ref{sec: limit} is devoted to proving the convergence of Problem~\eqref{epspb}, following the method introduced in \cite{sanchez-palencia} for the (non-degenerate) elliptic and parabolic cases.
The argument relies on recovering the $L^2$-boundedness (uniform with respect to $\es$) of the velocity field, in our case, the pressure gradient. As one may expect, since the permeability of the membrane, $\mu_{2,\es}$, tends to zero proportionally with respect to $\es$, it is only possible to establish a uniform bound outside of $\Omega_{2,\es}$. For this reason, following \cite{sanchez-palencia}, we introduce an extension operator (Subsection~\ref{subsec: extension}) and apply it to the pressure in order to extend the $H^1$-uniform bounds in the whole space $\Omega\setminus\tilde\Gamma_{1,3}$, hence proving compactness results.
We remark that the main difference between the strategy in \cite{sanchez-palencia} and our adaptation, is given by the fact that due to the non-linearity of the equation, we have to infer strong compactness of the pressure (and consequently of the density) in order to pass to the limit in the variational formulation. For this reason, we also need the $L^1$-boundedness of the time derivative, hence obtaining compactness with a standard Sobolev's embedding argument.
Moreover, since \commentout{the solution}\CORR{solutions} to the limit Problem~\eqref{effectivepb} will present discontinuities at the effective interface, we need to build proper test functions which belong to $H^1(\Omega\setminus \tilde \Gamma_{1,3})$ that are zero on $p_\gammaartial \Omega$ and are discontinuous across $\tilde\Gamma_{1,3}$, (Subsection~\ref{subsec: testfnc}).
Finally, using the compactness obtained thanks to the extension operator, we are able to prove the convergence of \commentout{a solution}\CORR{solutions} to Problem~\eqref{epspb} to \commentout{a couple}\CORR{couples} $(\tilde u, \tilde p)$ which \commentout{satisfies}\CORR{satisfy} Problem~\eqref{effectivepb} in a weak sense, therefore inferring the existence of \commentout{a solution}\CORR{solutions} of the effective problem, as stated in the following theorem.
\begin{thm}[Convergence to the effective problem]\label{thm: main}
\CORR{Solutions of Problem~\eqref{epspb} converge weakly to solutions $(\tilde{u},\tilde{p})$ of Problem~\eqref{effectivepb} in the following weak form}
\begin{equation}\label{eq: variational limit problem}
\begin{split}
-\int_0^T\int_{\Omega} \tilde{u} & \partial_t w +\tilde\mu_1 \int_0^T \int_{\tilde\Omega_1} \tilde{u} \nabla \tilde{p} \cdot \nabla w + \tilde\mu_3 \int_0^T\int_{\tilde\Omega_3} \tilde{u} \nabla \tilde{p} \cdot \nabla w \\
&+ \tilde\mu_{1,3}\int_0^T\int_{\tilde\Gamma_{1,3}} \llbracket \Pi \rrbracket \prt*{w_{|x_3=0^+}- w_{|x_3=0^-}} = \int_0^T\int_{\Omega} \tilde{u}G(\tilde{p}) w + \int_{\Omega} \tilde{u}^0 w^0,
\end{split}
\end{equation}
for all test functions $w(t,x)$ with a proper regularity (defined in Theorem~\ref{thm: existence}) and $w(T,x) = 0$ a.e. in $\Omega$. We used the notation
\[
\llbracket \Pi \rrbracket:=\frac{\gamma}{\gamma+1}(\tilde{u}^{\gamma+1})_{|x_3=0^+}- \frac{\gamma}{\gamma+1}(\tilde{u}^{\gamma+1})_{|x_3=0^-},
\]
\CORR{and $(\cdot)_{|x_3=0^-}= \mathcal{T}_1 (\cdot)$ as well as $(\cdot)_{|x_3=0^+}= \mathcal{T}_3 (\cdot)$, with $\mathcal{T}_1, \mathcal{T}_3$ the trace operators defined in Section~\ref{hp}.}
\end{thm}
n_\gammaoindent Section~\ref{sec: conclusion} concludes the paper and provides some research perspectives.
\section{Assumptions and notations} \label{hp}
Here, we detail the problem setting and assumptions.
For the sake of simplicity, we consider as domain $\Omega\subset \mathbb{R}^3$ a cylinder with axis $x_3$, see Figure~\ref{fig:domains}. Let us notice that it is possible to take a more general domain $\hat \Omega$ by defining a proper diffeomorphism $F: \hat \Omega \to \Omega $. Therefore, the results of this work extend to more general domains as long as the existence of the map $F$ can be proved (this implies that $\hat \Omega$ is a connected open subset of $\mathbb{R}^d$ and has a smooth boundary). We assume that the domain $\Omega$ has a piecewise $C^1$ boundary. We also want to emphasize that our proofs hold in a 2D domain consisting of three rectangular subdomains.
We introduce
\begin{equation*}
u_{\es} :=\left\{
\begin{array}{ll}
u_{1,\es}, \quad \mbox{ in } \; \Omega_{1,\es},\\[0.1em]
u_{2,\es}, \quad \mbox{ in } \; \Omega_{2,\es},\\[0.1em]
u_{3,\es}, \quad \mbox{ in } \; \Omega_{3,\es},
\end{array}
\right.
\qquad
p_{\es} := \left\{
\begin{array}{ll}
p_{1,\es}, \quad \mbox{ in } \; \Omega_{1,\es},\\[0.1em]
p_{2,\es}, \quad \mbox{ in } \; \Omega_{2,\es},\\[0.1em]
p_{3,\es}, \quad \mbox{ in } \; \Omega_{3,\es}.
\end{array}
\right.
\end{equation*}
We define the interfaces between the domains $\Omega_{i,\es}$ and $\Omega_{i+1,\es}$ for $i=1,2$, as
\[
\Gamma_{i,i+1,\es} = \partial \Omega_{i,\es} \cap\, \partial \Omega_{i+1,\es}.
\]
We denote with $\boldsymbol{n}_{i,i+1}$ the outward normal to $\Gamma_{i,i+1,\es}$ with respect to $\Omega_{i,\es}$, for $i=1,2$. Let us notice that $\boldsymbol{n}_{i,i+1}=- \boldsymbol{n}_{i+1,i}$.
We define two trace operators
\[
\begin{cases}
\mathcal{T}_1: W^{k,p}(\tilde \Omega_1) \longrightarrow L^{p}(\partial \tilde \Omega_1),\\
\mathcal{T}_3: W^{k,p}(\tilde \Omega_3) \longrightarrow L^{p}(\partial \tilde \Omega_3),
\end{cases}
\quad \mbox{ for } 1\leq p < +\infty, \quad k\geq 1 .
\]
Therefore, for any $z \in W^{k,p}(\Omega \setminus \tilde\Gamma_{1,3})$, we have the following decomposition
\[
z := \begin{cases}
z_1,\quad \text{in} \quad \tilde \Omega_1,\\
z_3,\quad \text{in} \quad \tilde \Omega_3.
\end{cases}
\]
Obviously, we have that $z_\alpha \in W^{k,p}(\tilde \Omega_\alpha)$ ($\alpha =1,3$). Thus, we denote
\[
z_{|_{\partial \tilde \Omega_\alpha} }:=\mathcal{T}_\alpha z \in L^{p}(\partial \tilde \Omega_\alpha ),\quad \alpha=1,3,
\]
and the following continuity property holds~\cite{brezis}
\begin{equation*}
\|\mathcal{T}_\alpha z\|_{L^{p}(\partial \tilde \Omega_\alpha )} \leq C \|z\|_{W^{k,p}(\tilde \Omega_\alpha) },\quad \alpha=1,3.
\end{equation*}
We assume that $W^{k,p}(\Omega \setminus \tilde\Gamma_{1,3})$ is endowed with the norm
\[ \| z\|_{W^{k,p}(\Omega \setminus \tilde\Gamma_{1,3})}= \|z\|_{L^p(\Omega \setminus \tilde\Gamma_{1,3})}+ \sum_{j=1}^k \| D^j z\|_{L^p(\Omega \setminus \tilde\Gamma_{1,3})}. \]
We make the following assumptions on the initial data: there exists a positive constant $p_H$, such that
\begin{equation}\label{eq: assumptions initial data} \tag{A-data1}
0\le p^0_\es \le p_H, \qquad 0\le u^0_\es \le p_H^{1/\gamma} =: u_H,
\end{equation}
\begin{equation}\label{eq: assumptions initial data 2} \tag{A-data2}
\Delta \prt*{(u^0_{i,\es})^{\gamma+1}} \in L^1(\Omega_{i,\es}), \quad \text{ for } i=1,2,3.
\end{equation}
Moreover, we assume that there exists a function $\tilde u_0\in L^1_+(\Omega)$ (\emph{i.e.}\; $\tilde u_0\in L^1(\Omega)$ and non-negative) such that
\begin{equation}\label{conv u0} \tag{A-data3}
\|u_{\es}^0 - \tilde u_{0}\|_{L^1(\Omega)}\longrightarrow 0, \quad\text{ as } \es\rightarrow 0.
\end{equation}
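To fix ideas, the following is a simple admissible family of data (one possible choice among many; the profile $\phi$ below is purely illustrative and is not assumed in the rest of the paper): take
\[
u^0_\es(x) := u_H\, \phi(x), \qquad p^0_\es:=(u^0_\es)^\gamma, \qquad \phi\in C^\infty_c(\Omega),\quad 0\le \phi\le 1 .
\]
Then \eqref{eq: assumptions initial data} holds by construction, \eqref{eq: assumptions initial data 2} follows since $\Delta\prt*{(u_H\phi)^{\gamma+1}}$ is bounded for $\gamma>1$, and \eqref{conv u0} holds trivially with $\tilde u_0=u_H\phi$.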
The growth rate $G(\cdot)$ satisfies
\begin{equation}\label{eq: assumption G} \tag{A-G}
G(0)=G_M>0, \quad G'(\cdot)<0, \quad G(p_H)=0.
\end{equation}
The value $p_H$, called \textit{homeostatic pressure}, represents the lowest level of pressure that prevents cell multiplication due to contact-inhibition.
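A prototypical growth law satisfying \eqref{eq: assumption G} (used here only as an illustration; the analysis below does not rely on this specific form) is the linear one,
\[
G(p)=G_M\Big(1-\frac{p}{p_H}\Big),\qquad\text{so that}\qquad G(0)=G_M>0,\quad G'\equiv-\frac{G_M}{p_H}<0,\quad G(p_H)=0 .
\]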
We assume that the mobility coefficients satisfy $\mu_{i,\es} >0 $ for $i=1,3$
and
\begin{equation}
\lim_{\es\to 0}\mu_{1,\es}=\tilde\mu_{1}>0, \qquad \qquad \lim_{\es\to 0}\frac{\mu_{2,\es}}{\es}=\tilde\mu_{1,3}>0, \qquad \qquad\lim_{\es\to 0} \mu_{3,\es}=\tilde\mu_{3}>0.
\label{eq:cond-mu}
\end{equation}
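For instance (this particular scaling is only an example and is not assumed in what follows), the choice
\[
\mu_{1,\es}=\tilde\mu_{1},\qquad \mu_{2,\es}=\es\,\tilde\mu_{1,3},\qquad \mu_{3,\es}=\tilde\mu_{3},
\]
with fixed positive constants $\tilde\mu_{1},\tilde\mu_{1,3},\tilde\mu_{3}$, satisfies \eqref{eq:cond-mu} and corresponds to a membrane whose permeability is proportional to its thickness.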
\noindent{\textbf{Notations.}} For all $T>0$, we denote $\Omega_T:=(0,T)\times\Omega$. We use the abbreviated form ${u_\es:=u_\es(t):=u_\es(t,x).}$ From now on, we use $C$ to indicate a generic positive constant independent of $\es$ that may
change from line to line. Moreover, we denote
\begin{equation*}
\mathrm{sign}_+{(w)}=\mathds{1}_{\{w>0\}},\quad \quad \mathrm{sign}_-{(w)}=-\mathds{1}_{\{w<0\}},
\end{equation*}
and
\[
\mathrm{sign}(w) = \mathrm{sign}_+{(w)} + \mathrm{sign}_-{(w)}.
\]
We also define the positive and negative part of $w$ as follows
\begin{equation*}
(w)_+ :=
\begin{cases}
w, &\text{ for } w>0,\\
0, &\text{ for } w\leq 0,
\end{cases}
\quad \text{ and }\quad
(w)_- :=
\begin{cases}
-w, &\text{ for } w<0,\\
0, &\text{ for } w\geq 0.
\end{cases}
\end{equation*}
We denote $|w|:=(w)_++(w)_-$.
\noindent
Now, let us write the variational formulation of Problem~\eqref{epspb}.
\begin{definition}[Definition of weak solutions]
Given $\es>0$, a weak solution to Problem~\eqref{epspb} is given by $u_\es, p_\es \in L^\infty(0,T;L^\infty(\Omega))$ such that $\nabla p_\es \in L^2(\Omega_T)$ and
\begin{equation}
- \int_0^T\int_{\Omega} u_\es \partial_t \psi + \sum_{i=1}^3 \mu_{i,\es} \int_0^T\int_{\Omega_{i,\es}} u_{i,\es} \nabla p_{i,\es} \cdot \nabla \psi = \int_0^T\int_{\Omega} u_{\es} G(p_{\es}) \psi + \int_{\Omega} u^0_\es \psi (0,x) ,
\label{eq:variational-pb-homo}
\end{equation}
for all test functions $\psi \in H^1(0,T; H^1_0(\Omega))$ such that $\psi(T,x)=0$ a.e. in $\Omega$.
\end{definition}
\section{A priori estimates}\label{sec: a priori}
We show that the main quantities satisfy some uniform \textit{a priori} estimates which will later allow us to prove strong compactness and pass to the limit.
\begin{lemma}[A priori estimates]\label{lemma: a priori}
Given the assumptions in Section~\ref{hp}, let $(u_\es, p_\es)$ be a solution of Problem~\eqref{epspb}.
There exists a positive constant $C$ independent of $\es$ such that
\begin{itemize}
\item[\textit{(i)}]
$0\le u_{\es}\le u_H$ and $0\le p_{\es}\le p_H$,
\item[\textit{(ii)}]
$\|\partial_t u_\es\|_{L^\infty(0,T; L^1(\Omega))}\leq C,\; \|\partial_t p_\es\|_{L^\infty(0,T; L^1(\Omega))}\leq C$,
\item[\textit{(iii)}]
$\|\nabla p_\es\|_{L^2(0,T; L^2(\Omega\setminus\Omega_{2,\es}))}\leq C$.
\end{itemize}
\end{lemma}
\begin{remark}
We remark that statement (i) implies that for all $p\in [1,\infty]$, we have \[\|u_{\es}\|_{L^\infty(0,T; L^p(\Omega))}\leq C, \; \|p_{\es}\|_{L^\infty(0,T; L^p(\Omega))} \leq C.\]
\end{remark}
\begin{remark}
The following proof can be made rigorous by performing a parabolic regularization of the problem, namely by adding $\delta \Delta u_{i,\es}$, for $\delta>0$, to the left-hand side of the equation and in the flux continuity conditions. In fact, the following estimates can be obtained uniformly both in $\es$ and $\delta$.
\end{remark}
\begin{proof}
Let us recall the equation satisfied by $u_\es$ on $\Omega_{i,\es}$, namely
\begin{equation}\label{eq: u}
\partial_t u_{i,\es} - \mu_{i,\es} \nabla \cdot ( u_{i,\es} \nabla u_{i,\es}^{\gamma} )= u_{i,\es} G(p_{i,\es}).
\end{equation}
\noindent{\bf(i)} $\boldsymbol{ 0\le u_{\es}\le u_H,\quad 0\le p_{\es}\le p_H.}$ \
The $L^\infty$-bounds of the density and the pressure are a straightforward consequence of the comparison principle applied to Equation~\eqref{eq: u}, which can be rewritten as
\begin{equation}\label{eq: u2}
\partial_t u_{i,\es} - \frac{\gamma}{\gamma+1} \mu_{i,\es} \Delta u_{i,\es}^{\gamma+1}= u_{i,\es} G(p_{i,\es}).
\end{equation}
\end{equation}
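The two formulations are equivalent owing to the elementary chain-rule identity (valid at least formally wherever $u_{i,\es}\ge 0$, and rigorously on the regularized problem mentioned above)
\[
\nabla\cdot\prt*{u_{i,\es}\nabla u_{i,\es}^{\gamma}}=\gamma\,\nabla\cdot\prt*{u_{i,\es}^{\gamma}\nabla u_{i,\es}}=\frac{\gamma}{\gamma+1}\,\Delta u_{i,\es}^{\gamma+1}.
\]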
Indeed, summing up Equations~\eqref{eq: u2} for $i=1,2,3$, we obtain
\begin{equation}\label{eq: usum}
\sum_{i=1}^3 \partial_t u_{i,\es} - \frac{\gamma}{\gamma+1} \sum_{i=1}^3 \mu_{i,\es} \Delta u_{i,\es}^{\gamma+1}= \sum_{i=1}^3 u_{i,\es} G(p_{i,\es}).
\end{equation}
Then, we also have
\begin{equation*}
\sum_{i=1}^3 \partial_t (u_H-u_{i,\es})= \frac{\gamma}{\gamma+1} \sum_{i=1}^3 \mu_{i,\es} \Delta (u_H^{\gamma+1}-u_{i,\es}^{\gamma+1})+\sum_{i=1}^3 (u_H-u_{i,\es}) G(p_{i,\es}) - u_H\sum_{i=1}^3 G(p_{i,\es}).
\end{equation*}
\noindent
Let us recall Kato's inequality, \cite{kato}, \emph{i.e.}\;
$$\Delta (u)_-\geq \mathrm{sign}_-(u) \Delta u.$$
If we multiply by $\mathrm{sign}_{-}(u_H-u_{i,\es})$, thanks to Kato's inequality, we infer that
\begin{equation}\label{eq: sign -}
\begin{split}
\sum_{i=1}^3 \partial_t (u_H-u_{i,\es})_{-}&\leq \sum_{i=1}^3 \left[ \frac{\gamma}{\gamma+1} \mu_{i,\es} \Delta (u_H^{\gamma+1}-u_{i,\es}^{\gamma+1})_{-}+ (u_H-u_{i,\es})_{-} G(p_{i,\es}) \right.\\[0.3em]
&\qquad \qquad - u_H G(p_{i,\es})\mathrm{sign}_{-}(u_H-u_{i,\es})\Big]\\[0.3em]
&\leq \sum_{i=1}^3\left[\frac{\gamma}{\gamma+1} \mu_{i,\es} \Delta (u_H^{\gamma+1}-u_{i,\es}^{\gamma+1})_{-}+ (u_H-u_{i,\es})_{-} G(p_{i,\es})\right],
\end{split}
\end{equation}
where we have used Assumption~\eqref{eq: assumption G}: on the set $\{u_H<u_{i,\es}\}$ we have $p_{i,\es}=u_{i,\es}^{\gamma}>p_H$, hence $G(p_{i,\es})\le 0$ and the term $- u_H G(p_{i,\es})\mathrm{sign}_{-}(u_H-u_{i,\es})$ is non-positive.
We integrate over the domain $\Omega$. Thanks to the boundary conditions in System~\eqref{epspb}, \emph{i.e.}\; the density and flux continuity across the interfaces, and the homogeneous Dirichlet conditions on $\partial \Omega$, we obtain
\begin{align*}
\sum_{i=1}^3&\int_{\Omega_{i,\es}} \mu_{i,\es} \Delta (u_H^{\gamma+1}-u_{i,\es}^{\gamma+1})_{-}
\\[0.3em]
&=\sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \left[\mu_{i,\es} \nabla (u_H^{\gamma+1}-u_{i,\es}^{\gamma+1})_- - \mu_{i+1,\es} \nabla (u_H^{\gamma+1}-u_{i+1,\es}^{\gamma+1})_-\right]\cdot \boldsymbol{n}_{i,i+1} \\[0.3em]
&=\sum_{i=1}^2 \left[\int_{\Gamma_{i,i+1,\es}\cap \{u_H<u_{i,\es}\}} \mu_{i,\es} \nabla u_{i,\es}^{\gamma+1}\cdot \boldsymbol{n}_{i,i+1} - \int_{\Gamma_{i,i+1,\es}\cap \{u_H<u_{i+1,\es}\}} \mu_{i+1,\es} \nabla u_{i+1,\es}^{\gamma+1} \cdot \boldsymbol{n}_{i,i+1}\right] \\[0.3em]
&= \sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}\cap \{u_H<u_{i,\es}\}} \left[ \mu_{i,\es} \nabla u_{i,\es}^{\gamma+1}- \mu_{i+1,\es} \nabla u_{i+1,\es}^{\gamma+1} \right]\cdot \boldsymbol{n}_{i,i+1} \\[0.3em]
&=0.
\end{align*}
Hence, from Equation~\eqref{eq: sign -}, we find
\begin{equation*}
\frac{d}{dt}\sum_{i=1}^3 \int_{\Omega_{i,\es}}(u_H-u_{i,\es})_{-} \leq G_M \sum_{i=1}^3 \int_{\Omega_{i,\es}}(u_H-u_{i,\es})_{-}.
\end{equation*}
Finally, Gronwall's lemma and hypothesis~\eqref{eq: assumptions initial data} on $u_{i,\es}^0$ imply
\begin{equation*}
\sum_{i=1}^3 \int_{\Omega_{i,\es}}(u_H-u_{i,\es})_{-} \leq e^{G_M t} \sum_{i=1}^3\int_{\Omega_{i,\es}}(u_H-u^0_{i,\es})_{-} =0.
\end{equation*}
We then conclude the boundedness of $u_{i,\es}$ by $u_H$ for all $i=1,2,3$. From the relation $p_{\es}=u^\gamma_{\es}$, we conclude the boundedness of $p_\es$.
By arguing in an analogous way, replacing $u_H$ by $0$ and multiplying by $\mathrm{sign}_+(u_{i,\es})$, we obtain
$$ \sum_{i=1}^3 \int_{\Omega_{i,\es}}(u_{i,\es})_{-} \leq e^{G_M t} \sum_{i=1}^3\int_{\Omega_{i,\es}}(u^0_{i,\es})_{-} =0,$$
namely, $u_\es\geq 0$, and consequently, $p_\es\geq 0.$
\noindent{\bf(ii)} $\boldsymbol{\partial_t u_\es, \partial_t p_\es \in L^\infty(0,T; L^1(\Omega)).}$ \
We differentiate Equation~\eqref{eq: u2} with respect to time to obtain
\begin{equation*}
\partial_t (\partial_t u_{i,\es}) = \mu_{i,\es} \gamma \Delta\prt*{p_{i,\es} \partial_t u_{i,\es}} + \partial_t u_{i,\es} G(p_{i,\es}) + u_{i,\es} G'(p_{i,\es}) \partial_t p_{i,\es}.
\end{equation*}
Upon multiplying by $\mathrm{sign}(\partial_t u_{i,\es})$ and using Kato's inequality, we have
\begin{equation*}
\partial_t (|\partial_t u_{i,\es}|) \le \mu_{i,\es} \gamma \Delta\prt*{p_{i,\es} |\partial_t u_{i,\es}|} + |\partial_t u_{i,\es}| G(p_{i,\es}) + u_{i,\es} G'(p_{i,\es}) |\partial_t p_{i,\es}|,
\end{equation*}
since $u_{i,\es}$ and $p_{i,\es}$ are both nonnegative and $\partial_t p_{i,\es}= \gamma u_{i,\es}^{\gamma-1}\partial_t u_{i,\es}$.
We integrate over $\Omega_{i,\es}$ and sum over $i=1,2,3$, namely
\begin{equation}\label{eq: dt dtu}
\ddt \sum_{i=1}^3 \int_{\Omega_{i,\es}} |\partial_t u_{i,\es}| \le \gamma \underbrace{\sum_{i=1}^3 \mu_{i,\es} \int_{\Omega_{i,\es}} \Delta\prt*{p_{i,\es} |\partial_t u_{i,\es}|}}_{\mathcal{J}} + G_M \sum_{i=1}^3 \int_{\Omega_{i,\es}} |\partial_t u_{i,\es}|,
\end{equation}
where we use that $G'\leq 0$.
Now we show that the term $\mathcal{J}$ vanishes.
Integration by parts yields
\begin{equation*}
\mathcal{J} = \sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \mu_{i,\es} \nabla(p_{i,\es} |\partial_t u_{i,\es}|)\cdot \boldsymbol{n}_{i,i+1} + \sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \mu_{i+1,\es}\nabla(p_{i+1,\es} |\partial_t u_{i+1,\es}|)\cdot\boldsymbol{n}_{i+1,i}.
\end{equation*}
For the sake of simplicity, we denote $\boldsymbol{n} := \boldsymbol{n}_{i,i+1}.$ Let us recall that, by definition, $\boldsymbol{n}_{i+1,i}= -\boldsymbol{n}$.
We have
\begin{align*}
\mathcal{J} = &\sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \prt*{\mu_{i,\es} \nabla(p_{i,\es} |\partial_t u_{i,\es}|) - \mu_{i+1,\es} \nabla(p_{i+1,\es} |\partial_t u_{i+1,\es}|)}\cdot\boldsymbol{n}\\
= &\underbrace{\sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} |\partial_t u_{i,\es}| \mu_{i,\es} \nabla p_{i,\es} \cdot \boldsymbol{n} - |\partial_t u_{i+1,\es}| \mu_{i+1,\es} \nabla p_{i+1,\es}\cdot\boldsymbol{n}}_{\mathcal{J}_1}\\
&\; +\underbrace{\sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \mu_{i,\es} p_{i,\es}\nabla |\partial_t u_{i,\es}| \cdot \boldsymbol{n} - \mu_{i+1,\es} p_{i+1,\es} \nabla|\partial_t u_{i+1,\es}| \cdot \boldsymbol{n} }_{\mathcal{J}_2}.
\end{align*}
Let us recall the membrane conditions of Problem~\eqref{epspb}, namely
\begin{align}
\mu_{i,\es} u_{i,\es} \nabla p_{i,\es}\cdot \boldsymbol{n} &= \mu_{i+1,\es} u_{i+1,\es} \nabla p_{i+1,\es}\cdot \boldsymbol{n}, \label{cond: 1} \\[0.2em]
u_{i,\es} &= u_{i+1,\es}, \label{cond: 2}
\end{align}
on $ (0,T) \times\Gamma_{i,i+1,\es}$, for $i = 1,2$.
From Equation~\eqref{cond: 2}, it is immediate to infer
\begin{equation}\label{bdry 0}
\partial_t u_{i,\es} = \partial_t u_{i+1,\es}, \text{ on }(0,T)\times \Gamma_{i,i+1,\es},
\end{equation}
since
\begin{equation*}
u_{i,\es}(t+h) - u_{i,\es}(t) = u_{i+1,\es}(t+h) - u_{i+1,\es}(t),
\end{equation*}
on $\Gamma_{i,i+1,\es}$ for all $h>0$ such that $t+h \in (0,T)$.
Combining Equation~\eqref{cond: 2} and Equation~\eqref{cond: 1}, we get
\begin{equation}\label{bdry 1}
\mu_{i,\es} \nabla p_{i,\es} \cdot \boldsymbol{n} = \mu_{i+1,\es} \nabla p_{i+1,\es} \cdot \boldsymbol{n} \quad \text{ on } (0,T)\times \Gamma_{i,i+1,\es}.
\end{equation}
Moreover, Equation~\eqref{cond: 1} also implies
\begin{align}
\label{bdry 2}
\mu_{i,\es} p_{i,\es} \nabla u_{i,\es} \cdot \boldsymbol{n} = \mu_{i+1,\es} p_{i+1,\es} \nabla u_{i+1,\es} \cdot \boldsymbol{n} \quad \text{ on }(0,T)\times \Gamma_{i,i+1,\es},
\end{align}
which, combined with Equation~\eqref{cond: 2} gives also
\begin{align}
\mu_{i,\es} \nabla u_{i,\es} \cdot \boldsymbol{n} = \mu_{i+1,\es} \nabla u_{i+1,\es} \cdot \boldsymbol{n} \quad \text{ on } (0,T)\times \Gamma_{i,i+1,\es}.
\label{bdry 3}
\end{align}
Now we may come back to the computation of the term $\mathcal{J}$. By Equations~\eqref{bdry 0}, and \eqref{bdry 1} we directly infer that $\mathcal{J}_1$ vanishes.
We rewrite the term $\mathcal{J}_2$ as
\begin{align*}
&\sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \mu_{i,\es} p_{i,\es} \ \mathrm{sign}(\partial_t u_{i,\es}) \ \partial_t\prt*{\nabla u_{i,\es} \cdot \boldsymbol{n}} - \mu_{i+1,\es} p_{i+1,\es} \ \mathrm{sign}(\partial_t u_{i+1,\es}) \ \partial_t \prt*{\nabla u_{i+1,\es} \cdot \boldsymbol{n} }\\
= & \underbrace{\sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \mathrm{sign}(\partial_t u_{i,\es}) \ \partial_t \prt*{\mu_{i,\es} p_{i,\es} \nabla u_{i,\es} \cdot \boldsymbol{n} - \mu_{i+1,\es} p_{i+1,\es} \nabla u_{i+1,\es} \cdot \boldsymbol{n} }}_{\mathcal{J}_{2,1}}\\
& - \underbrace{\sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \abs{\partial_t p_{i, \es}} \prt*{\mu_{i,\es} \nabla u_{i,\es} \cdot \boldsymbol{n} - \mu_{i+1,\es} \nabla u_{i+1,\es} \cdot \boldsymbol{n}}}_{\mathcal{J}_{2,2}} ,
\end{align*}
where we used Equation~\eqref{bdry 0}, which also implies $\partial_t p_{i,\es} = \partial_t p_{i+1,\es} $ on $(0,T) \times \Gamma_{i,i+1,\es}$, for $i = 1,2$.
The terms $\mathcal{J}_{2,1}$ and $\mathcal{J}_{2,2}$ vanish thanks to Equation~\eqref{bdry 2} and Equation~\eqref{bdry 3}, respectively.
Hence, from Equation~\eqref{eq: dt dtu}, we finally have
\begin{equation*}
\ddt \sum_{i=1}^3 \int_{\Omega_{i,\es}} |\partial_t u_{i,\es}| \le G_M \sum_{i=1}^3 \int_{\Omega_{i,\es}} |\partial_t u_{i,\es}|,
\end{equation*}
and, using Gronwall's inequality, we obtain
\begin{equation*}
\sum_{i=1}^3 \int_{\Omega_{i,\es}} |\partial_t u_{i,\es}(t)| \le e^{G_M t}\sum_{i=1}^3 \int_{\Omega_{i,\es}} |\prt*{\partial_t u_{i,\es}}^0|.
\end{equation*}
Thanks to the assumptions on the initial data, \emph{cf.}\; Equation~\eqref{eq: assumptions initial data 2}, we conclude.
\noindent{\bf(iii)} $\boldsymbol{ p_\es \in L^2(0,T; H^1(\Omega\setminus \Omega_{2,\es})).}$ As is well known in the context of a filtration equation, we can recover the pressure equation upon multiplying the equation for $u_{i,\es}$, \emph{cf.}\; System~\eqref{epspb}, by $p'(u_{i,\es})=\gamma u_{i,\es}^{\gamma-1}$. Therefore, we obtain
\begin{equation}\label{eqp}
\partial_t p_{i,\es} - \gamma \mu_{i,\es} p_{i,\es} \Delta p_{i,\es} = \mu_{i,\es} |\nabla p_{i,\es}|^2 + \gamma p_{i,\es} G(p_{i,\es}).
\end{equation}
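For completeness, the computation behind \eqref{eqp} is the following chain-rule manipulation (formal here, but it can be justified on the regularized problem):
\[
\gamma u_{i,\es}^{\gamma-1}\,\mu_{i,\es}\nabla\cdot\prt*{u_{i,\es}\nabla p_{i,\es}}
=\mu_{i,\es}\prt*{\gamma u_{i,\es}^{\gamma-1}\nabla u_{i,\es}}\cdot\nabla p_{i,\es}+\gamma\mu_{i,\es}\,u_{i,\es}^{\gamma}\,\Delta p_{i,\es}
=\mu_{i,\es}|\nabla p_{i,\es}|^2+\gamma\mu_{i,\es}\,p_{i,\es}\,\Delta p_{i,\es},
\]
while $\gamma u_{i,\es}^{\gamma-1}\,\partial_t u_{i,\es}=\partial_t p_{i,\es}$ and $\gamma u_{i,\es}^{\gamma-1}\,u_{i,\es}G(p_{i,\es})=\gamma p_{i,\es}G(p_{i,\es})$.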
Studying the equation for $p_\es$ rather than the equation for $u_\es$ turns out to be very useful in order to prove compactness since, as is well known for the porous medium equation (PME), the gradient of the pressure can be easily bounded in $L^2$, while the gradient of the density may blow up at the free boundary, \cite{vasquez}.
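As a simple illustration (in one space dimension, with $G\equiv 0$ and unit mobility, hence outside the setting of this paper), the classical porous-medium travelling wave has pressure
\[
p(t,x_3)=c\,(ct-x_3)_+,\qquad c>0,
\]
which solves $\partial_t p=\gamma p\,\partial_{x_3x_3} p+|\partial_{x_3} p|^2$ away from the interface and has a bounded gradient, while the corresponding density $u=p^{1/\gamma}$ behaves like $(ct-x_3)_+^{1/\gamma}$ and has an unbounded gradient at the free boundary $\{x_3=ct\}$ for $\gamma>1$; see \cite{vasquez}.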
We integrate Equation~\eqref{eqp} on each $\Omega_{i,\es}$, and we sum over all $i$ to obtain
\begin{equation}\label{sump}
\sum_{i=1}^{3} \int_{\Omega_{i,\es}}\partial_t p_{i,\es} = \sum_{i=1}^{3} \left(\gamma \mu_{i,\es} \int_{\Omega_{i,\es}} p_{i,\es}\Delta p_{i,\es} +\int_{\Omega_{i,\es}} \mu_{i,\es}|\nabla p_{i,\es}|^2 +\gamma\int_{\Omega_{i,\es}} p_{i,\es} G(p_{i,\es})\right).
\end{equation}
Integration by parts yields
\begin{align*}
\sum_{i=1}^{3} \mu_{i,\es} \int_{\Omega_{i,\es}} p_{i,\es}\Delta p_{i,\es} = &-\sum_{i=1}^{3} \mu_{i,\es} \int_{\Omega_{i,\es}} |\nabla p_{i,\es}|^2 + \sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \mu_{i,\es} p_{i,\es} \nabla p_{i,\es} \cdot \boldsymbol{n}_{i,i+1} \\
&\qquad+\sum_{i=1}^2 \int_{\Gamma_{i,i+1,\es}} \mu_{i+1,\es} p_{i+1,\es} \nabla p_{i+1,\es} \cdot \boldsymbol{n}_{i+1,i} \\
= &-\sum_{i=1}^{3}\mu_{i,\es} \int_{\Omega_{i,\es}} |\nabla p_{i,\es}|^2,
\end{align*}
since we have homogeneous Dirichlet boundary conditions on $\partial\Omega$ and the flux continuity conditions~\eqref{bdry 1}.
Hence, from Equation~\eqref{sump}, we have
\begin{equation}\label{sump2}
\sum_{i=1}^{3} \int_{\Omega_{i,\es}}\partial_t p_{i,\es} = \sum_{i=1}^{3} \left((1-\gamma)\mu_{i,\es}\int_{\Omega_{i,\es}} |\nabla p_{i,\es}|^2 +\gamma\int_{\Omega_{i,\es}} p_{i,\es} G(p_{i,\es})\right).
\end{equation}
We integrate over time and we deduce that
\begin{equation}\label{sump2t}
\sum_{i=1}^{3} \left( \int_{\Omega_{i,\es}} p_{i,\es}(T) - \int_{\Omega_{i,\es} }
p_{i,\es}^0 + \mu_{i,\es} (\gamma-1) \int_{0}^{T}\int_{\Omega_{i,\es}} |\nabla p_{i,\es}|^2\right) = \sum_{i=1}^{3} \gamma \int_{0}^{T}\int_{\Omega_{i,\es}} p_{i,\es} G(p_{i,\es}).
\end{equation}
Finally, we conclude that
\begin{equation}\label{gradp}
\sum_{i=1}^{3} \int_{0}^{T}\int_{\Omega_{i,\es}}\mu_{i,\es} |\nabla p_{i,\es}|^2 \le \sum_{i=1}^{3} \frac{\gamma}{\gamma-1} \int_{0}^{T}\int_{\Omega_{i,\es}} p_{i,\es} G(p_{i,\es}) + \frac{1}{\gamma-1} \int_{\Omega_{i,\es}} p_{i,\es}^0.
\end{equation}
Since we have already proved that $p_{i,\es}$ is bounded in $L^\infty(\Omega_T)$ and by assumption $G$ is continuous, we finally find that
\begin{equation}\label{eq:bound-nablap}
\,\sum_{i=1}^{3} \mu_{i,\es} \int_{0}^{T}\int_{\Omega_{i,\es}} |\nabla p_{i,\es}|^2 \le C,
\end{equation}
where $C$ denotes a constant independent of $\es$.
Since both $\mu_{1,\es}$ and $\mu_{3,\es}$ are bounded from below away from zero, we conclude that the uniform bound holds in $\Omega\setminus\Omega_{2,\es}$.
\end{proof}
\begin{remark}
Let us also notice that, differently from \cite{sanchez-palencia}, where the author studies the linear and uniformly parabolic case, proving weak compactness is not enough. Indeed, due to the presence of the nonlinear term $u \nabla p$, it is necessary to infer strong compactness of $u$. For this reason, the $L^1$-uniform estimate on the time derivative proven in Lemma~\ref{lemma: a priori} is fundamental.
\end{remark}
\section{Limit \texorpdfstring{$\es\rightarrow 0$}{}}\label{sec: limit}
We now have the \textit{a priori} tools to tackle the limit $\es \rightarrow 0$. We need to construct an extension operator with the aim of controlling uniformly, with respect to $\es$, the pressure gradient in $L^2(\Omega)$. Indeed, from \eqref{eq:bound-nablap}, we see that one cannot find a uniform bound for $\norm{\nabla p_{2,\es} }_{L^2(\Omega_{2,\es})}$. The blow-up of Estimate~\eqref{eq:bound-nablap} for $i=2$ is in fact the main obstacle to obtaining compactness on $\Omega$. To this end, following \cite{sanchez-palencia}, we introduce in Subsection~\ref{subsec: extension} an extension operator which reflects the points of $\Omega_{2,\es}$ into $\Omega_{1,\es} \cup \Omega_{3,\es}$. Then, introducing proper test functions such that the variational formulations for $\es>0$ in \eqref{eq:variational-pb-homo} and for $\es \rightarrow 0$ in \eqref{eq: variational limit problem} are well-defined, we can pass to the limit (Subsection~\ref{subsec: testfnc}).
\subsection{Extension operator and compactness}
\label{subsec: extension}
\begin{figure}
\caption{Representation of the spatial symmetry used in the definition of the extension operator.}
\label{fig: extension}
\end{figure}
As mentioned above, in order to be able to pass to the limit $\es \to 0$, we first need to define the following extension operator
\begin{equation*}
\mathcal{P}_\es: L^q(0,T; W^{1,p}(\Omega\setminus \Omega_{2,\es})) \to L^q(0,T; W^{1,p}(\Omega \setminus \tilde\Gamma_{1,3} )), \quad \text{for} \quad 1\le p,q\le +\infty,
\end{equation*}
as follows for a general function $z \in L^q(0,T; W^{1,p}(\Omega\setminus \Omega_{2,\es})) $,
\begin{equation}\label{eq: extension operator}
\mathcal{P}_\es(z(t,x)) = \begin{cases}
z(t,x),\quad &\text{if} \quad x\in {\Omega_{1,\es} \cup \Omega_{3,\es}},\\
z(t,x'),\quad &\text{if} \quad x\in {\Omega_{2,\es}},
\end{cases}
\end{equation}
where $x'$ is the point symmetric to $x$ with respect to $\Gamma_{1,2,\es}$ (or $\Gamma_{2,3,\es}$) if $x \in \Omega_{2,1,\es}$ (respectively $x\in \Omega_{2,3,\es}$), defined by the function $g:x \to x'$ for $x = (x_1,x_2,x_3)\in \Omega_{2,\es}$ such that
\[
g(x) = \begin{cases}
\left(x_1,x_2,x_3-2\, d(\Gamma_{1,2,\es}, x)\right), \quad &\text{if} \quad x\in \Omega_{2,1,\es},\\
\left(x_1,x_2,x_3+2\, d(\Gamma_{2,3,\es}, x) \right), \quad &\text{if} \quad x\in \Omega_{2,3,\es},
\end{cases}
\]
where $d(\Gamma_{1,2,\es}, x)$ (respectively $d(\Gamma_{2,3,\es}, x)$) denotes the distance between $x$ and the surface $\Gamma_{1,2,\es}$ (respectively $\Gamma_{2,3,\es}$).
The point $x'$ is illustrated in Figure~\ref{fig: extension}. It can be easily seen that the function $g$ and its inverse have uniformly bounded first derivatives. Hence, we infer that $\mathcal{P}_\es$ is linear and bounded, \emph{i.e.}\;
\[
\norm{\mathcal{P}_\es(z)}_{L^q(0,T;W^{1,p}(\Omega \setminus \tilde\Gamma_{1,3}))} \le C\, \norm{z}_{L^q(0,T;W^{1,p}(\Omega\setminus \Omega_{2,\es}))},\quad \forall z\in L^q(0,T; W^{1,p}(\Omega\setminus \Omega_{2,\es})), \mbox{ for } 1\le p,q\le \infty.
\]
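Let us sketch the elementary computation behind this bound, under the flat-interface (cylindrical) geometry considered here. Since $g$ is an affine reflection, $|\det \nabla g|=1$ and, for $\es$ small enough, $g(\Omega_{2,\es})\subset \Omega_{1,\es}\cup\Omega_{3,\es}$, so that for a.e.\ $t$
\[
\int_{\Omega_{2,\es}}|z(t,g(x))|^p\,dx=\int_{g(\Omega_{2,\es})}|z(t,y)|^p\,dy\le \|z(t)\|^p_{L^p(\Omega\setminus\Omega_{2,\es})},
\]
and the same change of variables applies to $\nabla\!\left(z\circ g\right)=(\nabla g)^{T}\,(\nabla z)\circ g$; hence the constant $C$ above may indeed be taken independent of $\es$.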
Let us notice that the extension operator is well defined also from $L^1((0,T)\times (\Omega \setminus \Omega_{2,\es}))$ into $L^1((0,T)\times ( \Omega \setminus \tilde\Gamma_{1,3}))$. Hence, we can apply it also to $u_{\es}$ and $\partial_t p_{\es}$.
\begin{remark}\label{rmk: a priori P} Thanks to the properties of the extension operator, the estimates stated in Lemma~\ref{lemma: a priori} hold true also upon applying $\mathcal{P}_\es(\cdot)$ to $p_{\es}, u_\es$, and $\partial_t p_\es$, namely
\begin{align*}
&0\le \mathcal{P}_\es(p_{\es})\le p_H, \quad 0\le\mathcal{P}_\es(u_\es) \leq u_H, \\[0.3em]
& \partial_t \mathcal{P}_\es(p_\es) \in L^\infty(0,T; L^1(\Omega\setminus\tilde\Gamma_{1,3})),\\[0.3em]
&\nabla\mathcal{P}_\es(p_\es) \in L^2(0,T; L^2(\Omega\setminus\tilde\Gamma_{1,3})),\\[0.3em]
&\frac{\gamma}{\gamma+1} \nabla \prt*{ \mathcal{P}_\es(u_\es^{\gamma+1})}\in L^2(0,T;L^2(\Omega\setminus\tilde\Gamma_{1,3})),\\[0.3em]
& \partial_t(\mathcal{P}_\es(u_\es^{\gamma+1}))\in L^\infty(0,T; L^1(\Omega\setminus \tilde\Gamma_{1,3})).
\end{align*}
\noindent The last two bounds hold thanks to the following arguments
\begin{equation*}
\frac{\gamma}{\gamma+1} \nabla \prt*{ \mathcal{P}_\es(u_\es^{\gamma+1})} = \mathcal{P}_\es(u_\es) \nabla\mathcal{P}_\es(p_{\es})\in L^2(0,T;L^2(\Omega\setminus\tilde\Gamma_{1,3})),
\end{equation*}
and
\begin{equation*}
\partial_t\prt*{ \mathcal{P}_\es(u_\es^{\gamma+1})}=(\gamma+1)\mathcal{P}_\es(p_\es)\partial_t \mathcal{P}_\es( u_\es)=(\gamma+1)\mathcal{P}_\es(p_\es)\mathcal{P}_\es(\partial_t u_\es) \in L^\infty(0,T; L^1(\Omega\setminus \tilde\Gamma_{1,3})).
\end{equation*}
\end{remark}
\begin{lemma}[Compactness of the extension operator]\label{lem: convergence P}
\begin{sloppypar}
Let $(u_\es, p_\es)$ be the solution of Problem~\eqref{epspb}. There exists a couple $(\tilde{u},\tilde{p})$ with
\[
\tilde{u}\in L^\infty(0,T; L^\infty(\Omega \setminus \tilde \Gamma_{1,3})), \quad{\tilde{p}\in L^2(0,T; H^1(\Omega \setminus \tilde \Gamma_{1,3})) \cap L^\infty(0,T; L^\infty(\Omega \setminus \tilde \Gamma_{1,3}))},
\]
such that, up to a subsequence, it holds
\end{sloppypar}
\begin{itemize}
\item[\textit{(i)}] $\mathcal{P}_\es(p_\es)\rightarrow \tilde{p}$ strongly in $L^p(0,T; L^p(\Omega \setminus \tilde \Gamma_{1,3}))$, for $1\le p < +\infty$,
\item[\textit{(ii)}] $\mathcal{P}_\es(u_\es)\rightarrow \tilde{u}$ strongly in $L^p(0,T; L^p(\Omega \setminus \tilde \Gamma_{1,3}))$, for $1\le p < +\infty$,
\item[\textit{(iii)}] $\nabla\mathcal{P}_\es(p_\es)\rightharpoonup \nabla\tilde{p}$ weakly in $L^2(0,T; L^2(\Omega\setminus\tilde \Gamma_{1,3}))$.
\end{itemize}
\end{lemma}
\begin{proof}
\textit{(i).} Since both $\partial_t \mathcal{P}_\es(p_\es)$ and $\nabla \mathcal{P}_\es(p_\es)$ are bounded in $L^1(0,T; L^1(\Omega \setminus \tilde \Gamma_{1,3}))$ uniformly with respect to $\es$, we infer the strong compactness of $\mathcal{P}_\es(p_\es)$ in $L^1(0,T; L^1(\Omega \setminus \tilde \Gamma_{1,3}))$. Let us also notice that since both $u_\es$ and $p_\es$ are uniformly bounded in $L^\infty(0,T; L^\infty(\Omega \setminus \tilde \Gamma_{1,3}))$, the strong convergence holds in any $L^p(0,T; L^p(\Omega \setminus \tilde \Gamma_{1,3}))$ with $1\le p < \infty.$
\textit{(ii).} From \textit{(i)}, we can extract a subsequence of $\mathcal{P}_\es(p_\es)$ which converges almost everywhere. Then, remembering that $u_\es=p_\es^{1/\gamma}$, with $\gamma>1$ fixed, we have convergence of $\mathcal{P}_\es(u_\es)$ almost everywhere. Thanks to the uniform $L^\infty$-bound of $\mathcal{P}_\es(u_\es)$, Lebesgue's theorem implies the statement. Let us point out that, in particular, the $L^\infty$-uniform bound is also valid in the limit.
\textit{(iii).} The uniform boundedness of $ \nabla\mathcal{P}_\es(p_\es)$ in $L^2(0,T; L^2(\Omega \setminus \tilde \Gamma_{1,3}))$ immediately implies weak convergence up to a subsequence.
\end{proof}
\subsection{Test function space and passage to the limit \texorpdfstring{$\es\rightarrow 0$}{}}\label{subsec: testfnc}
Since in the limit we expect a discontinuity of the density on $\tilde\Gamma_{1,3}$, we need to define a suitable space of test functions. Therefore we construct the space $E^\star$ as follows.
Let us consider a function $\zeta \in \mathcal{D}(\Omega)$ (\emph{i.e.}\; $C^\infty_c(\Omega)$).
For any $\es>0$ small enough, we build the function $v_\es = \mathcal{P}_\es(\zeta)$, using the extension operator previously defined. The space of all linear combinations of these functions $v_\es$ is called $E^\star\subset H^1(\Omega \setminus \tilde{\Gamma}_{1,3})$, namely
\[
E^\star=\left\{\sum_{n=1}^\infty c_n v_{\es,n}\; \mbox{ s.t. }\; c_n\in \mathbb{R},\; v_{\es,n}=\mathcal{P}_\es(\zeta_n),\; \zeta_n \in C^\infty_c(\Omega)\right\}.
\]
We stress that the functions of $E^\star$ are discontinuous on $\tilde \Gamma_{1,3}$.
In the weak formulation of the limit problem~\eqref{eq: variational limit problem}, we will make use of piece-wise $C^\infty$ test functions (discontinuous on $\tilde\Gamma_{1,3}$) of the type $w(t,x)=\varphi(t)v(x)$, where $\varphi\in C^1([0,T))$ with $\varphi(T) = 0$ and $v\in E^*$. Therefore, $w$ belongs to $C^1([0,T);E^*)$.
On the other hand, in the variational formulation \eqref{eq:variational-pb-homo}, \emph{i.e.}\; for $\es>0$, $H^1(0,T; H^1_0(\Omega))$ test functions are required. Thus, in order to study the limit $\es\to 0$, we need to introduce a proper sequence of test functions depending on $\es$ that converges to $w$.
To this end, we define the operator $L_\es: C^1([0,T); E^*) \to H^1(0,T; H_0^1(\Omega))$ such that
\[
L_\es(w) \to w, \quad \text{uniformly as} \quad \es \to 0, \quad \forall w \in C^1([0,T); E^\star).
\]
In this way, $L_\es(w)$ belongs to $H^1(0,T; H^1_0(\Omega))$, therefore, it can be used as test function in the formulation~\eqref{eq:variational-pb-homo}.
Following Sanchez-Palencia, \cite{sanchez-palencia}, for all $t\in[0,T]$ and $x = (x_1,x_2,x_3) \in \Omega$, we define
\[
L_\es(w(t,x)) = \begin{dcases}
w(t,x),\qquad &\text{if}\quad x \notin \Omega_{2,\es},\\[0.3em]
\frac12\left[w\prt*{t,x_1,x_2,\frac \es 2 } + w\prt*{t,x_1,x_2, - \frac \es2} \right] \\[0.3em]
\quad +\left[w\prt*{t,x_1,x_2,\frac \es 2} - w\prt*{t,x_1,x_2, - \frac\es 2} \right]\frac{x_3}{\es}, \qquad &\text{otherwise}.
\end{dcases}
\]
It can be easily verified that $L_\es(w)$ is linear with respect to $x_3$ in $\Omega_{2,\es}$ and is continuous on $\partial \Omega_{2,\es}$. Let us notice that it holds
\begin{equation}\label{eq: dL dx3}
\left|\frac{\partial L_\es(w)}{\partial x_3} \right|\le \frac C\es.
\end{equation}
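This bound follows directly from the definition of $L_\es$: in $\Omega_{2,\es}$ one has
\[
\frac{\partial L_\es(w)}{\partial x_3}=\frac{1}{\es}\left[w\prt*{t,x_1,x_2,\tfrac \es 2}-w\prt*{t,x_1,x_2,-\tfrac \es 2}\right],
\]
so that $\left|\partial_{x_3} L_\es(w)\right|\le 2\|w\|_{L^\infty}/\es$ whenever $w$ is bounded, which is the case for the product test functions used below.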
Furthermore, thanks to the mean value theorem, the partial derivatives of $L_\es(w)$ with respect to $x_1$ and $x_2$ are bounded by a constant (independent of $\es$),
\begin{equation*}
\left|\frac{\partial L_\es(w)}{\partial x_1}\right|\le C,\qquad \left|\frac{\partial L_\es(w)}{\partial x_2}\right|\le C,
\end{equation*}
and since the measure of $\Omega_{2,\es}$ is proportional to $\es$, we have
\begin{equation}\label{eq: bound dxL}
\int_0^T \int_{\Omega_{2,\es}} \left|\frac{\partial L_\es(w)}{\partial x_1}\right|^2 + \left|\frac{\partial L_\es(w)}{\partial x_2}\right|^2 \le C \es.
\end{equation}
Given $w\in C^1([0,T);E^\star)$, we take $L_\es(w)$ as a test function in the variational formulation of the problem, \emph{i.e.}\; Equation~\eqref{eq:variational-pb-homo}, and we have
\begin{equation}
\label{eq: test L}
\begin{split}
-\int_0^T\int_{\Omega} u_\es \partial_t L_\es(w) + \sum_{i=1}^3 \mu_{i,\es} \int_0^T \int_{\Omega_{i,\es}} &u_{i,\es} \nabla p_{i,\es} \cdot \nabla L_{\es}(w) \\
&= \int_0^T\int_{\Omega} u_{\es}G(p_{\es}) L_\es(w) + \int_{\Omega} u_\es^0 L_\es(w^0).
\end{split}
\end{equation}
Thanks to the \textit{a priori} estimates already proven, \emph{cf.}\; Lemma~\ref{lemma: a priori}, Remark~\ref{rmk: a priori P} and the convergence result on the extension operator, \emph{cf.}\; Lemma~\ref{lem: convergence P}, we are now able to pass to the limit $\es\rightarrow 0$ and recover the effective interface problem.
\begin{thm}\label{thm: existence}
For all test functions of the form $w(t,x):=\varphi(t)v(x)$ with $\varphi \in C^1([0,T))$ and $v\in E^*$, the limit couple $(\tilde{u},\tilde{p})$ of Lemma~\ref{lem: convergence P} satisfies the following equation
\begin{equation*}
\begin{split}
-\int_0^T\int_{\Omega} \tilde{u} & \partial_t w + \tilde\mu_1\int_0^T\int_{\tilde\Omega_1} \tilde{u} \nabla \tilde{p} \cdot \nabla w + \tilde\mu_3 \int_0^T\int_{\tilde\Omega_3} \tilde{u} \nabla \tilde{p} \cdot \nabla w \\
&+ \tilde\mu_{1,3}\int_0^T\int_{\tilde\Gamma_{1,3}} \llbracket \Pi \rrbracket \prt*{w_{|x_3=0^+}- w_{|x_3=0^-}} = \int_0^T\int_{\Omega} \tilde{u}G(\tilde{p}) w + \int_{\Omega} \tilde{u}^0 w^0,
\end{split}
\end{equation*}
where $$\llbracket \Pi \rrbracket:=\frac{\gamma}{\gamma+1}(\tilde{u}^{\gamma+1})_{|x_3=0^+}- \frac{\gamma}{\gamma+1}(\tilde{u}^{\gamma+1})_{|x_3=0^-},$$
and $(\cdot)_{|x_3=0^-}= \mathcal{T}_1 (\cdot)$ as well as $(\cdot)_{|x_3=0^+}= \mathcal{T}_3 (\cdot)$, with $\mathcal{T}_1, \mathcal{T}_3$ the trace operators defined in Section~\ref{hp}. By definition, this equation is the weak formulation of Problem~\eqref{effectivepb}.
\end{thm}
\begin{proof}
We may pass to the limit in Equation~\eqref{eq: test L}, computing each term individually.
\noindent{\textit{Step 1. Time derivative integral.}} We split the first integral into two parts
\begin{align*}
-\int_0^T\int_{\Omega} u_\es \partial_t L_\es(w) = \underbrace{-\int_0^T\int_{\Omega_{1,\es}\cup \Omega_{3,\es}} u_\es \partial_t L_\es(w)}_{\mathcal{I}_1} - \underbrace{ \int_0^T\int_{\Omega_{2,\es}} u_\es \partial_t L_\es(w)}_{\mathcal{I}_2}.
\end{align*}
Since outside of $\Omega_{2,\es}$ the extension operator coincides with the identity, and $L_\es(w)=w$, we have
\begin{align*}
\mathcal{I}_1 = -\int_0^T\int_{\Omega_{1,\es}\cup \Omega_{3,\es}} \mathcal{P}_\es(u_\es) \partial_t w = -\int_0^T\int_{\Omega} \mathcal{P}_\es(u_\es) \partial_t w +\int_0^T\int_{\Omega_{2,\es}} \mathcal{P}_\es(u_\es) \partial_t w.
\end{align*}
Thanks to Remark~\ref{rmk: a priori P}, we know that the last integral converges to zero, since both $\mathcal{P}_\es(u_\es)$ and $\partial_t w$ are bounded in $L^2$ and the measure of $\Omega_{2,\es}$ tends to zero as $\es\rightarrow 0$.
Then, by Lemma~\ref{lem: convergence P}, we have
\begin{equation*}
-\int_0^T\int_{\Omega} \mathcal{P}_\es(u_\es) \partial_t w \longrightarrow -\int_0^T\int_{\Omega} \tilde{u} \ \partial_t w, \quad \mbox{ as } \es\rightarrow 0,
\end{equation*}
where we used the weak convergence of $\mathcal{P}_\es(u_{\es})$ to $\tilde{u}$ in $L^2(0,T; L^2( \Omega\setminus\tilde\Gamma_{1,3}))$.
The term $\mathcal{I}_2$ vanishes in the limit, since both $u_\es$ and $\partial_t L_\es(w)$ are bounded in $L^2$ uniformly with respect to $\es$. Hence, we finally have
\begin{equation}\label{eq: conv of dt term}
-\int_0^T\int_{\Omega} u_\es \partial_t L_\es(w) \longrightarrow -\int_0^T\int_{\Omega} \tilde{u} \ \partial_t w, \quad \mbox{ as } \es\rightarrow 0.
\end{equation}
\noindent{\textit{Step 2. Reaction integral.}} We use the same argument for the reaction term, namely
\begin{equation*}
\int_0^T \int_{\Omega} u_\es G(p_\es) L_\es(w) = \underbrace{\int_0^T \int_{\Omega_{1,\es}\cup\Omega_{3,\es}} u_\es G(p_\es) L_\es(w)}_{\mathcal{K}_1} + \underbrace{\int_0^T \int_{\Omega_{2,\es}} u_\es G(p_\es) L_\es(w)}_{\mathcal{K}_2}.
\end{equation*}
Using again the convergence result on the extension operator, \emph{cf.}\; Lemma~\ref{lem: convergence P}, we obtain
\begin{equation*}
\mathcal{K}_1 = \int_0^T \int_{\Omega_{1,\es}\cup\Omega_{3,\es}} \mathcal{P}_\es(u_\es) G(\mathcal{P}_\es(p_\es)) w \longrightarrow \int_0^T \int_{\Omega} \tilde{u}\, G(\tilde{p}) w, \quad \mbox{ as } \es\rightarrow 0,
\end{equation*}
since both $\mathcal{P}_\es(u_\es)$ and $G(\mathcal{P}_\es(p_\es))$ converge strongly in $L^2(0,T; L^2(\Omega\setminus\tilde\Gamma_{1,3}))$.
Arguing as before, it is immediate to see that $\mathcal{K}_2$ vanishes in the limit. Hence
\begin{equation}
\label{eq: conv of G term}
\int_0^T \int_{\Omega} u_\es G(p_\es) L_\es(w) \longrightarrow \int_0^T \int_{\Omega} \tilde{u} G(\tilde{p}) w, \quad \mbox{ as } \es\rightarrow 0.
\end{equation}
\noindent{\textit{Step 3. Initial data integral.}}
From \eqref{conv u0}, it is easy to see that
\begin{equation}\label{eq: conv term initial}
\int_{\Omega} u_\es^0 L_\es(w^0) \longrightarrow \int_{\Omega} \tilde{u}^0 w^0, \quad \mbox{ as } \es\rightarrow 0.
\end{equation}
\noindent{\textit{Step 4. Divergence integral.}}
Now it remains to treat the divergence term in Equation~\eqref{eq: test L}, from which we recover the effective interface conditions at the limit.
Since the extension operator $\mathcal{P}_\es$ is in fact the identity operator on $\Omega\setminus\Omega_{2,\es}$, we can write
\begin{equation}\label{eq: H1 H2}
\begin{split}
\sum_{i=1}^3 \mu_{i,\es} &\int_0^T \int_{\Omega_{i,\es}} u_{i,\es} \nabla p_{i,\es} \cdot \nabla L_{\es}(w) \\
=&\underbrace{\sum_{i=1,3} \mu_{i,\es} \int_0^T \int_{\Omega_{i,\es}} \mathcal{P}_\es (u_{i,\es}) \nabla \mathcal{P}_\es(p_{i,\es}) \cdot \nabla w}_{\mathcal{H}_1} + \underbrace{\mu_{2,\es}\int_0^T \int_{\Omega_{2,\es}} u_{2,\es} \nabla p_{2,\es} \cdot \nabla L_{\es}(w)}_{\mathcal{H}_2}.
\end{split}
\end{equation}
We treat the two terms separately.
Since we want to use the weak convergence of $\nabla\mathcal{P}_\es(p_\es)$ in $L^2(0,T;L^2(\Omega\setminus \tilde\Gamma_{1,3}))$ (together with the strong convergence of $\mathcal{P}_\es(u_\es)$ in $L^2(0,T;L^2(\Omega\setminus \tilde\Gamma_{1,3}))$), we need to write the term $\mathcal{H}_1$ as an integral over $\Omega$. To this end, let $\overline{\mu}_\es:=\overline{\mu}_\es(x)$ be a function defined as follows
\begin{equation*}
\overline{\mu}_\es (x) :=
\begin{dcases}
\mu_{1,\es} \qquad &\text{ for } x \in \Omega_{1,\es},\\[0.2em]
0 \qquad &\text{ for } x \in \Omega_{2,\es},\\[0.2em]
\mu_{3,\es} \qquad &\text{ for } x \in \Omega_{3,\es}.
\end{dcases}
\end{equation*}
Then, we can write
\begin{equation*}
\mathcal{H}_1 = \int_0^T \int_{\Omega} \overline{\mu}_\es \mathcal{P}_\es (u_\es) \nabla \mathcal{P}_\es(p_\es) \cdot \nabla w .
\end{equation*}
Let us notice that as $\es$ goes to $0$, $\overline{\mu}_\es$ converges to $\tilde\mu_1$ in $\tilde\Omega_1$ and $\tilde\mu_3$ in $\tilde\Omega_3$.
Therefore, by Lemma~\ref{lem: convergence P}, we infer
\begin{equation}\label{eq: conv divergence term}
\mathcal{H}_1 \longrightarrow \tilde\mu_1 \int_0^T\int_{\tilde\Omega_1} \ \tilde{u} \ \nabla \tilde{p} \cdot \nabla w + \tilde\mu_3 \int_0^T\int_{\tilde\Omega_3} \ \tilde{u} \ \nabla \tilde{p} \cdot \nabla w,\quad \text{as} \quad \es\to 0.
\end{equation}
Now we treat the term $\mathcal{H}_2$, which can be written as
\begin{align*}
\mathcal{H}_2 =& \mu_{2,\es}\int_0^T \int_{\Omega_{2,\es}} u_{2,\es} \nabla p_{2,\es} \cdot \nabla L_{\es}(w)\\
=& \mu_{2,\es}\int_0^T\int_{\Omega_{2,\es}} \prt*{u_{2,\es} \frac{\partial p_{2,\es}}{\partial x_1} \frac{\partial L_\es(w)}{\partial x_1} + u_{2,\es} \frac{\partial p_{2,\es}}{\partial x_2} \frac{\partial L_\es(w)}{\partial x_2}} + \mu_{2,\es}\int_0^T\int_{\Omega_{2,\es}} u_{2,\es} \frac{\partial p_{2,\es}}{\partial x_3} \frac{\partial L_\es(w)}{\partial x_3}.
\end{align*}
By the Cauchy-Schwarz inequality, the a priori estimate~\eqref{eq:bound-nablap}, and Equation~\eqref{eq: bound dxL}, we have
\begin{align*}
\mu_{2,\es}\int_0^T\int_{\Omega_{2,\es}}& u_{2,\es} \frac{\partial p_{2,\es}}{\partial x_1} \frac{\partial L_\es(w)}{\partial x_1} + u_{2,\es} \frac{\partial p_{2,\es}}{\partial x_2} \frac{\partial L_\es(w)}{\partial x_2} \\[0.3em]
&\leq \mu_{2,\es}^{1/2} \|u_{2,\es}\|_{L^\infty((0,T)\times\Omega_{2,\es})} \prt*{\left\|\mu_{2,\es}^{1/2} \frac{\partial p_{2,\es}}{\partial x_1}\right\|_{L^2((0,T)\times\Omega_{2,\es})} \left\|\frac{\partial L_\es(w)}{\partial x_1} \right\|_{L^2((0,T)\times\Omega_{2,\es})} }\\[0.3em]
&\quad +\mu_{2,\es}^{1/2} \|u_{2,\es}\|_{L^\infty((0,T)\times\Omega_{2,\es})} \prt*{\left\|\mu_{2,\es}^{1/2} \frac{\partial p_{2,\es}}{\partial x_2}\right\|_{L^2((0,T)\times\Omega_{2,\es})} \left\|\frac{\partial L_\es(w)}{\partial x_2} \right\|_{L^2((0,T)\times\Omega_{2,\es})} }\\[0.3em]
&\leq C\ \mu_{2,\es}^{1/2} \ \es^{1/2} \rightarrow 0.
\end{align*}
On the other hand, by Fubini's theorem, the following equality holds
\begin{equation*}
\begin{split}
\mu_{2,\es}&\int_0^T\int_{\Omega_{2,\es}} u_{2,\es} \frac{\partial p_{2,\es}}{\partial x_3} \frac{\partial L_\es(w)}{\partial x_3} \\[0.3em]
&= \mu_{2,\es}\frac{\gamma}{\gamma+1}\int_0^T\int_{\Omega_{2,\es}} \frac{\partial u_{2,\es}^{\gamma+1}}{\partial x_3} \frac{\partial L_\es(w)}{\partial x_3} \\[0.3em]
&= \mu_{2,\es}\frac{\gamma}{\gamma+1}\int_0^T \int_{-\es/2}^{\es/2}\int_{\tilde\Gamma_{1,3}} \frac{\partial u_{2,\es}^{\gamma+1}}{\partial x_3} \frac{\partial L_\es(w)}{\partial x_3} \dx{\sigma} \dx{x_3}\\[0.3em]
&= \mu_{2,\es}\frac{\gamma}{\gamma+1}\int_0^T\int_{-\es/2}^{\es/2}\int_{\tilde\Gamma_{1,3}} \frac{\partial u_{2,\es}^{\gamma+1}}{\partial x_3} \frac{w_{|x_3=\frac \es 2}- w_{|x_3=-\frac \es 2}}{\es} \dx{\sigma} \dx{x_3}\\[0.3em]
&=\frac{\mu_{2,\es}}{\es}\frac{\gamma}{\gamma+1}\int_0^T \int_{\tilde\Gamma_{1,3}} \prt*{w_{|x_3=\frac \es 2}- w_{|x_3=-\frac \es 2}} \int_{-\es/2}^{\es/2} \frac{\partial u_{2,\es}^{\gamma+1}}{\partial x_3} \dx{x_3} \dx{\sigma} \\[0.3em]
&= \frac{\mu_{2,\es}}{\es}\frac{\gamma}{\gamma+1}\int_0^T\int_{\tilde\Gamma_{1,3}} \prt*{(u_{2,\es}^{\gamma+1})_{|x_3=\frac \es 2}- (u_{2,\es}^{\gamma+1})_{|x_3=-\frac \es 2}}\cdot \prt*{w_{|x_3=\frac \es 2}- w_{|x_3=-\frac \es 2}}.
\end{split}
\end{equation*}
Therefore,
\begin{equation}\label{eq: fund eq}
\lim_{\es\to 0} \mathcal{H}_2 = \lim_{\es\to 0} \frac{\mu_{2,\es}}{\es}\frac{\gamma}{\gamma+1}\int_0^T\int_{\tilde\Gamma_{1,3}} \prt*{(u_{2,\es}^{\gamma+1})_{|x_3=\frac \es 2}- (u_{2,\es}^{\gamma+1})_{|x_3=-\frac \es 2}}\cdot \prt*{w_{|x_3=\frac \es 2}- w_{|x_3=-\frac \es 2}}.
\end{equation}
In order to conclude the proof, we state the following lemma, which is proven \hyperlink{proof lemma: intermediate}{below}.
\begin{lemma}\label{lemma: intermediate}
The following limit holds uniformly in $\tilde\Gamma_{1,3}$
\begin{equation}\label{membrtest}
w_{|x_3=\frac \es 2}- w_{|x_3=-\frac \es 2} \longrightarrow w_{|x_3=0^+}- w_{|x_3=0^-}, \quad\mbox{ as } \es \rightarrow 0.
\end{equation}
Moreover,
\begin{equation}\label{eq: conv u gamma+1}
\frac{\gamma}{\gamma+1} \prt*{ (u_{2,\es}^{\gamma+1})_{|x_3=\frac \es 2}- (u_{2,\es}^{\gamma+1})_{|x_3=-\frac \es 2}} \longrightarrow \frac{\gamma}{\gamma+1} \prt*{(\tilde{u}^{\gamma+1})_{|x_3=0^+}- (\tilde{u}^{\gamma+1})_{|x_3=0^-}},
\end{equation}
strongly in $L^2(0,T; L^2(\tilde\Gamma_{1,3}))$, as $\es \rightarrow 0$.
\end{lemma}
We may finally find the limit of the term $\mathcal{H}_2$, using Assumption~\eqref{eq:cond-mu}, and applying Lemma~\ref{lemma: intermediate} to Equation~\eqref{eq: fund eq}
\begin{align*}
&\frac{\mu_{2,\es}}{\es}\frac{\gamma}{\gamma+1}\int_0^T\int_{\tilde\Gamma_{1,3}} \prt*{(u_{2,\es}^{\gamma+1})_{|x_3=\frac \es 2}- (u_{2,\es}^{\gamma+1})_{|x_3=-\frac \es 2}}\cdot \prt*{w_{|x_3=\frac \es 2}- w_{|x_3=-\frac \es 2}}\\
\longrightarrow \; & \tilde\mu_{1,3}\frac{\gamma}{\gamma+1}\int_0^T\int_{\tilde\Gamma_{1,3}} \prt*{ \left(\tilde{u}^{\gamma+1}\right)_{|x_3=0^+}- \left(\tilde{u}^{\gamma+1}\right)_{|x_3=0^-}}\cdot \prt*{w_{|x_3=0^+}- w_{|x_3=0^-}},
\end{align*}
as $\es\to 0$.
Combining the above convergence with Equation~\eqref{eq: H1 H2} and Equation~\eqref{eq: conv divergence term}, we find the limit of the divergence term as $\es$ goes to $0$,
\begin{align*}
\sum_{i=1}^3 \mu_{i,\es} &\int_0^T \int_{\Omega_{i,\es}} u_{i,\es} \nabla p_{i,\es} \cdot \nabla L_{\es}(w) \\
\longrightarrow \;& \tilde\mu_1 \int_0^T\int_{\tilde\Omega_1} \ \tilde{u} \nabla \tilde{p} \cdot \nabla w + \tilde\mu_3 \int_0^T\int_{\tilde\Omega_3} \tilde{u} \nabla \tilde{p} \cdot \nabla w\\
&+ \tilde\mu_{1,3}\frac{\gamma}{\gamma+1}\int_0^T\int_{\tilde\Gamma_{1,3}} \prt*{(\tilde{u}^{\gamma+1})_{|x_3=0^+}- (\tilde{u}^{\gamma+1})_{|x_3=0^-}}\cdot \prt*{w_{|x_3=0^+}- w_{|x_3=0^-}},
\end{align*}
which, together with Equations~\eqref{eq: test L}, \eqref{eq: conv of dt term}, \eqref{eq: conv of G term}, and \eqref{eq: conv term initial}, concludes the proof.
\end{proof}
We now turn to the \hypertarget{proof lemma: intermediate}{proof of Lemma~\ref{lemma: intermediate}}.
\begin{proof}[Proof of Lemma~\ref{lemma: intermediate}]
\begin{sloppypar}
Since by definition $w(t,x)= \varphi(t) v(x) $, with $\varphi\in C^1([0,T))$ and $v\in E^*$, the uniform convergence in Equation~\eqref{membrtest} comes from the piece-wise differentiability of $w$.
The second convergence, \emph{i.e.}\; Equation~\eqref{eq: conv u gamma+1}, is slightly more delicate. We recall that on ${ \{x_3= \pm \es /2 \}}$, ${u^{\gamma+1}_{2,\es}}$ coincides with ${\mathcal{P}_\es(u_{\es}^{\gamma+1})}$, since
across the interfaces $u_\es$ is continuous and ${\mathcal{P}_\es(u_{i,\es})=u_{i,\es}}$, for $i=1,3$.
\end{sloppypar}
Let us recall that from Remark~\ref{rmk: a priori P}, we have
$$ \left\|\mathcal{P}_\es(u_\es^{\gamma+1})\right\|_{L^2(0,T; H^1(\Omega\setminus\tilde\Gamma_{1,3}))} \leq C, \ \text{ and } \
\left\|\partial_t\prt*{ \mathcal{P}_\es(u_\es^{\gamma+1})}\right\|_{L^\infty(0,T;L^1(\Omega\setminus\tilde\Gamma_{1,3}))}\leq C. $$
\noindent
Since we have the following embeddings
\begin{equation*}
H^1(\Omega\setminus\tilde\Gamma_{1,3}) \subset\subset H^\beta(\Omega\setminus\tilde\Gamma_{1,3})\subset L^1(\Omega\setminus\tilde\Gamma_{1,3}),
\end{equation*}
for every $\frac 12 < \beta < 1$, upon applying Aubin-Lions lemma, \cite{aubin, lions}, we obtain
\begin{equation*}
\mathcal{P}_\es(u_\es^{\gamma +1}) \longrightarrow \tilde{u}^{\gamma +1}, \quad \text{ as } \es \to 0,
\end{equation*}
strongly in $L^2(0,T; H^\beta (\Omega\setminus\tilde\Gamma_{1,3}))$.
Thanks to the continuity of the trace operators $\mathcal{T}_\alpha: H^\beta(\tilde \Omega_\alpha \setminus\tilde\Gamma_{1,3}) \rightarrow L^2(\partial \tilde \Omega_\alpha)$,
for $\frac 12 < \beta < 1$ and $\alpha =1,3$, we finally recover that
\begin{equation}\label{eq: trace convergence}
\left\|\mathcal{P}_\es(u_\es^{\gamma+1})_{|x_3= 0^\pm}- \left(\tilde{u}^{\gamma+1}\right)_{|x_3=0^\pm}\right\|_{L^2(0,T; L^2(\tilde\Gamma_{1,3}))} \le C \left\| \mathcal{P}_\es(u_\es^{\gamma+1})- \tilde{u}^{\gamma+1}\right\|_{L^2(0,T; H^\beta(\Omega\setminus\tilde\Gamma_{1,3}))}\rightarrow 0,
\end{equation}
as $\es \rightarrow 0$.
We recall that the trace vanishes on the external boundary $\partial \Omega$; therefore we only consider the $L^2(0,T; L^2(\tilde\Gamma_{1,3}))$-norm.
Recalling that $L$ is the length of $\Omega$, we find the following estimate
\begin{align*}
\big\|\mathcal{P}_\es(u_{\es}^{\gamma+1})_{|x_3=\pm \es/2} - \mathcal{P}_{\es}(u_{\es}^{\gamma+1})_{|x_3=0^\pm}& \big\|^2_{L^2(0,T; L^2(\tilde\Gamma_{1,3}))}\\
&= \int_0^T\int_{\tilde\Gamma_{1,3}} \prt*{\int_{0}^{\pm\es/2} \frac{\partial \mathcal{P}_{\es}(u_{\es}^{\gamma+1})}{\partial x_3} }^2 \\[0.3em]
& = \int_0^T \int_{\tilde \Gamma_{1,3}} \left( \int_L \frac{\partial \mathcal{P}_{\es}(u_{\es}^{\gamma+1})}{\partial x_3} \mathds{1}_{[0,\pm\es/2]}(x_3) \right)^2 \\[0.3em]
&\le \int_0^T \int_{\tilde \Gamma_{1,3}} \left( \int_L \left( \frac{\partial \mathcal{P}_{\es}(u_{\es}^{\gamma+1})}{\partial x_3}\right)^2 \int_L \left(\mathds{1}_{[0,\pm\es/2]}(x_3) \right)^2 \right)\\[0.3em]
&\leq \frac\es 2 \|\nabla \mathcal{P}_\es(u_{\es}^{\gamma+1})\|^2_{L^2(0,T;L^2(\Omega\setminus\tilde\Gamma_{1,3}))}\\[0.3em]
&\leq \es \ C,
\end{align*}
and combining it with Equation~\eqref{eq: trace convergence}, we finally obtain Equation~\eqref{eq: conv u gamma+1}.
\end{proof}
\begin{remark}
Although not relevant from a biological point of view, let us point out that, in dimensions greater than 3, the analysis goes through without major changes. It is clear that the \textit{a priori} estimates are not affected by the shape or the dimension of the domain (although some uniform constants $C$ may depend on the dimension, this does not change the result in Lemma~\ref{lemma: a priori}). The subsequent methods, and in particular the definition of the extension operator and the functional space of test functions, clearly depend on the dimension, but the strategy is analogous for a $d$-dimensional cylinder with axis $\{x_1=\dots=x_{d-1}=0\}$.
\end{remark}
\begin{remark}
We did not consider the case of non-constant mobilities, \emph{i.e.}\; $\mu_{i,\es}:=\mu_{i,\es}(x)$, but continuity and boundedness are the minimal hypotheses needed for the proof to go through.
\end{remark}
\section{Conclusions and perspectives}\label{sec: conclusion}
We proved the convergence of a continuous model of cell invasion through a membrane when its thickness converges to zero, hence giving a rigorous derivation of the effective transmission conditions already conjectured in Chaplain \textit{et al.}, \cite{giverso}. Our strategy relies on the methods developed in \cite{sanchez-palencia}, although we had to handle the difficulties coming from the nonlinearity and degeneracy of the system. A very interesting direction, both from the biological and the mathematical point of view, could be coupling the system with an equation describing the evolution of the MMP concentration. In fact, as observed in \cite{giverso}, the permeability coefficient can depend on the local concentration of MMPs, since it indicates the level of ``aggressiveness'' at which the tumour is able to destroy the membrane and invade the tissue.
In a recent work~\cite{giverso2}, a formal derivation of the multi-species effective problem has been proposed. However, its rigorous proof remains an interesting and challenging open question. Indeed, introducing multiple species of cells, hence dealing with a cross-(nonlinear)-diffusion system, adds several challenges to the problem. As is well known, proving the existence of solutions to cross-diffusion systems with different mobilities is one of the most challenging and still open questions in the field. Nevertheless, even when dealing with the same constant mobility coefficients, the nature of the multi-species system (at least in dimension greater than one) usually requires strong compactness of the pressure gradient. We refer the reader to \cite{GPS, price2020global} for existence results on the two-species model without membrane conditions.
Another direction of further investigation of the effective transmission problem \eqref{effectivepb} could be the study of the so-called \textit{incompressible limit}, namely the limit of the system as $\gamma \rightarrow \infty$. The study of this limit has a long history in applications to tumour growth models, and has attracted a lot of interest since it links density-based models to a geometrical (or free boundary) representation, \emph{cf.}\; \cite{PQV, KP17}.
Moreover, including the heterogeneity of the membrane in the model could not only improve the biological relevance of the model, but could also bring interesting mathematical challenges, forcing us to develop new methods or to adapt existing ones, \cite{neuss-radu}, from the parabolic to the degenerate case.
\section{Existence of weak solution of the initial problem}\label{appendix:existence}
We prove in this appendix the existence of solutions of System~\eqref{epspb}.
Similarly to diffraction problems modelled by linear parabolic equations (see Section 3.13 in~\cite{ladyvzenskaja1988linear}), this result follows from the existence of solutions of the Porous Medium Equation with discontinuous coefficients. Indeed, using a test function $w \in C^\infty(\Omega_T)$, solutions of the following weak formulation
\[
\int_\Omega \partial_t u\, w + \mu(x)\, u \nabla u^\gamma \cdot \nabla w \,\dd x= \int_\Omega u G(p) w \, \dd x,
\]
are actually solutions of the strong form~\eqref{epspb}. This is obtained from the fact that the interfaces $\Gamma_{i,i+1}$ (for $i=1,2$) are continuous and from the interface conditions.
Even though the proof of the existence of weak solutions follows the lines of Section 5.4 in~\cite{vasquez}, we could not find a proof of this result in the case of discontinuous mobility coefficients in the literature; hence, for the sake of clarity, we give in this appendix the idea of the proof.
\begin{thm}[Existence of weak solutions for the initial problem]
Assuming that $\mu_i >0$ for $i=1,2,3$, System~\eqref{epspb} admits a weak solution $u \in L^1(\Omega_T)$ and $p \in L^1(0,T;H^1_0(\Omega))$.
\end{thm}
\begin{proof}
\textit{Step 1: Regularized problem.} We first regularize the model to convert it into a non-degenerate parabolic model. We use a positive parameter $n$ and define a positive initial condition
\begin{equation}
u_{0n} = u_{0} + \frac{1}{n}.
\end{equation}
Our regularized problem reads
\begin{equation}\label{epspb-reg}
\left\{
\begin{array}{rlll}
&\partial_t u_{i,n} - \mu_{i} \nabla \cdot ( u_{i,n} \nabla p_{i,n}) = u_{i,n} G(p_{i,n}) & \text{ in } (0,T)\times\Omega_{i}, & i=1,2,3,\\[1em]
&\mu_{i} u_{i,n} \nabla p_{i,n}\cdot \boldsymbol{n}_{i,i+1} = \mu_{i+1} u_{i+1,n} \nabla p_{i+1,n}\cdot \boldsymbol{n}_{i,i+1} &\text{ on } (0,T)\times\Gamma_{i,i+1}, & i = 1,2,\\[1em]
&u_{i,n} = u_{i+1,n} &\text{ on } (0,T) \times\Gamma_{i,i+1}, & i = 1,2,\\[1em]
&u_{i,n}=\frac{1}{n} &\text{ on } (0,T) \times\partial\Omega.
\end{array}
\right.
\end{equation}
From results on diffraction problems in~\cite{ladyvzenskaja1988linear}, we know that in weak form our regularized problem is simply a quasi-linear parabolic PDE. Thus, from standard results on such equations, we obtain the existence of a classical solution $u_n \in C^{1,2}(\Omega_T)$ of Problem~\eqref{epspb-reg}. From this point on, the rest of the proof follows Section 5.4 in~\cite{vasquez}.
We obtain at the end the existence of weak solutions $u \in L^1(\Omega_T)$ and $p \in L^1(0,T;H^1_0(\Omega))$ of Problem~\eqref{epspb}.
\end{proof}
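As a purely illustrative complement to Step 1 (not needed for the proof), the regularization $u_{0n}=u_0+\frac{1}{n}$ can be visualized numerically. The following minimal one-dimensional finite-difference sketch is our own simplification: the domain, the growth law $G$, the mobilities and all discretization parameters are hypothetical choices, and no claim of accuracy or stability analysis is made.
\begin{verbatim}
import numpy as np

# Hypothetical 1D illustration of the regularized problem: three subdomains
# with discontinuous mobility, p = u^gamma, Dirichlet value 1/n on the boundary.
N, gamma, n_reg = 100, 3.0, 50
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
mu = np.where(x < 1/3, 1.0, np.where(x < 2/3, 0.2, 1.0))  # discontinuous mobility
G = lambda p: 1.0 - p                                      # assumed growth law
u = np.exp(-50 * (x - 0.5) ** 2) + 1.0 / n_reg             # regularized initial datum
dt = 5e-6
for _ in range(4000):
    p = u ** gamma
    a_face = 0.5 * (mu[:-1] * u[:-1] + mu[1:] * u[1:])     # mu*u at cell interfaces
    flux = -a_face * (p[1:] - p[:-1]) / dx                 # -mu u d_x p
    u[1:-1] -= dt * (flux[1:] - flux[:-1]) / dx            # conservative update
    u += dt * u * G(p)                                     # reaction term
    u[0] = u[-1] = 1.0 / n_reg                             # boundary value 1/n
\end{verbatim}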
}
\end{document} |
\begin{document}
\date{}
\title{Wave breaking in the unidirectional non-local wave model}
\author{Shaojie Yang\thanks{Corresponding author:
[email protected] (Shaojie Yang); [email protected] (Jian Chen)},~~Jian Chen\\~\\
\small ~ Department of Systems Science and Applied Mathematics, \\
\small~Kunming University of Science and Technology, \\
\small ~Kunming, Yunnan 650500, China}
\date{}
\maketitle
\begin{abstract}
In this paper we study wave breaking in the unidirectional non-local wave model describing the motion of a collision-free plasma in a magnetic field. By analyzing the monotonicity and continuity properties of a system of Riccati-type differential inequalities involving the extremal slopes of the flow, we obtain a new sufficient condition on the initial data for wave breaking to occur. Moreover, estimates of the life span and of the wave breaking rate are derived. \\
\noindent\emph{Keywords}: Unidirectional non-local wave model; Wave breaking; Collision-free plasma
\end{abstract}
\noindent\rule{15.5cm}{0.5pt}
\section{Introduction}\label{sec1}
The motion of a cold plasma consisting of singly-charged particles in a magnetic field can be
described by \cite{r1,r2}
\begin{align}
\label{eq1} n_t+(un)_x=0,\\
\label{eq2} u_t+uu_x+\frac{bb_x}{n}=0,\\
\label{eq3} b-n-\left( \frac{b_x}{n} \right)_x=0,
\end{align}
where $n=n(t,x), u=u(t,x)$ and $b=b(t,x)$ stand for the ionic density, the ionic velocity and the magnetic field, respectively.
System \eqref{eq1}-\eqref{eq3} can also be applied to describe the motion of a collision-free two-fluid model under the assumptions that the electron inertia, the displacement current and the charge separation are neglected and that the Poisson equation \eqref{eq3} is satisfied initially \cite{r3,r4}.
Recently, D.~Alonso-Or\'{a}n, A.~Dur\'{a}n and R.~Granero-Belinch\'{o}n \cite{r23} derived a unidirectional asymptotic model describing the motion of a collision-free plasma in a magnetic field given by system \eqref{eq1}-\eqref{eq3}.
The method to derive the unidirectional asymptotic model relies on a multi-scale expansion (see \cite{r5,r6,r7,r8,r9,r10,r11}) which reduces the full system \eqref{eq1}-\eqref{eq3} to a cascade of linear equations which can be closed up to some order of precision. More precisely, let
\begin{align*}
n=1+N,~~~U=u,~~~b=1+B,
\end{align*}
and introduce the following formal expansions
\begin{align}\label{eq4}
N=\sum_{l=0}^\infty\varepsilon^{l+1}N^{(l)},~~~B=\sum_{l=0}^\infty\varepsilon^{l+1}B^{(l)},~~~U=\sum_{l=0}^\infty\varepsilon^{l+1}U^{(l)}
\end{align}
Then the extra assumption $U^{(0)}=N^{(0)}$ in \eqref{eq4} leads to the following single bidirectional non-local wave equation
\begin{align}\label{eq5}
h_{tt}+\mathcal{L}h= (hh_x+[ \mathcal{L}, \mathcal{N}h]h)_x-2(hh_t)_x.
\end{align}
The formal reduction of \eqref{eq5} to the corresponding unidirectional version, cf.\cite{r12}, yields
\begin{align}\label{eq6}
h_t+\frac{3}{2}hh_x=\frac{1}{2}( [ \mathcal{L}, \mathcal{N}h]h+ \mathcal{N}h+h_x ),
\end{align}
where
\begin{align*}
\mathcal{L}=-\partial_x^2(1-\partial_x^2)^{-1},~~~~\mathcal{N}=\partial_x(1-\partial_x^2)^{-1}
\end{align*}
and $[\mathcal{L}, \cdot ]\cdot$ denotes the following commutator
\begin{align*}
[\mathcal{L}, f ]g=\mathcal{L}(fg)-f \mathcal{L}g.
\end{align*}
Denote by $L$ the operator $(1-\partial_x^2)^{-1}$, which, acting on functions $f\in L^2(\mathbb{R})$, has the representation
\begin{align*}
Lf(x)=G*f(x)=\int_{\mathbb{R}}G(x-y)f(y)dy,~~~~G(x)=\frac{1}{2}\mathrm{e}^{-|x|}.
\end{align*}
By a simple computation, for all $f\in L^2(\mathbb{R})$, we have
\begin{align*}
\partial_x^2Lf(x)=(L-I)f(x),
\end{align*}
where $I$ denotes the identity operator. Noting that $-\mathcal{L}=L-I$ and that $\mathcal{N}h=Lh_x$, Eq.\eqref{eq6} can be rewritten as
\begin{align}\label{eq7}
h_t+\frac{3}{2}hh_x=\frac{1}{2}Lh_x+\frac{1}{2}h_x-\frac{1}{2}[L, Lh_x]h.
\end{align}
Eq.\eqref{eq7} possesses the following conservation properties
\begin{align*}
\int_{\mathbb{R}}h(t,x)dx=\int_{\mathbb{R}}h_0(x)dx
\end{align*}
and
\begin{align*}
\int_{\mathbb{R}}h^2(t,x)dx=\int_{\mathbb{R}}h^2_0(x)dx.
\end{align*}
Note that the unidirectional non-local wave model \eqref{eq7} is very similar to the celebrated Fornberg-Whitham equation
\begin{align}\label{eq8}
u_t+\frac{3}{2}uu_x=Lu_x,
\end{align}
which was derived by Whitham \cite{r13} and by Fornberg and Whitham \cite{r14} as a model for shallow water waves describing breaking waves. Recently, several properties of the Fornberg-Whitham (FW) equation \eqref{eq8}, such as well-posedness, continuity, traveling waves and blow-up, have been studied in \cite{r15,r16,r17,r18,r19,r20,r21,r22}. The main difference between the unidirectional non-local wave model and the FW equation is the presence of the nonlocal commutator-type term in \eqref{eq7}.
In this paper, we study wave breaking in the unidirectional non-local wave model \eqref{eq7}.
By analyzing wave breaking conditions on the extremal functions
\begin{align*}
M(t):=\sup\limits_{x\in\mathbb{R}}[h_x(t,x)],
\end{align*}
and
\begin{align*}
m(t):=\inf\limits_{x\in\mathbb{R}}[h_x(t,x)],
\end{align*}
where both $M(t)$ and $m(t)$ satisfy the Riccati-type differential inequalities \eqref{ed4} and \eqref{ed5}, we show a new sufficient condition on the initial data for wave breaking, which extends the wave breaking results of Ref.\cite{r23}. Our wave breaking condition (see Theorem \ref{th.1}) is that
\begin{align}\label{A}
\inf_{x\in\mathbb{R} }h_{0,x}(x)<\min\left\{-\frac{1}{6}\left(1+\sqrt{1+24C_0}\right),-\frac{1}{12}\left(1+\sqrt{1+24\left(\sup_{x\in\mathbb{R} }h_{0,x}(x)+4C_0\right)}\right)\right\},
\end{align}
where $C_0$ is some positive constant depending on $\|h_0\|_{L^2}$. The wave breaking condition in \cite{r23} is that
\begin{align}\label{B}
\inf_{x\in\mathbb{R} }h_{0,x}(x)\leq -C_1,
\end{align}
where $C_1$ is some positive constant depending on $\|h_0\|_{L^2}$ and $\|h_0\|_{L^\infty}$.
Comparing the wave breaking conditions \eqref{A} and \eqref{B}, it is clear that the wave breaking condition \eqref{B} in \cite{r23} follows from our wave breaking condition \eqref{A}. Moreover, for the first time, we obtain estimates of the life span and of the wave breaking rate for the unidirectional non-local wave model \eqref{eq7}.
The remainder of this paper is organized as follows. In Section \ref{sec2}, we recall several useful results which are crucial in deriving wave breaking. In Section \ref{sec3}, we present a new wave breaking result and wave breaking rate.
\section{Preliminaries}\label{sec2}
In this section, we recall several useful results which are crucial in deriving wave breaking.
First of all, we recall the local well-posedness for Eq.\eqref{eq7}.
\begin{lemma}[\cite{r23}]\label{L1}
Let $h_0\in H^s(\mathbb{R})$ with $s>3/2$, then there exists maximal existence time $T>0$ and a unique local solution $h\in C([0, T), H^s(\mathbb{R}))$ of \eqref{eq7}.
\end{lemma}
The blow-up criteria for Eq.\eqref{eq7} can be formulated as follows.
\begin{lemma}[\cite{r23}]
Let $h_0\in H^s(\mathbb{R})$ with $s>3/2$, and let maximal existence time $T>0$ be the lifespan associated to the solution $h$ to \eqref{eq7} with $h(0,x)=h_0(x)$. Then $h(t,x)$ blows up in finite time $T$ if and only if
\begin{align*}
\int_0^{T}\|h_x(\tau)\|_{BMO}d\tau=\infty.
\end{align*}
\end{lemma}
Next, the following lemma shows that, up to the maximal time of existence $T> 0$, the solution remains bounded.
\begin{lemma}[\cite{r23}]\label{L3}
Let $h_0\in H^s(\mathbb{R})$ with $s>3/2$, and let $T>0$ be the maximal time of existence of the unique solution $h$ of \eqref{eq7} given by Lemma \ref{L1}. Then
\begin{align*}
\sup_{t\in [0,T) }\|h(t)\|_{L^\infty(\mathbb{R})}<\infty.
\end{align*}
\end{lemma}
Finally, the following classical lemma is key to construct a system of the Riccati-type differential inequalities involving the extremal slopes of flows.
\begin{lemma}[\cite{r24}]\label{L4}
Let $T>0$ and $h\in C^1([0,T); H^2(\mathbb{R}))$. Then for every $t\in [0,T)$ there exists at least one point $\xi(t)\in \mathbb{R}$ with
\begin{equation*}
m(t):=\inf_{x\in\mathbb{R}}[h_x(t,x)]=h_x(t,\xi(t)),
\end{equation*}
and the function $m$ is almost everywhere differentiable on $(0,T)$ with
\begin{equation*}
\frac{dm(t)}{dt}=h_{tx}(t,\xi(t)),~\text{a.e.~on}~(0,T).
\end{equation*}
\end{lemma}
\noindent {\bf Remark.~}The same statement clearly holds for the supremum function $M(t):=\sup\limits_{x\in\mathbb{R}}[h_x(t,x)]$.
\section{Wave breaking}\label{sec3}
In this section, we present a new wave breaking result and wave breaking rate. The main result is as follows.
\begin{theorem}\label{th.1}
Let $h_0\in H^s(\mathbb{R})$ with $s>3/2$, and the maximal existence time $T>0$ be the lifespan associated to the solution $h$ to \eqref{eq7} with $h(0,x)=h_0(x)$. Assume
that the initial data $m_0=m(0)$ and $M_0=M(0)$ satisfy
\begin{equation}\label{b.1}
\inf_{x\in\mathbb{R} }h_{0,x}(x)<\min\left\{-\frac{1}{6}\left(1+\sqrt{1+24C_0}\right),-\frac{1}{12}\left(1+\sqrt{1+24\left(\sup_{x\in\mathbb{R} }h_{0,x}(x)+4C_0\right)}\right)\right\},
\end{equation}
where $C_0=C\|h_0\|_{L^2}^{2}>0$. Then the corresponding solution $h$ breaks down in the finite time $T$ with
\begin{equation*}
0<T\leq t^*=\frac{2\sqrt3\big(3m_0^2+\frac{1}{2}m_0-2C_0\big)}{3(3m_0^2+m_0-2C_0)\sqrt{3m_0^2-\frac{1}{2}M_0+\frac{1}{2}m_0-2C_0}}.
\end{equation*}
Moreover, for some $x(t)\in\mathbb{R}$, the blow-up rate is
\begin{equation}\label{d.1}
h_x(t,x(t))\sim-\frac{2}{3(T-t)}~~~~as~~t\rightarrow T.
\end{equation}
\end{theorem}
\noindent {\bf Remark.~}The wave-breaking time $t^*$ in Theorem \ref{th.1} could be optimized by a more delicate estimate, obtained by replacing
$m^2(t)=-\frac{2}{3}\varphi(t)+\frac{1}{6}(M-m)+\frac{2}{3}C_0\geq-\frac{2}{3}\varphi(t)$ with $m^2(t)\geq-\frac{2}{3}\varphi(t)-\frac{1}{6}m_0+\frac{2}{3}C_0$.
By inspecting the proof of Theorem \ref{th.1}, we are then able to obtain a better upper bound on the wave-breaking time, namely
\begin{equation*}
t^*=\frac{\sqrt{3}\big(3m_0^2+\frac{1}{2}m_0-2C_0\big)}{3\sqrt{2C_0-\frac{1}{2}m_0}(3m_0^2+m_0-2C_0)}\ln\frac{\sqrt{6m_0^2-M_0}+\sqrt{4C_0-m_0}}{\sqrt{6m_0^2-M_0}-\sqrt{4C_0-m_0}}.
\end{equation*}
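For a concrete feel of these two upper bounds, they can be evaluated numerically. The following snippet is our own illustration and is not part of the analysis: the values of $C_0$, $M_0$ and $m_0$ are hypothetical, chosen only so that the wave breaking condition \eqref{b.1} holds.
\begin{verbatim}
import numpy as np

# Hypothetical data: C0, M0, m0 chosen so that condition (b.1) is satisfied.
C0, M0, m0 = 1.0, 1.0, -3.0
thr = min(-(1 + np.sqrt(1 + 24 * C0)) / 6,
          -(1 + np.sqrt(1 + 24 * (M0 + 4 * C0))) / 12)
assert m0 < thr                       # the wave breaking condition holds

A = 3 * m0**2 + 0.5 * m0 - 2 * C0     # 3 m0^2 + m0/2 - 2 C0
B = 3 * m0**2 + m0 - 2 * C0           # 3 m0^2 + m0   - 2 C0
t1 = 2 * np.sqrt(3) * A / (3 * B * np.sqrt(3 * m0**2 - 0.5 * M0 + 0.5 * m0 - 2 * C0))
t2 = (np.sqrt(3) * A / (3 * np.sqrt(2 * C0 - 0.5 * m0) * B)
      * np.log((np.sqrt(6 * m0**2 - M0) + np.sqrt(4 * C0 - m0))
               / (np.sqrt(6 * m0**2 - M0) - np.sqrt(4 * C0 - m0))))
print(t1, t2)   # for these values the second (logarithmic) bound is the smaller one
\end{verbatim}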
\begin{proof}
Let
\begin{align*}
M(t)=\sup_{x\in\mathbb{R}} h_x(t,x)=h_x(t,\xi_1(t))
\end{align*}
and
\begin{align*}
m(t)=\inf_{x\in\mathbb{R}} h_x(t,x)=h_x(t,\xi_2(t)).
\end{align*}
Differentiating \eqref{eq7} with respect to $x$ yields
\begin{align}\label{ed1}
h_{tx}=-\frac{1}{2}( 3h_x^2+3hh_{xx}+\partial_x [L, Lh_x]h+L_xh_x-h_{xx} ).
\end{align}
Note that the Sobolev embedding $H^{\frac{1}{2}+\varepsilon}(\mathbb{R})\hookrightarrow L^\infty(\mathbb{R}) $ for $ \varepsilon >0$, together with the fact that
$L$ is continuous from $H^s(\mathbb{R})$ to $H^{s+2}(\mathbb{R})$ for any $s\in \mathbb{R}$, yields
\begin{align}\label{ed2}
\nonumber \big| \partial_x [L, Lh_x]h \big|\leq& \|[L, Lh_x]h\|_{H^{\frac{1}{2}+\varepsilon}}\\
\nonumber \leq& \|Lh_xh\|_{H^{ -\frac{3}{2}+\varepsilon }}+\|(Lh)^2\|_{H^{\frac{3}{2}+\varepsilon }}\\
\nonumber\leq& C\|h\|_{L^2}^2\\
\leq& C\|h_0\|_{L^2}^2:=C_0
\end{align}
and by observing
\begin{align}\label{ed3}
L_xh_x=G_x*h_x=-\frac{1}{2}\int_{-\infty}^x\mathrm{e}^{y-x}h_xdy+\frac{1}{2}\int_x^{+\infty}\mathrm{e}^{x-y}h_xdy.
\end{align}
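Since the exponential kernels in \eqref{ed3} integrate to one, we record the elementary bound
\begin{align*}
L_xh_x\leq -\frac{1}{2}\,m(t)\int_{-\infty}^x\mathrm{e}^{y-x}dy+\frac{1}{2}\,M(t)\int_x^{+\infty}\mathrm{e}^{x-y}dy=\frac{1}{2}\big(M(t)-m(t)\big),
\end{align*}
and, in the same way, $L_xh_x\geq -\frac{1}{2}\big(M(t)-m(t)\big)$, so that $|L_xh_x|\leq \frac{1}{2}\big(M(t)-m(t)\big)$.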
Combining \eqref{ed2} with the above bound and evaluating \eqref{ed1} at the points where the extrema of $h_x$ are attained (so that the terms involving $h_{xx}$ vanish), it is easy to see that, for a.e. $t\in (0,T)$,
\begin{align}\label{ed4}
M'(t)\leq -\frac{3}{2}M^2+\frac{1}{4}(M-m)+C_0,
\end{align}
\begin{align}\label{ed5}
m'(t)\leq -\frac{3}{2}m^2+\frac{1}{4}(M-m)+C_0.
\end{align}
Now, we adapt the method of \cite{r25} to prove Theorem \ref{th.1}. Note that condition
\eqref{b.1}, together with \eqref{ed5}, gives
\begin{equation*}
m(0)<-\frac{1}{6}\left(1+\sqrt{1+24C_0}\right)~~\text{and}~~m^{\prime}(0)\leq-\frac{3}{2}m^{2}(0)+\frac{1}{4}\big(M(0)-m(0)\big)+C_0<0.
\end{equation*}
We claim that:
\begin{equation*}
m(t)<-\frac{1}{6}\left(1+\sqrt{1+24C_0}\right)~~\text{for~all}~t\in[0,T).
\end{equation*}
Otherwise, there exists $t_0<T$ such that
\begin{equation}\label{b.5}
m(t_0)=-\frac{1}{6}\left(1+\sqrt{1+24C_0}\right)~\text{and}~ m(t)<-\frac{1}{6}\left(1+\sqrt{1+24C_0}\right)~\text{for~all}~ t\in[0,t_0).
\end{equation}
Let
\begin{equation*}
\varphi(t)=-\frac{3}{2}m^{2}+\frac{1}{4}(M-m)+C_0,
\end{equation*}
then from \eqref{ed4} and \eqref{ed5}, a simple computation yields that, for~a.e.~$t\in[0,t_0)$
\begin{align*}
\varphi^{\prime}(t)&=-3m m^{\prime}+\frac{1}{4}(M^{\prime}-m^{\prime}) \\
&=-\frac{1}{4}(12m+1)m^{\prime}+\frac{1}{4}M^{\prime}\\
&\leq-\frac{1}{4}(12m+1)\left(-\frac{3}{2}m^{2}+\frac{1}{4}(M-m)+C_0\right)+\frac{1}{4}\left(-\frac{3}{2}M^{2}+\frac{1}{4}(M-m)+C_0\right)\\
&=~\frac{3}{2}m(3m^2+m-2C_0)-\frac{3}{8}(M+m)^{2}\\
&<0.
\end{align*}
Taking into account the fact that~$\varphi(0)<0$, it follows from the monotonic decrease of $\varphi(t)$ that
\begin{equation*}
\varphi(t)\leq \varphi(0)<0~~\text{for~a.e.}~ t\in[0,t_0).
\end{equation*}
Thus, we have
\begin{equation*}
m^{\prime}(t)\leq \varphi(t)<0~~\text{for~a.e.} ~t\in[0,t_0),
\end{equation*}
which means that
\begin{equation*}
m(t_0)\leq m(0)<-\frac{1}{6}\left(1+\sqrt{1+24C_0}\right).
\end{equation*}
This contradicts \eqref{b.5}. Consequently, we obtain
\begin{equation}\label{b.6}
m(t)<-\frac{1}{6}\left(1+\sqrt{1+24C_0}\right),~~\varphi(t)\leq \varphi(0)<0,~~~~\text{for~all}~t\in[0,T),
\end{equation}
and
\begin{equation}\label{b.7}
m^{\prime}(t)<0~~~~\text{for~a.e.}~t\in[0,T).
\end{equation}
On the other hand, in view of \eqref{b.6} and \eqref{b.7}, the definition of $\varphi(t)$ implies that
\begin{equation*}
m^2(t)=-\frac{2}{3}\varphi(t)+\frac{1}{6}(M-m)+\frac{2}{3}C_0\geq-\frac{2}{3}\varphi(t)>0,
\end{equation*}
which shows us
\begin{equation}\label{b.8}
-m(t)\geq\sqrt{-\frac{2}{3}\varphi(t)}.
\end{equation}
In conclusion, from the above discussion and \eqref{b.6}, we have, for~a.e.~$t\in[0,T)$,
\begin{align*}
\varphi^{\prime}(t)&\leq \frac{3}{2}m(3m^2+m-2C_0)\\
&\leq-\sqrt{6}\frac{3m^2+m-2C_0}{3m^2-\frac{1}{2}M+\frac{1}{2}m-2C_0}(-\varphi)^\frac{3}{2}(t)\\
&\leq-\sqrt{6}\frac{3m^2+m-2C_0}{3m^2+\frac{1}{2}m-2C_0}(-\varphi)^\frac{3}{2}(t).
\end{align*}
A straightforward computation yields
\begin{align*}
\left(\frac{3m^2+m-2C_0}{3m^2+\frac{1}{2}m-2C_0}\right)^{\prime}&=\left(\frac{\frac{1}{2}m}{3m^2+\frac{1}{2}m-2C_0}\right)^{\prime}\\
&=\frac{m^{\prime}(-\frac{3}{2}m^2-C_0)}{\big(3m^2+\frac{1}{2}m-2C_0\big)^2}>0.
\end{align*}
Hence, we have
\begin{equation*}
\varphi^{\prime}(t)\leq-\sqrt{6}\frac{3m_0^2+m_0-2C_0}{3m_0^2+\frac{1}{2}m_0-2C_0}(-\varphi)^\frac{3}{2}(t)~~~~\text{for~a.e.}~t\in[0,T).
\end{equation*}
Denote
\begin{equation*}
\delta=\sqrt{6}\frac{3m_0^2+m_0-2C_0}{3m_0^2+\frac{1}{2}m_0-2C_0}.
\end{equation*}
Apparently, $\delta>0$ and
\begin{equation}\label{b.9}
\left((-\varphi)^{-\frac{1}{2}}(t)\right)^{\prime}=\frac{1}{2}(-\varphi)^{-\frac{3}{2}}(t)\varphi^{\prime}(t)\leq-\frac{\delta}{2}.
\end{equation}
Integrating \eqref{b.9} from $0$ to $t$ yields
\begin{equation}\label{b.10}
-\varphi(t)\geq\left((-\varphi)^{-\frac{1}{2}}(0)-\frac{\delta}{2}t\right)^{-2}.
\end{equation}
Since $-\varphi(t)>0$ for any $t\geq0$, then $\varphi(t)$ and $m(t)$ break down at the finite time
\begin{equation*}
T\leq t^*:=\frac{2}{\delta(-\varphi)^{\frac{1}{2}}(0)}=\frac{2\sqrt3\big(3m_0^2+\frac{1}{2}m_0-2C_0\big)}{3(3m_0^2+m_0-2C_0)\sqrt{3m_0^2-\frac{1}{2}M_0+\frac{1}{2}m_0-2C_0}}.
\end{equation*}
In the case $T=t^*$, in view of \eqref{b.8} and \eqref{b.10}, we have
\begin{equation*}
m(t)\leq-\sqrt{-\frac{2}{3}\varphi(t)}\leq-\frac{1}{\sqrt{\frac{3}{8}}\delta(t^*-t)},
\end{equation*}
which implies that $m(t)\rightarrow-\infty$ as $t\rightarrow T$.
Now, let us give more insight into the wave breaking mechanism for Eq.~\eqref{eq7}. Assume that the corresponding solution $h$ breaks down in finite time $T<\infty$; then, from Lemma \ref{L3}, we have, for some $\xi_2(t)\in\mathbb{R}$,
\begin{equation}\label{c.1}
m(t)=h_x(t,\xi_2(t))\rightarrow -\infty~~\text{as}~t\nearrow T~~\text{and}~~\sup_{t\in [0,T)}|h(t,\xi_2(t))|<\infty.
\end{equation}
Then it follows from \eqref{ed1} that, for~a.e.~$t\in[0,T)$
\begin{equation}\label{c.2}
m^{\prime}(t)=-\frac{3}{2}m^2(t)-\frac{1}{2}\partial_x [L, Lh_x]h(t,x(t))-\frac{1}{2}L_xh_x(t,x(t)).
\end{equation}
Note that
\begin{align}\label{c.3}
\big| L_xh_x \big| = \big| Lh-h \big| &\leq \|G*h\|_{L^\infty}+\|h\|_{L^\infty}\nonumber\\
&\leq \|G\|_{L^2}\|h\|_{L^2}+\|h\|_{L^\infty}\nonumber\\
&=\frac{1}{2}\|h\|_{L^2}+\|h\|_{L^\infty}\nonumber\\
&=\frac{1}{2}\|h_0\|_{L^2}+\|h\|_{L^\infty}.
\end{align}
Combining \eqref{c.2}, \eqref{ed2} and \eqref{c.3} leads to
\begin{align}\label{c.4}
-\frac{C_0}{2}-\frac{1}{4}\|h_0\|_{L^2}-\frac{1}{2}\|h\|_{L^\infty}-\frac{3}{2}m^2(t)\leq m^{\prime}(t)\leq-\frac{3}{2}m^2(t)+\frac{1}{2}\|h\|_{L^\infty}+\frac{1}{4}\|h_0\|_{L^2}+\frac{C_0}{2}.
\end{align}
For small $\varepsilon>0$, from \eqref{c.1} we can find $t_0\in(0,T)$ close enough to $T$ in such a way
that
\begin{equation*}
\left\lvert \frac{\frac{1}{2}\|h\|_{L^\infty}+\frac{1}{4}\|h_0\|_{L^2}+\frac{C_0}{2}}{m^2(t)}\right \rvert<\varepsilon
\end{equation*}
on $(t_0,T)$. Therefore, on such interval, one can infer from \eqref{c.4} that, for~a.e.~$t\in(t_0,T)$
\begin{equation*}
-\frac{3}{2}-\varepsilon\leq-\frac{d}{dt}\left(\frac{1}{m}\right)\leq-\frac{3}{2}+\varepsilon.
\end{equation*}
Integrating the above inequalities on $(t,T)$~with~$t\in(t_0,T)$, we derive the blow-up rate \eqref{d.1} with $x(t)=\xi_2(t)$. Thus, the proof of Theorem \ref{th.1} is complete.
\end{proof}
\section*{Acknowledgment}
This work is supported by the Yunnan Fundamental Research Projects (Grant No. 202101AU070029).
\end{document} |
\begin{document}
\title[Information geometric approach to mixed state quantum estimation]{Information geometric approach to mixed state quantum estimation}
\author{Gabriel F. Magno$^1$, Carlos H. Grossi$^2$, Gerardo Adesso$^3$ and Diogo O. Soares-Pinto$^1$}
\address{$^1$ Instituto de F\'{i}sica de S\~{a}o Carlos, Universidade de S\~{a}o Paulo, CP 369, 13560-970, S\~{a}o Carlos, SP, Brazil}
\address{$^2$ Departamento de Matem\'{a}tica, ICMC, Universidade de S\~{a}o Paulo, Caixa Postal 668, 13560-970,
S\~{a}o Carlos, SP, Brazil}
\address{$^3$ School of Mathematical Sciences and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom} \ead{[email protected], [email protected], [email protected], [email protected]}
\begin{indented}
\item[]
\end{indented}
\begin{abstract}
Information geometry promotes an investigation of the geometric structure of statistical manifolds, providing a series of elucidations in various areas of scientific knowledge. In the physical sciences, especially in quantum theory, this geometric method has a remarkable parallel with the distinguishability of states, a capability of great value for determining the effectiveness of implementations of physical processes. This gives us the context for this work. Here we approach a problem of uniparametric statistical inference from an information-geometric perspective. We obtain the generalized Bhattacharyya higher-order corrections for the Cram\'{e}r-Rao bound, where the statistical model is given by a mixed quantum state. Using an unbiased estimator $T$, canonically conjugated to the Hamiltonian $H$ that generates the dynamics, we find corrections which are independent of the specific choice of estimator. This procedure is performed using information-geometric techniques, establishing connections with the corrections known for the pure state case.
\end{abstract}
\section{Introduction}
\label{sec:introducao}
Quantum technologies utilise quantum states as the fundamental agents responsible for information processing. Knowing the quantum operations that can act on these states, their proper control allows an optimization in coding/decoding, manipulation and transmission of the information content on the state of the system. The access to this encoded information, after the operation is applied, demands some way to distinguish the initial and final states of the system \cite{ikemike}. Therefore, distinguishing states in quantum models is a typical task at the core of information theory. Several measures of distance in state spaces are known in the literature that are used as quantifiers of distinguishability, for example, the trace distance, Bures distance, Hellinger distance, Hilbert-Schmidt distance, relative entropies, among others \cite{fuchs96, watrous2018}. As the concept of distance is closely linked to some type of geometric structure of the space involved, there is a natural connection between methods of geometry and information theory.
In this geometric formulation of information theory, the quantum state space -- a set of density operators that act on the Hilbert space of a quantum system -- is treated as a differentiable manifold equipped with metric tensors that will define the notion of distance between the elements of the manifold. Such an approach allows the development of interesting physical results exploring the metric properties of space \cite{braunstein-caves, wootters81, amari1993infogeo, anderssonheydari1, anderssonheydari2, heydari}. In this way, it is natural to interpret the distinguishability measures as metrics defined in quantum statistical manifolds. This culminates in {\em information geometry}, an area that applies differential geometry methods to the solution and formalization of information science problems, obtaining robust and elegant results with broad applicability \cite{petz1996, gibiliscoisola, petzhasegawa, petz2002, hiaipetz, anderssonheydari3, jarzyna, benyosborne}. Such a framework was used successfully in problems of statistical inference, and it is specifically the one-parameter instance of these problems that will be the focus of our discussion \cite{brody1996prl, brody1996royalsoc}.
The uniparametric quantum statistical inference problem can be described as follows \cite{Helstrom, paris-ijqi, GLM}: Consider a physical system characterized by the pure quantum state $\rho(t)$, which depends on an unknown parameter $t$. We want to estimate the value of this parameter using an unbiased estimator $T$, i.e., $\mbox{E}_ {\rho} [T] = t$. Thus, we can ask: How accurate is this estimation procedure? To answer this question, note that the variance of the estimator can be interpreted as the error associated with the estimation \cite{Helstrom, paris-ijqi, GLM}. Therefore, a good estimation should have a small error, which means that the variance of the estimator should be as small as possible. In the literature, a lower bound for the variance of the estimator is established, named the Cram\'{e}r-Rao inequality \cite{Helstrom, paris-ijqi, GLM},
\begin{equation}
\label{desigualdade_cr}
\mbox{Var}_{\rho}[T]\ge \frac{1}{\mathcal{G}},
\end{equation}
where $\mbox{Var}_{\rho}[T]\equiv\Delta T^2$ is the variance of the estimator $T$ and $\mathcal{G}$ is the quantum Fisher information associated to the parameter of interest. From this relation we can see that the more information the state provides about the parameter, the smaller $\mbox{Var}_ {\rho}[T]$ can be and the better the estimation \cite{Helstrom, paris-ijqi, GLM, fabricio}.
In this work, we extend the analysis of the Cramér-Rao bound by obtaining its higher-order corrections using the geometry of the quantum state space for {\it mixed} states instead of {\it pure} states. We obtain the following generalized form of the bound in the mixed quantum state scenario:
\begin{prop}
Let $\rho(t)$ be a one-parameter mixed state under a von Neumann dynamics generated by the Hamiltonian $H$, canonically conjugated to the unbiased estimator $T$ of the parameter $t$. The generalized Cram\'{e}r-Rao bound is then given by
\begin{equation}
\label{qntcrlb_classys_correcao_misto}
(\Delta T^2 + \delta T^2)(\Delta H^2 - \delta H^2) \geq \frac{1}{4}\left[1+ \frac{(\mu_4-3\mu_2^2)^2}{\mu_6\,\mu_2-\mu_4^2} \right],
\end{equation}
where $\delta X^2 = \tr(X\sqrt{\rho}X\sqrt{\rho})-[\tr(X\rho)]^2$, and $\mu_{2n}$ is the squared norm of the $n$-th derivative of the square root of the state with respect to the parameter.
\end{prop}
The paper is organized as follows. In Section II we introduce the notation used along the manuscript and briefly discuss the current literature on quantum statistical estimation. Section III is devoted to obtain the extension of quantum estimation theory when considering a mixed state scenario, analyzing the consequences for the Cramér-Rao bound. In Section IV we present the higher order corrections for the variance implied by this mixed state scenario. In Section V, we present an algorithmic approach to obtain all possible corrections to the bound, and in Section VI we present our conclusions and discussions.
\section{Notation}
\label{sec:adapt_notacao}
Throughout the paper we use the following notation. Let $\mathcal{B}_{HS}$ be a Hilbert space contained in the set of square matrices with complex entries, equipped with the Hilbert-Schmidt inner product. For a given matrix $A\in \mathcal{B}_{HS}$ we have
\begin{equation*}
\langle A,A\rangle_{HS}=\tr (A^\dagger A)<\infty,
\end{equation*}
where $\langle\bullet,\bullet\rangle_{HS}$ denotes the Hilbert-Schmidt product and $\tr(\bullet)$ stands for the matrix trace. Therefore, $\mathcal{B}_{HS}$ is the set of all matrices with finite Hilbert-Schmidt norm. Thus, let us index the Hilbert-Schmidt product as $g_{ab}$. This definition allows us to establish that $\zeta^ag_{ab}\zeta^b \equiv \tr (\zeta^\dagger\zeta)$, where $\zeta^a \in \mathcal{B}_{HS}$ and $\zeta$ is the matrix associated to $\zeta^a$.
On the other hand, the set of density operators representing mixed states in quantum theory, $\mathcal{S}=\{ \rho \mid \rho=\rho^\dagger > 0, \tr(\rho)= 1, \tr(\rho^{2}) < 1 \}$, denoting a space of positive definite density operators, is a differentiable manifold in $\mathcal{B}_{HS}$ \cite{amari1993infogeo, livro_geo_info}. As the operators in $\mathcal{S}$ are positive definite, we have the embedding $\rho \rightarrow \sqrt{\rho}$ which maps an operator into its own square-root. Thus, identifying $\sqrt{\rho}\equiv\xi$, we find that $g_{ab}\xi^a\xi^b=\tr (\xi\xi)=1$.
A random variable in $\mathcal{S}$ is a form $X_{ab}$ whose average in terms of the state $\xi$ is given by
\begin{equation*}
\mbox{E}_{\xi}[X]=\xi^aX_{ab}\xi^b=\tr [\xi X\xi].
\end{equation*}
Also, the variance of $X_{ab}$ is given by
\begin{equation*}
\mbox{Var}_{\xi}[X]=\Delta X^2=\xi^a\tilde{X}_{ac}\tensor[ ]{\tilde{X}}{^c_b}\xi^b,
\end{equation*}
where $\tilde{X}_{ab} \equiv X_{ab}-g_{ab}(\xi^cX_{cd}\xi^d)$. It is important to note that when calculating these quantities, in general, we have $X_{ac}\xi^c\xi^b\tensor{Y}{_b^a}\neq\xi^cX_{ca}\xi^b\tensor{Y}{_b^a}$ since $\tr [X\xi\xi Y]\neq\tr [\xi X \xi Y]$. Since $\xi^a$ is a matrix, the way the contraction is taken is quite important, even for symmetric forms, because the matrix algebra is non-commutative.
Now consider that $\mathcal{S}$ is parametrically given by a set of local coordinates $\{ \theta = [\theta^{i}] \in \mathbb{R}^{r}; i=1,\dots,r\}$, where $\xi(\theta)\in C^\infty$. Defining $\partial_i\equiv\partial/\partial\theta^i$ we have, in terms of local coordinates in $\mathcal{S}$, the Riemannian metric
\begin{equation}
\label{fisher_metrica_misto}
G_{ij}=2g_{ab}\partial_i\xi^a\partial_j\xi^b=2\tr [\partial_i\xi\,\partial_j\xi]
\end{equation}
induced by the Hilbert-Schmidt metric $g_{ab}$ of $\mathcal{B}_{HS}(\mathcal{H})$. The proof of Eq.(\ref{fisher_metrica_misto}) follows the same steps as the proof of Proposition 1 in Ref.~\cite{brody1996royalsoc}. The difference is that there a factor 4 is considered, so as to obtain an identification with the results of the Cramér-Rao bound, while here we have a factor 2. The motivation for this difference will become clear in the next section.
\section{Quantum statistical estimation}
\label{sec:est_qnt_misto}
In this section, inspired by Ref.\cite{brody2011fasttrack}, we are going to show a generalisation considering the mixed state scenario. Thus, consider that, in $\mathcal{S}$, we have the result of a measurement from an unbiased estimator $T_{ab}$ such that $\xi^aT_{ab}\xi^b=t$, and that a one-parameter family $\xi(t)$ characterizes the probability distribution of all the possible results of the measurement. We can refer to $t$ as the time that has passed since the preparation of the initially known normalized state $\xi_0 = \xi(0)$. Our goal is to estimate $t$.
The system is prepared in the normalized state $\xi_0$ and evolves under the Hamiltonian $H_{ab}$ following the equation of motion
\begin{equation*}
\xi_t = e^{-iHt}\,\xi_0\,e^{iHt}.
\end{equation*}
The derivative of this equation in time gives
\begin{equation*}
\dot{\xi}=-iH\xi+i\xi H = -i[H,\xi]
\end{equation*}
that is the von Neumann dynamics for the state $\xi_t$. From such evolution we find that
\begin{equation*}
\tr (\dot{\xi}\dot{\xi})=2[\tr (H^2\xi\xi)-\tr (H\xi H\xi)].
\end{equation*}
Given the mapping $\xi\rightarrow\sqrt{\rho}$, we find that the right side of the previous expression is twice the Wigner-Yanase skew information (WYSI) \cite{WY} of an arbitrary observable $H$ and a quantum state $\rho$
\begin{equation}
\label{wysi}
I_{\rho}(H)=\tr (H^2\rho)-\tr (H\sqrt{\rho}H\sqrt{\rho})=-\frac{1}{2}\tr ([\sqrt{\rho},H]^2).
\end{equation}
Defining $\delta X^2 \equiv \tr (X \xi X \xi)-[\tr (X \xi \xi)]^2$ we can rewrite Eq.(\ref{wysi}) as
\begin{equation*}
I_{\rho}(H)=(\Delta H^2-\delta H^2).
\end{equation*}
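As a quick numerical sanity check of the identity in Eq.(\ref{wysi}) (not part of the derivation), one may verify it on a randomly generated qubit state; the state and the observable $H$ in the snippet below are arbitrary test inputs of ours.
\begin{verbatim}
import numpy as np

# Check tr(H^2 rho) - tr(H sqrt(rho) H sqrt(rho)) = -(1/2) tr([sqrt(rho), H]^2)
# on a random full-rank qubit state; rho and H are arbitrary test inputs.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real
w, V = np.linalg.eigh(rho)
sq = V @ np.diag(np.sqrt(w)) @ V.conj().T        # square root of rho
H = np.array([[1.0, 0.4 - 0.2j], [0.4 + 0.2j, -0.7]])
wysi = np.trace(H @ H @ rho).real - np.trace(H @ sq @ H @ sq).real
comm = sq @ H - H @ sq
print(np.isclose(wysi, -0.5 * np.trace(comm @ comm).real))   # True
\end{verbatim}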
From the embedding $\rho\rightarrow \sqrt{\rho}$, we find that the natural metric, induced by the notion of distance in the ambient manifold, is going to be the WYSI in the mixed state space which is the only metric that has a Riemannian $\alpha-$connection \cite{amari1993infogeo}.
Relaxing the unit-trace normalization, we can define a symmetric function of $\xi$ in $\mathcal{B}_{HS}$ as
\begin{equation*}
t(\xi)=\frac{\xi^aT_{ab}\xi^b}{\xi^cg_{cd}\xi^d}=\frac{\tr (\xi T\xi)}{\tr (\xi\xi)}.
\end{equation*}
Again, since the matrix algebra is non-commutative, we must be careful when doing the contractions. For example, considering $\triangledown_c=\partial/\partial\xi^c$, we obtain
\begin{eqnarray*}
\triangledown_c(\xi^aT_{ab}\xi^b) &=& (\triangledown_c\xi^a)T_{ab}\xi^b + \xi^aT_{ab}(\triangledown_c\xi^b) \nonumber \\
&=& T_{cb}\xi^b + \xi^aT_{ac} \\
&=& (T_{ac}+T_{ca})\xi^a=(T\xi+\xi T).
\end{eqnarray*}
Following this calculation, we can find that the gradient of a function $t$ in $\mathcal{B}_{HS}$ is given by
\begin{equation*}
\triangledown_ct=\frac{(T_{ac}+T_{ca})\xi^a-2\,t\,g_{ac}\xi^a}{\xi^b\xi_b}.
\end{equation*}
Now, imposing the unit-trace normalization, $\tr(\xi\xi)=1$, we get
\begin{eqnarray*}
\triangledown_ct &=& (\tilde{T}_{ac}+\tilde{T}_{ca})\xi^a \\
\triangledown^ct &=& (\tensor{T}{^c_a}+\tensor{T}{_a^c})\xi^a -2t\xi^c.
\end{eqnarray*}
The gradient norm is
\begin{eqnarray*}
\label{norm_gradt_misto}
\norm{\triangledown^ct}^2&=&(T_{ac}+T_{ca})(\tensor{T}{^c_b}+\tensor{T}{_b^c})\xi^a\xi^b \\
&-&2\,t\,(T_{ac}+T_{ca})\xi^a\xi^c -2\,t\,(T_{ab}+T_{ba})\xi^a\xi^b + 4\,t^2\,\xi^a\xi_a \\
&=&T_{ca}\xi^a\tensor{T}{^c_b}\xi^b+T_{ca}\xi^a\xi^b\tensor{T}{_b^c} \\
&+&\xi^aT_{ac}\tensor{T}{^c_b}\xi^b + \xi^aT_{ac}\xi^b\tensor{T}{_b^c}-4t^2 \\
&=&2\{ [\tr (T^2\xi\xi)-t^2] + [\tr (T\xi T\xi)-t^2] \} \\
&=&2(\Delta T^2 + \delta T^2).
\end{eqnarray*}
The von Neumann dynamics can be written as
\begin{equation*}
\dot{\xi}^a=-i\tensor{H}{^a_b}\xi^b+i\xi^b\tensor{H}{_b^a}.
\end{equation*}
Since $T$ is canonically conjugated to $H$, i.e., $i[H,T]=1$, we obtain the projection of $\triangledown^ct$ onto the direction $\dot{\xi}^a$:
\begin{eqnarray*}
\triangledown_at\,\dot{\xi}^a&=&[(T_{ac}+T_{ca})\xi^c-2\,t\,\xi_a][-i\tensor{H}{^a_b}\xi^b+i\xi^b\tensor{H}{_b^a}] \nonumber \\
&=&\tr (-iT\xi H\xi+i\xi T\xi H+iT\xi\xi H-i\xi TH\xi) \nonumber \\
&=&i\tr ([H,T]\xi\xi)=1.
\end{eqnarray*}
Using the Cauchy-Schwarz inequality for a pair of Hermitian operators $X$ and $Y$,
\begin{equation*}
[\tr (XY)]^2\leq\tr(X^2)\tr(Y^2),
\end{equation*}
we deduce the quantum Cramér-Rao inequality for the density operators space
\begin{equation}
\label{qntcrlb_misto2}
(\Delta T^2+\delta T^2)(\Delta H^2-\delta H^2)\geq\frac{1}{4},
\end{equation}
where $\Delta X^2$ denotes the variance of the observable $X$ and $(\Delta X^2 + \delta X^2)$ is the skew information of the second kind \cite{brody2011fasttrack}. Note that the relation above extends the usual notion of the Cram\'{e}r-Rao bound, incorporating some corrections to the uncertainty relation.
If we impose that $\xi=\xi^2$ to recover the pure state case, it follows that $\tr (H\sqrt{\rho}H\sqrt{\rho}) = [\tr (H\rho)]^2$, consequently $\delta H^2 = 0$, and thus $\tr (\dot{\xi}\dot{\xi})=2\Delta H^2$. Similarly, we can find that $\delta T^2 = 0$. After these considerations, the inequality reduces to the Cramér-Rao bound for pure states found in the literature \cite{brody1996prl, brody1996royalsoc}\footnote{In the quantum scenario, the metric when $\alpha=0$ corresponds exactly to the WYSI in the uniparametric case, where the factor 4 can be found \cite{amari1993infogeo}. However, the embedding considered in that case is $\rho\rightarrow 2\sqrt{\rho}$, which differs from the one used in the present work. The factor 2 allows us to establish an equality with the relation in Eq.(\ref{qntcrlb_misto2}), in the sense of the Cramér-Rao bound $1/\mathcal{G}$, and to recover the expression for the pure state case. Notice that if we considered the embedding $\rho\rightarrow 2\sqrt{\rho}$, the metric would have the factor 4 and, when taking Eq.(\ref{qntcrlb_misto2}) in the pure state case, we would find $\Delta T^2\Delta H^2\ge 1/8$ \cite{ref20_brody_fasttrack}.}.
We can go a step further and rewrite the inequality given in Eq.(\ref{qntcrlb_misto2}), after a simple manipulation, in another form
\begin{equation*}
\Delta T^2\Delta H^2 \geq \frac{1}{4} + \delta T^2\delta H^2.
\end{equation*}
However, although more symmetric, this inequality is less tight than the previous one.
\section{Higher orders corrections for the variance bound in mixed state case}
\label{sec:ordem_sup_var_qnt}
The dynamics of a mixed state will not lead to an exponential family of states that saturates the Cauchy-Schwarz inequality, i.e., the minimum bound in Eq.(\ref{qntcrlb_misto2}) is unattainable when states evolve under a von Neumann dynamics. The calculation of higher-order corrections then becomes relevant in order to determine how the estimation is affected in such circumstances.
In order to establish the higher-order corrections to the bound in the mixed state scenario, we take inspiration from the works of Bhattacharyya \cite{bhatt1, bhatt2, bhatt3}, follow the programme outlined in Ref.\cite{brody2011fasttrack}, and find in this new context what is called the generalized Bhattacharyya bound \cite{brody1996prl, brody1996royalsoc}.
\begin{prop}\label{prop:bhattacharrya_misto} Let
\begin{equation*}
\hat{\xi}^{(n)a}=\xi^{(n)a}-\frac{\xi^{(n-1)b}\xi^{(n)}_b}{\xi^{(n-1)c}\xi^{(n-1)}_c}\,\xi^{(n-1)a} - \dots - (\xi^b\xi^{(n)}_b)\,\xi^a \quad (n=0,1,2,\dots),
\end{equation*}
be an orthogonal system of vectors, where $\xi^{(n)a}=d^n\xi^a/dt^n$ and
\begin{eqnarray*}
\hat{\xi}^{(r)}_a\xi^a&=&0 \nonumber \\
\hat{\xi}^{(r)}_a\xi^{(s)a} &=& 0; r\neq s \nonumber \\
\hat{\xi}^{(r)}_a\hat{\xi}^{(s)a} &=& 0; r\neq s,
\end{eqnarray*}
are defined for mixed states. The generalized Bhattacharyya lower bounds for an unbiased estimator $T_{ab}$ of a function $t$ can be expressed in the form
\begin{equation}
\label{bhattacharrya_misto}
\Delta T^2 + \delta T^2 \geq \frac{1}{2}\sum_n\frac{(\triangledown_at\,\hat{\xi}^{(n)a})^2}{\hat{\xi}^{(n)b}\hat{\xi}^{(n)}_b}.
\end{equation}
\end{prop}
\begin{proof}[Proof]
Let $\hat{T}_{ab}\equiv (T_{ab}+T_{ba})-2\,t\,g_{ab}=(\tilde{T}_{ab}+\tilde{T}_{ba})$. Defining the tensor $R_{ab}\equiv \hat{T}_{ab}+\sum_n\lambda_n\xi_{(a)}\hat{\xi}^{(n)}_{(b)}$, we obtain the variance of $R$:
\begin{equation*}
\mbox{Var}_{\xi}[R] = \mbox{Var}_{\xi}[\hat{T}]+\sum_n\lambda_n(\xi^a\hat{T}_{ac}\hat{\xi}^{(n)c}+\hat{\xi}^{(n)c}\hat{T}_{ca}\xi^a) + \sum_n\lambda_n^2\hat{\xi}^{(n)b}\hat{\xi}^{(n)}_b.
\end{equation*}
To obtain the values of $\lambda_n$ that minimize $\mbox{Var}_{\xi}[R]$, we regard the variance as a function of the variables $\{\lambda_n\}_{n\in\mathbb{N}}$ and look for the critical points at which the gradient of this function vanishes. There is a unique critical point, a minimum, given by $\lambda_n^{min}=-(\xi^a\hat{T}_{ac}\hat{\xi}^{(n)c}+\hat{\xi}^{(n)c}\hat{T}_{ca}\xi^a)/2\hat{\xi}^{(n)}_b\hat{\xi}^{(n)b}$ for each $n$. Replacing this in the above expression, we find
\begin{equation*}
\min_{\{\lambda_n\}_{n\in\mathbb{N}}}\left(\mbox{Var}_{\xi}[R]\right) = \mbox{Var}_{\xi}[\hat{T}]-\sum_n\frac{(\xi^a\hat{T}_{ac}\hat{\xi}^{(n)c}+\hat{\xi}^{(n)c}\hat{T}_{ca}\xi^a)^2}{4\hat{\xi}^{(n)b}\hat{\xi}^{(n)}_b}.
\end{equation*}
Since $\mbox{Var}_{\xi}[R]\geq 0$, we obtain an expression for the generalized bound on the variance of $\hat{T}_{ab}$,
\begin{equation*}
\mbox{Var}_{\xi}[\hat{T}]\geq \sum_n\frac{(\xi^a\hat{T}_{ac}\hat{\xi}^{(n)c}+\hat{\xi}^{(n)c}\hat{T}_{ca}\xi^a)^2}{4\hat{\xi}^{(n)b}\hat{\xi}^{(n)}_b}.
\end{equation*}
Next, one verifies that
\begin{equation*}
(\xi^a\hat{T}_{ac}\hat{\xi}^{(n)c}+\hat{\xi}^{(n)c}\hat{T}_{ca}\xi^a)^2=4(\xi^aT_{ac}\hat{\xi}^{(n)c}+\hat{\xi}^{(n)c}T_{ca}\xi^a)^2=4(\triangledown_at\,\hat{\xi}^{(n)a})^2
\end{equation*}
and
\begin{equation*}
\mbox{Var}_{\xi}[\hat{T}] = 2(\Delta T^2+\delta T^2).
\end{equation*}
Combining all these results we arrive at the desired expression in Eq.(\ref{bhattacharrya_misto}). Naturally, keeping only the $n=1$ term, we recover the inequality given in Eq.(\ref{qntcrlb_misto2}).
\end{proof}
A simple interpretation of this proposition is that, given the gradient vector $\triangledown^at$, its squared modulus is always greater than or equal to the sum of the squares of its orthogonal components with respect to a given basis. Applying the Cauchy-Schwarz inequality is the same as using the order-$1$ Bhattacharyya inequality. Note that the generalized Bhattacharyya bound is not necessarily independent of the specific choice of the estimator $T_{ab}$. This is evidence that these bounds are not fully equivalent to the original Bhattacharyya bounds, not even at the classical level, since the original bounds are independent of the specific choice of estimator. Therefore, we want to obtain corrections which are independent of that choice, so that the bound does not depend on the way we perform the estimation. This will again demand $T$ and $H$ to be canonically conjugated.
Before focusing on higher order corrections, let us present some useful results. These results are adaptations of statements from \cite{brody1996royalsoc} to the context of mixed states.
\begin{prop}
\label{lem:norm_xin_indep_t_misto}
Given $\xi^a(t)$ satisfying the von Neumann dynamics, the squared norm of $\xi^{(n)a}$, namely $g_{ab}\xi^{(n)a}\xi^{(n)b}$, is independent of the parameter $t$, where $\xi^{(n)a}=d^n\xi^a/dt^n$. In particular, $g_{ab}\xi^{(n)a}\xi^{(n+1)b}=0$.
\end{prop}
\begin{proof}[Proof]
If the von Neumann dynamics holds, then $\dot{\xi}^{(n)a}=-i\tensor{H}{^a_b}\xi^{(n)b}+i\xi^{(n)b}\tensor{H}{_b^a}$. The time derivative of the squared norm of $\xi^{(n)a}$ gives
\begin{eqnarray*}
\frac{d}{dt}\left[g_{ab}\xi^{(n)a}\xi^{(n)b}\right]&=&2g_{ab}\xi^{(n)a}\dot{\xi}^{(n)b}=2g_{ab}\xi^{(n)a}\xi^{(n+1)b} \nonumber \\
&=&2\xi^{(n)}_b(-i\tensor{H}{^b_c}\xi^{(n)c}+i\xi^{(n)c}\tensor{H}{_c^b}) \\
&=&2\tr (-i\xi^{(n)}H\xi^{(n)}+i\xi^{(n)}H\xi^{(n)})=0,
\end{eqnarray*}
completing the proof.
\end{proof}
It is important to note that since the von Neumann dynamics is given by $\dot{\xi}=-i[\tilde{H},\xi]$, where $\tilde{H}_{ab}=H_{ab}-g_{ab}[\tr (H\xi\xi)]$, we can generalize this relation to
\begin{equation}
\label{vonneumann_nderiv}
\xi^{(n)}=\mbox{mod}_{-i}[n]\cdot\mbox{Ad}^n_{\tilde{H}}[\xi],
\end{equation}
where
\begin{eqnarray*}
\mbox{mod}_{-i}[n] \Longrightarrow && n=\bar{1}\rightarrow-i \\
&& n=\bar{2}\rightarrow -1 \\
&& n=\bar{3}\rightarrow i \\
&& n=\bar{4}\rightarrow +1
\end{eqnarray*}
and
$$\mbox{Ad}^n_{\tilde{H}}[\bullet]=[\dots[\tilde{H},[\tilde{H},[\tilde{H},[\tilde{H},\bullet]]]]\dots].$$
Let us also define the $n-$th derivative of the norm of $\xi$ as
\begin{equation*}
g_{ab}\xi^{(n)a}\xi^{(n)b}\equiv \mu_{2n}.
\end{equation*}
\begin{prop}
\label{prop:conseq_TconjugH_misto}
Let $T_{ab}$ be canonically conjugate to $H_{ab}$ and an unbiased estimator to the parameter $t$. Then
\begin{equation}
\label{conseq_TconjugH_misto}
T_{ab}\xi^{(n)a}\xi^{(n)b}=t\,g_{ab}\xi^{(n)a}\xi^{(n)b}+\kappa,
\end{equation}
where $\kappa$ is a constant. Therefore, for all $n$, $\tilde{T}_{ab}\xi^{(n)a}\xi^{(n)b}=\kappa$ is a constant of motion along the path $\xi(t)$, following a von Neumann dynamics.
\end{prop}
\begin{proof}[Proof]
If $T$ is canonically conjugated to $H$, then $ig_{ab}=(T_{ac}\tensor{H}{^c_b}-\tensor{H}{_a^c}T_{cb})$. It follows that
\begin{equation*}
g_{ab}\xi^{(n)a}\xi^{(n)b}=(-i\xi^{(n)a}T_{ac}\tensor{H}{^c_b}\xi^{(n)b}+i\xi^{(n)a}\tensor{H}{_a^c}T_{cb}\xi^{(n)b})
\end{equation*}
The derivative of the average of $T$ in the state $\xi^{(n)}$ gives
\begin{eqnarray*}
\frac{d}{dt}[T_{ab}\xi^{(n)a}\xi^{(n)b}]&=&T_{ab}(\dot{\xi}^{(n)a}\xi^{(n)b}+\xi^{(n)a}\dot{\xi}^{(n)b}) \\
&=&(-i\xi^{(n)a}T_{ac}\tensor{H}{^c_b}\xi^{(n)b}+i\xi^{(n)a}\tensor{H}{_a^c}T_{cb}\xi^{(n)b}) \\
&=&g_{ab}\xi^{(n)a}\xi^{(n)b}.
\end{eqnarray*}
Integrating the above expression, we obtain
\begin{eqnarray*}
T_{ab}\xi^{(n)a}\xi^{(n)b}&=&t\,g_{ab}\xi^{(n)a}\xi^{(n)b}+\kappa \\
\tilde{T}_{ab}\xi^{(n)a}\xi^{(n)b}&=&\kappa,
\end{eqnarray*}
where $\kappa$ is a constant independent of $t$ and $\tilde{T}_{ab}\xi^{(n)a}\xi^{(n)b}$ is a constant of motion.
\end{proof}
\begin{prop}
\label{lem:2Txi(n)xi_misto}
Let $T_{ab}$ be canonically conjugate to $H_{ab}$ and an unbiased estimator of the parameter $t$. Then, for odd integers $n$, with $m=(n-1)/2$, we have
\begin{equation}
\label{2Txi(n)xi_misto}
(T_{ab}+T_{ba})\xi^{(n)a}\xi^b=(-1)^mng_{ab}\xi^{(m)a}\xi^{(m)b}
\end{equation}
\end{prop}
\begin{proof}[Proof]
Given Proposition \ref{prop:conseq_TconjugH_misto}, the proof follows the one presented in Lemma 6 of Ref.~\cite{brody1996royalsoc}.
\end{proof}
In view of these results, we can deduce some higher order corrections, independent of the specific choice of $T$, for canonically conjugated observables in the mixed state case.
\begin{proof}[Proof of Proposition 1]
Let us consider corrections up to the third order for the bound in estimation of mixed states that emerge when we expand $\triangledown_at$ over the orthogonal vector system $\dot{\xi}^a$, $\hat{\xi}^{(2)a}$ and $\hat{\xi}^{(3)a}$. From Eq.(\ref{bhattacharrya_misto}), the generalized bound is
\begin{equation}
\label{cota_classysvar_ordem3_misto}
\Delta T^2+\delta T^2\geq \frac{(\dot{\xi}^a\triangledown_at)^2}{2\dot{\xi}^b\dot{\xi}_b}+\frac{(\hat{\xi}^{(2)a}\triangledown_at)^2}{2\hat{\xi}^{(2)b}\hat{\xi}^{(2)}_b}+\frac{(\hat{\xi}^{(3)a}\triangledown_at)^2}{2\hat{\xi}^{(3)b}\hat{\xi}^{(3)}_b},
\end{equation}
where $\hat{\xi}^{(2)a}$ and $\hat{\xi}^{(3)a}$ are given by
\begin{equation*}
\hat{\xi}^{(2)a}=\ddot{\xi}^a-\frac{(\ddot{\xi}^b\dot{\xi}_b)}{(\dot{\xi}^c\dot{\xi}_c)}\dot{\xi}^a-(\ddot{\xi}^b\xi_b)\xi^a
\end{equation*}
and
\begin{equation*}
\hat{\xi}^{(3)a}=\dddot{\xi}^a-\frac{(\dddot{\xi}^b\ddot{\xi}_b)}{(\ddot{\xi}^c\ddot{\xi}_c)}\ddot{\xi}^a-\frac{(\dddot{\xi}^b\dot{\xi}_b)}{(\dot{\xi}^c\dot{\xi}_c)}\dot{\xi}^a-(\dddot{\xi}^b\xi_b)\xi^a
\end{equation*}
The term corresponding to $r=1$ in Eq.(\ref{cota_classysvar_ordem3_misto}) has already been calculated in the derivation of Eq.(\ref{qntcrlb_misto2}). Thus let us proceed with the second order term. The expression in Eq.(\ref{vonneumann_nderiv}) and Prop.~\ref{lem:norm_xin_indep_t_misto} give us that $\ddot{\xi}=-[\tilde{H},[\tilde{H},\xi]]$, $\ddot{\xi}^a\dot{\xi}_a=0$, $\ddot{\xi}^a\xi_a=-2I_{\rho}(\tilde{H})$. Therefore, $\hat{\xi}^{(2)a}=2I_{\rho}(\tilde{H})\xi-[\tilde{H},[\tilde{H},\xi]]$. Hence, the numerator of the second order term is $(\hat{\xi}^{(2)a}\triangledown_at)^2=\big\{\tr \big[(T\xi-\xi T)(2I_{\rho}(\tilde{H})\xi-[\tilde{H},[\tilde{H},\xi]])\big]\big\}^2$ which explicitly depends on the choice of the estimator $T$. Due to this dependence, we will discard this term, as we want only correction terms that do not depend on the choice of the estimator.
Now, dealing with the term that involves $\hat{\xi}^{(3)a}$, we have $\dddot{\xi}^a\ddot{\xi}_a=\ddot{\xi}^a\dot{\xi}_a=0$ from Prop.~\ref{lem:norm_xin_indep_t_misto}. It follows that $\ddot{\xi}^a\xi_a=-\dot{\xi}^a\dot{\xi}_a$, thus $\dddot{\xi}^a\xi_a=0$. From the expression in Eq.(\ref{vonneumann_nderiv}), $\dddot{\xi}^a=i[\tilde{H},[\tilde{H},[\tilde{H},\xi]]]=i([\tilde{H}^3,\xi]+3[\tilde{H}\xi\tilde{H},\tilde{H}])$. After some manipulation, we find $\dddot{\xi}^a\dot{\xi}_a=-2\tr (\tilde{H}^4\xi\xi-4\tilde{H}^3\xi\tilde{H}\xi+3\tilde{H}^2\xi\tilde{H}^2\xi)$. Consequently,
\begin{equation*}
\hat{\xi}^{(3)}=i\left\{ \big([\tilde{H}^3,\xi]+3[\tilde{H}\xi\tilde{H},\tilde{H}]\big) - \frac{\mu_4}{\mu_2}[\tilde{H},\xi] \right\}
\end{equation*}
and
\begin{equation}
\label{norm_xihat3_misto}
\hat{\xi}^{(3)a}\hat{\xi}^{(3)}_a=\mu_6-\frac{\mu_4^2}{\mu_2},
\end{equation}
where we explicitly have
\begin{eqnarray*}
\mu_2&:=&2I_{\rho}(\tilde{H})=2\tr (\tilde{H}^2\xi\xi-\tilde{H}\xi\tilde{H}\xi)=2(\Delta H^2 - \delta H^2)=\tr(\dot{\xi}\dot{\xi}), \\
\mu_4&:=& 2\tr (\tilde{H}^4\xi\xi-4\tilde{H}^3\xi\tilde{H}\xi+3\tilde{H}^2\xi\tilde{H}^2\xi) =\tr(\ddot{\xi}\ddot{\xi}),\\
\mu_6&:=& 2\tr (\tilde{H}^6\xi\xi-6\tilde{H}^5\xi\tilde{H}\xi+15\tilde{H}^4\xi\tilde{H}^2\xi-10\tilde{H}^3\xi\tilde{H}^3\xi)=\tr(\xi^{(3)}\xi^{(3)}).
\end{eqnarray*}
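The explicit expressions above can be cross-checked numerically against the nested-commutator form of Eq.(\ref{vonneumann_nderiv}); the qubit state and Hamiltonian in the sketch below are arbitrary test inputs of ours, not taken from the paper.
\begin{verbatim}
import numpy as np
from functools import reduce

# Compare the explicit trace formulas for mu_4 and mu_6 with tr(xi'' xi'')
# and tr(xi''' xi''') computed from the nested commutators; inputs are arbitrary.
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real
w, V = np.linalg.eigh(rho)
xi = V @ np.diag(np.sqrt(w)) @ V.conj().T               # xi = sqrt(rho)
H = np.array([[0.9, 0.3 - 0.1j], [0.3 + 0.1j, -0.5]])
Ht = H - np.trace(H @ rho).real * np.eye(2)             # centred Hamiltonian
com = lambda X, Y: X @ Y - Y @ X
xi2 = -com(Ht, com(Ht, xi))                             # second derivative of xi
xi3 = 1j * com(Ht, com(Ht, com(Ht, xi)))                # third derivative of xi
tr = lambda *ops: np.trace(reduce(np.matmul, ops)).real
mu4 = 2 * (tr(Ht, Ht, Ht, Ht, xi, xi) - 4 * tr(Ht, Ht, Ht, xi, Ht, xi)
           + 3 * tr(Ht, Ht, xi, Ht, Ht, xi))
mu6 = 2 * (tr(*[Ht] * 6, xi, xi) - 6 * tr(*[Ht] * 5, xi, Ht, xi)
           + 15 * tr(*[Ht] * 4, xi, Ht, Ht, xi) - 10 * tr(*[Ht] * 3, xi, *[Ht] * 3, xi))
print(np.isclose(tr(xi2, xi2), mu4), np.isclose(tr(xi3, xi3), mu6))   # True True
\end{verbatim}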
Regarding the numerator of the third order term
\begin{equation*}
\hat{\xi}^{(3)a}\triangledown_at=i\tr \left\{ [T,\tilde{H}^3]\xi\xi+3(\xi[\tilde{H}^2,T]\xi\tilde{H}+\xi[T,\tilde{H}]\xi\tilde{H}^2) -\xi[T,\tilde{H}]\xi\frac{\mu_4}{\mu_2} \right\},
\end{equation*}
we see that it involves commutators between $T$ and $H^k,k\in\mathbb{N}$. By finite induction one shows that $[\tilde{H}^k,T]=-ki\tilde{H}^{k-1}, k\in\mathbb{N}$, and we have
\begin{equation}
\label{gradt_xihat3_misto}
\hat{\xi}^{(3)a}\triangledown_at=\frac{\mu_4}{\mu_2}-3\mu_2.
\end{equation}
Due to Prop.~\ref{lem:2Txi(n)xi_misto}, which assumes the canonical conjugation between $T$ and $H$, the even terms explicitly depend, in general, on the arbitrary choice of the estimator $T$, while the odd terms involve commutators between $T$ and $H^k$, $k\in\mathbb{N}$; therefore the odd terms do not depend on the choice of $T$. Thus, putting the results in Eqs.(\ref{qntcrlb_misto2}), (\ref{norm_xihat3_misto}), and (\ref{gradt_xihat3_misto}) together, we obtain the following Heisenberg-like correction to the Cramér-Rao bound in the mixed state case, which only depends on the dynamics of $\xi(t)$ generated by $H$, i.e., the statement of Proposition 1.
Imposing $\xi=\xi^2$ to recover the pure state case, we have $\mu_4 \neq \langle\tilde{H}^4\rangle$, where $\langle\tilde{H}^n\rangle=\tr(\tilde{H}^n\xi\xi)$ denotes the $n$-th moment of the Hamiltonian in the corresponding state $\xi$. The only circumstance where the equality holds is when $\mu_2=\langle\tilde{H}^2\rangle$. Thus, merely imposing that the density operator characterizes a pure state does not reduce Eq.(\ref{qntcrlb_classys_correcao_misto}) to the results in Refs.~\cite{brody1996prl, brody1996royalsoc}, contrary to what was expected \cite{brody2011fasttrack}.
\end{proof}
\section{Method to obtain higher-orders corrections}
\label{sec:algorit_ordem_impar}
In the previous section, we found that the \textit{even} higher order corrections explicitly depend on the choice of the estimator $T$, while, by Prop.~\ref{lem:2Txi(n)xi_misto}, the \textit{odd} higher order corrections are independent of the specific choice of $T$. We will then present an algorithmic way of calculating the odd higher order correction terms, following the steps of Ref.~\cite{algorit_ordem_impar}, for a one-parameter family of states $\xi(t)$ evolving under the von Neumann dynamics.
Let the series of orthogonal vectors be
\begin{equation*}
\left\{ \xi^a, \dot{\xi}^a, \ddot{\xi}^a - (\ddot{\xi}^b\xi_b)\xi^a, \dddot{\xi}^a-\frac{\dddot{\xi}^b\dot{\xi}_b}{\dot{\xi}^c\dot{\xi}_c}\dot{\xi}^a, \dots \right\},
\end{equation*}
which are precisely the vectors $\hat{\xi}^{(n)a}$ given in Prop.~\ref{prop:bhattacharrya_misto}, after dismissing the vanishing terms (e.g. $\dddot{\xi}^a\ddot{\xi}_a = \ddot{\xi}^a\dot{\xi}_a = \dddot{\xi}^a\xi_a=0$). Let us denote this series of vectors by $\{\uppsi^a_n\}$; thus $\uppsi^a_0=\xi^a$, $\uppsi^a_1=\dot{\xi}^a$ and so on.
For each \textit{odd integer\/} $n$, the basis vector $\uppsi^a_n$ is obtained by subtracting from $\xi^{(n)a}$ its components along the $\uppsi^a_k$ with odd $k<n$:
\begin{eqnarray*}
\uppsi^a_1 &=& \dot{\xi}^a \\
\uppsi^a_3 &=&\xi^{(3)a}-\frac{\xi^{(3)b}\uppsi_{1b}}{\uppsi^c_1\uppsi_{1c}}\uppsi^a_1 \\
\uppsi^a_5 &=& \xi^{(5)a}-\frac{\xi^{(5)b}\uppsi_{3b}}{\uppsi^c_3\uppsi_{3c}}\uppsi^a_3 - \frac{\xi^{(5)b}\uppsi_{1b}}{\uppsi^c_1\uppsi_{1c}}\uppsi^a_1.
\end{eqnarray*}
Note that $\triangledown_at=(\tilde{T}_{ab}+\tilde{T}_{ba})\xi^b$, so we can rewrite the generalized bounds in Eq.(\ref{bhattacharrya_misto}) as
\begin{equation}
\label{bhattacharrya_misto_uppsi}
\Delta T^2 + \delta T^2 \geq \frac{1}{2}\sum_n\frac{[\uppsi^a_n(\tilde{T}_{ab}+\tilde{T}_{ba})\xi^b]^2}{g_{cd}\uppsi^c_n\uppsi^d_n}.
\end{equation}
Let $N_n \equiv g_{ab}\uppsi^a_n\uppsi^b_n$ stand for the denominator of the correction terms in Eq.(\ref{bhattacharrya_misto_uppsi}). We have
\begin{equation*}
N_n=\frac{D_{2n}}{D_{2n-4}} , \quad n>2,
\end{equation*}
where $D_{2n}$ is defined by the determinant
\begin{equation*}
D_{2n} =
\left| \matrix{ \mu_{2n}& \mu_{2n-2}& \cdots& \mu_{n+1} \cr
\mu_{2n-2}& \mu_{2n-4}& \cdots& \mu_{n-1} \cr
\vdots & \vdots & \ddots & \vdots \cr
\mu_{n+1} & \mu_{n-1} & \cdots & \mu_{2} \cr} \right|.
\end{equation*}
As examples, we find that $D_{2}=\mu_2$ and $D_6=\mu_6\mu_2-\mu_4^2$. Note that $N_1=\mu_2=2I_{\rho}(\tilde{H})$. The statistical identities in \cite{stuart_kendall} guarantee that $D_{2n}\ge 0$.
Before moving on to the numerator, we define
\begin{equation*}
F_{n,k}\equiv \frac{\xi^{(n)a}\uppsi_{ka}}{\uppsi^b_k\uppsi_{kb}},
\end{equation*}
which has an expression in terms of determinants given by
\begin{equation*}
F_{n,k}=\frac{(-1)^{\frac{1}{2}(n+k)-1}}{D_{2k}}
\left| \matrix{\mu_{n+k} & \mu_{n+k-2} & \dots & \mu_{n+1}\cr
\mu_{2k-2} & \mu_{2k-4} & \dots & \mu_{k-1}\cr
\vdots & & \ddots & \vdots \cr
\mu_{k+1} & \mu_{k-1} & \dots & \mu_{2} \cr} \right| .
\end{equation*}
For example, we have for $k=1,3$
\begin{eqnarray*}
F_{n,3} &=& (-1)^{m+1}\frac{1}{D_6}
\left| \matrix{\mu_{n+3} & \mu_{n+1} \cr
\mu_4 & \mu_2 \cr} \right| \\
F_{n,1} &=& (-1)^m\frac{1}{D_2}\mu_{n+1},
\end{eqnarray*}
where $m=(n-1)/2$.
We now have all the identities necessary to find a recursive relationship yielding the odd higher order corrections. Going back to the numerator, we define $U_n\equiv \uppsi^a_n(\tilde{T}_{ab}+\tilde{T}_{ba})\xi^b$. Using $F_{n,k}$, the expression for $\uppsi^a_n$ reads
\begin{equation*}
\uppsi^a_n=\xi^{(n)a}-\sum\limits^{n-2}_{k=1,3,5,\dots}F_{n,k}\uppsi^a_k;
\end{equation*}
from the relation above and Prop.~\ref{lem:2Txi(n)xi_misto} follows a recursive formula for $U_n$
\begin{equation}
\label{recursiva_Un}
U_n=(-1)^m\,n\,\mu_{n-1}-\sum\limits^{n-2}_{k=1,3,5,\dots}F_{n,k}U_k,
\end{equation}
where $U_1=1$. Thus, the uncertainty relation in Eq.(\ref{bhattacharrya_misto_uppsi}) can be rewritten as
\begin{equation}
\label{correcao_superior_impares}
(\Delta T^2+\delta T^2)(\Delta H^2-\delta H^2)\ge \frac{1}{4}\sum\limits^{n}_{k=1,3,5,\dots}\frac{\mu_2U^2_k}{N_k}.
\end{equation}
Using the relations given in Eqs.(\ref{recursiva_Un}) and (\ref{correcao_superior_impares}), after some algebraic manipulations, we can obtain any correction of a higher order purely in terms of $\mu_{2k}$, regardless of the choice of estimator.
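The recursion above is straightforward to implement. The sketch below is our own illustration (the numerical values of the moments are hypothetical, chosen only to be admissible for the order-3 formula); it checks that the recursive procedure reproduces the closed-form correction of Proposition 1.
\begin{verbatim}
import numpy as np

# Evaluate the odd-order corrections from the moments mu_{2k} via D_{2n},
# F_{n,k} and U_n, and compare the order-3 result with Proposition 1.
mu = {0: 1.0, 2: 1.0, 4: 3.5, 6: 14.0}      # hypothetical sample moments

def D(n):                                   # D_{2n} for odd n (D_2 = mu_2)
    size = (n + 1) // 2
    M = [[mu[2 * n - 2 * (i + j)] for j in range(size)] for i in range(size)]
    return float(np.linalg.det(np.array(M)))

def N(k):                                   # N_k = D_{2k}/D_{2k-4}, N_1 = D_2
    return D(k) if k == 1 else D(k) / D(k - 2)

def F(n, k):                                # first row replaced by mu_{n+k}, ...
    size = (k + 1) // 2
    M = [[mu[(n + k if i == 0 else 2 * k - 2 * i) - 2 * j] for j in range(size)]
         for i in range(size)]
    return (-1) ** ((n + k) // 2 - 1) * float(np.linalg.det(np.array(M))) / D(k)

def U(n):                                   # U_n = (-1)^m n mu_{n-1} - sum F_{n,k} U_k
    return ((-1) ** ((n - 1) // 2) * n * mu[n - 1]
            - sum(F(n, k) * U(k) for k in range(1, n - 1, 2)))

def bound(order):                           # (1/4) sum over odd k of mu_2 U_k^2 / N_k
    return 0.25 * sum(mu[2] * U(k) ** 2 / N(k) for k in range(1, order + 1, 2))

prop1 = 0.25 * (1 + (mu[4] - 3 * mu[2] ** 2) ** 2 / (mu[6] * mu[2] - mu[4] ** 2))
print(np.isclose(bound(3), prop1))          # True
\end{verbatim}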
\section{Conclusion}
\label{sec:conclusao}
In this work we approached the quantum statistical estimation problem using mixed states, illustrated by time-energy uncertainty relations, from a geometric perspective. This geometrical view of the space of states allows us to make a clear distinction from the pure state scenario previously reported in the literature. We obtained a Cram\'{e}r-Rao bound independent of the choice of the estimator and analyzed the corrections that emerge naturally. A methodology to obtain higher-order corrections is also presented and, consequently, a path for extensions of the main result is proposed. Contrary to what was expected, when we imposed $\rho = \rho^2$, there was no reduction to the known bound for the pure state case.
It is important to note that the square root embedding used in the present work, $\rho\rightarrow \sqrt{\rho}$, is related to the WYSI metric which, in turn, has its Riemannian connection identified with the single $\alpha$-connection of the same type. This fact makes the WYSI a good choice of metric to work over a geometric structure of the state space, since it presents this privileged Riemannian structure from the point of view of $\alpha$-connections. The choice of this embedding was also motivated by the possibility of recovering the $1/4$ factor in the Cram\'{e}r-Rao bound.
To conclude our discussions in the context of single parameter estimation, it is important to remember that the saturation of the Cram\'{e}r-Rao bound can only be achieved when considering two important features: $(i)$ the asymptotic limit of a large number of probes and $(ii)$ performing an optimal measurement given by the eigenbasis of the symmetric logarithmic derivative. Usually in a laboratory, where the experimentalist has access to a limited number of probes, corrections to the bound gain importance, as they provide tighter estimates of the attainable estimation precision. Here we investigated these corrections approaching the problem from an information-geometric point of view, and we obtained that, in the context of mixed state quantum estimation, the Wigner-Yanase skew information constitutes a natural metric on the space of states. The higher-order corrections to the Cram\'{e}r-Rao inequality obtained from such a metric then determine the tightness of the bound for practical purposes. Our work provides advances towards understanding the mixed state estimation paradigm and its practical realization in quantum sensing and metrology.
\section{Note added}
During the preparation of this manuscript, we became aware of related work by A.~J.~Belfield and D.~C.~Brody \cite{brody2020} where higher-order corrections to quantum estimation bounds based on the Wigner-Yanase skew information metric are also discussed.
\ack We thank Dorje C. Brody for careful reading of the manuscript and for fruitful discussions. The project was funded by Brazilian funding agencies CNPq (Grant No. 307028/2019-4), FAPESP (Grant No. 2017/03727-0), Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior - Brasil (CAPES) (Finance Code 001), by the Brazilian National Institute of Science and Technology of Quantum Information (INCT/IQ), and by the European Research Council (ERC StG GQCOP Grant No.~637352).
\section*{References}
\end{document} |
\begin{document}
\title{The arithmetic of trees}
\author{Adriano Bruno}
\address{Department of Mathematics and Statistics\\Lederle Graduate Research Tower\\ University of Massachusetts\\Amherst, MA 01003-9305}
\email{[email protected]}
\author{Dan Yasaki}
\address{Department of Mathematics and Statistics\\Lederle Graduate Research Tower\\ University of Massachusetts\\Amherst, MA 01003-9305}
\email{[email protected]}
\date{}
\thanks{The original manuscript was prepared with the \AmS-\LaTeX\ macro
system and the \Xy-pic\ package.}
\keywords{arithmetree, planar binary trees}
\subjclass{Primary 05C05, Secondary 03H15}
\begin{abstract}
The arithmetic of the natural numbers $\mathbb N$ can be extended to arithmetic operations on planar binary trees. This gives rise to a non-commutative arithmetic theory. In this exposition, we describe this \emph{arithmetree}, first defined by Loday, and investigate prime trees.
\end{abstract}
\maketitle
\begin{section}{Introduction}\label{chapter:introduction}
J.-L. Loday recently published a paper \emph{Arithmetree} \cite{Lo}, in which he defines arithmetic operations on the set $\mathbb{Y}$ of groves of planar binary trees. These operations extend the usual addition and multiplication on the natural numbers $\mathbb N$ in the sense that there is an embedding $\mathbb N \hookrightarrow \mathbb{Y}$, and the multiplication and addition he defines become the usual ones when restricted to $\mathbb N$. Loday's reasons for introducing these notions have to do with intricate algebraic structures known as dendriform algebras \cite{Lodend}.
Since the arithmetic extends the usual operations on $\mathbb N$, one can ask many of the same questions that arise in the natural numbers. In this exposition, we examine notions of primality, specifically studying \emph{prime trees}. We will see that all trees of prime degree must be prime, but many trees of composite degree are also prime. One should not, however, be misled by the fact that arithmetree extends the usual arithmetic on $\mathbb N$: away from the image of $\mathbb N$ in $\mathbb{Y}$, the arithmetic operations $+$ and $\times$ are non-commutative. Both operations are associative, but multiplication is only distributive on the left with respect to $+$. In the end it is somewhat surprising that there is a very natural copy of $\mathbb N$ inside $\mathbb{Y}$.
The paper is organized as follows. Sections~\ref{sec:defn}--\ref{sec:times} summarize without proofs the results that we need from \cite{Lo}. Specifically, basic definitions are given in Section~\ref{sec:defn} to set notation. The embedding $\mathbb N \hookrightarrow \mathbb{Y}$ is given in Section~\ref{sec:natural}, and Section~\ref{sec:basicoperations} discusses the basic operations on groves. Sections~\ref{sec:add} and \ref{sec:times} define the arithmetic on $\mathbb{Y}$. Finally, Section~\ref{sec:results} discusses some new results and Section~\ref{sec:final} gives a few final remarks.
These results grew out of an REU project in the summer of 2007 at the University of Massachusetts at Amherst, and the authors thank the REU program for its support. The second author would like to thank Paul Gunnells for introducing him to this very interesting topic, as well as for all the help with typesetting and computing.
\end{section}
\begin{section}{Background}\label{sec:defn}
In this section, we give the basic definitions and set notation.
\begin{defn}
A \emph{planar binary tree} is an oriented planar graph drawn in the plane with one root, $n+1$ leaves, and $n$ interior vertices, all of which are trivalent.\end{defn} Henceforth, by \emph{tree}, we will mean a planar binary tree. We consider two trees to be the same if one can be deformed into the other within the plane. Thus we can always represent a tree by drawing a root and then having it
``grow'' upward. The \emph{degree} is the number of
internal vertices. See Figure \ref{fig:tree} for an example of a tree of degree four.
\begin{figure}
\caption{A planar binary tree of degree four.}
\label{fig:tree}
\end{figure}
Let $Y_{n}$ be the set of trees of degree $n$. For example,
\begin{gather*}
Y_{0} = \{\one \},\quad
Y_{1} = \{\oneone \},\quad
Y_{2} = \{\twoone, \onetwo \},\quad\text {and}\quad Y_{3} = \{\threeone, \onethree, \twotwo, \onetwoonea, \onetwooneb \}.
\end{gather*}
One can show that the cardinality of $Y_{n}$ is given by the $n^{th}$ \emph{Catalan} number,
\[c_n= \frac{1}{n+1} \binom{2n}{n}=\frac{(2n)!}{(n+1)!n!}.\]
The Catalan numbers arise in a variety of combinatorial problems \cite{Stan}.\footnote{Stanley currently gives 161 combinatorial interpretations of $c_n$.}
\begin{defn}
A nonempty subset of $Y_{n}$ is called a \emph{grove}. The set of all
groves of degree $n$ is denoted by $\mathbb{Y}_{n}$. \end{defn} For example,
\[
\mathbb{Y}_{0} = \{\one \}, \quad
\mathbb{Y}_{1} =\{\oneone \}, \quad \text{and}\quad \mathbb{Y}_{2} = \{\twoone, \onetwo, \twoone\cup \onetwo \}.
\]
Notice that we are omitting the braces around the sets in $\mathbb{Y}_{n}$ and use instead $\cup$ to denote the subsets. For example we write $\twoone\cup \onetwo$ as opposed to $\{\twoone,\onetwo\}$ to denote the grove in $\mathbb{Y}_2$ consisting of both trees of degree $2$. Let $\mathbb{Y}=\bigcup_{n \in \mathbb N} \mathbb{Y}_n$ denote the set of all groves. By definition groves consist of trees of the same degree; hence we get a well-defined notion of degree \begin{equation}\deg:\mathbb{Y}\to \mathbb N.\end{equation}
The Catalan numbers $c_n$ grow rapidly. Since $\mathbb{Y}_n$ is the set of subsets of $Y_n$, we see that the cardinality $\#\mathbb{Y}_n = 2^{c_n}-1$ grows extremely fast, necessitating the use of computers even for computations on trees of fairly small degree.
\begin{table}[htb]
\caption{\label{tab:size}Number of trees and groves of degree $n \leq 7$.}
\begin{tabular}{|r|r|r|}
\hline
$n$&$\#Y_n$&$\#\mathbb{Y}_n$\\\hline
1& 1& 1\\
2& 2& 3\\
3& 5& 31\\
4& 14& 16383\\
5& 42& 4398046511103\\
6& 132& 5444517870735015415413993718908291383295\\
7& 429& $\sim 1.386 \times 10^{129}$\\\hline
\end{tabular}
\end{table}
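For illustration (this check is not part of the original computations), the entries of Table~\ref{tab:size} can be reproduced with a few lines of Python:
\begin{verbatim}
from math import comb  # Python 3.8+

# The n-th Catalan number c_n counts the trees of degree n, and the
# nonempty subsets of Y_n, i.e. the groves, number 2^{c_n} - 1.
for n in range(1, 8):
    c_n = comb(2 * n, n) // (n + 1)
    print(n, c_n, 2**c_n - 1)
\end{verbatim}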
\end{section}
\begin{section}{The natural numbers}\label{sec:natural}
In this section we give an embedding of $\mathbb N$ into $\mathbb{Y}$. There is a distinguished grove in each degree $n$, given by the set of all trees of degree $n$.
\begin{defn}
The \emph{total grove of degree $n$} is defined by $\ul{n}=\bigcup_{x\in Y_{n}} x$.
\end{defn}
For example
\[\ul{0}=\one, \quad \ul{1}=\oneone, \quad \ul{2}=\onetwo \cup \twoone,\quad \text{and} \quad \ul{3}=\threeone \cup \onetwoonea \cup \onetwooneb \cup \onethree \cup \twotwo \]
This gives an embedding $\mathbb N \hookrightarrow \mathbb{Y}$. It is clear that the degree map is a one-sided inverse in the sense that $\deg(\ul{n})=n$ for all $n \in \mathbb N$. We will see in Section~\ref{sec:properties} that under this embedding, \emph{arithmetree} can be viewed as an extension of arithmetic on $\mathbb N$.
\end{section}
\begin{section}{Basic operations}\label{sec:basicoperations}
In this section we define a few operations that will be used to define the arithmetic on $\mathbb{Y}$.
\subsection{Grafting}
\begin{defn}
We say that a tree $z$ is obtained as the \emph{graft} of $x$ and
$y$ (notation: $z=x\vee y$) if $z$ is formed by attaching the root of $x$ to the
left leaf and the root of $y$ to the right leaf of $\oneone$.
\end{defn} For example, $\twoone = \oneone \vee \one$ and $\twotwo = \oneone \vee \oneone$.
It is clear that every tree $x$ of degree $n\geq 1$ can be obtained as the graft of trees $x^l$ and $x^r$ of degree less than $n$. Specifically, we have that $x=x^l \vee x^r$. We refer to these subtrees as the \emph{left} and \emph{right parts} of $x$.
Given a tree $x$ of degree $n$, one can create a tree of degree $n+1$ that carries much of the structure of $x$ by grafting on $\ul{0}=\one$. Indeed, there are two such trees, $x \vee \ul{0}$ and $\ul{0} \vee x$. We will say that such trees are \emph{inherited}.
\begin{defn}
A tree $x$ is said to be \emph{left-inherited} if $x^r=\ul{0}$ and \emph{right-inherited} if $x^l=\ul{0}$. A grove is \emph{left-inherited} (resp. \emph{right-inherited}) if each of its member trees is \emph{left-inherited} (resp. \emph{right-inherited}).
\end{defn}
We single out two special sequences of trees $L_n$ and $R_n$.
\begin{defn}
Let $L_1=R_1=\ul{1}$. For $n>1$, set $L_n=L_{n-1}\vee \ul{0}$ and $R_n=\ul{0} \vee R_{n-1}$. We will call such trees \emph{primitive}.
\end{defn}
Notice that $L_n$ is the left-inherited tree such that $L_n^l=L_{n-1}$. Similarly, $R_n$ is the right-inherited tree such that $R_n^r=R_{n-1}$.
\subsection{Over and under}
\begin{defn}
For $x\in Y_p$ and $y\in Y_q$ the tree $x/y$
(read $x$ {\it over} $y$) in $Y_{p+q}$ is obtained by identifying the
root of $x$ with the leftmost leaf of $y$. Similarly, the tree
$x\backslash y$ (read $x$ {\it under} $y$) in $Y_{p+q}$ is obtained by
identifying the rightmost leaf of $x$ with the root of $y$.
\end{defn}
For example, $\onetwo / \oneone = \onetwoonea$ and $\twoone \backslash
\oneone = \twotwo$.
\subsection{Involution}
The symmetry around the axis passing through the root defines an involution $\sigma$ on each $Y_n$. For example, $\sigma(\twotwo)=\twotwo$ and $\sigma(\twoone)=\onetwo$. The involution can be extended to an involution on $\mathbb{Y}$, by letting $\sigma$ act on each tree in the grove. One can easily check that for trees $x,y$,
\begin{enumerate}
\item $\sigma(x \vee y)=\sigma(y)\vee \sigma(x)$,
\item $\sigma(x/y)=\sigma(y)\backslash \sigma(x)$, and
\item $\sigma(x \backslash y)=\sigma(y)/\sigma(x)$.
\end{enumerate}
We will see that this involution also respects the arithmetic of groves.
\end{section}
\begin{section}{Addition}\label{sec:add}
Before we define addition, we first put a partial ordering on $Y_n$.
\subsection{Partial ordering}
We say that the inequality $x<y$ holds if $y$ is obtained from
$x$ by moving edges of $x$ from left to right over a vertex. This induces a partial ordering on $Y_n$ by imposing
\begin{enumerate}
\item $(x \vee y)\vee z \leq x \vee (y\vee z)$
\item If $x < y$ then $x \vee z < y \vee z$ and $z \vee x < z \vee y$ for all $z \in Y_n$.
\end{enumerate}
For example, $\threeone < \onetwoonea < \onetwooneb < \onethree$. Note that the primitive trees are extremal elements with respect to this ordering.
\subsection{Sum}
\begin{defn}
The {\it sum} of two trees $x$ and
$y$ is the following disjoint union of trees
\[ x + y := \bigcup_{x/y \le z \le x\backslash y} z\ . \]
\end{defn}
All the elements in the sum have the same degree, which happens to be $\deg(x) + \deg(y)$. Thus we can extend the definition of addition to groves by distributing. Namely, for groves $x=\bigcup_i x_i$ and $y=\bigcup_j y_j$,
\begin{equation}\label{eq:addunion}
x + y := \bigcup_{ij}\, (x_i+y_j).\end{equation}
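For example, the over and under operations give $\oneone/\oneone=\twoone$ and $\oneone\backslash\oneone=\onetwo$, and since $\twoone\leq\onetwo$ in $Y_2$, the interval consists of all of $Y_2$, so that
\[
\oneone+\oneone=\twoone\cup\onetwo=\ul{2}.
\]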
\begin{prop}[Recursive property of addition]\label{prop:recursiveadd}
Let $x=x^{l}\vee x^{r}$ and $y=y^{l}\vee y^{r}$ be non-zero trees. Then
\[
x + y = x^l\vee (x^r + y)\, \cup \, (x+y^l)\vee y^r.
\]
\end{prop}
Note that the recursive property of addition says that the sum of two trees $x$ and $y$ is naturally a union of two sets, which we call the \emph{left} and \emph{right sum} of $x$ and $y$:
\begin{equation}\label{eq:lsrs}
x\dashv y = x^l\vee (x^r + y)\quad \text{and} \quad x \vdash y =(x+y^l)\vee y^r.\footnote{We set $x \vdash \ul{0} = \ul{0} \dashv y = \ul{0}$.}
\end{equation}
Note that $x+y = x\dashv y \cup x \vdash y$. You can think about this as
splitting the plus sign $+$ into two signs $\dashv $ and $\vdash $. From \eqref{eq:addunion} and the definition, we see that the definition for left sum and right sum can also be extended to groves by distributing.
With the definition of inherited trees/groves and \eqref{eq:lsrs}, one can easily check that left (respectively right) inheritance is passed along via right (respectively left) sums. More precisely,
\begin{lem}\label{lem:inherit}
Let $y$ be a left-inherited tree. Then $x \vdash y$ is left-inherited. Similarly, if $x$ is right-inherited, then $x \dashv y$ is right-inherited.
\end{lem}
\subsection{Universal expression}
It turns out that every tree can be expressed as a combination of left and right sums of $\oneone$. This expression is unique modulo the failure of the left and right sums to be associative. More precisely,
\begin{prop}
Every tree $x$ of degree $n$ can be written as an iterated Left
and Right sum of $n$ copies of $\oneone$. This is called the
\emph{universal expression} of $x$, and we denote it by $w_{x}
(\oneone)$.
This expression is unique modulo
\begin{enumerate}
\item $(x \dashv y) \dashv z = x \dashv (y+z)$,
\item $(x \vdash y) \dashv z = x \vdash (y \dashv z)$, and
\item $(x+y) \vdash z = x \vdash (y \vdash z)$.
\end{enumerate}
\end{prop}
For example
\[\twoone = \oneone\vdash \oneone \quad\text{and}\quad \twotwo = \oneone \vdash \oneone \dashv \oneone.
\]
Loday gives an algorithm for computing the universal expression of a tree $x$.
\begin{prop}[Recursive property for universal expression]\label{prop:universal}
Let $x$ be a tree of degree greater than $1$. The algorithm for determining $w_{x} (\oneone)$ is given through the
recursive relation
\[
w_{x} (\oneone) = w_{x^{l}} (\oneone) \vdash \oneone \dashv w_{x^{r}} (\oneone).
\]
\end{prop}
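For example, since $\twotwo=\oneone\vee\oneone$, the recursion gives
\[
w_{\twotwo}(\oneone)=w_{\oneone}(\oneone)\vdash\oneone\dashv w_{\oneone}(\oneone)=\oneone\vdash\oneone\dashv\oneone,
\]
recovering the universal expression displayed above.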
\end{section}
\begin{section}{Multiplication}\label{sec:times}
Essentially, we define the multiplication to distribute on the left over the universal expression.
\begin{defn}
The product $x\times y$ is
defined by
\[
x\times y = w_{x} (y).
\]
\end{defn}
This means to compute the product $x\times y$, first compute the
universal expression for $x$, then replace each occurrence of $\oneone$
by the tree $y$, then compute the resulting Left and Right sums. For example, one can easily check that $\twoone = \oneone\vdash \oneone$. This means for any tree $y$, $\twoone \times y = y \vdash y$. In particular
$\twoone\times \onetwo = \onetwo \vdash \onetwo$ is the tree shown in Figure~\ref{fig:tree}.
Note that the definition of $x \times y$ as stated still makes sense if $y$ is a grove. We can further extend the definition of multiplication to the case when $x$ is a grove by declaring multiplication to be distributive on the left over disjoint unions: \[(x \cup x') \times y = x \times y \cup x' \times y =w_x(y) \cup w_{x'}(y).\]
\end{section}
\begin{section}{Properties}\label{sec:properties}
We list a few properties of \emph{arithmetree}.
\begin{itemize}
\item The addition $+: \mathbb{Y} \times \mathbb{Y} \to \mathbb{Y}$ is associative, but not commutative.
\item The multiplication $\times: \mathbb{Y} \times \mathbb{Y} \to \mathbb{Y}$ is associative, but not commutative. It is distributive on the left with respect to $+$, but it is not right distributive.
\item There is an injective map $\mathbb N \hookrightarrow \mathbb{Y}$, $n \mapsto \ul{n}$ (defined in Section~\ref{sec:natural}) that respects the arithmetic. Namely,
\[\ul{m+n}=\ul{m}+\ul{n}\quad \text{and}\quad \ul{mn}=\ul{m}\times\ul{n} \quad \text{for all $m,n \in \mathbb N$.}\]
\item Degree gives a surjective map $\deg:\mathbb{Y} \to \mathbb N$ that respects the arithmetic and is a one-sided inverse to the injection above. For every $x,y \in \mathbb{Y}$,
\[
\deg(x+y)=\deg(x)+\deg(y) \quad \text{and}
\quad \deg(x\times y)=\deg(x)\deg(y).\]
\item $\deg(\ul{n})=n$ for all $n \in \mathbb N$.
\item The neutral element for $+$ is $\ul{0}=\one$.
\item The neutral element for $\times$ is $\ul{1}=\oneone$.
\item The involution $\sigma$ satisfies
\[\sigma(x+y)=\sigma(y)+\sigma(x) \quad \text{and} \quad \sigma(x\times y)=\sigma(x) \times \sigma(y).\]
\end{itemize}
\end{section}
\begin{section}{Results}\label{sec:results}
The recursive properties of addition and multiplication allowed us to implement arithmetree on a computer using gp/PARI \cite{gp}. The computational experimentation was done using Loday's naming convention for trees \cite{Lo}.
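For illustration (the sketch below is in Python and is independent of the gp/PARI implementation), addition of groves can be programmed directly from Proposition~\ref{prop:recursiveadd}, encoding a tree as a nested pair, with \texttt{None} standing for $\ul{0}$, and a grove as a set of such pairs:
\begin{verbatim}
def graft(x, y):
    # the graft x v y of two trees
    return (x, y)

def add_trees(x, y):
    # recursive property: x + y = x^l v (x^r + y)  union  (x + y^l) v y^r
    if x is None:
        return {y}
    if y is None:
        return {x}
    xl, xr = x
    yl, yr = y
    left_sum = {graft(xl, t) for t in add_trees(xr, y)}    # x -| y
    right_sum = {graft(t, yr) for t in add_trees(x, yl)}   # x |- y
    return left_sum | right_sum

def add_groves(X, Y):
    # extend + to groves by distributing over unions
    return {t for x in X for y in Y for t in add_trees(x, y)}

one = (None, None)                  # the unique tree of degree 1
print(add_trees(one, one))          # the two trees of degree 2
print(len(add_groves({one}, add_trees(one, one))))   # prints 5 = c_3
\end{verbatim}
The last line checks that $\ul{1}+\ul{2}$ consists of all $c_3=5$ trees of degree $3$, as the embedding of $\mathbb N$ into $\mathbb{Y}$ requires.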
\subsection{Counting trees}
Since each grove $x \in \mathbb{Y}$ is just a subset of trees, there is another measure of the ``size'' of $x$ other than degree.
\begin{defn}
Let $x \in \mathbb{Y}$ be a grove. The \emph{count} of $x$, denoted $C(x)$ is defined as the cardinality of $x$.
\end{defn}
It turns out that the count function gives a coarse measure of how complicated a grove $x$ is in terms of arithmetree. Namely, if $x$ is the sum (resp. product) of other groves, then the count of $x$ is at least as large as the count of any of the summands (resp. factors).
\begin{lem}\label{lem:lsrs}
Let $x ,y \in \mathbb{Y}$ be two non-zero groves. Then
\begin{enumerate}
\item $C(x \dashv y) \geq C(x)C(y)$, with equality if and only if $x$ is a left-inherited grove. \label{it:ls}
\item $C(x \vdash y) \geq C(x)C(y)$, with equality if and only if $y$ is a right-inherited grove.\label{it:rs}
\end{enumerate}
\end{lem}
\begin{proof}
We first consider \eqref{it:ls}. Since $\dashv $ is distributive over unions, it suffices to prove the case when $x$ and $y$ are trees. Namely, we must show that for all non-zero trees $x$ and $y$, $C(x \dashv y) \geq 1$, with equality if and only if $x$ is a left-inherited tree. It is immediate that $C(x \dashv y) \geq 1$; it remains to show that equality is only attained when $x$ is left-inherited. From the definition of left sum, $x \dashv y =x^l \vee (x^r + y)$. If $x$ is not left-inherited, then $x^r \neq \ul{0}$ and
\begin{align*}
C(x \dashv y) &= C(x^l \vee (x^r + y))\\
&=C(x^r + y)\\
&=C(x^r\dashv y \cup x^r \vdash y)\\
&=C(x^r\dashv y)+ C(x^r \vdash y)\\
&>1.
\end{align*}
On the other hand, if $x$ is left-inherited, then $x^r = \ul{0}$ and
\[C(x \dashv y) = C(x^l \vee (x^r + y))=C(x^l\vee y)=1.\]
Item \eqref{it:rs} follows similarly.
\end{proof}
\begin{prop}\label{prop:sp}
Let $x ,y \in \mathbb{Y}$ be two non-zero groves. Then
\begin{enumerate}
\item $C(x + y) \geq 2C(x)C(y)$, with equality if and only if $x$ is a left-inherited and $y$ is right-inherited.\label{it:s}
\item $C(x \times y) \geq C(x)C(y)^{\deg(x)}$. \label{it:p}
\end{enumerate}
\end{prop}
\begin{proof}
Since $x+y=x \dashv y \cup x\vdash y$, \eqref{it:s} follows immediately from Lemma~\ref{lem:lsrs}. For \eqref{it:p}, we note that multiplication is left distributive over unions, and so it suffices to prove the case when $x$ is a tree. Namely we must show that for a tree $x$ and a grove $y$, $C(x \times y) \geq C(y)^{\deg(x)}$.
Let $w_x$ be the universal expression of the tree $x$. Then $x \times y=w_x(y)$ is some combination of left and right sums of $y$. By distributivity of left and right sum over unions and repeated usage of Lemma~\ref{lem:lsrs}, the result follows.
\end{proof}
\subsection{Primes}
\begin{defn}
A grove $x$ is said to be \emph{prime} if $x$ is not the product of two groves different from $\ul{1}$.
\end{defn}
Since $\deg(x \times y)=\deg(x)\deg(y)$ for all groves $x,y$, it is immediate that any grove of prime degree is prime. However, there are also prime groves of composite degree. For example, by taking all possible products of elements of $\mathbb{Y}_2$, one can check by hand that the primitive tree $L_4$ is a prime grove of degree $4$.
We turn our focus to prime trees, which are prime groves with count equal to $1$. It turns out that composite trees have a nice description in terms of inherited trees. Namely, a composite tree must have an inherited tree as a right factor and a primitive tree as a left factor.
\begin{thm}\label{thm:factor}
Let $z$ be a composite tree of degree $n$. Then there exists a proper divisor $d\neq 1$ of $n$ and a tree $T \in Y_{d-1}$ such that
\[z=L_{n/d} \times (\ul{0} \vee T) \quad \text{or} \quad z=R_{n/d} \times (T \vee \ul{0})\]
\end{thm}
\begin{proof}
Let $z=x\times y$ be a composite tree of degree $n$. By Proposition~\ref{prop:sp}, $x$ and $y$ must also be trees. Since $n=\deg(z)=\deg(x)\deg(y)$, it follows that there exists a proper divisor $d\neq 1$ of $n$ such that $\deg(y)=d$ and $\deg(x)=n/d$.
We proceed by induction on the degree of $x$. Suppose $x$ is a tree of degree $2$. Then $x=\oneone\dashv \oneone$ or $x=\oneone\vdash \oneone$. If $x=\oneone\vdash \oneone$, then $x=L_2$ is primitive and
\[1=C(x\times y) =C(y \vdash y).\]
From Lemma~\ref{lem:lsrs}, it follows that $y$ is right-inherited. Similarly, if $x=\oneone \dashv \oneone$, then $x=R_2$ and $y$ is left-inherited.
Now suppose $x$ is a tree of degree $k$ such that $x \times y$ is a tree of degree $n$. From Proposition~\ref{prop:universal} and the definition of multiplication, it follows that
\begin{align*}
x \times y &=w_x(y)\\
&=w_{x^l}(y) \vdash y \dashv w_{x^r}(y)\\
&=(x^l \times y) \vdash y \dashv (x^r \times y).
\end{align*}
Suppose $x^r \neq \ul{0}$. Then $x^r \times y \neq \ul{0}$ and $C(y \dashv (x^r \times y))=1$. Then by Lemma~\ref{lem:lsrs}, $y$ is left-inherited. Let $T=y \dashv (x^r \times y)$. By Lemma~\ref{lem:inherit}, $T$ is also left-inherited. Since $C((x^l \times y) \vdash T)=1$ and $T \neq \ul{0}$, we must have that either $T$ is also right-inherited, or $(x^l \times y)=\ul{0}$. The only tree that is both left and right-inherited is the tree $\ul{1}=\oneone$. It follows that $(x^l \times y)=\ul{0}$, and hence $x^l=\ul{0}$. By the inductive hypothesis, $x^r$ is the primitive tree $R_{k-1}$, and hence $x=R_k$.
Now suppose $x^r =\ul{0}$. Then $x^l \neq \ul{0}$, and an analogous argument shows that $y$ is left-inherited and $x=L_k$.
\end{proof}
From this theorem, we get a nice picture of composite trees as the trees with a particular shape. More precisely, one computes that the products $L_k \times (\ul{0} \vee T)$ and $R_k \times (T \vee \ul{0})$ have the forms given in Figure~\ref{fig:composite}. It follows that the primitive trees ($L_k$ and $R_k$) and the inherited trees ($\ul{0}\vee T$ and $T \vee \ul{0}$) are prime. More precisely,
\begin{prop}\label{prop:unique}
A non-zero tree is either $\oneone$, prime, or the product of exactly two prime trees. Furthermore, the factors are exactly the ones given in Theorem~\ref{thm:factor}, and can be read off from the shape of the tree.
\end{prop}
\begin{figure}
\caption{The shapes of the composite trees $L_k \times (\ul{0} \vee T)$ and $R_k \times (T \vee \ul{0})$.}
\label{fig:composite}
\end{figure}
As a consequence of Proposition~\ref{prop:unique}, one has the following combinatorial formula.
\begin{cor}
Let $a_n$ denote the number of composite trees of degree $n$. Then
\[\frac{a_n}{2} = \sum_{d \mid n,\; 1<d<n}c_{d-1},\quad \text{where $c_d$ is the $d^{th}$ Catalan number}.\]
\end{cor}
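For example, for $n=4$ the only proper divisor $d\neq 1$ is $d=2$ and $Y_{d-1}=Y_1=\{\oneone\}$, so by Theorem~\ref{thm:factor} the composite trees of degree $4$ are exactly $L_2\times(\ul{0}\vee\oneone)=L_2\times\onetwo$ and $R_2\times(\oneone\vee\ul{0})=R_2\times\twoone$, giving $a_4=2=2c_1$.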
\end{section}
\begin{section}{Final remarks}\label{sec:final}
\subsection{Unique factorization}
In \cite{Lo}, Loday conjectures that arithmetree possesses unique factorization. Namely, when a grove $x$ is written as a product of prime groves, the ordered sequence of factors is unique. Very narrowly interpreted, this statement is false. For example, since multiplication in $\mathbb N$ is commutative and multiplication in $\mathbb{Y}$ extends arithmetic on $\mathbb N$, we see that for $n\in \mathbb N$, if $n=p_1p_2 \cdots p_k$, then $\ul{n}=\ul{p_{\sigma(1)}}\times\ul{p_{\sigma(2)}}\times \cdots \times \ul{p_{\sigma(k)}}$ for any permutation $\sigma$. However, away from the image of $\mathbb N$ in $\mathbb{Y}$, the conjecture does appear to hold in this narrow sense. Specifically, computer experimentation on groves of degree up to $12$ yielded a unique ordered sequence of prime factors for each grove outside of the image of $\mathbb N$ in $\mathbb{Y}$.
If we interpret the image of $\mathbb N$ in $\mathbb{Y}$ in terms of the count function, we see that it is precisely the set of groves with maximal count;
\[\mathbb{Y}^{\max}=\bigcup_{n \in \mathbb N}\{x \in \mathbb{Y}_n\;|\;C(x)=c_n\}.\]
This subset $\mathbb{Y}^{\max}$ possesses unique factorization up to permutation of the factors. On the other extreme, the trees are precisely the set of groves with minimal count;
\[\mathbb{Y}^{\min}=\bigcup_{n \in \mathbb N}\{x \in \mathbb{Y}_n\;|\;C(x)=1\}.\]
It follows from Proposition~\ref{prop:unique} that $\mathbb{Y}^{\min}$ possesses unique factorization in the narrow sense. The question of unique factorization for all of $\mathbb{Y}$ is open.
\subsection{Additively irreducible}
From Proposition~\ref{prop:sp} we see that not every grove can be written as a sum of groves. In fact it is easy to see that every tree is \emph{additively irreducible} in the sense that it cannot be written as the sum of two groves. It would be interesting to study additively irreducible groves. In an analogue to the question of unique factorization, one could ask if arithmetree possesses \emph{unique partitioning}. Namely, when a grove is written as a sum of additively irreducible elements, is the ordered sequence of summands unique?
\end{section}
\end{document} |
\begin{document}
\title{Routing on trees}\parskip=5pt plus1pt minus1pt \parindent=0pt
\author{Maria Deijfen\thanks{Stockholm University, Sweden; {\tt [email protected]}} \and Nina Gantert\thanks{Technische Universit\"{a}t M\"{u}nchen, Germany ; {\tt [email protected]}}}
\date{July 9, 2014}
\maketitle
\begin{abstract}
\noindent We consider three different schemes for signal routing on a tree. The vertices of the tree represent transceivers that can transmit and receive signals, and are equipped with i.i.d.\ weights representing the strength of the transceivers. The edges of the tree are also equipped with i.i.d.\ weights, representing the costs for passing the edges. For each one of our schemes, we derive sharp conditions on the distributions of the vertex weights and the edge weights that determine when the root can transmit a signal over arbitrarily large distances.
\noindent
\noindent \emph{Keywords:} Trees, transmission, first passage percolation, branching random walks, Markov chains.
\noindent AMS 2010 Subject Classification: 60K37, 60J80, 60J10.
\end{abstract}
\section{Introduction}
Let $\mathcal{T}$ be a rooted infinite $m$-ary tree and assign i.i.d.\ weights $\{R_x\}$ to the vertices of $\mathcal{T}$ and i.i.d.\ weights $\{C_e\}$ to the edges. Assume that $\{R_x\}$ is independent of $\{C_e\}$. We think of the vertices as representing transceivers that can receive and transmit signals. The vertex weights represent the strength or range of the transceivers and the edge weights represent the cost or resistance when traversing the edges. We study three different schemes for signal routing in $\mathcal{T}$ and, for each of these schemes, we investigate when the root can transmit a signal over arbitrarily large distances. More specifically, write $O$ for the set of vertices that are reached by a signal transmitted by the root, and say that a scheme can transmit indefinitely if $|O|=\infty$ with positive probability. Our main results are sharp conditions on the distributions of $R$ and $C$ that determine when the respective routing schemes can transmit indefinitely. Here and throughout the paper, $R$ and $C$ denote random variables with the laws of $R_x$ and $C_e$, respectively.
Write $\Gamma_{x,y}$ for the path between the vertices $x$ and $y$ in $\mathcal{T}$, and write $y>x$ if $y$ is located in the subtree below $x$ in $\mathcal{T}$ (so that $y$ is hence further away from the root than $x$). For each vertex $x$, let $\Lambda_x$ be the set of all vertices $y$ in the subtree below $x$ for which the total cost of the path from $x$ to $y$ does not exceed the range of $x$, that is,
$$
\Lambda_x=\Big\{y>x:\sum_{e\in\Gamma_{x,y}}C_e\leq R_x\Big\}.
$$
We say that the vertices in $\Lambda_x$ are within the range of $x$. The schemes that we will consider are now defined as follows.
\noindent\textbf{Complete routing.} The root 0 first transmits the signal to all vertices in $\Lambda_0$. In the next step, each vertex $x\in \Lambda_0$ forwards the signal to all vertices in $\Lambda_x$, and the signal is then forwarded according to the same rule by each new vertex that is reached by it. Note that edges leading back towards the root are not used in the forwarding process, that is, the transceivers do not forward the signal through the same edge that the signal arrived from. This simplifies the analysis since it implies that whether a signal reaches a vertex $y$ or not is determined only by the configuration on the path between 0 and $y$.
\noindent\textbf{Boundary routing.} For a connected subset $\Omega$ of the vertices in $\mathcal{T}$, with $0\in \Omega$, let $\partial \Omega$ denote the set of vertices in $\Omega$ that have at least one child that is not in $\Omega$. The transmission is initiated in that the root 0 transmits the signal to all vertices in $\Lambda_0$, and the signal is then forwarded stepwise: If the set of vertices that have received the signal after a certain step is $\Omega$, then, in the next step, the signal is forwarded by each $x\in \partial \Omega$ to all vertices $y$ in $\Lambda_x$ such that the path between $x$ and $y$ (excluding $x$) contains only vertices in $\Omega^c$. The difference compared to complete routing is hence that only vertices with neighbors that have not yet heard the signal forward the signal and then only in the direction of these un-informed neighbors.
\noindent\textbf{Augmented routing.} For a vertex $x$ at level $k$ in $\mathcal{T}$, write $0=x_0,\ldots,x_k=x$ for the path from the root to $x$. In the last scheme, when a signal traverses an edge, its strength is reduced by the cost of the edge, and when it passes a transceiver, it is amplified by the strength of the transceiver. The signal hence reaches the vertex $x$ at level $k$ if and only if
$$
\sum_{i=0}^nR_{x_i}\geq\sum_{i=0}^{n}C_{(x_i,x_{i+1})}\quad\mbox{for all }n=0,1,\ldots,k-1.
$$
Write $O_{\scriptscriptstyle{\rm comp}}$, $O_{\scriptscriptstyle{\rm bond}}$ and $O_{\scriptscriptstyle{\rm aug}}$ for the sets of vertices that are reached by a signal transmitted by the root using complete routing, boundary routing and augmented routing, respectively. Clearly, complete routing dominates boundary routing in the sense that $O_{\scriptscriptstyle{\rm bond}}\subset O_{\scriptscriptstyle{\rm comp}}$. Furthermore, augmented routing dominates complete routing in the same sense. Indeed, with augmented routing, the strength of a transceiver may be stored and used at any point in the forwarding process, while in complete routing, a transceiver at $x$ is only effective within $\Lambda_x$. Hence,
\begin{equation}\label{eq:hierarchy}
O_{\scriptscriptstyle{\rm bond}}\subseteq O_{\scriptscriptstyle{\rm comp}}\subseteq O_{\scriptscriptstyle{\rm aug}}\quad \mbox{a.s.}
\end{equation}
Note that, if $R\geq C$ almost surely, then all three schemes can trivially transmit indefinitely, while on the other hand, if $R<C$ almost surely, then a signal has no chance of spreading at all in any of the schemes. Hence the interesting case is when $\{R\geq C\}$ has a non-trivial probability. It is then natural to investigate the possibility of infinite transmission in the schemes and to compare the schemes in this sense. Are there for instance cases when complete routing (and thereby also augmented routing) can transmit indefinitely but not boundary routing? And are there cases when augmented routing, but not boundary routing and complete routing, can transmit indefinitely? Furthermore, one might ask in general what happens when one or both of the variables $R$ and $C$ have power-law distributions. For what values of the exponents is it possible to transmit a signal over arbitrarily large distances? These questions can, and will, be answered by analyzing the conditions for infinite output range derived below.
The paper is organized so that augmented routing is analyzed in Section 2, using tools related to branching random walks. Complete routing and boundary routing are then treated in Sections 3 and 4, respectively, by generalizing the arguments from Section 2. In each section, we also give examples and make the conditions more explicit for certain distribution types. Section 5 contains a summary, further comparison of the derived conditions and some directions for further work. Throughout we assume that $\{R\geq C\}$ has non-trivial probability.
\subsection{Related work}
Probability on trees has been a very active field of probability for the last decades; see e.g.\ \cite{climb} for an introduction and \cite{LyonsPeresbook} for a recent account. The work here is closely related to first passage percolation on trees and tree-indexed Markov chains, see e.g.\ \cite{benjaminiperes94perc, lyonspem}. We also rely on results and techniques for branching random walks, see \cite{Zhan}. Transceiver networks have previously been analyzed in the probability literature in the context of spatial Poisson processes, see \cite{boll}, but the setup there is quite different from ours.
\section{Augmented routing}
We begin by analyzing the augmented routing scheme. To this end, first note that the transmission process can be represented by a process that we will identify below as a killed branching random walk: Define $V_0=0$ for the root and then, for a vertex $y$ that is a child of $x$, let $V_y=V_x+Z_{x,y}$, where $Z_{x,y}=R_x-C_{(x,y)}$. This means that $V_y$ keeps track of the strength of the signal when it arrives at $y$. When $V_y$ takes on a negative value, the process dies at that location and the subtree below $y$ is declared dead.
If $m=1$, we have a random walk, killed when it takes a negative value. Hence, in this case,
$\PP\left(|O_{\scriptscriptstyle{\rm aug}}|=\infty\right) > 0$ if ${\mathbb E}[R] > {\mathbb E}[C]$ and
$\PP\left(|O_{\scriptscriptstyle{\rm aug}}|=\infty\right) = 0$ if ${\mathbb E}[R] \leq {\mathbb E}[C]$ and both expectations are finite. If $R$ and $C$ both have infinite expectations, both scenarios can happen. For the remainder of the section we assume that $m\geq 2$.
A one-dimensional, discrete-time branching random walk may be defined as follows: At the beginning, there is a single particle located at $V_0=0$. Its children, who form the first generation, are positioned according to a certain point process. Each of the particles in the first generation gives birth to new particles that are positioned (with respect to their birth places) according to the same point process; they form the second generation. The system goes then on according to the same mechanism. See for instance \cite{Zhan} for an account of results on this model.
In our case, each particle has $m$ children and the point process of displacements of the children of $x$ consists of $\{Z_{x,y}: y \hbox{ child of } x\}$. Let $\mathcal{V}$ denote the vertex set of the tree. The process starts with $V_0=0$ and, for a vertex $y$ that is a child of $x$, we have $V_y=V_x+Z_{x,y}$, where $\{Z_{x,y}: y \hbox{ child of } x\}_{x \in \mathcal{V}}$ form a collection of i.i.d.\ random variables. Note that, unlike in ``classical'' branching random walk, the displacements $\{Z_{x,y}: y \hbox{ child of } x\}$ are not i.i.d., since, for a fixed $x$, the term $R_x$ appearing in the definition of $Z_{x,y}$ is the same for all children of $x$. Nevertheless, $\{Z_{x,y}: y \hbox{ child of } x\}_{x \in \mathcal{V}}$ are i.i.d.\ and hence $\{V_y\}$ fits in the more general definition of a branching random walk above.
Now kill the branching random walk at $0$, that is, whenever $V_x < 0$, the process dies and the subtree below the vertex $x$ is declared dead. The survival probability in this killed random walk coincides with the probability of infinite transmission for augmented routing, and we would hence like to obtain a condition that determines when the survival probability is strictly positive. To this end, let $Z, Z_1, Z_2,\ldots$ be i.i.d.\ with the same law as $Z_{x,y}$ and let $I(\cdot)$ be the right-side large deviation rate function for $Z$, defined by
\begin{equation}\label{rsrate}
I(s): = \sup\limits_{\lambda \geq 0}[\lambda s - \log E[\exp(\lambda Z)]]\in [0, \infty]\, .
\end{equation}
Then Cram\'er's Theorem implies that
\begin{equation}\label{ld}
\lim\limits_{n \to \infty}\frac{1}{n}\log \PP\left[\frac{Z_1 + \cdots + Z_n}{n} \geq s\right] = -I(s),
\end{equation}
see \cite[Theorem 2.2.3]{dembozeitouni}, and hence $I(s)$ describes deviations ``to the right'' of $s$ (note that $\lambda$ is only running through the non-negative reals). In particular, we have $I(s) =0$ if $s \leq {\mathbb E}[Z]$.
Define
$$
s^* : = \sup\{s: I(s) \leq \log m\}\in (-\infty, \infty]\, .
$$
Note that, since $I(\cdot)$ is convex and non-decreasing, with $I(s) =0$ for $s \leq {\mathbb E}[Z]$, we have that
$s^* > 0$ if and only if $I(0) < \log m$. With this at hand, we can determine when the killed branching random walk which describes the transmission process with augmented routing has a strictly positive survival probability.
\begin{prop}\label{charsurvival}
Let $m \geq 2$. For the survival probability $\alpha=\PP\left(|O_{\scriptscriptstyle{\rm aug}}|=\infty\right)$ of the killed branching random walk, we have $\alpha > 0$ if and only if $s^* > 0$. In particular, $\alpha > 0$ if and only if $I(0) < \log m$. Note that, if ${\mathbb E}[R - C] \geq 0$, then $I(0) = 0$ so that $\alpha > 0$.
\end{prop}
\begin{corr}
If $m \geq 2$, then $\PP\left(|O_{\scriptscriptstyle{\rm aug}}|=\infty\right) > 0$ if and only if
\begin{equation}\label{critaug}
{\mathbb E}[e^{\lambda R}]\cdot {\mathbb E}[e^{-\lambda C}] > \frac{1}{m}\quad \hbox{ for all } \lambda \geq 0\,.
\end{equation}
\end{corr}
The proposition morally follows from Theorem \ref{speedBRW} below, which goes back to J.\ D.\ Biggins, J.\ M.\ Hammersley, J.\ F.\ C.\ Kingman, see \cite{biggins,hammersley,kingman}. For a proof, we also refer to \cite[Theorem 2.1]{Zhan}. However, we will not need Theorem \ref{speedBRW}, but will give a direct proof of Proposition \ref{charsurvival} that we will then apply also for complete routing and boundary routing.
\begin{theorem}[Biggins, Hammersley, Kingman] \label{speedBRW}
For a branching random walk $\{V_x\}$, we have that
\begin{equation}\label{speed}
\lim\limits_{n \to \infty}\frac{1}{n}\max\limits_{x \in \mathcal{V}, |x|=n} V_x = s^*\quad \PP- \rm{a.s.}
\end{equation}
\end{theorem}
\noindent \emph{Proof of Proposition \ref{charsurvival}.} The proof is based on two standard arguments, which we recall since we will use them later. We also refer to \cite{climb}. We first show that the survival probability is 0 if $s^*<0$ by showing that
\begin{equation}\label{UBspeed}
\limsup_{k}\frac{1}{k}\max_{x:|x|=k}V_x\leq s^*\quad \PP- \rm{a.s.}
\end{equation}
Indeed, (\ref{UBspeed}) implies that, if the branching random walk is killed at the ``linear barrier'' $sk$ with $s > s^*$ (i.e.\ all vertices $x_k$ at distance $k$ from the root with $V_{x_k} < s k$ are removed along with all their descendants), then it will die out almost surely. Our process is killed at $s=0$ and hence $\alpha =0$ if $s^* < 0$.
To establish (\ref{UBspeed}), we will consider the probabilities that there is a vertex $x$ at distance $k$ from the root with $V_x\geq s^* + \delta$, and use a union bound. There are $m^k$ such vertices, and if $\PP(V_x\geq s^* + \delta)$ decays fast enough, our probabilities will be summable. Assume that $s^* < \infty$ and fix $\delta > 0$. Then there is $\varepsilon > 0$ such that $I(s^*+ \delta) - \varepsilon > \log m$. Take $k$ large enough such that
$$
\PP\left[\frac{Z_1 + \cdots + Z_k}{k} \geq s^* + \delta \right] \leq \exp(-k (I(s^* + \delta )- \varepsilon))\, .
$$
Now, by a union bound,
$$
\PP\left[\frac{1}{k}\max_{x:|x|=k}V_x \geq s^* + \delta \right] \leq m^k P\left[\frac{Z_1 + \cdots + Z_k}{k} \geq s^* + \delta \right]
$$
$$
\leq m^k \exp(-k (I(s^* + \delta )- \varepsilon))
$$
and we conclude, using the Borel-Cantelli lemma, that
$$
\limsup_{k}\frac{1}{k}\max_{x:|x|=k}V_x\leq s^* + \delta \quad \PP- \rm{a.s.}
$$
Since $\delta > 0$ was arbitrary, \eqref{UBspeed} follows from this.
To show that the survival probability is strictly positive if $s^* > 0$, we will construct a supercritical Galton-Watson process embedded in our tree. To this end, first note that
$$
\lim\limits_{n \to \infty}\frac{1}{n}\log \PP\left[\frac{Z_1 + \cdots + Z_j}{j} \geq s, j = 0,1, \ldots ,n \right] = -I(s),
$$
see \cite{mogulskii} or \cite[Theorem 5.1.2]{dembozeitouni}. Fix $s<s^*$. Since $I$ is a convex function which is strictly convex on $\{x: I(x) \in (0, \infty)\}$, we can pick $\delta > 0$ such that $I(s) < \log m -\delta$. Consider an embedded Galton-Watson process consisting of all vertices at distances $k, 2k, 3k, \ldots$ from the root such that the path of the branching random walk between the vertex (at distance $ik$ from the root, say) and its predecessor (at distance $(i-1)k$ from the root) stays strictly above $\ell s$ at distance $\ell= (i-1)k +j$ $(j=0,1, \ldots, k)$ from the root. Take $k$ large enough such that
$$
\PP\left[\frac{Z_1 + \cdots + Z_j}{j} \geq s, j =0,1, \ldots ,k \right]\geq \exp(-k(I(s) + \delta)).
$$
Then the embedded Galton-Watson process has expected offspring at least $\exp(-k(I(s)+ \delta))m^k > 1$, and therefore it has a strictly positive survival probability. An infinite path $0=x_0,x_1,x_2\ldots$ from the root, where $x_i$ is a child of $x_{i-1}$, for all $i$, is called a ray. The above argument shows that for $s < s^*$, we have that
\begin{equation}\label{LBspeedinf}
\PP\left(\exists\mbox{ a ray }\{x_n\}\mbox{ with } V_{x_n}\geq ns\mbox{ for all }n\right)>0.
\end{equation}
In particular, if the branching random walk is killed at the ``linear barrier'' $s k$, with $s < s^*$, it survives with positive probability. Hence, $\alpha > 0$ if $s^* > 0$, since our process is killed at $s=0$.
Finally we consider the critical case $s^* =0$. This requires a refinement of the argument for the case when $s^*>0$: Assume that $s^* =0$, so that $I(0) = \log m$. Then, by the Bahadur-Rao Theorem (see \cite[Theorem 3.7.4]{dembozeitouni}), there is a constant $c> 0$ such that, for all $k$,
$$
\PP\left[\frac{Z_1 + \cdots + Z_k}{k} \geq 0 \right] \leq \frac{c}{\sqrt{k}} \exp(-k I(0))\, .
$$
Now consider the probability that there is a vertex $x$ at distance $k$ from the root with $V_x\geq 0$. There are $m^k$ such vertices and, using a union bound, we get that
$$
\PP\left[\frac{1}{k}\max_{x:|x|=k}V_x \geq 0\right] \leq m^k P\left[\frac{Z_1 + \cdots + Z_k}{k} \geq 0 \right]\leq \frac{c}{\sqrt{k}}.
$$
We conclude, using the Borel-Cantelli lemma along the subsequence $k^4$ $(k=1, 2, \ldots )$, that
$$
\frac{1}{k^4}\max_{x:|x|=k^4 }V_x \geq 0 \mbox{ only for finitely many $k$}, \quad \PP- \rm{a.s.}
$$
This implies that $\alpha =0$. In fact, much more is known: A ``nearly optimal'' ray consists of vertices $x_k$ with $V_{x_k} \geq (s^* -\varepsilon)k$ for all $k$ and, in \cite[Theorem 1.2]{nyzsurvival}, it is shown that the probability that a nearly optimal ray exists goes to $0$ as $\varepsilon \to 0$.
$\Box$
\begin{remark} Let $\partial \mathcal{T}$ denote the boundary of the tree which is defined as the set of all rays in the tree. One can use a $0-1$ law as in \cite[Proposition 3.2]{climb} to conclude from \eqref{LBspeedinf} that for $s < s^*$, we have
\begin{equation}
\PP(\sup\limits_{\xi \in \partial \mathcal{T}}\liminf_{x_k \in \xi, |x_k| = k}\frac{1}{k}V_{x_k}\geq s) =1,
\end{equation}
which implies that
\begin{equation}\label{LBspeedsure}
\PP(\sup\limits_{\xi \in \partial \mathcal{T}}\liminf_{x_k \in \xi, |x_k| = k}\frac{1}{k}V_{x_k}\geq s^*) =1\, .
\end{equation}
Now, Theorem \ref{speedBRW} follows, in our setup, from \eqref{UBspeed} and \eqref{LBspeedsure}.
\end{remark}
\textbf{Example 2.1.} Let $R$ and $C$ be Poisson distributed with mean $\mu_R$ and $\mu_C$, respectively. Then
$$
\log {\mathbb E}[\exp{\lambda(R-C)}]= (e^\lambda -1)\mu_R + (e^{ - \lambda} -1)\mu_C
$$
and minimizing this expression over $\lambda\geq 0$ (for $\mu_C\geq\mu_R$ the minimum is $-(\sqrt{\mu_C}-\sqrt{\mu_R})^{2}$, attained at $e^{2\lambda}=\mu_C/\mu_R$), we see from \eqref{critaug} that infinite transmission is possible if and only if $\sqrt{\mu_C} - \sqrt{\mu_R} < \sqrt{\log m}$.
$\Box$
\textbf{Example 2.2.} Let $C\equiv 1$ and assume that a transceiver is either functioning with range 1 (probability $r_1\neq 1$) or non-functioning with range 0 (probability $r_0$). Then
$$
{\mathbb E}[e^{\lambda R}]{\mathbb E}[e^{-\lambda C}]=r_0e^{-\lambda}+r_1
$$
and we see that \eqref{critaug} is satisfied if and only if $r_1>1/m$. Next assume that a functioning transceiver has range 2 (probability $r_2=1-r_0$). We apply Proposition \ref{charsurvival}, calculating $I(0)=0$ if $r_2 \geq 1/2$ and $I(0)= -\log(2 \sqrt{r_2(1- r_2)})$ otherwise, and conclude that the condition $I(0)<\log m$ amounts to either $r_2 \geq 1/2$ or $r_2(1- r_2) > (4m^2) ^{-1}$. Hence infinite transmission is possible if and only if
$$
r_2 > \frac{1}{2}\left(1 - \sqrt{1-1/m^2}\right)\, .
$$
$\Box$
\textbf{Example 2.3.} Consider the case with $R\equiv 1$ and $C\in\{0,2\}$, with $\PP(C=0)=p_0$ and $\PP(C=2)=p_2$. This is equivalent to the previous example with $r_2=p_0$ and $r_0=p_2$ in the sense that the effect of passing a transceiver and a consecutive edge is that either the signal strength is increased by 1 (probability $p_0$) or decreased by 1 (probability $p_2$). It follows that infinite transmission is possible if and only if
$$
p_0 > \frac{1}{2}\left(1 - \sqrt{1-1/m^2}\right)\, .
$$
$\Box$
\textbf{Example 2.4.} Set $C\equiv 1$ and let $R$ be Poisson distributed with mean $\mu$. Then (\ref{critaug}) is equivalent to
\begin{equation}\label{eq:1po_aug}
(e^\lambda-1)\mu-\lambda+\log m > 0\quad \mbox{for all }\lambda\geq 0.
\end{equation}
For $\mu\leq 1$ the minimum over $\lambda\geq 0$ is attained at $\lambda=-\log \mu$, and hence (\ref{eq:1po_aug}) holds if and only if $\mu > 1$ or $1-\mu+\log(m\mu) > 0$. For $m=2$, we obtain numerically that infinite transmission is possible if and only if $\mu$ exceeds a threshold of approximately $0.232$.
$\Box$
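For illustration (the following numerical sketch is not from the original text and assumes a standard Python installation), the critical value of $\mu$ can be located by applying Newton's method to $g(\mu)=1-\mu+\log(m\mu)$:
\begin{verbatim}
import math

def critical_mu(m, mu=0.1, steps=30):
    # Newton iteration for g(mu) = 1 - mu + log(m*mu); g is increasing and
    # concave on (0,1), so starting to the left of the root the iteration
    # converges monotonically from below.
    for _ in range(steps):
        g = 1 - mu + math.log(m * mu)
        dg = 1 / mu - 1
        mu -= g / dg
    return mu

print(critical_mu(2))   # approximately 0.232
\end{verbatim}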
\section{Complete routing}
For complete routing, the transmission process can be described by a process $\{W_y\}$ that keeps track of the remaining range of a signal from the root when it reaches $y$ and is defined as follows: Set $W_0=0$ for the root and then, for a vertex $y$ that is a child of $x$ in the tree, let
$$
W_y = \left\{ \begin{array}{ll}
R_x-C_{(x,y)} & \mbox{if $R_x>W_x$};\\
W_x-C_{(x,y)} & \mbox{otherwise}.
\end{array}
\right.
$$
Indeed, if $R_x>W_x$, then the range of the transceiver at $x$ is larger than the remaining range of the routed signal at $x$. Hence, by the definition of the scheme, the remaining range at a given child $y$ of $x$ is $R_x$ minus the cost $C_{(x,y)}$ of the edge $(x,y)$. If $R_x\leq W_x$ on the other hand, then the transceiver at $x$ does not increase the remaining range, and the remaining range at a given child $y$ is therefore $W_x$ minus the cost $C_{(x,y)}$ of the edge $(x,y)$. When $W_y$ takes on a negative value, the process dies at that location and all vertices in the subtree below $y$ are assigned the value $\Gamma$, where $\Gamma$ is a cemetery state.
If $m=1$, we have a Markov chain, killed when it takes a negative value. When $R$ has bounded support, say $R\leq b$ almost surely, then infinite transmission is not possible: Let $c>0$ and $r<c$ be any numbers such that $\PP(C\geq c)>0$ and $\PP(R\leq r) > 0$. Consider a sequence of length $\lceil b/(c-r)\rceil+1$ such that the strength of each transceiver is at most $r$ while the cost of the incoming link is at least $c$. Such a sequence occurs eventually with probability 1 and, since $W_y\leq b$, it is not hard to see that it kills the signal. Furthermore, one sees directly, or from (\ref{eq:hierarchy}), that infinite transmission is not possible if ${\mathbb E}[R]\leq {\mathbb E}[C] < \infty$. In the general case, we do not know if survival is possible.
Assume for the remainder of the section that $m \geq 2$. The process $\{W_y\}$ is not a branching random walk. It is also not a tree-valued Markov chain in the sense of \cite{benjaminiperes94}, since the values of the vertices of two children of $x$ are not chosen independently given $W_x$. In addition, the Markov process we are considering is not irreducible. Nevertheless, the arguments of the previous section apply and we can give conditions for a positive survival probability. To this end, let $W_0, W_1, W_2, \ldots $ be a Markov process with the same law as $W_0, W_{x_1}, W_{x_2}, \ldots$, where $x_i$ is a child of $x_{i-1}$. Hence, the transition mechanism is the following: Take two i.i.d.\ sequences $\{C_i\}$ and $\{R_i\}$ which are independent. Given $W_{i-1}$, we set $W_i=\Gamma $ if $W_{i-1} = \Gamma$, and if $W_{i-1} \geq 0$, we set
\begin{eqnarray*}
W_i = \left\{ \begin{array}{ll}
R_i-C_i & \mbox{ if $R_i>W_{i-1}$ and $R_i - C_i \geq 0$};\\
W_{i-1}-C_i & \mbox{ if $R_i\leq W_{i-1}$ and $W_{i-1}-C_i \geq 0$};\\
\Gamma & \mbox{ otherwise}.
\end{array}
\right.
\end{eqnarray*}
Denote by $\PP_z$ the probability measure associated with the Markov process started from $z \in {\mathbb R}$ (the transmission process is started from $W_0=0$ but in the proof of Theorem \ref{condcomplete} below we need to consider arbitrary starting points). This Markov process has $\Gamma$ as an absorbing state. Note that, due to subadditivity, the limit
$$
-\lim\limits_{n \to \infty}\frac{1}{n}\log \inf\limits_{z \in {\mathbb R}^+} \PP_z(W_n \geq 0) =
-\lim\limits_{n \to \infty}\frac{1}{n}\log \inf\limits_{z \in {\mathbb R}^+} \PP_z(W_n \neq \Gamma)
$$
exists. The state space of the Markov chain $\{W_i\}$ is (a subset of) $\{\Gamma\}\cup [0, \infty)$. We can think of the state space as an ordered set, with smallest element $\Gamma$, and we claim that
$$
\inf_{z\in\mathbb{R}^+}\PP_z(W_n\geq 0)=\PP_0(W_n\geq 0) \, .
$$
Indeed, using the natural coupling for two Markov chains distributed according to $\PP_z$ and $\PP_y$, respectively, which is to take the same sequences $\{C_i\}$ and $\{R_i\}$ in the above construction, we see that for any $z$ and $y$ with $y< z$, the law of $W_1$ under $\PP_y$ is dominated by the law of $W_1$ under $\PP_z$, and by induction, the law of $W_n$ under $\PP_0$ is dominated by the law of $W_n$ under $\PP_z$, for any $z > 0$ and any $n$. We conclude that
\begin{equation}\label{beta_def}
\beta: = -\lim\limits_{n \to \infty}\frac{1}{n}\log \PP_0(W_n \geq 0)
\end{equation}
exists and that
\begin{equation}\label{bISinf}
\beta = -\lim\limits_{n \to \infty}\frac{1}{n}\log \inf\limits_{z \in {\mathbb R}^+} \PP_z(W_n \geq 0)\, .
\end{equation}
The following theorem asserts that complete routing can transmit indefinitely if $\beta<\log m$ but not if $\beta>\log m$.
\begin{theorem}\label{condcomplete}
Assume that $m \geq 2$ and let $\beta$ be defined as in \eqref{beta_def}.
\begin{itemize}
\item[\rm{(i)}]If, for some subsequence $n_k$ of the integers with $n_k \to \infty$ as $k \to \infty$,
\begin{equation}\label{conddie}
\sum\limits_{k=1} ^\infty m^{n_k} \PP_0(W_{n_k} \geq 0) < \infty
\end{equation}
then $\PP\left(|O_{\scriptscriptstyle{\rm comp}}|=\infty\right) = 0$. In particular, \eqref{conddie} is satisfied if $\beta > \log m$.\\
\item[\rm{(ii)}] If $\beta < \log m$, then $\PP\left(|O_{\scriptscriptstyle{\rm comp}}|=\infty\right) > 0$.\\
\end{itemize}
\end{theorem}
\begin{proof}
The proof of (i) is the same as the proof of \eqref{UBspeed}, and the proof of (ii) is the same as the proof of \eqref{LBspeedinf}. Indeed, using a union bound,
$$
\PP\left[\max_{x:|x|=k}W_x \geq 0\right] \leq m^k \PP\left[W_k\geq 0\right],
$$
and hence, if \eqref{conddie} holds, the Borel-Cantelli lemma gives that, almost surely, there are only finitely many $k$ for which some vertex $x$ with $|x|=n_k$ has $W_x\geq 0$. Since the process is killed as soon as it takes a negative value, all vertices at some level $n_k$ are then dead, so that $|O_{\scriptscriptstyle{\rm comp}}|<\infty$ almost surely, which proves (i).
To show (ii), we again construct an embedded Galton-Watson tree which survives with positive probability. Pick $\delta>0$ such that $\beta<\log m -\delta$ and choose $k$ large enough such that
$\inf\limits_{z \in {\mathbb R}^+} \PP_z(W_k \neq \Gamma)\geq \exp(-k(\beta+\delta))$ (which is possible due to \eqref{bISinf}). Consider an embedded Galton-Watson process consisting of all vertices at distances $k, 2k, 3k, \ldots$ from the root such that the path of the process $\{W_y\}$ between the vertex (at distance $ik$ from the root, say) and its predecessor (at distance $(i-1)k$ from the root) does not hit $\Gamma$ (note that it suffices that $W$ takes non-negative values at the vertex and its predecessor). Then, the embedded Galton-Watson process has expected offspring at least $ \exp(-k(\beta+ \delta))m^k > 1$, and therefore it has a strictly positive survival probability.
\end{proof}
Obtaining explicit expressions for the probability $\PP(W_n\geq 0)$, and thereby for $\beta$, for some large class of distributions seems difficult. However, it is possible to deduce from Theorem \ref{condcomplete} that infinite transmission is always possible when $R$ is a power-law or when $\PP(C=0)>1/m$. The (simple) proofs of this are valid also for boundary routing, and therefore we give the proofs in the next section, see Corollary \ref{cor:pl} and \ref{cor:c_atom}. Here instead we analyze the condition in Theorem \ref{condcomplete} for some specific examples.
\textbf{Example 3.1} Let $C\equiv 1$ and $R\in\{0,1\}$ with $\PP(R=1)=r_1$. Then $W_n\geq 0$ if and only if no transceiver up to vertex $n$ is non-functioning with range 0. Hence $\PP(W_n\geq 0)=r_1^n$, so that $\beta=-\log r_1$, which is smaller than $\log m$ when $r_1>1/m$. This is the same condition as in Example 2.2, and indeed all schemes are equivalent in this case.
$\Box$
\textbf{Example 3.2} Let $C\equiv 1$ and $R\in\{0,2\}$ with $\PP(R=0)=r_0$ and $\PP(R=2)=r_2=: r$.
In this case, $W_0, W_1, W_2, \ldots $ is a Markov chain with state space $\{\Gamma, 0, 1 \}$ and
with transition probabilities given by $p(\Gamma, \Gamma) =1$, $p(0, \Gamma) =r_0$, $p(0, 1) =r$, $p(1, \Gamma) =0$, $p(1, 0) = r_0$, $p(1, 1) =r$. The transition matrix can be diagonalized, and has the eigenvalues $1, \frac{1}{2}(r + a)$ and $\frac{1}{2}(r - a)$, where $a = \sqrt{4r - 3 r ^2}$. We conclude that $\beta = -\log(\frac{1}{2}(r + a))$. Hence
$$
\PP\left(|O_{\scriptscriptstyle{\rm comp}}|=\infty\right) > 0 \hbox{ if } r + \sqrt{4r - 3 r ^2} > \frac{2}{m}\, .
$$
and
$$
\PP\left(|O_{\scriptscriptstyle{\rm comp}}|=\infty\right) = 0 \hbox{ if } r + \sqrt{4r - 3 r ^2} < \frac{2}{m}\, .
$$
This can be rewritten as
$$
\PP\left(|O_{\scriptscriptstyle{\rm comp}}|=\infty\right) > 0 \hbox{ if } r > \frac{1}{2}\left(1 + \frac{1}{m} - \sqrt{1+ \frac{2}{m} - \frac{3}{m^2}}\right)
$$
and
$$
\PP\left(|O_{\scriptscriptstyle{\rm comp}}|=\infty\right) = 0 \hbox{ if } r < \frac{1}{2}\left(1 + \frac{1}{m} - \sqrt{1+ \frac{2}{m} - \frac{3}{m^2}}\right).
$$
In particular, recalling the condition for augmented routing from Example 2.3, we see that $r$ can be chosen such that infinite transmission is possible for augmented routing, but not for complete routing. For $m=2$ for instance, the critical value for $r$ is approximately $0.19$ with complete routing and approximately $0.067$ with augmented routing. We remark that, diagonalizing the transition matrix, one can compute that
$$
\PP(W_n \geq 0) = \frac{r + a}{2a}\left(\frac{r + a}{2}\right)^n + \frac{a - r}{2a}\left(\frac{r - a}{2}\right)^n ,
$$
but this does not help to settle the critical case $\beta = \log m$, since \eqref{conddie} is not satisfied. In general, we believe that, in the critical case, both scenarios are possible depending on the distributions.
$\Box$
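For illustration (the sketch below is not from the original text and assumes that \texttt{numpy} is available), the value of $\beta$ in Example 3.2 can also be estimated by iterating the substochastic transition matrix of the chain $W$ on the states $\{0,1\}$:
\begin{verbatim}
import numpy as np

# C = 1 and R in {0,2} with P(R = 2) = r: estimate
# beta = -lim (1/n) log P_0(W_n >= 0) and compare with -log((r+a)/2).
r = 0.3
a = np.sqrt(4 * r - 3 * r**2)
Q = np.array([[0.0, r],          # from state 0: survive only if R = 2
              [1 - r, r]])       # from state 1: never killed in one step
p = np.array([1.0, 0.0])         # the chain starts in state 0
n = 500
for _ in range(n):
    p = p @ Q
print(-np.log(p.sum()) / n, -np.log((r + a) / 2))   # approximately equal
\end{verbatim}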
\textbf{Example 3.3} Next, we give another example where augmented routing is strictly more powerful than complete routing. To this end, recall Example 2.3, where it was shown that, when $R\equiv 1$ and $C\in\{0,2\}$, with $\PP(C=0)=p_0$, then infinite transmission is possible with augmented routing if and only if $p_0 > \frac{1}{2}(1 - \sqrt{1-1/m^2})$. For complete routing we note that $W_n\geq 0$ if and only if no edge between the root and vertex $n$ has weight 2. Thus $\beta=-\log p_0$, implying that infinite transmission is possible if $p_0>1/m$, but not if $p_0<1/m$. For $m \geq 2$ and $p_0 = \frac{1}{2m}$, augmented routing can hence transmit indefinitely, but complete routing cannot.
$\Box$
\section{Boundary routing}
First note that, when $\{R\geq C\}$ has a non-trivial probability, infinite transmission is never possible with boundary routing for $m=1$. Indeed, the tree $\mathcal{T}$ then reduces to a singly infinite path and the time until we encounter a transceiver at the boundary of the set of the informed vertices whose strength is strictly smaller than the cost of the edge to its un-informed neighbor is clearly almost surely finite. We hence restrict to $m\geq 2$.
We begin by giving an explicit condition for infinite transmission in the case when $C\equiv c$. By scaling we can take $c=1$ and it is then enough to consider integer-valued range variables $R$. Indeed, if $R$ is not integer-valued we instead work with $R'=\lfloor R\rfloor$ and note that this gives rise to the same transmission process.
\begin{prop} If $C\equiv 1$ and $R$ is integer-valued with $\PP(R=i)=r_i$ ($i= 0,1,2, \ldots$), then $\PP\left(|O_{\scriptscriptstyle{\rm bond}}|=\infty\right)>0$ if and only if
\begin{equation}\label{eq:bond_cond_const}
{\mathbb E}\left[m^R\right]>1+r_0.
\end{equation}
\end{prop}
\begin{proof}
The condition follows by relating the transmission process to a branching process: The ancestor of the process is the root 0, and the offspring of a vertex $x$ then is $\partial \Lambda_x$, that is, the vertices that are within the range of $x$, but that have at least one child that is not within the range of $x$. The possible offspring of $x$ are the vertices at level $R_x$ below $x$, and since there are $m^k$ vertices at level $k$ below $x$, the offspring mean is $\sum_{k=1}^\infty m^kr_k={\mathbb E}[m^R]-r_0$.
\end{proof}
\textbf{Example 4.1.} Let $C\equiv 1$ and assume that a transceiver is either functioning with range $n$ (probability $r_n\neq 1$) or non-functioning with range 0 (probability $r_0=1-r_n$). The root can then transmit indefinitely if and only if $r_n>1/m^n$. For $n=2$, the condition becomes $r_2>1/m^2$, which is strictly stronger than the condition for complete routing derived in Example 3.2. The critical value for $r_2$ when $m=2$ for instance is 0.25 with boundary routing and approximately 0.19 with complete routing. If $r_i=L(i)a^{i}$ for some slowly varying function $L$ and some $a<1$, then infinite transmission is possible for $a>1/m$, while for $a<1/m$ it depends on the precise form of the distribution.
$\Box$
\textbf{Example 4.2.} Take $C\equiv 1$ and let $R$ be Poisson distributed with mean $\gamma$. Then (\ref{eq:bond_cond_const}) translates into
$$
e^{\gamma(m-1)}>1+e^{-\gamma},
$$
which holds for $\gamma$ large enough. For $m=2$, the threshold is $\gamma=\ln\big(\frac{1+\sqrt{5}}{2}\big)\approx 0.48$. This can be compared to the condition for augmented routing, which is $\gamma > 0.23$, see Example 2.4. For larger $m$, analytical expressions for the threshold are more involved, but numerical values are easily obtained.
$\Box$
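The threshold can be computed numerically from the displayed condition, whose left-hand side is increasing and right-hand side decreasing in $\gamma$; a minimal bisection sketch:
\begin{verbatim}
import math

def threshold_gamma(m, lo=1e-9, hi=50.0, tol=1e-12):
    # Solve exp(gamma*(m-1)) = 1 + exp(-gamma) by bisection; the left-hand
    # side is increasing and the right-hand side decreasing in gamma.
    def f(g):
        return math.exp(g * (m - 1)) - (1 + math.exp(-g))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(threshold_gamma(2), 4))   # approximately 0.4812 = ln((1+sqrt(5))/2)
print(round(threshold_gamma(3), 4))
\end{verbatim}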
When the edge costs are random, a branching process approach does not work, since the information that the signal has reached a vertex $x$, but not a given child $y$, affects the distribution of $C_{(x,y)}$ in a way that is difficult to control. Also the number of un-informed children of $x$ carries information about $C_{(x,y)}$. However, the arguments from the previous section can be applied again to derive a general condition. To this end, we note that the transmission process can be described by a process $\{U_y\}$ that keeps track of the strength of a signal from the root when it reaches $y$ and is defined as follows: Set $U_0=0$ for the root and then, for a vertex $y$ that is a child of $x$ in the tree, let
$$
U_y = \left\{ \begin{array}{ll}
U_x-C_{(x,y)}& \mbox{if $U_x-C_{(x,y)}\geq 0$};\\
R_x-C_{(x,y)}& \mbox{otherwise}.
\end{array}
\right.
$$
Indeed, when $U_x-C_{(x,y)}$ becomes strictly negative, we have passed a vertex that is on the boundary of the informed set. The transceiver at $x$ then forwards the signal and the new balance is $R_x-C_{(x,y)}$. When $U_y$ takes on a negative value, the process dies at that location and all vertices in the subtree below $y$ are assigned the value $\Gamma$, where $\Gamma$ is a cemetery state.
Let $U_0,U_1,\ldots$ be a Markov process distributed as the above process along a given ray in the tree, that is, $\Gamma$ is an absorbing state and, if $U_{i-1}\geq 0$, the transition mechanism is
$$
U_i = \left\{ \begin{array}{ll}
U_{i-1}-C_i & \mbox{if $U_{i-1}-C_i\geq 0$};\\
R_{i-1}-C_i & \mbox{if $U_{i-1}-C_i<0$ and $R_{i-1}-C_i\geq 0$};\\
\Gamma & \mbox{otherwise.}
\end{array}
\right.
$$
Here $\{R_i\}$ and $\{C_i\}$ are i.i.d.\ sequences. Let $\PP_z$ denote the probability measure of the process $\{U_i\}$ started from $U_0=z$. In analogy with complete routing, the limit
\begin{equation}\label{eq:lim_inf_bound}
-\lim\limits_{n \to \infty}\frac{1}{n}\log \inf_{z\in\mathbb{R}^+}\PP_z(U_n \geq 0)
\end{equation}
exists due to subadditivity. Furthermore, in this case we also have that
\begin{equation}\label{eq:temp}
\lim\limits_{n \to \infty}\frac{1}{n}\log \inf_{z\in\mathbb{R}^+}\PP_z(U_n \geq 0)=\lim\limits_{n \to \infty}\frac{1}{n}\log \PP_0(U_n\geq 0).
\end{equation}
Indeed, if the chain is started from $U_0=z>0$, it is certain to survive at least to level $M_z=\max\{k:\sum_{i=1}^kC_i\leq z\}$, and from that point on the mechanism is stochastically the same as for a process started from $U_0=0$. Hence,
\begin{equation}\label{gamma_def}
\gamma := -\lim\limits_{n \to \infty}\frac{1}{n}\log \inf\limits_{z \in {\mathbb R}^+} \PP_z(U_n \geq 0)\,
\end{equation}
exists and, in view of \eqref{eq:temp}, coincides with $-\lim_{n \to \infty}\frac{1}{n}\log \PP_0(U_n \geq 0)$. This means that the proof of Theorem \ref{condcomplete} goes through verbatim and gives an analogous criterion for infinite transmission with boundary routing.
\begin{theorem} Assume that $m \geq 2$ and let $\gamma$ be defined as in \eqref{gamma_def}.
\begin{itemize}
\item[\rm{(i)}] If $\gamma > \log m$, then $\PP\left(|O_{\scriptscriptstyle{\rm bond}}|=\infty\right) = 0$.
\item[\rm{(ii)}] If $\gamma < \log m$, then $\PP\left(|O_{\scriptscriptstyle{\rm bond}}|=\infty\right) > 0$.
\end{itemize}
\end{theorem}
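To make the criterion concrete, $\gamma$ can be approximated by simulating the chain $\{U_i\}$ directly; the following minimal sketch uses exponentially distributed ranges and costs as a purely illustrative choice and compares the resulting estimate of $-\frac{1}{n}\log \PP_0(U_n\geq 0)$ for a moderate $n$ with $\log m$.
\begin{verbatim}
import math, random

def estimate_gamma(n, trials, sample_R, sample_C):
    # Crude Monte Carlo estimate of -(1/n) log P_0(U_n >= 0) for the Markov
    # chain defined above, started from U_0 = 0.
    alive = 0
    for _ in range(trials):
        U, ok = 0.0, True
        for _ in range(n):
            R, C = sample_R(), sample_C()
            if U - C >= 0:
                U = U - C
            elif R - C >= 0:
                U = R - C
            else:
                ok = False
                break
        alive += ok
    return -math.log(max(alive, 1) / trials) / n

random.seed(0)
g = estimate_gamma(n=30, trials=50000,
                   sample_R=lambda: random.expovariate(1.0),   # range: mean 1
                   sample_C=lambda: random.expovariate(2.0))   # cost: mean 1/2
print(g, "versus log m =", math.log(2))
\end{verbatim}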
Just as for complete routing, it is typically difficult to find explicit expressions for $\gamma$. However, in some cases we can give sufficient conditions for $\gamma<\log m$, and hence for the possibility of infinite transmission. First recall that a tail distribution function $\bar F(x)=\PP(X>x)$ is said to be regularly varying with tail exponent $\tau-1$ if $\bar F(x)=x^{-(\tau-1)}L(x)$, where $x\mapsto L(x)$ is slowly varying at infinity (that is, $L(ax)/L(x)\to 1$ as $x\to \infty$ for any $a>0$). When this is the case, we say that the random variable $X$ has a power-law distribution.
\begin{corr}\label{cor:pl}
If $m \geq 2$ and $R$ has a power-law distribution, then $\PP\left(|O_{\scriptscriptstyle{\rm bond}}|=\infty\right)>0$ regardless of the distribution of $C$.
\end{corr}
\begin{proof}
Let $S_n=\sum_{i=1}^nC_i$. Trivially $\PP(U_n\geq 0)\geq \PP(S_n\leq R)$, since the process is clearly alive at level $n$ if the total cost of a given path of length $n$ does not exceed the range of the root transceiver. For any $c>0$, we have that $\PP(S_n\leq R)\geq \PP(R\geq nc)\cdot\PP(S_n\leq nc)$ and trivially $\PP(S_n\leq nc)\geq \PP(C\leq c)^n$. Now take $c$ such that $\PP(C\leq c)\geq a/m$ for some $a\in(1,m)$. Then
$$
\PP(S_n\leq R)\geq \PP(R\geq nc)\cdot (a/m)^n
$$
Since $R$ has a power-law distribution, $\frac{1}{n}\log \PP(R\geq nc)\to 0$ as $n\to\infty$, so $\gamma\leq \log m-\log a<\log m$.
\end{proof}
The tail behavior of the cost variable $C$ does not have the same role in determining the possibility of infinite transmission. For instance, it is not the case that infinite transmission is necessarily impossible if $C$ has a power-law distribution while $R$ has a distribution with an exponentially decaying tail. Instead, a sufficiently large atom at 0 for $C$ guarantees that infinite transmission is possible, regardless of the tail behavior of the distributions.
\begin{corr} \label{cor:c_atom}
If $\PP(C=0)\geq 1/m$, then $\PP\left(|O_{\scriptscriptstyle{\rm bond}}|=\infty\right)>0$.
\end{corr}
\begin{proof}
For any fixed $r\geq 0$, we have that
$$
\PP(S_n\leq r)\geq \PP(\cap_{i=1}^n\{C_i\leq r/n\})\geq \PP(C=0)^n.
$$
Since $\PP(U_n\geq 0)\geq \PP(S_n\leq R)$, this implies that $\gamma<\log m$.
\end{proof}
\section{Summary and conclusions}
We have derived sharp conditions for infinite transmission in all three schemes. For $m\geq 2$, the conditions are as follows:
\begin{itemize}
\item[$\bullet$] Augmented routing can transmit indefinitely if and only if ${\mathbb E}[{\rm{e}}^{\lambda R}]{\mathbb E}[{\rm{e}}^{-\lambda C}] > 1/m$ for all
$\lambda\geq 0$.
\item[$\bullet$] Let $\{W_i\}$ represent the transmission process with complete routing along a given ray (see Section 3 for a precise definition) and define $\beta=-\lim_{n \to \infty}\frac{1}{n}\log \PP(W_n \geq 0)$. Complete routing can then transmit indefinitely if $\beta<\log m$, but not if $\beta>\log m$. At the critical point $\beta=\log m$, we believe that both scenarios are possible.
\item[$\bullet$] For $m\geq 2$, boundary routing can transmit indefinitely if $\gamma <\log m$ but not if $\gamma>\log m$, where $\gamma=-\lim_{n \to \infty}\frac{1}{n}\log \PP(U_n \geq 0)$ and $\{U_i\}$ represents the transmission along a given ray. When $C\equiv 1$ and $R$ is integer-valued with $\PP(R=i)=r_i$, the condition becomes ${\mathbb E}[m^R]>1+r_0$.
\end{itemize}
For $m =1$, augmented routing can transmit indefinitely if ${\mathbb E}[R] > {\mathbb E}[C]$ but not if ${\mathbb E}[R] \leq {\mathbb E}[C]$ and both expectations are finite. If $R$ and $C$ both have infinite expectations, both scenarios can happen. Complete routing cannot transmit indefinitely when $R$ has bounded support, but the general case is open. Boundary routing cannot transmit indefinitely for any distribution.
When $R$ has a power-law distribution and $m\geq 2$, infinite transmission is always possible with all three schemes; see Corollary \ref{cor:pl}. The tail behavior of $C$ does not play the same role, since, according to Corollary \ref{cor:c_atom}, a large enough atom at 0 guarantees that infinite transmission is possible with boundary routing (and thereby also with the other schemes), regardless of the tail behaviors.
We have given several examples of distributions where the schemes are strictly different, in the sense that there are regimes for the parameters of the distributions of $R$ and $C$ where one (or two) of the schemes can transmit indefinitely but not the other two (one); see e.g.\ Examples 3.2, 3.3, 4.1 and 4.2. Complete routing and boundary routing are trivially equivalent in some cases, e.g.\ when $R$ is constant (see also Example 3.1). An interesting question is whether the three schemes are always strictly different when this is not the case and when $R$ does not have a power-law distribution, that is, whether it is then always strictly easier to transmit to infinity with complete routing than with boundary routing, and strictly easier with augmented routing than with complete routing. Or are there cases where the conditions coincide for (at least) two of the schemes? Answering this is complicated by the fact that the conditions for complete routing and boundary routing are somewhat difficult to analyze, since the probabilities $\PP(W_n\geq 0)$ and $\PP(U_n\geq 0)$ are typically not easy to calculate.
However, when $C\equiv 1$ and $R$ is integer-valued, the conditions for boundary routing and for augmented routing are explicit, and an easier question is whether there are families of distributions of $R$ for which these conditions coincide. Below we show that the answer is no.
The condition \eqref{eq:bond_cond_const} for infinite transmission with boundary routing means that $r_0$ has to be sufficiently small. We now show that, for any distribution that satisfies \eqref{eq:bond_cond_const}, it is possible to strictly increase $r_0$ and still be able to transmit indefinitely with augmented routing. To this end, let the distribution of $R$ be described by $\{r_i\}_{i=0}^\infty$, let $k=\min\{i:r_i>0\}$ and take $\varepsilon\in(0,r_k)$. Then define $R_\varepsilon$ by shifting mass $\varepsilon$ from $k$ to 0, that is, $R_\varepsilon$ has distribution
$$
\PP(R_\varepsilon=i) = \left\{ \begin{array}{ll}
r_k-\varepsilon & \mbox{if $i=k$};\\
r_0+\varepsilon & \mbox{if $i=0$};\\
r_i & \mbox{otherwise.}
\end{array}
\right.
$$
\begin{prop}
Let $m\geq 2$, take $C\equiv 1$ and let $R$ be integer-valued such that \eqref{eq:bond_cond_const} holds. If $\varepsilon$ is sufficiently small, then $R_\varepsilon$ satisfies \eqref{critaug}.
\end{prop}
\begin{proof}
We have that
\begin{equation}\label{eq:aug_mod}
m{\mathbb E}[e^{\lambda R_{\varepsilon}}]{\mathbb E}[e^{-\lambda C}] \geq r_0me^{-\lambda}-me^{(k-1)\lambda}\varepsilon+m\sum_{i=1}^\infty r_ie^{(i-1)\lambda}.
\end{equation}
First assume that $e^\lambda\geq m$, and write $e^\lambda=m+c_\lambda$, where $c_\lambda\geq 0$ and $c_\lambda\sim e^\lambda$ as $\lambda\to\infty$. Then we obtain for the last term in \eqref{eq:aug_mod} that
$$
m\sum_{i=1}^\infty r_ie^{(i-1)\lambda}\geq \sum_{i=1}^\infty r_im^i+mr_kc_\lambda^{k-1}>1+mr_kc_\lambda^{k-1},
$$
where the last inequality follows from \eqref{eq:bond_cond_const}. It follows that, if $\varepsilon$ is sufficiently small, then $m{\mathbb E}[e^{\lambda R_{\varepsilon}}]{\mathbb E}[e^{-\lambda C}]>1$ for all $\lambda$ such that $e^\lambda\geq m$. Next assume that $e^\lambda<m$ so that $e^{-\lambda}>1/m$. Trivially
$$
m\sum_{i=1}^\infty r_ie^{(i-1)\lambda}\geq m(1-r_0)
$$
and hence the right hand side of \eqref{eq:aug_mod} is bounded from below by
$$
r_0-me^{(k-1)\lambda}\varepsilon+m(1-r_0),
$$
which is larger than 1 for all $\lambda$ in the specified range if $\varepsilon$ is sufficiently small.
\end{proof}
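A minimal numerical illustration of the proposition, with the assumed two-point distribution $\PP(R=1)=0.6$, $\PP(R=0)=0.4$ and $m=2$ (which satisfies \eqref{eq:bond_cond_const}): the quantity $m{\mathbb E}[e^{\lambda R_\varepsilon}]{\mathbb E}[e^{-\lambda C}]$ is evaluated on a grid of $\lambda$-values, and its minimum stays above $1$ for a small shift $\varepsilon$ but not for a larger one.
\begin{verbatim}
import math

def min_aug_lhs(m, probs, eps, lam_max=30.0, steps=3000):
    # probs[i] = P(R = i).  Shift mass eps from k = min{i >= 1 : probs[i] > 0}
    # to 0, as in the definition of R_eps, and return the minimum over a grid
    # of lambda >= 0 of  m * E[exp(lambda * R_eps)] * exp(-lambda)
    # (here C == 1, so E[exp(-lambda * C)] = exp(-lambda)).
    k = min(i for i, p in enumerate(probs) if i >= 1 and p > 0)
    q = list(probs)
    q[k] -= eps
    q[0] += eps
    best = float("inf")
    for j in range(steps + 1):
        lam = lam_max * j / steps
        val = m * sum(p * math.exp(lam * i) for i, p in enumerate(q)) * math.exp(-lam)
        best = min(best, val)
    return best

print(min_aug_lhs(2, [0.4, 0.6], eps=0.05))   # stays above 1
print(min_aug_lhs(2, [0.4, 0.6], eps=0.15))   # drops below 1
\end{verbatim}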
A possible continuation of the current work would be to investigate time dynamics of the transmission schemes under various rules for the transmission times. Conditional on the event that the signal does not die, what is the asymptotic speed of the transmission? Are there setups where a scheme has a very small (large) probability of transmitting to infinity, but where the speed of transmission conditional on survival is large (small)? Yet another question to investigate is when the root can hear a signal from infinitely far away. Do the conditions on $R$ and $C$ for this coincide with the conditions for infinite transmission? For all three routing schemes, the probability that the root can transmit to a given vertex $x$ at level $n$ is of course the same as the probability that $x$ can transmit to the root. However, the dependence structure for the events $\{\mbox{root can transmit to vertex $i$ at level $n$}\}_{i=1}^{m^n}$ and $\{\mbox{vertex $i$ at level $n$ can transmit to the root}\}_{i=1}^{m^n}$ is different, and hence the conditions could possibly be different. In \cite{dousse}, this issue is analyzed for a related problem in the context of a spatial Poisson process.
{\bf Acknowledgement: } We thank Silke Rolles and Anita Winter for inviting us to the ``Women in Probability'' workshop taking place in July 2010 at Technische Universit\"at M\"unchen, where this work was initiated.
\end{document} |
\begin{document}
\title{Improving Efficiency of SVM $k$-fold Cross-validation by Alpha Seeding}
\author{Zeyi Wen$^1$, Bin Li$^2$, Kotagiri Ramamohanarao$^1$, Jian Chen$^2$\thanks{Jian Chen is the corresponding author.}, Yawen Chen$^2$, Rui Zhang$^1$\\
$^1$[email protected], \{rui.zhang, kotagiri\}@unimelb.edu.au\\
The University of Melbourne, Australia\\
$^2$\{[email protected], [email protected], [email protected]\}\\
South China University of Technology, China}
\maketitle
\begin{abstract}
The $k$-fold cross-validation is commonly used to evaluate the effectiveness of SVMs with the selected hyper-parameters.
It is known that the SVM $k$-fold cross-validation is expensive,
since it requires training $k$ SVMs.
However, little work has explored reusing the $h^{\text{th}}$ SVM when training the $(h+1)^{\text{th}}$ SVM
in order to improve the efficiency of $k$-fold cross-validation.
In this paper, we propose three algorithms that reuse the $h^{\text{th}}$ SVM
for improving the efficiency of training the $(h+1)^{\text{th}}$ SVM.
Our key idea is to efficiently identify the support vectors and
to accurately estimate their associated weights (also called alpha values) of the next SVM by using the previous SVM.
Our experimental results show that our algorithms are
several times faster than the $k$-fold cross-validation which does not make use of the previously trained SVM.
Moreover, our algorithms produce the same results (hence same accuracy) as the $k$-fold cross-validation
which does not make use of the previously trained SVM.
\end{abstract}
\section{Introduction}
\eat{
Support Vector Machines (SVMs) are widely used in many data mining applications,
such as document classification and image processing~\cite{prasad2010handwriting}.
Training an SVM classifier\footnote{For ease of presentation, we discuss binary classification,
although our approaches are applicable to multi-class classification and regression.}
is to find a hyperplane that separates the two classes of training instances with the largest margin,
such that the SVM classifier can predict the labels of unseen instances more accurately.
\footnote{For ease of presentation, we discuss binary classification,
although our approaches are applicable to multi-class classification and regression.}
\footnote{Without confusion,
we omit ``SVM" in SVM $k$-fold cross-validation.}
}
In order to train an effective SVM classifier,
the hyper-parameters (e.g. the penalty $C$) need to be selected carefully.
The $k$-fold cross-validation is a commonly used process
to evaluate the effectiveness of SVMs with the selected hyper-parameters.
It is known that the SVM $k$-fold cross-validation is expensive,
since it requires training $k$ SVMs with different subsets of the whole dataset.
To improve the efficiency of $k$-fold cross-validation,
some recent studies~\cite{wen2014mascot,athanasopoulos2011gpu} exploit
modern hardware (e.g. Graphic Processing Units).
Chu et al.~\cite{chu2015warm} proposed to reuse the $k$ linear SVM classifiers trained
in the $k$-fold cross-validation with parameter $C$ for training the $k$ linear
SVM classifiers with parameter ($C+\Delta$).
However, little work has explored the possibility of reusing the $h^{\text{th}}$ (where $h \in \{1, 2, ..., (k - 1)\}$) SVM
for improving the efficiency of training the $(h+1)^{\text{th}}$ SVM in the $k$-fold cross-validation
with parameter $C$.
In this paper, we propose three algorithms that reuse the $h^{\text{th}}$ SVM
for training the $(h+1)^{\text{th}}$ SVM in $k$-fold cross-validation.
The intuition behind our algorithms is that the hyperplanes of the two SVMs are similar,
since many training instances (e.g. more than 80\% of the training instances when $k$ is 10) are the same in training the two SVMs.
Note that in this paper we are interested in $k > 2$,
since when $k=2$ the two SVMs share no training instance.
\eat{
Figure~\ref{fig:reuse} gives an example of two SVMs trained during the $k$-fold cross-validation.
In the previous SVM, the optimal hyperplane is shown in Figure~\ref{fig:reuse:before}.
In the training dataset of the next SVM, two instances represented by rectangles (cf. Figure~\ref{fig:reuse:before}) are removed,
and two instances represented by triangles (cf. Figure~\ref{fig:reuse:after}) are added.
The optimal hyperplane of the next SVM is shown using a solid line in Figure~\ref{fig:reuse:after},
where the previous hyperplane is shown in dotted line as a comparison.
As we can see from this example, the hyperplane of the next SVM is potentially close to the previous one,
due to the large number of shared training instances.
Hence, using the previous SVM as a starting point for training the next SVM may save significant amount of computation.
\captionsetup[subfloat]{captionskip=5pt}
\begin{figure}
\caption{Two SVMs in the $k$-fold cross-validation}
\label{fig:reuse:before}
\label{fig:reuse:after}
\label{fig:reuse}
\end{figure}
}
We present our ideas in the context of training SVMs using Sequential Minimal Optimisation (SMO)~\cite{platt1998sequential},
although our ideas are applicable to other solvers~\cite{osuna1997improved,joachims1999making}.
In SMO, the hyperplane of the SVM is represented by a subset of training instances together with their weights, namely alpha values.
The training instances with alpha values larger than $0$ are called support vectors.
Finding the optimal hyperplane is effectively finding the alpha values for all the training instances.
Without reusing the previous SVM, the alpha values of all the training instances are initialised to $0$.
Our key idea is to use the alpha values of the $h^{\text{th}}$ SVM to initialise the alpha values for the $(h+1)^{\text{th}}$ SVM.
Initialising alpha values using the previous SVM is called \textit{alpha seeding}
in the literature of studying leave-one-out cross-validation~\cite{decoste2000alpha}.
At some risk of confusion to the reader, we will use ``alpha seeding'' and ``initialising alpha values'' interchangeably,
depending on which interpretation is more natural.
Reusing the $h^{\text{th}}$ SVM for training the $(h+1)^{\text{th}}$ SVM in $k$-fold cross-validation has two key challenges.
(i) The training dataset for the $h^{\text{th}}$ SVM is different from that for the $(h+1)^{\text{th}}$ SVM,
but the initial alpha values for the $(h+1)^{\text{th}}$ SVM should be close to their optimal values;
improper initialisation of alpha values leads to slower convergence than without reusing the $h^{\text{th}}$ SVM.
(ii) The alpha value initialisation process should be very efficient;
otherwise, the time spent in the initialisation may be larger than that saved in the training.
This is perhaps the reason that existing work either (i) reuses the $h^{\text{th}}$ SVM trained with parameter $C$
for training the $h^{\text{th}}$ SVM with parameter ($C + \Delta$) where both SVMs have the identical training dataset~\cite{chu2015warm}
or (ii) only studies alpha seeding in leave-one-out cross-validation~\cite{decoste2000alpha,lee2004efficient}
which is a special case of $k$-fold cross-validation.
Our key contributions in this paper are the proposal of three algorithms (where we progressively refine one algorithm after the other)
for reusing the alpha values of the $h^{\text{th}}$ SVM for the $(h+1)^{\text{th}}$ SVM.
(i) Our first algorithm aims to initialise the alpha values to their optimal values for the $(h+1)^{\text{th}}$ SVM
by exploiting the optimality condition of the SVM training.
(ii) To efficiently compute the initial alpha values,
our second algorithm only estimates the alpha values for the newly added instances,
based on the assumption that all the shared instances between the $h^{\text{th}}$ and the $(h+1)^{\text{th}}$ SVMs
tend to have the same alpha values.
(iii) To further improve the efficiency of initialising alpha values,
our third algorithm exploits the fact that a training instance in the $h^{\text{th}}$ SVM
can be potentially replaced by a training instance in the $(h+1)^{\text{th}}$ SVM.
Our experimental results show that when $k=10$, our algorithms are
several times faster than the $k$-fold cross-validation in LibSVM;
when $k=100$, our algorithm dramatically outperforms LibSVM (32 times faster in the Madelon dataset).
Moreover, our algorithms produce the same results (hence same accuracy) as LibSVM.
The remainder of this paper is organised as follows.
We describe preliminaries in Section~\ref{paper:pre}.
Then, we elaborate our three algorithms in Section~\ref{paper:alg},
and report our experimental study in Section~\ref{paper:es}.
In Sections~\ref{paper:rw} and~\ref{paper:conc}, we review the related literature
and conclude this paper.
\section{Preliminaries}
\label{paper:pre}
Here, we give some details of SVMs,
and discuss the relationship of two rounds of $k$-fold cross-validation.
\subsection{Support Vector Machines}
\label{paper:pre-svm}
An instance $\boldsymbol{x}_i$ is attached with an integer $y_i \in \{+1, -1\}$ as its label.
A positive (negative) instance is an instance with the label of $+1$ ($-1$).
Given a set $\mathcal{X}$ of $n$ training instances,
the goal of the SVM training is to find a hyperplane that separates the positive and the
negative training instances in $\mathcal{X}$ with the maximum margin and meanwhile,
with the minimum misclassification error on the training instances.
\eat{
The SVM training is equivalent to solving the following optimisation problem:
\begin{small}
\begin{equation*}
\begin{aligned}
& \underset{\boldsymbol{w}, \text{ } \boldsymbol{\xi}, \text{ } b}{\argminl}
& \frac{1}{2}{||\boldsymbol{w}||^2} + C\sum_{i=1}^{n}{\xi_i} \\
& \text{subject to}
& y_i(\boldsymbol{w}\cdot \boldsymbol{x}_i + b) \geq 1 - \xi_i \\
& & \xi_i \geq 0, \ \forall i \in \{1,...,n\}
\end{aligned}
\end{equation*}
\end{small}where
$\boldsymbol{w}$ is the normal vector of the hyperplane,
$C$ is the penalty parameter,
$\boldsymbol{\xi}$ is the slack variables
to tolerant some training instances falling into the wrong side of the hyperplane,
and $b$ is the bias of the hyperplane.
In many applications, a better hyperplane exists in other data space instead of the original data space.
}
To allow training instances to be conveniently mapped to other data spaces by kernel functions,
finding the hyperplane can be expressed in a \textit{dual form}~\cite{bennett2000duality}
as the following \textit{quadratic programming} problem~\cite{nocedal2006numerical}.
\begin{small}
\begin{equation}
\begin{aligned}
& \underset{\boldsymbol{\alpha}}{\argmaxl}
& & \sum_{i=1}^{n}{\alpha_i}-\frac{1}{2}{\boldsymbol{\alpha}^T \boldsymbol{Q} \boldsymbol{\alpha}} \\
& \text{subject to}
& & 0 \leq \alpha_i \leq C, \forall i \in \{1,...,n\}; \sum_{i=1}^{n}{y_i\alpha_i} = 0
\end{aligned}
\label{eq:svm_dual}
\end{equation}
\end{small}where
{$\boldsymbol{\alpha} \in \mathbb{R}^n$} is also called a weight vector,
and $\alpha_i$ denotes the \textit{weight} of $\boldsymbol{x}_i$;
$\boldsymbol{Q}$ denotes an $n \times n$ matrix $[Q_{i,j}]$ and {$Q_{i,j} = y_i y_j K(\boldsymbol{x}_i, \boldsymbol{x}_j)$},
and {$K(\boldsymbol{x}_i, \boldsymbol{x}_j)$} is a kernel value computed from a kernel
function (e.g. Gaussian kernel, {$K(\boldsymbol{x}_i, \boldsymbol{x}_j) =
exp\{-\gamma||\boldsymbol{x}_i-\boldsymbol{x}_j||^2\}$}).
Then, the goal of the SVM training is to find the optimal $\boldsymbol{\alpha}$.
If $\alpha_i$ is greater than $0$, $\boldsymbol{x}_i$ is called a \textit{support vector}.
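As a concrete illustration of the quantities in Problem~\eqref{eq:svm_dual}, the following minimal sketch evaluates the dual objective, assuming the Gaussian kernel quoted above; it is not tied to any particular solver.
\begin{verbatim}
import numpy as np

def gaussian_kernel(X, gamma):
    # K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2), the kernel quoted above.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def dual_objective(K, y, alpha):
    # sum_i alpha_i - 0.5 * alpha^T Q alpha,  with  Q_{i,j} = y_i y_j K(x_i, x_j).
    Q = (y[:, None] * y[None, :]) * K
    return alpha.sum() - 0.5 * alpha @ Q @ alpha
\end{verbatim}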
\eat{An intriguing property of the SVM is that the SVM classifier is determined by the support vectors,
and the non-support vectors have no effect on the SVM classifier.
The key ideas of our algorithm proposed in Section~\ref{paper:alg} are applicable to various SVM training algorithms
such as chucking based approaches~\cite{osuna1997improved,joachims1999making},
Sequential Minimal Optimisation (SMO)~\cite{platt1998sequential},
and training algorithms in the SVM primal form~\cite{shalev2011pegasos,suykens1999least}.
}
In this paper, we present our ideas in the context of
using SMO to solve Problem~\eqref{eq:svm_dual},
although our key ideas are applicable to other solvers~\cite{osuna1997improved,joachims1999making}.
The training process and the derivation of the optimality condition are unimportant for understanding our algorithms,
and hence are not discussed here.
Next, we present the optimality condition for the SVM training which
will be exploited in our proposed algorithms in Section~\ref{paper:alg}.
\subsubsection{The optimality condition for the SVM training}
In SMO, a training instance $\boldsymbol{x}_i$ is associated with an optimality indicator $f_i$ which is defined as follows.
\begin{small}
\begin{equation}
f_i = y_i\sum_{j=1}^{n}{\alpha_j Q_{i,j}} - y_i
\label{eq:f-i}
\end{equation}
\end{small}The optimality condition of the SVM training is the Karush-Kuhn-Tucker (KKT)~\cite{kuhn2014nonlinear} condition.
When the optimality condition is met,
we have the optimality indicators satisfying the following constraint.
\begin{small}
\begin{equation}
\min\{f_i| i \in I_u \cup I_m\} \ge
\max\{f_i | i \in I_l \cup I_m\}
\label{eq:fu-fl}
\end{equation}
\end{small}where
\begin{small}
\begin{equation}
\begin{adjustbox}{max width=0.43\textwidth}
$
\begin{split}
&I_{m} = \{i | \boldsymbol{x}_i \in \mathcal{X}, 0 < \alpha_i < C\},\\
&I_{u} = \{i | \boldsymbol{x}_i \in \mathcal{X}, y_i = +1, \alpha_i = 0\} \cup \{i | \boldsymbol{x}_i \in \mathcal{X}, y_i = -1, \alpha_i = C\},\\
&I_{l} = \{i | \boldsymbol{x}_i \in \mathcal{X}, y_i = +1, \alpha_i = C\} \cup \{i | \boldsymbol{x}_i \in \mathcal{X}, y_i = -1, \alpha_i = 0\}.
\end{split}
$
\end{adjustbox}
\label{eq:def-i}
\end{equation}
\end{small}As
observed by Keerthi et al.~\cite{keerthi2001improvements},
Constraint~\eqref{eq:fu-fl} is equivalent to the following constraints.
\begin{small}
\begin{equation}
f_i > b \text{ for } i \in I_u; \text{\hspace{5pt}} f_i = b \text{ for } i \in I_m; \text{\hspace{5pt}} f_i < b \text{ for } i \in I_l
\text{\hspace{5pt}}
\label{eq:f-const}
\end{equation}
\end{small}where
$b$ is the bias of the hyperplane.
Our algorithms proposed in Section~\ref{paper:alg} exploit Constraint~\eqref{eq:f-const}.
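For concreteness, the optimality indicators and the check of Constraint~\eqref{eq:fu-fl} can be written in a few lines of numpy; a minimal sketch, in which the data at the end is randomly generated and purely illustrative:
\begin{verbatim}
import numpy as np

def optimality_indicators(K, y, alpha):
    # f_i = y_i * sum_j alpha_j Q_{i,j} - y_i  with  Q_{i,j} = y_i y_j K(x_i, x_j).
    Q = (y[:, None] * y[None, :]) * K
    return y * (Q @ alpha) - y

def kkt_satisfied(K, y, alpha, C, tol=1e-6):
    # Checks min{f_i : i in I_u or I_m} >= max{f_i : i in I_l or I_m}.
    f = optimality_indicators(K, y, alpha)
    m_set = (alpha > tol) & (alpha < C - tol)
    u_set = ((y == +1) & (alpha <= tol)) | ((y == -1) & (alpha >= C - tol))
    l_set = ((y == +1) & (alpha >= C - tol)) | ((y == -1) & (alpha <= tol))
    return f[u_set | m_set].min() >= f[l_set | m_set].max() - tol

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(kkt_satisfied(K, y, alpha=np.zeros(6), C=1.0))   # False: all-zero alpha is not optimal here
\end{verbatim}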
\captionsetup[subfloat]{captionskip=5pt}
\begin{figure}
\caption{$k$-fold cross-validation}
\label{fig:cv:kfold}
\label{fig:cv:1stfold}
\label{fig:cv:pthfold}
\label{fig:cv}
\end{figure}
\eat{
\subsection{$k$-fold cross-validation}
The $k$-fold cross-validation evenly divides the dataset into $k$ subsets.
One subset is used as the test set $\mathcal{T}$,
while the rest $(k-1)$ subsets together form the training set $\mathcal{X}$.
The SVM is first trained using $\mathcal{X}$.
Then the trained SVM is used to classify the test instances in $\mathcal{T}$ by predicting their labels.
To obtain more reliable results, the above training-classification process is repeated for $k$ times,
where every subset is used as the test set in turn.
Figure~\ref{fig:cv} shows an example of the $k$-fold cross-validation.
The whole dataset is evenly divided into $k$ subsets as shown in Figure~\ref{fig:cv:kfold}.
In the first round, the $1^{st}$ subset is used as the test set
and the remaining $(k-1)$ subsets are used as the training set (cf. Figure~\ref{fig:cv:1stfold});
similarly in the $h^{th}$ round, the $h^{th}$ subset is used as the test set
and the remaining $(k-1)$ subsets are used as the training set (cf. Figure~\ref{fig:cv:pthfold}).
As we can see from the $k$-fold cross-validation process,
$(k-2)$ subsets of the training instances are shared in any two rounds.
Note that we are interested in the $k$-fold cross-validation where $k > 2$,
since no instance is shared when $k=2$.
}
\subsection{Relationship between the $h^{\text{th}}$ round and the $(h+1)^{\text{th}}$ round in $k$-fold cross-validation}
\label{paper:relation-two-round}
The $k$-fold cross-validation evenly divides the dataset into $k$ subsets.
One subset is used as the test set $\mathcal{T}$,
while the rest $(k-1)$ subsets together form the training set $\mathcal{X}$.
Suppose we have trained the $h^{\text{th}}$ SVM (in the $h^{\text{th}}$ round)
using the $1^{\text{st}}$ to $(h-1)^{\text{th}}$ and $(h+1)^{\text{th}}$ to $k^{\text{th}}$ subsets as the training set,
and the $h^{\text{th}}$ subset serves as the testing set (cf. Figure~\ref{fig:cv:1stfold}).
Now we want to train the $(h+1)^{\text{th}}$ SVM.
Then, the $1^{\text{st}}$ to $(h-1)^{\text{th}}$ subsets and the $(h+2)^{\text{th}}$ to $k^{\text{th}}$
subsets are shared between the two rounds of the training.
To convert the training set used in the $h^{\text{th}}$ round to the training set for the $(h+1)^{\text{th}}$ round,
we just need to remove the $(h+1)^{\text{th}}$ subset from and add the $h^{\text{th}}$ subset
to the training set used in the $h^{\text{th}}$ round.
Hereafter, we call the $h^{\text{th}}$ and $(h+1)^{\text{th}}$ SVMs \textit{the previous SVM} and \textit{the next SVM}, respectively.
For ease of presentation, we denote the shared subsets---$(k - 2)$ subsets in total---by $\mathcal{S}$,
denote the unshared subset in the training of the previous round by $\mathcal{R}$,
and denote the subset for testing in the previous round by $\mathcal{T}$.
Let us continue to use the example shown in Figure~\ref{fig:cv},
$\mathcal{S}$ consists of the $1^{\text{st}}$ to $(h-1)^{th}$ subsets and the $(h+2)^{\text{th}}$ to $k^{\text{th}}$ subsets;
$\mathcal{R}$ is the $(h+1)^{\text{th}}$ subset; $\mathcal{T}$ is the $h^{\text{th}}$ subset.
To convert the training set $\mathcal{X}$ used in the $h^{\text{th}}$ round to the training set $\mathcal{X}'$ for the $(h+1)^{\text{th}}$ round,
we just need to remove $\mathcal{R}$ from $\mathcal{X}$ and add $\mathcal{T}$ to $\mathcal{X}$, i.e. $\mathcal{X}' = \mathcal{T} \cup \mathcal{X} \setminus \mathcal{R} = \mathcal{T} \cup \mathcal{S}$.
We denote three sets of indices as follows corresponding to
$\mathcal{R}$, $\mathcal{T}$ and $\mathcal{S}$ by $I_\mathcal{R}$, $I_\mathcal{T}$ and $I_\mathcal{S}$, respectively.
\begin{small}
\begin{equation}
I_{\mathcal{R}} = \{i | \boldsymbol{x}_i \in \mathcal{R}\}, I_{\mathcal{T}} = \{i | \boldsymbol{x}_i \in \mathcal{T}\}, I_{\mathcal{S}} = \{i | \boldsymbol{x}_i \in \mathcal{S}\}
\label{eq:i-rt}
\end{equation}
\end{small}
Two rounds of the $k$-fold cross-validation often have many training instances in common, i.e. large $\mathcal{S}$.
E.g. when $k$ is 10, $\frac{8}{9}$ (or $\sim 90\%$) of instances in $\mathcal{X}$ and $\mathcal{X}'$ are the instances of $\mathcal{S}$.
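The index bookkeeping of the two rounds can be made explicit; a minimal sketch, assuming folds of equal size for simplicity:
\begin{verbatim}
def round_index_sets(n, k, h):
    # Fold h is the test set of round h; when moving to round h+1, fold h+1 is
    # removed from the training set and fold h is added; the other k-2 folds
    # are shared.  Folds are numbered 1..k and instances 0..n-1.
    folds = [list(range((j - 1) * n // k, j * n // k)) for j in range(1, k + 1)]
    I_T = folds[h - 1]                         # added when moving to round h+1
    I_R = folds[h]                             # removed when moving to round h+1
    I_S = [i for j, f in enumerate(folds, 1) if j not in (h, h + 1) for i in f]
    return I_R, I_T, I_S

I_R, I_T, I_S = round_index_sets(n=20, k=5, h=2)
print(len(I_S) / (len(I_S) + len(I_T)))        # (k-2)/(k-1) of the training set is shared
\end{verbatim}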
Next, we study three algorithms for reusing the previous SVM to train the next SVM.
\eat{
\subsection{Constraints on alpha values}
As we have discussed in Section~\ref{paper:relation-two-round},
the training instances in $\mathcal{S}$ are used in two rounds of the $k$-fold cross-validation.
As a result, the two SVMs may be very similar, especially when the majority of the support vectors are from $\mathcal{S}$.
Since the support vectors are determined by the alpha values of the instances (cf. Section~\ref{paper:pre-svm}),
the alpha values of the instances in $\mathcal{S}$ obtained from the previous round may be reused for training the next SVM.
When reusing the alpha values,
we need to satisfy the constraints on alpha values in the SVM training.
In what follows, we present the constraints on alpha values.
For the SVM training, the following constraints on alpha values, denoted by $\boldsymbol{\alpha}$,
must be satisfied (cf. Problem~\eqref{eq:svm_dual}).
The constraints are
\begin{small}
\begin{equation}
\begin{split}
\sum_{\boldsymbol{x}_i \in \mathcal{X}}{y_i \alpha_i} = 0, \text{where } 0 \le \alpha_i \le C
\end{split}
\label{eq:pro-const}
\end{equation}
\end{small}where $\mathcal{X}$ is the training set.
The values of $\boldsymbol{\alpha}$ is commonly initialised to $\boldsymbol{0}$ for the SVM training.
Existing SVM implementations which initialise $\boldsymbol{\alpha}$ to $\boldsymbol{0}$ include~\cite{wen2014mascot,chang2011libsvm} and~\cite{catanzaro2008fast}.
This is perhaps the simplest way to initialise alpha values that satisfy Constraint~\eqref{eq:pro-const}.
Compared with initialising alpha values to 0,
we propose algorithms in Section~\ref{paper:alg} to initialise the alpha values
for the next SVM using the alpha values obtained from the previous SVM.
The technique for initialising alpha values for the next SVM is also called alpha seeding,
and is first proposed to improve the efficiency of leave-one-out cross-validation~\cite{decoste2000alpha}.
At some risk of confusion to the reader, we will use ``alpha seeding'' and ``initialising alpha values'' interchangeably,
depending on which interpretation is more natural.
\subsection{The alpha seeding problem}
As we have discussed in Section~\ref{paper:relation-two-round},
the training set $\mathcal{X}'$ for training the next SVM can be considered as
removing $\mathcal{R}$ from and adding $\mathcal{T}$ to the previous training set $\mathcal{X}$;
$\mathcal{S}$ is all the training instances shared between the two training sets.
We can revise Problem~\eqref{eq:svm_dual} as the following optimisation problem that reuses
the alpha values obtained from the previous SVM.
\begin{small}
\begin{equation}
\begin{split}
\hspace{-5pt} \underset{\Delta \boldsymbol{\alpha}'}{\text{max} \hspace{10pt}}
\sum_{i \in I_\mathcal{S} \cup I_\mathcal{T}}&{(\alpha_i + \Delta \alpha_i')}-\frac{1}{2}{{(\boldsymbol{\alpha} + \Delta \boldsymbol{\alpha}')^T}
\boldsymbol{Q} (\boldsymbol{\alpha} + \Delta \boldsymbol{\alpha}')} \\
\text{subject to \hspace{12pt}}
&0 \leq \alpha_i + \Delta \alpha_i' \leq C, \forall i \in I_\mathcal{S} \cup I_\mathcal{T}\\
&\sum_{i \in I_\mathcal{S} \cup I_\mathcal{T}}{y_i(\alpha_i + \Delta \alpha_i')} = 0\\
\end{split}
\label{eq:alpha-seeding}
\end{equation}
\end{small}where
for $i \in I_\mathcal{T}$ (cf. the index set definition in~\eqref{eq:i-rt}), $\alpha_i$ equals to $0$;
for $i \in I_\mathcal{S}$, $\alpha_i$ equals to the alpha value of $\boldsymbol{x}_i$ obtained from the previous SVM training.
So, $\alpha_i$ is a constant in the above problem.
Finding $\Delta \boldsymbol{\alpha}'$ for Problem~\eqref{eq:alpha-seeding} is equivalent to solving a combinatorial problem which is intractable
when $\mathcal{S} \cup \mathcal{T}$ has more than 10 instances~\cite{joachims1999transductive}.
The alpha seeding problem is to find $\Delta \boldsymbol{\alpha}'$ that is likely to maximise the objective function of Problem~\eqref{eq:alpha-seeding}.
Once we have computed $\Delta \alpha_i'$, we can compute the initial alpha value $\alpha_i'$
for the next SVM using $\alpha_i' = \alpha_i + \Delta \alpha_i'$,
where $i \in I_\mathcal{S} \cup I_\mathcal{T}$.
After initialising $\boldsymbol{\alpha}'$, we use SMO to further adjust $\boldsymbol{\alpha}'$ until the optimal condition for the SVM training is reached.
Next, we study three algorithms that produce $\boldsymbol{\alpha}'$ for reusing the previous SVM to train the next SVM.
}
\section{Reusing the previous SVM in $k$-fold cross-validation}
\label{paper:alg}
We present three algorithms that reuse the previous SVM for training the next SVM,
where we progressively refine one algorithm after the other.
(i) Our first algorithm aims to initialise the alpha values $\boldsymbol{\alpha}'$ to their optimal values for the next SVM,
based on the alpha values $\boldsymbol{\alpha}$ of the previous SVM.
We call the first algorithm Adjusting Alpha Towards Optimum (ATO).
(ii) To efficiently initialise $\boldsymbol{\alpha}'$, our second algorithm keeps the alpha values
of the instances in $\mathcal{S}$ unchanged (i.e. $\alpha_s' = \alpha_s$ for $s \in I_\mathcal{S}$),
and estimates $\alpha_t'$ for $t \in I_\mathcal{T}$.
This algorithm effectively performs alpha value initialisation via replacing $\mathcal{R}$ by $\mathcal{T}$
under constraints of Problem~\eqref{eq:svm_dual},
and hence we call the algorithm Multiple Instance Replacement (MIR).
(iii) Similar to MIR, our third algorithm also keeps the alpha values of the instances in $\mathcal{S}$ unchanged;
different from MIR, the algorithm replaces the instances in $\mathcal{R}$ by the instances in $\mathcal{T}$ one at a time,
which dramatically reduces the time for initialising $\boldsymbol{\alpha}'$.
We call the third algorithm Single Instance Replacement (SIR).
Next, we elaborate these three algorithms.
\subsection{Adjusting Alpha Towards Optimum (ATO)}
ATO aims to initialise the alpha values to their optimal values.
It employs the technique for online SVM training, designed by Karasuyama and Takeuchi~\cite{karasuyama2009multiple},
for the $k$-fold cross-validation.
In the online SVM training, a subset $\mathcal{R}$ of outdated training instances is removed from the training set $\mathcal{X}$,
i.e. $\mathcal{X}' = \mathcal{X} \setminus \mathcal{R}$;
a subset $\mathcal{T}$ of newly arrived training instances is added to the training set, i.e. $\mathcal{X}' = \mathcal{X}' \cup \mathcal{T}$.
The previous SVM trained using $\mathcal{X}$ is adjusted by removing and adding subsets of instances to obtain the next SVM.
In the ATO algorithm, we first construct a new training dataset $\mathcal{X}'$ where $\mathcal{X}' = \mathcal{S} = \mathcal{X} \setminus \mathcal{R}$.
Then, we gradually increase alpha values of the instances in $\mathcal{T}$ (i.e. increase $\alpha_t'$ for $t \in I_\mathcal{T}$), denoted by $\boldsymbol{\alpha}_\mathcal{T}'$, to (near) their optimal values;
meanwhile, we gradually decrease the alpha values of the instances in $\mathcal{R}$ (i.e. decrease $\alpha_r'$ for $r \in I_\mathcal{R}$),
denoted by $\boldsymbol{\alpha}_\mathcal{R}'$, to $0$.
Once the alpha value of an instance in $\mathcal{T}$ satisfies the optimal condition (i.e. Constraint~\eqref{eq:f-const}),
we move the instance from $\mathcal{T}$ to the training set $\mathcal{X}'$;
similarly, once the alpha value of an instance in $\mathcal{R}$ equals $0$ (i.e. it becomes a non-support vector),
we remove the instance from $\mathcal{R}$.
ATO terminates the alpha value initialisation when $\mathcal{R}$ is empty.
\subsubsection{Updating the alpha values}
Next, we present details of increasing $\boldsymbol{\alpha}_\mathcal{T}'$ and decreasing $\boldsymbol{\alpha}_\mathcal{R}'$.
We denote the step size for an increment on $\boldsymbol{\alpha}_\mathcal{T}'$ and decrement on $\boldsymbol{\alpha}_\mathcal{R}'$ by $\eta$.
From constraints of Problem~\eqref{eq:svm_dual}, all the alpha values must be in $[0, C]$.
Hence, for $t \in I_\mathcal{T}$ the increment of $\alpha_t'$, denoted by $\Delta \alpha_t'$, cannot exceed $(C - \alpha_t')$;
for $r \in I_\mathcal{R}$ the decrement of $\alpha_r'$, denoted by $\Delta \alpha_r'$, cannot exceed $\alpha_r'$.
We denote the change of all the alpha values of the instances in $\mathcal{T}$ by $\Delta \boldsymbol{\alpha}_\mathcal{T}'$ and
the change of all the alpha values of the instances in $\mathcal{R}$ by $\Delta \boldsymbol{\alpha}_\mathcal{R}'$.
Then, we can compute $\Delta \boldsymbol{\alpha}_\mathcal{T}'$ and $\Delta \boldsymbol{\alpha}_\mathcal{R}'$ as follows.
\begin{small}
\begin{equation}
\Delta \boldsymbol{\alpha}_\mathcal{T}'= \eta (C\boldsymbol{1} - \boldsymbol{\alpha}_\mathcal{T}'), \text{\hspace{10pt}} \Delta \boldsymbol{\alpha}_\mathcal{R}' = -\eta \boldsymbol{\alpha}_\mathcal{R}'
\label{eq:delta-a-rt}
\end{equation}
\end{small}where $\boldsymbol{1}$ is a vector with all the dimensions of $1$.
When we add $\Delta \boldsymbol{\alpha}_\mathcal{T}'$ to $\boldsymbol{\alpha}_\mathcal{T}'$ and $\Delta \boldsymbol{\alpha}_\mathcal{R}'$ to $\boldsymbol{\alpha}_\mathcal{R}'$,
constraints of Problem~\eqref{eq:svm_dual} must be satisfied.
However, after adjusting $\boldsymbol{\alpha}_\mathcal{T}'$ and $\boldsymbol{\alpha}_\mathcal{R}'$, the constraint $\sum_{i \in I_\mathcal{T} \cup I_\mathcal{S} \cup I_\mathcal{R}}{y_i \alpha_i'} = 0$ is often violated,
so we need to adjust the alpha values of the training instances in $\mathcal{X}'$
(recall that at this stage $\mathcal{X}' = \mathcal{S}$).
We propose to adjust the alpha values of the training instances in $\mathcal{X}'$ which are also in $\mathcal{M}$
where $\mathcal{M}=\{\boldsymbol{x}_i \mid i \in I_m\}$.
\eat{The reason for choosing $\mathcal{M}$ is that the alpha values can be increased or decreased under the constraint $\alpha \in [0, C]$.
}In summary, after increasing $\boldsymbol{\alpha}_\mathcal{T}'$ and decreasing $\boldsymbol{\alpha}_\mathcal{R}'$, we adjust $\boldsymbol{\alpha}_\mathcal{M}'$.
So when adjusting $\boldsymbol{\alpha}_\mathcal{T}'$, $\boldsymbol{\alpha}_\mathcal{R}'$ and $\boldsymbol{\alpha}_\mathcal{M}'$, we have the following equation according to constraints of Problem~\eqref{eq:svm_dual}.
\begin{small}
\begin{equation}
\sum_{t \in I_\mathcal{T}}y_t \Delta \alpha_t' +\sum_{r \in I_\mathcal{R}}y_r \Delta \alpha_r' +\sum_{i \in I_m}y_i \Delta \alpha_i' =0
\label{eq:after-adj-alpha-sum}
\end{equation}
\end{small}$\mathcal{M}$ often has a large number of instances, and there are many possible ways to adjust $\boldsymbol{\alpha}_\mathcal{M}'$.
Here, we propose to use the adjustment on $\boldsymbol{\alpha}_\mathcal{M}'$ that ensures
all the training instances in $\mathcal{M}$ satisfy the optimality condition (i.e. Constraint~\eqref{eq:f-const}).
According to Constraint~\eqref{eq:f-const}, we have $\forall i \in I_m$ and $f_i = b$.
Combining $f_i = b$ and the definition of $f_i$ (cf. Equation~\eqref{eq:f-i}), we have the following equation for each $i \in I_m$.
\begin{small}
\begin{equation}
y_i(\sum_{t \in I_\mathcal{T}} Q_{i,t}\Delta\alpha_t' +\sum_{r \in I_\mathcal{R}} Q_{i,r}\Delta\alpha_r' +\sum_{j \in I_m} Q_{i,j}\Delta\alpha_j')=0
\label{eq:after-adj}
\end{equation}
\end{small}Note that $y_i$ can be omitted in the above equation.
We can rewrite Equation~\eqref{eq:after-adj-alpha-sum} and Equation~\eqref{eq:after-adj} using
the matrix notation for all the training instances in $\mathcal{M}$.
\begin{small}
\begin{equation*}
\begin{bmatrix}
\boldsymbol{y}_\mathcal{T}^T & \boldsymbol{y}_\mathcal{R}^T\\
\boldsymbol{Q}_{\mathcal{M},\mathcal{T}} & \boldsymbol{Q}_{\mathcal{M},\mathcal{R}}
\end{bmatrix}
\begin{bmatrix}
\Delta \boldsymbol{\alpha}_\mathcal{T}'\\
\Delta \boldsymbol{\alpha}_\mathcal{R}'
\end{bmatrix}+
\begin{bmatrix}
\boldsymbol{y}_\mathcal{M}^T\\
\boldsymbol{Q}_{\mathcal{M},\mathcal{M}}
\end{bmatrix}
\Delta\boldsymbol{\alpha}_M'
=0
\end{equation*}
\end{small}We substitute $\Delta \boldsymbol{\alpha}_\mathcal{T}'$ and $\Delta \boldsymbol{\alpha}_\mathcal{R}'$ using Equation~\eqref{eq:delta-a-rt};
the above equation can be rewritten as follows.
\begin{small}
\begin{equation}
\boldsymbol{\Delta \alpha}_\mathcal{M}' = -\eta \Phi
\label{eq:delta-a-m}
\end{equation}
\end{small}where {\small $\Phi = \begin{bmatrix}
\boldsymbol{y}_\mathcal{M}^T\\
\boldsymbol{Q}_{\mathcal{M}, \mathcal{M}}
\end{bmatrix}^{-1}
\begin{bmatrix}
\boldsymbol{y}^T_\mathcal{T} & \boldsymbol{y}^T_\mathcal{R} \\
\boldsymbol{Q}_{\mathcal{M},\mathcal{T}} & \boldsymbol{Q}_{\mathcal{M},\mathcal{R}}
\end{bmatrix}
\begin{bmatrix}
C\boldsymbol{1} - \boldsymbol{\alpha}_\mathcal{T}' \\
-\boldsymbol{\alpha}_\mathcal{R}'
\end{bmatrix}$}.
If the inverse of the matrix in Equation~\eqref{eq:delta-a-m} does not exist,
we find the pseudo inverse~\cite{greville1960some}.
\eat{
Note that the set $\mathcal{M}$ may be empty in some cases.
When $\mathcal{M}$ is empty, we randomly pick one instance from $I_u$ and one from $I_l$.
We then set the alpha values of the four instances to $C/2$, and put them to $\mathcal{M}$.
Recall that the ideal alpha value $\alpha_m'$ of the instance in $\mathcal{M}$ is $0 < \alpha_m' < C$.
}
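In matrix form, the update in Equation~\eqref{eq:delta-a-m} amounts to a few lines of numpy; a minimal sketch, assuming the index sets are given as integer arrays and using the pseudo-inverse throughout:
\begin{verbatim}
import numpy as np

def ato_delta_alpha_m(Q, y, alpha, I_M, I_T, I_R, C, eta):
    # Delta alpha_M' = -eta * Phi, with Phi as in the display above.  Q is the
    # full matrix with Q_{i,j} = y_i y_j K(x_i, x_j) for the previous round.
    A = np.vstack([y[I_M][None, :], Q[np.ix_(I_M, I_M)]])      # [y_M^T; Q_MM]
    B = np.vstack([np.concatenate([y[I_T], y[I_R]])[None, :],
                   np.hstack([Q[np.ix_(I_M, I_T)], Q[np.ix_(I_M, I_R)]])])
    rhs = np.concatenate([C - alpha[I_T], -alpha[I_R]])        # [C*1 - alpha_T; -alpha_R]
    return -eta * (np.linalg.pinv(A) @ B @ rhs)
\end{verbatim}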
\textbf{Computing step size $\eta$}:
Given an $\eta$,
we can use Equations~\eqref{eq:delta-a-rt} and~\eqref{eq:delta-a-m} to adjust $\boldsymbol{\alpha}_\mathcal{M}'$, $\boldsymbol{\alpha}_\mathcal{T}'$ and $\boldsymbol{\alpha}_\mathcal{R}'$.
The changes of the alpha values lead to the change of all the optimality indicators $\boldsymbol{f}$.
We denote the change to $\boldsymbol{f}$ by $\Delta \boldsymbol{f}$ which can be computed by the following equation derived from Equation~\eqref{eq:f-i}.
\begin{small}
\begin{equation}
\boldsymbol{y} \odot \boldsymbol{\Delta f} = \eta [-\boldsymbol{Q}_{\mathcal{X}, \mathcal{M}} \Phi+ \boldsymbol{Q}_{\mathcal{X},\mathcal{T}}(C\boldsymbol{1} - \boldsymbol{\alpha}_\mathcal{T}') - \boldsymbol{Q}_{\mathcal{X},\mathcal{R}}\boldsymbol{\alpha}_\mathcal{R}']
\label{eq:delta-f}
\end{equation}
\end{small}where $\odot$ is the Hadamard product (i.e. element-wise product~\cite{schott2005matrix}).
If the step size $\eta$ is too large, more optimality indicators tend to violate Constraint~\eqref{eq:f-const}.
Here, we use Equation~\eqref{eq:delta-f} to compute the step size $\eta$
by letting the updated $f_i$ (where $i \in I_u \cup I_l$) just violate Constraint~\eqref{eq:f-const},
i.e. $f_i + \Delta f_i = b$ for $i \in I_u \cup I_l$.
\eat{
The $\eta$ computed from Equation~\eqref{eq:delta-f} may need to be reduced,
because the $\eta$ will be used to compute $\Delta \boldsymbol{\alpha}_\mathcal{T}'$, $\Delta \boldsymbol{\alpha}_\mathcal{R}'$ and $\Delta \boldsymbol{\alpha}_\mathcal{M}'$ (cf. Equations~\eqref{eq:delta-a-rt} and~\eqref{eq:delta-a-m})
and the updated $\boldsymbol{\alpha}'$ should satisfy $\alpha_i' \in [0, C]$ for all $i \in I_\mathcal{T} \cup I_\mathcal{R} \cup I_m$.
After a valid $\eta$ is obtained, we update $\boldsymbol{\alpha}'$ using Equations~\eqref{eq:delta-a-rt} and~\eqref{eq:delta-a-m}.
}
\subsubsection{Updating $\boldsymbol{f}$}
After updating $\boldsymbol{\alpha}'$,
we update $\boldsymbol{f}$ using Equations~\eqref{eq:f-i} and~\eqref{eq:delta-f}.
Then, we update the sets $I_m$, $I_u$ and $I_l$ according to Constraint~\eqref{eq:f-const}.
The process of computing $\eta$ and updating $\boldsymbol{\alpha}'$ and $\boldsymbol{f}$ are repeated until $\mathcal{R}$ is empty.
\subsubsection{Termination}
When $\mathcal{R}$ is empty, the SVM may not be optimal,
because the set $\mathcal{T}$ may not be empty.
The alpha values obtained from the above process serve as the initial alpha values for the next SVM.
To obtain the optimal SVM, we use SMO to adjust the initial alpha values until the optimality condition is met.
The pseudo-code of the full algorithm is shown in Algorithm~\ref{alg:ato} in Supplementary Material.
\eat{
until the optimality condition (cf. Constraints~\eqref{eq:def-i} and~\eqref{eq:f-const}) is satisfied.
According to the definition (cf. Equation~\eqref{eq:def-i}), $I_m$, $I_u$ and $I_l$ are determined
by alpha values and their labels of the training instances.
Note that we cannot make $I_m$, $I_u$ and $I_l$ satisfy both Equation~\eqref{eq:def-i} and Constraint~\eqref{eq:f-const}
(i.e. the optimality condition),
since after updating $\boldsymbol{f}$ and $\boldsymbol{\alpha}'$ the KKT condition is often violated.
To continue to reduce $\boldsymbol{\alpha}_\mathcal{R}'$ until $\mathcal{R}$ becomes empty,
we rearrange $I_m$, $I_u$ and $I_l$ by only considering the sign of $(f_i - b)$.}
\subsection{Multiple Instance Replacement (MIR)}
A limitation of ATO is that
it requires adjusting \textit{all} the alpha values for an \textit{unbounded} number of times (i.e. until $\mathcal{R}$ is empty).
Hence, the cost of initialising the alpha values may be very high.
In what follows, we propose the Multiple Instance Replacement (MIR) algorithm that only needs to adjust $\boldsymbol{\alpha}_\mathcal{T}'$ once.
The alpha values of the shared instances between the two rounds stay unchanged
(i.e. $\boldsymbol{\alpha}_\mathcal{S}' = \boldsymbol{\alpha}_\mathcal{S}$); the intuition is that many of the support vectors tend to stay unchanged.
The key idea of MIR is to replace $\mathcal{R}$ by $\mathcal{T}$ at once.
We obtain the alpha values of the instances in $\mathcal{S}$ and $\mathcal{R}$ from the previous SVM,
and those alpha values satisfy the following constraint.
\begin{small}
\begin{equation}
\sum_{s \in I_\mathcal{S}}{y_s\alpha_s} + \sum_{r \in I_\mathcal{R}}{y_r\alpha_r} = 0
\label{eq:alpha_sum_const}
\end{equation}
\end{small}
In the next round of SVM $k$-fold cross-validation, $\mathcal{R}$ is removed and $\mathcal{T}$ is added.
When reusing alpha values, we should guarantee that the above constraint holds.
To improve the efficiency of initialising alpha values, we do not change the alpha values in the first term
of Constraint~\eqref{eq:alpha_sum_const}, i.e. $\sum_{s \in I_\mathcal{S}}{y_s\alpha_s}$.
To satisfy the above constraint after replacing $\mathcal{R}$ by $\mathcal{T}$,
we only need to ensure $\sum_{r \in I_\mathcal{R}}{y_r\alpha_r} = \sum_{t \in I_\mathcal{T}}{y_t\alpha_t'}$.
Next, we present an approach to compute $\boldsymbol{\alpha}_\mathcal{T}'$.
According to Equation~\eqref{eq:f-i}, we can rewrite $f_i$ before replacing $\mathcal{R}$ by $\mathcal{T}$ as follows.
\begin{small}
\begin{equation}
f_i =
y_i(\sum_{r \in I_\mathcal{R}}{\alpha_r Q_{i, r}} +
\sum_{s \in I_\mathcal{S}}{\alpha_s Q_{i, s}} - 1)
\label{eq:f-i-before}
\end{equation}
\end{small}After replacing $\mathcal{R}$ by $\mathcal{T}$, $f_i$ can be computed as follows.
\begin{small}
\begin{equation}
f_i = y_i(\sum_{t \in I_\mathcal{T}}{\alpha_t' Q_{i, t}} + \sum_{s \in I_\mathcal{S}}{\alpha_s' Q_{i, s}} - 1)
\label{eq:f-i-after}
\end{equation}
\end{small}where $\alpha_s' = \alpha_s$, i.e. the alpha values in $\mathcal{S}$ stay unchanged.
We can compute the change of $f_i$, denoted by $\Delta f_i$, by subtracting Equation~\eqref{eq:f-i-before} from Equation~\eqref{eq:f-i-after}.
Then, we have the following equation.
\begin{small}
\begin{equation}
\Delta f_i = y_i[\sum_{t \in I_\mathcal{T}}{\alpha_t' Q_{i, t}} -
\sum_{r \in I_\mathcal{R}}{\alpha_r Q_{i, r}}]
\label{eq:delta-f-const}
\end{equation}
\end{small}To meet the constraint
$\sum_{i}{y_i \alpha_i} = 0$ after replacing $\mathcal{R}$ by $\mathcal{T}$, we have the following equation.
\begin{small}
\begin{equation*}
\sum_{s \in I_\mathcal{S}}{y_s\alpha_s} + \sum_{r \in I_\mathcal{R}}{y_r\alpha_r} = \sum_{s \in I_\mathcal{S}}{y_s\alpha_s'} + \sum_{t \in I_\mathcal{T}}{y_t \alpha_t'}
\end{equation*}
\end{small}As $\alpha_s' = \alpha_s$, we rewrite the above equation as follows.
\begin{small}
\begin{equation}
\sum_{r \in I_\mathcal{R}}{y_r\alpha_r} = \sum_{t \in I_\mathcal{T}}{y_t \alpha_t'}
\label{eq:delta-alpha-const}
\end{equation}
\end{small}We write Equations~\eqref{eq:delta-f-const} and~\eqref{eq:delta-alpha-const} together
as follows.
\begin{small}
\begin{equation}
\begin{bmatrix}
\boldsymbol{y} \odot \boldsymbol{\Delta f} + \boldsymbol{Q}_{\mathcal{X},\mathcal{R}}\boldsymbol{\alpha}_{\mathcal{R}}\\
\boldsymbol{y}_{\mathcal{R}}^T \cdot \boldsymbol{\alpha}_{\mathcal{R}}
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{Q}_{\mathcal{X},\mathcal{T}} \\ \boldsymbol{y}_{\mathcal{T}}^T
\end{bmatrix}
\boldsymbol{\alpha}'_{\mathcal{T}}
\label{eq:delta-f2}
\end{equation}
\end{small}Similar to the way we compute $\Delta f_i$ in the ATO algorithm,
given $i$ in $I_u \cup I_l$ we compute $\Delta f_i$ by letting $f_i + \Delta f_i = b$ (cf. Constraint~\eqref{eq:f-const}).
Given $i$ in $I_m$, we set $\Delta f_i = 0$ since we try to avoid $f_i$ violating Constraint~\eqref{eq:f-const}.
Once we have $\Delta \boldsymbol{f}$, the only unknown in Equation~\eqref{eq:delta-f2} is $\boldsymbol{\alpha}_{\mathcal{T}}'$.
\eat{
Note that for $i \in I_\mathcal{R}$, $\Delta f_i$ need not to be included in Equation~\eqref{eq:delta-f2},
since $\boldsymbol{x}_i$ will be removed anyway and hence $\Delta f_i$ can be any value.
}
\subsubsection{Finding an approximate solution for $\boldsymbol{\alpha}_\mathcal{T}'$}
The linear system shown in Equation~\eqref{eq:delta-f2} may have no solution.
This is because $\boldsymbol{\alpha}_\mathcal{S}'$ may also need to be adjusted,
but is not considered in Equation~\eqref{eq:delta-f2}.
Here, we propose to find the approximate solution $\boldsymbol{\alpha}_{\mathcal{T}}'$ for Equation~\eqref{eq:delta-f2}
by using linear least squares~\cite{lawson1974solving} and we have the following equation.
\begin{small}
\begin{equation*}
\begin{bmatrix}
\boldsymbol{Q}_{\mathcal{X},\mathcal{T}} \\ \boldsymbol{y}_{\mathcal{T}}^T
\end{bmatrix}^T
\begin{bmatrix}
\boldsymbol{y} \odot \boldsymbol{\Delta f} + \boldsymbol{Q}_{\mathcal{X},\mathcal{R}}\boldsymbol{\alpha}_{\mathcal{R}}\\
\boldsymbol{y}_{\mathcal{R}}^T \cdot \boldsymbol{\alpha}_{\mathcal{R}}
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{Q}_{\mathcal{X}, \mathcal{T}} \\ \boldsymbol{y}_{\mathcal{T}}^T
\end{bmatrix}^T
\begin{bmatrix}
\boldsymbol{Q}_{\mathcal{X}, \mathcal{T}} \\ \boldsymbol{y}_{\mathcal{T}}^T
\end{bmatrix}
\boldsymbol{\alpha}_{\mathcal{T}}'
\end{equation*}
\end{small}Then we can compute $\boldsymbol{\alpha}_\mathcal{T}'$ using the following equation.
\begin{equation}
\begin{adjustbox}{max width=0.41\textwidth}
$\boldsymbol{\alpha}_{\mathcal{T}}'=
\Big(
\begin{bmatrix}
\boldsymbol{Q}_{\mathcal{X}, \mathcal{T}} \\ \boldsymbol{y}_{\mathcal{T}}^T
\end{bmatrix}^T
\begin{bmatrix}
\boldsymbol{Q}_{\mathcal{X}, \mathcal{T}} \\ \boldsymbol{y}_{\mathcal{T}}^T
\end{bmatrix}
\Big)^{-1}
\begin{bmatrix}
\boldsymbol{Q}_{\mathcal{X},\mathcal{T}} \\ \boldsymbol{y}_{\mathcal{T}}^T
\end{bmatrix}^T
\begin{bmatrix}
\boldsymbol{y} \odot \boldsymbol{\Delta f} + \boldsymbol{Q}_{\mathcal{X},\mathcal{R}}\boldsymbol{\alpha}_{\mathcal{R}}\\
\boldsymbol{y}_{\mathcal{R}}^T \cdot \boldsymbol{\alpha}_{\mathcal{R}}
\end{bmatrix}$
\label{eq:delta-alpha-appr}
\end{adjustbox}
\end{equation}
If the inverse of the matrix in the above equation does not exist,
we find the pseudo inverse similar to ATO.
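In code, this least-squares step reduces to a single call to a standard solver; a minimal sketch, assuming numpy arrays and integer index arrays, and using \texttt{numpy.linalg.lstsq} rather than forming the normal equations of Equation~\eqref{eq:delta-alpha-appr} explicitly (which is equivalent but better conditioned):
\begin{verbatim}
import numpy as np

def mir_alpha_T(Q, y, f, alpha, bias, I_T, I_R, I_m):
    # Least-squares solution of  [Q_{X,T}; y_T^T] alpha_T' = rhs,  where rhs is
    # the left-hand side of the linear system above.  Delta f_i is chosen as in
    # the text: f_i + Delta f_i = bias for i in I_u and I_l, and Delta f_i = 0
    # for i in I_m.  The result is clipped and rebalanced afterwards.
    delta_f = bias - f
    delta_f[I_m] = 0.0
    rhs = np.concatenate([y * delta_f + Q[:, I_R] @ alpha[I_R],
                          [y[I_R] @ alpha[I_R]]])
    A = np.vstack([Q[:, I_T], y[I_T][None, :]])
    alpha_T, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return alpha_T
\end{verbatim}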
\subsubsection{Adjusting $\boldsymbol{\alpha}_\mathcal{T}'$}
\label{paper:cv-rpi-adjust}
Due to the approximation,
the constraints $0 \le \alpha_t' \le C$ and
$\sum_{r \in I_\mathcal{R}}{y_r\alpha_r} = \sum_{t \in I_\mathcal{T}}{y_t \alpha_t'}$ may not hold.
Therefore, we need to adjust $\boldsymbol{\alpha}_\mathcal{T}'$ to satisfy the constraints, and we perform the following steps.
\begin{itemize}
\item If $\alpha_t' < 0$, we set $\alpha_t' = 0$;
if $\alpha_t' > C$, we set $\alpha_t' = C$.
\item If {\footnotesize $\sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} > \sum_{r \in I_\mathcal{R}}{y_r\alpha_r}$ }
(if {\footnotesize $\sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} < \sum_{r \in I_\mathcal{R}}{y_r\alpha_r}$}),
we uniformly decrease (increase) all the $y_t \alpha_t'$ until $\sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} = \sum_{r \in I_\mathcal{R}}{y_r\alpha_r}$,
subject to the constraint $0 \le \alpha_t' \le C$.
\end{itemize}
After the above adjustment, $\alpha_t'$ satisfies the constraints $0 \le \alpha_t' \le C$
and $\sum_{r \in I_\mathcal{R}}{y_r\alpha_r} = \sum_{t \in I_\mathcal{T}}{y_t \alpha_t'}$.
Then, we use SMO with $\boldsymbol{\alpha}'$ (where $\boldsymbol{\alpha}' = \boldsymbol{\alpha}_\mathcal{S}' \cup \boldsymbol{\alpha}_\mathcal{T}'$) as the initial alpha values
for training an optimal SVM.
The pseudo-code of the whole algorithm is shown in Algorithm~\ref{alg:rt-replacement} in Supplementary Material.
\subsection{Single Instance Replacement (SIR)}
Both ATO and MIR have the following major limitation:
the computation for $\boldsymbol{\alpha}_\mathcal{T}'$ is expensive (e.g. it requires computing the inverse of a matrix).
The goal of ATO and MIR is to minimise the number of instances that violate the optimality condition.
In the algorithm we propose here, we instead try to \textit{minimise} $\Delta f_i$, in the hope that a small change to $f_i$
will not violate the optimality condition.
This slight change of the goal leads to a much cheaper computation cost on computing $\boldsymbol{\alpha}_\mathcal{T}'$.
Our key idea is to replace the instances in $\mathcal{R}$ one after another with similar instances in $\mathcal{T}$.
Since we replace one instance in $\mathcal{R}$ by an instance in $\mathcal{T}$ at a time,
we call this algorithm Single Instance Replacement (SIR).
Next, we present the details of the SIR algorithm.
According to Equation~\eqref{eq:f-i}, we can rewrite $f_i$ of the previous SVM as follows.
\begin{small}
\begin{equation}
f_i = y_i(\sum_{j \in I_\mathcal{S} \cup I_\mathcal{R} \setminus \{p\}}{\alpha_j Q_{i, j}} +
\alpha_p Q_{i, p} - 1)
\label{eq:before-single-replace}
\end{equation}
\end{small}where $p \in I_\mathcal{R}$. We replace the training instance $\boldsymbol{x}_p$ by $\boldsymbol{x}_q$ where $q \in I_\mathcal{T}$,
and then the value of $f_i$ after replacing $\boldsymbol{x}_p$ by $\boldsymbol{x}_q$ is as follows.
\begin{small}
\begin{equation}
f_i = y_i(\sum_{j \in I_\mathcal{S} \cup I_\mathcal{R} \setminus \{p\}}{\alpha_j Q_{i, j}} +
\alpha_q' Q_{i, q} - 1)
\label{eq:after-single-replace}
\end{equation}
\end{small}where $\alpha_q' = \alpha_p$.
By subtracting Equation~\eqref{eq:before-single-replace} from Equation~\eqref{eq:after-single-replace},
the change of $f_i$, denoted by $\Delta f_i$, can be computed by
$\Delta f_i = y_i \alpha_p (Q_{i, q} - Q_{i, p})$.
Recall that $Q_{i, j} = y_i y_j K(\boldsymbol{x}_i, \boldsymbol{x}_j)$. We can write $\Delta f_i$ as follows.
\begin{equation}
\Delta f_i = \alpha_p (y_q K(\boldsymbol{x}_i, \boldsymbol{x}_q) - y_p K(\boldsymbol{x}_i, \boldsymbol{x}_p))
\end{equation}
Recall also that in SIR we want to replace $\boldsymbol{x}_p$ by an instance, denoted by $\boldsymbol{x}_q$, that minimises $\Delta f_i$.
When $\alpha_p = 0$, $f_i$ does not change after replacing $\boldsymbol{x}_p$ by $\boldsymbol{x}_q$.
In what follows, we focus on the case that $\alpha_p > 0$.
We propose to replace $\boldsymbol{x}_p$ by $\boldsymbol{x}_q$ if $\boldsymbol{x}_q$ is the ``most similar'' instance to $\boldsymbol{x}_p$ among all the instances in $\mathcal{T}$.
The instance $\boldsymbol{x}_q$ is called the most similar to the instance $\boldsymbol{x}_p$ among all the instances in $\mathcal{T}$,
when the following two conditions are satisfied.
\begin{itemize}
\item $\boldsymbol{x}_p$ and $\boldsymbol{x}_q$ have the same label, i.e. $y_p = y_q$.
\item $K(\boldsymbol{x}_p, \boldsymbol{x}_q) \ge K(\boldsymbol{x}_p, \boldsymbol{x}_t)$ for all $\boldsymbol{x}_t \in \mathcal{T}$.
\end{itemize}
Note that in the second condition, we use the fact that the kernel function
approximates the similarity between two instances~\cite{balcan2008theory}.
If we can find the most similar instance to each instance in $\mathcal{R}$,
the constraint $\sum_{s \in I_\mathcal{S}}{y_s \alpha_s'} + \sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} = 0$ will be satisfied after
replacing $\mathcal{R}$ by $\mathcal{T}$.
In contrast, if we cannot find any instance in $\mathcal{T}$ that has the same label as $\boldsymbol{x}_p$,
we randomly pick an instance from $\mathcal{T}$ to replace $\boldsymbol{x}_p$.
When the above situation happens,
the constraint $\sum_{s \in I_\mathcal{S}}{y_s \alpha_s'} + \sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} = 0$ is violated.
Hence, we need to adjust $\boldsymbol{\alpha}_\mathcal{T}'$ to make the constraint hold.
We use the same approach as MIR to adjust $\boldsymbol{\alpha}_\mathcal{T}'$.
The pseudo code for SIR is given in Algorithm~\ref{alg:sir} in Supplementary Material.
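The replacement step of SIR can be sketched as follows (illustrative only; the \texttt{kernel(a, b)} interface and the variable names are assumptions made for the sketch, not our implementation).
\begin{verbatim}
import numpy as np

def sir_seed(alpha_R, y_R, X_R, y_T, X_T, kernel, seed=0):
    """Give each removed instance's alpha to the most similar same-label
    new instance, with a random fallback (sketch of SIR seeding)."""
    rng = np.random.default_rng(seed)
    alpha_T = np.zeros(len(X_T))
    free = list(range(len(X_T)))              # indices of T not used yet
    for a_r, label, x_r in zip(alpha_R, y_R, X_R):
        if a_r == 0 or not free:
            continue                          # nothing to transfer
        same = [t for t in free if y_T[t] == label]
        if same:                              # most similar same-label instance
            best = max(same, key=lambda t: kernel(x_r, X_T[t]))
        else:                                 # no same-label instance left
            best = int(rng.choice(free))
        alpha_T[best] = a_r
        free.remove(best)
    return alpha_T                            # then adjusted as in MIR
\end{verbatim}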
\eat{
To satisfy the constraint $\sum_{s \in I_\mathcal{S}}{y_s \alpha_s'} + \sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} = 0$, we perform the following steps.
\begin{itemize}
\item If $\sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} > -\sum_{s \in I_\mathcal{S}}{y_s\alpha_s'}$,
we uniformly reduce all the $y_t \alpha_t'$ until $\sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} = -\sum_{s \in I_\mathcal{S}}{y_s\alpha_s'}$,
subjected to the constraint $0 \le \alpha_t' \le C$.
\item If $\sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} < \sum_{s \in I_\mathcal{S}}{y_s\alpha_s'}$,
we uniformly increase all the $y_t \alpha_t'$ until $\sum_{t \in I_\mathcal{T}}{y_t \alpha_t'} = -\sum_{s \in I_\mathcal{S}}{y_s\alpha_s'}$,
subjected to the constraint $0 \le \alpha_t' \le C$.
\end{itemize}
After adjusting $\boldsymbol{\alpha}_\mathcal{T}'$, Constraint~\eqref{eq:pro-const} holds.
We use $\boldsymbol{\alpha}'$ (where $\boldsymbol{\alpha}' = \boldsymbol{\alpha}_\mathcal{T}' \cup \boldsymbol{\alpha}_\mathcal{S}'$) as initialised alpha values to train the next SVM using SMO.
}
\begin{table*}
\centering
\caption{Efficiency comparison ($k = 10$)}
\begin{adjustbox}{max width=1.0\textwidth}
\begin{tabular}{|*{14}{c|}} \hline
\multirow{3}{*}{Dataset} & \multicolumn{7}{c|}{elapsed time (sec)}
& \multicolumn{4}{c|}{number of iterations}
& \multicolumn{2}{c|}{accuracy (\%)} \\\cline{2-14}
& \multirow{2}{*}{libsvm} & \multicolumn{2}{c|}{ATO} & \multicolumn{2}{c|}{MIR} & \multicolumn{2}{c|}{SIR}
& \multirow{2}{*}{libsvm} & \multirow{2}{*}{ATO} & \multirow{2}{*}{MIR} & \multirow{2}{*}{SIR}
&\multirow{2}{*}{libsvm} &\multirow{2}{*}{SIR}\\\cline{3-8}
& & init. & the rest & init. & the rest & init. & the rest & & & & & & \\\hline
Adult & 6,783 & 3,824 & 5,738 & 2,034 & 3,717 & 57 & 3,705 & 397,565 & 361,914 & 318,169 & 317,110 & 82.36 & 82.36\\
Heart & 0.36 & 0.016 & 0.19 & 0.058 & 0.083 & 0.003 & 0.24 & 6,988 & 4,882 & 1,443 & 3,968 & 55.56 & 55.56 \\
Madelon & 54.5 & 2.0 & 24.6 & 1.7 & 12.8 & 1.2 & 13.5 & 9,000 & 5,408 & 1,800 & 1,800 & 50.0 & 50.0 \\
MNIST & 172,816 & 35,410 \eat{est}& 69,435 \eat{est}& 30,897 & 38,696 & 1,416 & 36,406 & 1,291,068 & 575,250 \eat{est}& 280,820 & 258,500 & 50.85 & 50.85 \\
Webdata & 24,689 & 11,166 & 9,394 & 6,172 & 7,574 & 133 & 11,901& 783,208 & 245,385 & 230,357 & 356,528 & 97.70 & 97.70\\
\hline
\end{tabular}
\end{adjustbox}
\label{tbl:overall-eff}
\end{table*}
\begin{table}[b]
\centering
\caption{Datasets and kernel parameters}
\begin{small}
\begin{tabular}{|*{5}{c|}} \hline
Dataset & Cardinality & Dimension & $C$ & $\gamma$ \\\hline
Adult & 32,561 & 123 & 100 & 0.5 \\
Heart & 270 & 13 & 2182 & 0.2 \\
Madelon & 2,000 & 500 & 1 & 0.7071 \\
MNIST & 60,000 & 780 & 10 & 0.125 \\
Webdata & 49,749 & 300 & 64 & 7.8125 \\
\hline
\end{tabular}
\end{small}
\label{tbl:dataset}
\end{table}
\section{Experimental studies}
\label{paper:es}
We empirically evaluate our proposed algorithms using five datasets from the LibSVM website~\cite{chang2011libsvm}.
All our proposed algorithms were implemented in C++.
\eat{We used Eigen~\cite{eigenweb} to perform matrix operations (e.g. find the inverse of a matrix)
in Equations~\eqref{eq:delta-a-m} and~\eqref{eq:delta-alpha-appr}
for the ATO algorithm and MIR algorithm.
In our experiment, ATO breaks from the loop (i.e. lines 3-11 of Algorithm~\ref{alg:ato}),
if ATO executes the loop body for 100 times but $\mathcal{R}$ is still not empty.
}The experiments were conducted on a desktop computer running Linux with a 6-core E5-2620 CPU and 128GB main memory.
Following the common settings, we used the Gaussian kernel function and by default $k$ is set to $10$.
The hyper-parameters for each dataset are
identical to the existing studies~\cite{catanzaro2008fast,smirnov2004unanimous,wufeature}.
Table~\ref{tbl:dataset} gives more details about the datasets.
We study the $k$-fold cross-validation under the setting of binary classification.
\eat{
Note that the MNIST problem is to predict the correct digit (ranging from 0 to 9) of a handwritten digit,
and it is a multi-class classification problem.
We converted it into a binary classification problem,
i.e. predicting whether a handwritten digit is an even or odd number~\cite{catanzaro2008fast}.
}
Next, we first show the overall efficiency of our proposed algorithms in comparison with LibSVM.
Then, we study the effect of varying $k$ from $3$ to $100$ in the $k$-fold cross-validation.
\eat{
and investigate the effect of varying the number of instances in a dataset.
Last, we present experimental results on leave-one-out cross-validation.
}
\subsection{Overall efficiency on different datasets}
We measured the total elapsed time of each algorithm to test their efficiency.
The total elapsed time consists of the alpha initialisation time and the time for the rest of the $10$-fold cross-validation.
The result is shown in Table~\ref{tbl:overall-eff}.
To make the table fit on the page, we do not provide the total elapsed time of ATO, MIR and SIR for each dataset.
But the total elapsed time can be easily computed
by adding the time for alpha initialisation and the time for the rest.
\eat{
For example, the total elapsed time of ATO on Adult is adding the value (i.e. 3,824) in the third column
and the value (i.e. 5,738) in the fourth column of Table~\ref{tbl:overall-eff}.
}Note that the time for ``the rest'' (e.g. the fourth column of Table~\ref{tbl:overall-eff})
includes the time for partitioning dataset into $10$ subsets, training (the most significant part) and classification.
As we can see from the table, the total elapsed time of MIR and SIR is much smaller than LibSVM.
In the Madelon dataset, MIR and SIR are about $2$ times and $4$ times faster than LibSVM, respectively.
In comparison, ATO does not show obvious advantages over MIR and SIR,
and is even slower than LibSVM on the Adult dataset due to spending too much time on alpha value initialisation.
Another observation from the table is that SIR spent the smallest amount of time on the alpha initialisation
among our three algorithms, while having similar ``effectiveness'' to MIR in reusing the alpha values.
The effectiveness on reusing the alpha values is reflected by
the total number of training iterations during the $10$-fold cross-validation.
More specifically, according to the ninth to twelfth columns of Table~\ref{tbl:overall-eff},
LibSVM often requires more training iterations than MIR and SIR;
SIR and MIR have similar numbers of iterations, and on some datasets (e.g. Adult and MNIST) SIR needs fewer iterations,
although SIR saves much time in the initialisation.
More importantly, the improvement in efficiency does not sacrifice accuracy.
According to the last two columns of Table~\ref{tbl:overall-eff},
we can see that SIR produces the same accuracy as LibSVM.
Due to the space limitation, we omit the accuracy of ATO and MIR,
which also produce the same accuracy as LibSVM.
\subsection{Effect of varying $k$}
We varied $k$ from $3$ to $100$ to study the effect of the value of $k$.
\eat{Please note that when $k = 2$, no training instances are shared between the two rounds of the $k$-fold cross-validation,
which violates the assumption of our alpha seeding techniques.
Hence, the minimum value for $k$ is $3$ for this set of experiments.
}Moreover, because conducting this set of experiments is very time consuming especially when $k = 100$,
we only compare SIR (the best among our three algorithms according to the results in Table~\ref{tbl:overall-eff})
with LibSVM.
Table~\ref{tbl:varying-k} shows the results.
Note that as LibSVM was very slow when $k=100$ on the MNIST dataset,
we only ran the first 30 rounds to estimate the total time.
As we can see from the table, SIR consistently outperforms LibSVM.
When $k=100$, SIR is about $32$ times faster than LibSVM in the Madelon dataset.
The experimental result for leave-one-out cross-validation (i.e. $k$ equal to the dataset size) is similar to that for $k=100$,
and is available in Figure~\ref{fig:loocv} in Supplementary Material.
\begin{table}
\caption{Effect of $k$ on total elapsed time (sec)}
\centering
\begin{adjustbox}{max width=0.48\textwidth}
\begin{tabular}{|*{7}{c|}} \hline
\multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{$k=3$}
& \multicolumn{2}{c|}{$k=10$}
& \multicolumn{2}{c|}{$k=100$} \\\cline{2-7}
& libsvm & SIR & libsvm & SIR & libsvm & SIR \\\hline
Adult & 733 & 683 & 6,783 & 3,762 & 41,288 & 33,877 \\
Heart & 0.09 & 0.08 & 0.36 & 0.25 & 3.39 & 1.17 \\
Madelon & 8.8 & 7.8 & 54.5 & 14.7 & 620 & 19.5 \\
MNIST & 29,692 & 22,296 & 172,816 & 37,822 & 2,508,684 & 61,016 \\
Webdata & 3,941 & 2,342 & 24,689 & 12,034 & 190,817 & 31,918 \\
\hline
\end{tabular}
\label{tbl:varying-k}
\end{adjustbox}
\end{table}
\section{Related work}
\label{paper:rw}
We categorise the related studies into
two groups: on alpha seeding, and on online SVM training.
\subsection{Related work on alpha seeding}
DeCoste and Wagstaff~\cite{decoste2000alpha} first introduced the reuse of alpha values
in the SVM leave-one-out cross-validation.
Their method (i.e. AVG discussed in Supplementary Material) has two main steps:
(i) train an SVM with the whole dataset;
(ii) remove an instance from the SVM and distribute the associated alpha value uniformly among all the support vectors.
Lee et al.~\cite{lee2004efficient} proposed a technique (i.e. TOP discussed in Supplementary Material)
to improve the above method.
Instead of uniformly distributing alpha value among all the support vectors,
the method distributes the alpha value to the instance with the largest kernel value.
Existing studies called ``Warm Start"~\cite{kao2004decomposition,chu2015warm} apply alpha seeding in selecting the parameter $C$ for linear SVMs.
Concretely, $\boldsymbol{\alpha}$ obtained from training the $h^{\text{th}}$ linear SVM with $C$ is used
for training the $h^{\text{th}}$ linear SVM with ($C+\Delta$) in the \textbf{two} $k$-fold cross-validation processes
by simply setting $\boldsymbol{\alpha}' = r\boldsymbol{\alpha}$ where $r$ is a ratio computed from $C$ and $\Delta$.
In those studies, no alpha seeding technique is used when training the $k$ SVMs with parameter $C$.
Our work aims to reuse the $h^{\text{th}}$ SVM for training the $(h+1)^{\text{th}}$ SVM
for the $k$-fold cross-validation with parameter $C$.
\subsection{Related work on online SVM training}
\eat{In the online SVM training, a subset of instances is outdated and removed, and meanwhile,
a subset of new instances arrives and is added. The online SVM training problem is similar to the $k$-fold cross-validation problem.
}
Cauwenberghs and Poggio~\cite{cauwenberghs2001incremental} introduced an algorithm for training SVM online
where the algorithm handles adding or removing one training instance.
Karasuyama and Takeuchi~\cite{karasuyama2009multiple} extended the above algorithm to the cases
where multiple instances need to be added or removed.
Their key idea is to gradually reduce the alpha values of the outdated instances to 0,
and meanwhile, to gradually increase the alpha values of the new instances.
Due to the efficiency concern, the algorithm produces \textit{approximate} SVMs.
Our work aims to train SVMs which meet the optimality condition.
\eat{
\subsection{Related work on accelerating $k$-fold cross-validation}
Some recent methods have been developed to improve the efficiency of SVM $k$-fold cross-validation using Graphic Processing Units (GPUs).
Athanasopouloset al.~\cite{athanasopoulos2011gpu} used GPUs
to precompute the kernel matrix, which is stored in main memory, to improve the efficiency of the SVM $k$-fold cross-validation.
Wen et al.~\cite{wen2014mascot} proposed to a more scalable GPU-based SVM $k$-fold cross-validation
by precomputing the kernel matrix and storing the kernel matrix to main memory extended by SSDs.
Our work aims to reuse the previously trained SVM for training the next SVM,
and our techniques can be integrated into the GPU-based algorithms.
The discussion on applying our techniques to the GPU-based algorithms is out of the scope of this paper
and hence is not provided here.
}
\section{conclusion}
\label{paper:conc}
To improve the efficiency of the $k$-fold cross-validation,
we have proposed three algorithms that reuse the previously trained SVM to initialise the next SVM,
such that the training process for the next SVM reaches the optimality condition faster.
We have conducted extensive experiments to validate the effectiveness and efficiency of our proposed algorithms.
Our experimental results have shown that the best algorithm among the three is SIR.
When $k=10$, SIR is several times faster than the $k$-fold cross-validation in LibSVM which does not make use of the previously trained SVM;
when $k=100$, SIR dramatically outperforms LibSVM (32 times faster than LibSVM in the Madelon dataset).
Moreover, our algorithms produce the same results (hence the same accuracy) as the $k$-fold cross-validation in LibSVM.
\eat{
Our SIR algorithm can efficiently identify the support vectors and
accurately estimate their alpha values of the next SVM by using the previous SVM.
Hence, we recommend SIR for reusing the previous SVM.
}
\subsubsection{Acknowledgments}
This work is supported by Australian Research
Council (ARC) Discovery Project DP130104587 and Australian Research
Council (ARC) Future Fellowships Project FT120100832.
Prof. Jian Chen is supported by the Fundamental Research Funds for the Central Universities (Grant No. 2015ZZ029)
and the Opening Project of Guangdong Province Key Laboratory of Big Data Analysis and Processing.
\section*{Supplementary Material}
\subsection{Pseudo-code of our three algorithms}
Here, we present the pseudo-code of our three algorithms proposed in the paper.
\subsubsection{The ATO algorithm}
The full algorithm of ATO is summarised in Algorithm~\ref{alg:ato}.
As we can see from Algorithm~\ref{alg:ato}, ATO terminates when $\mathcal{R}$ is empty
and it might spend a substantial time in the loop especially when the step size $\eta$ is small.
\begin{algorithm}
\begin{small}
\KwIn{Sets $\mathcal{X}$ and $\mathcal{R}$ of instances,\\
\quad \quad \quad $\boldsymbol{\alpha}$ associated with instances in $\mathcal{X}$,\\
\quad \quad \quad and a set $\mathcal{T}$ of new instances.}
\KwOut{Optimal alpha values for $\mathcal{X} \setminus \mathcal{R}$ and $\mathcal{T}$.}
$\boldsymbol{\alpha}'_\mathcal{T} \leftarrow \boldsymbol{0}$\tcc*[f]{Initialise $\boldsymbol{\alpha}'_\mathcal{T}$}\\
\tcc*[f]{Initialise index sets $I_m$, $I_u$ and $I_l$}\\
Init($I_m$, $I_u$, $I_l$, $\boldsymbol{\alpha}$)\\
\Repeat{$\mathcal{R} = \phi$}{
$\eta \leftarrow$ GetStepSize() \tcc*[f]{Eqs~\eqref{eq:delta-a-rt}, \eqref{eq:delta-a-m} and~\eqref{eq:delta-f}}\\
\tcc*[f]{use Eqs~\eqref{eq:delta-a-rt} and~\eqref{eq:delta-a-m} to update $\boldsymbol{\alpha}$} \text{\hspace{6pt}}\\
$\boldsymbol{\alpha}'$, $\boldsymbol{\alpha}'_\mathcal{T} \leftarrow$
UpdateAlpha($\eta$, $\boldsymbol{\alpha}$, $\boldsymbol{\alpha}'_\mathcal{T}$)
$\boldsymbol{f} \leftarrow$ UpdateF($\eta$, $\boldsymbol{f}$)\tcc*[f]{use Eqs~\eqref{eq:f-i} and~\eqref{eq:delta-f}}
\ForEach{$r \in I_\mathcal{R}$}
{
\If(\tcc*[f]{safe to remove $\boldsymbol{x}_r$}){$\alpha_r' = 0$}
{
$I_\mathcal{R} \leftarrow I_\mathcal{R} \setminus \{r\}$, $\boldsymbol{\alpha}' \leftarrow \boldsymbol{\alpha}' \setminus \{\alpha_r'\}$
}
}
\tcc*[f]{update the sets $I_m$, $I_u$ and $I_l$}\\
$I_m$, $I_u$, $I_l \leftarrow$
Rearrange($I_m$, $I_u$, $I_l$, $\boldsymbol{f}$)
}
$\boldsymbol{\alpha}' \leftarrow \boldsymbol{\alpha}'_\mathcal{T} \cup \boldsymbol{\alpha}'$\\
$\mathcal{X}' \leftarrow \mathcal{T} \cup (\mathcal{X} \setminus \mathcal{R})$\\
TrainOptimalSVM($\boldsymbol{\alpha}'$, $\mathcal{X}'$)\tcc*[f]{SMO to improve $\boldsymbol{\alpha}'$}
\caption{Adjusting Alpha Towards Optimum (ATO)}
\label{alg:ato}
\end{small}
\end{algorithm}
\subsubsection{The MIR algorithm}
The full algorithm of MIR is summarised in Algorithm~\ref{alg:rt-replacement}.
\begin{algorithm}
\begin{small}
\KwIn{Sets $\mathcal{X}$ and $\mathcal{R}$ of instances,\\
\quad \quad \quad $\boldsymbol{\alpha}$ associated with instances in $\mathcal{X}$,\\
\quad \quad \quad and a set $\mathcal{T}$ of new instances.}
\KwOut{Optimal alpha values for $\mathcal{X} \setminus \mathcal{R}$ and $\mathcal{T}$.}
$\boldsymbol{\alpha}'_\mathcal{T} \leftarrow \boldsymbol{0}$\tcc*[f]{Initialise $\boldsymbol{\alpha}'_\mathcal{T}$ for $\mathcal{T}$}\\
\tcc*[f]{Initialise index sets $I_m$, $I_u$ and $I_l$}\\
Init($I_m$, $I_u$, $I_l$, $\boldsymbol{\alpha}$)\\
$\Delta f_i \leftarrow$ ComputeDeltaF() \tcc*[f]{Equation~\eqref{eq:delta-f-const}}\\
$\boldsymbol{\alpha}'_\mathcal{T} \leftarrow$ ComputeAlpha($\boldsymbol{\alpha}_\mathcal{R}$, $\boldsymbol{y}_\mathcal{R}$) \tcc*[f]{Equation~\eqref{eq:delta-alpha-appr}}\\
\tcc*[f]{Adjust $\boldsymbol{\alpha}'_\mathcal{T}$ to meet constraints of problem~\eqref{eq:svm_dual}}\\
$\boldsymbol{\alpha}'_\mathcal{T} \leftarrow$ AdjustAlpha($\boldsymbol{\alpha}'_\mathcal{T}$, $\boldsymbol{y}_\mathcal{T}$, $\boldsymbol{\alpha}_\mathcal{R}$, $\boldsymbol{y}_\mathcal{R}$)
$\boldsymbol{\alpha}' \leftarrow \boldsymbol{\alpha}'_\mathcal{T} \cup \boldsymbol{\alpha} \setminus \boldsymbol{\alpha}_\mathcal{R}$\\
$\mathcal{X}' \leftarrow \mathcal{T} \cup (\mathcal{X} \setminus \mathcal{R})$\\
TrainOptimalSVM($\boldsymbol{\alpha}'$, $\mathcal{X}'$)\tcc*[f]{SMO to improve $\boldsymbol{\alpha}'$}
\caption{Multiple Instance Replacement (MIR)}
\label{alg:rt-replacement}
\end{small}
\end{algorithm}
\subsubsection{The SIR algorithm}
The full algorithm of SIR is summarised in Algorithm~\ref{alg:sir}.
\begin{algorithm}
\begin{small}
\KwIn{Sets $\mathcal{X}$ and $\mathcal{R}$ of instances,\\
\quad \quad \quad $\boldsymbol{\alpha}$ associated with instances in $\mathcal{X}$,\\
\quad \quad \quad and a set $\mathcal{T}$ of new instances.}
\KwOut{Optimal alpha values for $\mathcal{X} \setminus \mathcal{R}$ and $\mathcal{T}$.}
$\boldsymbol{\alpha}'_\mathcal{T} \leftarrow \boldsymbol{0}$\tcc*[f]{Initialise $\boldsymbol{\alpha}'_\mathcal{T}$ for $\mathcal{T}$}\\
\ForEach{$r \in I_\mathcal{R}$}
{
$maxValue \leftarrow 0$, $t' \leftarrow -1$\\
\ForEach{$t \in I_\mathcal{T}$}
{
\If{$y_r = y_t \land K(\boldsymbol{x}_r, \boldsymbol{x}_t) > maxValue$}
{
$maxValue \leftarrow K(\boldsymbol{x}_r, \boldsymbol{x}_t)$, $t' \leftarrow t$
}
}
\If(\tcc*[f]{replace $\boldsymbol{x}_r$ by $\boldsymbol{x}_{t'}$}){$t' \neq -1$}
{
$I_\mathcal{T} \leftarrow I_\mathcal{T} \setminus \{t'\}$, $\boldsymbol{\alpha} \leftarrow \boldsymbol{\alpha} \setminus \{\alpha_r\}$, $\alpha_{t'} \leftarrow \alpha_r$
}
}
\tcc*[f]{Adjust $\boldsymbol{\alpha}'_\mathcal{T}$ to meet constraints of Problem~\eqref{eq:svm_dual}}\\
$\boldsymbol{\alpha}'_\mathcal{T} \leftarrow$ AdjustAlpha($\boldsymbol{\alpha}'_\mathcal{T}$, $\boldsymbol{y}_\mathcal{T}$, $\boldsymbol{\alpha}$, $\boldsymbol{y}_\mathcal{S}$)
$\boldsymbol{\alpha}' \leftarrow \boldsymbol{\alpha}'_\mathcal{T} \cup \boldsymbol{\alpha}$\\
$\mathcal{X}' \leftarrow \mathcal{T} \cup (\mathcal{X} \setminus \mathcal{R})$\\
TrainOptimalSVM($\boldsymbol{\alpha}'$, $\mathcal{X}'$)\tcc*[f]{SMO to improve $\boldsymbol{\alpha}'$}
\caption{Single Instance Replacement (SIR)}
\label{alg:sir}
\end{small}
\end{algorithm}
\eat{
\begin{table}
\centering
\caption{Effect of \# of instances in MNIST on elapsed time (sec) in $10$-fold cross-validation}
\begin{tabular}{|*{5}{c|}} \hline
Algorithm& 7,500 & 15,000 & 30,000 & 60,000 \\\hline
libsvm & 1,920 & 10,678 & 38,449 & 172,816 \\
ATO & 1,660 & 7,446 & 25,996 & 104,845 \eat{est} \\
MIR & 1,021 & 5,799 & 19,472 & 69,593 \\
SIR & 1,121 & 4,735 & 16,141 & 37,822 \\
\hline
\end{tabular}
\label{tbl:varying-size}
\end{table}
}
\eat{
\begin{table}
\centering
\caption{Total elapsed time (sec) in leave-one-out cross-validation}
\begin{tabular}{|*{7}{c|}} \hline
Dataset & libsvm & AVG & TOP & ATO & MIR & SIR \\ \hline
Adult & 2.4x10$^7$ & 3.0x10$^6$ & 3.1x10$^6$ & 7.4x10$^6$ & 7.9x10$^6$ & 7.2x10$^5$ \\
Heart & 8.6 & 4.1 & 4.0 & 4.7 & 2.8 & 3.2\\
Madelon & 2.9x10$^5$ & 1.8x10$^4$ & 1.8x10$^4$ & 1.9x10$^4$ & 1.0x10$^4$ & 1.4x10$^4$ \\
MNIST & 1.2x10$^9$ & 5.3x10$^8$ & 4.8x10$^8$ & 4.2x10$^8$\eat{est} & 3.8x10$^8$ & 1.7x10$^7$ \\
Webdata & 2.0x10$^8$ & 2.5x10$^7$ & 2.5x10$^7$ & 4.8x10$^6$ & 2.4x10$^7$ & 1.2x10$^6$ \\
\hline
\end{tabular}
\label{tbl:loocv}
\end{table}
}
\eat{
\subsection{Effect of varying the number of instances in the dataset}
To study the effect of the number of instances in a dataset on our algorithms,
we constructed four sub-datasets of MNIST (which is the largest dataset among the datasets we used)
with the number of instances ranging from $7,500$ to $60,000$.
The results are given in Figure~\ref{fig:varying-size}, where $k$ is set to $10$.
As we can see from the figure, the larger the number of instances the dataset has,
the more significant improvement our algorithms achieve.
Another observation from Figure~\ref{fig:varying-size} is that the total elapsed time of ATO, MIR and SIR
is similar when the dataset size is small.
As the dataset size increases, the elapsed time of SIR increases much slower than ATO and MIR.
This property of SIR is intriguing,
since SIR can handle the large dataset much more efficiently than its counterparts.
\begin{figure}
\caption{Effect of \# of instances on elapsed time in $10$-fold cross-validation}
\label{fig:varying-size}
\end{figure}
}
\subsection{Existing approaches for leave-one-out cross-validation}
\label{paper:alg:loocv}
As our three algorithms (i.e. ATO, MIR and SIR) are proposed to improve the efficiency of $k$-fold cross-validation,
naturally the three algorithms can accelerate leave-one-out cross-validation.
Note that leave-one-out cross-validation is a special case of $k$-fold cross-validation
in which $k$ equals the number of instances in the dataset.
Here, we present two existing alpha seeding techniques~\cite{decoste2000alpha,lee2004efficient}
that have been specifically proposed to improve the efficiency
of leave-one-out cross-validation.
Given a dataset $\mathcal{X}$ of $n$ instances,
both of the algorithms train the SVM using all the $n$ instances.
Recall that the trained SVM meets constraints of Problem~\eqref{eq:svm_dual},
and we have the constraint $\sum_{\boldsymbol{x}_i \in \mathcal{X}}{y_i \alpha_i} = 0$ held.
Then, in each round of the leave-one-out cross-validation, an instance $\boldsymbol{x}_t$ is removed from the trained SVM.
To make the constraint $\sum_{\boldsymbol{x}_i \in \mathcal{X} \setminus \{\boldsymbol{x}_t\}}{y_i \alpha_i'} = 0$ hold,
the alpha values of the instances in $\mathcal{X} \setminus \{\boldsymbol{x}_t\}$ may need to be adjusted.
The two existing techniques apply different strategies to adjust the alpha values of the instances in $\mathcal{X} \setminus \{\boldsymbol{x}_t\}$.
\subsubsection{Uniformly distributing $\alpha_t y_t$ to other instances}
First, the strategy proposed in~\cite{decoste2000alpha} counts the number,
denoted by $d$, of instances with alpha values satisfying $0 < \alpha_i < C$ where $\boldsymbol{x}_i \in \mathcal{X}\setminus \{\boldsymbol{x}_t\}$.
Then, the average amount by which each of the $d$ alpha values needs to be adjusted is $\frac{y_t \alpha_t}{d}$.
For each instance $\boldsymbol{x}_j$ among the $d$ instances, the adjustment of its alpha value is handled in
the following two scenarios.
\begin{itemize}
\item If $y_t = y_j$, then $\alpha_j' = \alpha_j + \frac{\alpha_t}{d}$.
\item If $y_t = -y_j$, then $\alpha_j' = \alpha_j - \frac{\alpha_t}{d}$.
\end{itemize}
Note that the updated alpha value $\alpha_j'$ is subject to the constraint $0 \le \alpha_j' \le C$.
Hence, some alpha values may not be allowed to be increased/decreased by the full $\frac{\alpha_t}{d}$.
Those alpha values are adjusted to the maximum allowed limit (i.e. increased to $C$ or decreased to $0$);
then, similarly to the above process, the remaining amount
(the part of $\frac{\alpha_t}{d}$ which cannot be added to or removed from $\alpha_j'$)
is uniformly distributed over the alpha values that still satisfy $0 < \alpha_i < C$.
We call this technique \textbf{AVG}, because each alpha value of the $d$ instances is increased/decreased
by the average amount of value from $y_t \alpha_t$ (except those near $0$ or $C$).
Our ATO algorithm has a similar idea to AVG,
in that the alpha values of many instances are adjusted by the same (or a similar) amount.
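A NumPy sketch of AVG is shown below (illustrative only; variable names are ours): the part of the distributed value that gets clipped at $0$ or $C$ is re-spread over the remaining free support vectors, as described above.
\begin{verbatim}
import numpy as np

def avg_seed(alpha, y, t, C, tol=1e-12):
    """Remove instance t and spread y_t*alpha_t uniformly over the free
    support vectors (0 < alpha_i < C); sketch of the AVG strategy."""
    alpha = np.asarray(alpha, dtype=float).copy()
    y = np.asarray(y, dtype=float)
    residual = y[t] * alpha[t]                # signed amount to redistribute
    alpha[t] = 0.0
    while abs(residual) > tol:
        free = np.flatnonzero((alpha > tol) & (alpha < C - tol))
        free = free[free != t]
        if free.size == 0:
            break
        share = residual / free.size
        new = np.clip(alpha[free] + share * y[free], 0.0, C)
        residual -= float(np.dot(y[free], new - alpha[free]))
        alpha[free] = new
    return np.delete(alpha, t)                # alpha' for X \ {x_t}
\end{verbatim}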
\subsubsection{Distributing the $\alpha_t y_t$ to similar instances}
AVG requires changing the alpha values of many instances, which may not be efficient.
Lee et al.~\cite{lee2004efficient} proposed a technique to adjust the alpha values of only a few most similar instances to $\boldsymbol{x}_t$.
The technique first finds the instance $\boldsymbol{x}_j$ among $\mathcal{X} \setminus \{\boldsymbol{x}_t\}$ with the largest kernel value,
i.e. $K(\boldsymbol{x}_j, \boldsymbol{x}_t)$ is the largest.
Then, $\alpha_j' \leftarrow (\alpha_j + \alpha_t)$ if $y_t = y_j$ or $\alpha_j' \leftarrow (\alpha_j - \alpha_t)$ if $y_t = -y_j$.
Recall that the updated alpha value $\alpha_j'$ needs to satisfy the constraint $0 \le \alpha_j' \le C$.
Hence, the alpha value of the most similar instance $\boldsymbol{x}_j$ may not be allowed to be increased/decreased by the full $\alpha_t$.
In that case, $\alpha_j'$ is increased to $C$ or decreased to $0$ depending on $y_j$.
The extra amount of value is distributed to the alpha value of the second most similar instance,
the third most similar instance, and so on
until the constraint $\sum_{\boldsymbol{x}_i \in \mathcal{X} \setminus \{\boldsymbol{x}_t\}}{\alpha_i' y_i} = 0$ holds.
We call this technique \textbf{TOP}, since it only adjusts the alpha values of a few most similar (i.e. a top few) instances to $\boldsymbol{x}_t$.
Our MIR and SIR algorithms have a similar idea to TOP,
in that only the alpha values of a proportion of the instances are adjusted.
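TOP admits an analogous sketch (again illustrative only, not the original implementation): the removed value is pushed onto the instances most similar to $\boldsymbol{x}_t$, one at a time, until the equality constraint is restored.
\begin{verbatim}
import numpy as np

def top_seed(alpha, y, X, t, C, kernel, tol=1e-12):
    """Push y_t*alpha_t onto the instances most similar to x_t, one at a
    time, respecting 0 <= alpha <= C (sketch of the TOP strategy)."""
    alpha = np.asarray(alpha, dtype=float).copy()
    residual = y[t] * alpha[t]
    alpha[t] = 0.0
    # candidates ordered by decreasing kernel similarity to x_t
    order = sorted((j for j in range(len(alpha)) if j != t),
                   key=lambda j: kernel(X[j], X[t]), reverse=True)
    for j in order:
        if abs(residual) <= tol:
            break
        new = np.clip(alpha[j] + residual * y[j], 0.0, C)
        residual -= y[j] * (new - alpha[j])
        alpha[j] = new
    return np.delete(alpha, t)
\end{verbatim}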
After the adjustment by either of the two techniques,
the constraint $\sum_{\boldsymbol{x}_i \in \mathcal{X} \setminus \{\boldsymbol{x}_t\}}{\alpha_i' y_i} = 0$ holds,
and $\boldsymbol{\alpha}'$ is used as the initial alpha values for training the next SVM.
In the next subsection (Section~\ref{paper:loo-exp}),
we empirically evaluate the five techniques for accelerating leave-one-out cross-validation.
\subsection{Efficiency comparison on leave-one-out cross-validation}
\label{paper:loo-exp}
Here, we study the efficiency of our proposed algorithms,
in comparison with LibSVM and the existing alpha seeding techniques, i.e. AVG and TOP (cf. Section~\ref{paper:alg:loocv}),
for leave-one-out cross-validation.
Similar to the other algorithms, we implemented AVG and TOP in C++.
Since leave-one-out cross-validation is very expensive for the large datasets,
we estimated the total time for leave-one-out cross-validation on the three large datasets (namely Adult, MNIST and Webdata) for each algorithm.
For MNIST and Webdata, we ran the first 30 rounds of the leave-one-out cross-validation to estimate the total time for each algorithm;
for Adult, we ran the first 100 rounds of the leave-one-out cross-validation to estimate the total time for each algorithm.
As Heart and Madelon are relatively small, we ran the whole leave-one-out cross-validation, and measured their total elapsed time.
The experimental results are shown in Figure~\ref{fig:loocv}.
As we can see from the figure, all five algorithms
are faster than LibSVM, by factors ranging from a few times to a few hundred times (e.g. SIR is 167 times faster than LibSVM on Webdata).
Another observation from the figure is that AVG and TOP have similar efficiency.
It is worth pointing out that our SIR algorithm almost always outperforms all the other algorithms,
except on Heart and Madelon, where MIR is slightly better.
\eat{
From the above experimental study, we recommend SIR to accelerate $k$-fold cross-validation,
because SIR has the following three advantages.
(i) SIR is generally more efficient than other algorithms on various datasets (cf. Table~\ref{tbl:overall-eff});
(ii) SIR is robust while varying $k$ (cf. Table~\ref{tbl:varying-k});
(iii) SIR has better scalability over the dataset size (cf. Figure~\ref{fig:varying-size}).
}
\begin{figure}
\caption{Elapsed time compared with the total elapsed time of SIR in leave-one-out cross-validation}
\label{fig:loocv}
\end{figure}
\end{document} |
\begin{document}
\title[Radii of starlikeness of some special functions]{Radii of
starlikeness of some special functions}
\author[\'A. Baricz]{\'Arp\'ad Baricz}
\address{Department of Economics, Babe\c{s}-Bolyai University, Cluj-Napoca
400591, Romania} \email{[email protected]}
\author[D. K. Dimitrov]{Dimitar K. Dimitrov}
\address{Departamento de Matem\'atica Aplicada, IBILCE, Universidade Estadual Paulista UNESP, S\~{a}o Jos\'e do Rio Preto 15054, Brazil}
\email{[email protected]}
\author[H. Orhan]{Halit Orhan}
\address{Department of Mathematics, Ataturk University, Erzurum 25240, Turkey}
\email{[email protected]}
\author[N. Yagmur]{Nihat Yagmur}
\address{Department of Mathematics, Erzincan University, Erzincan 24000,
Turkey} \email{[email protected]}
\thanks{The research of \'A. Baricz is supported by the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, under Grant
PN-II-RU-TE-2012-3-0190. The research of D. K. Dimitrov is supported by the Brazilian foundations CNPq under Grant 307183/2013--0
and FAPESP under Grants 2009/13832--9}
\begin{abstract}
Geometric properties of the classical Lommel and Struve functions, both of the first kind, are studied.
For each of them, three different normalizations are applied in such a way that
the resulting functions are analytic in the unit disc of the complex plane.
For each of the six functions we determine the radius of starlikeness precisely.
\end{abstract}
\maketitle
\section{Introduction and statement of the main results}
Let $\mathbb{D}_{r}$ be the open disk $\left\{ {z\in \mathbb{C}:\left\vert
z\right\vert <r}\right\} ,$ where $r>0,$ and set $\mathbb{D}=\mathbb{D}_{1}$.
By $\mathcal{A}$ we mean the class of analytic
functions $f:\mathbb{D}_r\to\mathbb{C}$ which satisfy the usual
normalization conditions $f(0)=f'(0)-1=0.$ Denote by $\mathcal{S}$
the class of functions belonging to $\mathcal{A}$ which are univalent in $\mathbb{D}
_r$ and let $\mathcal{S}^{\ast }(\alpha )$ be the subclass of $\mathcal{S}$
consisting of functions which are starlike of order $\alpha $ in $\mathbb{D}
_r,$ where $0\leq \alpha <1.$ The analytic characterization of this class of
functions is
\begin{equation*}
\mathcal{S}^{\ast }(\alpha )=\left\{ f\in \mathcal{S}\ :\ \Re \left(\frac{zf'(z)}{f(z)}\right)>\alpha\ \ \mathrm{for\ all}\ \ z\in \mathbb{
D}_r \right\},
\end{equation*}
and we adopt the convention $\mathcal{S}^{\ast}=\mathcal{S}^{\ast }(0)$. The real number
\begin{equation*}
r_{\alpha }^{\ast}(f)=\sup \left\{ r>0\ :\ \Re \left(\frac{zf'(z)}{f(z)}\right)>\alpha\ \ \mathrm{ for\ all}\ \ z\in \mathbb{D}_{r}\right\},
\end{equation*}
is called the radius of starlikeness of order $\alpha $ of the function $f.$
Note that $r^{\ast }(f)=r_0^{\ast}(f)$ is the largest radius such that the
image region $f(\mathbb{D}_{r^{\ast }(f)})$ is a starlike domain with
respect to the origin.
We consider two classical special functions, the Lommel function of the first kind $s_{\mu ,\nu }$
and the Struve function of the first kind $\mathbf{H}_{\nu}$. They are explicitly defined in terms of the
hypergeometric function $\,_{1}F_{2}$ by
\begin{equation}
s_{\mu ,\nu }(z)=\frac{z^{\mu +1}}{(\mu -\nu +1)(\mu +\nu +1)}\, _{1}F_{2}\left( 1;\frac{\mu -\nu +3}{2},\frac{\mu +\nu +3}{2};-\frac{z^{2}}{4}\right),\ \ \frac{1}{2}(-\mu \pm \nu-3) \not\in \mathbb{N},
\label{LomHypG}
\end{equation}
and
\begin{equation}
\mathbf{H}_{\nu}(z)=\frac{\left(\frac{z}{2}\right)^{\nu+1}}{\sqrt{\frac{\pi}{4}}\, \Gamma\left(\nu+\frac{3}{2}\right)}
\,_{1}F_{2} \left( 1;\frac{3}{2},\nu + \frac{3}{2};-\frac{z^{2}}{4}\right),\ \ -\nu-\frac{3}{2} \not\in \mathbb{N}.
\label{SrtHypG}
\end{equation}
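For numerical experiments it is convenient to evaluate (\ref{LomHypG}) and (\ref{SrtHypG}) directly from their hypergeometric representations. The following \texttt{mpmath} sketch is purely illustrative; it cross-checks the Struve case against the library's built-in \texttt{struveh}.
\begin{verbatim}
from mpmath import mp, hyp1f2, gamma, sqrt, pi, struveh

mp.dps = 30  # working precision

def lommel_s(mu, nu, z):
    """Lommel s_{mu,nu}(z) via the 1F2 representation (sketch)."""
    mu, nu, z = mp.mpf(mu), mp.mpf(nu), mp.mpf(z)
    return (z**(mu + 1) / ((mu - nu + 1) * (mu + nu + 1))
            * hyp1f2(1, (mu - nu + 3) / 2, (mu + nu + 3) / 2, -z**2 / 4))

def struve_H(nu, z):
    """Struve H_nu(z) via the 1F2 representation (sketch)."""
    nu, z = mp.mpf(nu), mp.mpf(z)
    return ((z / 2)**(nu + 1) / (sqrt(pi / 4) * gamma(nu + mp.mpf(3) / 2))
            * hyp1f2(1, mp.mpf(3) / 2, nu + mp.mpf(3) / 2, -z**2 / 4))

# sanity check against mpmath's built-in Struve function
assert abs(struve_H(0.3, 1.7) - struveh(0.3, 1.7)) < mp.mpf('1e-20')
\end{verbatim}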
A common feature of these functions is that they are solutions of inhomogeneous Bessel
differential equations \cite{Wat}. Indeed, the Lommel function of the first kind $s_{\mu ,\nu }$ is
a solution of
\begin{equation*}
z^{2}w''(z)+zw'(z)+(z^{2}-{\nu }^{2})w(z)=z^{\mu +1}
\end{equation*}
while the Struve function $\mathbf{H}_{\nu}$ obeys
\begin{equation*}
z^{2}w''(z)+zw'(z)+(z^{2}-{\nu }^{2})w(z)=\frac{4\left(
\frac{z}{2}\right) ^{\nu +1}}{\sqrt{\pi }\Gamma \left( \nu +\frac{1}{2}
\right) }.
\end{equation*}
We refer to Watson's treatise \cite{Wat} for comprehensive information about these functions and recall
some more recent contributions. In 1972 Steinig \cite{stein}
examined the sign of $s_{\mu ,\nu }(z)$ for real $\mu ,\nu $ and positive $z$. He showed, among other things,
that for $\mu <\frac{1}{2}$ the function $s_{\mu ,\nu }$ has infinitely many changes of sign on $(0,\infty )$. In 2012
Koumandos and Lamprecht \cite{Kou} obtained sharp estimates for the location of the zeros of $s_{\mu -\frac{1}{2},\frac{1}{2}}$
when $\mu \in (0,1)$. The Tur\'{a}n type inequalities for $s_{\mu -\frac{1}{2},\frac{1}{2}}$ were established in \cite{Bar2} while
those for the Struve function were proved in \cite{BPS}.
Geometric properties of $s_{\mu -\frac{1}{2},\frac{1}{2}}$ and of the Struve function were obtained in \cite{Bar3} and in \cite{H-Ny,N-H},
respectively. Motivated by those results we study the problem of starlikeness of certain analytic functions related to the classical special
functions under discussion. Since neither $s_{\mu ,\nu }$, nor $\mathbf{H}_{\nu}$ belongs to $\mathcal{A}$, first we perform some natural
normalizations. We define three functions originating from $s_{\mu ,\nu }$:
\begin{equation*}
f_{\mu ,\nu }(z)=\left( (\mu -\nu +1)(\mu +\nu +1)s_{\mu ,\nu }(z)\right)^{\frac{1}{\mu +1}},
\end{equation*}
\begin{equation*}
g_{\mu ,\nu }(z)=(\mu -\nu +1)(\mu +\nu +1)z^{-\mu }s_{\mu ,\nu }(z)
\end{equation*}
and
\begin{equation*}
h_{\mu ,\nu }(z)=(\mu -\nu +1)(\mu +\nu +1)z^{\frac{1-\mu }{2}}s_{\mu ,\nu }(
\sqrt{z}).
\end{equation*}
Similarly, we associate with $\mathbf{H}_{\nu}$ the functions
$$
u_{\nu }(z)=\left(\sqrt{\pi }2^{\nu }\Gamma \left( \nu +\frac{3}{2} \right)
\mathbf{H}_{\nu }(z)\right)^{\frac{1}{\nu +1}},$$
$$
v_{\nu }(z)=\sqrt{\pi }2^{\nu }z^{-\nu }\Gamma \left( \nu + \frac{3}{2} \right) \mathbf{H}_{\nu }(z)
$$
and
$$
w_{\nu }(z)=\sqrt{\pi }2^{\nu }z^{\frac{1-\nu }{2}}\Gamma \left( \nu +\frac{3}{2}\right) \mathbf{H}_{\nu }(\sqrt{z}).
$$
Clearly the functions $f_{\mu ,\nu }$, $g_{\mu ,\nu }$, $h_{\mu ,\nu }$, $u_{\nu }$, $v_{\nu }$ and $w_{\nu }$
belong to the class $\mathcal{A}$. The main results in the present note concern the exact values of the radii
of starlikeness for these six functions, for some ranges of the parameters.
Let us set
$$
f_{\mu }(z)=f_{\mu -\frac{1}{2},\frac{1}{2}}(z),\ \ g_{\mu }(z)=g_{\mu-\frac{1}{2},\frac{1}{2}}(z)\ \ \ \mbox{and}\ \ \ h_{\mu }(z)=h_{\mu-\frac{1}{2},\frac{1}{2}}(z).$$
The first principal result we establish reads as follows:
\begin{theorem}
\label{theo1} Let $\mu\in(-1,1),$ $\mu\neq0.$ The following statements hold:
\begin{enumerate}
\item[\textbf{a)}] If $0\leq\alpha<1$ and $\mu\in \left(-\frac{1}{2},0\right),$ then $r_{\alpha }^{\ast }(f_{\mu })=x_{\mu
,\alpha }$, where $x_{\mu ,\alpha }$ is the smallest positive root of the
equation
\begin{equation*}
z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-\alpha \left(\mu + \frac{1}{2} \right)s_{\mu -\frac{1}{2},\frac{1}{2}}(z)=0.
\end{equation*}
Moreover, if $0\leq\alpha<1$ and $\mu\in \left(-1,-\frac{1}{2}\right),$ then $r_{\alpha
}^{\ast }(f_{\mu })=q_{\mu ,\alpha }$, where $q_{\mu ,\alpha }$ is
the unique positive root of the equation $$izs_{\mu -\frac{1}{2},\frac{1}{2}
}'(iz)-\alpha \left(\mu +\frac{1}{2}\right)s_{\mu -\frac{1}{2},\frac{1}{2}
}(iz)=0.$$
\item[\textbf{b)}] If $0\leq\alpha<1,$ then $r_{\alpha }^{\ast }(g_{\mu })=y_{\mu
,\alpha }$, where $y_{\mu ,\alpha }$ is the smallest positive root of the
equation
\begin{equation*}
z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-\left(\mu +\alpha- \frac{1}{2} \right) s_{\mu -\frac{1}{2},\frac{1}{2}}(z)=0.
\end{equation*}
\item[\textbf{c)}] If $0\leq\alpha<1,$ then $r_{\alpha }^{\ast }(h_{\mu })=t_{\mu ,\alpha }$, where $
t_{\mu ,\alpha }$ is the smallest positive root of the equation
\begin{equation*}
zs_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-\left(\mu +2\alpha -\frac{3}{2
}\right)s_{\mu -\frac{1}{2},\frac{1}{2}}(z)=0.
\end{equation*}
\end{enumerate}
\end{theorem}
The corresponding result about the radii of starlikeness of the functions related to the Struve function
is:
\begin{theorem}
\label{theo2} Let $|\nu|<\frac{1}{2}.$ The following assertions are true:
\begin{enumerate}
\item[\textbf{a)}] If $0\leq
\alpha <1,$ then $r_{\alpha }^{\ast }(u_{\nu })=\delta _{\nu ,\alpha }$,
where $\delta _{\nu ,\alpha }$ is the smallest positive root of the equation
\begin{equation*}
z\mathbf{H}_{\nu }'(z)-\alpha (\nu +1)\mathbf{H}_{\nu }(z)=0.
\end{equation*}
\item[\textbf{b)}] If $0\leq
\alpha <1,$ then $r_{\alpha }^{\ast }(v_{\nu })=\rho _{\nu ,\alpha }$, where $\rho
_{\nu ,\alpha }$ is the smallest positive root of the equation
\begin{equation*}
z\mathbf{H}_{\nu }'(z)-(\alpha +\nu )\mathbf{H}_{\nu }(z)=0.
\end{equation*}
\item[\textbf{c)}] If $0\leq\alpha<1,$ then $r_{\alpha }^{\ast }(w_{\nu })=\sigma _{\nu ,\alpha }$,
where $\sigma _{\nu ,\alpha }$ is the smallest positive root of the equation
\begin{equation*}
z\mathbf{H}_{\nu }'(z)-(2\alpha +\nu -1)\mathbf{H}_{\nu }(z)=0.
\end{equation*}
\end{enumerate}
\end{theorem}
It is worth mentioning that the starlikeness of $h_{\mu }$, when $\mu \in (-1,1)$, $\mu
\neq 0,$ as well as of $w_{\nu }$, under the restriction $\left\vert \nu \right\vert
\leq \frac{1}{2}$, was established in \cite{Bar3}, and it was proved there that all the derivatives of these
functions are close-to-convex in $\mathbb{D}.$
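The radii in Theorems \ref{theo1} and \ref{theo2} can be located numerically once the defining series are available. The following sketch (illustrative only; it uses the representation (\ref{LomHypG}) together with a scan-and-bisect search) computes $r_{\alpha }^{\ast }(g_{\mu })$ of Theorem \ref{theo1} \textbf{b)} as the smallest positive root of $z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-\left(\mu +\alpha -\frac{1}{2}\right) s_{\mu -\frac{1}{2},\frac{1}{2}}(z)$.
\begin{verbatim}
from mpmath import mp, hyp1f2, diff

mp.dps = 30

def radius_g(mu, alpha=0, tol=mp.mpf('1e-20')):
    """Smallest positive root of z*s'(z) - (mu+alpha-1/2)*s(z) = 0,
    i.e. r_alpha^*(g_mu) in Theorem 1 b); illustrative sketch only."""
    mu = mp.mpf(mu)
    s = lambda z: (z**(mu + mp.mpf('0.5')) / (mu * (mu + 1))
                   * hyp1f2(1, (mu + 2) / 2, (mu + 3) / 2, -z**2 / 4))
    F = lambda z: z * diff(s, z) - (mu + alpha - mp.mpf('0.5')) * s(z)
    a = step = mp.mpf('0.01')
    while F(a) * F(a + step) > 0:      # coarse scan for the first sign change
        a += step
    b = a + step
    while b - a > tol:                 # refine by plain bisection
        m = (a + b) / 2
        a, b = (m, b) if F(a) * F(m) > 0 else (a, m)
    return (a + b) / 2

print(radius_g(0.5))                   # r^*(g_{1/2})
\end{verbatim}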
\section{Preliminaries}
\setcounter{equation}{0}
\subsection{Hadamard's factorization} The following preliminary result is the content of Lemmas 1 and 2 in \cite{Bar2}.
\begin{lemma}
\label{lem1} Let
\begin{equation*}
\varphi _{k}(z)=\, _{1}F_{2}\left( 1;\frac{\mu -k+2}{2},\frac{\mu -k+3}{2};-
\frac{z^{2}}{4}\right)
\end{equation*}
where ${z\in \mathbb{C}}$, ${\mu \in \mathbb{R}}$ and ${k\in }\left\{
0,1,\dots\right\} $ such that ${\mu -k}$ is not in $\left\{
0,-1,\dots\right\} $. Then, $\varphi _{k}$ is an entire function of order $
\rho =1$ and of exponential type $\tau =1.$ Consequently, the Hadamard
factorization of $\varphi _{k}$ is of the form
\begin{equation}
\varphi _{k}(z)=\prod\limits_{n\geq 1}\left( 1-\frac{z^{2}}{z_{\mu ,k,n}^{2}}
\right) , \label{1.6}
\end{equation}
where $\pm z_{\mu ,k,1},$ $\pm z_{\mu ,k,2},\dots$ are all zeros of the
function $\varphi _{k}$ and the infinite product is absolutely convergent.
Moreover, for $z,$ ${\mu }$ and $k$ as above, we have
\begin{equation*}
(\mu -k+1)\varphi _{k+1}(z)=(\mu -k+1)\varphi _{k}(z)+z\varphi _{k}'(z), \label{1.7}
\end{equation*}
\begin{equation*}
\sqrt{z}s_{\mu -k-\frac{1}{2},\frac{1}{2}}(z)=\frac{z^{\mu -k+1}}{(\mu
-k)(\mu -k+1)}\varphi _{k}(z). \label{1.8}
\end{equation*}
\end{lemma}
\subsection{Quotients of power series} We will also need the following result (see \cite{biernacki,pv}):
\begin{lemma}\label{lempower}
Consider the power series $f(x)=\displaystyle\sum_{n\geq 0}a_{n}x^n$ and $g(x)=\displaystyle\sum_{n\geq 0}b_{n}x^n$,
where $a_{n}\in \mathbb{R}$ and $b_{n}>0$ for all $n\geq 0$. Suppose that both series converge on $(-r,r)$, for some $r>0$. If the
sequence $\lbrace a_n/b_n\rbrace_{n\geq 0}$ is increasing (decreasing), then the function $x\mapsto{f(x)}/{g(x)}$ is increasing
(decreasing) too on $(0,r)$. The result remains true for the power series
$$f(x)=\displaystyle\sum_{n\geq 0}a_{n}x^{2n}\ \ \ \mbox{and}\ \ \ g(x)=\displaystyle\sum_{n\geq 0}b_{n}x^{2n}.$$
\end{lemma}
\subsection{Zeros of polynomials and entire functions and the Laguerre-P\'olya class} In this subsection
we provide the necessary information about polynomials and entire functions with real zeros. An algebraic polynomial
is called hyperbolic if all its zeros are real.
The simple statement that two real polynomials $p$ and $q$ possess real and interlacing zeros if and only if every linear combination
of $p$ and $q$ is a hyperbolic polynomial is sometimes called Obrechkoff's theorem. We formulate the following specific statement
that we shall need.
\begin{lemma} \label{OLem}
Let $p(x)=1-a_1 x +a_2 x^2 -a_3 x^3 + \cdots +(-1)^n a_n x^n = (1-x/x_1)\cdots (1-x/x_n)$ be a hyperbolic polynomial with positive zeros
$0< x_1\leq x_2 \leq \cdots \leq x_n$, and normalized by $p(0)=1$. Then, for any constant $C$, the polynomial $q(x) = C p(x) - x\, p'(x)$ is hyperbolic. Moreover, the smallest
zero $\eta_1$ belongs to the interval $(0,x_1)$ if and only if $C<0$.
\end{lemma}
The proof is straightforward; it suffices to apply Rolle's theorem and then count the sign changes of the linear combination at the zeros of $p$. We refer to \cite{BDR, DMR} for further results on monotonicity and asymptotics of zeros of linear combinations of hyperbolic polynomials.
A real entire function $\psi$ belongs to the Laguerre-P\'{o}lya class $\mathcal{LP}$ if it can be represented in the form
$$
\psi(x) = c x^{m} e^{-a x^{2} + \beta x} \prod_{k\geq 1}
\left(1+\frac{x}{x_{k}}\right) e^{-\frac{x}{x_{k}}},
$$
with $c,$ $\beta,$ $x_{k} \in \mathbb{R},$ $a \geq 0,$ $m\in
\mathbb{N} \cup\{0\},$ $\sum x_{k}^{-2} < \infty.$
Similarly, $\phi$ is said to be of
type I in the Laguerre-P\'{o}lya class, written $\phi \in \mathcal{LP}i$,
if $\phi(x)$ or $\phi(-x)$ can be represented as
$$
\phi(x) = c x^{m} e^{\sigma x} \prod_{k\geq 1}\left(1+\frac{x}{x_{k}}\right),
$$
with $c \in \mathbb{R},$ $\sigma \geq 0,$ $m \in
\mathbb{N}\cup\{0\},$ $x_{k}>0,$ $\sum 1/x_{k} < \infty.$
The class $\mathcal{LP}$ is the closure of the set of hyperbolic
polynomials in the topology induced by the uniform convergence
on the compact sets of the complex plane, while $\mathcal{LP}i$ is the closure of the set
of hyperbolic polynomials whose zeros possess a preassigned constant sign.
Given an entire function $\varphi$ with the Maclaurin expansion
$$\varphi(x) = \sum_{k\geq 0}\gamma_{k} \frac{x^{k}}{k!},$$
its Jensen polynomials are defined by
$$
g_n(\varphi;x) = g_{n}(x) = \sum_{j=0}^{n} {n\choose j} \gamma_{j}
x^j.
$$
Jensen proved the following result in \cite{Jen12}:
\begin{THEO}\label{JTh}
The function $\varphi$ belongs to $\mathcal{LP}$ ($\mathcal{LP}i$, respectively) if and only if
all the polynomials $g_n(\varphi;x)$, $n=1,2,\ldots$, are hyperbolic (hyperbolic
with zeros of equal sign).
Moreover, the sequence $g_n(\varphi;z/n)$ converges locally
uniformly to $\varphi(z)$.
\end{THEO}
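For instance, if $\varphi(x)=e^{x}$, then $\gamma_{k}=1$ for every $k$, so that
$$
g_{n}(\varphi;x)=\sum_{j=0}^{n}{n\choose j}x^{j}=(1+x)^{n},
$$
which is hyperbolic for each $n$, while $g_{n}(\varphi;z/n)=\left(1+z/n\right)^{n}\to e^{z}$ locally uniformly, in accordance with Theorem \ref{JTh}.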
Further information about the Laguerre-P\'olya class can be found in
\cite{Obr, RS} while \cite{DC} contains references and additional facts
about the Jensen polynomials in general and also about those related to the
Bessel function.
Special emphasis has been given to the question of characterizing the
kernels whose Fourier transforms belong to $\mathcal{LP}$ (see \cite{DR}). The following is a
typical result of this nature, due to P\'olya \cite{pol}.
\begin{THEO} \label{PTh}
\label{pol} Suppose that the function $K$ is positive, strictly increasing
and continuous on $[0, 1)$ and integrable there.
Then the entire functions
\begin{equation*}
U(z)=\int_{0}^{1}K(t) \sin (zt)dt\ \ \ \mbox{and} \ \ \
V(z)=\int_{0}^{1}K(t)\cos(zt)dt
\end{equation*}
have only real and simple zeros and their zeros interlace.
\end{THEO}
In other words, the latter result states that both the sine and the cosine transforms of a kernel
are in the Laguerre-P\'olya class provided the kernel is compactly supported and increasing in the support.
\begin{theorem}\label{ThZ} Let $\mu\in(-1,1),$ $\mu\neq0,$ and $c$ be a constant such that $c<{\mu}+\frac{1}{2}$. Then the function $z\mapsto z s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)-c s_{\mu -\frac{1}{2},\frac{1}{2}}(z)$
can be represented in the form
\begin{equation}
\mu (\mu+1) \left( z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z)- c\, s_{\mu -\frac{1}{2},\frac{1}{2}}(z) \right) = z^{\mu+\frac{1}{2}} \psi_\mu(z), \label{psi}
\end{equation}
where $\psi_\mu$ is an even entire function and $ \psi_\mu \in \mathcal{LP}$. Moreover, the smallest positive zero of $\psi_\mu$
does not exceed the first positive zero of $s_{\mu-\frac{1}{2},\frac{1}{2}}$.
Similarly, if $|\nu |<\frac{1}{2}$ and $d$ is a constant satisfying $d<\nu+1$, then
\begin{equation}
{\frac{\sqrt{\pi}}{2}}\, \Gamma\left(\nu+\frac{3}{2}\right)\ \left(\, z \mathbf{H}_{\nu }'(z)- d \mathbf{H}_{\nu }(z) \, \right) = \left(\frac{z}{2}\right)^{\nu+1}\, \phi_\nu(z),
\label{phinu}
\end{equation}
where $\phi_\nu$ is an entire function in the Laguerre-P\'olya class and the smallest positive zero of $\phi_\nu$
does not exceed the first positive zero of $\mathbf{H}_{\nu}$.
\end{theorem}
\begin{proof}
First suppose that $\mu\in(0,1).$ Since, by (\ref{LomHypG}),
$$
\mu (\mu+1) s_{\mu -\frac{1}{2},\frac{1}{2}}(z) = \sum_{k\geq 0} \frac{(-1)^kz^{2k+\mu+\frac{1}{2}}}{2^{2k}\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k},
$$
we have
$$
\mu (\mu+1) z\, s_{\mu -\frac{1}{2},\frac{1}{2}}'(z) = \sum_{k\geq 0} \frac{(-1)^k\left(2k+\mu+\frac{1}{2}\right)z^{2k+\mu+\frac{1}{2}}}{2^{2k}\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k}.
$$
Therefore, (\ref{psi}) holds with
$$
\psi_\mu(z) = \sum_{k\geq 0} \frac{2k+{\mu}+\frac{1}{2} -c}{\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k} \left( -\frac{z^2}{4} \right)^{k}.
$$
On the other hand, by Lemma 1,
$$
\mu (\mu+1) s_{\mu -\frac{1}{2},\frac{1}{2}}(z) = z^{\mu+\frac{1}{2}} \varphi_0(z),
$$
and, by \cite[Lemma 3]{Bar2}, we have
\begin{equation}
z\varphi _{0}(z)=\mu (\mu +1)\int_{0}^{1}(1-t)^{\mu -1}\sin (zt)dt, \ \ \mathrm{for}\ \mu >0. \label{integ}
\end{equation}
Therefore $\varphi_0$ has the Maclaurin expansion
$$
\varphi_0(z) = \sum_{k\geq 0}\frac{1}{\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k} \left( -\frac{z^2}{4} \right)^{k}.
$$
Moreover, (\ref{integ}) and Theorem \ref{PTh} imply that $\varphi_0 \in \mathcal{LP}$ for $\mu\in(0,1)$, so that
the function $\tilde{\varphi}_0(z):= \varphi_0(2\sqrt{z})$,
$$
\tilde{\varphi}_0(\zeta) = \sum_{k\geq 0} \frac{1}{\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k} \left( -\zeta\right)^{k},
$$
belongs to $\mathcal{LP}i$.
Then it follows from Theorem \ref{JTh} that its Jensen polynomials
$$
g_n(\tilde{\varphi}_0;\zeta) = \sum_{k=0}^n {n\choose k} \frac{k!}{\left(\frac{\mu+2}{2}\right)_k \left(\frac{\mu+3}{2}\right)_k} \left( -\zeta \right)^{k}
$$
are all hyperbolic. However, observe that the Jensen polynomials of $\tilde{\psi}_\mu(z):= \psi_\mu(2\sqrt{z})$ are simply
$$
-\frac{1}{2}g_n(\tilde{\psi}_\mu;\zeta) = -\frac{1}{2}\left({\mu}+\frac{1}{2}-c\right)\, g_n(\tilde{\varphi}_0;\zeta) - \zeta\, g_n'(\tilde{\varphi}_0;\zeta).
$$
Lemma \ref{OLem} implies that all zeros of $g_n(\tilde{\psi}_\mu;\zeta)$ are real and positive and that the smallest one
precedes the first zero of $g_n(\tilde{\varphi}_0;\zeta)$. In view of Theorem \ref{JTh}, the latter conclusion immediately yields that $\tilde{\psi}_\mu \in \mathcal{LP}i$
and that its first zero precedes the one of $\tilde{\varphi}_0$. Finally, the first statement of the theorem for $\mu\in(0,1)$ follows after we go back
from $\tilde{\psi}_\mu$ and $\tilde{\varphi}_0$ to $\psi_\mu$ and $\varphi_0$ by setting $\zeta=-\frac{z^2}{4}$.
Now we prove \eqref{psi} for the case when $\mu\in(-1,0)$. Observe that for $\mu\in(0,1)$ the function (see \cite[Lemma 3]{Bar2}) $$\varphi_1(z)=\sum_{k\geq 0}\frac{1}{\left(\frac{\mu+1}{2}\right)_k \left(\frac{\mu+2}{2}\right)_k} \left( -\frac{z^2}{4} \right)^{k}=\mu \int_{0}^{1}(1-t)^{\mu -1}\cos (zt)dt$$
also belongs to the Laguerre-P\'olya class $\mathcal{LP},$ and hence the Jensen polynomials of $\tilde{\varphi}_1(z):= \varphi_1(2\sqrt{z})$ are hyperbolic. Straightforward calculations show that the Jensen polynomials of $\tilde{\psi}_{\mu-1}(z):= \psi_{\mu-1}(2\sqrt{z})$ are
$$-\frac{1}{2}g_n(\tilde{\psi}_{\mu-1};\zeta) = -\frac{1}{2}\left({\mu}-\frac{1}{2}-c\right)\, g_n(\tilde{\varphi}_1;\zeta) - \zeta\, g_n'(\tilde{\varphi}_1;\zeta).$$
Lemma \ref{OLem} implies that for $\mu\in(0,1)$ all zeros of $g_n(\tilde{\psi}_{\mu-1};\zeta)$ are real and positive and that the smallest one
precedes the first zero of $g_n(\tilde{\varphi}_1;\zeta)$. This fact, together with Theorem \ref{JTh}, yields that $\tilde{\psi}_{\mu-1} \in \mathcal{LP}i$
and that its first zero precedes the one of $\tilde{\varphi}_1$. Consequently, the first statement of the theorem for $\mu\in(-1,0)$ follows after we go back
from $\tilde{\psi}_{\mu-1}$ and $\tilde{\varphi}_1$ to $\psi_{\mu-1}$ and $\varphi_1$ by setting $\zeta=-\frac{z^2}{4}$ and substituting $\mu$ by $\mu+1$.
In order to prove the corresponding statement for (\ref{phinu}), we recall first that the hypergeometric representation (\ref{SrtHypG}) of the Struve function
is equivalent to
$$
\frac{\sqrt{\pi}}{2}\, \Gamma\left(\nu+\frac{3}{2}\right)\, \mathbf{H}_{\nu}(z) = \sum_{k\geq 0} \frac{(-1)^k}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k} \left( \frac{z}{2} \right)^{2k+\nu+1},
$$
which immediately yields
$$
\phi_\nu(z) = \sum_{k\geq 0} \frac{2k+\nu+1-d}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k}\left( -\frac{z^2}{4} \right)^{k}.
$$
On the other hand, the integral representation
\begin{equation*}
\mathbf{H}_{\nu }(z)=\frac{2\left(\frac{z}{2}\right) ^{\nu }}{\sqrt{\pi }
\Gamma \left( \nu +\frac{1}{2}\right) }\int_{0}^{1}(1-t^{2})^{\nu -\frac{1}{2
}}\sin (zt)dt,
\end{equation*}
which holds for $\nu >-\frac{1}{2},$ and Theorem \ref{PTh} imply that the even entire function
$$
\mathcal{H}_\nu(z) = \sum_{k\geq 0} \frac{1}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k} \left( -\frac{z^2}{4} \right)^{k}
$$
belongs to the Laguerre-P\'olya class when $|\nu|<\frac{1}{2}$. Then the function $\tilde{\mathcal{H}}_\nu(z):= \mathcal{H}_{\nu}(2\sqrt{z})$,
$$
\tilde{\mathcal{H}}_\nu(\zeta) = \sum_{k\geq 0} \frac{1}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k} \left(-\zeta \right)^{k},
$$
is in $\mathcal{LP}i$. Therefore, its Jensen polynomials
$$
g_n(\tilde{\mathcal{H}}_\nu;\zeta) = \sum_{k=0}^n {n\choose k} \frac{k!}{\left(\frac{3}{2}\right)_k \left(\nu+\frac{3}{2}\right)_k} \left( -\zeta \right)^{k}
$$
are hyperbolic, with positive zeros. Then, by Lemma \ref{OLem}, the polynomial
$
-\frac{1}{2}\left({\nu}+{1}-d\right)\, g_n(\tilde{\mathcal{H}}_\nu;\zeta) - \zeta\, g_n'(\tilde{\mathcal{H}}_\nu;\zeta)
$
possesses only real positive zeros. Obviously the latter polynomial coincides with the $n$th Jensen polynomial of
$\tilde{\phi}_\nu(z) = \phi_\nu(2\sqrt{z})$, that is
$$
-\frac{1}{2}g_n(\tilde{\phi}_\nu;\zeta) = -\frac{1}{2}\left({\nu}+{1}-d\right)\, g_n(\tilde{\mathcal{H}}_\nu;\zeta) - \zeta\, g_n'(\tilde{\mathcal{H}}_\nu;\zeta).
$$
Moreover, the smallest zero of $g_n(\tilde{\phi}_\nu;\zeta)$ precedes the first positive zero of $g_n(\tilde{\mathcal{H}}_\nu;\zeta)$.
This implies that $\phi_\nu \in \mathcal{LP}$ and that its first positive zero is smaller than the one of $\mathcal{H}_\nu$.
\end{proof}
\section{Proofs of the main results}
\setcounter{equation}{0}
\begin{proof}[Proof of Theorem \ref{theo1}]
We need to show that for the corresponding values of $\mu$ and $\alpha$ the
inequalities
\begin{equation}
\Re \left( \frac{zf_{\mu }'(z)}{f_{\mu }(z)}\right) >\alpha ,
\text{ \ \ }\Re \left( \frac{zg_{\mu }'(z)}{g_{\mu }(z)}\right)
>\alpha \text{ \ and \ }\Re \left( \frac{zh_{\mu }'(z)}{h_{\mu
}(z)}\right) >\alpha \text{ \ \ } \label{2.0}
\end{equation}
are valid for $z\in \mathbb{D}_{r_{\alpha }^{\ast }(f_{\mu })}$, $z\in
\mathbb{D}_{r_{\alpha }^{\ast }(g_{\mu })}$ and $z\in \mathbb{D}_{r_{\alpha
}^{\ast }(h_{\mu })}$ respectively, and each of the above inequalities does
not hold in larger disks. It follows from (\ref{1.6}) that
\begin{equation*}
f_{\mu }(z)=f_{\mu -\frac{1}{2},\frac{1}{2}}(z)=\left(\mu (\mu +1)s_{\mu -
\frac{1}{2},\frac{1}{2}}(z)\right)^{\frac{1}{\mu +\frac{1}{2}}}=z\left(\varphi
_{0}(z)\right)^{\frac{1}{\mu +\frac{1}{2}}},
\end{equation*}
\begin{equation*}
g_{\mu }(z)=g_{\mu -\frac{1}{2},\frac{1}{2}}(z)=\mu (\mu +1)z^{-\mu +\frac{1
}{2}}s_{\mu -\frac{1}{2},\frac{1}{2}}(z)=z\varphi _{0}(z),
\end{equation*}
\begin{equation*}
h_{\mu }(z)=h_{\mu -\frac{1}{2},\frac{1}{2}}(z)=\mu (\mu +1)z^{\frac{3-2\mu
}{4}}s_{\mu -\frac{1}{2},\frac{1}{2}}(\sqrt{z})=z\varphi _{0}(\sqrt{z}),
\end{equation*}
which in turn imply that
\begin{equation*}
\frac{zf_{\mu }'(z)}{f_{\mu }(z)}=1+\frac{z\varphi _{0}'(z)
}{(\mu +\frac{1}{2})\varphi _{0}(z)}=1-\frac{1}{\mu +\frac{1}{2}}
\sum\limits_{n\geq 1}\frac{2z^{2}}{z_{\mu ,0,n}^{2}-z^{2}},
\end{equation*}
\begin{equation*}
\frac{zg_{\mu }'(z)}{g_{\mu }(z)}=1+\frac{z\varphi _{0}'(z)
}{\varphi _{0}(z)}=1-\sum\limits_{n\geq 1}\frac{2z^{2}}{z_{\mu
,0,n}^{2}-z^{2}},
\end{equation*}
\begin{equation*}
\frac{zh_{\mu }'(z)}{h_{\mu }(z)}=1+\frac{1}{2}\frac{\sqrt{z}
\varphi _{0}'(\sqrt{z})}{\varphi _{0}(\sqrt{z})}=1-\sum\limits_{n
\geq 1}\frac{z}{z_{\mu ,0,n}^{2}-z},
\end{equation*}
respectively. We note that for $\mu \in (0,1)$ the function $\varphi _{0}$
has only real and simple zeros (see \cite{Bar2}). For $\mu \in (0,1),$ and ${
n\in }\left\{ 1,2,\dots \right\} $ let $\xi _{\mu ,n}=z_{\mu ,0,n}$ be the $
n $th positive zero of $\varphi _{0}.$ We know that (see \cite[Lemma 2.1]
{Kou}) $\xi _{\mu ,n}\in (n\pi ,(n+1)\pi )$ for all $\mu \in (0,1)$ and
${n\in }\left\{ 1,2,\dots \right\}$, which implies that $\xi _{\mu ,n}>\xi
_{\mu ,1}>\pi >1$ for all $\mu \in (0,1)$ and $n \geq 2$. On the other hand,
it is known (see \cite{sz}) that if $z\in \mathbb{C}$ and $\beta \in \mathbb{R}$ are such that $\beta >\left\vert
z\right\vert $, then
\begin{equation}
\frac{{\left\vert z\right\vert }}{\beta -{\left\vert z\right\vert }}\geq
\Re\left( \frac{z}{\beta -z}\right) . \label{2.5}
\end{equation}
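For completeness, \eqref{2.5} can be verified directly: since $\Re\left(\frac{z}{\beta-z}\right)=\frac{\beta\,\Re z-|z|^{2}}{|\beta-z|^{2}}$, a short computation gives
\begin{equation*}
\frac{|z|}{\beta-|z|}-\Re\left(\frac{z}{\beta-z}\right)
=\frac{|z|\,|\beta-z|^{2}-\left(\beta\,\Re z-|z|^{2}\right)\left(\beta-|z|\right)}{\left(\beta-|z|\right)|\beta-z|^{2}}
=\frac{\beta\left(\beta+|z|\right)\left(|z|-\Re z\right)}{\left(\beta-|z|\right)|\beta-z|^{2}}\geq 0,
\end{equation*}
because $\beta>|z|\geq\Re z$.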
Then the inequality
$$
\frac{{\left\vert z\right\vert }^{2}}{\xi _{\mu ,n}^{2}-{\left\vert
z\right\vert }^{2}}\geq\Re\left( \frac{z^{2}}{\xi _{\mu ,n}^{2}-z^{2}}
\right),
$$
holds for every $\mu \in (0,1)$, $n\in \mathbb{N}$ and ${\left\vert z\right\vert <}\xi _{\mu ,1}$. Therefore,
\begin{equation*}
\Re\left(\frac{zf_{\mu }'(z)}{f_{\mu }(z)}\right)=1-\frac{1
}{\mu +\frac{1}{2}}\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{\xi
_{\mu ,n}^{2}-z^{2}}\right) \geq 1-\frac{1}{\mu +\frac{1}{2}}
\sum\limits_{n\geq 1}\frac{2\left\vert z\right\vert ^{2}}{\xi _{\mu
,n}^{2}-\left\vert z\right\vert ^{2}}=\frac{\left\vert z\right\vert f_{\mu
}'(\left\vert z\right\vert )}{f_{\mu }(\left\vert z\right\vert )},
\end{equation*}
\begin{equation*}
\Re\left( \frac{zg_{\mu }'(z)}{g_{\mu }(z)}\right) =1-\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{\xi _{\mu ,n}^{2}-z^{2}}\right)
\geq 1-\sum\limits_{n\geq 1}\frac{2\left\vert z\right\vert ^{2}}{\xi _{\mu
,n}^{2}-\left\vert z\right\vert ^{2}}=\frac{\left\vert z\right\vert g_{\mu
}'(\left\vert z\right\vert )}{g_{\mu }(\left\vert z\right\vert )}
\end{equation*}
and
\begin{equation*}
\Re\left(\frac{zh_{\mu}'(z)}{h_{\mu }(z)}\right)=1-\Re\left(\sum\limits_{n\geq 1}\frac{z}{\xi _{\mu ,n}^{2}-z}\right) \geq
1-\sum\limits_{n\geq 1}\frac{\left\vert z\right\vert }{\xi _{\mu
,n}^{2}-\left\vert z\right\vert }=\frac{\left\vert z\right\vert h_{\mu
}'(\left\vert z\right\vert )}{h_{\mu }(\left\vert z\right\vert )},
\end{equation*}
where equalities are attained only when $z=\left\vert z\right\vert =r$. The latter inequalities and
the minimum principle for harmonic functions imply that the
corresponding inequalities in (\ref{2.0}) hold if and only if $
\left\vert z\right\vert <x_{\mu ,\alpha },$ $\left\vert z\right\vert <y_{\mu
,\alpha }$ and $\left\vert z\right\vert <t_{\mu ,\alpha },$ respectively,
where $x_{\mu ,\alpha }$, $y_{\mu ,\alpha }$ and $t_{\mu ,\alpha }$ are the
smallest positive roots of the equations
\begin{equation*}
rf_{\mu }'(r)/f_{\mu }(r)=\alpha ,\text{ \ }rg_{\mu }'(r)/g_{\mu }(r)=\alpha ,\ rh_{\mu }'(r)/h_{\mu }(r)=\alpha.
\end{equation*}
Since their solutions coincide with the zeros of the functions
$$r\mapsto rs_{\mu -\frac{1}{2},\frac{1}{2}}'(r)-\alpha \left( \mu +\frac{1}{2}
\right) s_{\mu -\frac{1}{2},\frac{1}{2}}(r),\ r\mapsto rs_{\mu -\frac{1}{2},\frac{1}{2}}'(r)-\left( \mu +\alpha -\frac{1}{2
}\right) s_{\mu -\frac{1}{2},\frac{1}{2}}(r),
$$
$$
r\mapsto rs_{\mu -\frac{1}{2},\frac{1}{2}}'(r)-\left( \mu +2\alpha -\frac{3}{
2}\right) s_{\mu -\frac{1}{2},\frac{1}{2}}(r),
$$
the result we need follows from Theorem \ref{ThZ}. In other words, Theorem \ref{ThZ} shows that all the zeros of the above three functions are real and their first positive zeros do not exceed the first positive zero $\xi_{\mu,1}$. This guarantees that the above inequalities hold. This completes the proof of our theorem when $\mu\in(0,1)$.
Now we prove that the inequalities in \eqref{2.0} also hold for $\mu\in\left(-1,0\right),$ except the first one, which is valid for $\mu\in\left(-\frac{1}{2},0\right).$ In order to do this, suppose that $\mu\in(0,1)$ and adapt the above proof, substituting $\mu$ by $\mu-1$, $\varphi_0$ by the function $\varphi_1$ and taking into account that the $n$th positive zero of $\varphi _{1},$ denoted by $\zeta_{\mu ,n}=z_{\mu ,1,n},$ satisfies (see \cite{Bar3}) $\zeta _{\mu ,n}>\zeta _{\mu ,1}>\frac{\pi }{2}
>1 $ for all $\mu \in (0,1)$ and $n\geq 2$. It is worth mentioning that
\begin{equation*}
\Re\left(\frac{zf_{\mu-1}'(z)}{f_{\mu-1}(z)}\right)=1-\frac{1
}{\mu -\frac{1}{2}}\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{\zeta
_{\mu ,n}^{2}-z^{2}}\right) \geq 1-\frac{1}{\mu -\frac{1}{2}}
\sum\limits_{n\geq 1}\frac{2\left\vert z\right\vert ^{2}}{\zeta_{\mu
,n}^{2}-\left\vert z\right\vert ^{2}}=\frac{\left\vert z\right\vert f_{\mu-1
}'(\left\vert z\right\vert )}{f_{\mu-1}(\left\vert z\right\vert )},
\end{equation*}
remains true for $\mu\in\left(\frac{1}{2},1\right)$. In this case we use the minimum principle for harmonic functions to ensure that \eqref{2.0} is valid for $\mu-1$ instead of $\mu.$ Thus, using again Theorem \ref{ThZ} and replacing $\mu$ by $\mu+1$, we obtain the statement of the first part for $\mu\in\left(-\frac{1}{2},0\right)$. For $\mu\in(-1,0)$ the proofs of the second and third inequalities in \eqref{2.0} go along similar lines.
To prove the statement for part {\bf a} when $\mu \in \left(-1,-\frac{1}{2}\right)$ we observe that the counterpart of (\ref{2.5}) is
\begin{equation}
\Re\left( \frac{z}{\beta -z}\right) \geq \frac{-{\left\vert
z\right\vert }}{\beta +{\left\vert z\right\vert }}, \label{2.10}
\end{equation}
and it holds for all ${z\in \mathbb{C}}$ and $\beta $ ${\in \mathbb{R}}$ such
that $\beta >{\left\vert z\right\vert }$ (see \cite{sz}). From (\ref
{2.10}), we obtain the inequality
$$
\Re\left( \frac{z^{2}}{\zeta _{\mu ,n}^{2}-z^{2}}\right) \geq \frac{-{
\left\vert z\right\vert }^{2}}{\zeta _{\mu ,n}^{2}+{\left\vert z\right\vert }
^{2}}, \label{2.11}
$$
which holds for all $\mu \in \left(0,\frac{1}{2}\right),$ $n\in \mathbb{N}$
and ${\left\vert z\right\vert <}\zeta _{\mu ,1}$ and it implies that
\begin{equation*}
\Re\left(\frac{zf_{\mu-1}'(z)}{f_{\mu-1}(z)}\right) =1-
\frac{1}{\mu -\frac{1}{2}}\Re\left(\sum\limits_{n\geq1}\frac{
2z^{2}}{\zeta _{\mu ,n}^{2}-z^{2}}\right) \geq 1+\frac{1}{\mu -\frac{1}{2}}
\sum\limits_{n\geq1}\frac{2\left\vert z\right\vert ^{2}}{\zeta _{\mu
,n}^{2}+\left\vert z\right\vert ^{2}}=\frac{i\left\vert z\right\vert f_{\mu
-1}'(i\left\vert z\right\vert )}{f_{\mu -1}(i\left\vert
z\right\vert )}.
\end{equation*}
In this case equality is attained if $z=i\left\vert z\right\vert =ir.$ Moreover, the latter inequality implies that
\begin{equation*}
\Re\left( \frac{zf_{\mu -1}'(z)}{f_{\mu -1}(z)}\right) >\alpha
\end{equation*}
if and only if $\left\vert z\right\vert <q_{\mu ,\alpha }$, where $q_{\mu ,\alpha }$ denotes the smallest positive root of the equation $irf_{\mu
-1}'(\mathrm{i}r)/f_{\mu-1}(\mathrm{i}r)=\alpha,$ which is equivalent to
\begin{equation*}
i r s_{\mu -\frac{3}{2},\frac{1}{2}}'(ir)-\alpha \left(\mu -\frac{1}{2}\right)s_{\mu -\frac{3}{2},\frac{1}{2}}(ir)=0,\text{ for }\mu \in \left(0,\frac{1}{2}\right).
\end{equation*}
Substituting $\mu$ by $\mu +1,$ we obtain
\begin{equation*}
i r s_{\mu -\frac{1}{2},\frac{1}{2}}'(i r)-\alpha \left(\mu +\frac{1}{2}\right)s_{\mu -\frac{1}{2},\frac{1}{2}}(ir)=0,\text{ for }\mu \in \left(-1,-\frac{1}{2}\right).
\end{equation*}
It follows from Theorem \ref{ThZ} that the first positive zero of $z\mapsto izs_{\mu -\frac{1}{2},\frac{1}{2}}'(iz)-\alpha \left(\mu +\frac{1}{2}\right)s_{\mu -\frac{1}{2},\frac{1}{2}}(iz)$ does not exceed $\zeta_{\mu,1}$ which guarantees that the above inequalities are valid. All we need to prove is that the above function has actually only one zero in $(0,\infty)$. Observe that, according to Lemma \ref{lempower}, the function
$$
r\mapsto \frac{irs_{\mu -\frac{1}{2},\frac{1}{2}}'(ir)}{s_{\mu -\frac{1}{2},\frac{1}{2}}(ir)}
$$
is increasing on $(0,\infty)$ as a quotient of two power series whose positive coefficients form the increasing ``quotient sequence'' $\left\{2k+\mu+\frac{1}{2}\right\}_{k\geq0}.$ On the other hand, the above function tends to $\mu+\frac{1}{2}$ when $r\to0,$ so that its graph can intersect the horizontal line $y=\alpha\left(\mu+\frac{1}{2}\right)>\mu+\frac{1}{2}$ only once. This completes the proof of part {\bf a} of the theorem when $\mu\in(-1,0)$.
\end{proof}
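The radii appearing in Theorem \ref{theo1} are easy to evaluate numerically. The following Python sketch (an illustration only, using the mpmath library) assumes the standard power series of the Lommel function of the first kind, which together with $g_\mu(z)=z\varphi_0(z)$ gives $\varphi_0(z)=\sum_{k\geq0}(-z^2/4)^k\big/\big(\big(\tfrac{\mu+2}{2}\big)_k\big(\tfrac{\mu+3}{2}\big)_k\big)$, and locates $y_{\mu,\alpha}$ as the smallest positive root of $rg_\mu'(r)/g_\mu(r)=\alpha$; the choices $\mu=\tfrac{1}{2}$ and $\alpha\in\{0,\tfrac{1}{2}\}$ are arbitrary and the printed values are approximate.
\begin{verbatim}
from mpmath import mp, mpf, rf, findroot

mp.dps = 30

def log_deriv(mu, r, terms=80):
    """r*g_mu'(r)/g_mu(r), computed from the (assumed) series of phi_0."""
    num = den = mpf(0)
    for k in range(terms):
        c = (-r**2 / 4)**k / (rf((mu + 2) / 2, k) * rf((mu + 3) / 2, k))
        num += (2 * k + 1) * c
        den += c
    return num / den

def radius(mu, alpha):
    """Smallest positive root of r*g_mu'(r)/g_mu(r) = alpha."""
    r = mpf('0.01')
    while log_deriv(mu, r) > alpha:   # the quotient decreases from 1 on (0, xi_{mu,1})
        r += mpf('0.01')
    return findroot(lambda t: log_deriv(mu, t) - alpha, r)

print(radius(mpf('0.5'), mpf(0)))      # y_{1/2,0}, the radius of starlikeness of g_{1/2}
print(radius(mpf('0.5'), mpf('0.5')))  # y_{1/2,1/2}
\end{verbatim}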
\begin{proof}[Proof of Theorem 2]
As in the proof of Theorem \ref{theo1} we need to show that, for the
corresponding values of $\nu $ and $\alpha $, the inequalities
\begin{equation}
\Re\left( \frac{zu_{\nu }'(z)}{u_{\nu }(z)}\right) >\alpha ,\
\ \Re\left( \frac{zv_{\nu }'(z)}{v_{\nu }(z)}\right) >\alpha
\ \text{and\ }\Re\left( \frac{zw_{\nu }'(z)}{w_{\nu }(z)}
\right) >\alpha \ \ \label{strv1}
\end{equation}
are valid for $z\in \mathbb{D}_{r_{\alpha }^{\ast }(u_{\nu })}$, $z\in
\mathbb{D}_{r_{\alpha }^{\ast }(v_{\nu })}$ and $z\in \mathbb{D}_{r_{\alpha
}^{\ast }(w_{\nu })}$ respectively, and each of the above inequalities does
not hold in any larger disk.
If $\left\vert \nu \right\vert \leq \frac{1}{2},$ then (see \cite[Lemma 1]{BPS}) the Hadamard
factorization of the transcendental entire function $\mathcal{H}_{\nu }$, defined by
\begin{equation*}
\mathcal{H}_{\nu }(z)=\sqrt{\pi }2^{\nu }z^{-\nu -1}\Gamma \left( \nu +\frac{
3}{2}\right) \mathbf{H}_{\nu }(z),
\end{equation*}
reads as follows
\begin{equation*}
\mathcal{H}_{\nu }(z)=\prod\limits_{n\geq 1}\left( 1-\frac{z^{2}}{h_{\nu
,n}^{2}}\right) ,
\end{equation*}
which implies that
\begin{equation*}
\mathbf{H}_{\nu }(z)=\frac{z^{\nu +1}}{\sqrt{\pi }2^{\nu }\Gamma \left( \nu +
\frac{3}{2}\right) }\prod\limits_{n\geq 1}\left( 1-\frac{z^{2}}{h_{\nu
,n}^{2}}\right),
\end{equation*}
where $h_{\nu ,n}$ stands for the $n$th positive zero of the Struve function
$\mathbf{H}_{\nu }.$
We know that (see \cite[Theorem 2]{Bar3}) $h_{\nu ,n}>h_{\nu ,1}>1$ for all $
\left\vert \nu \right\vert \leq \frac{1}{2}$ and $n\in \mathbb{N}$. If $\left\vert \nu \right\vert \leq \frac{1}{2}$ and $\left\vert
z\right\vert <h_{\nu ,1}$, then (\ref{2.5}) implies
\begin{equation*}
\Re\left(\frac{zu_{\nu }'(z)}{u_{\nu }(z)}\right) =1-\frac{1
}{\nu +1}\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{h_{\nu
,n}^{2}-z^{2}}\right) \geq 1-\frac{1}{\nu +1}\sum\limits_{n\geq 1}\frac{
2\left\vert z\right\vert ^{2}}{h_{\nu ,n}^{2}-\left\vert z\right\vert ^{2}}=
\frac{\left\vert z\right\vert u_{\nu }'(\left\vert z\right\vert )}{
u_{\nu }(\left\vert z\right\vert )},
\end{equation*}
\begin{equation*}
\Re\left(\frac{zv_{\nu }'(z)}{v_{\nu }(z)}\right)=1-\Re\left(\sum\limits_{n\geq 1}\frac{2z^{2}}{h_{\nu ,n}^{2}-z^{2}}\right) \geq
1-\sum\limits_{n\geq 1}\frac{2\left\vert z\right\vert ^{2}}{h_{\nu
,n}^{2}-\left\vert z\right\vert ^{2}}=\frac{\left\vert z\right\vert v_{\nu
}'(\left\vert z\right\vert )}{v_{\nu }(\left\vert z\right\vert )}
\end{equation*}
and
\begin{equation*}
\Re\left(\frac{zw_{\nu }'(z)}{w_{\nu }(z)}\right)=1-\Re\left(\sum\limits_{n\geq 1}\frac{z}{h_{\nu ,n}^{2}-z}\right)\geq
1-\sum\limits_{n\geq 1}\frac{\left\vert z\right\vert }{h_{\nu
,n}^{2}-\left\vert z\right\vert }=\frac{\left\vert z\right\vert w_{\nu}'(\left\vert z\right\vert )}{w_{\nu }(\left\vert z\right\vert )},
\end{equation*}
where equalities are attained when $z=\left\vert z\right\vert =r.$ Then the minimum principle for
harmonic functions implies that the
corresponding inequalities in (\ref{strv1}) hold if and only if
$\left\vert z\right\vert <\delta _{\nu ,\alpha },$ $\left\vert z\right\vert
<\rho _{\nu ,\alpha }$ and $\left\vert z\right\vert <\sigma _{\nu ,\alpha },$
respectively, where $\delta _{\nu ,\alpha }$, $\rho _{\nu ,\alpha }$ and $
\sigma _{\nu ,\alpha }$ are the smallest positive roots of the equations
\begin{equation*}
ru_{\nu }'(r)/u_{\nu }(r)=\alpha ,\text{ \ }rv_{\nu }'(r)/v_{\nu }(r)=\alpha ,\ rw_{\nu }'(r)/w_{\nu }(r)=\alpha.
\end{equation*}
The solutions of these equations are the zeros of the functions
$$r\mapsto r\mathbf{H}_{\nu }'(r)-\alpha (\nu +1)\mathbf{H}_{\nu }(r),\ r\mapsto r\mathbf{H}_{\nu }'(r)-(\alpha +\nu )\mathbf{H}_{\nu }(r),\ r\mapsto r\mathbf{H}_{\nu }'(r)-(2\alpha +\nu -1)\mathbf{H}_{\nu }(r),$$
which, in view of Theorem \ref{ThZ}, have only real zeros and the smallest positive zero of each of them does not exceed the first
positive zero of $\mathbf{H}_{\nu }$.
\end{proof}
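The radii in Theorem 2 can likewise be illustrated numerically. The Python sketch below (an illustration only; it relies on the Struve function routine \texttt{struveh} of the mpmath library, and the choices of $\nu$ and $\alpha$ are arbitrary) computes $\delta_{\nu,\alpha}$ as the smallest positive root of $r\mathbf{H}_\nu'(r)-\alpha(\nu+1)\mathbf{H}_\nu(r)=0$ and checks that it does not exceed the first positive zero $h_{\nu,1}$.
\begin{verbatim}
from mpmath import mp, mpf, struveh, diff, findroot

mp.dps = 30

def G(nu, r):
    """r H_nu'(r)/H_nu(r); decreases from nu+1 to -infinity on (0, h_{nu,1})."""
    return r * diff(lambda t: struveh(nu, t), r) / struveh(nu, r)

def delta(nu, alpha, step=mpf('0.01')):
    """Smallest positive root of r H_nu'(r) - alpha (nu+1) H_nu(r) = 0."""
    r = step
    while G(nu, r) > alpha * (nu + 1):
        r += step
    return findroot(lambda t: G(nu, t) - alpha * (nu + 1), r)

def first_zero(nu, step=mpf('0.05')):
    r = step
    while struveh(nu, r) > 0:
        r += step
    return findroot(lambda t: struveh(nu, t), r)

nu = mpf(0)
h1 = first_zero(nu)
for alpha in (mpf(0), mpf('0.25'), mpf('0.5')):
    d = delta(nu, alpha)
    print(alpha, d, d < h1)   # expect True in each line
\end{verbatim}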
\end{document} |
\begin{document}
\title{On Lower Central Series Quotients of Finitely Generated Algebras over $\mathbb{Z}$}
\begin{abstract}
Let $A$ be an associative unital algebra, $B_k$ its successive quotients of lower central series and $N_k$ the successive quotients of ideals generated by lower central series. The geometric and algebraic aspects of the $B_k$'s and $N_k$'s have been of great interest since the pioneering work of \cite{feigin2007}. In this paper, we will concentrate on the case where $A$ is a noncommutative polynomial algebra over $\mathbb{Z}$ modulo a single homogeneous relation. Both the torsion part and the free part of the $B_k$'s and $N_k$'s are explored. Many examples are demonstrated in detail, and several general theorems are proved. Finally, we conclude with an appendix on the torsion subgroups of $N_k(A_n(\mathbb{Z}))$ and some open problems.
\end{abstract}
\section{Introduction}
Let $A$ be an associative unital algebra over a commutative ring $R$. There is a natural Lie algebra structure on $A$, whose Lie bracket is given by $[a,b]=a\cdot b-b\cdot a$. Let $L_k(A)$ be the lower central series of $A$. They are defined inductively by $L_1(A)=A$ and $L_{k+1}(A)=[A,L_k(A)]$. Denote by $M_k(A)$ the two-sided ideal generated by $L_k(A)$ in $A$. An easy computation shows (see \cite{kerchev2013} for instance) $M_k(A)=A\cdot L_k(A)$. Finally, we define $B_k(A)=L_k(A)/L_{k+1}(A)$ and $N_k(A)=M_k(A)/M_{k+1}(A)$ as successive quotients.
The study of $B_i$'s began in \cite{feigin2007}, where the interest was on the case where $R=\mathbb{Q}$ and $A=A_n(\mathbb{Q})$ is the free associative algebra with $n$ generators. In their paper, Feigin and Shoikhet observed that each $B_k(A_n(\mathbb{Q}))$ for $k\geq 2$ admits a $W_n$-module structure, where $W_n$ is the Lie algebra of polynomial vector fields on $\mathbb{Q}^n$. In particular, they showed that $B_2(A_n(\mathbb{Q}))$ is isomorphic, as a graded vector space, to $\Omega_{\textrm{closed}}^{\textrm{even}>0}(\mathbb{Q}^n)$, the space of closed polynomial forms on $\mathbb{Q}^n$ of positive even degree.
Later on, Dobrovolska et al. \cite{dobrovolska2008} extended Feigin and Shoikhet's results to more general algebras over $\mathbb{C}$, and Balagovic and Balasubramanian \cite{balagovic2011} explored the case $A=A_n(\mathbb{C})/(f)$ where $f$ is a generic homogeneous polynomial. On the other hand, Bhupatiraju et al. \cite{bhupatiraju2012} studied the case where $A$ is the free algebra over $\mathbb{Z}$ or finite fields. Specifically, they were able to describe almost all the torsion appearing in $B_2(A_n(\mathbb{Z}))$.
The story of the $N_i$'s is comparatively shorter. The only literature on the $N_i$'s, as far as the authors know, is \cite{etingof2009} and \cite{kerchev2013}, where the Jordan-H\"older series of the $N_i$'s (as $W_n$-modules) are investigated for $A=A_n(\mathbb{Q})$.
The goal of this paper is to understand $B_k$'s and $N_k$'s for $A=A_n(\mathbb{Z})/(f)$, a $\mathbb{Z}$-algebra generated by $n$ elements with a single homogeneous relation. We are especially interested in which torsion groups show up.
The organization of our paper is as follows. In Section 2, we begin with an easy example, the $q$-polynomials, where everything can be computed explicitly. From Section 3 to Section 5, many patterns of $B_k$ and $N_k$ for general $A$ are observed, and some of them are proven. Finally, we conclude with an appendix on torsion subgroups of $N_k(A_n(\mathbb{Z}))$.
\section{An Example: The $q$-Polynomials}
Let $A=\mathbb{Z}\langle x,y\rangle/(yx-qxy)$, $q\in\mathbb{Z}$, be the algebra of $q$-polynomials. The case $q=1$ is trivial, because $A$ is commutative and therefore all the $B_k$'s and $N_k$'s are 0 for $k\geq 2$. Let us assume for the moment that $q\neq\pm1$; the goal is to compute the $B_k$'s and $N_k$'s explicitly.
Let us assign $x$ and $y$ to be of degree $(1,0)$ and $(0,1)$ respectively in $\mathbb{Z}\langle x,y\rangle$. Clearly this assignment descends to $A$, therefore we get a $\mathbb{Z}^2$-grading on $A$ as well as the various quotients $B_k$'s and $N_k$'s. We will use $L_k[i,j]$ (resp. $M_k[i,j]$, $B_k[i,j]$, $N_k[i,j]$) to denote the degree $(i,j)$ part of $L_k$ (resp. $M_k$, $B_k$, $N_k$).
Using the relation $yx=qxy$ repeatedly, we know that each $L_k[i,j]$ is a free abelian group of rank at most 1, whose basis is of the form $S^k_{i,j}x^iy^j$ for some $S^k_{i,j}\in\mathbb{Z}$. Similarly, we write the basis of $M_k[i,j]$ as $T^k_{i,j}x^iy^j$. Our goal is to compute $S^k_{i,j}$ and $T^k_{i,j}$.
Clearly when $k>i+j$, we have $S^k_{i,j}=T^k_{i,j}=0$. So let us first look at the case $k=i+j$. By definition of lower central series, we have \[L_k[i,j]=[x,L_{k-1}[i-1,j]]+[y,L_{k-1}[i,j-1]],\] thus we obtain the recursive formula \[S^k_{i,j}=\gcd\left(S^{k-1}_{i-1,j}(q^j-1),S^{k-1}_{i,j-1}(q^i-1)\right).\] Using some elementary number theory, by induction we conclude for $k=i+j$, \[S^k_{i,j}=T^k_{i,j}=(q-1)^{i+j-2}\cdot(q^{\gcd(i,j)}-1).\] Similarly, we can prove that for $k<i+j$, \[S^k_{i,j}=(q-1)^{k-2}\cdot(q^{\gcd(i,j)}-1),~~~~ T^k_{i,j}=(q-1)^{k-1}.\]
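These closed forms are easy to test by machine. The following Python sketch (a spot check, not a proof; the values of $q$ and the degree bound are arbitrary choices) iterates the gcd recursion in top degree $k=i+j$ and compares the outcome with the formula above.
\begin{verbatim}
from math import gcd

def top_degree_S(q, max_deg):
    """S^k_{i,j} for k = i+j <= max_deg, from the gcd recursion."""
    S = {(1, 0): 1, (0, 1): 1}
    for k in range(2, max_deg + 1):
        for i in range(k + 1):
            j = k - i
            left = S.get((i - 1, j), 0) * (q**j - 1)   # from [x, L_{k-1}[i-1,j]]
            right = S.get((i, j - 1), 0) * (q**i - 1)  # from [y, L_{k-1}[i,j-1]]
            S[(i, j)] = gcd(left, right)
    return S

checks = []
for q in (2, 3, 5, 7):
    S = top_degree_S(q, 10)
    checks += [S[(i, j)] == (q - 1)**(i + j - 2) * (q**gcd(i, j) - 1)
               for (i, j) in S if i > 0 and j > 0]
print(all(checks))   # expected: True
\end{verbatim}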
As a result, we can formulate the following proposition:
\begin{prop}
For the $q$-polynomial algebra $A=\mathbb{Z}\langle x,y\rangle/(yx-qxy)$ with $q\neq\pm1$, we have \[\begin{split}&B_k(A)[i,j]\cong\begin{cases}\mathbb{Z}_{|q-1|} & i,j>0, k<i+j\\ \mathbb{Z} & i,j>0, k=i+j\\ 0&\textrm{elsewhere}\end{cases},\\ &N_k(A)[i,j]\cong\begin{cases}\mathbb{Z}_{|q-1|} & i,j>0, k<i+j-1 \\ \mathbb{Z}_{|q^{\gcd(i,j)}-1|} & i,j>0, k=i+j-1\\ \mathbb{Z} & i,j>0, k=i+j\\ 0&\textrm{elsewhere}\end{cases}\end{split}\]
\end{prop}
As a corollary, we have the following
\begin{cor}
All primes except those dividing $q$ appear in the torsion subgroup of $N_k(A)$.
\end{cor}
\begin{proof}
Fermat's little theorem.
\end{proof}
The case $q=-1$ can be worked out in like manner, so we omit the proof here.
\begin{prop}
For $A=\mathbb{Z}\langle x,y\rangle/(yx+xy)$, we have \[\begin{split} &B_k(A)[i,j]=\begin{cases} \mathbb{Z}_2 & i+j>k,~i,j>0,\textrm{ not both even}\\ \mathbb{Z} & i+j=k,~i,j>0,\textrm{ not both even}\\ 0 & \textrm{elsewhere}\end{cases}\\ &N_k(A)[i,j]=\begin{cases} \mathbb{Z}_2 & \begin{cases}i+j>k,~i,j>0,\textrm{ not both even}\\ i+j>k+1,~i,j>0\textrm{ both even}\end{cases}\\ \mathbb{Z} & \begin{cases}i+j=k,~i,j>0,\textrm{ not both even}\\ i+j=k+1,~i,j>0\textrm{ both even}\end{cases}\\ 0 & \textrm{elsewhere}\end{cases}\end{split}\]
\end{prop}
\section{A Basis for $N_2(\mathbb{Q}\langle x,y\rangle/(x^m+y^m))$}
Another simple example to study is the case $A(m)=\mathbb{Z}\langle x,y\rangle/(x^m+y^m)$ for $m\geq2$ an integer. If we assign both $x$ and $y$ to be of degree 1, then we get a graded algebra structure on $A$. Again, let $L_k[d]$ (resp. $M_k[d]$, $B_k[d]$, $N_k[d]$) be the degree $d$ part of $L_k$ (resp. $M_k$, $B_k$, $N_k$). Using Magma \cite{bosma1997}, we obtain the data presented in Table 1:
\begin{table}[h]
\centering
\begin{tabular}{l|ccccccc}
\hline
r$(N_2[d])$&2&3&4&5&6&7&8\\
\hline
2&1\\
3&1&2&1\\
4&1&2&3&2&1\\
5&1&2&3&4&3&2&1\\
\hline
\end{tabular}
\caption{The ranks of $N_2(A(m))[d]$ for $m=2,3,4,5$.}
\end{table}
Here each entry represents the rank of $N_2(A(m))[d]$, where the first row indicates the degree $d$ and the left column the values of $m$. The pattern above is extremely clear, which leads to the following proposition:
\begin{prop}\label{prop31}
For $A=\mathbb{Z}\langle x,y\rangle/(x^m+y^m)$, we have \[\textrm{Rank}(N_2[d])=\begin{cases}d-1 & d<m \\ 2m-d-1 & m\leq d\leq 2m-2 \\ 0 & \textrm{otherwise}\end{cases}\]
\end{prop}
Because we are only concerned about ranks, we may tensor with $\mathbb{Q}$. Then Proposition \ref{prop31} follows directly from finding a basis for $N_2(\mathbb{Q}\langle x,y\rangle/(x^m+y^m))$, as Balagovic and Balasubramanian did for $B_2(\mathbb{Q}\langle x,y\rangle/(x^m+y^m))$ in \cite{balagovic2011}, Section 3.
To proceed, let us introduce a new variable $u=[x,y]\in A$. By abuse of notation, we are going to use the letters $x,y$ and $u$ to represent the corresponding classes in the quotient algebras.
\begin{lemma}\label{lemma32}
In $A/M_3(A)$, we have \[u^2=[u,x]=[u,y]=x^{m-1}u=y^{m-1}u=0.\]
\end{lemma}
\begin{proof}
Direct calculation.
\end{proof}
\begin{rmk}
The triple $\{x,y,u\}$ in $A/M_3(A)$ satisfies the commutation relations of the standard generators of a Heisenberg algebra.
\end{rmk}
The above lemma tells us that $x^iy^j~(0\leq i<m)$ and $x^iy^ju~(i,j<m-1)$ span $A/M_3(A)$. Let us consider the short exact sequence \[0\to N_2(A)\to A/M_3(A)\xrightarrow{\pi}A/M_2(A)\to0.\] Since $A/M_2(A)\cong\mathbb{Q}[x,y]/(x^m+y^m)$ is the abelianization of $A$, we easily see that the images of $x^iy^j~(0\leq i<m)$ under $\pi$ are linearly independent. As $\pi(u)=0$, a natural guess would be the following:
\begin{prop}
$x^iy^ju~(i,j<m-1)$ form a basis for $N_2(A)$.
\end{prop}
\begin{proof}
We only need to prove that they are linearly independent in $A/M_3(A)$. Rewriting $A/M_3(A)$ as $(A_2/M_3(A_2))/(x^m+y^m)$, where $(x^m+y^m)$ is the ideal generated by $x^m+y^m$ in $A_2/M_3(A_2)$, we claim that no nonzero linear combination of the $x^iy^ju$ lies in $(x^m+y^m)$.
Recall that \cite[Lemma 2.1.2]{feigin2007} shows that there is an isomorphism of algebras $\varphi:A_2/M_3(A_2)\to\Omega^{\textrm{even}}(\mathbb{Q}^2)_*$, where $\Omega^{\textrm{even}}(\mathbb{Q}^2)_*$ is the algebra of even polynomial differential forms on $\mathbb{Q}^2$ with the twisted product $\alpha*\beta=\alpha\wedge\beta+(-1)^{\deg\alpha}\mathrm{d}\alpha\wedge\mathrm{d}\beta$. To be precise, $\varphi$ is defined by $\varphi(x)=x$, $\varphi(y)=y$.
Now everything can be translated into differential forms, and the verification of our claim in that context is straightforward.
\end{proof}
As a consequence, Proposition \ref{prop31} follows from dimensional counting of the basis $x^iy^ju~(i,j<m-1)$.
\begin{rmk}
If we work more carefully, we can figure out the torsion subgroups in $N_2(A)$ using the same method.
\end{rmk}
Furthermore, after taking a look at the data table of higher $N_k(A)$, say $k=3$, one may easily formulate the following conjecture:
\begin{conj}
For $A=\mathbb{Z}\langle x,y\rangle/(x^m+y^m)$, we have \[\textrm{Rank}(N_3(A)[d])=\begin{cases}3d-7 & 3\leq d\leq m+1 \\ 6m-3d+1 & m+2\leq d\leq 2m \\ 0 & \textrm{elsewhere}\end{cases}\]
\end{conj}
Actually, we can compute the rank of $N_2[d]$ explicitly for $A=\mathbb{Z}\langle x,y\rangle/(f)$, where $f$ is an arbitrary homogeneous relation of degree $m$. Let $f_{\mathrm{ab}}$ be the abelianization of $f$, and denote $\partial f_{\mathrm{ab}}/\partial x$ and $\partial f_{\mathrm{ab}}/\partial y$ by $f_x$ and $f_y$ respectively. Instead of the relations in Lemma \ref{lemma32}, the following equations are satisfied in $A/M_3(A)$: \[u^2=[u,x]=[u,y]=f_xu=f_yu=0.\] As before, $\{x^iy^ju\}$ spans $N_2(A)$, and the only relations are \[f_xu=f_yu=0.\] The relation $f_{\mathrm{ab}}u=0$ is redundant because of the Euler identity \[m f_{\mathrm{ab}}=xf_x+yf_y.\] Therefore there is a degree-preserving bijection \[\{\textrm{basis of } \mathbb{Q}[x,y]/(f_x,f_y)\}\cdot u\longleftrightarrow \{\textrm{basis of }N_2(A)\}.\] In other words, we have \begin{equation}\label{*}\textrm{Rank}(N_2(A)[d])=\textrm{Rank}(\mathbb{Z}[x,y]/(f_x,f_y)[d-2]).\end{equation}
We may write $f_{ab}$ as a product of linear polynomials of the form $\alpha x+\beta y$ (over $\mathbb{C}$). It is easy to check that \[(\alpha x+\beta y)^l||\gcd(f_x,f_y)~~\textrm{ if and only if }~~(\alpha x+\beta y)^{l+1}||f_{ab}.\] Now let $\{(\alpha_ix+\beta_iy)\}_{i=1}^s$ be the set of distinct linear factors of $f_{ab}$, and let $m_i$ be the multiplicity of $(\alpha_ix+\beta_iy)$, which satisfies\[\sum_{i=1}^sm_i=m.\] The least common multiple of $f_x$ and $f_y$ is given by \[\frac{f_xf_y}{\prod_{i=1}^s(\alpha_ix+\beta_iy)^{m_i-1}},\] which is of degree $m+s-2$. Combining this with Eq. (\ref{*}), we conclude:
\begin{thm}\label{thm37}
Let $A=\mathbb{Z}\langle x,y\rangle/(f)$, where $f$ is a homogeneous polynomial of degree $m$, let $s$ be the number of distinct linear factors of $f_{ab}$ (over $\mathbb{C}$). Then \[\textrm{Rank}(N_2(A)[d])=\begin{cases}0 & d=0\\ d-1 & 1\leq d\leq m-1 \\2m-d-1 & m\leq d\leq m+s-1 \\ m-s & d\geq m+s\end{cases}\]
\end{thm}
\begin{rmk}
The computation here is consistent with the results we get in Sections 4 and 5.
\end{rmk}
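The count in Theorem \ref{thm37} can be checked by machine for specific relations. The following Python/SymPy sketch (an illustration only; the two sample relations and the degree range are arbitrary choices, and only the commutative count on the right-hand side of Eq. (\ref{*}) is being tested) computes $\textrm{Rank}(\mathbb{Z}[x,y]/(f_x,f_y)[d-2])$ by linear algebra over $\mathbb{Q}$ and compares it with the piecewise formula.
\begin{verbatim}
from sympy import symbols, Matrix, diff, expand

x, y = symbols('x y')

def graded_dim_quotient(fx, fy, m, e):
    """dim of the degree-e piece of Q[x,y]/(fx,fy); fx,fy homogeneous of degree m-1."""
    gens = [expand(x**i * y**(e - (m - 1) - i) * g)
            for g in (fx, fy) for i in range(e - (m - 1) + 1)]
    rows = [[g.coeff(x, i).coeff(y, e - i) for i in range(e + 1)] for g in gens]
    rank = Matrix(rows).rank() if rows else 0
    return (e + 1) - rank

def predicted(d, m, s):
    """The piecewise formula of Theorem 3.7 (s = # distinct linear factors of f_ab)."""
    if d == 0:
        return 0
    if d <= m - 1:
        return d - 1
    if d <= m + s - 1:
        return 2 * m - d - 1
    return m - s

for f, m, s in [(x**5 + y**5, 5, 5),   # f_ab has 5 distinct linear factors over C
                (x**2 * y**3, 5, 2)]:  # a sample relation with repeated factors, s = 2
    fx, fy = diff(f, x), diff(f, y)
    print(all(graded_dim_quotient(fx, fy, m, d - 2) == predicted(d, m, s)
              for d in range(2, 12)))  # expected: True, True
\end{verbatim}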
\section{Rank Stabilization}
In this section, we are going to study the $B_k$'s and $N_k$'s associated with $A=\mathbb{Z}\langle x,y\rangle/(x^m)$. The situation is essentially different from what we have seen in the previous two sections: if we look at the data produced by Magma (see Table 2), we find that $\textrm{Rank}(B_k(A)[d])$ stabilizes at some positive integer for any $k$ when $d$ is large enough. On the other hand, the $B_k$'s of $\mathbb{Z}\langle x,y\rangle/(x^m+y^m)$ are finite dimensional. We will say more about it in the next section.
\begin{table}[h]
\centering
\begin{tabular}{l|cccccccc}
\hline
r$(B_k[d])$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\
\hline
$B_2$ & 1 & 2 & 2 & 2 & 2 & 2 & 2 & 2\\
$B_3$ & & 2 & 4 & 4 & 4 & 4 & 4 & 4\\
$B_4$ & & & 3 & 7 & 8 & 8 & 8 & 8\\
$B_5$ & & & & 6 & 13 & 16 & 16 & 16\\
\hline
\end{tabular}
\caption{The ranks of $B_k(A)[d]$ for $A=\mathbb{Z}\langle x,y\rangle/(x^3)$.}\label{table2}
\end{table}
To be more precise, we have the following theorem:
\begin{thm}\label{thm41}
Let $A=\mathbb{Z}\langle x,y\rangle/(x^m)$. For each $k\geq 2$, there exists some $l=l(k)\in\mathbb{Z}_{\geq0}$ such that $\textrm{Rank}(B_k[l])=\textrm{Rank}(B_k[j])$ for any $j\geq l$. Furthermore, if $k=2$, then $l\leq m$; if $k\geq 3$, then $l\leq 2k+m-5$.
\end{thm}
Again, we may work over $\mathbb{Q}$. To simplify our notation, $A_n$ means $A_n(\mathbb{Q})$ in this section. The proof of our theorem relies heavily on the following results:
\begin{prop}(\cite{feigin2007})
For the free associative algebra $A=A_n$, each $B_k(A_n)$ has a natural $W_n$-module structure for $k\geq2$. Here $W_n=\bigoplus_{i=1}^n\mathbb{Q}[x_1,\dots,x_n]\partial_i$ is the Lie algebra of polynomial vector fields over $\mathbb{Q}^n$.
\end{prop}
\begin{prop}(\cite{rudakov1974},\cite{dobrovolskae2008})\label{prop43}
Each $B_k(A_n)$ has a finite length Jordan-H\"older series with respect to the $W_n$-action. Moreover, all the irreducible subquotients of $B_k(A_n)$ is of the form $F_\lambda$, where $F_\lambda$ are certain tensor field modules parameterized by Young diagrams $\lambda=(\lambda_1,\dots,\lambda_n)$.
\end{prop}
For $n=2$, we have the following estimate on $|\lambda|$ for those $F_\lambda$ appearing in the Jordan-H\"older series of $B_k(A_2)$:
\begin{prop}(\cite{feigin2007}, \cite{arbesfeld2010})\label{prop44}
Let $k\geq3$. For every $F_\lambda$ in the Jordan-H\"older series of $B_k(A_2)$, \[|\lambda|:=\lambda_1+\lambda_2\leq2k-3.\] For $k=2$, $B_2(A_2)\cong F_{(1,1)}$ as $W_2$-modules.
\end{prop}
Now let us prove Theorem \ref{thm41}.
\begin{proof}
Let $\mathbb{Q}[y]\partial_y=:W_1\subset W_2=\mathbb{Q}[x,y]\partial_x\oplus\mathbb{Q}[x,y]\partial_y$ be the Lie subalgebra of $W_2$. Clearly $B_k(A_2)$ has a $W_1$-module structure. Now we replace our associative algebra by $A=A_2/(x^m)$; the $W_2$-action no longer exists, but the $W_1$-action remains.
Claim: $B_k(A_2/(x^m))$ is of finite length as a $W_1$-module.
Let $\pi:B_k(A_2)\to B_k(A_2/(x^m))$ be the canonical map induced by the quotient $A_2\to A_2/(x^m)$; clearly, $\pi$ preserves the $W_1$-action. As $[W_1,x^mW_2]\subset x^mW_2$, $(x^mW_2)B_k(A_2)$ is a $W_1$-submodule of $B_k(A_2)$. Moreover, $\pi$ maps $(x^mW_2)B_k(A_2)$ to 0, and since $\pi$ is surjective, we conclude that $B_k(A_2/(x^m))$ is a quotient module of $B_k(A_2)/(x^mW_2)B_k(A_2)$. Therefore we only have to show that, as a $W_1$-module, $B_k(A_2)/(x^mW_2)B_k(A_2)$ is of finite length.
Recall that by Proposition \ref{prop43}, $B_k(A_2)$ is a finite-length $W_2$-module. Therefore there exists a sequence of $W_2$-modules \[0=M_n\subset M_{n-1}\subset\dots\subset M_1\subset M_0=B_k(A_2)\] such that each $P_j:=M_j/M_{j+1}$ is an irreducible $W_2$-module. Now we have the following commutative diagram: \[\xymatrix{0\ar[r] & (x^mW_2)M_{j+1}\ar[r]\ar[d] & (x^mW_2)M_j\ar[r]\ar[d] & (x^mW_2)M_j/(x^mW_2)M_{j+1}\ar[r]\ar[d]^f & 0 \\ 0\ar[r] & M_{j+1}\ar[r] & M_j\ar[r] & P_j\ar[r] & 0}\] where the rows are exact and the first two vertical arrows are inclusions. The snake lemma gives us the long exact sequence \[0\to\ker f\to M_{j+1}/(x^mW_2)M_{j+1}\to M_j/(x^mW_2)M_j\to P_j/(x^mW_2)P_j\to0.\] By induction on $j$, we only have to show that each $P_j/(x^mW_2)P_j$ is of finite length.
We know that each $P_j$ is of the form $F_{(p,q)}$ ($q\leq p$) and \[F_{(p,q)}=\mathbb{Q}[x,y]\otimes\mathrm{Sym}^{p-q}(\mathrm{d} x,\mathrm{d} y)\otimes(\mathrm{d} x\wedge\mathrm{d} y)^{\otimes q},\] where $\mathrm{Sym}(\mathrm{d} x,\mathrm{d} y)$ is the symmetric algebra generated by differentials $\mathrm{d} x$ and $\mathrm{d} y$. Moreover, $W_2$ acts on $F_{(p,q)}$ as Lie derivatives. To make the grading correct, $\mathrm{d} x$ and $\mathrm{d} y$ are each assigned to be of degree 1.
By direct computation, we see that $(x^mW_2)F_{(p,q)}=x^{m-1}F_{(p,q)}$, therefore \[F_{(p,q)}/(x^mW_2)F_{(p,q)}\cong\bigoplus_{l=0}^{m-2}\bigoplus_{j=q}^px^l(\mathrm{d} x)^{p+q-j}F_j,\] where $F_j=\mathbb{Q}[y](\mathrm{d} y)^j$ is an irreducible $W_1$-module. As a consequence, our claim is proven.
Furthermore, $F_j[d]$ is of dimension 1 for $d\geq j$, thus $x^l(\mathrm{d} x)^{p+q-j}F_j[d]$ is of dimension 1 for $d\geq p+q+l$. Theorem \ref{thm41} follows from Proposition \ref{prop44} automatically.
\end{proof}
From Table \ref{table2} alone we cannot determine the Jordan-H\"older series (as $W_1$-modules) of $B_k(A_2/(x^m))$: since both $x$ and $\mathrm{d} x$ carry degree 1, we cannot read off $j$ from the degree distribution of $x^l(\mathrm{d} x)^{p+q-j}F_j$. However, if we compute the ranks of $B_k(A_2/(x^m))$ refined by bidegree, we can deduce $j$ from the data, and thus determine the Jordan-H\"older series.
As a result, we get:
\begin{prop}
As $W_1$-modules, the Jordan-H\"older series of $B_k(A_2/(x^m))$ can be determined:
\[\begin{split}&B_2(A_2/(x^2))\sim F_1[1],\\ &B_3(A_2/(x^2))\sim F_1[2]+F_2[1],\\ &B_4(A_2/(x^2))\sim F_2[2]+F_3[1]+F_3[2],\\ &B_2(A_2/(x^3))\sim F_1[1]+F_1[2],\\ &B_3(A_2/(x^3))\sim F_1[2]+F_1[3]+F_2[1]+F_2[2],\\ &B_4(A_2/(x^3))\sim F_1[3]+F_2[2]+2F_2[3]+F_3[1]+2F_3[2]+F_3[3],\\ &B_2(A_2/(x^4))\sim F_1[1]+F_1[2]+F_1[3],\\ &B_3(A_2/(x^4))\sim F_1[2]+F_1[3]+F_1[4]+F_2[1]+F_2[2]+F_2[3],\\ &B_4(A_2/(x^4))\sim F_1[3]+F_1[4]+F_2[2]+2F_2[3]+2F_2[4]+F_3[1]+2F_3[2]+2F_3[3]+F_3[4],\\&......\end{split}\] Here, $F_j[d]$ denotes the $W_1$-module $F_j$ which begins with total degree $d$ in $x$ and $\mathrm{d} x$.
\end{prop}
\begin{rmk}
Naturally, the analogous statement of Theorem \ref{thm41} for $N_k(A_2/(x^m))$ can be carried out without any significant change. The only difference is that we need to replace Proposition \ref{prop44} by Kerchev's result \cite{kerchev2013}. If we combine this reasoning with the computation in Theorem \ref{thm37}, we actually prove:
\begin{prop}
As a $W_1$-module, the Jordan-H\"older series of $N_2(A_2/(x^m))$ is \[N_2(A_2/(x^m))\sim F_1[1]+F_1[2]+\dots+F_1[m-1].\]
\end{prop}
\end{rmk}
\section{Finite-Dimensionality}
Now, as promised, let us make a comparison between the phenomena we observed in previous sections. In both cases, the associated ``commutative'' spaces, $\mathrm{Spec~}\mathbb{Q}[x,y]/(x^m+y^m)$ and $\mathrm{Spec~}\mathbb{Q}[x,y]/(x^m)$, are one-dimensional. However, as we have seen, $B_k(A_2/(x^m+y^m))$ is a finite-dimensional $\mathbb{Q}$-vector space while $B_k(A_2/(x^m))$ is of infinite dimension. This difference is encoded in the information about the locus of non-reduced points in $\textrm{Spec~}A_{\textrm{ab}}$, the spectrum associated with the abelianization of the algebra we began with. The exact statement is the following theorem:
\begin{thm}\label{thm51}
Let $A$ be a finitely generated graded associative algebra over $\mathbb{Q}$ (generated by elements of degree 1) such that $X:=A_{\textrm{ab}}$ is at most of dimension 1 in the Krull sense. Moreover, if we assume that the number of non-reduced points in $\textrm{Spec~}X$ is finite, then $B_k(A)$ and $N_j(A)$ are finite-dimensional $\mathbb{Q}$-vector spaces for $k\geq3$ and $j\geq 2$.
\end{thm}
\begin{rmk}
Part of this theorem was stated in Jordan and Orem's paper \cite{jordan2014} as Corollary 3.10. We obtained this result independently and generalized it to $B_k$.
\end{rmk}
To prove the theorem, we need to cite the following powerful lemma:
\begin{lemma}(\cite{bapat2013})\label{lemma52}
$[M_j,L_k]\subset L_{k+j}$, whenever $j$ is odd.
\end{lemma}
Let us prove Theorem \ref{thm51}.
\begin{proof}
Applying Lemma \ref{lemma52} to the case $k=1$ and $j=2r+1$ gives $[L_1,M_{2r+1}]\subset L_{2r+2}$, which implies \begin{equation}\label{eq1}\sum_i[x_i,M_{2r+1}]\subset L_{2r+2},\end{equation} where the $x_i$ are degree-one generators of $A$. On the other hand, we always have $L_j\subset\sum_i[x_i,M_{j-1}]$ by definition. In particular, setting $j=2r$, we get \begin{equation}\label{eq2}L_{2r}\subset\sum_i[x_i,M_{2r-1}].\end{equation} Combining (\ref{eq1}) and (\ref{eq2}), we get (to be understood in each graded component) \begin{equation}\label{eq3}\dim L_{2r}-\dim L_{2r+2}\leq\dim(\sum_i[x_i,M_{2r-1}])-\dim(\sum_i[x_i,M_{2r+1}]).\end{equation}
Now, let $V$ be the $\mathbb{Q}$-vector space spanned by the $x_i$'s, which we assume to be $n$-dimensional. Let \[\begin{split}\lambda:V\otimes M_{2r-1}&\to\sum_i[x_i,M_{2r-1}]\\ \mu:V\otimes M_{2r+1}&\to\sum_i[x_i,M_{2r+1}]\end{split}\] be the natural maps sending $a\otimes b$ to $[a,b]$. We have the following commutative diagram: \[\xymatrix{0\ar[r] & \ker \mu\ar[r]\ar[d]_\alpha & V\otimes M_{2r+1}\ar[r]^\mu\ar[d]_\beta & \sum_i[x_i,M_{2r+1}]\ar[r]\ar[d]_\gamma & 0\\ 0\ar[r] & \ker \lambda\ar[r] & V\otimes M_{2r-1}\ar[r]^\lambda & \sum_i[x_i,M_{2r-1}]\ar[r] & 0}\] where both rows are exact, and $\beta$ and $\gamma$ are injections. The snake lemma gives a long exact sequence that reduces to a short exact sequence \[0\to\mathrm{coker}~\alpha\to\mathrm{coker}~\beta\to\mathrm{coker}~\gamma\to0,\] hence (again, to be understood in each graded component) \[\dim\mathrm{coker}~\gamma=\dim\mathrm{coker}~\beta-\dim\mathrm{coker}~\alpha\leq\dim\mathrm{coker}~\beta.\] That is, \[\begin{split}\dim(\sum_i[x_i,M_{2r-1}])-\dim(\sum_i[x_i,M_{2r+1}])&\leq\dim V\otimes M_{2r-1}-\dim V\otimes M_{2r+1}\\ &=n(\dim M_{2r-1}-\dim M_{2r+1}).\end{split}\] Plugging it in (\ref{eq3}), we conclude \begin{equation}\label{eq4}\dim B_{2r}+\dim B_{2r+1}\leq n(\dim N_{2r-1}+\dim N_{2r}).\end{equation}
A similar argument gives \begin{equation}\label{eq5}\dim B_{2r+1}\leq n\dim N_{2r}.\end{equation}
Therefore we only have to prove the finite-dimensionality of $N_j$ for $j\geq2$.
It is known that $C:=A/M_3(A)$ has a commutative ring structure: the multiplication is defined by $a*b=(ab+ba)/2$. In addition, each $N_j$ has a $C$-module structure, as follows: for $a\in C$ and $m\in N_j$, we also use $a\in A$ and $m\in M_j$ to denote fixed liftings of $a$ and $m$. The multiplication is again defined by $a*m=(am+ma)/2$. Since $m\in M_j$, we have $(am+ma)/2\in M_j$, and we define $a*m\in N_j$ to be the equivalence class of $(am+ma)/2$.
It is straightforward to check that this multiplication is well-defined. We should point out that $M_jM_3\subset M_{j+2}$ (\cite{bapat2013} Corollary 1.4) is needed in this routine verification.
It is also established (see \cite{etingof2009}, \cite{jordan2014}) that $C$ is a finitely generated algebra over $\mathbb{Q}$ and $N_j$ a finitely generated module over $C$. By a standard result from commutative algebra, in order to show that $N_j$ is a finite dimensional $\mathbb{Q}$-vector space, we only have to show that $N_j$ has finite support as a coherent sheaf on $\mathrm{Spec~}C$.
Now let $X:=A_{\textrm{ab}}=A/M_2(A)$ and consider the short exact sequence \[0\to M_2(A)/M_3(A)\to C\to X\to 0.\] Using results from \cite{jennings1947}, we find that $M_2(A)/M_3(A)$ is a nilpotent ideal in $C$. In other words, $\mathrm{Spec~}X$ and $\mathrm{Spec~}C$ share the same reduced scheme (in particular, the same underlying topological space).
The goal is to prove that $N_j$ has finite support, and by our assumption, we need to deal with reduced points only. Further, since $\textrm{Spec~}X$ is assumed to be at most one-dimensional, it suffices to prove that $N_j$ is supported on the set of singular points. Let $x$ be a reduced and smooth point of $\textrm{Spec~}X$. Then, from \cite{jordan2014} Corollary 3.9, we have $N_j(A)_x=N_j(A_x)$, where the subscript $x$ denotes the completion at $x$. Moreover, our assumptions on $X$ guarantee that $A_x$ is commutative (being $\mathbb{Q}[[x]]$ or $\mathbb{Q}$), therefore $N_j(A)_x=0$. As a consequence, $N_j$ has finite support and it is of finite dimension.
\end{proof}
\section{Appendix: Torsion Subgroups in $N_k(A_n(\mathbb{Z}))$}
It was noticed in \cite{bhupatiraju2012} that there are no torsion elements in $N_k(A_n(\mathbb{Z}))$ when $k$ and $n$ are small. A natural guess would be that this holds for every $k$ and $n$. Surprisingly, Krasilnikov found 3-torsion in $N_3(A_5(\mathbb{Z}))$ and more torsion elements for higher $k$ and $n$ (see \cite{krasilnikov2013}). He predicts that 3-torsion is the only torsion showing up in $N_k(A_n(\mathbb{Z}))$, and that it arises somewhat ``by coincidence''.
In fact, Krasilnikov and his coauthors obtained many more results (see \cite{deryabina2013}, \cite{dacosta2013}). For instance, in \cite{deryabina2013} they proved that \[\{x_1^{n_1}\cdots x_k^{n_k}[x_{i_1},[x_{i_2},x_{i_3}]]\cdot[x_{i_4},x_{i_5}]\cdots[x_{i_{2l}},x_{i_{2l+1}}]\}_{i_1<i_2<\dots<i_{2l+1}}\] forms a basis of 3-torsion in $N_3(A_n)$. In particular, the number of copies of $\mathbb{Z}/3$ in $N_3(A_n)[1,1,\dots,1]$ is given by the sum of binomial coefficients \[{n\choose5}+{n\choose7}+{n\choose9}+\dots\] Our data obtained by Magma agrees with Krasilnikov's result. Similarly, many results about $N_4(A_n)$ can be deduced from \cite{dacosta2013}, in which the generators of $M_5(A_n)$ were determined explicitly.
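For the multilinear components this count is easy to reproduce; the short Python check below (an illustration only) recovers the numbers of copies of $\mathbb{Z}/3$ that appear in the tables on the next page for $n=5,6,7$.
\begin{verbatim}
from math import comb

def copies_of_Z3(n):
    # number of copies of Z/3 in N_3(A_n)[1,1,...,1], per the binomial sum above
    return sum(comb(n, k) for k in range(5, n + 1, 2))

print([copies_of_Z3(n) for n in (5, 6, 7)])   # expected: [1, 6, 22]
\end{verbatim}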
This appendix serves as an experimental verification of Krasilnikov's predictions. The computational data obtained by Magma is listed on the next page. For example, if we assign each variable to be of degree 1, then $N_5(A_5(\mathbb{Z}))$ has torsion subgroup $\mathbb{Z}/3$ in degree $(1,1,1,1,3)$ and $(\mathbb{Z}/3)^2$ in degree $(1,1,1,2,3)$. A blank entry indicates that there are no torsion elements. Red question marks ``\textcolor{red}{?}'' represent data currently beyond our computability. A particularly interesting phenomenon is that no torsion has been found in $N_4$ so far; therefore we do not include the $N_4$ row in the table.
\setlength{\textheight}{220mm}
\begin{landscape}
$N_k(A_2)$: No torsion up to total degree 14.\\
~\\
$N_k(A_3)$: No torsion up to total degree 10.\\
~\\
$N_k(A_4)$: No torsion up to total degree 9.\\
~\\
\begin{tabular}{c|cccccccccccc}
$N_k(A_5)$ & \!(1,1,1,1,1)\! & \!(1,1,1,1,2)\! & \!(1,1,1,1,3)\! & \!(1,1,1,2,2)\! & \!(1,1,1,1,4)\! & \!(1,1,1,2,3)\! & \!(1,1,2,2,2)\! & \!(1,1,1,1,5)\! & \!(1,1,1,2,4)\! & \!(1,1,1,3,3)\! & \!(1,1,2,2,3)\! & \!(1,2,2,2,2)\!\\
\hline
$N_3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$\\
$N_5$ & & & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $(\mathbb{Z}/3)^2$ & $(\mathbb{Z}/3)^3$ & $\mathbb{Z}/3$ & $(\mathbb{Z}/3)^2$ & $(\mathbb{Z}/3)^3$ & $(\mathbb{Z}/3)^4$ & \textcolor{red}{?}\\
$N_6$ & & & & & $\mathbb{Z}/3$ & & & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & & & \textcolor{red}{?}\\
$N_7$ & & & & & & & & $\mathbb{Z}/3$ & $\mathbb{Z}/3$ & $(\mathbb{Z}/3)^2$ & $(\mathbb{Z}/3)^2$ & \textcolor{red}{?}\\
\end{tabular}
~\\
~\\
\begin{tabular}{c|ccccc}
$N_k(A_6)$ & (1,1,1,1,1,1) & (1,1,1,1,1,2) & (1,1,1,1,1,3) & (1,1,1,1,2,2) & (1,1,1,1,1,4)\\
\hline
$N_3$ & $(\mathbb{Z}/3)^6$ & $(\mathbb{Z}/3)^6$ & $(\mathbb{Z}/3)^6$ & $(\mathbb{Z}/3)^6$ & $(\mathbb{Z}/3)^6$\\
$N_5$ & & $(\mathbb{Z}/3)^6$ & $(\mathbb{Z}/3)^{12}$ & $(\mathbb{Z}/3)^{18}$ & $(\mathbb{Z}/3)^{12}$\\
$N_6$ & & & $\mathbb{Z}/3$ & & $(\mathbb{Z}/3)^6$\\
$N_7$ & & & & & $(\mathbb{Z}/3)^7$\\
\end{tabular}
~\\
~\\
~\\
\begin{tabular}{c|cc}
$N_k(A_7)$ & (1,1,1,1,1,1,1) & (1,1,1,1,1,1,2)\\
\hline
$N_3$ & $(\mathbb{Z}/3)^{22}$ & $(\mathbb{Z}/3)^{22}$\\
$N_5$ & $(\mathbb{Z}/3)^{21}$ & $(\mathbb{Z}/3)^{69}$\\
$N_6$ & & $\mathbb{Z}/3$
\end{tabular}
\end{landscape}
\setlength{\textheight}{210mm}
\end{document} |
\begin{document}
\title[Rational points on curves of genus 2]
{On the average number of rational points \\ on curves of genus 2}
\author{Michael Stoll}
\address{Mathematisches Institut,
Universit\"at Bayreuth,
95440 Bayreuth, Germany.}
\email{[email protected]}
\date{\today}
\maketitle
\section{Introduction}
For $N > 0$, let $\mathcal{C}_N$ denote the set of all genus~$2$ curves
\[ C : y^2 = F(x,z) = f_6\,x^6 + f_5\,x^5 z + \dots + f_1\,x z^5 + f_0\,z^6 \]
with integral coefficients $f_j$ such that $|f_j| \le N$ for all~$j$.
($C$ is considered in the weighted projective plane with weights~$1$
for $x$ and~$z$ and weight~$3$ for~$y$.)
In this note, we sketch heuristic arguments that lead to
the following conjectures.
\begin{Conjecture} \label{Conj}
There is a constant $\gamma > 0$ such that
\[ \frac{\sum_{C \in \mathcal{C}_N}\# C({\mathbb Q})}{\# \mathcal{C}_N} \sim \frac{\gamma}{\sqrt{N}}
\,.
\]
In particular, the density of genus~$2$ curves with a rational point
is zero.
\end{Conjecture}
The second part of this conjecture is analogous to Conjecture~2.2~(i)
in~\cite{PoonenVoloch}, which considers hypersurfaces in~${\mathbb P}^n$.
If $C$ is a curve of genus~2 as above and $P = (a : y : b)$ is a rational point
on~$C$ (i.e., we have $F(a,b) = y^2$ with $a, b$ coprime integers), then
we denote by $H(P)$ the height $H(a:b) = \max\{|a|,|b|\}$ of its $x$-coordinate.
\begin{Conjecture} \label{ConjBound}
Let $\varepsilon > 0$. Then there is a constant $B_\varepsilon$ and a Zariski open
subset~$U_\varepsilon$ of the `coefficient space' ${\mathbb A}^7$ such that for all
$C \in \mathcal{C}_N \cap U_\varepsilon$ and all rational points $P$ on~$C$, we have
\[ H(P) \le B_\varepsilon N^{13/2 + \varepsilon} \,. \]
\end{Conjecture}
The reason for restricting to~$U_\varepsilon$ is that one should expect infinite
families of curves with larger points (at least over sufficiently large
number fields). In general, we still expect the following to hold.
\begin{Conjecture} \label{ConjBoundGen}
There are constants~$\kappa$ and~$B$ such that every rational point~$P$
on any curve $C \in {\mathbb C}C_N$ satisfies $H(P) \le B N^\kappa$.
\end{Conjecture}
If we restrict to quadratic twists of a fixed curve, then the ABC~Conjecture
implies such a bound with $\kappa = 1/2$, see~\cite{Granville}.
Note that Conjecture~\ref{ConjBoundGen} says in particular that the height
of a point on~$C$ is
{\em polynomially bounded} by the height of~$C$. If a statement like the
above could be proved for some explicit~$\kappa$ and~$B$,
then this would immediately imply that there is a polynomial
time algorithm that determines the set of rational points on a given
curve~$C$ of genus~2. More precisely, it would be polynomial time in~$N$
(and not in the input length, which is roughly~$\log N$). If we assume
that the Mordell-Weil group of the Jacobian~$J$ of~$C$ is known, then we
obtain a very efficient algorithm, since we only have to check all points
in~$J({\mathbb Q})$ of logarithmic height $\ll \log N$.
Similar statements can be formulated for other families of curves.
We also present the conjecture below, which is based on observation of
experimental data, and not on our heuristic arguments.
\begin{Conjecture} \label{ConjNumber}
There is a constant~$B$ such that for any curve $C \in \mathcal{C}_N$, the
number of rational points on~$C$ satisfies
\[ \#C({\mathbb Q}) \le B \log (2N + 1) \,. \]
\end{Conjecture}
Caporaso, Harris, and Mazur~\cite{CHM} show that the weak form of Lang's
conjecture on rational points on varieties of general type (namely, that
they are not Zariski dense) would imply that there is a uniform bound
on~$\#C({\mathbb Q})$, independent of~$N$. So our conjecture here can be considered
as a weaker form of this consequence of Lang's conjecture.
\subsection*{Acknowledgments}
I thank Noam Elkies for providing me with his wonderful ternary sextics.
I also wish to thank Noam Elkies, Bjorn Poonen and Samir Siksek for some
helpful comments on earlier versions of this text.
\section{The Heuristic}
We first need an estimate for the fraction of curves of the form
$y^2 = F(x,z)$ in a $(1,3,1)$-weighted projective plane, with $F$
a sextic form with integral coefficients bounded by~$N$ in absolute
value, that are singular (and so are not of genus~$2$). The corresponding
forms~$F$ have a repeated irreducible factor. The largest contribution
to the set $\mathcal{D}_N$ of singular curves comes from polynomials with a
repeated linear factor; they are of the form
\[ F(x, z) = (ax + bz)^2 G(x, z) \]
with $\deg G = 4$ and with coefficients such that $F(x, z)$ has coefficients
bounded by~$N$. For fixed $(a:b)$, we denote by $H(a : b) = \max\{|a|,|b|\}$
the usual height in~${\mathbb P}^1$; then the number of such forms~$F$ is bounded by (roughly)
$(2N+1)^5/H(a:b)^{10}$, leading to $\#\mathcal{D}_N = O(N^5)$. Hence
$\#\mathcal{C}_N/(2N+1)^7 = 1 - O(N^{-2})$. See Section~\ref{Sbad} below for
details.
We try to estimate the average number of rational points on
curves in~$\mathcal{C}_N$ with given $x$-coordinate $(a : b) \in {\mathbb P}^1({\mathbb Q})$.
Denote this number by ${\mathbb E}_{(a:b)}(N)$.
In the simplest
case, $(a : b) = (1 : 0)$ (or $(0 : 1)$, which leads to the same
computation). For a given curve (identified with the sextic form~$F$)
to have such a rational point, its coefficients have to satisfy
\[ f_6 = u^2 \qquad\text{for some $u \in {\mathbb Z}_{\ge 0}$.} \]
If $u = 0$, we have one point, for $u > 0$, we have two.
The total number of such points on (not necessarily nonsingular)
curves $y^2 = F(x,z)$ is then
\[ (2\lfloor\sqrt{N}\rfloor + 1)(2N + 1)^6 \,. \]
The number of all polynomials is $(2N + 1)^7$, and if we neglect those
that are not squarefree (which is allowed, see above), we obtain
for the average number of points at infinity
\[ {\mathbb E}_{(1:0)}(N) = \frac{2\lfloor\sqrt{N}\rfloor + 1}{2N+1}
\sim \frac{1}{\sqrt{N}} \,. \]
For $(a:b) \neq (1:0), (0:1)$, we claim that similarly (see Cor.~\ref{Corab})
\begin{equation} \label{Claimab}
{\mathbb E}_{(a:b)}(N) \sim \frac{\gamma(a:b)}{\sqrt{N}}
\end{equation}
with, for $0 < a < b$,
\[ \gamma(a:b) = \frac{1}{b^3}\,\phi\Bigl(\frac{a}{b}\Bigr) \,, \]
where, for $t > 0$,
\[ \phi(t) = \frac{1}{3 \cdot 5 \cdot 7 \cdot 9 \cdot 11 \cdot 13\,t^{21}}
\sum_{\varepsilon_0,\dots,\varepsilon_6 \in \{\pm 1\}}
\varepsilon_0 \varepsilon_1 \cdots \varepsilon_6
\max\{\varepsilon_0 + \varepsilon_1 t + \dots + \varepsilon_6 t^6, 0\}^{13/2} \,.
\]
In general, we have
$\gamma(a : b) = \gamma\bigl(\min\{|a|,|b|\} : \max\{|a|,|b|\}\bigr)$.
\begin{figure}
\caption{The function $\phi$. For $0 \le t \le 0.5$, the power series was used,
for $0.5 \le t \le 2$ the sum, and for $t \ge 2$ the functional equation.}
\label{FigPhi}
\end{figure}
Note that for $t \to 0$,
\begin{align*}
\phi(t) &= 1 - \frac{1}{2^3 \cdot 3}\,t^2
- \frac{19}{2^7 \cdot 3}\,t^4
- \frac{217}{2^{10} \cdot 3}\,t^6
- \frac{9583}{2^{15} \cdot 3}\,t^8
- \frac{40125}{2^{18}}\,t^{10}
+ O(t^{12}) \,,
\end{align*}
so that we can extend $\phi$ to all of~${\mathbb R}$ by setting $\phi(0) = 1$
and $\phi(t) = \phi(|t|)$. The power series expansion is obtained by
noting that for $|t| \le 1/2$, we have
\[ \varepsilon_0 + \varepsilon_1 t + \dots + \varepsilon_6 t^6 \ge 0
\quad\iff\quad \varepsilon_0 = +1 \,.
\]
The radius of convergence of the series is given by the positive root
$\rho \approx 0.504138$ of $1 - t - t^2 - \dots - t^6$.
We have the functional equation (for $t \neq 0$)
\[ \phi\Bigl(\frac{1}{t}\Bigr) = t^3\,\phi(t) \,. \]
Furthermore, $\phi(t)$ is decreasing for $t \ge 0$. This implies that
\[ \frac{\phi(1)}{H(a:b)^3} \le \gamma(a:b) \le \frac{1}{H(a:b)^3} \,. \]
Note that
\[ \phi(1) = \frac{7^{13/2} - 7\cdot 5^{13/2} + 21\cdot 3^{13/2} - 35}{135135}
\approx 0.689540287634369059265 \,.
\]
See Figure~\ref{FigPhi} for a graph of~$\phi$.
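For moderate $t$ the function $\phi$ is easy to evaluate directly from the defining sum; for small $t$ the power series above is preferable because of cancellation. The following Python sketch (an illustration only; the sample point $t=0.8$ is an arbitrary choice) reproduces the value of $\phi(1)$ and tests the functional equation.
\begin{verbatim}
from itertools import product

def phi(t):
    total = 0.0
    for eps in product((1.0, -1.0), repeat=7):
        lin = sum(e * t**k for k, e in enumerate(eps))
        if lin > 0:
            sign = 1.0
            for e in eps:
                sign *= e
            total += sign * lin**6.5
    return total / (135135.0 * t**21)      # 135135 = 3*5*7*9*11*13

closed = (7**6.5 - 7 * 5**6.5 + 21 * 3**6.5 - 35) / 135135
print(phi(1.0), closed)                    # both approx 0.68954028763...
print(phi(1 / 0.8), 0.8**3 * phi(0.8))     # functional equation at t = 0.8
\end{verbatim}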
We postpone the proof of the claim~\eqref{Claimab} to Section~\ref{Spfab}.
Summing the terms for $H(a:b) \le H$, we obtain, denoting by
${\mathbb E}_{\le H}(N)$ the average number of rational points of height $\le H$
(where the height of a rational point is the usual naive height
$H(a:b) = \max\{|a|, |b|\}$ of its $x$-coordinate $(a:b)$):
\[ {\mathbb E}_{\le H}(N) \sim \frac{\gamma_H}{\sqrt{N}} \qquad
\text{as $N \to \infty$, uniformly for $H \ll N^{6/5-\varepsilon}$},
\]
where
\[ \gamma_H = \sum_{H(a:b) \le H} \gamma(a:b) \,. \]
See Cor.~\ref{CorH} in Section~\ref{Spfab} below.
We obtain Conjecture~\ref{Conj} by letting $H \to \infty$, with
\[ \gamma = \lim_{H \to \infty} \gamma_H
= \sum_{(a:b) \in {\mathbb P}^1({\mathbb Q})} \gamma(a:b) \,.
\]
We denote by ${\mathbb E}(N)$ the average number of rational points on curves in~$\mathcal{C}_N$.
Note that we can at least prove the following (which is, however, the less
interesting inequality).
\begin{Proposition}
We have
\[ \liminf_{N \to \infty} \sqrt{N}\,{\mathbb E}(N) \ge \gamma \,. \]
\end{Proposition}
\begin{proof}
Given $\varepsilon > 0$, fix $H$ such that $\gamma_H > \gamma - \varepsilon$.
We then have
\[ \sqrt{N}\,{\mathbb E}(N) \ge \sqrt{N}\,{\mathbb E}_{\le H}(N) > \gamma_H - \varepsilon > \gamma - 2 \varepsilon
\qquad\text{for $N$ sufficiently large.}
\]
\end{proof}
In order to prove Conjecture~\ref{Conj}, one would need a reasonably
good estimate for the number of very large points. This is most likely
a very hard problem.
Let us look a bit closer at the value of~$\gamma$. We have
\[ \gamma = 4 \sum_{b=1}^\infty \mathop{\sum\nolimits'}_{0 \le a \le b, a \perp b}
\frac{1}{b^3} \phi\Bigl(\frac{a}{b}\Bigr)
= \frac{4}{\zeta(3)} \sum_{H=1}^\infty
\frac{1}{H^3} \mathop{\sum\nolimits'}_{0 \le a \le H} \phi\Bigl(\frac{a}{H}\Bigr)
\,.
\]
Here, $\sum'$ denotes the sum with first and last terms counted half.
By the Euler-Maclaurin summation formula,
\[ \mathop{\sum\nolimits'}_{0 \le a \le H} \phi\Bigl(\frac{a}{H}\Bigr)
= H \int_0^1 \phi(t)\,dt + \frac{1}{12 H} \phi'(1)
- \frac{1}{720 H^3} \phi'''(1) + O\Bigl(\frac{1}{H^5}\Bigr) \,.
\]
So we obtain
\[ \gamma = 4\Bigl(\frac{\zeta(2)}{\zeta(3)} \int_0^1 \phi(t)\,dt
+ \frac{\phi'(1) \zeta(4)}{12 \zeta(3)}
- \frac{\phi'''(1) \zeta(6)}{720 \zeta(3)}
+ R \Bigr)
\,,
\]
with a small error~$R$.
For more precise numerical estimates, we compute the first few terms
in the series over~$H$ to some precision and estimate the tail of the
series by the formula above. Note that the derivatives of~$\phi$ at~$t = 1$
can be computed explicitly. We find
\[ \gamma \approx 4.79991101188445188 \,. \]
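This value is easy to reproduce at modest precision: one can sum $\gamma(a:b)$ over all $x$-coordinates of height at most $H$ and add the tail estimate $\approx 2.2825\ldots/H$ derived below. The following Python sketch (an illustration only, using the mpmath library; high working precision is needed because the defining sum for $\phi$ suffers heavy cancellation at small $t$, and $H=40$ is an arbitrary choice) should come out close to $4.80$.
\begin{verbatim}
from itertools import product
from math import gcd
from mpmath import mp, mpf

mp.dps = 60

def phi(t):
    t = mpf(t)
    if t == 0:
        return mpf(1)
    total = mpf(0)
    for eps in product((1, -1), repeat=7):
        lin = sum(e * t**k for k, e in enumerate(eps))
        if lin > 0:
            sign = 1
            for e in eps:
                sign *= e
            total += sign * lin**mpf('6.5')
    return total / (135135 * t**21)

H = 40
gamma_H = mpf(0)
for b in range(1, H + 1):
    for a in range(b + 1):
        if gcd(a, b) == 1:
            # (a:b), (-a:b), (b:a), (-b:a) share the same gamma-value; the
            # points (0:1), (1:0) and (1:1), (-1:1) are the self-overlapping ones
            weight = 2 if a in (0, b) else 4
            gamma_H += weight * phi(mpf(a) / b) / b**3
print(gamma_H + mpf('2.28253672259903912') / H)   # approximately gamma
\end{verbatim}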
Here is a table with experimental data obtained from all curves of size
$N \le 10$. For $N \le 3$, the number of points should be accurate; for
$4 \le N \le 10$, we counted all points of height up to $2^{14} - 1$,
so the numbers given are lower bounds. However, the difference is likely
to be so small that it does not affect the leading few digits. See
Section~\ref{S:Data} for the source of these data.
\begin{center}
\begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|}
\hline
size of curves $\le N${\Large\strut}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\hline
avg.~$\#C({\mathbb Q})${\Large\strut}
& 3.94 & 2.70 & 2.19 & 2.42 & 2.08 & 1.84 & 1.66 & 1.52 & 1.65 & 1.53 \\\hline
(avg.~$\#C({\mathbb Q})$)$\sqrt{N}${\Large\strut}
& 3.94 & 3.82 & 3.79 & 4.84 & 4.66 & 4.50 & 4.40 & 4.31 & 4.94 & 4.83 \\\hline
\end{tabular}
\end{center}
We observe values reasonably close to the expected asymptotic value
$\gamma \approx 4.800$. When $N$ is a square, the average number of points
jumps up because of the additional possibilities for points at $x = 0$
or $x = \infty$ (leading or trailing coefficient equal to~$N$).
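The first entries of the table are easy to approximate by brute force. The following Python sketch (an illustration only; it uses SymPy for the squarefree test, and the height cutoff $20$ is an arbitrary choice, so the result is a slight undercount) enumerates the $3^7$ coefficient vectors for $N=1$, keeps the squarefree sextic forms, and averages the point counts; the output should be close to the entry $3.94$.
\begin{verbatim}
from itertools import product
from math import gcd, isqrt
from sympy import symbols, gcd as polygcd, diff

X, Z = symbols('X Z')

def is_genus2(coeffs):
    """y^2 = F(x,z) has genus 2 iff the binary sextic F is squarefree."""
    F = sum(c * X**i * Z**(6 - i) for i, c in enumerate(coeffs))
    if F == 0:
        return False
    return polygcd(polygcd(F, diff(F, X)), diff(F, Z)).is_number

def is_square(n):
    return n >= 0 and isqrt(n)**2 == n

# x-coordinates (a : b) of height at most 20 in P^1(Q)
xcoords = [(1, 0)] + [(a, b) for b in range(1, 21)
                      for a in range(-20, 21) if gcd(abs(a), b) == 1]

N, total, curves = 1, 0, 0
for coeffs in product(range(-N, N + 1), repeat=7):
    if not is_genus2(coeffs):
        continue
    curves += 1
    for a, b in xcoords:
        v = sum(c * a**i * b**(6 - i) for i, c in enumerate(coeffs))
        if is_square(v):
            total += 1 if v == 0 else 2
print(curves, total / curves)   # the average should be close to 3.94
\end{verbatim}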
From the above, we also get an estimate for $\gamma - \gamma_H$:
\[ \gamma - \gamma_H = 4 \sum_{b>H} \sum_{0<a<b, a \perp b}
\frac{1}{b^3} \phi\Bigl(\frac{a}{b}\Bigr)
\approx \frac{4}{\zeta(2) H} \int_0^1 \phi(t)\,dt
\approx 2.28253672259903912\,\frac{1}{H} \,.
\]
\section{Proof of the asymptotics for fixed $(a:b)$} \label{Spfab}
The total number of rational points with $x$-coordinate $(a : b) \in {\mathbb P}^1({\mathbb Q})$
on curves in~$\mathcal{C}_N \cup \mathcal{D}_N$ is the number of integral solutions
$(f_0, f_1, \dots, f_6, y)$ of the equation
\[ f_6 a^6 + f_5 a^5 b + f_4 a^4 b^2 + f_3 a^3 b^3 + f_2 a^2 b^4
+ f_1 a b^5 + f_0 b^6 = y^2 \,,
\]
subject to the inequalities $- N \le f_j \le N$ for $j = 0, 1, \dots, 6$.
If we fix~$y$, then the solutions correspond to the lattice points in
the intersection of the cube $[-N,N]^7$ with the hyperplane given by the
equation above. For $y = 0$, the intersection of ${\mathbb Z}^7$ with the hyperplane,
which we will denote $L_{(a:b)}$, is spanned by the vectors
\[ (-a,b,0,0,0,0,0), (0,-a,b,0,0,0,0), \dots, (0,0,0,0,-a,b,0),
(0,0,0,0,0,-a,b) \,.
\]
We can define the lattice spanned by these vectors for any $(a:b) \in {\mathbb P}^1({\mathbb R})$.
These lattices (considered up to scaling) make up
the image of the obvious map from~${\mathbb P}^1({\mathbb R})$ into the moduli space
of $6$-dimensional lattices; this image is compact since ${\mathbb P}^1({\mathbb R})$ is.
This implies that all invariants of our lattices (like for example the covering radius)
can be estimated above and below by a constant times
a suitable power of the typical length $H(a:b)$ associated to the lattice.
For some of these invariants, we give explicit bounds below.
The Gram matrix of the vectors above is tridiagonal:
\[ \begin{pmatrix}
a^2+b^2 & -ab & 0 & \cdots & 0 \\
-ab & a^2+b^2 & -ab & \cdots & 0 \\
0 & -ab & a^2+b^2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & a^2+b^2
\end{pmatrix}
\]
(From this matrix, one can again see that the lattice has a nearly orthogonal
basis consisting of vectors of equal length.)
The covolume of the lattice (in six-dimensional volume in~${\mathbb R}^7$) is
\[ \Delta_{(a:b)} = \sqrt{a^{12} + a^{10} b^2 + \dots + b^{12}} \,; \]
we have
\[ H(a:b)^6 \le \Delta_{(a:b)} \le \sqrt{7} H(a:b)^6. \]
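This formula for the covolume is quickly confirmed symbolically from the tridiagonal Gram matrix; a short SymPy check (for illustration):
\begin{verbatim}
from sympy import symbols, Matrix, simplify

a, b = symbols('a b')
G = Matrix(6, 6, lambda i, j: a**2 + b**2 if i == j
           else (-a * b if abs(i - j) == 1 else 0))
covol_sq = sum(a**(2 * k) * b**(12 - 2 * k) for k in range(7))
print(simplify(G.det() - covol_sq) == 0)   # expected: True
\end{verbatim}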
The diameter of the fundamental parallelotope spanned by these vectors is
\[ \delta_{(a:b)} = \sqrt{a^2 + 5(|a|+|b|)^2 + b^2}
= \sqrt{6a^2 + 10|ab| + 6b^2}
\le \sqrt{22}\,H(a:b) \,.
\]
Let
\[ \mathbf{a}_{(a:b)} = (b^6, a b^5, a^2 b^4, \dots, a^6) \in {\mathbb R}^7 \]
and
\[ \mathbf{e}_{(a:b)} = \frac{1}{\Delta_{(a:b)}^2}\,\mathbf{a}_{(a:b)} \]
(note that $\mathbf{a}_{(a:b)} \cdot \mathbf{e}_{(a:b)} = 1$);
then the number of points we want to count is
\[ \sum_{y \in {\mathbb Z}}
\#\bigl({\mathbb Z}^7 \cap [-N,N]^7
\cap (L_{(a:b)} + y^2 \mathbf{e}_{(a:b)})\bigr)
= \sum_{y \in {\mathbb Z}} \# S(y) \,,
\]
where we define $S(y)$ to be the set under the `$\#$' sign in the first sum. Let
$V_{(a:b)} \subset L_{(a:b)}$ be the Voronoi cell of the lattice
${\mathbb Z}^7 \cap L_{(a:b)}$; in particular it has volume~$\Delta_{(a:b)}$, and its
translates by lattice points tessellate~$L_{(a:b)}$.
We consider $S(y) + V_{(a:b)}$. The (6-dimensional) volume of this set
is $\#S(y) \Delta_{(a:b)}$. We use $B_r(x)$ to denote the closed
ball of radius~$r$ with center~$x$. Write
\[ W_{(a:b)}(t,\delta)
= \begin{cases}
\bigl\{x \in L_{(a:b)} + t \mathbf{e}_{(a:b)}
: B_{-\delta}(x) \subset [-1,1]^7\bigr\} \,,
& \text{if $\delta \le 0$,} \\
\bigl\{x \in L_{(a:b)} + t \mathbf{e}_{(a:b)}
: B_\delta(x) \cap [-1,1]^7 \neq \emptyset\bigr\} \,,
& \text{if $\delta \ge 0$.}
\end{cases}
\]
In particular, $W_{(a:b)}(t,0) = (L_{(a:b)} + t \mathbf{e}_{(a:b)}) \cap [-1,1]^7$.
There is a constant $c_0 > 0$ such that the covering radius of the lattice
${\mathbb Z}^7 \cap L_{(a:b)}$ is bounded by $c_0 H(a:b)$ (see the remark above).
Writing $H = H(a:b)$ in the following, we obtain
\[ N \cdot W_{(a:b)}\Bigl(\frac{y^2}{N}, -\frac{c_0 H}{N}\Bigr)
\subset S(y) + V_{(a:b)}
\subset N \cdot W_{(a:b)}\Bigl(\frac{y^2}{N}, \frac{c_0 H}{N}\Bigr)
\,.
\]
Since
\[ \operatorname{vol}_6 W_{(a:b)}(t, \delta)
= \operatorname{vol}_6 W_{(a:b)}(t, 0) + O(\delta) + O(\delta^6)
\]
(with $O$-constants independent of~$(a:b)$), we obtain
\[ \#S(y) \Delta_{(a:b)}
= N^6\,\operatorname{vol}_6 W_{(a:b)}\Bigl(\frac{y^2}{N}, 0\Bigr) + O(H N^5) + O(H^6) \,.
\]
Therefore, using that $\Delta_{(a:b)} \asymp H^6$ and writing
$f_{(a:b)}(t) = \operatorname{vol}_6 W_{(a:b)}(t, 0)$,
\[ \#S(y) = \frac{N^6}{\Delta_{(a:b)}} f_{(a:b)}\Bigl(\frac{y^2}{N}\Bigr)
+ O(H^{-5} N^5) + O(1) \,.
\]
Let $S(a:b)$ be the set $\bigcup_{y \in {\mathbb Z}} S(y)$ of all relevant lattice
points. Then
\begin{align*}
\#S(a:b) &= \sum_{y \in {\mathbb Z}} \#S(y) \\
&= \frac{N^6}{\Delta_{(a:b)}}
\sum_{y \in {\mathbb Z}} f_{(a:b)}\Bigl(\frac{y^2}{N}\Bigr)
+ O(H^{-2} N^{11/2}) + O(H^3 N^{1/2}) \\
&= \frac{2 N^6}{\Delta_{(a:b)}}
\int_0^\infty f_{(a:b)}\Bigl(\frac{y^2}{N}\Bigr)\,dy \,
+ O(H^{-6} N^6) + O(H^{-2} N^{11/2}) + O(H^3 N^{1/2}) \,.
\end{align*}
(Note that $y = O(\sqrt{N \Delta_{(a:b)}}) = O(H^3 \sqrt{N})$ and that
$f_{(a:b)}(t)$ is decreasing for $t \ge 0$, with $f_{(a:b)}(0) = O(1)$.) Substituting
$t = y^2/N$, this gives
\begin{align*}
\#S(a:b) &= \frac{N^{13/2}}{\Delta_{(a:b)}}
\int_0^\infty f_{(a:b)}(t)\,\frac{dt}{\sqrt{t}}
+ O(H^{-6} N^6) + O(H^{-2} N^{11/2}) + O(H^3 N^{1/2}) \\
&= N^{13/2}
\int_{[-1,1]^7} (\mathbf{a}_{(a:b)} \cdot \mathbf{x})_+^{-1/2}\,d\mathbf{x} \\
&\qquad\qquad{}+ O(H^{-6} N^6) + O(H^{-2} N^{11/2}) + O(H^3 N^{1/2})
\,.
\end{align*}
Here and in what follows, for $x, r \in {\mathbb R}$ we let $x_+^r$ denote $0$
when $x \le 0$ and $x^r$ when $x > 0$ (so $x_+^{-1/2}$ vanishes for $x \le 0$).
\begin{Lemma} \label{Lemma:Int}
We have, for $ab \neq 0$,
\begin{align*}
\int_{[-1,1]^7} &(\mathbf{a}_{(a:b)} \cdot \mathbf{x})_+^{-1/2}\,d\mathbf{x} \\
&= \frac{2^7}{135135 \, |a b|^{21}}\!
\sum_{\varepsilon_0,\dots,\varepsilon_6 = \pm 1}\! \varepsilon_0 \cdots \varepsilon_6
(\varepsilon_0 |b^6| + \varepsilon_1 |a b^5| + \dots + \varepsilon_6 |a^6|)_+^{13/2} .
\end{align*}
\end{Lemma}
Note that this is $2^7$ times
\[ \gamma(a:b) = \frac{1}{|b|^3} \phi\Bigl(\frac{|a|}{|b|}\Bigr) \]
in the notation introduced in the previous section.
\begin{proof}
Let $a_1, \dots, a_m > 0$ be real numbers, $r > -1$, $c \in {\mathbb R}$.
Write $\mathbf{a} = (a_1, \dots, a_m)$.
We prove the more general statement
\begin{align*}
\int_{[-1,1]^m} &(\mathbf{a} \cdot \mathbf{x} + c)_+^r\,d\mathbf{x} \\
&\hspace*{-3mm}{}= \frac{1}{a_1 \cdots a_m\,(r+1) \cdots (r+m)}
\sum_{\varepsilon_1, \dots, \varepsilon_m = \pm 1} \varepsilon_1 \cdots \varepsilon_m
(\varepsilon_1 a_1 + \dots + \varepsilon_m a_m + c)_+^{r+m} \,.
\end{align*}
We proceed by induction. When $m = 1$, we have
\[ \int_{-1}^1 (a_1 x_1 + c)_+^r\,dx_1
= \frac{1}{a_1\,(r+1)}
\bigl((a_1 + c)_+^{r+1} - (-a_1 + c)_+^{r+1}\bigr) \,,
\]
as can be checked by considering the cases $-c \le -a_1$,
$-a_1 \le -c \le a_1$, and $a_1 \le -c$ separately.
For the inductive step, we assume the statement to be true for
$a_1, \dots, a_m$ and~$r$, and prove it for $a_1, \dots, a_m, a_{m+1}$.
Let $\mathbf{a}' = (a_1, \dots, a_m)$ and
$\mathbf{a} = (a_1, \dots, a_{m+1})$, and use similar notation for
vectors $\mathbf{x}$, $\mathbf{x}'$. Then
\begin{align*}
&\int_{[-1,1]^{m+1}} (\mathbf{a} \cdot \mathbf{x} + c)_+^r\,d\mathbf{x} \\
&= \int_{-1}^1 \int_{[-1,1]^m}
\bigl(\mathbf{a}' \cdot \mathbf{x}' + a_{m+1} x_{m+1} + c\bigr)_+^r
\,d\mathbf{x}'\,dx_{m+1} \\
&= \int_{-1}^1 \frac{1}{a_1 \cdots a_m\,(r+1) \cdots (r+m)} \times{} \\
&\quad \sum_{\varepsilon_1, \dots, \varepsilon_m = \pm 1} \varepsilon_1 \cdots \varepsilon_m
\bigl(\varepsilon_1 a_1 + \dots + \varepsilon_m a_m + x_{m+1} a_{m+1} + c\bigr)_+^{r+m}
\,dx_{m+1} \\
&= \frac{1}{a_1 \cdots a_m\,(r+1) \cdots (r+m)} \times{} \\
&\quad \sum_{\varepsilon_1, \dots, \varepsilon_m = \pm 1} \varepsilon_1 \cdots \varepsilon_m
\int_{-1}^1
\bigl(\varepsilon_1 a_1 + \dots + \varepsilon_m a_m + x_{m+1} a_{m+1} + c\bigr)_+^{r+m}
\,dx_{m+1} \\
&= \frac{1}{a_1 \cdots a_m\,(r+1) \cdots (r+m)} \times{} \\
&\quad \sum_{\varepsilon_1, \dots, \varepsilon_m = \pm 1} \varepsilon_1 \cdots \varepsilon_m
\frac{1}{a_{m+1}\,(r+m+1)}
\sum_{\varepsilon_{m+1} = \pm 1}
\bigl(\varepsilon_1 a_1 + \dots + \varepsilon_{m+1} a_{m+1}
+ c\bigr)_+^{r+m+1}
\end{align*}
by the case $m = 1$.
To finish the proof of the lemma, note that we may assume $a, b > 0$,
since the integral depends only on $|a|$ and~$|b|$.
We then apply the claim with $\mathbf{a} = \mathbf{a}_{(a:b)}$, $r = -1/2$, and $c = 0$.
\end{proof}
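The general formula established in the proof is easy to test numerically for small $m$. The following Python sketch (using {\tt scipy}; the values $m = 2$, $a_1 = 2$, $a_2 = 3$, $c = 1/2$, $r = 1/2$ are chosen for the test only and do not come from the text) compares a two-dimensional quadrature with the closed form.
\begin{verbatim}
# Check   int_{[-1,1]^2} (a1 x1 + a2 x2 + c)_+^r dx
#       = sum_{eps1,eps2} eps1 eps2 (eps1 a1 + eps2 a2 + c)_+^{r+2}
#         / (a1 a2 (r+1)(r+2))
# for arbitrarily chosen a1, a2, c, r.
from itertools import product
from scipy.integrate import dblquad

a1, a2, c, r = 2.0, 3.0, 0.5, 0.5

def plus_pow(t, e):
    return t**e if t > 0 else 0.0

numeric, _ = dblquad(lambda x2, x1: plus_pow(a1*x1 + a2*x2 + c, r),
                     -1, 1, lambda x1: -1, lambda x1: 1)

closed = sum(e1 * e2 * plus_pow(e1*a1 + e2*a2 + c, r + 2)
             for e1, e2 in product((1, -1), repeat=2))
closed /= a1 * a2 * (r + 1) * (r + 2)

print(numeric, closed)   # both should be about 3.03
\end{verbatim}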
\begin{Corollary} \label{Corab}
With $H = H(a:b)$,
\[ {\mathbb E}_{(a:b)}(N) = \frac{\gamma(a:b)}{\sqrt{N}}
+ O\bigl(H^{-6} N^{-1}\bigr)
+ O\bigl(H^{-2} N^{-3/2}\bigr)
+ O\bigl(H^3 N^{-13/2}\bigr) \,.
\]
In particular, we have
\[ \sqrt{N}\,{\mathbb E}_{(a:b)}(N) \longrightarrow \gamma(a:b) \]
as $N \to \infty$, uniformly for $(a:b)$ such that $H(a:b) \ll N^{2-\varepsilon}$.
\end{Corollary}
\begin{proof}
First note that ${\mathbb E}_{(a:b)}(N) = \#S'(a:b)/\#{\mathbb C}C_N$, where
$S'(a:b)$ consists of those points in $S(a:b)$ that lie on smooth curves.
We have
\[ \#{\mathbb C}C_N = (2N+1)^7 - \#{\mathbb C}D_N = (2N+1)^7 + O(N^5)
= (2N)^7\bigl(1 + O(N^{-1})\bigr)
\]
and
\[ \#S'(a:b) = \#S(a:b)
+ O\bigl(H^{-10} N^5\bigr)
+ O\bigl(H^{-1} N^{9/2}\bigr)
+ O\bigl(H^3 N^{1/2}\bigr) \,. \]
See Section~\ref{Sbad} below.
This implies that
\[ {\mathbb E}_{(a:b)}(N)
= \frac{\#S(a:b)}{(2N)^7}\bigl(1 + O(N^{-1})\bigr)
+ O\bigl(H^{-10} N^{-2}\bigr)
+ O\bigl(H^{-1} N^{-5/2}\bigr)
+ O\bigl(H^3 N^{-13/2}\bigr) \,.
\]
By Lemma~\ref{Lemma:Int}, the definition of $\gamma(a:b)$, and the discussion
preceding the lemma, we have (using $\gamma(a:b) \asymp H^{-3}$)
\[ \frac{\#S(a:b)}{(2N)^7}
= \frac{\gamma(a:b)}{\sqrt{N}}
+ O\bigl(H^{-6} N^{-1}\bigr)
+ O\bigl(H^{-2} N^{-3/2}\bigr)
+ O\bigl(H^3 N^{-13/2}\bigr) \,.
\]
The result follows by combining these estimates and discarding the
error terms that are always dominated by the others.
\end{proof}
\begin{Corollary} \label{CorH}
\[ {\mathbb E}_{\le H}(N) = \frac{\gamma_H}{\sqrt{N}}
+ O\bigl(N^{-1}\bigr)
+ O\bigl((\log H) N^{-3/2}\bigr)
+ O\bigl(H^5 N^{-13/2}\bigr) \,.
\]
In particular, we have
\[ \sqrt{N}\,{\mathbb E}_{\le H(N)}(N) \longrightarrow \gamma_{H(N)} \]
as $N \to \infty$ if $H(N) \ll N^{6/5-\varepsilon}$, and
\[ {\mathbb E}_{\le H(N)}(N) \longrightarrow 0 \]
as $N \to \infty$ if $H(N) \ll N^{13/10-\varepsilon}$.
\end{Corollary}
\begin{proof}
Sum the estimates in the previous corollary.
\end{proof}
It should be possible to extend the range beyond $H \ll N^{6/5 - \varepsilon}$ if one
uses more sophisticated methods from analytic number theory. (In fact,
Stephan Baier~\cite{Baier} has obtained an exponent of $7/5-\varepsilon$.)
It would be interesting to see how far one can get.
\section{Counting Bad Curves and Points} \label{Sbad}
In this section, we will bound the number $\#{\mathbb C}D_N$ of non-smooth curves
and the total number of points of height $\le H$ on them. Recall the
following.
\begin{Lemma} \label{Lemma:GenBound}
Let $\Lambda \subset {\mathbb R}^n$ be a lattice of covolume~$\Delta$ and
covering radius~$\rho$. Let $S \subset {\mathbb R}^n$ be a subset. Then
\[ \#(S \cap \Lambda) \le \frac{\operatorname{vol} (S + B_\rho(0))}{\Delta} \,. \]
\end{Lemma}
\begin{proof}
Let $V$ be the Voronoi cell of~$\Lambda$ (centered at zero); then
$V \subset B_{\rho}(0)$ by the definition of the covering radius, and
$\operatorname{vol} V = \Delta$. It follows that
\[ \bigcup_{x \in S \cap \Lambda} (V + x) \subset S + B_{\rho}(0)\,,
\quad\text{and thus}\quad
\Delta \cdot \#(S \cap \Lambda) \le \operatorname{vol} (S + B_{\rho}(0)) \,.
\]
\end{proof}
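As a toy illustration of the lemma (in dimension $2$, with numbers chosen only for this example): for $\Lambda = {\mathbb Z}^2$ one has $\Delta = 1$ and covering radius $\rho = \sqrt{2}/2$, and the number of lattice points in a disk of radius $R$ indeed stays below $\pi (R + \rho)^2$.
\begin{verbatim}
# Toy check of  #(S cap Lambda) <= vol(S + B_rho(0)) / Delta
# for Lambda = Z^2 (Delta = 1, rho = sqrt(2)/2) and S a disk of radius R.
import math

rho = math.sqrt(2) / 2
cx, cy = 0.3, 0.7                       # arbitrary center of the disk
for R in (1.5, 3.0, 10.0):
    count = sum(1 for x in range(-15, 16) for y in range(-15, 16)
                if (x - cx)**2 + (y - cy)**2 <= R*R)
    bound = math.pi * (R + rho)**2      # vol(S + B_rho) for a disk S
    print(R, count, round(bound, 2))
\end{verbatim}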
To make life a bit simpler, we observe that
$[-N,N]^7 \subset B_{\sqrt{7}N}(0)$; we will bound the number of bad curves
in the ball. This has the advantage that the intersection with any affine
subspace will be a ball again.
Note that a form $F(x,z)$ is not square-free if and only if it is divisible by
the square of a primitive form~$G$. Let $n$ be the degree of~$G$;
assume it has coefficients $\boldsymbol{\alpha} = (\alpha_n, \dots, \alpha_0)$. Then
the forms divisible by~$G^2$ correspond to lattice points in the span
of
\[ x^{6-2n} G^2\,, x^{5-2n} z G^2\,, \dots\,, z^{6-2n} G^2\,, \]
intersected with the ball $B_{\sqrt{7}N}(0)$.
We can extend this to $G$ with real coefficients; then the lattices we
obtain (modulo scaling) are parametrized by the compact set ${\mathbb P}^n({\mathbb R})$,
hence they all live in a compact subset of the moduli space of lattices.
Taking into account that the basis vectors have length
of order $H(\boldsymbol{\alpha})^2$, this gives the
following relations for the covolume, covering radius and minimal vector length
of these lattices:
\[ \Delta \asymp H(\boldsymbol{\alpha})^{14-4n}\,, \qquad \rho \asymp H(\boldsymbol{\alpha})^2\,, \qquad
\mu \asymp H(\boldsymbol{\alpha})^2 \,.
\]
In particular, there will be no non-zero lattice point in the ball
of radius $\sqrt{7} N$ when $H(\boldsymbol{\alpha})^2 > \text{const}\,N$.
By Lemma~\ref{Lemma:GenBound}, we then obtain a bound
\begin{align*}
\#{\mathbb C}D_N
&\le \sum_{n=1}^3
\sum_{\boldsymbol{\alpha} \in {\mathbb P}^n({\mathbb Q}), H(\boldsymbol{\alpha}) \ll \sqrt{N}}
O\Bigl(\frac{(N + H(\boldsymbol{\alpha})^2)^{7-2n}}{H(\boldsymbol{\alpha})^{14-4n}}\Bigr) \\
&= \sum_{n=1}^3
\sum_{\boldsymbol{\alpha} \in {\mathbb P}^n({\mathbb Q}), H(\boldsymbol{\alpha}) \ll \sqrt{N}}
O\Bigl(\frac{N^{7-2n}}{H(\boldsymbol{\alpha})^{14-4n}}\Bigr) \\
&= \sum_{n=1}^3 N^{7-2n} \sum_{H \ll \sqrt{N}} O(H^{3n-14})
= O(N^5) \,.
\end{align*}
We conclude that in fact $\#{\mathbb C}D_N \asymp N^5$, since we already get
$N^5$ from $G = x$.
Now in order to count points on these bad curves, we use the same basic
idea as before. This time, we have to count lattice points in the ball
that are in a translate of the subspace of forms that are divisible
by $G(x,z)^2 (bx-az)$. We assume for now that $G(a,b) \neq 0$.
If $F(a,b) = G(a,b)^2 y^2$, then the translation
is by a vector of length $G(a,b)^2 y^2/\Delta_{(a:b)}$.
So for the count of points with $x$-coordinate $(a:b)$, we get a bound of
\[ \sum_{|y| \ll \frac{\sqrt{N \Delta_{(a:b)}}}{|G(a,b)|}}
O\Bigl(\frac{(N + H(\boldsymbol{\alpha})^2)^{6-2n}}{H(\boldsymbol{\alpha})^{12-4n} H(a:b)^{6-2n}}\Bigr)
= O\Bigl(\frac{N^{\frac{13}{2}-2n}}
{|G(a,b)| H(\boldsymbol{\alpha})^{12-4n} H(a:b)^{3-2n}}\Bigr) \,.
\]
Estimating $|G(a,b)| \ge 1$ trivially, we obtain for the total number
of such points the bound
\begin{align*}
\sum_{n=1}^3 &\sum_{H(\boldsymbol{\alpha}) \ll \sqrt{N}}
O\Bigl(\frac{N^{\frac{13}{2}-2n}}{H(\boldsymbol{\alpha})^{12-4n} H(a:b)^{3-2n}}\Bigr) \\
&= O\Bigl(\frac{N^{9/2}}{H(a:b)}\Bigr) + O\bigl(N^{5/2} H(a:b)\bigr)
+ O\bigl(N^{1/2} H(a:b)^3\bigr) \,.
\end{align*}
The middle term is redundant, since it is always dominated by one of the
others.
If $G(a,b) = 0$, then (since we can assume $G$ to be irreducible) $n = 1$,
and we have to count all forms divisible by~$G^2$. This adds a term
of order $N^5/H(a:b)^{10}$.
\begin{Remark}
With a similar computation as above, one can show that the number of
curves in~${\mathbb C}C_N$ with reducible polynomial~$F$ is~$O(N^6)$. Therefore
the contribution of such curves is negligible.
\end{Remark}
\section{Speculations on the Size of Points}
Recall that
\[ \gamma_H \approx \gamma - \frac{c}{H} \]
where $c \approx 2.28253672259903912$.
If we assume Conj.~\ref{Conj}, then
the calculations above suggest that the number of curves in~${\mathbb C}C_N$
that have a rational point of $x$-height $> H$ is roughly
$c N^{13/2}/H$, at least as long as $H$ is not too large compared
to~$N$, see Cor.~\ref{CorH}. If we recklessly extend
this to large~$H$, this would predict that the largest rational point
on a curve from~${\mathbb C}C_N$ should have height $\ll N^{13/2+\varepsilon}$.
One has to be careful, however, as Noam Elkies pointed out to me,
mentioning the case of integral points on elliptic curves as an analogy.
Considering curves in short Weierstrass form $y^2 = x^3 + Ax + B$ with
$A, B \in {\mathbb Z}$, heuristic considerations like those presented here predict
that integral points should be of size $\ll \max\{|A|^{1/2}, |B|^{1/3}\}^{10 + \varepsilon}$,
but there are families that reach an exponent of~$12$. See the information
given at~\cite{ElkiesWeb}. This leads to Conjecture~\ref{ConjBound}.
Regarding possible families with larger points, we consider the case
that the coefficients $f_j$ are linear forms in the coordinates $(t:u)$
of~${\mathbb P}^1$, the coordinates $x$ and~$z$ of the point we are looking for
are homogeneous polynomials of degree~$m$ (to be determined), and
$y^2 = q(t,u)^2 r(t,u)$ with $q$ of degree~$3m-1$ and $r$ of degree~$3$.
If we find a solution of
\[ q^2 r = \sum_{j=0}^6 f_j x^j z^{6-j} \]
in such polynomials, then we should obtain an infinite family of curves
with points satisfying $H(P) \gg N^m$. (Of course, we have to exclude
degenerate solutions.) To see this, multiply by $r(1,0)$ (which we can
assume to be nonzero after a suitable change of coordinates on~${\mathbb P}^1$).
The equation
\[ r(1,0) r(t,u) = w^2 \]
then has the solution $(t,u,w) = (1,0,r(1,0))$, which must be contained
in a family of solutions that is parametrized by a genus~$0$ curve
(compare \cite{DarmonGranville,Beukers}). If we plug in this parametrization,
we obtain a one-dimensional family of suitable curves with base~${\mathbb P}^1$.
There are
\[ 3m + 4 + 7 \cdot 2 + 2 \cdot (m+1) = 5m + 20 \]
unknown coefficients involved in the equation above.
On the other hand, there is an action of
$\operatorname{GL}_2 \times \operatorname{GL}_2 \times {\mathbb G}_{\text{\rm m}}$ (given by the automorphisms of ${\mathbb P}^1_{(t:u)}$,
the automorphisms of ${\mathbb P}^1_{(x:z)}$ (acting on $x$, $z$, and the~$f_j$ and
leaving the value of the right hand side unchanged), and scaling of $q$
versus~$r$), which takes away 9~degrees of freedom. The relation above
leads to $6m+2$ equations, so the remaining number of degrees of freedom
should be
\[ (5m + 20) - 9 - (6m + 2) = 9 - m \,. \]
This suggests that there should be families of curves with points
such that $H(P) \gg N^9$. (We do not get better results when we take
coefficients~$f_j$ of higher degree, taking $\deg r = 4$ in case this
degree is even.) Of course, the corresponding variety may
fail to have rational points, so that we do not see these families
over~${\mathbb Q}$. Or some other accidents can occur, leading to extraneous
solutions with larger~$m$.
In the following, we will ignore such special families and try to
make our `generic' conjecture more precise by using a probabilistic model.
In this model, we interpret the quantity
\[ \frac{N^6}{\Delta_{(a:b)}} \int_0^\infty f_{(a:b)}\Bigl(\frac{y^2}{N}\Bigr) \,dy
= 2^6 \gamma(a : b) N^{13/2}
\]
that gives rise to the main term in the count of points $(a:\pm y:b)$
as the expected number of such point pairs occurring on curves in~${\mathbb C}C_N$. The number of
pairs of points of height $> H$ should then follow a Poisson distribution
with mean
\[ \mu_H = 2^6 (\gamma - \gamma_H) N^{13/2}
\approx \frac{2^6 c N^{13/2}}{H} \,,
\]
at least when $H$ is large compared to~$N$. Taking $H = \lambda N^{13/2}$,
the probability that no such point exists is then $e^{-2^6 c/\lambda}$.
Taking into account the fact that points occur in packets of eight
\footnote{We consider all points on all curves of fixed size together.}
(change the sign of $x$ or~$y$, send $x$ to $1/x$), i.e., four point
pairs, we should correct this to $e^{-16 c/\lambda}$. For a fifty-fifty
chance of no larger points, we should take $\lambda \approx 53$; for
an 80\% chance, we take $\lambda \approx 164$. This line of argument
would lead us to expect the following.
{\em
If $\lambda(N) \to \infty$ as $N \to \infty$, then there are only
finitely many `generic' curves $C \in {\mathbb C}C_N$ of genus~2 such that $C$
has a rational point~$P$ with $H(P) > \lambda(N) N^{13/2}$.
}
The problem with this is that it is not so clear how to make the
restriction to `generic' curves precise. There might be infinitely many
families of curves with points of height $N^k$ for a sequence of exponents~$k$
tending to~$13/2$ from above, which could lead to problems when $\lambda(N)$
tends to infinity very slowly. Therefore, we stay on the safe side
with the given formulation of Conjecture~\ref{ConjBound}.
On the other hand, we can use similar heuristic arguments for any
given family of curves. It is reasonable to expect that the bounds we
obtain will not get arbitrarily large (in terms of the exponent of~$N$
in the height bound). This leads to Conjecture~\ref{ConjBoundGen}.
We have checked experimentally how well the expected number of points
of height in the interval $\left[2^n, 2^{n+1}\right[$ matches the
actual number of points on curves of small size. For values of~$n$ that
are not very small, this is $2^{-(n+1)} c\,\#{\mathbb C}C_N/\sqrt{N}$.
Figure~\ref{FigHeights}
shows this comparison, for curves in ${\mathbb C}C_N$, for $1 \le N \le 10$
and $0 \le n \le 13$.
The fit is quite good, even though the range of~$N$ is certainly far
too small for the asymptotics to kick in except for very small heights.
There is an unexpected feature: starting with $N = 4$, points of larger
height seem to occur more frequently than they should.
\footnote{Since points accumulate on singular curves, which we did not
consider here, one would perhaps rather expect a deviation in the other
direction!}
It would be
interesting to find an explanation for this phenomenon. One possibility
is that it might be related to the existence of families of curves
with systematically occurring large points. Of course,
according to our results, this can only occur for fairly large heights
when $N$ is large. See Section~\ref{S:Data} for a description of the
computations.
\begin{figure}
\caption{Expected and actual number of rational points in various height
brackets, for $1 \le N \le 10$.}
\label{FigHeights}
\end{figure}
It is also interesting to compare the observed value of~$\lambda(N)$ such
that no rational point of height $> \lambda(N) N^{13/2}$ exists on a curve
in~${\mathbb C}C_N$ with the estimates given above. For $N = 1, 2, 3$, the largest
points we found on curves in~${\mathbb C}C_N$ have heights as follows.
\begin{center}
\begin{tabular}{|l||c|c|c|}
\hline
size of curves{\large\strut} & $N = 1$ & $N = 2$ & $N = 3$ \\\hline
max.~$H(P)${\large\strut} & 145 & 10711 & 209040 \\\hline
\end{tabular}
\end{center}
We therefore find
\[ \lambda(1) \approx 145.00\,, \quad
\lambda(2) \approx 118.34\,, \quad
\lambda(3) \approx 165.55\,,
\]
corresponding to probabilities (for no larger point to exist, in the
sense explained above) between 73\% and~81\%.
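These figures follow directly from the table and the Poisson heuristic above; for the record, the computation is as follows.
\begin{verbatim}
# lambda(N) = H_max / N^(13/2) and the probability exp(-16 c / lambda)
# of no larger point, with c as above.
import math

c = 2.28253672259903912
for N, Hmax in [(1, 145), (2, 10711), (3, 209040)]:
    lam = Hmax / N**6.5
    print(N, round(lam, 2), round(math.exp(-16 * c / lam), 3))
# output: 1 145.0 0.777 / 2 118.34 0.734 / 3 165.55 0.802
\end{verbatim}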
The record point on \quad $y^2 = x^6 - 3 x^4 - x^3 + 3 x^2 + 3$ \quad has
$x = -\frac{58189}{209040}$.
Similar considerations for general hyperelliptic curves of genus $g \ge 2$
lead to a heuristic estimate of
$O(N^{(4g+5)/2}/H^{g-1})$ for the number of curves with a point of
height $> H$. Therefore we would expect the points to be generically of height
\[ H \ll N^{(4g+5)/(2g-2)+\varepsilon} = N^{2 + \frac{9}{2(g-1)} + \varepsilon} \,. \]
\section{Speculations on the Number of Points}
We can also try to extract some information on the number of points
(or point pairs) on hyperelliptic curves. Since the linear conditions
on the coefficients coming from up to seven distinct $x$-coordinates
are linearly independent, we would expect the following.
Let $R^{(m)}_N$ be the subset of~${\mathbb C}C_N$ of curves that have at least
$m$ pairs of rational points (i.e., points with $m$ distinct
$x$-coordinates).
For $0 \le m \le 7$, there are constants $\gamma^{(m)} > 0$ such that
\[ \#R^{(m)}_N \sim \gamma^{(m)} \, N^{7-m/2} \,. \]
One caveat here is that the number of non-squarefree polynomials
will be in the range of these sizes if $m \ge 4$, so the conclusion is not automatic.
Indeed, the experimental data show a noticeable deviation from this
expectation already for $m \ge 3$.
Let us be more precise and try to obtain numerical values for the
$\gamma^{(m)}$. Assume that the occurrence of rational points with distinct
$x$-coordinates is independent across $x \in {\mathbb P}^1({\mathbb Q})$, and that a
point~$P \in C({\mathbb Q})$ with $x(P) = (a:b)$ occurs with probability exactly
$\gamma(a:b)/2\sqrt{N}$. Then the generating function for the probability of
having rational points with exactly $m$ distinct $x$-coordinates
should be
\[ G(T) = \sum_{m=0}^\infty \operatorname{Prob}_N\bigl(\#x(C({\mathbb Q})) = m\bigr) T^m
= \prod_{(a:b) \in {\mathbb P}^1({\mathbb Q})}
\Bigl(1 + \frac{\gamma(a:b)}{2\sqrt{N}} (T-1)\Bigr) \,.
\]
The numbers $\gamma^{(m)}$ should then occur as the limits as
$N \to \infty$ of the coefficients in the series
\[ \sum_{m=0}^\infty \gamma^{(m)}(N)\,T^m
= \frac{1 - \sqrt{N}T\,G(\sqrt{N}T)}{1 - \sqrt{N}T}
= \frac{T\,G(\sqrt{N}T) - \frac{1}{\sqrt{N}}}{T - \frac{1}{\sqrt{N}}} \,,
\]
where $\gamma^{(m)}(N) N^{-m/2}$ is an estimate for the fraction of
curves with at least $m$ point pairs.
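The manipulation here only uses the elementary identity $(1 - x\,G(x))/(1-x) = \sum_{m \ge 0} \operatorname{Prob}\bigl(\#x(C({\mathbb Q})) \ge m\bigr)\,x^m$ with $x = \sqrt{N}\,T$. A small {\tt sympy} check with made-up probabilities (not data from the text):
\begin{verbatim}
# Coefficients of (1 - x*G(x))/(1 - x) are the tail probabilities,
# where G(x) = sum_m p_m x^m.  The p_m below are invented for the test.
import sympy as sp

x = sp.symbols('x')
p = [sp.Rational(2, 5), sp.Rational(3, 10),
     sp.Rational(1, 5), sp.Rational(1, 10)]
G = sum(pm * x**m for m, pm in enumerate(p))

series = sp.series((1 - x*G) / (1 - x), x, 0, 6).removeO()
coeffs = [series.coeff(x, m) for m in range(5)]
tails = [sum(p[m:]) for m in range(5)]
print(coeffs)   # [1, 3/5, 3/10, 1/10, 0]
print(tails)    # [1, 3/5, 3/10, 1/10, 0]
\end{verbatim}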
Now, as $N \to \infty$ and coefficient-wise, this series behaves as
\[ G(\sqrt{N} T) = \prod_{(a:b)}
\Bigl(1 - \frac{\gamma(a:b)}{2\sqrt{N}}
+ \frac{\gamma(a:b)}{2}\,T\Bigr)
\longrightarrow \prod_{(a:b)} \Bigl(1 + \frac{\gamma(a:b)}{2}\,T\Bigr) \,.
\]
So $\gamma^{(m)}$ is the degree-$m$ ``infinite elementary symmetric
polynomial'' in the numbers $\gamma(a:b)/2$. Using $(a:b)$ of height
up to~$1000$, we find
\begin{gather*}
\gamma^{(1)} = \frac{\gamma}{2} \approx 2.399\,, \quad
\gamma^{(2)} \approx 2.499\,, \quad
\gamma^{(3)} \approx 1.504\,, \quad
\gamma^{(4)} \approx 0.591\,, \text{\quad etc.}
\end{gather*}
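For reference, these values can be recomputed directly: by Lemma~\ref{Lemma:Int}, for $ab \neq 0$ the number $\gamma(a:b)$ equals the alternating sum appearing there divided by $135135\,|ab|^{21}$, and the two remaining slopes contribute $\gamma(1:0) = \gamma(0:1) = 1$ (the value a direct evaluation of the integral gives for these two slopes). The Python sketch below uses a much smaller height cutoff than the $1000$ quoted above, so it only gives rough approximations to the $\gamma^{(m)}$.
\begin{verbatim}
# Rough recomputation of gamma^(m) as elementary symmetric functions of
# the numbers gamma(a:b)/2.  High-precision arithmetic (mpmath) is used
# because the alternating sum has severe cancellation when |a| != |b|.
from itertools import product
from math import gcd
from mpmath import mp, mpf

mp.dps = 60

def gamma(a, b):
    if a == 0 or b == 0:                      # gamma(1:0) = gamma(0:1) = 1
        return mpf(1)
    v = [mpf(abs(a))**j * mpf(abs(b))**(6 - j) for j in range(7)]
    total = mpf(0)
    for eps in product((1, -1), repeat=7):
        sgn, lin = 1, mpf(0)
        for e, vj in zip(eps, v):
            sgn *= e
            lin += e * vj
        if lin > 0:
            total += sgn * lin**mpf('6.5')
    return total / (135135 * mpf(abs(a) * abs(b))**21)

CUTOFF = 30                                   # the text uses height up to 1000
slopes = [(1, 0)] + [(a, b) for a in range(-CUTOFF, CUTOFF + 1)
                     for b in range(1, CUTOFF + 1) if gcd(abs(a), b) == 1]
vals = [gamma(a, b) / 2 for (a, b) in slopes]

e = [mpf(1)] + [mpf(0)] * 4                   # e_0, ..., e_4
for v in vals:
    for k in range(4, 0, -1):
        e[k] += e[k - 1] * v
print([float(ek) for ek in e[1:]])            # ~ gamma^(1), ..., gamma^(4)
\end{verbatim}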
In Figure~\ref{FigNum}, we compare the expected values $\gamma^{(m)}(N)/N^{m/2}$
with the observed numbers. For $m \le 2$, there is good agreement, but for
$m \ge 4$, there seem to be many more curves with at least $m$ pairs of points
than predicted. Indeed, the data suggest a behavior of the form
$\alpha^{m}$ for the fraction of curves in this range, with
$\alpha \approx 0.5$ largely independent of~$N$ (or even increasing: note
the changes in slope when $N = 4$ or $N = 9$).
This seems to indicate that as soon as there are many points, it is
much more likely that there are additional points than on average ---
the points ``conspire'' to generate more points. Maybe this is related
to another observation, which is that in examples of curves with
many rational points, the points tend to have many dependence relations
in the Mordell-Weil group. One possible explanation might be that when
there are already several points, they tend to be fairly small, so that
there are many small linear combinations of them in the Mordell-Weil group.
Such a small point in the Mordell-Weil group is represented by a pair of
points on~$C$ such that the quadratic polynomial whose roots are the
$x$-coordinates of the two points has small height. A polynomial of small
height has a good chance to split into linear factors. In this case, both
points involved are rational points on~$C$. It would be very interesting
to turn this into a precise estimate for the number~$\alpha$ that we observe.
\begin{figure}
\caption{Expected and actual number of curves with at least $m$ pairs
of rational points}
\label{FigNum}
\end{figure}
In Figure~\ref{FigNum1}, we show the proportion of curves in~${\mathbb C}C_N$ with
at least $m$ point pairs. It is striking how the graphs are all contained
in a narrow strip near the line (in the logarithmic scaling used in the
picture) corresponding to $m \mapsto 2^{-m}$.
\begin{figure}
\caption{Proportion of curves with at least $m$ pairs of rational points}
\label{FigNum1}
\end{figure}
If these observations extend to larger~$N$, then we should expect about
$2^{-m} (2N+1)^7$ curves in~${\mathbb C}C_N$ with $m$ or more point pairs. The
largest number of point pairs on a curve in~${\mathbb C}C_N$ should then be
\[ \frac{7}{\log 2}\,\log(2N + 1) + O(1) \,. \]
Conjecture~\ref{ConjNumber} gives a slightly weaker statement, replacing
the factor $7/\log 2$ by an arbitrary constant.
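For orientation: at the upper end of the search range considered below ($N = 200$), this prediction amounts to roughly $7\log_2(401) \approx 60$ point pairs, up to the additive constant.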
In order to test our conjecture, we conducted a search for curves with
many points in~${\mathbb C}C_{200}$. The table in Figure~\ref{FigTable} lists the record curves
we found (curves with more point pairs than all smaller curves). On each
curve, we found all points of height up to~$2^{17} - 1 = 131\,071$
(and in some cases a few more).
The column labeled~``$F$'' lists the coefficients of one example curve.
\begin{figure}
\caption{Examples of curves with many points.}
\label{FigTable}
\end{figure}
The constant in front of $\log(2N+1)$ that seems to fit our data best
corresponds to a value of~$\alpha$ of about~$0.68$ in that range (this is
the slope of the lines in the figure and indicates that the
observed increase of~$\alpha$ with~$N$ persists). In Figure~\ref{FigNum2},
we have plotted $\#C({\mathbb Q})$ against $\log(2N+1)$ for the curves in the table
(and some more coming from an ongoing extended search).
In addition, we show a selection of good
curves from Elkies' families, see below, and some other previously known examples.
(``$\log$'' in the figure is the logarithm with base~$10$.) The sources
of these examples are \cite{Kulesz,KellerKulesz,Stahlke}; the curve marked
``Stahlke'' on the left was communicated to me by Colin Stahlke; it appears
in~\cite{Stoll02}, where the Mordell-Weil group of its Jacobian is determined.
One of these examples is the curve with the largest number
of point pairs found until very recently (see Keller and Kulesz~\cite{KellerKulesz}).
It has $N = 22\,999\,624\,761$ and $m = 294$. This curve has 12 automorphisms
defined over~${\mathbb Q}$, and the 588~points are 49~orbits of 12~points each.
Until~2008, the record for curves with only the hyperelliptic involution
as a nontrivial automorphism was held by a curve found by Stahlke~\cite{Stahlke}
with 366 known rational points. (In fact, there are at least 8 more points,
see Section~\ref{S:Data}.)
Recently, Noam Elkies~\cite{Elkies} has constructed several K3~surfaces of the
form $y^2 = S(t,u,v)$ with a ternary sextic~$S$ such that $S$
admits a large number ($> 50$) of rational lines on which $S$ restricts
to a perfect square. Each of these therefore provides a 2-dimensional
family of genus~2 curves with more than 50 pairs of rational points.
In one of these families, he found a curve with 536~rational points.
(It is marked ``Elkies 2008'' in Figure~\ref{FigNum2}.)
In the course of a further systematic search in these families, we found
several curves with still more points, some of which even beat the Keller and Kulesz
record. The curve with the largest number of points discovered so far is
\begin{align*}
y^2 &= 82342800 x^6 - 470135160 x^5 + 52485681 x^4 + 2396040466 x^3 \\
& \qquad {} + 567207969 x^2 - 985905640 x + 247747600\,;
\end{align*}
it has (at least) {\bf 642} points. The $x$-coordinates of the points
with $H(P) > 10^5$ are as follows (the smaller points can easily be found
using {\tt ratpoints}, for example).
\begin{gather*}
\frac{15121}{102391}, \frac{130190}{93793}, -\frac{141665}{55186},
\frac{39628}{153245}, \frac{30145}{169333}, -\frac{140047}{169734},
\frac{61203}{171017}, \frac{148451}{182305}, \frac{86648}{195399}, \\[1mm]
-\frac{199301}{54169}, \frac{11795}{225434}, -\frac{84639}{266663},
\frac{283567}{143436}, -\frac{291415}{171792}, -\frac{314333}{195860},
\frac{289902}{322289}, \frac{405523}{327188}, \\[1mm]
-\frac{342731}{523857}, \frac{24960}{630287}, -\frac{665281}{83977},
-\frac{688283}{82436}, \frac{199504}{771597}, \frac{233305}{795263},
-\frac{799843}{183558}, -\frac{867313}{1008993}, \\[1mm]
\frac{1142044}{157607}, \frac{1399240}{322953},
-\frac{1418023}{463891}, \frac{1584712}{90191}, \frac{726821}{2137953},
\frac{2224780}{807321}, -\frac{2849969}{629081}, -\frac{3198658}{3291555}, \\[1mm]
\frac{675911}{3302518}, -\frac{5666740}{2779443}, \frac{1526015}{5872096},
\frac{13402625}{4101272}, \frac{12027943}{13799424}, -\frac{71658936}{86391295},
\frac{148596731}{35675865}, \\[1mm]
\frac{58018579}{158830656}, \frac{208346440}{37486601},
-\frac{1455780835}{761431834}, -\frac{3898675687}{2462651894}.
\end{gather*}
\begin{figure}
\caption{Curves with many points}
\label{FigNum2}
\end{figure}
The record so far for $\#C({\mathbb Q})/\log_{10}(2N+1)$ is held by the curve
\[ y^2 = 37665 x^6 - 220086 x^5 + 212355 x^4 + 268462 x^3 - 209622 x^2
- 69166 x + 49036
\]
with $\#C({\mathbb Q}) \ge 452$; the quotient is (at least)~$78.88$.
\section{Computations} \label{S:Data}
Our data come from several sources.
\subsection{Computations with (very) small curves}
This began as a project whose aim was to decide, for every genus~2
curve $C \in {\mathbb C}C_3$, whether it possesses rational points. This experiment
is described in~\cite{BruinStollExp}, with more detailed explanation
of the various methods used in~\cite{BruinStoll2D,BruinStollMWS,BruinStollFG}.
These computations were later extended by the author. For those curves that
do have rational points, we proceeded to find all rational points, or at
least all rational points up to a height bound that is so large that we
can safely assume that no larger points exist.
More precisely, the following was done. We determined a generating set
for the Mordell-Weil group of the Jacobian of every curve (in a small
number of cases, the rank is not yet proved to be correct: there is a
difference of~$2$ between the rank of the known subgroup and the 2-Selmer
rank, which very likely comes from nontrivial elements of order~2 in
the Shafarevich-Tate group). When the Mordell-Weil rank~$r$ is zero,
the set $C({\mathbb Q})$ of rational points on~$C$ can be trivially determined.
When $r = 1$, a combination of Chabauty's method and the Mordell-Weil
sieve can be used to determine~$C({\mathbb Q})$; this is described in~\cite{BruinStollMWS}.
For $r = 2$, we can still use the Mordell-Weil sieve in order to find
all points up to a height of $H = 10^{1000}$ in reasonable time. For
$r > 2$, the sieving computation would take too long; in these cases,
we have used a lattice point enumeration procedure on the Mordell-Weil
group to find all points up to $H = 10^{100}$. The following table
summarizes what was done and gives the number of curves (up to isomorphism)
for each value of the rank~$r$. We denote the set of rational points
on~$C$ up to height~$H$ by $C({\mathbb Q})_H$.
\begin{center}
\begin{tabular}{|c|r|l|} \hline
$r = 0$ & 14\,010 curves & \strut $C({\mathbb Q})$ is determined. \\
$r = 1$ & 46\,575 curves & \strut $C({\mathbb Q})$ is determined. \\
$r = 2$ & 52\,227 curves & \strut $C({\mathbb Q})_H$ is determined for $H = 10^{1000}$. \\
$r = 3$ & 22\,343 curves & \strut $C({\mathbb Q})_H$ is determined for $H = 10^{100}$. \\
$r = 4$ & 2\,318 curves & \strut $C({\mathbb Q})_H$ is determined for $H = 10^{100}$. \\
$r = 5$ & 17 curves & \strut $C({\mathbb Q})_H$ is determined for $H = 10^{100}$. \\\hline
\end{tabular}
\end{center}
Under the reasonable assumption that there are no points on these curves
of height $> 10^{100}$ (note that the largest point that we found has
height about $2 \cdot 10^5$), plus assuming that all the ranks are correct,
this means that we have complete information on all rational points on
curves in~${\mathbb C}C_3$. We plan to extend our computations to~${\mathbb C}C_4$ eventually.
\subsection{All points with $H < 2^{14}$ on curves with $N \le 10$}
Since $N = 3$ is rather small, we also tried to get some information on
somewhat larger curves. The author has written a program {\tt ratpoints}
(see~\cite{Ratpoints} for a description) that uses a quadratic sieve and
fast bit-wise operations to search for rational points on hyperelliptic
curves. On current hardware, it takes about 10~ms on average to find all
points up to height $H = 2^{14}-1 = 16383$ on a genus~2 curve.
We used up to 20~machines from the CLAMV teaching lab at Jacobs
University Bremen for about one week in January~2008 to let {\tt ratpoints}
find all these points on all curves in~${\mathbb C}C_{10}$. If $f \in {\mathbb Z}[x]$ is
the polynomial defining the curve, then it is only necessary to look at
one representative of the set
\[ \{f(x), f(-x), x^6 f(1/x), x^6 f(-1/x)\} \,, \]
since the corresponding curves are isomorphic and the isomorphism preserves
the height of the rational points. The total number of curves to be
considered was therefore roughly $21^7/4 \approx 450 \cdot 10^6$, for a
total of more than~$100$ CPU days (the average time per curve on these
machines was about 20~ms).
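This reduction to one representative per orbit takes only a few lines of code; the sketch below (the function name is ours, not from the text) simply picks the lexicographically smallest of the four coefficient tuples.
\begin{verbatim}
# One representative of {f(x), f(-x), x^6 f(1/x), x^6 f(-1/x)} for a sextic
# f given by its coefficient tuple (f_0, ..., f_6); all four operations
# preserve the heights of the rational points on y^2 = f(x).
def canonical(coeffs):
    f = tuple(coeffs)                                  # f(x)
    fm = tuple(c if j % 2 == 0 else -c                 # f(-x)
               for j, c in enumerate(f))
    fr = f[::-1]                                       # x^6 f(1/x)
    frm = fm[::-1]                                     # x^6 f(-1/x)
    return min(f, fm, fr, frm)

# Example: these two coefficient tuples lie in the same orbit.
print(canonical((1, 2, 0, -3, 0, 0, 5)))
print(canonical((5, 0, 0, -3, 0, 2, 1)))               # same output
\end{verbatim}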
This gives us precise information on the frequency of points of
height $< 2^{14}$ on curves with $N \le 10$. It also gives us close
to complete information on curves with many points in this range, since
curves with many points seem to have reasonably small points. We might
have missed a few curves with (comparatively) many points that have one
or more additional point pairs of larger height.
We plan to extend these computations to $N \le 20$ (and possibly beyond),
with the same height bound, once we have suitable hardware at our disposal.
\subsection{Small curves with many points}
To get some more data on curves with many points, we conducted a systematic
search for curves in~${\mathbb C}C_{50}$ with many points, making use of the
observation that all curves in~${\mathbb C}C_{10}$ that have comparatively many points
tend to have points with $x$-coordinates $0$, $\infty$, $1$ and~$-1$.
Putting in the conditions that $F(0,1)$, $F(1,0)$, $F(1,1)$ and~$F(-1,1)$
have to be squares reduces the search space to a sufficient extent so
that a search up to $N = 50$ is possible. The point search was first done
with the bound $H = 2^{10} - 1 = 1023$; for those curves that had more than
a certain number of points in this range, points were then counted up to
height $H = 2^{17} - 1 = 131\,071$.
Based on the observation that all but one of the best curves that this
computation revealed also have rational points at $x = \pm 2$ (maybe after
a height-preserving isomorphism), we did a further systematic search for
curves in~${\mathbb C}C_{200}$ having rational points at all $x \in \{\infty,0,1,-1,2,-2\}$.
Here the point search was done in three steps, using height bounds of
$2^{12} - 1 = 2047$, $2^{14} - 1 = 16383$, and finally $2^{17} - 1 = 131071$.
Two threshold values for the number of points were used in order to decide
whether to search for more points on a given curve.
We plan to extend these computations, too.
\subsection{Curves with many points in Elkies' families}
Noam Elkies was so kind as to provide us with explicit formulas for five
ternary sextics $S(t,u,v)$ that admit many rational lines~$\ell$ on which $S$
restricts to a perfect square. Setting the restriction of~$S$ to a generic
line equal to a square gives a curve of genus~2 that has a pair of rational
points over each intersection point with a line~$\ell$ as above. In this way,
we obtain a 2-dimensional family of genus~2 curves with more than 50~pairs
of rational points. We have conducted a systematic search among all lines
$at + bu + cv = 0$ with $a,b,c \in {\mathbb Z}$ and $\max\{|a|,|b|,|c|\} \le 500$
in order to find curves with many points in these families.
There are two features of our computation that merit special mention.
The first is that we used as a preliminary selection step a product
$\prod_{p < X} \#C({\mathbb F}_p)/p$ with $X = 200$, which was required to be
above a certain threshold value. The rationale behind this is that we
expect a curve with many rational points also to have more ${\mathbb F}_p$-points
than a random curve. Similar ideas have been used before.
Note that each factor only depends on the reduction of the line $at+bu+cv=0$
mod~$p$, so that we can precompute the relevant values and reduce the
computation of the factors in the product to a table lookup.
The second is a systematic way of finding new rational points from known
ones. If there are five rational points $(x_i/z_i,y_i/z_i^3)$ on a genus~2 curve~$C$
that lie on a cubic $y = \alpha x^3 + \beta x^2 + \gamma x + \delta$, then
the sixth intersection point of this cubic with~$C$ is again a rational
point. The condition is equivalent to the vanishing of the determinant
of the matrix with rows $(z_i^3, x_i z_i^2, x_i^2 z_i, x_i^3, y_i)$.
For reasons of efficiency, we have written a C~program that first computes
these determinants mod~$2^{64}$ using native machine arithmetic; whenever
a determinant appears to be zero, this is checked using exact arithmetic,
and if the sixth intersection point is not yet known, it is recorded.
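An exact version of this test is easy to write down; the following Python sketch (using rational arithmetic instead of the machine arithmetic described above; the function names are ours) checks whether five affine points $(x_i, y_i)$ lie on a common cubic.
\begin{verbatim}
# Five points (x_i, y_i) lie on a cubic y = a x^3 + b x^2 + c x + d
# exactly when the 5x5 determinant with rows (1, x_i, x_i^2, x_i^3, y_i)
# vanishes; this is the affine form of the rows used in the text.
from fractions import Fraction

def det(mat):
    """Determinant by Gaussian elimination over exact fractions."""
    m = [list(row) for row in mat]
    n, d = len(m), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for col in range(i, n):
                m[r][col] -= factor * m[i][col]
    return d

def on_common_cubic(points):
    """points: five (x, y) pairs, given as Fractions or ints."""
    rows = [[Fraction(1), Fraction(x), Fraction(x)**2, Fraction(x)**3,
             Fraction(y)] for (x, y) in points]
    return det(rows) == 0
\end{verbatim}
If the determinant vanishes, the sixth intersection point can then be recovered, for instance, by dividing $c(x)^2 - F(x)$ by the five known linear factors, where $c$ is the interpolating cubic (this works for points in general position).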
We have applied this procedure to the points of height up to~$10^5$ we found
using {\tt ratpoints}. This can produce quite a number of additional points
of considerable height. For example, we were able to find eight more points
on Stahlke's curve from~\cite{Stahlke}, so that this curve must have at
least~374 rational points. Of course, in this way we can only find points within the
subgroup of the Mordell-Weil group generated by the known points.
\end{document} |
\begin{document}
\date{}
\title{A quadratic bound on the number of boundary slopes of essential surfaces with bounded genus}
\author{Tao Li
\thanks{Partially supported by NSF grant DMS-0705285}\and
Ruifeng Qiu
\thanks{Partially supported by NSFC grant 10631060}\and
Shicheng Wang
\thanks{Partially supported by NSFC grant 10625102}}
\maketitle
\begin{abstract}
Let $M$ be an orientable 3-manifold with $\partial M$ a single torus. We
show that the number of boundary slopes of immersed essential surfaces with
genus at most $g$ is bounded by a quadratic function of $g$. In the
hyperbolic case, this was proved earlier by Hass, Rubinstein and
Wang.
\textbf{Subject class:} 57M50, 57N10
\end{abstract}
\section{Introduction}
A proper immersion $f: (F, \partial F)\to (M, \partial M)$ from a compact surface to a compact 3-manifold is
essential if it is $\pi_1$-injective and $\partial$-injective, i.e., it maps essential loops and arcs in $F$ to essential loops and arcs in $M$.
Let $M$ be a compact orientable 3-manifold with $\partial M$ a single torus. We say a slope $s$ in $\partial M$ is realized by an essential surface if there is a proper essential immersion $f:
(F, \partial F)\to (M,
\partial M)$ such that every component of $f(\partial F)$ is a curve of slope $s$ in $\partial M$. Such an immersed surface is particularly interesting because it extends to a closed immersed surface in the closed 3-manifold $M(s)$ obtained by Dehn filling along the slope $s$.
The study of boundary slopes of essential surfaces has been an active and
attractive topic for a long time. Hatcher \cite{Ha} showed that there are only finitely many boundary slopes of embedded essential surfaces. The number of boundary slopes of small-genus embedded surfaces (e.g.~punctured spheres or tori) is quite small, and the study of these exceptional slopes is a central topic in the theory of Dehn surgery; see the survey article \cite{Go}.
However, for immersed essential surfaces, there is no such bound in
general. In fact, there are examples in which every slope is realized by
an immersed essential surface; see \cite{Ba, BC, O}. In \cite{HRW},
Hass, Rubinstein and Wang showed that for hyperbolic manifolds, the
number of boundary slopes of essential surfaces of genus at most $g$
is bounded by $Cg^2$, where $C$ is a constant independent of the
manifold (see also Agol \cite{Ag}). The purpose of this paper is to
extend the quadratic bound result to general 3-manifolds.
\begin{thm}\label{Tmain}
Suppose $M$ is an orientable 3-manifold with $\partial M$ a single
torus. For any $g$, let $N_g(M)$ be the number of slopes that can be realized by essential immersed surfaces of
genus at most $g$.
Then $N_g(M)\le\left\{
\begin{array}{cl}
C(M)g^2 & g\ge 1 \\
C'(M) & g=0
\end{array} \right.$ for some constants $C(M)$ and $C'(M)$ that depend on $M$.
\end{thm}
\noindent
{\bf Remark.}
(1). In \cite{HRW}, Hass, Rubinstein and Wang proved that $N_g(M)$ is finite, but no bound on $N_g(M)$ is given in \cite{HRW}. Recently Zhang \cite{Zh} extended the techniques in \cite{HRW} and proved that $N_g(M)$ is bounded by $c(M)g^3$ for some constant $c(M)$ that depends on $M$.
(2). The coefficient $C(M)$ depends on $M$. One would hope for a quadratic bound independent of $M$, but even for embedded surfaces, it seems difficult to obtain such a bound if $M$ contains essential annuli. Nevertheless, the coefficients $C(M)$ and $C'(M)$ can be algorithmically determined, see Remark~\ref{Rlast}.
(3). When $\partial M$ is a surface of higher genus, there are finiteness
and infiniteness results in both the embedded and the immersed cases; see
\cite{SWu}, \cite{Qi}, \cite{HWZ}, \cite{La} and \cite{QW}.
\section{Some crucial facts}
The proof of Theorem \ref{Tmain} relies on a theorem of
Hass-Rubinstein-Wang \cite{HRW}, a theorem of Culler-Shalen
\cite{CS}, and Li's extension of Hatcher's argument
\cite{Li2}. Propositions \ref{hyperbolic}, \ref{2-slopes} and
\ref{Lsign} below are variations of these results, presented in the
form we need.
In this section we first consider a hyperbolic 3-manifold $M$ with possibly more than one cusp. We denote by
$M_\text{max}$ the interior of $M$ with a system of maximal cusps
removed. If we identify $M$ with $M_\text{max}$, then $\partial M$
has a Euclidean metric induced from the hyperbolic metric, and each closed
Euclidean geodesic in $\partial M$ has length at least 1 (see
\cite{Ad} for details).
\begin{prop}\label{hyperbolic}
Suppose $M$ is a hyperbolic 3-manifold as above and $T$ is a component of $\partial M$. Suppose $F$ is an
essential immersed surface of genus $g$ in $M$ and let
$c_1,..., c_n$ be the components of $\partial F\cap T$.
\begin{enumerate}
\item If we identify $M$ with $M_\text{max}$, then
$$\Sigma _{i=1}^n L(c_i)\le -2\pi \chi(F),$$
where $L(c_i)$ is the length of a Euclidean geodesic homotopic to
$c_i$ in $T$.
\item Let $S$ be an embedded essential surface in $M$ and let $\gamma$ be a component of $\partial S\cap T$. Then there is a number $C_S$ which can be expressed as an explicit function of $\chi(S)$, such that
$$|\gamma\cap\partial F|\le -C_S\cdot\chi(F),$$
where $|\gamma\cap\partial F|$ is the minimum number of intersection points of $\gamma$ and $\partial F$.
\item There are two distinct essential circles $\Gamma_1$ and
$\Gamma_2$ in $T$, such that
$$|\Gamma_j\cap\partial F| \le -C\chi(F)$$ for some constant $C$, where $|\Gamma_j\cap
\partial F|$ is the minimum number of intersection points of $\Gamma_j$ and $\partial F$
up to isotopy, $j=1,2$.
\end{enumerate}
\end{prop}
\begin{proof} Part (1) is proved in \cite{HRW}.
Now we prove part (2). Recall we have identified $M$ with $M_\text{max}$, and
$\partial M$ has a Euclidean metric induced from the hyperbolic
metric. We may assume that $\gamma$ and each component $c_i$ of $\partial F$ have been isotoped to be closed Euclidean geodesics in $\partial M$.
Let $p: E^2\to T$ be the universal cover, where $E^2$ is the
Euclidean plane. By lifting $\gamma$ to $E^2$, we get a Euclidean line segment $OO_1$ which projects to $\gamma$. By part (1), the Euclidean length $L(\gamma)=L(OO_1)$ is at most $-2\pi\chi(S)$. The orbit of $O$ under the covering translations forms a lattice in $E^2$. Let $O_2$ be a lattice point such that $OO_1$ and $OO_2$ span a fundamental parallelogram $P$ for $T$. By a theorem of Cao and Meyerhoff (also see Lemma 2.2 of \cite{HRW}), $\operatorname{area}(P)\ge 3.35$.
Let $h$ be the distance from $O_2$ to the line $OO_1$. Since the Euclidean length $L(OO_1)\le -2\pi\chi(S)$ and since $\operatorname{area}(P)\ge 3.35$, the height $h\ge\frac{3.35}{L(OO_1)}\ge \frac{3.35}{-2\pi\chi(S)}$.
By lifting $c_i$ to $E^2$, it is easy to see that the
length of $c_i$ is at least $h|c_i\cap \gamma|$.
By part (1), we have
$$h|\gamma\cap
\partial F|
=h \Sigma _{i=1}^n |c_i\cap \gamma| \le \Sigma
_{i=1}^n L(c_i)\le -2\pi \chi(F).$$
So part (2) holds and $C_S=\frac{-4\pi^2\chi(S)}{3.35}$.
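In other words, combining the displayed inequality with the lower bound for $h$, we get
\[ |\gamma\cap\partial F| \;\le\; \frac{-2\pi\chi(F)}{h}
   \;\le\; \frac{-2\pi\chi(F)\,L(OO_1)}{3.35}
   \;\le\; \frac{4\pi^2\chi(S)\chi(F)}{3.35}
   \;=\; -C_S\,\chi(F)\,. \]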
The proof of part (3) is similar. Pick an origin $O$ in $E^2$ and consider the lattice $L$ in $E^2$ given by the covering translations of $O$. Let $O_1$ and $O_2$ be two independent
vertices in $L$ which have the first and second shortest distance
from the origin $O$. Let $\alpha$ be the angle of the triangle
$OO_1O_2$ at $O$ and let $l$, $l_1$, $l_2$ be the lengths of
$O_1O_2$, $OO_1$ and $OO_2$ respectively. By our assumptions above and by a theorem in \cite{Ad} mentioned earlier, we have $l\ge
l_2 \ge l_1\ge 1$. This implies that $\alpha\ge \pi/3$. Furthermore, we can
assume that $\alpha\le \pi/2$, because otherwise we can replace one of the
vertices by its inverse.
$OO_1$ and $OO_2$ span a fundamental parallelogram $P$ for $T$. It follows from our assumptions above that
$l_1\sin\alpha$ (resp. $l_2\sin\alpha$), the height of $P$ over $OO_2$
(resp. over $OO_1$), is at least $\frac{\sqrt 3} 2$.
Let $\Gamma_j=p(OO_j)$, $j=1,2$. As in part (2), we have
$$\frac{\sqrt 3} {2}|\Gamma_j\cap
\partial F|
=\frac{\sqrt 3} {2} \Sigma _{i=1}^n |c_i\cap \Gamma_j| \le \Sigma
_{i=1}^n L(c_i)\le -2\pi \chi(F),$$ and part (3) follows with $C=\frac{4\pi}{\sqrt{3}}$.
\end{proof}
\begin{prop} \label{2-slopes}
Suppose $M$ is a hyperbolic 3-manifold as above and $T$ is a component
of $\partial M$. Then $T$ has two distinct boundary slopes $c_1$ and
$c_2$ of embedded essential surfaces, i.e., there are
properly embedded essential surfaces $F_i$ in $M$ such that $F_i\cap
T$ is a multiple of $c_i$, $i=1,2$.
\end{prop}
\begin{proof} By performing hyperbolic Dehn filling on each boundary
component of $\partial M\setminus T$, we get a hyperbolic 3-manifold
$M^*$ with $\partial M^*=T$. By a theorem of Culler-Shalen \cite{CS},
there are two distinct boundary slopes $c_1$ and $c_2$ on $T$, i.e.
there are properly embedded essential surfaces $F_i^*$ in $M^*$ such
that $F_i^*\cap T$ is a multiple of $c_i$, $i=1,2$. So
$F_i=F_i^*\cap M$ has the required property.
\end{proof}
A surface in a Seifert fiber space is said to be horizontal if it is transverse to the $S^1$-fibers.
If an orientable Seifert fiber space has a single boundary component, then it is easy to see that all embedded horizontal surfaces have the same boundary slope, which is determined by the Euler number of the fibration. The following proposition is a generalization of this fact to immersed horizontal surfaces in a Seifert fiber space with more than one boundary component.
\begin{prop}\label{Lsign}
Let $N$ be an orientable Seifert fiber space with boundary and $T$
a boundary component of $N$. Let $F_1$ and $F_2$ be immersed
essential horizontal surfaces in $N$. Suppose $F_i\cap T$ is
embedded for both $i=1,2$ and $|\partial F_1\cap\partial F_2\cap T|$
is minimal in the isotopy classes of $F_1$ and $F_2$. If there is a
double curve $\alpha\subset F_1\cap F_2$ with both endpoints in $T$,
then the curves of $F_1\cap T$ and $F_2\cap T$ must have the same
slope in $T$.
\end{prop}
\begin{proof}
The proof of the proposition is basically an argument first used by Hatcher in \cite{Ha} and then extended to immersed surfaces in \cite{Li2}. As $N$ is a Seifert fiber space, we can fix a direction for the $S^1$--fibers of $N$ in $T$.
Since $N$ is orientable and each $F_i$ is horizontal, the normal direction of $\partial N$ and the orientation of the $S^1$--fibers in $T$ uniquely determine an orientation for every curve of $\partial F_1\cap T$ and $\partial F_2\cap T$. Since $F_i\cap T$ is embedded, every component of $\partial F_i\cap T$ ($i=1\ or\ 2$) with this induced orientation represents the same element in $H_1(T)$. If $\partial F_1\cap T$ and $\partial F_2\cap T$ have different slopes, they must have a nonzero intersection number. Moreover, since we have assumed $|\partial F_1\cap\partial F_2\cap T|$ is minimal in the isotopy classes of $F_1$ and $F_2$, the signs of the intersection points of $\partial F_1\cap\partial F_2\cap T$ (with respect to the directions above) are the same, either all positive or all negative.
Let $\alpha\subset F_1\cap F_2$ be an intersection arc with both endpoints in $T$. One can easily list all possible configurations of the directions of the $S^1$-fibers at $\partial\alpha$ and the induced orientations of $\partial F_1$ and $\partial F_2$. However, since each $F_i$ is horizontal, only two possible configurations can happen, see Figure~\ref{Fsign}. In either case, the two ends of $\alpha$ give points of $\partial F_1\cap\partial F_2\cap T$ with opposite signs of intersection. This contradicts our conclusion on the sign of the intersection points above. So $F_1\cap T$ and $F_2\cap T$ must have the same slope in $T$.
\begin{figure}\label{Fsign}
\end{figure}
\end{proof}
The following fact follows immediately from Proposition~\ref{Lsign}. Since an essential surface in a Seifert fiber space is either vertical or horizontal \cite{H}, it implies that if $M$ is an orientable Seifert fiber space with a single boundary component, then only two possible slopes can be realized by immersed essential surfaces: one vertical and one horizontal.
\begin{col}\label{Csame}
Let $N$ be an orientable Seifert fiber space with a single boundary torus. Then all immersed horizontal surfaces with respect to a fixed Seifert structure have the same slope in $\partial N$.
\end{col}
\section{Construct a surface of reference}\label{Scon}
Let $M$ be as in Theorem~\ref{Tmain}.
First note that we may assume $M$ is irreducible, since if $M$ is reducible we can use the prime factor of $M$ that contains $\partial M$ and the proof is the same. Since the hyperbolic case is proved in \cite{HRW} and the Seifert fiber case is trivial (see Corollary~\ref{Csame}), we may assume $M$ has a nontrivial JSJ decomposition.
Let $\mathcal{T}$ be the set of JSJ
decomposition tori of $M$. We call the closure (under path metric)
of each component of $M-N(\mathcal{T})$ a JSJ piece. Let $M_0$ be
the JSJ piece that contains the torus $\partial M$.
In this section, we suppose $M_0$ is a Seifert fiber space and we
will use the JSJ structure of $M$ to construct a surface of
reference for counting the boundary slopes of immersed essential
surfaces. This surface is in $M_0$ and is not a proper surface in
$M$.
For any Seifert fiber space $N$
with boundary, we call a slope in a boundary torus the
\emph{vertical slope} if it is the slope of a regular fiber of $N$.
\begin{prop}\label{Phor}
Let $N$ be a Seifert fiber space and $T_0$, $T_1,\dots T_n$ the
boundary tori of $N$. Let $s_i$ ($i=1,\dots n$) be any slope in $T_i$
that is not vertical in $N$. Then there is an embedded horizontal surface
in $N$ realizing each slope $s_i$ in $T_i$.
\end{prop}
\begin{proof}
We perform Dehn fillings along each slope $s_i$ ($i=1,\dots n$) and let $\hat{N}$ be the resulting manifold. So $\partial \hat{N}=T_0$. Since $s_i$ is not vertical in $N$, the Seifert structure of $N$ extends to $\hat{N}$. Hence $\hat{N}$ is a Seifert fiber space with boundary. Every Seifert fiber space with boundary has an embedded horizontal surface. The restriction of a horizontal surface of $\hat{N}$ to $N$ is a horizontal surface of $N$ realizing each slope $s_i$ in $T_i$ ($i=1,\dots n$).
\end{proof}
Let $M_0$ be the Seifert JSJ piece of $M$ as above. Let $\partial M$, $T_1,\dots, T_n$ be the boundary tori of $M_0$. So each $T_i$ can be viewed as a JSJ torus in $\mathcal{T}$ and $M_0$ is a JSJ piece on one side of $T_i$. Next we fix a slope in $T_i$ according to the JSJ piece on the other side of $T_i$. Let $M_i$ be the JSJ piece on the other side of $T_i$. Note that $M_i$ is the same as $M_0$ if $T_i$ is glued to some $T_j$ in $M$. We fix a slope $s_i$ for each boundary component $T_i$ of $M_0$ as follows.
\noindent
\emph{Case 1}. $M_i$ is a Seifert fiber space and $M_i$ is not a twisted $I$-bundle over a Klein bottle. In this case we choose the slope $s_i$ of $T_i$ to be the slope of a regular fiber of $M_i$. Note that $s_i$ is not a vertical slope for $M_0$, because otherwise the regular fibers of $M_0$ and $M_i$ would match and $M_0\cup_{T_i} M_i$ would be a Seifert fiber space, which contradicts the hypothesis that $T_i$ is a JSJ torus.
\noindent
\emph{Case 2}. $M_i$ is a twisted $I$-bundle over a Klein bottle. In this case, $M_i$ has two different Seifert structures \cite{Ja}. For any point $x\in T_i=\partial M_i$, we define $p(x)$ to be the other endpoint of the $I$-fiber of $M_i$ that contains $x$. Let $\gamma_\nu$ be a simple closed curve in $T_i$ which is a regular fiber of $M_0$. Let $s_i$ be the slope of $p(\gamma_\nu)$. Note that $\gamma_\nu$ and $p(\gamma_\nu)$ bound an immersed essential annulus in $M_i$. If $\gamma_\nu$ and $p(\gamma_\nu)$ have the same slope in $T_i$, i.e. $\gamma_\nu\cup p(\gamma_\nu)$ bounds an embedded annulus, then we can choose a Seifert structure for $M_i$ \cite{Ja} so that $\gamma_\nu$ is also a regular fiber for $M_i$ and hence $M_0\cup M_i$ is a Seifert fiber space, a contradiction to the hypothesis that $T_i$ is a JSJ torus. So $s_i$ is not a vertical slope for $M_0$.
\noindent \emph{Case 3}. $M_i$ is hyperbolic. By Proposition
\ref{2-slopes}, $T_i$ has at least two boundary slopes (of embedded
essential surfaces in $M_i$). In this case we choose $s_i$ to be a
boundary slope of $M_i$ that is not a vertical slope in $M_0$. So
there is an embedded essential surface $S_i$ in $M_i$ whose boundary
in $T_i$ has slope $s_i$ and $s_i$ is not a vertical slope in $M_0$.
By Proposition~\ref{Phor}, $M_0$ contains a properly embedded horizontal surface $S$ such that the slope of $\partial S\cap T_i$ is the slope $s_i$ described above. Note that $S$ is not a properly embedded surface in $M$, since two tori $T_i$ and $T_j$ ($i\ne j$) may be glued together in $M$ and $s_i$ and $s_j$ may not match in the corresponding JSJ torus of $M$.
Next we fix the surface $S$ in the construction above. Let $\mu$ be the slope of $S\cap\partial M$ in the torus $\partial M$ and let $\nu$ be the vertical slope of $\partial M$ with respect to the Seifert structure of $M_0$.
\section{Proof of Theorem \ref{Tmain}}
Let $F$ be a proper immersed essential surface of genus $g$ in $M$.
If $M_0$ is hyperbolic then Theorem~\ref{Tmain} follows from
\cite{HRW}. More precisely, suppose $\partial F$ is an $n$-fold multiple
of a slope $c$ in $\partial M$ and we have identified $M_0$ with
the metric space $M_{0 \text{max}}$ as in Proposition \ref{hyperbolic}. By Proposition \ref{hyperbolic}
(1), we have
$$n L(c)\le L(\partial(F\cap M_0))\le -2\pi \chi(F\cap M_0)\le -2\pi \chi(F) = 2\pi(2g-2+n).$$
Then as discussed in \cite{HRW} we have $L(c)\le 2\pi$ if $g=0$ and
$L(c)\le 2g\pi$ if $g>0$, therefore $N_g(M)\le C'$ for $g=0$ and
$N_g(M)\le Cg^2$ for some constants $C'$ and $C$ independent of $M$.
Below we assume that $M_0$ is a Seifert fiber space.
We may assume the slope of $\partial F$ is not the vertical slope of
$M_0$, so $F\cap M_0$ is horizontal in $M_0$. Since $\partial M$ is
incompressible, $F$ is not a disk. If $F$ is an annulus, then
$F\cap M_0$ is a horizontal annulus. The only orientable Seifert
fiber spaces that admit a horizontal annulus are
$T^2\times I$ and the twisted $I$-bundle over a Klein bottle. Since
$M_0$ is a JSJ piece, $M_0$ is not $T^2\times I$. If $M_0$ is a
twisted $I$-bundle over a Klein bottle, $M_0=M$ and by
Corollary~\ref{Csame} there are only two possible slopes for $F$. Thus
Theorem~\ref{Tmain} holds if $\chi(F)\ge 0$. So in this section, we
assume $\chi(F)<0$.
\begin{lem}\label{Lver}
Let $N$ be a Seifert JSJ piece of $M$ and $v$ a regular fiber of $N$. Suppose $N$ is not a twisted $I$-bundle over a Klein bottle. Let $F$ be an essential surface in $M$ and suppose $F\cap N$ is horizontal in $N$. Then $|v\cap F|\le -6\chi(F)$.
\end{lem}
\begin{proof}
Let $O(N)$ be the base orbifold of $N$ and let $\chi(O(N))$ denote its orbifold Euler characteristic. Since $O(N)$ has boundary and $N$ is not a solid torus, $\chi(O(N))\le 0$. Moreover, since $N$ is orientable and is not $T^2\times I$, $\chi(O(N))=0$ only if $N$ is a twisted $I$-bundle over a Klein bottle (in which case $O(N)$ is a M\"obius band or a disk with two cone points of order 2).
Thus by our hypothesis that $N$ is not a twisted $I$-bundle over a Klein bottle, we have $\chi(O(N))<0$.
Since $F\cap N$ is horizontal in $N$, $\chi(F\cap N)=k\chi(O(N))$ where $k=|v\cap F|$. Since $O(N)$ has boundary and $\chi(O(N))<0$, the maximal possible value of $\chi(O(N))$ occurs when $O(N)$ is a disk with two cone points of orders 2 and 3 respectively, in which case $\chi(O(N))=1-\bigl(1-\tfrac12\bigr)-\bigl(1-\tfrac13\bigr)=-1/6$. Therefore $\chi(O(N))\le -1/6$ and $k=|v\cap F|\le -6\chi(F\cap N)\le -6\chi(F)$.
\end{proof}
\begin{rem}\label{R2}
In the proof of Lemma~\ref{Lver}, $\chi(O(N))\le -1/2$ except when $O(N)$ is a disk with two cone points. Thus $|v\cap F|\le -2\chi(F)$ if $N$ has more than one boundary component. This is a key observation in the proof of the following lemma, see \cite{Zh}.
\end{rem}
\begin{lem}[\cite{Zh}, Lemma 3.2]\label{Lzhang}
Let $M$ and $M_0$ be as in section~\ref{Scon} and let $\nu$ be the vertical slope of $\partial M$ in $M_0$. Let $F$ be an immersed essential surface in $M$ of genus at most $g$ and let $s_F$ be the boundary slope of $F$ in $\partial M$. Then the geometric intersection number $$\Delta(\nu,s_F)\le U(g)=\left\{
\begin{array}{cl}
2 & g=0 \\
2g & g\ge 1
\end{array} \right. .$$
\end{lem}
\qed
\begin{proof}[Proof of Theorem \ref{Tmain} when $M_0$ is a Seifert fiber space]
Let $S$ be the fixed embedded horizontal surface in $M_0$
constructed in section~\ref{Scon}. Let $F$ be an immersed essential
surface in $M$ of genus at most $g$. We will study the intersection
of $F\cap M_0$ and $S$. Let $s_F$ be the boundary slope of $F$.
Our main goal is to show that $\Delta(\mu, s_F)$ is bounded by a
linear function of $g$, where $\mu$ is the slope of $\partial
S\cap\partial M$. Since only one slope is vertical, we may assume that $F\cap M_0$ is horizontal in $M_0$.
We will use the same notation as section~\ref{Scon}. The boundary tori of $M_0$ are $\partial M$, $T_1,\dots, T_n$ and $S$ is properly embedded in $M_0$. In this section, we view $S$ as a surface in $M$ instead of $M_0$. Since it is possible that $T_i$ and $T_j$ ($i\ne j$) are glued together in $M$, when regarded as a surface in $M$, curves of $\partial S$ may intersect in a JSJ torus of $M$.
Now we consider the intersection of $F$ and $S$. A key difference between $F$ and $S$ is that $F$ is a proper surface in $M$ while $S$ is only defined in $M_0$. We view the torus $T_i$ as a JSJ torus of $M$ and as in section~\ref{Scon}, let $M_i$ be the JSJ piece incident to $T_i$ on the other side of $M_0$ ($M_i$ may be the same JSJ piece as $M_0$). Let $\Gamma_i=S\cap T_i$ in $M_0$. As above, we view $\Gamma_i$ as a collection of curves in a JSJ torus in $M$. Next we estimate $|F\cap\Gamma_i|$. Let $k_i$ be the number of components of $\Gamma_i$. As in the construction of $S$, we have 3 cases:
\noindent
\emph{Case 1}. $M_i$ is a Seifert fiber space and $M_i$ is not a twisted $I$-bundle over a Klein bottle. By the construction of $S$, in this case, each curve in $\Gamma_i$ is a regular fiber of the Seifert fiber space $M_i$. By Lemma~\ref{Lver}, $|F\cap\Gamma_i|\le -6k_i\chi(F\cap M_i)\le -6k_i\chi(F)$.
\noindent
\emph{Case 2}. $M_i$ is a twisted $I$-bundle over a Klein bottle. By our construction of $S$ in this case, each curve $\gamma$ in $\Gamma_i$ and a regular fiber $p(\gamma)$ of $M_0$ bound an immersed essential annulus in $M_i$. We may assume $F\cap M_i$ to be essential in $M_i$. So the intersection of $F$ and an essential annulus in $M_i$ consists of essential arcs in the annulus. In particular, $|F\cap\gamma|=|F\cap p(\gamma)|$. Since $p(\gamma)$ is a regular fiber of $M_0$ for each curve $\gamma$ in $\Gamma_i$ and since $M_0$ has more than one boundary component, by Lemma~\ref{Lver} and Remark~\ref{R2}, $|F\cap\Gamma_i|\le -2k_i\chi(F\cap M_0)\le -2k_i\chi(F)$.
\noindent \emph{Case 3}. $M_i$ is hyperbolic. In this case there is
an embedded essential surface $S_i$ in $M_i$ whose boundary slope in the
torus $T_i$ is the same as the slope of $\Gamma_i$. Now we consider
the intersection of $S_i$ and $F\cap M_i$. By Proposition
\ref{hyperbolic} (2), $|F\cap\Gamma_i|\le -c_i\chi(F\cap M_i)\le
-c_i\chi(F)$ for some number $c_i$ which depends on $|\Gamma_i|$ and $\chi(S_i)$.
Let $\Gamma_0=\partial S\cap\partial M$ be the boundary curves of $S$ lying in $\partial M$, so $\partial S-\Gamma_0=\bigcup_{i=1}^n\Gamma_i$. By the argument above, the total number of intersection points of $F$ and $\partial S-\Gamma_0$ is at most $-c\chi(F)=c(2g-2+|\partial F|)$ for some constant $c>0$ which depends on $|\partial S-\Gamma_0|$ and on the surfaces $S_i$ used when $M_i$ is hyperbolic as in Case 3.
Let $\Delta=\Delta(\mu, s_F)$ be the intersection number of a curve in $\Gamma_0$ and a curve $\partial F$. So the total number of intersection points of $\Gamma_0$ and $\partial F$ is $\Delta\cdot|\Gamma_0|\cdot|\partial F|$.
Thus, if $g\ge 1$, there is a number $C_1$ depending on $S$ and $S_i$ such
that if $\Delta> C_1g$, we have $\Delta\cdot|\Gamma_0|\cdot|\partial
F|>c(2g-2+|\partial F|)$ and hence there must be an arc in $S\cap F$
with both endpoints in $\partial M$. However, by
Proposition~\ref{Lsign}, this means that $\partial F$ has the same
slope as $\partial S\cap\partial M$ and $\Delta=0$, a contradiction.
Therefore, if $g\ge 1$, $\Delta\le C_1g$ for some constant $C_1$ which depends on $S$ and the surface $S_i$ in Case (3).
Similarly if $g=0$, there is a number $C_0$ such that if $\Delta>C_0$, then $\Delta\cdot|\Gamma_0|\cdot|\partial F|>c(|\partial F|-2)$ and hence there must be an arc in $S\cap F$ with both endpoints in $\partial M$, which means that $\Delta=0$. Thus if $g=0$, $\Delta\le C_0$ for some constant $C_0$ which depends on $M$.
We have two fixed slopes for $\partial M$, the vertical slope $\nu$ and the slope $\mu$ of $\partial S\cap\partial M$. For any horizontal immersed essential surface $F$ of genus at most $g$, let $s_F$ be its boundary slope. The argument above says that $\Delta(\mu,s_F)\le V(g)$, where $V(g)=C_1g$ if $g\ge 1$ and $V(g)=C_0$ if $g=0$ for some constants $C_1$ and $C_0$ depending on $S$. By Lemma~\ref{Lzhang}, $\Delta(\nu, s_F)\le U(g)$ where $U(g)=2g$ if $g\ge 1$ and $U(g)=2$ if $g=0$. Therefore, the total number of possible slopes for $\partial F$ is bounded by a quadratic function of $g$, where the coefficients depend on the fixed surface $S$ and the surface $S_i$ used in the hyperbolic JSJ piece as in Case (3).
\end{proof}
\begin{rem}\label{Rlast}
If one uses part (3) of Proposition~\ref{hyperbolic} instead of part (2) in the argument, then one can prove the main theorem without using the Culler-Shalen theorem (i.e. Proposition~\ref{2-slopes}). However, there is an advantage of using Proposition~\ref{2-slopes}. Given any triangulation of a 3-manifold, one can use normal surface theory to algorithmically find two embedded essential surfaces with different boundary slopes whose existence is guaranteed by Proposition~\ref{2-slopes}. Since there are algorithms to determine the JSJ and Seifert structures, the constant in Theorem~\ref{Tmain} can be found algorithmically by following the proof.
\end{rem}
\noindent
Department of Mathematics; Boston College; Chestnut Hill, MA 02167 USA. \\
Email address: [email protected]
\noindent
Department of Mathematics; East China Normal University; Shanghai
200062 CHINA \\
Email address: [email protected]
\noindent
Department of Mathematics; Peking University, Beijing 100871, CHINA \\
Email address: [email protected]
\end{document} |
\begin{document}
\title{Semi-device independent certification of multiple unsharpness parameters through sequential measurements }
\author{ Sumit Mukherjee }
\email{[email protected]}
\author{ A. K. Pan }
\email{[email protected]}
\affiliation{National Institute of Technology Patna, Ashok Rajpath, Patna, Bihar 800005, India}
\begin{abstract}
Based on a sequential communication game, semi-device independent certification of an unsharp instrument has recently been demonstrated [\href{https://iopscience.iop.org/article/10.1088/1367-2630/ab3773}{New J. Phys. 21 083034 (2019), }\href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033014}{ Phys. Rev. Research 2, 033014 (2020)}]. In this paper, we provide semi-device independent self-testing protocols in the prepare-measure scenario to certify multiple unsharpness parameters along with the states and the measurement settings. This is achieved through the sequential quantum advantage shared by multiple independent observers in a suitable communication game known as parity-oblivious random-access-code. We demonstrate that in 3-bit parity-oblivious random-access-code, at most three independent observers can sequentially share quantum advantage. The optimal pair (triple) of quantum advantages enables us to uniquely certify the qubit states, the measurement settings, and the unsharpness parameter(s). The practical implementation of a given protocol involves inevitable losses. In a sub-optimal scenario, we derive a certified interval within which a specific unsharpness parameter has to be confined. We extend our treatment to the 4-bit case and show that at most two observers can share quantum advantage for the qubit system. Further, we provide a sketch to argue that four sequential observers can share the quantum advantage for the two-qubit system, thereby enabling the certification of three unsharpness parameters.
\end{abstract}
\maketitle
\section{Introduction}
Bell's theorem \cite{bell} is regarded by many as one of the most profound discoveries of modern science. This celebrated no-go proof asserts that any ontological model satisfying locality cannot account for all quantum statistics. This feature is commonly demonstrated through the quantum violation of suitable Bell inequalities. Besides the immense conceptual insights Bell's theorem adds to the quantum foundations research, it provides a multitude of practical applications in quantum information processing (see, for extensive reviews, \cite{hororev,guhnearev,brunnerrev}). Moreover, the non-local correlation is device-independent, i.e., no characterization of the devices is needed to be assumed. Only the observed output statistics are enough to certify non-locality. The device-independent non-local correlation is used as a resource for secure quantum key distribution \cite{bar05,acin06,acin07,pir09}, randomness certification \cite{col06,pir10,nieto,col12}, witnessing Hilbert space dimension \cite{wehner,gallego,ahrens,brunnerprl13,bowler,sik16prl,cong17,pan2020} and for achieving advantages in communication complexity tasks \cite{complx1}.
The optimal quantum value of a given Bell expression enables device-independent certification of states and measurements - commonly known as self-testing \cite{self1,wagner2020,tava2018,farkas2019}. For example, the optimal violation of the CHSH inequality self-tests the maximally entangled state and mutually anti-commuting local observables. Note that device-independent certification encounters practical challenges arising from the requirement of a loophole-free Bell test. Such tests have recently been realized \cite{lf1,lf2,lf3,lf4,lf5}, enabling experimental demonstrations of device-independent certification of randomness \cite{Yang, Peter}. In practice, however, such a loophole-free certification of non-local correlation remains an uphill task.
In recent times, there has been an upsurge of interest in semi-device independent (SDI) protocols in the prepare-measure scenario \cite{pawlowski11, lungi,li11,li12,Wen,van, brask, bowles14,tava2018,farkas2019,tava20exp,mir2019,wagner2020,mohan2019,miklin2020,samina2020,fole2020,anwar2020}. These are less cumbersome than a full device-independent test and hence experimentally more appealing. In most SDI certification protocols, it is commonly assumed that there is an upper bound on the dimension, but otherwise the devices remain uncharacterized. Of late, a flurry of SDI protocols has been developed for certifying states and sharp measurements \cite{tava2018}, non-projective measurements \cite{mir2019,samina2020}, mutually unbiased bases \cite{farkas2019}, and randomness \cite{lungi,li11,li12}. Very recently, the SDI certification of the unsharpness parameter has been reported \cite{miklin2020,mohan2019,anwar2020,fole2020}.
In this work, we provide interesting SDI protocols in the prepare-measure scenario to certify multiple unsharpness parameters. This is demonstrated by examining the quantum advantage shared by multiple observers. Sharing various forms of quantum correlations by multiple sequential observers has recently received considerable attention among researchers. A number of works \cite{silva2015,sasmal2018,bera2018,kumari2019,brown2020,Zhang21} have recently examined at most how many sequential observers can share different quantum correlations, viz., entanglement, coherence, non-locality, and preparation contextuality. Any such correlation-sharing protocol requires the prior observers to perform unsharp measurements \cite{busch} represented by a set of positive operator valued measures (POVMs). This allows the subsequent observer to extract the quantum advantage. In an ideal sharp measurement \cite{vonneumann}, the system collapses to one of the eigenstates of the measured observable. From an information-theoretic perspective, the information gained in a sharp measurement is maximum; thereby, the system is most disturbed. It may be natural to expect that projective measurements provide the optimal advantage in contrast to POVMs, but there are certain tasks where unsharp measurement showcases its supremacy over projective measurement. Examples include sequential quantum state discrimination \cite{std1,std2}, unbounded randomness certification \cite{ran1} and many more.
However, the statistics corresponding to POVMs can be simulated by projective measurements unless they are extremal ones \cite{oszmaniec2017}. The unsharp POVMs that are the noisy variants of sharp projective measurements can be simulated through the classical post-processing of ideal projective measurement statistics. The certification of such unsharp non-extremal POVMs using the standard self-testing scheme is not possible. Note that quantum measurement instruments are subject to imperfections for various reasons, and emerging quantum technologies demand certified instruments for conclusive experimental tests. Standard self-testing protocols certify the states and measurements based on the optimal quantum value, but such protocols do not certify the post-measurement states. The same optimal quantum value of a Bell expression may be obtained even when the POVMs are implemented differently, leading to different post-measurement states. However, sequential measurements have the potential to certify the post-measurement states and consequently the unsharpness parameter.
Recently, in an interesting work, such a certification was put forward by Mohan, Tavakoli and Brunner (henceforth, MTB) \cite{mohan2019} through the sequential quantum advantages of two independent observers in an SDI prepare-measure communication game. In particular, they considered the $2$-bit random-access code (RAC) and showed that at most two sequential observers can get the quantum advantage. Both observers cannot have optimal quantum advantages, but there exists an optimal trade-off between the quantum advantages of the two sequential observers, enabling the certification of the prepared qubit states and measurement settings. Another interesting protocol for the same purpose was proposed in \cite{miklin2020}, but it differs significantly from the approach used in \cite{mohan2019}. The MTB protocol \cite{mohan2019} also provides a certified interval of values of the unsharpness parameter in a loss-tolerant scenario. The MTB proposal has been experimentally tested in \cite{anwar2020,fole2020,xiao}.
Against this backdrop, natural questions arise, such as whether two or more independent unsharpness parameters can be certified and whether the certified interval of values of the unsharpness parameter of a measurement instrument can be fine-tuned. Here we answer both questions affirmatively. Intuitively, two or more unsharpness parameters can be certified if multiple independent observers can share the quantum advantage. In such a case, the certified interval of the unsharpness parameters of a measurement instrument will also be fine-tuned. Using a straightforward mathematical approach, we provide SDI prepare-measure protocols to demonstrate that more than two sequential observers can share the quantum advantage, thereby enabling us to certify multiple unsharpness parameters instead of a single one as in \cite{mohan2019,miklin2020}. However, in a practical scenario the perfect optimal quantum correlations cannot be achieved. In such a case, we derive an interval of the unsharpness parameter using the suboptimal witness pair. The closer the observed statistics are to the optimal values, the narrower is the interval to which the unsharpness parameter can be confined. We also provide a methodology for how sequential measurement schemes can be used to fine-tune a certified interval of the unsharpness parameter of a quantum measurement device. Furthermore, we argue that the larger the number of observers sharing the quantum advantage, the narrower the certified interval.
We first propose the aforementioned SDI certification scheme based on the $3$-bit RAC in a prepare-measure scenario and demonstrate that at most three independent observers can share the quantum advantage sequentially. Throughout this paper, we assume the quantum system to be a qubit; otherwise, the devices remain uncharacterized. We show that the optimal pair of quantum advantages corresponding to the first two observers certifies the first observer's unsharpness parameter, and the optimal triple of quantum advantages provides the certification of the unsharpness parameters of the first two observers. As indicated earlier, we further provide a fine-tuning of the interval of values of the unsharpness parameter compared to the $2$-bit RAC presented in \cite{mohan2019}. We also show that if the quantum advantage is extended to a third sequential observer, then the interval of values of the unsharpness parameter for the first observer can be further fine-tuned, thereby certifying a narrower interval in the sub-optimal scenario. This scheme is further extended to the $4$-bit case, where we demonstrate that at least two observers can share the quantum advantage for a qubit system. If a two-qubit system is taken, at most four independent observers can share the quantum advantage. We provide a sketch of how to certify three unsharpness parameters corresponding to the first three observers, along with the input two-qubit states and measurement settings.
This paper is organized as follows. In Sec. II, we discuss a general $n$-bit quantum parity-oblivious RAC. In section III, we explicitly consider the $3$-bit case and find the optimal and sub-optimal relationships between sequential success probabilities for two observers. In Sec. IV, we generalize the scenario to three or more observers. In Sec. V we discuss the certification of multiple unsharpness parameters when the third observer gets the quantum advantage. In Sec. VI, we extend the protocol for the 4-bit case. We discuss our results in Sec. VII.
\section{ Parity-oblivious random-access-code}
We start by briefly encapsulating the notion of the parity-oblivious RAC, which is used here as a tool to demonstrate our results. It is a two-party one-way communication game. The $n$-bit RAC \cite{ambainis}, phrased in terms of a prepare-measure scenario, involves a sender (Alice) who has a length-$n$ string $x$ randomly sampled from $\{0,1\}^{n}$. On the other hand, a receiver (Bob) receives an index $ y \in \{1,2, ..., n\}$, chosen uniformly at random, as his input. Bob's task is to recover the bit $x_{y}$ with as high a probability as possible. In an operational theory, Alice encodes her $n$-bit string $x$ into the states prepared by a procedure $P_{x}$. After receiving the system, for every $y \in \{1,2,...,n\}$, Bob performs a two-outcome measurement $M_{y}$ and reports the outcome $b$ as his output. As mentioned, the winning condition of the game is $b=x_{y}$, i.e., Bob has to correctly predict the $y^{th}$ bit of Alice's input string $x$. Then the average success probability of the game is given by
\begin{equation}
\label{qprob}
S_{n}(b=x_{y}) = \dfrac{1}{2^n n}\sum\limits_{x,y}p(b=x_y|P_{x},M_{y}).
\end{equation}
Now, to help Bob, Alice can communicate some bits to him over a classical communication channel. The game becomes trivial if Alice communicates $n$ bits to Bob, and non-triviality may arise if Alice communicates fewer than that. For our purpose here, we impose a constraint - the parity-oblivious condition \cite{spekk09} - that has to be satisfied by Alice's inputs. This demands that Alice can communicate any number of bits to Bob, but they must not reveal any information about the parity of Alice's input. The parity-oblivious condition eventually provides an upper bound on the number of bits that can be communicated.
Following Spekkens \emph{et al.} \cite{spekk09}, we define a parity set $ \mathbb{P}_n= \{x|x \in \{0,1\}^n,\sum_{r} x_{r} \geq 2\} $ with $r\in \{1,2,...,n\}$. Explicitly, the parity-oblivious condition dictates that for any $s \in \mathbb{P}_{n}$, no information about $s\cdot x = \oplus_{r} s_{r}x_{r}$ (the $s$-parity) is to be transmitted to Bob, where $\oplus$ is the sum modulo $2$. We then have $s$-parity-$0$ and $s$-parity-$1$ sets. In an operational theory, the parity-oblivious condition demands that the following relation is satisfied,
\begin{align}
\label{poc1}
\forall s: \frac{1}{2^{n-1}}\sum\limits_{x|x.s=0} P(P_{x}|b,M_{y})=\frac{1}{2^{n-1}}\sum\limits_{x|x.s=1} P(P_{x}|b,M_{y}).
\end{align}
For example, when $ n=2 $ the set is $\mathbb{P}=\{11\} $, so no information about $x_1\oplus x_2$ can be transmitted by Alice. It was shown in \cite{spekk09} that to satisfy the parity-oblivious condition in the $n$-bit case, the communication from Alice to Bob has to be restricted to be one bit.
The maximum average success probability in such a classical $ n $-bit parity-oblivious RAC is ${(n+1)}/{2n}$. While the explicit proof can be found in \cite{spekk09}, a simple trick can saturate the bound as follows. Assume that Alice always encodes the first bit (prior agreement between Alice and Bob) and sends it to Bob. If $y=1$, occurring with probability $ 1/n $, Bob can predict the outcome with certainty, and for $y \neq 1$, occurring with the probability of $(n-1)/n$, he at best guesses the bit with probability $1/2$. Hence the total probability of success is derived \cite{spekk09} as
\begin{align}
\label{cb}
(S_{n}(b=x_{y}))_{c}\leq \frac{1}{n} + \frac{(n-1)}{2n} = \frac{1}{2}\left(1+\frac{1}{n}\right).
\end{align}
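For illustration, the bound just derived can be evaluated with a short script; this is merely a numerical restatement of Eq. (\ref{cb}) under the guess-the-first-bit strategy (plain Python, not part of the protocol):
\begin{verbatim}
# Classical parity-oblivious RAC bound: Bob knows x_1 with certainty
# when y = 1 (probability 1/n) and guesses with probability 1/2
# otherwise, reproducing (n + 1)/(2n).
def classical_bound(n):
    return 1.0 / n + (n - 1) / (2.0 * n)

for n in (2, 3, 4):
    print(n, classical_bound(n), (n + 1) / (2.0 * n))
# n = 3 gives 2/3, the classical threshold used throughout Sec. III.
\end{verbatim}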
In quantum RAC, Alice encodes her length-$ n $ string $ x \in\{0,1\}^{n} $ into quantum states $ \rho_{x}^{0}$, prepared by a procedure $P_{x}$, Bob performs a two-outcome measurement $M_{y}$ for every $y \in \{1,2,...,n\}$ and reports outcome $b$ as his output. Average success probability in quantum theory can then be written as,
\begin{eqnarray}
\label{succ1}
S_{n}= \dfrac{1}{2^n n} \sum_{y=1}^{n} \sum\limits_{x \in \{0,1\}^{n}} Tr[\rho_{x}^{0} M_{b|y}].
\end{eqnarray}
The parity-oblivious constraint {imposes} the following condition to be satisfied by Alice's input states,
\begin{align}
\label{poc}
\forall s: \frac{1}{2^{n-1}}\sum\limits_{x|x.s=0} \rho_{x}^{0}=\frac{1}{2^{n-1}}\sum\limits_{x|x.s=1} \rho_{x}^{0}.
\end{align}
It has been shown in \cite{spekk09,ghorai18} that the optimal quantum success probability for the $2$-bit parity-oblivious RAC is $(S_{2})^{opt}=(1/2)(1+1/\sqrt{2})$, and for the $3$-bit case it is $(S_{3})^{opt}=(1/2)(1+1/\sqrt{3})$. In both cases, a qubit system is enough to obtain the optimal quantum value. But for $n>3$ one requires a higher-dimensional system. In the entanglement-assisted variant of the parity-oblivious RAC with $n>3$, the optimal values of the success probabilities achievable with qubit systems were found in \cite{pan2020}. However, in this work we consider the prepare-measure scenario and hence we separately prove the results for $n=3$ and $n=4$.
A remark on the ontological model of quantum theory may be useful to understand the type of non-classicality appearing in the parity-oblivious quantum RAC. The satisfaction of the parity-oblivious condition in an operational theory implies that no measurement can distinguish the parity of the inputs. This defines an equivalence class of preparations \cite{spek05,kunjwal,pan19} which has an equivalent representation at the level of the ontic states. It has been demonstrated in \cite{spekk09} that parity-obliviousness at the operational level must also be satisfied at the level of ontic states if the ontological model of quantum theory is preparation non-contextual. Thus, in a preparation non-contextual model, the classical bound remains the same as given in Eq. (\ref{cb}). Quantum violation of this bound thus demonstrates a form of non-classicality - preparation contextuality. Throughout this paper, by quantum advantage we refer to the violation of preparation non-contextuality (unless stated otherwise); to avoid clumsiness we skip a detailed discussion and refer the reader to \cite{spek05,spekk09,kunjwal,pan19}.
We first consider the $3$-bit parity-oblivious RAC and demonstrate that at most three independent Bobs can sequentially share the quantum advantage, assuming the quantum system to be a qubit. We then demonstrate how this sequential measurement scenario enables the SDI certification of multiple unsharpness parameters along with the qubit states and the measurement settings. We note here that the MTB protocol was extended in \cite{wei21} to the $3$-bit standard RAC to certify a single unsharpness parameter. In our work, we consider the parity-oblivious RAC and certify multiple unsharpness parameters. While the mathematical approach in \cite{wei21} follows the MTB protocol \cite{mohan2019}, our approach is quite different from \cite{mohan2019,wei21}. Also, we extend our approach to $4$-bit standard and parity-oblivious RACs and provide some important results in Sec. VI.
\section{Sequential quantum advantage in $3$-bit RAC}
In a sequential quantum RAC \cite{tava2018}, multiple independent observers perform a prepare-transform-measure task. Alice encodes her bits into a quantum system and sends it to the next observer (say, Bob). There is one Alice and multiple independent Bobs. After receiving the system, the first Bob (say, Bob$_{1}$) performs random measurements to decode the information and relays the system to the second Bob (Bob$_{2}$), who does the same as the first Bob. The chain continues as long as the $k^{th}$ Bob (where $k$ is arbitrary and has to be determined) obtains the quantum advantage.
Before examining how many independent Bobs can get the quantum advantage sequentially in a $3$-bit RAC, let us first discuss the classical RAC task in sequence. When the physical devices are classical, the states being sent by Alice are diagonal in the same basis in which the multiple independent Bobs perform their measurements sequentially. In that case, the measurement of the first Bob does not disturb the system, and hence the decoding measurement of the second Bob is not influenced by the first Bob. In other words, there exists no trade-off between sequential Bobs, and each of them can obtain the optimal classical success probability. The range in which the classical success probabilities of the standard RAC lie is $\frac{1}{2} \leq (S_{3}^{1},S_{3}^{2},\dots, S_{3}^{k}) \leq \frac{3}{4}$. However, in the parity-oblivious RAC, as explained in Sec. II, the classical success probabilities lie in $\frac{1}{2} \leq (S_{3}^{1},S_{3}^{2},\dots, S_{3}^{k}) \leq \frac{2}{3}$. In the $3$-bit RAC, we will find that the input states for which the optimal quantum success probability is obtained automatically satisfy the parity-oblivious conditions. Hence, we keep our discussion in the parity-oblivious setting for the $3$-bit case.
On the other hand, in the quantum case, the prior measurements in general disturb Alice's input quantum states and thereby influence the success probability of the subsequent Bobs. There then exists an information-disturbance trade-off - the more information one gains from the system, the more disturbance is caused to it. Sharp projective measurements disturb the system the most. Hence, to get the quantum advantage for the $k^{th}$ Bob (where $k$ is arbitrary), the previous Bobs have to perform unsharp POVM measurements. To obtain the maximum number of independent Bobs who can share the quantum advantage, the measurements of the previous Bobs have to be so unsharp that they are just enough to reveal the quantum advantage. We find that at most two independent Bobs can share the quantum advantage in the $3$-bit standard RAC, but at most three Bobs can share the quantum advantage in the $3$-bit parity-oblivious RAC.
{In the $3$-bit sequential quantum RAC, Alice randomly encodes her three-bit string $x=(x_{1}, x_{2}, x_{3}) \in \{0,1\}^3$ into eight qubit states $\rho_x^{0}$ and sends them to Bob$_{1}$. After receiving the system, Bob$_{1}$ randomly performs unsharp measurements (with unsharpness parameter $\lambda_{1}$) of the dichotomic observables $B_{y_{1}}$ ($y_{1}=1,2,3$) and relays the system to Bob$_{2}$, who carries out unsharp measurements of the observables $B_{y_{2}}$ ($y_{2}=1,2,3$) with unsharpness parameter $\lambda_{2}$, and so on. The scheme is depicted in Fig.~\ref{fig1}. Each Bob examines whether a quantum advantage over the classical RAC is obtained, and the process stops at the first Bob for whom no advantage is obtained. Let us also define the measurement observables of Bob$_{k}$ as $B_{y_{k}}$ with $y_{k}\in\{1,2,3\}$. Here we assume that a particular observer (say Bob$_{k}$) is provided with different possible measurement instruments with the same value of the unsharpness parameter $\lambda_{k}$. Although each Bob could assign different values of the unsharpness parameter to different measurement settings, for our purpose it is sufficient to assume the same value for all the measurement settings that a particular Bob uses.}
If $k^{th}$ Bob's instruments are characterized by Kraus operators $\{K_{b_{k}|y_{k}}\}$ then after $k^{th}$ Bob's measurement, the average state relayed to Bob$_{k+1}$ is given by
\begin{equation}
\rho^{k}_x = \frac{1}{3}\sum_{y_{k}, b_{k}} K_{b_{k}|y_{k}}\rho_{x}^{k-1}K_{b_{k}|y_{k}}^{\dagger}, \label{eq:D}
\end{equation}
where $\forall k$, $b_{k}\in \{0,1\}$ and $\sum_{b_{k}} K_{b_{k}|y_{k}}^{\dagger}K_{b_{k}|y_{k}}=\mathbb{I}$, with $K_{b_{k}|y_{k}}= U\sqrt{M_{b_{k}|y_{k}}}$ where $U$ is a unitary operator. For simplicity, here we set $U=\mathbb{I}$; our arguments remain valid up to any unitary transformation. Here $M_{b_{k}|y_{k}} = \left(\mathbb{I}+\lambda_{k}(-1)^{x_{y_{k}}} \hat{b}_{y_{k}}\cdot\sigma\right)/2$ is the POVM element corresponding to the result $b_{k}$ of the measurement $B_{y_{k}}$.
\begin{figure*}
\centering
\includegraphics[scale=0.4]{RAC_1.pdf}
\caption{Block diagram of the prepare-transform-measure sequential encoding-decoding scheme involving four parties. The first party receives a random three-bit string, which she encodes into a qubit system. The system is sent to the next party, who performs one of three binary-outcome measurements at random to decode the message and relays the system to the next party, and so on.}\label{fig1}
\end{figure*}
The Kraus operators for the k$^{th}$ Bob can be written as $K_{\pm|y_{k}}= \sqrt{\frac{(1\pm\lambda_{k})}{2}}P_{y_{k}}^{+} +\sqrt{\frac{(1\mp\lambda_{k})}{2}}P_{y_{k}}^{-} = \alpha_{k} \mathbb{I} \pm \beta_{k} B_{y_{k}}$, where $P_{y_{k}}^{\pm}=(\mathbb{I}\pm B_{y_{k}})/2$ are the projectors and $\lambda_{k}$ is the unsharpness parameter. Here $\alpha_{k} = \frac{1}{2}\Big(\sqrt{\frac{(1-\lambda_{k})}{2}}+ \sqrt{\frac{(1+\lambda_{k})}{2}}\Big)$ and $\beta_{k} = \frac{1}{2}\Big(\sqrt{\frac{(1+\lambda_{k})}{2}}-\sqrt{\frac{(1-\lambda_{k})}{2}}\Big)$, with $\alpha_{k}^2 +\beta_{k}^2 = 1/2$ and $4\alpha_{k}\beta_{k}=\lambda_{k}$.
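As an illustrative numerical check of the identities $\alpha_{k}^2+\beta_{k}^2=1/2$ and $4\alpha_{k}\beta_{k}=\lambda_{k}$ (a short sketch in plain Python, given only for the reader's convenience):
\begin{verbatim}
# Check alpha^2 + beta^2 = 1/2 and 4*alpha*beta = lambda numerically.
from math import sqrt

def alpha_beta(lam):
    a = 0.5 * (sqrt((1 - lam) / 2) + sqrt((1 + lam) / 2))
    b = 0.5 * (sqrt((1 + lam) / 2) - sqrt((1 - lam) / 2))
    return a, b

for lam in (0.3, 1 / sqrt(3), 0.7637, 1.0):
    a, b = alpha_beta(lam)
    assert abs(a * a + b * b - 0.5) < 1e-12
    assert abs(4 * a * b - lam) < 1e-12
print("identities verified")
\end{verbatim}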
By using Eq. (\ref{succ1}) the quantum success probability for the k$^{th}$ Bob can be written as,
\begin{equation}
S_{3}^{k} =\frac{1}{24}\sum_{x \in \{0,1\}^3}\sum_{y_{k}=1,2,3}Tr\Big[\rho_{x}^{k-1} M_{b_{k}|y_{k}}\Big]. \label{eq:S}
\end{equation}
Let Alice's eight input qubit states be $\rho_{x}^{0} = \frac{\mathbb{I}+\vec{a}_{x}\cdot \sigma}{2}$, where $\vec{a}_{x}$ is the Bloch vector with $||\vec{a}_{x}||\leq 1$. By using Eq. \eqref{eq:S}, the quantum success probability for Bob$_{1}$ can be written as
\begin{equation}
S_{3}^{1} =\frac{1}{24}\sum_{x \in \{0,1\}^3}\sum_{y_{1}=1,2,3}Tr\Big[\rho_x^{0} M_{b_{1}|y_{1}}\Big]. \label{s31}
\end{equation}
We consider that $M_{b_{1}|y_{1}}$ are unbiased POVMs represented by $M_{b_{1}|y_{1}}=(\mathbb{I}+ 4\alpha_{1}\beta_{1}(-1)^{x_{y_1}} B_{y_{1}})/2$. Using this in Eq. (\ref{s31}) and further simplifying, we get
\begin{equation}
S_{3}^{1}=\frac{1}{2}+\frac{\alpha_{1} \beta_{1}}{12}\sum_{r,y_{1}=1,2,3}\big(\delta_{r,y_{1}}\vec{m}_{r}\cdot \hat{b}_{y_{1}}\big), \label{eq:succ1}
\end{equation}\\
where $\vec{m}_{r}=||\vec{m}_{r}|| \hat{m}_{r}$ (with $\hat{m}_{r}$ a unit vector) are unnormalized vectors that can explicitly be written as
\begin{align}
\vec{m}_1=(\vec{a}_{000}-\vec{a}_{111})+(\vec{a}_{001}-\vec{a}_{110})+(\vec{a}_{010}-\vec{a}_{101})-(\vec{a}_{100}-\vec{a}_{011}) \nonumber \\
\vec{m}_2= (\vec{a}_{000}-\vec{a}_{111})+(\vec{a}_{001}-\vec{a}_{110})-(\vec{a}_{010}-\vec{a}_{101})+(\vec{a}_{100}-\vec{a}_{011}) \nonumber\\
\vec{m}_3= (\vec{a}_{000}-\vec{a}_{111})-(\vec{a}_{001}-\vec{a}_{110})+(\vec{a}_{010}-\vec{a}_{101})+(\vec{a}_{100}-\vec{a}_{011}). \label{eq:effd}
\end{align}
Next, if Bob$_{1}$ performs unsharp measurements then the average state that is relayed to Bob$_{2}$ is obtained from Eq. \eqref{eq:D} and is given by,
\begin{equation}
\rho_{x}^{1} =\frac{1}{2}\left( \mathbb{I} + \vec{a}^{1}_{x}\cdot\sigma\right),
\end{equation}
where
\begin{equation}
\vec{a}^{1}_{x}= 2\left(\alpha_{1}^2-\beta_{1}^{2}\right)\vec{a}_{x} +\frac{4\beta_{1}^2}{3}\sum_{y_{1}=1,2,3}\left( \hat{b}_{y_{1}}\cdot\vec{a}_{x}\right) \hat{b}_{y_{1}} \label{eq:vecA}
\end{equation}
is the Bloch vector of the reduced state. The quantum success probability for Bob$_{2}$ is calculated by using Eqs.\eqref{eq:S} and \eqref{eq:vecA} as,
\begin{equation}
\begin{split}
S_{3}^{2} &=\frac{1}{24}\sum_{x \in \{0,1\}^3}\sum_{y_{2}=1,2,3}Tr\Big[\rho_{x}^1 M_{b_{2}|y_{2}}\Big]\\
&= \frac{1}{2}+\frac{1}{48}\Big[2\left(\alpha_{1}^2-\beta_{1}^{2}\right)\sum_{r,y_{2}=1,2,3}\big(\delta_{r,y_{2}}\vec{m}_{r}\cdot \hat{b}_{y_{2}}\big)\\
&+\frac{4\beta_{1}^2}{3} \sum_{r,y_{1}, y_{2}=1,2,3}\left(\delta_{r,y_{2}}\vec{m}_{r}\cdot \hat{b}_{y_{1}}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{2}}\right)\Big]. \label{eq:succ2}
\end{split}
\end{equation}
In order to find the optimal trade-off between the success probabilities of Bob$_{1}$ and Bob$_{2}$, the maximum quantum value of $S_{3}^{2}$ for the given value of $S_{3}^{1}$ has to be derived. This leads {to an optimization} of $S_{3}^{2}$ over all $\rho_{x}^{0}$, $ \hat{b}_{y_1}$ and $ \hat{b}_{y_2}$.
To optimize $S_{3}^{2}$ we note that the states $\rho_{x}^{0}$ prepared by Alice should be pure, i.e., $||\vec{a}_{x}||=1$. Otherwise the magnitudes $||\vec{m}_{r}||$ decrease, leading to a decrease of the overall success probability. Again, we must have antipodal pairs (joining vertices of a unit cube inside the Bloch sphere) constituting each $\vec{m}_{r}$ in Eq. \eqref{eq:effd}. So the overall maximization of $S_{3}^{2}$ requires the following: $\vec{a}_{000}=-\vec{a}_{111}$, $\vec{a}_{001}=-\vec{a}_{110}$, $\vec{a}_{010}=-\vec{a}_{101}$ and $\vec{a}_{100}=-\vec{a}_{011}$. From this choice we can rewrite the $\vec{m}_{r}$s as $\vec{m}_1= 2(\vec{a}_{000}+\vec{a}_{001}+\vec{a}_{010}-\vec{a}_{100})$, $\vec{m}_2= 2(\vec{a}_{000}+\vec{a}_{001}-\vec{a}_{010}+\vec{a}_{100})$ and $\vec{m}_3= 2(\vec{a}_{000}-\vec{a}_{001}+\vec{a}_{010}+\vec{a}_{100})$.
It is seen from Eq. (\ref{eq:succ2}) that, since $\alpha_{1}>\beta_{1}$, for the maximization of $S_{3}^{2}$ we must have $\vec{m}_{r}$ along the direction of $\hat{b}_{y_2}$ when $r =y_2$. We can then write
\begin{equation}
\begin{split}
S_{3}^{2} &\leq \frac{1}{2}+\frac{1}{48}\Big[2\left(\alpha_{1}^2-\beta_{1}^{2}\right)\sum_{r=1,2,3}||\vec{m}_{r}||\\
&+\frac{4\beta_{1}^2}{3} \sum_{r, y_{1}=1,2,3}\left(\vec{m}_{r}\cdot \hat{b}_{y_{1}}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{r}\right)\Big]. \label{eq:succ211}
\end{split}
\end{equation}
{Using concavity of the square root} $\sum_{y_{1}=1}^{3}||\vec{m}_{r}|| \leq \sqrt{3\sum_{y_{1}=1}^{3}||\vec{m}_{r}||^{2}}$ and putting the expressions of $\vec{m}_{r}$s, we have
\begin{equation}
\sum_{y_{1}=1}^{3}||\vec{m}_{r}|| \leq 2\sqrt{48-3\left(\vec{a}_{000}-\vec{a}_{001}-\vec{a}_{010}-\vec{a}_{100}\right)^{2}}. \label{eq:msum}
\end{equation}
We can then get $max(\sum_{y_{1}=1}^{3}||\vec{m}_{r}||)=8\sqrt{3}$ when the condition
\begin{align}
\label{poco}
\vec{a}_{000}-\vec{a}_{001}-\vec{a}_{010}-\vec{a}_{100}=0
\end{align}
in Eq. \eqref{eq:msum} is satisfied. This in turn implies that each $||\vec{m}_{r}||$ equals $8/\sqrt{3}$ when equality in Eq. \eqref{eq:msum} holds. The condition in Eq. (\ref{poco}) leads to the following relations between the input Bloch vectors: $ \vec{a}_{000}\cdot\vec{a}_{001}+\vec{a}_{000}\cdot\vec{a}_{010}+\vec{a}_{000}\cdot\vec{a}_{100}=1$, \ $\vec{a}_{000}\cdot\vec{a}_{001}-\vec{a}_{001}\cdot\vec{a}_{010}-\vec{a}_{001}\cdot\vec{a}_{100}=1$, \
$\vec{a}_{000}\cdot\vec{a}_{010}-\vec{a}_{001}\cdot\vec{a}_{010}-\vec{a}_{010}\cdot\vec{a}_{100}=1,$ and \ $\vec{a}_{000}\cdot\vec{a}_{100}-\vec{a}_{001}\cdot\vec{a}_{100}-\vec{a}_{010}\cdot\vec{a}_{100}=1$.
Solving the above set of relations one finds $\vec{a}_{000}\cdot\vec{a}_{001}= -\vec{a}_{010}\cdot\vec{a}_{100}$, $\vec{a}_{000}\cdot\vec{a}_{010}= -\vec{a}_{001}\cdot\vec{a}_{100}$ and $\vec{a}_{000}\cdot\vec{a}_{100}= -\vec{a}_{001}\cdot\vec{a}_{010}$. Further, by noting that $||\vec{m}_{1}|| = ||\vec{m}_{2}|| = ||\vec{m}_{3}||$, we get $\vec{a}_{x}\cdot\vec{a}_{x'}=+\frac{1}{3}$ for $x=000$ and $x'\in\{001,010,100\}$, $\vec{a}_{x}\cdot\vec{a}_{x'}=-\frac{1}{3}$ for distinct $x,x'\in\{001,010,100\}$, and $\vec{a}_{x}\cdot\vec{a}_{x'}=1$ for $x'= x$ (the remaining four states being the corresponding antipodal vectors). It can be easily checked that the four unit vectors $\vec{a}_{000}$, $\vec{a}_{001}$, $\vec{a}_{010}$ and $\vec{a}_{100}$ form a regular tetrahedron when represented on the Bloch sphere.
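As an illustrative numerical check (a sketch in plain Python with numpy; the concrete tetrahedral configuration used below is the one listed later in certification statement (i)), one can verify that this choice yields mutually orthogonal $\vec{m}_{r}$ of equal norm $8/\sqrt{3}$, so that $\sum_{r}||\vec{m}_{r}||=8\sqrt{3}$:
\begin{verbatim}
import numpy as np

s3 = np.sqrt(3.0)
# Representative Bloch vectors (regular tetrahedron); the other four
# input states are the corresponding antipodal vectors.
a000 = np.array([1, 1, 1]) / s3
a001 = np.array([1, 1, -1]) / s3
a010 = np.array([-1, 1, 1]) / s3
a100 = np.array([1, -1, 1]) / s3

m1 = 2 * (a000 + a001 + a010 - a100)
m2 = 2 * (a000 + a001 - a010 + a100)
m3 = 2 * (a000 - a001 + a010 + a100)

for m in (m1, m2, m3):
    assert abs(np.linalg.norm(m) - 8 / s3) < 1e-12   # each norm = 8/sqrt(3)
total = sum(np.linalg.norm(m) for m in (m1, m2, m3))
assert abs(total - 8 * s3) < 1e-12                   # sum = 8*sqrt(3)
assert abs(np.dot(m1, m2)) < 1e-12                   # mutual orthogonality
assert abs(np.dot(m1, m3)) < 1e-12
assert abs(np.dot(m2, m3)) < 1e-12
print("tetrahedral configuration verified")
\end{verbatim}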
Imposing the above conditions in Eq. \eqref{eq:effd} we immediately get $\hat{m}_{r}\cdot \hat{m}_{r^{'}}=\delta_{rr^{'}}$, i.e., $\hat{m}_1$, $\hat{m}_2$ and $\hat{m}_3$ are mutually orthogonal unit vectors. Without loss of generality we can then fix $\hat{m}_1=\hat{x}$, $\hat{m}_2=\hat{y}$ and $\hat{m}_3=\hat{z}$. This implies that the maximum value $max\left(S_{3}^{2}\right)\equiv\Delta_{2}$ can be obtained when the unit vectors of Bob$_{2}$ are $\hat{b}_{y_{2}=1}=\hat{x}$, $\hat{b}_{y_{2}=2}=\hat{y}$ and $\hat{b}_{y_{2}=3}=\hat{z}$. It is then straightforward to understand from the last part of Eq. (\ref{eq:succ211}) that the measurement settings of Bob$_{1}$ have to be the same as those of Bob$_{2}$, i.e., $\hat{b}_{y_{2}}= \hat{b}_{y_{1}}=\hat{m}_{r}$ when $y_{1}=y_{2}=r$. We then have
\begin{equation}
\label{eq:succ2d}
S_{3}^{2}\leq\frac{1}{2}+\frac{\left(3\alpha_{1}^2-\beta_{1}^2\right)}{3\times24} \sum\limits_{y_{1}=1,2,3}||\vec{m}_{r}||\equiv \Delta_{2}.
\end{equation}
It can be seen that, given the values of $\alpha_{1}$ and $\beta_{1}$, the maximization of $S_{3}^{2}$ provides the success probability of Bob$_{1}$ of the form
\begin{equation}
S_{3}^{1}=\frac{1}{2}+\frac{\alpha_{1} \beta_{1}}{12}\sum_{y_{1}=1,2,3}||\vec{m}_{r}||\label{eq:succ12}.
\end{equation}\\
Note that both $S_{3}^{1}$ and $S_{3}^{2}$ are simultaneously optimized when the quantity $\sum_{y_{1}=1,2,3}||\vec{m}_{r}||$ is optimized. Let us denote $max(S_{3}^{1})$ as $\Delta_{1}$. By putting in the values of $\alpha_{1}$ and $\beta_{1}$ and by noting that $max(\sum_{y_{1}=1}^{3}||\vec{m}_{r}||)=8\sqrt{3}$, we thus have the optimal pair of success probabilities corresponding to Bob$_{1}$ and Bob$_{2}$ given by
\begin{eqnarray}
\Delta_{1} = \frac{1}{2}\left(1+\frac{\lambda_{1}}{\sqrt{3}}\right); \ \
\Delta_{2} = \frac{1}{2}\left(1+ \frac{1+2\sqrt{1-\lambda_{1}^2}}{3\sqrt{3}}\right) \label{eq:succ21}
\end{eqnarray}
The optimal trade-off between success probabilities of Bob$_{2}$ and Bob$_{1}$ can then be written as,
\begin{equation}
{\Delta_{2}({\Delta_{1}})= \frac{1}{2}+\frac{\left(1+2\sqrt{12\Delta_{1}-12\Delta_{1}^{2}-2}\right)}{6\sqrt{3}}},
\end{equation}
\begin{figure}
\centering
\includegraphics[scale=0.47]{opttradeof.pdf}
\caption{Optimal trade-off between the quantum success probabilities of Bob$_{1}$ and Bob$_{2}$, shown by the solid orange curve; the shaded portion gives the sub-optimal range. The blue solid line corresponds to the classical parity-oblivious RAC for the same two observers. } \label{fig3}
\end{figure}
showing that $\Delta_{2}(\Delta_{1})$ is a function of $\Delta_{1}$ only, which in turn depends solely on the unsharpness parameter $\lambda_{1}$ once $S_{3}^{2}$ is maximized.
Fig.~\ref{fig3} represents the optimal trade-off characteristics between $\Delta_{1}$ and $\Delta_{2}$. The blue line corresponds to the maximum success probability in the classical strategy, showing no information-disturbance trade-off between the measurements. Each observer can get the maximum success probability without any dependence on the other observers. On the other hand, the optimal pair $(\Delta_{2}, {\Delta_{1}})$ in quantum theory exhibits a trade-off. The more Bob$_{1}$ disturbs the system, the more information he gains, and $\Delta_{1}$ increases. This eventually decreases the success probability $\Delta_{2}$ of Bob$_{2}$, and vice-versa. The orange curve gives the trade-off for the quantum success probabilities, and each point on it certifies a unique value of the unsharpness parameter $\lambda_{1}$. For example, when Bob$_{1}$ and Bob$_{2}$ get equal quantum advantage, i.e., $\Delta_{1}=\Delta_{2}=(17+\sqrt{3})/26\approx0.72$, the unsharpness parameter of Bob$_1$ is $\lambda_{1}=(4\sqrt{3}+3)/13 \approx0.763$. This is shown in Fig. \ref{fig3} by the blue point on the orange curve. Similarly, each point on the orange curve in Fig.~\ref{fig3} certifies a unique value of $\lambda_{1}$.
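The quoted numbers can be reproduced directly from Eq. \eqref{eq:succ21}; the following short sketch (plain Python, given only as an illustrative consistency check) confirms the equal-advantage point:
\begin{verbatim}
# Equal-advantage point of Eq. (succ21): Delta1(lam) = Delta2(lam).
from math import sqrt

def delta1(lam):
    return 0.5 * (1 + lam / sqrt(3))

def delta2(lam):
    return 0.5 * (1 + (1 + 2 * sqrt(1 - lam * lam)) / (3 * sqrt(3)))

lam = (4 * sqrt(3) + 3) / 13          # claimed crossing point, ~0.763
assert abs(delta1(lam) - delta2(lam)) < 1e-9
print(lam, delta1(lam), (17 + sqrt(3)) / 26)   # ~0.763, ~0.7205, ~0.7205
\end{verbatim}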
Thus the optimal pair $(\Delta_{1}, \Delta_{2})$ uniquely certifies the preparation of Alice, the measurements of Bob$_{1}$ and Bob$_{2}$, and the unsharpness parameter $\lambda_{1}$. The certification statements are the following.
(i) Alice has encoded her message into the eight quantum states that are pure and pairwise antipodal represented by points at the vertices of a unit cube inside the Bloch sphere. One of the examples is, $\vec{a}_{000}=(\hat{x}+\hat{y}+\hat{z})/\sqrt{3}$, $\vec{a}_{001}=(\hat{x}+\hat{y}-\hat{z})/\sqrt{3}$, $\vec{a}_{010}=(-\hat{x}+\hat{y}+\hat{z})/\sqrt{3}$, $\vec{a}_{100}=(\hat{x}-\hat{y}+\hat{z})/\sqrt{3}$ and their respective antipodal pairs.
(ii) Bob$_{1}$ performs unsharp measurements corresponding to the observables along three mutually unbiased bases, say, $B_{1}=\lambda_{1}\sigma_{x}$, $B_{2}=\lambda_{1}\sigma_{y}$ and $B_{3}=\lambda_{1}\sigma_{z}$, where $\lambda_{1}=\sqrt{3}\left(2\Delta_{1}-1\right)$. Bob$_{2}$'s measurement settings are the same as Bob$_{1}$'s, but his measurements are sharp, rank-one projective measurements.
It is important to note here that the input states that provide the optimal pair ($\Delta_{1},\Delta_{2}$) must satisfy the parity-oblivious condition in quantum theory. Otherwise the comparison between the classical and quantum bounds becomes unfair. For $n=3$ the parity set $\mathbb{P}_{3}$ contains four elements, $\mathbb{P}_{3}:=\{011,101,110,111\}$. For every element $s\in \mathbb{P}_{3}$ the parity-oblivious condition given by Eq. (\ref{poc}) has to be satisfied. We see that for $s=011$, Alice's input states satisfy the parity-oblivious condition, $\left(\rho_{000}+\rho_{100}+\rho_{011}+\rho_{111}\right)/4=\left(\rho_{010}+\rho_{101}+\rho_{001}+\rho_{110}\right)/4 =\mathbb{I}/2$. This is due to the fact that for the optimal pair ($\Delta_{1}$,$\Delta_{2}$) one requires antipodal pairs, i.e., $\left(\rho_{000}+\rho_{111}\right)/2=\left(\rho_{100}+\rho_{011}\right)/2=\mathbb{I}/2$ and $\left(\rho_{010}+\rho_{101}\right)/2=\left(\rho_{001}+\rho_{110}\right)/2=\mathbb{I}/2$. Similarly, for $s=101$ and $s=110$ the parity-oblivious condition is automatically satisfied and constitutes a trivial constraint on Alice's inputs. But for $s=111$, to satisfy the parity-oblivious condition Alice's inputs must satisfy the relation $\left(\rho_{111}+\rho_{100}+\rho_{001}+\rho_{010}\right)/4=\left(\rho_{000}+\rho_{101}+\rho_{011}+\rho_{110}\right)/4 =\mathbb{I}/2$. This demands that Alice's inputs satisfy the non-trivial relation $\vec{a}_{000}-\vec{a}_{001}-\vec{a}_{010}-\vec{a}_{100}=0$. Interestingly, this is the condition that was required to obtain $\Delta_{2}$ and $\Delta_{1}$ in Eq. (\ref{poco}). Thus, Alice's inputs satisfy the parity-oblivious constraint as imposed in the classical RAC.
\subsection{Sub-optimal scenario}
Note that there are many practical reasons due to which precise certification may not be possible in a real experimental scenario. In other words, the optimal pair of success probabilities may not be achieved in practice, and hence the certification of $\lambda_{1}$ may not be possible uniquely. We argue that even in the sub-optimal scenario, when $S_{3}^{1}\leq \Delta_{1}$ and $S_{3}^{2}\leq \Delta_{2}$, the certification of an interval of values of $\lambda_{1}$ is possible from our protocol.
From Eq. \eqref{eq:succ21} we find that $\Delta_{1}$ can provide a lower bound $(\lambda_{1})^{min}$ to the unsharpness parameter $\lambda_{1}$ as,
\begin{equation}
\lambda_{1}\geq \sqrt{3}(2\Delta_{1}-1) \equiv (\lambda_{1})^{min} \label{eq:lowbound3},
\end{equation}
and the upper bound $(\lambda_{1})^{max}$ of $\lambda_{1}$ as,
\begin{equation}
\lambda_{1} \leq \sqrt{1-\Bigg(\frac{3\sqrt{3}(2\Delta_{2}-1)}{2}-\frac{1}{2}\Bigg)^2}\equiv (\lambda_{1})^{max}\label{eq:upbound3}.
\end{equation}
Here we assumed that Bob$_{2}$ performs a sharp projective measurement with $\lambda_{2}=1$. Note that both the upper and the lower bounds saturate and become equal to each other when $\Delta_{2}$ reaches its maximum value. Now, the quantum advantage for Bob$_1$ requires $\Delta_{1}> 2/3$, which fixes $(\lambda_{1})^{min}= 1/\sqrt{3}\approx 0.57$, yielding that any value of $\lambda_{1}\in [1/\sqrt{3},1]$ provides a quantum advantage for Bob$_{1}$. Again, as there is a trade-off between $\Delta_{1}$ and $\Delta_{2}$, in order to obtain the quantum advantage for Bob$_{2}$ the value of $\lambda_{1}$ has the upper bound $(\lambda_{1})^{max}=\sqrt{\sqrt{3}/2}\approx 0.93$.
Hence, when both Bob$_1$ and Bob$_2$ get the quantum advantage, the interval $0.57\leq \lambda_{1}\leq 0.93$ can be certified. The higher the accuracy of the experimental observation, the more precise the certification of $\lambda_{1}$. Importantly, this range can be further fine-tuned (in particular the upper bound) if Bob$_3$ gets the quantum advantage. This is explicitly discussed in Sec. IV and Sec. V.
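The two endpoints quoted above follow from Eqs. \eqref{eq:lowbound3} and \eqref{eq:upbound3} evaluated at the classical threshold $\Delta_{1}=\Delta_{2}=2/3$ (with $\lambda_{2}=1$); a minimal numerical sketch, assuming plain Python:
\begin{verbatim}
# Interval of lambda_1 certified when both Bobs just beat the classical
# bound 2/3: lower bound from Eq. (lowbound3), upper from Eq. (upbound3).
from math import sqrt

delta1 = delta2 = 2.0 / 3.0
lam1_min = sqrt(3) * (2 * delta1 - 1)                      # = 1/sqrt(3)
lam1_max = sqrt(1 - (3 * sqrt(3) * (2 * delta2 - 1) / 2 - 0.5) ** 2)
print(round(lam1_min, 4), round(lam1_max, 4))   # ~0.5774, ~0.9306
\end{verbatim}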
\section{Sequential $3$-bit RAC for three or more Bobs}
Let us now investigate the case where more than two independent Bobs perform sequential measurements. The question is how many of them can share the quantum advantage sequentially. First, to find the success probability for Bob$_{3}$, we assume that Bob$_{2}$ performs an unsharp measurement with unsharpness parameter $\lambda_{2}$. By using Eq. \eqref{eq:D} again, we find that the average state received by Bob$_{3}$ is $\rho^{2}_{x}=\left(\mathbb{I}+\vec{a}^{2}_{x}\cdot \sigma\right)/2$,
where
\begin{eqnarray}
\vec{a}^{2}_{x}&&= 4\left(\alpha_{1}^2-\beta_{1}^{2}\right)\left(\alpha_{2}^2-\beta_{2}^{2}\right)\vec{a}_{x}+\frac{8}{3}\beta_{1}^{2}\left(\alpha_{2}^2-\beta_{2}^{2}\right) \sum_{y_{1}=1,2,3}\left( \hat{b}_{y_{1}}\cdot\vec{a}_{x}\right) \hat{b}_{y_{1}} \nonumber \\ &&\hspace{1.5cm}+\frac{8}{3}\beta_{2}^{2}\left(\alpha_{1}^2-\beta_{1}^{2}\right) \sum_{y_{2}=1,2,3}\left( \hat{b}_{y_{2}}\cdot\vec{a}_{x}\right) \hat{b}_{y_{2}} \\
\nonumber
&&\hspace{1.5cm}+\frac{16\beta_{1}^2\beta_{2}^{2}}{9}\sum_{y_{1},y_{2}=1,2,3}\left( \hat{b}_{y_{1}}\cdot\vec{a}_{x}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{2}}\right) \hat{b}_{y_{2}}.
\end{eqnarray}
The quantum success probability for Bob$_{3}$ can then be written as,
\begin{eqnarray}
S_{3}^{3} =&&\frac{1}{2}+ \frac{1}{48} Tr\Bigg[ 4\left(\alpha_{1}^2-\beta_{1}^{2}\right)\left(\alpha_{2}^2-\beta_{2}^{2}\right)\sum_{y_{3}=1,2,3}\left(\delta_{r,y_{3}}\vec{m}_{r}\cdot \hat{b}_{y_{3}}\right) \nonumber \\
&&+\frac{8}{3}\beta_{1}^{2}\left(\alpha_{2}^2-\beta_{2}^{2}\right) \sum_{y_{1},y_{3}=1,2,3}\left( \delta_{r,y_{1}}\vec{m}_{r}\cdot \hat{b}_{y_{1}}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{3}}\right) \nonumber \\
&&+\frac{8}{3}\beta_{2}^{2}\left(\alpha_{1}^2-\beta_{1}^{2}\right) \sum_{y_{2},y_{3}=1,2,3}\left( \delta_{r,y_{2}}\vec{m}_{r}\cdot \hat{b}_{y_{2}}\right)\left( \hat{b}_{y_{2}}\cdot \hat{b}_{y_{3}}\right) \nonumber\\
&&+\frac{16\beta_{1}^2\beta_{2}^{2}}{9}\sum_{y_{1},y_{2},y_{3}=1,2,3}\left( \delta_{r,y_{1}}\vec{m}_{r}\cdot \hat{b}_{y_{1}}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{2}}\right) \left( \hat{b}_{y_{2}}\cdot \hat{b}_{y_{3}}\right) \Bigg]. \label{eq:succ3}
\end{eqnarray}
Using the maximization scheme presented in Sec. III, it is straightforward to see that $\vec{m}_{r}$ must be along the direction of $\hat{b}_{y_{3}}$ when $r=y_{3}$, that $\hat{b}_{y_{1}}$ must be along $\hat{b}_{y_{2}}$ when $y_{1}=y_{2}$, and so on. We can then write
\begin{equation}
S_{3}^{3} \leq \frac{1}{2}+\frac{1}{108}\Big[\left(3\alpha_{1}^2-\beta_{1}^2\right)\left(3\alpha_{2}^2-\beta_{2}^2\right)\sum_{y_{1}=1,2,3}||\vec{m}_{r}||\Big]\equiv \Delta_{3},\label{eq:succ3d}
\end{equation}
where the maximum value $max(S_{3}^{3})= \Delta_{3}$. It can then be seen that $S_{3}^{3}$, $S_{3}^{2}$ and $S_{3}^{1}$ can be jointly optimized if the quantity $\sum_{y_{1}=1}^{3}||\vec{m}_{r}||$ is maximized. By using the fact that $\max({\sum_{y_{1}=1}^{3}||\vec{m}_{r}}||)=8\sqrt{3}$ and putting in the values of $\alpha_{1}, \alpha_{2}, \beta_{1}$ and $\beta_{2}$ we have
\begin{equation}
\begin{split}
\Delta_{3} =\frac{1}{2}\Bigg(1+ \frac{(1+2\sqrt{1-\lambda_{1}^2})(1+2\sqrt{1-\lambda_{2}^2})}{9\sqrt{3}}\Bigg). \label{eq:succ31}
\end{split}
\end{equation}
Using the values of $\Delta_{1}$ and $\Delta_{2}$ from Eq. \eqref{eq:succ21}, combining them with Eq. \eqref{eq:succ31} and further simplifying, we find that the optimal triple of success probabilities of Bob$_{1}$, Bob$_{2}$ and Bob$_{3}$ is given by
\begin{equation}
\Delta_{3} ^{\Delta_{1},\Delta_{2}}=\frac{1}{2}\Bigg\{1+\frac{\Xi_{1}+2\sqrt{\Xi_{1}^{2}-\Xi_{2}^{2}}}{9\sqrt{3}}\Bigg\}, \label{eq:opt}
\end{equation}
where $\Xi_{1}=1+2\sqrt{12\Delta_{1}-12\Delta_{1}^2-2}$ and $\Xi_{2}=3\sqrt{3}(2\Delta_{2}-1)$.\\
This becomes nontrivial when at least one of $\Delta_{1}$, $\Delta_{2}$ or $\Delta_{3}$ surpasses the classical bound, as discussed earlier.
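As an illustrative consistency check of Eq. \eqref{eq:opt} against Eqs. \eqref{eq:succ21} and \eqref{eq:succ31}, the following sketch (plain Python) may be used; the expression for an unsharp Bob$_{2}$, $\Delta_{2}=\frac{1}{2}\big(1+\lambda_{2}(1+2\sqrt{1-\lambda_{1}^{2}})/(3\sqrt{3})\big)$, is our reading of the protocol and is consistent with Eqs. \eqref{eq:succ21} and \eqref{eq:bound1}:
\begin{verbatim}
# Verify Eq. (opt): Delta3 expressed via (Delta1, Delta2) matches the
# direct formula of Eq. (succ31) for generic unsharpness parameters.
from math import sqrt

def check(l1, l2):
    s1, s2 = sqrt(1 - l1 ** 2), sqrt(1 - l2 ** 2)
    d1 = 0.5 * (1 + l1 / sqrt(3))
    d2 = 0.5 * (1 + l2 * (1 + 2 * s1) / (3 * sqrt(3)))     # unsharp Bob_2
    d3_direct = 0.5 * (1 + (1 + 2 * s1) * (1 + 2 * s2) / (9 * sqrt(3)))
    xi1 = 1 + 2 * sqrt(12 * d1 - 12 * d1 ** 2 - 2)
    xi2 = 3 * sqrt(3) * (2 * d2 - 1)
    d3_opt = 0.5 * (1 + (xi1 + 2 * sqrt(xi1 ** 2 - xi2 ** 2)) / (9 * sqrt(3)))
    assert abs(d3_direct - d3_opt) < 1e-9

for l1, l2 in [(0.6273, 0.6772), (0.65, 0.80), (0.70, 0.75)]:
    check(l1, l2)
print("Eq. (opt) consistent with Eq. (succ31)")
\end{verbatim}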
The above approach can be generalized to an arbitrary number $k$ of Bobs. When Bob$_{k}$ uses the optimal choice of observables, the average state relayed to the $(k+1)^{th}$ Bob can be written as $\rho_{x}^{k}=\frac{1}{2}\left(\mathbb{I}+\vec{a}_{x}^{k}\cdot \sigma\right)$, where
\begin{equation}
\vec{a}_{x}^{k}=\frac{1}{3^{k}}\prod_{i=1}^{k}\left(1+2\sqrt{1-\lambda_{i}^2}\right)\vec{a}_{x}^{0}.
\end{equation}
Here the $\vec{a}_{x}^{0}$s are the Bloch vectors of the states prepared by Alice. Simply put, this state is along the same direction as the initial state prepared by Alice, but the length of the Bloch vector is shrunk by the factor $\prod_{i=1}^{k}(1+2\sqrt{1-\lambda_{i}^2})/3^{k}$. The optimal success probability of the $k^{th}$ Bob can be written as
\begin{equation}
\Delta_{k}=\frac{1}{2}\Bigg[1+\frac{\sqrt{3}}{3^{k}} \prod_{i=1}^{k-1}\left(1+2\sqrt{1-\lambda_{i}^2}\right)\Bigg], \label{eq:anyk}
\end{equation}
where the k$^{th}$ Bob performs a sharp projective measurement. In order to examine the longest sequence for which all Bobs can get the quantum advantage, let us consider the situation when Bob$_{1}$, Bob$_{2}$ and Bob$_{3}$ implement their unsharp measurements with the lower critical values of the unsharpness parameters at their respective sites. From Eq. \eqref{eq:anyk}, we get the lower critical values $\lambda_{1}=\frac{1}{\sqrt{3}}=0.5773$, $\lambda_{2}=0.6578$ and $\lambda_{3}=0.7873$. Putting these values in Eq. \eqref{eq:anyk} we get $\Delta_{4}= 0.6575 < 2/3$ for $k=4$, i.e., no quantum advantage is obtained for the fourth Bob.
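The chain of critical values quoted above can be reproduced with a short numerical sketch (plain Python; the formula $S_{k}=\frac{1}{2}\big[1+\lambda_{k}\frac{\sqrt{3}}{3^{k}}\prod_{i<k}(1+2\sqrt{1-\lambda_{i}^{2}})\big]$ for an unsharp Bob$_{k}$, which reduces to Eq. \eqref{eq:anyk} at $\lambda_{k}=1$, is our reading of the protocol and consistent with Eqs. \eqref{eq:bound1} and \eqref{eq:bound2}):
\begin{verbatim}
# Chain of minimal unsharpness parameters: each Bob_k sits just above
# the classical threshold 2/3, i.e. lambda_k * X_k = 1/3 with
# X_k = sqrt(3)/3**k * prod_{i<k}(1 + 2*sqrt(1 - lambda_i**2)).
from math import sqrt

lams = []
for k in range(1, 5):
    X = sqrt(3) / 3 ** k
    for lam in lams:
        X *= 1 + 2 * sqrt(1 - lam ** 2)
    delta_sharp = 0.5 * (1 + X)      # success prob. if Bob_k measures sharply
    lam_crit = 1 / (3 * X)           # threshold unsharpness for beating 2/3
    print(k, round(delta_sharp, 4), round(lam_crit, 4))
    if lam_crit >= 1:                # even a sharp Bob_k cannot beat 2/3
        break
    lams.append(lam_crit)
# k = 1, 2, 3 give critical values ~0.577, ~0.658, ~0.787;
# k = 4 gives Delta_4 ~ 0.6575 < 2/3, so a fourth Bob fails.
\end{verbatim}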
\section{Certification of multiple independent unsharpness parameters}
The optimal triple of success probabilities of Bob$_{1}$, Bob$_{2}$ and Bob$_{3}$ given by Eq. \eqref{eq:opt} is plotted in Fig.~\ref{fig2}. The orange cube represents the classical success probabilities, showing no trade-off. The three-dimensional semi-paraboloid over the cube represents the quantum trade-off between the success probabilities. Each point on the surface of the semi-paraboloid in Fig.~\ref{fig2} uniquely certifies $\lambda_{1}$ and $\lambda_{2}$, while the measurement of Bob$_{3}$ is taken to be projective with $\lambda_{3}=1$. For instance, let us consider the point where all the success probabilities have the same value, $(\Delta_{1}, \Delta_{2}, \Delta_{3})=(0.686,0.686,0.686)$. This particular point uniquely certifies $\lambda_{1}=0.6443$ and $\lambda_{2}=0.7641$. We can take any arbitrary point, for example, $(\Delta_{1}, \Delta_{2}, \Delta_{3})=(0.67,0.675,0.704)$ on the graph. This point certifies the unsharpness parameters $\lambda_{1}=0.588$ and $\lambda_{2}=0.695$.
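The certified values quoted above follow by inverting Eqs. \eqref{eq:succ21} and \eqref{eq:succ31}, i.e., by saturating the lower bounds in Eqs. \eqref{eq:bound1} and \eqref{eq:bound2}; a minimal numerical sketch (plain Python, illustrative only):
\begin{verbatim}
# Invert the optimal triple: lambda_1 from Delta_1, then lambda_2 from
# Delta_2 given lambda_1 (Bob_3 projective, lambda_3 = 1).
from math import sqrt

def certify(d1, d2):
    lam1 = sqrt(3) * (2 * d1 - 1)
    lam2 = 3 * sqrt(3) * (2 * d2 - 1) / (1 + 2 * sqrt(1 - lam1 ** 2))
    return lam1, lam2

print(certify(0.686, 0.686))    # ~ (0.6443, 0.7641)
print(certify(0.67, 0.675))     # ~ (0.5889, 0.6952)
\end{verbatim}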
In principle, one can uniquely certify the unsharpness parameters $\lambda_{1}$ and $\lambda_{2}$ through the optimal triple, but in a practical scenario quantum instruments are subject to imperfections and losses. We show that our scheme can also certify a range of values of $\lambda_{1}$ and $\lambda_{2}$. When all three Bobs get the quantum advantage, the observed values of the success probabilities fix the ranges of $\lambda_{1}$ and $\lambda_{2}$, which are obtained from Eqs. \eqref{eq:succ21} and \eqref{eq:succ31} and are respectively given by
\begin{equation}
\sqrt{3}(2\Delta_{1}-1) \leq \lambda_{1} \leq \sqrt{1-\Bigg(\frac{3\sqrt{3}(2\Delta_{2}-1)}{2\lambda_{2}}-\frac{1}{2}\Bigg)^2}\label{eq:bound1}
\end{equation}
and
\begin{equation}
\frac{3\sqrt{3}(2\Delta_{2}-1) }{1+2\sqrt{1-\lambda_{1}^2}}\leq \lambda_{2} \leq \sqrt{1-\frac{1}{4}\Bigg(\frac{9\sqrt{3}(2\Delta_{3}-1)}{1+2\sqrt{1-\lambda_{1}^2}}-1\Bigg)^2}. \label{eq:bound2}
\end{equation}
\begin{figure}
\centering
\includegraphics[scale=0.3]{graph3d.pdf}
\caption{Optimal trade-off between the success probabilities of Bob$_{1}$, Bob$_{2}$ and Bob$_{3}$. The blue point on the three-dimensional graph indicates the point where the success probabilities coincide, i.e., $\Delta_{1}$ = $\Delta_{2}$ = $\Delta_{3}$ = 0.686 $\geq$ $\frac{2}{3}$.}\label{fig2}
\end{figure}
It is seen from Eq. \eqref{eq:bound1} that although the lower bound of $\lambda_{1}$ depends only on the observational statistics $\Delta_{1}$, the upper bound does not. Rather, the upper bound of $\lambda_{1}$ is a function of $\lambda_{2}$ and the optimal success probability $\Delta_{2}$ of Bob$_{2}$. On the other hand, Eq. \eqref{eq:bound2} shows that both the upper and lower bound of $\lambda_{2}$ are not only dependent on the observational statistics but also on $\lambda_{1}$. This interdependence of sharpness parameters suggests a trade-off between them. By taking a value of $\lambda_{2}$, which is just above the lower critical value, we sustain the quantum advantage to subsequent Bobs. Each value of $\lambda_{1}$ above the lower critical value fixes the minimum value of unsharpness that is required to get the quantum advantage for Bob$_{2}$ while the need for a sustainable advantage to a subsequent Bob fixes its upper bound. The whole program runs as follows.
As already mentioned, the minimum value of unsharpness parameter required to get a quantum advantage is $\lambda_{min}=\frac{1}{\sqrt{3}}$. So a value $\lambda_{1}= \frac{1}{\sqrt{3}}+\epsilon_{1}$ where $\epsilon_{1}> 0$ would suffice. Furthermore putting this value of $\lambda_{1}$ into Eq. \eqref{eq:succ21} we can estimate the minimum value of unsharpness in Bob$_{2}$'s measurement as $\lambda_{2}= 0.6578+ \epsilon_{2}$ for all $\epsilon_{2} >0.3533\epsilon_{1} + O(\epsilon_{1}^2)$. Similarly, to get advantage for Bob$_{3}$ the minimum value of unsharpness parameter can be estimated from Eq. \eqref{eq:succ31} as $\lambda_{3}=0.7873+\epsilon_{3}$ where $\epsilon_{3}=0.1187\epsilon_{1}+O(\epsilon_{1}^2)$.
For an explicit example of how to estimate the interval of values of unsharpness parameters, let us consider the task where Bob$_{1}$ chooses the optimal measurement settings with unsharpness $\lambda_{1}=\frac{1}{\sqrt{3}}+0.05=0.6273$.
Their task is now to set the unsharpness parameters such that Bob$_{2}$ and Bob$_{3}$ can get the quantum advantage. From Eq. \eqref{eq:bound2}, Bob$_{2}$ can set his unsharpness parameter to any value in the range $0.6772\leq\lambda_{2}\leq0.8566$. The minimum value of the unsharpness parameter required to get an advantage at Bob$_{3}$'s site depends on both $\lambda_{1}$ and $\lambda_{2}$. With the same $\lambda_{1}$, for the lower critical value $\lambda_{2}=0.6772$ the unsharpness parameter of Bob$_{3}$'s measurement is bounded below as $0.8220\leq\lambda_{3}$; on the other hand, when Bob$_{2}$ performs his measurement with the upper critical unsharpness $\lambda_{2}=0.8566$, the bound on $\lambda_{3}$ becomes unity, which follows directly from Eq. \eqref{eq:succ31}.
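For reference, the endpoints of the interval quoted above follow from Eq. \eqref{eq:bound2} evaluated at the classical values $\Delta_{2}=\Delta_{3}=2/3$ (the substitution is our own check):
\[
\lambda_{2}^{min}=\frac{\sqrt{3}}{1+2\sqrt{1-0.6273^{2}}}\approx\frac{1.732}{2.558}\approx 0.6772,
\qquad
\lambda_{2}^{max}=\sqrt{1-\frac{1}{4}\Big(\frac{3\sqrt{3}}{1+2\sqrt{1-0.6273^{2}}}-1\Big)^{2}}\approx\sqrt{1-\frac{(1.031)^{2}}{4}}\approx 0.8566 .
\]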
Since, when Bob$_{3}$ gets the quantum advantage, the upper bound of $\lambda_{1}$ depends on both $\lambda_{2}$ and the success probabilities of the respective Bobs, $(\lambda_{1})^{max}$ is more restricted than the one obtained when quantum advantages only for Bob$_{1}$ and Bob$_{2}$ were considered. Using Eq. \eqref{eq:bound1} with $\Delta_{2}=2/3$ and $\lambda_{2}=1$, we had earlier fixed the allowed interval of $\lambda_{1}$ as $0.57 \leq \lambda_{1} \leq 0.93$ when only two Bobs are able to get an advantage; this is discussed in Sec. III. Now, if Bob$_{3}$ gets a quantum advantage ($\Delta_{3}>2/3$) along with Bob$_{1}$ and Bob$_{2}$, then using Eqs. \eqref{eq:succ21} and \eqref{eq:succ31} we find that the interval of $\lambda_{1}$ has to be $0.57 \leq \lambda_{1} \leq 0.77$. Thus, extending the quantum advantage to Bob$_{3}$ provides a narrower interval for $\lambda_{1}$ by decreasing its upper bound. In such a case a more efficient experimental verification is required to test it.
Let us denote these two ranges by \emph{R$_{1}$} and \emph{R$_{2}$} respectively. To get a practical impression of how such a certification works, let us consider a {seller} who sells a measurement instrument and claims that it works with a particular noise, given by an unsharpness parameter $\lambda_{1}$ whose value lies within \emph{R$_{2}$}. We can check, by the sequential arrangement discussed above and by observing the statistics of Bob$_{3}$, whether the instrument can be trusted or not. If we see that Bob$_{3}$ gets a quantum advantage, then we conclude that the seller is trusted and $\lambda_{1}$ must lie in \emph{R$_{2}$}. On the other hand, a failure of Bob$_{3}$ to obtain an advantage compels us to conclude that the instrument is not trusted and that $\lambda_{1}$ lies in \emph{R$_{1}$}, which may or may not be useful for a particular purpose.
It is then natural to think that if more Bobs get the quantum advantage, the interval of $\lambda_{1}$ will be narrower. However, for the 3-bit RAC, a fourth observer cannot get the quantum advantage. In search of such a possibility, we consider the $4$-bit sequential RAC.
\section{4-bit sequential RAC }
In the $4$-bit sequential quantum RAC, Alice now has a length-4 string randomly sampled from $x\in \{0,1\}^{4}$ which she encodes into sixteen qubit states $\rho_x = \frac{\mathbb{I}+\vec{a}_{x}\cdot\sigma}{2}$, and {sends} them to Bob$_{1}$. After receiving the system, Bob$_{1}$ randomly performs measurements of the dichotomic observables $B_{y_{1}}$ with $y_{1}\in[4]$ and relays the system to Bob$_{2}$, and so on. This process goes on as long as the $k^{th}$ Bob gets the quantum advantage, as also discussed for the 3-bit case. The main purpose of extending our analysis to the $4$-bit case is to examine whether more than three Bobs can get a quantum advantage sequentially. This may then provide the certification of more than two unsharpness parameters and a more efficient fine-tuning of the interval of values of the unsharpness parameter of a given measurement instrument.
For the $4$-bit sequential quantum RAC, we found that for the qubit system the optimal quantum value of the success probability is different in the standard and in the parity-oblivious RAC. Note that the classical success probability is also different in the standard and parity-oblivious RAC. In the latter case the quantum advantage can be obtained for two Bobs, but in the former case the quantum advantage cannot extend beyond one Bob. However, we demonstrate that if the input states are encoded in a two-qubit system instead of a single qubit, then four Bobs can get the quantum advantage in the parity-oblivious RAC. In fact, in that case the two-qubit input states required for achieving the optimal quantum value naturally satisfy the parity-oblivious conditions for the $4$-bit case. This feature is similar to the $3$-bit case.
\subsection{Standard $4$-bit RAC with qubit input states}
We first consider the case when the parity-oblivious constraint is not imposed and the input states are qubits. In that case the success probability of the $4$-bit classical RAC is bounded by $S_{4}^{k}\leq \frac{11}{16}$ \cite{pan2020}.
In the quantum RAC, we first consider input states in the qubit system. We have taken slightly different calculation steps from those of the $3$-bit case (Sec. II), as presented in the Appendix. The quantum success probability for Bob$_{1}$ can be written as
\begin{equation}
\begin{split}
S_{4}^{1} &=\frac{1}{64}\sum_{x \in \{0,1\}^4}\sum_{y_{1}=1}^{4}Tr\Big[\rho_{x}^{0} B_{x_{y_{1}}|y_{1}}\Big]\\
&=\frac{1}{2}+\frac{\alpha_{1}\beta_{1}}{16 }\sum_{x^{i}}\sum_{y_{1}=1}^{4} (-1)^{x_{y_{1}}^{i}}\vec{a}_{x^{i}}\cdot \hat{b}_{y_{1}}. \label{eq:Succ41m}
\end{split}
\end{equation}
Here $x_{y_{1}}^{i}$ is the $y_{1}^{th}$ bit of the $4$-bit input string $x^{i}\in \{0,1\}^{4}$ with first bit $0$, i.e., $x_{1}^{i}=0$ for all $i\in [8]$.
Let us now analyze the case when Bob$_{2}$ can get a quantum advantage. We find that the maximum success probability for Bob$_{2}$ from \eqref{eq:Succ42b} is achieved when $\hat{b}_{1}\cdot\hat{b}_{2}=\hat{b}_{1}\cdot\hat{b}_{3}=\hat{b}_{2}\cdot\hat{b}_{3}=0$ and $\hat{b}_{3}\cdot\hat{b}_{4}=1$. As presented in detail in the Appendix, the maximum success probability for Bob$_{2}$, $max(S_{4}^{2})=\Omega_{2}$, can be written as
\begin{equation}
max(S_{4}^{2}) = \Omega_{2} =\frac{1}{2}+ \lambda_{2}\Bigg( \frac{(\sqrt{2}+\sqrt{6})}{64}\Bigg)\big(1+3\sqrt{1-\lambda_{1}^2}\big). \label{eq:Succ421m}
\end{equation}
\begin{figure}
\centering
\includegraphics[scale=0.47]{fourtradeoff1.pdf}
\caption{Optimal trade-off between the quantum success probabilities of Bob$_{1}$ and Bob$_{2}$ for the 4-bit sequential RAC is shown by the solid orange curve, while the shaded portion gives the sub-optimal range. The solid blue line is the classical success probability for the same two observers constrained by the parity-oblivious condition.} \label{fig4}
\end{figure}
The directions of the Bloch vectors $\vec{a}_{x}$ and $\hat{b}_{y_{1}}$ and the relations between the different Bloch vectors $\vec{a}_{x}$ are explicitly provided in the Appendix. It is seen that the conditions for the maximization of $S_{4}^{2}$ also lead to the maximum value $\Omega_{1}=max(S_{4}^{1})$ for Bob$_{1}$ in Eq. \eqref{eq:Succ41m}, which can be written as
\begin{equation}
max(S_{4}^{1}) = \Omega_{1} =\frac{1}{2}+ \lambda_{1}\Bigg( \frac{(\sqrt{2}+\sqrt{6})}{16}\Bigg).\label{eq:Succ411m}
\end{equation}\\
From Eqs. \eqref{eq:Succ41m} and \eqref{eq:Succ421m} we find that to get a simultaneous quantum advantage for Bob$_{1}$ and Bob$_{2}$ the minimum values of the unsharpness parameters are $\lambda_{1}^{min}=0.776$ and $\lambda_{2}^{min}=1.074$ respectively. Since the allowed range of $\lambda_{2}$ is $0\leq \lambda_{2}\leq 1$, it is evident that Bob$_{2}$ cannot get a quantum advantage.
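These threshold values can be read off from Eqs. \eqref{eq:Succ411m} and \eqref{eq:Succ421m} together with the classical bound $11/16$ (the following substitution is our own check):
\[
\lambda_{1}^{min}=\frac{3}{\sqrt{2}+\sqrt{6}}\approx 0.776,
\qquad
\lambda_{2}^{min}=\frac{3/16}{\frac{\sqrt{2}+\sqrt{6}}{64}\big(1+3\sqrt{1-(0.776)^{2}}\big)}\approx\frac{0.1875}{0.1745}\approx 1.074 .
\]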
\subsection{$4$-bit RAC with qubit input states with parity-oblivious constraint}
We now show that when the parity-oblivious constraint is imposed on Alice's encoding, two sequential Bobs can get a quantum advantage, {which} enables the certification of the unsharpness parameter of Bob$_1$'s instrument even when the input states are qubits. In such a case, the classical preparation non-contextual bound on the success probabilities is $(S_{4}^{k})\leq\frac{5}{8}\approx 0.625$. As outlined in Appendix A, the respective maximum success probabilities for Bob$_{1}$ and Bob$_{2}$ are
\begin{equation}
max(S_{4}^{1}) = \Omega_{1} =\frac{1}{2}+ \lambda_{1}\Bigg( \frac{1}{4\sqrt{2}}\Bigg)\label{eq:Succ41m1}
\end{equation}
and
\begin{equation}
max(S_{4}^{2}) = \Omega_{2} =\frac{1}{2}+ \lambda_{2}\Bigg( \frac{\big(1+3\sqrt{1-\lambda_{1}^2}\big)}{16\sqrt{2}}\Bigg). \label{eq:Succ42m1}
\end{equation}
The optimal trade-off between the success probabilities of Bob$_{1}$ and Bob$_{2}$ (obtained by setting $\lambda_{2}=1$) is given by
\begin{equation}
{\Omega_{2}({\Omega_{1}}) =\frac{1}{2}+\frac{\left(1+3\sqrt{1-8(2\Omega_{1}-1)^{2}}\right)}{16\sqrt{2}}}
\label{tf4}
\end{equation}
In Fig.\ref{fig4} we plot $\Omega_{2}$ against $\Omega_{1}$, which gives the optimal trade-off relation between the successive success probabilities. The solid orange curve provides the optimal trade-off, while the shaded portion gives the sub-optimal range. The solid blue line is for the classical case for the same two observers, and there is no trade-off. As explained for the $3$-bit case, one can uniquely certify the unsharpness parameter from this trade-off relation. For example, when $\Omega_{2}=\Omega_{1}\approx0.632$ the unsharpness parameter is $\lambda_{1}=\frac{4+6\sqrt{6}}{25}\approx0.7478$. This point is shown in Fig.\ref{fig4} by the black dot on the solid orange curve.
Thus the optimal pair $(\Omega_{1}, \Omega_{2})$ self-tests the preparation of Alice, the measurement settings of Bob$_1$ and Bob$_2$, and the unsharpness parameter $\lambda_{1}=2\sqrt{2}\left(2\Omega_{1}-1\right)$. Self-testing statements similar to those of the $3$-bit case can also be made here.
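To sketch where the quoted point comes from (this is our own consistency check, combining Eq. \eqref{tf4} with the relation $\lambda_{1}=2\sqrt{2}(2\Omega_{1}-1)$), set $\Omega_{1}=\Omega_{2}$ and $t=2\Omega_{1}-1$ in Eq. \eqref{tf4}; then $8\sqrt{2}\,t-1=3\sqrt{1-8t^{2}}$, i.e.,
\[
25t^{2}-2\sqrt{2}\,t-1=0
\;\Longrightarrow\;
t=\frac{2\sqrt{2}+6\sqrt{3}}{50},
\qquad
\lambda_{1}=2\sqrt{2}\,t=\frac{4+6\sqrt{6}}{25}\approx 0.7478,
\qquad
\Omega_{1}=\Omega_{2}=\frac{1+t}{2}\approx 0.632 .
\]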
In the sub-optimal scenario, taking $\lambda_{2}=1$, Bob$_{1}$ and Bob$_{2}$ both get a quantum advantage for an interval of values of $\lambda_{1}$ given by
\begin{equation}
2\sqrt{2}(2\Omega_{1}-1) \leq \lambda_{1} \leq \sqrt{1-\Bigg(\frac{8\sqrt{2}(2\Omega_{2}-1)}{3}-\frac{1}{3}\Bigg)^2}\label{eq:bound41}
\end{equation}
The above range is calculated by using Eqs. \eqref{eq:Succ41m1} and \eqref{eq:Succ42m1}. Putting in the classical bound $\Omega_{1}=\Omega_{2}=5/8$, the numerical interval of $\lambda_{1}$ is calculated as $ 0.707 \leq \lambda_{1} \leq 0.792$. The more accurate the experimental observation, the more precisely $\lambda_{1}$ can be certified. This implies that an efficient experimental verification is required to confine $\lambda_{1}$ to a narrow region. Note that the above interval could be made narrower if Bob$_{3}$ were to get a quantum advantage, but we found that not more than two Bobs can get the advantage in this case, as shown in the Appendix.
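The endpoints quoted above follow from Eq. \eqref{eq:bound41} with $\Omega_{1}=\Omega_{2}=5/8$ (our own substitution):
\[
\lambda_{1}^{min}=2\sqrt{2}\Big(2\cdot\tfrac{5}{8}-1\Big)=\frac{1}{\sqrt{2}}\approx 0.707,
\qquad
\lambda_{1}^{max}=\sqrt{1-\Big(\frac{2\sqrt{2}-1}{3}\Big)^{2}}\approx 0.792 .
\]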
\subsection{$4$-bit RAC with two-qubit input states}
We provide a sketch of the argument here. If, in the 4-bit parity-oblivious RAC, the input states are a two-qubit system, then four sequential Bobs can share the quantum advantage (see Appendix), and thus three unsharpness parameters can be certified. Note that the input states providing the optimal quantum value of the success probabilities satisfy the parity-oblivious constraints. The optimal trade-off relation between Bob$_{1}$ and Bob$_{2}$ can be derived by using Eqs. \eqref{eq:Succtwo1}-\eqref{eq:Succtwok}.
In the sub-optimal scenario, the interval within which the unsharpness parameter is confined becomes much narrower. We argue again that this requires a sophisticated experimental realization. When only Bob$_{1}$ and Bob$_{2}$ get a quantum advantage, the interval of $\lambda_{1}$ in the sub-optimal scenario is $ 0.517 \leq \lambda_{1} \leq 0.934$, and when Bob$_{1}$, Bob$_{2}$ and Bob$_{3}$ get the quantum advantage, the interval becomes $ 0.517 \leq \lambda_{1} \leq 0.807 $. However, when four Bobs get the quantum advantage, the interval of $\lambda_{1}$ becomes even narrower, $0.517 \leq \lambda_{1} \leq 0.663 $. This shows that the range of $\lambda_{1}$ becomes narrower when four sequential Bobs share the quantum advantage compared to the case when only the first two Bobs get the quantum advantage. Note that the analytic expressions can be straightforwardly calculated using Eq. \eqref{eq:Succtwok}. To avoid clutter, we skip them and provide only the numerical values.
\section{Summary and Discussion}
In summary, based on a communication {game} known as the parity-oblivious RAC, we provided two SDI protocols in the prepare-and-measure scenario to certify multiple unsharpness parameters. Such a certification is demonstrated by examining the quantum advantage shared by multiple sequential observers (we denote by Bob$_{k}$ the $k^{th}$ observer). First, we demonstrated that for the $3$-bit parity-oblivious sequential RAC, a maximum of three sequential Bobs get the quantum advantage over classical preparation non-contextual strategies. We showed that the optimal pair of quantum success probabilities of Bob$_{1}$ and Bob$_{2}$ self-tests the qubit states and the measurements of Bob$_{1}$ and {Bob$_{2}$}. Our treatment clearly shows that in the $3$-bit case the prepared states providing the optimal quantum success probability satisfy the parity-oblivious restriction. The optimal triple of success probabilities self-tests the unsharpness parameters $\lambda_{1}$ and $\lambda_{2}$ of Bob$_{1}$ and Bob$_{2}$ respectively. In realistic experimental scenarios, loss and imperfection are inevitable, and the success probabilities become sub-optimal. In such a case, when only Bob$_{1}$ and Bob$_{2}$ get the quantum advantage, we derived a certified interval of $\lambda_{1}$ within which it has to be confined. The more precise the experiment, the more accurately the bound on $\lambda_{1}$ can be certified. We showed that when the quantum advantage is extended to Bob$_{3}$, the interval of $\lambda_{1}$ becomes narrower, thereby requiring a more efficient experimental realization.
We extended our treatment to the $4$-bit case. We found that a qubit system cannot achieve the globally optimal quantum value of the success probability, but if the prepared states are two-qubit systems, we obtain the optimal quantum success probability. The qubit input states for which the maximum success probability is attained do not naturally satisfy the parity-oblivious constraints on the inputs. We studied three different cases of the $4$-bit sequential RAC. First, when the input states are qubits and the parity-oblivious constraints are not imposed. In such a case we have the standard RAC, and we found that the quantum advantage cannot be extended to Bob$_2$. Thus, this result can be used to certify the states and measurements but cannot certify the unsharpness parameter. We then considered the case when the input states are qubits and the parity-oblivious constraints are imposed. We showed that, in this case, Bob$_{1}$ and Bob$_{2}$ can share the quantum advantage. We derived a trade-off relation between the success probabilities of Bob$_{1}$ and Bob$_{2}$. Consequently, we demonstrated the certification of the unsharpness parameter of Bob$_{1}$. In the third case, when the input states are in a two-qubit system, the input states that maximize the quantum success probability naturally satisfy the parity-oblivious conditions. In this case, the quantum advantage is extended to four sequential Bobs, and hence three unsharpness parameters can be certified. However, we provided only a sketch of this argument, and its details will be published elsewhere.
The MTB \cite{mohan2019} protocol of the $2$-bit sequential quantum RAC for certifying the unsharpness parameter has recently been experimentally tested in \cite{anwar2020,fole2020,xiao}. Our protocols can also be tested using the existing technology already adopted in \cite{anwar2020,fole2020,xiao}. Finally, our protocol can also be extended to arbitrary-bit sequential RACs or other parity-oblivious communication games \cite{pan21}, which may demonstrate the quantum advantage for more independent observers. This will eventually pave the way for certifying a larger number of unsharpness parameters. In such a case, in a sub-optimal scenario, the interval of the unsharpness parameter of the first Bob will be very narrow. An efficient experimental realization is required to certify such an interval. Study along this line could be an exciting avenue for future research. We also note that in another interesting work along this line \cite{miklin2020}, the authors have used the $2$-bit sequential RAC as a tool for certifying the unsharpness parameter. However, the approach adopted in \cite{miklin2020} is different from the MTB protocol. It would then be interesting to extend their certification protocol to $n$-bit RACs or other parity-oblivious communication games. This also calls for further study.
\section{ACKNOWLEDGMENTS}
The authors acknowledge the support from the project DST/ICPS/QuST/Theme-1/2019/4.
\begin{widetext}
\appendix
\section{Details of the calculation for $4$-bit sequential quantum RAC}
For the $4$-bit case, we provide three conditional maximizations: i) the case when the input states are qubits and the parity-oblivious constraints are not imposed; ii) the case when the input states are qubits and the parity-oblivious constraints on the input states are imposed; iii) the case when the input states are a two-qubit system. In the last case, the input states that maximize the success probability naturally satisfy the parity-oblivious conditions.
\subsection{Maximum success probability for qubit system without imposing parity-oblivious constraints }
In the 4-bit scenario, Alice encodes her 4-bit string into sixteen quantum states, and Bob$_{1}$ performs four random measurements. From Eq. \eqref{succ1} of the main text, the quantum success probability at Bob$_{1}$'s site is given by
\begin{eqnarray}
S_{4}^{1} =\frac{1}{64}\sum_{x\in \{0,1\}^{4}}\sum_{y_{1}=1}^{4}Tr\Big[\rho_{x} B_{x_{y_{1}}|y_{1}}\Big]=\frac{1}{2}+\frac{\alpha_{1}\beta_{1}}{32}\sum_{x\in \{0,1\}^{4}}\sum_{y_{1}=1}^{4} (-1)^{x_{y_1}}\vec{a}_{x}\cdot \hat{b}_{y_{1}}, \label{eq:Succ41}
\end{eqnarray}
which is Eq. \eqref{eq:Succ41m} in the main text. We denote $\vec{m}_{y_{1}}=\sum_{x\in \{0,1\}^{4}}(-1)^{x_{y_1}}\vec{a}_{x}$, which are unnormalized vectors and can explicitly be written as \\
$ \vec{m}_1=(\vec{a}_{0000}-\vec{a}_{1111})+(\vec{a}_{0001}-\vec{a}_{1110})+(\vec{a}_{0010}-\vec{a}_{1101})+(\vec{a}_{0100}-\vec{a}_{1011})+(\vec{a}_{0111}-\vec{a}_{1000})+(\vec{a}_{0011}-\vec{a}_{1100})+(\vec{a}_{0101}-\vec{a}_{1010})+(\vec{a}_{0110}-\vec{a}_{1001})$,\\
$ \vec{m}_2=(\vec{a}_{0000}-\vec{a}_{1111})+(\vec{a}_{0001}-\vec{a}_{1110})+(\vec{a}_{0010}-\vec{a}_{1101})-(\vec{a}_{0100}-\vec{a}_{1011})-(\vec{a}_{0111}-\vec{a}_{1000})+(\vec{a}_{0011}-\vec{a}_{1100})-(\vec{a}_{0101}-\vec{a}_{1010})-(\vec{a}_{0110}-\vec{a}_{1001})$,\\
$ \vec{m}_3=(\vec{a}_{0000}-\vec{a}_{1111})+(\vec{a}_{0001}-\vec{a}_{1110})-(\vec{a}_{0010}-\vec{a}_{1101})+(\vec{a}_{0100}-\vec{a}_{1011})-(\vec{a}_{0111}-\vec{a}_{1000})-(\vec{a}_{0011}-\vec{a}_{1100})+(\vec{a}_{0101}-\vec{a}_{1010})-(\vec{a}_{0110}-\vec{a}_{1001})$,\\
$ \vec{m}_4=(\vec{a}_{0000}-\vec{a}_{1111})-(\vec{a}_{0001}-\vec{a}_{1110})+(\vec{a}_{0010}-\vec{a}_{1101})+(\vec{a}_{0100}-\vec{a}_{1011})-(\vec{a}_{0111}-\vec{a}_{1000})-(\vec{a}_{0011}-\vec{a}_{1100})-(\vec{a}_{0101}-\vec{a}_{1010})+(\vec{a}_{0110}-\vec{a}_{1001}).$ \label{eq:m}\\
By inspecting the expressions for the $\vec{m}_{y_{1}}$s, we can easily recognize that each Bloch vector must be a unit vector; in other words, the states should be pure, since otherwise the overall magnitude of the $\vec{m}_{y_{1}}$s would decrease. Moreover, since among the sixteen unit vectors eight appear with a minus sign, the optimal magnitude of the $\vec{m}_{y_{1}}$s requires that the sixteen Bloch vectors form eight antipodal pairs. Hence the maximization of S$_{4}^{1}$ demands that the two vectors in each parenthesis of Eq. \eqref{eq:m} be antipodal pairs, and hence we can write,\\
$ \vec{m}_1=2(\vec{a}_{0000}+\vec{a}_{0001}+\vec{a}_{0010}+\vec{a}_{0100}+\vec{a}_{0111}+\vec{a}_{0011}+\vec{a}_{0101}+\vec{a}_{0110})$, $ \vec{m}_2=2(\vec{a}_{0000}+\vec{a}_{0001}+\vec{a}_{0010}-\vec{a}_{0100}-\vec{a}_{0111}+\vec{a}_{0011}-\vec{a}_{0101}-\vec{a}_{0110})$, $ \vec{m}_3=2(\vec{a}_{0000}+\vec{a}_{0001}-\vec{a}_{0010}+\vec{a}_{0100}-\vec{a}_{0111}-\vec{a}_{0011}+\vec{a}_{0101}-\vec{a}_{0110})$, and $ \vec{m}_4=2(\vec{a}_{0000}-\vec{a}_{0001}+\vec{a}_{0010}+\vec{a}_{0100}-\vec{a}_{0111}-\vec{a}_{0011}-\vec{a}_{0101}+\vec{a}_{0110})$.\\
In compact form this can be written as $\vec{m}_{y_{1}}=2\sum_{x^{i}}(-1)^{x^{i}_{y_1}}\vec{a}_{x^{i}}$, where $x_{y_{1}}^{i}$ is the $y_{1}^{th}$ bit of the $4$-bit input string $x^{i}\in \{0,1\}^{4}$ with first bit $0$, i.e., $x_{1}^{i}=0$ for all $i\in [8]$. With this notation, Eq. \eqref{eq:Succ41} can be re-written as
\begin{equation}
S_{4}^{1} =\frac{1}{2}+\frac{\alpha_{1}\beta_{1}}{16 }\sum_{x^{i}}\sum_{y_{1}=1}^{4} (-1)^{x_{y_{1}}^{i}}\vec{a}_{x^{i}}\cdot \hat{b}_{y_{1}}. \label{eq:Succ4111}
\end{equation}
Now using Eq. \eqref{eq:D} the reduced density operator after Bob$_{1}$'s measurement can be represented as,
\begin{eqnarray}
\rho^{1}_x &&= \frac{1}{4}\sum_{y_{1},b} K_{b|y_{1}}\rho_{x}K_{b|y_{1}} =\frac{\mathbb{I}+\vec{a}_{x}^{1}\cdot\sigma}{2},
\end{eqnarray}
where $\vec{a}_{x}^{1}$ is given by,
\begin{equation}
\vec{a}^{1}_{x}= 2\left(\alpha_{1}^2-\beta_{1}^{2}\right)\vec{a}_{x} +\beta_{1}^{2}\sum_{y_{1}=1}^{4}\left( \hat{b}_{y_{1}}\cdot\vec{a}_{x}\right) \hat{b}_{y_{1}}. \label{eq:rho1}
\end{equation}
Using Eq. \eqref{eq:rho1} we find the success probability of Bob$_{2}$ as
\begin{equation}
\begin{split}
S_{4}^{2} &=\frac{1}{64}\sum_{x \in \{0,1\}^4}\sum_{y_{2}=1}^{4}Tr\Big[\rho_{x}^{1} B_{x_{y_{2}}|y_{2}}\Big]\\
&=\frac{1}{2}+ \frac{1}{128}\Big[ 2\left(\alpha_{1}^2-\beta_{1}^{2}\right)\sum_{x\in \{0,1\}^{4}} \sum_{y_{2}=1}^{4}\left((-1)^{x_{y_2}}\vec{a}_{x}\cdot \hat{b}_{y_{2}}\right)+ \beta_{1}^{2}\sum_{x\in \{0,1\}^{4}} \sum_{y_{1},y_{2}=1}^{4}\left( (-1)^{x_{y_2}}\vec{a}_{x}\cdot\hat{b}_{y_{1}}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{2}}\right)\Big]. \label{eq:Succ42a}
\end{split}
\end{equation}
which we can re-write as
\begin{equation}
\begin{split}
S_{4}^{2} =\frac{1}{2}+ \frac{1}{64}\Big[ 2\left(\alpha_{1}^2-\beta_{1}^{2}\right)\sum_{x^{i}} \sum_{y_{2}=1}^{4}\left((-1)^{x^{i}_{y_2}}\vec{a}_{x^{i}}\cdot \hat{b}_{y_{2}}\right)+ \beta_{1}^{2}\sum_{x^{i}} \sum_{y_{1},y_{2}=1}^{4}\left( (-1)^{x^{i}_{y_2}}\vec{a}_{x^{i}}\cdot\hat{b}_{y_{1}}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{2}}\right)\Big]. \label{eq:Succ42b}
\end{split}
\end{equation}
Let us first maximize the first term on the right-hand side. We consider a non-negative quantity $\gamma$ whose expectation value can be written as $\langle \gamma \rangle = \beta- \Delta$, where $\beta$ is a real number and $\Delta=\sum_{x^{i}} \sum_{y_{2}=1}^{4}(-1)^{x^{i}_{y_2}}\vec{a}_{x^{i}}\cdot \hat{b}_{y_{2}}$. Furthermore, we consider a set of vectors $\vec{L}_{i}$ which are polynomial functions of $\vec{a}_{x^{i}}$ and $\hat{b}_{y_{2}}$, so that $\gamma$ acquires the form
\begin{equation}
\gamma=\sum_{i=1}^{8}\frac{w_{i}}{2}(\vec{L}_{i})^{2}. \label{eq:gamma}
\end{equation}
Suitable vectors $\vec{L}_{i}$ for the function given in Eq. \eqref{eq:gamma} can be chosen as
\begin{equation}
\vec{L}_{i}=\frac{1}{w_{i}}\sum_{y=1}^{n} (-1)^{x_{y}^{i}}\hat{b}_{y} -\vec{a}_{x^i}, \label{eq:L}
\end{equation}
where $w_{i}$ is any positive function of the $\hat{b}_{y_{2}}$s. Now substituting Eq. \eqref{eq:L} into Eq. \eqref{eq:gamma} and noting that $(\hat{b}_{y_{2}})^{2}=(\vec{a}_{x^i})^{2}=1$, we get
\begin{equation}
\langle\gamma\rangle=\sum_{i=1}^{8}\Big[\frac{1}{2w_{i}}\Big(\sum_{y_{2}=1}^{4} (-1)^{x_{y_{2}}^{i}}\hat{b}_{y_{2}}\Big)^2 + \frac{w_{i}}{2}\Big] - \Delta .\label{eq:gamma2}
\end{equation}
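For clarity, we spell out the intermediate step leading to Eq. \eqref{eq:gamma2}: expanding the square in Eq. \eqref{eq:gamma} with the choice \eqref{eq:L} and using $(\hat{b}_{y})^{2}=(\vec{a}_{x^{i}})^{2}=1$ gives
\[
\frac{w_{i}}{2}(\vec{L}_{i})^{2}
=\frac{1}{2w_{i}}\Big(\sum_{y}(-1)^{x_{y}^{i}}\hat{b}_{y}\Big)^{2}
-\sum_{y}(-1)^{x_{y}^{i}}\,\vec{a}_{x^{i}}\cdot\hat{b}_{y}
+\frac{w_{i}}{2},
\]
and summing over $i$ reproduces the bracketed terms of Eq. \eqref{eq:gamma2} together with $-\Delta$.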
We can now conveniently put $w_{i}=||\sum_{y_{2}=1}^{4} (-1)^{x_{y_{2}}^{i}}\hat{b}_{y_{2}}||$ in Eq. \eqref{eq:gamma2} to finally get $\Delta=\sum_{i=1}^{8}w_{i}- \langle\gamma\rangle$. Since $\langle\gamma\rangle\geq0$, the maximum quantum value of $\Delta$ can be written as
\begin{equation}
(\Delta)_{max}=\max_{\hat{b}_{y_{2}}}\Bigg(\sum_{i=1}^{8}w_{i}\Bigg).
\end{equation}
The maximum value of $\Delta$ demands $\langle\gamma\rangle=0$, which further implies $\vec{L}_{i}=0$, and consequently the input Bloch vectors are $ \vec{a}_{x^i}=\frac{ \sum_{y=1}^{n} (-1)^{x_{y}^{i}}\hat{b}_{y}}{w_{i}}$.
Using the concavity inequality $\sum_{i=1}^{n}w_{i}\leq\sqrt{n\sum_{i=1}^{n}(w_{i})^2}$ and applying it twice, we have
\begin{eqnarray}
(\Delta)_{max}&\leq \sqrt{ 4(w_{1}^2+w_{2}^2+w_{3}^2+w_{4}^2)} +\sqrt{ 4(w_{5}^2+w_{6}^2+w_{7}^2+w_{8}^2)}\label{ww}\\
&=4\sqrt{4+ 2\hat{b}_{3}\cdot\hat{b}_{4} } +4\sqrt{4- 2\hat{b}_{3}\cdot\hat{b}_{4} }. \nonumber
\end{eqnarray}
In the first line of Eq. (\ref{ww}), equality in the first term holds when $w_{1}=w_{2}=w_{3}=w_{4}$, and similarly for the second term. This provides $\hat{b}_{1}\cdot\hat{b}_{2}=\hat{b}_{1}\cdot\hat{b}_{3}=\hat{b}_{2}\cdot\hat{b}_{3}=0$. However, in a qubit system it is not possible to also obtain $\hat{b}_{3}\cdot\hat{b}_{4}=0$. The maximum value of Eq. (\ref{ww}) is then obtained when $\hat{b}_{3}\cdot\hat{b}_{4}=1$. We then have,
\begin{eqnarray}
\label{www}
(\Delta)_{max}=4\left(\sqrt{2} +\sqrt{6}\right).
\end{eqnarray}
We can then choose $\hat{b}_{1}=\hat{x}$, $\hat{b}_{2}=\hat{y}$, $\hat{b}_{3}=\hat{z}$ and $\hat{b}_{4}=\hat{z}$. The optimization condition $\vec{L}_{i}=0$ then gives $\vec{a}_{x^{i}}=\frac{1}{w_{i}}\sum_{y}(-1)^{x^{i}_{y}}\hat{b}_{y}$, and thus the input Bloch vectors $\vec{a}_{x^i}$ can be written as
\begin{eqnarray}
\vec{a}_{0000} &&= \frac{\hat{x}+\hat{y}}{\sqrt{2}}, \hspace{0.8cm}
\vec{a}_{0001}=\frac{\hat{x}+\hat{y}+2\hat{z}}{\sqrt{6}},
\hspace{0.8cm}
\vec{a}_{0010}= \frac{\hat{x}+\hat{y}-2\hat{z}}{\sqrt{6}}, \hspace{0.8cm}
\vec{a}_{0100}=\frac{\hat{x}-\hat{y}}{\sqrt{2}},\\
\nonumber
\vec{a}_{1000}&=& \frac{-\hat{x}+\hat{y}}{\sqrt{2}}, \hspace{0.8cm}
\vec{a}_{0011}= \frac{\hat{x}+\hat{y}}{\sqrt{2}}, \hspace{0.8cm}
\vec{a}_{0101}= \frac{\hat{x}-\hat{y}+2\hat{z}}{\sqrt{6}}, \hspace{0.8cm}
\vec{a}_{0110}= \frac{\hat{x}-\hat{y}-2\hat{z}}{\sqrt{6}},
\end{eqnarray}
It can be easily checked that the second term in Eq. (\ref{eq:Succ42b}) is maximized when $\hat{b}_{y_2}=\hat{b}_{y_1}$ for $y_{1}=y_{2}$. By putting in the values of $\Delta$, $\alpha$ and $\beta$ we have
\begin{equation}
max(S_{4}^{2}) = \Omega_{2} =\frac{1}{2}+ \lambda_{2}\Bigg( \frac{(\sqrt{2}+\sqrt{6})}{64}\Bigg)\big(1+3\sqrt{1-\lambda_{1}^2}\big) \label{eq:Succ421}
\end{equation}
and consequently
\begin{equation}
max(S_{4}^{1}) = \Omega_{1} =\frac{1}{2}+ \lambda_{1}\Bigg( \frac{(\sqrt{2}+\sqrt{6})}{16}\Bigg).\label{eq:Succ411}
\end{equation}
Considering the situation when Bob$_{1}$ implements his unsharp measurement so that the quantum advantage can persist for Bob$_{2}$, we can calculate the minimum values of the unsharpness parameters from Eqs. \eqref{eq:Succ411} and \eqref{eq:Succ421} as $\lambda_{1}^{min}=0.776$ and $\lambda_{2}^{min}=1.074$. It is clear that, as $\lambda_{2}^{min}> 1$ is not a legitimate value, Bob$_{2}$ does not get any quantum advantage. We can, however, change the scenario by imposing the restriction of parity-obliviousness to see whether in the 4-bit RAC the quantum advantage can be extended to Bob$_{2}$.
\subsection{Maximum success probability for parity-oblivious RAC for qubit system}
Let us now discuss the case when the same game is constrained by the parity-obliviousness constraints. In this case the classical bound of the game becomes the preparation non-contextual bound, equal to $S_{4}=\frac{1}{2}(1+\frac{1}{4})\approx 0.625$. We recall the parity set $ \mathbb{P}_n= \{x|x \in \{0,1\}^n,\sum_{r} x_{r} \geq 2\} $ with $r\in \{1,2,...,n\}$. For any $s \in \mathbb{P}_{n}$, no information about $s\cdot x = \oplus_{r} s_{r}x_{r}$ (the $s$-parity) is to be transmitted to Bob, where $\oplus$ denotes sum modulo $ 2 $. We then have $s$-parity-0 and $s$-parity-1 sets. For the 4-bit case the cardinality of the parity set is eleven. Among them only four elements, $1011$, $0111$, $1110$ and $1101$, provide nontrivial constraints on the inputs, which are respectively given by
\begin{eqnarray} &&R_{1011}=(\vec{a}_{0000}-\vec{a}_{0001}-\vec{a}_{0010}+\vec{a}_{0100}-\vec{a}_{1000}+\vec{a}_{0011}-\vec{a}_{0101}-\vec{a}_{0110}) =0 \nonumber\\ &&R_{0111}= (\vec{a}_{0000}-\vec{a}_{0001}-\vec{a}_{0010}-\vec{a}_{0100}+\vec{a}_{1000}+\vec{a}_{0011}+\vec{a}_{0101}+\vec{a}_{0110}) =0\nonumber\\
&&R_{1110}=(\vec{a}_{0000}+\vec{a}_{0001}-\vec{a}_{0010}-\vec{a}_{0100}-\vec{a}_{1000}-\vec{a}_{0011}-\vec{a}_{0101}+\vec{a}_{0110})=0 \nonumber \\ &&R_{1101}= (\vec{a}_{0000}-\vec{a}_{0001}+\vec{a}_{0010}-\vec{a}_{0100}-\vec{a}_{1000}-\vec{a}_{0011}+\vec{a}_{0101}-\vec{a}_{0110})=0 . \label{eq:R}
\end{eqnarray}
When the parity-oblivious restriction is imposed on Alice's inputs, we get $\Delta_{max}=8\sqrt{2}$. This in turn provides the maximum success probability of Bob$_{2}$,
\begin{equation}
max(S_{4}^{2}) = \Omega_{2} =\frac{1}{2}+ \lambda_{2}\Bigg( \frac{\big(1+3\sqrt{1-\lambda_{1}^2}\big)}{16\sqrt{2}}\Bigg), \label{eq:Succ42a1}
\end{equation}
which also fixes the maximum success probability of Bob$_{1}$ in the 4-bit parity-oblivious RAC as
\begin{equation}
max(S_{4}^{1}) = \Omega_{1} =\frac{1}{2}+ \lambda_{1}\Bigg( \frac{1}{4\sqrt{2}}\Bigg).\label{eq:Succ41a1}
\end{equation}
The respective minimum values of the unsharpness parameter required to get a quantum advantage for Bob$_{1}$ and Bob$_{2}$ are $\lambda_{1}^{min}=0.707$ and $\lambda_{2}^{min}=0.906$. Thus Bob$_{1}$ and Bob$_{2}$ both get the quantum advantage. We then extend our calculation to Bob$_{3}$ to check whether he gets any quantum advantage. As a continuation of the previous section, the reduced state after Bob$_{2}$'s measurement is
\begin{eqnarray}
\rho^{2}_x = \frac{1}{16} \Bigg\{ 8\mathbb{I} + 32\alpha_{1}^2 \alpha_{2}^2(\vec{a}_{x} \cdot\sigma) + 8\alpha_{2}^2\beta_{1}^2\sum_{y_{1}=1}^{4}B_{y_{1}}(\vec{a}_{x}\cdot \sigma) B_{y_{1}} +8\alpha_{1}^2 \beta_{2}^2\sum_{y_{2}=1}^{4}B_{y_{2}}(\vec{a}_{x}\cdot \sigma) B_{y_{2}}+ 2\beta_{1}^2\beta_{2}^2\sum_{y_{1},y_{2}=1}^{4}B_{y_{2}}B_{y_{1}}(\vec{a}_{x}\cdot \sigma)B_{y_{1}} B_{y_{2}}\Bigg\}.
\end{eqnarray}
After some simplification we can represent the above equation as $\rho^{2}_x = \frac{\mathbb{I}+\vec{a}_{x}^{2}\cdot\sigma}{2}$, where the Bloch vector $\vec{a}_{x}^{2}$ can be written as
\begin{eqnarray}
\vec{a}^{2}_{x}= 4\left(\alpha_{1}^2-\beta_{1}^{2}\right)\left(\alpha_{2}^2-\beta_{2}^{2}\right)\vec{a}_{x}+\beta_{1}^{2}\left(\alpha_{2}^2-\beta_{2}^{2}\right) \sum_{y_{1}=1}^{4}\left( \hat{b}_{y_{1}}\cdot\vec{a}_{x}\right) \hat{b}_{y_{1}} +\beta_{2}^{2}\left(\alpha_{1}^2-\beta_{1}^{2}\right) \sum_{y_{2}=1}^{4}\left( \hat{b}_{y_{2}}\cdot\vec{a}_{x}\right) \hat{b}_{y_{2}}+\beta_{1}^2\beta_{2}^{2}\sum_{y_{1},y_{2}=1}^{4}\left( \hat{b}_{y_{1}}\cdot\vec{a}_{x}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{2}}\right) \hat{b}_{y_{2}} \label{eq:4222}. \nonumber\\
\end{eqnarray}
The success probability for Bob$_{3}$ is
\begin{eqnarray}
S_{4}^{3} &&= \frac{1}{2} +\frac{1}{128}\Bigg[ 4\left(\alpha_{1}^2-\beta_{1}^{2}\right)\left(\alpha_{2}^2-\beta_{2}^{2}\right)\sum_{x^{i}} \sum_{y_{2}=1}^{4}\left((-1)^{x^{i}_{y_1}}\vec{a}_{x^{i}}\cdot \hat{b}_{y_{2}}\right)+\beta_{1}^{2}\left(\alpha_{2}^2-\beta_{2}^{2}\right) \sum_{x^{i}} \sum_{y_{1},y_{2}=1}^{4}\left( (-1)^{x_{y_1}}\vec{a}_{x^{i}}\cdot\hat{b}_{y_{1}}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{2}}\right) \nonumber \\ &&+\beta_{2}^{2}\left(\alpha_{1}^2-\beta_{1}^{2}\right) \sum_{x^{i}} \sum_{y_{2},y_{3}=1}^{4}\left((-1)^{x^{i}_{y_1}}\vec{a}_{x^{i}}\cdot \hat{b}_{y_{2}}\right)\left( \hat{b}_{y_{2}}\cdot \hat{b}_{y_{3}}\right)+\beta_{1}^2\beta_{2}^{2}\sum_{x^{i}} \sum_{y_{1},y_{2},y_{3}=1}^{4}\left((-1)^{x^{i}_{y_1}}\vec{a}_{x^{i}}\cdot \hat{b}_{y_{2}}\right)\left( \hat{b}_{y_{1}}\cdot \hat{b}_{y_{2}}\right)\left( \hat{b}_{y_{2}} \cdot \hat{b}_{y_{3}}\right) \Bigg] . \label{eq:Succ43a}
\end{eqnarray}
We finally obtain the maximum success probability for Bob$_{3}$ as,
\begin{equation}
max(S_{4}^{3}) =\Omega_{3} =\frac{1}{2} + \lambda_{3}\Big(\frac{ \sqrt{2}}{128}\Big)\prod_{k^{'}=1}^{2}(1+3\sqrt{1-\lambda_{k^{'}}^2}). \label{eq:succ441a}
\end{equation}
The minimum sharpness parameter of Bob$_3$'s instrument would have to be $\lambda_{3}^{min}=1.56>1$. Since this value is not legitimate, Bob$_{3}$ gets no quantum advantage.
\subsection{Optimal success probability in parity-oblivious RAC for two-qubit system }
We provide a sketch of the argument for the case when Alice's input states are a two-qubit system. Following the entanglement-assisted parity-oblivious RAC in \cite{ghorai18}, in our prepare-and-measure scenario we find that to obtain the optimal quantum success probability the input states have to be
\begin{align}
\rho_{x}=\frac{1}{4}\left(\mathbb{I}\otimes\mathbb{I} +\frac{(-1)^{x_{y}^{i}}B_{y}}{2}\right)
\end{align}
where the $B_{y}$s are mutually anticommuting observables in the two-qubit system. One such choice is $B_1=\sigma_{x}\otimes\sigma_{x}$, $B_2=\sigma_{x}\otimes\sigma_{y}$, $B_3=\sigma_{x}\otimes\sigma_{z}$ and $B_4=\sigma_{y}\otimes\mathbb{I}$. Importantly, the sixteen two-qubit input states corresponding to $x\in \{0,1\}^{4}$ satisfy the four nontrivial parity-oblivious conditions. The optimal quantum success probability for Bob$_{1}$ can be obtained as
\begin{equation}
max(S_{4}^{1}) =\xi_{4}^{1} =\frac{1}{2}+ \frac{\lambda_{1}}{4}\label{eq:Succtwo2}
\end{equation}
and for Bob$_{2}$,
\begin{equation}
max(S_{4}^{2}) =\xi_{4}^{2} =\frac{1}{2}+ \lambda_{2}\Bigg( \frac{1}{16}\Bigg)\big(1+3\sqrt{1-\lambda_{1}^2}\big). \label{eq:Succtwo1}
\end{equation}
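As a quick check (ours) that the observables listed above indeed mutually anticommute, note for instance that
\[
\{B_{1},B_{2}\}=\mathbb{I}\otimes(\sigma_{x}\sigma_{y}+\sigma_{y}\sigma_{x})=0,
\qquad
\{B_{1},B_{4}\}=(\sigma_{x}\sigma_{y}+\sigma_{y}\sigma_{x})\otimes\sigma_{x}=0,
\]
and similarly for the remaining pairs, while $B_{y}^{2}=\mathbb{I}\otimes\mathbb{I}$ for every $y$.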
The general form of the optimal quantum success probability for the $k^{th}$ Bob can be written as
\begin{equation}
max(S_{4}^{k}) =\xi_{4}^{k} =\frac{1}{2}+ \lambda_{k}\Bigg( \frac{1}{4^{k}}\Bigg)\prod_{k^{'}=1}^{k-1}(1+3\sqrt{1-\lambda_{k^{'}}^2}). \label{eq:Succtwok}
\end{equation}
The preparation non-contextual bound on the $4$-bit parity-oblivious RAC is $S_{4}=\frac{1}{2}(1+\frac{1}{4})\approx 0.625$. In this case we find that the minimum values of the unsharpness parameters of five sequential Bobs are $\lambda_{1}^{min}=0.5$, $\lambda_{2}^{min}=0.556$, $\lambda_{3}^{min}=0.679$, $\lambda_{4}^{min}=0.795$ and $\lambda_{5}^{min}=1.12>1$. Hence, if Alice's encoding states are in a two-qubit system, at most four Bobs can get the quantum advantage. A similar trade-off relation can be demonstrated by following the prescription for the $3$-bit parity-oblivious RAC.
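For the first two thresholds quoted above, Eq. \eqref{eq:Succtwok} together with the bound $5/8$ gives (our own substitution)
\[
\lambda_{1}^{min}=4\Big(\frac{5}{8}-\frac{1}{2}\Big)=0.5,
\qquad
\lambda_{2}^{min}=\frac{16\big(\frac{5}{8}-\frac{1}{2}\big)}{1+3\sqrt{1-0.25}}=\frac{2}{1+\frac{3\sqrt{3}}{2}}\approx 0.556,
\]
and the remaining values follow in the same way from the product form of Eq. \eqref{eq:Succtwok}.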
\end{widetext}
\begin{thebibliography}{99}
\bibitem{bell} J.S. Bell, On the Einstein Podolsky Rosen paradox, \href{https://doi.org/10.1103/PhysicsPhysiqueFizika.1.195}{Physics, {\bf 1}, 195 (1964)}.
\bibitem{hororev} R. Horodecki, P . Horodecki, M. Horodecki and K. Horodecki, Quantum entanglement, \href{https://doi.org/10.1103/RevModPhys.81.865} {Rev. Mod. Phys. 81, 865 (2009)}.
\bibitem{guhnearev} O. G\"{u}hne and G. T\'{o}th, Entanglement detection, \href{https://doi.org/10.1016/j.physrep.2009.02.004} {Phys. Rep. 474, 1 (2009)}.
\bibitem{brunnerrev} N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani and S. Wehner, Bell nonlocality, \href{https://doi.org/10.1103/RevModPhys.86.419}{Rev. Mod. Phys. 86, 419 (2014)}.
\bibitem{bar05} J. Barrett, L. Hardy and A. Kent, No Signaling and Quantum Key Distribution, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.95.010503}{Phys. Rev. Lett. 95, 010503(2005).}
\bibitem{acin06}A. Acin, N. Gisin and L. Masanes, From Bell’s Theorem to
Secure Quantum Key Distribution, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.97.120405}{Phys. Rev. Lett. 97, 120405 (2006).}
\bibitem{acin07} A. Acin, N. Brunner, N. Gisin, S. Massar, S. Pironio and and V.
Scarani, Device-Independent Security of Quantum Cryptography against Collective Attacks, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.230501}{ Phys. Rev. Lett. 98, 230501 (2007).}
\bibitem{pir09}S. Pironio, A. Acin, N. Brunner, N. Gisin, S. Massar and V.
Scarani, Device-independent quantum key distribution secure against collective attacks, \href{https://iopscience.iop.org/article/10.1088/1367-2630/11/4/045021/meta}{New J. Phys. 11, 045021 (2009).}
\bibitem{col06} R. Colbeck, Quantum and relativistic protocols for secure multi-party computation, Ph.D. thesis, University of Cambridge (2006); \href{https://arxiv.org/abs/0911.3814}{arXiv:0911.3814v2.}
\bibitem{pir10} S. Pironio, et al., Random numbers certified by Bell’s theorem,\href{https://www.nature.com/articles/nature09008}{ Nature volume 464,1021(2010).}
\bibitem{nieto} O. Nieto-Silleras, S. Pironio and J. Silman, Using complete measurement statistics for optimal device-independent randomness evaluation, \href{https://iopscience.iop.org/article/10.1088/1367-2630/16/1/013035}{New J. Phys. 16, 013035 (2014).}
\bibitem{col12}R. Colbeck and R. Renner, Free randomness can be amplified, \href{https://www.nature.com/articles/nphys2300}{ Nature Physics 8, 450(2012).
}
\bibitem{wehner} S. Wehner, M. Christandl and A.C. Doherty, Lower bound on the dimension of a quantum system given measured data, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.78.062112}{Phys. Rev. A 78,062112 (2008).}
\bibitem{gallego} R. Gallego, N. Brunner, C. Hadley and A. Acin, Device-Independent Tests of Classical and Quantum Dimensions, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.105.230501}{Phys. Rev. Lett. 105, 230501 (2010).}
\bibitem{ahrens} J. Ahrens, P. Badziag, A. Cabello and M. Bourennane, Experimental device-independent tests of classical and quantum dimensions, \href{https://www.nature.com/articles/nphys2333}{Nat.Phys, 8, 592(2012).}
\bibitem{brunnerprl13} N. Brunner, M. Navascues and T. Vertesi, Dimension Witnesses and Quantum State Discrimination, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.110.150501}{Phys. Rev. Lett. 110, 150501 (2013).}
\bibitem{bowler} J. Bowles, M. Quintino and N. Brunner, Certifying the Dimension of Classical and Quantum Systems in a Prepare-and-Measure Scenario with Independent Devices, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.140407}{Phys. Rev. Lett. 112, 140407 (2014).
}
\bibitem{sik16prl}J. Sikora, A. Varvitsiotis and Z. Wei, Minimum Dimension of a Hilbert Space Needed to Generate a Quantum Correlation, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.117.060401}{Phys. Rev. Lett., 117, 060401 (2016)}
\bibitem{cong17} W. Cong, Y. Cai, J-D. Bancal and V. Scarani, Witnessing Irreducible Dimension, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.080401}{Phys. Rev. Lett. 119, 080401 (2017).}
\bibitem{pan2020} A. K. Pan and S. S. Mahato, Device-independent certification of the Hilbert-space dimension using a family of Bell expressions, \href{https://doi.org/10.1103/PhysRevA.102.052221}{Phys. Rev. A 102, 052221(2020)}.
\bibitem{complx1} H. Buhrman, R. Cleve, S. Massar, and R. de Wolf, Nonlocality and communication complexity, \href{https://journals.aps.org/rmp/pdf/10.1103/RevModPhys.82.665}{Rev. Mod. Phys. 82, 665 (2010).}
\bibitem{tava2018} A. Tavakoli, J. Kaniewski, T. Vértesi, D. Rosset and N. Brunner, Self-testing quantum states and measurements in the prepare-and-measure scenario, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.062307}{Phys. Rev. A 98, 062307 (2018).}
\bibitem{farkas2019} M. Farkas and J. Kaniewski, Self-testing mutually unbiased bases in the prepare-and-measure scenario, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.99.032316}{Phys. Rev. A 99, 032316 (2019).}
\bibitem{wagner2020} S. Wagner, J.D. Bancal, N. Sangouard, and P. Sekatski, \href{https://quantum-journal.org/papers/q-2020-03-19-243/pdf/}{Quantum 4, 243 (2020).}
\bibitem{self1} D. Mayers, and A. Yao, \href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.88.015005}{ Quantum Information and Computation, 4(4):273-286 (2004)}
\bibitem{lf1} B. Hensen \emph{et al.,},Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, \href{https://www.nature.com/articles/nature15759}{Nature (London) 526, 682 (2015).}
\bibitem{lf2} W. Rosenfeld \emph{et al.,} Event-Ready Bell Test Using Entangled Atoms Simultaneously Closing Detection and Locality Loopholes, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.010402}{Phys. Rev. Lett. 119, 010402 (2017).}
\bibitem{lf3} M. Giustina \emph{et al.,} Significant-Loophole-Free Test of Bell's Theorem with Entangled Photons, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.250401}{Phys. Rev. Lett. 115, 250401 (2015).}
\bibitem{lf4} L. K. Shalm \emph{et al.,} Strong Loophole-Free Test of Local Realism, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.250401}{Phys. Rev. Lett. 115, 250402 (2015).}
\bibitem{lf5} M.-H. Li \emph{et al.,}Test of Local Realism into the Past without Detection and Locality Loopholes, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.121.080404}{Phys. Rev. Lett. 121, 080404 (2018).}
\bibitem{Yang} Y. Liu \emph{et al.,} Device-independent quantum random-number generation, \href{https://www.nature.com/articles/s41586-018-0559-3}{Nature 562, 548 (2018).}
\bibitem{Peter} P. Bierhorst \emph{et al.,} Experimentally generated randomness certified by the impossibility of superluminal signals, \href{https://www.nature.com/articles/s41586-018-0019-0}{Nature 556, 223 (2018).}
\bibitem{lungi} T. Lunghi \emph{et al.,}Self-Testing Quantum Random Number Generator, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.150501}{Phys. Rev. Lett., 114, 150501 (2015).}
\bibitem{li11} Hong-Wei Li \emph{et al.,} Semi-device-independent random-number expansion without entanglement, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.84.034301}{Phys. Rev. A 84, 034301 (2011).}
\bibitem{li12} Hong-Wei Li et al.; Semi-device-independent randomness certification using $n\rightarrow 1$ quantum RAC, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.85.052308}{Phys. Rev. A 85, 052308 ,(2012).}
\bibitem{pawlowski11} M. Pawłowski and N. Brunner, Semi-device-independent security of one-way quantum key distribution, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.84.010302}{Phys. Rev. A 84, 010302(R)(2011)}
\bibitem{bowles14} J. Bowles, M. T. Quintino and N. Brunner, Certifying the dimension of classical and quantum systems in
a prepare-and-measure scenario with independent devices,\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.140407}{Phys. Rev. Lett. 112, 140407 (2014).}
\bibitem{Wen} Y-Q. Zhou, H-W. Li, Y-K. Wang, D-D. Li, F. Gao and Q-Y. Wen, Semi-device-independent randomness expansion with partially free random sources, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.92.022331}{Phys. Rev. A 92, 022331 (2015).}
\bibitem{van} T. Van Himbeeck, E. Woodhead, N. J. Cerf, R. Garcia-Patron and S. Pironio, Semi-device-independent framework based on natural physical assumptions, \href{https://quantum-journal.org/papers/q-2017-11-18-33/}{Quantum 1, 33 (2017).}
\bibitem{brask}J. B. Brask \emph{et al.}Megahertz-Rate, Semi-Device-Independent Quantum Random Number Generators Based on Unambiguous State Discrimination, \href{https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.7.054018}{ Phys. Rev. Appl. 7, 054018 (2017).}
\bibitem{mir2019} P. Mironowicz and M. Pawłowski, Experimentally feasible
semi-device-independent certification of 4 outcome POVMs, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.030301}{Phys. Rev. A 100, 030301 (2019)}
\bibitem{samina2020}M. Smania, P. Mironowicz, M. Nawareg, M. Pawłowski, A. Cabello and M. Bourennane, \href{https://www.osapublishing.org/optica/fulltext.cfm?uri=optica-7-2-123&id=426348}{Optica 7, 123 (2020).}
\bibitem{mohan2019} K. Mohan, A. Tavakoli and N. Brunner,Sequential RAC and self-testing of quantum measurement instruments, \href{https://iopscience.iop.org/article/10.1088/1367-2630/ab3773}{ New J. Phys. 21 083034 (2019).}
\bibitem{miklin2020} N. Miklin, J. J. Borkała and M. Pawłowski, Semi-device-independent self-testing of unsharp measurements, \href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033014}{ Phys. Rev. Research 2, 033014 (2020).}
\bibitem{anwar2020} H. Anwer, S. Muhammad, W. Cherifi, N. Miklin, A. Tavakoli and M. Bourennane, Experimental Characterization of Unsharp Qubit Observables and Sequential Measurement Incompatibility via Quantum RAC,\href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.080403}{Phys. Rev. Lett. 125, 080403 (2020).}
\bibitem{fole2020} G. Foletto, L. Calderaro, G. Vallone and P. Villoresi, Experimental demonstration of sequential quantum random access
codes, \href{https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033205}{Phys. Rev. Research 2, 033205 (2020).}
\bibitem{xiao} Ya Xiao, Xin-Hong Han, Xuan Fan, Hui-Chao Qu, and Yong-Jian Gu, Widening the sharpness modulation region of an entanglement-assisted sequential quantum random access code: Theory, experiment, and application, \href{https://doi.org/10.1103/PhysRevResearch.3.023081} {Phys. Rev. Research 3 023081 (2021).}
\bibitem{tava20exp} A. Tavakoli, M. Smania T. Vértesi, N. Brunner and M. Bourennane, Self-testing nonprojective quantum measurements in prepare-and-measure experiments,\href{https://advances.sciencemag.org/content/6/16/eaaw6664.abstract}{Science Advances 6, 16 (2020).}
\bibitem{silva2015} R. Silva, N. Gisin, Y. Guryanova and S. Popescu,Multiple Observers Can Share the Nonlocality of Half of an Entangled Pair by Using Optimal Weak Measurements, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.250401}{Phys. Rev. Lett. 114, 250401 (2015)}
\bibitem{sasmal2018} S. Sasmal , D. Das , S. Mal and A. S. Majumdar, Steering a single system sequentially by multiple observers, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.012305}{Phys. Rev. A 98, 012305 (2018)}
\bibitem{bera2018} A. Bera, S. Mal, A. Sen(De) and U. Sen, Witnessing bipartite entanglement sequentially by multiple observers, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.062304}{ Phys. Rev. A 98, 062304 (2018)}
\bibitem{kumari2019} A. Kumari and A.K. Pan, Sharing nonlocality and nontrivial preparation contextuality using the same family of Bell expressions, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.062130}{ Phys. Rev. A 100, 062130 (2019).}
\bibitem{brown2020} P. J. Brown and R. Colbeck, Arbitrarily Many Independent Observers Can Share the Nonlocality of a Single Maximally Entangled Qubit Pair, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.090401}{Phys. Rev. Lett. 125, 090401 (2020)}
\bibitem{Zhang21} T. Zhang and S-M. Fei, Sharing quantum nonlocality and genuine nonlocality with independent observables, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.103.032216}{Phys. Rev. A 103, 032216 (2021)}
\bibitem{busch} P. Busch,Unsharp reality and joint measurements for spin observables, \href{https://journals.aps.org/prd/abstract/10.1103/PhysRevD.33.2253}{Phys. Rev. D. 33, 2253 (1986)}
\bibitem{vonneumann} J. von Neumann, Mathematical Foundations of Quantum Mechanics,\href{https://press.princeton.edu/books/hardcover/9780691178561/mathematical-foundations-of-quantum-mechanics}{Princeton Univ. Press (1955).}
\bibitem{std1} Janos Bergou, Edgar Feldman, and Mark Hillery, Extracting Information from a Qubit by Multiple Observers: Toward a Theory of Sequential State Discrimination, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.100501}{Phys. Rev. Lett. 111, 100501 (2013)}
\bibitem{std2} D. Fields, R. Han, M. Hillery and J. A. Bergou, Extracting unambiguous information from a single qubit by sequential observers, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.101.012118}{Phys. Rev. A 101, 012118 (2020)}
\bibitem{ran1} F. J. Curchod, M. Johansson, R. Augusiak, M. J. Hoban, P. Wittek, and A. Acín, Unbounded randomness certification using sequences of measurements, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.020102}{Phys. Rev. A 95, 020102(R) (2017)}
\bibitem{oszmaniec2017} M. Oszmaniec, L. Guerini, P. Wittek, and A. Acín, Extracting Information from a Qubit by Multiple Observers: Simulating Positive-Operator-Valued Measures with Projective Measurements, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.190501}{Phys. Rev. Lett. 119, 190501 (2017)}
\bibitem{ambainis} A. Ambainis, D. Leung, L. Mancinska, M. Ozols, Quantum Random Access Codes with Shared Randomness, \href{https://arxiv.org/abs/0810.2937}{ arXiv:0810.2937v3}
\bibitem{spekk09} R. W. Spekkens, D. H. Buzacott, A. J. Keehn, B. Toner and G. J. Pryde, Preparation contextuality powers parity-oblivious multiplexing, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.102.010401}{Phys. Rev. Lett. 102, 010401(2009).}
\bibitem{ghorai18} S. Ghorai and A. K. Pan, Optimal quantum preparation contextuality in an $n$-bit parity-oblivious multiplexing task, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.032110}{Phys. Rev. A, 98, 032110 (2018).}
\bibitem{spek05} R. W. Spekkens, Contextuality for preparations, transformations, and unsharp measurements, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.71.052108}{Phys. Rev. A 71, 052108 (2005).}
\bibitem{pan19} A. K. Pan, Revealing universal quantum contextuality through communication games, \href{https://www.nature.com/articles/s41598-019-53701-5}{Scientific Reports volume 9, 17631 (2019). }
\bibitem{pan21} A. K. Pan, Oblivious communication game, self-testing of projective and nonprojective measurements, and certification of randomness, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.104.022212}{Phys. Rev. A, 104, 022212 (2021).}
\bibitem{kunjwal} R. Kunjwal and R. W. Spekkens, From the Kochen-Specker Theorem to Noncontextuality Inequalities without Assuming Determinism, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.110403}{Phys. Rev. Lett. 115, 110403 (2015) }
\bibitem{wei21} S. Wei, F. Guo, F. Gao and Q. Wen, Certification of three black boxes with unsharp measurements using 3$\rightarrow$1 sequential quantum random access codes, \href{https://iopscience.iop.org/article/10.1088/1367-2630/abf614}{ New J. Phys. 23 053014 (2021).}
\end{thebibliography}
\end{document} |
\begin{document}
\title{On Total Domination and Minimum Maximal Matchings in Graphs}
\author{Selim Bahad{\i}r}
\affil{Department of Mathematics-Computer,\\
Ankara Y\i ld\i r\i m Beyaz\i t University, Turkey\\
[email protected]}
\date{\today}
\maketitle
\pagenumbering{roman} \setcounter{page}{1}
\pagenumbering{arabic} \setcounter{page}{1}
\begin{abstract}
\noindent
A subset $M$ of the edges of a graph $G$ is a matching if no two edges in $M$ are incident.
A maximal matching is a matching that is not contained in a larger matching.
A subset $S$ of vertices of a graph $G$ with no isolated vertices is a
total dominating set of $G$ if every vertex of $G$ is adjacent to at least one vertex in $S$.
Let $\mu^*(G)$ and $\gamma_t(G)$ be the minimum cardinalities of a maximal matching and a total dominating set in $G$, respectively.
Let $\delta(G)$ denote the minimum degree in graph $G$.
We observe that $\gamma_t(G)\leq 2\mu^*(G)$ when $1\leq \delta(G)\leq 2$ and $\gamma_t(G)\leq 2\mu^*(G)-\delta(G)+2$ when $\delta(G)\geq 3$.
We show that the upper bound for the total domination number is tight for every fixed $\delta(G)$.
We provide a constructive characterization of graphs $G$ satisfying $\gamma_t(G)= 2\mu^*(G)$ and a polynomial time procedure to determine whether $\gamma_t(G) = 2\mu^*(G)$ for a graph $G$ with minimum degree two.
\end{abstract}
\noindent
{\it Keywords: minimum maximal matching, total domination number} \\
\section{Introduction}
\label{sec:intro}
Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$.
The neighborhood of a vertex $v\in V(G)$, denoted by $N(v)$, is the set of vertices adjacent to $v$.
The \emph{degree} of a vertex $v$ is the cardinality of $N(v)$ and denoted by $d(v)$.
The minimum degree of the graph $G$ is denoted by $\delta(G)$.
Throughout this paper, we only
consider simple, finite and undirected graphs without isolated vertices.
A set $S\subseteq V(G)$ of vertices is called a \emph{dominating set} of $G$ if every vertex of $V(G)\backslash S$ is adjacent to a member of $S$.
The \emph{domination number} $\gamma(G)$ is the minimum cardinality of a dominating set of $G$.
If $G$ has no isolated vertices, a subset $S\subseteq V(G)$ is called a \emph{total dominating set} of $G$ if every vertex of $V(G)$ is adjacent to a member of $S$.
In other words, $S$ is a total dominating set if $S$ is a dominating set and the subgraph of $G$ induced by $S$ has no isolated vertices.
The \emph{total domination number} of $G$ with no isolated vertices, denoted by $\gamma_t(G)$, is the minimum size of a total dominating set of $G$.
Two edges in $G$ are \emph{independent} if they share no vertex.
A set $M\subseteq E(G)$ consisting of pairwise independent edges is called a \emph{matching} in $G$.
A \emph{maximal matching} of $G$ is a matching that is not contained in a
larger matching in $G$.
The \emph{matching number} of $G$ is the maximum cardinality of a matching in $G$ and denoted by $\mu(G)$ (also $\alpha'(G)$ and $\nu(G)$).
Let $\mu^*(G)$ denote the minimum cardinality of a maximal matching in $G$.
A set $D\subseteq E(G)$ is called \emph{edge dominating set} if every edge not in $D$ is adjacent to at least one edge in $D$ and minimum cardinality of an edge dominating set is denoted by $\gamma'(G)$.
Any maximal matching is always an edge dominating set.
Furthermore, the size of a minimum edge dominating set equals the size of a minimum maximal matching, i.e., $\mu^*(G)=\gamma'(G)$ for every graph $G$ (see, \cite{yannakakis1980edge}).
Obtaining bounds on total domination number in terms of other graph parameters and classifying graphs whose total domination number attains an upper or lower bound are studied by many authors (see, Chapter 2 in \cite{yeo:2013}).
For example, \citeauthor{cockayne:1980} \cite{cockayne:1980} showed that if $G$ is a connected graph with order at least 3, then $\gamma_t(G)\leq 2|V(G)|/3$.
Moreover, \citeauthor{brigham:2000} \cite{brigham:2000} provided that a connected graph $G$ satisfies
$\gamma_t(G)= 2|V(G)|/3$ if and only if $G$ is a cycle of length 3 or 6, or $H\circ P_2$ for some connected graph $H$,
where $P_2$ is a path of length 2 and $H\circ P_2$ is obtained by identifying each vertex of $H$ by an end vertex of a copy of $P_2$.
It is well-known that $\gamma(G) \leq \gamma_t(G) \leq 2\gamma(G)$.
An open question in \cite{henning:2009} is to characterize the graphs $G$ with $\gamma_t(G)=2\gamma(G)$.
As partial answers of this problem, \citeauthor{henning:2001} \cite{henning:2001} provided a constructive characterization of trees,
\citeauthor{hou:2010} \cite{hou:2010} generalized it to block graphs and \citeauthor{sbdg:2018} \cite{sbdg:2018} presented a characterization of a large family of graphs (including chordal graphs) satisfying $\gamma_t(G)= 2\gamma(G)$.
For every graph $G$ with no isolated vertex it is true that $\gamma(G)\leq \mu(G)$.
However, the inequality $\gamma_t(G)\leq \mu(G)$ does not always hold.
\citeauthor{henning2008matching} \cite{henning2008matching} prove that $\gamma_t(G)\leq \mu(G)$ is valid for every claw-free graph $G$ with $\delta(G)\geq 3$ and
every $k$-regular graph $G$ with $k\geq 3$.
\citeauthor{henning2013total} \cite{henning2013total} show that
if all vertices in a connected graph $G$ with at least four vertices belong to a triangle, then $\gamma_t(G)\leq \mu(G)$.
Claw-free graphs with minimum degree three that have equal total domination and matching numbers are determined in \cite{henning2006total}, whereas every tree $T$ satisfying $\gamma_t(T)\leq \mu(T)$ is characterized in \cite{shiu2010some}.
Unlike the inequality $\gamma_t(G)\leq \mu(G)$,
the inequality $\gamma_t(G)\leq 2\mu^*(G)$ holds for every graph $G$ with no isolated vertex since the vertex set of a maximal matching is a total dominating set.
However, the inequality is tight only when the minimum degree is one or two.
We observe that if $\delta(G)\geq 3$, then $\gamma_t(G)\leq 2\mu^*(G)-\delta(G)+2$ and the equality is attained by infinitely many graphs for any given minimum degree.
In this paper,
we mainly study the graphs attaining the upper bound $\gamma_t(G)= 2\mu^*(G)$ for the total domination number, and refer to them
as $(\gamma_t,2\mu^*)$-graphs.
We characterize all $(\gamma_t,2\mu^*)$-graphs and present a process with polynomial time complexity to determine whether a given graph $G$ with $\delta(G)=2$ is a $(\gamma_t,2\mu^*)$-graph.
The rest of this paper is organized as follows:
Section \ref{sec:mainres} provides the main results and
the proofs of the main theorems are given in Section \ref{sec:proof}.
Discussion and conclusions are provided in Section \ref{sec:dis}.
\section{Main Results}
\label{sec:mainres}
\subsection{An Upper Bound for the Total Domination Number}
We first provide some definitions and notations required for the statement of the main results.
An edge joining vertices $u$ and $v$ is denoted as $uv$.
For a matching $M=\{u_1v_1,\dots, u_nv_n\}$ in a graph,
let $V(M)=\{u_1,v_1,\dots,u_n,v_n\}$
and $M_p(w)$ be the neighbor of $w$ in $M$ for every vertex $w$ in $V(M)$, i.e., $M_p(u_i)=v_i$ and $M_p(v_i) =u_i$ for $i=1,\dots,n$.
Notice that $M_p(M_p(w))=w$ for every $w\in V(M)$.
For a subset $A$ of $V(M)$ let $M_p(A)=\{M_p(v):v\in A\}$.
A set of vertices is called \emph{independent} if there is no edge joining two of the vertices in the set.
\begin{observation}\label{obs:maxmathind}
For every graph $G$ and every maximal matching $M$ in $G$, $V(G)\backslash V(M)$ is an independent set.
\end{observation}
\begin{observation}\label{obs:maxmatchtds}
Let $G$ be a graph with no isolated vertices and $M$ be a maximal matching of $G$.
Then, $V(M)$ is a total dominating set in $G$.
\end{observation}
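For concreteness, the following short Python sketch (our illustration, not part of the formal development; graphs are stored as dictionaries mapping each vertex to the set of its neighbors, and the small example graph is hypothetical) checks Observations \ref{obs:maxmathind} and \ref{obs:maxmatchtds} on one instance: the vertex set of a greedily computed maximal matching is a total dominating set, and the uncovered vertices form an independent set.
\begin{verbatim}
def greedy_maximal_matching(G):
    # scan the edges once; add an edge whenever both ends are still uncovered
    M, covered = set(), set()
    for u in G:
        for v in G[u]:
            if u not in covered and v not in covered:
                M.add(frozenset((u, v)))
                covered |= {u, v}
    return M

def is_total_dominating(G, S):
    # every vertex of G (including those in S) must have a neighbor in S
    return all(G[v] & S for v in G)

# a 6-cycle 0-1-2-3-4-5 with a pendant vertex 6 attached to vertex 0
G = {0: {1, 5, 6}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4},
     4: {3, 5}, 5: {4, 0}, 6: {0}}
M = greedy_maximal_matching(G)
VM = {v for e in M for v in e}
assert is_total_dominating(G, VM)                       # V(M) is a total dominating set
uncovered = set(G) - VM
assert all(not (G[u] & uncovered) for u in uncovered)   # uncovered vertices are independent
\end{verbatim}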
\begin{proposition}\label{prop:gammatdeg3}
Let $G$ be a graph with no isolated vertices.
If $1\leq \delta(G) \leq 2$, then $\gamma_t(G)\leq 2\mu^*(G)$.
If $\delta(G)\geq 3$, then we have $\gamma_t(G)\leq 2\mu^*(G)-\delta(G)+2$.
\end{proposition}
\begin{proof}
Let $M$ be a minimum maximal matching in $G$ and $\delta=\delta(G)$.
By Observation \ref{obs:maxmatchtds} we get $\gamma_t \leq |V(M)|=2\mu^*(G)$ and hence, we obtain the inequality for $\delta \in \{1,2\}$.
When the minimum degree is at least three we can obtain a better inequality for the total domination number.
Suppose that $V(G)\backslash V(M)$ is not empty and let $x$ be a vertex in $V(G)\backslash V(M)$.
Recall that $N(x)\subseteq V(M)$.
Let $A$ be a subset of $N(x)$ of size $\delta(G)-1$.
Consider the set $S=(V(M)\backslash M_p(A))\cup \{x\}$, which is obtained by removing $\delta-1$ vertices from $V(M)$ and adding one vertex not in $V(M)$.
We claim that $S$ is a total dominating set of $G$.
For every $y$ not in $V(M)$ we have $N(y)\subseteq V(M)$ and $|N(y)|\geq \delta$.
Thus, $N(y)\cap S\neq \emptyset$.
Now let $v$ be a vertex in $V(M)$.
If $v\in A$, then $v$ is adjacent to $x\in S$.
If $v\notin A$, then $M_p(v)\in S$ and hence $M_p(v)\in N(v)\cap S$.
Consequently, we see that $S$ is a total dominating set and $\gamma_t \leq |S|=|V(M)|-(\delta -1)+1=2\mu^*(G)-\delta +2$.
Next, assume that $V(G)\backslash V(M)$ is empty.
In this case, we see that $V(M)=V(G)$.
As the minimum degree is $\delta$, any set obtained by removing $\delta -1$ vertices from $V(G)$ is a total dominating set and hence,
we obtain $\gamma_t(G)\leq |V(G)|-(\delta -1)=|V(M)|-\delta +1=2\mu^*(G)-\delta +1< 2\mu^*(G)-\delta +2$.
\end{proof}
\begin{figure}
\caption{Examples of $(\gamma_t,2\mu^*)$-graphs.
On the left, graph $G_1$ consists of $n$ paths of length three sharing a common end vertex.
Note that $\gamma_t(G_1)=2n$, $\mu^*(G_1)=n$ and $\delta(G_1)=1$.
On the right, graph $G_2$ is obtained by subdividing every horizontal edge of a $1\times n$ grid graph.
Notice that $\gamma_t(G_2)=2n+2$, $\mu^*(G_2)=n+1$ and $\delta(G_2)=2$.
}
\label{fig:exgammat2mu}
\end{figure}
Proposition \ref{prop:gammatdeg3} implies that every $(\gamma_t,2\mu^*)$-graph has minimum degree one or two.
Examples of $(\gamma_t,2\mu^*)$-graphs are illustrated in Figure \ref{fig:exgammat2mu}.
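As a sanity check, the following brief Python sketch (a brute-force computation feasible only for very small instances; the graph encoding is ours) builds $G_1$ for $n=2$ and verifies the values $\gamma_t(G_1)=2n$ and $\mu^*(G_1)=n$ stated in the caption of Figure \ref{fig:exgammat2mu}.
\begin{verbatim}
from itertools import combinations

def build_G1(n):
    # n paths of length three glued at a common end vertex 'c'
    G = {'c': set()}
    for i in range(n):
        path = ['c', (i, 1), (i, 2), (i, 3)]
        for u, v in zip(path, path[1:]):
            G.setdefault(u, set()).add(v)
            G.setdefault(v, set()).add(u)
    return G

def is_tds(G, S):
    return all(G[v] & S for v in G)

def gamma_t(G):
    V = list(G)
    for k in range(1, len(V) + 1):
        if any(is_tds(G, set(S)) for S in combinations(V, k)):
            return k

def mu_star(G):
    E = {frozenset((u, v)) for u in G for v in G[u]}
    def is_matching(M):
        covered = [v for e in M for v in e]
        return len(covered) == len(set(covered))
    def is_maximal(M):
        covered = {v for e in M for v in e}
        return all(e & covered for e in E - set(M))
    for k in range(1, len(E) + 1):
        if any(is_matching(M) and is_maximal(M) for M in combinations(E, k)):
            return k

n = 2
G = build_G1(n)
assert gamma_t(G) == 2 * n and mu_star(G) == n
\end{verbatim}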
The following result shows that the inequality in Proposition \ref{prop:gammatdeg3} is tight when the minimum degree is at least three.
\begin{proposition}
For every positive integer $\delta \geq 3$, there exist infinitely many graphs $G$ with $\gamma_t(G)=2\mu^*(G)-\delta +2$ and $\delta(G)=\delta$.
\end{proposition}
\begin{proof}
Let $\delta \geq 3$ be a positive integer.
Let $M$ be the disjoint union of $n$ copies of $K_2$, where $n\geq (\delta +1)/2$.
For every subset $A$ of $V(M)$ with $|A|=\delta$, add a new vertex $v_A$ whose neighborhood is $A$.
Let $G$ be the resulting graph.
The degree of a vertex in $V(M)$ is $1+\binom{2n-1}{\delta -1}\geq 1+\binom{\delta}{\delta-1}>\delta$ and each vertex not in $V(M)$ has degree $\delta$.
Therefore, the minimum degree of $G$ is $\delta$.
As $M$ is a maximal matching, we have $\mu^*(G)\leq n$.
Let $S$ be a total dominating set of $G$.
If $V(M)\subseteq S$, then $|S|\geq 2n> 2n-\delta +2$.
Otherwise, let $v$ be a vertex in $V(M)\backslash S$.
Since $S$ is a total dominating set, there exists a vertex $x\in S$ adjacent to $M_p(v)$.
Notice that $x\notin V(M)$.
If $|V(M)\backslash S|\geq \delta $, then there exists no vertex in $S$ adjacent to $v_A$ whenever $A\subseteq V(M)\backslash S$ and $|A|=\delta$, contradicting the assumption that $S$ is a total dominating set.
Therefore, we get $|V(M)\backslash S|\leq \delta -1$ and then, $|S\cap V(M)|\geq 2n-(\delta-1)$.
Together with $x$, we see that $|S|\geq 2n-(\delta-1)+1=2n-\delta +2$.
Thus, in both cases we have the inequality $2n-\delta+2\leq |S|$ which gives $2n-\delta +2\leq \gamma_t(G)$.
Then we obtain the inequality chain
$2n-\delta+2\leq \gamma_t(G)\leq 2\mu^*(G)-\delta+2\leq 2n-\delta+2$ which implies $\gamma_t(G)= 2\mu^*(G)-\delta+2$.
\end{proof}
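The construction in the proof above is easy to reproduce programmatically; the following sketch (plain Python; the parameter values $\delta=3$ and $n=2$ are an arbitrary small choice of ours) builds such a graph and confirms that its minimum degree equals $\delta$.
\begin{verbatim}
from itertools import combinations

def extremal_graph(n, delta):
    G, VM = {}, []
    for i in range(n):                       # n disjoint copies of K_2
        u, v = ('u', i), ('v', i)
        G[u], G[v] = {v}, {u}
        VM += [u, v]
    for A in combinations(VM, delta):        # one new vertex v_A per delta-subset A
        vA = ('A',) + A
        G[vA] = set(A)
        for w in A:
            G[w].add(vA)
    return G

G = extremal_graph(n=2, delta=3)
assert min(len(N) for N in G.values()) == 3  # minimum degree equals delta
\end{verbatim}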
\subsection{Construction of $(\gamma_t,2\mu^*)$-Graphs}
In this paper, we not only show that there exist infinitely many $(\gamma_t,2\mu^*)$-graphs in both cases of the minimum degree but also provide a procedure for constructing all $(\gamma_t,2\mu^*)$-graphs.
A \emph{leaf} of a graph is a vertex with degree one,
while a \emph{support vertex} of a graph is a vertex adjacent to a leaf.
Let $sup(G)$ denote the set of all support vertices in the graph $G$.
Let $S^+(G)$ be the set of support vertices which are adjacent to a support vertex and let $S^-(G)=sup(G)\backslash S^+(G)$ which is the set of isolated vertices in the subgraph of $G$ induced by $sup(G)$.
A matching is called \emph{perfect} if it covers all the vertices in the graph.
\begin{theorem}\label{thm:MainThm1}
Let $G$ be a graph with $1\leq \delta(G)\leq 2$.
Then, $G$ is a $(\gamma_t,2\mu^*)$-graph if and only if there exists a maximal matching $M=M^+ \cup M^- \cup M^*$ in $G$ satisfying the following conditions:
\begin{enumerate}[label=(\roman*)]
\item $M^+$ is a perfect matching of the subgraph of $G$ induced by $S^+(G)$.
\item $S^-(G)\subseteq V(M^-)$ and every edge in $M^-$ joins a vertex from $S^-(G)$ and a vertex from $V(G)\backslash sup(G)$.
\item For every $v\in S^-(G) \cup V(M^*)$, $M_p(v)$ is the unique neighbor of $v$ among the vertices in $V(M)$.
\item For every pair of distinct vertices $u$ and $v$ in $S^-(G) \cup V(M^*)$, whenever $u$ and $v$ have a common neighbor, there exists a vertex whose neighborhood is $\{M_p(u),M_p(v)\}$.
\end{enumerate}
\end{theorem}
The proof of Theorem \ref{thm:MainThm1} will be given in the next section.
Note that when $\delta(G)=2$ there is no leaf in $G$ and hence, $sup(G)=\emptyset$.
Therefore, Theorem \ref{thm:MainThm1} implies the following result for the graphs with minimum degree two.
\begin{corollary}\label{cor:maindeg2}
Let $G$ be a graph with $\delta(G)=2$.
Then, $G$ is a $(\gamma_t,2\mu^*)$-graph if and only if there exists a maximal matching $M$ in $G$ satisfying the following two conditions:
\begin{enumerate}[label=(\roman*)]
\item For every $v\in V(M)$, $M_p(v)$ is the unique neighbor of $v$ among the vertices in $V(M)$.
\item For every pair of distinct vertices $u$ and $v$ in $V(M)$, whenever $u$ and $v$ have a common neighbor, there exists a vertex whose neighborhood is $\{M_p(u),M_p(v)\}$.
\end{enumerate}
\end{corollary}
By Theorem \ref{thm:MainThm1} we can characterize $(\gamma_t,2\mu^*)$-graphs in a constructive way.
Let $\mathcal{F}$ be the family of graphs obtained by following the steps below:
\begin{enumerate}
\item
Let $M$ be the disjoint union of some copies of $K_2$ and let $A$ be a set of vertices disjoint from $V(M)$.
\item
Mark some (might be none) of the vertices in $V(M)$.
Let $L$ be the set of vertices $v$ in $V(M)$ such that $M_p(v)$ is unmarked.
\item
For each vertex $v$ in $A$ draw at least two edges joining $v$ to vertices in $V(M)$.
\item
For every leaf $v$ in $L$, join $v$ to some vertices in $A$.
\item
Draw some (might be none) edges joining two vertices of $V(M)\backslash L$.
\item
For every pair of distinct vertices $u$ and $v$ in $L$,
if $u$ or $v$ is marked and $N(u)\cap N(v)\neq \emptyset$,
then add a new vertex whose neighborhood is $\{M_p(u),M_p(v)\}$
unless such a vertex already exists.
\item
For every pair of distinct vertices $u$ and $v$ in $L$,
if neither $u$ nor $v$ is marked and $(N(u)\cap N(v))\cup (N(M_p(u)) \cap N(M_p(v)))\neq \emptyset$,
then add two new vertices with neighborhoods $\{u,v\}$ and $\{M_p(u),M_p(v)\}$ unless such vertices already exist (when $v=M_p(u)$ add only one such vertex).
\item
Finally, for every marked vertex $v$, if $v$ is not a support vertex, then add some new vertices and join them to $v$.
\end{enumerate}
Construction of a graph in $\mathcal{F}$ is illustrated in Figure \ref{fig:exF}.
\begin{figure}
\caption{Construction of a graph $G$ in $\mathcal{F}$.}
\label{fig:exF}
\end{figure}
In the procedure above, when at least one vertex is marked in the second step we obtain a graph with minimum degree one.
If no vertex is marked in the second step, then $L=V(M)$ and the resulting graph is of minimum degree two.
Let $G$ be a member of $\mathcal{F}$.
Observe that the marked vertices correspond to $sup(G)$.
Note also that $S^+(G)$ is the set of marked vertices $v$ such that $M_p(v)$ is also marked and $S^-(G)$ is the set of marked vertices $v$ such that $M_p(v)$ is not marked.
Therefore, we can split $M$ into three matchings $M^+=\{vM_p(v):v\in V(M), \text{ both } v \text{ and } M_p(v) \text{ are marked} \}$,
$M^-=\{vM_p(v):v\in V(M), v \text{ is marked but } M_p(v) \text{ is not marked} \}$ and
$M^*=\{vM_p(v):v\in V(M), \text{ neither } v \text{ nor } M_p(v) \text{ is marked} \}$.
Thus, the set $L$ corresponds to $S^-(G)\cup V(M^*)$.
Under the assumption of condition (iv) of Theorem \ref{thm:MainThm1},
for every pair of distinct vertices $u$ and $v$ in $V(M^*)$ we have $N(u)\cap N(v)\neq \emptyset$ if and only if $N(M_p(u))\cap N(M_p(v))\neq \emptyset $ since $M_p(M_p(x))=x$ for every $x$.
That is why two vertices are simultaneously inserted in the seventh step of the construction of $\mathcal{F}$.
Consequently, it is easy to verify that a graph $G$ has a maximal matching fulfilling the conditions in Theorem \ref{thm:MainThm1} if and only if $G$ belongs to $\mathcal{F}$ and hence, we obtain the following result.
\begin{corollary}\label{cor:MainThm2}
A graph $G$ is a $(\gamma_t,2\mu^*)$-graph if and only if $G\in \mathcal{F}$.
\end{corollary}
Corollary \ref{cor:MainThm2} enables us to construct all $(\gamma_t,2\mu^*)$-graphs and shows that there exist infinitely many $(\gamma_t,2\mu^*)$-graphs in both cases of the minimum degree.
\subsection{$(\gamma_t,2\mu^*)$-Graphs with Minimum Degree Two}
In this subsection, we provide a procedure to determine whether a given graph with minimum degree two is a $(\gamma_t,2\mu^*)$-graph.
Let $\mathcal{K}$ be the family of graphs consisting of triangles $C_3$ sharing a common edge, that is, graphs isomorphic to a graph $G$ with $V(G)=\{u,v,w_1,\dots,w_n\}$ and $E(G)=\{uv,uw_1,vw_1,\dots,uw_n,vw_n\}$ for some positive integer $n$.
Let $d_2(G)$ be the set of vertices in $G$ of degree two, i.e., $d_2(G)=\{v\in V(G):d(v)=2\}$.
Let $G$ be a graph with connected components $G_1,\dots,G_n$ and no isolated vertex.
Since $\gamma_t(G)=\sum_{i=1}^n \gamma_t(G_i)$, $\mu^*(G)=\sum_{i=1}^n \mu^*(G_i)$ and $\gamma_t(G_i)\leq 2\mu^*(G_i)$ for $i=1,\dots,n$,
we have $\gamma_t(G)=2\mu^*(G)$ if and only if $\gamma_t(G_i)=2\mu^*(G_i)$
for every $i\in\{1,\dots,n\}$.
Therefore, it is sufficient to consider only connected graphs.
For every connected graph $G\notin \mathcal{K}\cup \{C_6\}$,
let $\mathcal{M}=\mathcal{M}(G)$ be the set of edges $uv$ such that there exist $x\in N(u)\cap d_2(G)$ and $y\in N(v)\cap d_2(G)$ so that the edges $xu,uv,vy$ belong to the same induced $C_6$ in $G$.
In other words,
$\mathcal{M}$ can be constructed as follows:
Initially set $\mathcal{M}=\emptyset$ and then,
for every pair of distinct vertices $x$ and $y$ in $d_2(G)$, if the subgraph of $G$ induced by $N(x)\cup N(y)\cup \{x,y\}$ is a $C_6$, then add both of the edges of the cycle which are incident to neither $x$ nor $y$ to the set $\mathcal{M}$.
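The set $\mathcal{M}$ can be computed directly from this description; a minimal Python sketch (graphs again given as dictionaries of neighbor sets; this is an illustration of the definition, not an optimized implementation) is the following.
\begin{verbatim}
from itertools import combinations

def is_induced_C6(G, S):
    # S induces a 6-cycle iff |S| = 6, every vertex of S has exactly two
    # neighbors inside S, and following those neighbors closes one 6-cycle
    if len(S) != 6:
        return False
    H = {v: G[v] & S for v in S}
    if any(len(nbrs) != 2 for nbrs in H.values()):
        return False
    start = next(iter(S))
    prev, cur, seen = None, start, {start}
    while True:
        nxt = next(v for v in H[cur] if v != prev)
        if nxt == start:
            return len(seen) == 6
        seen.add(nxt)
        prev, cur = cur, nxt

def script_M(G):
    d2 = [v for v in G if len(G[v]) == 2]
    M = set()
    for x, y in combinations(d2, 2):
        S = G[x] | G[y] | {x, y}
        if is_induced_C6(G, S):
            # add the two edges of the cycle incident to neither x nor y
            for u, v in combinations(S - {x, y}, 2):
                if v in G[u]:
                    M.add(frozenset((u, v)))
    return M
\end{verbatim}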
For a given graph $G$ with $\delta(G)=2$, the following result makes it possible to check whether $\gamma_t(G)=2\mu^*(G)$.
\begin{theorem}\label{thm:MainDeg2}
Let $G$ be a connected graph with $\delta(G)=2$.
Then, $G$ is a $(\gamma_t,2\mu^*)$-graph if and only if
$G\in \mathcal{K}\cup \{C_6\}$ or
$\mathcal{M}$ is a maximal matching satisfying the following two conditions:
\begin{enumerate}[label=(\roman*)]
\item For every $v\in V(\mathcal{M})$, $\mathcal{M}_p(v)$ is the unique neighbor of $v$ among the vertices of $V(\mathcal{M})$.
\item For every pair of distinct vertices $u$ and $v$ in $V(\mathcal{M})$, whenever $u$ and $v$ share a neighbor, there exists a vertex with neighborhood $\{\mathcal{M}_p(u),\mathcal{M}_p(v)\}$.
\end{enumerate}
\end{theorem}
For general graphs,
both of the problems of finding the total domination number and finding the minimum maximal matching number are NP-complete
(see, \cite{pfaff1983np} and \cite{yannakakis1980edge}, respectively.)
However,
constructing the set $\mathcal{M}$ and checking whether it is a maximal matching and satisfies the conditions (i) and (ii) in Theorem \ref{thm:MainDeg2} can be easily done by an algorithm with polynomial time complexity.
Therefore, the problem of determining whether $\gamma_t (G)=2\mu^*(G)$ for a graph $G$ with $\delta(G)= 2$ is polynomial time solvable.
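To make the claimed procedure explicit, the following sketch (plain Python, reusing the neighbor-set representation; \texttt{script\_M} refers to the routine sketched above, and the conditions are those of Theorem \ref{thm:MainDeg2}) checks whether a candidate edge set is a maximal matching satisfying conditions (i) and (ii).
\begin{verbatim}
from itertools import combinations

def is_maximal_matching(G, M):
    covered = set()
    for e in M:
        if covered & e:                      # two edges of M share a vertex
            return False
        covered |= e
    # maximality: no edge of G joins two uncovered vertices
    return all(G[u] <= covered for u in G if u not in covered)

def satisfies_conditions(G, M):
    partner = {}
    for e in M:
        u, v = tuple(e)
        partner[u], partner[v] = v, u
    VM = set(partner)
    # condition (i): the matched partner is the only neighbor inside V(M)
    if any(G[v] & VM != {partner[v]} for v in VM):
        return False
    # condition (ii): a common neighbor of u, v in V(M) forces a vertex
    # whose neighborhood is exactly {partner[u], partner[v]}
    for u, v in combinations(VM, 2):
        if G[u] & G[v] and not any(G[w] == {partner[u], partner[v]} for w in G):
            return False
    return True

def is_gamma_t_2mu_star(G):                  # G connected, minimum degree two,
    M = script_M(G)                          # and G not in K u {C6} assumed
    return is_maximal_matching(G, M) and satisfies_conditions(G, M)
\end{verbatim}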
The girth of a graph $G$, denoted by $g(G)$, is the length of a shortest cycle (if any) in $G$.
Acyclic graphs (forests) are considered to have infinite girth.
Let $G$ be a connected $(\gamma_t,2\mu^*)$-graph with $\delta(G)=2$.
If $G\in \mathcal{K}\cup \{C_6\}$, then the girth of $G$ is either 3 or 6.
Otherwise, $\mathcal{M}$ has at least two edges and thus, $G$ has an induced $C_6$.
Therefore, $G$ has an induced $C_3$ or $C_6$ and hence, Theorem \ref{thm:MainDeg2} implies the following result.
\begin{corollary}\label{cor:girth}
If $G$ is a $(\gamma_t,2\mu^*)$-graph with $\delta(G)= 2$, then $g(G)\leq 6$.
\end{corollary}
\section{Proofs of the Main Results}
\label{sec:proof}
\subsection{Proof of Theorem \ref{thm:MainThm1}}
We first present some results which are simple to observe and useful for the proofs.
\begin{observation}\label{obs:tdssup}
Let $G$ be a graph without isolated vertices.
Every total dominating set of $G$ contains $sup(G)$.
\end{observation}
\begin{observation}\label{obs:maxmatchsup}
For every graph $G$ and every maximal matching $M$ of $G$, we have $sup(G)\subseteq V(M)$.
\end{observation}
\begin{lemma}\label{lem:MT->}
Let $G$ be a $(\gamma_t,2\mu^*)$-graph.
Then $G$ has a maximal matching $M=M^+ \cup M^- \cup M^*$ satisfying the conditions in Theorem \ref{thm:MainThm1}.
\end{lemma}
\begin{proof}
Let $G$ be a graph such that $\gamma_t(G)=2\mu^*(G)$.
Let $\mu^*(G)=k$ and $M$ be a maximal matching of size $k$.
Note that $\gamma_t(G)=2k$ and $V(M)$ is a minimum total dominating set.
Let $v$ be a vertex in $V(M)$ such that $M_p(v)$ is not a support vertex.
We will show that $M_p(v)$ is the unique neighbor of $v$ in $V(M)$.
Suppose that $v$ has a neighbor in $V(M)$ other than $M_p(v)$.
Since $\gamma_t(G)=2k$, the set $V(M)\backslash \{M_p(v)\}$, which has $2k-1$ vertices, is not a total dominating set; hence there exists a vertex $x$ with no neighbor in $V(M)\backslash \{M_p(v)\}$. As $V(M)$ is a total dominating set, $x$ has a neighbor in $V(M)$, and therefore $N(x)\cap V(M)=\{M_p(v)\}$.
Clearly $x$ is not equal to $v$ or any other vertex in $V(M)$, that is,
$x\notin V(M)$.
But then, we see that $N(x)\subseteq V(M)$ by Observation \ref{obs:maxmathind} and hence, $x$ is a leaf adjacent to $M_p(v)$.
However, $M_p(v)$ is not a support vertex, contradiction.
Consequently, for every $v\in V(M)$ satisfying $M_p(v)\notin sup(G)$ we have $N(v)\cap V(M)=\{M_p(v)\}$.
Now, let $v$ be a support vertex in $S^+(G)$.
By Observation \ref{obs:maxmatchsup} we have $v\in V(M)$ and $v$ is adjacent to a support vertex which is also in $V(M)$.
Therefore, by the argument above we obtain that $M_p(v)$ is a support vertex and hence, by definition of $S^+(G)$ we see that $M_p(v)\in S^+(G)$.
Consequently, we obtain that $v\in S^+(G)$ implies $M_p(v) \in S^+(G)$ and
$M^+=\{vM_p(v):v\in S^+(G) \}$ is a perfect matching of the subgraph of $G$ induced by $S^+(G)$ and contained in $M$.
Let $M^-=\{vM_p(v):v\in S^-(G)\}$ and note that $M^-$ is a subset of $M$.
Clearly, by definition of $S^-(G)$ we have $v\in S^-(G)$ implies $M_p(v)\notin sup(G)$.
Therefore, $M^+$ and $M^-$ satisfy the first two conditions.
Now let $M^*=M\backslash (M^+\cup M^-)$, which is the set of edges in $M$ neither of whose endpoints is a support vertex.
Let $v$ be a vertex in $S^-(G)\cup V(M^*)$.
Recall that $M_p(v)$ is not a support vertex and therefore,
the third condition is also satisfied.
Finally, let $u$ and $v$ be distinct vertices of $S^-(G)\cup V(M^*)$ sharing a common neighbor.
Let $w\in N(u)\cap N(v)$ and note that $w$ is not in $V(M)$.
Consider the set
$A=(V(M)\backslash \{M_p(u),M_p(v)\}) \cup \{w\}$.
As $\gamma_t(G)=2k$ and $|A|=2k-1$, there exists a vertex $x$ adjacent to no vertex in $A$.
Clearly $x$ is not a vertex in $V(M)$: if $x\in V(M)\backslash\{u,v\}$, then $M_p(x)\in A$, and if $x\in\{u,v\}$, then $x$ is adjacent to $w\in A$. Therefore, by Observation \ref{obs:maxmathind}, we get $N(x)\subseteq V(M)$.
Since $V(M)$ is a total dominating set, we have $N(x)\cap V(M)\neq \emptyset$ which implies $N(x)\subseteq \{M_p(u),M_p(v)\}$.
As neither $M_p(u)$ nor $M_p(v)$ is a support vertex, $x$ is not a leaf and consequently we obtain $N(x)=\{M_p(u),M_p(v)\}$; thus,
the last condition is also satisfied.
\end{proof}
\begin{lemma}\label{lem:MT<-}
Let $G$ be a graph with no isolated vertex.
If $G$ has a maximal matching $M=M^+ \cup M^- \cup M^*$ satisfying the conditions in Theorem \ref{thm:MainThm1},
then $G$ is a $(\gamma_t,2\mu^*)$-graph.
\end{lemma}
\begin{proof}
Let $M=M^+ \cup M^- \cup M^*$ be a maximal matching fulfilling the conditions and let $|M|=k$.
Notice that it suffices to prove that $2k\leq \gamma_t(G)$,
since $\gamma_t(G)\leq 2\mu^*(G)\leq 2|M|=2k$.
Let $T$ be a total dominating set of $G$.
We will provide a one-to-one function $f:V(M)\rightarrow T$.
Let $U$ be the set of vertices $u$ in $V(M)$ such that $M_p(u)\notin T$.
By Observation \ref{obs:tdssup} and the first two conditions we have $U\subseteq S^-(G)\cup V(M^*)$.
Since $T$ is a total dominating set, for every vertex $u\in U$ there exists a vertex in $T$ adjacent to $u$; let $f(u)$ be one such vertex.
Note that by the third condition we have $f(u)\notin V(M)$.
We claim that $f(u)\neq f(v)$ for distinct vertices $u$ and $v$ in $U$.
Assume the contrary and let $u$ and $v$ be two distinct vertices of $U$ such that $f(u)=f(v)$.
Then we see that $u$ and $v$ have a common neighbor and hence, the fourth condition implies
the existence of a vertex $x$ adjacent to only $M_p(u)$ and $M_p(v)$.
On the other hand, by definition none of $M_p(u)$ and $M_p(v)$ is in $T$ and therefore,
there is no vertex in $T$ adjacent to $x$, contradicting the fact that $T$ is a total dominating set.
For every vertex $u\in V(M)\backslash U$ set $f(u)$ to be $M_p(u)$.
Then, it is easy to verify that $f(u)\neq f(v)$ for all distinct vertices $u$ and $v$ in $V(M)$, that is, $f:V(M)\rightarrow T$ is an injection and hence,
$|V(M)|=2k\leq |T|$ for any total dominating set $T$.
Consequently, we get $2k \leq \gamma_t(G)$ and thus, $\gamma_t(G)= 2k$.
\end{proof}
Combining the results of Lemmas \ref{lem:MT->} and \ref{lem:MT<-} gives Theorem \ref{thm:MainThm1}.
\subsection{Proof of Theorem \ref{thm:MainDeg2}}
It is clear that if $G\in \mathcal{K}\cup\{C_6\}$, then $\gamma_t(G)=2\mu^*(G)$ and $\delta(G)=2$.
By Corollary \ref{cor:maindeg2}, we see that if $\mathcal{M}$ is a maximal matching satisfying the conditions in Theorem \ref{thm:MainDeg2}, then $G$ is a $(\gamma_t,2\mu^*)$-graph.
Therefore, in view of Corollary \ref{cor:maindeg2}, the following result suffices to prove Theorem \ref{thm:MainDeg2}.
\begin{lemma}
Let $G$ be a connected $(\gamma_t,2\mu^*)$-graph with $\delta(G)=2$ and $G\notin \mathcal{K}\cup \{C_6\}$.
Then $\mathcal{M}$ is the unique maximal matching satisfying the conditions (i) and (ii) in Corollary \ref{cor:maindeg2}.
\end{lemma}
\begin{proof}
Let $M$ be a maximal matching in $G$ satisfying the conditions (i) and (ii) in Corollary \ref{cor:maindeg2}.
We first show that $M\subseteq \mathcal{M}$.
Let $uv$ be an edge in $M$.
Then, since $G$ is connected and $G\notin \mathcal{K}$,
there exists a vertex $z\in (N(u)\cup N(v))\backslash \{u,v\}$ such that $N(z)\neq \{u,v\}$.
As the degree of $z$ is at least two,
$z$ has a neighbor $w$ which does not belong to $\{u,v\}$.
Note that $z$ is adjacent to at least one of $u$ and $v$.
Without loss of generality, suppose that $v$ is a neighbor of $z$.
Note also that $z\notin V(M)$ and $w\in V(M)$ by condition (i) and Observation \ref{obs:maxmathind}.
Therefore, $v$ and $w$ are distinct vertices in $V(M)$ both adjacent to $z$ and hence, by condition (ii), there exists a vertex $x$ such that
$N(x)=\{M_p(v),M_p(w)\}=\{u,M_p(w)\}$. Applying condition (ii) once more to $u$ and $M_p(w)$, which share the neighbor $x$, we obtain a vertex $y$ with $N(y)=\{v,w\}$.
Then, the subgraph of $G$ induced by $\{x,y,u,v,w,M_p(w)\}$ is a $C_6$ containing the edges $xu,uv,vy$ and thus, we obtain $uv\in \mathcal{M}$ which yields $M\subseteq \mathcal{M}$.
We next prove that $\mathcal{M}\subseteq M$ by contradiction.
Suppose that an edge $uv$ belongs to $\mathcal{M}\backslash M$.
Since $uv\in \mathcal{M}$, there exist vertices $w,t$ and $x,y\in d_2(G)$ such that the subgraph of $G$ induced by $\{x,u,v,y,w,t\}$ has the edge set $\{xu,uv,vy,yw,wt,tx\}$.
As $uv\notin M$, condition (i) and Observation \ref{obs:maxmathind} imply that exactly one of $u$ and $v$ belongs to $V(M)$.
Without loss of generality, let $u\in V(M)$.
Then, $v\notin V(M)$ and hence, $y\in V(M)$ by condition (i) and Observation \ref{obs:maxmathind}.
Therefore, $u$ and $y$ are distinct vertices of $V(M)$ sharing the neighbor $v$, and hence, applying condition (ii) first to $u$ and $y$ and then to $M_p(u)$ and $M_p(y)$, there exists a vertex $z$ whose neighborhood is $\{u,y\}$.
Since $y$ has degree two, $z$ is either $v$ or $w$.
However, $u$ and $w$ are not adjacent and thus, we get $z=v$ and $v\in d_2(G)$.
Since $y\in d_2(G)\cap V(M)$ and $v\notin V(M)$, we get $w\in V(M)$ and $M_p(y)=w$.
As both $w$ and $y$ are in $V(M)$, we see that $t\notin V(M)$ and hence, we also get $x\in V(M)$ and $M_p(u)=x$ by condition (i).
Thus, $x$ and $w$ are distinct vertices in $V(M)$ sharing the neighbor $t$, and therefore, by a double application of condition (ii) as above, there exists a vertex $z$ with $N(z)=\{x,w\}$.
As $x$ is of degree two and $u$ is not a neighbor of $w$, we obtain $z=t$ and $t\in d_2(G)$.
Now note that the vertices $x,y,v,t$ of this $C_6$ have degree two in $G$.
Since $G$ is connected and $G\neq C_6$,
at least one of $u$ and $w$ has a neighbor $s$ not on this cycle.
Without loss of generality, let $s$ be adjacent to $w$.
As $w\in V(M)$ and $s\neq y$, we have $s\notin V(M)$.
Moreover, since the degree of $s$ is at least two, Observation \ref{obs:maxmathind} implies that
$s$ is adjacent to a vertex $r$ in $V(M)\backslash \{w\}$.
Then, $w$ and $r$ share a neighbor and thus, by condition (ii) we see that there exists a vertex $q$ such that $N(q)=\{M_p(w),M_p(r)\}=\{y,M_p(r)\}$ and hence, $y$ is adjacent to $q$.
As $y\in d_2(G)$, $q$ must be the vertex $v$ and therefore, we get
$M_p(r)=u$ which gives $r=x$.
But then, $x$ is adjacent to three distinct vertices $u,t$ and $s$, contradicting $x\in d_2(G)$.
Consequently, we obtain $\mathcal{M}\subseteq M$ and the result follows.
\end{proof}
\section{Discussion and Conclusions}
\label{sec:dis}
In this paper, we study graphs $G$ for which the total domination number $\gamma_t(G)$ attains its upper bound $2\mu^*(G)$, that is, $(\gamma_t,2\mu^*)$-graphs.
We provide not only a constructive characterization of $(\gamma_t,2\mu^*)$-graphs but also
a polynomial time procedure to determine whether a given graph $G$ with $\delta(G)= 2$ is a $(\gamma_t,2\mu^*)$-graph.
Producing a polynomial time algorithm to determine $(\gamma_t,2\mu^*)$-graphs with at least one leaf is a topic of ongoing research.
Another potential research direction is to find a necessary and sufficient condition for the graphs $G$ satisfying $\delta(G)=\delta$ and $\gamma_t(G)=2\mu^*(G)-\delta+2$ for some or all positive integers $\delta\geq 3$.
\section*{Acknowledgments}
This work is supported by the Scientific and Technological Research
Council of Turkey (TUBITAK) under grant no. 118E799.
\end{document} |
\begin{document}
\title{Inequalities That Test Locality in Quantum Mechanics}
\date{\today}
\author{Dennis Dieks}\email{[email protected]}
\affiliation{Institute for the History and Foundations of Science\\
Utrecht University, P.O.Box 80.000 \\ 3508 TA Utrecht, The
Netherlands}
\begin{abstract}
Quantum theory violates Bell's inequality, but not to the maximum
extent that is logically possible. We derive inequalities
(generalizations of Cirel'son's inequality) that quantify the
upper bound of the violation, both for the standard formalism and
the formalism of generalized observables (POVMs). These
inequalities are quantum analogues of Bell inequalities, and they
can be used to test the quantum version of locality. We discuss
the nature of this kind of locality. We also go into the relation
of our results to an argument by Popescu and Rohrlich (Found.\
Phys.\ {\bf24}, 379 (1994)) that there is no general connection
between the existence of Cirel'son's bound and locality.
\end{abstract}
\pacs{03.65.Ta, 03.65.Ud}
\maketitle
\section{Introduction}\label{introduction}
The violation of Bell's inequality by the predictions of quantum
theory (both in its non-relativistic and relativistic versions)
shows that quantum theory is non-local in the sense that its
results cannot be reproduced by a hidden-variables theory in which
measurement results depend only on the local settings of the
measuring devices and on the properties of the objects being
measured (a local hidden-variables theory). However, the maximum
violation of Bell's inequality allowed by quantum theory is less
than the maximum violation that is logically possible: quantum
theory obeys Cirel'son's inequality. One might surmise that this
is due to the fact that quantum theory does not abandon locality
completely: after all, in situations of the
Einstein-Podolsky-Rosen (EPR) type the measurements performed on
one wing do not influence expectation values on the other wing
(the no-signaling theorem; in the relativistic context this is the
feature of relativistic causality). Perhaps compliance with a
no-signaling demand restricts the extent to which Bell's
inequality can be violated, and perhaps inequalities like
Cirel'son's can be regarded as a touchstone of this kind of
locality (in the same way as Bell's inequality is a touchstone for
locality in the classical sense).
In this paper we show that this hypothesis is right: the fact that
quantum theory does not violate Bell's inequality to the maximum
logically possible extent is due to features of locality that are
built into the theory. We derive a set of inequalities, and a
strongest inequality representing this whole set, that can be
regarded as quantum versions of Bell's inequality. To make the
analogy with Bell inequalities clear we will analyze how locality
is implemented in quantum theory, and in what sense the quantum
theoretical inequalities we derive are based on locality
assumptions. We will discuss how this relates to a result of
Popescu and Rohrlich \cite{popescu and rohrlich} that at first
sight seems to show that the existence of Cirel'son's bound is
unconnected with locality issues.
\section{Cirel'son's inequality}\label{cirelson}
Consider a probability space in which there are four stochastic
functions, $A, a, B, b$, each of which can take the values $+1$ or
$-1$. The quantity $AB + Ab + aB - ab = A(B + b) + a(B-b)$ can
only be $+2$ or $-2$, from which it follows that the absolute
value of its expectation value is smaller than $2$:
\begin{equation}
|\langle AB + Ab + aB - ab\rangle| \leq \; \langle |AB + Ab + aB -
ab|\rangle =2 \label{bell}.
\end{equation}
This is the form of Bell's inequality that we will consider. The
inequality is respected by physical quantities in classical
theories, as long as these quantities can be represented by
(stochastic) functions on one state space, with a joint
probability distribution---which is ordinarily the case. We will
discuss the connection with locality in sect.\ \ref{locality}.
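As a purely numerical illustration of inequality (\ref{bell}) (a sketch of ours, not part of the argument), one can sample arbitrary $\pm 1$ valued functions on a finite probability space and confirm that the average of $AB + Ab + aB - ab$ never leaves the interval $[-2,2]$:
\begin{verbatim}
import random

random.seed(0)
for _ in range(1000):
    n = 50                                   # size of the finite sample space
    A, a, B, b = ([random.choice((-1, 1)) for _ in range(n)] for _ in range(4))
    mean = sum(A[i]*B[i] + A[i]*b[i] + a[i]*B[i] - a[i]*b[i]
               for i in range(n)) / n
    assert abs(mean) <= 2                    # each summand is +2 or -2
\end{verbatim}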
In quantum mechanics physical magnitudes are not represented by
stochastic functions on a phase space, but by Hermitian operators
on a Hilbert space. Let us now use $A$, $a$, $B$, $b$ to denote
such operators that have eigenvalues $+1$ and $-1$, and let us
consider a combination of them that is analogous to the
combination of quantities in Bell's inequality: $AB + Ab + aB -
ab$, where we assume that the operators occurring in a product
commute. As was first shown by Cirel'son \cite{cirelson}, the
modulus of the quantum mechanical expectation value of this
expression is bounded by $2\sqrt{2}$: $|\langle AB + Ab + aB -
ab\rangle| \leq 2\sqrt{2}$---the upper bound can be attained, as
shown by the example of the singlet state. So Bell's inequality
can be violated by quantum theory; but the quantum expectation
value stays well below the logically possible upper bound of the
expression $|\langle AB + Ab + aB - ab\rangle|$, namely $4$.
Cirel'son's inequality can be proved elegantly by observing
\cite{landau} that if $A^{2}=a^{2}=B^{2}=b^{2}=\openone$ and
$[A,B]=[A,b]=[a,B]=[a,b]=0$, then \[ C^{2}\equiv(AB + Ab + aB -
ab)^{2}= 4\openone - [A,a][B,b].\] It follows from this that
\[\langle C\rangle^{2} \leq \langle C^{2}\rangle \leq ||C||^{2}
\leq 4 + 4||A|| \, ||a|| \, ||B|| \, ||b||= 8, \] or \[|\langle
C\rangle|\leq 2\sqrt{2} \label{land}.\]
An alternative simple proof, which is analogous to the above proof
of Bell's inequality (\ref{bell}) and similar to proofs of other
inequalities that we will give in sections \ref{povms} and
\ref{stronger}, goes as follows.
For a normed state vector $\ket{\psi}$, put $A\ket{\psi}\equiv
\ket{A}$, $B\ket{\psi}\equiv \ket{B}$, $a\ket{\psi}\equiv \ket{a}$
and $b\ket{\psi}\equiv \ket{b}$. Each of these four vectors has a
norm that is $\leq 1$. We now have
\begin{eqnarray}
|\langle C\rangle|& = & |\bra{\psi} C \ket{\psi}| \nonumber \\ & =
& |\inpr{A}{B + b} + \inpr{a}{B - b}| \nonumber \\ &\leq& ||\,
\ket{B} + \ket{b}\, || + ||\, \ket{B} - \ket{b}\, || \nonumber
\\
& \leq & \sqrt{2(1+ \mbox{\bf Re}\inpr{B}{b})} + \sqrt{2(1-
\mbox{\bf Re}\inpr{B}{b})} \nonumber \\ &\leq& 2\sqrt{2}
\label{cirel}.
\end{eqnarray}
The difference between this derivation and the derivation of
Bell's inequality is that for \emph{numbers} $B$ and $b$ with norm
$\leq 1$ we have $|B+b| + |B-b| \leq 2$, whereas for
\emph{vectors} with norm $\leq 1$ we find $||\ket{B}+\ket{b}|| +
||\ket{B}-\ket{b}|| \leq 2\sqrt{2}$. In the latter case the
maximum is attained when $\ket{B}$ and $\ket{b}$ are
perpendicular.
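The bound can also be checked numerically. The following NumPy sketch (ours; the choice of coplanar measurement axes at $0$, $\pi/2$, $\pi/4$ and $-\pi/4$ is the standard optimal one) evaluates the left-hand side in the singlet state and reproduces $2\sqrt{2}$:
\begin{verbatim}
import numpy as np

def spin(theta):
    # spin observable along an axis in the x-z plane, eigenvalues +1 and -1
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    return np.cos(theta) * sz + np.sin(theta) * sx

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(t1, t2):
    # <psi| spin(t1) (x) spin(t2) |psi> in the singlet state
    return np.real(singlet.conj() @ np.kron(spin(t1), spin(t2)) @ singlet)

A, a, B, b = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
value = corr(A, B) + corr(A, b) + corr(a, B) - corr(a, b)
print(abs(value))   # 2.828... = 2*sqrt(2)
\end{verbatim}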
In derivation (\ref{land}) the essential premise is that the
operators ${A,a}$ commute with ${B,b}$. At first sight, derivation
(\ref{cirel}) does not make use of this premise. This impression
is deceptive, however. The operator products occurring in $C$ are
hermitian operators (and therefore representations of physical
quantities) if and only if the operators that are multiplied
commute, and this leads to exactly the same commutativity
requirement as in (\ref{land}). One physical consequence of this
commutativity requirement is that the operators $A$ and $a$ are
jointly measurable with the operators $B$ and $b$. Moreover, it
follows from the commutativity that it does not make any
difference for the expectation values of the operators $B$ or $b$
whether they are measured together with $A$ or $a$ (the
no-signaling theorem). Within the framework of the orthodox
measurement formalism co-measurability and causal independence (in
the sense of no signaling) therefore go together: they both hold
if and only if the commutativity requirement is satisfied. In this
case Cirel'son's inequality also holds.
\section{Generalized measurements}\label{povms}
Above we followed the orthodox point of view about the
mathematical representation of physical quantities in quantum
theory, namely that physical quantities are represented by
hermitian operators. Within this framework joint measurability is
equivalent to commutativity (which in turn leads to the
no-signaling theorem in the context of the EPR experiment). But
there is a more general treatment of measurements in quantum
theory, first developed by Ludwig \cite{ludwig} and Davies
\cite{davies}, in which physical quantities correspond not to
single operators but to collections of positive operators $M_{i}$
on the Hilbert space, such that
\[ M_{i} \geq 0, \;\; \sum_{i}M_{i}= \openone.\]
If the possible outcomes of a measurement of the considered
quantity are ${m_{i}}$, the probabilities of obtaining these
values in a state $\rho$ of the system are given by $\mbox{Tr} \rho
M_{i}$. The mapping $m_{i} \rightarrow M_{i}$ is a positive
operator valued measure (POVM), representing the associated
physical quantity $\cal M$.
Two physical quantities $\cal A$ and $\cal B$, represented by sets
of positive operators $\{A_{i}\}$ and $\{B_{j}\}$, respectively,
are jointly measurable if there is a third quantity $\cal O$,
represented by $\{O_{k}\}$, such that
\begin{equation}
A_{i}= \sum_{k\in K_{i}}O_{k}, \;\;\; B_{j}= \sum_{k\in
K^{\prime}_{j}}O_{k},\label{joint}
\end{equation} where $\{K_{i}\}$ and $\{K^{\prime}_{j}\}$ are two
partitions of the index set through which $k$ runs.
If there is an $\cal O$ satisfying Eq.(\ref{joint}) we can measure
it, and infer information about the outcomes and their
probabilities of both $\cal A$ and $\cal B$ by grouping together
the results according to the two partitions. An important feature
of this formalism is that commutativity of the two generalized
observables $\cal A$ and $\cal B$ (in the sense that
$A_{i}B_{j}=B_{j}A_{i}$ for all $i,j$) is a sufficient but not a
necessary condition for their joint measurability. \emph{If} $\cal
A$ and $\cal B$ commute, the products $A_{i}B_{j}$ are positive
operators characterizing the joint measurement of $\cal A$ and
$\cal B$. But in general a joint measurement need not correspond
to product operators (see for a critical analysis of the
significance of these results \cite{uffink0}).
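A concrete instance of Eq.\ (\ref{joint}) is the standard unsharp joint measurement of two qubit spin components (a textbook example, not taken from the works cited above). The following NumPy sketch exhibits a four-outcome POVM whose two coarse-grainings are non-commuting yet, by construction, jointly measurable:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# four-outcome POVM O_{ij} = (1/4)(I + (i*sz + j*sx)/sqrt(2)), i, j = +/-1
O = {(i, j): 0.25 * (I2 + (i * sz + j * sx) / np.sqrt(2))
     for i in (+1, -1) for j in (+1, -1)}
assert np.allclose(sum(O.values()), I2)                          # completeness
assert all(np.linalg.eigvalsh(Ok).min() >= -1e-12 for Ok in O.values())

A = {i: O[(i, +1)] + O[(i, -1)] for i in (+1, -1)}    # marginal: unsharp sz
B = {j: O[(+1, j)] + O[(-1, j)] for j in (+1, -1)}    # marginal: unsharp sx
assert not np.allclose(A[+1] @ B[+1], B[+1] @ A[+1])  # marginals do not commute
\end{verbatim}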
So in the EPR situation we could imagine a joint measurement of
two non-commuting generalized observables $\cal A$ and $\cal B$,
each pertaining to a different wing of the experiment. In this
case it would no longer be true that the mere requirement of
compatibility leads to causal independence (no-signaling), the
product form of the joint measurement operators, and the validity
of Cirel'son's inequality.
However, Busch and Singh \cite{busch} have shown for the EPR
situation, treated by means of the POVMs formalism, that if the
possible values and their probabilities of the quantity measured
at one wing are required to be independent of which quantity is
measured at the other side, the operators representing the
generalized observables at one wing must commute with those at the
other. It follows that in this case the operators corresponding to
the joint measurement take on the product form again. So within
the generalized measurements framework commutativity and product
form are consequences of locality, in the sense of the
impossibility of signaling.
If the measurements in the EPR experiment are represented by
generalized observables, and if locality in the sense of
impossibility of signaling is assumed, Cirel'son's inequality can
again be derived. To see this, consider one pair of the four pairs
of observables, $\cal A$ and $\cal B$, say. Because of the
no-signaling requirement, the corresponding positive operators
$A_{i}$ and $B_{j}$ commute, and the joint measurement of $\cal A$
and $\cal B$ can be represented by four positive operators
$A_{i}B_{j}$, with $i,j = \pm 1$. The expectation value of the
outcomes of this joint measurement, in the pure state
$\ket{\psi}$, becomes:
\begin{eqnarray}\label{general}
&& \bra{\psi}(A_{1}B_{1}- A_{1}B_{-1}+ A_{-1}B_{-1}-
A_{-1}B_{1})\ket\psi \nonumber \\ && = \bra{\psi}(A_{1}-
A_{-1})(B_{1} - B_{-1})\ket\psi = \inpr{A}{B},
\end{eqnarray}
where $\ket{A}\equiv (A_{1}- A_{-1})\ket{\psi}$ and $\ket{B}\equiv
(B_{1}- B_{-1})\ket{\psi}$. So for the purpose of calculating
expectation values the generalized observables ${\cal A}, {\cal
B}, \tilde{a}, \tilde{b}$ can each be represented by a single
hermitian operator, namely $(A_{1}- A_{-1})$, $(B_{1}- B_{-1})$,
$(a_{1}- a_{-1})$ and $(b_{1}- b_{-1})$, respectively; the joint
measurements are represented by the corresponding products.
Compared to the case discussed in sect.\ \ref{cirelson}, the
differences are that the operators $A_{i}, B_{j}, \ldots$ need not
be projection operators, and the squares of $(A_{1}- A_{-1}),
(B_{1}- B_{-1}), \ldots$ need not be $\openone$. The second proof
of the Cirel'son inequality given in section \ref{cirelson} goes
nevertheless through, because the operators $(A_{1}- A_{-1}),
(B_{1}- B_{-1}), \dots$ all have norms $\leq 1$.
Indeed,
\begin{eqnarray}\label{norm1}
&&||(A_{1}- A_{-1})\ket{\psi}||^{2} \nonumber \\
&& = ||A_{1}\ket{\psi}||^{2}+ ||A_{-1}\ket{\psi}||^{2}
-2\inpr{A_{1}\psi}{A_{-1}\psi},
\end{eqnarray}
whereas \begin{eqnarray}\label{norm2} &&||(A_{1}+
A_{-1})\ket{\psi}||^{2} \nonumber \\ && = ||A_{1}\ket{\psi}||^{2}+
||A_{-1}\ket{\psi}||^{2} +2\inpr{A_{1}\psi}{A_{-1}\psi}=1,
\end{eqnarray}
so that
\begin{equation}\label{norm}
||(A_{1}- A_{-1})\ket{\psi}||^{2}= 1 -
4\inpr{A_{1}\psi}{A_{-1}\psi}.
\end{equation}
Because $A_{-1}=\openone - A_{1}$, we have $[A_{1}, A_{-1}]=0$, so the inner products in (\ref{norm1}), (\ref{norm2}) and (\ref{norm})
are real. This inner product is also $\geq 0$:
\begin{equation}\label{inpr}
\inpr{A_{1}\psi}{A_{-1}\psi}= \bra{\psi}A_{1}\ket{\psi} -
\bra{\psi}A_{1}^{2}\ket{\psi},
\end{equation}
which is $\geq 0$ because $A_{1}$ has norm $\leq 1$ and only has
eigenvalues $\lambda_{i}$ with $ 0 \leq \lambda_{i} \leq 1$.
Now introduce vectors $\ket{a}, \ket{b}$ in the obvious way:
$\ket{a}\equiv(a_{1}- a_{-1})\ket{\psi}$ and $\ket{b}\equiv(b_{1}-b_{-1})\ket{\psi}$. It follows
a_{-1})\ket{\psi},\ket{b}=(b_{1}-b_{-1})\ket{\psi}$. It follows
from the above that the vectors $\ket{A}, \ket{B}, \ket{a},
\ket{b}$ all have norms $\leq 1$, just as the vectors denoted by
the same symbols in sect.\ \ref{cirelson}. Repeating the proof
(\ref{cirel}), we therefore find:
\begin{eqnarray}\label{cirgen}
&&|\langle {\cal A}{\cal B} + {\cal A} \tilde{b} + \tilde{a} {\cal B} -
\tilde{a}\tilde{b}\rangle| \nonumber \\ && = |\inpr{A}{B+b} + \inpr{a}{B-b}| \leq
2\sqrt{2}.
\end{eqnarray}
This inequality holds in every pure state $\ket{\psi}$. Its
validity in any mixed state $\rho$ follows immediately.
\section{The strongest inequality}\label{stronger}
Cirel'son's inequality is not the only nor the strongest one that
can be derived from the locality (no-signaling) and therefore
commutativity requirement. Put $X \equiv \langle {\cal A}\tilde{b}
+ \tilde{a} {\cal B}\rangle$ and $Y \equiv \langle {\cal A} {\cal
B} - \tilde{a}\tilde{b}\rangle$. Cirel'son's inequality can now be
written as
\begin{equation}\label{general1}
|X + Y| \leq 2\sqrt{2}.
\end{equation}
In the $X, Y$ `correlation plane' this inequality restricts the
points $(X,Y)$ to the strip between the two lines
\begin{equation}\label{lines1}
X + Y = \pm 2\sqrt{2}.
\end{equation}
But by a minimal change in the proof of sect.\ \ref{cirelson} it
immediately follows that also the following inequality holds:
\begin{equation}\label{general2}
|X - Y| \leq 2\sqrt{2},
\end{equation}
so that the points must also lie in the strip bounded by the lines
\begin{equation}\label{lines2}
X - Y = \pm 2\sqrt{2}.
\end{equation}
We also have the obvious inequalities $|X| \leq 2, \;\; |Y| \leq 2
$, so that the allowed points $(X,Y)$ must be in the intersection
of the interiors of the two squares indicated in Fig.\ 1.
\begin{figure}
\caption{The $X, Y$ plane. The slanted square represents
inequalities (\ref{general1}) and (\ref{general2}); the upright square represents the bounds $|X|\leq 2$ and $|Y|\leq 2$.}
\end{figure}
It further turns out that they must be inside (or on the sides of)
all squares that result from these just-mentioned squares by
applying an arbitrary rotation around an axis through the origin
of the $X, Y$ plane and normal to this plane. To prove this,
consider the expression $| X\sin\varphi + Y\cos\varphi|$. We have:
\begin{eqnarray}\label{general3}
&&|X\sin\varphi + Y\cos\varphi| \nonumber \\
&=& |\inpr{A}{B\cos\varphi + b\sin\varphi} + \inpr{a}{B\sin\varphi -
b\cos\varphi}| \nonumber \\
& \leq & ||\, \ket{B}\cos\varphi + \ket{b}\sin\varphi\, || + ||\, \ket{B}\sin\varphi - \ket{b}\cos\varphi\, || \nonumber \\
& \leq & \sqrt{\sin^{2}\varphi + \cos^{2}\varphi +
2\mbox{\bf Re}\inpr{B}{b}\sin\varphi\cos\varphi} \nonumber\\ && + \: \sqrt{\sin^{2}\varphi + \cos^{2}\varphi -
2\mbox{\bf Re}\inpr{B}{b}\sin\varphi\cos\varphi} \nonumber \\
& \leq & 2,
\end{eqnarray}
where the last step uses the fact that $\sqrt{1+c} + \sqrt{1-c}\leq 2$ for any real $c$ with $|c|\leq 1$.
Cirel'son's inequality and the other inequalities mentioned
earlier in this section are special cases of this general set of
inequalities (in which $\varphi$ can take arbitrary values). It
should be noted that these proofs apply both to the case of
ordinary observables and to the case of generalized observables.
It is clear from the geometry of Fig.\ 1 that the requirement that
all the inequalities (\ref{general3}) be satisfied leads to the
inequality
\begin{equation}\label{strongest}
X^{2} + Y^{2}= \langle {\cal A}\tilde{b} + \tilde{a} {\cal
B}\rangle^{2} + \langle {\cal A} {\cal B} -
\tilde{a}\tilde{b}\rangle^{2} \leq 4.
\end{equation}
All points $X, Y$ are inside or on the circumference of a circle
with radius $2$.
Inequality (\ref{strongest}) (which was recently proved directly,
by a variational argument, for the case of ordinary spin
observables by Uffink \cite{uffink}) summarizes all generalized
Cirel'son inequalities (\ref{general3}). All values $X, Y$ that
satisfy (\ref{strongest}) also satisfy all Cirel'son inequalities
(\ref{general3}); but satisfaction of a finite number of
inequalities of (\ref{general3}) is not sufficient to guarantee
satisfaction of (\ref{strongest}). Moreover, each point on the
circumference of the circle can actually be attained, because the
bound of the corresponding generalized Cirel'son inequality can be
attained (the one resulting in a line tangent to the circle in the
point in question). Inequality (\ref{strongest}) is therefore the
strongest inequality in terms of $X, Y$ that follows from the
requirement of commutativity.
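Inequality (\ref{strongest}) can also be probed numerically. The sketch below (ours) draws random dichotomic observables for the two wings in the commuting tensor-product form $A\otimes\openone$, $\openone\otimes B$, together with random pure two-qubit states, and checks that the resulting points $(X,Y)$ always lie inside the circle of radius $2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def random_dichotomic():
    # random Hermitian qubit observable with eigenvalues +1 and -1
    Z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, _ = np.linalg.qr(Z)
    return Q @ np.diag([1.0, -1.0]) @ Q.conj().T

def ev(psi, left, right):
    # expectation value of left (x) right in the pure state psi
    return np.real(psi.conj() @ np.kron(left, right) @ psi)

for _ in range(2000):
    A, a, B, b = (random_dichotomic() for _ in range(4))
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    X = ev(psi, A, b) + ev(psi, a, B)
    Y = ev(psi, A, B) - ev(psi, a, b)
    assert X**2 + Y**2 <= 4 + 1e-9
\end{verbatim}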
\section{Locality}\label{locality}
Bell's inequality (\ref{bell}) is valid for an arbitrary quadruple
of stochastic functions on one probability space, and as such is
not immediately connected with locality issues. The link with
locality comes in via the application of (\ref{bell}) to
situations of the Einstein-Podolsky-Rosen type in which A and a,
and B and b, stand for measurements on the space-like separated
wings 1 and 2 of the experiment, respectively. An experimenter at
1 can choose between measuring A and a; her or his colleague at 2
has the choice between B and b. The combined measurement on parts
1 and 2 is represented by the product of the individual single
system result functions. This is justified by a locality
assumption: a measurement of a physical quantity on one wing of
the experiment has no influence at the other wing. On the basis of
this assumption one-wing quantities are represented by one and the
same function, regardless of whether, and if so which, measurement
is performed at the other side. Both the possible measurement
outcomes and their probabilities are insensitive to choices made
at the other side.
An obvious tacit assumption in this is that $A, a, B, b$
correspond to characteristic measurement devices and interactions.
The device corresponding to $A$, e.g., should remain the same in
different instances---that the possible outcomes and corresponding
probabilities remain the same is by itself not enough. Consider to
make this clear a Stern-Gerlach device at wing 1 that undergoes a
rotation depending on the choice between $B$ and $b$: although the
possible outcomes would still be $+1$ and $-1$ and the
probabilities would remain equal to $1/2$ (if the device measures
the spin of a spin-1/2 particle), this would not constitute one
specific measured quantity. Spin along different axes would be
measured. In spite of the fact that all measurement results could
be represented by the same function $A$, the rotation of the
corresponding device would signal non-locality. Within the quantum
formalism such a non-invariance of the measuring procedure could
easily lead to a violation of (\ref{strongest}). An explicit
example can be constructed by stipulating that the concrete
physical implementation of measuring ${\cal A} {\cal B}$ and the
other joint quantities in (\ref{strongest}), in the two-particle
singlet state, be: ``Measure $\sigma_{x}^{I}$ and
$-\sigma_{x}^{II}$, call the results `spin of particle $I$ along
axis $\alpha$ and spin of particle $II$ along axis $\beta$,
respectively'; perform the same measurement of $\sigma_{x}^{I}$
and $-\sigma_{x}^{II}$ to obtain the spin values for the pairs of
axes $\alpha^{\prime},\beta$ and $\alpha,\beta^{\prime}$, and
measure $\sigma_{x}^{I}$ and $+\sigma_{x}^{II}$ in the case of
$\alpha^{\prime},\beta^{\prime}$''. Obviously, the correlation
functions obtained in this way violate Cirel'son's inequality
maximally (even though outcomes and probabilities at each wing are
insensitive to what choice is made at the other wing). This is
because the quantities defined in this way are not bona-fide local
physical quantities in the sense we have discussed. Indeed,
according to this measurement protocol the measuring procedure for
the spin of particle $II$ along $\beta^{\prime}$ depends on
whether spin along $\alpha$ or along $\alpha^{\prime}$ is measured
on particle $I$.
So we have identified a first locality requirement: the operators
that are used to represent physical quantities on the individual
wings of the experiment should refer to the same physical devices
and interactions, regardless of what goes on at the other side. It
should be possible to measure these quantities on wing 1 and wing
2, respectively, together; therefore the operators representing
them should be compatible (i.e., commuting or jointly measurable
in the sense of the POVM scheme).
The second locality assumption to be considered is that no signals
are transmitted: the possible outcomes and their probabilities on
the two sides of the experiment are insensitive to what happens at
the other wing. Usually, this is the only assumption that is
explicitly discussed.
Within the orthodox treatment of measurements in quantum theory
the compatibility assumption and the no-signaling assumption are
equivalent: both lead to the requirement that the operators at one
side commute with those at the other side. This commutativity is
in turn sufficient to derive Cirel'son's inequality and its
generalizations, including inequality (\ref{strongest}). Within
the framework of generalized measurements it is the no-signaling
requirement that leads to commutativity; so here we have to invoke
locality (in the sense of no signaling, or relativistic causality
in the relativistic context) explicitly in order to derive
(\ref{strongest}); the requirement of joint measurability is not
enough.
The two just-mentioned locality requirements together lead to a
theoretical description with one-wing operators (referring to
invariant measuring procedures) that commute with those at the
other wing. This is sufficient for deriving the inequalities of
section \ref{stronger}. These inequalities can therefore be used
in experimental tests of locality. If an inequality is violated in
experiments, this indicates either the propagation of influences
that change the outcomes and/or their probabilities (an
application of this idea can be found in \cite{beckman}), or it
indicates non-invariance of the measured quantity (see above for
an example of the latter possibility).\footnote{This conclusion
assumes the exclusive use of the ordinary (predictive) quantum
rule for computing expectation values that says that the state of
the system is represented by a density operator $\rho$ and that
$\langle A\rangle=\mbox{Tr}\{\rho A\}$. If one uses ensembles that
cannot be represented by a quantum state, for example if one
calculates averages in post-selected ensembles, the situation
becomes very different. As Cabello has shown recently
\cite{cabello}, Cirel'son's inequality can no longer be derived if
combinations of different post-selected ensembles are used for the
calculation of average values.}
\section{An argument by Popescu and Rohrlich}\label{popescu}
The conclusion of the previous Section is that quantum mechanical
locality (no-signaling) is responsible for the existence of the
upper bound of Cirel'son's inequality. This conclusion might seem
in conflict with an argument by Popescu and Rohrlich \cite{popescu
and rohrlich}. These authors argue that the impossibility of
signaling does \emph{not} limit the sum of the correlations
occurring in Cirel'son's inequality to $2\sqrt{2}$. Their
counterexample is an EPR situation in which spin measurements are
performed on the two wings. For the outcomes of these measurements
a particular probability distribution is postulated, as follows.
The two possible outcomes are taken to be $+1$ and $-1$ along any
axis, and both of these possibilities are postulated to have a
probability of $1/2$, independently of the measurement performed
at the other wing. So the measured outcomes and their
probabilities do not give any information about choices made at
the other side; it is impossible to signal, or, as Popescu and
Rohrlich put it, relativistic causality is satisfied.
Further, for any pair of axes, the combinations of outcomes $+1,
+1$ and $-1, -1$ are assumed to be equally probable, and the same
applies to the combinations $+1, -1$ and $-1, +1$. Finally, the
correlation function (a `superquantum' correlation function) is
stipulated to have a form like the following:
\begin{equation}
E(\theta) = \left\{ \begin{array}{ll}
+1 & \mbox{for $0 \leq \theta \leq \pi/4$} \\
2 - 4\theta/\pi & \mbox{for $\pi /4 \leq \theta \leq 3\pi/4$} \\
-1 & \mbox{for $3\pi/4 \leq \theta \leq \pi$}
\end{array} \right.\label{supercorr}
\end{equation}
This is equivalent to assuming that the probability
$p_{++}(\theta)$ of the pair of outcomes $+1,+1$ is given by
\[p_{++}(\theta) = \frac{E(\theta) + 1}{4}.\] In these formulas
$\theta$ is the angle between the axes on the left and right,
respectively, along which the spin measurements are made.
Now consider four axes $\alpha^{\prime}, \beta, \alpha,
\beta^{\prime}$ separated by successive angles of $\pi/4$ and
lying in one plane. We find that
\begin{equation}
E(\alpha,\beta)+ E(\alpha^{\prime},\beta) +
E(\alpha,\beta^{\prime})- E(\alpha^{\prime},\beta^{\prime}) =
4. \label{violation}
\end{equation}
Cirel'son's inequality can therefore be violated, even to the
maximum extent logically possible, by a correlation function that
respects invariance of one-wing outcomes and probabilities, and
thus the impossibility of signaling.
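For completeness, a short computation (a small Python sketch of ours) confirms that the correlation function (\ref{supercorr}), evaluated for four coplanar axes separated by successive angles of $\pi/4$, reaches the algebraic maximum of Eq.\ (\ref{violation}):
\begin{verbatim}
from math import pi

def E(theta):
    # the `superquantum' correlation function of Eq. (supercorr)
    if theta <= pi / 4:
        return 1.0
    if theta <= 3 * pi / 4:
        return 2 - 4 * theta / pi
    return -1.0

# axes alpha', beta, alpha, beta' at 0, pi/4, pi/2, 3*pi/4
alpha_p, beta, alpha, beta_p = 0.0, pi / 4, pi / 2, 3 * pi / 4
S = E(abs(alpha - beta)) + E(abs(alpha_p - beta)) \
    + E(abs(alpha - beta_p)) - E(abs(alpha_p - beta_p))
print(S)   # approximately 4, the maximum logically possible value
\end{verbatim}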
Clearly, therefore, the requirement that outcomes and
probabilities are invariant is by itself insufficient to derive
Cirel'son's inequality and its generalizations. Nevertheless, we
have demonstrated above that Cirel'son's inequality \emph{can} be
derived from that locality assumption within the theoretical
framework of quantum mechanics. It is only when the no-signaling
requirement is implemented within a well-defined theoretical
framework, equipped with prescriptions for how to represent
observables (corresponding to characteristic measuring
procedures), that it gets enough bite to make the derivation of
Bell-type inequalities possible. Indeed, we have seen that within
the framework of theories that operate with a phase space on which
physical quantities are represented by (stochastic) functions, the
original Bell inequalities can be obtained, whereas within the
Hilbert space formalism of quantum theory inequality
(\ref{strongest}) results. In both cases the correlation function
postulated by Popescu and Rohrlich cannot arise in a local way (it
\emph{can} be produced by non-local means, as was illustrated
in the previous section).
The fact that within the framework of quantum theory the
correlation function (\ref{supercorr}) cannot be produced in a
local way (because it violates (\ref{strongest})) does not mean,
of course, that there cannot exist other theoretical frameworks in
which `superquantum' correlation functions \emph{could} arise in a
local way; frameworks that use neither functions on a state space
nor the Hilbert space operator formalism. It is difficult to say
anything definite about such hypothetical theoretical frameworks.
Popescu's and Rohrlich's argument does not use any assumption
about how the observables are represented mathematically, and
therefore does not provide a sufficient basis for a discussion of
locality and causality (in fact, we already observed that their
probability distribution could be produced by non-local
mechanisms, which implies that invariance of outcomes and
probabilities is a necessary but not a sufficient condition for
causality).
Summing up, Popescu and Rohrlich are right in their claim that
Bell-type inequalities do not follow from the no-signaling
requirement alone. But this does not answer the question of
whether locality, in the sense of the no-signaling requirement,
may lead to such inequalities \emph{in the context of specific
theories}. It turns out that if locality is fleshed out within
classical theories, this leads to Bell's inequality; if it is
fleshed out within the framework of quantum theory this leads to
inequality (\ref{strongest}).
\begin{acknowledgments}
I thank Jos Uffink for helpful remarks.
\end{acknowledgments}
\end{document} |
\begin{document}
\title[Source monitoring for continuous-variable quantum key distribution]{Source monitoring for continuous-variable quantum key distribution}
\author{Jian Yang, Xiang Peng$^{*}$ and Hong Guo$\dag$}
\address{CREAM Group, State Key Laboratory of Advanced Optical
Communication Systems and Networks (Peking University) and Institute
of Quantum Electronics, School of Electronics Engineering and
Computer Science, Peking University, Beijing 100871, PR China}
\ead{$^{*}[email protected]}
\ead{$\[email protected]}
\begin{abstract}
The noise in the optical source needs to be characterized for the security of continuous-variable quantum key distribution (CVQKD).
Two feasible schemes, based on either an active optical switch or a passive beamsplitter, are proposed
to monitor the variance of the source noise, through which Eve's knowledge can be properly estimated. We derive the security bounds for both
schemes against collective attacks in the asymptotic case, and find that the passive scheme performs better.
\end{abstract}
\maketitle
\section{Introduction}
Continuous-variable quantum key distribution (CVQKD) encodes information into the quadratures of
optical fields and extracts it with homodyne detection, which has
higher efficiency and repetition rate than single photon
detectors \cite{Scarani_RMP_09}. CVQKD, especially the GG02 protocol
\cite{Grosshans_PRL_02}, is a promising approach to high speed
key generation between two parties, Alice and Bob.
Besides experimental demonstrations \cite{Grosshans_Nature_03, Lodewick_Ex_PRA_07},
the theoretical security of CVQKD has been established against collective attacks
\cite{Grosshans_PRL_05, Navascues_PRL_05}, which have been shown to be optimal in the
asymptotic limit \cite{Renner_PRL_102}. The practical security of CVQKD has also
received attention in recent years, and it has been shown that the source noise in
state preparation may undermine the secure key rate \cite{Filip_PRA_08}. In GG02,
the coherent states should be displaced in phase space following a Gaussian
modulation with variance $V$. However, due to imperfections in the laser source and
modulators, the actual variance is changed to $V+\chi_{s}$, where $\chi_{s}$
is the variance of source noise.
One method to describe the trusted source noise is the beamsplitter model
\cite{Filip_PRA_08, Filip_PRA_10}. This model is a good
approximation for the source noise, especially when the transmittance of the beamsplitter
$T_{A}$ approaches $1$, which means that the loss in the signal mode is
negligible. However, this method suffers from the difficulty of parameter
estimation for the ancilla mode of the beamsplitter; without this information,
the covariance matrix of the system cannot be determined. In this case,
the optimality of the Gaussian attack \cite{Patron_PRL_06,Navascues_PRL_06} should be
reconsidered \cite{YujieShen__PRA_11}, and we have to assume that the channel is linear
to calculate the secure key rate.
To solve this problem, we proposed an improved source noise model
with a general unitary transformation \cite{YujieShen__PRA_11}. Without extra
assumptions on the quantum channel and the ancilla state, we are able to derive a tight
security bound for reverse reconciliation, as long as the variance of the source
noise $\chi_{s}$ can be properly estimated. The optimality of the Gaussian attack is preserved
within this model.
The remaining problem is to estimate the variance of the source noise
properly. Without such a source monitor, Alice and Bob cannot discriminate
source noise from channel excess noise, which is supposed to be controlled by
the eavesdropper (Eve) \cite{Yong_JPB_09}. In practice,
source noise is trusted and is not controlled by Eve. Thus, such an
\textit{untrusted source noise model} overestimates Eve's power and
leads to a loose security bound. A compromise is to
measure the quadratures of Alice's actual
output states before each experimental run. However, this procedure is time consuming, and
during QKD operation the variance of the source noise may fluctuate slowly and deviate
from the preliminary result. In this paper, we propose two real-time
schemes, the active switch scheme and the passive beamsplitter
scheme, to monitor the variance of the source noise. With their help,
we derive the asymptotic security bounds for both schemes against
collective attacks, and discuss their potential applications when the
finite-size effect is taken into account.
\section{Source monitoring in CVQKD}
In this section, we introduce two real-time schemes to monitor the variance
of the source noise for the GG02 protocol, based on our general model. Both
schemes are implemented in the so-called
prepare-and-measure scheme (P\&M scheme) \cite{Garcia_PHD_2009}, while for the
ease of theoretical analysis, we analyze their security here in
the entanglement-based scheme (E-B scheme). The covariance matrix, used to simplify the
calculation, is defined by
\cite{Garcia_PHD_2009}
\begin{equation}
\gamma_{ij}=\textrm{Tr}[\rho\{(\hat{r}_{i}-d_{i}), (\hat{r}_{j}-d_{j})\}],
\end{equation}
where operator $\hat{r}_{2i-1}=\hat{x}_{i}$, $\hat{r}_{2i}=\hat{p}_{i}$, mean value
$d_{i}=\langle\hat{r}_{i}\rangle=\textrm{Tr}[\rho\hat{r}_{i}]$, $\rho$ is
the density matrix, and $\{\}$ denotes the anticommutator.
In the E-B scheme, Alice prepares EPR pairs, measures the quadratures of one mode
with two balanced homodyne detectors, and then sends the other mode to Bob. It is easy
to verify that the covariance matrix of an EPR pair is
\begin{equation}
\gamma_{AB_{0}}=\left(
\begin{array}{cc}
V\mathbb{I} & \sqrt{V^{2}-1}\sigma_{z} \\
\sqrt{V^{2}-1}\sigma_{z} & V\mathbb{I} \\
\end{array}
\right),
\end{equation}
where $V=V_{A}+1$ is the variance of the EPR modes, and $V_{A}$
corresponds to Alice's modulation variance in the P\&M scheme.
However, due to the effect of source noise, the actual covariance
matrix is changed to
\begin{equation}
\gamma_{AB_{0}}=\left(
\begin{array}{cc}
V\mathbb{I} & \sqrt{V^{2}-1}\sigma_{z} \\
\sqrt{V^{2}-1}\sigma_{z} & (V+\chi_{s})\mathbb{I} \\
\end{array}
\right),
\end{equation}
where $\chi_{s}$ is the variance of the source noise. As mentioned in
\cite{YujieShen__PRA_11}, we assume this noise is introduced by a neutral
party, Fred, who purifies $\rho_{AB}$ and introduces the source noise
with an arbitrary unitary transformation. In this section, we show how
to monitor $\chi_{s}$ with our active and passive schemes, and derive
the security bounds in the infinite key limit.
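As a concrete numerical illustration of the covariance matrix above (our own sketch in Python, not part of the protocol; the values $V=40$ and $\chi_{s}=0.1$ and the helper name \texttt{epr\_cov\_with\_source\_noise} are hypothetical), the noisy two-mode matrix can be built directly in shot-noise units:
\begin{verbatim}
import numpy as np

def epr_cov_with_source_noise(V, chi_s):
    # [[V*I, c*sigma_z], [c*sigma_z, (V+chi_s)*I]] with c = sqrt(V^2 - 1),
    # i.e. an EPR state whose B_0 mode carries extra source noise chi_s
    I2 = np.eye(2)
    sigma_z = np.diag([1.0, -1.0])
    c = np.sqrt(V**2 - 1.0)
    return np.block([[V * I2, c * sigma_z],
                     [c * sigma_z, (V + chi_s) * I2]])

gamma_AB0 = epr_cov_with_source_noise(V=40.0, chi_s=0.1)  # illustrative values
print(gamma_AB0)
\end{verbatim}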
\subsection{Active switch scheme}
A method of source monitoring is to use an active optical
switch, controlled by a true random number generator (TRNG),
combined with a homodyne detection. The
entanglement-based version \cite{Grosshans_QIC_2003} of this
scheme is illustrated in
Fig. \ref{pic2}, where we randomly select parts of signal pulses,
measure their quadratures and estimate their variance. In the
infinite key limit, the pulses used for source monitor should have
the same statistical identities with that sent to Bob. Comparing
the estimated value with the theoretical one, we are able to derive
the variance of source noise, and the security bound can be
calculated by \cite{YujieShen__PRA_11}
\begin{equation}
K_{OS}=(1-r)\times[\beta\times I(a:b)-S(E:b)],
\end{equation}
where $r$ is the sampling ratio of source monitoring,
$I(a:b)$ is the classical mutual information between Alice and
Bob, $S(E:b)$ is the quantum mutual information between Eve and Bob,
and $\beta$
is the reconciliation efficiency. After channel transmission, the
whole system can be described by covariance matrix $\gamma_{FAB}$
\begin{equation}
\gamma_{FAB}=
\left(
\begin{array}{cccc}
F_{11} & F_{12} & F_{13} & F'_{14} \\
F_{12}^{T} & F_{22} & F_{23} & F'_{24} \\
F_{13}^{T} & F_{23}^{T} & V\mathbb{I} & \sqrt{\eta (V^{2}-1)}\sigma_{z} \\
(F'_{14})^{T} & (F'_{24})^{T} & \sqrt{\eta (V^{2}-1)}\sigma_{z} & \eta (V+\chi_{s}+\chi)\mathbb{I} \\
\end{array}
\right),
\end{equation}
where $\chi_{s}$ is the variance of the source noise, the $2\times 2$ matrices $F_{ij}$
are related to Fred's two-mode state, $\eta$ is the transmittance,
$\chi=(1-\eta)/\eta+\epsilon$ is the channel noise, and $\epsilon$ is the channel
excess noise. In practice, the covariance matrix can be estimated from experimental
data via source monitoring and parameter estimation. Here, for ease of calculation,
we assume that the parameters $\eta$ and $\epsilon$ are known.
\begin{figure}
\caption{(color online). Entanglement-based model for the optical switch scheme. Alice measures one mode of EPR pairs and projects the other mode to coherent states, and then sends it to Bob. F represents the neutral party, Fred, who introduces the source noise. Using a high-speed optical switch driven by TRNG, we can measure part of the signal sent to B and estimate the variance of source noise $\Delta V$ in the infinite key limit. }
\label{pic2}
\end{figure}
Given $\gamma_{FAB}$, the classical mutual information $I(a:b)$ can
be directly derived, while $S(E:b)$ can not,
since the ancilla state $F$ is unknown in our general model.
Fortunately, we can substitute $\gamma_{FAB}$
with another state $\gamma'_{FAB}$ when calculating $S(E:b)$
\cite{YujieShen__PRA_11}, where
\begin{equation}\label{CM'}
\gamma '_{FAB}=
\left(
\begin{array}{cccc}
\mathbb{I} & 0 & 0 & 0 \\
0 & \mathbb{I} & 0 & 0 \\
0 & 0 & (V+\chi_{s})\mathbb{I} & \sqrt{\eta [(V+\chi_{s})^{2}-1]}\sigma_{z} \\
0 & 0 & \sqrt{\eta [(V+\chi_{s})^{2}-1]}\sigma_{z} & \eta (V+\chi_{s}+\chi)\mathbb{I} \\
\end{array}
\right),
\end{equation}
and we have shown that such substitution provides a tight bound
for reverse reconciliation.
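For concreteness, here is a minimal sketch (ours, not code from the cited references) of the standard Gaussian key-rate evaluation applied to the substituted state of Eq.~(\ref{CM'}): it computes the symplectic eigenvalues, the Holevo bound $S(E:b)$ for reverse reconciliation under the usual purification assumption, and the homodyne mutual information, all in shot-noise units with an ideal detector. The parameter values and function names are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def g(nu):  # bosonic entropy of a mode with symplectic eigenvalue nu
    if nu <= 1.0:
        return 0.0
    a, b = (nu + 1) / 2, (nu - 1) / 2
    return a * np.log2(a) - b * np.log2(b)

def key_rate_rr(V, chi_s, eta, eps, beta=0.95):
    Vs = V + chi_s                      # effective EPR variance of the substituted state
    chi = (1 - eta) / eta + eps         # total channel noise referred to the input
    A = Vs * np.eye(2)
    B = eta * (Vs + chi) * np.eye(2)
    C = np.sqrt(eta * (Vs**2 - 1)) * np.diag([1.0, -1.0])
    # symplectic eigenvalues of the two-mode state gamma'_{AB}
    Delta = np.linalg.det(A) + np.linalg.det(B) + 2 * np.linalg.det(C)
    D = np.linalg.det(np.block([[A, C], [C.T, B]]))
    nu1 = np.sqrt((Delta + np.sqrt(Delta**2 - 4 * D)) / 2)
    nu2 = np.sqrt((Delta - np.sqrt(Delta**2 - 4 * D)) / 2)
    # conditional state of A after Bob's homodyne detection of x
    X = np.diag([1.0, 0.0])
    A_cond = A - C @ np.linalg.pinv(X @ B @ X) @ C.T
    nu3 = np.sqrt(np.linalg.det(A_cond))
    S_Eb = g(nu1) + g(nu2) - g(nu3)     # Eve purifies the substituted state
    I_ab = 0.5 * np.log2((Vs + chi) / (1 + chi))
    return beta * I_ab - S_Eb

# illustrative: 20 km fiber at 0.2 dB/km, excess noise 0.1
print(key_rate_rr(V=40.0, chi_s=0.1, eta=10**(-0.2 * 20 / 10), eps=0.1))
\end{verbatim}
In the paper's analysis $I(a:b)$ is of course obtained from the measured data (or from the actual $\gamma_{FAB}$); the formula above is only the textbook expression for the substituted state.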
Here, we have assumed that the pulses generated by Alice are i.i.d., and
the true random numbers play an important role in this scheme:
without them, the sampled pulses may have different statistical properties
from the signal pulses sent to Bob. The asymptotic performance of this scheme
will be analyzed in Sec. III.
\subsection{Passive beam splitter scheme}
Though the active switch scheme is intuitive for theoretical analysis,
it is not convenient in experimental realization, since a high
speed optical switch and an extra TRNG are needed. Moreover,
it lowers the secure key rate by a factor of $(1-r)$ due
to the sampling ratio. Inspired by \cite{Peng_OL_2008}, we propose a
passive beam splitter scheme to simplify the implementation. As illustrated in
Fig. \ref{BSScheme}, a beamsplitter is used to separate mode
$B_{1}$ into two parts. One mode, $M$, is monitored by Alice, and the other,
$B_{2}$, is sent to Bob.
\begin{figure}
\caption{(color online). Entanglement-based model for the beamsplitter scheme. The optical switch in Fig.\ref{pic2} is replaced by a beamsplitter with transmittance $T_{A}$: one output mode, $M$, is monitored by Alice, and the other, $B_{2}$, is sent to Bob.}
\label{BSScheme}
\end{figure}
The security bound of the passive beam splitter scheme can be calculated in a
similar way: we substitute the whole state $\rho_{FAB_{1}M_{0}}$ with
$\rho'_{FAB_{1}M_{0}}$. The covariance matrix of its subsystem, $\rho_{AB_{1}M_{0}}$, can be written as
\begin{equation}
\gamma'_{AB_{1}M_{0}}=
\left(
\begin{array}{ccc}
(V+\chi_{s})\mathbb{I} & \sqrt{(V+\chi_{s})^{2}-1}\sigma_{z} & 0 \\
\sqrt{(V+\chi_{s})^{2}-1}\sigma_{z} & (V+\chi_{s})\mathbb{I} & 0 \\
0 & 0 & \mathbb{I} \\
\end{array}
\right),
\end{equation}
where mode $M_{0}$ is initially in the vacuum state. The covariance matrix
after beam splitter is
\begin{equation}
\gamma'_{AB_{2}M}=(\mathbb{I}^{A}\otimes S_{\rm BS}^{BM})^{T}\gamma_{AB_{1}M_{0}}(\mathbb{I}^{A}\otimes S_{\rm BS}^{BM}),
\end{equation}
where
$$
\mathbb{I}^{A}\otimes S_{\rm BS}^{BM}=
\left(
\begin{array}{ccc}
\mathbb{I} & 0 & 0 \\
0 & \sqrt{T}\mathbb{I} & \sqrt{1-T}\mathbb{I} \\
0 & -\sqrt{1-T}\mathbb{I} & \sqrt{T}\mathbb{I} \\
\end{array}
\right).
$$
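The bookkeeping of the beamsplitter step can be checked numerically; the sketch below (ours, with the illustrative transmittance $T=0.5$ and variance $V+\chi_{s}=40.1$) simply applies the $6\times 6$ congruence above.
\begin{verbatim}
import numpy as np

def bs_symplectic(T):
    # identity on mode A, beamsplitter of transmittance T on modes (B, M)
    I2, S = np.eye(2), np.zeros((6, 6))
    S[0:2, 0:2] = I2
    S[2:4, 2:4] = np.sqrt(T) * I2
    S[2:4, 4:6] = np.sqrt(1 - T) * I2
    S[4:6, 2:4] = -np.sqrt(1 - T) * I2
    S[4:6, 4:6] = np.sqrt(T) * I2
    return S

Vs, T = 40.1, 0.5                       # illustrative values
c, sz = np.sqrt(Vs**2 - 1), np.diag([1.0, -1.0])
gamma = np.zeros((6, 6))
gamma[0:2, 0:2] = gamma[2:4, 2:4] = Vs * np.eye(2)
gamma[4:6, 4:6] = np.eye(2)             # monitor mode M_0 starts in vacuum
gamma[0:2, 2:4] = gamma[2:4, 0:2] = c * sz
S = bs_symplectic(T)
print(S.T @ gamma @ S)                  # gamma'_{A B_2 M}
\end{verbatim}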
Then, mode $B_{2}$ is sent to Bob through the quantum channel,
characterized by $(\eta, \chi)$. The calculation of $S(E:b)$
in this scheme is a little more complex, since an extra mode $M$ is
introduced by the beamsplitter. We omit the details of the calculation here;
they can be derived following \cite{Garcia_PHD_2009}. The
performance of this scheme is discussed in the next section.
\section{Simulation and Discussion}
In this section, we analyze the performance of both schemes with
numerical simulation.
As mentioned above, the simulation is restricted to the asymptotic
limit. The finite-size case will be
discussed later. To show the performance of the source monitoring schemes,
we illustrate the secure key rate
in Fig. \ref{Comparision}, in which the
\textit{untrusted noise scheme} is included for comparison. For
ease of discussion, the imperfections of practical detectors are not
included in our simulation; their effects have been studied
previously \cite{Lodewick_Ex_PRA_07}.
\begin{figure}
\caption{(color online). A comparison among the secure key rates
of the untrusted noise scheme, the optical switch scheme and the beam splitter
scheme for the GG02 protocol, shown as dashed, dot-dashed and solid lines,
respectively. Typical values are used for each parameter. The
modulation variance is $V=40$ and the source noise is $\chi_{s}$.}
\label{Comparision}
\end{figure}
As shown in Fig. \ref{Comparision}, the secure key rate of each scheme
is limited to within $30 {\rm km}$, where a large excess noise
$\epsilon\sim 0.1$ is used. With state-of-the-art technology, the
excess noise can be controlled to less than a few percent of the shot
noise, so our simulation is only a conservative estimate of the
secure key rate. The \textit{untrusted source noise scheme} has the
shortest secure distance, because it ascribes the source
noise to the channel noise, which is supposed to be induced by the
eavesdropper. In fact, the source noise is neutral and is
controlled neither by Alice and Bob nor by Eve.
Thus, this scheme overestimates Eve's power by supposing
that she can acquire extra information from the source noise, which lowers
the secure key rate of this scheme.
Both the active and passive schemes have a longer secure distance
than the untrusted noise scheme, since they are based on the general
source noise model, which does not ascribe the source noise to Eve's
knowledge. The active switch scheme has a lower secure key rate in
the short distance region. This is mainly because the random
sampling process intercepts part of the signal pulses to estimate
the variance of the source noise, which reduces the repetition rate by the
ratio $r$. Nevertheless, it does not overestimate Eve's power
\cite{YujieShen__PRA_11}. As a result, the secure key distance is
improved.
\begin{figure}
\caption{(color online). Secure key rate as a function of distance $d$ and transmittance $T$, in which T varies from $0.01$ to $0.99$.
The colored region indicates the area with a
positive secure key rate, the empty region indicates the insecure area, and the abscissa values of the boundary points correspond to the
secure distance.}
\label{3D}
\end{figure}
Both the secure key rate and the secure
distance of the beam splitter scheme are superior to those of the other
schemes when the transmittance is set to $0.5$, equal
to the sampling ratio $r$ in the optical switch scheme (where no extra
vacuum noise is introduced). This phenomenon is quite similar to
the ``noise beat noise'' scheme \cite{Garcia_PHD_2009}, which
improves the secure key
rate by introducing extra noise on Bob's side. Though such noise
lowers the mutual information between Alice and Bob, it also
makes it more difficult for Eve to estimate Bob's measurement result.
With the help of simulation, we find a similar phenomenon in the
beam splitter scheme: the vacuum noise reduces the mutual information
$S(b:E)$ more rapidly than it affects $I(a:b)$. A preliminary
explanation is that the sampled pulses in the optical switch scheme are
only used to estimate the noise variance, while in the beam splitter
model the vacuum mode also increases Eve's uncertainty about Bob's information. Combined
with its advantages in experimental realization, the beam splitter scheme
should be the superior choice.
To optimize the performance of the passive beam splitter scheme, we
illustrate the secure key rate in Fig. \ref{3D} for
different beam splitter transmittances $T_{A}$. The maximal secure
distance of about $34\,{\rm km}$ is achieved when
$T\sim 0.1$, about $10$ km longer than when $T\sim 0.5$.
Combined with the discussion above, this result
can be understood as a balance between the effects of the
beamsplitter-induced noise on $I(a:b)$ and $S(b:E)$. When $T_{A}$
is too small, $I(a:b)$ also decreases rapidly, which limits the
secure distance.
\section{Finite size effect}
The performance of source monitor schemes above is analyzed in
asymptotical limit. In practice, the real-time monitor will
concern the finite-size effect, since the variance of source noise
may change slowly. A thorough research in finite size effect is beyond
the scope of this paper, because the security of CVQKD in finite size is still
under development, that the optimality of Gaussian attack and collective attack
has not been shown in the finite size case. Nevertheless, we are able to give a
rough estimation on the effect of block size, for a given distance. Taking the
active optical switch scheme for example, with a similar
method in \cite{Leverrier_PRA_2010}, the maximum-likelihood estimator
$\hat{\sigma_{s}}^{2}$ is given by
\begin{equation}
\hat{\sigma_{s}}^{2}=\left(\frac{1}{m}\sum_{i=1}^{m}y_{i}^{2}-V\right),
\end{equation}
where $(m\hat{\sigma_{s}}^{2}/\sigma_{s}^{2})\sim\chi^{2}(m-1)$,
$y_{i}$ is the measurement result of the source monitor,
and $\sigma_{s}^{2}=\chi_{s}$ is the expected value of the variance
of the source noise. For large $m$, the
$\chi^{2}$ distribution converges to a normal distribution, so we have
\begin{equation}
\sigma_{\rm min}^{2}\approx \hat{\sigma}_{s}^{2}-z_{\epsilon_{\rm SM}}\frac{\hat{\sigma}_{s}^{2}\sqrt{2}}{\sqrt{m}}
\end{equation}
where $z_{\epsilon_{\rm SM}}$ is such that $1-{\rm erf}(z_{\epsilon_{\rm SM}}/\sqrt{2})=\epsilon_{\rm SM}$, and
$\epsilon_{\rm SM}$ is the failure probability. The reason why we choose
$\sigma_{\rm min}^{2}$ is that, given the values of $\eta(V+\chi_{s}+\chi)$ and $\eta$
estimated by Bob, the minimum of $\chi_{s}$ corresponds to the maximum of the channel noise $\chi$,
which may be fully controlled by Eve. The extra variance $\Delta_{m}\chi_{s}$ due to the
finite-size effect in the source monitor is
\begin{equation}\label{eq:block size}
\Delta_{m}\chi_{s}\approx \frac{z_{\epsilon_{\rm SM}}\hat{\sigma}_{s}^{2}\sqrt{2}}{\sqrt{m}}.
\end{equation}
For $\epsilon_{\rm SM}\sim 10^{-10}$, we have $z_{\epsilon_{\rm SM}}\approx 6.5$. As analyzed
in \cite{Leverrier_PRA_2010}, if the distance between Alice and Bob is $50$ km
($T\sim 10^{-1}$), the block length should be at least $10^{8}$, which
corresponds to $\Delta_{m} \chi_{s}\sim 10^{-6}$ induced by
the finite-size effect in the source monitor. Compared with a channel excess noise of $10^{-2}$,
the effect of finite size in the source monitor is very slight. Due to the high repetition rate of
CVQKD, Alice and Bob are able to accumulate such a block within several minutes, during
which the source noise may change only slightly.
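As a rough numerical illustration of Eq.~(\ref{eq:block size}) (our own sketch; the assumed value of $\hat{\sigma}_{s}^{2}$ is purely hypothetical), one can recover $z_{\epsilon_{\rm SM}}\approx 6.5$ and the order of magnitude of $\Delta_{m}\chi_{s}$ as follows.
\begin{verbatim}
import math

def delta_chi_s(sigma_s2_hat, m, eps_sm=1e-10):
    # solve 1 - erf(z / sqrt(2)) = eps_sm by bisection (no SciPy needed)
    lo, hi = 0.0, 20.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 1 - math.erf(mid / math.sqrt(2)) > eps_sm:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2
    return z, z * sigma_s2_hat * math.sqrt(2) / math.sqrt(m)

z, delta = delta_chi_s(sigma_s2_hat=1e-3, m=1e8)  # hypothetical source-noise variance
print(z, delta)   # z is about 6.5 and delta is of order 1e-6
\end{verbatim}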
\section{Concluding Remarks}
In conclusion, we propose two schemes, the active optical switch
scheme and the passive beamsplitter scheme, to monitor the variance
of the source noise $\chi_{s}$. Combined with our previous general noise model,
we derive tight security bounds for both schemes with reverse
reconciliation in the asymptotic limit. Both schemes can be implemented
with current technology, and the simulation results show a better
performance of our schemes compared with the untrusted source noise model.
Further improvement in secure distance can be achieved when
the transmittance $T_{A}$ is optimized.
In practice, the source noise varies slowly. To realize
real-time monitoring, the finite-size effect should be taken into account:
the block size should not be so large that the source noise changes
significantly within the block, nor so small
that we cannot estimate the source noise accurately. The security proof
of CVQKD with finite block size has not been established completely, since the
optimality of collective attacks and Gaussian attacks has not been shown in the finite-size
case. Nevertheless, we derive the effective source noise induced by the finite
block size, and find that its effect is not significant in our schemes. Thus, our
schemes may be helpful for realizing real-time source monitoring in the future.
\section*{Acknowledgments}
This work is supported by the Key Project of National Natural
Science Foundation of China (Grant No. 60837004), National Hi-Tech
Research and Development (863) Program. The authors thank Yujie Shen,
Bingjie Xu and Junhui Li for fruitful discussion.
\section*{References}
\end{document} |
\begin{document}
\title{A note on the integrality gap of the configuration LP for restricted Santa Claus\footnote{Research
was supported by German Research Foundation (DFG) project JA 612/15-2}}
\begin{abstract}
In the restricted Santa Claus problem we are given resources $\mathcal R$ and players $\mathcal P$.
Every resource $j\in\mathcal R$ has a value $v_j$ and
every player $i$ desires a set $\mathcal R(i)$ of resources.
We are interested in distributing the resources to players that desire them.
The quality of a solution is measured by the least happy player, i.e., the lowest sum of
resource values. This value should be maximized.
The local search algorithm by Asadpour et al.~\cite{DBLP:journals/talg/AsadpourFS12} and its
connection to the configuration LP have proved to be a very influential technique
for this and related problems.
In the original proof, a local search was used to obtain a bound of $4$
for the ratio of the fractional
to the integral optimum of the configuration LP (integrality gap). This bound is
non-constructive since the local search has not been shown to terminate in polynomial time.
On the negative side, the worst instance known has an integrality gap of $2$.
Although much progress was made in this area, neither bound has been improved since.
We present a better analysis that shows the integrality gap is not
worse than~$3 + 5/6 \approx 3.8333$.
\end{abstract}
\section{Introduction}
A generalization of the problem we consider goes back to Bansal and Sviridenko~\cite{DBLP:conf/stoc/BansalS06}.
In the Santa Claus problem there are players $\mathcal P$ and resources $\mathcal R$. Every
resource $j$ has a value $v_{ij} \ge 0$ for player $i$. The goal is to find an assignment $\sigma : \mathcal R\rightarrow\mathcal P$ such that $\min_{i\in\mathcal P}\sum_{j\in\sigma^{-1}(i)} v_{ij}$ is maximized.
In the restricted variant, we consider only values $v_{ij} \in\{0, v_j\}$ where $v_j > 0$ is a value
depending only on the resource. This can also be seen as each player desiring a subset $\mathcal R(i)$
of resources which have a value of $v_j$ for him, whereas other resources cannot be assigned to him.
For the restricted Santa Claus problem there exists a strong LP relaxation, the configuration LP.
The proof that this has a small integrality gap (see \cite{DBLP:journals/talg/AsadpourFS12}) is not trivial.
It works by defining an
exponential time local search algorithm which is guaranteed to return an integral solution of
value not much less than the fractional optimum.
This technique has since been used in other problems, like the minimization of the makespan~\cite{DBLP:journals/siamcomp/Svensson12, DBLP:conf/soda/JansenR17}.
Significant research has also gone into making the proof constructive~\cite{PolacekS16,DBLP:journals/talg/AnnamalaiKS17,DBLP:conf/ipco/JansenR17}.
Yet, no improvement of the bound of $4$ on the integrality gap has been found. We show that
the original analysis is not tight and can be improved to $3 + 5/6\approx 3.8333$.
\subsection{Configuration LP}
The configuration LP is an exponential size LP relaxation, but it
can be approximated in polynomial time within a factor of $(1 + \epsilon)$
for every $\epsilon > 0$~\cite{DBLP:conf/stoc/BansalS06}.
For every player $i$ and every value $\tau$ let
\begin{equation*}
\mathcal C(i, \tau) = \{ S \subseteq \mathcal R(i) : v(S) \ge \tau\} .
\end{equation*}
These are the configurations for player $i$ and value $\tau$: each is a set of
resources desired by player $i$ with total value at least $\tau$.
The optimum $\mathrm{OPT}^*$ of the configuration LP is the highest $\tau$ such that the following linear
program is feasible.
\\[1em]
\fbox{
\begin{minipage}{\textwidth}
Primal of the configuration LP for restricted {\sc Santa Claus}
\begin{align*}
\sum_{C\in\mathcal C(i, \tau)} x_{i, C} &\ge 1 & \forall i\in\mathcal P \\
\sum_{i\in\mathcal P}\sum_{C\in\mathcal C(i,\tau) : j\in C} x_{i, C} &\le 1 & \forall j\in\mathcal R \\
x_{i, C} &\ge 0
\end{align*}
\end{minipage}}
\\[1em]
\fbox{
\begin{minipage}{\textwidth}
Dual of the configuration LP for restricted {\sc Santa Claus}
\begin{align*}
\max \sum_{i\in\mathcal P} y_i &- \sum_{j\in\mathcal R} z_j \\
\sum_{j\in C} z_j &\ge y_i &\forall i\in\mathcal P, C\in\mathcal C(i, \tau) \\
y_i, z_j &\ge 0
\end{align*}
\end{minipage}}
\\[1em]
We derive the following condition from duality:
\begin{theorem}\label{theorem-condition-sc}
Let $y\in \mathbb R_{\ge 0}^\mathcal P$ and $z\in \mathbb R_{\ge 0}^\mathcal R$ such
that $\sum_{i\in\mathcal P} y_i > \sum_{j\in\mathcal R} z_j$ and
for every $i\in\mathcal P$ and $C\in\mathcal C(i, \tau)$ it holds
that $\sum_{j\in C} z_j \ge y_i$, then
$\mathrm{OPT}^* < \tau$.
\end{theorem}
It is easy to see that if such a solution $y, z$ exists, then every component can be scaled
by a constant to obtain a feasible dual solution with objective value greater than any given value. Hence, the
dual must be unbounded and therefore the primal must be infeasible.
\section{Algorithm}
We consider the local search algorithm from~\cite{DBLP:journals/talg/AsadpourFS12}.
It is the same algorithm with a slightly different presentation that is
inspired by~\cite{PolacekS16}.
Throughout this section we will denote by $\alpha = 3 + 5/6$ the bound on the integrality gap
that we want to prove.
We model our problem as a hypergraph matching problem:
There are vertices for all players and all resources and the hyperedges $\mathcal H$ each consist
of exactly one player $i$ and a set of resources $C\subseteq \mathcal R(i)$ where
$v(C) \ge \mathrm{OPT}^* / \alpha$.
However, we restrict $\mathcal H$ to edges that are minimal,
that is to say $v(C') < \mathrm{OPT}^* / \alpha$ for all
$C'\subset C$.
It is easy to see that a matching (a set of non-overlapping edges) such that every
player is in one matching edge corresponds to a solution of value $\mathrm{OPT}^*/\alpha$.
For a set of edges $F$ we write $F_\mathcal P$ as the set of players in these edges
and $F_\mathcal R$ as the resources in the edges.
The algorithm maintains a partial matching $M$ and extends it one player at a time. After $|\mathcal P|$ many calls to the algorithm
the desired matching is found.
Two types of edges play a crucial role in the algorithm:
An ordered list $A = \{e^A_1,\dotsc, e^A_\ell\} \subseteq \mathcal H\setminus M$ (the addable edges)
and sets $B_M(e^A_1)$, $B_M(e^A_2)$, \dots, $B_M(e^A_\ell) \subseteq M$ (the blocking edges for each addable edge).
An addable edge is an edge that the algorithm hopes to add to $M$ -- either to cover the new player or to free the player of a blocking edge.
A blocking edge is an edge in $M$ that conflicts with an addable edge, i.e., that has a non-empty overlap with an addable edge.
For each addable edge $e^A_k$ we define the blocking edges $B_M(e^A_k)$
as $\{e'\in M : e'_\mathcal R \cap (e^A_k)_\mathcal R \neq \emptyset\}$.
From the definition of the algorithm it will be clear that
$B_M(e^A_k) \cap B_M(e^A_{k'}) = \emptyset$ for $k\neq k'$.
We write $B_M(A)$ for $\bigcup_{e\in A} B_M(e)$.
\subsection{Detailed description of the algorithm}
In each iteration the algorithm first adds a new addable edge
that does not overlap in resources with any existing addable
or blocking edge. Then it consecutively swaps addable
edges that are not blocked for the blocking edge they
are supposed to free.
Also, addable/blocking edges added at a later time are removed,
since they might be obsolete.
The swap does not create new blocking edges,
since the new matching edge does not overlap with addable edges.
Also, by adding only addable edges that do not overlap with
the previous addable/blocking edges, the resources
of addable edges are disjoint and the resources of
each blocking edge overlap only with one addable edge.
The blocking edges $B_M(e^A_k)$ and $B_M(e^A_{k'})$ for
two addable edges with $k < k'$ must be disjoint, because
otherwise $e^A_{k'}$ could not have been added in the first place.
\begin{algorithm}[H]
\SetKwInOut{KwInput}{Input}
\SetKwFor{Loop}{loop}{}{end}
\KwInput{Partial matching $M$ and unmatched player $i_0$}
\KwResult{Partial matching $M'$ with $M'_\mathcal P = M_\mathcal P\cup \{i_0\}$}
$\ell \gets 0$ \;
\Loop{}{
$\ell \gets \ell + 1$ \;
let $e^A_\ell\in \mathcal H\setminus M$ with
$(e^A_\ell)_\mathcal R \cap (A\cup B_M(A))_\mathcal R = \emptyset$ and
$(e^A_\ell)_\mathcal P \in B_M(A)_\mathcal P \cup \{i_0\}$ \;
\tcp{existence of $e^A_\ell$ is proved in the analysis}
$A \gets A\cup\{e^A_{\ell}\}$ ; \tcp{$e^A_\ell$ is added as the last addable edge}
\While{$B_M(e^A_\ell) = \emptyset$}{
\eIf{there is an edge $e'\in B_M(A)$ with $e'_\mathcal P = (e^A_\ell)_\mathcal P$ \tcp{unambiguous since $M$ is matching}} {
let $e^A_k\in A$ such that $e'\in B_M(e^A_k)$ ; \tcp{unambiguous}
$M \gets M\setminus \{e'\}\cup\{e^A_\ell\}$ ; \tcp{swap $e'$ for $e^A_\ell$
}
$A \gets \{e^A_1,\dotsc, e^A_k\}$; $\ell\leftarrow k$ ; \tcp{Forget $e^A_{k+1},\dotsc, e^A_\ell$
}
}{
$M \gets M\cup \{e^A_\ell\}$ ;
\tcp{$(e^A_\ell)_\mathcal P = i_0$}
\Return $M$ \;
}
  }
}
\caption{Local search for restricted {\sc Santa Claus}}
\end{algorithm}
\section{Analysis}
\begin{theorem}[\cite{DBLP:journals/talg/AsadpourFS12}]
The algorithm terminates after at most $2^{|M| - 1}$ many iterations of the main loop.
\end{theorem}
\begin{proof}
Consider the signature vector
\begin{equation*}
s(A) = (|B_M(e^A_1)|,|B_M(e^A_2)|,\dotsc,|B_M(e^A_\ell)|, \infty) .
\end{equation*}
This vector decreases lexicographically after every iteration of the main loop:
If the inner loop is never executed, then the last component
is replaced by a finite value. Hence, assume the inner while
loop is executed at least once. Let $\ell'$ be the cardinality
of $A$ after the last execution of the inner loop.
Then $|B(e^A_{\ell'})|$ has decreased by the swap operation.
It follows that the signature vectors in each iteration of
the main loop are pairwise different.
The number of signature vectors can be trivially bounded
by $|M|^{|M|}$.
A clever idea from~\cite{DBLP:journals/talg/AsadpourFS12}
even gives a bound of $2^{|M| - 1}$:
We have that $\sum_{k=1}^\ell |B_M(e^A_k)| = |B_M(A)|\le |M|$
and $|B_M(e^A_k)| \ge 1$ for all $k$.
There is a bijection between signature vectors and possibilities
of placing separators on a line of $|M|$ elements:
$|B_M(e^A_k)|$ is the number of elements between the $(k-1)$-th
and $k$-th separator. The number of possibilities of placing
separators between the $|M|$ elements is the number of
subsets of the $|M| - 1$ gaps, i.e. $2^{|M| - 1}$.
\end{proof}
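The stars-and-bars count used above can be checked by brute force (a small sanity script of ours, not part of the original argument): the number of ordered tuples of positive integers summing to $n$ is exactly $2^{n-1}$.
\begin{verbatim}
def compositions(n):
    # yield all ordered tuples of positive integers summing to n
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

for n in range(1, 10):
    assert sum(1 for _ in compositions(n)) == 2 ** (n - 1)
print("compositions of n are counted by 2^(n-1) for n = 1..9")
\end{verbatim}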
Clearly the inner loop also terminates after finitely many iterations, since in
each iteration $\ell$ is decreased.
\begin{theorem}
If the configuration LP is feasible, then there is always an edge that can be added to $A$ as long as $i_0$ is not matched.
\end{theorem}
\begin{proof}
In the proof we use the constant $\beta = 1 + 8/15 \approx 1.53333$, which has been chosen
so as to minimize $\alpha$.
Assume toward contradiction that no edge remains that can be added to $A$,
but $i_0$ is not covered.
In the remainder of the proof we will write $B$ instead of $B_M(A)$
and $B(e)$ instead of $B_M(e)$,
since $M$ and $A$ are constant throughout the proof.
Define
\begin{equation*}
y_i = \begin{cases}
1 &\text{if $i\in B_\mathcal P\cup \{i_0\}$},\\
0 &\text{otherwise}.\\
\end{cases}
\end{equation*}
\begin{equation*}
z_j = \begin{cases}
1 &\text{if } v_j \ge \mathrm{OPT}^*/\alpha \text{ and } j \in A_\mathcal R \cup B_\mathcal R,\\
\min\{1/3, \beta \cdot v_j / \mathrm{OPT}^*\} &\text{if } v_j < \mathrm{OPT}^*/\alpha \text{ and } j \in A_\mathcal R \cup B_\mathcal R,\\
0 &\text{otherwise}.\\
\end{cases}
\end{equation*}
We refer to the resources $j$ where $v_j \ge \mathrm{OPT}^*/\alpha$ as
fat resources and to others as thin resources. Note that by
minimality of edges in $\mathcal H$, each edge containing
a fat resource does not contain any other resources.
We call these the fat edges. Likewise, edges that contain only thin
resources are referred to as thin edges.
\begin{claim}
\label{claim-feasibility}
$(y, z)$ is a feasible solution for the dual of the configuration LP.
\end{claim}
\begin{claim}
\label{claim-negativity}
$(y, z)$ has a negative objective value, that is to say $\sum_{j\in\mathcal R} z_j < \sum_{i\in\mathcal P} y_i$.
\end{claim}
By Theorem~\ref{theorem-condition-sc} this implies that the configuration LP is infeasible for $\mathrm{OPT}^*$.
A contradiction.
\end{proof}
\begin{proof}[Proof of Claim~\ref{claim-feasibility}]
Let $i\in\mathcal P$ and $C\in \mathcal C(i, \mathrm{OPT}^*)$. We need to show that $y_i \le z(C)$.
If $y_i = 0$ or $C$ contains a fat resource, this is trivial.
Hence, assume w.l.o.g. that $C$ consists solely of thin resources and $y_i = 1$.
Since no addable edge for $i$ remains,
$v(C \setminus (A_\mathcal R \cup B_\mathcal R)) < \mathrm{OPT}^*/\alpha$.
Let $S\subseteq C$ be the resources $j\in C$ which have $z_j = 1/3$.
\begin{description}
\item[Case 1: $|S| = 3$.] Then $z(C) \ge z(S) \ge 3 \cdot 1/3 = 1$.
\item[Case 2: $|S| \le 2$.]
Define $C' := (C \cap (A_\mathcal R \cup B_\mathcal R)) \setminus S$.
Then
\begin{equation*}
v(C') > v(C) - v(C \setminus (A_\mathcal R \cup B_\mathcal R)) - v(S) \ge \mathrm{OPT}^* - \mathrm{OPT}^*/\alpha - v(S) .
\end{equation*}
Therefore,
\begin{multline*}
z(C) \ge 1/3 \cdot |S| + \beta / \mathrm{OPT}^* \cdot v(C') > 1/3 |S| + \beta (1 - 1/\alpha - v(S) / \mathrm{OPT}^*) \\
\ge 1/3 |S| + \beta (1 - (|S| + 1) /\alpha) =: (*) .
\end{multline*}
Since $\beta / \alpha = 0.4 > 1/3$, the coefficient of $|S|$ in $(*)$ is negative
and thus we can replace $|S|$ by its upper bound, i.e., $2$.
By inserting the values of $\alpha$ and $\beta$ we get,
\begin{equation*}
(*) \ge 2/3 + \beta (1 - 3/\alpha) = 1 . \qedhere
\end{equation*}
\end{description}
\end{proof}
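The constants can be verified mechanically (our own check, exact in rational arithmetic):
\begin{verbatim}
from fractions import Fraction

alpha = Fraction(3) + Fraction(5, 6)   # 23/6
beta = Fraction(1) + Fraction(8, 15)   # 23/15

assert beta / alpha == Fraction(2, 5)                # beta/alpha = 0.4 > 1/3
assert Fraction(2, 3) + beta * (1 - 3 / alpha) == 1  # case |S| <= 2 above
assert beta * Fraction(5, 2) / alpha == 1            # |B(e)| = 1, small v_min (next proof)
assert beta * 2 / alpha == Fraction(4, 5)            # |B(e)| >= 2 gives 0.8|B(e)| (next proof)
print("constant checks pass for alpha = 23/6, beta = 23/15")
\end{verbatim}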
\begin{proof}[Proof of Claim~\ref{claim-negativity}]
We write in the following $F^f$ ($F^t$) for the fat edges (thin edges, respectively) in a set of edges $F$.
First note that every fat edge with positive $z$ value must be in a fat blocking edge and therefore
\begin{equation*}
\sum_{j\in \mathcal R^f} z_j \le |B^f| .
\end{equation*}
Now consider thin addable edges.
Since every addable edge is blocked, $|B(e)| \ge 1$ for every $e\in A^t$.
We now proceed to show that for every $e\in A^t$
\begin{equation*}
z(e_\mathcal R\cup B(e)_\mathcal R) \le |B(e)|.
\end{equation*}
Note that for every thin resource $j$, we have $z_j \le \beta / \mathrm{OPT}^* \cdot v_j$. By minimality of edges in $\mathcal H$, it holds that
$v(e_\mathcal R) \le 2 \mathrm{OPT}^* / \alpha$ (each element
in $e_\mathcal R$ has value at most $\mathrm{OPT}^* / \alpha$).
Also $v(e'_\mathcal R \setminus e_\mathcal R) \le \mathrm{OPT}^*/\alpha$
for each $e'\in B(e)$, since the intersection of $e_\mathcal R$
and $e'_\mathcal R$ is non-empty.
If $|B(e)| \ge 2$, this implies
\begin{multline*}
z(e_\mathcal R\cup B(e)_\mathcal R) \le \beta / \mathrm{OPT}^* \cdot (v(e_\mathcal R) + v(B(e)_\mathcal R \setminus e_\mathcal R)) \\
\le \beta \cdot (2/\alpha + |B(e)| \cdot 1/\alpha) \le |B(e)| \cdot \beta \cdot 2 /\alpha = 0.8 \cdot |B(e)| < |B(e)| .
\end{multline*}
Assume in the following that $|B(e)| = 1$.
Let $v_{\min}$ be the value of the smallest element
in $e_\mathcal R\cup B(e)_\mathcal R$.
Then $v(e_\mathcal R\cup B(e)_\mathcal R) \le 2 \mathrm{OPT}^* / \alpha + v_{\min}$: If the smallest element is in $e_\mathcal R$, then
\begin{equation*}
v(e_\mathcal R\cup B(e)_\mathcal R) \le \underbrace{\mathrm{OPT}^* / \alpha + v_{\min}}_{\ge v(e_\mathcal R)} + v(B(e)_\mathcal R\setminus e_\mathcal R)
\le 2\mathrm{OPT}^*/\alpha + v_{\min} .
\end{equation*}
The same argument (swapping the role of $e$ and $B(e)$) holds,
if the smallest element is in $B(e)$.
Therefore, if $|B(e)| = 1$ and $v_{\min} \le 1/2 \cdot \mathrm{OPT}^*/\alpha$,
\begin{equation*}
z(e_\mathcal R\cup B(e)_\mathcal R) \le \beta \cdot (2/\alpha + v_{\min}) \le \beta \cdot 5/2 \cdot 1/\alpha = 1 = |B(e)| .
\end{equation*}
If $|B(e)| = 1$ and $v_{\min} > 1/2 \cdot \mathrm{OPT}^*/\alpha$,
then $B(e)_\mathcal R\setminus e_\mathcal R, B(e)_\mathcal R\cap e_\mathcal R,$ and $e_\mathcal R\setminus B(e)_\mathcal R$ have at least one and by minimality of edges at most one element.
Since the $z$ value of each thin resource is at most $1/3$,
\begin{equation*}
z(e_\mathcal R\cup B(e)_\mathcal R) = z(e_\mathcal R\setminus B(e)_\mathcal R) + z(e_\mathcal R\cap B(e)_\mathcal R) + z(B(e)_\mathcal R \setminus e_\mathcal R) \le 3 \cdot 1/3 = 1 = |B(e)| .
\end{equation*}
We conclude that
\begin{equation*}
\sum_{i\in\mathcal P} y_i = |B^f| + |B^t| + 1 > |B^f| + \sum_{e\in A^t} |B(e)| \ge \sum_{j\in\mathcal R^f} z_j + \sum_{e\in A^t} z(e_\mathcal R\cup B(e)_\mathcal R) = \sum_{j\in\mathcal R} z_j . \qedhere
\end{equation*}
\end{proof}
\appendix
\end{document} |
\begin{document}
\email{[email protected], [email protected]}
\title{Group Operation on Nodal Curves}
\author{Kubra Nari}
\author{Enver Ozdemir}
\address{Informatics Institute, Istanbul Technical University}
\begin{abstract}
In this work, we present an efficient method for computing in the Generalized Jacobian of special singular curves. The efficiency of the operation is due to the representation of an element in the Jacobian group by a single polynomial.
\end{abstract}
\keywords{ Jacobian group, nodal curves, Mumford representation, Cantor's Algorithm}
\thanks{Classification: 11G99, 11G20, 11Y99}
\maketitle
\specialsection*{\bf \Large Introduction}
\indent The Jacobian groups of smooth curves, especially for those belonging to the elliptic and hyperelliptic curves, have been rigorously investigated \cite{Anderson, Bauer,Cantor, Makdisi, Mumford} due to their use in computational number theory and cryptography \cite{CohFrey, HLEN, KOBEL, KOBHYP, VMIL}. Even though the singular counterpart of these curves have simple geometric structures, the generalized Jacobian groups of these curves might be potential candidates for further applications in computational number theory or cryptography.
\indent An element in the Jacobian of a hyperelliptic curve is represented by a pair of polynomials $(u(x),v(x))$ satisfying certain conditions \cite{Mumford}. The situation is the same for higher degree curves. For example, an element $D$ in the Jacobian of a superelliptic curve $S:y^3=g(x)$ is represented by a triple of polynomials $(s_1(x),s_2(x),s_3(x))$ satisfying certain conditions \cite{Bauer}. Therefore, for a smooth curve, we do not have the liberty to choose any polynomial $u(x)$ and say that it is a coordinate of an element in the Jacobian group of the curve. On the other hand, the results of this work will allow us to treat almost any polynomial $h(x)$ as an element of the Jacobian of a nodal curve. We believe that this might encourage researchers to work with these curves for further applications in the related areas. \\
\indent In this work, we present an efficient method to perform the group operation on the Jacobians of nodal curves. The method is basically a modification of the Mumford representation~\cite{Mumford} and Cantor's algorithm~\cite{Cantor}. We note that in \cite{OzdemirPHD}, the Mumford representation and Cantor's algorithm are extended to general singular curves. For our purpose, a nodal curve $N$ over a finite field $\mathbb F_q$ of characteristic $p\ne 2$ is a curve defined by an equation $y^2=xf(x)^2$ where $f(x)\in \mathbb F_q[x]$ is an irreducible polynomial. Let $d=\deg(f(x))$. We show that almost any polynomial $h(x)$ with $\deg(h(x))<d$ uniquely represents an element $D$ in the Jacobian of the curve. Then, we define an addition algorithm for this single polynomial representation in the Jacobian group. The representation provides great advantages, and implementation results are presented at the end of the paper.
\section{Singular Curves}
\indent The Jacobian is an abstract term which attaches an abelian group to an algebraic curve. This abstract group, Jacobian, is simply the ideal class group of the corresponding coordinate ring. If the
curve is smooth, the attached group is called Jacobian, otherwise it is called Generalized Jacobian \cite{Rosenlicht}. However, we will keep using the term `Jacobian' for all kinds of curves. We are only interested in computing in the Jacobian groups of nodal curves and more details about algebraic and geometric properties of these curves can be found in \cite{Bosch,LIU}. As we mentioned above, for our purposes, a nodal curve is defined by an equation of the form $N:y^2=xf(x)^2$ over a field $\mathbb F_q$ with a characteristic different from 2 where $f(x)$ is an irreducible polynomial in $\mathbb F_q[x]$. The attached Jacobian group is denoted by Jac($N$). For example, if the degree of $f(x)$ is 1, that is $N:y^2=x(x+a)^2$ for some $a\ne 0\in \mathbb F_q$, computing in the Jacobian group is similar to computing in an elliptic curve group \cite[Section 2.10]{WAS}. In order to perform group operation for a curve, each element in the Jacobian should be represented in a concrete way. The Mumford representation provides a concrete representation for elements in the Jacobians of hyperelliptic curves. This representation has been extended \cite{OzdemirPHD} for singular curves defined by equations of the form $y^2=g(x)$. Below, we present Mumford representation along with Cantor's algorithm which provides a method of computing in the Jacobians for aforementioned singular curves \cite{OzdemirPHD}.
\subsubsection{The Mumford Representation}
\indent Let $f(x) \in \mathbb F_q[x]$ be a monic polynomial of degree $2g+1$ such that $g\ge 1$. A curve $H$ over $\mathbb F_q$ is defined by the equation $y^2=f(x)$. Any divisor class $D$ in the Jacobian group of $H$, Jac($H$), is represented by a pair of polynomials $[u(x),v(x)]$ satisfying the following:
\begin{enumerate}
\item $\deg(v(x))<\deg(u(x))$.
\item $v(x)^2-f(x)$ is divisible by $u(x)$.
\item If $u(x)$ and $v(x)$ are both multiples of $(x-a)$ for a singular point $(a,0)$ then
$\dfrac{f(x)-v(x)^2}{u(x)}$ is not a multiple of $(x-a)$. Note that $(a,0)$ is a singular point of $H$ if $a$ is a multiple root of $f(x)$.\\
\end{enumerate}
Any divisor class $D\in$ Jac($H$) is uniquely represented by a (reduced) pair $(u(x),v(x))$ if in addition to the above properties, we have:
\begin{enumerate}
\item $u(x)$ is monic.
\item deg$(v(x))<$deg$(u(x))\leq g.$
\end{enumerate}
We should note here that the identity element is represented by $[1,0]$.
\subsubsection{Cantor's Algorithm}
This algorithm takes two divisor classes $D_1=[u_1(x),v_1(x)]$ and $D_2=[u_2(x),v_2(x)]$ on $H: y^2=f(x)$ and outputs the unique representative for the divisor class $D$ such that $D=D_1+D_2$.
\begin{enumerate}
\item $h= \gcd (u_1,u_2,v_1+v_2)$ with polynomials $h_1, h_2,h_3$ such that\\ $h=h_1u_1+h_2u_2+h_3(v_1+v_2)$\\
\item $u=\dfrac {u_1u_2}{h^2}$ and $v\equiv \dfrac{h_1u_1v_2+h_2u_2v_1+h_3(v_1v_2+f)}{h}$ (mod $u$)\\
\\
{\bf repeat:}
\\
\item $\widetilde{u}=\dfrac{v^2-f}{u}$ and $\widetilde{v}\equiv v$ (mod $\widetilde u$)\\
\item $u=\widetilde u$ and $v=-\widetilde v$
\\
{\bf until} deg $(u)\leq g$
\item Multiply $u$ by a constant to make $u$ monic.
\item $D=[u(x),v(x)]$
\end{enumerate}
The combination of the third and fourth steps is called the reduction step, which eventually returns a unique reduced divisor for each class. The justification of the above statements is given in \cite{OzdemirPHD}.\\
\section{Nodal Curves}
A nodal curve $N$ over a field is an algebraic curve with finitely many singular points which are all simple double points. The curve $N$ has a smooth resolution $\widetilde{N}$ obtained by separating the two branches at each node. In this section, we are going to construct a representation for elements in the Jacobians of nodal curves. Again, note that the curves under consideration are of the form $N:y^2=xf(x)^2$ where $f(x)$ is an irreducible polynomial of degree $d$ over the field $\mathbb F_q$. Here, we briefly mention related results for nodal curves, especially from the work of M. Rosenlicht \cite{Rosenlicht, Rosenlicht54}. Let
$C$ be a smooth algebraic curve and $\mathfrak m$ be a modulus, i.e. $\mathfrak m=\sum_{P\in C} m_PP$ where $m_P$ is non-negative. Denote the generalized Jacobian group of $C$ with respect to the modulus $\mathfrak m$ by $J_{\mathfrak m}(C)$. We have a surjective homomorphism \cite{Rosenlicht54}
$$\sigma : J_{\mathfrak athfrak mathfrak m}(C)\rightarrow Jac(C).$$
\begin{remark}\label{Rm1} The normalization of the nodal curve $N$ gives $\mathbb P^1$ so we take $C=\mathbb P^1$. It is known that Jac($C$) is trivial. In our case, i.e., $J_{\mathfrak m}(C)=$Jac($N$), the kernel of $\sigma$ is isomorphic to a torus $\mathbb G_m^d$ of dimension $d=\deg(f(x))$. Note that the modulus $\mathfrak m$ has only singular points, which are the roots of $f(x)$. See \cite{Dechene,Serre} for more details.
\end{remark}
\begin{theorem} \label{theorem1}
Let $f(x)$ be an irreducible polynomial of degree $d$ over $\mathbb F_q$ and $N:y^2=xf^2(x)$ be a nodal curve over $\mathbb F_q$. Any divisor class $D\in$ Jac($N$) is uniquely represented by a polynomial $h(x)$ satisfying $$\deg(h(x))< d \text{ and } \gcd(f(x),x-h^2(x))=1.$$
\end{theorem}
We are going to prove Theorem \ref{theorem1} by a series of lemmas.
\begin{lemma} \label{Lm1}
Let $N$ be as above. Let $h(x)$ be a polynomial of degree less than $d$ such that $\gcd(f(x),x-h^2(x))=1$. Then, the pair $D=[f^2(x),h(x)f(x)]$ represents an element in Jac($N$).
\end{lemma}
\begin{proof}
Let $$D=[u(x),v(x)]=[f^2(x),h(x)f(x)].$$ Both $u(x)$ and $v(x)$ are divisible by $x-a$ where $a$ is any root of $f(x)$ over the algebraic closure of $\mathbb F_q$. On the other hand, $\gcd(f(x),x-h^2(x))=1$ so $$\dfrac{xf^2(x)-v^2(x)}{u(x)}=\dfrac{xf^2(x)-h^2(x)f^2(x)}{f^2(x)}=x-h^2(x)$$ is not divisible by $x-a$ for any root $a$ of $f(x)$. By the Mumford representation, which is described above, $[f^2(x),h(x)f(x)]$ represents an element $D$ in Jac($N$).
\end{proof}
\begin{lemma}\label{Lm2}
Let $\gcd(f(x),x-h_i^2(x))=1$ and $\deg(h_i(x))<d$ for each $i=1,2$. Let $D_1=[f^2(x),h_1(x)f(x)]$ and $D_2=[f(x)^2,h_2(x)f(x)]$ be two divisor classes.
We find $$D_1+D_2=D_3=[f^2(x),h_3(x)f(x)]$$ via
\begin{enumerate}
\item finding two polynomials $g_1(x),g_2(x)$ such that $$g_1(x)f(x)+g_2(x)(h_1(x)+h_2(x))=1$$
\item Then computing $$h_3(x)\equiv (f(x)h_1(x)g_1(x)+g_2(x)(h_1(x)h_2(x)+x)) \mod f(x)$$ with $\deg(h_3(x))<d$.\\
\end{enumerate}
\end{lemma}
\begin{proof}
We apply Cantor's Algorithm for $D_1+D_2$ to confirm the addition algorithm.
\begin{enumerate}
\item We first compute:\\
$\begin{array}{lll}
\gcd(f(x)^2,f(x)^2,h_1(x)f(x)+h_2(x)f(x))&=& f(x)\cdot \gcd(f(x),f(x),h_1(x)+h_2(x))\\
\\
&=& f(x)\cdot \gcd(f(x),h_1(x)+h_2(x))\\
\\
&=&f(x)
\end{array}$
with $g_1(x),g_2(x)$ such that $g_1(x)f(x)+g_2(x)\Big(h_1(x)+h_2(x)\Big)=1.$ \\
\item Set\\
$\begin{array}{lll}
u_3(x)&=&\dfrac{f(x)^2f(x)^2}{f(x)^2}=f(x)^2 \\
\\
v_0(x)&=&\dfrac{g_1(x)f^3(x)h_1(x)+g_2(x)\Big(h_1(x)h_2(x)f^2(x)+xf(x)^2\Big)}{f(x)}\\
\\
&=&g_1(x)f^2(x)h_1(x)+g_2(x)\Big(h_1(x)h_2(x)f(x)+xf(x)\Big).\\
\\
\end{array}$
\item Then\\
$\begin{array}{lll}
v_3(x)&\equiv& v_0(x) \mod u_1(x)\\
\\
&\equiv& g_1(x)f(x)^2h_1(x)+g_2(x)\Big(h_1(x)h_2(x)f(x)+xf(x)\Big)\mod u_3(x)=f(x)^2\\
\\
&=&\underbrace{\Big(f(x)g_1(x)h_1(x)+g_2(x)(h_1(x)h_2(x)+x) \mod f(x)\Big)}_{h_3(x)}f(x)\\
\\
&=& h_3(x)f(x) \text{ with } \deg(h_3(x))<d\\
\\
\end{array}$
\item $D_1+D_2=[u_3(x),v_3(x)]=[f^2(x),h_3(x)f(x)]=D_3$\\
\end{enumerate}
\end{proof}
Note that $$[f^2(x), h(x)f(x)]+[f^2(x),-h(x)f(x)]=[1,0].$$
\begin{lemma}
Let $N:y^2=xf^2(x)$ be a nodal curve over $\mathbb F_q$ such that $f(x)$ is an irreducible polynomial. Let $$\begin{array}{ccc}
D_1 & = & [f^2(x),\quad h_1(x)f(x)] \text{ with } \deg (h_1(x))<\deg(f(x))\\
D_2 &= & [f^2(x),\quad h_2(x)f(x)] \text{ with } \deg(h_2(x))<\deg(f(x))
\end{array}$$
such that $$h_1(x)\ne h_2(x).$$
Then $$D_1\neq D_2.$$
\end{lemma}
\begin{proof}
Suppose $$D_1=D_2$$ then
$$\begin{array}{ccl}
[1,0]&=& D_1+(-D_2)\\
&=& [f^2(x),h_1(x)f(x)]+[f^2(x),-h_2(x)f(x)]\\
\end{array}$$
This is possible only when $h_1(x)+(-h_2(x))$ is zero or a multiple of $f(x)$. Note that it can not be a multiple of $f(x)$ as the degrees of both $h_1(x)$ and $h_2(x)$ are less than $\deg(f(x))$. Therefore, as long as $h_1(x)\ne h_2(x)$, we do not get $D_1=D_2$.
\end{proof}
\noindent{\it Proof of Theorem \ref{theorem1}}:\\
In Lemma \ref{Lm1}, we defined a new type of representation for elements in the Jacobian group of $N:y^2=xf^2(x)$, i.e., each element is represented by a pair $[f^2(x),h(x)f(x)]$ such that $\deg(h(x))<\deg(f(x))$ and $f(x)$ does not divide $x-h^2(x)$. Lemma \ref{Lm2} shows how to perform the group operation with this representation. In the last lemma, we showed that for distinct $h(x)$ the pairs represent distinct elements in the Jacobian group. As the degree of $h(x)$ is less than $d$, we have approximately $q^{\deg(f(x))}$ such pairs, which is equal to the order of the Jacobian group by Remark \ref{Rm1}, and this completes the proof. \\
Let $\mathbb F_q$ be a finite field of characteristic $p\ne 2$. Let $N:y^2=xf^2(x)$ be a singular curve such that $f(x)$ is an irreducible polynomial of degree $d$ over $\mathbb F_q$. The above discussion leads us to the following algorithm.
\begin{algorithm}[!h]
\caption{Addition algorithm for the Jacobian groups of the curves $N:y^2=xf^2(x)$ over $\mathbb F_q$. }
\label{alg1}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $D_1=h_1(x) \hspace{1ex} \text{and} \hspace{1ex} D_2=h_2(x)$ where the degrees of $h_1(x)$ and $h_2(x)$ are less than the degree of $f(x)$. Note that we represent the pair $[f(x)^2,h(x)f(x)]$ by $h(x)$.
\ENSURE $D= D_1 + D_2=h(x)$
\STATE If $h_1(x)+h_2(x)=0$ set $h(x)=[1,0]$ (identity). Otherwise do:
\STATE Find $g_1(x)$ and $g_2(x)$ such that
$g_1(x)f(x)+g_2(x)(h_1(x)+h_2(x))=1$.
\STATE Set: $$h(x)\equiv (g_2(x)(h_1(x)h_2(x)+x)) \mod f(x)$$
\RETURN $h(x)$\\
\end{algorithmic}
\end{algorithm}
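A minimal computational sketch of Algorithm~\ref{alg1} follows (ours, using Python/SymPy; the prime, the polynomial $f(x)$, and the helper name \texttt{jac\_add} are illustrative choices, not taken from the paper's implementation).
\begin{verbatim}
from sympy import symbols, Poly, invert

x = symbols('x')
p = 13                                   # small odd prime, for illustration only
f = Poly(x**3 - x + 1, x, modulus=p)     # cubic with no roots mod 13, hence irreducible

def jac_add(h1, h2, f, p):
    # Algorithm 1: if h1 + h2 = 0 return the identity [1,0];
    # otherwise g2 = (h1 + h2)^(-1) mod f and h = g2*(h1*h2 + x) mod f
    s = (h1 + h2) % f
    if s.is_zero:
        return None                      # identity element [1, 0]
    g2 = Poly(invert(s.as_expr(), f.as_expr(), x, modulus=p), x, modulus=p)
    return (g2 * (h1 * h2 + Poly(x, x, modulus=p))) % f

h1 = Poly(x + 3, x, modulus=p)           # gcd(f, x - h^2) = 1 is assumed to hold here
h2 = Poly(2*x**2 + 5, x, modulus=p)
print(jac_add(h1, h2, f, p))
\end{verbatim}
For cryptographic sizes one would of course use a compiled library (the timings in the table below were obtained with PARI/GP from C++); the sketch only mirrors the three steps of the algorithm.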
We form the curve $N$ with an irreducible polynomial $f(x)$ of degree $d$ over $\mathbb F_q$. Any polynomial $h(x)$ of degree less than $d$ with $\gcd(f(x),x-h^2(x))=1$ represents a unique element in
Jac($N$). For two elements $D_1,D_2\in $ Jac($N$) represented by polynomials $h_1(x)$ and $h_2(x)$ respectively, we defined an addition operation involving only univariate polynomial arithmetic. The algorithm returns a polynomial $h(x)\in\mathbb F_q[x]$ which uniquely represents $D=D_1+D_2$. We also note that almost every polynomial $h(x)$ of degree less than $d$ represents an element in Jac($N$), and this allows one to easily select a random element $D$ in Jac($N$). The single polynomial representation not only gives us the liberty to select any polynomial, it also provides an efficient group operation in the Jacobian group. The following table compares this group operation with the regular Cantor's algorithm. According to the results in this table, the single-polynomial representation of Jacobian elements has advantages over the polynomial-pair representation. The time is measured while computing $pQ$, where $Q$ is an element in the Jacobian group of $N$ and $p=4294967311$. The curve is over the field $\mathbb F_p$.
\\
\begin{tabular}{ |p{4cm}|p{3cm}|p{3.5cm}| }
\hline
{\bf Degree of $f(x)$ for $N:y^2=xf^2(x)$}& {\bf Nodal curve addition (seconds)} &{\bf Cantor's algorithm (seconds)}\\
\hline
5 & 0.003 & 0.023\\
\hline
11 & 0.01 & 0.081 \\
\hline
23 & 0.019 & 0.357\\
\hline
47 & 0.06 & 2.15\\
\hline
53 & 0.068 & 2.98\\
\hline
63 & 0.089 & 4.96 \\
\hline
71 & 0.12 & 6.95\\
\hline
83 & 0.15 & 10.98\\
\hline
95 & 0.19 & 16.67 \\
\hline
110 & 0.24 & 26.36\\
\hline
130 & 0.33 & 45.46\\
\hline
145 & 0.41 & 64.9\\
\hline
150 & 0.43 & 72.3 \\
\hline
165 & 0.52 & 100.39\\
\hline
193 & 0.69 & 167.29\\
\hline
\end{tabular}
\\
\\
The tests were run on a Linux OS computer with 8 GB RAM and an Intel Core i7-5600 2.6 GHz processor. We used the C++ programming language with the PARI/GP library\cite{Pari}. The following chart illustrates the results in the table.
\newpage
\begin{figure}
\includegraphics[width=\linewidth]{fig1.png}
\caption{Nodal Curves vs Cantor's Algorithm}
\label{fig:fig1}
\end{figure}
\begin{thebibliography}{10}
\bibitem {Anderson} G.W. Anderson, Abeliants and their application to elementary construction of Jacobians, Advances in Mathematics 172, 169-205, 2002.
\bibitem {Bauer} M.L. Bauer, The Arithmetic of Certain Cubic Function Fields, Math.Comp,
(73)(2003), 387-413.
\bibitem {Bosch} S. Bosch, W. Lutkebohmert, M. Raynaud, Neron Models, Springer-Verlag, 1990.
\bibitem {Cantor} D. G. Cantor, Computing in the Jacobian of a hyperelliptic curve, Math. Comp. 48 (1987), 95-101.
\bibitem {CohFrey} H. Cohen, G. Frey, Handbook of Elliptic and Hyperelliptic Curve Cryptography, Chapman \& Hall/CRC 2005.
\bibitem {Coh} H. Cohen, A Course in Computational Algebraic Number Theory, Springer-Verlag, 2000.
\bibitem {Dechene} I. Dechene, Generalized Jacobians in cryptography, Ph.D. thesis, McGill University, 2005.
\bibitem{HLEN} H. W. Lenstra Jr., "Factoring integers with elliptic curves." Annals of Mathematics (2) 126 (1987), 649-673.
\bibitem{KOBEL} N. Koblitz, Elliptic curve cryptosystems, Mathematics of Computation 48, 1987, pp. 203–209
\bibitem{KOBHYP} N. Koblitz, Hyperelliptic cryptosystems, J. Cryptology, 1(3): 139-150, 1989
\bibitem {LIU} Q. Liu, Algebraic Geometry and Arithmetic Curves, Oxford Science Publications, 2002.
\bibitem {Makdisi} K. Khuri-Makdisi, Linear algebra algorithms for divisors on an algebraic curve. Math. Comp. 73 (2004), no. 245, 333--357
\bibitem {VMIL} V. Miller, Use of elliptic curves in cryptography, Advances in cryptology---CRYPTO 85, Springer Lecture Notes in Computer Science vol 218, 1985
\bibitem {Mumford} D. Mumford, Tata Lectures on Theta II, Birkhauser, 1982.
\bibitem {OzdemirPHD} E. Ozdemir, Curves and Their Applications to Factoring Polynomials, Ph.D. Thesis, University of Maryland, 2009.
\bibitem {Rosenlicht} M. Rosenlicht, Equivalence relations on algebraic curves, Annals of Mathematics, 56, 169-191, 1952
\bibitem{Rosenlicht54} M. Rosenlicht, Generalized Jacobian varieties, Annals of Mathematics, 59, 505-530, 1954.
\bibitem{Pari} The PARI~Group, PARI/GP version {\tt 2.11.0}, Univ. Bordeaux, 2018.
\url{http://pari.math.u-bordeaux.fr/}.
\bibitem {Serre}J. P. Serre, Algebraic Groups and Class Fields,Springer-Verlag, 1997.
\bibitem {WAS} L. C. Washington, Elliptic Curves: Number Theory and Cryptography, 2nd edition. Chapman \& Hall/CRC 2008.
\end{thebibliography}
\end{document} |
\begin{document}
\title{On Periods: from Global to Local}
\author{Lucian M. Ionescu}
\address{Department of Mathematics, Illinois State University, IL 61790-4520}
\email{[email protected]}
\date{June, 2018}
\begin{abstract}
Complex periods are algebraic integrals over complex algebraic domains,
also appearing as Feynman integrals and multiple zeta values.
The Grothendieck-de Rham period isomorphism
for p-adic algebraic varieties, defined via Monsky-Washnitzer cohomology, is briefly reviewed.
Various p-adic analogues of periods are considered,
together with their relation to Buium-Manin arithmetic differential equations.
\end{abstract}
\maketitle
\setcounter{tocdepth}{3}
\tableofcontents
\section{Introduction}
In this article we discuss periods and their applications,
as a continuation of \cite{Ionescu-Sumitro}, focusing on the relation between global periods in characteristic zero,
and their local counterparts.
The main goal of this research is to question the ``stability'' of the connection between scattering amplitudes and periods \cite{Schnetz:QuantumPeriods,Brown:FeynmanIntegrals,Brown:ICMP} when passing from global to local, by using the analogy between Veneziano amplitudes and Jacobi sums;
this question is addressed in a follow-up article \cite{LI:p-adicFrobenius},
which also adopts a Deformation Theory point of view when introducing p-adic numbers.
It is expected to provide some feedback on the Feynman amplitudes and Multiple Zeta Values correspondence \cite{QuantaMagazine,LI:Periods-FI-JS-Talk}.
Periods are values of algebraic integrals, extending the field of algebraic numbers.
Non-trivial examples are Feynman amplitudes from
experimentally ``dirty-gritty'' Quantum Field Theory \cite{Schnetz:QuantumPeriods},
which nevertheless also happen to be linear combinations of multiple zeta values from ``pure''
Number Theory \cite{QuantaMagazine}.
That Mathematics is unreasonably effective, we know; but to the point of
starting to reconsider Plato's thesis that reality is a mirror of the world of
(mathematical) ideas?! So, {\em Number}, (once categorified) does rule the (Quantum) Universe after all ...
After reviewing the idea and concept of period, the article explores the connection with {\em quantization functors}, i.e. representations of (generalized) categories of cobordisms,
as a perhaps more physical route than that of abstract motives.
At a more concrete level,
the power of the analogy \cite{Weil-analogy} between the Veneziano amplitude, as a
String Theory analogue of a Feynman amplitude,
and the Jacobi sum in finite characteristic \cite{Ireland-Rosen}
(a finite-characteristic analogue of the Euler beta function),
is used to investigate a possible global-to-local correspondence for periods (factorization or reduction of cohomology):
$$\xymatrix@R=.2pc{
Veneziano\ Amplitude: & & Jacobi\ Sum: \\ A(a,a')=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}
& \quad \leftrightarrow \quad & J(c,c')=\frac{g(c)g(c')}{g(cc')}
}
$$
where $\alpha=-1+(k_1+k_2)^2, \beta=-1+(k_3+k_4)^2$ relate the in/out momenta of the interacting strings,
and $c,c':F_p^\times\to C^\times$ are multiplicative characters of the finite field $F_p$.
The first measures the correlation (interaction amplitude) of two strings, with momenta expressed in Mandelstam's variables,
while the second measures the ``intersection correlation'' between two multiplicative subgroups (e.g. squares and cubes),
yielding the correction term (``defect'' $a_p$) for the number of points $N_p$
(like a constructive or destructive amplitude for the ``volume integral'')
of a finite Riemann surface $C(F_p)$ over a finite field $F_p$.
This connection has been studied, as for example in \cite{Kholodenko}.
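For concreteness, here is a small numerical sketch (our addition, not part of the sources quoted above; the prime $p=13$, the primitive root and the character exponents are illustrative choices) computing Gauss and Jacobi sums over $F_p$ and checking the identity $J(c,c')=g(c)g(c')/g(cc')$ together with $|J(c,c')|=\sqrt{p}$:
\begin{verbatim}
# Illustrative sketch: Gauss and Jacobi sums over F_p and the identity
# J(c,c') = g(c) g(c') / g(c c'), mirroring the beta/gamma relation above.
import cmath

p = 13                       # a small odd prime (illustrative choice)
g0 = 2                       # a primitive root mod 13 (checked below)
assert len({pow(g0, k, p) for k in range(p - 1)}) == p - 1

def char(m):
    """Multiplicative character c_m with c_m(g0^k) = exp(2*pi*i*m*k/(p-1))."""
    table = {pow(g0, k, p): cmath.exp(2j * cmath.pi * m * k / (p - 1))
             for k in range(p - 1)}
    return lambda a: table[a % p] if a % p else 0   # convention: c_m(0) = 0

def gauss(c):
    """Gauss sum g(c) = sum_a c(a) * exp(2*pi*i*a/p)."""
    return sum(c(a) * cmath.exp(2j * cmath.pi * a / p) for a in range(1, p))

def jacobi(c, cp):
    """Jacobi sum J(c,c') = sum_a c(a) c'(1-a)."""
    return sum(c(a) * cp(1 - a) for a in range(p))

c, cp, ccp = char(3), char(4), char(7)   # c*c' has exponent 3+4 = 7, nontrivial
J = jacobi(c, cp)
assert abs(abs(J) - p ** 0.5) < 1e-8                       # |J| = sqrt(p)
assert abs(J - gauss(c) * gauss(cp) / gauss(ccp)) < 1e-8   # the quoted identity
\end{verbatim}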
The Local-to-Global Principle could be used, even informally, via an analogy with the algebraic number theory case, to guide more experienced investigators in dealing with the global case of periods,
Feynman Diagrams and Mirror Symmetry.
In the other direction, it can be used to guide the development of p-adic String Theory, beyond a mere formal replacement of real (complex) numbers by p-adic numbers.
The interplay between Galois symmetries and periods (Feynman amplitudes) \cite{Brown:ICMP}
will be investigated in the framework of Noether's Theorems,
connecting conserved quantities (e.g. unitarity as conservation of probability) and symmetries of the system.
The article is organized as follows.
We review the basic ideas regarding periods in \S \ref{S:Periods-AG},
starting from their simple introduction as algebraic integrals, followed by a cohomological interpretation.
Remarks on periods, motives and the Galois group are followed by a discussion of p-adic periods, in connection with Buium's calculus.
Further considerations are postponed for a Deformation Theory approach to p-adic numbers \cite{LI:p-adicFrobenius}, and an investigation of a connection between Grothendieck's algebraic de Rham cohomology, and the discrete analog of de Rham cohomology of the present author \cite{LI:DiscreteDeRham}, as well as possible connections with the discrete periods of \cite{Mercat:DiscretePeriods}.
\section{Periods: from integrals to cohomology classes}
The arithmetic notion of period refers essentially
to the value of rational integrals over rational domains \cite{KZ}.
For example, the ubiquitous ``Euclidean circle-radius ratio'' $\pi=\int_{[-1,1]}dx/\sqrt{1-x^2}$,
residues like $2\pi i = \int dz/z$, or path integrals such as $\log(n)=\int_1^n dx/x$ \cite{K},
are {\em numeric periods}.
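As a quick numerical illustration (our own, with $n=3$ chosen arbitrarily and SciPy used for the quadrature), these sample periods can be recovered directly from their integral representations:
\begin{verbatim}
# Numerical check of the sample periods pi, log(3) and 2*pi*i.
import numpy as np
from scipy.integrate import quad

pi_val, _ = quad(lambda x: 1.0 / np.sqrt(1.0 - x**2), -1.0, 1.0)  # the circle period
log3_val, _ = quad(lambda x: 1.0 / x, 1.0, 3.0)                   # log(3) as a path integral
# 2*pi*i from dz/z over the unit circle z(t) = exp(i t): the integrand (dz/dt)/z is the constant i
res_val = 1j * quad(lambda t: 1.0, 0.0, 2.0 * np.pi)[0]

print(pi_val - np.pi, log3_val - np.log(3.0), res_val - 2j * np.pi)  # all close to 0
\end{verbatim}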
The representation of a period as an integral is not unique.
When placed in the context of de Rham isomorphism of a compact manifold,
or algebraic variety as in the early work by Grothendieck \cite{Grothendieck-deRham},
they are matrix coefficients of the corresponding integration pairing \cite{Muller-Stach}.
The resulting isomorphism between de Rham cohomology and singular cohomology:
$$de \ Rham\ Theorem: \quad
H_{dR}^*(X,D)\otimes_Q C \quad \overset{\cong}{\to}\quad H_{sing}^*(X,D)\otimes_Q C,$$
is called the {\em period isomorphism}.
\subsection{Periods of Algebraic Varieties} \label{S:Periods-AG}
Specifically, from the algebraic geometry point of view,
a {\em numeric period} $p$ is represented by a
quadruple consisting of a smooth algebraic variety $X$ over $Q$
of dimension $d$,
a regular algebraic $d$-form $\omega$, a normal crossing divisor $D$,
and a singular chain $\gamma$ on $X(C)$ with boundary on $D(C)$:
$$(X,D,\omega,\gamma) \quad \mapsto \quad p=\int_\gamma \omega.$$
Fixing $X$ and such a normal crossing divisor $D$,
choosing rational bases in both cohomology groups allows one to
represent the above {\em period isomorphism} as a {\em period matrix}.
Of course, there are elementary transformations on such quadruples (linearity,
change of variables and Stokes formula),
which leave the corresponding period unchanged \cite{KZ}, p.31.
Whether the {\em effective periods}, i.e.
equivalence classes of quadruples modulo the elementary moves,
correspond isomorphically to the numeric periods,
is the content of the corresponding Kontsevich Conjecture.
Since the history of periods and period domains goes back to the very beginning of algebraic geometry \cite{CG}, we will proceed with two such elementary examples:
the Riemann sphere (genus zero) and the case of elliptic curves (genus one).
\begin{example}
With $X=P^1-\{0,\infty\}$, $D=\emptyset$, $\omega=dz/z$ and $\gamma=S^1$ the unit circle,
we find $2\pi i$ as the (only) entry of the period matrix of the
period isomorphism $H^1(X;C)\to H_1(X;C)$.
If we change the divisor to $D=\{1,n\}$ and take $\gamma=[1,n]$, then
the numeric period $\log(n)$ becomes one of the periods of
$H^1(X,D)$.
\end{example}
\begin{example}\label{Example:EC}
Given an elliptic curve $X:y^2=x^3+ax+b$
with canonical homology basis $\gamma_1,\gamma_2$ \cite{RS}
and differential form $\omega=dx/y$, the period matrix (vector) is (\cite{CG}, p.1418):
$$(A,B)=\left( \int_{\gamma_1}\omega, \int_{\gamma_2} \omega\right).$$
It is customary to pass to the fraction field (of periods)
and divide by $A$ to get $(1,\tau)$, with the {\em normalized $B$-period}
$\tau=B/A$
having positive imaginary part\footnote{Due to the fact that
$i\int_X \omega\cup \bar{\omega}>0$ \cite{CG}.}
and constituting an invariant of the elliptic curve \cite{Carlson-Stach-PD}, p.9.
For example with $\lambda=-1$, the elliptic curve $E:y^2=(x-1)x(x+1)$
has invariant $\tau=i$, and $E=C/(Z\oplus Zi)$ has an additional automorphism
of order $4$ (complex multiplication) \cite{Chowla-Selberg}.
Such normalized periods provide a simple example of {\em period domain},
here the upper-half plane $\C{H}$.
A change of the homology basis by a unimodular transformation in
$\Gamma=SL_2(Z)$ corresponds to a fractional transformation relating
the corresponding two points of the period domain.
Thus the moduli space of genus one Riemann surfaces corresponds to
the quotient $\Gamma \backslash \C{H}$.
Additional details and examples can be found in \cite{Carlson-Stach-PD}, Ch.1.
\end{example}
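As a numerical complement to the example (our own sketch, not part of the original text; the choice of cycles is the standard one for the real curve and is an assumption of this check), the claim $\tau=i$ for $E:y^2=(x-1)x(x+1)$ amounts to the equality of the two real integrals below, since the cycle over $[0,1]$, where $x^3-x<0$, contributes the factor $i$:
\begin{verbatim}
# Numerical check that y^2 = x^3 - x has a square period lattice, i.e. tau = i.
import numpy as np
from scipy.integrate import quad

f = lambda x: x**3 - x
# integral along [1, infinity), half of the real period of omega = dx/y:
I_A = quad(lambda x: 1.0 / np.sqrt(f(x)), 1.0, 2.0)[0] \
    + quad(lambda x: 1.0 / np.sqrt(f(x)), 2.0, np.inf)[0]
# integral along [0, 1], where x^3 - x < 0, giving the (purely imaginary) second period:
I_B = quad(lambda x: 1.0 / np.sqrt(-f(x)), 0.0, 1.0)[0]

print(I_A, I_B, I_A / I_B)   # ratio ~ 1.0, hence tau = B/A = i
\end{verbatim}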
Before we move on, we give a crude physics interpretation of periods that will be useful later.
\begin{rem}
Think of an elliptic curve with a 1-form $\omega$ as a 2D-universe with a
flow, or perhaps a world sheet of a string, with a given capacity for
propagating action (Quantum computing: duplex channel).
In a conformal metric, for the convenience of relating to the metric picture,
the corresponding vector field will represent a free propagation,
with certain circulation and flux
(harmonic dual pair: streamlines and equipotential lines).
The two periods then measure these two: circulation and flux.
\end{rem}
\subsection{Families of periods}
Continuing the discussion of the above case of Riemann surfaces of genus one,
one often has a family of such elliptic curves $E_t$ depending holomorphically
on a parameter $t$.
The resulting map $t\mapsto \tau(t)$ is the {\em period map}.
For example, if the base space is the Riemann sphere,
one finds a globally defined map $S\to \Gamma\backslash\C{H}$ \cite{CG}, p.1418.
\subsection{Interpretation}
Comparing periods and algebraic numbers is probably the first thing to do,
before developing a theory of periods.
\subsubsection{On algebraic numbers}\label{S:AlgebraicNumbers}
Algebraic numbers (over $Q$) extend rational numbers via extensions $Q[x]/(f(x))$.
If we focus on integers, and content ourselves with systematically viewing field extensions
as fields of fractions (as long as we stay within the commutative world),
then we may choose to interpret these algebraic extensions geometrically,
as lattices \cite{IonescuMina}, and algebraically as {\em group representations}.
For example, $Q(i)$ is the fraction field of its ring of algebraic integers $Z[i]$,
which in turn is the group ring of its group of units $U=\langle i\rangle$,
a subgroup of the rotation group of the {\em rational plane} $Z\times Z$
\footnote{... in the spirit of the geometric interpretation of complex numbers, starting with Argand and Gauss, followed by Riemann and perhaps the modern CFT and String Theory developments.}.
At this stage, finite fields (finite characteristic) can be constructed via quotients
of lattices of algebraic integers.
For example $F_5\cong Z[i]/(2+i)$, $F_{3^2}\cong Z[i]/(3)$, etc.
\footnote{See \cite{IonescuMina} for developments of this direction of reasoning.}.
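As a small illustrative check (our addition; the reduction map below is the natural one, since $2+i\equiv 0$ forces $i\equiv -2\equiv 3$), the isomorphism $Z[i]/(2+i)\cong F_5$ can be tested numerically:
\begin{verbatim}
# Check that a + b*i  |->  (a + 3*b) mod 5 is a ring map killing 2 + i,
# so Z[i]/(2+i) behaves like F_5.
import random

def red(a, b):                     # class of a + b*i in Z[i]/(2+i), viewed in F_5
    return (a + 3 * b) % 5

for _ in range(1000):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    prod = (a * c - b * d, a * d + b * c)                     # (a+bi)(c+di) in Z[i]
    assert red(*prod) == (red(a, b) * red(c, d)) % 5          # multiplicative
    assert red(a + c, b + d) == (red(a, b) + red(c, d)) % 5   # additive
assert red(2, 1) == 0              # the generator 2 + i lies in the kernel
print("Z[i]/(2+i) matches F_5 on these samples")
\end{verbatim}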
Then, increasing the dimension from the above $D=0$ case, i.e. the number of variables,
we obtain algebraic varieties $X: Z[x_1,\dots,x_n]/\langle f_1,\dots,f_k\rangle$,
suited for a geometric study via homology and cohomology.
As a rich class of examples, we have the Riemann surfaces
$RS:y^2=f(x)$, with de Rham / Dolbeault cohomology over the complex numbers.
\subsubsection{Cohomology pairings and their coefficients}
Integrals, as a non-degenerate pairing, e.g. de Rham isomorphism,
on the other hand go beyond the arithmetic realm.
Nevertheless, periods should probably be better compared with algebraic integers,
since the inverse of a period is not necessarily a period:
$1/\pi$ is conjecturally not a period.
Another important aspect is that other important numbers like $e$ and Euler's
constant $\gamma$ are conjecturally not periods; why? What lies beyond
periods, but before the ``transcendental junk'' like Liouville's number and such!?
Is there a numerical shadow of a Lie correspondence, and hence do
exponential periods like $e^{\mathrm{period}}$ form another important class?
\subsubsection{Speculative remarks}
Speaking of shadows, the cohomology theory lies ``above'' the periods themselves,
reminiscent of {\em categorification},
a process relating numbers and algebraic structures
(e.g. Grothendieck ring etc.).
But how to isolate the ``cohomology theory'' from the actual implementation,
based on a specific manifold?
Is there such a ``thing'' as an isomorphism class of the functor representing the
respective cohomology theory? And how are its various matrices,
corresponding to its values, related?
If algebraic numbers can be viewed in fact as representations
corresponding to their multiplicative structure, as suggested above,
then periods should probably relate to representations of groupoids, maybe?
This leads to the ``3-rd level'' of abstraction, beyond arithmetic and algebraic-geometric.
\subsection{Periods, Motives and Galois Group}
\cite{Wiki-motives} ``The theory of motives was originally conjectured as an attempt to unify a rapidly multiplying array of cohomology theories, including Betti cohomology, de Rham cohomology, l-adic cohomology, and crystalline cohomology.''
In a more abstract direction, following the work of Grothendieck on pure and mixed motives,
in the 1990s the work of M. Nori, starting from directed graphs encoding ``equivalence moves'' between periods (with a certain similarity to Reidemeister moves and the theorem on homotopy classes of knots), led to a degree of abstraction which seems not to serve our purpose here,
namely to understand the ``knots, braids and links'' of {\em real} ``elementary'' particles
in decays and collisions.
Now on one hand, Feynman integrals, which are also periods, are ``closer'' to Chen's theory of iterated integrals, which forms a homotopy theory analog of de Rham cohomology \cite{Chen,Hains}.
This explains the ``coincidence'' with (linear combinations of) periods arising from Number Theory, e.g. multiple zeta values, which can also be expressed as iterated integrals.
This ``non-commutative side'' of {\em homotopical motives}, is probably better suited to
be approached from the (``Cosmic'') Galois action viewpoint \cite{Brown:Galois}.
On the other hand,
the de Rham {\em cohomology} pairing framework for understanding and generalizing periods
is reasonably close/similar to the {\em representation theory} viewpoint for algebraic integers.
Then what is the connection between the two directions?
It seems that generalizing the idea of a Galois action on roots of polynomials (the representation
point of view) allows one to view periods as having infinite orbits under bigger analogues of
``Galois groups'' \cite{Brown:Galois}.
For example $\pi$ can be viewed as associated to a 1-dimensional representation of a group
\cite{Brown:FeynmanIntegrals}, p.11.
This direction provides a framework for studying amplitudes,
perhaps not unrelated to the cornerstone idea in high energy physics
that ``elementary particles'' are associated to irreducible representations,
but definitely to be pursued by ``Mathematicians only!''.
Alternatively, {\em motives}, as universal summands of cohomology theories,
allow one to connect with the direct approach to periods via the period isomorphism.
This perhaps allows one to connect the global and the local.
Indeed, since {\em Weil cohomologies} are such universal summands, one should be able
to identify the natural analogue of the concept of period in finite characteristic
\cite{Weil-Conjectures}.
We will confine ourselves to the simplest nontrivial case of elliptic curves (Example \ref{Example:EC}),
and investigate in what follows the reduction modulo a prime,
in the context of {\em Ramification Theory},
within the {\em Algebraic Number Theory framework},
which is suited for the algebraic integers and representation viewpoint,
and which is ``close enough'' to the
Algebraic Geometry framework mentioned in \S\ref{S:Periods-AG}.
\section{p-Adic Periods}
The above periods in characteristic zero correspond to the ``real world''
of the ``prime at infinity'' (arguably on both accounts: \cite{Real-fish}
\footnote{Real numbers result from completing the rationals the ``other way''
than the direction of the carryover 2-cocycle!}).
But what about the periods of p-adic analysis?
How are these defined, and how do they relate to p-adic analogues like p-adic gamma function, Gauss sums and Jacobi sums?
Of course, beyond the natural motivation to generalize and applications to Number Theory, such a study would shed light on p-adic String Theory and CFT, via their connection to Veneziano amplitudes and other such
iterated integrals on moduli spaces of punctured Riemann spheres \cite{Brown:ModuliSpaces}.
\subsection{p-Adic De Rham Cohomology}
(Algebraic/Geometric) Number Theory in finite characteristic
may be thought of as the ``infinitesimal/linear analysis''
of p-adic analysis\footnote{It is rather {\em Deformation Theory},
as it will be argued elsewhere \cite{LI:p-adicFrobenius}.},
and the algebraic de Rham cohomology of a variety does not reduce ``nicely'',
requiring a lift to characteristic zero over p-adic number fields,
called Monsky-Washnitzer cohomology \cite{Hartog}, p.27 (see also \cite{Kedlaya} and references therein).
Briefly, if $X$ is an algebraic variety over $Q$ and $A=Q[x,y]/(f(x,y))$ its coordinate ring,
one considers the ``overconvergent'' subalgebra $A^\dagger$ of its lift to p-adic numbers (loc. cit.),
in order to have exact forms closed under p-adic completion.
Then the {\em Monsky-Washnitzer cohomology of $X$},
also called here the {\em p-adic algebraic de Rham cohomology of $X$}, is:
$$H^i_{MW}(A):= H^i_{dR}(A^\dagger)\otimes_{Z_q} Q_q,$$
where $q=p^n$, $Z_q$ is the ring of integers of the unramified extension of $Q_p$ of degree $n$,
and $Q_q$ is its field of fractions.
Constructing a lift to $Z_p$ of the Frobenius $x\mapsto x^q$ as a ring endomorphism turns out to be difficult,
being equivalent to specifying a p-derivation in the sense of Buium-Manin \cite{Buium-Manin}:
$$\phi_p(x)=Frob_p(x)+p\delta_p(x), \quad \delta(x+y)=\delta(x)+\delta(y)+C_p(x,y),$$
$$C_p(x,y)=[x^p+y^p-(x+y)^p]/p\ \in \ Z[x,y].$$
Note at this stage that, if one is only concerned with its action on MW-cohomology,
one may relax the endomorphism requirement, and examples
like $\phi_1(x)=x^p$ or $\phi_2(x)=x^p+px$ will do \cite{Hartog}, Ch.3, p.24.
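The simplest instance of these formulas is $A=Z$, where the Frobenius lift can be taken to be the identity, so that $\delta_p(n)=(n-n^p)/p$; the following small check (our own sketch, with $p=5$ an arbitrary choice) verifies the displayed addition rule:
\begin{verbatim}
# Check of the p-derivation addition rule on Z for the lift phi = id, p = 5.
import random

p = 5
delta = lambda n: (n - n**p) // p                 # integer by Fermat's little theorem
C = lambda x, y: (x**p + y**p - (x + y)**p) // p  # the carry polynomial C_p(x,y)

for _ in range(1000):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    assert (x - x**p) % p == 0                    # delta is indeed integer-valued
    assert delta(x + y) == delta(x) + delta(y) + C(x, y)
print("delta_p satisfies the stated addition rule for p =", p)
\end{verbatim}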
\subsection{p-adic Period Isomorphism}
The p-adic analog of the period isomorphism is defined via Hodge Theory and the Hodge isomorphism.
Since the presentation may benefit from a Deformation Theory approach to p-adic numbers, as sketched in \cite{LI:p-adicFrobenius}, it will be continued in loc. cit.
Indeed, the understanding of p-adic periods would benefit from a comparison of MW-cohomology with Hochschild cohomology,
and the corresponding period isomorphism.
\section{Conclusions}
The subject of {\em Periods}, although with a long history, has become essential for understanding the deep mysteries underlying the ``coincidence'' between quantum physics scattering amplitudes and number theoretical special values, like MZVs.
Since Grothendieck's algebraic de Rham cohomology is instrumental in studying the global (conceptual) aspects of periods, we will also point to its connection to classical de Rham cohomology, via the discrete version of de Rham cohomology, as defined for finite abelian groups \cite{LI:DiscreteDeRham}, which will be addressed elsewhere, in connection with p-adic periods.
Finally, the analogy between Euler's integrals (beta and gamma functions) and the Jacobi and Gauss sums is used to question the ``stability'' of the (Veneziano) amplitude-periods connection, when passing from global to local.
Understanding the connections with Veneziano amplitudes and Jacobi sums requires an understanding of the ``discrete case'' of finite characteristic.
A parallel with characteristic zero can be achieved via a Deformation Theory viewpoint, and can be found in \cite{LI:p-adicFrobenius}.
\end{document}
\begin{document}
\title{Derived equivalences for a class of PI algebras}
\author{Quanshui Wu}
\address{School of Mathematical Sciences, Fudan University, Shanghai 200433, China}
\email{[email protected]}
\author{Ruipeng Zhu}
\address{Department of Mathematics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China}
\email{[email protected]}
\begin{abstract}
A description of tilting complexes is given for a class of PI algebras whose prime spectrum is canonically homeomorphic to the prime spectrum of its center. Some Sklyanin algebras are examples of the algebras considered. As an application, it is proved that any algebra derived equivalent to such an algebra is Morita equivalent to it.
\end{abstract}
\subjclass[2020]{
16D90,
16E35,
18E30
}
\keywords{Tilting complexes, derived equivalences, Morita equivalences, derived Picard groups}
\maketitle
\section*{Introduction}
Let $A$ be a ring, $P$ be a progenerator (that is, a finitely generated projective generator) of Mod-$A$, and $\{ e_1, \cdots, e_s \}$ be a set of central complete orthogonal idempotents of $A$. For any integers $s > 0$ and $n_1 > \cdots > n_s$, it is clear that $T:=\bigoplus\limits_{i=1}^{s}Pe_i[-n_i]$ is a tilting complex over $A$ and $\End_{\D^\mathrm{b}(A)}(T)$ is Morita equivalent to $A$. If $A$ is commutative, then any tilting complex over $A$ has the above form \cite[Theorem 2.6]{Ye99} or \cite[Theorem 2.11]{RouZi03}.
Recall that a ring $A$ is called an {\it Azumaya algebra}, if $A$ is separable over its center $Z(A)$, i.e., $A$ is a projective $ (A\otimes_{Z(A)}A^{op})$-module. Any two Azumaya algebras which are derived equivalent are Morita equivalent \cite{A17}. If $A$ is an Azumaya algebra,
then there exists a bijection between the ideals of $A$ and those of $Z(A)$ via $I \mapsto I \cap Z(A)$ \cite[Corollary II 3.7]{MI}. In this case, the prime spectrum of $A$ is canonically homeomorphic to the prime spectrum of $Z(A)$.
In this note, we prove a more general result.
\begin{thm}\label{tilting complex for finite over center case}
Let $A$ be a ring, $R$ be a central subalgebra of $A$. Suppose that $A$ is finitely presented as $R$-module, and $\Spec(A) \to \Spec(R), \; \mathfrak{P} \mapsto \mathfrak{P} \cap R$ is a homeomorphism.
Then for any tilting complex $T$ over $A$, there exists a progenerator $P$ of $A$ and a set of complete orthogonal idempotents $e_1, \cdots, e_s$ in $R$ such that in the derived category $\mathcal{D}(A)$
$$T \cong \bigoplus\limits_{i=1}^{s}Pe_i[-n_i]$$
for some integers $s>0$ and $n_1 > n_2 > \cdots > n_s$.
\end{thm}
\begin{cor}\label{derived equiv is Morita equiv for finite over center case}
Let $B := \End_{\D^\mathrm{b}(A)}(T)$. Then $B \cong \End_{A}(P)$, i.e., $B$ is Morita equivalent to $A$.
\end{cor}
Theorem \ref{tilting complex for finite over center case} and Corollary \ref{derived equiv is Morita equiv for finite over center case} apply to some class of Sklyanin algebras, see Corollary \ref{derived-equiv-Skl-alg}. The derived Picard group of $A$ is described in Proposition \ref{D-Pic-group}, which generalizes a result of Negron \cite{Negron17} about Azumaya algebras.
\section{Preliminaries}
Let $A$ be a ring, $\D^\mathrm{b}(A)$ be the bounded derived category of the (right) $A$-module category.
Let $T$ be a complex of $A$-modules, $\add(T)$ be the full subcategory of $\D^\mathrm{b}(A)$ consisting of objects that are direct summands of finite direct sums of copies of $T$, and $\End_{\D^\mathrm{b}(A)}(T)$ be the endomorphism ring of $T$. A complex is called {\it perfect}, if it is quasi-isomorphic to a bounded complex of finitely generated projective $A$-modules. $\mathcal{K}^{\mathrm{b}}(\text{proj-}A)$ is the full subcategory of $\D^{\mathrm{b}}(A)$ consisting of perfect complexes.
We first recall the definition of tilting complexes \cite{Rickard89}, which generalizes the notion of progenerators. For the theory of tilting complexes we refer the reader to \cite{Ye20}.
In the following, $A$ and $B$ are associative rings.
\begin{defn}
A complex $T \in \mathcal{K}^{\mathrm{b}}(\text{proj-}A)$ is called a {\it tilting complex} over $A$ if
\begin{enumerate}
\item $\add(T)$ generates $\mathcal{K}^{\mathrm{b}}(\text{proj-}A)$ as a triangulated category, and
\item $\Hom_{\D^\mathrm{b}(A)}(T, T[n]) = 0$ for each $n \neq 0$.
\end{enumerate}
\end{defn}
The following Morita theorem for derived categories is due to Rickard.
\begin{thm}\cite[Theorem 6.4]{Rickard89}\label{derived-equivalence}
The following are equivalent.
\begin{enumerate}
\item $\mathcal{D}^{\mathrm{b}}(A)$ and $\mathcal{D}^{\mathrm{b}}(B)$ are equivalent as triangulated categories.
\item There is a tilting complex $T$ over $A$ such that $\End_{\D^\mathrm{b}(A)}(T) \cong B$.
\end{enumerate}
\end{thm}
If $A$ and $B$ satisfy the equivalent conditions in Theorem \ref{derived-equivalence}, then $A$ is said to be {\it derived equivalent} to $B$.
The following theorem is also due to Rickard \cite[Theorem 1.6]{Ye99}, where $A$ and $B$ are two flat $k$-algebras over a commutative ring $k$.
\begin{thm}(Rickard)\label{two-side-tilting-complex}
Let $T$ be a complex in $\D^\mathrm{b}(A \otimes_k B^{\mathrm{op}})$. The following are equivalent:
(1) There exists a complex $T^{\vee} \in \D^\mathrm{b}(B \otimes_k A^{\mathrm{op}})$ and isomorphisms
$$T \Lt_{A} T^{\vee} \cong B \text{ in } \D^\mathrm{b}(B^e)\, \text{ and }\, T^{\vee} \Lt_{B} T \cong A \text{ in } \D^\mathrm{b}(A^e)$$
where $B^e =B \otimes_k B^{\mathrm{op}}$ and $A^e =A \otimes_k A^{\mathrm{op}}$.
(2) $T$ is a tilting complex over $A$, and the canonical morphism $B \to \Hom_{\D^\mathrm{b}(A)}(T,T)$ is an isomorphism in $\D^\mathrm{b}(B^e)$.
In this case, $T$ is called a {\it two-sided tilting complex} over $A$-$B$ relative to $k$.
\end{thm}
We record some results about tilting complexes here for convenience.
\begin{lem}\cite[Theorem 2.1]{Rickard91}\label{flat tensor products}
Let $A$ be an $R$-algebra and $S$ be a flat $R$-algebra where $R$ is a commutative ring. If $T$ is a tilting complex over $A$, then $T \otimes_R S$ is a tilting complex over $A \otimes_R S$.
\end{lem}
The following lemma follows directly from Lemma \ref{flat tensor products}.
\begin{lem}\cite[Proposition 2.6]{RouZi03}\label{decomposition of tilting complex by central idempotent}
Let $T$ be a tilting complex over $A$. If $0 \neq e \in A$ is a central idempotent, then $Te$ is a tilting complex over $Ae$.
\end{lem}
The following lemma is easy to verify.
\begin{lem}\label{til-comp-lem}
Let $T$ be a tilting complex over $A$. The following conditions are equivalent.
(1) $T \cong \bigoplus\limits_{i=1}^{s}P_i[-n_i]$ in $\D^\mathrm{b}(A)$, for some $A$-module $P_i$ such that $\bigoplus\limits_{i=1}^{s} P_i$ is a progenerator of $A$.
(2) $T$ is homotopy equivalent to $\bigoplus\limits_{i=1}^{s}P_i[-n_i]$ for some $A$-module $P_i$.
\end{lem}
By using a similar proof of \cite[Theorem 2.3]{Ye99} for two-sided tilting complexes (see also \cite{Ye10} or \cite{Ye20}), the following result for tilting complexes holds.
\begin{lem}\label{equivalent condition for tilting complex being a progenerator}
Let $T$ be a tilting complex over $A$, and $T^{\vee}:= \Hom_A(T, A)$. Let $n = \max\{ i \mid \mathrm{H}^i(T) \neq 0 \}$ and $ m = \max\{ i \mid \mathrm{H}^i(T^{\vee}) \neq 0 \}$. If $\mathrm{H}^n(T) \otimes_A \mathrm{H}^m(T^{\vee}) \neq 0$, then
$m=-n$ and $T \cong P[m]$ in $\D^\mathrm{b}(A)$ for some progenerator $P$ of $A$.
\end{lem}
\begin{proof}
Without loss of generality, we assume that
$$T:= \cdots \longrightarrow 0 \longrightarrow T^{-m} \longrightarrow \cdots \longrightarrow T^n \longrightarrow 0 \longrightarrow \cdots.$$
Since $T$ is a perfect complex of $A$-modules, $$\RHom_A(T,T) \cong T \Lt_{A} T^{\vee}.$$
Consider the bounded spectral sequence of the double complex $T \otimes_A T^{\vee}$ (see \cite[Lemma 14.5.1]{Ye20}). Then
$$\mathrm{H}^n(T) \otimes_A \mathrm{H}^{m}(T^{\vee}) \cong \mathrm{H}^{n+m}(T \Lt_{A} T^{\vee}) \cong \mathrm{H}^{n+m}(\RHom_A(T,T)).$$
Since $T$ is a tilting complex, $\Hom_{\D^\mathrm{b}(A)}(T,T[i]) = 0$ for all $i \neq 0$. It follows that $n+m = 0$.
So $T \cong T^n[-n]$.
By Lemma \ref{til-comp-lem}, the conclusion holds.
\end{proof}
A ring $A$ is called {\it local} if $A/J_A$ is a simple Artinian ring, where $J_A$ is the Jacobson radical of $A$. If both $M_A$ and ${}_AN$ are nonzero module over a local ring $A$, then $M \otimes_A N \neq 0$ by \cite[Lemma 14.5.6]{Ye20}.
\begin{prop}\cite[Theorem 2.11]{RouZi03}\label{derived equiv is Morita equiv for local rings}
Let $A$ be a local ring, $T$ be a tilting complex over $A$. Then, $T \cong P[-n]$ in $\D^\mathrm{b}(A)$, for some progenerator $P$ of $A$ and $n \in \mathbb{Z}$.
\end{prop}
\begin{proof}
It follows from Lemma \ref{equivalent condition for tilting complex being a progenerator}.
\end{proof}
Let $\Spec A$ (resp., $\Max A$) denote the prime (resp., maximal) spectrum of a ring $A$.
Let $R$ be a central subalgebra of $A$. Since the center of any prime ring is a domain, the quotient ring $R/(\mathfrak{P} \cap R)$ is a domain for any $\mathfrak{P} \in \Spec(A)$. Then there is a well defined map $\pi: \Spec A \to \Spec R, \; \mathfrak{P} \mapsto \mathfrak{P} \cap R$.
The following facts about the map $\pi$ are well known, see \cite{Bla73} for instance. For the convenience of readers, we give their proofs here.
\begin{lem}\label{Phi-lem}
Let $A$ be a ring, $R$ be a central subring of $A$. Suppose that $A$ is finitely generated as $R$-module.
\begin{enumerate}
\item For any primitive ideal $\mathfrak{P}$ of $A$, $\pi(\mathfrak{P})$ is a maximal ideal of $R$. In particular, $$\pi(\Max A) \subseteq \Max R.$$
\item If $\mathfrak{P} \in \Spec(A)$ and $\pi(\mathfrak{P})$ is a maximal ideal of $R$, then $\mathfrak{P}$ is a maximal ideal of $A$.
\item For any multiplicatively closed subset $\mathcal{S}$ of $R$, the prime ideals of $\mathcal{S}^{-1}A$ are in one-to-one correspondence ($\mathcal{S}^{-1} \mathfrak{P} \leftrightarrow \mathfrak{P}$) with the prime ideals of $A$ which do not meet $\mathcal{S}$.
\item $\pi: \Spec A \to \Spec R$ is surjective.
\item The Jacobson radical $J_R$ of $R$ is equal to $J_A \cap R$.
\item Let $\p$ be a prime ideal of $R$. Then $A_{\p}$ is a local ring if and only if there exists only one prime ideal $\mathfrak{P}$ of $A$ such that $\pi(\mathfrak{P}) = \p$.
\item If $R$ is a Jacobson ring (that is, $\forall$ $\p \in \Spec R$, $J_{R/\p} = 0$), then so is $A$.
\item If $\pi$ is injective, then $\pi(\mathcal{V}(I)) = \mathcal{V}(I \cap R)$. In this case, $\pi$ is a homeomorphism.
\end{enumerate}
\end{lem}
\begin{proof}
(1) Without loss of generality, we assume that $A$ is a primitive ring. Let $V$ be a faithful simple $A$-module. For any $0 \neq x \in R$, since $Vx$ is also a non-zero $A$-module, $Vx = V$. Because $V$ is a finitely generated faithful $R$-module, $x$ is invertible in $R$. It follows that $R$ is a field.
(2) Since the prime ring $A/\mathfrak{P}$ is finite-dimensional over the field $R/(\mathfrak{P}\cap R)$, the quotient ring $A/\mathfrak{P}$ is a simple ring, that is, $\mathfrak{P}$ is a maximal ideal of $A$.
(3) The proof is similar to the commutative case.
(4) Suppose $\p \in \Spec R$. It follows from (3) that there exists a prime ideal $\mathfrak{P}$ of $A$ such that $\mathfrak{P}A_{\p}$ is a maximal ideal of $A_{\p}$. By (1), $\mathfrak{P}A_{\p} \cap R_{\p}$ is a maximal ideal of $R_{\p}$. Hence $\p R_{\p} \subseteq \mathfrak{P}A_{\p}$. It follows that $\mathfrak{P} \cap R = \p$.
(5) By (1), $J_R \subseteq J_A \cap R$. On the other hand, by (4) and (2), $J_A \cap R \subseteq J_R$.
(6) It follows from (3) and (4) that $\Max A_{\p} = \{ \mathfrak{P}A_{\p} \mid \mathfrak{P} \cap R = \p, \, \mathfrak{P} \in \Spec A \}$. Then the conclusion in (6) follows.
(7) Without loss of generality, we may assume that $A$ is a prime ring. Set $\mathcal{S} = R\setminus \{0\}$.
Then $\mathcal{S}^{-1}A$ is also a prime ring which is finite-dimensional over the field $\mathcal{S}^{-1}R$. Hence $\mathcal{S}^{-1}A$ is an Artinian simple ring. Since $R$ is a Jacobson ring, it follows from (5) that $J_A \cap R = J_R = 0$. Hence $\mathcal{S}^{-1}J_{A} \neq \mathcal{S}^{-1}A$, and so $\mathcal{S}^{-1}J_{A} = 0$. Hence $J_A = 0$, as every element in $\mathcal{S}$ is regular in the prime ring $A$. Therefore $A$ is a Jacobson ring.
(8) Suppose $I$ is an ideal of $A$. Obviously, $\pi(\mathcal{V}(I)) \subseteq \mathcal{V}(I \cap R)$. On the other hand, let $\p \in \mathcal{V}(I \cap R) := \{ \p \in \Spec R \mid I \cap R \subseteq \p \}$. It follows from the assumption that $\pi$ is injective and (6) that $A_{\p}$ is a local ring. Hence there exists a prime ideal $\mathfrak{P}$ of $A$ such that $\mathfrak{P}A_{\p}$ is the only maximal ideal of $A_{\p}$ and $\p = \mathfrak{P} \cap R$. Since $I \cap (R \setminus \p) = \emptyset$, $IA_{\p} \subseteq \mathfrak{P} A_{\p}$ and $I \subseteq \mathfrak{P}$. Then $\p = \pi(\mathfrak{P}) \in \pi(\mathcal{V}(I))$. Hence $\pi(\mathcal{V}(I)) \supseteq \mathcal{V}(I \cap R)$. So $\pi$ is a closed map. Since $\pi$ is bijective, it is a homeomorphism.
\end{proof}
By Lemma \ref{Phi-lem}, we have the following two results, which describe the condition when $\pi$ is a homeomorphism.
\begin{lem}\label{spec-homeo}
$\pi$ is a homeomorphism if and only if $A_{\p}$ is a local ring for any $\p \in \Spec R$.
\end{lem}
\begin{prop}\label{Max-Prime}
Suppose that $R$ is a Jacobson ring. If the restriction map $\pi|_{\Max A}$ is injective, then $\pi$ is a homeomorphism.
\end{prop}
\begin{proof}
If $\pi$ is not injective, then there exist two different prime ideals $\mathfrak{P}$ and $\mathfrak{P}'$ of $A$ such that $\mathfrak{P} \cap R = \mathfrak{P}' \cap R$. By Lemma \ref{Phi-lem} (7), $A$ is a Jacobson ring. Hence, there exists a maximal ideal $\mathfrak{M}$ of $A$ such that $\mathfrak{P} \subseteq \mathfrak{M}$ and $\mathfrak{P}' \nsubseteq \mathfrak{M}$, or the other way round. Without loss of generality, assume
$\mathfrak{P} \subseteq \mathfrak{M}$ and $\mathfrak{P}' \nsubseteq \mathfrak{M}$. Then $\pi(\mathfrak{M}) = \mathfrak{M} \cap R \supseteq \mathfrak{P} \cap R = \mathfrak{P}' \cap R$.
By applying Lemma \ref{Phi-lem} (2) and (4) to the ring $A/{\mathfrak{P}'}$ with the central subalgebra $R/{(\mathfrak{P}' \cap R)}$, we see that there exists a maximal ideal $\mathfrak{M}'$ of $A$ such that $\mathfrak{P}' \subseteq \mathfrak{M}'$ and $\mathfrak{M}' \cap R = \mathfrak{M} \cap R$. This contradicts the hypothesis that $\pi|_{\Max(A)}$ is injective. So $\pi$ is a homeomorphism.
\end{proof}
\section{Some Derived equivalences imply Morita equivalences}
If $A$ is a local ring such that $A/{J_A}$ is not a skew-field and $A$ is a domain, then $A$ is not semiperfect. The localizations of the Sklyanin algebra considered in Lemma \ref{S-domain} at maximal ideals of its center are examples of this kind. The following results are needed in the proof of Theorem \ref{tilting complex for finite over center case}.
\begin{lem}\label{lemma1}
Let $A$ be a local ring. Then there exists an idempotent element $e$ in $A$, such that any finitely generated projective right $A$-module is a direct sum of finitely many copies of $eA$.
\end{lem}
\begin{proof}
For any finitely generated projective $A$-modules $P$, $Q$, and any surjective $A$-module morphism $f: P/PJ_A \twoheadrightarrow Q/QJ_A$, there exists a surjective $A$-module morphism $\widetilde{f}$ such that the following diagram commutes.
$$\xymatrix{
P \ar@{-->>}[d]^{\widetilde{f}} \ar@{->>}[r] & P/PJ_A \ar@{->>}[d]^{f}\\
Q \ar@{->>}[r] & Q/QJ_A
}$$
It follows that $Q$ is a direct summand of $P$.
There exists a finitely generated projective $A$-module $Q\neq 0$ such that the length of $Q/QJ_A$ is the smallest possible.
By the above fact and the division algorithm, any finitely generated projective $A$-module is a direct sum of finitely many copies of $Q$. In particular, $Q$ is a direct summand of the $A$-module $A$. Hence, there exists an idempotent element $e$ in $A$ such that $Q \cong eA$ as $A$-modules.
\end{proof}
\begin{prop}\label{a open set of fp mod in spec} Let $A$ be a ring with a central subalgebra $R$ such that $A$ is finitely presented as an $R$-module. Suppose that $A_{\p}$ is a local ring for all $\p \in \Spec R$.
If $M$ is a finitely presented $A$-module, then
$$U:= \{ \p \in \Spec R \mid M_{\p} \text{ is projective over } A_{\p} \}$$
is an open subset in $\Spec R$.
\end{prop}
\begin{proof}
Suppose $\p \in U$. Then $M_{\p}$ is a finitely generated projective $A_{\p}$-module. Since $A_{\p}$ is a local ring, by Lemma \ref{lemma1}, there exist $x \in A$ and $s \in R\setminus\p$, such that
$$xs^{-1} \in A_{\p} \text{ is an idempotent element and } M_{\p} \cong (xs^{-1}A_{\p})^{\oplus l}$$
for some $l \in \mathbb{N}$.
Hence there exists $t \in R \setminus \p$ such that $(x^2-xs)st = 0$, and $xs^{-1}$ is an idempotent element in $A_{st} = A[(st)^{-1}]$. So $xs^{-1}A_{st}$ is a projective $A_{st}$-module.
There is an $A_{st}$-module morphism $g: (xs^{-1}A_{st})^{\oplus l} \to M_{st}$ such that $g_{\p}: (xs^{-1}A_{\p})^{\oplus l} \to M_{\p}$ is the prescribed isomorphism.
Since $M_A$ and $A_R$ are finitely presented modules, $M$ is a finitely presented $R$-module.
So $M_{st}$ is a finitely presented $R_{st}$-module.
By \cite[Proposition II.5.1.2]{Bourbaki},
there exists $u \in R \setminus \p$ such that $gu^{-1}: (xs^{-1}A_{stu})^{\oplus l} \to M_{stu}$ is an isomorphism.
It follows that $M_{stu}$ is a projective $A_{stu}$-module.
Hence $X_{stu} := \{\mathfrak{q} \in \Spec R \mid stu \notin \mathfrak{q} \}$ is contained in $U$. Obviously, $\p \in X_{stu}$.
So $U$ is an open subset of $\Spec R$.
\end{proof}
Now we are ready to prove Theorem \ref{tilting complex for finite over center case}.
The following proof is nothing but an adaption of the arguments of Yekutieli \cite[Theorem 1.9]{Ye10} and Negron \cite[Proposition 3.3]{Negron17} to this situation.
\begin{proof}[Proof of Theorem \ref{tilting complex for finite over center case}]
By assumption, $\Spec A \to \Spec R, \; \mathfrak{P} \mapsto \mathfrak{P} \cap R$ is a homeomorphism. It follows from Lemma \ref{spec-homeo} that $A_{\p}$ is a local ring for any prime ideal $\p$ of $R$.
There exist integers $s>0$ and $n_1 > n_2 > \cdots > n_s$ such that $\mathrm{H}^{i}(T) = 0$ for all $i \neq n_1, \cdots, n_s$, as the tilting complex $T$ is bounded.
Obviously, $\mathrm{H}^{n_1}(T)$ is a finitely generated $R$-module. In fact, it is a finitely presented $R$-module by assumption. Hence the support set $$\Supp(\mathrm{H}^{n_1}(T)) = \mathcal{V}(\Ann_R(\mathrm{H}^{n_1}(T)))$$
is closed in $\Spec R$.
By Lemma \ref{flat tensor products}, $T_{\p} := T \otimes_R R_{\p}$ is a tilting complex over $A_{\p}$. If $\mathrm{H}^{n_1}(T)_{\p} \cong \mathrm{H}^{n_1}(T_{\p})$ is non-zero, then $\mathrm{H}^{n_1}(T)_{\p}$ is $A_{\p}$-projective by Proposition \ref{derived equiv is Morita equiv for local rings}. Hence
$$\Supp(\mathrm{H}^{n_1}(T)) = \{ \p \in \Spec R \mid \mathrm{H}^{n_1}(T)_{\p} \cong \mathrm{H}^{n_1}(T_{\p}) \text{ is a non-zero projective } A_{\p}\text{-module} \}.$$
Since $\mathrm{H}^{n_1}(T)$ is a finitely presented $A$-module, $\mathrm{H}^{n_1}(T)$ is a projective $A$-module, and
$\Supp(\mathrm{H}^{n_1}(T))$ is an open set by Proposition \ref{a open set of fp mod in spec}.
By \cite[page 406, Theorem 7.3]{Ja}, there exists an idempotent $e_1 \in R$ such that $\Supp(\mathrm{H}^{n_1}(T)) = X_{e_1}:= \{\mathfrak{q} \in \Spec R \mid e_1 \notin \mathfrak{q} \}$. It follows that $(\mathrm{H}^{n_1}(T)/\mathrm{H}^{n_1}(T)e_1)_{\p} = 0$ for all $\p \in \Spec R$. Hence
$\mathrm{H}^{n_1}(T)(1-e_1) = 0$. Then
$$\mathrm{H}^{j}(T(1-e_1)) = \begin{cases}
0, & j \neq n_2,\cdots,n_s \\
\mathrm{H}^{n_i}(T)(1-e_1), & j = n_2,\cdots,n_s
\end{cases}.$$
By Lemma \ref{decomposition of tilting complex by central idempotent}, $T(1-e_1)$ is a tilting complex over $A(1-e_1)$ and $Te_1$ is a tilting complex over $Ae_1$. It follows from Proposition \ref{derived equiv is Morita equiv for local rings} that $Te_1$ is homotopy equivalent to $\mathrm{H}^{n_1}(T)[-n_1]$.
By induction on $s$, there is a complete set of orthogonal idempotents $e_1, \cdots, e_s$ in $R$ such that $Te_i$ is homotopy equivalent to $\mathrm{H}^{n_i}(T)[-n_i]$ for each $i$. Then $T = \bigoplus\limits_{i=1}^{s} Te_i = \bigoplus\limits_{i=1}^{s} \mathrm{H}^{n_i}(T)[-n_i]$. It follows from Lemma \ref{til-comp-lem} that $P:= \bigoplus\limits_{i=1}^{s} \mathrm{H}^{n_i}(T)$ is a progenerator of $A$ and $T \cong \bigoplus\limits_{i=1}^{s}Pe_i[-n_i]$ in $\D^\mathrm{b}(A)$.
\end{proof}
\begin{cor}
If $\Spec R$ is connected, then $T \cong P[-n]$ in $\D^\mathrm{b}(A)$.
\end{cor}
\begin{cor}\cite{Negron17}\label{tilting complex over Azumaya algebra}
Let $A$ be an Azumaya algebra. Then any tilting complex over $A$ has the form $P[n]$, where $P$ is a progenerator of $A$. If there exists a ring $B$ which is derived equivalent to $A$, then $B$ is Morita equivalent to $A$. In particular, $B$ is also an Azumaya algebra.
\end{cor}
In the following we provide some non-Azumaya algebras which satisfy the conditions in Theorem \ref{tilting complex for finite over center case}.
\begin{defn}\cite{ATV90}
Let $k$ be an algebraically closed field of characteristic $0$. The {\it three-dimensional Sklyanin algebras} $S = S(a, b, c)$ are $k$-algebras generated by three noncommuting variables $x,y,z$ of degree $1$, subject to the relations
$$axy+byx+cz^2 = ayz+bzy+cx^2 = azx+bxz+cy^2,$$
for $[a:b:c] \in \mathbb{P}^2$ such that $(3abc)^3 \neq (a^3+b^3+c^3)^3$.
The point scheme of the three-dimensional Sklyanin algebras $S$ is given by the elliptic curve
$$E:= \mathcal{V}( abc(X^3+Y^3+Z^3) - (a^3+b^3+c^3)XYZ ) \subset \mathbb{P}^2.$$
Let us choose the point $[1:-1:0]$ on $E$ as the origin, and let $\sigma$ denote the automorphism given by translation by the point $[a:b:c]$ in the group law on the elliptic curve $E$, with
$$\sigma[x:y:z] = [acy^2-b^2xz:bcx^2-a^2yz:abz^2-c^2xy].$$
\end{defn}
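As a computational sanity check (our own addition, not taken from \cite{ATV90}; the parameter choice $[a:b:c]=[1:2:3]$ and the use of SymPy are illustrative), one can verify that the displayed map $\sigma$ carries the cubic $E$ to itself, i.e. that $E\circ\sigma$ is divisible by $E$ as a polynomial:
\begin{verbatim}
# For [a:b:c] = [1:2:3], check that sigma maps the point scheme E to itself.
from sympy import symbols, expand, div

X, Y, Z = symbols('X Y Z')
a, b, c = 1, 2, 3
assert (3*a*b*c)**3 != (a**3 + b**3 + c**3)**3       # admissibility condition

def E(x, y, z):                                      # defining cubic of the point scheme
    return a*b*c*(x**3 + y**3 + z**3) - (a**3 + b**3 + c**3)*x*y*z

sX = a*c*Y**2 - b**2*X*Z                             # components of sigma[X:Y:Z]
sY = b*c*X**2 - a**2*Y*Z
sZ = a*b*Z**2 - c**2*X*Y

q, r = div(expand(E(sX, sY, sZ)), E(X, Y, Z), X, Y, Z)
print(r)                                             # expected output: 0
\end{verbatim}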
\begin{lem} \label{S-domain}\cite{ATV90, ATV91}
(1) $S$ is a Noetherian domain.
(2) $S$ is a finite module over its center if and only if the automorphism $\sigma$ has finite order.
\end{lem}
Recently, Walton, Wang and Yakimov (\cite{WWY19}) endowed the three-dimensional Sklyanin algebra $S$, which is a finite module over its center $Z$, with a Poisson $Z$-order structure (in the sense of Brown-Gordon \cite{BrownGordon03}). By using the Poisson geometry of $S$, they analyzed all the irreducible representations of $S$. In particular, they proved the following result.
\begin{thm}\cite[Theorem 1.3.(4)]{WWY19}\label{spec-Skl-alg}
If the order of $\sigma$ is finite and coprime with $3$, then $S/{\m S}$ is a local ring for any $\m \in \Max Z$.
\end{thm}
Here is a corollary following Theorem \ref{spec-Skl-alg} and Theorem \ref{tilting complex for finite over center case}.
\begin{cor}\label{derived-equiv-Skl-alg}
If the order of $\sigma$ is finite and coprime with $3$, then every tilting complex over $S$ has the form $P[n]$, where $P$ is a progenerator of $S$ and $n \in \mathbb{Z}$. Furthermore, any ring which is derived equivalent to $S$, is Morita equivalent to $S$.
\end{cor}
\begin{proof}
It follows from the Artin-Tate lemma that $Z$ is a finitely generated commutative $k$-algebra and so it is a Jacobson ring. Obviously, $S$ is a finitely presented $Z$-module. By Theorem \ref{spec-Skl-alg}, the restriction map $\pi|_{\Max(S)}$ is injective. It follows from Proposition \ref{Max-Prime} that $\pi: \Spec S \to \Spec Z$ is a homeomorphism. Then, the conclusions follow from Theorem \ref{tilting complex for finite over center case} and Corollary \ref{derived equiv is Morita equiv for finite over center case}.
\end{proof}
\section{Derived Picard groups}
Let $A$ be a projective $k$-algebra over a commutative ring $k$. The derived Picard group $\DPic(A)$ of an algebra $A$ was introduced by Yekutieli \cite{Ye99} and Rouquier-Zimmermann \cite{RouZi03} independently. In fact, $A$ can be assumed to be flat over $k$, see the paragraph after Definition 1.1 in \cite{Ye10}.
\begin{defn}\label{defn of DPic}
The {\it derived Picard group} of $A$ relative to $k$ is
$$\DPic_k(A):= \frac{\{ \text{two-sided tilting complexes over } A \text{-} A \text{ relative to } k \}}{\text{ isomorphisms }},$$
where the isomorphism is in $\D^\mathrm{b}(A\otimes_kA^{\mathrm{op}})$. The class of a tilting complex $T$ in $\DPic_k(A)$ is denoted by $[T]$. The group multiplication is induced by $- \Lt_A -$, and $[A]$ is the unit element.
\end{defn}
Let $T$ be a two-sided tilting complex over $A$-$A$ relative to $k$. For any $z \in Z(A)$, there is an endomorphism of $T$ induced by the multiplication by $z$ on each component of $T$ (see \cite[Proposition 9.2]{Rickard89} or \cite[Proposition 6.3.2]{KonZimm98}). This defines a $k$-algebra automorphism of $Z(A)$, which is denoted by $f_T$.
The assignment $\Phi: \DPic_k(A) \to \Aut_k(Z(A)), [T] \mapsto f_T$ is a group morphism, see the paragraph in front of Definition 7 in \cite{Zim96} or \cite[Lemma 5.1]{Negron17}.
Let us first recall some definitions in \cite[Section 3.1]{Negron17}. Suppose $n \in \Gamma(\Spec Z(A), \underline{\mathbb{Z}})$, which consists of continuous functions from $\Spec Z(A)$ to the discrete space $\Z$. For any $i \in \mathbb{Z}$, $n^{-1}(i)$ is both an open and closed subset of $\Spec Z(A)$. Since $\Spec Z(A)$ is quasi-compact, there exists a set of complete orthogonal idempotents $e_{n_1}, \cdots, e_{n_s}$ of $Z(A)$ such that
$$n^{-1}(i) = \begin{cases}
X_{e_i}, & i = n_j, \, j = 1, \cdots, s \\
\emptyset, & i \neq n_1, \cdots, n_s.
\end{cases}$$
Set $e_i = 0$ for any $i \neq n_1, \cdots, n_s$, and $X_{e_i} = \emptyset$.
For any complex $T$ of $Z(A)$-modules, the shift $\Sigma^n T$ is defined by
$$\bigoplus_{i \in \Z, \, n^{-1}(i) = X_{e_i}} Te_{i} [-i].$$
Let $\Pic_{Z(A)}(A)$ be the Picard group of $A$ over $Z(A)$. Clearly $\Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$ can be viewed as a subgroup of $\DPic_{Z(A)}(A)$ via $(n, [P]) \mapsto [\Sigma^n P]$.
The following result is proved in \cite{Negron17} under the assumption that $A$ is an Azumaya algebra.
\begin{prop}\label{D-Pic-group}
Let $A$ be an $k$-algebra which is a finitely generated projective module over its center $Z(A)$. Suppose that
$\Spec A$ is canonically homeomorphic to $\Spec Z(A)$.
Then
(1) there is an exact sequence of groups
\begin{equation}\label{ex-seq-DPic}
1 \longrightarrow \DPic_{Z(A)}(A) \longrightarrow \DPic_{k}(A) \stackrel{\Phi}{\longrightarrow} \Aut_k(Z(A)).
\end{equation}
(2) $\DPic_{Z(A)}(A) = \Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$.
\end{prop}
\begin{proof}
It is obvious that $\Ker \Phi = \DPic_{Z(A)}(A)$. Hence \eqref{ex-seq-DPic} is an exact sequence of groups. Next we prove $\DPic_{Z(A)}(A) = \Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$.
Let $T$ be a two-sided tilting complex over $A$ relative to $k$. By Theorem \ref{tilting complex for finite over center case}, there exists a global section $n \in \Gamma(\Spec Z(A), \underline{\mathbb{Z}})$ and an $A$-progenerator $P$, such that $T \cong \Sigma^nP$ in $\D^\mathrm{b}(A)$.
It follows from the fact that
$$A \cong \End_{\D^\mathrm{b}(A)}(T) \cong \End_{\D^\mathrm{b}(A)}(\Sigma^nP) = \End_{\D^\mathrm{b}(A)}(P) = \End_A(P)$$
that $P$ is an invertible $A$-$A$-bimodule with central $Z(A)$-action. By \cite[Proposition 2.3]{RouZi03}, there exists an automorphism $\sigma \in \Aut_k(A)$ such that $T \cong {^{\sigma} (\Sigma^nP)}$ in $\D^\mathrm{b}(A^e)$. If $\Phi([T]) = \id_{Z(A)}$, then $\sigma \in \Aut_{Z(A)}(A)$. Hence $T \cong \Sigma^n({^{\sigma}P})$ in $\D^\mathrm{b}(A^e)$. It follows that $\DPic_{Z(A)}(A) = \Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$.
\end{proof}
Given any algebra automorphism $\sigma$ of $Z(A)$, there is a $k$-algebra $B$ with $Z(B)=Z(A)$ and a $k$-algebra isomorphism $\widetilde{\sigma}: A \to B$, such that $\sigma=\widetilde{\sigma}|_{Z(A)}$. This determines uniquely an isomorphism class of $B$ as $Z(A)$-algebra
(see \cite[page 9]{F} for the definition).
Then it induces an $\Aut_k(Z(A))$-action on the $Z(A)$-algebras which are isomorphic to $A$ as $k$-algebras. The image of $\Phi$ is just the stabilizer $\Aut_k(Z(A))_{[A]}$ of the derived equivalent class (relative to $Z(A)$) of $A$.
\begin{rk}
Suppose that $A$ is an Azumaya algebra.
(1) For any $k$-algebra $B$, $B$ is derived equivalent to $A$ if and only if $B$ is Morita equivalent to $A$. So $\Aut_k(Z(A))_{[A]}$ is also the stabilizer of the Brauer class of $A$ just as in \cite{Negron17}.
(2) Notice that $\DPic_{Z(A)}(A) \cong \DPic_{Z(A)}(Z(A)) \cong \Gamma(\Spec Z(A), \underline{\mathbb{Z}}) \times \Pic_{Z(A)}(A)$ is an abelian group. Hence $\DPic_{k}(A)$ is a group extension of $\Aut_k(Z(A))_{[A]}$ by $\DPic_{Z(A)}(A)$ \cite[Theorem 1.1]{Negron17}. If $A$ is not an Azumaya algebra, $\DPic_{Z(A)}(A)$ may not be abelian.
\end{rk}
\end{document}
\begin{document}
\title[Derivations on algebras of measurable operators]{Innerness of continuous derivations
on algebras of measurable operators affiliated with finite von
Neumann algebras}
\author{Shavkat Ayupov and Karimbergen Kudaybergenov}
\address[Shavkat Ayupov]{Institute of
Mathematics National University of
Uzbekistan,
100125 Tashkent, Uzbekistan
and
the Abdus Salam International Centre
for Theoretical Physics (ICTP),
Trieste, Italy}
\email{sh$_{-}[email protected]}
\address[Karimbergen Kudaybergenov]{Department of Mathematics, Karakalpak state
university, Nukus 230113, Uzbekistan.} \email{[email protected]}
\maketitle
\begin{abstract}
This paper is devoted to derivations on the algebra $S(M)$ of all
measurable operators affiliated with a finite von Neumann algebra
$M.$ We prove that if $M$ is a finite von Neumann algebra with
a faithful normal semi-finite trace $\tau$, equipped with the
locally measure topology $t,$ then every $t$-continuous
derivation $D:S(M)\rightarrow S(M)$ is inner. A similar result is valid
for derivations on the algebra $S(M,\tau)$ of $\tau$-measurable operators
equipped with the measure topology $t_{\tau}$.
\end{abstract}
\section{Introduction}
Given an algebra $\mathcal{A},$ a linear operator
$D:\mathcal{A}\rightarrow \mathcal{A}$ is called a
\textit{derivation}, if $D(xy)=D(x)y+xD(y)$ for all $x, y\in
\mathcal{A}$ (the Leibniz rule). Each element $a\in \mathcal{A}$
implements a derivation $D_a$ on $\mathcal{A}$ defined as
$D_a(x)=[a, x]=ax-xa,$ $x\in \mathcal{A}.$ Such derivations $D_a$
are said to be \textit{inner derivations}. If the element $a,$
implementing the derivation $D_a,$ belongs to a larger algebra
$\mathcal{B}$ containing $\mathcal{A},$ then $D_a$ is called
\textit{a spatial derivation} on $\mathcal{A}.$
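As a finite-dimensional illustration (our own; it plays no role in the arguments below), the Leibniz rule for an inner derivation $D_a(x)=[a,x]$ can be checked numerically on the matrix algebra $M_n(\mathbb{C})$, where every derivation is in fact inner:
\begin{verbatim}
# Numerical check of the Leibniz rule for the inner derivation D_a(x) = a x - x a.
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = lambda x: a @ x - x @ a

x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
print(np.allclose(D(x @ y), D(x) @ y + x @ D(y)))   # True: Leibniz rule holds
\end{verbatim}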
One of the main problems in the theory of derivations is to prove
the automatic continuity, ``innerness'' or ``spatiality'' of
derivations, or to show the existence of non-inner and
discontinuous derivations on various topological algebras. In
particular, it is a general algebraic problem to find algebras
which admit only inner derivations. Examples of algebra for which
any derivation is inner include:
\begin{itemize}
\item finite dimensional simple central algebras (see \cite[p.\ 100]{Her});
\item simple unital $C^{\ast}$-algebras (see the main theorem of \cite{Sak68});
\item the algebras $B(X),$ where $X$ is a Banach
space (see \cite[Corollary 3.4]{Che});
\item von Neumann algebras (see \cite[Theorem 1]{Sak66}).
\end{itemize}
A related problem is:
\begin{quotation}
Given an algebra $\mathcal{A},$ is there an algebra $\mathcal{B}$
containing $\mathcal{A}$ as a subalgebra such that any derivation
of $\mathcal{B}$ is inner and any derivation of the algebra
$\mathcal{A}$ is spatial in $\mathcal{B}$?
\end{quotation}
The following are some examples for which the answer is positive:
\begin{itemize}
\item $C^{\ast}$-algebras (see \cite[Theorem 4]{Kad} or \cite[Theorem 2]{Sak66});
\item standard operator algebras on a Banach space $X$, i.e.\
subalgebras of $B(X)$ containing all finite rank operators (see
\cite[Corollary~3.4]{Che}).
\end{itemize}
In \cite{Alb2} and \cite{AK1}, derivations on various subalgebras
of the algebra $LS(M)$ of locally measurable operators with
respect to a von Neumann algebra $M$ have been considered. A
complete description of derivations has been obtained in the case
when $M$ is of type I and III. Derivations on algebras of
measurable and locally measurable operators, including the rather non-trivial
commutative case, have been studied by many authors
\cite{Alb2, AK2, AK1, Ber, BPS, Ber2, Ber3, Ber4}. A comprehensive
survey of recent results concerning derivations on various
algebras of unbounded operators affiliated with von Neumann
algebras can be found in \cite{AK2}.
If we consider the algebra $S(M)$ of all measurable
operators affiliated with a type III von Neumann algebra $M$, then it is clear that
$S(M)=M$. Therefore from the results of \cite{Alb2} it follows that
for type I$_\infty$ and type III von Neumann algebras $M$ every derivation on $S(M)$ is
automatically inner and, in particular, is continuous in the local
measure topology.
The problem of description of the structure of
derivations in the case of type II algebras has been open so far
and seems to be rather difficult.
In this connection several open problems concerning innerness and
automatic continuity of derivations on the algebras $S(M)$ and
$LS(M)$ for type II von Neumann algebras have been posed in
\cite{AK2}. First positive results in this direction were recently
obtained in \cite{Ber2, Ber4}, where automatic continuity has been
proved for derivations on
algebras of $\tau$-measurable and locally
measurable operators affiliated with properly infinite von Neumann algebras.
Another problem in \cite[Problem 3]{AK2} asks the following
question:
Let $M$ be a type II von Neumann algebra with a faithful normal
semi-finite trace $\tau.$ Consider the algebra $S(M)$ (respectively $LS(M)$) of all
measurable (respectively locally measurable) operators affiliated with $M$ and equipped with the
locally measure topology $t.$ Is every $t$-continuous derivation
$D:S(M)\rightarrow S(M)$ (respectively, $D:LS(M)\rightarrow LS(M)$) necessarily inner?
In the present paper we suggest a solution of this problem for
type II$_1$ von Neumann algebras (in this case $LS(M)=S(M)$). Namely, we
prove that if $M$ is a finite von Neumann algebra and
$D:S(M)\rightarrow S(M)$ is a $t$-continuous derivation then $D$
is inner. A similar result is proved
for derivations on the algebra $S(M,\tau)$ of all $\tau$-measurable operators
equipped with the measure topology $t_{\tau}$.
\section{Algebras of measurable operators}
Let $B(H)$ be the $\ast$-algebra of all bounded linear operators
on a Hilbert space $H,$ and let $\textbf{1}$ be the identity
operator on $H.$ Consider a von Neumann algebra $M\subset B(H)$
with the operator norm $\|\cdot\|$ and with a faithful normal
semi-finite trace $\tau.$ Denote by $P(M)=\{p\in M:
p=p^2=p^\ast\}$ the lattice of all projections in $M.$
A linear subspace $\mathcal{D}$ in $H$ is said to be
\emph{affiliated} with $M$ (denoted as $\mathcal{D}\eta M$), if
$u(\mathcal{D})\subset \mathcal{D}$ for every unitary $u$ from
the commutant
$$M'=\{y\in B(H):xy=yx, \;\forall x\in M\}$$ of the von Neumann algebra $M.$
A linear operator $x: \mathcal{D}(x)\rightarrow H,$ where the
domain $\mathcal{D}(x)$ of $x$ is a linear subspace of $H,$ is
said to be \textit{affiliated} with $M$ (denoted as $x\eta M$)
if $\mathcal{D}(x)\eta M$ and $u(x(\xi))=x(u(\xi))$
for all $\xi\in
\mathcal{D}(x)$ and for every unitary $u\in M'.$
A linear subspace $\mathcal{D}$ in $H$ is said to be
\textit{strongly dense} in $H$ with respect to the von Neumann
algebra $M,$ if
1) $\mathcal{D}\eta M;$
2) there exists a sequence of projections $\{p_n\}_{n=1}^{\infty}$
in $P(M)$ such that $p_n\uparrow\textbf{1},$ $p_n(H)\subset
\mathcal{D}$ and $p^{\perp}_n=\textbf{1}-p_n$ is finite in $M$
for all $n\in\mathbb{N}$.
A closed linear operator $x$ acting in the Hilbert space $H$ is
said to be \textit{measurable} with respect to the von Neumann
algebra $M,$ if
$x\eta M$ and $\mathcal{D}(x)$ is strongly dense in $H.$
Denote by $S(M)$ the set of all linear operators on $H,$ measurable with
respect to the von Neumann algebra $M.$ If $x\in S(M),$
$\lambda\in\mathbb{C},$ where $\mathbb{C}$ is the field of
complex numbers, then $\lambda x\in S(M)$ and the operator
$x^\ast,$ adjoint to $x,$ is also measurable with respect to $M$
(see \cite{Seg}). Moreover, if $x, y \in S(M),$ then the operators
$x+y$ and $xy$ are defined on dense subspaces and admit closures
that are called, correspondingly, the strong sum and the strong
product of the operators $x$ and $y,$ and are denoted by
$x\stackrel{.}+y$ and $x \ast y.$ It was shown in \cite{Seg} that
$x\stackrel{.}+y$ and $x \ast y$ belong to $S(M)$ and these
algebraic operations make $S(M)$ a $\ast$-algebra with the
identity $\textbf{1}$ over the field $\mathbb{C}.$ Here, $M$ is a
$\ast$-subalgebra of $S(M).$ In what follows, the strong sum and
the strong product of operators $x$ and $y$ will be denoted in
the same way as the usual operations, by $x+y$ and $x y.$
It is clear that if the von Neumann algebra $M$ is finite then every linear operator
affiliated with $M$ is measurable and, in particular, a self-adjoint operator is
measurable with respect to $M$ if and only if all its
spectral projections belong to $M$.
Let $\tau$ be a faithful normal semi-finite trace on
$M.$ We recall that a closed linear operator
$x$ is said to be $\tau$\textit{-measurable} with respect to the von Neumann algebra
$M,$ if $x\eta M$ and $\mathcal{D}(x)$ is
$\tau$-dense in $H,$ i.e. $\mathcal{D}(x)\eta M$ and given $\varepsilon>0$
there exists a projection $p\in M$ such that $p(H)\subset\mathcal{D}(x)$
and $\tau(p^{\perp})<\varepsilon.$
Denote by $S(M,\tau)$ the set of all $\tau$-measurable operators affiliated with $M.$
Note that if the trace $\tau$ is finite then $S(M,\tau)=S(M).$
Consider the topology $t_{\tau}$ of convergence in measure or \textit{measure topology}
on $S(M, \tau),$ which is defined by
the following neighborhoods of zero:
$$V(\varepsilon, \delta)=\{x\in S(M, \tau): \exists\, e\in P(M),
\tau(e^{\perp})<\delta, xe\in
M, \|xe\|<\varepsilon\},$$ where $\varepsilon, \delta$ are
positive numbers.
It is well-known \cite{Nel} that $M$ is $t_\tau$-dense in $S(M, \tau)$
and $S(M, \tau)$ equipped with the measure topology is a complete
metrizable topological $\ast$-algebra.
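For orientation, a standard commutative illustration: if $M=L^\infty(0,1)$
acts on $H=L^2(0,1)$ by multiplication and $\tau(f)=\int_0^1f\,dm,$ then
$S(M,\tau)=S(M)$ can be identified with the $\ast$-algebra of all (classes
of) complex measurable functions on $(0,1)$ which are finite almost
everywhere, and $t_\tau$ becomes the usual topology of convergence in
measure.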
Let $M$ be a finite von Neumann algebra with a faithful normal
semi-finite trace $\tau.$ Then there exists a family
$\{z_i\}_{i\in I}$ of mutually orthogonal central projections in
$M$ with $\bigvee\limits_{i\in I}z_i=\mathbf{1}$ and such that
$\tau(z_i)<+\infty$ for every $i\in I$ (such family exists
because $M$ is a finite algebra). Then the algebra $S(M)$ is
$\ast$-isomorphic to the algebra $\prod\limits_{i\in I}S(z_iM)$
(with the coordinate-wise operations and involution), i.e.
$$
S(M)\cong\prod\limits_{i\in I}S(z_iM)
$$ ($\cong$ denoting
$\ast$-isomorphism of algebras) (see \cite{Mur}).
This property implies that given any family $\{z_i\}_{i\in I}$ of
mutually orthogonal central projections in $M$ with
$\bigvee\limits_{i\in I}z_i=\textbf{1}$ and a family of elements
$\{x_i\}_{i\in I}$ in $S(M)$ there exists a unique element $x\in
S(M)$ such that $z_i x=z_i x_i$ for all $i\in I.$
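In more detail (a sketch, writing $\Psi$ for the $\ast$-isomorphism above,
so that $\Psi(x)=(z_ix)_{i\in I}$): given such families $\{z_i\}_{i\in I}$
and $\{x_i\}_{i\in I},$ one may take $x=\Psi^{-1}\big((z_ix_i)_{i\in I}\big),$
and then $z_ix=z_ix_i$ for all $i\in I;$ uniqueness holds because
$z_ix=z_iy$ for all $i\in I$ implies $\Psi(x)=\Psi(y)$ and hence $x=y.$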
Let $t_{\tau_i}$ be the measure topology on $S(z_iM)=S(z_iM,
\tau_i),$ where $\tau_i=\tau|_{z_iM},$ $i\in I.$ On the algebra
$S(M)\cong\prod\limits_{i\in I}S(z_iM)$ we consider the topology
$t$ which is the Tychonoff product of the topologies $t_{\tau_i},
i\in I.$ This topology coincides with the so-called \textit{locally
measure topology} on $S(M)$ (see \cite[Remark 2.7]{Ber4}).
It is known \cite{Mur} that $S(M)$ equipped with the locally measure topology is a
topological $\ast$-algebra.
Note that if the trace $\tau$ is finite then $t=t_{\tau}.$
\section{The Main results}
Given a von Neumann algebra $M$ with a faithful normal finite
trace $\tau,$ $\tau(\mathbf{1})=1,$ we consider the $L_2$-norm
$$
\|x\|_2=\sqrt{\tau(x^\ast x)},\, x\in M.
$$
Denote by $\mathcal{U}(M)$ and $\mathcal{GN}(M)$ the set of all unitaries in
$M$ and the set of all partial isometries in $M,$ respectively.
A partial ordering can be defined on the set $\mathcal{GN}(M)$
as follows:
$$
u\leq_1 v \Leftrightarrow u u^\ast\leq v v^\ast,\, u= uu^\ast v.
$$
It is clear that
$$
u\leq_2 v \Leftrightarrow u^\ast u\leq v^\ast v,\, u= v u^\ast
u
$$
also defines a partial ordering on the set $\mathcal{GN}(M)$,
and that
$$
u\leq_1 v \Leftrightarrow u^\ast \leq_2 v^\ast.
$$
Note that $u^\ast u=r(u)$ is the right support of $u,$ and
$uu^\ast=l(u)$ is the left support of $u.$
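For the reader's convenience, a routine verification (sketch) that $\leq_1$
is indeed a partial order on $\mathcal{GN}(M)$: reflexivity follows from
$u=uu^\ast u;$ if $u\leq_1 v$ and $v\leq_1 u,$ then $uu^\ast=vv^\ast$ and
$$
u=uu^\ast v=vv^\ast v=v;
$$
if $u\leq_1 v$ and $v\leq_1 w,$ then $uu^\ast\leq vv^\ast\leq ww^\ast$ and
$$
u=uu^\ast v=uu^\ast(vv^\ast w)=(uu^\ast vv^\ast)w=uu^\ast w,
$$
since $uu^\ast\leq vv^\ast$ gives $uu^\ast vv^\ast=uu^\ast.$ The
verification for $\leq_2$ is analogous.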
The $t$-continuity of algebraic operations on $S(M)$ implies that
every inner derivation on $S(M)$ is $t$-continuous.
The following main result of the paper shows that the converse
implication is also true.
\begin{theorem}\label{main}
Let $M$ be a finite von Neumann algebra with a faithful normal
semi-finite trace $\tau$. Then every $t$-continuous derivation
$D:S(M)\rightarrow S(M)$ is inner.
\end{theorem}
For the proof of this theorem we need several lemmata.
For $f\in \ker D$ and $x\in S(M)$ we have
$$
D(fx)=D(f)x+fD(x)=fD(x),
$$
i.e.
$$
D(fx)=fD(x).
$$
Likewise
$$
D(xf)=D(x)f.
$$
These simple properties will be used frequently below.
Let $D$ be a derivation on $S(M).$ Let us define a mapping
$D^\ast:S(M)\rightarrow S(M)$ by setting
$$
D^\ast(x)=(D(x^\ast))^\ast,\, x\in S(M).
$$
A direct verification
shows that $D^\ast$ is also a derivation on $S(M).$ A derivation
$D$ on $S(M)$ is said to be \textit{skew-hermitian}, if $D^\ast=-D,$ i.e.
$D(x^\ast)=-D(x)^\ast$ for all $x\in S(M).$ Every derivation $D$
on $S(M)$ can be represented in the form $D= D_1+ i D_2,$ where
$$
D_1 = (D -D^\ast)/2,\quad D_2 = (D + D^\ast)/2i
$$
are skew-hermitian derivations on $S(M).$
It is clear that a derivation $D$ is inner if and only if the skew-hermitian derivations
$D_1$ and $D_2$ are inner.
Therefore further we may assume that $D$ is a skew-hermitian
derivation.
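A sketch of the direct verification mentioned above: for $x,y\in S(M)$ and
$\lambda\in\mathbb{C}$ one has
$$
D^\ast(xy)=\left(D(y^\ast x^\ast)\right)^\ast=\left(D(y^\ast)x^\ast+y^\ast
D(x^\ast)\right)^\ast=xD(y^\ast)^\ast+D(x^\ast)^\ast y=xD^\ast(y)+D^\ast(x)y
$$
and $D^\ast(\lambda x)=\left(\bar{\lambda}D(x^\ast)\right)^\ast=\lambda
D^\ast(x),$ so $D^\ast$ is again a derivation. Moreover, with $D_1$ and
$D_2$ as above,
$$
D_1+iD_2=\frac{D-D^\ast}{2}+\frac{D+D^\ast}{2}=D,\qquad
D_1^\ast=-D_1,\qquad D_2^\ast=-D_2.
$$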
\begin{lemma}\label{duu} For every $v\in \mathcal{GN}(M)$
the element $vv^\ast D(v)v^\ast$ is hermitian.
\end{lemma}
\begin{proof}
First note that if $p$ is a projection, then $pD(p)p=0$. Indeed, $D(p)=D(p^2)=D(p)p+pD(p)$.
Multiplying this equality by $p$ from both sides we obtain $pD(p)p=2pD(p)p$, i.e. $pD(p)p=0$.
Now take an arbitrary $v\in \mathcal{GN}(M).$ Taking into account that
$vv^\ast v=v$ and $D$ is skew-hermitian, we get
\begin{eqnarray*}
\left(v v^\ast D(v) v^\ast\right)^\ast &=& vD(v)^\ast v v^\ast=
-vD(v^\ast) v v^\ast=\\
& = & -vD(v^\ast v) v^\ast+ v v^\ast D(v) v^\ast= -v
\left(v^\ast vD(v^\ast v) v^\ast v\right) v^\ast+
\\
& + & v v^\ast D(v) v^\ast= v v^\ast D(v) v^\ast,
\end{eqnarray*}
because $v^\ast v$ is a projection and therefore $v^\ast vD(v^\ast
v) v^\ast v=0.$ So
$$
\left(vv^\ast D(v) v^\ast\right)^\ast=vv^\ast D(v) v^\ast.
$$
The proof is complete.
\end{proof}
\begin{lemma}\label{five} Let $n\in \mathbb{N}$ be a fixed number and
let $v\in \mathcal{GN}(M)$ be a partial isometry. Then
$$
vv^\ast D(v)v^\ast\geq n vv^\ast
$$
if and only if
$$
v^\ast vD(v^\ast) v\leq -n v^\ast v.
$$
\end{lemma}
\begin{proof}
Take an arbitrary $v\in \mathcal{GN}(M)$ such that $ vv^\ast
D(v)v^\ast\geq n vv^\ast.$ Multiplying this inequality on the left
by $v^\ast$ and on the right by $v,$ we obtain
$$
v^\ast vv^\ast D(v)v^\ast v\geq n v^\ast v v^\ast v,
$$
i.e.
$$ v^\ast D(v)v^\ast v\geq n v^\ast v.
$$
Note that
\begin{eqnarray*}
v^\ast D(v)v^\ast v &=& v^\ast D(v v^\ast) v- v^\ast v
D(v^\ast)v = v^\ast \left(v v^\ast D(v v^\ast) v v^\ast\right) v-
\\
& - & v^\ast v D(v^\ast)v= -v^\ast v D(v^\ast)v,
\end{eqnarray*}
because $v v^\ast$ is a projection and therefore $v v^\ast D(v
v^\ast) v v^\ast =0.$ So
$$
-v^\ast v D(v^\ast)v\geq n v^\ast v,
$$
i.e.
$$
v^\ast v D(v^\ast)v\leq -n v^\ast v.
$$
In a similar way we can prove the converse implication.
The proof is complete.
\end{proof}
\begin{lemma}\label{orto} Let $v_1\in \mathcal{GN}(M)$ be
a partial isometry and let $v_2\in \mathcal{GN}(pMp),$ where
\linebreak $p=\mathbf{1}- v_1v_1^\ast \vee v_1^\ast v_1 \vee
s(iD(v_1v_1^\ast))\vee s(iD(v_1^\ast v_1))$ and $s(x)$ denotes the
support of a hermitian element $x.$ Then
$$
(v_1+v_2)(v_1+v_2)^\ast D(v_1+v_2)(v_1+v_2)^\ast=v_1v_1^\ast D(v_1)v_1^\ast
+v_2v_2^\ast D(v_2)v_2^\ast.
$$
\end{lemma}
\begin{proof} Since $v_2\in
\mathcal{GN}(pMp)$ we get
$$
v_1v_2^\ast=v_2^\ast v_1=v_2v_1^\ast=v_1^\ast v_2=0,
$$
$$
v_2^\ast D(v_1v_1^\ast)= D(v_1v_1^\ast)v_2=v_2D(v_1^\ast v_1) =
D(v_1^\ast v_1)v_2^\ast=0.
$$
Thus
$$
v_1v_1^\ast D(v_2)=D(v_1v_1^\ast v_2)-D(v_1v_1^\ast)v_2=0,
$$
$$
v_2^\ast D(v_1)v_2^\ast =D(v_2^\ast v_1) v_2^\ast - D(v_2^\ast)
v_1v_2^\ast=0,
$$
$$
v_1^\ast D(v_1)v_2^\ast =D(v_1^\ast v_1) v_2^\ast - D(v_1^\ast)
v_1v_2^\ast=0,
$$
$$
v_2^\ast D(v_1)v_1^\ast =v_2^\ast D(v_1 v_1^\ast) - v_2^\ast v_1
D(v_1^\ast)=0,
$$
$$
D(v_2)v_1^\ast =D(v_2)v_1^\ast v_1 v_1^\ast=D(v_2v_1^\ast
v_1)v_1^\ast- v_2 D(v_1^\ast v_1) v_1^\ast=0.
$$
Taking into account these equalities we get
\begin{eqnarray*}
(v_1+v_2)(v_1+v_2)^\ast D(v_1+v_2)(v_1+v_2)^\ast &=&
(v_1v_1^\ast+v_2v_2^\ast) D(v_1+v_2)(v_1+v_2)^\ast=
\\
& = & v_1v_1^\ast D(v_1)v_1^\ast +
v_2v_2^\ast D(v_2)v_2^\ast+
\\
& + & v_1v_1^\ast D(v_2)v_1^\ast +
v_2v_2^\ast D(v_1)v_2^\ast+
\\
& + & v_1v_1^\ast D(v_1)v_2^\ast +
v_2v_2^\ast D(v_2)v_1^\ast+
\\
& + & v_1v_1^\ast D(v_2)v_2^\ast +
v_2v_2^\ast D(v_1)v_1^\ast=
\\
& = & v_1v_1^\ast D(v_1)v_1^\ast +
v_2v_2^\ast D(v_2)v_2^\ast.
\end{eqnarray*}
The proof is complete. \end{proof}
Let $p\in M$ be a projection. It is clear that the mapping
$$
pDp:x\rightarrow pD(x)p,\, x\in pS(M)p
$$
is a derivation on $pS(M)p=S(pMp).$
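Indeed, a brief check (sketch): for $x,y\in pS(M)p$ we have $px=xp=x$ and
$py=yp=y,$ whence
$$
(pDp)(xy)=pD(x)yp+pxD(y)p=pD(x)p\,y+x\,pD(y)p=(pDp)(x)\,y+x\,(pDp)(y),
$$
so $pDp$ satisfies the Leibniz rule; linearity is obvious.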
The
following lemma is one of the key steps in the proof of the main
result.
\begin{lemma}\label{boun} Let $M$ be a
von Neumann algebra with a faithful normal finite trace $\tau$,
$\tau(\mathbf{1})=1.$ There exists a sequence of projections
$\{p_n\}$ in $M$ with $\tau(\mathbf{1}-p_n)\rightarrow 0$ such
that the derivation $p_nDp_n$ maps $p_nMp_n$ into itself for all
$n\in \mathbb{N}.$
\end{lemma}
\begin{proof}
For each $n\in \mathbb{N}$ consider the set
$$
\mathcal{F}_n=\{v\in \mathcal{GN}(M): vv^\ast D(v)v^\ast\geq n
vv^\ast\}.
$$
Note that $0\in \mathcal{F}_n$, so $\mathcal{F}_n$ is not empty.
Let us show that the set $\mathcal{F}_n$ has a maximal element
with respect to the order $\leq_1.$
Let $\{v_\alpha\}\subset \mathcal{F}_n$ be a totally ordered net.
We will show that $v_\alpha \stackrel{t_\tau}\longrightarrow v$
for some $v \in \mathcal{F}_n.$ For $\alpha\leq \beta$ we have
\begin{eqnarray*}
\|v_\beta-v_\alpha\|_2 & =& \|l(v_\beta)v_\beta-l(v_\alpha)v_\beta\|_2= \\
& = & \|(l(v_\beta)-l(v_\alpha))v_\beta\|_2\leq
\|l(v_\beta)-l(v_\alpha)\|_2\|v_\beta\|= \\
& = & \sqrt{\tau(l(v_\beta)-l(v_\alpha))}\rightarrow 0,
\end{eqnarray*}
because $\{l(v_\alpha)\}$ is an increasing net of projections.
Thus $\{v_\alpha\}$ is $\|\cdot\|_2$-fundamental, and hence
there exists an element $v$ in the unit ball of $M$ such that
$v_\alpha\stackrel{\|\cdot\|_2}\longrightarrow v.$ Therefore
$v_\alpha\stackrel{t_\tau}\longrightarrow v,$ and thus we have
$$
v_\alpha v_\alpha^\ast \stackrel{t_\tau}\longrightarrow v
v^\ast,\, v_\alpha^\ast v_\alpha\stackrel{t_\tau}\longrightarrow
v^\ast v.
$$
Therefore
$$
v v^\ast, \, v^\ast v\in P(M).
$$
Thus $v\in \mathcal{GN}(M).$
Since $\{v_\alpha v_\alpha^\ast\}$ is an increasing net of
projections it follows that $v_\alpha v_\alpha^\ast \uparrow v
v^\ast.$ Also, $v_\alpha=v_\alpha v_\alpha^\ast v_\beta$ for all
$\beta\geq \alpha$ implies that $v_\alpha=v_\alpha v_\alpha^\ast
v.$ So $v_\alpha\leq_1 v$ for all $\alpha.$ Since
$v_\alpha\stackrel{t_\tau}\longrightarrow v$ by ${t_\tau}$-continuity of $D$ we have that
$D(v_\alpha)\stackrel{t_\tau}\longrightarrow D(v).$ Taking into
account that $v_\alpha v_\alpha^\ast D(v_\alpha)v_\alpha^\ast\geq
n v_\alpha v_\alpha^\ast$ we obtain $v v^{\ast} D(v)v^{\ast}\geq n
v v^{\ast},$ i.e. $v\in \mathcal{F}_n.$
So any totally ordered net in $\mathcal{F}_n$ has an upper
bound in $\mathcal{F}_n.$ By Zorn's Lemma $\mathcal{F}_n$ has a maximal element, say
$v_n.$
Put
$$
p_n=\mathbf{1}- v_nv_n^\ast \vee v_n^\ast v_n \vee
s(iD(v_nv_n^\ast))\vee s(iD(v_n^\ast v_n)).
$$
Let us prove that
$$
\|vv^\ast D(v)v^\ast\|\leq n
$$
for all $v\in \mathcal{U}(p_nMp_n).$
The case $p_n=0$ is trivial.
Let us consider the case $p_n\neq 0.$ Take $v\in
\mathcal{U}(p_nMp_n).$ Let $v v^\ast
D(v)v^\ast=\int\limits_{-\infty}^{+\infty}\lambda \, d\,
e_{\lambda}$ be the spectral resolution of $v v^\ast D(v)v^\ast.$
Assume that $p=e_n^\perp\neq 0.$ Then
$$
p v v^\ast D(v)v^\ast p\geq n p.
$$
Denote $u=pv.$ Then since $p\leq p_n = vv^\ast,$ we have
\begin{eqnarray*}
u u^\ast D(u)u^\ast & =& p v v^\ast p D(pv)v^\ast p= \\
& = & p v v^\ast p D(p)vv^\ast p +p v v^\ast p p D(v)v^\ast p=\\
& = & p v v^\ast p D(p) p vv^\ast +p v v^\ast D(v)v^\ast p =\\
& = &0+ p v v^\ast D(v)v^\ast p\geq n p,
\end{eqnarray*}
i.e.
$$
uu^\ast D(u)u^\ast \geq n p.
$$
Since $u u^\ast, u^\ast u\leq p_n=\mathbf{1}- v_nv_n^\ast \vee
v_n^\ast v_n \vee s(iD(v_nv_n^\ast))\vee s(iD(v_n^\ast v_n))$ it
follows that $u$ is orthogonal to $v_n,$ i.e.
$uv_n^{\ast}=v_n^{\ast}u=0.$ Therefore $w=v_n+u\in
\mathcal{GN}(M).$ Using Lemma~\ref{orto} we have
\begin{eqnarray*}
w w^\ast D(w)w^\ast & =& v_n v_n^{\ast} D(v_n)v_n^{\ast} +u
u^\ast D(u)u^\ast\geq n(v_nv_n^\ast+p)=nww^\ast,
\end{eqnarray*}
because
\begin{eqnarray*}
w w^\ast & =& (v_n+u)(v_n +u)^{\ast}=v_nv_n^{\ast} +u u^\ast=\\
& =& v_nv_n^\ast+pvv^\ast p=v_nv_n^\ast+p.
\end{eqnarray*}
So
$$
w w^\ast D(w)w^\ast \geq n ww^\ast.
$$
Since $w\in \mathcal{F}_n,$ $v_n\leq_1 w$ and $w\neq v_n,$ this
contradicts the maximality of $v_n.$ From this
contradiction it follows that $e_n^\perp=0.$ This means that
$$
v v^\ast D(v)v^\ast\leq n v v^\ast
$$
for all $v\in \mathcal{U}(p_nMp_n).$
Set
$$
\mathcal{S}_n=\{v\in \mathcal{GN}(M): vv^\ast D(v)v^\ast\leq -n
vv^\ast\}.
$$
By Lemma~\ref{five} it follows that $v\in \mathcal{F}_n$ is a
maximal element of $\mathcal{F}_n$ with respect to the order
$\leq_1$ if and only if $v^\ast$ is a maximal element of
$\mathcal{S}_n$ with respect to the order $\leq_2.$
Taking this observation into account, in a similar way we can show
that
$$
v v^\ast D(v)v^\ast\geq -n v v^\ast
$$
for all $v\in \mathcal{U}(p_nMp_n).$ So
$$
-n v v^\ast\leq v v^\ast D(v)v^\ast\leq n v v^\ast.
$$
This implies that $v v^\ast D(v)v^\ast\in M$ and
\begin{equation}\label{noreq}
\|vv^\ast D(v)v^\ast\|\leq n
\end{equation}
for all $v\in \mathcal{U}(p_nMp_n).$
Let us show that the derivation $p_nDp_n$
maps $p_nMp_n$ into itself. Take $v\in \mathcal{U}(p_nMp_n).$ Then $vv^\ast=v^\ast v=p_n$ and
hence
\begin{eqnarray*}
(p_nDp_n)(v)=p_nD(p_n v p_n)p_n & =& vv^\ast D(v)v^\ast v\in
p_nMp_n.
\end{eqnarray*}
Since any element from $p_nMp_n$ is a finite linear combination of
unitaries from $\mathcal{U}(p_nMp_n)$ it follows that
\begin{eqnarray*}
p_n D(x)p_n \in p_nMp_n
\end{eqnarray*}
for all $x\in p_nMp_n,$ i.e. the derivation $p_nDp_n$
maps $p_nMp_n$ into itself.
Let us show that $\tau(v_n v_n^\ast)\rightarrow 0.$
Let us suppose the opposite, i.e. that
there exist a number $\varepsilon>0$ and a sequence
$n_1<n_2<...<n_k<...$ such that
$$
\tau(v_{n_k} v_{n_k}^\ast)\geq \varepsilon
$$
for all $k\geq1.$ Since $v_{n_k}\in \mathcal{F}_{n_k}$ we have
\begin{equation}\label{inq}
v_{n_k}v_{n_k}^{\ast} D(v_{n_k})v_{n_k}^{\ast} \geq n_k v_{n_k}v_{n_k}^{\ast}
\end{equation}
for all $k\geq1.$
Now take an arbitrary number $c>0$ and let $n_k$ be a number such
that $n_k>c\delta,$ where $\delta=\frac{\textstyle
\varepsilon}{\textstyle 2}.$ Suppose that
$$
v_{n_k}v_{n_k}^{\ast} D(v_{n_k})v_{n_k}^{\ast}\in
cV\left(\delta,\delta\right)= V\left(c\delta,\delta\right).
$$
Then there exists a projection $p\in M$ such that
\begin{equation}\label{eps}
||v_{n_k}v_{n_k}^{\ast} D(v_{n_k})v_{n_k}^{\ast} p||<c\delta,\,
\tau(p^\perp)<\delta.
\end{equation}
Let $v_{n_k}v_{n_k}^{\ast}
D(v_{n_k})v_{n_k}^{\ast}=\int\limits_{-\infty}^{+\infty}\lambda \,
d\, e_{\lambda}$ be the spectral resolution of
$v_{n_k}v_{n_k}^{\ast} D(v_{n_k})v_{n_k}^{\ast}.$ From
\eqref{eps} using \cite[Lemma 2.2.4]{Mur} we obtain that
$e^\perp_{c\delta}\preceq p^\perp.$ Taking into account
\eqref{inq} we have that $v_{n_k}v_{n_k}^{\ast}\leq
e^\perp_{n_k}.$ Since $n_k>c\delta$ it follows that
$e^\perp_{n_k}\leq e^\perp_{c\delta}.$ So
$$
v_{n_k}v_{n_k}^{\ast} \leq e^\perp_{n_k}\leq
e^\perp_{c\delta}\preceq p^\perp.
$$
Thus
$$
\varepsilon\leq \tau(v_{n_k}v_{n_k}^{\ast} )\leq
\tau(p^\perp)<\delta=\frac{\varepsilon}{2}.
$$
This contradiction implies that
$$
v_{n_k}v_{n_k}^{\ast} D(v_{n_k})v_{n_k}^{\ast}\notin
cV\left(\delta,\delta\right)
$$
for all $n_k>c\delta.$ Since $c>0$ is arbitrary it follows that
the sequence $\{v_{n_k}v_{n_k}^{\ast}
D(v_{n_k})v_{n_k}^{\ast}\}_{k\geq1}$ is unbounded in the measure
topology. Therefore the set $\{vv^\ast D(v)v^\ast: v\in
\mathcal{GN}(M)\}$ is also unbounded in the measure topology.
On the other hand, the continuity of the derivation $D$
implies that the set $\{xx^\ast D(x)x^\ast: \|x\|\leq 1\}$ is
bounded in the measure topology. In particular, the set
$\{uu^\ast D(u)u^\ast: u\in \mathcal{GN}(M)\}$ is also bounded in
the measure topology. This contradiction implies that $\tau(v_n
v_n^\ast)\rightarrow 0.$
Finally let us show that
$$
\tau(\mathbf{1}-p_n) \rightarrow 0.
$$
It is clear that
$$
l(iD(v_nv_n^\ast)v_nv_n^\ast)\preceq v_nv_n^\ast,
$$
$$
r(v_nv_n^\ast iD(v_nv_n^\ast))\preceq v_nv_n^\ast.
$$
Since
$$
D(v_nv_n^\ast)=D(v_nv_n^\ast)v_nv_n^\ast+v_nv_n^\ast
D(v_nv_n^\ast)
$$
we have
\begin{eqnarray*}
\tau(s(iD(v_nv_n^\ast))) & =&
\tau(s(iD(v_nv_n^\ast)v_nv_n^\ast+v_nv_n^\ast
iD(v_nv_n^\ast)))\leq \\
& \leq & \tau(s(v_nv_n^\ast)\vee l(iD(v_nv_n^\ast)v_nv_n^\ast)\vee
r(v_nv_n^\ast iD(v_nv_n^\ast)))\leq\\
& \leq & \tau(v_n v_n^\ast)+\tau(v_n v_n^\ast)+\tau(v_n
v_n^\ast)=3\tau(v_n v_n^\ast),
\end{eqnarray*}
i.e.
$$
\tau(s(iD(v_nv_n^\ast))) \leq 3\tau(v_n v_n^\ast).
$$
Similarly
$$
\tau(s(iD(v_n^\ast v_n))) \leq 3\tau(v_n^\ast v_n).
$$
Now taking into account that
$$
v_n v_n^\ast \sim v_n^\ast v_n
$$
we obtain
\begin{eqnarray*}
\tau(\mathbf{1}-p_n) & =& \tau(v_nv_n^\ast \vee v_n^\ast v_n \vee
s(iD(v_nv_n^\ast))\vee s(iD(v_n^\ast v_n)))\leq \\
& \leq & \tau(v_n v_n^\ast)+\tau(v_n^\ast v_n)+3\tau(v_n v_n^\ast)+3\tau(v_n^\ast v_n)=\\
& = & 8\tau(v_n v_n^\ast)\rightarrow 0,
\end{eqnarray*}
i.e.
$$
\tau(\mathbf{1}-p_n) \rightarrow 0.
$$
The proof is complete.
\end{proof}
Let $c\in S(M)$ be a central element. It is clear that the mapping
$$
cD:x\rightarrow cD(x),\, x\in S(M)
$$
is a derivation on $S(M).$
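Indeed, a one-line check (sketch): since $c$ is central,
$$
(cD)(xy)=cD(x)y+cxD(y)=\big(cD(x)\big)y+x\big(cD(y)\big)=(cD)(x)\,y+x\,(cD)(y),
\qquad x,y\in S(M).
$$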
\begin{lemma}\label{last} Let $M$ be a
von Neumann algebra with a faithful normal finite trace $\tau$,
$\tau(\mathbf{1})=1.$ There exist an invertible central
element $c\in S(M)$ and a faithful projection $p\in M$ such that
the derivation $cpDp$ maps $pMp$ into itself.
\end{lemma}
\begin{proof}
By Lemma~\ref{boun} there exists a sequence of projections
$\{p_n\}\subset M$ with $\tau(\mathbf{1}-p_n) \rightarrow 0$ such
that the derivation $p_nDp_n$ maps $p_nMp_n$ into itself for all
$n\in \mathbb{N}.$ By \eqref{noreq} we have
\begin{equation}\label{nnn}
\|v_nv_n^\ast D(v_n)v_n^\ast\|\leq n
\end{equation}
for all $v_n\in \mathcal{U}(p_nMp_n).$
Let $z_n=c(p_n)$ be
the central support of $p_n,$ $n\in \mathbb{N}.$ Since $p_n\leq
z_n$ and $\tau(\mathbf{1}-p_n) \rightarrow 0$ we get
$\tau(\mathbf{1}-z_n) \rightarrow 0.$ Thus
$\bigvee\limits_{n\geq1}z_n=\mathbf{1}.$ Set
$$
f_1=z_1,\, f_n=z_n\wedge\left(\vee_{k=1}^{n-1}f_k\right)^\perp,\,
n>1.
$$
Then $\{f_n\}$ is a sequence of mutually orthogonal central
projections with $\bigvee\limits_{n\geq1}f_n=\mathbf{1}.$
Set
$$
c=\sum\limits_{n=1}^\infty n^{-1}f_n
$$
and
$$
p=\sum\limits_{n=1}^\infty f_np_n,
$$
where convergence of series means the convergence in the strong
operator topology. Then $c$ is an invertible central element in
$S(M)$ and $p$ is a faithful projection in $M.$
Let us show that
$$
\|vv^\ast cD(v)v^\ast\|\leq 1
$$
for all $v\in \mathcal{U}(pMp).$
Take $v\in \mathcal{U}(pMp)$ and put $v_n=f_n v,$ $n\in
\mathbb{N}.$ Since $f_n\leq z_n$ it follows that $v_n\in
\mathcal{U}(p_nMp_n).$ Taking into account that $f_n c=n^{-1}f_n$
from the inequality \eqref{nnn} we have
$$
\|v_nv_n^\ast cD(v_n)v_n^\ast\|\leq 1.
$$
Notice that
$$
f_nvv^\ast cD(v)v^\ast f_n=(f_nv)(f_nv)^\ast
cD(f_nv)(f_nv)^\ast=v_nv_n^\ast cD(v_n)v_n^\ast.
$$
Since $\{f_n\}$ is a sequence of mutually orthogonal central
projections we obtain that
$$
||vv^\ast cD(v)v^\ast||=\sup\limits_{n\geq 1}||v_nv_n^\ast
cD(v_n)v_n^\ast||\leq1.
$$
Thus as in the proof of Lemma~\ref{boun} it follows that the
derivation $cpDp$ maps $pMp$ into itself. The proof is complete.
\end{proof}
In the following Lemmata~\ref{lem:eq}-\ref{lem:main} we do not
assume the continuity of derivations.
\begin{lemma}\label{lem:eq} Let $M$ be an arbitrary von Neumann algebra
with mutually
equivalent orthogonal projections $e, f$ such that
$e+f=\mathbf{1}.$ If $D:S(M)\rightarrow S(M)$ is a derivation such
that $D|_{fS(M)f}\equiv 0$ then
$$
D(x)=ax-xa
$$
for all $x\in S(M),$ where $a=D(u^\ast)u$ and $u$ is a partial
isometry in $M$ such that $u^\ast u=e,\, u u^\ast=f.$
\end{lemma}
\begin{proof}
Since $D(f)=0$ we have
$$
D(e)=D(\mathbf{1}-f)=D(\mathbf{1})-D(f)=0.
$$
Thus
$$
D(ex)=eD(x),\, D(xe)=D(x)e,
$$
$$
D(fx)=fD(x),\, D(xf)=D(x)f
$$
for all $x\in S(M).$
Now take a partial isometry $u\in M$ such that
$$
u^\ast u=e,\, u u^\ast=f.
$$
Set $a=D(u^\ast)u.$ Since $eu^\ast=u^\ast,$ $u e=u$ we have
$$
a=D(u^\ast)u=D(e u^\ast)u e=e D(u^\ast)u e,
$$
i.e. $a\in e S(M) e=S(eMe).$
We shall show that
$$
D(x)=ax-xa
$$
for all $x\in S(M).$
Consider the following cases.
Case 1. $x=exe.$ Note that
$$
x=exe= u^\ast u x u^\ast u
$$
and
$$
u x u^\ast =f(u x u^\ast)f\in fS(M)f.
$$
Therefore $D(u x u^\ast)=0.$ Further
\begin{eqnarray*}
D(x) & =& D(u^\ast u x u^\ast u)= \\
& = & D(u^\ast)u x u^\ast u+ u^\ast D(u x u^\ast) u + u^\ast u x
u^\ast D(u)=\\
& = & D(u^\ast)u x+ x u^\ast D(u),
\end{eqnarray*}
i.e.
$$
D(x)=D(u^\ast)u x+ x u^\ast D(u).
$$
Taking into account
$$
u^\ast D(u)=D(u^\ast u)-D(u^\ast)u=D(e)-D(u^\ast)u=-a
$$
we obtain
$$
D(x)=ax-xa.
$$
Case 2. $x=e x f. $ Then
\begin{eqnarray*}
D(x) & =& D(exf)
\ = \ D(u^\ast u x u u^\ast)= \\
& = & D(u^\ast)u x u u^\ast + u^\ast D(u x u u^\ast)= ax,
\end{eqnarray*}
because $u x u u^\ast\in f S(M) f$ and $D(u x u u^\ast)=0.$ Thus
$D(x)=ax.$ Since $a\in eS(M)e$ we have
$$
xa=exf eae=0.
$$
Therefore
$$
D(x)=ax-xa.
$$
Case 3. $x=fxe.$ Then
\begin{eqnarray*}
D(x) & =& D(fxe)
\ = \ D(uu^\ast x u^\ast u)= \\
& = & D(u u^\ast x u^\ast) u + u u^\ast x u^\ast D(u)=x
u^\ast D(u).
\end{eqnarray*}
Since $u^\ast D(u)=-a$ we get $D(x)=-xa.$ Since $a\in eS(M)e$ we have
$$
ax=eae fxe=0.
$$
Therefore
$$
D(x)=ax-xa.
$$
For an arbitrary element $x\in S(M)$ we consider its
representation of the form $x=exe+exf+fxe+fxf$ and taking into
account the above cases we obtain
$$ D(x)=ax-xa.
$$
The proof is complete. \end{proof}
\begin{lemma}\label{lem:part} Let $M$ be a von Neumann algebra
and let $e, f\in M$ be projections such that $0\neq f\sim e\leq f^\perp.$ If
$D:S(M)\rightarrow S(M)$ is a derivation with $D|_{fS(M)f}\equiv
0$ then there exists an element $a\in S(M)$ such that
$$
D|_{pS(M)p}\equiv D_a|_{pS(M)p},
$$
where $p=e+f.$
\end{lemma}
\begin{proof}
Denote $b=D(e)e-eD(e).$ Since $e$ is a projection, one has
$eD(e)e=0.$ Thus
\begin{eqnarray*}
D_b(e) & =& be-eb =\\
& = & \left(D(e)e-eD(e)\right)e-
e\left(D(e)e-eD(e)\right) =\\
& = & D(e)e+eD(e)=D(e^2)=D(e),
\end{eqnarray*}
i.e. $D(e)=D_b(e).$
Now let $x=fxf.$ Taking into account that $ef=fe=0$ and that $D(f)=0$
(because $f\in fS(M)f$ and $D|_{fS(M)f}\equiv 0$), we obtain
\begin{eqnarray*}
D_b(x) & =& bx-xb = \\
& = & \left(D(e)e-eD(e)\right)fxf-
fxf\left(D(e)e-eD(e)\right) =\\
& = & -eD(e)fxf-fxfD(e)e=-eD(ef)xf-fxD(fe)e=0,
\end{eqnarray*}
i.e.
$D_b|_{fS(M)f}\equiv 0.$
Consider the derivation $\Delta=D-D_b.$ We have
$$
\Delta(p)=(D-D_b)(e+f)=D(e)-D_b(e)=0,
$$
i.e. $\Delta(p)=0.$ Thus
$$
\Delta(pxp)=p\Delta(x)p
$$
for all $x\in S(M).$ This means that $\Delta$ maps $pS(M)p=S(pM
p)$ into itself. So the restriction $\Delta|_{pS(M)p}$ of $\Delta$
on $pS(M)p$ is a derivation. Moreover
$$
\Delta|_{fS(M)f}=(D-D_b)|_{fS(M)f}\equiv 0.
$$
By Lemma~\ref{lem:eq} there exists $c\in pS(M)p$ such that
$$
\Delta|_{pS(M)p}\equiv D_c|_{pS(M)p}.
$$
Then
$$
D|_{pS(M)p}=(\Delta+D_b)|_{pS(M)p}= D_c|_{pS(M)p}+
D_b|_{pS(M)p}=D_{b+c}|_{pS(M)p}.
$$
So
$$
D|_{pS(M)p}=D_{b+c}|_{pS(M)p}.
$$
The proof is complete. \end{proof}
\begin{lemma}\label{lem:two} Let $M$ be a von Neumann algebra
of type II$_1$ with faithful normal center-valued trace $\Phi$
and let $f$ be a projection such that $\Phi(f)\geq \varepsilon \mathbf{1},$
where $0<\varepsilon<1.$ If
$D:S(M)\rightarrow S(M)$ is a derivation such that
$D|_{fS(M)f}\equiv 0$ then $D$ is inner.
\end{lemma}
\begin{proof} Without loss of generality we may assume that
$\Phi(\mathbf{1})=\mathbf{1}.$ Choose a number $n\in \mathbb{N}$
such that $2^{-n}<\varepsilon.$ Since $M$ is of type II$_1,$ there
exists a projection $f_1\leq f$ such that
$\Phi(f_1)=2^{-n}\mathbf{1}.$ Since $f_1\leq f$ we have
$D|_{f_1S(M)f_1}\equiv 0.$ Therefore replacing, if necessary,
$f$ by $f_1,$ we may assume that
$\Phi(f)=2^{-n}\mathbf{1}.$
Consider the following cases.
Case 1. $n=1.$ Then $f\sim f^\perp.$ By Lemma~\ref{lem:eq} $D$ is
inner.
Case 2. $n>1.$ Take a projection $e\leq f^\perp$ with $e\sim f.$
Denote $p=e+f.$ Applying Lemma~\ref{lem:part} we can find an
element $a_p\in S(M)$ such that
$$
D|_{pS(M)p}\equiv D_{a_p}|_{pS(M)p}.
$$
Set $\Delta:=D-D_{a_p}.$ Then $\Phi(p)=2^{1-n}\mathbf{1}$ and
$$
\Delta|_{pS(M)p}\equiv 0.
$$
Similarly, applying Lemma~\ref{lem:part} $(n-1)$ times, we can find
an element $a\in S(M)$ such that $D=D_a.$ The proof is complete.
\end{proof}
\begin{lemma}\label{lem:main} Let $M$ be a von Neumann algebra
of type II$_1$ and let $f$ be a faithful projection. If
$D:S(M)\rightarrow S(M)$ is a derivation such that
$D|_{fS(M)f}\equiv 0$ then $D$ is inner.
\end{lemma}
\begin{proof} Since $f$ is faithful, we see that
$c(\Phi(f))=\mathbf{1},$ where $c(x)=\inf\{z\in P(Z(M)): zx=x\}$
is the central support of the element $x\in S(M).$ There exist a
family $\{z_n\}_{n\in F},$ $F\subseteq\mathbb{N},$ of central
projections from $M$ with $\bigvee\limits_{n\in F} z_n =
\mathbf{1}$ and a sequence $\{\varepsilon_n\}_{n\in F}$ with
$\varepsilon_n> 0$ such that
$$
z_n\Phi(f)\geq \varepsilon_n z_n
$$
for all $n\in F.$ Since $z_n$ is a central projection, we have
$D(z_n)=0.$ Thus
$$
D(z_n x)=z_nD(x)
$$
for all $x\in S(M).$ This means that $D$ maps $z_nS(M)=S(z_nM)$
into itself. So
$z_n D|_{S(z_nM)}$ is a
derivation on $S(z_nM).$ Moreover
$$
z_nD|_{z_nfS(M)z_nf}\equiv 0
$$
and
$$
z_n\Phi(f)\geq \varepsilon_n z_n.
$$
By Lemma~\ref{lem:two} there exists $a_n=z_n a_n\in S(z_nM)$ such
that
$$
z_nD|_{S(z_nM)}\equiv D_{a_n}|_{S(z_nM)}
$$
for all $n\in F.$ There exists a unique element $a\in S(M)$ such
that $z_n a=z_n a_n$ for all $n\in F.$ It is clear that $D=D_a.$
The proof is complete. \end{proof}
\textit{Proof of Theorem~\ref{main}}. For finite type I von
Neumann algebras the assertion has been proved in \cite[Corollary 4.5]{Alb2}.
Therefore, splitting $M$ by a central projection into its finite type I part
and its type II$_1$ part, it is sufficient to consider the case of type
II$_1$ von Neumann algebras.
Case 1. The trace $\tau$ is finite (we may suppose without loss of
generality that $\tau(\mathbf{1})=1).$ By Lemma~\ref{last} there
exist an invertible central element $c\in S(M)$ and a faithful
projection $p\in M$ such that
the derivation $cpDp$ on $S(pMp)=pS(M)p$ maps $pMp$ into itself.
By Sakai's Theorem \cite[Theorem 1]{Sak66} there is an element $a_p\in pMp$ such that
$cpD(x)p=a_p x-xa_p$ for all $x\in pMp.$ Since $cD$ is
$t_\tau$-continuous it follows that
$$
cpD(x)p=a_p x-xa_p
$$ for all
$x\in S(pMp).$ So
$$
pD(x)p=(c^{-1}a_p) x-x(c^{-1}a_p)
$$ for all
$x\in S(pMp).$
As in the proof of
Lemma~\ref{lem:part} denote $b=D(p)p-pD(p).$ Then $D(p)=D_{b}(p).$
Consider the derivation $\Delta$ on $S(M)$ defined by
$$
\Delta=D-D_{c^{-1}a_p}-D_{b}.
$$
Then
$$
\Delta(p)=D(p)-D_{c^{-1}a_p}(p)-D_{b}(p)=0,
$$
because $D(p)=D_{b}(p)$ and $c^{-1}a_p\in pS(M)p.$
Let $x\in S(pMp).$ Taking into account that $\Delta(p)=0$ we have
\begin{eqnarray*}
\Delta(x) & =& \Delta(pxp)=p\Delta(pxp)p=\\
& = & pD(pxp)p-pD_{c^{-1}a_p}(pxp)p-pD_{b}(pxp)p =0,
\end{eqnarray*}
because $pD(pxp)p=pD_{c^{-1}a_p}(pxp)p$ and $pbp=0.$ So
$$
\Delta|_{S(pMp)}\equiv 0.
$$
Since $p$ is a faithful projection in $M,$ by Lemma~\ref{lem:main}
$\Delta=D-D_{c^{-1}a_p}-D_{b}$ is an inner derivation. This means
that there exists an element $h\in S(M)$ such that
$$
D=D_{h}+D_{c^{-1}a_p}+D_{b}=D_{h+c^{-1}a_p+b}.
$$
Case 2. Let $\tau$ be an arbitrary faithful normal semi-finite
trace on $M.$ Take a family $\{z_i\}_{i\in I}$ of mutually
orthogonal central projections in $M$ with $\bigvee\limits_{i\in
I}z_i=\mathbf{1}$ and such that $\tau(z_i)<+\infty$ for every
$i\in I$ (such family exists because $M$ is a finite algebra).
The map $D_i: S(z_iM)\rightarrow S(z_iM)$ defined by
$$
D_i(x)=z_i D(z_ix),\, x\in S(z_iM)
$$
is a derivation on $S(z_iM).$ By Case 1, for each $i\in I$ there exists $a_i\in
S(z_iM)$ such that $D_i=D_{a_i}$. Further there is a
unique element $a\in S(M)$ such that $z_i a=z_ia_i$ for all $i\in
I.$ Now it is clear that $D=D_a.$ The proof is complete.
Recall that a $\ast$-subalgebra $\mathcal{A}$ of $S(M)$ is
called absolutely solid if from $x\in S(M),$ $y\in
\mathcal{A},$ and $|x|\leq |y|$ it follows that $x\in \mathcal{A}.$ Note
that $S(M, \tau)$ is an absolutely solid $\ast$-subalgebra in
$S(M).$
The following theorem gives a solution of the above-mentioned problem \cite[Problem 3]{AK2} for the algebra $S(M,\tau)$ of
all $\tau$-measurable operators affiliated with $M$.
\begin{theorem}\label{smt}
Let $M$ be a finite von Neumann algebra with a faithful normal
semi-finite trace $\tau$. Then every $t_\tau$-continuous
derivation $D:S(M, \tau)\rightarrow S(M,\tau)$ is inner.
\end{theorem}
\begin{proof} As above take a family $\{z_i\}_{i\in I}$ of mutually
orthogonal central projections in $M$ with $\bigvee\limits_{i\in
I}z_i=\mathbf{1}$ and such that $\tau(z_i)<+\infty$ for every
$i\in I.$ The map $D_i: S(z_iM, \tau_i)\rightarrow S(z_iM,
\tau_i)$ defined by
$$
D_i(x)=z_i D(z_ix),\, x\in S(z_iM, \tau_i)
$$
is a derivation on $S(z_iM, \tau_i)=S(z_iM),$ where
$\tau_i=\tau|_{z_iM},$ $i\in I.$ Note that the restriction of the
topology $t_\tau$ on $S(z_iM, \tau_i)$ coincides with the topology
$t_{\tau_i}.$ Since $\tau(z_i)<+\infty$ we have that the measure
topology $t_{\tau_i}$ on $S(z_iM, \tau_i)$ coincides with the
locally measure topology. Therefore the derivation $D_i$ is
continuous in the locally measure topology. By Theorem~\ref{main}
for each $i\in I$ there exists $a_i\in S(z_iM)$ such that
$D_i=D_{a_i}$. Now if we take the unique element $a\in S(M)$ such
that $z_i a=z_ia_i$ for all $i\in I$, then we obtain that
$$
z_i D(x)=D(z_i x)=D_i(z_i x)=a_i(z_ix)-(z_i x)a_i=z_i(ax-xa),
$$
i.e.
$$
D(x)=ax-xa
$$
for all $x\in S(M, \tau),$ i.e. the derivation $D$ is implemented by the
element $a\in S(M)$. Since $S(M, \tau)$ is an absolutely
solid $\ast$-subalgebra in $S(M),$ applying \cite[Proposition
5.17]{BPS} we may choose the element $a$, implementing $D$, from the algebra $S(M, \tau)$ itself.
So $D$ is an inner derivation on $S(M, \tau).$ The proof is complete.
\end{proof}
\section*{Acknowledgments}
The authors are indebted to the referee for valuable comments and
suggestions.
\end{document} |
\begin{document}
\title[Standing waves for 6-superlinear Chern-Simons-Schr\"{o}dinger systems]{Standing waves for 6-superlinear Chern-Simons-Schr\"{o}dinger systems with indefinite potentials}
\author{Shuai Jiang$^*$\and Shibo Liu}
\dedicatory{School of Mathematical Sciences, Xiamen University,
Xiamen 361005, China}
\thanks {${^\star}$This work was supported by NSFC (Grant No. 12071387 and 11971436).\\
\rule{4ex}{0ex}2020 Mathematics Subject Classification. 35J20, 35J50, 35J10.\\
\rule{3ex}{0ex}$^*$Corresponding author.\\
\rule{4ex}{0ex}\emph{E-mail address:} [email protected] (S. Jiang), [email protected] (S. Liu).}
\maketitle
\begin{abstract}
In this paper we study 6-superlinear Chern-Simons-Schr\"{o}dinger systems. In contrast to most studies, we consider the case where the potential $V$ is indefinite, so that the Schr\"{o}dinger operator $-\Delta +V$ possesses a finite-dimensional negative space. We obtain nontrivial solutions for the problem via Morse theory.
\vskip1ex
\noindent\emph{Keywords:} Chern-Simons-Schr\"{o}dinger system; Palais-Smale condition; Local linking; Morse theory
\end{abstract}
\section{Introduction}
In this paper, we consider the following Chern-Simons-Schr\"{o}dinger system (CSS system) in $H^1(\mathbb{R}^{2})$:
\begin{equation}
\left\{
\begin{array}[c]{ll}
-\Delta u+V(x)u+A_0u+\sum_{j=1}^{2}{A_j}^2u=f(x,u),\\
\partial_1A_0=A_2\vert u\vert^2,\quad\partial_2A_0=-A_1\vert u\vert^2,\\
\partial_1A_2-\partial_2A_1=-\frac{1}{2}u^2,\quad\partial_1A_1+\partial_2A_2=0.
\end{array}
\right.
\label{eq:K}
\end{equation}
where $V\in C(\mathbb{R}^{2})$ is the potential and $f\in C(\mathbb{R}^{2}\times\mathbb{R})$ is the nonlinearity. The $\left(\mathbf{C}\mathbf{S}\mathbf{S}\right)$ system describes the nonrelativistic thermodynamic behavior of a large number of particles in an electromagnetic field.
The $\left(\mathbf{C}\mathbf{S}\mathbf{S}\right)$ system (\ref{eq:K}) arises when one looks for standing waves of the following nonlinear Schr\"{o}dinger system
\begin{equation}
\left\{
\begin{array}[c]{ll}
iD_0\phi+\left(D_1D_1+D_2D_2\right)\phi+f(x,\phi)=0,\\
\partial_0A_1-\partial_1A_0=-\text{Im}\left(\bar{\phi}D_2\phi\right),\\
\partial_0A_2-\partial_2A_0=\text{Im}\left(\bar{\phi}D_1\phi\right),\\
\partial_1A_2-\partial_2A_1=-\frac{1}{2}\vert\phi\vert^2,
\end{array}
\right.\label{eq:12}
\end{equation}
where $i$ denotes the imaginary unit, $\partial_0=\frac{\partial}{\partial t}$, $\partial_1=\frac{\partial}{\partial x_1}$, $\partial_2=\frac{\partial}{\partial x_2}$, for $(t,x_1,x_2)\in \mathbb{R}^{1+2}$, $\phi:\mathbb{R}^{1+2}\rightarrow\mathbb{C}$ is the complex scalar field, $A_\mu:\mathbb{R}^{1+2}\rightarrow\mathbb{R}$ is the gauge field. The associated covariant differential operators are given by
\[
D_\mu:=\partial_\mu+iA_\mu,\qquad\mu=0,1,2.
\]
System (\ref{eq:12}), proposed in \cite{MR1084552,MR1056846}, consists of the Schr\"{o}dinger equation augmented by the gauge field $A_\mu$. This feature of the model is important for the study of high-temperature superconductors, the fractional quantum Hall effect and Aharonov-Bohm scattering.
We suppose that the gauge field satisfies the Coulomb gauge condition $\partial_0A_0+\partial_1A_1+\partial_2A_2=0$, and $A_\mu(x,t)=A_\mu(x)$, $\mu=0,1,2.$ Then we deduce that $\partial_1A_1+\partial_2A_2=0$. Moreover,
standing waves for (\ref{eq:12}) are obtained through the ansatz $\phi=u(x)e^{i\omega t}$, $f(x,ue^{i\omega t})=f(x,u)e^{i\omega t}$, $\omega>0$, resulting in
\begin{equation}
\left\{
\begin{array}[c]{ll}
-\Delta u+\omega u+A_0u+\sum_{j=1}^{2}{A_j}^2u=f(x,u),\\
\partial_1A_0=A_2\vert u\vert^2,\quad\partial_2A_0=-A_1\vert u\vert^2,\\
\partial_1A_2-\partial_2A_1=-\frac{1}{2}u^2,\quad \partial_1A_1+\partial_2A_2=0.
\end{array}
\right.
\label{eq:13}
\end{equation}
Here the components $A_1$ and $A_2$ in system (\ref{eq:13}) can be represented by solving the elliptic equation
\[
\Delta A_1=\partial_2\left(\frac{\vert u\vert^2}{2}\right)\quad\text{and}\quad\Delta A_2=-\partial_1\left(\frac{\vert u\vert^2}{2}\right),
\]
which provide
\[
A_1=A_1[u](x)=\frac{x_2}{2\pi\vert x\vert^2}*\left(\frac{|u|^2}{2}\right)=\frac{1}{2\pi}\int_{\mathbb{R}^2}\frac{x_2-y_2}{\vert x-y\vert^2}\frac{\vert u(y)\vert^2}{2}\text{d}y,
\]
\[A_2=A_2[u](x)=-\frac{x_1}{2\pi\vert x\vert^2}*\left(\frac{\vert u\vert^2}{2}\right)=-\frac{1}{2\pi}\int_{\mathbb{R}^2}\frac{x_1-y_1}{\vert x-y\vert^2}\frac{\vert u(y)\vert^2}{2}\text{d}y,
\]
where $*$ denotes the convolution.
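For completeness, a short formal derivation of these elliptic equations
(assuming enough regularity and decay of $A_1,A_2$): differentiating
$\partial_1A_2-\partial_2A_1=-\frac{1}{2}\vert u\vert^2$ with respect to
$x_2$, $\partial_1A_1+\partial_2A_2=0$ with respect to $x_1$, and combining,
we obtain
\[
\Delta A_1=\partial_1^2A_1+\partial_2^2A_1=\partial_2\left(\frac{\vert u\vert^2}{2}\right);
\]
differentiating the same two equations with respect to $x_1$ and $x_2$,
respectively, gives
\[
\Delta A_2=-\partial_1\left(\frac{\vert u\vert^2}{2}\right).
\]
The convolution formulas above then follow from the fundamental solution
$\frac{1}{2\pi}\ln\vert x\vert$ of the Laplacian in $\mathbb{R}^2$, since
$\partial_j\left(\frac{1}{2\pi}\ln\vert x\vert\right)=\frac{x_j}{2\pi\vert x\vert^2}$.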
Similarly,
\[
\Delta A_0=\partial_1(A_2\vert u\vert^2)-\partial_2(A_1\vert u\vert^2),
\]
which gives the following representation of the component $A_0$:
\[
A_0=A_0[u](x)=\frac{x_1}{2\pi\vert x\vert^2}*(A_2\vert u\vert^2)-\frac{x_2}{2\pi\vert x\vert^2}*(A_1\vert u\vert^2).
\]
The $\left(\mathbf{C}\mathbf{S}\mathbf{S}\right)$ system (\ref{eq:K}) has attracted considerable attention in recent decades, as can be seen in \cite{MR2948224,MR3494398,MR3619082,MR3173176,MR4188316,MR4000150,MR3924547} and the references therein. We emphasize that in all these papers the authors only considered the case where the Schr\"{o}dinger operator $-\Delta+V$ is positive definite. In this case, the quadratic part of the variational functional $\Phi$ given in (\ref{eq:2.1}) is positive definite, the zero function $u=0$ is a local minimizer of $\Phi$, and the mountain pass theorem \cite{MR0370183} can be applied. However, when the potential $V$ is negative somewhere, so that the quadratic part of $\Phi$ is indefinite, the zero function $u=0$ is no longer a local minimizer of $\Phi$, and the mountain pass theorem is not applicable anymore. For stationary NLS equations
\[
-\Delta u+V(x)u=f(x,u)
\]
with an indefinite Schr\"{o}dinger operator $-\Delta+V$, one usually applies the linking theorem to get solutions, see e.g.\ \cite{MR1751952,MR2957647}. For system (\ref{eq:K}), it seems hard to verify the linking geometry due to the nonnegative terms involving $A_i^2u^2$ in \eqref{eq:2.1}, which prevent the functional $\Phi$ from being nonpositive on the negative space of the Schr\"{o}dinger operator. Hence, the classical linking theorem \cite[Lemma 2.12]{MR1400007} is also not applicable.
For this reason, there are very few results about (\ref{eq:K}) with indefinite potential. It seems that \cite{MR4271204} is the only work devoted to this situation. To overcome these difficulties, and the difficulty that the Sobolev embedding
\[
H^1(\mathbb{R}^2)\hookrightarrow L^2(\mathbb{R}^2)
\]
is not compact, it is assumed in \cite{MR4271204} that, roughly speaking, $V$ is coercive, so that the related Sobolev space is compactly embedded into $ L^2(\mathbb{R}^2)$. Then the local linking theory of Li and Willem \cite{MR1312028} is applied to obtain critical points of $\Phi$.
Motivated by the above observation and \cite{MR3656292} on Schr\"{o}dinger-Poisson systems (see also \cite{MR3945610}), in this paper we will consider the case that $V$ is bounded, so that the above-mentioned compact embedding may fail. From now on, all integrals are taken over $\mathbb{R}^2$ unless stated otherwise.
Now we are ready to state our assumptions on $V$ and $f$.
\begin{itemize}
\item[$(V)$] $V\in C(\mathbb{R}^{2})$ is a bounded function such that the quadratic form
\[
Q(u)=\frac{1}{2}\int\left(\vert \nabla u\vert
^{2}+ V(x)u^{2}\right)
\]
is non-degenerate and the negative space of $Q$ is finite-dimensional.
\item[$(f_1)$] $f\in C(\mathbb{R}^2\times\mathbb{R})$ satisfies
\[
\lim_{\left\vert
t\right\vert \rightarrow0}\frac{f(x,t)}{t}=0\text{,}\qquad\lim_{\left\vert t\right\vert \rightarrow\infty}\frac{f(x,t)}{e^{\mu t^2}}=0
\]
for any $\mu>0$, where $F(x,t)=\int_{0}^{t}f(x,\tau)\text{d}\tau$.
\item[$(f_2)$] For $(x,t)\in \mathbb{R}^2\times(\mathbb{R}\setminus\{0\})$ we have $0<6F(x,t)\leq tf(x,t)$; moreover, for almost all $x\in\mathbb{R}^2$,
\begin{equation}
\lim_{\left\vert
t\right\vert \rightarrow\infty}\frac{F(x,t)}{t^6}=+\infty.\label{eq:1.4}
\end{equation}
\item[$(f_3)$] One of the following conditions is satisfied:
\begin{enumerate}
\item[$(f_{31})$] there exist $C_0>0$ and $\nu\in(0,6)$ satisfying $
F(x,t)\ge C_0\vert t\vert^\nu$ for all $t\in\mathbb{R}$;
\item[$(f_{32})$] for some $\delta>0$, $
F(x,t)\leq0$ for all $\vert t\vert\leq\delta$.
\end{enumerate}
\item[$(f_4)$] For any $r>0$, we have
\[
\lim_{\left\vert x\right\vert \rightarrow\infty}\underset{0<\left\vert t\right\vert\leq r}{\sup}\left\vert\frac{f(x,t)}{t}\right\vert=0.
\]
\item[$(f_5)$] For some $s>2$, $p,q>1$ we have $a\in L^\infty(\mathbb{R}^2) \cap L^{p}(\mathbb{R}^2), b\in L^\infty(\mathbb{R}^2) \cap L^{q}(\mathbb{R}^2)$ such that
\begin{equation}
\left\vert f(x,t)\right\vert\leq a(x)\vert t\vert+b(x)\vert t\vert^{s-1}\text{.}\label{eq:1.5}
\end{equation}
\end{itemize}
We emphasize that in $(f_5)$, the exponents $p$ and $q$ can be chosen arbitrarily from $(1,\infty)$, see Remark \ref{rr0}.
Now we are ready to state the main results of this paper.
\begin{thm}
\label{th:1}Suppose that $(V)$, $(f_1)$, $(f_2)$, $(f_3)$ and $(f_4)$ are satisfied. Then system \eqref{eq:K} has a nontrivial solution.
\end{thm}
\begin{thm}
\label{th:2}Suppose that $(V)$, $(f_1)$, $(f_2)$, $(f_3)$ and $(f_5)$ are satisfied. Then system \eqref{eq:K} has a nontrivial solution.
\end{thm}
\begin{rem}
\label{re:1}
As we have mentioned before, neither the mountain pass lemma nor the linking theorem can be applied to our functional $\Phi$. It turns out that $\Phi$ has a local linking at the origin. Unfortunately, at present all critical point theorems involving local linking require the functional to satisfy a global compactness condition. The role of our condition $(f_4)$ or $(f_5)$ is to ensure such compactness.
\end{rem}
The paper is organized as follows. In Section 2 we prove that the $(PS)$ sequences of $\Phi$ are bounded and $\Phi$ satisfies the $(PS)$ condition. In Section 3 we will prove Theorem \ref{th:1} by applying Morse theory. For this purpose after recalling some concepts and results in infinite-dimensional Morse theory \cite{MR1196690}, we will compute the critical group of $\Phi$ at infinity and then give the proof of Theorem \ref{th:1}. Finally, after investigating the compactness of the operator $K^{\prime}$ (see Lemma \ref{lf:1}), we use Morse theory again to prove Theorem \ref{th:2} in Section 4.
\section{Palais-Smale Condition}
Throughout this paper we always denote $X=H^1(\mathbb{R}^2)$. Under the assumptions $(V)$ and $(f_1)$, similar to Kang and Tang \cite{MR4271204} we can show that the functional $\Phi:X\rightarrow\mathbb{R}$,
\begin{equation}
\Phi(u)=\frac{1}{2}\int\left(\vert\nabla u\vert
^{2}+ V(x)u^{2}+A^2_1(u)u^2+A^2_2(u)u^2\right)
-\int F(x,u) \label{eq:2.1}
\end{equation}
is well defined and of class $C^1$. The derivative of $\Phi$ is given by
\[
\left\langle \Phi^{\prime}(u),v\right\rangle =\int\nabla u\cdot\nabla v+\int V(x)uv+\int\left[\left(A^2_1(u)+A^2_2(u)\right)uv+A_0(u)uv\right]-\int f(x,u)v
\]
for $u,v\in X$. Consequently, critical points of $\Phi$ are weak solutions of system \eqref{eq:K}.
To study the functional $\Phi$, it will be convenient to rewrite the quadratic part $Q$ in a simpler form. It is well known that, if $(V)$ holds, then there exists an equivalent norm $\Vert\cdot\Vert$ on $X$ such that
\[
Q(u)=\frac{1}{2}\left(\Vert u^+\Vert^2-\Vert u^-\Vert^2\right),
\]
where $u^\pm$ denotes the orthogonal projection of $u$ onto $X^\pm$, with $X^\pm$ the positive/negative space of $Q$. Using this new norm, $\Phi$ can be rewritten as
\begin{equation}
\Phi(u)=\frac{1}{2}\left(\Vert u^+\Vert^2-\Vert u^-\Vert^2\right)+\frac{1}{2}\int\left(A^2_1(u)u^2+A^2_2(u)u^2\right)
-\int F(x,u). \label{eq:2.2}
\end{equation}
By simple calculation (also see \cite{MR3619082,MR3924547}), we obtain, for any $u\in X$,
\begin{equation}
\left\langle \Phi^{\prime}(u),u\right\rangle =\Vert u^+\Vert^2-\Vert u^-\Vert^2+3\int\left(A^2_1(u)u^2+A^2_2(u)u^2\right)
-\int f(x,u)u. \label{eq:2.3}
\end{equation}
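The factor $3$ in \eqref{eq:2.3} can be traced to the identity (a formal
computation, assuming enough decay to integrate by parts)
\[
\int A_0(u)u^2=-2\int A_0(u)\left(\partial_1A_2(u)-\partial_2A_1(u)\right)
=2\int\left(\partial_1A_0(u)\,A_2(u)-\partial_2A_0(u)\,A_1(u)\right)
=2\int\left(A^2_1(u)+A^2_2(u)\right)u^2,
\]
where the equations in \eqref{eq:K} for $A_0$, $A_1$, $A_2$ are used in the
first and last steps.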
Next, we recall the following properties of the terms involving $A_0$, $A_1$, $A_2$.
\begin{lem}[\cite{MR4271204}]
\label{l:2.1} There is a constant $a_1>0$ such that for all $u\in X$ we have
\[
0\leq\int\left(A^2_1(u)+A^2_2(u)\right)u^2\leq a_1\left\Vert u\right\Vert^6.
\]
\end{lem}
\begin{lem}[{\cite[Proposition 2.1]{MR3619082}}]
\label{l:2.2} Let $1<r<2$ and $\frac{1}{r}-\frac{1}{t}=\frac{1}{2}$. If $u\in X$, then
\begin{align}
&\vert A_0(u)\vert_t\leq C\vert u\vert^2_{2r}\vert u\vert^2_4\nonumber,\\
&\vert A^2_i(u)\vert_t\leq C\vert u\vert^2_{2r}\nonumber,
\end{align}
where $i=1,2$ and $\vert\cdot\vert_t$ is the $L^t$-norm.
\end{lem}
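For later use we note the particular case $r=\frac{4}{3}$ (so that $t=4$)
of Lemma \ref{l:2.2}:
\[
\vert A^2_i(u)\vert_4\leq C\vert u\vert^2_{8/3},\qquad
\vert A_0(u)\vert_4\leq C\vert u\vert^2_{8/3}\vert u\vert^2_4,\qquad i=1,2,
\]
which, combined with the continuous embeddings $X\hookrightarrow
L^{8/3}(\mathbb{R}^2)$ and $X\hookrightarrow L^4(\mathbb{R}^2)$, is the form
in which the lemma is applied below.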
\begin{lem}
\label{l:2.3}If $(V)$, $(f_1)$ and $(f_2)$ hold, then all $(PS)$ sequences of $\Phi$ are bounded.
\end{lem}
\begin{proof}
Let $\{u_n\}$ be a $(PS)$ sequence of $\Phi$, that is,
\[
\sup_n\vert\Phi(u_{n})\vert<\infty,\qquad\Phi^{\prime}\left(u_{n}\right)\rightarrow{0}.
\]
We show that $\{u_n\}$ is bounded.
Suppose, to the contrary, that $\{u_n\}$ is unbounded; passing to a subsequence we may assume $\Vert u_n\Vert\rightarrow\infty.$ Let $v_n=\Vert u_n\Vert^{-1}u_n$. Then
\[
v_n=v^+_n+v^-_n\rightharpoonup v=v^++v^-\in X,\quad v^\pm_n,v^\pm\in X^\pm.
\]
If $v=0$, then $v^-_n\rightarrow v^-=0$ because dim $X^-<\infty$. Since
\[
\Vert v^+_n\Vert^2+\Vert v^-_n\Vert^2=1,
\]
for $n$ large enough we have
\[
\Vert v^+_n\Vert^2-\Vert v^-_n\Vert^2\ge\frac{1}{2}.
\]
Therefore, by assumption $(f_2)$, we deduce that for $n$ large enough,
\begin{align}
1+\sup_n\vert\Phi(u_{n})\vert+\Vert u_n\Vert
&\ge\Phi(u_n)-\frac{1}{6}\left\langle\Phi^{\prime}(u_n),u_n\right\rangle\nonumber\\
&=\frac{1}{3}\Vert u_n\Vert^2\left( \Vert v^+_n\Vert^2-\Vert v^-_n\Vert^2\right)+\int\left(\frac{1}{6}f(x,u_n)u_n-F(x,u_n)\right)\nonumber\\
&\ge\frac{1}{6}\Vert u_n\Vert^2\nonumber,
\end{align}
where the terms involving $A_1$ and $A_2$ cancel because their coefficient $\frac{1}{2}$ in $\Phi(u_n)$ equals $\frac{1}{6}\cdot3$ in $\frac{1}{6}\left\langle\Phi^{\prime}(u_n),u_n\right\rangle$; this contradicts $\Vert u_n\Vert\rightarrow\infty$.
If $v\neq0$, then the set $\Theta=\{v\neq0\}$ has positive Lebesgue measure. For $x\in\Theta$ we have $\vert u_n(x)\vert\rightarrow\infty$ and
\begin{equation}
\frac{F(x,u_n(x))}{\Vert u_n\Vert^6}=\frac{F(x,u_n(x))}{u^6_n(x)}v^6_n(x)\rightarrow+\infty\label{eq:2.4},
\end{equation}
thanks to (\ref{eq:1.4}).
By Fatou's lemma we deduce from (\ref{eq:2.4}) that
\begin{equation}
\int\frac{F(x,u_n)}{\Vert u_n\Vert^6}\ge\int_{v\neq0}\frac{F(x,u_n)}{\Vert u_n\Vert^6}\rightarrow+\infty.\label{eq:2.5}
\end{equation}
It follows from Lemma \ref{l:2.1} that
\begin{align}
\frac{1}{\Vert u_n\Vert^6}\int F(x,u_n)
&=\frac{\Vert u^+_n\Vert^2-\Vert u^-_n\Vert^2}{2\Vert u_n\Vert^6}+\frac{1}{2\Vert u_n\Vert^6}\int \left(A^2_1(u_n)u_n^2+A^2_2(u_n)u_n^2\right)-\frac{\Phi(u_n)}{\Vert u_n\Vert^6}\nonumber\\
&\leq\frac{a_1}{2}+1\nonumber,
\end{align}
a contradiction to (\ref{eq:2.5}). Therefore $\{u_n\}$ is bounded in $X$.
\end{proof}
To get a convergent subsequence of the $(PS)$ sequence, we need some compactness properties of the operators involving $A_j$, $j=0,1,2$. First, we investigate the $C^1$-functional $\mathcal{N}:X\rightarrow \mathbb{R}$,
\[
\mathcal{N}(u)=\frac{1}{2}\int\left(A^2_1(u)u^2+A^2_2(u)u^2\right).
\]
It is known that the derivative of $\mathcal{N}$ is given by
\[
\left\langle \mathcal{N}^{\prime}(u),v\right\rangle=\int\left[\left(A^2_1(u)+A^2_2(u)\right)uv+A_0(u)uv\right],\quad u,v\in X.
\]
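In particular, using the identity $\int A_0(u)u^2=2\int\left(A^2_1(u)+A^2_2(u)\right)u^2$
recalled after \eqref{eq:2.3}, we get
\[
\left\langle \mathcal{N}^{\prime}(u),u\right\rangle=3\int\left(A^2_1(u)+A^2_2(u)\right)u^2=6\mathcal{N}(u),\qquad u\in X,
\]
a relation that is behind the last step in the proof of Lemma \ref{l:2.5} below.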
\begin{lem}
\label{l:2.4} The functional $\mathcal{N}$ is weakly lower semi-continuous, and its derivative $\mathcal{N}^{\prime}: X\rightarrow X^*$ is weakly sequentially continuous, where $X^*=H^{-1}(\mathbb{R}^2)$ is the dual space of $X=H^1(\mathbb{R}^2)$.
\end{lem}
\begin{proof}
Let $\{u_n\}$ be a sequence in $X$ such that $u_n\rightharpoonup u$ in $X$; we need to show
\[
\mathcal{N}(u)\leq \varliminf \mathcal{N}(u_n),\qquad\left\langle \mathcal{N}^{\prime}(u_n),\phi\right\rangle\rightarrow\left\langle \mathcal{N}^{\prime}(u),\phi\right\rangle,
\]
for all $\phi\in X$.
Since $u_n\rightharpoonup u$ in $X$, up to a subsequence, by the compactness of the embedding $X\hookrightarrow L^2_{\rm loc}(\mathbb{R}^2)$, we have
\[
u_n\rightarrow u\quad\text{in}~~L^2_{\rm loc}(\mathbb{R}^2),\qquad u_n(x)\rightarrow u(x)\quad \text{a.e.\ in}~~\mathbb{R}^2.
\]
According to Wan and Tan \cite[Proposition 2.2]{MR3619082}, for $j=1,2$ we have $A^2_j(u_n)\rightarrow A^2_j(u)$ a.e. in $\mathbb{R}^2$. Moreover,
by Fatou's lemma,
\[
\mathcal{N}(u)=\frac{1}{2}\int\left(A^2_1(u)u^2+A^2_2(u)u^2\right)\leq\frac{1}{2}\varliminf\int\left(A^2_1(u_n)u_n^2+A^2_2(u_n)u_n^2\right)=\varliminf \mathcal{N}(u_n).
\]
Hence $\mathcal{N}$ is weakly lower semi-continuous.
To prove the weak continuity of $\mathcal{N}^{\prime}$, we observe that
\begin{align}
\left\langle\mathcal{N}^{\prime}(u_n)-\mathcal{N}^{\prime}(u),\phi\right\rangle
=&\int\left(A^2_1(u_n)u_n\phi-A^2_1(u)u\phi\right)+\int\left(A^2_2(u_n)u_n\phi-A^2_2(u)u\phi\right)\nonumber\\
&+\int\left(A_0(u_n)u_n\phi-A_0(u)u\phi\right).\label{eq:2.6}
\end{align}
Note that
\begin{equation}
\int\left(A^2_j(u_n)u_n\phi-A^2_j(u)u\phi\right)=\int A^2_j(u_n)(u_n-u)\phi+\int\left(A^2_j(u_n)-A^2_j(u)\right)u\phi,\quad j=1,2.\label{eq:2.7}
\end{equation}
By Lemma \ref{l:2.2}, the H\"{o}lder inequality and the continuous Sobolev embeddings, we have
\begin{align}
\int\vert A^2_j(u_n)(u_n-u)\vert^2
&\leq\vert A^2_j(u_n)\vert^2_4\vert u_n-u\vert^2_4\nonumber\\
&\leq C\vert u_n\vert^4_{\frac{8}{3}}\vert u_n-u\vert^2_4\nonumber\\
&\leq C\Vert u_n\Vert^4\Vert u_n-u\Vert^2\leq C.\nonumber
\end{align}
Combining $u_n\rightarrow u$ a.e.\ in $\mathbb{R}^2$ and $A^2_j(u_n)\rightarrow A^2_j(u)$ a.e.\ in $\mathbb{R}^2$, we have $ A^2_j(u_n)(u_n-u)\rightharpoonup 0$ in $L^2(\mathbb{R}^2)$. Thus
\begin{equation}
\int A^2_j(u_n)(u_n-u)\phi\rightarrow0.\label{eq:2.8}
\end{equation}
Similarly,
\begin{equation}
\int\left(A^2_j(u_n)-A^2_j(u)\right)u\phi\rightarrow0.\label{eq:2.9}
\end{equation}
On the other hand
\begin{equation}
\int\left(A_0(u_n)u_n\phi-A_0(u)u\phi\right)=\int A_0(u_n)(u_n-u)\phi+\int\left(A_0(u_n)-A_0(u)\right)u\phi\label{eq:2.10}.
\end{equation}
By Lemma \ref{l:2.2}, the H\"{o}lder inequality and the continuous Sobolev embeddings, we have
\begin{align}
\int\vert A_0(u_n)(u_n-u)\vert^2
&\leq\vert A_0(u_n)\vert^2_4\left\vert u_n-u\right\vert^2_4\nonumber\\
&\leq C\vert u_n\vert^4_{\frac{8}{3}}\vert u_n\vert^4_4\vert u_n-u\vert^2_4\nonumber\\
&\leq C\Vert u_n\Vert^8\Vert u_n-u\Vert^2\leq C.\nonumber
\end{align}
Combining this with $u_n\rightarrow u$ a.e.\ in $\mathbb{R}^2$, we have $ A_0(u_n)(u_n-u)\rightharpoonup0$ in $L^2(\mathbb{R}^2)$. Thus
\begin{equation}
\int A_0(u_n)(u_n-u)\phi\rightarrow0.\label{eq:2.11}
\end{equation}
Similarly,
\begin{equation}
\int\left(A_0(u_n)-A_0(u)\right)u\phi\rightarrow0.\label{eq:2.12}
\end{equation}
From (\ref{eq:2.6})-(\ref{eq:2.12}), for $\phi\in X$ we have
\[
\left\langle \mathcal{N}^{\prime}(u_n),\phi\right\rangle\rightarrow\left\langle \mathcal{N}^{\prime}(u),\phi\right\rangle.
\]
Therefore, we have proved that $\mathcal{N}^{\prime}$ is weakly sequentially continuous.
\end{proof}
\begin{lem}
\label{l:2.5}Let $u_n\rightharpoonup u$ in $X$. Then
\[
\varliminf\int\left[\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n(u_n-u)+A_0(u_n)u_n(u_n-u)\right] \ge0.
\]
\end{lem}
\begin{proof}
Applying Lemma \ref{l:2.4}, we have
\[
\varliminf\mathcal{N}(u_n)\ge\mathcal{N}(u),\qquad \lim\left\langle \mathcal{N}^{\prime}(u_n),u\right\rangle=\left\langle \mathcal{N}^{\prime}(u),u\right\rangle.
\]
Therefore,
\begin{align}
&\hspace{-1cm}\varliminf\int\left[\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n(u_n-u)+A_0(u_n)u_n(u_n-u)\right]\nonumber\\
=&\varliminf\int\left[3\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n^2-\left(A^2_1(u_n)+A^2_2(u_n)\right)u_nu-A_0(u_n)u_nu\right]\nonumber\\
=&\varliminf\left(6\mathcal{N}(u_n)-\left\langle \mathcal{N}^{\prime}(u_n),u\right\rangle\right)\nonumber\\
\ge&6\mathcal{N}(u)-\left\langle \mathcal{N}^{\prime}(u),u\right\rangle=0.\qedhere\nonumber
\end{align}
\end{proof}
\begin{lem}
\label{l:2.6}If $(V)$, $(f_1)$, $(f_2)$ and $(f_4)$ hold, then the functional
$\Phi$ satisfies the $(PS)$ condition, that is, any $(PS)$ sequence $\{u_{n}\}\subset
X$ possesses a convergent subsequence.
\end{lem}
\begin{proof}
Let $\{u_n\}$ be a $(PS)$ sequence. We know from Lemma \ref{l:2.3} that $\{u_n\}$ is bounded in $X$. Up to a subsequence we may assume $u_n\rightharpoonup u$ in $X$. We have
\[
\int\left(\nabla u_n\cdot\nabla u+V(x)u_nu\right)\rightarrow\int\left(\vert \nabla u\vert
^{2}+ V(x)u^{2}\right)=\Vert u^+\Vert^2-\Vert u^-\Vert^2.
\]
Consequently
\begin{align}
o(1)=&\left\langle \Phi^{\prime}(u_n),u_n-u\right\rangle\nonumber\\
=& \int\left[\nabla u_n\cdot\nabla \left({u_n-u}\right)+V(x)u_n\left(u_n-u\right)\right]\nonumber\\
&+\int\left[\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n(u_n-u)+A_0(u_n)u_n(u_n-u)\right]-\int f(x,u_n)(u_n-u)\nonumber\\
=&\int\left(|\nabla u_n|^2+V(x)u_n^2\right)-\int\left(\nabla u_n\cdot\nabla u+V(x)u_nu\right)\nonumber\\
&+\int\left[\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n(u_n-u)+A_0(u_n)u_n(u_n-u)\right]-\int f(x,u_n)(u_n-u)\nonumber\\
=&\left(\Vert u_n^+\Vert^2-\Vert u_n^-\Vert^2\right)-\left(\Vert u^+\Vert^2-\Vert u^-\Vert^2\right)\nonumber\\
&+\int\left[\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n(u_n-u)+A_0(u_n)u_n(u_n-u)\right]-\int f(x,u_n)(u_n-u)+o(1).\nonumber
\end{align}
We have $u^-_n\rightarrow u^-$ and $\Vert u_n^-\Vert\rightarrow\Vert u^-\Vert$ because $\dim X^-<\infty$. Collecting all infinitesimal terms, we obtain
\begin{align}
\Vert u_n^+\Vert^2-\Vert u^+\Vert^2
= o(1)&+\int f(x,u_n)(u_n-u)\nonumber\\
&-\int\left[\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n(u_n-u)+A_0(u_n)u_n(u_n-u)\right].\label{eq:2.13}
\end{align}
Using the condition $(f_4)$, according to \cite[p.29]{MR2038142} we have
\[
\varlimsup\int f(x,u_n)(u_n-u)\leq0.
\]
We deduce from Lemma \ref{l:2.5} and (\ref{eq:2.13}) that
\begin{align}
&\hspace{-1cm}\varlimsup\left(\Vert u_n^+\Vert^2-\Vert u^+\Vert^2\right)\nonumber\\
=&\varlimsup\left(\int f(x,u_n)(u_n-u)-\int\left[\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n(u_n-u)+A_0(u_n)u_n(u_n-u)\right]\right)\nonumber\\
=&\varlimsup\int f(x,u_n)(u_n-u)-\varliminf\int\left[\left(A^2_1(u_n)+A^2_2(u_n)\right)u_n(u_n-u)+A_0(u_n)u_n(u_n-u)\right]\nonumber\\
\leq&\varlimsup\int f(x,u_n)(u_n-u)\leq0.\nonumber
\end{align}
Combining this with the weakly lower semi-continuity of the norm functional $u\mapsto \left\Vert u\right\Vert$, we obtain
\[
\Vert u^+\Vert\leq\varliminf \Vert u^+_n\Vert\leq\varlimsup \Vert u^+_n\Vert\leq \Vert u^+\Vert.
\]
Therefore $ \Vert u^+_n\Vert\rightarrow \Vert u^+\Vert$. Remembering $ \Vert u^-_n\Vert\rightarrow \Vert u^-\Vert$, we get $ \Vert u_n\Vert\rightarrow \left\Vert u\right\Vert$. Thus $u_n\rightarrow u$ in $X$.
\end{proof}
\section{Critical groups and the proof of Theorem \ref{th:1}}
Having established the $(PS)$ condition for $\Phi$, we are now ready to present the proof of Theorem \ref{th:1}. We start by recalling some concepts and results from infinite-dimensional Morse theory (see e.g., Chang \cite{MR1196690} and Mawhin and Willem \cite[Chapter 8]{MR982267}).
Let $X$ be a Banach space, $\varphi: X\rightarrow \mathbb{R}$ be a $C^1$ functional, $u$ be an isolated critical point of $\varphi$ and $\varphi(u)=c$. Then
\[
C_q(\varphi,u):=H_q(\varphi_c,\varphi_c\setminus\{0\}),\qquad q\in\mathbb{N}=\{0,1,2,\ldots\},
\]
is called the $q$-th critical group of $\varphi$ at $u$, where $\varphi_c:=\varphi^{-1}(-\infty,c]$ and $H_*$ stands for the singular homology with coefficients in $\mathbb{Z}$.
If $\varphi$ satisfies the $(PS)$ condition and the critical values of $\varphi$ are bounded from below by $\alpha$, then following Bartsch and Li \cite{MR1420790}, we define the $q$-th critical group of $\varphi$ at infinity by
\[
C_q(\varphi,\infty):=H_q(X,\varphi_\alpha),\qquad q\in\mathbb{N}.
\]
Due to the deformation lemma, it is well known that the homology on the right hand side does not depend on the choice of $\alpha$.
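For orientation we recall a standard example (see \cite{MR1196690}): if $u$
is a nondegenerate critical point of a $C^2$ functional $\varphi$ on a
Hilbert space with finite Morse index $\ell$, then
\[
C_q(\varphi,u)\cong\delta_{q,\ell}\,\mathbb{Z},\qquad q\in\mathbb{N};
\]
in particular, a local minimizer satisfies $C_q(\varphi,u)\cong\delta_{q,0}\,\mathbb{Z}$.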
\begin{prop}[{\cite[Proposition 3.6]{MR1420790}}]
\label{prop:3.1} If $\varphi\in C^1(X,\mathbb{R})$ satisfies the $(PS)$ condition and $C_\ell(\varphi,0)\neq C_\ell(\varphi,\infty)$ for some $\ell\in\mathbb{N}$, then $\varphi$ has a nonzero critical point.
\end{prop}
\begin{prop}[{\cite[Theorem 2.1]{MR1110119}}]
\label{prop:3.2} Suppose $\varphi\in C^1(X,\mathbb{R})$ has a local linking at $0$ with respect to the decomposition $X=Y\oplus Z$, i.e., for some $\varepsilon>0$,
\begin{align}
&\varphi(u)\leq0 \quad\text{for}~~u\in Y\cap B_\varepsilon,\nonumber\\
&\varphi(u)>0 \quad\text{for}~~u\in(Z\setminus\{0\})\cap B_\varepsilon,\nonumber
\end{align}
where $B_\varepsilon=\{u\in X\mid\Vert u\Vert\leq\varepsilon\}$. If $\ell={\rm dim}~Y<\infty$, then $C_\ell(\varphi,0)\neq0.$
\end{prop}
To investigate $C_*(\Phi,\infty)$, using the idea of \cite[Lemma 3.3]{MR3656292} we will prove the following lemma.
\begin{lem}
\label{l:3.3}If $(V)$, $(f_1)$ and $(f_2)$ hold, there exists $A>0$ such that, if $\Phi(u)\leq-A$, then
\[
\frac{\rm d}{{\rm d}t}\Bigg|_{t=1}\Phi(tu)<0.
\]
\end{lem}
\begin{proof}
Otherwise, there exists a sequence $\{u_n\}\subset X$ such that $\Phi(u_n)\leq-n$ but
\begin{equation}
\left\langle \Phi^{\prime}(u_n),u_n\right\rangle= \frac{\rm d}{{\rm d}t}\Bigg|_{t=1}\Phi(tu_n)\ge0.\label{eq:3.1}
\end{equation}
Consequently,
\begin{align}
2\left(\Vert u_n^+\Vert^2-\Vert u_n^-\Vert^2\right)
&\leq2\left(\Vert u_n^+\Vert^2-\Vert u_n^-\Vert^2\right)+\int\left[f(x,u_n)u_n-6F(x,u_n)\right]\nonumber\\
&=6\Phi(u_n)-\left\langle \Phi^{\prime}(u_n),u_n\right\rangle\leq-6n.\label{eq:3.2}
\end{align}
Let $v_n=\Vert u_n\Vert^{-1}u_n$ and $v^\pm_n$ be the orthogonal projection of $v_n$ on $X^\pm$. Then up to a subsequence $v^-_n\rightarrow v^-$ for some $v^-\in X^-$, because dim $X^-<\infty$.
If $v^-\neq 0$, then $v_n\rightharpoonup v$ in $X$ for some $v\in X\setminus\{0\}$. By assumption $(f_2)$ we have
\[
\frac{f(x,t)t}{t^6}\ge\frac{6F(x,t)}{t^6}\rightarrow+\infty,\qquad\text{as}~~\vert t\vert\rightarrow\infty.
\]
Thus, similar to the proof of $(\ref{eq:2.5})$, we obtain
\begin{equation}
\frac{1}{\Vert u_n\Vert^6}\int f(x,u_n)u_n\rightarrow+\infty.\label{eq:3.3}
\end{equation}
Now, using Lemma \ref{l:2.1} we have a contradiction
\begin{align}
0\leq\frac{\left\langle \Phi^{\prime}(u_n),u_n\right\rangle}{\Vert u_n\Vert^6}
&=\frac{1}{\Vert u_n\Vert^6}\left(\left(\Vert u_n^+\Vert^2-\Vert u_n^-\Vert^2\right)+3\int\left(A^2_1(u_n)+A^2_2(u_n)\right)u^2_n-\int f(x,u_n)u_n\right)\nonumber\\
&\leq 1+3a_1-\frac{1}{\Vert u_n\Vert^6}\int f(x,u_n)u_n\rightarrow-\infty.\nonumber
\end{align}
Hence, we must have $v^-=0$. Since $\Vert v^+_n\Vert^2+\Vert v^-_n\Vert^2=1$ and $v^-_n\rightarrow 0$, we deduce $\Vert v^+_n\Vert\rightarrow1$. Now for large $n$ we have
\[
\Vert u^+_n\Vert=\Vert u_n\Vert\Vert v^+_n\Vert\ge\Vert u_n\Vert\Vert v^-_n\Vert=\Vert u^-_n\Vert,
\]
a contradiction to $(\ref{eq:3.2})$, which forces $\Vert u^+_n\Vert<\Vert u^-_n\Vert$.
\end{proof}
\begin{lem}
\label{l:3.4} $C_q(\Phi,\infty)=0$ for all $q\in\mathbb{N}$.
\end{lem}
\begin{proof}
Let $B=\{v\in X\mid \Vert v\Vert\leq1\}$, let $S=\partial{B}$ be the unit sphere in $X$, and let $A>0$ be the number given in Lemma \ref{l:3.3}. Without loss of generality, we may assume that
\[
-A<\underset{0<\left\Vert u\right\Vert\leq2}{\inf}\Phi(u).
\]
Using $(\ref{eq:1.4})$, it is easy to see that for any $v\in S$
\begin{align}
\Phi(sv)&=\frac{s^2}{2}\left(\Vert v^+\Vert^2-\Vert v^-\Vert^2\right)+\frac{s^2}{2}\int\left(A^2_1(sv)+A^2_2(sv)\right)v^2-\int F(x,sv)\nonumber\\
&=s^6\left\{\frac{\Vert v^+\Vert^2-\Vert v^-\Vert^2}{2s^4}+\frac{1}{2s^4}\int\left(A^2_1(sv)+A^2_2(sv)\right)v^2-\int\frac{F(x,sv)}{s^6}\right\}\rightarrow-\infty,\nonumber
\end{align}
as $s\rightarrow+\infty$. Therefore, for $v\in S$ there is $s_v>0$ such that $\Phi(s_vv)=-A$.
Using Lemma \ref{l:3.3} and the implicit function theorem, as in the proof of \cite[Lemma 3.4]{MR3656292} it can be shown that such $s_v$ is uniquely determined by $v$ and $T:v\mapsto s_v$ is continuous on $S$. Using the continuous function $T$ it is standard (see \cite{MR1094651}) to construct a deformation from $X\setminus B$ to the level set $\Phi_{-A}=\Phi^{-1}(-\infty,-A\rbrack$, and deduce
\[
C_q(\Phi,\infty)=H_q(X,\Phi_{-A})\cong H_q(X,X\setminus B)=0,\qquad\text{for all}~~q\in\mathbb{N}.\qedhere
\]
\end{proof}
\begin{proof}
[\indent Proof of Theorem \ref{th:1}]
From assumption $(f_1)$, for every $\varepsilon>0$, $\mu>0$ and $\xi>2$ there exists $C_\varepsilon>0$ such that
\[
\vert F(x,u)\vert\leq\varepsilon\vert u\vert^2+C_\varepsilon\vert u\vert^\xi(e^{\mu u^2}-1)\quad \text{for all}~~u\in\mathbb{R}.
\]
Using the conditions $(V)$, $(f_1)$ and $(f_3)$, similar to \cite[Lemmas 3.4 and 3.5]{MR4271204}, it is easy to see that
\[
\Phi(u)=\frac{1}{2}\left(\Vert u^+\Vert^2-\Vert u^-\Vert^2\right)+o(\Vert u\Vert^2)\quad\text{as $\Vert u\Vert\rightarrow0$.}
\]
Hence, there exists $\varepsilon>0$ such that $\Phi$ is positive on $(X^+\setminus\{0\})\cap B_\varepsilon$, and negative on $(X^-\setminus\{0\})\cap B_\varepsilon$. That is, $\Phi$ has a local linking with respect to the decomposition $X=X^-\oplus X^+$. Therefore Proposition \ref{prop:3.2} yields
\[
C_\ell(\Phi,0)\neq0,
\]
where $\ell={\rm dim}~X^-$. By Lemma \ref{l:3.4}, $C_\ell(\Phi,\infty)=0$. Applying Proposition \ref{prop:3.1}, we see that $\Phi$ has a nonzero critical point. The proof of Theorem \ref{th:1} is complete.
\end{proof}
\section{Proof of Theorem \ref{th:2}}
To prove Theorem \ref{th:2}, we need to recover the $(PS)$ condition with the help of condition $(f_5)$. To this end, we need the following lemma.
\begin{lem}
\label{lf:1} Suppose that $f:\mathbb{R}^2\times\mathbb{R}\rightarrow\mathbb{R}$ is continuous and satisfies $(f_5)$.
Then the functional $K:X\rightarrow \mathbb{R}$ given by
\[
K(u)=\int F(x,u)
\]
is well defined and of class $C^1$, with
\[
\left\langle K^{\prime}(u),\phi\right\rangle=\int f(x,u)\phi,\quad\forall\phi\in X.
\]
Moreover, $K^{\prime}$ is compact.
\end{lem}
\begin{proof}
From \eqref{eq:1.5} we have
\[
|f(x,t)|\le |a|_\infty|t|+|b|_\infty|t|^{s-1}\text{,}
\]
so it is well known that $K$ is well defined and of class $C^1$.
To show that $K':X\to X^*$ is compact, let $u_n\rightharpoonup u$ in $X$.
Since $a\in L^{p}(\mathbb{R}^2)$ and $b\in L^{q}(\mathbb{R}^2)$, for any $\varepsilon>0$ there exists $R>0$ such that
\begin{equation}
\int_{B^{\,\rm c}_R}\vert a\vert^p<\varepsilon^p,\quad\int_{B^{\,\rm c}_R}\vert b\vert^q
<\varepsilon^q,\label{eq:43}
\end{equation}
where $B_R$ is the ball in $\mathbb{R}^2$ of radius $R>0$ centered at the origin and $B^{\,\rm c}_R=\mathbb{R}^2\setminus B_R$.
For $\phi\in X$, $\|\phi\|=1$, using (\ref{eq:1.5}), (\ref{eq:43}) and the H\"{o}lder inequality and noting that $u,\phi\in L^\gamma(B^{\,\rm c}_R)$ for any $\gamma>2$, we have
\begin{align}
\int_{B^{\,\rm c}_R}\vert f(x,u)\vert\vert\phi\vert
&\leq\int_{B^{\,\rm c}_R}\vert a\vert\vert u\vert\vert\phi\vert+\int_{B^{\,\rm c}_R}\vert b\vert\vert u\vert^{s-1}\vert\phi\vert\nonumber\\
&\leq[ a]_{p}[ u]_{2p/(p-1)}[\phi]_{2p/(p-1)}+[ b]_q[ u]^{s-1}_{2q(s-1)/(q-1)}[\phi]_{2q/(q-1)}\nonumber\\
&<\varepsilon\left([ u]_{2p/(p-1)}[\phi]_{2p/(p-1)}+[ u]^{s-1}_{2q(s-1)/(q-1)}[\phi]_{2q/(q-1)}\right)\nonumber\\
&\le M\varepsilon\text{,}
\end{align}
where $[\cdot]_\gamma$ denotes the standard $L^\gamma(B_R^{\,\rm c})$ norm and $M$ is a constant depending on $\Vert u\Vert$ and $\sup_n\Vert u_n\Vert$ but not on $\phi$. A similar inequality holds for $\int_{B^{\,\rm c}_R}\vert f(x,u_n)\vert\vert\phi\vert$, therefore
\begin{align}
\int_{B^{\,\rm c}_R}\vert f(x,u_n)-f(x,u)\vert\vert\phi\vert\le\int_{B^{\,\rm c}_R}\vert f(x,u_n)\vert\vert\phi\vert+\int_{B^{\,\rm c}_R}\vert f(x,u)\vert\vert\phi\vert\le 2M\varepsilon\text{.}
\label{xx}
\end{align}
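For clarity, we record the elementary exponent bookkeeping behind the three-factor H\"{o}lder inequality used above (a routine check included only for the reader's convenience):
\[
\frac1p+\frac{p-1}{2p}+\frac{p-1}{2p}=1,\qquad \frac1q+\frac{q-1}{2q}+\frac{q-1}{2q}=1,
\]
and $\big\Vert\,\vert u\vert^{s-1}\big\Vert_{L^{2q/(q-1)}(B^{\,\rm c}_R)}=[\,u\,]^{s-1}_{2q(s-1)/(q-1)}$, which explains the exponents appearing in the estimate.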
By the compactness of the embedding $X\hookrightarrow L^2_{\rm loc}(\mathbb{R}^2)$ we have
\begin{equation}
\sup_{\|\phi\|=1}\int_{B_R}\vert f(x,u_n)-f(x,u)\vert\vert\phi\vert\rightarrow0.\label{eq:42}
\end{equation}
It follows from \eqref{xx} and \eqref{eq:42} that
\begin{align*}
&\hspace{-1cm}\varlimsup_{n\rightarrow\infty}\left\Vert K^{\prime}(u_{n})-K^{\prime
}(u)\right\Vert =\varlimsup_{n\rightarrow\infty}\sup_{\left\Vert
\phi\right\Vert =1}\left\vert \int\left( f(x,u_{n})-f(x,u)\right)
\phi\right\vert \\
& \leq\varlimsup_{n\rightarrow\infty}\sup_{\left\Vert \phi\right\Vert
=1} \int_{B_{R}}\left| f(x,u_{n})-f(x,u)\right| \left|\phi\right|
+\varlimsup_{n\rightarrow\infty}\sup_{\left\Vert \phi\right\Vert =1}
\int_{B_{R}^{\,\mathrm{c}}}\left| f(x,u_{n})-f(x,u)\right| \left|\phi\right|\\
&\le 2M\varepsilon\text{.}
\end{align*}
Letting $\varepsilon\to0$, we deduce that $K^{\prime}(u_n)\rightarrow K^{\prime}(u)$ in $X^*$. Hence $K'$ is compact.
\end{proof}
\begin{rem}\label{rr0}
Lemma \ref{lf:1} is motivated by \cite[Lemma 1]{MR1457116}. In that paper, the space dimension is $N>2$, the working space is $\mathcal{D}^{1,2}(\mathbb{R}^N)$, and $a$ and $b$ have to be taken from $L^\infty(\mathbb{R}^N)\cap L^\gamma(\mathbb{R}^N)$ for certain $\gamma$ depending on the power of $|t|$ in \eqref{eq:1.5}. Here our space dimension is $N=2$ and our working space is $X=H^1(\mathbb{R}^2)$. Unlike $\mathcal{D}^{1,2}(\mathbb{R}^N)$, the Sobolev space $X=H^1(\mathbb{R}^2)$ embeds continuously into $L^\gamma(\mathbb{R}^2)$ for every $\gamma\ge2$. For this reason, the integrability assumptions on the weight functions $a$ and $b$ in $(f_5)$ can be quite flexible.
\end{rem}
\begin{proof}
[\indent Proof of Theorem \ref{th:2}]
As we have pointed out in Remark \ref{re:1}, condition $(f_5)$ is imposed to ensure the global compactness of the functional $\Phi$. According to the proof of Lemma \ref{l:2.6}, it suffices to show that
\[
\varlimsup_{n\rightarrow\infty}\int f(x,u_n)(u_n-u)=0
\]
whenever $u_n\rightharpoonup u$ in $X$.
Indeed, if $u_n\rightharpoonup u$ in $X$, then by Lemma \ref{lf:1} we have $K^{\prime}(u_n)\rightarrow K^{\prime}(u)$ and
\begin{align}
\left\vert \int f(x,u_n)(u_n-u)\right\vert
&=\left\vert \left\langle K^{\prime}(u_n),u_n-u\right\rangle \right\vert\nonumber\\
&\leq \left\vert \left\langle K^{\prime}(u_n)-K^{\prime}(u),u_n-u\right\rangle \right\vert+\left\vert \left\langle K^{\prime}(u),u_n-u\right\rangle \right\vert\nonumber\\
&\leq\left\Vert K^{\prime}(u_n)-K^{\prime}(u)\right\Vert \left\Vert u_n-u\right\Vert+o(1)\rightarrow0.\nonumber
\end{align}
Therefore, arguing as in the proof of Lemma \ref{l:2.6}, we can show that the functional $\Phi$ satisfies the $(PS)$ condition. Furthermore, applying Lemmas \ref{l:3.3} and \ref{l:3.4} and repeating the proof of Theorem \ref{th:1}, we deduce that under the assumptions of Theorem \ref{th:2} the functional $\Phi$ has a nonzero critical point. This completes the proof of Theorem \ref{th:2}.
\end{proof}
\subsection*{Acknowledgments}
This work was supported by NSFC (12071387) and NSFC (11971436). The authors would like to thank the anonymous referees for their careful reading and valuable suggestions, which improved this work.
\end{document} |
\begin{document}
\title[ Saturated Sets for Nonuniformly Hyperbolic Systems ]
{ Saturated Sets for Nonuniformly Hyperbolic Systems}
\author[Liang, Liao, Sun, Tian]
{Chao Liang$^{*}$, Gang Liao$^{\dag}$, Wenxiang Sun$^{\dag}$,
Xueting Tian$^{\ddag}$}
\thanks{$^{*}$ Applied Mathematical Department, The Central University of Finance and Economics,
Beijing 100081, China; Liang is supported by NNSFC(\# 10901167)}\email{[email protected]}
\thanks{$^{\dag}$ School of Mathematical Sciences,
Peking University, Beijing 100871,
China; Sun is supported by NNSFC(\#
10231020) and Doctoral Education Foundation of China}\email{[email protected]}
\email{[email protected]}
\thanks{$^{\ddag}$
Academy of Mathematics and Systems Science, Chinese Academy of
Sciences, Beijing 100190, China; Tian is supported by CAPES} \email{[email protected]}
\date{July, 2011}
\maketitle
\begin{abstract}
In this paper we prove that for an ergodic hyperbolic measure
$\omega$ of a $C^{1+\alpha}$ diffeomorphism $f$ on a Riemannian
manifold $M$, there is an $\omega$-full measure set
$\widetilde{\Lambda}$ such that for every invariant probability
$\mu\in \mathcal{M}_{inv}(\widetilde{\Lambda},f)$, the metric
entropy of $\mu$ is equal to the topological entropy of the saturated
set $G_{\mu}$ consisting of the generic points of $\mu$:
$$h_\mu(f)=h_{\operatorname{top}}(f,G_{\mu}).$$
Moreover, for every nonempty, compact and connected subset $K$ of
$\mathcal{M}_{inv}(\widetilde{\Lambda},f)$ with the same hyperbolic
rate, we compute the topological entropy of the saturated set $G_K$ of
$K$ by the following equality:
$$\inf\{h_\mu(f)\mid \mu\in K\}=h_{\operatorname{top}}(f,G_K).$$
In particular these results can be applied (i) to the nonuniformly
hyperbolic diffeomorphisms described by Katok, (ii) to the robustly
transitive partially hyperbolic diffeomorphisms described by
~Ma{\~{n}}{\'{e}}, (iii) to the robustly transitive non-partially
hyperbolic diffeomorphisms described by Bonatti-Viana. In all these
cases $\mathcal{M}_{inv}(\widetilde{\Lambda},f)$ contains an open
subset of $\mathcal{M}_{erg}(M,f)$.
\end{abstract}
\tableofcontents
\section{Introduction}
Let $(M, d)$ be a compact metric space and $f : M\rightarrow M$ be
a continuous map. Given an invariant subset $\Gamma\subset M$,
denote by $\mathcal{M}(\Gamma)$ the set of all Borel probability measures on $\Gamma$, by $\mathcal{M}_{inv}(\Gamma, f)$ the subset of $f$-invariant probability measures, and by $\mathcal{M}_{erg}(\Gamma, f)$ the subset of $f$-invariant ergodic probability measures. Clearly, if $\Gamma$ is
compact then $\mathcal{M}(\Gamma)$ and $\mathcal{M}_{inv}(\Gamma,
f)$ are both compact spaces in the weak$^*$-topology of measures.
Given $x\in M$, define the $n$-ordered empirical measure
of $x$ by
$$\mathcal{E}_n(x)=\frac1n\sum_{i=0}^{n-1}\delta_{f^i(x)},$$
where $\delta_y$ is the Dirac mass at $y\in M$. A subset $W\subset M$ is called saturated if, whenever $x\in W$ and the sequence $\{\mathcal{E}_n(y)\}$ has the same set of limit points as $\{\mathcal{E}_n(x)\}$, then $y\in W$. The limit point set $V(x)$ of
$\{\mathcal{E}_n(x)\}$ is always a compact connected subset of $
\mathcal{M}_{inv}(M, f)$. Given $\mu\in \mathcal{M}_{inv}(M, f)$, the saturated set $G_\mu$ of $\mu$ consists of those generic points $x$ satisfying $V(x)=\{\mu\}$. More generally, for a compact connected subset $K\subset \mathcal{M}_{inv}(M, f)$, we denote by $G_K$ the saturated set consisting of the points $x$ with $V(x)=K$. By the Birkhoff Ergodic Theorem, $\mu(G_{\mu})=1$ when $\mu$ is ergodic. However, this is a rather special case. For non-ergodic $\mu$, by the Ergodic Decomposition Theorem, $G_\mu$ has measure $0$ and is thus ``thin'' from the measure-theoretic point of view. In addition, when $f$ is uniformly hyperbolic (\cite{Sigmund}) or nonuniformly hyperbolic (\cite{LST}), $G_\mu$ is of first category, hence ``thin'' from the topological point of view. More precisely, this first-category property can be obtained as follows. Denote by $C^0(M)$ the set of continuous
real-valued functions on $M$ provided with the sup norm. For non
uniformly hyperbolic systems $(f,\mu)$, there is $x\in M$ such that
$$\overline{\operatorname{orb}(x,f)}\supset \operatorname{supp}(\mu)\quad \mbox{and}\quad
\mathcal{E}_n(x)\quad \mbox{does not converge},$$ where the support
of a measure $\nu$, denoted by $\operatorname{supp}(\nu)$, is the minimal closed set of full $\nu$-measure, see \cite{Sigmund,LST}. We can take $0<a_1<a_2$ and $\varphi\in C^0(M)$ such that
$$\liminf_{n\rightarrow +\infty} \frac1n\sum_{i=0}^{n-1}\varphi(f^i(x))<a_1<a_2<\limsup_{n\rightarrow +\infty}\frac1n\sum_{i=0}^{n-1}\varphi(f^i(x)).$$
Let $$R=\cap_N \cup_{n\geq N}\big{\{}x\mid
\frac1n\sum_{i=0}^{n-1}\varphi(f^i(x))<a_1\big{\}}\cap \cap_N
\cup_{n\geq N}\big{\{}x\mid
\frac1n\sum_{i=0}^{n-1}\varphi(f^i(x))>a_2\big{\}}.$$ Then $$R\cap
\overline{\operatorname{orb}(x,f)} \subset (\overline{\operatorname{orb}(x,f)}\setminus
G_{\mu})\,\,\mbox{ and} \,\, R\cap \overline{\operatorname{orb}(x,f)}\,\,\mbox{ is
a}\,\, G_{\delta}\,\,\mbox{ subset of}\,\, \overline{\operatorname{orb}(x,f)}.$$
Combining with $x\in \overline{\operatorname{orb}(x,f)}$, we can see that
$\overline{\operatorname{orb}(x,f)}\setminus G_{\mu}$ is a residual set of
$\overline{\operatorname{orb}(x,f)}$. Hence, $G_{\mu}$ is of first category in the
subspace $\overline{\operatorname{orb}(x,f)}$.
For a conservative system $(f,M,\operatorname{Leb})$ preserving the normalized
volume measure $\operatorname{Leb}$, if $f$ is ergodic, then by the Birkhoff Ergodic Theorem,
$$\mathcal{E}_n(x)\rightarrow \operatorname{Leb},\quad \mbox{as}\quad n\rightarrow +\infty,$$
for $\operatorname{Leb}$-a.e. $x\in M$. In the general dissipative case where, a priori, there are no distinguished invariant probability measures, it is much more subtle what one should mean by describing the behavior of almost all orbits in the physically observable sense. In this context, an invariant measure $\mu$ is called a physical measure (or Sinai-Ruelle-Bowen measure) if the saturated set $G_{\mu}$ has positive Lebesgue measure. SRB measures are used to measure the ``thickness'' of saturated sets from the point of view of the Lebesgue measure.
Motivated by the definition of saturated sets, it is reasonable to expect that $G_{\mu}$ gathers all the information of $\mu$. If $\mu$ is ergodic, Bowen \cite{Bowen4} confirmed this expectation by proving that
$$h_{\operatorname{top}}(f,G_{\mu})=h_\mu(f).$$
When $f$ is mixing and uniformly hyperbolic (which implies the uniform specification property), applying \cite{PS} it also holds that
$$h_{\operatorname{top}}(f,G_{\mu})=h_\mu(f).$$ This implies that $G_\mu$ is ``thick'' from the point of view of topological entropy. Indeed, the information of an invariant measure can be well approximated by nearby measures \cite{Katok2, Wang, Liang, Liao-Sun-Tian}. For nonuniformly hyperbolic systems, Liang, Sun and Tian \cite{LST} proved that $G_{\mu}\neq \emptyset$. Our goal in the present paper is to show the ``thickness'' of $G_{\mu}$ from the point of view of entropy.
Now we start to introduce our results precisely. Let $M$ be a
compact connected boundary-less Riemannian $d$-dimensional manifold
and $f : M \rightarrow M$ a $C^{1+\alpha}$ diffeomorphism.
We
use $Df_x$ to denote the tangent map of $f$ at $x\in M$. We say that
$x\in M$ is a regular point of $f$ if there exist numbers
$\lambda_1(x)>\lambda_2(x)>\cdots>\lambda_{\phi(x)}(x)$ and a
decomposition on the tangent space
$$T_xM=E_1(x)\oplus\cdots\oplus E_{\phi(x)}(x)$$
such that$$\underset{n\rightarrow
\infty}{\lim}\frac{1}{n}\log\|(D_xf^n)u\|=\lambda_j(x)$$ for every
$0\neq u\in E_j(x)$ and every $1\leq j\leq \phi(x)$. The numbers $\lambda_i(x)$ and the spaces $E_i(x)$ are called the Lyapunov exponents and the eigenspaces of $f$ at the regular point $x$, respectively. Oseledets' theorem \cite{Oseledec} states that the regular points of a diffeomorphism $f: M\rightarrow M$ form a Borel set of total measure. For a regular point $x\in M$ we define
$$\lambda^+(x)=\max\{0,\,\min\{\lambda_i(x)\mid
\lambda_i(x)>0,\,1\leq i\leq \phi(x)\}\} $$ and
$$\lambda^-(x)=\max\{0,\,\min\{-\lambda_i(x)\mid \lambda_i(x)<0,\,1\leq i\leq
\phi(x)\}\}.
$$ Here we adopt the convention $\min \emptyset=\max \emptyset =0$. For an ergodic invariant measure $\mu$, by ergodicity the Lyapunov exponents are constant for $\mu$-almost all $x\in M$: $\lambda_i(x)=\lambda_i(\mu)$ for $1\leq i\leq \phi(\mu)$. In this case we denote $\lambda^+(\mu)=\lambda^+(x)$ and $\lambda^-(\mu)=\lambda^-(x)$. We say an ergodic measure $\mu$ is
hyperbolic if $\lambda^+(\mu)$ and $\lambda^-(\mu)$ are both
non-zero.
\begin{Def}\label{Def6} Given $\beta_1,\beta_2\gg\epsilon
>0$, and for all $k\in \mathbb{Z}^+$, the hyperbolic block
$\Lambda_k=\Lambda_k(\beta_1,\beta_2;\,\epsilon)$ consists of all
points $x\in M$ for which there is a splitting $T_xM=E_x^s\oplus
E_x^u$ with the invariance property $Df^t(E_x^s)=E_{f^tx}^s$ and
$Df^t(E_x^u)=E_{f^tx}^u$, and satisfying:\\
$(a)~
\|Df^n|E_{f^tx}^s\|\leq e^{\epsilon k}e^{-(\beta_1-\epsilon)
n}e^{\epsilon|t|}, \forall t\in\mathbb{Z}, n\geq1;$\\
$(b)~
\|Df^{-n}|E_{f^tx}^u\|\leq e^{\epsilon k}e^{-(\beta_2-\epsilon)
n}e^{\epsilon|t|}, \forall t\in\mathbb{Z}, n\geq1;$ and\\
$(c)~
\tan(Angle(E_{f^tx}^s,E_{f^tx}^u))\geq e^{-\epsilon
k}e^{-\epsilon|t|}, \forall t\in\mathbb{Z}.$
\end{Def}
\begin{Def}\label{Def7} $\Lambda(\beta_1,\beta_2;\epsilon)=\overset{+\infty}{\underset{k=1}{\cup}}
\Lambda_k(\beta_1,\beta_2;\epsilon)$ is a Pesin set.
\end{Def}
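To fix ideas, we mention a simple illustration, which is not used in the sequel: if $\Lambda$ is a compact $f$-invariant uniformly hyperbolic set, say with
$$\|Df^n|E^s_x\|\leq Ce^{-\beta_1 n},\qquad \|Df^{-n}|E^u_x\|\leq Ce^{-\beta_2 n},\qquad n\geq1,$$
for some constant $C\geq1$, and with $\tan(Angle(E^s_x,E^u_x))$ uniformly bounded below, then every $x\in\Lambda$ belongs to the hyperbolic block $\Lambda_k$ for any $k$ so large that $e^{\epsilon k}\geq C$ and $e^{-\epsilon k}$ does not exceed this lower bound; in this case the Pesin set contains all of $\Lambda$.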
It is verified that $\Lambda(\beta_1,\beta_2;\epsilon)$ is an
$f$-invariant set but usually not compact. Although the definition of the Pesin set is given in a topological sense, it is indeed related to invariant measures. Actually, given an ergodic hyperbolic measure $\omega$ for $f$, if $\lambda^-(\omega)\geq \beta_1$ and $\lambda^+(\omega)\geq\beta_2$ then $\omega\in \mathcal{M}_{inv}(\Lambda(\beta_1,\beta_2;\epsilon), f)$. From now on we fix such a measure $\omega$ and denote by $\omega\mid_{\Lambda_l}$ the conditional measure of $\omega$ on $\Lambda_l$. Set
$\widetilde{\Lambda}_l=\operatorname{supp}(\omega\mid_{\Lambda_l})$ and
$\widetilde{\Lambda}=\cup_{l\geq1}\widetilde{\Lambda}_l$. Clearly,
$f^{\pm}(\widetilde{\Lambda}_l)\subset \widetilde{\Lambda}_{l+1}$,
and the sub-bundles $E^s_x$, $E^u_x$ depend continuously on $x\in
\widetilde{\Lambda}_l$. Moreover, $\widetilde{\Lambda}$ is also
$f$-invariant with $\omega$-full measure \footnote{Here $\widetilde{\Lambda}$ is obtained by taking the support of $\omega\mid_{\Lambda_l}$ on each hyperbolic block $\Lambda_l$, so even an ergodic measure whose Lyapunov exponents stay away from $[-\beta_1,\beta_2]$ does not necessarily give positive measure to $\widetilde{\Lambda}$. We will give more discussion on $\widetilde{\Lambda}$ in Section 6}.
\begin{Thm}\label{main theorem of measure} For every $\mu\in \mathcal{M}_{inv}(\widetilde{\Lambda},
f)$, we have
$$h_{\mu}(f)=h_{\operatorname{top}}(f,G_{\mu}).$$
\end{Thm}
Let $\{\eta_l\}_{l=1}^{\infty}$ be a decreasing sequence which
approaches zero. As in \cite{Newhouse} we say a probability measure
$\mu\in \mathcal{M}_{inv}(M, f)$ has {\it hyperbolic rate}
$\{\eta_l\}$ with respect to the Pesin set
$\widetilde{\Lambda}=\cup_{l\geq 1}\widetilde{\Lambda}_l$ if
$\mu(\widetilde{\Lambda}_l)\geq 1-\eta _l$ for all $l\geq1$.
\begin{Thm}\label{main theorem of set}Let $\eta=\{\eta_l\}$ be a sequence decreasing to zero and
$\mathcal{M}(\widetilde{\Lambda},\eta)\subset\mathcal{M}_{inv}(M,f)
$ be the set of measures with hyperbolic rate $\eta$. Given any
nonempty compact connected set $K\subset
\mathcal{M}(\widetilde{\Lambda},\eta)$, we have
$$\inf\{h_{\mu}(f)\mid \mu\in K\}=h_{\operatorname{top}}(f,G_K).$$
\end{Thm}
\section{Dynamics of non uniformly hyperbolic systems}\setlength{\parindent}{2em}
We start with some notions and results of Pesin theory \cite{Barr-Pesin,Katok1,Pollicott}.
\subsection{Lyapunov metric}
Assume
$\Lambda(\beta_1,\beta_2;\,\epsilon)=\cup_{k\geq1}\Lambda_k(\beta_1,\beta_2;\,\epsilon)$
is a nonempty Pesin set. Let $\beta_1'=\beta_1-2\epsilon$,
$\beta_2'=\beta_2-2\epsilon$. Since $\epsilon\ll \beta_1,\beta_2$, we have $\beta_1'>0$ and $\beta_2'>0$.
For $x\in \Lambda(\beta_1,\beta_2;\,\epsilon)$, we define
$$\|v_s\|_s=\sum_{n=1}^{+\infty}e^{\beta_1'n}\|D_xf^n(v_s)\|, ~~~\forall~v_s\in E^s_x ,$$
$$\|v_u\|_u=\sum_{n=1}^{+\infty}e^{\beta_2'n}\|D_xf^{-n}(v_u)\|, ~~~\forall~v_u\in E^u_x ,$$
$$\|v\|'=\mbox{max}(\|v_s\|_s,\,\|v_u\|_u)~~~\mbox{where}~v=v_s+v_u.$$
We call the norm $\|\cdot\|'$ a Lyapunov metric. This metric is in
general not equivalent to the Riemannian metric. With the Lyapunov
metric $f : \Lambda\rightarrow\Lambda$ is uniformly hyperbolic. The
following estimates are known :\\
$(i) \|Df\mid_{E^s_x}\|'\leq e^{-\beta_1'},~~~ \|Df^{-1}\mid_{E^u_x}\|'\leq
e^{-\beta_2'}$;\\
$(ii) \frac{1}{\sqrt{2}}\|v\|_x\leq \|v\|_x'\leq \frac{2}{1-e^{-\epsilon}}e^{\epsilon k}\|v\|_x,~\forall~v\in T_x M,~x\in \Lambda_k.$
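As a quick illustration (a standard computation recorded only for the reader's convenience), the first bound in $(i)$ follows directly from the definition of $\|\cdot\|_s$: for $v_s\in E^s_x$,
$$\|D_xf(v_s)\|_s=\sum_{n=1}^{+\infty}e^{\beta_1'n}\|D_xf^{n+1}(v_s)\|
=e^{-\beta_1'}\sum_{m=2}^{+\infty}e^{\beta_1'm}\|D_xf^{m}(v_s)\|\leq e^{-\beta_1'}\|v_s\|_s,$$
and the bound for $\|Df^{-1}\mid_{E^u_x}\|'$ is obtained in the same way from the definition of $\|\cdot\|_u$.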
\begin{Def}\label{Lyapunov coordinates}
In the local coordinate chart, a coordinate change $C_{\varepsilon}:
M\rightarrow GL(m,\mathbb{R})$ is called a Lyapunov change of
coordinates if for each regular point $x\in M$ and $u,v\in T_xM$, it
satisfies
$$\langle u,v\rangle_x=\langle C_{\varepsilon}u,C_{\varepsilon}v\rangle_x'.$$
\end{Def}
Any Lyapunov change of coordinates $C_{\epsilon}$ sends the
orthogonal decomposition $\mathbb{R}^{\dim E^s}\oplus
\mathbb{R}^{\dim E^u} $ to the decomposition $E^s_x\oplus E^u_x$ of
$T_xM$. Additionally, denote
$A_{\epsilon}(x)=C_{\epsilon}(f(x))^{-1}Df_xC_{\epsilon}(x)$. Then
$$A_{\epsilon}(x)=\begin{pmatrix}A_{\epsilon}^s(x)&0\\0&A_{\epsilon}^u(x)\\\end{pmatrix},$$
$$\|A_{\epsilon}^s(x)\|\leq e^{-\beta_1'},\quad \|A_{\epsilon}^u(x)^{-1}\|\leq e^{-\beta_2'}.$$
\subsection{Lyapunov neighborhood}
Fix a point $x\in \Lambda(\beta_1,\beta_2;\,\epsilon)$. By taking
charts about $x, f(x)$ we can assume without loss of generality
that $x\in\mathbb{R}^d, f(x)\in \mathbb{R}^d$. For a sufficiently
small neighborhood $U$ of $x$, we can trivialize the tangent bundle
over $U$ by identifying $T_UM\equiv U\times \mathbb{R}^d$. For any
point $y\in U$ and tangent vector $v\in T_yM$ we can then use the
identification $T_UM=U\times \mathbb{R}^d$ to translate the vector
$v$ to a corresponding vector $\bar{v}\in T_xM$. We then define
$\|v\|''_y=\|\bar{v}\|'_x$, where $\|\cdot\|'$ indicates the
Lyapunov metric. This defines a new norm $\|\cdot\|''$ (which agrees
with $\|\cdot\|'$ on the fiber $T_xM$). Similarly, we can define
$\|\cdot\|''_z$ on $T_zM$ (for any $z$ in a sufficiently small
neighborhood of $fx$ or $f^{-1}x$). We write $\bar{v}$ as $v$
whenever there is no confusion. We can define a new splitting $T_yM
= {E^s_y}'\oplus {E^u_y}', y\in U$ by translating the splitting
$T_xM = E^s_x\oplus E^u_x$ (and similarly for $T_zM = {E^s_z}'\oplus
{E^u_z}'$).
There exist $\beta_1''=\beta_1-3\epsilon>0$, $\beta_2''=\beta_2-3\epsilon>0$ and $\epsilon_0> 0$ such that, if we set $\epsilon_k =\epsilon_0e^{-\epsilon k }$, then for any $y\in B(x,\epsilon_k)$, an $\epsilon_k$-neighborhood of $x\in\Lambda_k$, we have a splitting $T_yM = {E^s_y}'\oplus {E^u_y}'$ with hyperbolic behavior: \\
$(i)~ \|D_yf(v)\|''_{fy}\leq e^{-\beta_1''}\|v\|''_y$ for every $v\in {E^s_y}'$;\\
$(ii)~ \|D_yf^{-1}(w)\|''_{f^{-1}y}\leq e^{-\beta_2''}\|w\|''_y$ for every $w\in {E^u_y}'$.\\
The constant $\epsilon_0$ here and afterwards depends on various
global properties of $f$, e.g., the H\"{o}lder constants, the
size of the local trivialization, see p.73 in \cite{Pollicott}.
\begin{Def}\label{Def5} We define the Lyapunov neighborhood $\Pi=\Pi(x,a\epsilon_k)$
of $x\in \Lambda_k$ (with size $a\epsilon_k$, $0<a<1$) to be the
neighborhood of $x$ in $M$ which is the exponential projection onto
$M$ of the tangent rectangle
$(-a\epsilon_k,\,a\epsilon_k)E^s_x\oplus
(-a\epsilon_k,\,a\epsilon_k)E^u_x$.
\end{Def} In the Lyapunov neighborhoods, $Df$ displays uniform hyperbolicity in the Lyapunov metric. More precisely, one can extend
the definition of $C_{\epsilon}(x)$ to the Lyapunov neighborhood
$\Pi(x,a\epsilon_k)$ such that for any $y\in \Pi(x,a\epsilon_k)$,
$$A_{\epsilon}(y):=C_{\epsilon}(f(y))^{-1}Df_yC_{\epsilon}(y)= \begin{pmatrix}A_{\epsilon}^s(y)&0\\0&A_{\epsilon}^u(y)\\\end{pmatrix},$$
$$\|A_{\epsilon}^s(y)\|\leq e^{-\beta_1''},\quad \|A_{\epsilon}^u(y)^{-1}\|\leq e^{-\beta_2''}.$$
Let $\Psi_x=\exp_x\circ C_{\epsilon}(x)$. Given $x\in \Lambda_k$, we
say that the set $H^u\subset \Pi(x,a\epsilon_k)$ is an admissible
$(u,\gamma_0,k)$-manifold near $x$ if $H^u=\Psi_x(\mbox{graph
}\psi)$ for some $\gamma_0$-Lipschitz function $\psi:
(-a\epsilon_k,\,a\epsilon_k)E^u_x\rightarrow
(-a\epsilon_k,\,a\epsilon_k)E^s_x$ with $\|\psi\|\leq
a\epsilon_k/4$. Similarly, we can define an admissible $(s,\gamma_0,k)$-manifold near $x$. Through each point $y\in \Pi(x,a\epsilon_k)$ we can take an admissible $(u,\gamma_0,k)$-manifold $H^u(y)\subset \Pi(x,a\epsilon_k)$ and an admissible $(s,\gamma_0,k)$-manifold $H^s(y)\subset \Pi(x,a\epsilon_k)$. Fixing $\gamma_0$ small enough, we can assume that \\
$(i)~ \|D_zf(v)\|''_{fz}\leq e^{-\beta_1''+\epsilon}\|v\|''$ for every $v\in T_zH^s(y),\ z\in H^s(y)$;\\
$(ii)~ \|D_zf^{-1}(w)\|''_{f^{-1}z}\leq e^{-\beta_2''+\epsilon}\|w\|''$ for every $w\in T_zH^u(y),\ z\in H^u(y)$.\\
For any regular point $x\in \Lambda$, define $k(x)=\min\{i\in
\mathbb{Z}\mid x\in \Lambda_i\}$. Using the local hyperbolicity
above, we can see that each connected component of $f(H^u(y))\cap
\Pi(fx,a\epsilon_{k(fx)})$ is an admissible
$(u,\gamma_0,k(fx))$-manifold; each connected component of
$f^{-1}(H^s(y))\cap \Pi(f^{-1}x,a\epsilon_{k(f^{-1}x)})$ is an
admissible $(s,\gamma_0,k(f^{-1}x))$-manifold.
\subsection{ Weak shadowing lemma}
In this subsection, we state a weak shadowing property for $C^{1+\alpha}$ nonuniformly hyperbolic systems, which is needed in our proofs.
Let $(\delta_k)_{k=1}^{\infty}$ be a sequence of positive real
numbers. Let $(x_n)_{n=-\infty}^{\infty}$ be a sequence of points
in $\Lambda=\Lambda(\beta_1,\beta_2,\epsilon)$ for which there
exists a sequence $(s_n)_{n=-\infty}^{+\infty}$ of positive integers
satisfying: \begin{eqnarray*}&(a)& x_n\in
\Lambda_{s_n},\,\,\forall n\in \mathbb{Z };\\[2mm]& (b)& | s_n-s_{n-1} |\leq
1, \forall \,n\in \mathbb{Z};\\[2mm]
&(c)& d(f(x_n), x_{n+1})\leq \delta_{s_n},\,\,\,\forall\, n\in
\mathbb{Z},\end{eqnarray*} then we call
$(x_n)_{n=-\infty}^{+\infty}$ a $(\delta_k)_{k=1}^{\infty}$
pseudo-orbit. Given $c>0$,
a point $x\in M$ is called a $c$-shadowing point for the
$(\delta_k)_{k=1}^{\infty}$
pseudo-orbit if $d(f^n(x), x_{n})\leq
c\epsilon_{s_n}$,\,\,$\forall\,n\in \mathbb{Z}$, where
$\epsilon_k=\epsilon_0e^{-\epsilon k}$ are given by the definition
of Lyapunov neighborhoods.
\begin{Thm} (Weak shadowing lemma \cite{Hirayama,Katok1,Pollicott})\label{specification} Let $f: M\rightarrow M$ be a $C^{1+\alpha}$ diffeomorphism, with
a non-empty Pesin set $\Lambda=\Lambda(\beta_1,\beta_2;\epsilon)$
and fixed parameters, $\beta_1,\beta_2 \gg \epsilon > 0$. For $c >
0$ there exists a sequence $(\delta_k)_{k=1}^{\infty}$ such that for
any $(\delta_k)_{k=1}^{\infty}$ pseudo-orbit there exists a unique
$c$-shadowing point. \end{Thm}
\section{Entropy for non compact spaces}
In our setting the saturated sets are often non-compact. In \cite{Bowen4} Bowen gave a definition of topological entropy for non-compact sets. We state the definition in a slightly different way; the two definitions are in fact equivalent. Let $E\subset M$ and
$\mathcal{C}_n(E,\varepsilon)$ be the set of all finite or countable
covers of $E$ by the sets of form $B_m(x,\varepsilon)$ with $m\geq
n$. Denote
$$\mathcal{Y}(E; t,n,\varepsilon)=\inf\{ \sum_{B_m(x,\varepsilon)\in A}\,\,e^{-tm}\mid \,\,A\in \mathcal{C}_n(E,\varepsilon)\},$$
$$\mathcal{Y}(E; t,\varepsilon)=\lim_{n\rightarrow \infty}\mathcal{Y}(E; t,n,\varepsilon).$$
Define $$h_{\operatorname{top}}(E;\varepsilon)=\inf \{t\mid\,\mathcal{Y}(E;
t,\varepsilon)=0\}=\sup \{t\mid\,\mathcal{Y}(E;
t,\varepsilon)=\infty\}$$
and the topological entropy of $E$ is
$$h_{\operatorname{top}}(E,f)=\lim_{\varepsilon\rightarrow 0} h_{\operatorname{top}}(E;\varepsilon).$$
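As a quick sanity check of this definition (not needed in the sequel), consider a one-point set $E=\{p\}$. For every $t>0$ the single-ball cover $\{B_n(p,\varepsilon)\}\in\mathcal{C}_n(E,\varepsilon)$ gives
$$\mathcal{Y}(\{p\}; t,n,\varepsilon)\leq e^{-tn}\rightarrow 0\quad(n\rightarrow\infty),$$
so $\mathcal{Y}(\{p\}; t,\varepsilon)=0$ for all $t>0$ and hence $h_{\operatorname{top}}(\{p\},f)=0$, as expected.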
The following formulas from \cite[Theorem 4.1(3)]{PS} are subcases of Bowen's variational principle and hold in the general topological setting.
\begin{Prop}\label{generical lemmas} Let $K\subset
\mathcal{M}_{inv}(M,f)$ be non-empty, compact
and connected. Then
$$h_{\operatorname{top}}(f,G_K)\leq \inf\{h_\mu(f)\mid \mu\in K\}.$$
In particular, taking $K=\{\mu\}$ one has
$$h_{\operatorname{top}}(f,G_{\mu})\leq h_\mu(f).$$
\end{Prop}By the above proposition, to prove
Theorem \ref{main theorem of measure} and Theorem \ref{main theorem
of set}, it suffices to show the following theorems.
\begin{Thm}\label{main theorem of measure1} For every $\mu\in \mathcal{M}_{inv}(\widetilde{\Lambda},
f)$, we have
$$h_{\operatorname{top}}(f,G_{\mu})\geq h_\mu(f).$$
\end{Thm}
\begin{Thm}\label{main theorem of set1}Let $\eta=\{\eta_n\}$ be a sequence decreasing to zero and
$\mathcal{M}(\widetilde{\Lambda},\eta)\subset\mathcal{M}_{inv}(\widetilde{\Lambda},f)
$ be the set of measures with hyperbolic rate $\eta$. Given any
nonempty compact connected set $K\subset
\mathcal{M}(\widetilde{\Lambda},\eta)$, we have
$$h_{\operatorname{top}}(f,G_K)\geq\inf\{h_{\mu}(f)\mid \mu\in K\}.$$
\end{Thm}
\begin{Rem}Let $\mu\in \mathcal{M}_{inv}(M,f)$ and $K\subset
\mathcal{M}_{inv}(M,f)$ be a nonempty compact connected set. In \cite{PS}, C. E. Pfister and W. G. Sullivan proved that\\ (1) with the almost product property (for the detailed definition, see \cite{PS}), it
holds that
$$h_{\operatorname{top}}(f,G_{\mu})= h_\mu(f);$$
(2) with the almost product property plus uniform separation (for the detailed definition, see \cite{PS}), it holds that
$$h_{\operatorname{top}}(f,G_K)=\inf\{h_{\mu}(f)\mid \mu\in K\}.
$$
However, for nonuniformly hyperbolic systems, the shadowing and separation properties are inherited from the weak hyperbolicity of the Lyapunov neighborhoods, which varies with the index $k$ of the Pesin blocks $\Lambda_k$; hence, in general, the almost product property and uniform separation both fail.
\end{Rem}
\section{Proofs of Theorem \ref{main theorem of measure} and Theorem \ref{main theorem of measure1}}
In this section, we will verify Theorem \ref{main theorem of measure1} and thus complete the proof of Theorem \ref{main theorem of measure} by Proposition \ref{generical lemmas}.
For
each ergodic measure $\nu$, we use Katok's definition of metric entropy (see \cite{Katok2}). For $x,y\in M$ and $n\in \mathbb{N}$,
let
$$d^n(x,y)=\max_{0\leq i\leq n-1}d(f^i(x),\,f^i(y)).$$ For
$\varepsilon,\,\delta>0$, let $N_n(\varepsilon,\,\delta)$ be the
minimal number of $\varepsilon$-Bowen balls $B_n(x,\,\varepsilon)$ in the $d^n$-metric which cover a set of $\nu$-measure at least
$1-\delta$. We define
$$h_{\nu}^{Kat}(f,\varepsilon\mid \delta)=\limsup_{n\rightarrow\infty}\frac{\log
N_n(\varepsilon,\,\delta)}{n}.$$ It follows by Theorem 1.1 of
\cite{Katok2} that
$$h_{\nu}(f)=\lim_{\varepsilon\rightarrow0}h_{\nu}^{Kat}(f,\varepsilon\mid \delta).$$
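We note in passing an elementary monotonicity which will be used implicitly below: if $\varepsilon_1\leq\varepsilon_2$, then every Bowen ball satisfies $B_n(x,\varepsilon_1)\subset B_n(x,\varepsilon_2)$, hence
$$N_n(\varepsilon_1,\delta)\geq N_n(\varepsilon_2,\delta)\qquad\mbox{and}\qquad h_{\nu}^{Kat}(f,\varepsilon_1\mid \delta)\geq h_{\nu}^{Kat}(f,\varepsilon_2\mid \delta);$$
thus $h_{\nu}^{Kat}(f,\varepsilon\mid \delta)$ increases as $\varepsilon$ decreases to $0$, which is what allows the Monotone Convergence Theorem to be applied below.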
Recall that $\mathcal{M}_{erg}(M,f)$ denotes the set of all ergodic $f$-invariant measures on $M$. Assume that $\mu=\int_{\mathcal{M}_{erg}(M,f)}\nu\, d\tau(\nu)$ is the ergodic decomposition of $\mu$; then by Jacobs' Theorem
$$h_{\mu}(f)=\int_{\mathcal{M}_{erg}(M,f)}h_{\nu}(f)d\tau(\nu).$$
Define
$$h_{\mu}^{Kat}(f,\varepsilon\mid \delta)\triangleq \int_{\mathcal{M}_{erg}(M,f)}h_{\nu}^{Kat}(f,\varepsilon\mid \delta)d\tau(\nu).$$
By the Monotone Convergence Theorem, we have
$$h_{\mu}(f)=\int_{\mathcal{M}_{erg}(M,f)}\lim_{\varepsilon\rightarrow0}h_{\nu}^{Kat}(f,\varepsilon\mid \delta)d\tau(\nu)
=\lim_{\varepsilon\rightarrow0}h_{\mu}^{Kat}(f,\varepsilon\mid
\delta).$$
\noindent{\bf Proof of Theorem \ref{main theorem of
measure1}}\,\,\,Assume $\{\varphi_i\}_{i=1}^{\infty}$ is a dense subset of $C^0(M)$ generating the weak$^*$ topology, that is,
$$D(\mu,\,\nu)=\sum_{i=1}^{\infty}\frac{|\int \varphi_id\mu-\int \varphi_id\nu|}{2^{i+1}\|\varphi_i\|}$$
for $\mu,\nu\in\mathcal{M}(M)$. It is easy to check the affine
property of $D$, i.e., for any $\mu,\,m_1,\,m_2\in\mathcal{M}(M)$
and $0\leq\theta\leq1$,
\begin{eqnarray*}
D(\mu,\,\theta m_1+(1-\theta)m_2 )\leq \theta D(\mu,
m_1)+(1-\theta)D(\mu, m_2).\end{eqnarray*} In addition,
$D(\mu,\nu)\leq 1$ for any $\mu,\nu\in \mathcal{M}(M)$. For any
integer $k\geq1$ and $\varphi_1,\cdots,\varphi_k$, there exists
$b_k>0$ such that
\begin{eqnarray}\label{points approximation}
|\varphi_j(x)-\varphi_j(y)|<\frac{1}{k}\|\varphi_j\|\,\,\,\mbox{for
any}\,\,d(x,y)<b_k,\,\,1\leq j\leq k.\end{eqnarray}
Now fix
$\varepsilon,\delta>0$.
\begin{Lem}\label{rational approximation}For any integer $k\geq1$ and invariant measure $\mu$, we can take a
finite convex combination of ergodic probability measures with
rational coefficients,
$$\mu_k=\sum_{j=1}^{p_k}a_{k,j}\,m_{k,j}$$
such that
\begin{eqnarray}\label{measures approximation}
D(\mu,\mu_k)<\frac{1}{k},\,\,\,m_{k,j}(\widetilde{\Lambda})=1\,\,\,\mbox{
and}\,\,\, |h_{\mu}^{Kat}(f,\varepsilon\mid
\delta)-h_{\mu_k}^{Kat}(f,\varepsilon\mid
\delta)|<\frac1k.\end{eqnarray}
\end{Lem}
\begin{proof}
From the ergodic decomposition, we get
$$
\int_{\widetilde{\Lambda}} \varphi_i
d\mu=\int_{\mathcal{M}_{erg}(\widetilde{\Lambda}, f)}
\int_{\widetilde{\Lambda}} \varphi_i dm \,d\tau(m),\quad 1\leq i\leq
k.
$$
Using the definition of the Lebesgue integral, we proceed in the following steps.
First, we denote
$$A_{+}:= \max_{1\leq i \leq k}\sup_{m\in\mathcal{M}_{erg}(\widetilde{\Lambda},f)}\int_{\widetilde{\Lambda}} \varphi_i\, d m+1, \,\,\,A_{-}:= \min_{1\leq i \leq k}\inf_{m\in\mathcal{M}_{erg}(\widetilde{\Lambda},f)}\int_{\widetilde{\Lambda}} \varphi_i\, d m -1,$$
$$ F_{+}:=\sup_{m\in\mathcal{M}_{erg}(\widetilde{\Lambda},f)}h_{m}^{Kat}(f,\varepsilon\mid\delta)+1,\,\,\, F_{-}:=\inf_{m\in\mathcal{M}_{erg}(\widetilde{\Lambda},f)}h_{m}^{Kat}(f,\varepsilon\mid\delta)-1.$$ It is easy to see that
$$-\infty <A_{-}< A_{+}<+\infty,\,\,\,-\infty <F_{-}< F_{+}<+\infty.$$
For any integer $n>0$, let $$y_{0}=A_{-},\,\,
y_{j}-y_{j-1}=\frac{A_{+}-A_{-}}{n},\,\,y_{n}=A_{+},$$
$$F_{0}=F_{-},\,\,
F_{j}-F_{j-1}=\frac{F_{+}-F_{-}}{n},\,\,F_{n}=F_{+}.$$ We define measurable partitions of $\mathcal{M}_{erg}(\widetilde{\Lambda},f)$ by the sets $E_{i,j}$ and $\mathcal{F}_{s}$, where
$$E_{i,j}=\{m\in \mathcal{M}_{erg}(\widetilde{\Lambda},f) \mid y_{j}\leq \int_{\widetilde{\Lambda}} \varphi_i\, dm\leq y_{j+1}\}, $$
$$\mathcal{F}_{s}=\{m\in \mathcal{M}_{erg}(\widetilde{\Lambda},f)\mid F_{s}\leq h_{m}^{Kat}(f,\varepsilon\mid\delta) \leq F_{s+1}\}.$$ Noticing that
$\bigcup_{j}E_{i,j}=\mathcal{M}_{erg}(\widetilde{\Lambda},f)$ for each $i$ and
$\bigcup_{s}\mathcal{F}_{s}=\mathcal{M}_{erg}(\widetilde{\Lambda},f)$, we can
choose a new partition $\,\xi\,$ defined as
$$\xi=\Big(\bigwedge_{i=1}^{k}\{E_{i,j}\}_{j=0}^{n-1}\Big)\bigwedge\{\mathcal{F}_{s}\}_{s=0}^{n-1},$$ where $\varsigma\bigwedge \zeta$
is given by $\{A\cap B\mid \,A\in \varsigma,\,\,B\in \zeta\}$.
For convenience, denote
$\xi=\{\xi_{k,1},\xi_{k,2},\cdots,\xi_{k,p_{k}}\}$. To finish the
proof of Lemma \ref{rational approximation}, we can take $n$ large enough such that any combination
$$\mu_k=\sum_{j=1}^{p_k}a_{k,j}\,m_{k,j},$$ where $m_{k,j}\in \xi_{k,j}$ and the $a_{k,j}>0$ are rational numbers with $|a_{k,j}-\tau(\xi_{k,j})|<\frac{1}{2k}$, satisfies
\begin{eqnarray*}
D(\mu,\mu_k)<\frac{1}{k},\,\,\,m_{k,j}(\widetilde{\Lambda})=1\,\,\,\mbox{
and}\,\,\, |h_{\mu}^{Kat}(f,\varepsilon\mid
\delta)-h_{\mu_k}^{Kat}(f,\varepsilon\mid \delta)|<\frac1k.
\end{eqnarray*}
\end{proof}
For each $k$, we can find $l_k$ such that
$m_{k,j}(\widetilde{\Lambda}_{l_k})>1-\delta$ for all $1\leq j\leq
p_k$. Recall that $\epsilon_{l_k}$ is the scale of the Lyapunov neighborhoods associated with the Pesin block $\Lambda_{l_k}$. For
any $x\in \Lambda_{l_k}$, $Df$ exhibits uniform hyperbolicity in
$B(x,\epsilon_{l_k})$. For $c=\frac{\varepsilon}{8\epsilon_0}$, by
Theorem \ref{specification} there is a sequence of numbers
$(\delta_k)_{k=1}^\infty$. Let $\xi_k$ be a finite partition of $M$
with $\mbox{diam}\,\xi_k
<\min\{\frac{b_k(1-e^{-\epsilon})}{4\sqrt{2}e^{(k+1)\epsilon}},\epsilon_{l_k},\delta_{l_k}\}$
and $\xi_k>\{\widetilde{\Lambda}_{l_k},M\setminus\widetilde{
\Lambda}_{l_k}\}$. Given $t\in \mathbb{N}$, consider the
set\begin{eqnarray*} \Lambda^t(m_{k,j})&=&\big{\{}x\in
\widetilde{\Lambda}_{l_k}\mid f^q(x)\in \xi_k(x)~ \mbox{for some} ~
q\in\big[t,\,[(1+{\frac1{k}})t]\,\big]\\[2mm]
&&\mbox{and}~
D(\mathcal{E}_n(x),m_{k,j})<\frac1k\,\,\mbox{ for all}\,\,n\geq
t\big{\}},
\end{eqnarray*}
where $\xi_k(x)$ denotes the element in the partition $\xi_k$ which
contains the point $x$. Before proceeding with the proof, we give the following claim.
{\bf Claim}
$$m_{k,j}(\Lambda^t(m_{k,j}))\rightarrow m_{k,j}(\widetilde{\Lambda}_{l_k})\,\,\,\mbox{as}\,\,t\rightarrow +\infty.$$
\begin{proof}\,\,By the ergodicity of $m_{k,j}$ and the Birkhoff Ergodic Theorem, we know that for $m_{k,j}$-a.e. $x\in \widetilde{\Lambda}_{l_k}$, it holds that $$\lim_{n\to\infty}\mathcal{E}_{n}(x)=m_{k,j}.$$ So we only need to prove that the set $$ \Lambda^t_1(m_{k,j})=\big{\{}x\in \widetilde{\Lambda}_{l_k}\mid f^q(x)\in \xi_k(x)~ \mbox{for some} ~ q\in\big[t,\,[(1+{\frac1{k}})t]\,\big]\big{\}}$$ satisfies
$$m_{k,j}(\Lambda^t_1(m_{k,j}))\rightarrow m_{k,j}(\widetilde{\Lambda}_{l_k})\,\,\,\mbox{as}\,\,t\rightarrow +\infty.$$
We next need the following quantitative Poincar\'e Recurrence Theorem (see Lemma 3.12 in \cite{Bochi} for more detail).
\begin{Lem}\label{Reccurence Thm}
Let $f$ be a $C^1$ diffeomorphism preserving an invariant measure $\mu$ supported on $M$. Let $\Gamma\subset M$ be a measurable set with $\mu(\Gamma)> 0$ and let
$$\Omega =\cup_{n\in\mathbb{Z}}f^n(\Gamma).$$
Take $\gamma> 0$. Then there exists a measurable function $N_0 : \Omega\to \mathbb{N}$ such that for a.e. $x\in\Omega$, every $n\geq N_0(x)$ and every $t\in [0, 1]$ there is some $l\in\{0, 1, \ldots,n\}$ such that $f^l(x)\in\Gamma$ and $|(l/n)-t | < \gamma$.
\end{Lem}
\begin{Rem}
With a slight modification (more precisely, replacing the interval $(n(t-\gamma),n(t+\gamma))$ by $(nt,n(t+\gamma))$), one can require that $0\leq(l/n)-t<\gamma$ in the above lemma. Hence we have $l\in [nt,n(t+\gamma)]$.
\end{Rem}
Take an element $\xi^l_k$ of the partition $\xi_k$. Let
$\Gamma=\xi^l_k$, $\gamma=\frac1k$. Applying Lemma \ref{Reccurence
Thm} and its remark, we can deduce that for a.e. $x\in\xi^l_k$ there exists a measurable function $N_0$ such that for every $t\geq N_0(x)$ there is some $q$ with $f^q(x)\in\xi^l_k=\xi_k(x)$ and $q\in [t,t(1+\frac1k)]$. That is to say, $t\geq N_0(x)$ implies $x\in\Lambda^t_1(m_{k,j})$, and this property holds for a.e. $x\in\xi^l_k$. Hence it is true for a.e. $x\in\widetilde{\Lambda}_{l_k}$. This completes the proof of the claim.
\end{proof}
Now we continue the proof of Theorem \ref{main theorem of measure1}. By the above claim, we can take $t_k$ such that
$$m_{k,j}(\Lambda^{t}(m_{k,j}))>1-\delta$$ for all $t\geq t_k$ and
$1\leq j\leq p_k$.
Let $E_t(k,j)\subset \Lambda^{t}(m_{k,j})$ be a $(t,\varepsilon)$-separated set of
maximal cardinality. Then $\Lambda^{t}(m_{k,j})\subset \cup_{x\in
E_t(k,j)}B_{t}(x,\varepsilon) $, and by the definition of Katok's
entropy there exist infinitely many $t$ satisfying
$$\sharp\,E_t(k,j)\geq e^{t(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac1k)}.$$
For each $q\in \big[t,\,[(1+{\frac1{k}})t]\,\big]$, let
$$V_q=\{x\in E_t(k,j) \mid f^q(x)\in \xi_k(x)\}$$ and let $n=n(k,j)$ be
the value of $q$ which maximizes $\sharp\,V_q$. Obviously, \begin{eqnarray}\label{n t} t\geq
\frac{n}{1+\frac1k}\geq n(1-\frac{1}{k}).\end{eqnarray} Since
$e^{\frac{t}{k}}>\frac{t}{k}$, we deduce that
$$\sharp\, V_n\geq \frac{\sharp\,E_t(k,j)}{\frac{t}{k}}\geq e^{t(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid \delta)-\frac3k)}.$$
Consider the element $A_n(m_{k,j})\in \xi_k$ for which
$\sharp\,(V_n\cap A_n(m_{k,j})) $ is maximal. It follows that
$$\sharp\,(V_n\cap A_n(m_{k,j}))\geq \frac{1}{\sharp\,\xi_k}\sharp\,V_n\geq
\frac{1}{\sharp\,\xi_k}e^{t(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid
\delta)-\frac3k)}.$$ Thus taking $t$ large enough so that
$e^{\frac{t}{k}}>\sharp\,\xi_k$, we have by inequality (\ref{n t}) that
\begin{eqnarray}\label{count}\sharp\,(V_n\cap A_n(m_{k,j}))\geq e^{t(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid
\delta)-\frac4k)}\geq
e^{n(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid
\delta)-\frac4k)}.\end{eqnarray}
Notice that $A_{n(k,j)}(m_{k,j})$ is contained in an open subset
$U(k,j)$ of some Lyapunov neighborhood with
$\operatorname{diam}(U(k,j))<2\operatorname{diam}(\xi_k)$. By the ergodicity of $\omega$, for
any two measures $m_{k_1,j_1}, m_{k_2,j_2}$ we can find
$y=y(m_{k_1,j_1},m_{k_2,j_2})\in
U(k_1,j_1)\cap\widetilde{\Lambda}_{l_{k_1}}$ satisfying that for
some $s=s(m_{k_1,j_1},m_{k_2,j_2})$ one has
$$f^{s}(y)\in U(k_2,j_2)\cap\widetilde{\Lambda}_{l_{k_2}}.$$
Letting $C_{k,j}=\frac{a_{k,j}}{n(k,j)}$, we can choose an integer
$N_k$ large enough so that $N_kC_{k,j}$ are integers and
$$N_k\geq k\sum_{1\leq r_1,r_2\leq k+1, 1\leq j_i\leq p_{r_i},i=1,2}s(m_{r_1,j_1},m_{r_2,j_2}).$$
Arbitrarily take $x(k,j)\in A_{n(k,j)}(m_{k,j})\cap V_{n(k,j)}$. Define
\begin{eqnarray*}
X_k&=&\sum_{j=1}^{p_k-1}s(m_{k,j},m_{k,j+1})+s(m_{k,p_k},m_{k,1})\\[2mm]
Y_k&=&\sum_{j=1}^{p_k}N_kn(k,j)C_{k,j}+X_k=N_k+X_k.\end{eqnarray*}
So,
\begin{eqnarray}\label{small bridge}\frac{N_k}{Y_k}\geq
\frac{1}{1+\frac1k}\geq 1-\frac1k.\end{eqnarray}
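For completeness, we record why $(\ref{small bridge})$ holds (a one-line verification based only on the choice of $N_k$): since $X_k$ is a sub-sum of the sum appearing in the choice of $N_k$, we have $X_k\leq N_k/k$, and therefore
$$\frac{N_k}{Y_k}=\frac{N_k}{N_k+X_k}\geq\frac{N_k}{N_k+N_k/k}=\frac{1}{1+\frac1k}\geq1-\frac1k.$$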
We
further choose a strictly increasing sequence $\{T_k\}$ with $T_k\in
\mathbb{N}$,
\begin{eqnarray}\label{circle1}Y_{k+1}&\leq& \frac{1}{k+1} \sum
_{r=1}^{k}Y_rT_r,\\[2mm]
\label{circle2}\sum_{r=1}^{k}(Y_rT_r+s(m_{r,1},m_{r+1,1}))&\leq&
\frac{1}{k+1}Y_{k+1}T_{k+1}.\end{eqnarray}
In order to obtain shadowing points $z$ with our desired property
$\mathcal{E}_n(z)\rightarrow \mu$ as $n\rightarrow +\infty$, we
first construct pseudo-orbits with satisfactory properties in the measure-theoretic sense.
For simplicity of the statement, for $x\in M$ define segments of orbits
\begin{eqnarray*}L_{k,j}(x)&\triangleq&(x,f(x),\cdots,
f^{n(k,j)-1}(x)),\,\,\,1\leq j\leq p_k,\\[2mm]
\widehat{L}_{k_1,j_1;k_2,j_2}(x)&\triangleq&(x,f(x)\cdots,
f^{s(m_{k_1,j_1},m_{k_2,j_2})-1}(x)),\,\,1\leq j_i\leq
p_{k_i},i=1,2.\end{eqnarray*}
% [Figure omitted in source] Caption: Quasi-orbits
Consider now the pseudo-orbit \begin{eqnarray}
\label{quasi orbits}\quad \quad O&=&O(x(1,1;1,1),\cdots,x(1,1; 1, N_1C_{1,1}),\cdots,x(1,p_1; 1, 1),\cdots,x(1,p_1; 1, N_1C_{1,p_1});\nonumber\\[2mm]
&&\cdots;\nonumber \\[2mm]
&&x(1,1;T_1,1),\cdots,x(1,1; T_1, N_1C_{1,1}),\cdots,x(1,p_1; T_1, 1),\cdots,x(1,p_1; T_1, N_1C_{1,p_1});\nonumber\\[2mm]
&&\vdots\nonumber \\[2mm]
&&x(k,1; 1, 1),\cdots,x(k,1; 1, N_kC_{k,1}),\cdots,x(k,p_k; 1, 1),\cdots,x(k,p_k; 1, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1;T_k,1),\cdots,x(k,1; T_k, N_kC_{k,1}),\cdots,x(k,p_k; T_k, 1),\cdots,x(k,p_k; T_k, N_kC_{k,p_k});\nonumber\\[2mm]
&&\vdots\nonumber \\[2mm]
&&\cdots\,\,)\nonumber\end{eqnarray}
with the precise form as follows
\begin{eqnarray*} \label{precise quasi orbits} &\big{\{}&\,\,[\,L_{1,1}(x(1,1; 1,1)),\cdots,L_{1,1}(x(1,1; 1, N_1C_{1,1})),\widehat{L}_{1,1,1,2}(y(m_{1,1},m_{1,2}));\\[2mm]
&&L_{1,2}(x(1,2; 1,1)),\cdots,L_{1,2}(x(1,2;1, N_1C_{1,2})),\widehat{L}_{1,2,1,3}(y(m_{1,2},m_{1,3})); \cdots\nonumber\\[2mm]
&&L_{1,p_1}(x(1,p_1;1, 1)),\cdots,L_{1,p_1}(x(1,p_1;1, N_1C_{1,p_1})),\widehat{L}_{1,p_1,1,1}(y(m_{1,p_1},m_{1,1}))\,\,;\nonumber \\[2mm]
&&\cdots\nonumber\\[2mm]
&&\,L_{1,1}(x(1,1; T_1, 1)),\cdots,L_{1,1}(x(1,1; T_1, N_1C_{1,1})),\widehat{L}_{1,1,1,2}(y(m_{1,1},m_{1,2}));\nonumber\\[2mm]
&&L_{1,2}(x(1,2;T_1, 1)),\cdots,L_{1,2}(x(1,2; T_1, N_1C_{1,2})),\widehat{L}_{1,2,1,3}(y(m_{1,2},m_{1,3})); \cdots\nonumber\\[2mm]
&&L_{1,p_1}(x(1,p_1;T_1, 1)),\cdots,L_{1,p_1}(x(1,p_1;T_1, N_1C_{1,p_1})),\widehat{L}_{1,p_1,1,1}(y(m_{1,p_1},m_{1,1}))\,]\,;\nonumber \\[2mm]
&&\widehat{L}(y(m_{1,1},m_{2,1}));\nonumber\\[2mm]
&&\vdots\nonumber\\[2mm]
&&\,[\,L_{k,1}(x(k,1; 1,1)),\cdots,L_{k,1}(x(k,1;1, N_kC_{k,1})),\widehat{L}_{k,1,k,2}(y(m_{k,1},m_{k,2}));\nonumber\\[2mm]
&&L_{k,2}(x(k,2; 1, 1)),\cdots,L_{k,2}(x(k,2;1, N_kC_{k,2})),\widehat{L}_{k,2,k,3}(y(m_{k,2},m_{k,3})); \cdots\nonumber \\[2mm]
&&L_{k,p_k}(x(k,p_k; 1, 1)),\cdots,L_{k,p_k}(x(k,p_k;1, N_kC_{k,p_k})),\widehat{L}_{k,p_k,k,1}(y(m_{k,p_k},m_{k,1}))\,\,;\nonumber\\[2mm]
&&\widehat{L}(y(m_{k,1},m_{k+1,1}));\nonumber\\[2mm]
&&\cdots\,\,\,\nonumber\\[2mm]
&&L_{k,1}(x(k,1; T_k,1)),\cdots,L_{k,1}(x(k,1;T_k, N_kC_{k,1})),\widehat{L}_{k,1,k,2}(y(m_{k,1},m_{k,2}));\nonumber\\[2mm]
&&L_{k,2}(x(k,2; T_k, 1)),\cdots,L_{k,2}(x(k,2;T_k, N_kC_{k,2})),\widehat{L}_{k,2,k,3}(y(m_{k,2},m_{k,3})); \cdots\nonumber \\[2mm]
&&L_{k,p_k}(x(k,p_k; T_k, 1)),\cdots,L_{k,p_k}(x(k,p_k;T_k, N_kC_{k,p_k})),\widehat{L}_{k,p_k,k,1}(y(m_{k,p_k},m_{k,1}))\,]\,;\nonumber\\[2mm]
&&\widehat{L}(y(m_{k,1},m_{k+1,1}));\nonumber\\[2mm]
&&\vdots\,\,\,\nonumber\\[2mm]
&&\cdots
\big{\}},\end{eqnarray*}
where $x(k,j;i,t)\in V_{n(k,j)}\cap A_{n(k,j)}(m_{k,j})$.
For $k\geq 1$, $1\leq i\leq T_k$, $1\leq j\leq p_k$, $t\geq 1$, let
$M_1=0$,
\begin{eqnarray*}M_k&=&M_{k,1}=\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1})),\\[2mm]
M_{k,i}&=&M_{k,i,1}=M_k+(i-1)Y_k,
\\[2mm]
M_{k,i,j}&=& M_{k,i,j,1}=M_{k,i}
+\sum_{q=1}^{j-1}(N_k\,n(k,q)C_{k,q}+s(m_{k,q},m_{k,q+1})),\\[2mm]
M_{k,i,j,t}&=&M_{k,i,j}+(t-1)n(k,j).\end{eqnarray*}
By
Theorem \ref{specification}, there exists a shadowing point $z$ of
$O$ such that
$$d(f^{M_{k,i,j,t}+q}(z),f^q(x(k,j;i, t)))<c\epsilon_0e^{-\epsilon l_k}<\frac{\varepsilon}{4\epsilon_0}\epsilon_0e^{-\epsilon l_k}
\leq \frac{\varepsilon}{4},$$ for $0\leq q\leq n(k,j)-1,$ $1\leq i
\leq T_k$, $1\leq t\leq N_kC_{k,j}$, $1\leq j\leq p_k$. To be
precise, $z$ can be considered as a map with variables
$x(k,j;i,t)$:
\begin{eqnarray}
\quad \quad z&=&z(x(1,1;1,1),\cdots,x(1,1; 1, N_1C_{1,1}),\cdots,x(1,p_1; 1, 1),\cdots,x(1,p_1; 1, N_1C_{1,p_1});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(1,1;T_1,1),\cdots,x(1,1; T_1, N_1C_{1,1}),\cdots,x(1,p_1; T_1, 1),\cdots,x(1,p_1; T_1, N_1C_{1,p_1});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1; 1, 1),\cdots,x(k,1; 1, N_kC_{k,1}),\cdots,x(k,p_k; 1, 1),\cdots,x(k,p_k; 1, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1;T_k,1),\cdots,x(k,1; T_k, N_kC_{k,1}),\cdots,x(k,p_k; T_k, 1),\cdots,x(k,p_k; T_k, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&\cdots\,\,)\nonumber\end{eqnarray} We denote by $\mathcal{J}$ the set of
all shadowing points $z$ obtained in above procedure.
\begin{Lem}\label{converge} $\overline{\mathcal{J}}\subset G_{\mu}$. \end{Lem}
\begin{proof} First we prove that for any $z\in \mathcal{J}$,
$$\lim_{k\rightarrow+\infty}\mathcal{E}_{M_{k}}(z)=\mu.$$
We begin by estimating $d(f^{M_{k,i,j,t}+q}(z),f^q(x(k,j;i,t)))$ for
$0\leq q\leq n(k,j)-1.$ Recall that in the procedure of finding the shadowing point $z$, all the constructions are done in the Lyapunov neighborhoods $\Pi(x(k,j;i,t),a\epsilon_{l_k})$. Moreover, notice
that we have required
$\mbox{diam}\,\xi_k<\frac{b_k(1-e^{-\epsilon})}{4\sqrt{2}e^{(k+1)\epsilon}}$
which implies that for every two adjacent orbit segments $x(k,j;
i_1, t_1)$ and $x(k,j; i_2, t_2)$, the ending point of the preceding orbit segment and the beginning point of the following segment are $\frac{b_k(1-e^{-\epsilon})}{4\sqrt{2}e^{(k+1)\epsilon}}$-close to each other. Let $y$ be the unique intersection point of the admissible
manifolds $H^s(z)$ and $H^u(x)$. In what follows, define $d''$ to be
the distance induced by $\|\cdot\|''$ in the local Lyapunov
neighborhoods. By the hyperbolicity of $Df$ in the Lyapunov
coordinates \footnote{This hyperbolic property is crucial in the
estimation of distance along adjacent segments, so the weak
shadowing lemma \ref{specification} (which is actually stated in a topological way) does not suffice to conclude Theorem \ref{main
theorem of measure} and the following Theorem \ref{main theorem of
set}.}, we obtain
\begin{eqnarray*}&&d(f^{M_{k,i,j,t}+q}(z),f^q(x(k,j; i, t)))\\[2mm]
&\leq&
d(f^{M_{k,i,j,t}+q}(z),f^q(y))+d(f^q(y),f^q(x(k,j; i, t)))\\[2mm]
&\leq&\sqrt{2}d''(f^{M_{k,i,j,t}+q}(z),f^q(y))+\sqrt{2}d''(f^q(y),f^q(x(k,j; i, t)))\\[2mm]
&\leq&\sqrt{2}e^{-(\beta_1''-\epsilon)q}d''(f^{M_{k,i,j,t}}(z),y)+\sqrt{2}e^{-(\beta_2''-\epsilon)(n(k,j)-q)}d''(f^{n(k,j)}(y),f^{n(k,j)}(x(k,j; i, t)))\\[2mm]
&\leq &\sqrt{2}
\max\{e^{-(\beta_1''-\epsilon)q},e^{-(\beta_2''-\epsilon)(n(k,j)-q)}\}(d''(f^{M_{k,i,j,t}}(z),y)\\[2mm]&&+d''(f^{n(k,j)}(y),f^{n(k,j)}(x(k,j; i, t))))\\[2mm]
&\leq&\frac{2\sqrt{2}e^{\epsilon
(k+1)}}{1-e^{-\epsilon}}(d(f^{M_{k,i,j,t}}(z),y)+d(f^{n(k,j)}(y),f^{n(k,j)}(x(k,j; i, t))))\\[2mm]
&\leq& \frac{2\sqrt{2}e^{\epsilon
(k+1)}}{1-e^{-\epsilon}} 2\operatorname{diam} (\xi_k)\\[2mm]
&<&b_k
\end{eqnarray*} for
$0\leq q\leq n(k,j)-1$. Now we can deduce that
$$|\varphi_p(f^{M_{k,i,j,t}+q}(z))-\varphi_p(f^q(x(k,j; i,t)))|<\frac{1}{k}\|\varphi_p\|,\,\,\,\,1\leq p\leq k,$$
which implies that
\begin{eqnarray}\label{approximation2}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j,t}}(z)),\mathcal{E}_{n(k,j)}(x(k,j; i, t)))<\frac1k+\frac{1}{2^{k-1}}<\frac2k,
\end{eqnarray}
for sufficiently large $k$. By the triangle inequality, we have
\begin{eqnarray*}
D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu)&\leq&
D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu_k)+\frac1k\\[2mm]
&\leq&D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)))\\[2mm]
&&+D(\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),\,\mu_k)+\frac1k
\end{eqnarray*}
Note that for any $\varphi\in C^0(M)$, it holds
\begin{eqnarray*}
& &\Big|\int\varphi\, d\mathcal{E}_{Y_k}(f^{M_{k,i}}(z))-\int\varphi\, d\Big(\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z))\Big)\Big|\\[2mm]
&=&\Big|\frac1{Y_k}\sum_{q=1}^{Y_k-1}\varphi(f^{M_{k,i}+q}(z))-\frac1{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}\sum_{q=1}^{n(k,j)-1}\varphi(f^{M_{k,i,j}+q}(z))\Big|\\[2mm]
&\leq&\Big|\frac1{Y_k}\sum_{j=1}^{p_k}N_kC_{k,j}\sum_{q=1}^{n(k,j)-1}\varphi(f^{M_{k,i,j}+q}(z))-\frac1{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}\sum_{q=1}^{n(k,j)-1}\varphi(f^{M_{k,i,j}+q}(z))\Big|\\[2mm]
&&+\Big|\frac1{Y_k}\Big(\sum_{j=1}^{p_k-1}\sum_{q=1}^{s(m_{k,j},\,m_{k,j+1})-1}\varphi(f^{M_{k,i,j}-s(m_{k,j},\,m_{k,j+1})+q}(z))\\[2mm]
&&+\sum_{q=1}^{s(m_{k,p_k},\,m_{k,1})-1}\varphi(f^{M_{k,i,j}-s(m_{k,p_k},\,m_{k,1})+q}(z))\Big)\Big|\\[2mm]
&\leq&\Big[\Big|\Big(\frac{1}{Y_k}-\frac{1}{Y_k-X_k}\Big)(Y_k-X_k)\Big|+\frac{X_k}{Y_k}\Big]\|\varphi\|.
\end{eqnarray*}
Then by the definition of $D$, the above inequality implies that
\begin{eqnarray*}
&&D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)))\\&\leq&
|(\frac{1}{Y_k}-\frac{1}{Y_k-X_k})(Y_k-X_k)|+\frac{X_k}{Y_k}.
\end{eqnarray*}
Thus, by the affine property of $D$, together
with the property $a_{k,j}=n(k,j)C_{k,j}$ and $N_k=Y_k-X_k$, we have
\begin{eqnarray*}
D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu)&\leq&
D(\frac{1}{Y_k-X_k}\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),
\sum_{j=1}^{p_k}a_{k,j}m_{k,j})\\[2mm]&&+|(\frac{1}{Y_k}-\frac{1}{Y_k-X_k})(Y_k-X_k)|+\frac{X_k}{Y_k}+\frac{1}{k}\\[2mm]
&\leq&\frac{N_k}{Y_k-X_k}\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),m_{k,j})+\frac{2X_k}{Y_k}+\frac{1}{k}\\[2mm]
&=&\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),m_{k,j})+\frac{2X_k}{Y_k}+\frac{1}{k}.
\end{eqnarray*}
Noting that
\begin{eqnarray*}
&&\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),m_{k,j})\\[2mm]
&\leq&\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),\mathcal{E}_{n(k,j)}(x(k,j)))
+\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(x(k,j)),\,m_{k,j})
\end{eqnarray*}
and by the definition of $\Lambda^t(m_{k,j})$, to which all the $x(k,j)$ belong, and by (\ref{approximation2}), we can further deduce that
\begin{eqnarray*}
D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu)&\leq&\sum_{j=1}^{p_k}a_{k,j}D(\mathcal{E}_{n(k,j)}(f^{M_{k,i,j}}(z)),\mathcal{E}_{n(k,j)}(x(k,j)))
+\frac1k+\frac{2X_k}{Y_k}+\frac{1}{k}\\[2mm]
&\leq&\frac{2}{k}+\frac1k+\frac{2X_k}{Y_k}+\frac{1}{k}\\[2mm]
&\leq&\frac6k\,\,\,\,\,\,(\mbox{by}\,(\ref{small bridge})).
\end{eqnarray*}
Hence, by the affine property, inequalities (\ref{circle1}) and (\ref{circle2}), and $D(\cdot,\cdot)\leq1$, we obtain that
\begin{eqnarray*}
D(\mathcal{E}_{M_{k+1}}(z),\,\mu)&\leq&
\frac{\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))+s(m_{k,1},m_{k+1,1})}{T_kY_k+\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))+s(m_{k,1},m_{k+1,1})}
\\[2mm]
&&+\frac{T_kY_k}{T_kY_k+\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))}D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\mu)\\[2mm]
&\leq&
\frac{\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))+s(m_{k,1},m_{k+1,1})}{T_kY_k+\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))+s(m_{k,1},m_{k+1,1})}
\\[2mm]
&&+\frac{T_kY_k}{T_kY_k+\sum_{r=1}^{k-1}(T_rY_r+s(m_{r,1},m_{r+1,1}))}\frac6k\\[2mm]
&\leq&\frac8k.
\end{eqnarray*}
Thus,
$$\lim_{k\rightarrow+\infty}\mathcal{E}_{M_k}(z)=\mu.$$
For $M_{k,i}\leq n\leq M_{k,i+1}$ (here we adopt the convention $M_{k,T_k+1}=M_{k+1,1}$), it follows that
\begin{eqnarray*}
D(\mathcal{E}_{n}(z),\,\mu)&\leq&
\frac{M_{k}}{n}D(\mathcal{E}_{M_{k}}(z),\mu)+\frac{1}{n}
\sum_{p=1}^{i-1}D(\mathcal{E}_{Y_k}(f^{M_{k,p-1}}(z)),\mu)\,\,\,\,(\mbox{by affine property})\\[2mm]
&&+\frac{n-M_{k,i}}{n}D(\mathcal{E}_{n-M_{k,i}}(f^{M_{k,i}}(z)),\mu)
\\[2mm]
&\leq&
\frac{M_{k}}{n}\frac8k+\frac{(i-1)Y_k}{n}\frac6k+\frac{Y_k+s(m_{k,1},m_{k+1,1})}{n}\\[2mm]
&\leq&\frac{15}k\,\,\,\,\,\,(\mbox{by}\,(\ref{circle1})
\,\mbox{and}\,(\ref{circle2})).
\end{eqnarray*}
Letting $n\rightarrow +\infty$, we have $k\rightarrow +\infty$ and $\mathcal{E}_{n}(z)\rightarrow \mu$. That is, $\mathcal{J}\subset G_{\mu}$. For any $z'\in \overline{\mathcal{J}}$, we take $z_t\in \mathcal{J}$ with $\lim_{t\to\infty} z_t=z'$. Observing that $D(\mathcal{E}_{n}(z_t),\,\mu)\leq 15/k$ for $M_{k,i}\leq n\leq M_{k,i+1}$, by continuity it also holds that $D(\mathcal{E}_{n}(z'),\,\mu)\leq 15/k$. This completes the proof of Lemma \ref{converge}.
\end{proof}
To finish the proof of Theorem \ref{main theorem of measure1}, we
need to compute the entropy of $\overline{\mathcal{J}}\subset
G_{\mu}$. Notice that the choices of the position labeled by
$x(k,j; i, t)$ in (\ref{quasi orbits}) has at least
$$e^{n(k,j)(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid
\delta)-\frac{4}{k})}$$ by (\ref{count}). Moreover, fixing the
position indexed $k,j,t$, for distinct $x(k,j; i, t), x'(k,j; i,
t)\in V_{n(k,j)}\cap A_{n(k,j)}(m_{k,j}) $, the corresponding
shadowing points $z,z'$ satisfying
\begin{eqnarray*}&&d(f^{M_{k,i,j,t}+q}(z),f^{M_{k,i,j,t}+q}(z'))\\&\geq&
d(f^{q}(x(k,j; i, t)), f^{q}(x'(k,j; i, t)))-d(f^{M_{k,i,j,t}+q}(z),
f^{q}(x(k,j; i, t)))\\&&-d(f^{M_{k,i,j,t}+q}(z'),
f^{q}(x'(k,j;i, t)))\\
&\geq& d(f^{q}(x(k,j; i, t)),
f^{q}(x'(k,j;i,t)))-\frac\varepsilon2.\end{eqnarray*} Since $x(k,j;i,t)$ and $x'(k,j; i, t)$ are $(n(k,j),\varepsilon)$-separated, $f^{M_{k,i,j,t}}(z)$ and $f^{M_{k,i,j,t}}(z')$ are $(n(k,j),\frac\varepsilon2)$-separated. Denote the set of possible choices of the quasi-orbit positions corresponding to $M_{k,i}$ by
\begin{eqnarray*}
H_{ki}=\{&&(x(k,1;i,1),\cdots,x(k,1;i,N_kC_{k,1}),\cdots,x(k,p_k;i,1),\cdots,x(k,p_k;i, N_kC_{k,p_k}))\\[2mm]
&&\mid \,\,x(k,j;i,t)\in V_{n(k,j)}\cap A_{n(k,j)}(m_{k,j})\}.
\end{eqnarray*}
Then
\begin{eqnarray*}
\sharp H_{ki}\geq
e^{\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid
\delta)-\frac{4}{k})}.
\end{eqnarray*}
Hence,
\begin{eqnarray}
\label{large katok entropy}\frac{1}{Y_k}\log \,\sharp H_{ki}&\geq&
\frac{Y_k-X_k}{Y_k}\sum_{j=1}^{p_k}a_{k,j}(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid\delta)-\frac{4}{k})\\[2mm]
&\geq&(1-\frac{1}{k})\sum_{j=1}^{p_k}a_{k,j}(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid\delta)-\frac{4}{k})\nonumber\\[2mm]
&=&(1-\frac1k)^2h_{\mu_{k}}^{Kat}(f,\varepsilon\mid\delta)-\frac4k(1-\frac{1}{k})^2\nonumber\\[2mm]
&\geq&(1-\frac1k)^2(h_{\mu}^{Kat}(f,\varepsilon\mid\delta)-\frac1k)-\frac4k(1-\frac{1}{k})^2.\nonumber
\end{eqnarray}
Since $\overline{\mathcal{J}}$ is compact we can take only finite
covers $\mathcal{C}(\overline{\mathcal{J}},\varepsilon/2)$ of
$\overline{\mathcal{J}}$ in the calculation of topological entropy
$h_{\operatorname{top}}(\overline{\mathcal{J}},\frac\varepsilon2)$. Let
$r<h_{\mu}^{Kat}(f,\varepsilon\mid\delta)$. For each $\mathcal{A}\in
\mathcal{C}(\overline{\mathcal{J}},\varepsilon/2)$ we define a new
cover $\mathcal{A}'$ in which for $M_{k,i}\leq m\leq M_{k,i+1}$,
$B_m(z,\varepsilon/2)$ is replaced by
$B_{M_{k,i}}(z,\varepsilon/2)$, where we suppose $M_{k,0}=M_{k-1,T_{k-1}}$ and $M_{k,T_k+1}=M_{k+1,1}$. Therefore,
\begin{eqnarray*}
\mathcal{Y}(\overline{\mathcal{J}};r,n,\varepsilon/2
)=\inf_{\mathcal{A}\in
\mathcal{C}(\overline{\mathcal{J}},\varepsilon/2)}\sum_{B_{m}(z,\varepsilon/2)\in
\mathcal{A}} e^{-rm}\geq \inf_{\mathcal{A}\in
\mathcal{C}(\overline{\mathcal{J}},\varepsilon/2)}\sum_{B_{M_{k,i}}(z,\varepsilon/2)\in
\mathcal{A}'} e^{-rM_{k,i+1}}.
\end{eqnarray*}
Denote
$$b=b(\mathcal{A}')=\max\{M_{k,i}\mid\,\,B_{m}(z,\frac\varepsilon2)\in \mathcal{A}'\quad \mbox{and}\quad M_{k,i}\leq m<M_{k,i+1}\}.$$
Since $\mathcal{A}'$ is a cover of $\mathcal{J}$, each point of $\mathcal{J}$ belongs to some $B_{M_{k,i}}(x, \frac\varepsilon2)$ with $M_{k,i}\leq b$. Moreover, if $z,z'\in \mathcal{J}$ differ in some position $x(k,j;i,t)\neq x'(k,j;i,t)$, then $z$ and $z'$ cannot lie in the same $B_{M_{k,i}}(x, \frac\varepsilon2)$. Define
\begin{eqnarray*}
W_{k,i}=\{B_{M_{k,i}}(z, \frac\varepsilon2)\in \mathcal{A}' \}.
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\sum_{M_{k,i}\leq b} \,\,\,\sharp W_{k,i} \,\,\Pi_{M_{k,i}<
M_{k',i'}\leq b } \,\,\,\sharp H_{k',i'} \,\,\geq\,\,
\Pi_{M_{k',i'}\leq b }\,\, \sharp H_{k',i'}\,\,.
\end{eqnarray*}
So,
\begin{eqnarray*}
\sum_{M_{k,i}\leq b} \,\,\,\sharp W_{k,i}\,\, (\Pi_{ M_{k',i'}\leq
M_{k,i} } \,\,\,\sharp H_{k',i'})^{-1} \,\,\geq\,\, 1.
\end{eqnarray*}
From (\ref{large katok entropy}) it is easily seen that
$$\limsup_{k\rightarrow \infty}\frac{\Pi_{ M_{k',i'}\leq M_{k,i} }
\,\,\,\sharp
H_{k',i'}}{\exp(h_{\mu}^{Kat}(f,\varepsilon\mid\delta)M_{k,i})}\geq
1.$$ Since $r<h_{\mu}^{Kat}(f,\varepsilon\mid\delta)$ and
$\lim_{k\rightarrow \infty}\frac{M_{k,i+1}}{M_{k,i}}=1$, we can take
$k$ large enough so that $$\frac{M_{k,i+1}}{M_{k,i}}\leq
\frac{h_{\mu}^{Kat}(f,\varepsilon\mid\delta)}{r}.$$ Thus there is
some constant $c_0>0$ such that for large $k$
\begin{eqnarray*}
\sum_{B_{M_{k,i}}(z,\varepsilon/2)\in \mathcal{A}'}
e^{-rM_{k,i+1}}&=& \sum_{M_{k,i}\leq b} \,\,\,\sharp
W_{k,i}\,\,\,e^{-rM_{k,i+1}}\\[2mm]&\geq& \sum_{M_{k,i}\leq b} \,\,\,\sharp
W_{k,i}\,\, \exp(-h_{\mu}^{Kat}(f,\varepsilon\mid\delta)M_{k,i})
\,\,\\[2mm]&\geq& c_0\sum_{M_{k,i}\leq b} \,\,\,\sharp
W_{k,i}\,\,(\Pi_{ M_{k',i'}\leq M_{k,i} } \,\,\,\sharp
H_{k',i'})^{-1}\,\, \\[2mm]&\geq&c_0,
\end{eqnarray*}
which, together with the arbitrariness of $r$, yields the required
inequality
$$h_{\operatorname{top}}(\overline{\mathcal{J}},\frac\varepsilon2)\geq
h_{\mu}^{Kat}(f,\varepsilon\mid\delta).$$ Finally, the arbitrariness
of $\varepsilon$ yields:
$$h_{\operatorname{top}}(f,G_{\mu})\geq h_{\mu}(f).$$
$\Box$
\section{Proofs of Theorem \ref{main theorem of set} and Theorem \ref{main theorem of set1}}
We start this section by recalling the notion of entropy introduced
by Newhouse \cite{Newhouse}. Given $\mu\in \mathcal{M}_{inv}(M,f)$,
let $F\subset M$ be a measurable
set. Define
\begin{eqnarray*}&(1)&H(n,\rho\mid x,F,\varepsilon)=\log \max\{\sharp E\mid E
\,\mbox{is a}\,(d^n,\rho)-\mbox{separated set in }\,F\cap
B_{n}(x,\varepsilon) \};\\[2mm]
&(2)&H(n,\rho\mid F,\varepsilon)=\sup_{x\in F}H(n,\rho\mid x,
F,\varepsilon);
\\[2mm]
&(3)&h(\rho\mid
F,\varepsilon)=\limsup_{n\rightarrow+\infty}\frac1nH(n,\rho\mid
F,\varepsilon);\\[2mm]
&(4)&h( F,\varepsilon)=\lim_{\rho\rightarrow0}h(\rho\mid
F,\varepsilon);\\[2mm]
&(5)&h^{New}_{\operatorname{loc}}(
\mu,\varepsilon)=\liminf_{\sigma\rightarrow1}\{h(
F,\varepsilon)\mid \mu(F)>\sigma\};\\[2mm]
&(6)&h^{New}(\mu,\varepsilon)=h_{\mu}(f)-h^{New}_{\operatorname{loc}}(
\mu,\varepsilon)\end{eqnarray*}
Let $\{\theta_k\}_{k=1}^{\infty}$ be a decreasing sequence which
approaches zero. One can verify that
$(h^{New}(\mu,\theta_k)\mid_{\mu\in
\mathcal{M}_{inv}(M,f)})_{k=1}^{\infty}$ is in fact an increasing
sequence of functions defined on $\mathcal{M}_{inv}(M,f)$.
Furthermore,
$$\lim_{\theta_k\rightarrow
0}h^{New}(\mu,\theta_k)=h_{\mu}(f)\,\,\,\mbox{ for any}\,\,\,\mu\in
\mathcal{M}_{inv}(M,f).$$
Let $\mathcal{H}=(h_k)$ and $\mathcal{H}'=(h_k')$ be two increasing
sequences of functions on a compact domain $\mathcal{D}$. We say
$\mathcal{H}'$ {\it uniformly dominates} $\mathcal{H}$, denoted by
$\mathcal{H}'\geq \mathcal{H}$, if for every index $k$ and every
$\gamma>0$ there exists an index $k'$ such that $$h'_{k'}\geq
h_k-\gamma.$$ We say that $\mathcal{H}$ and $\mathcal{H}'$ are {\it
uniformly equivalent} if both $\mathcal{H}\geq \mathcal{H}'$ and
$\mathcal{H}'\geq \mathcal{H}$. Obviously, uniform equivalence is an
equivalence relation.
Next we give some elements from the theory of entropy structures as
developed by Boyle-Downarowicz \cite{BD}. An increasing sequence
$\alpha_1\leq \alpha_2\leq\cdots$ of partitions of $M$ is called
{\it essential} (for $f$ ) if \begin{eqnarray*}&&(1)
\operatorname{diam}(\alpha_k)\rightarrow0 \,\,\mbox{as}\,\, k\rightarrow
+\infty,\\[2mm]
&&(2)\mu(\partial\alpha_k)= 0 \,\,\mbox{for every }\,\,\mu\in
\mathcal{M}_{inv}(M, f).\end{eqnarray*} Here $\partial\alpha_k$
denotes the union of the boundaries of elements in the partition
$\alpha_k$. Note that essential sequences of partitions may not
exist (e.g., for the identity map on the unit interval). However,
for any finite entropy system $(f, M)$ it follows from the work of
Lindenstrauss and Weiss \cite{Lindenstrauss}\cite{Lin-Weiss} that
the product $f\times R$, with $R$ an irrational rotation, has
essential sequences of partitions. Since the rotation contributes no
entropy to any invariant measure, we can always assume that
$(f,M)$ has an essential sequence. By an {\it entropy structure} of
a finite topological entropy dynamical system $(f,M)$ we mean an
increasing sequence $\mathcal{H}=(h_k)$ of functions defined on
$\mathcal{M}_{inv}(M,f)$ which is uniformly equivalent to
$(h_{\mu}(f,\alpha_k)\mid_{\mu\in \mathcal{M}_{inv}(M,f)})$.
Combining this with Katok's definition of entropy, we consider the
increasing sequence of functions on $\mathcal{M}_{inv}(M,f)$ given
by $(h_{\mu}^{Kat}(f,\theta_k\mid\delta)\mid_{\mu\in
\mathcal{M}_{inv}(M,f)})$.
\begin{Thm}\label{entropy structure}
Both $(h_{\mu}^{Kat}(f,\theta_k\mid\delta)\mid_{\mu\in
\mathcal{M}_{inv}(M,f)})$ and $(h^{New}(\mu,\theta_k)\mid_{\mu\in
\mathcal{M}_{inv}(M,f)})$ are entropy structures, and hence they are
uniformly equivalent.
\end{Thm}
\begin{proof}This theorem is a part of Theorem 7.0.1 in \cite{Downarowicz}.\end{proof}
\begin{Rem}The notion of entropy structure in fact reflects the uniform convergence of entropy. It is well known that there are various notions of entropy;
however, not all of them give rise to entropy structures. For
example, the classical definition of entropy via partitions does not (see Theorem 8.0.1 in \cite{Downarowicz}).\end{Rem}
Let $\eta=\{\eta_n\}_{n=1}^{\infty}$ be a sequence decreasing to zero,
and let $\mathcal{M}(\widetilde{\Lambda},\eta)$ be the subset of
$\mathcal{M}_{inv}(M,f)$ associated with the hyperbolicity rate $\eta$.
For $\delta, \varepsilon>0$ and any $\Upsilon\subset
\mathcal{M}(M)$, define
$$h_{\Upsilon,\operatorname{loc}}^{Kat}(f,\varepsilon\mid \delta)=\max_{\mu\in
\Upsilon}\{h_{\mu}(f)-h_{\mu}^{Kat}(f,\varepsilon\mid \delta)\}.$$
\begin{Lem}\label{asymp entropy expansive}
$\lim_{\theta_k\rightarrow
0}h_{\mathcal{M}(\widetilde{\Lambda},\eta),\operatorname{loc}}^{Kat}(f,\theta_k\mid
\delta)=0$.
\end{Lem}
\begin{proof} First we need a proposition contained on page 226 of
\cite{Newhouse}, which reads as follows:
$$\lim_{\varepsilon\rightarrow0}\sup_{\mu\in \mathcal{M}(\widetilde{\Lambda},\eta)}h_{\operatorname{loc}}^{New}(\mu,\varepsilon)=0.$$
By Theorem \ref{entropy structure},
$$ (h^{New}(\mu,\theta_k)\mid_{\mu\in \mathcal{M}_{inv}(M, f)})\leq(h_{\mu}^{Kat}(f,\theta_k\mid\delta)\mid_{\mu\in \mathcal{M}_{inv}(M, f)})
.$$ So, for any $k\in \mathbb{N}$, there exists $k'>k$ such that
$$h_{\mu}^{Kat}(f,\theta_{k'}\mid\delta)
\geq h^{New}(\mu,\theta_k)-\frac1k,$$ for all $\mu\in
\mathcal{M}(\widetilde{\Lambda},\eta)$. It follows that
\begin{eqnarray*}
h_{\mu}(f)-h_{\mu}^{Kat}(f,\theta_{k'}\mid \delta)&\leq&
h_{\mu}(f)-(h^{New}(\mu, \theta_k)-\frac1k)\\[2mm]&=&h_{\operatorname{loc}}^{New}(\mu,\theta_k)+\frac1k,
\end{eqnarray*}
for all $\mu\in \mathcal{M}(\widetilde{\Lambda},\eta)$. Taking
supremum on $\mathcal{M}(\widetilde{\Lambda},\eta)$ and letting
$k\rightarrow +\infty$, we conclude that
$$\lim_{\theta_{k'}\rightarrow
0}h_{\mathcal{M}(\widetilde{\Lambda},\eta),\operatorname{loc}}^{Kat}(f,\theta_{k'}\mid
\delta)=0.$$ \end{proof}
\begin{Rem}\label{lose control of local entropy}
In \cite{Newhouse}, Lemma \ref{asymp entropy expansive} was used to
prove upper semi-continuity of metric entropy on
$\mathcal{M}(\widetilde{\Lambda},\eta)$. However, upper
semi-continuity fails in general, even when the underlying system is
nonuniformly hyperbolic. For example, in
\cite{Downarowicz-Newhouse}, T. Downarowicz and S. E. Newhouse
constructed surface diffeomorphisms whose local entropy at any
pre-assigned scale is bounded below by a positive constant. More
precisely, they constructed a compact subset $E$ of
$\mathcal{M}_{inv}( \Lambda, f )$ containing a periodic
measure, together with a positive real number $\rho_0$, such that for
each $\mu\in E$ and each $k > 0$,
$$\limsup_{\nu\in E, \nu\rightarrow \mu} h_{\nu}( f )- h_k(\nu) >
\rho_0.$$ This implies that the symbolic extension entropy is infinite,
that metric entropy is not upper semi-continuous, and hence that the
uniform separation property of \cite{PS} fails.
\end{Rem}
Now we begin to prove Theorem \ref{main theorem of set1}, which,
combined with Proposition \ref{generical lemmas}, completes the proof
of Theorem \ref{main theorem of set}. Throughout this section, for
simplicity, we adopt the symbols used in the proof of Theorem
\ref{main theorem of measure}. Unless otherwise mentioned, the
quantitative relations among these symbols retain the same meaning.
{\bf Proof of Theorem \ref{main theorem of set1}}$ $
For any
nonempty closed connected set $K\subset
\mathcal{M}(\widetilde{\Lambda},\eta)$, there exists a sequence of
closed balls $U_n$ in $\mathcal{M}_{inv}(M,f)$ with radius $\zeta_n$
in the metric $D$ with the weak$^*$ topology such that the following
holds:
\begin{eqnarray*}
&(i)&U_n\cap U_{n+1}\cap K \neq \emptyset;\\[2mm]
&(ii)& \cap_{N\geq 1}^{\infty}\cup_{n\geq N}U_n =
K;\\[2mm]
&(iii)& \lim_{n\rightarrow +\infty}\zeta_n=0.\end{eqnarray*} By
$(i)$, we take $\nu_k\in U_k\cap K$. Given $\gamma>0$, using Lemma
\ref{asymp entropy expansive}, we can find an $\varepsilon>0$ such
that
$$h_{\mathcal{M}(\widetilde{\Lambda},\eta),\operatorname{loc}}^{Kat}(f,\varepsilon\mid
\delta)<\gamma.$$ For each $\nu_k$, we then can choose a finite
convex combination of ergodic probability measures with rational
coefficients,
$$\mu_k=\sum_{j=1}^{p_k}a_{k,j}\,m_{k,j}$$
satisfying the following properties:
$$D(\nu_k,\mu_k)<\frac{1}{k},\,\,\,m_{k,j}(\Lambda)=1\,\,\,\mbox{
and}\,\,\, |h_{\nu_k}^{Kat}(f,\varepsilon\mid
\delta)-h_{\mu_k}^{Kat}(f,\varepsilon\mid \delta)|<\frac1k.$$ For
each $k$, we can find $l_k$ such that
$m_{k,j}(\Lambda_{l_k})>1-\delta$ for all $1\leq j\leq p_k$. For
$c=\frac{\varepsilon}{8\epsilon_0}$, by Theorem \ref{specification}
there is a sequence of numbers $(\delta_k)_{k=1}^\infty$. Let
$\xi_k$ be a finite partition of $M$ with $\mbox{diam}\,\xi_k
<\min\{\frac{b_k(1-e^{-\epsilon})}{4\sqrt{2}e^{(k+1)\epsilon}},\epsilon_{l_k},\delta_{l_k}\}$
and $\xi_k>\{\widetilde{\Lambda}_{l_k},M\setminus
\widetilde{\Lambda}_{l_k}\}$.
For each $m_{k,j}$, following the proof of Theorem \ref{main theorem
of measure1}, we can obtain an integer $n(k,j)$ and an
$(n(k,j),\varepsilon)$-separated set $W_{n(k,j)}$ contained in an open
subset $U(k,j)$ of some Lyapunov neighborhood with
$\operatorname{diam}(U(k,j))<2\operatorname{diam}(\xi_k)$ and satisfying that
$$\sharp\,W_{n(k,j)}\geq e^{n(k,j)(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid
\delta)-\frac4k)}.$$ Then likewise, for $k_1,k_2,j_1,j_2$ one can
find $y=y(m_{k_1,j_1},m_{k_2,j_2})\in U(k_1,j_1)$ satisfying that
for some $s=s(m_{k_1,j_1},m_{k_2,j_2})\in\mathbb{ N}$,
$$f^{s}(y)\in U(k_2,j_2).$$In the same manner, we consider the
following pseudo-orbit
\begin{eqnarray}
\label{quasi orbits}\quad \quad O&=&O(x(1,1;1,1),\cdots,x(1,1; 1, N_1C_{1,1}),\cdots,x(1,p_1; 1, 1),\cdots,x(1,p_1; 1, N_1C_{1,p_1});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(1,1;T_1,1),\cdots,x(1,1; T_1, N_1C_{1,1}),\cdots,x(1,p_1; T_1, 1),\cdots,x(1,p_1; T_1, N_1C_{1,p_1});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1; 1, 1),\cdots,x(k,1; 1, N_kC_{k,1}),\cdots,x(k,p_k; 1, 1),\cdots,x(k,p_k; 1, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&x(k,1;T_k,1),\cdots,x(k,1; T_k, N_kC_{k,1}),\cdots,x(k,p_k; T_k, 1),\cdots,x(k,p_k; T_k, N_kC_{k,p_k});\nonumber\\[2mm]&&\cdots;\nonumber \\[2mm]
&&\cdots\,\,)\nonumber\end{eqnarray}
with the precise form as in (\ref{precise quasi orbits}), where $x(k,j;
i, t)\in W_{n(k,j)}$. Then Theorem \ref{specification} applies to
give rise to a shadowing point $z$ of $O$ such that
$$d(f^{M_{k,i,j,t}+q}(z),f^q(x(k,j;i, t)))<c\epsilon_0e^{-\epsilon l_k}<\frac{\varepsilon}{4\epsilon_0}\epsilon_0e^{-\epsilon l_k}\leq \frac{\varepsilon}{4},$$
for $0\leq q\leq n(k,j)-1,$ $1\leq i \leq T_k$, $1\leq t\leq
N_kC_{k,j}$, $1\leq j\leq p_k$. By the construction of $N_k$ and
$Y_k$, it is verified that
$$D(\mathcal{E}_{Y_k}(f^{M_{k,i}}(z)),\,\nu_k)\leq\frac6k.$$
For sufficiently large $M_{k,i}\leq n\leq M_{k,i+1}$, by affine property, we have that
\begin{eqnarray*}
D(\mathcal{E}_{n}(z),\,\nu_k)&\leq&
\frac{M_{k-2}}{n}D(\mathcal{E}_{M_{k-2}}(z),\nu_k)+\frac{Y_{k-1}}{n}\sum_{r=1}^{T_{k-1}}D(\mathcal{E}_{Y_{k-1}}(f^{M_{k-1,r-1}}(z)),\nu_k)\\[2mm]
&&+\frac{s(m_{k-1,1},m_{k,1})}{n}D(\mathcal{E}_{s(m_{k-1,1},m_{k,1})}(f^{M_{k-1,T_{k-1}}}(z)),\nu_k)\\[2mm]
&&+\frac{Y_k}{n}\sum_{r=1}^{i-1}D(\mathcal{E}_{Y_k}(f^{M_{k,r-1}}(z)),\nu_k)\\[2mm]
&&+\frac{n-M_{k,i}}{n}D(\mathcal{E}_{n-M_{k,i}}(f^{M_{k,i}}(z)),\nu_k).
\\[2mm]
\end{eqnarray*}
Noting that $$D(\mathcal{E}_{Y_{k-1}}(f^{M_{k-1,i-1}}(z)),\nu_k)\leq
D(\mathcal{E}_{Y_{k-1}}(f^{M_{k-1,i-1}}(z)),\nu_{k-1})+D(\nu_{k-1},\nu_{k})$$
and using the fact that
$D(\nu_k,\nu_{k-1})\leq2\zeta_k+2\zeta_{k-1}$ and inequalities
(\ref{circle1}) and (\ref{circle2}), one can deduce that
\begin{eqnarray*}
D(\mathcal{E}_{n}(z),\,\nu_k)&\leq&
\frac1k+(\frac{6}{k-1}+2\zeta_k+2\zeta_{k-1})+\frac1k+\frac6k+\frac1k.
\end{eqnarray*}
Letting
$n\rightarrow +\infty$, we get $V(z)\subset K$. On the other hand,
since $$\cap_{N\geq 1}^{\infty}\cup_{n\geq N}U_n = K,$$
$\mathcal{E}_{n}(z)$ enters any neighborhood of each $\nu\in K$
infinitely often, which implies the converse inclusion $K\subset
V(z)$. Consequently, $V(z)=K$.
Next we show the inequality concerning entropy. Fixing $k,j,i,t$,
the shadowing points corresponding to distinct choices of $x(k,j;i,t)$ are
$(n(k,j),\frac{\varepsilon}{2})$-separated. Let
\begin{eqnarray*}
H_{ki}=\{&&(x(k,1;i,1),\cdots,x(k,1;i,N_kC_{k,1}),\cdots,x(k,p_k;i,1),\cdots,x(k,p_k;i, N_kC_{k,p_k}))\\[2mm]
&&\mid \,\,x(k,j;i,t)\in V_{n(k,j)}\cap A_{n(k,j)}\}.
\end{eqnarray*}
Then
\begin{eqnarray*}
\sharp H_{ki}\geq
e^{\sum_{j=1}^{p_k}N_kC_{k,j}n(k,j)(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid
\delta)-\frac{4}{k})}.
\end{eqnarray*}
So,
\begin{eqnarray*}
\frac{1}{Y_k}\log \,\sharp H_{ki}&\geq&
\frac{Y_k-X_k}{Y_k}\sum_{j=1}^{p_k}a_{k,j}(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid\delta)-\frac{4}{k})\\[2mm]
&\geq&(1-\frac{1}{k})\sum_{j=1}^{p_k}a_{k,j}(1-\frac1k)(h_{m_{k,j}}^{Kat}(f,\varepsilon\mid\delta)-\frac{4}{k})\\[2mm]
&=&(1-\frac1k)^2h_{\mu_{k}}^{Kat}(f,\varepsilon\mid\delta)-\frac4k(1-\frac{1}{k})^2\\[2mm]
&\geq&(1-\frac1k)^2(h_{\nu_k}^{Kat}(f,\varepsilon\mid\delta)-\frac1k)-\frac4k(1-\frac{1}{k})^2\\[2mm]
&\geq&(1-\frac1k)^2(h_{\nu_k}(f)-\gamma-\frac1k)-\frac4k(1-\frac{1}{k})^2.
\end{eqnarray*}
Subsequently, by arguments analogous to those in Section 4, we obtain that
$$h_{\operatorname{top}}(f,G_{K})\geq \inf\{h_{\mu}(f)\mid \mu\in K\}-\gamma.$$
The arbitrariness of $\gamma$ yields the desired inequality:
$$h_{\operatorname{top}}(f,G_{K})\geq \inf\{h_{\mu}(f)\mid \mu\in K\}.$$
$\Box$
\section{On the Structure of Pesin set $\widetilde{\Lambda}$}
The construction of $\widetilde{\Lambda}$ involves many technical
choices; this yields fruitful properties of the Pesin set, but at the
same time it makes it difficult to check which measures are supported
on $\widetilde{\Lambda}$. Sometimes $\mathcal{M}_{inv}(\widetilde{\Lambda}, f)$ contains only
the measure $\omega$ itself, for instance when $\omega$ is atomic. In
what follows, we will show that for several classes of
diffeomorphisms derived from Anosov systems
$\mathcal{M}_{inv}(\widetilde{\Lambda},f)$ has many members.
\subsection{Symbolic dynamics of Anosov
diffeomorphisms}\quad Let $f_0$ be an Anosov diffeomorphism on a
Riemannian manifold $M$. For $x\in M$, $\varepsilon_0>0$, we have
the stable manifold $W^s_{\varepsilon_0}(x)$ and the unstable
manifold $W^u_{\varepsilon_0}(x)$ defined by
\begin{eqnarray*}&&W^s_{\varepsilon_0}(x)=\{y\in M\mid
d(f_0^n(x),f_0^n(y))\leq \varepsilon_0,\quad \mbox{for all}\quad
n\geq
0\}\\[2mm]
&&W^u_{\varepsilon_0}(x)=\{y\in M\mid d(f_0^{-n}(x),f_0^{-n}(y))\leq
\varepsilon_0,\quad \mbox{for all}\quad n\geq 0\}.\end{eqnarray*}
Fixing small $\varepsilon_0>0$ there exists a
$\delta_0>0$ so that $W^s_{\varepsilon_0}(x)\cap
W^u_{\varepsilon_0}(y)$ contains a single point $[x,y]$ whenever
$d(x,y)<\delta_0$. Furthermore, the function
$$[\cdot, \cdot]: \{(x,y)\in M\times M\mid d(x,y)<\delta_0 \}\rightarrow M$$
is continuous. A rectangle $R$ is a subset of $M$
with small diameter such that $[x,y]\in R$ whenever $x,y\in R$. For $x\in
R$ let $$W^s(x,R)=W^s_{\varepsilon_0}(x)\cap R\quad \mbox{and}\quad
W^u(x,R)=W^u_{\varepsilon_0}(x)\cap R.$$ For the Anosov diffeomorphism
$f_0$ one can obtain the following structure, known as a Markov
partition $\mathcal{R}=\{R_1,R_2,\cdots,R_l\}$ of $M$, with the
properties:
\begin{enumerate}
\item[(1)] $\operatorname{int} R_i\cap \operatorname{int} R_j=\emptyset$ for $i\neq j$;
\item[(2)] $f_0 W^u(x,R_i)\supset W^u(f_0x,R_j)$ and \\
$f_0 W^s(x,R_i)\subset W^s(f_0x,R_j)$ when $x\in \operatorname{int} R_i$, $f_0x\in
\operatorname{int} R_j$.
\end{enumerate}
Using the Markov Partition $\mathcal{R}$ we can define the
transition matrix $B=B(\mathcal{R})$ by
$$B_{i,j}=\begin{cases}1\quad \mbox{if}\quad \operatorname{int} R_i\cap f_0^{-1} ( \operatorname{int} R_j)\neq \emptyset;\\ 0\quad \mbox{otherwise}.
\end{cases}$$
The subshift $(\Sigma_B,\sigma)$ associated with $B$ is given by
$$\Sigma_B=\{\underline{q}\in \Sigma_l\mid \,B_{q_iq_{i+1}}=1\quad \forall i\in \mathbb{Z}\}.$$
For each $\underline{q}\in \Sigma_B$ by the hyperbolic property the
set $\cap_{i\in \mathbb{Z}}f_0^{-i}R_{q_i}$ consists of a single
point, denoted by $\pi_0( \underline{q} )$. We denote
$$\Sigma_B(i)=\{\underline{q}\in \Sigma_B\mid q_0=i\}.$$
The following properties hold for the map $\pi_0$ (see Sinai
\cite{Sinai} and Bowen \cite{Bowen1, Bowen2}).
\begin{Prop}\label{property of Markov}$ $\\
\begin{enumerate}
\item[(1)] The map $\pi_0: \Sigma_B\rightarrow M$ is a continuous surjection
satisfying $\pi_0\circ \sigma=f_0\circ \pi_0;$\\
\item[(2)] $\pi_0(\Sigma_B(i))=R_i$,\quad $1\leq i\leq l$;\\
\item[(3)] $h_{\operatorname{top}}(\sigma, \Sigma_B)=h_{\operatorname{top}}(f_0,M)$.
\end{enumerate}
\end{Prop}
Since $B$ is an irreducible $(0,1)$-matrix, by the Perron--Frobenius
theorem the maximal eigenvalue $\lambda$ of $B$ is positive and simple.
Let $u=(u_1,\cdots,u_l)$, $u_i>0$, be a corresponding left (row) eigenvector
and $v=(v_1,\cdots, v_l)^{T}$, $v_i>0$, a right (column) eigenvector. We
assume $\sum_{i=1}^lu_iv_i=1$ and denote
$(p_1,\cdots,p_l)=(u_1v_1,\cdots,u_lv_l)$. Define a new matrix
$$\mathcal{P}=(p_{ij})_{l\times l},\quad\quad\mbox{where}\quad p_{ij}=\frac{B_{ij}\,v_j}{\lambda\,v_i}.$$ Then $\mathcal{P}$
can define a Markov chain with probability $\mu_0$ satisfying
$$\mu_0([a_0a_1\cdots a_i])=p_{a_0}p_{a_0a_1}\cdots p_{a_{i-1}a_i}. $$
Then $\mu_0$ is $\sigma$-invariant and Gurevich \cite{Gu1,Gu2}
proved that $\mu_0$ is the unique maximal measure of
$(\Sigma_B,\sigma)$, that is,
$$h_{\operatorname{top}}(\sigma, \Sigma_B)=h_{\mu_0}(\sigma,\Sigma_B)=\log\lambda.$$
In addition, Bowen \cite{Bowen1} proved that $\pi_{0*}(\mu_0)$ is
the unique maximal measure of $f_0$ and $\pi_{0*}(\mu)(\partial
\mathcal{R})=0$, where $\partial \mathcal{R}$ consists of all
boundaries of $R_i$, $1\leq i\leq l$.
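For concreteness, the above Perron--Frobenius recipe for the maximal
measure can be checked numerically. The following is a minimal Python
sketch (assuming \texttt{numpy}); the $(0,1)$-matrix \texttt{B} below is a
hypothetical example and does not come from the Markov partition of any
particular Anosov diffeomorphism.
\begin{verbatim}
import numpy as np

# Hypothetical irreducible (0,1)-transition matrix, used only to
# illustrate the construction of the Parry (maximal) measure.
B = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

# Maximal eigenvalue lambda with right eigenvector v and left
# eigenvector u, all entries positive.
w, V = np.linalg.eig(B)
k = np.argmax(w.real)
lam = w.real[k]
v = np.abs(V[:, k].real)
wl, W = np.linalg.eig(B.T)
u = np.abs(W[:, np.argmax(wl.real)].real)

u = u / (u @ v)                    # normalize so that sum_i u_i v_i = 1
p = u * v                          # p_i = u_i v_i

# Stochastic matrix with p_{ij} = B_{ij} v_j / (lambda v_i).
P = B * v[None, :] / (lam * v[:, None])

print("log(lambda) =", np.log(lam))      # = h_top of the subshift
print("row sums of P:", P.sum(axis=1))   # each equal to 1
print("p P - p:", p @ P - p)             # p is P-stationary

# Cylinder measure mu_0([a_0 ... a_i]) = p_{a_0} p_{a_0 a_1} ... p_{a_{i-1} a_i}.
word = [0, 1, 2]
mu = p[word[0]]
for a, b in zip(word[:-1], word[1:]):
    mu *= P[a, b]
print("mu_0([0,1,2]) =", mu)
\end{verbatim}
In this toy example the printed value of $\log\lambda$ is the topological
entropy of the associated subshift, in accordance with the Gurevich result
quoted above.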
Denote $\mu_1= \pi_{0*}(\mu_0)$. Then $\mu_1(\pi_0\Sigma_B(i))=p_i$
for $1\leq i\leq l$. For $0<\gamma<1$, $N\in \mathbb{N}$ define
\begin{eqnarray*}\Gamma_N(i,\gamma)=\{x\in M\mid&&\sharp \{n\leq j\leq n+k-1\mid
f_0^j(x)\in R_i\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]&&\sharp \{n\leq j\leq n+k-1\mid
f_0^{-j}(x)\in R_i\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]&&\quad \forall \,k\geq
1,\,\,\,\forall\, n\in \mathbb{Z}\}.\end{eqnarray*} Then
$f_0^{\pm1}(\Gamma_N(i,\gamma))\subset \Gamma_{N+1}(i,\gamma)$. Let
$\Gamma(i,\gamma)=\cup_{N\geq 1}\Gamma_N(i,\gamma)$.
\begin{Lem}\label{full measure}
For any $m\in \mathcal{M}_{inv}(M,f_0)$, if \,$m(R_i)<p_i+\gamma/2$
then $m(\Gamma(i,\gamma))=1$. \end{Lem}
\begin{proof}
Since $m(R_i)<p_i+\gamma/2$, for $m$-almost all $x$ one can find
$N(x)>0$ such that
\begin{eqnarray*}&&n(m(R_i)-\frac\gamma2)\leq\sharp \{0\leq j\leq
n-1\mid
f_0^j(x)\in R_i\}\leq n(m(R_i)+\frac\gamma2),\quad \forall \,n\geq N(x);\\[2mm]
&&n(m(R_i)-\frac\gamma2)\leq\sharp \{0\leq j\leq n-1\mid
f_0^{-j}(x)\in R_i\}\leq n(m(R_i)+\frac\gamma2),\quad \forall
\,n\geq N(x).
\end{eqnarray*}
Take $N_0(x)$ to be the smallest number such that for every $n\geq
1$,
\begin{eqnarray*}&&-N_0(x)+n(m(R_i)-\frac\gamma2)\leq\sharp \{0\leq j\leq n-1\mid
f_0^j(x)\in R_i\}\leq N_0(x)+n(m(R_i)+\frac\gamma2);\\[2mm]
&&-N_0(x)+n(m(R_i)-\frac\gamma2)\leq\sharp \{0\leq j\leq n-1\mid
f_0^{-j}(x)\in R_i\}\leq N_0(x)+n(m(R_i)+\frac\gamma2).
\end{eqnarray*}
Then for any $k\geq 1$,
\begin{eqnarray*}
&&\sharp \{n\leq j\leq n+k-1\mid f_0^j(x)\in R_i\}\\[2mm]
&=& \sharp
\{0\leq j\leq n+k-1\mid f_0^j(x)\in R_i\}-\sharp \{0\leq j\leq
n-1\mid
f_0^j(x)\in R_i\}\\[2mm]
&\leq&
N_0(x)+(n+k)(m(R_i)+\frac\gamma2)-(-N_0(x)+n(m(R_i)-\frac\gamma2))\\[2mm]
&=& 2N_0(x)+k(m(R_i)+\frac\gamma2)+n\gamma.
\end{eqnarray*}
In this manner we can also show $$\sharp \{n\leq j\leq n+k-1\mid
f_0^{-j}(x)\in R_i\}\leq 2N_0(x)+k(m(R_i)+\frac\gamma2)+n\gamma.$$
Thus, $x\in \Gamma_{2N_0(x)}(i,\gamma)$.
\end{proof}
By Lemma \ref{full measure}, $\mu_1(\Gamma(i,\gamma))=1$. We further
define
$$\widetilde{\Gamma}_N(i,\gamma)=\operatorname{supp}(\mu_1\mid
\Gamma_N(i,\gamma))\quad \mbox{ and}\quad
\widetilde{\Gamma}(i,\gamma)=\cup_{N\geq
1}\widetilde{\Gamma}_N(i,\gamma).$$ It holds that
$\widetilde{\Gamma}(i,\gamma)$ is $f_0$-invariant and
$\mu_1(\widetilde{\Gamma}(i,\gamma))=1$.
\begin{Prop}\label{frequence} There is a neighborhood $U$ of $\mu_1$ in
$\mathcal{M}_{inv}(M,f_0)$ such that for any ergodic measure $m\in
U$ we have $m \in
\mathcal{M}_{inv}(\widetilde{\Gamma}(i,\gamma),f_0)$.
\end{Prop}
\begin{proof}
Observing that $\mu_1(\partial R_i)=0$, for $\gamma>0$ there exists
a neighborhood $U$ of $\mu_1$ in $\mathcal{M}_{inv}(M,f_0)$ such that
for any $m\in U$ one has
$$m( R_i)<p_i+\frac\gamma2.$$
{\bf Claim :} \,\,\,\,We can find an ergodic measure $m_0\in
\mathcal{M}_{inv}(\Sigma_B, \sigma)$ satisfying $\pi_{0*}m_0=m$.
\noindent{\it Proof of Claim.} Denote the basin of $m$ by
\begin{eqnarray*}Q_{m}(M,f_0)=\Big{\{}x\in M\mid \lim_{n\to+\infty}\frac 1n
\sum_{j=0}^{n-1}\varphi(f_0^jx) &=&\lim_{n\to+\infty}\frac 1n
\sum_{j=0}^{n-1}\varphi(f_0^{-j}x)\\[2mm]&=&\int_M\varphi dm,\quad\forall
\varphi\in C^0(M)\Big{\}}.\end{eqnarray*} Take and fix a point $x\in
Q_{m}(M,\, f_0)$ and choose $q\in \Sigma_B$ with $\pi_0(q)=x$.
Define a sequence of measures $\nu_n$ on $\Sigma_B$ by
$$\int\,\psi d \nu_n:=\frac 1n \Sigma_{i=0}^{n-1}\psi(\sigma^i(q)),\,\,\,\,
\,\,\forall \psi\in C^0(\Sigma_B).$$ By taking a subsequence when
necessary we can assume that $\nu_n\to \nu_0.$ It is standard to
verify that $\nu_0$ is a $\sigma$-invariant measure and $\nu_0$
covers $m$ i.e., $\pi_{0*}(\nu_0)=m.$ Set
$$Q(\sigma):=
\cup_{\nu \in \mathcal{M}_{erg}(\Sigma_B, \sigma)} Q_\nu(\Sigma_B,
\sigma).$$ Then $ Q(\sigma)$ is a $\sigma-$invariant total measure
subset in $\Sigma_B.$ We have
\begin{align*}
&m( Q_{m}(M,f_0) \cap \pi_{0} Q(\sigma))\\
\ge & \nu_0(\pi_{0}^{-1}Q_{m}(M,f_0)\cap Q(\sigma))\\
=&1.
\end{align*}
Then the set
\begin{align*} \mathcal{A}_0:=\Big{\{}\nu\in \mathcal{M}_{erg}(\Sigma_B, \sigma)\,\mid\,\,&\exists \,\, q\in
Q(\sigma), \pi_0(q)\in Q_{m}(M, f_0), s.\,t. \\
&\lim_{n\to+\infty}\frac 1n\sum_{i=0}^{n-1} \psi(\sigma^i(q))
=\lim_{n\to+\infty}\frac 1n\sum_{i=0}^{n-1}
\psi(\sigma^{-i}(q))\\
&=\int _{\Sigma_B} \psi\,d\nu\,\,\,\,\,\,\quad\quad\quad \forall
\psi\in C^0(\Sigma_B)\,\Big{\}}
\end{align*}
is non-empty. It is clear that every $\nu\in \mathcal{A}_0$ covers $m$,
i.e., $\pi_{0*}(\nu)=m$, so any element of $\mathcal{A}_0$ serves as the
desired ergodic measure $m_0$.
$\Box$
We continue the proof of Proposition \ref{frequence}. Since
$\pi_{0*}(m_0)=m$, we have $m_0(\pi_{0}^{-1}(R_i))=m(R_i)< p_i+\gamma/2$,
which together with $\Sigma_B(i)\subset \pi_0^{-1}(R_i)$ implies
that
$$m_0(\Sigma_B(i))< p_i+\frac\gamma2.$$
In particular, $\mu_0(\Sigma_B(i))< p_i+\frac\gamma2.$ For
$0<\gamma<1$, $N\in \mathbb{N}$ define
\begin{eqnarray*}\Upsilon_N(i,\gamma)=\{\underline{q}\in \Sigma_B\mid &&\sharp
\{n\leq j\leq n+k-1\mid q_j=i\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]
&& \sharp
\{n\leq j\leq n+k-1\mid q_{-j}=i\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]
&&\quad \forall\,k\geq1\,\,\,\forall \,n\in
\mathbb{Z}\}.\end{eqnarray*} Let $\Upsilon(i,\gamma)=\cup_{N\geq
1}\Upsilon_N(i,\gamma)$. Then $\mu_0(\Upsilon(i,\gamma))=1$. Further
define
$$\widetilde{\Upsilon}_N(i,\gamma)=\operatorname{supp}(\mu_0\mid
\Upsilon_N(i,\gamma))\quad \mbox{ and}\quad
\widetilde{\Upsilon}(i,\gamma)=\cup_{N\geq
1}\widetilde{\Upsilon}_N(i,\gamma).$$ It also holds that
$\widetilde{\Upsilon}(i,\gamma)$ is $\sigma$-invariant and
$\mu_0(\widetilde{\Upsilon}(i,\gamma))=1$.
\begin{Lem}\label{measures on support}Given $m_0\in \mathcal{M}_{erg}(\Sigma_B,\sigma)$,
if $m_0(\Sigma_B(i))<p_i+\gamma/2$ then $m_0\in
\mathcal{M}_{inv}(\widetilde{\Upsilon}(i,\gamma),\sigma)$.
\end{Lem}
\noindent{\it Proof of Lemma.}\,\,\,\, Since
$m_0(\Sigma_B(i))<p_i+\gamma/2$ we obtain $m_0(\cup_{N\in
\mathbb{N}}\,\Upsilon_{N}(i,\gamma))=1$. We can take $N_0$ so large
that $m_0(\Upsilon_{N_0}(i,\gamma))>0$ and
$\mu_0(\widetilde{\Upsilon}_{N_0}(i,\gamma))>0$. Define
$$\Upsilon(i,j)=\{\underline{q}\in \widetilde{\Upsilon}_{N_0}(i,\gamma)\mid q_0=j\}.$$
Then there exists $j\in [ 1,l ]$ such that $\mu_0(\Upsilon(i,j))>0$.
Noting that $(\Sigma_B,\sigma)$ is mixing, there is $L_0>0$ such
that for each pair $j_1,j_2$ one can choose an admissible word
$L(j_1,j_2)=(q_1 \cdots q_L)$ satisfying $q_1=j_1$, $q_L=j_2$ and
$2\leq \sharp L(j_1,j_2)\leq L_0$.
Arbitrarily taking $\underline{q}\in \Upsilon_{N_0}(i,\gamma)$,
$\underline{z}\in \Upsilon(i,j)$, $n\in \mathbb{N}$, define
$$\underline{y}(\underline{q},\underline{z},n)
=(\cdots z_{-3}z_{-2}z_{-1}L(z_0,q_{-n})q_{-n+1}\cdots q_{-1};
\stackrel{0}{q_0} q_1\cdots q_{n-1}L(q_n,z_0)z_1z_2z_3 \cdots ).$$
Denote $N_1=2L_0+2N_0+1$. For any $\theta>0$ we can take large $n$
satisfying $n>N_1$ and
$d(\underline{y}(\underline{q},\underline{z},n),\underline{q})<\theta$.
Define a new subset of $\Sigma_B$:
$$Y(\underline{q},n)=\{\underline{y}(\underline{q},\underline{z},n)\in \Sigma_B \mid
\underline{z}\in \Upsilon(i,j)\}.$$ Consider the positive and
negative constitutions of $\Upsilon(i,j)$ as follows
\begin{eqnarray*}
\Upsilon^{+}(i,j)&=&\{\underline{w}\in
\Sigma_B\mid\,w_k=z_k,\,\,k\geq
0,\quad \mbox{for some }\,\,\underline{z}\in \Upsilon(i,j)\}\\[2mm]
\Upsilon^{-}(i,j)&=&\{\underline{w}\in
\Sigma_B\mid\,w_k=z_k,\,\,k\leq 0,\quad \mbox{for some
}\,\,\underline{z}\in \Upsilon(i,j)\}.
\end{eqnarray*}
Clearly $\Upsilon^{+}(i,j)\supset \Upsilon(i,j)$,
$\Upsilon^{-}(i,j)\supset \Upsilon(i,j)$. Then by the Markov
property of $\mu_0$ it holds that
$$\mu_0(Y(\underline{q},n))\geq
\mu_0(\Upsilon^{-}(i,j))p_{jq_{-n}}p_{q_{-n}q_{-n+1}}\cdots
p_{q_{n-1}q_{n}}p_{q_{n}j}\,\mu_0(\Upsilon^{+}(i,j))>0.$$ Moreover,
for any $\underline{y}\in Y(\underline{q},n)$ and $k\geq1,\,\,s\in
\mathbb{Z}$ we have
Case 1: $-n-\sharp L\leq s\leq n+\sharp L$, $s+k-1\leq n+\sharp L$
it follows that
\begin{eqnarray*}\,\sharp \{s\leq t\leq s+k-1\mid
y_t=i\}&\leq&\,2L_0+\sharp \{s\leq t\leq s+k-1\mid q_t=i\} \\[2mm]&\leq&
2L_0+N_0+k(p_i+\gamma)+|s|\gamma.\end{eqnarray*}
Case 2: $-n-L\leq s\leq n+L$, $s+k-1> n+L$ it follows that
\begin{eqnarray*}\,\sharp \{s\leq t\leq s+k-1\mid
y_t=i\}&\leq&\,L_0+N_0+(n+L-s)(p_i+\gamma)+|s|\gamma+N_0+\\[2mm]
&&+(s+k-1-n-L)(p_i+\gamma) \\[2mm]&\leq&
L_0+2N_0+k(p_i+\gamma)+|s|\gamma.\end{eqnarray*}
Case 3: $s>n+L$ it follows that
\begin{eqnarray*}\,\sharp \{s\leq t\leq s+k-1\mid
y_t=i\}&\leq & N_0+k(p_i+\gamma)+|s|\gamma.\end{eqnarray*}
Case 4: $s<-n-L$ it follows that
\begin{eqnarray*}\,\sharp \{s\leq t\leq s+k-1\mid
y_t=i\}&\leq&\,2L_0+2N_0+k(p_i+\gamma)+|s|\gamma.\end{eqnarray*} The
situation of $\sharp \{s\leq t\leq s+k-1\mid y_{-t}=i\}$ is similar.
Altogether, since $N_1=2L_0+2N_0+1$, $Y(\underline{q},n)\subset
\Upsilon_{N_1}(i,\gamma) $. The arbitrariness of $\theta$ gives rise
to that
$$\underline{q}\in \operatorname{supp}(\mu_0\mid \Upsilon_{N_1}(i,\gamma)).$$ That
is, $\Upsilon_{N_0}(i,\gamma)\subset
$\widetilde{\Upsilon}_{N_1}(i,\gamma) $. Since
$m_0(\Upsilon_{N_0}(i,\gamma))>0$, it follows that
$m_0(\widetilde{\Upsilon}_{N_1}(i,\gamma))>0$, which by the
ergodicity of $m_0$ implies $m_0(\widetilde{\Upsilon}(i,\gamma))=1$.
$\Box$
Noting that $\pi_0(\widetilde{\Upsilon}_{N}(i,\gamma))\subset
\widetilde{\Gamma}_{N}(i,\gamma)$, by Lemma \ref{measures on
support} we obtain
$$m(\widetilde{\Gamma}(i,\gamma))=m_0(\pi_0^{-1}(\widetilde{\Gamma}(i,\gamma)))\geq
m_0(\widetilde{\Upsilon}(i,\gamma))=1
$$ which proves Proposition \ref{frequence}.
\end{proof}
\subsection{Nonuniformly hyperbolic systems} We shall describe the structure of $\widetilde{\Lambda}$
for an example due to Katok \cite{Katok} (see also \cite{Ba-Pesin1,
Barr-Pesin}) of a diffeomorphism on the 2-torus $\mathbb{T}^2$ with
nonzero Lyapunov exponents which is not an Anosov map. Let $f_0$
be a hyperbolic linear automorphism given by the matrix
$$A=\begin{pmatrix} 2&1\\1&1
\end{pmatrix}
$$
Let $\mathcal{R}=\{R_1,R_2,\cdots,R_l\}$ be the Markov partition of
$f_0$ and $B=B(\mathcal{R})$ be the associated transition matrix.
$f_0$ has a maximal measure $\mu_1$. Without loss of generality,
passing to an iterate of $f_0$ if necessary, we may suppose there is a fixed point
$O\in \operatorname{int} R_1$. Consider the disk $D_r$ centered at $O$ of radius
$r$. Let $(s_1, s_2)$ be the coordinates in $D_r$ obtained from the
eigendirections of $A$. The map $A$ is the time-1 map of the local
flow in $D_r$ generated by the system of ordinary differential
equations:
$$\frac{ds_1}{dt} = s_1 \log\lambda ,\quad \frac{ds_2}{dt} = -s_2 \log\lambda.$$ We obtain
the desired map by slowing down $A$ near the origin.
Fix small $r_1 < r_0$ and consider the time-1 map $g$ generated by
the system of ordinary differential equations in $D_{r_1}$ :
$$ \frac{ds_1}{dt} =
s_1\psi(s_1^2 + s_2^ 2) \log\lambda ,\quad \frac{ds_2}{dt}
=-s_2\psi(s_1^2 + s_2^2) \log\lambda $$ where $\psi$ is a
real-valued function on $[0, 1]$ satisfying: \begin{enumerate}
\item[(1)] $\psi$ is a $C^{\infty}$ function except for
the origin $O$; \\
\item[(2)] $ \psi(0) = 0$ and $\psi(u) = 1$ for $u\geq r_0$ where $0 < r_0 < 1;$\\
\item[(3)] $ \psi(u) >
0$ for every $0<u<r_0$;\\
\item[(4)] $\int_0^1\frac{du}{\psi(u)}<\infty$.
\end{enumerate}
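The slow-down mechanism can be illustrated numerically. Below is a
minimal Python sketch (assuming \texttt{scipy}) of the time-1 map $g$
obtained by integrating the slowed-down vector field; the profile
\texttt{psi} used here is a hypothetical piecewise-linear choice that
satisfies conditions (2)--(3) only, and it deliberately ignores the
smoothness condition (1) and the integrability condition (4), which are
needed for the actual construction.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

LOG_LAM = np.log((3 + np.sqrt(5)) / 2)   # log of the larger eigenvalue of A
R0 = 0.1                                 # hypothetical value of r_0

def psi(u):
    # Toy slow-down profile: psi(0) = 0, psi(u) = 1 for u >= R0 and
    # psi > 0 on (0, R0); it does NOT satisfy conditions (1) and (4).
    return np.minimum(u / R0, 1.0)

def g(s1, s2):
    """Time-1 map of the slowed-down flow in the coordinates (s_1, s_2)."""
    def field(t, y):
        w = psi(y[0] ** 2 + y[1] ** 2) * LOG_LAM
        return [y[0] * w, -y[1] * w]
    sol = solve_ivp(field, (0.0, 1.0), [s1, s2], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

# Away from the origin the orbit behaves like the linear map A ...
print(g(0.5, 0.5))      # approximately (0.5 * lambda, 0.5 / lambda)
# ... while near the origin the motion is slowed down almost to a halt.
print(g(1e-4, 1e-4))    # very close to (1e-4, 1e-4)
\end{verbatim}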
The map $f$, given by $f(x) = g(x)$ if $x\in D_{r_1}$ and $f(x) =
A(x)$ otherwise, defines a homeomorphism of the torus, which is a
$C^{\infty}$ diffeomorphism everywhere except at the origin $O$. To
guarantee the differentiability of the map $f$, the function $\psi$
must satisfy some extra conditions; namely, near $O$ the integral
$\int_0^1du/\psi$ must converge ``very slowly''. We refer to
\cite{Katok} for the smoothness analysis. The map $f$ is contained in the $C^0$
closure of Anosov diffeomorphisms and, even more, there is a
homeomorphism $\pi: \mathbb{T}^2 \rightarrow \mathbb{T}^2$ such that
$\pi\circ f_0=f\circ \pi$ and $\pi(O)=O$. By the constructions,
there is a continuous decomposition on the tangent space
$T\mathbb{T}^2=E^1\oplus E^2$ such that
for any neighborhood $V$ of $O$, there exists
$\lambda_V>1$ such that
\begin{enumerate}
\item[(1)]$\|Df_x \mid_{ E^1(x)}\|\geq \lambda_{V}$, \,\,\, $\|Df_x \mid_{ E^2(x)}\|\leq
\lambda_{V}^{-1}$,\,\,\, $x\in \mathbb{T}^2\setminus V$ ;\\
\item[(2)] $\|Df_x \mid_{ E^1(x)}\|\geq 1$, \,\,\, $\|Df_x \mid_{ E^2(x)}\|\leq
1$,\,\,\, $x\in V$.
\end{enumerate}
Let $H_i=\pi(R_i)$, $\nu_0=\pi_{*}\mu_1$ and $p_i=\nu_0(H_i)$. Then
$H_i$ is a closed subset of $\mathbb{T}^2$ with nonempty interior.
Let
\begin{eqnarray*}p_0&=&\frac12\min\{1-p_i\,\mid\,\, 1\leq i\leq
l\},\\[2mm]
\beta&=&(1-p_1-p_0-\gamma)\log\lambda_{V}.
\end{eqnarray*}
\begin{Thm} \label{Katok}There exists a
neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^2,f)$
such that for any ergodic $\nu\in U$ it holds that $\nu\in
\mathcal{M}_{inv}(\widetilde{\Lambda}(\beta,\beta, \epsilon))$ for
any $0\leq \epsilon\ll \beta$.
\end{Thm}
\begin{proof}Take a small neighborhood $V\subset H_1$ of $O$. Denote
\begin{eqnarray*}\Phi_N(i,\gamma)=\{x\in M\mid&&\sharp \{n\leq j\leq n+k-1\mid
f^j(x)\in H_i\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]&&\sharp \{n\leq j\leq n+k-1\mid
f^{-j}(x)\in H_i\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]&&\quad \forall \,k\geq
1,\,\,\,\forall\, n\in \mathbb{Z}\}.\end{eqnarray*} Define
\begin{eqnarray*}\widetilde{\Phi}_N(1,\gamma)&=&\operatorname{supp}(\nu_0\mid
\Phi_N(1,\gamma)).
\end{eqnarray*} Then for some large $N$ we have $\nu_0(\widetilde{\Phi}_N(1,\gamma))>0$. Noting that $\mu_1(\partial R_1)=0$, by Proposition \ref{frequence}
and the conjugation $\pi$ there exists a neighborhood $U$ of $\nu_0$
in $\mathcal{M}(\mathbb{T}^2,f)$ such that for any ergodic $\nu\in
U$,
$$\nu(\widetilde{\Phi}_N(1,\gamma))>0.$$
For any $x\in
\Phi_N(1,\gamma)$ and $k\geq 1$, $n\in \mathbb{Z}$ we have
Case 1: $k(p_1+\gamma+p_0)\leq N+k(p_1+\gamma)+|n|\gamma$, then
$$k\leq \frac{N+|n|\gamma}{p_0}.$$
So, \begin{eqnarray*}
\|Df^{-k} \mid_{ E^1(f^{n}x)}\|&\leq& e^{-k\beta}\exp(\frac{\beta}{p_0}(N+|n|\gamma)),\\[2mm]
\|Df^{k}
\mid_{ E^2(f^nx)}\|&\leq&
e^{-k\beta}\exp(\frac{\beta}{p_0}(N+|n|\gamma)).
\end{eqnarray*}
Case 2: $k(p_1+\gamma+p_0)> N+k(p_1+\gamma)+|n|\gamma$, then
\begin{eqnarray*}
\|Df^{-k} \mid_{ E^1(f^{n}x)}\|&\leq& \lambda_V^{-(1-p_1-p_0-\gamma)k}=e^{-\beta k},\\[2mm] \|Df^{k}
\mid_{ E^2(f^nx)}\|&\leq& \lambda_V^{-(1-p_1-p_0-\gamma)k}=e^{-\beta
k}.
\end{eqnarray*}
Let $N_2=[\frac{\beta N}{\gamma p_0}]+1$. Then
$$\Phi_N(1,\gamma)\subset \Lambda_{N_2}(\beta,\beta, \gamma)
\quad \mbox{and}\quad \widetilde{\Phi}_N(1,\gamma)\subset
\widetilde{\Lambda}_{N_2}(\beta,\beta, \gamma).$$
Therefore,
$$\nu(\widetilde{\Lambda}_{N_2}(\beta,\beta, \gamma))>0$$
which completes the proof of Theorem \ref{Katok}.
\end{proof}
\subsection{Robustly transitive partially hyperbolic systems}
In \cite{Mane} R.~Ma{\~{n}}{\'{e}} constructed a class of robustly
transitive diffeomorphisms which are not hyperbolic. First we
recall the description of Ma{\~{n}}{\'{e}}'s example. Let
$\mathbb{T}^n$, $n\geq 3$, be the $n$-dimensional torus and $f_0 :
\mathbb{T}^n \rightarrow \mathbb{T}^n$ be a (linear) Anosov
diffeomorphism. Assume that the tangent bundle of $\mathbb{T}^n$
admits the $Df_0$-invariant splitting $T\mathbb{T}^n = E^{ss} \oplus
E^u\oplus E^{uu}$, with $\dim E^u = 1$ and $$\lambda_s :=
\|Df_0\mid_{E^{ss}}\|,\quad \lambda_u:= \|Df_0\mid_{ E^u}\|,\quad
\lambda_{uu}:= \|Df_0\mid_{E^{uu}}\|$$ satisfying the relation
$$\lambda_s < 1 < \lambda_u < \lambda_{uu}.$$
The following Lemma is proved in \cite{Sambarino-Vasquez}.
\begin{Lem}\label{shadowing onto diffeo}
Let $f_0 : \mathbb{T}^n\rightarrow \mathbb{T}^n$ be a linear Anosov
map. Then there exists $C> 0$ such that for any small $r$ and any $f
: \mathbb{T}^n\rightarrow \mathbb{T}^n$ with $\operatorname{dist}_{C^0}(f, f_0) <
r$ there exists $\pi : \mathbb{T}^n\rightarrow \mathbb{T}^n$
continuous and onto, $\operatorname{dist}_{C^0}(\pi, \operatorname{id}) < Cr$, and $$f_0\circ
\pi = \pi\circ f.$$
\end{Lem}
Let $\mathcal{R}=\{R_1,R_2,\cdots,R_l\}$ be the Markov partition of
$f_0$ and $B=B(\mathcal{R})$ be the associated transition matrix.
Let $\mu_1$ be the maximal measure of $(\mathbb{T}^n,f_0)$ and
$p_i=\mu_1(R_i)$ for $1\leq i\leq l$. Suppose there is a fixed point
$O\in \operatorname{int} R_1$. Take small $r$ such that the ball $B(O,Cr)\subset
R_1$ and $d(B(O,Cr), \partial R_1)>Cr$. Then deform the Anosov
diffeomorphism $f_0$ inside $B(O, r)$ through a flip
bifurcation along the central unstable foliation $\mathcal{F}^u(O)$,
so that we obtain three fixed points, two of them with stability
index equal to $\dim E^s$ and the other one with stability index
equal to $\dim E^s + 1$. Moreover take positive numbers $\delta,
\gamma \ll \min\{\lambda_s, \lambda_u\}$. Let $f$ satisfy the
following $C^1$ open conditions:
\begin{enumerate}\item[(1)] $ \|Df\mid_{E^{ss}}\|< e^{\delta}\lambda_s,\quad
\|Df\mid_{E^{uu}}\|>e^{-\delta}\lambda_{uu}$;\\
\item[(2)] $ e^{-\delta}\lambda_u<\|Df\mid_{ E^u(x)}\|< e^{\delta}\lambda_u$
,\quad for $x\in \mathbb{T}^n\setminus B(O,r)$;\\
\item[(3)] $ e^{-\delta}<\|Df\mid_{ E^u(x)}\|< e^{\delta}\lambda_u$
,\quad for $x\in B(O,r)$.
\end{enumerate}
As shown in \cite{Sambarino-Vasquez}, the resulting $f$ admits
a unique maximal measure $\nu_0$ with
$\pi_{*}\nu_0=\mu_1$. Let $H_i=\pi(R_i)$, $p_i=\pi_{*}\mu_1(H_i)$
and
\begin{eqnarray*}p_0&=&\frac12\min\{1-p_i\,\mid\,\, 1\leq i\leq
l\},\\[2mm]
\beta&=&(1-p_1-p_0-\gamma)\min\{-\log\lambda_{s}-\delta,\,\,\log\lambda_{u}-\delta\}.
\end{eqnarray*}
We can see that $E^{uu}$ is uniformly contracted under $f^{-1}$ at a rate
of at least $e^{-\beta}$.
\begin{Thm} \label{Mane}There exist $0<\epsilon\ll1<\beta$ and a
neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f)$
such that for any ergodic $\nu\in U$ it holds that $\nu\in
\mathcal{M}_{inv}(\widetilde{\Lambda}(\beta,\beta, \epsilon), f)$.
\end{Thm}
\begin{proof}
By Proposition \ref{frequence} we can take a neighborhood $U_1$ of
$\mu_1$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f_0)$ such that every
ergodic $\mu\in U_1$ also belongs to $
\widetilde{\Gamma}(i,\gamma)$, where $\widetilde{\Gamma}(i,\gamma)$
is given by Proposition \ref{frequence}. Since
$\pi$ is continuous, there is a neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f)$ such
that $\pi_{*}U\subset U_1$. For $N\in \mathbb{N}, \gamma>0$, define
\begin{eqnarray*}
T_N(i,\gamma)=\{x\in M\mid&&\sharp \{n\leq j\leq n+k-1\mid
f^j(x)\in B(O,r)\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]&&\sharp \{n\leq j\leq n+k-1\mid
f^{-j}(x)\in B(O,r)\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]&&\quad \forall \,k\geq
1,\,\,\,\forall\, n\in \mathbb{Z}\}.
\end{eqnarray*}
For large $N$ we have $\nu_0(T_N(i,\gamma))>0$ and let
$$\widetilde{T}_N(i,\gamma)=\operatorname{supp}(\nu_0\mid T_N(i,\gamma)).$$
For any $z\in T_N(i,\gamma)$, $n\in \mathbb{Z}$,
$k\geq 1$ we have
Case 1: $k(p_1+\gamma+p_0)\leq N+k(p_1+\gamma)+|n|\gamma$, then
$$k\leq \frac{N+|n|\gamma}{p_0}.$$
So, \begin{eqnarray*} \|Df^{-k} \mid_{ (E^{u}\oplus
E^{uu})(f^{n}x)}\|&\leq&
e^{-k\beta}\exp(\frac{\beta}{p_0}(N+|n|\gamma)).
\end{eqnarray*}
Case 2: $k(p_1+\gamma+p_0)> N+k(p_1+\gamma)+|n|\gamma$, then
\begin{eqnarray*}
\|Df^{-k} \mid_{(E^{u}\oplus E^{uu})(f^{n}x)}\|\leq
(\lambda_ue^{\delta})^{-(1-p_1-p_0-\gamma)k} e^{\delta
k(p_1+\gamma+p_0)}\leq e^{-\beta k}e^{\delta k(p_1+\gamma+p_0)}.
\end{eqnarray*}
Let $N_2=[\frac{\beta N}{\gamma p_0}]+1$, $\epsilon= \max\{\delta
(p_1+\gamma+p_0),\,\gamma\}$. Then
$$T_N(1,\gamma)\subset \Lambda_{N_2}(\beta,\beta, \epsilon)
\quad \mbox{and}\quad \widetilde{T}_N(1,\gamma)\subset
\widetilde{\Lambda}_{N_2}(\beta,\beta, \epsilon).$$
For any $x\in
\Gamma_N(1,\gamma)$, $z\in \pi^{-1}(x)$, it holds that
$$d(f^i(z),f^i_0(x))=d(f^i(z),\, \pi (f^i(z)))<Cr$$ which implies
that if $f^i_0(x)\notin R_1$ then $f^i(z)\notin B(O,r)$ because
$d(B(O,r),\partial R_1)>Cr$. Thus
$$\pi^{-1}(\Gamma_N(1,\gamma))\subset T_N(1,\gamma)\subset
\Lambda_{N_2}(\beta , \beta, \epsilon)$$ which yields that
$$\pi^{-1}(\widetilde{\Gamma}_N(1,\gamma))\subset
\widetilde{\Lambda}_{N_2}(\beta, \beta, \epsilon).$$ For any ergodic
$\nu\in U$, $\pi_{*}\nu\in U_1$. So
$\pi_{*}\nu(\widetilde{\Gamma}_{N}(1,\gamma))>0$. We obtain
$$\nu(\widetilde{\Lambda}_{N_2}(\beta, \beta, \epsilon))
\geq
\nu(\pi^{-1}(\widetilde{\Gamma}_N(1,\gamma)))=\pi_{*}\nu(\widetilde{\Gamma}_{N}(1,\gamma))>0.$$
The ergodicity of $\nu$ concludes $\nu(\widetilde{\Lambda}(\beta,
\beta, \epsilon))=1$.
\end{proof}
\subsection{Robustly transitive systems which are not partially hyperbolic}
In this subsection we will apply the structure of $\widetilde{\Lambda}$ to
a class of diffeomorphisms introduced by Bonatti--Viana. For our
purposes we need to make some additional assumptions on their
constants. The class $\mathcal{V}\subset\mathrm{Diff}(\mathbb{T}^n)$
under consideration consists of diffeomorphisms which are also
deformations of an Anosov diffeomorphism. To define $\mathcal{V}$,
let $f_0$ be a linear Anosov diffeomorphism of the $n$-dimensional
torus $\mathbb{T}^n$. Let $\mathcal{R}=\{R_1,R_2,\cdots,R_l\}$ be
the Markov partition of $f_0$ and $B=B(\mathcal{R})$ be the
associated transition matrix. Let $\mu_1$ be the maximal measure of
$(\mathbb{T}^n,f_0)$ and $p_i=\mu_1(R_i)$ for $1\leq i\leq l$.
Suppose there is a fixed point $O\in \operatorname{int} R_1$. Take small $r$
satisfying the ball $B(O,Cr)\subset R_1$ and $d(B(O,Cr), \partial
R_1)>Cr$, where $C$ is given by Lemma \ref{shadowing onto diffeo}.
Denote by $TM = E_0^s\oplus E_0^u$ the hyperbolic splitting for
$f_0$ and let
$$\lambda_s :=
\|Df\mid_{E^{s}_0}\|,\quad \lambda_u:= \|Df\mid_{ E^u_0}\|.$$ We
suppose that $f_0$ has at least one fixed point outside $V$. Fix
positive numbers $\delta, \gamma \ll \lambda : =\min\{\lambda_s,
\lambda_u\}$. Let
\begin{eqnarray*}p_0&=&\frac12\min\{1-p_i\,\mid\,\, 1\leq i\leq
l\},\\[2mm]
\beta&=&(1-p_1-p_0-\gamma)\log\lambda-\delta.
\end{eqnarray*}
By definition $f\in \mathcal{V}$ if it satisfies the following $C^1$
open conditions:
(1) There exist small continuous cone fields $C^{cu}$ and $C^{cs}$
invariant for $Df$ and $Df^{-1}$ containing respectively $E_0^u$ and
$E_0^s$.
(2) $f$ is $C^1$ close to $f_0$ in the complement of $B(O,r)$, so
that for $x\in \mathbb{T}^n\setminus B(O,r)$:
$$\|Df|T_xD^{cu}\|>e^{-\delta}\lambda\,\,\,\mbox{and}\,\,\,\|Df|T_xD^{cs}\|<e^{\delta}\lambda^{-1}.$$
(3) For $x\in B(O,r)$:
$$\|Df|T_xD^{cu}\|>e^{-\delta}\,\,\, \mbox{and}\,\,\, \|Df|T_xD^{cs}\|<e^{\delta},$$
where $D^{cu}$ and $D^{cs}$ are disks tangent to $C^{cu}$ and
$C^{cs}$.
Immediately by the cone property, we can get a dominated splitting
$T\mathbb{T}^n=E\oplus F$ with $E\subset D^{cs}$ and $F\subset
D^{cu}$.
By Lemma
\ref{shadowing onto diffeo}, there exists $\pi :
\mathbb{T}^n\rightarrow \mathbb{T}^n$ continuous and onto,
$\operatorname{dist}_{C^0}(\pi, \operatorname{id}) < Cr$, and
$$f_0\circ \pi = \pi\circ f.$$
In \cite{Buzzi-Fisher}, Buzzi and Fisher proved that the resulting $f$
has a unique maximal measure $\nu_0$
with $\pi_{*}\nu_0=\mu_1$. The following theorem shows that $\nu_0$
is compatible with the structure of the Pesin set
$\widetilde{\Lambda}$.
\begin{Thm} \label{Bonatti-Viana}There exist $0<\epsilon\ll1<\beta$ and a
neighborhood $U$ of $\nu_0$ in $\mathcal{M}_{inv}(\mathbb{T}^n,f)$
such that for any ergodic $\nu\in U$ it holds that $\nu\in
\mathcal{M}_{inv}(\widetilde{\Lambda}(\beta,\beta, \epsilon))$.
\end{Thm}
\begin{proof} The arguments are analogous to those of Theorem \ref{Mane}.
Choose a neighborhood $U_1$ of $\mu_1$ in
$\mathcal{M}(\mathbb{T}^n,f_0)$ such that every ergodic $\mu\in U_1$
belongs to $\mathcal{M}_{inv}(\widetilde{\Gamma}(i,\gamma),f_0)$, where
$\widetilde{\Gamma}(i,\gamma)$ is defined as in Proposition
\ref{frequence}. The continuity of $\pi$ gives rise to a neighborhood
$U$ of $\nu_0$ in $\mathcal{M}(\mathbb{T}^n,f)$ such
that $\pi_{*}U\subset U_1$.
For $N\in \mathbb{N}, \gamma>0$, define
\begin{eqnarray*}
T_N(i,\gamma)=\{x\in M\mid&&\sharp \{n\leq j\leq n+k-1\mid
f^j(x)\in B(O,r)\}\leq N+k(p_i+\gamma)+|n|\gamma,\\[2mm]&&\sharp \{n\leq j\leq n+k-1\mid
f^{-j}(x)\in B(O,r)\}\leq N+k(p_i+\gamma)+|n|\gamma\\[2mm]&&\quad \forall \,k\geq
1,\,\,\,\forall\, n\in \mathbb{Z}\}.
\end{eqnarray*}
For large $N$ we have $\nu_0(T_N(i,\gamma))>0$ and let
$$\widetilde{T}_N(i,\gamma)=\operatorname{supp}(\nu_0\mid T_N(i,\gamma)).$$
Let $N_2=[\frac{\beta N}{\gamma p_0}]+1$, $\epsilon= \max\{\delta
(p_1+\gamma+p_0),\,\gamma\}$. Then
$$T_N(1,\gamma)\subset \Lambda_{N_2}(\beta,\beta, \epsilon)
\quad \mbox{and}\quad \widetilde{T}_N(1,\gamma)\subset
\widetilde{\Lambda}_{N_2}(\beta,\beta, \epsilon).$$
For any $x\in
\Gamma_N(1,\gamma)$, $z\in \pi^{-1}(x)$, it holds that
$$d(f^i(z),f^i_0(x))=d(f^i(z),\, \pi (f^i(z)))<Cr$$ which implies
that if $f^i_0(x)\notin R_1$ then $f^i(z)\notin B(O,r)$ because
$d(B(O,r),\partial R_1)>Cr$. Thus
$$\pi^{-1}(\Gamma_N(1,\gamma))\subset T_N(1,\gamma)\subset
\Lambda_{N_2}(\beta, \beta, \epsilon)$$ which yields that
$$\pi^{-1}(\widetilde{\Gamma}_N(1,\gamma))\subset
\widetilde{\Lambda}_{N_2}(\beta, \beta, \epsilon).$$ For any ergodic
$\nu\in U$, $\pi_{*}\nu\in U_1$. So
$\pi_{*}\nu(\widetilde{\Gamma}_{N}(1,\gamma))>0$. We obtain
$$\nu(\widetilde{\Lambda}_{N_2}(\beta, \beta, \epsilon))
\geq
\nu(\pi^{-1}(\widetilde{\Gamma}_{N}(1,\gamma)))=\pi_{*}\nu(\widetilde{\Gamma}_{N}(1,\gamma))>0.$$
Once more, the ergodicity of $\nu$ yields
$\nu(\widetilde{\Lambda}(\beta, \beta, \epsilon))=1$.
\end{proof}
\noindent{\it Acknowledgement. } The authors are very grateful to the
dynamical systems seminar at Peking University; the
manuscript was improved thanks to their many suggestions.
\end{document} |
\begin{document}
\title[Deformation of quaternionic space]
{Deformation space of a non-uniform 3-dimensional real hyperbolic
lattice in quaternionic hyperbolic plane}
\author{Inkang Kim}
\date{}
\maketitle
\begin{abstract}
In this note, we study deformations of a non-uniform real hyperbolic
lattice in quaternionic hyperbolic spaces. Specifically, we show that
representations of the fundamental group of the figure eight
knot complement into $PU(2,1)$ cannot be deformed in $PSp(2,1)$ out
of $PU(2,1)$ up to conjugacy.
\end{abstract}
\footnotetext[1]{2000 {\sl{Mathematics Subject
Classification.}} 51M10, 57S25.} \footnotetext[2]{{\sl{Key words and
phrases.}} Quaternionic hyperbolic space, complex hyperbolic space,
local rigidity, representation variety.} \footnotetext[3]{The author
gratefully acknowledges the partial support of KRF grant
(0409-20060066) and the warm support of IHES during his stay.}
\section{Introduction}\label{intro}
In the 1960s, A. Weil \cite{Weil} proved the local rigidity of a
uniform lattice $\Gamma\subset G$ inside $G$, i.e., he showed that
$H^1(\Gamma,\mathfrak{g})=0$ for any semisimple Lie group $G$ not
locally isomorphic to $SL(2,{\mathbb R})$. This result implies that the
canonical inclusion map $i:\Gamma \hookrightarrow G$ is locally
rigid up to conjugacy. In other words, for any local deformation
$\rho_t:\Gamma\rightarrow G$ such that $\rho_0=i$, there exists a continuous
family $g_t\in G$ such that $\rho_t= g_t \rho_0 g_t^{-1}$. Weil's
idea was further explored by many others, notably by Raghunathan
\cite{Ra} and Matsushima-Murakami \cite{MM}. Much later Goldman and
Millson \cite{GM} considered the embedding of a uniform lattice
$\Gamma$ of $SU(n,1)$
$$\Gamma \hookrightarrow SU(n,1)\hookrightarrow SU(m,1),\ m>n$$ and
proved that there is still a local rigidity inside $SU(m,1)$ even if
one enlarges the target group. More recently further examples of
the local rigidity of a complex hyperbolic lattice in quaternionic
K\"ahler manifolds were found in \cite{KKP}:
$$\Gamma\hookrightarrow SU(n,1)\subset Sp(n,1)\subset SU(2n,2)
\subset SO(4n,4).$$
But all these examples deal with the standard inclusion map $\Gamma
\hookrightarrow G'$, so as to use Weil's original idea about
$L^2$-group cohomology. The case where
$\rho:\Gamma\rightarrow G'$ is an arbitrary representation which is not an
inclusion has not been studied much yet.
In this
note, we study deformations of a non-inclusion representation
$\rho_0$ of a non-uniform lattice $\Gamma$ of $PSL(2,{\mathbb C})$ in the
semisimple Lie groups $PU(2,1)$ and $PSp(2,1)$
$$\Gamma \stackrel{\rho_0}{\longrightarrow} PU(2,1) \subset PSp(2,1).$$
This is a sequel to the
previous paper
\cite{KP} where the deformation of the standard inclusion
$\Gamma\hookrightarrow SO(3,1)\subset SO(4,1)\hookrightarrow
Sp(n,1)$ is studied, but the techniques are quite different. In
\cite{KP}, we used group cohomology to prove the local rigidity,
following the path of \cite{Weil, Ra}. Here we use explicit
coordinates and matrix calculations to prove a kind of local
rigidity, namely that representations into $PU(2,1)$ cannot be deformed
into $PSp(2,1)$ nontrivially. We calculate the dimension of a
representation variety of the fundamental group of the figure eight
knot complement in the complex and quaternionic hyperbolic 2-planes
using Thurston's idea.
In \cite{Falbel}, Falbel constructed a special Zariski dense
discrete representation $\rho_0$ in $PU(2,1)$ with purely parabolic
holonomy for
a peripheral group. If $g_1,g_2$ are generators of
the fundamental group of the figure eight knot complement, their
images in $SU(2,1)$
\[ G_1=\left [ \begin{matrix}
1 & 1 & \frac{-1-i\sqrt 3}{2}\\
0 & 1 & -1 \\
0 & 0 & 1 \end{matrix} \right]\]
and
\[ G_2=\left[\begin{matrix}
1 & 0 & 0 \\
1 & 1 & 0\\
\frac{-1-i\sqrt 3}{2}&-1& 1 \end{matrix}\right]\] give such a
special representation in $PU(2,1)$. Note that this representation
is not faithful.
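As a sanity check (not part of \cite{Falbel}), one can verify with a
computer algebra system that $G_1$ and $G_2$ have determinant one and
preserve the Hermitian form $\langle Z,W\rangle=ZJ_0W^*$ of signature
$(2,1)$ recalled in Section \ref{Pre} below, so that they indeed define
elements of $SU(2,1)$. A minimal Python sketch using \texttt{sympy}:
\begin{verbatim}
import sympy as sp

w = (-1 - sp.I * sp.sqrt(3)) / 2          # the entry (-1 - i sqrt(3))/2

G1 = sp.Matrix([[1, 1, w],
                [0, 1, -1],
                [0, 0, 1]])
G2 = sp.Matrix([[1, 0, 0],
                [1, 1, 0],
                [w, -1, 1]])

# Hermitian form of signature (2,1): <Z, W> = Z J0 W^*.
J0 = sp.Matrix([[0, 0, 1],
                [0, 1, 0],
                [1, 0, 0]])

for name, G in (("G1", G1), ("G2", G2)):
    diff = (G * J0 * G.conjugate().T - J0).applyfunc(sp.simplify)
    print(name, "preserves the form:", diff == sp.zeros(3, 3),
          " det =", sp.simplify(G.det()))
\end{verbatim}
Both matrices are unipotent, hence define parabolic isometries.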
More precisely we prove:
\begin{thm}\label{Qstructure}Let $M$ be the figure eight knot complement, which can be made up of two
ideal tetrahedra in the quaternionic hyperbolic plane $H^2_{\mathbb H}$
glued along faces properly. For the space of representations
$\rho:\pi_1(M)\rightarrow PSp(2,1)$ which do not stabilize a quaternionic
line,
the representation variety in $PSp(2,1)$
around a discrete representation $\rho_0$ is of real dimension 3 up to conjugacy.
The representation variety into $PU(2,1)$ around $\rho_0$ is of
dimension 3 up to conjugacy as well. The variety around the
conjugacy class $[\rho_0]$ is parameterized by the three angular
invariants of the faces of one ideal tetrahedron.
\end{thm}
As a corollary we obtain
\begin{co}Any representation from the fundamental group of the
figure eight knot complement into $PU(2,1)$ near the discrete
representation $\rho_0$ cannot be deformed in $PSp(2,1)$ out of
$PU(2,1)$ up to conjugacy.
\end{co}
{\bf Acknowledgements} The author thanks an anonymous referee for
correcting some errors in the first version of the paper and for
constructive suggestions.
\section{Preliminaries}\label{Pre}
\subsection{Different models of hyperbolic space}
The set ${\mathbb H}$ of quaternions is $\{x=x_1+ix_2+jx_3+kx_4|
x_i\in{\mathbb R}\}$ with the multiplication law $ij=k,\ jk=i,\
i^2=j^2=k^2=-1$. We set ${\rm Im}\, x=ix_2+jx_3+kx_4$ and $\bar
x=x_1-ix_2-jx_3-kx_4$. We call $x$ pure imaginary if $\bar x=-x$.
The quaternions form a non-commutative division ring and, by
abuse of notation, we will write $x^{-1}=\frac{1}{x}$ for the
multiplicative inverse of $x$. Up to section \ref{qstructure},
multiplication by quaternions on ${\mathbb H}^n$ is {\bf on the left} and
matrices act {\bf on the right}. Let $J_0$ be
\[ \left [ \begin{matrix}
0 & 0 & 1 \\
0 & I_{n-1} & 0 \\
1 & 0 & 0 \end{matrix} \right ]. \]
Define $\langle Z, W \rangle_0= Z J_0 W^*$ where
$Z=(z_1,\cdots,z_{n+1})\in {\mathbb H}^{n+1}$.
Then $A\in Sp(n,1)$ if
$$AJ_0A^*=J_0.$$
Hence $$A^{-1}=J_0A^*J_0.$$
One can define projective
models, called {\bf Siegel domains}, of the hyperbolic spaces
$H^n_{\mathbb F},\ {\mathbb F}={\mathbb C},{\mathbb H}$, as the set of negative lines in the Hermitian
vector space ${\mathbb F}^{n,1}$, with Hermitian structure given by the
indefinite $(n,1)$-form
$$
\langle Z, W\rangle_0= Z J_0 W^*.
$$ Namely $H^n_{\mathbb F}$ is the left projectivization $\mathbb{P} V_-$ of the set
$$V_-=\{Z\in {\mathbb F}^{n,1}: \langle Z, Z \rangle_0 <0\}.$$
The boundary of the Siegel domain consists of projectivized zero
vectors $$V_0=\{Z\in {\mathbb F}^{n,1}-\{0\}: \langle Z, Z \rangle_0=0\}$$
together with a distinguished point at infinity $\infty$. The finite
points in the boundary carry the structure of the generalized
Heisenberg group ${\mathbb F}^{n-1}\times {\rm Im}\, {\mathbb F}$ with the group law
$$(Z,t)(W,s)=(Z+W, t+s- 2{\rm Im}\, \langle\langle Z, W \rangle\rangle )$$
where $\langle\langle Z, W \rangle\rangle=ZW^*=\sum z_i\bar w_i$ is
the standard positive definite Hermitian form on ${\mathbb F}^{n-1}$.
Motivated by this one can define {\bf horospherical coordinates} for
$H^n_{\mathbb F}$
$$\{(z,t,v)\in {\mathbb F}^{n-1}\times {\rm Im}\, {\mathbb F}\times {\mathbb R}_+\}.$$
From now on we will take $n=2$ so that we will deal only with two
dimensional hyperbolic spaces. A coordinate change $\psi$ from
Heisenberg coordinates $(z,t),z\in {\mathbb F}, t\in {\rm Im}\, {\mathbb F}$ to the
boundary of the Siegel domain is
\begin{eqnarray}\label{coo}
(\frac{-|z|^2+t}{2},z,1)\end{eqnarray} with one extra equation
$$\psi(\infty)=(1,0,0).$$
If $U\in Sp(1), \mu\in Sp(1), r\in{\mathbb R}^+$, the action fixing $0$ and
$\infty$ is given by $$ (z,t)\rightarrow (r\mu^{-1} z\mu U, r^2\mu^{-1} t
\mu).$$ See \cite{Kim,KParker}. In matrix form acting on the right
\[ H_{\mu,U,r}=\left [ \begin{matrix}
r\mu & 0 & 0 \\
0 & \mu U & 0 \\
0 & 0 & \frac{\mu}{r} \end{matrix} \right ] \]
So the hyperbolic isometry fixing $\infty$ and $0$ is determined by
$\mu,\nu=\mu U\in Sp(1)$ and $r\in{\mathbb R}^+$, so it is 7 dimensional.
For the complex hyperbolic space $H^2_{\mathbb C}$, $\mu=1$ and $U\in U(1)$.
\begin{lemma}\label{fix}The set of isometries fixing three points on the ideal
boundary of $H^2_{\mathbb H}$ which do not lie on a quaternionic line is
one dimensional, whereas such an isometry is unique in $H^2_{\mathbb C}$.
\end{lemma}
\begin{pf}We may assume that the three points are
$$\infty,0,(1,it)$$ up to the action of $Sp(2,1)$.
If $H_{\mu,U,r}$ fixes $(1,it)$, then $U=1,r=1$ and
$\mu^{-1}it\mu=it$. It is easy to show that $\mu=e^{i\theta}$,
showing that $\mu$ has one degree of freedom.
\end{pf}
The Heisenberg group acts
by right multiplication:
$$T_{(z,t)}(\zeta,v)=(z+\zeta, t+v- 2 {\rm Im}\, \zeta\bar z).$$
In matrix form acting on ${\mathbb H}^{2,1}$ on the right
\[T_{(z,t)}= \left [ \begin{matrix}
1 & 0 & 0 \\
-\bar z & 1 & 0 \\
\frac{-|z|^2+t}{2} & z & 1 \end{matrix} \right ] \]
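As an illustration of how this matrix form encodes the group structure,
the following Python sketch (using \texttt{sympy} and restricted to the
complex case ${\mathbb F}={\mathbb C}$, to avoid quaternion arithmetic) checks that
$T_{(z,t)}$ preserves the form $ZJ_0W^*$ and that matrix multiplication
reproduces the Heisenberg group law
$(z,t)(w,s)=(z+w,\,t+s-2\,{\rm Im}\,(z\bar w))$.
\begin{verbatim}
import sympy as sp

# z = x1 + i y1, w = x2 + i y2, with t = i t1 and s = i s1 pure imaginary.
x1, y1, x2, y2, t1, s1 = sp.symbols('x1 y1 x2 y2 t1 s1', real=True)
z, w = x1 + sp.I * y1, x2 + sp.I * y2
t, s = sp.I * t1, sp.I * s1

def T(z, t):
    """Heisenberg translation T_{(z,t)} in matrix form (complex case)."""
    return sp.Matrix([[1, 0, 0],
                      [-sp.conjugate(z), 1, 0],
                      [(-z * sp.conjugate(z) + t) / 2, z, 1]])

J0 = sp.Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])

# T_{(z,t)} preserves the Hermitian form Z J0 W^*.
M = T(z, t)
print((M * J0 * M.conjugate().T - J0).applyfunc(sp.expand) == sp.zeros(3, 3))

# The product of the matrices realizes the group law
# (z,t)(w,s) = (z + w, t + s - 2 Im(z conj(w))).
zt_ws = (z + w, t + s - 2 * sp.I * sp.im(sp.expand(z * sp.conjugate(w))))
print((T(z, t) * T(w, s) - T(*zt_ws)).applyfunc(sp.expand) == sp.zeros(3, 3))
\end{verbatim}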
Then a hyperbolic isometry fixing $\infty$ and $(z,t)$ is
\[ T_{(-z,-t)}\circ H_{\mu,U,r}\circ T_{(z,t)}=\left[
\begin{matrix}
r\mu &0 &0 \\
r\bar z \mu-\nu\bar z &\nu & 0 \\
r\frac{-|z|^2-t}{2}\mu+z\nu\bar z+\frac{\mu}{r}\frac{-|z|^2+t}{2}&-z\nu+\frac{\mu}{r}z&\frac{\mu}{r}
\end{matrix}\right]\]
where $\nu=\mu U\in Sp(1)$. This group is also 7 dimensional
determined by $\mu,\nu\in Sp(1),r\in{\mathbb R}^+$. We call $T_{(z,t)}$ a
pure parabolic whereas $H_{(\mu,U,1)}\circ T_{(z,t)}$
ellipto-parabolic if it fixes a unique point at infinity.
\begin{lemma}\label{pureparabolic}Two pure parabolic elements $T_{(z,t)},T_{(w,s)}$
commute if $w\bar z$ is real, i.e. $w=rz$ for some real $r$.
\end{lemma}
\begin{pf}A direct calculation shows that two elements commute iff
$w\bar z=z\bar w$, which implies that $w\bar z$ is real.
\end{pf}
There is one more isometry interchanging $\infty$ and $0$
whose matrix form is
\[ \left [ \begin{matrix}
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0 \end{matrix} \right] \] and in Heisenberg coordinates
$$I(z,t)=(-\frac{2}{|z|^2+\bar t}z, 4\frac{\bar{t}}{|z|^4+|t|^2}).$$
A reflection with respect to $H^2_{\mathbb R}$ in $H^2_{\mathbb H}$ is
an isometry. In Heisenberg coordinates, it is
$$(z,t)\rightarrow (\bar z, \bar t).$$
So
$$(z,t)\rightarrow (\frac{-2}{|z|^2+{ t}}\bar z,\frac{{4t}}{|z|^4+|t|^2})$$
is an isometry interchanging $\infty$ and $0$.
Composing with $ (z,t)\rightarrow (r\mu z U\mu^{-1}, r^2 \mu t \mu^{-1})$ we
get
$$R(z,t)=(r\mu\frac{-2 }{|z|^2+{ t}}\bar z\nu,r^2\mu \frac{{4t}}{|z|^4+|t|^2}\mu^{-1}),$$
where $r\in {\mathbb R}^+, \mu,\nu\in Sp(1)$.
\subsection{Angular invariant}
To define the angular invariant we introduce the {\bf unit ball model}
$\{Z\in {\mathbb F}^n:||Z||<1\}$ to make it compatible with the existing
literature, where ${\mathbb F}^n$ is equipped with the standard positive
definite Hermitian form. Two points $(0',-1)$ and $(0',1)$ will play
a special role. There is a natural map from the unit ball model to
$\mathbb{P}({\mathbb F}^{n,1})$, where ${\mathbb F}^{n,1}$ is equipped with the standard
$(n,1)$ Hermitian product
$$ \langle z,w\rangle =z_1\overline
w_1+\cdots+z_n\overline w_n-z_{n+1}\overline w_{n+1}\,,
$$ defined as
$$(w',w_n)\rightarrow (w',w_n,1).$$ From the unit ball model to the horospherical model, one defines the coordinate change as
$$(z',z_n)\rightarrow (\frac{z'}{1+z_n},\frac{2{\rm Im}\, z_n}{|1+z_n|^2},\frac{1-|z_n|^2-|z'|^2}{|1+z_n|^2}).$$
Its inverse, from the horospherical model to $\mathbb{P}{\mathbb F}^{n,1}$, is given
by
\begin{eqnarray}\label{ch}(\xi,v,u)=[(\xi,\frac{1-|\xi|^2-u+v}{2},\frac{1+|\xi|^2+u-v}{2})],\end{eqnarray}
where $v$ is pure imaginary, i.e., $iv$ in the complex case and
$iv_1+jv_2+kv_3$ in the quaternionic case. Note this coordinate change
is different from (\ref{coo}) since we used a different Hermitian
product. According to this coordinate change, $(0',1)=[(0',1,1)]$
corresponds to the identity element $(0',0)$ in the Heisenberg group,
$(0',-1)=[(0',-1,1)]$ to $\infty$, and $(0',0)$ to $(0,0,1)$.
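These coordinate changes can be checked numerically. Restricting to the
complex case ${\mathbb F}={\mathbb C}$, $n=2$, the following Python sketch (assuming
\texttt{numpy}) verifies, for a sample interior point of the unit ball,
that composing the two maps above returns a lift of the same projective
point, and that this lift is a negative vector for the Hermitian product
$\langle\cdot,\cdot\rangle$ above.
\begin{verbatim}
import numpy as np

def ball_to_horo(zp, zn):
    """Unit-ball coordinates (z', z_n) -> horospherical (xi, v, u), F = C, n = 2."""
    d = abs(1 + zn) ** 2
    return zp / (1 + zn), 2j * zn.imag / d, (1 - abs(zn) ** 2 - abs(zp) ** 2) / d

def horo_to_projective(xi, v, u):
    """Horospherical (xi, v, u) -> lift in C^{2,1} given by formula (ch)."""
    a = abs(xi) ** 2
    return np.array([xi, (1 - a - u + v) / 2, (1 + a + u - v) / 2])

def herm(z, w):
    """<z,w> = z_1 conj(w_1) + z_2 conj(w_2) - z_3 conj(w_3)."""
    return z[0] * np.conj(w[0]) + z[1] * np.conj(w[1]) - z[2] * np.conj(w[2])

zp, zn = 0.3 + 0.1j, 0.2 - 0.4j          # a point with |z'|^2 + |z_n|^2 < 1
lift_direct = np.array([zp, zn, 1.0])    # the lift (w', w_n, 1)
lift_horo = horo_to_projective(*ball_to_horo(zp, zn))

scale = lift_horo[2] / lift_direct[2]    # the two lifts differ by a scalar
print(np.allclose(lift_horo, scale * lift_direct))    # True
print(herm(lift_direct, lift_direct).real < 0)        # True: a negative vector
\end{verbatim}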
\begin{Def}\label{cart} The complex Cartan angular invariant $\mathbb A(x_1,x_2,x_3)$ of the ordered triple $(x_1,x_2,x_3)$
in $\partial H^n_{\mathbb C}$ was introduced by Cartan \cite{Ca} and is defined
to be the argument, between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$, of
the Hermitian triple product
$$ -\langle\tilde{x_1},\tilde{x_2},\tilde{x_3} \rangle= -\langle
\tilde{x_1},\tilde{x_2}\rangle \langle \tilde{x_2},\tilde{x_3}
\rangle \langle\tilde{x_3},\tilde{x_1}\rangle\in{\mathbb C}\ $$ where
$\tilde x_i$ is a lift of $x_i$ to ${\mathbb C}^{n,1}$. It can be obtained,
up to a constant, by integrating the K\"ahler form on $H^n_{\mathbb C}$ over
the geodesic triangle spanned by the three points \cite{Do}, hence it is
a bounded cocycle. It satisfies the cocycle relation: for
$(x_1,x_2,x_3,x_4)\in\partial H^n_{\mathbb C}$
\begin{eqnarray}\label{cocycle}
\mathbb A(x_1,x_2,x_3) +\mathbb A(x_1,x_3,x_4) =\mathbb
A(x_1,x_2,x_4)+\mathbb A(x_2,x_3,x_4).
\end{eqnarray}
The quaternionic Cartan angular invariant of a triple $x$,
$0\leq {\mathbb A}_{{\mathbb H}}(x)\leq \pi/2$, is the angle between the
first coordinate line ${\mathbb R} e_1=({\mathbb R},0,0,0)\subset{\mathbb R}^4$ and the
Hermitian triple product
$$
\langle\tilde{x_1},\tilde{x_2},\tilde{x_3} \rangle= \langle
\tilde{x_1},\tilde{x_2}\rangle \langle \tilde{x_2},\tilde{x_3}
\rangle \langle\tilde{x_3},\tilde{x_1}\rangle\in{\mathbb H}\,,
$$
where we identify ${\mathbb H}$ and ${\mathbb R}^4$.
\end{Def}
Note that the invariant is unchanged under homothety by nonzero
real numbers, i.e., the triples $x$ and $rx$ have the same angular
invariant.
\begin{Prop}\label{ang}
{\em (\cite{Pernas}, \cite{KimJKMS}).} Let $x=(x_1,x_2,x_3)$ and
$y=(y_1,y_2,y_3)$ be two triples of distinct points in $\partial H_{{\mathbb H}}^n$.
Then ${\mathbb A}_{{\mathbb H}}(x)={\mathbb A}_{{\mathbb H}}(y)$ if and only if
there is an isometry $f\in PSp(n,1)$ such that $f(x_i)=y_i$ for
$i=1,2,3$.
\end{Prop}
\begin{pf}
Applying a homothety by a nonzero real number, we may assume that our
triples have Hermitian products $X=\langle
\tilde{x_1},\tilde{x_2},\tilde{x_3} \rangle$ and $Y=\langle
\tilde{y_1},\tilde{y_2},\tilde{y_3}\rangle$ with $|X|=|Y|$.
Now let us assume that ${\mathbb A}_{{\mathbb H}}(x)={\mathbb A}_{{\mathbb H}}(y)$.
Together with $|X|=|Y|$, this implies that there is an orthogonal transformation
$M \in SO(3)\times \{\operatorname{id}\}$ acting on ${\mathbb H}={\mathbb R}^4$ that leaves
the real axis in ${\mathbb H}$ invariant and maps $X$ to $Y$. Since the
conjugation action of $Sp(1)$ on ${\mathbb H}$ realizes this $SO(3)$-action, there is
$\mu \in Sp(1)$ such that
$$
\langle \tilde{x_1},\tilde{x_2},\tilde{x_3} \rangle= \mu \langle
\tilde{y_1},\tilde{y_2},\tilde{y_3} \rangle \bar{\mu}\,.
$$
To finish the proof it is enough to choose lifts $\tilde{x_i}$ and
$\tilde{y_i}$ of the points $x_i$ and $y_i$, $i=1,2,3$, so that $\langle
\tilde{x_i},\tilde{x_j}\rangle=
\langle \tilde{y_i},\tilde{y_j}\rangle$. Indeed,
then there is $A\in Sp(n,1)$ such that $A(\tilde{x_i})=\tilde{y_i}$, $i=1,2,3$,
and it descends to an element $f\in PSp(n,1)$ such that
$f(x_i)=y_i$ for $i=1,2,3$.
To obtain such lifts, we first replace $\tilde{y_1}$ by $\mu
\tilde{y_1}$ (still denoted by
$\tilde{y_1}$) and
get $\langle \tilde{x_1},\tilde{x_2}\rangle
\langle \tilde{x_2},\tilde{x_3} \rangle \langle\tilde{x_3},\tilde{x_1}\rangle=
\langle \tilde{y_1},\tilde{y_2}\rangle
\langle \tilde{y_2},\tilde{y_3} \rangle \langle\tilde{y_3},\tilde{y_1}\rangle$.
Replacing $\tilde{x_2}$ and $\tilde{x_3}$ by $\mu_2\tilde{x_2}$ and $\mu_3\tilde{x_3}$
if necessary, we can arrange that
$\langle\tilde{x_2}, \tilde{x_3}\rangle=\langle \tilde{y_2}, \tilde{y_3}
\rangle$ and $\langle \tilde{x_3},\tilde{x_1}\rangle=\langle
\tilde{y_3},\tilde{y_1}
\rangle$. Now the equation becomes
$$
\langle \tilde{x_1},\tilde{x_2}\rangle
\langle \tilde{x_2},\tilde{x_3} \rangle \langle\tilde{x_3},\tilde{x_1}\rangle=
|\mu_2|^2|\mu_3|^2 \langle \tilde{y_1},\tilde{y_2}\rangle
\langle \tilde{y_2},\tilde{y_3}\rangle
\langle\tilde{y_3},\tilde{y_1}\rangle\,,
$$
and we get $\langle \tilde{x_1},\tilde{x_2}\rangle= r\langle
\tilde{y_1},\tilde{y_2}\rangle$ where $r=|\mu_2||\mu_3|$. Then,
replacing $\tilde{x_1}$, $\tilde{x_2}$, $\tilde{x_3}$ and
$\tilde{y_1}$ by $r^{-1}\tilde{x_1}$, $r^{-1}\tilde{x_2}$,
$r\tilde{x_3}$ and $r^2\tilde{y_1}$ respectively, we finally get
$\langle \tilde{x_i},\tilde{x_j}\rangle= \langle
\tilde{y_i},\tilde{y_j}\rangle$, and hence the desired $f\in
PSp(n,1)$.
The converse is trivial.
\end{pf}
\begin{thm}\label{dist}
For distinct points $x_1,x_2,x_3 \in \partial{H^n_{\mathbb H}}$, let
$\sigma_{12}$ and $\Sigma_{12}$ be the real and the quaternionic geodesics
containing the two points $x_1$ and $x_2$, and let $\Pi: H^n_{\mathbb
H}\rightarrow \Sigma_{12}$ be the orthogonal projection. Then
$$
|\tan{{\mathbb A}_{{\mathbb H}}(x)}|=\sinh(d(\Pi(x_3),\sigma_{12})),
$$
where $d$ is the hyperbolic distance in $H^n_{\mathbb H}$.
\end{thm}
\begin{pf}
Up to an isometry (in the unit ball model of $H_{{\mathbb H}}^n$), we may assume that
the triple $x$ consists of $x_1=(0,-1)$, $x_2=(0,1)$, $x_3=(z',z_n)$,
whose lifts are $\tilde{x_1}=(0,-1,1)$, $\tilde{x_2}=(0,1,1)$,
$\tilde{x_3}=(z',z_n,1)$.
In this setting $\sigma_{12}=\{(0,t): t\in{\mathbb R},\ |t|<1\}$,
$\Sigma_{12}=\{(0,z): z\in{\mathbb H},\ |z|<1\}$, and $\langle
\tilde{x_1},\tilde{x_2},\tilde{x_3} \rangle=2(\bar{z_n}-1)(1+z_n)$.
So we get
$$
\Big|\tan{{\mathbb A}_{{\mathbb H}}(x)}\Big|=\frac{|2\operatorname{Im}
(z_n)|}{1-|z_n|^2}\,.
$$
On the other hand, we note that $\Pi(x_3)=z_n$ and that $\Sigma_{12}$ carries
the Poincar\'e ball model geometry of $H_{{\mathbb R}}^4$ with sectional
curvature $-1$. Choose a hyperbolic two-plane in $\Sigma_{12}$ that
contains the geodesic $\sigma_{12}$ and $z_n$. This plane is a
Poincar\'e disk with curvature $-1$, in which we can write
$z_n=\operatorname{Re}{z_n}+i|\operatorname{Im}{z_n}|$. Let $d$ be the hyperbolic distance
between the point $z_n$ and the real axis in that Poincar\'e disk.
Then a direct calculation shows that
$\sinh(d)=|2\operatorname{Im}(z_n)|/(1-|z_n|^2)$.
\end{pf}
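For completeness, here is one way to carry out the last calculation, assuming the standard Poincar\'e disk metric $ds=\frac{2|dz|}{1-|z|^2}$ of curvature $-1$ on that two-plane. Reflection across the real axis is an isometry fixing it pointwise, so $d=d(z_n,{\mathbb R})=\frac12 d(z_n,\bar z_n)$. The distance formula $\cosh d(z,w)=1+\frac{2|z-w|^2}{(1-|z|^2)(1-|w|^2)}$ applied with $w=\bar z_n$ gives
$$
1+2\sinh^2 d=\cosh(2d)=\cosh d(z_n,\bar z_n)=1+\frac{8|\operatorname{Im} z_n|^2}{(1-|z_n|^2)^2},
$$
and hence $\sinh d=|2\operatorname{Im}(z_n)|/(1-|z_n|^2)$, as claimed.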
\section{Q-structure}\label{qstructure}
After the complete hyperbolic structure on the figure eight knot
complement was first given in \cite{Ri}, W. Thurston described in \cite{Th} a
complete, finite volume hyperbolic structure on the complement of
the figure eight knot in the 3-sphere, obtained by gluing two
tetrahedra, and then showed how to deform it to non-complete
structures. All these structures have holonomies, which are
homomorphisms of the fundamental group $\Gamma$ of the figure eight
knot complement (a group with 2 generators and 1 relator) to $SO(3,1)^0$.
In fact, the {\bf character variety}
$\chi(\Gamma,SO(3,1)^0)=Hom(\Gamma,SO(3,1)^0)/SO(3,1)^0$ is a smooth
1-dimensional complex manifold near the conjugacy class of the
holonomy of the complete hyperbolic structure.
Let $\rho:\Gamma\rightarrow PSp(2,1)$ be the holonomy of the complete
hyperbolic structure, followed by the embedding $SO(3,1)^0 \rightarrow
SO(4,1)^0 =PSp(1,1)\rightarrow PSp(2,1)$. In \cite{KP}, it is shown that any
parabolicity-preserving local deformation around $\rho$ again preserves a
quaternionic line. It is conjectured that any deformation
around $\rho_0$ preserves a quaternionic line.
In this note, we prove that, inside the space of representations of the
fundamental group of the figure eight knot complement into $PSp(2,1)$
which do not stabilize a quaternionic line, the component containing
$\rho_0$ (introduced in Section \ref{intro}), whose peripheral holonomy is
purely parabolic, is actually conjugate into $PU(2,1)$.
\subsection{The figure eight knot complement}
Consider the following complex. Glue together two tetrahedra by
identifying faces pairwise according to the pattern indicated in
Figure 1.
The resulting complex has two 3-cells, four faces, two edges and one
vertex. According to W. Thurston, the complement of the vertex is
homeomorphic to the complement $M$ of the figure eight knot in the
3-sphere. Identify the two 3-cells of $M$ with regular ideal
tetrahedra in (compactified) $H_{{\mathbb R}}^{3}$. This defines a
hyperbolic structure on $M$, and therefore a homomorphism $\rho_0
:\Gamma=\pi_1 (M)\rightarrow SO(3,1)^0$. Here is how W. Thurston deforms it.
Up to isometry, an ideal tetrahedron in $H_{{\mathbb R}}^{3}$ is
characterized by one complex number. Identifying the 3-cells of $M$
with arbitrary ideal hyperbolic tetrahedra defines a hyperbolic
structure on the complement of the 2-skeleton, depending on two
complex parameters. One has to make sure that the gluing maps are
isometries which extend the hyperbolic structure across the faces.
In order for the hyperbolic structure to extend across the two
edges, two algebraic equations must be satisfied, but one of them
turns out to follow from the other, as we can see from Lemma
\ref{pure}. As a consequence, one obtains a (complex) one-parameter
family of hyperbolic structures on $M$.
We adapt this construction to obtain homomorphisms $\Gamma\rightarrow
PSp(2,1)$. To this end, we first classify ideal tetrahedra in
$H_{{\mathbb H}}^{2}$, then introduce the relevant geometric structure,
baptised $Q$-structure, and describe the compatibility equations
along the edges.
\subsection{Ideal triangles and tetrahedra in $H_{{\mathbb H}}^2$}
The group $PSp(2,1)$ does not act transitively on triples of points of
$\partial H_{{\mathbb H}}^2$. By Proposition \ref{ang}, two triples
are mapped to each other by an isometry if and only if they have the same
angular invariant.
By Theorem \ref{dist}, the angular invariant of a triple vanishes if
and only if all points sit in a (compactified) totally real totally
geodesic plane. It takes the value $\pi/2$ if and only if all points sit
in a (compactified) quaternionic line.
If three ideal points $x_1$, $x_2$ and $x_3 \in \partial H_{{\mathbb H}}^2$
do not belong to a quaternionic line, the set of isometries that fix
them is one-dimensional by Lemma \ref{fix}. It follows that if two
triangles have equal angular invariants different from $\pi/2$,
there is a one-dimensional family of isometries that send one to
the other. Since the boundary of $H_{{\mathbb H}}^2$ is 7-dimensional, for
each $c\in[0,\pi/2)$ the space of ideal tetrahedra
$(x_1,\ldots,x_4)$ with ${\mathbb A}_{{\mathbb H}}(x_1,x_2,x_3)=c$, up to
isometry, has dimension 6. It follows that ideal tetrahedra up to
isometry depend on 7 parameters.
\begin{Def}Let $A=\{x_1,\cdots,x_k\}$, $k\geq 3$, be a collection of distinct
ideal points on the ideal boundary of a rank one symmetric space
$X$. The geometric center of $A$ in $X$ is the barycenter of the
associated measure $\delta_A=\sum \delta_{x_i}$. In more detail,
let
$$F(x)=\int_{\partial X} B_o(x,\xi)d\delta_A(\xi)$$ be the function
defined on $X$, where $B_o$ is the Busemann function normalized so that
$B_o(o,\xi)=0$. Then $F$ is strictly convex and its value goes to
$\infty$ as $x$ tends to $\partial X$. The barycenter $x_0$ of
$\delta_A$ is characterized by
$$dF_{x_0}(\cdot)=\int_{\partial X} (dB_o)_{(x_0,\xi)}(\cdot) d\delta_A(\xi)=0.$$
\end{Def}
Let $\Delta$ denote a fixed regular ideal tetrahedron in
$H_{{\mathbb R}}^3$, and let $\dot{\Delta}\subset \Delta$ be the complement of
the 1-skeleton. Given an ideal tetrahedron (i.e. 4 distinct points
at infinity $(x_1,\ldots,x_4)$) in $H_{{\mathbb H}}^2$, the {\em straight
singular simplex} spanning them is the continuous map of
$\dot{\Delta}$ to $H_{{\mathbb H}}^2$ defined as follows. For each face
$(s_i,s_j,s_k)$ of $\Delta$, map the barycenter $s_{ijk}$ of
$(s_i,s_j,s_k)$ in $H_{{\mathbb R}}^3$ to the geometric barycenter $x_{ijk}$
of $(x_i,x_j,x_k)$ in $H_{{\mathbb H}}^2$. Map the orthogonal projection of
$s_{ijk}$ to the edge $[s_i,s_j]$ to the orthogonal projection of
$x_{ijk}$ to the geodesic $[x_i,x_j]$ defined by $x_i$ and $x_j$, and
extend to an isometric map of the edge $[s_i,s_j]$ onto the geodesic
$[x_i,x_j]$. Then map each geodesic segment joining $s_{ijk}$ to a point of
$[s_i,s_j]$ to a constant speed geodesic segment joining $x_{ijk}$
to the corresponding point of $[x_i,x_j]$. Finally, map each geodesic
segment joining the barycenter of $(s_1,\ldots,s_4)$ to a point on a
face to a constant speed geodesic segment from the barycenter of
$(x_1,\ldots,x_4)$ to the corresponding point in the previously
defined parametrizations of the faces. The resulting map may in general be
discontinuous along the edges, but let us ignore the edges.
\subsection{$Q$-structures}
\begin{Def}
\label{Q} Let $M$ be a manifold. A {\em $Q$-structure} on $M$ is an
atlas of charts $\phi_j: U_j \to H_{{\mathbb H}}^2$ which are continuous
maps from open sets of $M$ to $H_{{\mathbb H}}^2$, such that on $U_j \cap
U_k$, $\phi_k =\psi_{jk}\circ\phi_j$ for some unique $\psi_{jk}\in
Sp(2,1)$.
\end{Def}
Pick a pair of ideal tetrahedra whose faces have pairwise equal
angular invariants, all different from $\pi/2$. Map the 3-cells of
the figure eight knot complement $M$ to $H_{{\mathbb H}}^2$ using the
straight singular simplices spanning the chosen ideal tetrahedra. This
defines a $Q$-structure on $M$ with the 2-skeleton deleted.
The $Q$-structure extends across faces. Indeed, each face
$(x_i,x_j,x_k)$ of the tetrahedron $T$ is isometric to a unique face
$(y_{i'},y_{j'},y_{k'})$ of the tetrahedron $T'$. Let $\psi\in Sp(2,1)$
be the unique isometry which maps one face to the other. Then the
straight singular simplices spanning $T$ and $\psi^{-1}(T')$ take
the same values along the common face, and so form a chart defined
in a neighborhood of that face. The elements of $Sp(2,1)$ realizing
the changes of charts with the two previously defined charts are
the identity and $\psi$ respectively.
The $Q$-structure extends across an edge $[s_i,s_j]$ if and only if
its holonomy around that edge, an element of $Sp(2,1)$, equals the
identity. Let us compute the holonomy based at a point of $T$. A priori,
we know that the holonomy maps the image geodesic $[x_i,x_j]$ to itself.
As in Section \ref{Pre}, the stabilizer of $[x_i,x_j]$ is ${\mathbb R}\times
Sp(1)Sp(1)$. Therefore, vanishing of the holonomy amounts to 7
equations. Since there are two edges, we get 14 equations. But we
expect the equations provided by the two edges to be dependent, as
happens in $SO(3,1)$.
\begin{figure}
$$\includegraphics[width=.8\linewidth]{compatibility3.eps}$$
\caption{Gluing pattern of the figure
eight knot complement}
\end{figure}
\begin{Prop}The holonomies around the two edges of the
figure eight knot complement obtained by gluing two ideal tetrahedra
are $g_1^{-1}g_3g_2^{-1}g_1g_3^{-1}$ and $g_2^{-1}g_3g_2g_1^{-1}$,
where $g_1,g_2,g_3$ are the elements of $Sp(2,1)$ appearing in the gluing
pattern. One can permute the order of the elements appearing in these
products of $g_1,g_2,g_3$.
\end{Prop}
\begin{pf}
The gluing pattern yields the two pictures around the two edges, and
the proof follows. See Figure 1.
\end{pf}
\begin{lemma}\label{pure}If the holonomies around the two edges multiply to a pure parabolic element
fixing a common point of the two edges (which is automatically the case
if the holonomies around the edges are trivial), then one holonomy is
determined by the other.
\end{lemma}
\begin{pf}The two tetrahedra glue up to produce two edges, $p_1p_2$
and $p_1q_2$. Once we fix the two tetrahedra, the two holonomies $H_1$ and
$H_2$ around $p_1p_2$ and $p_1q_2$ are hyperbolic isometries
stabilizing them. Since the two edges share $\infty$, the product
$H_1H_2$ must fix $\infty$. Now impose the restriction that $H_1H_2$ is
a pure parabolic isometry. Then
\[ H_1=\left [ \begin{matrix}
q\alpha & 0 & 0 \\
0 & \beta & 0 \\
0 & 0 & \frac{\alpha}{q} \end{matrix} \right ] \]
and
\[ H_2=\left[
\begin{matrix}
r\mu &0 &0 \\
r\bar z \mu-\nu\bar z &\nu & 0 \\
r\frac{-|z|^2-t}{2}\mu+z\nu\bar z+\frac{\mu}{r}\frac{-|z|^2+t}{2}&-z\nu+\frac{\mu}{r}z&\frac{\mu}{r}
\end{matrix}\right]\]
where $q,r\in{\mathbb R}^+$ and $\alpha,\beta,\mu,\nu\in Sp(1)$. Since
$H_1H_2$ is a pure parabolic element fixing $\infty$,
\[H_1H_2=T_{(w,s)}= \left [
\begin{matrix}
1 & 0 & 0 \\
-\bar w & 1 & 0 \\
\frac{-|w|^2+s}{2} & w & 1 \end{matrix} \right ] \]
for some $(w,s)$. From this equation we obtain
$$qr\alpha\mu=1,\quad \beta\nu=1,\quad \frac{\alpha\mu}{qr}=1.$$
So $r=\frac{1}{q}$, $\mu=\frac{1}{\alpha}$ and $\nu=\frac{1}{\beta}$. This
shows that $H_2$ is completely determined by $H_1$.
\end{pf}
\begin{co}The dimension of the space of real hyperbolic structures
near the complete one on the figure eight knot complement is 2.
\end{co}
\begin{pf}
An ideal
tetrahedron in a quaternionic line (which is isometric to $H^4_{\mathbb R}$)
is determined by one complex variable, so the two tetrahedra give two
complex parameters. A holonomy around an edge belongs to the
stabilizer of a geodesic, in our notation $H_{(\mu,I,r)}$ with
$\mu\in U(1)\subset Sp(1)$. The two holonomies around the edges fix
$\infty$ in common, so their product is naturally a parabolic element.
Then by Lemma \ref{pure}, one holonomy determines the other. Hence
there are two complex parameters subject to one complex equation, and the
solution space has complex dimension one.
\end{pf}
First we calculate the dimension of the representation variety near $\rho_0$ in $PU(2,1)$.
\begin{Prop}\label{complex}The dimension, up to conjugacy, of the component containing $\rho_0$ of the
variety of representations of the fundamental group of the figure
eight knot complement into $PU(2,1)$ is 3.
\end{Prop}
\begin{pf}We claim that there are 4 degrees of freedom in the choice of an ideal tetrahedron. Choosing three points up to
the action of $PU(2,1)$ gives one degree of freedom, corresponding to
the Cartan angular invariant. Once three points are fixed, there are
3 degrees of freedom for the last vertex, since the boundary of
$H^2_{\mathbb C}$ is three-dimensional. Hence there are in total 4 degrees of
freedom to determine an ideal tetrahedron. To determine the second
tetrahedron, we claim that there is only one degree of freedom. By the gluing
pattern, three vertices of the second tetrahedron are determined
according to the angular invariant. The last vertex of the second
tetrahedron is connected to these three vertices to form 3 faces
whose angular invariants are pre-determined by the gluing pattern.
Since one angular invariant in a tetrahedron is determined once the other three are
known, by the cocycle relation (\ref{cocycle}), two more
angular invariants determine all the angular invariants. Since
the last vertex moves in the 3-dimensional space $\partial
H^2_{\mathbb C}$ subject to two pre-determined angular invariants, there are only
$3-2$ degrees of freedom for the second tetrahedron. So there
are in total $4+1$ degrees of freedom to choose two tetrahedra and glue
them according to the pattern. The 5 points of the two tetrahedra can
be written as
$$p_1=\infty,\ p_2=0,\ q_1=(1,t),\ q_2=(z,s),\ q_3=(w,r)$$ where $z,w\in{\mathbb C}$ and $t,s,r\in
\operatorname{Im} {\mathbb C}$. The coordinate change from horospherical coordinates
$(z,t)$, $z\in {\mathbb C}$, $t\in \operatorname{Im} {\mathbb C}$, to ${\mathbb C}^{2,1}$ is
$$(\frac{-|z|^2+t}{2},z,1).$$ Then
$$p_1=(1,0,0),\ p_2=(0,0,1),\ q_1=(\frac{-1+t}{2},1,1),\ q_2=(\frac{-|z|^2+s}{2},z,1),$$$$q_3=(\frac{-|w|^2+r}{2},w,1).$$
As above, since there are 5 free parameters, there is only one
degree of freedom for $(w,r)$. This can be seen explicitly as follows.
Note that the isometries in $PU(2,1)$
$$g_1:(q_2,q_1,p_1)\rightarrow (q_3,p_2,p_1)$$
$$g_2:(p_2,q_1,q_2)\rightarrow (p_1,q_2,q_3)$$
$$g_3:(q_1,p_2,p_1)\rightarrow (q_2,p_2,q_3)$$ are all uniquely determined
by their angular invariants. Hence $g_1$ gives rise to one real
equation in $t,z,s,w,r$, which can be derived from the angular
invariants of the faces $(q_2,q_1,p_1)$ and $(q_3,p_2,p_1)$. Indeed,
one can carry out the calculation explicitly. Using the coordinate change
formula (\ref{ch}),
$$p_1=(0,-1,1),\ p_2=(0,1,1),\ q_1=(1,\frac{t}{2},\frac{2-t}{2}),$$$$
q_2=(z,\frac{1-|z|^2+s}{2},\frac{1+|z|^2-s}{2}),\
q_3=(w,\frac{1-|w|^2+r}{2},\frac{1+|w|^2-r}{2})$$ in ${\mathbb C}^{2,1}$
with the standard $(2,1)$ Hermitian form $\langle Z, W \rangle=
z_1\bar w_1+ z_2\bar w_2-z_3\bar w_3$, so that
$$-\langle q_2, q_1\rangle\langle q_1, p_1\rangle\langle p_1,q_2\rangle=\frac{|z-1|^2-s-z+\bar z+t}{2},$$
$$-\langle q_3,p_2\rangle\langle p_2,p_1\rangle \langle p_1, q_3 \rangle=|w|^2-r.$$
From $\mathbb A(q_2,q_1,p_1)=\mathbb A (q_3,p_2,p_1)$, we get
\begin{eqnarray}\label{angeq}\frac{r}{|w|^2}=\frac{s+z-\bar z-t}{|z-1|^2}.\end{eqnarray} If $A$ is a
matrix representing $g_1$, then since $A\in U(2,1)$,
$$AJ_0A^*=J_0.$$ From the fact that $g_1$ fixes $p_1$ and sends
$q_1$ to $p_2$, it is easy to show that $A$ is of the form
\[\left[
\begin{matrix}
a &0 &0 \\
a & b & 0 \\
a(-1-t)/2 &-b& \bar a^{-1}
\end{matrix}\right]\]
where $|b|=1$. The fact that $g_1$ sends $q_2$ to $q_3$ implies
that
$$w=bz\bar a-b\bar a=(z-1)b\bar a,$$
$$\frac{-|w|^2+r}{2}=\frac{-|z|^2|a|^2+s|a|^2}{2}+z|a|^2+\frac{-|a|^2-t|a|^2}{2}
=\frac{-|z-1|^2|a|^2+s|a|^2+z|a|^2-\bar z|a|^2-t|a|^2}{2}.$$
From the first equation we have $a=\frac{b\bar w}{\bar z-1}$, so
$|a|^2=\frac{|w|^2}{|z-1|^2}$. Substituting this into the second
equation gives
$$\frac{-|w|^2+r}{2}=\frac{-|w|^2+(s+z-\bar
z-t)\frac{|w|^2}{|z-1|^2}}{2},$$ which is just the
angular invariant identity (\ref{angeq}).
The same is true for $g_2$. The equation coming from $g_3$ follows
from the equations for $g_1$ and $g_2$, since three angular
invariants determine the remaining one in a tetrahedron. In conclusion, out
of the 7 real parameters in $t,s,r,z,w$, there are two angular invariant
equations, which leaves a 5-dimensional space, as expected.
Now we consider the holonomy relations. Since there is a unique element
sending three points in general position to three other points with the same
angular invariant, $g_1,g_2,g_3$ are uniquely
determined in terms of $t,z,s,w,r$. The holonomy
$g_1^{-1}g_3g_2^{-1}g_1g_3^{-1}\in SU(2,1)$ around the edge
connecting $\infty$ and $0$ is of the form
\[\left[
\begin{matrix}
q\alpha &0 &0 \\
0 &\beta & 0 \\
0&0& \alpha/q
\end{matrix}\right]\] where $\alpha$ is a unit complex number
and $\beta=\alpha^{-2}$.
Now $q\alpha$ is a function of $t,z,s,w,r$. To get a representation
one needs
\begin{eqnarray}\label{ho}
q\alpha=f(t,z,s,w,r)+g(t,z,s,w,r)i=1.\end{eqnarray} Since
$q\alpha=f(t,z,s,w,r)+g(t,z,s,w,r)i$ is a complex number, this amounts to
two real equations. So there are 5 parameters subject to 2
equations, which gives an at most 3-dimensional solution space. Since
the other holonomy equation, for the second edge, follows from the
first by Lemma \ref{pure}, the solution space is 3-dimensional.
\end{pf}
Next we deal with $PSp(2,1)$. Here we would like to show that, up to
conjugacy, representations of the fundamental group of the figure eight
knot complement into $PU(2,1)$ cannot be deformed inside $PSp(2,1)$
away from $PU(2,1)$. We do this by showing that the dimension of the
variety of representations near $\rho_0$ in $PSp(2,1)$ is also 3.
We begin with a lemma.
\begin{lemma}\label{nonconjugate}
Let $\rho_1,\rho_2:\Gamma \rightarrow SU(2,1)$ be two non-conjugate Zariski
dense representations. Then they are not conjugate even in
$Sp(2,1)$.
\end{lemma}
\begin{pf}Let $\rho_1(\alpha)=A_1$, $\rho_1(\beta)=A_2$ and
$\rho_2(\alpha)=B_1$, $\rho_2(\beta)=B_2$ for the generators $\alpha$
and $\beta$ of the fundamental group of the figure eight knot
complement, so that $A_i,B_i\in SU(2,1)$. Suppose $\rho_1$ and
$\rho_2$ are conjugate in $Sp(2,1)$, i.e., there exists $X\in
Sp(2,1)$ such that
$$X A_1 X^{-1}=B_1,\ X A_2 X^{-1}=B_2,$$ where $X=X_1+ X_2 j$,
$X^{-1}=X_3+ X_4 j$ and $X_1,X_2,X_3,X_4$ are $3\times 3$ complex
matrices. From $XX^{-1}=I$ we get the relations
$$X_1X_3- X_2\bar X_4=I,\ X_1X_4+ X_2\bar X_3=0.$$
The conjugation relations give
\begin{eqnarray}\label{1}
X_1A_1X_3- X_2\bar A_1 \bar X_4=B_1,\ X_1A_1X_4+ X_2\bar A_1\bar
X_3=0,\end{eqnarray}\begin{eqnarray}\label{2} X_1A_2X_3 - X_2\bar
A_2\bar X_4=B_2,\ X_1A_2X_4+ X_2\bar A_2\bar X_3=0.\end{eqnarray}
Since $\rho_1$ and $\rho_2$ are not conjugate in $SU(2,1)$, $X_2\neq
0$.
If $X_1=0$, then $X=X_2j$ with $X_2\in SU(2,1)$. But $$X_2j A_i (X_2 j)^{-1}=
X_2 j A_i (-j)X_2^{-1}=X_2 \bar A_i X_2^{-1}.$$ It is easy to show
that $X_2 \bar A_i X_2^{-1}\notin SU(2,1)$ for a generic element
$A_i$. Since $\rho_i$ is Zariski dense, we may assume that $X_2 \bar
A_i X_2^{-1}\notin SU(2,1)$ by choosing the generators $\alpha$ and
$\beta$ properly.
Hence $X_1\neq 0\neq X_2$. Then it is easy to show that $X_3\neq
0\neq X_4$.
Since $\rho_1$ is a Zariski dense representation, we can choose
$A_1$ and $A_2$ to be generic and independent by choosing different
generators. Then for $X_1$ and $X_2$ there are at most 18 complex
parameters (indeed, from $X_iJ_0X_i^*=J_0$ there are fewer than 18),
whereas Equations (\ref{1}) and (\ref{2}) give
at least $9\times 4$ complex equations. This forces the system to have
no solutions.
\end{pf}
\begin{Prop}The dimension, up to conjugacy, of the component containing $\rho_0$ of the
variety of representations from the fundamental group of the figure
eight knot complement to $PSp(2,1)$ which cannot
be conjugated into $PSp(1,1)$ is 3.
\end{Prop}
\begin{pf}We claim that there are 10 parameters for the choice of the
two tetrahedra. Put one tetrahedron in the standard position
$$p_1=\infty,\ p_2=0,\ q_1=(1,ia),\ q_2=(z,t).$$
There is one degree of freedom for $q_1$ up to $PSp(2,1)$
(corresponding to the Cartan angular invariant, or more concretely
to $a\in\mathbb{R}$). But there is a one-parameter
family of isometries fixing $p_1,p_2,q_1$ by Lemma \ref{fix}. So
there are 6 degrees of freedom for $q_2$ (since the dimension of the
boundary of $H^2_{\mathbb H}$ is 7), which makes 7 degrees of freedom for
the first tetrahedron.
Once the first tetrahedron is chosen, three vertices of the second
tetrahedron are determined by its angular invariant according to
the gluing pattern. The last vertex of the second
tetrahedron is connected to these three vertices to form three
different faces, whose angular invariants are already determined
by the first tetrahedron via the gluing pattern. Hence there are $6-3$
degrees of freedom to choose the last vertex of the second
tetrahedron. In conclusion, there are in total $7+3$ degrees of freedom
to choose the two tetrahedra. Note that our parameters are written in
terms of $(1,ia)$, $(z,t)$ for the first tetrahedron and
$(1,ib)$, $(w,s)$ for the second, so the parameters lie in a 10-dimensional
subspace of ${\mathbb R}\times{\mathbb R}\times {\mathbb H}\times{\mathbb H}\times \operatorname{Im}
{\mathbb H} \times\operatorname{Im} {\mathbb H}$.
By Lemma \ref{pure}, requiring the holonomy $H_1$ around an edge
to be the identity gives 7 equations. So we have 10 variables with 7
real equations. Note that $H_1$ is a product of elements of
$PSp(2,1)$ which can be written in terms of $(1,ia)$,
$(z,t)$, $(1,ib)$, $(w,s)$, while $H_1$ itself can be written, as above, in terms of
$r\in {\mathbb R}^+$ and $\mu,\nu\in Sp(1)\subset {\mathbb H}$. So $H_1=\operatorname{id}$ produces 7
independent equations in terms of $(1,ia)$, $(z,t)$, $(1,ib)$, $(w,s)$.
Since the dimension of the variety into $PU(2,1)$ is already 3, and
since two non-conjugate Zariski dense representations in $PU(2,1)$ cannot
be conjugated by an element of $PSp(2,1)$ by Lemma
\ref{nonconjugate}, the dimension of the variety into $PSp(2,1)$
must also be 3. This shows that every representation in $PSp(2,1)$
near $\rho_0$ is conjugate into $PU(2,1)$, so there is no
deformation.
\end{pf}
We suspect that this component, containing a discrete representation
$\rho_0$
with purely parabolic holonomy for a peripheral group in $PU(2,1)$,
is disjoint from the component containing the holonomy
representation of the complete real hyperbolic structure in
$PSp(1,1)$ inside the representation variety in $PSp(2,1)$.
\section{Parameters for the character variety in $PU(2,1)$ near
$\rho_0$} We showed that the dimension of the character variety from
the fundamental group $\Gamma$ of the figure eight knot complement
to $PU(2,1)$ near $[\rho_0]$ is 3. In this section we parameterize
this space using angular invariants. We use the notation of
Proposition \ref{complex}. To parameterize the two ideal tetrahedra,
we used the five points
$$p_1=\infty,\ p_2=0,\ q_1=(1,t),\ q_2=(z,s),\ q_3=(w,r)$$ where $z,w\in{\mathbb C}$ and $t,s,r\in
\operatorname{Im} {\mathbb C}$. From $\mathbb A(q_2,q_1,p_1)=\mathbb A (q_3,p_2,p_1)$, we
had $$\frac{r}{|w|^2}=\frac{s+z-\bar z-t}{|z-1|^2}.$$ A
direct calculation shows that from $\mathbb A(q_1,p_2,p_1)=\mathbb
A(q_2,p_2,q_3)$ we get
$$\arg(\frac{1-t}{2})=\arg(\frac{(|z|^2-s)(|w|^2+r)(|w-z|^2-r-w\bar z+z\bar w+s)}{8}).$$
Hence there are 5 independent parameters among $t,s,z,w,r$ with which to
parameterize the two ideal tetrahedra glued according to the gluing pattern.
Finally, the holonomy equation around the edge $(p_1,p_2)$ gives two
more real equations, hence the real dimension of the character
variety around $[\rho_0]$ is 3. Here we express these 3 parameters in
terms of Cartan angular invariants.
\begin{Prop}The character variety $\chi(\Gamma,PU(2,1))$ around
$[\rho_0]$ is parameterized by the three angular invariants $\mathbb
A(p_1,p_2,q_j)$, $j=1,2,3$.
\end{Prop}
\begin{pf}Since $\mathbb A(q_3,p_2,p_1)=\mathbb A(q_2,q_1,p_1)$,
knowing the three angular invariants $\mathbb A(p_1,p_2,q_j)$, $j=1,2,3$,
completely determines the angular invariants of the first
tetrahedron by the cocycle relation (\ref{cocycle}). The angular
invariants $\mathbb A(p_1,p_2,q_j)$, $j=1,2,3$, are functions of
$t,z,s$ only. The gluing maps $g_1,g_2,g_3$ relate the variables $t,z,s$ to the
variables $w,r$, and hence so does the holonomy map
$g_1^{-1}g_3g_2^{-1}g_1g_3^{-1}$.
In other words, the holonomy equation does not create any relation among
$t,z,s$. This shows that the three angular invariants $\mathbb
A(p_1,p_2,q_j)$, $j=1,2,3$, are independent, and hence these three
parameters parameterize the character variety around
$[\rho_0]$.
\end{pf}
In \cite{Falbel}, it is shown that the coordinates of the tetrahedra
corresponding to $\rho_0$ are
$$p_1=\infty,\ p_2=0,\ q_1=(1,\sqrt 3),\ q_2=(-\frac{-1-i\sqrt 3}{2},
\sqrt 3),\ q_3=(\frac{-1+i\sqrt 3}{2},\sqrt 3)$$ and
$$\mathbb A(p_1,p_2,q_j)=\frac{\pi}{3},\ j=1,2,3.$$
Hence in our coordinates,
$[\rho_0]=(\frac{\pi}{3},\frac{\pi}{3},\frac{\pi}{3})$.
\begin{thebibliography}{99}
{\small
\bibitem{Ca} E. Cartan, Sur les groupes de la g\'eom\'etrie hypersph\'erique, Comm. Math. Helv. \textbf{4} (1932), 158--171.
\bibitem{Do} A. Domic and D. Toledo, The Gromov norm of symmetric domains, Math. Ann. \textbf{276} (1987), 495--520.
\bibitem{Falbel} E. Falbel, A spherical CR structure on the complement of the figure eight knot with discrete holonomy, J. Differential Geom. \textbf{79} (2008), no. 1, 69--110.
\bibitem{GM} W. Goldman and J. Millson, Local rigidity of discrete groups acting on complex hyperbolic space, Invent. Math. \textbf{88} (1987), 495--520.
\bibitem{KKP} I. Kim, B. Klingler and P. Pansu, Local quaternionic rigidity for complex hyperbolic lattices, to appear in J. Inst. Math. Jussieu.
\bibitem{KP} I. Kim and P. Pansu, Local rigidity in quaternionic hyperbolic space, J. Eur. Math. Soc. \textbf{11} (2009), no. 6, 1141--1164.
\bibitem{KimJKMS} I. Kim, Geometry on exotic hyperbolic spaces, J. Korean Math. Soc. \textbf{36} (1999), 621--631.
\bibitem{Kim} I. Kim, Marked length rigidity of rank one symmetric spaces and their products, Topology \textbf{40} (2001), 1295--1323.
\bibitem{KParker} I. Kim and J. Parker, Geometry of quaternionic hyperbolic manifolds, Math. Proc. Cambridge Philos. Soc. \textbf{135} (2003), 291--320.
\bibitem{MM} Y. Matsushima and S. Murakami, On vector bundle valued harmonic forms and automorphic forms on symmetric riemannian manifolds, Ann. of Math. (2) \textbf{78} (1963), 365--416.
\bibitem{Pernas} L. Pernas, G\'eom\'etrie hyperbolique quaternionnienne, Th\`ese, Universit\'e Paris-Sud (1999).
\bibitem{Ra} M. S. Raghunathan, On the first cohomology of discrete subgroups of semi-simple Lie groups, Amer. J. Math. \textbf{87} (1965), 103--139.
\bibitem{Ri} R. Riley, A quadratic parabolic group, Math. Proc. Cambridge Philos. Soc. \textbf{77} (1975), 281--288.
\bibitem{Th} W. Thurston, The geometry and topology of 3-manifolds, Lecture notes, Princeton (1983).
\bibitem{Weil} A. Weil, On discrete subgroups of Lie groups, Ann. of Math. \textbf{72} (1960), 369--384.}
\end{thebibliography}
\vskip1cm
\noindent Inkang Kim\\
School of Mathematics\\
KIAS\\ Hoegiro 85, Dongdaemun-gu\\
Seoul, 130-722, Korea\\
\texttt{inkang@kias.re.kr}
\smallskip
\end{document}
\begin{document}
\title{Bifurcation Mechanism Design -- From Optimal Flat Taxes to Improved Cancer Treatments}
\author{Ger Yang\\
University of Texas at Austin\\
Department of Electrical and Computer Engineering\\
\and
Georgios Piliouras\\
Singapore University of Technology and Design\\
Engineering Systems and Design (ESD)
\and
David Basanta\\
Integrated Mathematical Oncology\\
H. Lee Moffitt Cancer Center and Research Institute
}
\maketitle
\begin{abstract}
Small changes to the parameters of a system can lead to abrupt qualitative changes of its behavior, a phenomenon known as bifurcation.
Such instabilities are typically considered problematic, however, we show that their power can be leveraged to design novel types of mechanisms.
\textit{Hysteresis mechanisms} use transient changes of system parameters to induce a
permanent improvement to its performance via optimal equilibrium selection.
\textit{Optimal control mechanisms} induce convergence to states whose performance is better than even the best equilibrium.
We apply these mechanisms in two different settings that illustrate the versatility of bifurcation mechanism design. In the first one we explore how introducing flat taxation can improve social welfare, despite decreasing agent ``rationality", by destabilizing inefficient equilibria. From there we move on to consider a well known game of tumor metabolism and use our approach to derive novel cancer treatment strategies.
\end{abstract}
\thanks{This work is supported by the National Science Foundation,
under grant CNS-0435060, grant CCR-0325197 and grant EN-CS-0329609.}
\section{Introduction}
The term bifurcation, which means splitting in two, is used to describe abrupt qualitative
changes in system behavior due to smooth variation of its parameters.
Bifurcations are ubiquitous
and permeate all natural phenomena.
Effectively, they
produce discrete events (e.g., rain breaking out) out of
smoothly varying, continuous systems (e.g., small changes to humidity, temperature).
Typically, they are studied through bifurcation diagrams, multi-valued maps that prescribe how each
parameter configuration translates to possible system behaviors (e.g., Figure~\ref{fig:intro}).
Bifurcations arise in a natural way in game theory. Games are typically studied through their Nash correspondences,
a multi-valued map connecting the parameters of the game (i.e., payoff matrices) to system behavior, in this case Nash equilibria.
As we slowly vary the parameters of the game, the Nash equilibria will typically also vary smoothly, except at bifurcation points
where, for example, the number of equilibria abruptly changes as some equilibria appear/disappear altogether.
Such singularities may have a huge impact both on system behavior and on system performance. For example, if the system state was at an equilibrium that disappeared during the bifurcation, then a turbulent transitory period ensues in which the system tries to reorganize itself at one of the remaining equilibria. Moreover, the quality of all remaining equilibria may be significantly worse
than that of the original one. Even more disturbingly, it is not a priori clear that the system will equilibrate at all.
Successive bifurcations that lead to increasingly complicated recurrent behavior are a standard route to chaos \cite{devaney1992first}, which may have devastating effects on system performance.
Game theorists are particularly aware of the need to produce ``robust" predictions that are not inherently bound
to a specific, exact instantiation of the payoff parameters of the game \cite{roughgarden09}. The typical way to approach this problem has been to focus on more
expansive solution concepts, e.g., $\epsilon$-approximate Nash equilibria or even outcomes approximately consistent with regret-minimizing learning. These approaches, however, do not really address
the problem at its core as any solution concept defines a map from parameter space to behavioral space and no such map is immune to bifurcations. If pushed hard enough any system will destabilize. The question is what happens next?
Well, a lot of things \textit{may} happen. It is intuitively clear that if we are allowed to play around arbitrarily with the payoffs of
the agents then we can reproduce any game and no meaningful analysis is possible. Using payoff entries
as controlling parameters is problematic for another reason. It is not clear that there exists a compelling
parametrization of the payoff space that captures how real life decision makers deviate from the Platonic ideal of the payoff matrix.
Instead, we focus on another popular aspect of economic theory, agent ``rationality".
We adopt a standard model of boundedly rational learning agents.
Boltzmann Q-learning dynamics \cite{watkins1989learning,watkins1992q,tan1993multi}
is a well studied behavioral model in which agents are parameterized by a temperature/rationality term $T$.
Each agent
keeps track of the collective past performance of his actions (i.e., learns from experience)
and chooses an action according to a Boltzmann/Gibbs distribution with parameter $T$.
When applied to a multi-agent game the behavioral fixed points of Q-learning are known as quantal response equilibria (QRE) \cite{McKelvey:1995aa}.
Naturally, QREs depend on the temperature $T$. As $T\rightarrow0$ players become perfectly rational and play approaches a Nash equilibrium,\footnote{Mixed strategies in the QRE model are sometimes interpreted as frequency distributions of deterministic actions in a large population of users. This population interpretation of mixed strategies is standard and dates back to Nash~\cite{Nash}. Depending on context, we will use either the probabilistic interpretation or the population one.} whereas as $T\rightarrow \infty$ all agents use uniformly random strategies. As we vary the temperature the QRE($T$) correspondence moves between these two extremes producing bifurcations along the way at critical points where the number of QREs changes (Figure~\ref{fig:intro}).
\textit{Our goal} in this paper is to quantify the effects of these rationality-driven bifurcations to the social welfare of two player two strategy games. At this point a moment of pause is warranted. Why is this a worthy goal? Games of small size ($2 \times 2$ games in particular) hardly seem like a subject worthy of serious scientific investigation. This, however, could not be further from the truth.
First, the correct way to interpret this setting is from the point of view of population games, where each agent is better understood as a large homogeneous population (e.g. men and women, attackers and defenders, cells of type A and cells of type B). Each of a handful of different types of users
has only a few meaningful actions available to them. In fact, from the perspective of applied game theory only such games with a small number of parameters are practically meaningful. The reason should be clear by now. Any game theoretic modeling of a real life scenario is invariably noisy and inaccurate. In order for game-theoretic predictions to be practically binding they have to be robust to these uncertainties. If the system intrinsically has a large number of independent parameters e.g., 20, then this parameter space will almost certainly encode a vast number of bifurcations, which invalidate any theoretical prediction. Practically useful models \textit{need} to be small.
Secondly, game theoretic models applied for scientific purposes typically \textit{are} small.
Specifically, the exact setting studied here with Boltzmann Q-learning dynamics applied in $2 \times 2$ games has been used to model the effects of taxation to agent rationality \cite{wolpert2012hysteresis} (see Section~\ref{sec:taxation} for a more extensive discussion) as well as to model the effects of treatments that trigger phase transitions to cancer dynamics \cite{kianercy2014critical} (see Section~\ref{sec:cancer}). Our approach yields insights to explicit open questions in both of these applications areas. In fact, direct application of our analysis can address similar inquiries for any other phenomenon modeled by Q-learning dynamics applied in $2 \times 2$ games.
Finally, the analysis itself is far from straightforward as it requires combining sets of tools and techniques that have so far been developed in isolation from each other. On one hand, we need to understand the behavior of these dynamical systems using tools from topology of dynamical systems whose implications are largely qualitative (e.g. prove the lack of cyclic trajectories). On the other hand, we need to leverage these tools to quantify at which exact parameter values bifurcations occur and
produce price-of-anarchy type of guarantees which by definition are quantitative. As far as we know, this is the first instance of a fruitful combination of these tools. In fact, not only do we show how to analyze the effects of bifurcations to system efficiency, we also show how to leverage this understanding (e.g. knowledge of the geometry of the bifurcation diagrams) to design novel types of mechanisms with good performance guarantees.
\begin{figure}
\caption{Bifurcation diagram for a $2 \times 2$ population coordination game. The $x$ axis corresponds to the system temperature $T$, whereas
the $y$ axis corresponds to the projection of the proportion of the first population using the first strategy at equilibrium. For small $T$, the system exhibits multiple equilibria.
Starting at $T=0$, increasing the temperature beyond
the critical threshold $T_C=6$, and then bringing it back to zero, forces the system to converge to another equilibrium.}
\label{fig:intro}
\end{figure}
\begin{comment}
in which agents
keep track of the collective past performance of their actions (i.e., they learn from experience)
chooses an action according to a Boltzmann distribution with parameter $T$. In effect, if the temperature parameter is small, then ``cool-headed" agents choose with high probability the most promising action, however, as the temperature parameter increases agents become increasingly erratic, unpredictable and at the limit choose actions uniformly at random, maximizing the system entropy. This model allows us to capture some realistic aspects of decision making. For example, given two options which are nearly indistinguishable (e.g., whose utilities are \$.01\ apart) we can model the fact that agents treat them similarly (assigning almost equal probability to them).
Another concrete example is capturing the effects of taxation. If taxation levels increase then agents typically become less responsive to payoff signals, which can be effectively captured by increasing their temperature parameter.
Let's focus on a concrete example.
Suppose that we were to conduct a decision theoretic experiment on 100 individuals.
Each participant is given two hypothetical options: a) work in a high stress position for a yearly salary of $100K$ or b) work in a pleasant environment with a yearly salary $80K$. After the first round of responses, the participants are asked the same question, but now it is revealed to them that at the of the year they will be taxed for $50\%$ of their income (effectively receiving $50K$ and $40K$ respectively). Standard perfect rationality assumptions suggest that agent decision making remains invariant under affine transformations of payoffs. Regardless of the payoff discount, even if it was $99.9\%$, agents pursue single-mindedly revenue maximization down to the last cent.
The additional realism and fidelity of this behavioral model comes at a cost. Due to its intricacies it is harder to predict agent behavior. Nash equilibria are no longer behavioral fixed points when the system temperature is positive. Instead, the behavioral fixed points of the model, which are known as quantal response equilibria (QRE) \cite{McKelvey:1995aa}, depend on the system temperature. At $T=0$ they coincide with Nash equilibria, at $T\rightarrow \infty$ all agents use uniformly random strategies, and as we vary the temperature parameter the quantal respond correspondence moves between the two extremes. Even if we can prove that these learning systems converge to their fixed points (QRE) for any given temperature, it may still be hard to achieve a robust, reliable understanding of them, as a small change to the (temperature) parameter can lead to a sudden change to the topology of the dynamics. These phenomena are known as bifurcations and despite their prevalence and importance in many natural settings (e.g., cancer dynamics \cite{kianercy2014critical}) they are notoriously hard to understand, quantify, and control.
Standard mechanism design techniques \cite{nisan2007algorithmic}, typically aim to enforce desirable actions, outcomes either as (weakly) dominant strategies or as weaker standard notions of equilibria (e.g., Nash equilibria). These powerful techniques are carefully tailored to the assumption of perfectly rational agents and as move away from it they are similarly susceptible to bifurcation effects that can result to radically different and possibly rather inefficient outcomes.
These phenomena are not well understood within mechanism design and game theory more generally.
While the possibility of abrupt behavioral changes given small updates to the parameters pose a threat to system performance, it also carries a glimmer of hope for the development of new, easy-to-implement mechanisms. If we understand how to employ bifurcations as tools, we could possibly design mechanisms that do not require large expenditure from the central planner, who by carefully designed tweaks to the system can enforce desirable outcomes as stable states.
\end{comment}
\subsection*{Our contribution.}
We introduce two different types of mechanisms, hysteresis and optimal control mechanisms.
\textit{Hysteresis mechanisms} use transient changes to the system parameters to induce permanent improvements to its performance via optimal (Nash) equilibrium selection. The term hysteresis is derived from an ancient Greek word that means ``to lag behind". It reflects a time-based dependence between the system's present output and its past inputs. For example, let's assume that we start from a game theoretic system of Q-learning agents with temperature $T=0$ and assume that the system has converged to an equilibrium. By increasing the temperature beyond some critical threshold and then bringing it back to zero, we can force the system to provably converge to another equilibrium, e.g., the best (Nash) equilibrium (Figure~\ref{fig:intro}, Theorem~\ref{thm:mec2}); a small numerical sketch of this heat-and-cool schedule is included at the end of this subsection. Thus, we can ensure performance equivalent to that of the price of stability instead of the price of anarchy. One attractive feature of this mechanism is that from the perspective of the central designer it is rather ``cheap" to implement. Whereas typical mechanisms require the designer to continuously intervene (e.g., by paying the agents) to offset their greedy tendencies, this mechanism is transient, with a finite amount of total effort from the perspective of the designer. Further, the idea that game theoretic systems effectively have systemic memory is rather interesting and could find other applications within algorithmic game theory.
\textit{Optimal control mechanisms} induce convergence to states whose performance is better than even the best Nash equilibrium.
Thus, we can at times even beat the price of stability (Theorem \ref{thm:mec1}). Specifically, we show that by controlling the exploration/exploitation tradeoff we can achieve strictly better states than those achievable by perfectly rational agents. In order to implement such a mechanism it does not suffice to identify the right set of agents' parameters/temperatures so that the system has some QRE whose social welfare is better than the best Nash. We need to design a trajectory through the parameter space so that this optimal QRE becomes the final resting point.
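As a purely illustrative sketch of the hysteresis mechanism described above (our own toy example, not part of the formal development), the following Python snippet integrates the Boltzmann Q-learning dynamics \eqref{eq:conti_dynamics} for a hypothetical symmetric $2\times 2$ coordination game and follows a heat-then-cool temperature schedule. The payoff values and the temperature $8$ used below are assumptions chosen for this toy game only; the threshold $T_C=6$ in Figure~\ref{fig:intro} refers to the specific game used there.
\begin{verbatim}
import numpy as np

# Hypothetical coordination game (illustrative values only).
A = np.array([[10.0, 0.0], [0.0, 5.0]])   # row player payoffs
B = A.copy()                              # column player payoffs

def step(x, y, T, dt=1e-3):
    """One Euler step of the Boltzmann Q-learning dynamics at temperature T."""
    xv, yv = np.array([x, 1 - x]), np.array([y, 1 - y])
    Ay, Bx = A @ yv, B @ xv
    dx = x * (Ay[0] - xv @ Ay + T * np.sum(xv * np.log(xv / x)))
    dy = y * (Bx[0] - yv @ Bx + T * np.sum(yv * np.log(yv / y)))
    clip = lambda p: min(max(p, 1e-9), 1 - 1e-9)
    return clip(x + dt * dx), clip(y + dt * dy)

def run(schedule, x=0.05, y=0.05, steps=200000):
    """Follow a temperature schedule and report where the system settles."""
    for T in schedule:
        for _ in range(steps):
            x, y = step(x, y, T)
    return round(x, 3), round(y, 3)

# Start near the inferior equilibrium (both play action 2, payoff 5 each).
print(run([0.0]))        # stays near (0, 0): locked into the bad equilibrium
print(run([8.0, 0.0]))   # heat past this game's critical temperature, then
                         # cool: settles near (1, 1), the best equilibrium
\end{verbatim}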
\section{Preliminaries}
\label{sec:prelim}
\subsection{Game Theory Basics: $2 \times 2$ games} \label{sec:def_2_by_2_game}
In this paper, we focus on $2 \times 2$ games, that is, games with two players, each of whom has two actions. We write the payoff matrices of the game for the two players as
\begin{equation} \label{eq:def_AB}
\vv{A} = \left( \begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22} \end{array}
\right), \quad
\vv{B} = \left( \begin{array}{cc}
b_{11} & b_{12} \\
b_{21} & b_{22} \end{array}
\right)
\end{equation}
respectively. The entry $a_{ij}$ denotes the payoff for Player~$1$ when he chooses action $i$ and his opponent chooses action $j$; similarly, $b_{ij}$ denotes the payoff for Player~$2$ when he chooses action $i$ and his opponent chooses action $j$. We define $x$ as the probability that Player~$1$ chooses his first action, and $y$ as the probability that Player~$2$ chooses his first action. We also define the two vectors $\vv{x}=(x, 1-x)^T$ and $\vv{y}=(y,1-y)^T$ as the \emph{strategies} of the two players. For simplicity, we denote the $i$-th entry of the vector $\vv{x}$ by $x_i$. We call the tuple $(x,y)$ the \emph{system state} or the \emph{strategy profile}.
An important solution concept in game theory is the \emph{Nash equilibrium}, at which no player can profit by unilaterally changing his strategy:
\begin{definition}[Nash equilibrium]
A strategy profile $(x_{NE},y_{NE})$ is a Nash equilibrium (NE) if
\begin{align*}
&x_{NE} \in \arg\max_{x \in [0,1]} \vv{x}^T\vv{A}\vv{y}_{NE},
&y_{NE} \in \arg\max_{y \in [0,1]} \vv{y}^T\vv{B}\vv{x}_{NE}
\end{align*}
\end{definition}
We call $(x_{NE},y_{NE})$ a \emph{pure} Nash equilibrium (PNE) if both $x_{NE} \in \{0,1\}$ and $y_{NE} \in \{0,1\}$.
The Nash equilibrium assumes that each player is fully rational. In the real world, however, this assumption is often unrealistic. An alternative solution concept is the \emph{quantal response equilibrium} \cite{McKelvey:1995aa}, which assumes that each player has bounded rationality:
\begin{definition}[Quantal response equilibrium]
A strategy profile $(x_{QRE},y_{QRE})$ is a quantal response equilibrium (QRE) with respect to the temperatures $T_x$ and $T_y$ if
\begin{align*}
x_{QRE} &= \frac{e^{\frac{1}{T_x} (\vv{A}\vv{y}_{QRE})_1}}{\sum_{j\in\{1,2\}} e^{\frac{1}{T_x} (\vv{A}\vv{y}_{QRE})_j}}, &1-x_{QRE} = \frac{e^{\frac{1}{T_x} (\vv{A}\vv{y}_{QRE})_2}}{\sum_{j\in\{1,2\}} e^{\frac{1}{T_x} (\vv{A}\vv{y}_{QRE})_j}} \\
y_{QRE} &= \frac{e^{\frac{1}{T_y} (\vv{B}\vv{x}_{QRE})_1}}{\sum_{j\in\{1,2\}} e^{\frac{1}{T_y} (\vv{B}\vv{x}_{QRE})_j}}, &1-y_{QRE} = \frac{e^{\frac{1}{T_y} (\vv{B}\vv{x}_{QRE})_2}}{\sum_{j\in\{1,2\}} e^{\frac{1}{T_y} (\vv{B}\vv{x}_{QRE})_j}}
\end{align*}
\end{definition}
Analogously to the definition of Nash equilibria, we can view a QRE as a profile at which each player maximizes not the expected utility alone but a linear combination of expected utility and entropy; that is, the QREs are the solutions of the following pair of coupled optimization problems:
\begin{align*}
\vv{x}_{QRE} &\in \arg\max_{\vv{x}} \left\{\vv{x}^T\vv{A}\vv{y}_{QRE} - T_x \sum_{j} x_j \ln x_j \right\}\\
\vv{y}_{QRE} &\in \arg\max_{\vv{y}} \left\{ \vv{y}^T\vv{B}\vv{x}_{QRE} - T_y \sum_{j} y_j \ln y_j \right\}
\end{align*}
This formulation appears widely in the Q-learning dynamics literature (e.g. \cite{Cominetti:2010aa, wolpert2012hysteresis, coucheney2013entropy}). In it, the two parameters $T_x$ and $T_y$ control the weighting between the utility and the entropy. We call $T_x$ and $T_y$ the \emph{temperatures}; their values define the level of irrationality. If $T_x$ and $T_y$ are zero, then both players are fully rational and the system state is a Nash equilibrium, whereas if both $T_x$ and $T_y$ are infinite, then each player chooses his action according to a uniform distribution, which corresponds to fully irrational players.
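As an illustration of this definition, a QRE can be approximated numerically by iterating the two softmax responses until they stabilize. The sketch below is our own minimal example; the payoff values are hypothetical and the fixed-point iteration is just one convenient solver. For sufficiently high temperatures the iteration is a contraction with a unique fixed point, while at low temperatures several QREs may exist and the result can depend on the starting point.
\begin{verbatim}
import numpy as np

def qre(A, B, Tx, Ty, x0=0.5, y0=0.5, iters=100000, tol=1e-12):
    """Fixed-point iteration for a QRE of a 2x2 game.

    x is the probability Player 1 puts on his first action, y likewise
    for Player 2. At small temperatures several QREs may exist, in which
    case the output depends on the starting point (x0, y0)."""
    x, y = x0, y0
    for _ in range(iters):
        xv, yv = np.array([x, 1 - x]), np.array([y, 1 - y])
        ux, uy = (A @ yv) / Tx, (B @ xv) / Ty      # scaled expected payoffs
        x_new = np.exp(ux[0]) / np.sum(np.exp(ux)) # Boltzmann/softmax response
        y_new = np.exp(uy[0]) / np.sum(np.exp(uy))
        if abs(x_new - x) + abs(y_new - y) < tol:
            break
        x, y = x_new, y_new
    return x, y

# Hypothetical symmetric coordination game (illustrative values only).
A = np.array([[10.0, 0.0], [0.0, 5.0]])
B = A.copy()
print(qre(A, B, Tx=100.0, Ty=100.0))              # high T: close to (1/2, 1/2)
print(qre(A, B, Tx=0.1, Ty=0.1, x0=0.9, y0=0.9))  # low T: close to a pure Nash
\end{verbatim}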
\subsection{Efficiency of an equilibrium} \label{sec:eff_game_intro}
The performance of a system state can be measured via the \emph{social welfare}. Given a system state $(x,y)$, we define the social welfare as the sum of the expected payoffs of all players in the system:
\begin{definition}
Given a $2 \times 2$ game with payoff matrices $\vv{A}$ and $\vv{B}$, and a system state $(x,y)$, the social welfare is defined as
$$
SW(x,y)=xy(a_{11}+b_{11})+x(1-y)(a_{12}+b_{21})+y(1-x)(a_{21}+b_{12})+(1-x)(1-y)(a_{22}+b_{22})
$$
\end{definition}
In the context of algorithmic game theory, we can measure the efficiency of a game by comparing the social welfare of an equilibrium system state with the best achievable social welfare. We call the strategy profile that achieves the maximal social welfare the \emph{socially optimal (SO)} strategy profile. The efficiency of a game is often described via the notions of \emph{price of anarchy} (PoA) and \emph{price of stability} (PoS), defined as follows.
\begin{definition}
Given a $2 \times 2$ game with payoff matrices $\vv{A}$ and $\vv{B}$, and a set of equilibrium system states $S \subseteq [0,1]^2$, the price of anarchy (PoA) and the price of stability (PoS) are defined as
\begin{align*}
&PoA(S) = \frac{\max_{(x,y)\in[0,1]^2} SW(x,y)}{\min_{(x,y)\in S} SW(x,y)},
&PoS(S) = \frac{\max_{(x,y)\in[0,1]^2} SW(x,y)}{\max_{(x,y)\in S} SW(x,y)}
\end{align*}
\end{definition}
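The social welfare and the two efficiency ratios above are straightforward to evaluate numerically; the following sketch (our own helper functions, with the optimal welfare approximated on a grid) simply mirrors the definitions:
\begin{verbatim}
import numpy as np

def social_welfare(A, B, x, y):
    # SW(x, y): expected payoff of Player 1 plus expected payoff of Player 2.
    px, py = np.array([x, 1.0 - x]), np.array([y, 1.0 - y])
    return px @ A @ py + py @ B @ px

def poa_pos(A, B, equilibria, grid=201):
    # PoA and PoS over a finite set `equilibria` of equilibrium states (x, y);
    # the maximal social welfare is approximated on a grid of [0,1]^2.
    xs = np.linspace(0.0, 1.0, grid)
    opt = max(social_welfare(A, B, x, y) for x in xs for y in xs)
    eq_sw = [social_welfare(A, B, x, y) for (x, y) in equilibria]
    return opt / min(eq_sw), opt / max(eq_sw)
\end{verbatim}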
\section{Our Model}
\subsection{Q-learning Dynamics}
In this paper, we are particularly interested in the scenario when both players' strategies are evolving under \emph{Q-learning dynamics}:
\begin{align}
&\dot x_i = x_i \bigg[ (\vv{A} \vv{y})_i - \vv{x}^T \vv{A} \vv{y} + T_x \sum_{j} x_j \ln(x_j/x_i) \bigg],
&\dot y_i = y_i \bigg[ (\vv{B} \vv{x})_i - \vv{y}^T \vv{B} \vv{x} + T_y \sum_{j} y_j \ln(y_j/y_i) \bigg] \label{eq:conti_dynamics}
\end{align}
The Q-learning dynamics has been studied because of its connection with multi-agent learning problems. For example, it has been shown in \cite{Sato:2003aa,tuyls2003selection} that the Q-learning dynamics captures the system evolution of a repeated game, where each player learns his strategy through Q-learning and Boltzmann selection rules. More details are provided in Appendix~\ref{sec:from_q_to_q}.
An important observation about the dynamics \eqref{eq:conti_dynamics} is that it exhibits an exploration/exploitation tradeoff \cite{tuyls2003selection}.
The right-hand side of equation \eqref{eq:conti_dynamics} is composed of two parts. The first part
$
x_i [ (\vv{A} \vv{y})_i - \vv{x}^T \vv{A} \vv{y} ]
$
is exactly the vector field of the replicator dynamics \cite{sandholm2009evolutionary}. The replicator dynamics drives the system toward states of higher utility for both players. We can therefore view this term as a selection process from the population-evolution perspective, or as an exploitation process from the perspective of a learning agent. For the second part
$
x_i [T_x \sum_{j} x_j \ln(x_j/x_i)],
$
we show in the appendix that if the time derivative of $\vv{x}$ consists of this part alone, the system entropy increases.
The system entropy is a function that captures the randomness of the system; from the population-evolution perspective, it corresponds to the diversity of the population, so this term can be viewed as a mutation process whose strength is controlled by the temperature parameters $T_x$ and $T_y$. In terms of reinforcement learning, this term can be viewed as an exploration process, as it gives the agent the opportunity to gain information about actions that do not currently look best.
\subsection{Convergence of the Q-learning dynamics}
Observing the Q-learning dynamics \eqref{eq:conti_dynamics}, one finds that the interior rest points of the dynamics are exactly the QREs of the $2 \times 2$ game.
It is claimed in \cite{Kianercy:2012aa} without proof that the Q-learning dynamics for a $2 \times 2$ game converges to interior rest points of the probability simplices for any positive temperatures $T_x>0$ and $T_y>0$. We provide a formal proof in Appendix~\ref{appendix:convergence}.
The idea is that for positive temperatures the system is dissipative, and by leveraging the planar nature of the system one can argue that it converges to fixed points.
\subsection{Rescaling the Payoff Matrix}\label{sec:rescaling}
To close this section, we discuss transformations of the payoff matrices that preserve the dynamics in \eqref{eq:conti_dynamics}. This idea is proposed in \cite{hofbauer2005learning} and \cite{hofbauer1998evolutionary}, where the \emph{rescaling} of a pair of matrices is defined as follows.
\begin{definition}[\cite{hofbauer1998evolutionary}]
$\vv{A}'$ and $\vv{B}'$ are said to be a rescaling of $\vv{A}$ and $\vv{B}$ if there exist constants $c_j,d_i$, and $\alpha>0$, $\beta>0$ such that $a_{ij}'=\alpha a_{ij}+c_j$ and $b_{ji}' = \beta b_{ji} + d_i$.
\end{definition}
It is clear that rescaling the payoff matrices of the game is equivalent to updating the temperature parameters of the two agents in \eqref{eq:conti_dynamics}.
Hence, without loss of generality, it suffices to study the dynamics under the assumption that the $2 \times 2$ payoff matrices $\vv{A}$ and $\vv{B}$ are in the following \emph{diagonal form}.
\begin{definition}
Given $2 \times 2$ matrices $\vv{A}$ and $\vv{B}$, their diagonal form is defined as
$$
\vv{A}_D = \left( \begin{array}{cc}
a_{11}-a_{21} & 0 \\
0 & a_{22}-a_{12} \end{array}
\right) , \quad
\vv{B}_D = \left( \begin{array}{cc}
b_{11}-b_{21} & 0 \\
0 & b_{22}-b_{12} \end{array}
\right)
$$
\end{definition}
Note that although rescaling the payoff matrices to their diagonal form preserves the equilibria, it does not preserve social optimality, i.e., the socially optimal strategy profile of the transformed game is not necessarily the socially optimal strategy profile of the original game.
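A one-line helper makes the reduction explicit (a trivial sketch following the definition above):
\begin{verbatim}
import numpy as np

def diagonal_form(A, B):
    # Diagonal form of a 2x2 game, as in the definition above.
    A_D = np.diag([A[0, 0] - A[1, 0], A[1, 1] - A[0, 1]])
    B_D = np.diag([B[0, 0] - B[1, 0], B[1, 1] - B[0, 1]])
    return A_D, B_D
\end{verbatim}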
\section{Hysteresis Effect and Bifurcation Analysis} \label{sec:bif}
\subsection{Hysteresis effect in Q-learning dynamics: An example}
We begin our discussion with an example:
\begin{example}[Hysteresis effect] \label{ex:ex1}
Consider a $2 \times 2$ game with reward matrices
\begin{equation}
A = \left( \begin{array}{cc}
10 & 0 \\
0 & 5 \end{array}
\right), \quad
B = \left( \begin{array}{cc}
2 & 0 \\
0 & 4 \end{array}
\right)
\label{eq:ex1}
\end{equation}
\begin{figure}
\caption{QREs ($x$ versus $T_x$) of Example~\ref{ex:ex1} for a low value of $T_y$.}
\label{fig:as_c1_low_TY}
\caption{QREs ($x$ versus $T_x$) of Example~\ref{ex:ex1} for a high value of $T_y$.}
\label{fig:as_c1_high_TY}
\end{figure}
There are two PNEs in this game, $(x,y)=(0,0)$ and $(1,1)$. Fixing $T_y$, we can plot the QREs as a function of $T_x$, as in Figure~\ref{fig:as_c1_low_TY} and Figure~\ref{fig:as_c1_high_TY}. For simplicity, we only show the value of $x$ in the figures, since by \eqref{eq:assym_xy}, given $x$ and $T_y$, the value of $y$ is uniquely determined. Assuming the system follows the Q-learning dynamics, as we slowly vary $T_x$, the state $x$ tends to stay on the branch closest to its current position, which may correspond to a stable but inefficient fixed point. We consider the following process:
\begin{enumerate}
\item The initial state is $(0.05,0.14)$, where $T_x \approx 1$ and $T_y \approx 2$. We plot $x$ versus $T_x$ by fixing $T_y=2$ in Figure~\ref{fig:as_c1_high_TY}.
\item Fix $T_y=2$, and increase $T_x$ to where there is only one QRE correspondence.
\item Fix $T_y=2$, and decrease $T_x$ back to $1$. Now $x \approx 0.997$.
\end{enumerate}
\end{example}
In the above example, although the temperature parameters are eventually set back to their initial values, the system state ends up at a completely different equilibrium. This behavior is known as the \emph{hysteresis effect}. In this section, we answer the question of \emph{when this happens}; in the next section, we answer \emph{how we can take advantage of this phenomenon}.
\subsection{Characterizing QREs}
We consider the bifurcation diagrams for QREs in $2\times 2$ games. Without loss of generality, we consider a properly rescaled $2 \times 2$ game with payoff matrices in the diagonal form:
\begin{equation*}
\vv{A}_D = \left( \begin{array}{cc}
a_X & 0 \\
0 & b_X \end{array}
\right), \quad
\vv{B}_D = \left( \begin{array}{cc}
a_Y & 0 \\
0 & b_Y \end{array}
\right)
\end{equation*}
Also, we can assume that the action indices are ordered and rescaled so that $a_X>0$ and $|a_X|\ge|b_X|$. For simplicity, we assume that $a_X=b_X$ and $b_X=b_Y$ do not hold simultaneously. At a QRE we have
\begin{align}
&x = \frac{e^{\frac{1}{T_x} ya_X}}{e^{\frac{1}{T_x} ya_X} + e^{\frac{1}{T_x} (1-y)b_X}},
&y = \frac{e^{\frac{1}{T_y} xa_Y}}{e^{\frac{1}{T_y} xa_Y} + e^{\frac{1}{T_y} (1-x)b_Y}}
\label{eq:assym_xy}
\end{align}
Given $T_x$ and $T_y$, there could be multiple solutions to \eqref{eq:assym_xy}.
However, if we know an equilibrium state, we can recover the temperature parameters: solving \eqref{eq:assym_xy} for $T_x$ and $T_y$ gives
\begin{align}
&T_X^I(x,y) = \frac{-(a_X+b_X) y + b_X}{\ln(\frac{1}{x}-1)},
&T_Y^I(x,y) = \frac{-(a_Y+b_Y) x + b_Y}{\ln(\frac{1}{y}-1)} \label{eq:Txy}
\end{align}
We call this the \emph{first form of representation}, in which $T_x$ and $T_y$ are written as functions of $x$ and $y$; the capital subscripts in $T_X$ and $T_Y$ indicate that they are regarded as functions. A direct observation from \eqref{eq:Txy} is that both are continuous functions on $(0,1)\times(0,1)$, except at $x=1/2$ and $y=1/2$, respectively.
An alternative way to describe the QREs is to write $T_x$ and $y$ as functions of $x$, parameterized by $T_y$, in the following \emph{second form of representation}. This is the form we use to prove many useful properties of QREs.
\begin{align}
T_X^{II}(x,T_y) &= \frac{-(a_X+b_X) y^{II}(x,T_y) + b_X}{\ln(\frac{1}{x}-1)} \label{eq:Tx_}\\
y^{II}(x,T_y) &= \bigg(1+e^{\frac{1}{T_y}(-(a_Y+b_Y) x + b_Y)}\bigg)^{-1} \label{eq:y_}
\end{align}
In this way, given $T_y$, we can analyze how $T_x$ changes with $x$. This helps us answer the question of what the QREs are for given $T_x$ and $T_y$.
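The second form of representation is also convenient computationally: tabulating $T_X^{II}(x,T_y)$ over a grid of $x$ yields the bifurcation diagrams used throughout this section. A small sketch (function names are ours; the sample parameters are the diagonal form of Example~\ref{ex:ex1}):
\begin{verbatim}
import numpy as np

def y_II(x, Ty, aY, bY):
    # y^{II}(x, T_y) from the second form of representation.
    return 1.0 / (1.0 + np.exp((-(aY + bY) * x + bY) / Ty))

def T_X_II(x, Ty, aX, bX, aY, bY):
    # T_X^{II}(x, T_y); undefined at x = 1/2 where ln(1/x - 1) = 0.
    return (-(aX + bX) * y_II(x, Ty, aY, bY) + bX) / np.log(1.0 / x - 1.0)

if __name__ == "__main__":
    aX, bX, aY, bY = 10.0, 5.0, 2.0, 4.0    # diagonal form of Example 1
    xs = np.linspace(0.01, 0.99, 197)
    xs = xs[np.abs(xs - 0.5) > 1e-6]        # avoid the singularity at x = 1/2
    Tx_vals = T_X_II(xs, 2.0, aX, bX, aY, bY)
\end{verbatim}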
We also want to analyze the stability of the QREs. From dynamical systems theory (e.g., \cite{perko}), a fixed point of a dynamical system is asymptotically stable if all eigenvalues of its Jacobian matrix have negative real parts; if at least one eigenvalue has positive real part, then it is unstable.
It turns out that under the second form representation, we are able to determine whether a segment in the diagram is stable or not.
\begin{lemma} \label{lem:qre_stab}
Given $T_y$, the system state $\left(x,y^{II}(x,T_y)\right)$ is a stable equilibrium if and only if
\begin{enumerate}
\item $\frac{\partial T_X^{II}}{\partial x}(x,T_Y)>0$ if $x \in (0, 1/2)$.
\item $\frac{\partial T_X^{II}}{\partial x}(x,T_Y)<0$ if $x \in ( 1/2,1)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The given condition is equivalent to the case that both eigenvalues of the Jacobian matrix of the dynamics \eqref{eq:conti_dynamics} are negative.
\end{proof}
Finally, we define the \emph{principal branch}. In Example~\ref{ex:ex1}, we call the branch on $x \in (0.5,1)$ the \emph{principal branch} given $T_y=2$, since for any $T_x>0$, there is some $x \in (0.5,1)$ such that $T_X^{II}(x,T_y)=T_x$. Analogously, we can define it formally as in the following definition with the help of the second form representation.
\begin{definition}
Given $T_y$, the region $(a,b)\subset(0,1)$ \emph{contains the principal branch} of QRE correspondence if it satisfies the following conditions:
\begin{enumerate}
\item $T_X^{II}(x,T_y)$ is continuous and differentiable for $x \in (a,b)$.
\item $T_X^{II}(x,T_y) > 0$ for $x \in (a,b)$.
\item For any $T_x>0$, there exists $x \in (a,b)$ such that $T_X^{II}(x,T_y)=T_x$.
\end{enumerate}
Further, for a region $(a,b)$ that contains the principal branch, $x \in (a,b)$ is \emph{on the principal branch} if it satisfies the following conditions:
\begin{enumerate}
\item The equilibrium state $(x, y^{II}(x, T_y))$ is stable.
\item There is no $x' \in (a,b), x'<x$ such that $T_X^{II}(x',T_y) =T_X^{II}(x,T_y) $.
\end{enumerate}
\end{definition}
\subsection{Coordination Games}\label{sec:coord_topo}
We begin our analysis with the class of coordination games, in which $a_X$, $b_X$, $a_Y$, and $b_Y$ are all positive. Also, without loss of generality, we assume $a_X \ge b_X$. In this case, neither player has a dominant strategy, and there are two PNEs.
Let us revisit Example~\ref{ex:ex1}; we can make the following observations from Figure~\ref{fig:as_c1_low_TY} and Figure~\ref{fig:as_c1_high_TY}:
\begin{enumerate}
\item Given $T_y$, there are three branches. One is the principal branch, while the other two appear in pairs and occur only when $T_x$ is less than some value.
\item For small $T_y$, the principal branch goes toward $x=0$; while for large $T_y$, the principal branch goes toward $x=1$.
\end{enumerate}
Now we are going to show that these observations hold in general for coordination games. The proofs in this section are deferred to Appendix~\ref{appendix:bifurcation}, where we give a detailed discussion of the proof techniques.
The first idea we are going to introduce is the \emph{inverting temperature}, which is the threshold of $T_y$ in Observation~(2). We define it as
$$T_I=\max\left\{0,\frac{b_Y-a_Y}{2\ln(a_X/b_X)}\right\}$$
We note that $T_I$ is positive only if $b_Y>a_Y$, which is the case in which the two players have different preferences. When $T_y<T_I$, as the first player increases his rationality from fully irrational, i.e., as $T_x$ decreases from infinity, he is likely to be influenced by the second player's preference. If $T_y$ is greater than $T_I$, then the first player prefers to follow his own preference, making the principal branch go toward $x=1$. We formalize this idea in the following theorem:
\begin{theorem}[Direction of the principal branch]\label{thm:coord_topo_pb}
Given a $2 \times 2$ coordination game, and given $T_y$, the following statements are true:
\begin{enumerate}
\item If $T_y>T_I$, then $(0.5,1)$ contains the principal branch.
\item If $T_y<T_I$, then $(0,0.5)$ contains the principal branch.
\end{enumerate}
\end{theorem}
The second idea is the \emph{critical temperature}, denoted $T_C(T_y)$, which is a function of $T_y$. The critical temperature is defined as the infimum over values such that for any $T_x>T_C(T_y)$ there is a unique QRE under $(T_x, T_y)$. In general, there is no closed form for the critical temperature. However, we can still compute it efficiently, as we show in Theorem~\ref{thm:coord_topo_ct}. Another interesting value of $T_y$ is $T_B=\frac{b_Y}{\ln(a_X/b_X)}$, the maximum value of $T_y$ for which QREs not on the principal branch are present. Intuitively, once $T_y$ exceeds $T_B$, the first player ignores the decision of the second player and turns toward what he himself thinks is better. We formalize the ideas of $T_C$ and $T_B$ in the following theorem:
\begin{theorem}[Properties about the second QRE]\label{thm:coord_topo_ct}
Given a $2 \times 2$ coordination game, and given $T_y$, the following statements are true:
\begin{enumerate}
\item For almost every $T_x>0$, all QREs not lying on the principal branch appear in pairs.
\item If $T_y>T_B$, then there is no QRE correspondence in $x \in (0,0.5)$.
\item If $T_y>T_I$, then there is no QRE correspondence for $T_x>T_C(T_y)$ in $x \in(0,0.5)$.
\item If $T_y<T_I$, then there is no QRE correspondence for $T_x>T_C(T_y)$ in $x \in(0.5,1)$.
\item $T_C(T_y)$ is given as $T_X^{II}(x_L,T_y)$, where $x_L$ is the solution to the equality
$$
y^{II}(x,T_y)+x(1-x)\ln\left(\frac{1}{x}-1\right)\frac{\partial y^{II}}{\partial x}(x,T_y)=\frac{b_X}{a_X+b_X}
$$
\item $x_L$ can be found using binary search.
\end{enumerate}
\end{theorem}
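Parts~5 and~6 of the theorem suggest a simple numerical recipe for the critical temperature: locate $x_L$ by bisection and evaluate $T_X^{II}$ there. The sketch below is our own code; it assumes, as the theorem indicates, that the defining expression changes sign exactly once on the chosen search interval (here $(0,1/2)$, the relevant region when $T_y>T_I$), and it reuses $y^{II}$ from the previous sketch:
\begin{verbatim}
import numpy as np

def y_II(x, Ty, aY, bY):
    return 1.0 / (1.0 + np.exp((-(aY + bY) * x + bY) / Ty))

def dy_II_dx(x, Ty, aY, bY):
    yv = y_II(x, Ty, aY, bY)
    return (aY + bY) / Ty * yv * (1.0 - yv)   # derivative of the logistic in x

def critical_x(Ty, aX, bX, aY, bY, lo=1e-9, hi=0.5 - 1e-9, tol=1e-12):
    # Bisection for x_L solving the equation in part 5 of the theorem.
    def h(x):
        lhs = y_II(x, Ty, aY, bY) \
              + x * (1 - x) * np.log(1.0 / x - 1.0) * dy_II_dx(x, Ty, aY, bY)
        return lhs - bX / (aX + bX)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The critical temperature is then T_C(T_y) = T_X^{II}(x_L, T_y),
# with T_X_II as in the previous sketch.
\end{verbatim}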
The next aspect of the QRE correspondence is stability. According to Lemma~\ref{lem:qre_stab}, the stability of the QREs can also be examined through the second form of representation by analyzing $\frac{\partial T_X^{II}}{\partial x}$. We state the results in the following theorem:
\begin{theorem}[Stability]\label{thm:coord_topo_stab}
Given a $2 \times 2$ coordination game, and given $T_y$, the following statements are true:
\begin{enumerate}
\item If $a_Y \ge b_Y$, then the principal branch is continuous.
\item If $T_y<T_I$, then the principal branch is continuous.
\item If $T_y>T_I$ and $a_Y< b_Y$, then the principal branch may not be continuous.
\item For fixed $T_x$, within each pair of QREs not lying on the principal branch, the one closer to $x=0.5$ is unstable, while the other is stable.
\end{enumerate}
\end{theorem}
\begin{figure}
\caption{In a coordination game with $a_Y < b_Y$ and low $T_y$.}
\label{fig:as_c1a_by_ex1}
\caption{In a coordination game with $a_Y < b_Y$ and high $T_y$. There is an unstable segment on the principal branch.}
\label{fig:as_c1a_by_ex2}
\end{figure}
Note that part~3 of Theorem~\ref{thm:coord_topo_stab} implies that there is potentially an unstable segment between segments of the principal branch. This phenomenon is illustrated in Figure~\ref{fig:as_c1a_by_ex2}. Though this case is weaker than the others, it does not prevent us from designing a control mechanism, as we do in Section~\ref{sec:mec_1}.
\subsection{Non-coordination games}
Due to space constraints, the analysis for non-coordination games is deferred to Appendix~\ref{sec:topo_nc}.
\section{Mechanism Design}
\subsection{Hysteresis Mechanism: Select the Best Nash Equilibrium via QRE Dynamics}\label{sec:mec_2}
In this section, we consider the class of coordination games in which the socially optimal state is one of the PNEs. The main task in this case is to determine when and how we can reach the socially optimal PNE. In Example~\ref{ex:ex1}, by sequentially changing $T_x$, we move the equilibrium state from around $(0,0)$ to around $(1,1)$, which is the socially optimal state. We formalize this idea as the \emph{hysteresis mechanism} and present it in Theorem~\ref{thm:mec2}. The hysteresis mechanism takes advantage of the hysteresis effect discussed in Section~\ref{sec:bif}: we use transient changes of the system parameters to induce a permanent
improvement of system performance via optimal equilibrium selection.
\begin{theorem}[Hysteresis Mechanism]\label{thm:mec2}
Given a $2 \times 2$ game, suppose it satisfies the following properties:
\begin{enumerate}
\item Its diagonal form satisfies $a_X,b_X,a_Y,b_Y>0$.
\item Exactly one of its pure Nash equilibria is the socially optimal state.
\end{enumerate}
Without loss of generality, we can assume $a_X \ge b_X$. Then, there is a mechanism that drives the system to the social optimum by sequentially changing $T_x$ and $T_y$, provided that 1) $a_Y \ge b_Y$ and 2) the socially optimal state is $(0,0)$ do not hold at the same time.
\end{theorem}
\begin{proof}
First, note that if $a_Y \ge b_Y$, by Theorem~\ref{thm:coord_topo_pb} the principal branch is always in the region $x > 0.5$. As a result, once $T_x$ is increased beyond the critical temperature, the system state will no longer return to $x<0.5$ at any positive temperature. Therefore, $(0,0)$ cannot be approached from any state in $x>0.5$ through the QRE dynamics.
On the other hand, if $a_Y \ge b_Y$ and the socially optimal state is the PNE $(1,1)$, then we can approach that state by first getting onto the principal branch. The mechanism can be described as
\begin{enumerate}
\item[(C1)] \begin{enumerate}
\item Raise $T_x$ to some value above the critical temperature $T_C(T_y)$.
\item Reduce $T_x$ and $T_y$ to $0$.
\end{enumerate}
\end{enumerate}
Although in this case the initial choice of $T_y$ does not affect the result, a social designer who takes into account the cost of assigning large $T_x$ and $T_y$ will trade off between $T_C$ and $T_y$, since typically a smaller $T_y$ induces a larger $T_C$.
Next, consider $a_Y<b_Y$.
If we are aiming for state $(0,0)$, then we can do the following:
\begin{enumerate}
\item[(D1)] \begin{enumerate}
\item Keep $T_y$ at some value below $T_I=\frac{b_Y-a_Y}{2\ln(a_X/b_X)}$. Now the principal branch is at $(0,0.5)$.
\item Raise $T_x$ to some value above the critical temperature $T_C(T_y)$.
\item Reduce $T_x$ to $0$.
\item Reduce $T_y$ to $0$.
\end{enumerate}
\end{enumerate}
On the other hand, if we are aiming for state $(1,1)$, then the following procedure suffices:
\begin{enumerate}
\item[(D2)] \begin{enumerate}
\item Keep $T_y$ at some value above $T_I=\frac{b_Y-a_Y}{2\ln(a_X/b_X)}$. Now the principal branch is at $(0.5,1)$.
\item Raise $T_x$ to some value above the critical temperature $T_C(T_y)$.
\item Reduce $T_x$ to $0$.
\item Reduce $T_y$ to $0$.
\end{enumerate}
\end{enumerate}
Note that in the last two steps, only by reducing $T_y$ after $T_x$ does the state stay around $x=1$. We refer the reader to Figure~\ref{fig:ex_rsw_p1} for case (D1) and Figure~\ref{fig:ex_rsw_p2} for case (D2) for further insight.
\end{proof}
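As a rough numerical illustration of how a transient change in $T_x$ moves the system across branches (the effect that the schedules above exploit), one can track a warm-started logit fixed point while $T_x$ is slowly raised and lowered, as in Example~\ref{ex:ex1}. This is only a sketch with our own function names, and it assumes that the warm-started iteration tracks the stable branch:
\begin{verbatim}
import numpy as np

def logit_step(x, y, A, B, Tx, Ty):
    # One logit-response update (the same iteration as in the QRE sketch).
    u = A @ np.array([y, 1.0 - y])
    v = B @ np.array([x, 1.0 - x])
    eu, ev = np.exp(u / Tx), np.exp(v / Ty)
    return eu[0] / eu.sum(), ev[0] / ev.sum()

def hysteresis_schedule(A, B, Ty=2.0, inner=5000):
    # Raise T_x from 1 to 5 (above the critical temperature in Example 1)
    # and bring it back to 1, tracking the equilibrium along the way.
    Tx_path = np.concatenate([np.linspace(1.0, 5.0, 50),
                              np.linspace(5.0, 1.0, 50)])
    x, y = 0.05, 0.14                       # initial state of Example 1
    for Tx in Tx_path:
        for _ in range(inner):
            x, y = logit_step(x, y, A, B, Tx, Ty)
    return x, y                             # ends near x ~ 1 rather than x ~ 0

if __name__ == "__main__":
    A = np.array([[10.0, 0.0], [0.0, 5.0]])
    B = np.array([[2.0, 0.0], [0.0, 4.0]])
    print(hysteresis_schedule(A, B))
\end{verbatim}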
\subsection{Efficiency of QREs: An example}
A question that arises with the solution concept of QRE is: \emph{can QRE improve social welfare?} Here we show that the answer is \emph{yes}. We begin with an illustrative example:
\begin{example}\label{ex:ex2}
Consider a standard coordination game with the payoff matrices of the form
\begin{equation} \label{eq:ex_pos1}
A = \left( \begin{array}{cc}
\epsilon & 1 \\
0 & 1+\epsilon' \end{array}
\right), \quad
B = \left( \begin{array}{cc}
1+\epsilon & 0 \\
1 & \epsilon' \end{array}
\right)
\end{equation}
where $\epsilon > \epsilon' >0$ are small numbers. Note that in this game there are two PNEs, $(x,y)=(1,1)$ and $(x,y)=(0,0)$, with social welfare $1+2\epsilon$ and $1+2\epsilon'$, respectively. For small $\epsilon$ and $\epsilon'$, the socially optimal state is $(x,y)=(1,0)$, with social welfare $2$. In this case, the state $(x,y)=(1,1)$ is the PNE with the best social welfare. However, through the QRE dynamics we are able to reach a state with better social welfare than any NE. We illustrate the social welfare of the QREs of this example for different temperatures in Figure~\ref{fig:pos_improve}. In this figure, we can see that at the PNE, which corresponds to $T_x=T_y=0$, the social welfare is $1+2\epsilon$. However, we are able to increase the social welfare by increasing $T_y$. In Section~\ref{sec:mec_1} we give a general algorithm to find the appropriate temperatures, as well as a mechanism, which we refer to as the \emph{optimal control mechanism}, that drives the system to the desired state.
\begin{figure}
\caption{The left figure shows the social welfare on the principal branch for Example~\ref{ex:ex2}.}
\label{fig:pos_improve}
\end{figure}
\end{example}
\subsection{Optimal Control Mechanism: Better Equilibrium with Irrationality}\label{sec:mec_1}
Here we show a general approach for improving on the PoS bound of Nash equilibria in coordination games using QREs and the Q-learning dynamics.
We denote by $QRE(T_x,T_y)$ the set of QREs with respect to $T_x$ and $T_y$, and by $QRE$ the union of $QRE(T_x,T_y)$ over all positive $T_x$ and $T_y$. We also denote the set of pure Nash equilibrium system states by $NE$. Since the set $NE$ is the limit of $QRE(T_x,T_y)$ as $T_x$ and $T_y$ approach zero, we have the bounds:
$$
PoA(QRE) \ge PoA(NE), \quad PoS(QRE) \le PoS(NE)
$$
Then, we define \emph{QRE achievable states}:
\begin{definition}
A state $(x,y) \in [0,1]^2$ is a QRE achievable state if for every $\epsilon>0$, there exist positive finite $T_x$ and $T_y$ and $(x',y')$ such that $|(x',y')-(x,y)|<\epsilon$ and $(x',y') \in QRE(T_x,T_y)$.
\end{definition}
\begin{figure}
\caption{Set of QRE achievable states for Example~\ref{ex:ex2}.}
\label{fig:ex1_region}
\caption{Social welfare for all states in Example~\ref{ex:ex2}.}
\label{fig:ex1_sw_contour}
\end{figure}
Note that with this definition, pure Nash equilibria are QRE achievable states. However, the socially optimal states are not necessarily QRE achievable. For example, Figure~\ref{fig:ex1_region} illustrates the set of QRE achievable states for Example~\ref{ex:ex2}: the socially optimal state, $(x,y)=(1,0)$, is not QRE achievable. Nevertheless, it is easy to see from Figure~\ref{fig:ex1_region} and Figure~\ref{fig:ex1_sw_contour} that we can achieve higher social welfare at $(x,y)=(1,0.5)$, which is QRE achievable. Formally, the set of QRE achievable states is the positive support of $T_X^I$ and $T_Y^I$:
\begin{align*}
S=&\left\{ \left\{x\in\left[\frac{1}{2},1\right], y\in\left[\frac{b_X}{a_X+b_X},1\right]\right\} \cup \left\{x\in\left[0,\frac{1}{2}\right], y\in\left[0,\frac{b_X}{a_X+b_X}\right]\right\} \right\} \\
&\cap
\left\{ \left\{x\in\left[\frac{b_Y}{a_Y+b_Y},1\right], y\in\left[\frac{1}{2},1\right]\right\} \cup \left\{x\in\left[0,\frac{b_Y}{a_Y+b_Y}\right], y\in\left[0,\frac{1}{2}\right]\right\} \right\}
\end{align*}
An example of this region for a game with $a_Y \ge b_Y$ is illustrated in Figure~\ref{fig:ex1_region}; the case $a_Y < b_Y$ is shown in Figure~\ref{fig:ex2_region}.
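Membership in $S$ is easy to test directly from the description above; a minimal sketch (our own helper, with boundary points included):
\begin{verbatim}
def qre_achievable(x, y, aX, bX, aY, bY):
    # Membership test for the set S of QRE achievable states given above.
    cond_x = (0.5 <= x <= 1 and bX / (aX + bX) <= y <= 1) or \
             (0 <= x <= 0.5 and 0 <= y <= bX / (aX + bX))
    cond_y = (bY / (aY + bY) <= x <= 1 and 0.5 <= y <= 1) or \
             (0 <= x <= bY / (aY + bY) and 0 <= y <= 0.5)
    return cond_x and cond_y
\end{verbatim}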
In the following theorem, we propose the \emph{optimal control mechanism} for a general process to achieve an equilibrium that is better than the PoS bound from Nash equilibria.
\begin{theorem}[Optimal Control Mechanism]\label{thm:mec1}
Given a $2 \times 2$ game, suppose it satisfies the following properties:
\begin{enumerate}
\item Its diagonal form satisfies $a_X,b_X,a_Y,b_Y>0$.
\item None of its pure Nash equilibria is the socially optimal state.
\end{enumerate}
Without loss of generality, we can assume $a_X \ge b_X$. Then,
\begin{enumerate}
\item there is a stable QRE achievable state whose social welfare is better than any Nash equilibrium.
\item there is a mechanism to control the system to this state from the best Nash equilibrium by sequentially changing $T_x$ and $T_y$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{figure}
\caption{Set of QRE achievable states for a coordination game with $a_Y < b_Y$.}
\label{fig:ex2_region}
\caption{Stable QRE achievable states for a coordination game with $a_Y>b_Y$.}
\label{fig:ex1_region_stab}
\end{figure}
Note that given these properties, there are two PNEs, $(0,0)$ and $(1,1)$. Since neither of them is socially optimal, the socially optimal state must be either $(0,1)$ or $(1,0)$.
First, consider $a_Y \ge b_Y$. In this case, we know from Theorem~\ref{thm:coord_topo_stab} that every state with $x \in (0.5,1)$ belongs to the principal branch for some $T_y>0$ and is stable, while for $x < 0.5$ not all states are stable. We illustrate the region of stable QRE achievable states in Figure~\ref{fig:ex1_region_stab}. By Theorem~\ref{thm:coord_topo_ct} and Theorem~\ref{thm:coord_topo_stab}, we can infer that the states near the border $x=0$ are stable. As a result, the following are the states we aim for:
\begin{enumerate}
\item[(A1)] If $(1,1)$ is the best NE and $(0,1)$ is the SO state, then we select $(0.5,1)$.
\item[(A2)] If $(1,1)$ is the best NE and $(1,0)$ is the SO state, then we select $(1,0.5)$.
\item[(A3)] If $(0,0)$ is the best NE and $(0,1)$ is the SO state, then we select $\left(0,\frac{b_X}{a_X+b_X}\right)$.
\item[(A4)] If $(0,0)$ is the best NE and $(1,0)$ is the SO state, then we select $\left(\frac{b_Y}{a_Y+b_Y},0\right)$.
\end{enumerate}
It is clear that these choices of states improve the social welfare. It is known that for the class of games we are considering, the price of stability is no greater than $2$. In cases A1 and A2, we reduce this factor to $4/3$; in cases A3 and A4, we reduce it to $\left(\frac{1}{2}+\frac{b_X/2}{a_X+b_X}\right)^{-1}$.
The next step is to exhibit the mechanism that drives the system to the desired state. By symmetry, we only discuss cases A1 and A3; cases A2 and A4 can be handled analogously. For case A1, the selected state corresponds to the temperatures $T_x \rightarrow \infty$ and $T_y \rightarrow 0$. For any small $\delta>0$, we can always find the state $(0.5+\delta,1-\delta)$ on the principal branch for some $T_y$. This means that we can reach this state from any initial state, not only from the NEs. With the help of the first form of representation of the QREs in \eqref{eq:Txy}, given any QRE achievable system state $(x,y)$ we can recover the corresponding temperatures through $T_X^I$ and $T_Y^I$. The mechanism can be described as follows:
\begin{enumerate}
\item[(A1)] \begin{enumerate}
\item From any initial state, raise $T_x$ to $T_X^I(0.5+\delta,1-\delta)$.
\item Decrease $T_y$ to $T_Y^I(0.5+\delta,1-\delta)$
\end{enumerate}
\end{enumerate}
For case A3, the selected state is not on the principal branch. This means that we cannot increase the temperatures too much; otherwise the system state will move to the principal branch and never return. We assume that initially the system state is $(\delta,\delta)$ for some small $\delta>0$, i.e., a state close to the best NE, and that the initial temperatures are $T_x=T_X^I(\delta,\delta)$ and $T_y=T_Y^I(\delta,\delta)$. Our goal is to arrive at the state $\left(\delta_1, \frac{b_X}{a_X+b_X}-\delta_2\right)$ for some small $\delta_1>0$ and $\delta_2>0$ such that this state is stable. We present the mechanism below:
\begin{enumerate}
\item[(A3)] \begin{enumerate}
\item From initial state $(\delta,\delta)$, move $T_x$ to $T_X^I\left(\delta_1, \frac{b_X}{a_X+b_X}-\delta_2\right)$.
\item Increase $T_y$ to $T_Y^I\left(\delta_1, \frac{b_X}{a_X+b_X}-\delta_2\right)$
\end{enumerate}
\end{enumerate}
Note that Step~(b) should not be carried out before Step~(a): if we increase $T_y$ first, we risk drifting onto the principal branch.
Next, consider the case $a_Y<b_Y$. As in the previous case, we know from Theorem~\ref{thm:coord_topo_ct} and Theorem~\ref{thm:coord_topo_stab} that states near the borders $x=0,0.5,1$ and $y=0,0.5,1$ are essentially stable. Hence, we can claim the following results:
\begin{enumerate}
\item[(B1)] If $(1,1)$ is the best NE and $(0,1)$ is the SO state, then we select $\left(\frac{b_Y}{a_Y+b_Y},1\right)$.
\item[(B2)] If $(1,1)$ is the best NE and $(1,0)$ is the SO state, then we select $(1,0.5)$.
\item[(B3)] If $(0,0)$ is the best NE and $(0,1)$ is the SO state, then we select $\left(0,\frac{b_X}{a_X+b_X}\right)$.
\item[(B4)] If $(0,0)$ is the best NE and $(1,0)$ is the SO state, then we select $\left(0.5,0\right)$.
\end{enumerate}
It is clear that these choices of states improve the social welfare. An interesting feature of this case is that these desired states can essentially be reached from any initial state. By symmetry, we present the mechanisms for cases (B3) and (B4); the remaining cases can be handled analogously.
For case (B3), we aim for the state $\left(\delta_1,\frac{b_X}{a_X+b_X}-\delta_2\right)$ for some small $\delta_1>0$ and $\delta_2>0$. We propose the following mechanism:
\begin{figure}
\caption{Phase 1 for case (B3), where we keep $T_y$ low but increase $T_x$.}
\label{fig:ex_rsw_p1}
\caption{Phase 2 for case (B3), where we increase $T_y$.}
\label{fig:ex_rsw_p2}
\end{figure}
\begin{enumerate}
\item[(B3)] \begin{enumerate}
\item[] \textbf{Phase 1:} Getting to the principal branch.
\item From any initial state, fix $T_y$ at some value less than $T_I= \frac{b_Y-a_Y}{2\ln(a_X/b_X)}$.
\item Increase $T_x$ above the critical temperature $T_C(T_y)$.
\item Decrease $T_x$ to $T_X^I\left(\delta_1,\frac{b_X}{a_X+b_X}-\delta_2\right)$.
\item[] \textbf{Phase 2:} Staying at the current branch.
\item Increase $T_y$ to $T_Y^I\left(\delta_1,\frac{b_X}{a_X+b_X}-\delta_2\right)$.
\end{enumerate}
\end{enumerate}
This process is illustrated in Figure~\ref{fig:ex_rsw_p1} and Figure~\ref{fig:ex_rsw_p2}. In phase 1, we keep $T_y$ low, meaning that the second player is more rational. As the first player becomes more rational, he is more likely to be influenced by the second player's preference, and the system eventually reaches a Nash equilibrium. In phase 2, we make the second player more irrational to increase the social welfare. The level of irrationality added in phase 2 should be capped to prevent the first player from changing his decision.
For case (B4), since our desired state is on the principal branch, the mechanism will be similar to case (A1).
\begin{enumerate}
\item[(B4)] \begin{enumerate}
\item From any initial state, raise $T_x$ to $T_X^I(0.5+\delta,\delta)$.
\item Decrease $T_y$ to $T_Y^I(0.5+\delta,\delta)$.
\end{enumerate}
\end{enumerate}
\end{proof}
As a remark, in cases (A3) and (A4), if we start not from $(\delta,\delta)$ but from some other state on the principal branch, we can instead aim for the state $(0.5,1)$. This state is not better than the best Nash equilibrium, but it still improves on the initial state. The process can be modified as follows:
\begin{enumerate}
\item[(A3')] \begin{enumerate}
\item From any initial state, raise $T_x$ to $T_X^I(0.5+\delta,1-\delta)$ (above $T_C(T_y)$).
\item Reduce $T_y$ to $T_Y^I(0.5+\delta,1-\delta)$.
\end{enumerate}
\end{enumerate}
\section{Applications}
\subsection{Taxation} \label{sec:taxation}
A direct application of the QRE solution concept is to analyze the effect of taxation, which has been discussed in \cite{wolpert2012hysteresis}. Unlike Nash equilibria, QREs do change if we multiply the payoff matrices by some factor $\alpha$: multiplying by $\alpha$ effectively divides the temperature parameters by $\alpha$. This means that if we charge the players taxes at some flat rate $1-\alpha$, the QREs will differ. Formally, we define the base temperature $T_0$ as the temperature when no tax is applied to either player. Then we define the \emph{tax rate} of each player as $\alpha_x = 1-T_0/T_x$ and $\alpha_y=1-T_0/T_y$, respectively.
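The translation between tax rates and temperatures is a one-line computation; a minimal sketch of the two maps (our own helper functions), consistent with the numbers used in the process below:
\begin{verbatim}
def tax_rate(T, T0=1.0):
    # Flat tax rate corresponding to an effective temperature T (base T0).
    return 1.0 - T0 / T      # e.g. T0=1: T=2 -> 0.5, T=5 -> 0.8

def temperature(alpha, T0=1.0):
    # Inverse map: effective temperature induced by tax rate alpha.
    return T0 / (1.0 - alpha)
\end{verbatim}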
Now we demonstrate how the hysteresis mechanism can be applied via taxation using Example~\ref{ex:ex1}. Assuming the base temperature is $T_0=1$, the process of Example~\ref{ex:ex1} can be rewritten in terms of tax rates as follows:
\begin{enumerate}
\item The initial state is $(0.05,0.14)$, where $\alpha_x \approx 0$ and $\alpha_y \approx 0.5$ (where $T_x \approx 1$ and $T_y \approx 2$).
\item Fix $\alpha_y = 0.5$ (where $T_y=2$), and increase $\alpha_x$ to $0.8$, where $T_x=5$ and there is only one QRE correspondence.
\item Fix $\alpha_y = 0.5$ (where $T_y=2$), and decrease $\alpha_x$ back to $0$ (where $T_x=1$). Now $x \approx 0.997$.
\end{enumerate}
\subsection{Evolution of metabolic phenotypes in cancer} \label{sec:cancer}
Evolutionary game theory has been instrumental in studying the somatic evolution that characterizes cancer progression. Tomlinson and Bodmer were the first to explore the role of cell-cell interactions in cancer. This pioneering work was followed by others that expanded on those initial ideas to study key aspects of cancer evolution, such as the role of space \cite{kaznatcheev2015edge}, treatment \cite{basanta2012investigating,kaznatcheev2016cancer}, and metabolism \cite{basanta2008evolutionary,kianercy2014critical}. Kianercy's work, in particular, shows how microenvironmental heterogeneity impacts somatic evolution, in this case by optimizing genetic instability to better tune cell metabolism to the dynamic microenvironment.
Our techniques (the hysteresis mechanism and the optimal control mechanism) can be applied to the cancer game \cite{kianercy2014critical} with two types of tumor phenotypic strategies: hypoxic cells and oxygenated cells. These cells inhabit regions where oxygen could be abundant or lacking. In the former, oxygenated cells with regular metabolism thrive but in the latter, hypoxic cells whose metabolism is less reliant on the presence of oxygen (but more on the presence of glucose) have higher fitness.
\section{Connection to previous works}\label{sec.rel}
Recently, there has been a growing interplay between game theory, dynamical systems, and computer science. Examples include the integration of replicator dynamics and topological tools \cite{PiliourasAAMAS2014,papadimitriou2016nash,panageas2016average} in algorithmic game theory, and Q-learning dynamics \cite{watkins1992q} in multi-agent systems \cite{tan1993multi}. Q-learning dynamics has been studied extensively in game settings, e.g., by Sato~{\em et al.~} in \cite{Sato:2003aa} and Tuyls~{\em et al.~} in \cite{tuyls2003selection}. In \cite{coucheney2013entropy} Q-learning dynamics is considered as an extension of replicator dynamics driven by a combination of payoffs and entropy.
Recent advances in our understanding of evolutionary dynamics in multi-agent learning can be found in the survey \cite{bloembergen2015evolutionary}.
We are particularly interested in the connection between the Q-learning dynamics and the concept of QRE \cite{McKelvey:1995aa} in game theory. In \cite{Cominetti:2010aa} Cominetti~{\em et al.~} study this connection in traffic congestion games. The hysteresis effect of Q-learning dynamics was first identified in 2012 by Wolpert~{\em et al.~} \cite{wolpert2012hysteresis}. Kianercy~{\em et al.~} in \cite{Kianercy:2012aa} observed the same phenomenon and discussed the bifurcation diagrams of $2 \times 2$ games. The hysteresis effect has also been highlighted in recent follow-up work \cite{kianercy2014critical} as a design principle for future cancer treatments. It was also studied in \cite{romero2015effect} in the context of minimum-effort coordination games.
However, our current understanding is still mostly qualitative and in this work we have pushed towards a more practically applicable quantitative, algorithmic analysis.
Analyzing the characteristics of various dynamical systems has also been attracting the attention of the computer science community in recent years. For example, besides the Q-learning dynamics, the (simpler) replicator dynamics has been studied extensively due to its connections \cite{paperics11,papadimitriou2016nash,Soda14} to the multiplicative weight update (MWU) algorithm in \cite{Kleinberg09multiplicativeupdates}.
Finally, a lot of attention has also been devoted to biological systems and their connections to game theory and computation. In recent work by Mehta~{\em et al.~} \cite{mehta_et_al:LIPIcs:2016:6407}, the connection with genetic diversity was discussed in terms of the complexity of predicting whether genetic diversity persists in the long run under evolutionary pressures.
This paper builds upon a rapid sequence of related results \cite{PNAS1:Livnat16122008,ITCS:DBLP:dblp_conf/innovations/ChastainLPV13,PNAS2:Chastain16062014,livnat2014satisfiability,Meir15,ITCS15MPP}. The key results are \cite{ITCS:DBLP:dblp_conf/innovations/ChastainLPV13,PNAS2:Chastain16062014}, where it was made clear that there is a strong connection between studying replicator dynamics in games and standard models of evolution. Follow-up works show how to analyze dynamics that incorporate errors (i.e., mutations) \cite{MPPPV} and how these mutations can have a critical effect on ensuring survival in dynamically changing environments. Our paper makes progress along these lines by examining how noisy dynamics can introduce phenomena such as bifurcations.
We were inspired by recent work by Kianercy~{\em et al.~} establishing a connection between cancer dynamics and cancer treatment and studying Q-learning dynamics in games. This is analogous to the connections \cite{CACM,ITCS:DBLP:dblp_conf/innovations/ChastainLPV13,PNAS2:Chastain16062014} between MWU and evolution detailed above. It is our hope that by starting off a quantitative analysis of these systems we can kickstart similarly rapid developments in our understanding of the related questions.
\begin{comment}
game dynamics
\begin{enumerate}
\item Replicator dynamics\cite{Hofbauer98}
\item Replicator dynamics is closely connected to the multiplicative weights update
algorithm~\cite{Kleinberg09multiplicativeupdates}
\end{enumerate}
With evolution:
\begin{enumerate}
\item Livnat {\em et al.~} mixability? \cite{PNAS1:Livnat16122008}
\item Papadimitriou evolution in biological systems with game dynamics in coordination games. \cite{ITCS:DBLP:dblp_conf/innovations/ChastainLPV13} \cite{PNAS2:Chastain16062014}
\item Meir and Parkes~\cite{Meir15}
\item Analogous game theoretic interpretations are known for diploids~\cite{akin}
\item \cite{ITCS15MPP} Mehta, Panageas and Piliouras haploid genetics
\end{enumerate}
QREs
\begin{enumerate}
\item \cite{Cominetti:2010aa} QREs on traffic congestion games
\item \cite{wolpert2012hysteresis} Wolpert hysteresis effect
\item \cite{kianercy2014critical} Kianercy tumour metabolism
\item \cite{coucheney2013entropy} Entropy driven dynamics? more generalized entropy function?
\item \cite{bloembergen2015evolutionary} Survey on multi-agent learning and evolutionary dynamics
\item \cite{tuyls2003selection} Selection-mutation model
\item \cite{Sato:2003aa} Q-learning dynamics
\item \cite{romero2015effect} minimum effort coordination games
\end{enumerate}
In the last few years we have witnessed a rapid cascade of theoretical results on the intersection of computer science and evolution.
Livnat {\em et al.~} \cite{PNAS1:Livnat16122008} introduced the notion of mixability, the ability of an allele to combine itself successfully
with other alleles within a specific population. In \cite{ITCS:DBLP:dblp_conf/innovations/ChastainLPV13,PNAS2:Chastain16062014}
connections were established between haploid evolution and game theoretic dynamics in coordination games. Even more recently Meir and
Parkes~\cite{Meir15} provided a more detailed examination of these connections. These dynamics are close variants of the standard
(discrete) replicator dynamics~\cite{Hofbauer98}. Replicator dynamics is closely connected to the multiplicative weights update
algorithm~\cite{Kleinberg09multiplicativeupdates}. Analogous game theoretic interpretations are known for diploids~\cite{akin}.
Analyzing limit sets of dynamical systems is a critical step towards understanding the behavior of processes that are inherently
dynamic, like evolution. There has been an upsurge in
studying the complexity of computing these sets. Quite few works study such questions for dynamical systems governed by arbitrary
continuous functions or ODEs \cite{K1,K2,SZ}. Limit cycles are inherently connected to dynamical systems and recent works by
Papadimitriou and Vishnoi \cite{PV} showed that computing a point on an approximate limit cycle is PSPACE-complete. On the positive side, in \cite{PSV16}, it was shown that a class of evolutionary Markov chains mix rapidly, where techniques from dynamical systems were used.
The complexity of checking if a game has an evolutionarily stable strategy (ESS) has been studied first by Nissan and then by Etessami
and Lochbihler~\cite{Nisan06,etessami2008computational} and has been nailed down to be $\Sigma_2^P$-complete by
Conitzer~\cite{Conitzer13}. Unlike our setting here the game is between different species to survive in a common environment.
These decision problems are completely orthogonal to understanding the persistence of genetic diversity, and therefore are not
directly comparable to our work.
Other connections between computational complexity and ecology/evolution examine the complexity of finding local/global minima of structured fitness landscapes \cite{weinberger1996np,kaznatcheev2013complexity,wright2000computational}, as well as, complexity questions in regards to the probability that a new invader (or a new mutant) will take over a resident population \cite{nowakpnas2015}. The sexual dynamics and the questions about diversity considered here are not captured in any of the settings above.
In~\cite{ITCS15MPP} Mehta, Panageas and Piliouras examine the question of diversity for haploid species. Despite the systems'
superficial similarities the two analyses come to starkly different conclusions. In haploids systems all mixed (polymorphic)
equilibria are unstable and evolution converges to monomorphic states. In the case of diploid systems the answer to whether diversity
survives or not depends crucially on the geometry of the fitness landscape.
\end{comment}
\begin{comment}
This is not merely a question about stability of fixed
points but requires careful combinations of tools from topology of dynamical systems, stability analysis of fixed points as well as
complexity theoretic techniques.
\end{comment}
\begin{comment}
\noindent\textbf{Hardness of Nash equilibrium in Games.}
Megiddo and Papadimitriou \cite{MP} observed that the Nash equilibrium problem is not amenable to standard
complexity classes like NP, due to the guaranteed existence \cite{nash}. To overcome this Papadimitriou defined class PPAD
, and showed containment of two-player Nash equilibrium problem ($2$-Nash) in PPAD. Later, in a remarkable series
of works \cite{DGP,CDT}, the problem was shown to be PPAD-complete. For three or more players the problem was shown to be
FIXP-complete \cite{EY}.
On the other hand Gilboa and Zemel \cite{GZ} showed that almost every decision version of $2$-Nash is NP-complete. This
included questions like, checking if there exist more than one NE, an NE with support size $k$, an NE with payoff $h$, and
many more. Conitzer and Sandholm \cite{CS} extended these results to symmetric games, as well as to
hardness of approximation and counting.
The games related to the evolutionary process (under consideration in this paper) are symmetric coordination games.
Coordination games are very special and always have a pure NE; easy to find in case of two-players. For network coordination
games Cai and Daskalakis \cite{Cai} showed that finding pure NE is PLS-complete, while for mixed NE it is in CLS. As far as
we know, no NP-completeness results are known for decision questions on two-player coordination games, and our results may
help shed some light on these questions.
\end{comment}
\begin{comment}
\textbf{Nonlinear dynamical systems in theoretical computer science:}
Nonlinear dynamical systems have been studied quite extensively in a number of different fields including computer science.
Specifically, quadratic dynamical systems \cite{267761} are known to arise in the study genetic algorithms \cite{Holland:1992:ANA:129194,Rabinovich:1991:ASG,Baum:1995:GA:225298.225326}.
Both positive and negative computational complexity and convergence results are known for them, including convergence to a stationary distribution (analogous of classic theorems for Markov Chains) \cite{Rabani:1995:CVP:225058.225088,Rabinovich:1991:ASG,Arora:1994:SQD:195058.195231} depending on the specifics of the model. In contrast, replicator dynamic in (network) coordination games defines a cubic dynamical system.
Also, there exist numerous fixed distributions. We estimate, given a specific starting condition, to which one we converge to.
\end{comment}
\section{Conclusion}
In this paper,
we perform a quantitative analysis of bifurcation phenomena connected to Q-learning dynamics in the class of $2 \times 2$ games. Based on this analysis, we introduce two novel mechanisms, the hysteresis mechanism and the optimal control mechanism.
Hysteresis mechanisms use transient changes to the system parameters to induce permanent
improvements to its performance via optimal (Nash) equilibrium selection. Optimal control
mechanisms induce convergence to states whose performance is better than even the best
Nash equilibrium, showing that by controlling the exploration/exploitation tradeoff we can
achieve strictly better states than those achievable by perfectly rational agents.
We believe that these new classes of mechanisms could lead to interesting and new questions within game theory as well as a more thorough understanding of cancer biology.
\section{Supplementary materials}
\appendix
\section{From Q-learning to Q-learning Dynamics} \label{sec:from_q_to_q}
In this section, we provide a quick sketch of how the Q-learning dynamics arises from Q-learning agents.
We start with an introduction to the Q-learning rule. Then, we discuss the multi-agent model when there are multiple learners in the system. The goal for this section is to identify the dynamics of the system in which there are two learning agents playing a $2 \times 2$ game repeatedly over time.
\subsection{Q-learning Introduction}
Q-learning \cite{watkins1992q,watkins1989learning} is a value-iteration method for computing optimal strategies in Markov decision processes. It can be used as a model of how users learn their optimal strategy when facing uncertainty. Consider a system with a finite number of states and one player with a finite number of actions. The player decides his strategy over an infinite time horizon. In Q-learning, at each time $t$ the player stores a value estimate $Q_{(s,a)}(t)$ for the payoff of each state-action pair $(s,a)$. Then, given that the system state at time $t$ is $s_t$, he chooses for time $t+1$ the action $a_{t+1}$ that maximizes the $Q$-value $Q_{(s_t,\cdot)}(t)$. In the next time step, after playing action $a_{t+1}$, he receives a reward $r(t+1)$, and the value estimate is updated according to the rule:
$$
Q_{(s_t,a_{t+1})}(t+1) = (1-\alpha) Q_{(s_t,a_{t+1})}(t) + \alpha (r(t+1) + \gamma \max_{a'} Q_{(s_{t+1},a')}(t))
$$
where $\alpha$ is the step size, and $\gamma$ is the discount factor.
\subsection{Joint-learning Model}
Next, we consider the joint-learning model of \cite{Kianercy:2012aa}. Suppose there are multiple players in the system learning concurrently, and denote the set of players by $P$. We assume that the system state is a function of the actions the players are playing, and that the reward observed by each player is a function of the system state. Their learning behavior is modeled by a simplified version of the Q-learning algorithm described above. More precisely, we consider the case in which each player assumes the system has only one state, which corresponds to a player with very limited memory, and has discount factor $\gamma=0$. The reward observed by player $i\in P$ when he plays action $a$ at time $t$ is denoted by $r_a^i(t)$. We can write the updating rule of the $Q$-value for agent $i$ as follows:
$$
Q_{a}^i(t+1) = Q_a^i(t) + \alpha [r_a^i(t) - Q_a^i(t)]
$$
For the selection process, we consider the mechanism that each player $i \in P$ selects his action according to the Boltzmann distribution with temperature $T_i$:
\begin{equation} \label{eq:boltzmann_sel}
x_a^i(t) = \frac{e^{Q_{a}^i(t)/T_i}}{\sum_{a'} e^{Q_{a'}^i(t)/T_i}}
\end{equation}
where $x_a^i(t)$ is the probability that agent $i$ chooses action $a$ at time $t$. The intuition behind this mechanism is that the irrationality of the users is modeled by the temperature parameter $T_i$. For small $T_i$, the selection rule corresponds to more rational agents: as $T_i \rightarrow 0$, \eqref{eq:boltzmann_sel} becomes the best-response rule, i.e., each agent selects the action with the highest $Q$-value with probability one. On the other hand, as $T_i \rightarrow \infty$, \eqref{eq:boltzmann_sel} becomes the rule of selecting each action uniformly at random, which models fully irrational agents.
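A compact sketch of two such learners playing a $2 \times 2$ game repeatedly (our own class and function names, following the simplified one-state model with $\gamma=0$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

class BoltzmannQAgent:
    # Stateless Q-learner with Boltzmann (softmax) action selection.
    def __init__(self, n_actions, alpha=0.01, T=1.0):
        self.Q = np.zeros(n_actions)
        self.alpha, self.T = alpha, T

    def act(self):
        p = np.exp(self.Q / self.T)
        p /= p.sum()
        return rng.choice(len(self.Q), p=p)

    def update(self, action, reward):
        # Q_a <- Q_a + alpha * (r - Q_a), the update rule above.
        self.Q[action] += self.alpha * (reward - self.Q[action])

def play(A, B, T_x, T_y, rounds=100000):
    p1, p2 = BoltzmannQAgent(2, T=T_x), BoltzmannQAgent(2, T=T_y)
    for _ in range(rounds):
        i, j = p1.act(), p2.act()
        p1.update(i, A[i, j])     # Player 1 receives a_{ij}
        p2.update(j, B[j, i])     # Player 2 receives b_{ji}
    return p1.Q, p2.Q
\end{verbatim}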
\subsection{Continuous-time dynamics}
This underlying Q-learning model has been studied for decades.
It is known that, taking the time interval to be infinitesimally small, this sequential joint-learning process can be approximated by a continuous-time model (\cite{tuyls2003selection,Sato:2003aa}) with some interesting characteristics. To see this, consider the $2 \times 2$ game described in Section~\ref{sec:def_2_by_2_game}. The expected payoff of the first player at time $t$ given that he chooses action $a$ can be written as $r_a^x(t)=[\vv{A} \vv{y}(t)]_a$, and similarly, the expected payoff of the second player at time $t$ given that he chooses action $a$ is $r_a^y(t)=[\vv{B} \vv{x}(t)]_a$. The continuous-time limit of the evolution of the $Q$-values can be written as
\begin{align*}
\dot Q^x_a(t) &= \alpha [r_a^x(t) - Q_a^x(t)]\\
\dot Q^y_a(t) &= \alpha [r_a^y(t) - Q_a^y(t)]
\end{align*}
Then, we take the time derivative of \eqref{eq:boltzmann_sel} for each player to get the evolution of the strategy profile:
\begin{align*}
\dot x_i &= \frac{1}{T_x} x_i \bigg( \dot Q_i^x - \sum_{k} x_k \dot Q_k^x\bigg)\\
\dot y_i &= \frac{1}{T_y} y_i \bigg( \dot Q_i^y - \sum_{k} y_k \dot Q_k^y\bigg)
\end{align*}
Putting these together, and rescaling the time horizon to $\alpha t/T_x$ and $\alpha t/T_y$ respectively, we obtain the continuous-time dynamics:
\begin{align}
\dot x_i &= x_i \bigg[ (\vv{A} \vv{y})_i - \vv{x}^T \vv{A} \vv{y} + T_x \sum_{j} x_j \ln(x_j/x_i) \bigg] \label{eq:conti_dynamics1_r}\\
\dot y_i &= y_i \bigg[ (\vv{B} \vv{x})_i - \vv{y}^T \vv{B} \vv{x} + T_y \sum_{j} y_j \ln(y_j/y_i) \bigg] \label{eq:conti_dynamics2_r}
\end{align}
\subsection{The exploration term increases entropy}
Now, we show that the exploration term in the Q-learning dynamics results in the increase of the entropy:
\begin{lemma}\label{lem:incr_entropy}
Suppose $\vv{A} = \vv{0}$ and $\vv{B} = \vv{0}$. The system entropy
$$ H(\vv{x},\vv{y}) = H(\vv{x}) + H(\vv{y}) = -\sum_{i} x_i \ln x_i - \sum_{i} y_i \ln y_i $$
for the dynamics \eqref{eq:conti_dynamics} increases with time, i.e.
$$ \dot H(\vv{x}, \vv{y}) > 0 $$
if $\vv{x}$ and $\vv{y}$ are not uniformly distributed.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:incr_entropy}]
It suffices to consider the single-agent dynamics:
$$
\dot x_i = x_i T_x \bigg[ -\ln x_i + \sum_{j} x_j \ln x_j \bigg]
$$
Taking the derivative of the entropy $H(\vv{x})$, we have
\begin{align*}
\dot H(\vv{x}) &= \sum_{i} (-\ln x_i -1) \dot x_i
= -T_x \bigg[ -\sum_{i} x_i (\ln x_i)^2 + \bigg(\sum_{i} x_i \ln x_i\bigg)^2 \bigg]
\end{align*}
and since we have $\sum_{i} x_i =1$, by Jensen's inequality, we can find that
$$
\bigg(\sum_{i} x_i \ln x_i\bigg)^2 \le \sum_{i} x_i (\ln x_i)^2
$$
where equality holds if and only if $\vv{x}$ is a uniform distribution. Consequently, if we have $x_i \in (0,1)$, and $\vv{x}$ is not a uniform distribution, $\dot H(\vv{x})$ is strictly positive, which means that the system entropy increases with time.
\end{proof}
\section{Convergence of dissipative learning dynamics in $2\times 2$ games}
\label{appendix:convergence}
\subsection*{Liouville's formula}
Liouville's formula can be applied to any system of autonomous differential equations with a continuously differentiable
vector field $V$ on an open domain of $\mathcal{S} \subset \mathbb{R}^k$.
The divergence of $V$ at $x \in \mathcal{S}$ is defined
as the trace of the corresponding Jacobian at $x$, \textit{i.e.}, $\text{div}[V(x)]\equiv\sum_{i=1}^k \frac{\partial V_i}{\partial x_i}(x)=tr(DV(x))$.
Since the divergence is a continuous function, we can compute its integral over measurable sets $A\subset \mathcal{S}$ (with respect to the Lebesgue measure $\mu$ on $\mathbb{R}^k$). Given
any such set $A$, let $\phi_t(A)= \{\phi(x_0,t): x_0 \in A\}$ be the image of $A$ under the flow map $\phi$ at time $t$. The set $\phi_t(A)$ is measurable
and its measure is $\mu(\phi_t(A))= \int_{\phi_t(A)}dx$. Liouville's formula states that the time derivative of the volume of $\phi_t(A)$ exists and
is equal to the integral of the divergence over $\phi_t(A)$: $\frac{d}{dt}\mu(\phi_t(A)) = \int_{\phi_t(A)} \text{div} [V(x)]\,dx.$ Equivalently:
\begin{theorem}[\cite{Sandholm10}, page 356]
$\frac{d}{dt}\mu(\phi_t(A))= \int_{\phi_t(A)} tr(DV(x))d\mu(x)$
\end{theorem}
A vector field is called divergence free if its divergence is zero everywhere. Liouville's formula trivially implies that volume is preserved
in such flows.
This theorem extends in a straightforward manner to systems where the vector field $V:X\rightarrow TX$ is defined on an affine set $X\subset \mathbb{R}^n$ with tangent space $TX$. In this case, $\mu$ represents the Lebesgue measure on the affine hull of $X$. Note that the derivative of $V$ at a state $x \in X$ must be represented using the derivative matrix $DV(x) \in \mathbb{R}^{n\times n}$, which by definition has rows in $TX$. If $\hat{V}:\mathbb{R}^n\rightarrow \mathbb{R}^n$ is a $C^1$ extension of $V$ then $DV(x)=D\hat{V}(x)P_{TX}$, where $P_{TX}\in \mathbb{R}^{n\times n}$ is the orthogonal projection\footnote{To find the matrix of the orthogonal projection onto $TX$ (or any subspace $Y$ of $\mathbb{R}^n$) it suffices to find a basis ($\vec{v_1}, \vec{v_2}, \dots, \vec{v_m}$). Let $B$ be the matrix with columns $\vec{v_i}$; then $P = B(B^T B)^{-1}B^T$.} of $\mathbb{R}^n$ onto the subspace $TX$.
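The projection matrix in the footnote can be computed directly; a minimal sketch (our own helper):
\begin{verbatim}
import numpy as np

def orthogonal_projection(basis_vectors):
    # P = B (B^T B)^{-1} B^T, where the basis vectors are the columns of B.
    Bm = np.column_stack(basis_vectors)
    return Bm @ np.linalg.inv(Bm.T @ Bm) @ Bm.T
\end{verbatim}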
\subsection*{Poincar\'{e}-Bendixson theorem}
The Poincar\'{e}-Bendixson theorem is a powerful result which implies that two-dimensional systems cannot exhibit chaos.
The limit behavior is either an equilibrium, a periodic orbit, or a closed loop punctuated by one (or more) fixed points.
Formally, we have:
\begin{theorem}
[\cite{bendixson1901courbes,teschl2012ordinary}]
Given a differentiable real dynamical system defined on an open subset of the plane, then every non-empty compact $\omega$-limit set of an orbit, which contains only finitely many fixed points, is either
a fixed point, a periodic orbit, or a connected set composed of a finite number of fixed points together with homoclinic and heteroclinic orbits connecting these.
\end{theorem}
\subsection*{Bendixson-Dulac theorem}
By excluding the possibility of closed loops (i.e., periodic orbits, homoclinic cycles, heteroclinic cycles) we can effectively establish global convergence to equilibrium.
The following criterion, which was first established by Bendixson in 1901 and further refined by the French mathematician Dulac in 1933, allows us to do that.
It is typically referred to as the Bendixson-Dulac negative criterion. It applies exactly to planar systems where the measure of a set of initial conditions always shrinks (or always increases) with time, i.e., dynamical systems with vector fields whose divergence is always negative (or always positive).
\begin{theorem}[\cite{muller2015methods}, page 210]
Let $D\subset \mathbb{R}^2$ be a simply connected region and $(f,g)$ in $C^1(D, \mathbb{R})$ with $div(f,g)=\frac{\partial f}{\partial x}+\frac{\partial g}{\partial y}$ being not identically zero and without change of sign in $D$. Then the system
$$\frac{ dx }{ dt } = f(x,y),$$
$$\frac{ dy }{ dt } = g(x,y)$$
\noindent
has no loops lying entirely in $D$.
\end{theorem}
\noindent
The rescaling function $\rho(x, y)$ appearing in the generalization below is typically called the Dulac function.
\noindent
\textbf{Remark:} This criterion can also be generalized. Specifically, it holds for the system:
$$\frac{ dx }{ dt } =\rho(x,y) f(x,y),$$
$$\frac{ dy }{ dt } =\rho(x,y) g(x,y)$$
\noindent
if $\rho(x,y)>0$ is continuously differentiable. Effectively, we are allowed to rescale the vector field by a scalar function (as long as this function does not have any zeros), before we prove that the divergence is positive (or negative). That is, it suffices to find $\rho(x,y)>0$ continuously differentiable, such that $(\rho(x,y) f(x,y))_x+ (\rho(x,y) g(x,y))_y$ possesses a fixed sign.
By \cite{Kianercy:2012aa} we have that after a change of variables, $u_k=\ln\left(\frac{x_{k+1}}{x_1}\right)$, $v_k=\ln\left(\frac{y_{k+1}}{y_1}\right)$ for $k=1, \dots, n-1$,
the replicator system transforms to the following system:
$$\dot{u}_k=\frac{\sum_j \hat{a}_{kj}e^{v_j}}{1+\sum_j e^{v_j}}- T_xu_k, \qquad \dot{v}_k=\frac{\sum_j \hat{b}_{kj}e^{u_j}}{1+\sum_j e^{u_j}}- T_yv_k, \qquad (\text{II})$$
where
$\hat{a}_{kj} = a_{k+1,j+1} - a_{1,j+1}$ and $\hat{b}_{kj} = b_{k+1,j+1} - b_{1,j+1}$.
In the case of $2\times2$ games, we can apply both the Poincar\'{e}-Bendixson theorem and the Bendixson-Dulac theorem, since the resulting dynamical system is planar and $\frac{\partial\dot{u}_1}{\partial u_1}+\frac{\partial \dot{v}_1}{\partial v_1}=-(T_x+T_y)<0$. Hence, for any initial condition, system (II) converges to equilibria.
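As a quick symbolic check of the divergence computation above, the following sketch verifies, for a $2\times 2$ game (so that $k=1$ and the system is planar), that the divergence of system (II) is the constant $-(T_x+T_y)$; the symbols $\hat a_{11}$ and $\hat b_{11}$ stand for arbitrary rescaled payoff entries.
\begin{verbatim}
import sympy as sp

u, v, a_hat, b_hat, Tx, Ty = sp.symbols('u v a_hat b_hat T_x T_y', real=True)

# System (II) for a 2x2 game: a single pair of transformed variables (u, v)
du = a_hat * sp.exp(v) / (1 + sp.exp(v)) - Tx * u
dv = b_hat * sp.exp(u) / (1 + sp.exp(u)) - Ty * v

divergence = sp.simplify(sp.diff(du, u) + sp.diff(dv, v))
print(divergence)   # -T_x - T_y, so the Bendixson-Dulac criterion applies
\end{verbatim}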
The flow of the original replicator system in the $2\times2$ game is \textit{diffeomorphic}\footnote{ A function $f$ between two topological spaces is called a \textit{diffeomorphism} if it has the following properties:
$f$ is a bijection, $f$ is continuously differentiable, and $f$ has a continuously differentiable inverse. Two flows $\Phi^t:A\rightarrow A$ and $\Psi^t :B\rightarrow B$ are \textit{diffeomorphic} if there exists a diffeomorphism $g : A \rightarrow B$ such that for each $x \in A$ and $t \in \mathbb{R}$
$g(\Phi^t (x)) = \Psi^t (g(x))$. If two flows are diffeomorphic then their vector fields are related by the derivative of the conjugacy.
That is, we get precisely the same result that we would have obtained if we simply transformed the coordinates in their differential equations~\cite{Meiss2007}.} to the flow of system (II), thus the replicator dynamics with positive temperatures $T_x,T_y$ converges to equilibria for all initial conditions as well.
\section{Bifurcation Analysis for Games with Only One Nash Equilibrium}\label{sec:topo_nc}
In this section, we present the results for the class of games with only one Nash equilibrium, which can be either pure or mixed, where a mixed Nash equilibrium is defined as follows.
\begin{definition}[mixed Nash equilibrium]
A strategy profile $(x_{NE},y_{NE})$ is a mixed Nash equilibrium if
\begin{align*}
&x_{NE} \in \arg\max_{x \in [0,1]} \vv{x}^T\vv{A}\vv{y}_{NE},
&y_{NE} \in \arg\max_{y \in [0,1]} \vv{y}^T\vv{B}\vv{x}_{NE}
\end{align*}
\end{definition}
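For concreteness, the following sketch computes the fully mixed Nash equilibrium of a generic $2\times 2$ game via the usual indifference conditions (a minimal illustration, assuming an interior equilibrium exists and adopting the convention, matching the bilinear form $\vv{y}^T\vv{B}\vv{x}$ above, that $B_{ij}$ is the second player's payoff when she plays $i$ and the first player plays $j$):
\begin{verbatim}
import numpy as np

def mixed_ne_2x2(A, B):
    # A[i, j]: payoff of player 1 playing i against player 2 playing j
    # B[i, j]: payoff of player 2 playing i against player 1 playing j
    # Player 2 mixes so that player 1 is indifferent between her two actions:
    #   y*A[0,0] + (1-y)*A[0,1] = y*A[1,0] + (1-y)*A[1,1]
    y = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] + A[1, 1] - A[0, 1])
    # Player 1 mixes so that player 2 is indifferent:
    x = (B[1, 1] - B[0, 1]) / (B[0, 0] - B[1, 0] + B[1, 1] - B[0, 1])
    return x, y   # probabilities of action 1 for players 1 and 2

# Matching pennies: the unique Nash equilibrium is fully mixed at (1/2, 1/2)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(mixed_ne_2x2(A, -A))
\end{verbatim}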
This corresponds to the case that at least one of $b_X$, $a_Y$, or $b_Y$ is negative. As before, our analysis is based on the second-form representation described in \eqref{eq:Tx_} and \eqref{eq:y_}, which gives insights from the first player's perspective.
\subsection{No dominating strategy for the first player}
This is the case when there is no dominating strategy for the first player, i.e. both $a_X$ and $b_X$ are positive. From \eqref{eq:y_} we expect the characteristics of the bifurcation diagrams to depend on the value of $a_Y+b_Y$, since it determines whether $y^{II}$ is increasing in $x$ or not. Some interesting phenomena also emerge from the discussion below.
First, we consider the case $a_Y+b_Y>0$. This can be viewed as a more general version of the case discussed in Section~\ref{sec:coord_topo}. In fact, the statements of Theorem~\ref{thm:coord_topo_pb}, Theorem~\ref{thm:coord_topo_ct}, and Theorem~\ref{thm:coord_topo_stab} apply to this case as well. However, there are some subtle differences to be noted. If $a_Y>b_Y$, in which case we must have $b_Y<0$, then by the second part of Theorem~\ref{thm:coord_topo_ct} there is no QRE in $x \in (0,0.5)$, since $T_B$ is now negative. This means that we always have only the principal branch. On the other hand, if $a_Y<b_Y$, in which case we must have $a_Y<0$, then, similar to the examples in Figure~\ref{fig:as_c1a_by_ex1} and Figure~\ref{fig:as_c1a_by_ex2}, there can still be two branches. However, we expect the second branch to vanish \emph{before} $T_y$ actually reaches zero, since the state $(1,1)$ is not a Nash equilibrium.
\begin{theorem}\label{thm:nc1}
Given a $2\times 2$ game in which the diagonal form has $a_X, b_X>0$, $a_Y+b_Y>0$, and $a_Y<b_Y$, and given $T_y$, if $T_y<T_A$, where $T_A=\frac{-a_Y}{\ln(a_X/b_X)}$, then there is no QRE correspondence in $x \in (0.5,1)$.
\end{theorem}
The proof of the above theorem directly follows from Proposition~\ref{prop:c1a_g_right2} in the appendix. An interesting observation here is that we can still make the first player get to his desired state by changing $T_y$ to some value that is greater than $T_A$.
\begin{figure}
\caption{No dominating strategy for the first player, with $a_Y+b_Y<0$ and low $T_Y$.}
\label{fig:as_c1b_low_TY}
\caption{No dominating strategy for the first player, with $a_Y+b_Y<0$ and high $T_Y$.}
\label{fig:as_c1b_high_TY}
\end{figure}
Next, we consider $a_Y+b_Y\le 0$. The bifurcation diagram is illustrated in Figure~\ref{fig:as_c1b_low_TY} and Figure~\ref{fig:as_c1b_high_TY}. We can find that in this case the principal branch directly goes toward its unique Nash equilibrium. We present the results formally in the following theorem, where the proof follows from Section~\ref{sec:case1b} in the appendix.
\begin{theorem}\label{thm:nc2}
Given a $2\times 2$ game in which the diagonal form has $a_X, b_X>0$ and $a_Y+b_Y \le 0$, the QRE is unique for any given $T_x$ and $T_y$.
\end{theorem}
\subsection{Dominating strategy for the first player}
\begin{figure}
\caption{When there is a dominating strategy for the first player, with $a_Y+b_Y<0$.}
\label{fig:as_c2_ex1}
\caption{When there is a dominating strategy for the first player, with $a_Y+b_Y>0$ and $a_Y<b_Y$.}
\label{fig:as_c2_ex2}
\end{figure}
Finally, we consider the case when there is a dominating strategy for the first player, i.e. $b_X<0$. According to Figure~\ref{fig:as_c2_ex1} and Figure~\ref{fig:as_c2_ex2}, the principal branch always seems to go towards $x=1$. This means that the first player always prefers his dominating strategy. We formalize this observation, as well as some important characteristics of this case, in the theorem below, where the proof can be found in Section~\ref{sec:case2} in the appendix.
\begin{theorem}\label{thm:nc3}
Given a $2\times 2$ game in which the diagonal form has $a_X>0$, $b_X<0$, $a_X+b_X>0$, and given $T_y$, the following statements are true:
\begin{enumerate}
\item The region $(0.5,1)$ contains the principal branch.
\item There is no QRE correspondence for $x \in (0,0.5)$.
\item If $a_Y+b_Y<0$ or $a_Y>b_Y$, then the principal branch is continuous.
\item If $a_Y+b_Y>0$ and $b_Y>a_Y$, then the principal branch may not be continuous.
\end{enumerate}
\end{theorem}
As we can see from Theorem~\ref{thm:nc3}, in most cases the principal branch is continuous. One special case is when $a_Y+b_Y>0$ with $b_Y>a_Y$. In fact, this can be seen as the dual, obtained by flipping the roles of the two players, of the case we have discussed in part 3 of Theorem~\ref{thm:nc1}, where for $T_y$ between $T_A$ and $T_I$ there can be three QRE correspondences.
\section{Detailed Bifurcation Analysis for General $2 \times 2$ Game} \label{appendix:bifurcation}
In this section, we provide technical details for the results we stated in Section~\ref{sec:coord_topo} and Section~\ref{sec:topo_nc}.
Before getting into details, we state in the following lemma some results that will be useful throughout the analysis. The proof of this lemma is straightforward and we omit it.
\begin{lemma}\label{lem:techs}
The following statements are true.
\begin{enumerate}
\item The derivative of $T_X^{II}$ is given as
\begin{equation} \label{eq:def_dTX}
\frac{\partial T_X^{II}}{\partial x}(x,T_y)
= \frac{-(a_X+b_X)L(x,T_y)+b_X}{x(1-x)[\ln(1/x-1)]^2}
\end{equation}
where
\begin{equation} \label{eq:def_L}
L(x,T_y) = y^{II} + x(1-x)\ln\left(\frac{1}{x}-1\right)\frac{\partial y^{II}}{\partial x}
\end{equation}
\item The derivative of $y^{II}$ is given as
$$
\frac{\partial y^{II}}{\partial x} = y^{II}(1-y^{II})\frac{a_Y+b_Y}{T_y}
$$
\item For $x \in (0,1/2)\cup(1/2,1)$, $\frac{\partial T_X^{II}}{\partial x}>0$ if and only if $L(x,T_y)<\frac{b_X}{a_X+b_X}$; on the other hand, $\frac{\partial T_X^{II}}{\partial x}<0$ if and only if $L(x,T_y)>\frac{b_X}{a_X+b_X}$.
\end{enumerate}
\end{lemma}
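As a sanity check of part~2 (a minimal derivation, assuming the logistic form of $y^{II}$ that is used in the proofs below, e.g.\ in the proof of Proposition~\ref{prop:case1a_left}): writing $u(x)=\frac{1}{T_y}\left(-(a_Y+b_Y)x+b_Y\right)$ so that $y^{II}=\left(1+e^{u(x)}\right)^{-1}$, we obtain
$$
\frac{\partial y^{II}}{\partial x} = -\frac{e^{u(x)}}{\left(1+e^{u(x)}\right)^{2}}\,u'(x)
= \frac{a_Y+b_Y}{T_y}\cdot\frac{e^{u(x)}}{\left(1+e^{u(x)}\right)^{2}}
= \frac{a_Y+b_Y}{T_y}\,y^{II}\left(1-y^{II}\right).
$$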
\subsection{Case 1: $b_X \ge 0$}
First, we consider the case $b_X \ge 0$.
As we are going to show in Proposition~\ref{prop:dir_pb}, the direction of the principal branch depends on $y^{II}(0.5, T_y)$, which is the strategy the second player plays when the first player is indifferent between his actions. The idea is that if $y^{II}(0.5, T_y)$ is large, then the second player puts more weight on the action that the first player considers better. This is more likely to happen when the second player is less rational, i.e., has a high temperature $T_y$. On the other hand, if the second player puts more weight on the other action, the first player is forced to choose that action, as it yields a higher expected payoff.
We show that for $T_y>T_I$, the principal branch lies on $x\in\left(\frac{1}{2},1\right)$, otherwise the principal branch lies on $x\in\left(0, \frac{1}{2}\right)$. This result follows from the following proposition:
\begin{proposition} \label{prop:dir_pb}
For case~1, if $T_y>T_I$, then we have $y^{II}(1/2,T_y)>\frac{b_X}{a_X+b_X}$, and hence
\begin{align*}
\lim_{x\rightarrow \frac{1}{2}^+} T_X^{II}(x,T_y) = +\infty \quad \text{ and } \quad
\lim_{x\rightarrow \frac{1}{2}^-} T_X^{II}(x,T_y) = -\infty
\end{align*} On the other hand, if $T_y<T_I$, then we have $y^{II}(1/2,T_y)<\frac{b_X}{a_X+b_X}$, and hence
\begin{align*}
\lim_{x\rightarrow \frac{1}{2}^+} T_X^{II}(x,T_y) = -\infty \quad \text{ and } \quad \lim_{x\rightarrow \frac{1}{2}^-} T_X^{II}(x,T_y) = +\infty.
\end{align*}
\end{proposition}
\begin{proof}
First, consider the case $b_Y > a_Y$. Then, for $T_y>T_I=\frac{b_Y-a_Y}{2\ln(a_X/b_X)}$, we have:
$$
y^{II}\left(\frac{1}{2}, T_y\right) = \left( 1+e^{\frac{b_Y-a_Y}{2T_y}} \right)^{-1} > \left( 1+e^{\frac{b_Y-a_Y}{2T_I}} \right)^{-1} = \left(1+\frac{a_X}{b_X}\right)^{-1} = \frac{b_X}{a_X+b_X}
$$
Then, for the case that $a_Y>b_Y$, we can see that
$$
y^{II}\left(\frac{1}{2}, T_y\right) = \left( 1+e^{\frac{b_Y-a_Y}{2T_y}} \right)^{-1} > \left( 1+e^{0} \right)^{-1} = \frac{1}{2} \ge \frac{b_X}{a_X+b_X}
$$
For the case that $a_Y=b_Y$, since we assumed $a_X \not= b_X$, we have
$$
y^{II}\left(\frac{1}{2}, T_y\right) = \left( 1+e^{\frac{b_Y-a_Y}{2T_y}} \right)^{-1} = \left( 1+e^{0} \right)^{-1} = \frac{1}{2} > \frac{b_X}{a_X+b_X}
$$
As a result, the numerator of \eqref{eq:Tx_} at $x=\frac{1}{2}$ is negative for $T_y>T_I$, which proves the first two limits.
For the remaining two limits, we only need to consider the case $b_Y>a_Y$, since otherwise $T_I=0$ and the condition $T_y<T_I$ is vacuous. For $b_Y>a_Y$ and $T_y<T_I$, we can see that
$$
y^{II}\left(\frac{1}{2}, T_y\right) = \left( 1+e^{\frac{b_Y-a_Y}{2T_y}} \right)^{-1} < \left( 1+e^{\frac{b_Y-a_Y}{2T_I}} \right)^{-1} = \left(1+\frac{a_X}{b_X}\right)^{-1} = \frac{b_X}{a_X+b_X}
$$
This makes the numerator of \eqref{eq:Tx_} at $x=\frac{1}{2}$ positive and proves the last two limits.
\end{proof}
\subsubsection{Case 1a: $b_X\ge 0$, $a_Y+b_Y>0$}
In this section, we consider a relaxed version of the class of coordination games from Section~\ref{sec:coord_topo}. We prove the theorems presented in Section~\ref{sec:coord_topo} and show that these results can in fact be extended to the case $a_Y+b_Y>0$, instead of requiring $a_Y>0$ and $b_Y>0$.
First, we can find that as $a_Y+b_Y>0$, $y^{II}$ is an increasing function of $x$, meaning
$$
\frac{\partial y^{II}}{\partial x} = y^{II}(1-y^{II})\frac{a_Y+b_Y}{T_y} > 0
$$
This implies that both players tend to agree with each other. Intuitively, if $a_Y \ge b_Y$, then both players agree that the first action is the better one. For this case, we can show that no matter what $T_y$ is, the principal branch lies on $x \in \left(\frac{1}{2},1\right)$. In fact, this extends to whenever $T_y>T_I$, which is the first part of Theorem~\ref{thm:coord_topo_pb}.
\begin{proof}[Proof of Part~1 of Theorem~\ref{thm:coord_topo_pb}]
For $T_y>T_I$, we have $y^{II}(1/2,T_y)>\frac{b_X}{a_X+b_X}$ according to Proposition~\ref{prop:dir_pb}. Since $y^{II}$ is monotonic increasing with $x$, we have $y^{II}>\frac{b_X}{a_X+b_X}$ for $x>1/2$. This means that $T_X^{II}>0$ for any $x \in (1/2,1)$. Also, it is easy to see that $\lim_{x\rightarrow 1^-} T_X^{II}=0$. As a result, $(0.5,1)$ contains the principal branch.
\end{proof}
For Case~1a with $a_Y \ge b_Y$, we can observe that on the principal branch, the lower $T_x$ is, the closer $x$ is to $1$. We show this monotonicity in Proposition~\ref{prop:c1a_g_monotone}, which can then be used to establish stability via Lemma~\ref{lem:qre_stab}.
\begin{proposition}\label{prop:c1a_g_monotone}
In Case~1a, if $a_Y \ge b_Y$, then $\frac{\partial T_X^{II}}{\partial x}<0$ for $x \in \left(\frac{1}{2},1\right)$.
\end{proposition}
\begin{proof}
It suffices to show that $L(x,T_y)>\frac{b_X}{a_X+b_X}$ for $x \in \left(\frac{1}{2},1\right)$. Note that, according to Proposition~\ref{prop:dir_pb}, if $a_Y \ge b_Y$ then
\begin{equation}\label{eq:y_g_half}
L(1/2,T_y) = y^{II}(1/2,T_y) \ge \frac{1}{2}
\ge \frac{b_X}{a_X+b_X}
\end{equation}
Since $y^{II}(x,T_y)$ is monotonic increasing when $a_Y+b_Y>0$, $y^{II}(x,T_y)>\frac{1}{2}$ for $x \in \left(\frac{1}{2},1\right)$. As a result, we have $1-2y^{II}<0$, and hence we can see that for $x \in \left(\frac{1}{2},1\right)$,
$$
\frac{\partial L}{\partial x} = \left[(1-2x)+x(1-x)(1-2y^{II})\frac{a_Y+b_Y}{T_y}\right]\ln\left(\frac{1}{x}-1\right)\frac{\partial y^{II}}{\partial x} > 0
$$
Consequently we have that for $x \in \left(\frac{1}{2},1\right)$, $L(x,T_y)>\frac{b_X}{a_X+b_X}$, and hence $\frac{\partial T_X^{II}}{\partial x}<0$ according to Lemma~\ref{lem:techs}.
\end{proof}
\begin{proof}[Proof of Part 1 of Theorem~\ref{thm:coord_topo_stab}]
According to Lemma~\ref{lem:qre_stab}, Proposition~\ref{prop:c1a_g_monotone} implies that all QRE with $x \in (0.5,1)$, i.e., those on the principal branch, are stable. This directly gives part 1 of Theorem~\ref{thm:coord_topo_stab}.
\end{proof}
Next, if we look into the region $x \in (0,1/2)$, we find that QREs appear there only when $T_x$ and $T_y$ are low. This observation is formalized in the proposition below, which directly proves parts~2 and 3 of Theorem~\ref{thm:coord_topo_ct}, as well as part~2 of Theorem~\ref{thm:coord_topo_stab}.
\begin{proposition}\label{prop:case1a_left}
Consider Case~1a. Let $x_1 = \min\left\{\frac{1}{2},\frac{-T_y\ln\left(\frac{a_X}{b_X}\right)+b_Y}{a_Y+b_Y}\right\}$ and $T_{B} = \frac{b_Y}{\ln(a_X/b_X)}$. The following statements are true for $x \in (0,1/2)$:
\begin{enumerate}
\item If $T_y>T_B$, then $T_X^{II}<0$.
\item If $T_y<T_B$, then $T_X^{II}>0$ if and only if $x \in (0, x_1)$.
\item $\frac{\partial L}{\partial x} >0$ for $x \in (0,x_1)$.
\item If $T_y<T_I$, then $\frac{\partial T_X^{II}}{\partial x}>0$.
\item If $T_y>T_I$, then there is a nonnegative \emph{critical temperature} $T_C(T_y)$ such that $T_X^{II}(x, T_y) \le T_{C}(T_y)$ for $x \in (0, 1/2)$. If $T_y<T_B$, then $T_C(T_y)$ is given as $T_X^{II}(x_L)$, where $x_L \in (0, x_1)$ is the unique solution to $L(x,T_y)=\frac{b_X}{a_X+b_X}$.
\end{enumerate}
\end{proposition}
\begin{proof}
For the first and second part, consider any $x \in (0, 1/2)$, and we can see that
\begin{align*}
T_X^{II} > 0 &\Leftrightarrow y^{II} < \frac{b_X}{a_X+b_X} \\
&\Leftrightarrow \left( 1 + e^{\frac{1}{T_y}(-(a_Y+b_Y)x+b_Y)} \right)^{-1} < \frac{b_X}{a_X+b_X}\\
&\Leftrightarrow x < \min\left\{\frac{1}{2},\frac{-T_y\ln\left(\frac{a_X}{b_X}\right)+b_Y}{a_Y+b_Y}\right\}
\end{align*}
Note that for $T_y>\frac{b_Y}{\ln(a_X/b_X)}=T_B$, we have $x_1<0$, and hence $T_X^{II}<0$ for all $x \in (0,1/2)$.
From the above derivation we can see that for all $x \in (0,1/2)$ such that $T_X^{II}(x,T_y)>0$, we have $y^{II} < 1/2$ since $\frac{b_X}{a_X+b_X}<1/2$. Then, we can easily find that
$$
\frac{\partial L}{\partial x} = \left[(1-2x)+x(1-x)(1-2y^{II})\frac{a_Y+b_Y}{T_y}\right]\ln\left(\frac{1}{x}-1\right)\frac{\partial y^{II}}{\partial x} > 0.
$$
Further, when $T_y<T_I$, we have $y^{II}(1/2,T_y)<\frac{b_X}{a_X+b_X}$. This implies that for $x \in (0,1/2)$, $y^{II}(x,T_y)<\frac{b_X}{a_X+b_X}$. Since $\frac{\partial L}{\partial x}>0$, and $L$ is continuous, we can see that $L(x,T_y)<\frac{b_X}{a_X+b_X}$ for $x \in (0,1/2)$. This implies the fourth part of the proposition.
Next, if we look at the derivative of $T_X^{II}$,
$$
\frac{\partial T_X^{II}}{\partial x}(x,T_y)
= \frac{-(a_X+b_X)L(x,T_y)+b_X}{x(1-x)[\ln(1/x-1)]^2}
$$
we can see that any critical point in $x \in (0,1/2)$ must satisfy $L(x,T_y)=\frac{b_X}{a_X+b_X}$. When $T_y>T_I$, we have $x_1<1/2$, and we can see that $L(x_1, T_y) > y^{II}(x_1, T_y) = \frac{b_X}{a_X+b_X}$. If $T_y<\frac{b_Y}{\ln(a_X/b_X)}$, then $\lim_{x\rightarrow 0^+} L(x,T_y) = y^{II}(0, T_y) < \frac{b_X}{a_X+b_X}$. Hence, since $L$ is increasing on $(0,x_1)$, there is exactly one critical point of $T_X^{II}$ for $x \in (0,x_1)$, which is a local maximum of $T_X^{II}$. If $T_y>\frac{b_Y}{\ln(a_X/b_X)}$, then $T_X^{II}$ is always negative on $(0,1/2)$, in which case the critical temperature is zero.
\end{proof}
The results in Proposition~\ref{prop:case1a_left} apply not only to the case $a_Y \ge b_Y$ but describe the characteristics on $(0,1/2)$ in general.
According to this proposition, we can conclude the following for the case $a_Y \ge b_Y$, as well as for the case $a_Y<b_Y$ when $T_y>T_I$:
\begin{enumerate}
\item The temperature $T_B=\frac{b_Y}{\ln(a_X/b_X)}$ determines whether a branch appears in $x \in (0,1/2)$.
\item There is some critical temperature $T_C$. If we raise $T_x$ above $T_C$, then the system is always on the principal branch.
\item The critical temperature $T_C$ is obtained from the solution $x_L$ of the equation $L(x,T_y)=\frac{b_X}{a_X+b_X}$, namely $T_C=T_X^{II}(x_L,T_y)$.
\end{enumerate}
When there is a positive critical temperature, although it has no closed-form expression, we can perform a binary search for the $x \in (0, x_1)$ that satisfies $L(x,T_y)=\frac{b_X}{a_X+b_X}$, exploiting the monotonicity of $L$ on this interval; a sketch is given below.
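A minimal sketch of this bisection (illustrative only; it assumes the setting of part~5 of Proposition~\ref{prop:case1a_left} with $T_I<T_y<T_B$, so that $x_1<1/2$ and a positive critical temperature exists, and uses the logistic form of $y^{II}$ from the proof of that proposition together with the monotonicity of $L$ on $(0,x_1)$ from part~3):
\begin{verbatim}
import numpy as np

def y_II(x, Ty, aY, bY):
    # y^{II}(x, T_y) = (1 + exp((-(a_Y+b_Y)*x + b_Y)/T_y))^{-1}
    return 1.0 / (1.0 + np.exp((-(aY + bY) * x + bY) / Ty))

def L(x, Ty, aY, bY):
    y = y_II(x, Ty, aY, bY)
    dy_dx = y * (1.0 - y) * (aY + bY) / Ty
    return y + x * (1.0 - x) * np.log(1.0 / x - 1.0) * dy_dx

def x_of_critical_temperature(Ty, aX, bX, aY, bY, x1, tol=1e-10):
    # Bisection for x_L in (0, x1) with L(x_L, T_y) = b_X/(a_X+b_X),
    # using that L(., T_y) is increasing on (0, x_1).
    target = bX / (aX + bX)
    lo, hi = 1e-12, x1 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if L(mid, Ty, aY, bY) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)   # T_C(T_y) is then T_X^{II}(x_L, T_y)
\end{verbatim}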
Another result we are able to obtain from Proposition~\ref{prop:case1a_left} is that the principal branch for Case~1a when $T_y<T_I$ lies on $(0,1/2)$.
\begin{proof}[Proof of Part 2 of Theorem~\ref{thm:coord_topo_pb}]
First, we note that $T_y<T_I$ is meaningful only when $b_Y>a_Y$, in which case we always have $T_I<T_B$.
From Proposition~\ref{prop:case1a_left}, we can see that for $T_y<T_I$, we have $x_1=1/2$, and hence $T_X^{II}>0$ for $x\in(0,1/2)$. From Proposition~\ref{prop:dir_pb}, we already have $\lim_{x \rightarrow \frac{1}{2}^-} T_X^{II} = \infty$. Also, it is easy to see that $\lim_{x \rightarrow 0^+} T_X^{II} = 0$. As a result, since $T_X^{II}$ is continuously differentiable over $(0, 0.5)$, for any $T_x>0$ there exists $x \in (0,0.5)$ such that $T_X^{II}(x,T_y)=T_x$.
\end{proof}
What remains to be shown is the behavior on the side $(1/2,1)$ when $b_Y>a_Y$. In Figure~\ref{fig:as_c1a_by_ex1} and Figure~\ref{fig:as_c1a_by_ex2}, we can see that for low $T_y$, the branch on the side $(1/2,1)$ exhibits a behavior similar to what we have shown in Proposition~\ref{prop:case1a_left} for the side $(0,1/2)$. However, for high $T_y$, while we can still find that $(1/2,1)$ contains the principal branch, the principal branch is not continuous. These observations are formalized in the following proposition. From this proposition, the proof of part~4 of Theorem~\ref{thm:coord_topo_ct} directly follows.
\begin{proposition} \label{prop:c1a_g_right2}
Consider Case~1a with $b_Y>a_Y$. Let $x_2=\max\left\{\frac{1}{2},\frac{-T_y\ln\left(\frac{a_X}{b_X}\right)+b_Y}{a_Y+b_Y}\right\}$ and $T_A=\max\left\{0, \frac{-a_Y}{\ln(a_X/b_X)}\right\}$. The following statements are true for $x\in(1/2,1)$.
\begin{enumerate}
\item If $T_y<T_A$, then $T_X^{II}<0$.
\item If $T_y>T_A$, then $T_X^{II}>0$ if and only if $x \in (x_2,1)$.
\item For $x \in \left[\frac{b_Y}{a_Y+b_Y},1\right)$, we have $\frac{\partial L}{\partial x} > 0$.
\item If $T_y \in (T_{A}, T_I)$, then there is a positive critical temperature $T_{C}(T_y)$ such that $T_X^{II}(x,T_y)\le T_{C}(T_y)$ for $x \in (1/2,1)$, given as $T_{C}(T_y)=T_X^{II}(x_L)$, where $x_L \in (1/2,1)$ is the unique solution of $L(x,T_y)=\frac{b_X}{a_X+b_X}$.
\end{enumerate}
\end{proposition}
\begin{proof}
For the first part and the second part, consider $x \in (1/2,1)$, and we can find that
\begin{align*}
T_X^{II} > 0 &\Leftrightarrow y^{II} > \frac{b_X}{a_X+b_X} \\
&\Leftrightarrow \left( 1 + e^{\frac{1}{T_y}(-(a_Y+b_Y)x+b_Y)} \right)^{-1} > \frac{b_X}{a_X+b_X}\\
&\Leftrightarrow x > \max\left\{\frac{1}{2},\frac{-T_y\ln\left(\frac{a_X}{b_X}\right)+b_Y}{a_Y+b_Y}\right\} = x_2
\end{align*}
Note that for $T_y>T_I$, we get $x_2=1/2$. Also, if $T_y<T_{A}$, then $T_X^{II}<0$ for all $x \in (1/2,1)$.
For the third part, note that $y^{II} \ge \frac{1}{2}$ for all $x\ge \frac{b_Y}{a_Y+b_Y}$, and that $\frac{b_Y}{a_Y+b_Y}>\frac{1}{2}$. Then, we can find that
$$
\frac{\partial L}{\partial x} = \left[(1-2x)+x(1-x)(1-2y^{II})\frac{a_Y+b_Y}{T_y}\right]\ln\left(\frac{1}{x}-1\right)\frac{\partial y^{II}}{\partial x} > 0
$$
For the fourth part, we can find that any critical point of $L(x,T_y)$ in $(0,1)$ must either be $x=\frac{1}{2}$ or satisfy the following equation:
\begin{equation} \label{eq:G_eq_0}
(1-2x)+x(1-x)(1-2y^{II})\frac{a_Y+b_Y}{T_y} = 0
\end{equation}
Consider $G(x,T_y) = (1-2x)+x(1-x)(1-2y^{II})\frac{a_Y+b_Y}{T_y}$. For $b_Y>a_Y$, $y^{II}(1/2,T_y)$ is strictly less than $1/2$. Also, we can see that $\frac{b_Y}{a_Y+b_Y}>1/2$. Now, we can observe that $G(1/2,T_y)>0$ and $G(\frac{b_Y}{a_Y+b_Y},T_y)<0$. Next, we can see that $G(x,T_y)$ is monotonic decreasing with respect to $x$ for $x \in \left(\frac{1}{2}, \frac{b_Y}{a_Y+b_Y}\right)$ by looking at its derivative:
$$
\frac{\partial G(x,T_y)}{\partial x} = -2+ \frac{a_Y+b_Y}{T_y}\left[ (1-2x)(1-2y^{II}) -2 x(1-x)\frac{\partial y^{II}}{\partial x} \right] < 0
$$
As a result, we can see that there is some $x^* \in \left(\frac{1}{2}, \frac{b_Y}{a_Y+b_Y}\right)$ such that $G(x^*,T_y)=0$. This implies that $L(x,T_y)$ has exactly one critical point for $x \in \left(\frac{1}{2}, \frac{b_Y}{a_Y+b_Y}\right)$. Besides, we can see that if $G(x,T_y)>0$, $\frac{\partial L}{\partial x}<0$; while if $G(x,T_y)<0$, then $\frac{\partial L}{\partial x}>0$. Therefore, $x^*$ is a local minimum for $L$.
From the above arguments, we can conclude that the shape of $L(x,T_y)$ for $T_y<T_I$ is as follows:
\begin{enumerate}
\item There is a local maximum at $x=1/2$, where $L(1/2,T_y)=y^{II}(1/2,T_y)<\frac{b_X}{a_X+b_X}$.
\item $L$ is decreasing on the interval $\left(\frac{1}{2}, x^*\right)$, where $x^*$ is the unique solution to \eqref{eq:G_eq_0}.
\item $L$ is increasing on the interval $(x^*,1)$. If $T_y>T_{A}$, then $\lim_{x\rightarrow 1^-} L(x,T_y)=y^{II}(1,T_y) >\frac{b_X}{a_X+b_X}$.
\end{enumerate}
Finally, we conclude that there is a unique solution to $L(x,T_y)=\frac{b_X}{a_X+b_X}$ in $\left(\frac{1}{2},1\right)$, and this point gives a local maximum of $T_X^{II}$.
\end{proof}
The above proposition suggests that for $T_y \in (T_{A},T_I)$, we can again use binary search to find the critical temperature. For $T_y>T_I$, unfortunately, a similar argument to that of Proposition~\ref{prop:c1a_g_right2} shows that there can be up to two critical points of $T_X^{II}$ on $(1/2,1)$, as shown in Figure~\ref{fig:as_c1a_by_ex2}, which may induce an unstable segment between two stable segments. This also proves part 3 of Theorem~\ref{thm:coord_topo_stab}.
Now we have enough material to prove the remaining statements in Section~\ref{sec:coord_topo}.
\begin{proof}[Proof of Part~1, 5, and 6 of Theorem~\ref{thm:coord_topo_ct}, part~4 of Theorem~\ref{thm:coord_topo_stab}]
For $T_y>T_I$, by Proposition~\ref{prop:case1a_left}, we can conclude that for $x \in (0,x_L)$ we have $\frac{\partial T_X^{II}}{\partial x}>0$, so these QREs are stable by Lemma~\ref{lem:qre_stab}. With a similar argument we can conclude that the QREs on $x \in (x_L,x_1)$ are unstable. Moreover, given $T_x$, the stable QRE $x_a\in(0,x_L)$ and the unstable QRE $x_b\in(x_L,x_1)$ that satisfy $T_X^{II}(x_a,T_y)=T_X^{II}(x_b,T_y)=T_x$ appear in pairs. For $T_y<T_I$, with the same technique and by Proposition~\ref{prop:c1a_g_right2}, we can claim that the QREs in $x \in (x_2, x_L)$ are unstable, while the QREs in $x \in (x_L,1)$ are stable. This proves the first part of Theorem~\ref{thm:coord_topo_ct} and part~4 of Theorem~\ref{thm:coord_topo_stab}.
Part 5 and 6 of Theorem~\ref{thm:coord_topo_ct} are corollaries of part 5 of Proposition~\ref{prop:case1a_left} and part 4 of Proposition~\ref{prop:c1a_g_right2}.
\end{proof}
\subsubsection{Case 1b: $b_X>0$, $a_Y+b_Y<0$} \label{sec:case1b}
In this case, the two players have different preferences. For games within this class, there is only one Nash equilibrium (either pure or mixed). We present examples in Figure~\ref{fig:as_c1b_low_TY} and Figure~\ref{fig:as_c1b_high_TY}. In these figures, there is only one QRE for given $T_x$ and $T_y$. We show in the following two propositions that this observation holds for all instances.
\begin{proposition}
Consider Case~1b. Let $x_3=\max\left\{0,\frac{-T_y \ln(a_X/b_X)+b_Y}{a_Y+b_Y}\right\}$. If $T_y<T_I$, then the following statements are true
\begin{enumerate}
\item $T_X^{II}(x,T_y)<0$ for $x \in (1/2,1)$.
\item $T_X^{II}(x,T_y)>0$ for $x \in \left( x_3, \frac{1}{2} \right)$.
\item $\frac{\partial T_X^{II}(x,T_y)}{\partial x}>0$ for $x \in \left( x_3, \frac{1}{2} \right)$.
\item $\left( x_3, \frac{1}{2} \right)$ contains the principal branch.
\end{enumerate}
\end{proposition}
\begin{proof}
Note that if $T_y<T_I$, we have $x_3<1/2$. Also, according to Proposition~\ref{prop:dir_pb}, $y^{II}(1/2,T_y) < \frac{b_X}{a_X+b_X}$. Since $y^{II}$ is continuous and monotonic decreasing with $x$, we can see that $y^{II} < \frac{b_X}{a_X+b_X}$ for $x>1/2$. Therefore, the numerator of \eqref{eq:Tx_} is always positive for $x \in (1/2,1)$, which makes $T_X^{II}$ negative. This proves the first part of the proposition.
For the second part, observe that for $x \in (0, 1/2)$, $T_X^{II}>0$ if and only if $y^{II} < \frac{b_X}{a_X+b_X}$. This is equivalent to $x > \frac{-T_y \ln(a_X/b_X)+b_Y}{a_Y+b_Y}$.
For the third part, note that for $x \in (0,1/2)$, $x(1-x)\ln(1/x-1)\frac{\partial y^{II}}{\partial x} < 0$. This implies $L(x,T_y) < y^{II}(x,T_y) < \frac{b_X}{a_X+b_X}$ for $x \in (x_3,1/2)$, from which we can conclude that $\frac{\partial T_X^{II}(x,T_y)}{\partial x}>0$.
Finally, we note that if $x_3>0$, then $T_X^{II}(x_3,T_y)=0$. If $x_3=0$, we have $\lim_{x\rightarrow 0^+} T_X^{II}=0$. As a result, we can conclude that $(x_3,1/2)$ contains the principal branch.
\end{proof}
With similar arguments, we can show the following proposition for $T_y>T_I$:
\begin{proposition}
Consider Case~1b. Let $x_3=\min\left\{1,\frac{-T_y \ln(a_X/b_X)+b_Y}{a_Y+b_Y}\right\}$. If $T_y>T_I$, then the following statements are true
\begin{enumerate}
\item $T_X^{II}(x,T_y)<0$ for $x \in (0,1/2)$.
\item $T_X^{II}(x,T_y)>0$ for $x \in \left( \frac{1}{2}, x_3 \right)$.
\item $\frac{\partial T_X^{II}(x,T_y)}{\partial x}<0$ for $x \in \left( \frac{1}{2}, x_3 \right)$.
\item $\left( \frac{1}{2}, x_3 \right)$ contains the principal branch.
\end{enumerate}
\end{proposition}
\subsubsection{Case 1c: $a_Y+b_Y=0$}
In this case, we have $T_I=\frac{b_Y}{\ln(a_X/b_X)}$, and $y^{II}$ is a constant with respect to $x$. The proof of Theorem~\ref{thm:nc2} for $a_Y+b_Y=0$ directly follows from the following proposition.
\begin{proposition}
Consider Case~1c. The following statements are true:
\begin{enumerate}
\item If $T_y<T_I$, then $T_X^{II}(x,T_y)<0$ for $x \in (0.5,1)$, and $T_X^{II}(x,T_y)>0$ for $x \in (0,0.5)$.
\item If $T_y>T_I$, then $T_X^{II}(x,T_y)<0$ for $x \in (0,0.5)$, and $T_X^{II}(x,T_y)>0$ for $x \in (0.5,1)$.
\item If $T_y<T_I$, then $\frac{\partial T_X^{II}(x,T_y)}{\partial x}>0$ for $x \in \left( 0, 0.5 \right)$.
\item If $T_y>T_I$, then $\frac{\partial T_X^{II}(x,T_y)}{\partial x}<0$ for $x \in \left( 0.5,1 \right)$.
\end{enumerate}
\end{proposition}
\begin{proof}
Note that $y^{II}=\left(1+e^{b_Y/T_y}\right)^{-1}$.
First consider the case when $a_Y>b_Y$. In this case $T_I=0$ and $b_Y<0$. Therefore, $y^{II}>\frac{b_X}{a_X+b_X}$, from which we can conclude that $T_X^{II}>0$ for $x \in (0.5,1)$ and $T_X^{II}<0$ for $x \in (0,0.5)$, for any positive $T_y$.
Now consider the case that $a_Y<b_Y$. If $T_y<T_I$, we have $y^{II}<\frac{b_X}{a_X+b_X}$, and hence we get $T_X^{II}(x,T_y)<0$ for $x \in (0.5,1)$, and $T_X^{II}(x,T_y)>0$ for $x \in (0,0.5)$, which is the first part of the proposition statement. Similarly, if $T_y>T_I$, we have $y^{II}>\frac{b_X}{a_X+b_X}$, from which the second part of the proposition follows.
For the third part and the fourth part, note that $L(x,T_y)=y^{II}$ in this case as $\frac{\partial y^{II}}{\partial x}=0$ by observing \eqref{eq:def_L}, and the sign of the derivative of $T_X^{II}$ can be seen from Lemma~\ref{lem:techs}.
\end{proof}
\subsection{Case 2: $b_X<0$}\label{sec:case2}
In this case, the first action is a dominating strategy for the first player. Note that both $-(a_X+b_X)$ and $b_X$ are not positive, which means that the numerator of \eqref{eq:Tx_} is always smaller than or equal to zero. This implies that all QRE correspondences appear on $x \in \left(\frac{1}{2}, 1\right)$. In fact, since $y^{II}>0$ and $b_X<0$, the numerator of \eqref{eq:Tx_} is strictly negative for $x \in (1/2,1)$, and hence $T_X^{II}>0$ there. Also, we can easily see that
$$
\lim_{x\rightarrow \frac{1}{2}^+} T_X^{II}(x,T_y) = +\infty
$$
This implies that $(1/2,1)$ contains the principal branch. First, we show the result when $a_Y+b_Y<0$ in the following proposition. Also, the bifurcation diagram is presented in Figure~\ref{fig:as_c2_ex1}.
\begin{proposition}
For Case~2, if $a_Y+b_Y<0$, then for $x \in (1/2,1)$, we have $\frac{\partial T_X^{II}}{\partial x}<0$.
\end{proposition}
\begin{proof}
In this case, $y^{II}$ is monotonic decreasing with $x$. We can see that
$$
L(x,T_y) = y^{II} + x(1-x)\ln\left(\frac{1}{x}-1\right) \frac{\partial y^{II}}{\partial x} > y^{II} >0
$$
since $x(1-x)\ln\left(\frac{1}{x}-1\right) \frac{\partial y^{II}}{\partial x}$ is positive for $x \in (1/2,1)$. Bringing this back to \eqref{eq:def_dTX}, we have $\frac{\partial T_X^{II}}{\partial x}<0$.
\end{proof}
For $a_Y+b_Y>0$, if $a_Y>b_Y$, the bifurcation diagram has a trend similar to Figure~\ref{fig:as_c2_ex1}, while if $a_Y<b_Y$, we may lose continuity of the principal branch.
\begin{proposition}
For Case~2, if $a_Y+b_Y>0$, then for $x\in(1/2,1)$, we have
\begin{enumerate}
\item if $a_Y>b_Y$, then $\frac{\partial T_X^{II}}{\partial x}<0$.
\item if $a_Y<b_Y$, then $T_X^{II}$ has at most two local extrema.
\end{enumerate}
\end{proposition}
\begin{proof}
In this case, $y^{II}$ is monotonic increasing with $x$. For $a_Y>b_Y$, we can find that $y^{II}(1/2,T_y)>0$ and $L(1/2,T_y)=y^{II}(1/2,T_y)>0$. Also, we can get that $L$ is monotonic increasing for $x \in (1/2,1)$ by inspecting
$$
\frac{\partial L(x,T_y)}{\partial x} = \left[(1-2x)+x(1-x)(1-2y^{II})\frac{a_Y+b_Y}{T_y}\right]\ln\left(\frac{1}{x}-1\right) \frac{\partial y^{II}(x,T_y)}{\partial x} > 0
$$
Hence, for $x \in (1/2,1)$, $L(x,T_y)>0>\frac{b_X}{a_X+b_X}$ (recall that $b_X<0$). This implies $\frac{\partial T_X^{II}}{\partial x}<0$ for $x\in(1/2,1)$ by Lemma~\ref{lem:techs}.
For the second part, we can find that for $a_Y<b_Y$, $y^{II}(1/2)<1/2$. Let $x_2 = \min\left\{1,\frac{b_Y}{a_Y+b_Y}\right\}$. First note that if $x_2<1$, then for $x>x_2$ we have $y^{II}>1/2$, and hence $\frac{\partial L(x,T_y)}{\partial x} >0$ for $x\in(x_2,1)$. We use the same technique as in the proof of Proposition~\ref{prop:c1a_g_right2}. Let $G(x,T_y)=(1-2x)+x(1-x)(1-2y^{II})\frac{a_Y+b_Y}{T_y}$. Note that $G(1/2, T_y)>0$ and $G(x_2,T_y)<0$. Next, observe that $G(x,T_y)$ is monotonic decreasing for $x \in \left(\frac{1}{2}, x_2\right)$. Hence, there exists an $x^* \in (1/2, x_2)$ such that $G(x^*,T_y)=0$. This $x^*$ is a local minimum for $L$. We can conclude that for $x \in (1/2,1)$, $L$ has the following shape:
\begin{enumerate}
\item There is a local maximum at $x=1/2$, where $L(1/2,T_y)=y^{II}(1/2,T_y)>0$.
\item $L$ is decreasing on the interval $x \in (1/2, x^*)$, where $x^*$ is the solution to $G(x^*,T_y)=0$.
\item $L$ is increasing on the interval $x \in (x^*, x_2)$. Note that $\lim_{x \rightarrow 1^-} L(x,T_y) = y^{II}(1,T_y)>0$.
\end{enumerate}
As a result, if $L(x^*, T_y)>\frac{b_X}{a_X+b_X}$, then $T_X^{II}$ is monotonic decreasing; otherwise, $T_X^{II}$ has a local minimum and a local maximum on $(1/2,1)$.
\end{proof}
\begin{comment}
\section{Bifurcation Diagram Analysis For Symmetric Equilibrium}
In this sub-section, we analyze the bifurcation diagrams for $2 \times 2$ symmetric coordination game. Suppose the game has the payoff matrix for the row player:
$$
A = \left( \begin{array}{cc}
a & b \\
c & d \end{array}
\right)
$$
We consider the symmetric strategy profile and discuss its relationship with the temperature $\T$ of QRE. Specifically, let $x$ be the probability that the row play selects action $1$ at some QRE, and under the same QRE, by symmetry, the column player will also select action $1$ with probability $x$. At QRE, we have
\begin{equation} \label{eq:qre_symmetric}
x=\frac{e^{\frac{1}{\T}(xa+(1-x)b)}}{e^{\frac{1}{\T}(xa+(1-x)b)}+e^{\frac{1}{\T}(xc+(1-x)d)}}
\end{equation}
Rearranging \eqref{eq:qre_symmetric}, and denote $\alpha=c+b-a-d$, $\beta=d-b$, we get
$$
\frac{1}{x}=1+e^{\frac{1}{\T}(\alpha x+\beta)}
$$
Then, we can write temperature $\T$ as a function of $x$:
\begin{equation}\label{eq:func_temp_x}
\T(x)=\frac{\alpha x + \beta}{\ln(\frac{1}{x}-1)}
\end{equation}
For any $2\times 2$ coordination game, basically it can be categorized into one of the following three cases:
\begin{enumerate}
\item The payoff dominant equilibrium is the risk-dominant equilibrium.
\item The payoff dominant equilibrium is not the risk-dominant equilibrium.
\item There are no risk-dominant equilibrium.
\end{enumerate}
Without loss of generality, we assume that the first action is the payoff dominant action, i.e. $a>d$. By the definition of coordination game, we have $\beta=d-b>$ and $\alpha=(c-a)+(b-d)<0$ since $a-c>0$ and $d-b>0$. For case 1, we can see that $a-c>b-d$, which implies $-\alpha>2\beta$. Similarly, for case 2, we have $-\alpha<2\beta$, and for case 3, we have $-\alpha=2\beta$.
We analyze the characteristics of the function $\T(x)$ for $x \in (0,1)$. First, note that this function is continuous in $(0,1)$ except for $x=1/2$, and for case 1
\begin{align*}
\lim_{x \rightarrow \frac{1}{2}^+} \T(x) = +\infty \\
\lim_{x \rightarrow \frac{1}{2}^-} \T(x) = -\infty
\end{align*}
Similarly, for case 2
\begin{align*}
\lim_{x \rightarrow \frac{1}{2}^+} \T(x) = -\infty \\
\lim_{x \rightarrow \frac{1}{2}^-} \T(x) = +\infty
\end{align*}
However, for case 3, the limit exists
$$
\lim_{x \rightarrow \frac{1}{2}} \T(x) = \frac{-\alpha}{4}
$$
The function $\T(x)$ is differentiable for $x \in (0,\frac{1}{2})$ and $x \in (\frac{1}{2}, 1)$. Its derivative can be given as
\begin{equation}\label{eq:T_derivative}
\frac{d\T}{dx} = \frac{\alpha x[(1-x)\ln(\frac{1}{x}-1)+1]+\beta}{x(1-x)(\ln(\frac{1}{x}-1))^2}
\end{equation}
First, we state some useful lemmas for this:
\begin{lem} \label{lemma:dT_deno}
$x(1-x)(\ln(\frac{1}{x}-1))^2>0$ for $x \in (0,1)$
\end{lem}
\begin{lem} \label{lemma:dT_nom}
Let $L(x)=x[(1-x)\ln(\frac{1}{x}-1)+1]$. Then,
\begin{enumerate}
\item $L(x)$ is monotonic non-decreasing for $x \in (0,1)$.
\item $L(x)>\frac{1}{2}$ for $x \in (\frac{1}{2}, 1)$.
\item $L(x)<\frac{1}{2}$ for $x \in (0, \frac{1}{2})$.
\end{enumerate}
\end{lem}
\begin{proof}
This lemma can be shown by observing that the derivative of $L(x)$ is always non-negative:
$$
\frac{dL(x)}{dx} = (1-2x)\ln\bigg(\frac{1}{x}-1\bigg) \ge 0
$$
where the equality holds for $x=0$, and $L(\frac{1}{2})=\frac{1}{2}$.
\end{proof}
With these two lemmas, we can directly state the following results that suggest the plot $\T(x)$ versus $x$ looks like that as in Figure~\ref{fig:c1_Tx}:
\begin{prop}
For case 1, $\T(x)$ has the following property:
\begin{enumerate}
\item $\T(x)$ is upper-bounded for $x \in (0, \frac{1}{2})$.
\item $\T(x)$ is monotonic decreasing for $x \in (\frac{1}{2}, 1)$.
\item For any $\T'>0$, there exists $x \in (\frac{1}{2},1)$ such that $\T(x)=\T'$.
\end{enumerate}
\end{prop}
\begin{proof}
To show the first claim, we observe that $L(x) \in (0, \frac{1}{2})$ for $x \in (0, \frac{1}{2})$ and $L(x)$ is monotonic increasing. This means that for case 1 (where $-\alpha>2\beta)$, there exists one $x^*$ such that $\frac{d\T}{dx}(x^*)=0$, and one can easily verify that this is a local maximum point. The second claim can be seen by Lemma~\ref{lemma:dT_nom}:
$$
\frac{d\T}{dx} = \frac{\alpha L(x) + \beta}{x(1-x)(\ln(\frac{1}{x}-1))^2} < \frac{\frac{1}{2}\alpha+\beta}{x(1-x)(\ln(\frac{1}{x}-1))^2} < 0
$$
Finally, the third claim can be seen by the fact that $\T(x)$ is monotonic decreasing for $x \in (\frac{1}{2},1)$, and $\lim_{x \rightarrow \frac{1}{2}^+}\T(x) = \infty$, $\lim_{x \rightarrow 1^-}\T(x) = 0$
\end{proof}
\begin{figure}
\caption{$\T(x)$ versus $x$ for case 1}
\label{fig:c1_Tx}
\end{figure}
\begin{figure}
\caption{$\T(x)$ versus $x$ for case 2}
\label{fig:c2_Tx}
\end{figure}
\begin{figure}
\caption{$\T(x)$ versus $x$ for case 3}
\label{fig:c3_Tx}
\end{figure}
Similarly, we can state the result for case 2, the plot of $\T(x)$ versus $x$ shapes as in Figure~\ref{fig:c2_Tx}:
\begin{prop}
For case 2, $\T(x)$ has the following property:
\begin{enumerate}
\item $\T(x)$ is upper-bounded for $x \in (\frac{1}{2},1)$.
\item $\T(x)$ is monotonic increasing for $x \in (0, \frac{1}{2})$.
\item For any $\T'>0$, there exists $x \in (0, \frac{1}{2})$ such that $\T(x)=\T'$.
\end{enumerate}
\end{prop}
\begin{proof}
To show the first claim, we observe that $L(x) \in (\frac{1}{2}, 1)$ for $x \in (\frac{1}{2}, 1)$ and $L(x)$ is monotonic increasing. This means that for case 2 (where $-\alpha<2\beta)$, there exists one $x^*$ such that $\frac{d\T}{dx}(x^*)=0$, and one can easily verify that this is a local maximum point. The second claim can be seen by Lemma~\ref{lemma:dT_nom}:
$$
\frac{d\T}{dx} = \frac{\alpha L(x) + \beta}{x(1-x)(\ln(\frac{1}{x}-1))^2} > \frac{\frac{1}{2}\alpha+\beta}{x(1-x)(\ln(\frac{1}{x}-1))^2} > 0
$$
Finally, the third claim can be seen by the fact that $\T(x)$ is monotonic increasing for $x \in (0, \frac{1}{2})$, and $\lim_{x \rightarrow \frac{1}{2}^-}\T(x) = \infty$, $\lim_{x \rightarrow 0^+}\T(x) = 0$
\end{proof}
And for case 3, we have the following result:
\begin{prop}
For case 3, $\T(x)$ is symmetric around $x=\frac{1}{2}$ for $x \in (0,1)$. Also, $\T(x)$ is upper-bounded for $x \in (0,\frac{1}{2})\cup(\frac{1}{2},1)$.
\end{prop}
\begin{proof}
First, we show that $\T(x)$ is symmetric:
$$
\T(1-x)=\frac{\alpha(1-x)+\beta}{\ln(\frac{1}{1-x}-1)} = \frac{-\alpha x-\beta}{\ln(\frac{x}{1-x})} = \frac{\alpha x+\beta}{\ln(\frac{1}{x}-1)} = \T(x)
$$
Next, we can see that $\T(x)$ is upper-bounded by observing that $\lim_{x \rightarrow \frac{1}{2}} \frac{d\T(x)}{dx} = 0$.
\end{proof}
\end{comment}
\end{document} |
\begin{document}
\begin{abstract}
We develop a fully non-parametric, easy-to-use, and powerful test for the missing completely at random (MCAR) assumption on the missingness mechanism of a dataset. The test compares distributions of different missing patterns on random projections in the variable space of the data. The distributional differences are measured with the Kullback-Leibler Divergence, using probability Random Forests \citep{probabilityforests}. We thus refer to it as ``Projected Kullback-Leibler MCAR'' (PKLM) test. The use of random projections makes it applicable even if very few or no fully observed observations are available or if the number of dimensions is large. An efficient permutation approach guarantees the level for any finite sample size, resolving a major shortcoming of most other available tests. Moreover, the test can be used on both discrete and continuous data. We show empirically on a range of simulated data distributions and real datasets that our test has consistently high power and is able to avoid inflated type-I errors. Finally, we provide an \textsf{R}-package \texttt{PKLMtest} with an implementation of our test.
\end{abstract}
\begin{frontmatter}
\title{PKLM: A flexible MCAR test using Classification }
\runtitle{PKLM}
\thankstext{T1}{Authors with equal contribution.}
\thankstext{T2}{We are grateful to Jun Li for providing us with parts of their code.}
\begin{aug}
\author{\fnms{Meta-Lina} \snm{Spohn}*},
\author{\fnms{Jeffrey} \snm{Näf}*},
\author{\fnms{Loris} \snm{Michel}},
\and
\author{\fnms{Nicolai} \snm{Meinshausen}}
\affiliation{ETH Zürich}
\address{Seminar for Statistics\\
ETH Zürich\\
Rämistrasse 101\\
8092 Zürich\\
Switzerland\\
E-mail: [email protected]}
\end{aug}
\begin{keyword}
\kwd{Random Projections}
\kwd{Tree Ensembles}
\kwd{Random Forest}
\kwd{KL-Divergence}
\kwd{Permutation}
\end{keyword}
\end{frontmatter}
\section{Introduction}
Dealing with missing values is an integral part of modern statistical analysis. In particular, the assumed mechanism leading to the missing values is of great importance. Based on the work of \citet{Rubin1976}, there are three groups of missingness mechanisms usually considered: The values may be missing completely at random (MCAR), meaning the probability of a value being missing does not depend on the observed or unobserved data. In contrast, the probability of being missing could depend on observed values (missing at random, MAR) or on unobserved values (missing not at random, MNAR).
As stated in \citet{yuan2018}, ``a formal confirmation of the MCAR missing data mechanism is of great interest, simply because essentially all methods can still yield consistent estimates under MCAR even if the underlying population distribution is unknown''. While there are, at least for imputation, a number of approaches that can deal with a MAR missing data mechanism such as Multivariate Imputation by Chained Equations (mice) \citep{buuren_mice, Deng2016}, many commonly used methods explicitly rely on the validity of the MCAR assumption. Examples are the easy-to-use listwise-deletion and mean-imputation methods \citep{RubinLittlebook}. Consequently, the original paper on MCAR testing \citep{littletest} has been cited over $7600$ times according to Google Scholar. Recent papers (involving psychometric analysis) that test the MCAR assumption in order to justify listwise-deletion include
\cite{paperusingMCARtest4},
\cite{paperusingMCARtest2},
\cite{paperusingMCARtest1}, \cite{paperusingMCARtest3}, \cite{paperusingMCARtest5}, and \cite{paperusingMCARtest6}. As such, it is important to reliably test the MCAR assumption.
The testing framework is of an ANOVA-type: when observing a dataset with missing values, there are $n$ observations and $G$ missingness patterns, $g=1,\ldots,G$. The observations belonging to the missingness pattern $g$ can be seen as a group, such that we observe $G$ groups of observations. The MCAR hypothesis now implies that the distribution of the observed data in all groups is the same, while under the alternative at least two differ. This is technically testing the observed at random (OAR) assumption defined in \citet{Rhoads2012}, see also the end of Section \ref{scoredefsec} for a discussion. This distinction can be avoided by assuming the missingness mechanism is MAR, which is what is usually implicitly done \citep{Li2015}.
The idea of testing the MCAR assumption traces back to \citet{littletest}. While some more refined versions of this testing idea were developed since then \citep{chen_Little, Kim2002, Jamshidian2010}, there has not been a lot of progress on distribution-free MCAR tests, able to detect general distributional differences between the missingness patterns. \citet{Li2015} recently made a step in that direction. Their test is completely nonparametric and shown to be consistent. Empirically it is shown to keep the level and to have a high power over a wide range of distributions. An application area where their proposed test struggles is for higher-dimensional data with little or no complete observations. Their testing paradigm is based on ``a reasonable amount'' of complete cases and all pairwise comparisons between the observed parts of two missingness pattern groups. This is problematic, since, as the dimension $p$ increases, the number of distinct patterns $G$ tends to grow quickly as well. The most extreme case occurs when $G = n$, that is, every observation forms a missingness pattern group on its own. Consequently, their test appears computationally prohibitively expensive for $p > 10$. Additionally, as the dimension increases, both the number of complete cases and the number of observations per pattern tends to decrease, both contributing to a reduction in power for the test in \citet{Li2015}.
In this paper, we try to circumvent these problems in a data-efficient way, by employing a one-versus-all-others approach and using \emph{random projections} in the variable space. Considering observations that are projected into a lower-dimensional space allows us to recover more complete cases. As realized by \citet{Li2015}, the problem of MCAR testing, as described above, is a problem of testing whether distributions across missingness patterns are different. The method presented here relies on some of the core ideas of \citet{michel2021proper} and \citet{Cal2020}, who do distributional testing using classifiers. We extend the ideas of \citet{Cal2020} to be usable for multiclass classification and use the projection idea of \citet{michel2021proper} to build a test that is usable and powerful even for high dimensions. Moreover, using a permutation approach, we are able to provably keep the nominal level $\alpha$ for all $n$. As outlined later, this is in contrast to other tests, for which the level might be kept only asymptotically, or is even unclear. The approach of random projections together with a permutation test also allows us to extract more information than just a global hypothesis test. We make use of this to calculate individual $p$-values for each variable. Such a partial test for a variable addresses the null hypothesis that, once that variable is removed, the data is MCAR. Together with the test of overall MCAR, this might point towards the potential source of deviation from the null, that is, the variables causing an MCAR violation.
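To illustrate why projecting onto a random subset of variables helps, the following small sketch (hypothetical setup, not part of the \texttt{PKLMtest} package) counts how many observations are fully observed on a random set of three variables; this number is typically far larger than the number of fully observed rows of the full dataset.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
X[rng.random((n, p)) < 0.15] = np.nan    # ~15% of entries missing (MCAR here)

M = np.isnan(X)                          # missingness matrix
print(np.sum(~M.any(axis=1)))            # fully observed rows: only a handful

A = rng.choice(p, size=3, replace=False) # random projection onto 3 variables
print(np.sum(~M[:, A].any(axis=1)))      # rows complete on the projection: many more
\end{verbatim}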
The paper is structured in the following way. Section \ref{problemformulationsection} introduces notation. Section \ref{scoredefsec} details the testing framework including the null and alternative hypotheses we consider. Section \ref{pracaspects} then showcases how to perform this test in practice and details the algorithm. Section \ref{empresults_1} shows some numerical comparisons for type-I error control and power. Section \ref{extension} explains the extension of partial $p$-values, while Section \ref{discuss} concludes. Appendix \ref{proofsection} contains the proofs of all results, while Appendix \ref{app:comptime} adds some additional details and shows computation times of the different tests.
\subsection{Contributions}
Our contributions can be summarized as follows: We develop the PKLM-test, an easy-to-use and powerful non-parametric test for MCAR, that is applicable even in high dimensions. We thereby extend the testing approach of \citet{Cal2020} to multiclass testing, which in connection with random projections in the variable space and the Random Forest classifier leads to a powerful test for both discrete and continuous types of data. To the best of our knowledge, no other test is as widely applicable and powerful. Moreover, we are able to formally prove the validity of our $p$-values for any sample size and number of groups $G$. As we demonstrate in our simulations, this is remarkable for the MCAR testing literature. It appears no other MCAR test has such a guarantee and many have inflated type-I errors, even in realistic cases, see e.g. the discussion in \citet{Jamshidian2010}.
As an extension, we can compute partial $p$-values corresponding to each variable, addressing the question of the source of violation of MCAR among the variables. We demonstrate the validity and power of our test on a wide range of simulated and real datasets in conjunction with different MAR mechanisms. Finally, we make our test available through the \textsf{R}-package \texttt{PKLMtest}, available on \url{https://github.com/missValTeam/PKLMtest} and on CRAN.
\subsection{Related Work}\label{relwork}
Previous advances in testing MCAR were mostly made by \citet{littletest} (referred to as ``Little-test'') and extensions \citep{chen_Little, Kim2002} under the assumption of joint Gaussianity. To the best of our knowledge, the only distribution-free tests are developed in \citet{Jamshidian2010}, \citet{Li2015} and \citet{empiricallikelihoodapproach}. The first paper develops a test (referred to as ``JJ-test''), which is distribution-free but is only able to spot differences in the covariance matrices between the different patterns. As such, the simulation study in \citet{Li2015} shows that their test (referred to as ``Q-test''), which can detect any potential difference, has much more power than the JJ-test. Moreover, the JJ-test requires prior imputation of missing values, which appears undesirable. \citet{empiricallikelihoodapproach} develop a test that can be used to subsequently also consistently estimate certain estimators under MCAR. Their test requires a set of fully observed ``auxiliary'' variables that can be used to first test and then estimate properties of some variable of interest. As such their approach and goals are quite different from ours.
Consequently, the test closest to ours is the fully non-parametric method in \citet{Li2015}. However, it is computationally costly or even infeasible to use their test with dimensions typically found in modern datasets ($p \gg 10$), as all pairwise comparisons between missingness patterns are calculated. While this could in principle be avoided by only checking a subset of pairs, we empirically show that, even if all pairwise comparisons are performed, our test has comparable or even higher power than theirs in their own simulation setting. This gap only increases with the number of dimensions or with a decrease in the fraction of fully observed cases.
We also address a major issue in the MCAR testing literature: none of the proposed methods has a finite sample guarantee of producing valid $p$-values and for some it can even be empirically checked that the produced $p$-value is not valid in certain settings. If $Z$ is a $p$-value generated from a statistical test, then it is valid if $\ensuremath{{\mathbb P}}(Z \leq \alpha) \leq \alpha$ under $H_0$ for all $\alpha \in [0,1]$, see e.g., \citet{lehmann2005testing}. Figure \ref{fig:ecdf} in Section \ref{empresults_1} shows some examples of previous tests violating this validity of $p$-values. This issue might be surprising since the requirement of a valid $p$-value might be the most basic demand a statistical test needs to meet. For the Little-test, this is generally true under normality or asymptotically, that is, if the number of observations goes to infinity, under some moment conditions and conditions on the group size. Despite this, Section \ref{empresults_1} shows that type-I error rates can strongly exceed the desired level even in samples of $500$ observations. The same holds for the JJ-test of \citet{Jamshidian2010} for which we sometimes observed a strong inflation of the level. As with the JJ-test, \citet{Li2015} also do not provide a formal guarantee that the level is kept. Though in our own simulation study, which is similar to theirs, we did not find any notable violation of the level for their test.
To conduct our test, we adapt and partially extend the approaches of \citet{Cal2020} and \citet{michel2021proper}. The former develops a two-sample test using classification, an approach that has gained a lot of attention in recent years (see e.g., \citet{DPLBpublished} or \citet{hediger2020use} for a literature overview). We extend this approach to multiclass testing, to obtain a test statistic akin to \citet{Cal2020}, but using the out of bag (OOB) probability estimate of the Random Forest (RF) instead of the in-sample probability. This was already hinted in \citet{hediger2020use} to increase the power of the two-sample testing approach designed by \citet{Cal2020}. \citet{michel2021proper}, on the other hand, use random projections to increase the sample efficiency in the presence of missing values. This simple idea makes our test applicable and powerful, even in high dimensions, and even if the number of patterns $G$ is the same as the number of observations. It can also provide additional information together with the rejection decision, as we demonstrate in Section \ref{extension}. Finally, through an efficient permutation testing approach, we are able to formally guarantee that our test produces valid $p$-values for \emph{any} $n$ and any number of groups $G$. It appears that the PKLM-test is the first MCAR test with such a guarantee.
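The following sketch illustrates, for a single fixed projection with feature matrix \texttt{Z} and integer-coded missingness-pattern labels \texttt{g}, the two ingredients described above: a KL-divergence-type statistic based on out-of-bag (OOB) class probabilities of a Random Forest, and a permutation $p$-value. The helper functions are hypothetical and merely illustrate the principle; they are neither the \texttt{PKLMtest} implementation nor the exact statistic defined later in the paper.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def oob_kl_statistic(Z, g, n_trees=500, seed=0):
    # Mean OOB log-likelihood ratio of the labels g relative to the class
    # proportions -- a KL-divergence-type statistic (illustration only).
    clf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                 random_state=seed).fit(Z, g)
    P = np.clip(clf.oob_decision_function_, 1e-9, 1.0)  # OOB class probabilities
    idx = np.searchsorted(clf.classes_, g)              # column of each true label
    prior = np.bincount(idx) / len(g)
    return np.mean(np.log(P[np.arange(len(g)), idx]) - np.log(prior[idx]))

def permutation_pvalue(Z, g, B=30, seed=0):
    # Permuting the labels breaks any dependence between Z and g, so the
    # resulting p-value is valid under the null for any sample size.
    rng = np.random.default_rng(seed)
    obs = oob_kl_statistic(Z, g)
    perm = [oob_kl_statistic(Z, rng.permutation(g)) for _ in range(B)]
    return (1 + sum(s >= obs for s in perm)) / (1 + B)
\end{verbatim}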
Table~\ref{advantagetable} summarizes some of the properties of different tests. In particular, ``mixed data types'' refers to a possible combination of continuous data (such as income) and discrete data (such as gender), while ``power beyond differences in first and second moments'' means the test is able to detect differences between distributions, even if their means or variances are identical. Though this is difficult to show formally, it appears quite clear that the nonparametric nature of our approach allows for the detection of differences in distributions between patterns, even if the missingness groups all share the same mean or covariance matrix. As outlined in \citet{yuan2018} this is crucial for the detection of general MCAR deviations and is not the case, for instance, for the widely used Little-test. Appendix \ref{app:equalgroupmeansandvar} studies a simulated MAR example taken from \citet{yuan2018}, whereby observed means and variances are approximately the same across different groups. Tests such as the Little-test have no power in this example, yet with our approach, we reach a power of $1$.
\begin{table}[H]
\resizebox{1\textwidth}{!}{
\begin{tabular}{l|l|l|l|l}
& PKLM & Q & Little & JJ \\ \hline
Computational Complexity & $\mathcal{O}(p n \log(n))$ & $\mathcal{O}(n^2 p)$ & $\mathcal{O}(n p^2) $ & $\mathcal{O}(n (p^2 + \log(n)))$ \\ \hline
Can be used without & Yes & No & No & Yes \\
complete observations & & & & \\ \hline
Mixed data types possible & Yes & No & No & No \\ \hline
Does not require initial imputation & Yes & Yes & Yes & No\\ \hline
Power beyond differences & Yes & Yes & No & No \\
in first and second moments & & & & \\ \hline
\end{tabular}}
\caption{Illustration of some of the properties of various tests. For details on the calculation of the computational complexities we refer to Appendix \ref{app:comptime}.
\label{advantagetable}}
\end{table}
\section{Notation}\label{problemformulationsection}
We assume an underlying probability space $(\Omega, \mathcal{F}, \ensuremath{{\mathbb P}})$ on which all random elements are defined. Along the lines of \cite{josse} we introduce the following notation: let $\mathbf{X}^* \in \mathbb{R}^{n \times p}$ be a matrix of $n$ complete samples from a distribution $P^*$ on $\ensuremath{{\mathbb R}}^p$. We denote by $\mathbf{X}$ the corresponding incomplete dataset that is actually observed. Alongside $\mathbf{X}$ we observe the missingness matrix $\mathbf{M} \in \{0,1\}^{n\times p}$, whose entry $m_{ij} \in \{0,1\}$ is $1$ if entry $x^*_{ij}$ is missing and $0$ if it is observed. Each unique combination in $\{0,1\}^p$ appearing as a row of $\mathbf{M}$ is referred to as a missingness pattern, and we assume that there are $G \leq n$ unique patterns in $\mathbf{M}$. As an example, for $p=2$, we might have the patterns $(1,0)$ (first value missing, second observed), $(0,1)$ (first value observed, second missing) or $(0,0)$ (both values observed). We do not consider the completely missing pattern, in this case $(1,1)$.
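For concreteness, the following minimal base-\textsf{R} sketch (with our own, hypothetical variable names) shows how $\mathbf{M}$ and the $G$ pattern groups could be derived from an incomplete data matrix:
\begin{verbatim}
set.seed(1)
X <- matrix(rnorm(20), nrow = 5, ncol = 4)      # hypothetical complete data X*
X[sample(length(X), 6)] <- NA                   # introduce some missing values

M <- 1 * is.na(X)             # missingness matrix: 1 = missing, 0 = observed

## each distinct row of M is one missingness pattern; rows with all entries
## equal to 1 (completely missing) would be discarded in practice
pattern_id <- apply(M, 1, paste, collapse = "") # encode each row as a string
groups     <- as.integer(factor(pattern_id))    # group label g per observation
G          <- length(unique(groups))            # number of unique patterns
table(groups)                                   # group sizes n_1,...,n_G
\end{verbatim}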
We assume that each row $x_{i}$ ($x^*_{i}$) of $\mathbf{X}$ ($\mathbf{X}^*$) is a realization of an i.i.d. copy of the random vector $X$ ($X^*$) with distribution $P$ ($P^*$). Similarly, $M$ is the random vector in $\{0,1\}^p$ encoding the missingness pattern of $X$. Furthermore we assume that $P$ ($P^*$) has a density $f$ ($f^*$) with respect to some dominating measure. For a random vector $X$ or an observation $x$ in $\ensuremath{{\mathbb R}}^p$ and subset $A \subseteq \{1,\ldots,p\}$, we denote as $X_A$ ($x_A$) the projection onto that subset of indices. For instance if $p=3$ and $A=\{1,2\}$, then $X_A=(X_1, X_2)$ ($x_A=(x_1, x_2)$). For any set $C \subseteq \{1,\ldots, p \}$, we denote by $\mathbf{X}_{\bullet C}$ the matrix of $n$ observations projected onto dimensions in $C$, so that $\mathbf{X}_{\bullet C}$ is of dimension $n \times |C|$. Similarly, for $R \subseteq \{1,\ldots,n \}$, $\mathbf{X}_{R \bullet}$ denotes the matrix of observations in set $R$, over all dimensions, so that the dimension of $\mathbf{X}_{R \bullet}$ is given by $|R| \times p$. We denote by $F_g$ (respectively $f_g$) the complete distribution (density) of the data in the $g^{th}$ missingness pattern group. A quick overview of the notation including the use of indices for the number of missingness patterns, dimensions, observations, projections and permutations is given in Table \ref{Notationtable}.
\begin{table}[!htbp]
\centering
\begin{center}
\begin{tabular}{|c|c c |}
\hline
notation & partial & full \\ [0.5ex]
\hline\hline
distribution & $P$ & $P^*$ \\
dataset & $\mathbf{X}$ & $\mathbf{X}^*$ \\
observation in $\ensuremath{{\mathbb R}}^p$ & $x_i$ & $x_i^*$ \\
random vector & $X$ & $X^*$ \\
density & $f$ & $f^*$ \\
\hline \hline
number of missingness patterns & $G$ &\\
number of dimensions & $p$ &\\
number of observations & $n$ & \\
number of projections & $N$ & \\
number of permutations & $L$ & \\
\hline
\end{tabular}
\end{center}
\caption{\textbf{Notation}: Summary of the notation used throughout the paper, with (``partial'') and without (``full'') considering the missing values.}
\label{Notationtable}
\end{table}
\section{Testing Framework}\label{scoredefsec}
In this section, we formulate the specific null and alternative hypotheses for testing MCAR considered by the PKLM-test. Recalling the notation of Section \ref{problemformulationsection}, a missingness pattern is defined by a vector of length $p$, consisting of ones and zeros, indicating which of the $p$ variables are missing in the given pattern.
We divide the $n$ observations into $G$ groups indexed by $g \in \{1,\ldots, G\}$, such that the observations of each group share the same missingness pattern. Each group $g$ contains $n_g$ observations, with $n_1 + \ldots + n_G = n$. Let $F^*_g$ denote the joint distribution of all $p$ variables (including the unobserved ones) in missingness pattern group $g$, so that the $n_g$ complete rows of group $g$ are i.i.d. draws from $F^*_g$. As stated in \cite{Li2015}, testing MCAR can be formulated as the hypothesis testing problem
\begin{align}\label{test1}
&H_0: F^*_1=F^*_2= \ldots= F^*_G \notag\\
&\text{v.s.} \\
&H_A: \exists \ i\neq j \in \{1,\ldots, G\} \ \text{s.t.} \ F^*_i \neq F^*_j. \notag
\end{align}
We want to emphasize the use of $F^*$ in the testing problem \eqref{test1}, indicating that these hypotheses involve distributions we cannot access. Thus, \eqref{test1} needs to be weakened. Borrowing the notation of \cite{Li2015}, for missingness pattern group $g$ we denote with $\boldsymbol o_g$ and $\boldsymbol m_g$ the subsets of $\{1,\ldots ,p\}$ indicating which variables are observed and which are missing, respectively. We denote the induced distributions by $F_{g,\boldsymbol o_g}$ and $F_{g,\boldsymbol m_g}$. For two groups $i$ and $j$, we denote by $\boldsymbol o_{ij} := \boldsymbol o_i \cap \boldsymbol o_j $ the shared observed variables of both groups. As mentioned in \cite{Li2015}, it is not possible to test \eqref{test1} reliably, since the distribution $F_{i,\boldsymbol m_i}$ of the unobserved variables is inaccessible. Thus, \citet{Li2015} consider the following hypothesis testing problem
\begin{align}
&H_0: F_{i, \boldsymbol o_{ij}} = F_{j, \boldsymbol o_{ij}} \text{ } \forall i \neq j \in \{1,\ldots, G\} \notag\\
&\text{v.s.} \label{test2}\\
&H_A: \exists \ i\neq j \in \{1,\ldots, G\} \text{ with } \boldsymbol o_{ij} \neq \emptyset \text{ s.t. } F_{i, \boldsymbol o_{ij}} \neq F_{j, \boldsymbol o_{ij}}.\notag
\end{align}
The null hypothesis $H_0$ of \eqref{test2} is implied by $H_0$ of \eqref{test1}, but not vice-versa. In other words, if we can reject the null hypothesis of \eqref{test2}, we can also reject the null hypothesis of \eqref{test1}. But if the null hypothesis of \eqref{test2} cannot be rejected, the distributions of the unobserved parts could still differ between groups, so that the null hypothesis of \eqref{test1} is false; in this case, the missingness mechanism would be MNAR. Thus, using the terminology of \citet{Rhoads2012}, \eqref{test2} tests the ``observed at random'' (OAR) hypothesis rather than the MCAR hypothesis. This distinction can be circumvented by assuming that the missingness mechanism is MAR, which is the approach usually taken, see \citet{Li2015}.
The comparison of all pairs of missingness groups in the hypothesis testing problem \eqref{test2} is, however, problematic, as laid out in the introduction. In the following, we circumvent this problem in a data-efficient way, considering a one v.s. all-others approach and employing \emph{random projections} in the variable space. Considering observations that are projected into a lower-dimensional space allows us to recover more complete cases. Let $\mathcal{A}$ be the set of all possible subsets of $\{1,\ldots,p \}$ with at most $p-1$ elements. For $A \in \mathcal{A}$ we denote by $\mathcal{N}_{A}$ the indices in $1,\ldots, n$ of observations that are observed with respect to projection $A$, i.e., observations whose projection onto $A$ is fully observed. These observations may belong to different missingness pattern groups $g \in \{1,\ldots , G\}$. As an example, $x = (\texttt{NA},1,\texttt{NA},2, 4)$ and $y = (\texttt{NA},\texttt{NA},\texttt{NA},1,3)$ are not complete and not in the same group; however, if we project them onto the dimensions $A = \{4,5\}$, then $x_A$ and $y_A$ are complete in this lower-dimensional space.
Additionally, to circumvent the problem of many groups with only a few members, we assign new \textit{grouping or class labels} to all observations in $\mathcal{N}_A$. To do so, we consider the set of projections $\mathcal{B}(A^c)$, which is defined as the power set of $\{1,\ldots, p\}\setminus A$. The set $\mathcal{B}(A^c)$ is never empty since $|A| \leq p-1$. For a given projection $B \in \mathcal{B}(A^c)$, we project all observations with index in $\mathcal{N}_A$ onto $B$ and form new collapsed missingness pattern groups $G(A,B)$, where $G(A,B)$ is the set of labels corresponding to distinct missingness patterns among the observations with index in $\mathcal{N}_A$ projected onto $B$. This is solely done to determine the grouping or class labels of the observations with index in $\mathcal{N}_A$. If two observations with index in $\mathcal{N}_A$ are in the same overall missingness pattern group $g \in \{1, \ldots, G\}$, they also end up in the same collapsed group. The other direction is not true, that is, the number of collapsed groups $|G(A,B)|$ is at most as large as the initial number of distinct groups $G$ among the observations with index in $\mathcal{N}_A$. Considering again $x = (\texttt{NA},1,\texttt{NA},2, 4)$ and $y = (\texttt{NA},\texttt{NA},\texttt{NA},1,3)$: if $B = \{1,2\}$, then $x$ and $y$ are not in the same missingness pattern group; however, if $B = \{1,3\}$, we assign the same class label to $x$ and $y$. Thus, given the projection $A$, we obtain a set of fully observed observations $\mathbf{X}_{\mathcal{N}_{A}, A}=\mathbf{X}_{\mathcal{N}_{A}, A}^*$, and given the projection $B$, we assign to them the $|G(A,B)|$ different class labels. Figure \ref{fig:IllustrationofPKLMapproach} provides a schematic illustration of the projections $A$ and $B$ on a more complicated example with four observations, each corresponding to a different pattern (i.e., $n=G=4$). According to $B= \{2\}$, the first observation in $\mathbf{X}_{\mathcal{N}_{A}, A}$ obtains one collapsed class label, whereas the second and third observation obtain another, common label, resulting in $|G(A,B)|=2$.
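The following base-\textsf{R} sketch (again with our own variable names) illustrates the recovery of $\mathcal{N}_A$ and of the collapsed labels for the small example above:
\begin{verbatim}
X <- rbind(x = c(NA, 1, NA, 2, 4),
           y = c(NA, NA, NA, 1, 3),
           z = c(1,  2,  3, NA, NA))   # a third, hypothetical observation
M <- 1 * is.na(X)

A   <- c(4, 5)                                    # feature projection
N_A <- which(rowSums(M[, A, drop = FALSE]) == 0)  # fully observed on A: x and y

B      <- c(1, 3)                                 # label projection
M_AB   <- M[N_A, B, drop = FALSE]
labels <- as.integer(factor(apply(M_AB, 1, paste, collapse = "")))
## with B = {1,3}, x and y receive the same collapsed label;
## with B = {1,2} they would receive different labels.
\end{verbatim}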
We are now equipped to formulate our one v.s. all-others approach with the hypothesis testing problem
\begin{align}
&H_0: F_{g,A} = \sum_{j\in G(A,B) \setminus g} \omega_j^g F_{j,A} \notag\\
&\forall g \in G(A,B), \forall B \in \mathcal{B}(A^c), \forall A \in \mathcal{A}\notag\\
&\text{v.s.} \label{test3}\\
&H_A: F_{g,A} \neq \sum_{j\in G(A,B)\setminus g} \omega_j^g F_{j,A}\notag\\
& \text{for at least one} \ g \in G(A,B), B \in \mathcal{B}(A^c), A \in \mathcal{A}, \notag
\end{align}
where $F_{g,A}$ is the joint distribution of the observations of class $g$ with index in $\mathcal{N}_A$ and the groups $j\in G(A,B)$ are jointly determined by $A$ and $B$. Thus, we compare the distribution of the observed part with respect to $A$ of one group $g$ with the mixture of the observed parts of the rest of the groups. The weights $\omega_j^g$ are non-negative, sum to $1$, and are proportional to the respective fraction of observations in class $j$.
\begin{figure}
\caption{Illustration of the projections $A$ and $B$ in an example with $n=4$ and $p=5$. In a first step, a projection $A = \{3,4,5\}$ is drawn and the complete cases $\mathcal{N}_A$ with respect to $A$ are recovered; in a second step, a projection $B$ of the remaining variables (here $B=\{2\}$) determines the collapsed class labels.}
\label{fig:IllustrationofPKLMapproach}
\end{figure}
\begin{example}
\textit{To give some intuition about the hypothesis testing problem \eqref{test3}, we relate it to the hypothesis testing problem \eqref{test2} with the help of the example of Figure \ref{fig:IllustrationofPKLMapproach}. In this example, each observation $i=1,\ldots, 4$ has a different pattern and can thus be seen as a draw from a distribution $F^*_i$. We first assume that the null hypothesis of \eqref{test3} holds and show, as an example, that this implies $F_{1,\boldsymbol o_{13}}=F_{3, \boldsymbol o_{13}}$. Since the null hypothesis of \eqref{test3} refers to all $A \in \ensuremath{{\mathcal A}}$, it also includes $A=\boldsymbol o_{13}=\{3,4,5\}$, which is what we consider in Figure \ref{fig:IllustrationofPKLMapproach}. While we are only interested in $F_{1,A}$ and $F_{3,A}$, taking $B=\{1,2\}$ the observations in $\mathbf{X}_{\mathcal{N}_A,A}$ come from the three distributions $F_{1,A}, F_{2,A}, F_{3,A}$. Due to \eqref{test3} it holds that \begin{align}\label{test3ex}
\begin{split}
F_{1,A} &= \omega_2^1 F_{2,A} + \omega_3^1 F_{3,A}, \\
F_{2,A} &= \omega_1^2 F_{1,A} + \omega_3^2 F_{3,A}, \\
F_{3,A} &= \omega_1^3 F_{1,A} + \omega_2^3 F_{2,A}.
\end{split}
\end{align}
Some algebra shows that the equation system \eqref{test3ex} is equivalent to $F_{1,A}=F_{2,A}=F_{3,A}$, which in particular gives $F_{1,A}=F_{3,A}$, as we wanted to show. While we took $i=1$ and $j=3$ as an example matching Figure \ref{fig:IllustrationofPKLMapproach}, we cycle through all $A \in \ensuremath{{\mathcal A}}$ in \eqref{test3}, and thus $A=\boldsymbol o_{ij}$ is reached for all patterns $i,j$ eventually. We now assume that the null hypothesis of \eqref{test2} is true and consider again $A=\{3,4,5\}$ as an example. Since we only look at the fully observed observations in $\mathcal{N}_A$ in \eqref{test3}, i.e., we leave out the fourth point, we again deal with the three distributions $F_{1,A}$, $F_{2,A}$, $F_{3,A}$. Moreover, by construction, $A \subset \boldsymbol o_{12}$ and $A \subset \boldsymbol o_{13}$ (even $A = \boldsymbol o_{13}$ in this case). Thus, $F_{1,\boldsymbol o_{12}}=F_{2,\boldsymbol o_{12}}$ and $F_{1,\boldsymbol o_{13}}=F_{3,\boldsymbol o_{13}}$, implied by the null hypothesis of \eqref{test2}, yield $F_{1,A}=F_{2,A}=F_{3,A}$, which implies \eqref{test3ex}. Again, this might seem like a special case, but since, by definition, \eqref{test3} only considers the distributions $F_{i,A}$ and $F_{j,A}$ of points fully observed on $A$, it will always hold that $A \subset \boldsymbol o_{ij}$. }
\end{example}
We make note of an abuse of notation in \eqref{test3}, as the group $g$ in $F_{g,A}$ only corresponds to the same index of $F_g$ in \eqref{test2}, if $B=A^c$, as can be seen in the example of Figure \ref{fig:IllustrationofPKLMapproach}: If $B=A^c$, the three observations in $\mathbf{X}_{\mathcal{N}_A,A}$ are drawn from $F_{1,A}, F_{2,A}$ and $F_{3,A}$ respectively. However, if $B=\{2\}$, then observations two and three are now assumed to be drawn from a single distribution, which corresponds to a mixture of $F_{2,A}$ and $F_{3,A}$.
In short, the null hypothesis of \eqref{test3} implies the null hypothesis of \eqref{test2} because for $A=\boldsymbol o_{ij}$, observations coming from $F_{i,A}$ and $F_{j,A}$ are contained in $\mathbf{X}_{\mathcal{N}_A,A}$. Vice-versa, the null hypothesis of \eqref{test2} implies the null hypothesis of \eqref{test3} because $A$ is nested in $\boldsymbol o_{ij}$ for all $F_i$ and $F_j$ considered on $A$. This actually sketches the proof of the following result:
\begin{restatable}{proposition}{propequivalence}
\label{equalitythmlabel}
Hypothesis testing problem \eqref{test3} is equivalent to \eqref{test2}.
\end{restatable}
Tackling hypothesis testing problem \eqref{test3} directly would be rather inefficient, since we might test the same hypothesis many times when cycling through all $A \in \mathcal{A}$ and $B \in \mathcal{B}(A^c)$. However, the idea is that $A$ and $B$ will only be random draws from $\mathcal{A}$ and $\mathcal{B}(A^c)$. This is discussed in the next section.
\section{MCAR test Through Classification}\label{pracaspects}
In this section we introduce the classification-based statistic of our test and detail the implementation of our permutation approach, permuting the rows of the missingness matrix $\mathbf{M}$, to obtain a valid test.
\subsection{Test Statistic $U$}
Let us fix a projection $A \in \mathcal{A}$ and corresponding projection $B \in \mathcal{B}(A^c)$.
We denote the induced collapsed class labels based on projections $A$ and $B$ by $Y^{(A,B)}$, by $X_A$ the projection of the random vector $X$ on $A$ and correspondingly by $x_A$ the projection on $A$ of observation $x$ in $\mathbf{X}_{\mathcal{N}_{A}, A}$. Furthermore, we define for each $g\in G(A,B)$ and $x$ in $\mathbf{X}_{\mathcal{N}_{A}, A}$ the following quantities:
\begin{align*}
p^{(A,B)}_g(x) &:= P(Y^{(A,B)}=g \mid X_A=x_A), \\
f^{(A,B)}_g(x) &:= P(x_A \mid Y^{(A,B)}=g),\\
\pi^{(A,B)}_g & := P(Y^{(A,B)}=g).
\end{align*}
Let us fix $g\in G(A,B)$ as well. We reformulate the hypothesis testing problem (\ref{test3}):
\begin{align}
H_{0,g}^{(A,B)}: &f^{(A,B)}_g = \frac{1}{1-\pi^{(A,B)}_g} \sum_{j\in\{1,\ldots, G(A,B)\}\setminus g} \pi^{(A,B)}_j f^{(A,B)}_j\notag\\
&\text{v.s.} \label{test4}\\
H_{1,g}^{(A,B)}: &f^{(A,B)}_g \neq \frac{1}{1-\pi^{(A,B)}_g} \sum_{j\in\{1,\ldots, G(A,B)\}\setminus g} \pi^{(A,B)}_j f^{(A,B)}_j. \notag
\end{align}
Let $S_{f_g^{(A,B)}} \subset \mathcal{N}_A$ denote the indices of observations in $\mathbf{X}_{\mathcal{N}_{A}, A}$ that belong to class~$g$. For each missingness pattern~$g$, we now define the following statistic in analogy to \citet{Cal2020},
\begin{align}
U^{(A,B)}_g := \frac{1}{|S_{f_g^{(A,B)}} |}\sum_{i\in S_{f_g^{(A,B)}}} \left(\log\frac{p^{(A,B)}_g(x_{i})}{1-p^{(A,B)}_g(x_{i})} - \log \frac{\pi^{(A,B)}_g}{1-\pi^{(A,B)}_g}\right).\label{Ustat}
\end{align}
This statistic is motivated by the following claim:
\begin{restatable}{lemma}{densityratiothm}
\label{densityratiothmlabel}
The logarithm of the density ratio for testing (\ref{test4}) is given by $U^{(A,B)}_g$.
\end{restatable}
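For intuition, the statement follows from a short calculation based on Bayes' rule. Suppressing the superscript $(A,B)$ and using $p_g(x)=\pi_g f_g(x)/\sum_{j} \pi_j f_j(x)$, so that $1-p_g(x)=\sum_{j\neq g}\pi_j f_j(x)/\sum_{j}\pi_j f_j(x)$, each summand in \eqref{Ustat} equals
\begin{align*}
\log\frac{f_g(x_i)}{\frac{1}{1-\pi_g}\sum_{j\neq g}\pi_j f_j(x_i)}
= \log\frac{p_g(x_i)}{1-p_g(x_i)} - \log\frac{\pi_g}{1-\pi_g},
\end{align*}
that is, the log-likelihood ratio of \eqref{test4} evaluated at $x_i$.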
The main motivation for the form of this test-statistic is that one can use the same arguments as in \citet[Proposition 1]{Cal2020} to show that a test based on $U^{(A,B)}_g$ will have the highest power among all tests for \eqref{test4}, according to the Neyman-Pearson Lemma. In addition, the test statistic converges to the Kullback-Leibler Divergence between $f_g^{(A,B)}$ and the mixture of the other densities, motivating the name of our MCAR test. A high value of KL-Divergence indicates that the distributions of two samples deviate strongly from each other.
\begin{restatable}{lemma}{KLlemmalabel}
\label{KLlemmalabel}
$U_g^{(A,B)}$ converges in probability to the Kullback-Leibler Divergence between $f_g^{(A,B)}$ and the mixture of the other densities:
\begin{align*}
U_g^{(A,B)} \rightarrow \ensuremath{{\mathbb E}}_{f_g}\left[ \log \frac{f^{(A,B)}_g(X)(1-\pi^{(A,B)}_g) }{ \sum_{j\in G(A,B)\setminus g} \pi^{(A,B)}_j f^{(A,B)}_j(X)}\right],
\end{align*} as $n_g$ and $\sum_{j\in \{1,\ldots,G\}\setminus g} n_j\rightarrow \infty$ and $n_g/n \rightarrow \pi^{(A,B)}_g \in (0,1)$.
\end{restatable}
Since the statistic $U^{(A,B)}_g$ is evaluated only on observations $x$ with index in $S_{f_g^{(A,B)}}$, it holds that $f^{(A,B)}_g(x)= f^{*(A,B)}_g(x)$ and $p^{(A,B)}_g(x) = p^{*(A,B)}_g(x)$ for these observations. This means that the projected complete and incomplete distributions coincide on the projected complete samples. Thus, we are indeed asymptotically measuring the Kullback-Leibler Divergence between $f^{*(A,B)}_g$ and the mixture of the other densities.
Since there might be only very few observations for a single class $g$, we symmetrize the KL-Divergence. That is, we use the samples of all classes to evaluate the KL-Divergence and not only the samples of class $g$. Let $S_{f_{g^{c(A,B)}}} \subset \mathcal{N}_{A}$ denote the indices of observations in $\mathbf{X}_{\mathcal{N}_{A}, A}$ that belong to one of the other classes $G(A, B)\setminus g$. For each missingness pattern $g$, we will use, in the following, the difference between two of the above statistics, namely
\begin{align}\label{symm}
\begin{split}
U^{(A,B)}_g - U^{(A,B)}_{g^c} &= \frac{1}{| S_{f_g^{(A,B)}} |}\sum_{i\in S_{f_g^{(A,B)}}} \log\frac{p^{(A,B)}_g(x_{i})}{1-p^{(A,B)}_g(x_{i})} \\
&- \frac{1}{| S_{f_{g^{c(A,B)}}} |}\sum_{i\in S_{f_{g^{c(A,B)}}}} \log\frac{p^{(A,B)}_g(x_{i})}{1-p^{(A,B)}_g(x_{i})},
\end{split}
\end{align} where the terms including the class probabilities $\pi_g^{(A,B)}$ cancel out. This difference converges to the symmetrized KL-Divergence between $f^{(A,B)}_g$ and the mixture of the densities of the remaining classes, and is more sample efficient than only using $U^{(A,B)}_g$. The test statistic for fixed $(A,B)$ is then given by
\begin{align*}
U^{(A,B)} := \sum_{g=1}^{G(A,B)} (U^{(A,B)}_g - U^{(A,B)}_{g^c}),
\end{align*}
and, for a distribution $\kappa$ on $\mathcal{A}$ and a distribution $\kappa(A^c)$ on $\mathcal{B}(A^c)$ (our default choice is described below), the final test statistic is defined as
\begin{align}
U := \ensuremath{{\mathbb E}}_{A \sim \kappa, B\sim \kappa(A^c)} [U^{(A,B)}].\label{testtatU}
\end{align}
\subsection{Practical Estimation of $U$}
We estimate $p_g^{(A,B)}$ with a multiclass classifier, yielding $\hat{p}_g^{(A,B)}$. Plugging this quantity into \eqref{symm} yields $\hat{U}_g^{(A,B)}-\hat{U}_{g^c}^{(A,B)}$. We then estimate $U^{(A,B)}$ by
$$
\hat{U}^{(A,B)} := \sum_{g=1}^{G(A,B)} (\hat{U}^{(A,B)}_g - \hat{U}^{(A,B)}_{g^c}).
$$
Finally, we estimate $U$ by
\begin{equation}
\hat{U}:= \frac{1}{N}\sum_{i=1}^{N} \hat{U}^{(A_i,B_i)}, \label{finalstat}
\end{equation}
where $N$ is the number of drawn pairs of projections $(A_i,B_i)$, $i=1,\ldots,N$, with $A_i$ drawn from $\mathcal{A}$ according to the distribution $\kappa$ and $B_i$ from $\mathcal{B}(A_i^c)$ according to the distribution $\kappa(A_i^c)$.
Our chosen multiclass classifier is Random Forest \citep{Breiman, Breiman2001}, more specifically, the probability forest of \citet{probabilityforests}. That is, for each of the $N$ projections, we fit a Random Forest with a specified number of trees, a parameter called $\texttt{num.trees.per.proj}$.
Thus, for each tree (or group of trees), a random subset of variables and labels is chosen, based on which the test statistic is computed. In each tree, we set $\texttt{mtry}$ to the full dimension of the projection so as to avoid an additional subsampling effect. This aligns naturally with the construction of Random Forest, as the overall approach can be seen as one aggregated Random Forest in which each tree or group of trees is restricted to a random subset of variables. Finally, we use the OOB samples to predict $\hat{p}^{(A,B)}_g$.
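The following sketch illustrates this estimation step for a single pair $(A,B)$ on simulated toy data, using the \texttt{ranger} implementation of probability forests. The object names are ours, and we assume that the fitted object stores the OOB class probabilities in \texttt{\$predictions} with columns ordered as the factor levels:
\begin{verbatim}
library(ranger)
set.seed(1)

## toy data: 200 rows fully observed on the projection A (3 variables),
## with two collapsed class labels differing in the mean of variable 1
X_A    <- matrix(rnorm(200 * 3), ncol = 3)
labels <- factor(rep(1:2, each = 100))
X_A[labels == "2", 1] <- X_A[labels == "2", 1] + 1

dat <- data.frame(y = labels, X_A)
fit <- ranger(y ~ ., data = dat,
              num.trees     = 200,         # num.trees.per.proj
              mtry          = ncol(X_A),   # full dimension of the projection
              min.node.size = 10,
              probability   = TRUE)

p_oob <- fit$predictions                   # OOB class probabilities (assumed n x 2)
eps   <- 1e-9                              # guards against log(0)

U_AB <- 0
for (k in seq_along(levels(labels))) {
  lg   <- log((p_oob[, k] + eps) / (1 - p_oob[, k] + eps))  # OOB log-odds of class k
  in_g <- labels == levels(labels)[k]
  U_AB <- U_AB + mean(lg[in_g]) - mean(lg[!in_g])           # U_g - U_{g^c}
}
U_AB                                       # estimate of U^(A,B) for this pair
\end{verbatim}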
The question remains how to sample the pairs $(A_1, B_1), \ldots, (A_N, B_N)$ at random. Our chosen approach is quite simple: we first randomly sample a number of dimensions $r_1$ by drawing uniformly from $\{1,\ldots,p-1 \}$. We then draw $r_1$ values without replacement from $\{1,\ldots,p \}$ to obtain $A$. Similarly, we randomly draw a value $r_2$ uniformly from $\{1,\ldots,p - r_1 \}$ and then draw $r_2$ values without replacement from $\{1,\ldots,p \}\setminus A$ to obtain $B$. We then consider $\mathbf{M}_{\mathcal{N}_{A}, B}$, i.e., the patterns of the observations fully observed on $A$, projected to $B$, and build the labels $Y^{(A,B)}$ based on the patterns in this matrix. This simple approach is used as a default, but one could also employ a more data-adaptive subsampling. In our algorithm, we may additionally restrict the number of collapsed classes by choosing $B$, given $A$, accordingly. The parameter indicating the maximal number of collapsed classes allowed is \texttt{size.resp.set}. If it is set to $2$, the multi-class problem reduces to a two-class problem. In Algorithm \ref{algo1} we provide the pseudo-code for the estimation of $\hat{U}^{(A,B)}$.
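A minimal base-\textsf{R} sketch of this default sampling scheme (written to avoid the usual \texttt{sample()} pitfall when the complement of $A$ has a single element) reads:
\begin{verbatim}
set.seed(2)
p  <- 10
r1 <- sample.int(p - 1, 1)              # number of variables in A
A  <- sample(1:p, r1)                   # A, drawn without replacement
r2 <- sample.int(p - r1, 1)             # number of variables in B
comp <- setdiff(1:p, A)                 # complement of A, never empty
B  <- comp[sample.int(length(comp), r2)]
\end{verbatim}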
\begin{algorithm}[H]
\normalsize
\textbf{Inputs}: incomplete dataset $\mathbf{X}$, missingness indicator $\mathbf{M}$, projections $A$, $B$ \\
\textbf{Result}: $\hat{U}^{(A,B)}$\\
\textbf{Hyper-parameters}: number of trees per projection \texttt{num.trees.per.proj}, standard parameters of the Probability Forests, maximal number of collapsed classes \texttt{size.resp.set}\;
- Recover the complete cases $\mathcal{N}_{A}$ with respect to $A$\;
- Generate the $G(A,B)$ collapsed class labels $Y^{(A,B)}$ from $\mathbf{M}_{\mathcal{N}_{A}, B}$\;
- Fit a multi-class probability forest with $\texttt{num.trees.per.proj}$ trees and $\texttt{mtry}$ full\;
\For{$g=1,\ldots, G(A,B)$}{
- Estimate $\hat{p}_g^{(A,B)}$ with the fitted forest above using out-of-bag probabilities\;
- Return the log-likelihood contribution $\hat{U}^{(A,B)}_g-\hat{U}^{(A,B)}_{g^c}$ for class $g$\;
}
- Average the log-likelihood ratio contributions $\hat{U}^{(A,B)}_g-\hat{U}^{(A,B)}_{g^c}$ from the $G(A,B)$ collapsed classes $g$ to get the statistic $\hat{U}^{(A,B)}$\;
\caption{$\text{Uhat}(\mathbf{X}, \mathbf{M}, A, B)$}
\label{algo1}
\end{algorithm}
To ensure that the level is kept by a test based on the statistic $\hat{U}$ for any choice of $\kappa$ and $\kappa(A^c)$, we use a permutation approach, as detailed next.
\subsection{Permutation Test}
To ensure the correct level, we follow a permutation approach. Informally speaking, the permutation approach works in this context if the testing procedure can be replicated in exactly the same way on randomly permuted class labels. This is not completely trivial here, as the labels are defined in each projection via the missingness matrix $\mathbf{M}$. It can be shown numerically that permuting the labels at the level of each projection does not preserve the level, as this ignores the correspondence between the projections across permutations.
The key to the correct permutation approach is to permute the rows of $\mathbf{M}$. That is, for $L$ permutations $\sigma_{\ell}$, $\ell=1,\ldots,L$, we obtain $L$ matrices $\mathbf{M}_{\sigma_{1}}, \ldots, \mathbf{M}_{\sigma_{L}}$ with only the rows permuted. Then we proceed as above: We sample $A \sim \kappa$, $B \sim \kappa(A^c)$ and for each permutation of rows $\sigma_{\ell}$, $\ell=1,\ldots,L$, we calculate $U_{g, \sigma_{\ell}}^{(A,B)} - U_{g^c, \sigma_{\ell}}^{(A,B)}$ as in \eqref{symm}. Using $\hat{p}^{(A,B)}_g$ instead of $p^{(A,B)}_g$ this results in $\hat{U}_{g, \sigma_{\ell}}^{(A,B)}$ and in the statistic
\begin{align*}
\hat{U}^{(A,B)}_{\sigma_{\ell}} := \sum_{g=1}^{G(A,B)} \hat{U}^{(A,B)}_{g, \sigma_{\ell}}-\hat{U}^{(A,B)}_{g^c, \sigma_{\ell}}.
\end{align*}
We note that we do not need to refit the forest for this permutation approach to work. Instead, we can directly use $\hat{p}^{(A,B)}_g$ from the original Random Forest that we fitted on the original $\mathbf{M}$.
Finally, we approximate the distribution of the test statistic under the null by calculating, for $\ell=1,\ldots, L$,
\begin{equation}\label{finalestimatepermuted}
\hat{U}_{\sigma_{\ell}}:= \frac{1}{N}\sum_{j=1}^{N} \hat{U}^{(A_j,B_j)}_{\sigma_{\ell}}.
\end{equation}
The $p$-value of the test is then obtained as usual by
\begin{align}\label{p-val}
Z:=\frac{\sum_{\ell=1}^L \ensuremath{{\mathds{1}}}\{ \hat{U}_{\sigma_{\ell}} \geq \hat{U} \} +1 }{L+1}.
\end{align}
Then it follows from standard theory on permutation tests that $Z$ is a valid $p$-value:
\begin{restatable}{proposition}{pvalprop}
\label{pvallabel}
Under $H_0$ in \eqref{test1}, and $Z$ as defined in \eqref{p-val}, it holds for all $z \in [0,1]$ that
\begin{equation}\label{validpval}
\ensuremath{{\mathbb P}}(Z \leq z) \leq z.
\end{equation}
\end{restatable}
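For illustration, a minimal sketch of the computation of \eqref{p-val} from the observed and permuted statistics, with hypothetical numerical values:
\begin{verbatim}
set.seed(3)
U_hat  <- 1.3                                   # hypothetical observed statistic
U_perm <- rnorm(30, mean = 0, sd = 0.5)         # hypothetical permuted statistics
L      <- length(U_perm)
Z      <- (sum(U_perm >= U_hat) + 1) / (L + 1)  # valid p-value by construction
Z
\end{verbatim}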
\begin{algorithm}[htb]
\normalsize
\textbf{Inputs}: incomplete dataset $\mathbf{X}$ \\
\textbf{Result}: $p$-value\\
\textbf{Hyper-parameters}: number of pairs of projections $N$, number of permutations $L$, number of trees per projection \texttt{num.trees.per.proj}, standard parameters of the Probability Forests, maximal number of collapsed classes \texttt{size.resp.set}\;
- Randomly permute the rows of $\mathbf{M}$ $L$ times to obtain $\mathbf{M}_{\sigma_1}, \ldots, \mathbf{M}_{\sigma_L}$\;
\For{$j=1,\ldots,N$}{
- Sample a pair of projections $(A_j,B_j)$ hierarchically according to $A_j \sim \kappa$ and $B_j \sim \kappa(A_j)$\;
- Calculate $\hat{U}^{(A_j,B_j)}=\text{Uhat}(\mathbf{X}, \mathbf{M}, A_j, B_j)$\;
\For{$\ell=1,\ldots,L$}{
- Calculate $\hat{U}_{\sigma_{\ell}}^{(A_j,B_j)}=\text{Uhat}(\mathbf{X}, \mathbf{M}_{\sigma_{\ell}}, A_j, B_j)$\;
}
}
- Average the statistics $\hat{U}^{(A_j,B_j)}$, $\hat{U}_{\sigma_{\ell}}^{(A_j,B_j)}$ over the pairs of projections $(A_j,B_j)$ to get the final statistics $\hat{U}$ and $\hat{U}_{\sigma_{\ell}}$, $\ell=1,\ldots,L$ \;
- Obtain the $p$-value with \eqref{p-val}\;
\caption{PKLMtest($\mathbf{X}$)}
\label{Implementationdetails}
\end{algorithm}
Algorithm \ref{Implementationdetails} summarizes the testing procedure.
\section{Empirical Validation}\label{empresults_1}
In this section, we empirically showcase the power of our test in comparison to recent competitors on both simulated and real data. The simulation setting is set up along the lines of \citet{Jamshidian2010} and \citet{Li2015} with a common MAR mechanism. For the real datasets we also add a random MAR generation through the function \texttt{ampute} of the \textsf{R}-package \texttt{mice}, see e.g., \citet{amputepaper}.
As we did throughout the paper, we refer to our test as ``PKLM'', the test of \citet{Li2015} as ``Q'', the test of \citet{littletest} as ``Little'' and finally the one of \citet{Jamshidian2010} as ``JJ''. The Little-test is computed with the \textsf{R}-package \texttt{naniar} \citep{naniar}, while the JJ-test uses the code of the \textsf{R}-package \texttt{MissMech} \citep{missmech}. Finally, the code for the Q-test was kindly provided to us by the authors.
\subsection{Simulated Data}\label{sec: simulations}
We vary the sample size $n$, the number of dimensions $p$, and the fraction of complete observations, which we denote by $r$. Cases $1-8$ describe the following data distributions, similarly to \cite{Li2015} and \cite{Jamshidian2010}. Throughout, $I_p$ is a covariance matrix with diagonal elements $1$ and off-diagonal elements $0$, while $\Sigma$ is a covariance matrix with diagonal elements $1$ and off-diagonal elements $0.7$:
\begin{enumerate}
\item A standard multivariate normal distribution with mean $0$ and covariance $I_p$,
\item a correlated multivariate normal distribution with mean $0$ and covariance $\Sigma$,
\item a multivariate $t$-distribution with mean $0$, covariance $I_p$ and degree of freedom $4$,
\item a correlated multivariate $t$-distribution with mean $0$, covariance $\Sigma$ and degree of freedom $4$,
\item a multivariate uniform distribution which has independent uniform$(0, 1)$ marginal distributions,
\item a correlated multivariate uniform distribution obtained by multiplying $\Sigma^{1/2}$ to the multivariate uniform distribution in 5,
\item a multivariate distribution obtained by generating
$W = Z + 0.1Z^3$, where $Z$ is from the standard multivariate normal distribution,
\item a multivariate Weibull distribution which has independent Weibull marginal distributions, each with scale parameter $1$ and shape parameter $2$.
\end{enumerate}
The above implements the fully observed $\mathbf{X}^*$. To compute the type-I error, we simulate the MCAR mechanism where each value in the $p$ columns of the missingness matrix $\mathbf{M}$ has probability $1-r^{1/p}$ of being one and is otherwise zero. To compute the power, we simulate the MAR mechanism following the description in \citet{Li2015}: We generate $\mathbf{M}$ such that the first column consists only of zeros, so that the first variable is fully observed. Each value in the remaining $p-1$ columns has probability $1-r^{1/(p-1)}$ of being one, while the rest is zero. This results, on average, in a fraction $r$ of rows of $\mathbf{M}$ containing only zeros, and thus in a fraction $r$ of fully observed rows of $\mathbf{X}$. Next, we sort the rows of $\mathbf{M}$ into two groups: those that are fully observed (complete group) and those that have at least one missing value (missing group). So far, the generation is still MCAR. Now, however, for each row $i=1,\ldots,n$ we compare $\mathbf{X}^*_{i, 1}$ with the mean of $\mathbf{X}^*_{\bullet, 1}$, denoted by $\bar{X}_{1}$. If $\mathbf{X}^*_{i, 1} < \bar{X}_{1}$, row $i$ is paired with a row of $\mathbf{M}$ from the complete group with probability $1/6$ and with a row from the missing group with probability $5/6$; in this case it is thus five times more likely that the row ends up in the missing group. If $\mathbf{X}^*_{i, 1} \geq \bar{X}_{1}$, the situation reverses, and row $i$ is five times more likely to be paired with a row of $\mathbf{M}$ from the complete group. Assigning the rows of $\mathbf{X}^*$ successively to the rows of $\mathbf{M}$ in this way results in $\mathbf{X}$ with MAR missingness.
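A base-\textsf{R} sketch of these two mechanisms, shown for case $1$, could look as follows; the successive pairing is our own minimal implementation of the description above:
\begin{verbatim}
set.seed(4)
n <- 500; p <- 10; r <- 0.65
X_star <- matrix(rnorm(n * p), n, p)       # case 1: independent standard normal

## MCAR: every entry of M is 1 (missing) with probability 1 - r^(1/p)
M_mcar <- matrix(rbinom(n * p, 1, 1 - r^(1 / p)), n, p)

## MAR: first column of M fully observed, remaining entries missing with
## probability 1 - r^(1/(p-1)), so a fraction r of rows is fully observed
M <- cbind(0, matrix(rbinom(n * (p - 1), 1, 1 - r^(1 / (p - 1))), n, p - 1))
complete_rows <- which(rowSums(M) == 0)
missing_rows  <- setdiff(1:n, complete_rows)

## successively pair each row of X* with a row of M: below the mean of the
## first column, the missing group is 5 times more likely (and vice versa)
xbar1    <- mean(X_star[, 1])
assign_M <- integer(n)
for (i in 1:n) {
  p_miss <- if (X_star[i, 1] < xbar1) 5 / 6 else 1 / 6
  go_missing <- (runif(1) < p_miss && length(missing_rows) > 0) ||
                 length(complete_rows) == 0
  if (go_missing) {
    assign_M[i]   <- missing_rows[1];  missing_rows  <- missing_rows[-1]
  } else {
    assign_M[i]   <- complete_rows[1]; complete_rows <- complete_rows[-1]
  }
}
X <- X_star
X[M[assign_M, ] == 1] <- NA                # the observed MAR dataset
\end{verbatim}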
Each experiment was rerun $\texttt{nsim}=300$ times to compute type-I error and power. We used the following default hyperparameter setting for the computation of our PKLM-test: number of permutations $\texttt{nrep} = 30$, number of projections $\texttt{num.proj} = 100$, minimal node size in a tree $\texttt{min.node.size} = 10$, number of fitted trees per projection $\texttt{num.trees.per.proj} = 200$ and maximal number of collapsed classes allowed in a projection $\texttt{size.resp.set} = 2$. We note that the choice of these hyperparameters is pleasantly simple: apart from $\texttt{size.resp.set}$, higher values are better. Thus, as with RF in general, it is mostly a question of computational resources how large the values can be chosen. This is especially true for the number of trees per forest, which should be relatively high in order to minimize additional randomness. We found $\texttt{num.trees.per.proj} = 200$ to be a good compromise between speed and accuracy. As the level is guaranteed for any number of permutations, and we wanted a choice of hyperparameters that works for $p=4$ as well as $p=40$, we chose a low number of permutations ($\texttt{nrep}=30$) but a relatively high number of projections ($\texttt{num.proj} = 100$). The only ``difficult'' parameter to set is $\texttt{size.resp.set}$, as there appears to be some loss in accuracy when the number of classes is larger than two. We found that $\texttt{size.resp.set}=2$, generating two classes, works well in a wide range of examples.
As mentioned throughout the paper, the Q-test could not be calculated for a large range of settings.\footnote{The largest number $p$ reported in the paper of \cite{Li2015} is $10$, while $r$ is at least $0.35$.} In particular, computation times were infeasible for the setting $p=10$ and $r=0.1$, and for any configuration with $p = 20$ or $p=40$. For the setting $n= 500$, $p=10$ and $r=0.1$, for instance, one test for case $2$ took around 20 minutes to finish, implying an approximate overall computation time of $500 \cdot 8 \cdot 2 \cdot 20 = 160{,}000$ minutes, or approximately 110 24-hour days. This is despite the fact that the \textsf{R}-code of the Q-test we received was well implemented. In the upcoming Tables \ref{tab3} and \ref{tab4} of results we always used the nominal level of $\alpha = 0.05$. We boldfaced the results in each row of the tables in the following manner: whenever the type-I error of a test is below or equal to $0.05$ and the test has the best power, it is boldfaced. If this is true for more than one test, they are all boldfaced. Additionally, we boldfaced all type-I errors that are below or equal to the nominal level $\alpha = 0.05$, to indicate which tests hold the level on average in the given settings.
In the simulation set-up of $n=200$ and $p=4$, the Q-test is very powerful while keeping the nominal level. The PKLM-test is rarely the most powerful here, but its power is often relatively close to the best. As an example, in case $2$ for $r=0.65$, the Q-test has a power of $1$ while the PKLM-test has a power of $0.93$, with both keeping the nominal level $\alpha=0.05$.
In the set-up of $n=500$ and $p=10$, the overall picture changes. The PKLM-test is the most powerful test in all but two of the $24$ cases, sometimes leaving the second-best test far behind. As an example, in case $3$ for $r=0.65$, the PKLM-test has a power of $0.85$ while the Q- and the Little-test exhibit a power of $0.26$ and $0.61$, respectively. While the Little- and the JJ-test often show inflated levels, this is never a problem for the valid PKLM-test.
In the simulation set-up of $n=500$ and $p=20$, the Little-test appears to be a strong competitor, but only until one considers its type-I error: though to a much lesser degree than the JJ-test, its type-I error is often far larger than the nominal level. Considering for instance case $4$, the power of the Little-test is even slightly smaller than its actual type-I error for $r=0.1$. In case $4$ with $r=0.35$, our test displays a power of $0.89$ and keeps the level, while the Little-test only has a power of $0.33$ despite a grossly inflated type-I error. All of these problems are worse for the JJ-test, which displays an inflated type-I error in almost all cases and simulation set-ups. A similar story plays out in the case $r=0.65$.
Finally, in the simulation set-up of $n=1000$ and $p=40$, the power of our test is again much better than that of all other tests. Interestingly, the PKLM-test tends to have higher power when the components of the distribution are not independent, such as in cases 2, 4, 6 and 8. For example, in case $1$ for $r=0.65$, the PKLM-test has a power of $0.2$, while for case $2$ it has a power of $0.95$. The main difference between these two cases is the strong positive correlation induced in case $2$. This pattern repeats: in all correlated examples and for both $r=0.65$ and $r=0.35$, the PKLM-test has a power nearing $1$, whereas in the independent versions, the power is closer to the type-I error. Thus, our test is able to use the dependencies in the data to its advantage, at least for $r=0.65$ and $r=0.35$, and can reach a very high power even for comparatively large $p$.
In summary, our test is very competitive even in small dimensions, where the Q-test is very powerful, and it leaves all other tests behind by a wide margin as soon as $p$ increases. The Q-test remains strong in these situations as well, but quickly becomes infeasible as either $p$ increases or the fraction of complete cases $r$ decreases. Crucially, only the PKLM-test and the Q-test consistently keep the nominal level over all experiments, with the Little- and JJ-test showing blatant inflation of the type-I error in many situations, and this even though checking the type-I error at a single level $\alpha$ ($0.05$ in this case) is far from sufficient to assess the validity of a $p$-value.
\begin{figure}
\caption{Example plot of cumulative distribution function values of the $p$-values under the null (MCAR) of the four different tests. The simulation set up is $n=500$, $p=10$, $r=0.65$ in case $5$, with $500$ repetitions. The red line is the $x=y$ line, while the blue lines show $100$ ecdfs of $500$ simulated uniform random variables.}
\label{fig:ecdf}
\end{figure}
As an illustration, we randomly chose one of the above experiments in which the Little-test kept the nominal level, namely the simulation set-up $n=500$, $p=10$, $r=0.65$ in case $5$. In Figure \ref{fig:ecdf} we plot the empirical cumulative distribution functions (ecdf) of $500$ $p$-values under the null (MCAR) for the four different tests. The red line is the $x=y$ line. In blue we plot $100$ ecdfs of a uniform$(0,1)$-distribution. As described in Equation \eqref{validpval}, a valid $p$-value has the property that the corresponding black ecdf does not lie above the region defined by the blue lines. As Proposition \ref{pvallabel} predicts, this is clearly the case for the PKLM-test. That its $p$-values appear rather discrete stems from the fact that we chose a low number of permutations ($\texttt{nrep}=30$). The Q-test sometimes overshoots the red line, though this appears to mostly stem from estimation error. In general, it is remarkable how closely the ecdfs of $p$-values from both the Q- and PKLM-test resemble the ecdf of a uniform sample. The JJ-test appears to consistently have $\ensuremath{{\mathbb P}}(Z \leq z) \geq z$. The Little-test, finally, appears to produce a valid $p$-value as long as only values $z < 0.5$ are considered; for $z\geq 0.5$, the ecdf clearly violates the requirement of a valid $p$-value. Without a theoretical guarantee, it is thus important not to just check the type-I error at $\alpha = 0.05$, but to also consider other levels, e.g., $\alpha=0.1$.
\begin{table}
\centering
\begin{center}
\begin{adjustbox}{totalheight=\textheight}
\begin{tabular}{rrrl|rrrr|rrrr}
\hline
& & & &\multicolumn{4}{c}{Power} & \multicolumn{4}{c}{Type-I Error}\\
n & p & r & case & PKLM & Q & Little & JJ & PKLM & Q & Little & JJ \\
\hline
200 & 4 & 0.65 & 1 & 0.73 & \textbf{0.98} & 0.98 & 0.12 & \textbf{0.03} & \textbf{0.03} & 0.06 & \textbf{0.04} \\
& & & 2 & \textbf{0.93} & 1.00 & 0.96 & 0.04 & \textbf{0.03} & 0.06 & 0.06 & \textbf{0.05} \\
& & & 3 & 0.81 & \textbf{0.94} & 0.92 & 0.05 & \textbf{0.03} & \textbf{0.02} & \textbf{0.04} & 0.08 \\
& & & 4 & 0.89 & \textbf{0.97} & 0.91 & 0.05 & \textbf{0.01} & \textbf{0.03} & \textbf{0.05} & \textbf{0.05} \\
& & & 5 & 0.79 & \textbf{1.00} & \textbf{1.00} & 0.19 & \textbf{0.03} & \textbf{0.04} & \textbf{0.04} & 0.06 \\
& & & 6 & 0.90 & \textbf{1.00} & 0.99 & 0.20 & \textbf{0.03} & \textbf{0.04} & \textbf{0.03} & 0.13 \\
& & & 7 & \textbf{0.80} & 0.93 & 0.95 & 0.04 & \textbf{0.04} & 0.06 & 0.09 & 0.08 \\
& & & 8 & 0.72 & \textbf{0.92} & 0.90 & 0.26 & \textbf{0.03} & \textbf{0.05} & \textbf{0.04} & 0.08 \\
\hline
200 & 4 & 0.35 & 1 & 0.79 & \textbf{0.98} & 0.97 & 0.04 & \textbf{0.03} & \textbf{0.04} & \textbf{0.04} & 0.13 \\
& & & 2 & 0.87 & \textbf{0.98} & 0.97 & 0.08 & \textbf{0.03} & \textbf{0.03} & \textbf{0.03} & 0.08 \\
& & & 3 & 0.82 & \textbf{0.97} & 0.90 & 0.16 & \textbf{0.03} & \textbf{0.03} & 0.06 & 0.12 \\
& & & 4 & 0.87 & \textbf{0.99} & 0.92 & 0.10 & \textbf{0.03} & \textbf{0.02} & 0.08 & 0.11 \\
& & & 5 & 0.79 & \textbf{0.99} & \textbf{0.99} & 0.10 & \textbf{0.04} & \textbf{0.05} & \textbf{0.05} & 0.08 \\
& & & 6 & 0.80 & \textbf{1.00} & 0.97 & 0.12 & \textbf{0.03} & \textbf{0.04} & 0.06 & 0.11 \\
& & & 7 & 0.79 & \textbf{0.98} & 0.92 & 0.09 & \textbf{0.03} & \textbf{0.05} & 0.07 & 0.06 \\
& & & 8 & 0.83 & \textbf{0.99} & 0.99 & 0.10 & \textbf{0.05} & \textbf{0.05} & 0.06 & \textbf{0.05} \\
\hline
200 & 4 & 0.10 & 1 & 0.30 & \textbf{0.40} & 0.26 & 0.20 & 0.06 & \textbf{0.03} & \textbf{0.05} & 0.22 \\
& & & 2 & \textbf{0.35} & 0.50 & 0.27 & 0.12 & \textbf{0.03} & 0.10 & \textbf{0.05} & 0.18 \\
& & & 3 & 0.25 & \textbf{0.29} & 0.18 & 0.21 & \textbf{0.04} & \textbf{0.01} & \textbf{0.04} & 0.24 \\
& & & 4 & 0.37 & \textbf{0.42} & 0.17 & 0.19 & \textbf{0.03} & \textbf{0.03} & \textbf{0.03} & 0.17 \\
& & & 5 & 0.27 & \textbf{0.51} & 0.33 & 0.26 & \textbf{0.05} & \textbf{0.02} & \textbf{0.05} & 0.20 \\
& & & 6 & 0.31 & \textbf{0.40} & 0.27 & 0.24 & \textbf{0.03} & \textbf{0.03} & \textbf{0.04} & 0.17 \\
& & & 7 & 0.26 & \textbf{0.42} & 0.22 & 0.20 & \textbf{0.04} & \textbf{0.04} & 0.09 & 0.31 \\
& & & 8 & 0.31 & \textbf{0.39} & 0.32 & 0.23 & \textbf{0.03} & \textbf{0.03} & \textbf{0.04} & 0.18 \\
\hline
500 & 10 & 0.65 & 1 & \textbf{0.93} & 0.89 & 0.88 & 0.09 & \textbf{0.05} & 0.06 & 0.06 & \textbf{0.05} \\
& & & 2 & \textbf{0.99} & 1.00 & 0.84 & 0.08 & \textbf{0.02} & 0.06 & \textbf{0.05} & \textbf{0.05} \\
& & & 3 & \textbf{0.85} & 0.26 & 0.61 & 0.12 & \textbf{0.02} & \textbf{0.05} & 0.18 & 0.10 \\
& & & 4 & \textbf{0.99} & 0.96 & 0.60 & 0.10 & \textbf{0.04} & 0.06 & 0.19 & 0.12 \\
& & & 5 & 0.89 & \textbf{0.98} & 0.96 & 0.16 & \textbf{0.04} & \textbf{0.05} & \textbf{0.03} & 0.10 \\
& & & 6 & \textbf{0.99} & 1.00 & 0.91 & 0.15 & \textbf{0.04} & 0.07 & \textbf{0.02} & 0.13 \\
& & & 7 & \textbf{0.90} & 0.61 & 0.68 & 0.09 & \textbf{0.02} & 0.07 & 0.12 & 0.07 \\
& & & 8 & \textbf{0.79} & 0.76 & 0.76 & 0.18 & \textbf{0.03} & \textbf{0.04} & \textbf{0.05} & 0.09 \\
\hline
500 & 10 & 0.35 & 1 & \textbf{0.89} & 0.74 & 0.66 & 0.07 & \textbf{0.02} & \textbf{0.02} & \textbf{0.02} & 0.08 \\
& & & 2 & \textbf{0.99} & 0.99 & 0.69 & 0.09 & \textbf{0.03} & 0.06 & \textbf{0.03} & 0.11 \\
& & & 3 & \textbf{0.88} & 0.33 & 0.51 & 0.14 & \textbf{0.04} & \textbf{0.05} & 0.18 & 0.11 \\
& & & 4 & \textbf{0.98} & 0.91 & 0.48 & 0.12 & \textbf{0.04} & 0.08 & 0.20 & 0.10 \\
& & & 5 & \textbf{0.91} & 0.92 & 0.83 & 0.12 & \textbf{0.04} & 0.06 & \textbf{0.04} & 0.12 \\
& & & 6 & \textbf{0.98} & 1.00 & 0.75 & 0.09 & \textbf{0.03} & 0.08 & \textbf{0.04} & 0.11 \\
& & & 7 & \textbf{0.89} & 0.46 & 0.52 & 0.05 & \textbf{0.03} & \textbf{0.03} & 0.08 & 0.11 \\
& & & 8 & \textbf{0.92} & 0.78 & 0.74 & 0.10 & \textbf{0.05} & 0.06 & 0.06 & 0.07 \\
\hline
500 & 10 & 0.10 & 1 & \textbf{0.31} & $-$ & 0.06 & 0.12 & \textbf{0.02} & $-$ & \textbf{0.03} & 0.10 \\
& & & 2 & \textbf{0.45} & $-$ & 0.07 & 0.12 & \textbf{0.03} & $-$ & \textbf{0.03} & 0.07 \\
& & & 3 & \textbf{0.34} & $-$ & 0.18 & 0.16 & \textbf{0.03} & $-$ & 0.19 & 0.14 \\
& & & 4 & \textbf{0.45} & $-$ & 0.20 & 0.16 & \textbf{0.02} & $-$ & 0.22 & 0.11 \\
& & & 5 & 0.33 & $-$ & \textbf{0.04} & 0.12 & 0.06 & $-$ & \textbf{0.02} & 0.14 \\
& & & 6 & \textbf{0.45} &$-$ & 0.03 & 0.08 & \textbf{0.05} & $-$ & \textbf{0.01} & 0.12 \\
& & & 7 & \textbf{0.34} & $-$ & 0.12 & 0.09 & \textbf{0.05} & $-$ & 0.09 & 0.15 \\
& & & 8 & \textbf{0.34} & $-$ & 0.04 & 0.16 & \textbf{0.03} & $-$ & \textbf{0.05} & 0.13 \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{Simulated power and type-I error of PKLM, Q, Little and JJ for $n=200$, $p=4$ and $n=500$, $p=10$. We use $r=0.65$, $0.35$ and $0.1$. Cases $1-8$ describe different data distributions. The experiments were repeated $300$ times and the parameter setting for PKLM described above was used. }
\label{tab3}
\end{table}
\begin{table}
\centering
\begin{adjustbox}{totalheight=\textheight}
\begin{tabular}{rrrl|rrrr|rrrr}
\hline
& & & & \multicolumn{4}{c}{Power} & \multicolumn{4}{c}{Type-I Error} \\
n & p & r & case & PKLM & Q & Little & JJ & PKLM & Q & Little & JJ \\
\hline
500 & 20 & 0.65 & 1 & \textbf{0.39} & $-$ & 0.36 & 0.06 & \textbf{0.02} & $-$ & \textbf{0.05} & 0.09 \\
& & & 2 & \textbf{0.91} & $-$ & 0.48 & 0.08 & \textbf{0.03} & $-$ & \textbf{0.05} & 0.10 \\
& & & 3 & \textbf{0.33} & $-$ & 0.49 & 0.20 & \textbf{0.03} & $-$ & 0.24 & 0.11 \\
& & & 4 & \textbf{0.90} & $-$ & 0.40 & 0.14 & \textbf{0.04} & $-$ & 0.22 & 0.11 \\
& & & 5 & 0.32 & $-$ & \textbf{0.64} & 0.14 & \textbf{0.04} & $-$ & \textbf{0.04} & 0.08 \\
& & & 6 & \textbf{0.93} & $-$ & 0.39 & 0.25 & \textbf{0.04} & $-$ & \textbf{0.01} & 0.09 \\
& & & 7 & \textbf{0.33} & $-$ & 0.37 & 0.07 & \textbf{0.03} & $-$ & 0.09 & 0.10 \\
& & & 8 & \textbf{0.23} & $-$ & 0.25 & 0.14 & \textbf{0.04} & $-$ & 0.06 & \textbf{0.03} \\
\hline
500 & 20 & 0.35 & 1 & \textbf{0.45} & $-$ & 0.22 & 0.08 & \textbf{0.03} & $-$& \textbf{0.04} & 0.09 \\
& & & 2 & \textbf{0.90} & $-$ & 0.22 & 0.09 & \textbf{0.03} & $-$ & \textbf{0.04} & 0.08 \\
& & & 3 & \textbf{0.43} & $-$ & 0.35 & 0.18 & \textbf{0.02} & $-$ & 0.34 & 0.12 \\
& & & 4 & \textbf{0.89} & $-$ & 0.33 & 0.20 & \textbf{0.03} & $-$ & 0.31 & 0.15 \\
& & & 5 & \textbf{0.46} & $-$ & 0.24 & 0.09 & \textbf{0.02} & $-$ & \textbf{0.02} & 0.12 \\
& & & 6 & \textbf{0.91} & $-$ & 0.14 & 0.14 & \textbf{0.02} & $-$ & \textbf{0.03} & 0.10 \\
& & & 7 & \textbf{0.41} & $-$ & 0.22 & 0.11 & \textbf{0.02} & $-$ & 0.11 & 0.10 \\
& & & 8 & \textbf{0.52} & $-$ & 0.18 & 0.08 & \textbf{0.03} & $-$ & \textbf{0.04} & 0.07 \\
\hline
500 & 20 & 0.10 & 1 & \textbf{0.13} & $-$ & 0.00 & 0.14 & \textbf{0.03} & $-$ & \textbf{0.00} & 0.10 \\
& & & 2 & \textbf{0.24} & $-$ & 0.01 & 0.14 & \textbf{0.04} & $-$ & \textbf{0.01} & 0.12 \\
& & & 3 & 0.08 & $-$ & 0.21 & 0.16 & 0.06 & $-$ & 0.22 & 0.10 \\
& & & 4 & \textbf{0.26} & $-$ & 0.27 & 0.08 & \textbf{0.04} & $-$ & 0.31 & 0.13 \\
& & & 5 & \textbf{0.12} & $-$ & 0.00 & 0.10 & \textbf{0.03} & $-$ & \textbf{0.00} & 0.19 \\
& & & 6 & \textbf{0.19} & $-$ & 0.00 & 0.11 & \textbf{0.05} & $-$ & \textbf{0.00} & 0.18 \\
& & & 7 & \textbf{0.07} & $-$ & 0.08 & 0.12 & \textbf{0.04} & $-$ & 0.07 & 0.12 \\
& & & 8 & \textbf{0.07} & $-$ & 0.02 & 0.11 & \textbf{0.04} & $-$ & \textbf{0.00} & 0.16 \\
\hline
1000 & 40 & 0.65 & 1 & \textbf{0.20} & $-$ & 0.00 & 0.09 & \textbf{0.05} & $-$ & \textbf{0.00} & 0.15 \\
& & & 2 & \textbf{0.95} & $-$ & 0.00 & 0.12 & \textbf{0.03} & $-$ & \textbf{0.00} & 0.14 \\
& & & 3 & \textbf{0.23} & $-$& 0.00 & 0.29 & \textbf{0.02} & $-$ & \textbf{0.00} & 0.17 \\
& & & 4 & \textbf{0.94} & $-$ & 0.00 & 0.26 & \textbf{0.05} & $-$ & \textbf{0.00} & 0.17 \\
& & & 5 & \textbf{0.16} & $-$ & 0.00 & 0.30 & \textbf{0.02} & $-$& \textbf{0.00} & 0.19 \\
& & & 6 & \textbf{0.97} & $-$ & 0.00 & 0.26 & \textbf{0.02} & $-$ & \textbf{0.00} & 0.19 \\
& & & 7 & \textbf{0.23} & $-$ & 0.00 & 0.11 & \textbf{0.02} & $-$ & \textbf{0.00} & 0.10 \\
& & & 8 & \textbf{0.13} &$-$ & 0.00 & 0.17 & \textbf{0.03} & $-$ & \textbf{0.00 }& 0.12 \\
\hline
1000 & 40 & 0.35 & 1 & \textbf{0.35} & $-$ & 0.00 & 0.12 & \textbf{0.02} & $-$ & \textbf{0.00} & 0.11 \\
& & & 2 & \textbf{0.97} & $-$ & 0.00 & 0.13 & \textbf{0.05} & $-$ & \textbf{0.00} & 0.10 \\
& & & 3 & \textbf{0.37} & $-$ & 0.00 & 0.30 & \textbf{0.03} & $-$ & \textbf{0.00} & 0.30 \\
& & & 4 & \textbf{0.96} & $-$ & 0.00 & 0.33 & \textbf{0.04} & $-$ & \textbf{0.00} & 0.27 \\
& & & 5 & \textbf{0.32} & $-$ & 0.00 & 0.14 & \textbf{0.04} & $-$ & \textbf{0.00} & 0.11 \\
& & & 6 & \textbf{0.98} & $-$ & 0.00 & 0.16 & \textbf{0.03} & $-$ & \textbf{0.00} & 0.10 \\
& & & 7 & \textbf{0.36} & $-$ & 0.00 & 0.11 & \textbf{0.02} & $-$ & \textbf{0.00} & 0.08 \\
& & & 8 & \textbf{0.30} & $-$ & 0.00 & 0.16 & \textbf{0.02} & $-$ & \textbf{0.00} & 0.10 \\
\hline
1000 & 40 & 0.10 & 1 & \textbf{0.08} & $-$ & 0.00 & 0.15 & \textbf{0.02} & $-$ & \textbf{0.00} & 0.12 \\
& & & 2 & \textbf{0.32} & $-$ & 0.00 & 0.12 & \textbf{0.02} & $-$ & \textbf{0.00} & 0.10 \\
& & & 3 & \textbf{0.06} & $-$ & 0.00 & 0.13 &\textbf{ 0.05} & $-$ & \textbf{0.00} & 0.20 \\
& & & 4 & \textbf{0.25} & $-$ & 0.00 & 0.25 & \textbf{0.03} & $-$ & \textbf{0.00} & 0.28 \\
& & & 5 & \textbf{0.08} & $-$ & 0.00 & 0.11 & \textbf{0.03} & $-$ & \textbf{0.00} & 0.09 \\
& & & 6 & \textbf{0.27} & $-$ & 0.00 & 0.09 & \textbf{0.04 }& $-$ & \textbf{0.00} & 0.11 \\
& & & 7 & \textbf{0.07} & $-$ & 0.00 & 0.16 & \textbf{0.03} & $-$ & \textbf{0.00} & 0.13 \\
& & & 8 & \textbf{0.07} & $-$ & 0.00 & 0.15 & \textbf{0.04} & $-$ & \textbf{0.00} & 0.08 \\
\hline
\end{tabular}
\end{adjustbox}
\caption{Simulated power and type-I error of PKLM, Q, Little and JJ for $n=500$, $p=20$ and $n=1000$, $p=40$. We use $r=0.65$, $0.35$ and $0.1$. Cases $1-8$ describe different data distributions. The experiments were repeated $300$ times and the parameter setting for PKLM described above was used. }
\label{tab4}
\end{table}
\subsection{Real Data}
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{lrr|rrrr|rrrr}
\hline
& && \multicolumn{4}{c}{Power} & \multicolumn{4}{c}{Type-I Error} \\
dataset & n & p & PKLM & Q & Little & JJ & PKLM & Q & Little & JJ \\
\hline
iris & 150 & 4 & 0.41 & \textbf{0.91} & 0.84 & 0.27 & \textbf{0.03} & \textbf{0.04} & \textbf{0.03} & 0.16 \\
blood.transfusion & 748 & 4 & 0.48 & 0.97 & \textbf{1.00} & \texttt{NA} & \textbf{0.01} & 0.06 & \textbf{0.04} & \texttt{NA} \\
airfoil & 1503 & 6 & \textbf{0.92} & 0.13 & 0.17 & 0.09 & \textbf{0.02} & \textbf{0.03} & 0.06 & 0.42 \\
seeds & 210 & 7 & 0.64 & \textbf{0.74} & 0.57 & 0.24 & \textbf{0.05} & \textbf{0.02} & \textbf{0.02} & 0.10 \\
yacht & 308 & 7 & 0.60 & 0.56 & \textbf{0.76} & 0.24 & \textbf{0.03} & 0.07 & \textbf{0.05} & 0.24 \\
yeast & 1484 & 8 & \textbf{0.82} & 0.52 & 0.15 & 0.14 & \textbf{0.05} & 0.06 & 0.23 & 0.85 \\
glass & 214 & 9 & 0.10 & 0.02 & \textbf{0.20} & 0.20 & \textbf{0.01} & \textbf{0.00} & \textbf{0.03} & 0.33 \\
concrete.compression & 1030 & 9 & 0.64 & 0.48 & \textbf{0.81} & 0.47 & \textbf{0.04} & \textbf{0.04} & \textbf{0.05} & 0.41 \\
wine.quality.red & 1599 & 11 & \textbf{0.81} & $-$ & 0.72 & 0.80 & \textbf{0.04} & $-$ & 0.15 & 0.52 \\
wine.quality.white & 4898 & 11 & \textbf{0.98} & $-$ & 0.96 & 0.87 & \textbf{0.04} & $-$ & 0.10 & 0.79 \\
planning.relax & 182 & 12 & \textbf{0.29} & $-$ & 0.20 & 0.14 & \textbf{0.00} & $-$ & \textbf{0.00} & \texttt{NA} \\
climate.model.crashes & 540 & 19 & \textbf{0.18} & $-$ & 0.22 & 0.47 & \textbf{0.00} & $-$& \textbf{0.00} & \texttt{NA} \\
ionosphere & 351 & 32 & \textbf{0.45} & $-$ & 0.97 & 0.18 & \textbf{0.00} & $-$ & 0.06 & \texttt{NA} \\
\hline
\end{tabular}
}
\caption{Simulated power and type-I error of PKLM, Q, Little and JJ for $13$ real datasets. We use $p_{miss}=0.3$. The experiments were repeated $300$ times and the parameter setting for PKLM described above was used. The \texttt{NA}s for some values of the JJ-test indicate that the test was not computable in any of the $300$ repetitions because too few observations fell into usable missingness groups.}\label{realresults}
\end{table}
We used $13$ real datasets with varying numbers of observations $n$ and dimensions $p$ for a further empirical assessment of the PKLM-test and a comparison to the other three tests. The datasets are available in the UCI machine learning repository\footnote{\url{https://archive.ics.uci.edu/ml/index.php}}. We preprocessed the data by removing factor variables, in order to be able to run the other three tests. However, we kept numerical variables with only a few unique values.
For the generation of the \texttt{NA}s, we use an overall probability of missingness of $p_{miss}=0.3$ (not to be confused with $r$ from the last subsection, denoting the fraction of complete cases). We used a random MAR generation through the function \texttt{ampute} of the \textsf{R}-package \texttt{mice}. This function can randomly generate realistic MAR mechanisms, see e.g., \citet{amputepaper}. Each experiment was run $\texttt{nsim}=300$ times to compute the type-I error and power. We used the following hyperparameter setting for the computation of our PKLM-test: number of permutations $\texttt{nrep} = 30$, number of projections $\texttt{num.proj} = 300$, minimal node size in a tree $\texttt{min.node.size} = 10$, number of fitted trees per projection $\texttt{num.trees.per.proj} = 200$ and maximal number of collapsed classes allowed in a projection $\texttt{size.resp.set} = 2$. The results are shown in Table \ref{realresults}. Our test is again very competitive, achieving the best power in $7$ out of $13$ datasets among the tests with valid type-I errors. The Little-test also often shows good performance, though given the problematic level displayed in the previous section, this has to be considered with some care.
The Q-test also has relatively high power in the situations where it can be calculated. However, due to computation time we only ran the Q-test for $p \leq 10$. All in all, we see that the Q-test quickly becomes infeasible for large $p$ and $n$, and the advantage of the PKLM-test strengthens with increasing $p$.
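For completeness, the amputation step for a real dataset can be sketched as follows; \texttt{ampute()} and its arguments \texttt{prop} and \texttt{mech} are part of \texttt{mice}, while the dataset choice and seed are ours, and whether \texttt{prop} is interpreted per case or per cell depends on \texttt{ampute}'s further arguments:
\begin{verbatim}
library(mice)
set.seed(5)
X_complete <- iris[, 1:4]                 # numeric part of a real dataset
amp   <- ampute(X_complete, prop = 0.3, mech = "MAR")
X_mar <- amp$amp                          # the amputed (incomplete) dataset
\end{verbatim}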
\section{Extension}\label{extension}
In addition to the ``global test'' of MCAR, we can study the effect of single variables: For any given variable $k=1,\ldots,p$, we can calculate
\[
\hat{U}^{-k}= \frac{1}{|\mathcal{P}_{-k}|}\sum_{i \in \mathcal{P}_{-k}} \hat{U}^{(A_i,B_i)} ,
\]
where $\mathcal{P}_{-k}$ is the set of all pairs of projections $(A_i, B_i)$, among the $N$ randomly chosen ones, whose $B_i$ does not contain variable $k$. We can use the analogous calculation based on the permuted missingness matrices $\mathbf{M}_{\sigma_\ell}$,
\[
\hat{U}^{-k}_{\sigma_{\ell}}= \frac{1}{|\mathcal{P}_{-k}|}\sum_{i \in \mathcal{P}_{-k}} \hat{U}^{(A_i,B_i)}_{\sigma_{\ell}},
\]
to obtain the $p$-value as in \eqref{p-val}. This ``partial'' $p$-value is valid and corresponds to the effect of removing the patterns induced by variable $k$. Indeed, assume the difference in the distribution of two patterns stems from a variable $j$ alone. If $j \in B$, a perfect classifier will be able to reliably differentiate the two, leading to a high value of $\hat{U}^{-k}$ for $k \neq j$ relative to the permutation values. If $j$ is not used to form the labels, we will not test these two classes against each other and thus will not be able to spot this difference. As such, we might expect a high $p$-value for $\hat{U}^{-j}$, when variable $j$ is removed, but a tendency towards low $p$-values for $\hat{U}^{-k}$, $k \neq j$.
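A minimal sketch of the partial statistic, with hypothetical sampled sets $B_i$ and statistics $\hat U^{(A_i,B_i)}$:
\begin{verbatim}
k      <- 1
B_list <- list(c(2, 3), c(1), c(4), c(1, 4))   # hypothetical B_1,...,B_N
U_proj <- c(0.8, 2.1, 0.5, 1.9)                # hypothetical U^(A_i,B_i) estimates
P_minus_k <- which(!vapply(B_list, function(B) k %in% B, logical(1)))
U_minus_k <- mean(U_proj[P_minus_k])           # partial statistic U^{-k}
U_minus_k
\end{verbatim}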
\begin{figure}
\caption{$X_1$ and $X_2$ of the fully observed data in the simulated example of Section \ref{extension}.}
\label{fig:geo}
\end{figure}
We illustrate the usefulness of partial $p$-values with an example. Let $C_{-k} = \{1,\ldots,p\}\setminus \{k\}$. We assume $\mathbf{X}_{\bullet, C_{-k}}$ has an MCAR missingness structure; in particular, we simulate below the MCAR mechanism described in Section \ref{sec: simulations} with $r=0.65$. Let $k=1$ and assume that the first column of observations $\mathbf{X}_{\bullet,1}$ has missingness depending on the observed values of $\mathbf{X}_{\bullet, 2}$: a value $\mathbf{X}_{j,1}$ is missing if the corresponding value $\mathbf{X}_{j,2}$ is larger than $0.5$. In this simple example $\mathbf{X}$ is MAR, but $\mathbf{X}_{\bullet, C_{-1}}$ is MCAR. We simulate this example with $p=4$ and $n=500$, $\mathbf{X}_{i,\bullet}$ being independent standard Gaussian and the MAR/MCAR mechanism as described above; a sketch is given below. The first two fully observed components, $X_1$ and $X_2$, are shown in Figure \ref{fig:geo}. As before, we set \texttt{num.trees.per.proj}=$200$ and use $100$ projections. In this example, we can only spot a difference when variable $1$ is used to build the labels.
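A base-\textsf{R} sketch of this simulation; the exact MCAR missingness probability is our assumption, mirroring Section \ref{sec: simulations}, and for simplicity the missingness of column $1$ is driven by $\mathbf{X}^*_{\bullet,2}$ regardless of whether that value is itself observed:
\begin{verbatim}
set.seed(6)
n <- 500; p <- 4; r <- 0.65
X_star <- matrix(rnorm(n * p), n, p)

M <- matrix(0, n, p)
M[, 2:p] <- rbinom(n * (p - 1), 1, 1 - r^(1 / p)) # MCAR part on columns 2,...,p
M[, 1]   <- as.integer(X_star[, 2] > 0.5)         # column 1 depends on column 2

X <- X_star
X[M == 1] <- NA                                   # MAR overall, MCAR on columns 2,...,p
\end{verbatim}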
Our test reliably delivers small $p$-values ($ \leq 0.05$) for the three partial tests based on projections potentially including variable $1$, i.e., sets of projections $\mathcal{P}_{-2}$, $\mathcal{P}_{-3}$, and $\mathcal{P}_{-4}$ and a high $p$-value for the partial test based on $\mathcal{P}_{-1}$. Thus in this sense, the test detects that the main culprit of the MAR mechanism lies in the first variable.
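The data-generating mechanism of this example can be sketched in \textsf{R} as follows (base \textsf{R} only; the additional MCAR amputation of the remaining columns is only indicated in a comment):
\begin{verbatim}
set.seed(1)
n <- 500; p <- 4
X <- matrix(rnorm(n * p), nrow = n)   # independent standard Gaussian rows

# MAR mechanism on column 1: X[j, 1] is missing whenever X[j, 2] > 0.5
X_na <- X
X_na[X[, 2] > 0.5, 1] <- NA

# columns 2 to 4 additionally receive MCAR missingness (r = 0.65), not shown
\end{verbatim}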
\section{Concluding Remarks}\label{discuss}
In this paper we presented the powerful, flexible and easy-to-use PKLM-test for the MCAR assumption on the missingness mechanism of a dataset. We proved the validity of the $p$-value of the test and showed its power over a wide range of distributions. We also provided an extension allowing for partial tests, which may shed light on the source of the violation of the MCAR assumption. Naturally, with some slight adaptations the test can also be used as a general test of homogeneity, in the sense of testing whether $G$ different groups have the same distribution.
\appendix
\setcounter{section}{0}
\section{Proofs} \label{proofsection}
\propequivalence*
\begin{proof}
We first show that $H_0$ of \eqref{test2} implies $H_0$ of \eqref{test3}. Let $A, B$ be arbitrary. If they are such that there is only one label, there is nothing to test, so we may assume that there are $|G(A,B)| \geq 2$ patterns in $\mathbf{X}_{\mathcal{N}_{A}, A}$. This means that $A \subset \boldsymbol{o}_{ij}$ for \emph{all} patterns $i,j \in G(A,B)$: by construction, each of the $|G(A,B)|$ patterns in $\mathbf{X}_{\mathcal{N}_{A}, A}$ has the elements in $A$ fully observed. But since by assumption $F_{i,\boldsymbol{o}_{ij}}=F_{j,\boldsymbol{o}_{ij}}$ for all $i,j \in \{1,\ldots, G \}$, and $A\subset \boldsymbol{o}_{ij}$ for $i,j \in G(A,B)$, this immediately implies that $F_{i,A}=F_{j,A}$ for all $i,j \in G(A,B)$ and thus $F_{g,A} = \sum_{j\in G(A,B)\setminus g} \omega_j^g F_{j,A}$. Since $A,B$ were arbitrary, one direction follows.
We now show that $H_0$ of \eqref{test3} implies $H_0$ of \eqref{test2}. The proof is based on the following claim: Consider $G$ arbitrary distribution functions $F_1, \ldots, F_G$ and weights $(\omega^{g}_j)_{j=1}^{G-1}$, $g=1,\ldots, G$, such that $\sum_{j=1}^{G-1} \omega^{g}_j = 1$ for all $g$. Then
\begin{align}\label{H001}
F_{g} = \sum_{j\in\{1,\ldots, G\}\setminus g} \omega_j^g F_{j}, \text{ } \forall g \in \{1,\ldots, G\} \implies F_{i}= F_{j}, \text{ } \forall i \neq j \in \{1,\ldots, G\}.
\end{align}
We prove the implication by induction: Consider first $G=3$. Assuming the LHS of \eqref{H001} and plugging the equation for $F_{2}$ into the equation for $F_{1}$, we obtain:
\begin{align*}
F_{1}&=w_2^1 w_1^2 F_{1} + w_2^1 w_3^2 F_{3} + w_3^1 F_{3} \\
&=w_2^1 w_1^2 F_{1} + (w_2^1 w_3^2 + w_3^1) F_{3},
\end{align*}
which implies $(1-w_1^2w_2^1)F_{1}= (w_2^1w_3^2+w_3^1)F_{3}$. Since
\begin{align*}
&1 = w_2^1 + w_3^1
= w_2^1(w_3^2+w_1^2) +w_3^1
= w_2^1w_3^2 + w_2^1w_1^2 + w_3^1,
\end{align*}
we have the equality $(1-w_1^2w_2^1)=(w_2^1w_3^2+w_3^1)$ and thus $F_{1}=F_{3}$. Plugging this back into the corresponding equation for $F_{2}$, we obtain $F_{1}=F_{2}=F_{3}$. Now assume \eqref{H001} is true for $G$ distributions $F_{1}, \ldots, F_{G}$; we prove it for $G+1$. Assume wlog that the weight of $F_{2}$ in the equation for $F_{1}$ is nonzero (there is always at least one distribution among $F_{2}, \ldots, F_{G+1}$ with nonzero weight). Using the same trick as above, we may plug the equation for $F_{2}$ into the one for $F_{1}$, thereby reducing the number of equations/distributions to $G$. By the induction assumption this implies that $F_{1}=F_{3}=\ldots=F_{G+1}$. But this immediately also implies that $F_{2}=F_{1}$, which proves \eqref{H001}. With this result we can now prove that $H_0$ of \eqref{test3} implies $H_0$ of \eqref{test2}.
Take two arbitrary groups $i,j$ and $A=\boldsymbol{o}_{ij}$ and take $B=A^c$. To ease notation we just wlog take $i=1$ and $j=2$. Then $A=\boldsymbol{o}_{12}$ contains the dimensions for which patterns $1$ and $2$ have fully observed values. Thus, observations in $\mathbf{X}_{\mathcal{N}_{A}, A}$ contain draws from $F_{1,\boldsymbol{o}_{12}}$ and $F_{2,\boldsymbol{o}_{12}}$. Since by assumption
\begin{align}
H_0: F_{g,A} = \sum_{j\in G(A,B)\setminus g}& \omega_j^g F_{j,A} \text{ }, \forall g \in G(A,B),
\end{align}
it follows by \eqref{H001} that $F_{i,A}=F_{j,A}$ for all $i,j \in G(A,B)$ and thus in particular $F_{1,A}=F_{2,A}$. Since the same argument applies with $A=\boldsymbol{o}_{ij}$ for any two groups $i\neq j$, $H_0$ of \eqref{test2} holds.
\end{proof}
\densityratiothm*
\begin{proof}
Based on the definitions of $ p^{(A,B)}_g(x)$, $f^{(A,B)}_g(x)$ and $\pi^{(A,B)}_g$ we obtain by Bayes Rule,
\begin{align} \label{bayes}
p^{(A,B)}_g(x) = \frac{f^{(A,B)}_g(x)\pi^{(A,B)}_g}{\sum_{j\in G(A,B)} \pi^{(A,B)}_j f^{(A,B)}_j(x)},
\end{align}
assuming the existence of densities $f_g$ of distributions $F_g$ for each $g\in G(A, B)$.
Following the same steps as in \cite{Cal2020}, we get that the logarithm of the (joint) density ratio for testing $H_0$ vs $H_1$ of (\ref{test4}) is given by
\begin{align}
\log \frac{ f^{(A,B)}_g(x) (1-\pi^{(A,B)}_g)}{\sum_{j\in G(A,B)\setminus g} \pi^{(A,B)}_j f^{(A,B)}_j(x) }.\label{densityratio}
\end{align}
We reformulate the fraction in \eqref{densityratio} in terms of $p^{(A,B)}_g$, starting from (\ref{bayes}):
\begin{align*}
p^{(A,B)}_g(x) \sum_{j\in G(A,B)\setminus g} \pi^{(A,B)}_j f^{(A,B)}_j(x) &= (\pi^{(A,B)}_g - p^{(A,B)}_g(x)\pi^{(A,B)}_g)f^{(A,B)}_g(x)\\
&=\pi^{(A,B)}_g( 1 - p^{(A,B)}_g(x))f^{(A,B)}_g(x).
\end{align*} Thus, the argument of the logarithm in (\ref{densityratio}) is given by the following function of $p^{(A,B)}_g$:
\begin{align*}
\frac{f^{(A,B)}_g(x)(1-\pi^{(A,B)}_g) }{ \sum_{j\in G(A,B)\setminus g} \pi^{(A,B)}_j f^{(A,B)}_j(x) } &=
\frac{ 1-\pi^{(A,B)}_g}{\pi^{(A,B)}_g}\frac{p^{(A,B)}_g(x) }{1-p^{(A,B)}_g(x) }.
\end{align*}
\end{proof}
\KLlemmalabel*
\begin{proof}
From the proof of Lemma \ref{densityratiothmlabel}, we know that $U^{(A,B)}_g$ can be rewritten as
\begin{align}
U^{(A,B)}_g &:= \frac{1}{|S_{f_g^{(A,B)}} |}\sum_{i\in S_{f_g^{(A,B)}}} \left(\log\frac{p^{(A,B)}_g(x_{i})}{1-p^{(A,B)}_g(x_{i})} - \log \frac{\pi^{(A,B)}_g}{1-\pi^{(A,B)}_g}\right) \notag\\
&=\frac{1}{n_g}\sum_{i\in S_{f_g^{(A,B)}}} \log \frac{f^{(A,B)}_g(x_i)(1-\pi^{(A,B)}_g) }{ \sum_{j\in G(A,B)\setminus g} \pi^{(A,B)}_j f^{(A,B)}_j(x_i) }.\label{Ustat2}
\end{align}
Since $n_g/n \rightarrow \pi^{(A,B)}_g \in (0,1)$ and the $x_i$ are i.i.d., the result follows from the law of large numbers.
\end{proof}
\pvalprop*
\begin{proof}
Let $\mathbf{A}=(A_1,\ldots, A_{N})$ and $\mathbf{B}=(B_1,\ldots, B_{N})$ be two sets of $N$ projections. Let $G_1, \ldots, G_{L^*}$ be the maps induced by all possible permutations $\sigma_{1},\ldots,\sigma_{L^*}$ of the rows of the missingness matrix $\mathbf{M}$, such that
\[
G_{\ell}(\mathbf{X}^*, \mathbf{M}, \mathbf{A}, \mathbf{B})= (\mathbf{X}^*, \mathbf{M}_{\sigma_{\ell}}, \mathbf{A}, \mathbf{B}),
\]
for $\ell=1,\ldots, L^*$. Note that, since we are only considering fully observed observations for all projections in $\mathbf{A}$, $\hat{U}$, a function of $(\mathbf{X}, \mathbf{M}, \mathbf{A}, \mathbf{B})$, is indeed a function of $(\mathbf{X}^*, \mathbf{M}, \mathbf{A}, \mathbf{B})$, while $\hat{U}_{\sigma_{\ell}}$ is a function of $G_{\ell}(\mathbf{X}^*, \mathbf{M}, \mathbf{A}, \mathbf{B})$. It also holds, under the null, that is under MCAR, that
\begin{equation}\label{Equaldist}
(\mathbf{X}^*, \mathbf{M}, \mathbf{A}, \mathbf{B}) \stackrel{D}{=} (\mathbf{X}^*, \mathbf{M}_{\sigma_{\ell}}, \mathbf{A}, \mathbf{B}) = G_{\ell}(\mathbf{X}^*, \mathbf{M}, \mathbf{A}, \mathbf{B})\ \ \forall \ell = 1,\ldots L^*.
\end{equation}
This is true because, under MCAR, $\mathbf{M}$ and $\mathbf{X}^*$ are independent. Since by the i.i.d. assumption also $\mathbf{M}_{\sigma_{\ell}} \stackrel{D}{=} \mathbf{M}$ for all $\ell= 1,\ldots, L^*$ and since $\mathbf{A}$, $\mathbf{B}$ are also independent of $\mathbf{M}$, \eqref{Equaldist} follows.
As outlined for example in \citet{Hemerik2018}, this implies that under $H_0$,
\[
\ensuremath{{\mathbb P}}(Z \leq z \mid \mathbf{A}, \mathbf{B}) \leq z.
\]
Integrating over $(\mathbf{A}, \mathbf{B})$ results in \eqref{validpval}.
\end{proof}
\section{Additional Details and Computation Times} \label{app:comptime}
Here we provide more implementation details, discuss the complexity calculations in Table \ref{advantagetable} and show computation times of the different tests in the experiments.
\noindent
\textbf{Numerical truncation.} In order to avoid numerical issues when calculating the density ratio in Expression (\ref{Ustat}), or the $\log$ thereof, when predicted probabilities $\hat p_{A}$ are close to $0$ or $1$, we apply the following truncation function to $\hat p_{A}$:
\begin{equation*}
p(x) = \min(\max(x, 10^{-9}), 1-10^{-9}).
\end{equation*}
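In \textsf{R}, this truncation is a one-liner (a sketch matching the display above; the function name is ours):
\begin{verbatim}
truncate_prob <- function(x) pmin(pmax(x, 1e-9), 1 - 1e-9)
\end{verbatim}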
\noindent
\textbf{Hyperparameter Selection.}
Generally speaking, ``the more the better'' holds, certainly for the parameters $N$, $L$ and \texttt{num.trees.per.proj}. As such, the choice of these three parameters depends mostly on the computational power available to the user. For \texttt{size.resp.set}, this is not quite as clear, though we found a value of two to work well in most situations.
\noindent
\textbf{PKLM-test.} We first consider the complexity of one Random Forest, which is in this case
\[
\texttt{num.tree} \cdot p n\log(n).
\]
Note that this includes the calculation of $\hat{p}$ on the test sample through the OOB-error. In total we do this \texttt{num.proj} times. However, we consider \texttt{num.tree} and \texttt{num.proj} independent of $n$ and $p$ and thus treat them as constants. In this case we end up with $p n\log(n)$. Finally, we need to calculate the statistic $U$ and repeat this calculation a fixed number of times. This adds a factor $Bn$, where again we assume that $B$ does not grow with $n$ and $p$. As this is negligible compared to $p n\log(n)$, the complexity is given as $\mathcal{O}(p n\log(n))$.
\noindent
\textbf{Q-test.} The Q-test compares all groups, leading to $G^2$ pairwise comparisons. Additionally, the statistic used is of MMD type, so each comparison has complexity $(n_1+n_2)^2$, where $n_1$, $n_2$ are the respective group sizes. The group sizes can be at worst of order $n/G$, which together results in $\mathcal{O}(n^2)$. The bootstrap can likewise be ignored, as it only multiplies $n^2$ by a constant factor.
\noindent
\textbf{JJ and Little-test.} Both JJ- and Little-test rely on covariance estimation which scales as $n p^2$. This gives the $\mathcal{O}(n p^2)$ complexity for the Little-test. For the JJ-test one also needs an ordering operation to obtain the test statistics, with complexity $n \log(n)$, which results in overall complexity $\mathcal{O}(n (p^2 + \log(n)))$.
\noindent
As mentioned above, Table \ref{advantagetable} just shows how the complexity scales in $n$ and $p$ and, in the case of our test, treats the number of projections as a constant. One might argue that the number of projections should be a function of $p$ as well. Similarly, for ``small'' $p$ and a small number of groups $G$, the Q-test can be faster than ours. Still, the complexities provide a good illustration of how quickly the Q-test can become infeasible when the number of groups (often a function of $p$) and/or the number of observations increases.
\section{Example of \lowercase{\citet{yuan2018}}} \label{app:equalgroupmeansandvar}
\citet{yuan2018} study settings where group means and variances are approximately equal across missingness patterns, such that MCAR tests based on differences in means and variances, such as the Little-test, have no power. We study one such example here: Let $p=2$, let $(Z_1,Z_2)$ be bivariate normal with standard $\mathcal{N}(0,1)$ margins and correlation zero, and let $X_1=Z_1$ and
\[
X_2 = 0.5 Z_1 + (1-0.25)^{1/2}Z_2.
\]
We set $X_2$ to $\texttt{NA}$ if
\[
X_1 \in (-\infty, -1.932] \cup (-0.314, 0.314] \cup (1.932, \infty).
\]
This corresponds to around $30\%$ missing values. Figure \ref{fig:simulationillustration} displays a histogram, plotting all observations of $X_1$ with $X_2$ missing for a simulation of $n=10'000$. This corresponds to the MAR example used in \citet[Section 3]{yuan2018} and we refer to their paper for more details.
We simulate the above distribution for $n=1000$ and run our PKLM-test with the same parameters as described in Section \ref{sec: simulations}. Though the deviation from MCAR cannot be detected through the first two moments in this example, our test reliably reaches a power of 1.
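A minimal \textsf{R} sketch of this data-generating process (base \textsf{R} only; the PKLM-test call itself is omitted):
\begin{verbatim}
set.seed(1)
n  <- 1000
Z1 <- rnorm(n); Z2 <- rnorm(n)
X1 <- Z1
X2 <- 0.5 * Z1 + sqrt(1 - 0.25) * Z2

# MAR mechanism: X2 is missing on a symmetric set of X1-intervals,
# chosen so that means and variances across patterns barely change
miss <- (X1 <= -1.932) | (X1 > -0.314 & X1 <= 0.314) | (X1 > 1.932)
X2[miss] <- NA
mean(miss)   # approximately 0.3
\end{verbatim}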
\begin{figure}
\caption{Histogram with relative frequencies of $X_1$ if the corresponding $X_2$ is $\texttt{NA}$.}
\label{fig:simulationillustration}
\end{figure}
\pagebreak
\end{document} |
\begin{document}
\title{Translation-Invariant Estimates for
Operators with Simple Characteristics}
\xdef\@thefnmark{}\@footnotetext{Keywords: fundamental solution, a-priori estimate, invariant, simply characteristic, higher order.}
\xdef\@thefnmark{}\@footnotetext{2010 Mathematics Subject Classification: 35A01, 35A24, 35B45, 35G05, 42B45.}
\begin{abstract}
We prove \(L^{2}\) estimates and solvability for a variety of simply
characteristic constant coefficient partial differential
equations \(P(D)u=f\). These estimates
\[
||u||_{L^2(D_{r})}\le C\sqrt{d_{r}d_{s}} ||f||_{_{L^2(D_{s})}}
\]
depend on geometric quantities --- the diameters \(d_{r}\)
and \(d_{s}\) of the regions \(D_{r}\), where we estimate
\(u\), and \(D_{s}\), the support of \(f\) --- rather than
weights. As these geometric quantities transform simply
under translations, rotations, and dilations, the
corresponding estimates share the same properties. In
particular, this implies that they transform appropriately
under change of units, and therefore are physically
meaningful. The explicit dependence on the diameters
implies the correct global growth estimates. The weighted
\(L^{2}\) estimates first proved by Agmon \cite{Agmon} in
order to construct the generalized eigenfunctions for
Laplacian plus potential in \(\mathbb{R}^{n}\), and the more
general and precise Besov type estimates of Agmon and
H\"ormander \cite{Agmon-Hormander}, are all simple direct
corollaries of the estimate above.
\end{abstract}
\section*{Acknowledgements}
E. Bl{\aa}sten was partially supported by the European Research
Council's 2010 Advanced Grant 267700. J. Sylvester was partially
supported by the National Science Foundation's grant DMS-1309362.
\section{Introduction}
Constant coefficient partial differential equations are
translation invariant, so it is natural to seek estimates
that share this property. For the Helmholtz equation, and
other equations related to wave phenomena, \(L^2\)-norms
are appropriate in bounded regions because they measure
energy. For problems in all of \(\mathbb{R}^{n}\),
however, a solution with finite \(L^2\)-norm
may radiate infinite power\footnote{Finite radiated power
typically means that solutions decay fast enough at
infinity. For outgoing solutions to the Helmholtz
equation, radiated power can be expressed as the limit
as \(R\rightarrow\infty\)
of the \(L^{2}\)
norm of the restriction of the solution to the sphere of
radius \(R\).
It remains finite as long as solutions decay as
\(r^{-\frac{n-1}{2}}\)
in \(n\)
dimensions.}, and therefore not satisfy the necessary
physical constraints. The solution provided by Agmon
\cite{Agmon} was to introduce \(L^2_{\delta}\)
spaces where weights \((1+|x|^{2})^{\frac{\delta}{2}}\)
correctly enforced the finite transmission of power, but
gave up the translation invariance, as well as scaling
properties necessary for the estimates to make sense in
physical units. Later work by Agmon
and H\"ormander \cite{Agmon-Hormander} used Besov spaces
to exactly characterize solutions that radiated finite
power, but these spaces also relied on a weight and
therefore broke the translation invariance that is
intrinsically associated with both the physics and the
mathematics of the underlying problem. Later work by
Kenig, Ponce, and Vega \cite{MR1230709} modified the
Agmon-H\"ormander norms to regain better scaling
properties.
Our goal here is to offer \(L^2\) estimates that enforce finite
radiation of power without using weights that destroy translation
invariance and scaling properties. The
following theorem, which applies to a class of scalar pde's with constant
coefficients and simple characteristics, summarizes our
main results, which will be proved as
Theorem \ref{th:main} and Theorem \ref{th:other}.
\begin{theorem*}
Let \(P(D)\) be a constant coefficient partial differential
operator on \(\mathbb{R}^n\). Assume that it is either
\begin{enumerate}
\item real, of second order, and with no real double
characteristics, or
\item of $N$-th order, $N \geqslant 1$, with admissible symbol (Definition
\ref{admissibleSymbol}) and no complex double characteristics.
\end{enumerate}
Then there exists a constant
\(C(P,n)\) such that, for every open bounded
\(D_{s}\subset\mathbb{R}^{n}\), and every \(f\in L^2(D_{s})\),
there is a \(u\in L^2_{loc}(\mathbb{R}^{n})\) satisfying
\begin{eqnarray*}
P(D)u=f
\end{eqnarray*}
and for any bounded domain \(D_{r}\subset\mathbb{R}^{n}\)
\begin{eqnarray}\label{eq:53}
||u||_{L^2(D_{r})}\le C \sqrt{d_{r}d_{s}}||f||_{L^2(D_{s})}
\end{eqnarray}
where \(d_{j}\) is the diameter of \(D_{j}\),
the supremum over all lines of the length of the
intersection of the line with \(D_{j}\); i.e.
\begin{eqnarray*}
\nonumber
d_{j}= \sup_{\mathrm{lines}\ l}\mu_{1}(l\cap D_{j}).
\end{eqnarray*}
\end{theorem*}
If \(f\) is not compactly supported, but \(\supp
f\subset \mathop{\cup}\limits_{j=1}^{\infty} B_{j}\) where
each \(B_{j}\) has finite diameter \(b_{j}\), then
(\ref{eq:53}) becomes
\begin{equation}\nonumber
||u||_{L^2(D)}\le C\sqrt{d} \sum_{j=1}^{\infty}\sqrt{b_{j}}
||f||_{_{L^2(B_{j})}}
\end{equation}
which we may rewrite as
\begin{eqnarray}\label{eq:91}
\sup_{D}\frac{1}{\sqrt{d}}||u||_{L^2(D)}\le
\sum_{j=1}^{\infty}\sqrt{b_{j}} ||f||_{_{L^2(B_{j})}}.
\end{eqnarray}
In the special case that \(D\) is a ball with a fixed
center and arbitrary radius; and the \(B_{j}\) include the
ball of radius one and the dyadic spherical shells
\(2^{j}<|x|<2^{j+1}\) for \(j\ge 0\), these are the
estimates of Agmon-H\"ormander in \cite{Agmon-Hormander}.
The weighted
\(L^2_{\delta}\) estimates introduced by Agmon are also
direct consequences of (\ref{eq:53}), so that these
solutions do radiate finite power and are therefore
physically meaningful. The solutions we construct are not
necessarily unique, but include the \textit{physically
correct} solutions in all the cases we are aware of. For
the Helmholtz equation, for example, the solution which
satisfies the Sommerfeld radiation condition is among
those which satisfy the estimate \eqref{eq:53}.\\
Our estimates do not include the uniform \(L^p\) estimates
for the Helmholtz equation, shown below, which were
derived in \cite{MR0358216} and \cite{MR894584}, and
presented in \cite{ruiz-notes} and \cite{serov-notes}.
\begin{theorem*}[Uniform $L^p$ estimates]
Let \(k>0\) and
\(\frac{2}{n}\ge
\frac{1}{p} - \frac{1}{q}\ge\frac{2}{n+1}\)
for \(n\ge3\) and
\(1> \frac{1}{p} -\frac{1}{q}\ge\frac{2}{3}\)
for \(n=2\), where
\(\frac{1}{q} + \frac{1}{p}=1\).
There exists a constant \(C(n,p)\), independent of \(k\), such
that, for smooth compactly supported \(u\)
\begin{equation}
\label{eq:84}
||u||_{L^{q}(\mathbb{R}^{n})}\le C(n,p)k^{n(\frac{1}{p}
-\frac{1}{q})-2}||(\Delta+k^{2})u||_{L^{p}(\mathbb{R}^{n})}
\end{equation}
\end{theorem*}
The estimates for the Helmholtz equation in \eqref{eq:84}
share all the invariance properties of \eqref{eq:53}, and
are stronger for small scatterers and applications to
nonlinear problems. The dependence on the wavenumber
\(k\), however, is not as well-suited to applications where the
sources are supported on sets that are several
wavelengths in size and located far apart, nor do they
have a direct physical interpretation in terms of power.
Additionally, it seems reasonable that the estimate of the
solution in the higher \(L^{q}\) norm indicates a gain in
regularity. Our methods don't require, or make use of, ellipticity,
so we don't expect to recover these estimates.\\
Our methods make use of certain anisotropic norms
introduced in \cite{scaleinvariant} for the Helmholtz
equation. Those estimates were scale and translation
invariant, but, due to the anisotropy, not rotationally
invariant. We show here that a consequence of these mixed
norm estimates is \eqref{eq:53}, which is rotationally
invariant and much simpler than the mixed norm estimates
used to derive it. Because of the generality, the
mixed norms we use here must be slightly different from
those in \cite{scaleinvariant}, and the techniques
required to treat more general operators are substantially
more complicated.
We treat only operators with simple characteristics
because a \textit{bona fide} real multiple characteristic
(a real \(\eta\in\mathbb{R}^{n}\)
where the symbol \(p(\eta)\)
and \(\nabla p(\eta)\)
vanish simultaneously) will imply that our techniques
cannot succeed. In Section \ref{sec:examples}, we show
that estimates of the form (\ref{eq:53}) cannot hold for
the Laplacian, which has a double characteristic at the
origin.
For a single second order operator with real
constant coefficients we will show in Theorem
\ref{th:main} that the absence of multiple characteristics
is sufficient to conclude the estimate
(\ref{eq:53}). Under some additional hypotheses, we will
prove the same estimate for some higher order operators in
Theorem \ref{th:other}. Additionally, we will prove the
estimate (\ref{eq:53}) for the $4\times 4$ Dirac system, and for a
scalar 4th order equation where H\"ormander's
\textit{uniformly simply characteristic} condition fails.
\section{The Helmholtz case}
We will illustrate our methods by outlining
the proof of (\ref{eq:53}) for the outgoing solution to
the Helmholtz equation below.
\begin{eqnarray}
\label{eq:13}
(\Delta + k^2)u = f
\end{eqnarray}
We will choose a direction \(\Theta\) and write
\(x=t\Theta+\tp{x}\). We next Fourier transform in the
\(\Theta^\perp\) hyperplane to rewrite (\ref{eq:13}) as
an ordinary differential equation. We use the notation
\(\f{\Theta^\perp}{u}(t\Theta+\tp{\xi})\) to indicate this
partial Fourier transform (see \eqref{eq:68} below for a
formal definition). If we set \(g(t,\tp{\xi}) =
\f{\Theta^\perp}{f}(t\Theta+\tp{\xi})\) and \(w(t,\tp{\xi}) =
\f{\Theta^\perp}{u}(t\Theta+\tp{\xi})\), then \eqref{eq:13} becomes
\begin{eqnarray}
\label{eq:59}
(\partial_t^2 + k^2 - \abs{\xi_{\Theta^\perp}}^2)
w = g
\end{eqnarray}
We factor the second order operator as a product
of first order operators
\begin{eqnarray}\nonumber
\left(\partial_t + i\sqrt{k^2 - \abs{\xi_{\Theta^\perp}}^2}\right)
\left(\partial_t - i\sqrt{k^2 - \abs{\xi_{\Theta^\perp}}^2}\right)
w = g
\end{eqnarray}
and define a solution \(w=w_{1}+w_{2}\) where \(w_{1}\) and \(w_{2}\) solve
\begin{eqnarray}\label{modelProblems}
\begin{cases}
\Big(\partial_t + i\sqrt{k^2 - \abs{\xi_{\Theta^\perp}}^2}\Big)
w_{1} = \frac{i g}{ 2\sqrt{k^2 -
\abs{\xi_{\Theta^\perp}}^2}},
\\
\Big(\partial_t - i\sqrt{k^2 -
\abs{\xi_{\Theta^\perp}}^2}\Big) w_{2} = \frac{-i
g}{ 2\sqrt{k^2 - \abs{\xi_{\Theta^\perp}}^2}}.
\end{cases}
\end{eqnarray}
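For completeness, the algebra behind this splitting is the one-line identity (writing
\(\kappa=\sqrt{k^2 - \abs{\xi_{\Theta^\perp}}^2}\), which does not depend on \(t\), and using
\eqref{modelProblems})
\[
(\partial_t^2+\kappa^{2})(w_{1}+w_{2})
=\big(\partial_t-i\kappa\big)\Big(\frac{ig}{2\kappa}\Big)
+\big(\partial_t+i\kappa\big)\Big(\frac{-ig}{2\kappa}\Big)=g ,
\]
since the \(\partial_t g\) terms cancel; so \(w=w_{1}+w_{2}\) indeed solves \eqref{eq:59}.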
The solutions \(w_{1}\) and \(w_{2}\) are given by the exact formulas
\begin{eqnarray}\label{eq:60}
w_{1}(t,\tp{\xi})&=&\frac{1}{2\sqrt{k^2 -
\abs{\xi_{\Theta^\perp}}^2}}
\ensuremath{\mathop{\int}\limits}_{-\infty}^{t}
e^{i\sqrt{k^2 -\abs{\xi_{\Theta^\perp}}^2}(t-s)}\
i g(s,\tp{\xi})ds
\\\label{eq:69}
w_{2}(t,\tp{\xi})&=&\frac{1}{2\sqrt{k^2 -
\abs{\xi_{\Theta^\perp}}^2}}
\ensuremath{\mathop{\int}\limits}_{t}^{\infty}
e^{-i\sqrt{k^2 -\abs{\xi_{\Theta^\perp}}^2}(t-s)}\
i g(s,\tp{\xi})
ds
\end{eqnarray}
The square root
\(\sqrt{k^2-\abs{\xi_{\Theta^\perp}}^2}\)
is chosen so that it always has positive imaginary part
for imaginary part of \(k\) positive, and extends continuously,
as a function of \(k\), to the real axis. This insures
that the exponential in (\ref{eq:60}) and (\ref{eq:69}) is
bounded by one\footnote{This also selects the unique outgoing solution, which
satisfies the Sommerfeld radiation condition.}
so that \(\f{\Theta^\perp}{u} = w = w_{1} + w_{2}\) satisfies
\begin{eqnarray}
\label{eq:61}
|\f{\Theta^\perp}{u}(t\Theta+\tp{\xi})|\le
\frac{||\f{\Theta^\perp}{f}(s\Theta+\tp{\xi})||_{L^{1}(ds)}}
{\sqrt{k^2-\abs{\xi_{\Theta^\perp}}^2}}
\end{eqnarray}
which would yield a simple estimate if the denominator had
a lower bound.
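Explicitly, since the exponentials in (\ref{eq:60}) and (\ref{eq:69}) have modulus at most one and
the two integrals run over disjoint ranges of \(s\),
\[
|\f{\Theta^\perp}{u}(t\Theta+\tp{\xi})|\le |w_{1}(t,\tp{\xi})|+|w_{2}(t,\tp{\xi})|
\le \frac{1}{2}\,
\frac{||\f{\Theta^\perp}{f}(s\Theta+\tp{\xi})||_{L^{1}(ds)}}{\sqrt{k^2-\abs{\xi_{\Theta^\perp}}^2}} ,
\]
which gives (\ref{eq:61}) with a factor \(\tfrac12\) to spare.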
In sections \ref{sec:order2} and \ref{sec:ordern}, we
will construct Fourier multipliers that implement a
partition of unity that
decomposes \(f\) into a sum
\begin{eqnarray}
f&=&f_{1}+f_{2}+\ldots+f_{m}
\\ \label{eq:85}
&=&f\phi_{1}+f\phi_{2}(1-\phi_{1})
+f\phi_{3}(1-\phi_{2})(1-\phi_{1})\ldots+f\prod_{j=1}^{m}(1-\phi_{j})
\end{eqnarray}
such that, for each \(f_{j}\), there is a direction
\(\Theta_{j}\) such that
\begin{eqnarray}
\label{eq:65}
\inf_{\supp
\f{\Theta_{j}^\perp}{f_{j}}}\sqrt{k^2-\abs{\xi_{\Theta_{j}^\perp}}^2}
&\ge& \varepsilon k
\\\na{and}\nonumber
\norm{\norm{\f{\Theta_{j}^\perp}{f_{j}}(s\Theta_j+\xi_{\Theta_j^\perp})}_{L^{1}(ds)}}_{L^{2}(d\xi_{\Theta_j^\perp})}
&\le&\nonumber
\norm{\norm{\f{\Theta_{j}^\perp}{f}(s\Theta_j+\xi_{\Theta_j^\perp})}_{L^{1}(ds)}}_{L^{2}(d\xi_{\Theta_j^\perp})}
\\\na{which we write more compactly as}\label{eq:67}
\norm{\f{\Theta_{j}^\perp}{f_{j}}}_{\Theta_{j}(1,2)}&\le&\norm{\f{\Theta_{j}^\perp}{f}}_{\Theta_{j}(1,2)}
\end{eqnarray}
using norms which we will define precisely in
(\ref{eq:76}).
\begin{figure}
\caption{Partition of unity for $\Delta+k^2$.}
\label{partitionUnityProcedureFig}
\end{figure}
We illustrate this decomposition for the 3-dimensional
case in Figure \ref{partitionUnityProcedureFig}. Let
$\Theta_j=e_j$, $j=1,2,3$, be an orthogonal basis. The
cylinders illustrated in the
top-row are the sets, denoted $\mathscr{B}_{\Theta_{j},0}$, where the
denominators $\sqrt{k^2 - \abs{\xi_{\Theta_{j}^\perp}}^2}$
of \eqref{eq:61} vanish. Each \(\phi_{j}\), and hence each \(f_{j}\),
vanishes in a neighborhood of
$\mathscr{B}_{\Theta_{j},0}$. The thick lines in the
figures in the bottom row show the intersections
$\mathscr{B}_{\Theta_1,0}$,
$\mathscr{B}_{\Theta_1,0} \cap \mathscr{B}_{\Theta_2,0}$
and
$\mathscr{B}_{\Theta_1,0} \cap \mathscr{B}_{\Theta_2,0}
\cap \mathscr{B}_{\Theta_3,0}$, indicating the
support of the \(\prod_{j=1}^{m}(1-\phi_{j})\). To
guarantee that the \(f_{j}\) sum to \(f\), the
intersection of (neighborhoods of) all the
\(\mathscr{B}_{\Theta_{j},0}\) must be empty. We see in the figure that
the intersection of the first three neighborhoods consists
of neighborhoods of eight points, so we may add a fourth direction,
for example
$\Theta_4 = (e_1+e_2+e_3)/\sqrt{3}$ (not pictured), so that the
corresponding cylinder $\mathscr{B}_{\Theta_4,0}$ does not
intersect the eight points that are left.\\
Combining \eqref{eq:61}, \eqref{eq:65}, and \eqref{eq:67}
\begin{eqnarray}
\label{eq:66}
||\f{\Theta_{j}^\perp}{u_{j}}||_{\Theta_{j}(\infty,2)}\le
\frac{||\f{\Theta_{j}^\perp}{f}||_{\Theta_{j}(1,2)}}{k\varepsilon}
\end{eqnarray}
where each of the \(u_{j}\) solves \((\Delta+k^{2})u_{j}=f_{j}\).
The estimates (\ref{eq:66}) estimate each \(u_{j}\) in a
different norm, and the norms, which depend on a choice of
the vectors \(\Theta_{j}\), are no longer
rotationally invariant. They can, however, be combined to
yield an estimate in a single norm that is
rotationally and translationally invariant.
\begin{lemma}\label{mixedToUniformNorms}
Let $D_s, D_r \subset \mathbb{R}^n$ be domains with diameters $d_s$ and $d_r$,
respectively. Let $\f{\Theta^\perp}{u} \in \Theta(\infty,2)$ and $f
\in L^2_{loc}$. Assume that $\supp f \subset D_s$. Then $u_{\mid D_r}
\in L^2$ and $\f{\Theta^\perp}{f} \in \Theta(1,2)$. Moreover
\begin{eqnarray}
\nonumber
\norm{u}_{L^2(D_r)} \le \sqrt{d_r} \norm{ \f{\Theta^\perp}{u} }_{\Theta(\infty,2)},
\\
\label{eq:71}
\norm{ \f{\Theta^\perp}{f} }_{\Theta(1,2)} \le \sqrt{d_s} \norm{ f }_{L^2(D_s)}.
\end{eqnarray}
\end{lemma}
\noindent Combining the lemma with (\ref{eq:66}) yields
\begin{eqnarray}
\label{eq:6}
\norm{u}_{L^2(D_r)} \le
\frac{\sqrt{d_rd_{s}}}{\varepsilon k}\norm{ f }_{L^2(D_s)}.
\end{eqnarray}
We leave the proof of the lemma for the next section,
after we have given the formal definitions of the norms.
\section{Mixed norms}
We begin with the formal definition of the anisotropic
norms we will use.
\begin{definition}\label{variableSplittingDef}
Let $\Theta\in\mathbb{S}^{n-1}(\mathbb{R}^n)$. We split any
$x\in\mathbb{R}^n$ as
\begin{eqnarray}
\label{eq:83}
x = t\Theta+x_{\Theta^\perp}
\end{eqnarray}
where
$t = x\cdot\Theta$ and
$x_{\Theta^\perp} = x - (x\cdot\Theta)\Theta$. We split
the dual variable $\xi$ as
\begin{eqnarray}
\nonumber
\xi = \tau \Theta + \xi_{\Theta^\perp}
\end{eqnarray}
The variables
$t$ and $\tau$ are dual, and so are $x_{\Theta^\perp}$
and $\xi_{\Theta^\perp}$.
\end{definition}
\begin{figure}
\caption{Splitting of $\xi = \tau\Theta+\tp{\xi}$.}
\end{figure}
\begin{definition}\label{fourierTransformDef}
By $\f{\Theta}{}$ we denote the one-dimensional Fourier transform along
the direction $\Theta$. If $f\in\mathscr{S}(\mathbb{R}^n)$ then
\[
\f{\Theta}{f}(\tau\Theta+x_{\Theta^\perp}) = \frac{1}{\sqrt{2\pi}}
\int_{-\infty}^\infty e^{-it\tau} f(t\Theta+x_{\Theta^\perp}) dt
\]
using the notation of Definition \ref{variableSplittingDef}. The
Fourier transform in the orthogonal space $\Theta^\perp$ is denoted by
$\f{\Theta^\perp}{}$ and it acts by
\begin{eqnarray}\label{eq:68}
\f{\Theta^\perp}{f}(t\Theta+\xi_{\Theta^\perp}) =
\frac{1}{(2\pi)^{(n-1)/2}} \int_{\Theta^\perp} e^{-i x_{\Theta^\perp}
\cdot \xi_{\Theta^\perp}} f(t\Theta + x_{\Theta^\perp})
dx_{\Theta^\perp}.
\end{eqnarray}
The corresponding inverse transforms are denoted by $\finv{\Theta}{}$
and $\finv{\Theta^\perp}{}$.
\end{definition}
\begin{definition}\label{mixedNormSpaceDef}
We use $\Theta(p,q)$ to denote the space of $L^p(dt)$-valued
$L^q(dx_{\Theta^\perp})$-functions (if the variable is
$x=t\Theta+x_{\Theta^\perp}$). More precisely $f\in\Theta(p,q)$ if
\begin{eqnarray}\label{eq:76}
\norm{f}_{\Theta(p,q)} = \left( \int_{\Theta^\perp} \left(
\int_{-\infty}^\infty \abs{f(t\Theta+x_{\Theta^\perp})}^p dt
\right)^{q/p} dx_{\Theta^\perp} \right)^{1/q} < \infty.
\end{eqnarray}
with obvious modifications for $p=\infty$ or
$q=\infty$.
\end{definition}
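For orientation (an elementary example, not used later): if \(f\) is the indicator function of a
box, \(f(t\Theta+x_{\Theta^\perp})=\mathbf{1}_{[0,a]}(t)\,\mathbf{1}_{E}(x_{\Theta^\perp})\) with
\(E\subset\Theta^\perp\) of finite measure, then \eqref{eq:76} gives
\[
\norm{f}_{\Theta(p,q)} = a^{1/p}\,|E|^{1/q},
\]
so the two exponents weight the \(\Theta\) direction and the transverse directions separately.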
\begin{remark}
We make note of the fact that the order is important.
For example,
we will use the norm
\[
\norm{f}_{\Theta(1,\infty)} = \sup_{x_{\Theta^\perp}\in\Theta^\perp}
\int_{-\infty}^\infty \abs{f(t\Theta+x_{\Theta^\perp})} dt
\]
in several lemmas. This is clearly not the same as
\[
\int_{-\infty}^\infty \ \sup_{x_{\Theta^\perp}\in\Theta^\perp}
\abs{f(t\Theta+x_{\Theta^\perp})} dt
\]
\end{remark}
We convert estimates in these anisotropic norms
to isotropic \(L^{2}\) estimates with Lemma
\ref{mixedToUniformNorms}. We give the proof now.
\begin{proof}[Proof of Lemma \ref{mixedToUniformNorms}]
Let $D^\Theta_s$, $D^\Theta_r$ be the projections of $D_s$ and $D_r$
onto the line $t \mapsto t\Theta$. We have
\begin{align*}
&\norm{u}_{L^2(D_r)}^2 \le \int_{D^\Theta_r} \int_{\Theta^\perp}
\abs{u(t\Theta+x_{\Theta^\perp})}^2 dx_{\Theta^\perp} dt
\\
&\qquad =
\int_{D^\Theta_r} \int_{\Theta^\perp}
\abs{\f{\Theta^\perp}{u}(t\Theta+\xi_{\Theta^\perp})}^2
d\xi_{\Theta^\perp} dt
\\
&\qquad \le \int_{D^\Theta_r} dt
\int_{\Theta^\perp} \sup_{t'}
\abs{\f{\Theta^\perp}{u}(t'\Theta+\xi_{\Theta^\perp})}^2
d\xi_{\Theta^\perp} \le d_r \norm{ \f{\Theta^\perp}{u}
}_{\Theta(\infty,2)}^2
\end{align*}
where we have used the Plancherel formula and the
hypothesis that the diameter of $D_r$ is at most
\(d_{r}\), which implies that \(D^\Theta_r\) is contained
in a union of intervals of length at most \(d_{r}\). The
proof of (\ref{eq:71}) is similar, and makes use of the
fact that \(D^\Theta_s\) is contained in a union of
intervals of length less than \(d_{s}\).
\[
\int_{-\infty}^\infty \abs{
\f{\Theta^\perp}{f}(t\Theta+\xi_{\Theta^\perp})} dt \le
\sqrt{d_{s}} \norm{
\f{\Theta^\perp}{f}(t\Theta+\xi_{\Theta^\perp}) }_{L^2(dt)}
\]
The inequality (\ref{eq:71}) follows by taking the
$L^2(d\xi_{\Theta^\perp})$-norm and using Fubini's theorem, and then
the Plancherel formula.
\end{proof}
\begin{remark}\label{mixedToUniformNormsLp}
Let $\frac{1}{q}+\frac{1}{p}=1$ and $q \geqslant 2$. An
analogous argument shows that
$\norm{u}_{L^q(D_r)} \le d_r^{1/q} \norm{ \f{\Theta^\perp}{u}
}_{\Theta(\infty,p)}$ and $\norm{ \f{\Theta^\perp}{f} }_{\Theta(1,p)}
\le d_s^{1/p} \norm{ \f{}{f} }_{L^p(\mathbb{R}^n)}$.
\end{remark}
\section{Fourier Multiplier Estimates}
\begin{definition}\label{multiplierDef}
Let $\Psi:\mathbb{R}^n\to\mathbb{C}$ be locally integrable. We define the
Fourier multiplier $M_\Psi$ as
the operator
\[
M_\Psi f = \finv{}{\{\Psi \f{}{f}\}}.
\]
\end{definition}
Because our estimates rely on decompositions of sources
similar to \eqref{eq:85} where \(f_{j}= M_{\Psi_{j}}f\)
must satisfy conditions similar to \eqref{eq:65}
and the estimate \eqref{eq:67}, we need to establish the
boundedness of these Fourier multipliers on the mixed
norms of the partial Fourier transforms of the sources,
i.e. on \(\norm{\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{f(t,\xi)}}_{\Theta(1,2)}\). Our first
lemma tells us that
\(\norm{\finv{\Theta}{\Psi}}_{\Theta(1,\infty)}<\infty\)
is enough to guarantee such a bound.
\begin{lemma}\label{multiplierNorm}
Let $\finv{\Theta}{\Psi} \in \Theta(1,\infty)$. Then
\[
\norm{ \f{\Theta^\perp}{M_\Psi f} }_{\Theta(1,p)} \le
\frac{1}{\sqrt{2\pi}} \norm{ \finv{\Theta}{\Psi} }_{\Theta(1,\infty)}
\norm{ \f{\Theta^\perp}{f} }_{\Theta(1,p)}.
\]
\end{lemma}
\begin{proof}
Write $\f{\Theta^\perp}{M_\Psi f} = \finv{\Theta}{\{ \Psi \f{}{f} \}}
= \frac{1}{\sqrt{2\pi}} \finv{\Theta}{\Psi} \ast_t
\f{\Theta^\perp}{f}$. Then take the $L^1(dt)$-norm and use Young's
inequality for convolutions. The result follows then by taking the
$L^p(d\xi_{\Theta^\perp})$-norm.
\end{proof}
Our Fourier multipliers will not be Schwartz class
functions. They will be smooth, but will always be
constant in a direction $\nu$, so the integrability
properties necessary to verify that the
\(\Theta(1,\infty)\) norm is finite may be a bit subtle,
and will depend on the relation between the direction
\(\nu\) of that coordinate and the direction \(\Theta\)
which defines the relevant norm. The estimates will be
simplest when the directions \(\nu\) and \(\Theta\)
coincide, or are perpendicular. Because second order
operators have a convenient normal form, the decompositions in
Section \ref{sec:order2} will only require multipliers
with \(\nu\) and \(\Theta\) either identical or
perpendicular. Higher order operators do not admit such
simple normal forms, so the decompositions are based on
abstract algebraic properties, and we cannot, in general, restrict to
these simple cases. The next proposition, and
its corollary, tell us how to reduce the
\(\Theta(1,\infty)\) estimate for the norm of a multiplier
that is
constant in the \(\nu\) direction, to the case where
\(\nu\) and \(\Theta\) are either parallel or perpendicular.\\
We need a little notation first.
Define $\nu_\perp$ to be a unit vector in the \((\Theta,\nu)\) plane
perpendicular to \(\nu\) so that the pair
\((\nu,\nu_{\perp})\) is positively oriented, and
define $\Theta_\perp$ analogously to be the
unit vector in that plane perpendicular to
\(\Theta\). Finally, let $\xi_{\perp\perp}$ denote the
component of any \(\xi\in\mathbb{R}^{n}\) perpendicular to the
\((\Theta,\nu)\) plane.
\begin{proposition}\label{fourierTwoDirections}
Let $\nu \in \mathbb{S}^{n-1}(\mathbb{R}^n)$ and $\psi \in
\mathscr{S}(\nu^\perp)$. Define
\begin{eqnarray}\nonumber
\Psi(\sigma\nu + \xi_{\nu^\perp}) = \psi(\xi_{\nu^\perp}) \qquad
\forall \sigma \in \mathbb{R}.
\end{eqnarray}
If $\Theta \not\parallel \nu$ and $\Theta \cdot \nu = \cos \alpha$,
$\alpha \in (0,\pi)$ then,
\begin{equation}
\finv{\Theta}{\Psi} (t \Theta + \xi_{\Theta^\perp}) = \frac{e^{i \ell
t \cot\alpha}} {\sin\alpha} \finv{\nu_\perp}{\psi} \big(
\frac{t}{\sin\alpha} \nu_\perp + \xi_{\perp\perp} \big),
\end{equation}
where $\ell = \xi_{\Theta^\perp}\cdot\Theta_\perp$ and
\(\xi_{\perp\perp}\) is the component of \(\xi=t
\Theta + \xi_{\Theta^\perp}\) perpendicular to the \((\Theta,\nu)\) plane.
If $\Theta \parallel \nu$ then
\begin{eqnarray}
\label{eq:87}
\finv{\Theta}{\Psi} (t\Theta+\xi_{\Theta^\perp}) = \sqrt{2\pi}
\delta_0(t) \psi(\xi_{\Theta^\perp}).
\end{eqnarray}
\end{proposition}
\begin{proof}
According to Definition \ref{fourierTransformDef}
\[
\finv{\Theta}{\Psi} (t\Theta+\xi_{\Theta^\perp}) =
\frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{it\tau}
\Psi(\tau\Theta+\xi_{\Theta^\perp}) d\tau.
\]
It is easy to check that
\begin{eqnarray*}
\Theta &=& \cos\alpha \,\nu + \sin\alpha
\,\nu_\perp, \label{nuTdef}
\\
\nu &=& \cos\alpha \,\Theta + \sin\alpha \,\Theta_\perp,
\label{thetaTdef}
\\
\Theta_\perp &=& \sin \alpha \,\nu - \cos \alpha
\,\nu_\perp,
\\\na{and therefore that}
\tau\Theta + \xi_{\Theta^\perp} &=&
(\tau\cos\alpha + \ell\sin\alpha)\nu + (\tau\sin\alpha -
\ell\cos\alpha)\nu_\perp + \xi_{\perp\perp}.
\end{eqnarray*}
Because $\Psi$ is
constant in the direction $\nu$ and equal to
$\psi$ on $\nu^\perp$,
\begin{align*}
\finv{\Theta}{\Psi} (t\Theta+\xi_{\Theta^\perp}) &=
\frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{it\tau} \psi\big(
(\tau\sin\alpha - \ell\cos\alpha)\nu_\perp + \xi_{\perp\perp} \big)
d\tau \\ & = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty
e^{it\left(\frac{\tau'}{\sin\alpha} + \ell\cot\alpha\right)}
\psi(\tau'\nu_\perp + \xi_{\perp\perp}) \frac{d\tau'}{\sin\alpha} \\
& = \frac{e^{i\ell t \cot\alpha}} {\sin\alpha}
\finv{\nu_\perp}{\psi} \big( \frac{t}{\sin\alpha}\nu_\perp +
\xi_{\perp\perp} \big).
\end{align*}
If $\Theta\parallel\nu$, then \(\Psi(t\Theta+\tp{\xi})\)
is independent of \(t\), so \eqref{eq:87} follows from
the fact that the one dimensional Fourier transform of
the constant function is the Dirac delta.
\end{proof}
\begin{corollary}\label{constantInDirectionNorm}
With the notation of Proposition \ref{fourierTwoDirections} and
$\Theta\not\parallel\nu$ we have
\begin{eqnarray}\label{eq:88}
\norm{\finv{\Theta}{\Psi}}_{\Theta(1,\infty)} =
\norm{\finv{\nu_\perp}{\psi}}_{\nu_\perp(1,\infty)},
\end{eqnarray}
and therefore
\begin{equation}
\norm{ \f{\Theta^\perp}{M_\Psi f} }_{\Theta(1,p)} \le
\norm{\finv{\nu_\perp}{\psi}}_{\nu_\perp(1,\infty)} \norm{
\f{\Theta^\perp}{f} }_{\Theta(1,p)}.
\end{equation}
If $\Theta\parallel\nu$, then
\begin{equation}
\norm{ \f{\Theta^\perp}{M_\Psi f} }_{\Theta(1,p)} \le
\sup_{\Theta^\perp} \abs{\psi} \norm{ \f{\Theta^\perp}{f}
}_{\Theta(1,p)}.
\end{equation}
\end{corollary}
\begin{remark}
The \(\nu_\perp(1,\infty)\) norm which appears in
(\ref{eq:88}) is analogous to the
\(\Theta_\perp(1,\infty)\) norm, but is defined on functions of
one fewer variable. Recall that \(\psi\) is defined on
the \(\nu^{\perp}\) hyperplane, and \(\nu_{\perp}\) is
a unit vector in that hyperplane perpendicular to
\(\Theta\).
Thus \(\finv{\nu_\perp}{\psi}\) is a function of
\(\tau\nu_{\perp}+ \xi_{\perp\perp}\), and the
\(\nu_\perp(1,\infty)\) norm means
the supremum over \(\xi_{\perp\perp}\) of the
\(L^{1}(d\tau)\) norm.
\end{remark}
\begin{proof}
We have $0<\alpha<\pi$ and so $\sin\alpha>0$. Hence
\[
\int_{-\infty}^\infty
\abs{\finv{\Theta}{\Psi}(t\Theta+\xi_{\Theta^\perp})} dt =
\int_{-\infty}^\infty \abs{\finv{\nu_\perp}{\psi}(t'\nu_\perp +
\xi_{\perp\perp})} dt'
\]
by a change of variables. Then we can take the supremum over
$\xi_{\Theta^\perp} \in \Theta^\perp$, which will give the same result
as the supremum of $\xi_{\perp\perp}$ over $\nu^\perp \cap
\nu_\perp^\perp$. The multiplier estimate follows from Lemma
\ref{multiplierNorm}. For the second case note that $M_\Psi f =
\finv{\Theta^\perp}{\{\psi(\xi_{\Theta^\perp})
\f{\Theta^\perp}{f}\}}$. The claim follows directly.
\end{proof}
\section{Estimates for 2\textsuperscript{nd} order operators}\label{sec:order2}
We treat a second order constant coefficient partial
differential operator \(P(D)\),
with no double characteristics, i.e. no
simultaneous real root of \(p(\xi) = 0\)
and \(\nabla p(\xi)=0\). The main result of this
section is:
\begin{theorem}\label{th:main}
Let \(P(D)\) be a single real second order constant
coefficient partial differential operator on \(\mathbb{R}^{n}\)
with no real double
characteristics. Then there exists a constant
\(C(P,n)\) such that, for every open bounded
\(D_{s}\subset\mathbb{R}^{n}\), and every \(f\in L^2(D_{s})\),
there is a \(u\in L^2_{loc}(\mathbb{R}^{n})\) satisfying
\begin{eqnarray}
\label{eq:1}
P(D)u=f
\end{eqnarray}
such that, for any bounded domain \(D_{r}\subset\mathbb{R}^{n}\)
\begin{eqnarray}
\label{eq:2}
||u||_{L^2(D_{r})}\le C \sqrt{d_{r}d_{s}}||f||_{L^2(D_{s})}
\end{eqnarray}
where \(d_{j}\) is the diameter of \(D_{j}\),
the supremum over all lines of the length of the
intersection of the line with \(D_{j}\); i.e.
\begin{eqnarray}
\nonumber
d_{j}= \sup_{\mathrm{lines}\ l}\mu_{1}(l\cap D_{j}).
\end{eqnarray}
\end{theorem}
We begin the proof by writing the second order operator in a simple
normal form.
\begin{lemma}\label{th:1}
After an orthogonal change of coordinates and a
rescaling:
\begin{eqnarray}
\label{eq:4}
P(D) = \sum\epsilon_{j}\left(D_{j}\right)^{2}
+ 2\sum\alpha_{j}D_{j} + B
\end{eqnarray}
where each \(\epsilon_{j}\) equals one of \(0,1,-1\);
\(\alpha_{j}\in\mathbb{R}\), and \(B\in\mathbb{R}\).
\end{lemma}
\begin{proof}
This is a statement about the principal (second order) part $P_2$ of
the operator. \(P_{2}(\xi)\)
is a real quadratic form with eigenvalues
\(-\lambda_{i}\)
and eigenvectors \(e_{j}\). If we introduce coordinates
\begin{equation}\nonumber
x = \sum x_{j}e_{j}
\end{equation}
then
\[
P_{2}(D) = \sum\lambda_{j}\left(D_{j}\right)^{2}
\]
After the rescaling
\[
x_{j}=\sqrt{|\lambda_{j}|}\,y_{j}\qquad
\frac{\partial}{\partial y_{j}}=\sqrt{|\lambda_{j}|}\,D_{j}
\]
(coordinates with \(\lambda_{j}=0\) are left unchanged),
the second order part takes the desired form in (\ref{eq:4}).
\end{proof}
Next, we dismiss the simple cases.
\begin{proposition}\label{prop4.3}
If some \(\epsilon_{j}=0\) and the corresponding \(\alpha_{j}\ne0\),
then Theorem \ref{th:main} is true.
\end{proposition}
\begin{proof}
Without loss of generality, we may assume that
\(j=1\). We do a partial Fourier transform in the
\(x_{1}^{\perp}\) plane, i.e. with
\(\tilde{x}=(x_{2},\ldots,x_{n})\) and
\(\xi=(\xi_{2},\ldots,\xi_{n})\). We let \(\Theta_{1}\)
denote the unit vector in the \(x_{1}\) direction, and
let \(w=\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{u}\) and \(g=\frac{1}{2}\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{f}\), i.e.
\begin{eqnarray}
\nonumber
w(x_{1},\xi) = \frac{1}{(2\pi)^{(n-1)/2}}
\int_{\mathbb{R}^{n-1}}e^{-i\xi\cdot\tilde{x}}u(x_{1},\tilde{x})dx_{2}\ldots dx_{n}
\end{eqnarray}
Then \(w\) satisfies
\begin{eqnarray}
\label{eq:8}
\alpha_{1}\frac{\partial w}{\partial x_{1}} + Q(\xi)w = g(x_{1},\xi)
\end{eqnarray}
where
\(Q(\xi) =
-\frac{1}{2}\sum_{j=2}^{n}\epsilon_{j}\xi_{j}^{2} + i\sum_{j=2}^n \alpha_j\xi_j +\frac{1}{2}B\). We
may write an explicit formula for \(w\):
\begin{eqnarray}
\label{eq:9}
w(x_1,\xi) = \frac{1}{\alpha_{1}}
\begin{cases}
\int_{-\infty}^{x_{1}}e^{-Q(\xi)\frac{x_1-s}{\alpha_{1}}}g(s,\xi)ds
&\mathrm{if}\ \frac{\mathbb{R}e{Q(\xi)}}{\alpha_{1}}>0
\\
-\int_{x_{1}}^{\infty}e^{-Q(\xi)\frac{x_1-s}{\alpha_{1}}}g(s,\xi)ds
&\mathrm{if}\ \frac{\mathbb{R}e{Q(\xi)}}{\alpha_{1}}<0
\end{cases}
\end{eqnarray}
Our formula insures that, on the domain of integration,
\begin{equation}\label{eq:10}
|e^{-Q(\xi)\frac{x_{1}-s}{\alpha_{1}}}|<1
\end{equation}
and therefore, for each fixed \(\xi\), that
\begin{equation}\nonumber
||w(\cdot,\xi)||_{L^\infty}\le
\frac{1}{\alpha_{1}}||g(\cdot,\xi)||_{L^1}
\end{equation}
so that, squaring and integrating with respect
to \(\xi\) gives
\begin{eqnarray}\label{eq:31}
||w||_{L^2(d\xi,L^\infty(dx_{1}))}\le
\frac{1}{\alpha_{1}}||g||_{L^2(d\xi,L^1(dx_{1}))}
\\\na{or, using the notation of mixed norms}\nonumber
||\ensuremath{{\mathscr{F}}_{\Theta_{1}^{\perp}}}{u}||_{\Theta_{1}(\infty,2)}\le
\frac{1}{2\alpha_{1}}||\ensuremath{{\mathscr{F}}_{\Theta_{1}^{\perp}}}{f}||_{\Theta_{1}(1,2)}
\end{eqnarray}
with \(\Theta_{1}\) equal to the unit vector in the
\(x_{1}\) direction.
This combines with Lemma \ref{mixedToUniformNorms} to yield the
estimate (\ref{eq:2}).
\end{proof}
The proof of Theorem \ref{th:main} will use partitions of
unity and coordinate changes to reduce to a case very
similar to (\ref{eq:8}) and (\ref{eq:9}) and prove
estimates of the form in (\ref{eq:6}). The main argument treats
the case in which no
\(\epsilon_{j}\) in (\ref{eq:4}) is zero. We have already
treated the case where some \(\epsilon_{j}=0\) and the
corresponding \(\alpha_{j}\ne 0\). If, for one or more
values of \(j\), \(\epsilon_{j}=\alpha_{j}=0\), then the
PDE in (\ref{eq:1}) is independent of the \(x_{j}\)
variables. In this case, we may obtain the inequality
(\ref{eq:6}) from the corresponding inequality in the
lower dimensional case. We record this in the proposition
below.
\begin{proposition}\label{prop4.4}
Let \(x=(x_{1},\tilde{x},y)\), and suppose that, for
each \(y\),
\begin{equation}\label{eq:14}
||u(\cdot,\cdot,y)||_{L^2(d\tilde{x},L^\infty(dx_{1}))}\le
C ||f(\cdot,\cdot,y)||_{L^2(d\tilde{x},L^1(dx_{1}))}
\end{equation}
then
\begin{equation}
||u||_{L^2(d\tilde{x} dy,L^\infty(dx_{1}))}\le
C ||f||_{L^2(d\tilde{x} dy,L^1(dx_{1}))}
\end{equation}
\end{proposition}
\begin{proof}
Just square both sides of (\ref{eq:14}) and integrate
with respect to \(y\).
\end{proof}
Henceforth, we will assume that no \(\epsilon_{j}=0\),
and complete the squares in (\ref{eq:4}) to rewrite that
equation as
\begin{eqnarray}
\nonumber
P(D) = \sum_{j=1}^{n}\epsilon_{j}(D_{j}-\beta_{j})^{2}
+ b\qquad;\quad \epsilon_{j}=\pm 1
\end{eqnarray}
where the \(\beta_{j}=-\epsilon_{j}\alpha_{j}\) from
(\ref{eq:4}) and \(b=B-\sum\epsilon_{j}\beta_{j}^{2}\) .
\begin{proposition}
\(P(D)\) has a real double characteristic iff \(b=0\)
and \(\beta=\overline{0}\).
\end{proposition}
\begin{proof}
\begin{align}\label{eq:19}
p(\eta) = \sum\epsilon_j(i\eta_{j}-\beta_{j})^{2}+b
\\ \notag
dp = \sum 2i\epsilon_j(i\eta_{j}-\beta_{j})d\eta_{j}
\end{align}
so that
\[
dp = 0\quad \iff\quad \mathrm{every}\
\eta_{j}=-i\beta_{j}
\]
but, as the \(\beta_{j}\) are real, this can
only happen if
\[
\eta_{j} = \beta_{j}=0
\]
If \(p\) vanishes as well, we must also have \(b=0\).
\end{proof}
We now begin the proof of Theorem \ref{th:main} in
earnest. We intend to use partial Fourier transforms, as
defined
in (\ref{eq:68}). To this end, we will choose special
directions \(\Theta\in\mathbb{R}^{n}\) (the unit vectors
\(\Theta_{k}\) in the coordinate directions
will suffice for the proof of Theorem \ref{th:main}) and
express \(x\in\mathbb{R}^{n}\) as
\begin{eqnarray*}
x = t\Theta + \tp{x}
\end{eqnarray*}
as in \eqref{eq:83}
and
write the dual variable \(\eta\) as
\begin{eqnarray*}
\eta = \tau\Theta + \xi \qquad\mathrm{with\ }
\xi\in\Theta^\perp
\end{eqnarray*}
In these coordinates,
we consider \(p(\eta)\)
as a polynomial \(p(\tau;\xi)\) in \(\tau\)
with coefficients depending on \(\xi\).
We will arrive at the estimate \eqref{eq:2} as long as the
roots of \(p\) are simple. When \(\Theta=\Theta_{k}\),
\(\xi=(\eta_{1},\ldots,\eta_{k-1},\eta_{k+1},\ldots,\eta_{n})\). If
we define
\begin{equation}\label{eq:20}
Q_{k}(\xi) := \sum_{j\ne k}\epsilon_j(i\eta_{j}-\beta_{j})^{2}+b
\end{equation}
then by \eqref{eq:19} we have $p(\tau,\xi) =
\epsilon_k(i\tau-\beta_k)^2 + Q_k(\xi)$ and its roots are
\[
\tau_{\pm} = -i\beta_{k}\pm \sqrt{\epsilon_kQ_{k}(\xi)}
\]
and they are simple as long as
\[
Q_{k}(\xi)\ne 0.
\]
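For orientation (this specialization is not needed in the proof): for the Helmholtz operator of
Section 2 one has \(\epsilon_{j}=1\), \(\beta_{j}=0\) and \(b=k^{2}\), so \eqref{eq:20} gives
\[
Q_{k}(\xi)=k^{2}-\sum_{j\ne k}\eta_{j}^{2}=k^{2}-\abs{\xi_{\Theta_{k}^\perp}}^{2},
\]
recovering exactly the denominators whose zeros had to be avoided in \eqref{eq:61}.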
\begin{proposition}\label{th:ThetaEst}
Suppose that
\begin{eqnarray}
\label{eq:17}
\supp{\widehat{f}(\eta)}\subset \{\eta\in\mathbb{R}^n \big|\ |Q_{k}(\xi)|>\varepsilon\}
\end{eqnarray}
Then there
exists \(u\) solving
\begin{equation}\nonumber
P(D)u=f
\end{equation}
satisfying
\begin{equation}\label{eq:22}
||\ensuremath{{\mathscr{F}}_{\Theta_{k}^{\perp}}}{u}||_{\Theta_{k}(\infty,2)}\le\frac{1}{\sqrt{\varepsilon}}
||\ensuremath{{\mathscr{F}}_{\Theta_{k}^{\perp}}}{f}||_{\Theta_{k}(1,2)}
\end{equation}
\end{proposition}
\begin{proof}
With \(x=t\Theta_{k}+\tp{x}\), we again use
the partial Fourier transform
\begin{eqnarray}
\nonumber
\ensuremath{{\mathscr{F}}_{\Theta_{k}^{\perp}}}{u}(t,\xi) = \frac{1}{(2\pi)^{(n-1)/2}}
\int u(t,\tpk{x})e^{-i\xi\cdot \tpk{x}}d\tpk{x}
\end{eqnarray}
Letting \(w:=\ensuremath{{\mathscr{F}}_{\Theta_{k}^{\perp}}}{u}(t,\xi)\) and \(g=\ensuremath{{\mathscr{F}}_{\Theta_{k}^{\perp}}}{f}(t,\xi)\),
we see that
\begin{equation}\nonumber
\epsilon_k \left(\frac{d}{dt}-\beta_{k}\right)^{2}w
+Q_{k}(\xi)w = g
\end{equation}
which factors as
\[
\left(\frac{d}{dt}-(\beta_{k}+\sqrt{\epsilon_kQ_{k}})\right)
\left(\frac{d}{dt}-(\beta_{k}-\sqrt{\epsilon_kQ_{k}})\right)w =
\epsilon_k g
\]
so that we can write a solution
formula analogous to that in \eqref{eq:59} through
\eqref{eq:69}; i.e.
\begin{eqnarray}
\nonumber
w = \frac{\epsilon_k}{\sqrt{Q_{k}}}\left(\int
e^{(\beta_{k}+\sqrt{\epsilon_kQ_{k}})(t-s)}g(s,\xi)ds -
\int
e^{(\beta_{k}-\sqrt{\epsilon_kQ_{k}})(t-s)}g(s,\xi)ds\right)
\end{eqnarray}
where the limits of integration in the first integral are
\(-\infty<s<t\) for those \(\xi\) that satisfy
\(\mathbb{R}e{(\beta_{k}+\sqrt{\epsilon_kQ_{k}})}>0\) and \(t<s<\infty\)
for \(\xi\) with
\(\mathbb{R}e{(\beta_{k}+\sqrt{\epsilon_kQ_{k}})}<0\). The limits in
the second integral are chosen similarly, based on the
real part of \(\beta_{k}-\sqrt{\epsilon_kQ_{k}}\). We
may choose either set of limits if the real part is
zero.\\
We now obtain the estimate (\ref{eq:22}) just as in
(\ref{eq:10}) through (\ref{eq:31}).
\end{proof}
Our next step is to show that any compactly supported
\(f\in L^2\) can be decomposed into a sum of sources, each
of which will satisfy (\ref{eq:17}) for some
\(\Theta_{k}\). To accomplish this, we let \(\phi(t)\in C^{\infty}(\mathbb{R})\)
be a smooth cutoff taking values in \([0,1]\), equal to 0 for \(|t|<1\) and
1 for \(|t|>2\). We let \(\phi_{\varepsilon}(t) =
\phi(\frac{t}{\varepsilon})\). Again writing \(\eta\in\mathbb{R}^{n}\) as
\begin{eqnarray}
\nonumber
\eta=\tau\Theta_{k}+\xi
\end{eqnarray}
it is natural to define the multiplier
\begin{eqnarray}
\nonumber
\Phi_{k}(\eta) = \phi_{\varepsilon}(|Q_{k}(\xi)|)
\end{eqnarray}
which will equal 0 near the set where \(Q_{k}\) is
small. It is, however, more convenient to define
\begin{eqnarray}
\label{eq:32}
\Phi_{k}(\eta) = \phi_{\varepsilon}(\mathbb{R}e{Q_{k}}) +
\phi_{\varepsilon}(\Im{Q_{k}}) - \phi_{\varepsilon}(\mathbb{R}e{Q_{k}})\phi_{\varepsilon}(\Im{Q_{k}})
\end{eqnarray}
which equals 0 if both \(|\mathbb{R}e{Q_{k}}|<\varepsilon\) and
\(|\Im{Q_{k}}|<\varepsilon\), and equals 1 if either or both is greater
than \(2\varepsilon\) in absolute value. We decompose \(f\) as
\begin{align}
&\widehat{f} = \widehat{f_{1}} + \widehat{f_{2}}+\widehat{f_{3}}+\ldots \widehat{f_{n+1}}&
\notag\\
&= \Phi_{1}\widehat{f} + \Phi_{2}(1-\Phi_{1})\widehat{f}+
\Phi_{3}(1-\Phi_{2})(1-\Phi_{1})\widehat{f}
+\ldots
+\mathop{\prod}\limits_{j=1}^{n}(1-\Phi_{j})\widehat{f}
\label{eq:33}
\end{align}
and solve
\begin{equation}\label{eq:63}
P(D)u_{k}=f_{k},
\end{equation}
noting that the decomposition \eqref{eq:33} guarantees that, for all \(k=1\ldots n\),
\(f_{k}\)
satisfies the hypothesis (\ref{eq:17}) of Proposition
\ref{th:ThetaEst} with direction vector \(\Theta_{k}\). We will use that proposition to
construct and estimate the \(u_{k}\).
To estimate the solution to \(P(D)u_{n+1}=f_{n+1}\)
we will need the following:
\begin{lemma}\label{th:zqeps}
Let
\begin{eqnarray}
Z^{\varepsilon}_{Q_{k}} = \{\eta\in\mathbb{R}^n \big|\
|\mathbb{R}e{Q_{k}(\xi)}|<\varepsilon\ \mathrm{and}\ |\Im{Q_{k}(\xi)}|<\varepsilon\}
\end{eqnarray}
then \(\displaystyle \mathop{\cap}\limits_{k=1}^{n}Z^{\varepsilon}_{Q_{k}}\) is bounded
with diameter less than \(4\sqrt{2n\varepsilon}\).
Moreover, if P has no double characteristics, and
\(\varepsilon\) is chosen small enough,
\begin{eqnarray}
\label{eq:36}
\mathop{\cap}\limits_{k=1}^{n}Z^{\varepsilon}_{Q_{k}}\cap Z^{\varepsilon}_{P}=\emptyset
\end{eqnarray}
where $Z^\varepsilon_{P}$ is defined similarly as $Z^\varepsilon_{Q_k}$.
\end{lemma}
Before we begin the proof we record one simple lemma,
which we will use here and again in the proof of Proposition
\ref{th:qEst}.
\begin{lemma}\label{th:simplequad}
Suppose that \(q(t)=t^{2}-B\) and
\(Z^{q}_{\delta}=\{t\in\mathbb{R} \big| |q(t)|<\delta\}\), then
\begin{eqnarray}
\mu\left(Z^{q}_{\delta}\right)\le 4\min\left(\sqrt{\delta},\frac{\delta}{\sqrt{B}}\right)
\end{eqnarray}
\end{lemma}
\begin{proof}
If \(B<-\delta\), \(Z^{q}_{\delta}\) is empty, so
assume that is not the case and $t\in Z^{q}_{\delta}$. Then
\begin{eqnarray}
\nonumber
\max(0,B-\delta)\leqslant t^{2}<B+\delta
\end{eqnarray}
so \(\pm t\) belongs to the interval
\(\left[\sqrt{\max(0,B-\delta)},\sqrt{B+\delta}\right]\), which
has length
\begin{eqnarray}
\nonumber
\sqrt{B+\delta} - \sqrt{\max(0,B-\delta)} &=&
\frac{2\delta}{\sqrt{B+\delta} + \sqrt{\max(0,B-\delta)}}
\\
&\le&\frac{2\delta}{\max(\sqrt{B},\sqrt{\delta})}
\end{eqnarray}
\end{proof}
\begin{proof}[Proof of Lemma \ref{th:zqeps}]
If
\(\eta\in\mathop{\cap}\limits_{k=1}^{n}Z^{\varepsilon}_{Q_{k}}\),
we will show that each coordinate \(\eta_{m}\)
belongs to the union of two intervals, with total length
at most \(4\sqrt{2\varepsilon}\), so that the diameter of the set
is no more than \(\sqrt{n}\) times \(4\sqrt{2\varepsilon}\).
For
\(\eta\in\mathop{\cap}\limits_{k=1}^{n}Z^{\varepsilon}_{Q_{k}}\),
\[
\left|\sum_{k\ne m}Q_{k}\right| \le (n-1)\varepsilon
\]
and
\begin{align*}
\sum_{k\ne m}Q_k &= \sum_{k\ne m} \left[ \sum_{j\ne k}
\epsilon_j (i\eta_j - \beta_j)^2 + b \right]\\
&= (n-2) \left[ \sum_{j\ne m} \epsilon_j (i\eta_j-\beta_j)^2 + b \right]
+(n-1) \epsilon_m (i\eta_m-\beta_m)^2 + b\\
&= (n-2) Q_m +(n-1) \epsilon_m (i\eta_m-\beta_m)^2 + b,
\end{align*}
so
\[
|(i\eta_m-\beta_m)^2 \pm b/(n-1)| \le \frac{(n-1)\varepsilon + (n-2)\varepsilon}{n-1} < 2\varepsilon.
\]
The real part of \((i\eta_{m}-\beta_{m})^{2}\pm b/(n-1)\) is
\(-\eta_{m}^{2}+B\) with \(B = \beta_{m}^{2}\pm b/(n-1)\), so
we may invoke Lemma \ref{th:simplequad} with
\(\delta=2\varepsilon\) to conclude that \(\eta_{m}\) belongs to a
set with diameter at most \(4\sqrt{2\varepsilon}\).\\
We perform a similar calculation to establish
(\ref{eq:36}). The absence of real double characteristics means
that either \(b\) or some \(\beta_{j}\) in
(\ref{eq:19}) is nonzero.
For \(\eta\in\mathop{\cap}\limits_{k=1}^{n}Z^{\varepsilon}_{Q_{k}}\),
\begin{eqnarray*}
\left|\sum_{k=1}^{n}Q_{k}\right|&\le& n\varepsilon
\\
\left|(n-1)\sum_{k=1}^{n}\epsilon_{k}(i\eta_{k}-\beta_{k})^{2}+nb\right|&\le&
n\varepsilon
\\
\left|(n-1)p(\eta)+b\right|&\le& n\varepsilon
\\
\left|p(\eta)\right|&\ge& \frac{|b|-n\varepsilon}{n-1}
\\
&\ge&\varepsilon
\end{eqnarray*}
as long as \(|b|>0\) and \(\varepsilon\) is chosen sufficiently smaller
than \(|b|\) . If \(b=0\), then some \(\beta_{k}\ne0\) and
\begin{eqnarray*}
p-Q_{k} &=& \epsilon_k(i\eta_{k}-\beta_{k})^{2}
\\
|p|&\ge&|i\eta_{k}-\beta_{k}|^{2}-|Q_{k}|
\\
&\ge& \beta_{k}^{2}-\varepsilon
\\
&\ge&\varepsilon
\end{eqnarray*}
for \(\varepsilon\) sufficiently smaller than \(\beta_{k}^{2}\).
\end{proof}
Proposition \ref{th:ThetaEst} gives us the estimates
\begin{eqnarray}
\nonumber
||\f{\Theta_k^\perp}{u_{k}}||_{\Theta_{k}(\infty,2)}\le\frac{1}{\sqrt{\varepsilon}}
||\f{\Theta_k^\perp}{f_{k}}||_{\Theta_{k}(1,2)}
\end{eqnarray}
for \(k=1\ldots n\). To estimate \(u_{n+1}\), we prove
\begin{proposition}\label{th:unplus1}
Suppose that \(\supp \widehat{f}\) has diameter at most
\(d \) , and further that \(|P(\eta)|>\varepsilon\) on
\(\supp \widehat{f}\). Then
\[
u := \mathscr{F}^{-1}\left(\frac{\widehat{f}}{P}\right)
\qquad
\mathrm{solves}
\qquad
P(D)u = f
\]
and, for any unit vector \(\Theta\),
\begin{equation}
||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{u}||_{\Theta(\infty,2)} \le \frac{d}{\varepsilon}||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{f}||_{\Theta(1,2)}
\end{equation}
\end{proposition}
\begin{proof} We write \(\eta=\tau\Theta+\xi\), and inverse
Fourier transform in the \(\Theta\) direction, obtaining
\begin{equation}\nonumber
|\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{u}(t,\xi)| \le \left| \finv{\Theta}{\left\{ \frac{\chi_{\supp \widehat{f}}}{P} \right\}}
\ast_t \f{\Theta^\perp}{f} \right| \le \frac{d}{\varepsilon} \int | \f{\Theta^\perp}{f}(t',\xi)| dt'
\end{equation}
so for each fixed \(\xi\),
\[
||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{u}(\cdot,\xi)||_{L^{\infty}}\le \frac{d}{\varepsilon}||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{f}(\cdot,\xi)||_{L^1(dt)}
\]
Taking \(L^{2}(d\xi)\) norms of both sides yields
\[
||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{u}||_{\Theta(\infty,2)} \le \frac{d}{\varepsilon}||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{f}||_{\Theta(1,2)}
\]
\end{proof}
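Although not needed for the proof, the construction above is easy to
experiment with numerically. The following sketch is our own
illustration; the symbol \(P(\eta)=\eta_{1}^{2}-\eta_{2}^{2}-1\), the
grid, and the cutoff level are arbitrary choices. It divides by the
symbol on the Fourier side after discarding the frequencies where
\(|P|\le\varepsilon\), which is the role played here by the support
assumption on \(\widehat{f}\).
\begin{verbatim}
import numpy as np

# Sketch of u = F^{-1}( f_hat / P ) on a periodic grid, with a sample symbol.
n, L, eps = 256, 20.0, 0.5
x = np.linspace(-L/2, L/2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2))                   # smooth, essentially compactly supported source

eta = 2*np.pi*np.fft.fftfreq(n, d=L/n)       # Fourier grid
E1, E2 = np.meshgrid(eta, eta, indexing="ij")
P = E1**2 - E2**2 - 1.0                      # sample symbol (arbitrary choice)

f_hat = np.fft.fft2(f)
good = np.abs(P) > eps                       # keep only frequencies where |P| > eps
f_hat_cut = np.where(good, f_hat, 0.0)
u = np.fft.ifft2(f_hat_cut / np.where(good, P, 1.0))

# residual of P(D)u against the cut-off part of f (should be ~ machine precision)
res = np.fft.ifft2(P*np.fft.fft2(u)) - np.fft.ifft2(f_hat_cut)
print(np.max(np.abs(res)))
\end{verbatim}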
To complete the proof of Theorem \ref{th:main}, we need
only show that, for \hbox{\(k=1\ldots n+1\)},
\begin{eqnarray}
\label{eq:39}
||\f{\Theta_k^\perp}{f_{k}}||_{\Theta_{k}(1,2)} \le C ||\f{\Theta_k^\perp}{f}||_{\Theta_{k}(1,2)}
\end{eqnarray}
We will then apply Lemma \ref{mixedToUniformNorms} to conclude that
each \(u_{k}\) satisfies
\begin{equation}\nonumber
||u_{k}||_{L^2(D_{r})} \le \sqrt{d_{r}}||\f{\Theta_k^\perp}{u_{k}}||_{\Theta_k(\infty,2)}
\end{equation}
and
\begin{eqnarray}\nonumber
||\f{\Theta_k^\perp}{f}||_{\Theta_{k}(1,2)} \le C
\sqrt{d_{s}}||f||_{L^2(D_{s})}\nonumber
\end{eqnarray}
Recalling that \(u=\sum u_{k}\) in
(\ref{eq:63}) will then finish the proof of Theorem
\ref{th:main}. \textit{Note that we can't apply Lemma \ref{mixedToUniformNorms}
directly to the \(f_{k}\) because their
supports need not be contained in the support of \(f\).}\\
In order to establish (\ref{eq:39}) for \(f_{k}\) defined
as in \eqref{eq:33}, we need to estimate
\(||\f{\Theta_k^\perp}{M_{\Psi_{j}}f}||_{\Theta_{k}(1,2)}\) for all \(j\) and
\(k\). The case \(j=k\) is the simplest.
\begin{lemma}\label{th:jequalsk} Let \(\Psi_{k}\) denote either \(\phi_{\varepsilon}(\mathbb{R}e{Q_{k}})\) or
\(\phi_{\varepsilon}(\Im{Q_{k}})\). Then,
\begin{eqnarray}\label{eq:44}
||\f{\Theta_k^\perp}{M_{\Psi_{k}}f}||_{\Theta_{k}(1,2)}\le
||\f{\Theta_k^\perp}{f}||_{\Theta_{k}(1,2)}
\end{eqnarray}
\end{lemma}
\begin{proof}
Recall that, writing \(\eta= \tau\Theta_{k}+\xi\) with \(\xi\in\Theta_{k}^{\perp}\),
\begin{eqnarray*}
Q_{k}(\eta)=Q_{k}(\tau\Theta_{k}+\xi)=Q_{k}(\xi)
\end{eqnarray*}
so that \(Q_{k}\), and therefore \(\Psi_{k}\) does not
depend on \(\tau\). Hence
\begin{eqnarray*}
\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{M_{\Psi_{k}}f} &=&
\Psi_{k}(\xi)\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{f}(t,\xi)
\\
||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{M_{\Psi_{k}}f}||_{\Theta_{k}(1,2)}&\le&
||\Psi_{k}(\xi)||_{L^{\infty}}\;||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{f}||_{\Theta_{k}(1,2)}
\end{eqnarray*}
and (\ref{eq:44}) now follows on noting that
\(|\phi_{\varepsilon}|\le 1\).
\end{proof}
According to Lemma \ref{multiplierNorm}, we may establish
(\ref{eq:39}) for \(j\ne k\) by proving that
\(||\finv{\Theta_{k}}{\Psi_{j}}||_{\Theta_{k}(1,\infty)}\)
is bounded. We address this in the next few lemmas.
\begin{lemma}\label{th:BasicEst}
Let \(q(t)\) be a real valued function of \(t\in\mathbb{R}\),
and let
\begin{eqnarray}
\nonumber
\Phi(t)&=&\phi_{\varepsilon}(q(t))
\\
Z^{\varepsilon}_{q}&=&\{t\in\mathbb{R} \big|\ |q(t)|<\varepsilon\}
\end{eqnarray}
Suppose that
\begin{eqnarray}
\mu(Z^{\varepsilon}_{q}) \le \mu_{1}
\qquad
\sup_{t\in
Z^{\varepsilon}_{q}}\left|\frac{dq}{dt}\right|\le M_{1}
\qquad
\sup_{t\in
Z^{\varepsilon}_{q}}\left|\frac{d^{2}q}{dt^{2}}\right|\le M_{2}
\end{eqnarray}
then
\begin{eqnarray}
||\widecheck{\Phi}||_{L^{1}} \le 2\mu_{1}
\left[\frac{M_{1}}{\varepsilon}+\sqrt{\frac{M_{2}}{\varepsilon}}\right]
\end{eqnarray}
where \(\widecheck{\Phi}\) denotes the (one dimensional)
inverse Fourier transform of \(\Phi\).
\end{lemma}
\begin{proof}
\begin{eqnarray*}
\left|\widecheck{\Phi}(\tau)\right| &=&
\left| \frac{1}{\sqrt{2\pi}} \int
e^{-it\tau}\phi_{\varepsilon}(q(t))dt\right|
\le\mu(Z^{\varepsilon}_{q})\le\mu_{1}
\end{eqnarray*}
Two integrations by parts (the integrand below being supported in
\(Z^{\varepsilon}_{q}\)) yield
\begin{eqnarray*}
\left|\widecheck{\Phi}(\tau)\right| &=& \left|\frac{-1}{\tau^{2}\sqrt{2\pi}}\int
e^{it\tau}\left[\phi_{\varepsilon}'\;q''+
\phi_{\varepsilon}''\;(q')^{2}\right]dt\right|
\le\frac{\mu_{1}}{\tau^{2}}
\left[\frac{M_{2}}{\varepsilon}+\left(\frac{M_{1}}{\varepsilon}\right)^{2}\right]
\end{eqnarray*}
so that
\[
\left|\widecheck{\Phi}(\tau)\right| \le \mu_{1}
\begin{cases}
1, &\tau \le
\left[\frac{M_{2}}{\varepsilon}+\left(\frac{M_{1}}{\varepsilon}\right)^{2}\right]^{\frac{1}{2}}
\\
\frac{\left[\frac{M_{2}}{\varepsilon}+\left(\frac{M_{1}}{\varepsilon}\right)^{2}\right]}{\tau^{2}},
& \tau \ge \left[\frac{M_{2}}{\varepsilon}+\left(\frac{M_{1}}{\varepsilon}\right)^{2}\right]^{\frac{1}{2}}
\end{cases}
\]
which implies that
\[
\int\left|\widecheck{\Phi}(\tau)\right|d\tau
\le
2\mu_{1}\left[\frac{M_{2}}{\varepsilon}+\left(\frac{M_{1}}{\varepsilon}\right)^{2}\right]^{\frac{1}{2}}
\le 2\mu_{1}\left[\left(\frac{M_{2}}{\varepsilon}\right)^{\frac{1}{2}}+\frac{M_{1}}{\varepsilon}\right]
\]
\end{proof}
An immediate corollary is:
\begin{corollary}\label{th:BasicEstCor}
Let \(Q(t,\xi)\) be a real valued function of \(t\in\mathbb{R}\), \(\xi\in\mathbb{R}^n\),
and let
\begin{eqnarray}
\nonumber
\Phi(t\Theta+\xi)&=&\phi_{\varepsilon}(Q(t,\xi))
\\
Z^{\varepsilon}_{Q}(\xi)&=&\{t\in\mathbb{R}\big|\ |Q(t,\xi)|<\varepsilon\}
\end{eqnarray}
Suppose that
\begin{eqnarray}
\mu(Z^{\varepsilon}_{Q})\le \mu_{1}(\xi)
\qquad
\sup_{t\in
Z^{\varepsilon}_{Q}}\left|\frac{dQ}{dt}\right|\le M_{1}(\xi)
\qquad
\sup_{t\in
Z^{\varepsilon}_{Q}}\left|\frac{d^{2}Q}{dt^{2}}\right|\le M_{2}(\xi)
\end{eqnarray}
then
\begin{equation}\label{eq:54}
||\finv{\Theta}{\Phi}||_{\Theta(1,\infty)} \le \sup_{\xi}\mu_{1}(\xi)
\left[\frac{M_{1}(\xi)}{\varepsilon}+\sqrt{\frac{M_{2}(\xi)}{\varepsilon}}\right]
\end{equation}
\end{corollary}
Finally, we specialize to \(\Theta=\Theta_{j}\) and
estimate the quantities on the right hand side of (\ref{eq:54}).
\begin{lemma}\label{th:qEst}
Let \(Q(t,\xi) = \mathbb{R}e{Q_{k}(t\Theta_{j}+\xi)}\) or \(Q(t,\xi) =
\Im{Q_{k}(t\Theta_{j}+\xi)}\), with
\begin{eqnarray}
\mu(Z^{\varepsilon}_{Q})=:\mu_{1}(\xi)
\qquad
\sup_{t\in
Z^{\varepsilon}_{Q}}\left|\frac{dQ}{dt}\right|=:M_{1}(\xi)
\qquad
\sup_{t\in
Z^{\varepsilon}_{Q}}\left|\frac{d^{2}Q}{dt^{2}}\right|=:M_{2}(\xi)
\end{eqnarray}
then
\begin{eqnarray}
\label{eq:43}
\sup_{\xi}\mu_{1}(\xi)
\left[\frac{M_{1}(\xi)}{\varepsilon}+\sqrt{\frac{M_{2}(\xi)}{\varepsilon}}\right]\le 9\sqrt{2}
\end{eqnarray}
\end{lemma}
\begin{proof}
We write \(\eta = \sigma\Theta_{k}+t\Theta_{j}+ \xi\)
where \(\xi\) is orthogonal to both \(\Theta_{k}\) and
\(\Theta_{j}\). Recall from (\ref{eq:20}) that
\(Q_{k}\) does not depend on \(\sigma\), and assume
for convenience that \(\epsilon_{j}=+1\). First, let
\begin{equation}\nonumber
Q = \mathbb{R}e{Q_{k}(t,\xi,\sigma)} = t^{2} - C(\xi)
\end{equation}
where
\[
C(\xi) =
\sum_{l\ne j,k}\epsilon_{l}\left(\xi_{l}^{2}-\beta_{l}^{2}\right)-b+\beta_{j}^{2}
\]
so that we may conclude from Lemma \ref{th:simplequad}
that
\begin{align}\nonumber
\mu_{1}(\xi)\le 4
\min\left(\sqrt{\varepsilon},\frac{\varepsilon}{\sqrt{|C(\xi)|}}\right)
\\ \notag
\left|\frac{dQ}{dt}\right| = 2|t|\le 2\sqrt{C(\xi)+\varepsilon}
\end{align}
and
\[
\left|\frac{d^{2}Q}{dt^{2}}\right| = 2
\]
so that
\[
\mu_{1}\frac{M_{1}}{\varepsilon}\le
8\min\left(\sqrt{1+\frac{C(\xi)}{\varepsilon}},
\sqrt{1+\frac{\varepsilon}{C(\xi)}}\right)
\le 8\sqrt{2}
\]
and
\[
\mu_{1}\sqrt{\frac{M_{2}}{\varepsilon}}\le\sqrt{2}
\]
We next treat the case \(Q= \Im{Q_{k}} =
2\beta_{j}t+2\sum_{l\ne j,k}\epsilon_l\beta_{l}\xi_{l}\). In this case
\begin{align*}
\mu_{1}&=\frac{\varepsilon}{|\beta_{j}|}
\\
\left|\frac{dQ}{dt}\right| &= 2|\beta_{j}|
\end{align*}
and
\[
\frac{d^{2}Q}{dt^{2}} = 0
\]
so that
\[
\mu_{1}\frac{M_{1}}{\varepsilon}= 2
\]
and
\[
\mu_{1}\sqrt{\frac{M_{2}}{\varepsilon}} = 0
\]
\end{proof}
The combination of Lemma \ref{th:BasicEst}, Corollary
\ref{th:BasicEstCor}, and Lemma \ref{th:qEst} gives us the
hypothesis necessary to invoke Lemma \ref{multiplierNorm}
and conclude that
\begin{corollary}
For \(j\ne k\),
\(||\finv{\Theta_{k}}{\Psi_{j}}||_{\Theta_{k}(1,\infty)}\le
18\)
\end{corollary}
\noindent and consequently that (\ref{eq:39}) holds with
\(C = 19^{n+1}\), because our cutoffs are products (with \(n+1\) factors) of
these multipliers and of the identity minus these multipliers.
We can now finish the
\begin{proof}[Proof of Theorem \ref{th:main}]
We have shown that multiplication by \(\phi_{\varepsilon}(\mathbb{R}e{Q_{k}})\) and
\(\phi_{\varepsilon}(\Im{Q_{k}})\) preserves bounds on \(||\f{\Theta_k^\perp}{f}||_{\Theta_k(1,2)}\).
Hence let us start with
\begin{align*}
u &= \sum_{k} u_{k}
\\
||u||_{L^2(D_{r})}&\le \sum_{k}||u_{k}||_{L^2(D_{r})}
\\
&\le \sum_{k}||u_{k}||_{L^2(S_{1})}
\end{align*}
where \(S_{1}\) is a strip bounded by the two planes
\(\Theta_{k}\cdot x = s_{1}\) and \(\Theta_{k}\cdot x =
s_{2}\), with \(|s_{2}-s_{1}|\le d_{r}\). Continuing the estimate,
\begin{align*}
&= \sum_{k}||\f{\Theta_k^\perp}{u_{k}}||_{\Theta_{k}(2,2)(S_{1})}
\\
&\le\sum_{k}d_{r}^{\frac{1}{2}}||\f{\Theta_k^\perp}{u_{k}}||_{\Theta_k(\infty,2)(S_{1})}
\\
&\le C_{1}(P,n)\ d_{r}^{\frac{1}{2}}\sum_{k}||\f{\Theta_k^\perp}{f_{k}}||_{\Theta_k(1,2)}
\\
&\le C_{2}(P,n) \ d_{r}^{\frac{1}{2}}\sum_{k}||\f{\Theta_k^\perp}{f}||_{\Theta_k(1,2)}
\\
&\le C_{3}(P,n)\ d_{r}^{\frac{1}{2}}\sum_{k}||\chi_{S_2}\f{\Theta_k^\perp}{f}||_{\Theta_k(1,2)}
\end{align*}
where the \(C_{i}\) are constants depending only on \(P\)
and the dimension \(n\), and \(S_{2}\) is a strip
containing \(D_{s}\) defined analogously to \(S_{1}\).
\begin{align*}
&\le C_{3}(P,n)\ d_{r}^{\frac{1}{2}}\sum_{k}d_{s}^{\frac{1}{2}}||\chi_{S_2}\f{\Theta_k^\perp}{f}||_{\Theta_k(2,2)}
\\
&\le C_{4}(P,n)\ (d_{r}d_{s})^{\frac{1}{2}}||f||_{L^2}
\end{align*}
and Theorem \ref{th:main} is proved.
\end{proof}
\section{Estimates for higher order operators}\label{sec:ordern}
In this section we will consider an
$N$\textsuperscript{th} order constant coefficient partial
differential operator $P(D)$ on $\mathbb{R}^n$, $D = -i\nabla$. We
refer to polynomials which satisfy the three conditions
of Definition \ref{admissibleSymbol} as \textit{admissible}. For these admissible
polynomials, we will prove the same estimate as we did for
second order operators in Theorem \ref{th:main}. We will again use
partial Fourier transforms and solve ordinary differential
equations, using partitions of unity to decompose our
source into a sum of sources, each of which has support
suited to that particular direction, so that the solution
to the ODE satisfies the same estimates as in the previous
section.\\
The main difference here is that we don't have a simple
normal form as we did in Lemma \ref{th:1}, so we cannot
explicitly choose directions and construct cutoffs. We
need to rely on algebraic properties of the discriminant
to guarantee that we can find a finite decomposition of
the source analogous to the one we used in
\eqref{eq:85}. Additionally, the order of the ODE can
depend on the direction. In the second order case we
dismissed these cases easily in Propositions \ref{prop4.3}
and \ref{prop4.4} because we could represent them
explicitly. In the higher
order case, we choose our directions to avoid these cases.\\
\begin{theorem}\label{th:other}
Let $P:\mathbb{C}^n\to\mathbb{C}$ be a degree $N \geqslant 1$ admissible polynomial. Then
there is a constant $C = C(P,n)$ such that for every bounded domain
$D_s \subset \mathbb{R}^n$ and every $f\in L^2(D_s)$ there is a $u\in
L^2_{loc}(\mathbb{R}^n)$ satisfying
\begin{equation}
P(D) u = f\label{eq:95}
\end{equation}
Moreover for any bounded domain $D_r \subset \mathbb{R}^n$
\begin{equation}
\norm{u}_{L^2(D_r)} \le C \sqrt{d_r d_s} \norm{f}_{L^2(D_s)}
\end{equation}
where $d_\ell$ is the diameter of $D_\ell$.
\end{theorem}
We will prove Theorem \ref{th:other} by reducing
the solution of the equation \eqref{eq:95} to solving a set of
parameterized ODE's, just as we wrote the solution to
\eqref{eq:13} in terms of solutions to
\eqref{modelProblems}. To accomplish this, we must choose
a set of directions $\Theta_k$ and build a partition of
unity on the Fourier side so that the denominators of the
source terms in each of these model problems are
strictly positive, just as was explained after
\eqref{eq:61}. These two ingredients will then imply the
final estimate.\\
We choose a direction $\Theta\in\mathbb{S}^{n-1}(\mathbb{R}^n)$
and Fourier transform \eqref{eq:95} along the $\Theta^\perp$ hyperplane to obtain
the \emph{ordinary differential equation}
\[
P(-i(\Theta\cdot\nabla)\Theta + \xi_{\Theta^\perp})
\f{\Theta^\perp}{u} = \f{\Theta^\perp}{f}
\]
in the direction $\Theta$, which we solve for each
$\xi_{\Theta^\perp}$. The next lemma gives the estimate
we seek in the case that the ODE is first order.
\begin{lemma}\label{PDEmodel}
Let $\Theta\in\mathbb{S}^{n-1}$ and let $q:\Theta^\perp \to \mathbb{C}$ be
measurable. Assume that $g \in \Theta(1,2)$, or that $g \in
\Theta(\infty,2)$ and $\inf_{\Theta^\perp} \abs{\Im q} > 0$. Then
there is $w\in\Theta(\infty,2)$ satisfying $(-i\partial_t -
q(\xi_{\Theta^\perp})) w = g$ and
\begin{equation}\label{eq:45}
\norm{ w }_{\Theta(\infty,2)} \le \norm{ g}_{\Theta(1,2)},
\end{equation}
or in the second case
\begin{equation}\label{eq:46}
\norm{ w }_{\Theta(\infty,2)} \le \frac{1}{\inf_{\Theta^\perp}
\abs{\Im q}} \norm{ g }_{\Theta(\infty,2)}
\end{equation}
\end{lemma}
\begin{remark}
It is also true that
$\norm{w}_{\Theta(\infty,p)} \le
\norm{g}_{\Theta(1,p)}$ and
$\norm{w}_{\Theta(\infty,p)} \le
\norm{g}_{\Theta(\infty,p)} /(\inf \abs{\Im q})$ with
$1\le p \le \infty$.
\end{remark}
\begin{proof}
The general solution to $(-i\partial_t - q(\xi_{\Theta^\perp})) w = g$
is
\[
w(t\Theta+\xi_{\Theta^\perp}) = i\int_{t_0}^t
e^{iq(\xi_{\Theta^\perp})(t-t')} g(t'\Theta+\xi_{\Theta^\perp}) dt',
\quad t_0\in\mathbb{R}\cup\{\pm\infty\}.
\]
If $\Im q(\xi_{\Theta^\perp})=0$ we may set $t_0$ as we please and the
claim follows. If $\Im q < 0$ set $t_0 = \infty$, and then
$\abs{\exp(iq(\xi_{\Theta^\perp})(t-t'))} \le 1$ for
$t'\in{[{t,t_0}]}$. Similarly, if $\Im q > 0$ set $t_0 = -\infty$.
Now \eqref{eq:45} follows by estimating the integral on
the right by the \(L^{\infty}\) norm of the
exponential times the \(L^{1}\) norm of \(g\) for
each fixed \(\xi_{\Theta^\perp}\), and then taking
\(L^{2}\) norms in the \(\Theta^{\perp}\)
hyperplane. The inequality \eqref{eq:46} follows
in the same way, but using the \(L^{1}\) norm of the
exponential times the \(L^{\infty}\) norm of \(g\)
instead of the other way around.
\end{proof}
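A quick numerical sanity check of \eqref{eq:45}, not part of the
argument, discretizes the integral formula from the proof for a single
fixed \(\xi_{\Theta^\perp}\); the real value \(q\) and the source \(g\)
below are arbitrary sample data.
\begin{verbatim}
import numpy as np

# Discretize w(t) = i * int_{-inf}^{t} exp(i q (t-s)) g(s) ds for real q,
# and compare sup_t |w(t)| with int |g(s)| ds (scalar version of (45)).
q = 2.3                                     # sample value of q(xi_perp)
t = np.linspace(-20.0, 20.0, 4001)
g = np.exp(-t**2) * (1.0 + 0.5*np.sin(3*t)) # sample right-hand side

w = np.array([1j*np.trapz(np.exp(1j*q*(tk - t[:k+1]))*g[:k+1], t[:k+1])
              for k, tk in enumerate(t)])
print(np.max(np.abs(w)), "<=", np.trapz(np.abs(g), t))
\end{verbatim}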
In general the differential equation
$P(-i(\Theta\cdot\nabla)\Theta + \xi_{\Theta^\perp})
\f{\Theta^\perp}{u} = \f{\Theta^\perp}{f}$ is not first
order in $\partial_t = \Theta\cdot\nabla$, but we can
factor it into a product of first order operators of the
form \((-i\partial_t - q(\xi_{\Theta^\perp}))\), and then
use a partial fractions expansion to express its
solution as a sum of solutions to first order ODE's.
\begin{definition}
For $\xi_{\Theta^\perp}$ fixed, let $p:\mathbb{C}\to\mathbb{C}$ be the polynomial in
$\tau$
\[
p(\tau) = P(\tau\Theta+\xi_{\Theta^\perp}).
\]
Then $p'(\tau) = \Theta\cdot\nabla P(\tau\Theta+\xi_{\Theta^\perp})$.
\end{definition}
\begin{lemma}\label{partialFractionDecomposition}
Let $p:\mathbb{C}\to\mathbb{C}$ be a polynomial of degree $N \geqslant 1$. Assume that its
roots $\tau_j$ are simple and that its leading coefficient is $p_N$.
Then
\begin{equation}
\frac{1}{p(\tau)} = \sum_{j=1}^N \frac{1}{(\tau-\tau_j)
p_N\prod_{k\neq j}(\tau_j-\tau_k)} = \sum_{j=1}^N
\frac{1}{(\tau-\tau_j) p'(\tau_j)}.
\end{equation}
\end{lemma}
\begin{proof}
$p'(\tau_j) = \lim_{\tau\to\tau_j} p(\tau)/(\tau-\tau_j)$ since
$p(\tau_j)=0$.
\end{proof}
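The identity is also easy to confirm symbolically; in the sketch below
the roots and the leading coefficient are arbitrary sample choices.
\begin{verbatim}
import sympy as sp

# Lemma: 1/p(tau) = sum_j 1/((tau - tau_j) p'(tau_j)) when the roots are simple.
tau = sp.symbols('tau')
roots = [1, -2, 5]                          # arbitrary simple roots
p = 3*(tau - 1)*(tau + 2)*(tau - 5)         # arbitrary leading coefficient 3
dp = sp.diff(p, tau)
rhs = sum(1/((tau - r)*dp.subs(tau, r)) for r in roots)
print(sp.simplify(1/p - rhs))               # prints 0
\end{verbatim}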
If, for some direction \(\Theta_{k}\),
\(|p'(\tau_{j}(\xi_{\Theta_{k}^{\perp}}))|>\varepsilon>0\) for
all \(j\) and for all \(\xi_{\Theta_{k}^{\perp}}\) in the
support of \(\f{\Theta_k^\perp}{f_{k}}\), then we can define \(u_{kj}\) as
solutions to
\[
\big(-i\partial_t - \tau_j(\xi_{\Theta_k^\perp})\big)
\f{\Theta_k^\perp}{u_{kj}} = \f{\Theta_k^\perp}{f_k}/p'(\tau_j)
\]
that satisfy \eqref{eq:45} with
\(w=\f{\Theta_k^\perp}{u_{kj}}\) and
\(g=\f{\Theta_k^\perp}{f_{k}}/p'(\tau_{j})\). Then
$u_k = \sum_j u_{kj}$ will solve $P(D)u_k
= f_k$.
We must find a finite set of directions $\Theta_k$, and split $f =
\sum_k f_k$ such that $\f{}{f_k}(\xi) = 0$ whenever
$\xi_{\Theta_k^\perp}$ is such that $|p'(\tau_j)|\le\varepsilon$ for some $j$,
as was done in \eqref{eq:33}. Thus we need to define the sets where
$p'(\tau_j)$ becomes small. In reading the definition
below, recall that \(\xi_{\Theta^{\perp}}\) is the
component of \(\xi\) perpendicular to \(\Theta\) and
\(\tau_{j}=\tau_{j}(\xi_{\Theta^{\perp}})\), \(j=1,\ldots,N\) are the
roots of \(p(\tau)\).
\begin{definition}
Given $\Theta\in\mathbb{S}^{n-1}$ and $\varepsilon \geqslant 0$ let
\begin{align}\label{actualBadCylinderDef}
\mathscr{B}_{\Theta,\varepsilon} &= \{ \xi\in\mathbb{R}^n \mid
\min_{P(\tau_j\Theta+\xi_{\Theta^\perp})=0} \abs{\Theta\cdot\nabla
P(\tau_j\Theta+\xi_{\Theta^\perp})} \le \varepsilon \} \notag \\ &=
\{ \xi\in\mathbb{R}^n \mid \min_{p(\tau_j)=0} \abs{p'(\tau_j)} \le
\varepsilon \}
\end{align}
where the minimum is taken with respect to
$\tau_j\in\mathbb{C}$. If, for some \(\Theta\),
$\deg p = 0$ we adopt the convention that
$\min_{p(\tau_j)=0} \abs{p'(\tau_j)} := 0$.
\end{definition}
\begin{proposition}\label{solExistence}
Let $P:\mathbb{C}^n\to\mathbb{C}$ be a polynomial of degree $N \geqslant 1$ with principal
term $P_N$. Assume that $P_N(\Theta)\neq0$. Let
$\f{\Theta^\perp}{f}\in\Theta(1,2)$ be such that $\f{}{f}(\xi) = 0$
for all $\xi\in\mathscr{B}_{\Theta,\varepsilon}$. Then
there exists \(u\) solving $P(D)u = f$ and
\begin{equation}
\norm{ \f{\Theta^\perp}{u} }_{\Theta(\infty,2)} \le
\frac{N}{\varepsilon} \norm{ \f{\Theta^\perp}{f} }_{\Theta(1,2)}.
\end{equation}
\end{proposition}
\begin{remark}
The mixed norm estimate is also true for any
$1\le p\le\infty$:
$\norm{\f{\Theta^\perp}{u}}_{\Theta(\infty,p)} \le N
\varepsilon^{-1}
\norm{\f{\Theta^\perp}{f}}_{\Theta(1,p)}$.
\end{remark}
\begin{proof}
The roots of $p$ are simple when $\xi_{\Theta^\perp} \in
\mathbb{R}^n\setminus\mathscr{B}_{\Theta,\varepsilon}$ so we have
\[
\frac{1}{p(\tau)} = \sum_{p(\tau_j)=0}
\frac{1}{(\tau-\tau_j)p'(\tau_j)}
\]
there according to Lemma \ref{partialFractionDecomposition}. For each
$\xi_{\Theta^\perp}$, order the roots $\tau_j$ lexicographically by
$j=1,\ldots,N$, i.e. $\mathbb{R}e \tau_j \le \mathbb{R}e \tau_{j+1}$ and $\Im \tau_j
< \Im \tau_{j+1}$ if the real parts are equal. The maps
$\xi_{\Theta^\perp} \mapsto \tau_j(\xi_{\Theta^\perp})$ are measurable
since the coefficients of $p(\tau)$ are polynomials in the
$\xi_{\Theta^\perp}$.
The assumption on $\f{}{f}$ implies that
\[
\norm{ \f{\Theta^\perp}{f}/p'(\tau_j) }_{\Theta(1,2)} \le
\varepsilon^{-1} \norm{ \f{\Theta^\perp}{f} }_{\Theta(1,2)} < \infty.
\]
for any root $\tau_j = \tau_j(\xi_{\Theta^\perp})$ of $p(\tau_j)=0$.
Let $u_j \in \Theta(\infty,2)$ be the solution to
\[
\big(-i\partial_t - \tau_j(\xi_{\Theta^\perp})\big)
\f{\Theta^\perp}{u_j} = \frac{ \f{\Theta^\perp}{f}} {p'(\tau_j)}
\]
given by Lemma \ref{PDEmodel}. It satisfies the norm estimate
\[
\norm{\f{\Theta^\perp}{u_j}}_{\Theta(\infty,2)} \le
\varepsilon^{-1} \norm{ \f{\Theta^\perp}{f} }_{\Theta(1,2)}.
\]
The claim follows by setting $u = \sum_{j=1}^N u_j$ and recalling the
partial fraction decomposition of Lemma
\ref{partialFractionDecomposition}.
\end{proof}
We now focus on the second task, splitting an arbitrary
source function $f$ into a sum $f = f_1 + f_2 + \ldots + f_m$ with
directions $\Theta=\Theta_1, \Theta_2, \ldots, \Theta_m$ such that
$\f{}{f_k}(\xi) = 0$ when $\xi \in \mathscr{B}_{\Theta_k,
\varepsilon}$. Proposition \ref{solExistence} would then imply the
existence of a solution to $P(D)u_k=f_k$. Linearity then implies that
$u = u_1 + u_2 + \ldots + u_m$ solves the original
problem $P(D)u=f$.\\
The partial fraction expansion in Lemma
\ref{partialFractionDecomposition} cannot hold if $P(D)$
has a double characteristic, even a complex double
characteristic. Unlike in Theorem \ref{th:main}, the
algebraic techniques we use here rely on properties of
the discriminant which involves the multiplicities of
all the roots, including the complex ones. Hence we
require that \(P\) be what algebraic geometers call a
\emph{nonsingular} polynomial.
\begin{definition}
A polynomial $P:\mathbb{C}^n\to\mathbb{C}$ is \emph{nonsingular} if, given
$\xi\in\mathbb{C}^n$, $P(\xi) = 0$ implies that $\abs{\nabla P(\xi)}\neq 0$.
\end{definition}
The sets $\mathscr{B}_{\Theta, \varepsilon}$ are difficult
to deal with for a general polynomial $P$, but the sets
$\mathscr{B}_{\Theta,0}$ are algebraic sets, and
this will enable us to prove that the intersection of
finitely many of them is empty. In order to conclude that
the intersections of the
$\mathscr{B}_{\Theta, \varepsilon}$ are empty, we will
assume that each $\mathscr{B}_{\Theta, \varepsilon}$ is
contained in a tubular neighborhood of $\mathscr{B}_{\Theta,0}$.
Additionally, we require a compactness hypothesis on
projections of two of the sets $\mathscr{B}_{\Theta,0}$
to insure that the cut-off function $\Psi$ associated with
$\mathscr{B}_{\Theta, \varepsilon}$ is a Fourier
multiplier as in Lemma \ref{multiplierNorm}.
\begin{definition}\label{admissibleSymbol}
Let $P:\mathbb{C}^n\to\mathbb{C}$ be a degree $N \geqslant 1$ nonsingular polynomial with
principal term $P_N$. It is \emph{admissible} if
\begin{enumerate}
\item\label{twoCylindersCompatibility} for any
$\Theta\in\mathbb{S}^{n-1}(\mathbb{R}^n) \setminus P_N^{-1}(0)$ and $r_0>0$
there is $\varepsilon>0$ such that
\[
\mathscr{B}_{\Theta, \varepsilon} \subset
\overline{B}(\mathscr{B}_{\Theta,0}, r_0),
\]
\item\label{compactDirections} there are non-parallel vectors
$\Theta_1, \Theta_2 \in \mathbb{S}^{n-1}(\mathbb{R}^n) \setminus P_N^{-1}(0)$
such that $\mathscr{B}_{\Theta_1,0} \cap (\Theta_1)^\perp$ and
$\mathscr{B}_{\Theta_2,0} \cap (\Theta_2)^\perp$ are compact,
\end{enumerate}
where $\mathscr{B}_{\Theta,\varepsilon}$ is defined in
\eqref{actualBadCylinderDef}.
\end{definition}
We suspect that Condition \ref{twoCylindersCompatibility} is true for
any nonsingular polynomial. It has been straightforward to verify in the
examples we have considered. Another way to state this condition is as
follows: let $\mathscr{D}$ be the set of $\xi\in\Theta^\perp$ where
$p(\tau)$, whose coefficients are polynomials of $\xi$, has a double
root, i.e. $p(\tau_0)=p'(\tau_0)=0$. Let $r_0 > 0$. Then we require that
there is $\varepsilon>0$ such that if $\xi\in\Theta^\perp$,
$d(\xi,\mathscr{D}) > r_0$, then $\abs{p'(\tau_0)} > \varepsilon$ for
all roots $\tau=\tau_0$ of $p(\tau)=0$.
Condition \ref{compactDirections} is likely
only a technical requirement. Requiring it could be
avoided if a theorem similar to Corollary
\ref{th:BasicEstCor} and Lemma \ref{th:qEst} could be
proven for higher order operators. Moreover this condition
is always satisfied in $\mathbb{R}^2$ because each
$\mathscr{B}_{\Theta,0}$ is a finite set of lines in the
direction $\Theta$ in this case.
A key point in our proof is the observation that
$\mathscr{B}_{\Theta,0}$ is an algebraic variety which can
be defined by the vanishing of a certain discriminant. We
first show that there is an infinite sequence of
directions $\Theta_k$ such that the intersection
$\cap_k \mathscr{B}_{\Theta_k,0}$ is empty. Because the
$\mathscr{B}_{\Theta_k,0}$ are algebraic varieties,
Hilbert's basis theorem then guarantees that the
intersection of a finite subset of the
$\mathscr{B}_{\Theta_k,0}$ is empty.
\begin{definition}\label{discDef}
Let $P:\mathbb{C}^n\to\mathbb{C}$ be a polynomial of degree $N \geqslant 1$. Write $P_N$ for
its principal term. For any $\xi\in\mathbb{C}^n$ and $\Theta\in\mathbb{C}^n$ such that
$P_N(\Theta)\neq0$ we define
\begin{equation}
\Delta(\Theta,\xi) = \operatorname{disc}_\tau(P(\tau\Theta+\xi)) :=
\big(P_N(\Theta)\big)^{2(N-1)} \prod_{i<j} (\tau_i-\tau_j)^2,
\end{equation}
where $\{\tau_j(\Theta,\xi) \mid j=1,\ldots,N\}$ are the roots of
$P(\tau\Theta+\xi)=0$. If $N = 1$ we set
$\operatorname{disc}_{\tau}(a_1\tau+a_0) = a_1$.
\end{definition}
\begin{remark}
The discriminant of a polynomial \(P\) is a polynomial in the
coefficients of \(P\). Hence we can extend $\Delta$ to the set
$\mathbb{C}^n\times\mathbb{C}^n$ by analytic continuation, and therefore it is
well-defined without the assumption that $P_N(\Theta)\neq0$. We point
out, however, that the discriminant of a degree $N$ polynomial, with
the high-order coefficients equal to zero, is not the same as the
discriminant of the resulting lower degree polynomial. See for
example the introduction of Gel'fand, Kapranov and Zelevinsky
\cite{GKZ}.
\end{remark}
\begin{remark}
We have $\Delta(\Theta,\xi) = \Delta(\Theta, \xi+r\Theta)$ for any
$r\in\mathbb{C}$. This follows from the fact that the roots of $P(\tau\Theta+ \xi+r\Theta) =
P((\tau+r)\Theta+\xi)$ are just the roots of
$P(\tau\Theta+ \xi)$, all translated by \(r\), so the
discriminant remains the same.
\end{remark}
\begin{remark}\label{th:DeltaHomog}
We have $\Delta(\lambda\Theta,\xi) = \lambda^{N(N-1)}
\Delta(\Theta,\xi)$ because $P(\tau\lambda\Theta+\xi)$ has roots
$\tau_j = r_j/\lambda$ where $P(r_j\Theta+\xi)=0$, and the principal
term will be $\big(\lambda^N P_N(\Theta)\big)\tau^N$.
\end{remark}
\begin{definition}\label{algebraicBadCylinderDef}
Let $\Theta\in\mathbb{C}^{n}$ and $P:\mathbb{C}^n\to\mathbb{C}$ be a degree $N \geqslant 1$
polynomial. Then the \emph{algebraic} tangent set (in the direction
$\Theta$) is defined as
\begin{equation}
\overline{\mathscr{D}_\Theta} = \{ \xi\in\mathbb{C}^n \mid
\Delta(\Theta,\xi) = 0\}.
\end{equation}
The \emph{real} tangent set is $\mathscr{D}_\Theta =
\overline{\mathscr{D}_\Theta} \cap \mathbb{R}^n$.
\end{definition}
Figure \ref{partitionUnityProcedureFig} on page
\pageref{partitionUnityProcedureFig} illustrates the
example $P(\xi) =
\abs{\xi}^2 - 1$ with $\Theta \in \{e_1, e_2, e_3\}$. We have then
\[
P(\tau\Theta+\xi) = (\Theta\cdot\Theta) \tau^2 + 2(\Theta\cdot\xi)
\tau + \xi\cdot\xi-1
\]
and
\[
\Delta(\Theta,\xi) = 4\left[(\Theta\cdot\xi)^2 -
(\Theta\cdot\Theta)(\xi\cdot\xi-1)\right].
\]
Homogeneity is easy to see in this example and a simple calculation
demonstrates that $\Delta(\Theta,\xi+r\Theta) = \Delta(\Theta,\xi)$
for all $r\in\mathbb{C}$, as expected.
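This formula, the invariance under \(\xi\mapsto\xi+r\Theta\), and the
homogeneity of Remark \ref{th:DeltaHomog} can be confirmed
symbolically; the sketch below relies on \texttt{sympy}'s built-in
discriminant, which agrees with Definition \ref{discDef}.
\begin{verbatim}
import sympy as sp

# Discriminant of P(tau*Theta + xi) for P(z) = |z|^2 - 1 in R^3.
tau, r, lam = sp.symbols('tau r lambda')
Th = sp.Matrix(sp.symbols('Theta1:4'))
xi = sp.Matrix(sp.symbols('xi1:4'))

def Delta(Theta, x):
    p = (tau*Theta + x).dot(tau*Theta + x) - 1
    return sp.discriminant(sp.expand(p), tau)

D = Delta(Th, xi)
print(sp.simplify(D - 4*(Th.dot(xi)**2 - Th.dot(Th)*(xi.dot(xi) - 1))))  # 0
print(sp.simplify(Delta(Th, xi + r*Th) - D))      # 0: invariance under xi -> xi + r Theta
print(sp.simplify(Delta(lam*Th, xi) - lam**2*D))  # 0: homogeneity, N(N-1) = 2
\end{verbatim}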
We can study the sets $\mathscr{D}_\Theta$ as a proxy for the
sets $\mathscr{B}_{\Theta,\varepsilon}$, defined in
\eqref{actualBadCylinderDef}, that are actually used.
\begin{lemma}\label{DasBproxy}
Let $P:\mathbb{C}^n\to\mathbb{C}$ be a degree $N \geqslant 1$ polynomial and
$\Theta\in\mathbb{S}^{n-1}(\mathbb{R}^n)$ such that $P_N(\Theta)\neq0$. Then
\[
\mathscr{D}_\Theta = \{ \xi\in\mathbb{R}^n \mid \exists \tau_0\in\mathbb{C}: p(\tau_0)
= p'(\tau_0) = 0 \} = \mathscr{B}_{\Theta,0}.
\]
\end{lemma}
\begin{proof}
If $N \geqslant 2$ this follows from the definition of
$\mathscr{B}_{\Theta,0}$ in \eqref{actualBadCylinderDef} and the fact
that $\tau_0$ is a double root of $p$ if and only if $p(\tau_0)=0$ and
$p'(\tau_0)=0$. If $N=1$ then $\xi\in\mathscr{D}_\Theta$ iff the first
order coefficient of $p$ vanishes, which is the same condition as
$\xi\in\mathscr{B}_{\Theta,0}$. This is impossible since
$P_N(\Theta)\neq0$.
\end{proof}
We will show that if $P$ is nonsingular then the intersection
$\cap_{\Theta \in \mathbb{S}^{n-1}} \mathscr{D}_{\Theta}$ is
empty. In other words, we show that, given any $\xi\in\mathbb{R}^n$, there is
some direction $\Theta$ such that the line $\tau\mapsto
\tau\Theta+\xi$ is not tangent to the characteristic manifold
$P^{-1}(0)$ at any point.
\begin{lemma}\label{goodDirection}
Assume that $P:\mathbb{C}^n\to\mathbb{C}$ is nonsingular. Let $\xi \in \mathbb{C}^n$. Then
there is $\Theta \in \mathbb{S}^{n-1}(\mathbb{R}^n)$ such that
$P_N(\Theta)\neq0$ and $\Delta(\Theta,\xi) \neq 0$.
\end{lemma}
\begin{proof}
We keep the second variable $\xi$ fixed in this proof,
and suppress the dependence on \(\xi\), writing
$\Delta(\Theta) = \Delta(\Theta, \xi)$. We view
\(P(\tau\Theta+\xi)\) as a polynomial
\(p(\tau,\Theta)\) in \(\tau\) and \(\Theta\).
According to \cite{Hormander2}, Appendix 1.2, $\Delta$
is a polynomial in $\Theta\in\mathbb{C}^{n}$ and
$\Delta \not\equiv 0$ if \(p(\tau,\Theta)\)
is square-free. A nontrivial complex polynomial cannot
vanish identically on $\mathbb{R}^n$; since $P_N\not\equiv0$, the set
$\mathbb{R}^n \setminus P_N^{-1}(0)$ is dense in $\mathbb{R}^n$, so such a
polynomial cannot vanish identically there either.
Hence, if
$p(\tau,\Theta)$ has no square
factor, there is a $\Theta\in\mathbb{R}^n$ such that
$P_N(\Theta)\neq0$ and $\Delta(\Theta) \neq 0$. Because
\(\Delta\), as pointed out in Remark
\ref{th:DeltaHomog}, is a homogeneous function of
\(\Theta\), we may scale \(\Theta\) so it has unit
length, and the lemma follows in this case.
Next, we show that if \(p(\tau,\Theta)\) has a square
factor, then $P(z)$, viewed as a polynomial of
\(z\in\mathbb{C}^{n}\) has a square factor, which contradicts
the assumption that $P$ is nonsingular. Suppose that
\(p(\tau,\Theta) = \big(S_1(\tau,\Theta)\big)^2
S_2(\tau,\Theta)\). If we choose \(\tau=\lambda\) and
\(\Theta=(z-\xi)/\lambda\) for some \(\lambda\ne0\), then, for any \(z\in\mathbb{C}^{n}\),
\begin{eqnarray*}
P(z)&=& P\left(\lambda \frac{z-\xi}{\lambda} +
\xi\right)
\\
&=& p(\lambda,(z-\xi)/\lambda)
\\
&=& \big( S_1(\lambda, (z-\xi)/\lambda) \big)^2
S_2(\lambda, (z-\xi)/\lambda )
\end{eqnarray*}
so that, unless \(S_{1}(\lambda,\Theta)\) is independent
of \(\Theta\), \(P(z)\) must have a square factor, which
is a contradiction. Suppose now that \(S_{1}\) is independent of \(\Theta\).
It is a non-constant polynomial, so there is $\tau_0\in\mathbb{C}$ such that
$S_1(\tau_0) = 0$. If $\tau_0\neq0$, choosing
$\lambda=\tau_0$ implies that $P\equiv0$. If $\tau_0=0$, then
$P(\tau\Theta+\xi)$ vanishes to at least second order at the point $\xi$ in every
direction $\Theta\in\mathbb{C}^n$. This means that $\xi$ is a singular
point of $P$, again contradicting the hypothesis that
\(P\) is nonsingular. Hence $p(\tau,\Theta)$ has no
square factors and thus $\Delta$ is not identically zero.
\end{proof}
\begin{proposition}\label{algProp}
Let $P:\mathbb{C}^n\to\mathbb{C}$ be a nonsingular polynomial of degree $N \geqslant 1$ with
principal term $P_N$. Then there is a finite set of directions
$\Theta_1, \ldots, \Theta_m \in \mathbb{S}^{n-1}(\mathbb{R}^n) \setminus
P_N^{-1}(0)$ such that
\[
\bigcap_{k=1}^m \overline{\mathscr{D}_{\Theta_k}} = \emptyset.
\]
\end{proposition}
\begin{proof}
We recall a few facts from algebra. A ring $R$ is \emph{Noetherian} if
every ideal is finitely generated. Another characterization
is that every increasing sequence of ideals stabilizes at a
finite index. In other words, if $I_1 \subset I_2 \subset \ldots$ are
ideals in $R$, then there is $m < \infty$ such that $I_\ell = I_m$ for
all $\ell \geqslant m$.
The ring of complex numbers is Noetherian: its only
ideals are $\{0\}$ and $\mathbb{C}$. Hilbert's basis theorem
says that polynomial rings over Noetherian rings are
also Noetherian. If $V \subset \mathbb{C}^n$ is an affine
variety then $V = \mathbb{V}(\mathbb{I}(V))$, where
\begin{align*}
&\mathbb{V}(I) = \{ \xi \in \mathbb{C}^n \mid f(\xi) = 0 \quad \forall f \in
I\},\\ &\mathbb{I}(V) = \{ f \in \mathbb{C}[\xi_1, \ldots, \xi_n] \mid f(\xi)
= 0 \quad \forall \xi \in V\}.
\end{align*}
Now we begin the proof. Let $\Theta_1, \Theta_2, \ldots \in
\mathbb{S}^{n-1}(\mathbb{R}^n) \setminus P_N^{-1}(0)$ be a sequence that is
dense in $\mathbb{S}^{n-1}(\mathbb{R}^n)$; such a sequence exists because
$P_N^{-1}(0)$ meets the sphere in a closed set with empty interior. Set
\[
V_\ell := \{\xi \in \mathbb{C}^n \mid \Delta(\Theta, \xi) = 0, \text{ for }
\Theta = \Theta_1, \Theta_2, \ldots, \Theta_\ell\} = \bigcap_{k=1}^\ell
\overline{\mathscr{D}_{\Theta_k}}.
\]
We have $V_1 \supset V_2 \supset V_3 \supset \ldots$ and hence
$\mathbb{I}(V_1) \subset \mathbb{I}(V_2) \subset \mathbb{I}(V_3)
\subset \ldots$ etc. By Hilbert's basis theorem there is a finite $m$
such that $\mathbb{I}(V_\ell) = \mathbb{I}(V_m)$ for all $\ell \geqslant
m$. This implies that $V_\ell = \mathbb{V}(\mathbb{I}(V_\ell)) =
\mathbb{V}(\mathbb{I}(V_m)) = V_m$ for $\ell\geqslant m$.
If $V_m = \emptyset$ we are done. If not, then there is $\xi_{*}
\in V_m$, such that
\[
\Delta(\Theta_k,\xi_{*}) = 0
\]
for all $k\in\mathbb{N}$. Because $\{\Theta_k\}$ is dense in
$\mathbb{S}^{n-1}(\mathbb{R}^n)$ and the discriminant is a
continuous function, we see that \(\Delta(\Theta, \xi_{*})=0\)
for all
$\Theta\in\mathbb{S}^{n-1}(\mathbb{R}^n)$, which contradicts Lemma
\ref{goodDirection}.
\end{proof}
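A concrete instance (ours, only for illustration): for
\(P(\xi)=\abs{\xi}^{2}-1\) in the plane, the algebraic tangent sets of
the two coordinate directions intersect in the four points
\((\pm1,\pm1)\), and a single additional direction already empties the
intersection.
\begin{verbatim}
import sympy as sp

# P(z) = z1^2 + z2^2 - 1 in C^2: three directions empty the intersection
# of the algebraic tangent sets.
tau, x1, x2 = sp.symbols('tau x1 x2')

def Delta(Theta):
    p = (tau*Theta[0] + x1)**2 + (tau*Theta[1] + x2)**2 - 1
    return sp.discriminant(sp.expand(p), tau)

third = (sp.sqrt(2)/2, sp.sqrt(2)/2)
common = sp.solve([Delta((1, 0)), Delta((0, 1))], [x1, x2])   # (+-1, +-1)
print(common)
print([sp.simplify(Delta(third).subs({x1: a, x2: b})) for a, b in common])  # all nonzero
\end{verbatim}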
\begin{proposition}\label{epsNeighborhood}
Let $P:\mathbb{C}^n\to\mathbb{C}$ be a nonsingular polynomial of degree $N \geqslant 1$. Let
$\Theta_k\in\mathbb{S}^{n-1}(\mathbb{R}^n)$ be a finite sequence of
non-parallel vectors such that $\cap_k \mathscr{D}_{\Theta_k} =
\emptyset$.
If $\mathscr{D}_{\Theta_k} \cap \Theta_k^\perp$ is compact for
$k=1,2$ then there is $r_0>0$ such that
\begin{eqnarray}\label{eq:89}
\bigcap_k \overline{B}(\mathscr{D}_{\Theta_k}, 2r_0) =
\emptyset.
\end{eqnarray}
Moreover, there are smooth $\Psi_k:\mathbb{R}^n\to{[{0,1}]}$
such that $\Psi_k$ are bounded Fourier multipliers
acting on $\finv{\Theta^\perp}{\Theta(1,2)}$ for every
$\Theta\in\mathbb{S}^{n-1}(\mathbb{R}^n)$, satisfying
\begin{equation}
\sum_k \Psi_k \equiv 1
\end{equation}
and $\Psi_k \equiv 0$ in $B(\mathscr{D}_{\Theta_k}, r_0)$.
\end{proposition}
\begin{proof}
If \(\mathscr{D}_{\Theta_1}\) is empty, then so is any
neighborhood of it, hence the intersection in
\eqref{eq:89} is empty. If
not, there are at least two linearly independent $\Theta_k$. Then the
intersection $\mathscr{D}_{\Theta_1} \cap \mathscr{D}_{\Theta_2}$ is
compact because our assumption that the first two
$\mathscr{D}_{\Theta_k} \cap \Theta_k^\perp$ are compact
implies that the orthogonal projections of any point in
$\mathscr{D}_{\Theta_1} \cap \mathscr{D}_{\Theta_2}$ onto two
different codimension 1 subspaces, $\Theta_1^\perp$ and
$\Theta_2^\perp$, are bounded. Therefore, a closed
neighborhood of finite radius about the intersection is compact too.
Hence $\overline{B}(\mathscr{D}_{\Theta_1},1) \cap
\overline{B}(\mathscr{D}_{\Theta_2},1)$ is compact. We will use this
below.
Assume, contrary to the claim, that for any $r_0>0$ the intersection $\cap_k
\overline{B}(\mathscr{D}_{\Theta_k},2r_0)$ is non-empty. Then
there is a sequence $\xi^1, \xi^2, \ldots \in \mathbb{R}^n$ such
that $\sup_{k}d(\xi^\ell, \mathscr{D}_{\Theta_k})$
approaches zero.
By the
compactness of $\overline{B}(\mathscr{D}_{\Theta_1},1) \cap
\overline{B}(\mathscr{D}_{\Theta_2},1)$ we may assume that $\xi^\ell$
converges to some $\xi$. Then $\xi \in \mathscr{D}_{\Theta_k}$ for all
$k$ since the latter are closed sets. This contradicts the assumption
that the intersection of the
\(\mathscr{D}_{\Theta_k}\) is empty and establishes \eqref{eq:89}.\\
Let $\psi_k : \Theta_k^\perp \to {[{0,1}]}$ be smooth and such that
$\psi_k(\xi_{\Theta_k^\perp}) = 0$ if $d(\xi_{\Theta_k^\perp},
\mathscr{D}_{\Theta_k}) \le r_0$ and $\psi_k(\xi_{\Theta_k^\perp})
= 1$ if $d(\xi_{\Theta_k^\perp}, \mathscr{D}_{\Theta_k}) \geqslant 2r_0$.
Set
\begin{equation}\label{partitionOfUnityDef}
\Psi_1(\xi) = \psi_1(\xi_{\Theta_1^\perp}), \qquad \Psi_{k+1}(\xi) =
\psi_{k+1}(\xi_{\Theta_{k+1}^\perp}) \prod_{\ell=1}^k \big( 1 -
\psi_\ell(\xi_{\Theta_\ell^\perp}) \big)
\end{equation}
where $\xi_{\Theta_\ell^\perp} = \xi -
(\xi\cdot\Theta_\ell)\Theta_\ell \in \Theta_\ell^\perp$. Then
each $\Psi_k:\mathbb{R}^n\to{[{0,1}]}$ is smooth and $\Psi_k \equiv 0$ on
$\overline{B}(\mathscr{D}_{\Theta_k},r_0)$.
Note that $1-\psi_k \in C^\infty_0(\Theta_k^\perp)$ for $k=1,2$ and
$\xi\mapsto\psi_k(\xi_{\Theta_k^\perp})$ is constant in the direction of $\Theta_k$. Thus, given any
$\Theta\in\mathbb{S}^{n-1}$, Corollary \ref{constantInDirectionNorm}
implies that
\begin{equation}\label{nonParallelMult}
\norm{ \f{\Theta^\perp}{M_{1-\psi_k}f} }_{\Theta(1,2)} \le
\norm{\finv{\Theta_{k\perp}}{\{1-\psi_k\}}}_{\Theta_{k\perp}(1,\infty)}
\norm{ \f{\Theta^\perp}{f} }_{\Theta(1,2)}
\end{equation}
for some direction $\Theta_{k\perp}$ in the $(\Theta, \Theta_k)$-plane
perpendicular to $\Theta_k$ when $\Theta\not\parallel\Theta_k$, and
\begin{equation}\label{parallelMult}
\norm{ \f{\Theta^\perp}{M_{1-\psi_k}f} }_{\Theta(1,2)} \le
\sup_{\Theta_k^\perp} \abs{1-\psi_k} \norm{ \f{\Theta^\perp}{f}
}_{\Theta(1,2)}
\end{equation}
when $\Theta\parallel\Theta_k$. Recall that in the first case the
$\Theta_{k\perp}(1,\infty)$-norm is taken in the $n-1$ dimensional
space $\Theta_k^\perp$. In both cases the multiplier norm, which we
denote by $C_k = C_k(\Theta,\Theta_k)$, is finite since $1-\psi_k$ is
smooth and compactly supported in $\Theta_k^\perp$, so in particular
$\finv{\Theta_{k\perp}}{\{1-\psi_k\}}$ is a Schwartz test function.
Thus, by \eqref{partitionOfUnityDef}, \eqref{nonParallelMult} and
\eqref{parallelMult}
\begin{align*}
&\norm{ \f{\Theta^\perp}{M_{\Psi_1}f} }_{\Theta(1,2)} \le \norm{
\f{\Theta^\perp}{f + M_{1-\psi_1}f} }_{\Theta(1,2)} \le (1+C_1)
\norm{ \f{\Theta^\perp}{f} }_{\Theta(1,2)}, \\ &\norm{
\f{\Theta^\perp}{M_{\Psi_2}f} }_{\Theta(1,2)} \le C_1 \norm{
\f{\Theta^\perp}{M_{\psi_2}f} }_{\Theta(1,2)} \le C_1(1+C_2) \norm{
\f{\Theta^\perp}{f} }_{\Theta(1,2)}.
\end{align*}
We cannot apply the same argument to $M_{\Psi_3},
M_{\Psi_4}, \ldots$ because the multipliers $1-\psi_k$ are not
necessarily compactly supported in $\Theta_k^\perp$. Instead we note
that $K = \supp (1-\psi_1)(1-\psi_2) \subset \mathbb{R}^n$ is compact. So
$\Psi_{k+1} \in C^\infty_0(B(K,1))$ for $k\geqslant 2$. Lemma
\ref{multiplierNorm} then implies that
\[
\norm{ \f{\Theta^\perp}{M_{\Psi_{k+1}}f} }_{\Theta(1,2)} \le
\frac{1}{\sqrt{2\pi}} \norm{ \finv{\Theta}{\Psi_{k+1}}
}_{\Theta(1,\infty)} \norm{ \f{\Theta^\perp}{f} }_{\Theta(1,2)}
\]
where the first norm is finite since $\finv{\Theta}{\Psi_{k+1}} \in
\mathscr{S}(\mathbb{R}^n)$. So the multipliers are bounded in all directions:
there are finite $C'_k =C'_k(\Theta,\Theta_1,\ldots,\Theta_k)$ such
that $\norm{ \f{\Theta^\perp}{M_{\Psi_k}f}}_{\Theta(1,2)} \le C'_k
\norm{ \f{\Theta^\perp}{f}}_{\Theta(1,2)}$ for all $k$ and any
$\Theta\in\mathbb{S}^{n-1}$.
For the last claim, sum the $\Psi_k$ to get
\[
\sum_k \Psi_k(\xi) = 1 - \prod_k
\big(1-\psi_k(\xi_{\Theta_k^\perp})\big).
\]
Since $\supp (1-\psi_k) \subset \overline{B}(\mathscr{D}_{\Theta_k},
2r_0)$ and the intersection of the latter is empty, the product
vanishes everywhere.
\end{proof}
We now have all the necessary ingredients for
the proof of the main theorem of this section.
\begin{proof}[Proof of Theorem \ref{th:other}]
Let $P$ be admissible of degree $N\geqslant 1$ and $P_N$ its principal term.
Then Propositions \ref{algProp} and \ref{epsNeighborhood} imply the
existence of a finite set of directions
$\Theta_k\in\mathbb{S}^{n-1}(\mathbb{R}^n) \setminus P_N^{-1}(0)$,
$k=1,\ldots,m$, an associated partition of unity $\Psi_k$ and a
constant $r_0>0$.
Set $f_k = M_{\Psi_k}f$. Then $f = \sum_k f_k$, $\f{}{f_k}(\xi) = 0$
when $d(\xi,\mathscr{D}_{\Theta_k}) \le r_0$, and
\[
\norm{ \f{\Theta^\perp}{f_k} }_{\Theta(1,2)} \le C_{k,\Theta} \norm{
\f{\Theta^\perp}{f} }_{\Theta(1,2)} \le C_{k,\Theta} \sqrt{d_s}
\norm{f}_{L^2(D_s)}
\]
for any $\Theta\in\mathbb{S}^{n-1}$ by Proposition
\ref{epsNeighborhood} and Lemma \ref{mixedToUniformNorms}.
By Condition \ref{twoCylindersCompatibility} of Definition
\ref{admissibleSymbol}, there is $\varepsilon>0$ such
that $\f{}{f_k} = 0$ on $\mathscr{B}_{\Theta_k, \varepsilon}$. Let
$u_k$ be the solution to $P(D)u_k = f_k$ given by Proposition
\ref{solExistence}. We have
\[
\norm{ \f{\Theta_k^\perp}{u_k} }_{\Theta_k(\infty,2)} \le
\frac{N}{\varepsilon} \norm{ \f{\Theta_k^\perp}{f_k}
}_{\Theta_k(1,2)}
\]
by that same proposition. The theorem follows by setting $u = \sum_k
u_k$ since
\[
\norm{u_k}_{L^2(D_r)} \le \sqrt{d_r} \norm{ \f{\Theta_k^\perp}{u_k}
}_{\Theta_k(\infty,2)}
\]
by Lemma \ref{mixedToUniformNorms}.
\end{proof}
\begin{remark}
The same proof gives $\norm{u}_{L^q(D_r)} \le C d_r^{1/q} d_s^{1/p}
\norm{\f{}{f}}_{L^p(\mathbb{R}^n)}$ if $p\le2\le q$ and $p^{-1}+q^{-1}=1$.
\end{remark}
\section{Examples}\label{sec:examples}
We describe estimates for a few specific PDE's below. Some of the
estimates follow directly from Theorem \ref{th:main} or
Theorem \ref{th:other}. Others illustrate how the
method can be applied in different settings.
\begin{example}
The inhomogeneous Helmholtz equation $(\Delta+k^2)u = f$
is the motivating example for this work. The equation is
rotation and translation invariant, and scales simply
under dilations. Estimates in weighted norms typically
share none of these properties\footnote{Homogeneous
weights, e.g. \(||\;|x|^{\delta}f||_{L^{2}}\), retain
scaling properties at the cost of allowing
singularities at the origin. They are invariant under
rotations about the origin, but not about any other
point.}. For this reason, the dependence of the
estimate on wavenumber \(k\), which is the physically
relevant parameter, is not clear. However, an estimate
that comes from Theorem \ref{th:main} or Theorem
\ref{th:other}, with \(k=1\), i.e.
\[
\norm{u}_{L^2(D_r)} \le C_{1}\sqrt{d_r d_s} \norm{f}_{L^2(D_s)}
\]
for \(f\) with $\supp f \subset D_s \subset \mathbb{R}^n$, immediately implies
\[
\norm{u}_{L^2(D_r)} \le C \frac{\sqrt{d_r d_s}}{k}
\norm{f}_{L^2(D_s)}
\]
by simply noting that $U(x) = u(kx)$ satisfies
\begin{eqnarray*}
(\Delta+k^2)U = k^{2}f(kx)
\end{eqnarray*}
and using the fact that, under the dilation \(x\mapsto kx\), diameters
scale like distance and \(L^{2}\) norms like distance to the
power \(\frac{n}{2}\); the bookkeeping is carried out below.
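In detail (the rescaled domains \(\widetilde{D}_{r}\),
\(\widetilde{D}_{s}\) are auxiliary notation introduced only for this
computation): suppose \((\Delta+1)u=f\) with \(\supp f\subset D_{s}\)
and \(\norm{u}_{L^2(D)}\le C_{1}\sqrt{d\,d_{s}}\norm{f}_{L^2(D_{s})}\)
for every bounded \(D\) of diameter \(d\). Set \(U(x)=u(kx)\) and
\(F(x)=k^{2}f(kx)\), so that \((\Delta+k^{2})U=F\) and
\(\supp F\subset\widetilde{D}_{s}:=k^{-1}D_{s}\), which has diameter
\(\widetilde{d}_{s}=d_{s}/k\). For any bounded \(\widetilde{D}_{r}\) of
diameter \(\widetilde{d}_{r}\),
\begin{eqnarray*}
\norm{U}_{L^{2}(\widetilde{D}_{r})}
&=& k^{-\frac{n}{2}}\norm{u}_{L^{2}(k\widetilde{D}_{r})}
\;\le\; k^{-\frac{n}{2}}C_{1}\sqrt{k\widetilde{d}_{r}\,d_{s}}\;\norm{f}_{L^{2}(D_{s})}
\\
&=& k^{-\frac{n}{2}}C_{1}\sqrt{k\widetilde{d}_{r}\,d_{s}}\;k^{\frac{n}{2}-2}\norm{F}_{L^{2}(\widetilde{D}_{s})}
\;=\; \frac{C_{1}}{k}\sqrt{\widetilde{d}_{r}\,\widetilde{d}_{s}}\;\norm{F}_{L^{2}(\widetilde{D}_{s})},
\end{eqnarray*}
which is the claimed estimate for the pair \((U,F)\), with the same constant.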
A second advantage is that \textit{diameter} in Theorems
\ref{th:main} and \ref{th:other} means the length of the
intersection of any line with \(D_{r}\) or \(D_{s}\). This
is particularly appropriate for a source that is supported
on a union of small sets that are far
apart\footnote{Locating well-separated sources and
scatterers is one of the most well-studied applied
inverse problems modelled by the Helmholtz
equation\cite{MUSIC}.}. In weighted norms, the parts of
the source that are far from the \textit{origin} at which
the weights are based will have large norm because of
their location, yet their contribution to the solution
\(u\) or its far field (asymptotics used in scattering
theory and inverse problems) is no larger than it would be
if they were located at the origin. Insisting that our
estimates share all the invariance properties of the
underlying PDE eliminates these artificial differences
between the physics and the mathematics\footnote{Honesty demands
that we acknowledge that our domain dependent estimates
provide semi-norms, rather than norms, so we are not
ready to give up weighted norms and Besov type norms
entirely.}.
Estimates of the $L^q$ norms of \(u\) in terms of $L^p$ norms of
\(\f{}f\) are sometimes useful as well \cite{corners}. For
$p^{-1}+q^{-1}=1$, $p\le 2 \le q$, our methods give
\[
\norm{u}_{L^q(D_r)} \le C k^{-1} d_r^{1/q} d_s^{1/p}
\norm{\f{}{f}}_{L^p(\mathbb{R}^n)}.
\]
\end{example}
\begin{example}
The bilaplacian is a fourth order operator that arises in the
theory of elasticity and in the modelling of fluid flow
(Stokes flow). We include a spectral parameter
\(\lambda\) and an
external force $f$:
\[
(\Delta^2 - \lambda^2)u = f.
\]
Let us show that the admissibility
conditions for Theorem \ref{th:other} given by Definition
\ref{admissibleSymbol} are satisfied.
Assume $\lambda > 0$ and write $\Delta^2-\lambda^2 = P(D)$, and so
$P(\xi) = \abs{\xi}^4 - \lambda^2$.
Let $\Theta\in\mathbb{S}^{n-1}$ and for $\xi_{\Theta^\perp} \in
\Theta^\perp$ write
\[
p(\tau) = P(\tau\Theta+\xi_{\Theta^\perp}) = (\tau^2 +
\abs{\xi_{\Theta^\perp}}^2 - \lambda) (\tau^2 +
\abs{\xi_{\Theta^\perp}}^2 + \lambda).
\]
The roots $\tau=\tau_j$ are easily seen to be
\begin{align*}
&\tau_1 = \sqrt{\lambda - \abs{\xi_{\Theta^\perp}}^2}, &&\tau_3 =
i\sqrt{\lambda + \abs{\xi_{\Theta^\perp}}^2}, \\ &\tau_2 =
-\sqrt{\lambda - \abs{\xi_{\Theta^\perp}}^2}, &&\tau_4 =
-i\sqrt{\lambda + \abs{\xi_{\Theta^\perp}}^2},
\end{align*}
where the branch of the square root has been chosen to have non-negative real
part and to map the negative real axis to the imaginary axis in the
upper half-plane.
The derivative in the direction $\Theta$ is given by $p'(\tau) =
4(\tau^2 + \abs{\xi_{\Theta^\perp}}^2)\tau$. Hence
\[
\abs{p'(\tau_j)} = 4 \lambda \abs{\lambda -
\abs{\xi_{\Theta^\perp}}^2}^{\frac{1}{2}}
\]
for $j=1,2$, and
\[
\abs{p'(\tau_j)} = 4 \lambda \sqrt{\lambda +
\abs{\xi_{\Theta^\perp}}^2}
\]
for $j=3,4$. Note that in the latter case $\abs{p'(\tau_j)} \geqslant 4
\lambda^{3/2}$ for all \(\xi_{\Theta^\perp}\).
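The root and derivative computations above can be checked symbolically;
in the sketch below \(m\) stands for \(\abs{\xi_{\Theta^\perp}}^{2}\),
and branch ambiguities in the square root are avoided by comparing
absolute values squared.
\begin{verbatim}
import sympy as sp

# p(tau) = (tau^2 + m - lambda)(tau^2 + m + lambda), with m = |xi_perp|^2.
tau, m, lam = sp.symbols('tau m lambda', positive=True)
p  = (tau**2 + m - lam)*(tau**2 + m + lam)
dp = sp.diff(p, tau)

print(sp.simplify(dp - 4*tau*(tau**2 + m)))                     # 0

r1 = sp.sqrt(lam - m)                                           # tau_1
r3 = sp.I*sp.sqrt(lam + m)                                      # tau_3
print(sp.simplify(dp.subs(tau, r1)**2 - 16*lam**2*(lam - m)))   # 0
d3 = dp.subs(tau, r3)
print(sp.simplify(d3*sp.conjugate(d3) - 16*lam**2*(lam + m)))   # 0
\end{verbatim}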
If $\varepsilon < 4\lambda^{3/2}$ we see that
\begin{align}\nonumber
\mathscr{B}_{\Theta,\varepsilon} &= \left\{ \xi\in\mathbb{R}^n \,\middle|\,
\lambda - \frac{\varepsilon^2}{16\lambda^2} \le
\abs{\xi_{\Theta^\perp}}^2 \le \lambda +
\frac{\varepsilon^2}{16\lambda^2} \right\}, \\\label{eq:90}
\mathscr{D}_\Theta &=
\left\{ \xi\in\mathbb{R}^n \,\middle|\, \abs{\xi_{\Theta^\perp}}^2 = \lambda
\right\}.
\end{align}
For any $r_0 > 0$, set $\varepsilon < 4 \lambda^{5/4} r_0^{1/2}$. For any $\xi
\in \mathscr{B}_{\Theta,\varepsilon}$ set $\zeta =
(\xi\cdot\Theta)\Theta + \lambda^{1/2} \xi_{\Theta^\perp} /
\abs{\xi_{\Theta^\perp}}$. Then $\zeta \in \mathscr{D}_\Theta$ and
\[
\abs{\xi-\zeta} = \abs{ \abs{\xi_{\Theta^\perp}} - \lambda^{1/2} } =
\frac{\abs{ \abs{\xi_{\Theta^\perp}}^2 - \lambda }}{
\abs{\xi_{\Theta^\perp}} + \lambda^{1/2} } \le
\frac{\varepsilon^2/(16\lambda^2)}{\lambda^{1/2}} < r_0
\]
and so $\mathscr{B}_{\Theta,\varepsilon} \subset B(\mathscr{D}_\Theta,
r_0)$ whenever $\varepsilon < \min( 4\lambda^{3/2}, 4 \lambda^{5/4}
r_0^{1/2} )$. Condition \ref{twoCylindersCompatibility} in Definition
\ref{admissibleSymbol} is thus satisfied. Condition
\ref{compactDirections} is an easy consequence of
\eqref{eq:90}. Combining the
estimate from Theorem \ref{th:other} with a
scaling argument similar to the previous example yields
\begin{eqnarray*}
\norm{u}_{L^2(D_r)} \le C \frac{\sqrt{d_r d_s}}{\lambda^{\frac{3}{2}}}
\norm{f}_{L^2(D_s)}.
\end{eqnarray*}
\end{example}
\begin{example}
The operator $P(D) = D_1^2D_2^2 - 1$ is not \emph{simply
characteristic}, and its zeros are not \emph{uniformly
simple}, as defined in definitions 4.2 and 6.2 by
Agmon and H\"ormander \cite{Agmon-Hormander} or Section
14.3.1 in H\"ormander's book \cite{Hormander2}. This is
because the characteristic variety $P^{-1}(0)$ has two
different branches approaching a common asymptote
(Figure \ref{exNonSimply}). Thus the Besov style
estimates established using uniform simplicity do not
apply to this operator. We show below that the conditions
in Definition \ref{admissibleSymbol} are satisfied, so that the
estimate of Theorem \ref{th:other} holds. As we remarked
in the introduction, the Besov style estimates of
\cite{Agmon-Hormander} are a specialization of
\eqref{eq:91}, and therefore a consequence of Theorem
\ref{th:other}.\\
\begin{figure}
\caption{Characteristic variety of $D_1^2D_2^2 - 1$.}
\label{exNonSimply}
\end{figure}
It is straightforward to check that \(P(\xi)\) is
nonsingular. We will verify the conditions in Definition
\ref{admissibleSymbol} for $\Theta\in\mathbb{S}^{n-1}$,
with $\Theta_1\neq 0$ and $\Theta_2 \neq 0$, and calculate
$\abs{\Theta\cdot\nabla P(\xi)}$ for every \(\xi\) on the characteristic
variety $P^{-1}(0)$ with a fixed $\xi_{\Theta^\perp}$ component. A
glance at Figure \ref{exNonSimply} shows that there will be
between two and four real \(\xi\)'s satisfying $P(\xi)=0$ that have
the same $\xi_{\Theta^\perp}$ component. We begin by
parameterizing the complex characteristic variety
\[
P^{-1}(0)=\{ (s, s^{-1}), (s, -s^{-1}) \in \mathbb{C}^2 \mid s\in\mathbb{C}, s\neq 0\}.
\]
Next we project each point in the variety,
$\xi = (s,\pm s^{-1})$, onto
$\Theta^\perp = \{ b(-\Theta_2, \Theta_1) \mid
b\in\mathbb{R}\}$. Its \(\Theta^\perp\) component is\footnote{Not
all complex roots will project to
$\Theta^\perp$ embedded in the reals. But we are only
interested in the part of the characteristic variety
that does.}
\begin{eqnarray}\label{eq:93}
\xi_{\Theta^\perp} = \big( -\Theta_2 s \pm \Theta_1 s^{-1} \big)
(-\Theta_2, \Theta_1) =: b(-\Theta_2,
\Theta_1).
\end{eqnarray}
To verify conditions about
$\mathscr{B}_{\Theta,\varepsilon}$, we want to
parameterize the points on the
variety \(P^{-1}(0)\) in terms of their
\(\xi_{\Theta^\perp}\) component, which is parameterized
by \(b\). So we use \eqref{eq:93} to solve for
$s = s(b)$. The four (complex) roots $\xi = (s(b), \pm
s(b)^{-1})$ of $P$ on the line defined by $\xi_{\Theta^\perp} =
b(-\Theta_2,\Theta_1)$ are
\begin{align*}
&\xi^{(1)} = \left( \frac{-b+\sqrt{b^2+4\Theta_1\Theta_2}}{2\Theta_2},
\frac{b+\sqrt{b^2+4\Theta_1\Theta_2}}{2\Theta_1} \right), \\
&\xi^{(2)} = \left( \frac{-b-\sqrt{b^2+4\Theta_1\Theta_2}}{2\Theta_2},
\frac{b-\sqrt{b^2+4\Theta_1\Theta_2}}{2\Theta_1} \right), \\
&\xi^{(3)} = \left( \frac{-b+\sqrt{b^2-4\Theta_1\Theta_2}}{2\Theta_2},
\frac{b+\sqrt{b^2-4\Theta_1\Theta_2}}{2\Theta_1} \right), \\
&\xi^{(4)} = \left( \frac{-b-\sqrt{b^2-4\Theta_1\Theta_2}}{2\Theta_2},
\frac{b-\sqrt{b^2-4\Theta_1\Theta_2}}{2\Theta_1} \right).
\end{align*}
The derivative in the direction $\Theta$ at any root $\xi$ having
$\xi_1\xi_2=\pm1$ is
\[
\Theta\cdot\nabla P(\xi) = \pm 2 (\Theta_2 \xi_1 + \Theta_1 \xi_2).
\]
Hence, after simplification,
\begin{align*}
&\Theta\cdot\nabla P(\xi^{(1)}) = 2\sqrt{b^2+4\Theta_1\Theta_2},\\
&\Theta\cdot\nabla P(\xi^{(2)}) = -2\sqrt{b^2+4\Theta_1\Theta_2},\\
&\Theta\cdot\nabla P(\xi^{(3)}) = -2\sqrt{b^2-4\Theta_1\Theta_2},\\
&\Theta\cdot\nabla P(\xi^{(4)}) = 2\sqrt{b^2-4\Theta_1\Theta_2}.
\end{align*}
So $\abs{\Theta\cdot\nabla P(\xi)} \geqslant 2 \abs{b^2 -
4\abs{\Theta_1\Theta_2}}^{1/2}$ at any root $\xi$ with
$\xi_{\Theta^\perp} = b(-\Theta_2,\Theta_1)$.
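The four roots, their projections onto \(\Theta^{\perp}\), and the
values of \(\Theta\cdot\nabla P\) can all be verified symbolically; the
sketch below treats \(\Theta_{1},\Theta_{2},b\) as positive symbols (the
normalization \(\Theta_{1}^{2}+\Theta_{2}^{2}=1\) is not needed for
these identities).
\begin{verbatim}
import sympy as sp

t1, t2, b = sp.symbols('Theta1 Theta2 b', positive=True)
x1, x2 = sp.symbols('xi1 xi2')
P  = x1**2*x2**2 - 1
Px = [sp.diff(P, x1), sp.diff(P, x2)]

s_plus, s_minus = sp.sqrt(b**2 + 4*t1*t2), sp.sqrt(b**2 - 4*t1*t2)
roots  = [((-b + s_plus)/(2*t2),  (b + s_plus)/(2*t1)),
          ((-b - s_plus)/(2*t2),  (b - s_plus)/(2*t1)),
          ((-b + s_minus)/(2*t2), (b + s_minus)/(2*t1)),
          ((-b - s_minus)/(2*t2), (b - s_minus)/(2*t1))]
expect = [2*s_plus, -2*s_plus, -2*s_minus, 2*s_minus]

for (v1, v2), e in zip(roots, expect):
    sub = {x1: v1, x2: v2}
    print(sp.simplify(P.subs(sub)),                       # 0: on the variety
          sp.simplify(-t2*v1 + t1*v2 - b),                # 0: Theta-perp component is b
          sp.simplify(t1*Px[0].subs(sub) + t2*Px[1].subs(sub) - e))  # 0
\end{verbatim}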
Now we have explicit descriptions of the sets that appear
in Definition \ref{admissibleSymbol} and can verify the
hypotheses of Theorem \ref{th:other}; namely,
\begin{align*}
&\mathscr{D}_\Theta = \{ \xi\in\mathbb{R}^2 \mid \xi_{\Theta^\perp} =
b(-\Theta_2,\Theta_1), \quad b^2 - 4\abs{\Theta_1\Theta_2} = 0 \},\\
&\mathscr{B}_{\Theta,\varepsilon} = \{ \xi\in\mathbb{R}^2 \mid
\xi_{\Theta^\perp} = b(-\Theta_2,\Theta_1), \quad \abs{b^2 -
4\abs{\Theta_1\Theta_2}} \le \varepsilon^2/4 \}
\end{align*}
as long as we choose
$\varepsilon^2 < 16 \abs{\Theta_1\Theta_2}$. For any
$r_0>0$ let
$\varepsilon^2 \le
8r_0\sqrt{\abs{\Theta_1\Theta_2}}$. Then if
$\xi \in \mathscr{B}_{\Theta,\varepsilon}$, we have (with
$\xi_{\Theta^\perp} = b(-\Theta_2,\Theta_1)$)
\[
d(\xi,\mathscr{D}_\Theta) \le \abs{b -
2\sqrt{\abs{\Theta_1\Theta_2}}} = \frac{\abs{b^2 -
4\abs{\Theta_1\Theta_2}}}{b + 2\sqrt{\abs{\Theta_1\Theta_2}}} \le
\frac{\varepsilon^2}{8\sqrt{\abs{\Theta_1\Theta_2}}} \le r_0
\]
for $b \geqslant 0$, and similarly $d(\xi,\mathscr{D}_\Theta) \le \abs{b +
2\sqrt{\abs{\Theta_1\Theta_2}}} \le r_0$ for $b\le 0$. Hence
$\mathscr{B}_{\Theta,\varepsilon} \subset
\overline{B}(\mathscr{D}_\Theta,r_0)$ for any $r_0>0$ if
$\varepsilon^2 \le 8r_0\sqrt{\abs{\Theta_1\Theta_2}}$ and
$\varepsilon^2 < 16\abs{\Theta_1\Theta_2}$, so we have
verified Condition \ref{twoCylindersCompatibility}, and
Condition \ref{compactDirections} is automatic in two
dimensions, so we are finished.
\end{example}
\goodbreak
\begin{example}
The Faddeev operator is ubiquitous in the area of inverse problems.
Its solution enables the construction of the so-called \emph{Complex
Geometric Optics} solutions to the Laplace equation that are used to
prove uniqueness for many inverse scattering and inverse boundary
value problems. See \cite{Faddeev65} for an early application to
scattering theory, Sylvester and Uhlmann \cite{Sylvester-Uhlmann} and
Nachmann \cite{Nachmann88} for its application to solving the
\emph{Calder\'on problem} \cite{Calderon}, and \cite{Uhlmann2009} for
a review of more recent developments in that area.
The simplest form, as introduced by Calder\'on is
\begin{eqnarray}\label{eq:94}
\left(\Delta + 2\zeta\cdot\nabla\right)u=f
\end{eqnarray}
with \(\zeta\in\mathbb{C}^{n}\) satisfying
\(\zeta\cdot\zeta=0\). It has complex coefficients, but
setting \(v=e^{i\Im{\zeta}\cdot x}u\) and
\(g=e^{i\Im{\zeta}\cdot x}f\) results in
\begin{eqnarray*}
\left(\Delta + 2\mathbb{R}e{\zeta}\cdot\nabla + |\Im{\zeta}|^{2}\right)v=g
\end{eqnarray*}
which has real coefficients. Moreover, \(u\) and \(v\)
have the same \(L^{p}\) norms, as do \(f\) and \(g\).
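The conjugation identity can be confirmed with a short symbolic
computation; the two-dimensional \(\zeta=(k,ik)\) used below is an
arbitrary choice satisfying \(\zeta\cdot\zeta=0\).
\begin{verbatim}
import sympy as sp

# Sample zeta = (k, i k): zeta.zeta = 0, Re zeta = (k, 0), Im zeta = (0, k).
x, y = sp.symbols('x y', real=True)
k = sp.symbols('k', positive=True)
v = sp.Function('v')(x, y)

u = sp.exp(-sp.I*k*y)*v                                       # u = e^{-i Im(zeta).x} v
lhs = (sp.diff(u, x, 2) + sp.diff(u, y, 2)
       + 2*(k*sp.diff(u, x) + sp.I*k*sp.diff(u, y)))          # (Delta + 2 zeta.grad) u
rhs = (sp.diff(v, x, 2) + sp.diff(v, y, 2)
       + 2*k*sp.diff(v, x) + k**2*v)                          # (Delta + 2 Re zeta.grad + |Im zeta|^2) v
print(sp.simplify(sp.exp(sp.I*k*y)*lhs - rhs))                # 0
\end{verbatim}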
The symbol and its gradient are
\begin{eqnarray*}
P(\xi) &=& -\xi\cdot(\xi-2i\mathbb{R}e{\zeta}) + |\Im{\zeta}|^{2}
\\
\nabla P &=& 2(-\xi+i\mathbb{R}e{\zeta})
\end{eqnarray*}
so \(\nabla P\) has no real zeros. Thus \(P\) has no
real double characteristics and Theorem \ref{th:main}
applies. Because the equation, and the estimates, dilate
simply, scaling again gives the exact dependence on \(\zeta\).
\begin{eqnarray}\label{eq:92}
||v||_{L^{2}(D_{r})}\le C
\frac{\sqrt{d_{r}d_{s}}}{|\mathbb{R}e{\zeta}|}||g||_{L^{2}(D_{s})}
\end{eqnarray}
with $\supp f \subset D_s$. Here $d_r,
d_s$ are the diameters of the open sets $D_r,
D_s$. We may, of course, replace \(v\) by \(u\) and \(g\)
by \(f\).\\
In some applications, the condition \(\zeta\cdot\zeta=0\)
is replaced by \(\zeta\cdot\zeta=\lambda\). As the
gradient of \(P\) is still nowhere vanishing, Theorem
\ref{th:main} still applies, and the estimates still
scale, but it is not clear how the estimates depend on the
ratio \(\frac{\mathbb{R}e{\zeta}}{\sqrt{\lambda}}\). A direct
calculation shows that \eqref{eq:92} still holds. In addition,
Remark \ref{mixedToUniformNormsLp} also applies here, so
we have for $\frac{1}{p}+\frac{1}{q}=1, p \le 2 \le q$,
\[
\norm{u}_{L^q(D_r)} \le \frac{ d_r^{1/q} d_s^{1/p} }{\abs{\mathbb{R}e{\zeta}}}
\norm{ \f{}{f} }_{L^p(\mathbb{R}^n)}.
\]
Equation \eqref{eq:94} has a special direction. We
expect a solution to decay exponentially in
the direction $\Theta = \mathbb{R}e \zeta / \abs{\mathbb{R}e \zeta}$,
so an anisotropic estimate is natural here.
Taking the Fourier transform in the \(\Theta^{\perp}\) hyperplane
reduces \eqref{eq:94} to an ordinary
differential equation which can be factored into the product of two
first order operators. Then using \eqref{eq:45} for one of the factors
and \eqref{eq:46} for the other gives the estimate
\begin{equation}\label{eq:anisoFaddeev}
\norm{\f{\Theta^\perp}{u}}_{\Theta(\infty,2)} \le
\frac{\norm{\f{\Theta^\perp}{f}}_{\Theta(1,2)}}
{\inf_{\xi_{\Theta^\perp} \in \Theta^\perp} \abs{\abs{\mathbb{R}e\zeta} +
\sqrt{\abs{\Im\zeta+\xi_{\Theta^\perp}}^2 - \lambda}}} \le
\frac{\norm{\f{\Theta^\perp}{f}}_{\Theta(1,2)}}{\abs{\mathbb{R}e\zeta}}
\end{equation}
when $\lambda\in\mathbb{R}$. This estimate implies \eqref{eq:92} by
\eqref{eq:71}.
\end{example}
Theorems \ref{th:main} and \ref{th:other} apply to
scalar valued PDE's only, but the method can be applied
to systems. The next proposition could be substantially
more general, but it is enough to establish estimates for
the Dirac system.
\begin{proposition}\label{th:dirac}
Consider a constant coefficient first order system
\begin{eqnarray}
\nonumber
\bm A(D) =
\sum_{j=1}^{n}\bm A_{j}\frac{\partial}{\partial x_{j}} + \bm B
\end{eqnarray}
with square matrices $\bm A_1,\ldots,\bm A_n, \bm B \in \mathbb{C}^{m\times m}$ (for some \(m\)) and suppose that, for some \(k\),
\begin{eqnarray}
\nonumber
\bm M(\xi) = \bm A_{k}^{-1}\left(i\sum_{j\ne k}\bm A_{j}\xi_{j}+\bm B\right)
\end{eqnarray}
is normal for all \(\xi\)\footnote{This holds, in particular, when the
matrices \(i\bm A_{k}^{-1}\bm A_{j}\), \(j\ne k\), and
\(\bm A_{k}^{-1}\bm B\) are all Hermitian, as in Example \ref{exDirac}.}. Then, there is a constant
\(C\), such that for every \(f\), there exists \(u\)
solving
\begin{eqnarray}
\nonumber
\bm A(D)u &=&f
\\\na{and}\label{eq:75}
||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{u}||_{\Theta(\infty,2)}&\le&C||\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{f}||_{\Theta(1,2)}
\end{eqnarray}
where \(\Theta\) is the unit vector in the \(k\)th
coordinate direction, and consequently, for \(f\)
supported in \(D_{r}\) and any \(D_{s}\),
\begin{equation}
||u||_{L^2(D_{s})}\le C\sqrt{d_{r}d_{s}} ||f||_{L^2(D_{r})}
\end{equation}
where \(d_{i}\) is the diameter of \(D_{i}\) and \(C\) is a constant
that depends only on \(\bm A(D)\) and the dimension \(n\).
\end{proposition}
\begin{proof}
We take the partial Fourier transform in the \(\Theta^\perp\)
hyperplane, and note that the vector \(\tilde{u}:=\ensuremath{{\mathscr{F}}_{\Theta^{\perp}}}{u}\) must satisfy
\begin{eqnarray}
\label{eq:25}
\frac{\partial}{\partial x_{k}}\tilde{u} + \bm M(\xi)\tilde{u} &=& \bm A_{k}^{-1}\tilde{f}
\end{eqnarray}
and simply write the solution
\begin{eqnarray}\label{eq:74}
\tilde{u}(t,\xi) &=&
\int_{-\infty}^{t}e^{-\bm M(\xi)(t-s)}P^{+} \bm A_{k}^{-1}\tilde{f}(s,\xi)ds
\\
&-& \int_{t}^{\infty}e^{-\bm M(\xi)(t-s)}P^{-} \bm A_{k}^{-1}\tilde{f}(s,\xi)ds
\end{eqnarray}
where \(P^{+}(\xi)\)
is the orthogonal projection onto the
\(\mathbb{R}e{\lambda}\ge 0\)
eigenspace of \(\bm M(\xi)\) and \(P^{-}(\xi)\)
is the orthogonal projection onto the
\(\Re\lambda< 0\)
eigenspace. The projections need not be continuous
functions of \(\xi\), but they need only be measurable
for the formula to make sense. The fact that \(\bm M(\xi)\)
is normal guarantees that the sum of the projections is
the identity, and therefore that \(\tilde{u}\) really does
solve (\ref{eq:25}). The estimate (\ref{eq:75}) follows
immediately from the formula (\ref{eq:74}) and the fact
that the orthogonal projections have norm one or zero.
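Explicitly, since \(\bm M(\xi)\) is normal, each exponential in
(\ref{eq:74}) is a contraction on the range of the corresponding
projection, so
\begin{eqnarray}
\nonumber
|\tilde{u}(t,\xi)| \le \norm{\bm A_{k}^{-1}}\int_{\mathbb{R}}|\tilde{f}(s,\xi)|\,ds,
\end{eqnarray}
and taking the supremum in \(t\) followed by the \(L^{2}\) norm in
\(\xi\) gives (\ref{eq:75}) with \(C=\norm{\bm A_{k}^{-1}}\),
assuming as before that \(\Theta(\infty,2)\) and \(\Theta(1,2)\)
denote the \(L^{\infty}\) and \(L^{1}\) norms in the \(\Theta\)
direction of the \(L^{2}\) norm in the remaining variables.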
\end{proof}
\begin{example}\label{exDirac}
The $4\times 4$ Dirac operator may be written as a prolongation of
the curl operator
\begin{eqnarray}
\bm D =
\begin{pmatrix}
\nabla\times&-\nabla\\\nabla\cdot&0
\end{pmatrix}.
\end{eqnarray}
Alternatively, we may express the first order system as
\begin{eqnarray}
\nonumber
\bm D-i\omega \bm I = \sum_{j=1}^{3}\bm A_{j}\frac{\partial}{\partial x_{j}} -
i\omega \bm I
\end{eqnarray}
where $\bm I$ is the $4\times 4$ identity matrix and
\begin{eqnarray*}
\bm P =
\begin{pmatrix}
0&-1\\1&0
\end{pmatrix},
\quad
\bm A_{1} =
\begin{pmatrix}
0 &\bm P\\ \bm P & 0
\end{pmatrix},
\quad
\bm A_{2} =
\begin{pmatrix}
0&-\bm I \\ \bm I &0
\end{pmatrix},
\quad
\bm A_{3} =
\begin{pmatrix}
\bm P & 0 \\ 0 & -\bm P
\end{pmatrix}.
\end{eqnarray*}
It is straightforward to verify that each \(\bm A_{j}\) is
skew, \(\bm A_{j}^{2}=-\bm I\) and that \(\bm A_{i} \bm A_{j}=\pm \bm A_{k}\)
when all three indices are different. These facts
guarantee the hypotheses of Proposition \ref{th:dirac},
and hence the estimate (\ref{eq:75}) for a solution \(u\)
of
\begin{eqnarray}
\nonumber
\left(\bm D-i\omega \bm I\right)u = f.
\end{eqnarray}
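For instance, a direct block computation verifies the first two of these
relations:
\begin{eqnarray*}
\bm A_{1}\bm A_{2} =
\begin{pmatrix}
0 &\bm P\\ \bm P & 0
\end{pmatrix}
\begin{pmatrix}
0&-\bm I \\ \bm I &0
\end{pmatrix}
=
\begin{pmatrix}
\bm P & 0 \\ 0 & -\bm P
\end{pmatrix}
= \bm A_{3},
\qquad
\bm A_{1}^{2} =
\begin{pmatrix}
\bm P^{2} & 0 \\ 0 & \bm P^{2}
\end{pmatrix}
= -\bm I,
\end{eqnarray*}
since \(\bm P^{2}=-\bm I\); the remaining products are checked in the
same way.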
\end{example}
\begin{example}[Non-Example]
We show that the estimates \eqref{eq:53} do not hold for
the Laplacian in 3 dimensions, which has a double
characteristic. Suppose that \(f\) is compactly
supported and
\begin{eqnarray*}
\Delta u = f.
\end{eqnarray*}
In 3 dimensions,
\begin{eqnarray*}
u(x) = \int\frac{f(y)}{|x-y|}dy + H(x)
\end{eqnarray*}
where \(H\) is a harmonic polynomial. For compactly
supported \(f\), the estimates \eqref{eq:92} would imply
that \(u\) grows no faster than \(1/|x|\), so \(H\) must
be zero. We choose \(f\) to be identically one on the ball
\(B_{A}(0)\) of radius \(A\) centered at the origin. In this case,
\(||f||_{L^{2}(B_{A}(0))}\) is
\(\sqrt{\frac{4}{3}\pi{}A^{3}}\). We next compute
\(||u||_{L^{2}(B_{R}(c))}\), with \(R\gg A\) and
\(|c| = 2R\). For \(x\in B_{R}(c)\)
\begin{eqnarray*}
|u(x)| &>& \frac{1}{2}\frac{\int f(y)}{R}
\\\na{so that}
||u||_{L^{2}(B_{R}(c))}&\ge&\frac{1}{2}
\frac{(\frac{4}{3}\pi{}A^{3})(\sqrt{\frac{4}{3}\pi{}R^{3}})}{R}
=\frac{1}{2} (\frac{4}{3}\pi)^{\frac{3}{2}} A^{3}R^{\frac{1}{2}}
\end{eqnarray*}
Estimate \eqref{eq:53} would imply that
\begin{eqnarray*}
\frac{1}{2}(\frac{4}{3}\pi)^{\frac{3}{2}} A^{3}R^{\frac{1}{2}}
\le C\sqrt{AR}A^{\frac{3}{2}} = CA^{2}R^{\frac{1}{2}}
\end{eqnarray*}
which is impossible for large \(A\).
\end{example}
\section{Conclusions}
We have introduced a technique for proving some simple,
translation invariant estimates, which scale naturally,
and can therefore be directly interpreted for physical
systems and remain meaningful in any choice of units.
Such estimates are necessary because physical
principles dictate that the fields should store
finite energy in a bounded region (i.e. solutions should
be locally \(L^{2}\)) and radiate finite power, which
implies that they should decay at least as fast as
\(r^{-\frac{n-1}{2}}\) near infinity. We have
replaced weighted norms by estimates on bounded regions
which depend on the diameter of these regions. Because
the estimates depend on natural geometric quantities,
which rotate, dilate and translate in natural ways, the
estimates themselves have the same symmetries as the
underlying PDE models. The \(L^{2}\) estimates are based on anisotropic
estimates that are analogous to those that hold for a
parameterized ODE, so it is reasonable to expect them to
hold for all simply characteristic PDE, but we have not
proven any theorems in that generality, nor produced
examples to show that more restrictions are
necessary. Indeed, we expect that these estimates are true
for many more PDE's and systems than we have covered
here.\\
Theorem \ref{th:main} can certainly be extended to allow
first order terms with complex coefficients using the
change of dependent variable in the line following
\eqref{eq:94}, but we do not know if we can allow other
complex coefficients as well.\\
Theorem \ref{th:other} includes many technical
assumptions that we doubt are necessary. The hypothesis
that the characteristic variety is non-singular over
\(\mathbb{C}^{n}\) rather than \(\mathbb{R}^{n}\) is clearly not
necessary, but we don't know of a simple replacement.
The admissibility conditions in Definition
\ref{admissibleSymbol} were chosen to facilitate the
proof, and enforce a certain uniform behavior outside
compact sets, somewhat similar to Agmon-H\"ormander's
\textit{uniformly simple} hypothesis. In two dimensions,
where Condition \ref{compactDirections} is automatically
satisfied, we are not aware of any nonsingular polynomial
$P:\mathbb{C}^2\to\mathbb{C}$ for which Condition
\ref{twoCylindersCompatibility} does not hold.
We have only given one example of a system of PDE's. The
estimates for the Dirac system were particularly easy
because, for any direction \(\Theta\),
the resulting model system was normal (this is the
equivalent of a non-vanishing discriminant for a single
high order equation). Other interesting systems of PDE,
e.g.\ Maxwell's equations, do not have this property.
\end{document}
\begin{document}
\title[]{Vanishing viscosity limits for axisymmetric flows with boundary}
\author[]{K. Abe}
\date{}
\address[K. ABE]{Department of Mathematics, Graduate School of Science, Osaka City University, 3-3-138 Sugimoto, Sumiyoshi-ku Osaka, 558-8585, Japan}
\email{[email protected]}
\subjclass[2010]{35Q35, 35K90}
\keywords{Navier-Stokes equations, Axisymmetric solutions, Vanishing viscosity limits, Euler equations}
\date{\today}
\maketitle
\begin{abstract}
We construct global weak solutions of the Euler equations in an infinite cylinder $\Pi=\{x\in \mathbb{R}^{3}\ |\ x_h=(x_1,x_2),\ r=|x_h|<1\}$ for axisymmetric initial data without swirl when initial vorticity $\omega_{0}=\omega^{\theta}_{0}e_{\theta}$ satisfies $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,3)$. The solutions constructed are H\"older continuous in the spatial variables in $\overline{\Pi}$ if, in addition, $\omega^{\theta}_{0}/r\in L^{s}$ for $s\in (3,\infty)$, and unique if $s=\infty$. The proof is by a vanishing viscosity method. We show that the Navier-Stokes equations subject to the Neumann boundary condition are globally well-posed for axisymmetric data without swirl in $L^{p}$ for all $p\in [3,\infty)$. It is also shown that the energy dissipation tends to zero if $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,2]$, and that Navier-Stokes flows converge to an Euler flow in $L^{2}$ locally uniformly for $t\in [0,\infty)$ if additionally $\omega^{\theta}_{0}/r\in L^{\infty}$. The $L^{2}$-convergence in particular implies the energy equality for the weak solutions.
\end{abstract}
\section{Introduction}
We consider the Navier-Stokes equations:
\begin{equation*}
\begin{aligned}
\partial_t u-\nu\Delta{u}+u\cdot \nabla u+\nabla{p}= 0,\quad \textrm{div}\ u&=0 \qquad \textrm{in}\ \Pi\times (0,\infty), \\
\nabla \times u\times n=0,\quad u\cdot n&=0\qquad \textrm{on}\ \partial\Pi\times (0,\infty), \\
u&=u_0\hspace{18pt} \textrm{on}\ \Pi\times\{t=0\},
\end{aligned}
\tag{1.1}
\end{equation*}\\
for the infinite cylinder
\begin{align*}
\Pi=\{x=(x_1,x_2,x_3)\in \mathbb{R}^{3}\ |\ x_h=(x_1,x_2),\ |x_h|<1 \ \}.
\end{align*}\\
Here, $n$ denotes the unit outward normal vector field on $\partial\Pi$ and $\nu>0$ is the kinematic viscosity.
We study the problem (1.1) for axisymmetric initial data. We say that a vector field $u$ is axisymmetric if $u(x)={}^{t}Ru(Rx)$ for $x\in \Pi$, $\eta\in [0,2\pi]$, $R=(e_r,e_{\theta},e_z)$ and $e_{r}={}^{t}(\cos\eta,\sin\eta,0)$, $e_{\theta}={}^{t}(-\sin\eta,\cos\eta,0)$, $e_{z}={}^{t}(0,0,1)$. By the cylindrical coordinate $(r,\theta,z)$, an axisymmetric vector field is decomposed into three terms $u=u^{r}e_{r}+u^{\theta}e_{\theta}+u^{z}e_{z}$ and the azimuthal component $u^{\theta}$ is called swirl velocity (e.g., \cite{MaB}). It is known that the Cauchy problem is globally well-posed for axisymmetric initial data without swirl in $H^{2}$ \cite{La68b}, \cite{UI}, \cite{LMNP}. See also \cite{Abidi} for $H^{1/2}$.
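For an axisymmetric vector field without swirl, $u=u^{r}e_{r}+u^{z}e_{z}$, a routine computation in cylindrical coordinates shows that the vorticity has only an azimuthal component,
\begin{align*}
\omega=\nabla\times u=(\partial_z u^{r}-\partial_r u^{z})e_{\theta}=\omega^{\theta}e_{\theta},
\end{align*}\\
which is the form of the initial vorticity assumed throughout this paper.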
The purpose of this paper is to study axisymmetric solutions in $L^{p}$. It is well known that the Cauchy problem is locally well-posed in $L^{p}$ for all $p\geq 3$ \cite{Kato84}. However, global well-posedness results are unknown even if the initial data is axisymmetric without swirl. For the two-dimensional case, the problem is globally well-posed in $L^{p}$ for all $p\geq 2$ (including $p=\infty$ \cite{GMS}), since the vorticity of two-dimensional flows is uniformly bounded. On the other hand, for axisymmetric flows without swirl, vorticity estimates are more involved due to the vortex stretching as $r\to\infty$.
Recently, global-in-time solutions of the Cauchy problem have been constructed in \cite{FengSverak} for axisymmetric data without swirl when initial vorticity $\omega_0=\omega^{\theta}_{0}e_{\theta}$ is a vortex ring, i.e., $\omega^{\theta}_{0}=\kappa\delta_{r_0,z_0}$ for $\kappa\in \mathbb{R}$ and a Dirac measure $\delta_{r_0,z_0}$ in the $(r,z)$-plane. See \cite{GallaySverak2} for the uniqueness. For such initial data, the initial velocity belongs to $L^{p}$ for $p\in (1,2)$ and $\textrm{BMO}^{-1}$ by the Biot-Savart law. For small data in $\textrm{BMO}^{-1}$, a global well-posedness result is known \cite{KT01}.
In this paper, we study axisymmetric solutions in the infinite cylinder $\Pi=\{r<1\}$, subject to the Neumann boundary condition. Since the cylinder is horizontally bounded, vorticity estimates are simpler than those in the whole space. We prove that vorticity of axisymmetric solutions without swirl to (1.1) is uniformly bounded in the infinite cylinder $\Pi$, and unique global-in-time solutions exist for large axisymmetric data without swirl in $L^{p}$ for all $p\in [3,\infty)$.
An important application of our well-posedness result is a vanishing viscosity limit as $\nu \to0$. We apply our global well-posedness result to (1.1) and construct global weak solutions of the Euler equations. Although local well-posedness results are well known for the Euler equations with boundary (see below Theorem 1.1), existence of global weak solutions was unknown. The well-posedness result for (1.1) in $L^{p}$ for $p\in [3,\infty)$ enables us to study weak solutions of the Euler equations when initial vorticity is in $L^{q}$ for $q\in [3/2,3)$ by the Biot-Savart law $1/p=1/q-1/3$.
To state a result, let $L^{p}_{\sigma}$ denote the $L^{p}$-closure of $C_{c,\sigma}^{\infty}$, the space of all smooth solenoidal vector fields with compact support in $\Pi$. The space $L^{p}_{\sigma}$ agrees with the space of all divergence-free vector fields whose normal trace is vanishing on $\mathbb{P}artial\Pi$ \cite{ST98}. By a local well-posedness result in the companion paper \cite{A6}, unique local-in-time solutions to (1.1) exist for $u_0\in L^{p}_{\sigma}$ and $p\in [3,\infty)$. Our first result is:
\begin{thm}
Let $u_0\in L^{p}_{\sigma}$ be an axisymmetric vector field without swirl for $p\in [3,\infty)$. Then, there exists a unique axisymmetric solution without swirl $u\in C([0,\infty); L^{p})\cap C^{\infty}(\overline{\Pi}\times (0,\infty))$ of (1.1) with some associated pressure $p\in C^{\infty}(\overline{\Pi}\times (0,\infty))$.
\end{thm}
We apply Theorem 1.1 to construct global weak solutions of the Euler equations:
\begin{equation*}
\begin{aligned}
\partial_t u+u\cdot \nabla u+\nabla{p}= 0,\quad \textrm{div}\ u&=0 \qquad \textrm{in}\ \Pi\times (0,\infty), \\
\quad u\cdot n&=0\qquad \textrm{on}\ \partial\Pi\times (0,\infty), \\
u&=u_0\hspace{18pt} \textrm{on}\ \Pi\times\{t=0\}.
\end{aligned}
\tag{1.2}
\end{equation*}
Unique existence of local-in-time solutions of the Euler equations $u\in C([0,T]; W^{k,q})\cap C^{1}([0,T]; W^{k-1,q})$ is known for sufficiently smooth initial data $u_0\in W^{k,q}$ with integers $k>1+n/q$ and $q\in (1,\infty)$, when $\Pi$ is smoothly bounded in $\mathbb{R}^{n}$ for $n\geq 2$ \cite{EbinMarsden}, \cite{BouBre74}, \cite{Te75}, \cite{KatoLai}. For axisymmetric data without swirl, it is known that local-in-time solutions for $u_0\in H^{s}$ ($s\geq 3$) are continued for all time \cite{Yanagisawa94}. Observe that for $k=2$ and $n=3$, the condition $q\in (3,\infty)$ is required in order to construct local-in-time unique solutions. We construct global weak solutions under the lower regularity condition $q\in [3/2,3)$; see below.
When $\Pi$ is a two-dimensional bounded and simply-connected domain (e.g., a unit disk), global weak solutions of the Euler equations are constructed in \cite{MY92} by a vanishing viscosity method for initial vorticity satisfying $\omega_0\in L^{q}$ and $q\in (1,2)$. For the two-dimensional case, the Neumann boundary condition in (1.1) is reduced to the condition $\omega=0$ and $u\cdot n=0$ on $\partial\Pi$, called the free condition \cite{Lions69} (\cite[p.129]{LionsBook}). The vanishing viscosity method subject to the free condition is studied in \cite{Bardos72}, \cite{VK81} for $\omega_0\in L^{2}$. The condition $q\in (1,2)$ implies that initial velocity belongs to $L^{p}$ for some $p\in (2,\infty)$ by the Biot-Savart law $u_{0}=\nabla^{\perp} (-\Delta_{D})^{-1}\omega_{0}$ for $1/p=1/q-1/2$. Here, $\nabla^{\perp}={}^{t}(\partial_2,-\partial_1)$ and $-\Delta_{D}$ denotes the Laplace operator subject to the Dirichlet boundary condition.
Our goal is to construct three-dimensional weak solutions in the infinite cylinder $\Pi$ for axisymmetric data without swirl when initial vorticity $\omega_{0}=\omega^{\theta}_{0}e_{\theta}$ satisfies $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,3)$. The assumption for $\omega^{\theta}_{0}/r$ is stronger than that for vorticity itself and implies that the initial velocity is in $L^{p}$ for some $p\in [3,\infty)$ by the Biot-Savart law $u_0=\nabla \times (-\Delta_{D})^{-1}\omega_{0}$ and $1/p=1/q-1/3$. For such initial data, unique global-in-time solutions to (1.1) exist by Theorem 1.1. Note that the condition $\omega^{\theta}_{0}/r\in L^{q}$ is weaker than $u_0\in W^{2,q}$ for $q\in [3/2,3)$ since $\omega^{\theta}_{0}/r=-\Delta u^{z}_{0}-\partial_r\omega^{\theta}_{0}$.
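For the reader's convenience, we record why the last identity holds: writing $\Delta=\partial_r^{2}+r^{-1}\partial_r+\partial_z^{2}$ on axisymmetric scalars and using $\omega^{\theta}_{0}=\partial_z u^{r}_{0}-\partial_r u^{z}_{0}$ together with $\textrm{div}\ u_0=\partial_r u^{r}_{0}+u^{r}_{0}/r+\partial_z u^{z}_{0}=0$, a direct computation gives
\begin{align*}
-\Delta u^{z}_{0}-\partial_r\omega^{\theta}_{0}
=-\frac{1}{r}\partial_r u^{z}_{0}-\partial_z^{2}u^{z}_{0}-\partial_z\partial_r u^{r}_{0}
=\frac{1}{r}\big(\partial_z u^{r}_{0}-\partial_r u^{z}_{0}\big)
=\frac{\omega^{\theta}_{0}}{r},
\end{align*}\\
where the second equality uses $\partial_r u^{r}_{0}=-u^{r}_{0}/r-\partial_z u^{z}_{0}$.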
Let $BC_{w}([0,\infty); L^{p})$ denote the space of bounded and weakly continuous (resp. weakly-star continuous) functions from $[0,\infty)$ to $L^{p}$ for $p\in (1,\infty)$ (resp. for $p=\infty$). Let $\mathbb{P}$ denote the Helmholtz projection on $L^{p}$ \cite{ST98}. We construct global weak solutions for $\omega^{\theta}_{0}/r\in L^{q}$ and $q\in [3/2,3)$, which are $L^{p}$-integrable and may not be continuous. Under the additional regularity assumptions $\omega^{\theta}_{0}/r\in L^{s}$ for $s\in(3,\infty)$ and $s=\infty$, the weak solutions are H\"older continuous and unique. The main result of this paper is the following:
\begin{thm}
Let $u_0\in L^{p}_{\sigma}$ be an axisymmetric vector field without swirl for $p\in [3,\infty)$ such that $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,3)$ and $1/p=1/q-1/3$.
\noindent
(i) (Existence) There exists a weak solution $u\in BC_{w}([0,\infty); L^{p})$ of (1.2) in the sense that $\nabla u\in BC_{w}([0,\infty); L^{q})$ and
\begin{align*}
\int_{0}^{\infty}\int_{\Pi}(u\cdot\partial_t \varphi +uu:\nabla \varphi)\textrm{d} x\textrm{d} t
=-\int_{\Pi}u_0\cdot \varphi_{0}\textrm{d} x \tag{1.3}
\end{align*}\\
for all $\varphi\in C^{1}_{c}(\overline{\Pi}\times [0,\infty))$ such that $\textrm{div}\ \varphi=0$ in $\Pi$ and $\varphi\cdot n=0$ on $\partial\Pi$ for $t\geq 0$, where $\varphi_0(x)=\varphi(x,0)$.
\noindent
(ii) (H\"older continuity) If $\omega^{\theta}_{0}/r\in L^{s}$ for $s\in (3,\infty)$, then $u\in BC([0,\infty); L^{s})$ satisfies $\nabla u\in BC_{w}([0,\infty); L^{s})$, $\mathbb{P}artial_t u\in L^{\infty}(0,\infty; L^{s})$ and
\begin{align*}
\partial_t u+\mathbb{P}u\cdot \nabla u=0\quad \textrm{on}\ L^{s}\quad \textrm{for a.e.}\ t>0. \tag{1.4}
\end{align*}\\
In particular, $u(\cdot,t)$ is bounded and H\"older continuous in $\overline{\Pi}$ of exponent $1-3/s$ for each $t\geq 0$.
\noindent
(iii) (Uniqueness) If, in addition, $\omega^{\theta}_{0}/r\in L^{\infty}$, then $\nabla \times u\in BC_{w}([0,\infty); L^{\infty})$ and the weak solution is unique.
\end{thm}
It is an interesting question whether the weak solutions constructed in Theorem 1.2 conserve the energy. Since the Poincar\'e inequality holds for the infinite cylinder (see Remarks 4.4 (ii)), the condition $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,2]$ implies the finite energy $u_0\in L^{p}_{\sigma}\cap L^{2}$ and the energy equality holds for global-in-time solutions to (1.1); see below (1.5). In the sequel, we consider the case $q\in [3/2,2]$.
In the Kolmogorov's theory of turbulence, it is a basic hypothesis that the energy dissipation tends to a positive constant at large Reynolds numbers. See, e.g., \cite{Frisch}. If the energy dissipation converges to a positive constant for global-in-time solutions $u_{\nu}$ to (1.1) as $\nu\to0$, we would obtain a weak solution strictly decreasing the energy as a vanishing viscosity limit. Unfortunately, due to a regularizing effect, the energy dissipation converges to zero at least under the initial condition $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,2]$. See Remarks 2.6 (ii) for $q\in [1,3/2)$. However, it is still non-trivial whether vanishing viscosity limits conserve the energy since they are no longer continuous.
In the sequel, we prove that global-in-time solutions to (1.1) converge to a limit in $L^{2}$ locally uniformly for $t\in [0,\infty)$ under the additional assumption $\omega^{\theta}_{0}/r\in L^{\infty}$. If the limit is a $C^{1}$-solution, the $L^{2}$-convergence to a limit is equivalent to the convergence of the energy dissipation \cite{Kato84vis}. The equivalence may not always hold if a limit is a weak solution. The assumption $\omega^{\theta}_{0}/r\in L^{\infty}$ does not imply that $\nabla u$ is bounded for the limit and at present is optimal in order to obtain the $L^{2}$-convergence. Once we know the $L^{2}$-convergence, the energy conservation immediately follows as a consequence.
\begin{thm}
Let $u_{0}\in L^{p}_{\sigma}\cap L^{2}$ be an axisymmetric vector field without swirl such that $\omega_{0}^{\theta}/r\in L^{q}$ for $q\in [3/2,2]$ and $1/p=1/q-1/3$. Let $u_{\nu}$ be a solution of (1.1) in Theorem 1.1.
\noindent
(i) (Energy dissipation)
The solution $u_{\nu}\in BC([0,\infty); L^{2})$ satisfies the energy equality
\begin{align*}
\int_{\Pi}|u_{\nu}|^{2}\textrm{d} x+2\nu \int_{0}^{t}\int_{\Pi}|\nabla u_{\nu}|^{2}\textrm{d} x\textrm{d} s=\int_{\Pi}|u_{0}|^{2}\textrm{d} x\qquad t\geq 0, \tag{1.5}
\end{align*}\\
and
\begin{align*}
\nu \int_{0}^{T}\int_{\Pi}|\nabla u_{\nu}|^{2}\textrm{d} x\textrm{d} s=O(\nu^{5/2-3/q})\quad \textrm{as}\ \nu\to0\quad \textrm{for each}\ T>0. \tag{1.6}
\end{align*}
\noindent
(ii) ($L^{2}$-convergence) Assume in addition that $\omega^{\theta}_{0}/r\in L^{\infty}$. Then,
\begin{align*}
\lim_{\nu,\ \mu\to0}\sup_{0\leq t\leq T}||u_{\nu}-u_{\mu}||_{L^{2}(\Pi)}=0. \tag{1.7}
\end{align*}\\
In particular, the limit $u\in BC([0,\infty); L^{2})$ satisfies the energy equality of (1.2):
\begin{align*}
\int_{\Pi}|u|^{2}\textrm{d} x=\int_{\Pi}|u_{0}|^{2}\textrm{d} x\qquad t\geq 0. \tag{1.8}
\end{align*}
\end{thm}
It is noted that there is a possibility that the energy equality (1.8) holds under a weaker assumption than $\omega^{\theta}_{0}/r\in L^{\infty}$ although we assumed it in order to prove the $L^{2}$-convergence (1.7). In fact, it is known as a celebrated Onsager's conjecture \cite{Onsager} that H\"older continuous weak solutions to the Euler equations of exponent $\alpha>1/3$ conserve the energy (but not necessarily if $\alpha\leq 1/3$). The conjecture is studied in \cite{Eyink} and the energy conservation is proved for weak solutions in the whole space under a stronger assumption. A simple proof is given in \cite{CET} under a weaker and natural assumption in the Besov space $u\in L^{3}(0,T; B_{3}^{\alpha,\infty})$ for $\alpha>1/3$. See \cite{DR}, \cite{Constantin08} for further developments and \cite{Eyink06} for a review. Recently, the energy conservation is proved in \cite{BT18} for weak solutions in a bounded domain in the H\"older space $u\in L^{3}(0,T; C^{\alpha}(\overline{\Pi}))$ for $\alpha>1/3$. The weak solutions constructed in Theorem 1.2 are indeed H\"older continuous of exponent $\alpha=1-3/s>1/3$ if in addition that $\omega^{\theta}_{0}/r\in L^{s}$ for $s\in (9/2,\infty]$. If the result of \cite{BT18} holds also for the infinite cylinder, the weak solutions in Theorem 1.2 satisfy (1.8) even for $s\in (9/2,\infty]$.
We outline the proofs of Theorems 1.1-1.3. By a local well-posedness result of (1.1) in \cite{A6}, there exist local-in-time smooth axisymmetric solutions without swirl $u\in C([0,T]; L^{p})\cap C^{\infty}(\overline{\Pi}\times (0,T])$ for $u _0\in L^{p}_{\sigma}$ and $p\in [3,\infty)$ satisfying the integral equation
\begin{align*}
u=e^{-t\nu A}u_0-\int_{0}^{t}e^{-(t-s)\nu A}\mathbb{P}\ (u\cdot \nabla u)(s)\textrm{d} s.
\end{align*}\\
Here, $A$ denotes the Stokes operator subject to the Neumann boundary condition. We establish an a priori estimate in $L^{p}$ based on the vorticity equation. Since the vorticity $\omega=\omega^{\theta}e_{\theta}$ vanishes on the boundary subject to the Neumann boundary condition, $\omega^{\theta}/r$ satisfies the drift-diffusion equation with the homogeneous Dirichlet boundary condition:
\begin{equation*}
\begin{aligned}
\partial_t \Big(\frac{\omega^{\theta}}{r}\Big)+u\cdot \nabla\Big(\frac{\omega^{\theta}}{r}\Big)-\nu \Big(\Delta+\frac{2}{r}\partial_r\Big)\Big(\frac{\omega^{\theta}}{r}\Big)&=0\quad \textrm{in}\ \Pi\times (0,T),\\
\frac{\omega^{\theta}}{r}&=0\quad \textrm{on}\ \partial\Pi\times (0,T).
\end{aligned}
\tag{1.9}
\end{equation*}\\
We prove the a priori estimate
\begin{align*}
\Big\|\frac{\omega^{\theta}}{r}\Big\|_{L^{r}(\Pi)}\leq \frac{C}{(\nu t)^{\frac{3}{2}(\frac{1}{q}-\frac{1}{r}) }}\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)}\qquad t>0,\ \nu>0, \tag{1.10}
\end{align*}\\
for $1\leq q\leq r\leq \infty$. The estimate (1.10) for $r=q$ is proved in \cite{UI} for $\Pi=\mathbb{R}^{3}$. Moreover, the decay estimate for $r\in [1,\infty]$ and $q=1$ is established in \cite{FengSverak}. We prove (1.10) for the infinite cylinder $\Pi$. Since local-in-time solutions of (1.1) are smooth for $t>0$, we may assume that $\omega^{\theta}_{0}/r$ is bounded. Then, the a priori estimate (1.10) for $r=q=\infty$ implies that vorticity is uniformly bounded in the infinite cylinder $\Pi=\{r<1\}$. Since $\mathbb{P}u\cdot \nabla u=\mathbb{P}\omega\times u$, by the Gronwall's inequality we obtain an exponential bound of the form
\begin{align*}
||u||_{L^{p}(\Pi)}\leq C||u_0||_{L^{p}(\Pi)} \exp\Big(C\Big\|\frac{\omega^{\theta}_0}{r}\Big\|_{L^{\infty}(\Pi)} t\Big)\quad t\geq 0.
\end{align*}\\
The estimate implies that the $L^{p}$-norm does not blow-up. Hence the local-in-time solutions are continued for all time. (Moreover, the solutions converge to zero in $L^{r}$ for $r\in (p,\infty)$ as time goes to infinity; see Remarks 2.6 (iii).)
The proof of Theorem 1.2 is based on the Biot-Savart law in the infinite cylinder. We show that axisymmetric vector fields without swirl satisfy
\begin{align*}
u=\nabla \times (-\Delta_{D})^{-1} (\nabla\times u), \tag{1.11}
\end{align*}
\begin{align*}
||u||_{L^{p}}+||\nabla u||_{L^{q}}\leq C||\nabla \times u||_{L^{q}}, \quad 1/p=1/q-1/3. \tag{1.12}
\end{align*}\\
The existence of global weak solutions (i) and regularity properties (ii) follow from the a priori estimate (1.10) for $r=q$ and (1.12) by taking a vanishing viscosity limit and applying an abstract compactness theorem. The uniqueness in Theorem 1.2 (iii) is based on the growth estimate of the $L^{r}$-norm
\begin{align*}
||\nabla u||_{L^{r}(\Pi)}&\leq Cr||\nabla \times u||_{L^{r}\cap L^{r_0}(\Pi)}, \tag{1.13}
\end{align*}\\
for $3<r_0< r<\infty$ with some absolute constant $C$. The estimate (1.13) is proved in \cite{Yudovich62} for bounded domains. We extend it to the infinite cylinder and adjust Yudovich's energy method of uniqueness \cite{Yudovich63} to solutions with infinite energy by a cut-off function argument.
The convergence of the energy dissipation (1.6) follows from the vorticity estimate (1.10) for $r=2$ and $q\in [3/2,2]$. The $L^{2}$-convergence (1.7) is based on the estimate (1.13). Since the condition $\omega^{\theta}_{0}/r\in L^{\infty}$ implies that $\nabla \times u_{\nu}$ is uniformly bounded for $\nu>0$ and $r>3$, we estimate the energy norm of $u_{\nu}-u_{\mu}$ for two solutions of (1.1) by using the estimate (1.13).
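To indicate where the rate in (1.6) comes from, note that, by (1.10) with $r=2$, the bound $r<1$ in $\Pi$ and the estimate (1.12), one has $\|\nabla u_{\nu}\|_{L^{2}}\leq C\|\omega^{\theta}_{\nu}/r\|_{L^{2}}\leq C(\nu t)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{2})}\|\omega^{\theta}_{0}/r\|_{L^{q}}$, so that, at least formally,
\begin{align*}
\nu\int_{0}^{T}\int_{\Pi}|\nabla u_{\nu}|^{2}\textrm{d} x\textrm{d} s
\leq C\nu^{1-3(\frac{1}{q}-\frac{1}{2})}\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)}^{2}\int_{0}^{T}s^{-3(\frac{1}{q}-\frac{1}{2})}\textrm{d} s
=O(\nu^{5/2-3/q}),
\end{align*}\\
since $3(1/q-1/2)\in [0,1/2]$ for $q\in [3/2,2]$. This heuristic is made precise in Section 6.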
This paper is organized as follows. In Section 2, we prove the vorticity estimate (1.10) for local-in-time solutions to (1.1). Since we only use the estimate (1.10) for $r=q=\infty$ in order to prove Theorem 1.1, we give a proof for the case $r\neq q$ in Appendix A. In Section 3, we prove the Biot-Savart law (1.11) and the estimate (1.13). In Section 4, we prove Theorem 1.2 (i) and (ii) by applying a vanishing viscosity method. In Section 5, we prove Theorem 1.2 (iii). In Section 6, we prove Theorem 1.3.
\section{Global smooth solutions with viscosity}
We prove Theorem 1.1. We first observe unique existence of local-in-time axisymmetric solutions without swirl for $u_0\in L^{p}_{\sigma}$ and $p\in [3,\infty)$. The vorticity estimate (1.10) for $r=q$ is obtained by integration by parts for $q\in [1,\infty)$ and a maximum principle for $q=\infty$. Throughout this section, we denote solutions of (1.1) by $u=u_{\nu}$, suppressing the dependence on $\nu>0$.
\subsection{Local-in-time solutions}
We set the Laplace operator subject to the Neumann boundary condition
\begin{align*}
&Bu=-\Delta u,\quad \textrm{for}\ u\in D(B),\\
&D(B)=\{u\in W^{2,p}(\Pi)\ |\ \nabla\times u\times n=0,\ u\cdot n=0\ \textrm{on}\ \partial\Pi\ \},
\end{align*}\\
It is proved in \cite[Lemma B.1]{A6} that the operator $-B$ generates a bounded $C_0$-analytic semigroup on $L^{p}$ ($1<p<\infty$) for the infinite cylinder $\Pi$. We set the Stokes operator
\begin{align*}
&Au=B u,\quad \textrm{for}\ u\in D(A),\\
&D(A)=L^{p}_{\sigma}\cap D(B).
\end{align*}\\
Since $Au\in L^{p}_{\sigma}$ by the Neumann boundary condition, the operator $-A$ generates a bounded $C_0$-analytic semigroup on the solenoidal vector space $L^{p}_{\sigma}$. By the analyticity of the semigroup, we are able to construct local-in-time solutions satisfying the integral form
\begin{align*}
u=e^{-t\nu A}u_0-\int_{0}^{t}e^{-(t-s)\nu A}\mathbb{P}\ (u\cdot \nabla u)(s)\textrm{d} s. \tag{2.1}
\end{align*}\\
By a standard argument using a fractional power of the Stokes operator, it is not difficult to see that all derivatives of the mild solution belong to the H\"older space $C^{\mu}((0,T]; L^{s})$ for $\mu\in (0,1/2)$ and $s\in (3,\infty)$. Hence the mild solution is smooth for $t>0$ and satisfies (1.1).
\begin{lem}
For an axisymmetric vector field without swirl $u_0\in L^{p}_{\sigma}$ and $p\in [3,\infty)$, there exists $T>0$ and a unique axisymmetric mild solution without swirl $u\in C([0,T]; L^{p})\cap C^{\infty}(\overline{\Pi}\times (0,T])$ of (1.1).
\end{lem}
\begin{proof}
The unique existence of local-in-time smooth mild solutions is proved in \cite[Theorem 1.1]{A6}. The axial symmetry follows from the uniqueness. We consider a rotation operator $U: f\longmapsto {}^{t}Rf(Rx)$ for $R=(e_{r}(\eta), e_{\theta}(\eta),e_{z})$ and $\eta\in [0,2\mathbb{P}i]$. Since the Stokes semigroup $e^{-t\nu A}$ and the Helmholtz projection $\mathbb{P}$ are commutable with the operator $U$ (see \cite[Proposition 2.6]{AS}), by multiplying $U$ by (2.1) we see that $Uu$ is a mild solution for the same axisymmetric initial data $u_0$. By the uniqueness of mild solutions, the function $Uu$ agrees with $u$. Hence $u(x,t)={}^{t}Ru(Rx,t)$ for $\eta\in [0,2\mathbb{P}i]$ and $u$ is axisymmetric.
It is not difficult to see that $u$ is without swirl. By the Neumann boundary condition in (1.1), we see that an axisymmetric solution $u=u^{r}e_{r}+u^{\theta}e_{\theta}+u^{z}e_{z}$ satisfies
\begin{align*}
u^{r}=0,\quad \partial_r u^{\theta}+u^{\theta}=0,\quad \partial_r u^{z}=0\qquad \textrm{on}\ \{r=1\}. \tag{2.2}
\end{align*}\\
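Indeed, on $\{r=1\}$ we have $n=e_{r}$, so that $u\cdot n=u^{r}=0$, while $\nabla\times u=-\partial_z u^{\theta}e_{r}+(\partial_z u^{r}-\partial_r u^{z})e_{\theta}+\frac{1}{r}\partial_r(ru^{\theta})e_{z}$ gives
\begin{align*}
(\nabla\times u)\times n=\frac{1}{r}\partial_r(ru^{\theta})e_{\theta}-(\partial_z u^{r}-\partial_r u^{z})e_{z}\qquad \textrm{on}\ \{r=1\}.
\end{align*}\\
Hence the Neumann boundary condition yields $\partial_r u^{\theta}+u^{\theta}=0$ on $\{r=1\}$, and since $u^{r}=0$ on $\{r=1\}$ implies $\partial_z u^{r}=0$ there, also $\partial_r u^{z}=0$.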
Since $u$ is smooth for $t>0$ and $u^{\theta}_{0}=0$, by a fundamental calculation, we see that $\varphi=u^{\theta}e_{\theta}\in C([0,T]; L^{p})\cap C^{\infty}(\overline{\Pi}\times (0,T))$ satisfies
\begin{align*}
\partial_t \varphi-\nu\Delta \varphi +u\cdot \nabla \varphi-(\partial_r u^{r}+ \partial_z u^{z})\varphi&=0\quad \textrm{in}\ \Pi\times (0,T),\\
\partial_n\varphi+\varphi&=0\quad \textrm{on}\ \partial\Pi\times (0,T),\\
\varphi&=0\quad \textrm{on}\ \Pi\times \{t=0\}.
\end{align*}\\
Since the Laplace operator $-\Delta_{R}$ with the Robin boundary condition generates a $C_0$-analytic semigroup on $L^{p}$ \cite{ADN} (\cite[Theorem 3.1.3]{Lunardi}), by the uniqueness of the inhomogeneous heat equation, the function $\varphi$ satisfies the integral form
\begin{align*}
\varphi=-\int_{0}^{t}e^{(t-s)\nu \Delta_{R}}(u\cdot \nabla \varphi-(\partial_r u^{r}+ \partial_z u^{z})\varphi)\textrm{d} s.
\end{align*}\\
Since $u\in C([0,T]; L^{p})$ and $t^{1/2}\nabla u\in C([0,T]; L^{p})$, it is not difficult to show that $\varphi\equiv 0$ by estimating $L^{p}$-norms of $\varphi$. Hence the local-in-time solution $u$ is axisymmetric without swirl.
\end{proof}
In order to prove Theorem 1.3 later in Section 6, we show that local-in-time solutions satisfy the energy equality (1.5) for initial data with finite energy.
\begin{prop}
For axisymmetric initial data without swirl $u_0\in L^{p}_{\sigma}\cap L^{2}$ for $p\in [3,\infty)$, the local-in-time solution $u$ satisfies
\begin{align*}
u,\ t^{1/2}\nabla u\in C([0,T]; L^{p}\cap L^{2}), \tag{2.3}
\end{align*}\\
and the energy equality (1.5) for $t\geq 0$.
\end{prop}
\begin{proof}
We prove (2.3). The energy equality (1.5) follows from (2.2) and integration by parts. We give a proof for the case $p=3$ since we are able to prove the case $p\in (3,\infty)$ by a similar way. We may assume that $\nu =1$. We invoke an iterative argument in \cite[Theorem 5.2]{A6}. We use regularizing estimates of the Stokes semigroup \cite[Lemma 5.1]{A6},
\begin{align*}
||\partial_x^{k}e^{-tA}f||_{L^{2}}\leq \frac{C}{t^{\frac{3}{2}(\frac{1}{r}-\frac{1}{2})+\frac{|k|}{2} }}||f||_{L^{r}} \tag{2.4}
\end{align*}\\
for $t\leq T_0,\ |k|\leq 1$ and $r\in [6/5,2]$. We set a sequence $\{u_j\}$ as usual by $u_1=e^{-tA}u_0$,
\begin{align*}
u_{j+1}=e^{-tA}u_0-\int_{0}^{t}e^{-(t-s)A}\mathbb{P}u_{j}\cdot \nabla u_{j}\textrm{d} s,\quad j\geq 1,
\end{align*} \\
and the constants
\begin{align*}
K_{j}&=\sup_{0\leq t\leq T}t^{\gamma} ||u_{j}||_{L^{q}}(t) ,\\
M_{j}&=\sup_{0\leq t\leq T}( ||u_{j}||_{L^{2}}+t^{1/2}||\nabla u_{j}||_{L^{2}} ),
\end{align*}\\
for $\gamma =3/2(1/3-1/q)$ and $q\in (3,\infty)$. Then, we have $K_j\leq K_1$ for all $j\geq 1$ for sufficiently small $T>0$. We set $1/r=1/2+1/q$. Since $r\in (6/5,2]$, applying (2.4) and the H\"older inequality imply that
\begin{align*}
||u_{j+1}||_{L^{2}}
&\leq ||e^{-tA}u_0||_{L^{2}}+\int_{0}^{t}\frac{C}{(t-s)^{\frac{3}{2}(\frac{1}{r}-\frac{1}{2}) }}||u_j\cdot \nabla u_j||_{L^{r}}\textrm{d} s\\
&\leq ||e^{-tA}u_0||_{L^{2}}+C'K_jM_j.
\end{align*}\\
We estimate the $L^{2}$-norm of $\nabla u_{j+1}$ in a similar way and obtain
\begin{align*}
M_{j+1}\leq M_1+CK_1M_j.
\end{align*}\\
We take $T>0$ sufficiently small so that $CK_1\leq 1/2$ and obtain the uniform bound $M_{j+1}\leq 2M_1$ for all $j\geq 1$. Since the sequence $\{u_j\}$ converges to a limit $u\in C([0,T]; L^{3})$ such that $t^{1/2}\nabla u \in C([0,T]; L^{3})$, in a similar way, the uniform estimate for $M_j$ is inherited by the limit. We obtained (2.3).
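As for the energy equality (1.5), testing (1.1) with $u$, the pressure and convection terms integrate to zero by $\textrm{div}\ u=0$ and $u\cdot n=0$, and the boundary term $\int_{\partial\Pi}u\cdot \partial_n u$ arising from the viscous term vanishes by (2.2) and $u^{\theta}\equiv 0$. Hence
\begin{align*}
\frac{1}{2}\frac{\textrm{d}}{\textrm{d} t}\int_{\Pi}|u|^{2}\textrm{d} x+\nu\int_{\Pi}|\nabla u|^{2}\textrm{d} x=0\qquad t>0,
\end{align*}\\
which integrates to (1.5).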
\end{proof}
\subsection{Vorticity estimates}
We shall prove the vorticity estimate (1.10) for $r=q\in [1,\infty]$. We first show the case $q\in [1,\infty)$ by integration by parts.
\begin{lem}
Let $u$ be an axisymmetric solution in Lemma 2.1. Assume that $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [1,\infty]$. Then, the estimate
\begin{align*}
\Big\|\frac{\omega^{\theta}}{r}\Big\|_{L^{q}(\Pi)}\leq \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)}\quad t>0 \tag{2.5}
\end{align*}\\
holds.
\end{lem}
\begin{prop}
The estimate (2.5) holds for $q\in [1,\infty)$.
\end{prop}
\begin{proof}
We observe that $\omega^{\theta}/r$ is smooth for $t>0$ and satisfies
\begin{equation*}
\begin{aligned}
\partial_t \Big(\frac{\omega^{\theta}}{r}\Big)+u\cdot \nabla\Big(\frac{\omega^{\theta}}{r}\Big)-\nu \Big(\Delta+\frac{2}{r}\partial_r\Big)\Big(\frac{\omega^{\theta}}{r}\Big)&=0\quad \textrm{in}\ \Pi\times (0,T),\\
\Big(\frac{\omega^{\theta}}{r}\Big)&=0\quad \textrm{on}\ \partial\Pi\times (0,T).
\end{aligned}
\tag{2.6}
\end{equation*}\\
In order to differentiate the $L^{q}$-norm of $\Omega=\omega^{\theta}/r$, we approximate the absolute value function $\psi(s)=|s|$. For an arbitrary $\varepsilon>0$, we set a smooth non-negative convex function $\psi_{\varepsilon}(s)=(s^{2}+\varepsilon^{2})^{1/2}-\varepsilon$ for $s\in \mathbb{R}$, i.e., $0\leq \psi_{\varepsilon}\leq |s|$, $\ddot{\psi}_{\varepsilon}>0$. The function $\psi_{\varepsilon}$ satisfies $\psi_{\varepsilon}(0)=\dot{\psi}_{\varepsilon}(0)=0$. We differentiate $\psi_{\varepsilon}^{q}(\Omega)$ to see that
\begin{align*}
\partial_t \psi^{q}_{\varepsilon}(\Omega)&=q \partial_t \Omega \dot{\psi}_{\varepsilon}(\Omega){\psi}^{q-1}_{\varepsilon}(\Omega),\\
\nabla \psi^{q}_{\varepsilon}(\Omega)&=q \nabla \Omega \dot{\psi}_{\varepsilon}(\Omega){\psi}^{q-1}_{\varepsilon}(\Omega).
\end{align*}\\
Since $\psi_{\varepsilon}(\Omega)=\dot{\psi}_{\varepsilon}(\Omega)=0$ on $\partial\Pi$ by the boundary condition, integration by parts yields
\begin{align*}
\frac{d}{dt}\int_{\Pi}\psi_{\varepsilon}^{q}(\Omega)\textrm{d} x
&=q\int_{\Pi}\big(-u\cdot \nabla \Omega+\nu \Delta \Omega+2\nu \frac{1}{r}\partial_r\Omega\big)\dot{\psi}_{\varepsilon}(\Omega)\psi^{q-1}_{\varepsilon}(\Omega)\textrm{d} x\\
&=-\int_{\Pi}u\cdot \nabla \psi_{\varepsilon}^{q}(\Omega)\textrm{d} x+\nu q\int_{\Pi} \Delta \Omega\dot{\psi}_{\varepsilon}(\Omega)\psi^{q-1}_{\varepsilon}(\Omega)\textrm{d} x+2\nu\int_{\Pi} \frac{1}{r}\partial_r\psi^{q}_{\varepsilon}(\Omega)\textrm{d} x.
\end{align*}\\
The first term vanishes by the divergence-free condition. Since $\psi_{\varepsilon}$ is non-negative and convex, we see that
\begin{align*}
\int_{\Pi} \Delta \Omega\dot{\psi}_{\varepsilon}(\Omega)\psi^{q-1}_{\varepsilon}(\Omega)\textrm{d} x
&=-\int_{\Pi}|\nabla \Omega|^{2}\big((q-1)\psi^{q-2}_{\varepsilon}(\Omega)|\dot{\psi}_{\varepsilon}(\Omega)|^{2}+\psi_{\varepsilon}^{q-1}(\Omega)\ddot{\psi}_{\varepsilon}(\Omega) \big)\textrm{d} x\\
&\leq -\frac{4}{q}\Big(1-\frac{1}{q}\Big)\int_{\Pi}\big|\nabla \psi_{\varepsilon}(\Omega)^{\frac{q}{2}}\big|^{2}\textrm{d} x,
\end{align*}
\begin{align*}
&\int_{\Pi}\frac{1}{r}\partial_r\psi^{q}_{\varepsilon}(\Omega)\textrm{d} x
=2\pi \int_{\mathbb{R}}\textrm{d} z\int_{0}^{1}\partial_r\psi^{q}_{\varepsilon}(\Omega)\textrm{d} r
=-2\pi \int_{\mathbb{R}}\psi^{q}_{\varepsilon}(\Omega(0,z,t))\textrm{d} z\leq 0,
\end{align*}\\
for $\Omega=\Omega(r,z,t)$. Hence we have
\begin{align*}
\frac{d}{dt}\int_{\Pi}\psi_{\varepsilon}^{q}(\Omega)\textrm{d} x
+4\nu\Big(1-\frac{1}{q}\Big)\int_{\Pi}\big|\nabla \psi_{\varepsilon}(\Omega)^{\frac{q}{2}}\big|^{2}\textrm{d} x
\leq 0. \tag{2.7}
\end{align*}\\
We integrate in $[0,t]$ and estimate
\begin{align*}
\int_{\Pi}\psi_{\varepsilon}^{q}(\Omega)\textrm{d} x\leq \int_{\Pi}\psi_{\varepsilon}^{q}(\Omega_0)\textrm{d} x \quad t\geq 0.
\end{align*}\\
Since $\psi_{\varepsilon}(s)$ monotonically converges to $\psi(s)=|s|$, sending $\varepsilon\to 0$ implies the desired estimate for $q\in [1,\infty)$.
\end{proof}
Following \cite{KNSS}, \cite{FengSverak}, we prove the case $q=\infty$ by a maximum principle.
\begin{proof}[Proof of Lemma 2.3]
We apply a maximum principle for $\Omega=\omega^{\theta}/r$ by regarding $\Delta_{x}+2r^{-1}\partial_r$ as the Laplace operator in $\mathbb{R}^{5}$. Let $S^{3}$ denote the unit sphere in $\mathbb{R}^{4}$. Let $(r,\theta,z)$ be the cylindrical coordinates for the Cartesian coordinates $x=(x_1,x_2,x_3)$. For $r>0$, $z\in \mathbb{R}$ and $\tau \in S^{3}$, we set new variables $y=(y_{h},y_5)$ and $y_h=(y_1,y_2,y_3,y_4)$ by $y_h=r\tau$ and $y_5=z$. Then, the gradient and the Laplace operator are written as $\nabla_{y}=\tau \partial_r +\nabla_{S^{3}}+e_{5}\partial_z$ and $\Delta_{y}=\partial_r^{2}+3r^{-1}\partial_r+\Delta_{S^{3}}+\partial_{z}^{2}$ with the surface gradient $\nabla_{S^{3}}$. Let $e_5={}^{t}(0,0,0,0,1)$. We define $\tilde{u}(y,t)$ and $\tilde{\Omega}(y,t)$ by
\begin{align*}
&\tilde{u}(y,t)=u^{r}(r,z,t)\tau+u^{z}(r,z,t)e_{5},\\
&\tilde{\Omega}(y,t)=\Omega(r,z,t).
\end{align*}\\
Since $\tilde{u}\cdot \nabla_y=u^{r}\partial_r+u^{z}\partial_z=u\cdot \nabla_x$ and $\Delta_{y}=\Delta_{x}+2r^{-1}\partial_r+\Delta_{S^{3}}$, the vorticity equation (2.6) is then written as
\begin{equation*}
\begin{aligned}
\partial_t \tilde{\Omega}+\tilde{u}\cdot \nabla_y \tilde{\Omega}-\nu\Delta_{y}\tilde{\Omega}&=0\quad \textrm{in}\ \Pi_5\times (0,T),\\
\tilde{\Omega}&=0\quad \textrm{on}\ \partial\Pi_5\times (0,T),
\end{aligned}
\tag{2.8}
\end{equation*}\\
for the five-dimensional cylinder $\Pi_5=\{y\in \mathbb{R}^{5}\ |\ |y_h|<1\}$.
Since $\tilde{\Omega}$ may not be continuous at $t=0$, we approximate initial data by $u_{0,\varepsilon}=e^{-\varepsilon A}u_0$, $\varepsilon>0$, and apply a maximum principle for solutions to $u_{0,\varepsilon}$. Let $-\Delta_{D}$ denote the Laplace operator subject to the Dirichlet boundary condition in the cylinder $\Pi_{5}$. Since $u_0$ is axisymmetric without swirl and $\omega^{\theta}_{0}/r\in L^{\infty}$, by the uniqueness of the heat equation, we see that $\omega^{\theta}_{0,\varepsilon}/r=e^{\varepsilon \Delta_{D}}(\omega^{\theta}_{0}/r)$. Since the heat semigroup is a contraction semigroup on $L^{\infty}$, it follows that
\begin{align*}
\Big\|\frac{\omega^{\theta}_{0,\varepsilon}}{r}\Big\|_{L^{\infty}(\Pi)}\leq
\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{\infty}(\Pi)}.
\end{align*}\\
By Lemma 2.1, there exists a local-in-time axisymmetric solution without swirl $u_{\varepsilon}$ of (1.1) for $u_{0,\varepsilon}$. Since $u_{0,\varepsilon}\in D(A^m_{s})$ for $s\in (3,\infty)$ and all $m\geq 0$, all derivatives of the local-in-time solution are bounded and continuous up to time zero \cite[Remarks 6.5(i)]{A6}. Here, $A_{s}$ denotes the Stokes operator in $L^{s}_{\sigma}$. Hence applying the maximum principle for $(\tilde{u}_{\varepsilon}, \tilde{\Omega}_{\varepsilon})$ yields
\begin{align*}
\Big\|\frac{\omega^{\theta}_{\varepsilon}}{r}\Big\|_{L^{\infty}(\Pi)}\leq
\Big\|\frac{\omega^{\theta}_{0,\varepsilon}}{r}\Big\|_{L^{\infty}(\Pi)}
\leq \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{\infty}(\Pi)}.
\end{align*}\\
Since $u_{0,\varepsilon}$ converges to $u_0$ in $L^{p}$, the local-in-time solution $u_{\varepsilon}$ converges to the mild solution $u\in C([0,T]; L^{p})$ for $u_0$ and the vorticity estimate is inherited to the limit. Thus (2.5) holds for $q=\infty$.
\end{proof}
We further deduce the decay estimate of vorticity from the inequality (2.7). We give a proof for the following Lemma 2.5 in Appendix A.
\begin{lem}
Under the same assumption of Lemma 2.3, the estimate
\begin{align*}
\Big\|\frac{\omega^{\theta}}{r}\Big\|_{L^{r}(\Pi)}\leq \frac{C}{(\nu t)^{\frac{3}{2}(\frac{1}{q}-\frac{1}{r}) }}\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)}\qquad t>0,\ \nu>0, \tag{2.9}
\end{align*}\\
holds for $1\leq q\leq r\leq \infty$ with some constant $C$.
\end{lem}
\subsection{An exponential bound}
We now complete:
\begin{proof}[Proof of Theorem 1.1]
Let $u\in C([0,T]; L^{p})\cap C^{\infty}(\overline{\Pi}\times (0,T])$ be a local-in-time axisymmetric solution in Lemma 2.1. By replacing the initial time by some $t_0\in (0,T]$, we may assume that $\omega^{\theta}_{0}/r\in L^{\infty}$. We apply Lemma 2.3 to estimate the $L^{\infty}$-norm of $\omega^{\theta}/r$. Since $r<1$, we have
\begin{align*}
\|\omega^{\theta}\|_{L^{\infty}(\Pi)}\leq \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{\infty}(\Pi)}\quad t\geq 0.
\end{align*}\\
Since $\mathbb{P}u\cdot \nabla u=\mathbb{P}\omega\times u$ and the Stokes semigroup is a bounded semigroup on $L^{p}$, it follows from (2.1) that
\begin{align*}
||u||_{L^{p}(\Pi)}
\leq C_1||u_0||_{L^{p}(\Pi)}+C_2\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{\infty}(\Pi)}\int_{0}^{t}||u||_{L^{p}(\Pi)}\textrm{d} s\qquad t\geq0.
\end{align*}\\
with some constants $C_1$ and $C_2$, independent of the viscosity $\nu$. Applying Gronwall's inequality yields
\begin{align*}
||u||_{L^{p}(\Pi)}\leq C_1||u_0||_{L^{p}(\Pi)} \exp\Big(C_2\Big\|\frac{\omega^{\theta}_0}{r}\Big\|_{L^{\infty}(\Pi)} t\Big)\quad t\geq 0. \tag{2.10}
\end{align*}\\
Since the $L^{p}$-norm of $u$ is globally bounded, the local-in-time solution is continued for all $t>0$. The proof is complete.
\end{proof}
\begin{rems}
\noindent
(i) ($p=\infty$)
It is unknown whether the assertion of Theorem 1.1 holds for $p=\infty$. For the two-dimensional Cauchy problem, unique existence of global-in-time solutions is known for bounded and non-decaying initial data $u_0\in L^{\infty}_{\sigma}$ \cite{GMS}. Moreover, global-in-time solutions satisfy a single exponential bound of the form
\begin{align*}
||u||_{L^{\infty}(\mathbb{R}^{2})}\leq C_1||u_0||_{L^{\infty}(\mathbb{R}^{2})} \exp\big(C_2||\omega_0||_{L^{\infty}(\mathbb{R}^{2})}t\big)\qquad t\geq 0,
\end{align*}\\
with some constants $C_1$ and $C_2$, independent of viscosity \cite{ST07}. The single exponential bound is further improved to a linear growth estimate as $t\to\infty$ by using viscosity. See \cite{Zelik13}, \cite{GallayLec}. Note that for a two-dimensional layer, unique global-in-time solutions exist for bounded initial data subject to the Neumann boundary condition. Moreover, the $L^{\infty}$-norm of solutions is uniformly bounded for all time \cite{GallaySl}, \cite{GallaySl2}.
\noindent
(ii) ($1\leq q<3/2$)
It is unknown whether unique global-in-time solutions to (1.1) exist for $\omega^{\theta}_{0}/r\in L^{q}$ and $q\in [1,3/2)$. This condition implies that initial velocity belongs to $L^{p}$ for $p\in [3/2,3)$ and $1/p=1/q-1/3$ by the Biot-Savart law. Although Theorem 1.1 may not be available for this case, it is still likely that unique global-in-time solutions to (1.1) exist. In fact, Gallay-{\v{S}}ver\'ak \cite{GallaySverak} constructed unique global-in-time solutions of the Cauchy problem for $\omega^{\theta}_{0}/r\in L^{1}$, based on the a priori estimate (2.5) for $r=q=1$ and the vorticity equation in a half plane. Note that the convergence of the energy dissipation (1.6) does not follow from the vorticity estimate (2.9) if $q\in [1,6/5]$.
\noindent
(iii) (Large time behavior)
Global-in-time solutions in Theorem 1.1 are uniformly bounded for all time, i.e., $u\in BC([0,\infty); L^{p})$. In fact, we are able to assume that $\omega^{\theta}_{0}/r\in L^{p}$ by replacing the initial time since local-in-time solutions $u(\cdot,t)$ belong to $W^{2,p}$ for $p\in [3,\infty)$. As proved later in Lemma 3.5 and Proposition 4.3, since axisymmetric solutions of (1.1) are uniquely determined by the Biot-Savart law and the Poincar\'e inequality holds in the cylinder, we have
\begin{align*}
||u||_{L^{p}(\Pi)}
\leq C||\nabla u||_{L^{p}(\Pi)}
\leq C' ||\nabla \times u||_{L^{p}(\Pi)}
\leq C'\Big\| \frac{\omega^{\theta}}{r}\Big\|_{L^{p}(\Pi)}.
\end{align*}\\
By the vorticity estimate (2.9), the solutions are uniformly bounded in $L^{p}$ and tend to zero in $L^{r}$ for $r \in (p,\infty)$ as $t\to\infty$.
\end{rems}
\section{The Biot-Savart law}
In this section, we give a Biot-Savart law in the infinite cylinder (Lemma 3.5). Since stream functions exist for axisymmetric vector fields without swirl and satisfy the Dirichlet boundary condition, we are able to represent axisymmetric vector fields without swirl via the Laplace operator with the Dirichlet boundary condition, $u=\nabla \times (-\Delta_{D})^{-1}\nabla \times u$. We first prepare $L^{p}$-estimates for the Dirichlet problem of the Poisson equation and apply them to axisymmetric vector fields without swirl.
\subsection{$L^{p}$-estimates for the Poisson equation}
We consider the Poisson equation in the infinite cylinder:
\begin{equation*}
\begin{aligned}
-\Delta \Phi=f\quad \textrm{in}\ \Pi,\qquad \Phi=0\quad \textrm{on}\ \partial\Pi.
\end{aligned}
\tag{3.1}
\end{equation*}
\begin{lem}
\noindent
(i) Let $q\in (1,\infty)$. For $f\in L^{q}$, there exists a unique solution $\Phi\in W^{2,q}$ of (3.1) satisfying
\begin{align*}
||\Phi||_{W^{2,q}}\leq C||f||_{L^{q}} \tag{3.2}
\end{align*}\\
with some constant $C$.
\noindent
(ii) For $q\in (1,3)$ and $p\in (3/2,\infty)$ satisfying $1/p=1/q-1/3$, there exists a constant $C'$ such that
\begin{align*}
||\nabla \Phi||_{L^{p}}\leq C'||f||_{L^{q}}. \tag{3.3}
\end{align*}
\end{lem}
We prove Lemma 3.1 by using the heat semigroup $e^{t\Delta_{D}}$.
\begin{prop}
\noindent
(i) There exists a constant $M$ such that
\begin{align*}
||e^{t\Delta_{D}}f||_{L^{q}}\leq e^{-\mu_q t}||f||_{L^{q}}\quad t>0, \tag{3.4}
\end{align*}\\
for $f\in L^{q}$ with the constant $\mu_q=M/qq'$, where $q'$ is the conjugate exponent to $q\in (1,\infty)$.
\noindent
(ii) The heat kernel $K(x,y,t)$ of $e^{t\Delta_{D}}$ satisfies the Gaussian upper bound,
\begin{align*}
0\leq K(x,y,t)\leq \frac{1}{(4\pi t)^{3/2}} e^{-|x-y|^{2}/4t}\quad x,y\in \Pi,\ t>0. \tag{3.5}
\end{align*}
\end{prop}
\begin{proof}
The pointwise upper bound (3.5) is known for an arbitrary domain. See \cite[Example 2.1.8]{Davies}. We prove the assertion (i). It suffices to show (3.4) for $f\in C^{\infty}_{c}$. Suppose that $f\geq 0$. Then, $u=e^{t\Delta_{D}}f$ is non-negative by a maximum principle. Multiplying the heat equation by $qu^{q-1}$ and integrating by parts, we see that $\varphi=u^{q/2}$ satisfies
\begin{align*}
\frac{d}{dt}\int_{\Pi}|\varphi|^{2}\textrm{d} x+\frac{4}{q'}\int_{\Pi}|\nabla \varphi|^{2}\textrm{d} x=0.
\end{align*}\\
Since the function $\varphi$ vanishes on $\partial\Pi$, we apply the Poincar\'e inequality in the cylinder $||\varphi||_{L^{2}}\leq C||\nabla \varphi||_{L^{2}} $\cite[6.30 THEOREM]{Ad} to estimate
\begin{align*}
\frac{\textrm{d}}{\textrm{d} t} \int_{\Pi}|\varphi|^{2}\textrm{d} x\leq -\frac{4}{C^{2}q'}\int_{\Pi}|\varphi|^{2}\textrm{d} x.
\end{align*}\\
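Integrating this differential inequality gives $\int_{\Pi}|\varphi|^{2}\textrm{d} x\leq e^{-4t/(C^{2}q')}\int_{\Pi}|\varphi(\cdot,0)|^{2}\textrm{d} x$, and since $\int_{\Pi}|\varphi|^{2}\textrm{d} x=||u||_{L^{q}}^{q}$, taking $q$-th roots yields the decay rate $e^{-4t/(C^{2}qq')}$ for $||u||_{L^{q}}$.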
Thus the estimate (3.4) holds with the constant $M=4/C^{2}$. For general $f\in C^{\infty}_{c}$, we approximate the absolute value function as in the proof of Proposition 2.4 and obtain (3.4).
\end{proof}
Proposition 3.2 implies that the operator $-\Delta_{D}$ is invertible on $L^{q}$.
\begin{proof}[Proof of Lemma 3.1]
We prove (i). We set
\begin{align*}
\Phi=\int_{0}^{\infty}e^{t \Delta_{D}}f\textrm{d} t\quad \textrm{for}\ f\in L^{q}.
\end{align*}\\
Since the heat semigroup is an analytic semigroup on $L^{q}$, it follows from (3.4) that
\begin{align*}
||\Phi||_{W^{1,q}}\leq C\Big(1+\frac{1}{\mu_q}\Big)||f||_{L^{q}} \tag{3.6}
\end{align*}\\
with some constant $C$, independent of $q$. Since $-\Delta\Phi=f$, by the elliptic regularity estimate \cite{ADN}, it follows that
\begin{align*}
||\nabla^{2}\Phi||_{L^{q}}\leq C(||f||_{L^{q}}+||\Phi||_{W^{1,q}}). \tag{3.7}
\end{align*}\\
We obtained (3.2). The uniqueness follows from a maximum principle.
We prove (ii). Since $\Phi=(-\Delta_{D})^{-1}f=(-\Delta_{D})^{-1/2}(-\Delta_{D})^{-1/2}f$, we use a fractional power of the operator $-\Delta_{D}$. We take as the domain $D(-\Delta_{D})$ the space of all functions in $W^{2,q}$ vanishing on $\partial\Pi$. By estimates of pure imaginary powers of the operator \cite{Seeley71}, the domain of the fractional power $D((-\Delta_{D})^{1/2})$ is continuously embedded into the Sobolev space $W^{1,q}$. Hence the operator $\partial (-\Delta_{D})^{-1/2}$ acts as a bounded operator on $L^{q}$.
It suffices to show that the fractional power $(-\Delta_{D})^{-1/2}$ acts as a bounded operator from $L^{q}$ to $L^{p}$. We see that
\begin{align*}
((-\Delta_{D})^{-1/2}f)(x)
=\int_{0}^{\infty}t^{-1/2}e^{t\Delta_{D}}f\textrm{d} t
=\int_{\Pi}f(y)\textrm{d} y\int_{0}^{\infty}t^{-1/2}K(x,y,t)\textrm{d} t.
\end{align*}\\
By (3.5), we have
\begin{align*}
\int_{0}^{\infty}t^{-1/2}K(x,y,t)\textrm{d} t\leq \frac{C}{|x-y|^{2}}.
\end{align*}\\
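(This follows from (3.5) by the substitution $s=1/t$: $\int_{0}^{\infty}t^{-2}e^{-|x-y|^{2}/4t}\textrm{d} t=\int_{0}^{\infty}e^{-s|x-y|^{2}/4}\textrm{d} s=4|x-y|^{-2}$, so one may take $C=1/(2\pi^{3/2})$.)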
Since the operator $f\longmapsto |x|^{-2}*f$ acts as a bounded operator from $L^{q}$ to $L^{p}$ for $1/p=1/q-1/3$ by the Hardy-Littlewood-Sobolev inequality \cite[p.354]{Stein93}, so does $(-\Delta_{D})^{-1/2}$. The proof is complete.
\end{proof}
\subsection{Dependence of a constant}
The growth rate of the constant in (3.2) is at most linear as $q\to \infty$.
\begin{lem}
Let $q_0\in (3,\infty)$. There exists a constant $C$ such that
\begin{align*}
||\Phi||_{W^{2,q}(\Pi)}\leq C q||f||_{L^{q}\cap L^{q_0}(\Pi)} \tag{3.8}
\end{align*}\\
holds for solutions of (3.1) for $f\in L^{q}\cap L^{q_0}(\Pi)$ and $q\in [q_{0},\infty)$, where
\begin{align*}
||f||_{L^{q}\cap L^{q_0}}=\max\{||f||_{L^{q}}, ||f||_{L^{q_0}}\}.
\end{align*}
\end{lem}
We prove Lemma 3.3 by a cut-off function argument.
\begin{prop}
Let $G$ be a smoothly bounded domain in $\mathbb{R}^{3}$. Let $q_0\in (3,\infty)$. There exists a constant $C$ such that
\begin{align*}
||\Phi||_{W^{2,q}(G)}\leq C q||f||_{L^{q}(G)} \tag{3.9}
\end{align*}\\
holds for solutions of (3.1) for $f\in L^{q}(G)$ and $q\in [q_0,\infty)$.
\end{prop}
\begin{proof}
The assertion is proved in \cite[Corollary 1]{Yudovich62} for general elliptic operators and $n$-dimensional bounded domains.
\end{proof}
\begin{proof}[Proof of Lemma 3.3]
Let $\{\varphi_{j}\}_{j=-\infty}^{\infty}\subset C^{\infty}_{c}(\mathbb{R})$ be a partition of unity such that $0\leq \varphi_j\leq 1$, $\textrm{spt}\ \varphi_j\subset [j-1,j+1]$ and $\sum_{j=-\infty}^{\infty}\varphi_j(x_3)=1$, $x_3\in \mathbb{R}$. Let $\Phi\in W^{2,q}(\Pi)$ be a solution of (3.1) for $f\in L^{q}\cap L^{q_0}(\Pi)$. We set $\Phi_j=\Phi\varphi_j$ and observe that
\begin{align*}
-\Delta \Phi_j&=f_j\quad \textrm{in}\ G_j,\\
\Phi_j&=0\quad \textrm{on}\ \partial G_j,
\end{align*}\\
for $G_j=D\times (j-1,j+1)$ and $f_j=f\varphi_j-2\nabla \Phi\cdot \nabla \varphi_j-\Phi\Delta \varphi_j$. We take a smooth bounded domain $\tilde{G}_{j}$ such that $G_j\subset \tilde{G}_{j}\subset D\times [j-2,j+2]$ and apply (3.9) to estimate
\begin{align*}
||\Phi_j||_{W^{2,q}(\tilde{G}_j)}\leq Cq ||f_{j}||_{L^{q}(\tilde{G}_{j})}
\end{align*}\\
for $q\in [q_0,\infty)$ with some constant $C$, independent of $j$ and $q$. It follows that
\begin{align*}
||\nabla^{2}\Phi \varphi_{j}||_{L^{q}(\Pi)}\leq Cq(||f||_{L^{q}(G_j)}+||\Phi||_{W^{1,q}(G_j)} ).
\end{align*}\\
By summing over $j$, we obtain
\begin{align*}
||\nabla^{2}\Phi ||_{L^{q}(\Pi)}\leq Cq(||f||_{L^{q}(\Pi)}+||\Phi||_{W^{1,q}(\Pi)} ). \tag{3.10}
\end{align*}\\
We estimate the lower order term of $\Phi$. By Lemma 3.1(i), we have $||\Phi||_{W^{2,q_0}}\leq C||f||_{L^{q_0}}$. In particular, $||\Phi||_{W^{1,\infty}}\leq C||f||_{L^{q_0}}$ by the Sobolev inequality. Applying the H\"older inequality implies that
\begin{align*}
||\Phi||_{W^{1,q}(\Pi)}\leq C||f||_{L^{q_0}(\Pi)} \tag{3.11}
\end{align*}\\
for $q\in [q_0,\infty)$ with some constant $C$, independent of $q$. The estimate (3.8) follows from (3.10) and (3.11).
\end{proof}
\subsection{Stream functions}
We shall give a Biot-Savart law for axisymmetric vector fields without swirl. We see that a smooth axisymmetric solenoidal vector field without swirl $u=u^{r}e_{r}+u^{z}e_{z}$ in $\Pi$ satisfies
\begin{align*}
\partial_z(ru^{z})+\partial_r(ru^{r})&=0\quad (z,r)\in \mathbb{R}\times (0,1),\\
ru^{r}&=0\quad \textrm{on}\ \{r=0,1\}.
\end{align*}\\
Since $(ru^{z},ru^{r})$ is regarded as a solenoidal vector field in the two-dimensional layer $\mathbb{R}\times (0,1)$, there exists a stream function $\psi(r,z)$ such that
\begin{align*}
ru^{z}=\frac{\partial \psi}{\partial r},\quad ru^{r}=-\frac{\partial \psi}{\partial z}.
\end{align*}\\
Since $\psi$ is constant on the boundary, we may assume that $\psi=0$ on $\{r=1\}$. Since $\Phi=(\psi/r)e_{\theta}$ satisfies
\begin{align*}
\textrm{div}\ \Phi=0,\ \nabla \times \Phi=u\quad \textrm{in}\ \Pi,\quad \Phi=0\quad \textrm{on}\ \partial\Pi,
\end{align*}\\
we see that $-\Delta \Phi=\nabla \times u$. Since the Laplace operator $-\Delta_{D}$ is invertible, the stream function is represented by $\Phi=(-\Delta_{D})^{-1}\nabla \times u$.
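To verify the relation $\nabla\times \Phi=u$ claimed above, note that for an axisymmetric field $\Phi=\Phi^{\theta}e_{\theta}$ one has $\nabla\times \Phi=-\partial_z \Phi^{\theta}e_{r}+\frac{1}{r}\partial_r(r\Phi^{\theta})e_{z}$, so that, with $\Phi^{\theta}=\psi/r$,
\begin{align*}
\nabla\times \Phi=-\frac{1}{r}\frac{\partial \psi}{\partial z}e_{r}+\frac{1}{r}\frac{\partial \psi}{\partial r}e_{z}=u^{r}e_{r}+u^{z}e_{z}=u,
\end{align*}\\
while $\textrm{div}\ \Phi=\frac{1}{r}\partial_{\theta}\Phi^{\theta}=0$.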
\begin{lem}
(i) Let $u$ be an axisymmetric vector field without swirl in $L^{p}_{\sigma}$ such that $\nabla \times u\in L^{q}$ for $q\in (1, 3)$ and $1/p=1/q-1/3$. Then,
\begin{align*}
u=\nabla \times (-\Delta_{D})^{-1} (\nabla\times u). \tag{3.12}
\end{align*}\\
\noindent
(ii) The estimates
\begin{align*}
||u||_{L^{p}}+||\nabla u||_{L^{q}}&\leq C_1||\nabla \times u||_{L^{q}}, \tag{3.13}\\
||\nabla u||_{L^{r}}&\leq C_2||\nabla \times u||_{L^{r}},\quad 1<r<\infty, \tag{3.14} \\
||\nabla u||_{L^{r}}&\leq C_3r||\nabla \times u||_{L^{r}\cap L^{r_0}},\quad 3<r_0< r<\infty, \tag{3.15}
\end{align*}\\
hold with some constants $C_1-C_3$. The constant $C_3$ is independent of $r$.
\end{lem}
It suffices to show (3.12). The assertion (ii) follows from Lemmas 3.1 and 3.3.
\begin{prop}
Let $w$ be an axisymmetric vector field without swirl in $L^{p}$ for $p\in (1,\infty)$. Assume that
\begin{align*}
\textrm{div}\ w=0,\ \nabla \times w=0\quad \textrm{in}\ \Pi,\quad w\cdot n=0\quad \textrm{on}\ \partial\Pi.
\end{align*}\\
Then, $w\equiv 0$.
\end{prop}
\begin{proof}
Since $w=w^{r}e_{r}+w^{z}e_{z}$ is a harmonic vector field in $\Pi$ and $\Delta=\partial_r^{2}+r^{-1}\partial_r+r^{-2}\partial_{\theta}^{2}+\partial_{z}^{2}$ in cylindrical coordinates, we see that
\begin{align*}
0=\Delta w&=\Delta (w^{r}e_{r})+\Delta (w^{z}e_{z}) \\
&=\left\{\left(\partial_r^{2}+r^{-1}\partial_r-r^{-2}+\partial_{z}^{2}\right)w^{r}\right\}e_{r}+(\Delta w^{z})e_{z}\\
&=\left\{\left(\Delta-r^{-2}\right)w^{r}\right\}e_{r}+(\Delta w^{z})e_{z}.
\end{align*}\\
Hence, $(\Delta-r^{-2})w^{r}=0$ and $\Delta w^{z}=0$. By $\Delta (w^{r}e_r)=\{(\Delta-r^{-2})w^{r}\}e_r=0$, $w^{r}e_{r}$ is harmonic in $\Pi$. Since $w^{r}$ vanishes on the boundary and the operator $-\Delta_{D}$ is invertible on $L^{p}$, we see that $w^{r}\equiv 0$. Since $w^{r}\equiv 0$, the divergence-free condition $\partial_r w^{r}+w^{r}/r+\partial_z w^{z}=0$ implies that $w^{z}$ is independent of $z$, and together with the decay condition $w^{z}\in L^{p}$ in the infinite cylinder this yields $w^{z}\equiv 0$.
\end{proof}
\begin{proof}[Proof of Lemma 3.5]
We set
\begin{align*}
\tilde{\Phi}=(-\Delta_{D})^{-1}(\nabla \times u),\quad \tilde{u}=\nabla \times \tilde{\Phi}.
\end{align*}\\
Since $u$ is axisymmetric without swirl, $\tilde{\mathbb{P}hi}$ is axisymmetric and $\tilde{\mathbb{P}hi}=\tilde{\mathbb{P}hi}^{\theta} e_{\theta}$. Since $\tilde{\mathbb{P}hi}$ satisfies
\begin{align*}
\textrm{div}\ \tilde{\mathbb{P}hi}=0,\quad -\textrm{div}elta \tilde{\mathbb{P}hi}=\nabla \times u\quad \textrm{in}\ \Pi,\qquad \tilde{\mathbb{P}hi}=0\quad \textrm{on}\ \mathbb{P}artial\Pi,
\end{align*}\\
it follows that
\begin{align*}
\textrm{div}\ \tilde{u}=0,\quad \nabla \times \tilde{u}=\nabla \times u\quad \textrm{in}\ \Pi,\quad \tilde{u}\cdot n=0\quad \textrm{on}\ \mathbb{P}artial\Pi.
\end{align*}\\
Applying Proposition 3.6 for $w=u-\tilde{u}$ implies $u\equiv \tilde{u}$. We proved (3.12).
\end{proof}
\section{Vanishing viscosity limits}
We prove Theorem 1.2 (i) and (ii). When the initial vorticity satisfies $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,3)$, the initial velocity belongs to $L^{p}$ for $p\in [3,\infty)$ by the Biot-Savart law and a global-in-time unique solution $u=u_{\nu}$ of (1.1) exists by Theorem 1.1. We use the vorticity estimate (2.3) and construct global weak solutions of the Euler equations by sending $\nu\to0$. In the subsequent section, we prove H\"older continuity of weak solutions.
\subsection{Convergence to a limit}
We first derive a priori estimates independent of the viscosity $\nu>0$.
\begin{lem}
(i) Let $u_0\in L^{p}_{\sigma}$ be an axisymmetric vector field without swirl such that $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,3)$ and $1/p=1/q-1/3$. Let $u_{\nu}\in C([0,\infty); L^{p})\cap C^{\infty}(\overline{\Pi}\times (0,\infty))$ be a solution of (1.1) for $u_0$ in Theorem 1.1. There exists a constant $C$ such that
\begin{align*}
||u_\nu||_{L^{p}(\Pi)}+||\nabla u_\nu||_{L^{q}(\Pi)}\leq C\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)} \quad t\geq 0,\ \nu>0. \tag{4.1}
\end{align*}\\
Moreover, for each bounded domain $G\subset \Pi$, there exists a constant $C'$ such that
\begin{align*}
||\partial_t u_{\nu}||_{W^{-1,q}(G)}\leq C'\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)}\Big(\nu+\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)}\Big) \quad t\geq 0,\ \nu>0, \tag{4.2}
\end{align*}\\
where $W^{-1,q}$ denotes the dual space of $W^{1,q'}_{0}$ and $q'$ is the conjugate exponent to $q$.
\noindent
(ii) If $\omega^{\theta}_{0}/r\in L^{s}$ for $s\in (3,\infty)$, then
\begin{align*}
||\nabla u_\nu||_{L^{s}(\Pi)}\leq C\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{s}(\Pi)} \quad t\geq 0,\ \nu>0. \tag{4.3}
\end{align*}\\
\noindent
(iii) If $\omega^{\theta}_{0}/r\in L^{\infty}$, then
\begin{align*}
||\nabla\times u_\nu||_{L^{\infty}(\Pi)}\leq \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{\infty}(\Pi)} \quad t\geq 0,\ \nu>0. \tag{4.4}
\end{align*}
\end{lem}
\begin{proof}
Since $\omega^{\theta}_{0}/r\in L^{q}$, applying Lemma 2.3 implies the vorticity estimate
\begin{align*}
\Big\|\frac{\omega^{\theta}_{\nu}}{r}\Big\|_{L^{q}}\leq \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}}.
\end{align*}\\
Since $r<1$, the $L^{q}$-norm of the vorticity $\nabla \times u_{\nu}$ is bounded. By the estimate of the Biot-Savart law (3.13), we obtain (4.1). The estimates (4.3) and (4.4) follow in the same way.
We prove (4.2). We take an arbitrary $\varphi\in C^{\infty}_{c}(G)$ and consider its zero extension to $\Pi\backslash \overline{G}$ (denoted by $\varphi$). We set $f=\mathbb{P}\varphi$ by the Helmholtz projection operator $\mathbb{P}$. By a higher regularity estimate of the Helmholtz projection operator \cite[Theorem 6]{ST98}, we see that $f\in C^{\infty}(\overline{\Pi})$ and
\begin{align*}
|| f||_{W^{1,s}}= || \mathbb{P}\varphi||_{W^{1,s}}\leq C_s|| \varphi||_{W^{1,s}}
\end{align*}\\
with some constant $C_s$ for every $s\in(1,\infty)$. Multiplying (1.1) by $f$ and integrating by parts, we see that
\begin{align*}
\int_{\Pi}\partial_t u_{\nu}\cdot f\textrm{d} x
&=\nu\int_{\Pi} \Delta u_{\nu}\cdot f\textrm{d} x
-\int_{\Pi}(u_{\nu}\cdot \nabla u_{\nu})\cdot f\textrm{d} x\\
&=-\nu \int_{\Pi}\nabla \times u_{\nu}\cdot \nabla \times f\textrm{d} x
+\int_{\Pi}u_{\nu}u_{\nu}: \nabla f\textrm{d} x.
\end{align*}\\
By $\textrm{div}\ u_{\nu}=0$, the left-hand side equals the integral of $\partial_{t} u_{\nu}\cdot \varphi$ over $G$. By applying the estimate of the Helmholtz projection, we obtain
\begin{align*}
\Bigg|\int_{G}\partial_t u_{\nu}\cdot \varphi\textrm{d} x\Bigg|
\leq C\Bigg(\nu ||\nabla u_{\nu}||_{L^{q}(\Pi)}|| \varphi||_{W^{1,q'}(G)}+||u_{\nu}||_{L^{p}(\Pi)}^{2}|| \varphi||_{W^{1,p/(p-2)}(G)} \Bigg).
\end{align*}\\
Since $p/(p-2)\leq q'$, the norms of $\varphi$ are estimated by the $W^{1,q'}$-norm of $\varphi$ in $G$. By (4.1), we obtain (4.2).
\end{proof}
We apply the estimates (4.1) and (4.2) in order to extract a subsequence of $\{u_{\nu}\}$. We recall an abstract compactness theorem in \cite[Chapter III, Theorem 2.1]{Te}.
\begin{prop}
(i) Let $X_0$, $X$ and $X_1$ be Banach spaces such that $X_0\subset X\subset X_1$ with continuous injections, $X_0$ and $X_1$ are reflexive and the injection $X_0\subset X$ is compact. For $T\in (0,\infty)$ and $s\in (1,\infty)$, set the Banach space
\begin{align*}
Y=\{u\in L^{s}(0,T; X_0)\ |\ \partial_t u\in L^{s}(0,T; X_1) \},
\end{align*}\\
equipped with the norm $||u||_{Y}=||u||_{L^{s}(0,T; X_0)}+||\partial_t u||_{L^{s}(0,T; X_1)}$. Then, the injection $Y\subset L^{s}(0,T; X)$ is compact.
\end{prop}
\begin{proof}[Proof of Theorem 1.2 (i)]
For an arbitrary bounded domain $G\subset \Pi$, we set $X_0=W^{1,q}(G)$, $X=L^{q}(G)$ and $X_1=W^{-1,q}(G)$. Since $\{u_\nu\}$ is a bounded sequence in $Y$ by (4.1) and (4.2), we apply Proposition 4.2 to get a subsequence (still denoted by $u_\nu$) that converges to a limit $u$ in $L^{s}(0,T; L^{q}(G))$. By choosing a subsequence, we may assume that $u_\nu$ converges to $u$ in $L^{s}(0,T; L^{q}(G))$ for arbitrary $G\subset \Pi$ and $T>0$ and
\begin{align*}
u_\nu \to u\quad \textrm{a.e. in}\ \Pi\times (0,\infty).
\end{align*}\\
We take an arbitrary $\varphi\in C^{1}_{c}(\overline{\Pi}\times [0,\infty))$ such that $\textrm{div}\ \varphi=0$ in $\Pi$ and $\varphi\cdot n=0$ on $\partial\Pi$. Multiplying (1.1) by $\varphi$ and integrating by parts, we see that
\begin{align*}
\int_{0}^{\infty}\int_{\Pi} (u_{\nu}\cdot \partial_t \varphi -\nu\nabla u_{\nu}\cdot \nabla \varphi +u_{\nu}u_{\nu}:\nabla \varphi)\textrm{d} x\textrm{d} t
=-\int_{\Pi}u_0\cdot \varphi_0\textrm{d} x. \tag{4.5}
\end{align*}\\
Note that the integral of $\partial_n u_{\nu}\cdot \varphi$ on $\partial\Pi$ vanishes since $\partial_r u^{z}_{\nu}=0$ and $\varphi\cdot n=0$ on $\partial\Pi$. The first term converges to the integral of $u\cdot \partial_t \varphi$ and the second term vanishes by (4.1). We take a bounded domain $G\subset \Pi$ and $T>0$ such that $\textrm{spt}\ \varphi\subset \overline{G}\times [0,T]$. Since $u_{\nu}$ converges to $u$ a.e. in $\Pi\times (0,\infty)$, by Egoroff's theorem (e.g., \cite[1.2]{EG}) for an arbitrary $\varepsilon>0$, there exists a measurable set $E\subset G\times(0,T)$ such that $|G\times (0,T)\backslash E|\leq \varepsilon$ and
\begin{align*}
u_{\nu}\to u\quad \textrm{uniformly on}\ E.
\end{align*}\\
Here, $|\cdot |$ denotes the Lebesgue measure. We set $F=G\times (0,T)\backslash E$. It follows from (4.1) that
\begin{align*}
\Bigg|\int\hspace{-5pt}\int_{F} u_{\nu}u_{\nu}: \nabla \varphi\textrm{d} x\textrm{d} t\Bigg|
\leq ||u_\nu||_{L^{p}(F)}^{2}||\nabla\varphi||_{L^{p/(p-2)}(F)}
\leq C\varepsilon^{1-2/p},
\end{align*}\\
with some constant $C$, independent of $\nu$ and $\varepsilon$. Since $\varepsilon>0$ is arbitrary and $u_\nu$ converges to $u$ uniformly on $E$, the third term of (4.5) converges to the integral of $uu:\nabla \varphi$. Thus, letting $\nu\to 0$ in (4.5) yields (1.5). Since the estimate (4.1) is inherited by the limit $u$, we see that $u\in L^{\infty}(0,\infty; L^{p})$ and $\nabla u\in L^{\infty}(0,\infty; L^{q})$.
We show the weak continuity $u\in BC_{w}([0,\infty); L^{p})$. We take an arbitrary $\varphi\in C_{c}^{\infty}(\Pi)$ and $\eta\in C^{1}[0,\infty)$. Multiplying (1.1) by $\varphi\eta$ and integrating by parts as in the proof of Lemma 4.1, we obtain the estimate
\begin{align*}
&\Bigg|\int_{0}^{\infty}\Bigg(\int_{\Pi}u_{\nu}(x,t)\cdot \varphi(x)\textrm{d} x\Bigg)\dot{\eta}(t)\textrm{d} t\Bigg|\\
&\leq C\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)}\Big(\nu || \varphi||_{W^{1,q'}(\Pi)}+ \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)} || \varphi||_{W^{1,p/(p-2)}(\Pi)}\Big)\int_{0}^{\infty}\eta(s)\textrm{d} s
\end{align*}\\
Since the left-hand side converges, the integral of $u\varphi$ in $\Pi$ is weakly differentiable as a function of time. By sending $\nu \to 0$ and the duality, we obtain
\begin{align*}
\Bigg|\frac{\textrm{d} }{\textrm{d} t}\int_{\Pi}u(x,t)\cdot \varphi(x)\textrm{d} x\Bigg|
\leq C\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}(\Pi)}|| \varphi||_{W^{1,p/(p-2)}(\Pi)}\quad \textrm{a.e.}\ t>0.
\end{align*}\\
Hence for $s\in [0,\infty)$, we have
\begin{align*}
\int_{\Pi}u(x,t)\cdot \varphi(x)\textrm{d} x\to \int_{\Pi}u(x,s)\cdot \varphi(x)\textrm{d} x \quad \textrm{as}\ t\to s.
\end{align*}\\
By $u\in L^{\infty}(0,\infty; L^{p})$ and the density, the above convergence holds for all $\varphi\in L^{p'}$. Thus $u\in BC_{w}([0,\infty); L^{p})$. The weak continuity of $\nabla u$ on $L^{q}$ follows from that of $u$ on $L^{p}$. We proved the assertion (i).
\end{proof}
\subsection{Regularity of weak solutions}
We prove Theorem 1.2 (ii). We use the Poincar\'e inequality.
\begin{prop}
Let $u\in C(\overline{\Pi})$ be an axisymmetric vector field without swirl such that $\textrm{div}\ u=0$ in $\Pi$, $u\cdot n=0$ on $\partial\Pi$ and $u(x)\to0$ as $|x|\to\infty$. Assume that $\nabla u\in L^{s}(\Pi)$ for $s\in (1,\infty)$. Then, the estimate
\begin{align*}
||u||_{L^{s}(\Pi)}\leq C||\nabla u||_{L^{s}(\Pi)} \tag{4.6}
\end{align*}\\
holds with some constant $C$.
\end{prop}
\begin{proof}
Since the radial component $u^{r}$ vanishes on $\partial\Pi$ by $u\cdot n=0$, we apply the Poincar\'e inequality \cite{Ad} to estimate
\begin{align*}
||u^{r}||_{L^{s}(\Pi)}\leq C ||\nabla u^{r}||_{L^{s}(\Pi)}.
\end{align*}\\
We estimate $u^{z}$. For arbitrary $z_1, z_2\in \mathbb{R}$, we set $G=D\times (z_1,z_2)$. Since $\textrm{div}\ u=0$, it follows that
\begin{align*}
0=\int_{G}\textrm{div}\ u\textrm{d} x=\int_{D}u^{z}(r,z_2)\textrm{d} {\mathcal{H}}
-\int_{D}u^{z}(r,z_1)\textrm{d} {\mathcal{H}}.
\end{align*}\\
Since $u$ decays as $|z_2|\to\infty$, we see that the flux on $D$ is zero, i.e.,
\begin{align*}
\int_{D}u^{z}(r,z_1)\textrm{d} {\mathcal{H}}=0.
\end{align*}\\
We apply the Poincar\'e inequality \cite{E} to estimate
\begin{align*}
||u^{z}||_{L^{s}(D)}(z_1)\leq C||\nabla_{h} u^z||_{L^{s}(D)}(z_1),
\end{align*}\\
where $\nabla_h$ denotes the gradient in the horizontal variables $x_h=(x_1,x_2)$. Raising this estimate to the $s$-th power and integrating over $z_1\in \mathbb{R}$, we obtain (4.6).
\end{proof}
\begin{proof}[Proof of Theorem 1.2 (ii)]
If $\omega^{\theta}_{0}/r\in L^{s}$ for $s\in (3,\infty)$, the limit $u\in BC_{w}([0,\infty); L^{p})$ satisfies $\nabla u\in L^{\infty}(0,\infty; L^{s})$ by (4.3). Thus $u(\cdot ,t)$ is H\"older continuous in $\overline{\Pi}$ and decaying as $|x|\to\infty$. We apply the Poincar\'e inequality (4.6) to see that $u\in L^{\infty}(0,\infty; W^{1,s})$ and $u\cdot \nabla u\in L^{\infty}(0,\infty; L^{s})$ by the Sobolev inequality. By integration by parts, it follows from (1.3) that
\begin{align*}
\int_{0}^{\infty}\int_{\Pi}u\cdot \varphi\dot{\eta}\textrm{d} x\textrm{d} t
=\int_{0}^{\infty}\int_{\Pi} (u\cdot \nabla u)\cdot \varphi\eta\textrm{d} x\textrm{d} t \tag{4.7}
\end{align*}\\
for all $\varphi\in L^{s'}_{\sigma}$ and $\eta\in C^{\infty}_{c}(0,\infty)$, where $s'$ is the conjugate exponent to $s$. By the boundedness of the Helmholtz projection on $L^{s'}$ and duality, we see that $\partial_t u\in L^{\infty}(0,\infty; L^{s})$ and $u\in BC([0,\infty); L^{s})$. The equation (1.4) follows from (4.7) by integration by parts. Since $\nabla u$ is bounded and $u$ is continuous on $L^{s}$, $\nabla u$ is weakly continuous on $L^{s}$. We proved the assertion (ii).
If in addition $\omega^{\theta}_{0}/r\in L^{\infty}$, the limit satisfies $\nabla \times u\in BC_{w}([0,\infty); L^{\infty})$ by (4.4).
\end{proof}
\begin{rems}
(i) The equation (1.4) is written as
\begin{align*}
\partial_t u+u\cdot \nabla u+\nabla p=0\qquad \textrm{on}\ L^{s}\quad \textrm{for a.e.}\ t>0,
\end{align*}\\
with the associated pressure $\nabla p=-(I-\mathbb{P})(u\cdot \nabla u)\in L^{\infty}(0,\infty; L^{s})$.
\noindent
(ii) The weak solutions in Theorem 1.2 have finite energy for $q\in [3/2,2]$. In fact, by the global estimate (4.1) and the Poincar\'e inequality (4.6) applied to the global-in-time solutions $u=u_{\nu}$ of (1.1) with $\omega^{\theta}_{0}/r\in L^{q}$ in Theorem 1.1, we see that
\begin{align*}
||u_{\nu}||_{L^{p}}+||u_{\nu}||_{L^{q}}\leq C||\nabla u_{\nu}||_{L^{q}}\leq C'\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}}\qquad t\geq 0,\ \nu>0,
\end{align*}\\
for $p\in [3,6]$ satisfying $1/p=1/q-1/3$. By the H\"older inequality, the solutions are uniformly bounded in $L^{r}$ for all $r\in [2,3]$ and the limit belongs to the same space.
\end{rems}
\section{Uniqueness}
We prove Theorem 1.2 (iii). It remains to show the uniqueness. Since the weak solutions may have infinite energy for $q\in (2,3)$, we estimate a local energy of two weak solutions in the cylinder by using a cut-off function $\theta_{R}$. We then send $R\to\infty$ and prove the uniqueness by using the growth bound of the $L^{r}$-norm (3.15). To this end, we show decay properties of weak solutions.
\subsection{Decay properties of weak solutions}
We use the Poincar\'e inequality (4.6) and deduce decay properties of velocity as $|x_3|\to\infty$.
\begin{prop}
The weak solutions $(u,p)$ in Theorem 1.2 (iii) satisfy
\begin{align*}
u,\nabla u,\partial_t u,\nabla p\in L^{\infty}(0,\infty; L^{q}). \tag{5.1}
\end{align*}
\end{prop}
\begin{proof}
Since $u(\cdot,t)$ is bounded and H\"older continuous in $\overline{\Pi}$ and $\nabla u\in BC_{w}([0,\infty); L^{q})$, we see that $u\cdot \nabla u\in L^{\infty}(0,\infty; L^{q})$. Thus, $\partial_t u$ and $\nabla p=-(I-\mathbb{P})(u\cdot \nabla u)$ belong to $L^{\infty}(0,\infty; L^{q})$ by (1.4). By the Poincar\'e inequality (4.6), $u\in L^{\infty}(0,\infty; L^{q})$ follows.
\end{proof}
We estimate the pressure $p$ as $|x_3|\to\infty$. We set
\begin{equation*}
\begin{aligned}
&\tilde{p}(x_h,x_3,t)=p(x_h,x_3, t)-\hat{p}(x_3, t),\\
&\hat{p}(x_3,t)=\frac{1}{|D|}\int_{D}p(x_h,x_3,t)\textrm{d} x_h.
\end{aligned}
\tag{5.2}
\end{equation*}
\begin{prop}
\begin{align*}
&\tilde{p}\in L^{\infty}(0,\infty; L^{q}), \tag{5.3} \\
&|\hat{p}(x_3,t)|\leq C(1+|x_3|)^{1/3}\quad x_3\in \mathbb{R},\ t>0, \tag{5.4}
\end{align*}\\
with some constant $C$.
\end{prop}
\begin{proof}
The property (5.3) follows from (5.1) by applying the Poincar\'e inequality on $D$. We show (5.4). We integrate the vertical component of (1.2) on $D$ to see that
\begin{align*}
\frac{\partial}{\partial t} \int_{D}u^{z}\textrm{d} x_h
+\int_{D}u\cdot \nabla u^{z}\textrm{d} x_h+\frac{\partial}{\partial z}\int_{D}p\textrm{d} x_h=0.
\end{align*}\\
Since the flux of $u$ on $D$ is zero, as we have seen in the proof of Proposition 4.3, the first term vanishes. We integrate the equation in the vertical variable over $(0,z)$ to get
\begin{align*}
\int_{D}p(r,z,t)\textrm{d} x_h
=\int_{D}p(r,0,t)\textrm{d} x_h
-\int_{0}^{z}\int_{D}u\cdot \nabla u^{z}\textrm{d} x.
\end{align*}\\
We observe that $u\in L^{\infty}(0,\infty; W^{1,r})$ for $r\in [q,3)$ by (5.1) and Theorem 1.2 (ii). Since $u\cdot \nabla u\in L^{\infty}(0,\infty; L^{r/2})$ for $r\geq 2$, we apply the H\"older inequality to estimate
\begin{align*}
\Bigg|\int_{0}^{z}\int_{D}u\cdot \nabla u^{z}\textrm{d} x\Bigg|
\leq |D\times (0,z)|^{1-2/r}||u\cdot \nabla u||_{L^{r/2}}
\leq C|z|^{1-2/r}\qquad t>0.
\end{align*}\\
Since $r\in [2,3)$ and $1-2/r<1/3$, we obtain (5.4).
\end{proof}
We use the growth bound for the $L^{r}$-norm of $\nabla u$ as $r\to\infty$.
\begin{prop}
\begin{align*}
&\nabla u\in BC_{w}([0,\infty); L^{r})\quad r\in (3,\infty), \\
&\nabla\times u\in BC_{w}([0,\infty); L^{\infty}), \\
&||\nabla u||_{L^{r}}\leq Cr\quad r>3,\ t\geq 0,
\end{align*}\\
with some constant $C$, independent of $r$.
\end{prop}
\begin{proof}
By construction, the weak solution $u$ satisfies
\begin{align*}
||\nabla\times u||_{L^{r}}\leq \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{r}}\qquad t\geq 0.
\end{align*}\\
for all $r\in (3,\infty)$. We fix $r\in (3,\infty)$ and take $r_0\in (3,r)$. It follows from (3.15) that
\begin{align*}
||\nabla u||_{L^{r}}\leq C r ||\nabla \times u||_{L^{r}\cap L^{r_0}}
\leq C r \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{r}\cap L^{r_0}}.
\end{align*}\\
Since the $L^{r}$-norm of $\omega^{\theta}_{0}/r$ is uniformly bounded for all $r>3$ by $\omega^{\theta}_{0}/r\in L^{\infty}$, we obtain the desired estimate for $\nabla u$.
\end{proof}
\subsection{Local energy estimates}
We now prove the uniqueness. Let $(u_1,p_1)$ and $(u_2,p_2)$ be two weak solutions to (1.2) in Theorem 1.2 (iii) for the same initial data. Then, $w=u_1-u_2$ and $\pi=p_1-p_2$ satisfy
\begin{equation*}
\begin{aligned}
\partial_t w+u_1\cdot \nabla w+w\cdot \nabla u_2+\nabla \pi=0,\quad \textrm{div}\ w&=0\qquad \textrm{in}\ \Pi\times (0,\infty),\\
w\cdot n&=0\qquad \textrm{on}\ \partial\Pi\times (0,\infty), \\
w&=0\qquad \textrm{on}\ \Pi\times\{t=0\}.
\end{aligned}
\tag{5.5}
\end{equation*}\\
Let $\theta\in C^{\infty}_{c}[0,\infty)$ be a smooth monotone non-increasing function such that $\theta\equiv 1$ in $[0,1]$ and $\theta\equiv 0$ in $[2,\infty)$. We set $\theta_R(x_3)=\theta(|x_3|/R)$ for $R\geq 1$ so that $\theta_{R}\equiv 1$ in $[0,R]$, $\theta_{R}\equiv 0$ in $[2R,\infty)$, $||\partial_{x_3}\theta_{R}||_{\infty}\leq C/R$ and $\textrm{spt}\ \partial_{x_3}\theta_{R}\subset I_R$ for $I_{R}=[R,2R]$. Multiplying (5.5) by $2w\theta_{R}$ and integrating by parts, we see that
\begin{align*}
\frac{\textrm{d}}{\textrm{d} t}\int_{\Pi}|w|^{2}\theta_{R}\textrm{d} x
+2\int_{\Pi}(w\cdot \nabla u_2)\cdot w\theta_{R}\textrm{d} x
-\int_{\Pi}u_1|w|^{2}\cdot \nabla \theta_{R}\textrm{d} x
-2\int_{\Pi}\pi w\cdot \nabla \theta_{R}\textrm{d} x=0. \tag{5.6}
\end{align*}\\
We set
\begin{align*}
\Phi_{R}(t)=\int_{\Pi}|w|^{2}(x,t)\theta_{R}(x_3)\textrm{d} x.
\end{align*}\\
By Theorem 1.2 (ii), the function $\Phi_R\in C[0,\infty)$ is differentiable for a.e. $t>0$ and satisfies $\Phi_R(0)=0$. We estimate the errors in the cut-off procedure.
\begin{prop}
There exists a constant $C=C(R)$ such that
\begin{align*}
\Bigg|\int_{\Pi}u_1|w|^{2}\cdot \nabla \theta_{R}\textrm{d} x\Bigg|+
\Bigg|2\int_{\Pi}\pi w\cdot \nabla \theta_{R}\textrm{d} x\Bigg|\leq C\quad t>0. \tag{5.7}
\end{align*}\\
The constant $C(R)$ converges to zero as $R\to\infty$ for each $t>0$.
\end{prop}
\begin{proof}
Since $u_1\in L^{\infty}(0,\infty; L^{q})$ and $w\in L^{\infty}(\Pi\times (0,\infty))$ by Proposition 5.1 and Theorem 1.2 (ii), applying the H\"older inequality yields
\begin{align*}
\Bigg|\int_{\Pi}u_1|w|^{2}\cdot \nabla \theta_{R}\textrm{d} x\Bigg|
\leq \frac{C}{R}\int_{D\times I_R}|u_1|\textrm{d} x
\leq \frac{C'}{R}|D\times I_R|^{1/q'}||u_1||_{L^{q}}
\leq \frac{C''}{R^{1/q}}.
\end{align*}\\
We next estimate the second term of (5.7). We set $\pi=\tilde{\pi}+\hat{\pi}$ by (5.2). Since $\tilde{\pi}\in L^{\infty}(0,\infty; L^{q})$ by (5.3), it follows that
\begin{align*}
\Bigg|\int_{\Pi}\tilde{\pi}w\cdot \nabla \theta_R\textrm{d} x\Bigg|
\leq \frac{C}{R}\int_{D\times I_R}|\tilde{\pi}|\textrm{d} x\leq \frac{C'}{R^{1/q}}.
\end{align*}\\
It follows from (5.4) that
\begin{align*}
\Bigg|\int_{\Pi}\hat{\pi}w\cdot \nabla \theta_R\textrm{d} x\Bigg|
\leq \frac{C}{R^{2/3}}\int_{D\times I_R}|w|\textrm{d} x
\leq \frac{C}{R^{2/3}}|D\times I_R|^{1/q'}||w||_{L^{q}}
\leq \frac{C'}{R^{2/3-1/q'}}.
\end{align*}\\
Since $2/3-1/q'>0$ for $q\in [3/2,3)$, the right-hand side converges to zero as $R\to\infty$.
\end{proof}
\begin{proof}[Proof of Theorem 1.2 (iii)]
By Proposition 5.3, there exist constants $M_1$ and $M_2$ such that
\begin{align*}
||w||_{L^{\infty}}&\leq M_1,\\
||\nabla u_2||_{L^{r}}&\leq M_2 r\qquad r>3,\ t\geq 0.
\end{align*}\\
For an arbitrary $\delta\in (0,2/3)$, we set $r=2/\delta$. We apply the H\"older inequality with the conjugate exponent $r'=2/(2-\delta)$ to see that
\begin{align*}
\Bigg|2\int_{\Pi}(w\cdot \nabla u_2)\cdot w\theta_{R}\textrm{d} x\Bigg|
&\leq 2\int_{\Pi}|\nabla u_2|(|w|\theta_{R}^{1/2})^{2}\textrm{d} x\\
&\leq 2M_1^{\delta}\int_{\Pi}|\nabla u_2|(|w|\theta_{R}^{1/2})^{2-\delta}\textrm{d} x\\
&\leq 2M_1^{\delta}||\nabla u_2||_{L^{r}}\Bigg(\int_{\Pi}|w|^{2}\theta_{R}\textrm{d} x \Bigg)^{1/r'}\\
&\leq 2M_1^{\delta}M_2 r \Phi_{R}^{1/r'}.
\end{align*}\\
Thus, $\Phi_R$ satisfies the differential inequality
\begin{align*}
\dot{\Phi}_R(t)&\leq a \Phi_R(t)^{1/r'}+b,\quad t>0,\\
\Phi_R(0)&=0,
\end{align*}\\
with the constants $a=2M_1^{\delta}M_2r$ and $b=C(R)$ by (5.6) and (5.7). Hence we have
\begin{align*}
\int_{0}^{\Phi_R(t)}\frac{\textrm{d} s}{as^{1/r'}+b}\leq t. \tag{5.8}
\end{align*}\\
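For later use we record the elementary computation behind the bounds below: since $1-1/r'=1/r$,
\begin{align*}
\int_{0}^{K}\frac{\textrm{d} s}{as^{1/r'}}=\frac{K^{1-1/r'}}{a(1-1/r')}=\frac{r}{a}K^{1/r}\qquad K>0,
\end{align*}\\
and this is the monotone limit, as $b\to0$, of the left-hand side of (5.8) with $\Phi_R(t)$ replaced by $K$.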
We prove that
\begin{align*}
\overline{\lim}_{R\to\infty}\Phi_R(t)<\infty\qquad \textrm{for each}\ t>0. \tag{5.9}
\end{align*}\\
Suppose to the contrary that (5.9) fails for some $t_0>0$. Then, there exists a sequence $\{R_j\}$ such that $\lim_{j\to\infty}\Phi_{R_j}(t_0)=\infty$. For an arbitrary $K>0$, we take a constant $N\geq 1$ such that $\Phi_{R_j}(t_0)\geq K$ for $j\geq N$. It follows from (5.8) that
\begin{align*}
\int_{0}^{K}\frac{\textrm{d} s}{as^{1/r'}+b}\leq t_0.
\end{align*}\\
Since the constant $b=C(R_{j})$ converges to zero as $R_{j}\to\infty$, sending $j\to\infty$ yields $(r/a)K^{1/r}\leq t_0$. Since $K>0$ is arbitrary, this yields a contradiction. Thus (5.9) holds.
Since $|w|^{2}\theta_R$ monotonically converges to $|w|^{2}$ in $\Pi$, it follows from (5.9) that
\begin{align*}
\Phi(t):=\int_{\Pi}|w|^{2}(x,t)\textrm{d} x=\lim_{R\to \infty}\int_{\Pi}|w|^{2}(x,t)\theta_{R}(x_3)\textrm{d} x<\infty.
\end{align*}\\
Letting $R\to\infty$ in (5.8) implies $(r/a)\Phi^{1/r}(t)\leq t$. We thus obtain
\begin{align*}
\int_{\Pi}|w(x,t)|^{2}\textrm{d} x\leq M_1^{2}(2M_2 t)^{2/\delta}.
\end{align*}\\
Since the right-hand side converges to zero as $\delta\to0$ for $t\in [0,T]$ and $T=(4M_2)^{-1}$, we see that $w\equiv 0$ in $[0,T]$. Applying the same argument for $t\geq T$ implies $u_1\equiv u_2$ for all $t\geq 0$. The proof is now complete.
\end{proof}
\begin{rem}
By a similar cut-off function argument, uniqueness of weak solutions of the Euler equations with infinite energy is proved in \cite[Theorem 5.1.1]{Chemin} for the whole space under assumptions different from those of Theorem 1.2 (iii). See also \cite[Theorem 2]{Danchin}. We proved uniqueness of weak solutions in the infinite cylinder based on the Yudovich estimate (Lemma 3.3).
\end{rem}
\section{Solutions with finite energy}
It remains to prove Theorem 1.3. The proof of the $L^{2}$-convergence (1.7) is simpler than that of the uniqueness of weak solutions since the solutions have finite energy.
\subsection{Energy dissipation}
\begin{prop}
The assertion of Theorem 1.3 (i) holds.
\end{prop}
\begin{proof}
Let $u_0\in L^{p}_{\sigma}\cap L^{2}$ be an axisymmetric vector field without swirl such that $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,2]$ and $1/p=1/q-1/3$. By Theorem 1.1 and Proposition 2.2, there exists a unique global-in-time solution $u_{\nu}\in BC([0,\infty); L^{p}\cap L^{2})$ of (1.1) satisfying the energy equality (1.5). Since $\omega^{\theta}_{0}/r\in L^{q}$, applying Lemma 2.5 yields
\begin{align*}
\Big\|\frac{\omega^{\theta}}{r}\Big\|_{L^{2}}\leq \frac{C}{(\nu t)^{\frac{3}{2}(\frac{1}{q}-\frac{1}{2}) }}\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}}\qquad t\geq 0,\ \nu>0.
\end{align*}\\
It follows that
\begin{align*}
\nu\int_{0}^{T}||\nabla u_{\nu}||_{L^{2}}^{2}\textrm{d} t
=\nu\int_{0}^{T}||\omega^{\theta}||_{L^{2}}^{2}\textrm{d} t
\leq \nu\int_{0}^{T}\Big\|\frac{\omega^{\theta}}{r}\Big\|_{L^{2}}^{2}\textrm{d} t
\leq C\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{q}}^{2}(\nu T)^{\frac{5}{2}-\frac{3}{q}}.
\end{align*}\\
Thus, (1.6) holds.
\end{proof}
\subsection{$L^{2}$-convergence}
We prove Theorem 1.3 (ii). We use bounds uniform in the viscosity $\nu>0$.
\begin{prop}
Let $u_0\in L^{p}_{\sigma}\cap L^{2}$ be an axisymmetric vector field without swirl such that $\omega^{\theta}_{0}/r\in L^{q}\cap L^{\infty}$ for $q\in [3/2,2]$ and $1/p=1/q-1/3$. Let $u_{\nu}\in BC([0,\infty);L^{p}\cap L^{2} )\cap C^{\infty}(\overline{\Pi}\times (0,\infty))$ be a solution of (1.1). Then, the estimates
\begin{align*}
||u_{\nu}||_{L^{\infty}}&\leq C\Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{r_0} }, \tag{6.1}\\
||\nabla u_{\nu}||_{L^{r}}&\leq C'r \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{r}\cap L^{r_0} }, \tag{6.2}\\
||\nabla u_{\nu}||_{L^{2}}&\leq \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{2} }, \tag{6.3}\\
\end{align*}
hold for $t>0$ and $3<r_0<r<\infty$ with some constants $C$ and $C'$, independent of $r$ and $\nu$.
\end{prop}
\begin{proof}
We take $r_0\in (3,\infty)$. It follows from (4.3) and (4.6) that
\begin{align*}
||u_{\nu}||_{W^{1,r_0}}\leq C \Big\|\frac{\omega^{\theta}_{0}}{r}\Big\|_{L^{r_0}}\quad t\geq 0, \nu>0.
\end{align*}\\
By the Sobolev inequality, the estimate (6.1) follows. The estimates (6.2) and (6.3) follow from (2.5) and (3.15).
\end{proof}
Let $(u_{\nu}, p_{\nu})$ and $(u_{\mu}, p_{\mu})$ be two solutions of (1.1) for the same initial data $u_0$. We may assume that $\nu\geq \mu$. Then, $w=u_{\nu}-u_{\mu}$ and $\pi=p_{\nu}-p_{\mu}$ satisfy
\begin{align*}
\partial_t w-\nu\Delta w-(\nu-\mu)\Delta u_{\mu}+u_{\nu}\cdot \nabla w+w\cdot \nabla u_{\mu}+\nabla \pi=0,\quad \textrm{div}\ w&=0\quad \textrm{in}\ \Pi\times (0,\infty),\\
\nabla \times w\times n=0,\ w\cdot n&=0\quad \textrm{on}\ \partial\Pi\times (0,\infty),\\
w&=0\quad \textrm{on}\ \Pi\times \{t=0\}.
\end{align*}\\
Multiplying the equation by $2w$ and integrating by parts, we see that
\begin{align*}
\frac{\textrm{d}}{\textrm{d} t}\int_{\Pi}|w|^{2}\textrm{d} x
+2\nu \int_{\Pi}|\nabla w|^{2}\textrm{d} x
+2(\nu-\mu)\int_{\Pi}\nabla u_{\mu}\cdot \nabla w\textrm{d} x
+2\int_{\Pi}(w\cdot \nabla u_{\mu})\cdot w\textrm{d} x=0.
\end{align*}\\
We set
\begin{align*}
\Phi_{\nu}(t)=\int_{\Pi}|w(x,t)|^{2}\textrm{d} x.
\end{align*}\\
We show that $K_{\nu}(T)=\sup_{0\leq t\leq T}\Phi_{\nu}(t)$ converges to zero as $\nu \to 0$ for each $T>0$.
\begin{prop}
There exist constants $M_1$, $M_2$ and $M_3$, independent of $\nu, \mu>0$, such that
\begin{align*}
||w||_{L^{\infty}}&\leq M_1,\\
||\nabla u_{\mu}||_{L^{r}}&\leq M_2 r,\\
||\nabla u_{\mu}||_{L^{2}}+||\nabla w||_{L^{2}}&\leq M_3,\\
\end{align*}\\
hold for $t>0$ and $r>3$.
\end{prop}
\begin{proof}
The assertion follows from Proposition 6.2.
\end{proof}
\begin{proof}[Proof of Theorem 1.3 (ii)]
By Proposition 6.3, we estimate
\begin{align*}
\Big|2(\nu-\mu)\int_{\Pi}\nabla u_{\mu}\cdot \nabla w\textrm{d} x\Big|
\leq 2\nu ||\nabla u_{\mu}||_{L^{2}}||\nabla w||_{L^{2}}
\leq 2\nu M_{3}^{2}.
\end{align*}\\
For an arbitrary $\delta \in (0, 2/3)$, we set $r=2/\delta$. Since $r'=2/(2-\delta)$, in a similar way to the proof of Theorem 1.2 (iii) we estimate
\begin{align*}
\Big|2\int_{\Pi}(w\cdot \nabla u_{\mu})\cdot w\textrm{d} x\Big|
\leq 2M_1^{\delta}M_2 r\Phi_{\nu}^{1/r'}.
\end{align*}\\
Thus, $\Phi_{\nu}$ satisfies
\begin{align*}
&\dot{\Phi}_{\nu}(t)\leq a\Phi_{\nu}^{1/r'}(t)+b,\\
&\Phi_{\nu}(0)=0,
\end{align*}\\
for $a=2M_1^{\delta}M_2r$ and $b=2\nu M_3^{2}$. We take an arbitrary $T>0$. We integrate the differential inequality between $(0,t)$ and take a supremum for $t\in [0,T]$ to estimate
\begin{align*}
\int_{0}^{K_{\nu}(T)}\frac{\textrm{d} s}{as^{1/r'}+b}\leq T.
\end{align*}\\
Since $b=b_{\nu}$ converges to zero as $\nu\to0$, in the same way as in the proof of Theorem 1.2 (iii) we see that the limit superior of $K_{\nu}(T)$ is finite for each $T>0$. We set
\begin{align*}
K(T):=\overline{\lim}_{\nu\to0}K_{\nu}(T)<\infty.
\end{align*}\\
Sending $\nu\to0$ in the above inequality implies $(r/a)K^{1/r}(T)\leq T$. Since $a=2M_1^{\delta}M_2r$ and $r=2/\delta$, it follows that
\begin{align*}
K(T)\leq M_1^{2}(2M_2T)^{2/\delta}.
\end{align*}\\
Since the right-hand side converges to zero as $\delta\to0$ for $T\leq T_0$ and $T_0=(4M_2)^{-1}$, we see that $K(T)=0$ for $T\leq T_0$. Thus the convergence (1.7) holds. By replacing the initial time and applying the same argument for $T\geq T_0$, we obtain the convergence (1.7) for an arbitrary $T>0$. The proof is now complete.
\end{proof}
\begin{rems}
\noindent
(i) (Convergence in Sobolev space)
We constructed global weak solutions of the Euler equations for axisymmetric data without swirl by a vanishing viscosity method. As explained in the introduction, the condition $\omega^{\theta}_{0}/r\in L^{q}$ for $q\in [3/2,3)$ in Theorem 1.2 (i) is satisfied if $u_0\in W^{2,q}$. This condition is weaker than $u_0\in W^{2,q}$ for $q\in (3,\infty)$, required for the local well-posedness of the Euler equations.
Our approach is based on the a priori estimate (1.10), which is a special property of axisymmetric solutions and is not available in general. On the other hand, there is another approach to study vanishing viscosity limits when the Euler equations are locally well-posed. When $\Pi=\mathbb{R}^{3}$, unique local-in-time solutions of the Euler equations are constructed in \cite{Swann}, \cite{Kato72}, \cite{Kato75} by sending $\nu\to0$ in local-in-time solutions of the Navier-Stokes equations. See also \cite{Constantin86}. In particular, for a local-in-time solution $u\in C([0,T]; H^{s})$ of the Euler equations and $u_0\in H^{s}$, $s>5/2$, the convergence
\begin{align*}
u_{\nu} \to u\quad \textrm{in}\ L^{\infty}(0,T; H^{s}),
\end{align*}\\
is known to hold \cite{Masmoudi07}. The case with boundary is a difficult question related to analysis of boundary layer. See \cite{Constantin07} for a survey. However, convergence results are known subject to the Neumann boundary condition (1.1). See \cite{XiaoXin}, \cite{Veiga10}, \cite{Veiga11} for the case with flat boundaries and \cite{BS12}, \cite{BS14} for curved boundaries.
\noindent
(ii) (Navier boundary condition) The Neumann boundary condition in (1.1) may be viewed as a special case of the Navier boundary condition,
\begin{align*}
(D(u)n+\alpha u)_{\textrm{tan}}=0,\quad u\cdot n=0\quad \textrm{on}\ \partial\Pi, \tag{6.4}
\end{align*}\\
where $D(u)=(\nabla u+\nabla^{T}u)/2$ is the deformation tensor and $f_{\textrm{tan}}=f-n(f\cdot n)$ for a vector field $f$. Indeed, for the two-dimensional case, the Neumann boundary condition is reduced to the free condition $\omega=0$ and $u\cdot n=0$ on $\partial\Pi$. The free condition is a special case of (6.4), which is written as $\omega+2(\alpha-\kappa)u\cdot n^{\perp}=0$ and $u\cdot n=0$ on $\partial\Pi$, with the curvature $\kappa(x)$ and $n^{\perp}=(-n^{2},n^{1})$. For a two-dimensional bounded domain, vanishing viscosity limits subject to (6.4) are studied in \cite{Robert98}, \cite{Planas05}, \cite{Ke06}. For the three-dimensional case, it is shown in \cite{IP06} that a Leray-Hopf weak solution $u_{\nu}$ subject to (6.4) converges to the local-in-time solution $u\in C([0,T]; H^{s})$ of the Euler equations for $u_0\in H^{3}$ in the sense that
\begin{align*}
u_{\nu}\to u\quad \textrm{in}\ L^{\infty}(0,T; L^{2}).
\end{align*}\\
For the Dirichlet boundary condition, the same convergence seems unknown. See \cite{IS11} for a boundary layer expansion subject to (6.4) and \cite{MR12} for a stronger convergence result.
\end{rems}
\appendix
\section{Decay estimates of vorticity}
We prove the decay estimate (2.9) (Lemma 2.5). It suffices to show:
\begin{lem}
There exists a constant $C$ such that the estimate (2.9) holds for $r=2^{m}q$, $q\in [1,\infty)$ and non-negative integers $m\geq 0$.
\end{lem}
\begin{proof}[Proof of Lemma 2.5]
We apply Lemma A.1. Since
\begin{align*}
\lim_{r\to\infty}\Big\|\frac{\omega^{\theta}}{r}\Big\|_{L^{r}}(t)=\Big\|\frac{\omega^{\theta}}{r}\Big\|_{L^{\infty}}(t),
\end{align*}\\
sending $m\to\infty$ implies (2.9) for $r=\infty$ and $q\in [1,\infty)$. Since (2.9) holds for $r=q\in [1,\infty]$, we obtain (2.9) for all $1\leq q\leq r\leq \infty$ by the H\"older inequality.
\end{proof}
Let $\psi_{\varepsilon}(s)$ be the non-negative convex function from the proof of Proposition 2.4. We prove the estimate (2.9) for $\psi_{\varepsilon}(\Omega)$ with $\Omega=\omega^{\theta}/r$. The assertion of Lemma A.1 follows by sending $\varepsilon\to0$.
\begin{prop}
There exists a constant $C$ such that the estimate
\begin{align*}
\big\|\psi_{\varepsilon}(\Omega)\big\|_{L^{r}(\Pi)}\leq \frac{C}{(\nu t)^{\frac{3}{2}(\frac{1}{q}-\frac{1}{r}) }}\big\|\psi_{\varepsilon}(\Omega_0)\big\|_{L^{q}(\Pi)}\qquad t>0,\ \nu>0, \tag{A.1}
\end{align*}\\
holds for all $\varepsilon>0$, $r=2^{m}q$ and $m\geq 0$.
\end{prop}
We consider differential inequalities for $L^{r}$-norms of $\psi_{\varepsilon}(\Omega)$.
\begin{prop}
The function
\begin{align*}
\Phi_{r}(t)=\int_{\Pi}\psi_{\varepsilon}^{r}(\Omega)\textrm{d} x
\end{align*}\\
satisfies
\begin{align*}
\dot{\Phi}_{r}(t)\leq -\kappa \nu \Big(1-\frac{1}{r}\Big)\frac{\Phi_{r}^{5/3}}{\Phi_{r/2}^{4/3}}\qquad t>0 \tag{A.2}
\end{align*}\\
with an absolute constant $\kappa$, independent of $r$ and $\nu$.
\end{prop}
\begin{proof}
We apply the interpolation inequality
\begin{align*}
||\varphi||_{L^{2}}\leq C_0||\varphi||_{L^{1}}^{2/5}||\nabla \varphi||_{L^{2}}^{3/5}
\end{align*}\\
for $\varphi\in H^{1}_{0}$ with some absolute constant $C_{0}$. Since $\psi_{\varepsilon}(\Omega)$ satisfies
\begin{align*}
\frac{\textrm{d}}{\textrm{d} t}\int_{\Pi}\psi^{r}_{\varepsilon}(\Omega)\textrm{d} x
+4\nu \Big(1-\frac{1}{r}\Big) \int_{\Pi}\Big|\nabla \psi_{\varepsilon}(\Omega)^{\frac{r}{2}}\Big|^{2}\textrm{d} x\leq 0,
\end{align*}\\
by (2.7), applying the interpolation inequality for $\varphi=\psi_{\varepsilon}^{r/2}$ yields
\begin{align*}
\int_{\Pi}\Big|\nabla \psi_{\varepsilon}(\Omega)^{\frac{r}{2}}\Big|^{2}\textrm{d} x\geq \frac{1}{C_0^{10/3}}
\frac{\Big(\int_{\Pi}\psi_{\varepsilon}^{r}\textrm{d} x\Big)^{5/3}}{\Big(\int_{\Pi}\psi_{\varepsilon}^{r/2}\textrm{d} x\Big)^{4/3}}=\frac{\Phi_{r}^{5/3}}{C_0^{10/3}\Phi_{r/2}^{4/3}}.
\end{align*}\\
The differential inequality (A.2) follows from the above two inequalities with $\kappa=4C_{0}^{-10/3}$.
\end{proof}
\begin{proof}[Proof of Proposition A.2]
We set $\lambda=||\psi_{\varepsilon}(\Omega_0)||_{L^{q}}$. The estimate (A.1) is written as
\begin{align*}
\Phi_{r}^{1/r}(t)\leq \frac{C}{(\nu t)^{\frac{3}{2}(\frac{1}{q}-\frac{1}{r}) }}\lambda\qquad t>0, \tag{A.3}
\end{align*}\\
for $r=2^{m}q$ and $m\geq 0$. We prove (A.3) by induction for $m\geq 0$. For $m=0$, the estimate (A.3) holds with $C=1$ by Lemma 2.3.
Suppose that (A.3) holds for $m=k$ with some constant $C=C_k$. We set $s=2r$ for $r=2^{k}q$. By the assumption of our induction, we see that
\begin{align*}
\frac{1}{\Phi_r(t)}\geq \frac{(\nu t)^{\frac{3}{2}(2^{k}-1) }}{C_{k}^{r}\lambda^{r}}.
\end{align*}\\
It follows from (A.2) that
\begin{align*}
\Phi_{s}^{-5/3}\dot{\Phi}_{s}
&\leq -\kappa \nu \Big(1-\frac{1}{s}\Big)\Phi_{r}^{-4/3}\\
&\leq -\kappa \nu\Big(1-\frac{1}{s}\Big) \frac{(\nu t)^{2^{k+1}-2 }}{C_{k}^{4r/3}\lambda^{4r/3} }.
\end{align*}\\
We integrate both sides over $[t_1,t]$ and estimate
\begin{align*}
\frac{3}{2}\Bigg(\frac{1}{\Phi_s^{2/3}(t)}-\frac{1}{\Phi_s^{2/3}(t_1)} \Bigg)
\geq \kappa \Big(1-\frac{1}{s}\Big) \frac{(\nu t)^{2^{k+1}-1 }-(\nu t_1)^{2^{k+1}-1 }}{C_{k}^{4r/3}\lambda^{4r/3} }\Bigg(\frac{1}{2^{k+1}-1}\Bigg).
\end{align*}\\
Since the left-hand side is smaller than $\frac{3}{2}\Phi_{s}^{-2/3}(t)$, sending $t_1\to0$ yields
\begin{align*}
\Phi_s^{2/3}(t)\leq \frac{(C_k\lambda)^{4r/3} }{(\nu t)^{2^{k+1}-1} }\frac{ 2^{k+2}}{\kappa (1-1/s)}.
\end{align*}\\
Since
\begin{align*}
\frac{3}{2s}\frac{4r}{3}=1,\qquad
\frac{3}{2s}(2^{k+1}-1)=\frac{3}{2}\Big(\frac{1}{q}-\frac{1}{s}\Big),
\end{align*}\\
and $1-1/s\geq 1/2$, it follows that
\begin{align*}
\Phi_{s}^{1/s}(t)\leq \frac{C_k\lambda }{(\nu t)^{\frac{3}{2}(\frac{1}{q}-\frac{1}{s})} }\Bigg(\frac{ 2^{k+3}}{\kappa}\Bigg)^{\frac{3}{2^{k+2}q}}.
\end{align*}\\
We proved that (A.3) holds for $m=k+1$ with the constant $C_{k+1}=a_kC_k$ for $a_{k}=b^{\frac{1}{2^{k+2}}}d^{\frac{k+3}{2^{k+2}}}$ and
\begin{align*}
b=\kappa^{-3/q},\qquad d=2^{3/q}.
\end{align*}\\
Thus (A.3) holds for all $m\geq 0$. Since
\begin{align*}
C_{k+1}=a_kC_k
=\prod_{j=0}^{k}a_{j}
=b^{\sum_{j=0}^{k}2^{-j-2}}d^{\sum_{j=0}^{k}(j+3)2^{-j-2}},
\end{align*}\\
and the right-hand side converges as $k\to \infty$, we are able to take a uniform constant $C$ in (A.3) for all $m\geq 0$. The proof is now complete.
\end{proof}
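The convergence of the constants $C_k$ is easy to check numerically. The following short sketch (our code, with hypothetical values of $\kappa$ and $q$) computes the partial products $\prod_{j=0}^{k}a_j$ and shows that they stabilize after a few steps, since the exponent sums $\sum_{j\geq0}2^{-j-2}$ and $\sum_{j\geq0}(j+3)2^{-j-2}$ are finite (equal to $1/2$ and $2$).
\begin{verbatim}
# Illustrative sketch (not from the paper): partial products defining C_{k+1}.
from math import prod

kappa, q = 0.1, 1.5                  # placeholder values for the absolute
b, d = kappa**(-3/q), 2**(3/q)       # constant kappa and the exponent q

def C(k):
    # C_{k+1} = prod_{j=0}^{k} a_j with a_j = b^{1/2^{j+2}} d^{(j+3)/2^{j+2}}
    return prod(b**(1/2**(j+2)) * d**((j+3)/2**(j+2)) for j in range(k+1))

for k in (0, 2, 5, 10, 30):
    print(k, C(k))                   # stabilizes towards b**(1/2) * d**2
\end{verbatim}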
\end{document} |
\begin{document}
\title{Explicit Formulas for the Multivariate
Resultant }
\author{Carlos D' Andrea and Alicia Dickenstein}
\thanks{Both authors are supported by Universidad de Buenos Aires, grant TX094}
\thanks{The second author is also supported by CONICET, Argentina,
and the Wenner-Gren Foundation, Sweden.}
\date{}
\begin{abstract}
We present formulas for the multivariate resultant
as a quotient of two determinants. They extend the classical Macaulay
formulas, and involve matrices of considerably smaller size, whose non
zero entries
include coefficients of the given polynomials
and coefficients of their Bezoutian.
These formulas can also be viewed as an explicit computation of
the morphisms and the determinant of a resultant complex.
\end{abstract}
\maketitle
\section{Introduction}
Given $n$ homogeneous polynomials $f_1,\dots,f_n$ in $n$ variables
over an algebraically closed field $k$
with respective degrees $d_1,\dots, d_n$, the resultant ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$
is an irreducible
polynomial in the coefficients of $f_1,\dots,f_n$, which
vanishes whenever $f_1,\dots,f_n$ have a common root in projective space.
The study of resultants goes back to classical work of Sylvester, B\'ezout,
Cayley, Macaulay and Dixon. The use of resultants in the last decade as a computational tool
for the elimination of variables, as well as for the study of complexity
aspects of polynomial system solving,
has renewed the interest in finding explicit formulas for their computation
(cf. \cite{AS}, \cite{C1}, \cite{C2}, \cite{cm2},\cite{EM}, \cite{kps},\cite{L},
\cite{R},\cite{rojas}).
\par
By a determinantal formula it is meant a matrix
whose entries are polynomials in the coefficients of $f_1,\dots,f_n$
and whose determinant equals the
resultant ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$. Of course, the interest in such a formula
lies in the computation of the resultant, and so it is implicit that
the entries should be algorithmically computable from the
input. It is also meant that all non-zero entries have degree strictly
less than the degree of the resultant.
\par
In case all $d_i$ have a common value $d$, all currently
known determinantal formulas
are listed by Weyman and Zelevinsky in \cite{wz}.
This list is short: for $d \geq 2,$
determinantal formulas exist for every $d$ only
for binary forms (given by the well known Sylvester matrix),
ternary forms and quaternary
forms; when $n=5$, the only possible values for $d$ are $2$ and $3$; finally, for
$n=6$, there exists a determinantal formula only for $d=2$.
We find similar strict restrictions on general $n, d_1,\dots,
d_n$ (cf. Lemma \ref{square}).
\par
Given $d_1,\dots,d_n,$ denote $t_n:= \sum_{i=1}^n (d_i-1)$ the critical degree.
Classical Macaulay formulas \cite{Mac}
describe the resultant ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$
as an explicit quotient of two determinants.
These formulas involve a matrix of size at least the number of monomials
in $n$ variables of degree $t_n+1$, and a submatrix of it.
\par
Macaulay's work has been revisited and sharpened by Jouanolou
in \cite{jou}, where he proposes for each
$t\geq0,$ a square matrix $M_t$ of size
\begin{equation}
\label{tamatrix}
\rho\left(t\right):=\binom{t+n-1}{n-1}+ i(t_n-t)
\end{equation}
whose determinant is a nontrivial multiple of ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$
(cf. \cite{jou}, $3.11.19.7$).
Here, $i(t_n-t)$ denotes the dimension of the $k-$vector space
of elements of degree $t_n-t$ in the ideal generated by a
regular sequence of $n$ polynomials with degrees $d_1,\dots,d_n.$
Moreover, Jouanolou shows that
the resultant may be computed as the ratio between the determinant
of $M_{t_n}$ and
the determinant of one of its square submatrices.
(cf. \cite{jou}, Corollaire $3.9.7.7$).
In this paper, we explicitly find the extraneous factor in Jouanolou's
formulation, i.e. the polynomial $\det(M_t) /{\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$, for all
$t\geq 0$; this factor again happens
to be the determinant of a submatrix ${{\mathbb E}}_t$ of $M_t$ for every $t$,
which allows us to present new formulas {\it \`a la
Ma\-cau\-lay\/} for the resultant, i.e. as a quotient of two determinants
\begin{equation}
\label{macaulay}
{\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n) = \frac{\det(M_t)} {\det({{\mathbb E}}_t)}.
\end{equation}
For $t>t_n,$ we recover
Macaulay's classical formulas. For $t \leq t_n,$ the size of the matrix
$M_t$ is considerably smaller.
In order to give explicit examples, we need to recall the definition
of the {\it Bezoutian associated with $f_1,\dots,f_n$}
(cf. \cite{BCRS}, \cite{fgs}, \cite{kunz}, \cite{ss} and \cite{jou} under
the name ``Formes de Morley'').
Let $\left(f_1,\dots,f_n\right)$ be a sequence of generic homogeneous polynomials
with respective degrees $d_1,\dots,d_n$
\begin{equation*}
f_{i}:=\sum_{|\alpha _{i}|=d_{i}}a_{\alpha _{i}}X^{\alpha _{i}} \quad
\in A\left[ X_{1},\dots ,X_{n}\right] ,
\end{equation*}
where
$A$ is the factorial domain $A:={{\mathbb Z}}\left[ a_{\alpha _{i}}
\right]_{|\alpha _{i}|= d_{i},i=1,\dots ,n}.$
\par
Introduce two sets of $n$ variables $X,Y$
and for each pair $(i,j)$ with $ 1 \leq i,j \leq n$,
write $\Delta_{ij}(X,Y)$ for the incremental quotient
\begin{equation}
\label{deltaij}
\frac{f_{i}(Y_{1},\dots ,Y_{j-1},X_{j},\dots
,X_{n})-f_{i}(Y_{1},\dots ,Y_{j},X_{j+1},\dots ,X_{n})}{X_{j}-Y_{j}}.
\end{equation}
Note that $f_{i}(X)-f_{i}(Y)=\sum_{j=1}^{n}\Delta _{ij}(X,Y)(X_{j}-Y_{j}).$
The determinant
\begin{equation}
\label{delta}
\Delta(X,Y):= \det(\Delta_{ij}(X,Y))_{1\leq i,j\leq n}=\sum_{\left| \gamma
\right| \leq t_{n}}\Delta_{\gamma }\left( X\right) .Y^{\gamma }.
\end{equation}
is a representative of the
{\it Bezoutian} associated with
$\left(f_1,\dots,f_n\right).$
It is a homogeneous polynomial in $A \left[ X,Y\right] $
of degree $t_{n}.$
\par
Recall also that
$$\deg {\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n) = \sum_{i=1}^n d_1 \dots d_{i-1} \cdot d_{i+1} \dots d_n.$$
As a first example, let $n=3, \ \left(d_1,d_2,d_3\right)=\left(1,1,2\right),$
and let
$$
\begin{array}{ccc}
f_1 &= &a_1 X_1 + a_2 X_2 + a_3 X_3\\
f_2 &=& b_1 X_1 + b_2 X_2 + b_3 X_3\\
f_3 &=& c_1 X_1^2 + c_2 X_2^2 + c_3 X_3^2+ c_4 X_1X_2 + c_5 X_1 X_3 + c_6 X_2
X_3\\
\end{array}
$$
be generic polynomials of respective degrees $1,1,2$.
Here, $t_3= 1.$
Macaulay's classical matrix $M_2$ looks as follows:
$$\left(
\begin{array}{cccccc}
a_1 & 0 & 0 & 0 & 0 & c_1 \\
0 & a_2 & 0 & b_2 & 0 & c_2 \\
0 & 0 & a_3 & 0 &b_3 & c_3 \\
a_2 & a_1 & 0 & b_1 & 0 & c_4 \\
a_3 & 0 & a_1 & 0 & b_1 & c_5 \\
0 &a_3& a_2 & b_3 & b_2 & c_6
\end{array} \right)$$
and its determinant equals $ -a_1 Res_{1,1,2}.$ The extraneous factor is
the $1\times 1$ minor formed by the element in the fourth row, second
column.
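This example is small enough to be verified with a computer algebra system. The following sympy sketch (our code and notation, not part of the original construction) builds the matrix $M_2$ above and factors its determinant, which should split as $-a_1$ times an irreducible polynomial of degree $5$, namely $Res_{1,1,2}$.
\begin{verbatim}
# Sketch: factor det(M_2) for the generic system of degrees (1,1,2).
import sympy as sp

a1, a2, a3 = sp.symbols('a1 a2 a3')
b1, b2, b3 = sp.symbols('b1 b2 b3')
c1, c2, c3, c4, c5, c6 = sp.symbols('c1 c2 c3 c4 c5 c6')

# Rows: x1^2, x2^2, x3^2, x1*x2, x1*x3, x2*x3.
# Columns: x1*f1, x2*f1, x3*f1, x2*f2, x3*f2, f3.
M2 = sp.Matrix([
    [a1, 0,  0,  0,  0,  c1],
    [0,  a2, 0,  b2, 0,  c2],
    [0,  0,  a3, 0,  b3, c3],
    [a2, a1, 0,  b1, 0,  c4],
    [a3, 0,  a1, 0,  b1, c5],
    [0,  a3, a2, b3, b2, c6],
])
det = sp.factor(M2.det())
print(det)                          # expected: -a1 * (irreducible quintic)
print(sp.Poly(det).total_degree())  # 6 = 1 + deg Res_{1,1,2}
\end{verbatim}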
\par
On the other hand, because of Lemma \ref{square}, we can exhibit a
determinantal formula for $\pm Res_{1,1,2},$ and it is given by Proposition
\ref{small} for $t=\left[\frac{t_3}{2}\right] =0$
by the determinant of
$$
\left(
\begin{array}{ccc}
\Delta_{(1,0,0)} & a_1 & b_1 \\
\Delta_{(0,1,0)}& a_2 & b_2 \\
\Delta_{(0,0,1)}& a_3 & b_3 \\
\end{array}
\right),
$$
where $\Delta_\gamma$ are coefficients of the Bezoutian (\ref{delta}).
Explicitly, we have
$$\Delta_{(1,0,0)} = c_1 (a_2 b_3 - a_3 b_2) - c_{4} (a_1 b_3 -
a_{3} b_{1}) + c_5 (a_1 b_2 - a_{2} b_{1}),$$
$$ \Delta_{(0,1,0)} = c_{6} (a_1 b_2 - a_{2} b_{1}) - c_{2} (a_1 b_3 -
b_1 a_3)$$
and
$$\Delta_{(0,0,1)}=c_{3} (a_1 b_2 - b_1 a_2).$$
This is the matrix $M_0$ corresponding to the linear transformation $\Psi_0$
which is defined in (\ref{prindis}).
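The coefficients $\Delta_\gamma$ above can be computed directly from (\ref{deltaij}) and (\ref{delta}). A short sympy sketch (again in our own notation) is the following; per Proposition \ref{small}, the final determinant should be $\pm Res_{1,1,2}$.
\begin{verbatim}
# Sketch: Bezoutian of the system of degrees (1,1,2) via incremental quotients.
import sympy as sp

x1, x2, x3, y1, y2, y3 = sp.symbols('x1 x2 x3 y1 y2 y3')
a = sp.symbols('a1:4'); b = sp.symbols('b1:4'); c = sp.symbols('c1:7')
X, Y = (x1, x2, x3), (y1, y2, y3)

def f(i, v):
    if i == 0: return a[0]*v[0] + a[1]*v[1] + a[2]*v[2]
    if i == 1: return b[0]*v[0] + b[1]*v[1] + b[2]*v[2]
    return (c[0]*v[0]**2 + c[1]*v[1]**2 + c[2]*v[2]**2
            + c[3]*v[0]*v[1] + c[4]*v[0]*v[2] + c[5]*v[1]*v[2])

def delta(i, j):
    # incremental quotient with 0-based indices i, j:
    # (f_i(Y_1..Y_j, X_{j+1}..X_n) - f_i(Y_1..Y_{j+1}, X_{j+2}..X_n)) / (X_{j+1}-Y_{j+1})
    return sp.cancel((f(i, Y[:j] + X[j:]) - f(i, Y[:j+1] + X[j+1:])) / (X[j] - Y[j]))

Delta = sp.expand(sp.Matrix(3, 3, delta).det())   # the Bezoutian; here t_3 = 1
D100, D010, D001 = (Delta.coeff(y) for y in (y1, y2, y3))
print(sp.factor(D001))                            # c3*(a1*b2 - a2*b1)
M0 = sp.Matrix([[D100, a[0], b[0]], [D010, a[1], b[1]], [D001, a[2], b[2]]])
print(sp.factor(M0.det()))                        # +/- Res_{1,1,2}
\end{verbatim}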
Take now $n=4,$ and $\left(d_1,d_2,d_3,d_4\right)=\left(1,1,2,3\right).$
The critical degree is $ 3.$
Macaulay's classical matrix, $M_4,$ has size $35\times 35.$
Because the degree of
$\rm Res_{1,1,2,3}$ is $2+3+6+6=17,$ we know that its
extraneous factor must be a minor of size $18\times 18.$
By Proposition \ref{small}, we can find the smallest possible matrix
for $t= 1$ or $t=2.$ Set $t=2.$
We get the following $12\times 12$ matrix
$$\left(
\begin{array}{cccccccccccc}
\Delta^1_{(2,0,0,0)}&\Delta^2_{(2,0,0,0)}&\Delta^3_{(2,0,0,0)}&\Delta^4_{(2,0,0,0)}&
a_1 &0&0 &0 & 0 &0 & 0 & c_1 \\
\Delta^1_{(0,2,0,0)}&\Delta^2_{(0,2,0,0)}&\Delta^3_{(0,2,0,0)}&
\Delta^4_{(0,2,0,0)}&0 &a_2 &0 &0 & b_2 &0 & 0 & c_2 \\
\Delta^1_{(0,0,2,0)}&\Delta^2_{(0,0,2,0)}&\Delta^3_{(0,0,2,0)}&
\Delta^4_{(0,0,2,0)}&0 &0 &a_3 &0 & 0 &b_3 & 0 & c_3 \\
\Delta^1_{(0,0,0,2)}&\Delta^2_{(0,0,0,2)}&\Delta^3_{(0,0,0,2)}&
\Delta^4_{(0,0,0,2)}&0 &0 &0 &a_4 & 0 &0 & b_4 & c_4\\
\Delta^1_{(1,1,0,0)}&\Delta^2_{(1,1,0,0)}&\Delta^3_{(1,1,0,0)}&
\Delta^4_{(1,1,0,0)}&a_2 &a_1 &0 &0 & b_1 &0 & 0 & c_5 \\
\Delta^1_{(1,0,1,0)}&\Delta^2_{(1,0,1,0)}&\Delta^3_{(1,0,1,0)}&
\Delta^4_{(1,0,1,0)}&a_3 &0 &a_1 &0 &0 &b_1 & 0 & c_6 \\
\Delta^1_{(1,0,0,1)}&\Delta^2_{(1,0,0,1)}&\Delta^3_{(1,0,0,1)}&
\Delta^4_{(1,0,0,1)}&a_4 &0 &0 &a_1 &0 &0 & b_1 & c_7 \\
\Delta^1_{(0,1,1,0)}&\Delta^2_{(0,1,1,0)}&\Delta^3_{(0,1,1,0)}&
\Delta^4_{(0,1,1,0)}&0 &a_3 &a_2 &0 & b_3 &b_2 & 0 & c_8 \\
\Delta^1_{(0,1,0,1)}&\Delta^2_{(0,1,0,1)}&\Delta^3_{(0,1,0,1)}&
\Delta^4_{(0,1,0,1)}&0 &a_4 &0 &a_2 & b_4 &0 & b_2 & c_9 \\
\Delta^1_{(0,0,1,1)}&\Delta^2_{(0,0,1,1)}&\Delta^3_{(0,0,1,1)}&
\Delta^4_{(0,0,1,1)}&0 &0 &a_4 &a_3 & 0 &b_4 & b_3 & c_{10} \\
a_1&a_2&a_3&a_4 & 0&0&0&0&0&0&0&0\\
b_1&b_2&b_3&b_4 & 0&0&0&0&0&0&0&0\\
\end{array} \right)$$
where
$$
\begin{array}{ccc}
f_1 &= &a_1 X_1 + a_2 X_2 + a_3 X_3 + a_4 X_4\\
f_2 &=& b_1 X_1 + b_2 X_2 + b_3 X_3 + b_4 X_4\\
f_3 &=& c_1 X_1^2 + c_2 X_2^2 + c_3 X_3^2+ c_4 X_4^2+ c_5 X_1 X_2 + c_6
X_1 X_3 \\
&&+ c_7 X_1 X_4 + c_8 X_2 X_3 + c_9 X_2 X_4 + c_{10} X_3 X_4,\\
\end{array}
$$
$f_4$ is a homogeneous generic polynomial of degree $3$ in
four variables, and for each $\gamma, \, |\gamma|=2,$ we write
$$\Delta_\gamma(X) = \sum_{j=1}^4 \Delta_\gamma^j X_j,$$
which has degree $1$ in the coefficients of each $f_i, i=1,\dots,4.$
The determinant of this matrix is actually
$\pm a_1 {\rm Res}_{1,1,2,3}.$ Here,
the extraneous factor is the $1\times 1$ minor of the matrix formed by
the element in the fifth row, sixth column.
In the following table, we display the minimal size of
the matrices $M_{t}$ and the size of classical Macaulay matrix
for several values of $n, \ d_1,\dots,d_n.$
$$
\boxed{
\begin{array}{cccccccc}
n && \left(d_1,\dots,d_n\right) && {\text { min size }} &&
{\text { classical }} \\
&& && && \\
2 && \left(10,70\right) && 70 && 80 \\
2 && \left(150,200\right) && 200 && 350\\
3 && \left(1,1,2\right)&& 3 && 6\\
3 && \left(1,2,5\right)&& 14 && 28\\
3 && \left(2,2,6\right)&& 21 && 45 \\
4 && \left(1,1,2,3\right)&& 12&&35\\
4 && \left(2,2,5,5\right) && 94 && 364 \\
4 && \left(2,3,4,5\right)&& 90 && 364\\
5 && \left(4,4,4,4,4\right) && 670 && 4845 \\
7 && \left(2,3,3,3,3,3,3\right)&& 2373 && 38760\\
10&& \left(3,3,\dots,3\right) && 175803 && 14307150\\
20&& \left(2,2,\dots,2\right) && 39875264&&131282408400\\
\end{array}
}.
$$
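The sizes in this table can be recomputed from (\ref{tamatrix}), using the Hilbert series (\ref{hilbertf}) below to evaluate $H_d$ and $i(\cdot)$. The following sympy sketch (our code; the helper names are ours) returns the minimal size $\min_t \rho(t)$ together with the classical Macaulay size, which equals $\rho(t_n+1)$.
\begin{verbatim}
# Sketch: minimal size of M_t versus the classical Macaulay matrix.
import sympy as sp
from math import comb

def sizes(d):
    n, tn = len(d), sum(di - 1 for di in d)
    Y = sp.symbols('Y')
    hilb = sp.Poly(sp.cancel(sp.Mul(*[1 - Y**di for di in d]) / (1 - Y)**n), Y)
    H = hilb.all_coeffs()[::-1]                     # H[t] = H_d(t)
    Hd = lambda t: H[t] if 0 <= t < len(H) else 0
    i = lambda u: comb(u + n - 1, n - 1) - Hd(u) if u >= 0 else 0
    rho = lambda t: comb(t + n - 1, n - 1) + i(tn - t)
    return min(rho(t) for t in range(tn + 1)), rho(tn + 1)

print(sizes([1, 2, 5]))      # (14, 28), as in the table
print(sizes([1, 1, 2, 3]))   # (12, 35)
\end{verbatim}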
We give in section 4 an estimate for the ratio between these sizes.
However, it should be noted that the number of coefficients of the
Bezoutian that one needs to compute increases when the size of
the matrix $M_t$ decreases. We refer to \cite{fgs} and \cite{sax}
for complexity considerations on the computation of
Bezoutians. In particular, this computation can be well parallelized.
Also, the particular structure of the matrix and the coefficients
could be used to improve the complexity estimates; this problem is studied
for $n=2$ and $n=3$ in \cite{czh}.
Our approach combines Macaulay's original ideas \cite{Mac},
expanded by Jouanolou
in \cite{jou}, with the expression for the resultant as the determinant of
a Koszul complex inspired by the work of Cayley \cite{cay}.
We also use the work
\cite{Ch1}, \cite{cha} of Chardin on homogeneous subresultants,
where a Macaulay
style formula for subresultants is presented.
In fact, we show that the proposed determinants are explicit
non-zero minors of a bigger
matrix which corresponds to one of the morphisms
in a Koszul resultant complex which in general has many non zero terms,
and whose determinant is ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ (cf. Theorem \ref{rescth}).
These are the complexes considered in \cite{wz},\cite{gkz} in the equal
degree case, built from the spectral sequence associated
with a twisted Koszul complex at the level of sheaves.
\par
We give explicit expressions
for the morphisms in these complexes in terms of the Bezoutian
associated with $f_1,\dots,f_n$ for degrees under critical degree, addressing
in this manner a problem raised by Weyman and Zelevinsky in \cite{wz}
(cf. also \cite[13.1.C]{gkz}).
\par
In the last sections, we show that different classical
formulas can be
viewed as special cases of the determinantal formulas that we present here
(cf. \cite{gkz},\cite{wz}). In particular, we also recover in this setting the
``affine'' Dixon formulas considered in \cite{EM}, and we
classify all such determinantal formulas.
\section{ Notations and some preliminary statements}
Let $S_u$ denote the $A$-free module generated by the monomials in
$A[X]$ with degree $u.$ If $u<0,$ then we set $S_u= 0.$
Define also the following free submodules
$E^{t,j} \, \subseteq \, S^{t,j} \, \subseteq \, S_{t-d_j},$
for all $j=1,\dots,n:$
\begin{equation}
S^{t,j}:= \left<X^\gamma, |\gamma|=t-d_{j},
\gamma_1<d_{1},\dots, \gamma_{j-1}< d_{j-1}\right>
\end{equation}
\begin{equation}
E^{t,j}:= \left<X^\gamma\in S^{t,j},
\mbox{there exists } \ i\neq j: \gamma_i\geq d_{i}\right>.
\end{equation}
Note that $E^{t,n}=0,$ and $S^{t,1}=S_{t-d_1}\
\forall t\in{\mathbb N}_0.$
\par
Let $j_u:S_u\rightarrow S_u^*$ be the isomorphism associated with
the monomial bases in $S_u$ and denote by $T_\gamma:=j_u(X^\gamma)$ the
elements in the dual basis.
\begin{con}
All spaces that we will consider have a monomial basis, or a dual monomial
basis. We shall suppose all these bases have a fixed order.
This will allow us to define matrices ``in the monomial bases'', with
no ambiguity.
\end{con}
Let $\psi_{1,t}$ be the $A$-linear map
$$\psi_{1,t}: S_{t_n-t}^* \rightarrow S_{t} $$
which sends
\begin{equation}
\label{eq:psi1}
T_\gamma \mapsto \Delta_{\gamma}\left(X\right),
\end{equation}
where the polynomial $\Delta_{\gamma}\left(X\right)$ is
defined in (\ref{delta}). Let $\Delta_t$ denote the matrix of $\psi_{1,t}$
in the monomial bases.
\begin{lemma}
\label{bezut}
For suitable orders of the monomial bases
in $S_t$ and $S_{t_n-t},$ we have that
$$
^{\bf t}\Delta_t=\Delta_{t_n-t}.
$$
\end{lemma}
\begin{proof}
It holds that $\Delta\left(X,Y\right)=\Delta\left(Y,X\right)$ by the
symmetry property of Bezoutians (cf. \cite[3.11.8]{jou}).
This implies that $$\sum_{|\gamma|=t_n-t}\Delta_\gamma\left(X\right) Y^\gamma=
\sum_{|\lambda|=t}\Delta_\lambda\left(Y\right) X^\lambda=
\sum_{|\gamma|=t_n-t,\ |\lambda|=t}c_{\gamma\lambda}X^\lambda Y^\gamma,$$
with $c_{\gamma\lambda}\in A.$
It is easy to see that if
$\Delta_t=\left(c_{\gamma\lambda}\right)_{|\gamma|=t_n-t, \ |\lambda|=t}$
then
$\Delta_{t_n-t}=\left(c_{\gamma\lambda}\right)_{|\lambda|=t, \
|\gamma|=t_n-t}.$
\end{proof}
Let us consider also the {\it Sylvester \/} linear map $ \psi_{2,t}:$
\begin{equation}
\label{prev}
\begin{array}{cccccccc}
\psi_{2,t}: & S^{t,1} & \oplus & \cdots & \oplus & S^{t,n} & \rightarrow
& S_{t} \\
& (\, g_{1} & , & \dots & , & g_{n} \, ) & \mapsto &
\sum_{i=1}^{n}g_{i}f_{i},
\end{array}
\end{equation}
and denote by $D_t$ its matrix in the monomial bases.
As usual, $\psi_{2,t_n-t}^*$
denotes the dual mapping of (\ref{prev}) in degree $t_n-t.$
Denote
\begin{equation}
\label{prindis}
\Psi_t: S_{t_n-t}^* \oplus \left(S^{t,1}\oplus
\cdots \oplus S^{t,n} \right)
\rightarrow S_{t}
\oplus \left (S^{t_n-t,1} \oplus \cdots \oplus
S^{t_n-t,n}\right)^*
\end{equation}
the $A$-morphism defined by
\begin{equation}
\label{prinmap}
( T\, , \, g \ ) \mapsto
\ (\psi_{1,t}(T)+\psi_{2,t}(g),\ \psi_{2,t_n-t}^{*}(T)),
\end{equation}
and call $M_t$ the matrix of $\Psi_t$ in
the monomial bases.
Denote also by $E_t$ the submatrix of $M_t$ whose columns are
indexed by
the monomials in $E^{t,1}\cup \dots \cup E^{t,n-1},$ and whose rows
are indexed by the monomials
$X^\gamma$
in $S_t$ for which there exist two different indices $i,j$ such that
$\gamma_i \geq d_i, \gamma_j\geq d_j.$
With these choices it is not difficult to see that
$M_t$ and $E_t$ (when defined) are square matrices.
\begin{remark}
Observe that $E_t$ is actually a submatrix of $D_t.$
In fact, $E_t$ is the transpose of the square submatrix
named ${\mathcal E}
\left(t\right)$ in \cite{cha}, whose determinant is denoted by
$\Delta\left(n,t\right)$ in \cite[Th. 6]{Mac}.
\end{remark}
\begin{lemma}\label{size}
$M_t$ is a square matrix of size $\rho(t),$ where $\rho$ is
the function defined in (\ref{tamatrix}).
\end{lemma}
\begin{proof}
The assignment which sends a monomial $m$ in $S^{t,i}$ to
$x_i^{d_i} \cdot m$ injects the union of the monomial bases in
each $S^{t,i}$ onto the monomials of degree $t$ which are divisible
by some $x_i^{d_i}.$
It is easy to see that the cardinality of the set of complementary
monomials of degree $t$ is precisely $H_d(t)$,
where $H_{d}(t)$ denotes
the dimension of the $t$-graded
piece of the quotient of the polynomial ring over $k$
by the ideal generated by a regular sequence
of homogeneous polynomials with degrees $d_1,\dots,d_n$
(cf. \cite[3.9.2]{jou}). Moreover, using the assignment
$\left(\gamma_1,\dots,\gamma_n\right)\mapsto
\left(d_1-1-\gamma_1,\dots,d_n-1-\gamma_n\right),$ it follows that
\begin{equation}
\label{sym}
H_d(t) = H_d(t_n-t).
\end{equation}
\par
We can compute explicitly this Hilbert function by the following
formula (cf. \cite[\S 2]{Mac}):
\begin{equation}
\label{hilbertf}
\frac{\prod_{i=1}^{n}\left( 1-Y^{d_{i}}\right) }{\left( 1-Y\right) ^{n}}=
\sum_{t=0}^\infty H_{d}(t).Y^t.
\end{equation}
Then, $${\rm rk} \, (S^{t,1}\oplus
\cdots \oplus S^{t,n}) = {\rm rk} \, S_t - H_d(t) .$$
Similarly,
$${\rm rk}\left(S^{t_n-t,1}\oplus
\cdots \oplus S^{t_n-t,n}\right)^* = {\rm rk} \, (S_{t_n-t})^*
- H_d(t_n-t).$$
Therefore, $M_t$ is square of
size ${\rm rk} \, S_t - H_d(t_n-t) + {\rm rk} \, S_{t_n-t}.$ Since
$i(t_n-t)= {\rm rk} \, S_{t_n-t} - H_d(t_n-t),$ the size of
$M_t$ equals
${\rm rk} \, S_t + i(t_n-t) = \rho(t).$
\end{proof}
\begin{remark} Ordering properly
the monomial bases, $M_t$ is the transpose of the matrix
which appears in \cite[3.11.19.7]{jou}. It has the following structure:
\begin{equation}
\label{mjouanolou}
\left[
\begin{array}{cc}
\Delta_{t} &D_{t} \\
^{\bf t} D_{t_n-t} & 0 \\
\end{array}
\right].
\end{equation}
\end{remark}
\begin{remark}
Because $\psi_{2,t}=0$ if and only if $t<\min\{d_i\},$ we have that
$\Psi_t=\psi_{2,t}+\psi_{1,t}$ if $t>t_n-\min\{d_i\},$
and $\Psi_t=\psi_{2,t}$ if $t>t_n.$
\end{remark}
Finally, denote ${{\mathbb E}}_t$ the square submatrix of $M_t$ which has the following
structure:
\begin{equation}
\label{efactor}
{{\mathbb E}}_t=\left[
\begin{array}{cc}
* &E_{t} \\
^{\bf t} E_{t_n-t} & 0 \\
\end{array}
\right].
\end{equation}
It is clear from the definition
that $\det({{\mathbb E}}_t)=\pm\det(E_t)\det(E_{t_n-t}).$
\begin{remark}
Dualizing (\ref{prinmap}) and using Lemma \ref{bezut}, together with a careful
inspection of (\ref{mjouanolou}) and (\ref{efactor}), we have that,
after a suitable ordering of their rows and
columns,
$$^{\bf t}M_t=M_{t_n-t} \ \ \mbox{and} \ \ ^{\bf t}{\mathbb E}_t={\mathbb E}_{t_n-t} . $$
\end{remark}
\section{Generalized Macaulay formulas}
We can extend the map $\psi_{2,t}$ in (\ref{prev}) to the direct
sum of all homogeneous polynomials with degrees $t-d_1,\dots, t-d_n,$
and the map $\psi_{2,t_n-t}$ to the direct sum of all homogeneous
polynomials with degrees $t_n-t-d_1,\dots, t_n-t-d_n,$ to get a map
$$
\tilde{\Psi}_t: \left(S_{t_n-t}\right)^* \oplus
\left(S_{t-d_1}\oplus \cdots \oplus S_{t-d_n} \right)
\rightarrow S_{t}
\oplus \left (S_{t_n-t-d_1} \oplus \cdots \oplus
S_{t_n-t-d_n}\right)^*.$$
We can thus see the matrix $M_t$ of $\Psi_t$ in (\ref{prindis}) as a choice
of a square submatrix of $\tilde{\Psi}_t.$
We will show that its determinant is a non zero minor of maximal size.
\begin{proposition}
\label{util}
Let $M'_t$ be a square matrix over $A$ of the form
\begin{equation}
\label{prima}
M'_t:=\left[
\begin{array}{cc}
\Delta_{t} & F_t \\
^{\bf t}F_{t_n-t}& 0 \\
\end{array}
\right]
\end{equation}
where $F_t$ has $i(t)$ columns and corresponds to a restriction of the
map
$$\begin{array}{ccc}
S_{t-d_1}\oplus\dots\oplus S_{t-d_n}&\rightarrow&S_t\\
\left(g_1,\dots,g_n\right)&\mapsto&\sum_{i=1}^n g_i \, f_i;\\
\end{array}
$$
and similarly for $F_{t_n-t}$ in degree $t_n-t.$
Then, $\det(M'_t)$ is a multiple
of ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ (possibly zero).
\end{proposition}
\begin{proof}
It is enough to mimic for the matrix $M'_t$
the proof performed by Jouanolou in \cite[Prop. 3.11.19.10]{jou}
to show that the determinant of the matrix $M'_t$ is an inertia
form of the ideal $\left<f_1,\dots,f_n\right>$ (i.e. a multiple of the resultant).
We include this proof for the convenience of the reader.
\par
Let $N:=\sum_{i=1}^n \#\{\alpha_i\in{\mathbb N}_0^n:
|\alpha_i|=d_i\}.$
Given an algebraically closed field $k,$ and
$a = (a_{\alpha_i})_{ |\alpha_i|= d_i, \ i=1,\ldots,n}, $
a point in $k^N,$
we denote by $f_1(a), \dots, f_n(a)\in k[X]$
the polynomials obtained from $f_1,\dots,f_n$ when the coefficients
are specialized to $a$, and similarly for the coefficients of
the Bezoutian.
Because of the irreducibility of ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n),$ it is enough to
show that for all $a\in k^N$ such that $f_1(a),\ldots,f_n(a)$
have a non trivial solution in $k^n,$ the determinant of the
specialized matrix $M'_t(a)$ is equal to $0.$
\par
Suppose that this is the case,
and let $\left(p_1,\ldots,p_n\right)$ be a non
trivial solution. Without loss of generality, we can suppose $p_1
\neq0.$
One of the rows of $M'_t(a)$ is indexed by $X_1^t.$ Replace all the
elements in that row as follows:
\begin{enumerate}
\item if the element belongs to a column indexed by a monomial
$X^\gamma, \ |\gamma|=t_n-t,$ then replace it with
$\Delta_\gamma (a);$
\item if it belongs to a column indexed by a monomial $X^\gamma\in
S_{t-d_i},$ replace it with $X^\gamma\,f_i(a).$
\end{enumerate}
It is easy to check that the determinant of the
modified matrix is equal to $X_1^t\,\det(M'_t(a)).$
Now, we claim that under the specialization $X_i\mapsto p_i,$
the determinant of the modified matrix will be equal to zero if and
only if $\det(M'_t(a))=0.$
\par
In order to prove this, we will show that the following submatrix
of size $\left(i(t_n-t)+1\right)\times \binom{n+t_n-t-1}{n-1}$
has rank less than or equal to $i(t_n-t):$
$$
\left[\begin{array}{ccc}
\Delta_{\gamma_1}(a)(p)&\ldots&\Delta_{\gamma_s}(a)(p)\\
&^{\bf t}F_{t_n-t}(a)&\\
\end{array}\right].
$$
This, combined with a Laplace expansion of the determinant of the modified
matrix, gives the desired result.
\par
If the rank of the block $\left[^{\bf t}F_{t_n-t}(a)\right]$ is less than
$i(t_n-t),$ then the claim follows straightforwardly. Suppose this is not
the case. Then the family
$\{X^{\gamma}\,f_i(a), \ X^\gamma\in S_{t_n-t-d_i}\}$
is a basis of the piece of degree $t_{n}-t$ of the
generated ideal $I(a):=
\langle f_1(a),\ldots,f_n(a)\rangle.$
We will show that in this case the polynomial
$ \sum_{|\gamma|=t_n-t}\Delta_{\gamma}(a)(p)X^\gamma$ belongs to $I(a)$,
which proves the claim.
\par
Because of (\ref{deltaij}) and (\ref{delta}),
the polynomial $\left(X_1-Y_1\right)\Delta(a)(X,Y)$ lies in the ideal
$\langle f_1(a)(X)- f_1(a)(Y),\ldots,f_n(a)(X) -f_n(a)(Y)\rangle$.
Specializing $Y_i\mapsto p_i,$ we deduce that
$ (X_1-p_1) \sum_{j=0}^{t_n}(\sum_{|\gamma|=j}
\Delta_{\gamma}(a)(p)\, X^\gamma)$ is in the graded ideal $I(a).$
This, combined with the fact that $p_1\neq0,$ proves that
$ {\sum_{|\gamma|=j}
\Delta_{\gamma}(a)(p)\, X^\gamma}\in I(a)$ for all $j.$
\end{proof}
In particular, ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ divides $\det(M_{t}).$
We describe the extraneous factor explicitly in the following theorem,
which is the main result in this section.
Before stating it, we set
the following convention:
if the matrix ${{\mathbb E}}_t$ is indexed by an empty set,
we define $\det\left({{\mathbb E}}_t\right)=1.$
\begin{theorem}
\label{mainth}
For any $t\geq0,$
$\det\left(M_t\right) \neq 0$ and $\det({\mathbb E}_t) \neq 0.$
\par
Moreover, we have the following formula {\it \`a la Macaulay \/}:
$$ {\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n) = \pm \frac{\det(M_t)} {\det({\mathbb E}_t)} \, .$$
\end{theorem}
For the proof of Theorem \ref{mainth},
we will need the following auxiliary lemma. Let
$D_t$ and $E_t$ be the matrices defined in
\S 2 before Lemma \ref{size}.
\begin{lemma}
\label{lemauxiliar}
Let $t\geq0,$ and $\Lambda$ a ring which contains $A.$ Suppose we have a
square matrix $M$ with coefficients in $\Lambda$ which has the following
structure:
$$ M=\left[
\begin{array}{cc}
M_1 & D_t\\
M_2 & 0 \\
\end{array}
\right],
$$
where $M_1, M_2$ are rectangular matrices.
Then, there exists an element $m\in\Lambda$ such that
$$\det\left(M\right)=m \ .\ \det\left(E_t\right)$$
\end{lemma}
\begin{proof}
$D_t$ is square if and only if $t>t_n$ (cf.
\cite[\S 3]{Mac}).
In this case,
$$\det(M)= \pm \det(M_2)\det(D_t);$$
because of
Macaulay's formula (cf. \cite[Th. 5]{Mac}), we
have that the right hand side equals
$$\pm\det(M_2) \det(E_t)
{\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n),$$
and the conclusion follows easily.
\par
Suppose now
$0\leq t\leq t_n.$ As in the introduction, let $i(t)$ denote the dimension
of the $k$-vector space
of elements of degree $t$ in the ideal generated by a
regular sequence of $n$ polynomials with degrees $d_1,\dots,d_n.$
Then $D_t$ has $i(t) + H_d\left(t\right)$ rows and $i(t)$ columns,
and there is a bijection between the families ${{\mathcal F}}$ consisting of $H_d\left(t\right)$
monomials of degree $t$ and the maximal minors $m_{{\mathcal F}}$ of $D_t.$
Namely, $m_{{\mathcal F}}$ is the determinant of the square submatrix made
by avoiding all rows indexed by monomials in ${{\mathcal F}}.$
\par
It is not hard to check that $m_{{\mathcal F}}$ is the determinant
$\phi_{{\mathcal F}}^*$ which is used in
\cite{cha}, for computing the {\it subresultant} associated with the
family $\{X^\gamma\}_{\gamma\in{{\mathcal F}}}.$
\par
Now, using the generalized Macaulay's formula for the subresultant
(cf. \cite{cha}), we have that
$$m_{{\mathcal F}}= \pm \det(E_t). \Delta^t_{{\mathcal F}},$$
where $\Delta^t_{{\mathcal F}}$ is the subresultant associated with the family
${{\mathcal F}}.$ It is a polynomial in $A$ which vanishes under a
specialization of the coefficients $ f_1(a),\dots, f_n(a)$ if and only
if the family $\{X^\gamma\}_{\gamma\in{{\mathcal F}}}$ fails to be a basis of the
$t$-graded piece of the quotient
$k[X_1,\dots,X_n]/ \langle f_1(a), \dots, f_n(a) \rangle$ (cf. \cite{Ch1}).
\par
Let $m^c_{{\mathcal F}}$ be
the complementary minor of $m_{{\mathcal F}}$ in $M$ (i.e. the determinant of the
square submatrix of $M$ which is made by deleting all rows and columns that
appear in $m_{{\mathcal F}}$).
By the Laplace expansion of the determinant, we have that
$$
\det \left( M \right) =\sum_{{\mathcal F}} s_{{\mathcal F}} \cdot m_{{\mathcal F}} \cdot m_{{\mathcal F}}^{c}
= \det\left(E_t\right) \left( \sum_{{\mathcal F}} s_{{\mathcal F}} \cdot m^c_{{\mathcal F}} \cdot \Delta^t_{{\mathcal F}}
\right)$$
with $ s_{{\mathcal F}}= \pm 1.$
Setting $m= \sum_{{\mathcal F}} s_{{\mathcal F}} \cdot m^c_{{\mathcal F}} \cdot \Delta^t_{{\mathcal F}}
\in\Lambda,$ we have the desired result.
\end{proof}
We now give the proof of Theorem \ref{mainth}.
\begin{proof}
In \cite{Mac} it is shown that $\det\left(E_t\right)
\neq 0, \ \forall t \geq 0.$ This implies that $\det\left({\mathbb E}_t\right)\neq0.$
In order to prove that $\det\left({\mathbb E}_t\right)=
\det(E_t)\det(E_{t_n-t})$ divides
$\det(M_t),$ we use the following trick:
consider the ring $B:={{\mathbb Z}}\left[ b_{\alpha _{i}}
\right]_{|\alpha _{i}|= d_{i},i=1,\dots ,n},$ where $b_{\alpha_i}$ are
new variables, and the polynomials
$$ f_{b,i}:=\sum_{|\alpha _{i}|=d_{i}}b_{\alpha_{i}}X^{\alpha _{i}} \quad
\in B\left[ X_{1},\dots ,X_{n}\right]. $$
Let $D_t^b$ be the matrix of the linear transformation
$\psi^b_{2,t}$ determined by the formula (\ref{prev})
but associated with the sequence
$f_{b,1},\dots,f_{b,n}$ instead of $f_1,\dots,f_n.$
Set $\Lambda:= {\mathbb Z}\left[a_{\alpha_i},b_{\alpha_i}\right]$, and
consider the matrix $M(a,b)$ with coefficients in $\Lambda$ given by
$$ M(a,b)=\left[
\begin{array}{cc}
\Delta_t & D_t\\
^{\bf t} D_{t_n-t}^b & 0 \\
\end{array}
\right].
$$
It is easy to see that $M(a,a)= M_t$, and because
of Lemma \ref{lemauxiliar},
we have that $\det\left(E_t\right)$ divides $\det\left(M(a,b)\right)$ in
$\Lambda.$ Transposing $M(a,b)$ and using a symmetry argument,
again by the same lemma, we can
conclude that $\det\left(E^b_{t_n-t}\right)$ divides $\det\left(M(a,b)\right)$ in
$\Lambda,$ where $E^b_{t_n-t}$ has the obvious meaning.
\par
The ring $\Lambda$ is a factorial domain and $\det(E_t)$ and
$\det(E_{t_n-t}^b)$
have no common factors in $\Lambda$ because they depend on
different sets of variables. So, we have
$$\det\left(M(a,b)\right)= p(a,b) \, \det(E_t) \,
\det(E_{t_n-t}^b)$$
for some $p \in \Lambda.$ Now, specialize
$b_{\alpha_i}\mapsto a_{\alpha_i}.$
The fact that $\det\left(M_t\right)$ is a multiple of the resultant
has been proved in Proposition \ref{util} (see also \cite[Prop. 3.11.19.21]{jou})
for $0\leq t\leq t_n,$ and in \cite{Mac} for
$t>t_n.$ On the other side, since
${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ is
irreducible and depends on all the coefficients of $f_1,\dots,f_n$
while $\det(E_t)$ and $ \det(E_{t_n-t})$ do not depend on the coefficients of $f_n,$
we conclude that ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ divides $p(a,a).$ Moreover, the following lemma
shows that they have the same degree. Then, their ratio is a rational
number $\lambda.$
We can see that $\lambda=\pm 1,$ considering the specialized family
$X_1^{d_1}, \dots, X_n^{d_n}.$
\end{proof}
\begin{lemma}
For each $i=1,\dots,n$ the degree $\deg_{(a_{\alpha_i})}\left(M_t\right)$
of $M_t$ in the coefficients of $f_i$ equals
$$
\begin{array}{c}
\deg_{(a_{\alpha_i})}\left({\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)\right)+ \deg_{(a_{\alpha_i})}\left(E_t\right)+\deg_{(a_{\alpha_i})}\left(E_{t_n-t}\right) = \\
d_1\dots d_{i-1}.\, d_{i+1}\dots d_n +\deg_{(a_{\alpha_i})}\left(E_t\right)+\deg_{(a_{\alpha_i})}\left(E_{t_n-t}\right).
\end{array}
$$
\end{lemma}
\begin{proof}
Set $J_u(i):= \{X^\gamma\in S_u,
\gamma_i\geq d_i, \, \gamma_j<d_j \, \forall j\neq i\}, \ u = t,t_n-t.$
{}From the definitions of $\psi_{2,t}$ and $E_t,$ it is easy to check that,
if $\delta_t$ is a maximal minor of $D_t,$
$$ \deg_{(a_{\alpha_i})}\left(\delta_t\right)
- \deg_{(a_{\alpha_i})}\left(E_{t}\right)= \# J_t(i).$$
Using Laplace expansion, it is easy to see that $\det\left(
M_t\right)$ may be expanded as follows
$$\det\left(M_t\right) = \sum_{\delta_t,\delta_{t_n-t}} s_\delta \cdot
m_\delta \cdot \delta_t \cdot \delta_{t_n-t} $$
where $s_\delta = \pm1$, $\delta_{t_n-t}$ is a maximal minor of $^{\bf t} D_{t_n-t}$
and $m_\delta$ is a minor of size $H_d(t)$ in $\Delta_{t}.$
As each entry of $\Delta_{t}$ has degree $1$ in the coefficients
of $f_i,$ the lemma will be proved if we show that
\begin{equation}
\label{sumacard}
\# J_t(i) + \# J_{t_n-t}(i) + H_d(t) = d_1\dots d_{i-1} . \, d_{i+1}\dots d_n.
\end{equation}
Now, as already observed in the proof of Lemma \ref{size},
$H_d(t)$ can be computed as the cardinality of the following set:
\begin{equation}
\label{hdt}
H_{d,t}:=\{X^\gamma\in S_t, \, \gamma_j < d_j \, \forall j\},
\end{equation}
and $d_1\dots d_{i-1}. \, d_{i+1}\dots d_n$ is
the cardinality of
$$\Gamma_{i}:=\{X_1^{\gamma_1}\dots X_{i-1}^{\gamma_{i-1}} X_{i+1}^{\gamma_{i+1}}\dots
X_n^{\gamma_n}\, , \, \gamma_j < d_j \, \forall j\}.$$
In order to prove (\ref{sumacard}) it is enough to exhibit a bijection
between $\Gamma_{i}$ and the disjoint union $J_t(i)\bigcup J_{t_n-t}(i)\bigcup
H_{d,t}.$ This is actually a disjoint union for all $t,$ unless
$t_n-t=t.$ But what follows shows that the bijection is well defined
even in this case.
Let $X^{\widehat \gamma}\in \Gamma_{i}, \, \widehat \gamma=
\left(\gamma_1,\dots,\gamma_{i-1},\gamma_{i+1},\dots
,\gamma_n \right)$ with $\gamma_j<d_j \, \forall j\neq i.$
If $|\widehat \gamma|\leq t,$ then there exists a unique $\gamma_i$ such that
$\gamma:= \left(\gamma_1,\dots,\gamma_n\right)\in{\mathbb N}_0^n$ verifies
$|\gamma|=t.$ If $\gamma_i<d_i,$ then we send $X^{\widehat \gamma}$ to
$X^\gamma \in H_{d,t}.$ Otherwise, we send it to $X^\gamma \in J_{t}(i).$
If $|\widehat \gamma|> t,$ let $\widehat\gamma^*$ denote the
multiindex
$$\left(d_1-1-\gamma_1,\dots,d_{i-1}-1-\gamma_{i-1},
d_{i+1}-1-\gamma_{i+1},\dots, d_n-1-\gamma_n\right).$$
Then, $|\widehat\gamma^*|< t_n-t,$ and there exists
a unique $\gamma_i$ such that the multiindex $\gamma$ defined
by
$$
\left(d_1-1-\gamma_1,\dots,d_{i-1}-1-\gamma_{i-1},\gamma_i,
d_{i+1}-1-\gamma_{i+1},\dots, d_n-1-\gamma_n\right)$$
has degree $t_n-t.$ We can send $X^{\widehat\gamma}$ to
$X^\gamma \in J_{t_n-t}(i)$ provided that $\gamma_i\geq d_i.$
Suppose this is not the case; then the monomial with exponent
$$\gamma^*:=\left(\gamma_1,\dots,\gamma_{i-1},d_i-1-\gamma_i,\gamma_{i+1},\dots,
\gamma_n\right)$$
has degree $t,$ contradicting the fact that $|\widehat \gamma|> t.$
With these rules, it is straightforward to check that we obtain the desired bijection.
\end{proof}
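The identity (\ref{sumacard}) can also be confirmed by brute force on small
examples, enumerating exponent vectors directly from the definitions of
$J_u(i)$ and $H_{d,t}$; a minimal Python sketch:
\begin{verbatim}
# Brute-force check of (sumacard) for d = (2,3,4): for every t and i,
# #J_t(i) + #J_{t_n-t}(i) + H_d(t) equals the product of the d_j with j != i.
from itertools import product
from math import prod

def exponents(n, t):
    return [g for g in product(range(t + 1), repeat=n) if sum(g) == t]

def check(d, t, i):
    n, t_n = len(d), sum(d) - len(d)
    J = lambda u: sum(1 for g in exponents(n, u)
                      if g[i] >= d[i]
                      and all(g[j] < d[j] for j in range(n) if j != i))
    H = sum(1 for g in exponents(n, t) if all(g[j] < d[j] for j in range(n)))
    return J(t) + J(t_n - t) + H == prod(d[j] for j in range(n) if j != i)

assert all(check((2, 3, 4), t, i) for t in range(7) for i in range(3))
\end{verbatim}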
Changing the order of the sequence $(f_1,\dots,f_n)$ and
applying Theorem \ref{mainth}, we deduce the following.
\begin{corollary}
\label{remate}
${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n) = \gcd \{\mbox{maximal minors of } \, \, \tilde\Psi_t\}.$
\end{corollary}
\section{Estimating the size of $M_t$}
We have, for each integer $t\geq0,$ a matrix $M_t$ of size $\rho(t),$ where
$\rho$ was defined in (\ref{tamatrix}), whose determinant is
a nontrivial multiple of the resultant, and such that, moreover, its extraneous factor
is a minor of it. We now determine the value of $t$ for which this matrix is
smallest.
We can write $\rho$ as
$$\rho(t) = \binom{n+t-1}{n-1} + \binom{n + t_n -t -1}{n-1} - H_d(t_n-t).$$
It is straightforward to check that
$\binom{n+t-1}{n-1} + \binom{n + t_n -t -1}{n-1}$ is the restriction to the integers of
a polynomial $\phi(t)$ in a real variable $t, $ symmetric with respect to $\frac{t_n}{2}$
(i.e. $\phi(\frac{t_n}{2}+t)=\phi(\frac{t_n}{2}-t)$ for all $t$).
Moreover, $\phi$
reaches its minimum over $\left[0,t_n\right]$ at $t =\frac{t_n}{2}.$
Since
\begin{equation}
\label{rho}
\rho(t) = \phi(t) -H_d(t) = \phi(t_n-t) - H_d(t_n-t) = \rho(t_n-t),
\end{equation}
in order to study the behaviour of $\rho$
we need to understand how $H_d(t)$ varies with $t$.
We denote as usual the integer part of a real number $x$ by the symbol
$\left[x\right].$
\begin{proposition}
\label{nondec}
$H_d(t)$ is non-decreasing on (the integer points of)
the interval $\left[0,\left[\frac{t_n}{2}\right]\right].$
\end{proposition}
\begin{proof}
We will prove this result by induction on $n$. The case
$n=1$ is obvious since $t_1=d_1-1$ and $H_d(t) = 1$ for any $t =0,\dots,d_1-1.$
Suppose then that the statement holds for $n$ variables and set
$$\widehat d:=\left(d_1,\dots,d_{n+1}\right)\in{\mathbb N}_0^{n+1},$$
$$d:=(d_1,\dots,d_n).$$
Let $t\geq 0$ be such that $t+1 \leq \left[\frac{t_{n+1}}{2}\right].$ We want to
see that $\varphi(t):= H_{\widehat d}(t+1) - H_{\widehat d}(t)$
is non negative.
Recall from (\ref{hdt}) that, for every $t\in{\mathbb N}_0$,
$H_{\widehat d}(t) $ equals the cardinality of the
set
$$\{\gamma\in{\mathbb N}_0^{n+1}: \ |\gamma|=t,
\, 0\leq \gamma_i\leq d_i-1, \ i=1,\dots,n+1 \}. $$
Then, it can also be computed as
$$\sum_{j=0}^{d_{n+1}-1}\# \{\widehat\gamma\in{\mathbb N}_0^n: \
|\widehat\gamma|=t-j,
\, 0\leq \widehat\gamma_i\leq d_i-1, \ i=1,\dots,n
\},$$
which gives the equality
$H_{\widehat d}(t)= \sum_{j=0}^{d_{n+1}-1} H_d(t-j).$
It follows that $\varphi(t) = H_d(t+1) - H_d(t+1-d_{n+1}).$
\par
If $t+1 \leq \left[\frac{t_{n}}{2}\right],$ we deduce that $\varphi(t)
\geq 0$ by inductive hypothesis. Suppose then that $t+1$ is in the
range
$\left[\frac{t_{n}}{2}\right] < t +1 \leq \left[\frac{t_{n+1}}{2}\right].$
As $H_d(t+1)= H_d(t_n -t-1),$ it
is enough to show that $t_n -t -1 \geq t+1 -d_{n+1}$ and
$t_n -t-1 \leq \left[\frac{t_{n}}{2}\right],$ which can be
easily checked, and the result follows again by inductive
hypothesis.
\end{proof}
\begin{corollary}
\label{min}
The size
$\rho\left(t\right)$ of the matrix $M_t$ is minimal over ${\mathbb N}_0$
when $t=\left[\frac{t_n}{2}\right].$
\end{corollary}
\begin{proof} By (\ref{rho}), $\rho$ has a minimum at
$\left[\frac{t_{n}}{2}\right]$ over $[0,t_n]$ because $\phi$
attains its minimum and $H_d$ its maximum there.
If $t>t_n,$ we have that
$\rho\left(t\right)=\binom{n+t-1}{n-1}.$
For $t$ in this range, it is easy to check that
$\rho\left(t_n\right)=\binom{n+t_n-1}{n-1}<
\rho\left(t\right).$ Then, $\rho(t) > \rho(t_n)
\geq \rho\left(\left[\frac{t_n}{2}\right]\right).
$
\end{proof}
\begin{remark}
Note that when
$t_n$ is odd, $\rho(\left[\frac{t_{n}}{2}\right]) =
\rho(\left[\frac{t_{n}}{2}\right]+1)$, and then the size
of $M_{t}$ is also minimal for $t=\left[\frac{t_{n}}{2}\right]+1$
in this case.
\end{remark}
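Numerically the behaviour of $\rho$ is easy to observe; the following sketch
(illustrative only) tabulates $\rho(t)$ on $[0,t_n]$ for the sample degree
vector $d=(2,3,4)$ and checks the symmetry (\ref{rho}) and the location of the
minimum asserted in Corollary \ref{min}.
\begin{verbatim}
# rho(t) over [0, t_n] for d = (2, 3, 4).
from sympy import symbols, Poly, binomial

Y = symbols('Y')

def hilbert_H(d, t):
    num = 1
    for di in d:
        num *= sum(Y**k for k in range(di))
    coeffs = Poly(num, Y).all_coeffs()[::-1]
    return coeffs[t] if 0 <= t < len(coeffs) else 0

def rho(d, t):
    n, t_n = len(d), sum(d) - len(d)
    return (binomial(n + t - 1, n - 1) + binomial(n + t_n - t - 1, n - 1)
            - hilbert_H(d, t_n - t))

d = (2, 3, 4)
t_n = sum(d) - len(d)
values = [rho(d, t) for t in range(t_n + 1)]    # [28, 21, 16, 14, 16, 21, 28]
assert values == values[::-1]                    # rho(t) = rho(t_n - t)
assert min(values) == values[t_n // 2]           # minimal at t = [t_n/2]
\end{verbatim}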
Denote by $p:= \frac{\sum_{i=1}^n d_i}{n}$ the average value of the degrees,
and set $q:=\frac{p+1}{2 p}.$ Note that except in the linear case when
all $d_i=1,$ it holds that $p>1$ and $q<1.$
\begin{proposition}
\label{rate}
Assume $p >1.$
The ratio between the size of the smallest matrix $M_t$ and the
classical Macaulay matrix $M_{t_n+1}$ can be bounded by
$$
\frac{\rho\left(\left[t_n/2\right]\right)}{\rho\left(t_n+1\right)}
\leq 2 \, q^{n-1}.$$
In particular, it tends to zero exponentially in $n$ when the number of
variables tends to infinity and $p$ remains bigger than a constant
$c > 1.$
\end{proposition}
\begin{proof}
When $t_n$ is even, $t_n - [t_n/2] = [t_n/2]$ and when
$t_n$ is odd, $t_n -[t_n/2] = [t_n/2] +1.$ In both cases,
$$
\frac{\rho\left(\left[t_n/2\right]\right)}{\rho\left(t_n+1\right)}
\leq
\frac{2\, \binom{n+\left[t_n/2\right]}{n-1} }{
\binom{n+t_n} {n-1}}=
2\, \frac{\left(\left[t_n/2\right]+n\right)\dots
\left(\left[t_n/2\right]+2\right)}{\left(t_n+n\right)\dots
\left(t_n+2\right)}=
$$
$$
= 2\, \left(\frac{\left[t_n/2\right]+n}{t_n+n}\right)
\left(\frac{\left[t_n/2\right]+n-1}{
t_n+n-1}\right)\dots
\left(\frac{\left[t_n/2\right]+2}{
t_n+2}\right)\leq $$
$$\leq
2\, \left(\frac{\left[t_n/2\right]+n}{
t_n+n}\right)^{n-1}.$$
Since $t_n = np -n,$ we deduce that
$$\frac{\left[t_n/2\right]+n}{
t_n+n }\, \leq \, \frac{ \frac{np}{2} + \frac{n}{2}}{np} =
\frac{1}{2} + \frac{1}{2p} = q,
$$
as wanted.
\end{proof}
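The decay predicted by Proposition \ref{rate} is already visible for moderate
values of $n$; the following sketch checks the intermediate bound used in the
proof above, for equal degrees $d_i=p=4$ (so that $t_n=np-n$).
\begin{verbatim}
# Check 2*C(n+[t_n/2], n-1)/C(n+t_n, n-1) <= 2*q^(n-1) for d_i = p = 4.
from sympy import binomial, Rational

p = 4
q = Rational(p + 1, 2 * p)
for n in (5, 10, 20):
    t_n = n * p - n
    lhs = 2 * binomial(n + t_n // 2, n - 1) / binomial(n + t_n, n - 1)
    assert lhs <= 2 * q ** (n - 1)
\end{verbatim}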
\section{Resultant complexes}
In this section we consider Weyman's complexes (cf. \cite{wz}, \cite{gkz})
and we make explicit the morphisms in these complexes,
which lead to polynomial expressions for the resultant via determinantal
formulas in the cases described in Lemma \ref{square}.
\par
We will consider a complex which is a ``coupling'' of the
Koszul complex $ {\bf K}^\bullet (t;f_1,\dots,f_n)$
associated with $f_1,\dots,f_n$ in degree $t$ and the dual of the
Koszul complex ${\bf K}^\bullet (t_n-t,f_1,\dots,f_n)^*$
associated with $f_1,\dots,f_n$ in degree $t_n -t.$
This complex arises from the spectral
sequence derived from the Koszul complex of
sheaves on ${\mathbb P}^{n-1}$ associated with $f_1,\dots,f_n$
twisted by ${\mathcal O_{{\mathbb P}^{n-1}}}(t).$ Here, ${\mathcal
O_{{\mathbb P}^{n-1}}}(t)$ denotes as usual the $t$-twist of the sheaf
of regular functions over the $(n-1)$-projective space ${\mathbb P}^{n-1}$
(see for instance \cite[p. 34]{gkz}). Its space of
global sections can be identified with the space of homogeneous
polynomials in $n$ variables of degree $t$.
We make explicit in terms of the Bezoutian
the map $\partial_0$ (see (\ref{pmap}) below)
produced by cohomology obstructions. In fact, the non-trivial
contribution is given in terms of the mapping
$\psi_{1,t}$ defined in (\ref{eq:psi1}).
\par
Precisely, let ${\bf K}^\bullet (t;f_1,\dots,f_n)$ denote the complex
\begin{equation}
\label{complex1}
\{0 \longrightarrow \, K(t)^{-n} \, \mathop{\longrightarrow}^{\delta_{-(n-1)}}
\, \dots \, \mathop{\longrightarrow}^{\delta_{-1}} \, K(t)^{-1}\,
\mathop{\longrightarrow}^{\delta_{0}} \, K(t)^{0}\,
\} ,
\end{equation}
where
$$K(t)^{-j} = \mathop{\oplus}_{i_1<\dots <i_j} S_{t - d_{i_1} - \dots - d_{i_j}}$$
and $\delta_{-j}$ are the standard Koszul morphisms.
Similarly, let ${\bf K}^\bullet (t_n-t;f_1,\dots,f_n)^*$ denote the complex
\begin{equation}
\label{complex2}
\{
K(t_n-t)^{0} \, \mathop{\longrightarrow}^{\delta^*_{0}}
\, K(t_n-t)^{1}\,
\mathop{\longrightarrow}^{\delta^*_{1}}
\, \dots \, \mathop{\longrightarrow}^{\delta^*_{n}} \, K(t_n-t)^{n}\,
\} ,
\end{equation}
where
$$K(t_n-t)^{j} = \mathop{\oplus}_{i_1<\dots <i_{j}} S^*_{t_n-t - d_{i_1} - \dots - d_{i_{j}}}$$
and $\delta^*_j$ are the duals of the standard Koszul morphisms.
Note that in fact $K(t_n-t)^n =0 $ for any $t \geq 0.$
\par
Now, define ${\bf C}^\bullet (t;f_1,\dots,f_n) $ to be the following coupled complex
\begin{equation}
\label{complex}
\{ 0 \longrightarrow \,
C^{-n} \, \mathop{\longrightarrow}^{\partial_{-(n-1)}}
\, \dots \, \mathop{\longrightarrow}^{\partial_{-1}} \, C^{-1}\,
\mathop{\longrightarrow}^{\partial_{0}} \, C^{0}\,
\mathop{\longrightarrow}^{\partial_{1}}
\, \dots \, \mathop{\longrightarrow}^{\partial_{n-1}} \, C^{n-1} \longrightarrow 0 \,
\} ,
\end{equation}
\noindent where
\begin{equation}
\begin{array}{lcll}
\label{def:complex}
C^{-j} & = & K(t)^{-j}, & \, j=2, \dots, n \\
C^j & = & K(t_n-t)^{j+1}, & \, j=1,\dots,n-1\\
C^{-1} & = & K(t_n-t)^0 \oplus K(t)^{-1}&{}\\
C^{0} & = & K(t)^0 \oplus K(t_n-t)^1 & {}
\end{array}
\end{equation}
\noindent and the morphisms are defined by
\begin{equation}
\begin{array}{lcll}
\label{pmap}
\partial_{-j} & = & \delta_{-j}, & \, j=2, \dots, n-1 \\
\partial_j & = & \delta^*_{j}, & \, j=2,\dots,n-1\\
\partial_{-1} & = & 0 \oplus \delta_{-1} & {}\\
\partial_{0} & = & ( \psi_{1,t}+\delta_0) \oplus \delta^*_0 & {}\\
\partial_1 & = & 0 +\delta^*_1 & {}
\end{array}
\end{equation}
More explicitly, $\partial_0( T, (g_1,\dots,g_n) )=
( \psi_{1,t}(T) +\delta_0( g_1,\dots,g_n), \delta^*_0 (T))$
and $\partial_1(h,( T_1,\dots,T_n)) = \delta^*_1( T_1,\dots,T_n).$
Observe that $\partial_0$ is precisely the mapping
we called $\tilde\Psi_t$ in the previous section.
As in the proof of Proposition \ref{util},
given an algebraically closed field $k,$ and
$a = (a_{\alpha_i})_{ |\alpha_i|= d_i, \ i=1,\ldots,n}, $
a point in $k^N,$
we denote by $f_1(a), \dots,$ $f_n(a)$
the polynomials $\in k[X]$ obtained from $f_1,\dots,f_n$ when the coefficients
are specialized to $a$.
For any particular choice of coefficients
in (\ref{complex}) we get a complex of $k$-vector spaces.
We will denote the specialized
modules and morphisms by $K(t)^{-1}(a), \delta_0(a),$ etc.
Let $D$ denote the determinant (cf. \cite[Appendix A]{gkz},
\cite{De})
of the complex of $A$-modules (\ref{complex})
with respect to the monomial bases
of the $A$-modules $C^\ell$.
This is an element in the field of fractions of $A$.
We now state the main result in this section.
\begin{theorem}
\label{rescth}
The complex (\ref{complex}) is generically exact, and for
each specialization of the coefficients it is exact if
and only if the resultant does not vanish. For any positive
integer $t$ we have that
\begin{equation}
\label{det}
D = {\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n),
\end{equation}
and moreover, $D$ equals the
greatest common divisor of all
maximal minors of a matrix representing the
$A$-module map $\partial_0$.
\end{theorem}
\begin{proof}
For $t>t_n,$ we get the Koszul complex in degree $t,$ and so the specialized
complex at a point $a \in k^N$ is exact if
and only if $f_1(a),\dots, f_n(a)$ is a regular sequence, i.e. if and only if
the resultant
does not vanish. The fact that the determinant of this complex equals the
resultant goes back to ideas of Cayley; for a proof see \cite{De}, \cite{gkz} or \cite{Ch3}.
\par
Suppose $0\leq t\leq t_n.$ Since $\delta_0 \circ \delta_{-1} = \delta^*_1 \circ
\delta^*_0 =0,$ it is easy to see that (\ref{complex}) is a complex.
\par
Set
$$U:= \{ a=(a_{\alpha_i}) \in k^N, i=1,\dots,n , |\alpha_i|=d_i : \det(M_t(a)) \not= 0 \}.$$
Note that the open set $U$ is non void because the vector of coefficients
of $\{X_1^{d_1}, \dots, X_n^{d_n}\}$ lies in $U,$ since in this case
$\det M_t= \pm 1.$ For any choice of homogeneous
polynomials $f_1(a),\dots, f_n(a) \in k[X]$ with respective degrees $d_1,\dots,d_n$ and
coefficients $a$ in $U$, the resultant does not
vanish by Theorem \ref{mainth} and then the specialized Koszul complexes
in (\ref{complex1}) and (\ref{complex2}) are exact.
\par
Then, the dimension
$\dim {\rm Im}(\delta_{0}(a))$ of the image of $\delta_{0}(a)$
equals $i(t) = \dim \langle f_1(a),\dots, f_n(a) \rangle_t.$
Similarly, $\dim {\rm Im}(\delta^*_0(a)) = i(t_n-t).$
Therefore,
\begin{eqnarray*}
\dim \ker(\partial_0(a)) \geq \dim {\rm Im}(\partial_{- 1}(a))=
\dim {\rm Im} (\delta_{-1}(a)) = \\
=\dim \ker(\delta_0(a)) = \dim K(t)^{-1}(a) -
i(t) .
\end{eqnarray*}
On the other side,
the fact that $M_t(a)$ is non singular of size $\rho(t)$ implies that
\begin{eqnarray*}
\dim \ker(\partial_0(a)) \leq \dim C^{-1}(a) - \rho(t) = \\ = \dim K(t)^{-1}(a) +
\dim K(t_n-t)^0(a) - \rho(t) = \\ = \dim K(t)^{-1} (a)+ \dim S_{t_n-t} (a)-\rho(t)= \\
= \dim K(t)^{-1}(a)- i(t).
\end{eqnarray*}
\par Therefore, $ \dim {\rm Im}(\partial_{-1}(a)) = \dim \ker(\partial_0(a))$
and the complex is exact at level $-1.$
\par In a similar way, we can check
that the complex is exact at level $0$, and so the full specialized complex
(\ref{complex}) is exact when the coefficients $a$ lie in $U.$
\par
In order to compute the determinant of the complex in
this case, we can make suitable choices of monomial
subsets in each term of the complex starting from the index sets
that define $M_t(a)$ to the left and to the right.
Then,
$$D(a) = \frac{\det M_t(a)} { p_1(a) \cdot p_2(a)},$$
where $p_1(a)$ (resp. $p_2(a)$) is a quotient of
products of minors of the morphisms on the left (resp. on the right).
\par
Taking into account (\ref{complex1}) and (\ref{complex2}), it follows from
\cite{cha} that
$$p_1 (a)= \det (E_t(a)), \, p_2(a)= \det(E_{t_n-t}(a)),$$
and so by Theorem \ref{mainth} we have
\begin{eqnarray*}
D(a) = {\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)(a) \frac{\det({{\mathbb E}}_t)(a)}{\det(E_t(a)) \det(E_{t_n-t}(a)) }= \\
= {\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)(a)
\end{eqnarray*}
for all families of homogeneous polynomials with coefficients $a$
in the dense open set $U$, and since $D$ and the resultant
are rational functions, this implies
(\ref{det}), as wanted. Moreover, it follows that the complex is exact
if and only if the resultant does not vanish.
\par
The fact that ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ is the greatest common divisor of all maximal
minors of the matrix representing $\partial_0$ has been proved in Corollary
\ref{remate}.
\end{proof}
We remark that from the statement of Theorem \ref{rescth} plus a close
look at the map at level $0$, it is not hard to deduce that for a given
specialization of $f_1,\dots,f_n$ in $k$ with non vanishing resultant, the
specialized polynomials $\Delta_{\gamma}(a), |\gamma| = t_{n}-t$
generate the quotient of the polynomial ring $k[X]$ by the ideal
$I(a) =\langle f_{1}(a), \dots, f_{n}(a) \rangle$ in degree $t$.
We can instead use the known dualizing properties of the
Bezoutian in case the polynomials define a regular sequence, to provide
an alternative proof of Theorem \ref{rescth}. This is a consequence of
Proposition \ref{duality} below. We refer to
\cite{jou}, \cite[Appendix F]{kunz}, \cite{ss} and \cite{ts} for the relation
between the Bezoutian and the residue (i.e. an associated trace)
and we simply recall the properties that we will use.
Assume ${\rm Res}_{d_{1},\dots ,d_{n}}\left(f_1(a),\ldots,f_n(a)\right)$ is different from zero.
This implies
that $f_1(a),\ldots,f_n(a)$ is a regular sequence and
their zero locus consists of the single point ${\bf 0}\in k^n.$
Then, there exists a dualizing $k$-linear operator
$$R_{0} : k[Y]/ \langle f_{1}(a)(Y), \dots,
f_{n}(a)(Y) \rangle \longrightarrow k,$$
called the {\it residue or trace operator\/}, which verifies
\begin{enumerate}
\item $h(X) = R_0\left(h(Y)\,\Delta(a)(X,Y)\right)$ in the
quotient ring $k[X] / I(a).$
\item If $h$ is homogeneous of degree $t$ with $t\neq t_n,$ then
$R_0(h)=0.$
\end{enumerate}
Then, for every polynomial $h(X)\in k[X]$ of degree $t,$ it
holds that
\begin{equation}
\label{dual}
h(X)
=\sum_{|\gamma|
=t_n-t}R_0\left(h(Y)\,Y^{\gamma}\right)\Delta_{\gamma}(a)(X) \quad
{\text { mod } } I(a),
\end{equation}
where $\Delta_{\gamma}(a)(X)$ denotes the coefficient of $Y^\gamma$
in $\Delta(a)(X,Y),$ as in (\ref{delta}).
As a consequence,
{\it the family $\{\Delta_{\gamma}(a)(X)\}_{|\gamma|=t_n-t},$ (resp.
$|\gamma|=t$) generates the graded piece of the quotient in degree
$t$ (resp. $t_n-t$)}. Moreover, it is easy to verify that for any
choice of polynomials $p_i(X,Y),\,q_i(X,Y)\in
k[X,Y], \ i=1,\ldots,n,$ the polynomial $\tilde{\Delta}_{a}(X,Y)$
defined by
\begin{equation}
\label{deltamod}
\tilde{\Delta}_{a}(X,Y):=\Delta(a)(X,Y) +\sum_{i=1}^n{
p_i(X,Y)\,f_i(a)(X) + q_i(X,Y)\,f_i(a)(Y)}
\end{equation}
has the same dualizing properties as $\Delta(a)(X,Y)$.
\par
We are ready to prove a kind of ``converse'' to Proposition \ref{util}.
\begin{proposition}
\label{duality}
If ${\rm Res}_{d_{1},\dots ,d_{n}}\left(f_1(a),\ldots,f_n(a)\right)\neq 0,$ it is possible
to extract a square submatrix $M'_t$ of $\tilde{\Psi}_t$
as in (\ref{prima}) such that $\det\left(M'_t(a)\right)\neq0.$
\end{proposition}
\begin{proof}
Since
$ f_1(a)(X),\ldots,f_n(a)(X)$ is a regular sequence in $k[X],$
the dimensions of the graded pieces of the quotient $k[X] /
I(a)$ in degrees $t$ and $t_n-t$ are $i(t)$ and $i(t_n-t)$ respectively.
\par
We can then choose blocks $F_t$ and $F_{t_n-t}$ as in (\ref{prima}) such that
$F_t(a)$ and $F_{t_n-t}(a)$ have maximal rank.
Suppose without loss of generality that the blocks $F_t$ and
$F_{t_n-t}$ have respectively the form
$\left[\begin{array}{c}
Q_t\\
R_t\end{array}\right]$ and $\left[\begin{array}{c}
Q_{t_n-t}\\R_{t_n-t} \end{array}\right]$,
where $Q_t(a)$ and $Q_{t_n-t}(a)$ are square invertible matrices of maximal
size.
We are going to prove that, with this choice, the matrix $M'_t(a)$ is
invertible.
\par
Our specialized matrix will look as follows:
$$
M'_t(a) = \left[
\begin{array}{cc}
\Delta_t(a) & {\begin{array}{c}Q_t(a)\\
R_t(a)\end{array}} \\
^{\bf t}Q_{t_n-t}(a) \ ^{\bf t} R_{t_n-t}(a)&0
\end{array}
\right].
$$
Applying linear operations in the rows and columns of $M'_t(a),$ it can
be transformed into:
$$\left[\begin{array}{ccc}
0&0&Q_t(a)\\
0&\tilde{\Delta}_{t,a}&R_t(a)\\
^{\bf t}Q_{t_n-t}(a) & ^{\bf t} R_{t_n-t}(a)&0
\end{array}
\right],
$$
where the block $[\tilde{\Delta}_{t,a}]$ is square and of size $H_d(t).$
\par
But it is easy to check that this $\tilde{\Delta}_{t,a}$ corresponds to the
components in degree $t$ of another Bezoutian
$\tilde{\Delta}_{a}(X,Y)$
(in the sense of (\ref{deltamod})).
This is due to the fact that each of the linear operations performed on
$M'_t(a),$ when applied to the
block $\Delta_{t,a},$ can be read as a polynomial combination of
$f_i(a)(X)$ and $f_i(a)(Y)$ applied to the bezoutian $\Delta(a)(X,Y).$
\par
Using the fact that the polynomials $\tilde{\Delta}_{\gamma,a}(X)$
read in the columns of $\tilde{\Delta}_{t,a}$
generate the quotient in degree $t$ and that they are as many as
its dimension, we deduce that they are a basis, and so
$$
\det \left(
\begin{array}{cc}
0&\tilde{\Delta}_t \\
^{\bf t}Q_{t_n-t}(a) & ^{\bf t} R_{t_n-t}(a)
\end{array}
\right)\neq0,
$$
which completes the proof of the claim.
\end{proof}
We could then avoid the consideration of the open set $U$ in the
proof of Theorem \ref{rescth}, and use Proposition \ref{duality}
to show directly that the complex is exact outside the zero locus
of the resultant. In fact, this is not surprising since
for all specializations such that the resultant is non zero,
the residue operator defines a natural duality between the $t$-graded
piece of the quotient of the ring of polynomials
with coefficients in $k$ by the ideal $I(a)$
and the $(t_n-t)$-graded piece of the quotient,
and we can read dual residue bases in the Bezoutian.
We characterize now those data $n, d_1,\dots,d_n$ for
which we get a determinantal formula.
\begin{lemma}
\label{square}
Suppose $d_1 \leq d_2 \leq \dots \leq d_n.$
The determinant of the resultant complex provides a determinantal
formula for the resultant ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ if and only if
the following inequality is verified
\begin{equation}
\label{squareq}
d_3 + \dots + d_n -n < d_1 + d_2 - 1.
\end{equation}
Moreover, when (\ref{squareq}) holds, there exists a determinantal
formula given by the resultant complex for each $t$ such that
\begin{equation}
\label{squaret}
d_3 + \dots + d_n -n< t <d_1 + d_2 .
\end{equation}
\end{lemma}
\begin{remark}
When all $d_i$ have a common value $d$, (\ref{squareq}) reads
$$(n-2) d < 2 d + n - 1,$$
which is true for any $d$ for $n\leq 4$, for $d=1,2,3$ in case $n=5$, for $d=1,2$ in
case $n=6$, and never happens for $n\geq 7$
unless $d=1,$ as we quoted in the introduction.
\end{remark}
\begin{proof}
The determinant of the resultant complex provides a determinantal
formula for ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ precisely when $C^{-2} = C^{1} =0.$
This is respectively equivalent to the inequalities
$$ t < d_1 + d_2$$
and
$$ t_n -t = d_1 + \dots + d_n - n - t < d_1 + d_2,$$
from which the lemma follows easily. We have decreased the
right hand side of (\ref{squareq}) by a unit in order to allow for
a natural number $t$ satisfying (\ref{squaret}).
\end{proof}
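For equal degrees the enumeration in the preceding remark is immediate to
reproduce:
\begin{verbatim}
# Pairs (n, d) with equal degrees d_i = d satisfying (n-2)d < 2d + n - 1,
# i.e. admitting a determinantal formula.
pairs = [(n, d) for n in range(2, 10) for d in range(1, 10)
         if (n - 2) * d < 2 * d + n - 1]
assert [d for (n, d) in pairs if n == 5] == [1, 2, 3]
assert [d for (n, d) in pairs if n == 6] == [1, 2]
assert all(d == 1 for (n, d) in pairs if n >= 7)
\end{verbatim}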
\begin{corollary}
\label{7}
For all $n \geq 7$ there exists a determinantal formula only if
$d_1=d_2=d_3=1$ and $ n-3 \leq d_4 + \dots + d_n < n.$
This forces either all $d_i$ to equal $1$, or all of them to equal $1$
except for one of them which equals $2$ or $3$, or all of them to equal $1$
except for two of them which equal $2$.
\end{corollary}
The proof of the corollary follows easily from
the inequality (\ref{squareq}). In any case, if a
determinantal formula exists, we have a determinantal
formula for $t= [t_n/2],$ as the following proposition shows.
\begin{proposition}
\label{small}
If a determinantal formula given by the resultant complex exists,
then $M_{[t_n/2]}$
is square and of the smallest possible size
$\rho(\left[\frac{t_n}{2}\right]).$
\end{proposition}
\begin{proof}
In order to prove that $M_{[t_n/2]}$ is square, we need to check
by Lemma \ref{square} that
\begin{equation}
\label{entera}
d_3 + \dots + d_n -n < \left[\frac{t_n}{2}\right] < d_1 + d_2 .
\end{equation}
If there exists a determinantal formula, then the inequality
(\ref{squareq}) holds, from which it is
straightforward to verify that
$$ d_3 + \dots + d_n -n < \frac{t_n}{2} < d_1 + d_2 .$$
To see that in fact (\ref{entera}) holds, it is enough to check
that
$$ d_3 + \dots + d_n -n +1/2 \not= \frac{t_n}{2} = \frac{d_1+\dots+
d_n-n}{2} .$$
But if the equality holds, we would have that
$d_3 + \dots + d_n = d_1 + d_2 + n-1,$ which is a contradiction.
According to Corollary \ref{min}, we also know that $M_{[t_n/2]}$
has the smallest possible size.
\end{proof}
\section{Dixon formulas}
We prove in this section that ``affine'' Dixon formulas can in fact
be recovered in this setting.
We first recall classical Dixon formulas to compute the resultant of
three bivariate affine polynomials of degree $d.$ We will make a slight
change of notation in what follows. The input affine polynomials (having
monomials of degree at most $d$ in two variables $(X_1,X_2)$) will be denoted
$f_1, f_2,f_3 $ and we will use capital letters $F_1,F_2,F_3$ to denote
the homogeneous polynomials in three variables given by their respective
homogenizations (with homogenizing variable $X_3$).
Dixon (cf. \cite{Dix}) proposed the following determinantal
formula to compute the resultant ${\rm Res}_{d,d ,d}(f_1,f_2,f_3) $
$={\rm Res}_{d,d ,d}(F_1,F_2,F_3)$:
Let ${\rm Bez}(X_1,X_2,Y_1,Y_2)$ denote the polynomial obtained by
dividing the following determinant by $(X_1-Y_1) (X_2-Y_2)$:
$$\det \left(
\begin{array}{ccc}
f_1(X_1,X_2) &f_2(X_1,X_2) &f_3(X_1,X_2)\\
f_1(Y_1,X_2) &f_2(Y_1,X_2) &f_3(Y_1,X_2)\\
f_1(Y_1,Y_2) &f_2(Y_1,Y_2) &f_3(Y_1,Y_2)
\end{array}
\right).
$$
Note that by performing row operations we have that
${\rm Bez}(X_1,X_2,Y_1,Y_2)$ equals
$$\det \left(
\begin{array}{ccc}
\Delta_{11}& \Delta_{21}&\Delta_{31} \\
\Delta_{12}& \Delta_{22}&\Delta_{32}\\
f_1(Y_1,Y_2) &f_2(Y_1,Y_2) &f_3(Y_1,Y_2)
\end{array}
\right),
$$
where $\Delta_{ij}$ are as in (\ref{deltaij}).
Write
$${\rm Bez}(X_1,X_2,Y_1,Y_2) = \sum_{|\beta| \leq 2d-2}
B_\beta(X_1,X_2) Y_1^{\beta_1} Y_2^{\beta_2}.$$
Set $A :={\mathbb Z}[a],$ where $a$ denotes one indeterminate
for each coefficient of $f_1,f_2,f_3.$
Let $S$ denote the free module over $A$ with
basis ${\mathcal B}$ given by all monomials in two variables of
degree less or equal than $d-2,$
which has an obvious isomorphism with the free
module $S'$ over $A$ with basis ${{\mathcal B}}' $ given by all monomials in
three variables of degree equal to $d-2.$
The monomial basis of all polynomials in two variables
of degree less or equal than $2d-2$ will be denoted by
${\mathcal C}.$
\par
Let $M$ be the square matrix of size $2 d^2-d$
whose columns are indexed by ${\mathcal C}$
and whose rows contain consecutively the
expansion in the basis ${\mathcal C}$ of $m \cdot f_1,$
of $m \cdot f_2,$ and of $m \cdot f_3,$ where $m$
runs in the three cases over ${\mathcal B}$, and finally, the expansion
in the basis ${\mathcal C}$ of all $B_\beta, \, |\beta| \leq d-1.$
Then, Dixon's formula says that
$$ {\rm Res}_{d,d ,d}(f_1,f_2,f_3) = \pm \det M.$$
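As a sanity check of this construction, the matrix $M$ can be assembled
directly for $d=2$ (three affine conics): its determinant vanishes when the
$f_i$ acquire a common affine root, and the specialization $f_1=X_1^2,\
f_2=X_2^2,\ f_3=1$ (whose homogenizations are $X_1^2, X_2^2, X_3^2$) gives
$\pm1$. A minimal sketch in sympy:
\begin{verbatim}
# Dixon's matrix M for d = 2 (three affine conics).
from sympy import symbols, Matrix, Poly, cancel, S

X1, X2, Y1, Y2 = symbols('X1 X2 Y1 Y2')

def dixon_det(f1, f2, f3, d=2):
    fs = (f1, f2, f3)
    D = Matrix([[f.subs({X1: a, X2: b}) for f in fs]
                for (a, b) in ((X1, X2), (Y1, X2), (Y1, Y2))])
    bez = cancel(D.det() / ((X1 - Y1)*(X2 - Y2)))      # Bez(X1,X2,Y1,Y2)
    B = dict(Poly(bez, Y1, Y2).terms())                # beta -> B_beta(X1,X2)
    rows = [Poly(f, X1, X2) for f in fs]               # rows m*f_i, here m = 1 only
    rows += [Poly(B.get((b1, b2), 0), X1, X2)          # rows B_beta, |beta| <= d-1
             for b1 in range(d) for b2 in range(d - b1)]
    cols = [(i, j) for i in range(2*d - 1) for j in range(2*d - 1 - i)]
    M = Matrix([[dict(r.terms()).get(c, 0) for c in cols] for r in rows])
    return M.det()

# the three conics share the root (1,1), so the resultant and det(M) vanish
assert dixon_det(X1**2 - X2, X1*X2 - 1, X1**2 + X2**2 - 2) == 0
# the specialization X1^2, X2^2, 1 gives determinant +-1
assert dixon_det(X1**2, X2**2, S.One) in (1, -1)
\end{verbatim}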
Here, $d_1=d_2=d_3=d$ and $n=3,$ so that (\ref{squareq}) holds
and by (\ref{squaret}) there is a determinantal formula for each
$t$ such that $ d-3 \, < \, t \, < 2d.$ So, one possible choice is $t= 2d-2.$
Then, $ t_3-t= d-1 <d, $ which implies $\langle F_1,F_2,F_3\rangle_{t_3-t} =0.$
Also, $t-d =d-2 <d,$
and therefore $S^{t,i} = S',$ for all $i=1,2,3.$
\par
Let $\Delta(X_1,
X_2,X_3,
Y_1,Y_2,Y_3) = \sum_{|\gamma| \leq 3d-3} \Delta_\gamma(X) Y^\gamma$
be the Bezoutian associated with the homogeneous polynomials $F_1,
F_2,F_3.$ We know that ${\rm Res}_{d,d ,d}(F_1,F_2,F_3) = \pm \det M_{2d-2}.$
In this case, the transposed matrix $M_{2d-2}^t$ is a square matrix of the same size as $M$,
and it is obvious that their $3 d (d-1)/2$ first rows coincide (if the columns are
ordered conveniently). According to
(\ref{eq:psi1}), the last $(d+1)d/2$ rows of $M_{2d-2}^t$ contain the
expansion in the basis ${{\mathcal B}}' $ of all
$\Delta_\gamma \, , \, |\gamma| = d-1.$
\begin{proposition}
\label{dixon} The ``affine'' matrix $M$ and the ``homogeneous''
matrix $M_{2d-2}^t$ coincide.
\end{proposition}
\begin{proof}
Denote by $P(X_1,X_2,X_3,Y_1,Y_2,Z)$ the homogeneous polynomial of
degree $3d-2$ in $6$ variables obtained by dividing the following
determinant by $(X_1-Y_1)(X_2-Y_2):$
$$
\det \left(
\begin{array}{ccc}
\Delta_{1,1}(F)& \Delta_{2,1}(F)&\Delta_{3,1}(F) \\
\Delta_{1,2}(F)& \Delta_{2,2}(F)&\Delta_{3,2}(F)\\
F_1(Y_1,Y_2,Z) &F_2(Y_1,Y_2,Z) &F_3(Y_1,Y_2,Z)
\end{array}
\right),
$$
where
$$\Delta_{i,1}(F):= F_i(X_1,X_2,X_3) - F_i(Y_1,Y_2,X_3) , \, i=1,2,3$$
and
$$\Delta_{i,2}(F) := F_i(Y_1,X_2,X_3) -F_i(Y_1,Y_2,X_3) , \, i=1,2,3.$$
It is easy to check that
\begin{equation}
\label{uno}
(X_3 -Y_3)\, \Delta(X,Y) = P(X_1,X_2,X_3,Y_1,Y_2, X_3) -
P( X_1,X_2,X_3,Y_1,Y_2, Y_3)
\end{equation}
and that
\begin{equation}
\label{dos}
P(X_1,X_2,1,Y_1,Y_2,1) = {\rm Bez}(X_1,X_2,Y_1,Y_2)
\end{equation}
We are looking for the elements in ${\rm Bez}(X_1,X_2,Y_1,Y_2)$ of degree
less than or equal to $d-1$ in the variables $Y_1, Y_2.$
But it is easy to check that every monomial of
$P( X_1,X_2,X_3,Y_1,Y_2, Y_3)$ has degree at least $d$ in the variables $Y_1,Y_2,Y_3.$ This, combined
with the
equality given in (\ref{uno}), implies that, for each $1\leq j\leq d-1:$
$$X_3\,\sum_{|\gamma|=j}\Delta_{\gamma}(X)Y^\gamma-Y_3\,
\sum_{|\gamma|=j-1}\Delta_{\gamma}(X)Y^\gamma $$
is equal to the piece of degree $j$ in the variables $Y_i$ of
the polynomial $P(X_1,X_2,X_3,Y_1,Y_2, Y_3).$
\par
Besides, this polynomial does not depend on $Y_3,$ so the following
formula holds for every pair $\gamma,\ \tilde{\gamma}$ such that
$\gamma=\tilde{\gamma}+(0,0,k), |\gamma|=j:$
\begin{equation}
\label{clave}
X_3^k\Delta_\gamma(X) = \Delta_{\tilde{\gamma}}(X).
\end{equation}
This allows us to compute $\Delta_{\gamma}(X)$ for every $|\gamma|=d-1,$
in terms of the homogenization of $B_{(\gamma_1,\gamma_2)}.$
{}From equation (\ref{clave}), the claim follows straightforwardly.
\end{proof}
We conclude that Dixon's formula can be
viewed as a particular case of the determinantal expressions that
we addressed. Moreover, Proposition \ref{dixon} can be extended to any
number of variables and all Dixon matrices as in \cite[\S 3.5]{EM}
can be recovered in degrees $t$ such that
$\psi^*_{2,t_n-t} =0,$ i.e. such that $t_n \geq t > t_n - \min \{d_1,\dots, d_n\}.$
As we have seen, all one can hope for in general is the explicit
quotient formula we give in Theorem \ref{mainth}. In fact, we have
the following consequence of Lemma \ref{square}:
\begin{lemma}
There exists a determinantal Dixon formula if and only if $n=2,$ or
$n=3$ and $d_1=d_2=d_3,$ i.e. in the case considered by Dixon.
\end{lemma}
\begin{proof}
Assume $d_1 \leq d_2 \leq \dots \leq d_n.$ If
inequality (\ref{squaret}) is verified for $t > t_n -d_1,$ we deduce
that
\begin{equation}
\label{squared}
(n-2) d_1 -n \leq
d_3 + \dots + d_n -n < d_1 -2,
\end{equation}
and so $(n-3) d_1 < n-2.$ This inequality cannot hold for any
natural number $d_1$ unless $n \leq 3.$ It is easy to check that for $n=2$
there exists a determinantal Dixon formula for any value of $d_1, d_2.$
In case $n=3$, (\ref{squared}) implies that $d_3 < d_1 +1.$ Then, $d_1=d_2=d_3,$
as claimed.
\end{proof}
\section{ Other known formulas and some extensions}
We can recognize other well known determinantal formulas for resultants
in this setting.
\subsection{Polynomials in one variable}
Let
$$f_1(x) = \sum_{j=0}^{d_1} a_j x^j \, \, , \, \,
f_2(x) = \sum_{j=0}^{d_2} b_j x^j $$ be generic univariate polynomials
(or their homogenizations in two variables) of degrees $d_1 \leq d_2.$
In this case, inequality (\ref{squaret}) is verified
for all $ t = 0, \dots, d_1 +d_2 -1$ and so we have a determinantal
formula for all such $t$. Here, $t_2 = d_1 + d_2 -2.$ When
$t= d_1+d_2 -1 = t_2 +1$ we have the classical Sylvester
formula.
Assume $d_1 = d_2= d$ and write
$$ \frac { f_1(x) f_2(y) - f_1(y) f_2(x)} {x-y} = \sum_{i,j=0}^{d-1}
c_{ij} x^i y^j.$$
Then, the classical {\it B\'ezout \/} formula for the
resultant between $f_1$ and $f_2$ says that
$${\rm Res}_{d,d}(f_1,f_2) = \det (c_{ij}).$$
It is easy to see that we obtain precisely this formulation for
$t= d-1.$ For other values of $t$ we get formulas
interpolating between Sylvester and B\'ezout as in
\cite[Ch. 12]{gkz}, even in case $d_1 \neq d_2$. It is
easy to check that the smallest possible matrix has size $d_2.$
Suppose for example that $d_1 =1, d_2 = 2.$ In this case,
$\left[ t_2/2\right] = \left[ 1/2 \right] = 0,$ and $M_0$ is
a $2\times 2 $ matrix representing a map from
$S_1^*$ to $S_0 \oplus S_0^*,$ whose determinant
equals the resultant
$${\rm Res}_{1,2}(f_1,f_2) = a_1^2 b_0 - a_0 a_1 b_1 + b_2 a_0^2.$$
If we write $f_1(x) = 0 x^2 + a_1 x + a_0$ and we use the classical
Bezout formula for $d=2,$ we would also get a $2 \times 2$
matrix but whose determinant equals $b_2 \cdot {\rm Res}_{1,2}(f_1,f_2).$
The exponent $1$ in $b_2$ is precisely the difference $d_2 - d_1.$
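These formulas are easy to experiment with. For instance, the B\'ezout matrix
$(c_{ij})$ of two generic quadratics can be compared with the resultant
computed from the Sylvester matrix; with the conventions above the two
determinants agree up to a sign.
\begin{verbatim}
# Bezout matrix (c_ij) of two generic quadratics versus the resultant.
from sympy import symbols, Matrix, Poly, cancel, resultant, expand

x, y = symbols('x y')
a = symbols('a0:3')
b = symbols('b0:3')
d = 2
f1 = sum(a[j]*x**j for j in range(d + 1))
f2 = sum(b[j]*x**j for j in range(d + 1))

bez = cancel((f1*f2.subs(x, y) - f1.subs(x, y)*f2) / (x - y))
c = dict(Poly(bez, x, y).terms())
C = Matrix(d, d, lambda i, j: c.get((i, j), 0))
r = resultant(f1, f2, x)
assert expand(C.det() - r) == 0 or expand(C.det() + r) == 0
\end{verbatim}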
\subsection{ Sylvester formula for three ternary quadrics}
Suppose that $n=3, \ d_1=d_2=d_3=2$ and $2 \neq 0$ in $k.$ Let $J$ denote
the Jacobian determinant associated with the homogeneous
polynomials $f_1,f_2,f_3.$ A beautiful classical formula due
to Sylvester says that the resultant ${\rm Res}_{2,2 ,2}(f_1,f_2,f_3)$
can be obtained as $1/512$ times the determinant of the
$6 \times 6 $ matrix whose columns are indexed by the
monomials in $3$ variables of degree $2$ and whose rows
correspond to the expansion in this monomial basis of
$f_1, f_2, f_3, \frac{\partial J}{\partial X_1}, \frac{\partial J}{\partial X_2}$ and
$\frac{\partial J}{\partial X_3}.$ In this case, $\left[ t_3/2\right]=
\left[ 3/2\right] = 1,$ and by Lemma \ref{square} we have a determinantal formula in this degree
since $ 2 - 3 < 1 < 4.$ From Euler equations
$$ 2 f_i(X) = \sum_{j=1}^3 X_j \frac{\partial f_i(X)}{\partial X_j},$$
we can write
$$ \begin{array}{ccl}
2\left(f_i(X)-f_i(Y)\right)&=&\sum_{j=1}^3 {\left(
X_j \frac{\partial f_i(X)}{\partial X_j} - Y_j \frac{\partial f_i(Y)}
{\partial Y_j}\right)}\\
&=& \sum_{j=1}^3 {(X_j-Y_j)\frac{\partial f_i(X)}{\partial X_j}+
Y_j\left(\frac{\partial f_i(X)}{\partial X_j}-\frac{\partial f_i(Y)}
{\partial Y_j}\right)}\\
&=& \sum_{j=1}^3 \left((X_j-Y_j)\,\frac{\partial f_i(X)}{\partial X_j}+
Y_j\sum_{l=1}^3 {\frac{\partial^2 f_i(X)}{\partial X_j\partial X_l}
(X_l-Y_l)}\right).\\
\end{array}$$
Because of (\ref{deltamod}), we can compute the Bezoutian using
$$\Delta_{ij}(X,Y):=\frac12\left( \frac{\partial f_i(X)}{\partial X_j} +
\sum_{l=1}^3{\frac{\partial^2 f_i(X)}{\partial X_l\partial X_j}} Y_l\right).$$
Using this formulation, it is not difficult to see that we can recover
Sylvester formula from the equality $ {\rm Res}_{2,2 ,2}(f_1,f_2,f_3) =
\det M_1.$
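This classical formula can be checked by machine; the following sketch builds
the $6\times 6$ matrix just described and verifies it on the specialization
$f_i=X_i^2$ (where the resultant equals $1$) and on three conics through a
common point.
\begin{verbatim}
# Sylvester's 6x6 matrix for three ternary quadrics (char(k) != 2).
from sympy import symbols, Matrix, Poly

X1, X2, X3 = symbols('X1 X2 X3')
MONS = [(2,0,0), (1,1,0), (1,0,1), (0,2,0), (0,1,1), (0,0,2)]

def sylvester_det(f1, f2, f3):
    J = Matrix([[f.diff(v) for v in (X1, X2, X3)]
                for f in (f1, f2, f3)]).det()
    rows = [f1, f2, f3, J.diff(X1), J.diff(X2), J.diff(X3)]
    M = Matrix([[dict(Poly(r, X1, X2, X3).terms()).get(m, 0) for m in MONS]
                for r in rows])
    return M.det()

# f_i = X_i^2: the resultant is 1 and the determinant is 512 up to sign
assert sylvester_det(X1**2, X2**2, X3**2) in (512, -512)
# three conics through the common point (1:1:1): the determinant must vanish
assert sylvester_det(X1**2 - X2*X3, X2**2 - X1*X3, X1*X2 - X3**2) == 0
\end{verbatim}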
\subsection{Jacobian formulations}
When $t=t_n,$ one has $H_d(t)=1,$ and via the canonical identification of $S_0^*$
with $A,$ the complex (\ref{def:complex}) reduces to the following modified Koszul Complex:
\begin{equation}
\label{jac1}
0 \longrightarrow \, K(t)^{-n} \,
\mathop{\longrightarrow}^{\delta_{-(n-1)}}
\, \dots \, \mathop{\longrightarrow}^{\delta_{-1}} \, A\oplus K(t)^{-1}\,
\mathop{\longrightarrow}^{\delta_{0}} \, K(t)^{0}\,{\longrightarrow}0,
\end{equation}
where $\delta_0$ is the following map:
$$
\begin{array}{ccccc}
A& \oplus&
S_{t_n-d_1}\oplus \cdots \oplus S_{t_n-d_n}
&\rightarrow & S_{t_n}
\\
(\lambda&,&g_1,\ldots,g_n)&\mapsto&\lambda\,\Delta_0+\sum_{i=1}^n g_i\,f_i,\\
\end{array}
$$
and $\Delta_0:=\Delta(X,0).$
As a corollary of Theorem \ref{rescth} we get that, for every specialization of the coefficients,
$\Delta_0$ is a non-zero element of the quotient if the resultant does
not vanish.
\par
Assume that the characteristic of $k$ does not divide the product $d_1\dots d_n.$
It is a well-known fact that the Jacobian determinant $J$ of the sequence $(f_1,\dots,f_n)$ is another
element of degree $t_n$ which is non-zero in the quotient whenever
the resultant does not vanish (cf. for instance \cite{ss}).
In fact, one can easily check that
\begin{equation}
\label{id1}
J \ =\ d_1\dots d_n \, \Delta_0 \ \mod \left<f_1,\ldots,f_n\right>.
\end{equation}
In \cite{CDS}, the same complex
is considered in a more general toric setting, but using $J$ instead of $\Delta_0$ .
Because of (\ref{id1}), we can recover their results in the homogeneous case.
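The identity (\ref{id1}) is easy to test on small examples. In the following
sketch for $n=2$ the Bezoutian is written as the determinant of divided
differences; any representative in the sense of (\ref{deltamod}) has the same
specialization $\Delta_0$ modulo $\left<f_1,f_2\right>$ (since the $f_i$ are
homogeneous of positive degree), so the check is insensitive to this choice.
\begin{verbatim}
# Check (id1) for n = 2 on a concrete pair.
from sympy import symbols, Matrix, cancel, groebner, expand

X1, X2, Y1, Y2 = symbols('X1 X2 Y1 Y2')
f1, f2 = X1**2 + X1*X2, X2**3                   # d1 = 2, d2 = 3
d1, d2 = 2, 3

def dd(f, j):                                   # divided difference in variable j
    if j == 1:
        return cancel((f - f.subs(X1, Y1)) / (X1 - Y1))
    return cancel((f.subs(X1, Y1) - f.subs([(X1, Y1), (X2, Y2)])) / (X2 - Y2))

Delta = Matrix([[dd(f1, 1), dd(f2, 1)],
                [dd(f1, 2), dd(f2, 2)]]).det()  # Bezoutian Delta(X, Y)
Delta0 = Delta.subs([(Y1, 0), (Y2, 0)])         # Delta_0 = Delta(X, 0)
J = Matrix([[f.diff(v) for v in (X1, X2)] for f in (f1, f2)]).det()

G = groebner([f1, f2], X1, X2, order='grevlex')
assert G.contains(expand(J - d1 * d2 * Delta0)) # J = d1*d2*Delta_0 mod <f1, f2>
\end{verbatim}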
\begin{theorem}
Consider the modified complex (\ref{jac1}) with $J$ instead of $\Delta_0.$ Then, for every
specialization of the coefficients,
the complex is exact if and only if the resultant does not vanish. Moreover, the determinant
of the complex equals $d_1\dots d_n \, {\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n).$
\end{theorem}
We can also replace $\Delta_0$ by $J$ in Macaulay's Formula (Theorem \ref{mainth}),
and have the following result:
\begin{theorem}
Consider the square submatrix $\tilde M_{t_n}$ which is extracted from the matrix of
$\delta_0$ in the monomial bases, choosing the same rows and columns of $M_{t_n}.$
Then, $\det(\tilde M_{t_n})\neq0,$ and we have the following formula
\`a la Macaulay:
$$d_1\dots d_n\,{\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n) = \frac{\det(\tilde M_{t_n})}{\det(E_{t_n})}.$$
\end{theorem}
We end the paper by addressing two natural questions that arise:
\subsection{Different choices of monomial bases}
Following Macaulay's original ideas, one can show that
there is some flexibility in the choice of the monomial bases
defining $S^{t,i}$ in order to get other non-zero minors
of $\tilde\Psi_t$, i.e.\ different square matrices
$M'_t$ whose determinants are non-zero multiples
of ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ with different extraneous factors
$\det(E'_t), \det(E'_{t_n-t})$, for
appropriate square submatrices $E'_t, E'_{t_n-t}$ of $M'_t.$
Besides the obvious choices coming from a permutation
in the indices of the variables, other choices can be made as follows.
\par
For any $i=1,\dots,n,$
set $\hat{d}_i:= (d_1, \dots,
d_{i-1},d_{i+1},\dots,d_n)$ and define $H_{\hat{d}_i}(t)$ for any positive
integer $t$ by the equality
$$\frac{\prod_{j\neq i}\left( 1-Y^{d_{j}}\right) }{\left( 1-Y\right) ^{n-1}}=
\sum_{t=0}^\infty H_{\hat{d}_i}(t).Y^t.$$
For each $t\in{\mathbb N}_0,$ set also
$\Lambda_t:=\{X^\gamma \in S_t: \ \gamma_j<d_j, \, j=1,\dots,n\}.$
We then have the following result:
\begin{proposition}
Let $M'_t$ be a square submatrix of $\tilde\Psi_t$ of size $\rho(t).$
Denote its blocks as in (\ref{prima}).
Suppose that, for each $i=1,\dots,n$, the block $F_t$
has {\it exactly} $H_{\hat{d}_i}(t-d_i)$ of its columns corresponding to $f_i$
in common with the matrix $D_t$ defined in (\ref{prev})
and, also, the block $F_{t_n-t}$ shares exactly $H_{\hat{d}_i}(t_n-t-d_i)$
columns
corresponding to $f_i$ with $D_{t_n-t}.$ Then, if $\det\left(
M'_t\right)$ is not identically zero, the
resultant ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$ can be computed as the ratio $\frac{\det( M'_t)}
{\det({\mathbb E}_t')},$ where ${\mathbb E}_t'$ is made by joining two submatrices
$E'_t$ of $F_t$ and $E'_{t_n-t}$ of
$F_{t_n-t}.$ These submatrices are obtained by omitting the columns in common with
$D_t$ (resp. $D_{t_n-t}$) and the rows indexed by all common monomials in
$D_t$ (resp. $D_{t_n-t}$) and all monomials in
$\Lambda_t$ (resp. $\Lambda_{t_n-t}$).
\end{proposition}
We omit the proof, which is rather technical and based on
\cite[6a]{Mac}, \cite{Ch1} and \cite{cha}.
\subsection{Zeroes at infinity}
Given a non-homogeneous system of polynomial equations $\tilde{f}_1,
\dots,\tilde{f}_n$ in $n-1$ variables with respective degrees $d_1,\dots,d_n,$ we can
homogenize these polynomials and consider the resultant ${\rm Res}_{d_{1},\dots ,d_{n}}(f_1,\dots,f_n)$
associated with their respective homogenizations $f_1,\dots,f_n.$ However,
this resultant may vanish due to common zeros of $f_1,\dots,f_n$ at infinity
in projective space
${\mathbb P}^{n-1}$ even when there is no affine common root to
$\tilde{f}_1= \dots =\tilde{f}_n =0.$
We can in this case extend Canny's construction \cite{C2}
of the Generalised Characteristic Polynomial (GCP) for classical
Macaulay's matrices to the matrices $M_t$ for any natural number
$t.$ In fact,
when we specialize $f_i$ to $ X_i^{d_i}$ for
all $i=1,\dots,n,$ the Bezoutian is given by
$$ \sum_{j_1=0}^{d_1-1} \cdots \sum_{j_n=0}^{d_n-1} X_1^{d_1-1-j_1}
\cdots X_n^{d_n-1-j_n} \ Y_1^{j_1} \cdots Y_n^{j_n},$$
and the specialized matrix $M_{t}(e)$ of $M_t$ has a
single non zero entry on each row and column
which is equal to $1,$ so that $\det(M_t(e)) = \pm 1.$ We order the
columns in such a way that $M_t(e)$ is the identity matrix.
With this convention, define the polynomial $C_t(s)$
by
$$C_t(s) := \frac { {\mbox {Charpoly }} (M_t) (s)} { {\mbox {Charpoly }} ({\mathbb E}_t) (s)},$$
where $s$ denotes a new variable and Charpoly means characteristic
polynomial.
We then have by the previous observation that
$$C_t(s) = {\rm Res}_{d_{1},\dots ,d_{n}}(f_1 - s \, X_1^{d_1}, \dots, f_n - s \, X_n^{d_n}).$$
Moreover, this implies that $C_t(s)$ coincides with Canny's GCP
$C(s),$ but involves matrices of smaller size.
Canny's considerations on how to compute more efficiently
the GCP also hold in this case.
Of course, it is in general much better to find a way to
construct ``tailored'' residual
resultants for polynomials with a generic structure which is not
dense, as in the case of sparse polynomial systems (\cite{EM},
\cite{gkz}).
\noindent {\bf Acknowledgements:}
We are grateful to J. Fern\'andez Bonder, G. Massaccesi and J.M. Rojas for
helpful suggestions. We are also grateful to David Cox for his
thorough reading of the manuscript.
\noindent{\bf Authors' addresses:}
\noindent{Departamento de Matem\'atica, F.C.E y N., UBA,
(1428) Buenos Aires, Argentina. }
\noindent{{\tt [email protected] \hskip4.5truecm [email protected]}}
\end{document} |
\begin{document}
\title{Graph kernels encoding features of all subgraphs \ by quantum superposition}
\begin{abstract}
Graph kernels are often used in bioinformatics and network applications to measure the similarity
between graphs; therefore, they may be used to construct efficient graph classifiers.
Many graph kernels have been developed thus far, but to the best of our knowledge there is no
existing graph kernel that considers all subgraphs to measure similarity.
We propose a novel graph kernel that applies a quantum computer to measure the graph similarity taking
all subgraphs into account by fully exploiting the power of quantum superposition to encode every subgraph
into a feature.
For the construction of the quantum kernel, we develop an efficient protocol that removes the index
information of subgraphs encoded in the quantum state.
We also prove that the quantum computer requires less query complexity to construct the feature
vector than the classical sampler used to approximate the same vector.
A detailed numerical simulation of a bioinformatics problem is presented to demonstrate that, in
many cases, the proposed quantum kernel achieves better classification accuracy than existing
graph kernels.
\end{abstract}
\begin{multicols}{2}
\section*{Introduction}
An effective measure of the similarity between graphs is necessary in several science and
engineering fields, such as bioinformatics, chemistry, and social networking \cite{Vishwanathan}.
In machine learning, this measure is called the graph kernel, and it can be used to construct
a classifier for graph data \cite{BorgwardtProteinPred}.
In particular, a kernel in which all subgraphs are fully encoded is desirable, because it can access
the complete structural information of the graph.
However, constructing such a kernel is known to be a nondeterministic polynomial time (NP)-hard
problem \cite{Gartner}.
Alternatively, a kernel that encodes partial features of {\it all} subgraphs may be used, but to
our best knowledge, this type of method has not been developed thus far.
Previous studies have instead focused on using different features originating from the target
graphs, such as random walks \cite{Gartner, Kashima, FastVishwanathan}, graphlet sampling
\cite{N-Shervashidze}, and shortest paths \cite{Borgwardt}.
A quantum computer may be applied to construct a graph kernel that covers all subgraphs, because
of its strong expressive power, which has been demonstrated in the quantum machine learning
scenario \cite{QMLReview,Ciliberto_2018}.
More specifically, the exponentially large Hilbert space of quantum states may serve as an appropriate
feature space where the kernel is induced \cite{QMLinFeatureHilbert, QSVM,schuld2021quantum}.
This kernel can then be further utilized for machine learning.
We now have two approaches: the implicit (hybrid) approach, in which the quantum computer computes
the kernel and the classical computer uses it for machine learning, and the explicit approach, in which
both the kernel computation and the machine learning part are executed by the quantum computer.
In fact, there exist a few proposals for the quantum computational approach to construct graph kernels.
For example, Ref. \cite{GBS-kernel} proposed the Gaussian Boson sampler, which estimates feature
vectors by sampling the number of perfect matchings in the set of subgraphs.
Another method used a quantum walk \cite{QWalkCont, QWalkDisc}.
However, there is no existing graph kernel that operates on a quantum circuit to design the features
obtained from all subgraphs.
In this study, we propose a quantum computing method to generate a graph kernel that
extracts important features from all subgraphs.
The point of this method is that the features of an exponentially large number of subgraphs can be
efficiently embedded into a quantum state in a Hilbert space using quantum computing.
Note that a naive procedure immediately runs into a difficulty: the resulting quantum state
contains an index component, which may severely decrease the value of the kernel.
Forgetting the index component, which we simply refer to as \textit{removing} it, is difficult
in general, as argued in~\cite{Aharonov-SZKcomp}; nonetheless, we propose a protocol that achieves
this goal with a polynomial number of operations (i.e., query complexity) under a certain
condition, which is fortunately satisfied by the features used in typical bioinformatics problems.
We then provide some concrete protocols to further compute the target kernel and discuss their
query complexity.
Also, we prove that they require fewer operations to generate the feature vector than a classical
sampler used to approximate the same vector.
Hence, up to the difference in the notions of complexity being compared, the proposed protocols
offer a quantum advantage.
Lastly, we use the above-mentioned typical bioinformatics problem to investigate whether the
classifier based on the proposed quantum kernel achieves a higher classification accuracy than
existing graph classifiers.
\section*{Results}
\subsection*{Algorithm for graph kernel computation}
We consider a graph characterized by the pair \(G=(V,E)\), where \(V=\{v_1,v_2,\cdots,v_n\}\) is
an ordered set of \(n\) vertices, and \(E\subseteq V\times V\) is a set of undirected edges.
Hereafter, we use the notation \(n=|V|\).
Also, let \(d\) be the maximum degree of the graph \(G\).
In this study, \(G\) is assumed to be a simple undirected graph that does not contain self-loops
or multiple edges.
The first step of our algorithm is to encode the graph information of $G$ onto a quantum state
defined on the composite Hilbert space ${\mathcal H}_{\rm index}\otimes {\mathcal H}_{\rm feature}$.
The index space ${\mathcal H}_{\rm index}$ is composed of $n$ qubits, which identifies a subgraph
characterized by a set of vertices represented by the binary sequence $x$ of length \(n\).
The feature space ${\mathcal H}_{\rm feature}$ is composed of $m$ qubits, each state of which
represents the feature information of a chosen subgraph $x$.
The value of $m$ depends on what feature is used.
In this study, we consider the case $m=O(\log n)$, where the feature register encodes the numbers of
vertices, edges, and vertices of degree 1, 2, and 3;
refer to the Toy Example section for a concrete example.
Now, we assume an oracle operator that computes the feature information $E(G,x)$ of the subgraph
identified by the index $x$ and generates the corresponding feature state
$\ket{E(G,x)}$.
Then, using the superposition principle of quantum mechanics, we can generate the quantum
state containing the features of {\it all} subgraphs of a graph \(G\) as follows:
\begin{align}
\label{eq:g_with_index}
\ket{\bar{G}}:=\sum_{x\in\{0,1\}^n} \ket{x}\ket{E(G,x)},
\end{align}
where the normalized coefficient is omitted to simplify the notation.
Next, we aim to compute the similarity of two graphs $G$ and $G'$.
For this purpose, one might use the inner product of $\ket{\bar{G}}$ and $\ket{\bar{G}'}$.
However, when the two graphs share a feature $E(G,x_1) = E(G',x_2)$ at different
indices $x_1\neq x_2$, this pair should still contribute to the similarity of $G$ and $G'$, whereas the inner
product of $\ket{x_1}\ket{E(G,x_1)}$ and $\ket{x_2}\ket{E(G',x_2)}$ is zero.
Therefore, what we require is the state
\begin{align}
\label{eq:g_without_index}
\ket{G}:=\sum_{x\in\{0,1\}^n}\ket{E(G,x)},
\end{align}
instead of Eq. \eqref{eq:g_with_index}.
However, an exponential number of operations is generally necessary to remove
the index state \cite{Aharonov-SZKcomp}.
The first contribution of our study is that our algorithm needs only a polynomial number of operations to obtain
Eq.~\eqref{eq:g_without_index} from Eq.~\eqref{eq:g_with_index} under a condition that
is satisfied by features useful for many graph classification problems.
\begin{figurehere}
\begin{align}
\Qcircuit @C=1em @R=.7em {
\lstick{\ket{0^n}} & {/} \qw & \multigate{1}{\textrm{Oracle}} & \gate{H^{\otimes n}} & \meter & \qw \\
\lstick{\ket{0..}} & {/} \qw & \ghost{\textrm{Oracle}} & \qw & \qw & \qw
}
\end{align}
\caption{Quantum circuit used to prepare the state \eqref{eq:g_without_index}.}
\label{qc:init-index-reg}
\end{figurehere} \noindent
\\
The algorithm to remove the index state is described in terms of the general discrete
function \(f\) as follows (see also Fig.~\ref{qc:init-index-reg}):
The state generated via the oracle operation is rewritten as
\begin{equation}
\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n} \ket{x}\ket{f(x)}
= \frac{1}{\sqrt{2^n}}\sum_{k=1}^a \sum_{x\in X_k} \ket{x}\ket{y_k},
\end{equation}
where \(X_k=\{x \, |\, f(x)=y_k,x\in\{0,1\}^n\}\).
Also, \(a=|Y|\), where \(Y\) is the range of \(f\), i.e., \(Y=\{y_k\}_{k=1}^a\).
Then, we apply \(H^{\otimes n} \otimes I \) to obtain
\begin{equation}
\label{eq:has-other-terms}
\frac{1}{2^n}\ket{0^n}\left(\sum_{k=1}^a |X_k|\ket{y_k}\right)+\mathrm{other\ terms},
\end{equation}
where the ``other terms'' contain all quantum states with index states other than $\ket{0^n}$.
We now make a measurement in the computational basis of the index state;
then, the post-selected feature state obtained when the measurement result is \(0^n\) is given by
\begin{align}
\label{eq:thm1-final-state}
\frac{1}{\sqrt{\sum_{k=1}^a |X_k|^2}} \sum_{x\in\{0,1\}^n} \ket{f(x)},
\end{align}
which is our target state.
The probability to successfully obtain this state is
\begin{equation}
\label{eq:prob-to-init-ir}
\Pr(0^n)=\frac{1}{2^{2n}}\sum_{k=1}^a |X_k|^2.
\end{equation}
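For concreteness, the following minimal NumPy sketch (not part of the quantum protocol itself; the feature map $f(x)=$ Hamming weight of $x$ and the variable names are illustrative choices of ours) simulates this post-selection step on a classical state vector and reproduces the success probability \eqref{eq:prob-to-init-ir}.
\begin{verbatim}
import numpy as np
from itertools import product

n = 3
f = lambda bits: sum(bits)            # illustrative encoding: Hamming weight of x
xs = list(product((0, 1), repeat=n))
ys = sorted({f(x) for x in xs})       # range Y of f, a = |Y|

# |psi> = (1/sqrt(2^n)) sum_x |x>|f(x)>, stored as a (2^n x |Y|) amplitude array
psi = np.zeros((2**n, len(ys)))
for i, x in enumerate(xs):
    psi[i, ys.index(f(x))] = 1.0 / np.sqrt(2**n)

H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Hn = H1
for _ in range(n - 1):                # H^{\otimes n} on the index register
    Hn = np.kron(Hn, H1)
psi = Hn @ psi

prob0 = float(np.sum(psi[0] ** 2))    # probability of measuring 0^n on the index
feature = psi[0] / np.sqrt(prob0)     # post-selected feature state

print(prob0)    # = sum_k |X_k|^2 / 2^{2n} = (1+9+9+1)/64 = 0.3125 >= 1/a = 0.25
print(feature)  # amplitudes proportional to |X_k| = (1, 3, 3, 1)
\end{verbatim}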
To evaluate the efficiency of the proposed algorithm, we derive the minimum of the probability
\eqref{eq:prob-to-init-ir} with respect to the family of sets \(X=\{X_1,\cdots,X_a\}\) under the
constraint $\sum_{k=1}^a |X_k|-2^n=0$.
Hence the problem is to maximize
\begin{equation}
L(X,\lambda)=-\sum_{k=1}^a |X_k|^2 - \lambda\left(\sum_{k=1}^a |X_k|-2^n\right),
\end{equation}
where $\lambda$ is the Lagrange multiplier.
Then, \(|X_k|=2^n/a\) maximizes \(L(X,\lambda)\); thus, Eq. \eqref{eq:prob-to-init-ir} has the
following lower bound:
\begin{equation}
\label{eq:prob-lower-bound}
\Pr(0^n)=\frac{1}{2^{2n}}\sum_{k=1}^a |X_k|^2
\geq \frac{1}{2^{2n}}a\left(\frac{2^n}{a}\right)^2=a^{-1}.
\end{equation}
The above result is summarized as follows:
\begin{theorem} \label{thm:rir}
The quantum circuit depicted in Fig.~\ref{qc:init-index-reg} generates the quantum state
\eqref{eq:thm1-final-state} with a probability of at least \(a^{-1}\).
\end{theorem}
Therefore, the quantum state \eqref{eq:thm1-final-state} can be generated efficiently if $a=|Y|$
grows at most polynomially in $n$. This desirable assumption holds
in the encoding function $E(G,x)$ that is used in the bioinformatics problem analyzed later in this paper.
For a general statement of this fact, we introduce the following constraint on the range of
the function:
the set
\begin{align}
X_v=\{x \, | \, f(x)\in Y_v,x\in\{0,1\}^n\}, ~ n\in\mathbb{N}
\end{align}
and \(Y_v\) specified by
\begin{align}
\bigcup_{v=0}^n Y_v=Y, ~~
\bigcup_{v=0}^n \bigcup_{\substack{v'\neq v,\\v'\in[0,n]}}\left(Y_v\cap Y_{v'}\right)=\emptyset,
\end{align}
are assumed to satisfy
\begin{align} \label{eq:v_range_constraint}
|X_v|=\binom{n}{v}, ~ |Y_v|=\begin{cases}
1 & (v=0) \\
{\rm Pol}(v^{c-1}) & ({\rm otherwise})
\end{cases},
\end{align}
where ${\rm Pol}(v^{c-1})$ denotes a polynomial in $v$ of degree at most $c-1$.
These conditions lead to
\begin{equation}
\label{eq:order of |Y|}
a=|Y|=\sum_{v=0}^n {\rm Pol}(v^{c-1})= O(n^c).
\end{equation}
Then, Theorem \ref{thm:rir} can be further refined as follows
(the proof is given in the Supplementary Information):
\begin{theorem} \label{thm:rir-v-range}
Given the condition \eqref{eq:v_range_constraint}, the algorithm depicted in Fig.~\ref{qc:init-index-reg}
generates the state \eqref{eq:thm1-final-state} with a probability of at least \(\Omega(\sqrt{n}/n^c)\).
\end{theorem}
In what follows, we assume that the encoding function $E(G, x)$ satisfies the condition
\eqref{eq:v_range_constraint}; then the desired state transformation \eqref{eq:g_with_index}
$\to$ \eqref{eq:g_without_index} requires only an \(O(n^c/\sqrt{n})\) mean query complexity.
As a consequence, we are now able to effectively compute the inner product
\begin{align}
\label{eq:kernel_def}
\braket{G|G'},
\end{align}
as the similarity measure of the two graphs $G$ and $G'$
(note that both $\ket{G}$ and $\ket{G'}$ are real vectors).
The task of computing \eqref{eq:kernel_def} is typically done via the swap test \cite{swaptest}.
A diagram of the post-selection process from $\ket{0}\ket{\bar{G}}\ket{\bar{G}'}$
to $\ket{0}\ket{G}\ket{G'}$ is depicted in Fig. \ref{qc:swap-test}.
The total mean query complexity required to prepare both $\ket{G}$ and $\ket{G'}$ and
subsequently apply the swap test to compute the inner product \eqref{eq:kernel_def} is given by
\begin{equation}
\label{simple complexity for inner}
O(n^c/\sqrt{n}) + O(n'\mbox{}^c/\sqrt{n'}) = O(n^c/\sqrt{n}),
\end{equation}
where $n$ and $n'$ are assumed to be of the same order.
Note that Eq.~\eqref{simple complexity for inner} contains the constant overhead required to repeat
the swap test circuit to compute the inner product with a fixed approximation error.
The resulting inner product computed by the swap test is represented by
\begin{align}
\label{eq:kernel_def_nff}
K_{\rm BH}(G, G')=k_{\rm BH}(G, G') f_G^\top f_{G'},
\end{align}
where $f_G=[|X_1|, \ldots, |X_a|]^\top$ is the column vector of feature counts and the coefficient $k_{\rm BH}(G, G')$
is given by
\begin{align}
\label{eq:swap_test_coeff}
k_{\rm BH}(G, G')=\frac{1}{\sqrt{\sum_{k=1}^a |X_k|^2}\sqrt{\sum_{k=1}^a |X_k'|^2}}.
\end{align}
Importantly, Eq.~\eqref{eq:kernel_def_nff} is exactly the Bhattacharyya (BH) kernel
\cite{Bhattacharyya1943, BhattacharyyaKernel}, which has been successfully applied to image
recognition \cite{BhattacharyyaKernel} and text classification \cite{BhattacharyyaKernelText}.
\begin{figurehere}
\begin{align}
\Qcircuit @C=1em @R=.7em {
\lstick{\ket{0}} & \qw & \qw & \qw & \gate{H} & \ctrl{4} & \gate{H} & \meter \\
\lstick{\ket{0^n}} & {/} \qw & \multigate{1}{U} & \meter & \qw & \qw & \qw & \qw \\
\lstick{\ket{0..}} & {/} \qw & \ghost{U} & \qw & \qw & \qswap & \qw & \qw \\
\lstick{\ket{0^{n'}}} & {/} \qw & \multigate{1}{U'} & \meter & \qw & \qw & \qw & \qw \\
\lstick{\ket{0..}} & {/} \qw & \ghost{U'} & \qw & \qw & \qswap & \qw & \qw
}
\end{align}
\caption{
Quantum circuit to compute the inner product \eqref{eq:kernel_def}, which is composed of the oracles
$U$ and $U'$ to produce $\ket{\bar{G}}$ and $\ket{\bar{G}'}$; this is followed by the post-selection
operation to obtain $\ket{G}\ket{G'}$ and the swap test.
The probability of obtaining $0$ as a result of the measurement on the first qubit is
$\Pr(0)=(1+|\braket{G|G'}|^2)/2$, which enables us to estimate the inner product
\eqref{eq:kernel_def}.}
\label{qc:swap-test}
\end{figurehere}
\subsection*{Alternative algorithm with switch test}
\begin{figurehere}
\begin{align}
\Qcircuit @C=1em @R=.7em {
\lstick{\ket{0}} & \qw & \multigate{2}{\textrm{Oracle}} & \ctrlo{1} & \ctrl{1} & \qw & \qw \\
\lstick{\ket{0^n}} & {/} \qw & \ghost{\textrm{Oracle}} & \gate{H^{\otimes n}} & \gate{H^{\otimes n'}} & \meter & \qw \\
\lstick{\ket{0..}} & {/} \qw & \ghost{\textrm{Oracle}} & \qw & \qw & \qw & \qw
}
\end{align}
\caption{Quantum circuit to prepare the state \eqref{eq:sp_g_without_index}. }
\label{qc:init-index-reg-sp}
\end{figurehere} \noindent
\\
Here, we show that the superposition
\begin{equation}
\label{eq:sp_g_without_index}
\ket{0}\ket{G}+\ket{1}\ket{G'}
\end{equation}
can also be used to compute the inner product \eqref{eq:kernel_def}; this eventually yields
a kernel different from $K_{\rm BH}(G, G')$, while using the same order of queries as the previous
case.
Similar to the previous case, the superposition \eqref{eq:sp_g_without_index} can be effectively
generated by the post-selection operation on
\begin{equation}
\label{eq:sp_g_with_index}
\ket{0}\ket{\bar{G}}+\ket{1}\ket{\bar{G}'},
\end{equation}
using the circuit depicted in Fig.~\ref{qc:init-index-reg-sp}.
We first apply the controlled \(H^{\otimes n}\) and \(H^{\otimes n'}\) on the
index state of Eq.~\eqref{eq:sp_g_with_index} and then post-select the state when the measurement
result on the index is \( 0^n \).
The formal statement of the result in terms of the general discrete functions $f$ and $g$ is given
as follows
(the proof is given in the Supplementary Information):
\begin{theorem} \label{thm:rir-qf}
Let \(Y\) and \(Y'\) be the ranges of \(f\) and \(g\), respectively, and let \(a=|Y|\) and \(a'=|Y'|\),
so that \(Y=\{y_k\}_{k=1}^a\) and \(Y'=\{y_k'\}_{k=1}^{a'}\).
Also, let \(X_k=\{x \, |\, f(x)=y_k,x\in\{0,1\}^n\}\) and \(X_k'=\{x \, |\, g(x)=y_k',x\in\{0,1\}^{n'}\}\) for
\(n\geq n'\).
The quantum circuit depicted in Fig.~\ref{qc:init-index-reg-sp} uses the oracle to prepare the state
\begin{align}
\frac{1}{\sqrt{2}}&\left(\frac{\ket{0}\sum_{x\in\{0,1\}^{n}}\ket{x}\ket{f(x)}}{\sqrt{2^{n}}} \right. \\
&+ \left. \frac{\ket{1}\ket{0^{n-n'}}\sum_{x\in\{0,1\}^{n'}}\ket{x}\ket{g(x)}}{\sqrt{2^{n'}}}\right). \label{eq:thm2-init-state}
\end{align}
Then, the final feature state of the circuit, which is post-selected when the measurement result is $0^n$
in the index state, is given by
\begin{equation}
\frac{1}{N_{fg}}\left(\frac{\ket{0}\sum_{x\in\{0,1\}^{n}}\ket{f(x)}}{2^{n}}
+\frac{\ket{1}\sum_{x\in\{0,1\}^{n'}}\ket{g(x)}}{2^{n'}}\right),
\label{eq:thm2-final-state}
\end{equation}
where
\begin{equation}
N_{fg}=\sqrt{\left(\frac{\sum_{k=1}^a|X_{k}|^2}{2^{2n}}
+\frac{\sum_{k=1}^{a'}|X_{k}'|^2}{2^{2n'}}\right)}.
\end{equation}
The probability of obtaining the state \eqref{eq:thm2-final-state} is at least \((a^{-1}+a'^{-1})/2\).
\end{theorem}
\mbox{}
As in the previous case, by imposing the encoding functions $E(G, x)$ and $E(G', x)$ to satisfy
the condition \eqref{eq:v_range_constraint}, the quantum circuit depicted in Fig.~\ref{qc:init-index-reg-sp}
transforms Eq.~\eqref{eq:sp_g_with_index} to Eq.~\eqref{eq:sp_g_without_index} with a probability of
$\Omega(\sqrt{n}/n^c + \sqrt{n'}/n'\mbox{}^c) = \Omega(\sqrt{n}/n^c)$, where $n$ and $n'$ are
assumed to be of the same order.
The proof of this result is the same as that of Theorem \ref{thm:rir-v-range}.
\begin{figurehere}
\begin{align}
\Qcircuit @C=1em @R=.7em {
\lstick{\ket{0}} & \gate{H} & \ctrlo{1} & \ctrl{1} & \qw & \gate{H} & \meter \\
\lstick{\ket{0^n}} & {/} \qw & \multigate{1}{U} & \multigate{1}{U'} & \meter & \qw & \qw \\
\lstick{\ket{0..}} & {/} \qw & \ghost{U} & \ghost{U'} & \qw & \qw & \qw
}
\end{align}
\caption{
Quantum circuit to compute the inner product \eqref{eq:kernel_def}, which is composed of the oracles
$U$ and $U'$ to produce $\ket{0}\ket{\bar{G}}+\ket{1}\ket{\bar{G}'}$. This is followed by the
post-selection operation to obtain $\ket{0}\ket{G}+\ket{1}\ket{G'}$ and the switch test.
Additionally, \(n\geq n'\) is assumed.
The probability of obtaining $0$ as a result of a measurement on the first qubit is
$\Pr(0)=(1+\braket{G|G'})/2$, which enables us to estimate the inner product \eqref{eq:kernel_def}. }
\label{qc:switch-test}
\end{figurehere}
\mbox{}
We have now obtained the state $\ket{0}\ket{G}+\ket{1}\ket{G'}$, which enables the application
of the switch test \cite{SWITCH-test-1st,SWITCH-test-called} to compute the inner product
\eqref{eq:kernel_def}.
The circuit diagram, which contains the post-selection operation, is shown in Fig. \ref{qc:switch-test}.
The total query complexity to compute the inner product \eqref{eq:kernel_def} is $O(n^c/\sqrt{n})$.
The resulting inner product computed through the switch test, which we call the SH kernel, is given by
\begin{align}
\label{eq:kernel_def_SH}
K_{\rm SH}(G, G') = k_{\rm SH}(G, G') f_G^\top f_{G'},
\end{align}
where, again, $f_G=[|X_1|, \ldots, |X_a|]^\top$, and the coefficient $k_{\rm SH}(G, G')$ is given by
\begin{equation}
\label{eq:switch_test_coeff}
k_{\rm SH}(G, G')
=\frac{2}{\frac{2^{n'}}{2^n}\sum_{k=1}^a |X_k|^2 + \frac{2^n}{2^{n'}}\sum_{k=1}^a |X_k'|^2}.
\end{equation}
In the Supplementary Information, we prove that this is a positive semidefinite kernel.
Also, we show there that $K_{\rm SH}(G, G') \leq K_{\rm BH}(G, G')$ holds, implying that the SH
kernel may give a more conservative classification performance than the BH kernel.
Note that the generalized T-student kernel \cite{GeneralizedTStudentKernelUsage} has a similar form.
\subsection*{Improved algorithm with amplitude amplification}
\begin{figurehere}
\begin{align}
\Qcircuit @C=1em @R=.7em {
\lstick{\ket{0^n}\ket{0..}} & {/} \qw & \gate{\textrm{Oracle}} & \qw \gategroup{1}{3}{1}{3}{.7em}{--} \\
& & \push{\text{AA}}
} \\ \\
\Qcircuit @C=1em @R=.7em {
\lstick{\ket{0}} & \qw & \multigate{1}{\textrm{Oracle}} & \qw \\
\lstick{\ket{0^n}\ket{0..}} & {/} \qw & \ghost{\textrm{Oracle}} & \qw \gategroup{1}{3}{2}{3}{.7em}{--} \\
& & \push{\text{AA}}
}
\end{align}
\caption{Quantum circuit with amplitude amplification to prepare the state
\eqref{eq:g_without_index} (upper) and \eqref{eq:sp_g_without_index} (lower).
The circuit that is iterated to realize the amplitude amplification is enclosed in the dashed line.}
\label{qc:g-grover}
\end{figurehere}
\mbox{}
\\
Recall that we used post-selection on the state \eqref{eq:has-other-terms}:
\begin{equation*}
\frac{1}{2^n}\ket{0^n}\left(\sum_{k=1}^a |X_k|\ket{y_k}\right)+\mathrm{other\ terms},
\end{equation*}
to probabilistically produce the first term
\begin{align}
\label{target state AA}
\frac{1}{2^n}\ket{0^n}\left(\sum_{k=1}^a |X_k|\ket{y_k}\right).
\end{align}
We can enhance the first term using the amplitude amplification (AA) operation
\cite{Grover,AmplitudeAmplification} to {\it deterministically} produce the state
\eqref{target state AA}.
The clear advantage of AA is that it requires only the square root of the number of operations
needed by the previous repeat-until-success strategy to obtain this state.
This means that the query complexity to transform Eq.~\eqref{eq:g_with_index} to
Eq.~\eqref{eq:g_without_index} via AA is
\begin{align}
O\left(\sqrt{a/\sqrt{n}}\right)=O(\sqrt{a}/n^{1/4}).
\end{align}
Similarly, transforming Eq. \eqref{eq:sp_g_with_index} to Eq. \eqref{eq:sp_g_without_index}
via AA requires a query complexity of $O(\sqrt{a}/n^{1/4})$.
These circuits are depicted in Fig. \ref{qc:g-grover}; note that the measurement on the index
state is not necessary.
The circuit that includes the swap test and AA to compute the inner product
$\braket{G|G'} = K_{\rm BH}(G, G')$ is depicted in Fig.~\ref{qc:swap-switch-test-grover} (upper).
The circuit length is
\begin{align}
O(\sqrt{a}/n^{1/4})+O(\sqrt{a}/n^{1/4})=O(\sqrt{a}/n^{1/4}).
\end{align}
Also, the circuit containing the switch test and AA is depicted in Fig.~\ref{qc:swap-switch-test-grover}
(lower); as in the case of swap test, the circuit length is $O(\sqrt{a}/n^{1/4})$.
In particular, if the encoding functions $E(G, x)$ and $E(G', x)$ satisfy the condition
\eqref{eq:v_range_constraint}, then we can specify $a=O(n^c)$.
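As a rough illustration of the bookkeeping behind AA (a sketch only; the helper \texttt{aa\_iterations} is ours and is not part of the proposed circuits), the number of Grover-type iterations needed to boost the post-selected amplitude close to one scales as the inverse square root of the initial success probability $\Pr(0^n)$.
\begin{verbatim}
import math

def aa_iterations(p):
    """Approximate number of AA iterations for initial success probability p."""
    theta = math.asin(math.sqrt(p))   # initial amplitude angle
    return max(0, round(math.pi / (4.0 * theta) - 0.5))

# For the Hamming-weight example sketched earlier, p = 20/64 = 0.3125,
# so a single iteration already suffices.
print(aa_iterations(0.3125))          # -> 1
\end{verbatim}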
\begin{figurehere}
\begin{align}
\Qcircuit @C=1em @R=.7em {
\lstick{\ket{0}} & \qw & \qw & \gate{H} & \ctrl{3} & \gate{H} & \meter \\
\lstick{\ket{0^n}\ket{0..}} & {/} \qw & \gate{U} & \qw & \qswap & \qw & \qw \\
& & \push{\text{AA}} & & & & \\
\lstick{\ket{0^{n'}}\ket{0..}} & {/} \qw & \gate{U'} & \qw & \qswap & \qw & \qw \gategroup{2}{3}{2}{3}{.7em}{--} \gategroup{4}{3}{4}{3}{.7em}{--} \\
& & \push{\text{AA}}
} \\ \\
\Qcircuit @C=1em @R=.7em {
\lstick{\ket{0}} & \gate{H} & \ctrlo{1} & \ctrl{1} & \gate{H} & \meter \\
\lstick{\ket{0^n}\ket{0..}} & {/} \qw & \gate{U} & \gate{U'} & \qw & \qw \gategroup{1}{2}{2}{4}{.7em}{--}
} \\
\Qcircuit @C=1em @R=.7em {
\push{\text{AA}} & & & & & & & &
}
\end{align}
\caption{
Quantum circuit to compute the inner product \eqref{eq:kernel_def}; it is composed of the oracles
enhanced via AA followed by the swap test (upper) and the switch test (lower).}
\label{qc:swap-switch-test-grover}
\end{figurehere}
\subsection*{Time complexity for specific encoding functions}
Here, we discuss the time complexity that takes into account the number of elementary operations
contained in the oracle.
We particularly investigate the following two encoding functions:
\begin{align}
E_{ve}(G,x)&=[\#v,\#e], \label{eq:encode_ve} \\
E_{ved}(G,x)&=[\#v,\#e,\#d1,\#d2,\#d3],
\label{eq:encode_ved}
\end{align}
where \(\#v\) is the number of vertices of the subgraph specified by $x$, and \(\#e\) is the number
of edges of $x$.
Additionally, \(\#dD\) \((D\in\{1,2,3\})\) denotes the number of vertices that have a degree of \(D\).
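For reference, a plain classical implementation of $E_{ved}(G,x)$ is given below (a sketch only: the quantum oracle computes the same quantities coherently, and the adjacency-list representation and function name are illustrative choices of ours).
\begin{verbatim}
def E_ved(adj, x):
    """(#v, #e, #d1, #d2, #d3) of the subgraph of G selected by the bit tuple x."""
    verts = [i for i, b in enumerate(x) if b]
    vset = set(verts)
    edges = {frozenset((u, v)) for u in verts for v in adj[u] if v in vset}
    deg = {u: sum(1 for v in adj[u] if v in vset) for u in verts}
    d_counts = [sum(1 for u in verts if deg[u] == D) for D in (1, 2, 3)]
    return (len(verts), len(edges), *d_counts)

# Example: the triangle graph, subgraph consisting of vertices 0 and 1
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(E_ved(adj, (1, 1, 0)))   # -> (2, 1, 2, 0, 0)
\end{verbatim}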
Then, from Lemmas \ref{thm:v}, \ref{thm:e}, and \ref{thm:dD} given in the Supplementary Information,
the time complexities required to calculate Eq. \eqref{eq:encode_ve} and Eq. \eqref{eq:encode_ved}
are
\begin{align}
O(n(\log n)^2)+O(|E|(\log |E|)^2)
=O((n+|E|)(\log n)^2),
\end{align}
and
\begin{align}
&O(n(\log n)^2)+O(|E|(\log |E|)^2) \\
& \hspace{1em} +3\cdot O(n((\log n)^2+d(\log d)^2)) \\
&=O((n+|E|)(\log n)^2+nd(\log d)^2),
\end{align}
respectively.
Recall that $d$ is the maximum degree of the graph $G$.
The cardinality \(a=|Y|\) for the range \(Y\) can be evaluated as
\begin{align}
a=O(n)\cdot O(n^2)=O(n^3),
\end{align}
in the case of Eq. \eqref{eq:encode_ve}, and
\begin{align}
a=O(n)\cdot O(n^2)\cdot O(n)\cdot O(n)\cdot O(n)=O(n^6),
\end{align}
in the case of Eq. \eqref{eq:encode_ved}.
Note that Eqs. \eqref{eq:encode_ve} and \eqref{eq:encode_ved} satisfy the condition
\eqref{eq:v_range_constraint}.
Hence, the time complexity of the quantum algorithm, assisted by AA, is evaluated as follows:
in the case of Eq. \eqref{eq:encode_ve}, it is
\begin{equation}
O((n+|E|)(\log n)^2)\cdot O\left(\sqrt{n^3}/n^{1/4}\right)
=O(n^{3.25}(\log n)^2),
\end{equation}
and in the case of Eq. \eqref{eq:encode_ved}, it is
\begin{align}
&O((n+|E|)(\log n)^2+nd(\log d)^2)\cdot O\left(\sqrt{n^6}/n^{1/4}\right) \\
&\hspace{1em} =O(n^{4.75}(\log n)^2),
\end{align}
where \(|E|=n^2\) and \(d=n\) are used.
In contrast, the time complexity of typical existing graph kernels is \(O(n^3)\) for the random walk method \cite{FastVishwanathan} and \(O(n^4)\) for the shortest
paths method \cite{Borgwardt}.
Hence, our quantum computing approach for kernel computation has a time complexity
comparable to that of the typical classical approach.
However, note again that our kernel reflects features from {\it all} subgraphs, which are not
covered by existing methods.
\subsection*{Toy example}
\begin{figurehere}
\centering
\includegraphics[width=6cm]{example-graphs.png}
\caption{Structure of the toy graphs. }
\label{fig:example-graphs}
\end{figurehere}
\mbox{}
\\
We consider two simple toy graphs, which are depicted in Fig. \ref{fig:example-graphs}, to
demonstrate how to construct the corresponding quantum feature states $\ket{G}$ and $\ket{G'}$.
The encoding function is chosen as $E_{ved}(G, x)$, given in Eq. \eqref{eq:encode_ved}.
Note that both graphs have \(2^3=8\) induced subgraphs; thus, we need $n=3$ qubits
to cover all subgraphs.
First, $\ket{\bar{G}}\in {\cal H}_{\rm index}\otimes {\cal H}_{\rm feature}$ is constructed as
\begin{align}
\ket{\bar{G}}&=\ket{000}\ket{0,0,0,0,0} + \ket{100}\ket{1,0,0,0,0} \\
&+ \ket{010}\ket{1,0,0,0,0} + \ket{001}\ket{1,0,0,0,0} \\
&+ \ket{110}\ket{2,1,2,0,0} + \ket{101}\ket{2,1,2,0,0} \\
&+ \ket{011}\ket{2,1,2,0,0} + \ket{111}\ket{3,3,0,3,0}.
\end{align}
For example, the term \(\ket{110}\ket{2,1,2,0,0}\) represents the state of the subgraph composed
of the 0th and 1st vertices (thus, the index is 110);
this subgraph has 2 vertices, 1 edge, 2 vertices with a degree of 1, 0 vertices with a degree of 2,
and 0 vertices with a degree of 3 (thus, the feature is represented by $2,1,2,0,0$).
Note again that the normalization constant is omitted.
Therefore, the algorithm depicted in Fig.~\ref{qc:init-index-reg} enables us to remove the index state and
arrive at the feature state $\ket{G}\in {\cal H}_{\rm feature}$:
\begin{align}
\ket{G}&=\ket{0,0,0,0,0}+3\ket{1,0,0,0,0} \\
&+3\ket{2,1,2,0,0}+\ket{3,3,0,3,0}.
\end{align}
The state $\ket{\bar{G}'}$ can also be obtained in the same way as
\begin{align}
\ket{\bar{G}'}&=\ket{000}\ket{0,0,0,0,0} + \ket{100}\ket{1,0,0,0,0} \\
&+ \ket{010}\ket{1,0,0,0,0} + \ket{001}\ket{1,0,0,0,0} \\
&+ \ket{110}\ket{2,1,2,0,0} + \ket{101}\ket{2,1,2,0,0} \\
&+ \ket{011}\ket{2,0,0,0,0} + \ket{111}\ket{3,2,2,1,0},
\end{align}
which leads to
\begin{align}
\ket{G'}&=\ket{0,0,0,0,0}+3\ket{1,0,0,0,0} \\
&+2\ket{2,1,2,0,0}+\ket{2,0,0,0,0}+\ket{3,2,2,1,0}.
\end{align}
Hence, by considering the normalization factor, the inner product (i.e., the similarity
of the graphs) is calculated as follows:
in the case of the swap test, it is
\begin{align}
& \hspace{-1em} \braket{G|G'} = K_{\rm BH}(G, G')\\
&=\frac{1\cdot 1 + 3\cdot 3 + 3\cdot 2 + 0\cdot 1 + 1\cdot 0 + 0\cdot 1}{\sqrt{1^2+3^2+3^2+1^2}\sqrt{1^2+3^2+2^2+1^2+1^2}} \\
&\sim 0.8944,
\end{align}
and in the case of the switch test, it is
\begin{align}
&\hspace{-1em} \braket{G|G'} = K_{\rm SH}(G, G')\\
&=\frac{2(1\cdot 1 + 3\cdot 3 + 3\cdot 2 + 0\cdot 1 + 1\cdot 0 + 0\cdot 1)}{\frac{2^3}{2^3}(1^2+3^2+3^2+1^2)+\frac{2^3}{2^3}(1^2+3^2+2^2+1^2+1^2)} \\
&\sim 0.8889.
\end{align}
Note that both values are larger than $\braket{\bar{G}|\bar{G}'}=0.75$, the value computed
via the naive method that retains the index states.
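The two kernel values above can be checked directly from the feature counts read off from $\ket{G}$ and $\ket{G'}$; the following short Python sketch (ours, for illustration only) reproduces them.
\begin{verbatim}
import numpy as np

G  = {(0,0,0,0,0): 1, (1,0,0,0,0): 3, (2,1,2,0,0): 3, (3,3,0,3,0): 1}
Gp = {(0,0,0,0,0): 1, (1,0,0,0,0): 3, (2,1,2,0,0): 2, (2,0,0,0,0): 1,
      (3,2,2,1,0): 1}

dot = sum(G[y] * Gp[y] for y in G.keys() & Gp.keys())   # f_G^T f_G' = 16
sG  = sum(v * v for v in G.values())                    # 20
sGp = sum(v * v for v in Gp.values())                   # 16

K_BH = dot / np.sqrt(sG * sGp)         # swap test
K_SH = 2.0 * dot / (sG + sGp)          # switch test (n = n' = 3, so 2^{n'}/2^n = 1)
print(round(K_BH, 4), round(K_SH, 4))  # 0.8944 0.8889
\end{verbatim}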
\subsection*{Advantage over classical sampling method}
In the classical case, an exponentially large resource is necessary to compute a feature vector,
corresponding to Eq.~\eqref{eq:g_without_index}, which considers all subgraphs; however, we can
utilize an efficient classical sampling method that was used in \cite{N-Shervashidze} to approximate
this feature vector.
More specifically, we first sample $S$ subgraphs from the entire graph $G$ and then apply the
encoding function $E(G,x)$, which satisfies Eq. \eqref{eq:v_range_constraint}, on those subgraphs
to approximate the feature vector.
The $k$th component of this vector is given by
\begin{align}
\hat{P}_{\mathbf{x}^S}(y_k)=\frac{1}{S}\sum_{i=1}^S 1( E(G, x_i)=y_k),
\end{align}
where $\mathbf{x}^S=\{ x_1,...,x_S\}$ is the set of indices that identify the sampled subgraphs.
Also, $1(A)$ represents the indicator function, which takes 1 when the condition $A$ is satisfied
and zero otherwise.
Now, the $k$th component of the true feature vector is represented as
\begin{align}
\label{eq:true-prob-dist}
P(y_k)=\frac{|X_k|}{2^n}.
\end{align}
We then have the following theorem (the proof is given in the Supplementary Information):
\begin{theorem}
\label{thm:classical-sample-size}
For a given $\epsilon>0$ and $\delta>0$,
\begin{align}
S=O\left(\frac{a-\log{\delta}}{\epsilon^2\log{n}}\right)
\end{align}
samples suffice to ensure that
\begin{align}
\label{eq:cl-l1-bound}
\Pr\left(\left\|P-\hat{P}_{\mathbf{x}^S}\right\|_1\geq\epsilon\right)\leq\delta,
\end{align}
where $\| \cdot \|_1$ denotes the $L_1$ norm.
In particular, for constant $\epsilon$ and $\delta$, we have that $S=O(a/\log{n})$.
\end{theorem}
Therefore, the sample complexity of this classical method is $O(a/\log{n})$, whereas the query
complexity of the proposed quantum algorithm is $O(a/\sqrt{n})$.
Hence, up to the difference in the notions of complexity being compared (sample versus query
complexity), the proposed method has a clear computational advantage.
Note that the counterpart of the inner product $\braket{G|G'}$ computed via the above classical
sampling method, i.e., the inner product of the estimated probability vectors, is given by
\begin{align}
\braket{G|G'} = \frac{1}{(\sum_{k=1}^a |X_k|)(\sum_{k=1}^a |X_k'|)} f_G^\top f_{G'},
\end{align}
which differs from that computed in the quantum case \eqref{eq:kernel_def_nff} or
\eqref{eq:kernel_def_SH}.
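To make the comparison concrete, the following Python sketch implements the classical sampling estimator $\hat{P}_{\mathbf{x}^S}$ of Theorem~\ref{thm:classical-sample-size} for the toy triangle graph, using the simpler encoding $E_{ve}(G,x)=[\#v,\#e]$ (the graph, the sample size, and the helper names are illustrative choices of ours).
\begin{verbatim}
import random
from collections import Counter

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # triangle graph
n = len(adj)

def E_ve(x):
    verts = {i for i, b in enumerate(x) if b}
    e = sum(1 for u in verts for v in adj[u] if v in verts) // 2
    return (len(verts), e)

rng = random.Random(0)
S = 10000                                 # number of sampled subgraphs
counts = Counter(E_ve(tuple(rng.randint(0, 1) for _ in range(n)))
                 for _ in range(S))
P_hat = {y: c / S for y, c in sorted(counts.items())}
print(P_hat)
# approaches P(y_k) = |X_k| / 2^n:
#   {(0,0): 1/8, (1,0): 3/8, (2,1): 3/8, (3,3): 1/8}
\end{verbatim}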
\subsection*{Numerical experiment}
Here we study the performance of classifiers constructed from the proposed quantum kernel,
in comparison with classifiers based on typical classical graph kernels.
The quantum kernel was calculated not by running the quantum algorithm but via direct calculation
of the inner product \eqref{eq:kernel_def} in an ideal noise-free environment on a GPU;
for details, see the Code Availability section in the Supplementary Information.
We calculate both the BH kernel \eqref{eq:kernel_def_nff} and the SH kernel \eqref{eq:kernel_def_SH},
for the two different encoding functions $E_{ve}$ given in Eq.~\eqref{eq:encode_ve} and $E_{ved}$
given in Eq.~\eqref{eq:encode_ved}.
We compare the quantum graph kernels to the following classical graph kernels:
the random walk kernel (RW) \cite{FastVishwanathan}, the graphlet sampling kernel (GS)
\cite{N-Shervashidze}, and the shortest path kernel (SP) \cite{Borgwardt}.
These three classical kernels are simulated using Python's \textit{GraKeL} library \cite{grakel}.
We use the following benchmark datasets obtained from the repository of the Technical University
of Dortmund \cite{datasets}.
For the case of binary classification problems, we used:
AIDS (chemical compounds with or without evidence of anti-HIV activity \cite{AIDS});
BZR\_MD (dataset BZR of active or inactive benzodiazepine receptors \cite{ER};
converted to complete graphs \cite{ER_MD});
ER\_MD (dataset ER of active or inactive estrogen receptors \cite{ER};
converted to complete graphs \cite{ER_MD});
IMDB-BINARY (the movie genre is action or romance based on its co-starring
relationship \cite{IMDB});
MUTAG (chemical compounds with or without mutagenicity \cite{MUTAG});
and PTC\_FM (chemical compounds in the PTC dataset \cite{PTC} that are carcinogenic
or non-carcinogenic to female mice \cite{ER_MD}).
As for the multi-class classification problems, we used:
15-classes Fingerprint (fingerprint images converted to graphs and divided by type \cite{Fingerprint})
and
3-classes IMDB-MULTI (the movie genre is comedy, romance, or sci-fi based on its co-starring
relationship \cite{IMDB}).
Due to the limitation of GPU memory, we used only graphs with at most 28 vertices.
As a result, the ratios (number of graphs used)/(total number of graphs) are
1774/2000 for AIDS,
296/306 for BZR\_MD,
398/446 for ER\_MD,
860/1000 for IMDB-BINARY,
188/188 for MUTAG,
331/349 for PTC\_FM,
2148/2148 (excluding graphs with \(\#edges\) less than 1) for Fingerprint, and
1406/1500 for IMDB-MULTI.
The necessary number of qubits is $28+\log{28}+\log{(28\times 27/2)}\sim 41$ for the case
$E_{ve}$ and $28+\log{28}+\log{(28\times 27/2)}+\log{28}+\log{28}+\log{28}\sim 56$ for the
case $E_{ved}$.
We apply the $C$-support vector machine (SVM), implemented via \textit{Scikit-learn}
\cite{scikit-learn}, to classify the dataset.
To evaluate the classification performance, we calculate the mean test accuracy, by running
10 repeats of a double 10-fold cross-validation.
In addition, we calculate the {\it F-measure}, which is appropriate when the class sizes are
unbalanced; in fact, the two classes contain 63 and 125 graphs for MUTAG
and 400 and 1600 graphs for AIDS.
The SVM parameter $C$ is taken from the discrete set $\{10^{-4}, 10^{-3}, \ldots, 10^3\}$, and
the best model with respect to $C$ is used to compute the classification performance.
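A minimal sketch of this evaluation step is given below (assuming a precomputed Gram matrix \texttt{K} with entries given by the BH or SH kernel and a label array \texttt{y}; the helper \texttt{evaluate}, the random seeds, and the single repetition of the nested cross-validation are illustrative simplifications of ours).
\begin{verbatim}
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def evaluate(K, y, seed=0):
    """Nested 10-fold CV accuracy of a C-SVM on a precomputed kernel matrix K."""
    grid = {"C": [10.0 ** k for k in range(-4, 4)]}  # C in {1e-4, ..., 1e3}
    inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed + 1)
    accs = []
    for tr, te in outer.split(K, y):
        search = GridSearchCV(SVC(kernel="precomputed"), grid, cv=inner)
        search.fit(K[np.ix_(tr, tr)], y[tr])                 # inner CV selects C
        accs.append(search.score(K[np.ix_(te, tr)], y[te]))  # rows: test, cols: train
    return float(np.mean(accs))
\end{verbatim}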
The results are summarized in Table~\ref{tb:graphkernels}.
\end{multicols}
\begin{table}[H]
\centering
\caption{
Mean test accuracy (upper) and F-measure (lower) of the $C$-SVM constructed with each kernel;
the errors are the standard deviation between 10 repetitions of the double 10-fold cross-validation.
QK is the proposed quantum kernel.
RW, GS, and SP are the classical graph kernels.
Macro-F1 is used in the multiclass graph datasets Fingerprint and IMDB-MULTI.
The bold indicates the best performing value in the dataset.}
\label{tb:graphkernels}
\scalebox{0.8}[0.9]{
\begin{tabular}{l|ccccccc} \hline
Dataset & QK (BH\([ve]\)) & QK (BH\([ved]\)) & QK (SH\([ve]\)) & QK (SH\([ved]\)) & RW & GS & SP \\ \hline
AIDS & \(\mathbf{99.79}\pm 0.06\) & \(99.68\pm 0.05\) & \(\mathbf{99.79}\pm 0.06\) & \(99.71\pm 0.05\) & \(99.66\pm 0.00\) & \(98.74\pm 0.16\) & \(99.67\pm 0.02\) \\
BZR\_MD & \(64.23\pm 0.71\) & \(\mathbf{64.29}\pm 0.44\) & \(63.83\pm 1.01\) & \(63.83\pm 1.01\) & \(62.14\pm 0.76\) & \(53.90\pm 3.07\) & \(63.42\pm 1.02\) \\
ER\_MD & \(66.05\pm 1.36\) & \(66.00\pm 1.35\) & \(\mathbf{66.28}\pm 1.05\) & \(\mathbf{66.28}\pm 1.05\) & \(60.73\pm 0.47\) & \(55.40\pm 1.40\) & \(65.58\pm 0.83\) \\
IMDB-BINARY & \(\mathbf{70.16}\pm 0.90\) & \(69.85\pm 1.15\) & \(69.81\pm 0.60\) & \(69.85\pm 1.03\) & \(53.64\pm 0.72\) & \(42.05\pm 0.56\) & \(57.01\pm 1.09\) \\
MUTAG & \(85.88\pm 0.59\) & \(87.01\pm 1.20\) & \(85.56\pm 0.73\) & \(86.79\pm 0.96\) & \(\mathbf{88.11}\pm 0.70\) & \(69.94\pm 1.30\) & \(86.73\pm 1.30\) \\
PTC\_FM & \(\mathbf{60.82}\pm 1.30\) & \(60.12\pm 1.20\) & \(60.55\pm 1.24\) & \(60.15\pm 1.15\) & \(57.86\pm 0.99\) & \(52.29\pm 2.49\) & \(57.91\pm 0.74\) \\
Fingerprint & \(46.85\pm 0.36\) & \(\mathbf{47.09}\pm 0.27\) & \(46.94\pm 0.33\) & \(47.01\pm 0.24\) & \(47.03\pm 0.29\) & \(42.82\pm 0.56\) & \(46.99\pm 0.29\) \\
IMDB-MULTI & \(46.44\pm 0.23\) & \(47.13\pm 0.53\) & \(46.66\pm 0.48\) & \(\mathbf{47.57}\pm 0.48\) & \(35.49\pm 0.20\) & \(18.95\pm 0.34\) & \(42.53\pm 0.98\) \\
\hline \hline
AIDS & \(\mathbf{99.88}\pm 0.03\) & \(99.82\pm 0.03\) & \(\mathbf{99.88}\pm 0.03\) & \(99.84\pm 0.03\) & \(99.81\pm 0.00\) & \(99.30\pm 0.09\) & \(99.82\pm 0.01\) \\
BZR\_MD & \(70.44\pm 0.55\) & \(\mathbf{70.46}\pm 0.42\) & \(69.87\pm 1.11\) & \(69.87\pm 1.11\) & \(62.28\pm 1.08\) & \(54.06\pm 3.27\) & \(66.70\pm 1.48\) \\
ER\_MD & \(62.75\pm 1.75\) & \(62.73\pm 1.73\) & \(\mathbf{62.89}\pm 1.58\) & \(\mathbf{62.89}\pm 1.58\) & \(1.18\pm 2.52\) & \(41.53\pm 1.98\) & \(53.31\pm 1.95\) \\
IMDB-BINARY & \(\mathbf{69.27}\pm 1.40\) & \(69.06\pm 1.60\) & \(68.47\pm 0.91\) & \(67.91\pm 1.23\) & \(25.06\pm 1.76\) & \(41.79\pm 0.77\) & \(66.87\pm 1.13\) \\
MUTAG & \(89.15\pm 0.39\) & \(90.11\pm 0.86\) & \(88.96\pm 0.49\) & \(89.95\pm 0.64\) & \(\mathbf{90.74}\pm 0.54\) & \(76.85\pm 1.24\) & \(89.68\pm 0.95\) \\
PTC\_FM & \(34.75\pm 1.81\) & \(36.40\pm 1.63\) & \(34.28\pm 2.03\) & \(35.16\pm 1.96\) & \(2.88\pm 4.42\) & \(\mathbf{40.99}\pm 3.08\) & \(2.72\pm 2.44\) \\
Fingerprint & \(17.73\pm 0.17\) & \(17.81\pm 0.32\) & \(17.75\pm 0.13\) & \(\mathbf{17.84}\pm 0.08\) & \(17.32\pm 0.33\) & \(17.21\pm 0.22\) & \(16.74\pm 0.15\) \\
IMDB-MULTI & \(44.67\pm 0.35\) & \(45.40\pm 0.61\) & \(44.91\pm 0.60\) & \(\mathbf{45.97}\pm 0.55\) & \(21.04\pm 0.37\) & \(17.17\pm 0.31\) & \(38.71\pm 1.31\) \\
\hline
\end{tabular}
}
\end{table}
\begin{multicols}{2}
The table shows that, in many cases, the proposed quantum kernel achieves better classification
accuracy than that obtained via the classical kernels.
Note that the AIDS dataset is sparse and the ER\_MD dataset is dense; the quantum kernels show
better performance in both cases, implying that they are not strongly affected by the density
of the graphs in a dataset.
In many cases, the two kernels BH and SH show a similar performance, but there are some
visible differences depending on the dataset;
this may be due to the property $K_{\rm SH}(G, G') \leq K_{\rm BH}(G, G')$, which is proven in
Lemma \ref{thm:cosinesimilarity-geq-ourkernel} in Supplementary Information.
Also, although $E_{ved}$ has more features than $E_{ve}$, the table does not show a clear
superiority of the former over the latter in classification accuracy; we discuss this
point further in the next section.
Lastly, let us check the probability of successfully removing the index register;
recall from Theorem~\ref{thm:rir-v-range} that the lower bound is $\Omega(\sqrt{n}/a)$.
Figure~\ref{fig:prob-change} shows the success probabilities when Eq. \eqref{eq:encode_ved} is
used as the encoding function, in which case the lower bound is \(\Omega(\sqrt{n}/a) = \Omega(n^{-5.5})\).
As shown in the figure, the actual success probability is much higher than the lower bound, indicating
that the quantum algorithm for removing the index state is likely to be more efficient in practice
than the theoretical bound suggests.
\begin{figurehere}
\centering
\includegraphics[width=\linewidth]{probs_change_each_dataset.png}
\caption{
Success probability for removing the index state.
The horizontal axis represents the number of vertices $n$, and the vertical axis represents
the success probability. The error bar represents the standard error.}
\label{fig:prob-change}
\end{figurehere}
\section*{Discussion}
In this paper, as a main result, we provided the condition and the protocol for removing
the index state with polynomial query complexity.
The encoding function $E(G,x)$ that extracts features from a subgraph $x$, given by
Eq.~\eqref{eq:encode_ve} or \eqref{eq:encode_ved}, satisfies this condition, which allows
us to construct the graph kernel that correctly reflects features of all subgraphs.
We gave a proof-of-principle numerical demonstration, classifying various types of graph
datasets containing graphs with at most 28 vertices via a simulated quantum device
composed of 41 or 56 qubits.
The proposed algorithm that efficiently removes the index states will be useful in various
other problems such as the task of counting the same words for text classification problems
using the Bhattacharyya kernel \cite{BhattacharyyaKernelText}.
We here give a remark on the choice of $E(G,x)$.
One might expect that an $E(G,x)$ with more features, including, e.g., cycle structures
\cite{CyclesReview}, and hence with a larger range, would lead to better classification
performance, although it would require a larger query complexity to remove the index state and thereby
construct the kernel.
(In particular, as is well known, if $E(G,x)$ and $x$ are in one-to-one correspondence, this task
requires an exponential query complexity.)
However, the important fact revealed by the numerical demonstration is that such a larger-range
encoding may not be required; indeed, we found that $E_{ve}(G,x)$ and $E_{ved}(G,x)$ lead to
almost the same classification performance.
This is presumably because the proposed kernel covers all subgraphs.
Hence, a relatively simple $E(G,x)$ might be sufficient, which is the advantage of our quantum
algorithm.
In other words, existing kernels that do not cover all subgraphs may need to contain more features.
A disadvantage of the kernel method is the heavy computational cost of calculating
the inner products $\braket{G_i|G_j}$ for all pairs $(G_i, G_j)$ contained in the training dataset
in order to construct the Gram matrix.
The generalization of the switch test protocol given in Theorem~\ref{thm:rir-qf} may give a solution
to this issue.
That is, we could have an algorithm that generates a superposition
$\ket{1}\ket{G_1}+\ket{2}\ket{G_2}+\ket{3}\ket{G_3}+\cdots$
and thereby efficiently construct the Gram matrix by some means.
This direction is worth investigating, as it is the scheme demanded in the field of kernel-based
quantum machine learning.
The algorithms proposed in this paper are all difficult to implement on near-term quantum devices.
A key direction may be to develop a valid relaxation of the condition, because a very precise
computation of the kernel value may not be necessary.
\section*{Supplementary Information}
This supplementary information contains proofs of the theorems in the main text, properties of the
SH kernel, some lemmas related to constructing $E(G,x)$, and additional information on our
numerical simulations.
\subsection*{Proof of theorems}
\begin{proof}[Proof of Theorem \ref{thm:rir-v-range}]
Suppose that the set \(X_{v,h}\) satisfies
\begin{align}
&\bigcup_{h=1}^{|Y_v|}X_{v,h}=X_v, \\
&\bigcup_{h=1}^{|Y_v|}\bigcup_{\substack{h'\neq h, \\ h'\in[1,|Y_v|]}}(X_{v,h}\cap X_{v,h'})=\emptyset.
\end{align}
We rewrite Eq. \eqref{eq:prob-to-init-ir} as
\begin{align}
\Pr(0^n)=\frac{1}{2^{2n}}\sum_{v=0}^n \sum_{h=1}^{|Y_v|} |X_{v,h}|^2.
\end{align}
To calculate the lower bound of
\begin{equation}
\sum_{h=1}^{|Y_v|} |X_{v,h}|^2
\end{equation}
subjected to the equality constraint
\begin{align}
\sum_{h=1}^{|Y_v|} |X_{v,h}|=|X_v|=\binom{n}{v},
\end{align}
we define the cost function
\begin{align}
&L(|X_{v,1}|,\dots,|X_{v,|Y_v|}|,\lambda) \\
&=-\sum_{h=1}^{|Y_v|} |X_{v,h}|^2-\lambda\left(\sum_{h=1}^{|Y_v|}|X_{v,h}|-\binom{n}{v}\right),
\end{align}
where \(\lambda\) denotes the Lagrange multiplier. Clearly,
\begin{equation}
|X_{v,h}|=\frac{1}{|Y_v|}\binom{n}{v}
\end{equation}
maximizes \(L(|X_{v,1}|,\dots,|X_{v,|Y_v|}|,\lambda)\).
Thus, the probability that we obtain \(0^{n}\) when measuring the index state is evaluated as
\begin{align}
\Pr(0^n)
&\geq \frac{1}{2^{2n}}\sum_{v=0}^n \sum_{h=1}^{|Y_v|}\left(\frac{1}{|Y_v|}\binom{n}{v}\right)^2
=\frac{1}{2^{2n}}\sum_{v=0}^n \frac{1}{|Y_v|} \binom{n}{v}^2 \\
&\geq\frac{1}{2^{2n}}\sum_{v=0}^n \frac{1}{\beta n^c} \binom{n}{v}^2
=\frac{1}{\beta n^c 2^{2n}}\binom{2n}{n} \\
&\sim \frac{1}{\beta n^c 2^{2n}}\frac{2^{2n}}{\sqrt{\pi n}}
=\Omega\left(\frac{\sqrt{n}}{n^c}\right),
\end{align}
where we used Eq.~\eqref{eq:order of |Y|} to have $|Y_v|={\rm Pol}(v^{c-1})\leq \beta n^c$ with
some constant $\beta$.
Note that, for real-valued functions $p(n)$ and $q(n)$, we write $p(n)=\Omega(q(n))$ if
\begin{align}
\exists n_0, \exists M>0\ \mathrm{s.t.}\ n\geq n_0 \Rightarrow p(n)\geq Mq(n).
\end{align}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:rir-qf}]
According to Fig.~\ref{qc:init-index-reg-sp}, we perform the controlled-\(H^{\otimes n}\) followed
by the controlled-\(H^{\otimes n'}\) on the following state
\begin{align}
&\frac{1}{\sqrt{2}}\left(\frac{\ket{0}\sum_{x\in\{0,1\}^{n}}\ket{x}\ket{f(x)}}{\sqrt{2^{n}}} \right. \\
\quad &+ \left. \frac{\ket{1}\ket{0^{n-n'}}\sum_{x\in\{0,1\}^{n'}}\ket{x}\ket{g(x)}}{\sqrt{2^{n'}}}\right) \\
&=\frac{1}{\sqrt{2}}\left(\frac{\ket{0}\sum_{k=1}^{a}\sum_{x\in X_{k}}\ket{x}\ket{y_{k}}}{\sqrt{2^{n}}} \right. \\
\quad &+ \left. \frac{\ket{1}\ket{0^{n-n'}}\sum_{k=1}^{a'}\sum_{x\in{X_{k}'}}\ket{x}\ket{y_{k}'}}{\sqrt{2^{n'}}}\right).
\end{align}
Then, we have
\begin{align}
&\frac{1}{\sqrt{2}}\left\{\ket{0}\left(\frac{1}{2^{n}}\ket{0^{n}}\sum_{k=1}^{a}|X_{k}|\ket{y_{k}}+\mathrm{{other\ terms}}\right)\right. \\
&+ \left. \ket{1}\left(\frac{1}{2^{n'}}\ket{0^{n}}\sum_{k=1}^{a'}|X_{k}'|\ket{y_{k}'}+\mathrm{{other\ terms}'}\right)\right\} \label{eq:sp-has-other-terms}.
\end{align}
The probability that we obtain \(0^{n}\) via the index state measurement is given by
\begin{equation} \label{eq:sp-prob-lower-bound}
\Pr(0^{n})=\frac{1}{2}\left(\frac{\sum_{k=1}^{a}|X_{k}|^2}{2^{2n}}+\frac{\sum_{k=1}^{a'}|X_{k}'|^2}{2^{2n'}}\right).
\end{equation}
Owing to Eq. \eqref{eq:prob-lower-bound}, this probability is lower bounded by
\begin{equation}
\Pr(0^{n})\geq \frac{1}{2}\left(a^{-1}+a'^{-1}\right).
\end{equation}
Therefore, we can remove the index state with probability at least \((a^{-1}+a'^{-1})/2\).
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:classical-sample-size}]
We use the result given in \cite{Weissman};
for the empirical distribution of a sequence of independent identically distributed random variables,
$\hat{P}_{\mathbf{x}^S}$, and the true distribution $P$, the following inequality holds:
\begin{align} \label{eq:l1-deviation}
\Pr\left(\left\|P-\hat{P}_{\mathbf{x}^S}\right\|_1\geq\epsilon\right)
&\leq(2^a-2)e^{-S\varphi(\pi_P)\epsilon^2/4},
\end{align}
where \(\varphi(p)\) and \(\pi_P\) are given by
\begin{equation}
\varphi(p)=\frac{1}{1-2p}\ln{\frac{1-p}{p}}
\end{equation}
and
\begin{equation}
\pi_P=\max_{k} \min \{P(y_k), 1-P(y_k)\} = \max_{k} P(y_k).
\end{equation}
Here we assumed $P(y_k) < 1/2$, which in fact holds in our case.
Note that $\delta$ in Eq.~\eqref{eq:cl-l1-bound} is defined by the rightmost side of
Eq.~\eqref{eq:l1-deviation}.
Now, because the true probability distribution $P$ is given by Eq.~\eqref{eq:true-prob-dist}, we obtain
\begin{equation}
\pi_P=\max_{k}\frac{|X_k|}{2^n}.
\end{equation}
Then, from Eq.~\eqref{eq:v_range_constraint}, we have the following inequality:
\begin{align}
\frac{1}{(n/2)^{c-1}}\binom{n}{n/2}\frac{1}{2^n} \leq \pi_P \leq \binom{n}{n/2}\frac{1}{2^n},
\end{align}
which implies
\begin{align}
\frac{2^{c-1/2}}{\sqrt{\pi}n^{c-1/2}} \lesssim \pi_P \lesssim \frac{\sqrt{2}}{\sqrt{\pi n}}.
\end{align}
Thus, $\varphi(\pi_P)$ is of the order of $\Omega(\ln{n})$, and then $\delta$ can be evaluated
as $\delta \sim 2^a {\rm exp}(-S\epsilon^2 \Omega(\ln{n}))$, from Eq.~\eqref{eq:l1-deviation}.
As a result, we find
\begin{align}
S=\frac{a\ln{2}-\ln{\delta}}{\epsilon^2 \Omega(\ln{n})}
=O\left(\frac{a-\log{\delta}}{\epsilon^2\log{n}}\right)
=O\left(\frac{a}{\log{n}}\right).
\end{align}
Therefore we arrive at \(S=O(a/\log{n})\).
\end{proof}
\subsection*{Properties of the SH kernel}
\subsubsection*{Positive semidefiniteness}
First, we prove that the SH kernel is positive semidefinite, which is necessary to construct a
valid classifier based on the SH kernel.
\begin{lemma}\label{lem:semidef}
Let $\{ G_x, ~ x=1, \ldots, N\}$ be a set of graphs.
Then, the SH kernel \eqref{eq:kernel_def_SH}:
\begin{align}
K_{\rm SH}(G, G')=k_{\rm SH}(G, G') f_{G}^\top f_{G'},
\end{align}
where $f_{G}=[|X_1|, \ldots, |X_a| ]^\top$ and
\begin{align}
k_{\rm SH}(G, G')
=\frac{2}{\frac{2^{n'}}{2^n}\sum_{k=1}^a |X_k|^2 + \frac{2^n}{2^{n'}}\sum_{k=1}^a |X_k'|^2},
\end{align}
is a positive semidefinite kernel;
that is, the matrix $(K_{\rm SH}(G_x, G_y))_{x,y=1,\ldots,N}$ is a positive semidefinite matrix.
\end{lemma}
\begin{proof}
We use the following general fact: if $\kappa_1$ and $\kappa_2$ are positive semidefinite kernels,
then the product $\kappa(G,G')=\kappa_1(G,G')\kappa_2(G,G')$ is also a positive semidefinite kernel.
Now, because \(f_{G}^\top f_{G'}\) is positive semidefinite, it suffices to prove
that $k_{\rm SH}(G, G')$ is positive semidefinite.
For this purpose, we define
\begin{align}
p_x = \sum_{k=1}^a |X_{kx}|^2, ~~
q_{x,y} = \frac{2^{n_y}}{2^{n_x}}.
\end{align}
Then,
\begin{align}
k_{\rm SH}(G_x, G_y)=\frac{2}{q_{x,y}p_x+q_{y,x}p_y}.
\end{align}
Also we define
\begin{align} \label{eq:cholesky-L}
L_{i,j}=\begin{cases}
\frac{\prod_{k=1}^{j-1}\left(q_{k,j}p_k-q_{j,k}p_j\right)}
{\sqrt{2p_j}\prod_{k=1}^{j-1}\left(q_{k,j}p_k+q_{j,k}p_j\right)} & (i=j) \\
\frac{\sqrt{2p_j}\prod_{k=1}^{j-1}\left(q_{k,i}p_k-q_{i,k}p_i\right)}
{\prod_{k=1}^j\left(q_{k,i}p_k+q_{i,k}p_i\right)} & (i>j) \\
0 & (i<j).
\end{cases}
\end{align}
Below we will show that the matrix $(k_{\rm SH}(G_x, G_y))_{x,y=1,\ldots,N}$ is represented as
\begin{align} \label{eq:cholesky}
k_{\rm SH} = 2 LL^\top,
\end{align}
meaning that $k_{\rm SH}(G, G')$ is positive semidefinite.
The proof is divided into the three cases (i), (ii), and (iii).
\\
(i) The case \(x=y\).
\begin{align}
\left(LL^\top\right)_{x,x}&=\sum_{l=1}^x L_{x,l}^2 \\
&=\sum_{l=1}^{x-1}\frac{2p_l\prod_{k=1}^{l-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)^2}{\prod_{k=1}^l\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)^2} \\
&+\frac{\prod_{k=1}^{x-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)^2}{2p_x\prod_{k=1}^{x-1}\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)^2}.
\end{align}
Here, we define \(\alpha(t)\), \(\beta(t)\) (\(t\in[0,x-1]\)) as
\begin{align}
\alpha(t)&=\sum_{l=1}^{t}\frac{2p_l\prod_{k=1}^{l-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)^2}{\prod_{k=1}^l\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)^2}, \\
\beta(t)&=\frac{\prod_{k=1}^{t}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)^2}{\prod_{k=1}^{t}\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)^2}.
\end{align}
Then we have
\begin{align}
\left(LL^\top\right)_{x,x} = \alpha(x-1)+\frac{\beta(x-1)}{2p_x}.
\end{align}
When \(t\geq 1\),
\begin{align}
&\alpha(t)+\frac{\beta(t)}{2p_x} \\
&=\alpha(t-1)
+\frac{2p_t\prod_{k=1}^{t-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)^2}
{\prod_{k=1}^t\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)^2} +\frac{\beta(t)}{2p_x} \\
&=\alpha(t-1)+\left(4p_t p_x + \left(q_{t,x}p_t-\frac{1}{q_{t,x}}p_x\right)^2\right) \\
&\hspace{2em}
\times\frac{\prod_{k=1}^{t-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)^2}
{2p_x\prod_{k=1}^t\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)^2} \\
&=\alpha(t-1)+\frac{\prod_{k=1}^{t-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)^2}{2p_x\prod_{k=1}^{t-1}\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)^2} \\
&=\alpha(t-1)+\frac{\beta(t-1)}{2p_x}.
\end{align}
Thus, we obtain the following equation
\begin{align}
\left(LL^\top\right)_{x,x}=\alpha(0)+\frac{\beta(0)}{2p_x}=\frac{1}{2p_x}
=k_{\rm SH}(G_x, G_x)/2.
\end{align}
(ii) The case \(x<y\).
\begin{align}
\left(LL^\top\right)_{x,y}&=\sum_{l=1}^x L_{x,l}L_{y,l} \\
&=\sum_{l=1}^{x-1}\frac{\sqrt{2p_l}\prod_{k=1}^{l-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)}{\prod_{k=1}^l\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)} \\
&\hspace{2em}\times\frac{\sqrt{2p_l}\prod_{k=1}^{l-1}\left(q_{k,y}p_k-\frac{1}{q_{k,y}}p_y\right)}{\prod_{k=1}^l\left(q_{k,y}p_k+\frac{1}{q_{k,y}}p_y\right)} \\
&+\frac{\prod_{k=1}^{x-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)}{\sqrt{2p_x}\prod_{k=1}^{x-1}\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)} \\
&\hspace{2em}\times\frac{\sqrt{2p_x}\prod_{k=1}^{x-1}\left(q_{k,y}p_k-\frac{1}{q_{k,y}}p_y\right)}{\prod_{k=1}^x\left(q_{k,y}p_k+\frac{1}{q_{k,y}}p_y\right)}.
\end{align}
Here, we define \(\rho(t)\), \(\sigma(t)\) (\(t\in[0,x-1]\)) as
\begin{align}
\rho(t)&=\sum_{l=1}^{t}\frac{2p_l\prod_{k=1}^{l-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)}{\prod_{k=1}^l\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)} \\
&\times\frac{\prod_{k=1}^{l-1}\left(q_{k,y}p_k-\frac{1}{q_{k,y}}p_y\right)}{\prod_{k=1}^l\left(q_{k,y}p_k+\frac{1}{q_{k,y}}p_y\right)}, \\
\sigma(t)&=\frac{\prod_{k=1}^{t}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)}{\prod_{k=1}^{t}\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)} \\
&\times\frac{\prod_{k=1}^{t}\left(q_{k,y}p_k-\frac{1}{q_{k,y}}p_y\right)}{\prod_{k=1}^{t}\left(q_{k,y}p_k+\frac{1}{q_{k,y}}p_y\right)}.
\end{align}
Then we have
\begin{align}
\left(LL^\top\right)_{x,y} = \rho(x-1)+\frac{\sigma(x-1)}{q_{x,y}p_x+\frac{1}{q_{x,y}}p_y}.
\end{align}
When \(t\geq 1\),
\begin{align}
&\rho(t)+\frac{\sigma(t)}{q_{x,y}p_x+\frac{1}{q_{x,y}}p_y} \\
&=\rho(t-1)+\frac{2p_t\prod_{k=1}^{t-1}\left(q_{k,x}p_k-\frac{1}{q_{k,x}}p_x\right)}{\prod_{k=1}^t\left(q_{k,x}p_k+\frac{1}{q_{k,x}}p_x\right)} \\
&\hspace{2em}\times\frac{\prod_{k=1}^{t-1}\left(q_{k,y}p_k-\frac{1}{q_{k,y}}p_y\right)}{\prod_{k=1}^t\left(q_{k,y}p_k+\frac{1}{q_{k,y}}p_y\right)}+\frac{\sigma(t)}{q_{x,y}p_x+\frac{1}{q_{x,y}}p_y} \\
&=\rho(t-1)+\left(2\left(q_{x,y}p_x+\frac{1}{q_{x,y}}p_y\right)p_t+q_{t,x}q_{t,y}p_t^2 \right.\\
&\hspace{2em}-\left.\left(\frac{q_{t,y}}{q_{t,x}}p_x+\frac{q_{t,x}}{q_{t,y}}p_y\right)p_t+\frac{1}{q_{t,x}q_{t,y}}p_x p_y\right) \\
&\hspace{2em}\times\frac{1}{\left(q_{t,x}p_t+\frac{1}{q_{t,x}}p_x\right)\left(q_{t,y}p_t+\frac{1}{q_{t,y}}p_y\right)} \\
&\hspace{2em}\times\frac{\sigma(t-1)}{\left(q_{x,y}p_x+\frac{1}{q_{x,y}}p_y\right)}.
\end{align}
Here,
\begin{align}
q_{x,y}=\frac{2^{n_y}}{2^{n_x}}=\frac{2^{n_y}/2^{n_t}}{2^{n_x}/2^{n_t}}=\frac{q_{t,y}}{q_{t,x}}
\end{align}
holds, and thus we have
\begin{align}
\rho(t)+\frac{\sigma(t)}{q_{x,y}p_x+\frac{1}{q_{x,y}}p_y}
=\rho(t-1)+\frac{\sigma(t-1)}{q_{x,y}p_x+\frac{1}{q_{x,y}}p_y}.
\end{align}
As a result, we obtain the following equation:
\begin{align}
\left(LL^\top\right)_{x,y} &= \rho(0)+\frac{\sigma(0)}{q_{x,y}p_x+\frac{1}{q_{x,y}}p_y} \\
&=\frac{1}{q_{x,y}p_x+\frac{1}{q_{x,y}}p_y}=k_{\rm SH}(G_x, G_y)/2.
\end{align}
(iii) The case \(x>y\).
The result directly follows by exchanging \(x\) and \(y\) in the discussion given in the case (ii);
\begin{align}
\left(LL^\top\right)_{x,y}
&=\sum_{l=1}^y L_{x,l}L_{y,l}
=\frac{1}{q_{y,x}p_y+\frac{1}{q_{y,x}}p_x} \\
&=\frac{1}{q_{x,y}p_x+\frac{1}{q_{x,y}}p_y}=k_{\rm SH}(G_x, G_y)/2.
\end{align}
Summarizing, we have Eq.~\eqref{eq:cholesky}, meaning that the kernel
$K_{\rm SH}(G, G')$ is positive semidefinite.
\end{proof}
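The factorization \eqref{eq:cholesky} can also be spot-checked numerically; the following sketch (ours, for illustration) draws random values $p_x>0$ and qubit numbers $n_x$ and verifies that the resulting matrix $(k_{\rm SH}(G_x,G_y))$ has no negative eigenvalues up to round-off.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 6
p  = rng.uniform(1.0, 10.0, size=N)  # p_x = sum_k |X_{k x}|^2 (any positive values)
nq = rng.integers(3, 8, size=N)      # n_x = number of index qubits of G_x

q = 2.0 ** (nq[None, :] - nq[:, None])   # q[x, y] = 2^{n_y} / 2^{n_x}
k_SH = 2.0 / (q * p[:, None] + q.T * p[None, :])

print(np.linalg.eigvalsh(k_SH).min())    # >= -1e-12: PSD up to round-off
\end{verbatim}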
\subsubsection*{Relationship with the Bhattacharyya kernel}
In this paper, we proposed two kernels: the Bhattacharyya (BH) kernel \eqref{eq:kernel_def_nff}
arising in the case of swap test and the SH kernel \eqref{eq:kernel_def_SH} arising in the case
of switch test.
The following relationship holds:
\begin{lemma} \label{thm:cosinesimilarity-geq-ourkernel}
The BH kernel \eqref{eq:kernel_def_nff} is bigger than or equal to the SH kernel
\eqref{eq:kernel_def_SH}.
\end{lemma}
\begin{proof}
Note that $f_{G}^\top f_{G'}$ is common in both kernels.
From the inequality of the arithmetic and geometric means, we obtain the following inequality:
\begin{align}
K_{\rm SH}(G, G')
&=\frac{2}{\frac{2^{n'}}{2^{n}}\sum_{k=1}^a |X_{k}|^2+\frac{2^{n}}{2^{n'}}\sum_{k=1}^a |X_{k}'|^2}
f_{G}^\top f_{G'} \\
& \hspace{-4em}
\leq\frac{1}{\sqrt{\frac{2^{n'}}{2^{n}}\sum_{k=1}^a |X_{k}|^2}
\sqrt{\frac{2^{n}}{2^{n'}}\sum_{k=1}^a |X_{k}'|^2}}
f_{G}^\top f_{G'} \\
& \hspace{-4em}
=\frac{1}{\sqrt{\sum_{k=1}^a |X_{k}|^2}\sqrt{\sum_{k=1}^a |X_{k}'|^2}}
f_{G}^\top f_{G'} = K_{\rm BH}(G, G').
\end{align}
\end{proof}
This result means that, as a similarity measure, the SH kernel is more conservative than the BH kernel
(note that both kernels are upper bounded by 1, because they originate from the inner product
$\braket{G|G'}$).
Figure~\ref{fig:cosinesimilarity-ourkernel} depicts the relation between these kernels for the
MUTAG training dataset; this result implies that, in general, the difference between the kernel values
is tiny, but, as mentioned above, there can still be a visible difference in classification
accuracy owing to the conservativeness of the corresponding classifiers.
\\
\begin{figurehere}
\includegraphics[width=0.7\linewidth]{cosinesimilarity-ourkernel-MUTAG.png}
\caption{Relationship between BH (vertical axis) and SH (horizontal axis) for the MUTAG dataset.}
\label{fig:cosinesimilarity-ourkernel}
\end{figurehere}
\mbox{}
\subsection*{Time complexity for constructing $E(G,x)$}
\begin{lemma} \label{thm:adder}
The time complexity of adding 1 to the state $\ket{k} (0\leq k \leq A)$ is $O((\log A)^2)$.
\end{lemma}
\begin{proof}
We use the multi-controlled NOT gate $C^{i-1} X$, which applies the NOT gate $X$ to the
\(i^{\text{th}}\) target qubit if the values of the control qubits (i.e., the $0^{\text{th}}, 1^{\text{st}},
\ldots, (i-1)^{\text{th}}$ qubits) are all 1.
Adding 1 to $k$ can be done by applying $C^{i-1} X$ to the state $\ket{k}$ for $i=\log A,\ldots,1$,
sweeping from the most significant bit downwards, followed by an $X$ on the $0^{\text{th}}$ qubit.
Now $C^{i-1} X$ is composed of $O(i)$ Toffoli gates~\cite{nielsen};
in other words, the time complexity for operating $C^{i-1} X$ is $O(i)$.
Hence, the maximum of the total time complexity of adding 1 to the state $\ket{k}$ is
\begin{equation}
\sum_{i=1}^{\log A} O(i) = O((\log A)^2).
\end{equation}
\end{proof}
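A classical bit-level sketch of this incrementer (illustrative only; the function name is ours) is as follows: bit $i$ is flipped iff bits $0,\ldots,i-1$ are all 1, sweeping from the most significant bit downwards, followed by a final flip of bit 0.
\begin{verbatim}
def increment(bits):
    """Add 1 to a register given as a list of bits, bits[0] least significant."""
    out = list(bits)
    for i in range(len(bits) - 1, 0, -1):   # C^{i-1} X on the i-th bit, MSB first
        if all(out[:i]):
            out[i] ^= 1
    out[0] ^= 1                             # final X on the 0th bit
    return out

print(increment([1, 1, 0, 0]))              # 3 -> 4, i.e. [0, 0, 1, 0]
\end{verbatim}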
\begin{lemma} \label{thm:v}
The time complexity for preparing the feature state $\ket{\#v}$ of a subgraph $x$ is
$O(|V|(\log |V|)^2)$.
\end{lemma}
\begin{proof}
The oracle is realized as a set of controlled adders of 1, controlled by the index state $\ket{x}$; that is,
if the $i$th index qubit is $\ket{1}$ (meaning that the $i$th vertex is contained in the subgraph $x$),
then the corresponding controlled adder adds 1 to the feature state.
The index runs from $i=0$ to $i=n-1$ with $n=|V|$, and hence there are in total $n$ controlled adders.
Also, each controlled adder needs $O((\log n)^2)$ time complexity due to Lemma~\ref{thm:adder}.
Therefore, the total time complexity is $n\times O((\log n)^2) = O(n(\log n)^2)$.
\end{proof}
\begin{lemma} \label{thm:e}
The time complexity for preparing the feature state $\ket{\#e}$ of a subgraph $x$ is $O(|E|(\log |E|)^2)$.
\end{lemma}
\begin{proof}
The idea is the same as in Lemma~\ref{thm:v}, except that each adder is controlled by the pair of index qubits
corresponding to an edge of $G$, so that 1 is added only when both endpoints of the edge are contained in the subgraph $x$.
Because there are $|E|$ such pairs, the total time complexity is
$|E|\times O((\log |E|)^2) = O(|E|(\log |E|)^2)$.
\end{proof}
\begin{lemma} \label{thm:dD}
The time complexity for preparing the feature state $\ket{\#dD}$ (the number of vertices
with degree $D$, where $D=1, 2, 3$) of a subgraph $x$ is $O(n((\log n)^2+d(\log d)^2))$.
\end{lemma}
\begin{proof}
Recall that $d$ is the maximum degree of the graph $G$.
First, we store the degree of the $i$th vertex within the subgraph $x$ into an auxiliary register of $\log d$ qubits.
This can be done by performing an adder controlled by each index qubit corresponding to a vertex
adjacent to the $i$th vertex.
This operation requires $\log d$ qubits and $O(d(\log d)^2)$ time complexity.
Next, if the stored value is $D$, we perform the adder controlled by the auxiliary qubits, on
the feature state.
The required space of $\#dD$ is $O(\log n)$, because $\#dD\leq n$.
The time complexity is $O((\log n)^2)$.
Finally, we initialize the auxiliary qubits by the inverse operation.
We perform these calculations sequentially from the $0^{\text{th}}$ qubit to the $(n-1)^{\text{th}}$ qubit.
As a result, the total time complexity is
\begin{align}
& n\times(O(d(\log d)^2) + O((\log n)^2) + O(d(\log d)^2)) \\
&\hspace{2em} = O(n((\log n)^2+d(\log d)^2)).
\end{align}
\end{proof}
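As a sanity check of what the controlled adders of Lemmas~\ref{thm:v}, \ref{thm:e} and \ref{thm:dD} accumulate, the following classical sketch computes the same features $\#v$, $\#e$ and $\#dD$ ($D=1,2,3$) for one subgraph $x$ of $G$; the adjacency-list representation and the helper name are our own and are not part of the quantum construction.
\begin{verbatim}
# Classical reference computation of the per-subgraph features.
def subgraph_features(adj, x, max_degree_tracked=3):
    num_v = len(x)
    num_e = sum(1 for u in x for v in adj[u] if v in x and u < v)
    deg_counts = [0] * (max_degree_tracked + 1)   # deg_counts[D] = #dD
    for u in x:
        d = sum(1 for v in adj[u] if v in x)
        if 1 <= d <= max_degree_tracked:
            deg_counts[d] += 1
    return num_v, num_e, deg_counts[1:]

# Triangle with a pendant vertex: edges 0-1, 1-2, 2-0, 2-3; subgraph x = {0,1,2}
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(subgraph_features(adj, {0, 1, 2}))          # (3, 3, [0, 3, 0])
\end{verbatim}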
\subsection*{Note on the simulation method}
In the numerical experiment, we studied various types of graph sets containing a graph with
28 vertices, in which case we need a quantum device composed of 56 qubits.
A naive implementation of the numerical simulator would require memory of size $2^{56}$,
probably larger than 1 Exabit depending on the accuracy, which is not realistic.
Hence, we adopted a parallel GPU computation; that is, we compute $\ket{E(G,x)}$ for each
subgraph $x$ and store it in the memory of an individual GPU, which thus requires memory of size only $2^{28}$.
\section*{Data Availability}
A complete set of the kernel values and the probabilities for removing the index state are available at
\url{https://github.com/TRSasasusu/GraphKernelEncodingAllSubgraphsQC}.
\section*{Code Availability}
The codes for computing the kernel values and the classification protocols are available at
\url{https://github.com/TRSasasusu/GraphKernelEncodingAllSubgraphsQC}.
\printbibliography
\end{document} |
\begin{document}
\begin{center}
{\bf \large The Gentlest Ascent Dynamics}
Weinan E\\
Department of Mathematics and PACM,
Princeton University\\[1ex]
Xiang Zhou\\
Division of Applied Mathematics,
Brown University
\end{center}
\begin{abstract}
Dynamical systems that describe the escape from the basins of attraction of
stable invariant sets are presented and analyzed.
It is shown that the stable fixed points of such dynamical systems are the
index-1 saddle points. Generalizations to high index saddle
points are discussed. Both gradient and non-gradient systems are considered.
Preliminary results on the nature of the dynamical behavior are presented.
\end{abstract}
\section{The gentlest ascent dynamics}
Given an energy function $V$ on $\mathbf R^n$,
the simplest form of the steepest descent dynamics (SDD) associated with $V$ is
\begin{equation}
\label{SDD-grad}
\dot{\mathbf x} = - \nabla V(\mathbf x).
\end{equation}
It is easy to see that if $\mathbf x(\cdot)$ is a solution to \eqref{SDD-grad}, then
$V(\mathbf x(t))$ is a decreasing function of $t$. Furthermore, the stable fixed points of
the dynamics \eqref{SDD-grad} are the local minima of $V$.
Each local minimum has an associated basin of attraction which consists of all the initial
conditions from which the dynamics described by \eqref{SDD-grad} converges to that local
minimum as time goes to infinity.
For \eqref{SDD-grad}, these are simply the potential wells of $V$.
The basins of attraction are separated by separatrices, on
which the dynamics converges to saddle points.
We are interested in the opposite dynamics:
The dynamics of escaping a basin of attraction.
The most naive suggestion is to simply reverse the sign in \eqref{SDD-grad};
the dynamics would then find the local maxima of $V$ instead.
This is not what we are interested in.
We are interested in the gentlest
way in which the dynamics climbs out of the basin of attraction.
Intuitively, it is clear that what we need is a dynamics that converges to the index-1
saddle points of $V$.
Such a problem is of general interest to the study of noise-induced transition between
metastable states \cite{String2002,RMP1990} :
Under the influence of small noise, with high probability,
the escape pathway has to go through the neighborhood of a saddle point \cite{FW1998}.
The following dynamics serves the purpose:
\begin{subequations}
\begin{center}
\begin{empheq}[left=\empheqlbrace]{align}
\dot{\mathbf x} &= - \nabla V(\mathbf x) + 2\frac{(\nabla V, \mathbf v)}{(\mathbf v, \mathbf v)} \mathbf v, \\
\dot{\mathbf v} &= - \nabla^2 V(\mathbf x)\mathbf v + \frac{(\mathbf v, \nabla^2 V \mathbf v)}{(\mathbf v, \mathbf v)} \mathbf v .
\end{empheq}
\end{center}
\label{GAD-grad}
\end{subequations}
We will show later that the stable fixed points of this dynamics are precisely the index-1
saddle points of $V$ and the unstable directions of $V$ at the saddle points.
Intuitively the idea is quite simple: The second equation in \eqref{GAD-grad}
attempts to find the direction that corresponds to the smallest eigenvalue of
$ \nabla ^2 V$,
and the last term in the first equation makes this direction an ascent direction.
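As a concrete illustration, the dynamics \eqref{GAD-grad} can be integrated by a simple forward Euler scheme. The sketch below uses our own discretization and parameter choices (not part of the analysis in this note) for the two dimensional double-well potential studied later in the Examples section; for this data the trajectory started near a local minimum is expected to approach the index-1 saddle point at the origin.
\begin{verbatim}
import numpy as np

# Forward-Euler sketch of the gradient-case GAD for
# V(x,y) = (x^2-1)^2/4 + mu*y^2/2 (step size and iteration count are ours).
mu = 3.0
grad_V = lambda p: np.array([p[0]**3 - p[0], mu * p[1]])
hess_V = lambda p: np.array([[3 * p[0]**2 - 1, 0.0], [0.0, mu]])

def gad_gradient(x, v, dt=1e-2, steps=20000):
    for _ in range(steps):
        g, H = grad_V(x), hess_V(x)
        x = x + dt * (-g + 2 * (g @ v) / (v @ v) * v)
        v = v + dt * (-H @ v + (v @ (H @ v)) / (v @ v) * v)
        v = v / np.linalg.norm(v)       # optional renormalization
    return x, v

x_star, v_star = gad_gradient(np.array([0.8, 0.1]), np.array([1.0, 0.0]))
print(x_star)   # approaches the index-1 saddle point (0, 0) for this data
\end{verbatim}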
This consideration is not limited to the so-called ``gradient systems'' such as
\eqref{SDD-grad}. It can be extended to non-gradient systems.
Consider the following dynamical system:
\begin{equation}
\label{SDD-non-grad}
\dot{\mathbf x} = \mathbf F (\mathbf x).
\end{equation}
We can also speak about the stable invariant sets of this system, and escaping
basins of attraction of the stable invariant sets.
In particular, we can also think about finding index-1 saddle points,
though in this case, there is no guarantee that under the influence of small
noise, escaping the basin of attraction has to proceed via saddle points\cite{Xiangthesis}.
For non-gradient systems, \eqref{GAD-grad} has to be modified to
\begin{subequations}
\begin{center}
\begin{empheq}[left=\empheqlbrace]{align}
\dot{\mathbf x} &= \mathbf F(\mathbf x) -2 \frac{(\mathbf F(\mathbf x), \mathbf w )}{ (\mathbf w,\mathbf v)}\mathbf v,\label{GAD-non-grad-x}\\
\dot{\mathbf v} &= (\nabla \mathbf F(\mathbf x))\mathbf v - \alpha(\mathbf v)\mathbf v, \label{GAD-non-grad-v}\\
\dot{\mathbf w} &= (\nabla \mathbf F(\mathbf x))^T \mathbf w - \beta(\mathbf v,\mathbf w)\mathbf w. \label{GAD-non-grad-w}
\end{empheq}
\end{center}
\label{GAD-non-grad}
\end{subequations}
Here two directional vectors $\mathbf v$ and $\mathbf w$ are needed in order to follow
both the right and left eigenvectors of the Jacobian.
Given the matrix $\nabla \mathbf F(\mathbf x)$, two scalar valued functions $\alpha$ and $\beta$ are defined by
\begin{subequations}
\begin{center}
\begin{empheq}[left=\empheqlbrace]{align}
\alpha(\mathbf v) &=(\mathbf v, (\nabla \mathbf F(\mathbf x))\mathbf v), \\
\beta(\mathbf v,\mathbf w)&= 2(\mathbf w, (\nabla \mathbf F(\mathbf x))\mathbf v)-\alpha(\mathbf v).
\end{empheq}
\end{center}
\label{eqn:alpha-beta}
\end{subequations}
We take the normalization $(\mathbf v,\mathbf v)=1$ and $(\mathbf w, \mathbf v) = 1$;
the functions $\alpha$ and $\beta$ are defined precisely to keep this normalization,
which is then preserved by the dynamics as long as it holds initially.
Thus, the first equation in \eqref{GAD-non-grad} is actually equivalent to $\dot{\mathbf x} = \mathbf F(\mathbf x) -2 {(\mathbf F(\mathbf x), \mathbf w )} \mathbf v$.
(Of course, one can enforce other types of normalization condition, such as the symmetric one:
$(\mathbf v,\mathbf v)=(\mathbf w,\mathbf w)$ and $(\mathbf w, \mathbf v) = 1$, and define new expressions of $\alpha$ and $\beta$ accordingly.)
In the case of gradient flows, we can take $\mathbf w=\mathbf v$ and \eqref{GAD-non-grad}
reduces to \eqref{GAD-grad}.
\begin{figure}[htbp!]
\begin{center}
\scalebox{0.8}{ \input{EF_illustrate.pstex_t} }\ \caption{Illustration of the
gentlest ascent dynamics.
$\mathbf F$ is the force of the original dynamics and $\tilde{\mathbf F}$ is the
force of the gentlest ascent dynamics. $\mathbf v_1$ and $\mathbf v_2$ represent the
unstable and stable right eigenvectors, respectively; $\mathbf w_1$ and
$\mathbf w_2$ are the corresponding left eigenvectors. Note that $\mathbf w_1\perp
\mathbf v_2$ and $\mathbf w_2\perp \mathbf v_1$. $\mathbf F$ has the decomposition
$\mathbf F=\mathbf F_1+\mathbf F_2=c_1\mathbf v_1+c_2\mathbf v_2$ where the coefficient $ c_1 = (\mathbf F,
\mathbf w_1)/(\mathbf v_1, \mathbf w_1)$. Thus, $\tilde{\mathbf F}:=-\mathbf F_1+\mathbf F_2=\mathbf F-2\mathbf F_1=\mathbf F-2c_1\mathbf v_1$. }
\label{fig:EF_illustrate}
\end{center}
\end{figure}
We call this the {\it gentlest ascent dynamics}, abbreviated GAD.
It has its origin in some of the numerical
techniques proposed for finding saddle points. For example, there is indeed a numerical
algorithm proposed by Crippen and Scheraga called the
``gentlest ascent method'' \cite{Crippen1971}. The main idea
is similar to that of GAD, namely, to find the right direction, the direction of the
eigenvector corresponding to the smallest eigenvalue, and to make that an ascent
direction.
But the details of the gentlest ascent method seem to be quite a bit more complex.
The ``eigenvector following method'' proposed in literature, for example, \cite{cerjan1981,Energylanscapes},
is based on a very similar idea.
There at each step, one finds the eigenvectors of the Hessian matrix of the potential.
Also closely related is the ``dimer method'' in which two states connected by a small line
segment are evolved simultaneously in order to find the saddle point \cite{Dimer1999}.
One advantage of the dimer method is that it avoids computing the Hessian of the potential.
From the viewpoint of our GAD,
the spirit of the ``dimer method'' amounts to using a central difference scheme to numerically calculate the matrix-vector products
in GAD \eqref{GAD-non-grad} and \eqref{eqn:alpha-beta} by writing
$(\nabla \mathbf F(\mathbf x)) \mathbf{b} = \frac{d }{d \varepsilon} \mathbf F(\mathbf x+\varepsilon\mathbf{b})|_{\varepsilon=0}\approx \frac{1}{2\varepsilon} (\mathbf F(\mathbf x+\varepsilon\mathbf{b})-\mathbf F(\mathbf x-\varepsilon\mathbf{b}))$ for any vector $\mathbf{b}$.
We believe that as a dynamical system, the continuous formulation embodied in
\eqref{GAD-grad} and \eqref{GAD-non-grad} has its own interest.
We will demonstrate some of these interesting aspects in this note.
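For instance, the Hessian-free evaluation of these matrix-vector products can be sketched as follows; the step size $\varepsilon$ and the test function are our own choices.
\begin{verbatim}
import numpy as np

# Central-difference approximation of (grad F(x)) b, as in the dimer-style formula above.
def jac_vec(F, x, b, eps=1e-5):
    return (F(x + eps * b) - F(x - eps * b)) / (2 * eps)

# Test: F(x) = (x0^2, x0*x1) has Jacobian [[2*x0, 0], [x1, x0]].
F = lambda x: np.array([x[0]**2, x[0] * x[1]])
print(jac_vec(F, np.array([1.0, 2.0]), np.array([1.0, 0.0])))   # approx [2., 2.]
\end{verbatim}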
\noindent
{\bf Proposition.}
Assume that the vector field $\mathbf F$ is $C^3(\mathbf R^{n})$.
\begin{enumerate}[(a)]
\item If $(\mathbf x_{*},\mathbf v_{*},\mathbf w_{*})$ is a fixed point of the gentlest ascent dynamics \eqref{GAD-non-grad} and
$\mathbf v_{*},\mathbf w_{*}$ are normalized such that $\mathbf v_{*}^{T}\mathbf v_{*}=\mathbf v_{*}^{T}\mathbf w_{*}=1$, then
$\mathbf v_{*}$ and $\mathbf w_{*}$ are the right and left eigenvectors, respectively, of $\nabla \mathbf F(\mathbf x_{*})$ corresponding to one eigenvalue $\lambda_{*}$, i.e.,
$$(\nabla \mathbf F(\mathbf x_{*}))\mathbf v_{*}=\lambda_{*} \mathbf v_{*} , \quad (\nabla \mathbf F(\mathbf x_{*}))^{T}\mathbf w_{*}=\lambda_{*} \mathbf w_{*},$$
and $\mathbf x_{*}$ is a fixed point of the original dynamics system, i.e., $\mathbf F(\mathbf x_{*})=\mathbf{0}$.
\item Let $\mathbf x_s$ be a fixed
point of the original dynamical system $\dot{\mathbf x}=\mathbf F(\mathbf x)$.
If the Jacobian matrix $\mathbb{J}(\mathbf x_s)=\nabla \mathbf F(\mathbf x_{s})$ has $n$
distinct real eigenvalues
$\lambda_1, \lambda_2, \cdots, \lambda_n$ and $n$ linearly independent right and left
eigenvectors, denoted by $\mathbf v_i$ and $\mathbf w_i$ correspondingly, i.e.,
$$\mathbb{J}(\mathbf x_{s})\mathbf v_{i}=\lambda_{i}\mathbf v_{i},\qquad \mathbb{J}(\mathbf x_{s})^{T}\mathbf w_{i}=\lambda_{i}\mathbf w_{i},\quad i=1,\cdots, n$$
and in addition, we impose
the normalization condition $\mathbf v_{i}^{T}\mathbf v_{i}=\mathbf w_{i}^T \mathbf v_i=1,\ \forall i$ ,
then
for all $i=1,\cdots,n$, $(\mathbf x_s,\mathbf v_i,\mathbf w_i)$ is a fixed point of the gentlest ascent dynamics \eqref{GAD-non-grad}.
Furthermore, among these $n$ fixed points, there exists
one fixed point $(\mathbf x_s,\mathbf v_{i'},\mathbf w_{i'})$ which is linearly stable
if and only if $\mathbf x_s$ is an index-$1$ saddle point of the original dynamical system
$\dot{\mathbf x}=\mathbf F(\mathbf x)$ and the eigenvalue $\lambda_{i'}$ corresponding to $\mathbf v_{i'}$, $\mathbf w_{i'}$
is the only positive eigenvalue of $\mathbb{J}(\mathbf x_s)$.
\end{enumerate}
{\bf Proof.}
(a) Under the given condition, it is obvious that $
(\nabla \mathbf F(\mathbf x_{*}))\mathbf v_{*} = \alpha(\mathbf v_{*})\mathbf v_{*}$ and
$(\nabla \mathbf F(\mathbf x_{*}))^T \mathbf w_{*} = \beta(\mathbf v_{*},\mathbf w_{*})\mathbf w_{*}$.
By definition and other conditions,
$
\beta(\mathbf v_{*},\mathbf w_{*}) = 2\mathbf w_{*}^{T} (\nabla \mathbf F(\mathbf x_{*}))\mathbf v_{*}-\alpha(\mathbf v_{*})=
2\alpha(\mathbf v_{*})\,\mathbf w_{*}^{T}\mathbf v_{*}-\alpha(\mathbf v_{*})=\alpha(\mathbf v_{*})
$, using $\mathbf w_{*}^{T}\mathbf v_{*}=1$. Therefore, $\mathbf v_{*}$ and $\mathbf w_{*}$ share the same eigenvalue $\lambda_{*}=\alpha(\mathbf v_{*})=\beta(\mathbf v_{*},\mathbf w_{*})$.
From the fixed point condition
$\mathbf F(\mathbf x_{*})-2 (\mathbf w_{*}^{T}\mathbf F(\mathbf x_{*})) \mathbf v_{*}=\mathbf{0}$,
we take the inner product of this equation with $\mathbf w_{*}$ to get $\mathbf w_{*}^{T}\mathbf F(\mathbf x_{*}) = 2\mathbf w_{*}^{T}\mathbf F(\mathbf x_{*}) $.
So $\mathbf w_{*}^{T}\mathbf F(\mathbf x_{*})=0$ and in consequence, the conclusion $\mathbf F(\mathbf x_{*})=\mathbf{0}$ holds from the
fixed point condition $\mathbf F(\mathbf x_{*})-2 (\mathbf w_{*}^{T}\mathbf F(\mathbf x_{*})) \mathbf v_{*}=\mathbf{0}$ again.
(b)
It is obvious that for all $i$,
$(\mathbf x_s,\mathbf v_i,\mathbf w_i)$ is a fixed point of the gentlest ascent dynamics \eqref{GAD-non-grad}
by the definition of $\mathbf v_i$ and $\mathbf w_i$. We now show that
we can explicitly write down the eigenvalues and eigenvectors of GAD at any fixed point
$(\mathbf x_s,\mathbf v_i,\mathbf w_i)$.
Let $\mathbb{J}(\mathbf x)= \nabla \mathbf F(\mathbf x)$.
The Jacobian matrix of the gentlest ascent dynamics \eqref{GAD-non-grad} has the following expression:
$ \tilde{\mathbb{J}}(\mathbf x,\mathbf v,\mathbf w) =$
\begin{equation}
\left (
\begin{array}{ccc}
(\mathbb{I}-2\mathbf v\mathbf w^T ) \mathbb{J}(\mathbf x), & -2 (\mathbf F(\mathbf x), \mathbf w)\mathbb{I}, & -2\mathbf v \mathbf F (\mathbf x)^{T}\\
\mathbb{L}_1, & \mathbb{J}(\mathbf x) - \alpha(\mathbf v) \mathbb{I} -\mathbf v\mathbf v^{T}(\mathbb{J}(\mathbf x)+\mathbb{J}(\mathbf x)^{T}), & 0 \\
\mathbb{L}_2, & -2 \mathbf w \mathbf w^{T} \mathbb{J}(\mathbf x) + \mathbf w\mathbf v^{T}(\mathbb{J}(\mathbf x)+\mathbb{J}(\mathbf x)^{T}) , &\mathbb{J}(\mathbf x)^{T}-\beta(\mathbf v,\mathbf w) \mathbb{I} - 2\mathbf w \mathbf v^{T} \mathbb{J}(\mathbf x)^{T} \\
\end{array}
\right)\label{eqn:Jacobian2}
\end{equation}
where $\mathbb{L}_1$, $\mathbb{L}_2$ are $n\times n$ matrices
and $\mathbb{I}$ is the $n\times n$ identity matrix. To derive the above formula, we have used the results from \eqref{eqn:alpha-beta} that
$\nabla_{\mathbf v}( \alpha)= \mathbf v^{T}(\mathbb{J}^{T} + \mathbb{J}) $, $\nabla_{\mathbf v}(\beta) = 2 \mathbf w^{T}\mathbb{J} -\mathbf v^{T}(\mathbb{J}^{T} + \mathbb{J}) $
and
$\nabla_{\mathbf w}(\beta) = 2\mathbf v^{T}\mathbb{J}^{T} $ .
In the first $n$ rows of $\tilde{\mathbb{J}}$, there are two $n\times n$
blocks which contain the term $\mathbf F(\mathbf x)$ and thus vanish at the
fixed point $\mathbf x_{s}$. So the eigenvalues of $\tilde{\mathbb{J}}(\mathbf x_s,\mathbf v_i,\mathbf w_i)$ can be
obtained from the eigenvalues of its three $n\times n$ diagonal blocks: $\mathbb{N},\mathbb{M}$
and $\mathbb{K}$:
\begin{equation*}
\begin{split}
\mathbb{N}&=(\mathbb{I}-2\mathbf v_i\mathbf w_i^T ) \mathbb{J}(\mathbf x_{s}), \\
\mathbb{M}&= \mathbb{J}(\mathbf x_{s}) - \lambda_{i} \mathbb{I} -\mathbf v_{i}\mathbf v_{i}^{T}(\mathbb{J}(\mathbf x_{s})+\lambda_{i}\mathbb{I}), \\
\mathbb{K}&= \mathbb{J}^{T}(\mathbf x_{s})-\lambda_{i} \mathbb{I} - 2\mathbf w_{i} \mathbf v_{i}^{T} \mathbb{J}^{T}(\mathbf x_{s}).
\end{split}
\end{equation*}
Here the obvious facts that $\alpha(\mathbf v_{i})=\beta(\mathbf v_{i},\mathbf w_{i})=\lambda_{i}$
and $\mathbf v_{i}^{T}\mathbb{J}^{T}=\lambda_{i}\mathbf v_{i}^{T}$ are applied.
Now we derive the eigenvalues of $\mathbb{N}$, $\mathbb{M}$ and $\mathbb{K}$
by constructing the corresponding eigenvectors.
Note that $\mathbf v_i^{T}\mathbf w_j=\delta_{ij}$ holds under our assumption of the eigenvectors. One can verify that
\begin{equation*}
\begin{split}
\mathbb{N} \mathbf v_i &=
(\mathbb{I}-2\mathbf v_i\mathbf w_i^T)\lambda_i \mathbf v_i = -\lambda_i \mathbf v_i, \\
\mathbb{M}\mathbf v_{i}&= -2\lambda_{i} \mathbf v_{i}\mathbf v_{i}^{T}\mathbf v_{i} = -2\lambda_{i}\mathbf v_{i}, \\
\mathbb{K}\mathbf w_{i} &= - 2\lambda_{i}\mathbf w_{i}\mathbf v_{i}^{T}\mathbf w_{i}=-2\lambda_{i}\mathbf w_{i}, \\
\end{split}
\end{equation*}
and for all $j\neq i$,
\begin{eqnarray}
\mathbb{N} \mathbf v_j &=& (\mathbb{I}-2\mathbf v_i\mathbf w_i^T )\lambda_j \mathbf v_j = \lambda_j \mathbf v_j, \\
\mathbb{K}\mathbf w_{j} &=& (\lambda_{j}- \lambda_{i})\mathbf w_{j}- 2 \lambda_{j} \mathbf w_{i}\mathbf v_{i}^{T}\mathbf w_{j}= (\lambda_{j}- \lambda_{i})\mathbf w_{j},
\end{eqnarray}
and with a bit more effort,
\begin{equation}
\begin{split}
\mathbb{M}( \mathbf v_{j}- (\mathbf v_{i}^{T}\mathbf v_{j}) \mathbf v_{i})
&= \mathbb{M}\mathbf v_{j} - \mathbf v_{i}^{T}\mathbf v_{j}( \mathbb{M}\mathbf v_{i})
=\mathbb{M}\mathbf v_{j} + 2\lambda_{i} (\mathbf v_{i}^{T}\mathbf v_{j})\mathbf v_{i}\\
&= (\lambda_{j}- \lambda_{i})\mathbf v_{j} -(\lambda_{j}+\lambda_{i}) \mathbf v_{i}(\mathbf v_{i}^{T}\mathbf v_{j})+ 2\lambda_{i} (\mathbf v_{i}^{T}\mathbf v_{j})\mathbf v_{i}\\
&=(\lambda_{j}-\lambda_{i}) (\mathbf v_{j} - (\mathbf v_{i}^{T}\mathbf v_{j})\mathbf v_{i}).
\end{split}
\end{equation}
Hence the eigenvalues of the Jacobian $\tilde{\mathbb{J}}$ at any fixed point
$(\mathbf x_s,\mathbf v_i,\mathbf w_i)$ ($i=1,\cdots,n$) are
\begin{equation}
-2\lambda_i, \ -\lambda_i, \ \{\lambda_j: j\neq i\}, \ \{\lambda_j-\lambda_i: j\neq
i\}.
\label{eqn:eig-GAD}
\end{equation}
The first and the last sets of eigenvalues have multiplicity 2. The linear stability condition
is that all numbers in \eqref{eqn:eig-GAD} are negative.
Thus one fixed point $(\mathbf x_s, \mathbf v_{i'}, \mathbf w_{i'})$ is linearly stable if and
only if $\lambda_{i'}>0$ and all other eigenvalues $\lambda_j<0$ for $j\neq i'$, in which case
the fixed point $\mathbf x_{s}$ is an index-$1$ saddle point.
Next, we discuss some examples of GAD.
Consider first the case of a gradient system with
$V(\mathbf x) = \mathbf x^T A \mathbf x/(\mathbf x^T \mathbf x)$, where $A$ is a symmetric matrix.
$V$ is nothing but the Rayleigh quotient.
A simple computation shows that the GAD for this system is given by:
\begin{equation}
\left\{
\begin{split}
\dot{\mathbf x} = & -\frac{A \mathbf x}{\mathbf x^T \mathbf x} + \frac{\mathbf x^T A \mathbf x}{(\mathbf x^T \mathbf x)^2} \mathbf x +
2 \left( \frac{\mathbf v^T A \mathbf x}{\mathbf x^T \mathbf x} - \frac{\mathbf x^T A \mathbf x}{(\mathbf x^T \mathbf x)^2} (\mathbf v^T \mathbf x) \right) \mathbf v ,\\
\dot{\mathbf v} = & - A \mathbf v + (\mathbf v^T A \mathbf v) \mathbf v.
\end{split}
\right.
\end{equation}
Next, we consider an infinite dimensional example. The potential energy functional
is the Ginzburg-Landau energy for scalar fields:
$I(u) = \int_\Omega \left( \frac 12 |\nabla u|^2 + \frac 14 (u^2 - 1)^2 \right) d\mathbf x $.
The steepest descent dynamics in this case is described by the well-known
Allen-Cahn equation:
\begin{equation}
\partial_t u = \Delta u - (u^2-1) u.
\end{equation}
A direct calculation gives the GAD in this case:
\begin{equation}
\left\{
\begin{split}
\partial_t u & = \Delta u - (u^2-1) u - 2(\Delta u -(u^2-1) u, v) v, \\
\partial_t v & = \Delta v - ( 3u^2-1) v - (\Delta v -(3u^2-1) v, v) v ,
\end{split}\right.
\end{equation}
where the inner product is defined to be:
\begin{equation*}
(u, v) = \int_\Omega u(\mathbf x) v(\mathbf x) d\mathbf x.
\end{equation*}
Clearly both the SDD and the GAD depend on the choice of the metric, the inner product.
If we use instead the $H^{-1}$ metric, then the SDD becomes the Cahn-Hilliard equation
and the GAD changes accordingly.
\section{High index saddle points}
GAD can also be extended to the case of finding high index saddle points.
We will discuss how to
generalize it to index-$2$ saddle points here. There are
two possibilities: Either the Jacobian $\mathbb{J}$ at the saddle point has
one pair of conjugate complex eigenvalues or it has two real eigenvalues at
the saddle point.
We discuss each separately.
Intuitively, the picture is as follows.
We need to find the projection of the flow, $\mathbf F(\mathbf x)$,
on the tangent plane, say $P$, of the two dimensional unstable manifold of the saddle point,
and change the direction of the flow on that tangent plane.
For this purpose, we need to find the vectors $\mathbf v_{1}$ and $\mathbf v_{2}$ that span $P$.
In the first case, we assume that the unstable eigenvalues at the saddle point
are $\lambda_{1,2}=\lambda_{R}\pm i \lambda_{I}$.
In this case there are no real eigenvectors corresponding to $\lambda_{1,2}$.
However, for any vector $\mathbf v$ in $P$, $(\nabla \mathbf F)\mathbf v$ simply rotates $\mathbf v$ inside $P$.
Hence, $\mathbf v_{2}$ can be taken as
$(\nabla \mathbf F) \mathbf v_{1}$ if we have already found some $\mathbf v_{1}\in P$.
The latter can be accomplished using the original dynamics in \mathbb{E}qref{GAD-non-grad}.
To see how one should modify the flow $\mathbf F$ on the tangent plane, we write
$$\mathbf F=c_{1}\mathbf v_{1}+c_{2}\mathbf v_{2} + \sum_{j>2}c_{j}\mathbf v_{j}.$$
Using the fact that the eigen-plane of $(\nabla \mathbf F)^{T}$ corresponding to
$\lambda_{R}\pm i \lambda_{I}$, which is
spanned by $\mathbf w_{1}$ and $\mathbf w_{2}=(\nabla \mathbf F)^{T} \mathbf w_{1}$,
is orthogonal to $\mathbf v_{j}$ for all $j>2$, we can derive a linear system for $c_1$ and
$c_2$ by taking the
inner product of ${\mathbf F}$ and $\mathbf w_{1}$,$\mathbf w_{2}$.
The solution of that linear system is given by:
\begin{equation}
c_{1} = \frac{a_{22}f_{1}-a_{12}f_{2}}{a_{11}a_{22}-a_{21}a_{21}}, \ c_{2} =
\frac{a_{11}f_{2}-a_{21}f_{1}}{a_{11}a_{22}-a_{21}a_{21}}
\label{eqn:c2}
\end{equation}
where $a_{ij}=(\mathbf w_{i},\mathbf v_{j})$ and $f_{j}=(\mathbf F(\mathbf x),\mathbf w_{j})$ for $i,j=1,2$.
The gentlest ascent dynamics for the $\mathbf x$ component is $$\tilde{\mathbf F}=\mathbf F -2c_{1}\mathbf v_{1}-2c_{2}\mathbf v_{2}.$$
To summarize, we obtain the following dynamical system:
\begin{equation}
\left\{
\begin{split}
\dot{\mathbf x} &= \mathbf F -2c_{1}\mathbf v_{1}-2c_{2}\mathbf v_{2}, \\
\dot{\mathbf v}_{1} &= (\nabla \mathbf F(\mathbf x))\mathbf v_{1} - \alpha(\mathbf v_{1})\mathbf v_{1},\\
\dot{\mathbf w}_{1} &= (\nabla \mathbf F(\mathbf x))^T \mathbf w_{1} -\beta(\mathbf v_{1},\mathbf w_{1})\mathbf w_{1},\\
\mathbf v_{2} &= \nabla \mathbf F(\mathbf x)\mathbf v_{1} ,\\
\mathbf w_{2} &= (\nabla \mathbf F(\mathbf x))^{T}\mathbf w_{1},
\end{split}
\right.
\label{eqn:GAD-ng-c2}
\end{equation}
where $c_1,c_2$ are given by \eqref{eqn:c2} and
$\alpha,\beta$ are defined by \eqref{eqn:alpha-beta}.
If the Jacobian has two positive real eigenvalues
at the saddle point, say, $\lambda_{1}>\lambda_{2}>0 \geq \lambda_{3} > \cdots$,
let us define a new matrix by the method of deflation:
\begin{equation}
\mathbb{J}_{2} := \nabla \mathbf F- \frac{ (\mathbf v_{1}, (\nabla \mathbf F) \mathbf v_{1})}{(\mathbf v_{1},\mathbf v_{1}) (\mathbf w_{1},\mathbf v_{1})} \mathbf v_{1}\mathbf w_{1}^{T}.
\label{eqn:J2-def}
\end{equation}
It is not difficult to see that if $\mathbf v_{1}$ is an eigenvector of $\nabla \mathbf F$
corresponding to $\lambda_{1}$, then ${\mathbb{J}}_{2}$ shares the same eigenvectors as $\mathbb{J}$, and
the eigenvalues of $\mathbb{J}_{2}$ become $0,\lambda_{2},\lambda_{3},\cdots$.
The largest eigenvalue of $\mathbb{J}_{2}$ at the index-$2$ saddle point becomes $\lambda_{2}$.
One can then use the dynamics \eqref{GAD-non-grad-v} associated with the new matrix $\mathbb{J}_{2}$ to find $\mathbf v_{2}$. Therefore,
we obtain the following index-$2$ GAD
\begin{equation}
\left\{
\begin{split}
\dot{\mathbf x} &= \mathbf F -2c_{1}\mathbf v_{1}-2c_{2}\mathbf v_{2}, \\
\dot{\mathbf v}_{1} &= (\nabla \mathbf F(\mathbf x))\mathbf v_{1} - \alpha_{1}\mathbf v_{1},\\
\dot{\mathbf w}_{1} &= (\nabla \mathbf F(\mathbf x))^T \mathbf w_{1} - \beta_{1}\mathbf w_{1},\\
\dot{\mathbf v}_{2} &= \mathbb{J}_{2} \mathbf v_{2} - \alpha_{2}\mathbf v_{2} ,\\
\dot{\mathbf w}_{2} &= \mathbb{J}_{2}^{T} \mathbf w_{2} - \beta_{2}\mathbf w_{2},
\end{split}
\right.
\label{eqn:GAD-ng-r2}
\end{equation}
with the initial normalization condition $(\mathbf v_{1},\mathbf v_{1}) =(\mathbf v_{2},\mathbf v_{2}) =(\mathbf w_{1},\mathbf v_{1}) =(\mathbf w_{2},\mathbf v_{2})=1$.
Here $c_1$ and $c_2$ are given by \eqref{eqn:c2} as above, and
$\alpha_{1,2},\beta_{1,2}$ are defined as follows to ensure that the
normalization condition is preserved:
$\alpha_{1} =(\mathbf v_{1}, (\nabla \mathbf F(\mathbf x))\mathbf v_{1}),
\beta_{1}= 2(\mathbf w_{1}, (\nabla \mathbf F(\mathbf x))\mathbf v_{1})-\alpha_{1}$
and
$\alpha_{2} =(\mathbf v_{2},\mathbb{J}_{2}\mathbf v_{2}),
\beta_{2}= 2(\mathbf w_{2}, \mathbb{J}_{2}\mathbf v_{2})-\alpha_{2}$.
The generalization to higher index saddle points with real eigenvalues is obvious.
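The deflation step \eqref{eqn:J2-def} can be checked numerically as follows; the $3\times 3$ test matrix is arbitrary and the snippet is only a sketch of the construction, not of the full index-2 GAD.
\begin{verbatim}
import numpy as np

# Numerical check of the deflation (eqn:J2-def): the largest eigenvalue of J is
# sent to 0 while the rest of the spectrum is unchanged.
J = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, -1.0]])
lam, V = np.linalg.eig(J)
i1 = int(np.argmax(lam.real))
v1 = V[:, i1].real                                       # right eigenvector of lambda_1
lamL, W = np.linalg.eig(J.T)
w1 = W[:, int(np.argmin(np.abs(lamL - lam[i1])))].real   # matching left eigenvector

J2 = J - (v1 @ (J @ v1)) / ((v1 @ v1) * (w1 @ v1)) * np.outer(v1, w1)
print(np.sort(lam.real))                    # [-1.  1.  2.]
print(np.sort(np.linalg.eigvals(J2).real))  # approximately [-1.  0.  1.]
\end{verbatim}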
\section{Examples}
\subsection{Analysis of a gradient system}
To better understand the dynamics of GAD, let us consider the case when
a different relaxation parameter is used
for the direction $\mathbf v$:
$$\left\{
\begin{split}
\dot{\mathbf x} &= - \nabla V(\mathbf x) + 2(\nabla V, \mathbf v) \mathbf v, \\
\tau \dot{\mathbf v} &= - \nabla^2 V(\mathbf x)\mathbf v + (\mathbf v, \nabla^2 V \mathbf v) \mathbf v.
\end{split}
\right.$$
To simplify the discussions, we consider the limit as $\tau \to 0 $.
In this case, we obtain a closed system for $\mathbf x$:
\begin{equation}
\dot{\mathbf x} = - \nabla V(\mathbf x) + 2(\nabla V, \mathbf v(\mathbf x)) \mathbf v(\mathbf x),
\label{eqn:gad-x}
\end{equation}
where $\mathbf v(\mathbf x)$ is the eigenvector of $\nabla^{2}V(\mathbf x)$ associated with the smallest eigenvalue.
Now we consider the following two dimensional system:
$$V(x,y)=\frac14(x^{2}-1)^{2}+\frac12 \mu y^{2}$$
where $\mu$ is a positive parameter. $\mathbf x_{\pm}=(\pm 1,0)$ are two stable fixed points
and $(0,0)$ is the index-$1$ saddle point.
The eigenvalues and eigenvectors of the Hessian at a point $\mathbf x=(x,y)$ are
\begin{align*}
\lambda_{1}&=3x^{2}-1 \text{ and } \mathbf v_{1}=(1,0),\\
\lambda_{2}&=\mu \text{ and } \mathbf v_{2}=(0,1).
\end{align*}
Therefore, the eigendirection picked by GAD is
\begin{equation}
\begin{cases}
\mathbf v_{GAD}(\mathbf x) = \mathbf v_{1}, &\mbox{ if } |x| < \sqrt{\frac{1+\mu}{3}},\\
\mathbf v_{GAD}(\mathbf x) = \mathbf v_{2}, &\mbox{ if } |x| > \sqrt{\frac{1+\mu}{3}}.
\end{cases}
\end{equation}
Consequently, by defining
$$V_{1}(x,y)=-\frac14(x^{2}-1)^{2}+\frac12 \mu y^{2},$$
and
$$V_{2}(x,y)=\frac14(x^{2}-1)^{2}-\frac12 \mu y^{2},$$
we can write the gentlest ascent dynamics \eqref{eqn:gad-x} in the form of a gradient system driven by the new potential:
\begin{equation}
\displaystyle
V_{GAD}(\mathbf x)= V_{1}(\mathbf x)\cdot 1_{|x|<\sqrt{\frac{1+\mu}{3}}} (\mathbf x)
+V_{2}(\mathbf x)\cdot 1_{|x|>\sqrt{\frac{1+\mu}{3}}} (\mathbf x)
\label{eqn:Vgad}
\end{equation}
where $1_{\cdot}(\mathbf x)$ is the indicator function.
Note that $V_{GAD}$ is \textit{not} continuous at the lines
$x=\pm \sqrt{\frac{1+\mu}{3}}$. The point $(0,0)$ becomes the unique local minimum
of $V_{1}$, with the basin of attraction $\{(x,y):-1<x<1\}$.
Outside of this basin of attraction, the flow goes to
$(x=\pm\infty,y=0)$ and the potential $V_{1}$ falls to $-\infty$.
For $V_{2}$, the point $(0,0)$ is the unique local maximum and
all solutions go to $(x=\pm 1, y=\pm \infty)$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\textwidth]{Vgad_dis.eps}
\caption{The discontinuity of $V_{GAD}(x,y=0)$ at the location $x=\pm \sqrt{\frac{1+\mu}{3}}$.
Left: $\mu<2$; Right: $\mu>2$.}
\label{fig:Vgad_dis}
\end{center}
\end{figure}
If we start the gentlest ascent dynamics with the initial value $\mathbf x_{\pm} = (\pm 1,0)$,
then there are two different situations according to whether $\mu>2$ or $\mu<2$.
Although $\mathbf x_{\pm}$ becomes a saddle point for any $\mu\neq 2$,
the unstable direction for $\mu<2$ is $\pm \mathbf v_{2}$
while the unstable direction for $\mu>2$ is $\pm \mathbf v_{1}$,
as illustrated in figure \ref{fig:grad}. Furthermore, from figure \ref{fig:grad}
and the above discussion,
it is clear that the basin of attraction of the point $(0,0)$ associated with the potential $V_{GAD}$
is the region $-\sqrt{\frac{1+\mu}{3}}<x<\sqrt{\frac{1+\mu}{3}}$
for $\mu<2$ and $-1<x<1$ for $\mu>2$ (which
is larger than the basin of attraction for the Newton-Raphson method, as confirmed by numerical calculation). Consequently, the GAD with an initial value
$(x_{0},y_{0})$ \emph{near} the
local minimum $\mathbf x_{\pm}$ of $V$ converges to the point $(0,0)$ of our interest
when $\mu>2$ and $|x_{0}|<1$.
This discussion suggests that GAD need not converge globally and that instabilities
can occur when GAD is used as a numerical algorithm.
When instabilities do occur, one may simply reinitialize the initial position
or the direction.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.86\textwidth, height=0.32\textwidth]{grad0.eps}
\includegraphics[width=0.86\textwidth, height=0.32\textwidth]{grad1.eps}
\includegraphics[width=0.86\textwidth, height=0.32\textwidth]{grad2.eps}
\caption{The contour plots of $V$, $V_{GAD}$ for $\mu=1$ and $V_{GAD}$ for $\mu=3$, from the top to the bottom, respectively. For the plot of $V_{GAD}$, $V_{1}$ lies in the middle region
$-\sqrt{\frac{1+\mu}{3}}<x<\sqrt{\frac{1+\mu}{3}}$ and the
$V_{2}$ lies at the two sides.
The arrows show the flow directions of the gentlest ascent dynamics \eqref{eqn:gad-x}.}
\label{fig:grad}
\end{center}
\end{figure}
\subsection{Lorenz system}
Consider
\begin{equation} \label{eqn:Lorenz}\left\{\begin{array}{lcl}
\dot{x} &=& \sigma (y-x), \\ \dot{y} &=& \rho x - y -xz,\\
\dot{z} &=& -\beta z + xy. \end{array}\right.
\end{equation}
The parameters we use are
$\sigma=10$, $\beta=\frac83$ and $\rho=30$. There are three fixed points:
the origin $O=(0,0,0)$ and
two symmetric fixed points
$$Q_{\pm}=(\pm \sqrt{\beta(\rho-1)},\pm
\sqrt{\beta(\rho-1)},\rho-1).$$
$O$ is an index-$1$ saddle point.
The Jacobian at $Q_{\pm}$ has one pair of complex conjugate eigenvalues
with positive real part.
In our calculation, we prepare the
initial directions $\mathbf v_{0}$ and $\mathbf w_{0}$ by running the GAD
for long time starting from random initial conditions for
$\mathbf v$ and $\mathbf w$ while keeping $\mathbf x$ fixed,
although this is not entirely necessary.
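For reference, a rough sketch of the index-1 GAD \eqref{GAD-non-grad} for the Lorenz system reads as follows; the step size, iteration count and initial data are our own choices and differ from the runs reported here.
\begin{verbatim}
import numpy as np

sigma, beta, rho = 10.0, 8.0 / 3.0, 30.0
F = lambda p: np.array([sigma * (p[1] - p[0]),
                        rho * p[0] - p[1] - p[0] * p[2],
                        -beta * p[2] + p[0] * p[1]])
J = lambda p: np.array([[-sigma, sigma, 0.0],
                        [rho - p[2], -1.0, -p[0]],
                        [p[1], p[0], -beta]])

x = np.array([2.0, 3.0, 5.0])
v = np.array([1.0, 0.0, 0.0]); w = v.copy()
dt = 1e-3
for _ in range(200000):
    Jx = J(x)
    a = v @ (Jx @ v)                       # alpha(v)
    b = 2 * (w @ (Jx @ v)) - a             # beta(v, w)
    x = x + dt * (F(x) - 2 * (F(x) @ w) / (w @ v) * v)
    v = v + dt * (Jx @ v - a * v)
    w = w + dt * (Jx.T @ w - b * w)
    v /= np.linalg.norm(v); w /= (w @ v)   # re-impose (v,v)=1, (w,v)=1
print(x)  # for suitable initial data the trajectory approaches a saddle point (O or Q+)
\end{verbatim}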
Figure ~\ref{fig:lorenz-1} shows two solutions of GAD.
For the index-$1$ saddle point $O$, figure \ref{fig:lorenz-2} depicts how
the trajectory of GAD converges to it.
It can be seen that the component of the original force $\mathbf F$ along the unstable
direction of $O$ is nearly projected out, thus the trajectory will not be affected by
the unstable flow in that direction, and avoids escaping from the saddle point.
Therefore the trajectory tends to follow the stable manifold toward the saddle point once it gets close enough
to the saddle point. Similar behavior is seen for the case of searching
the point $Q_{+}$ which has one pair of complex eigenvalues.
The trajectory surrounding $Q_{+}$ in the figure \ref{fig:lorenz-1}
spirals to $Q_{+}$ and these spirals are
closer and closer to the unstable manifold of $Q_{+}$ in the original Lorenz dynamics,
which looks like a twisted disk.
The convergence rate of the spiraling trajectories in GAD is very slow
because the real part of the complex eigenvalues
($\lambda=0.1474 \pm 10.5243\text{ i} $) in the original dynamics is rather small
compared with its imaginary part.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.8\textwidth]{lorenz-1.eps}
\caption{The trajectories of GAD for the Lorenz system starting from two initial points. They converge
to the index-2 saddle point $Q_{+}$ (marked by the dot) and the index-1 saddle point $O$ (marked by ``$+$'')
respectively.}
\label{fig:lorenz-1}
\end{center}
\end{figure}
If we reverse time $t\to -t$, we have the time-reversed Lorenz system,
in which the origin $O$ becomes an index-$2$ saddle point.
We can apply the index-2 GAD algorithm \eqref{eqn:GAD-ng-r2} to search for this saddle point.
The GAD trajectory in this case is also plotted in the figure \ref{fig:lorenz-2}.
It is similar to the situation of GAD applied to the original Lorenz system
in the sense that the GAD trajectory nearly follows the $z$ axis when approaching
the limit point $O$.
Indeed, as far as the $\mathbf x$-component is concerned, the linearized gentlest ascent dynamics for the original
Lorenz system and the time-reversed one are the same.
From the proof of the Proposition (particularly, note that the eigenvalues of $\mathbb{N}$
are $-\lambda_{i}$ and $\lambda_{j}$),
it is not hard to see that the eigenvalues of the linearized gentlest ascent dynamics at the point $O$ are all negative and
have the same absolute values as the eigenvalues of the
original dynamics, and the two
dynamics share the same eigenvectors (again, we mean the $\mathbf x$ component of the GAD).
Thus, since the
change $t\to -t$ does not change the absolute values of the eigenvalues of the original dynamics, the gentlest ascent dynamics for the original
and time reversed Lorenz system have the same eigenvalues:
$\lambda_{1} = -23.3955$, $\lambda_{2}=-2.6667$, $\lambda_{3}=-12.3955$.
The two linearized GAD flows near the point $O$ are the same: $\mathbf x(t)=e^{-23.3955t}\mathbf v_{1}+e^{-2.6667t}\mathbf v_{2}+e^{-12.3955t}\mathbf v_{3}$,
where $\mathbf v_{1,2,3}$ are the eigenvectors: $\mathbf v_{2}=(0,0,1)$, and $\mathbf v_{1}$, $\mathbf v_{3}$
are in the $z=0$ plane. As $t\to +\infty$,
we then have $\mathbf x(t) \sim e^{-2.6667t}\mathbf v_{2}$.
This explains why both trajectories in the figure \ref{fig:lorenz-2}
follow the $z$ axis when approaching the saddle point $O$.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.8\textwidth]{lorenz-2.eps}
\caption{How the GAD trajectories approach the saddle point $O$. The curve with two arrows
is the trajectory of index-$1$ GAD for the Lorenz system; the curve with single arrow is the trajectory of
index-$2$ GAD for the time reversed Lorenz system.
The unstable manifold of $O$, which is tangent to the $z=0$ plane, is also shown.}
\label{fig:lorenz-2}
\end{center}
\end{figure}
\subsection{A PDE example with nucleation}
Let us consider the following reaction-diffusion system
on the domain $x\in[0,1]$ with periodic boundary condition:
\begin{equation}
\left\{
\begin{split}
\frac{\partial u}{\partial t} &= \delta \Delta u + {\delta}^{-1}f(u,v),\\
\frac{\partial v}{\partial t} &= \delta \Delta v + {\delta}^{-1}g(u,v),
\end{split}\label{eqn:RD2}
\right.
\end{equation}
where
\begin{equation}
\label{eqn:ngre1}
\begin{cases}
f(u,v) &= (u-u^3+1.2)v+ \frac12 \mu u,\\
g(u,v) &= \frac12 u^2 -v.
\end{cases}
\end{equation}
The parameter $\delta$ is fixed at $0.01$ and we allow the parameter $\mu$ to vary.
There are two stable (spatially homogeneous) solutions for a certain range of $\mu$: $\mathbf u_{+}=(u_{+},v_{+})$ and $\mathbf{0}=(0,0)$.
If one uses a square-pulse shape function as an initial guess in the Newton-Raphson method,
no convergence is achieved in most situations.
We applied the index-$1$ GAD method to this example.
The initial conditions for GAD are constructed by adding small
perturbations around either stable solution: $\mathbf u_{+}$ or $\mathbf{0}$.
We observed that for a fixed value of $\mu$, the solutions of GAD constructed this way
converge to the same saddle point.
The different saddle points obtained from GAD at different values of $\mu$ are plotted
in figure \ref{fig:case0_saddleprofiles}.
It is also numerically confirmed that these saddle points indeed have index $1$ and the unstable manifold
goes to $\mathbf u_{+}$ in one unstable direction and to $\mathbf{0}$ in the opposite
unstable direction.
It is interesting to observe the dependence of the saddle point on the parameter $\mu$,
which becomes highly sensitive when $\mu$ is close to the interval $[-1.046,-1.045]$.
In fact, there exists a critical value $\mu^{*}$ in this narrow interval at which the spatially extended
system \eqref{eqn:RD2} has a subcritical bifurcation, which does not appear in the corresponding ODE system
without spatial dependence.
We refer to \cite{subcrit2010} for further discussions about this point.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.85\textwidth]{case0_saddleprofiles.eps}
\caption{The profiles of saddle points of the example
\eqref{eqn:ngre1} ($\delta=0.01$). Only the component $u$ is shown since $v=\frac12 u^{2}$ at the saddle point.
From inside to outside, the
values of $\mu$ are $-1.000$, $-1.020$, $-1.040$, $-1.045$, $-1.046$,
$-1.050$.} \label{fig:case0_saddleprofiles}
\end{center}
\end{figure}
\section{Concluding remarks}
We expect that
GAD is particularly useful for handling high dimensional systems, in the sense that
it should have a larger basin of attraction for finding saddle points than, for example,
the Newton-Raphson method.
There are many questions one can ask about GAD.
One question is the convergence of GAD as time goes to infinity.
Our preliminary result shows that GAD does not have to converge.
For finite dimensional systems, there is always local convergence
near the saddle point. The situation for infinite dimensional systems, i.e. PDEs,
seems to be much more subtle.
Another interesting point is whether one can accelerate GAD.
For the problem of finding local minima,
many numerical algorithms have been proposed and they promise to have much faster
convergence than SDD. It is natural to ask whether analogous ideas
can also be found for saddle points.
{\bf Acknowledgement:} The work presented here was supported in part
by AFOSR grant FA9550-08-1-0433.
The authors are grateful to Weiguo Gao and Haijun Yu and the second referee for helpful discussions.
\end{document}
\begin{document}
\title{Truncated states obtained by iteration}
\author{W. B. Cardoso}
\email[Corresponding author: ]{[email protected]}
\affiliation{Instituto de F\'{\i}sica, Universidade Federal de Goi\'{a}s, 74.001-970, Goi
\^{a}nia (GO), Brazil}
\author{N. G. de Almeida}
\affiliation{N\'{u}cleo de Pesquisas em F\'{\i}sica, Universidade Cat\'{o}lica de Goi\'{a}
s, 74.605-220, Goi\^{a}nia (GO), Brazil.}
\affiliation{Instituto de F\'{\i}sica, Universidade Federal de Goi\'{a}s, 74.001-970, Goi
\^{a}nia (GO), Brazil}
\begin{abstract}
Quantum states of the electromagnetic field are of considerable importance,
finding potential application in various areas of physics, as diverse as
solid state physics, quantum communication and cosmology. In this paper we
introduce the concept of truncated states obtained \textit{via} iterative
processes (TSI) and study its statistical features, making an analogy with
dynamical systems theory (DST). As a specific example, we have studied TSI
for the doubling and the logistic functions, which are standard functions in
studying chaos. TSI for both the doubling and logistic functions exhibit
certain similar patterns when their statistical features are compared from
the point of view of DST. A general method to engineer TSI in the
running-wave domain is employed, which includes the errors due to the
nonidealities of detectors and photocounts.
\end{abstract}
\pacs{42.50.Dv, 42.50.-p}
\maketitle
\section{Introduction}
Quantum state engineering is an area of growing importance in quantum
optics, its relevance lying mainly in the potential applications in other
areas of physics, such as quantum teleportation \cite{bennett93}, quantum
computation \cite{Kane98}, quantum communication \cite{Pellizzari97},
quantum cryptography \cite{Gisin02}, quantum lithography \cite{Bjork01},
decoherence of states \cite{Zurek91}, and so on. To give a few examples of
their usefulness and relevance, quantum states arise in the study of quantum
decoherence effects in mesoscopic fields \cite{harochegato}; entangled
states and quantum correlations \cite{brune}; interference in phase space
\cite{bennett2}; collapses and revivals of atomic inversion \cite{narozhny};
engineering of (quantum state) reservoirs \cite{zoller}; etc. Also, it is
worth mentioning the importance of the statistical properties of one state
in determining some relevant properties of another \cite{barnett}, as well
as the use of specific quantum states as input to engineer a desired state
\cite{serra}.
Dynamical Systems Theory (DST), on the other hand, is a completely different
area of study, whose interest lies mainly in nonlinear phenomena, the source
of chaotic phenomena. DST groups several approaches to the study of chaos,
involving Lyapunov exponent, fractal dimension, bifurcation, and symbolic
dynamics among other elements \cite{devaney}. Recently, other approaches
have been considered, such as information dynamics and entropic chaos degree
\cite{ohya}.
The purpose of this paper is twofold: to introduce novel states of
electromagnetic fields, namely truncated states having coefficients obtained
\textit{via} iterative process (TSI), and to study chaos phenomena using
standard techniques from quantum optics, making an analogy with DST. We note
that, unlike previous states studied in the literature \cite{dodonov}, each
coefficient of the TSI is obtained from the previous one by iteration of a
function. Features of this state are studied by analyzing several of its
statistical properties in different regimes (chaotic \textit{versus}
nonchaotic) according to DST, and, for some iterating functions, we found
properties of TSI very sensitive (resembling chaos) to the first coefficient
$C_{0}$, which is used as a ``seed'' to obtain the remaining $C_{n}$.
This paper is organized as follows. In section 2 we introduce the TSI and in
section 3 we analyze the behavior of some of its properties as the Hilbert
space dimension is increased. In section 4 we show how to engineer the TSI
in the running-wave field domain, and the corresponding engineering fidelity
is studied in section 5. In section 6 we present our conclusions.
\section{Truncated states obtained via iteration (TSI)}
We define TSI as
\begin{equation}
|TSI\rangle =\sum\limits_{n=0}^{N}C_{n}|n\rangle
\end{equation}
where $C_{n}$ is the normalized complex coefficient obtained as the $n$th
iteration of a previously given generating function. For example, given $
C_{0}$,$C_{n}$ can be the $n$th iterate of the quadratic functions: $
C_{n}(\mu )=C_{n-1}^{2}+\mu $; sine functions: $C_{n}(\mu )=\mu \sin
(C_{n-1})$; logistic functions $C_{n}(\mu )=\mu C_{n-1}(1-C_{n-1})$;
exponential functions: $C_{n}=\mu \exp (C_{n-1})$; doubling function defined
on the interval [0,1): $C_{n}=2C_{n-1}$ \textit{mod} $1$, and so on, $\mu $
being a parameter. It is worth recalling that all the functions in the above
list are familiar to researchers in the field of dynamical systems theory
(DST). For example, for some values of $\mu $, it is known that some of
these functions can behave in quite a chaotic manner \cite{devaney}. Also,
note that by computing all the $C_{n}$ we are in fact determining the
\textit{orbit} of a given function, and because the $C_{n}$ and $P_{n}$, the
photon number distribution, are related by $P_{n}=|C_{n}|^{2}$, fixed or
periodic points of a function will correspond to fixed or periodic $P_{n}$.
Rather than studying all the functions listed in this section, we will focus
on the doubling function and the logistic function. These two functions have
been widely used to understand chaos in nature. As we shall see in the
following, although very different from each other, these functions give
rise to different TSI having similar patterns.
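As a concrete illustration of the construction (a minimal sketch; the helper names are ours), the normalized coefficients $C_{n}$ and the photon number distribution $P_{n}=|C_{n}|^{2}$ can be generated as follows:
\begin{verbatim}
import numpy as np

def tsi_coefficients(f, c0, N):
    """Iterate the map f starting from the seed c0 and normalize the result."""
    c = [c0]
    for _ in range(N):
        c.append(f(c[-1]))
    c = np.array(c, dtype=complex)
    return c / np.linalg.norm(c)

doubling = lambda c: (2 * c) % 1.0
logistic = lambda c, mu=4.0: mu * c * (1 - c)

C = tsi_coefficients(doubling, 0.29711, N=50)
P = np.abs(C) ** 2            # photon number distribution, sums to 1
print(P[:5], P.sum())
\end{verbatim}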
\section{Statistical properties of TSI using the doubling and the logistic
functions}
\subsection{Photon Number Distribution}
Since the expansion of TSI is known in the number state $|n\rangle $, we
have
\begin{equation}
P_{n}=|C_{n}|^{2}. \label{pn}
\end{equation}
Figs. $1$ and $2$ show the plots of the photon-number distribution $P_{n}$
versus $n$ for TSI using the doubling function. The Hilbert space dimension
is $N=50$. In order to illuminate the behavior of TSI for different values
of $C_{0}$, we take $C_{0}$ as $0.3$ and $0.29711$, respectively shown in
Figs. $1$ and $2$. Note the regular behavior for $C_{0}=0.3$ and rather an
irregular, or chaotic, behavior for $C_{0}=$ $0.29711$. Figs. $3$ and $4$
show $P_{n}$ for the logistic function. For $C_{0}=0.2$ and $\mu =3.49$ the
logistic function behaves regularly (Fig.$3$), showing clearly (as in the
case of the doubling-function) four values for $P_{n}$; by contrast, for $
\mu =4$ and $C_{0}=0.2$, $P_{n}$ oscillates quite irregularly (Fig.$4$).
This is so because the photon number distribution is equivalent to the
\textit{orbit} of the TSI dynamics \cite{devaney}. Thus, once a fixed -
attracting or periodic - point is attained, the subsequent coefficients, and
hence the subsequent $P_{n}$, will behave in a regular manner. Conversely,
when no fixed point exists, $P_{n}$ will oscillate in a chaotic manner.
Therefore, by choosing suitable $C_{0}$ and/or $\mu $, we can compare the
properties of TSI when different regimes (chaotic \textit{versus}
nonchaotic) in the DST sense are encountered. Note the similarity between
the properties of the logistic and the doubling functions when the DST
regimes are the same. Interestingly, these similarities are observed when
other properties are analyzed, as we shall see in the following.
\begin{figure}
\caption{Photon number distribution for the doubling function with $
C_{0}
\end{figure}
\begin{figure}
\caption{Photon number distribution for the doubling function with $
C_{0}
\end{figure}
\begin{figure}
\caption{Photon number distribution for the logistic function with $
C_{0}
\end{figure}
\begin{figure}
\caption{Photon number distribution for the logistic function with $
C_{0}
\end{figure}
\subsection{Even and Odd Photon number distribution}
The functions $P_{odd}$ and $P_{even}$ represent the photon number
distribution for $n$ $odd$ and $even$, respectively, given by Eq. (\ref{pn}
). It is well established in quantum optics \cite{Mandel} that if $
P_{odd}>0.5$ the Glauber-Sudarshan $P$-function assumes negative values,
prohibited in the usual probability distribution function, and the quantum
state has no classical analog. Since $P_{odd}+P_{even}=1$, the same is true
when $P_{even}<0.5$ . Figs. $5$ and $6$ show the behavior of $P_{odd}$ for $
C_{0}=0.3$ and $C_{0}=0.29711$ for the doubling function, as the Hilbert space dimension $N
$ is increased. Figs. $7$ and $8$ refer to the logistic function for $
\mu =3.49$ and $\mu =4$. In Figs. $5$ and $7$ (corresponding to a nonchaotic
regime in DST), note that TSI has a classical analog as $N$ increases. From
Figs. $6$ and $8$ (corresponding to a chaotic regime in DST), TSI can behave
as a nonclassical state, depending on $N$. More interestingly, note the
following pattern: whenever the coefficients of TSI correspond to the
nonchaotic regime in DST, $P_{odd}$ (and so $P_{even}$) will remain above or
below $0.5$ on a nearly monotonic curve, as seen in Figs. $5$ and $7$;
whenever the coefficients of TSI correspond to the chaotic regime in DST, $
P_{odd}$ (and so $P_{even}$) will tend to oscillate around $0.5$ (Figs. $6$
and $8$).
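This parity pattern can be reproduced directly from the coefficients, reusing the \texttt{tsi\_coefficients} and \texttt{doubling} helpers of the sketch in Section 2:
\begin{verbatim}
import numpy as np

def p_odd(f, c0, N):
    C = tsi_coefficients(f, c0, N)          # helper from the sketch in Section 2
    return float(np.sum(np.abs(C[1::2]) ** 2))

print([round(p_odd(doubling, 0.3, N), 3) for N in (10, 20, 30, 40, 50)])
print([round(p_odd(doubling, 0.29711, N), 3) for N in (10, 20, 30, 40, 50)])
\end{verbatim}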
\begin{figure}
\caption{Even (solid) and odd (dots) photon number distributions for the
doubling function. We have used $C_{0}
\end{figure}
\begin{figure}
\caption{Even (solid) and odd (dots) photon number distributions for the
doubling function. $C_{0}
\end{figure}
\begin{figure}
\caption{Even (solid) and odd (dots) photon number distributions for the
logistic function. Here we chose $C_{0}
\end{figure}
\begin{figure}
\caption{Even (solid) and odd (dots) photon number distributions for the
logistic function. Here we chose $C_{0}
\end{figure}
\subsection{Average number and variance}
The average number $\left\langle \hat{n}\right\rangle $ and the variance $
\left\langle \Delta\hat{n}\right\rangle $ in TSI are obtained
straightforwardly from
\begin{equation}
\left\langle \hat{n}\right\rangle =\sum_{n=0}^{N}P(n)n ,
\end{equation}
and
\[
\left\langle \Delta\hat{n}\right\rangle =\sqrt{\left\langle \hat{n}
^{2}\right\rangle -\left\langle \hat{n}\right\rangle ^{2}} .
\]
Fig. $9$ shows the plot of $\left\langle \hat{n}\right\rangle $ and Fig. $10$
the plot of $\left\langle \Delta\hat{n}\right\rangle $ as functions of the
dimension $N$ of Hilbert space, for the doubling function. Note the near
linear behavior of the average photon number and its variance as $N$
increases for $C_{0}=0.3$ (nonchaotic regime in DST); this is not seen when $
C_{0}=0.29711$ (chaotic regime in DST). Figs. $11$ and $12$ for the logistic
function show essentially the same behavior when these two DST regimes are
shown together.
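The quantities plotted here are simple moments of $P_{n}$; a short sketch (again reusing the helpers of Section 2) is:
\begin{verbatim}
import numpy as np

def n_stats(f, c0, N):
    P = np.abs(tsi_coefficients(f, c0, N)) ** 2
    n = np.arange(N + 1)
    mean = float(np.sum(P * n))
    spread = float(np.sqrt(np.sum(P * n**2) - mean**2))
    return mean, spread

print(n_stats(doubling, 0.3, 50), n_stats(doubling, 0.29711, 50))
\end{verbatim}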
\begin{figure}
\caption{Average photon number for the doubling function. Here we chose $
C_{0}
\end{figure}
\begin{figure}
\caption{Variance of the photon number for the doubling function. Here we
chose $C_{0}
\end{figure}
\begin{figure}
\caption{Average photon number for the logistic function. Here we chose $
C_{0}
\end{figure}
\begin{figure}
\caption{Variance of the photon number for the logistic function. Here we
chose $C_{0}
\end{figure}
\subsection{Mandel parameter and second order correlation function}
The Mandel $Q$ parameter is defined as
\begin{equation}
Q=\frac{\left( \Delta\hat{n}^{2}-\left\langle \hat{n}\right\rangle \right) }{
\left\langle \hat{n}\right\rangle } ,
\end{equation}
while the second order correlation function $g^{\left( 2\right) }(0)$ is
\[
g^{\left( 2\right) }(0)=\frac{\left( \left\langle \hat{n}^{2}\right\rangle
-\left\langle \hat{n}\right\rangle \right) }{\left\langle \hat{n}
\right\rangle ^{2}} ,
\]
and for $Q<0$ ($Q>0$) the state is said to be sub-Poissonian
(super-Poissonian). Also, the $Q$ parameter and the second order correlation
function $g^{\left( 2\right) }$ are related by \cite{walls}
\begin{equation}
Q=\left[ g^{\left( 2\right) }(0)-1\right] \left\langle \hat{n}\right\rangle .
\label{qg}
\end{equation}
If $g^{\left( 2\right) }(0)<1$, then the Glauber-Sudarshan $P$-function
assumes negative values, outside the range of the usual probability
distribution function. Indeed, by Eq. (\ref{qg}) it is readily seen that $
g^{\left( 2\right) }(0)<1$ implies $Q<0$. Since $Q=0$ for a coherent state, a
given state is said to be a ``classical'' one if $Q>0$.
Figs. $13$ to $16$ show the plots of the $Q$ parameter and the correlation
function $g^{\left( 2\right) }(0)$ versus $N$ ,\ for both the doubling and
the logistic functions. Note that TSI is predominantly super-Poissonian for
these two functions ($Q>0$ and $g^{\left( 2\right) }(0)>1$), thus
being a \textquotedblleft classical\textquotedblright\ state in this sense
for $N\gtrsim 12$, while for small values of $N$ ($N<12$), the $Q$
parameter is less than $0$, showing sub-Poissonian statistics and is thus
associated with a \textquotedblleft quantum state\textquotedblright . From
Figs.$13$ and $15$, note that using $C_{0}=0.3$ for the doubling function, $
C_{0}=0.2$ and $\mu =3.49$ for the logistic function (nonchaotic regime in
DST), $Q$ shows a linear dependence on $N$. However, using $C_{0}=0.29711$
for the doubling function, $C_{0}=0.2$ and $\mu =4$ for the logistic
function (chaotic regime in DST), $Q$ oscillates irregularly. Similarly,
from Figs. $14$ and $16$, using $C_{0}=0.3$ for the doubling function, $
C_{0}=0.2$ and $\mu =3.49$ for the logistic function, we see that $g^{\left(
2\right) }(0)$ increases smoothly, while using $C_{0}=0.29711$ for the
doubling function, $C_{0}=0.2$ and $\mu =4$ for the logistic function, the
rise of $g^{\left( 2\right) }(0)$ is rather irregular. This pattern is
observed for other $C_{0}$ as input as well as for other values of the
parameter $\mu $, and whenever the dynamics is chaotic (regular), the $Q$
parameter and the $g^{\left( 2\right) }(0)$ correlation function oscillate
irregularly (regularly).
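A corresponding sketch of the $Q$ parameter and $g^{\left( 2\right) }(0)$, reusing the helpers of Section 2, illustrates the relation (\ref{qg}):
\begin{verbatim}
import numpy as np

def mandel_q_g2(f, c0, N):
    P = np.abs(tsi_coefficients(f, c0, N)) ** 2
    n = np.arange(N + 1)
    mean = float(np.sum(P * n))
    second = float(np.sum(P * n**2))
    var = second - mean**2
    return (var - mean) / mean, (second - mean) / mean**2   # Q, g2(0)

Q, g2 = mandel_q_g2(logistic, 0.2, 50)
print(Q, g2)            # Q equals (g2 - 1) * <n> up to floating-point rounding
\end{verbatim}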
\begin{figure}
\caption{Q parameter for the doubling function. Here we chose $C_{0}
\end{figure}
\begin{figure}
\caption{Second order correlation function for the doubling function. Here
we chose $C_{0}
\end{figure}
\begin{figure}
\caption{Q parameter for the logistic function. Here we chose $C_{0}
\end{figure}
\begin{figure}
\caption{Second order correlation function for the logistic function. Here
we chose $C_{0}
\end{figure}
\subsection{Quadrature and variance}
Quadrature operators are defined as
\begin{eqnarray}
X_{1}& =\frac{1}{2}\left( a+a^{\dagger }\right) ; \nonumber \\
X_{2}& =\frac{1}{2i}\left( a-a^{\dagger }\right) .
\end{eqnarray}
where $a$ ($a^{\dagger }$) \ is the annihilation (creation) operator in Fock
space. Quantum effects arise when the variance of one of the two quadratures
attains a value $\Delta X_{i}<0.5$, $i=1,2$. Figs. $17$ and $18$ show the
plots of the quadrature variance $\Delta X_{i}$ versus $N$. Note in these figures
that the variances increase as $N$ is increased.
\begin{figure}
\caption{Averaged quadratures for the doubling function as the Hilbert
space dimension $N$ is increased to $50$. Dotted (dash-dotted) lines refer to
quadrature variance $1$ $(2)$ for $C_{0}
\end{figure}
\begin{figure}
\caption{Averaged quadratures for the logistic function with $C_{0}
\end{figure}
\subsection{Husimi Q-function}
The Husimi Q-function for TSI is given by
\begin{equation}
Q_{\left\vert TSI\right\rangle }(\beta )=\frac{1}{\pi }\left\vert
\left\langle \beta |TSI\right\rangle \right\vert ^{2},
\end{equation}
where $\beta $ is a coherent state. Figs. $19$ to $22$ show the Husimi
Q-function for both the doubling and the logistic functions for $N=15.$ For
the doubling function, we use $C_{0}=0.29711$ and $C_{0}=0.3$, respectively,
and for the logistic function we use $\mu =4$ and $\mu =3.49$, respectively,
for the chaotic and nonchaotic regimes. Interestingly, even when the chaotic
and nonchaotic regimes of the DST are compared, Husimi Q-functions show
essentially no difference from each other.
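For completeness, the Husimi Q-function can be evaluated on a grid of coherent amplitudes directly from the coefficients, using $\langle\beta|n\rangle=e^{-|\beta|^{2}/2}(\beta^{*})^{n}/\sqrt{n!}$; the grid and helper names are ours, and \texttt{tsi\_coefficients} is the helper from Section 2.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def husimi_q(C, beta):
    n = np.arange(len(C))
    amp = (np.exp(-0.5 * np.abs(beta) ** 2)
           * np.conj(beta) ** n / np.exp(0.5 * gammaln(n + 1)))
    return float(np.abs(np.sum(amp * C)) ** 2 / np.pi)

C = tsi_coefficients(doubling, 0.3, 15)
xs = np.linspace(-4, 4, 81)
Qgrid = [[husimi_q(C, x + 1j * y) for x in xs] for y in xs]
\end{verbatim}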
\begin{figure}
\caption{Husimi Q function for doubling function. Here $C_{0}
\end{figure}
\begin{figure}
\caption{Husimi Q function for doubling function. Here $C_{0}
\end{figure}
\begin{figure}
\caption{Husimi Q function for logistic function. Here $C_{0}
\end{figure}
\begin{figure}
\caption{Husimi Q function for logistic function. Here $C_{0}
\end{figure}
\section{Generation of TSI}
TSI can be generated in various contexts, as for example trapped ions \cite
{serra2}, cavity QED \cite{serra,vogel}, and travelling wave-fields \cite
{dakna}. But due to severe limitation imposed by coherence loss and damping,
we will employ the scheme introduced by Dakna et al. \cite{dakna} in the
realm of running wave field. For brevity, the present application only shows
the relevant steps of Ref.\cite{dakna}, where the reader will find more
details. In this scheme, a desired state $\left\vert \Psi \right\rangle $
composed of a finite number of Fock states $\left\vert n\right\rangle $ can
be written as
\begin{eqnarray}
\left\vert \Psi \right\rangle &=&\sum_{n=0}^{N}C_{n}\left\vert
n\right\rangle =\frac{C_{N}}{\sqrt{N!}}\prod_{n=1}^{N}\left( \hat{a}
^{+}-\beta _{n}^{\ast }\right) \left\vert 0\right\rangle \nonumber \\
&=&\frac{C_{N}}{\sqrt{N!}}\prod_{k=1}^{N}\hat{D}(\beta _{k})\hat{a}^{+}\hat{D
}(\beta _{k})\left\vert 0\right\rangle , \label{teorica}
\end{eqnarray}
where $\hat{D}(\beta _{n})$ stands for the displacement operator and the $
\beta _{n}$ are the roots of the polynomial equation
\begin{equation}
\sum\limits_{n=0}^{N}C_{n}\beta ^{n}=0. \label{raizes}
\end{equation}
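As an aside, the roots in Eq.(\ref{raizes}) are straightforward to obtain
numerically; a minimal sketch (with placeholder amplitudes standing in for the
actual TSI coefficients $C_{0},\ldots ,C_{N}$) could read:
\begin{verbatim}
import numpy as np

# Hypothetical coefficients C_0,...,C_5; replace with the TSI amplitudes.
C = np.array([0.30, 0.60, 0.45, 0.35, 0.30, 0.25])
# np.roots expects the highest-degree coefficient first.
beta = np.roots(C[::-1])             # roots of sum_n C_n beta^n = 0
print(np.abs(beta), np.angle(beta))  # moduli and phases, cf. Tables I-IV
\end{verbatim}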
According to the experimental setup shown in Fig. 1 of Ref.\cite{dakna},
and assuming that zero photons are registered in all detectors, the output
state is
\begin{equation}
\left\vert \Psi \right\rangle \sim \prod_{k=1}^{N}D(\alpha _{k+1})\hat{a}
^{+}T^{\hat{n}}D(\alpha _{k}) \left\vert 0\right\rangle ,
\label{experimental}
\end{equation}
where $T$ is the transmittance of the beam splitter and the ${\alpha }_{k}$ are
experimental parameters. After some algebra, Eq.(\ref{teorica}) and Eq.(
\ref{experimental}) can be connected. In this way, one shows that they
become identical when ${\alpha }_{1}=-\sum_{l=1}^{N}T^{-l}{\alpha }_{l+1}$
and $\alpha _{k}=T^{\ast N-k+1}(\beta _{k-1}-\beta _{k})$ for $k=2,3,4,\ldots,
N$. In the present case the coefficients $C_{n}$ are given by those of the
TSI. The roots $\beta _{k}^{\ast }=\left\vert \beta _{k}\right\vert
e^{-i\varphi _{\beta _{k}}}$ of the characteristic polynomial in Eq.(\ref
{raizes}) and the displacement parameters $\alpha _{k}=\left\vert \alpha
_{k}\right\vert e^{-i\varphi _{\alpha _{k}}}$ are shown in Tables
I--IV for $N=5$.
\begin{table}[tbp]
\caption{The roots $\protect\beta _{k}^{\ast }=|\protect\beta _{k}|e^{-i
\protect\varphi _{\protect\beta _{k}}}$ of the characteristic polynomial and
the displacement parameters $\protect\alpha _{k}^{\ast }=|\protect\alpha
_{k}|e^{-i\protect\varphi _{\protect\alpha _{k}}}$ are given for TSI using
the doubling function for $C_{0}=0.3$ (coinciding with nonchaotic behavior
in the DST sense), $N=5$ and $T=0.862$. The probability of producing the
state is 0.22$\%.$}\centering$
\begin{tabular}{||c||c||c||c||c||}
\hline\hline
N & $\left| \beta _{k}\right| $ & $\varphi _{\beta _{k}}$ & $\left| \alpha
_{k}\right| $ & $\varphi _{\alpha _{k}}$ \\ \hline\hline
1 & 2.169 & 2.638 & 1.187 & -0.220 \\ \hline\hline
2 & 2.169 & -2.638 & 1.155 & 1.570 \\ \hline\hline
3 & 0.545 & 3.141 & 1.096 & -2.483 \\ \hline\hline
4 & 1.460 & 1.084 & 1.323 & -2.331 \\ \hline\hline
5 & 1.460 & -1.084 & 2.225 & 1.570 \\ \hline\hline
6 & & & 1.460 & -1.084 \\ \hline\hline
\end{tabular}
$
\end{table}
\begin{table}[tbp]
\caption{The roots $\protect\beta _{k}^{\ast }=|\protect\beta _{k}|e^{-i
\protect\varphi _{\protect\beta _{k}}}$ of the characteristic polynomial and
the displacement parameters $\protect\alpha _{k}^{\ast }=|\protect\alpha
_{k}|e^{-i\protect\varphi _{\protect\alpha _{k}}}$ are given for a TSI using
the doubling function for $C_{0}=0.29711$ (coinciding with chaotic behavior
in the DST sense), $N=5$ and $T=0.867$. The probability of producing the
state is 0.21$\%$}\centering$
\begin{tabular}{||c||c||c||c||c||}
\hline\hline
N & $\left| \beta _{k}\right| $ & $\varphi _{\beta _{k}}$ & $\left| \alpha
_{k}\right| $ & $\varphi _{\alpha _{k}}$ \\ \hline\hline
1 & 2.306 & 2.692 & 1.372 & -0.198 \\ \hline\hline
2 & 2.306 & -2.692 & 1.130 & 1.570 \\ \hline\hline
3 & 0.543 & 3.141 & 1.193 & -2.563 \\ \hline\hline
4 & 1.489 & 1.089 & 1.357 & -2.321 \\ \hline\hline
5 & 1.489 & -1.089 & 2.289 & 1.570 \\ \hline\hline
6 & & & 1.489 & -1.089 \\ \hline\hline
\end{tabular}
$
\end{table}
\begin{table}[t]
\caption{The roots $\protect\beta _{k}^{\ast }=|\protect\beta _{k}|e^{-i
\protect\varphi _{\protect\beta _{k}}}$ of the characteristic polynomial and
the displacement parameters $\protect\alpha _{k}^{\ast }=|\protect\alpha
_{k}|e^{-i\protect\varphi _{\protect\alpha _{k}}}$ are given for TSI using
the logistic function for $C_{0}=0.2$, $\protect\mu =3.49$ (coinciding with
nonchaotic behavior in the DST sense), $N=5$ and $T=0.893$. The probability
of producing the state is 0.11$\%$}\centering$
\begin{tabular}{||c||c||c||c||c||}
\hline\hline
N & $\left| \beta _{k}\right| $ & $\varphi _{\beta _{k}}$ & $\left| \alpha
_{k}\right| $ & $\varphi _{\alpha _{k}}$ \\ \hline\hline
1 & 3.948 & 3.141 & 2.794 & 0.051 \\ \hline\hline
2 & 0.609 & 2.566 & 2.195 & -3.045 \\ \hline\hline
3 & 0.609 & -2.566 & 0.472 & 1.570 \\ \hline\hline
4 & 1.828 & 1.373 & 1.830 & -1.959 \\ \hline\hline
5 & 1.828 & -1.373 & 3.202 & 1.570 \\ \hline\hline
6 & & & 1.828 & -1.373 \\ \hline\hline
\end{tabular}
$
\end{table}
\begin{table}[t]
\caption{The roots $\protect\beta _{k}^{\ast }=|\protect\beta _{k}|e^{-i
\protect\varphi _{\protect\beta _{k}}}$ of the characteristic polynomial and
the displacement parameters $\protect\alpha _{k}^{\ast }=|\protect\alpha
_{k}|e^{-i\protect\varphi _{\protect\alpha _{k}}}$ are given for TSI using
the logistic function for $C_{0}=0.2,$ $\protect\mu =4$ (coinciding with
chaotic behavior in the DST sense), $N=5$ and $T=0.879$. The probability of
producing the state is 0.15$\%$}\centering$
\begin{tabular}{||c||c||c||c||c||}
\hline\hline
N & $\left| \beta _{k}\right| $ & $\varphi _{\beta _{k}}$ & $\left| \alpha
_{k}\right| $ & $\varphi _{\alpha _{k}}$ \\ \hline\hline
1 & 3.290 & 3.141 & 2.027 & 0.094 \\ \hline\hline
2 & 0.563 & 2.708 & 1.665 & -3.056 \\ \hline\hline
3 & 0.563 & -2.708 & 0.321 & 1.570 \\ \hline\hline
4 & 1.893 & 1.255 & 1.787 & -2.064 \\ \hline\hline
5 & 1.893 & -1.255 & 3.165 & 1.570 \\ \hline\hline
6 & & & 1.893 & -1.255 \\ \hline\hline
\end{tabular}
$
\end{table}
For $N=5$, the best probability of producing the TSI is $0.22\%$ when the
doubling function is used, and $0.15\%$ when the logistic function is used.
The beam-splitter transmittance which optimizes this probability is around $
T=0.878$.
\section{Fidelity of generation of TSI}
Until now we have assumed all detectors and beam splitters to be ideal.
Although very good beam splitters are available with current technology, the
same is not true for photodetectors in the optical domain. Thus, let us now
take into account the quantum efficiency $\eta $ of the photodetectors. For
this purpose, we use the Langevin operator technique introduced in \cite
{norton1} to obtain the fidelity of generating the TSI.
Output operators accounting for the detection of a given field $\hat{\alpha}$
reaching the detectors are given by \cite{norton1}
\begin{equation}
\widehat{\alpha }_{out}=\sqrt{\eta }\widehat{\alpha }_{in}+\widehat{L}
_{\alpha }, \label{E9}
\end{equation}
where $\eta $ stands for the efficiency of the detector and $\widehat{L}
_{\alpha }$, acting on the environment states, is the noise or Langevin
operator associated with losses into the detectors placed in the path of
modes $\widehat{\alpha }=a,b$. We assume that the detectors couple neither
different modes $a,b$ nor the Langevin operators $\widehat{L}_{\alpha }$, so
the following commutation relations are readily obtained from Eq.(\ref{E9}):
\begin{eqnarray}
\left[ \widehat{L}_{\alpha },\widehat{L}_{\alpha }^{\dagger }\right]
&=&1-\eta , \label{E10a} \\
\left[ \widehat{L}_{\alpha },\widehat{L}_{\beta }^{\dagger }\right] &=&0.
\label{E10b}
\end{eqnarray}
The ground-state expectation values for pairs of Langevin operators are
\begin{eqnarray}
\left\langle \widehat{L}_{\alpha }\widehat{L}_{\alpha }^{\dagger
}\right\rangle &=&1-\eta , \label{E11a} \\
\left\langle \widehat{L}_{\alpha }\widehat{L}_{\beta }^{\dagger
}\right\rangle &=&0, \label{E11b}
\end{eqnarray}
which are useful relations, especially at optical frequencies, where the state
of the environment can be very well approximated by the vacuum state even
at room temperature.
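Indeed, with the environment in the vacuum state one has $\langle \widehat{L}
_{\alpha }^{\dagger }\widehat{L}_{\alpha }\rangle =0$, so Eq.(\ref{E11a})
follows directly from the commutator (\ref{E10a}),
\begin{equation*}
\left\langle \widehat{L}_{\alpha }\widehat{L}_{\alpha }^{\dagger
}\right\rangle =\left\langle \left[ \widehat{L}_{\alpha },\widehat{L}
_{\alpha }^{\dagger }\right] \right\rangle +\left\langle \widehat{L}
_{\alpha }^{\dagger }\widehat{L}_{\alpha }\right\rangle =1-\eta .
\end{equation*}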
Let us now apply the scheme of Ref.\cite{dakna} to the present case. For
simplicity we will assume that all detectors have high efficiency ($\eta
\gtrsim 0.9$). This assumption allows us to simplify the resulting
expression by neglecting terms of order higher than $(1-\eta )^{2}$. When we
do that, instead of $|TSI\rangle $, we find the (mixed) state $|\Psi
_{FE}\rangle $ describing the field plus environment, the latter being due
to losses coming from detectors with nonunit efficiency. We have,
\begin{eqnarray}
\left\vert \Psi _{FE}\right\rangle &\sim &R^{N}D(\alpha _{N+1})\hat{a}
^{\dagger }T^{\hat{n}}D(\alpha _{N})\hat{a}^{\dagger }T^{\hat{n}} \nonumber
\\
&\times &D(\alpha _{N-1})\ldots \hat{a}^{\dagger }T^{\hat{n}}D(\alpha
_{1})\left\vert 0\right\rangle \widehat{L}_{0}^{\dagger } \nonumber \\
&+&R^{N-1}D(\alpha _{N+1})\hat{a}^{\dagger }T^{\hat{n}}D(\alpha _{N})\hat{a}
^{\dagger }T^{\hat{n}} \nonumber \\
&\times &D(\alpha _{N-1})\ldots \widehat{L}_{1}^{\dagger }T^{\hat{n}
}D(\alpha _{1})\left\vert 0\right\rangle \nonumber \\
&+&R^{N-1}D(\alpha _{N+1})\hat{a}^{\dagger }T^{\hat{n}}D(\alpha _{N})
\widehat{L}_{N-1}^{\dagger }T^{\hat{n}} \nonumber \\
&\times &D(\alpha _{N-1})\ldots \hat{a}^{\dagger }T^{\hat{n}}D(\alpha _{1})
\left\vert 0\right\rangle \nonumber \\
&+&R^{N-1}D(\alpha _{N+1})\widehat{L}_{N}^{\dagger }T^{\hat{n}}D(\alpha _{N})
\hat{a}^{\dagger }T^{\hat{n}} \nonumber \\
&\times &D(\alpha _{N-1})\ldots \hat{a}^{\dagger }T^{\hat{n}}D(\alpha _{1})
\left\vert 0\right\rangle , \label{damped}
\end{eqnarray}
where, for brevity, we have omitted the kets corresponding to the
environment. Here $R$ is the reflectance of the beam splitter, $\widehat{L}
_{0}^{\dagger }=\mathbf{1}$ is the identity operator, and $\widehat{L}_{k}$, $
k=1,2,\ldots ,N$, stands for losses in the first, second, $\ldots $, $N$-th detector.
Although the $\widehat{L}_{k}$'s commute with any system operator, we
have maintained the order above to keep clear the set of possibilities for
photoabsorption: the first term, which includes $\widehat{L}_{0}^{\dagger }=
\mathbf{1}$, indicates the probability of no absorption; the second term,
which includes $\widehat{L}_{1}^{\dagger }$, indicates the probability of
absorption in the first detector; and so on. Note that in the case of absorption
at the $k$-th detector, the creation operator $\hat{a}^{\dagger }$ is replaced by the
creation Langevin operator $\widehat{L}_{k}^{\dagger }$. Other possibilities,
such as absorption in more than one detector, lead to a probability of order
smaller than $(1-\eta )^{2}$, which will be neglected.
Next, we have to compute the fidelity \cite{nota}, $F=\left\Vert
\left\langle \Psi \right. \left\vert \Psi _{FE}\right\rangle \right\Vert
^{2} $, where $\left\vert \Psi \right\rangle $ is the ideal state given by
Eq.(\ref{experimental}), here corresponding to the TSI characterized by the
parameters shown in Tables I-IV, and $\left\vert \Psi _{FE}\right\rangle $
is the state given in Eq.(\ref{damped}). Assuming $\eta =0.99$, $0.95$,
and $0.90$, and starting with $C_{0}=0.3$ and $C_{0}=0.29711$ for the
doubling function, we find $F\simeq 0.9983$, $0.9943$, and $0.9909$,
respectively; for the logistic function, starting with $C_{0}=0.2$, $\mu
=3.49$ and $\mu =4$, we find $F\simeq 0.9986$, $0.9944$, and $0.9911$,
respectively. These high fidelities show that efficiencies around $0.9$ lead
to states whose degradation due to losses is not dramatic for $N=5$.
\section{Comments and conclusion}
In this paper we have introduced new states of the quantized electromagnetic
field, named truncated states whose probability amplitudes are obtained through the
iteration of a function (TSI). Although TSI can be built using various
functions, such as the logistic, sine, or exponential functions, we have
focused our attention on the doubling and the logistic functions, which, as
is well known from dynamical systems theory, can exhibit chaotic behavior
in the interval (0,1]. To characterize the TSI for the doubling and logistic
functions we have studied several of their features, including some
statistical properties, as well as the behavior of these features when the
dimension $N$ of the Hilbert space is increased. Interestingly, we found a
transition from sub-Poissonian to super-Poissonian statistics
when $N$ is relatively small ($N\sim 12$). Moreover, the photon number
distribution, which is analogous to the concept of orbits in the study of the
dynamics of maps, shows a regular or rather a \textquotedblleft
chaotic\textquotedblright\ behavior depending on whether or not fixed or
periodic points exist for the iterated function. Interestingly enough, we
have found a pattern when the properties of the TSI for the logistic function are
compared with those of the TSI for the doubling function from the point of view
of dynamical systems theory (DST). For example, since $P_{n}$ has an analog
in the orbits of DST, it is straightforward to identify repetitions (or
\textit{periods}) in $P_{n}$, if there are any, when the Hilbert space is
increased. Surprisingly, although the doubling and the logistic functions are
different from each other, when other properties such as the even and odd photon
number distributions, the average photon number and its variance, the Mandel
parameter, and the second order correlation function were studied, they all
presented the same pattern: if, from the point of view of DST, the
coefficients of the TSI for the doubling and the logistic functions correspond to
a nonchaotic (chaotic) regime, all those properties increase smoothly
(irregularly) when the Hilbert space is increased.
\section{\textbf{Acknowledgments}}
NGA thanks CNPq, a Brazilian agency, and VPG-Universidade Cat\'{o}lica de Goi
\'{a}s, and WBC thanks CAPES, for partially supporting this work.
\end{document} |
\begin{document}
\begin{abstract}
Noise or fluctuations play an important role in the modeling and understanding of the behavior
of various complex systems in nature. Fokker-Planck equations are
powerful mathematical tools to study behavior of such systems subjected
to fluctuations. In this paper we establish local well-posedness
result of a new nonlinear Fokker-Planck equation. Such equations
appear in the modeling of the grain boundary dynamics during
microstructure evolution in the polycrystalline materials and obey special energy laws.
\end{abstract}
\title{Local well-posedness of a nonlinear Fokker-Planck model}
\section{Introduction}
\label{sec:1}
\par Fluctuations play an essential role in the modeling and understanding of the behavior
of various complex processes. Many natural systems are affected by different
external and internal mechanisms that are not known explicitly, and
very often described as fluctuations or noise. Fokker-Planck models
are widely used
as a versatile mathematical tool to describe the macroscopic behavior of the systems
that undergo such fluctuations, see more detailed discussion and
examples in \cite{MR987631,MR2053476,MR3932086,doi:10.1137/S0036141096303359,MR3019444,MR3485127,MR4196904,MR4218540}, among
many others. In our previous work we derived Fokker-Planck type
systems as a part of grain growth models of polycrystalline materials,
e.g. \cite{DK:gbphysrev,MR2772123,MR3729587,epshteyn2021stochastic}.
From the thermodynamical point of view, many Fokker-Planck type systems can be viewed as special cases of
general diffusion \cite{GiKiLi16}. They can be derived from the kinematic continuity equations, the conservation law, and the specific energy
dissipation law, using the energetic variational approaches \cite{onsager1931reciprocal2,GiKiLi16}.
We want to point out that while the linear and nonlinear
Fokker-Planck models with the energy laws can be obtained using such
energetic variational approach, not all Fokker-Planck systems
derived from stochastic differential equations (SDEs) by the Ito process
have underlying energy law principles \cite{risken1996fokker}.
First, consider the following conservation law subject to the natural boundary condition,
\begin{equation}
\label{eq:4-1}
\left\{
\begin{aligned}
\frac{\partial f}{\partial t}
+
\nabla \cdot
(f\vec{u})
&=
0,&\quad
&t>0,\ x\in\Omega, \\
f\vec{u}\cdot\nu|_{\partial\Omega}
&=
0,&\quad
&t>0.
\end{aligned}
\right.
\end{equation}
Here $\Omega\subset\mathbb{R}^n$ is a convex domain,
$f=f(x,t):\Omega\times[0,T)\rightarrow\mathbb{R}$ is a probability density
function, $\vec{u}$ is the velocity vector which
depends on $x$, $t$, and the probability density function $f$, and $\nu$
is an outer unit normal to the boundary $\partial\Omega$ of the domain
$\Omega$. We assume that the above system \eqref{eq:4-1} also satisfies
the following energy law,
\begin{equation}
\label{eq:4-2}
\frac{d}{dt}
\int_\Omega \omega(f,x)\,dx
=
-
\int\pi(f,x,t)
|\vec{u}|^2\,dx.
\end{equation}
Here, $\omega=\omega(f,x)$ represents the free energy,
which defines the equilibrium state of the
system, and $\pi(f,x,t)$
is the so-called mobility function which defines the
evolution of the system to the equilibrium state. The specific forms of these quantities
will be
discussed in more detail below. Now, take a formal time-derivative of the
left-hand side of \eqref{eq:4-2}; then, using integration by parts
together with system \eqref{eq:4-1}, we get,
\begin{equation}
\label{eq:4-3}
\begin{split}
\frac{d}{dt}
\int_\Omega \omega(f,x)\,dx
&=
\int_\Omega \omega_f(f,x)f_t\,dx
\\
&=
-\int_\Omega \omega_f(f,x)\nabla\cdot(f\vec{u})\,dx
=
\int_\Omega \nabla\omega_f(f,x)\cdot(f\vec{u})\,dx.
\end{split}
\end{equation}
Using relations \eqref{eq:4-2} and \eqref{eq:4-3}, we have that,
\begin{equation*}
-
\int\pi(f,x,t)
|\vec{u}|^2\,dx
=
\int_\Omega \nabla\omega_f(f,x)\cdot(f\vec{u})\,dx.
\end{equation*}
Thus, the velocity field $\vec{u}$ of the model
\eqref{eq:4-1}-\eqref{eq:4-2} should satisfy the following relation,
\begin{equation}
\label{eq:4-4}
-
\pi(f,x,t)
\vec{u}
=
f
\nabla(\omega_f(f,x)).
\end{equation}
In fact \eqref{eq:4-4} represents the force balance equation for the system.
The left hand side represents the dissipative force and the right hand
side is the conservative force obtained using the
free energy of the system. This derivation is consistent with
the general energetic variational approach in \cite{onsager1931reciprocal2,GiKiLi16}.
Let us put this discussion in the context of linear and nonlinear
Fokker-Planck models now.
Such systems arise in many physical and engineering
applications, e.g., \cite{coleman1967thermodynamics, dafermos1978second,
DK:gbphysrev,MR2772123,MR3729587,epshteyn2021stochastic,liu2022brinkman}. One
example of the application of Fokker-Planck systems is the modeling
of grain growth in polycrystalline materials. Many technologically
useful materials appear as polycrystalline microstructures, composed
of small monocrystalline cells or grains, separated by
interfaces, or grain boundaries of crystallites with different
lattice orientations. In a planar grain boundary network, a point
where three grain
boundaries meet is called a triple junction point, see
Fig. ~\ref{figTJ}. Grain growth is a very complex multiscale and
multiphysics process influenced by the dynamics of grain boundaries,
triple junctions and the dynamics of lattice misorientations
(difference in the lattice orientations between two neighboring grains that
share the grain boundary, Fig. ~\ref{figTJ}), e.g.,
\cite{Katya-Chun-Mzn4,PATRICK2023118476,paperRickman}. In case of the grain growth
modeling \cite{epshteyn2021stochastic}, in the Fokker-Planck system, $f$
may describe the joint distribution function of the lattice misorientation of the
grain boundaries and of the position of the triple junctions, $\phi$ may
describe the grain boundary energy density, and $D$ is
related to the absolute temperature of the entire system
\cite{lai2022positivity} (it can be viewed as a function of
the fluctuation parameters of the lattice misorientations and of the
position of the triple junctions due to fluctuation-dissipation
principle \cite{epshteyn2021stochastic}).
\begin{figure}
\caption{Illustration of the three grain boundaries that meet at a triple
junction which is positioned at the $\vec{a}
\label{figTJ}
\end{figure}
Consider first the case when
$\omega(f,x)=Df(\log f-1)+f\phi$ (free energy density) and
$\pi(f,x,t)=f(x,t)$ (mobility), where $D>0$
is a positive constant and the potential function $\phi=\phi(x)$ is a given function. A constant
$D$ corresponds to a system with homogeneous
absolute temperature \cite{coleman1967thermodynamics,ericksen1998introduction}. We will recover the corresponding linear
Fokker-Planck model from the conservation and energy laws,
\eqref{eq:4-1}-\eqref{eq:4-2}.
First, the direct computation yields,
\begin{equation*}
f\nabla \omega_f
=
f
\nabla(D\log f+\phi(x)).
\end{equation*}
Hence, from \eqref{eq:4-4}, the velocity field $\vec{u}$ should be,
\begin{equation}
\label{eq:4-5}
\vec{u}
=
-
\nabla(D\log f+\phi(x))=-\Big(D\frac{\nabla f}{f}+\nabla \phi(x)\Big).
\end{equation}
Using vector field \eqref{eq:4-5} in the conservation law \eqref{eq:4-1}, we
obtain the following \emph{linear} Fokker-Planck equation,
\begin{equation}
\label{eq:4-6}
\frac{\partial f}{\partial t}
=\nabla\cdot(\nabla \phi(x) f)+\nabla\cdot(D\nabla f).
\end{equation}
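As a quick consistency check, the Gibbs-type density $f^{\mathrm{eq}}(x)=Z^{-1}\exp(-\phi(x)/D)$, with $Z$ a normalizing constant, is a steady state of \eqref{eq:4-6}: since $D\nabla f^{\mathrm{eq}}=-\nabla\phi(x)\,f^{\mathrm{eq}}$, the flux $\nabla\phi(x) f^{\mathrm{eq}}+D\nabla f^{\mathrm{eq}}$ vanishes identically, and hence so does the right-hand side of \eqref{eq:4-6}.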
Note, that the linear Fokker-Planck equation has the associated
Langevin equation \cite{risken1996fokker,gardiner1985handbook},
\begin{equation}
\label{eq:4-6l}
dx=-\nabla\phi(x)dt+\sqrt{2D}dB.
\end{equation}
The linear Fokker-Planck equation \eqref{eq:4-6} can also be derived from
the corresponding
Langevin equation \eqref{eq:4-6l} (see \cite{MR3932086}).
Some diffusion equations can be interpreted using the idea of Brownian motion \cite{gardiner1985handbook}. Consider the random process
\begin{equation}
dx = \upsilon (x) dt + \sigma (x) dB,
\end{equation}
where $B$ is a standard Brownian motion. With a Taylor expansion of the
probability density function $f (x, t)$, one can obtain the following PDEs:
\begin{itemize}
\item Ito calculus provides, $f_t + \nabla \cdot(\upsilon f)= \frac{1}{2} \Delta(\sigma^2 f) $.
\item The derivation using Stratonovich integral yields, $f_t + \nabla\cdot (\upsilon f ) =\frac{1}{2} \nabla\cdot [\sigma \nabla (\sigma f )]$.
\item One can also derive PDE with self-adjoint diffusion term,
namely, $ f_t + \nabla\cdot (\upsilon f ) =\frac{1}{2} \nabla\cdot [\sigma^2 \nabla (f )]$.
\end{itemize}
In many cases, these models can also be treated in the general framework of the energetic variational approach.
Following the fluctuation-dissipation theorem
\cite{de2013non,kubo1966fluctuation}, taking the convection coefficient
$\upsilon (x) = -\frac{1}{2} \sigma (x) ^2 \nabla \phi$,
and assuming that $f$ satisfies the conservation law $f_t +\nabla\cdot( u f) = 0$, the equations above
satisfy, and can also be obtained from the variation of, the following
energy laws \cite{GiKiLi16} (the It\^o case is checked explicitly below),
\begin{itemize}
\item For Ito, $\frac{d}{dt} \int_{\Omega} [ f\ln (\frac{1}{2}\sigma^2 f) + \phi f ] \, dx = - \int_{\Omega} \frac{f}{\frac{1}{2}\sigma^2 } |u|^2 \, dx.$
\item For Stratonovich, $\frac{d}{dt} \int_{\Omega} [ f\ln (\sigma f) + \phi f ] \, dx = - \int_{\Omega} \frac{f}{\frac{1}{2}\sigma^2 } |u|^2 \, dx.$
\item For self-adjoint case, $\frac{d}{dt} \int_{\Omega} [ f\ln f+ \phi f ] \, dx = - \int_{\Omega} \frac{f}{\frac{1}{2}\sigma^2 } |u|^2 \, dx,$
\end{itemize}
where $\Omega\subset\mathbb{R}^d$ is a bounded domain, $d\ge 1$.
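For instance, in the It\^o case the stated law can be checked directly (assuming no-flux boundary conditions, so that boundary terms vanish): the conservation-law velocity is
\begin{equation*}
u=\upsilon-\frac{1}{2f}\nabla(\sigma^2 f)
=-\frac{\sigma^2}{2}\nabla\Big(\phi+\ln\big(\frac{1}{2}\sigma^2 f\big)\Big),
\end{equation*}
so that
\begin{equation*}
\frac{d}{dt}\int_{\Omega}\Big[f\ln\big(\frac{1}{2}\sigma^2 f\big)+\phi f\Big]\,dx
=\int_{\Omega}\nabla\Big(\ln\big(\frac{1}{2}\sigma^2 f\big)+\phi\Big)\cdot(fu)\,dx
=-\int_{\Omega}\frac{f}{\frac{1}{2}\sigma^2}|u|^2\,dx.
\end{equation*}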
In this paper, instead of starting from the stochastic differential equations, we will derive the system from the energetic aspects,
by prescribing the kinematic conservation law and the energy dissipation law.
We will consider the case of the inhomogeneous absolute
temperature and more general dissipation mechanism. In
particular, we look at the case with
$\omega(f,x)=D(x)f(\log f-1)+f\phi(x)$,
and
$\pi(f,x,t)=2D(x)f/(b(x,t))^2$, where
$D=D(x)$ and $\phi=\phi(x)$ are positive functions. The function $b(x,t)$ is also
positive, and provides the extra freedom in the dissipation mechanism.
As discussed above, such
systems may arise in the grain growth modeling, e.g. \cite{epshteyn2021stochastic,Katya-Kamala-Chun-Masashi}.
In particular, the temperature, in terms of $D$ in this context, will account for some information of the under-resolved mechanisms in the systems,
such as critical events/disappearance events (e.g. grain disappearance, facet/grain boundary disappearance, facet
interchange, splitting of unstable junctions and nucleation of the
grains). The specific form of the mobility function $\pi(f, x, t)$ here is the direct
consequence of the fluctuation-dissipation theorem
\cite{kubo1966fluctuation,de2013non,epshteyn2021stochastic}, which
ensures that the system under consideration will approach the equilibrium configuration.
In this case, the conservative force takes the form
\begin{equation*}
f\nabla \omega_f
=
f
\nabla(D(x)\log f+\phi(x)).
\end{equation*}
Hence, from \eqref{eq:4-4}, the velocity field $\vec{u}$ will be,
\begin{equation}
\label{eq:2-0-5}
\vec{u}
=
-
\frac{(b(x,t))^2}{2D(x)}
\nabla(D(x)\log f+\phi(x)).
\end{equation}
Using formula \eqref{eq:2-0-5} in the conservation law \eqref{eq:4-1}, we
obtain the \emph{nonlinear} Fokker-Planck equation (with energy law
as defined in \eqref{eq:4-2}, see also discussion below in Section~\ref{sec:2}),
\begin{equation}
\label{eq:4-8}
\frac{\partial f}{\partial t}
-
\nabla \cdot
\left(
\frac{(b(x,t))^2}{2D(x)}
f\nabla(D(x)\log f+\phi(x))
\right)
=
0.
\end{equation}
Note, that the nonlinearity $f\log f$ in \eqref{eq:4-8} comes as a
result of inhomogeneity of the absolute
temperature $D(x)$. In addition, in contrast with the linear
Fokker-Planck model \eqref{eq:4-6}, the nonlinear Fokker-Planck model
does not have the corresponding Langevin equation. Instead it has the
associated stochastic differential equation with coefficients that
depend on the probability density $f(x, t)$.
\par This work establishes the
local well-posedness of the new nonlinear Fokker-Planck type model
\eqref{eq:4-8} subject to boundary and initial conditions. Note that the inhomogeneity and the resulting non-linearity in
the new model \eqref{eq:4-8} are very
different from those in the vast existing literature on Fokker-Planck type models. They come as a result of the inhomogeneous
absolute temperature in the free energy of the system
\eqref{eq:2-0-12}. Such absolute temperature gives rise to a nonstandard nonlinearity of the
form $f\nabla D(x) \log f$ in the corresponding PDE model
(see \eqref{eq:4-8}, or \eqref{eq:2-0-4} in Section~\ref{sec:2} below).
For example, conventional entropy methods, including the Bakry-\'Emery
method \cite{MR3497125}, do not extend
to such models in a standard or trivial way. In addition, models like
\eqref{eq:4-8} or \eqref{eq:2-0-4} appear as subsystems in much
more complex systems in the grain growth modeling of polycrystalline
materials, and hence one needs to know properties of the classical
solutions to such PDEs.
\par The paper is organized as follows. In Section~\ref{sec:2}, we first
state the nonlinear Fokker-Planck system and validate energy law using
given partial differential equation and the boundary conditions. After
that we show local existence of the solution to the model. In Section~\ref{sec:3},
we establish uniqueness of the local solution. Some conclusions are
given in Section~\ref{sec:4}.
\section{Existence of a local solution}
\label{sec:2}
In this section, we will provide a constructive proof of the existence
of a local classical solution of the following nonlinear Fokker-Planck
type equation with the natural boundary condition (see also
\eqref{eq:4-8} in Section~\ref{sec:1}):
\begin{equation}
\label{eq:2-0-4}
\left\{
\begin{aligned}
&\frac{\partial f}{\partial t}
=
-
\nabla\cdot
\left(
\left(
-
\frac{(b(x,t))^2}{2D(x)}\nabla\phi(x)
-
\frac{(b(x,t))^2}{2D(x)}\log f\nabla D(x)
\right)
f
\right)
+
\frac{1}{2}
\nabla\cdot
((b(x,t))^2\nabla f),
\quad
x\in\Omega,\ t>0, \\
&\left(
\frac{(b(x,t))^2}{2D(x)}f\nabla\phi(x)
+
\frac{(b(x,t))^2}{2D(x)}f\log f\nabla D(x)
+
\frac12
(b(x,t))^2\nabla f
\right)
\cdot
\nu
\bigg|_{\partial\Omega}
=
0,
\quad
t>0,\\
&f(x,0)
=
f_0(x),\quad
x\in\Omega,
\end{aligned}
\right.
\end{equation}
where $\Omega\subset\mathbb{R}^d$ is a bounded domain, $d\ge 1$. Here $b=b(x,t)$
is a positive function on $\Omega\times[0,\infty)$, $D=D(x)$ is a
positive function on $\Omega$, $f_0=f_0(x)$ is a
suitable (to be defined later through $\rho_0$ in
\eqref{eq:2-0-2} and \eqref{eq:2-0-3}) positive probability density function on $\Omega$ and
$\phi=\phi(x)$ is a function on $\Omega$. A function $f=f(x,t)>0$ is an
unknown probability density function.
The Fokker-Planck equation \eqref{eq:2-0-4} has a dissipative structure
for the following free energy,
\begin{equation}
\label{eq:2-0-12}
F[f]
:=
\int_{\Omega}
\left(
D(x)f(x,t)(\log f(x,t)-1)
+
f(x,t)\phi(x)
\right)
\,dx.
\end{equation}
Below, we validate an energy law for the Fokker-Planck equation
\eqref{eq:2-0-4} by performing formal calculations.
\begin{proposition}
Let $b=b(x,t)$, $D=D(x)$, $f_0=f_0(x)$, $\phi=\phi(x)$ be
sufficiently smooth
functions. Then a classical solution $f$ of the Fokker-Planck equation
\eqref{eq:2-0-4} satisfies the following energy law,
\begin{equation}
\label{eq:2-0-8}
\frac{d}{dt}F[f]
=
-
\int_\Omega
\frac{(b(x,t))^2}{2D(x)}
\left|
\nabla
(
\phi(x)
+
D(x)\log f(x,t)
)
\right|^2
f(x,t)
\,dx.
\end{equation}
\end{proposition}
\begin{proof}
Here, we will validate the energy law via calculation of the rate of change
of the free energy $F$ (see also relevant discussion in Section~\ref{sec:1} where we
postulated the energy law for the model and derived the velocity
field, and hence the PDE as
a consequence). By direct computation of
$\frac{dF}{dt}$ and using the Fokker-Planck equation \eqref{eq:2-0-4} together
with $\nabla f=f\nabla\log f$, we have,
\begin{equation}
\label{eq:2-0-6}
\begin{split}
\frac{d}{dt}F[f]
&=
\int_\Omega
\left(
D(x)\log f(x,t)
+
\phi(x)
\right)
\frac{\partial f}{\partial t}(x,t)
\,dx \\
&=
-
\int_\Omega
\left(
D(x)\log f(x,t)
+
\phi(x)
\right)
\nabla\cdot(f(x,t)\vec{u})
\,dx,
\end{split}
\end{equation}
where we introduced the velocity vector field as,
\begin{equation}
\label{eq:2-0-13}
\vec{u}
:=
-
\frac{(b(x,t))^2}{2D(x)}\nabla\phi(x)
-
\frac{(b(x,t))^2}{2D(x)}\log f(x,t)\nabla D(x)
-
\frac12
(b(x,t))^2\nabla \log f(x,t).
\end{equation}
Note that, $\nabla (D(x)\log f(x,t))=\log f(x,t)\nabla D(x)+D(x)\nabla\log f(x,t)$,
hence formula \eqref{eq:2-0-13} becomes \eqref{eq:2-0-5}.
Next, applying integration by parts with the natural boundary condition
\eqref{eq:2-0-4}, we obtain,
\begin{multline}
\label{eq:2-0-7}
\int_\Omega
\left(
D(x)\log f(x,t)
+
\phi(x)
\right)
\nabla\cdot(f(x,t)\vec{u})
\,dx \\
=
-
\int_\Omega
\nabla
\left(
D(x)\log f(x,t)
+
\phi(x)
\right)
\cdot
(f(x,t)\vec{u})
\,dx.
\end{multline}
From \eqref{eq:2-0-6}, \eqref{eq:2-0-5}, and \eqref{eq:2-0-7},
we obtain the energy law,
\begin{equation*}
\frac{d}{dt}F[f]
=
-
\int_\Omega
\frac{(b(x,t))^2}{2D(x)}
\left|
\nabla
\left(
\phi(x)
+
D(x)\log f(x,t)
\right)
\right|^2
f(x,t)
\,dx.
\end{equation*}
\end{proof}
One can observe from the energy law \eqref{eq:2-0-8} that an equilibrium state
$f^{\mathrm{eq}}$ for the Fokker-Planck equation \eqref{eq:2-0-4}
satisfies $\nabla(\phi(x)+D(x)\log f^{\mathrm{eq}})=0$. Here, we derive
the explicit representation of the equilibrium solution for the Fokker-Planck
equation \eqref{eq:2-0-4}.
\begin{proposition}
Let $b=b(x,t)$, $D=D(x)$, $f_0=f_0(x)$, $\phi=\phi(x)$ be sufficiently smooth
functions. Then the smooth equilibrium state $f^{\mathrm{eq}}$ for the
Fokker-Planck equation \eqref{eq:2-0-4} is given by,
\begin{equation}
\label{eq:2-0-9}
f^{\mathrm{eq}}(x)
=
\exp
\left(
-\frac{\phi(x)-\Cl{const:2.9}}{D(x)}
\right),
\end{equation}
where $\Cr{const:2.9}$ is a constant, which satisfies,
\begin{equation*}
\int_\Omega
\exp
\left(
-\frac{\phi(x)-\Cr{const:2.9}}{D(x)}
\right)
\,dx
=
1.
\end{equation*}
\end{proposition}
\begin{proof}
We have from the energy law \eqref{eq:2-0-8} that,
\begin{equation*}
0
=
\frac{d}{dt}F[f^{\mathrm{eq}}]
=
-
\int_\Omega
\frac{(b(x,t))^2}{2D(x)}
\left|
\nabla
\left(
\phi(x)
+
D(x)\log f^{\mathrm{eq}}(x)
\right)
\right|^2
f^{\mathrm{eq}}(x)
\,dx,
\end{equation*}
hence $\nabla \left(\phi(x) + D(x)\log f^{\mathrm{eq}}
\right)=0$. Thus, there is a constant $\Cr{const:2.9}$ such that
\begin{equation*}
\phi(x)
+
D(x)
\log f^{\mathrm{eq}}(x)
=
\Cr{const:2.9},
\end{equation*}
and hence
\begin{equation*}
f^{\mathrm{eq}}(x)
=
\exp
\left(
-
\frac{\phi(x)-\Cr{const:2.9}}{D(x)}
\right).
\end{equation*}
\end{proof}
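In practice, the constant $\Cr{const:2.9}$ can be computed numerically from the normalization condition; a minimal one-dimensional sketch, with illustrative (hypothetical) choices of $D$ and $\phi$ on $\Omega=(0,1)$, is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

D   = lambda x: 1.0 + 0.2 * np.sin(2 * np.pi * x)   # hypothetical D(x) > 0
phi = lambda x: 0.5 * (x - 0.5) ** 2                # hypothetical potential

def mass(C):
    # total mass of exp(-(phi - C)/D) minus one
    val, _ = quad(lambda x: np.exp(-(phi(x) - C) / D(x)), 0.0, 1.0)
    return val - 1.0

# mass(C) is strictly increasing in C, so the root is unique.
C_star = brentq(mass, -10.0, 10.0)
\end{verbatim}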
\begin{remark}
Note that the nonlinear Fokker-Planck equation \eqref{eq:2-0-4} can also be derived
from the dissipation property of the free energy $F[f]$
\eqref{eq:2-0-12} along with the Fokker-Planck equation,
\begin{equation}
\label{eq:2-0-14}
\frac{\partial f}{\partial t}
=
-\nabla\cdot
\left(
\vec{a}(x,t)f\right)
+
\frac12\nabla\cdot
\left(
(b(x,t))^2\nabla f
\right)
\end{equation}
subject to the natural boundary condition,
$(\vec{a}(x,t)f+\frac12(b(x,t))^2\nabla f)\cdot
\nu|_{\partial\Omega}=0,$
\cite{Katya-Kamala-Chun-Masashi}. Let us briefly review the
derivation \cite{Katya-Kamala-Chun-Masashi}. Indeed,
by \eqref{eq:2-0-14} and using the integration by parts, the rate of
change of the free energy
$\frac{d}{dt}F[f]$ is calculated as,
\begin{equation*}
\begin{split}
\frac{d}{dt}F[f]
&=
\int_\Omega
(D(x)\log f(x,t)+\phi(x))
\frac{\partial f}{\partial t}(x,t)\,dx \\
&=
-\int_\Omega
(D(x)\log f(x,t)+\phi(x))
\nabla\cdot
\left(
\left(
\vec{a}(x,t)
-
\frac12
(b(x,t))^2\nabla \log f(x,t)
\right)
f(x,t)\right)
\,dx \\
&=
\int_\Omega
\nabla
(D(x)\log f(x,t)+\phi(x))
\cdot
\left(
\vec{a}(x,t)
-
\frac12
(b(x,t))^2\nabla \log f(x,t)
\right)
f(x,t)
\,dx.
\end{split}
\end{equation*}
Since
\begin{equation*}
\nabla (D(x)\log f(x,t)+\phi(x))
=\log f(x,t)\nabla D(x)+D(x)\nabla \log f(x,t)+\nabla\phi(x),
\end{equation*}
we obtain the energy dissipation estimate as,
\begin{equation*}
\frac{d}{dt}F[f]
=
-
\int_\Omega
\frac{(b(x,t))^2}{2D(x)}
\left|
\nabla (D(x)\log f(x,t)+\phi(x))
\right|^2
f(x,t)\,dx
\end{equation*}
provided the following relation holds,
\begin{equation}
\label{eq:2-0-15}
\vec{a}(x,t)
=
-\frac{(b(x,t))^2}{2D(x)}\nabla\phi(x)
-\frac{(b(x,t))^2}{2D(x)}\log f(x,t)\nabla D(x).
\end{equation}
Note that when $D(x)$ is independent of $x$, $\nabla D(x)=0$ and hence
\eqref{eq:2-0-4} becomes a \emph{linear} Fokker-Planck
equation. The relation \eqref{eq:2-0-15} is consistent with the
\emph{fluctuation-dissipation relation}, which should guarantee not only the
dissipation property of the free energy $F[f]$, but also that the
solution of the nonlinear Fokker-Planck equation \eqref{eq:2-0-4}
converges to the equilibrium state $f^{\mathrm{eq}}$ given by
\eqref{eq:2-0-9} (see also \cite{epshteyn2021stochastic} for more
detailed discussion).
\end{remark}
Now, let us
define the scaled function $\rho$ by taking the ratio of $f$ and
$f^{\mathrm{eq}}$ \eqref{eq:2-0-9},
\begin{equation}
\label{eq:2-0-11}
\rho(x,t)
=
\frac{f(x,t)}{f^{\mathrm{eq}}(x)},
\quad
\text{or}
\quad
f(x,t)
=
\rho(x,t)
f^{\mathrm{eq}}(x)
=
\rho(x,t)
\exp
\left(
-
\frac{\phi(x)-\Cr{const:2.9}}{D(x)}
\right).
\end{equation}
This auxiliary function was also employed in \cite[Theorem
2.1]{MR3497125} to study long-time asymptotics of the solutions of
linear Fokker-Planck equations. Here, we will use the
scaled function $\rho$ as part of the local well-posedness study. Hence,
below, we reformulate the nonlinear Fokker-Planck equation
\eqref{eq:2-0-4} into a model for the scaled function $\rho$. We have,
\begin{equation*}
f^{\mathrm{eq}}
\frac{\partial \rho}{\partial t}
=
\nabla\cdot
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}\rho
\left(
\nabla\phi(x)
+
\log(f^{\mathrm{eq}}\rho)\nabla D(x)
+
D(x)\nabla\log(f^{\mathrm{eq}}\rho)
\right)
\right).
\end{equation*}
Next, using the equilibrium state \eqref{eq:2-0-9}, we have,
\begin{equation*}
\nabla D(x)\log f^{\mathrm{eq}}
+
D(x)\nabla (\log f^{\mathrm{eq}})
+
\nabla\phi(x)
=
0.
\end{equation*}
In addition, note that $\log \rho \nabla D(x)+D(x)\nabla \log \rho =\nabla(D(x)\log
\rho)$. Thus, the scaled function $\rho$ satisfies,
\begin{equation*}
f^{\mathrm{eq}}
\frac{\partial \rho}{\partial t}
=
\nabla\cdot
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}\rho
\nabla
\left(
D(x)\log\rho
\right)
\right).
\end{equation*}
Employing the property of the equilibrium state \eqref{eq:2-0-9} again, the natural boundary
condition \eqref{eq:2-0-4} becomes,
\begin{equation*}
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}\rho
\nabla
\left(
D(x)\log\rho
\right)
\right)
\cdot
\nu
\bigg|_{\partial\Omega}
=
0.
\end{equation*}
Therefore, the nonlinear Fokker-Planck equation
\eqref{eq:2-0-4} transforms into the following initial-boundary value
problem for $\rho$ defined in \eqref{eq:2-0-11},
\begin{equation}
\label{eq:2-0-10}
\left\{
\begin{aligned}
&f^{\mathrm{eq}}(x)
\frac{\partial \rho}{\partial t}
=
\nabla\cdot
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)\rho
\nabla
\left(
D(x)\log\rho
\right)
\right),&\quad
&x\in\Omega,\
t>0, \\
&\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)\rho
\nabla
\left(
D(x)\log\rho
\right)
\right)
\cdot
\nu
\bigg|_{\partial\Omega}
=
0,&\quad
&t>0, \\
&\rho(0,x)
=
\rho_0(x)=\frac{f_0(x)}{f^{\mathrm{eq}}(x)},&
\quad
&x\in\Omega.
\end{aligned}
\right.
\end{equation}
Next, the free energy \eqref{eq:2-0-12} and the energy law
\eqref{eq:2-0-8} can also be stated in terms of $\rho$. Using $D(x)\log f^{\mathrm{eq}}(x)=-\phi(x)+\Cr{const:2.9} $ from
\eqref{eq:2-0-9}, we obtain,
\begin{equation}
\label{rhofe}
F[f]=\int_\Omega
\left(
D(x)(\log\rho-1)+\Cr{const:2.9}
\right)
\rho f^{\mathrm{eq}}(x)
\,dx,
\end{equation}
and,
\begin{equation}
\label{rhodfe}
\frac{d}{dt}F[f]
=
-
\int_\Omega
\frac{(b(x,t))^2}{2D(x)}
\left|
\nabla
(
D(x)\log \rho
)
\right|^2
\rho f^{\mathrm{eq}}(x)
\,dx.
\end{equation}
Thus, it is clear from \eqref{rhofe}-\eqref{rhodfe} that the weighted
$L^2$ space $L^2(\Omega, f^{\mathrm{eq}}(x)\,dx)$ can play
an important role in studying the equation \eqref{eq:2-0-10} (see, for example,
\cite{MR1812873, epshteyn2021stochastic}).
However, hereafter we study a classical solution of the problem
\eqref{eq:2-0-10}, and we consider H\"older spaces and norms as
defined below. We now give the notion of a classical solution of
the problem \eqref{eq:2-0-10}.
\begin{definition}
A function $\rho=\rho(x,t)$ is a classical solution of the problem
\eqref{eq:2-0-10} in $\Omega\times[0,T)$ if $\rho\in
C^{2,1}(\Omega\times(0,T))\cap C^{1,0}(\overline{\Omega}\times[0,T))$,
$\rho(x,t)>0$ for $(x,t)\in \Omega\times[0,T)$, and satisfies equation
\eqref{eq:2-0-10} in a classical sense.
\end{definition}
To state assumptions and the main result, we also define the parabolic
H\"older spaces and norms. For the H\"older exponent
$0<\alpha<1$, the time interval $T>0$, and
the function $f$ on $\Omega\times[0,T)$, we define the
supremum norm $\|f\|_{C(\Omega\times[0,T))}$, the H\"older semi-norms
$[f]_{\alpha,\Omega\times[0,T)}$, and $\langle f\rangle_{\alpha,
\Omega\times[0,T)}$ as,
\begin{equation}
\begin{split}
\|f\|_{C(\Omega\times[0,T))}
&=
\sup_{x\in\Omega,\ t\in[0,T)}
|f(x,t)|, \\
[f]_{\alpha,\Omega\times[0,T)}
&:=
\sup_{x,x'\in\Omega,\ t\in[0,T)}
\frac{|f(x,t)-f(x',t)|}{|x-x'|^\alpha}, \\
\langle f\rangle_{\alpha,\Omega\times[0,T)}
&:=
\sup_{x\in\Omega,\ t,t'\in[0,T)}
\frac{|f(x,t)-f(x,t')|}{|t-t'|^\alpha},
\end{split}
\end{equation}
here $|x-x'|$ denotes the Euclidean distance between
the vector variables $x$ and $x'$ and $|t-t'|$ denotes
the absolute value of $t-t'$. For the H\"older
exponent $0<\alpha<1$, the derivative of order $k=0,1,2$, and the time
interval $T>0$, we define the parabolic H\"older spaces
$C^{k+\alpha,(k+\alpha)/2}(\Omega\times[0,T))$ as,
\begin{equation}
C^{k+\alpha,(k+\alpha)/2}(\Omega\times[0,T))
:=
\{f:\Omega\times[0,T)\rightarrow\mathbb{R},\ \|f\|_{C^{k+\alpha,(k+\alpha)/2}(\Omega\times[0,T))}<\infty\},
\end{equation}
where
\begin{equation}
\begin{split}
\|f\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&:=
\|f\|_{C(\Omega\times[0,T))}
+
[f]_{\alpha,\Omega\times[0,T)}
+
\langle f\rangle_{\alpha/2,\Omega\times[0,T)}, \\
\|f\|_{C^{1+\alpha,(1+\alpha)/2}(\Omega\times[0,T))}
&:=
\|f\|_{C(\Omega\times[0,T))}
+
\|\nabla f\|_{C(\Omega\times[0,T))} \\
&\qquad
+
[\nabla f]_{\alpha,\Omega\times[0,T)}
+
\langle f\rangle_{(1+\alpha)/2,\Omega\times[0,T)}
+
\langle \nabla f\rangle_{\alpha/2,\Omega\times[0,T)}, \\
\|f\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
&:=
\|f\|_{C(\Omega\times[0,T))}
+
\|\nabla f\|_{C(\Omega\times[0,T))}
+
\|\nabla^2 f\|_{C(\Omega\times[0,T))}
+
\left\|
\frac{\partial f}{\partial t}
\right\|_{C(\Omega\times[0,T))} \\
&\qquad
+
[\nabla^2 f]_{\alpha,\Omega\times[0,T)}
+
\left[
\frac{\partial f}{\partial t}
\right]_{\alpha,\Omega\times[0,T)} \\
&\qquad
+
\langle \nabla f\rangle_{(1+\alpha)/2,\Omega\times[0,T)}
+
\langle \nabla^2f\rangle_{\alpha/2,\Omega\times[0,T)}
+
\left\langle
\frac{\partial f}{\partial t}
\right\rangle_{\alpha/2,\Omega\times[0,T)}.
\end{split}
\end{equation}
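For orientation, note that for a time-independent function $f(x,t)=g(x)$ with $g\in C^{2+\alpha}(\Omega)$, all the time semi-norms above vanish and $\|f\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}$ reduces to the usual elliptic H\"older norm $\|g\|_{C^{2+\alpha}(\Omega)}$; in particular, the coefficients and initial data in \eqref{eq:2-0-2} may be regarded as elements of these parabolic spaces.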
It is well-known that the parabolic H\"older space
$C^{k+\alpha,(k+\alpha)/2}(\Omega\times[0,T))$ is
a Banach space. More properties of the H\"older spaces
can be found in \cite{MR1406091, MR0241822, MR1465184}. Next, we give
assumptions for the coefficients and the initial data. First, we assume
the strong positivity for the coefficients $b$ and $D$, namely, there
are constants $\Cl{const:2.1}, \Cl{const:2.8}>0$ such that for
$x\in\Omega$ and $t>0$,
\begin{equation}
\label{eq:2-0-1}
b(x,t)\geq \Cr{const:2.1},\quad
D(x)\geq \Cr{const:2.8}.
\end{equation}
Next, we assume the H\"older regularity for $0<\alpha<1$: coefficients
$b(x,t)$, $\phi(x)$, $D(x)$, an initial datum
$\rho_0(x)$ and a domain $\Omega$ satisfy,
\begin{equation}
\label{eq:2-0-2}
b^2\in C^{1+\alpha,(1+\alpha)/2}(\Omega\times[0,T)),\
\phi\in C^{2+\alpha}(\Omega),\
D\in C^{2+\alpha}(\Omega),\
\partial\Omega\ \text{is}\ C^{2+\alpha},\
\text{and}\
\rho_0\in C^{2+\alpha}(\Omega).
\end{equation}
As a consequence of the above assumptions, $f^{\mathrm{eq}}$ is in
$C^{2+\alpha}(\Omega)$. Finally, assume the compatibility condition
for the initial data
$\rho_0$,
\begin{equation}
\label{eq:2-0-3}
\nabla( D(x)\log\rho_0) \cdot \nu
\bigg|_{\partial\Omega}
=
0.
\end{equation}
Since $b(x,t)$, $D(x)$, $f^{\mathrm{eq}}$, and $\rho$ are
positive, \eqref{eq:2-0-3} is sufficient for the compatibility condition of
\eqref{eq:2-0-10}.
Now we are ready to state the main theorem about existence of a classical solution
of \eqref{eq:2-0-10}.
\begin{theorem}
\label{thm:2-0-1}
Let coefficients $b(x,t)$, $\phi(x)$, $D(x)$, a positive probability
density function $\rho_0(x)$ and a bounded domain $\Omega$ satisfy the
strong positivity \eqref{eq:2-0-1}, the H\"older regularity
\eqref{eq:2-0-2} for $0<\alpha<1$, and the compatibility for the
initial data \eqref{eq:2-0-3}, respectively. Then, there exist a time interval $T>0$
and a classical solution $\rho=\rho(x,t)$ of \eqref{eq:2-0-10} on
$\Omega\times[0,T)$ with the H\"older regularity $\rho\in
C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$.
\end{theorem}
\begin{corollary}
\label{cor:main_th}
Let coefficients $b(x,t)$, $\phi(x)$, $D(x)$, and a bounded domain
$\Omega$ satisfy the strong positivity \eqref{eq:2-0-1} and the H\"older
regularity \eqref{eq:2-0-2} for $0<\alpha<1$, respectively. Let $f_0$ be a
probability density function from $C^{2+\alpha}(\Omega)$ which
is positive everywhere and satisfies the compatibility condition,
\begin{equation*}
\nabla
\left(
\phi(x)
+
D(x)\log f_0
\right)
\cdot
\nu
\bigg|_{\partial\Omega}
=
0.
\end{equation*}
Then, there exist a time interval $T>0$ and a classical solution
$f=f(x,t)$ of \eqref{eq:2-0-4} on $\Omega\times[0,T)$ with the H\"older
regularity $f\in C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$.
\end{corollary}
Before we proceed with a proof of the Theorem \ref{thm:2-0-1}, and
hence Corollary \ref{cor:main_th}, we give
a brief overview of the main ideas of the proof:
\begin{enumerate}[1.]
\item In Section \ref{sec:2-1}, we consider the change of variables $h$
in \eqref{eq:2-1-3} and $\xi$ in \eqref{eq:2-1-2}. We will derive
evolution equations in terms of $h$ and $\xi$ in Lemma
\ref{lem:2-1-2} and Lemma \ref{lem:2-1-1}. Note that, $\xi$ vanishes
at $t=0$, namely, we have, $\xi(x,0)=0$.
\item In Section \ref{sec:2-2}, we give the decay properties of the
  H\"older norms $\|\nabla\xi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ and
  $\|\xi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ in terms of
  $\xi$, see \eqref{eq:2-2-2} and \eqref{eq:2-2-3}. Thanks to the condition that $\xi(x,0)=0$, we can obtain
  explicit decay of $\|\nabla
  \xi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ and
  $\|\xi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$.
\item In Section \ref{sec:2-3}, we study a linear parabolic equation
\eqref{eq:2-3-8} associated with the nonlinear problem
\eqref{eq:2-1-4}. We show that for the appropriate choice of constants $M,T>0$
and for $\psi\in X_{M,T}$, where $X_{M,T}$ is defined in
\eqref{eq:2-3-9}, a solution $\xi$ of \eqref{eq:2-3-8}
belongs to $X_{M,T}$, see Lemma \ref{lem:2-3-1}. Thus, we can
define a solution map $A:\psi\mapsto\xi$ on $X_{M,T}$.
\item In Section \ref{sec:2-4}, we show that the solution map has the
contraction property, see Lemma \ref{lem:2-4-1}. In order to show
that the Lipschitz constant is less than $1$, we use the decay
properties of the H\"older norms \eqref{eq:2-2-2},
\eqref{eq:2-2-3}.
\item Since the solution map is a contraction mapping on $X_{M,T}$, there is a
fixed point $\xi\in X_{M,T}$. The fixed point is a classical solution of
\eqref{eq:2-1-4}, hence we can find a classical solution of
\eqref{eq:2-0-10}. Once we find a solution $\rho$ of \eqref{eq:2-0-10}, by the definition of
the scaled function \eqref{eq:2-0-11}, we obtain a solution of
\eqref{eq:2-0-4}. Note, that in Section \ref{sec:3}, we show
uniqueness of a local solution of the problem \eqref{eq:2-0-10}, and
hence of a local solution of the problem \eqref{eq:2-0-4}.
\end{enumerate}
\subsection{Change of variables}
\label{sec:2-1}
The problem \eqref{eq:2-0-10} is well defined only when $\rho>0$.
However, it is difficult to prove the positivity of $\rho$ using
\eqref{eq:2-0-10} directly, due to the lack of a maximum principle for
such nonlinear models. Instead, we will construct a solution $\rho$ of
\eqref{eq:2-0-10}, and will guarantee the positivity of $\rho$, by
introducing a new auxiliary variable $h$ as follows,
\begin{equation}
\label{eq:2-1-3}
h(x,t)=D(x)\log\rho(x,t),\quad
\text{or}\quad
\rho(x,t)
=
\exp
\left(
\frac{h(x,t)}{D(x)}
\right).
\end{equation}
Once we find a solution $h$, then we can obtain a solution
$\rho$ of \eqref{eq:2-0-10} using the change of variables as in
\eqref{eq:2-1-3}. Furthermore, we will
show uniqueness of a local solution $\rho$ in Section \ref{sec:3}.
\par Let us derive the evolution equation in terms of the new variable $h$ in
\eqref{eq:2-1-3}.
\begin{lemma}
\label{lem:2-1-2}
Let $\rho$ be a classical solution of \eqref{eq:2-0-10} and define $h$
as in \eqref{eq:2-1-3}. Then, the auxiliary variable $h$ satisfies the
following equation in a classical sense,
\begin{equation}
\label{eq:2-1-5}
\left\{
\begin{aligned}
\frac{f^{\mathrm{eq}}(x)}{D(x)}
\frac{\partial h}{\partial t}
&=
\nabla\cdot
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\nabla h
\right)
+
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\nabla h
\cdot
\nabla
\left(
\frac{h}{D(x)}
\right)
,\quad
x\in\Omega,\
t>0, \\
\nabla
h
\cdot
\nu
\bigg|_{\partial\Omega}
&=
0,\quad
t>0, \\
h(0,x)
&=
h_0(x)
=
D(x)\log \rho_0(x),
\quad
x\in\Omega.
\end{aligned}
\right.
\end{equation}
Conversely, let $h\in C^{2,1}(\Omega\times(0,T))\cap
C^{1,0}(\overline{\Omega}\times[0,T))$ be a solution of
\eqref{eq:2-1-5} in a classical sense and define $\rho$ as
\eqref{eq:2-1-3}. Then, $\rho$ is a classical solution of
\eqref{eq:2-0-10}.
\end{lemma}
\begin{proof}
By straightforward calculation of the derivative of $\rho$ using
\eqref{eq:2-1-3}, we have that $\rho_t=\frac{e^{h/D(x)}}{D(x)} h_t$,
as well as,
\begin{equation*}
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)\rho
\nabla
\left(
D(x)\log\rho
\right)
=
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
e^{h/D(x)}
\nabla
h,
\end{equation*}
and,
\begin{equation*}
\begin{split}
&\quad
\nabla\cdot
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)\rho
\nabla
\left(
D(x)\log\rho
\right)
\right) \\
&=
e^{h/D(x)}
\nabla\cdot
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\nabla
h
\right)
+
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
e^{h/D(x)}
\nabla
h
\cdot
\nabla
\left(
\frac{h}{D(x)}
\right).
\end{split}
\end{equation*}
Note that $b$, $D$, $f^{\mathrm{eq}}$, and $e^{h/D}$ are positive
functions, hence the boundary condition of the model \eqref{eq:2-0-10} is
equivalent to the Neumann boundary condition for the function $h$. Using these
relations, we obtain the result of Lemma \ref{lem:2-1-2}.
\end{proof}
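Although the analysis in this paper is purely classical, the $h$-formulation \eqref{eq:2-1-5} is also convenient numerically, since the recovered density ratio $\rho=e^{h/D}$ is automatically positive. A minimal, purely illustrative one-dimensional explicit finite-difference sketch (assuming a constant $b$ and hypothetical smooth choices of $D$, $\phi$ and of the initial data compatible with \eqref{eq:2-0-3}) could read:
\begin{verbatim}
import numpy as np

Nx = 200
x  = np.linspace(0.0, 1.0, Nx)
dx = x[1] - x[0]
D   = 1.0 + 0.2 * np.sin(2 * np.pi * x)  # hypothetical temperature D(x) > 0
phi = 0.5 * (x - 0.5) ** 2               # hypothetical potential
b2  = 1.0                                # b(x,t)^2, taken constant here
feq = np.exp(-phi / D)                   # equilibrium (2-0-9), C set to 0 for simplicity

h = 0.1 * np.cos(np.pi * x)              # h_0 = D*log(rho_0); h_0'(0) = h_0'(1) = 0

dt = 0.2 * dx**2 / b2                    # conservative explicit time step
for _ in range(2000):
    hx = np.gradient(h, dx)
    hx[0] = hx[-1] = 0.0                 # homogeneous Neumann condition for h
    flux = (b2 / (2.0 * D)) * feq * hx
    rhs  = np.gradient(flux, dx) + flux * np.gradient(h / D, dx)
    h   += dt * (D / feq) * rhs

rho = np.exp(h / D)                      # positive by construction
\end{verbatim}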
\begin{remark}
Note, employing the change of the variable for $\rho$ in terms of $h$ \eqref{eq:2-1-3}, the free
energy $F[f]$ \eqref{rhofe} and
the dissipation law \eqref{rhodfe} are transformed into,
\begin{equation}
\label{hfe}
F[f]
=
\int_{\Omega}
\left(
h(x,t)-D(x)+\Cr{const:2.9}
\right)
\exp
\left(
\frac{h(x,t)}{D(x)}
\right)
f^{\mathrm{eq}}(x)
\,dx,
\end{equation}
and,
\begin{equation}
\label{hdfe}
\frac{d}{dt}F[f]
=
-
\int_\Omega
\frac{(b(x,t))^2}{2D(x)}
\left|
\nabla
h(x,t)
\right|^2
\exp
\left(
\frac{h(x,t)}{D(x)}
\right)
f^{\mathrm{eq}}(x)
\,dx.
\end{equation}
\end{remark}
\begin{remark}
The non-linearity of the problem \eqref{eq:2-1-5} is so-called \emph{scale critical}: the
diffusion term $\Delta h$ and the nonlinear term $|\nabla h|^2$ have
the same scaling. To see this, for $\gamma>0$ we consider the following
equation,
\begin{equation}
\label{eq:2-1-7}
\frac{\partial u}{\partial t}(x,t)
=
\Delta u(x,t)
+
|\nabla u(x,t)|^\gamma,
\quad
x\in\mathbb{R}^d,\quad
t>0.
\end{equation}
For a positive scaling parameter $\lambda>0$ and
$(x_0,t_0)\in\mathbb{R}^d\times(0,\infty)$, let us consider the change of
variables $x-x_0=\lambda y$, $t-t_0=\lambda^2 s$, and a scale transformation
$v(y,s)=u(x,t)$. Then,
\begin{equation*}
\frac{\partial u}{\partial t}(x,t)=
\frac{1}{\lambda^2}\frac{\partial v}{\partial s}(y,s),\quad
\Delta_x u(x,t)=\frac{1}{\lambda^2}\Delta_y v(y,s),\quad
|\nabla_x u(x,t)|^\gamma=\frac{1}{\lambda^\gamma} |\nabla_y v(y,s)|^\gamma,
\end{equation*}
hence the scale transformation $v$ satisfies,
\begin{equation*}
\frac{\partial v}{\partial s}(y,s)
=
\Delta_y v(y,s)
+
\lambda^{2-\gamma}
|\nabla v(y,s)|^\gamma,
\quad
y\in\mathbb{R}^d,\quad
0<s<t_0.
\end{equation*}
When we take $\lambda\downarrow0$, we zoom in on the behavior of the
function $u(x,t)$ around $(x_0,t_0)$ (for example, near a possible blow-up
point), where $u$ is regarded as a perturbation of a linear function
around $x=x_0$. If $\gamma<2$, which is called \emph{scale
  sub-critical}, then $\lambda^{2-\gamma}\rightarrow0$ as
$\lambda\downarrow0$. Hence, the non-linearity $|\nabla u(x,t)|^\gamma$
can be regarded as a small perturbation relative to the diffusion term
$\Delta u(x,t)$. If $\gamma>2$, which is called \emph{scale
  super-critical}, then $\lambda^{\gamma-2}\rightarrow0$ as
$\lambda\downarrow0$. In this case, the non-linear term $|\nabla
u(x,t)|^\gamma$ becomes a principal term. Thus the behavior of $u$ may
be different from solutions of the linear problem, namely, the
solutions of the heat equation. If $\gamma=2$, which is called
\emph{scale critical case}, then $\lambda^{2-\gamma}=1$ (like in our
model \eqref{eq:2-1-5}). The diffusion term $\Delta u(x,t)$ and the
nonlinear term $|\nabla u(x,t)|^2$ are balanced, hence the
non-linearity $|\nabla u(x,t)|^2$ cannot be regarded as the small
perturbation anymore, especially for the study of the global existence
and long-time asymptotic behavior. Thus, in the problem
\eqref{eq:2-1-5}, we need to consider the interaction between the
diffusion term and the nonlinear term accurately. For the importance of
the scale transformation, see for instance \cite{MR2656972,
MR0784476}. The scale critical case for \eqref{eq:2-1-7} is related to the heat flow for harmonic maps. See for instance,
\cite{MR1324408,MR1266472,MR0990191,MR2155901}. See also
\cite{MR0164306,MR0664498} for the steady-state case.
\end{remark}
Our goal is to use the Schauder estimates for linear parabolic
equations, therefore
we rewrite \eqref{eq:2-1-5} in the non-divergence form,
\begin{equation*}
\frac{\partial h}{\partial t}
=
\frac{(b(x,t))^2}{2}
\Delta h
+
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right)
\cdot
\nabla h
+
\frac{(b(x,t))^2}{2D(x)}
|\nabla h|^2
-
\frac{(b(x,t))^2}{2(D(x))^2}
h
\nabla h
\cdot
\nabla
D(x).
\end{equation*}
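Indeed, dividing \eqref{eq:2-1-5} by $f^{\mathrm{eq}}/D$ and expanding the divergence and the gradient of $h/D(x)$ gives
\begin{equation*}
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla\cdot
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\nabla h
\right)
=
\frac{(b(x,t))^2}{2}\Delta h
+
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right)
\cdot\nabla h,
\end{equation*}
and
\begin{equation*}
\frac{(b(x,t))^2}{2}
\nabla h\cdot\nabla\left(\frac{h}{D(x)}\right)
=
\frac{(b(x,t))^2}{2D(x)}|\nabla h|^2
-
\frac{(b(x,t))^2}{2(D(x))^2}
h\,\nabla h\cdot\nabla D(x).
\end{equation*}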
Next, we introduce a new variable $\xi$ as,
\begin{equation}
\label{eq:2-1-2}
h(x,t)=h_0(x)+\xi(x,t),
\end{equation}
in order to change problem \eqref{eq:2-1-5} into the zero initial value
problem with $\xi(x,0)=0$. Note that, when $h$ is sufficiently close to the initial
data $h_0$ for small $t>0$ in the H\"older space, $\xi$ should also be
small enough for small $t>0$. To show the smallness of the
nonlinearity in the H\"older space, we consider the nonlinear terms in
terms of $\xi$ instead of $h$. Thus, below, we will derive the evolution equation in
terms of $\xi$.
\begin{lemma}
\label{lem:2-1-1}
Let $h\in C^{2,1}(\Omega\times(0,T))\cap
C^{1,0}(\overline{\Omega}\times[0,T))$ be a solution of
\eqref{eq:2-1-5} in a classical sense and define $\xi$ as
in \eqref{eq:2-1-2}. Then, $\xi$ satisfies the following equation in a
classical sense,
\begin{equation}
\label{eq:2-1-4}
\left\{
\begin{aligned}
\frac{\partial \xi}{\partial t}
&=
L\xi+g_0(x,t)+G(\xi),\quad
x\in\Omega,\
t>0, \\
\nabla
\xi
\cdot
\nu
\bigg|_{\partial\Omega}
&=
0,\quad
t>0, \\
\xi(0,x)
&=
0,
\quad
x\in\Omega,
\end{aligned}
\right.
\end{equation}
where
\begin{equation}
\label{eq:2-1-1}
\begin{split}
L\xi
&:=
\frac{(b(x,t))^2}{2}
\Delta \xi \\
&\quad
+
\left(
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right)
+
\frac{(b(x,t))^2}{D(x)}\nabla h_0(x)
-
\frac{(b(x,t))^2h_0(x)}{2(D(x))^2}\nabla D(x)
\right)
\cdot
\nabla \xi
\\
&\quad
-
\left(
\frac{(b(x,t))^2}{2(D(x))^2}\nabla D(x)\cdot\nabla h_0(x)
\right)
\xi, \\
g_0(x,t)
&:=
\frac{(b(x,t))^2}{2}
\Delta h_0(x)
+
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right)
\cdot
\nabla h_0(x) \\
&\quad
+
\frac{(b(x,t))^2}{2D(x)}
|\nabla h_0(x)|^2
-
\frac{(b(x,t))^2}{2(D(x))^2}
h_0(x)
\nabla h_0(x)
\cdot
\nabla
D(x), \\
G(\xi)
&:=
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
|\nabla \xi|^2
-
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
\xi
\nabla \xi
\cdot
\nabla
D(x).
\end{split}
\end{equation}
Conversely, let $\xi\in C^{2,1}(\Omega\times(0,T))\cap
C^{1,0}(\overline{\Omega}\times[0,T))$ be a solution of
\eqref{eq:2-1-4} in a classical sense and define $h$ as in
\eqref{eq:2-1-2}. Then, $h$ is a solution of \eqref{eq:2-1-5} in a
classical sense.
\end{lemma}
\begin{proof}
The equivalence of the initial conditions for functions $h$ and $\xi$ is trivial, so we consider the
equivalence of the differential equations and of the boundary conditions
for $h$ and $\xi$.
First, we derive the differential equation for $\xi$ using the change
of variable in \eqref{eq:2-1-2}. Assume $h$ is a
solution of \eqref{eq:2-1-5} in a classical sense. Since $\xi_t=h_t$,
$\nabla h=\nabla h_0+\nabla\xi$, $\Delta h=\Delta h_0+\Delta\xi$, we
have,
\begin{equation}
\label{eq:2-1-6}
\begin{split}
\frac{\partial \xi}{\partial t}
&=
\frac{(b(x,t))^2}{2}
\Delta \xi
+
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right)
\cdot
\nabla \xi
\\
&\quad
+\frac{(b(x,t))^2}{2}
\Delta h_0(x)
+
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right)
\cdot
\nabla h_0(x)
\\
&\quad
+
\frac{(b(x,t))^2}{2D(x)}
|\nabla \xi+\nabla h_0(x)|^2
-
\frac{(b(x,t))^2}{2(D(x))^2}
(\xi+h_0(x))
\nabla (\xi+h_0(x))
\cdot
\nabla
D(x).
\end{split}
\end{equation}
Using the following relations,
\begin{equation*}
\begin{split}
|\nabla \xi+\nabla h_0(x)|^2
&=
|\nabla \xi|^2
+
2\nabla h_0(x)\cdot\nabla \xi
+
|\nabla h_0(x)|^2, \\
(\xi+h_0(x))
\nabla (\xi+h_0(x))
&=
\xi\nabla\xi
+
\xi\nabla h_0(x)
+
h_0(x) \nabla \xi
+
h_0(x)\nabla h_0(x),
\end{split}
\end{equation*}
the equation \eqref{eq:2-1-6} is transformed into the equation,
\begin{equation*}
\begin{split}
\frac{\partial \xi}{\partial t}
&=
\frac{(b(x,t))^2}{2}
\Delta \xi
+
\left(
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right)
+
\frac{(b(x,t))^2}{D(x)}\nabla h_0(x)
-
\frac{(b(x,t))^2h_0(x)}{2(D(x))^2}\nabla D(x)
\right)
\cdot
\nabla \xi
\\
&\quad
-
\left(
\frac{(b(x,t))^2}{2(D(x))^2}\nabla D(x)\cdot\nabla h_0(x)
\right)
\xi \\
&\quad
+\frac{(b(x,t))^2}{2}
\Delta h_0(x)
+
\frac{D(x)}{f^{\mathrm{eq}}(x)}
\nabla
\left(
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right)
\cdot
\nabla h_0(x) \\
&\quad
+
\frac{(b(x,t))^2}{2D(x)}
|\nabla h_0(x)|^2
-
\frac{(b(x,t))^2}{2(D(x))^2}
h_0(x)
\nabla h_0(x)
\cdot
\nabla
D(x) \\
&\quad
+
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
|\nabla \xi|^2
-
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
\xi
\nabla \xi
\cdot
\nabla
D(x) \\
&=
L\xi+g_0(x,t)+G(\xi).
\end{split}
\end{equation*}
Thus, we obtain the equivalence of the differential equations for $h$
and $\xi$.
Next, we consider boundary condition
$\nabla\xi\cdot\nu|_{\partial\Omega}=0$. Using the compatibility
condition \eqref{eq:2-0-3}, we have,
\begin{equation*}
\nabla \xi\cdot\nu \bigg|_{\partial\Omega}
=
\nabla h\cdot\nu \bigg|_{\partial\Omega}
-
\nabla h_0\cdot\nu \bigg|_{\partial\Omega}
=
\nabla h\cdot\nu \bigg|_{\partial\Omega},
\end{equation*}
hence we also have the equivalence of the boundary conditions for $h$
and $\xi$.
\end{proof}
\begin{remark}
From the change of variable \eqref{eq:2-1-2}, the free energy $F[f]$ \eqref{hfe}
and the energy dissipation law \eqref{hdfe} are expressed in terms of $\xi$ as follows,
\begin{equation}
F[f]
=
\int_{\Omega}
\left(
\xi(x,t)+h_0(x)-D(x)+\Cr{const:2.9}
\right)
\exp
\left(
\frac{\xi(x,t)+h_0(x)}{D(x)}
\right)
f^{\mathrm{eq}}(x)
\,dx,
\end{equation}
and
\begin{equation}
\frac{d}{dt}F[f]
=
-
\int_\Omega
\frac{(b(x,t))^2}{2D(x)}
\left|
\nabla
\xi(x,t)
+
\nabla h_0(x)
\right|^2
\exp
\left(
\frac{\xi(x,t)+h_0(x)}{D(x)}
\right)
f^{\mathrm{eq}}(x)
\,dx.
\end{equation}
\end{remark}
\begin{remark}
The idea of introducing the variable $\xi$ in \eqref{eq:2-1-2}, in order
to change \eqref{eq:2-1-5} into the zero initial value problem
\eqref{eq:2-1-4}, is similar to the treatment of inhomogeneous Dirichlet
boundary value problems for elliptic equations, see \cite[Theorem 6.8, Theorem
8.3]{MR1814364}.
\end{remark}
In this section, we made several changes of variables. Hereafter, we study
\eqref{eq:2-1-4} with the homogeneous Neumann boundary condition and
with the zero
initial condition. As one can observe in \eqref{eq:2-1-1}, the initial
data $h_0$ (or, equivalently, $\rho_0$) enters the coefficients of the linear operator $L$
and
the external force $g_0$ of the problem \eqref{eq:2-1-4}.
\subsection{Properties of the H\"older spaces with the zero initial condition}
\label{sec:2-2}
In this section, we study properties of the H\"older spaces with the
zero initial value condition. The main idea behind the proof of the Theorem
\ref{thm:2-0-1} is to find a solution of the problem \eqref{eq:2-1-4} in a function
space as defined below,
\begin{equation}
\label{eq:2-3-9}
X_{M,T}
:=
\left\{
\zeta\in C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))
:
\zeta(x,0)=0\ \text{for}\ x\in\Omega,\
\nabla\zeta\cdot\nu\big|_{\partial\Omega}=0,\
\|\zeta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}\leq M
\right\}
\end{equation}
for the appropriate choice of constants $M, T>0$.
For $\psi\in X_{M,T}$, let $\eta$ be a classical solution of the
following linear parabolic problem,
\begin{equation}
\label{eq:2-3-8}
\left\{
\begin{aligned}
\frac{\partial \eta}{\partial t}
&=
L\eta+g_0(x,t)+G(\psi),\quad
x\in\Omega,\
t>0, \\
\nabla
\eta
\cdot
\nu
\bigg|_{\partial\Omega}
&=
0,\quad
t>0, \\
\eta(0,x)
&=
0,
\quad
x\in\Omega,
\end{aligned}
\right.
\end{equation}
where $L$, $g_0(x,t)$ and $G$ are defined in \eqref{eq:2-1-1}. Note
that in Section~\ref{sec:2-3} our goal will be to select constants $M,T>0$ such that, for any $\psi\in X_{M,T}$, the
solution $\eta$ belongs to $X_{M,T}$. Thus, we first
introduce the notion of the solution map
and its well-definedness on $X_{M,T}$.
\begin{definition}\label{def:solmap}
For $\psi\in X_{M,T}$, let $\eta=A\psi$ be a solution of
\eqref{eq:2-3-8}. We call $A$ a solution map for
\eqref{eq:2-3-8}. We call $A$ a solution map for
\eqref{eq:2-3-8}. The solution map $A$ is \emph{well-defined} on $X_{M,T}$ if
$A\psi\in X_{M,T}$ for all $\psi\in X_{M,T}$.
\end{definition}
Once we show that the solution map $A$ is well-defined on $X_{M,T}$ and
is a contraction for appropriate choices of the constants, we can
find a fixed point $\xi\in X_{M,T}$ of the solution map $A$, and thus
establish that $\xi$ is a classical solution of the problem
\eqref{eq:2-1-4}. In order to derive the contraction property of the
solution map $A$, we first obtain decay estimates for the H\"older
norms of $\zeta\in X_{M,T}$.
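To make the fixed-point strategy above concrete, the following minimal sketch (in Python; the map $A$, the norm, and the tolerance below are illustrative placeholders and not objects of this paper) shows the Picard iteration $\xi_{n+1}=A\xi_n$ that underlies the argument; in the paper, $A$ is the solution map of \eqref{eq:2-3-8} and the relevant norm is the parabolic H\"older norm.
\begin{verbatim}
import math

def fixed_point(A, x0, norm, tol=1e-12, max_iter=1000):
    """Picard iteration x_{n+1} = A(x_n); converges when A is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = A(x)
        if norm(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence: A may fail to be a contraction")

# toy contraction on the real line (Lipschitz constant 1/2 < 1)
A = lambda x: 0.5 * math.cos(x)
xi = fixed_point(A, 0.0, abs)
print(xi, abs(xi - A(xi)))   # fixed point and residual
\end{verbatim}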
As noted in Remark \ref{rem:2-1-1} below, when a function $\theta\in
C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$ satisfies $\theta(x,0)=0$, the
supremum norms of $\theta$ and of its derivatives tend to zero as the
time horizon shrinks, namely
\begin{equation*}
\sup_{\Omega\times[0,T)}|\theta|,\
\sup_{\Omega\times[0,T)}|\nabla \theta|,\
\sup_{\Omega\times[0,T)}|\nabla^2 \theta|
\rightarrow0,\quad \text{as}\ T\rightarrow0,
\end{equation*}
as a consequence of the H\"older norm estimates \eqref{eq:2-2-2} and
\eqref{eq:2-2-3} obtained below (the bound for $\nabla^2\theta$ follows from
the same argument as in \eqref{eq:2-2-4}). Note again that $\theta(x,0)=0$ is
essential for the above convergence. In order to treat the nonlinear
model \eqref{eq:2-1-4} as a perturbation of the linear system \eqref{eq:2-3-8}, we need some smallness
of the norms. Hence, we next show explicit decay estimates
for the H\"older norms which apply to any function $\zeta\in X_{M,T}$.
\begin{lemma}
\label{lem:2-2-1}
Let $\theta\in C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$ be any function with
$\theta(x,0)=0$ for $x\in\Omega$.
Then,
\begin{equation}
\label{eq:2-2-2}
\|\nabla\theta\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
3 (T^{(1+\alpha)/2}+T^{1/2})\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
In particular, the estimate \eqref{eq:2-2-2} holds for any $\zeta\in X_{M,T}$.
\end{lemma}
\begin{proof}
First, we consider $\|\nabla\theta\|_{C(\Omega\times[0,T))}$. For
$x\in\Omega$ and $t\in(0,T)$, using $\nabla\theta(x,0)=0$ and the
definition of the H\"older seminorm, we have,
\begin{equation}
\label{eq:2-2-4}
|\nabla\theta(x,t)|
=
\frac{|\nabla\theta(x,t)-\nabla\theta(x,0)|}{|t-0|^{(1+\alpha)/2}}
|t-0|^{(1+\alpha)/2}
\leq
t^{(1+\alpha)/2}\langle\nabla \theta\rangle_{(1+\alpha)/2,\Omega\times[0,T)}.
\end{equation}
Therefore, we have,
\begin{equation}
\label{eq:2-2-5}
\|\nabla\theta\|_{C(\Omega\times[0,T))}
\leq
T^{(1+\alpha)/2}\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Next, we derive the estimate of
$[\nabla\theta]_{\alpha,\Omega\times[0,T)}$. For $x,x'\in\Omega$ and
$t\in(0,T)$, we first assume that $|x-x'|<t^{1/2}$. Then,
since we assume that $\Omega$ is convex, the
fundamental theorem of calculus and the triangle inequality lead to,
\begin{equation*}
\begin{split}
|\nabla\theta(x,t)-\nabla\theta(x',t)|
&=
\left|
\int_0^1
\frac{d}{d\tau}
\nabla\theta(\tau x+(1-\tau)x',t)
\,d\tau
\right| \\
&\leq
|x-x'|\int_0^1
|\nabla^2\theta(\tau x+(1-\tau)x',t)|
\,d\tau. \\
\end{split}
\end{equation*}
Since $\nabla^2\theta(\tau x+(1-\tau)x',0)=0$, we have,
\begin{equation*}
\begin{split}
|\nabla^2\theta(\tau x+(1-\tau)x',t)|
&\leq
\frac{|\nabla^2\theta(\tau x+(1-\tau)x',t)-\nabla^2\theta(\tau x+(1-\tau)x',0)|}{|t-0|^{\alpha/2}}|t-0|^{\alpha/2} \\
&\leq
T^{\alpha/2}\langle
\nabla^2\theta
\rangle_{\alpha/2,\Omega\times[0,T)}.
\end{split}
\end{equation*}
Using the assumption $|x-x'|<t^{1/2}$, and that $|x-x'|=|x-x'|^{1-\alpha}|x-x'|^{\alpha}$, we conclude,
\begin{equation}
\label{eq:2-2-6}
|\nabla\theta(x,t)-\nabla\theta(x',t)|
\leq
T^{\alpha/2}
t^{(1-\alpha)/2}\langle
\nabla^2\theta
\rangle_{\alpha/2,\Omega\times[0,T)}
|x-x'|^\alpha
\leq
T^{1/2}\langle
\nabla^2\theta
\rangle_{\alpha/2,\Omega\times[0,T)}
|x-x'|^\alpha.
\end{equation}
Next, we consider the case $|x-x'|\geq t^{1/2}$. Using \eqref{eq:2-2-4},
we have,
\begin{equation*}
|\nabla\theta(x,t)|
=
\frac{|\nabla\theta(x,t)-\nabla\theta(x,0)|}{|t-0|^{(1+\alpha)/2}}
|t-0|^{(1+\alpha)/2}
\leq
T^{1/2}\langle\nabla \theta\rangle_{(1+\alpha)/2,\Omega\times[0,T)}
|x-x'|^\alpha,
\end{equation*}
hence we obtain,
\begin{equation}
\label{eq:2-2-7}
|\nabla\theta(x,t)-\nabla\theta(x',t)|
\leq
|\nabla\theta(x,t)|+|\nabla\theta(x',t)|
\leq
2 T^{1/2}\langle\nabla \theta\rangle_{(1+\alpha)/2,\Omega\times[0,T)}|x-x'|^\alpha.
\end{equation}
Combining \eqref{eq:2-2-6} and \eqref{eq:2-2-7} we arrive at,
\begin{equation}
\label{eq:2-2-8}
[\nabla\theta]_{\alpha,\Omega\times[0,T)}
\leq
2 T^{1/2}\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Finally, we consider
$\langle\nabla\theta\rangle_{\alpha/2,\Omega\times[0,T)}$. For
$x\in\Omega$ and $t,t'\in(0,T)$, we have
\begin{equation*}
|\nabla\theta(x,t)-\nabla\theta(x,t')|
\leq
\frac{|\nabla\theta(x,t)-\nabla\theta(x,t')|}{|t-t'|^{(1+\alpha)/2}}|t-t'|^{(1+\alpha)/2}
\leq
T^{1/2}|t-t'|^{\alpha/2}\langle\nabla\theta\rangle_{(1+\alpha)/2,\Omega\times[0,T)},
\end{equation*}
hence
\begin{equation}
\label{eq:2-2-9}
\langle\nabla\theta\rangle_{\alpha/2,\Omega\times[0,T)}
\leq
T^{1/2}\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Combining \eqref{eq:2-2-5}, \eqref{eq:2-2-8}, and \eqref{eq:2-2-9}, we
obtain the desired estimate \eqref{eq:2-2-2}.
\end{proof}
\begin{remark}
Note that, for an arbitrary continuous function
$\theta:\Omega\times[0,T)\rightarrow\mathbb{R}$,
\begin{equation*}
\|\theta\|_{C(\Omega\times[0,T))}
\geq
\sup_{x\in\Omega}|\theta(x,0)|,
\end{equation*}
hence, in general, we cannot obtain the decay estimate \eqref{eq:2-2-2}, unless $\theta=0$ at
$t=0$.
\end{remark}
Next, we derive the decay estimate of
$\|\theta\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$, which will also be
used for $\zeta\in X_{M,T}$.
\begin{lemma}
\label{lem:2-2-2}
Let $\theta\in C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$ be an arbitrary function with
$\theta(x,0)=0$ for $x\in\Omega$. Then,
\begin{equation}
\label{eq:2-2-3}
\|\theta\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
3 (T+T^{1-\alpha/2})\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Thus, for $\zeta\in X_{M,T}$, the estimate~\eqref{eq:2-2-3} holds as well.
\end{lemma}
\begin{proof}
First, we consider $\|\theta\|_{C(\Omega\times[0,T))}$. For
$x\in\Omega$ and $t\in(0,T)$, using $\theta(x,0)=0$, we have,
\begin{equation}
\label{eq:2-2-1}
|\theta(x,t)|
=
|\theta(x,t)-\theta(x,0)|
=
\left|
\int_0^t
\theta_t(x,\tau)
\,d\tau
\right|
\leq
t\|\theta_t\|_{C(\Omega\times[0,T))}
,
\end{equation}
thus,
\begin{equation}
\label{eq:2-2-10}
\|\theta\|_{C(\Omega\times[0,T))}
\leq
T\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Next, we give the estimate of $[\theta]_{\alpha,\Omega\times[0,T)}$. For
$x,x'\in\Omega$ and $t\in(0,T)$, we first assume $|x-x'|<t^{1/2}$.
Then,
again using the assumption that $\Omega$ is convex,
the fundamental theorem of calculus and \eqref{eq:2-2-5} lead to,
\begin{equation*}
\begin{split}
|\theta(x,t)-\theta(x',t)|
&\leq
|x-x'|\int_0^1
|\nabla\theta(\tau x+(1-\tau)x',t)|
\,d\tau
\\
&\leq
T^{(1+\alpha)/2}|x-x'|\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{split}
\end{equation*}
Using the assumption $|x-x'|<t^{1/2}$, we have again,
\begin{equation*}
|\theta(x,t)-\theta(x',t)|
\leq T^{(1+\alpha)/2}
t^{(1-\alpha)/2}
\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}|x-x'|^\alpha
\leq
T\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}|x-x'|^\alpha.
\end{equation*}
Next, we consider the case that $|x-x'|\geq t^{1/2}$. Using the estimate~\eqref{eq:2-2-1}, we have,
\begin{equation*}
\begin{split}
|\theta(x,t)-\theta(x',t)|
&\leq
|\theta(x,t)|+|\theta(x',t)| \\
&\leq
2t
\|\theta_t\|_{C(\Omega\times[0,T))}
\leq
2 t^{1-\alpha/2}
\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}|x-x'|^\alpha\\
&\leq
2 T^{1-\alpha/2}
\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}|x-x'|^\alpha
.
\end{split}
\end{equation*}
Combining these estimates, we arrive at,
\begin{equation}
\label{eq:2-2-11}
[\theta]_{\alpha,\Omega\times[0,T)}
\leq
(T+2T^{1-\alpha/2})\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Finally, we consider
$\langle\theta\rangle_{\alpha/2,\Omega\times[0,T)}$. For $x\in\Omega$ and
$t,t'\in(0,T)$, the fundamental theorem of calculus leads to,
\begin{equation*}
|\theta(x,t)-\theta(x,t')|
\leq
\left|
\int_{t'}^t \theta_t(x,\tau)\,d\tau
\right|
\leq |t-t'|
\|\theta_t\|_{C(\Omega\times[0,T))}
\leq T^{1-\alpha/2}|t-t'|^{\alpha/2}
\|\theta_t\|_{C(\Omega\times[0,T))},
\end{equation*}
hence,
\begin{equation}
\label{eq:2-2-12}
\langle\theta\rangle_{\alpha/2,\Omega\times[0,T)}
\leq T^{1-\alpha/2}
\|\theta\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Combining \eqref{eq:2-2-10}, \eqref{eq:2-2-11}, and \eqref{eq:2-2-12},
we obtain estimate \eqref{eq:2-2-3}.
\end{proof}
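As a simple illustration of the decay rates in \eqref{eq:2-2-2} and \eqref{eq:2-2-3} (a worked example added for concreteness; $g$ denotes an arbitrary function in $C^{\alpha}(\Omega)$ and is not an object of the problem), consider $\theta(x,t)=t\,g(x)$, so that $\theta(x,0)=0$. Then,
\begin{equation*}
\sup_{\Omega\times[0,T)}|\theta|\leq T\sup_\Omega|g|,\qquad
[\theta]_{\alpha,\Omega\times[0,T)}\leq T\,[g]_{\alpha,\Omega},\qquad
\langle\theta\rangle_{\alpha/2,\Omega\times[0,T)}\leq T^{1-\alpha/2}\sup_\Omega|g|,
\end{equation*}
so that $\|\theta\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}\leq(T+T^{1-\alpha/2})\|g\|_{C^{\alpha}(\Omega)}$, which exhibits the same powers of $T$ as in \eqref{eq:2-2-3}.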
\begin{remark}
In the proofs of the Lemmas above, in order to apply the fundamental theorem of
calculus, we used the assumption that the domain $\Omega$ is
convex. However, this sufficient condition may be relaxed to more general
assumptions on the domain.
\end{remark}
We later use the norm of the product of the H\"older functions
(cf. \cite[\S 8.5]{MR1406091}). Therefore, we establish the following result.
It is well-known inequalities (for instance, see \cite[\S
4.1]{MR1814364}), but we give a proof for readers convenience.
\begin{lemma}
\label{lem:2-2-3}
For functions $\theta \in C^{\alpha,\alpha/2}(\Omega\times[0,T))$ and $\tilde{\theta}\in C^{\alpha,\alpha/2}(\Omega\times[0,T))$,
the product $\theta\tilde{\theta}$ is also in
$C^{\alpha,\alpha/2}(\Omega\times[0,T))$. Moreover, the following estimate holds,
\begin{equation*}
\|\theta\tilde{\theta}\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
\|\theta\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\|\tilde{\theta}\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}.
\end{equation*}
\end{lemma}
\begin{proof}
For $x,x'\in\Omega$, $0< t,t'<T$, we have,
\begin{equation}
\label{est_prH}
|\theta(x,t)\tilde{\theta}(x,t)|\leq
\|\theta\|_{C(\Omega\times[0,T))}
\|\tilde{\theta}\|_{C(\Omega\times[0,T))}.
\end{equation}
In addition, we obtain that,
\begin{equation*}
\begin{split}
|\theta(x,t)\tilde{\theta}(x,t)
-
\theta(x',t)\tilde{\theta}(x',t)|
&\leq
|(\theta(x,t)-\theta(x',t))\tilde{\theta}(x,t)|
+
|\theta(x',t)(\tilde{\theta}(x,t)-\tilde{\theta}(x',t))| \\
&\leq
\left(
[\theta]_{\alpha,\Omega\times[0,T)}\|\tilde{\theta}\|_{C(\Omega\times[0,T))}
+
\|\theta\|_{C(\Omega\times[0,T))}
[\tilde{\theta}]_{\alpha,\Omega\times[0,T)}
\right)
|x-x'|^\alpha.
\end{split}
\end{equation*}
Hence, we have that,
\begin{equation}
\label{est_prH1}
[\theta \tilde{\theta}]_{\alpha,\Omega\times[0,T)} \leq
[\theta]_{\alpha,\Omega\times[0,T)}\|\tilde{\theta}\|_{C(\Omega\times[0,T))}
+
\|\theta\|_{C(\Omega\times[0,T))}
[\tilde{\theta}]_{\alpha,\Omega\times[0,T)}.
\end{equation}
Similarly,
\begin{equation*}
\begin{split}
|\theta(x,t)\tilde{\theta}(x,t)
-
\theta(x,t')\tilde{\theta}(x,t')|
&\leq
|(\theta(x,t)-\theta(x,t'))\tilde{\theta}(x,t)|
+
|\theta(x,t')(\tilde{\theta}(x,t)-\tilde{\theta}(x,t'))| \\
&\leq
\left(
\langle\theta\rangle_{\alpha/2,\Omega\times[0,T)}\|\tilde{\theta}\|_{C(\Omega\times[0,T))}
+
\|\theta\|_{C(\Omega\times[0,T))}
\langle\tilde{\theta}\rangle_{\alpha/2,\Omega\times[0,T)}
\right)
|t-t'|^{\alpha/2}.
\end{split}
\end{equation*}
Thus, we obtain,
\begin{equation}
\label{est_prH2}
\langle \theta \tilde{\theta}\rangle_{\alpha/2,\Omega\times[0,T)}\leq
\langle\theta\rangle_{\alpha/2,\Omega\times[0,T)}\|\tilde{\theta}\|_{C(\Omega\times[0,T))}
+
\|\theta\|_{C(\Omega\times[0,T))}
\langle\tilde{\theta}\rangle_{\alpha/2,\Omega\times[0,T)}.
\end{equation}
Therefore, combining the above estimates
\eqref{est_prH}--\eqref{est_prH2}, we arrive at the desired inequality,
\begin{equation*}
\begin{split}
\|\theta\tilde{\theta}\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&=
\|\theta\tilde{\theta}\|_{C(\Omega\times[0,T))}
+
[\theta\tilde{\theta}]_{\alpha,\Omega\times[0,T)}
+
\langle\theta\tilde{\theta}\rangle_{\alpha/2,\Omega\times[0,T)} \\
&\leq
\|\theta\|_{C(\Omega\times[0,T))}
\|\tilde{\theta}\|_{C(\Omega\times[0,T))}
+
\left(
[\theta]_{\alpha,\Omega\times[0,T)}\|\tilde{\theta}\|_{C(\Omega\times[0,T))}
+
\|\theta\|_{C(\Omega\times[0,T))}
[\tilde{\theta}]_{\alpha,\Omega\times[0,T)}
\right) \\
&\qquad
+
\left(
\langle\theta\rangle_{\alpha/2,\Omega\times[0,T)}\|\tilde{\theta}\|_{C(\Omega\times[0,T))}
+
\|\theta\|_{C(\Omega\times[0,T))}
\langle\tilde{\theta}\rangle_{\alpha/2,\Omega\times[0,T)}
\right) \\
&\leq
\|\theta\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\|\tilde{\theta}\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}.
\end{split}
\end{equation*}
\end{proof}
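For a concrete sanity check of Lemma~\ref{lem:2-2-3} (an illustrative script, not part of the proof; the grid, the exponent $\alpha$, and the test functions are arbitrary choices), the following evaluates discrete analogues of the norm $\|\cdot\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ on a coarse space-time grid and verifies the product inequality for a pair of sample functions.
\begin{verbatim}
import numpy as np

alpha = 0.5
xs = np.linspace(0.0, 1.0, 15)       # spatial grid on Omega = (0,1)
ts = np.linspace(0.0, 0.5, 15)       # time grid on [0,T)

def holder_norm(f):
    """Discrete sup norm plus spatial and temporal Hoelder seminorms."""
    F = np.array([[f(x, t) for t in ts] for x in xs])
    sup = np.max(np.abs(F))
    sx = max(abs(F[i, k] - F[j, k]) / abs(xs[i] - xs[j]) ** alpha
             for i in range(len(xs)) for j in range(i) for k in range(len(ts)))
    st = max(abs(F[i, k] - F[i, l]) / abs(ts[k] - ts[l]) ** (alpha / 2)
             for i in range(len(xs)) for k in range(len(ts)) for l in range(k))
    return sup + sx + st

theta = lambda x, t: np.sin(3 * x) * t
tilde = lambda x, t: np.cos(2 * x) + t ** 2

lhs = holder_norm(lambda x, t: theta(x, t) * tilde(x, t))
rhs = holder_norm(theta) * holder_norm(tilde)
print(lhs, rhs, lhs <= rhs)          # expected: True
\end{verbatim}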
The results of Lemma~\ref{lem:2-2-1} and Lemma~\ref{lem:2-2-2}
apply to any function $\zeta\in X_{M,T}$; hence, we have obtained the decay estimates
for the H\"older norms $\|\nabla
\zeta\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ and
$\|\zeta\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ of $\zeta\in X_{M,T}$. As a
consequence, in the following sections, for $\psi\in
X_{M,T}$, the nonlinear term $G(\psi)$ can be treated as a small
perturbation in terms of the H\"older norms.
\subsection{Well-definedness of the solution map}
\label{sec:2-3}
We recall the function space $X_{M,T}$ defined in
\eqref{eq:2-3-9}. For $\psi\in X_{M,T}$, our goal is to first consider
the linear parabolic equation \eqref{eq:2-3-8} associated with the
nonlinear problem \eqref{eq:2-1-4}. We also recall the definition of the
solution map $A:\psi\mapsto\eta$ from Definition~\ref{def:solmap}
associated with the linear parabolic model \eqref{eq:2-3-8}.
In this Section~\ref{sec:2-3} and in the next Section~\ref{sec:2-4}, we
show that the solution map $A:\psi\mapsto\eta$ is a contraction mapping
on $X_{M,T}$, where $\eta$ is a solution of \eqref{eq:2-3-8}. Once we
show that the solution map $A$ is a contraction, we can obtain a fixed
point $\xi\in X_{M,T}$ of the solution map $A$, and hence $\xi$ will be
a solution of \eqref{eq:2-1-4}, \cite[\S 7.2]{MR2759829}.
\par First, we show that the solution map is well-defined on
$X_{M,T}$, namely that there exist appropriate positive constants $M,T>0$ such that, for any
$\psi\in X_{M,T}$, the solution $\eta=A\psi$ of the linear parabolic equation
\eqref{eq:2-3-8} belongs to $X_{M,T}$.
Let us now recall the Schauder estimates for the following linear
parabolic equation:
\begin{equation}
\label{eq:2-3-1}
\left\{
\begin{aligned}
\frac{\partial w}{\partial t}
&=
Lw+g(x,t),\quad
x\in\Omega,\
t>0, \\
\nabla
w
\cdot
\nu
\bigg|_{\partial\Omega}
&=
0,\quad
t>0, \\
w(0,x)
&=
0,
\quad
x\in\Omega.
\end{aligned}
\right.
\end{equation}
Here, the operator $L$ is defined in \eqref{eq:2-1-1}. The following Schauder estimates
for the solution of \eqref{eq:2-3-1} are applicable.
\begin{proposition}
[{\cite[Theorem 5.3 in Chapter IV]{MR0241822}, \cite[Theorem
4.31]{MR1465184}}] \label{prop:2-3-1}
Assume the strong positivity \eqref{eq:2-0-1}, the
regularity \eqref{eq:2-0-2}, and let $L$ be the differential operator
defined in \eqref{eq:2-1-1}.
For any H\"older continuous function $g\in
C^{\alpha,\alpha/2}(\Omega\times[0,T))$, there uniquely exists a
solution $w \in C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$ of
\eqref{eq:2-3-1}, such that,
\begin{equation}
\label{eq:2-3-2}
\|w\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
\leq
\Cl{const:2.2}\|g\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))},
\end{equation}
where $\Cr{const:2.2}>0$ is a positive constant.
\end{proposition}
Using the Schauder estimate \eqref{eq:2-3-2}, we now show the well-definedness
of the solution map $A$ in $X_{M,T}$.
\begin{lemma}
\label{lem:2-3-1}
Assume the strong positivity \eqref{eq:2-0-1}, the
regularity \eqref{eq:2-0-2}, and let $L$ be the differential operator
defined in \eqref{eq:2-1-1}. Then, there are constants $M>0$ and
$T_0>0$, such that for $0<T\leq T_0$ and $\psi\in X_{M,T}$, the image
of the solution map $A\psi$ belongs to $X_{M,T}$ and the map $A$ is
well-defined on $X_{M,T}$.
\end{lemma}
\begin{proof}
Let us assume that we have constants $M,T>0$ that will be defined
later, then consider $\psi\in
X_{M,T}$. We use the Schauder estimate \eqref{eq:2-3-2} for $L$ and
for $g = g_0 + G(\psi)$, where $L$, $G(\psi)$ and $g_0$ are defined as
in \eqref{eq:2-1-1}.
First, we note that from the strong positivity
\eqref{eq:2-0-1} and the regularity \eqref{eq:2-0-2}, there is a
positive constant $\Cl{const:2.3}>0$ which depends only on
$\|b\|_{C^{1+\alpha,(1+\alpha)/2}(\Omega\times[0,T))}$,
$\|D\|_{C^{1+\alpha}(\Omega)}$, $\|\phi\|_{C^{1+\alpha}(\Omega)}$,
$\|h_0\|_{C^{2+\alpha}(\Omega)}$, and
the constant $\Cr{const:2.8}$ in \eqref{eq:2-0-1} such that,
\begin{equation}
\label{eq:2-3-10}
\|g_0\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
\Cr{const:2.3}.
\end{equation}
Next, we calculate the norm of $\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x) |\nabla \psi|^2$. Using Lemma \ref{lem:2-2-3},
the strong positivity \eqref{eq:2-0-1} and the
regularity \eqref{eq:2-0-2}, we obtain for
$\psi\in X_{M,T}$,
\begin{equation*}
\left\|
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
|\nabla \psi|^2
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
\left\|
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\left\|
\nabla \psi
\right\|^2_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}.
\end{equation*}
Noting that $\psi(x,0)=0$ for $x\in\Omega$, we can apply Lemma
\ref{lem:2-2-1} and use the decay estimate \eqref{eq:2-2-2} to show that,
\begin{equation*}
\left\|
\nabla \psi
\right\|^2_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
9\|\psi\|^2_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
(T^{(1+\alpha)/2}+T^{1/2})^2.
\end{equation*}
Since $\psi\in X_{M,T}$,
$\|\psi\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}\leq M$, hence
we have,
\begin{equation}
\label{eq:2-3-3}
\left\|
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
|\nabla \psi|^2
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
9
\left\|
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
M^2
(T^{(1+\alpha)/2}+T^{1/2})^2.
\end{equation}
Next, we calculate the norm of $\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
\psi
\nabla \psi
\cdot
\nabla
D(x)$. Using Lemma \ref{lem:2-2-3}, the strong
positivity \eqref{eq:2-0-1} and the regularity \eqref{eq:2-0-2}, we estimate,
\begin{equation*}
\begin{split}
\left\|
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
\psi
\nabla \psi
\cdot
\nabla
D(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
\left\|
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
\nabla
D(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))} \\
&\qquad
\times\left\|
\psi
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\left\|
\nabla \psi
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
.
\end{split}
\end{equation*}
Using Lemmas \ref{lem:2-2-1} and \ref{lem:2-2-2} with the initial
condition $\psi=0$ at $t=0$, we have by \eqref{eq:2-2-2} and
\eqref{eq:2-2-3} that,
\begin{equation*}
\left\|
\nabla \psi
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
3
\|\psi\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
(T^{(1+\alpha)/2}+T^{1/2}),
\end{equation*}
and,
\begin{equation*}
\|\psi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
3
\|\psi\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
(T+T^{1-\alpha/2}).
\end{equation*}
Again, since $\psi\in X_{M,T}$,
$\|\psi\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}\leq M$, and
thus, we obtain,
\begin{equation}
\label{eq:2-3-4}
\begin{split}
\left\|
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
\psi
\nabla \psi
\cdot
\nabla
D(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
9
\left\|
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
\nabla
D(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))} \\
&\qquad
\times
M^2
(T^{(1+\alpha)/2}+T^{1/2})
(T+T^{1-\alpha/2}).
\end{split}
\end{equation}
Combining \eqref{eq:2-3-3} and \eqref{eq:2-3-4}, we can take a
positive constant $\Cl{const:2.4}>0$ which depends only on
$\|b\|_{C^{1+\alpha,(1+\alpha)/2}(\Omega\times[0,T))}$,
$\|D\|_{C^{1+\alpha}(\Omega)}$, $\|\phi\|_{C^{1+\alpha}(\Omega)}$,
and the constant $\Cr{const:2.8}$, such that,
\begin{equation}
\label{eq:2-3-11}
\left\|
G(\psi)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
\Cr{const:2.4}M^2
\kappa(T),
\end{equation}
where
\begin{equation}
\label{eq:2-3-7}
\kappa(T)
=
(T^{(1+\alpha)/2}+T^{1/2})^2
+
(T^{(1+\alpha)/2}+T^{1/2})
(T+T^{1-\alpha/2}).
\end{equation}
Note that $\kappa(T)$ is an increasing function with respect to $T>0$ and
$\kappa(T)\rightarrow0$ as $T\downarrow0$. By the Schauder estimate
\eqref{eq:2-3-2}, together with \eqref{eq:2-3-10} and
\eqref{eq:2-3-11}, the solution $\eta=A\psi$ of the linear parabolic
equation \eqref{eq:2-3-8} satisfies,
\begin{equation}
\label{eq:2-3-5}
\|A\psi\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
\leq
\Cr{const:2.2}
\left(
\Cr{const:2.3}
+
\Cr{const:2.4}M^2\kappa(T)
\right).
\end{equation}
In order to guarantee
$\|A\psi\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}\leq M$ for
$0<T\leq T_0$, we take,
\begin{equation}
\label{eq:2-3-6}
M:=2 \Cr{const:2.2}\Cr{const:2.3},
\qquad
\Cr{const:2.4}M^2
\kappa(T_0)
\leq
\Cr{const:2.3}.
\end{equation}
Then from \eqref{eq:2-3-5}, $
\|A\psi\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))} \leq M$ for
$0<T\leq T_0$, hence $A\psi \in X_{M,T}$.
\end{proof}
\begin{remark}
Note that, from \eqref{eq:2-3-6}, the positive constant $M>0$ depends on
$\|b\|_{C^{1+\alpha,(1+\alpha)/2}(\Omega\times[0,T))}$,
$\|D\|_{C^{1+\alpha}(\Omega)}$, $\|\phi\|_{C^{1+\alpha}(\Omega)}$,
$\|h_0\|_{C^{2+\alpha}(\Omega)}$, and the constant
$\Cr{const:2.8}$. Also, from \eqref{eq:2-3-6}, the time interval $T_0>0$
can be estimated as,
\begin{equation}
\label{eq:2-3-12}
\kappa(T_0)
\leq
\frac{1}{4\Cr{const:2.2}^2\Cr{const:2.3}\Cr{const:2.4}}.
\end{equation}
Since $\psi=0$ at $t=0$, the auxiliary function $\kappa(T)$ can be written
explicitly as in \eqref{eq:2-3-7} in order to
estimate the H\"older norm of the nonlinear term $G(\psi)$. Thus, using \eqref{eq:2-3-12}, we
obtain an explicit estimate of the time interval $T_0>0$ which ensures
that the solution map $A$ is well-defined on $X_{M,T}$.
\end{remark}
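For concreteness, the bound \eqref{eq:2-3-12} can be inverted numerically. The following sketch (the numerical values of $\Cr{const:2.2}$, $\Cr{const:2.3}$ and $\Cr{const:2.4}$ are hypothetical placeholders, since these constants depend on the data) computes $\kappa(T)$ from \eqref{eq:2-3-7} and finds an admissible $T_0$ by bisection, using the monotonicity of $\kappa$.
\begin{verbatim}
alpha = 0.5                    # Hoelder exponent, 0 < alpha < 1
C2, C3, C4 = 2.0, 1.0, 3.0     # hypothetical stand-ins for the constants

def kappa(T):
    # auxiliary function of (2-3-7)
    a = T ** ((1 + alpha) / 2) + T ** 0.5
    return a * a + a * (T + T ** (1 - alpha / 2))

target = 1.0 / (4 * C2 ** 2 * C3 * C4)   # right-hand side of (2-3-12)

lo, hi = 0.0, 1.0
while kappa(hi) < target:      # enlarge the bracket if necessary
    hi *= 2.0
for _ in range(60):            # bisection; kappa is increasing in T
    mid = 0.5 * (lo + hi)
    if kappa(mid) <= target:
        lo = mid
    else:
        hi = mid
print("admissible T_0 is approximately", lo)
\end{verbatim}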
\subsection{The contraction property}
\label{sec:2-4}
In this section, we show that the solution map
$A:X_{M,T}\ni\psi\mapsto\eta\in X_{M,T}$, where $\eta$ is a solution of
\eqref{eq:2-3-8}, is a contraction on $X_{M,T}$. The explicit decay
estimates for the H\"older norms of $\psi\in X_{M,T}$ obtained in Lemmas
\ref{lem:2-2-1} and \ref{lem:2-2-2} are essential for deriving the
smallness of the nonlinear term $G(\psi)$. Indeed, for $\psi\in
X_{M,T}$, the H\"older norms
$\|\nabla \psi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ and
$\|\psi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ tend to
$0$ as $T\rightarrow0$; thus, the Lipschitz constant of $A$ in
$C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$ can be
made smaller than $1$ if $T$ is sufficiently
small. This is the reason why we consider the change of variables
\eqref{eq:2-1-2} and, as a result, the zero initial value problem
\eqref{eq:2-1-4} subject to the homogeneous Neumann boundary
condition.
\begin{lemma}
\label{lem:2-4-1}
Assume the strong positivity \eqref{eq:2-0-1},
regularity \eqref{eq:2-0-2}, and let $L$ be the differential operator
defined in \eqref{eq:2-1-1}.
Let $M>0$ and $T_0>0$ be the constants obtained in
Lemma \ref{lem:2-3-1}, \eqref{eq:2-3-6}. Then, there exists $T_1 \in
(0,T_0]$ such that $A$ is a contraction on $X_{M,T}$ for $0<T\leq T_1$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:2-4-1}]
We take $0<T\leq T_0$, where $T$ will be specified later in the proof. For
$\psi_1$, $\psi_2\in X_{M,T}$, let
$\tilde{\eta}:=A\psi_1-A\psi_2$. Then from \eqref{eq:2-3-8}, $\tilde{\eta}$ satisfies,
\begin{equation}
\label{eq:2-4-11}
\left\{
\begin{aligned}
\frac{\partial \tilde{\eta}}{\partial t}
&=
L\tilde{\eta}+G(\psi_1)-G(\psi_2),\quad
x\in\Omega,\
t>0, \\
\nabla
\tilde{\eta}
\cdot
\nu
\bigg|_{\partial\Omega}
&=
0,\quad
t>0, \\
\tilde{\eta}(0,x)
&=
0,
\quad
x\in\Omega.
\end{aligned}
\right.
\end{equation}
Due to the homogeneous Neumann boundary condition and the zero initial
condition for $\tilde{\eta}$, we can use the Schauder estimate
\eqref{eq:2-3-2} for the system
\eqref{eq:2-4-11}, hence we have,
\begin{equation}
\label{eq:2-4-3}
\|\tilde{\eta}\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
\leq
\Cr{const:2.2}
\|
G(\psi_1)-G(\psi_2)
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}.
\end{equation}
By direct calculation of the difference of the nonlinear terms
$G(\psi)$ \eqref{eq:2-1-1}, we have,
\begin{equation}
\label{eq:2-4-10}
G(\psi_1)-G(\psi_2)
=
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
(|\nabla \psi_1|^2-|\nabla \psi_2|^2)
-
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
(
\psi_1
\nabla \psi_1
-
\psi_2
\nabla \psi_2
)
\cdot
\nabla
D(x).
\end{equation}
First, we estimate $\|\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x) (|\nabla \psi_1|^2-|\nabla
\psi_2|^2)\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$. Since,
\begin{equation*}
\left|
|\nabla \psi_1|^2-|\nabla \psi_2|^2
\right|
=
\left|
(\nabla \psi_1+\nabla \psi_2)
\cdot
(\nabla \psi_1-\nabla \psi_2)
\right|,
\end{equation*}
we have due to Lemma \ref{lem:2-2-3} that,
\begin{equation}
\label{eq:2-4-5}
\|
|\nabla \psi_1|^2-|\nabla \psi_2|^2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
\|
\nabla \psi_1+\nabla \psi_2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\|
\nabla \psi_1-\nabla \psi_2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Since $\psi_1$, $\psi_2 \in X_{M,T}$, we have $\psi_1-\psi_2=0$
at $t=0$, and Lemma
\ref{lem:2-2-1} applies to the functions $\psi_1$, $\psi_2$ and $\psi_1-\psi_2$,
\begin{equation}
\label{eq:2-4-6}
\begin{split}
\|\nabla\psi_1\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
3 (T^{(1+\alpha)/2}+T^{1/2})\|\psi_1\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}, \\
\|\nabla\psi_2\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
3 (T^{(1+\alpha)/2}+T^{1/2})\|\psi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}, \\
\|\nabla\psi_1-\nabla\psi_2\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
3 (T^{(1+\alpha)/2}+T^{1/2})\|\psi_1-\psi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{split}
\end{equation}
Combining estimates \eqref{eq:2-4-5} and \eqref{eq:2-4-6}, we obtain,
\begin{equation*}
\begin{split}
&\quad
\|
|\nabla \psi_1|^2-|\nabla \psi_2|^2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))} \\
&\leq
9 (T^{(1+\alpha)/2}+T^{1/2})^2
(
\|
\psi_1
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
+
\|
\psi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
)
\|
\psi_1-\psi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{split}
\end{equation*}
Therefore, using the strong positivity \eqref{eq:2-0-1},
the regularity \eqref{eq:2-0-2}, and that functions $\psi_1,\psi_2\in X_{M,T}$, we
arrive at the inequality,
\begin{equation}
\label{eq:2-4-1}
\left\|
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x) (|\nabla \psi_1|^2-|\nabla
\psi_2|^2)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
\Cl{const:2.5}M
(T^{(1+\alpha)/2}+T^{1/2})^2
\|
\psi_1-\psi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Here,
\begin{equation*}
\Cr{const:2.5}
=
9
\left\|
\frac{(b(x,t))^2}{D(x)}
f^{\mathrm{eq}}(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\end{equation*}
is a positive constant which depends only on
$\|b\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$,
$\|D\|_{C^{\alpha}(\Omega)}$, $\|\phi\|_{C^{\alpha}(\Omega)}$,
and the constant $\Cr{const:2.8}$ in \eqref{eq:2-0-1}.
Next, we estimate,
$
\|\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
(
\psi_1
\nabla \psi_1
-
\psi_2
\nabla \psi_2
)
\cdot
\nabla
D(x)
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$. Since we can write,
\begin{equation*}
\psi_1 \nabla \psi_1
-
\psi_2 \nabla \psi_2
=
\psi_1
(
\nabla \psi_1
-
\nabla \psi_2
)
+
(
\psi_1 - \psi_2
)
\nabla\psi_2,
\end{equation*}
we can use Lemma \ref{lem:2-2-3} again,
\begin{multline}
\label{eq:2-4-7}
\|
\psi_1 \nabla \psi_1
-
\psi_2 \nabla \psi_2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))} \\
\leq
\|
\psi_1
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\|
\nabla \psi_1-\nabla \psi_2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
+
\|
\nabla \psi_2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\|
\psi_1-\psi_2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}.
\end{multline}
Since $\psi_1$, $\psi_2 \in X_{M,T}$, we have $\psi_1-\psi_2=0$
at $t=0$, and thus we can use Lemmas
\ref{lem:2-2-1} and \ref{lem:2-2-2} to obtain,
\begin{equation}
\label{eq:2-4-8}
\begin{split}
\|\psi_1\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
3 (T+T^{1-\alpha/2})\|\psi_1\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}, \\
\|\nabla\psi_2\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
3 (T^{(1+\alpha)/2}+T^{1/2})\|\psi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}, \\
\|\psi_1-\psi_2\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
3 (T+T^{1-\alpha/2})\|\psi_1-\psi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}, \\
\|\nabla\psi_1-\nabla\psi_2\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
&\leq
3 (T^{(1+\alpha)/2}+T^{1/2})\|\psi_1-\psi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{split}
\end{equation}
Combining \eqref{eq:2-4-7} and \eqref{eq:2-4-8},
we obtain the estimate,
\begin{equation*}
\begin{split}
&\quad
\|
\psi_1 \nabla \psi_1
-
\psi_2 \nabla \psi_2
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))} \\
&\leq
9 (T^{(1+\alpha)/2}+T^{1/2})(T+T^{1-\alpha/2}) (
\|
\psi_1
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
+
\|
\psi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
)\\
&\quad
\times
\|
\psi_1-\psi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}.
\end{split}
\end{equation*}
Therefore, using the strong positivity
\eqref{eq:2-0-1}, the regularity \eqref{eq:2-0-2}, and that $\psi_1,\psi_2\in
X_{M,T}$, we get,
\begin{equation}
\label{eq:2-4-2}
\begin{split}
&\quad
\left\|
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
(
\psi_1
\nabla \psi_1
-
\psi_2
\nabla \psi_2
)
\cdot
\nabla
D(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))} \\
&\leq
\Cl{const:2.6}M
(T^{(1+\alpha)/2}+T^{1/2})(T+T^{1-\alpha/2})
\|
\psi_1-\psi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))},
\end{split}
\end{equation}
where
\begin{equation*}
\Cr{const:2.6}
=
9
\left\|
\frac{(b(x,t))^2}{(D(x))^2}
f^{\mathrm{eq}}(x)
\nabla D(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\end{equation*}
is a positive constant which depends only on
$\|b\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$,
$\|D\|_{C^{1+\alpha}(\Omega)}$, $\|\phi\|_{C^{\alpha}(\Omega)}$, and
the constant $\Cr{const:2.8}$ in \eqref{eq:2-0-1}.
Finally, combining \eqref{eq:2-4-3}, \eqref{eq:2-4-10}, \eqref{eq:2-4-1}, and
\eqref{eq:2-4-2}, we arrive at the estimate,
\begin{equation*}
\begin{split}
\|A\psi_1-A\psi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
&=
\|\tilde{\eta}\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))} \\
&\leq
\Cl{const:2.7}
M\kappa(T)
\|\psi_1-\psi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))},
\end{split}
\end{equation*}
where
$\Cr{const:2.7}=\Cr{const:2.2}\max\{\Cr{const:2.5},\
\Cr{const:2.6}\}>0$ is a positive constant and,
\begin{equation}
\label{eq:2-4-9}
\kappa(T)
=
(T^{(1+\alpha)/2}+T^{1/2})^2
+
(T^{(1+\alpha)/2}+T^{1/2})
(T+T^{1-\alpha/2}).
\end{equation}
Note that $\kappa(T)$ is increasing with respect to $T>0$ and
$\kappa(T)\rightarrow0$ as $T\downarrow0$. Taking $T_1\in (0,T_0]$ such
that,
\begin{equation}
\label{eq:2-4-4}
\Cr{const:2.7} M\kappa(T_1)< 1,
\end{equation}
the solution map $A$ is a contraction mapping on $X_{M,T}$ for $0<T\leq
T_1$.
\end{proof}
\begin{remark}
\label{rem:2-1-1}
Note that, for $h\in C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$,
$\|h\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ and $\|\nabla
h\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ do not vanish as
$T\downarrow0$ in general. On the other hand, when $\psi=0$ at $t=0$,
the H\"older norms $\|\psi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ and
$\|\nabla \psi\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}$ tend to $0$ as
$T\downarrow0$ by \eqref{eq:2-2-2} and \eqref{eq:2-2-3}.
Thus, we derived the explicit time-interval estimates in \eqref{eq:2-4-9}
and in \eqref{eq:2-4-4} to ensure that the solution map $A$ is a
contraction map.
Further note that one may show the well-definedness and the contraction
property directly for the solution map associated with the problem
\eqref{eq:2-1-5}. Still, it is worth considering the variable $\xi$ in
\eqref{eq:2-1-2}: with it, we can easily construct a contraction mapping
$A$ on $X_{M,T}$ and obtain the estimates \eqref{eq:2-3-12} and
\eqref{eq:2-4-4}, which guarantee the well-definedness and the
contraction property of the solution map.
\end{remark}
We are now in a position to prove the existence of a solution of
\eqref{eq:2-0-10}.
\begin{proof}[Proof of Theorem \ref{thm:2-0-1}]
Let $M>0$ be a positive constant obtained in Lemma \ref{lem:2-3-1},
\eqref{eq:2-3-6}, and let $T_1>0$ be a positive constant from
Lemma \ref{lem:2-4-1}, \eqref{eq:2-4-4}. Then, due to Lemmas
\ref{lem:2-3-1} and \ref{lem:2-4-1}, the solution map $A$ is
a contraction on $X_{M,T_1}$. Therefore, there is a fixed point $\xi\in
X_{M,T_1}$, such that $\xi=A\xi$ and $\xi$ is a classical solution of
\eqref{eq:2-1-4}. Thus,
\begin{equation*}
\rho(x,t)
=
\exp
\left(
\frac{\xi(x,t)+h_0(x)}{D(x)}
\right)
\end{equation*}
is a classical solution of \eqref{eq:2-0-10}.
\end{proof}
In this section, we constructed a solution $\rho$ using the auxiliary variables
$h$ in \eqref{eq:2-1-3} and $\xi$ in \eqref{eq:2-1-2}. Since $\xi=0$ at $t=0$, the time interval of existence of the solution
can be explicitly estimated as in \eqref{eq:2-3-6} and in
\eqref{eq:2-4-4}. As the last step of our construction, we
will show uniqueness of the solution $\rho$ of \eqref{eq:2-0-10} in the
next section.
\section{Uniqueness}
\label{sec:3}
In this section, we show uniqueness for a local solution of
\eqref{eq:2-0-4}. As in Section \ref{sec:2}, uniqueness of a solution
of \eqref{eq:2-0-10} implies the uniqueness of a solution to \eqref{eq:2-0-4}. We make the same
assumptions as we did to show existence of a classical solution of
\eqref{eq:2-0-10}.
Note that the contraction property of the solution map $A$ implies
the uniqueness of the fixed point in $X_{M,T}$, but not in $C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$. Nevertheless, similarly to the proof of the contraction
property of the solution map $A$ (Lemma~\ref{lem:2-4-1} in Section
\ref{sec:2}), we show below
uniqueness of a classical solution of \eqref{eq:2-0-10} in $C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$.
\begin{theorem}
\label{thm:3-0-1}
Let $b(x,t)$, $\phi(x)$, $D(x)$, $\rho_0(x)$ and $\Omega$ satisfy the
strong positivity \eqref{eq:2-0-1}, the H\"older regularity
\eqref{eq:2-0-2} for $0<\alpha<1$, and the compatibility for the
initial data \eqref{eq:2-0-3}, respectively. Then, there exists $T>0$ such that, if
$\rho_1$, $\rho_2\in C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$ are
classical solutions of \eqref{eq:2-0-10}, then $\rho_1=\rho_2$ on
$\Omega\times[0,T)$.
\end{theorem}
\begin{proof}
First, note that from Lemma \ref{lem:2-1-2} and Lemma \ref{lem:2-1-1},
it is sufficient to show uniqueness for a solution of \eqref{eq:2-1-4}.
Hereafter, we show the uniqueness of a classical solution of the
problem \eqref{eq:2-1-4}.
Let $\xi_1, \xi_2\in C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))$ be
two solutions of \eqref{eq:2-1-4}. We will prove that $\xi_1=\xi_2$ in
$\Omega\times[0,T)$ for sufficiently small $T>0$ by a
contradiction argument: assume that $\xi_1$ and $\xi_2$ are
distinct in $\Omega\times[0,T)$.
Subtracting $\xi_2$ from $\xi_1$, we obtain the equation,
\begin{equation*}
\frac{\partial (\xi_1-\xi_2)}{\partial t}
=
L(\xi_1-\xi_2)+G(\xi_1)-G(\xi_2),
\end{equation*}
where $L$ and $G$ are defined in \eqref{eq:2-1-1}. Since
$\xi_1-\xi_2=0$ at $t=0$, we can apply the Schauder estimates
\eqref{eq:2-3-2}, and we obtain,
\begin{equation}
\label{eq:3-0-6}
\|\xi_1-\xi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
\leq
\Cr{const:2.2}
\|
G(\xi_1)-G(\xi_2)
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}.
\end{equation}
As in the proof of the Lemma~\ref{lem:2-4-1}, we estimate the norm of,
\begin{equation}
\label{eq:3-0-1}
G(\xi_1)-G(\xi_2)
=
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x)
(|\nabla \xi_1|^2-|\nabla \xi_2|^2)
-
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
(
\xi_1
\nabla \xi_1
-
\xi_2
\nabla \xi_2
)
\cdot
\nabla
D(x).
\end{equation}
Let $M(T):=\max\{
\|\xi_1\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))},
\|\xi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}\}>0$. Then,
$\xi_1, \xi_2\in X_{M(T),T}$, where $X_{M(T),T}$ is defined in
\eqref{eq:2-3-9}, and thus we have the same estimates as in
\eqref{eq:2-4-1} and \eqref{eq:2-4-2}, namely,
\begin{equation}
\label{eq:3-0-4}
\left\|
\frac{(b(x,t))^2}{2D(x)}
f^{\mathrm{eq}}(x) (|\nabla \xi_1|^2-|\nabla
\xi_2|^2)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
\Cr{const:2.5}M(T)
(T^{(1+\alpha)/2}+T^{1/2})^2
\|
\xi_1-\xi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))},
\end{equation}
and
\begin{equation}
\label{eq:3-0-5}
\begin{split}
&\quad
\left\|
\frac{(b(x,t))^2}{2(D(x))^2}
f^{\mathrm{eq}}(x)
(
\xi_1
\nabla \xi_1
-
\xi_2
\nabla \xi_2
)
\cdot
\nabla
D(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))} \\
&\leq
\Cr{const:2.6}M(T)
(T^{(1+\alpha)/2}+T^{1/2})(T+T^{1-\alpha/2})
\|
\xi_1-\xi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))},
\end{split}
\end{equation}
where the constants are
\begin{equation}
\Cr{const:2.5}
=
9
\left\|
\frac{(b(x,t))^2}{D(x)}
f^{\mathrm{eq}}(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}, \mbox{ and }
\quad
\Cr{const:2.6}
=
9
\left\|
\frac{(b(x,t))^2}{(D(x))^2}
f^{\mathrm{eq}}(x)
\nabla D(x)
\right\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}.
\end{equation}
Combining \eqref{eq:3-0-1}, \eqref{eq:3-0-4} and \eqref{eq:3-0-5}, we
obtain the estimate,
\begin{equation}
\label{eq:3-0-8}
\|
G(\xi_1)-G(\xi_2)
\|_{C^{\alpha,\alpha/2}(\Omega\times[0,T))}
\leq
\Cl{const:3.1}
M(T)\kappa(T)
\|\xi_1-\xi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))},
\end{equation}
where $\Cr{const:3.1}=\max\{\Cr{const:2.5},\ \Cr{const:2.6}\}>0$ and,
\begin{equation}
\kappa(T)
=
(T^{(1+\alpha)/2}+T^{1/2})^2
+
(T^{(1+\alpha)/2}+T^{1/2})
(T+T^{1-\alpha/2}).
\end{equation}
Note that $M(T)$ and $\kappa(T)$ are increasing with respect to $T>0$, and
$\kappa(T)\rightarrow0$ as $T\downarrow0$. Therefore, take $T>0$ such that,
\begin{equation}
\label{eq:3-0-7}
\Cr{const:2.2}
\Cr{const:3.1} M(T)\kappa(T)<1.
\end{equation}
Then combining \eqref{eq:3-0-6}, \eqref{eq:3-0-8}, and \eqref{eq:3-0-7},
we obtain that,
\begin{equation}
\begin{split}
\|\xi_1-\xi_2\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))}
&\leq
\Cr{const:2.2}
\Cr{const:3.1} M(T)\kappa(T)
\|
\xi_1-\xi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))} \\
&<
\|
\xi_1-\xi_2
\|_{C^{2+\alpha,1+\alpha/2}(\Omega\times[0,T))},
\end{split}
\end{equation}
which is a contradiction. Thus, we have established that $\xi_1=\xi_2$ in
$\Omega\times[0,T)$.
\end{proof}
\section{Conclusion}\label{sec:4}
\par In this paper, we presented a new nonlinear Fokker-Planck equation
which satisfies a special energy law with the inhomogeneous absolute
temperature of the system. Such models emerge as a part of grain
growth modeling in polycrystalline materials. We showed local
existence and uniqueness of the solution of the Fokker-Planck
system. Large time asymptotic analysis of the proposed Fokker-Planck
model, as well as numerical simulations of the system will
be presented in a forthcoming paper
\cite{Katya-Kamala-Chun-Masashi}. As a part of our future research, we
will further extend such Fokker-Planck systems to the modeling of the
evolution of the
grain boundary network that undergoes disappearance/critical events, e.g. \cite{epshteyn2021stochastic,Katya-Chun-Mzn4}.
\section*{Acknowledgments}
Yekaterina
Epshteyn acknowledges partial support of NSF DMS-1905463 and of NSF DMS-2118172, Masashi Mizuno
acknowledges partial support of JSPS KAKENHI Grant No. JP18K13446 and Chun Liu acknowledges partial support of
NSF DMS-1950868 and NSF DMS-2118181.
\end{document}
\begin{document}
\title{Scattering of a charged particle from a hard cylindrical solenoid: Aharonov-Bohm Effect}
\author{O. Yilmaz}
\email[Electronic address: ]{[email protected]}
\affiliation{Physics Department, Canakkale Onsekiz Mart University, Canakkale, Turkey}
\begin{abstract}
The scattering amplitude $f_k(\alpha,\theta)$ of a charged particle from a long hard cylindrical solenoid is derived by solving the time-independent Schr\"{o}dinger equation on a doubly connected plane. It is a summation over the angular momentum quantum number (a partial wave summation): $$f_k(\alpha,\theta)=\frac{-1}{\sqrt{2\pi i k}}\sum_ {m=-\infty} ^ \infty e^{im \theta}\frac{2J_{m+\alpha}(ka)}{H_{m+\alpha}^{(1)}(ka)}\,. $$
It is shown that only negative mechanical angular momenta, $m+\alpha < 0$, contribute to the amplitude when the radius of the solenoid goes to zero ($a\to 0$) while the magnetic induction flux $\alpha=\Phi_B/\Phi_0$ is held fixed (flux line). The original Aharonov-Bohm result is recovered in this limit.
\end{abstract}
\pacs{03.65.Nk, 03.65.Ta, 03.50.De}
\maketitle
\section{Introduction}\label{sec:intro}
In this work, the non-relativistic scattering of a charged particle from a long ideal solenoid with a finite radius $a$ is studied; see, e.g., references \cite{R83, AALL84}. The problem is treated as an unbounded boundary value problem on a doubly connected domain. The scalar potential exists only as an infinite barrier on the cylindrical region. However, the magnetic induction flux appears as a phase in the wave function of the particle everywhere.
The non-local effect of the magnetic flux on the quantum mechanical wave function of a charged particle is physically detectable even though there is no electromagnetic force on the particle. Before Aharonov and Bohm drew attention to the effect in 1959 in reference \cite{AB59}, other authors had already pointed out the importance of the electromagnetic potentials for electron motion \cite{F39,ES49}. Yet it is mostly known as the Aharonov-Bohm (AB) effect in the literature. Eventually, it has become textbook material: every modern quantum mechanics book contains at least a section discussing the AB effect, e.g., see \cite{JJS94, SW13}. Among the vast literature on the AB effect, the monographs \cite{JH97,PT89} and the references therein are worth pointing out. The effect was verified experimentally in 1960 \cite{C60}.
The potential function concept is rooted in the classical motion of a system of particles. The Lagrangian function $L$ can be defined in a natural way if there exists a potential function $V$ that defines the force field acting on the system. Let us consider a system with $n$ degrees of freedom ${\bf q}=(q^i)$ ($i=1, \ldots , n$); its motion is determined by a set of ordinary differential equations (Lagrange's equations)
\begin{equation}\label{eq:lagrange}
\frac{d}{dt}\frac{\partial T}{\partial\dot{q}^i}-\frac{\partial T}{\partial q^i}=Q_i,\,\,\,\,\,\, i=1,\ldots ,n\,,
\end{equation}
where $T$ is the kinetic energy and $Q_i$ is the generalized force. Consequently, the equations can be written in homogeneous form if there exists a function $V=V(t,{\bf q},\dot{\bf q})$ such that
\begin{align}\label{eq:force}
Q_i&=\frac{d}{dt}\frac{\partial V}{\partial\dot{q}^i}-\frac{\partial V}{\partial q^i}\nonumber\\
&=\frac{\partial^2 V}{\partial\dot{q}^i\partial\dot{q}^j} \ddot{q}^j+\frac{\partial^2 V}{\partial\dot{q}^i\partial q^j} \dot{q}^j+\frac{\partial^2 V}{\partial\dot{q}^i\partial t}-\frac{\partial V}{\partial q^i}\,.
\end{align}
If $Q_i$ is independent of $\ddot{q}^j$ ($j=1,\ldots,n$), then $\frac{\partial^2 V}{\partial\dot{q}^i\partial\dot{q}^j}=0$ ($i,j=1, \ldots ,n$). This implies that $V=-A_i(t,{\bf q}) \dot{q}^i+\varphi(t,{\bf q})$ \cite{FO93}. Therefore, the equations of motion (\ref{eq:lagrange}) can now be written in terms of the Lagrangian function $L=T-V$. Such a $V$ is called an extended, or velocity-dependent, potential function. The electromagnetic force field (Lorentz force) is derived from this sort of potential.
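For instance (the standard electromagnetic case, recalled here for concreteness in Gaussian units; it is not spelled out explicitly in the text), the choice
\begin{equation*}
V(t,{\bf q},\dot{\bf q})=q\,\phi(t,{\bf q})-\frac{q}{c}\,{\bf A}(t,{\bf q})\cdot\dot{\bf q}
\end{equation*}
in Eq.(\ref{eq:force}) yields the Lorentz force $Q_i=q\left(E_i+\frac{1}{c}(\dot{\bf q}\times{\bf B})_i\right)$, with ${\bf E}$ and ${\bf B}$ expressed through the potentials $(\phi,{\bf A})$ as in Eq.(\ref{eq:fields}) below.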
The quantum picture comes into play in a standard way once the Lagrangian function above is available. The canonical momenta $p_i$, conjugate to the generalized coordinates $q^i$, are defined as $p_i\equiv\frac{\partial L}{\partial \dot{q}^i}$. The Hamiltonian function $H(t,{\bf q}, {\bf p})$ is obtained by means of the Legendre transformation $H=\dot{q}^ip_i-L$, provided the Hessian does not vanish, ${ \rm Det} \left(\frac{\partial^2 L}{\partial\dot{q}^i\partial\dot{q}^j}\right)\neq 0 $, which is essential for solving for $\dot{q}^i$ in terms of $t$, ${\bf q}$, and ${\bf p}$ from the set of equations $p_i=\frac{\partial L}{\partial \dot{q}^i}$ ($i=1,\ldots,n$). Physically, the Hamiltonian function is the total energy of the system. At this point, the fundamental Poisson brackets ($\{q^i,q^j\}=0$, $\{q^i,p_j\}=\delta_j^i$, and $\{p_i,p_j\}=0$) are used to quantize the classical Hamiltonian system (Dirac's method):
\begin{equation}
\{q^j,p_k\}=\frac{[\hat{q}^j,\hat{p}_k]}{i\hslash}\,,\,\,\,\,\, j,k=1,\ldots,n\, ,
\end{equation}
where $[\hat{q}^j,\hat{p}_k]\equiv \hat{q}^j\hat{p}_k-\hat{p}_k\hat{q}^j$ is the commutator of operators acting on a Hilbert space, $\hslash$ is Planck's constant divided by $2\pi$, and $i$ is the imaginary unit. In this picture, time $t$ is a parameter carried into the Hilbert space. A function of the conjugate operators $\hat{{\bf q}}$ and $\hat{{\bf p}}$ is another operator acting on the same Hilbert space. For instance, the eigenvalue equation of the time-independent Hamiltonian operator, $\hat{H}(\hat{{\bf q}},\hat{{\bf p}})|\psi\rangle=E|\psi\rangle$, is nothing but the time-independent Schr\"{o}dinger equation. Thus, it is seen that the potential function $V$ plays a more fundamental role in quantum physics than the force field $Q_i$ in Eq.(\ref{eq:lagrange}).
In the rest of the work, the scattering is considered as a boundary value problem for the wave equation in the presence of electromagnetic potentials.
The scattering from a hard cylinder is solved in section \ref{sec:hard}. The particle cannot enter the cylinder due to an infinite potential barrier in the circular region. This serves as a prototype for placing a solenoid inside the region forbidden to the particle. The Aharonov-Bohm potential is explained in section \ref{sec:abpot}. The scattering amplitude and total cross section are derived in section \ref{sec:abscat} for a solenoid with a finite radius; it is shown that they reduce to the hard-cylinder solution in the limit of vanishing magnetic induction flux. Finally, the limit of a flux line with zero radius (the original AB solution) is examined in section \ref{sec:limit}, where it is obtained directly from the scattering amplitude of the present work. A summary of the results and some directions for future work are given in the concluding section.
\section{Solution of the force-free Schr\"{o}dinger equation in a doubly connected plane}\label{sec:free}
\subsection{Scattering from a hard cylinder} \label{sec:hard}
Our configuration space $D$ is the two-dimensional plane $(x,y)$ with a hole of finite radius $a$ at the origin; $a$ may be any finite real number, $ 0 < \varepsilon \leqslant a \leqslant R < \infty$. This is equivalent to saying that the particle is not allowed to penetrate the cylindrical region at the origin, that is, the probability of finding the particle in this region vanishes. Physically, this can be realized by an infinite potential barrier in this region: $V(x,y)= \infty$ for $\sqrt{x^2+y^2} \leqslant a$ and zero anywhere else in the plane. Hence, the energy eigenfunction $u_E(x,y)$ satisfies the time-independent Schr\"{o}dinger equation outside of the circular boundary.
\begin{equation} \label{eq:sch}
-\frac{\hslash ^2}{2\mu}\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) u_E+V(x,y)u_E=Eu_E
\end{equation}
In addition, $u_E(x,y)=0$ at $x^2+y^2=a^2$, and the required asymptotic behaviour at infinity must be imposed on $u_E$ for an acceptable physical solution. From the equation $(x/a)^2+(y/a)^2=1$, it is easy to see that the scale transformation $(x',y')=(x/a,y/a)$ reduces the problem to scattering from a cylinder of unit radius. It is also a canonical transformation in the underlying classical dynamics because the Poisson brackets remain invariant. Thus, the energy (eigenvalue) is scaled according to $E'=a^2 E$. Note that all of this might be understood as a consequence of the Riemann mapping theorem in the complex plane. Therefore, knowing the solution of equation (\ref{eq:sch}) outside a unit circle provides the solution for a circle of arbitrary radius $a$. However, the coordinates $(x',y')$ are no longer dimensional quantities, but dimensionless real numbers in the complex plane.
The polar coordinate system $(r' \geqslant 0 ,\,\, 0 \leqslant \theta < 2\pi)$ is clearly the most appropriate one for the present problem: $x'=r' \cos \theta$, $y'=r' \sin \theta$. The equation (\ref{eq:sch}) and the boundary condition become
\begin{align}\label{eq:schpol}
\frac{\partial^2u }{\partial r'^2}+ \frac{1}{r'} \frac{ \partial u }{ \partial r' }+ \frac{1}{r'^2}\frac{\partial^2u}{\partial \theta^2}+k'^2u=0, \,\,\mathrm{in}\,\, D:r' > 1
\\ \label{eq:bc}
u(r'=1,\theta)=0 \,\, \mathrm{at\,\, boundary}\,\, \partial D
\end{align}
in polar coordinates, where $k'^2=2\mu Ea^2/ \hslash ^2=a^2k^2 $. Applying the standard method of separation of variables, the solution may be written in the form $u(r',\theta)=f_m(r')e^{im\theta}$, where $m=0,\pm 1, \pm 2, \ldots$ and $f_m(r)$ is the solution of the radial differential equation
\begin{equation}
z^2\frac{ d^2f_m }{d z^2}+ z \frac{ d f_m }{ d z }+(z^2- m^2)f_m=0,
\end{equation}
where the radial coordinate is scaled by $z=k'r'$ to obtain the usual form of the Bessel equation. It has a regular singularity at $z=0$ and an irregular singularity at $z=\infty$. From the two independent solutions $J_m(kr)$ and
$N_m(kr)$ (hereafter the prime $'$ is dropped from the scaled variables for simplicity), known as Bessel functions of the first and second kind, respectively, the series representation of the solution of Eq.(\ref{eq:schpol}) with the boundary condition (\ref{eq:bc}) can be written by the superposition principle:
\begin{equation} \label{eq:sol}
u(r,\theta)=\sum_ {m=-\infty} ^ \infty c_m e^{im \theta} [J_m(k)N_m(kr)-N_m(k)J_m(kr) ],
\end{equation}
where the $c_m$ are expansion coefficients to be determined from the asymptotic behaviour of the wave function $u(r,\theta)$ as $r\rightarrow \infty$. In a two-dimensional scattering problem, the asymptotic behaviour has the following form for a short-range potential:
\begin{equation} \label{eq:asym}
u(r \rightarrow \infty, \theta)=C \left( e^{ikr \cos \theta}+f_k( \theta) \frac{e^{ikr}}{ \sqrt{r} } \right)
\end{equation}
The first term represents the plane wave incident from the left on the impenetrable circle, and $C$ is just a normalization factor. $f_k(\theta)$ is called the scattering amplitude, and it depends only on $\theta$. In order to complete the solution, the coefficients $c_m$ should be determined from Eq.(\ref{eq:sol}) and Eq.(\ref{eq:asym}). First, using the well-known asymptotic behaviour of the Bessel functions at large distances,
\begin{align}\label{eq:first}
J_m(kr)\sim \sqrt{ \frac{2}{ \pi kr } } \cos \left(kr-(m+\frac{1}{2}) \frac{\pi}{2} \right)
\\ \label{eq:second}
N_m(kr)\sim \sqrt{ \frac{2}{ \pi kr } } \sin \left(kr-(m+\frac{1}{2}) \frac{\pi}{2} \right)
\end{align}
$u(r,\theta)$ can be rewritten from Eq.(\ref{eq:sol}) in the same asymptotic region,
\begin{align}\label{eq:form1}
u(r,\theta)\sim\frac{e^{-ikr}}{\sqrt{2\pi kr}}\sum_ {m=-\infty} ^ \infty c_m e^{im \theta}iH_m^{(1)}(k) e^{i\varphi_m}
+\frac{e^{ikr}}{\sqrt{2\pi kr}}\sum_ {m=-\infty} ^ \infty c_m e^{im \theta}(-i)H_m^{(2)}(k) e^{-i\varphi_m} ,
\end{align}
where the phase $\varphi_m=(m+\frac{1}{2}) \frac{\pi}{2}$ is defined for convenience, and $H_m^{(1)}(k)=J_m(k)+iN_m(k)$ and $H_m^{(2)}(k)=J_m(k)-iN_m(k)$ are the Hankel functions. On the other hand, the same approximation can be obtained in another form from Eq.(\ref{eq:asym}) as follows
\begin{align}\label{eq:form2}
u(r,\theta)\sim C \frac{e^{-ikr}}{\sqrt{2\pi kr}}\sum_ {m=-\infty} ^ \infty i^m e^{i(m \theta+\varphi_m)}
+C\frac{e^{ikr}}{\sqrt{2\pi kr}} \left[ \sqrt{2\pi k} f_k( \theta)+ \sum_ {m=-\infty} ^ \infty i^m e^{i(m \theta-\varphi_m)} \right]\,.
\end{align}
The plane wave has been represented by the Jacobi-Anger series, $e^{ikr \cos \theta}=\sum_ {-\infty} ^ \infty i^m e^{im \theta}J_m(kr) $, to derive the asymptotic form of the wave function (\ref{eq:form2}) above. Comparing the two series (Eqs. (\ref{eq:form1}) and (\ref{eq:form2})) at an arbitrary radial distance $r$, the coefficients of the functions $e^{\pm ikr}$ must be equal since these are linearly independent functions. Consequently, the expansion coefficients are $c_m=Ci^m/\bigl(iH_m^{(1)}(k)\bigr)$. Substituting the obtained $c_m$ into Eq. (\ref{eq:sol}) completes the solution of the scattering of a particle from the hard cylinder. Finally, it is easy to find the scattering amplitude $f_k(\theta)$ by writing the wave function $u(r,\theta)$ with these $c_m$ as follows
\begin{equation} \label{eq:finalsol}
u(r, \theta)=C \left( e^{ikr \cos \theta}-\sum_ {m=-\infty} ^ \infty i^m \frac{J_m(k)}{H_m^{(1)}(k)} e^{im \theta}H_m^{(1)}(kr) \right)
\end{equation}
Note that the boundary condition at $r=1$ is clearly satisfied. The required asymptotic behaviour of $u(r,\theta)$ is also guaranteed by the Hankel function because its form at large distances is given by
\begin{equation}\label{eq:han1}
H_m^{(1)}(kr)\sim \sqrt{\frac{2}{\pi kr}}e^{i(kr-\varphi_m)}
\end{equation}
Thus, the scattering amplitude defined in Eq. (\ref{eq:asym}) can be found by means of Eqs. (\ref{eq:finalsol}) and (\ref{eq:han1}):
\begin{equation}\label{eq:sampli}
f_k(\theta)= -\sqrt{\frac{2}{\pi k}}\sum_ {m=-\infty} ^ \infty i^m \frac{J_m(k)}{H_m^{(1)}(k)} e^{i(m \theta-\varphi_m)}
\end{equation}
Integration of the differential cross section $d\sigma/d\theta=|f_k(\theta)|^2$ over $\theta$ gives the total cross section. It can be expressed in terms of a real phase parameter defined by $\sin \delta_m = |J_m(k)|/\sqrt{(J_m(k))^2+(N_m(k))^2}$. Since the Bessel functions are real valued on the real axis, the total cross section becomes
\begin{equation} \label{eq:htcs}
\sigma=\frac{4}{k}\sum_{m=-\infty}^\infty \sin^2 \delta_m
\end{equation}
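As a quick numerical illustration (ours, not part of the derivation), the partial-wave sum in Eq. (\ref{eq:htcs}) can be evaluated with standard Bessel-function routines. The following minimal Python sketch assumes SciPy's \texttt{jv} and \texttt{yv} for $J_m$ and $N_m$, works in the scaled units ($a=1$, so $k$ stands for $ka$), and truncates the sum at a user-chosen $|m|\le m_{\rm max}$.
\begin{verbatim}
# Total cross section of the hard cylinder, Eq. (htcs):
# sigma = (4/k) * sum_m sin^2(delta_m),
# with sin^2(delta_m) = J_m(k)^2 / (J_m(k)^2 + N_m(k)^2).
import numpy as np
from scipy.special import jv, yv

def hard_cylinder_sigma(k, m_max=60):
    m = np.arange(-m_max, m_max + 1)
    J, N = jv(m, k), yv(m, k)
    return 4.0 / k * np.sum(J**2 / (J**2 + N**2))

for k in (0.5, 2.0, 10.0):
    print(k, hard_cylinder_sigma(k))
\end{verbatim}
The terms decay rapidly once $|m|$ exceeds $k$, so a modest truncation suffices for the values of $k$ shown.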
In the next section an electromagnetic field source (a solenoid) will be added to the hard cylinder scattering. An infinitely long solenoid carrying a steady electric current inside the cylinder creates a magnetic field only inside the cylinder.
\subsection{Aharonov-Bohm potential}\label{sec:abpot}
Having an impenetrable cylinder, we may place a field source inside it. In this work we consider a current density on the surface of the cylinder, as in a solenoid. According to Maxwell's equations, a time-independent (steady) current density creates a time-independent magnetic field around the source through the Biot-Savart law. An infinitely long ideal solenoid creates a constant magnetic induction field inside the cylinder along the $z-$axis, ${\bf B}=(0,0,B)$ with $B$ a constant, and ${\bf B}=(0,0,0)$ outside. In general, electromagnetic fields satisfying Maxwell's equations can be expressed through scalar and vector potentials $(\phi,{\bf A})$:
\begin{equation}\label{eq:fields}
{\bf E}= -\nabla \phi -\frac{1}{c}\frac{\partial{\bf A}}{\partial t},\,\,\,\,\,\,\,\,\,\, {\bf B}=\nabla\times {\bf A}
\end{equation}
These potential functions are not unique, since another set of potentials giving the same fields ${\bf E}$, ${\bf B}$ can be obtained by a gauge transformation:
\begin{equation}\label{eq:gauge}
\tilde{\phi}= \phi -\frac{1}{c}\frac{\partial\chi}{\partial t},\,\,\,\,\,\,\,\,\,\, \tilde{{\bf A}} ={\bf A}+\nabla\chi,
\end{equation}
where $\chi(t,{\bf x})$ is an arbitrary real-valued function of space and time.
The Aharonov-Bohm potential is the vector potential of a solenoid carrying a steady electric current $I$; it is therefore a time-independent vector-valued function ${\bf A}({\bf x})$:
\begin{align}\label{eq:abpot}
{\bf A}(x,y)&=\frac{B}{2}(-y,x,0)\,,& \,\,\mathrm{for}\,\,r \leqslant a \nonumber \\
&=\frac{B}{2} \frac{a^2}{x^2+y^2}(-y,x,0)\,,& \,\,\mathrm{for}\,\,r \geqslant a
\end{align}
From Eq. (\ref{eq:fields}), the fields ${\bf E}$, ${\bf B}$ are decoupled since there is no contribution to the electric field ${\bf E}$ from the vector potential, $\partial{\bf A}({\bf x})/\partial t=0$. A line integral of the vector potential along a closed curve around the solenoid equals the magnetic flux due to the constant magnetic induction inside the solenoid. This integral $$\oint_\Gamma{\bf A} \cdot d{\bf x}=B\pi a^2=\Phi_B$$ is independent of the shape of the simple loop $\Gamma$ around the solenoid so it is known as a topological property of the doubly connected space.
Moreover, the scalar potential $\phi$ can be taken to be the hard cylinder potential, which prevents the charged particle from entering the solenoid. The scalar potential energy discussed in section \ref{sec:hard} can be written as $V(x,y)=q\phi(x,y)$, where $q$ is the electric charge of the particle (e.g. $q=-e$ for an electron). Note that $V$ is infinite inside the solenoid, and so is $\phi$; this can be regarded as the limit $V_0\rightarrow \infty$ of the potential
\begin{align}\label{eq:spot}
V(x,y)&=q\phi=V_0\,,& \,\,\mathrm{for}\,\,r \leqslant a \nonumber \\
&=0\,,& \,\,\mathrm{for}\,\,r > a.
\end{align}
Although this is a fictitious potential, since the electric field vanishes everywhere except on the surface of the cylinder, it may be considered an idealized version of a more realistic experimental set-up.
In addition, it is not hard to see that the vector potential outside the solenoid may be obtained by a time-independent gauge transformation from ${\bf A}=0$ (natural gauge). Because the curl of ${\bf A}(x,y)$ vanishes for $r\geqslant a$, $ \nabla \times {\bf A}=0$, we can write $ {\bf A}=\nabla \chi(x,y)$ in the region where we want to solve the Schr\"{o}dinger wave equation. Here $\chi(x,y)= (Ba^2/2)\tan^{-1}(y/x)$ is the phase function of the gauge transformation. It is important to note that the phase $\chi$ is not a single-valued function, so there is a branch cut on the negative $x-$axis. However, no such gauge transformation exists for the vector potential inside the solenoid, which lies outside our domain.
\subsection{Scattering from impenetrable long solenoid}\label{sec:abscat}
In the Lagrangian formalism, the interaction potential $V$ between a charged particle and electromagnetic fields is a velocity dependent potential, namely $V=q\phi-\frac{q}{c}{\bf A} \cdot {\bf v}$ (see section \ref{sec:intro}). Here, ${\bf v}=d{\bf x}/dt$ is the velocity vector of the particle. Hence the Lagrangian $L=\mu{\bf v}^2/2-V$ is used to define the canonical momentum ${\bf p}$ conjugate to the position vector ${\bf x}$, such that the fundamental Poisson bracket relations are satisfied, $\lbrace x_i,p_j\rbrace=\delta_{ij}$. In the presence of the vector potential, the canonical momentum ${\bf p}$ and the kinematical momentum ${\bf \Pi}=\mu{\bf v}$ are not equal; rather, they are related by
\begin{equation}
{\bf p}=\frac{\partial L}{\partial {\bf v}}={\bf \Pi}+\frac{q}{c}{\bf A}.
\end{equation}
Having the canonical pair $(x_j,p_j)$ for the above Lagrangian, we obtain the Hamiltonian for the particle in electromagnetic fields: $H={\bf \Pi}^2/2\mu+q\phi$.
Physically, the scalar potential contributes to the particle energy through its electric charge, while the vector potential contributes through its mechanical (kinematical) momentum. The canonical momentum ${\bf p}$ has a direct physical meaning only in the absence of the vector potential; its combination with the vector potential, ${\bf \Pi}={\bf p}-\frac{q}{c}{\bf A}$, determines the actual trajectory of the particle classically. Under the gauge transformation, Eq.(\ref{eq:gauge}), the dynamics of the particle does not change, since the fields ${\bf E}$, ${\bf B}$ remain the same and the position ${\bf x}$ and ${\bf \Pi}=\mu {\bf v}$ are determined by them. Note that the canonical momentum ${\bf p}$ is obviously not a gauge invariant quantity. The classical dynamics may equally well be calculated from the well-known Hamilton-Jacobi (HJ) partial differential equation satisfied by the Hamilton principal function $S(t,{\bf x},{\bf P})$. Here $S$ is a generating function of a special canonical transformation and the constant vector ${\bf P}$ is the new canonical momentum after integration of the HJ equation. This is the interpretation closest to the quantum mechanical description of the dynamics. Gauge invariance is guaranteed in this interpretation as long as the transformed Hamilton principal function $\tilde{S}$ is given by $\tilde{S}=S+\frac{q}{c}\chi$ under the gauge transformation defined in Eq.(\ref{eq:gauge}).
In the light of the explanation above, the Hamiltonian of a particle with charge $q$ and mass $\mu$ outside the solenoid becomes $H={\bf \Pi}^2/2\mu$, since the scalar potential vanishes in the domain of the problem. Physically this is a free-particle Hamiltonian, equivalent to the one in section \ref{sec:hard}, where the mechanical momentum consists of the canonical momentum only. Quantization, on the other hand, must be performed according to the usual canonical prescription: ${\bf p}$ in the classical Hamiltonian is replaced by the operator $-i\hslash\nabla$ in the position representation. Therefore the time-independent wave equation (Schr\"{o}dinger equation) takes the form $\frac{1}{2\mu}\left(\frac{\hslash}{i}\nabla -\frac{q}{c}{\bf A}\right) ^2u_E=Eu_E$
outside the solenoid. Substituting the vector potential from Eq. (\ref{eq:abpot}) explicitly in this equation gives
\begin{equation} \label{eq:absch}
-\frac{\hslash ^2}{2\mu}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}+\frac{2i\alpha\left(x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}\right)- \alpha^2}{x^2+y^2}\right) u_E=Eu_E
\end{equation}
with the boundary condition on the surface of the solenoid, $u_E(x,y)=0$ at $x^2+y^2=a^2$. Here $\alpha=\Phi_B/\Phi_0$ is the dimensionless magnetic flux in units of $\Phi_0=-2\pi\hslash c/q$. As in section \ref{sec:hard}, we can make the scale transformation $(x',y')=(x/a,y/a)$ to map the boundary onto the unit circle centred at the origin. Thus Eq. (\ref{eq:absch}) becomes
\begin{align}\label{eq:abschpol}
\frac{\partial^2u }{\partial r'^2}+ \frac{1}{r'} \frac{ \partial u }{ \partial r' }+ \frac{1}{r'^2}\left( \frac{\partial }{\partial \theta}+i\alpha\right)^2 u+k'^2u=0
\end{align}
in polar coordinates. Here $k'=ak$ as before, and the eigenfunction $u(r',\theta)$ must vanish on the boundary (at $r'=1$) since the particle hits an infinite potential barrier. The magnetic flux $\alpha$ appears as a contribution to the physical angular momentum $L_z$ about the $z-$axis in addition to the canonical part; $L_z=-i\hslash \frac{\partial}{\partial\theta}-\frac{q\Phi_B}{2\pi c}$. Consequently, the angular momentum has eigenvalues $\hslash(m+\alpha)$ with eigenfunctions $e^{im\theta}$, $m=0,\pm 1,\pm 2,\cdots$, so that the spectrum is shifted by the continuous amount $\hslash\alpha$ away from integer multiples of $\hslash$. After separating the angular part of the eigenfunction $u(r',\theta)$, we obtain the radial equation from (\ref{eq:abschpol}) as
\begin{equation}\label{eq:abrad}
r^2\frac{ d^2f_m }{d r^2}+ r \frac{ d f_m }{ d r }+(r^2k^2- (m+\alpha)^2)f_m=0\,,
\end{equation}
where we dropped $'$ from $k'$ and $r'$ for convenience.
Note that the magnetic flux contributes only through the index of the Bessel functions. The radial eigenfunctions $f_m(r)$ are proportional to Bessel functions $J_{m+\alpha}(kr)$ and $N_{m+\alpha}(kr)$ with non-integer index $m+\alpha$.
On the other hand, the wave function for the AB potential can also be obtained from the hard cylinder solution of section \ref{sec:hard} by a unitary rotation in Hilbert space, because the AB vector potential in the field-free region is obtained by a gauge transformation from the trivial gauge ($\phi=0$, ${\bf A}=0$) of the hard cylinder problem (see section \ref{sec:abpot}): $ \tilde{\phi}=-\frac{1}{c}\frac{\partial \chi}{\partial t} $, $\tilde{{\bf A}}=\nabla\chi$. Under this transformation the wave function of the trivial gauge must acquire a phase in order to leave the wave equation gauge invariant,
\begin{equation}\label{eq:abwf}
\tilde{u}(r,\theta)=e^{i\frac{q}{\hslash c}\chi(r,\theta)}u(r,\theta)
\end{equation}
It is easy to find the gauge function $\chi$ from Eq.(\ref{eq:abpot}).
It is time independent since the vector potential outside the solenoid is time independent; therefore the transformation produces no change in the scalar potential, which is zero outside the solenoid to begin with,
\begin{equation}
\chi(x,y)=\frac{\Phi_B}{2\pi}\arctan(\frac{y}{x})
\end{equation}
In polar coordinates it depends only on $\theta$, and it is independent of the scale transformation of the coordinates as long as the flux $\Phi_B$ is kept constant. Hence the wave function of the hard cylinder scattering acquires an extra phase through Eq.(\ref{eq:abwf}), $\tilde{u}=e^{-i\alpha \theta} u$. Since the hard cylinder solution $u(r,\theta)$ is constructed as a single-valued function, the transformed one $\tilde{u}(r, \theta)$ is not single valued when $\alpha$ is not an integer: the origin is a branch point and the negative $x-$axis is a branch cut. We restrict the function to a single branch, $-\pi<\theta<\pi$, in order to make it single valued. However, this is the solution in the presence of the vector potential in addition to the hard cylinder barrier. This can be checked in the incoming plane-wave region, where the solution must behave like $e^{i(kr \cos \theta-\alpha\theta)}$ according to the gauge transformation, so that the probability current density is ${\bf J}=(\hslash k/\mu){\bf \hat{x}}$, just as for the hard cylinder in that asymptotic region. Here ${\bf \hat{x}}$ is the unit vector in the $x$ direction. Indeed, this follows from the current density in the presence of the vector potential:
\begin{equation}\label{eq:abcd}
{\bf J}=\frac{\hslash}{2\mu i}(u^*\nabla u-u\nabla u^*)+\frac{\hslash\alpha}{\mu r} |u|^2 {\bf \hat{\theta}}=\frac{\hslash k}{\mu}{\bf \hat{x}}
\end{equation}
Thus, the solution to the Eq.(\ref{eq:abschpol}) satisfying the boundary condition at $r=1$,
\begin{widetext}
\begin{equation}\label{eq:absol}
u(r,\theta)=\sum_ {m=-\infty} ^ \infty c_m e^{im \theta} [J_{m+\alpha}(k)N_{m+\alpha}(kr)-N_{m+\alpha}(k)J_{m+\alpha}(kr)]
\end{equation}
\end{widetext}
must show the same feature in the asymptotic region as well. Indeed, it
represents a holomorphic function of $kr$ (with $kr$ regarded as a complex variable) throughout the complex plane cut along the negative real axis, because the index $m+\alpha$ of the Bessel functions is not an integer \cite{AS64}. The cut, however, is naturally excluded from the domain because $kr>0$ for a particle with positive energy $E>0$. The expansion coefficients $c_m$ are determined from the asymptotic behaviour of the solution as $r\to \infty$. Because of the current-density requirement of Eq. (\ref{eq:abcd}),
the asymptotic behaviour of the solution at infinity should be given by
\begin{equation} \label{eq:abasym}
u(r \rightarrow \infty, \theta)=C \left( e^{i(kr \cos \theta-\alpha\theta)}+f_k(\alpha,\theta) \frac{e^{ikr}}{ \sqrt{r} } \right)\, .
\end{equation}
A series representation of this can be obtained by using the Fourier series of the phase factor:
\begin{equation}\label{eq:fourier}
e^{-i\alpha\theta}=\sum_ {n=-\infty} ^ \infty \frac{\sin(\alpha+n)\pi }{(\alpha+n)\pi}e^{in\theta},\,\,\,\,\,-\pi<\theta<\pi\, .
\end{equation}
This enables us to use the Fourier expansion of the function $ e^{i(kr \cos \theta-n\theta)}=\sum_{m=-\infty}^\infty a_m e^{i m\theta} $, where the expansion coefficients are given by the well-known integral \cite{AS64}
\begin{eqnarray}
a_m = \frac{1}{2\pi}\int_{-\pi}^\pi e^{i(kr \cos \theta-(m+n)\theta)}\,d\theta = i^{m+n}J_{m+n}(kr)\,.
\end{eqnarray}
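The integral above can be verified directly by numerical quadrature. The short sketch below (ours; it assumes SciPy) compares the quadrature value of $a_m$ with $i^{m+n}J_{m+n}(kr)$ for a few sample indices.
\begin{verbatim}
# Check a_m = (1/2pi) Int_{-pi}^{pi} exp(i(kr*cos(t) - (m+n)t)) dt
#           = i^(m+n) J_{m+n}(kr)
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def a_m(kr, m, n):
    re, _ = quad(lambda t: np.cos(kr*np.cos(t) - (m+n)*t), -np.pi, np.pi)
    im, _ = quad(lambda t: np.sin(kr*np.cos(t) - (m+n)*t), -np.pi, np.pi)
    return (re + 1j*im) / (2*np.pi)

kr = 3.7
for m, n in [(0, 0), (2, 1), (-3, 2)]:
    print(m, n, abs(a_m(kr, m, n) - 1j**(m+n) * jv(m+n, kr)))
\end{verbatim}
The printed differences are at the level of the quadrature tolerance.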
Therefore the series representation of the plane wave with the additional phase is
\begin{equation}
e^{i(kr \cos \theta-\alpha\theta)}=\sum_ {m,n=-\infty} ^ \infty i^{m+n} e^{im\theta}\frac{\sin(\alpha-n)\pi }{(\alpha-n)\pi}J_{m+n}(kr)
\end{equation}
instead of the Jacobi-Anger expression of the hard cylinder case (see section \ref{sec:hard}). It is immediately clear that this reduces to the Jacobi-Anger formula when $\alpha\to 0$, since only the $n=0$ term survives in that limit. Hence it is not difficult to express Eq.(\ref{eq:abasym}) as a linear combination of incoming and outgoing circular waves:
\begin{widetext}
\begin{align}\label{eq:abform1}
u(r,\theta)\sim & C \frac{e^{-ikr}}{\sqrt{2\pi kr}}\sum_ {m=-\infty} ^ \infty \sum_ {n=-\infty} ^ \infty i^{m+n} e^{i(m \theta+\varphi_{m+n})}\frac{\sin(\alpha-n)\pi }{(\alpha-n)\pi} \nonumber \\
+&C\frac{e^{ikr}}{\sqrt{2\pi kr}} \left[ \sqrt{2\pi k} f_k(\alpha , \theta)+ \sum_ {m=-\infty} ^ \infty \sum_ {n=-\infty} ^ \infty i^{m+n} e^{i(m \theta-\varphi_{m+n})}\frac{\sin(\alpha-n)\pi }{(\alpha-n)\pi} \right] \, .
\end{align}
\end{widetext}
Eq.(\ref{eq:absol}) gives another form of the asymptotic expansion of the eigenfunction $u(r,\theta)$ as $r\to \infty$, by means of the asymptotic formulae (\ref{eq:first}), (\ref{eq:second}) of the Bessel functions. Thus the second form of the asymptotic expansion of the eigenfunction is
\begin{widetext}
\begin{align}\label{eq:abform2}
u(r,\theta)\sim\frac{e^{-ikr}}{\sqrt{2\pi kr}}\sum_ {m=-\infty} ^ \infty c_m e^{im \theta}iH_{m+\alpha}^{(1)}(k) e^{i\varphi_{m+\alpha}}
+\frac{e^{ikr}}{\sqrt{2\pi kr}}\sum_ {m=-\infty} ^ \infty c_m e^{im \theta}(-i)H_{m+\alpha}^{(2)}(k) e^{-i\varphi_{m+\alpha}} \, .
\end{align}
\end{widetext}
at large distances from the solenoid. Here the Hankel functions have the non-integer index $m+\alpha$ and the phase $ \varphi_{m+n} $ is defined as in section \ref{sec:hard}. Consequently, the unknown expansion coefficients $c_m$ can be extracted by comparing the two forms of the asymptotic expansion of the eigenfunction far away from the solenoid. This is permitted by the uniqueness of the solution of the eigenvalue equation (\ref{eq:abschpol}). Comparing the coefficients of $e^{-ikr}/\sqrt{r}$ in equations (\ref{eq:abform1}) and (\ref{eq:abform2}) gives
\begin{align}\label{eq:abcm}
c_m=C\frac{i^m}{iH_{m+\alpha}^{(1)}(k)}\sum_ {n=-\infty} ^ \infty i^{n} e^{-i(\alpha-n)\frac{\pi}{2}}\frac{\sin(\alpha-n)\pi }{(\alpha-n)\pi}
=C\frac{i^m}{iH_{m+\alpha}^{(1)}(k)}e^{-i\alpha(\varepsilon-\frac{\pi}{2})}\,,
\end{align}
where $\varepsilon$ is a small positive angle that keeps the evaluation point off the branch cut and the limit $ \varepsilon \to 0$ is assumed in the second equality; Eq.(\ref{eq:fourier}) is needed to obtain the expression (\ref{eq:abcm}).
Having the $c_m$, equating the coefficients of $e^{ikr}/\sqrt{r}$
in Eqs. (\ref{eq:abform1}) and (\ref{eq:abform2}) identifies the scattering amplitude $f_k(\alpha,\theta)$ in the presence of the solenoid inside the hard cylinder:
\begin{widetext}
\begin{align}\label{eq:abscatam}
f_k(\alpha,\theta)=&-\frac{1}{\sqrt{2\pi k}}\sum_ {m=-\infty} ^ \infty \sum_ {n=-\infty} ^ \infty e^{i(m \theta-\varphi_{\alpha-n})}\frac{\sin(\alpha-n)\pi }{(\alpha-n)\pi}
\frac{H_{m+\alpha}^{(2)}(k)e^{-i(\alpha-n)\frac{\pi}{2}}+H_{m+\alpha}^{(1)}(k)e^{i(\alpha-n)\frac{\pi}{2}}}{H_{m+\alpha}^{(1)}(k)}\nonumber \\
=&-\frac{e^{-i\alpha\frac{ \varepsilon }{2}} }{\sqrt{2i\pi k}}\sum_ {m=-\infty} ^ \infty e^{im \theta}\frac{e^{i\alpha\frac{ \varepsilon }{2}}H_{m+\alpha}^{(1)}(k)+e^{-i\alpha\frac{ \varepsilon }{2}}H_{m+\alpha}^{(2)}(k)}{H_{m+\alpha}^{(1)}(k)}\,,
\end{align}
\end{widetext}
where it should be understood that the limit $ \varepsilon \to 0$ must be taken in the final expression for the differential cross section, $d\sigma/d\theta=|f_k(\alpha,\theta)|^2$. This amplitude should reduce to the scattering amplitude of the hard cylinder without magnetic flux in the limit $\alpha\to 0$. It can be inferred from the first line of Eq.(\ref{eq:abscatam}) that the only contribution to the double series as $\alpha\to 0$ comes from $n=0$, since $\sin n\pi$ vanishes for any non-zero integer $n$. Consequently, we first set $n=0$ in the double sum; then $\sin \alpha\pi/\alpha\pi\to 1$ in the limit $\alpha\to 0$ gives the formula (\ref{eq:sampli}). On the other hand, in reference \cite{AB59} the scattering amplitude is derived for a solenoid with zero radius, $a=0$. Our solution (\ref{eq:abscatam}) holds
for any finite radius of the solenoid, because the coordinates have been transformed to give a solenoid of unit radius by means of a simple scale transformation. Thus we can take the limit $a\to 0$ in Eq.(\ref{eq:abscatam}) only after undoing the scale transformation.
In conclusion, the angular momentum sum for the total cross section becomes
\begin{equation}\label{eq:abtcs}
\sigma=\frac{4}{k}\sum_{m=-\infty}^\infty \frac{|J_{m+\alpha}(k)|^2}{(J_{m+\alpha}(k))^2+(N_{m+\alpha}(k))^2}\,,
\end{equation}
which reduces to Eq.(\ref{eq:htcs}) of the hard cylinder scattering when the flux vanishes ($\alpha\to 0$). If $\alpha$ is written as $\alpha=[\alpha]+\nu$, where $[\alpha]$ is its integer part, only the fractional part $\nu$ distinguishes (\ref{eq:abtcs}) from the total cross section of the hard cylinder, (\ref{eq:htcs}).
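This statement can be checked term by term. The sketch below (ours; assumes SciPy) verifies numerically that the summand of (\ref{eq:abtcs}) for flux $\alpha+1$ equals the summand for flux $\alpha$ with the index $m$ shifted by one, and that each term goes over to the corresponding hard-cylinder term of (\ref{eq:htcs}) as $\alpha\to 0$.
\begin{verbatim}
# Term-by-term checks of Eq. (abtcs): the summand depends on alpha
# only through the combination m + alpha.
import numpy as np
from scipy.special import jv, yv

def term(m, alpha, k):
    nu = m + alpha
    J, N = jv(nu, k), yv(nu, k)
    return J**2 / (J**2 + N**2)

k, alpha = 2.0, 0.3
ms = np.arange(-10, 10)
# flux alpha+1 at index m equals flux alpha at index m+1
print(np.max(np.abs(term(ms, alpha + 1.0, k) - term(ms + 1, alpha, k))))
# each term reduces to the hard-cylinder term as alpha -> 0
print(np.max(np.abs(term(ms, 1e-8, k) - term(ms, 0.0, k))))
\end{verbatim}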
\subsection{Aharonov-Bohm solution as a limit case}\label{sec:limit}
In this section we perform a second test: whether the present method reproduces the AB solution \cite{AB59} when the radius vanishes. As mentioned in the previous section, we need the solenoid radius $a$ as a parameter in the solutions in order to take the limit $a\to 0$. Hence the scale transformation made in section \ref{sec:abscat} is abandoned here, i.e., in any scaled solution $k'=ka$ and $k'r'=kr$ are replaced by the unscaled quantities. It is well known that the solution of the Schr\"{o}dinger equation reduces, as $a\to 0$, to \cite{AB59}
\begin{equation}\label{eq:zero}
u(r,\theta)=\sum_ {m=-\infty} ^ \infty c_m e^{im \theta} J_{| m+\alpha |}(kr)\, .
\end{equation}
Following the same steps as in sections \ref{sec:hard} and \ref{sec:abscat}, it is easy to obtain the expansion coefficients $c_m$ from the asymptotic form of the eigenfunction; see Eqs. (\ref{eq:abasym}) and (\ref{eq:abform1}). Now, however, they must be compared with the asymptotic form of the solution (\ref{eq:zero}) for the flux line, which reads
\begin{widetext}
\begin{align}\label{eq:zform2}
u(r,\theta)\sim\frac{e^{-ikr}}{\sqrt{2\pi kr}}\sum_ {m=-\infty} ^ \infty c_m e^{im \theta}e^{i\varphi_{|m+\alpha|}}
+\frac{e^{ikr}}{\sqrt{2\pi kr}}\sum_ {m=-\infty} ^ \infty c_m e^{im \theta}e^{-i\varphi_{|m+\alpha|}} \, .
\end{align}
\end{widetext}
Therefore, equating the coefficients of the incoming wave $ e^{-ikr}/\sqrt{r}$ gives the $c_m$ required to construct the eigenfunction,
\begin{eqnarray}
c_m = C i^m e^{-i\varphi_{|m+\alpha|}} \sum_ {n=-\infty} ^ \infty i^{n}\frac{\sin(\alpha-n)\pi }{(\alpha-n)\pi}e^{i\varphi_{m+n}}
= C e^{-i\alpha \varepsilon}(-1)^{m+\alpha}(-i)^{|m+\alpha|}\, ,
\end{eqnarray}
where the series (\ref{eq:fourier}) is used to obtain the second equality. Since the angle $\theta=-\pi$ lies on the branch cut, a small positive value $0< \varepsilon\ll 1$ is added, i.e. $\theta=-\pi+\varepsilon$. In the final expression it is understood that the limit $ \varepsilon \to 0$ is to be taken eventually.
At this point we may check our $c_m$ by substituting them into the solution (\ref{eq:zero}) and taking the vanishing-flux limit $\alpha\to 0$. If everything is correct, the eigenfunction in this limit must be a plane wave; that is, there is no scattering at all, since there is neither flux nor hard cylinder. Indeed, it is not hard to see that $\sum c_m e^{im \theta} J_{| m+\alpha |}(kr)\to e^{ikr\cos \theta}$ as $\alpha \to 0$.
Comparing the coefficients of the outgoing radial cylindrical waves $e^{ikr}/\sqrt{r}$ in the asymptotic forms (\ref{eq:abform1}) and (\ref{eq:zform2}) gives the scattering amplitude,
\begin{widetext}
\begin{align}\label{eq:zscatam}
f_{k, \varepsilon}(\alpha,\theta)&=\frac { e^{-i\alpha \frac{\varepsilon}{2}}}{\sqrt{2 i\pi k}}\sum_ {m=-\infty} ^ \infty e^{im \theta}[e^{-i\alpha \frac{\varepsilon}{2}} e^{i\pi(m+\alpha-|m+\alpha |)}-e^{i\alpha \frac{\varepsilon}{2}}]\nonumber\\
&=-\frac { e^{-i\alpha \frac{\varepsilon}{2}}}{\sqrt{2 i\pi k}}\sum_ {m > -\alpha} ^ \infty e^{im \theta}2i\sin(\alpha \varepsilon /2)+\frac { e^{-i\alpha \frac{\varepsilon}{2}}}{\sqrt{2 i\pi k}}\sum_ {-\infty} ^ {m<-\alpha} e^{im \theta} [e^{-i\alpha \frac{\varepsilon}{2}} e^{i2\pi(m+\alpha)}-e^{i\alpha \frac{\varepsilon}{2}}]
\end{align}
\end{widetext}
Here we have used the same $ \varepsilon$ trick to obtain a single summation by means of the expression (\ref{eq:fourier}). After some manipulations, $f_{k, \varepsilon}(\alpha, \theta)$ becomes
\begin{align}\label{eq:zscatam1}
f_{k, \varepsilon} (\alpha,\theta)=\frac { e^{-i\alpha \frac{\varepsilon}{2}}}{\sqrt{2 i\pi k}}\frac{\sin(\alpha \varepsilon /2)e^{-i([\alpha]+1/2) \theta}}{\sin(\theta /2)}
+\frac { e^{i\alpha (\pi-\frac{\varepsilon}{2})}}{\sqrt{2 i\pi k}}\frac{\sin(\alpha (\pi-\varepsilon /2))e^{-i([\alpha]+1/2) \theta}}{\sin(\theta /2)}
\end{align}
where $[\alpha]$ is the integer part of the magnetic flux $\alpha$. No contribution comes from $m+\alpha > 0$ as $ \varepsilon \to 0$. Therefore, the final expression for the scattering amplitude of a solenoid with zero radius becomes
\begin{equation}\label{eq:zscatam2}
f_{k} (\alpha,\theta)=\frac { \sin\pi\alpha}{\sqrt{2 i\pi k} }\frac{ e^{i\alpha \pi}e^{-i([\alpha]+1/2) \theta}}{\sin(\theta /2)}
\end{equation}
and the differential cross section is
\begin{equation}
\frac{d\sigma}{d \theta }=|f_{k} (\alpha,\theta)|^2=\frac { \sin^2\pi\alpha}{2 \pi k} \frac{ 1}{\sin^2(\theta /2)}\,,
\end{equation}
which is the original AB result, apart from a phase factor \cite{AB59}.
Now it is possible to test whether the scattering amplitude given in Eq.(\ref{eq:abscatam}) reduces to the amplitude in (\ref{eq:zscatam2}) as the radius of the cylinder goes to zero, $a\to 0$. After taking the limit $ \varepsilon \to 0$ and returning to the unscaled form of the amplitude, $f_k=\sqrt{a}\tilde{f}_{\tilde{k}}$, the summation can be divided into two parts by the conditions $m +\alpha > 0$ and $m +\alpha < 0$:
\begin{eqnarray}\label{eq:lscat}
-\sqrt{2\pi i k}f_k(\alpha,\theta)=\sum_ {m>-\alpha} ^ \infty e^{im \theta}\frac{2J_{m+\alpha}(ka)}{H_{m+\alpha}^{(1)}(ka)}
+\sum^{m<-\alpha}_{-\infty} e^{im \theta}\frac{2J_{m+\alpha}(ka)}{H_{m+\alpha}^{(1)}(ka)}
\end{eqnarray}
Using the limiting form of the Bessel and Hankel functions when $z\to 0$ and $\beta$ is fixed \cite{AS64},
\begin{eqnarray}
J_\beta(z)\sim \frac{(z/2)^\beta}{\Gamma(\beta+1)},\,\,\,\,(\beta\neq-1,-2,-3,\ldots)\\
H_\beta^{(1)}(z)\sim \frac{(z/2)^{-\beta}}{i\pi}\Gamma(\beta), \,\,\,\, ({\rm Re}(\beta) > 0)\, ,
\end{eqnarray}
it is easy to see that there is no contribution to the scattering amplitude from the first summation in Eq.(\ref{eq:lscat}). The above limiting form of the Hankel function cannot be applied directly to the second summation, because $m+\alpha < 0$ there. We must first use the connection formula $H_{-\beta}^{(1)}(z)=e^{\beta\pi i}H_\beta^{(1)}(z)$ and subsequently apply the limiting forms in the second summation. This makes the summation independent of $ka$,
\begin{widetext}
\begin{align}
f_k(\alpha,\theta)=&\frac{2\pi i e^{-i([\alpha]+1)\theta}e^{i\nu\pi}}{-\sqrt{2\pi i k} }\sum_{m=0} ^\infty \frac{e^{-im \theta}e^{-i(m+1)\pi}}{\Gamma(1-(m+1-\nu))\Gamma(m+1-\nu)}=\frac { \sin\pi\nu}{\sqrt{2 \pi i k} }\frac{ e^{i\nu \pi}e^{-i([\alpha]+1/2) \theta}}{\sin(\theta /2)}\ .
\end{align}
\end{widetext}
Here we used the reflection formula $\Gamma(1-z)\Gamma(z)=\pi/\sin\pi z$ to obtain the last expression, which is identical to Eq.(\ref{eq:zscatam2}); here $\nu$ is the fractional part of the flux $\alpha$.
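The connection formula and the limiting forms used in this section can also be checked numerically; the following short sketch (ours; assumes SciPy) does so for a sample order and a small argument.
\begin{verbatim}
# Checks of H^(1)_{-beta}(z) = exp(i*beta*pi) H^(1)_beta(z) and of the
# small-z forms of J_beta and H^(1)_beta (beta fixed, Re(beta) > 0).
import numpy as np
from scipy.special import jv, hankel1, gamma

beta, z = 0.7, 1e-3
print(abs(hankel1(-beta, z) - np.exp(1j*np.pi*beta)*hankel1(beta, z)))
print(jv(beta, z), (z/2)**beta / gamma(beta + 1))
print(hankel1(beta, z), (z/2)**(-beta) * gamma(beta) / (1j*np.pi))
\end{verbatim}
The first line prints a value at rounding level, and the last two lines print pairs that agree to the accuracy of the leading-order approximations.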
\section{Conclusion and discussion} \label{sec:conclusion}
An unbounded eigenvalue equation has been solved as a scattering problem: a charged particle is scattered from an impenetrable cylinder enclosing a solenoid as an electromagnetic source. The magnetic induction field ${\bf B}$ inside the solenoid cannot by itself prevent the particle from entering the cylinder region; therefore an infinite scalar potential barrier
is assumed to exist in the circular region in addition to the magnetic vector potential.
The scattering amplitude and the corresponding total cross section for a charged particle scattering off a solenoid with finite radius are obtained as a sum over the angular momentum quantum number. Different limiting cases are checked to test the result. The scattering amplitude must reduce to the hard cylinder result when the radius of the cylinder is kept constant and the magnetic flux goes to zero. It must reproduce the AB formula when the flux is fixed and the radius goes to zero. When both the flux and the radius vanish, it must give zero amplitude, i.e., no scattering at all; the solution is then just a free particle in empty space. The result of this work fulfils all these requirements. Only negative angular momentum eigenvalues, $m+\alpha<0$, contribute to the AB scattering amplitude ($a=0$), while all eigenvalues play a role for the amplitude of a solenoid with finite radius, $a\neq 0$ (see Eqs. (\ref{eq:zscatam}) and (\ref{eq:lscat})).
The charged particle never reaches the source or its magnetic induction field inside the solenoid; thus the classical dynamics of the particle is the same with or without magnetic flux. But the flux can still create a measurable effect at the quantum level if $\alpha$ is a non-integer real number (see Eq. (\ref{eq:abtcs})). It is worth mentioning that the kinematical linear and angular momenta are gauge-invariant quantities while the canonical ones are not; however, quantization is performed with the canonical momenta and their conjugate coordinates. That is why the AB effect is a purely quantum mechanical phenomenon: the vector potential is part of the kinematical momentum, or equivalently the flux is part of the angular momentum about the symmetry ($z-$)axis.
From a field theoretical point of view, it would be important to solve the Dirac equation in the same way as in this work, to include the relativistic spin effect in AB scattering explicitly. It is also worthwhile to study time dependent scattering by considering a time varying current on the solenoid (Faraday induction and retardation effects). These are left for future work. Moreover, a less obvious feature of the problem is the following: the simple loop integral around the singularity is independent of the shape of the loop, so it is a topological property of the space (a doubly connected region in the present case). The singularity in the forbidden region might be something other than a solenoid, such as an electric charge, a magnetic monopole, or a black hole singularity; it would also be interesting to investigate those.
\begin{acknowledgments}
The author is grateful to C. Harabati and to O. Tosun for helpful discussions.
\end{acknowledgments}
\end{document} |
\begin{document}
\title{
Complete Facial Reduction in One Step for Spectrahedra
}
\author{
\href{https://uwaterloo.ca/combinatorics-and-optimization/about/people/ssremac}{Stefan
Sremac}\thanks{Department of Combinatorics and Optimization
Faculty of Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1; Research supported by The Natural Sciences and Engineering Research Council of Canada and by AFOSR.
}
\and
\href{http://people.orie.cornell.edu/dd379}
{Hugo Woerdeman}\thanks{Department of Mathematics, Drexel University,
3141 Chestnut Street, Philadelphia, PA 19104, USA. Research supported by
Simons Foundation grant 355645.
}
\and
\href{http://www.math.uwaterloo.ca/~hwolkowi/}
{Henry Wolkowicz}
\thanks{Department of Combinatorics and Optimization
Faculty of Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1; Research supported by The Natural Sciences and Engineering Research Council of Canada and by AFOSR;
\url{www.math.uwaterloo.ca/\~hwolkowi}.
}
}
\date{\today}
\fancypagestyle{plain}{
\fancyhf{}
\rfoot{Compiled on {\ddmmyyyydate\today} at \currenttime}
\lfoot{Page \thepage}
\renewcommand{\headrulewidth}{0pt}}
\maketitle
\begin{abstract}
A spectrahedron is the feasible set of a semidefinite program, \textbf{SDP},
i.e.,~the intersection of an affine set with the positive semidefinite cone.
While strict feasibility is a generic property for random problems,
there are many classes of problems where strict feasibility fails and
this means that strong duality can fail as well. If the minimal
face containing the spectrahedron is known, the \textbf{SDP}\, can easily be
transformed into an equivalent problem where strict feasibility holds
and thus strong duality follows as well.
The minimal face is fully characterized by the range or nullspace
of any of the matrices in its relative interior. Obtaining such a matrix
may require many \emph{facial reduction} steps and is currently not known to be a tractable problem
for spectrahedra with \emph{singularity degree} greater than one. We propose a \emph{single}
parametric optimization problem with a resulting type of \emph{central
path} and prove that the optimal solution
is unique and in the relative interior
of the spectrahedron. Numerical tests illustrate the efficacy of our
approach and its usefulness in regularizing \textbf{SDP}s.
\end{abstract}
{\bf Keywords:}
Semidefinite programming, SDP, facial reduction, singularity degree,
maximizing $\log \det$.
{\bf AMS subject classifications:} 90C22, 90C25
\tableofcontents
\listoftables
\listoffigures
\section{Introduction}
\label{sec:intro}
A \textdef{spectrahedron} is the intersection of an affine manifold with the
positive semidefinite cone. Specifically, if \textdef{$\Sc^n$} denotes the set
of $n\times n$ symmetric matrices, \textdef{$\Sc^n_+$}$ \subset \Sc^n$ denotes the set of positive semidefinite matrices, ${\mathcal A}:\Sc^n \rightarrow {\mathbb{R}^m\,}$ is a linear map, and $b \in {\mathbb{R}^m\,}$, then
\begin{equation}
\label{eq:feasset}
\textdef{$\mathcal{F}=\mathcal{F}({\mathcal A},b)$} := \{ X\in \Sc^n_+ : {\mathcal A}(X) = b \}
\end{equation}
is a spectrahedron. We emphasize that $\mathcal{F}$ is given to us as a
function of the algebra, the data ${\mathcal A},b$, rather than the geometry.
Our motivation for studying spectrahedra arises from
\textdef{semidefinite programs, \textbf{SDP}s}\index{\textbf{SDP}, semidefinite program}, where a linear objective is minimized
over a spectrahedron. In contrast to \textdef{linear programs}, strong duality is not an inherent property of \textbf{SDP}\,s, but depends on a \textdef{constraint qualification (CQ)}\index{CQ, constraint qualification} such as the Slater CQ. For an \textbf{SDP}\, not satisfying the Slater CQ, the central path of the standard interior point algorithms is undefined and there is no guarantee of strong duality or convergence. Although instances where the Slater CQ fails are pathological, see e.g. \cite{MR3622250} and \cite{Pataki2017}, they occur in many applications and this phenomenon has led to the development of a number of regularization methods, \cite{RaTuWo:95,Ram:95,lusz00,int:deklerk7,LuoStZh:97}.
In this paper we focus on the \textdef{facial reduction} method, \cite{bw1,bw2,bw3}, where the optimization problem is restricted to the minimal face of $\Sc^n_+$ containing $\mathcal{F}$, denoted $\face(\mathcal{F})$. We note that the different regularization methods for \textbf{SDP}\, are not fundamentally unrelated. Indeed, in \cite{RaTuWo:95} a relationship between the extended dual of Ramana, \cite{Ram:95}, and the facial reduction approach is established, and in \cite{MR3063940} the authors show that the dual expansion approach, \cite{lusz00,LuoStZh:97}, is a kind of `dual' of facial reduction. When knowledge of the minimal face is available, the optimization problem is easily transformed into one for which the Slater CQ holds. Many of the applications of facial reduction to \textbf{SDP}\, rely on obtaining the minimal face through analysis of the underlying structure. See, for instance, the recent survey \cite{DrusWolk:16} for applications to hard combinatorial optimization and matrix completion problems.
In this paper we are interested in instances of \textbf{SDP}\, where the minimal face cannot be obtained analytically. An algorithmic approach was initially presented in \cite{bw3} and subsequent analyses of this algorithm as well as improvements, applications to \textbf{SDP}\,, and new approaches may be found in \cite{MR3108446,MR3063940,ScTuWonumeric:07,perm,permfribergandersen,2016arXiv160802090P,waki_mur_sparse}. While these algorithms differ in some aspects, their main structure is the same. At each iteration a subproblem is solved to obtain an \emph{exposing vector} for a face (not necessarily minimal) containing $\mathcal{F}$. The \textbf{SDP}\, is then reduced to this smaller face and the process repeated until the \textbf{SDP}\, is reduced to $\face(\mathcal{F})$. Since at each iteration the dimension of the ambient face is reduced by at least one, at most $n-1$ iterations are necessary. We remark that this method is a kind of `dual' approach, in the sense that the exposing vector obtained in the subproblem is taken from the dual of the smallest face available at the current iteration. We highlight two challenges with this approach: (1) each subproblem is itself an \textbf{SDP}\, and thereby computationally intensive and (2) at each iteration a decision must be made regarding the rank of the exposing vector.
With regard to the first challenge, we note that it is really two-fold. The computational expense arises from the complexity of an individual subproblem and also from the number of such problems to be solved. The subproblems produced in \cite{ScTuWonumeric:07} are `nice' in the sense that strong duality holds, however, each subproblem is an \textbf{SDP}\, and its computational complexity is comparable to that of the original problem. In \cite{perm} a relaxation of the subproblem is presented that is less expensive computationally, but may require more subproblems to be solved. The number of subproblems needed to solve depends of course on the structure of the problem but also on the method used to determine that facial reduction is needed. For algorithms using the theorem of the alternative, \cite{bw1,bw2,bw3}, a theoretical lower bound, called the \emph{singularity degree}, is introduced in \cite{S98lmi}. In \cite{MR2724357} an example is constructed for which the singularity degree coincides with the upper bound of $n-1$, i.e., the worst case exists. In \cite{permfribergandersen}, the \textdef{self-dual embedding} algorithm of \cite{int:deklerk7} is used to determine whether facial reduction is needed. This approach may require fewer subproblems than the singularity degree.
The second challenge is to determine which eigenvalues of the exposing vector obtained at each iteration are identically zero, a classically challenging problem. If the rank of the exposing vector is chosen too large, the problem may be restricted to a face which is smaller than the minimal face. This error results in losing part of the original spectrahedron. If on the other hand, the rank is chosen too small, the algorithm may require more iterations than the singularity degree. The algorithm of \cite{ScTuWonumeric:07} is proved to be backwards stable only when the singularity degree is one, and the arguments can not be extended to higher singularity degree problems due to possible error in the decision regarding rank.
Our main contribution in this paper is a `primal' approach to facial reduction, which does not rely on exposing vectors, but instead obtains a matrix in the relative interior of $\mathcal{F}$, denoted $\relint(\mathcal{F})$.
Since the minimal face is characterized by the range of any such matrix, we obtain a facially reduced problem in just one step. As a result, we eliminate costly subproblems and require only one decision regarding rank.
\index{relative interior, $\relint(\cdot)$}
\index{$\relint(\cdot)$, relative interior}
While our motivation arises from \textbf{SDP}s, the problem of characterizing the relative interior of a spectrahedron is independent of this setting. The problem is formally stated below.
\begin{problem}
\label{prob:main}
Given a spectrahedron $\mathcal{F}({\mathcal A},b) \subseteq \Sc^n$, find $\bar{X}\in \relint(\mathcal{F})$.
\end{problem}
This paper is organized as follows. In Section~\ref{sec:prelim} we introduce notation and discuss relevant material on \textbf{SDP}\, strong duality and facial reduction. We develop the theory for our approach in Section~\ref{sec:paramprob}, prove convergence to the relative interior, and prove convergence to the analytic center under a sufficient condition. In Section~\ref{sec:projGN}, we propose an implementation of our approach and we present numerical results in Section~\ref{sec:numerics}. We also present a method for generating instances of \textbf{SDP}\, with varied singularity degree in Section~\ref{sec:numerics}. We conclude the main part of the paper with an application to matrix completion problems in Section~\ref{sec:psdcyclecompl}.
\section{Notation and Background}
\label{sec:prelim}
Throughout this paper the ambient space is the Euclidean space of
$n\times n$ real symmetric matrices, $\Sc^n$, with the standard
\textdef{trace inner product}
\[
\langle X,Y \rangle := \trace(XY) = \sum_{i=1}^n \sum_{j=1}^n X_{ij}Y_{ij},
\]
and the induced \textdef{Frobenius norm}
\[
\lVert X \rVert_F := \sqrt{\langle X, X\rangle }.
\]
In the subsequent paragraphs, we highlight some well known results on
the cone of positive semidefinite matrices and its faces, as well as other
useful results from convex analysis. For proofs and further reading we
suggest \cite{SaVaWo:97,MR2724357,con:70}. The dimension of $\Sc^n$ is the triangular number $n(n+1)/2=: t(n)$.
We define \textdef{$\svec$}$: \Sc^n \rightarrow {\mathbb{R}^{\scriptsize{t(n)}}\,}$ such that it maps the upper triangular
elements of $X \in \Sc^n$ to a vector in ${\mathbb{R}^{\scriptsize{t(n)}}\,}$ where the off-diagonal
elements are multiplied by $\sqrt{2}$. Then $\svec$ is an isometry and an
isomorphism with \textdef{$\sMat$}$ := \svec^{-1}$. Moreover, for
$X,Y \in \Sc^n$,
\index{$t(n)$, triangular number}
\index{triangular number, $t(n)$}
\[
\langle X,Y \rangle = \svec(X)^T \svec(Y).
\]
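As a small illustration (ours, not from the paper), the following NumPy sketch implements one possible $\svec$ and checks the isometry $\langle X,Y\rangle = \svec(X)^T\svec(Y)$.
\begin{verbatim}
# One possible svec: stack the upper triangle of a symmetric matrix,
# scaling off-diagonal entries by sqrt(2), so that
# trace(X Y) = svec(X) . svec(Y).
import numpy as np

def svec(X):
    i, j = np.triu_indices(X.shape[0])
    w = np.where(i == j, 1.0, np.sqrt(2.0))
    return w * X[i, j]

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); X = (A + A.T) / 2
B = rng.standard_normal((4, 4)); Y = (B + B.T) / 2
print(np.trace(X @ Y), svec(X) @ svec(Y))   # equal up to rounding
\end{verbatim}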
The eigenvalues of any $X \in \Sc^n$ are real and indexed so as to satisfy,
\index{operator norm, $\|X\|$}
\index{$\|X\|$, operator norm}
\[
\lambda_1(X) \ge \lambda_2(X) \ge \cdots \ge \lambda_n(X),
\]
and $\lambda(X) \in \mathbb{R}^n$ is the vector consisting of all the eigenvalues.
In terms of this notation, the operator 2-norm for matrices is defined as
$\lVert X \rVert_2 := \max_i \lvert \lambda_i(X) \rvert$. When the argument to $\| \cdot \|_2$ is a vector, this denotes the usual Euclidean norm.
The Frobenius norm may also be expressed in terms of eigenvalues: $\lVert X \rVert_F=
\lVert \lambda(X) \rVert_2$. The set of \textdef{positive semidefinite
(PSD)}\index{PSD, positive semidefinite matrices} matrices, $\Sc^n_+$, is a closed convex cone in $\Sc^n$, whose
interior consists of the \textdef{positive definite (PD)}\index{PD, positive definite matrices} matrices,
\textdef{$\Sc^n_{++}$}. The cone $\Sc^n_+$ induces the \textdef{L\"owner partial order} on $\Sc^n$.
That is, for $X,Y \in \Sc^n$ we write $X\succeq Y$ when $X-Y \in \Sc^n_+$ and similarly $X\succ Y$ when $X-Y \in \Sc^n_{++}$. For $X,Y \in \Sc^n_+$ the following equivalence holds:
\begin{equation}
\label{eq:innerprodmatrixprod}
\langle X, Y \rangle =0 \ \iff \ XY = 0.
\end{equation}
\begin{definition}[face]
\label{def:face}
A closed convex cone $f \subseteq \Sc^n_+$ is a \textdef{face} of $\Sc^n_+$ if
\[
X,Y \in \Sc^n_+, \ X+Y \in f \ \implies \ X,Y \in f.
\]
\end{definition}
A nonempty face $f$ is said to be \emph{proper} if $f \ne \Sc^n_+$ and $f
\ne 0$. Given a convex set $C \subseteq \Sc^n_+$, the \textdef{minimal face}\index{$\face(\cdot)$, minimal face}
of $\Sc^n_+$ containing $C$, with respect to set inclusion, is denoted
$\face(C)$. A face $f$ is said to be \emph{exposed} if there exists $W \in \Sc^n_+ \setminus \{0\}$ such that
\[
f = \{X \in \Sc^n_+ : \langle W, X \rangle = 0\}.
\]
Every face of $\Sc^n_+$ is exposed and the vector $W$ is referred to as an
\textdef{exposing vector}. The faces of $\Sc^n_+$ may be characterized in terms of the range of any of their maximal rank elements. Moreover, each face is isomorphic to a smaller dimensional positive semidefinite cone, as is seen in the subsequent theorem.
\begin{theorem}[\cite{DrusWolk:16}]
\label{thm:face}
Let $f$ be a face of $\Sc^np$ and $X \in f$ a maximal rank element with rank $r$ and orthogonal spectral decomposition
\[
X=\begin{bmatrix} V & U \end{bmatrix}
\begin{bmatrix} D & 0 \cr 0 & 0 \end{bmatrix}
\begin{bmatrix} V & U \end{bmatrix}^T \in \Sc^n_+, \quad D\in \Sc^r_{++}.
\]
Then $f = V \Sc^r_+ V^T$ and $\relint(f) = V \Sc^r_{++} V^T$. Moreover, $W \in \Sc^n_+$ is an exposing vector for $f$ if
and only if $W \in U\Sc^{n-r}_{++} U^T$.
\end{theorem}
We refer to $U\Sc^{n-r}_+ U^T$, from the above theorem, as the \textdef{conjugate
face}\index{$f^c$, conjugate face}, denoted $f^c$. For any convex set $C$, an explicit form for $\face(C)$ and $\face(C)^c$ may be obtained from the orthogonal spectral decomposition of any of its maximal rank elements, as in Theorem~\ref{thm:face}.
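To illustrate Theorem~\ref{thm:face} (a sketch of ours, not taken from the paper), the following NumPy snippet recovers $V$, and hence $\face(C)=V\Sc^r_+V^T$, together with an exposing vector, from the spectral decomposition of a maximal rank element; the rank decision is made with a user-chosen tolerance \texttt{tol}.
\begin{verbatim}
# Given a maximal-rank X in a face (e.g. a point of relint(F)), return V
# with face = V S^r_+ V^T and the exposing vector W = U U^T.
import numpy as np

def face_from_point(X, tol=1e-8):
    w, Q = np.linalg.eigh(X)            # eigenvalues in ascending order
    pos = w > tol * max(w.max(), 1.0)   # rank decision (a modelling choice)
    V, U = Q[:, pos], Q[:, ~pos]
    return V, U @ U.T

B = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0], [0.0, 0.0]])
X = B @ B.T                             # a rank-2 element of S^4_+
V, W = face_from_point(X)
print(V.shape, np.allclose(W @ X, 0))   # (4, 2) True
\end{verbatim}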
For a linear map ${\mathcal A} : \Sc^n \rightarrow {\mathbb{R}^m\,}$, there exist $S_1,
\dotso,S_m \in \Sc^n$ such that
\index{$({\mathcal A}(X))_i = \langle X,S_i \rangle$}
\[
\begin{pmatrix} {\mathcal A}(X)\end{pmatrix}_i = \langle X,S_i \rangle, \quad \forall i \in \{1,\dotso,m\}.
\]
The \textdef{adjoint} of ${\mathcal A}$ is the unique linear map ${\mathcal A}^* : {\mathbb{R}^m\,} \rightarrow \Sc^n$ satisfying
\[
\langle {\mathcal A}(X),y \rangle = \langle X,{\mathcal A}^*(y) \rangle,
\quad \forall X \in \Sc^n, \, y \in {\mathbb{R}^m\,},
\]
and has the explicit form ${\mathcal A}^*(y) = \sum_{i=1}^m y_i S_i$,
i.e.,~$\range({\mathcal A}^*)=\spanl \{S_1,\ldots,S_m\}$.
We define $A_i\in \Sc^n$ to form a basis for the nullspace,
$\nul({\mathcal A})=\spanl \{ A_1,\dotso,A_q\}$.
\index{$\range ({\mathcal A}^*)=\spanl \{ S_1,\dotso,S_m\}$}
\index{$\nul({\mathcal A})=\spanl \{ A_1,\dotso,A_q\}$}
For a non-empty convex set $C \subseteq \Sc^n$ the \textdef{recession cone}\index{$C^{\infty}$, recession cone}, denoted $C^{\infty}$, captures the directions in which $C$ is unbounded. That is
\begin{equation}
\label{eq:recession}
C^{\infty} := \{Y \in \Sc^n : X + \lambda Y \in C, \ \forall \lambda \ge 0, \ X \in C \}.
\end{equation}
Note that the recession directions are the same at all points $X \in C$. For a non-empty set $S \subseteq \Sc^n$, the \textdef{dual cone}\index{$S^+$, dual cone} (also referred to as the positive polar) is defined as
\begin{equation}
\label{eq:dualcone}
S^+ := \{ Y \in \Sc^n : \langle X, Y \rangle \ge 0, \ \forall X \in S\}.
\end{equation}
A useful result regarding dual cones is that for cones $K_1$ and $K_2$,
\begin{equation}
\label{eq:dualintersection}
(K_1 \cap K_2)^+ = \cl(K_1^+ + K_2^+),
\end{equation}
where \textdef{$\cl(\cdot)$}\index{closure, $\cl(\cdot)$} denotes set closure.
\subsection{Strong Duality in Semidefinite Programming and Facial Reduction}
\label{sec:sdpstrongduality}
Consider the standard primal form SDP
\begin{equation}
\label{prob:sdpprimal}
\textbf{SDP}\, \qquad \qquad \textdef{$p^{\star}$}:=\min \{ \langle C,X\rangle : {\mathcal A}(X)=b, X\succeq 0\},
\end{equation}
with Lagrangian dual
\begin{equation}
\label{prob:sdpdual}
\textbf{D-SDP}\, \qquad \qquad \textdef{$d^{\star}$}:=\max \{ b^Ty : {\mathcal A}^*(y) \preceq C \}.
\end{equation}
Let $\mathcal{F}$ denote the spectrahedron defined by the feasible set of $\textbf{SDP}\,$.
One of the challenges in semidefinite programming is that strong duality
is not an inherent property, but depends on a constraint qualification,
such as the Slater CQ.
\begin{theorem}[strong duality,~\cite{SaVaWo:97}]
\label{thm:strongduality}
If the primal optimal value $p^{\star}$ is finite and
$\mathcal{F} \cap \Sc^n_{++} \ne \emptyset$, then the primal-dual pair $\textbf{SDP}\,$ and
$\textbf{D-SDP}\,$ have a \textdef{zero duality gap}, $p^{\star}=d^{\star}$, and $d^{\star}$ is attained.
\end{theorem}
Since the Lagrangian dual of the dual is the primal, this result can
similarly be applied to the dual problem, i.e.,~if the primal-dual pair
both satisfy the Slater CQ, then there is a zero duality gap and both
optimal values are attained.
Not only can strong duality fail with the absence of
the Slater CQ, but the standard central path of an interior point
algorithm is undefined. The facial reduction regularization approach of \cite{bw1,bw2,bw3} restricts \textbf{SDP}\, to the minimal face of $\Sc^n_+$ containing $\mathcal{F}$:
\begin{equation}
\label{eq:sdpr}
\textbf{SDP}\,R \qquad \qquad \min \{\langle C,X \rangle : {\mathcal A}(X) = b,\, X \in \face(\mathcal{F}) \}.
\end{equation}
Since the dimension of $\mathcal{F}$ and $\face(\mathcal{F})$ is the same, the Slater CQ holds for the facially reduced problem. Moreover, $\face(\mathcal{F})$ is isomorphic to a smaller dimensional positive semidefinite cone, thus $\textbf{SDP}\,R$ is itself a semidefinite program. The restriction to $\face(\mathcal{F})$ may be obtained as in the results of
Theorem~\ref{thm:face}. The dual of \textbf{SDP}\,R restricts the slack variable
to the dual cone
\[
Z=C-{\mathcal A}^*(y)\in \face(\mathcal{F})^+.
\]
Note that $\mathcal{F}^+=\face(\mathcal{F})^+$.
If we have knowledge of $\face(\mathcal{F})$, i.e., we have the matrix $V$ such that $\face(\mathcal{F}) = V\Sc^r_+ V^T$, then we may
replace $X$ in \textbf{SDP}\, by $VRV^T$ with $R \succeq 0$. After rearranging, we obtain \textbf{SDP}\,R. Alternatively,
if our knowledge of the minimal face is in the form of an exposing vector,
say $W$, then we may obtain $V$ so that its columns form a basis for $\nul(W)$.
We see that the approach is straightforward when knowledge of $\face(\mathcal{F})$ is available. In instances where such knowledge is unavailable, the following theorem of the alternative from \cite{bw3} guarantees the
existence of exposing vectors that lie in $\range({\mathcal A}^*)$.
\begin{theorem}[of the alternative,~\cite{bw3}]
\label{thm:alternative}
Exactly one of the following systems is consistent:
\begin{enumerate}
\item ${\mathcal A}(X) = b$, $X\succ 0$,
\item $0 \ne {\mathcal A}^*(y) \succeq 0$, $b^Ty = 0$.
\end{enumerate}
\end{theorem}
The first alternative is just the Slater CQ, while if the second
alternative holds, then ${\mathcal A}^*(y)$ is an exposing vector for a face
containing $\mathcal{F}$. We may use a basis for $\nul({\mathcal A}^*(y))$ to obtain a smaller \textbf{SDP}\,. If the Slater CQ holds for the new \textbf{SDP}\, we have obtained \textbf{SDP}\,R, otherwise, we find an exposing vector and reduce the problem again. We outline the facial reduction procedure in Algorithm~{\rm Re}\,f{algo:fr}. At each iteration, the dimension of the problem is reduced by at least one, hence this approach is bound to obtain \textbf{SDP}\,R in at most $n-1$ iterations, assuming that the initial problem is feasible. If at each iteration the
exposing vector obtained is of maximal rank then the number of
iterations required to obtain \textbf{SDP}\,R is referred to as the \emph{singularity
degree}, \cite{S98lmi}. For a non-empty spectrahedron, $\mathcal{F}$, we denote the singularity
degree as $\sd=\sd(\mathcal{F})$.
\index{singularity degree, $\sd=\sd(\mathcal{F})$}
\index{$\sd=\sd(\mathcal{F})$, singularity degree}
\begin{algorithm}
\label{algo:fr}
\caption{Facial reduction procedure using the theorem of the alternative.}
\begin{algorithmic}
\STATE Initialize $S_i$ so that $({\mathcal A}(X))_i = \langle S_i,X \rangle$ for $i \in \{1,\dotso,m\}$
\WHILE{item 2 of Theorem~\ref{thm:alternative} holds}
\STATE {obtain exposing vector $W$}
\STATE {$W = \begin{bmatrix}
U & V \end{bmatrix}\begin{bmatrix}
D & 0 \\
0 & 0 \end{bmatrix}\begin{bmatrix}
U & V\end{bmatrix}^T , \quad D \succ 0$}
\STATE {$S_i \leftarrow V^TS_iV, \quad i \in \{1,\dotso, m\}$}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
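For illustration, a minimal Python/CVXPY sketch of one pass of the loop in Algorithm~\ref{algo:fr} is given below. It is our own simplification, not the implementation of the cited papers: the auxiliary feasibility problem looks for $y$ with ${\mathcal A}^*(y)\succeq 0$ and $b^Ty=0$, normalized by $\trace({\mathcal A}^*(y))=1$ to exclude $y=0$; if it is infeasible, item 1 of Theorem~\ref{thm:alternative}, i.e.~the Slater CQ, is taken to hold.
\begin{verbatim}
# One facial reduction step: search for an exposing vector W = A^*(y) >= 0
# with b^T y = 0, then restrict the data S_i to null(W).
import numpy as np
import cvxpy as cp

def facial_reduction_step(S_list, b, tol=1e-7):
    n, m = S_list[0].shape[0], len(S_list)
    y = cp.Variable(m)
    W = cp.Variable((n, n), PSD=True)          # candidate exposing vector
    cons = [W == sum(y[i] * S_list[i] for i in range(m)),  # W = A^*(y)
            b @ y == 0,
            cp.trace(W) == 1]                  # normalization, rules out y = 0
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None, S_list                    # item 1: Slater CQ holds
    w, Q = np.linalg.eigh(W.value)
    V = Q[:, w < tol]                          # basis for null(W)
    return W.value, [V.T @ S @ V for S in S_list]
\end{verbatim}
Repeating this step until it reports \texttt{None} reduces the data to the minimal face, in at most $n-1$ passes.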
We remark that any algorithm pursuing the minimal face through exposing vectors of the form ${\mathcal A}^*(\cdot)$ must perform at least as many iterations as the singularity degree. The singularity degree could be as large as the trivial upper bound $n-1$, as is seen in the example of \cite{MR2724357}. Thus facial reduction may be very expensive computationally. On the other hand, from Theorem~\ref{thm:face} we see that $\face(\mathcal{F})$ is fully characterized by the range of any of its relative interior matrices. That is, from any solution to Problem~\ref{prob:main} we may obtain the regularized problem \textbf{SDP}\,R.
\section{A Parametric Optimization Approach}
\label{sec:paramprob}
In this section we present a parametric optimization problem that solves Problem~\ref{prob:main}.
\begin{assump}
\label{assump:main}
We make the following assumptions:
\begin{enumerate}
\item ${\mathcal A}$ is surjective,
\item $\mathcal{F}$ is non-empty, bounded and contained in a proper face of $\Sc^n_+$.
\end{enumerate}
\end{assump}
The assumption on ${\mathcal A}$ is a standard regularity assumption and so is the
non-emptiness assumption on $\mathcal{F}$. The need for $\mathcal{F}$ to be bounded
will become apparent throughout this section; however, our approach may
be applied to unbounded spectrahedra as well. We discuss such extensions in Section~\ref{sec:unbounded}. The assumption that $\mathcal{F}$ is contained in a proper face of $\Sc^n_+$ restricts our discussion to those instances of \textbf{SDP}\, that are interesting with respect to facial reduction.
The following lemma states two useful characterizations of bounded spectrahedra.
\begin{lemma}
\label{lem:boundedchar}
The following holds:
\[
\mathcal{F} \text{ is bounded} \ \iff \ \nul({\mathcal A}) \cap \Sc^n_+ = \{0\} \ \iff \ \range({\mathcal A}^*) \cap \Sc^n_{++} \ne \emptyset.
\]
\end{lemma}
\begin{proof}
For the first equivalence, $\mathcal{F}$ is bounded if and only if $\mathcal{F}^{\infty} =
\{0\}$ by Theorem~8.4 of \cite{con:70}. It suffices, therefore,
to show that $\mathcal{F}^{\infty} = \nul({\mathcal A}) \cap \Sc^n_+$. It is easy to
see that $(\Sc^n_+)^{\infty} = \Sc^n_+$ and that the recession cone of
the affine manifold defined by ${\mathcal A}$ and $b$ is $\nul({\mathcal A})$. By
Corollary~8.3.3 of \cite{con:70} the recession cone of the intersection of convex sets is the intersection of the respective recession cones, yielding the desired result.
Now let us consider the second equivalence. For the forward direction, observe that
\begin{align*}
\nul({\mathcal A}) \cap \Sc^n_+ = \{0\} \ &\iff \ \left( \nul({\mathcal A}) \cap \Sc^n_+ \right)^+ = \{0 \}^+, \\
& \iff \ \nul({\mathcal A})^{\perp} + \Sc^n_+ = \Sc^n, \\
& \iff \ \range({\mathcal A}^*) + \Sc^n_+ = \Sc^n.
\end{align*}
The second equivalence is due to \eqref{eq:dualintersection} and the fact that in this case $\nul({\mathcal A})^{\perp} + \Sc^n_+$ is closed, which one can verify. Thus there exists $X \in \range({\mathcal A}^*)$ and $Y \in \Sc^n_+$ such that $X+Y=-I$. Equivalently, $-X = I + Y \in \Sc^n_{++}$. For the converse, let $X \in \range({\mathcal A}^*) \cap \Sc^n_{++}$ and suppose $0\ne S \in \nul({\mathcal A}) \cap \Sc^n_+$. Then $\langle X,S \rangle = 0$, which implies, by \eqref{eq:innerprodmatrixprod}, that $XS = 0$. But then $\nul(X) \ne \{0\}$, a contradiction.
\end{proof}
Let $r$ denote the maximal rank of any matrix in ${\rm Re}\,lint (\mathcal{F})$ and let the columns of $V \in \mathbb{R}^{n\times r}$ form a basis for its range. In seeking a relative interior point of $\mathcal{F}$ we define a specific point from which we develop a parametric optimization problem.
{\mathcal Ind} ex{analytic center of $\mathcal{F}$, $\hat{X}$}
{\mathcal Ind} ex{$\hat{X}$, analytic center of $\mathcal{F}$}
\begin{definition}[analytic center]
\ensuremath{\mathcal{L}eftarrow}bel{def:analytic}
The analytic center of $\mathcal{F}$ is the unique matrix $\hat{X}$ satisfying
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:analytic}
\hat{X} = \arg \max \{ \log \det (V^TXV) : X \in \mathcal{F} \}.
\end{equation}
\end{definition}
Under Assumption~\ref{assump:main} the analytic center is well-defined; this follows from the proof of Theorem~\ref{thm:maxdet}, below. It is easy to see that the analytic center is indeed in the relative interior of $\mathcal{F}$ and is therefore a solution to Problem~\ref{prob:main}. However, the optimization problem from which it is derived is intractable due to the unknown matrix $V$. If $V$ is simply removed from the optimization problem (replaced with the identity), then the problem is ill-posed: the objective takes no finite value over the feasible set, since the feasible set lies on the boundary of the positive semidefinite cone. To combat these issues, we propose replacing $V$ with $I$ and also perturbing $\mathcal{F}$ so that it intersects $\Sc^npp$. The perturbation we choose is that of replacing $b$ with
$b(\alpha) := b+ \alpha {\mathcal A}(I), \ \alpha>0$, thereby defining a family of spectrahedra
\[
\mathcal{F}a := \{X \in \Sc^np : {\mathcal A}(X) = b(\alpha) \}.
\]
It is easy to see that if $\mathcal{F} \ne \emptyset$ then $\mathcal{F}a$ has positive definite elements for every $\alpha >0$; indeed, $\mathcal{F} + \alpha I \subset \mathcal{F}a$. Note that the affine manifold may be perturbed by any positive definite matrix; $I$ is chosen for simplicity. We now consider the family of optimization problems
for $\alpha > 0$:
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:Palpha}
{\bf P(\alpha)} \qquad \qquad \max \{ \log \det ( X) : X\in \mathcal{F}a \}.
\end{equation}
It is well known that the solution to this problem exists and is unique for each $\alpha > 0$; we include a proof in Theorem~\ref{thm:maxdet}, below. Moreover, since $\operatorname{face}(\mathcal{F}a) = \Sc^np$ for each $\alpha > 0$, the solution to ${\bf P(\alpha)}$ lies in ${\rm Re}\,lint(\mathcal{F}a)$ and is exactly the analytic center of $\mathcal{F}a$. The intuition behind our approach is that as the perturbation gets smaller, i.e., as $\alpha \searrow 0$, the solution to ${\bf P(\alpha)}$ approaches the relative interior of $\mathcal{F}$. This intuition is validated in Section~\ref{sec:convergence}. Specifically, we show that the solutions to ${\bf P(\alpha)}$ form a smooth path that converges to some $\bar{X} \in {\rm Re}\,lint(\mathcal{F})$. We also provide a sufficient condition for the limit point to be $\hat{X}$ in Section~\ref{sec:analyticcenter}.
We note that our approach of perturbing the spectrahedron in order to use the $\log \det(\cdot)$ function is not entirely new. In \cite{fazelhindiboyd:01}, for instance, the authors perturb a convex feasible set in order to approximate the rank function using $\log \det(\cdot)$. Unlike our approach, their perturbation is constant.
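To make the perturbation concrete, the following Python/NumPy sketch forms $b(\alpha) = b + \alpha{\mathcal A}(I)$ from a list of constraint matrices $S_i$ and checks numerically that a feasible point of $\mathcal{F}$ shifted by $\alpha I$ lies in $\mathcal{F}a$. The function and variable names (\texttt{A\_op}, \texttt{perturbed\_rhs}, and the toy data) are ours, purely for illustration.
\begin{verbatim}
import numpy as np

def A_op(S_list, X):
    # constraint map: A(X)_i = <S_i, X>, the trace inner product
    return np.array([np.sum(S * X) for S in S_list])

def perturbed_rhs(S_list, b, alpha):
    # b(alpha) = b + alpha * A(I)
    n = S_list[0].shape[0]
    return b + alpha * A_op(S_list, np.eye(n))

# toy constraints and a feasible X in the original spectrahedron F
S_list = [np.diag([1.0, 1.0, 0.0]),
          np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])]
X_feas = np.diag([0.5, 0.5, 0.0])
b = A_op(S_list, X_feas)

alpha = 1e-2
# X_feas + alpha*I satisfies A(X) = b(alpha): F + alpha*I is contained in F(alpha)
print(np.allclose(A_op(S_list, X_feas + alpha * np.eye(3)),
                  perturbed_rhs(S_list, b, alpha)))   # True
\end{verbatim}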
\subsection{Optimality Conditions}
We choose the strictly concave function $\log \det (\cdot)$ for its
elegant optimality conditions, though the maximization is equivalent
to maximizing only the determinant. We treat it as an \textdef{extended
valued} concave function that takes the value $-\infty$ if $X$ is singular.
For this reason we refer to both functions $\det(\cdot)$ and $\log \det (\cdot)$ equivalently
throughout our discussion.
Let us now consider the optimality conditions for the problem ${\bf P(\alpha)}$.
Similar problems have been thoroughly studied throughout the literature
on matrix completion and \textbf{SDP}\,,
e.g.,~\cite{GrJoSaWo:84,MR2807419,SaVaWo:97,MR1614078}. Nonetheless, we include a proof for
completeness and to emphasize its simplicity.
\begin{theorem}[optimality conditions]
\ensuremath{\mathcal{L}eftarrow}bel{thm:maxdet}
For every $\alpha >0$ there exists a
unique ${X(\alpha)} \in \mathcal{F}a \cap \Sc^npp$ such that
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:maxlogdet}
{X(\alpha)} =\arg \max \{ \log \det (X) : X \in \mathcal{F}a \}.
\end{equation}
Moreover, ${X(\alpha)}$ satisfies \eqref{eq:maxlogdet} if, and only if, there exists a unique ${y(\alpha)} \in {\mathbb{R}^m\,}$ and a unique ${Z(\alpha)} \in \Sc^npp$ such that
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:optimalsystem}
\begin{bmatrix}
{\mathcal A}^*({y(\alpha)})-{Z(\alpha)} \\
{\mathcal A}({X(\alpha)}) - b(\alpha) \\
{Z(\alpha)} {X(\alpha)} - I
\end{bmatrix} = 0.
\end{equation}
\end{theorem}
\begin{proof}
By Assumption~\ref{assump:main}, $\mathcal{F}$ is non-empty and bounded. It
follows that $\mathcal{F}a \cap \Sc^npp \ne \emptyset$ and, by Lemma~\ref{lem:boundedchar},
it is bounded. Moreover, $\log \det (\cdot)$ is a strictly concave function
over $\mathcal{F}a \cap \Sc^npp$ (a so-called barrier function) and
\[
\lim_{\det(X)\to 0} \log \det (X) = -\infty.
\]
Thus, we conclude that the optimum ${X(\alpha)} \in \mathcal{F}a \cap \Sc^npp$ exists and is
unique. The Lagrangian of problem \eqref{eq:maxlogdet} is
\begin{align*}
{\mathcal L} (X,y) &= \log \det(X) - \ensuremath{\mathcal{L}eftarrow}ngle y, {\mathcal A}(X) - b\ensuremath{\mathbb{R}ightarrow}ngle \\
&= \log \det(X) - \ensuremath{\mathcal{L}eftarrow}ngle {\mathcal A}^*(y), X \ensuremath{\mathbb{R}ightarrow}ngle + \ensuremath{\mathcal{L}eftarrow}ngle y, b\ensuremath{\mathbb{R}ightarrow}ngle.
\end{align*}
Since the constraints are linear,
stationarity of the Lagrangian holds at ${X(\alpha)}$. Hence there exists ${y(\alpha)} \in {\mathbb{R}^m\,}$
such that $({X(\alpha)})^{-1} = {\mathcal A}^*({y(\alpha)}) =: {Z(\alpha)}$. Clearly ${Z(\alpha)}$ is unique, and
since ${\mathcal A}$ is surjective, we conclude in addition that ${y(\alpha)}$ is unique.
\end{proof}
\subsection{The Unbounded Case}
\ensuremath{\mathcal{L}eftarrow}bel{sec:unbounded}
Before we continue with the convergence results, we briefly address
the case of unbounded spectrahedra.
The restriction to bounded spectrahedra is necessary in order to have
solutions to \eqref{eq:maxlogdet}. There are certainly large families
of \textbf{SDP}\,s where the assumption holds. Problems arising from liftings of
combinatorial optimization problems often have the diagonal elements
specified, and hence bound the corresponding spectrahedron. Matrix
completion problems are another family where the diagonal is often
specified. Nonetheless, many \textbf{SDP}\,s have unbounded feasible sets and we provide two methods for reducing such spectrahedra to bounded ones. First, we show that the boundedness of $\mathcal{F}$ may be determined by solving a projection problem.
\begin{prop}
\ensuremath{\mathcal{L}eftarrow}bel{prop:boundtest}
Let $\mathcal{F}$ be a spectrahedron defined by the affine manifold ${\mathcal A}(X) = b$ and let
\[
P := \arg \min \ \{\lVert X - I \rVert_F : X\in \ensuremath{\mathbb{R}ightarrow}nge({\mathcal A}^*) \}.
\]
Then $\mathcal{F}$ is bounded if $P \succ 0$.
\end{prop}
\begin{proof}
First we note that $P$ is well defined and a singleton since it is the projection of $I$ onto a closed convex set. Now $P\succ 0$ implies that $\ensuremath{\mathbb{R}ightarrow}nge({\mathcal A}^*) \cap \Sc^npp \ne
\emptyset$ and by Lemma~{\rm Re}\,f{lem:boundedchar} this is equivalent to $\mathcal{F}$
bounded.
\end{proof}
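For illustration, the projection of Proposition~\ref{prop:boundtest} is a linear least-squares problem in $\svec$ coordinates. The following Python/NumPy sketch (function and variable names are ours, not part of any existing implementation) computes $P$ for the diagonal-constraint case and confirms that it is positive definite.
\begin{verbatim}
import numpy as np

def svec(S):
    # stack the upper triangle; off-diagonals scaled by sqrt(2) so that
    # the Frobenius inner product becomes the usual dot product
    iu = np.triu_indices(S.shape[0])
    return np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0)) * S[iu]

def boundedness_certificate(S_list):
    # project I onto range(A^*) = span{S_i}; P positive definite certifies
    # that the spectrahedron F is bounded
    n = S_list[0].shape[0]
    M = np.column_stack([svec(S) for S in S_list])
    y, *_ = np.linalg.lstsq(M, svec(np.eye(n)), rcond=None)
    return sum(yi * Si for yi, Si in zip(y, S_list))

# diagonal constraints e_i e_i^T fix diag(X); the resulting F is bounded
n = 4
S_list = [np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n)]
P = boundedness_certificate(S_list)
print(np.linalg.eigvalsh(P))   # all eigenvalues equal 1 > 0
\end{verbatim}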
The proposition gives us a sufficient condition for $\mathcal{F}$ to be bounded. Suppose this condition is not satisfied, but we have knowledge of some matrix $S \in \mathcal{F}$. Then for $t > 0$, consider the spectrahedron
\[
\mathcal{F}' := \{ X \in \Sc^n : X\in \mathcal{F}, \ \trace(X) = \trace(S) + t \}.
\]
Clearly $\mathcal{F}'$ is bounded. Moreover, we see that $\mathcal{F}' \subset \mathcal{F}$ and contains maximal rank elements of $\mathcal{F}$, hence $~ \; \forallce(\mathcal{F}') = ~ \; \forallce(\mathcal{F})$. It follows that ${\rm Re}\,lint(\mathcal{F}') \subset {\rm Re}\,lint(\mathcal{F})$ and we have reduced the problem to the bounded case.
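As a small illustration of this reduction, the sketch below (assuming the constraints are stored as a list of symmetric matrices; all names are ours) appends the trace constraint to the data $({\mathcal A},b)$.
\begin{verbatim}
import numpy as np

def add_trace_constraint(S_list, b, S_feas, t):
    # append trace(X) = trace(S_feas) + t, i.e., the constraint <I, X> = const;
    # the enlarged system defines the bounded spectrahedron F' of the text
    n = S_list[0].shape[0]
    return S_list + [np.eye(n)], np.append(b, np.trace(S_feas) + t)

# F = {X >= 0 : X[0,0] = 1} is unbounded; a trace bound makes it bounded
S_list = [np.diag([1.0, 0.0])]
b = np.array([1.0])
S_feas = np.diag([1.0, 0.0])            # a known element of F
S_new, b_new = add_trace_constraint(S_list, b, S_feas, t=1.0)
print(len(S_new), b_new)                # 2 constraints, right-hand side [1. 2.]
\end{verbatim}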
Now suppose that the sufficient condition of the proposition does not hold and we do not have knowledge of a feasible element of $\mathcal{F}$. In this case we detect recession directions, that is, elements of $\nul ({\mathcal A}) \cap \Sc^np$, and restrict to their orthogonal complement. Specifically, if $\mathcal{F}$ is unbounded then $\mathcal{F}a$ is unbounded and problem \eqref{eq:Palpha} is unbounded. Suppose we have detected unboundedness, i.e., we have $X \in \mathcal{F}(\alpha)\cap \Sc^np$ with large norm. Then $X = S_0 + S$ with $S \in \nul ({\mathcal A}) \cap \Sc^np$ and $\lVert S \rVert \gg \lVert S_0 \rVert$. We then restrict $\mathcal{F}$ to the orthogonal complement of $S$; that is, we consider the new spectrahedron
\[
\mathcal{F}' := \{X\in \Sc^n : X\in \mathcal{F}, \ \ensuremath{\mathcal{L}eftarrow}ngle S,X\ensuremath{\mathbb{R}ightarrow}ngle = 0\}.
\]
By repeated application, we eliminate a basis for the recession directions and obtain a bounded spectrahedron. From any of the relative interior points of this spectrahedron, we may obtain a relative interior point for $\mathcal{F}$ by adding to it the recession directions obtained throughout the reduction process.
\subsection{Convergence to the Relative Interior and Smoothness}
\ensuremath{\mathcal{L}eftarrow}bel{sec:convergence}
By simple inspection it is easy to see that $({X(\alpha)},{y(\alpha)},{Z(\alpha)})$, as in \eqref{eq:optimalsystem}, does not converge as $\alpha \searrow 0$. Indeed, under Assumption~{\rm Re}\,f{assump:main},
\[
\lim_{\alpha \searrow 0} \lambda_n({X(\alpha)}) = 0 \ \implies \ \lim_{\alpha \searrow 0} \lVert {Z(\alpha)} \rVert_2 = +\infty.
\]
It is therefore necessary to scale ${Z(\alpha)}$ so that it remains bounded. Let us look at an example.
\begin{example} Consider the matrix completion problem: find $X \succeq 0$ having the form
$$ \begin{pmatrix} 1 & 1 & ? \cr
1 & 1 & 1 \cr ? & 1 & 1 \end{pmatrix}. $$
The set of solutions is indeed a spectrahedron with ${\mathcal A}$ and $b$ given by
$$ {\mathcal A} \left( \begin{bmatrix} x_{11} & x_{12} & x_{13} \cr
x_{12} & x_{22} & x_{23} \cr x_{13} & x_{23} & x_{33} \end{bmatrix} \right) := \begin{pmatrix} x_{11} \cr x_{12} \cr x_{22} \cr x_{23} \cr x_{33} \end{pmatrix},\ b := \begin{pmatrix} 1 \cr
1 \cr 1 \cr 1 \cr 1\end{pmatrix}. $$
In this case, it is not difficult to obtain
$$ X(\alpha ) = \begin{pmatrix}1+\alpha & 1 & \frac{1}{1+\alpha} \cr 1 & 1+\alpha & 1\cr \frac{1}{1+\alpha} & 1 & 1+\alpha \end{pmatrix},$$ with inverse
$$ X(\alpha)^{-1} =\frac{1}{\alpha(2+\alpha)} \begin{pmatrix}1+\alpha & -1 & 0 \cr -1 & \frac{\alpha^2+2\alpha+2}{ 1+\alpha} & -1\cr 0 & -1 & 1+\alpha \end{pmatrix}. $$
Clearly $ \lim_{\alpha \searrow 0} \lVert X(\alpha)^{-1} \rVert_2 \rightarrow +\infty.$
However, when we consider $\alpha X(\alpha)^{-1}$ and take the limit as $\alpha$ goes to 0, we obtain the
bounded limit
$$ \bar{Z} = \begin{pmatrix} \frac12 & - \frac12 & 0 \cr - \frac12 & 1 & - \frac12 \cr 0 &- \frac12 & \frac12 \end{pmatrix}. $$
Note that $\bar{X}= X(0)$ is the $3\times 3$ matrix with all ones, ${\rm rank}\, \bar{X}+ {\rm rank}\, \bar{Z}= 3$, and $\bar{X} \bar{Z} = 0$.
\end{example}
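The limit in this example is easy to confirm numerically. The following sketch (ours, for illustration only) evaluates the closed form for $X(\alpha)$ given above and shows that $\alpha X(\alpha)^{-1}$ approaches $\bar{Z}$ as $\alpha \searrow 0$.
\begin{verbatim}
import numpy as np

def X_of_alpha(a):
    # closed-form X(alpha) from the example above
    return np.array([[1 + a, 1.0,         1.0 / (1 + a)],
                     [1.0,   1 + a,       1.0],
                     [1.0 / (1 + a), 1.0, 1 + a]])

Z_bar = np.array([[ 0.5, -0.5,  0.0],
                  [-0.5,  1.0, -0.5],
                  [ 0.0, -0.5,  0.5]])

for a in [1e-1, 1e-3, 1e-6]:
    Z_a = a * np.linalg.inv(X_of_alpha(a))
    print(a, np.linalg.norm(Z_a - Z_bar))   # the distance shrinks as alpha -> 0
\end{verbatim}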
It turns out that multiplying ${X(\alpha)}^{-1}$ by $\alpha$ always bounds the sequence $({X(\alpha)},{y(\alpha)},{Z(\alpha)})$. Therefore, we consider the scaled system
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:scaledoptimality}
\begin{bmatrix}
{\mathcal A}^*(y) - Z \\
{\mathcal A}(X) - b(\alpha) \\
ZX - \alpha I
\end{bmatrix} = 0, \ X \succ 0, \ Z \succ 0, \ \alpha > 0,
\end{equation}
that is obtained from \eqref{eq:optimalsystem} by multiplying the last equation by $\alpha$. Abusing our previous notation, we let $({X(\alpha)},{y(\alpha)},{Z(\alpha)})$ denote a solution
to \emph{this} system and we refer to the set of all such solutions as the
\textdef{parametric path}. The parametric path has clear parallels to the \emph{central path} of \textbf{SDP}\,, however, it differs in one main respect: it is not contained in the relative interior of $\mathcal{F}$. In the main theorems of this section we prove that the parametric path is smooth and converges as $\alpha\searrow 0$ with the primal limit point in ${\rm Re}\,lint(\mathcal{F})$. We begin by showing that the primal component of the parametric path has cluster points.
\begin{lemma}
\ensuremath{\mathcal{L}eftarrow}bel{lem:primalconverge}
Let $\bar{\alpha}> 0$. For every sequence $\{\alpha_k\}_{k\in {\mathbb N}} \subset (0,\bar{\alpha}]$ such that $\alpha_k \searrow 0$, there exists a subsequence $\{\alpha_l \}_{l\in {\mathbb N}}$ such that $X(\alpha_l) \rightarrow \bar{X} \in \mathcal{F}$.
\end{lemma}
\begin{proof}
Let $b(\alpha) r{\alpha}$ and $\{\alpha_k\}_{k\in {\mathbb N}}$ be as in the hypothesis. First we show that the sequence $X(\alpha_k)$ is bounded. For any $k \in {\mathbb N}$ we have
\[
\lVert X(\alpha_k) \rVert_2 \le \lVert X(\alpha_k)+ (b(\alpha) r{\alpha} - \alpha_k) I \rVert_2 \le \max_{X\in \mathcal{F}(b(\alpha) r{\alpha})} \lVert X\rVert_2 < +\infty.
\]
The second inequality is due to $X(\alpha_k) + (b(\alpha) r{\alpha} - \alpha_k)I \in \mathcal{F}(b(\alpha) r{\alpha})$ and the third inequality holds since $\mathcal{F}(b(\alpha) r{\alpha})$ is bounded.
Thus there exists a convergent subsequence $\{\alpha_l\}_{l\in {\mathbb N}}$
with $X(\alpha_l) \rightarrow b(\alpha) r{X}$, that clearly belongs to $\mathcal{F}$.
\end{proof}
For the dual variables we need only prove that $Z(\alpha)$ converges
(along a subsequence), since this implies that $y(\alpha)$ also converges,
by the assumption that ${\mathcal A}$ is surjective. As for ${X(\alpha)}$, we show that
the tail of the parametric path corresponding to $Z(\alpha)$ is bounded.
To this end, we first prove the following technical lemma. Recall that
$\hat{X}$ is the analytic center of Definition~{\rm Re}\,f{def:analytic}.
\begin{lemma}
\ensuremath{\mathcal{L}eftarrow}bel{lem:technicalbounded}
Let $\bar{\alpha} > 0$. There exists $M > 0$ such that for all $ \alpha \in (0,\bar{\alpha}]$,
\[
0 < \langle X(\alpha)^{-1}, \hat{X} + \alpha I \rangle \le M.
\]
\end{lemma}
\begin{proof}
Let $b(\alpha) r{\alpha}$ be as in the hypothesis and let $\alpha \in (0,b(\alpha) r{\alpha}]$. The first inequality is trivial since both of the matrices are positive definite. For the second inequality, we have,
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:boundednessfirst}
\begin{split}
\ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1} - X(\alpha)^{-1}, \hat{X} + b(\alpha) r{\alpha}I - X(\alpha) \ensuremath{\mathbb{R}ightarrow}ngle &= \ensuremath{\mathcal{L}eftarrow}ngle \frac{1}{b(\alpha) r{\alpha}}{\mathcal A}^*(y(b(\alpha) r{\alpha})) - \frac{1}{\alpha}{\mathcal A}^*(y(\alpha)), \hat{X} + b(\alpha) r{\alpha}I - X(\alpha) \ensuremath{\mathbb{R}ightarrow}ngle, \\
&= \ensuremath{\mathcal{L}eftarrow}ngle \frac{1}{b(\alpha) r{\alpha}}y(b(\alpha) r{\alpha}) - \frac{1}{\alpha}y(\alpha), {\mathcal A}(\hat{X} + b(\alpha) r{\alpha}I) - {\mathcal A}(X(\alpha)) \ensuremath{\mathbb{R}ightarrow}ngle, \\
&= \ensuremath{\mathcal{L}eftarrow}ngle \frac{1}{b(\alpha) r{\alpha}}y(b(\alpha) r{\alpha}) - \frac{1}{\alpha}y(\alpha), (b(\alpha) r{\alpha} - \alpha) {\mathcal A}(I) \ensuremath{\mathbb{R}ightarrow}ngle, \\
&= \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1} - X(\alpha)^{-1}, (b(\alpha) r{\alpha} - \alpha) I \ensuremath{\mathbb{R}ightarrow}ngle, \\
&= (b(\alpha) r{\alpha} - \alpha)\trace(X(b(\alpha) r{\alpha})^{-1}) - \ensuremath{\mathcal{L}eftarrow}ngle X(\alpha)^{-1}, (b(\alpha) r{\alpha} - \alpha) I \ensuremath{\mathbb{R}ightarrow}ngle.
\end{split}
\end{equation}
On the other hand,
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:boundednesssecond}
\begin{split}
\ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1} - X(\alpha)^{-1}, \hat{X} + b(\alpha) r{\alpha}I - X(\alpha) \ensuremath{\mathbb{R}ightarrow}ngle &= n + \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, \hat{X} \ensuremath{\mathbb{R}ightarrow}ngle + b(\alpha) r{\alpha}\trace(X(b(\alpha) r{\alpha})^{-1}) \\
& \qquad \qquad - \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, X(\alpha) \ensuremath{\mathbb{R}ightarrow}ngle - \ensuremath{\mathcal{L}eftarrow}ngle X(\alpha)^{-1}, \hat{X} + b(\alpha) r{\alpha} I \ensuremath{\mathbb{R}ightarrow}ngle.
\end{split}
\end{equation}
Combining \eqref{eq:boundednessfirst} and \eqref{eq:boundednesssecond} we get
\begin{align*}
(b(\alpha) r{\alpha} - \alpha)\trace(X(b(\alpha) r{\alpha})^{-1}) - \ensuremath{\mathcal{L}eftarrow}ngle X(\alpha)^{-1}, (b(\alpha) r{\alpha} - \alpha) I \ensuremath{\mathbb{R}ightarrow}ngle &= n + \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, \hat{X} \ensuremath{\mathbb{R}ightarrow}ngle + b(\alpha) r{\alpha}\trace(X(b(\alpha) r{\alpha})^{-1}) \\
& \qquad \qquad - \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, X(\alpha) \ensuremath{\mathbb{R}ightarrow}ngle - \ensuremath{\mathcal{L}eftarrow}ngle X(\alpha)^{-1}, \hat{X} + b(\alpha) r{\alpha} I \ensuremath{\mathbb{R}ightarrow}ngle.
\end{align*}
After rearranging, we obtain
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:boundednessthird}
\begin{split}
\ensuremath{\mathcal{L}eftarrow}ngle X(\alpha)^{-1}, \hat{X} + \alpha I \ensuremath{\mathbb{R}ightarrow}ngle &= n + \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, \hat{X} \ensuremath{\mathbb{R}ightarrow}ngle + b(\alpha) r{\alpha}\trace(X(b(\alpha) r{\alpha})^{-1})- \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, X(\alpha) \ensuremath{\mathbb{R}ightarrow}ngle \\
& \qquad \qquad - (b(\alpha) r{\alpha} - \alpha)\trace(X(b(\alpha) r{\alpha})^{-1}), \\
&= n + \alpha \trace(X(b(\alpha) r{\alpha})^{-1}) + \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, \hat{X} \ensuremath{\mathbb{R}ightarrow}ngle - \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, X(\alpha) \ensuremath{\mathbb{R}ightarrow}ngle.
\end{split}
\end{equation}
The first and the third terms of the right hand side are positive constants. The second term is positive for every value of $\alpha$ and is bounded above by $b(\alpha) r{\alpha}\trace(X(b(\alpha) r{\alpha})^{-1})$ while the fourth term is bounded above by 0. Applying these bounds as well as the trivial lower bound on the left hand side, we get
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:boundednessfourth}
0 < \ensuremath{\mathcal{L}eftarrow}ngle X(\alpha)^{-1}, \hat{X} + \alpha I \ensuremath{\mathbb{R}ightarrow}ngle \le n + b(\alpha) r{\alpha}\trace(X(b(\alpha) r{\alpha})^{-1})+ \ensuremath{\mathcal{L}eftarrow}ngle X(b(\alpha) r{\alpha})^{-1}, \hat{X} \ensuremath{\mathbb{R}ightarrow}ngle =: M.
\end{equation}
\end{proof}
We need one more ingredient to prove that the parametric path
corresponding to ${Z(\alpha)}$ is bounded. This involves bounding the trace
inner product above and below by the \textdef{maximal and minimal scalar
products} of the eigenvalues, respectively.
\begin{lemma}[Ky-Fan \cite{Fan:50}, Hoffman-Wielandt \cite{hw53}]
\ensuremath{\mathcal{L}eftarrow}bel{lem:eigenvaluebound}
If $A,B \in \Sc^n$, then
\[
\sum_{i=1}^n \ensuremath{\mathcal{L}eftarrow}mbda_i(A)\ensuremath{\mathcal{L}eftarrow}mbda_{n+1-i}(B) \le \ensuremath{\mathcal{L}eftarrow}ngle A, B \ensuremath{\mathbb{R}ightarrow}ngle \le \sum_{i=1}^n \ensuremath{\mathcal{L}eftarrow}mbda_i(A)\ensuremath{\mathcal{L}eftarrow}mbda_i(B).
\]
\end{lemma}
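These bounds are straightforward to check numerically; the following sketch (illustrative only) verifies them for a pair of random symmetric matrices, with eigenvalues computed in ascending order.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

lam_A = np.linalg.eigvalsh(A)        # ascending order
lam_B = np.linalg.eigvalsh(B)

inner = np.sum(A * B)                # trace inner product <A, B>
lower = np.sum(lam_A * lam_B[::-1])  # minimal scalar product of eigenvalues
upper = np.sum(lam_A * lam_B)        # maximal scalar product of eigenvalues
print(lower <= inner <= upper)       # True
\end{verbatim}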
We now have the necessary tools for proving boundedness and obtain the
following convergence result.
\begin{theorem}
\ensuremath{\mathcal{L}eftarrow}bel{thm:2paramcluster}
Let $\bar{\alpha} >0$. For every sequence $\{ \alpha_{k} \}_{k \in {\mathbb N}} \subset (0,\bar{\alpha}]$ such that $\alpha_k \searrow 0$, there exists a subsequence $\{\alpha_{\ell}\}_{\ell \in {\mathbb N}}$ such that
\[
(X(\alpha_{\ell}),y(\alpha_{\ell}),Z(\alpha_{\ell})) \rightarrow
(\bar{X},\bar{y},\bar{Z}) \in \Sc^np \times {\mathbb{R}^m\,} \times \Sc^np
\]
with $\bar{X} \in {\rm Re}\,lint(\mathcal{F})$ and $\bar{Z} = {\mathcal A}^*(\bar{y})$.
\end{theorem}
\begin{proof}
Let $b(\alpha) r{\alpha} > 0$ and $\{\alpha_k\}_{k\in {\mathbb N}}$ be as in the hypothesis. We may without loss of generality assume that $X(\alpha_k) \rightarrow b(\alpha) r{X} \in \mathcal{F}$ due to Lemma~{\rm Re}\,f{lem:primalconverge}. Let $k \in {\mathbb N}$. Combining the upper bound of Lemma~{\rm Re}\,f{lem:technicalbounded} with the lower bound of Lemma~{\rm Re}\,f{lem:eigenvaluebound} we have
\[
\sum_{i=1}^n \ensuremath{\mathcal{L}eftarrow}mbda_i(X(\alpha_{k})^{-1}) \ensuremath{\mathcal{L}eftarrow}mbda_{n+1-i}(\hat{X}+\alpha_{k} I) \le M.
\]
Since the left hand side is a sum of positive terms, the inequality applies to each term:
\[
\ensuremath{\mathcal{L}eftarrow}mbda_i(X(\alpha_{k})^{-1}) \ensuremath{\mathcal{L}eftarrow}mbda_{n+1-i}(\hat{X}+\alpha_{k} I) \le M, \quad \forall i \in \{1,\dotso,n\}.
\]
Equivalently,
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:dualconverge}
\ensuremath{\mathcal{L}eftarrow}mbda_i(X(\alpha_{k})^{-1}) \le \frac{M}{ \ensuremath{\mathcal{L}eftarrow}mbda_{n+1-i}(\hat{X}) + \alpha_{k}}, \quad \forall i \in \{1,\dotso,n\}.
\end{equation}
Now exactly $r$ eigenvalues of $\hat{X}$ are positive. Thus for $i \in \{n-r+1,\dotso,n\}$ we have
\[
\ensuremath{\mathcal{L}eftarrow}mbda_i(X(\alpha_{k})^{-1}) \le \frac{M}{ \ensuremath{\mathcal{L}eftarrow}mbda_{n+1-i}(\hat{X}) + \alpha_{k}} \le \frac{M}{ \ensuremath{\mathcal{L}eftarrow}mbda_{n+1-i}(\hat{X})},
\]
and we conclude that the $r$ smallest eigenvalues of $X(\alpha_{k})^{-1}$ are bounded above. Consequently, there are at least $r$ eigenvalues of $X(\alpha_{k})$ that are bounded away from 0 and $\ensuremath{\mathbb{R}ightarrow}nk(b(\alpha) r{X}) {\mathcal G} e r$. On the other hand $b(\alpha) r{X} \in \mathcal{F}$ and $\ensuremath{\mathbb{R}ightarrow}nk(b(\alpha) r{X}) \le r$ and it follows that $b(\alpha) r{X} \in {\rm Re}\,lint (\mathcal{F})$.
Now we show that $Z(\alpha_{k})$ is a bounded sequence. Indeed, from \eqref{eq:dualconverge} we have
\[
\lVert Z(\alpha_{k}) \rVert_2 = \alpha_{k}\ensuremath{\mathcal{L}eftarrow}mbda_1(X(\alpha_{k})^{-1}) \le \alpha_{k}\frac{M}{ \ensuremath{\mathcal{L}eftarrow}mbda_n(\hat{X}) + \alpha_{k}} = \alpha_{k}\frac{M}{ \alpha_{k}} = M.
\]
The second-to-last equality follows from the assumption that $\hat{X} \in \Sc^np \setminus \Sc^npp$, i.e., $\lambda_n(\hat{X}) = 0$. Now there exists a subsequence $\{\alpha_{\ell}\}_{\ell \in {\mathbb N}}$ such that
\[
Z(\alpha_{\ell}) \rightarrow b(\alpha) r{Z}, \ X(\alpha_{\ell}) \rightarrow b(\alpha) r{X}.
\]
Moreover, for each $\ell$, there exists a unique $y(\alpha_{\ell})\in {\mathbb{R}^m\,}$ such that $Z(\alpha_{\ell}) = {\mathcal A}^*(y(\alpha_{\ell}))$ and since ${\mathcal A}$ is surjective, there exists $b(\alpha) r{y} \in {\mathbb{R}^m\,}$ such that $y(\alpha_{\ell}) \rightarrow b(\alpha) r{y}$ and $b(\alpha) r{Z} = {\mathcal A}^*(b(\alpha) r{y})$. Lastly, the sequence $Z(\alpha_{\ell})$ is contained in the closed cone $\Sc^np$ hence $b(\alpha) r{Z} \in \Sc^np$, completing the proof.
\end{proof}
We conclude this section by proving that the parametric path is smooth
and has a limit point as $\alpha \searrow 0$. Our proof relies on the
following lemma of Milnor and is motivated by an analogous proof for the
central path of \textbf{SDP}\, in \cite{Halicka:01,HalickaKlerkRoos:01}. Recall
that an \emph{algebraic set} is the solution set of a system of finitely many polynomial equations.
\begin{lemma}[Milnor \cite{mi68}]
\ensuremath{\mathcal{L}eftarrow}bel{lem:milnor}
Let ${\mathcal V} \subseteq \mathbb{R}k$ be an algebraic set and ${\mathcal U} \subseteq \mathbb{R}k$ be an open set defined by finitely many polynomial inequalities. Then if $0 \in \cl ({\mathcal U} \cap {\mathcal V} )$ there exists $\varepsilon > 0$ and a real analytic curve $p :[0,\varepsilon) \rightarrow \mathbb{R}k$ such that $p(0)=0$ and $p(t) \in {\mathcal U} \cap {\mathcal V} $ whenever $t > 0$.
\end{lemma}
\begin{theorem}
\ensuremath{\mathcal{L}eftarrow}bel{thm:2paramconverge}
There exists $(\bar{X},\bar{y},\bar{Z}) \in \Sc^np \times {\mathbb{R}^m\,} \times \Sc^np$ with all the properties of Theorem~\ref{thm:2paramcluster} such that
\[
\lim_{\alpha \searrow 0} (X(\alpha),y(\alpha),Z(\alpha)) = (\bar{X},\bar{y},\bar{Z}).
\]
\end{theorem}
\begin{proof}
Let $(b(\alpha) r{X},b(\alpha) r{y},b(\alpha) r{Z})$ be a cluster point of the parametric path as in Theorem~{\rm Re}\,f{thm:2paramcluster}. We define the set ${\mathcal U} $ as
\[
{\mathcal U} := \{(X,y,Z, \alpha) \in \Sc^n \times {\mathbb{R}^m\,} \times \Sc^n \times \mathbb{R} : b(\alpha) r{X} + X \succ 0, \ b(\alpha) r{Z} + Z \succ 0, \ Z = {\mathcal A}^*(y), \ \alpha > 0 \}.
\]
Note that each of the positive definite constraints is equivalent to $n$
strict determinant (polynomial) inequalities. Therefore, ${\mathcal U} $ satisfies the assumptions of Lemma~{\rm Re}\,f{lem:milnor}. Next, let us define the set ${\mathcal V} $ as,
\[
{\mathcal V} := \left \{ (X,y,Z,\alpha) \in \Sc^n \times {\mathbb{R}^m\,} \times \Sc^n \times \mathbb{R}: \begin{bmatrix}
{\mathcal A}^*(y) - Z \\
{\mathcal A}(X) + \alpha{\mathcal A}(I) \\
(b(\alpha) r{Z}+Z)(b(\alpha) r{X} + X) - \alpha I
\end{bmatrix} = 0 \right \},
\]
and note that ${\mathcal V} $ is indeed a real algebraic set. Next we show that
there is a one-to-one correspondence between ${\mathcal U} \cap {\mathcal V} $ and the
parametric path without any of its cluster points. Consider
$(\tilde{X},\tilde{y},\tilde{Z},\tilde{\alpha}) \in {\mathcal U} \cap {\mathcal V} $ and let
$(X(\tilde{\alpha}),y(\tilde{\alpha}),Z(\tilde{\alpha}))$ be a point on
the parametric path. We show that
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:2paramfirst}
(b(\alpha) r{X} + \tilde{X}, b(\alpha) r{y} + \tilde{y}, b(\alpha) r{Z} + \tilde{Z}) =
(X(\tilde{\alpha}),y(\tilde{\alpha}),Z(\tilde{\alpha})).
\end{equation}
First of all $b(\alpha) r{X} + \tilde{X} \succ 0$ and $b(\alpha) r{Z} + \tilde{Z}
\succ 0$ by inclusion in ${\mathcal U} $. Secondly, $(b(\alpha) r{X} + \tilde{X}, b(\alpha) r{y}
+ \tilde{y}, b(\alpha) r{Z} + \tilde{Z})$ solves the system \eqref{eq:scaledoptimality}
when $\alpha = \tilde{\alpha}$:
\[
\begin{bmatrix}
{\mathcal A}^*(b(\alpha) r{y} + \tilde{y}) - (b(\alpha) r{Z} + \tilde{Z}) \\
{\mathcal A}(b(\alpha) r{X} + \tilde{X}) - b(\tilde{\alpha}) \\
(b(\alpha) r{Z} + \tilde{Z})(b(\alpha) r{X} + \tilde{X}) - \tilde{\alpha}I
\end{bmatrix} = \begin{bmatrix}
{\mathcal A}^*(b(\alpha) r{y}) - b(\alpha) r{Z} + ({\mathcal A}^*(\tilde{y}) - \tilde{Z}) \\
b +\tilde{\alpha}{\mathcal A}(I) - b(\tilde{\alpha}) \\
0
\end{bmatrix} = \begin{bmatrix}
0 \\
0 \\
0
\end{bmatrix}.
\]
Since \eqref{eq:scaledoptimality} has a unique solution, \eqref{eq:2paramfirst} holds. Thus,
\[
(\tilde{X},\tilde{y},\tilde{Z}) = (X(\alpha) - b(\alpha) r{X},y(\alpha) - b(\alpha) r{y}, Z(\alpha)-b(\alpha) r{Z}),
\]
and it follows that ${\mathcal U} \cap {\mathcal V} $ is a translation of the parametric path (without its cluster points):
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:2paramsecond}
{\mathcal U} \cap {\mathcal V} = \{(X,y,Z,\alpha) \in \Sc^n \times {\mathbb{R}^m\,} \times \Sc^n \times \mathbb{R} : (X,y,Z) = (X(\alpha) - b(\alpha) r{X},y(\alpha) - b(\alpha) r{y}, Z(\alpha)-b(\alpha) r{Z}), \ \alpha > 0 \}.
\end{equation}
Next, we show that $0 \in \cl({\mathcal U} \cap {\mathcal V} )$. To see this, note that
\[
(X(\alpha),y(\alpha),Z(\alpha)) \rightarrow (b(\alpha) r{X},b(\alpha) r{y},b(\alpha) r{Z}),
\]
as $\alpha \searrow 0$ along a subsequence. Therefore, along the same subsequence, we have
\[
( X(\alpha) - b(\alpha) r{X}, y(\alpha) - b(\alpha) r{y}, Z(\alpha) - b(\alpha) r{Z}, \alpha) \rightarrow 0.
\]
Each of the elements of this subsequence belongs to ${\mathcal U} \cap {\mathcal V} $ by \eqref{eq:2paramsecond} and therefore $0 \in \cl({\mathcal U} \cap {\mathcal V} )$.
We have shown that ${\mathcal U} $ and ${\mathcal V} $ satisfy all the assumptions of
Lemma~{\rm Re}\,f{lem:milnor}, hence there exists $\varepsilon > 0$ and an analytic curve $p:
[0,\varepsilon) \rightarrow \Sc^n \times {\mathbb{R}^m\,} \times \Sc^n \times \mathbb{R}$ such that $p(0) = 0$ and $p(t) \in {\mathcal U} \cap {\mathcal V} $ for $t > 0$. Let
\[
p(t) = (X_{(t)},y_{(t)},Z_{(t)},\alpha_{(t)}),
\]
and observe that by \eqref{eq:2paramsecond}, we have
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:2paramthird}
(X_{(t)},y_{(t)},Z_{(t)},\alpha_{(t)}) = (X(\alpha_{(t)}) - b(\alpha) r{X},y(\alpha_{(t)}) - b(\alpha) r{y}, Z(\alpha_{(t)})-b(\alpha) r{Z}).
\end{equation}
Since $p$ is a real analytic curve, the map $g: [0,\varepsilon) \rightarrow \mathbb{R}$ defined as $g(t) = \alpha_{(t)},$ is a differentiable function on the open interval $(0,\varepsilon)$ with
\[
\lim_{t\searrow 0} g(t) = 0.
\]
In particular, this implies that there is an interval
$[0,b(\alpha) r{\varepsilon}) \subseteq [0,\varepsilon)$ where $g$ is monotone.
It follows that on $[0,b(\alpha) r{\varepsilon})$, $g^{-1}$ is a well defined
continuous function that converges to $0$ from the right. Note that for
any $t > 0$, $(X(t),y(t),Z(t))$ is on the parametric path. Therefore,
\[
\lim_{t\searrow 0}X(t) = \lim_{t\searrow 0} X(g(g^{-1}(t))) = \lim_{t\searrow 0} X(\alpha_{(g^{-1}(t))}).
\]
Substituting with \eqref{eq:2paramthird}, we have
\[
\lim_{t\searrow 0}X(t) = \lim_{t\searrow 0} X_{(g^{-1}(t))} + b(\alpha) r{X} = b(\alpha) r{X}.
\]
Similarly, $y(t)$ and $Z(t)$ converge to $b(\alpha) r{y}$ and $b(\alpha) r{Z}$
respectively. Thus every cluster point of the parametric path is identical
to $(b(\alpha) r{X},b(\alpha) r{y},b(\alpha) r{Z})$.
\end{proof}
We have shown that the tail of the parametric path is smooth and it has
a limit point. Smoothness of the entire path follows from Berge's
Maximum Theorem, \cite{MR1464690}, or \cite[Example 5.22]{MR1491362}.
\subsection{Convergence to the Analytic Center}
\ensuremath{\mathcal{L}eftarrow}bel{sec:analyticcenter}
The results of the previous section establish that the primal part of the
parametric path converges to a point of ${\rm Re}\,lint (\mathcal{F})$, which therefore
has exactly $r$ positive eigenvalues. If the smallest positive
eigenvalue is very small it may be difficult to distinguish it from zero
numerically. Therefore it is desirable for the limit point to be
`substantially' in the relative interior, in the sense that its smallest
positive eigenvalue is relatively large. The analytic center has this
property and so a natural question is whether the limit point coincides
with the analytic center. In the following modification of an example
of \cite{HalickaKlerkRoos:01}, the parametric path converges to a point different from the analytic center.
\begin{example}
\ensuremath{\mathcal{L}eftarrow}bel{ex:noncvg}
Consider the \textbf{SDP}\, feasibility problem where ${\mathcal A}$ is defined by
\[S_1 := \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{bmatrix},\,\, S_2 :=
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
\end{bmatrix}, \,\, S_3 := \begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}, \]
\[S_4 := \begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{bmatrix},\,\, S_5 := \begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix},
\]
and $b := (1,0,0,0,0)^T$. One can verify that the feasible set consists of positive semidefinite matrices of the form
\[X= \begin{bmatrix}
1-x_{22} & x_{12} & 0 & 0 \\
x_{12} & x_{22} & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{bmatrix}.\]
The analytic center is the maximizer of the determinant of the leading $2\times 2$ block over this set; it satisfies $x_{22}=0.5$ and $x_{12}=0$. However, the parametric path converges to a matrix with $x_{22} = 0.6$ and $x_{12} = 0$. To see this, note that
\[{\mathcal A}(I) = \begin{pmatrix}2 & 1 & 1 & 0 & 1\end{pmatrix}^T,\quad b(\alpha) = \begin{pmatrix}1 + 2\alpha & \alpha & \alpha & 0 & \alpha \end{pmatrix}^T.\]
By feasibility, ${X(\alpha)}$ has the form
\[ \begin{bmatrix}
1+2\alpha-x_{22} & x_{12} & x_{13} & x_{14} \\
x_{12} & x_{22} & 0 & \frac{1}{2}(\alpha-x_{33}) \\
x_{13} & 0 & x_{33} & 0 \\
x_{14} & \frac{1}{2}(\alpha-x_{33}) & 0 & \alpha \\
\end{bmatrix}.\]
Moreover, the optimality conditions of Theorem~\ref{thm:maxdet} indicate that ${X(\alpha)}^{-1} \in \ensuremath{\mathbb{R}ightarrow}nge({\mathcal A}^*)$ and hence is of the form
\[ \begin{bmatrix}
* & 0 & 0 & 0 \\
0 & * & * & * \\
0& * & * & * \\
0 & * & * & * \\
\end{bmatrix}.\]
It follows that $x_{12}=x_{13}=x_{14} = 0$ and ${X(\alpha)}$ has the form
\[
\begin{bmatrix}
1 + 2\alpha -x_{22} & 0 & 0 & 0 \\
0 & x_{22} & 0 & \frac{1}{2}(\alpha-x_{33}) \\
0 & 0 & x_{33} & 0 \\
0 & \frac{1}{2}(\alpha-x_{33}) & 0 & \alpha \\
\end{bmatrix}.
\]
Of all the matrices with this form, ${X(\alpha)}$ is the one maximizing the determinant, that is,
\begin{align*}
([{X(\alpha)}]_{22}, [{X(\alpha)}]_{33})^T = \arg \max \ & x_{33}(1+2\alpha - x_{22})(\alpha x_{22} - \frac{1}{4}(\alpha - x_{33})^2), \\
s.t. \ & 0 < x_{22} < 1+2\alpha, \\
& x_{33} > 0, \\
& \alpha x_{22} > \frac{1}{4}(\alpha - x_{33})^2.
\end{align*}
Due to the strict inequalities, the maximizer is a stationary point of the objective function. Computing the derivative with respect to $x_{22}$ and $x_{33}$ we obtain the equations
\begin{align*}
x_{33}(-(\alpha x_{22} - \frac{1}{4}(\alpha - x_{33})^2) + \alpha(1+2\alpha-x_{22}) &= 0, \\
(1+2\alpha-x_{22})((\alpha x_{22} - \frac{1}{4}(\alpha - x_{33})^2) + \frac{1}{2}x_{33}(\alpha-x_{33})) &= 0.
\end{align*}
Since $x_{33} > 0$ and $(1+2\alpha-x_{22}) > 0$, we may divide them out. Then solving each equation for $x_{22}$ we get
\begin{align}
\ensuremath{\mathcal{L}eftarrow}bel{ex:first}
x_{22} &= \frac{1}{8\alpha}(\alpha - x_{33})^2 + \alpha + \frac{1}{2}, \\
\ensuremath{\mathcal{L}eftarrow}bel{ex:second}
x_{22} &= \frac{1}{4\alpha}(\alpha - x_{33})^2 - \frac{1}{2\alpha}x_{33}(\alpha-x_{33}).
\end{align}
Substituting \eqref{ex:first} into \eqref{ex:second} we get
\begin{align*}
0 &= \frac{1}{4\alpha}(\alpha - x_{33})^2 - \frac{1}{2\alpha}x_{33}(\alpha-x_{33}) - \frac{1}{8\alpha}(\alpha - x_{33})^2 - \alpha - \frac{1}{2}, \\
&= \frac{1}{8\alpha}(\alpha - x_{33})^2 - \frac{1}{2}x_{33} + \frac{1}{2\alpha}x_{33}^2 - \alpha - \frac{1}{2}, \\
&= \frac{1}{8\alpha}x_{33}^2 - \frac{1}{4}x_{33} +\frac{1}{8}\alpha - \frac{1}{2}x_{33} + \frac{1}{2\alpha}x_{33}^2 - \alpha - \frac{1}{2}, \\
&= \frac{5}{8\alpha}x_{33}^2 - \frac{3}{4}x_{33} +\frac{1}{8}\alpha - \alpha - \frac{1}{2}, \\
\end{align*}
Now we solve for $x_{33}$,
\begin{align*}
x_{33} &= \frac{\frac{3}{4} {\mathcal P\,}m \sqrt{ \frac{9}{16} - 4(\frac{5}{8\alpha})(\frac{1}{8}\alpha - \alpha - \frac{1}{2})}}{2\frac{5}{8\alpha}}, \\
&= \frac{3\alpha}{5} {\mathcal P\,}m \frac{4\alpha}{5}\sqrt{ \frac{11\alpha + 5}{4\alpha}}, \\
&= \frac{1}{5}(3\alpha + 2\sqrt{\alpha}\sqrt{ 11\alpha + 5}).
\end{align*}
Since $x_{33} > 0$ is required, we take the $+$ sign in the quadratic formula; $x_{33}$ is then fully determined by the stationarity conditions, so $[{X(\alpha)}]_{33} = x_{33}$ and $[{X(\alpha)}]_{33} \rightarrow 0$ as $\alpha \searrow 0$. Substituting this expression for $x_{33}$ into \eqref{ex:first} we get
\begin{align*}
[{X(\alpha)}]_{22} &= \frac{1}{8\alpha}(\alpha - \frac{1}{5}(3\alpha + 2\sqrt{\alpha}\sqrt{ 11\alpha + 5}))^2 + \alpha + \frac{1}{2}, \\
&= \frac{1}{8\alpha}(\alpha^2 - 2\alpha \frac{1}{5}(3\alpha + 2\sqrt{\alpha}\sqrt{ 11\alpha + 5}) + \frac{1}{25}(9\alpha^2 + 12\alpha \sqrt{\alpha}\sqrt{ 11\alpha + 5} + 4\alpha(11\alpha+5))) + \alpha + \frac{1}{2}, \\
&= \frac{1}{8}\alpha - \frac{1}{20}(3\alpha + 2\sqrt{\alpha}\sqrt{ 11\alpha + 5}) + \frac{1}{200}(9\alpha + 12 \sqrt{\alpha}\sqrt{ 11\alpha + 5} + 4(11\alpha+5)) + \alpha + \frac{1}{2}, \\
&= \frac{31}{25}\alpha - \frac{1}{25}\sqrt{\alpha}\sqrt{ 11\alpha + 5} + \frac{3}{5}.
\end{align*}
Now it is clear that $[{X(\alpha)}]_{22} \rightarrow 0.6$ as $\alpha \searrow 0$.
\end{example}
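The closed forms derived in the example are easily evaluated numerically; the sketch below (ours) confirms that $[{X(\alpha)}]_{33} \rightarrow 0$ and $[{X(\alpha)}]_{22} \rightarrow 0.6$, while the analytic center has $x_{22} = 0.5$.
\begin{verbatim}
import numpy as np

def x33(a):
    # stationary value of [X(alpha)]_33 derived above (positive root)
    return (3 * a + 2 * np.sqrt(a) * np.sqrt(11 * a + 5)) / 5

def x22(a):
    # substitute x33 back into the first stationarity equation
    return (a - x33(a)) ** 2 / (8 * a) + a + 0.5

for a in [1e-1, 1e-3, 1e-6]:
    print(a, x33(a), x22(a))
# x33 -> 0 and x22 -> 0.6, whereas the analytic center has x22 = 0.5
\end{verbatim}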
\subsubsection{A Sufficient Condition for Convergence to the Analytic Center}
\ensuremath{\mathcal{L}eftarrow}bel{sec:sufficientanalytic}
Recall that $\operatorname{face}(\mathcal{F}) = V\Sc^rp V^T$. To simplify the discussion we may assume that $V = \begin{bmatrix} I \\ 0 \end{bmatrix}$, so that
\begin{equation}
\label{eq:facialstructure}
\operatorname{face}(\mathcal{F}) = \begin{bmatrix}
\Sc^rp &0 \\
0 & 0
\end{bmatrix}.
\end{equation}
This follows from the rich automorphism group of $\Sc^np$, that is, for any full rank $W\in \mathbb{R}^{n \times n}$, we have $W\Sc^np W^T = \Sc^np$. Moreover, it is easy to see that there is a one-to-one correspondence between relative interior points under such transformations.
Let us now express $\mathcal{F}$ in terms of $\nul({\mathcal A})$. Let $A_0 \in \mathcal{F}$ and
recall that $A_1,\dotso, A_q$, with $q=t(n)-m$, form a basis for $\nul ({\mathcal A})$; then
{\mathcal Ind} ex{$\nul({\mathcal A})=\spanl \{ A_1,\dotso,A_q\}$}
\[
\mathcal{F} = \left( A_0 + \spanl \{ A_1,\dotso,A_q\} \right) \cap \Sc^np.
\]
Similarly,
\[
\mathcal{F}a = \left( \alpha I + A_0 + \spanl \{ A_1,\dotso,A_q\} \right) \cap \Sc^np.
\]
Next, let us partition $A_i$ according to the block structure of \eqref{eq:facialstructure}:
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:partNi}
A_i = \begin{bmatrix} L_i & M_i \cr M_i^T & N_i \end{bmatrix} , \quad i\in \{0, \ldots , q\}.
\end{equation}
Since $A_0 \in \mathcal{F}$, from \eqref{eq:facialstructure} we have $N_0 = 0$ and $M_0 = 0$. Much of the subsequent discussion focuses on the linear pencil $\sum_{i=1}^q x_iN_i$. Let ${\mathcal N\,}$ be the linear mapping such that
\[
\nul ({\mathcal N\,}) = \left \{ \sum_{i=1}^q x_iN_i : x \in \mathbb{R}q \right \}.
\]
\begin{lemma}
\ensuremath{\mathcal{L}eftarrow}bel{lem:maxdetN}
Let $\{N_1,\dotso,N_q\}$ be as in \eqref{eq:partNi},
$\spanl \{N_1,\dotso,N_q\} \cap \Sc^np = \{0\}$, and let
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:Q}
Q := \arg \max \{\log \det (X): X = I + \sum_{i=1}^q x_i N_i \succ 0, \ x \in \mathbb{R}q \}.
\end{equation}
Then for all $\alpha >0$,
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:alphaQ}
\alpha Q = \arg \max \{\log \det (X): X = \alpha I + \sum_{i=1}^q x_i N_i \succ 0, \ x \in \mathbb{R}q \}.
\end{equation}
\end{lemma}
\begin{proof}
We begin by expressing $Q$ in terms of ${\mathcal N\,}$:
\[
Q = \arg \max \{\log \det (X): {\mathcal N\,}(X) = {\mathcal N\,}(I) \}.
\]
By the assumption on the span of the matrices $N_i$ and by Lemma~{\rm Re}\,f{lem:boundedchar}, the feasible set of \eqref{eq:Q} is bounded. Moreover, the feasible set contains positive definite matrices, hence all the assumptions of Theorem~{\rm Re}\,f{thm:maxdet} are satisfied. It follows that $Q$ is the unique feasible, positive definite matrix satisfying $Q^{-1} \in \ensuremath{\mathbb{R}ightarrow}nge( {\mathcal N\,}^*)$.
Moreover, $\alpha Q$ is positive definite, feasible for \eqref{eq:alphaQ},
and $(\alpha Q)^{-1} \in \ensuremath{\mathbb{R}ightarrow}nge({\mathcal N\,}^*)$. Therefore $\alpha Q$ is optimal
for \eqref{eq:alphaQ}.
\end{proof}
Now we prove that the parametric path converges to the analytic center under the condition of Lemma~{\rm Re}\,f{lem:maxdetN}.
\begin{theorem}
\ensuremath{\mathcal{L}eftarrow}bel{thm:analyticcenter}
Let $\{N_1,\dotso,N_q\}$ be as in \eqref{eq:partNi}.
If $\spanl \{N_1,\dotso,N_q\} \cap \Sc^np = \{0\}$ and $\bar{X}$ is the limit point of the primal part of the parametric path as in Theorem~\ref{thm:2paramconverge}, then $\bar{X} = \hat{X}$.
\end{theorem}
\begin{proof}
Let
\[
\bar{X} =: \begin{bmatrix}
\bar{Y} & 0 \\
0 & 0
\end{bmatrix},\ \hat{X} =: \begin{bmatrix}
\hat{Y} & 0 \\
0 & 0
\end{bmatrix}
\]
and suppose, for eventual contradiction, that $\bar{Y} \ne \hat{Y}$. Then let $r,s \in \mathbb{R}$ be such that
\[
\det(\bar{Y}) < r < s < \det(\hat{Y}).
\]
Let $Q$ be as in Lemma~{\rm Re}\,f{lem:maxdetN} and let $x \in \mathbb{R}q$ satisfy $Q = I + \sum_{i=1}^q x_iN_i$. Now for any $\alpha >0$ we have
\[
\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i) = \begin{pmatrix} \hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_i L_i & \alpha \sum_{i=1}^q x_iM_i \cr \alpha \sum_{i=1}^q x_i M_i^T & \alpha Q \end{pmatrix} .
\]
Note that there exists $\varepsilon >0$ such that $\hat{X} + \alpha \sum_{i=1}^q x_iA_i \succeq 0$ whenever $\alpha \in (0,\varepsilon)$. It follows that
\[
\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i) \in \mathcal{F}a, \quad \forall \alpha \in (0,\varepsilon).
\]
Taking the determinant, we have
\begin{align*}
\frac{1}{\alpha^{n-r}} \det (\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i)) &= \frac{1}{\alpha^{n-r}}\det \left( \alpha Q-\alpha^2 (\sum_{i=1}^q x_iM_i ) (\hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_iL_i )^{-1} (\sum_{i=1}^q x_iM_i^T) \right) \\
&\qquad \qquad \times\det (\hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_iL_i), \\
&= \det \left( Q-\alpha (\sum_{i=1}^q x_iM_i ) (\hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_iL_i )^{-1} (\sum_{i=1}^q x_iM_i^T) \right) \\
&\qquad \qquad \times\det (\hat{Y}+\alpha I + \alpha \sum_{i=1}^q x_iL_i).
\end{align*}
Now we have
\[
\lim_{\alpha \searrow 0} \ \frac{1}{\alpha^{n-r}} \det (\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i)) = \det(Q)\det(\hat{Y}).
\]
Thus, there exists $\sigma \in (0,\varepsilon)$ so that for $\alpha \in (0,\sigma )$ we have
\[
\det (\hat{X} + \alpha( I + \sum_{i=1}^q x_i A_i)) > s \alpha^{n-r} \det (Q) .
\]
As ${X(\alpha)}$ is the determinant maximizer over $\mathcal{F}a$, we also have
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:detX}
\det( {X(\alpha)}) > s \alpha^{n-r} \det( Q ), \quad \forall \alpha \in (0, \sigma ).
\end{equation}
On the other hand ${X(\alpha)} \rightarrow \bar{X}$, and let
\[
{X(\alpha)} =: \begin{bmatrix}
\alpha I + \sum_{i=1}^q x(\alpha)_i L_i & \sum_{i=1}^q x(\alpha)_i M_i \\
\sum_{i=1}^q x(\alpha)_i M^T_i & \alpha I + \sum_{i=1}^q x(\alpha)_i N_i
\end{bmatrix}.
\]
Then $\alpha I + \sum_{i=1}^q x(\alpha)_i L_i \rightarrow \bar{Y}$ and there exists $\delta \in (0,\sigma)$ such that for all $\alpha \in (0,\delta)$,
\[
\det(\alpha I + \sum_{i=1}^q x(\alpha)_i L_i) < r.
\]
Moreover, by definition of $Q$,
\[
\det(\alpha I + \sum_{i=1}^q x(\alpha)_i N_i) \le \det(\alpha Q) = \alpha^{n-r} \det(Q).
\]
To complete the proof, we apply the Hadamard-Fischer inequality to $\det({X(\alpha)})$. For $\alpha \in (0,\delta)$ we have
\[
\det({X(\alpha)}) \le \det(\alpha I + \sum_{i=1}^q x(\alpha)_i L_i)\det(\alpha I + \sum_{i=1}^q x(\alpha)_i N_i) < r\alpha^{n-r} \det( Q),
\]
a contradiction of \eqref{eq:detX}.
\end{proof}
\begin{remark}
Note that Example {\rm Re}\,f{ex:noncvg} fails the hypotheses of Theorem
{\rm Re}\,f{thm:analyticcenter}. Indeed, the matrix
$\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & 2 & 0 \\
0 & -1 & 0 & 0 \\
\end{bmatrix}$
lies in $\nul({\mathcal A})$ and the bottom $2\times 2$ block is nonzero and positive
semidefinite.
\end{remark}
\section{The Projected Gauss-Newton Method}
\ensuremath{\mathcal{L}eftarrow}bel{sec:projGN}
We have constructed a parametric path that converges to a point in the
relative interior of $\mathcal{F}$. In this section we propose an algorithm to
follow the path to its limit point. We do not prove convergence of the
proposed algorithm and address its performance in
Section~{\rm Re}\,f{sec:numerics}. We follow the (projected) Gauss-Newton approach (the
nonlinear analog of Newton's method) originally introduced for \textbf{SDP}\,s in
\cite{KrMuReVaWo:98} and improved more recently in \cite{KrukDoanW:10}.
This approach has been shown to have improved robustness compared to
other symmetrization approaches. For well posed problems,
the Jacobian for the search direction remains full rank in the limit to
the optimum.
\subsection{Scaled Optimality Conditions}
The idea behind this approach is to view the system defining the
parametric path as an overdetermined
map and use the Gauss-Newton (GN) method for nonlinear systems.
In the process, the linear feasibility equations are eliminated and the
GN method is applied to the remaining bilinear equation.
For $\alpha {\mathcal G} e0$ let $G_{\alpha}: \Sc^np \times {\mathbb{R}^m\,} \times \Sc^np \rightarrow
\Sc^n \times {\mathbb{R}^m\,} \times \mathbb{R}^{n \times n}$ be defined as
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:GdefGN}
G_{\alpha}(X,y,Z):=
\begin{bmatrix}
{\mathcal A}^*(y)-Z \\
{\mathcal A}(X) -b(\alpha) \\
ZX - \alpha I \\
\end{bmatrix}.
\end{equation}
The solution to $G_{\alpha}(X,y,Z)= 0$ is exactly $({X(\alpha)},{y(\alpha)},{Z(\alpha)})$ when
$\alpha > 0$; and for $\alpha = 0$ the solution set is
\[
\mathcal{F} \times ({\mathcal A}^*)^{-1}({\mathcal D} ) \times {\mathcal D} , \quad {\mathcal D} := \ensuremath{\mathbb{R}ightarrow}nge({\mathcal A}^*) \cap ~ \; \forallce(\mathcal{F})^c.
\]
Clearly, the limit point of the parametric path satisfies $G_0(X,y,Z) = 0$. We fix $\alpha >
0$. The GN direction, $(dX,dy,dZ)$, uses the
overdetermined \textdef{GN system}
{\mathcal Ind} ex{$(dX,dy,dZ)$, Gauss-Newton direction}
{\mathcal Ind} ex{Gauss-Newton direction, $(dX,dy,dZ)$}
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:GNorig}
G_{\alpha}'(X,y,Z)\begin{bmatrix}
dX \\
dy \\
dZ
\end{bmatrix} = -G_{\alpha}(X,y,Z).
\end{equation}
Note that the search direction is a strict descent direction for the norm of
the residual, $\| \kvec(G_{\alpha}(X,y,Z)) \|_2^2$, when the Jacobian is full rank.
The size of the problem is then reduced by projecting out the first two equations. We are left with a single linearization of the bilinear
complementarity equation, i.e.,~$n^2$ equations in only $t(n)$ variables.
The \textdef{least squares solution} yields the projected
GN direction after backsolves.
We prefer steps of length $1$, however, the primal and dual step lengths, $\alpha_p$ and $\alpha_d$ respectively,
are reduced, when necessary,
to ensure strict feasibility: $X + \alpha_p dX \succ 0$ and
$Z+\alpha_d dZ \succ 0$.
The parameter $\alpha$ is then reduced and the
procedure repeated. On the parametric path, $\alpha$ satisfies
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:alpharep}
\alpha = \frac{\ensuremath{\mathcal{L}eftarrow}ngle {Z(\alpha)}, {X(\alpha)} \ensuremath{\mathbb{R}ightarrow}ngle }{n}.
\end{equation}
Therefore, this is a good estimate of the target for $\alpha$ near the
parametric path. As is customary, we then use a fixed $\sigma \in (0,1)$ to
move the target towards optimality, $\alpha \leftarrow \sigma \alpha$.
\subsubsection{Linearization and GN Search Direction}
For the purposes of this discussion we vectorize the variables and data
in $G_{\alpha}$. Let $A \in \mathbb{R}^{m\times t(n)}$ be the matrix representation
of ${\mathcal A}$, that is
{\mathcal Ind} ex{$A$, matrix representation}
{\mathcal Ind} ex{matrix representation, $A$}
\[
A_{i,:} := \svec(S_i)^T, \quad i\in \{1,\dotso,m\}.
\]
Let $N \in \mathbb{R}^{t(n)\times (t(n)-m)}$ be such that its columns form a
basis for $\nul (A)$ and let $\hat{x}$ be a particular solution to
$Ax=b(\alpha) $, e.g., the least squares solution. Then the affine manifold
determined from the equation ${\mathcal A}(X)=b(\alpha) $ is equivalent to that
obtained from the equation
\[
x = \hat{x} + Nv, \quad v\in \mathbb{R}^{t(n)-m}.
\]
Moreover, if $z:=\svec(Z)$, we have the vectorization
\begin{equation}
g_{\alpha}(x,v,y,z) := \begin{bmatrix}
A^Ty - z \\
x-\hat{x}-Nv \\
\sMat(z)\sMat(x) - \alpha I
\end{bmatrix} =: \begin{bmatrix}
r_d \\
r_p \\
R_c
\end{bmatrix},
\ensuremath{\mathcal{L}eftarrow}bel{eq:systemg}
\end{equation}
Now we show how the first two equations of the above system may be
projected out, thereby reducing the size of the problem. First we have
\[
g'_{\alpha}(x,v,y,z)\begin{pmatrix}
dx \\
dv \\
dy \\
dz
\end{pmatrix} = \begin{bmatrix}
A^Tdy - dz \\
dx - Ndv \\
\sMat(dz)\sMat(x) + \sMat(z)\sMat(dx)
\end{bmatrix},
\]
and it follows that the GN step as in \eqref{eq:GNorig} is the least squares solution of the system
\[
\begin{bmatrix}
A^Tdy - dz \\
dx - Ndv \\
\sMat(dz)\sMat(x) + \sMat(z)\sMat(dx)
\end{bmatrix} = - \begin{bmatrix}
r_d \\
r_p \\
R_c \\
\end{bmatrix}.
\]
Since the first two equations are linear, we get $dz = A^Tdy+r_d$ and $dx = Ndv - r_p$. Substituting into the third equation we have,
\[
\sMat(A^Tdy + r_d)\sMat(x) + \sMat(z)\sMat(Ndv - r_p) = -R_c.
\]
After moving all the constants to the right hand side we obtain the projected GN system in $dy$ and $dv$,
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:projGN}
\sMat(A^Tdy)\sMat(x) + \sMat(z)\sMat(Ndv) = -R_c + \sMat(z)\sMat(r_p) - \sMat(r_d)\sMat(x).
\end{equation}
The least squares solution to this system is the exact GN direction when $r_d = 0$ and $r_p=0$, otherwise it is an approximation. We then use the equations $dz = A^Tdy+r_d$ and $dx = Ndv - r_p$ to obtain search directions for $x$ and $z$.
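For concreteness, the following NumPy sketch assembles and solves the projected GN system \eqref{eq:projGN}. It is a minimal illustration under the stated conventions (constraint matrices $S_i$, a null-space basis $N$ in $\svec$ coordinates, and residuals $r_d,r_p$ given as $\svec$ vectors), with function names of our own choosing rather than code from any existing solver.
\begin{verbatim}
import numpy as np

def svec(S):
    # stack the upper triangle; off-diagonal entries scaled by sqrt(2) (isometry)
    iu = np.triu_indices(S.shape[0])
    return np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0)) * S[iu]

def sMat(s, n):
    # inverse of svec
    iu = np.triu_indices(n)
    S = np.zeros((n, n))
    S[iu] = s / np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return S + S.T - np.diag(np.diag(S))

def projected_gn_direction(S_list, N, X, Z, r_d, r_p, R_c):
    # assemble and solve the projected GN least-squares system in (dy, dv),
    # then recover dz = A^T dy + r_d and dx = N dv - r_p by back-substitution
    n = X.shape[0]
    m, q = len(S_list), N.shape[1]
    J = np.zeros((n * n, m + q))
    for i, Si in enumerate(S_list):           # column for dy_i:  S_i X
        J[:, i] = (Si @ X).ravel()
    for j in range(q):                        # column for dv_j:  Z sMat(N_j)
        J[:, m + j] = (Z @ sMat(N[:, j], n)).ravel()
    rhs = (-R_c + Z @ sMat(r_p, n) - sMat(r_d, n) @ X).ravel()
    sol, *_ = np.linalg.lstsq(J, rhs, rcond=None)
    dy, dv = sol[:m], sol[m:]
    dz = sum(dy_i * svec(Si) for dy_i, Si in zip(dy, S_list)) + r_d
    dx = N @ dv - r_p
    return dx, dy, dz
\end{verbatim}
Here the $n^2\times t(n)$ Jacobian is formed explicitly and handed to a dense least-squares solver; a practical implementation would exploit structure, but the sketch makes the construction of the columns explicit.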
In \cite[Theorem 1]{KrukDoanW:10}, it is proved that if the solution
set of $G_0(X,y,Z) = 0$ is a singleton such that $X+Z \succ 0$ and the
starting point of the projected GN algorithm is sufficiently
close to the parametric path then the algorithm, with a crossover
modification, converges quadratically.
As we showed
above, the solution set to our problem is
\[
\mathcal{F} \times ({\mathcal A}^*)^{-1}({\mathcal D} ) \times {\mathcal D} ,
\]
which is not a singleton as long as $\mathcal{F} \ne \emptyset$. Indeed, ${\mathcal D} $ is a non-empty cone. Although the convergence result of \cite{KrukDoanW:10} does not apply to our problem, their numerical tests indicate that the algorithm converges even for problems violating the strict complementarity and uniqueness assumptions and our observations agree.
\subsection{Implementation Details}
Several specific implementation modifications are used. We begin with initial $x,v,y,z$ with corresponding $X,Z\succ 0$. If we obtain $P \succ 0$ as in Proposition~{\rm Re}\,f{prop:boundtest} then we set $Z = P$ and define $y$ accordingly, otherwise $Z = X = I$. We
estimate $\alpha$ using \eqref{eq:alpharep} and set $\alpha \leftarrow
2\alpha$ to ensure that our target is somewhat well centered to start.
\subsubsection{Step Lengths and Linear Feasibility}
We start with initial step lengths $\alpha_p=\alpha_d=1.1$ and then
backtrack using a Cholesky factorization test to ensure positive definiteness
\[
X+\alpha_p dX \succ 0, \quad Z+\alpha_d dZ \succ 0.
\]
If the step length we find is still $>1$ after the backtrack,
we set it to $1$ and first update $v,y$ and then update $x,z$ using
\[
x=\hat x + N v, \quad z=A^Ty.
\]
This ensures exact linear feasibility. Thus we find that we maintain exact dual
feasibility after a few iterations. Primal feasibility changes since
$\alpha$ decreases. We have experimented with including an extra few iterations at the end of the algorithm
with a fixed $\alpha$ to obtain exact primal feasibility (for the given $\alpha$). In most cases the improvement of feasibility with respect to $\mathcal{F}$ was minimal and not worth the extra computational cost.
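A minimal version of the Cholesky-based backtracking described above might look as follows; the shrink factor and tolerances are illustrative choices, not the exact values used in our implementation.
\begin{verbatim}
import numpy as np

def is_pd(M):
    # Cholesky-based test for positive definiteness
    try:
        np.linalg.cholesky((M + M.T) / 2)
        return True
    except np.linalg.LinAlgError:
        return False

def step_length(X, dX, start=1.1, shrink=0.9, min_step=1e-8):
    # backtrack from an initial trial step of 1.1 until X + t*dX is PD;
    # steps longer than 1 are truncated to 1, as described in the text
    t = start
    while t > min_step and not is_pd(X + t * dX):
        t *= shrink
    return min(t, 1.0)
\end{verbatim}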
\subsubsection{Updating $\alpha$ and Expected Number of Iterations}
In order to drive $\alpha$ down to zero, we fix $\sigma \in (0,1)$ and
update $\alpha$ as $\alpha \leftarrow \sigma \alpha$. We use a moderate $\sigma =
.6$. However, if this reduction is performed too
quickly then our step lengths end up being too small and we get too close
to the positive semidefinite boundary. Therefore, we change $\alpha$
using information from $\min \{\alpha_p,\alpha_d\}$. If the steplength
is reasonably near $1$ then we decrease using $\sigma$; if the steplength is
around $.5$ then we leave $\alpha$ as is; if the steplength is small
then we \emph{increase} to $1.2\alpha$; and if the steplength is tiny
($<.1$), we increase to $2\alpha$. For most of the test problems,
this strategy resulted in steplengths of $1$ after the first few
iterations.
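The heuristic can be summarized in a few lines; the thresholds below are illustrative values consistent with the description above, not the exact ones used in our code.
\begin{verbatim}
def update_alpha(alpha, min_step, sigma=0.6):
    # min_step = min(alpha_p, alpha_d) from the current iteration
    if min_step < 0.1:      # tiny step: back off substantially
        return 2.0 * alpha
    if min_step < 0.4:      # small step: back off mildly
        return 1.2 * alpha
    if min_step < 0.8:      # moderate step: keep the current target
        return alpha
    return sigma * alpha    # step near 1: decrease toward zero
\end{verbatim}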
We noted empirically that the condition number of the Jacobian for the
least squares problem increases quickly, i.e.,~several singular values
converge to zero. Despite this we are able to
obtain high accuracy search directions.\footnote{Our algorithm finds
the search direction using \eqref{eq:projGN}. If we looked at a singular
value decomposition then we get the equivalent system $\Sigma
(V^T d\bar s) = (U^T RHS)$. We observed that several singular
values in $\Sigma$ converge to zero while the corresponding elements
in $(U^T RHS)$ converge to zero at a similar rate. This accounts for the
improved accuracy despite the huge condition numbers. This appears to be a
similar phenomenon to that observed in the analysis of interior point
methods in \cite{MR99i:90093,MR96f:65055} and as discussed in
\cite{GoWo:04}.}
Since we typically have steplengths of $1$, $\alpha$ is generally decreased using $\sigma$. Therefore, for a desired tolerance
$\epsilon$ and a starting $\alpha =1$ we would want $\sigma^k < \epsilon$, or equivalently,
\[
\quad k > \log_{10} (\epsilon)/\log_{10}(\sigma).
\]
For our $\sigma=.6$ and $t$ decimals of desired accuracy, we expect to need
approximately $4.5t$ iterations.
\section{Generating Instances and Numerical Results}
\ensuremath{\mathcal{L}eftarrow}bel{sec:numerics}
In this section we analyze the performance of an implementation of our algorithm. We begin with a discussion on generating spectrahedra. A particular challenge is in creating spectrahedra with specified singularity degree. Following this discussion, we present and analyze the numerical results.
\subsection{Generating Instances with Varying Singularity Degree}
\ensuremath{\mathcal{L}eftarrow}bel{sec:generating}
Our method for generating instances is motivated by the approach of \cite{WeiWolk:06} for generating \textbf{SDP}\,s with varying \emph{complementarity gaps}. We begin by proving a relationship between strict complementarity of a primal-dual pair of \textbf{SDP}\, problems and the singularity degree of the optimal set of the primal \textbf{SDP}\,. This relationship allows us to modify the code presented in \cite{WeiWolk:06} and obtain spectrahedra having various singularity degrees. Recall the primal \textbf{SDP}\,
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{prob:sdpprimalcopy}
\textbf{SDP}\, \qquad \qquad \textdef{$p^{\star}$}:=\min \{ \ensuremath{\mathcal{L}eftarrow}ngle C,X\ensuremath{\mathbb{R}ightarrow}ngle : {\mathcal A}(X)=b, X\succeq 0\},
\end{equation}
with dual
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{prob:sdpdualcopy}
\textbf{D-SDP}\, \qquad \qquad \textdef{$d^{\star}$}:=\min \{ b^Ty : {\mathcal A}^*(y) {\mathcal P\,}receq C \}.
\end{equation}
Let $O_P\subseteq \Sc^np$ and $O_D\subseteq \Sc^np$ denote the primal and dual optimal sets respectively, where the dual optimal set is with respect to the variable $Z$. Specifically,
\[
O_P := \{X\in \Sc^np : {\mathcal A}(X) = b, \ \ensuremath{\mathcal{L}eftarrow}ngle C, X\ensuremath{\mathbb{R}ightarrow}ngle = p^{\star} \}, \ O_D := \{ Z \in \Sc^np : Z = C-{\mathcal A}^*(y), \ b^Ty = d^{\star}, \ y \in {\mathbb{R}^m\,} \}.
\]
Note that $O_P$ is a spectrahedron determined by the affine manifold
\[
\begin{bmatrix}
{\mathcal A}(X) \\
\ensuremath{\mathcal{L}eftarrow}ngle C, X \ensuremath{\mathbb{R}ightarrow}ngle
\end{bmatrix} = \begin{pmatrix}
b \\
p^*
\end{pmatrix}.
\]
We note that the second system in the theorem of the alternative, Theorem~{\rm Re}\,f{thm:alternative}, for the spectrahedron $O_P$ is
\begin{equation}
\ensuremath{\mathcal{L}eftarrow}bel{eq:opalternative}
0 \ne \tau C + {\mathcal A}^*(y) \succeq 0, \ \tau p^{\star} + y^Tb = 0.
\end{equation}
We say that \emph{strict complementarity} holds for \textbf{SDP}\, and \textbf{D-SDP}\, if there exists $X^{\star} \in O_P$ and $Z^{\star}\in O_D$ such that
\[
\ensuremath{\mathcal{L}eftarrow}ngle X^{\star}, Z^{\star} \ensuremath{\mathbb{R}ightarrow}ngle = 0 \text{ and } \ensuremath{\mathbb{R}ightarrow}nk(X^{\star}) + \ensuremath{\mathbb{R}ightarrow}nk(Z^{\star}) = n.
\]
If strict complementarity does not hold for \textbf{SDP}\, and \textbf{D-SDP}\, and there exist $X^{\star} \in {\rm Re}\,lint(O_P)$ and $Z^{\star} \in {\rm Re}\,lint(O_D)$, then we define the complementarity gap as
\[
g := n - \rank(X^{\star}) - \rank(Z^{\star}).
\]
Now we describe the relationship between strict complementarity of \textbf{SDP}\, and \textbf{D-SDP}\, and the singularity degree of $O_P$.
\begin{prop}
\label{prop:scsd}
If strict complementarity holds for
\textbf{SDP}\, and \textbf{D-SDP}\,, then $\sd(O_P) \le 1$.
\end{prop}
\begin{proof}
Let $X^{\star} \in \relint(O_P)$. If $X^{\star} \succ 0$, then $\sd(O_P) = 0$ and we are done. Thus we may assume $\rank(X^{\star}) < n$. By strict complementarity, there exists $(y^{\star},Z^{\star}) \in {\mathbb{R}^m\,} \times \Sc^np$ feasible for \textbf{D-SDP}\, with $Z^{\star} \in O_D$ and $\rank(X^{\star}) + \rank(Z^{\star}) = n$. Now we show that $(1,-y^{\star})$ satisfies \eqref{eq:opalternative}. Indeed, by dual feasibility,
\[
C - {\mathcal A}^*(y^{\star}) = Z^{\star} \in \Sc^np \setminus \{0\},
\]
and by complementary slackness,
\[
p^{\star} - (y^{\star})^Tb = \langle X^{\star}, C \rangle - \langle {\mathcal A}^*(y^{\star}), X^{\star} \rangle = \langle X^{\star},Z^{\star} \rangle = 0.
\]
Finally, since $\rank(X^{\star}) + \rank(Z^{\star}) = n$ we have $\sd(O_P) = 1$, as desired.
\end{proof}
From the perspective of facial reduction, the interesting spectrahedra are those with singularity degree greater than zero, and the above proposition gives us a way to construct spectrahedra with singularity degree exactly one. Using the algorithm of \cite{WeiWolk:06} we construct strictly complementary \textbf{SDP}\,s and then use the optimal set of the primal to construct a spectrahedron with singularity degree exactly one. Specifically, given positive integers $n, m, r,$ and $g$ the algorithm of \cite{WeiWolk:06} returns the data ${\mathcal A},b,C$ corresponding to a primal-dual pair of \textbf{SDP}\,s, together with $X^{\star} \in \relint(O_P)$ and $Z^{\star} \in \relint(O_D)$ satisfying
\[
\rank(X^{\star}) = r, \ \rank(Z^{\star}) = n-r-g.
\]
Now if we set
\[
\hat{{\mathcal A}}(X) := \begin{pmatrix}
{\mathcal A}(X) \\
\langle C, X \rangle
\end{pmatrix}, \ \hat{b} = \begin{pmatrix}
b \\
\langle C, X^{\star} \rangle
\end{pmatrix},
\]
then $O_P = \mathcal{F}(\hat{{\mathcal A}},\hat{b})$. Moreover, if $g=0$ then $\sd(O_P) = 1$, by Proposition~\ref{prop:scsd}.
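As a small illustration of this construction, the following Python/NumPy sketch (hypothetical code, not from \cite{WeiWolk:06}; it assumes the constraint map ${\mathcal A}$ is stored as a list of symmetric matrices $A_i$ with ${\mathcal A}(X)_i = \langle A_i, X\rangle$) appends the objective as an extra linear constraint to form $(\hat{{\mathcal A}},\hat{b})$.
\begin{verbatim}
import numpy as np

def augment_system(A_list, b, C, X_star):
    """Append <C, X> = <C, X*> to A(X) = b, so that the feasible set of
    (A_hat, b_hat) together with X >= 0 is the primal optimal set O_P."""
    A_hat = list(A_list) + [C]
    b_hat = np.append(b, np.sum(C * X_star))   # <C, X*> = p*
    return A_hat, b_hat

# usage sketch with placeholder data of the right shapes
n, m = 5, 3
rng = np.random.default_rng(0)
A_list = [(M + M.T) / 2 for M in rng.standard_normal((m, n, n))]
C = np.eye(n)
X_star = np.eye(n)                             # placeholder primal optimal solution
b = np.array([np.sum(A * X_star) for A in A_list])
A_hat, b_hat = augment_system(A_list, b, C, X_star)
print(len(A_hat), b_hat.shape)                 # m + 1 constraints
\end{verbatim}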
This approach could also be used to create spectrahedra with larger singularity degrees by constructing \textbf{SDP}\,s with greater complementarity gaps, if the converse of Proposition~\ref{prop:scsd} were true. We provide a sufficient condition for the converse in the following proposition.
\begin{prop}
\label{prop:sdscconverse}
If $\sd(O_P)=0$, then strict complementarity holds for \textbf{SDP}\, and \textbf{D-SDP}\,. Moreover, if $\sd(O_P) = 1$ and the set of solutions to \eqref{eq:opalternative} intersects $\mathbb{R}_{++} \times {\mathbb{R}^m\,}$, then strict complementarity holds for \textbf{SDP}\, and \textbf{D-SDP}\,.
\end{prop}
\begin{proof}
Since we have only defined singularity degree for non-empty spectrahedra, there exists $X^{\star} \in \relint(O_P)$. For the first statement, by Theorem~\ref{thm:strongduality}, there exists $Z^{\star} \in O_D$. Complementary slackness always holds, hence $\langle Z^{\star}, X^{\star} \rangle = 0$ and since $X^{\star}\succ 0$ we have $Z^{\star}=0$. It follows that $\rank(X^{\star}) + \rank(Z^{\star}) = n$ and strict complementarity holds for \textbf{SDP}\, and \textbf{D-SDP}\,.
For the second statement, let $(\bar{\tau},\bar{y})$ and $(\tilde{\tau},\tilde{y})$ be solutions to \eqref{eq:opalternative} with $\bar{\tau} >0$ and $\tilde{\tau} C +{\mathcal A}^*(\tilde{y})$ of maximal rank. Let
\[
\bar{Z} := \bar{\tau}C + {\mathcal A}^*(\bar{y}), \ \tilde{Z} := \tilde{\tau} C +{\mathcal A}^*(\tilde{y}).
\]
Then there exists $\varepsilon > 0$ such that $\bar{\tau} + \varepsilon \tilde{\tau} >0$ and $\rank(\bar{Z} + \varepsilon \tilde{Z}) \ge \rank(\tilde{Z})$. Define
\[
\tau := \bar{\tau} + \varepsilon \tilde{\tau}, \ y := \bar{y} + \varepsilon \tilde{y}, \ Z := \bar{Z} + \varepsilon \tilde{Z}.
\]
Now $(\tau,y)$ is a solution to \eqref{eq:opalternative}, i.e.,
\[
0 \ne \tau C + {\mathcal A}^*(y) \succeq 0, \ \tau p^{\star} + y^Tb = 0.
\]
Moreover, $\rank(X^{\star}) + \rank(Z) = n$ since $\sd(O_P) = 1$ and $Z$ is of maximal rank. Now we define
\[
Z^{\star} := \frac{1}{\tau} Z = C - {\mathcal A}^*\left(-\frac{1}{\tau}y\right).
\]
Since $\tau>0$, it is clear that $Z^{\star} \succeq 0$ and it follows that $\left(-\frac{1}{\tau} y, Z^{\star}\right)$ is feasible for \textbf{D-SDP}\,. Moreover, this point is optimal since
\[
d^{\star} \ge -\frac{1}{\tau} y^Tb = p^{\star}\ge d^{\star}.
\]
Therefore $Z^{\star} \in O_D$ and since $\rank(Z^{\star}) = \rank(Z)$, strict complementarity holds for \textbf{SDP}\, and \textbf{D-SDP}\,.
\end{proof}
\subsection{Numerical Results}
\label{sec:numericsreal}
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
$n$ & $m$ & $r$ & $\lambda_1(X)$ & $\lambda_r(X)$ & $\lambda_{r+1}(X)$ & $\lambda_n(X)$ & $\lVert {\mathcal A}(X)-b \rVert_2$ & $\langle Z,X\rangle$ & $\alpha_f$ \cr\hline
50 & 100 & 25 & 1.06e+02 & 2.80e+01 & 1.97e-11 & 5.07e-13 & 3.17e-12 & 1.26e-13 & 1.10e-12 \cr\hline
80 & 160 & 40 & 8.74e+01 & 3.22e+01 & 1.20e-10 & 9.00e-13 & 7.28e-12 & 2.95e-13 & 2.01e-12 \cr\hline
110 & 220 & 55 & 7.74e+01 & 3.73e+01 & 3.56e-10 & 7.23e-13 & 9.12e-12 & 3.65e-13 & 2.14e-12 \cr\hline
140 & 280 & 70 & 7.82e+01 & 3.84e+01 & 4.11e-10 & 7.08e-13 & 1.26e-11 & 5.20e-13 & 2.65e-12 \cr\hline
\end{tabular}
\caption{Results for the case $\sd=1$. The eigenvalues refer to those of the primal variable, $X$, and each entry is the average of five runs.}
\label{tab:sd1}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
$n$ & $m$ & $r$ & $\lambda_1(Z)$ & $\lambda_{r_d}(Z)$ & $\lambda_{r_d+1}(Z)$ & $\lambda_n(Z)$ \cr\hline
50 & 100 & 25 & 1.85e+00 & 9.07e-02 & 3.96e-14 & 1.27e-14 \cr\hline
80 & 160 & 40 & 1.96e+00 & 6.91e-02 & 6.23e-14 & 2.30e-14 \cr\hline
110 & 220 & 55 & 1.98e+00 & 2.61e-02 & 5.77e-14 & 2.78e-14 \cr\hline
140 & 280 & 70 & 2.03e+00 & 2.46e-02 & 6.96e-14 & 3.39e-14 \cr\hline
\end{tabular}
\caption{Eigenvalues of the dual variable, $Z$, corresponding to the primal variable of Table~\ref{tab:sd1}. Each entry is the average of five runs.}
\label{tab:sd1dual}
\end{table}
\begin{table}[h!]
\centering
\tiny{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
$n$ & $m$ & $r$ & $g$ & $\lambda_1(X)$ & $\lambda_r(X)$ & $\lambda_{r+1}(X)$ & $\lambda_{r+g}(X)$ & $\lambda_{r+g+1}(X)$ & $\lambda_n(X)$ & $\lVert {\mathcal A}(X)-b \rVert_2$ & $\langle Z,X\rangle$ & $\alpha_f$ \cr\hline
50 & 100 & 17 & 5 & 9.89e+01 & 1.85e+01 & 6.62e-05 & 2.61e-05 & 2.13e-10 & 6.10e-13 & 4.99e-12 & 2.04e-13 & 1.07e-12 \cr\hline
80 & 160 & 27 & 8 & 1.11e+02 & 2.00e+01 & 1.89e-05 & 1.28e-05 & 7.36e-11 & 5.17e-13 & 8.40e-12 & 2.73e-13 & 1.27e-12 \cr\hline
110 & 220 & 37 & 11 & 1.09e+02 & 2.42e+01 & 3.52e-05 & 2.33e-05 & 2.05e-10 & 1.52e-12 & 1.92e-11 & 6.46e-13 & 2.33e-12 \cr\hline
140 & 280 & 47 & 14 & 1.63e+02 & 2.64e+01 & 1.07e-04 & 2.65e-05 & 1.02e-10 & 1.17e-13 & 9.84e-12 & 3.52e-13 & 1.48e-12 \cr\hline
\end{tabular}
}
\caption{Results for the case $\sd=2$. The eigenvalues refer to those of the primal variable, $X$, and each entry is the average of five runs.}
\label{tab:sd2}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
$n$ & $m$ & $r$ & $g$ & $\lambda_1(Z)$ & $\lambda_{r_d}(Z)$ & $\lambda_{r_d+1}(Z)$ & $\lambda_{r_d+g}(Z)$ & $\lambda_{r_d+g+1}(Z)$ & $\lambda_n(Z)$ \cr\hline
50 & 100 & 17 & 5 & 2.22e+00 & 2.51e-02 & 1.04e-07 & 8.38e-08 & 9.18e-14 & 1.51e-14 \cr\hline
80 & 160 & 27 & 8 & 2.03e+00 & 3.65e-02 & 1.03e-07 & 7.45e-08 & 7.92e-14 & 1.69e-14 \cr\hline
110 & 220 & 37 & 11 & 2.13e+00 & 6.11e-02 & 1.78e-07 & 1.23e-07 & 1.36e-13 & 2.76e-14 \cr\hline
140 & 280 & 47 & 14 & 2.19e+00 & 4.16e-02 & 7.39e-08 & 4.35e-08 & 6.04e-14 & 8.14e-15 \cr\hline
\end{tabular}
\caption{Eigenvalues of the dual variable, $Z$, corresponding to the primal variable of Table~\ref{tab:sd2}. Each entry is the average of five runs.}
\label{tab:sd2dual}
\end{table}
For the numerical tests, we generate instances with $n \in \{50,80,110,140\}$ and $m=2n$. These are problems of small size relative to state-of-the-art capabilities; nonetheless, they allow us to demonstrate the performance of our algorithm. In Table~\ref{tab:sd1} and Table~\ref{tab:sd1dual} we record the results for the case $\sd=1$. For each instance, specified by $n$, $m,$ and $r$, the results are the average of five runs. By $r$, we denote the maximum rank over all elements of the generated spectrahedron, which is fixed to $r=n/2$. In Table~\ref{tab:sd1} we record the relevant eigenvalues of the primal variable, primal feasibility, complementarity, and the value of $\alpha$ at termination, denoted $\alpha_f$. The values for primal feasibility and complementarity are sufficiently small, and it is clear from the eigenvalues presented that the first $r$ eigenvalues are significantly larger than the last $n-r$. These results demonstrate that the algorithm returns a matrix which is very close to the relative interior of $\mathcal{F}$. In Table~\ref{tab:sd1dual} we record the relevant eigenvalues for the corresponding dual variable, $Z$. Note that $r_d := n-r$, and the eigenvalues recorded in the table indicate that $Z$ is indeed an exposing vector. Moreover, it is a maximal rank exposing vector. While we have not proved this, we observed it to be true for every test we ran with $\sd = 1$.
In Table~\ref{tab:sd2} and Table~\ref{tab:sd2dual} we record similar values for problems where the singularity degree may be greater than $1$. Using the approach described in Section~\ref{sec:generating} we generate instances of \textbf{SDP}\, and \textbf{D-SDP}\, having a complementarity gap of $g$ and then we construct our spectrahedron from the optimal set of \textbf{SDP}\,. By Proposition~\ref{prop:scsd} and Proposition~\ref{prop:sdscconverse} the resulting spectrahedron may have singularity degree greater than 1. We observe that primal feasibility and complementarity are attained to an accuracy similar to that of the $\sd=1$ case. The eigenvalues of the primal variable fall into three categories: the first $r$ eigenvalues are sufficiently large so as not to be confused with $0$, the last $n-r-g$ eigenvalues are convincingly small, and the third group of eigenvalues, exactly $g$ of them, are such that it is difficult to decide whether or not they should be $0$. A similar phenomenon is observed for the eigenvalues of the dual variable. This demonstrates that exactly $g$ of the eigenvalues are converging to $0$ at a significantly slower rate than the other $n-r-g$ small eigenvalues.
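To make the three groups above precise in practice one needs numerical thresholds; the following small NumPy sketch (the threshold values below are our own illustrative choices, not the ones used to produce the tables) counts how many eigenvalues fall into each group.
\begin{verbatim}
import numpy as np

def classify_eigenvalues(eigs, tol_zero=1e-9, tol_large=1e-3):
    """Split eigenvalues into clearly nonzero, ambiguous, and clearly zero."""
    eigs = np.asarray(eigs)
    large = int(np.sum(eigs > tol_large))
    zero = int(np.sum(eigs < tol_zero))
    return large, eigs.size - large - zero, zero

# magnitudes in the spirit of Table 3 (n = 50, r = 17, g = 5)
sample = np.concatenate([np.full(17, 1e1), np.full(5, 3e-5), np.full(28, 1e-11)])
print(classify_eigenvalues(sample))            # (17, 5, 28)
\end{verbatim}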
\section{An Application to PSD Completions of Simple Cycles}
\label{sec:psdcyclecompl}
In this final section, we show that our parametric path and the relative interior point it converges to have interesting structure for cycle completion problems.
Let $G=(V,E)$ be an undirected graph with $n = \lvert V \rvert$ and let
$a \in \mathbb{R}^{\lvert E \rvert}$. Let us index the components of $a$ by the
elements of $E$. A matrix $X \in \Sc^n$ is a \textdef{completion} of $G$
under $a$ if $X_{ij} = a_{ij}$ for all $\{i,j\} \in E$. We say that $G$
is \textdef{partially PSD} under $a$ if there exists a completion of $G$
under $a$ such that all of its principal submatrices consisting entirely of
$a_{ij}$ are PSD. Finally, we say that $G$ is \textdef{PSD completable}
if for all $a$ such that $G$ is partially PSD, there exists a PSD
completion. Recall that a graph is \textdef{chordal} if every cycle with at
least four vertices has a chord, that is, an edge connecting two non-consecutive vertices of the cycle.
The classical result of \cite{GrJoSaWo:84} states that $G$ is PSD
completable if, and only if, it is chordal.
An interesting problem for non-chordal graphs is to characterize the
vectors $a$ for which $G$ admits a PSD completion. Here we consider PSD
completions of non-chordal cycles with loops. This problem was first
looked at in \cite{MR1236734}, where the following special case is presented.
\begin{theorem}[Corollary 6, \cite{MR1236734}]
\label{thm:simplecycle}
Let $n\ge 4$ and $\theta, \phi \in [0,\pi]$. Then
\begin{equation}\label{simple} C := \begin{bmatrix}
1 & \cos(\theta) & & & \cos(\phi) \\
\cos(\theta) & 1 & \cos(\theta) & ? & \\
& \cos(\theta) & 1 & \ddots & \\
& ? & \ddots & \ddots & \cos(\theta) \\
\cos(\phi) & & & \cos(\theta) & 1
\end{bmatrix}, \end{equation}
has a positive semidefinite completion if, and only if,
$$ \phi \le (n-1)\theta \le (n-2)\pi + \phi \qquad \text{for $n$ even}$$
and
$$ \phi \le (n-1)\theta \le (n-1)\pi - \phi \qquad \text{for $n$ odd.}$$
The partial matrix \eqref{simple} has a positive definite completion if,
and only if, the above inequalities are strict.
\end{theorem}
Using the results of the previous sections we present an analytic expression for exposing vectors in the case where a PSD completion exists but not a PD one, i.e., the Slater CQ does not hold for the corresponding \textbf{SDP}\,. We begin by showing that the primal part of the parametric path is always Toeplitz. In general, for a partial Toeplitz matrix, the unique maximum determinant completion is not necessarily Toeplitz. For instance, the maximum determinant completion of
$$ \begin{bmatrix} 6 & 1 & x & 1 & 1 \cr 1 & 6 & 1 & y & 1 \cr x & 1 & 6 & 1 & z \cr 1 & y & 1 & 6 & 1 \cr 1 & 1 & z & 1 & 6 \end{bmatrix} $$
is given by $x=z=0.3113$ and $y=0.4247$.
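The stated values can be checked numerically; the following SciPy sketch (our own verification script, not code from the literature) maximizes the log-determinant of the completion over the three free entries.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def completion(v):
    x, y, z = v
    return np.array([[6, 1, x, 1, 1],
                     [1, 6, 1, y, 1],
                     [x, 1, 6, 1, z],
                     [1, y, 1, 6, 1],
                     [1, 1, z, 1, 6]], dtype=float)

def neg_logdet(v):
    sign, logdet = np.linalg.slogdet(completion(v))
    return np.inf if sign <= 0 else -logdet     # penalize non-PD completions

res = minimize(neg_logdet, x0=np.zeros(3), method="Nelder-Mead")
print(np.round(res.x, 4))   # approximately [0.3113, 0.4247, 0.3113]
\end{verbatim}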
\begin{theorem}\label{md}
If the partial matrix
\[
P := \begin{bmatrix}
a & b & & & c \\
b & a & b & ? & \\
& b & a & \ddots & \\
& ? & \ddots & \ddots & b \\
c & & & b & a
\end{bmatrix}
\]
has a positive definite completion, then the unique maximum determinant completion is Toeplitz.
\end{theorem}
First we present the following technical lemma. Let $J_n \in \Sc^n$ be the matrix with ones on the antidiagonal and zeros everywhere else, that is, $[J_n]_{ij} = 1$ when $i+j=n+1$ and zero otherwise. For instance, $J_2=\begin{bmatrix} 0 & 1 \cr 1 & 0 \end{bmatrix}$.
\begin{lemma}\label{persymm} If $A$ is the maximum determinant completion of $P$, then $A=JAJ$.
\end{lemma}
\begin{proof} As $A$ is a completion of $P$, so is $JAJ$. Furthermore, $\det (A) = \det (JAJ)$. Since the maximum determinant completion is unique, we must have that $A=JAJ$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{md}]
The proof is by induction on the size $n$. When $n=4$ the result follows from Lemma~\ref{persymm}.
Suppose Theorem~\ref{md} holds for size $n-1$. Let $A$ be the maximum determinant completion of $P$.
Then by the optimality conditions of Theorem~\ref{thm:maxdet},
$$A^{-1}= \begin{bmatrix}
* & * &0 & \cdots & 0 & * \\
* & * & * & 0 & \ddots & 0\\
0 & * & * & * & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0\\
0 & \cdots & 0 & * & * & * \\
* & 0 & \cdots & 0& *& *
\end{bmatrix}. $$
Let $\alpha := A_{1,n-1}$, and consider the $(n-1)\times (n-1)$ partial matrix
\begin{equation}\label{simple2} \begin{bmatrix}
a & b & & & \alpha \\
b & a & b & ? & \\
& b & a & \ddots & \\
& ? & \ddots & \ddots & b \\
\alpha & & & b & a
\end{bmatrix}, \end{equation}
By the induction assumption, \eqref{simple2} has a Toeplitz maximum determinant completion, say $B$.
Note that
\begin{equation}\label{simple4} B^{-1}= \begin{bmatrix}
* & * &0 & \cdots & 0 & * \\
* & * & * & 0 & \ddots & 0\\
0 & * & * & * & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0\\
0 & \cdots & 0 & * & * & * \\
* & 0 & \cdots & 0& *& *
\end{bmatrix}. \end{equation}
Now consider the partial matrix
\begin{equation}\label{simple3} \begin{bmatrix} B & \begin{bmatrix} c \cr ? \cr \vdots \cr ? \cr b \end{bmatrix} \cr
\begin{bmatrix} c & ? & \cdots & ? & b \end{bmatrix} & a \end{bmatrix}
\end{equation}
Since this is a chordal pattern we only need to check that the fully prescribed principal minors are positive definite. These are $B$ and
$$ \begin{bmatrix} a & \alpha & c \cr \alpha & a & b \cr c & b & a
\end{bmatrix} , $$ the latter of which is a principal submatrix of the
positive definite matrix $A$. Thus \eqref{simple3} has a maximum determinant completion, say $C$. Then
$$C^{-1} = \begin{bmatrix} * & \begin{bmatrix} * \cr 0 \cr \vdots \cr 0 \cr * \end{bmatrix} \cr
\begin{bmatrix} * & 0 & \cdots & 0 & * \end{bmatrix} & * \end{bmatrix} = : \begin{bmatrix} L & M \cr M^T & N \end{bmatrix} . $$
By the properties of block inversion,
$$ C = \begin{bmatrix} (L-MN^{-1}M^T)^{-1} & * \cr * & * \end{bmatrix} = \begin{bmatrix} B & * \cr * & * \end{bmatrix} , $$
and it follows that $B^{-1} = L-MN^{-1}M^T$. Since $MN^{-1}M^T$ only has nonzero entries in the four corners, we obtain that
$$L=\begin{bmatrix}
* & * &0 & \cdots & 0 & * \\
* & * & * & 0 & \ddots & 0\\
0 & * & * & * & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0\\
0 & \cdots & 0 & * & * & * \\
* & 0 & \cdots & 0& *& *
\end{bmatrix}.
$$
We now see that $C^{-1}$ and $A^{-1}$ have zeros in all entries $(i,j)$ with $|i-j| >1$ and $(i,j) \not\in\{(1,n-1), (1,n), (n-1,1) , (n,1)\}$. Also, $A$ and $C$ have the same entries in positions $(i,j)$ where $|i-j|\le 1$ or where $(i,j) \in\{(1,n-1), (1,n), (n-1,1) , (n,1)\}$. But then $A$ and $C$ are two positive definite matrices where for each $(i,j)$ either $A_{ij}=C_{ij}$ or $(A^{-1})_{ij} = (C^{-1})_{ij}$, yielding that $A=C$ (see, e.g., \cite{MR1321785}). Finally, observe that the Toeplitz matrix $B$ is the $(n-1)\times (n-1)$ upper left submatrix of $C$, and that $A=JAJ$, to conclude that $A$ is Toeplitz.
\end{proof}
When \eqref{simple} has a PD completion, then this result states that the analytic center of all the completions is Toeplitz. When \eqref{simple} has a PSD completion, but not a PD completion then the primal part of the parametric path is always Toeplitz and since the Toeplitz matrices are closed, \eqref{simple} admits a maximum rank Toeplitz PSD completion. In the following proposition we see that the dual part of the parametric path has a specific form.
\begin{prop}
\label{Tinverse}
Let $T=(t_{i-j})_{i,j=1}^n$ be a positive definite real Toeplitz matrix, and suppose that $(T^{-1})_{k,1}=0$ for all $k\in \{3,\ldots , n-1\}$. Then $T^{-1}$ has the form
$$\begin{bmatrix}
a & c & 0& & d \\
c & b & c & \ddots & \\
0& c & b & \ddots &0 \\
& \ddots & \ddots & \ddots & c \\
d & & 0& c & a
\end{bmatrix},$$
with $b=\frac{1}{a} (a^2+c^2-d^2)$.
\end{prop}
\begin{proof}
Let us denote the first column of $T^{-1}$ by $\begin{bmatrix} a & c & 0 &
\cdots & 0 & d \end{bmatrix}^T$. By the \textdef{Gohberg-Semencul
formula} (see \cite{MR0353038,MR1038316}) we have that
$$ T^{-1} =\frac{1}{a} ( AA^T-BB^T ), $$
where
$$ A=\begin{bmatrix}
a & 0 & 0& & 0 \\
c & a & 0 & \ddots & \\
0& c & a & \ddots &0 \\
& \ddots & \ddots & \ddots & 0 \\
d & & 0& c & a
\end{bmatrix}, B= \begin{bmatrix}
0 & 0 & 0& & 0 \\
d & 0 & 0 & \ddots & \\
0& d & 0 & \ddots &0 \\
& \ddots & \ddots & \ddots & 0 \\
c & & 0& d & 0
\end{bmatrix}.$$
A direct computation of $\frac{1}{a}(AA^T - BB^T)$ then yields the stated form of $T^{-1}$; in particular, the $(2,2)$ entry equals $\frac{1}{a}(a^2+c^2-d^2)=b$.
\end{proof}
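The Gohberg-Semencul formula itself is easy to check numerically. The following NumPy/SciPy sketch (our own verification, using an arbitrary positive definite symmetric Toeplitz matrix) reconstructs $T^{-1}$ from the first column $x$ of $T^{-1}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n = 6
col = np.concatenate(([n + 1.0], rng.uniform(-1, 1, n - 1)))
T = toeplitz(col)                              # symmetric PD (diagonally dominant)

x = np.linalg.solve(T, np.eye(n)[:, 0])        # first column of T^{-1}
A = toeplitz(x, np.zeros(n))                   # lower triangular Toeplitz from x
x_tilde = np.concatenate(([0.0], x[:0:-1]))    # (0, x_n, ..., x_2)
B = toeplitz(x_tilde, np.zeros(n))

print(np.allclose(np.linalg.inv(T), (A @ A.T - B @ B.T) / x[0]))   # True
\end{verbatim}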
\begin{corollary}
\label{cor:expvecsimplecycle}
If the set of PSD completions of \eqref{simple} is contained in a proper face of $\Sc^np$ then there exists an exposing vector of the form
$$C_E := \begin{bmatrix}
a & c & 0& & d \\
c & b & c & \ddots & \\
0& c & b & \ddots &0 \\
& \ddots & \ddots & \ddots & c \\
d & & 0& c & a
\end{bmatrix},$$
for a face containing the completions. Moreover, $C_E$ satisfies
$$2\cos(\theta)c + b = 0 \quad \text{and} \quad a + \cos(\theta)c + \cos(\phi)d = 0.$$
\end{corollary}
\begin{proof}
Existence follows from Proposition~\ref{Tinverse}. By definition, $C_E$
is an exposing vector for the face if, and only if, $C_E \succeq 0$ and
$\langle X, C_E\rangle = 0$ for all positive semidefinite completions, $X$, of
$C$. Since $X$ and $C_E$ are positive semidefinite, we have $XC_E = 0$ and in particular $\diag(XC_E) = 0$, which
is satisfied if, and only if,
$$\cos(\theta)c + b + \cos(\theta)c = 0 \quad \text{and} \quad a + \cos(\theta)c + \cos(\phi)d = 0,$$
as desired.
\end{proof}
\section{Conclusion}
\label{sec:conclusion}
In this paper we have considered a `primal' approach to facial reduction for \textbf{SDP}\,s that reduces to finding a relative interior point of a spectrahedron. By considering a parametric optimization problem, we constructed a smooth path and proved that its limit point is in the relative interior of the spectrahedron. Moreover, we gave a sufficient condition for the relative interior point to coincide with the analytic center. We proposed a projected Gauss-Newton algorithm to follow the parametric path to the limit point and in the numerical results we observed that the algorithm converges. We also presented a method for constructing spectrahedra with singularity degree $1$ and provided a sufficient condition for constructing spectrahedra of larger singularity degree. Finally, we showed that the parametric path has interesting structure for the simple cycle completion problem.
This research has also highlighted some new problems to be pursued. We single out two such problems. The first regards the eigenvalues of the limit point that are neither sufficiently small to be deemed zero nor sufficiently large to be considered as non-zero. We have experimented with some eigenvalue deflation techniques, but none have led to a satisfactory method. Secondly, there does not seem to be a method in the literature for constructing spectrahedra with specified singularity degree.
\printindex
\addcontentsline{toc}{section}{Index}
\label{ind:index}
\addcontentsline{toc}{section}{Bibliography}
\end{document}
\begin{document}
\title{Retardation effect and dark state in a waveguide QED setup with rectangle cross section}
\author{Yang Xue}
\affiliation{National Demonstration Center for Experimental Physics Education,
Northeast Normal University, Changchun 130024, China}
\affiliation{Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun 130024, China}
\author{Zhihai Wang}
\email{[email protected]}
\affiliation{Center for Quantum Sciences and School of Physics, Northeast Normal University, Changchun 130024, China}
\begin{abstract}
In this paper, we investigate the dynamics of a two-atom system coupled to a quasi-one-dimensional waveguide with a rectangular cross section. The waveguide supports different TM and TE modes, and the TM modes serve as the environment when the dipole moments of the atoms are chosen appropriately. Such an environment induces an interaction between the atoms as well as their collective dissipation. When both atoms are located in the middle of the waveguide, we observe a retardation effect, which is broken by moving one of the atoms off center. To prevent the complete dissipation of the system via a dark-state mechanism, we propose a scheme in which the line connecting the atoms is perpendicular to the axis of the waveguide. We hope our study will be useful in quantum information processing based on state-of-the-art waveguide structures.
\end{abstract}
\maketitle
\section{introduction}
The waveguide QED, which studies the light-matter interaction in a confined structure, has attracted lots of attention due to its interesting theoretical and experimental applications~\cite{DR2017,XG2017}.
In the waveguide QED setup, how to control the photon by the (artificial) atom and vice versa is a central task in constructing a quantum network. On the one hand, the propagation of the flying photon can be controlled by the frequency of the atom, which is widely used to realize coherent quantum devices such as photon transistors~\cite{JT2005,DE2007,LZ2008}, routers~\cite{AA2010,IC2011,LZ2013,Wang2014,IS2014,CH2018,YL2022}, etc. On the other hand, the waveguide can serve as a data bus to induce interactions between different atoms~\cite{KS2012,AG2013,CG2013,HP2015,GC2016,AG2017,FG2018,HZ2019,EK2021}, which can be utilized to realize remote quantum entanglement.
Due to the possible slow velocity of light in the waveguide, the time needed for the photon propagating from one atom to the other can be comparable to the lifetime of the atom. Therefore, the retardation effect, which will induce some non-Markovian dynamics, is becoming a hot topic recently. Such retardation effect will occur in multiple atoms system~\cite{FT1970,PW1974,Qu2012,HP2016,KS2020} or only one atom in front of a mirror~\cite{JE2001,UD2002,PB2004,FD2007,AG2010,TT2013,TT2013,TT201t,IC2015,YL2015}, and even a giant atom system which interacts with the waveguide via more than one connecting point~\cite{LG2017,LG2020,LD2021}. In these setups, the atomic population usually exhibits an oscillation behavior beyond the Markovian process.
In most of the previous studies about the retardation effect in waveguide system, the waveguide is usually theoretically considered as one dimensional. Therefore, the atom is resonantly coupled with only one flying photon mode in the waveguide. However, in the realistic physical system, the waveguide can never be one dimensional. Therefore, it is productive to investigate the effect of the finite cross section.
To tackle this issue, we here discuss the dynamics of a two-atom system coupled to a waveguide with a rectangular cross section~\cite{JF,JL2021,Lj2020}. The finite cross-section area of the waveguide generates two effects: first, the waveguide supports more than one ${\rm TM}$ mode; second, whether an atom is centered or off-centered in the waveguide leads to dramatically different dynamical behavior. For example, when both atoms are centered in the waveguide, the dynamics is similar to that in a one-dimensional waveguide, and we observe the non-Markovian retardation effect. Meanwhile, we recover the Markovian process by moving one of the atoms off center. We also find a dark state when the line connecting the two atoms is perpendicular to the axis of the waveguide, in which case both atoms retain some excitation even as the evolution time tends to infinity.
The rest of the paper is organized as follows. In Sec.~\ref{model1}, we illustrate our model and give the general amplitudes equations. In Sec.~\ref{onemode}, we discuss the non-Markovian dynamics when the two atoms are both centered in the waveguide. In Sec.~\ref{twomode}, we consider the situation that one of the atom is off-centered. In Sec.~\ref{dark}, we reveal a dark state mechanism when the connection of the atoms is perpendicular to the axis of the waveguide. In Sec.~\ref{con}, we arrive at the conclusion.
\section{Model and amplitudes equations}
\label{model1}
As schematically shown in Fig.~\ref{model}(a)-(c), we consider a system composed of two two-level atoms coupled to a common waveguide with an $a\times b$ rectangular cross section, infinite in the $z$ direction. The two atoms are located at $\vec{r}_1=(x_1,y_1,z_1)$ and $\vec{r}_2=(x_2,y_2,z_2)$, respectively. The Hamiltonian of the coupled system is written as $H=H_0+H_I$ where ($\hbar=1$)
\begin{eqnarray}
H_0&=&\sum_{l=1}^{2}\omega_{a}\sigma^{+}_{l}\sigma^{-}_{l}
+\sum_j\int_{-\infty}^{\infty}dk\omega_{jk}a^{\dagger}_{jk}a_{jk},
\end{eqnarray}
describes the free energy of the atoms and the waveguide. Here, $\sigma^{+}_l=[\sigma^{-}_l]^{\dagger}=|e\rangle_l\langle g|$ is the Pauli operator of the $l$th atom. As shown in Fig.~\ref{model}(d), $\omega_a$ is the transition frequency between the atomic ground state $|g\rangle$ and excited state $|e\rangle$. $\omega_{jk}$ is the frequency of the travelling electromagnetic field mode in the waveguide. Here, the index $j$ denotes the electromagnetic field mode (see details below) and $k$ is the wave vector. $a_{jk}$ is the photon annihilation operator in the waveguide.
\begin{figure}
\caption{Schematic illustration of two atoms coupled to the waveguide with an $a\times b$ rectangular cross section. (a) The two atoms are both located on the middle axis of the waveguide. (b) One of the atoms is off-centered. (c) The line connecting the two atoms is perpendicular to the axis of the waveguide. (d) The energy-level diagram of the atoms. (e) The energy spectrum of the waveguide.}
\label{model}
\end{figure}
Within the rotating wave approximation, the interaction between the atoms and the waveguide is illustrated by the Hamiltonian
\begin{equation}
H_{I}=i\sum_{l=1}^{2}\sum_{j}\int_{-\infty}^{\infty}dk\frac{g_{jl}}
{\sqrt{\omega_{jk}}}\sigma^{-}_{l}a^{\dagger}_{jk}e^{ikz_{l}}+{\rm H.c.},
\end{equation}
where $z_l$ is the location of the $l$th atom in the $z$ direction. In this paper, we consider that the dipole moments of the atoms are along the $z$ direction; therefore, they are decoupled from the TE modes in the waveguide, that is, only the TM modes
need to be considered. For simplicity, we use a single index $j$ to denote the TM modes: $j=1$ for $m=1,n=1$, $j=2$ for $m=2,n=1$ and $j=3$ for $m=3,n=1$. Then, the atom-waveguide coupling strength and the dispersion relation of the waveguide are
\begin{eqnarray}
g_{jl}&=&\frac{\Omega_{j}\mu_j \sin(\frac{x_{l}m\pi}{a})\sin(\frac{y_{l}n\pi}{b})}{\sqrt{A\pi\epsilon_{0}}},\\
\omega_{jk}&=&\sqrt{\Omega_{j}^2+c^2k^2},\\\nonumber
\end{eqnarray}
respectively. Here $\Omega_{mn}=c\sqrt{(m\pi/a)^2+(n\pi/b)^2}$ is the cutoff frequency of a traveling wave in the ${\rm TM}_{mn}$ mode, $A=ab$ is the area of the rectangular cross section, and $|\mu_{1}|=|\mu_{2}|=|\mu|$ is the magnitude of the transition dipole moment of the atoms, which is assumed to be real. $c$ is the speed of light and $\epsilon_0$ is the permittivity of vacuum. The dispersion relation of the waveguide is shown in Fig.~\ref{model}(e), and we set $\omega_a=(\Omega_1+\Omega_3)/2$, so that the atoms are far detuned from the ${\rm TM}_{31}$ mode but resonant with the ${\rm TM}_{11}$ and ${\rm TM}_{21}$ modes at certain wave vectors.
Since the number of the quanta is conserved in our system, the wave function can be assumed as:
\begin{eqnarray}
|\psi(t)\rangle&=&e^{-i\omega_a t}[B_1(t)\sigma_1^{+}|G,0\rangle+B_2(t)\sigma_2^{+}|G,0\rangle]\nonumber\\
&&+\sum_{j}\int_{-\infty}^{\infty}dke^{-i\omega_{jk} t}B_{jk}(t)a^{\dagger}_{jk}|G,0\rangle,\\\nonumber
\end{eqnarray}
where $|G,0\rangle$ represents the state in which both atoms are in their ground states while
the waveguide is in the vacuum state. $B_1(t)$ and $B_2(t)$ represent the excitation amplitudes of the first and second atoms, while $B_{jk}$ is that of the $k$th mode of the ${\rm TM}_{mn}$ branch of the waveguide. Based on the Schr\"{o}dinger equation, these amplitudes satisfy
\begin{eqnarray}
\dot B_1(t)&=&-\sum_{j}\int_{-\infty}^{\infty}dk\frac{g_{j1}B_{jk}(t)
e^{-i(\omega_{jk}-\omega_{a})t}e^{-ikz_{1}}}{\sqrt{\omega_{jk}}},\label{B1t}\\
\dot B_2(t)&=&-\sum_{j}\int_{-\infty}^{\infty}dk\frac{g_{j2}B_{jk}(t)
e^{-i(\omega_{jk}-\omega_{a})t}e^{-ikz_{2}}}{\sqrt{\omega_{jk}}},\label{B2t}\\
\dot B_{jk}(t)&=&\frac{(B_1(t)g_{j1}+B_2(t)g_{j2}e^{ikz_{0}})e^{ikz_{1}}
e^{i(\omega_{jk}-\omega_{a})t}}{\sqrt{\omega_{jk}}},\nonumber \\
\end{eqnarray}
where $z_0=z_2-z_1$. With the waveguide initially in the vacuum state, $B_{jk}(0)=0$, the field amplitudes of the
waveguide can be obtained formally as
\begin{equation}
B_{jk}(t)=\int_{0}^{t}d\tau\frac{e^{ikz_{1}}}{\sqrt{\omega_{jk}}}
[g_{j1}B_{1}(\tau)+g_{j2}B_{2}(\tau)e^{ikz_{0}}]e^{i(\omega_{jk}-\omega_{a})\tau}.
\end{equation}
Substituting $B_{jk}(t)$ into Eqs.~(\ref{B1t}) and (\ref{B2t}), the retardation differential equations for the atomic amplitudes are obtained as
\begin{eqnarray}
&&(\partial_{t}+\sum_{j}\frac{g_{j1}^2\pi}{\omega_{a}v_{j}})B_1(t)=\nonumber\\
&&-\sum_{j}\frac{g_{j1}g_{j2}}{\omega_{a}v_{j}}B_2(t-\frac{d}{v_j})e^{ik_{j0}d}
\Theta(t-\frac{d}{v_j}),\label{A1}\\
&&(\partial_{t}+\sum_{j}\frac{g_{j2}^2\pi}{\omega_{a}v_{j}})B_2(t)\nonumber
=\\
&&-\sum_{j}\frac{g_{j1}g_{j2}}{\omega_{a}v_{j}}B_1(t-\frac{d}{v_j})e^{ik_{j0}d}
\Theta(t-\frac{d}{v_j})\label{A2}.
\end{eqnarray}
In the above equations, $k_{j0}=\sqrt{\omega_{a}^{2}-\Omega_{j}^{2}}/c$ is the wave vector of the waveguide mode which is resonant with the atoms and $d=|z_{0}|$ is the distance of the two atoms in the $z$ direction. The corresponding group velocity $v_j$ is
\begin{equation}
v_j=\frac{d\omega_{jk}}{dk}|_{k=k_{j0}}=\frac{c\sqrt{\omega_{a}^2-\Omega_{j}^2}}{\omega_{a}}.
\end{equation}
We emphasize that the Heaviside step function $\Theta(x)$, defined as $\Theta(x)=1$ for $x>0$ and $\Theta(x)=0$ for $x\leq 0$, represents the non-Markovian retardation effect, since $d/v_j$ corresponds to the time needed for a photon to propagate from one atom to the other.
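For orientation, the following Python sketch (our own; the cross-section dimensions and units are arbitrary illustrative values) evaluates the cutoff frequencies $\Omega_{mn}$, the atomic frequency $\omega_a=(\Omega_1+\Omega_3)/2$, and the resonant wave vectors $k_{j0}$ and group velocities $v_j$ of the ${\rm TM}_{11}$ and ${\rm TM}_{21}$ modes.
\begin{verbatim}
import numpy as np

c = 1.0                        # speed of light in the chosen units
a, b = 1.0, 0.5                # illustrative cross-section dimensions

def Omega(m, n):
    """Cutoff frequency of the TM_{mn} mode."""
    return c * np.sqrt((m * np.pi / a) ** 2 + (n * np.pi / b) ** 2)

omega_a = (Omega(1, 1) + Omega(3, 1)) / 2      # atomic transition frequency

for (m, n) in [(1, 1), (2, 1)]:
    k_j0 = np.sqrt(omega_a ** 2 - Omega(m, n) ** 2) / c   # resonant wave vector
    v_j = c ** 2 * k_j0 / omega_a                          # group velocity
    print(f"TM{m}{n}: Omega={Omega(m, n):.3f}, k_j0={k_j0:.3f}, v_j={v_j:.3f}")
\end{verbatim}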
\section{Two atoms centered in the waveguide}
\label{onemode}
As shown in Fig.~1(a), we now consider that the two atoms are both located in the middle of the waveguide, that is, $\vec{r}_1=(a/2,b/2,z_1)$ and $\vec{r}_2=(a/2,b/2,z_2)$. A direct observation shows that $g_{2i}=0$ for $i=1,2$. Therefore, the atoms are coupled to the ${\rm TM_{11}}$ mode in the waveguide and the coupling strengths are obtained as
\begin{equation}
g_{11}=g_{12}=\frac{\Omega_{1}\mu}{\sqrt{A\pi\epsilon_{0}}}.
\label{strength}
\end{equation}
As a result, the amplitude equation in Eqs.~(\ref{A1}) and (\ref{A2}) becomes
\begin{subequations}
\begin{eqnarray}
(\partial_{t}+\gamma_{11})B_1(t)=-\gamma_{11}B_2(t-\tau_{1})e^{ik_{10}d}\Theta(t-\tau_{1}),\nonumber \\ \\
(\partial_{t}+\gamma_{11})B_2(t)=-\gamma_{11}B_1(t-\tau_{1})e^{ik_{10}d}\Theta(t-\tau_{1}),\nonumber \\
\end{eqnarray}
\label{RE}
\end{subequations}
where $\gamma_{11}=g_{11}'^{2}\pi/v_{1}$ is the effective decay rate of the atoms (equal for each atom), $g_{11}'=g_{11}/\sqrt{\omega_{a}}$ is the renormalized coupling strength under the Weisskopf-Wigner approximation~\cite{MO1997}. $\tau_1=d/v_1$ is the delay time for the photon with group velocity $v_1$ travelling from one atom to the other in the waveguide.
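A minimal numerical illustration of Eq.~(\ref{RE}) is sketched below (our own code; the parameter values are arbitrary and chosen only to make the retardation visible): the delay differential equations are integrated with a simple Euler scheme and a history buffer for the retarded amplitudes.
\begin{verbatim}
import numpy as np

gamma, k10, v1 = 0.2, 1.0, 1.0   # decay rate, resonant wave vector, group velocity
d = 12.0 / k10                   # atomic distance, as in the d = 12/k_{j0} example
tau = d / v1                     # retardation time
dt = 1e-3
steps = int(10 * tau / dt)
delay = int(round(tau / dt))
phase = np.exp(1j * k10 * d)

B1 = np.zeros(steps + 1, dtype=complex)
B2 = np.zeros(steps + 1, dtype=complex)
B1[0] = 1.0                      # first atom initially excited

for i in range(steps):
    B1_ret = B1[i - delay] if i >= delay else 0.0   # B1(t - tau) * Theta(t - tau)
    B2_ret = B2[i - delay] if i >= delay else 0.0
    B1[i + 1] = B1[i] + dt * (-gamma * B1[i] - gamma * phase * B2_ret)
    B2[i + 1] = B2[i] + dt * (-gamma * B2[i] - gamma * phase * B1_ret)

P1, P2 = np.abs(B1) ** 2, np.abs(B2) ** 2        # atomic populations
print(P1[::steps // 5].round(4), P2[::steps // 5].round(4))
\end{verbatim}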
\begin{figure}
\caption{Dynamical evolution of the atomic populations based on the retardation differential equations in Eq.~(\ref{RE}).}
\label{onem}
\end{figure}
From the viewpoint of open quantum systems, the electromagnetic field in the waveguide serves as an environment, which induces dissipation and an indirect interaction between the atoms. Under the Born-Markovian approximation, the dynamics of the two atoms is governed by the master equation (ME)
\begin{equation}
\dot\rho=-i[\mathcal{H}_1,\rho]+\sum_{i,j=1}^{2}\frac{\Gamma_{ij}}{2}
(2\sigma_{j}^{-}\rho\sigma_{i}^{+}-\sigma_{i}^{+}\sigma_{j}^{-}\rho
-\rho\sigma_{i}^{+}\sigma_{j}^{-}),
\label{master}
\end{equation}
where the Hamiltonian $\mathcal{H}_1$ of the two-atom system reads
\begin{equation}
\mathcal{H}_1=\sum_{i=1}^{2}\omega_{a}(\sigma_i^{+}\sigma_i^{-})
+\sum_{i,j=1}^{2}\frac{U_{ij}}{2}(\sigma_{i}^{+}\sigma_{j}^{-}+\sigma_{i}^{-}\sigma_{j}^{+}).
\end{equation}
where $\Gamma_{ij}=2{\rm Re}(A_{ij})$ is the two-atom collective decay rate and $U_{ij}=2{\rm Im}(A_{ij})$ is the waveguide induced interaction between the atoms. Here, we have set
\begin{equation}
A_{ij}=\frac{\pi g^{2}e^{ik|z_i-z_j|}}{v_1}.
\end{equation}
The dynamics of the system, which is characterized by the atomic population $P_i=\langle\sigma_i^{+}\sigma_i^{-}\rangle=|B_i|^2$ for $i=1,2$ is shown in Fig.~\ref{onem} for different atomic distance. Here, the system is initially prepared in the product state $|\psi(0)\rangle=\sigma_1^{+}|G,0\rangle$, in which the first atom is in the excited state, the second atom is in the ground state while the waveguide is in the vacuum state.
In Fig.~\ref{onem}(a), we consider the situation with $d=12/k_{j0}$, in which the ME yields a monotonic decay for $P_1$ and an increase-then-decrease behavior for $P_2$. However, the results based on Eqs.~(\ref{RE}) reveal the non-Markovian nature of the system, which is induced by the retardation of the photon propagating in the waveguide. For example, at the moment $t=\tau_1$, the photon emitted by the first atom arrives at the second atom and excites it, so $P_2$ acquires a nonzero value. The second atom then also emits a photon, which in turn arrives at the first atom after another time interval $\tau_1$, and the decreasing population $P_1$ revives upon the reabsorption of the photon. Repeating this process of photon emission, propagation and absorption, both populations $P_1$ and $P_2$ oscillate with period $\tau_1$. Due to the waveguide-induced dissipation of the two-atom system, the populations approach zero after a sufficiently long time. Similar behavior can also be found for a larger atomic distance $d=24/k_{j0}$, as shown in Fig.~\ref{onem}(b). Compared with the former situation, we find that the populations stay at zero for a longer time (see the black solid curve near $t/\tau_1=2$ and the red dashed curve near $t/\tau_1=3$) due to the longer
retardation time.
\section{Effects of $ \text{TM}_{21}$ mode}
\label{twomode}
Now, we consider that the second atom is off-centred from the waveguide as shown in Fig.~\ref{model}(b), that is $\vec{r}_1=(a/2,b/2,z_1)$ and $\vec{r}_2=(a/2+\Delta x,b/2,z_2)\,(0<\Delta x<a/2)$. An immediate result is the change of atom-waveguide coupling strength, which yields
\begin{equation}
g_{11}=\frac{\Omega_{1}\mu}{\sqrt{A\pi\epsilon_{0}}},g_{12}=g_{11}\cos(\frac{\Delta x\pi}{a}),g_{12}'=\frac{g_{12}}{\sqrt{\omega_{a}}},
\end{equation}
More interestingly, the second atom now also couples to the ${\rm TM}_{21}$ mode in addition to the ${\rm TM}_{11}$ mode, and the coupling strength reads
\begin{equation}
g_{22}=\frac{\Omega_{2}\mu}{\sqrt{A\pi\epsilon_{0}}}\sin(\frac{2\Delta x\pi}{a}).
\end{equation}
\begin{figure}
\caption{Dynamical evolution of the atomic populations when the second atom is off-centered from the middle axis of the waveguide. (a) The comparison between the cases where the ${\rm TM}_{21}$ mode is neglected ($\gamma_{212}=0$) and included ($\gamma_{212}\neq0$).}
\label{twomai}
\end{figure}
As a result, the retardation differential equations for the atomic amplitudes becomes
\begin{subequations}
\begin{eqnarray}
(\partial_{t}+\gamma_{11})B_1(t)&=&-\gamma_{12}B_2(t-\tau_{1})e^{ik_{10}d}
\Theta(t-\tau_{1}),\nonumber\\
\\
(\partial_{t}+\gamma_{22}+\gamma_{212})B_2(t)&=&-\gamma_{12}B_1(t-\tau_{1})
e^{ik_{10}d}\Theta(t-\tau_{1})\nonumber,\\
\end{eqnarray}
\label{newmode}
\end{subequations}
where
\begin{eqnarray}
\gamma_{11}&=&\frac{g_{11}'^{2}\pi}{v_{1}},\,
\gamma_{12}=\frac{g_{11}'g_{12}'\pi}{v_{1}},\label{gamma1}\\
\gamma_{22}&=&\frac{g_{12}'g_{12}'\pi}{v_{1}},\,
\gamma_{212}=\frac{g_{22}'^2\pi}{v_2}.\label{gamma2}
\end{eqnarray}
with $g_{mn}'=g_{mn}/\sqrt{\omega_a}$. It is clear that $\gamma_{11}, \gamma_{12}$ and $\gamma_{22}$ come from the coupling to the ${\rm TM}_{11}$ mode while $\gamma_{212}$ comes from the effect of ${\rm TM}_{21}$ mode.
To demonstrate the effect of the coupling of the second atom to the ${\rm TM}_{21}$ mode, we plot the atomic populations for $\gamma_{212}=0$ and $\gamma_{212}\neq0$ based on Eq.~(\ref{newmode}) in Fig.~3(a), with the same initial state as in the previous section.
When $\gamma_{212}$ is set to zero, that is, the ${\rm TM}_{21}$ mode is neglected, both $P_1$ and $P_2$ undergo oscillations similar to the situation when the two atoms are both centered in the waveguide; the difference comes from the modification of the coupling strength between the second atom and the waveguide due to the deviation. However, when the effect of the ${\rm TM}_{21}$ mode is taken into consideration ($\gamma_{212}\neq0$), $P_1$ undergoes an exponential decay and $P_2$ stays near the ground state at all times. Therefore, the ${\rm TM}_{21}$ mode provides a new dissipation channel for the second atom, which prevents its excitation.
Similar to the discussion in the last section, we can also obtain the ME under the Markovian approximation, which yields
\begin{eqnarray}
\dot\rho&=&-i[\mathcal{H}_2,\rho]+\sum_{i,j=1}^{2}\frac{\Gamma_{ij}'}{2}
(2\sigma_{j}^{-}\rho\sigma_{i}^{+}-\sigma_{i}^{+}\sigma_{j}^{-}\rho
-\rho\sigma_{i}^{+}\sigma_{j}^{-})\nonumber\\
&&+\gamma_{212}(2\sigma_{2}^{-}\rho\sigma_{2}^{+}
-\sigma_{2}^{+}\sigma_{2}^{-}\rho-\rho\sigma_{2}^{+}\sigma_{2}^{-}).
\label{newmodemaster}
\end{eqnarray}
Here, the last term represents the dissipation of the second atom induced by the ${\rm TM}_{21}$ mode in the waveguide. The Hamiltonian $\mathcal{H}_2$ describing the interaction between the two atoms reads
\begin{equation}
\mathcal{H}_2=\sum_{l=1}^{2}\omega_{a}(\sigma_l^{+}\sigma_l^{-})
+\sum_{i,j=1}^{2}\frac{U_{ij}'}{2}(\sigma_{i}^{+}\sigma_{j}^{-}+\sigma_{i}^{-}\sigma_{j}^{+}),
\end{equation}
where
\begin{equation}
U_{ij}'=\gamma_{12}\sin(k_{10}|z_i-z_j|),\ \Gamma_{ij}'=\frac{2\pi g_{1i}g_{1j}\cos(k_{10}|z_{i}-z_{j}|)}{c\sqrt{\omega_{a}^{2}-\Omega_{1}^{2}}}.
\end{equation}
In Fig.~\ref{twomai}(b), we show the agreement between the results of the retardation differential equations and the ME. When the ${\rm TM}_{21}$ mode is considered, the second atom immediately decays after it is excited by the photon emitted by the first atom, so that we can barely observe any oscillation. Meanwhile, the photon emitted by the first atom and propagating via the ${\rm TM}_{11}$ mode cannot be reflected by the second atom due to its dissipation via the ${\rm TM}_{21}$ mode; therefore the ME works well and $P_1$ exhibits an exponential decay.
\section{Dark state}
\label{dark}
In the above sections, we have considered the situation with $d=|z_1-z_2|\neq0$, in which the dynamics of the system is demonstrated by the retardation differential equations. Another interesting situation is that the connection between the two atoms is perpendicular to the axis of the waveguide. We first consider the case illustrated in Fig.~\ref{model} (c), where both of the atoms are located in the position $z=z_0$ but the second atom is deviated from the first one in the $x$ direction, that is, $\vec{r}_1=(a/2,b/2,z_0)$ and $\vec{r}_2=(a/2+\Delta x,b/2,z_0)\,(0<\Delta x<a/2)$. As a result, there is no retardation effect and the amplitudes satisfy the differential equations
\begin{subequations}
\begin{eqnarray}
(\partial_{t}+\gamma_{11})B_1(t)&=&-\gamma_{12}B_2(t),\nonumber \\
\\
(\partial_{t}+\gamma_{22}+\gamma_{212})B_2(t)&=&-\gamma_{12}B_1(t),\nonumber\\
\end{eqnarray}
\label{per1}
\end{subequations}
where the parameters $\gamma_{11},\gamma_{22}, \gamma_{212}$ and $\gamma_{12}$ are the same as those given in Eqs.~(\ref{gamma1}) and (\ref{gamma2}). Correspondingly, the Markovian master equation becomes
\begin{eqnarray}
\dot\rho&=&-i[\mathcal{H}_3,\rho]+\sum_{i,j=1}^{2}\frac{\Gamma_{ij}^{*}}{2}
(2\sigma_{j}^{-}\rho\sigma_{i}^{+}-\sigma_{i}^{+}\sigma_{j}^{-}\rho
-\rho\sigma_{i}^{+}\sigma_{j}^{-})\nonumber\\
&&+\gamma_{212}(2\sigma_{2}^{-}\rho\sigma_{2}^{+}
-\sigma_{2}^{+}\sigma_{2}^{-}\rho-\rho\sigma_{2}^{+}\sigma_{2}^{-}),
\label{mep}
\end{eqnarray}
where the Hamiltonian
\begin{equation}
\mathcal{H}_3=\sum_{l=1}^{2}\omega_{a}(\sigma_l^{+}\sigma_l^{-}),
\end{equation}
implies that the two atoms do not coherently couple to each other. However, the nonzero value of
\begin{equation}
\Gamma_{ij}^{*}=\frac{2\pi g_{1i}g_{1j}}{c\sqrt{\omega_{a}^{2}-\Omega_{1}^{2}}}.
\end{equation}
indicates that they will undergo a collective dissipation to the waveguide.
\begin{figure}
\caption{Dynamical evolution of the atomic populations when the line connecting the two atoms is perpendicular to the axis of the waveguide. The parameters are set as $d=12/k_{j0}$.}
\label{perpmai}
\end{figure}
Fig.~4(a) shows the comparison between the results of Eqs.~(\ref{per1}) and the ME for the dynamical evolution of the atomic populations. The absence of retardation yields agreement between the two results, as shown in the figure. Furthermore, $P_1$ undergoes an exponential decay and $P_2$ remains nearly zero during the time evolution; therefore, the second atom is nearly frozen in the ground state.
The above dynamical process can be broken if the second atom is displaced from the first one in the $y$ direction instead of the $x$ direction, that is, $\vec{r}_1=(a/2,b/2,z_0)$ and $\vec{r}_2=(a/2,b/2+\Delta y,z_0)\,(0<\Delta y<b/2)$. In this case, the amplitude equations and the ME are the same as Eqs.~(\ref{per1}) and (\ref{mep}), respectively; the only difference is that $\gamma_{212}=0$, since the second atom is decoupled from the ${\rm TM}_{21}$ mode. As shown in Fig.~4(b), the ME describes the dynamics of the system perfectly and the atomic populations approach nonzero fixed values as the evolution time $t$ tends to infinity. This means that the system finally reaches a dark state which protects the atoms from decaying to the ground state. The underlying physics can be extracted from the effective interaction Hamiltonian, which simplifies to
\begin{equation}
H_{I}=i\int_{-\infty}^{\infty}dk\frac{1}{\sqrt{\omega_{1k}}}\left[(g_{11}
\sigma^{-}_{1}+g_{12}\sigma^{-}_{2})a^{\dagger}_{1k}e^{ikz_0}-{\rm H.c.}\right].
\end{equation}
As a result, the dark state $|D\rangle$ which satisfies $H_{I}|D\rangle=0$ can be expressed as
\begin{equation}
|D\rangle=\frac{1}{\sqrt{g_{12}^2+g_{11}^2}}({g_{12}\sigma_1^{+}-g_{11}\sigma_2^{+}})|G,0\rangle.
\end{equation}
Therefore, we have
\begin{equation}
\label{dar}
\frac{P_{1}(t=\infty)}{P_{2}(t=\infty)}=\frac{g_{12}^{2}}{g_{11}^{2}}=\cos^{2}(\frac{y\pi}{b}),
\end{equation}
which coincides with results given in Fig.~4(b).
\section {conclusion}
\label{con}
In this paper, we investigate the time evolution of a two-atom system coupled to a waveguide with a rectangular cross section. We find that the dynamics of the system can be controlled by adjusting the
relative locations of the two atoms in an on-demand manner. Similar to the idealized waveguide system, in which the effect of the cross section is neglected, the dynamics exhibits an obvious non-Markovian retardation character when the atoms are both located on the middle axis of the waveguide, since they interact with the same mode in the waveguide. This mode not only induces dissipation but also serves as a data bus to indirectly couple the two atoms. When one of the atoms is off-centered, an additional mode in the waveguide acts as a purely dissipative environment, which erodes the retardation effect, and therefore the Markovian master equation captures the main physics in an analytical way. More interestingly, when the line connecting the atoms is perpendicular to the axis of the waveguide, we find a dark-state mechanism which prevents the complete decay of the system and is therefore of potential application in quantum information processing.
\begin{acknowledgments}
We thank Dr. L. Du for warm help. This work is supported by National Key R$\&$D Program of China (No. 2021YFE0193500), and the National Natural Science Foundation of China (No. 11875011).
\end{acknowledgments}
\end{document}
\begin{document}
\title{The Higher Rank Rigidity Theorem \ for Manifolds With No Focal Points}
\noindent \textbf{Abstract.} We say that a Riemannian manifold $M$ has $\rank M \geq k$ if every geodesic in $M$ admits at least $k$ parallel Jacobi fields. The Rank Rigidity Theorem of Ballmann and Burns-Spatzier, later generalized by Eberlein-Heber, states that a complete, irreducible, simply connected Riemannian manifold $M$ of rank $k \geq 2$ (the ``higher rank'' assumption) whose isometry group $\Gamma$ satisfies the condition that the $\Gamma$-recurrent vectors are dense in $SM$ is a symmetric space of noncompact type. This includes, for example, higher rank $M$ which admit a finite volume quotient. We adapt the method of Ballmann and Eberlein-Heber to prove a generalization of this theorem where the manifold $M$ is assumed only to have no focal points. We then use this theorem to generalize to no focal points a result of Ballmann-Eberlein stating that for compact manifolds of nonpositive curvature, rank is an invariant of the fundamental group.
\section{Introduction}
In the mid-80's, building on an analysis of manifolds of nonpositive curvature of higher rank carried out by Ballmann, Brin, Eberlein, and Spatzier in \cite{BalBriEbe85} and \cite{BalBriSpa85}, Ballmann in \cite{Bal85} and Burns-Spatzier in \cite{BurSpa87-1} and \cite{BurSpa87-2} independently (and with different methods) proved their Rank Rigidity Theorem:
\begin{rrthm}
Let $M$ be a complete, simply connected, irreducible Riemannian manifold of nonpositive curvature, rank $k \geq 2$, and curvature bounded below; suppose also $M$ admits a finite volume quotient. Then $M$ is a locally symmetric space of noncompact type.
\end{rrthm}
The theorem was later generalized by Eberlein-Heber in \cite{EbeHeb90}. They removed the curvature bound, and also generalized the condition that $M$ admit a finite volume quotient to the condition that a dense set of geodesics in $M$ be $\Gamma$-recurrent; they called this condition the ``duality condition'', for reasons not discussed here.
We aim to prove the following generalization of Eberlein and Heber's result:
\begin{rrthm}
Let $M$ be a complete, simply connected, irreducible Riemannian manifold with no focal points and rank $k \geq 2$ with group of isometries $\Gamma$, and suppose that the $\Gamma$-recurrent vectors are dense in $SM$. Then $M$ is a symmetric space of noncompact type.
\end{rrthm}
Poincar\'e recurrence implies that when $M$ admits a finite volume quotient, the $\Gamma$-recurrent vectors are dense in $SM$. As a consequence we obtain the following corollary:
\begin{cor*}
Let $N$ be a complete, finite volume, irreducible Riemannian manifold with no focal points and rank $k \geq 2$; then $N$ is locally symmetric.
\end{cor*}
Since the conditions of no focal points and density of $\Gamma$-recurrent vectors pass nicely to de Rham factors, we also get a decomposition theorem:
\begin{cor*}
Let $M$ be a complete, simply connected Riemannian manifold with no focal points and with group of isometries $\Gamma$, and suppose that the $\Gamma$-recurrent vectors are dense in $SM$. Then $M$ decomposes as a Riemannian product
\[
M = M_0 \times M_S \times M_1 \times \cdots \times M_l,
\]
where $M_0$ is a Euclidean space, $M_S$ is a symmetric space of noncompact type and higher rank, and each factor $M_i$ for $1 \leq i \leq l$ is an irreducible rank-one Riemannian manifold with no focal points.
\end{cor*}
In 1987 Ballmann and Eberlein in \cite{BalEbe87} defined the \emph{rank} of an abstract group, and used the Higher Rank Rigidity Theorem in nonpositive curvature to show that, for nonpositively curved manifolds of finite volume, rank is an invariant of the fundamental group. In our final section, we derive the necessary lemmas to show that, at least in the case the manifold is compact, their proof applies to the case of no focal points as well. Therefore we have
\begin{thm*}
Let $M$ be a complete, simply connected Riemannian manifold without focal points, and let $\Gamma$ be a discrete, cocompact subgroup of isometries of $M$ acting freely and properly on $M$. Then $\rank(\Gamma) = \rank(M)$.
\end{thm*}
As a corollary of this and the higher rank rigidity theorem, we find for instance that the locally symmetric metric is the unique Riemannian metric of no focal points on a compact locally symmetric space.
Our proof follows closely the method of Ballmann and Eberlein-Heber, as presented in Ballmann's book \cite{Bal95}. The paper is organized as follows. In section \ref{S_prelim} we recall the necessary definitions and state the results we will need on manifolds with no focal points, most of which come from a paper of O'Sullivan \cite{OSu76}. We construct a visual boundary $M(\infty)$ and derive a few of its properties. Finally, subsection \ref{sS_24} is devoted to a number of lemmas that allow us to compare the behavior of $SM$ at two possibly distant vectors whose associated geodesics are asymptotic. These lemmas rely heavily on recurrence. One of the major tools lost when passing from nonpositive curvature to no focal points is convexity of the function $t \mapsto d(\gamma(t), \sigma(t))$ for geodesics $\gamma, \sigma$, and the lemmas in subsections \ref{sS_22} and \ref{sS_24} are the key tools we use here to replace it.
We then proceed to the main proof. In section \ref{S_flats} we show that $M$ has sufficiently many $k$-flats, and in section \ref{S_angle} we investigate the structure of the visual boundary of $M$ (defined as asymptotic classes of geodesic rays). These two sections repeat for manifolds of no focal points some of the breakthrough work of Ballmann, Brin, Eberlein, and Spatzier on manifolds of nonpositive curvature, which was instrumental in the original proofs of the Rank Rigidity Theorem (see \cite{BalBriEbe85}, \cite{BalBriSpa85}). In addition, we construct in section \ref{S_angle} a closed proper invariant subset of the visual boundary. The arguments in these two sections are generalizations of their counterparts in nonpositive curvature, originally set out in \cite{BalBriEbe85}, \cite{BalBriSpa85}, \cite{Bal85}, and \cite{EbeHeb90}.
In section \ref{S_complete} we complete the proof of the higher rank rigidity theorem via an appeal to the holonomy classification theorem of Berger-Simons. This section follows Ballmann nearly word-for-word, but the arguments are brief enough that we present them again here.
Finally, in section \ref{S_FundGroup} we prove our generalization of the theorem of Ballmann-Eberlein. We omit many proofs here that follow Ballmann-Eberlein word-for-word, or with only trivial modifications; our work here is primarily in generalizing a number of well-known lemmas from nonpositive curvature to the case of no focal points. The main new tool here is Lemma \ref{A_B}, which is used as a replacement for a type of flat strip theorem in nonpositive curvature which says that if two geodesic rays $\gamma_1$ and $\gamma_2$ meet a geodesic $\sigma$ in such a way that the sum of the interior angles is $\pi$, then $\gamma_1, \gamma_2,$ and $\sigma$ bound a flat half strip.
The author is indebted to Ralf Spatzier for numerous conversations on the material of this paper.
\section{Preliminaries}\label{S_prelim}
\subsection{Notation}
For sections \ref{S_prelim} - \ref{S_complete} of this paper, $M$ is assumed to be a complete, simply connected, irreducible Riemannian manifold with no focal points.
We denote by $TM$ and $SM$ the tangent and unit tangent bundles of $M$, respectively, and we denote by $\pi$ the corresponding projection map. If $v$ is a unit tangent vector to a manifold $M$, we let $\gamma_v$ denote the (unique) geodesic with $\dot{\gamma}(0) = v$.
Recall that $SM$ inherits a natural metric from $M$, the Sasaki metric, as follows: we have for any $v \in TM$ a decomposition
\[
T_vTM = T_{\pi(v)}M \oplus T_{\pi(v)}M
\]
given by the horizontal and vertical subspaces of the connection, and we may therefore give $T_vTM$ the inner product induced by this decomposition, giving a Riemannian metric on $TM$; the restriction of this metric to $SM$ is the Sasaki metric.
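Concretely, if $\xi_1, \xi_2 \in T_vTM$ have horizontal parts $x_1, x_2$ and vertical parts $y_1, y_2$ under this decomposition, then their Sasaki inner product is
\[
\langle \xi_1, \xi_2 \rangle = \langle x_1, x_2 \rangle + \langle y_1, y_2 \rangle,
\]
where the inner products on the right-hand side are taken in $T_{\pi(v)}M$.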
Central to our discussion is the geodesic flow on $M$, which is the flow $g^t : SM \to SM$ defined by
\[
g^t v = \dot{\gamma}_v(t).
\]
In sections \ref{S_prelim} - \ref{S_complete} we denote by $\Gamma$ the group of isometries of $M$. A vector $v \in SM$ is called $\Gamma$-\emph{recurrent}, or simply \emph{recurrent}, if for each neighborhood $U \subseteq SM$ of $v$ and each $T > 0$ there is $t \geq T$ and $\phi \in \Gamma$ such that $(d\phi \circ g^t) v \in U$. We assume throughout the paper that the set of $\Gamma$-recurrent vectors is dense in $SM$ (Eberlein-Heber call this the duality condition, for reasons not discussed here). This holds in particular if $M$ admits a finite volume quotient.
The \emph{rank} of $v \in SM$, or of $\gamma_v$, is the dimension of the space of parallel Jacobi fields along $\gamma_v$. The \emph{rank} of $M$ is the minimum of $\rank{v}$ over all unit tangent vectors $v$. A unit tangent vector $v$ is called \emph{regular} if there exists some neighborhood $U \subseteq SM$ of $v$ such that for all $w \in U$, $\rank{w} = \rank{v}$. We denote by $\mathcal{R}$ the set of regular vectors, and $\mathcal{R}_m$ the set of regular vectors of rank $m$.
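For orientation, note that $\dot{\gamma}_v$ is always a parallel Jacobi field along $\gamma_v$, so $\rank{v} \geq 1$ for every $v$. Along any geodesic of Euclidean space the Jacobi fields have the form $J(t) = (a + bt)E(t)$ with $E$ parallel, and the parallel ones are exactly those with $b = 0$; thus $\mathbb{R}^n$ has rank $n$. On the other hand, if $M$ has strictly negative curvature, then a parallel Jacobi field $J$ satisfies $R(J, \dot{\gamma})\dot{\gamma} = 0$, which forces $J$ to be a constant multiple of $\dot{\gamma}$, so such a manifold has rank $1$.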
We let $k$ be the rank of $M$, and assume that $k \geq 2$. It is not difficult to see that the set of vectors of rank $\leq m$ is open for each $m$, and in particular, that the set $\mathcal{R}_k$ is open. In section \ref{S_flats}, we will construct for each $v \in SM$ a totally geodesic embedded $\mathbb{R}^k$ in $M$ with $v$ in its tangent bundle; such an $\mathbb{R}^k$ is called a \emph{$k$-flat}. The assumption that $k \geq 2$ will ensure that this gives us nontrivial information about $M$.
We will also be interested in the ``behavior at $\infty$'' of geodesics on $M$. To this end we define the following two equivalence relations on vectors in $SM$ (equivalently, on geodesics in $M$): Two vectors $v, w \in SM$ are \emph{asymptotic} if $d(\gamma_v(t), \gamma_w(t))$ is bounded for $t \geq 0$; and $v, w$ are called \emph{parallel} if $v, w$ are asymptotic and $-v, -w$ are also asymptotic. Geodesics $\gamma, \sigma$ are called asymptotic (resp. parallel) if $\dot{\gamma}(0), \dot{\sigma}(0)$ are asymptotic (resp. parallel). We develop these equivalence relations in subsection \ref{sS_bdry}.
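For example, for geodesics $\gamma_v(t) = p + tv$ and $\gamma_w(t) = q + tw$ in $\mathbb{R}^n$ one has $d(\gamma_v(t), \gamma_w(t)) = |(p - q) + t(v - w)|$, which is bounded for $t \geq 0$ if and only if $v = w$; thus two unit speed geodesics of $\mathbb{R}^n$ are asymptotic precisely when they are parallel lines traversed in the same direction, and in that case they are also parallel in the sense above. In hyperbolic space, by contrast, two distinct geodesics converging to a common endpoint at infinity are asymptotic but not parallel, since their reversals are not asymptotic.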
\subsection{Results on no focal points}\label{sS_22}
Let $L$ be an arbitrary Riemannian manifold, $N$ a submanifold of $L$. The submanifold $N$ is said to have a \emph{focal point} at $q \in L$ if there exists a variation of geodesics $\gamma_s(t)$ with $\gamma_s(0) \in N$, $\gamma_0(a) = q$ for some $a$, $\dot{\gamma}_s(0) \perp N$ for all $s$, and $\partial_s\gamma_0(a) = 0$. Note that if $N$ is a point, then a focal point of $N$ is just a conjugate point of $N$ along some geodesic.
A Riemannian manifold $L$ is said to have \emph{no focal points} if every totally geodesic submanifold $N$ has no focal points. Equivalently, it suffices to check that for every nontrivial Jacobi field $J$ along a geodesic $\gamma$ with $J(0) = 0$, $||J(t)||$ is a strictly increasing function of $t$ for $t > 0$. The results of this section hold for arbitrary Riemannian manifolds $L$ with no focal points.
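As a quick illustration of this criterion: along a geodesic of $\mathbb{R}^n$ every Jacobi field with $J(0) = 0$ has the form $J(t) = tE(t)$ with $E$ parallel, so $||J(t)|| = t\,||E(0)||$ is strictly increasing for $t > 0$; in constant curvature $-1$ the analogous fields are spanned by $t\dot{\gamma}(t)$ and $\sinh(t)E(t)$ with $E$ parallel and orthogonal to $\dot{\gamma}$, and again $||J(t)||$ is strictly increasing.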
It is easy to check that Riemannian manifolds of nonpositive curvature have no focal points, and that Riemannian manifolds of no focal points have no conjugate points. Recall that for complete simply connected manifolds with no conjugate points, the exponential map $\exp_p : T_pL \to L$ is a diffeomorphism. There is an analog for no focal points: Recall that if $N$ is a submanifold of a Riemannian manifold $L$, we may construct the normal bundle $\nu^{\perp}N$ of $N$ in $L$, and there is an associated exponential map
\[
\exp^{\perp}_N : \nu^{\perp}N \to L,
\]
which is just the restriction of the standard exponential map $\exp : TL \to L$. Then a totally geodesic submanifold $N$ of a complete simply connected Riemannian manifold has no focal points iff the map $\exp^{\perp}_N$ is a diffeomorphism. In fact, as one might expect, focal points occur exactly at the places where $d \exp^{\perp}_N$ is singular. For a reference, see for instance O'Sullivan \cite{OSu74}.
We now state the results we need on manifolds with no focal points. Throughout, $M$ is a Riemannian manifold with no focal points. The main reference here is O'Sullivan's paper \cite{OSu76}\footnote{Note that, as remarked by O'Sullivan himself, the relevant results in \cite{OSu76} are valid for \emph{all} manifolds with no focal points (rather than only those with a lower curvature bound), since the condition $||J(t)|| \to \infty$ as $t \to \infty$ for all nontrivial initially vanishing Jacobi fields $J$ is always satisfied for manifolds with no focal points, as shown by Goto \cite{Got78}.}.
First, we have the following two propositions, which often form a suitable replacement for convexity of the function $t \mapsto d(\gamma(t), \sigma(t))$ for geodesics $\gamma, \sigma$:
\begin{prop}[\cite{OSu76} \S 1 Prop 2]\label{OSu76 1}
Let $\gamma$ and $\sigma$ be distinct geodesics with $\gamma(0) = \sigma(0)$. Then for $t > 0$, both $d(\gamma(t), \sigma)$ and $d(\gamma(t), \sigma(t))$ are strictly increasing and tend to infinity as $t \to \infty$.
\end{prop}
\begin{prop}[\cite{OSu76} \S 1 Prop 4]\label{OSu76 2}
Let $\gamma$ and $\sigma$ be asymptotic geodesics; then both $d(\gamma(t), \sigma)$ and $d(\gamma(t), \sigma(t))$ are nonincreasing for $t \in \mathbb{R}$.
\end{prop}
\noindent O'Sullivan also proves an existence and uniqueness result for asymptotic geodesics:
\begin{prop}[\cite{OSu76} \S 1 Prop 3]\label{OSu76 3}
Let $\gamma$ be a geodesic; then for each $p \in M$ there is a unique geodesic through $p$ and asymptotic to $\gamma$.
\end{prop}
\noindent Finally, O'Sullivan also proves a flat strip theorem (this result was also obtained, via a different method, by Eschenburg in \cite{Esc77}):
\begin{prop}[\cite{OSu76} \S 2 Thm 1]\label{OSu76 flat}
If $\gamma$ and $\sigma$ are parallel geodesics, then $\gamma$ and $\sigma$ bound a flat strip; that is, there is an isometric immersion $\phi : [0, a] \times \mathbb{R} \to M$ with $\phi(0, t) = \gamma(t)$ and $\phi(a, t) = \sigma(t)$.
\end{prop}
\noindent We will also need the following result, which is due to Eberlein (\cite{Ebe73}); a proof can also be found in \cite{Esc77}.
\begin{prop}\label{Eb bdd}
Bounded Jacobi fields are parallel.
\end{prop}
\noindent Finally, we have the following generalization of Proposition \ref{OSu76 1}:
\begin{prop}\label{OSu76 4}
Let $p \in M$, let $N$ be a totally geodesic submanifold of $M$ through $p$, and let $\gamma$ be a geodesic of $M$ with $\gamma(0) = p$. Assume $\gamma$ is not contained in $N$; then $d(\gamma(t), N)$ is strictly increasing and tends to $\infty$ as $t \to \infty$.
\end{prop}
\begin{proof}
Let $\sigma_t$ be the unique geodesic segment joining $\gamma(t)$ to $N$ and perpendicular to $N$; then (by a first variation argument) $d(\gamma(t), N) = L(\sigma_t)$, where $L(\sigma_t)$ gives the length of $\sigma_t$. Thus if $d(\gamma(t), N)$ is not strictly increasing, then we have $L'(\sigma_t) = 0$ for some $t$, and again a first variation argument establishes that then $\sigma_t$ is perpendicular to $\gamma$, which is a contradiction since $\exp : \nu^{\perp} \sigma_t \to M$ is a diffeomorphism.
This establishes that $d(\gamma(t), N)$ is strictly increasing. To show it is unbounded we argue by contradiction. Suppose
\[
\lim_{t \to \infty} d(\gamma(t), N) = C < \infty,
\]
and choose sequences $t_n \to \infty$ and $a_n \in N$ such that $d(\gamma(t_n), N) = d(\gamma(t_n), a_n)$ and the sequence $d(\gamma(t_n), a_n)$ increases monotonically to $C$. We let $w_n$ be the unit tangent vector at $\gamma(0)$ pointing at $a_n$; by passing to a subsequence, we may assume $w_n \to w \in T_{\gamma(0)} N$.
We claim $d(\gamma(t), \gamma_w) \leq C$ for all $t \geq 0$, contradicting Proposition \ref{OSu76 1}. Fix a time $t \geq 0$. For all $n$ large enough that $t_n \geq t$, Proposition \ref{OSu76 1} gives $d(\gamma(t), \gamma_{w_n}) \leq d(\gamma(t_n), \gamma_{w_n}) \leq d(\gamma(t_n), a_n) \leq C$. For each such $n$, there is a time $s_n$ such that
\[
d(\gamma(t), \gamma_{w_n}) = d(\gamma(t), \gamma_{w_n}(s_n)).
\]
The triangle inequality gives
\[
s_n \leq t + C.
\]
Thus some subsequence of the points $\gamma_{w_n}(s_n)$ converges to a point $\gamma_w(s)$, and then clearly $d(\gamma(t), \gamma_w(s)) \leq C$, which establishes the result.
\end{proof}
\subsection{The boundary of $M$ at infinity}\label{sS_bdry}
We define for $M$ a visual boundary $M(\infty)$, the \emph{boundary of $M$ at infinity}, a topological space whose points are equivalence classes of unit speed asymptotic geodesics in $M$. If $\eta \in M(\infty)$, $v \in SM$, and $\gamma_v$ is a member of the equivalence class $\eta$, then we say $v$ (or $\gamma_v$) \emph{points at} $\eta$.
Proposition \ref{OSu76 3} shows that for each $p \in M$ there is a natural bijection $S_p M \cong M(\infty)$ given by taking a unit tangent vector $v$ to the equivalence class of $\gamma_v$. Thus for each $p$ we obtain a topology on $M(\infty)$ from the topology on $S_p M$; in fact, these topologies (for various $p$) are all the same, which we now show.
Fix $p, q \in M$ and let $\phi : S_pM \to S_qM$ be the map given by taking $v \in S_pM$ to the unique vector $\phi(v) \in S_qM$ asymptotic to $v$. We wish to show $\phi$ is a homeomorphism, and for this it suffices to show:
\begin{lem}\label{sameTop}\label{28}
The map $\phi : S_p M \to S_qM$ is continuous.
\end{lem}
\begin{proof}
Let $v_n \in S_pM$ with $v_n \to v$, and let $w_n, w \in S_qM$ be asymptotic to $v_n, v$, respectively. We must show $w_n \to w$. Suppose otherwise; then, passing to a subsequence, we may assume $w_n \to u \neq w$. Fix $t \geq 0$. Choose $n$ such that
\begin{align*}
d(\gamma_{w_n}(t), \gamma_u(t)) + d(\gamma_{v_n}(t), \gamma_v(t)) < d(p, q).
\end{align*}
Then
\begin{align*}
d(\gamma_u(t), \gamma_w(t)) \leq d(\gamma_u(t)&, \gamma_{w_n}(t)) + d(\gamma_{w_n}(t), \gamma_{v_n}(t)) \\ &+ d(\gamma_{v_n}(t), \gamma_v(t)) + d(\gamma_v(t), \gamma_w(t)) < 3 d(p, q),
\end{align*}
the second and fourth terms being bounded by $d(p, q)$ by Proposition \ref{OSu76 2}. Since $t$ is arbitrary, this contradicts Proposition \ref{OSu76 1}.
\end{proof}
We call the topology on $M(\infty)$ induced by the topology on any $S_pM$ as above the \emph{visual topology}. We will be defining a second topology on $M(\infty)$ presently, so we take a moment to fix notation: If $\zeta_n, \zeta \in M(\infty)$ and we write $\zeta_n \to \zeta$, we \emph{always} mean with respect to the visual topology unless explicitly stated otherwise.
If $\eta, \zeta \in M(\infty)$ and $p \in M$, then $\angle_p(\eta, \zeta)$ is defined to be the angle at $p$ between $v_{\eta}$ and $v_\zeta$, where $v_\eta, v_\zeta \in S_pM$ point at $\eta, \zeta$, respectively.
We now define a metric $\angle$ on $M(\infty)$, the \emph{angle metric}, by
\[
\angle(\eta, \zeta) = \sup_{p \in M} \angle_p(\eta, \zeta).
\]
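For example, in $\mathbb{R}^n$ the angle $\angle_p(\eta, \zeta)$ is simply the Euclidean angle between the directions determined by $\eta$ and $\zeta$ and does not depend on $p$, so the supremum is attained at every point and the angle metric induces the visual topology. In hyperbolic space, on the other hand, any two distinct points of $M(\infty)$ are the endpoints of a geodesic, from whose points they are seen at angle $\pi$; hence $\angle(\eta, \zeta) = \pi$ whenever $\eta \neq \zeta$, and the topology induced by $\angle$ is discrete.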
We note that the metric topology determined by $\angle$ is not in general equivalent to the visual topology. However, we do have:
\begin{prop}\label{210}
The angle metric is lower semicontinuous. That is, if $\eta_n \to \eta$ and $\zeta_n \to \zeta$ (in the visual topology), then
\[
\angle(\eta, \zeta) \leq \liminf \angle(\eta_n, \zeta_n).
\]
\end{prop}
\begin{proof}
It suffices to show that for all $\epsilon > 0$ and all $q \in M$, we have for all but finitely many $n$
\[
\angle_q(\eta, \zeta) - \epsilon < \angle(\eta_n, \zeta_n).
\]
Fixing $q \in M$ and $\epsilon > 0$, since $\eta_n \to \eta$ and $\zeta_n \to \zeta$, for all but finitely many $n$ we have
\[
\angle_q(\eta, \zeta) < \angle_q(\eta_n, \zeta_n) + \epsilon,
\]
and since $\angle_q(\eta_n, \zeta_n) \leq \angle(\eta_n, \zeta_n)$, this implies the inequality above.
\end{proof}
We also take a moment to establish a few properties of the angle metric.
\begin{prop}
The angle metric $\angle$ is complete.
\end{prop}
\begin{proof}
For $\xi \in M(\infty)$, we denote by $\xi(p) \in S_pM$ the vector pointing at $\xi$. Let $\zeta_n$ be a $\angle$-Cauchy sequence in $M(\infty)$. Then for each $p$ the sequence $\zeta_n(p)$ is Cauchy in the metric $\angle_p$, and so has a limit $\zeta(p)$; by Lemma \ref{sameTop}, the asymptotic equivalence class of $\zeta(p)$ is independent of $p$. We denote this class by $\zeta$; it is now easy to check that $\zeta_n \to \zeta$ in the $\angle$ metric. (This follows from the fact that the sequences $\zeta_n(p)$ are Cauchy uniformly in $p$.)
\end{proof}
\begin{lem}\label{nonincr}\label{nondecr}
Let $v \in SM$ point at $\eta \in M(\infty)$, and let $\zeta \in M(\infty)$. Then $\angle_{\gamma_v(t)}(\eta, \zeta)$ is a nondecreasing function of $t$.
\end{lem}
\begin{proof}
This follows from Proposition \ref{OSu76 2} and a simple first variation argument.
\end{proof}
\subsection{Asymptotic vectors, recurrence, and the angle metric}\label{sS_24}
In this section we collect a number of technical lemmas. As a consequence we derive Corollary \ref{flatcorrect}, which says that the angle between the endpoints of recurrent vectors is measured correctly from any flat. (In nonpositive curvature, this follows from a simple triangle-comparison argument.)
Our first lemma allows us to compare the behavior of the manifold at (possibly distant) asymptotic vectors:
\begin{lem}\label{seq}
Let $v, w \in SM$ be asymptotic. Then there exist sequences $t_n \to \infty, v_n \to v$, and $\phi_n \in \Gamma$ such that
\[
(d\phi_n \circ g^{t_n})v_n \to w
\]
as $n \to \infty$.
\end{lem}
\begin{proof}
First assume $w$ is recurrent. Then we may choose $s_n \to \infty$ and $\phi_n \in \Gamma$ so that $(d\phi_n \circ g^{s_n})w \to w$. For each $n$ let $q_n$ be the footpoint of $g^{s_n} w$, and let $v_n$ be the vector with the same footpoint as $v$ such that the geodesic $\gamma_{v_n}$ passes through $q_n$ at some time $t_n$. Clearly $t_n \to \infty$.
\begin{figure}\label{fig:Pic1}
\end{figure}
We now make two claims: First, that $v_n \to v$ and second, that $(d\phi_n \circ g^{t_n})v_n \to w$. Note that since $v$ and $w$ are asymptotic, Lemma \ref{nonincr} gives
\[
\angle_{\pi(v)} (v, v_n) \leq \angle_{q_n} (g^{t_n} v_n, g^{s_n} w).
\]
So if we show that the right-hand side goes to zero, both our claims are verified.
\begin{figure}\label{fig:Pic2}
\end{figure}
Consider the geodesic rays $\tau_n, \sigma_n$ through the point $\phi_n(q_n)$ satisfying
\begin{align*}
\dot{\tau}_n(0) &= - d\phi_n(g^{t_n} v_n), & \dot{\sigma}_n(0) &= - d\phi_n(g^{s_n}w).
\end{align*}
It suffices to show the angle between these rays goes to zero. Note $s_n, t_n \to \infty$. We claim that the distance between $\tau_n(t)$ and $\sigma_n(t)$ is bounded, independent of $n$, for $t \leq \max \set{s_n, t_n}$. To see this, first note that $|s_n - t_n| \leq d(\pi (v), \pi (w))$ by the triangle inequality. Suppose for example that $s_n \geq t_n$; then we find
\[
d(\sigma_n(s_n), \tau_n(s_n)) \leq 2 d(\pi v, \pi w),
\]
and Proposition \ref{OSu76 1} shows that for $0 \leq t \leq s_n$,
\[
d(\sigma_n(t), \tau_n(t)) \leq 2 d(\pi v, \pi w).
\]
The same holds if $t_n \geq s_n$. Hence for fixed $t$, for all but finitely many $n$ the above inequality holds. Now the rays $\sigma_n$ converge to $\gamma_{-w}$, and by the above inequality any subsequential limit of the rays $\tau_n$ is a ray starting at $\pi(w)$ and asymptotic to $\gamma_{-w}$; by Proposition \ref{OSu76 1} such a ray must coincide with $\gamma_{-w}$. Hence the angle between $\tau_n$ and $\sigma_n$ tends to zero, which establishes the lemma for recurrent vectors $w$.
We now do not assume $w$ is recurrent; since recurrent vectors are dense in $SM$, we may take a sequence $w_m$ of recurrent vectors with $w_m \to w$. For each $m$, there are sequences $v_{n,m} \to v$, $t_{n,m} \to \infty$, and $\phi_{n,m} \in \Gamma$ such that
\[
(d\phi_{n,m} \circ g^{t_{n,m}})v_{n,m} \to w_m.
\]
An appropriate ``diagonal'' argument now proves the lemma.
\end{proof}
As a corollary of the above proof we get the following:
\begin{cor}\label{43}
Let $v \in SM$ be recurrent and pointing at $\eta \in M(\infty)$; let $\zeta \in M(\infty)$. Then
\[
\angle(\eta, \zeta) = \lim_{t \to \infty} \angle_{\gamma_v(t)}(\eta, \zeta).
\]
\end{cor}
\begin{proof}
By Lemma \ref{nonincr}, the limit exists. Let $p = \pi(v)$, and fix arbitrary $q \in M$. Since $v$ is recurrent, there exist $t_n \to \infty$ and $\phi_n \in \Gamma$ such that $(d\phi_n \circ g^{t_n})v \to v$. Let $p_n$ be the footpoint of $g^{t_n} v$, and let $\gamma_n$ be the unit speed geodesic from $q$ to $p_n$. Define
\begin{align*}
v_n &= g^{t_n} v & &\text{and} & v_n' &= \dot{\gamma}_n\big(d(q, p_n)\big),
\end{align*}
so that $v_n'$ is the unit tangent vector to $\gamma_n$ at $p_n$.
\begin{figure}\label{fig:Pic3}
\end{figure}
By the argument given in Lemma \ref{seq}, $\angle_{p_n}(v_n, v_n') \to 0$, and if we let $v' \in S_qM$ be the vector pointing at $\eta$, then $\dot{\gamma}_n(0) \to v'$. Thus
\[
\angle_{p_n}(\zeta, v_n') \geq \angle_q(\zeta, \dot{\gamma}_n(0)) \to \angle_q(\zeta, \eta).
\]
Since $\angle_{p_n}(v_n, v_n') \to 0$ and $v_n$ points at $\eta$, this gives $\lim_{t \to \infty} \angle_{\gamma_v(t)}(\eta, \zeta) \geq \angle_q(\zeta, \eta)$. As $q$ was arbitrary, the limit is at least $\angle(\eta, \zeta)$; since it is clearly at most $\angle(\eta, \zeta)$, this proves the claim.
\end{proof}
In fact, the above corollary is true if $v$ is merely asymptotic to a recurrent vector. To prove this we will need a slight modification to Lemma \ref{seq}, which is as follows:
\begin{lem}\label{43a}
Let $w$ be recurrent and $v$ asymptotic to $w$. Then there exist sequences $w_n \to w$ and $s_n, t_n \to \infty$ such that $g^{t_n} w_n$ and $g^{s_n} v$ have the same footpoint $q_n$ for each $n$, and
\[
\angle_{q_n}(g^{t_n}w_n, g^{s_n} v) \to 0.
\]
\end{lem}
\begin{proof}
First let $s_n \to \infty$, $\phi_n \in \Gamma$, be sequences such that
\[
(d\phi_n \circ g^{s_n}) w \to w.
\]
Define $p = \pi(w), q = \pi(v)$, $p_n = \pi(g^{s_n}w)$, and $q_n = \pi(g^{s_n}v)$. Let $w_n$ be the unit tangent vector with footpoint $p$ pointing toward $q_n$, and let $t_n = d(p, q_n)$, so that $g^{t_n} w_n$ has footpoint $q_n$; note that $t_n \to \infty$.
\begin{figure}\label{fig:Pic4}
\end{figure}
Note that for all $n$
\begin{align*}
d(\phi_n(q_n), p) &\leq d(\phi_n(q_n), \phi_n(p_n)) + d(\phi_n(p_n), p) \\
&\leq d(q_n, p_n) + K \\
&\leq d(q, p) + K,
\end{align*}
where $K$ is some fixed constant. In particular, the points $\phi_n(q_n)$ all lie within bounded distance of $p$, and hence within some compact set. Therefore, by passing to a subsequence, we may assume we have convergence of the following three sequences:
\begin{align*}
r_n := \phi_n(q_n) &\to r \\
w_n' := (d\phi_n \circ g^{t_n})w_n &\to w' \\
v_n' := (d\phi_n \circ g^{s_n})v &\to v'
\end{align*}
for some $r, w', v'$. Then by the argument in the proof of Lemma \ref{seq},
\[
d(\gamma_{-w_n'}(t), \gamma_{-v_n'}(t)) \leq 2 d(p,q)
\]
for $0 \leq t \leq \max \set{s_n, t_n}$. It follows that $(-w')$ and $(-v')$ are asymptotic; since both have footpoint $r$, we see $w' = v'$, and hence $\angle_{q_n}(g^{t_n}w_n, g^{s_n}v) = \angle_{r_n}(w_n', v_n') \to 0$. Finally, since $v$ and $w$ point at the same point of $M(\infty)$, Lemma \ref{nonincr} (applied along $\gamma_{w_n}$) gives $\angle_p(w_n, w) \leq \angle_{q_n}(g^{t_n}w_n, g^{s_n}v) \to 0$, so $w_n \to w$. This gives the lemma.
\end{proof}
We can now prove our previous claim:
\begin{prop}
Let $w \in SM$ be recurrent, $v$ asymptotic to $w$. Say $v$ and $w$ both point at $\eta \in M(\infty)$. Then for all $\zeta \in M(\infty)$
\[
\angle(\eta, \zeta) = \lim_{t \to \infty} \angle_{\gamma_v(t)}(\eta, \zeta).
\]
\end{prop}
\begin{proof}
Fix $\epsilon > 0$. By Lemma \ref{43}, there exists a $T$ such that
\[
\angle_{\gamma_w(T)}(\eta, \zeta) \geq \angle(\eta, \zeta) - \epsilon.
\]
We write $w' = g^Tw$ and note that $w'$ is also recurrent and asymptotic to $v$. Let $p$ be the footpoint of $w'$. Choose by Lemma \ref{43a} sequences $w_n \to w'$ and $s_n, t_n \to \infty$ such that
\begin{equation*}
\angle_{\gamma_{v}(s_n)}(g^{t_n}w_n, g^{s_n}v) \to 0. \tag{$*$}
\end{equation*}
To fix notation, let $w_n$ point at $\eta_n$. Then for large $n$
\begin{align*}
\angle_{\gamma_v(s_n)}(\eta, \zeta) &\geq \angle_{\gamma_v(s_n)}(\eta_n, \zeta) - \epsilon & &\text{by (*)} \\
&\geq \angle_{p}(\eta_n, \zeta) - \epsilon & &\text{by Lemma \ref{nonincr}}\\
&\geq \angle_p(\eta, \zeta) - 2\epsilon & &\text{by definition of the visual topology}\\
&\geq \angle(\eta, \zeta) - 3\epsilon & &\text{by construction of $w'$}.
\end{align*}
Since $\epsilon > 0$ was arbitrary, and since $\angle_{\gamma_v(t)}(\eta, \zeta)$ is nondecreasing in $t$ by Lemma \ref{nonincr} and bounded above by $\angle(\eta, \zeta)$, the claim follows.
\end{proof}
The key corollary of these results is:
\begin{cor}\label{flatcorrect}\label{C}
Let $\eta$ be the endpoint of a recurrent vector $w$. Let $F$ be a flat at $q \in M$, and $v, v' \in S_qF$ with $v$ pointing at $\eta$. Say $v'$ points at $\zeta$; then
\[
\angle(\eta, \zeta) = \angle_q(\eta, \zeta).
\]
\end{cor}
In the next section we will establish the existence of plenty of flats; in section \ref{S_angle}, this corollary will be one of our primary tools when we analyze the structure of the angle metric on $M(\infty)$.
\section{Construction of flats}\label{S_flats}
We repeat our standing assumption that $M$ is a complete, simply connected, irreducible Riemannian manifold of higher rank and no focal points.
For a vector $v \in SM$, we let $\mathcal{P}(v) \subseteq SM$ be the set of vectors parallel to $v$, and we let $P_v$ be the image of $\mathcal{P}(v)$ under the projection map $\pi : SM \to M$. Thus, $p \in P_v$ iff there is a unit tangent vector $w \in T_pM$ parallel to $v$. Our goal in this section will be to show that if $v$ is a regular vector of rank $m$, that is, $v \in \mathcal{R}_m$, then the set $P_v$ is an $m$-flat (a totally geodesic isometrically embedded copy of $\mathbb{R}^m$). To this end, we will first show that $\mathcal{P}(v)$ is a smooth submanifold of $\mathcal{R}_m$.
We begin by recalling that if $v \in SM$, there is a natural identification of $T_vTM$ with the space of Jacobi fields along $\gamma_v$. In particular, the connection gives a decomposition of $T_vTM$ into horizontal and vertical subspaces
\[
T_vTM \cong T_{\pi(v)}M \oplus T_{\pi(v)} M,
\]
and we may identify an element $(x, y)$ in the latter space with the unique Jacobi field $J$ along $\gamma_v$ satisfying $J(0) = x, J'(0) = y$. Under this identification, $T_vSM$ is identified with the space of Jacobi fields $J$ such that $J'(t)$ is orthogonal to $\dot{\gamma}_v(t)$ for all $t$.
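Explicitly, if $\xi \in T_vTM$ is represented by a curve $s \mapsto V(s)$ in $TM$ with $V(0) = v$ and $\dot{V}(0) = \xi$, then the Jacobi field associated to $\xi$ is the variation field
\[
J(t) = \partial_s\big|_{s=0} \exp_{\pi(V(s))}\big(tV(s)\big),
\]
whose initial conditions $J(0)$ and $J'(0)$ are exactly the horizontal and vertical components of $\xi$: indeed, $J(0) = d\pi(\xi)$, while $J'(0) = D_s V(s)|_{s=0}$ is the image of $\xi$ under the connection map.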
Define a distribution $\mathcal{F}$ on the bundle $TSM \to SM$ by letting $\mathcal{F}(v) \subseteq T_v SM$ be the space of parallel Jacobi fields along $\gamma_v$. The plan is to show that $\mathcal{F}$ is smooth and integrable on $\mathcal{R}_m$, and its integral manifold is exactly $\mathcal{P}(v)$. We note first that $\mathcal{F}$ is continuous on $\mathcal{R}_m$, since the limit of a sequence of parallel Jacobi fields is a parallel Jacobi field, and the dimension of $\mathcal{F}$ is constant on $\mathcal{R}_m$.
\begin{lem}\label{F smooth}
$\mathcal{F}$ is smooth as a distribution on $\mathcal{R}_m$.
\end{lem}
\begin{proof}
For $w \in SM$ let $\mathcal{J}_0(w)$ denote the space of Jacobi fields $J$ along $\gamma_w$ satisfying $J'(0) = 0$. For each $w \in \mathcal{R}_m$ and each $t > 0$, consider the quadratic form $Q_t^w$ on $\mathcal{J}_0(w)$ defined by
\[
Q_t^w(X, Y) = \int_{-t}^t \ip{R(X(s), \dot{\gamma}_w(s))\dot{\gamma}_w(s), R(Y(s), \dot{\gamma}_w(s))\dot{\gamma}_w(s)}\, ds.
\]
Since a Jacobi field $J$ satisfying $J'(0) = 0$ is parallel iff $R(J, \dot{\gamma}_w)\dot{\gamma}_w = 0$ for all $t$, we see that $\mathcal{F}(w)$ is exactly the intersection of the nullspaces of $Q_t^w$ over all $t > 0$. In fact, since the nullspace of $Q_t^w$ is contained in the nullspace of $Q_s^w$ for $s < t$, there is some $T$ such that $\mathcal{F}(w)$ is exactly the nullspace of $Q^w_T$. We define $T(w)$ to be the infimum of such $T$; then $\mathcal{F}(w)$ is exactly the nullspace of $Q^w_{T(w)}$.
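(Here we are using the Jacobi equation $J'' + R(J, \dot{\gamma}_w)\dot{\gamma}_w = 0$: if $R(J, \dot{\gamma}_w)\dot{\gamma}_w$ vanishes on an interval containing $0$ and $J'(0) = 0$, then $J'$ is parallel and vanishes at $0$, hence vanishes on that interval, so $J$ is parallel there; the converse is immediate.)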
We claim that the map $w \mapsto T(w)$ is upper semicontinuous on $\mathcal{R}_m$. We prove this by contradiction. Suppose $w_n \to w$ with $w_n \in \mathcal{R}_m$, and suppose that $\limsup T(w_n) > T(w)$. Passing to a subsequence of the $w_n$, we may find for each $n$ a Jacobi field $Y_n$ along $\gamma_{w_n}$ satisfying $Y_n'(0) = 0$ and such that $Y_n$ is parallel along the segment of $\gamma_{w_n}$ from $-T(w)$ to $T(w)$, but not along the segment from $-T(w_n)$ to $T(w_n)$.
We project $Y_n$ onto the orthogonal complement to $\mathcal{F}(w_n)$, and then normalize so that $||Y_n(0)|| = 1$. Clearly $Y_n$ retains the properties stated above. Then, passing to a further subsequence, we may assume $Y_n \to Y$ for some Jacobi field $Y$ along $\gamma_w$. Then $Y$ is parallel along the segment of $\gamma_w$ from $-T(w)$ to $T(w)$. However, since $\mathcal{F}$ is continuous and $Y_n$ is bounded away from $\mathcal{F}$, $Y$ cannot be parallel along $\gamma_w$. This contradicts the choice of $T(w)$, and establishes our claim that $w \mapsto T(w)$ is upper semicontinuous.
To complete the proof, fix $w \in \mathcal{R}_m$ and choose an open neighborhood $U \subseteq \mathcal{R}_m$ of $w$ such that $\overline{U}$ is compact and contained in $\mathcal{R}_m$. Since $T(w)$ is upper semicontinuous it is bounded above by some constant $T_0$ on $U$. But then the nullspace of the form $Q^u_{T_0}$ is exactly $\mathcal{F}(u)$ for all $u \in U$; since $Q^u_{T_0}$ depends smoothly on $u$ and its nullspace is $m$-dimensional on $U$, its nullspace, and hence $\mathcal{F}$, is smooth on $U$.
\end{proof}
Our goal is to show that $\mathcal{F}$ is in fact integrable on $\mathcal{R}_m$; the integral manifold through $v \in \mathcal{R}_m$ will turn out to then be $\mathcal{P}(v)$, the set of vectors parallel to $v$. To apply the Frobenius theorem, we will use the following lemma, which states that curves tangent to $\mathcal{F}$ are exactly those curves consisting of parallel vectors:
\begin{lem}\label{F tan}
Let $\sigma : (-\epsilon, \epsilon) \to \mathcal{R}_m$ be a curve in $\mathcal{R}_m$; then $\sigma$ is tangent to $\mathcal{F}$ (for all $t$) iff for any $s, t \in (-\epsilon, \epsilon)$, the vectors $\sigma(s)$ and $\sigma(t)$ are parallel.
\end{lem}
\begin{proof}
First let $\sigma : (-\epsilon, \epsilon) \to \mathcal{R}_m$ be a curve tangent to $\mathcal{F}$. Consider the geodesic variation $\Phi : (-\epsilon, \epsilon) \times (-\infty, \infty) \to M$ determined by $\sigma$:
\[
\Phi(s, t) = \gamma_{\sigma(s)}(t).
\]
By construction and our identification of Jacobi fields with elements of $TTM$, we see that the variation field of $\Phi$ along the curve $\gamma_{\sigma(s)}$ is a Jacobi field corresponding exactly to the element $\dot{\sigma}(s) \in T_{\sigma(s)}TM$, and, by definition of $\mathcal{F}$, is therefore parallel. The curves $s \mapsto \Phi(s, t_0)$ are therefore all the same length $L$ (as $t_0$ varies), and thus for any $s, s'$ and all $t$
\[
d( \gamma_{\sigma(s)}(t), \gamma_{\sigma(s')}(t) ) \leq L.
\]
Thus (by definition) $\sigma(s)$ and $\sigma(s')$ are parallel.
Conversely, let $\sigma : (-\epsilon, \epsilon) \to \mathcal{R}_m$ consist of parallel vectors and construct the variation $\Phi$ as before. We wish to show that the variation field $J(t)$ of $\Phi$ along $\gamma_{\sigma(0)}$ is parallel along $\gamma_{\sigma(0)}$, and for this it suffices, by Proposition \ref{Eb bdd}, to show that it is bounded.
Our assumption is that the geodesics $\gamma_s(t) = \Phi(s, t)$ are all parallel (for varying $s$), and thus for any $s$ the function $d(\gamma_0(t), \gamma_s(t))$ is constant (by Proposition \ref{OSu76 2}). It follows that $||J(t)|| = ||J(0)||$ for all $t$, which gives the desired bound.
\end{proof}
Any curve $\sigma : (-\epsilon, \epsilon) \to \mathcal{R}_m$ defines a vector field along the curve (in $M$) $\pi \circ \sigma$ in the obvious way. It follows from the above lemma (and the symmetry $D_t \partial_s \Phi = D_s \partial_t \Phi$ for variations $\Phi$) that if $\sigma$ is a curve in $\mathcal{R}_m$ such that $\sigma(t)$ and $\sigma(s)$ are parallel for any $t, s$, then the associated vector field along $\pi \circ \sigma$ is a parallel vector field along $\pi \circ \sigma$.
We also require the following observation. Suppose that $p, q \in M$ are connected by a minimizing geodesic segment $\gamma : [0, a] \to M$, and let $v \in S_pM$. Then the curve $\sigma : [0, a] \to SM$ such that $\sigma(t)$ is the parallel transport of $v$ along $\gamma$ to $\gamma(t)$ is a minimizing geodesic in the Sasaki metric. It follows from this and the flat strip theorem that if $v, w$ are parallel and connected by a unique minimizing geodesic in $SM$, then this geodesic is given by parallel transport along the unique geodesic from $\pi(v)$ to $\pi(w)$ in $M$ and is everywhere tangent to $\mathcal{F}(v)$.
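To see that $\sigma$ is minimizing, note that $\pi : SM \to M$ is a Riemannian submersion for the Sasaki metric, so no curve in $SM$ joining $\sigma(0)$ to $\sigma(a)$ can be shorter than $d(p, q)$; on the other hand $\sigma$ is horizontal with $d\pi(\dot{\sigma}(t)) = \dot{\gamma}(t)$, so its length is exactly the length of $\gamma$, namely $d(p, q)$. That $\sigma$ is moreover a geodesic of the Sasaki metric is a standard property of parallel vector fields along geodesics.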
\begin{lem}
$\mathcal{F}$ is integrable as a distribution on $\mathcal{R}_m$, and, if $v \in \mathcal{R}_m$, then the integral manifold through $v$ is an open subset of $\mathcal{P}(v)$.
\end{lem}
\begin{proof}
To show integrability, we wish to show that $[X, Y]$ is tangent to $\mathcal{F}$ for vector fields $X, Y$ tangent to $\mathcal{F}$. If $\phi_t, \psi_s$ are the flows of $X, Y$, respectively, then $[X, Y]_v = \dot{\sigma}(0)$, where $\sigma$ is the curve
\[
\sigma(t) = \psi_{-\sqrt{t}} \phi_{-\sqrt{t}} \psi_{\sqrt{t}} \phi_{\sqrt{t}}(v).
\]
From Lemma \ref{F tan} we see that $\sigma(0)$ and $\sigma(t)$ are parallel for all small $t$, which, by the other implication in Lemma \ref{F tan}, shows that $[X, Y]_v \in \mathcal{F}(v)$ as desired. So $\mathcal{F}$ is integrable.
Now fix $v \in \mathcal{R}_m$ and let $Q$ be the integral manifold of $\mathcal{F}$ through $v$. By Lemma \ref{F tan}, $Q \subseteq \mathcal{P}(v)$. Let $w \in Q$ and let $U$ be a normal neighborhood of $w$ contained in $\mathcal{R}_m$ (in the Sasaki metric); to complete the proof it suffices to show that $U \cap \mathcal{P}(v) \subseteq Q$. Take $u \in U \cap \mathcal{P}(v)$. Then (by the observation preceding the lemma) the $SM$-geodesic from $w$ to $u$ is contained in $\mathcal{R}_m$ and consists of vectors parallel to $w$, and hence to $v$. Thus $u \in Q$.
\end{proof}
For $v \in \mathcal{R}_m$ it now follows that $\mathcal{P}(v) \cap \mathcal{R}_m$ is a smooth $m$-dimensional submanifold of $\mathcal{R}_m$, and since the $SM$-geodesic between nearby points of $\mathcal{P}(v) \cap \mathcal{R}_m$ is contained in $\mathcal{P}(v)$, we see that $\mathcal{P}(v)$ is totally geodesic.
Consider the projection map $\pi : \mathcal{P}(v) \to P_v$; its differential $d\pi$ takes $(X, 0) \in \mathcal{F}(v) \subseteq T_vSM$ to $X \in T_{\pi(v)}M$. It follows that $P_v$ is a smooth $m$-dimensional submanifold of $M$ near those points $p \in M$ which are footpoints of vectors $w \in \mathcal{P}(v) \cap \mathcal{R}_m$ (and that $\pi$ gives a local diffeomorphism of $\mathcal{P}(v)$ and $P_v$ near such vectors $w$). We would like to extend this conclusion to the whole of $P_v$, and for this we will make use of Lemma \ref{seq}.
\begin{prop}
For every $v \in \mathcal{R}_m$, the set $P_v$ is a convex $m$-dimensional smooth submanifold of $M$.
\end{prop}
\begin{proof}
Fix $v \in \mathcal{R}_m$. The flat strip theorem shows that $P_v$ contains the $M$-geodesic between any two of its points, i.e., is convex. So we must show that $P_v$ is an $m$-dimensional smooth submanifold of $M$.
For $u \in \mathcal{R}_m$, we let $C_{\epsilon}(u) \subseteq T_{\pi(u)}M$ be the intersection of the subspace $T_{\pi(u)}P_u$ with the $\epsilon$-ball in $T_{\pi(u)}M$. Since $\mathcal{F}$ is smooth and integrable the foliation $\mathcal{P}$ is continuous with smooth leaves on $\mathcal{R}_m$; it follows that we may fix $\epsilon > 0$ and a neighborhood $U \subseteq \mathcal{R}_m$ of $v$ such that for $u \in U$,
\[
\exp_{\pi(u)} C_{\epsilon}(u) = P_u \cap B_{\epsilon}(\pi(u)),
\]
where for $p \in M$ we denote by $B_{\epsilon}(p)$ the ball of radius $\epsilon$ about $p$ in $M$.
By the flat strip theorem, the above equation is preserved under the geodesic flow; that is, for all $t$ and all $u \in U$ we have
\[
\exp_{\pi(g^t u)} C_{\epsilon}(g^t u) = P_{g^t u} \cap B_{\epsilon}(\pi(g^t u)).
\]
This equation is clearly also preserved under isometries.
Now fix $w \in \mathcal{P}(v)$; our goal is to show that $P_v$ is smooth near $\pi(w)$. Choose by Lemma \ref{seq} sequences $v_n \to v, t_n \to \infty$, and $\phi_n \in \Gamma$ such that $(d\phi_n \circ g^{t_n})v_n \to w$. We may assume $v_n \in U$ for all $n$. For ease of notation, let $w_n = (d\phi_n \circ g^{t_n})v_n$; then for all $n$ we have $w_n \in \mathcal{R}_m$, and
\[
\exp_{\pi(w_n)} C_{\epsilon}(w_n) = P_{w_n} \cap B_{\epsilon}(\pi(w_n)).
\]
By passing to a subsequence if necessary, we may assume the sequence of $m$-dimensional subspaces $d\pi( \mathcal{F}(w_n))$ converges to a subspace $W \subseteq T_{\pi(w)}M$. Denote by $W_\epsilon$ the $\epsilon$-ball in $W$. Then taking limits in the above equation we see that
\[
\exp_{\pi(w)} W_\epsilon \subseteq P_w = P_v.
\]
To complete the proof, we note that since $P_v$ is convex (globally) and $m$-dimensional near $\pi(v)$, $P_v$ cannot contain an $(m+1)$-ball, for then convexity would show that it contains an $(m+1)$-ball near $\pi(v)$. Thus if $U' \subseteq B_{\epsilon}(\pi(w))$ is a normal neighborhood of $\pi(w)$, we must have
\[
P_w \cap U' = \exp_{\pi(w)}(W_\epsilon) \cap U',
\]
which shows that $P_v$ is a smooth $m$-dimensional submanifold of $M$ near $w$ and completes the proof.
\end{proof}
\begin{prop}
For every $v \in \mathcal{R}_m$, the set $P_v$ is an $m$-flat.
\end{prop}
\begin{proof}
Let $p = \pi(v)$. Choose a neighborhood $U$ of $v$ in $\mathcal{R}_m \cap T_{p}P_v$ such that for each $w \in U$, the geodesic $\gamma_w$ admits no nonzero parallel Jacobi field orthogonal to $P_v$. We claim $P_w = P_v$ for all $w \in U$.
To see this, recall that $T_p P_w$ is the span of $Y(0)$ for parallel Jacobi fields $Y(t)$ along $\gamma_w$. If $Y$ is such a field, then the component $Y^{\perp}$ of $Y$ orthogonal to $P_v$ is a bounded Jacobi field along $\gamma_w$, hence parallel, and therefore zero; it follows that $T_p P_w = T_p P_v$. Since $P_v$ and $P_w$ are totally geodesic, this gives $P_v = P_w$ as claimed.
But now take $m$ linearly independent vectors in $U$; by the above we may extend these to $m$ independent and everywhere parallel vector fields on $P_v$. Hence $P_v$ is flat.
\end{proof}
\begin{cor}\label{flats exist}
For every $v \in SM$, there exists a $k$-flat $F$ with $v \in S_{\pi(v)}F$.
\end{cor}
\begin{proof}
Let $v_n$ be a sequence of regular vectors with $v_n \to v$. Passing to a subsequence if necessary, we may assume there is some $m \geq k$ such that $v_n \in \mathcal{R}_m$ for all $n$. For each $n$ let $W_n$ be the $m$-dimensional subspace of $T_{\pi(v_n)}M$ such that $\exp(W_n) = P_{v_n}$. Passing to a further subsequence, we may assume $W_n \to W$, where $W$ is an $m$-dimensional subspace of $T_{\pi(v)}M$ containing $v$, and it is not difficult to see that $\exp W$ is an $m$-flat $F'$ with $v \in S_{\pi(v)}F'$. Since $m \geq k$, any $k$-dimensional affine subspace of $F' \cong \mathbb{R}^m$ containing $\gamma_v$ is the desired $k$-flat.
\end{proof}
\section{The angle lemma, and an invariant set at $\infty$}\label{S_angle}
The goal of the present section is to establish that $M(\infty)$ has a nonempty, proper, closed, $\Gamma$-invariant subset $X$. Our strategy is that of Ballmann \cite{Bal95} and Eberlein-Heber \cite{EbeHeb90}. In section \ref{S_complete} we will use this set to define a nonconstant function $f$ on $SM$, the ``angle from $X$'' function, which will be holonomy invariant, and this will show that the holonomy group of $M$ acts nontransitively on the unit sphere of each tangent space.
Roughly speaking $X$ will be the set of endpoints of vectors of maximum singularity in $SM$; more precisely, in the language of symmetric spaces, it will turn out that $X$ is the set of vectors which lie on the one-dimensional faces of Weyl chambers. To ``pick out'' these vectors from our manifold $M$, we will use the following characterization: For each $\zeta \in M(\infty)$, we may look at the longest curve $\zeta(t) : [0, \alpha(\zeta)] \to M(\infty)$ starting at $\zeta$ and such that
\[
\angle_q(\zeta(t), \zeta(s)) = |t - s|
\]
for every point $q \in M$; then $\zeta$ is ``maximally singular'' (i.e., $\zeta \in X$) if $\alpha(\zeta)$ (the length of the longest such curve) is as large as possible. One may check that in the case of a symmetric space this indeed picks out the one-dimensional faces of the Weyl chambers.
To show that the set so defined is proper, we will show that it contains no endpoint of a regular recurrent vector; this is accomplished by demonstrating that every such path starting at the endpoint of a regular recurrent vector extends to a longer such path. For this we will need a technical lemma that appears here as Corollary \ref{47}.
We begin with the following lemma, which shows that regular geodesics have to ``bend'' uniformly away from flats:
\begin{lem}\label{42}
Let $k = \rank M$, $v \in \mathcal{R}_k$, and let $\zeta = \gamma_v(-\infty), \eta = \gamma_v(\infty)$. Then there exists an $\epsilon > 0$ such that if $F$ is a $k$-flat in $M$ with $d(\pi(v), F) = 1$, then
\[
\angle(\zeta, F(\infty)) + \angle(\eta, F(\infty)) \geq \epsilon.
\]
\end{lem}
\begin{proof}
By contradiction. If the above inequality does not hold for any $\epsilon$, we can find a sequence $F_n$ of $k$-flats satisfying $d(\pi(v), F_n) = 1$ and
\[
\angle(\zeta, F_n(\infty)) + \angle(\eta, F_n(\infty)) < 1/n.
\]
By passing to a subsequence, we may assume $F_n \to F$ for some flat $F$ satisfying $d(\pi(v), F) = 1$, and $\eta, \zeta \in F(\infty)$. In particular, $F$ is foliated by geodesics parallel to $v$, so that $\mathcal{P}(v)$ is at least $(k+1)$-dimensional, contradicting $v \in \mathcal{R}_k$.
\end{proof}
This allows us to prove the following ``Angle Lemma'':
\begin{lem}\label{45}
Let $k = \rank M$. Let $v \in \mathcal{R}_k$ be recurrent and suppose $v$ points at $\eta_0 \in M(\infty)$. Then there exists $A > 0$ such that for all $\alpha \leq A$, if $\eta(t)$ is a path
\[
\eta(t) : [0, \alpha] \to M(\infty)
\]
satisfying $\eta(0) = \eta_0$ and
\[
\angle(\eta(t), \eta_0) = t
\]
for all $t \in [0, \alpha]$, then $\eta(t) \in P_v(\infty)$ for all $t \in [0, \alpha]$.
\end{lem}
\begin{proof}
Let $p = \pi(v)$ be the footpoint of $v$ and let $\xi = \gamma_v(-\infty)$. By Lemma \ref{42} we may fix $\epsilon > 0$ such that if $F$ is a $k$-flat with $d(p, F) = 1$, then
\[
\angle(\xi, F(\infty)) + \angle(\eta_0, F(\infty)) > \epsilon.
\]
Choose $\delta > 0$ such that if $w \in S_pM$ with $\angle_p(v, w) < \delta$ then $w \in \mathcal{R}_k$, and set $A = \tfrac{1}{2} \min \set{\delta, \epsilon}$. Fix $\alpha \leq A$.
For the sake of contradiction, suppose there exists a path $\eta(t) : [0, \alpha] \to M(\infty)$ as above, but for some time $a \leq \alpha$
\[
\eta(a) \notin P_v(\infty).
\]
For $0 \leq s \leq a$, let $\eta_p(s) \in S_pM$ be the vector pointing at $\eta(s)$; since $\alpha < \delta$, we have $\eta_p(s) \in \mathcal{R}_k$. Fixing more notation, let $w = \eta_p(a)$.
We claim $\eta_0 \notin P_w(\infty)$. To see this, suppose $\eta_0 \in P_w(\infty)$; then by convexity $P_w$ contains the geodesic $\gamma_v$, and since $\gamma_v$ is contained in a unique $k$-flat, we conclude $P_w = P_v$, which contradicts our assumption that $\eta(a) \notin P_v(\infty)$.
It follows from Proposition \ref{OSu76 4} that
\[
d(\gamma_v(t), P_w) \to \infty \text{ as } t \to \infty.
\]
Since $v$ is recurrent, we may fix $t_n \to \infty$ and $\phi_n \in \Gamma$ such that the sequence $v_n = (d\phi_n \circ g^{t_n})v$ converges to $v$. By the above we may also assume $d(\gamma_v(t_n), P_w) \geq 1$ for all $n$. Then, since $P_u$ depends continuously on $u \in \mathcal{R}_k$, there exists $s_n \in [0, a]$ such that
\[
d(\gamma_v(t_n), P_{\eta_p(s_n)}) = 1.
\]
\begin{figure}\label{fig:Pic5}
\end{figure}
We define a sequence of flats $F_n$ by
\[
F_n = \phi_n(P_{\eta_p(s_n)}).
\]
Notice that $F_n$ is indeed a flat, that $d(F_n, p) \to 1$, and that the geodesic $\gamma_{-v_n}$ intersects $F_n$ at time $t_n$. By Proposition \ref{OSu76 4}, we have
\[
d(\gamma_{-v_n}(t), F_n) \leq 1 \text{ for } 0 \leq t \leq t_n.
\]
By passing to a subsequence, we may assume $F_n \to F$ for some $k$-flat $F$ with $d(F, p) = 1$, and taking the limit of the above inequality, we see that $\gamma_{-v}(\infty) \in F(\infty)$. Thus Lemma \ref{42} guarantees
\[
\angle(\eta_0, F(\infty)) \geq \epsilon.
\]
On the other hand, consider the sequence $\eta(s_n)$. By passing to a further subsequence, we may assume $\phi_n(\eta(s_n)) \to \mu$; since (by definition) $\phi_n(\eta(s_n)) \in F_n(\infty)$, we have $\mu \in F(\infty)$. Then
\begin{align*}
\epsilon &\leq \angle(\eta_0, F(\infty)) \leq \angle(\eta_0, \mu) \\
&\leq \liminf_{n \to \infty} \angle(\phi_n(\eta_0), \phi_n(\eta(s_n))) \\
&= \liminf_{n \to \infty} \angle(\eta_0, \eta(s_n)) \leq a \leq \alpha \leq \frac{\epsilon}{2},
\end{align*}
where the inequality on the second line follows from Proposition \ref{210}. This is the desired contradiction.
\end{proof}
As we did in subsection \ref{sS_24}, we wish to extend this result not just to the $k$-flat $P_v$ containing the regular recurrent vector $v$, but to every $k$-flat having $\eta_0$ in its boundary at infinity.
\begin{prop}\label{46}
Let $v \in \mathcal{R}_k$ be recurrent and point at $\eta_0$, let $A$ be as in Lemma \ref{45} above, and let $\alpha \leq A$. Let $F$ be a $k$-flat with $\eta_0 \in F(\infty)$, and suppose there exists a path
\[
\eta(t) : [0, \alpha] \to M(\infty)
\]
with $\eta(0) = \eta_0$ and
\[
\angle(\eta(t), \eta_0) = t \text{ for all } t \in [0, \alpha].
\]
Then $\eta(t) \in F(\infty)$ for all $t \in [0, \alpha]$.
\end{prop}
\begin{proof}
Fix $q \in F$, and let $\eta_q \in S_qF$ point at $\eta_0$. Let $p = \pi(v)$, and let $\phi : S_qF \to S_pM$ be the map such that $w$ and $\phi(w)$ are asymptotic. Denote by $B^F_\alpha(\eta_q)$ the restriction to $S_qF$ of the closed $\alpha$-ball in the $\angle_q$-metric about $\eta_q$, and, similarly, denote by $B^{P_v}_\alpha(v)$ the restriction to $S_pP_v$ of the closed $\alpha$-ball in the $\angle_p$-metric about $v$. We will show that $\phi$ gives a homeomorphism $B^F_\alpha(\eta_q) \to B^{P_v}_\alpha(v)$.
We first take a moment to note why this proves the proposition. We let $\eta_p(t) \in S_pM$ be the vector pointing at $\eta(t)$. Lemma \ref{45} tells us that $\eta_p(t) \in B^{P_v}_\alpha(v)$ for $t \in [0, \alpha]$. Then since $\phi^{-1}$ takes $B^{P_v}_\alpha(v)$ into $B^F_\alpha(\eta_q)$, we see that $\eta(t) \in F(\infty)$ for such $t$.
It remains to show that $\phi$ gives such a homeomorphism. First, we check that $\phi$ takes $B^F_\alpha(\eta_q)$ into $B^{P_v}_\alpha(v)$. Let $w \in B^F_\alpha(\eta_q)$ and let
\[
\sigma : [0, \alpha] \to B^F_\alpha(\eta_q)
\]
be the $\angle_q$-geodesic with $\sigma(0) = \eta_q$ and $\sigma(a) = w$ for some time $a$. Let
\[
\tilde{\sigma} : [0, \alpha] \to M(\infty)
\]
be the path obtained by projecting $\sigma$ to $M(\infty)$. Then Corollary \ref{flatcorrect} guarantees that $\tilde{\sigma}$ satisfies the hypotheses of Lemma \ref{45}, and so we conclude that $\tilde{\sigma}(t) \in P_v(\infty)$ for all $t$, from which it follows that $\phi$ maps $B^F_\alpha(\eta_q)$ into $B^{P_v}_\alpha(v)$ as claimed.
Now, note that for all $w \in B^F_\alpha(\eta_q)$ we have
\[
\angle_q(w, \eta_q) = \angle_p(\phi(w), v),
\]
again by Corollary \ref{flatcorrect}. Therefore for each $r \in [0, \alpha]$, $\phi$ gives an injective continuous map of the sphere of radius $r$ in $B^F_\alpha(\eta_q)$ to the sphere of radius $r$ in $B^{P_v}_\alpha(v)$; but any injective continuous map between spheres of the same dimension is a homeomorphism (by compactness and invariance of domain), and it follows that $\phi$ gives a homeomorphism of $B^F_\alpha(\eta_q)$ and $B^{P_v}_\alpha(v)$ as claimed.
\end{proof}
\begin{cor}\label{47}
Let $v \in \mathcal{R}_k$ be recurrent and point at $\eta_0$, let $A$ be as in Lemma \ref{45}, and let $\alpha \leq A$. Suppose we have a path
\[
\eta(t) : [-\alpha, \alpha] \to M(\infty)
\]
with $\eta(0) = \eta_0$ and
\[
\angle(\eta(t), \eta(0)) = |t| \text{ for all } t.
\]
Then for all $q \in M$ and all $r, s \in [-\alpha, \alpha]$
\[
\angle_q(\eta(r), \eta(s)) = \angle(\eta(r), \eta(s)).
\]
\end{cor}
\begin{proof}
Choose two points $q_1, q_2 \in M$. Then by Corollary \ref{flats exist} there are $k$-flats $F_1, F_2$ through $q_1, q_2$, respectively, with $\eta_0 \in F_1(\infty) \cap F_2(\infty)$. By Proposition \ref{46}, the path $\eta(t)$ lifts to paths $\eta_1(t)$ in $S_{q_1}F_1$ and $\eta_2(t)$ in $S_{q_2}F_2$.
Fix $r, s \in [-\alpha, \alpha]$. Then for $i \in \set{1, 2}$ we have
\[
d(\gamma_{\eta_i(r)}(t), \gamma_{\eta_i(s)}(t)) = 2t\sin\Big(\tfrac{1}{2} \big( \angle_{q_i}(\eta(r), \eta(s))\big) \Big).
\]
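Here the distance may be computed inside the flat $F_i$: since the exponential map of $M$ is a diffeomorphism, geodesic segments between points of $M$ are unique, so the $F_i$-geodesic between two points of $F_i$ realizes their distance in $M$, and the Euclidean law of cosines in $F_i$ gives $d^2 = 2t^2(1 - \cos\theta) = 4t^2\sin^2(\theta/2)$, where $\theta = \angle_{q_i}(\eta(r), \eta(s))$.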
Since $d(\gamma_{\eta_1(r)}(t), \gamma_{\eta_2(r)}(t))$ and $d(\gamma_{\eta_1(s)}(t), \gamma_{\eta_2(s)}(t))$ are both bounded as $t \to \infty$, we must have $\angle_{q_1}(\eta_1(r), \eta_1(s)) = \angle_{q_2}(\eta_2(r), \eta_2(s))$. Thus $\angle_q(\eta(r), \eta(s))$ is independent of $q \in M$, which gives the result.
\end{proof}
\begin{prop}\label{48}
$M(\infty)$ contains a nonempty proper closed $\Gamma$-invariant subset.
\end{prop}
\begin{proof}
For each $\delta > 0$ define $X_\delta \subseteq M(\infty)$ to be the set of all $\xi \in M(\infty)$ such that there exists a path
\[
\xi(t) : [0, \delta] \to M(\infty)
\]
with $\xi(0) = \xi$ and
\[
\angle_q(\xi(t), \xi(s)) = |t - s|
\]
for all $t, s \in [0, \delta]$, and all $q \in M$.
Obviously $X_\delta$ is $\Gamma$-invariant. We claim it is closed. To this end, let $\xi_n \in X_\delta$ with $\xi_n \to \xi$, and choose associated paths
\[
\xi_n(t) : [0, \delta] \to M(\infty).
\]
By Arzela-Ascoli, some subsequence of these paths converges (pointwise, say) to a path $\xi(t)$, and this path satisfies
\[
\angle_q(\xi(t), \xi(s)) = \lim_{n \to \infty} \angle_q(\xi_n(t), \xi_n(s)) = |t - s|,
\]
so $\xi \in X_{\delta}$. Thus $X_\delta$ is closed; it follows that $X_\delta$ is compact.
We claim now that $X_\delta$ is nonempty for some $\delta > 0$. To see this choose a recurrent vector $v \in \mathcal{R}_k$ with footpoint $p = \pi(v)$, and say $v$ points at $\eta$. Let $A$ be as in Lemma \ref{45}, and let
\[
\eta(t) : [0, A] \to M(\infty)
\]
be the projection to $M(\infty)$ of any geodesic segment of length $A$ starting at $v$ in $S_pP_v$. Then by Corollary \ref{C}, for all $t \in [0, A]$
\[
\angle(\eta(t), \eta) = \angle_p(\eta(t), \eta) = t.
\]
Thus by Corollary \ref{47}, $\angle_q(\eta(s), \eta(t))$ is independent of $q \in M$, and so in particular for any such $q$
\[
\angle_q(\eta(s), \eta(t)) = \angle_p(\eta(s), \eta(t)) = |t - s|.
\]
So $\eta \in X_A$.
A few remarks about the relationships between the various $X_\delta$ are necessary before we proceed. First of all, notice that if $\delta_1 < \delta_2$ then $X_{\delta_2} \subseteq X_{\delta_1}$. Furthermore, for any $\delta$, we claim that $\xi \in X_{\delta}$ iff $\xi \in X_{\epsilon}$ for all $\epsilon < \delta$. One direction is clear. To see the other, suppose $\xi \in X_{\epsilon_n}$ for a sequence $\epsilon_n \to \delta$. Then there exist paths
\[
\xi_n(t) : [0, \epsilon_n] \to M(\infty)
\]
satisfying the requisite equality, and again Arzela-Ascoli guarantees for some subsequence the existence of a pointwise limit
\[
\xi(t) : [0, \delta] \to M(\infty)
\]
which will again satisfy the requisite equality. Therefore, if we let
\[
\beta = \sup \set{\delta | X_{\delta} \text{ is nonempty} }
\]
then
\[
X_{\beta} = \bigcap_{\delta < \beta} X_\delta.
\]
In particular, being a nested intersection of nonempty compact sets, $X_\beta$ is nonempty.
We now show that $\beta < \pi$. To see this, note that $\beta = \pi$ implies in particular that there exist two points $\zeta, \xi$ in $M(\infty)$ such that the angle between $\zeta$ and $\xi$ when seen from any point is $\pi$. This implies that there exists a vector field $Y$ on $M$ such that for any point $q$, $Y(q)$ points at $\zeta$ and $-Y(q)$ points at $\xi$. The vector field $Y$ is $\mathscr{C}^1$ by Theorem 1 (ii) in \cite{Esc77}, and the flat strip theorem now shows that the vector field $Y$ is holonomy invariant, so that $M$ is reducible. Thus $\beta < \pi$.
We claim $X_\beta$ is the desired set. We have already shown it is closed, nonempty, and $\Gamma$-invariant, so we have left to show that $X_\beta \neq M(\infty)$.
Fix a recurrent vector $v \in \mathcal{R}_k$, pointing at $\eta \in M(\infty)$; assume for the sake of contradiction that $\eta \in X_\beta$. Then there exists a path
\[
\eta(t) : [0, \beta] \to M(\infty)
\]
with $\eta(0) = \eta$ and $\angle_q(\eta(t), \eta(s)) = |t - s|$ for all $t, s \in [0, \beta]$. Let $p = \pi(v)$ be the footpoint of $v$, and let
\[
\eta_p(t) : [0, \beta] \to S_pP_v
\]
be the lift of $\eta(t)$. Then $\eta_p(t)$ is a geodesic segment in $S_pP_v$. We may choose $0 < \epsilon < A$, where $A$ is as in Lemma \ref{45}, so that $\beta + \epsilon < \pi$. Thus we may extend $\eta_p(t)$ to a geodesic
\[
\eta_p(t) : [-\epsilon, \beta] \to S_pP_v,
\]
and we may use this to extend $\eta(t)$. By Corollaries \ref{C} and \ref{47}, we have for all $q \in M$
\[
\angle_q(\eta(t), \eta(s)) = |t - s|,
\]
and so $\eta(-\epsilon) \in X_{\beta + \epsilon}$, contradicting our choice of $\beta$.
\end{proof}
\section{Completion of proof}\label{S_complete}
We now fix a nonempty proper closed $\Gamma$-invariant subset $Z \subseteq M(\infty)$ and define a function $f : SM \to \mathbb{R}$ by
\[
f(v) = \min_{\zeta \in Z} \angle_{\pi(v)} (\gamma_v(\infty), \zeta).
\]
It is clear that $f$ is $\Gamma$-invariant, and Lemma \ref{nonincr} gives that $f$ is nondecreasing under the geodesic flow (that is, $f(g^t v) \geq f(v)$). We use the next four lemmas to prove that $f$ is continuous, invariant under the geodesic flow, constant on equivalence classes of asymptotic vectors, and differentiable almost everywhere.
\begin{lem}
$f$ is continuous.
\end{lem}
\begin{proof}
For each $\zeta \in M(\infty)$ define a function $f_{\zeta} : SM \to \mathbb{R}$ by
\[
f_{\zeta}(v) = \angle_{\pi(v)} (\gamma_v(\infty), \zeta).
\]
We will show that the family $f_{\zeta}$ is equicontinuous at each $v \in SM$, from which continuity of $f$ follows.
Fix $v \in SM$ and $\epsilon > 0$. There is a neighborhood $U \subseteq SM$ of $v$ and an $a > 0$ such that
\[
d_a(u, w) = d(\gamma_u(0), \gamma_w(0)) + d(\gamma_u(a), \gamma_w(a))
\]
is a metric on $U$ giving the correct topology. Suppose $w \in U$ with $d_a(v, w) < \epsilon$. For $\zeta \in Z$, let $\zeta_{\pi(v)}, \zeta_{\pi(w)}$ be the vectors at $\pi(v), \pi(w)$, respectively, pointing at $\zeta$. Then
\[
|d_a(v, \zeta_{\pi(v)}) - d_a(w, \zeta_{\pi(w)})| \leq d_a(v, w) + d_a(\zeta_{\pi(v)}, \zeta_{\pi(w)}) \leq 3 \epsilon,
\]
by the triangle inequality for $d_a$ for the first inequality, and Proposition \ref{OSu76 2} for the second. This gives the desired equicontinuity at $v$.
\end{proof}
\begin{lem}
For $v \in SM$, we have $f(g^t v) = f(v)$ for all $t \in \mathbb{R}$.
\end{lem}
\begin{proof}
First assume $v$ is recurrent. Fix $t_n \to \infty$ and $\phi_n \in \Gamma$ so that $d\phi_n g^{t_n} v \to v$. Then
\[
f(d\phi_n g^{t_n} v) = f(g^{t_n} v)
\]
and the sequence $f(g^{t_n}v)$ therefore converges to $f(v)$. Assuming, as we may, that $t_n$ is increasing, this sequence is nondecreasing and all of its terms are bounded below by $f(v)$, so evidently $f(g^{t_n}v) = f(v)$ for all $n$, and it follows that $f(g^t v) = f(v)$ for all $t \in \mathbb{R}$.
Now we generalize to arbitrary $v$. Fix $t > 0$ and $\epsilon > 0$. By continuity of $f$ and the geodesic flow, we may choose $\delta > 0$ so that if $u \in SM$ is within $\delta$ of $v$, then
\[
|f(u) - f(v)| < \epsilon \text{ and } |f(g^t u) - f(g^t v)| < \epsilon.
\]
Then choose $u$ recurrent within $\delta$ of $v$ to see that
\[
|f(g^t v) - f(v)| \leq |f(g^t v) - f(g^t u) | + |f(g^t u) - f(u) | + |f(u) - f(v)| < 2\epsilon.
\]
Since $\epsilon$ was chosen arbitrarily, $f(g^t v) = f(v)$.
\end{proof}
\begin{lem}\label{f asymp}
Let $v, w \in SM$ be arbitrary. If either $v$ and $w$ are asymptotic or $-v$ and $-w$ are asymptotic, then $f(v) = f(w)$.
\end{lem}
\begin{proof}
If $v$ and $w$ are asymptotic, fix by Lemma \ref{seq} $t_n \to \infty$, $w_n \to w$, and $\phi_n \in \Gamma$, such that $(d\phi_n \circ g^{t_n})w_n \to v$. Then since $f$ is continuous,
\[
f(w) = \lim f(w_n) = \lim f((d\phi_n \circ g^{t_n})w_n) = f(v).
\]
On the other hand, if $-v$ and $-w$ are asymptotic, we may fix $t_n \to -\infty$, $w_n \to w$, and $\phi_n \in \Gamma$, such that $(d\phi_n \circ g^{t_n})w_n \to v$, and the exact same argument applies.
\end{proof}
\begin{lem}\label{lip}
$f$ is differentiable almost everywhere.
\end{lem}
\begin{proof}
Fix $v \in SM$; there is a neighborhood $U$ of $v$ and an $a > 0$ such that
\[
d_a(u, w) = d(\gamma_u(0), \gamma_w(0)) + d(\gamma_u(a), \gamma_w(a))
\]
is a metric on $U$ (giving the correct topology). Choose $u, w \in U$, and let $w' \in S_{\pi(u)}M$ be asymptotic to $w$. Then
\[
|f(u) - f(w)| = |f(u) - f(w')| \leq \angle_{\pi(u)} (u, w') \leq C d_a(u, w'),
\]
for some constant $C$. But note that
\begin{align*}
d_a(u, w') &= d(\gamma_u(a), \gamma_{w'}(a)) \leq d(\gamma_u(a), \gamma_w(a)) + d(\gamma_w(a), \gamma_{w'}(a)) \\
&\leq d(\gamma_u(a), \gamma_w(a)) + d(\gamma_w(0), \gamma_{w'}(0)) = d_a(u, w),
\end{align*}
by Proposition \ref{OSu76 2}. Therefore $f$ is Lipschitz with respect to the metric $d_a$ on $U$, and hence differentiable almost everywhere on $U$.
\end{proof}
From here on, the proof follows Ballmann \cite{Bal95}, $\S \text{IV}.6$, almost verbatim. We repeat his steps below for convenience.
We denote by $W^s(v), W^u(v) \subseteq SM$ the weak stable and unstable manifolds through $v$, respectively. Explicitly, $W^s(v)$ is the collection of those vectors asymptotic to $v$, and $W^u(v)$ the collection of those vectors $w$ such that $-w$ is asymptotic to $-v$.
\begin{lem}
$T_vW^s(v) + T_vW^u(v)$ contains the horizontal subspace of $T_vSM$.
\end{lem}
\begin{proof}
Following Ballmann, given $w \in T_{\pi(v)}M$ we let $B^+(w)$ denote the covariant derivative of the stable Jacobi field $J$ along $\gamma_v$ with $J(0) = w$. That is, $B^+(w) = J'(0)$ where $J$ is the unique Jacobi field with $J(0) = w$ and $J(t)$ bounded as $t \to \infty$. Similarly, $B^-(w)$ is the covariant derivative of the unstable Jacobi field along $\gamma_v$ with $J(0) = w$. In this notation,
\begin{align*}
T_vW^s(v) &= \set{(w, B^+(w)) | w \in T_{\pi(v)}M} & &\text{and} & T_vW^u(v) &= \set{(w, B^-(w)) | w \in T_{\pi(v)}M}.
\end{align*}
Both $B^+$ and $B^-$ are symmetric (as is shown in Eschenburg-O'Sullivan \cite{EscOsu76}). We let
\[
E_0 = \set{w \in T_{\pi(v)}M | B^+(w) = B^-(w) = 0}.
\]
Since $B^+$ and $B^-$ are symmetric, they map $T_{\pi(v)}M$ into the orthogonal complement $E_0^\perp$ of $E_0$.
The claim of the lemma is that any horizontal vector $(u, 0) \in T_vSM$ can be written in the form
\[
(u, 0) = (w_1, B^+(w_1)) + (w_2, B^-(w_2)).
\]
This immediately implies $w_2 = u - w_1$, so we are reduced to solving the equation
\[
-B^-(u) = B^+(w_1) - B^-(w_1),
\]
and for this it suffices to show that the operator $B^+ - B^-$ surjects onto $E_0^\perp$; since $B^+ - B^-$ maps $E_0^\perp$ into itself, it is enough to show that the restriction
\[
B^+ - B^- : E_0^\perp \to E_0^\perp
\]
is injective. Assuming $w \in E_0^\perp$, $B^+(w) = B^-(w)$ implies that the Jacobi field $J$ with $J(0) = w$ and $J'(0) = B^+(w) = B^-(w)$ is both stable and unstable, hence bounded, hence, by Proposition \ref{Eb bdd}, parallel; thus $w \in E_0$ and it follows that $w = 0$.
\end{proof}
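For orientation, consider the (rank one) model case of constant curvature $-1$: a Jacobi field orthogonal to $\dot{\gamma}_v$ that is bounded as $t \to +\infty$ has the form $J(t) = e^{-t}E(t)$ with $E$ parallel, so $B^+(w) = -w$ for $w \perp \dot{\gamma}_v$, and similarly $B^-(w) = w$; the tangential fields give $B^{\pm}(\dot{\gamma}_v) = 0$. Thus $E_0 = \mathbb{R}\dot{\gamma}_v$ and $B^+ - B^-$ acts as $-2\,\mathrm{Id}$ on $E_0^{\perp}$, illustrating the injectivity used in the proof.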
\begin{cor}
If $c$ is a piecewise smooth horizontal curve in $SM$ then $f \circ c$ is constant.
\end{cor}
\begin{proof}
Obviously it suffices to show the corollary for smooth curves $c$, so we assume $c$ is smooth. By Lemma \ref{lip}, $f$ is differentiable on a set of full measure $D$. By the previous lemma and Lemma \ref{f asymp}, if $\tilde{c}$ is a piecewise smooth horizontal curve such that $\tilde{c}(t) \in D$ for almost all $t$, then $f \circ \tilde{c}$ is constant (since $df(\dot{\tilde{c}}(t)) = 0$ whenever this formula makes sense).
Our next goal is to approximate $c$ by suitable such curves $\tilde{c}$. Let $l$ be the length of $c$, and parametrize $c$ by arc length. Extend the vector field $\dot{c}(t)$ along $c$ to a smooth horizontal unit vector field $H$ in a neighborhood of $c$. Then there is some smaller neighborhood $U$ of $c$ which is foliated by the integral curves of $H$, and by Fubini (since $D \cap U$ has full measure in $U$), there exists a sequence of smooth horizontal curves $\tilde{c}_r$ such that $\tilde{c}_r(t) \in D$ for almost all $t \in [0, l]$, and such that $\tilde{c}_r$ converges in the $\mathscr{C}^0$-topology to $c$. Since $f$ is constant on each curve $\tilde{c}_r$ by the argument in the previous paragraph and $f$ is continuous, we also have that $f$ is constant on $c$.
\end{proof}
Finally, an appeal to the Berger-Simons holonomy theorem proves the result:
\begin{rrthm}
Let $M$ be a complete, simply connected, irreducible Riemannian manifold with no focal points and rank $k \geq 2$. Assume that the $\Gamma$-recurrent vectors are dense in $SM$, where $\Gamma$ is the isometry group of $M$. Then $M$ is a symmetric space of noncompact type.
\end{rrthm}
\begin{proof}
By the previous corollary, the function $f$ is invariant under the holonomy group of $M$. However, it is nonconstant: $f$ vanishes on every vector pointing at a point of $Z$, while if $\gamma_v(\infty) \notin Z$ then $f(v) > 0$, since $Z$ is compact and $\zeta \mapsto \angle_{\pi(v)}(\gamma_v(\infty), \zeta)$ is continuous and positive on $Z$. Thus the holonomy group of $M$ is nontransitive and the Berger-Simons holonomy theorem implies that $M$ is symmetric.
\end{proof}
\section{Fundamental Groups}\label{S_FundGroup}
In this section $M$ is assumed to be a complete simply connected Riemannian manifold without focal points, and $\Gamma$ a discrete, \emph{cocompact} subgroup of isometries of $M$. We will also assume that $\Gamma$ acts properly and freely on $M$, so that $M/\Gamma$ is a closed Riemannian manifold.
Following Prasad-Raghunathan \cite{PraRag72} and Ballmann-Eberlein \cite{BalEbe87}, define for each nonnegative integer $i$ the subset $A_i(\Gamma)$ of $\Gamma$ to be the set of those $\phi \in \Gamma$ such that the centralizer $Z_{\Gamma}(\phi)$ contains a finite index free abelian subgroup of rank no greater than $i$. We sometimes denote $A_i(\Gamma)$ simply by $A_i$ when the group is understood.
We let $r(\Gamma)$ be the minimum $i$ such that $\Gamma$ can be written as a finite union of translates of $A_i$,
\[
\Gamma = \phi_1 A_i \cup \cdots \cup \phi_k A_i,
\]
for some $\phi_1, \dots, \phi_k \in \Gamma$. Finally, we define the \emph{rank} of $\Gamma$ by
\[
\rank(\Gamma) = \max\set{r(\Gamma^*) : \Gamma^* \text{ is a finite index subgroup of } \Gamma}.
\]
Ballmann-Eberlein have shown that $\rank(\Gamma) = \rank(M)$ when $M$ has nonpositive curvature. In this section, we generalize their result to no focal points:
\begin{thm}
Let $M$ be a complete, simply connected Riemannian manifold with no focal points, and let $\Gamma$ be a discrete, cocompact subgroup of isometries of $M$ acting freely and properly. Then $\rank(\Gamma) = \rank(M)$.
\end{thm}
Some remarks are in order. First, the Higher Rank Rigidity Theorem proved earlier in this paper guarantees that $M$ has a de Rham decomposition
\[
M = M_S \times E_r \times M_1 \times \cdots \times M_l,
\]
where $M_S$ is a higher rank symmetric space, $E_r$ is $r$-dimensional Euclidean space, and $M_i$ is a rank one manifold of no focal points, for $1 \leq i \leq l$. Uniqueness of the de Rham decomposition implies that $\Gamma$ admits a finite index subgroup $\Gamma^*$ which preserves the de Rham splitting.
We assume for the moment that $M$ has no Euclidean factor, that is, $r = 0$ in the decomposition above. We then have the following lemma:
\begin{lem}
Let $M$ have no flat factors, and let $\Gamma$ be a cocompact subgroup of isometries of $M$. Then $M$ splits as a Riemannian product $M = M_S \times M_1$, where $M_S$ is symmetric and $M_1$ has discrete isometry group.
\end{lem}
\begin{proof}
Let $I_0$ denote the connected component of the identity of the isometry group of $M$. By Theorem 3.3 of Druetta \cite{Dru83}, $\Gamma$ has no normal abelian subgroups. Then Theorem 3.3 of Farb-Weinberger \cite{FarWei08} shows that $I_0$ is semisimple with finite center, and Proposition 3.1 of the same paper shows that $M$ decomposes as a Riemannian warped product
\[
N \times_f B,
\]
where $N$ is locally symmetric of nonpositive curvature, and $\Isom(B)$ is discrete. We claim that such a warped product must be trivial, which establishes the lemma.
Thus it suffices to show that a compact nontrivial Riemannian warped product must have focal points: Let $N \times_f B$ be a Riemannian warped product, where $f : B \to \mathbb{R}_{> 0}$ is the warping function. If $f$ is not constant on $B$, there exists a geodesic $\gamma$ in $B$ such that $f$ is not constant on $\gamma$. Letting $\sigma$ be a unit speed geodesic in $N$, we construct the variation $\Gamma(s, t) = (\sigma(s), \gamma(t))$. It is then easy to see that the variation field $J(t) = \partial_s \Gamma(0, t)$ of this variation satisfies
\[
||J(t)|| = f(\gamma(t)),
\]
which is bounded but nonconstant, so that $N \times_f B$ must have focal points.
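In more detail, and under the convention (consistent with the displayed formula for $||J(t)||$) that the warped metric is $g = f^2\,g_N \oplus g_B$ and that $\sigma$ has unit speed with respect to $g_N$: the leaves $\{x\} \times B$ are totally geodesic, so each curve $t \mapsto \Gamma(s, t)$ is a geodesic, $J$ is a Jacobi field along $t \mapsto (\sigma(0), \gamma(t))$, and
\[
\|J(t)\|^2 = g\big(\partial_s \Gamma(0, t), \partial_s \Gamma(0, t)\big) = f^2(\gamma(t))\, g_N\big(\dot{\sigma}(0), \dot{\sigma}(0)\big) = f^2(\gamma(t)).
\]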
\end{proof}
We also have the following useful splitting theorem:
\begin{prop}
Let $M = M_1 \times M_2$ have no flat factors, and suppose $M_1$ has discrete isometry group. Let $\Gamma$ be a discrete, cocompact subgroup of isometries of $M$. Then $\Gamma$ admits a finite index subgroup that splits as $\Gamma_1 \times \Gamma_2$, where $\Gamma_1 \subseteq \Isom(M_1)$ and $\Gamma_2 \subseteq \Isom(M_2)$.
\end{prop}
\begin{proof}
By the uniqueness of the de Rham decomposition of $M$, we may pass to a finite index subgroup to assume that $\Gamma$ preserves the decomposition $M = M_1 \times M_2$, that is, $\Gamma \subseteq \Isom(M_1) \times \Isom(M_2)$. We let $\pi_i : \Gamma \to \Isom(M_i)$ be the projection maps for $i = 1, 2$. Abusing notation, we also denote by $\pi_i : M \to M_i$ the projection maps.
We wish to show that
\[
\pi_1(\ker \pi_2) \times \pi_2(\ker \pi_1)
\]
is a finite index subgroup of $\Gamma$, and for this it suffices to construct a compact coarse fundamental domain.
Let $F$ be a compact fundamental domain for the action of $\Gamma$, and let $H_1 \subseteq M_1$ be the Dirichlet fundamental domain for $\pi_1 \Gamma$. The set of all $a \in \pi_1 \Gamma$ such that $a H_1 \cap \pi_1 F \neq \emptyset$ is finite; denote its elements by $a_1, \dots, a_k$, and fix $b_1, \dots, b_k \in \Isom(M_2)$ such that $(a_i, b_i) \in \Gamma$ for each $i$. Consider the compact set
\[
K_2 = (a_1^{-1}, b_1^{-1}) F \cup \cdots \cup (a_k^{-1}, b_k^{-1}) F;
\]
we claim $H_1 \times M_2 \subseteq (\ker \pi_1)K_2$.
To see this let $q_1 \in H_1, q_2 \in M_2$. There exists $(p_1, p_2) \in F$ and some $\gamma \in \Gamma$ such that $\gamma(p_1, p_2) = (q_1, q_2)$, and $\gamma$ has the form $\gamma = (a_i^{-1}, \gamma_2)$ for some $\gamma_2 \in \Isom(M_2)$. But then
\[
(q_1, q_2) \in (1, \gamma_2 b_i)(a_i^{-1}, b_i^{-1}) F \subseteq (\ker\pi_1)K_2.
\]
This establishes our claim.
Note that discreteness of $\Isom(M_1)$ was used only to show $\pi_1 \Gamma$ is discrete. Thus we may now repeat the argument with $\pi_2 \ker \pi_1$ (which we have just established is cocompact) in place of $\pi_1 \Gamma$, and $\pi_1 \ker \pi_2$ in place of $\pi_2 \ker \pi_1$: We let $H_2$ be a fundamental domain for $\pi_2 \ker \pi_1$, and we obtain a compact set $K_1$ such that $M_1 \times H_2 \subseteq (\ker \pi_2)K_1$.
It is now not difficult to see that $K_1$ is a coarse fundamental domain for $\pi_1(\ker \pi_2) \times \pi_2(\ker \pi_1)$.
\end{proof}
As a consequence of these lemmas and the Rank Rigidity Theorem proven earlier in this paper, if $M$ has no flat factors, then it admits a decomposition
\[
M = M_S \times M_1 \times \cdots \times M_l,
\]
where each of the $M_i, 1 \leq i \leq l$, has rank one and discrete isometry group, and furthermore that our group $\Gamma$ has a finite index subgroup $\Gamma^*$ splitting as
\[
\Gamma^* = \Gamma_S \times \Gamma_1 \times \cdots \times \Gamma_l.
\]
Ballmann-Eberlein have shown (Theorem 2.1 in \cite{BalEbe87}) that $\rank(\Gamma^*) = \rank(\Gamma)$ and that the rank of $\Gamma^*$ is the sum of the rank of $\Gamma_S$ and the ranks of the $\Gamma_i$. Prasad-Raghunathan have shown that $\rank(\Gamma_S) = \rank(M_S)$ in \cite{PraRag72}.
Therefore, to finish the proof in the case where $M$ has no flat factors, we need to show that $\rank(\Gamma) = \rank(M)$ in the case where $M$ is irreducible, rank one, and has discrete isometry group. We proceed to do this now; we mimic the geometric construction of Ballmann-Eberlein, and for this we first generalize a number of lemmas about rank one geodesics in manifolds of nonpositive curvature to the no focal points case.
\subsection{Rank one $\Gamma$-periodic vectors.}
The following series of lemmas generalizes the work of Ballmann in \cite{Bal82}. As in that paper, we will be interested in geodesics $\gamma$ that are $\Gamma$-periodic, i.e., such that there exist $\phi \in \Gamma$ and some $a \in \mathbb{R}$ with $\phi \circ \gamma(t) = \gamma(t + a)$ for all $t$. Such a geodesic $\gamma$ will be called \emph{axial}, or an \emph{axis} of $\phi$, and $\phi$ will be said to translate $\gamma$ with \emph{period} $a$.
We denote by $\overline{M}$ the union $M \cup M(\infty)$, and for each tangent vector $v$ and each $\epsilon$ we define the cone $C(v, \epsilon) \subseteq \overline{M}$ to be the set of those $x \in \overline{M}$ such that the geodesic from $\pi(v)$ to $x$ makes angle less than $\epsilon$ with $v$. The sets $C(v, \epsilon)$ together with the open subsets of $M$ form a subbasis for a topology on $\overline{M}$, called the \emph{cone topology}. Goto \cite{Got79} has shown that the cone topology is the unique topology on $\overline{M}$ with the property that for any $p \in M$, the exponential map is a homeomorphism of $\overline{T_pM}$ with $\overline{M}$ (where the former is given the cone topology).
If $p \in M$ and $q \in \overline{M}$ with $q \neq p$, we denote by $\gamma_{pq}$ the unit speed geodesic through $p$ and $q$ with $\gamma_{pq}(0) = p$ (for $q \in M(\infty)$ this is the geodesic from $p$ pointing at $q$). We denote by $\gamma(\infty)$ the element of $M(\infty)$ at which $\gamma$ points, and analogously for $\gamma(-\infty)$. Note that if $\gamma$ is a geodesic and $t_n \to \infty$, then $\gamma(t_n) \to \gamma(\infty)$ in the cone topology on $\overline{M}$. Moreover, if $p_n \in \overline{M}$ and $p_n \to \zeta \in M(\infty)$, then for $p \in M$ the geodesics $\gamma_{pp_n}$ converge to $\gamma_{p\zeta}$. This follows from considering $\overline{T_pM}$ and the result of Goto cited above. More generally, we have the following lemma:
\begin{lem}\label{A_Aa}
Let $p, p_n \in M$ with $p_n \to p$, and let $x_n, \zeta \in \overline{M}$ with $x_n \to \zeta$. Then $\dot{\gamma}_{p_nx_n}(0) \to \dot{\gamma}_{p\zeta}(0)$.
\end{lem}
\begin{proof}
First pass to any convergent subsequence of $\dot{\gamma}_{p_nx_n}(0)$; say this subsequence converges to $\dot{\gamma}_{p\xi}(0)$, where $\xi \in M(\infty)$. Suppose for the sake of contradiction that $\xi \neq \zeta$. Let $c = d(\gamma_{p\zeta}(1), \gamma_{p\xi}(1)) > 0$. By the remarks preceding the lemma, we may choose $n$ large enough so that each of
\[
d(p_n, p), \; d(\gamma_{p_nx_n}(1), \gamma_{p\xi}(1)), \text{ and } d(\gamma_{px_n}(1), \gamma_{p\zeta}(1))
\]
is strictly smaller than $c/3$. Proposition \ref{OSu76 1} shows that $d(\gamma_{p_nx_n}(1), \gamma_{px_n}(1)) < c/3$, and the triangle inequality gives the desired contradiction:
\begin{align*}
c &= d(\gamma_{p\zeta}(1), \gamma_{p\xi}(1)) \\
&\leq d(\gamma_{p\zeta}(1), \gamma_{px_n}(1)) + d(\gamma_{px_n}(1), \gamma_{p_nx_n}(1)) + d(\gamma_{p_nx_n}(1), \gamma_{p\xi}(1))
\\ &< c.
\end{align*}
\end{proof}
Finally, we say that a geodesic $\gamma$ \emph{bounds a flat half-strip of width $c$} if there exists an isometric immersion $\Phi: \mathbb{R} \times [0, c) \to M$ such that $\Phi(t, 0) = \gamma(t)$, and that $\gamma$ \emph{bounds a flat half-plane} if there exists such $\Phi$ with $c = \infty$.
\begin{lem}\label{A_2.1.a}
Let $\gamma$ be a geodesic, and suppose there exist
\begin{align*}
p_k &\in C(-\dot{\gamma}(0), 1/k) \cap M & q_k &\in C(\dot{\gamma}(0), 1/k) \cap M
\end{align*}
such that $d(\gamma(0), \gamma_{p_k q_k}) \geq c > 0$ for all $k$. Then $\gamma$ is the boundary of a flat half-strip of width $c$.
\end{lem}
\begin{proof}
For each $k$ let $\tilde{p}_k, \tilde{q}_k$ be the points on $\gamma$ closest to $p_k, q_k$, respectively. Let $b_k(s)$ be a smooth path with $b_k(0) = \tilde{p}_k, \; b_k(1) = p_k$, and similarly let $c_k(s)$ be a smooth path with $c_k(0) = \tilde{q}_k, \; c_k(1) = q_k$. We may further choose $b_k$ so that the angle
\[
\angle_{\gamma(0)}(\tilde{p}_k, b_k(s))
\]
is an increasing function of $s$, and similarly for $c_k$. Finally, let $\sigma_{k,s}(t)$ be the unit speed geodesic through $b_k(s)$ and $c_k(s)$, parameterized so that $\sigma_{k,0}(0) = \gamma(0)$, and such that $s \mapsto \sigma_{k,s}(0)$ is a continuous path in $M$.
By hypothesis, $d(\sigma_{k,1}(0), \gamma(0)) \geq c$. Thus there exists $s_k, 0 < s_k \leq 1$, with $d(\sigma_{k, s_k}(0), \gamma(0)) = c$. Passing to a subsequence, we may assume the geodesics $\sigma_{k, s_k}$ converge as $k \to \infty$ to a geodesic $\sigma$ with $d(\sigma(0), \gamma(0)) = c$.
Finally, any convergent subsequence of $b_k(s_k)$, or of $c_k(s_k)$, must converge to a point on $\gamma$, or one of the endpoints of $\gamma$. However, Lemma \ref{A_Aa}, and the fact that $\sigma \neq \gamma$, shows that the only possibility is $b_k(s_k) \to \gamma(-\infty)$ and $c_k(s_k) \to \gamma(\infty)$. Another application of Lemma \ref{A_Aa} shows that $\sigma$ is parallel to $\gamma$. The flat strip theorem now gives the result.
\end{proof}
\begin{lem}\label{A_2.1.b}
Let $\gamma$ be rank one, and $c > 0$. Then there exists $\epsilon > 0$ such that if $x \in C(-\dot{\gamma}(0), \epsilon)$, $y \in C(\dot{\gamma}(0), \epsilon)$, then there is a geodesic connecting $x$ and $y$.
Furthermore, if $\sigma$ is a geodesic with $\sigma(-\infty) \in C(-\dot{\gamma}(0), \epsilon)$ and $\sigma(\infty) \in C(\dot{\gamma}(0), \epsilon)$, then $\sigma$ does not bound a flat half plane, and $d(\gamma(0), \sigma) \leq c$.
\end{lem}
\begin{proof}
By Lemma \ref{A_2.1.a}, there exists $\epsilon > 0$ such that $d(\gamma_{pq}, \gamma(0)) \leq c$ if $p \in C(-\dot{\gamma}(0), \epsilon) \cap M$ and $q \in C(\dot{\gamma}(0), \epsilon) \cap M$. We choose sequences $p_n \to x$ and $q_n \to y$; then some subsequence of $\gamma_{p_n q_n}$ converges to a geodesic connecting $x$ and $y$.
To prove the second part, note that all geodesics $\tau$ with endpoints in $C(-\dot{\gamma}(0), \epsilon)$ and $C(\dot{\gamma}(0), \epsilon)$ satisfy $d(\gamma(0), \tau) \leq c$ by choice of $\epsilon$. However, if $\sigma$ bounds a flat half-plane then there are geodesics $\tau_n$ with the same endpoints as $\sigma$ but with $d(\gamma(0), \tau_n) \to \infty$, a contradiction.
\end{proof}
As a corollary of the above, we see that if $\gamma$ is rank one and $\gamma_n$ is a sequence of geodesics with $\gamma_n(-\infty) \to \gamma(-\infty)$ and $\gamma_n(\infty) \to \gamma(\infty)$, then $\gamma_n \to \gamma$.
\begin{lem}\label{A_B}
Let $\gamma$ be a recurrent geodesic, and suppose $\phi_n$ is a sequence of isometries such that $d\phi_n(\dot{\gamma}(t_n)) \to \dot{\gamma}(0)$, where $t_n$ increases to $\infty$. Further suppose that there exist $x, \zeta \in M(\infty)$ with $\phi_n(x) \to \zeta$, where $\zeta \neq \gamma(\infty)$ and $\zeta \neq \gamma(-\infty)$. Then $\gamma$ is the boundary of a flat half plane $F$, and $\zeta \in F(\infty)$.
\end{lem}
\begin{proof}
For each $s \in \mathbb{R}$ let $\tau_s$ be the geodesic with $\tau_s(0) = \gamma(s)$ and $\tau_s(\infty) = x$, and let $\sigma_s$ be the geodesic with $\sigma_s(0) = \gamma(s)$ and $\sigma_s(\infty) = \zeta$. Fix $t > 0$.
We first claim that for each $\epsilon > 0$, there exists an infinite subset $L(\epsilon) \subseteq \mathbb{N}$ such that for each $N \in L(\epsilon)$ there exists an infinite subset $L_N(\epsilon) \subseteq \mathbb{N}$ such that for $n \in L_N(\epsilon)$,
\[
d(\tau_{t_N}(t), \tau_{t_n}(t)) \geq t_n - t_N - \epsilon.
\]
Let us first show this claim.
By passing to a subsequence, we may assume
\[
d(\phi_n\gamma(t_n), \gamma(0)) < \epsilon/3 \text{ and } d(\phi_n \tau_{t_n}(t), \sigma_0(t)) < \epsilon/3
\]
for all $n \geq 1$; the second inequality follows from recurrence of $\gamma$, the fact that $\phi_n(x) \to \zeta$, and Proposition \ref{OSu76 2}.
Assume for the sake of contradiction that our claim is false; then again by passing to a subsequence, we may assume that for $m > n \geq 1$
\[
d(\tau_{t_n}(t), \tau_{t_m}(t)) < t_m - t_n - \epsilon.
\]
From this and the previous inequality, we conclude that for $m > n \geq 1$
\[
d(\phi_n^{-1}\sigma_0(t), \phi_m^{-1}\sigma_0(t)) < t_m - t_n - \epsilon/3.
\]
Choose $l$ such that $l\epsilon/3 > 2t + \epsilon$. Then
\begin{align*}
d(\gamma(t_1), \gamma(t_l)) &\leq d(\gamma(t_1), \phi_1^{-1}\gamma(0)) + d(\phi_1^{-1}\gamma(0), \phi_1^{-1}\sigma_0(t)) + \sum_{i = 1}^{l-1} d(\phi_i^{-1}\sigma_0(t), \phi_{i+1}^{-1}\sigma_0(t)) \\
&+ d(\phi_l^{-1}\sigma_0(t), \phi_l^{-1}\gamma(0)) + d(\phi_l^{-1}\gamma(0), \gamma(t_l)) \\
&< \epsilon/3 + t + \sum_{i = 1}^{l-1} (t_{i+1} - t_i - \epsilon/3) + t + \epsilon/3 \\
&\leq 2t + \epsilon - l\epsilon/3 + t_l - t_1 \\
&< t_l - t_1,
\end{align*}
contradicting the fact that $\gamma$ is length minimizing. This proves our claim.
The next step of the proof is to show that for $s > 0$
\[
d(\sigma_0(t), \sigma_s(t)) = s.
\]
Fix such $s$. Note that $d(\sigma_0(t), \sigma_s(t)) \leq s$ by Proposition \ref{OSu76 2}. Suppose for the sake of contradiction that
\[
d(\sigma_0(t), \sigma_s(t)) = s - 3\epsilon
\]
for some $\epsilon > 0$. Choose $N \in L(\epsilon)$ large enough such that
\[
d(\phi_N \tau_{t_N}(t), \sigma_0(t)) < \epsilon \text{ and } d(\phi_N \tau_{t_N + s}(t), \sigma_s(t)) < \epsilon.
\]
As before, that this can be done follows from recurrence of $\gamma$, the fact that $\phi_n(x) \to \zeta$, and Proposition \ref{OSu76 2}. Then if $n \in L_N(\epsilon)$ with $t_n > t_N + s$, we find
\begin{align*}
d(\tau_{t_N}(t), \tau_{t_n}(t)) &= d(\phi_N \tau_{t_N}(t), \phi_N \tau_{t_n}(t)) \\
&\leq d(\phi_N \tau_{t_N}(t), \sigma_0(t)) + d(\sigma_0(t), \sigma_s(t)) \\
&+ d(\sigma_s(t), \phi_N\tau_{t_N + s}(t)) + d(\phi_N \tau_{t_N + s}(t), \phi_N \tau_{t_n}(t)) \\
&< \epsilon + (s - 3\epsilon) + \epsilon + t_n - (t_N + s) \\
&= t_n - t_N - \epsilon,
\end{align*}
contradicting the definitions of $L(\epsilon), L_N(\epsilon)$. Hence
\[
d(\sigma_s(t), \sigma_0(t)) = s
\]
as claimed. In fact, the above argument shows that for all $r, s \in \mathbb{R}$
\[
d(\sigma_r(t), \sigma_s(t)) = |r - s|.
\]
We now complete the proof. Lemma 2 in O'Sullivan \cite{OSu76} shows that the curves $\theta_t$ defined by $\theta_t(s) = \sigma_s(t)$ are geodesics, and they are evidently parallel to $\gamma$. Thus the flat strip theorem guarantees for each $t$ the existence of a flat $F_t$ containing $\gamma$ and $\theta_t$; since $F_t$ is totally geodesic, it contains each of the geodesics $\sigma_s$. (We remark, of course, that all the $F_t$ coincide.)
\end{proof}
\begin{cor}\label{A_E}
Let $\gamma$ be a recurrent geodesic, and suppose there exists $x \in M(\infty)$ such that $\angle_{\gamma(t)}(x, \gamma(\infty)) = \epsilon$ for all $t$, where $0 < \epsilon < \pi$. Then $\gamma$ is the boundary of a flat half-plane.
\end{cor}
\begin{proof}
If $\phi_n$ is a sequence of isometries such that $d\phi_n(\dot{\gamma}(t_n)) \to \dot{\gamma}(0)$, for $t_n \to \infty$, one sees that any accumulation point $\zeta$ of $\phi_n(x)$ in $M(\infty)$ must satisfy $\angle_{\gamma(0)}(\gamma(\infty), \zeta) = \epsilon$, and so the previous lemma applies.
\end{proof}
\begin{cor}\label{A_2.4}
Let $\phi$ be an isometry with axis $\gamma$ and period $a$. Suppose $B \subseteq M(\infty)$ is nonempty, compact, $\phi(B) \subseteq B$, and neither $\gamma(\infty)$ nor $\gamma(-\infty)$ is in $B$. Then $\gamma$ bounds a flat half plane.
\end{cor}
\begin{proof}
Take $\phi_n = \phi^n$ and $t_n = na$, along with the recurrent geodesic $-\gamma$, in Lemma \ref{A_B}; for $x \in B$, any accumulation point $\zeta$ of the sequence $\phi^n(x)$ lies in the compact set $B$ and is therefore distinct from $\gamma(\infty)$ and $\gamma(-\infty)$.
\end{proof}
\begin{lem}\label{A_2.5}
Let $\phi$ be an isometry with rank one axis $\gamma$ and period $a$. Then for all $\epsilon, \delta$ with $0 < \epsilon, \delta < \pi$ and all $t \in \mathbb{R}$ there exists $s$ with
\[
\overline{C(\dot{\gamma}(s), \delta)} \subseteq C(\dot{\gamma}(t), \epsilon).
\]
\end{lem}
\begin{proof}
Suppose otherwise; then there exist $\epsilon, \delta, t$ as above such that the above inclusion fails for every $s$. In particular we may choose for each $n$ a point $z_n$ with
\begin{align*}
z_n &\in \overline{C(\dot{\gamma}(na), \delta)} & z_n &\notin C(\dot{\gamma}(t), \epsilon).
\end{align*}
Then if we set $x_n = \phi^{-n}(z_n)$, we have $x_n \in \overline{C(\dot{\gamma}(0), \delta)}$, and none of $x_n, \phi(x_n), \dots, \phi^n(x_n)$ is in $C(\dot{\gamma}(t), \epsilon)$.
Thus if we let $B$ be the set
\[
B = \set{x \in M(\infty) \cap \overline{C(\dot{\gamma}(0), \delta)} : \phi^n(x) \notin C(\dot{\gamma}(t), \epsilon) \text{ for all } n},
\]
we see that $B$ is nonempty (it contains any accumulation point of $x_n$) and satisfies the other requirements of Corollary \ref{A_2.4}, so $\gamma$ is the boundary of a flat half plane, contradicting the assumption that $\gamma$ has rank one.
\end{proof}
\begin{thm}\label{A_2.2}
Let $\phi$ be an isometry with axis $\gamma$ and period $a$. The following are equivalent:
\begin{enumerate}
\item $\gamma$ is not the boundary of a flat half plane;
\item Given $\overline{M}$-neighborhoods $U$ of $\gamma(-\infty)$ and $V$ of $\gamma(\infty)$, there exists $N \in \mathbb{N}$ with $\phi^n(\overline{M} - U) \subseteq V$ and $\phi^{-n}(\overline{M} - V) \subseteq U$ whenever $n \geq N$; and
\item For any $x \in M(\infty)$ with $x \neq \gamma(\infty)$, there exists a geodesic joining $x$ and $\gamma(\infty)$, and none of these geodesics are the boundary of a flat half plane.
\end{enumerate}
\end{thm}
\begin{proof}
$(1 \Rightarrow 2)$ By Lemma \ref{A_2.5} we can find $s \in \mathbb{R}$ with
\begin{align*}
&\overline{C(-\dot{\gamma}(-s), \pi/2)} \subseteq U, & &\overline{C(\dot{\gamma}(s), \pi/2)} \subseteq V.
\end{align*}
If $Na > 2s$ then for $n \geq N$
\begin{align*}
\phi^n(\overline{M} - U) &\subseteq \phi^n(\overline{M} - C(-\dot{\gamma}(-s), \pi/2)) \\
&\subseteq \overline{C(\dot{\gamma}(s), \pi/2)} \\
&\subseteq V,
\end{align*}
and analogously for $U$ and $V$ swapped.
$(1 \Rightarrow 3)$ By Lemma \ref{A_2.1.b} we can find $\epsilon > 0$ such that for $y \in C(-\dot{\gamma}(0), \epsilon)$ there exists a geodesic from $y$ to $\gamma(\infty)$ which does not bound a flat half plane. But by $(2)$ we can find $n$ such that $\phi^{-n}(x) \in C(-\dot{\gamma}(0), \epsilon)$.
$(2 \Rightarrow 1)$ and $(3 \Rightarrow 1)$ are obvious (by checking the contrapositive).
\end{proof}
\begin{prop}\label{A_D}\label{A_2.13}
If $\gamma$ is rank one and $U, V$ are neighborhoods of $\gamma(-\infty)$ and $\gamma(\infty)$, then there exists an isometry $\phi \in \Gamma$ with rank one axis $\sigma$, where $\sigma(-\infty) \in U$ and $\sigma(\infty) \in V$.
\end{prop}
\begin{proof}
Since $\Gamma$-recurrent vectors are dense in $SM$, we may assume $\gamma$ is recurrent, and take $\phi_n \in \Gamma$, $t_n \to \infty$, such that $d\phi_n \dot{\gamma}(t_n) \to \dot{\gamma}(0)$. By Lemma \ref{A_2.1.b} we may replace $U$ and $V$ by smaller neighborhoods such that for any $x \in U, y \in V$, there exists a rank one geodesic joining $x$ and $y$. By the flat strip theorem, such a geodesic is unique.
The argument in the proof of Lemma 2.13 in Ballmann \cite{Bal82}, using Corollary \ref{A_E} in place of Ballmann's Proposition 1.2, shows that for sufficiently large $n$, $\phi_n$ has fixed points $x_n \in U$ and $y_n \in V$. Then $\phi_n$ must fix the oriented geodesic $\sigma_n$ from $x_n$ to $y_n$. Since $d(\phi_n \gamma(0), \gamma(0)) \to \infty$, but $d(\sigma_n, \gamma(0))$ is uniformly bounded (again by Lemma \ref{A_2.1.b}), $\phi_n$ must act as a nonzero translation on $\sigma_n$ for large enough $n$.
\end{proof}
\begin{cor}\label{A_D_c}
Rank one $\Gamma$-periodic vectors are dense in the set of rank one vectors.
\end{cor}
\subsection{The geometric construction.}
Our goal in this subsection is to prove the following:
\begin{thm}\label{A_3.1}
Let $M$ have rank one, and let $\Gamma$ be a discrete subgroup of isometries of $M$ such that $\Gamma$-recurrent vectors are dense in $SM$. Then $r(\Gamma) = 1$.
\end{thm}
Our method is simply to show that the Ballmann-Eberlein construction works equally well in the setting of no focal points. Thus we define
\[
B_1(\Gamma) = \set{\phi \in \Gamma : \phi \text{ translates a rank one geodesic }}.
\]
\begin{lem}\label{A_C}
$B_1(\Gamma) \subseteq A_1(\Gamma)$.
\end{lem}
\begin{proof}
For $\phi \in B_1(\Gamma)$ translating $\gamma$, the flat strip theorem guarantees that $\gamma$ is the unique rank one geodesic translated by $\phi$. Thus every element of $Z_{\Gamma}(\phi)$ leaves $\gamma$ invariant. Since $\Gamma$ is discrete, $Z_{\Gamma}(\phi)$ must therefore contain an infinite cyclic group of finite index.
\end{proof}
As in Ballmann-Eberlein a point $x \in M(\infty)$ is called \emph{hyperbolic} if for any $y \neq x$ in $M(\infty)$, there exists a rank one geodesic joining $y$ to $x$. By Theorem \ref{A_2.2}, any rank one axial geodesic has hyperbolic endpoints; thus Corollary \ref{A_D_c} implies that the set of hyperbolic points is dense in the open set of $M(\infty)$ consisting of endpoints of rank one vectors.
\begin{lem}\label{A_3.5}
Let $p \in M$, let $x \in M(\infty)$ be hyperbolic, and let $U^*$ be a neighborhood of $x$ in $\overline{M}$. Then there exists a neighborhood $U$ of $x$ in $\overline{M}$ and $R > 0$ such that if $\sigma$ is a geodesic with endpoints in $U$ and $\overline{M} - U^*$, then $d(p, \sigma) \leq R$.
\end{lem}
\begin{proof}
Repeat the argument of Ballmann-Eberlein, Lemma 3.5 \cite{BalEbe87}. (This proof references Lemma 3.4 of the same paper, which follows immediately from our Lemma \ref{A_2.1.b}.)
\end{proof}
\begin{lem}\label{A_3.6}
Let $x \in M(\infty)$ be hyperbolic, and $U^*$ a neighborhood of $x$ in $M(\infty)$. Then there exists a neighborhood $U \subseteq M(\infty)$ of $x$ such that for all $x^* \in U, y^* \in M(\infty) - U^*$, there exists a rank one geodesic between $x^*$ and $y^*$.
\end{lem}
\begin{proof}
Repeat the argument of Ballmann-Eberlein, Lemma 3.6 \cite{BalEbe87}.
\end{proof}
\begin{lem}\label{A_3.8}
Let $x, y$ be distinct points in $M(\infty)$ with $x$ hyperbolic, and suppose $U_x$ and $U_y$ are neighborhoods of $x$ and $y$, respectively. Then there exists an isometry $\phi \in \Gamma$ with
\begin{align*}
&\phi(\overline{M} - U_x) \subseteq U_y & &\phi^{-1}(\overline{M} - U_y) \subseteq U_x.
\end{align*}
\end{lem}
\begin{proof}
By Proposition \ref{A_D} there is a $\Gamma$-periodic geodesic with endpoints in $U_x$ and $U_y$; then apply Theorem \ref{A_2.2}.
\end{proof}
\begin{lem}\label{A_3.9}
Let $x \in M(\infty)$ be hyperbolic, $U^* \subseteq \overline{M}$ a neighborhood of $x$, and $p \in M$. Then there exists a neighborhood $U \subseteq \overline{M}$ of $x$ such that if $\phi_n$ is a sequence of isometries with $\phi_n(p) \to z \in M(\infty) - U^*$, then
\[
\sup_{u \in U} \angle_{\phi_n(p)}(p, u) \to 0 \text{ as } n \to \infty.
\]
\end{lem}
\begin{proof}
By Lemma \ref{A_3.5} there exists $R > 0$ and a neighborhood $U \subseteq \overline{M}$ of $x$ such that if $\sigma$ is a geodesic with endpoints in $U$ and $\overline{M} - U^*$ then $d(p, \sigma) \leq R$.
Let $x_n \in U$ be an arbitrary sequence, and for each $n$ let $\sigma_n$ be the geodesic through $x_n$ with $\sigma_n(0) = \phi_n(p)$. Denote by $b_n$ the point on $\sigma_n$ closest to $p$, and let $\gamma_n$ be the geodesic through $p$ with $\gamma_n(0) = \phi_n(p)$.
By construction $d(p, b_n) \leq R$, and so we also have $d(\phi_n^{-1}(p), \phi_n^{-1}(b_n)) \leq R$. It follows that any subsequential limit of $\phi_n^{-1} \sigma_n$ is asymptotic to any subsequential limit of $\phi_n^{-1} \gamma_n$. In particular
\[
\angle_{\phi_n(p)}(p, x_n) = \angle_p(\phi_n^{-1}(p), \phi_n^{-1}(x_n)) \to 0,
\]
from which the lemma follows.
\end{proof}
\begin{thm}
If $M$ is a rank one manifold without focal points and $\Gamma$ is a discrete subgroup of isometries of $M$ such that $\Gamma$-recurrent vectors are dense in $SM$, then $r(\Gamma) = 1$.
\end{thm}
\begin{proof}
This now follows, with at most trivial modifications, from the argument in the proof of Theorem 3.1 in Ballmann-Eberlein \cite{BalEbe87}. (This argument uses Lemma 3.10 of that paper; their proof of that lemma works in no focal points as well, when combined with the lemmas we have proven above.)
\end{proof}
\subsection{Completion of the Proof.}
We now write $M = E_r \times M_1$, where $E_r$ is a Euclidean space of dimension $r$ and $M_1$ is a manifold of no focal points with no flat factors. We fix a discrete, cocompact subgroup $\Gamma$ of isometries of $M$; by uniqueness of the de Rham decomposition, $\Gamma$ respects the factors of the decomposition $M = E_r \times M_1$. In light of this, we freely write elements of $\Gamma$ as $(\gamma_e, \gamma_1)$, where $\gamma_e$ is an isometry of $E_r$ and $\gamma_1$ an isometry of $M_1$.
In this section we work with Clifford transformations, which are isometries $\phi$ of $M$ such that $d(p, \phi(p))$ is constant for $p \in M$. For a group $\Gamma'$ of isometries of $M$, we denote by $C(\Gamma')$ the set of Clifford transformations in $\Gamma'$, and by $Z(\Gamma')$ the center of $\Gamma'$. Theorem 2.1 of Druetta \cite{Dru83} shows that a Clifford transformation of $M = E_r \times M_1$ has the form $(\phi, \text{id})$, where $\phi$ is a translation of $E_r$.
We begin by proving the following generalization (to no focal points) of a lemma in Eberlein \cite{Ebe82}:
\begin{lem}
$\Gamma$ admits a finite index subgroup $\Gamma_0$ such that for any finite index subgroup $\Gamma^*$ of $\Gamma_0$, we have $Z(\Gamma^*) = C(\Gamma^*)$.
\end{lem}
\begin{proof}
Our proof is essentially the same as Eberlein's. We let $\Gamma_0$ be the centralizer $Z_{\Gamma}(C(\Gamma))$; note that $\Gamma_0$ is just the subgroup of $(\gamma_e, \gamma_1) \in \Gamma$ such that $\gamma_e$ is a Euclidean translation. Since $C(\Gamma)$ is just the set of those elements of the form $(\gamma_e, \text{id})$ where $\gamma_e$ is a translation, $C(\Gamma) \subseteq \Gamma_0$, and we have
\[
C(\Gamma_0) = C(\Gamma) \subseteq Z(\Gamma_0).
\]
We claim $\Gamma_0$ is finite index in $\Gamma$. This follows from a trivial modification of the argument in Lemma 3 in Yau \cite{Yau71}, noting that $\Gamma_0$ is normal in $\Gamma$ and the projection of $\Gamma_0$ to its first factor is a lattice of translations of $E_r$.
Now let $\Gamma^*$ be a finite index subgroup of $\Gamma_0$. Theorem 3.2 of Druetta \cite{Dru83} gives $Z(\Gamma^*) \subseteq C(\Gamma^*)$. On the other hand,
\[
C(\Gamma^*) \subseteq C(\Gamma_0) \cap \Gamma^* \subseteq Z(\Gamma_0) \cap \Gamma^* \subseteq Z(\Gamma^*),
\]
so $C(\Gamma^*) = Z(\Gamma^*)$.
\end{proof}
We also need the following, which is Lemma A of Eberlein's paper \cite{Ebe83}:
\begin{lem}
Let $M = E_r \times M_1$ as above, and let $p_1: \Isom(M) \to \Isom(M_1)$ be projection onto the second factor. If $\Gamma$ is a discrete subgroup of isometries of $M$ such that $\Gamma$-recurrent vectors are dense in $SM$, then $p_1(\Gamma)$ is discrete in $\Isom(M_1)$.
\end{lem}
\begin{proof}
We sketch the argument of Eberlein, indicating the necessary changes for no focal points. For details see the proof of Lemma A in \cite{Ebe83}.
Let $A$ denote the subgroup of translations of $E_r$ and let $G$ be the closure in $\Isom(M)$ of $\Gamma A$. Then $A$ is a closed normal abelian subgroup of $G$, and the connected component $G_0$ of the identity of $G$ is solvable. Hence $p_1(G_0)$ is solvable; if it were nontrivial, the last nonidentity subgroup $A^*$ in its derived series would be a nontrivial abelian normal subgroup of $p_1(G_0)$, which is impossible by Theorem 3.3 of Druetta \cite{Dru83}. Hence $p_1(G_0) = \set{\text{id}}$.
It now follows as in \cite{Ebe83} that $p_1(\Gamma)$ is discrete.
\end{proof}
\begin{thm}
Let $M$ be a complete, simply connected Riemannian manifold with no focal points, and let $\Gamma$ be a discrete, cocompact subgroup of isometries of $M$ acting freely and properly. Then $\rank(\Gamma) = \rank(M)$.
\end{thm}
\begin{proof}
With the above results in hand, the rest of the argument is now identical to the proof of Theorem 3.11 in Ballmann-Eberlein. (We remark that this proof cites the main theorem of Eberlein's paper \cite{Ebe83}; this theorem follows from Druetta, Theorem 3.3 \cite{Dru83}.)
\end{proof}
\end{document} |
\begin{document}
\title{\textsc{Monotone 3-Sat-$(2,2)$}}
\begin{abstract}
We show that \textsc{Monotone 3-Sat} remains NP-complete if (i) each clause contains exactly three distinct variables, (ii) each clause is unique, i.e., there are no duplicates of the same clause, and (iii), amongst the clauses, each variable appears unnegated exactly twice and negated exactly twice. Darmann and Döcker~\cite{darmann19} recently showed that this variant of \textsc{Monotone 3-Sat} is either trivial or NP-complete. In the first part of the paper, we construct an unsatisfiable instance which answers one of their open questions (Challenge 1) and places the problem in the latter category.
Then, we adapt gadgets used in the construction to (1) sketch two reductions that establish NP-completeness in a more direct way, and (2), to show that $\forall\exists$ \textsc{3-SAT} remains $\Pi_2^P$-complete for quantified Boolean formulas with the following properties: (a) each clause is monotone (i.e., no clause contains an unnegated and a negated variable) and contains exactly three distinct variables, (b) each universal variable appears exactly once unnegated and exactly once negated, (c) each existential variable appears exactly twice unnegated and exactly twice negated, and (d) the number of universal and existential variables is equal. Furthermore, we show that the variant where (b) is replaced with (b') each universal variable appears exactly twice unnegated and exactly twice negated, and where (a), (c) and (d) are unchanged, is $\Pi_2^P$-complete as well. Thereby, we improve upon two recent results by Döcker et al.~\cite{doecker19} that establish $\Pi_2^P$-completeness of these variants in the non-monotone setting.
We also discuss a special case of \textsc{Monotone 3-Sat-$(2,2)$} that corresponds to a variant of \textsc{Not-All-Equal Sat}, and we show that all such instances are satisfiable.
\end{abstract}
\noindent{\bf Keywords:} Monotone 3-Sat, bounded variable appearances, balanced variable appearances, quantified satisfiability, polynomial hierarchy, computational complexity.
\section{Introduction}
The satisfiability problem for Boolean formulas is one of the go-to problems when choosing a base problem for polynomial reductions. Indeed, it was the first problem shown to be NP-complete~\cite{cook71}. The seminal book by Garey and Johnson~\cite{garey79} contains a large list of known NP-complete problems and an extensive introduction to the theoretical foundations of NP-completeness. A very popular variant of the satisfiability problem is \textsc{3-SAT}, where each clause contains exactly three variables. This problem remains NP-complete even if further restrictions are imposed (see Table~\ref{table:complexity_results_overview}). In this article, we consider variants of \textsc{3-SAT} where each clause contains exactly three \emph{distinct} variables. Hence, unless we explicitly say otherwise, the considered instances have this property (the same goes for references regarding \textsc{3-SAT} variants).
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Clauses} & \multicolumn{4}{|c|}{Variables} & Complexity\\
\cline{1-7}
unique & monotone & E4 & 3P1N, 1P3N & 3P1N & 2P2N & \\
\hline
\hline
\checkmark & & & & \checkmark & & NP-c~\cite[Cor.\,11]{darmann19} \\
\hline
\checkmark & & & & & \checkmark & NP-c~\cite[Thm.\,1]{berman03}\\
\hline
\checkmark & \checkmark & \checkmark & & & & NP-c~\cite[Cor.\,4]{darmann18} \\
\hline
\checkmark & \checkmark & & \checkmark & & & NP-c~\cite[Thm.\,9]{darmann19} \\
\hline
\checkmark & \checkmark & & & \checkmark & & ? \\
\hline
& \checkmark & & & & \checkmark & NP-c~\cite[Thm.\,5]{darmann19} \\
\hline
\checkmark & \checkmark & & & & \checkmark & NP-c (Thm.~\ref{thm:mon_3sat-(2,2)}) \\
\hline
\end{tabular}
\caption{Overview of complexity results for (monotone) \textsc{3-SAT}. A checkmark in the ``unique'' subcolumn means that each clause contains exactly three \emph{distinct} variables. The headings of the subcolumns in the ``Variables'' column denote the following properties: E4 := each variable appears exactly four times; 3P1N, 1P3N := each variable appears exactly four times and either exactly once unnegated or exactly once negated; 3P1N := each variable appears exactly three times unnegated and once negated; 2P2N := each variable appears exactly twice unnegated and exactly twice negated. In the last column we use the abbreviation NP-c for NP-complete. Note that we only ticked the strongest restrictions, e.g., a checkmark in the 3P1N subcolumn implies a checkmark in the two preceding subcolumns. Moreover, by symmetry we can omit the 1P3N case (identical to 3P1N).}
\label{table:complexity_results_overview}
\end{table}
Recently, Darmann and Döcker~\cite[Cor.\,2]{darmann19} showed that for each fixed $k \geq 3$ \textsc{Monotone 3-Sat} is NP-complete if each variable appears exactly $k$ times unnegated and exactly $k$ times negated. Further, they were able to prove that the case $k = 2$ is either trivial or NP-complete. In other words, finding a single unsatisfiable instance is enough to prove that the problem remains NP-complete for $k = 2$. Hence, by constructing an unsatisfiable instance for $k = 2$, we settle this case and thus one of their open problems (Challenge~1). As the problem is trivial for $k = 1$~\cite[p.\,32]{darmann19} by a result from Tovey~\cite[Thm.\,2.4]{tovey84}, our result closes the last remaining gap for this variant of \textsc{Monotone 3-Sat}.
The gadgets used in the construction of the unsatisfiable instance can also be used to obtain a more direct way of establishing NP-completeness for the case $k = 2$ (we describe two reductions in this article). Then, we use one of the new gadgets to show that two recent results from Döcker et al.~\cite[Thm.\,3.1 and Thm.\,3.2]{doecker19} hold even in the monotone setting. First, we show that $\forall\exists$ \textsc{3-SAT} remains $\Pi_2^P$-complete if (i) each clause is monotone (ii) each universal variable appears exactly once unnegated and exactly once negated, (iii) each existential variable appears exactly twice unnegated and exactly twice negated, and (iv) the number of universal and existential variables is equal. Second, we show that the variant where (ii) is replaced with (ii') each universal variable appears exactly twice unnegated and exactly twice negated, and where (i), (iii) and (iv) are unchanged, is $\Pi_2^P$-complete, too.
The article is structured as follows: In Section~\ref{sec:preliminaries}, we recall important definitions and concepts. Then, in Section~\ref{sec:construction_unsatisfiable_instance}, we construct an unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$}. Section~\ref{sec:more_ways} contains two reductions that can be used to obtain the main result in a more direct way and one of the involved gadgets is subsequently used in Section~\ref{sec:quantified_results} to show that a restricted variant of $\forall\exists$ \textsc{3-SAT} remains $\Pi_2^P$-complete. The appendix contains proofs of two Lemmas used in Section~\ref{sec:construction_unsatisfiable_instance}, and a representation of
\begin{itemize}
\item a gadget on which several of our results are based, and
\item the constructed unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$},
\end{itemize}
which can be used to verify our results with the help of a SAT Solver (e.g., using the PySAT Toolkit~\cite{ignatiev18}).
\section{Preliminaries}\label{sec:preliminaries}
Let $V=\{x_1,x_2,\ldots,x_n\}$ be a set of $n$ variables. We also write $X_1^i$ to denote the set $\{x_1,x_2,\ldots,x_i\}$ for $i \geq 1$ (and analogously $Y_1^i$ and $Z_1^i$ for sets of variables $y_1,\ldots,y_i$ and $z_1,\ldots,z_i$). A positive literal is an element of $\mathcal{L}_+ = V$, a negative literal is an element of $\mathcal{L}_- = \{\overline{x_i} \mid x_i \in V\}$, and the set of literals is denoted by $\mathcal{L} = \mathcal{L}_+ \cup \mathcal{L}_-$. A clause is a subset of $\mathcal{L}$. We say that a clause $C_j \subseteq \mathcal{L}$ is a $k$-clause if $|C_j| = k$ and $C_j$ is monotone if $C_j \subseteq \mathcal{L}_+$ or $C_j \subseteq \mathcal{L}_-$. A Boolean formula is a set of $m$ clauses
\[
\bigcup_{j=1}^m \{C_j\}.
\]
A Boolean formula is monotone if $C_j$ is monotone for each $j \in \{1, \ldots, m\}$. A truth assignment $\beta\colon V \rightarrow \{T, F\}$ maps each variable to the truth value $T$ (True) or $F$ (False). A formula is satisfied by a truth assignment $\beta\colon V \rightarrow \{T, F\}$ if $\beta$ sets at least one literal in each clause true (e.g., a negative literal evaluates to true if $\beta$ sets the corresponding variable false). If such a truth assignment exists, we say that the formula is satisfiable; otherwise the formula is unsatisfiable. Further, a formula is nae-satisfiable if and only if there exists a truth assignment $\beta$ that sets at least one literal in each clause true and at least one false. The main result concerns the following decision problem.
\begin{center}
\noindent\fbox{\parbox{.95\textwidth}{
\noindent \textsc{Monotone 3-Sat-$(2,2)$}\\
\noindent{\bf Input.} A Boolean formula
$$\bigcup_{j=1}^m \{C_j\}$$
over a set $V=\{x_1,x_2,\ldots,x_n\}$ of variables such that (i) each $C_j$ is a unique monotone 3-clause that contains exactly three \emph{distinct} variables, and (ii), amongst the clauses, each variable appears unnegated exactly twice and negated exactly twice.
\\
\noindent{\bf Question.} Does there exist a truth assignment for $V$ such that each clause of the formula is satisfied?
}}
\end{center}
\noindent {\bf Remark.} A monotone 3-clause always contains exactly three distinct variables.
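To make these notions concrete, the following few lines of Python (with clauses encoded as sets of literal strings and a leading minus denoting negation) evaluate satisfaction and nae-satisfaction of a clause set under a given truth assignment.
\begin{verbatim}
# Clauses are sets of literal strings, e.g. {"x1", "-x2", "x3"}; a leading "-" negates.
def literal_value(lit, beta):
    return not beta[lit[1:]] if lit.startswith("-") else beta[lit]

def satisfied(clauses, beta):
    # At least one true literal per clause.
    return all(any(literal_value(l, beta) for l in c) for c in clauses)

def nae_satisfied(clauses, beta):
    # At least one true and at least one false literal per clause.
    return all(
        any(literal_value(l, beta) for l in c)
        and not all(literal_value(l, beta) for l in c)
        for c in clauses
    )

beta = {"x1": True, "x2": False, "x3": True}
clauses = [{"x1", "x2", "x3"}, {"-x1", "-x2", "-x3"}]
print(satisfied(clauses, beta), nae_satisfied(clauses, beta))  # True True
\end{verbatim}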
In one of our reductions we start from \textsc{Monotone 3-Sat*-$(2,2)$}~\cite{darmann19}, which is the variant of \textsc{Monotone 3-Sat-$(2,2)$} where variables may appear more than once in a clause. Note that we can assume that each variable appears at most twice in a given clause, since each clause is monotone and there are only two unnegated and two negated appearances of any variable.
\noindent {\bf Enforcers.} In the construction of an unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$} and the reductions after that, we make use of gadgets that enforce truth assignments to have certain properties (gadgets also go by the name of \emph{enforcers}~\cite{berman03}). As an example, we consider an enforcer introduced by Berman et al.~\cite[p.\,3]{berman03}:
\begin{align*}
\mathcal{S}(\ell_1, \ell_2, \ell_3) = &(\ell_1 \vee \overline{a} \vee b) \wedge (\ell_2 \vee \overline{b} \vee c) \wedge (\ell_3 \vee a \vee \overline{c}) \wedge {} \\
&(a \vee b \vee c) \wedge (\overline{a} \vee \overline{b} \vee \overline{c}),
\end{align*}
where $a, b, c$ are new variables. The enforcer $\mathcal{S}(\ell_1, \ell_2, \ell_3)$ cannot be satisfied by a truth assignment $\beta$ that sets all literals in $\{\ell_1, \ell_2, \ell_3\}$ false. On the other hand, if at least one literal in $\{\ell_1, \ell_2, \ell_3\}$ evaluates to true, we can find truth values for the variables $a, b, c$ such that all clauses of the enforcer are satisfied. In other words, $\mathcal{S}(\ell_1, \ell_2, \ell_3)$ simulates a clause but has the advantage that we can allow duplicates since each literal in $\{\ell_1, \ell_2, \ell_3\}$ ends up in a different clause (cf.~\cite[p.\,3]{berman03}). Note that this enforcer is not monotone. In this article, we construct a monotone version with 99 new variables and 133 clauses (instead of 3 new variables and 5 clauses in the setting above).
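The defining property of this enforcer is easy to confirm mechanically; the following sketch, using the solver interface of the PySAT toolkit~\cite{ignatiev18} (with $\ell_1, \ell_2, \ell_3$ treated as additional variables that are fixed via solver assumptions), checks that $\mathcal{S}(\ell_1, \ell_2, \ell_3)$ is unsatisfiable precisely when all three of these literals are set false.
\begin{verbatim}
# Check the enforcer property of S(l1, l2, l3) with a PySAT solver.
# DIMACS encoding: l1=1, l2=2, l3=3, a=4, b=5, c=6.
from itertools import product
from pysat.solvers import Glucose3

ENFORCER = [
    [1, -4, 5], [2, -5, 6], [3, 4, -6],  # (l1 v -a v b), (l2 v -b v c), (l3 v a v -c)
    [4, 5, 6], [-4, -5, -6],             # (a v b v c), (-a v -b v -c)
]

for values in product([False, True], repeat=3):
    assumptions = [(i + 1) if v else -(i + 1) for i, v in enumerate(values)]
    with Glucose3(bootstrap_with=ENFORCER) as solver:
        sat = solver.solve(assumptions=assumptions)
    # Satisfiable if and only if at least one of l1, l2, l3 is set true.
    assert sat == any(values)
\end{verbatim}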
\section{Construction of an unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$}}\label{sec:construction_unsatisfiable_instance}
In this section, we construct an unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$}. First, we construct an enforcer~$\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3})$ that, intuitively, consists of three smaller gadgets. The first gadget is only satisfiable by truth assignments for the corresponding variables that can be placed in one of two categories. Depending on the category of the truth assignment (and the restrictions imposed by them), it is not possible to find a truth assignment for the variables contained in the second or the third gadget such that all clauses are satisfied. The second and the third gadget (see Lemmas~\ref{lem:second_gadget} and~\ref{lem:third_gadget}) have been found via computer search. The basic idea of the implemented Python code is the following: start with a collection of random candidates and try to improve them by swapping literals of different clauses, where this operation preserves the properties of an instance of \textsc{Monotone 3-Sat-$(2,2)$} (a reduction in the number of satisfying truth assignments is considered an improvement here). We used the PySAT Toolkit~\cite{ignatiev18} to (1) obtain a list of all satisfying truth assignments for a given collection of clauses, and (2), to verify some of our constructions (see appendix). Finally, we combine several instances of the enforcer~$\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3})$ to obtain an unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$}.
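As an illustration of the search primitive just described (a simplified sketch rather than the actual implementation), the following Python fragment counts the satisfying assignments of a candidate with PySAT and swaps a literal between two clauses of the same polarity; a full implementation would additionally ensure that all clauses remain distinct and contain three distinct variables.
\begin{verbatim}
# Simplified local-search primitive: count models with PySAT and swap literals
# between two monotone clauses of the same polarity (preserves the (2,2) counts).
import random
from pysat.solvers import Minisat22

def count_models(clauses):
    """Number of satisfying assignments of a clause list in DIMACS encoding."""
    with Minisat22(bootstrap_with=clauses) as solver:
        return sum(1 for _ in solver.enum_models())

def swap_literals(clauses):
    """Return a copy in which one literal of two random same-polarity clauses is swapped."""
    new = [list(c) for c in clauses]
    i, j = random.sample(range(len(new)), 2)
    if (new[i][0] > 0) == (new[j][0] > 0):      # both positive or both negative
        a, b = random.randrange(len(new[i])), random.randrange(len(new[j]))
        new[i][a], new[j][b] = new[j][b], new[i][a]
    return new

def improve(clauses, steps=1000):
    """Hill climbing: fewer satisfying assignments counts as an improvement."""
    best, best_count = clauses, count_models(clauses)
    for _ in range(steps):
        candidate = swap_literals(best)
        candidate_count = count_models(candidate)
        if candidate_count < best_count:
            best, best_count = candidate, candidate_count
    return best, best_count
\end{verbatim}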
We start with the construction of the first gadget. Let $\mathcal{F}_2$ denote the set consisting of the following 2-clauses:
\begin{multicols}{4}
\begin{enumerate}
\item $\{x_1, x_{2}\}$
\item $\{\overline{x_2}, \overline{x_3}\}$
\item $\{\overline{x_2}, \overline{x_4}\}$
\end{enumerate}
\end{multicols}
Further let $\mathcal{F}_3$ denote the set consisting of the following 3-clauses:
\begin{multicols}{4}
\begin{enumerate}
\setcounter{enumi}{3}
\item $\{\overline{x_{3}}, \overline{x_{5}}, \overline{x_{6}}\}$
\item $\{\overline{x_{4}}, \overline{x_{5}}, \overline{x_{6}}\}$
\item $\{x_5, x_7, x_8\}$
\item $\{x_6, x_7, x_8\}$
\item $\{\overline{x_{7}}, \overline{z_1}, \overline{z_2}\}$
\item $\{\overline{x_{7}}, \overline{z_3}, \overline{z_4}\}$
\item $\{\overline{x_{8}}, \overline{z_1}, \overline{z_2}\}$
\item $\{\overline{x_{8}}, \overline{z_3}, \overline{z_4}\}$
\end{enumerate}
\end{multicols}
First, the 2-clauses in $\mathcal{F}_2$ are equivalent to the implications
\[
\overline{x_1} \Rightarrow x_2, \quad x_2 \Rightarrow \overline{x_3}, \quad x_2 \Rightarrow \overline{x_4}.
\]
Hence, if $\beta(x_1) = F$ then $\beta(x_2) = T$ and consequently $\beta(x_3) = \beta(x_4) = F$. Next, we introduce a set of clauses for which no satisfying truth assignment exists that sets $\beta(x_3) = F$ and $\beta(x_4) = F$. To this end, let $\mathcal{G}$ be the set consisting of the following 3-clauses:
\begin{multicols}{4}
\begin{enumerate}
\setcounter{enumi}{11}
\item $\{x_3, y_1, y_2\}$
\item $\{x_3, y_3, y_4\}$
\item $\{x_4, y_5, y_6\}$
\item $\{x_4, y_7, y_8 \}$
\item $\{y_1, y_4, y_7\}$
\item $\{y_2, y_5, y_9\}$
\item $\{y_3, y_8, y_9\}$
\item $\{\overline{y_1}, \overline{y_5}, \overline{y_8}\}$
\item $\{\overline{y_1}, \overline{y_6}, \overline{y_9}\}$
\item $\{\overline{y_2}, \overline{y_3}, \overline{y_6}\}$
\item $\{\overline{y_2}, \overline{y_4}, \overline{y_8}\}$
\item $\{\overline{y_3}, \overline{y_5}, \overline{y_7}\}$
\item $\{\overline{y_4}, \overline{y_7}, \overline{y_9}\}$
\end{enumerate}
\end{multicols}
Note that for $\beta(x_3) = \beta(x_4) = F$, omitting the appearances of $x_3$ and $x_4$ in $\mathcal{G}$ has no effect on the satisfiability. We defer the proof that the resulting instance is unsatisfiable to the appendix (see Lemma~\ref{lem:second_gadget}). Hence, any satisfying truth assignment sets $\beta(x_i) = T$ for at least one $x_i \in \{x_3, x_4\}$; by clause~2 or clause~3 this forces $\beta(x_2) = F$, and then clause~1 forces $\beta(x_1) = T$. Next, by clauses~4 and~5 we have $\beta(x_j) = F$ for at least one $x_j \in \{x_5, x_6\}$. Then, clauses~6 and~7 imply $\beta(x_k) = T$ for at least one $x_k \in \{x_7, x_8\}$. Hence, by clauses 8, 9, 10 and 11 we get two clauses~$\{F, \overline{z_1}, \overline{z_2}\}$ and~$\{F, \overline{z_3}, \overline{z_4}\}$, which are equivalent to $\{\overline{z_1}, \overline{z_2}\}$ and~$\{\overline{z_3}, \overline{z_4}\}$. Recalling that $\beta(x_1) = T$ and $\beta(x_2) = F$, the first three clauses in the following set~$\mathcal{H}$ of 3-clauses evaluate to $\{F, \overline{z_5}, \overline{z_6}\}$, $\{F, \overline{z_7}, \overline{z_8}\}$ and $\{F, z_7, z_{15}\}$, respectively.
\begin{multicols}{3}
\begin{enumerate}
\setcounter{enumi}{24}
\item $\{\overline{x_1}, \overline{z_5}, \overline{z_6}\}$
\item $\{\overline{x_1}, \overline{z_7}, \overline{z_8}\}$
\item $\{x_2, z_7, z_{15}\}$
\item $\{z_1, z_6, z_8\}$
\item $\{z_1, z_{11}, z_{12}\}$
\item $\{z_2, z_{6}, z_{8}\}$
\item $\{z_2, z_{11}, z_{12}\}$
\item $\{z_3, z_{5}, z_{9}\}$
\item $\{z_3, z_{13}, z_{14}\}$
\item $\{z_4, z_{5}, z_{14}\}$
\item $\{z_4, z_{9}, z_{10}\}$
\item $\{z_7, z_{10}, z_{13}\}$
\item $\{\overline{z_{5}}, \overline{z_{8}}, \overline{z_{15}}\}$
\item $\{\overline{z_6}, \overline{z_7}, \overline{z_9}\}$
\item $\{\overline{z_9}, \overline{z_{11}}, \overline{z_{13}}\}$
\item $\{\overline{z_{10}}, \overline{z_{11}}, \overline{z_{14}}\}$
\item $\{\overline{z_{10}}, \overline{z_{12}}, \overline{z_{14}}\}$
\item $\{\overline{z_{12}}, \overline{z_{13}}, \overline{z_{15}}\}$
\end{enumerate}
\end{multicols}
Now, the inferred 2-clauses
\[
\{\overline{z_1}, \overline{z_2}\},\, \{\overline{z_3}, \overline{z_4}\},\, \{\overline{z_5}, \overline{z_6}\},\, \{\overline{z_7}, \overline{z_8}\} \text{ and } \{z_7, z_{15}\}
\]
in conjunction with the clauses $\mathcal{H} \setminus \{\{\overline{x_1}, \overline{z_5}, \overline{z_6}\}, \{\overline{x_1}, \overline{z_7}, \overline{z_8}\}, \{x_2, z_7, z_{15}\}\}$ are unsatisfiable (again, the proof is deferred to the appendix; see Lemma~\ref{lem:third_gadget}).
Hence, the constructed set of 42 clauses
\[
\mathcal{M} := \{\{x_1, x_{2}\}, \{\overline{x_2}, \overline{x_3}\}, \{\overline{x_2}, \overline{x_4}\}\} \cup \mathcal{F}_3 \cup \mathcal{G} \cup \mathcal{H}
\]
over the set of variables $V := X_1^8 \cup Y_1^9 \cup Z_1^{15}$ is unsatisfiable. We note that each literal appears at most twice in $\mathcal{M}$. The only variables that appear fewer than four times are $x_1, x_5, x_6, y_6$ and $z_{15}$, each of which appears once unnegated and twice negated. Consider the following enforcer
\[
\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3}) := \{\{x_1^i, x_{2}^i, u_1^i\}, \{\overline{x_2^i}, \overline{x_3^i}, \overline{u_2^i}\}, \{\overline{x_2^i}, \overline{x_4^i}, \overline{u_3^i}\}\} \cup \mathcal{F}_3^i \cup \mathcal{G}^i \cup \mathcal{H}^i,
\]
where $\mathcal{F}_3^i, \mathcal{G}^i, \mathcal{H}^i$ are obtained from $\mathcal{F}_3, \mathcal{G}, \mathcal{H}$ by replacing each variable, say $v$, with $v^i$ (e.g. $z_1$ is replaced with $z_1^i$). The enforcer $\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3})$ has two properties that we use to construct an unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$}. First, as alluded to in Section~\ref{sec:preliminaries}, we can deal with duplicates in a clause $\{u_1, \overline{u_2}, \overline{u_3}\}$, i.e., with the case $\overline{u_2} = \overline{u_3}$; second, we can transform a mixed clause into a monotone clause. Further, we obtain a second enforcer $\overline{\mathcal{M}}^{(i)}(\overline{u_1}, u_2, u_3)$ by negating every literal in $\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3})$.
It is easy to verify that the following collection of clauses is unsatisfiable (we do not use set notation here since the clauses contain duplicates):
\begin{align*}
(\overline{a} \vee \overline{d} \vee \overline{f}) &\wedge (b \vee d \vee e) \wedge (e \vee \overline{b} \vee \overline{b}) \wedge (d \vee \overline{f} \vee \overline{c}) \\
&\wedge (a \vee \overline{c} \vee \overline{e}) \wedge (\overline{e} \vee c \vee c) \wedge (\overline{d} \vee a \vee b) \wedge (\overline{a} \vee f \vee f).
\end{align*}
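For instance, the following short PySAT check (with the encoding $a=1, b=2, \ldots, f=6$) confirms that no satisfying assignment exists.
\begin{verbatim}
# Unsatisfiability check for the eight clauses above; encoding a=1, b=2, ..., f=6.
from pysat.solvers import Glucose3

CLAUSES = [
    [-1, -4, -6], [2, 4, 5], [5, -2, -2], [4, -6, -3],
    [1, -3, -5], [-5, 3, 3], [-4, 1, 2], [-1, 6, 6],
]

with Glucose3(bootstrap_with=CLAUSES) as solver:
    print(solver.solve())  # False
\end{verbatim}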
Now, we are in a position to construct an unsatisfiable instance~$\mathcal{U}$ of \textsc{Monotone 3-Sat-$(2,2)$}:
\begin{align*}
\mathcal{U} := \{\{\overline{a}, \overline{d}, \overline{f}\}, \{b, d, e\}\} &\cup \mathcal{M}^{(1)}(e, \overline{b}, \overline{b}) \cup \mathcal{M}^{(2)}(d, \overline{f}, \overline{c}) \cup \mathcal{M}^{(3)}(a, \overline{c}, \overline{e}) \\
&\cup \overline{\mathcal{M}}^{(4)}(\overline{e}, c, c) \cup \overline{\mathcal{M}}^{(5)}(\overline{d}, a, b) \cup \overline{\mathcal{M}}^{(6)}(\overline{a}, f, f) \\
&\cup \bigcup_{i \in \{1,5,6\}} \{\{x_i^1, x_i^2, x_i^3\}, \{\overline{x_i^4}, \overline{x_i^5}, \overline{x_i^6}\}\}\\
&\cup \{\{y_6^1, y_6^2, y_6^3\}, \{\overline{y_6^4}, \overline{y_6^5}, \overline{y_6^6}\}, \{z_{15}^1, z_{15}^2, z_{15}^3\}, \{\overline{z_{15}^4}, \overline{z_{15}^5}, \overline{z_{15}^6}\}\}
\end{align*}
\begin{prop}
There is an unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$} with 198 variables and 264 clauses.
\end{prop}
Now, with the result from Darmann and Döcker~\cite[Thm.\,4]{darmann19} we get the following theorem as a consequence of the existence of an unsatisfiable instance of \textsc{Monotone 3-Sat-$(2,2)$}.
\begin{thm}\label{thm:mon_3sat-(2,2)}
\textsc{Monotone 3-Sat-$(2,2)$} is NP-complete.
\end{thm}
Since \textsc{Monotone 3-Sat-$(k,k)$} is known to be NP-complete for each fixed $k \geq 3$~\cite[Cor.\,2]{darmann19}, we get the following corollary.
\begin{cor}
\textsc{Monotone 3-Sat-$(k,k)$} is NP-complete for each fixed $k \geq 2$.
\end{cor}
\subsection*{A special case that is always satisfiable}
We briefly consider instances of \textsc{Monotone 3-Sat-$(k,k)$} with the property that for each clause $C = \{x, y, z\}$ the instance also contains $\overline{C} = \{\overline{x}, \overline{y}, \overline{z}\}$. Noting that this is \textsc{Monotone NAE 3-SAT} with exactly $k$ appearances of each variable, it follows that this problem is hard for $k = 4$ (see \cite[Cor.\,1]{darmann19}).
\noindent {\bf Remark.} In the context of \textsc{NAE SAT} monotone means that negations are completely absent. This is no restriction since the two clauses $\{x, y, z\}$ and $\{\overline{x}, \overline{y}, \overline{z}\}$ impose exactly the same restrictions in this setting.
Porschen et al.~\cite[Thm.\,4]{porschen04} show that for $k = 3$ the corresponding \textsc{Monotone NAE 3-SAT} problem can be solved in linear time. In particular, they show that such an instance is nae-satisfiable if and only if the \emph{variable graph} has no component isomorphic to the complete graph $K_7$ on 7 vertices~\cite[Cor.\,4]{porschen04}. The variable graph (cf., e.g.,~\cite[p.\,2]{jain10} and~\cite[p.\,175]{porschen04}) of an instance of \textsc{NAE 3-SAT} (resp. \textsc{3-SAT}) contains a vertex for each variable and an edge between two vertices if the corresponding variables appear together in some clause of the instance. For example, the variable graph of the following instance is isomorphic to $K_7$, and the instance is thus not nae-satisfiable:
\begin{align*}
\mathcal{U}_\text{NAE} = \{&\{x_1, x_2, x_7\}, \{x_1, x_3, x_6\}, \{x_1, x_4, x_5\}, \\
&\{x_2, x_3, x_4\}, \{x_2, x_5, x_6\}, \{x_3, x_5, x_7\}, \{x_4, x_6, x_7\}\}.
\end{align*}
Let us now consider $k = 2$. We show that the property mentioned above leads to a trivial instance of \textsc{Monotone NAE 3-SAT} with exactly two appearances of each variable and, hence, \textsc{Monotone 3-Sat-$(2,2)$} is always satisfiable if clauses always appear in pairs $\{C, \overline{C}\}$. Jain~\cite[p.\,2]{jain10} observed that instances of \textsc{Monotone NAE 3-SAT} are in P if the variable graph is 4-colorable. Indeed, such instances are trivial since we can associate each truth value with exactly two colors such that a 4-coloring corresponds to a truth assignment that sets at least one variable of each clause false and at least one true (since each clause contains exactly three distinct variables, all clauses are satisfied). Pilz~\cite[Thm.\,12]{pilz19} used an approach based on this idea to show that every instance of \textsc{Planar SAT} in which each clause contains at least three negated or at least three unnegated appearances of distinct variables is satisfiable. He transformed the incidence graph of the formula into a certain subgraph of the variable graph, showed that this transformation preserves planarity, and then applied the Four Color Theorem~\cite{appel89} to obtain a 4-coloring.
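To make this step concrete, the following small sketch turns a given proper 4-coloring of the variable graph into a truth assignment in the way just described and checks the nae-condition for negation-free clauses.
\begin{verbatim}
# From a proper 4-coloring of the variable graph to an nae-satisfying assignment.
def assignment_from_coloring(coloring):
    """coloring: dict variable -> color in {0, 1, 2, 3}, assumed proper."""
    return {var: color in (0, 1) for var, color in coloring.items()}

def nae_satisfied_monotone(clauses, beta):
    """Each (negation-free) clause must contain a true and a false variable."""
    return all(len({beta[v] for v in clause}) == 2 for clause in clauses)

coloring = {"x1": 0, "x2": 1, "x3": 2}        # a triangle in the variable graph
beta = assignment_from_coloring(coloring)
print(nae_satisfied_monotone([{"x1", "x2", "x3"}], beta))  # True
\end{verbatim}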
Hence, all we need to show is that the variable graph of an instance of \textsc{Monotone NAE 3-SAT} where each variable appears exactly twice is always 4-colorable. First, observe that a vertex corresponding to a variable $x$ in the variable graph of such an instance has degree 2 if and only if $x$ is contained in two clauses
\[
\{x, y, z\}, \{x, y, z\}
\]
for some variables $y, z$ such that $x, y, z$ are pairwise distinct (otherwise $x$ has at least three neighbours). Such clauses can simply be removed as it is trivial to nae-satisfy them. Hence, we can assume that every vertex of the variable graph has degree at least 3, so that, in particular, no connected component is a cycle (of odd length or otherwise). Furthermore, each connected component of the variable graph contains a number of vertices that is a multiple of 3: each clause has its three pairwise adjacent variables in a single component, so if a component has $v$ vertices and meets $c$ clauses, counting variable appearances gives $2v = 3c$; hence $c$ is even and $v = 3c/2$ is a multiple of 3. Now, there is no component with 3 vertices since we already removed the clauses that would result in such a subgraph (the $K_3$ is a cycle of odd length). Noting that the degree of each vertex is bounded by 4, we conclude that no component with 6 or more vertices is a complete graph. Consequently, we can assume that the variable graph of an instance of \textsc{Monotone NAE 3-SAT} does not contain a component that is a complete graph or a cycle of odd length. Hence, the variable graph is 4-colorable by Brooks' Theorem~\cite{brooks41} and we get the following theorem.
\begin{thm}
All instances of \textsc{Monotone NAE 3-SAT}, where each variable appears exactly twice, are nae-satisfiable.
\end{thm}
\begin{cor}
Let $\mathcal{I} = \bigcup_{j=1}^m \{C_j\}$ be an instance of \textsc{Monotone 3-Sat-$(2,2)$}. If the instance $\mathcal{I}$ has the property
\[
C_j \in \mathcal{I} \Rightarrow \overline{C_j} \in \mathcal{I},
\]
where $\overline{C_j}$ is obtained from $C_j$ by negating each literal, then $\mathcal{I}$ is satisfiable.
\end{cor}
\section{More ways to obtain the main result}\label{sec:more_ways}
It is also possible to show NP-hardness of \textsc{Monotone 3-Sat-$(2,2)$} by reduction from \textsc{Monotone 3-Sat*-$(2,2)$}, for which NP-hardness was established by Darmann and Döcker~\cite[Thm.\,5]{darmann19}. To this end, let
\[
\mathcal{N}^{(i)}(\overline{u_i}, \overline{u_i}) := \{\{x_1^i, x_{2}^i\}, \{\overline{x_2^i}, \overline{x_3^i}, \overline{u_i}\}, \{\overline{x_2^i}, \overline{x_4^i}, \overline{u_i}\}\} \cup \mathcal{F}_3^i \cup \mathcal{G}^i \cup \mathcal{H}^i.
\]
By construction, this set of clauses is not satisfied by any truth assignment $\beta$ that sets $\beta(u_i) = T$. Now, we can combine three such sets of clauses into another enforcer in which the three positive 2-clauses $\{x_1^i, x_{2}^i\}$ are extended to monotone 3-clauses by the variables $v_1, v_2, v_3$:
\begin{align*}
\mathcal{S}(v_1, v_2, v_3) = &\{\{x_1^1, x_{2}^1, v_1\}, \{x_1^2, x_{2}^2, v_2\}, \{x_1^3, x_{2}^3, v_3\}\}\\
&\cup \mathcal{N}^{(1)}(\overline{u_1}, \overline{u_1})\setminus\{\{x_1^1, x_{2}^1\}\} \cup \mathcal{N}^{(2)}(\overline{u_2}, \overline{u_2})\setminus\{\{x_1^2, x_{2}^2\}\}\\
&\cup \mathcal{N}^{(3)}(\overline{u_3}, \overline{u_3})\setminus\{\{x_1^3, x_{2}^3\}\} \cup \{\{u_1, u_2, u_3\}\}\\
&\cup \bigcup_{i \in \{1,5,6\}} \{\{x_i^1, x_i^2, x_i^3\}\}\\
&\cup \{\{y_6^1, z_{15}^1, u_1\}, \{y_6^2, z_{15}^2, u_2\}, \{y_6^3, z_{15}^3, u_3\}\}
\end{align*}
Let $V_\mathcal{S}$ denote the set of variables that appear in $\mathcal{S}(v_1, v_2, v_3)$. Each variable $v \in V_\mathcal{S}\setminus\{v_1, v_2, v_3\}$ appears exactly twice unnegated and twice negated. For each instance of $\mathcal{S}(v_1, v_2, v_3)$, the variables in $V_\mathcal{S}\setminus\{v_1, v_2, v_3\}$ are new (we omit additional indices to improve readability). By negating each literal (over a fresh copy of the variables in $V_\mathcal{S}\setminus\{v_1, v_2, v_3\}$) we obtain a second enforcer $\overline{\mathcal{S}}(\overline{v_1}, \overline{v_2}, \overline{v_3})$. By construction, the enforcer~$\mathcal{S}(v_1, v_2, v_3)$ has no satisfying truth assignment $\beta$ with $\beta(v_1) = \beta(v_2) = \beta(v_3) = F$. On the other hand, if $\beta(v_i) = T$ for at least one $v_i \in \{v_1, v_2, v_3\}$, we can assign truth values to the remaining variables of $\mathcal{S}(v_1, v_2, v_3)$ such that all clauses of the enforcer are satisfied (this is straightforward to verify with a SAT solver).
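Checks of this kind can be automated with the assumption interface of PySAT. The helper below is only a sketch: the clause list of $\mathcal{S}(v_1, v_2, v_3)$ in DIMACS-style integer encoding is not reproduced here and has to be supplied, together with the integers chosen for $v_1, v_2, v_3$.
\begin{lstlisting}[frame=single]
from pysat.solvers import Glucose3

def check_enforcer(clauses_S, v1, v2, v3):
    # clauses_S: integer encoding of S(v1, v2, v3); v1, v2, v3 are the
    # integers assigned to the three interface variables.
    solver = Glucose3(bootstrap_with=clauses_S)
    # No model may set v1 = v2 = v3 = F ...
    no_all_false = not solver.solve(assumptions=[-v1, -v2, -v3])
    # ... while forcing any single v_i to T keeps the enforcer satisfiable.
    single_true_ok = all(solver.solve(assumptions=[v])
                         for v in (v1, v2, v3))
    solver.delete()
    return no_all_false and single_true_ok
\end{lstlisting}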
Given an instance $\mathcal{I}$ of \textsc{Monotone 3-Sat*-$(2,2)$}, we replace each positive (resp. negative) clause containing a duplicated literal, say $(p \vee p \vee q)$ (resp. $(\overline{p} \vee \overline{p} \vee \overline{q})$), by an enforcer $\mathcal{S}(p, p, q)$ (resp. $\overline{\mathcal{S}}(\overline{p}, \overline{p}, \overline{q})$). The result is an instance of \textsc{Monotone 3-Sat-$(2,2)$} that is satisfiable if and only if $\mathcal{I}$ is satisfiable.
Yet another approach is the following. We can also reduce from \textsc{3-Sat-$(2,2)$}, for which NP-hardness was established by Berman et al.~\cite[Thm.\,1]{berman03}, and use an extended version of the enforcers $\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3})$ and $\overline{\mathcal{M}}^{(i)}(\overline{u_1}, u_2, u_3)$ to transform mixed clauses that may be present in a given instance into monotone clauses. To this end, consider
\begin{align*}
\mathfrak{M}_{j} := &\mathcal{M}^{(3j)}(u_1, \overline{u_2}, \overline{u_3}) \cup \mathcal{M}^{(3j+1)}(u_4, \overline{u_5}, \overline{u_6}) \cup \mathcal{M}^{(3j+2)}(u_7, \overline{u_8}, \overline{u_9})\\
&\cup \{\{x_1^{3j}, x_5^{3j}, x_6^{3j}\},\{y_6^{3j}, z_{15}^{3j}, x_1^{3j+1}\}, \{x_5^{3j+1}, x_6^{3j+1}, y_6^{3j+1}\}\}\\
&\cup \{\{z_{15}^{3j+1}, x_1^{3j+2}, x_5^{3j+2}\}, \{x_6^{3j+2}, y_6^{3j+2}, z_{15}^{3j+2}\}\}
\end{align*}
Combining three instances of the enforcer $\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3})$ in this way has the advantage that each instance of $\mathfrak{M}_{j}$ introduces only variables that appear exactly twice unnegated and twice negated. A second enforcer $\overline{\mathfrak{M}_{j}}$ is again obtained by negating all literals. In order to be able to use these enforcers to replace all mixed clauses in a given instance of \textsc{3-Sat-$(2,2)$}, we need the number of mixed clauses with two negative (resp. two positive) literals to be divisible by 3. This can be achieved by simply taking three copies of the original instance on pairwise disjoint sets of variables. With the help of a SAT solver it is easy to verify that $\mathfrak{M}_j$ has only satisfying truth assignments that set at least one literal in each of $\{u_1, \overline{u_2}, \overline{u_3}\}$, $\{u_4, \overline{u_5}, \overline{u_6}\}$ and $\{u_7, \overline{u_8}, \overline{u_9}\}$ true.
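This verification can be scripted in the same way as before; in the sketch below, \texttt{clauses} is assumed to be an integer encoding of $\mathfrak{M}_j$ and \texttt{triples} lists the three literal triples in that encoding.
\begin{lstlisting}[frame=single]
from pysat.solvers import Glucose3

def forces_one_true_per_triple(clauses, triples):
    # triples: e.g. [[1, -2, -3], [4, -5, -6], [7, -8, -9]] for the
    # triples {u_1, ~u_2, ~u_3}, {u_4, ~u_5, ~u_6}, {u_7, ~u_8, ~u_9}.
    solver = Glucose3(bootstrap_with=clauses)
    # Assuming all literals of one triple to be false must be infeasible.
    result = all(not solver.solve(assumptions=[-l for l in t])
                 for t in triples)
    solver.delete()
    return result
\end{lstlisting}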
\section{On a restricted variant of $\forall\exists$ \textsc{3-SAT}}\label{sec:quantified_results}
In this section, we consider the monotone variant of the following problem and show that it remains $\Pi_2^P$-complete in restricted settings. We assume the reader is familiar with basic concepts regarding the polynomial hierarchy and, in particular, with the complexity class $\Pi_2^P$. For an in-depth introduction to this theory, we refer to Stockmeyer~\cite{stockmeyer76} (see \cite{schaefer02} for a list containing many problems that are known to be $\Pi_2^P$-complete). We use the notation defined in~\cite{doecker19}; e.g., for $i \leq i'$, let
\[
X_i^{i'} := \{x_i, x_{i+1}, \ldots, x_{i'}\},
\]
and
\[
Q X_i^{i'} := Qx_i Qx_{i+1} \cdots Qx_{i'}, \quad Q \in \{\forall, \exists\}.
\]
Let $s_1, s_2, t_1,t_2$ be four non-negative integers.
\begin{center}
\noindent\fbox{\parbox{.95\textwidth}{
\noindent {\sc Balanced $\forall \exists$ 3-SAT-$(s_1,s_2, t_1, t_2)$}~\cite[p.\,6f]{doecker19}\\
\noindent{\bf Input.} A quantified Boolean formula
$$\forall X_1^p \exists X_{p+1}^n \bigcup_{j=1}^m \{C_j\}$$
over a set $V=\{x_1,x_2,\ldots,x_n\}$ of variables such that (i) $n = 2p$, (ii) each $C_j$ is a 3-clause that contains three \emph{distinct} variables, and (iii), amongst the clauses, each universal variable appears unnegated exactly $s_1$ times and negated exactly $s_2$ times, and each existential variable appears unnegated exactly $t_1$ times and negated exactly $t_2$ times.
\\
\noindent{\bf Question.} For every truth assignment for $\{x_1, x_2, \ldots, x_p\}$, does there exist a truth assignment for $\{x_{p+1}, x_{p+2}, \ldots, x_n\}$ such that each clause of the formula is satisfied?
}}
\end{center}
Recently, Döcker et al.~\cite[Thm.\,3.1 and Thm.\,3.2]{doecker19} showed that {\sc Balanced $\forall \exists$ 3-SAT-$(2, 2, 2, 2)$} and {\sc Balanced $\forall \exists$ 3-SAT-$(1, 1, 2, 2)$} are both $\Pi_2^P$-complete. We use the gadgets $\mathfrak{M}_j$ and $\overline{\mathfrak{M}_j}$ to show that these results also hold for instances where each clause is monotone (i.e., each clause consists of exactly three unnegated variables or of exactly three negated variables). Since the transformation is virtually identical for both cases, we focus on the second result and mention the necessary adaptation to obtain the first result. Consider an instance of {\sc Balanced $\forall \exists$ 3-SAT-$(1, 1, 2, 2)$}, i.e., a quantified Boolean formula
\[
\Phi = \forall X_1^p \exists X_{p+1}^n \varphi,
\]
with $\varphi = \bigcup_{j=1}^m \{C_j\}$. Let $\varphi'$ and $\varphi''$ be the sets of clauses obtained from $\varphi$ by replacing $x_i$ with $y_i$ and $z_i$, respectively ($y_i$ and $z_i$ are distinct new variables). It is easy to see that the following quantified Boolean formula is a yes-instance if and only if $\Phi$ is a yes-instance.
\[
\Phi' = \forall (X_1^p \cup Y_1^p \cup Z_1^p) \exists (X_{p+1}^n \cup Y_{p+1}^n \cup Z_{p+1}^n) (\varphi \cup \varphi' \cup \varphi'').
\]
Now, the number of mixed clauses with two negative (resp. positive) literals is divisible by 3. Hence, we can replace such clauses in triples using $\mathfrak{M}_j$ and $\overline{\mathfrak{M}_j}$, respectively. For example, the first triple of mixed clauses, say
\[
\{x_i, \overline{x_j}, \overline{x_k}\}, \{y_i, \overline{y_j}, \overline{y_k}\}, \{z_i, \overline{z_j}, \overline{z_k}\},
\]
is replaced with the following collection of monotone clauses
\begin{align*}
\mathfrak{M}_{0} := &\mathcal{M}^{(0)}(x_i, \overline{x_j}, \overline{x_k}) \cup \mathcal{M}^{(1)}(y_i, \overline{y_j}, \overline{y_k}) \cup \mathcal{M}^{(2)}(z_i, \overline{z_j}, \overline{z_k})\\
&\cup \{\{x_1^{0}, x_5^{0}, x_6^{0}\},\{y_6^{0}, z_{15}^{0}, x_1^{1}\}, \{x_5^{1}, x_6^{1}, y_6^{1}\}\}\\
&\cup \{\{z_{15}^{1}, x_1^{2}, x_5^{2}\}, \{x_6^{2}, y_6^{2}, z_{15}^{2}\}\}.
\end{align*}
Note that we introduce $3 \cdot 32 = 96$ new existential variables with each instance of $\mathfrak{M}_j$ or $\overline{\mathfrak{M}_j}$. By construction, the resulting quantified Boolean formula $\Phi''$ is a yes-instance if and only if $\Phi'$ is a yes-instance. Since we introduced a number of existential variables that is divisible by 3, we can use multiple instances (each with new variables) of the following quantified enforcer introduced by Döcker et al.~\cite[p.\,9]{doecker19}
\begin{align*}
Q^3 = &\{u, r, a\}, \{\overline{u}, \overline{b}, \overline{a}\}, \{v, q, b\}, \{\overline{v}, \overline{r}, \overline{a}\}, \{w, a, b\}, \{\overline{w}, \overline{q}, \overline{b}\},
\end{align*}
where $u, v, w, q, r$ are universal variables and $a, b$ are existential variables, to obtain a quantified Boolean formula with the same number of existential and universal variables. Since $Q^3$ is a yes-instance~\cite[Lem.\,3.2]{doecker19}, the resulting quantified Boolean formula is a yes-instance if and only if $\Phi''$ is a yes-instance. Noting that the transformation is polynomial, we get the following theorem.
\begin{thm}
{\sc Balanced Monotone $\forall \exists$ 3-SAT-$(1, 1, 2, 2)$} is $\Pi_2^P$-complete.
\end{thm}
The only difference in the reduction from {\sc Balanced $\forall \exists$ 3-SAT-$(2, 2, 2, 2)$} to obtain the first result is the last step. Here, we are not able to use the existing quantified enforcer $Q^1$ given in~\cite[p.\,9]{doecker19}, since it introduces mixed clauses. For this reason, we adapt the quantified enforcer $Q^3$ as follows
\begin{align*}
Q^1_\text{mon} = &\{u, r, a\}, \{\overline{u}, \overline{b}, \overline{a}\}, \{v, q, b\}, \{\overline{v}, \overline{r}, \overline{a}\}, \{w, a, b\}, \{\overline{w}, \overline{q}, \overline{b}\}, \\
&\{u, r, c\}, \{\overline{u}, \overline{d}, \overline{c}\}, \{v, q, d\}, \{\overline{v}, \overline{r}, \overline{c}\}, \{w, c, d\}, \{\overline{w}, \overline{q}, \overline{d}\},
\end{align*}
where $u, v, w, q, r$ are universal variables and $a, b, c, d$ are existential variables. Intuitively, we use two instances of $Q^3$ on the same universal variables but with different existential variables. Consider an arbitrary truth assignment $\beta$ for the universal variables. Since $Q^3$ is a yes-instance we can find truth values $\beta(a)$ and $\beta(b)$ such that the six clauses of $Q^1_\text{mon}$ that contain $a$ or $b$ (i.e., the first copy of $Q^3$) are satisfied. Hence, for $\beta(c) = \beta(a)$ and $\beta(d) = \beta(b)$ we can satisfy all clauses in $Q^1_\text{mon}$. In other words, $Q^1_\text{mon}$ is a yes-instance of $\forall \exists$ \textsc{3-SAT} that introduces 5 universal variables but only 4 existential variables (each of which appears exactly twice unnegated and exactly twice negated). Now, we can use multiple instances (each with new variables) of $Q^1_\text{mon}$ to obtain a formula with the same number of existential and universal variables. Thus, we get the following theorem.
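Since $Q^1_\text{mon}$ involves only five universal and four existential variables, this claim can also be double-checked by exhaustive enumeration. The sketch below encodes the twelve clauses exactly as displayed above and reports whether, for every assignment of $u, v, w, q, r$, suitable values of $a, b, c, d$ exist.
\begin{lstlisting}[frame=single]
from itertools import product

# Clauses as (variable, sign) pairs; sign True means unnegated.
def cl(*lits):
    return [(x.strip('~'), not x.startswith('~')) for x in lits]

clauses = [cl('u', 'r', 'a'), cl('~u', '~b', '~a'), cl('v', 'q', 'b'),
           cl('~v', '~r', '~a'), cl('w', 'a', 'b'), cl('~w', '~q', '~b'),
           cl('u', 'r', 'c'), cl('~u', '~d', '~c'), cl('v', 'q', 'd'),
           cl('~v', '~r', '~c'), cl('w', 'c', 'd'), cl('~w', '~q', '~d')]

def satisfied(beta):
    return all(any(beta[x] == s for x, s in c) for c in clauses)

# For every universal assignment there must be an existential completion.
print(all(any(satisfied({**dict(zip('uvwqr', univ)),
                         **dict(zip('abcd', exis))})
              for exis in product([False, True], repeat=4))
          for univ in product([False, True], repeat=5)))  # expected: True
\end{lstlisting}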
\begin{thm}
{\sc Balanced Monotone $\forall \exists$ 3-SAT-$(2, 2, 2, 2)$} is $\Pi_2^P$-complete.
\end{thm}
\appendix
\section{Proofs}
We used the PySAT Toolkit~\cite{ignatiev18} in the proofs of Lemmas~\ref{lem:second_gadget} and~\ref{lem:third_gadget} to obtain DRUP proofs~\cite{heule13}, which are certificates of unsatisfiability. Here, we use the solver Lingeling~\cite{biere16} included in the PySAT Toolkit since it is one of the solvers that provide the option to return such a certificate.
\begin{lem}\label{lem:second_gadget} The following set of clauses over variables $Y_1^9$ is unsatisfiable.
\begin{multicols}{4}
\begin{enumerate}
\item $\{y_1, y_2\}$
\item $\{y_3, y_4\}$
\item $\{y_5, y_6\}$
\item $\{y_7, y_8\}$
\item $\{y_1, y_4, y_7\}$
\item $\{y_2, y_5, y_9\}$
\item $\{y_3, y_8, y_9\}$
\item $\{\overline{y_1}, \overline{y_5}, \overline{y_8}\}$
\item $\{\overline{y_1}, \overline{y_6}, \overline{y_9}\}$
\item $\{\overline{y_2}, \overline{y_3}, \overline{y_6}\}$
\item $\{\overline{y_2}, \overline{y_4}, \overline{y_8}\}$
\item $\{\overline{y_3}, \overline{y_5}, \overline{y_7}\}$
\item $\{\overline{y_4}, \overline{y_7}, \overline{y_9}\}$
\end{enumerate}
\end{multicols}
\end{lem}
\begin{proof}
We can use the following Python code to obtain a DRUP proof.
\begin{lstlisting}[frame=single]
from pysat.solvers import Lingeling
cnf = [[1, 2], [3, 4], [5, 6], [7, 8], [1, 4, 7], [2, 5, 9],
[3, 8, 9], [-1, -5, -8], [-1, -6, -9], [-2, -3, -6],
[-2, -4, -8], [-3, -5, -7], [-4, -7, -9]]
solver = Lingeling(bootstrap_with=cnf, with_proof=True)
print(solver.solve())      # prints False: the clause set is unsatisfiable
print(solver.get_proof())  # prints the DRUP proof logged by the solver
solver.delete()
\end{lstlisting}
Output of the program:
\begin{lstlisting}[language=bash]
False
['-8 -7 -5 0', '-8 9 5 0', '-5 -8 0', 'd -1 -5 -8 0', '-8 9 0', 'd 5 -8 9 0', '9 0', '-4 -2 0', 'd -8 -4 -2 0', '-5 -3 0', 'd -7 -5 -3 0', '-3 -2 0', 'd -6 -3 -2 0', '-2 0', '1 0', '-6 0', '5 0', '-8 0', '-3 0', '7 0', '4 0', '0']
\end{lstlisting}
\end{proof}
\begin{lem}\label{lem:third_gadget} The following set of clauses over variables $Z_1^{15}$ is unsatisfiable.
\begin{multicols}{3}
\begin{enumerate}
\item $\{\overline{z_1}, \overline{z_2}\}$
\item $\{\overline{z_3}, \overline{z_4}\}$
\item $\{\overline{z_5}, \overline{z_6}\}$
\item $\{\overline{z_7}, \overline{z_8}\}$
\item $\{z_7, z_{15}\}$
\item $\{z_1, z_6, z_8\}$
\item $\{z_1, z_{11}, z_{12}\}$
\item $\{z_2, z_{6}, z_{8}\}$
\item $\{z_2, z_{11}, z_{12}\}$
\item $\{z_3, z_{5}, z_{9}\}$
\item $\{z_3, z_{13}, z_{14}\}$
\item $\{z_4, z_{5}, z_{14}\}$
\item $\{z_4, z_{9}, z_{10}\}$
\item $\{z_7, z_{10}, z_{13}\}$
\item $\{\overline{z_{5}}, \overline{z_{8}}, \overline{z_{15}}\}$
\item $\{\overline{z_6}, \overline{z_7}, \overline{z_9}\}$
\item $\{\overline{z_9}, \overline{z_{11}}, \overline{z_{13}}\}$
\item $\{\overline{z_{10}}, \overline{z_{11}}, \overline{z_{14}}\}$
\item $\{\overline{z_{10}}, \overline{z_{12}}, \overline{z_{14}}\}$
\item $\{\overline{z_{12}}, \overline{z_{13}}, \overline{z_{15}}\}$
\end{enumerate}
\end{multicols}
\end{lem}
\begin{proof}
We can use the following Python code to obtain a DRUP proof.
\begin{lstlisting}[frame=single]
from pysat.solvers import Lingeling
cnf = [[-1, -2], [-3, -4], [-5, -6], [-7, -8], [7, 15],
[1, 6, 8], [1, 11, 12], [2, 6, 8], [2, 11, 12],
[3, 5, 9], [3, 13, 14], [4, 5, 14], [4, 9, 10],
[7, 10, 13], [-5, -8, -15], [-6, -7, -9],
[-9, -11, -13], [-10, -11, -14], [-10, -12, -14],
[-12, -13, -15]]
solver = Lingeling(bootstrap_with=cnf, with_proof=True)
print(solver.solve())      # prints False: the clause set is unsatisfiable
print(solver.get_proof())  # prints the DRUP proof logged by the solver
solver.delete()
\end{lstlisting}
Output of the program:
\begin{lstlisting}[language=bash]
False
['6 8 0', 'd 1 6 8 0', '14 10 13 0', '11 12 0', 'd 1 11 12 0', '-8 14 -13 0', '14 -13 0', 'd -8 14 -13 0', '14 10 0', 'd 13 14 10 0', '-9 10 7 0', '-9 10 0', 'd 7 -9 10 0', '-5 0', '10 9 0', 'd 4 10 9 0', '-13 -15 0', 'd -12 -13 -15 0', '-14 -10 0', 'd -11 -14 -10 0', '14 13 0', 'd 3 14 13 0', '13 7 0', 'd 10 13 7 0', '7 0', '-8 0', '6 0', '-9 0', '3 0', '10 0', '-4 0', '-14 0', '0']
\end{lstlisting}
\end{proof}
\section{Enforcer $\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3})$}
The set of clauses $\mathcal{M}$ constructed in Section~\ref{sec:construction_unsatisfiable_instance} is the basis for the enforcer $\mathcal{M}^{(i)}(u_1, \overline{u_2}, \overline{u_3})$ and thus, for several results presented in this article. To facilitate verification of our results, we provide the set $\mathcal{M}$ as a Python list:\\
\begin{scriptsize}
[[1, 2], [-2, -3], [-2, -4], [-3, -5, -6], [-4, -5, -6], [5, 7, 8], [6, 7, 8], [-7, -18, -19], [-7, -20, -21], [-8, -18, -19], [-8, -20, -21], [3, 9, 10], [3, 11, 12], [4, 13, 14], [4, 15, 16], [9, 12, 15], [10, 13, 17], [11, 16, 17], [-9, -13, -16], [-9, -14, -17], [-10, -11, -14], [-10, -12, -16], [-11, -13, -15], [-12, -15, -17], [2, 24, 32], [18, 23, 25], [18, 28, 29], [19, 23, 25], [19, 28, 29], [20, 22, 26], [20, 30, 31], [21, 22, 31], [21, 26, 27], [24, 27, 30], [-1, -22, -23], [-1, -24, -25], [-22, -25, -32], [-23, -24, -26], [-26, -28, -30], [-27, -28, -31], [-27, -29, -31], [-29, -30, -32]]
\end{scriptsize}
\section{Unsatisfiable instance of Monotone 3-Sat-$(2,2)$}
The unsatisfiable instance constructed in Section~\ref{sec:construction_unsatisfiable_instance} as a Python list:\\
\begin{scriptsize}
[[-193, -196, -198], [194, 196, 197], [1, 2, 197], [-2, -3, -194], [-2, -4, -194], [-3, -5, -6], [-4, -5, -6], [5, 7, 8], [6, 7, 8], [-7, -18, -19], [-7, -20, -21], [-8, -18, -19], [-8, -20, -21], [3, 9, 10], [3, 11, 12], [4, 13, 14], [4, 15, 16], [9, 12, 15], [10, 13, 17], [11, 16, 17], [-9, -13, -16], [-9, -14, -17], [-10, -11, -14], [-10, -12, -16], [-11, -13, -15], [-12, -15, -17], [2, 24, 32], [18, 23, 25], [18, 28, 29], [19, 23, 25], [19, 28, 29], [20, 22, 26], [20, 30, 31], [21, 22, 31], [21, 26, 27], [24, 27, 30], [-1, -22, -23], [-1, -24, -25], [-22, -25, -32], [-23, -24, -26], [-26, -28, -30], [-27, -28, -31], [-27, -29, -31], [-29, -30, -32], [33, 34, 196], [-34, -35, -195], [-34, -36, -198], [-35, -37, -38], [-36, -37, -38], [37, 39, 40], [38, 39, 40], [-39, -50, -51], [-39, -52, -53], [-40, -50, -51], [-40, -52, -53], [35, 41, 42], [35, 43, 44], [36, 45, 46], [36, 47, 48], [41, 44, 47], [42, 45, 49], [43, 48, 49], [-41, -45, -48], [-41, -46, -49], [-42, -43, -46], [-42, -44, -48], [-43, -45, -47], [-44, -47, -49], [34, 56, 64], [50, 55, 57], [50, 60, 61], [51, 55, 57], [51, 60, 61], [52, 54, 58], [52, 62, 63], [53, 54, 63], [53, 58, 59], [56, 59, 62], [-33, -54, -55], [-33, -56, -57], [-54, -57, -64], [-55, -56, -58], [-58, -60, -62], [-59, -60, -63], [-59, -61, -63], [-61, -62, -64], [65, 66, 193], [-66, -67, -195], [-66, -68, -197], [-67, -69, -70], [-68, -69, -70], [69, 71, 72], [70, 71, 72], [-71, -82, -83], [-71, -84, -85], [-72, -82, -83], [-72, -84, -85], [67, 73, 74], [67, 75, 76], [68, 77, 78], [68, 79, 80], [73, 76, 79], [74, 77, 81], [75, 80, 81], [-73, -77, -80], [-73, -78, -81], [-74, -75, -78], [-74, -76, -80], [-75, -77, -79], [-76, -79, -81], [66, 88, 96], [82, 87, 89], [82, 92, 93], [83, 87, 89], [83, 92, 93], [84, 86, 90], [84, 94, 95], [85, 86, 95], [85, 90, 91], [88, 91, 94], [-65, -86, -87], [-65, -88, -89], [-86, -89, -96], [-87, -88, -90], [-90, -92, -94], [-91, -92, -95], [-91, -93, -95], [-93, -94, -96], [-97, -98, -197], [98, 99, 195], [98, 100, 195], [99, 101, 102], [100, 101, 102], [-101, -103, -104], [-102, -103, -104], [103, 114, 115], [103, 116, 117], [104, 114, 115], [104, 116, 117], [-99, -105, -106], [-99, -107, -108], [-100, -109, -110], [-100, -111, -112], [-105, -108, -111], [-106, -109, -113], [-107, -112, -113], [105, 109, 112], [105, 110, 113], [106, 107, 110], [106, 108, 112], [107, 109, 111], [108, 111, 113], [-98, -120, -128], [-114, -119, -121], [-114, -124, -125], [-115, -119, -121], [-115, -124, -125], [-116, -118, -122], [-116, -126, -127], [-117, -118, -127], [-117, -122, -123], [-120, -123, -126], [97, 118, 119], [97, 120, 121], [118, 121, 128], [119, 120, 122], [122, 124, 126], [123, 124, 127], [123, 125, 127], [125, 126, 128], [-129, -130, -196], [130, 131, 193], [130, 132, 194], [131, 133, 134], [132, 133, 134], [-133, -135, -136], [-134, -135, -136], [135, 146, 147], [135, 148, 149], [136, 146, 147], [136, 148, 149], [-131, -137, -138], [-131, -139, -140], [-132, -141, -142], [-132, -143, -144], [-137, -140, -143], [-138, -141, -145], [-139, -144, -145], [137, 141, 144], [137, 142, 145], [138, 139, 142], [138, 140, 144], [139, 141, 143], [140, 143, 145], [-130, -152, -160], [-146, -151, -153], [-146, -156, -157], [-147, -151, -153], [-147, -156, -157], [-148, -150, -154], [-148, -158, -159], [-149, -150, -159], [-149, -154, -155], [-152, -155, -158], [129, 150, 151], [129, 152, 153], [150, 153, 160], [151, 152, 154], [154, 156, 158], [155, 156, 159], [155, 157, 159], [157, 158, 160], [-161, -162, -193], [162, 163, 198], 
[162, 164, 198], [163, 165, 166], [164, 165, 166], [-165, -167, -168], [-166, -167, -168], [167, 178, 179], [167, 180, 181], [168, 178, 179], [168, 180, 181], [-163, -169, -170], [-163, -171, -172], [-164, -173, -174], [-164, -175, -176], [-169, -172, -175], [-170, -173, -177], [-171, -176, -177], [169, 173, 176], [169, 174, 177], [170, 171, 174], [170, 172, 176], [171, 173, 175], [172, 175, 177], [-162, -184, -192], [-178, -183, -185], [-178, -188, -189], [-179, -183, -185], [-179, -188, -189], [-180, -182, -186], [-180, -190, -191], [-181, -182, -191], [-181, -186, -187], [-184, -187, -190], [161, 182, 183], [161, 184, 185], [182, 185, 192], [183, 184, 186], [186, 188, 190], [187, 188, 191], [187, 189, 191], [189, 190, 192], [1, 33, 65], [-97, -129, -161], [5, 37, 69], [-101, -133, -165], [6, 38, 70], [-102, -134, -166], [14, 46, 78], [-110, -142, -174], [32, 64, 96], [-128, -160, -192]]
\end{scriptsize}
\end{document} |
\begin{document}
\author{Jean-Pierre Labesse}
\thanks{This work was partly begun during the stay of the second author in the fall term 2017
at the School of Mathematics, Institute for Advanced Study, Princeton, and then pursued at the
Max-Planck-Institute for Mathematics, Bonn; he gratefully acknowledges the funding at the IAS,
provided by the Charles Simonyi Endowment, as well as the support at the MPIM.}
\address{Institut de Math\'ematique de Marseille, UMR 7373,
Aix-Marseille Universit\'e,
France}
\email{[email protected]}
\author{Joachim Schwermer}
\address{Faculty of Mathematics, University Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria resp. Max-Planck-Institute
for Mathematics,
Vivatsgasse 7, D-53111 Bonn, Germany.}
\email{[email protected]}
\date{}
\maketitle
\section{Introduction}
\subsection{Main theorem}
Let $F$ be a global field (of arbitrary characteristic) and denote by ${\mathbb A_F}$ its ring of adèles.
Let $G$ and $H$ be two connected reductive groups defined over $F$
endowed with an $F$-morphism
$f: H\to G$ such that the induced morphism $H_{der}\to G_{der}$ on the derived groups is a central isogeny.
We study how automorphic representations behave, under the morphism induced between groups
of adélic points, via restriction and induction.
We also discuss similar statements for representations of groups over local fields.
Consider the restriction to $f(H({\mathbb A_F}))$
of a cuspidal representation $\pi$ of $G({\mathbb A_F})$; it splits into a possibly infinite sum of irreducible
representations of $H({\mathbb A_F})$, some of which may not be automorphic. Conversely, given
a cuspidal representation $\sigma$ of $H({\mathbb A_F})$, it is not always possible to find it in the
restriction of some cuspidal representation $\pi$ of $G({\mathbb A_F})$.
Our main result is:
\begin{theorem} \label{goal}
Given any irreducible cuspidal representation $\pi$ of $G({\mathbb A_F})$
its restriction to $f(H({\mathbb A_F}))$ contains a cuspidal representation $\sigma$ of $H({\mathbb A_F})$.
Conversely, assuming moreover that $f$ is an injection,
any irreducible cuspidal representation $\sigma$ of $H({\mathbb A_F})$
appears in the restriction of some cuspidal representation $\pi$ of $G({\mathbb A_F})$.
\end{theorem}
Experts in the theory of automorphic forms have expected such a natural result, already known in some cases.
In fact, this is in agreement with Langlands' functoriality conjectures, which relate
local or automorphic representations of a group $G$
to elements in the first cohomology set $H^1(W_F,\checkG)$ of Weil groups with values
in the dual group $\checkG$, and, in particular, with lifting results established in \cite{Lab}
for the map $H^1(\check f):H^1(W_F,\checkG)\to H^1(W_F,\checkH)$
when $f$ is injective.
Local results, quite elementary when the characteristic of the field is zero and
already known in general to some extent (see for example \cite{T}), are given here
for the sake of completeness. In the global case, we do not know of any published reference
except when $H = SL(n)$, $G = GL(n)$.
For this pair of groups, Theorem \ref{goal} (or rather its reformulation as in \ref{main})
is claimed in \cite[Sect. 3]{LS} if $F$ is a number field.
Unfortunately, as was pointed out by Laurent Clozel, the proof given in \cite{LS},
which generalizes the argument given in \cite{LL} for $n=2$,
does not apply for arbitrary $n$
since we implicitly assumed the validity of the local-global principle for $n$-th powers in $F$,
which may fail\footnote{
The Grunwald-Wang Theorem \cite[Chap. X, Thm. 1]{AT}
computes the obstruction group to this local-global principle and, in fact, this group can be
non-trivial. This only happens in very special cases: in particular $8\vert n$
is among the necessary conditions.}.
The argument is corrected here.
\subsection{Organization of the paper}
In Section \ref{cliff}, generalizing Clifford's theory for finite groups,
we consider a pair
$B \subset A$ of locally compact groups, where $B$ is a closed invariant subgroup of $A$
such that $B\backslash A$ is abelian and compact and we
analyze, under a suitable finiteness condition, the interplay via restriction and induction between irreducible unitary
representations $\pi$ of $A$ and $\sigma$ of $B$.
Results of Section \ref{cliff} are used in Section \ref{loc} to investigate
extension, induction and restriction of irreducible unitary representations between pairs of groups of points
over local fields of arbitrary characteristic for pairs of connected reductive groups $G$ and $H$
as above. The local results are summarized in \ref{local} and \ref{bija}.
Similar questions are studied in Section
\ref{cusp} for cuspidal automorphic representations of groups of points
over adèles of global fields of arbitrary characteristic
and our main results are Theorems \ref{cuspa} and \ref{cuspb}. They imply in particular
the above Theorem \ref{goal}.
We rely on structural results taken care of in Section \ref{setting} or,
by a very different approach, in the appendix provided by B. Lemaire.
We conclude the paper with a new multiplicity formula.
\section{Variation on a theme by Clifford}\label{cliff}
In this section we establish a variant of Clifford's theory for finite groups \cite{CR} (already used implicitly
in \cite{LL} or explicitly in \cite{He} but in a slightly different context).
This is elementary but,
since we do not know of any reference valid in our setting, it is given in some detail for the convenience of the reader.
In particular, we prove\footnote{
The reader may wonder why we give a proof of such a result since
there are many references for instances where Frobenius reciprocity is known to hold. In fact,
for admissible representations of reductive groups over non archimedean local fields
Frobenius reciprocity is well known and could be used in certain sections below where we deal with this specific case.
Nevertheless more general groups and different kinds of representations will occur
and we did not find any reference for the form
we need: for example Moore's result (Section 4 of \cite{MCC}), which is the closest to our needs we could find,
applies only to finite dimensional representations; similarly
Mackey's quite general theorems (e.g.\! Theorem 5.1 of \cite{Mc}) do not seem to be of any help
since the representations we are dealing with
may show up with measure zero in the spectral decomposition of the right regular representations
for the groups we study.}
a form of Frobenius reciprocity in \ref{frobenius}.
\subsection{Notation}
It is understood that unitary representations are strongly continuous and characters
(i.e.\! one dimensional representations) are unitary.
By abuse of notation we shall often denote by the same symbol a quotient and a set of representatives for its elements.
Let $A$ be a locally compact group and $B$ a closed subgroup such that
$B\backslash A$ has an $A$-invariant measure.
Let $\pi$ be an irreducible unitary representation of $A$ and
$\sigma$ an irreducible unitary representation of $B$.
Given $g\in A$ we denote by $\sigma^g$ the representation of $B$
defined by
$$\sigma^g(x)=\sigma(gxg^{-1})\,\,.$$
Let $\rho$ be the induced representation
$$\rho=\mathrm{Ind}_B^A\sigma\,\,. $$
We denote by $<v, w>_\sigma$ the scalar product of two vectors $v$ and $w$
in the space $V_\sigma$ of the representation $\sigma$.
Recall that $V_\rho$, the space of $\rho$, is the set of classes of measurable functions
from $A$ to $V_\sigma$ (up to equality almost everywhere),
such that $f(hg)=\sigma(h)f(g)$ for all $h\in B$ and $g\in A$, and that are square integrable on $B\backslash A$.
\subsection{A first finiteness assumption}
Assume $B\backslash A$ is of finite volume.
\begin{lemma}\label{frob}
There is an injective map
$$\mathrm{Hom}_B(\pi\vert_B,\sigma)\to\mathrm{Hom}_A(\pi,\mathrm{Ind}_B^A\sigma)\,\,.$$
If $\pi\vert_B$ and $\pi'\vert_B$ have a common constituent $\sigma$ then
$\pi$ and $\pi'$ both occur in $\rho=\mathrm{Ind}_B^A\sigma\,\,. $
\end{lemma}
\begin{proof}
Consider an element $\Psi\in \mathrm{Hom}_B(\pi\vert_B,\sigma)$ and $w\in V_\pi$. The function
$$\varphi_w:g\mapsto\Psi(\pi(g)w)\qquad\qquad\hbox{for}\quad g\in A$$ defines a vector in $V_\rho$. In fact
this is a continuous function which satisfies the required functional equation and
whose square norm $$g\mapsto||\varphi_w(g)||^2:=<\varphi_w(g),\varphi_w(g)>_\sigma\le
||\Psi||^2<w,w>_\pi
$$
is bounded and hence integrable since $B\backslash A$ is of finite volume. The map
$$\Phi:w\mapsto \varphi_w$$ defines an element in $\mathrm{Hom}_A(\pi,\rho)$.
The assignment $\Psi\mapsto\Phi$ is obviously injective.
The second assertion follows immediately.
\end{proof}
Assume from now on that $A$ is unimodular and
$B$ is an invariant closed subgroup. The quotient group $C=B\backslash A$ is also assumed to be abelian and compact,
and it is endowed with the normalized Haar measure, i.e.\! such that $\mathrm{vol}(C)=1$.
Let $X$ be the discrete group of characters of $C$.
We observe that if $\pi$ occurs in $\rho =\mathrm{Ind}_B^A\sigma\,\,$ then, given $\chi\in X$,
the representation
$\pi\otimes\chi$ also occurs in $\rho$ with the same multiplicity.
\begin{proposition}\label{cliffb}
Let $\pi$ and $\pi'$ be two irreducible unitary representations of $A$ whose restrictions
to $B$ have a constituent $\sigma$ in common. Then there exists a character $\chi\in X$ such that
$$\pi'\simeq\pi\otimes\chi\,\,.$$
The representation $\rho$ is a Hilbert direct sum of representations of the form $\pi\otimes\chi$.
\end{proposition}
\begin{proof}
Since both the restrictions of
$\pi$ and $\pi'$ to $B$ have $\sigma$ as a constituent in common, Lemma~\ref{frob} shows
they both occur in $\rho$. Let us denote by $V_\pi$ the space of the representation $\pi$.
Let $\Psi$ be a non-trivial intertwining operator in $$\mathrm{Hom}_B(\pi\vert_B,\sigma)$$
and consider for $w\in V_\pi$ the function
$$\varphi_w:g\mapsto\Psi(\pi(g)w)\,\,.$$
The closed subspace generated by the functions $\varphi_w\chi$, where
$w$ varies in $V_\pi$ and $\chi$ varies in $X$,
is the space of a subrepresentation $\rho'$ of $\rho$, generated by a set of subrepresentations isomorphic to
$\pi\otimes\chi$.
Let $f$ be a function from $A$ to $V_\sigma$
that belongs to the orthogonal complement $\rho''$ of $\rho'$.
We have to show that $f=0$. Let us denote by
$<\varphi,f>_\rho$ the scalar product of two functions $\varphi$ and $f$ in the space of $\rho$.
By hypothesis
$$<\varphi_w{\chi},f>_\rho=\int_{C} {\chi(g)}<\varphi_w(g),f(g)>_\sigma\,d{\dot g}=0$$
for all $w\in V_\pi$ and all $\chi\in X$.
This implies that $ <\varphi_w(g),f(g)>_\sigma=0$ for almost all $\dot g\in C$ and all $w \in V_\pi$.
Now $w\mapsto\varphi_w(g)$ is an intertwining operator $\Psi^g$ between $\pi\vert_B$ and $\sigma^g$,
a representation of $B$ in $V_\sigma$
which is irreducible; the image of $\Psi^g$ equals $V_\sigma$ and necessarily $f(g)=0$ for almost all $g$.
\end{proof}
\subsection{A second finiteness assumption}
We denote by $A(\sigma)$ the subgroup of $A$
(containing $B$) of $g\in A$ such that $\sigma^g\simeq\sigma$ and by
$X(\pi)$ the subgroup of $\chi\in X$ such that $\pi\otimes\chi\simeq\pi$.
We shall now moreover assume that $X(\pi)$ is finite.
\begin{proposition}\label{cliffa}
Let $\pi$ be an irreducible unitary representation of $A$ such that $X(\pi)$ is finite.
Its restriction $\pi\vert_{ B}$ is a finite direct sum of
irreducible unitary representations of $B$. Let $\sigma$ be an irreducible constituent of $\pi\vert_{ B}$.
The vector space $$V= \mathrm{Hom}_B(\pi\vert_B,\sigma)$$
is of finite dimension, say $m$.
All other constituents
are conjugates under $A$ of $\sigma$ and
$$\pi\vert_{ B}\simeq \bigoplus _{\dot g\in A/A(\sigma)} V\otimes\sigma^g$$
where $B$ acts trivially on $V$.
The algebra $\mathcal I(\pi)$, of intertwining operators for $\pi$ restricted to $B$, has a basis indexed
by $X(\pi)$ and
$$\mathrm{dim}(\mathcal I(\pi))=\mathrm{card}(X(\pi))=m^2\times\mathrm{card}(A/A(\sigma)) \,\,.$$
\end{proposition}
\begin{proof} For $\chi\in X(\pi)$
choose a non-trivial intertwining operator $U_\chi$ between $\pi$ and $\pi\otimes\chi$.
According to Schur's lemma, the operator $U_\chi$ is well defined up to a scalar.
Consider $I\in\mathcal I(\pi)$ and $\chi\in X$, then the operator
$$I_\chi=\int_{\dot g\in C}\overline{\chi(g)}.\pi(g)^{-1} I\pi(g)\,\,d{\dot g}$$
is a scalar multiple of $U_\chi$ for $\chi\in X(\pi)$ and is zero if $\chi\notin X(\pi)$.
Fourier inversion shows that
$$I=\sum_{\chi\in X(\pi)} I_\chi=\sum_{\chi\in X(\pi)} c_\chi U_\chi$$ with $c_\chi\in\CM$.
This implies $\mathrm{dim}(\mathcal I(\pi))=\mathrm{card}(X(\pi))$.
By assumption $\sigma$ is an irreducible constituent of $\pi\vert_{ B}$.
The closed subspace generated by the isotypic components of
$\sigma$ and its $A$-conjugates is an $A$-invariant subspace
of $V_\pi$, equal to $V_\pi$ since $\pi$ is irreducible. Hence $\pi\vert_{ B}$
is isomorphic to a finite sum of irreducible representations of $B$ that are $A$-conjugates
of $\sigma$.
\end{proof}
The group $X(\pi)$ is of course finite when $C$ is finite but there are many other instances of it, in particular
when dealing with admissible representations (see \ref{fini} and \ref{finiadele} below).
One should note that the algebra $\mathcal I(\pi)$ may not be isomorphic to the group algebra $\CM[X(\pi)]$.
This is the case when $m\ge2$.
An example occurs in the study of inner forms of $SL(2)$ (cf. \cite{LL}) where one may have
$A(\sigma)=A$ while $X(\pi)$ is an abelian group
of order 4, while $m=2$ and $\mathcal I(\pi)=M(2,\CM)$, the algebra of $2\times2$ complex matrices.
Further examples are given in \cite{HS}.
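As a toy illustration of Propositions \ref{cliffb} and \ref{cliffa} (elementary, and independent of the examples quoted above), one may take for $A$ the symmetric group $S_3$ and for $B$ its alternating subgroup, so that $C=B\backslash A$ has order 2 and $X=\{1,\mathrm{sgn}\}$. For the two dimensional irreducible representation $\pi$ of $S_3$ one has $\pi\otimes\mathrm{sgn}\simeq\pi$, hence $X(\pi)=X$, and
$$\pi\vert_B\simeq\sigma\oplus\sigma^g\qquad\hbox{while}\qquad\mathrm{Ind}_B^A\sigma\simeq\pi\,\,,$$
where $\sigma$ is a non-trivial character of $B$ and $g$ is any transposition; here $m=1$, $A(\sigma)=B$ and indeed $\mathrm{card}(X(\pi))=m^2\times\mathrm{card}(A/A(\sigma))=2$. The two characters of $A$ restrict to the trivial character of $B$ and differ by the twist by $\mathrm{sgn}$, as predicted by Proposition \ref{cliffb}.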
Consider the subgroup $B(\pi)$ of $g\in A$
such that $\chi(g)=1$ for all $\chi\in X(\pi)$. If $X(\pi)$ is finite, $B(\pi)$ is of index $\mathrm{card}(X(\pi))$ in $A$
and we have the following inclusions
$$B\subset B(\pi)\subset A(\sigma)\subset A\,\,.$$
\begin{corollary} If $X(\pi)$ is finite the representation $\sigma$ of $B$ can be extended to a representation
$\tilde\sigma$ of $B(\pi)$ in the same space.
\end{corollary}
\begin{proof}
Proposition~\ref{cliffa} applied to the pairs $(A,B)$ and $(A,B(\pi))$
tells us that the dimensions of the intertwining algebras for $\pi\vert_{B(\pi)}$ and for $\pi\vert_B$ are both equal
to $\mathrm{card}(X(\pi))$
and hence the irreducible constituents of $\pi\vert_{B(\pi)}$ remain irreducible when
restricted to $B$. \end{proof}
\begin{proposition} \label{frobenius} Let $B \subset A$ be a pair of locally compact groups, where
$B$ is a closed invariant subgroup of $A$ such that $B\backslash A$ is a compact abelian group.
Let $\pi$ and $\sigma$ be irreducible unitary representations of $A$ and $B$ respectively.
Assume that the group $X(\pi)$, of characters of $B\backslash A$ such that $\pi\otimes\chi\simeq\pi$,
is finite. Then Frobenius reciprocity holds: the natural map
$$Frob:\mathrm{Hom}_B(\pi\vert_B,\sigma)\to\mathrm{Hom}_A(\pi,\mathrm{Ind}_B^A\sigma)$$
is an isomorphism.
\end{proposition}
\begin{proof}
We have seen that $\sigma$ can be extended to a representation $\tilde\sigma$ of $B(\pi)$.
Since $A/B(\pi)$ is finite all functions in the space of $\mathrm{Ind}_{B(\pi)}^A\tilde\sigma$ are continuous
and evaluation at the origin yields
the Frobenius reciprocity, i.e.\! the following map is a bijection:
$$\mathrm{Hom}_{B(\pi)}(\pi\vert_{B(\pi)},\tilde\sigma)\to\mathrm{Hom}_A(\pi,\mathrm{Ind}_{B(\pi)}^A\tilde\sigma)\,\,.
\leqno{(a)}$$
On the other hand, there is an isomorphism
$$\mathrm{Hom}_{B}(\pi\vert_{B},\sigma)\to\mathrm{Hom}_{B(\pi)}(\pi\vert_{B(\pi)},\tilde\sigma)\,\,.\leqno{(b)}
$$
Now $\tilde\sigma$ injects in $\mathrm{Ind}_B^{B(\pi)}\sigma$: in fact
the map
$$w\mapsto f_w\qquad \hbox{with}\qquad f_w(x)=\tilde\sigma(x)w$$
is an intertwining operator since
$$\rho(y)f_w(x)=f_w(xy)=
\tilde\sigma(xy)w=f_{\tilde\sigma(y)w}(x),$$
and it follows from Proposition \ref{cliffb} that
$\mathrm{Ind}_B^{B(\pi)}\sigma$ is the Hilbert direct sum of the $\tilde\sigma\otimes\nu$
where $\nu$ runs over characters of $B(\pi)/B$ while $\mathrm{Ind}_{B(\pi)}^A\tilde\sigma$
is a multiple of $\pi$. This implies that
$$\mathrm{Hom}_A(\pi,\mathrm{Ind}_{B(\pi)}^A(\tilde\sigma\otimes\nu))=\mathrm{Hom}_A(\pi,(\mathrm{Ind}_{B(\pi)}^A\tilde\sigma)\otimes\chi)=0\leqno{(c)}$$
unless $\chi$, which is any extension of $\nu$ to $A$,
belongs to $X(\pi)$.
Now induction by stages shows that
$$\mathrm{Ind}_{B}^A\,\sigma=\widehat\bigoplus_\nu\mathrm{Ind}_{B(\pi)}^A(\tilde\sigma\otimes\nu)$$
where $\nu$ runs over characters of $B(\pi)/B$ and (c) implies
$$\mathrm{Hom}_A(\pi,\rho)=\mathrm{Hom}_A(\pi,\mathrm{Ind}_{B}^A\,\sigma)=\mathrm{Hom}_A(\pi,\mathrm{Ind}_{B(\pi)}^A\tilde\sigma)\leqno{(d)}.$$
In view of (a), (b) and (d)
the proof is complete.
\end{proof}
\section{The groups in question}\label{setting}
Let $k$ be a field.
Let $H$ and $G$ be two
connected algebraic groups over $k$ with a morphism $f:H\to G\,\,.$
Let $ZG$ denote the center of $G$ and $ZH$ the center of $H$.
Let $Z$ be the connected component of $ZG$; this is a torus.
\subsection{Some crossed modules}
We shall assume that the natural morphism $Z\times H\to G$
is a central map, which means that it is surjective and that its kernel is an abelian group scheme
contained in the center (see Appendix A).
This is equivalent to asking that the morphism induced between the derived subgroups
$$f_{der}:H_{der}\to G_{der}$$
is a central isogeny.
This is also equivalent to asking that the induced map
$$f_{ad}:H_{ad}\to G_{ad}$$ between the adjoint groups is an isomorphism.
The last isomorphism shows that $G$ acts on $H$ by conjugation
and this implies that the complex $[H\to G]$ is a crossed module.
We refer the reader to \cite[Chap.~1]{LBC} or \cite[Appendix~B]{Mi}
for this concept.
The particular case where $H=G_{sc}$ is the simply connected cover of the derived group has been extensively
studied in \cite{LBC}.
\begin{lemma} \label{quasi} Let $TG$ be a maximal torus in $G$ and let $TH$ be its inverse image in $H$.
The map between complexes $$[TH\to TG]\to[H\to G]$$
induces a quasi-isomorphism between complexes of points over the separable closure.
\end{lemma}
\begin{proof} Let $k^{sep}$ denote the separable closure of $k$. We want to prove that
$$[TH(k^{sep})\to TG(k^{sep})]\to[H(k^{sep})\to G(k^{sep})]$$
is a quasi-isomorphism. In particular we need to compute the kernel and cokernel of the map $f^{sep}$.
But then we are dealing with split groups and split tori.
Since the unipotent subgroups and Weyl groups
are isomorphic \cite[Th\'eor\`eme (2.20), page 260]{BT} and using Bruhat decomposition,
we are left with the kernel and cokernel of the induced map between the tori.
\end{proof}
\subsection{Crossed modules over local fields}\label{crossl}
In this subsection $F$ is a local field; by this we mean
an archimedean as well as a non archimedean local field.
As a convention for the Galois cohomology with values in complexes $[B\to A]$ we take $A$ in degree 0.
This is the convention used in \cite{LBC}.
We denote by $Hplus$ the subgroup of $G(F)$ generated by $f(H(F))$ and $ ZG(F)$.
The reader is warned that although $Hplus$ is a Lie group when $F$ is archimedean
and a totally disconnected group when $F$ is non archimedean,
it is not in general the group of points of an algebraic reductive group over $F$.
\begin{proposition}\label{coloc} The group
$Hplus$ is an invariant subgroup in $G(F)$. The quotient $G(F)/Hplus$ is abelian and compact;
it is even finite for local fields of characteristic zero.
\end{proposition}
\begin{proof} Replacing if necessary $H$ by $H\times Z$, which is again reductive and connected,
we may assume that the map $f$ is surjective. Then
it is enough to prove that, in such a case, $f(H(F))\backslash G(F)$ is abelian, compact and
even finite for local fields of characteristic zero.
Since quasi-isomorphisms between complexes of points on the separable closure compatible
with Galois action
induce isomorphisms in Galois cohomology \cite[Proposition 1.2.2]{LBC},
Lemma \ref{quasi} implies that
the map
$${{\mathbf H}}^0(F,[TH\to TG])\to {{\mathbf H}}^0(F,[H\to G])$$
is an isomorphism and hence ${{\mathbf H}}^0(F,[H\to G])$ is abelian.
One has an exact sequence
$$
1\to f(H)\backslash G\to{\mathbf H}^0(F,[H\to G])\to{\mathbf H}^1(F,H).
$$
In particular $f(H)\backslash G$ is an abelian subgroup of finite index in ${\mathbf H}^0(F,[H\to G])$.
There is an exact sequence
$$
1\to f(TH)\backslash TG\to{\mathbf H}^0(F,[TH\to TG])\to{\mathbf H}^1(F,TH)\,\,.
$$
Since ${\mathbf H}^1(F,TH)$ is finite, it remains to observe that $f(TH)\backslash TG$ is compact when $f$ is surjective
(see for example Lemma~\ref{apptor} in Appendix A).
It is finite for local fields of characteristic zero.
\end{proof}
For an alternative argument independent of Galois hypercohomology
see \ref{appcomp} in Appendix A.
\begin{remark} In the case of a central isogeny $H \to G$ for groups over a non-archimedean
local field this result was stated (without proof) and used in \cite{Si}.
\end{remark}
\subsection{Crossed modules over global fields}\label{crossg}
In this subsection $F$ is a global field.
We shall use the notation of \cite{KS} and \cite{LBC} for adelic cohomology. The reader should be aware that
the degree conventions for hypercohomology of complexes are not the same in
these references: namely
${\mathbf H}^0(\star,B \to A)$ in \cite{LBC} is ${\mathbf H}^1(\star,B\to A)$ in \cite{KS}. We shall use the
convention of \cite{LBC}.
\begin{lemma}\label{co} Assume the morphism $f: H \to G$ is surjective.
Then ${\mathbf H}^0({\mathbb A_F}/F,[H\to G])$ is compact.
\end{lemma}
\begin{proof} The quasi-isomorphism $[TH\to TG]\to[H\to G]$ implies isomorphisms in cohomology.
Hence it is equivalent to prove that $${\mathbf H}^0({\mathbb A_F}/F,[TH\to TG])$$ is compact. However, this
is one of the statements in Lemma C.2.D, page 153, in \cite{KS} (up to the shift in degree explained above).
Although this reference is written for number fields, the proof extends verbatim to the case of arbitrary global fields. Namely,
one has an exact sequence
$$1\to D\to{\mathbf H}^0({\mathbb A_F}/F,[TH\to TG])\to {\mathbf H}^1({\mathbb A_F}/F,TH)$$
where $$D=\mathrm{Coker}[{\mathbf H}^0({\mathbb A_F}/F,TH)\to{\mathbf H}^0({\mathbb A_F}/F,TG)]$$
is compact if $f$ is surjective while ${\mathbf H}^1({\mathbb A_F}/F,TH)$ is finite.
\end{proof}
\begin{lemma}\label{cor} Assume $f$ is surjective.
Then $G(F)f(H({\mathbb A_F}))\backslash G({\mathbb A_F})$ is compact.
\end{lemma}
\begin{proof} Let us denote by $K$ the complex $[H\to G]$.
The following diagram
$$\begin{matrix}
&&H(F)&\to &H({\mathbb A_F})&\to &{\mathbf H}^0({\mathbb A_F}/F,H)&&\cr
&&\downarrow&&\downarrow&&\downarrow&&\cr
&&G(F)&\to&G({\mathbb A_F})&\to&{\mathbf H}^0({\mathbb A_F}/F,G)&&\cr
&&\downarrow&&\downarrow&&\downarrow&&\cr
&& {\mathbf H}^0(F,K)&\to&{\mathbf H}^0({\mathbb A_F},K)&\to&{\mathbf H}^0({\mathbb A_F}/F,K)&\to&\mathrm{Ker}^1(F,K)\cr
&&\downarrow&&\downarrow&&&&\cr
\mathrm{Ker}^1(F,H)&\to& {\mathbf H}^1(F,H)&\to&{\mathbf H}^1({\mathbb A_F},H)&&&&\cr
\end{matrix}$$
is commutative with exact lines and columns.
Now Lemma \ref{co} and the finiteness of $$\mathrm{Ker}^1(F,K)\simeq \mathrm{Ker}^1(F,[TH\to TG])$$
(cf. \cite{KS})
imply that
$$\mathrm{Coker}[{\mathbf H}^0(F,[H\to G])\to{\mathbf H}^0({\mathbb A_F},[H\to G])]$$
is also compact.
Thanks to the finiteness of $\mathrm{Ker}^1(F,H)$ (cf. \cite[Prop. 1.7.3]{LBC} for number fields,
which rephrases results of Kottwitz,
and \cite[Thm 1.3.3]{Con} for function fields; the latter relies on \cite{Ha}),
the image of $G({\mathbb A_F})$ in this cokernel is, up to a finite subgroup,
isomorphic to the quotient $$G(F)f(H({\mathbb A_F}))\backslash G({\mathbb A_F})$$
and hence this quotient is also compact.
\end{proof}
We now return to the general case where $f: H \to G$ need not be surjective.
\begin{proposition}\label{coad} Let $Hplus := ZG({\mathbb A_F})G(F) f(H({\mathbb A_F}))$.
The quotient $Hplus\backslash G({\mathbb A_F})$ is a compact abelian group.
\end{proposition}
\begin{proof} Replacing
if necessary $H$ by $H\times Z$, this is a consequence of Lemma \ref{cor}.
\end{proof}
For an alternative argument independent of adèlic hypercohomology
see \ref{appcompb} in Appendix A.
\section{A first application of Clifford's theory: the local case}\label{loc}
In this section $F$ is a local field.
Some aspects of what follows have been observed by various authors
(see in particular \cite{GK}, \cite{He}, \cite{HS}, \cite{LL}, \cite{LS}, \cite{Si} and \cite{T}).
\subsection{The basic results}
Consider $G$ and $H$ with a map $f:H\to G$ over $F$ inducing a central isogeny of their derived groups
and consider $Hplus$, the subgroup of $G(F)$ generated by $f(H(F))$ and $ZG(F)$.
We denote by $N$ the kernel of the map $f:H\to G$.
\begin{lemma}\label{fini}
The quotient $Hplus\backslash G(F)$ is abelian and compact.
If $\pi$ is an irreducible unitary representation of $G(F)$ then the group $X(\pi)$ of characters $\chi$ of $Hplus\backslash G(F)$
such that $\pi \otimes \chi \cong \pi$ is finite.
\end{lemma}
\begin{proof}
We apply the results of Section~\ref{cliff} to $A=G(F)$ and $B=Hplus$.
The assertions are obvious when $C=Hplus\backslash G$ is finite, which is the case
for local fields of characteristic zero according to Proposition \ref{coloc}. For
non archimedean fields of arbitrary characteristic we appeal again to Proposition \ref{coloc} or \ref{appcompb}
for the first statement.
The finiteness of $X(\pi)$ is known for admissible irreducible representations
(\cite{He}, \cite{Si}). To conclude we recall that the subspace of smooth vectors in an
irreducible unitary representation of the group of points of a connected
reductive group over a non archimedean local field is admissible (\cite{Be}).
\end{proof}
Let $F$ be a non archimedean local field, and assume $G$ is a quasi-split
connected reductive group,
split over an unramified extension. Choose an hyper-special maximal compact subgroup $K\subset G(F)$.
We say that a representation $\pi$ of $G(F)$ is unramified if the operator
$\pi(K)$ fixes a non zero vector.
\begin{lemma}\label{unr} If $\pi$ is unramified, all elements in $X(\pi)$ are also unramified.
\end{lemma}
\begin{proof} Choose an Iwahori subgroup $I\subset K$; then there is a
unique Borel subgroup $P_0\subset G$ with Levi decomposition
$P_0=T\ltimes U$ such that $$I=(T(F)\cap I)(U(F)\cap I)(\overline U(F)\cap I) $$
where $\overline U$ is the opposite unipotent subgroup.
An unramified representation $\pi$ is the spherical subquotient of a principal series
representation obtained by parabolic induction of a character $\lambda$ of $T(F)$ which is trivial on $T(F)\cap I$.
A character $\chi\in X(\pi)$ defines by restriction a character $\tilde\chi$ of $T(F)$.
The representation $\pi\otimes\chi$ is a subquotient of the principal series
representation obtained by parabolic induction of $\lambda\tilde\chi$.
But since $\pi\simeq\pi\otimes\chi$ one has
$$\lambda\tilde\chi=s(\lambda)\qquad\hbox{for some $s$ in the Weyl group}\,\,.$$
This shows that $\tilde\chi=s(\lambda)\lambda^{-1}$ is trivial on $T(F)\cap I$.
Then $\chi$ must be trivial on the subgroup generated by
$f(H(F))$ and $T(F)\cap I$. Now $f(H(F))\supset U(F)\supset U(F)\cap I$ and similarly for $\overline U$.
Hence $\chi$ is trivial on $I$. Denote by $K'$ the hyper-special subgroup in $H(F)$ such that
$f(K')\subset K$. The Weyl group $W'$ of $H(F)$ maps bijectively via $f$ onto the Weyl group $W$ of $G(F)$,
and any $s'\in W'$ has a representative $w_{s'}\in K'$;
hence any $s\in W$ has a representative $$w_s=f(w_{s'})\in f(H(F))\cap K\,\,.$$ Since the $w_s$ and $I$ generate $K$
the character $\chi$ is trivial on $K$. \end{proof}
\begin{proposition} \label{local}
Given an irreducible unitary representation $\pi$ of $G(F)$ its restriction to $H(F)$ is a direct sum of finitely many
irreducible unitary representations that are $G(F)$-conjugate.
Conversely, any irreducible unitary representation
of $H(F)$ trivial on $N$
occurs in the restriction of some $\pi$ and all such irreducible representations are of the form $\pi\otimes\chi$ with
$\chi\in X$ the group of characters of $G(F)/Hplus$.
\end{proposition}
\begin{proof}
We first restrict $\pi$ to $Hplus$. In view of Lemma \ref{fini} we may use Proposition \ref{cliffa} with
$A=G(F)$ and $B=Hplus$. Hence
this restriction is a direct sum of finitely many
irreducible unitary representations of $Hplus$ that are conjugate under $G(F)$.
Then restriction from $Hplus$ to $N\backslash H(F)$ preserves irreducibility.
Conversely, consider a representation $\sigma$ of $H(F)$ and $\omega$ a character of $ ZG(F)$
such that its restriction to $ZH(F)$ is the character with which $ZH(F)$ acts via $\sigma$. One can extend $\sigma$ to a
representation $\sigma^+$ of $Hplus$ and
then induce this representation from $Hplus$ to $G(F)$. According to Proposition \ref{cliffb}
this is a sum of representations of the form $\pi\otimes\chi$ with
$\pi$ irreducible and
$\chi\in X$, the group of characters of $Hplus\backslash G(F)$. The restriction of $\pi$ to $N\backslash H(F)$ contains $\sigma$
according to Proposition~\ref{frobenius}.
\end{proof}
We observe that if $G$ and $H$ are quasisplit, and if $\pi$ is generic (i.e.\! has a Whittaker model for some
character of the unipotent radical of a chosen Borel subgroup)
the restriction $\pi\vert_{f(H(F))}$ is multiplicity free (i.e.\! $m=1$
in the notation of Proposition~\ref{cliffa})
as follows from Proposition~\ref{frobenius} using
the uniqueness of Whittaker models and the compatibility
of Whittaker models with induction.
\subsection{Two equivalence relations}
\begin{definition} We say that two irreducible unitary representations
$ \sigma$ and $ \sigma'$ of $H(F)$ are in the same
``$G(F)$-packet''
if there exists an element $g\in G(F)$ such that $\sigma^g \cong \sigma'$.
We denote by $\mathcal{A}_{G}(H)$ the set of $G(F)$-packets of irreducible unitary representations of $H(F)/N$.
\end{definition}
We observe that
$G(F)$-packets coincide with $L$-packets when $H=SL(n)$ and $G=GL(n)$
and for compatible inner forms as well. In general $L$-packets should be
unions of $G(F)$-packets since adjoint conjugacy is a special case of stable conjugacy.
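To fix ideas, we recall the classical special case $H=SL(2)$, $G=GL(2)$, which is essentially the situation studied in \cite{LL}: the determinant induces an isomorphism
$$G(F)/Hplus\simeq F^\times/(F^\times)^2\,\,,$$
so that $X$ may be identified with the group of quadratic characters of $F^\times$. For an irreducible unitary representation $\pi$ of $G(F)$ the restriction to $H(F)$ is multiplicity free (for generic $\pi$ this follows from the uniqueness of Whittaker models recalled above, and it is immediate for one dimensional $\pi$), and its components form a single $G(F)$-packet with $\mathrm{card}(X(\pi))$ elements, where $X(\pi)$ consists of the quadratic characters $\chi$ such that $\pi\otimes(\chi\circ\det)\simeq\pi$.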
\begin{definition}We define two irreducible unitary representations $\pi $ and $\pi'$ of $G(F)$ to be
$\mathcal{E}_{H}$-equivalent if there exists a character $\mu$ of $G(F)/Hplus$ such that $\pi \otimes \mu \cong
\pi'$.
We denote by $\mathcal{E}_{H}(G)$ the corresponding set of equivalence classes.
\end{definition}
Now, all elements in the $\mathcal{E}_{H}$-class of some $\pi$ have equivalent restrictions to $f(H(F))$ and all components
of the restriction
belong to the same $G(F)$-packet.
Let $R$ be the map
which assigns to an $\mathcal{E}_{H}$-equivalence class
represented by $\pi$ the $G(F)$-packet of components $\sigma^g$ of the restriction of
$\pi$ to $H(F)$.
The above propositions and remarks can be summarized as follows.
\begin{proposition} \label{bija} The map
$R : \mathcal{E}_{H}(G) \to \mathcal{A}_{G}(H)$
is a bijection.
\end{proposition}
\section{Second application: the case of cuspidal representations}\label{cusp}
Now $F$ is a global field and
we examine how cuspidal automorphic representations behave under restriction and induction.
By a cuspidal representation we understand an irreducible unitary automorphic representation occurring in the cuspidal
spectrum. For a definition of these objects over fields of arbitrary characteristic
we refer the reader to \cite{MW}. We consider two connected reductive groups with a
map $f:H\to G$ over some global field $F$ inducing a central isogeny of their derived groups.
\subsection{The key construction}
We have introduced in subsection~\ref{crossg} the subgroup
$$Hplus := ZG({\mathbb A_F})G(F)f(H({\mathbb A_F}))$$ in $G({\mathbb A_F})$.
According to Proposition \ref{coad} or \ref{appcompb}
the quotient $Hplus\backslash G({\mathbb A_F})$ is abelian and compact.
\begin{lemma}\label{finiadele}
Let $\pi$ be an automorphic representation of $G({\mathbb A_F})$. Then the group $X(\pi)$ of characters
of $Hplus\backslash G({\mathbb A_F})$ such that $\pi\otimes\chi\simeq\pi$ is finite.
\end{lemma}
\begin{proof}
We observe that $Hplus$ contains the product over all places $v$ of groups $Hplus_v$
generated by $f(H_v)$ and $ ZG_v$. Let $\pi$ be an automorphic representation of $G({\mathbb A_F})$. Thanks to
Lemmas \ref{fini} and \ref{unr}
we know
there is a compact open subgroup $K_f$ of the finite ad\`eles
on which any $\chi\in X(\pi)$ is trivial.
Recall that $Hplus_\infty$ is of finite index in $G_\infty$ when $F$ is a number field.
In all cases $ K_f.Hplus$ is an open subgroup of finite index in $G({\mathbb A_F})$
on which any $\chi$ such that $\pi\otimes\chi\simeq\pi$ is necessarily trivial, hence $X(\pi)$ is finite.
\end{proof}
Denote by ${\mathbf N}$ the kernel of the map $f_{\mathbb A_F}:H({\mathbb A_F})\to G({\mathbb A_F})\,\,.$
This is a subgroup in the center of $H({\mathbb A_F})$
and we may identify ${\mathbf N}\backslash H({\mathbb A_F})$ with $f(H({\mathbb A_F}))$.
Let $Zplus:= ZG({\mathbb A_F})G(F)$.
Observe that
$$Zplus/G(F)= ZG({\mathbb A_F}) G(F)/G(F)
= ZG({\mathbb A_F})/ ZG(F)\,\,.$$
Let
$$ Zun^+:= Zplus \cap f(H({\mathbb A_F})) \qquad\hbox{and} \qquad Zun=f^{-1}(Zun^+)\,\,.$$
$Zun$ is a closed subgroup in $H({\mathbb A_F})$ that contains and
normalizes $H(F)$. Let $$Gamma^+=G(F)\cap Zun^+=G(F)\cap f(H({\mathbb A_F}))
\qquad\hbox{and}\qquad Gamma=f^{-1}(Gamma^+)
\,\,.$$
The subgroup $Gamma$ in $H({\mathbb A_F})$ contains ${\mathbf N}.H(F)$ and
$Zun^+/Gamma^+\simeq Zun/Gamma\,\,.$
Thus, a unitary character $$\omega: ZG({\mathbb A_F})/ ZG(F) \to \CM^{\times}$$
defines a character of $Zplus$, again denoted $\omega$, and
we obtain by restriction a character $\omega_1^+$ on $Zun^+$ trivial on $Gamma^+$.
Observe that conversely any character on $Zun^+/Gamma^+$ extends
to a character of $ ZG({\mathbb A_F})/ ZG(F)$. Denote by $\omega_1$ the character of $Zun/Gamma$
defined by $\omega_1^+$.
\begin{remark}\label{contrex}
Observe that if $f$ is injective, i.e.\! if $H$ is a subgroup of $G$, then
$$Gamma=G(F)\cap H({\mathbb A_F})=H(F)\,\,.$$
But when $f$ is not injective it may happen that ${\mathbf N}.H(F)$ is a strict subgroup of $Gamma$. This is,
for example, the case if $G=GM_m$, $H=GM_m$
and $f:x\mapsto x^n$ when $(F,n)$ is a counterexample to the local-global principle for $n$-th powers
(see \cite[Chap. X, Thm. 1]{AT}).
\end{remark}
Since the group $Zun$ normalizes $H(F)$, it acts via left translations on $H(F)\backslash H({\mathbb A_F})$, hence on the space
$$L^2(H(F)\backslash H({\mathbb A_F}),\omega_0)$$ of functions that are
square-integrable modulo the center on $H(F)\backslash H({\mathbb A_F})$ and that transform according to $\omega_0$,
some automorphic character of the center of $H({\mathbb A_F})$.
The latter space is
endowed with the right regular representation $\rho_{\omega_0}$ of $H({\mathbb A_F})$.
The space of left $Gamma$-invariant functions that are
square-integrable modulo the center on $H(F)\backslash H({\mathbb A_F})$ can be decomposed
according to the characters of $Gamma\backslash Zun$ and this decomposition is compatible with the spectral
decomposition of the right regular representation. Observe that the action of $Zun$ preserves cuspidality.
Now, given $\omega$ and $\omega_1$ as above consider
a function $\varphi$ on $H({\mathbb A_F})$ which satisfies the condition
$$ \varphi(c h) = \omega_1(c) \varphi(h) \hbox{ for all}\; c \in Zun,\,\, h \in H({\mathbb A_F})\,\,.$$
There exists a unique function $\varphi^+$ on $ Hplus$ such that
$$ {\varphi}^+({z}\gamma g) = \omega({z}) {\varphi}^+(g)$$
for any ${z} \in ZG({\mathbb A_F})$, $\gamma\in G(F)$, $g\in Hplus$, and moreover (using $\dot x$ to denote $f(x)$)
$${\varphi}^+( \zeta \dot h) = \varphi(c h)= \omega_1(c) \varphi(h)$$
whenever ${\zeta} = \dot c \gamma$ with $c \in Zun$, ${\gamma} \in G(F)$
and $h\in H({\mathbb A_F})$.
This yields a bijection
$$ L^2(H(F)\backslash H({\mathbb A_F}), \omega_1)
\tilde{\longrightarrow} L^2(G(F) \backslash Hplus, \omega),$$
that preserves cuspidality. Here cuspidality for representations of $Hplus$ has the obvious definition, namely the vanishing
of integrals over quotients $U(F)\backslash U({\mathbb A_F})$
of non-trivial ``unipotent subgroups'' that are isomorphic images in $G(F)\backslash Hplus$ of quotients of unipotent subgroups in $
H({\mathbb A_F})$.
Hence one obtains a bijection between the cuspidal spectra
$$ L^2_{\mathrm{cusp}}(H(F)\backslash H({\mathbb A_F}), \omega_1)
\tilde{\longrightarrow}L^2_{\mathrm{cusp}}(G(F) \backslash Hplus, \omega).
\leqno{(\star)} $$
It is known that the right regular representation $\rho_{\mathrm{cusp},\omega_1}$ of $H({\mathbb A_F})$ in
$$ L^2_{\mathrm{cusp}}(H(F)\backslash H({\mathbb A_F}), \omega_1)$$ splits into a direct Hilbert sum with
finite multiplicities. This implies that
the right regular representation $\rho^+_{\mathrm{cusp},\omega}$ of $Hplus$
in $L^2_{\mathrm{cusp}}(G(F) \backslash Hplus, \omega)$
also splits into a direct Hilbert sum with
finite multiplicities.
Now we observe that $L^2(G(F) \backslash Hplus, \omega)$ is the space of the representation $$\rho_{\omega}^+ = \text{Ind}
^{Hplus}
_{Zplus}\omega,$$
while $L^2(G(F) \backslash G({\mathbb A_F}), \omega)$ is the space of the representation
$$\rho_{\omega} = \text{Ind}^{G({\mathbb A_F})}
_{ Zplus}\,\,\omega.$$ Thus, since induction preserves cuspidality, we see that
$$ \rho_{\mathrm{cusp},\omega} = \text{Ind}^{G({\mathbb A_F})}
_{ Hplus}\rho^+_{\mathrm{cusp},\omega}.$$
\subsection{Main results}
\begin{theorem} \label{cuspa} The restriction to ${\mathbf N}\backslash H({\mathbb A_F})$
of any cuspidal representation $\pi$ of $G({\mathbb A_F})$
contains a cuspidal representation $\sigma$ of $H({\mathbb A_F})$.
\end{theorem}
\begin{proof}
Any cuspidal automorphic representation $\pi$ of $G({\mathbb A_F})$
with central character $\omega$ occurs in
$$\rho_{\sigma^+}=\text{Ind}^{G({\mathbb A_F})}_{ Hplus}\sigma^+$$
for some constituent $\sigma^+$ of $\rho_{\mathrm{cusp},\omega}^+$.
It follows from Lemma~\ref{finiadele} and Proposition~\ref{frobenius} that $\sigma^+$ occurs in the restriction of $\pi$ to $Hplus$.
But the isomorphism $(\star)$ shows that
the restriction of $\sigma^+$ to ${\mathbf N}\backslash H({\mathbb A_F})$ is a direct sum of cuspidal representations.
\end{proof}
\begin{theorem} \label{cuspb}
Any cuspidal representation $\sigma$ of $H({\mathbb A_F})$ that can be realized in a space of functions on
$Gamma\backslash H({\mathbb A_F})$
appears in the restriction of some cuspidal representation $\pi$ of $G({\mathbb A_F})$.
This is in particular true for any cuspidal representation of $H({\mathbb A_F})$ when $f$ is injective.
\end{theorem}
\begin{proof} Consider $\mathcal H$, the subspace
of left $Gamma$-invariant functions in the space of cuspidal square-integrable functions modulo the center
$$L^2_{\mathrm{cusp}}(H(F)\backslash H({\mathbb A_F}),\omega_0),$$
where $\omega_0$ is the character by which $\sigma$ acts when restricted to the center of $H({\mathbb A_F})$.
The space of the isotypic component, say $W_\sigma$, of $\sigma$ in $\mathcal H$
can be decomposed according to characters of $Gamma\backslash Zun$ with $Zun$
acting on the left. Let $\omega_1$
be a character that occurs and consider the subspace $W_\sigma(\omega_1)$ of $W_\sigma$
cut out by this character. Choose $\omega$ extending $\omega_1$ to $Zplus$. Then $W_{\sigma}(\omega_1)$
can be mapped into a subspace of
$\rho^+_{\mathrm{cusp},\omega}$ via $(\star)$; let $\sigma^+$ be an irreducible constituent of the subspace
generated by the image of $W_\sigma(\omega_1)$
under the action of $Hplus$.
We get a family of cuspidal representations for $G$ by decomposing the induced representation
$$\rho_{\sigma^+}=\text{Ind}^{G({\mathbb A_F})}_{ Hplus}\sigma^+\,\,.$$
According to \ref{cliffb} the various representations that occur in $\rho_{\sigma^+}$ are of the form
$\pi\otimes\chi$ for some $\pi$ where $\chi$ runs over characters of $G({\mathbb A_F})/Hplus$. Thanks to Lemma~\ref{finiadele} and
Proposition~\ref{frobenius} we know that $\sigma^+$ occurs in the restriction of $\pi$ to $Hplus$ and, in turn, by construction,
$\sigma$ occurs in the restriction of $\sigma^+$ to ${\mathbf N}\backslashH({\mathbb A_F})$.
The last statement follows from Remark~\ref{contrex}.
\end{proof}
\subsection{A reformulation}
\begin{definition}
We denote by $\mathcal{A}_{G}(H,{\mathbb A_F})$ the set of $G({\mathbb A_F})$-conjugacy classes
of irreducible unitary representations of $H({\mathbb A_F})$ trivial on ${\mathbf N}$.
\end{definition}
\begin{definition}
Two irreducible unitary representations $\pi $ and $\pi'$ of $G({\mathbb A_F})$ are said to be
$\mathcal{E}_{H}$-equivalent if there exists a character $\mu$ of $G({\mathbb A_F})/H({\mathbb A_F})$ such that $\pi \otimes \mu \cong \pi'$.
We denote by $\mathcal{E}_{H}(G, {\mathbb A_F})$ the corresponding set of equivalence classes.
\end{definition}
All elements in the $\mathcal{E}_{H}$-class of some global $\pi$ have equivalent restrictions to $H({\mathbb A_F})$ and all components of
the restriction
belong to the same $G$-packet.
Let $$R : \mathcal{E}_{H}(G, {\mathbb A_F}) \to \mathcal{A}_{G}(H, {\mathbb A_F})$$ be the map
which assigns to an $\mathcal{E}_{H}$-equivalence class
represented by $\pi$ the $G({\mathbb A_F})$-packet of components of $\pi\vert_{ H({\mathbb A_F})}$.
Observe that $R$ is the restricted product of local restrictions. This makes sense since, for almost all places $v$, the restriction
to $ H_v$ of an
unramified representation of $G_v$
contains a unique constituent that is unramified.
\begin{proposition} \label{bijb}
The map
$$ R :\mathcal{E}_{H}(G, {\mathbb A_F}) \to \mathcal{A}_{G}(H, {\mathbb A_F})$$
is a bijection.
\end{proposition}
\begin{proof}
The local analogue, Lemma \ref{fini}, implies the injectivity of $R$. The surjectivity follows from the
local analogue and the
fact that, if $G_v$ and $H_v$ are unramified,
any unramified representation of $H_v$ occurs in the restriction of
an unramified representation of $G_v$.
\end{proof}
We denote by $\mathcal{A}_{G,\mathrm{cusp}}(H, {\mathbb A_F})$ the subset of $\mathcal{A}_{G}(H, {\mathbb A_F})$
of $G$-packets that contain
some cuspidal automorphic representation of $H({\mathbb A_F})$.
We define
$\mathcal{E}_{H,\mathrm{\mathrm{cusp}}}(G, {\mathbb A_F})$ to be the subset of $\mathcal{E}_{H}(G, {\mathbb A_F})$
of $\mathcal{E}_{H}$-equivalence classes
that contain some cuspidal automorphic representations of $ G({\mathbb A_F})$.
\begin{theorem}\label{main} Assume that $Gamma={\mathbf N}.H(F)$ (this is true
in particular when $f$ is injective).
The map $$R: \mathcal{E}_{H}(G, {\mathbb A_F}) \to \mathcal{A}_{G}(H, {\mathbb A_F})$$
induces a bijection
$$\mathcal{E}_{H,\mathrm{cusp}}(G, {\mathbb A_F}) \tilde{\longrightarrow} \mathcal{A}_{G,\mathrm{cusp}}(H, {\mathbb A_F})\,\,.$$
\end{theorem}
\begin{proof} In view of Propositions \ref{bija}, \ref{bijb} and Remark~\ref{contrex}
this is nothing but
a reformulation of Theorems \ref{cuspa} and \ref{cuspb}.
\end{proof}
Observe that when $Gamma$ is strictly bigger than ${\mathbf N}.H(F)$ (in particular $f$ is not injective)
the map
$$\mathcal{E}_{H,\mathrm{cusp}}(G, {\mathbb A_F}) \to \mathcal{A}_{G,\mathrm{cusp}}(H, {\mathbb A_F})$$
may not be surjective: an example is given in Remark~\ref{contrex}.
The image consists of classes of cuspidal representations that can be realized
in a subspace of $Gamma$-left-invariant functions.
\begin{remarks} The reader should be aware of the following pitfalls.
\par\noindent 1 - If $\sigma$ is a cuspidal representation of $H({\mathbb A_F})$
it is not always the case that all conjugates $\sigma^g$ for $g\in G({\mathbb A_F})$
are automorphic. Examples of this fact do occur in the case $H=SL(n)$ and $G=GL(n)$
for representations that are ``endoscopic'' (see~\cite{LL} for the case $n=2$).
\par\noindent 2 - Consider two cuspidal automorphic representations $\pi$ and $\pi'$ that
are of the form $\pi'\simeq\pi\otimes\mu$; it may happen that
$\mu$ cannot be chosen to be automorphic (see \cite{BL} where examples are constructed
for $H=SL(n)$ and $G=GL(n)$ provided $n\ge 3$).
\end{remarks}
\subsection{A multiplicity formula}\label{fail}
We assume moreover from now on that $f$ is injective.
Given an irreducible unitary representation $\pi$ of $G({\mathbb A_F})$ the restriction of $\pi$ to $H({\mathbb A_F})$ splits into a direct sum with
finite multiplicities if $\pi_v$ is generic almost everywhere.
In fact the restriction to $ H_v$ of an
unramified representation
contains a unique constituent that is unramified.
The representation $\pi\vert_{H({\mathbb A_F})}$ is the direct sum of the restricted products of the
constituents of the $\pi_v\vert_{H_v}$.
We know that locally everywhere the multiplicity is finite
(cf.~\ref{fini}).
But, whenever $\pi_v$ has a Whittaker model, the restriction
is multiplicity free. Hence the global decomposition is a direct sum (infinite in general)
with finite multiplicities if $\pi_v$ is generic almost everywhere.
We observe that, given $\pi$, the set of components of $\pi\vert_{Hplus}$ is finite according to
Propositions \ref{cliffa} and \ref{finiadele}, but one should be aware that not all such representations
will show up in $\rho^+_{\mathrm{cusp},\omega}$. In fact, for example,
if $G=GL(n)$ only one such $\sigma^+$, in the restriction to $Hplus$ of a given $\pi$,
may occur in $\rho^+_{\mathrm{cusp},\omega}$ since otherwise
this would contradict the multiplicity one theorem for cuspidal representations of $GL(n)$.
On the other hand there may be more than one $\sigma^+$ in the space generated by the isotypic component
of some $\sigma$ and they may be inequivalent.
This is in fact the case when considering cuspidal representations of $SL(n)$ with multiplicity
greater than one (which may exist for $n\ge3$). In such a case the various $\pi$'s
containing $\sigma$ in their restriction to $H({\mathbb A_F})$ may differ by
non automorphic characters (see \cite{BL}).
More generally we have the following multiplicity formula.
\begin{theorem}\label{multi}
Assume $G$ and $H$ quasi-split.
Let $\pi$ be a generic cuspidal representation for $G$
and $\sigma$ a generic cuspidal representation for $H$ that
occurs in the restriction of $\pi$ to $H({\mathbb A_F})$.
Let $Y(\pi)$ be the group of characters $\mu$ of $G({\mathbb A_F})/ZG({\mathbb A_F})H({\mathbb A_F})$
such that $\pi\otimes\mu$ is also a cuspidal representation. Let
$X_{loc}(\pi)$ be the subgroup of characters
$\mu\in Y(\pi)$ such that $\pi\otimes\mu\simeq\pi$.
This is the restricted product over the set of places of $F$ of the $X(\pi_v)$.
Let $m(\pi)$ be the multiplicity of $\pi$ in the cuspidal spectrum for $G$.
Then, the multiplicity $m(\sigma)$ of $\sigma$ in the cuspidal spectrum of $H$ is given by
$$m(\sigma)=\sum_{\mu\in M(\pi)} m(\pi\otimes\mu)$$
where $M(\pi)=Y(\pi)/X_{loc}(\pi).X$.
\end{theorem}
\begin{proof} The uniqueness of Whittaker models tells us that the restriction of $\pi$
to $H({\mathbb A_F})$ is multiplicity free. In particular any $\pi$ defines a unique $\sigma^+$
in $\rho^+_{\mathrm{cusp},\omega}$ and conversely this $\sigma^+$ is associated to
the set of cuspidal representations of the form $\pi\otimes\chi$ with $\chi\in X$, i.e.\!
trivial on $Hplus$; in particular $\chi$ is automorphic. Now the set of representations
$\pi'$ in $ \rho_{\mathrm{cusp},\omega}$ whose restriction to $H({\mathbb A_F})$ contains $\sigma$,
is the set of $\pi'=\pi\otimes\mu$ with $\mu\in Y(\pi)$.
\end{proof}
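As an illustration (this special case is only a reformulation of the theorem and is not used in the sequel), consider $H=SL(n)$ and $G=GL(n)$ as above: by the multiplicity one theorem for cuspidal representations of $GL(n)$ one has $m(\pi\otimes\mu)=1$ for every $\mu\in Y(\pi)$, so that the formula of Theorem~\ref{multi} becomes
$$m(\sigma)=\# \big(Y(\pi)/X_{loc}(\pi).X\big)\,\,,$$
and $m(\sigma)>1$ precisely when some $\mu\in Y(\pi)$ does not lie in $X_{loc}(\pi).X$; as recalled above, such $\sigma$ may indeed exist for $n\ge 3$ (see \cite{BL}).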
\subsection{Miscellaneous remarks}
Assume again $f$ injective.
Let $ZO= ZG({\mathbb A_F})\cap H({\mathbb A_F})$.
The group $$ Zun=ZG({\mathbb A_F}) G(F) \cap H({\mathbb A_F})$$ is often equal to
$ZO.H(F)$. For example, this latter equality holds in the case $G=GL(n)$ and $H=SL(n)$
whenever the local-global principle for $n$-th powers holds for $F$ and $n$. In fact, if $z\gamma\in Zun$, which means
$\text{det}( z\gamma)=1$, then $\text{det}(\gamma)$ is locally everywhere an $n$-th power, and, if the
local-global principle holds, this means that $\text{det}(\gamma)$ is itself an $n$-th power and $\gamma$ can be written as
$\zeta.\eta$ with $\zeta\in ZG(F)$ and $\eta\in H(F)$; hence $z\gamma=z_0\eta$ with $z_0\in ZO$. This shows that, in this case,
the present argument is essentially identical to the argument used in \cite[Sect. 3]{LS}.
Transfer results similar to Theorem~\ref{cuspa} and Theorem~\ref{cuspb}
for cuspidal automorphic forms have been obtained by Chenevier \cite{Ch}
under the condition $Zun=ZO.H(F)$.
As observed in the introduction, the map $f:H\to G$ induces a map $\check f$ between
dual groups, which in turn defines a map
$$H^1(\check f):H^1(W_F,\check G)\to H^1(W_F,\check H)$$
between cohomology sets for Weil groups with values in dual groups.
The induced correspondence from packets of
representations for $H$ to packets for $G$
provided by Langlands functoriality conjectures should fit with
\ref{local} and \ref{cuspa}.
If $f$ is injective Proposition \ref{bija}
and Theorem~\ref{main} are compatible with lifting Theorems 7.1 and 8.1 in \cite{Lab}.
\appendix
\def\noindent{\noindent}
\section{Central Morphisms}
\centerline{By Bertrand LEMAIRE}
Let $k$ be a field.
Recall that a morphism of algebraic groups $f: H\rightarrow G$ (over $k$) is said to be {\it central} if the schematic kernel of $f$ is
contained
in the schematic center of $H$, which means that for any commutative $k$-algebra $A$, we have the inclusion
$$
\mathrm{Ker} \left( f_A: H(A) \rightarrow G(A)\right) \subset Z(H(A)),
$$
where $Z(H(A))$ denotes the center of $H(A)$.\footnote{
Note that if $k$ is of characteristic $p>0$, a surjective central $k$-morphism
(e.g. a central $k$-isogeny) may be inseparable: for example, the map $t\mapsto t^2$ from the multiplicative group
${\mathbb G}_m$ into itself, with $p=2$.}
From \cite[22.4]{B} we know that, given a connected reductive $k$-group $G$, the product morphism
$Z^G\times G_{der} \rightarrow G$,
where $Z^G$ and $G_{der}$ are respectively the (set-theoretic) center and the derived group of $G$, is a central $k$-isogeny.
Let $F$ be a global field. We denote by ${\mathbb A_F}$ the adèle ring of $F$. If $v$ is a place of $F$ its completion $F_v$ is
either $\RM$ or $\CM$ or
a non-Archimedean local field (i.e.\! a finite extension of ${\mathbb Q}_p$, resp. ${\mathbb F}_p((t))$).
\subsection{Surjective maps of tori}
\begin{lemma}\label{apptor}
Let $f: T\rightarrow S$ be a surjective morphism of tori.
\begin{enumerate}
\item[(1)] For any place $v$, the group $S(F_v)/f(T(F_v))$ is compact.
\item[(2)] The group $S({\mathbb A_F})/f(T({\mathbb A_F}))S(F)$ is compact.
\end{enumerate}
\end{lemma}
\begin{proof}
We only give a proof for assertion (2), the proof of (1) being essentially the same but simpler. Let $S_d$ be the maximal $F$-split
subtorus of $S$, and $X(S_d)$ the group of algebraic characters of $S_d$ (they are all defined over $F$). Let us fix a finite place
$v$ of $F$, and a uniformizer $\varpi_v$
of the completion $F_v$ of $F$ at $v$. The set
$$
S_d(\varpi_v)=\mathrm{Hom} (X(S_d),\varpi_v^{\mathbb Z})
$$
is a free abelian group of finite rank, and a co-compact subgroup of $S_d(F_v)$. It also naturally identifies with a subgroup of
$S_d({\mathbb A_F})$. Moreover $S_d(\varpi_v)\cap S_d(F)=\{1\}$ and the group $S_d({\mathbb A_F})/S_d(\varpi_v)S_d(F)$ is compact. Now let
$\overline{S}=S/S_d$. It is an $F$-anisotropic torus, hence the group $\overline{S}({\mathbb A_F})/\overline{S}(F)$ is compact \cite[3.5]
{Sp}, which implies that the group $S({\mathbb A_F})/S_d({\mathbb A_F})S(F)$ is compact. Since
$$
S_d({\mathbb A_F}) \cap (S_d(\varpi_v)S(F))= S_d(\varpi_v)S_d(F),
$$
we obtain that the group $S({\mathbb A_F})/S_d(\varpi_v)S(F)$ is compact. On the other hand, $f$ induces a surjective $F$-morphism $f_d: T_d
\rightarrow S_d$
which sends $T_d(\varpi_v)$ onto a sub-lattice of $S_d(\varpi_v)$. Hence the group $S({\mathbb A_F})/f(T_d(\varpi_v))S(F)$ is compact. This
implies (2).
\end{proof}
\subsection{Central surjective morphisms of reductive groups}
\begin{proposition}\label{appcomp}
Let $f:H \rightarrow G$ be a surjective central morphism of connected reductive groups.
\begin{enumerate}
\item[(1)] For any place $v$, the quotient $G(F_v)/f(H(F_v))$ is an abelian compact group.
\item[(2)] The quotient $G({\mathbb A_F})/ f(H({\mathbb A_F}))G(F)$ is an abelian compact group.
\end{enumerate}
\end{proposition}
\begin{proof}
As above, we only give a proof of assertion (2). From \cite[2.2, 2.6]{BT} (see \cite[2.3]{BT}), there exists an
$F$-morphism $\kappa:G \times G \rightarrow H$ such that for all $x,\,y\in H$, we have
$$
\kappa(f(x),f(y))= xyx^{-1}y^{-1}.
$$
So the commutator map $G\times G\rightarrow G,\, (x,y)\mapsto [x,y]= xyx^{-1}y^{-1}$ coincides with $f\circ \kappa$, and we have
$$
[G({\mathbb A_F}),G({\mathbb A_F})]= f\circ \kappa (G({\mathbb A_F})\times G({\mathbb A_F}))\subset f(H({\mathbb A_F})).
$$
Hence $f(H({\mathbb A_F}))$ is a normal subgroup of $G({\mathbb A_F})$, and the group $G({\mathbb A_F})/ f(H({\mathbb A_F}))$ is abelian.
A fortiori the quotient $G({\mathbb A_F})/f(H({\mathbb A_F}))G(F)$ is an abelian group. It remains to prove the compactness.
Let $S$ be a maximal $F$-split torus in $G$, and $M=Z^G(S)$ the centralizer of $S$ in $G$.
Let $P$ be a (minimal) parabolic $F$-subgroup of $G$ with Levi component $M$, and $U=U_P$ the unipotent radical of $P$.
From \cite[22.6]{B}, the inverse image $S'$ of $S$ in $H$ is a maximal $F$-split torus in $H$, and the inverse image $P'$ of $P$ in
$H$ is a minimal parabolic $F$-subgroup of $H$. Put $M'=Z^H(S')$ and $U'=U_{P'}$. From loc.~cit., $f$ induces a surjective
$F$-morphism
$M'\rightarrow M$ and an $F$-isomorphism $U'\rightarrow U$. Moreover, $M'\rightarrow M$ is central. On the other hand, we have
the Iwasawa decomposition
$
G({\mathbb A_F})= \boldsymbol{K}P({\mathbb A_F})
$
where $\boldsymbol{K}= \prod_v\boldsymbol{K}_v$ is an $M$-admissible maximal compact subgroup of $G({\mathbb A_F})$. Hence the
product map
$\boldsymbol{K}\times P({\mathbb A_F})\rightarrow G({\mathbb A_F})$ gives a surjective map
$$
\boldsymbol{K} \times (P({\mathbb A_F})/f(P'({\mathbb A_F}))P(F)) \rightarrow G({\mathbb A_F})/ f(H({\mathbb A_F}))G(F).
$$
Since $f(U'({\mathbb A_F}))=U({\mathbb A_F})$, we have
$$P({\mathbb A_F})/f(P'({\mathbb A_F}))P(F)= M({\mathbb A_F})/f(M'({\mathbb A_F}))M(F).$$
So we just need to prove the compactness of the quotient $M({\mathbb A_F})/f(M'({\mathbb A_F}))M(F)$ (which we already know to be an abelian group).
Since the quotient $\overline{M}= M/S$ is a connected reductive $F$-anisotropic group, the set $\overline{M}({\mathbb A_F})/\overline{M}(F)$
is compact
\cite[3.5]{Sp}, which implies that the set $M({\mathbb A_F})/ S({\mathbb A_F})M(F)$ is compact. A fortiori the quotient
$$M({\mathbb A_F})/f(M'({\mathbb A_F}))S({\mathbb A_F})M(F)$$
is compact. Since
$$f(M'({\mathbb A_F}))\cap (S({\mathbb A_F})M(F))= f(S'({\mathbb A_F}))S(F),$$
we are reduced to proving the compactness of the group $S({\mathbb A_F})/ f(S'({\mathbb A_F}))S(F)$, which is given by Lemma~\ref{apptor}.
\end{proof}
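As an illustration of the proposition (this standard example is not needed in the sequel), take the surjective central morphism $f:SL_n\rightarrow PGL_n$. Sending $g\in PGL_n$ to the class of $\mathrm{det}(\tilde g)$, for any lift $\tilde g\in GL_n$ of $g$, induces isomorphisms
$$
PGL_n(F_v)/f(SL_n(F_v))\simeq F_v^{\times}/(F_v^{\times})^n
\qquad\hbox{and}\qquad
PGL_n({\mathbb A_F})/f(SL_n({\mathbb A_F}))PGL_n(F)\simeq {\mathbb A_F}^{\times}/({\mathbb A_F}^{\times})^nF^{\times},
$$
and both right-hand sides are compact abelian groups (the first one is even finite when $F_v$ is archimedean or a finite extension of ${\mathbb Q}_p$).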
\begin{corollary}\label{appcompb}
Let $f:H\rightarrow G$ be an $F$-morphism of connected reductive groups such that the induced morphism
$f_{der}: H_{der} \rightarrow G_{der}$ is a central isogeny.
\begin{enumerate}
\item[(1)]For any place $v$, the quotient $G(F_v)/Z^G(F_v)f(H(F_v))$ is an abelian compact group.
\item[(2)]The quotient $G({\mathbb A_F})/Z^G({\mathbb A_F})f(H({\mathbb A_F}))G(F)$ is an abelian compact group.
\end{enumerate}
\end{corollary}
\begin{proof}
The morphism
$$\textrm{id} \times f_{der} : Z^G \times H_{der} \rightarrow Z^G \times G_{der}$$
and the product morphism
$Z^G \times G_{der} \rightarrow G$
are central $F$-isogenies. The composition of these two morphisms
$Z^G \times H_{der} \rightarrow G$
is also a central $F$-isogeny. This implies the corollary.
\end{proof}
\begin{remarks}
In the corollary, we may replace $Z^G$ by its connected component $Z$, which is the maximal central $F$-torus in $G$; the product
morphism $Z\times G_{der} \rightarrow G$ is still a central $F$-isogeny.
\end{remarks}
\end{document} |
\begin{document}
\date{}
\author{Gang ${\rm Tian}^*$ \ \ \ \ \ \ Xiaohua ${\rm Zhu}^{**}$}
\thanks{* Partially supported by NSFC and NSF Grants}
\thanks{** Partially supported by the NSFC Grants 11271022 and 11331001}
\subjclass[2000]{Primary: 53C25; Secondary: 53C55,
58J05}
\keywords{Conic K\"ahler-Einstein metrics, complex Monge-Amp\`ere equation, Ricci flow}
\address{Gang Tian, SMS and BICMR, Peking
University, Beijing, 100871, China, and Department of Mathematics, Princeton
University, New Jersey, NJ 02139, USA\\ [email protected]}
\address{Xiaohua Zhu, SMS and BICMR, Peking
University, Beijing 100871, China\\
[email protected]}
\title{Properness of log $F$-functionals}
\begin{abstract}
In this paper, we apply the method developed in [Ti97] and [TZ00] to prove the properness of the log $F$-functional on any conic K\"ahler-Einstein manifold.
As an application, we give an alternative proof of the openness of the continuity method through conic K\"ahler-Einstein metrics.
\end{abstract}
\tableofcontents
\section*{0. Introduction}
The study of conic K\"ahler-Einstein metrics has been very active in recent years, partly because of their use in problems in algebraic geometry and
K\"ahler geometry. For example, they provide a continuity method for establishing the existence of K\"ahler-Einstein metrics on any Fano manifold $M$,
that is, a compact K\"ahler manifold with positive first Chern class $c_1(M)>0$. Such a continuity method is used in the recent solution of the Yau-Tian-Donaldson conjecture
given independently by Tian and Chen-Donaldson-Sun [Ti12], [CDT13]. The conjecture states that a Fano manifold $M$ admits a K\"ahler-Einstein metric if and only if $M$ is K-stable as defined in [Ti97] and reformulated in [Do02]. K-stability is closely related to the properness of
Mabuchi's $K$-energy, or equivalently, of the $F$-functional.
It is proved in [Ti97] that if $M$ admits no non-zero holomorphic vector field, then the existence of K\"ahler-Einstein metrics on $M$ is equivalent
to the properness of the $F$-functional or of Mabuchi's $K$-energy. The purpose of this paper is to adapt the arguments in [Ti97] as well as [TZ00]
to show that similar results still hold for conic K\"ahler metrics.
Now let us recall some basics on conic K\"ahler metrics. Let $D$ be a smooth divisor of $M$ with $[D]\in \lambda c_1(M)$ for some $\lambda>0$ and let $S$
be a defining section of $D$. Choose a smooth K\"ahler metric $\omega_0$ with K\"ahler class $[\omega_0]\,=\, 2\pi \,c_1(M)$; then there is
a Hermitian metric $H_0$ on $[D]$ whose curvature is $\omega_0$. Following computations in [Au84] and [Di88] (also see [Ti87], [DT92]),
Jeffres-Mazzeo-Rubinstein introduced
a log $F$-functional on the space of K\"ahler potentials [JMR11]:
$$\mathcal H(M, \omega_0)=\{ \psi\in C^\infty(M)|~\omega_{\psi}=\omega_0+\sqrt{-1}\partial\bar\partial\psi>0\}.$$
This log $F$-functional is an Euler-Lagrange energy of conic K\"ahler-Einstein metrics with cone angle $2\pi\beta$ along $D$ and is defined by
(also see [LS12])
\begin{align}\label{log-functional} F_{\omega_0,\mu}(\psi)\,=\, J_{\omega_0}(\psi) \,-\,\frac{1}{V}\int_M\,\psi\,\omega^n_0\,-\,\frac{1}{\mu}\log\left(\frac{1}{V}\,
\int_M \,\frac{1}{|S|_{H_0}^{2\beta}}\,e^{h_0-\mu\psi}\,\omega^n_0\right),
\end{align}
where $\mu\,=\,1-(1-\beta)\lambda\in (0,1)$, $V\,=\,\int_M\,\omega_0^n$ and $h_0$ is the Ricci potential of $\omega_0$ defined by
$${\rm Ric}(\omega_0)\,-\,\omega_0\,=\,\sqrt{-1}\,\partial\bar\partial \,h_0,~~~~\int_M\,\left(e^{h_0}\,-\,1\right)\,\omega_0^n\,=\,0.$$
Note that $J_{\omega_0}(\psi)$ is defined by (see [Au84], [Ti87])
\begin{align}
J_{\omega_0}(\psi)\,=\,\frac{1}{V}\sum_{i=0}^{n-1}\frac{i+1}{n+1}
\int_M\sqrt{-1}\partial\psi\wedge\bar\partial\psi\wedge\omega_0^{i}\wedge\omega_{\psi}^{n-i-1}.\notag
\end{align}
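To make the Euler-Lagrange property precise, we record the standard first variation computation (a sketch only; see [Au84], [Ti87], [DT92] for the smooth case). For a smooth path $\psi_s$ in $\mathcal H(M,\omega_0)$ with $\dot\psi_s=\frac{d\psi_s}{ds}$ one has
$$\frac{d}{ds}\Big( J_{\omega_0}(\psi_s)-\frac{1}{V}\int_M\psi_s\,\omega_0^n\Big)\,=\,-\frac{1}{V}\int_M\dot\psi_s\,\omega_{\psi_s}^n,$$
and hence
$$\frac{d}{ds}F_{\omega_0,\mu}(\psi_s)\,=\,-\frac{1}{V}\int_M\dot\psi_s\,\omega_{\psi_s}^n\,+\,\frac{\int_M\dot\psi_s\,|S|_{H_0}^{-2\beta}\,e^{h_0-\mu\psi_s}\,\omega_0^n}{\int_M|S|_{H_0}^{-2\beta}\,e^{h_0-\mu\psi_s}\,\omega_0^n}.$$
Therefore $\psi$ is a critical point of $F_{\omega_0,\mu}$ if and only if
$$\omega_\psi^n\,=\,c\,\frac{e^{h_0-\mu\psi}}{|S|_{H_0}^{2\beta}}\,\omega_0^n$$
for a normalizing constant $c>0$; up to the normalization of $S$ and $H_0$, and by the Lelong-Poincar\'e formula, this is the complex Monge-Amp\`ere form of the conic K\"ahler-Einstein condition along $D$ (cf. Section 1).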
The main result of this paper is the following
\begin{theo}\label{tian-zhu-conic-ke-metric}
Let $D$ be a smooth divisor of a Fano manifold $M$ with $[D]\in \lambda c_1(M)$ for some $\lambda>0$ such that there is no non-zero holomorphic vector field
which is tangent to $D$ along $D$.\footnote{This condition can be removed if $\lambda\ge 1$ by a result in [Be11], or [SW12].}
Suppose that there exists a conic K\"ahler-Einstein metric on $M$ with cone angle $2\pi\beta\in (0,2\pi)$ along $D$.
Then there exist two uniform constants $\delta$ and $C$ such that
\begin{align}\label{proper-inequality}F_{\omega_0,\mu}(\psi)\,\ge\, \delta \,I_{\omega_0}(\psi)\, -\,C,~~~\forall~\psi \in \mathcal H(M,\omega_0),
\end{align}
where $I_{\omega_0}(\psi)\,=\,\frac{1}{V}\int_{M}\,\psi\,(\omega_0^n\,-\,\omega_{\psi}^n)$.
\end{theo}
Combined with a result in [JMR11], Theorem \ref{tian-zhu-conic-ke-metric} implies that there exists a conic K\"ahler-Einstein metric on $M$ along
$D$ with cone angle $2\pi\beta\in (0,2\pi)$ if and only if $F_{\omega_0,\mu}(\cdot)$ is proper. This generalizes Tian's theorem in [Ti97] for
smooth K\"ahler-Einstein manifolds.
As an application of Theorem \ref{tian-zhu-conic-ke-metric}, or more precisely of its weaker version Theorem \ref{tian-zhu-conic-ke-metric-weak} in Section 6,
we give an alternative proof of the openness of the continuity method through conic K\"ahler-Einstein metrics. The proof of such an openness was first
sketched by Donaldson [Do11] as an application of the $C^{2,\alpha;\beta}$ Schauder estimates he developed for conic K\"ahler metrics.
Since the space $C^{2,\alpha;\beta}$ depends on the cone angle $2\pi\beta$ of the metrics, the usual Implicit Function Theorem cannot be applied directly to prove the openness. Instead, Donaldson considered a family of linear elliptic operators associated to approximating conic metrics to get the a priori Schauder estimates needed
for proving the openness. Our proof is to use the perturbation method first introduced in [Ti12] to approximate conic K\"ahler-Einstein metrics
by smooth K\"ahler metrics; then we apply the Implicit Function Theorem to the approximating smooth K\"ahler metrics and take a limit (cf. Section 7).
To ensure that the limit exists, we need to establish a priori $C^0$- and $C^2$-estimates for the K\"ahler potentials associated to those
approximating metrics. With these a priori estimates, we can take the limit to get a weak conic K\"ahler-Einstein metric. This metric
is in fact of class $C^{2,\alpha;\beta}$ in the sense of the Schauder theory, by the regularity theorem in [JMR11].
The proof of Theorem \ref{tian-zhu-conic-ke-metric} is an adaptation of that in [Ti97] for smooth K\"ahler-Einstein manifolds.
In our situation, there are some technical issues we need to take care of. First we need to show how to smooth singular metrics near the conic
K\"ahler-Einstein metric. We will use a family
of twisted K\"ahler-Ricci flows with initial values given by smooth metrics which approximate conic metrics (see Sections 5 and 6).
Then we shall deal with the local smoothing behavior of these flows as well as the local convergence of the flows when the initial values vary.
Note that, as a parabolic version of the twisted K\"ahler-Einstein metric equation, which was first introduced by Song-Tian [ST12],
the twisted K\"ahler-Ricci flow has also been studied by many people, such as Collins-Sz\'ekelyhidi [CS12], Liu-Zhang [LZ14], Liangmin Shen [Sh14], etc.
The organization of this paper is as follows: In Section 1, we recall some basics on conic K\"ahler metrics.
In Section 2, we prove the lower bound of the log $F$-functional $F_{\omega_0,\mu}(\cdot)$. In Section 3, we introduce a
family of smooth K\"ahler metrics to approximate the conic K\"ahler metrics discussed in Section 2.
In Section 4, we introduce a family of twisted K\"ahler-Ricci flows to smooth the approximating metrics of Section 3; then in Section 5, we prove the
local convergence of these flows. Theorem \ref{tian-zhu-conic-ke-metric} will be proved in Section 6.
In Section 7, we apply Theorem \ref{tian-zhu-conic-ke-metric} to give a proof of the openness of the continuity method through conic K\"ahler-Einstein metrics
which was first given by Donaldson.
\section{Conic K\"ahler metrics}
Let $S$ be a defining section of $D$ and $H_0$ a Hermitian metric on $[D]$ induced by $\omega_0$. Then it is easy to see that $|S|^{2\beta} =|S|_{H_0}^{2\beta}\in C^{2,\alpha;\beta} (M)$ for any $\alpha\in (0,1)$
in the sense of [Do11].
Moreover, one can check that
$\omega^*=\omega_0+\delta\sqrt{-1}\partial\bar{\partial} |S|^{2\beta}$ is a conic K\"ahler metric with cone angle $2\pi\beta$ along $D$, as long as the number $\delta$ is sufficiently small (cf. [Br11]).
There is an important property of $\omega^*$ shown in [JMR11]:
the holomorphic bisectional curvature of $\omega^*$ is uniformly bounded from above on $M\setminus D$.
Let $h^*$ be a log Ricci potential of $\omega^*$ defined by
\begin{align}\label{ricci-potential-omega-star}
\sqrt{-1}\partial\bar{\partial}h^*={\rm Ric}(\omega^*)-\mu\omega^*-2\pi(1-\beta)[D].
\end{align}
Then we have
\begin{align}\label{ricci-potential-relation}
h^*= h_0-\mu\delta |S|^{2\beta}-\log\frac{|S|^{2(1-\beta)}(\omega^*)^n}{\omega_0^n}+const,
\end{align}
where $h_0$ is a Ricci potential of $\omega_0$. A direct computation shows that $h^*\in C^{,\gamma;\beta}(M)$, where $\gamma=\min(\frac{2}{\beta}-2,1).$
In general, a K\"ahler potential of a conic K\"ahler metric need not be in $C^{2,\alpha;\beta} (M)$. But, for a conic K\"ahler-Einstein metric
$\omega_{CKE}=\omega_\phi=\omega_0+\sqrt{-1}\partial\bar{\partial}\phi$ with cone angle $2\pi\beta$ along $D$, $\phi$ is in $C^{2,\alpha_0;\beta} (M)$
for some positive number $\alpha_0 \le\gamma.$ This is because
$\omega_{CKE}$ satisfies the conic K\"ahler-Einstein metric equation,
\begin{align}\label{conic-KE-equation}
{\rm Ric}(\omega)-\mu\omega-2\pi(1-\beta)[D]=0.
\end{align}
Then $\phi$ satisfies a non-degenerate complex Monge-Amp\`ere equation,
\begin{align}\label{complex-MA-equation-conic}
(\omega^*+\sqrt{-1}\partial\bar\partial\phi)^n=e^{h^*-\mu\phi}(\omega^*)^n.
\end{align}
The $C^{2,\alpha;\beta}$-regularity theorem established in [JMR11] (also in [GP13]) implies that $ \phi\in C^{2,\alpha_0;\beta} (M)$ for some $\alpha_0\le \gamma$.
For any positive number $\alpha\le \alpha_0$, we introduce a space of $C^{2,\alpha; \beta}$ K\"ahler potentials by
\begin{align}&\mathscr{H}^{2,\alpha; \beta}(M,\omega_0)\notag\\
&=\{\psi \in C^{2,\alpha; \beta}(M)|~ \omega_\psi=\omega_0+\sqrt{-1}\partial\bar\partial\psi ~\text{ is a conic K\"ahler metric on }M\}.
\notag\end{align}
One can show that both functionals $F_{\omega_0,\mu}(\cdot)$ and $I_{\omega_0}(\cdot)$ are well-defined on $\mathscr{H}^{2,\alpha;\beta}(M,\omega_0)$.
\begin{lem}\label{functional-smoothing} For any $\psi\in\mathcal H(M,\omega_0) $, there is a sequence of $\psi_\delta\in \mathscr{H}^{2,\alpha;\beta}(M,\omega_0)$ such that
$$ F_{\omega_0,\mu}(\psi)=\lim_{\delta\to 0} F_{\omega_0,\mu}(\psi_\delta)$$
and
$$I_{\omega_0}(\psi)=\lim_{\delta\to 0} I_{\omega_0}(\psi_\delta).$$
\end{lem}
\begin{proof} In fact, one can choose $\psi_\delta=\psi+\delta |S|^{2\beta}$ to verify the lemma.
\end{proof}
\section{Lower bound of $F_{\omega_0,\mu}(\cdot)$}
In this section, we use the continuity method of Ding-Tian in [DT92] (also see [Ti97]) to study the lower bound of the log $F$-functional $F_{\omega_0,\mu}(\cdot)$. This method will be extended to prove
the main result, Theorem \ref{tian-zhu-conic-ke-metric}, of this paper.
It is worth mentioning that the lower bound of $F_{\omega_0,\mu}(\cdot)$ can also be obtained by using a general theorem of Berndtsson on the uniqueness problem for
special K\"ahler potentials in [Be11]. Berndtsson's method is based on the geodesic theory of the space of K\"ahler potentials studied in [Se92], [Do98], [Ch00], etc.
\begin{theo}\label{Bern_thm}
Let $\omega=\omega_{CKE}=\omega_\phi$ be a conic K\"ahler-Einstein metric on $M$ with cone angle $2\pi\beta$ along $D$. Then $\phi$ attains the minimum
of $F_{\omega_0,\mu}(\cdot)$ on $\mathscr{H}^{2,\alpha;\beta}(M,\omega_0)$. In particular,
\begin{equation}
F_{\omega_0,\mu}(\psi)\ge-c(\omega_0,\mu),~\forall~ \psi\in\mathscr{H}^{2,\alpha;\beta}(M,\omega_0).
\end{equation}
\end{theo}
\begin{proof}
For any $\psi\in \mathscr{H}^{2,\alpha; \beta}(M,\omega_0)$, the log Ricci potential of $\hat\omega=\omega_{\psi}$ is given by
$$h_{\hat\omega}=-\log\frac{\omega_\psi^n}{(\omega^*)^n} -\mu\psi+h^*+const.$$
Then $h_{\hat\omega}\in C^{,\alpha;\beta}(M)$. We consider the following complex Monge-Amp\`ere equations with a parameter $t\in [0,\mu]$:
\begin{equation}\label{backward_MAE}
({\hat\omega}+\sqrt{-1}\partial\bar\partial\varphi)^n=e^{h_{\hat\omega}-t\varphi}{\hat\omega}^n.
\end{equation}
By the assumption, there exists a solution $\varphi_\mu=\phi-\psi+const$ at $t=\mu=1-(1-\beta)\lambda$. Note that the kernel of the operator $(\Delta_{\omega}+\mu)$ is zero (cf. [Do11]).
Then by Donaldson's linear theory for Laplace operators associated to conic metrics, we can apply the Implicit Function Theorem to
show that there exists a $\delta>0$ such that (\ref{backward_MAE}) is solvable in the space $\mathscr{H}^{2,\alpha;\beta}(M,\omega_0)$ for any $t\in (\mu-\delta, \mu]$.
Set
$$E=\{s\in [0,\mu]|~ (\ref{backward_MAE}) \mbox{ is solvable at }~t=s ~{\rm in } ~ \mathscr{H}^{2,\alpha'; \beta}(M,\omega_0) ~{\rm for ~some}~\alpha'\le \alpha \}.$$
We want to prove that $E=[0,\mu].$
Clearly, $E$ is non-empty since $(\mu-\delta, \mu]\subset E$. On the other hand, it is easy to see that the equations (\ref{backward_MAE}) are equivalent to the Ricci curvature equations,
\begin{align}\label{log-twisted-equation-t}{\rm Ric}(\hat \omega_{\varphi})=t\hat\omega_{\varphi}+ (\mu-t)\hat\omega+ 2\pi (1-\beta)[D], ~t~\in [0,\mu].
\end{align}
Then
\begin{align}\label{ricci-equation-t}{\rm Ric}(\hat\omega_{\varphi_t})>t\hat\omega_{\varphi_t}, ~{\rm in}~ M\setminus D.
\end{align}
Thus the first non-zero eigenvalue of $\Delta_t$ is strictly bigger than $t$ [JMR11],
where ${\Delta}_t$ is the Laplace operator associated to $\omega_t$ and ${\omega}_t=\hat\omega_{\varphi_t}=\hat{\omega}+\sqrt{-1}\partial\bar\partial\varphi_t$.
It follows that the kernel of the operator $(\Delta_{t}+t)$ is zero for any $t\in E$. By the Implicit Function Theorem, $E$ is an open set.
It remains to prove that $E$ is also a closed set. This relies on the a priori estimates below for the solution $\varphi_t$ of (\ref{backward_MAE}) for $t\in E$.
First we deal with the $C^0$-estimate. We may assume that $t\ge \delta$ by the Implicit Function Theorem, since (\ref{backward_MAE}) is solvable at $t=0$ [JMR11].
By a direct computation, we have
\begin{align}\nonumber
\frac{d}{dt}(I_{\hat\omega}(\varphi_t)-J_{\hat{\omega}}(\varphi_t))&=-\frac{1}{V}\int_M\varphi_t{\Delta}_t\dot{\varphi_t}{\omega}_t^n\\ \nonumber
&=\frac{1}{V}\int_M({\Delta}_t\dot{\varphi_t}+t\dot{\varphi_t}){\Delta}_t\dot{\varphi_t}{\omega}_t^n.\nonumber
\end{align}
Note that
$${\Delta}_t\dot{\varphi_t}=-t\dot{\varphi_t}-\varphi_t$$
by differentiating $\int_M e^{h_{\hat{\omega}}-t\varphi}\hat{\omega}^n=V$. By the fact that the first non-zero eigenvalue of $\Delta_t$ is strictly bigger than $t$, we get
$$\frac{d}{dt}(I_{\hat{\omega}}(\varphi_t)-J_{\hat{\omega}}(\varphi_t))\ge 0.$$
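For the reader's convenience we indicate the two facts used here (both are routine and we only sketch them). First, differentiating the logarithm of (\ref{backward_MAE}) with respect to $t$ gives the pointwise identity
$$\Delta_t\dot\varphi_t\,=\,\frac{d}{dt}\log\frac{\omega_t^n}{\hat\omega^n}\,=\,\frac{d}{dt}\big(h_{\hat\omega}-t\varphi_t\big)\,=\,-\varphi_t-t\dot\varphi_t.$$
Second, writing $\dot\varphi_t=a_0+\sum_{k\ge1}a_ku_k$ with $-\Delta_tu_k=\lambda_ku_k$ and $\{u_k\}$ orthonormal in $L^2(\omega_t^n/V)$ (the spectral theory of $\Delta_t$ on the conic manifold is provided by [JMR11]), and using that $\lambda_k>t$ for all $k\ge1$, one gets
$$\frac{1}{V}\int_M({\Delta}_t\dot{\varphi_t}+t\dot{\varphi_t}){\Delta}_t\dot{\varphi_t}\,{\omega}_t^n\,=\,\sum_{k\ge 1}\lambda_k(\lambda_k-t)\,a_k^2\,\ge\,0,$$
which is the inequality used above.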
This means that $I_{\hat{\omega}}(\varphi_t)-J_{\hat{\omega}}(\varphi_t)$ is increasing in $t$. Thus
$$I_{\hat{\omega}}(\varphi_t)\le (n+1)I_{\hat{\omega}}(\varphi_\mu)\le C.$$
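Here we have used the elementary inequalities between $I$ and $J$ (see [Au84], [Ti87]; we recall them only for the reader's convenience, with $\hat\omega$ in place of $\omega_0$ in the definitions above and $\omega_\varphi=\hat\omega+\sqrt{-1}\partial\bar\partial\varphi$): since
$I_{\hat\omega}(\varphi)=\frac{1}{V}\sum_{i=0}^{n-1}\int_M\sqrt{-1}\partial\varphi\wedge\bar\partial\varphi\wedge\hat\omega^{i}\wedge\omega_{\varphi}^{n-i-1}$
is a sum of non-negative terms of which $J_{\hat\omega}(\varphi)$ is the weighted sum with weights $\frac{i+1}{n+1}$, one has
$$\frac{1}{n+1}\,I_{\hat\omega}(\varphi)\,\le\, J_{\hat\omega}(\varphi)\,\le\,\frac{n}{n+1}\,I_{\hat\omega}(\varphi),$$
and therefore
$$I_{\hat\omega}(\varphi_t)\,\le\,(n+1)\big(I_{\hat\omega}-J_{\hat\omega}\big)(\varphi_t)\,\le\,(n+1)\big(I_{\hat\omega}-J_{\hat\omega}\big)(\varphi_\mu)\,\le\,(n+1)\,I_{\hat\omega}(\varphi_\mu).$$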
By using the Green formula [JMR11], we derive
$${\rm osc}_M(\varphi_t)\le C.$$
To get the $C^2$-estimate, we rewrite (\ref{backward_MAE}) as
\begin{equation}\label{backward_MAE-2}
(\omega^*+\sqrt{-1}\partial\bar\partial\varphi^*)^n=e^{h^*-t\varphi^*-(\mu-t)(\psi-\delta |S|^{2\beta}) } (\omega^*)^n,
\end{equation}
where $\varphi^*=\varphi-\delta |S|^{2\beta}+\psi$ and $h^*$ is the log Ricci potential of $\omega^*$ as in (\ref{ricci-potential-omega-star}).
Since ${\rm Ric}(\omega_{\varphi})>0$, by the Chern-Lu inequality [Cher68], [Lu68], we have
\begin{align}\label{Chern-Lu inequality-1}
\Delta_t \log{\rm tr}_{ \omega_{t}}(\omega^*)\ge -a(\omega^*)\, {\rm tr}_{ \omega_{t}}(\omega^*), ~{\rm in}~M\setminus D,
\end{align}
where $a=a(\omega^*)$ is a uniform constant which depends only on the upper bound of the holomorphic bisectional curvature of $\omega^*$, and so it
depends only on $\omega_0$ and the divisor $D$. Set
$$u=\log{\rm tr}_{ \omega_{t}}(\omega^*) -(a+1)\varphi^*.$$
Then there exists a uniform constant $C=C(\sup_M \phi, \sup_M \psi)$ such that
$$\Delta_t u\ge e^{u-C(a+1)}-n(a+1).$$
By the maximum principle as in [JMR11], it follows that
\begin{align}\label{upper-bound-phi-1}\omega_t\ge C^{-1}\omega^*.
\end{align}
By (\ref{backward_MAE-2}), we also get
\begin{align}\label{lower-bound-phi-1}\omega_t\le C \omega^*.
\end{align}
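We sketch how (\ref{upper-bound-phi-1}) and (\ref{lower-bound-phi-1}) are obtained (only a sketch; the barrier argument needed to apply the maximum principle across $D$ is carried out in [JMR11]). At a maximum point of $u$ in $M\setminus D$ one has $\Delta_t u\le 0$, hence $e^{u-C(a+1)}\le n(a+1)$ and
$$\log{\rm tr}_{\omega_t}(\omega^*)\,=\,u+(a+1)\varphi^*\,\le\,C(a+1)+\log (n(a+1))+(a+1)\sup_M\varphi^*,$$
which, together with the $C^0$-estimate, bounds ${\rm tr}_{\omega_t}(\omega^*)$ and gives (\ref{upper-bound-phi-1}). For (\ref{lower-bound-phi-1}) one combines this with (\ref{backward_MAE-2}): the density $e^{h^*-t\varphi^*-(\mu-t)(\psi-\delta|S|^{2\beta})}$ is bounded by the $C^0$-estimate and the fixed data $h^*,\psi$, so
$${\rm tr}_{\omega^*}(\omega_t)\,\le\,\frac{\omega_t^n}{(\omega^*)^n}\,\big({\rm tr}_{\omega_t}(\omega^*)\big)^{n-1}\,\le\,C,$$
that is, $\omega_t\le C\,\omega^*$.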
Once (\ref{upper-bound-phi-1}) and (\ref{lower-bound-phi-1}) hold, we can apply the $C^{2,\alpha;\beta}$-regularity theorem in [JMR11] to show that
$$\|\varphi\|_{C^{2,\alpha';\beta}(M)}\le C$$
for some $\alpha'\le \alpha$. Thus $\varphi_t\in \mathscr{H}^{2,\alpha'; \beta}(M,\omega_0)$. This implies that $E$ is a closed set and so $E=[0,\mu]$.
By a direct computation as in [DT92], we have
\begin{align}\label{integral-dt} t\left(J_{\hat{\omega}}(\varphi_t)-\frac{1}{V}\int_M\varphi_t\hat{\omega}^n\right)=-\int_0^t (I-J)_{\hat{\omega}}(\varphi_s)ds\le 0, ~\forall~t\le \mu.
\end{align}
Note that $\int_M e^{h_{\hat{\omega}}-\mu\varphi_\mu}\hat{\omega}^n=V$. Thus
$$F_{\hat{\omega},\mu}(\varphi_\mu)=J_{\hat{\omega}}(\varphi_\mu)-\frac{1}{V}\int_M\varphi_\mu\hat{\omega}^n\le 0.$$
By the cocycle condition for the log $F$-functional, it follows that
\begin{align}
F_{\omega,\mu}({\psi-\phi})=-F_{\hat{\omega},\mu}(\varphi_\mu)\ge 0.\notag
\end{align}
Again by the cocycle condition, we get
\begin{align}\label{cocycle-condition}
F_{\omega_0,\mu}(\psi)=F_{\omega,\mu}(\psi-\phi)+F_{\omega_0,\mu}(\phi)\ge F_{\omega_0,\mu}(\phi).
\end{align}
Hence we have proved that $F_{\omega_0,\mu}(\cdot)$ attains its minimum at $\phi$.
\end{proof}
\begin{cor}\label{Bern_thm-coro}
Suppose that there exists a conic K\"ahler-Einstein metric on $M$ with cone angle $2\pi\beta$ along $D$. Then
\begin{align}
F_{\omega_0,\mu}(\psi)\ge-c(\omega_0,\mu),~\forall~ \psi\in\cup_{0<\alpha'\le \alpha_0}\mathscr{H}^{2,\alpha';\beta}(M,\omega_0).\notag
\end{align}
\end{cor}
\section{Approximation of conic K\"ahler metrics}
In this section, we construct approximating smooth K\"ahler potentials for the solution $\varphi_t$ of (\ref{backward_MAE}) for each $t\in (0,\mu)$ by solving certain complex Monge-Amp\`ere equations.
First we shall smooth the conic metric
$\hat \omega=\omega_{\psi}$. Note that
$$(\hat\omega)^n=f_0\omega_0^n, $$
where $f_0=g\,\frac{1}{|S|^{2-2\beta}}$ for some $L^\infty$-function $g$. In particular, $f_0$ is an $L^p$-function.
Take a family of smooth functions $f_\delta$ with $\int_M f_\delta\omega_0^n=\int_M \omega_0^n$ such that $f_\delta$ converge to $f_0$ in $L^p$ as $\delta\to 0$.
Then by Yau's solution of the Calabi problem,
there is a family of K\"ahler potentials $\Psi_\delta$, which solve the equations $(\delta>0)$,
$$(\omega_0+\sqrt{-1}\partial\bar\partial\Psi_{\delta})^n=f_\delta\omega_0^n.$$
By Kolodziej's H\"older estimate [Kol08], $\Psi_{\delta}$ converge to $\psi$ in the $C^\alpha$-norm modulo constants as $\delta\to 0.$
For simplicity, we set $\omega_\delta=\omega_0+\sqrt{-1}\partial\bar\partial\Psi_{\delta}.$
We modify (\ref{log-twisted-equation-t}) by a family of Ricci curvature equations with parameter $\delta\in (0,\delta_0]$ for each $t\in [0,\mu]$,
\begin{align}\label{twisted-equation-t}{\rm Ric}(\omega^\delta_{\varphi_{\delta}})=t\omega^\delta_{\varphi_{\delta}}+ (\mu-t)\omega_\delta+ (1-\beta)\eta_\delta,
\end{align}
where $\omega^\delta_{\varphi_{\delta}}=\omega_\delta+\sqrt{-1}\partial\bar\partial\varphi_{\delta}$ and $\eta_\delta=\lambda\omega_0+\sqrt{-1}\partial\bar\partial \log(\delta+|S|^2).$
The equations (\ref{twisted-equation-t}) are in fact a family of twisted K\"ahler-Einstein metric equations associated to the positive $(1,1)$-forms $\Omega= (\mu-t)\omega_\delta+ (1-\beta)\eta_\delta$ [ST12].
One can check that (\ref{twisted-equation-t}) are equivalent to the following complex Monge-Amp\`ere equations,
\begin{equation}\label{smooth-continuity-modified}
(\omega_\delta+\sqrt{-1}\partial\bar\partial\varphi_\delta)^n=e^{h_\delta-t\varphi_\delta}\omega_\delta^n,
\end{equation}
where $h_\delta$ are the twisted Ricci potentials of $\omega_\delta$ defined by
\begin{align}\label{twisted-h-delta}\sqrt{-1}\partial\bar\partial h_\delta= {\rm Ric}(\omega_\delta)-\mu\omega_\delta- (1-\beta)\eta_\delta.
\end{align}
We shall study the solutions of (\ref{smooth-continuity-modified}) and their convergence as $\delta\to 0.$
Rewrite (\ref{twisted-equation-t}) as
\begin{align}\label{twisted-equation-t-2}{\rm Ric}(\omega^\delta_{\varphi_{\delta}})=t\omega^\delta_{\varphi_{\delta}}+(\mu-t)\omega_0+ (1-\beta)\eta_\delta +(\mu-t)\sqrt{-1}\partial\bar\partial \Psi_{\delta}.
\end{align}
Then the equations (\ref{smooth-continuity-modified}) are equivalent to
\begin{equation}\label{twisted-MA-equation}
(\omega_0+\sqrt{-1}\partial\bar\partial\hat\varphi_\delta)^n=\frac{1}{(\delta+|S|^2)^{1-\beta}}e^{h_0-t\hat\varphi_\delta -(\mu-t)\Psi_\delta}\omega_0^n,~~t\in[0,\mu],
\end{equation}
where $\hat \varphi_{\delta}= \hat \varphi_{t, \delta}=\varphi_{t, \delta} + \Psi_{\delta}$.
As in [Ti12], for a fixed $\delta>0$, we define a family of twisted $F$-functionals with parameter $t\in (0,\mu ]$ as follows,
\begin{align}\label{twisted-f-functional-t}
F_{t, \delta}(\varphi)=J_{\omega_0}(\varphi)-\frac{1}{V}\int_M\varphi\omega^n_0-\frac{1}{t}\log
\left(\frac{1}{V}\int_Me^{\hat h_\delta-t\varphi}\omega_0^n\right),
\end{align}
where
\begin{align}
\hat h_\delta=h_0-(1-\beta)\log (\delta+|S|_0^2)+(t-\mu)\Psi_{\delta}+C_\delta,~~\int_M(e^{\hat h_\delta}-1)\omega^n_0=0.\notag
\end{align}
Then all $F_{t, \delta}(\cdot)$ are proper for any $t\in (0,\mu ), \delta\in (0,\delta_0]$ since the log $F$-functionals $F_{\omega_0, t}(\cdot)$ defined in (\ref{log-functional}) are proper for any $t\in (0,\mu )$. The latter follows from a result in [LS12] by using the fact that
$F_{\omega_0, \mu }(\cdot)$ is bounded from below according to Theorem \ref{Bern_thm}. By the Green formula, we get
$${\rm osc}_{M}\hat \varphi_{t,\delta} \le C(I_{\omega_0}(\hat \varphi_{t,\delta})+1) \le C',~\forall ~t\in (0,\mu ), $$
where the constants $C,C'$ depend only on $t$. Note that all higher order estimates
for the solutions $\hat \varphi_{t,\delta}$ depend only on $\delta$ and their $C^0$-norm. Thus by using the continuity method as in the proof of Theorem 2.5 in [Ti12],
(\ref{twisted-MA-equation}), and so (\ref{smooth-continuity-modified}), are solvable for any $t\in (0,\mu), \delta\in (0,\delta_0]$.
Next we improve the higher order estimates for the solutions $\hat \varphi_{t,\delta}$ to show that they are independent of $\delta>0$. Let us introduce a family of smooth K\"ahler potentials $\Phi^\beta_\delta$ ($\delta >0$) constructed by
Guenancia-Paun in [GP13]. Such $\Phi^\beta_\delta$ have the following properties:

1) $\Phi^\beta_\delta$ converge to $\Phi^\beta_0=k|S|^{2\beta}$ as $\delta\to 0$ in the sense of the H\"older norm.

2) Let
$$h_{\kappa_\delta}=-\log\frac{\kappa_\delta^n}{\omega_0^n} -\Phi^\beta_\delta+h_0$$
be the Ricci potentials of $\kappa_\delta=\omega_0+ \sqrt{-1}\partial\bar\partial \Phi^\beta_\delta$. Then
$\frac{1}{\delta+|S|^{2\beta}} e^{h_{\kappa_\delta}}$ is uniformly bounded in $\delta$.

3) The holomorphic bisectional curvatures $R_{\delta i\bar ij\bar j}$ of $\kappa_\delta$ satisfy: for any K\"ahler metric
$\omega_{\phi+ \Phi^\beta_\delta}=\kappa_\delta+\sqrt{-1}
\partial\bar\partial\phi$, it holds that
\begin{align}\label{gp-inequality} &\sum_{i<j} (\frac{1+\phi_{ i\bar i}} {1+\phi_{ j\bar j}} +\frac{1+\phi_{ j\bar j}} {1+\phi_{ i\bar i}} -2)R_{\delta i\bar ij\bar j} -C_0{\rm tr}_{\kappa_\delta}(\omega_{\phi+ \Phi^\beta_\delta})
{\Delta}_{ \omega_{\phi+ \Phi^\beta_\delta}} \Phi^\beta_\delta\notag\\
&+ {\Delta}_{ \kappa_{\delta}} \log(\frac{\kappa_\delta^n}{\omega_0^n}\times (\delta+|S|^2)^{1-\beta})\notag\\
&\le C \sum_{i<j} (\frac{1+\phi_{ i\bar i}} {1+\phi_{ j\bar j}} +\frac{1+\phi_{j\bar j}}{1+\phi_{ i\bar i}}) +C {\rm tr}_{\kappa_\delta}(\omega_{\phi+ \Phi^\beta_\delta })\times {\rm tr}_{\omega_{\phi+ \Phi^\beta_\delta}}(\kappa_\delta)+C,
\end{align}
where $C_0$ and $C$ are two uniform constants.

The following lemma gives a uniform a priori $C^2$-estimate for $\varphi= \varphi_{t,\delta}$.
\begin{lem}\label{c2-estimate-twisted-metrics-lemma} For any $t\in (0,\mu),\delta\in(0,\delta_0]$, it holds that
\begin{align}\label{c2-estimate-twisted-metrics}
C^{-1} \kappa_\delta\le \omega^\delta_{\varphi_{t,\delta}} \le C \kappa_\delta.
\end{align}
Here $C$ is a uniform constant which depends only on the metric $\hat\omega$ and $t$.
\end{lem}
\begin{proof} Let $\bar\varphi=\bar\varphi_\delta=\hat\varphi_\delta-\Phi^{\beta}_\delta$. Then (\ref{twisted-MA-equation}) is equivalent to
\begin{equation}\label{twisted-MA-equation-3}
(\kappa_\delta+\sqrt{-1}\partial\bar\partial\bar\varphi)^n=\frac{1}{(\delta+|S|_0^2)^{1-\beta}} e^{h_{\kappa_\delta}-t\bar\varphi -(\mu-t)\Psi_{\delta} -(1-t)\Phi^\beta_\delta }\kappa_\delta^n,~~t\in (0,\mu_0).
\end{equation}
Following Yau's $C^2$-estimate in [Yau78], we have
\begin{align}\label{c2-yau} & -\Delta_{\omega_{\varphi_\delta}^\delta} \log {\rm tr}_{\kappa_\delta} ( \omega^\delta_{\varphi_{\delta}} )\notag\\
&\le\frac{1}{{\rm tr}_{\kappa_\delta} ( \omega^\delta_{\varphi_{\delta}} ) }\sum_{i<j} (\frac{1+\bar\varphi_{ i\bar i}} {1+\bar\varphi_{ j\bar j}} +\frac{1+\bar\varphi_{ j\bar j}}{1+\bar\varphi_{ i\bar i}} -2)R_{\delta i\bar ij\bar j} \notag\\
&+\frac{1}{{\rm tr}_{\kappa_\delta} ( \omega^\delta_{\varphi_{\delta}} )} \Delta_{ \kappa_{\delta}}( t\bar\varphi +(\mu-t)\Psi_{\delta} +(1-t)\Phi^\beta_\delta - \bar h_{\kappa_\delta} ).
\end{align}
On the other hand, by
$$B\kappa_\delta + \sqrt{-1}\partial\bar\partial \Psi_{\delta}\ge 0,$$
it is easy to see that
\begin{align}\Delta_{ \omega^\delta_{\varphi_{\delta}} } \Psi_{\delta}\ge \frac{ \Delta_{ \kappa_{\delta}} \Psi_{\delta} } {{\rm tr}_{\kappa_\delta} ( \omega^\delta_{\varphi_{\delta}} )}
-nB {\rm tr}_{ \omega^\delta_{\varphi_{\delta}}}(\kappa_\delta).\notag
\end{align}
Using the fact that
\begin{align}\label{laplace-h}\Delta_{ \kappa_{\delta}} h_0 \ge -A,
\end{align}
we get
\begin{align} &\frac{1}{{\rm tr}_{\kappa_\delta} ( \omega^\delta_{\varphi_{\delta}} )} \Delta_{ \kappa_{\delta}}( t\bar\varphi +(\mu-t)\Psi_{\delta} +(1-t)\Phi^\beta_\delta - \bar h_{\kappa_\delta} ) -\Delta_{ \omega^\delta_{\varphi_{\delta}} } \Psi_{\delta} \notag\\
&\le \frac{1}{{\rm tr}_{\kappa_\delta} ( \omega^\delta_{\varphi_{\delta}})} \Delta_{ \kappa_{\delta}} \log(\frac{\kappa_\delta^n}{\omega_0^n}\times (\delta+|S|^2_0)^{1-\beta})
+\frac{n(1-t) +A}{{\rm tr}_{\kappa_\delta} ( \omega^\delta_{\varphi_{\delta}})}
+t +nB{\rm tr}_{ \omega^\delta_{\varphi_{\delta}}}(\kappa_\delta).
\notag
\end{align}
Thus by the Guenancia-Paun inequality (\ref{gp-inequality}) applied to the metrics $\omega^\delta_{\varphi_{\delta}}$, we deduce from (\ref{c2-yau}) that
\begin{align}& - \Delta_{ \omega^\delta_{\varphi_{\delta}}} (\log {\rm tr}_{\kappa_\delta} ( \omega^\delta_{\varphi_{\delta}}) +C_0 \Phi^\beta_\delta - \Psi_{\delta} ) \notag\\
&\le C' {\rm tr}_{ \omega^\delta_{\varphi_{\delta}}}(\kappa_\delta)+C'.\notag
\end{align}
Let
$$u=\log{\rm tr}_{\kappa_\delta}( \omega^\delta_{\varphi_{\delta}})+C_0 \Phi^\beta_\delta - \Psi_{\delta} -(C'+1)\bar\varphi_\delta.$$
Then
$$\Delta_{ \omega^\delta_{\varphi_{\delta}}} u\ge {\rm tr}_{ \omega^\delta_{\varphi_{\delta}}} (\kappa_\delta)-C''.$$
By the maximum principle, it follows that
\begin{align}\label{upper-bound-phi-delta} \omega^\delta_{\varphi_{\delta}}=\kappa_{\delta} +\sqrt{-1}\partial\bar\partial\bar\varphi_\delta\ge C^{-1}\kappa_\delta.
\end{align}
By (\ref{twisted-MA-equation-3}), we can also get
\begin{align}\label{lower-bound-phi-delta} \omega^\delta_{\varphi_{\delta}}\le C\kappa_\delta.
\end{align}
\end{proof}
\baregin{theo}\lambdaambdabel{phi-delta-convergence} For any $t\sqrt{-1}n (0,\mu)$, it holds
\baregin{align}
\lambdaim_\deltaelta \varepsilonphi_{t, \deltaelta}=\varepsilonphi_t\nuotag
\e_2and{align}
in sense of H\"older-norm.
\e_2and{theo}
\begin{proof} First we claim that $\varphi_{t, \delta}$ converges to a $C^{2,\alpha;\beta}$-solution of (\ref{backward_MAE}) as $\delta\to 0$.
In fact, by Kolodziej's H\"older estimate, we see that $\hat \varphi_{t,\delta}$ converges to a H\"older continuous solution $\phi'$ of the following complex Monge-Amp\`ere equation in the sense of currents,
\begin{equation}\label{smooth-continuity-2}
(\omega_0+\sqrt{-1}\partial\bar\partial\phi')^n= \frac{1}{|S|^{2-2\beta}}e^{h_0- t \phi' +( t-\mu)\psi}\omega_0^n.
\end{equation}
Clearly, (\ref{smooth-continuity-2}) is nothing but (\ref{backward_MAE}). Since $\omega^*=\omega_0+\sqrt{-1}\partial\bar\partial\Phi_0^\beta$ is
equivalent to $\hat\omega$, by Lemma \ref{c2-estimate-twisted-metrics-lemma}, we get
\begin{align}\label{c2-limit-phi-delta-t}
C^{-1} \hat\omega\le \omega_{\phi'} \le C \hat\omega, ~{\rm in}~ M\setminus D,
\end{align}
where $C$ is a uniform constant. Note that (\ref{smooth-continuity-2}) implies that $\phi'- \psi$ is a solution of (\ref{backward_MAE}). Thus by the $C^{2,\alpha;\beta}$ regularity theorem,
$\phi'- \psi$ is a $C^{2,\alpha;\beta}$-solution of (\ref{backward_MAE}). This proves the claim.
On the other hand, according to the proof of Theorem \ref{Bern_thm}, it is easy to see that the $C^{2,\alpha;\beta}$-solution of (\ref{backward_MAE}),
as a twisted K\"ahler-Einstein metric, is unique. Thus $\phi'- \psi$ must be $\varphi_{ t}$. The theorem is proved.
\end{proof}
\section{Smoothing of twisted Ricci potentials}
Define the log Ricci potential $h_t$ of $\hat\omega_{\varphi_t}$, for the solution $\varphi_t$ of (\ref{backward_MAE}) at $t$, by
$$\sqrt{-1}\partial\bar\partial h_t={\rm Ric}(\hat\omega_{\varphi_{t}}) - \mu\hat\omega_{\varphi_t}- 2\pi (1-\beta)[D], ~\int_M e^{ h_t} \hat\omega_{\varphi_{t}}^n=V, $$
and define the twisted Ricci potential of $ \omega^\delta_{\varphi_{t, \delta}}= \omega_\delta+\sqrt{-1}\partial\bar\partial \varphi_{t,\delta} $, for the solution $\varphi_{t,\delta}$ of (\ref{smooth-continuity-modified}) at $t$, by
$$\sqrt{-1}\partial\bar\partial h_{t,\delta}= {\rm Ric}(\omega_{\varphi_{t, \delta}}^\delta) -\mu\omega_{\varphi_{t, \delta}}^\delta-(1-\beta)\eta_\delta, ~\int_M e^{ h_{t,\delta}} ( \omega^\delta_{\varphi_{t,\delta}})^n=V.$$
Then it is easy to see that
$$ h_t=-(\mu-t)\varphi_t+ {\rm const}, $$
and
$$ h_{t, \delta}=-(\mu-t)\varphi_{t, \delta} + {\rm const}.$$
Thus by Theorem \ref{phi-delta-convergence}, we have
\begin{lem}For any $t\in (0,\mu)$, it holds that
\begin{align}
\lim_{\delta\to 0} h_{t,\delta}=h_t\notag
\end{align}
in the sense of the H\"older norm.
\end{lem}
To smooth $ h_{t,\delta}$ for each fixed $\delta\in (0,\delta_0)$, we introduce the following twisted K\"ahler-Ricci flow,
\begin{align}\label{twisted-K-R-flow}
&\frac{\partial}{\partial s} \omega_{\psi}^\delta= -{\rm Ric} (\omega_{\psi}^\delta ) +\mu\omega_{\psi}^\delta+(1-\beta)\eta_\delta,\notag\\
& \omega_{\psi}^\delta(\cdot, 0)=\omega^\delta_{\psi_{0, \delta}}=\omega^\delta_{\psi_{s, \delta}}|_{s= 0}=\omega_{\varphi_{t,\delta}}^\delta.
\end{align}
Clearly, the twisted Ricci potential $h_{t,s, \delta}$ of $\omega_{\psi_{s,\delta}}^\delta$ is given by
$$ h_{t, s, \delta}=- \frac{\partial}{\partial s} \psi_{s,\delta}+ {\rm const}.$$
In particular,
$$ h_{t, \delta}=- \frac{\partial}{\partial s} \psi_{s,\delta}|_{s=0}+ {\rm const}.$$
The flow (\ref{twisted-K-R-flow}) reduces to a complex Monge-Amp\`ere flow,
\begin{align}\label{smooth-flow-MA-0}
& \frac{\partial}{\partial s} \tilde \psi =\log \frac{( \omega_{\varphi_{t, \delta}+\tilde\psi}^\delta)^n}{(\omega_{\varphi_{t, \delta}}^\delta)^n}+\mu\tilde \psi - h_{t,\delta},\notag\\
& \tilde \psi_{0,\delta}=\tilde\psi_{\delta}(0,\cdot)= 0.
\end{align}
Here $h_{t,\delta}$ can be normalized so that $ h_{t, \delta}=-(\mu-t)\varphi_{t, \delta}$. Then $ h_{t, \delta}=- \frac{\partial}{\partial s} \tilde\psi_{s,\delta}|_{s=0}$.
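Indeed, evaluating the first equation of (\ref{smooth-flow-MA-0}) at $s=0$, where $\tilde\psi_{0,\delta}=0$, gives
\begin{align}
\frac{\partial}{\partial s} \tilde\psi_{s,\delta}\Big|_{s=0}=\log \frac{( \omega_{\varphi_{t, \delta}}^\delta)^n}{(\omega_{\varphi_{t, \delta}}^\delta)^n}+\mu\cdot 0 - h_{t,\delta}=-h_{t,\delta}.\notag
\end{align}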
Similarly to K\"ahler-Ricci flow in [Ti97], applying the maximum principle to (\ref{smooth-flow-MA-0}), we have following estimates (also see [LZ14]).
\baregin{lem}\lambdaambdabel{gradient-laplace-flow}
\baregin{align} &1)~| {\mathfrak f}rac{\partialartialrtial}{\partialartialrtial s} \tauilde\partialsi_{s,\deltaelta}|^2+s|\nuablabla '{\mathfrak f}rac{\partialartialrtial}{\partialartialrtial s} \tauilde\partialsi_{s,\deltaelta}|^2\lambdae e^{2\mu s} (\mu-t)^2 \|\varepsilonphi_{t,\deltaelta}\|_{C^0(M)}^2.\nuotag\\
&2) ~{\mathfrak D}elta' (- {\mathfrak f}rac{\partialartialrtial}{\partialartialrtial s} \tauilde\partialsi_{s,\deltaelta})\mathfrak ge e^{\mu s} {\mathfrak D}elta h_{t,\deltaelta}.\nuotag
\e_2and{align}
Here ${\mathfrak D}elta'$, $ {\mathfrak D}elta$ are Laplace operators associated to metrics $\Omegaegaegamega_{\partialsi_{s, \deltaelta}}^\deltaelta, \Omegaegaegamega_{\varepsilonphi_{t,\deltaelta}}^\deltaelta$, respectively.
\e_2and{lem}
\begin{lem}\label{holder-estimate-perturbation-flow}Let $v=v_{t, s,\delta}$ be the normalization of $h_{t,s,\delta}$ obtained by adding a suitable constant such that
$$\int_M v(\omega_{\psi_{s,\delta }}^\delta)^n=0.$$
Let $\gamma=\frac{1}{8+8n}$. Then there exists a
small number $\epsilon>0$ such that for any $t$ and $\varphi_{t,\delta}$
satisfying
\begin{align}\label{t-condition-phi}(\mu-t)^{1+
\frac{\gamma}{2}}\|\varphi_{t,\delta}\|_{C^0(M)}\le\epsilon,\end{align}
we have
\begin{align}\label{holder-estimate-h-delta}\|v\|_{C^{\frac{1}{2}}(M)} \le
C\epsilon (\mu-t)^{\frac{\gamma}{2}}
\end{align}
and
\begin{align}\label{c0-perturbation}{\rm osc}_M \tilde\psi_{s,\delta}\le C\epsilon(\mu -t)^{\frac{\gamma}{2}}, ~\forall~ s\in [(\mu -t)^{2\gamma}, 1],
\end{align}
provided that, for any $s\in [(\mu -t)^{2\gamma}, 1]$, the first non-zero eigenvalue $\lambda_1$
of the Laplace operator $\Delta'$ associated to the
metric $\omega_{\psi_{s,\delta}}^\delta$ satisfies
\begin{align}\label{large-eigenvalue}\lambda_1\ge \lambda_0>0
\end{align}
and the following condition holds: there exists a
constant $a>0$ such that for any $x_0\in M$ and $0<r<1$,
\begin{align}\label{volume-condition}{\rm vol} (B_r(x_0)) \ge ar^{2n}
\end{align}
with respect to $\omega_{\psi_{s,\delta}}^\delta$. Here $C=C(
a,\lambda_0)$ denotes a uniform constant depending only on the
constants $ a$ and $\lambda_0$.
\end{lem}
\begin{proof} Lemma \ref{holder-estimate-perturbation-flow} can be proved by following the argument of the smoothing lemma in [Ti97] (see also Proposition 4.1 in [CTZ05]). In fact, under the conditions (\ref{large-eigenvalue})
and (\ref{volume-condition}), using the estimates 1) and 2) in Lemma \ref{gradient-laplace-flow}, we get
$$|v|
\le C( a,\lambda_0)
(1+\|h_{t,\delta}\|_{C^0(M)})(\mu -t)^{\frac
{3}{8(n+1)}},~\forall~
s\in[(\mu -t)^{2\gamma}, 1]. $$
On the other hand, again by the estimate 1) in Lemma \ref{gradient-laplace-flow}, we have
\begin{align} \|\nabla' v\|^2_{C^0(M)} \leq \frac{1}{s}
e^2\|h_{t,\delta}\|^2_{C^0(M)},\notag
~\forall ~s>0.
\end{align}
Combining these two relations, we derive
\begin{align}
\|v_s\|_{C^{\frac{1}{2}}(M)}
&\le C( a,
\lambda_0)(1+\|h_{t,\delta}\|_{C^0(M)})(\mu -t)^{\frac
{1}{8(n+1)}}\notag\\
&\le C [\epsilon+\|h_{t,\delta}\|_{C^0(M)} (\mu -t)^{\frac
{\gamma}{2}}] (\mu -t)^{\frac
{\gamma}{2}} ,~\forall~
s\in[(\mu -t)^{2\gamma}, 1]\notag.
\end{align}
Note that
$$-(\mu-t)\varphi_{t,\delta}=h_{t,\delta}.$$
Thus under the assumption (\ref{t-condition-phi}), we get (\ref{holder-estimate-h-delta}) immediately.
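Explicitly, the normalization above gives $\|h_{t,\delta}\|_{C^0(M)}=(\mu-t)\|\varphi_{t,\delta}\|_{C^0(M)}$, so that by (\ref{t-condition-phi}),
\begin{align}
\|h_{t,\delta}\|_{C^0(M)}(\mu -t)^{\frac{\gamma}{2}}=(\mu-t)^{1+\frac{\gamma}{2}}\|\varphi_{t,\delta}\|_{C^0(M)}\le \epsilon,\notag
\end{align}
and the right-hand side of the last estimate is bounded by $2C\epsilon(\mu -t)^{\frac{\gamma}{2}}$.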
By the estimate 1) in Lemma \ref{gradient-laplace-flow}, we have
\begin{align}| \frac{\partial
}{\partial s}
\tilde\psi|\le
e\|h_{t,\delta}\|_{C^0(M)},~\forall~s\le
1.\notag
\end{align}
Note that
$$\tilde\psi=(\int_0^{(\mu-t)^{2\gamma}}+\int_{(\mu-t)^{2\gamma}}^s)(\frac{\partial }{\partial s}
\tilde\psi).$$
Then by the assumption (\ref{t-condition-phi}), we obtain
\begin{align}\label{c0-perturbation-2} {\rm osc}_M \tilde\psi &\le (\mu-t)^{2\gamma} \sup_{s\in [0,1]} \|\frac{\partial }{\partial s}
\tilde\psi \|_{C^0(M)}\notag\\
& + \sup_{s\in [(\mu -t)^{2\gamma} ,1]} \| \frac{\partial }{\partial s}
\tilde\psi \|_{C^0(M)}\notag\\
&\le C\epsilon (\mu -t)^{\frac{\gamma}{2}}.
\end{align}
\end{proof}
\section{Convergence of twisted K\"ahler-Ricci flows}
In this section, we deal with the local convergence of the flows (\ref{twisted-K-R-flow}). First, similarly to Lemma \ref{c2-estimate-twisted-metrics-lemma}, we have
\begin{lem}\label{c2-parabolic-twisted-KR} For any $t\in (0,\mu)$ and $\delta\in(0,\delta_0]$, it holds that
\begin{align}\label{c2-parabolic-estimate-twisted-metrics}
C^{-1} \kappa_\delta\le \omega^\delta_{\psi_{s,\delta}} \le C \kappa_\delta.
\end{align}
Here $C$ is a uniform constant depending only on the metrics $\hat\omega, \omega^*$ and the norms $ \|\psi_{s, \delta}\|_{C^0(M)}$, $\|\frac{\partial}{\partial s}\psi_{s,\delta}\|_{C^0(M)}.$
\end{lem}
\begin{proof}
Let $\bar\psi_\delta=\bar\psi_{s, \delta}=\psi_{s, \delta}+ \Psi_{\delta}-\Phi_{\delta}^\beta$. Then by (\ref{smooth-flow-MA-0}), $\bar\psi=\bar\psi_\delta$ satisfies the following complex Monge-Amp\`ere flow,
\begin{align}\label{smooth-flow-MA}
& \frac{\partial}{\partial s} \bar \psi =\log \frac{(\kappa_\delta+\sqrt{-1}\partial\bar\partial\bar\psi)^n}{\kappa_\delta^n}+\mu\bar \psi - \bar h_{\kappa_\delta},\notag\\
& \bar \psi_{0,\delta}=\bar\psi_{\delta}(0,\cdot)= \varphi_{t,\delta}+\Psi_{\delta} -\Phi_{\delta}^\beta.
\end{align}
Following the estimate (\ref{c2-yau}), for the parabolic equation (\ref{smooth-flow-MA}) we get Yau's $C^2$-estimate,
\begin{align} &(\frac{\partial}{\partial s} -\Delta_{ \omega_{\psi_\delta}^\delta}) \log {\rm tr}_{\kappa_\delta} (\omega_{\psi_\delta}^\delta)\notag\\
&\le\frac{1}{{\rm tr}_{\kappa_\delta} (\omega_{\psi_\delta}^\delta)}\sum_{i<j} (\frac{1+\bar\psi_{ i\bar i}} {1+\bar\psi_{ j\bar j}} +\frac{1+\bar\psi_{ j\bar j}}{1+\bar\psi_{ i\bar i}} -2)R_{\delta i\bar ij\bar j} +\frac{1}{{\rm tr}_{\kappa_\delta} (\omega_{\psi_\delta}^\delta)} \Delta_{ \kappa_{\delta}}(\mu\bar \psi_\delta - \bar h_{\kappa_\delta} ).\notag
\end{align}
On the other hand, by (\ref{laplace-h}), we have
\begin{align} &\frac{1}{{\rm tr}_{\kappa_\delta} (\omega_{\psi_\delta}^\delta)} \Delta_{ \kappa_{\delta}}(\mu\bar \psi_\delta - \bar h_{\kappa_\delta} )\notag\\
&\le \frac{1}{{\rm tr}_{\kappa_\delta} (\omega_{\psi_\delta}^\delta)} \Delta_{ \kappa_{\delta}} \log(\frac{\kappa_\delta^n}{\omega_0^n}\times (\delta+|S|^2_0)^{1-\beta}) +\frac{A}{{\rm tr}_{\kappa_\delta} (\omega_{\psi_\delta}^\delta)}
+\mu.
\notag
\end{align}
By the Guenancia-Paun inequality (\ref{gp-inequality}), it follows that
\begin{align}&(\frac{\partial}{\partial s} - \Delta_{ \omega_{\psi_\delta}^\delta}) (\log {\rm tr}_{\kappa_\delta} (\omega_{\psi_\delta}^\delta) +C_0 \Phi^\beta_\delta) \notag\\
&\le C_0' {\rm tr}_{\omega_{\psi_\delta}^\delta}(\kappa_\delta)+C_0', \notag
\end{align}
where $C_0$ and $C_0'$ are two uniform constants depending only on the metrics $\hat\omega$ and $\omega^*$. Hence, by choosing a large number $B$, we deduce
\begin{align}&(\frac{\partial}{\partial s} - \Delta_{ \omega_{\psi_\delta}^\delta}) (\log {\rm tr}_{\kappa_\delta} (\omega_{\psi_\delta}^\delta) +C_0 \Phi^\beta_\delta -B\bar \psi_\delta ) \notag\\
&\le - {\rm tr}_{\omega_{\psi_\delta}^\delta}(\kappa_\delta)+C_0''. \notag
\end{align}
Now we can apply the maximum principle to see that there exists a uniform constant $C$, which depends only on $\hat\omega, \omega^*, \|\psi_{s,\delta}\|_{C^0(M)}$ and $\|\frac{\partial}{\partial s}\psi_{s,\delta}\|_{C^0(M)}$, such that
\begin{align}\omega_{\psi_{s,\delta}}^\delta=\kappa_\delta+ \sqrt{-1}\partial\bar\partial \bar\psi_{s,\delta} \ge C^{-1}\kappa_\delta.\notag
\end{align}
By (\ref{smooth-flow-MA}), we also obtain
\begin{align}\kappa_\delta \ge C'^{-1} \omega_{\psi_{s, \delta}}^\delta,\notag
\end{align}
where $C'=C'( \hat\omega, \omega^*, \|\psi_{s,\delta}\|_{C^0(M)}, \|\frac{\partial}{\partial s}\psi_{s,\delta}\|_{C^0(M)}).$
\end{proof}
\begin{theo}\label{perturbation-limit-from-flow} For any $s\in (0,1]$, $\omega_{\psi_{s,\delta}}^\delta$ converges, as $\delta\to 0$, to a conic K\"ahler metric $\hat\omega_{\tilde\phi_{t,s}}=\hat\omega+\sqrt{-1}\partial\bar\partial \tilde \phi_{t,s}$
in the sense of $C^{2,\alpha;\beta}$ K\"ahler potentials.
\end{theo}
\begin{proof} By the estimate 1) in Lemma \ref{gradient-laplace-flow}, we have
\begin{align}\label{c0-psi-s} \|\psi_{s, \delta}\|_{C^0(M)}, \|\frac{\partial}{\partial s}\psi_{s,\delta}\|_{C^0(M)} \le e(\mu-t) \|\varphi_{t,\delta}\|_{C^0(M)}.
\end{align}
Then by Lemma \ref{c2-parabolic-twisted-KR}, we see that there exists a uniform constant $C$, which depends only on $\hat\omega$ and $\varphi_t$, such that
$$C^{-1} \kappa_\delta\le \omega_{\psi_{s, \delta}}^\delta \le C \kappa_\delta,~\forall ~\delta\in (0,\delta_0].$$
Thus the Sobolev constant associated to $\omega_{\psi_{s, \delta}}^\delta$ is uniformly bounded from above, just as for the metric $\kappa_\delta$ (cf. [LZ14]).
Differentiating (\ref{smooth-flow-MA}) in $s$, we have
$$(\frac{\partial}{\partial s} -\Delta_{ \omega_{\psi_{\delta}}^\delta }) \dot\psi_\delta=\mu\dot\psi_\delta,$$
where $\dot\psi_\delta=\dot\psi_{s,\delta}=\frac{\partial }{\partial s} \psi_{s,\delta}$. Hence the standard Moser iteration method for parabolic equations implies
that there exist a positive number $\alpha$ and a uniform constant $C$ such that
$$\sup_{x,y\in M} \frac{ |\dot \psi_{\delta}(x)-\dot\psi_{\delta}(y)|} {|x-y|_{\kappa_{\delta}}^\alpha}\le C.$$
As a consequence, $\dot \psi_{\delta}$ converges to a H\"older continuous function $f$ with respect to the metric $\omega$ as $\delta\to 0$. Namely,
$$\sup_{x,y\in M} \frac{ |f(x)-f(y)|} {|x-y|_{\omega}^\alpha}\le C.$$
On the other hand, by Kolodziej's H\"older estimate, the $ \psi_{\delta}$ are uniformly H\"older continuous, so they converge to
a H\"older continuous function $\tilde\phi_t=\tilde\phi_{t,s}$ as $\delta\to 0$. Moreover, $\tilde\phi_t$ is a solution, in the sense of currents, of the following complex Monge-Amp\`ere equation,
\begin{equation}\label{limit-phi-delta-t-flow}
(\hat\omega+\sqrt{-1}\partial\bar\partial\phi)^n=e^{f+h_{\hat\omega}-\mu\phi}\hat\omega^n.
\end{equation}
By the regularity theorem in [JMR11], it follows that $\tilde\phi_t$ is a $C^{2,\alpha';\beta}$-solution. Hence $\hat\omega_{\tilde\phi_t}$ is a conic K\"ahler metric.
\end{proof}
\begin{lem}\label{small-c0-puterbation-limit}For any $t$ and $\varphi_t$
satisfying
\begin{align}\label{t-condition}(\mu-t)^{1+
\frac{\gamma}{2}}\|\varphi_{t}\|_{C^0(M)}\le\epsilon,
\end{align}
it holds that
\begin{align}\label{c0-perturbation-2-limit} {\rm osc }_M (\tilde\phi_{t,s}-\varphi_t)\le C\epsilon(\mu -t)^{\frac{\gamma}{2}}, ~\forall ~s\in (0,1].
\end{align}
\end{lem}
\begin{proof} In (\ref{c0-perturbation-2}), we in fact proved
$$ {\rm osc }_M \tilde\psi_{s,\delta}\le C\epsilon(\mu -t)^{\frac{\gamma}{2}}, ~\forall ~s\in (0,1].$$
Then (\ref{c0-perturbation-2-limit}) follows immediately from Theorem \ref{phi-delta-convergence} and Theorem \ref{perturbation-limit-from-flow} by taking $\delta\to 0$.
\end{proof}
\begin{lem}\label{holder-estimate-perturbation}Let $\tilde v=\tilde v_{t,s}$ be the normalization of $h_{\tilde\phi_{t,s}}$ obtained by adding a suitable constant such that
$$\int_M \tilde v (\hat\omega_{\tilde \phi_{t,s }})^n=0.$$
Then for any $t$ and $\varphi_t$
satisfying (\ref{t-condition}),
it holds that
\begin{align}\label{l2-v}
\|\tilde v_{t,s}\|_{C^{\frac{1}{2}}(M)} \le
C\epsilon (\mu-t)^{\frac{\gamma}{2}}, ~\forall ~s\in [(\mu -t)^{2\gamma}, 1].
\end{align}
\end{lem}
\begin{proof} We claim that for the metric $\hat\omega_{ \tilde\phi_{t,s}}$, it holds that
\begin{align}\label{metric-equivalence-t}
\frac{1}{2} \hat\omega\le \hat\omega_{\tilde\phi_{t,s}}\le 2\hat\omega.
\end{align}
By the above claim together with Theorem \ref{perturbation-limit-from-flow}, we see that there exists a small $\delta_0$ such that the conditions (\ref{large-eigenvalue}) and (\ref{volume-condition}) in Lemma \ref{holder-estimate-perturbation-flow}
are satisfied for the metrics $\omega_{\psi_{s,\delta}}^\delta$ with $\delta\in(0,\delta_0]$. Note that (\ref{t-condition-phi}) also holds by Theorem \ref{phi-delta-convergence}.
Then by Lemma \ref{c2-parabolic-twisted-KR}, it follows that (\ref{holder-estimate-h-delta}) holds for $v=v_{t,s,\delta}$ with $\delta\in (0, \delta_0]$. Taking the limit of $v_{t,s,\delta}$ as $\delta\to 0$, we get (\ref{l2-v}).
We prove (\ref{metric-equivalence-t}) by contradiction. If (\ref{metric-equivalence-t}) is not true, then there exists a $\psi\in\mathscr{H}^{2,\alpha;\beta}(M,\omega_0)$ such that
the solution $\varphi_t$ of (\ref{backward_MAE}) at $t$ satisfies (\ref{t-condition}) and the $C^{2,\alpha;\beta}$-norm of the K\"ahler potential $\tilde\phi_{t,s}$ in Theorem \ref{perturbation-limit-from-flow} satisfies
\begin{align}\label{big-estimate}\|\tilde\phi_{t,s}\|_{C^{2,\alpha;\beta}(M)}\ge A_0,
\end{align}
where $A_0$ is a positive number.
On the other hand, from the proof of Theorem \ref{perturbation-limit-from-flow}, the $C^{2,\alpha;\beta}$-norm of $\tilde\phi_{t,s}$ depends continuously on $\varphi_t$. Thus we may also assume that
$$\|\tilde\phi_{t,s}\|_{C^{2,\alpha;\beta}(M)}\le 2A_0$$
and
\begin{align}\label{metric-equivalence-t-2}
\frac{1}{4} \hat\omega\le \hat\omega_{\tilde\phi_{t,s}}\le 4\hat\omega.
\end{align}
Once (\ref{metric-equivalence-t-2}) holds, we can use the above argument again to
conclude that there exists a small $\delta_0$ such that (\ref{holder-estimate-h-delta}) holds for $v=v_{t,s,\delta}$ with $\delta\in (0, \delta_0]$. Taking the limit of $v_{t,s,\delta}$ as $\delta\to 0$, we get (\ref{l2-v})
for $\hat\omega_{\tilde\phi_{t,s}}$.
Applying the Implicit Function Theorem to (\ref{limit-phi-delta-t-flow}), we obtain
\begin{align}\|\tilde\phi_{t,s}\|_{C^{2,\alpha;\beta}(M)}\le C(\epsilon)\to 0,~{\rm as}~\epsilon\to 0.\notag
\end{align}
But this is impossible by (\ref{big-estimate}). The claim is proved.
\end{proof}
\section{Properness of $F_{\omega_0,\mu}(\cdot)$}
Using the estimates in the last section, we can improve Theorem \ref{Bern_thm} to
\begin{theo}\label{tian-zhu-conic-ke-metric-weak} Suppose that there exists a conic K\"ahler-Einstein metric $\omega=\omega_{CKE}$ on $M$ along
$D$ with cone angle $2\pi\beta\in (0,2\pi).$ Then there exist two uniform constants $\delta$ and $C$ such that
\begin{align}\label{nonlinear-ms}F_{\omega_0,\mu}(\psi)\ge \delta I(\psi)^{\frac{1}{8n+9}} -C,~\forall~\psi\in \mathscr{H}^{2,\alpha; \beta}(M,\omega_0).
\end{align}
\end{theo}
\begin{proof}
First, by the first relation in (\ref{integral-dt}), we get the identity
$$F_{\omega,\mu}(\psi-\phi)=F_{\omega,\mu}(-\varphi_\mu)=\frac{1}{\mu}\int_0^\mu(I-J)_{\hat{\omega}}(\varphi_s)ds.$$
Then as in [TZ00] and [CTZ05], we obtain
\begin{align}\label{f-inequiality}&F_{\omega,\mu}(\psi-\phi)\notag\\
&\ge \frac{1}{\mu}(\mu-t)(I-J)_{\hat\omega}(\varphi_t)\notag\\
&\ge \frac{1}{n\mu}(\mu-t)J_{\hat\omega}(\varphi_t)\notag\\
&\ge \frac{1}{n\mu}(\mu-t)J_{\hat\omega}(\varphi_\mu) - \frac{1}{n\mu}(\mu-t) {\rm osc}_M(\varphi_t-\varphi_\mu)\notag\\
&\ge \frac{1}{n\mu(n+1)}(\mu-t)I_{\hat\omega}(\varphi_\mu) - \frac{1}{n\mu}(\mu-t) {\rm osc}_M(\varphi_t-\varphi_\mu).
\end{align}
Next, for a small $\epsilon$, we choose a $t$
such that
\begin{align}\label{special-condition} (\mu-t)^{1+\frac{\gamma}{2}}\|\varphi_t\|_{C^0(M)}=\epsilon.
\end{align}
Without loss of generality, we may assume that such a $t$ exists;
otherwise $\|\varphi_t\|_{C^0(M)}$ is uniformly bounded and the situation is simpler.
Then by Theorem \ref{perturbation-limit-from-flow} and Lemma \ref{small-c0-puterbation-limit}, there exists a $C^{2,\alpha'; \beta}$ K\"ahler potential $\tilde\varphi_t$ such that
\begin{align}{\rm osc}_M(\varphi_t-\varphi_\mu)& \le {\rm osc}_M (\tilde\varphi_t -\varphi_\mu)+ {\rm osc}_M( \tilde \varphi_t -\varphi_t)\notag\\
&\le {\rm osc}_M (\tilde\varphi_t -\varphi_\mu)+ C\epsilon(\mu -t)^{\frac{\gamma}{2}}.
\notag
\end{align}
On the other hand, by Lemma \ref{holder-estimate-perturbation}, we can apply the Implicit Function Theorem to (\ref{limit-phi-delta-t-flow}) to get
$${\rm osc}_M (\tilde\varphi_t -\varphi_\mu)\le C(\epsilon)\to 0,~{\rm as}~ \epsilon\to 0.$$
Thus
\begin{align}\label{osc-phi-t-phi-mu}{\rm osc}_M(\varphi_t-\varphi_\mu)\le C(\epsilon).
\end{align}
Combining (\ref{f-inequiality}) and (\ref{osc-phi-t-phi-mu}), we have
\begin{align}\label{f-proper-1}F_{\omega,\mu}(\psi-\phi)\ge \frac{1}{n\mu(n+1)}
(\mu-t)I(\varphi_\mu)-C.
\end{align}
Note that
$$\|\varphi_t\|_{C^0(M)}\le {\rm osc}_M(\varphi_t).$$
Then by (\ref{osc-phi-t-phi-mu}), we get
\begin{align}\label{osc-phi-t}\|\varphi_t\|_{C^0(M)}\le {\rm osc}_M(\varphi_\mu)+1.
\end{align}
In a special case, we assume that the K\"ahler potential $\psi$ satisfies
\begin{align}\label{osc-condition}
{\rm osc}_M\psi\le I_{\omega_0}(\psi)+C_0,
\end{align}
where $C_0$ is a uniform constant.
Then by the relations (\ref{special-condition}) and (\ref{osc-phi-t}), a simple computation shows
\begin{align}\label{special-case-1} F_{\omega,\mu}(\psi-\phi)&\ge \delta
I_{\omega_0}(\varphi_\mu)^{\frac{1}{8n+9}}-C'\notag\\
&\ge \delta
I_{\omega_0}(\psi)^{\frac{1}{8n+9}}-C',
\end{align}
where $\delta, C'>0$ are two uniform constants which depend only on the choice of $\epsilon$ in (\ref{special-condition}). Using the cocycle condition (\ref{cocycle-condition}), we immediately derive
\begin{align}\label{special-case}F_{\omega_0,\mu}(\psi)\ge \delta
I_{\omega_0}(\psi)^{\frac{1}{8n+9}}-C''.
\end{align}
In the general case, we can use a trick from [TZ00] to derive (\ref{special-case}) for $\psi$. In fact, we can first apply (\ref{special-case}) to the solutions $\varphi_t$ with $t\ge\epsilon_0>0$ to get an estimate for
${\rm osc}_M(\varphi_t-\varphi_\mu)$, and then by the relation in (\ref{f-inequiality})
we obtain (\ref{special-case}) for $\psi$.
\end{proof}
\begin{proof}[End of proof of Theorem \ref{tian-zhu-conic-ke-metric}]Theorem \ref{tian-zhu-conic-ke-metric} is an improvement of Theorem \ref{tian-zhu-conic-ke-metric-weak}. By Lemma \ref{functional-smoothing},
it suffices to obtain the estimate (\ref{proper-inequality}) in Theorem \ref{tian-zhu-conic-ke-metric} for
K\"ahler potentials in $\mathscr{H}^{2,\alpha; \beta}(M,\omega_0)$.
It was observed by Phong-Song-Sturm-Weinkove that (\ref{proper-inequality}) can be deduced from (\ref{nonlinear-ms})
in the case of K\"ahler-Einstein metrics [PSSW08]. In fact, as in [TZ00], by (\ref{nonlinear-ms}) for the solutions $\varphi_t$ with $t\ge\epsilon_0>0$, they further show that there exists a $t_0$ with $\mu-t_0\ge \delta_0>0$ (where $\mu=1$)
for some uniform constant $\delta_0$ such that
$${\rm osc}_M(\varphi_{t_0}-\varphi_\mu)\le A,$$
where $A$ is a uniform constant which depends only on the K\"ahler-Einstein metric. We show that such a choice of $t_0$ can be made similarly in our case of a conic K\"ahler-Einstein metric $\omega_\phi$ as follows.
By the first relation in (\ref{integral-dt}) together with the equation (\ref{backward_MAE}), we have
\begin{align} &F_{\omega,\mu}(\varphi_t-\varphi_\mu)=F_{\hat\omega,\mu}(\varphi_t)-F_{\hat\omega,\mu}(\varphi_\mu)\notag\\
&= -\frac{1}{t}\int_0^t (I-J)_{\hat{\omega}}(\varphi_s)ds+\frac{1}{\mu}\int_0^{\mu} (I-J)_{\hat{\omega}}(\varphi_s)ds -\frac{1}{\mu} \log(\frac{1}{V}\int_M e^{ (t-\mu)\varphi_t}\hat\omega_{\varphi_t}^n)\notag\\
&\le-\frac{1}{t}\int_0^t (I-J)_{\hat{\omega}}(\varphi_s)ds+\frac{1}{\mu}\int_0^{\mu} (I-J)_{\hat{\omega}}(\varphi_s)ds+ \frac{\mu-t}{\mu V}\int_M \varphi_t\hat\omega_{\varphi_t}^n.\notag
\end{align}
Note that the first relation in (\ref{integral-dt}) is equivalent to
$$- \frac{1}{V} \int_M \varphi_t\hat\omega_{\varphi_t}^n= (I-J)_{\hat{\omega}}(\varphi_t)-\frac{1}{t} \int_0^t (I-J)_{\hat{\omega}}(\varphi_s)ds.$$
It follows that
\begin{align}\label{f-upper-bound} F_{\omega,\mu}(\varphi_t-\varphi_\mu)&\le \frac{1}{\mu} \int_t^{\mu} (I-J)_{\hat{\omega}}(\varphi_s)ds -\frac{\mu-t}{\mu} (I-J)_{\hat{\omega}}(\varphi_t) \notag\\
&\le \frac{\mu-t}{\mu} [(I-J)_{\hat{\omega}}(\varphi_\mu) - (I-J)_{\hat{\omega}}(\varphi_t)]\notag\\
&\le \frac{n(\mu-t)}{\mu}{\rm osc}_M(\varphi_t-\varphi_\mu).
\end{align}
On the other hand, by the Green formula in [JMR11], (\ref{osc-condition}) holds for $\varphi_t-\varphi_\mu$ whenever $t\ge \epsilon_0>0$,
since the Ricci curvature of $\hat\omega_{\varphi_t}=\omega+\sqrt{-1}\partial\bar\partial (\varphi_t-\varphi_\mu)$ is strictly positive.
Then applying (\ref{special-case-1}) to $\varphi_t-\varphi_\mu$, we see that there exist two constants $A_0, C>0$ such that
$$F_{\omega,\mu}(\varphi_t-\varphi_\mu)\ge A_0
I_\omega( \varphi_t-\varphi_\mu)^{\frac{1}{8n+9}}-C, ~\forall ~t\ge\epsilon_0.$$
Combining this with (\ref{f-upper-bound}), we derive
\begin{align}\label{i-functional-relation}
I_{\omega} ( \varphi_t-\varphi_\mu)^{\frac{1}{8n+9}} [A_0-\frac{n(\mu-t)}{\mu} I_{\omega} ( \varphi_t-\varphi_\mu)^{1-\frac{1}{8n+9}}]\le C'.
\end{align}
Case 1: For any $t\in [\frac{\mu}{2},\mu]$, it holds that
$$(\mu-t) I_{\omega} ( \varphi_{t}-\varphi_\mu)^{1-\frac{1}{8n+9}}<\frac{A_0\mu}{2n}.$$
Then we can choose $t=t_0=\frac{\mu}{2}$, so that
$ I_{\omega} ( \varphi_{t_0}-\varphi_\mu)$, and hence also ${\rm osc}_M( \varphi_{t_0}-\varphi_\mu)$, is uniformly bounded.
Thus by the relation in (\ref{f-inequiality}), we get
(\ref{proper-inequality}). The proof is finished.
Case 2: There exists a $t_0\in [\frac{\mu}{2},\mu]$ such that
\begin{align}\label{t-condition-2} (\mu-t_0) I_{\omega} ( \varphi_{t_0}-\varphi_\mu)^{1-\frac{1}{8n+9}}=\frac{A_0\mu}{2n}.
\end{align}
By the above choice of $t_0$, from (\ref{i-functional-relation}) it is easy to see that ${\rm osc}_M( \varphi_{t_0}-\varphi_\mu)$ is uniformly bounded.
Again by (\ref{t-condition-2}), we get $\mu-t_0\ge \delta_0>0$ for some uniform constant $\delta_0$. The theorem is proved.
There is another way to get (\ref{proper-inequality}), by using Donaldson's openness theorem, Theorem \ref{Do_openness} in the next section. This was observed in [LS12].
\end{proof}
\section{A new proof of Donaldson's openness theorem}
In this section, we apply Theorem \ref{tian-zhu-conic-ke-metric-weak} to prove the following openness theorem of Donaldson.
\begin{theo}\label{Do_openness}Let $D$ be a smooth divisor of a Fano manifold $M$ with $[D]\in \lambda c_1(M)$ for some $\lambda>0$, such that there is no non-zero holomorphic vector field
which is tangent to $D$ along $D$.
Suppose that there exists a conic K\"ahler-Einstein metric on $M$ with cone angle $2\pi\beta_0\in (0,2\pi)$ along $D$.
Then for any $\beta$ close to $\beta_0$ there exists a conic K\"ahler-Einstein metric with cone angle $2\pi\beta$.
\end{theo}
\begin{proof} Let $\mu_0= 1-\lambda(1-\beta_0)$. Then $F_{\omega_0,\mu_0}(\cdot)$ is proper by Theorem \ref{tian-zhu-conic-ke-metric-weak}. Thus
the twisted $F$-functionals $F_{\mu_0,\delta}(\cdot)$ defined by (\ref{twisted-f-functional-t}) are all proper for
any $\delta\in (0,\delta_0]$. By the argument in Section 3, it follows that there exists a solution of (\ref{smooth-continuity-modified}) at $t=\mu_0$ for any $\delta\in (0,\delta_0]$. Hence, for a fixed $\delta=\frac{\delta_0}{2}$, we
apply the Implicit Function Theorem to see that there exists an $\epsilon_0$ such that (\ref{smooth-continuity-modified}) is solvable for any $\mu\in [\mu_0,\mu_0+\epsilon_0)$. Note that the twisted Ricci potential
$h_\delta$ of $\omega_\delta$
in (\ref{smooth-continuity-modified}) satisfies
(\ref{twisted-h-delta}) with $\beta=1- \frac{1-\mu}{\lambda}$.
This means that there exists a twisted K\"ahler-Einstein metric
associated to the positive $(1,1)$-form $\Omega= (1-\beta)\eta_{\frac{\delta_0}{2}}$ for any $\mu\in [\mu_0,\mu_0+\epsilon_0)$. By a result of X. Zhang and X. W. Zhang [ZhaZ13], the twisted $F$-functionals $F_{\mu,\frac{\delta_0}{2}}(\cdot)$ are proper for any $\mu\in [\mu_0,\mu_0+\epsilon_0)$. In fact, they prove a version of Theorem \ref{tian-zhu-conic-ke-metric-weak} on a Fano manifold which admits a twisted K\"ahler-Einstein metric.
By a direct computation, it is easy to see that for any $\delta\in (0,\delta_0)$ it holds that
$$|F_{\mu,\frac{\delta_0}{2}}(\psi)-F_{\mu,\delta}(\psi)|\le C(\|\Psi_{\delta}\|_{C^0(M)}, \|\Psi_{\frac{\delta_0}{2}}\|_{C^0(M)})\le C, ~\forall~\psi\in \mathcal H(M,\omega_0).$$
This implies that the $ F_{\mu,\delta}(\cdot)$ are all proper for any $\delta\in (0,\delta_0)$ and $\mu\in [\mu_0,\mu_0+\epsilon_0)$.
Thus by the argument in Section 3, we see that there exists a solution $\varphi_{\mu, \delta}$
of (\ref{smooth-continuity-modified}) at $t=\mu\in [\mu_0,\mu_0+\epsilon_0)$ for any $\delta\in (0,\delta_0)$.
Moreover,
$${\rm osc}_M\varphi_{\mu, \delta}\le C,$$
where $C$ is a uniform constant independent of $\mu$ and $\delta$.
On the other hand, the estimate (\ref{c2-estimate-twisted-metrics}) in Lemma \ref{c2-estimate-twisted-metrics-lemma} also holds for the metrics $\omega_{\varphi_{\mu, \delta}}^\delta$.
Taking a sequence $\delta_i\to 0$, the $\varphi_{\mu,\delta_i}$ converge to a H\"older continuous function $\varphi_\infty -\psi$ which satisfies the weak conic K\"ahler-Einstein metric equation,
$${\rm Ric}(\omega_{\varphi_\infty})=\mu\omega_{\varphi_\infty}+2\pi (1-\beta)[D], ~\mu\in [\mu_0, \mu_0+\epsilon_0)$$
with the property
$$C^{-1} \hat\omega\le \omega_{\varphi_\infty} \le C \hat\omega, ~{\rm in}~ M\setminus D$$
for some uniform positive number $C$.
By the regularity theorem in [JMR11], $\omega_{\varphi_\infty}$ is a conic K\"ahler-Einstein metric in the sense of $C^{2,\alpha;\beta}$ K\"ahler potentials.
The proof of Theorem \ref{Do_openness} is completed.
\end{proof}
\vskip20mm
\section*{References}
\small
\begin{enumerate}
\renewcommand{\labelenumi}{[\arabic{enumi}]}
\bibitem{Au83}[Au84] Aubin, T., R\'eduction du cas positif de l'\'equation de Monge-Amp\`ere sur les vari\'et\'es k\"ahl\'eriennes compactes \`a la d\'emonstration d'une in\'egalit\'e, J. Funct. Anal., 57 (1984), 143-153.
\bibitem{Be11}[Be11] Berman, R., A thermodynamical formalism for Monge-Amp\`ere equations, Moser-Trudinger inequalities and K\"ahler-Einstein metrics, Advances in Math., 248 (2013), 1254-1297.
\bibitem{Bo11}[Bo11] Berndtsson, B., Brunn-Minkowski type inequality for Fano manifolds and the Bando-Mabuchi uniqueness theorem, Preprint, arXiv:1103.0923.
\bibitem{Br11}[Br11] Brendle, S., Ricci flat K\"ahler metrics with edge singularities, arXiv:1103.5454, to appear in Int. Math. Res. Notices.
\bibitem{Ch00}[Ch00] Chen, X.X., The space of K\"ahler metrics, J. Diff. Geom., 56 (2000), 189-234.
\bibitem{Cher68}[Cher68] Chern, S.S., On holomorphic mappings of Hermitian manifolds of the same dimension, Proc. Symp. Pure Math., 11, Amer. Math. Soc., 1968, 157-170.
\bibitem{CDS13}[CDS13] Chen, X.X., Donaldson, S. and Sun, S., K\"ahler-Einstein metrics on Fano manifolds I, II, III, J. Amer. Math. Soc., 28 (2015), 183-197, 199-234, 235-278.
\bibitem{CS12}[CS12] Collins, T. and Sz\'ekelyhidi, G., The twisted K\"ahler-Ricci flow, arXiv:1207.5441v1.
\bibitem{CTZ05}[CTZ05] Cao, H.D., Tian, G. and Zhu, X.H., K\"ahler-Ricci solitons on compact K\"ahler manifolds with $c_1(M)>0$, Geom. Funct. Anal., 15 (2005), 697-719.
\bibitem{Di88}[Di88] Ding, W., Remarks on the existence problem of positive K\"ahler-Einstein metrics, Math. Ann., 282 (1988), 463-471.
\bibitem{Do98}[Do98] Donaldson, S., Symmetric spaces, K\"ahler geometry and Hamiltonian dynamics, Northern California Symplectic Geometry Seminar, 13-33.
\bibitem{Do02}[Do02] Donaldson, S., Scalar curvature and stability of toric varieties, J. Diff. Geom., 62 (2002), 289-349.
\bibitem{Do11}[Do11] Donaldson, S., K\"ahler metrics with cone singularities along a divisor, Preprint, arXiv:1102.1196.
\bibitem{DT92}[DT92] Ding, W. and Tian, G., The generalized Moser-Trudinger inequality, Nonlinear Analysis and Microlocal Analysis, Proceedings of the International Conference at Nankai Institute of Mathematics (K.C. Chang et al., Eds.), World Scientific, 1992, 57-70.
\bibitem{GP13}[GP13] Guenancia, H. and P\u{a}un, M., Conic singularities metrics with prescribed Ricci curvature: the case of general cone angles along normal crossing divisors, arXiv:1307.6375.
\bibitem{JMR11}[JMR11] Jeffres, T., Mazzeo, R. and Rubinstein, Y., K\"ahler-Einstein metrics with edge singularities, arXiv:1105.5216, to appear in Ann. Math.
\bibitem{Kol08}[Kol08] Kolodziej, S., H\"older continuity of solutions to the complex Monge-Amp\`ere equation with the right-hand side in $L^p$: the case of compact K\"ahler manifolds, Math. Ann. (2008), 379-386.
\bibitem{Lu68}[Lu68] Lu, Y.C., Holomorphic mappings of complex manifolds, J. Diff. Geom., 2 (1968), 299-312.
\bibitem{LS12}[LS12] Li, C. and Sun, S., Conic K\"ahler-Einstein metrics revisited, Preprint, arXiv:1207.5011.
\bibitem{LZ14}[LZ14] Liu, J.W. and Zhang, X., The conical K\"ahler-Ricci flow on Fano manifolds, preprint.
\bibitem{PSSW08}[PSSW08] Phong, D.H., Song, J., Sturm, J. and Weinkove, B., The Moser-Trudinger inequality on K\"ahler-Einstein manifolds, Amer. J. Math., 130 (2008), 1067-1085.
\bibitem{Sh14}[Sh14] Shen, L., Smooth approximation of conic K\"ahler metric with lower Ricci curvature bound, Preprint, arXiv:1406.0222v1.
\bibitem{Se92}[Se92] Semmes, S., Complex Monge-Amp\`ere equations and symplectic manifolds, Amer. J. Math., 114 (1992), 495-550.
\bibitem{SW12}[SW12] Song, J. and Wang, X.W., The greatest Ricci lower bound, conical Einstein metrics and the Chern number inequality, arXiv:1207.4839v1.
\bibitem{ST12}[ST12] Song, J. and Tian, G., Canonical measures and K\"ahler-Ricci flow, J. Amer. Math. Soc., 25 (2012), 303-353.
\bibitem{Ti87}[Ti87] Tian, G., On K\"ahler-Einstein metrics on certain K\"ahler manifolds with $C_1(M)>0$, Invent. Math., 89 (1987), 225-246.
\bibitem{Ti97}[Ti97] Tian, G., K\"ahler-Einstein metrics with positive scalar curvature, Invent. Math., 130 (1997), 1-39.
\bibitem{Ti12}[Ti12] Tian, G., $K$-stability and K\"ahler-Einstein metrics, Preprint, arXiv:1211.4669.
\bibitem{TZ00}[TZ00] Tian, G. and Zhu, X.H., A nonlinear inequality of Moser-Trudinger type, Cal. Var. PDE, 10 (2000), 349-354.
\bibitem{Yau78}[Yau78] Yau, S.T., On the Ricci curvature of a compact K\"ahler manifold and the complex Monge-Amp\`ere equation, I, Comm. Pure Appl. Math., 31 (1978), 339-411.
\bibitem{Yau93}[Yau93] Yau, S.T., Open problems in geometry, Differential geometry: partial differential equations on manifolds (Los Angeles, CA, 1990), 1-28, Proc. Sympos. Pure Math., 54, Part 1, Amer. Math. Soc., Providence, RI, 1993.
\bibitem{ZhaZ13}[ZhaZ13] Zhang, X. and Zhang, X.W., Generalized K\"ahler-Einstein metrics and energy functionals, Canadian J. Math., doi:10.4153/CJM-2013-034-3.
\end{enumerate}
\end{document}
\begin{document}
\newcommand{\bra}[1]{\langle #1|}
\newcommand{\ket}[1]{|#1\rangle}
\markboth{Ciara Morgan} {The information-carrying capacity of certain quantum channels}
\begin{titlepage}
\begin{center}
{\large THE INFORMATION-CARRYING CAPACITY OF CERTAIN QUANTUM CHANNELS}\\
{\bf Ciara Morgan}\\
{\scshape The thesis is submitted to\\
University College Dublin\\
for the degree of PhD\\
in the College of\\
Engineering, Mathematical and Physical Sciences}\\
{\it January 2010}
Based on research conducted in the\\
Dublin Institute for Advanced Studies\\
and the School of Mathematical Sciences, \\ University College Dublin\\
{\footnotesize {\it (Head of School: Dr. M\'iche\'al \'O Searc\'oid)}}
under the supervision of\\
{\bf Prof. Tony Dorlas \, \& \, Prof. Joe Pul\'e}
\end{center}
\end{titlepage}
\pagenumbering{roman}
\tableofcontents
\includePreface{}{quote}
\includePreface{}{dedication}
\includePreface{Acknowledgements}{acknowledgements}
\includePreface{Abstract}{abstract}
\includePreface{Notation}{notation}
\setcounter{page}{1}
\pagenumbering{arabic}
\pagestyle{fancy}
\fancyhf{}
\fancyhead[LO]{\rightmark}
\fancyfoot[CO]{\thepage}
\includeChapter{Introduction}{introduction}
\includeChapter{Preliminaries}{prelim}
\includeChapter{Deriving a minimal ensemble for the quantum amplitude damping \\channel}{article1v4}
\includeChapter{The classical capacity of two quantum channels with memory}{article2v3}
\includeChapter{Strong converse to the channel coding theorem for a \\periodic quantum channel}{StrongConvPerv4}
\appendix
\addappheadtotoc
\includeAppendix{Carath\'eodory's Theorem \& an application to minimal \\ optimal-ensembles}{appendix1}
\includeAppendix{Product-state capacity of a periodic quantum channel}{appendix2}
\addcontentsline{toc}{chapter}{List of figures}
\listoffigures
\addcontentsline{toc}{chapter}{Bibliography}
\end{document}
\begin{document}
\title{Quantum steering as a witness of quantum scrambling}
\author{Jhen-Dong Lin}
\thanks{Jhen-Dong Lin and Wei-Yu Lin contributed equally to this work.}
\affiliation{Department of Physics, National Cheng Kung University, 701 Tainan, Taiwan}
\affiliation{Center for Quantum Frontiers of Research \& Technology, NCKU, 70101 Tainan, Taiwan}
\author{Wei-Yu Lin}
\thanks{Jhen-Dong Lin and Wei-Yu Lin contributed equally to this work.}
\affiliation{Department of Electrical Engineering, National Taiwan University, 106 Taipei, Taiwan}
\author{Huan-Yu Ku}
\affiliation{Department of Physics, National Cheng Kung University, 701 Tainan, Taiwan}
\affiliation{Center for Quantum Frontiers of Research \& Technology, NCKU, 70101 Tainan, Taiwan}
\author{Neill Lambert}
\affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan}
\author{Yueh-Nan Chen}
\email{[email protected]}
\affiliation{Department of Physics, National Cheng Kung University, 701 Tainan, Taiwan}
\affiliation{Center for Quantum Frontiers of Research \& Technology, NCKU, 70101 Tainan, Taiwan}
\author{Franco Nori}
\affiliation{Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama 351-0198, Japan}
\affiliation{RIKEN Center for Quantum Computing (RQC), Wakoshi, Saitama 351-0198, Japan}
\affiliation{Department of Physics, The University of Michigan, Ann Arbor, 48109-1040 Michigan, USA}
\date{\today}
\begin{abstract}
Quantum information scrambling describes the delocalization of local information to global information in the form of entanglement throughout all possible degrees of freedom. A natural measure of scrambling is the tripartite mutual information (TMI), which quantifies the amount of delocalized information for a given quantum channel with its state representation, i.e., the Choi state. In this work, we show that quantum information scrambling can also be witnessed by temporal quantum steering for qubit systems. We can do so because there is a fundamental equivalence between the Choi state and the pseudo-density matrix formalism used in temporal quantum correlations. In particular, we propose a quantity as a scrambling witness, based on a measure of temporal steering called temporal steerable weight. We justify the scrambling witness for unitary qubit channels by proving that the quantity vanishes whenever the channel is non-scrambling.
\end{abstract}
\maketitle
\section{Introduction.}
Quantum systems evolving under strongly interacting channels can experience the delocalization of initially local information into non-local degrees of freedom. Such an effect is termed ‘‘quantum information scrambling,'' and this new way of looking at delocalization in quantum theory has found applications in a range of physical effects, including chaos in many-body systems~\citep{maldacena2016remarks,nahum2017quantum,von2018operator,cotler2017chaos,fan2017out,khemani2018operator,page1993average,khemani2018operator,gu2017local}, and the black-hole information paradox~\citep{hayden2007black,sekino2008fast,lashkari2013towards,gao2017traversable,shenker2014black,maldacena2017diving,roberts2015localized,shenker2014multiple,roberts2015diagnosing,blake2016universal,kitaev2014hidden}.
One can analyze the scrambling effect by using the state representation of a quantum channel (also known as the Choi state), which encodes the input and output of a quantum channel into a quantum state~\citep{choi1975completely,jamiolkowski1972linear}. Within this formulation, quantum information scrambling can be measured by the tripartite mutual information (TMI) of a Choi state~\cite{hosur2016chaos,ding2016conditional,PhysRevE.98.052205,PhysRevA.97.042330,PhysRevLett.124.200504,PhysRevA.101.042324,PhysRevB.98.134303} which is written as
\begin{equation}
-I_3 = I(A:CD)-I(A:C)-I(A:D).\label{-I_3}
\end{equation}
Here, $A$ denotes a local region of the input subsystem whereas $C$ and $D$ are partitions of the output subsystem. The mutual information $I(A:X)$ quantifies the amount of information about $A$ stored in the region $X$. When $I(A:CD)>I(A:C)+I(A:D)$ or $-I_3>0$, it means that the amount of information about $A$ encoded in the whole output region $CD$ is larger than that in local regions $C$ and $D$. Therefore, $-I_3>0$ implies the delocalization of information or quantum information scrambling~\cite{hosur2016chaos}. Note that the TMI and the out-of-time-ordered correlator are closely related, suggesting that one can also use the out-of-time-ordered correlator as an alternative witness of quantum information scrambling~\citep{hosur2016chaos,landsman2019verified,swingle2016measuring,yao2016interferometric,zhu2016measurement,garttner2017measuring,swingle2018resilience,huang2019finite,yoshida2019disentangling,alonso2019out}.
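As a concrete illustration of how Eq.~\eqref{-I_3} can be evaluated, the following self-contained Python sketch (our own illustration rather than the procedure of any cited work; all function names are ours) builds the four-qubit Choi state of a two-qubit unitary $U$, with input references $(A,B)$ and outputs $(C,D)$, and computes $-I_3$ from the von Neumann entropies of its reduced states. For the SWAP gate it returns $-I_3=0$, consistent with SWAP being non-scrambling.
\begin{verbatim}
import numpy as np
from itertools import product

def partial_trace(rho, keep, n):
    """Reduced density matrix of an n-qubit state rho on the qubits in `keep`."""
    rho = rho.reshape([2] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + n)
        n -= 1
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def choi_state(U):
    """Choi state of a 2-qubit unitary U; qubit order (A, B, C, D):
    A, B reference the inputs, C, D carry the outputs."""
    psi = np.zeros(16, dtype=complex)
    for i, j in product(range(2), repeat=2):
        col = U[:, 2 * i + j]                 # U acting on |ij>
        for k in range(4):
            c, d = divmod(k, 2)
            psi[8 * i + 4 * j + 2 * c + d] += col[k] / 2.0
    return np.outer(psi, psi.conj())

def mutual_info(rho, X, Y):
    return (entropy(partial_trace(rho, X, 4)) + entropy(partial_trace(rho, Y, 4))
            - entropy(partial_trace(rho, X + Y, 4)))

def neg_I3(U):
    """-I_3 = I(A:CD) - I(A:C) - I(A:D) for the Choi state of U."""
    rho = choi_state(U)
    A, C, D = [0], [2], [3]
    return mutual_info(rho, A, C + D) - mutual_info(rho, A, C) - mutual_info(rho, A, D)

SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0],
                 [0, 1, 0, 0], [0, 0, 0, 1]], dtype=complex)
print(neg_I3(SWAP))   # ~0: a non-scrambling channel
\end{verbatim}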
From another point of view, because TMI is a multipartite entanglement measure, Eq.~\eqref{-I_3} can also be seen as a quantification of the multipartite \textit{entanglement in time}, i.e., the entanglement between input and output subsystems~\cite{hosur2016chaos}. Motivated by such an insight, one could expect that the scrambling effect can also be investigated from the perspective of temporal quantum correlations, i.e., temporal analogue of space-like quantum correlations.
Moreover, Ku~\textit{et al.}~\citep{ku2018hierarchy} have shown that three notable temporal quantum correlations (temporal nonlocality, temporal steering, and temporal inseparability) can be derived from a fundamental object called the pseudo-density matrix~\citep{fitzsimons2015quantum,Ried2015,Zhao2018,Pi2019}, while elsewhere it was noted that there is a strong relationship between the Choi state and the pseudo-density matrix itself~\citep{pisarczyk2019causal}. Taking inspiration from these connections, in this work we aim to link the notion of scrambling to one particular scenario of temporal quantum correlations called temporal steering (TS)~\citep{chen2014temporal,chen2015detecting,chen2016quantifying,ku2016temporal,ku2018hierarchy,chen2017spatio,bartkiewicz2016temporal,liu2018quantum}.
Partly inspired by the Leggett-Garg inequality~\citep{leggett1985quantum,emary2013leggett}, temporal steering was developed as a temporal counterpart of the notion of spatial EPR steering~\citep{schrodinger1935discussion,wiseman2007steering,jones2007entanglement,cavalcanti2009experimental,piani2015necessary,skrzypczyk2014quantifying,costa2016quantification,branciard2012one,law2014quantum,uola2019quantum}. Recent work has shown that TS can quantify the information flow between different quantum systems~\citep{chen2016quantifying}, further suggesting it may also be useful in the study of scrambling. Here, our goal is to demonstrate that \textit{one can witness information scrambling with temporal steering, which implies that the scrambling concept has nontrivial meaning in the broader context of temporal quantum correlations.} In addition, we wish to show that \textit{one can use ``measures'' developed to study temporal steering as a practical tool for the study of scrambling.}
We will restrict our attention to unitary channels of qubit systems, where the structure of non-scrambling channels can be well characterized~\cite{ding2016conditional}. More specifically, a unitary channel is non-scrambling, i.e. $-I_3=0$, if and only if the unitary is a ``criss-cross" channel that locally routes the local information from the input to the output subsystems. For qubit systems, a criss-cross channel can be described by a sequence of local unitaries and SWAP operations.
The main result of this work is that we propose a quantity, $-T_3$, as a scrambling witness based on a measure of temporal steering called the temporal steerable weight. We justify $-T_3$ as a scrambling witness by proving that $-T_3=0$ when the global unitary channel is non-scrambling in the sense described above. We then compare $-T_3$ with $-I_3$ by numerically simulating the Ising spin-chain model and the Sachdev-Ye-Kitaev (SYK) model. Finally, based on the one-sided device-independent nature of steering, we point out that obtaining $-T_3$ requires fewer experimental resources than $-I_3$.
\begin{figure}
\caption{Schematic illustration of the temporal steering scenario considered here.}
\label{ets}
\end{figure}
\section{Extended temporal steering scenario and scrambling witness}
\subsection{Temporal steering scenario}
Let us review the TS scenario~\cite{chen2014temporal} with the schematic illustration shown in Fig.~\ref{ets}. We focus on the reduced system $q_1$ and treat the other qubits as the environment. In general, the TS task consists of many rounds of experiments. For each round, Alice receives $q_1$ with a fixed initial state $\rho_0$. Before the system evolves, Alice performs one of the projective measurements $\{E_{a|x}\}_{a,x}$ on $q_1$. Here, $x$ stands for the index of the measurement basis, which Alice can freely choose, and $a$ is the corresponding measurement outcome. The resulting post-measurement conditional states can be written as
\begin{equation}
\Big\{\rho_{a|x}(0) = \frac{E_{a|x}\rho_0 E_{a|x}^{\dag} }{\tr(E_{a|x}\rho_0 E_{a|x}^{\dag})}\Big\}_{a,x},
\end{equation} where $\big\{p(a|x) = \tr(E_{a|x}\rho_0) \}_{a,x}$ predicts the probability of obtaining the outcome $a$ conditioned on Alice's choice $x$. Alice then sends the system to Bob through the quantum channel $\Lambda_t$, which describes the reduced dynamics of $q_1$ alone by tracing out other qubits. Finally, Bob performs another measurement on $q_1$ after the evolution.
After all rounds of the experiments are finished, Alice sends her measurement results to Bob by classical communication, so that Bob can also obtain the probability distribution $p(a|x)$. Additionally, based on the knowledge of $(x,a)$ for each round of the experiment, Bob can approximate the conditional state $\rho_{a|x}(t)=\Lambda_t[\rho_{a|x}(0)]$ by quantum state tomography. The aforementioned probability distribution and the conditional states can be summarized as a set called the TS assemblage $\{\sigma_{a|x}=p(a|x)\rho_{a|x}(t)\}_{a,x}$. Note that the assemblage can also be derived from the pseudo-density matrix (see Appendix~\ref{pdmchoi} for the derivation). In Appendix~\ref{pdmchoi}, we also show that the Choi matrix and the pseudo-density matrix are related by a partial transposition.
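As a minimal illustration (ours, not a prescription from the cited works), the following Python sketch constructs such a TS assemblage for the three Pauli measurements on the maximally mixed input $\rho_0=\mathbb{1}/2$, with the channel $\Lambda_t$ specified by a list of Kraus operators.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),      # X
          np.array([[0, -1j], [1j, 0]], dtype=complex),   # Y
          np.array([[1, 0], [0, -1]], dtype=complex)]     # Z

def ts_assemblage(kraus_ops):
    """{(a, x): sigma_{a|x} = p(a|x) * Lambda[rho_{a|x}(0)]} for rho_0 = I/2."""
    assemblage = {}
    for x, P in enumerate(PAULIS):
        for a in (0, 1):
            E = (I2 + (-1) ** a * P) / 2   # projector onto the a-th eigenstate
            rho = E                        # E rho_0 E / tr(...) = E when rho_0 = I/2
            out = sum(K @ rho @ K.conj().T for K in kraus_ops)
            assemblage[(a, x)] = 0.5 * out # p(a|x) = tr(E rho_0) = 1/2
    return assemblage

sigma = ts_assemblage([I2])   # example: identity channel
\end{verbatim}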
Now, Bob can determine whether a given assemblage is steerable or unsteerable; that is, whether his system is quantum mechanically steered by Alice's measurements. In general, if the assemblage is unsteerable, it can be generated in a classical way, which is described by the local hidden state model:
\begin{equation}
\sigma_{a|x}^{\text{LHS}}(t)= \sum_{\lambda} p(a|x, \lambda) p(\lambda) \sigma_{\lambda}(t)~~~\forall a,x, \label{LHS}
\end{equation} where $\{p(\lambda), \sigma_\lambda(t)\}$ is an ensemble of local hidden states, and $\{p(a|x,\lambda)\}$ stands for classical post-processing. Therefore, the assemblage is steerable when it cannot be described by Eq.~\eqref{LHS}.
Bob can further quantify the magnitude of temporal steering~\cite{cavalcanti2016quantum,uola2019quantum}. Here, we use one of the quantifiers, the temporal steerable weight (TSW)~\cite{chen2016quantifying}. A given TS assemblage $\{\sigma_{a|x}(t)\}$ can be decomposed into a mixture of steerable and unsteerable parts, namely,
\begin{equation}
\sigma_{a|x}(t) = \mu \sigma_{a|x}^\text{US}(t) +(1- \mu) \sigma_{a|x}^\text{S}(t) ~~\forall a,x, \label{decomposition}
\end{equation}
where $\{\sigma_{a|x}^\text{S}(t)\}$ and $\{\sigma_{a|x}^\text{US}(t)\}$ are the steerable and unsteerable assemblages, respectively, and $\mu$ stands for the portion (or weight) of the unsteerable part with $0\leq \mu\leq 1$. The TSW for the assemblage is then defined as
\begin{equation}
\text{TSW}[\sigma_{a|x}(t)] = 1-\mu^*, \label{TSW}
\end{equation}
where $\mu^*$ is the maximal unsteerable portion among all possible decompositions described by Eq.~\eqref{decomposition}. In other words, TSW can be interpreted as the minimum steerable resource required to reproduce the TS assemblage (e.g., $\text{TSW}=0$ for minimal steerability, and $\text{TSW}=1$ for maximal steerability). Note that Eq.~\eqref{TSW} can be numerically computed through semi-definite programming~\citep{cavalcanti2016quantum}.
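For concreteness, the following is a minimal sketch of how the TSW of a given assemblage can be evaluated, assuming the standard steerable-weight semidefinite program written over deterministic response functions and using the \texttt{cvxpy} modelling package together with NumPy; the function name \texttt{steerable\_weight} and the data layout are illustrative choices rather than part of the original numerics.
\begin{verbatim}
import itertools
import numpy as np
import cvxpy as cp

def steerable_weight(assemblage, n_outcomes, n_settings):
    """Steerable weight 1 - mu* of {sigma_{a|x}}.

    assemblage[(a, x)] is the d x d matrix p(a|x) rho_{a|x}(t).
    """
    d = next(iter(assemblage.values())).shape[0]
    # A deterministic strategy lambda assigns one outcome to every setting x.
    strategies = list(itertools.product(range(n_outcomes), repeat=n_settings))
    sig = [cp.Variable((d, d), hermitian=True) for _ in strategies]
    cons = [s >> 0 for s in sig]
    for x in range(n_settings):
        for a in range(n_outcomes):
            lhs = sum(sig[k] for k, lam in enumerate(strategies) if lam[x] == a)
            # The local-hidden-state part must be dominated by the assemblage.
            cons.append(assemblage[(a, x)] - lhs >> 0)
    # mu* is the largest total weight of the unsteerable part.
    mu_star = cp.Problem(cp.Maximize(cp.real(sum(cp.trace(s) for s in sig))),
                         cons).solve()
    return 1.0 - mu_star
\end{verbatim}
For the single-qubit assemblages used in the main text ($d=2$, three Pauli settings, two outcomes per setting), the program involves $2^3=8$ deterministic strategies and can be handled by any SDP-capable solver shipped with \texttt{cvxpy}.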
According to Ref.~\cite{chen2016quantifying}, the TSW can reveal the direction of the information flow between an open quantum system and its environment during the time evolution. When information irreversibly flows out to the environment, the TSW monotonically decreases. Accordingly, a temporal increase of the TSW implies information backflow. Recall that Alice steers the time-evolved state of $q_1$ by her measurement $E_{a|x}$; in other words, the measurement encodes the information about $(a,x)$ in $q_1$. Therefore, after the evolution, Bob can estimate the amount of information preserved in $q_1$ by computing the TSW.
\subsection{Extended temporal steering as a witness of scrambling}
As shown in Fig.~\ref{ets}, the evolution of the total system is still unitary, meaning that the information initially stored in $q_1$ is either redistributed (and remains localized) or scrambled after the evolution. Therefore, if we extend the notion of TS by allowing Bob to access the full system (regions $C$ and $D$), he can, in general, determine how the information is localized or scrambled throughout the whole system. To be more specific, we now consider a global system with $N$ qubits labeled by $\{q_n\}_{n=1\dots N}$. Before Alice performs any measurement, we reset the total system by initializing the qubits in the maximally mixed state $\rho^{\text{tot}}_0 = \mathbb{1}^{\otimes N}/2^N$, where $\mathbb{1}$ is the two-dimensional identity matrix. In this case, no matter how one probes the system, the outcomes are completely random, and no meaningful information can be learned. Then, Alice encodes the information $(a,x)$ in $q_1$ by performing $\{E_{a|x}\}$, which results in the conditional states of the total system:
\begin{align}
&\{\rho^{\text{tot}}_{a|x}(0) = \frac{1}{2^N}(2E_{a|x}\otimes \mathbb{1}^{\otimes N-1})\}_{a,x}\\
\text{with~~}&\{p(a|x) = \tr(E_{a|x}~\frac{\mathbb{1}}{2})=1/2 \}_{a,x}.
\end{align}
After that, let these conditional states evolve freely to time $t$, such that
\begin{equation}
\rho^{\text{tot}}_{a|x}(t) = U_t\,\rho^{\text{tot}}_{a|x}(0)\,U_t^\dagger~~~\forall a,x,
\end{equation}
where $U_t$ can be any unitary operator acting on the total system. The assemblage for the global system then reads
\begin{equation}
\{\sigma^{\text{tot}}_{a|x}(t) = p(a|x)\,\rho^{\text{tot}}_{a|x}(t)\}_{a,x}.
\end{equation}
Because the global evolution is unitary, it is straightforward to see that
\begin{equation}
\text{TSW}[\sigma_{a|x}^\text{tot}(t)] = \text{TSW}[\sigma_{a|x}^\text{tot}(0)],
\end{equation}
which means that the information is never lost when all the degrees of freedom in the global system can be accessed by Bob.
To determine how the information spreads over the degrees of freedom, Bob can further analyze the assemblages obtained from different portions of the total system. For instance, he can divide the whole system into two local regions $C$ and $D$ as shown in Fig.~\ref{ets}, where $C$ contains $n_c$ qubits $\{q_1, \cdots, q_{n_c}\}$ and $D$ contains $n_d = N-n_c$ qubits $\{q_{n_c+1},\cdots ,q_N \}$. Bob then obtains two additional assemblages, $\{\sigma_{a|x}^C (t) = \tr_D [\sigma_{a|x}^\text{tot}(t)]\}$ and
$\{\sigma_{a|x}^D (t) = \tr_C [\sigma_{a|x}^\text{tot}(t)]\}$, from which he can compute $\text{TSW}[\sigma_{a|x}^C (t)]$ and $\text{TSW}[\sigma_{a|x}^D (t)]$, estimating the amount of information localized in regions $C$ and $D$, respectively.
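The bookkeeping of these reduced assemblages is straightforward; the following NumPy sketch (with illustrative helper names, not the code used for the figures) constructs one member $\sigma^{\text{tot}}_{a|x}(t)$ of the global assemblage for a given global unitary and then traces it down to the regions $C$ and $D$.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)

def total_assemblage_member(E_ax, U, n_qubits):
    """sigma^tot_{a|x}(t) = p(a|x) U (E_{a|x} (x) 1/2^{N-1}) U^dag, p(a|x) = 1/2."""
    rho = E_ax.astype(complex)
    for _ in range(n_qubits - 1):
        rho = np.kron(rho, I2 / 2)
    return 0.5 * U @ rho @ U.conj().T

def reduce_to(sigma, keep, n_qubits):
    """Partial trace of an n-qubit operator onto the qubits listed in `keep`."""
    t = sigma.reshape([2] * (2 * n_qubits))
    for q in sorted((p for p in range(n_qubits) if p not in keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + t.ndim // 2)
    d = 2 ** len(keep)
    return t.reshape(d, d)

# Example: N = 3, region C = {q_1}, region D = {q_2, q_3};
# Alice measures sigma_z and obtains the outcome associated with |0><0|.
N = 3
E_0z = np.array([[1, 0], [0, 0]], dtype=complex)
U = np.eye(2 ** N, dtype=complex)      # replace by the actual global unitary U_t
sigma_tot = total_assemblage_member(E_0z, U, N)
sigma_C = reduce_to(sigma_tot, keep=[0], n_qubits=N)
sigma_D = reduce_to(sigma_tot, keep=[1, 2], n_qubits=N)
\end{verbatim}
Feeding the resulting global and reduced assemblages into a TSW routine, such as the one sketched in the previous subsection, gives the three terms entering the scrambling witness defined below.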
In analogy with Eq.~\eqref{-I_3}, we propose the following quantity to be a \textit{scrambling witness}:
\begin{equation}
-T_3(t) = \text{TSW}[\sigma_{a|x}^\text{tot}(t)] - \text{TSW}[\sigma_{a|x}^C(t)]-\text{TSW}[\sigma_{a|x}^D(t)],
\end{equation} where the minus sign keeps the convention consistent with the TMI scrambling measure in Eq.~\eqref{-I_3}. The quantity can be interpreted as the information stored in the whole system minus the information localized in regions $C$ and $D$; namely, the information scrambled into non-local degrees of freedom.
As mentioned in the introduction, for a non-scrambling channel consisting of local unitaries and SWAP operations, the information stays localized (non-scrambled). Therefore, we justify that $-T_3(t)$ can serve as a scrambling witness, under the assumption of global {\em unitary} evolution, by proving that this quantity vanishes under non-scrambling evolutions, i.e., $-T_3=0$. Accordingly, any nonzero value of $-T_3$ can be seen as a witness of scrambling.
\begin{theorem}
If the global unitary evolution $U$ is local for regions $C$ and $D$, that is, $U = U_C\otimes U_D$, the resulting $-T_3$ is zero.\label{local}
\end{theorem}
The proof is given in Appendix~\ref{pf:1}.
\begin{theorem}
If the global unitary $U$ is a SWAP operation between qubits, then $-T_3(t) = 0$.\label{swap}
\end{theorem}
The proof is given in Appendix~\ref{pf:2}.
According to the results of Theorem~\ref{local} and Theorem~\ref{swap}, we conclude that $-T_3(t)$ will vanish if the global evolution is any sequence of local unitaries and SWAP operations, as required for a witness of scrambling.
\section{Numerical simulations}
In this section, we present numerical simulations for the Ising spin chain and the SYK model. For simplicity, we take $\{E_{a|x}\}$ to be the projectors onto the eigenstates of the Pauli matrices $\{\sigma_x,~\sigma_y,~\sigma_z\}$, such that $\text{TSW}[\sigma_{a|x}^\text{tot}(0)]=1$~\cite{chen2016quantifying}.
\subsection{Example 1: The Ising spin chain}
\begin{figure*}
\caption{Time evolution of the information scrambling, measured by $-I_3$ and witnessed by $-T_3$, together with the information stored in regions $C$ and $D$ [quantified by $I(A:C)$, $\text{TSW}(\sigma_{a|x}^C)$, $I(A:D)$, and $\text{TSW}(\sigma_{a|x}^D)$], for the Ising spin chain in the chaotic and integrable regimes and for different partitions of the output system.\label{spin_scrambling}}
\end{figure*}
\begin{figure*}
\caption{Time evolution of the information scrambling and of the information localized in regions $C$ and $D$ for the SYK model, for different partitions of the output system.\label{syk_scrambling}}
\end{figure*}
We now consider a one-dimensional Ising model of $N$ qubits with the Hamiltonian
\begin{equation}
H = - \sum_{i=1}^{N-1}\sigma_{i}^{z}\sigma_{i+1}^{z} - h\sum_{i=1}^{N}\sigma_i^z - g\sum_{i=1}^{N}\sigma_i^x.
\end{equation}
The key feature is that chaotic behavior can be obtained by simply turning on the longitudinal field parametrized by $h$ (in addition to the transverse field parametrized by $g$).
Here, we consider a system containing seven qubits $\{q_1\dots q_7\}$ and compare the dynamical behavior of information scrambling in the chaotic ($g=1$, $h=0.5$) and integrable ($g=1$, $h=0$) regimes by encoding the information in $q_1$.
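A minimal sketch of this setup in NumPy/SciPy is shown below, assuming a dense-matrix construction of the Hamiltonian and of the global unitary $U_t=e^{-iHt}$ (with $\hbar=1$); the helper names are illustrative, and the figures of this section were not necessarily generated with this exact code.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(op, site, n):
    """Embed a single-qubit operator acting on `site` into an n-qubit space."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_hamiltonian(n, g, h):
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n - 1):                        # nearest-neighbour ZZ coupling
        H -= op_on(sz, i, n) @ op_on(sz, i + 1, n)
    for i in range(n):                            # longitudinal (h), transverse (g)
        H -= h * op_on(sz, i, n) + g * op_on(sx, i, n)
    return H

H_chaotic = ising_hamiltonian(7, g=1.0, h=0.5)    # chaotic point used in the text
H_integrable = ising_hamiltonian(7, g=1.0, h=0.0) # integrable point
U_t = expm(-1j * H_chaotic * 1.0)                 # global unitary at t = 1
\end{verbatim}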
As shown in Fig.~\ref{spin_scrambling}, we plot the information scrambling, measured by $-I_3$ and witnessed by $-T_3$, and the amount of information stored in region $C$ ($D$), quantified by $I(A:C)$ and TSW$(\sigma_{a|x}^C)$ [$I(A:D)$ and TSW$(\sigma_{a|x}^D)$], for different partitions of the output system. For a fixed output partition [Figs.~\ref{spin_scrambling}(a3), \ref{spin_scrambling}(b3), and \ref{spin_scrambling}(c3), for instance], one finds that the local minima of the scrambling correspond to the local maxima of the information stored either in region $C$ or in region $D$. Therefore, we conclude that \textit{the decrease of the scrambling during the evolution results from the information backflow from non-local degrees of freedom to local degrees of freedom.}
Moreover, information scrambling behaves differently for chaotic and integrable evolutions. For chaotic evolution, the scrambling remains large after an initial period of time, because the information is mainly encoded in non-local degrees of freedom. For integrable systems, by contrast, both $-I_3$ and $-T_3$ show oscillating behavior. Furthermore, as the dimension of region $C$ becomes larger, the oscillations of the scrambling in the integrable case become significantly more pronounced, whereas the scrambling patterns in the chaotic case remain essentially unchanged.
\subsection{Example 2: The Sachdev-Ye-Kitaev model}
We now consider the SYK model, which can be realized by a Majorana fermionic system with the Hamiltonian
\begin{equation}
\begin{aligned}
H = \sum_{i<j<k<l} J_{ijkl} \chi_i \chi_j \chi_k \chi_l \: , \\
\overline{J_{ijkl}^2} = \frac{3!}{(N-1)(N-2)(N-3)} J^2 ,
\end{aligned}
\end{equation}
where the $\chi_i$ represent Majorana fermions and $i,j,k,l=1,\ldots,N$.
The couplings $J_{ijkl}$ in the Hamiltonian follow a normal distribution with zero mean and variance $\overline{J_{ijkl}^2}$. To study this model in a qubit system, we can use the Jordan-Wigner transformation
\begin{align}
\chi_{2i-1} &= \frac{1}{\sqrt{2}} X_1 X_2 ... X_{i-1} Z_i , \nonumber\\ \chi_{2i} &= \frac{1}{\sqrt{2}} X_1 X_2 ... X_{i-1} Y_i
\end{align}
to convert the Majorana fermions into spin-chain Pauli operators. In our numerical results, we consider $N=14$ (a seven-qubit system) and $J=1$.
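For illustration, the following NumPy sketch builds the Jordan-Wigner Majorana operators of the equation above and a dense SYK Hamiltonian with Gaussian couplings; the routine names and the random seed are illustrative choices, and the exact disorder realizations behind Fig.~\ref{syk_scrambling} are not reproduced here.
\begin{verbatim}
import itertools
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def majoranas(n_qubits):
    """Jordan-Wigner Majoranas chi_1, ..., chi_{2 n_qubits}, normalized by 1/sqrt(2)."""
    chis = []
    for i in range(1, n_qubits + 1):
        string = [sx] * (i - 1)                     # X_1 ... X_{i-1}
        tail = [I2] * (n_qubits - i)
        chis.append(kron_chain(string + [sz] + tail) / np.sqrt(2))  # chi_{2i-1}
        chis.append(kron_chain(string + [sy] + tail) / np.sqrt(2))  # chi_{2i}
    return chis

def syk_hamiltonian(n_majorana, J=1.0, seed=0):
    """Dense SYK Hamiltonian for N = n_majorana Majoranas (n_majorana/2 qubits)."""
    rng = np.random.default_rng(seed)
    chis = majoranas(n_majorana // 2)
    var = 6.0 * J ** 2 / ((n_majorana - 1) * (n_majorana - 2) * (n_majorana - 3))
    H = np.zeros_like(chis[0])
    for i, j, k, l in itertools.combinations(range(n_majorana), 4):
        H += rng.normal(0.0, np.sqrt(var)) * chis[i] @ chis[j] @ chis[k] @ chis[l]
    return H

H = syk_hamiltonian(n_majorana=14, J=1.0)           # the N = 14 (seven-qubit) case
\end{verbatim}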
Figure~\ref{syk_scrambling} shows the time evolution of the information scrambling and of the information localized in regions $C$ and $D$ for different partitions of the output system, similar to those in Example 1.
The main difference between these examples is that, in Example 1, the qubits only interact with their nearest neighbors, whereas in Example 2, the model includes interactions among all qubits. Consequently, in the spin chain model, the scrambling is sensitive to the dimension of the output system. In the SYK model, however, the scrambling is not sensitive to the partition of the output system; namely, the scrambling time and the magnitude of the tripartite mutual information after the scrambling period are similar for different output partitions (asymptotically reaching the Haar-scrambled value~\cite{hosur2016chaos}). Note that in Appendix~\ref{finitesize}, we also provide numerical simulations involving different numbers of qubits for the above examples. We find that when decreasing (increasing) the number of qubits, the tendency of information backflow~\cite{chen2016quantifying} from global to local degrees of freedom increases (decreases) for both chaotic and integrable dynamics.
Finally, for the scrambling dynamics (chaotic spin chain and SYK model), we find that $\text{TSW}(\sigma_{a|x}^C)$ decays to zero more quickly than $I(A:C)$. In addition, $\text{TSW}(\sigma_{a|x}^D)$ remains zero at all times, while $I(A:D)$ can reach a non-zero value. The different behavior of $\text{TSW}(\sigma_{a|x}^{C/D})$ and $I(A:C/D)$ results from the hierarchical relation between these two quantities~\citep{ku2018hierarchy}, which states that temporal quantum steering is a stricter quantum correlation than bipartite mutual information. In other words, there can be moments where $I(A:C/D)$ is non-zero whereas $\text{TSW}(\sigma_{a|x}^{C/D})$ is zero, but not \textit{vice versa}.
When $\text{TSW}(\sigma_{a|x}^{C})=\text{TSW}(\sigma_{a|x}^{D})=0$ [$I(A:C)=I(A:D)=0$], the witness $-T_3$ [$-I_3$] reaches its maximum. Therefore, for scrambling dynamics we observe that $-T_3$ reaches its maximum earlier than $-I_3$.
\section{Summary}
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{lll}
& Space-like structure & Time-like structure\\
\hline Diagram&&\\[15pt]
& \raisebox{0pt}{\includegraphics[width=0.2\paperwidth]{Space-like.eps}} &\raisebox{0pt}{\includegraphics[width=0.2\paperwidth]{time-like.eps}}\\
\hline
Operator & Choi Matrix & Pseudo Density Matrix \\
\hline
\vtop{\hbox{\strut Quantum}\hbox{\strut correlations}} & \vtop{\hbox{\strut Mutual information}\hbox{\strut Entanglement}\hbox{\strut CHSH inequality}} & \vtop{\hbox{\strut Temporal steering}\hbox{\strut Temporal inseparability}\hbox{\strut Leggett-Garg inequality}} \\[30pt]
\hline
\vtop{\hbox{\strut Input} \hbox{\strut state $\rho_0$}} & EPR pairs & Maximally mixed states \\
\end{tabular}
\end{ruledtabular}
\caption{\label{sttable} Relations between space-like and time-like structures. In the diagrams, the vertical line with a dot in the middle for the space-like structure represents the bipartite entanglement as the resource of the input state. The red boxes represent the measurements required for the spatial/temporal quantum state tomography for different scenarios.}
\end{table*}
In summary, we demonstrate that information scrambling can be verified not only by the spatial quantum correlations encoded in a Choi matrix but also by the temporal quantum correlations encoded in a pseudo-density matrix (see Table~\ref{sttable} for the comparison between the space-like and time-like structures). Moreover, we provide an information scrambling witness, $-T_3$, based on the extended temporal steering scenario.
A potential advantage of using $-T_3$ as a scrambling witness, over $-I_3$, is that $-T_3$ requires fewer measurement resources than $-I_3$. More specifically, when measuring $-T_3$, we do not have to access the full quantum state of the input region $A$, because in the steering scenario Alice's measurement bases are only characterized by the classical variable $x$. From a practical point of view, the number of Alice's measurement bases can be smaller than that required for performing quantum state tomography on region $A$. For the examples presented in this work, region $A$ only contains a single qubit ($q_1$), for which the standard choice of measurement bases is the set of Pauli matrices $\{\sigma_x, \sigma_y,\sigma_z\}$. In the steering scenario, we could choose only two of these matrices as Alice's measurement bases, although for the numerical simulations presented in this work we still let Alice use all three Pauli matrices.
As the dimension of region $A$ increases, the number of measurements required to perform quantum state tomography and obtain $-I_3$ also increases. However, as mentioned above, in the steering scenario the dimension of region $A$ is not assumed, implying that we can still choose a manageable number of measurements for Alice to verify the steerability and compute $-T_3$.
Finally, it is important to note that we only claim that $-T_3$ is a witness of scrambling rather than a quantifier, because we only prove that $-T_3$ vanishes whenever the evolution is non-scrambling. An open question immediately arises: Can $-T_3$ be further treated as a quantifier from the viewpoint of resource theory~\citep{chitambar2019quantum}? To show this, our first step would be to prove that $-T_3$ is monotonically non-increasing under non-scrambling evolutions, and we leave this for future work.
\textit{Note added}---After this work was completed, we became aware of \citep{zhang2020quantum}, which independently showed that the temporal correlations are connected with information scrambling, because the out-of-time-ordered correlators can be calculated from pseudo-density matrices.
\begin{acknowledgments}
This work is supported partially by the National Center for Theoretical Sciences and Ministry of Science and Technology, Taiwan, Grants No. MOST 107-2628-M-006-002-MY3, and MOST 109-2627-M-006 -004, and the Army Research Office (under Grant No. W911NF-19-1-0081). N.L. acknowledges partial support from JST PRESTO through Grant No. JPMJPR18GC, the Foundational Questions Institute (FQXi), and the NTT PHI Laboratory. F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), the Moonshot R\&D Grant Number JPMJMS2061, and the Centers of Research Excellence in Science and Technology (CREST) Grant No. JPMJCR1676], the Japan Society for the Promotion of Science (JSPS) [via the Grants-in-Aid for Scientific Research (KAKENHI) Grant No. JP20H00134 and the JSPS–RFBR Grant No. JPJSBP120194828], the Army Research Office (ARO) (Grant No. W911NF-18-1-0358), the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06.
\end{acknowledgments}
\appendix
\section{Relation between the Choi matrix and the pseudo-density matrix \label{pdmchoi}}
To illustrate the main idea behind the TMI scrambling measure in Ref.~\citep{hosur2016chaos}, let us consider a system made up of $N$ qubits, labeled by $\{q_1,\cdots,q_N\}$, with Hilbert space $\mathcal{H}^\text{In}_q = \bigotimes_{i=1}^N\mathcal{H}^{\text{In}}_{q_i}$. We then create $N$ ancilla qubits, labeled by $\{\tilde{q}_1,\cdots,\tilde{q}_N\}$, where each $\tilde{q}_i$ is maximally entangled with the corresponding qubit $q_i$. Therefore, the Hilbert space of the total $2N$-qubit system is $\mathcal{H}_{\tilde{q}}^\text{In}\otimes \mathcal{H}_q^\text{In}$. The corresponding density operator is $\rho_0^\text{CJ} \in L(\mathcal{H}_{\tilde{q}}^\text{In})\otimes L(\mathcal{H}_q^\text{In})$, where $L(\mathcal{H}_q^\text{X})$ denotes the set of linear operators on the Hilbert space $\mathcal{H}_q^\text{X}$. We can expand $\rho_0^\text{CJ}$ in terms of Pauli matrices such that
\begin{equation}
\rho^{CJ}_0 = \frac{1}{4^N} \sum_{i_1,\cdots i_N=0}^3 T_{i_1\cdots i_N}(\bigotimes_{m=1}^N \sigma_{i_m})\otimes (\bigotimes_{m=1}^N \sigma_{i_m}) ,
\end{equation}
where $T_{i_1\cdots i_N} = V_{i_1}\cdots V_{i_N}$, $\mathbf{V} = (+1,+1,-1,+1)$, and $\bm{\sigma} = (\mathbb{1},\sigma_x,\sigma_y,\sigma_z) $.
Let us now send the original qubits through a quantum channel (a completely positive and trace-preserving map) $\Phi_t: L(\mathcal{H}^{\text{In}}_q)\rightarrow L(\mathcal{H}^{\text{Out}}_q)$. Here, we consider the channel to be unitary; namely, $\Phi_t(\rho) = U_t \rho U_t^\dagger$, where $U_t$ is a unitary operator. The evolved density matrix (known as the Choi matrix) then reads
\begin{align}
\rho^{CJ}_t &= (\mathbb{1}\otimes\Phi_t)[\rho_0^\text{CJ}]\nonumber\\&=(\mathbb{1}\otimes U_t )\rho^{CJ}_0 (\mathbb{1}\otimes U_t^\dagger) \in L(\mathcal{H}_{\tilde{q}}^\text{In}) \otimes L(\mathcal{H}_q^\text{Out}).
\end{align}In general, $U_t$ can be expanded as
\begin{equation}
U_t=\sum_{\mu_1\cdots\mu_N}u_{\mu_1\cdots\mu_N}\bigotimes_{m=1}^N \sigma_{\mu_m}.
\end{equation}
We therefore can expand the Choi matrix into:
\begin{flalign}
\rho^{CJ}_t&=\frac{1}{4^N}\sum_{i_1\cdots i_N}\sum_{j_1\cdots j_N} \Omega_{j_1\cdots j_N}^{i_1\cdots i_N}(\bigotimes_{m}^N \sigma_{i_m}) \otimes (\bigotimes_{n}^N \sigma_{j_n}),& \nonumber\\
\Omega_{j_1\cdots j_N}^{i_1\cdots i_N}&=
\frac{1}{2^N}\sum_{\mu_1\cdots\mu_N}\sum_{\nu_1\cdots\nu_N}[T_{i_1\cdots i_N}u_{\mu_1\cdots \mu_N}u^*_{\nu_1\cdots \nu_N}\times&\nonumber\\
&\qquad\qquad\qquad\qquad~\prod_{m=1}^N \tr (\sigma_{j_m}\sigma_{\mu_m}\sigma_{i_m}\sigma_{\nu_m})]&\label{CJ}.
\end{flalign}
We now construct the pseudo-density matrix (PDM) through a \textit{temporal analogue of quantum state tomography (QST)} between measurement events at two different moments~\citep{fitzsimons2015quantum}. The PDM for an $N$-qubit system initialized in the maximally mixed state and undergoing $\Phi_t$ is given by
\begin{align}
R_t&=\frac{1}{4^N}\sum_{i_1\cdots i_N}\sum_{j_1\cdots j_N} C_{j_1\cdots j_N}^{i_1\cdots i_N}(\bigotimes_{m}^N \sigma_{i_m}) \otimes (\bigotimes_{n}^N \sigma_{j_n}),\nonumber\\
C_{j_1\cdots j_N}^{i_1\cdots i_N}&=\mathbf{E}[~\{\bigotimes_{m}^N \sigma_{i_m} , \bigotimes_{n}^N \sigma_{j_n}\}~]\nonumber\\
&= \frac{1}{2^N}\sum_{\mu_1\cdots\mu_N}\sum_{\nu_1\cdots\nu_N} [u_{\mu_1\cdots \mu_N}u^*_{\nu_1\cdots \nu_N}\times\nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~\prod_{m=1}^N \tr (\sigma_{j_m}\sigma_{\mu_m}\sigma_{i_m}\sigma_{\nu_m})],\label{PDM}
\end{align}
where $\mathbf{E}[~\{\bigotimes_{m}^N \sigma_{i_m} , \bigotimes_{n}^N \sigma_{j_n}\}~]$ is the expectation value of the product of the outcome of the measurement $\bigotimes_{m}^N \sigma_{i_m}$ performed at the initial time and the outcome of the measurement $\bigotimes_{n}^N \sigma_{j_n}$ performed at the final time $t$. Similarly, $R_t \in L(\mathcal{H}_{\tilde{q}}^\text{In}) \otimes L(\mathcal{H}_q^\text{Out}) $.
By comparing the coefficients of the $N$-qubit Choi matrix ($\Omega_{j_1\cdots j_N}^{i_1\cdots i_N}$) in Eq.~\eqref{CJ} with those of the PDM in Eq.~\eqref{PDM} ($C_{j_1\cdots j_N}^{i_1\cdots i_N}$), one finds that these two matrices are related through a partial transposition of the input degrees of freedom, i.e.
\begin{equation}
(\rho^\text{CJ}_t)^{T_\text{In}} = R_t.
\label{CJPDM}
\end{equation}
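The partial transposition in Eq.~\eqref{CJPDM} acts only on the input tensor factor; a minimal NumPy sketch of this operation (for operators stored with the input factor first, with an illustrative function name) is
\begin{verbatim}
import numpy as np

def partial_transpose_input(rho, dim_in, dim_out):
    """Transpose only the input factor of an operator on H_in (x) H_out."""
    t = rho.reshape(dim_in, dim_out, dim_in, dim_out)
    t = t.transpose(2, 1, 0, 3)   # swap the input row and column indices
    return t.reshape(dim_in * dim_out, dim_in * dim_out)
\end{verbatim}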
According to Ref.~\citep{ku2018hierarchy}, the TS assemblage can also be derived from the pseudo-density matrix $R_t$ [defined in Eq.~\eqref{PDM}] via the Born rule:
\begin{equation}
\sigma_{a|x}(t) = \tr_{\text{In}}[(E_{a|x}\otimes \mathbb{1}^{\otimes 2N-1})R_t],
\end{equation}
where $\tr_{\text{In}}$ denotes the partial trace over the input Hilbert space.
As mentioned in the main text, the notion of scrambling can be understood as the multipartite entanglement in the Choi state. Therefore, the insight inferred from Eq.~\eqref{CJPDM} suggests that it should be possible to reformulate information scrambling in terms of multipartite temporal quantum correlations.
\section{Proof of Theorem 1\label{pf:1}}
\begin{proof}
Let us start from the evolved assemblage for the total system (region $CD$):
\begin{align}
\sigma^\text{tot}_{a|x}(t) &= U_C\otimes U_D \Big[\frac{1}{2^N}(E_{a|x}\otimes\mathbb{1}^{\otimes N-1})\Big] U_C^\dagger \otimes U_D^\dagger \nonumber\\
&=U_C\Big[\frac{1}{2^{n_c}}(E_{a|x}\otimes\mathbb{1}^{\otimes n_c-1})\Big]U_C^\dagger \otimes U_D \frac{\mathbb{1}^{\otimes n_d}}{2^{n_d}}U_D^\dagger,\\
\sigma_{a|x}^C(t) &= U_C\Big[\frac{1}{2^{n_c}}(E_{a|x}\otimes\mathbb{1}^{\otimes n_c-1})\Big]U_C^\dagger,\\
\sigma_{a|x}^D(t) &= U_D \frac{\mathbb{1}^{\otimes n_d}}{2^{n_d+1}}U_D^\dagger.
\end{align}
Since $U_C$ and $U_D$ are unitary, the TSW is invariant under them, and we find the following results:
\begin{align}
\text{TSW}[\sigma_{a|x}^\text{tot}(t)] &= \text{TSW}[\sigma_{a|x}^\text{tot}(0)] = \text{TSW}\Big(\frac{E_{a|x}\otimes \mathbb{1}^{\otimes N-1}}{2^N}\Big),\\
\text{TSW}[\sigma_{a|x}^C(t)] &= \text{TSW}[\sigma_{a|x}^C(0)] = \text{TSW}\Big( \frac{E_{a|x}\otimes \mathbb{1}^{\otimes n_c-1}}{2^{n_c}}\Big),\\
\text{TSW}[\sigma_{a|x}^D(t)] &= \text{TSW}[\sigma_{a|x}^D(0)] = \text{TSW}\Big( \frac{\mathbb{1}^{\otimes n_d}}{2^{n_d+1}}\Big)
\end{align}
It is straightforward to conclude that $\text{TSW}[\sigma_{a|x}^D(0)] = 0$, since $\{\sigma_{a|x}^D(0)\}$ admits a local hidden state model of the form of Eq.~\eqref{LHS}. In addition,
\begin{equation}
\text{TSW}\Big(\frac{E_{a|x}\otimes \mathbb{1}^{\otimes n-1}}{2^{n}}\Big) = \text{TSW}\Big(\frac{E_{a|x}}{2}\Big)
\end{equation} for an arbitrary positive integer $n$. Therefore, we can deduce that
\begin{equation}
-T_3(t) = \text{TSW}[\sigma_{a|x}^\text{tot}(t)]-\text{TSW}[\sigma_{a|x}^C(t)]-\text{TSW}[\sigma_{a|x}^D(t)]=0.
\end{equation}
\end{proof}
\section{Proof of Theorem 2 \label{pf:2}}
\begin{proof}
We find that the sum of the TSW for regions $C$ and $D$ is invariant under any permutation of the qubits, such that
\begin{align}
&\text{TSW}[\sigma_{a|x}^C(t)]+\text{TSW}[\sigma_{a|x}^D(t)]\nonumber \\
= &\text{TSW}\Big(\frac{E_{a|x}}{2}\Big)+\text{TSW}\Big(\frac{\mathbb{1}}{4}\Big)
\end{align}
Since $\text{TSW}(\frac{\mathbb{1}}{4})=0$ and, as shown in Appendix~\ref{pf:1}, $\text{TSW}[\sigma_{a|x}^\text{tot}(t)]=\text{TSW}(\frac{E_{a|x}}{2})$, under the SWAP operation we obtain $-T_3(t) = \text{TSW}(\frac{E_{a|x}}{2})- \text{TSW}(\frac{E_{a|x}}{2})=0$.
\end{proof}
\section{The qubit Clifford scrambler}
\begin{figure*}
\caption{The three-qubit Clifford scrambling circuit of Ref.~\cite{landsman2019verified}, parametrized by the angle $\theta$, together with $-T_3$ and $-I_3$ plotted as functions of $\theta$.\label{Clifford}}
\end{figure*}
\begin{figure*}
\caption{Numerical simulations of $-I_3$ and $-T_3$ for the integrable spin chain, the chaotic spin chain, and the SYK model with different numbers of qubits.\label{finite}}
\end{figure*}
In this section, we numerically analyze the qubit Clifford scrambling circuit proposed in Ref.~\cite{landsman2019verified}. The setting involves only three qubits, with the quantum circuit depicted in Fig.~\ref{Clifford}, parametrized by an angle $\theta$. By changing $\theta$, one can tune the circuit from non-scrambling ($\theta=0$) to maximally scrambling ($\theta = \pm \frac{\pi}{2}$); the maximally scrambling case is described by the following unitary matrix
\begin{equation}
U_s=\frac{i}{2}
\begin{pmatrix}
-1&0&0&-1&0&-1&-1&0\\
0&1&-1&0&-1&0&0&1\\
0&-1&1&0&-1&0&0&1\\
1&0&0&1&0&-1&-1&0\\
0&-1&-1&0&1&0&0&1\\
1&0&0&-1&0&1&-1&0\\
1&0&0&-1&0&-1&1&0\\
0&-1&-1&0&-1&0&0&-1
\end{pmatrix}.
\end{equation}
According to Ref.~\cite{landsman2019verified}, the scrambling unitary delocalizes all single qubit Pauli operators to three qubit Pauli operators in the following way:
\begin{align}
U_s(\sigma_x\otimes \mathbb{1}\otimes\mathbb{1})U_s^\dagger &=- \sigma_x\otimes \sigma_y\otimes \sigma_y\nonumber\\
U_s(\sigma_y\otimes \mathbb{1}\otimes\mathbb{1})U_s^\dagger &=- \sigma_y \otimes \sigma_z \otimes \sigma_z \nonumber\\
U_s(\sigma_z\otimes \mathbb{1}\otimes\mathbb{1})U_s^\dagger &=- \sigma_z \otimes \sigma_x \otimes \sigma_x\nonumber\\
U_s(\mathbb{1}\otimes \sigma_x\otimes\mathbb{1})U_s^\dagger &=- \sigma_y \otimes \sigma_x \otimes \sigma_y \nonumber\\
U_s(\mathbb{1}\otimes \sigma_y\otimes\mathbb{1})U_s^\dagger &=- \sigma_z \otimes \sigma_y \otimes \sigma_z\nonumber\\
U_s(\mathbb{1}\otimes \sigma_z\otimes\mathbb{1})U_s^\dagger &=- \sigma_x \otimes \sigma_z \otimes \sigma_x\nonumber\\
U_s(\mathbb{1}\otimes \mathbb{1}\otimes\sigma_x)U_s^\dagger &=- \sigma_y\otimes \sigma_y \otimes\sigma_x\nonumber\\
U_s(\mathbb{1}\otimes \mathbb{1}\otimes\sigma_y)U_s^\dagger &=- \sigma_z\otimes \sigma_z \otimes \sigma_y\nonumber\\
U_s(\mathbb{1}\otimes \mathbb{1}\otimes\sigma_z)U_s^\dagger &=- \sigma_x \otimes \sigma_x \otimes \sigma_z.
\end{align}Such a delocalization is often known as operator growth, which can be viewed as a key signature of quantum scrambling.
In Fig.~\ref{Clifford}, we plot the values of $-T_3$ and $-I_3$ as functions of the angle $\theta$. Both $-I_3$ and $-T_3$ display an oscillating pattern with period $\pi$. The value of $-I_3$ reaches its maximum scrambling value at $\theta=\pi/2$, while $-T_3$ reaches its maximum scrambling value earlier than $-I_3$ due to the sudden vanishing of the TSW for the local regions.
\section{Numerical simulations for different system sizes \label{finitesize}}
\renewcommand{\arraystretch}{1.5}
\begin{table}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$\mathcal{I}_{I_3}(T)$ & 3-qubit & 4-qubit & 5-qubit & 8-qubit\\[5pt]
\hline
Spin chain(Integrable) & 5.295 & 2.602 & 1.764 & 0.557 \\[5pt]
\hline
Spin chain(chaotic) & 1.945 & 0.692 & 0.266 & 0.038 \\[5pt]
\hline
SYK model & 2.311 & 0.265 & 0.057 & 0.001 \\[5pt]
\hline
\end{tabular}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$\mathcal{I}_{T_3}(T)$ & 3-qubit & 4-qubit & 5-qubit & 8-qubit\\[5pt]
\hline
Spin chain(Integrable) & 7.589 & 4.240 & 2.894 & 0.329 \\[5pt]
\hline
Spin chain(chaotic) & 1.340 & 0 & 0 & 0 \\[5pt]
\hline
SYK model & 1.194 & 0 & 0 & 0 \\[5pt]
\hline
\end{tabular}
\end{center}
\caption{\label{Information backflow} The total amount of information backflow for different systems and different numbers of qubits. The top table considers $-I_3$, whereas the bottom one is for $-T_3$. Here, $T=40/g$ and $T=148/J$ for the spin chains and the SYK models, respectively. We conclude that a system with a larger number of qubits tends to exhibit a smaller amount of information backflow.}
\end{table}
In Fig.~\ref{finite}, we plot the numerical simulations of $-I_3$ and $-T_3$ for the integrable spin chain, the chaotic spin chain, and the SYK model with different numbers of qubits. We observe that as the qubit number decreases (increases), the oscillation magnitude of the information scrambling increases (decreases) for both integrable and chaotic dynamics. The result suggests that when the system size decreases (increases), it becomes more (less) likely to observe information backflow from non-local to local degrees of freedom.
Because any decrease of $-I_3$ ($-T_3$) signifies the backflow of information, we can quantify the total amount of information backflow within a time interval by summing up the total negative changes of the scrambling witnesses. More specifically, we define a quantity $\mathcal{I}_{Q}(T)$, which quantifies the total amount of information backflow for a given time interval $t\in[0,T]$, as follows
\begin{equation}
\mathcal{I}_{Q}(T)=\int_{t=0,\sigma_Q>0}^{t=T}\sigma_Q(t)~dt,
\end{equation}
where $Q \in \{I_3, T_3\}$ and $\sigma_Q(t) = \frac{d}{dt}Q(t)$. In other words, $\mathcal{I}_{Q}(T)$ integrates all positive changes of $Q$ (or, equivalently, all negative changes of $-Q$) for $t\in[0,T]$. Note that this quantification of information backflow is consistent with that used in the framework of quantum non-Markovianity (see Ref.~\cite{chen2016quantifying}, for instance). We summarize the results in Table~\ref{Information backflow}, which shows that as the number of qubits increases, the amount of information backflow $\mathcal{I}_{Q}(T)$ decreases, implying a stronger scrambling effect.
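On a discrete time grid, the integral above reduces to summing the positive increments of $Q$; a minimal NumPy sketch (with an illustrative function name) is
\begin{verbatim}
import numpy as np

def total_backflow(minus_Q_samples):
    """I_Q(T) from samples of the scrambling witness -Q(t) on a time grid.

    Sums the positive increments of Q, i.e. the negative increments of -Q,
    which approximates the integral of dQ/dt restricted to dQ/dt > 0.
    """
    dQ = -np.diff(np.asarray(minus_Q_samples, dtype=float))
    return float(np.sum(np.clip(dQ, 0.0, None)))
\end{verbatim}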
\begin{thebibliography}{65}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{https://doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Maldacena}\ and\ \citenamefont
{Stanford}(2016)}]{maldacena2016remarks}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Maldacena}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Stanford}},\ }\bibfield {title} {\bibinfo {title} {{Remarks on the
Sachdev-Ye-Kitaev model}},\ }\href
{https://doi.org/10.1103/PhysRevD.94.106002} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo
{pages} {106002} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nahum}\ \emph {et~al.}(2017)\citenamefont {Nahum},
\citenamefont {Ruhman}, \citenamefont {Vijay},\ and\ \citenamefont
{Haah}}]{nahum2017quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Nahum}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ruhman}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Vijay}},\ and\ \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Haah}},\ }\bibfield {title}
{\bibinfo {title} {{Quantum Entanglement Growth under Random Unitary
Dynamics}},\ }\href {https://doi.org/10.1103/PhysRevX.7.031016} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume}
{7}},\ \bibinfo {pages} {031016} (\bibinfo {year} {2017})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {von Keyserlingk}\ \emph {et~al.}(2018)\citenamefont
{von Keyserlingk}, \citenamefont {Rakovszky}, \citenamefont {Pollmann},\ and\
\citenamefont {Sondhi}}]{von2018operator}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~W.}\ \bibnamefont
{von Keyserlingk}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Rakovszky}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Pollmann}},\ and\ \bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Sondhi}},\ }\bibfield {title} {\bibinfo {title} {{Operator Hydrodynamics,
OTOCs, and Entanglement Growth in Systems without Conservation Laws}},\
}\href {https://doi.org/10.1103/PhysRevX.8.021013} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {8}},\
\bibinfo {pages} {021013} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Cotler}\ \emph {et~al.}(2017)\citenamefont {Cotler},
\citenamefont {Hunter-Jones}, \citenamefont {Liu},\ and\ \citenamefont
{Yoshida}}]{cotler2017chaos}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Cotler}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Hunter-Jones}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Liu}},\
and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yoshida}},\
}\bibfield {title} {\bibinfo {title} {Chaos, complexity, and random
matrices},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J.
High Energy Phys.}\ }\textbf {\bibinfo {volume} {2017}}\bibinfo {number} {
(11)},\ \bibinfo {pages} {48}}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fan}\ \emph {et~al.}(2017)\citenamefont {Fan},
\citenamefont {Zhang}, \citenamefont {Shen},\ and\ \citenamefont
{Zhai}}]{fan2017out}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Fan}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Zhang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Shen}},\ and\
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zhai}},\ }\bibfield
{title} {\bibinfo {title} {Out-of-time-order correlation for many-body
localization},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Science bulletin}\ }\textbf {\bibinfo {volume} {62}},\ \bibinfo {pages}
{707} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Khemani}\ \emph {et~al.}(2018)\citenamefont
{Khemani}, \citenamefont {Vishwanath},\ and\ \citenamefont
{Huse}}]{khemani2018operator}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Khemani}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Vishwanath}},\ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Huse}},\ }\bibfield {title} {\bibinfo {title} {{Operator Spreading and the
Emergence of Dissipative Hydrodynamics under Unitary Evolution with
Conservation Laws}},\ }\href {https://doi.org/10.1103/PhysRevX.8.031057}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo
{volume} {8}},\ \bibinfo {pages} {031057} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Page}(1993)}]{page1993average}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~N.}\ \bibnamefont
{Page}},\ }\bibfield {title} {\bibinfo {title} {Average entropy of a
subsystem},\ }\href {https://doi.org/10.1103/PhysRevLett.71.1291} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {71}},\ \bibinfo {pages} {1291} (\bibinfo {year}
{1993})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gu}\ \emph {et~al.}(2017)\citenamefont {Gu},
\citenamefont {Qi},\ and\ \citenamefont {Stanford}}]{gu2017local}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Gu}}, \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Qi}},\ and\
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Stanford}},\ }\bibfield
{title} {\bibinfo {title} {{Local criticality, diffusion and chaos in
generalized Sachdev-Ye-Kitaev models}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {J. High Energy Phys.}\ }\textbf {\bibinfo {volume}
{2017}}\bibinfo {number} { (5)},\ \bibinfo {pages} {125}}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Hayden}\ and\ \citenamefont
{Preskill}(2007)}]{hayden2007black}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Hayden}}\ and\ \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Preskill}},\ }\bibfield {title} {\bibinfo {title} {Black
holes as mirrors: quantum information in random subsystems},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {J. High Energy Phys.}\ }\textbf
{\bibinfo {volume} {2007}}\bibinfo {number} { (09)},\ \bibinfo {pages}
{120}}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sekino}\ and\ \citenamefont
{Susskind}(2008)}]{sekino2008fast}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Sekino}}\ and\ \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Susskind}},\ }\bibfield {title} {\bibinfo {title} {Fast
scramblers},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J.
High Energy Phys.}\ }\textbf {\bibinfo {volume} {2008}}\bibinfo {number} {
(10)},\ \bibinfo {pages} {065}}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lashkari}\ \emph {et~al.}(2013)\citenamefont
{Lashkari}, \citenamefont {Stanford}, \citenamefont {Hastings}, \citenamefont
{Osborne},\ and\ \citenamefont {Hayden}}]{lashkari2013towards}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Lashkari}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Stanford}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Hastings}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Osborne}},\ and\ \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Hayden}},\ }\bibfield {title} {\bibinfo {title} {Towards
the fast scrambling conjecture},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {J. High Energy Phys.}\ }\textbf {\bibinfo {volume}
{2013}}\bibinfo {number} { (4)},\ \bibinfo {pages} {22}}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Gao}\ \emph {et~al.}(2017)\citenamefont {Gao},
\citenamefont {Jafferis},\ and\ \citenamefont {Wall}}]{gao2017traversable}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Gao}}, \bibinfo {author} {\bibfnamefont {D.~L.}\
\bibnamefont {Jafferis}},\ and\ \bibinfo {author} {\bibfnamefont {A.~C.}\
\bibnamefont {Wall}},\ }\bibfield {title} {\bibinfo {title} {Traversable
wormholes via a double trace deformation},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {J. High Energy Phys.}\ }\textbf {\bibinfo
{volume} {2017}}\bibinfo {number} { (12)},\ \bibinfo {pages}
{151}}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Shenker}\ and\ \citenamefont
{Stanford}(2014{\natexlab{a}})}]{shenker2014black}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{S.~H.}\ \bibnamefont {Shenker}}\ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Stanford}},\ }\bibfield {title} {\bibinfo {title} {Black
holes and the butterfly effect},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {J. High Energy Phys.}\ }\textbf {\bibinfo {volume}
{2014}}\bibinfo {number} { (3)},\ \bibinfo {pages} {67}}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Maldacena}\ \emph {et~al.}(2017)\citenamefont
{Maldacena}, \citenamefont {Stanford},\ and\ \citenamefont
{Yang}}]{maldacena2017diving}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Maldacena}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Stanford}},\ and\ \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Yang}},\ }\bibfield {title} {\bibinfo {title} {Diving
into traversable wormholes},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Fortschr. Phys.}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo
{pages} {1700034} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Roberts}\ \emph {et~al.}(2015)\citenamefont
{Roberts}, \citenamefont {Stanford},\ and\ \citenamefont
{Susskind}}]{roberts2015localized}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Roberts}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Stanford}},\
and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Susskind}},\
}\bibfield {title} {\bibinfo {title} {Localized shocks},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {J. High Energy Phys.}\ }\textbf
{\bibinfo {volume} {2015}}\bibinfo {number} { (3)},\ \bibinfo {pages}
{51}}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Shenker}\ and\ \citenamefont
{Stanford}(2014{\natexlab{b}})}]{shenker2014multiple}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{S.~H.}\ \bibnamefont {Shenker}}\ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Stanford}},\ }\bibfield {title} {\bibinfo {title}
{Multiple shocks},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{J. High Energy Phys.}\ }\textbf {\bibinfo {volume} {2014}}\bibinfo {number}
{ (12)},\ \bibinfo {pages} {46}}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Roberts}\ and\ \citenamefont
{Stanford}(2015)}]{roberts2015diagnosing}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{D.~A.}\ \bibnamefont {Roberts}}\ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Stanford}},\ }\bibfield {title} {\bibinfo {title}
{Diagnosing chaos using four-point functions in two-dimensional conformal
field theory},\ }\href {https://doi.org/10.1103/PhysRevLett.115.131603}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {115}},\ \bibinfo {pages} {131603} (\bibinfo {year}
{2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Blake}(2016)}]{blake2016universal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Blake}},\ }\bibfield {title} {\bibinfo {title} {Universal charge diffusion
and the butterfly effect in holographic theories},\ }\href
{https://doi.org/10.1103/PhysRevLett.117.091601} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {117}},\
\bibinfo {pages} {091601} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kitaev}(2014)}]{kitaev2014hidden}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kitaev}},\ }\bibfield {title} {\bibinfo {title} {{Hidden correlations in
the Hawking radiation and thermal noise}},\ }in\ \href@noop {} {\emph
{\bibinfo {booktitle} {contribution to the Fundamental Physics Prize
Symposium}}},\ Vol.~\bibinfo {volume} {10}\ (\bibinfo {year}
{2014})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Choi}(1975)}]{choi1975completely}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.-D.}\ \bibnamefont
{Choi}},\ }\bibfield {title} {\bibinfo {title} {Completely positive linear
maps on complex matrices},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Linear Algebra Appl.}\ }\textbf {\bibinfo {volume} {10}},\
\bibinfo {pages} {285} (\bibinfo {year} {1975})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Jamio{\l}kowski}(1972)}]{jamiolkowski1972linear}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Jamio{\l}kowski}},\ }\bibfield {title} {\bibinfo {title} {Linear
transformations which preserve trace and positive semidefiniteness of
operators},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rep.
Math. Phys.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {275}
(\bibinfo {year} {1972})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hosur}\ \emph {et~al.}(2016)\citenamefont {Hosur},
\citenamefont {Qi}, \citenamefont {Roberts},\ and\ \citenamefont
{Yoshida}}]{hosur2016chaos}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Hosur}}, \bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Qi}},
\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Roberts}},\ and\
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yoshida}},\ }\bibfield
{title} {\bibinfo {title} {Chaos in quantum channels},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {J. High Energy Phys.}\ }\textbf
{\bibinfo {volume} {2016}}\bibinfo {number} { (2)},\ \bibinfo {pages}
{4}}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ding}\ \emph {et~al.}(2016)\citenamefont {Ding},
\citenamefont {Hayden},\ and\ \citenamefont {Walter}}]{ding2016conditional}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Ding}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Hayden}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Walter}},\ }\bibfield {title} {\bibinfo {title} {Conditional mutual
information of bipartite unitaries and scrambling},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {J. High Energy Phys.}\ }\textbf
{\bibinfo {volume} {2016}}\bibinfo {number} { (12)},\ \bibinfo {pages}
{145}}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Seshadri}\ \emph {et~al.}(2018)\citenamefont
{Seshadri}, \citenamefont {Madhok},\ and\ \citenamefont
{Lakshminarayan}}]{PhysRevE.98.052205}
\BibitemOpen
\bibfield {number} { }\bibfield {author} {\bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Seshadri}}, \bibinfo {author} {\bibfnamefont
{V.}~\bibnamefont {Madhok}},\ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Lakshminarayan}},\ }\bibfield {title} {\bibinfo {title}
{Tripartite mutual information, entanglement, and scrambling in permutation
symmetric systems with an application to quantum chaos},\ }\href
{https://doi.org/10.1103/PhysRevE.98.052205} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo
{pages} {052205} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Iyoda}\ and\ \citenamefont
{Sagawa}(2018)}]{PhysRevA.97.042330}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Iyoda}}\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Sagawa}},\
}\bibfield {title} {\bibinfo {title} {Scrambling of quantum information in
quantum many-body systems},\ }\href
{https://doi.org/10.1103/PhysRevA.97.042330} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo
{pages} {042330} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Shen}\ \emph {et~al.}(2020)\citenamefont {Shen},
\citenamefont {Zhang}, \citenamefont {You},\ and\ \citenamefont
{Zhai}}]{PhysRevLett.124.200504}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Shen}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zhang}},
\bibinfo {author} {\bibfnamefont {Y.-Z.}\ \bibnamefont {You}},\ and\ \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Zhai}},\ }\bibfield {title}
{\bibinfo {title} {Information scrambling in quantum neural networks},\
}\href {https://doi.org/10.1103/PhysRevLett.124.200504} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {124}},\
\bibinfo {pages} {200504} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Li}\ \emph {et~al.}(2020)\citenamefont {Li},
\citenamefont {Li},\ and\ \citenamefont {Jin}}]{PhysRevA.101.042324}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Li}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Li}},\ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Jin}},\ }\bibfield
{title} {\bibinfo {title} {Information scrambling in a collision model},\
}\href {https://doi.org/10.1103/PhysRevA.101.042324} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {101}},\
\bibinfo {pages} {042324} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Pappalardi}\ \emph {et~al.}(2018)\citenamefont
{Pappalardi}, \citenamefont {Russomanno}, \citenamefont {\ifmmode
\check{Z}\else \v{Z}\fi{}unkovi\ifmmode~\check{c}\else \v{c}\fi{}},
\citenamefont {Iemini}, \citenamefont {Silva},\ and\ \citenamefont
{Fazio}}]{PhysRevB.98.134303}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Pappalardi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Russomanno}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {\ifmmode
\check{Z}\else \v{Z}\fi{}unkovi\ifmmode~\check{c}\else \v{c}\fi{}}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Iemini}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Silva}},\ and\ \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Fazio}},\ }\bibfield {title} {\bibinfo
{title} {Scrambling and entanglement spreading in long-range spin chains},\
}\href {https://doi.org/10.1103/PhysRevB.98.134303} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {98}},\
\bibinfo {pages} {134303} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Landsman}\ \emph {et~al.}(2019)\citenamefont
{Landsman}, \citenamefont {Figgatt}, \citenamefont {Schuster}, \citenamefont
{Linke}, \citenamefont {Yoshida}, \citenamefont {Yao},\ and\ \citenamefont
{Monroe}}]{landsman2019verified}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~A.}\ \bibnamefont
{Landsman}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Figgatt}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Schuster}}, \bibinfo
{author} {\bibfnamefont {N.~M.}\ \bibnamefont {Linke}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Yoshida}}, \bibinfo {author} {\bibfnamefont
{N.~Y.}\ \bibnamefont {Yao}},\ and\ \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Monroe}},\ }\bibfield {title} {\bibinfo {title} {Verified
quantum information scrambling},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {567}},\ \bibinfo
{pages} {61} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Swingle}\ \emph {et~al.}(2016)\citenamefont
{Swingle}, \citenamefont {Bentsen}, \citenamefont {Schleier-Smith},\ and\
\citenamefont {Hayden}}]{swingle2016measuring}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Swingle}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Bentsen}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schleier-Smith}},\ and\
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Hayden}},\ }\bibfield
{title} {\bibinfo {title} {Measuring the scrambling of quantum information},\
}\href {https://doi.org/10.1103/PhysRevA.94.040302} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {94}},\
\bibinfo {pages} {040302} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Yao}\ \emph {et~al.}(2016)\citenamefont {Yao},
\citenamefont {Grusdt}, \citenamefont {Swingle}, \citenamefont {Lukin},
\citenamefont {Stamper-Kurn}, \citenamefont {Moore},\ and\ \citenamefont
{Demler}}]{yao2016interferometric}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~Y.}\ \bibnamefont
{Yao}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Grusdt}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Swingle}}, \bibinfo
{author} {\bibfnamefont {M.~D.}\ \bibnamefont {Lukin}}, \bibinfo {author}
{\bibfnamefont {D.~M.}\ \bibnamefont {Stamper-Kurn}}, \bibinfo {author}
{\bibfnamefont {J.~E.}\ \bibnamefont {Moore}},\ and\ \bibinfo {author}
{\bibfnamefont {E.~A.}\ \bibnamefont {Demler}},\ }\bibfield {title}
{\bibinfo {title} {{Interferometric Approach to Probing Fast Scrambling}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint
arXiv:1607.01801}\ } (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Zhu}\ \emph {et~al.}(2016)\citenamefont {Zhu},
\citenamefont {Hafezi},\ and\ \citenamefont {Grover}}]{zhu2016measurement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Zhu}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hafezi}},\ and\
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Grover}},\ }\bibfield
{title} {\bibinfo {title} {Measurement of many-body chaos using a quantum
clock},\ }\href {https://doi.org/10.1103/PhysRevA.94.062329} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
\end{thebibliography}
\end{document} |
\begin{document}
\title{On the Additive Constant of the $k$-server Work Function Algorithm}
\author{
Yuval Emek\thanks{Tel Aviv University, Tel Aviv, 69978 Israel.
E-mail: \texttt{[email protected]}.
This work was partially done during this author's visit at LIAFA, CNRS and
University Paris Diderot, supported by Action COST 295 DYNAMO.}
\and
Pierre Fraigniaud\thanks{CNRS and University Paris Diderot, France.
Email: \texttt{[email protected]}.
Additional support from the ANR project ALADDIN, by the INRIA
project GANG, and by COST Action 295 DYNAMO.}
\and
Amos Korman\thanks{CNRS and University Paris Diderot, France.
Email: \texttt{[email protected]}.
Additional support from the ANR project ALADDIN, by the INRIA
project GANG, and by COST Action 295 DYNAMO.}
\and
Adi Ros\'{e}n\thanks{CNRS and University of Paris 11, France.
Email: \texttt{[email protected]}.
Research partially supported by ANR projects AlgoQP and ALADDIN.}
}
\date{}
\maketitle
\begin{abstract}
We consider the Work Function Algorithm for the $k$-server problem
\cite{CL91,KP95}.
We show that if the Work Function Algorithm is $c$-competitive, then it is
also {\em strictly} $(2c)$-competitive.
As a consequence of \cite{KP95} this also shows that the Work Function
Algorithm is strictly $(4k-2)$-competitive.
\end{abstract}
\section{Introduction}
A (deterministic) online algorithm \Algorithm{} is said to be
\emph{$c$-competitive} if for all finite request sequences $\rho$, it holds
that $\Algorithm(\rho) \leq c\cdot OPT(\rho) +\beta$, where $\Algorithm(\rho)$
and $OPT(\rho)$ are the costs incurred by \Algorithm{} and the optimal
algorithm, respectively, on $\rho$, and $\beta$ is a constant independent of
$\rho$.
When this condition holds with $\beta=0$, \Algorithm{} is said to be
\emph{strictly $c$-competitive}.
The $k$-server problem is one of the most extensively studied online problems
(cf. \cite{BE}).
To date, the best known competitive ratio for the $k$-server problem on
general metric spaces is $2k-1$ \cite{KP95}, which is achieved by the Work
Function Algorithm \cite{CL91}.
A lower bound of $k$ for any metric space with at least \( k + 1 \) nodes is
also known \cite{MMS90}.
The question whether online algorithms are strictly competitive, and in
particular if there is a {\em strictly} competitive $k$-server algorithm, is
of interest for two reasons.
First, as a purely theoretical question.
Second, at times one attempts to build a competitive online algorithm by
repeatedly applying another online algorithm as a subroutine.
In that case, if the online algorithm applied as a subroutine is not strictly
competitive, the resulting online algorithm may not be competitive at all due
to the growth of the additive constant with the length of the request
sequence.
In this paper we show that there exists a strictly competitive $k$-server
algorithm for general metric spaces.
In fact, we show that if the Work Function Algorithm is $c$-competitive, then
it is also strictly $(2c)$-competitive.
As a consequence of \cite{KP95}, we thus also show that the Work Function
Algorithm is strictly $(4k-2)$-competitive.
\section{Preliminaries}
Let \( \Metric = (V, \Distance) \) be a metric space.
We consider instances of the \(k\)-server problem on \(\Metric\), and when
clear from the context, omit the mention of the metric space.
At any given time, each server resides in some node \( v \in V \).
A subset \( X \subseteq V \), \( |X| = k \), where the servers reside is
called a \emph{configuration}.
The \emph{distance} between two configurations \(X\) and \(Y\), denoted by
\(\ConfigurationDistance(X, Y)\), is defined as the weight of a minimum weight
matching between \(X\) and \(Y\).
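For concreteness, here is a small illustrative sketch (not needed for the
analysis) computing \(\ConfigurationDistance(X, Y)\) on a finite metric space;
representing the metric as a distance matrix and using the Hungarian method
via \texttt{scipy} are choices made only for this illustration.
\begin{verbatim}
# Minimal sketch: configuration distance D(X, Y) as the weight of a
# minimum-weight matching, assuming `dist` is a symmetric matrix (or dict
# of dicts) of pairwise node distances and X, Y are k-tuples of node ids.
import numpy as np
from scipy.optimize import linear_sum_assignment

def configuration_distance(X, Y, dist):
    cost = np.array([[dist[x][y] for y in Y] for x in X])
    rows, cols = linear_sum_assignment(cost)  # Hungarian method
    return cost[rows, cols].sum()
\end{verbatim}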
In every \emph{round}, a new \emph{request} \( r \in V \) is presented and
should be \emph{served} by ensuring that a server resides on the request \(r\).
The servers can move from node to node, and the movement of a server from node
\(x\) to node \(y\) incurs a \emph{cost} of \(\Distance(x,
y)\).
Fix some initial configuration \(A_0\) and some finite request sequence
\(\rho\).
The \emph{work function} \(\WorkFunction_{\rho}(X)\) of the configuration
\(X\) with respect to \(\rho\) is the optimal cost of serving \(\rho\)
starting in \(A_0\) and ending up in configuration \(X\).
The collection of work function values \( \WorkFunction_{\rho}(\cdot) = \{ (X,
\WorkFunction_{\rho}(X)) \mid X \subseteq V, |X| = k \} \) is referred to as
the \emph{work vector} of \(\rho\) (and initial configuration \(A_0\)).
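As a further illustration (again not needed for the analysis), work vectors
over a finite metric space can be computed by the well-known recurrence
\( \WorkFunction_{\rho r}(X) = \min_{x \in X} \{ \WorkFunction_{\rho}((X
\setminus \{x\}) \cup \{r\}) + \Distance(x, r) \} \) for \( r \notin X \),
together with \( \WorkFunction_{\rho r}(X) = \WorkFunction_{\rho}(X) \) for
\( r \in X \). The brute-force sketch below is exponential in \(k\) and in the
number of nodes and is meant only to make the definitions concrete.
\begin{verbatim}
# Brute-force work vectors, assuming `dist` is a matrix (or dict of dicts)
# of pairwise distances, V an iterable of node ids, A0 the initial k-tuple.
from itertools import combinations, permutations

def matching_dist(X, Y, dist):
    # minimum-weight matching between two k-tuples (fine for small k)
    return min(sum(dist[a][b] for a, b in zip(X, p))
               for p in permutations(Y))

def work_vector(V, k, A0, rho, dist):
    # w_empty(X) = D(A0, X), then apply the recurrence request by request
    w = {X: matching_dist(A0, X, dist) for X in combinations(sorted(V), k)}
    for r in rho:
        new_w = {}
        for X, val in w.items():
            if r in X:
                new_w[X] = val   # a server already sits on the request
            else:
                new_w[X] = min(
                    w[tuple(sorted((set(X) - {x}) | {r}))] + dist[x][r]
                    for x in X)
        w = new_w
    return w
\end{verbatim}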
A move of some server from node \(x\) to node \(y\) in round \(t\) is called
\emph{forced} if a request was presented at \(y\) in round \(t\).
(An empty move, in case that \( x = y \), is also considered to be forced.)
An algorithm for the \(k\)-server problem is said to be \emph{lazy} if
it only makes forced moves.
Given some configuration \(X\), an offline algorithm for the \(k\)-server
problem is said to be \emph{\(X\)-lazy} if in every round other than the last
round, it only makes forced moves, while in the last round, it makes a forced
move and it is also allowed to move servers to nodes in \(X\) from nodes not
in \(X\).
Since unforced moves can always be postponed, it follows that
\(\WorkFunction_{\rho}(X)\) can be realized by an \(X\)-lazy (offline)
algorithm for every choice of configuration \(X\).
Given an initial configuration \(A_0\) and a request sequence \(\rho\), we
denote the total cost paid by an online algorithm \Algorithm{} for serving
\(\rho\) (in an online fashion) when it starts in \(A_0\) by \(\Algorithm(A_0,
\rho)\).
The optimal cost for serving \(\rho\) starting in \(A_0\) is denoted by \(
\Optimal(A_0, \rho) = \min_{X}\{ \WorkFunction_{\rho}(X) \} \).
The optimal cost for serving \(\rho\) starting in \(A_0\) and ending in
configuration \(X\) is denoted by \( \Optimal(A_0, \rho, X) =
\WorkFunction_{\rho}(X) \).
(This seemingly redundant notation will prove useful hereafter.)
\Comment{
Note that the number of servers used by the algorithm (either online or
optimal) is implicitly cast in the above notation through the cardinality of
the initial configuration \(A_0\).
(This will be important when the number of servers is not explicitly stated.)
}
Consider some metric space \(\Metric\).
In the context of the \(k\)-server problem, an algorithm \Algorithm{} is said
to be \emph{\(c\)-competitive} if for any initial configuration \(A_0\), and
any finite request sequence \(\rho\), \( \Algorithm(A_0, \rho) \leq c \cdot
\Optimal(A_0, \rho) + \beta \), where \(\beta\) may depend on the initial
configuration \(A_0\), but not on the request sequence \(\rho\).
\Algorithm{} is said to be \emph{strictly \(c\)-competitive} if it is
\(c\)-competitive with additive constant \( \beta = 0 \), that is, if for any
initial configuration \(A_0\) and any finite request sequence \(\rho\), \(
\Algorithm(A_0, \rho) \leq c \cdot \Optimal(A_0, \rho) \).
As common in other works, we assume that the online algorithm and the optimal
algorithm have the same initial configuration.
\section{Strictly competitive analysis}
We prove the following theorem.
\begin{AvoidOverfullParagraph}
\begin{theorem} \label{theorem:OmitAdditiveTermWFA}
If the Work Function Algorithm is \(c\)-competitive, then it is also strictly
\( (2 c) \)-competitive.
\end{theorem}
\end{AvoidOverfullParagraph}
In fact, we shall prove Theorem~\ref{theorem:OmitAdditiveTermWFA} for a
(somewhat) larger class of \(k\)-server online algorithms, referred to as
\emph{robust} algorithms (this class will be defined soon).
We say that an online algorithm for the \(k\)-server problem is
\emph{request-sequence-oblivious}, if for every initial configuration \(A_0\),
request sequence \(\rho\), current configuration \(X\), and request \(r\), the
action of the algorithm on \(r\) after it served \(\rho\) (starting in
\(A_0\)) is fully determined by \(X\), \(r\), and the work vector
\(\WorkFunction_{\rho}(\cdot)\).
In other words, a request-sequence-oblivious online algorithm can replace the
explicit knowledge of \(A_0\) and \(\rho\) with the knowledge of
\(\WorkFunction_{\rho}(\cdot)\).
An online algorithm is said to be \emph{robust} if it is lazy,
request-sequence-oblivious, and its behavior does not change if one adds to
all entries of the work vector any given value \(d\).
We prove that if a robust algorithm is \(c\)-competitive, then it is also
strictly \((2 c)\)-competitive.
Theorem~\ref{theorem:OmitAdditiveTermWFA} follows as the work function
algorithm is robust.
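To make these properties concrete, the following sketch describes a single
step of the Work Function Algorithm as it is usually stated in the literature
(the algorithm itself is not restated in this paper): on request \(r\), from
configuration \(X\), move the server from the node \( x \in X \) minimizing
\( \WorkFunction_{\rho r}((X \setminus \{x\}) \cup \{r\}) + \Distance(x, r) \).
The rule reads only the work vector, the current configuration and the
request, and adding a common constant to all work values does not change the
minimizer; these are exactly the properties used above.
\begin{verbatim}
# One lazy step of the (standard) Work Function Algorithm.  `w` is the
# work vector after the request r has been appended, X the current
# configuration (a k-tuple), `dist` a matrix of pairwise distances.
def wfa_step(w, X, r, dist):
    if r in X:
        return X, 0.0   # forced "empty" move
    best_x = min(X, key=lambda x:
                 w[tuple(sorted((set(X) - {x}) | {r}))] + dist[x][r])
    new_X = tuple(sorted((set(X) - {best_x}) | {r}))
    return new_X, dist[best_x][r]
\end{verbatim}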
In what follows, we consider a robust online algorithm \Algorithm{} and a
lazy optimal (offline) algorithm \Optimal{} for the \(k\)-server
problem.
(In some cases, \Optimal{} will be assumed to be \(X\)-lazy for some
configuration \(X\).
This will be explicitly stated.)
We also consider some underlying metric \( \Metric = (V, \Distance) \) that we
do not explicitly specify.
Suppose that \Algorithm{} is \(\alpha\)-competitive and given the initial
configuration \(A_0\), let \( \beta = \beta(A_0) \) be the additive constant
in the performance guarantee.
Subsequently, we fix some arbitrary initial configuration \(A_0\) and request
sequence \(\rho\).
We have to prove that \( \Algorithm(A_0, \rho) \leq 2 \alpha \Optimal(A_0,
\rho) \).
A key ingredient in our proof is a designated request sequence \(\sigma\)
referred to as the \emph{anchor} of \(A_0\) and \(\rho\).
Let \( \ell = \min \{ \Distance(x, y) \mid x, y \in A_0, x \neq y \} \).
Given that \( A_0 = \{ x_1, \dots, x_k \} \), the anchor is
defined to be
\[
\sigma = (x_1 \cdots x_k)^m \text{, where }
m = \left\lceil \max\left\{ \frac{2 k \Optimal(A_0, \rho)}{\ell} + k^2,
\frac{2 \alpha \Optimal(A_0, \rho) + \beta(A_0)}{\ell} \right\} \right\rceil +
1 ~ .
\]
That is, the anchor consists of \(m\) \emph{cycles} of requests presented at
the nodes of \(A_0\) in a round-robin fashion.
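In code, producing the anchor amounts to evaluating the formula for \(m\); the
sketch below treats \(\ell\), \(\Optimal(A_0, \rho)\), \(\alpha\) and
\(\beta(A_0)\) as given numbers and is included only as an illustration.
\begin{verbatim}
# Build the anchor sigma = (x_1 ... x_k)^m for A0 = (x_1, ..., x_k).
import math

def anchor(A0, ell, opt_rho, alpha, beta):
    k = len(A0)
    m = math.ceil(max(2 * k * opt_rho / ell + k * k,
                      (2 * alpha * opt_rho + beta) / ell)) + 1
    return list(A0) * m
\end{verbatim}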
Informally, we shall append \(\sigma\) to \(\rho\) in order to ensure that
both \Algorithm{} and \Optimal{} return to the initial configuration \(A_0\).
This will allow us to analyze request sequences of the form \( (\rho \sigma)^q
\) as \(q\) disjoint executions on the request sequence \( \rho \sigma \), thus
preventing any possibility to ``hide'' an additive constant in the performance
guarantee of \(\Algorithm(A_0, \rho)\).
Before we can analyze this phenomenon, we have to establish some preliminary
properties.
\begin{AvoidOverfullParagraph}
\begin{proposition} \label{proposition:RetracingCost}
For every initial configuration \(A_0\) and request sequence \(\rho\), we have
\( \Optimal(A_0, \rho, A_0) \leq 2 \cdot \Optimal(A_0, \rho) \).
\end{proposition}
\end{AvoidOverfullParagraph}
\begin{proof}
Consider an execution \(\eta\) that
(i) starts in configuration \(A_0\);
(ii) serves \(\rho\) optimally; and
(iii) moves (optimally) to configuration \(A_0\) at the end of round
\(|\rho|\).
The cost of step (iii) cannot exceed that of step (ii) as we can always
retrace the moves \(\eta\) did in step (ii) back to the initial configuration
\(A_0\).
The assertion follows since \(\eta\) is a candidate to realize \(\Optimal(A_0,
\rho, A_0)\).
\end{proof}
Since no moves are needed in order to serve the anchor \(\sigma\) from
configuration \(A_0\), it follows that
\begin{equation}
\Optimal(A_0, \rho) \leq \Optimal(A_0, \rho \sigma) \leq 2 \cdot \Optimal(A_0,
\rho) ~ .
\label{equation:TwiceTheOptimal}
\end{equation}
Proposition~\ref{proposition:RetracingCost} is also employed to establish the
following lemma.
\begin{AvoidOverfullParagraph}
\begin{lemma} \label{lemma:OfflineVisitingInitialConfiguration}
Given some configuration \(X\), consider an \(X\)-lazy execution
\(\eta\) that realizes \(\Optimal(A_0, \rho \sigma, X)\).
Then \(\eta\) must be in configuration \(A_0\) at the end of round \(t\) for
some \( |\rho| \leq t < |\rho \sigma| \).
\end{lemma}
\end{AvoidOverfullParagraph}
\begin{proof}
Assume by way of contradiction that \(\eta\)'s configuration at the end of
round \(t\) differs from \(A_0\) for every \( |\rho| \leq t < |\rho \sigma|
\).
The cost \( \Optimal(A_0, \rho \sigma, X) \) paid by \(\eta\) is at most \( 2
\cdot \Optimal(A_0, \rho) + \ConfigurationDistance(A_0, X) \) as
Proposition~\ref{proposition:RetracingCost} guarantees that this is the total
cost paid by an execution that
(i) realizes \(\Optimal(A_0, \rho, A_0)\);
(ii) stays in configuration \(A_0\) until (including) round \( |\rho \sigma|
\); and
(iii) moves (optimally) to configuration \(X\).
Let \(Y\) be the configuration of \(\eta\) at the end of round \(|\rho|\).
We can rewrite the total cost paid by \(\eta\) as \( \Optimal(A_0, \rho
\sigma, X) = \Optimal(A_0, \rho, Y) + \Optimal(Y, \sigma, X) \).
Clearly, the former term \(\Optimal(A_0, \rho, Y)\) is not smaller than
\(\ConfigurationDistance(A_0, Y)\) which lower bounds the cost paid by any
execution that starts in configuration \(A_0\) and ends in configuration \(Y\).
We will soon prove (under the assumption that \(\eta\)'s configuration at the
end of round \(t\) differs from \(A_0\) for every \( |\rho| \leq t < |\rho
\sigma| \)) that the latter term \(\Optimal(Y, \sigma, X)\) is (strictly)
greater than \( 2 \cdot\Optimal(A_0, \rho) + \ConfigurationDistance(Y, X) \).
Therefore \( \ConfigurationDistance(A_0, Y) + 2 \cdot \Optimal(A_0, \rho) +
\ConfigurationDistance(Y, X) < \Optimal(A_0, \rho, Y) + \Optimal(Y, \sigma,
X) = \Optimal(A_0, \rho \sigma, X) \).
The inequality \( \Optimal(A_0, \rho \sigma, X) \leq 2 \cdot \Optimal(A_0,
\rho) + \ConfigurationDistance(A_0, X) \) then implies that
\( \ConfigurationDistance(A_0, X) > \ConfigurationDistance(A_0, Y) +
\ConfigurationDistance(Y, X) \), in contradiction to the triangle inequality.
It remains to prove that \( \Optimal(Y, \sigma, X) > 2 \cdot \Optimal(A_0,
\rho) + \ConfigurationDistance(Y, X) \).
For that purpose, we consider the suffix \(\phi\) of \(\eta\) which corresponds
to the execution on the subsequence \(\sigma\) (\(\phi\) is an
\(X\)-lazy execution that realizes \(\Optimal(Y, \sigma, X)\)).
Clearly, \(\phi\) must shift from configuration \(Y\) to configuration \(X\),
paying cost of at least \(\ConfigurationDistance(Y, X)\).
Moreover, since \(\phi\) is \(X\)-lazy, and by the assumption that
\(\phi\) does not reside in configuration \(A_0\), it follows that in each of
the \(m\) cycles of the round-robin,
at least one server must move between two different nodes in
\(A_0\).
(To see this, recall that each move of the lazy
execution ends at a node of \(A_0\); on the other hand, the \(k\) servers
never simultaneously occupy the configuration \(A_0\).)
Thus \(\phi\) pays a cost of at least \(\ell\) per cycle, and \( m \ell \)
altogether.
A portion of this \( m \ell \) cost can be charged on the shift from
configuration \(Y\) to configuration \(X\), but we show that the remaining
cost is strictly greater than \( 2 \cdot \Optimal(A_0, \rho) \), thus deriving
the desired inequality \( \Optimal(Y, \sigma, X) > 2 \cdot \Optimal(A_0, \rho)
+ \ConfigurationDistance(Y, X) \).
The \(k\) servers make at least \(m\) moves between two different nodes in
\(A_0\) when \(\phi\) serves the subsequence \(\sigma\), hence there exists
some server \(s\) that makes at least \( m / k \) such moves as part of
\(\phi\).
The total cost paid by all other servers in \(\phi\) is bounded from below by
their contribution to \(\ConfigurationDistance(Y, X)\).
As there are \(k\) nodes in \(A_0\), at most \(k\) out of the \( m / k \)
moves made by \(s\) arrive at a new node, i.e., a node which was not previously
reached by \(s\) in \(\phi\).
Therefore at least \( m / k - k \) moves of \(s\) cannot be charged on its
shift from \(Y\) to \(X\).
It follows that the cost paid by \(s\) in \(\phi\) is at least \( (m / k - k)
\ell \) plus the contribution of \(s\) to \(\ConfigurationDistance(Y, X)\).
The assertion now follows by the definition of \(m\), since \( (m / k - k)
\ell > 2 \cdot \Optimal(A_0, \rho) \).
\end{proof}
Since the optimal algorithm \Optimal{} is assumed to be lazy,
Lemma~\ref{lemma:OfflineVisitingInitialConfiguration} implies the following
corollary.
\begin{corollary} \label{corollary:OfflineEndingInitialConfiguration}
If the optimal algorithm \Optimal{} serves a request sequence of the form \(
\rho \sigma \tau \) (for any choice of suffix \(\tau\)) starting from the
initial configuration \(A_0\), then at the end of round \( |\rho \sigma| \) it
must be in configuration \(A_0\).
\end{corollary}
Consider an arbitrary configuration \(X\).
We want to prove that \( \WorkFunction_{\rho \sigma}(X) \geq
\WorkFunction_{\rho \sigma}(A_0) + \ConfigurationDistance(A_0, X) \).
To this end, assume by way of contradiction that \( \WorkFunction_{\rho
\sigma}(X) < \WorkFunction_{\rho \sigma}(A_0) + \ConfigurationDistance(A_0, X)
\).
Fix \( \WorkFunction_0 = \WorkFunction_{\rho \sigma}(A_0) \).
Lemma~\ref{lemma:OfflineVisitingInitialConfiguration} guarantees that an
\(X\)-lazy execution \(\eta\) that realizes \( \WorkFunction_{\rho
\sigma}(X) = \Optimal(A_0, \rho \sigma, X) \) must be in configuration \(A_0\)
at the end of some round \( |\rho| \leq t < |\rho \sigma| \).
Let \(\WorkFunction_t\) be the cost paid by \(\eta\) up to the end of round
\(t\).
The cost paid by \(\eta\) in order to move from \(A_0\) to \(X\) is
at least \(\ConfigurationDistance(A_0, X)\), hence \( \WorkFunction_{\rho
\sigma}(X) \geq \WorkFunction_t + \ConfigurationDistance(A_0, X) \).
Therefore \(\WorkFunction_t < \WorkFunction_0\), which derives a
contradiction, since \(\WorkFunction_0\) can be realized by an execution that
reaches \(A_0\) at the end of round \(t\) and stays in \(A_0\) until it
completes serving \(\sigma\) without paying any more cost.
As \( \WorkFunction_{\rho \sigma}(X) \leq \WorkFunction_{\rho \sigma}(A_0) +
\ConfigurationDistance(A_0, X) \), we can establish the following corollary.
\begin{corollary} \label{corollary:WorkFunctionEnding}
For every configuration \(X\), we have \( \WorkFunction_{\rho \sigma}(X) =
\WorkFunction_{\rho \sigma}(A_0) + \ConfigurationDistance(A_0, X) \).
\end{corollary}
Recall that we have fixed the initial configuration \(A_0\) and the request
sequence \(\rho\) and that \(\sigma\) is their anchor.
We now turn to analyze the request sequence \( \chi = (\rho \sigma)^{q} \),
where \(q\) is a sufficiently large integer that will be determined soon.
Corollary~\ref{corollary:OfflineEndingInitialConfiguration} guarantees that
\Optimal{} is in the initial configuration \(A_0\) at the end of round \(
|\rho \sigma| \).
By induction on \(i\), it follows that \Optimal{} is in \(A_0\) at the end of
round \( i \cdot |\rho \sigma| \) for every \( 1 \leq i \leq q \).
Therefore the total cost paid by \Optimal{} on \(\chi\) is merely
\begin{equation}
\Optimal(A_0, \chi) = q \cdot \Optimal(A_0, \rho \sigma) ~ .
\label{equation:OfflineRepetition}
\end{equation}
Suppose by way of contradiction that the online algorithm
\Algorithm{}, when invoked on the request sequence \( \rho \sigma \) from initial
configuration \(A_0\), does not end up in \(A_0\).
Since \Algorithm{} is lazy, we conclude that \Algorithm{} is not in
configuration \(A_0\) at the end of round \(t\) for any \( |\rho| \leq t <
|\rho \sigma| \).
Therefore in each cycle of the round-robin, \Algorithm{} moves at least once
between two different nodes in \(A_0\), paying cost of at least \(\ell\).
By the definition of \(m\) (the number of cycles), this sums up to \(
\Algorithm(A_0, \rho \sigma) \geq m \ell > 2 \alpha \Optimal(A_0, \rho) +
\beta(A_0) \).
By inequality~(\ref{equation:TwiceTheOptimal}), we conclude that \(
\Algorithm(A_0, \rho \sigma) > \alpha \Optimal(A_0, \rho \sigma) + \beta(A_0)
\), in contradiction to the performance guarantee of \Algorithm{}.
It follows that \Algorithm{} returns to the initial configuration \(A_0\)
after serving the request sequence \( \rho \sigma \).
Consider some two request sequences \(\tau\) and \(\tau'\).
We say that the work vector \(\WorkFunction_{\tau}(\cdot)\) is
\emph{\(d\)-equivalent} to the work vector \(\WorkFunction_{\tau'}(\cdot)\),
where \(d\) is some real, if \( \WorkFunction_{\tau}(X) -
\WorkFunction_{\tau'}(X) = d \) for every \( X \subseteq V \), \( |X| = k \).
It is easy to verify that if \(\WorkFunction_{\tau}(\cdot)\) is
\(d\)-equivalent to \(\WorkFunction_{\tau'}(\cdot)\), then \(
\WorkFunction_{\tau r}(\cdot) \) is \(d\)-equivalent to \(
\WorkFunction_{\tau' r}(\cdot) \) for any choice of request \( r \in V \).
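A \(d\)-equivalence test is immediate to express; the following sketch (an
illustration only) treats work vectors as dictionaries indexed by the same
configurations.
\begin{verbatim}
# Return the common offset d if w1 and w2 are d-equivalent, else None.
def d_equivalence(w1, w2, tol=1e-9):
    configs = list(w1)
    d = w1[configs[0]] - w2[configs[0]]
    if all(abs(w1[X] - w2[X] - d) <= tol for X in configs):
        return d
    return None
\end{verbatim}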
Corollary~\ref{corollary:WorkFunctionEnding} guarantees that the work vector
\(\WorkFunction_{\rho \sigma}(\cdot)\) is \(d\)-equivalent to the work vector
\(\WorkFunction_{\omega}(\cdot)\) for some real \(d\), where \(\omega\) stands
for the empty request sequence.
(In fact, \(d\) is exactly \( \WorkFunction_{\rho \sigma}(A_0) \).)
By induction on \(j\), we show that for every prefix \(\pi\) of \( \rho \sigma
\) and for every \( 1 \leq i < q \) such that \( |(\rho \sigma)^i \pi| = j \),
the work vector \( \WorkFunction_{(\rho \sigma)^i \pi}(\cdot) \) is
\(d\)-equivalent to the work vector \(\WorkFunction_{\pi}(\cdot)\) for some
real \(d\).
Therefore the behavior of the robust online algorithm \Algorithm{} on \(\chi\)
is merely a repetition (\(q\) times) of its behavior on \( \rho \sigma \) and
\begin{equation}
\Algorithm(A_0, \chi) = q \cdot \Algorithm(A_0, \rho \sigma) ~ .
\label{equation:OnlineRepetition}
\end{equation}
We are now ready to establish the following inequality:
\begin{align*}
\Algorithm(A_0, \rho)
& \leq ~ \Algorithm(A_0, \rho \sigma) \\
& = ~ \frac{\Algorithm(A_0, \chi)}{q} \quad \text{by
equation~(\ref{equation:OnlineRepetition})} \\
& \leq ~ \frac{\alpha \Optimal(A_0, \chi) + \beta(A_0)}{q} \quad \text{by the
performance guarantee of \Algorithm{}} \\
& = ~ \frac{\alpha q \Optimal(A_0, \rho \sigma) + \beta(A_0)}{q} \quad \text{by
equation~(\ref{equation:OfflineRepetition})} \\
& \leq ~ \frac{2 \alpha q \Optimal(A_0, \rho) + \beta(A_0)}{q} \quad \text{by
inequality~(\ref{equation:TwiceTheOptimal})} \\
& = ~ 2 \alpha \Optimal(A_0, \rho) + \frac{\beta(A_0)}{q} ~ .
\end{align*}
For any real \( \epsilon > 0 \), we can fix \( q = \lceil \beta(A_0) /
\epsilon \rceil + 1 \) and conclude that \( \Algorithm(A_0, \rho) < 2 \alpha
\Optimal(A_0, \rho) + \epsilon \).
Theorem~\ref{theorem:OmitAdditiveTermWFA} follows.
As the Work Function Algorithm is known to be \( (2 k - 1) \)-competitive
\cite{KP95}, we also get the following corollary.
\begin{corollary}\label{corollary:strictly}
The Work Function Algorithm is strictly \((4k-2)\)-competitive.
\end{corollary}
\paragraph{Acknowledgments} We thank Elias Koutsoupias for useful discussions.
\end{document} |
\begin{document}
\title{A relative Seidel morphism and the Albers map}
\author{Shengda Hu}
\address{Department of pure mathematics, University of Waterloo, Waterloo (Ontario), Canada}
\curraddr{}
\email{[email protected]}
\thanks{}
\author{Fran\c cois Lalonde}
\address{D\'epartement de math\'ematiques et de statistique, Universit\'e de Montr\'eal, Montr\'eal (Qu\'ebec), Canada}
\curraddr{}
\email{[email protected]}
\thanks{}
\subjclass[2000]{53D12, 53D40, 53D45, 57R58, 57S05}
\date{}
\dedicatory{}
\begin{abstract} In this note, we introduce a relative (or Lagrangian) version of the Seidel homomorphism that assigns to each homotopy class of paths in ${\rm Ham}(M)$, starting at the identity and ending on the subgroup that preserves a given Lagrangian submanifold $L$, an element in the Floer homology of $L$. We show that these elements are related to the absolute Seidel elements by the Albers map. We also study, for later use, the effect of reversing the signs of the symplectic structure as well as the orientations of the generators and of the operations on the Floer homologies.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}\label{intro}
Let $(M, \omega)$ be a closed symplectic manifold. Seidel constructed in \cite{Seidel} a map from a covering of $\pi_1{\rm Ham}(M, \omega)$ to the invertible elements of either $QH_*(M, \omega)$ or $FH_*(M)$. It has found many uses, for example, in the study of Hamiltonian fibrations (Lalonde--McDuff--Polterovich \cite{LalondeMcDuffPolterovich}) and the quantum ring structure of toric varieties (McDuff-Tolman \cite{McDuffTolman}). In this article, we introduce a similar construction when given a Lagrangian submanifold $L$ in $(M, \omega)$. Instead of considering the loops in ${\rm Ham}(M, \omega)$, we consider the paths in ${\rm Ham}(M, \omega)$ starting at the identity and ending in the subgroup ${\rm Ham}_L(M, \omega)$:
$${\rm Ham}_L(M, \omega) := \{\varphi \in {\rm Ham}(M, \omega) | \varphi (L) = L \}$$
Another natural subgroup to consider consists of the Hamiltonian symplectomorphisms that fix $L$ pointwise. It is easy to see that any diffeomorphism of $L$ that is isotopic to the identity can be extended to a Hamiltonian symplectomorphism of $M$ that preserves $L$. It follows that the two choices of subgroups essentially differ by ${\rm Diff}(L)$. For the purpose of this paper, the reader can think of either choice.
Under the monotonicity assumption in the Lagrangian setting, one can define the Seidel elements for the elements in a covering of $\pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega))$. There is a homotopy exact sequence for the Hamiltonian groups, and we show that
the following diagram commutes (cf. corollary \ref{lagSeidel:3actions})
\begin{equation}\label{intro:commutdiag}
\text{
\xymatrix{
\widetilde \pi_1{\rm Ham}(M, \omega) \ar[r]\ar[d]_{\Psi} & \widetilde \pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega)) \ar[d]_{\Psi_L}
\ar[r] & \widetilde \pi_0{\rm Ham}_L(M, \omega)\\
FH_*(M) \ar[r]^{\mathscr A} & FH_*(M, L)
&
}}
\end{equation}
where $\Psi$ and $\Psi_L$ denote the respective absolute and relative Seidel maps and $\mathscr A$ denotes Albers' comparison map between $FH_*(M)$ and $FH_*(M, L)$ \cite{Albers}.
We should explain the above diagram a little more. The Seidel maps are defined for the extensions $\widetilde\pi_1$ of the respective $\pi_1$'s by the corresponding period groups $\Gamma_\omega$ or $\Gamma_L$.
An element $\widetilde g \in \widetilde \pi_1{\rm Ham}(M, \omega)$ can be viewed canonically as an element in $\widetilde \pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega))$ (cf. lemma \ref{lagSeidel:groupinclusion}) and the corresponding Seidel elements are related by $\mathscr A$.
The group $\widetilde \pi_0{\rm Ham}_L(M, \omega)$ is an extension of $\pi_0 {\rm Ham}_L(M, \omega)$
so that the top sequence is exact.
On the other hand, we may adopt McDuff's point of view in \cite{McDuff}, where the Seidel map is defined on the $\pi_1$'s directly by choosing a preferred extension $\widetilde g$ for each $g \in \pi_1$. Then \eqref{intro:commutdiag} holds with the non-extended homotopy groups.
One of the original motivations for studying these Seidel elements and the above diagram was to obtain information on the third term in the exact sequence, namely $\pi_0{\rm Ham}_L(M, \omega)$. This is the most elementary question that one may ask about the subgroup ${\rm Ham}_L(M, \omega)$ of Hamiltonian transformations leaving a given Lagrangian submanifold invariant.
If, say, one calls the elements in the image of ${\mathscr A} \circ \Psi$ the {\it circular Seidel elements} in $FH_*(M,L)$ and the elements in the image of $\Psi_L$ the {\it semi-circular} ones, then by the commutativity of the diagram, the latter ones contain the former ones. One could try to find semi-circular, but not circular, elements by computing explicitly the two Seidel morphisms. This would imply that
$$
\pi_1{\rm Ham}(M, \omega) \rightarrow \pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega))
$$
is not onto, and therefore $\pi_0{\rm Ham}_L(M, \omega)$ is not trivial. Of course, if a given component of ${\rm Ham}_L(M, \omega)$ is made of Hamiltonian diffeomorphisms whose restriction to $L$ is not isotopic to the identity, it is obviously not the identity component. Hence our construction is useful when components are made of Hamiltonian diffeomorphisms whose restriction to $L$ is isotopic to the identity (or cannot be easily shown to be non-isotopic to the identity).
In the presence of a real structure on $M$, i.e. of an anti-symplectic
involution $c$ with fixed point set $L$,
one could replace everywhere ${\rm Ham}_L(M, \omega)$ by its subgroup
${\rm Ham}_c(M, \omega)$ of
Hamiltonian diffeomorphisms of $M$ commuting with $c$. Obviously the
corresponding diagram \eqref{intro:commutdiag} would still commute. In this paper, we restrict ourselves to a detailed setting of the theory, postponing to a forthcoming paper applications and computations of examples.
In \S\ref{lagFloer}, we review Lagrangian Floer homology in the setting of Hamiltonian paths, cf. \cite{Albers}. Other versions of Floer homology of a single Lagrangian already exist, e.g. \cite{Oh, BiranCornea, BiranCornea2, FOOO} by Oh, Fukaya--Oh--Ohta--Ono and Biran--Cornea. Because we will be working over ${\mathbb R}$, we include a discussion of the coherent orientation for the Floer trajectories, essentially following Fukaya--Oh--Ohta--Ono \cite{FOOO}. The necessary gluing is similar to that found in Albers \cite{Albers}. The half-pair-of-pants product is analogous to the pair-of-pants product in Hamiltonian Floer homology, with some notable differences (for instance, its non-commutativity). It should coincide with the product defined from holomorphic triangles as in \cite{FOOO} or using the linear cluster complex as in \cite{BiranCornea}. For later use, we also discuss the action of $FH_*(M)$ as well as Albers' comparison map. Other versions of the action of $FH_*(M)$ have been described before, e.g. in the context of the linear cluster complex (or pearl complex) \cite{BiranCornea} or in the context of the $L_\infty$-action on the $A_\infty$-algebra in \cite{FOOO}.
In \S\ref{lagSeidel} we carry out Seidel's construction for $FH_*(M, L)$ and show that it has the expected properties. It is also in this section that we show the commutativity of diagram \eqref{intro:commutdiag}. Finally, for later use related to the computations in $(X, \omega) \times (X, -\omega)$, we explain in \S\ref{reversed:novikov} and in \S\ref{reversed:lagFloer} the effect of reversing the orientations of the generators of the symplectic and Lagrangian Floer homologies, as well as the reversal of time in the operations on Floer homologies.
We would like to mention here that the possibility of defining a relative Seidel morphism appears implicitly in the recent paper of R\'emi Leclercq \cite{Leclercq}. Indeed, the proof of his basic Proposition 3.1, which he needs in order to define his Lagrangian spectral invariants, contains the main ingredients of the construction of a relative Seidel morphism, even though it is not presented in these terms.
\noindent
{\bf Acknowledgement.} We would like to thank the referee for the constructive suggestions. The first author would like to thank Octav Cornea, Dusa McDuff and Jean-Yves Welschinger for valuable discussions.
\section{Lagrangian Floer theory}\label{lagFloer}
Let $(M, \omega)$ be a symplectic manifold and $L$ a Lagrangian submanifold. We set up here the Floer theory for $(M, L)$ with a generic Hamiltonian perturbation.
\subsection{Novikov rings}\label{lagFloer:novikov}
We think of $\omega$, $c_1(TM)$ and $\mu_L$ as functions on $\pi_2(M)$ or $\pi_2(M, L)$ and denote them as:
$$I_\omega : \pi_2(M) \text{ or } \pi_2(M, L) \to {\mathbb R}, I_c : \pi_2(M) \to {\mathbb R} \text{ and } I_\mu : \pi_2(M, L) \to {\mathbb R}.$$
Let
$$\Gamma_\omega = \frac{\pi_2(M)}{\ker I_\omega \cap \ker I_c} \text{ and } \Gamma_L = \frac{\pi_2(M, L)}{\ker I_\omega \cap \ker I_\mu},$$
where one
could as well replace $\pi_2(M)$ and $\pi_2(M, L)$ by their images under the respective Hurewicz homomorphisms, namely, the spherical homology groups $H^S_2(M)$ and $H^S_2(M, L)$, since the quotients are the same\footnote{
If one adopts the point of view in \cite{McDuff} so that \eqref{intro:commutdiag} holds for the non-extended groups, then the $\pi_2$'s are replaced by the respective spherical homology groups with $\mathbb R$-coefficients, namely $H^S_2(M;\mathbb R)$ and $H^S_2(M,L;\mathbb R)$. The maps $I_\omega$, $I_c$ and $I_\mu$ are well defined on these homology groups and $\Gamma_\omega$ and $\Gamma_L$ are defined as the respective quotients.
}.
Then the Novikov rings for quantum (or Floer) homology are defined as follows:
$$\Lambda_\omega = \left\{\left.\sum_{B \in \Gamma_\omega} a_B e^{B} \right| a_B \in {\mathbb R} \text{ and } \forall K \in {\mathbb R}, \#\{B | a_B \neq 0 \text{ and } \omega(B) < K\} < \infty \right\}$$
$$\Lambda_L = \left\{\left.\sum_{B \in \Gamma_L} a_B e^{B} \right| a_B \in {\mathbb R} \text{ and } \forall K \in {\mathbb R}, \#\{B | a_B \neq 0 \text{ and } \omega(B) < K\} < \infty\right\}.$$
Their degrees are defined by $\deg(e^{B}) = -2I_c(B)$ and $\deg(e^{B}) = -I_\mu(B)$.
The homotopy exact sequence $\pi_2(L) \to \pi_2(M) \to \pi_2(M, L) \to \pi_1(L)$ induces an inclusion on the quotients
$$i : \Gamma_\omega \to \Gamma_L,$$
since $\ker I_\omega\cap \ker I_c$ is mapped to $\ker I_\omega \cap \ker I_\mu$. It follows that there is a natural inclusion of the Novikov rings:
$$i : \Lambda_\omega \to \Lambda_L.$$
We can then make a $\Lambda_L$-module into a $\Lambda_\omega$-module via this inclusion.
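As a simple illustration (a standard example, which we include only for concreteness and do not use later), take $M = S^2$ with an area form of total area $2A$ and let $L$ be the equator. Then $\Gamma_\omega \cong {\mathbb{Z}}$, generated by the class $S$ of the sphere with $I_\omega(S) = 2A$ and $I_c(S) = 2$, while $\Gamma_L \cong {\mathbb{Z}}$, generated by the class $D$ of a hemisphere with $I_\omega(D) = A$ and $I_\mu(D) = 2$ (the two hemisphere classes become equal in the quotient). In particular
$$\deg(e^{S}) = -4, \quad \deg(e^{D}) = -2 \quad \text{ and } \quad i(S) = 2D,$$
so that the inclusion $i$ is compatible with the degree conventions above.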
\subsection{The flow equation}\label{lagFloer:functeq}
Let $H : [0,1] \times M \to {\mathbb R}$ be a time-dependent Hamiltonian function and $\mathbf J = \{J_t\}_{t \in [0,1]}$ a time-dependent $\omega$-compatible almost complex structure. The space of such pairs is $$\mathcal{HJ} = C^\infty([0,1] \times M) \times \mathcal J,$$
where $\mathcal J$ is the space of one-parameter families of $\omega$-compatible almost complex structures.
Let
$$D^2_+ = \{z\in {\mathbb C} : |z| {\leqslant} 1, \Im z {\geqslant} 0\},$$
$\partial_+$ denote the part of the boundary of $D^2_+$ on the unit circle, parametrized by $t \in [0,1]$ as $e^{i\pi t}$, and $\partial_0$ the part on the real line, parametrized by $t \in [0,1]$ as $2t -1$.
We consider the path space
$${\mathcal{P}}_L M = \{l : ([0,1], \{0,1\}) \to (M,L) | [l] = 0 \in \pi_1(M,L)\}$$
and the covering space $\widetilde {\mathcal{P}}_L M$ of ${\mathcal{P}}_L M$ whose elements are the equivalence classes:
$$[l, w] \text{ where } l \in {\mathcal{P}}_L M \text{ and } w: (D^2_+; \partial_+, \partial_0) \to (M; l, L),$$
where
$$(l, w) \sim (l', w') \iff l = l' \text{ and } I_\omega(w\#(-w')) = I_\mu(w\#(-w')) = 0.$$
The action functional on $\widetilde {\mathcal{P}}_L M$ is given by
$$a_{H}([l, w]) = -\int_{D^2_+}w^*\omega + \int_{[0,1]} H_t(l(t)) dt,$$
where we use the convention $dH = - \iota_{X_H} \omega$ for the Hamiltonian vector fields.
An element $\widetilde{l} = [l, w] \in \widetilde {\mathcal{P}}_L M$ is a critical point of $a_H$ if and only if $l$ is a Hamiltonian path connecting points on $L$.
\begin{definition}\label{lagFloer:nondegen}
A critical point $\widetilde{l}$ is \emph{nondegenerate} if $d\phi_1 T_{l(0)} L \pitchfork T_{l(1)}L$, where $\phi_{t \in[0,1]}$ is the Hamiltonian isotopy generated by $H_{t \in [0,1]}$.
\end{definition}
In a way similar to the case of Hamiltonian Floer homology on $M$, we have
\begin{prop}\label{lagFloer:genric}
For a generic $H$, all critical points of $a_H$ are non-degenerate. \qed
\end{prop}
Floer theory studies the negative gradient flow of $a_H$. Let $(,)_{\mathbf{J}}$ be the metric on ${\mathcal{P}}_L M$ defined by
$$(\xi, \eta)_{\mathbf{J}} = \int_{[0,1]} \omega(\xi(t), J_t\eta(t)) dt,$$
then the equation of \emph{negative} gradient flow for $a_{H}$ is the following perturbed $\mathbf{J}$-holomorphic equation for $u : {\mathbb R}\times [0,1] \to M$:
\begin{equation}\label{lagFloer:floweq}
\left\{\begin{matrix}
\frac{\partial u}{\partial s} + J_t(u)
\left(\frac{\partial u}{\partial t} - X_{H_t}(u)\right) = 0 & \text{ for all }
(s, t) \in {\mathbb R} \times [0,1], \\
u|_{{\mathbb R} \times \{0, 1\}} \subset L
\end{matrix}\right.
\end{equation}
The energy $E(u)$ of a solution $u$ of \eqref{lagFloer:floweq} with respect to the metric induced by $\mathbf{J}$ is defined as its $s$-energy:
$$E(u)= \int \left|\frac{\partial u}{\partial s} \right|_t^2ds dt$$
where the $t$-metric on $M$ is ${\langle}\xi,\eta{\rangle}_t = \omega (\xi, J_t \eta)$.
Suppose that all critical points of $a_H$ are non-degenerate and let $u$ be a finite energy solution. Then $l_s(t) := u(s, t)$ converges in the $C^0$-topology to Hamiltonian paths, i.e. there exist Hamiltonian paths $l_{\pm}$ underlying critical points of $a_H$ such that $\lim_{s \to \pm\infty} l_s(t) = l_{\pm}(t)$ uniformly in $t$.
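For later reference, we record the standard energy identity for such a solution (a direct computation from the definitions, with the conventions above): if $\widetilde l_- = [l_-, w_-]$ is a lift of $l_-$ and $\widetilde l_+ := [l_+, w_- \# u]$ is the lift of $l_+$ obtained by gluing $u$, then
$$E(u) = a_H(\widetilde l_-) - a_H(\widetilde l_+) \geqslant 0,$$
so the action decreases along the negative gradient flow lines.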
\subsection{Conley-Zehnder index}\label{lagFloer:maslov}
For each nondegenerate critical point $\widetilde{l} = [l, w]$, we can define a Conley-Zehnder index $\mu_{H}(\widetilde{l})$.
Since $D^2_+$ is contractible, we find a symplectic trivialization $\Phi$ of the bundle $w^*TM$ given by $\Phi_z: T_{w(z)}M \to {\mathbb C}^n$ with standard symplectic structure $\omega_0$ on ${\mathbb C}^n$. We require that $\Phi_r(T_{w(r)}L) = {\mathbb R}^n$ for $r \in \partial_{0} D^2_+ \subset D^2_+$, which is possible since $\partial_{0} D^2_+$ is contractible. Then the linearized Hamiltonian flow $d\phi_t$ along $l$ defines a path of symplectic matrices
$$E_t = \Phi_{e^{i\pi t}} \circ d\phi_t \circ \Phi^{-1}_1 \in Sp({\mathbb C}^n).$$
The Conley-Zehnder index of $\widetilde l$ is defined using the Maslov index of paths of Lagrangian subspaces introduced in Robbin-Salamon \cite{RobbinSalamon1}:
\begin{propdef}\label{lagFloer:intind}
The \emph{Conley-Zehnder index} of $\widetilde l$ is defined as $\mu_H(\widetilde l) = \mu(E_t{\mathbb R}^n, {\mathbb R}^n)$; it satisfies:
\begin{enumerate}
\item $\mu_H(\widetilde l)$ does not depend on the trivialization;
\item $\mu_H(\widetilde l) + \frac{n}{2} \in {\mathbb{Z}}$;
\item under the deck transformation by $\beta \in \Gamma_L$, we have
$\mu_H(\widetilde l\#\beta) = \mu_H(\widetilde l) + I_\mu(\beta)$.
\end{enumerate}
\end{propdef}
\proof
For $(2)$ see \cite{RobbinSalamon1}, Theorem $2.4$. The rest can be shown in the same way as in the case of Hamiltonian loops in $M$.
\qed
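In particular, by $(2)$ and $(3)$, all the indices $\mu_H(\widetilde l)$ lie in $\frac{n}{2} + {\mathbb{Z}}$ and the deck transformations by $\Gamma_L$ shift them by integers, so the grading index $k$ in the following definition is understood to run over $\frac{n}{2} + {\mathbb{Z}}$.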
\begin{definition}\label{lagFloer:chaingroup}The \emph{Floer chain group} is $FC_*(H) = \oplus_{k} FC_k(H) $ where
$$FC_k(H) := \left\{\left.\sum_{\mu_H(\widetilde l) = k} a_{\widetilde l} \widetilde l \right| a_{\tilde l} \in {\mathbb R} \text{ and } \forall K \in {\mathbb R}, \#\{\widetilde l | a_{\widetilde l} \neq 0, a_H(\widetilde l) < K\} < \infty \right\}.$$
\end{definition}
\noindent
It is easy to see that $FC_*(H)$ is a graded module over the Novikov ring $\Lambda_L$ via
$$e^\beta \cdot \widetilde l = \widetilde l \# \beta \quad \text{ for } \beta \in \Gamma_L,$$
and we have $$e^\beta \cdot FC_*(H) \subset FC_{*- \deg(e^\beta)}(H).$$
We note that by the ring inclusion $i: \Lambda_\omega\to \Lambda_L$, $FC_*(H)$ is also a $\Lambda_\omega$-module.
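As a consistency check, which follows directly from the definitions and Proposition-Definition \ref{lagFloer:intind}(3), for $\beta \in \Gamma_L$ one has
$$\mu_H(\widetilde l \# \beta) = \mu_H(\widetilde l) + I_\mu(\beta) = \mu_H(\widetilde l) - \deg(e^\beta)
\qquad\text{and}\qquad
a_H(\widetilde l \# \beta) = a_H(\widetilde l) - \omega(\beta),$$
so the action of $e^\beta$ shifts both the grading and the action level by amounts determined by $\beta$ alone.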
\subsection{The linearized operator and moduli spaces of flows}\label{lagFloer:linear}
Let us suppose that all critical points of $a_{H}$ are non-degenerate and consider the linearized operator of \eqref{lagFloer:floweq} at a finite energy solution $u$
\begin{equation}\label{linear:op}
D_u \xi = \nabla_{\frac{\partial}{\partial s}} \xi + J_t (u)\nabla_{\frac{\partial}{\partial t}} \xi + \nabla_{\xi}J_t(u) \partial_t u - \nabla_{\xi} \left(J_t (u)X_{H_t}(u)\right),
\end{equation}
where $\xi \in \Gamma(u^*TM; L) = \{\xi \in \Gamma(u^*TM) | \xi|_{{\mathbb R} \times \{0,1\}} \subset TL\}$. After suitable Banach space completions, $D_u: L^p_k(u^*TM; L) \to L^p_{k-1}(u^*TM)$ is a Fredholm operator whose index is the expected dimension of the space of solutions near $u$.
By \cite{RobbinSalamon2} (Theorem 7.1), the index can be identified as the difference of the Conley-Zehnder indices of the two ends:
\begin{prop}\label{lagFloer:indexdiff}
Let $\widetilde{{\mathcal{M}}}_{H, \mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_+)$ be the space of all solutions of the equation \eqref{lagFloer:floweq} connecting
$\widetilde{l}_{-}$ to $\widetilde{l}_{+}$ such that
$[\widetilde l_- \# u \# (- \widetilde l_+)] = 0 \in \Gamma_L$. Its expected dimension is then given by:
$${\rm ind } D_u = \mu_H(\widetilde l_-) - \mu_H(\widetilde l_+).$$
\end{prop}
\qed
The unparametrized moduli space is ${\mathcal{M}}_{H, \mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_+) = \widetilde{{\mathcal{M}}}_{H, \mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_+)/{\mathbb R}$, where the ${\mathbb R}$-action is given by translations in $s$. Thus, under generic conditions, we have:
$$\dim {\mathcal{M}}_{H, \mathbf J}(M, L; \widetilde l_-, \widetilde l_+) = \mu_H(\widetilde l_-) - \mu_H(\widetilde l_+) - 1.$$
\subsection{Coherent orientations}\label{lagFloer:orient}
We will work over ${\mathbb{Q}}$ or ${\mathbb C}$ instead of ${\mathbb{Z}}_2$. For this reason, we impose the following assumption from now on:
\begin{assumption}\label{lagFloer:relspin}
$L$ is \emph{relatively spin}, i.e. $L$ is orientable and $w_2(L) \in H^2(L; {\mathbb{Z}}_2)$ extends to a class in $H^2(M; {\mathbb{Z}}_2)$.
\end{assumption}
The above assumption implies that the moduli spaces of holomorphic discs with boundary on $L$ can be canonically oriented with the choice of a relatively spin structure on $L$, i.e.
\begin{itemize}
\item an orientation of $L$,
\item an extension of $w_2(L)$ to a class in $H^2(M; {\mathbb{Z}}_2)$, and
\item a spin structure on $TL \oplus V |_{L_{(2)}}$, i.e. a trivialization of $TL \oplus V|_{L_{(1)}}$ that extends to $L_{(2)}$,
\end{itemize}
where $L_{(2)}$ is the $2$-skeleton of some triangulation of $L$ and $V$ is an oriented real vector bundle on the $3$-skeleton $M_{(3)}$ of $M$ so that $w_2(V)$ extends $w_2(L)$. It follows that $TL \oplus V |_{L_{(2)}}$ is indeed spin.
Starting from these choices, we may assign to the moduli spaces $\widetilde {\mathcal{M}}_{H, \mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_+)$ a coherent orientation (see for example \cite{FOOO}, \S$44$) in the following way.
First, in order to orient the moduli space of half-tubes $\widetilde {\mathcal{M}}_{H, \mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_+)$, we consider essentially an oriented version of the argument for the PSS-type maps of \cite{Albers}.
It involves another type of moduli spaces $\widetilde {\mathcal{M}}^\pm_{H^\pm, \mathbf J^\pm}(M, L; \widetilde{l})$ consisting of maps from either the capped strip $Z_-$ or $Z_{+}$ (\cite{Albers}):
\begin{equation}\label{lagFloer:caps}{\mathbb C} \supset Z_\pm = D^2_\mp \cup ({\mathbb R}^\pm \times [0, 1]) \xrightarrow {u^\pm} M \text{ so that } u^\pm({\partial Z_\pm}) \subset L,\end{equation}
where this time $D^2_{-}$ denotes the closed left half of the disc of radius $1/2$ centered at $\frac{i}{2} \in {\mathbb C}$, while $D^2_{+}$ denotes the closed right half.
The coordinate on $Z_\pm$ is $z = s+it$. Choose and fix $(H^\pm, \mathbf J^\pm)$, a pair consisting of a smoothly $z$-dependent Hamiltonian function and a smoothly $z$-dependent family of $\omega$-compatible almost complex structures, so that
$$(H^\pm, \mathbf J^\pm)|_{D^2_\mp} = (0, J) \text{ and } (H^\pm, \mathbf J^\pm)|_{1\mp s < 0} = (H, \mathbf J),$$
where $J$ is a generic almost complex structure on $M$. Consider the equation for $u^\pm : Z_\pm \to M$:
\begin{equation}\label{lagFloer:capeq}
\left\{\begin{matrix}
\frac{\partial u^\pm}{\partial s} + J^\pm_z(u^\pm)
\left(\frac{\partial u^\pm}{\partial t} - X_{H^\pm_z}(u^\pm)\right) = 0 & \text{ for all }
(s, t) \in Z_\pm, \\
u|_{\partial Z_\pm} \subset L
\end{matrix}\right.
\end{equation}
The energy $E(u^\pm)$ is defined as the $s$-energy in the usual way, and finite energy solutions $u^\pm$ converge uniformly to Hamiltonian paths of $H$ when $s \to \pm \infty$. Then set
$$\widetilde {\mathcal{M}}_{H^\pm, \mathbf J^\pm}(M, L; \widetilde l) := \left\{u^\pm: Z_\pm \to M \left| \begin{matrix} u^\pm \text{ satisfies \eqref{lagFloer:capeq}},\\ \lim_{s\to \pm\infty} u^\pm = l \text{ and }\\
[\widetilde l \# (-u^\pm)] = 0 \in \Gamma_L \end{matrix}\right.\right\}.$$
There are evaluation maps for these moduli spaces, at the points $p_\pm = \pm \tfrac{1}{2} + \tfrac{i}{2} \in D^2_\pm$:
$$ev^{\pm} : \widetilde {\mathcal{M}}_{H^\pm, \mathbf J^\pm}(M, L; \widetilde l) \to L : u^\pm \mapsto u^\pm(p_\mp).$$
We argue that a choice of the orientations of all the moduli spaces of the form $\widetilde {\mathcal{M}}_{H^+, \mathbf J^+}(M, L; \widetilde l^+)$ induces the orientations of the moduli spaces of the form $\widetilde {\mathcal{M}}_{H^-, \mathbf J^-}(M, L; \widetilde l^-)$ where $l^+ = l^-$. We consider the gluing of the equations
\eqref{lagFloer:capeq} for the moduli spaces $\widetilde {\mathcal{M}}_{H^+, \mathbf J^+}(M, L; \widetilde l^+)$ and $\widetilde {\mathcal{M}}_{H^-, \mathbf J^-}(M, L; \widetilde l^-)$ along $l$. That is, choose and fix an appropriate cut off function $\beta$ and consider the domains
$$Z_{+,R} = D^2_- \cup ([0, R+1] \times [0, 1]) \text{ and } Z_{-,R} = D^2_+ \cup ([-R-1, 0] \times [0,1]),$$
and use $\beta$ to glue the two equations on $Z_\pm$ to define an equation on the glued domain
$$Z_R := Z_{+, R} \sqcup Z_{-, R} / (z \sim z-R-1 \text{ in the ends}).$$
We note that $Z_R$ is conformal to $D^2$ and the equation on $Z_R$ is in fact a compact perturbation of the $\bar\partial_J$-equation for discs with boundary on $L$.
Because the moduli space of discs is canonically oriented by the choice of a relatively spin structure, we see that the moduli space $\widetilde {\mathcal{M}}_{H^\pm, \mathbf J^\pm}(M, L; \widetilde l, R)$ for the glued equation on $Z_R$ is oriented. From the additivity of indices by standard gluing arguments, we see that the orientations of the $+$-moduli spaces induce orientations of the $-$-moduli spaces.
Let $B \in \pi_2(M, L)$ and consider $\widetilde l^B = \widetilde l \# B$. When $\widetilde {\mathcal{M}}_{H^+, \mathbf J^+}(M, L; \widetilde l^B)$ is not empty, its orientation is defined from that of $\widetilde {\mathcal{M}}_{H^+, \mathbf J^+}(M, L; \widetilde l)$ and the $\bar\partial$-equation for discs with boundary on $L$ representing class $B$. We note that the moduli space of discs might be empty, or the $\bar \partial$-operator might be non-surjective. Nevertheless, an orientation can be assigned to the index of the $\bar\partial$-operator. Summarizing, we have
\begin{prop}\label{lagFloer:orientcap}
The orientations of the moduli spaces $\widetilde {\mathcal{M}}_{H^\pm, \mathbf J^\pm}(M, L; \widetilde l)$ are determined by the canonical orientations on the indices of the $\bar\partial$-operators of discs with boundaries on $L$ as well as a choice of the orientations on $\widetilde {\mathcal{M}}_{H^+, \mathbf J^+}(M, L; \widetilde l_j)$ for a $\Lambda_L$-basis $\{\widetilde l_j\}$ of $FC_*(H)$.
\qed\end{prop}
\begin{definition}\label{lagFloer:preferredbase}
The basis $\{\widetilde l_j\}$ is called a \emph{preferred basis} for the orientation of the Floer complex $FC_*(M, L; H, \mathbf J)$.
\end{definition}
To obtain the orientations for the moduli spaces $\widetilde {\mathcal{M}}_{H, \mathbf J}(M, L; \widetilde l_-, \widetilde l_+)$, we notice, for example, that gluing these latter moduli spaces with the moduli spaces $\widetilde {\mathcal{M}}_{H^+, \mathbf J^+}(M, L; \widetilde l_-)$ yields the moduli spaces $\widetilde {\mathcal{M}}_{H^+, \mathbf J^+}(M, L; \widetilde l_+)$. Since both the latter two have been given orientations, these orientations canonically determine orientations on the moduli spaces of half-tubes. Considering the opposite gluing, that is to say using $\widetilde {\mathcal{M}}_{H^-, \mathbf J^-}(M, L; \widetilde l_+)$ instead of $\widetilde {\mathcal{M}}_{H^+, \mathbf J^+}(M, L; \widetilde l_-)$, would give the same induced orientations.
It is now easy to see that the orientations introduced on $\widetilde {\mathcal{M}}_{H, \mathbf J}(M, L; \widetilde l_-, \widetilde l_+)$ are naturally coherent in the sense of Hofer-Salamon \cite{HoferSalamon}.
\subsection{Floer homology}\label{lagFloer:hlgy}
From now on, we consider only monotone Lagrangians, i.e. those satisfying the following condition:
\begin{equation}\label{lagFloer:condition}
\text{ there is } \lambda > 0 \text{ such that } I_\omega = \lambda I_\mu \text{ on } \pi_2(M, L).
\end{equation}
Together with assumption \ref{lagFloer:relspin}, this implies that the minimal Maslov number of $L$ is at least $2$. The monotonicity condition also ensures that there are no non-constant holomorphic spheres with non-positive Chern number and no non-constant holomorphic discs with boundary on $L$ with non-positive Maslov index.
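For completeness, we recall the standard argument behind these two claims. Since $L$ is orientable, $I_\mu$ takes only even values on $\pi_2(M, L)$, so any non-zero value of $I_\mu$ is at least $2$ in absolute value; moreover, if $B \in \pi_2(M, L)$ is represented by a non-constant $J$-holomorphic disc $u$, then
$$I_\mu(B) = \frac{1}{\lambda}\, I_\omega(B) = \frac{1}{\lambda}\int_{D^2} u^*\omega > 0,$$
so $I_\mu(B) \geqslant 2$. Similarly, using the relation $I_\mu \circ j = 2 I_c$ on spherical classes, a non-constant $J$-holomorphic sphere has positive symplectic area and hence positive Chern number.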
Let $M_k(\mathbf J)$ denote the set of points of $M$ lying on non-constant $J$-holomorphic spheres with Chern number $\leqslant k$, $L_k(\mathbf J)$ the set of points of $L$ lying on the boundary of non-constant $J$-holomorphic discs with Maslov number $\leqslant k$, and $P(H)$ the set of points of $M$ lying on connecting orbits of $H$.
In the following, we will assume that the pair $(H, \mathbf J)$ is \emph{regular} in the sense that
\begin{itemize}
\item all $J_{0/1}$-holomorphic discs with Maslov index $2$ are regular,
\item $\mathbf J$ is regular for pseudo-holomorphic spheres with Chern number $1$,
\item all connecting orbits of $H$ are non-degenerate,
\item $D_u$ is surjective for finite energy solutions $u$ of \eqref{lagFloer:floweq} with $\text{index } D_u \leqslant 2$.
\item $P(H) \cap M_1(\mathbf J) = \emptyset$ and $P(H) \cap L_2(\mathbf J)$ is empty or of dimension $0$.
\end{itemize}
Standard arguments (cf. e.g. \cite{HoferSalamon}) imply that generic pairs are regular.
The Floer chain complex $FC_*(H, \mathbf{J})$ is
given by the Floer chain group $FC_*(H)$ with
the boundary map defined from counting the $0$-dimensional moduli space of solutions:
$$\partial_{H, \mathbf{J}} \widetilde{l}_- = \sum_{\mu_H(\widetilde l_-) = \mu_H(\widetilde l_+) + 1} \#{\mathcal{M}}_{H,\mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_+) \widetilde{l}_+,$$
and extending linearly.
We then show that
\begin{prop}\label{lagFloer:dsquare}
Under assumption \ref{lagFloer:relspin} and the monotonicity condition \eqref{lagFloer:condition}, if $(H, \mathbf J)$ is regular, then $\partial_{H, \mathbf J}^2 = 0$.
\end{prop}
{\it Proof:}
Writing
$$\partial_{H, \mathbf J}^2 \widetilde l_- = \sum_{\mu_H(\widetilde l_-) = \mu_H(\widetilde l_0) + 1} \#{\mathcal{M}}_{H,\mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_0) \sum_{\mu_H(\widetilde l_0) = \mu_H(\widetilde l_+) + 1} \#{\mathcal{M}}_{H,\mathbf{J}}(M,L; \widetilde{l}_0, \widetilde{l}_+) \widetilde l_+$$
we see that the proposition is equivalent to saying that for each pair of $\widetilde l_-$ and $\widetilde l_+$, we have
$$\sum_{\mu_H(\widetilde l_-) = \mu_H(\widetilde l_0) + 1} \sum_{\mu_H(\widetilde l_0) = \mu_H(\widetilde l_+) + 1} \#{\mathcal{M}}_{H,\mathbf{J}}(M,L; \widetilde{l}_0, \widetilde{l}_+) \#{\mathcal{M}}_{H,\mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_0) = 0.$$
The summand above counts the broken half-tubes connecting $\widetilde l_-$ to $\widetilde l_+$. The broken half-tubes form part of the boundary of the $1$-dimensional moduli space ${\mathcal{M}}_{H,\mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_+)$.
Let $\mathcal C$ be a connected component of the compactification of ${\mathcal{M}}_{H,\mathbf{J}}(M,L; \widetilde{l}_-, \widetilde{l}_+)$. A boundary point of $\mathcal C$ is of type $I$ if it is a broken half-tube, and of type $II$ if it corresponds to the bubbling off of a holomorphic disc. The counting in $\partial^2_{H, \mathbf J}$ concerns the type $I$ boundary points. We have the following $3$ cases for $\partial \mathcal C$:
\begin{itemize}
\item is empty or of type $II$ on both ends, or
\item is of type $I$ on both ends, or
\item is of type $I$ on one end and type $II$ on the other.
\end{itemize}
Obviously, if no type $II$ boundary points occur in the compactification, an argument similar to the one in Hamiltonian Floer theory gives the proposition. In the following, we therefore assume that type $II$ boundary points do occur. Then the type $I$ and type $II$ boundary points are cobordant, and the vanishing of the count for either type implies the vanishing for the other. We show below that the count of type $II$ boundary points vanishes, which then implies the proposition.
If type $II$ boundary points occur, then there exist critical points $\widetilde l_\pm$ and a holomorphic disc $v$ with $\mu_L(v) = 2$ such that $v$ is attached to a solution $u$ of \eqref{lagFloer:floweq} with $\lim_{s \to \pm \infty} u(s, t) = l_\pm$. It follows that $\mu_H(\widetilde l_-) = \mu_H(\widetilde l_+)$. By the regularity assumptions, we see that $\widetilde l_- = \widetilde l_+ = \widetilde l$, where $l$ is a connecting Hamiltonian orbit of $H$ and $u(s, t) = l(t)$ for all $s$.
Also by our assumption, $L_2(\mathbf J)$ is compact of dimension $n$. It follows that there are $J_{0/1}$-holomorphic discs through each point of $L$.
The orientations of the moduli spaces of $J_{0/1}$-holomorphic discs with minimal Maslov number are consistent in the sense that they are connected through cobordisms. On the other hand,
the orientation of ${\mathcal{M}}_{H, \mathbf J}(M, L; \widetilde l_-, \widetilde l_+) \neq \emptyset$ is obtained as in \S\ref{lagFloer:orient}, by considering the gluing operations, via the canonical orientation of the moduli spaces of discs together with the choice of orientations on the moduli spaces $\widetilde {\mathcal{M}}_+$.
The boundary components of ${\mathcal{M}}_{H, \mathbf J}(M, L; \widetilde l_-, \widetilde l_+)$ are oriented by considering the gluing operations. To derive the orientation at the type $II$ boundary points, we need to consider the gluing of the following moduli spaces to the main component $l$:
\begin{quotation}
the moduli spaces $\widetilde {\mathcal{M}}_\pm$ of the capped strips and the moduli space ${\mathcal{M}}_1(M, L; B, J_{0/1})$ of $1$-marked $J_{0/1}$-holomorphic disc with $\mu_L(B) = 2$.
\end{quotation}
The ordering of the gluing operations is given by the orientation of the half-tube ${\mathbb R} \times [0,1]$. Namely, for the case of bubbling off of a disc at $t = 0$, the cyclic order is $\widetilde {\mathcal{M}}_+$, ${\mathcal{M}}_1(M, L; B, J_0)$ then $\widetilde {\mathcal{M}}_-$ and for bubbling off at $t = 1$, the order is $\widetilde {\mathcal{M}}_-$, ${\mathcal{M}}_1(M, L; B, J_1)$ then $\widetilde {\mathcal{M}}_+$.
Note that the orientations of the moduli spaces of holomorphic discs are consistent, while the cyclic ordering of the gluing operations are opposite.
It follows that the counts of the configurations with bubbling off at $t = 0$ and at $t = 1$ have opposite signs.
It then follows that the counting of type $II$ boundary points vanishes, which implies $\partial_{H, \mathbf J}^2 = 0$.
\qed
Thus
we define the Floer homology of $(M, L)$ for the regular pair $(H, \mathbf J)$ to be
$$FH_*(M,L; H, \mathbf{J}) = H_*(FC_*(H, \mathbf{J}), \partial_{H, \mathbf{J}}).$$
The independence of $FH_*(M,L; H, \mathbf{J})$ with respect to the choices of (regular) $H$
and $\mathbf{J}$ can be seen using the usual arguments of continuation principle and homotopy
of homotopies.
\subsection{Half pair of pants product}\label{lagFloer:prod}
The product on $FH_*(M,L)$ can be defined by a ``half pair-of-pants'' construction, perturbed in a way similar to
Seidel \cite{Seidel}, as follows. Consider the half cylinder with a boundary puncture $\Sigma_0 = {\mathbb R} \times
[0,1]\setminus \{(0,0)\}$. The surface $\Sigma_0$ has three ends $e_{\pm}$ and $e_0$:
$$e_+ : [1, \infty) \times [0,1] \to \Sigma_0 \text{ and } e_-: (-\infty, -1] \times [0,1] \to \Sigma_0$$
where $e_{\pm}(s, t) = (s, t)$,
and
$$e_0 : (-\infty, -1] \times [0,1] \to \Sigma_0 : (b, \theta) \mapsto s+it = e^{b - 1+\pi i \theta}$$
is holomorphic with respect to the standard complex structures on the domain and target,
whose image lies completely in $(-\frac{1}{2}, \frac{1}{2})\times (0, \frac{1}{4})$.
The ends $e_-$ and $e_0$ are the ``incoming'' ends and $e_+$ is the ``outgoing end''.
We choose regular pairs $(H_{\pm}, \mathbf{J}_{\pm})$ and $(H_0, \mathbf{J}_0)$ for the
corresponding ends. Consider the pair $(\mathbf{H}, \mathbf{J})$ where $\mathbf{H} \in C^{\infty}
(\Sigma \times M)$ and $\mathbf{J}$ is a family of compatible almost complex structures
parametrized by $\Sigma$, such that the pull back of $(\mathbf{H}, \mathbf{J})$ by the maps
$e_{*}$ is equal to the corresponding pair $(H_*,\mathbf{J}_*)$. Furthermore, we require that
$\mathbf{H}$ restricts to $0$ over $e_0([-2, -1] \times [0,1]) \times M$.
\begin{remark}\label{lagFloer:notion}
\rm{Here and in the following, a region $\mathscr D \subset {\mathbb R} \times [0,1]$ is provided with cylindrical coordinates
if there is a biholomorphic map $e : I \times S \to \mathscr D$ where $I \subset {\mathbb R}$ is a (possibly infinite) interval
and $S = [0,1]$ or ${\mathbb R}/{\mathbb{Z}}$.
When we ask for the regular pair $(\mathbf{H}, \mathbf J)$ to pull back to
a pair $(H', \mathbf J')$ on a region provided with cylindrical coordinates $I \times S$, we mean that
there is a sequence of (nonempty) smaller intervals
$$I'' \subset \bar {I''} \subsetneq (I')^\circ \subset \bar{I'} \subsetneq I^\circ \subset I,$$
so that $(\mathbf H, \mathbf J)$ pulls back to $(H', \mathbf J')$ on $e(I'' \times S)$ while it pulls back to $(0, J_0)$ on $e((I \setminus I') \times S)$
for some fixed generic compatible almost complex structure $J_0$.
}
\end{remark}
\noindent
The description is conveniently summarized in figure \ref{fig:lagFloer:ppdomain}.
\begin{figure}\label{fig:lagFloer:ppdomain}
\end{figure}
Let $({\mathbb R} \times
[0,1])^0 = ({\mathbb R} \times [0,1]) \setminus e_0((-\infty, -1] \times [0,1])$ and consider
the equation
\begin{equation}\label{lagFloer:pants}
\left\{\begin{matrix}
\frac{\partial u}{\partial s} + J_{s, t}(u)
\left(\frac{\partial u}{\partial t} - X_{H_{s, t}} (u)\right) = 0 & \text{ for }
(s, t) \in ({\mathbb R} \times [0,1])^0, \\
\frac{\partial u_0}{\partial s} + J_{e_0(s, t)}(u_0)
\left(\frac{\partial u_0}{\partial t} - X_{H_{e_0(s, t)}} (u_0)\right) = 0 & \text{ for }
(s, t) \in (-\infty, -1] \times [0,1], \\
u|_{({\mathbb R} \times \{0, 1\}) \setminus \{(0,0)\}} \subset L
\end{matrix}\right.
\end{equation}
where $u_0 = u \circ e_0$. On the ends $e_*$, a solution $u$ of finite energy again
limits to critical points $\widetilde l_*$ for the Floer action functional $a_{H_*}$ when
$s\to \pm\infty$. The half pair-of-pants product is then defined on
the chain level by counting the $0$-dimensional moduli space ${\mathcal{M}}_{\mathbf H, \mathbf J}
(M, L; \widetilde l_-, \widetilde l_0, \widetilde l_+)$ of such solutions:
$$\widetilde l_- * \widetilde l_0 = \sum_{\widetilde l_+} \#{\mathcal{M}}_{\mathbf H, \mathbf J}(M, L; \widetilde l_-, \widetilde l_0, \widetilde l_+) \widetilde l_+.$$
The orientation of the moduli spaces involved is obtained by considering the gluing with the respective moduli spaces of capped strips. More precisely, gluing with $\widetilde {\mathcal{M}}_{H_-^+,\mathbf J_-^+}(M, L; \widetilde l_-)$ and $\widetilde {\mathcal{M}}_{H_0^+, \mathbf J_0^+}(M, L; \widetilde l_0)$ gives a compact perturbation of the moduli problem for $\widetilde {\mathcal{M}}_{H_+^+, \mathbf J_+^+}(M, L; \widetilde l_{+})$. The order of the gluing operation is first on the end $e_0$ then $e_-$. With all these fixed, we give ${\mathcal{M}}_{\mathbf H, \mathbf J}(M, L; \widetilde l_-, \widetilde l_0, \widetilde l_+)$ the induced orientation. We note that, implicitly, we are also using the orientation of the moduli spaces of holomorphic discs.
To show that it passes to homology, we again look at the boundary of the $1$-dimensional
moduli space of solutions.
The assumption on the minimal Maslov number implies that a generic family does not have
any disc bubbling and thus all $1$-dimensional moduli spaces can be compactified by adding broken trajectories.
\begin{remark}\label{lagFloer:productdiffmod}
\rm{
There are two boundary components of ${\mathbb R} \times [0,1]$. In the discussion above, we could also puncture the half cylinder at $(0,1)$ instead; all the arguments go through and yield a product of the form $\widetilde l_0 * \widetilde l_-$. The model above, which is the one used in this article, will be called the \emph{right} model, while the one punctured at $(0,1)$ will be called the \emph{left} model.
}
\end{remark}
Now we assume that the identity exists for the product just defined and give a description of it in the following. For $\delta \ll 1$, the domain we consider is the unpunctured domain in figure \ref{fig:lagFloer:identitydomain}.
\begin{figure}\label{fig:lagFloer:identitydomain}
\end{figure}
The semi-annulus labelled by $H_0$ is biholomorphic to the half-cylinder $[\ln\delta + 1, -1] \times [0,1]$ by the following:
$$e_\delta : [\ln \delta + 1 , -1] \times [0,1] \to {\mathbb R} \times [0,1] : (b, \theta) \mapsto s+it = e^{b-1 + \pi i \theta}.$$
As $\delta \to 0$, the length of the half-cylinder goes to $\infty$.
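Explicitly, the conformal length of this half-cylinder is $(-1) - (\ln\delta + 1) = |\ln\delta| - 2$, which indeed tends to $\infty$ as $\delta \to 0$.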
We then choose a regular pair $(\mathbf H_\delta, \mathbf J_\delta)$ that pulls back to the respective regular pairs on the shaded regions and to $(0, J_0)$ on the half disc labelled by $0$. When $\delta > 0$, the count of the $0$-dimensional moduli spaces of solutions to the perturbed $\bar\partial$-equation described by figure \ref{fig:lagFloer:identitydomain} gives the connecting homomorphism between $FH_*(M, L; H_-, \mathbf J_-)$ and $FH_*(M, L; H_+, \mathbf J_+)$, which by definition is the identity on the Floer homology $FH_*(M, L)$.
Now let $\delta \to 0$, then in the limit the domain splits into two parts, one of which is the domain in figure \ref{fig:lagFloer:ppdomain}. Another part is the ``cap domain'' as shown in figure \ref{fig:lagFloer:capdomain}.
\begin{figure}\label{fig:lagFloer:capdomain}
\end{figure}
Now let $\widetilde {\mathcal{M}}_{H_0^+, \mathbf J_0^+} (M, L; \widetilde l_0)$ denote the corresponding moduli space of caps (as considered in \S\ref{lagFloer:orient}) defined by the above domain, and consider the element defined from counting dimension $0$ moduli spaces:
$$1\!\!1_L := \sum_{\widetilde l_0} \#\widetilde {\mathcal{M}}_{H_0^+, \mathbf J_0^+} (M, L; \widetilde l_0) [\widetilde l_0] \in FH_*(M, L).$$
A gluing argument as in \cite{Albers} then shows that multiplication by $1\!\!1_L$ gives the identity map of $FH_*(M,L)$, i.e. $1\!\!1_L$ is the identity of $FH_*(M, L)$ under the half-pair-of-pants product.
\subsection{Action of $FH_*(M)$}\label{lagFloer:quantaction}
Putting in our framework ideas that first appeared in Albers \cite{Albers}, and then in Biran-Cornea \cite{BiranCornea}, we have the following
\begin{prop}\label{lagFloer:rightmod}
There is a natural action $FH_*(M) \otimes_{\Lambda_\omega} FH_*(M, L) \to FH_*(M, L)$, that exhibits $FH_*(M,L)$ as a right $FH_*(M)$-module, where the Novikov ring $\Lambda_\omega$ is defined over $\mathbb R$.
\end{prop}
We first recall some basic notation for the Hamiltonian Floer homology $FH_*(M)$. We consider the space $\Omega M$ of contractible loops in $M$ and its covering space $\widetilde \Omega M$, which fits into the following covering diagram:
$$\Gamma_\omega \to \widetilde \Omega M \to \Omega M.$$
An element $\widetilde \gamma \in \widetilde \Omega M$ is an equivalence class of pairs $(\gamma, v)$ where
$$\{\gamma : {\mathbb R}/{\mathbb{Z}} \to M\} \in \Omega M \text{ and } v : (D^2, S^1) \to (M, \gamma)$$
such that $v(e^{2\pi i t}) = \gamma(t)$. The equivalence relation is given by
$$(\gamma, v) \sim (\gamma', v') \text{ whenever } \gamma = \gamma' \text{ and } I_\omega(v\# (-v')) = I_c(v\# (-v')) = 0.$$
We describe the definition of such an action at the chain level. The domain we consider is $\Sigma_0 = {\mathbb R} \times [0,1] \setminus \{(0, \frac{1}{2})\}$, on which we put the structure of 3 ends, $e_\pm$ as in the previous subsection and $e_p$:
$$e_{p} : (-\infty, -1] \times {\mathbb R}/{\mathbb{Z}} \to \Sigma_0 : (b, \theta) \mapsto s+it = e^{b-1 + 2\pi i \theta} + \frac{i}{2}$$
which is biholomorphic onto its image, which lies completely in $(-\frac{1}{2}, \frac{1}{2}) \times (\frac{1}{4}, \frac{3}{4})$. We choose regular pairs $(H_\pm, \mathbf J_\pm)$ for $FH_*(M, L)$ on the ends $e_\pm$ and $(H_p, \mathbf J_p)$ for $FH_*(M)$ on the end $e_{p}$. We consider a pair $(\mathbf H, \mathbf J)$, where $\mathbf H \in C^\infty(\Sigma_0 \times M)$ and $\mathbf J$ is a family of almost complex structures parametrized by $\Sigma_0$, so that it pulls back to the respective pair $(H_*, \mathbf J_*)$ on each end. The domain is again summarized in figure \ref{fig:lagFloer:actiondomain}.
\begin{figure}\label{fig:lagFloer:actiondomain}
\end{figure}
Let $\Sigma_0^p = \Sigma_0 \setminus e_p((-\infty, -1] \times {\mathbb R}/{\mathbb{Z}})$ and consider the equation
\begin{equation}\label{lagFloer:quantacteq}
\left\{\begin{matrix}
\frac{\partial u}{\partial s} + J_{s, t}(u)
\left(\frac{\partial u}{\partial t} - X_{H_{s, t}} (u)\right) = 0 & \text{ for }
(s, t) \in \Sigma_0^p, \\
\frac{\partial u_p}{\partial s} + J_{e_p(s, t)}(u_p)
\left(\frac{\partial u_p}{\partial t} - X_{H_{e_p(s, t)}} (u_p)\right) = 0 & \text{ for }
(s, t) \in (-\infty, -1] \times {\mathbb R}/{\mathbb{Z}}, \\
u|_{{\mathbb R} \times \{0, 1\}} \subset L
\end{matrix}\right.
\end{equation}
where $u_p = u \circ e_p$. On the ends $e_*$, a solution $u$ of finite energy
limits to critical points $\widetilde l_{\pm}$ and $\widetilde \gamma_0$ for the respective Floer action functionals when
$s\to \pm\infty$. The chain level action is then defined
by counting the $0$-dimensional moduli space ${\mathcal{M}}_{\mathbf H, \mathbf J}
(M, L; \widetilde l_-, \widetilde \gamma_0, \widetilde l_+)$ of such solutions:
$$\widetilde l_- \circ \widetilde \gamma_0 = \sum_{\widetilde l_+} \#{\mathcal{M}}_{\mathbf H, \mathbf J}(M, L; \widetilde l_-, \widetilde \gamma_0, \widetilde l_+) \widetilde l_+.$$
The moduli spaces involved are oriented by the canonical orientation of moduli spaces of discs as well as the choices of orientation for the caps.
The chain level action passes to homology by the condition $R$, which guarantees that no bubbling off of holomorphic discs or spheres occurs for the $1$-dimensional moduli spaces.
Composing with the PSS isomorphism, we obtain the action of $QH_*(M)$ on $FH_*(M, L)$.
For the sake of completeness and the convenience of the reader, we continue to reformulate ideas and results introduced by Albers in our setting: we now show that the action gives $FH_*(M, L)$ the structure of an $FH_*(M)$-module. For this we need the identity
\begin{equation}\label{lagFloer:actionmodule}\widetilde l_- \circ (\widetilde \gamma_1 *_{PP} \widetilde \gamma_2) = (\widetilde l_- \circ \widetilde \gamma_1) \circ \widetilde \gamma_2,\end{equation}
where $*_{PP}$ is the pair-of-pants product in $FH_*(M)$.
We consider the twice-punctured domain $\Sigma_{R, 0} = {\mathbb R} \times [0,1] \setminus \{(R, \frac{1}{2}), (0, \frac{1}{2})\}$, where we always set $R > 0$. The basic structure of ends on the domain is illustrated on Figure \ref{fig:lagFloer:basicdomain},
\begin{figure}\label{fig:lagFloer:basicdomain}
\end{figure}
where $r = \epsilon \min (\frac{R}{2}, \frac{1}{2})$ for some $\epsilon \ll 1$.
The end $e_- : (-\infty, -1] \times [0,1] \to \Sigma_{R,0}$ is the identity map while $e_+ : [1, \infty) \times [0,1] \to \Sigma_{R,0}$ shifts by $R$ to the right.
The structure of the ends in the shaded discs labelled by $H_j$ for $j = 1, 2$ is given by the following
$$e_j : (-\infty, -1] \times {\mathbb R}/{\mathbb{Z}} \to \Sigma_{R,0} : (b, \theta) \mapsto s+it = e^{b+ 2\pi i \theta} + z_j,$$
where $z_1 = \frac{i}{2}$ and $z_2 = R + \frac{i}{2}$,
and $(H_j, \mathbf J_j)$ are regular pairs for $FH_*(M)$ on the ends $e_j$ for $j = 1, 2$.
The equation \eqref{lagFloer:actionmodule} is obtained by letting $R \to 0$ or $R \to \infty$, for which we need compact perturbations of the above basic structure. Because the perturbations are supported in a compact region of the domain, and because the condition $R$ excludes bubbling off of discs and spheres, the corresponding operators and the resulting moduli spaces (of dimension $0$) allow for a cobordism argument, which establishes the equation \eqref{lagFloer:actionmodule}.
For $R \to \infty$ we choose a regular pair $(H_0, \mathbf J_0)$ for $FH_*(M,L)$ and perturb as in figure \ref{fig:lagFloer:inftydomain}.
\begin{figure}\label{fig:lagFloer:inftydomain}
\end{figure}
We write the coordinates explicitly in the region labelled by $H_0$:
$$e_{0,R} : [0, \frac{R}{2}] \times [0,1] \to \Sigma_{R,0} : (s, t) \mapsto (s+\frac{R}{4}, t).$$
When $R \to \infty$, the width of the shaded region labelled by $H_0$ goes to $\infty$. This gives the right hand side of \eqref{lagFloer:actionmodule}.
On the other hand, for $R \ll 1$ we choose a regular pair $(H_3, \mathbf J_3)$ for $FH_*(M)$ and perturb as in figure \ref{fig:lagFloer:zerodomain},
\begin{figure}\label{fig:lagFloer:zerodomain}
\end{figure}
where the shaded region labelled by $H_3$ is an annulus centered at $(\frac{R}{2}, \frac{1}{2})$, for which the outer circle has radius $e^{-1}$ and the inner circle has radius $2R$. This region is biholomorphic to the cylinder $[\ln(2R), -1] \times {\mathbb R}/{\mathbb{Z}}$ via
$$e_{3,R}: [\ln(2R), -1] \times {\mathbb R}/{\mathbb{Z}} \to \Sigma_{R,0} : (b, \theta) \mapsto s+it = e^{b+2\pi i\theta} + \frac{1}{2}(R + i),$$
and we put on the annulus $(H_{3, \theta}, J_{3,\theta})$ using the cylindrical coordinates given by the map $e_{3, R}$.
When $R \to 0$, the length of the above cylinder goes to $\infty$. This gives the left hand side of \eqref{lagFloer:actionmodule}.
The equations we consider are similar to \eqref{lagFloer:pants}: we choose a generic family of pairs $(\mathbf H^R, \mathbf J^R)$, where $\mathbf H^R \in C^\infty(\Sigma_{R, 0} \times M)$ and $\mathbf J^R$ is a family of almost complex structures parametrized by $\Sigma_{R,0}$, so that its pull-back to the cylindrical parts (the shaded parts in the above diagrams) coincides with the corresponding labels and restricts to $0$ in a neighbourhood of the boundaries of the shaded regions. Then the equation for $(u, R)$ has the form:
\begin{equation}\label{lagFloer:deformeddomains}
\left\{\begin{matrix}
\frac{\partial u}{\partial s} + J^R_{s, t}(u)
\left(\frac{\partial u}{\partial t} - X_{H^R_{s, t}} (u)\right) = 0 & \text{ for }
(s, t) \in \text{ unshaded region}, \\
\frac{\partial u_*}{\partial s} + J^R_{e_*(s, t)}(u_*)
\left(\frac{\partial u_*}{\partial t} - X_{H^R_{e_*(s, t)}} (u_*)\right) = 0 & \text{ for }
(s, t) \in \text{ the domain of } e_*, \\
u|_{{\mathbb R} \times \{0, 1\}} \subset L
\end{matrix}\right.
\end{equation}
where $u_* = u\circ e_*$ and $* = +, -, 1, 2, (0,R)_{\text{when } R \to \infty}, (3,R)_{\text{when }R \to 0}$ respectively. The gluing that relates the limiting configuration to the configuration where $R \in (0, \infty)$ is similar in all respects to the one employed in the literature, e.g. in \cite{Albers}.
We recall here that the pair-of-pants product in $FH_*(M, \omega)$ is defined by considering the domain $S^2 \setminus \{0, 1, \infty\}$ (see figure \ref{fig:lagFloer:pantsdomain}).
\begin{figure}\label{fig:lagFloer:pantsdomain}
\end{figure}
The shaded discs around $0$, $1$ and $\infty$ are provided with cylindrical coordinates, so that the disc around $0$ and $1$ are identified with $(-\infty, -1] \times {\mathbb R}/{\mathbb{Z}}$ and the one around $\infty$ is identified with $[1, \infty) \times {\mathbb R}/{\mathbb{Z}}$. To set the order of the multiplication, let $\widetilde \gamma_j$ be critical points for the action functional of $H_j$ for $j = 1, 2$. Then counting $0$-dimensional moduli spaces of maps from the above domain satisfying the perturbed holomorphic equation as described in figure \ref{fig:lagFloer:pantsdomain} defines the product $\widetilde \gamma_1 * \widetilde \gamma_2$. Comparing this with the description of figure \ref{fig:lagFloer:zerodomain}, we see that the $FH_*(M)$-action is indeed a right action.
\begin{remark}\label{lagFloer:quantactionrmk}
\rm{Via the PSS-isomorphism between $FH_*(M)$ and $QH_*(M)$, the action described in this section can also be thought of as a right action of $QH_*(M)$ on $FH_*(M, L)$. In a way similar to the description of the PSS-isomorphism, the action by $QH_*(M)$ can be constructed directly using Morse trajectories as in Biran-Cornea \cite{BiranCornea}.
}
\end{remark}
\subsection{Albers' map}\label{lagFloer:Albers}
We describe a proof of the following
\begin{prop}\label{lagFloer:compatAlbers}
The action introduced above is compatible with the comparison map $\mathscr A : FH_*(M) \to FH_*(M,L)$ introduced by Albers in \cite{Albers}
via the half-pair-of-pants product, whenever all the ingredients are defined. That is,
$$[\widetilde l_-] \circ [\widetilde \gamma_0] = [\widetilde l_-] * \mathscr A([\widetilde \gamma_0]),$$
where $[\widetilde l_-] \in FH_*(M, L)$ and $[\widetilde \gamma_0] \in FH_*(M)$.
\end{prop}
We consider the domain $\Sigma_{\delta} = {\mathbb R} \times [0,1] \setminus \{(0, \delta)\}$, $\delta \to 0$ with the cylindrical structure as in figure \ref{fig:lagFloer:albersdomain}, where $r = \epsilon \delta$ for some $\epsilon \ll 1$.
\begin{figure}\label{fig:lagFloer:albersdomain}
\end{figure}
We choose a regular pair $(H_0, \mathbf J_0)$ for $FH_*(M, L)$ and $(H_1, \mathbf J_1)$ for $FH_*(M)$. The shaded half-annulus labelled by $H_0$ is centered at $(0,0)$, with outer radius $e^{-2}$ and inner radius $2\delta$. The cylindrical coordinates on the shaded half-annulus are given by the biholomorphic map
$$e_{a, \delta} : [\ln (2\delta) + 1, -1] \times [0,1] \to \Sigma_{\delta} : (b, \theta) \mapsto s+it = e^{b -1 + \pi i \theta},$$
and we put on it $(H_{a,\theta}, J_{a, \theta})$ using $e_{a, \delta}$. When $\delta \to 0$, the length of the cylinder goes to $\infty$.
We then solve an equation for $(u, \delta)$ of the type \eqref{lagFloer:deformeddomains} with the above domain and the cylindrical data. In the limit $\delta \to 0$, the domain splits into the domain for the half-pair-of-pants in \S\ref{lagFloer:prod} together with the ``chimney domain'' used to define the map $\mathscr A$, see figure \ref{fig:lagFloer:chimneydomain}.
\begin{figure}\label{fig:lagFloer:chimneydomain}
\end{figure}
Then a gluing argument similar to the one in \cite{Albers} proves the statement.
The following corollary of proposition \ref{lagFloer:compatAlbers} gives the image of the identity $1\!\!1 \in FH_*(M)$ under the comparison map $\mathscr A$:
\begin{prop}\label{lagFloer:albersidentity}
Suppose that the identity exists for the half-pair-of-pants product defined in \S\ref{lagFloer:prod}; then $\mathscr A(1\!\!1) = 1\!\!1_L$.
\end{prop}
{\it Proof:}
According to proposition \ref{lagFloer:compatAlbers}, we only have to show that the action of $1\!\!1$ on $FH_*(M, L)$ gives the identity map. We consider the domain in figure \ref{fig:lagFloer:pssdomain}, for $\delta \ll 1$.
\begin{figure}\label{fig:lagFloer:pssdomain}
\end{figure}
When $\delta > 0$, this again leads to the identity map on the Lagrangian Floer homology $FH_*(M, L)$. In the limit when $\delta \to 0$, the domain splits into two parts, one of which is described in figure \ref{fig:lagFloer:actiondomain}. The other one is the ``capped domain'' in the description of PSS map in \cite{PSS}, see figure \ref{fig:lagFloer:psscapdomain}.
\begin{figure}\label{fig:lagFloer:psscapdomain}
\end{figure}
An argument similar to the one in \S\ref{lagFloer:prod} shows that counting the dimension $0$ moduli spaces of caps gives $1\!\!1 \in FH_*(M)$, and a gluing argument then shows that the action of $1\!\!1$ on $FH_*(M, L)$ is indeed the identity map.
\qed
\section{Seidel's construction for Lagrangian Floer homology} \label{lagSeidel}
For the discussion in this section, we impose furthermore the following
\begin{assumption}\label{lagSeidel:assume}
$FH_*(M,L)$ is non-vanishing, in particular, $L$ is non-displaceable in $M$.
\end{assumption}
\noindent
By this assumption, any (time-dependent) Hamiltonian function has a contractible flow line connecting points of $L$.
\subsection{Path group and action}\label{lagSeidel:groupact}
Let ${\rm Ham}_L(M, \omega)$ be the subgroup of ${\rm Ham}(M, \omega)$ that preserves $L$, i.e.
$$\phi \in {\rm Ham}_L(M, \omega) \iff \phi \in {\rm Ham}(M, \omega) \text{ and } \phi(L) = L.$$
We consider the following path group in ${\rm Ham}(M, \omega)$:
\begin{definition}\label{lagSeidel:definition}
${\mathcal{P}}_L{\rm Ham}(M, \omega) := \{g: ([0,1]; \{0\}, \{1\}) \to ({\rm Ham}(M, \omega); id, {\rm Ham}_L(M, \omega))\}$ is a group with pointwise composition:
$$(g \circ h)_t = g_t h_t.$$
For $g \in {\mathcal{P}}_L{\rm Ham}(M,\omega)$, its action on a path $l : ([0,1]; \{0,1\}) \to (M, L)$ is
$$(g\circ l)(t) = l^g(t) := g_t\circ l(t).$$
\end{definition}
Suppose that $g$ is generated by $K : [0,1] \times M \to {\mathbb R}$; then we have $g^*\alpha_{H} = \alpha_{H^g}$ and $g^*(,)_{\mathbf J} = (,)_{\mathbf J^g}$, where
$$H^g(t,x) = H(t, g_tx) - K(t,g_tx) \text{ and } J_t^g(x) = dg_t^{-1} \circ J_t(g_tx) \circ dg_t.$$
In particular, we have $(H^g)^h = H^{gh}$ and $(\mathbf J^g)^h = \mathbf J^{gh}$.
Let $\phi_t$ denote the Hamiltonian isotopy generated by $H_t$, then $H^g_t$ generates $g^{-1}_t \phi_t$. It follows that the connecting Hamiltonian flow lines of $H_t$ and $H_t^g$ are in one-to-one correspondence.
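For the reader's convenience, here is a sketch of this computation, using the standard composition rules for generating Hamiltonians (these rules hold with the sign convention of \S\ref{lagFloer:functeq}): if $F_t$ generates the isotopy $\psi^F_t$ and $G_t$ generates $\psi^G_t$, then $\psi^F_t \circ \psi^G_t$ is generated by $F_t + G_t \circ (\psi^F_t)^{-1}$, and $(\psi^F_t)^{-1}$ is generated by $-F_t \circ \psi^F_t$. Taking $\psi^F_t = g_t^{-1}$, generated by $-K_t \circ g_t$, and $\psi^G_t = \phi_t$, generated by $H_t$, we find that $g_t^{-1}\phi_t$ is generated by
$$\big(-K_t \circ g_t\big) + H_t \circ g_t = H^g_t,$$
as claimed. The identities $(H^g)^h = H^{gh}$ and $(\mathbf J^g)^h = \mathbf J^{gh}$ stated above can be checked by a similar direct substitution.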
In Lalonde-McDuff-Polterovich \cite{LalondeMcDuffPolterovich} it is shown that ${\rm Ham}(M)$ acts trivially on homology (and sends contractible loops in $M$ to contractible loops); the same argument shows that the action of ${\mathcal{P}}_L{\rm Ham}(M, \omega)$ on the space of paths preserves the component ${\mathcal{P}}_L M$.
Most computations in the following are parallel to the corresponding ones in \cite{Seidel}.
\begin{prop}\label{lagSeidel:lifting}
The action of ${\mathcal{P}}_L{\rm Ham}(M,\omega)$ on ${\mathcal{P}}_L M$ can be lifted to an action of $\widetilde{{\mathcal{P}}}_L{\rm Ham}(M,\omega)$ on the covering $\widetilde{{\mathcal{P}}}_L M$, where
$$\widetilde{{\mathcal{P}}}_L{\rm Ham}(M,\omega) := \left\{\left.(g, \widetilde g) \in {\mathcal{P}}_L{\rm Ham}(M,\omega) \times {\rm Homeo}(\widetilde{{\mathcal{P}}}_L M) \right| \widetilde{g} \text{ lifts the action of } g \right\}$$
\end{prop}
\proof
We only need to show that the action can be lifted. Suppose $\gamma : S^1 \to {\mathcal{P}}_LM$ is a loop that can be lifted to $\widetilde {\mathcal{P}}_LM$, then it is represented by a map $B: \left(S^1 \times [0,1], S^1 \times \{0,1\}\right) \to (M, L)$, such that $\omega(B) = \mu_L(B) = 0$. The loop $\gamma^g = \{g(\gamma_s)\}_{s \in S^1}$ is represented by $B^g(s, t) = g_t \circ B(s, t)$. Because $dg_t : (B^*TM, \partial B^*TL) \to ((B^g)^*TM, (\partial B^g)^*TL)$ is an isomorphism of symplectic bundles preserving the Lagrangian boundary conditions, it follows that $\mu_L(B^g) = \mu_L(B) = 0$.
We compute $(B^g)^*\omega = \omega\left(\frac{\partial B^g}{\partial s}, \frac{\partial B^g}{\partial t}\right) ds\wedge dt = B^*\omega + d\theta$ with $\theta = K(t, B^g(s, t)) dt$. Since $\theta|_{\partial(S^1 \times [0,1])} = 0$ we find that $\omega(B^g) = \omega(B) = 0$. Thus, $\gamma^g$ again can be lifted to $\widetilde {\mathcal{P}}_LM$, which implies that the action of $g$ can be lifted.
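For the reader's convenience, the chain-rule steps behind the computation of $(B^g)^*\omega$ are as follows (with the convention $dK_t = -\iota_{X_{K_t}}\omega$ of \S\ref{lagFloer:functeq} for the generating Hamiltonian $K$):
$$\frac{\partial B^g}{\partial s} = dg_t\Big(\frac{\partial B}{\partial s}\Big), \qquad
\frac{\partial B^g}{\partial t} = dg_t\Big(\frac{\partial B}{\partial t}\Big) + X_{K_t}(B^g),$$
and, since each $g_t$ is a symplectomorphism,
$$\omega\Big(\frac{\partial B^g}{\partial s}, \frac{\partial B^g}{\partial t}\Big)
= \omega\Big(\frac{\partial B}{\partial s}, \frac{\partial B}{\partial t}\Big)
+ dK_t\Big(\frac{\partial B^g}{\partial s}\Big)
= \omega\Big(\frac{\partial B}{\partial s}, \frac{\partial B}{\partial t}\Big)
+ \frac{\partial}{\partial s}\Big(K\big(t, B^g(s,t)\big)\Big),$$
which is exactly the statement $(B^g)^*\omega = B^*\omega + d\theta$ above.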
\qed
The groups fit into the exact sequence:
$$0\to \Gamma_L \to \widetilde{{\mathcal{P}}}_L{\rm Ham}(M,\omega) \to {\mathcal{P}}_L{\rm Ham}(M,\omega) \to 0,$$
and passing to homotopy, we get the exact sequence:
$$\Gamma_L \to \widetilde \pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega)) \to \pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega)) \to 0.$$
Let $\widetilde l = [l, w] \in \widetilde {\mathcal{P}}_L M$ (not necessarily a critical point of any functional) and $\Phi_z : T_{w(z)} M \to {\mathbb C}^n$ be any trivialization satisfying $\Phi_r : T_{w(r)}L \to {\mathbb R}^n$ for $r \in [-1, 1]$. Let $\Phi^{\widetilde g}_z$ be a similar trivialization defined for $\widetilde l^{\widetilde g} = [l^g, w^{\widetilde g}]$.
Now consider
$$G_t = \Phi^{\widetilde g}_{e^{i\pi t}} \circ dg_t \circ \Phi_{e^{i\pi t}}^{-1} : {\mathbb C}^n \to {\mathbb C}^n.$$
Then $\{G_t{\mathbb R}^n\}$ is a loop of Lagrangian subspaces in ${\mathbb C}^n$. The following definition then does not depend on either the trivialization or the choice of $\widetilde l$.
\begin{definition}\label{lagSeidel:actionind}
The \emph{Maslov degree} of $\widetilde g$ is $\mu(\widetilde g) = \mu(G_t{\mathbb R}^n)$. \qed
\end{definition}
\begin{prop}\label{lagSeidel:index}
Let $\widetilde{l} = [l, w]$ be a critical point of $a_{H^g}$; then $\widetilde{l}^{\widetilde{g}}$ is a critical point of $a_H$. Furthermore, $\widetilde l$ is non-degenerate if and only if $\widetilde l^{\widetilde g}$ is so. For such critical points, we have $\mu(\widetilde g) = \mu_H(\widetilde{l}^{\widetilde{g}}) - \mu_{H^g}(\widetilde{l})$. It follows that $\mu : \widetilde g \mapsto \mu(\widetilde g)$ defines a group homomorphism $\mu: \widetilde \pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega)) \to {\mathbb{Z}}$.
\end{prop}
\proof
A direct computation from the definitions establishes the first statement of the proposition.
Suppose $l$ is non-degenerate, then $dg_1^{-1} \circ d\phi_1 (T_{l(0)}L) \pitchfork T_{l(1)}L \iff d\phi_1 (T_{l(0)}L) \pitchfork T_{l^g(1)}L$ since $dg_1$ preserves $TL$, and so the second statement follows.
Let $\widetilde l^{\widetilde g} = [l^g, w^g]$ and $\Phi_z : T_{w^g(z)}M \to {\mathbb C}^n$ be a trivialization that defines $\mu_H(\widetilde l^{\widetilde g})$, then
$$\mu_H(\widetilde l^{\widetilde g}) = \mu(E_t{\mathbb R}^n, {\mathbb R}^n), \text{ where } E_t = \Phi_{e^{i\pi t}}\circ d\phi_t \circ \Phi_1^{-1} : {\mathbb C}^n \to {\mathbb C}^n.$$
Let $\Phi^g_z : T_{w(z)}M \to {\mathbb C}^n$ be the trivialization defining $\mu_{H^g}(\widetilde l)$, then
$$\mu_{H^g}(\widetilde l) = \mu(E^g_t{\mathbb R}^n, {\mathbb R}^n), \text{ where } E^g_t = \Phi^g_{e^{i\pi t}}\circ dg^{-1}_t \circ d\phi_t \circ \left(\Phi^g_1\right)^{-1} : {\mathbb C}^n \to {\mathbb C}^n.$$
Suppose $\Phi^g_1 = \Phi_1$ and let $G_t^{-1} = \Phi^g_{e^{i\pi t}}\circ dg^{-1}_t \circ \Phi_{e^{i\pi t}}^{-1}$, then $E_t = G_t \circ E^g_t = G_t \# E^g_t$ because $G_t$ is a loop. Thus the property of the Maslov index of Lagrangian paths (see \cite{RobbinSalamon1} theorem $2.3$) gives
$$\mu(E_t{\mathbb R}^n, {\mathbb R}^n) = \mu(G_t) + \mu(E^g_t {\mathbb R}^n, {\mathbb R}^n) \Rightarrow \mu(\widetilde g) = \mu_H(\widetilde{l}^{\widetilde{g}}) - \mu_{H^g}(\widetilde{l}).$$
\qed
In a way entirely parallel to \cite{Seidel}, we have
\begin{prop}\label{lagSeidel:fhaction}
For critical points $\widetilde l_-$, $\widetilde l_+$ of $a_{H^g}$, there is a bijection of moduli spaces:
$$\begin{matrix}
{\mathcal{M}}_{H^g, \mathbf J^g}(M, L; \widetilde l_-, \widetilde l_+) & \to & {\mathcal{M}}_{H, \mathbf J}(M, L; \widetilde l_-^g, \widetilde l_+^g)\\
u & \mapsto & u^g
\end{matrix}
$$
where
\begin{equation}\label{lagSeidel:flowaction}
u^g(s, t) := g_t \circ u(s, t).
\end{equation}
Furthermore, $(H, \mathbf J)$ is regular iff $(H^g, \mathbf J^g)$ is.
The map $FC_*(\widetilde g; H, \mathbf J)$ defined by ${\langle}\widetilde l{\rangle} \mapsto {\langle}\widetilde l^g{\rangle}$ passes to homology:
$$FH_*(\widetilde g) : FH_*(H^g, \mathbf J^g) \to FH_{* + \mu(\widetilde g)}(H, \mathbf J)$$
and defines an automorphism of $FH_*(M, L)$ of degree $\mu(\widetilde g)$. Furthermore the following hold:
\begin{enumerate}
\item for $(g, \widetilde g) = (id, id)$, $FH_*(\widetilde g) = id$,
\item for $(g, \widetilde g) = (id, \beta)$ with $\beta \in \Gamma_L$, we have $FH_*(\widetilde g) = \beta \cdot id$,
\item $FH_*(\widetilde g)$ is a $\Lambda_L$-module automorphism of degree $\mu(\widetilde g)$,
\item $FH_*(\widetilde g \circ \widetilde g') = FH_*(\widetilde g)\circ FH_*(\widetilde g')$.
\end{enumerate}
\qed
\end{prop}
\subsection{Homotopy invariance}\label{lagSeidel:htpyinv}
We consider a smooth path $\{g_r\}_{r \in [0,1]}$ starting from the identity in ${\mathcal{P}}_L{\rm Ham}(M, \omega)$ and a lift $\{(g_r, \widetilde g_r)\}$ of it to $\widetilde {\mathcal{P}}_L{\rm Ham}(M, \omega)$. Then proposition \ref{lagSeidel:index} implies that $\mu(\widetilde g_r) = 0$ for all $r \in [0,1]$.
The path $\{g_r\}$ corresponds to a smooth family $\{g_{r,t}\}_{(r,t) \in [0,1]^2}$ of Hamiltonian diffeomorphisms in ${\rm Ham}(M, \omega)$ so that
$$g_{0,t} = g_{r, 0} = id \text{ and } g_{r, 1} \in {\rm Ham}_L(M, \omega) \text{ for all } r \in [0,1].$$
Choose a smooth family of Hamiltonians $K_r : [0,1] \times M \to {\mathbb R}$ for $r \in [0,1]$ so that $K_r$ generates $g_r$ and $K_0 = 0$. Let
$$H^{g_r}(t, x) = H(t, g_{r,t}(x)) - K_r(t, g_{r,t}(x)) \text{ and } J_t^{g_r}(x) = dg_{r, t}^{-1} \circ J_t(g_{r, t}(x)) \circ dg_{r, t}.$$
Let $(H_0, \mathbf J_0) = \{(H_t, J_t)\}_{t \in [0,1]}$ and $(H_1, \mathbf J_1) = \{(H^{g_1}_t, J^{g_1}_t)\}_{t \in [0,1]}$, then the construction in the last subsection gives
$$FH_*(\widetilde g_1) : FH_*(H_1, \mathbf J_1) \to FH_*(H_0, \mathbf J_0).$$
Let $(\bar H, \bar{\mathbf J}) = \{(H_{s,t}, J_{s,t})\}_{(s,t) \in {\mathbb R} \times [0,1]}$ be a regular homotopy connecting $(H_0, \mathbf J_0)$ and $(H_1, \mathbf J_1)$:
$$(\bar H_s, \bar{\mathbf J}_s) = \left\{\begin{matrix}(H_1, \mathbf J_1) & s {\leqslant} -1 \\ (H_0, \mathbf J_0) & s {\geqslant} 1 \end{matrix}\right. .$$
We consider the moduli spaces of the solutions of the following equation for maps $u : {\mathbb R} \times [0,1] \to M$ with $\partial u : {\mathbb R} \times \{0,1\} \to L$:
$$\frac{\partial u}{\partial s} + J_{s, t}(u)
\left(\frac{\partial u}{\partial t} - X_{H_{s, t}} (u)\right) = 0.$$
Here $(\bar H, \bar{\mathbf J})$ being regular means that all solutions $u$ are regular, i.e. their linearizations are surjective.
The moduli space ${\mathcal{M}}_{\bar H, \bar{\mathbf J}}(M, L; \widetilde l_-, \widetilde l_+)$ denotes the space of solutions $u$ that converge to Hamiltonian paths when $s \to \pm \infty$:
$$\lim_{s\to -\infty} u(s, \cdot) = \widetilde l_-^{g_1^{-1}} \text{ and } \lim_{s\to +\infty} u(s, \cdot) = \widetilde l_+,$$
where $\widetilde l_\pm$ are critical points of $a_{H}$. The dimension of the moduli space is given by
$$\mu_H(\widetilde l_-) - \mu_H(\widetilde l_+),$$
since there is no ${\mathbb R}$-action anymore.
Then the continuation map on the chain level
$$\Phi_{\bar H, \bar{\mathbf J}}: FC_*(H_1) \to FC_*(H_0)$$
is defined by counting dimension $0$ moduli spaces:
$$\Phi_{\bar H, \bar{\mathbf J}}(\widetilde l_-^{g_1^{-1}}) = \sum_{\widetilde l_+} \#{\mathcal{M}}_{\bar H, \bar{\mathbf J}}(M, L; \widetilde l_-, \widetilde l_+) \widetilde l_+.$$
That $\Phi_{\bar H, \bar{\mathbf J}}$ is a chain map is shown by considering the dimension $1$ moduli spaces. Thus we have the continuation map for Floer homology, which is also denoted $\Phi_{\bar H, \bar{\mathbf J}}$. The homotopy invariance of $FH_*(\widetilde g)$ is equivalent to the following:
\begin{prop}\label{lagSeidel:deformhtpy}
With the above setup, we have $FH_*(\widetilde g_1) = \Phi_{\bar H, \bar{\mathbf J}}$.
\end{prop}
As in \cite{Seidel}, we consider the deformation of homotopies, from the trivial homotopy to $(\bar H, \bar{\mathbf J})$ by the curve $\{(g_r, \widetilde g_r)\}$, which is a family $(\widetilde H, \widetilde{J}) = \{(H_{r,s,t}, J_{r,s,t})\}_{(r,s,t) \in [0,1] \times {\mathbb R} \times [0,1]}$ where
\begin{eqnarray*}
H_{r,s,t}(x) = H^{g_r}_t(x), & J_{r,s,t}(x) = J^{g_r}_t(x) & \text{ for } s {\leqslant} -1\\
H_{r,s,t}(x) = H_t(x), & J_{r,s,t}(x) = J_t(x) & \text{ for } s {\geqslant} 1\\
H_{0,s,t}(x) = H_t(x), & J_{0,s,t}(x) = J_t(x) & \text{ and } \\
H_{1,s,t}(x) = H_{s,t}(x), & J_{1,s,t}(x) = J_{s,t}(x). &
\end{eqnarray*}
The equation that we are now concerned with is the following:
\begin{equation}\label{lagSeidel:homotopyeq}\frac{\partial u}{\partial s} + J_{r, s, t}(u)
\left(\frac{\partial u}{\partial t} - X_{H_{r, s, t}} (u)\right) = 0,
\end{equation}
for the pair $(r, u)$, where $r \in [0,1]$ and $u: {\mathbb R} \times [0,1] \to M$ with $\partial u : {\mathbb R} \times \{0,1\} \to L$. Let ${\mathcal{M}}_{\widetilde H, \widetilde{\mathbf J}}(M, L; \widetilde l_-, \widetilde l_+)$ denote the moduli space of solutions $(r, u)$ so that $u$ solves the equation at the parameter $r$ and converges to Hamiltonian paths as $s \to \pm \infty$, i.e.:
$$\lim_{s\to -\infty} u(s, \cdot) = \widetilde l_-^{g_r^{-1}} \text{ and } \lim_{s\to +\infty} u(s, \cdot) = \widetilde l_+,$$
where $\widetilde l_\pm$ are critical points of $a_H$. Then the expected dimension of this moduli space is
$$\mu_H(\widetilde l_-) - \mu_H(\widetilde l_+) + 1,$$
because of the extra parameter $r$.
The deformation of homotopies is said to be regular if the linearized operator for \eqref{lagSeidel:homotopyeq} is surjective for all $(r,u)$ and no bubbling off of either spheres or discs occurs for the moduli spaces of dimension ${\leqslant} 1$.
We note that here the monotonicity guarantees the existence of regular deformations of homotopies.
\qed
\subsection{Module property and Seidel element}\label{lagSeidel:moduleproperty}
\begin{prop}\label{lagSeidel:moduleprop}
The map $FH_*(\tilde g)$ is a module map with respect to the half pair of pants product on $FH_*(M, L)$, i.e. for $[\tilde l_-], [\tilde l_0] \in FH_*(M, L)$, we have
$$FH_*(\tilde g)([\tilde l_-] * [\tilde l_0]) = FH_*(\tilde g)([\tilde l_-]) * [\tilde l_0]$$
\end{prop}
{\it Proof:}
Because of the homotopy invariance, we may reparametrize $g$ so that $g_t = id$ for $t \in [0, \frac{1}{2}]$. Consider the half pair of pants product defined by the punctured strip as in Figure \ref{fig:lagFloer:ppdomain}, \S\ref{lagFloer:prod}. Let $(\mathbf H, \mathbf J)$ be a regular pair which pulls back to the ends $e_\pm$ and $e_0$ respectively as $(H_\pm, J_\pm)$ and $(H_0, J_0)$. Then the pair $(\mathbf H^g, \mathbf J^g)$ defined by
$$\mathbf H^g(s, t, x) := \mathbf H(s, t, g_tx) - K(s, t, g_tx) \text{ and } \mathbf J^g(s, t, x) := dg_t^{-1} \circ \mathbf J_{s, t}(x) \circ d g_t$$
pulls back to the ends $e_\pm$ and $e_0$ respectively as $(H^g_\pm, J^g_\pm)$ and $(H_0, J_0)$.
Let $\tilde l^g_\pm$ and $\tilde l_\pm$ be critical points of the action functionals $a_{H_\pm}$ and $a_{H^g_\pm}$ respectively and $\tilde l_0$ a critical point of $a_{H_0}$. We then have the isomorphism of moduli spaces as in Proposition \ref{lagSeidel:fhaction}
$$\mathcal M_{\mathbf H^g, \mathbf J^g}(M, L; \tilde l_-, \tilde l_0, \tilde l_+) \cong \mathcal M_{\mathbf H, \mathbf J}(M, L; \tilde l_-^g, \tilde l_0, \tilde l_+^g): u \mapsto u^g,$$
where $u^g(s, t) := g_t \circ u(s, t)$.
The statement then follows.
\qed
From the properties in proposition \ref{lagSeidel:fhaction} and the homotopy invariance, we may define similarly the Seidel element for the Lagrangian Floer homology. Here we have to assume more:
\begin{definition}\label{lagSeidel:identity}
Suppose that $FH_*(M, L)$ is non-zero and has an identity element with respect to the half-pair-of-pants product defined above, which is denoted $1\!\!1_L$.
Then
$$F\Psi_{\widetilde g, L} := FH_*(\widetilde g)(1\!\!1_L) \in FH_*(M, L)$$
is the Seidel element for the class $[\widetilde g] \in \widetilde\pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega))$.
\end{definition}
\noindent
It follows that in this case, for $[\widetilde l] \in FH_*(M, L)$ we have
$$FH_*(\widetilde g)([\widetilde l]) = F\Psi_{\widetilde g, L} * [\widetilde l].$$
The assumption above is satisfied in many cases, for example when $L$ is the diagonal in $M \times M$.
\subsection{Hamiltonian fibrations over a disc}\label{lagSeidel:hamfib}
The unit disc $D^2$ in ${\mathbb C}$ can be parametrized by the upper half plane $\bar {\mathbb{H}}$ compactified by ${\mathbb R}$ and a point at $\infty$.
\begin{notation}The following notations are only used in this section. They are NOT compatible with the notations for the same objects used elsewhere in this paper. In the parametrization by ${\mathbb{H}}$, let
$$D^2_{\pm} = \{z \in \bar {\mathbb{H}} | \pm(|z| - 1) {\geqslant} 0\}.$$
\end{notation}
The two half discs can be identified by the map:
$$D^2_+ \to D^2_- : z \mapsto {\bar z}^{-1} \text{ or } re^{i\theta} \mapsto r^{-1}e^{i\theta}.$$
We consider the fibration over $D^2$ defined from an element $g \in {\mathcal{P}}_L{\rm Ham}(M,\omega)$:
$$P_{g} = M \times D^2_{+} \sqcup M \times {D^2_{-}} / \sim : (x, e^{i\pi t}) \sim (g_{t}(x), e^{i\pi t}) \text{ for } t \in [0, 1].$$
Let $\pi : P_g \to D^2$ denote the projection. We note that along the $S^1$-boundary, we have the restricted bundle $N$ obtained as the union of the copies of $L$ in each fiber; it is a Lagrangian submanifold of $P_g$. Note that $N \simeq L \times S^1$ over $S^1$ if the restriction of $g_1$ to $L$ is diffeotopic to the identity.
A choice of the lifting $\widetilde g \in \widetilde {\mathcal{P}}_L{\rm Ham}(M,\omega)$ amounts again to the choice of a section class $\sigma_{\widetilde g}$ in $\pi_2(P_{g}, N)$ as follows.
For $x \in L$, consider $\widetilde l = [x, x] \in \widetilde {\mathcal{P}}_L$ and let $\widetilde l^{\widetilde g} = [g_t(x), w]$ with
$$w : D^2_+ \to M : w(e^{i\pi t}) = g_t(x).$$
Via the identification of $D^2_\pm$, we write
$$w_- : D^2_- \to M: w_-(z) = w({\bar z}^{-1}),$$
in particular, $w_-(e^{i\pi t}) = g_t(x)$ as well. Now the following section in $P_g$ represents $\sigma_{\widetilde g}$:
$$\{x\} \sqcup \{w_-\} / \sim : (x, e^{i\pi t}) \sim (g_{t}(x), e^{i\pi t}) \text{ for } t \in [0,1],$$
where $\{x\}$ and $\{w_-\}$ denote, respectively, the graph of the constant map $x$ over $D^2_+$ and the graph of the map $w_-$ over $D^2_-$.
\begin{definition}\label{lagSeidel:vertmaslov}
Let the smooth map $u: D^2 \to P_g$ represent $B \in \pi_2(P_g, N)$. The \emph{vertical Maslov index} of $B$, denoted $\mu^v(B)$ is the Maslov index of the bundle pair $(u^*T^vP_g, (\partial u)^*T^vN)$, where $T^v = \ker d\pi$ denotes the respective vertical tangent bundles.
\end{definition}
\noindent
It is not hard to show that the above is well defined, i.e. independent of the choice of the smooth map $u$ representing $B$. We then have the following
\begin{prop}\label{lagSeidel:maslovdeg}
$\mu(\widetilde g) = \mu^v([\sigma_{\widetilde g}])$.
\end{prop}
{\it Proof:}
The trivial Lagrangian path $T^v_xN = T_xL$ over $D^2_+$ is isotoped to $G_t(x){\mathbb R}^n$ over $D^2_-$, via any trivialization chosen for $u^*T^vP_{g}$. The proposition follows from the definitions.
\qed
\begin{remark}
\rm{Since we will not need it in this article, we leave it to the reader to check that the definition of the action of the paths in ${\mathcal{P}}_L{\rm Ham}(M,\omega)$ on the relative Floer homology can be interpreted geometrically on this bundle over the 2-disc, roughly in the way the absolute Seidel morphism was interpreted in Lalonde-McDuff-Polterovich \cite{LalondeMcDuffPolterovich} as a map from the quantum homology of the fiber at the north pole to the quantum homology of the fiber at the south pole in a fibration over the 2-sphere.
For this purpose, one considers the fibers $(M_{\pm 1}, L_{\pm 1}) = \pi^{-1}(\pm 1)$. The natural map from $FH_*(M_1, L_1)$ to $FH_*(M_{-1}, L_{-1})$ can be defined by the pearl complex (i.e. linear clusters). Namely, one flows inside $M_1$ from a critical point of the Morse function on $L_1$ in a linear cluster until that cluster reaches a pseudo-holomorphic section $\sigma$ of $P_g$ with boundary on $N$; one then flows along a linear cluster in the fiber $M_{-1}$, starting from the point $\sigma \cap M_{-1} \in L_{-1}$, until it reaches some critical point of the Morse function on $L_{-1} \subset M_{-1}$.
}
\end{remark}
\subsection{Compatibility among the actions}\label{lagSeidel:compatiblity}
We start by noting the obvious inclusion:
$$\Omega_0{\rm Ham}(M, \omega) \subset {\mathcal{P}}_L{\rm Ham}(M, \omega),$$
where $\Omega_0{\rm Ham}(M, \omega)$ denotes the group of smooth loops in ${\rm Ham}(M, \omega)$ based at the identity.
Recall that in \cite{Seidel}, the covering $\widetilde \Omega_0{\rm Ham}(M, \omega)$ is defined as follows (cf. Proposition \ref{lagSeidel:lifting}):
$$\widetilde \Omega_0{\rm Ham}(M, \omega) := \{\left.(g, \widetilde g) \in \Omega_0{\rm Ham}(M, \omega) \times {\rm Homeo}(\widetilde \Omega M) \right| \widetilde g \text{ lifts the action of } g\}.$$
\begin{lemma}\label{lagSeidel:groupinclusion}
We have the inclusion of groups
$$\widetilde\Omega_0{\rm Ham}(M, \omega) \subset \widetilde{\mathcal{P}}_L{\rm Ham}(M, \omega),$$
extending the inclusion $\Gamma_\omega \xrightarrow i \Gamma_L$ in \S\ref{lagFloer:novikov}.
\end{lemma}
{\it Proof:} We show that
$$\widetilde\Omega_0{\rm Ham}(M, \omega) \subset \widetilde{\mathcal{P}}^0_L{\rm Ham}(M, \omega),$$
where
$$\widetilde {\mathcal{P}}^0_L{\rm Ham}(M, \omega) := \left\{\left.(g, \widetilde g) \in \widetilde {\mathcal{P}}_L{\rm Ham}(M, \omega) \right| g_1|_L = {\rm id} \in {\rm Diff}(L)\right\}.$$
Let
$$\Omega_L M = \Omega M \cap {\mathcal{P}}_L M$$
be the space of loops in $M$ starting at points in $L$. Then an element of $\Omega_0{\rm Ham}(M, \omega)$ or ${\mathcal{P}}_L{\rm Ham}(M, \omega)$ is determined by how it acts on $\Omega_L M$. This fact gives a definition of the inclusion $\Omega_0{\rm Ham}(M, \omega) \hookrightarrow {\mathcal{P}}_L{\rm Ham}(M, \omega)$.
Let $\pi : \widetilde \Omega M \to \Omega M$ and $\pi_L : \widetilde {\mathcal{P}}_L M \to {\mathcal{P}}_L M$ be the covering projections. Consider
$$\widetilde \Omega_L M := \pi^{-1}(\Omega_L M) \text{ and } \widetilde {\mathcal{P}}_L^0 M := \pi_L^{-1}(\Omega_LM).$$
Then by definition, we have
$$\widetilde \Omega_LM = \{(l, w_\Omega) | l \in \Omega_LM \text{ and } w_\Omega : (D^2, S^1) \to (M, l)\} / \sim_\Omega$$
$$\widetilde {\mathcal{P}}_L^0M = \{(l, w_{\mathcal{P}}) | l \in \Omega_LM \text{ and } w_{\mathcal{P}} : (D^2_+; \partial_+, \partial_0) \to (M; l, L)\} / \sim_{\mathcal{P}}$$
where
$$w_\Omega \sim_\Omega w'_\Omega \iff I_\omega(v_\Omega) = I_c(v_\Omega) = 0 \text{ for } v_\Omega = w_\Omega \#(-w'_\Omega)$$
$$w_{\mathcal{P}} \sim_{\mathcal{P}} w'_{\mathcal{P}} \iff I_\omega(v_{\mathcal{P}}) = I_\mu(v_{\mathcal{P}}) = 0 \text{ for } v_{\mathcal{P}} = w_{\mathcal{P}} \#(-w'_{\mathcal{P}})$$
Let us choose and fix a smooth map
$$\iota: (D^2_+; \partial_+, \partial_0) \to (D^2; S^1, \{1\})$$
which contracts $\partial_0$ to $\{1\}$ and restricts to a diffeomorphism from $D^2_+\setminus\partial_0$ onto $D^2\setminus\{1\}$.
For $w_\Omega$ as above, we set
$$\widetilde w_\Omega := w_\Omega \circ \iota: (D^2_+; \partial_+, \partial_0) \to (M; l, L),$$
as well as
$$w_\Omega \sim_\Omega w'_\Omega \iff \widetilde w_\Omega \sim_{\mathcal{P}} \widetilde w'_\Omega.$$
The ``$\Rightarrow$'' above is obvious. The ``$\Leftarrow$'' holds because $I_\mu = 2 I_c$ on maps of the form $\widetilde w_\Omega \#(- \widetilde w'_\Omega)$.
In particular, $\iota$ induces an inclusion $\iota_*: \widetilde \Omega_LM \to \widetilde {\mathcal{P}}_L^0 M$.
On the other hand, for $w_{\mathcal{P}}$ as above, we define $\partial_0 w_{\mathcal{P}}$ by
$$w_{\mathcal{P}}|_{\partial_0}: ([-1, 1], \{\pm 1\}) \to ([-1, 1]/\{\pm 1\}, \{[1]\}) \xrightarrow{\partial_0 w_{\mathcal{P}}} (L, l(0)= l(1)).$$
We then see that $\partial_0 : w_{\mathcal{P}} \mapsto \partial_0 w_{\mathcal{P}}$ defines a map
$$\partial_0^* : \widetilde {\mathcal{P}}_L^0M \to \pi_1(L) / K$$
where $K$ is the image of $\ker I_\omega \cap \ker I_\mu$ under the map $\pi_2(M, L) \to \pi_1(L)$, and there is the exact sequence
$$0 \to \widetilde \Omega_L M \xrightarrow {\iota_*} \widetilde {\mathcal{P}}_L^0 M \xrightarrow {\partial_0^*} \pi_1(L) / K.$$
It follows that $\widetilde {\mathcal{P}}_L^0M$ is a disjoint union of copies of $\widetilde \Omega_L M$.
Now an element in $\widetilde \Omega_0{\rm Ham}(M, \omega)$ is determined by its action on $\widetilde \Omega_LM$ and one in $\widetilde {\mathcal{P}}^0_L{\rm Ham}(M, \omega)$ by its action on $\widetilde {\mathcal{P}}_L^0M$.
It follows that $\widetilde \Omega_0{\rm Ham}(M, \omega)$ is the subgroup of $\widetilde {\mathcal{P}}^0_L{\rm Ham}(M, \omega)$ preserving each copy of $\widetilde \Omega_LM$ in $\widetilde {\mathcal{P}}_L^0M$. The rest of the statement is obvious.
\qed
\begin{remark}\label{lagSeidel:homotopysequence}
\rm{From the lemma, we obtain the exact sequence described in \eqref{intro:commutdiag}:
$$\widetilde \pi_1{\rm Ham}(M, \omega) \to \widetilde \pi_1({\rm Ham}(M, \omega), {\rm Ham}_L(M, \omega)) \to \widetilde \pi_0{\rm Ham}_L(M, \omega) \to 0,$$
where the third term is the quotient. From the extension sequences of the first two groups and from the triviality of $\pi_0{\rm Ham}(M, \omega)$, we have the following extension sequence:
$$0 \to \Gamma' \to \widetilde \pi_0{\rm Ham}_L(M, \omega) \to \pi_0{\rm Ham}_L(M, \omega) \to 0,$$
where $\Gamma'$ is a quotient of $\Gamma_L$.
}
\end{remark}
\begin{theorem}\label{lagSeidel:albersseidel}
Let $[\widetilde \gamma] \in FH_*(M, \omega)$ and $\widetilde g \in \widetilde \Omega_0{\rm Ham}(M, \omega) \subset \widetilde {\mathcal{P}}_L{\rm Ham}(M, \omega)$. Then we have \footnote{The same holds for Seidel elements in quantum homology, for which one applies the construction in \cite{McDuff} for the Hamiltonian Seidel elements, while the relative version is obtained from a similar construction in the fibration over a disc. Correspondingly, one needs to consider $H_2^S(M;\mathbb R)$ and $H_2^S(M, L; \mathbb R)$ instead of $H_2^S(M)$ and $H_2^S(M, L)$ when defining the Novikov rings.}
$$\mathscr A(FH_*(\widetilde g)([\widetilde \gamma])) = FH_*(\widetilde g)\mathscr A([\widetilde \gamma]).$$
\end{theorem}
{\it Proof:}
Recall the description of the ``chimney domain'' in \cite{AbbondandoloSchwarz} as ${\mathbb R} \times [0,1] / \sim$, where $(s, 0) \sim (s,1)$ for $s {\leqslant} 0$ and the conformal structure at $(0,0)$ is given by $\sqrt z$. The domain is depicted in Figure \ref{fig:lagSeidel:schwarzdomain}, where the shaded left half of the strip has its two boundaries glued together, forming a half-infinite cylinder. Let $(H, \mathbf J)$ be a regular pair for both $FH_*(M)$ and $FH_*(M, L)$ so that $(H^g, \mathbf J^g)$ is also regular for both theories.
\begin{figure}
\caption{The chimney domain ${\mathbb R} \times [0,1] / \sim$; the shaded left half of the strip has its two boundary components glued together into a half-infinite cylinder.}
\label{fig:lagSeidel:schwarzdomain}
\end{figure}
We then consider an equation similar to \eqref{lagFloer:floweq}:
\begin{equation}\label{lagSeidel:chimneyeq}
\left\{\begin{matrix}
\frac{\partial u}{\partial s} + J^g_t(u)
\left(\frac{\partial u}{\partial t} - X_{H^g_t}(u)\right) = 0 & \text{ for all }
(s, t) \in {\mathbb R} \times [0,1], \\
u(s, 0) = u(s, 1) & \text{ for } s \leqslant 0 \\
u|_{[0, \infty) \times \{0, 1\}} \subset L
\end{matrix}\right.
\end{equation}
Then $\mathscr A(\tilde \gamma)$ is defined by counting elements of the $0$-dimensional moduli spaces of solutions to \eqref{lagSeidel:chimneyeq}.
We note that $FH_*(\tilde g)([\tilde \gamma])$ is represented by $\tilde \gamma^g$ while $FH_*(\tilde g)([\tilde l])$ is represented by $\tilde l^g$. As in Proposition \ref{lagSeidel:fhaction}, there is a bijection between the moduli spaces of solutions to \eqref{lagSeidel:chimneyeq} for the pair $(H^g, \mathbf J^g)$ and those for the pair $(H, \mathbf J)$, given by
$$u \mapsto u^g, \text{ where } u^g (s, t) := g_t \circ u(s, t) : {\mathbb R} \times [0,1] / \sim \to M.$$
This can be shown by a direct computation with equation \eqref{lagSeidel:chimneyeq}.
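For the reader's convenience, here is a sketch of that computation, under the standard conventions $H^g_t(x) = H_t(g_tx) - K_t(g_tx)$ and $J^g_t(x) = dg_t^{-1} \circ J_t(g_tx) \circ dg_t$, where $K$ generates $g$; these give $dg_t\big(X_{H^g_t}(x)\big) = X_{H_t}(g_tx) - X_{K_t}(g_tx)$. If $u$ solves \eqref{lagSeidel:chimneyeq} for $(H^g, \mathbf J^g)$ and $u^g(s,t) = g_t(u(s,t))$, then $\partial_s u^g = dg_t(\partial_s u)$ and $\partial_t u^g = dg_t(\partial_t u) + X_{K_t}(u^g)$, so that
$$J_t(u^g)\big(\partial_t u^g - X_{H_t}(u^g)\big) = J_t(u^g)\, dg_t\big(\partial_t u - X_{H^g_t}(u)\big) = dg_t\, J^g_t(u)\big(\partial_t u - X_{H^g_t}(u)\big) = -dg_t(\partial_s u) = -\partial_s u^g,$$
i.e. $u^g$ solves the equation for $(H, \mathbf J)$; the boundary and gluing conditions in \eqref{lagSeidel:chimneyeq} are preserved since $g$ is a loop based at the identity.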
It follows that $\mathscr A$ and $FH_*(\tilde g)$ commute at the chain level. The independence of the choices, as well as of $\tilde g$ within its homotopy class, is established as in the case of the Seidel maps.
\qed
\begin{corollary}\label{lagSeidel:3actions}
Assume that the identity element of $FH_*(M, L)$ exists, so that the Seidel element $F\Psi_{\widetilde g, L} \in FH_*(M, L)$ is defined. Let $F\Psi_{\widetilde g}$ denote the Seidel element in $FH_*(M)$; then we have
$$\mathscr A(F\Psi_{\widetilde g}) = F\Psi_{\widetilde g, L}.$$
\end{corollary}
{\it Proof:}
Recall that Proposition \ref{lagFloer:albersidentity} states that $\mathscr A(1\!\!1) = 1\!\!1_L$, where $1\!\!1$ and $1\!\!1_L$ are the respective identity elements of $FH_*(M)$ and $FH_*(M, L)$.
Now replace $[\widetilde \gamma]$ in Theorem \ref{lagSeidel:albersseidel} by $1\!\!1 \in FH_*(M)$ and we obtain the corollary.
\qed
\begin{remark}\label{lagSeidel:commutdiag}
\rm{
The above corollary completes the commutative diagram \eqref{intro:commutdiag}, where the maps $\Psi$ and $\Psi_L$ are defined respectively as
$$\Psi(\widetilde g) := F\Psi_{\widetilde g} \text{ and } \Psi_L(\widetilde g) := F\Psi_{\widetilde g, L}.$$
}
\end{remark}
\section{Reversing the sign of the symplectic structure}\label{reversed:novikov}
We consider here the effects of reversing the symplectic structure on $M$, i.e. the relations between the structures defined on $(M, \omega)$ and $(M, -\omega)$.
Fix a Lagrangian submanifold $L \subset (M, \omega)$.
Let $\omega' = -\omega$, and let $c_1'$ and $\mu'$ denote respectively the Chern class and the Maslov class for the reversed symplectic structure and $L$. Then we obviously have
$$I_{\omega'} = - I_\omega, I_{c'} = -I_c \text{ and } I_{\mu'} = -I_\mu.$$
Correspondingly, we have the Novikov rings $\Lambda_{\omega'}$ and $\Lambda_{L'}$.
Let $\tau$ denote the involution $B \mapsto -B$ of $\pi_2(M)$ and of $\pi_2(M, L)$. It induces involutions $\tau$ of the groups $\Gamma_\omega$ and $\Gamma_L$, as well as isomorphisms of the Novikov rings as graded rings:
\begin{equation}\label{reversed:novikoviso}
\tau: \Lambda_{\omega} \to \Lambda_{\omega'} \text{ and } \tau: \Lambda_{L} \to \Lambda_{L'} :
a_B e^B \mapsto (-1)^{\frac{1}{2}\deg e^{B}} a_{B} e^{\tau (B)}.
\end{equation}
Under our assumption, $\deg e^B$ is always even in either of the two Novikov rings, so the sign in \eqref{reversed:novikoviso} is well defined and the above is an isomorphism over ${\mathbb R}$.
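As a quick check that \eqref{reversed:novikoviso} is multiplicative (using only that the grading is additive and that $\tau(A + B) = \tau(A) + \tau(B)$), we have
$$\tau\big(e^{A}e^{B}\big) = (-1)^{\frac{1}{2}\deg e^{A+B}}\, e^{\tau(A+B)} = \Big((-1)^{\frac{1}{2}\deg e^{A}}e^{\tau(A)}\Big)\Big((-1)^{\frac{1}{2}\deg e^{B}}e^{\tau(B)}\Big) = \tau\big(e^{A}\big)\,\tau\big(e^{B}\big).$$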
\subsection{Quantum ring structure on $QH_*(M)$}\label{reversed:quantring}
Let $2m = \dim_{\mathbb R} M$; then the orientation of $(M, \omega')$ is the $(-1)^m$-multiple of that of $(M, \omega)$, since the volume forms satisfy $(-\omega)^m = (-1)^m \omega^m$.
\begin{lemma}\label{reversed:classicalring}
Let $\pitchfork$ and $\pitchfork'$ denote the intersection products on $H_*(M, \omega)$ and $H_*(M, \omega')$ respectively. Then we have
$$\tau(\alpha \pitchfork \beta) = \tau(\alpha) \pitchfork' \tau(\beta)$$
where $\alpha, \beta \in H_*(M) = H_*(M, \omega) = H_*(M, \omega')$ and
$$\tau : H_*(M, \omega) \to H_*(M, \omega') :
\alpha \mapsto (-1)^m \alpha.
$$
\end{lemma}
{\it Proof:} Let $\{\gamma_j\}$ be a basis of $H_*(M)$, let $\{\gamma_j^*\}$ be its dual basis with respect to the product $\pitchfork$, and let $\{\gamma_j^{*'}\}$ be the dual basis with respect to $\pitchfork'$. Thus we have
$$\gamma_j^{*'} = (-1)^m \gamma_j^*.$$
Let $a, b, c_j$ and $c_j^{*}$ be generic cycles representing $\alpha, \beta, \gamma_j$ and $\gamma_j^{*}$.
The intersection product $\pitchfork$ (respectively $\pitchfork'$) is alternatively written as
$$\alpha \pitchfork \beta = \sum_j {\langle}\alpha, \beta, \gamma_j^*{\rangle}\gamma_j \text{ (respectively } \alpha\pitchfork' \beta = \sum_j {\langle}\alpha, \beta, \gamma_j^{*'}{\rangle}' \gamma_j \text{)},$$
where ${\langle}\alpha, \beta, \gamma_j^*{\rangle}$ is the intersection number of $a\times b\times c_j^*$ with $\triangle$, the minimal diagonal, in $M^3$, oriented by $\omega$. We only need to compare the coefficients in front of the $\gamma_j$'s.
The orientations of the cycle $\triangle$ in $(M, \omega)^3$ and $(M, -\omega)^3$, as well as the orientations of $M^3$ in either case, differ by $(-1)^m$, while the orientations of $\gamma_j^*$ and $\gamma_j^{*'}$ also differ by $(-1)^m$.
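Explicitly, since an intersection number changes sign whenever the orientation of one of the two cycles or of the ambient manifold is reversed, the three discrepancies just listed give
$${\langle}\alpha, \beta, \gamma_j^{*'}{\rangle}' = (-1)^{m}\,(-1)^{m}\,(-1)^{m}\,{\langle}\alpha, \beta, \gamma_j^{*}{\rangle} = (-1)^{m}{\langle}\alpha, \beta, \gamma_j^{*}{\rangle},$$
hence $\alpha \pitchfork' \beta = (-1)^m\, \alpha \pitchfork \beta$ and $\tau(\alpha) \pitchfork' \tau(\beta) = (-1)^{2m}\,\alpha \pitchfork' \beta = (-1)^m\,\alpha \pitchfork \beta = \tau(\alpha \pitchfork \beta)$.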
The lemma then follows.
\qed
We consider the effect on $QH_*(M)$. Since
$$QH_*(M, \omega) = H_*(M) \otimes \Lambda_\omega \text{ and } \tau: \Lambda_\omega \to \Lambda_{\omega'},$$
we define naturally the induced map by:
\begin{equation}\label{reversed:quantumcomap}\tau_*: QH_*(M, \omega) \to QH_*(M, \omega') : \alpha\otimes f \mapsto \tau(\alpha) \otimes \tau(f).\end{equation}
\begin{prop}\label{reversed:quantumcoho}
The map $\tau_*$ defined in \eqref{reversed:quantumcomap} is a ring isomorphism of quantum homologies over the isomorphism $\tau$ of the Novikov rings in \eqref{reversed:novikoviso}.
\end{prop}
{\it Proof:}
The quantum intersection product on $QH_*(M, \omega)$ (resp. $QH_*(M, \omega')$) is denoted $*$ (resp. $*'$). Choose and fix a basis $\{\gamma_j\}$ of $H_*(M)$ and denote by $\{\gamma_j^*\}$ (resp. $\{\gamma_j^{*'}\}$) the dual basis with respect to the intersection product on $(M, \omega)$ (resp. $(M, -\omega)$). Then for $\alpha, \beta \in H_*(M)$:
$$\alpha * \beta = \sum_{j, B} {\langle}\alpha, \beta, \gamma^*_j{\rangle}_B \gamma_j e^{-B} \text{ and } \alpha *' \beta = \sum_{j, B} {\langle}\alpha, \beta, \gamma_j^{*'}{\rangle}'_B \gamma_j e^{-B}.$$
We need to check that $\tau_*\{(\alpha e^A) * (\beta e^B)\} = \tau_*(\alpha e^A) *' \tau_*(\beta e^B)$, where we dropped the $\otimes$ in the expressions.
This follows from Lemma \ref{reversed:qhcoefcompare} below, which compares the coefficients of the two quantum intersection products. Given the lemma, we have
\begin{equation*}
\begin{split}
LHS = & \tau_*\{(\alpha * \beta) e^{A+B}\} = \tau_*\left\{\sum_{j, C} {\langle}\alpha, \beta, \gamma_j^*{\rangle}_C \gamma_j e^{A+B-C}\right\} \\
= & \sum_{j, C} (-1)^{m + I_c(A+B-C)} {\langle}\alpha, \beta, \gamma_j^*{\rangle}_C \gamma_j e^{\tau(A+B-C)}\\
\overset{\lozenge}{=} & \sum_{j, C} (-1)^{I_c(A+B)}{\langle}\alpha, \beta, \gamma_j^{*'}{\rangle}'_{\tau(C)} \gamma_j e^{\tau(A+B-C)} \\
= & \sum_{j, C} {\langle}(-1)^m \alpha, (-1)^m \beta, \gamma_j^{*'}{\rangle}'_{\tau(C)} \gamma_j e^{-\tau(C)} \left\{(-1)^{I_c(A)}e^{\tau(A)}\right\} \left\{(-1)^{I_c(B)}e^{\tau(B)}\right\} \\
= & RHS
\end{split}
\end{equation*}
where the equality marked $\lozenge$ uses Lemma \ref{reversed:qhcoefcompare}.
\qed
\begin{lemma}\label{reversed:qhcoefcompare}
For all $B \in \Gamma_\omega$ and $j$, we have ${\langle}\alpha, \beta, \gamma_j^*{\rangle}_B = (-1)^{m + I_c(B)}{\langle}\alpha, \beta, \gamma_j^{*'}{\rangle}'_{\tau(B)}$.
\end{lemma}
{\it Proof:}
We first recall the definition of the triple intersection ${\langle}\alpha, \beta, \gamma_j^*{\rangle}_B$. Consider the moduli space $\bar{\mathcal{M}}_{0,3}(M, \omega, J; B)$ of $J$-holomorphic spheres in $M$ with $3$ marked points, representing $B \in \Gamma_\omega$. The marked points fix the parametrization of the principal components of the domain, and we assume that they correspond to $0, 1$ and $\infty$ (in that order). Let $ev$ denote the evaluation map
$$ev: \bar{\mathcal{M}}_{0,3}(M, \omega, J; B) \to M^3.$$
Choose and fix generic cycles $a, b$ and $c^*$ in $M$ representing the classes $\alpha, \beta$ and $\gamma_j^*$, so that $ev$ is transversal to $a \times b \times c^*$. The triple intersection is then defined as the following intersection number, when the resulting dimension is $0$:
$${\langle}\alpha, \beta, \gamma_j^*{\rangle}_B = ev_*([\bar{\mathcal{M}}]) \pitchfork a\times b \times c^*.$$
Let $\rho: {\mathbb{CP}}^1 \to {\mathbb{CP}}^1$ denote the standard complex conjugation on ${\mathbb{CP}}^1 = S^2$; in particular, it fixes the $3$ marked points $0, 1$ and $\infty$. We note that $u : S^2 \to (M, \omega, J)$ is $J$-holomorphic and represents $B \in \Gamma_\omega$ iff $\rho(u) : S^2 \xrightarrow{\rho} S^2 \to (M, \omega', J')$ is $J'$-holomorphic and represents $\tau(B) \in \Gamma_{\omega'}$, where $J' = -J$ (which is compatible with $\omega'$). We can in fact establish an explicit identification of the moduli spaces:
$$\rho : \bar{\mathcal{M}}_{0,3}(M, \omega, J; B) \to \bar{\mathcal{M}}_{0,3}(M, \omega', J'; \tau(B)) : u \mapsto \rho(u),$$
where slightly more care is needed in the case of nodal domains. Furthermore, the evaluation maps coincide, i.e.
$$ev = ev' \circ \bar \rho, \text{ where } ev' : \bar{\mathcal{M}}_{0,3}(M, \omega', J'; \tau(B)) \to M^3.$$
It then follows that the two triple intersections coincide up to a sign.
The sign comes from two sources, the manifold $M$ and the moduli spaces ${\mathcal{M}}$. The orientation of $(M, -\omega)$ implies that
$$\gamma_j^{*'} = (-1)^m \gamma_j^* \text{ and } \pitchfork' = (-1)^{3m} \pitchfork.$$
It follows that the overall sign only comes from ${\mathcal{M}}$ and the identification $\rho$.
We now check this sign. Fix $u \in {\mathcal{M}}_{0,3}(M, \omega, J; B)$; then the tangent space at $u$ is the kernel of the linearized operator $D\bar\partial_J$:
$$D\bar\partial_J(\xi)(z) = \nabla\xi(z) + J(u(z)) \circ \nabla \xi(z) \circ j_z + l.o.t., \text{ for } \xi \in \Omega^0(u^*TM), z \in S^2,$$
where $\nabla$ is the connection on $u^*TM$ induced from a Hermitian connection on $TM$ compatible with $(\omega, J)$. The operator $D\bar\partial_J$ can be homotoped through Fredholm operators to the standard $\bar\partial$ operator on the holomorphic vector bundle $u^*T^{1,0}_JM$. Under this homotopy, we obtain an identification of the solution space with $H^0({\mathbb{CP}}^1, u^*T^{1,0}_{J} M)$.
The orientation of ${\mathcal{M}}_{0,3}(M, \omega, J; B)$ at $u$ is then defined by the canonical orientation of the complex vector space $H^0({\mathbb{CP}}^1, u^*T^{1,0}_{J} M)$.
For $v := \rho(u) \in {\mathcal{M}}_{0,3}(M, \omega', J'; \tau(B))$, we have similarly the linearized operator
$$D\bar\partial_{J'}(\zeta)(z) = \nabla' \zeta(z) + J'(v(z)) \circ \nabla' \zeta(z) \circ j_z + l.o.t, \text{ for } \zeta \in \Omega^0(v^*TM),$$
where $\nabla'$ is the induced connection on $v^*TM$ from the same Hermitian connection on $TM$. The following in fact holds:
$$D\bar\partial_{J'} = \rho^*D\bar\partial_J,$$
and thus the homotopy to $\bar\partial$ is pulled back via $\rho$. The orientation of the moduli space ${\mathcal{M}}_{0,3}(M, \omega', J'; \tau(B))$ at $v$ is thus defined by the canonical orientation of the complex vector space $H^0({\mathbb{CP}}^1, v^*T^{1,0}_{J'}(M))$.
The tangent map $d\rho$ at $u$ is:
$$d\rho : \xi \mapsto \rho^*\xi \text{ where } (\rho^*\xi)(z) = \xi(\rho(z)),$$
which induces the following identification as real vector spaces:
$$d\rho : H^0({\mathbb{CP}}^1, E) \to H^0({\mathbb{CP}}^1, \rho^*\bar E),$$
where $E = u^*T^{1,0}_J M$ is a rank $n$ holomorphic vector bundle over ${\mathbb{CP}}^1$. We check that $d\rho$ is complex anti-linear by evaluating $\xi$ and $d\rho(\xi)$ at the respective points of ${\mathbb{CP}}^1$. Since the fibers $E_z$ and $(\rho^*\bar E)_{\rho(z)}$ are identical with opposite complex structures, we have for $\lambda \in {\mathbb C}$:
$$(\rho^*(\lambda \xi)_E)(z) = (\lambda\xi)_E(\rho(z)) = (\bar\lambda\xi)_{\bar E}(\rho(z)) = (\bar\lambda \rho^*\xi)_{\bar E}(z).$$
It follows that the sign of the map $\rho$ with respect to the chosen orientations is $(-1)^{\dim_{\mathbb C} {\mathcal{M}}_{0,3}(M, B)} = (-1)^{m + I_c(B)}$, since a complex anti-linear isomorphism of a complex vector space of complex dimension $d$ changes the orientation by $(-1)^d$. The lemma follows.
\qed
\subsection{Seidel elements in $QH_*(M)$}\label{reversed:seidelelts}
The group ${\rm Ham}(M, \omega)$ is naturally a subgroup of ${\rm Diff}(M)$. Suppose that $\{H_t\}_{t \in [0,1]}$ generates the path $\{g_t\}_{t \in [0,1]}$ in ${\rm Ham}(M, \omega)$; then, regarded as a path in ${\rm Ham}(M, -\omega)$, it is generated by $\{-H_t\}_{t \in [0,1]}$.
We see that
$${\rm Ham}(M, \omega) = {\rm Ham}(M, -\omega) \subset {\rm Diff}(M).$$
We define a \emph{reversion map} $\tau$ on the group of loops:
\begin{equation}\label{reversed:loopgroup}\tau : \Omega_0{\rm Ham}(M, \omega) \to \Omega_0{\rm Ham}(M, -\omega) : g := \{g_t\} \mapsto g^- := \{g_{1-t}\}.
\end{equation}
The following lemma is obvious:
\begin{lemma}\label{reversed:genham}
Suppose $K = \{K_t\}$ generates the loop $g \in \Omega_0{\rm Ham}(M, \omega)$; then $\underline K := \{K_{1-t}\}$ generates the loop $g^- \in \Omega_0{\rm Ham}(M, -\omega)$.
\qed
\end{lemma}
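For completeness, here is the one-line verification; write $X^{\pm\omega}_{F}$ for the Hamiltonian vector field of $F$ with respect to $\pm\omega$, so that $X^{-\omega}_{F} = -X^{\omega}_{F}$. Then
$$\frac{d}{dt}\,g^-_t = \frac{d}{dt}\,g_{1-t} = -X^{\omega}_{K_{1-t}}\circ g_{1-t} = X^{-\omega}_{K_{1-t}}\circ g^-_t = X^{-\omega}_{\underline K_t}\circ g^-_t.$$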
\noindent
We note that the loop $g^-$ is homotopic to $g^{-1}$ in ${\rm Ham}(M, \pm \omega)$, viewed as the same subgroup of ${\rm Diff}(M)$.
The reversion map $\tau$ on $\Omega M$, given by $\tau(\gamma)(t) := \gamma(1-t)$, can be extended to $\widetilde \Omega M$ via:
$$\tau([\gamma, v]) = [\tau(\gamma), \tau(v)] \text{ where } \tau(v) : D^2 \to M : z \mapsto v(\bar z).$$
Then $\tau : \widetilde \Omega_0{\rm Ham}(M, \omega) \to \widetilde \Omega_0{\rm Ham}(M, -\omega)$ is defined by
$$\tau(g, \widetilde g) := \tau \circ (g, \widetilde g) \circ \tau, \text{ i.e. } \tau(g, \widetilde g)(\tau(\widetilde \gamma)) = \tau\left((g, \widetilde g)(\widetilde \gamma)\right) \text{ for } \widetilde \gamma \in \widetilde \Omega M.$$
In the following, we will use the description of the Seidel element $\Psi_{[g]}$ for $[g] \in \pi_1{\rm Ham}(M, \omega)$ in terms of Gromov-Witten invariants in the Hamiltonian fibration $P_{[g]} \to S^2$ defined from $[g]$ as in Lalonde-McDuff-Polterovich \cite{LalondeMcDuffPolterovich}.
\begin{prop}\label{reversed:hamSeidel}
$$\Psi_{\tau([g])} = \tau_*(\Psi_{[g]}) \in QH_*(M, -\omega).$$
\end{prop}
\noindent
{\it Proof:}
Let $(g, \widetilde g) \in \widetilde{\Omega}_0{\rm Ham}(M, \omega)$ and $\Psi_{\widetilde g}$ the corresponding Seidel element.
Let $P_g \xrightarrow \pi S^2$ be the fibration defined by $g$:
$$P_g = D^2_1\times M \cup_g D^2_2 \times M, \text{ where } D^2_1 \times M \ni (e^{2\pi it}, x) \sim (e^{2\pi it}, g_t(x)) \in D^2_2 \times M,$$
and $\kappa$ the coupling form on $P_g$ extending $\omega$ on the fibers. Then, for appropriate $\varepsilon > 0$, $\omega_g = \pi^*\omega_0 + \varepsilon \kappa$ is a symplectic form on $P_g$, where $\omega_0$ is the standard symplectic form on $S^2$ inducing the positive orientation on $D^2_1$ (thus the negative orientation on $D^2_2$). Then $\Psi_{\widetilde g}$ is defined by looking at the section classes in $P_g$.
The corresponding bundle $P_{g^-}$ can be defined similarly; we give an alternative construction below. Let $r : D^2 \to D^2$ be the standard complex conjugation of the unit disc in ${\mathbb C}$, and use the same letter $r$ to denote the induced conjugation map on $S^2 = D^2_1 \cup_\partial D^2_2$. Then $$P_{g^-} = r^*P_g, \kappa^- = -r^*\kappa, P_{g} = r^*P_{g^-}, \kappa = - r^*\kappa^- \text{ and } r^*\omega_0 = -\omega_0,$$
where, of course, $r$ is also used to denote the pull-back maps between the Hamiltonian fibrations in the above. The symplectic form on $P_{g^-}$ is then
$$\omega_{g^-} = \pi^*\omega_0 + \varepsilon\kappa^- \Rightarrow r^*\omega_{g^-} = - \omega_{g},$$
i.e. $r : (P_{g^-}, \omega_{g^-}) \xrightarrow \simeq (P_{g}, -\omega_{g})$ symplectically.
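This is an immediate computation from the relations above, using also that $r$ covers $r$, i.e. $\pi \circ r = r \circ \pi$:
$$r^*\omega_{g^-} = r^*\pi^*\omega_0 + \varepsilon\, r^*\kappa^- = \pi^*\big(r^*\omega_0\big) + \varepsilon\, r^*\kappa^- = -\pi^*\omega_0 - \varepsilon\kappa = -\omega_g.$$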
The two sides will be used interchangeably.
Let $\sigma_0 \in H_2(P_g)$ be the standard reference section class (cf. \cite{McDuff}, Lemma 3.2); for example, when $c_1(TM)$ and $[\omega]$ are not proportional on spherical classes in $M$, we require
$$c_1^v(\sigma_0) = \kappa(\sigma_0) = 0.$$
Then we have $\Psi_{\widetilde g} = e^{\sigma_{\widetilde g}} \Psi_{[g]}$ for some $\sigma_{\widetilde g} \in H_2(M; {\mathbb R})$ and
$$\Psi_{[g]} = \sum_{B, j} {\langle}[M], [M], \iota_*(\gamma_j^*){\rangle}_{\sigma_0 + \iota_*(B)} \gamma_j e^{-B} \in QH_*(M, \omega),$$
where $\iota : M \to P_g$ is the inclusion of a fiber, $B \in H_2(M; {\mathbb R})$ so that $\sigma_0 + \iota_*(B) \in H_2(P_g; {\mathbb{Z}})$ is represented by a section and $\{\gamma_j\}$, $\{\gamma_j^*\}$ are dual bases of $H_*(M)$ under $\pitchfork$ and ${\langle}\ldots{\rangle}_{\ldots}$ denotes the Gromov-Witten invariants in $(P_g, \omega_g)$. It follows that
$$\tau_*(\Psi_{[g]}) = \sum_{B, j} (-1)^{m + I_c(B)}{\langle}[M], [M], \iota_*(\gamma_j^*){\rangle}_{\sigma_0 + \iota_*(B)} \gamma_j e^{-\tau(B)} \in QH_*(M, -\omega).$$
Let $\sigma$ be a section class in $(P_g, \omega_g)$, i.e. $\pi_*\sigma = [(S^2, \omega_0)]$ in the natural orientation, then $\tau(\sigma) := -\sigma$ is a section class in $(P_g, -\omega_g)$, because $\pi_*(-\sigma) = [(S^2, -\omega_0)]$. On the other hand, $\sigma_0^- := \tau(\sigma_0)$ is a standard reference section class as well.
We may write down the Seidel element for $\tau(\widetilde g)$ in $QH_*(M, -\omega)$ as $\Psi_{\tau(\widetilde g)} = e^{\tau(\sigma_{\widetilde g})} \Psi_{\tau([g])}$, where:
\begin{equation*}
\Psi_{\tau([g])} = \sum_{B, j} {\langle}[M], [M], \iota_*(\gamma_j^{*'}){\rangle}'_{\sigma_0^- + \iota_*(\tau(B))} \gamma_j e^{-\tau(B)},
\end{equation*}
and we have to show that:
$${\langle}[M], [M], \iota_*(\gamma_j^{*'}){\rangle}'_{\sigma_0^- + \iota_*(\tau(B))} = (-1)^{m +I_c(B)}{\langle}[M], [M], \iota_*(\gamma_j^*){\rangle}_{\sigma_0 + \iota_*(B)}.$$
The dimension of the relevant moduli spaces is $m + 1+c_1(TP)(\sigma_0 + \iota_*(B))$. Let $[P_g, \omega_g]$ be the fundamental class of $P_g$ with orientation given by $\omega_g$, then
$$[P_g, \omega_g] = (-1)^{m+1} [P_g, -\omega_g] \text{ and }$$
$$[M, \omega] = (-1)^m [M, -\omega].$$
We have also $\gamma_j^{*'} = (-1)^m \gamma_j^*$. It follows from the same argument as in lemma \ref{reversed:qhcoefcompare} that the overall sign for the Gromov-Witten invariants is given by
$$(-1)^{3m+3 + 3m + m+1 + c_1(TP)(\sigma_0 + \iota_*(B))} = (-1)^{m+ c_1(TS^2)([S^2]) + c_1^v(B)} = (-1)^{m + I_c(B)}.$$
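To spell out the arithmetic: the exponent $3m+3+3m+m+1 = 7m+4$ is congruent to $m$ modulo $2$, and, writing $c_1(TP) = \pi^*c_1(TS^2) + c_1^v$ and using the normalization $c_1^v(\sigma_0) = 0$ from above, we get
$$c_1(TP)(\sigma_0 + \iota_*(B)) = c_1(TS^2)([S^2]) + c_1^v(\sigma_0) + c_1^v(\iota_*(B)) = 2 + I_c(B),$$
which yields the stated sign $(-1)^{m + I_c(B)}$.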
\qed
\section{Reversing operations in Lagrangian Floer homology}\label{reversed:lagFloer}
We first define a \emph{reversion map} on $\widetilde {\mathcal{P}}_LM$, which we denote by $\underline\tau$. Let $\widetilde l = [l, w]$ denote a typical element of $\widetilde {\mathcal{P}}_LM$, i.e.
$$l : ([0,1], \{0,1\}) \to (M, L) \text{ and } w: (D^2_+; \partial_+, \partial_0) \to (M; l, L).$$
Then we define $\underline {\widetilde l} := \underline \tau (\widetilde l)$ by
$$\underline l : [0,1] \to M : t \mapsto l(1-t) \text{ and } \underline w: D^2_+ \to M : z \mapsto w(-\bar z).$$
It is obvious that $\underline\tau$ is an involution, i.e. $\underline\tau^2 = id$. We note that the action of $\pi_2(M, L)$ on $\widetilde {\mathcal{P}}_LM$ by deck transformations is intertwined by $\underline\tau$:
$$\underline {(B\circ \widetilde l)} = \underline \tau([l, w \# B]) = [\underline l, \underline {w \# B}] = [\underline l, \underline w \#\tau(B)] = \tau(B) \circ \underline{\widetilde l}.$$
It follows that $\underline\tau$ defines an involution on $\widetilde {\mathcal{P}}_LM$.
Let now $(H, \mathbf J)$ be a regular pair for defining the Floer homology of $(M, L; \omega)$. We consider the \emph{reversed} pair $(\underline H, \underline{\mathbf J})$:
$$\underline H_t = H_{1-t} \text{ and } \underline J_t = -J_{1-t}.$$
Then it is easy to check that the corresponding action functionals satisfy
$$a_{\underline H} (\underline{\widetilde l}) = a_{H}(\widetilde l).$$
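Indeed, assuming as usual that $a_H([l,w])$ is built from the symplectic area $\int_{D^2_+} w^*\omega$ of the cap and the Hamiltonian term $\int_0^1 H_t(l(t))\,dt$, both constituents are unchanged: the reparametrization $z \mapsto -\bar z$ of $D^2_+$ reverses the orientation, which compensates the sign change of the symplectic form, and the Hamiltonian term is invariant under $t \mapsto 1-t$,
$$\int_{D^2_+} \underline w^*\omega' = -\int_{D^2_+} \underline w^*\omega = \int_{D^2_+} w^*\omega, \qquad \int_0^1 \underline H_t(\underline l(t))\,dt = \int_0^1 H_{1-t}(l(1-t))\,dt = \int_0^1 H_t(l(t))\,dt.$$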
In fact, the involution $\tau$ identifies the metric $(,)_{\mathbf J}$ with $(,)_{\underline{\mathbf J}}$ as well. We show that the Floer homologies are identified by $\tau$.
The next three lemmas are obvious.
\begin{lemma}\label{reversed:nondegen}
$l$ is a Hamiltonian path of $H$ in $(M, \omega)$ $\iff$ $\underline{l}$ is a Hamiltonian path of $\underline H$ in $(M, \omega' = -\omega)$. Furthermore, $\widetilde l$ is non-degenerate $\iff$ $\underline{\widetilde l}$ is non-degenerate.
\end{lemma}
{\it Proof:} Let $X_t = \omega(H_t)$ be the Hamiltonian vector field of $H_t$; then $\underline X_t = -\omega(H_{1-t}) = -X_{1-t}$ is the Hamiltonian vector field of $\underline H_t$ with respect to $-\omega$. Let $\phi_t$ and $\underline\phi_t$ be the Hamiltonian isotopies generated by $X_t$ and $\underline X_t$ respectively; then $\underline\phi_t = \phi_{1-t}\circ \phi_1^{-1}$. Thus $\underline\phi_1 = \phi_1^{-1}$ and the lemma follows.
\qed
Next we compute the Conley-Zehnder index.
\begin{lemma}\label{reversed:czindex}
$\mu_H(\widetilde l) = \mu_{\underline H}(\underline{\widetilde l})$.
\end{lemma}
{\it Proof:}
We recall the notations in \S\ref{lagFloer:maslov}. Let $\Phi_z: (T_{w(z)} M, \omega) \to ({\mathbb C}^n, \omega_0)$ be the trivialization of $w^*TM$ so that $\Phi_r(T_{w(r)}L) = {\mathbb R}^n$ for all $r \in [-1, 1]$ and
$$E_t = \Phi_{e^{i\pi t}} \circ d\phi_t \circ \Phi^{-1}_1 \in Sp({\mathbb C}^n, \omega_0)$$
be the path of symplectic matrices. Then
$$\mu_H(\widetilde l) = \mu(E_t{\mathbb R}^n \oplus {\mathbb R}^n, \triangle)$$
where the symplectic structure on ${\mathbb C}^n \oplus {\mathbb C}^n$ is given by $(\omega_0 \oplus -\omega_0)$. For $\underline{\widetilde l}$, a symplectic trivialization of $\underline w^*TM$ is given by
$$\underline \Phi_z := \Phi_{-\bar z} : (T_{\underline w(z)} M, -\omega) \to ({\mathbb C}^n, -\omega_0),$$
and we have
$$\underline E_t := \underline\Phi_{e^{i\pi t}} \circ d\underline\phi_t \circ \underline\Phi^{-1}_1 = E_{1-t} \circ E_1^{-1}.$$
Now the index we need is
$$\mu_{\underline H}(\underline{\widetilde l}) = \mu(\underline E_t {\mathbb R}^n \oplus {\mathbb R}^n, \triangle) \text{ in } ({\mathbb C}^n \oplus {\mathbb C}^n, -\omega_0 \oplus \omega_0).$$
Comparing with the expression for $\mu_H(\widetilde l)$, this reverses both the symplectic structure and the path of symplectic matrices. The property of the Maslov index of pairs as defined in \cite{RobbinSalamon2} implies the lemma.
\qed
\begin{lemma}\label{reversed:nondegenerate}
The pair $(H, \mathbf J)$ is regular iff $(\underline H, \underline{\mathbf J})$ is regular.
\end{lemma}
{\it Proof:}
It is straightforward to check that the defining equations for the various objects involved in either case are identified by the transformations induced from $\tau$.
\qed
By \S\ref{lagFloer:orient}, the orientations of the trajectories are given by those of the moduli spaces of discs (canonically given by the choice of relative spin structure) and the moduli spaces of capped strips. Here, we discuss first the effect of reversion on the moduli spaces of discs.
We consider the parametrized disc $D^2$.
Let $\rho: D^2 \to D^2$ be the complex conjugation on $D^2 \subset {\mathbb C}$. Obviously $u : (D^2, S^1) \to (M, L; \omega, J)$ is a holomorphic disc with boundary on $L$ representing $B \in \pi_2(M, L)$ iff $\rho(u) : (D^2, S^1) \xrightarrow \rho (D^2, S^1) \xrightarrow u (M, L; -\omega, -J)$ is holomorphic and represents $\tau(B) \in \Gamma_{L'}$. Let $\widetilde {\mathcal{M}}(M, L; \omega, J; B)$ denote the moduli space of parametrized $J$-holomorphic discs representing the class $B$.
\begin{lemma}
With the same choice of the relative spin structure of $L$ in $M$, the orientation of the map
$$\rho :\widetilde {\mathcal{M}}(M, L; \omega, J; B)\to \widetilde {\mathcal{M}}(M, L; -\omega, -J; \tau(B)) : u \mapsto \rho(u)$$
is given by $(-1)^{\frac{1}{2}\deg B}$.
\end{lemma}
{\it Proof:}
Recall that the orientation of the moduli space of discs is given by the identification (cf. \cite{FOOO, BiranCornea}):
$$\ker D\bar\partial_J \simeq \ker({\rm Hol} _J(D^2, S^1; {\mathbb C}^n, {\mathbb R}^n) \times {\rm Hol} _J(S^2; E) \xrightarrow{ev} {\mathbb C}^n).$$
The three items on the right are oriented as follows. The first item is oriented by the choice of the relative spin structure and is independent of the structure $J$. With the choice of the relative spin structure, the second item is oriented by the structure $J$. The last item is oriented by $J$ and is independent of the relative spin structure. Under the map $\rho$, the first item is canonically identified, while the rest follows as in Lemma \ref{reversed:qhcoefcompare}. Thus, if we fix the relative spin structure of $L$ and reverse $J$, the orientation of the moduli space is changed by $(-1)^{\frac{1}{2}\mu_L(B)}$, which by definition is $(-1)^{-\frac{1}{2}\deg B} = (-1)^{\frac{1}{2}\deg B}$.
\qed
The reversion map $\underline\tau$ on $\widetilde {\mathcal{P}}_LM$ induces a correspondence between the respective caps (cf. \eqref{lagFloer:caps}) via the complex conjugation $\rho$ of $Z_\pm \subset {\mathbb C}$. We see that $u^\pm$ is a solution of equation \eqref{lagFloer:capeq} for $(\omega, \mathbf J, J, H)$ iff $\underline u^\pm = u^\pm \circ \rho$ is a solution for $(-\omega, \underline{\mathbf J}, \underline J, \underline H)$. It follows that the corresponding moduli spaces of caps are isomorphic via the map
$$ \rho: u \mapsto \rho(u) = \underline u.$$
We assign the orientations for the reversed moduli spaces so that the map $\rho$ preserves the orientations for the preferred basis. The orientations of the reversed caps given by the reversed preferred basis are related by
$$(-1)^{\rho(\widetilde l \# B)} = (-1)^{\rho(\widetilde l) + \frac{1}{2}\deg B}.$$
\begin{prop}\label{reversed:isofloer}$\underline\tau$ induces an isomorphism of Floer homologies, intertwining as well the isomorphism of Novikov rings \eqref{reversed:novikoviso},
$$\tau_* : FH_*(M, L, \omega; H, \mathbf J) \to FH_*(M, L, -\omega; \underline H, \underline{\mathbf J}).$$
\end{prop}
{\it Proof:}
The map $\tau$ induces a natural transformation taking equation \eqref{lagFloer:floweq} for the left-hand side to that for the right-hand side:
$$v(s, t) := \tau(u)(s, t) = u(s, 1-t) \text{ so that}$$
\begin{equation*}
\begin{split}
& \frac{\partial v}{\partial s} + \underline J_t(v) \left(\frac{\partial v}{\partial t} - X_{\underline H_t}(v)\right) = \\
= & \left.\frac{\partial u}{\partial s}\right|_{1-t} + J_{1-t}(u(s, 1-t)) \left(\left.\frac{\partial u}{\partial t}\right|_{1-t} - X_{H_{1-t}}(u(s, 1-t))\right) = 0
\end{split}
\end{equation*}
Together with the last three lemmas, we see that the moduli spaces, as well as the compactifications, correspond via $\tau$ and $\underline \tau$.
We then have an isomorphism at the chain level (where the orientations of the moduli spaces identified by $\tau$ are defined to be the same) and thus the proposition follows. The intertwining of the isomorphism \eqref{reversed:novikoviso} is automatic.
\qed
\end{document} |
\begin{document}
\title{Generalized solutions in PDE's and the Burgers' equation}
\author{Vieri Benci\thanks{\textsc{V. Benci, Dipartimento di Matematica, Università degli Studi
di Pisa, Via F. Buonarroti 1/c, 56127 Pisa, ITALY and Centro Linceo
Interdisciplinare Beniamino Segre, Palazzo Corsini - Via della Lungara
10, 00165 Roma, ITALY,}\texttt{ [email protected]}} \and Lorenzo Luperi Baglini\thanks{\textsc{L. Luperi Baglini, Faculty of Mathematics, University of Vienna,
Austria, Oskar-Morgenstern-Platz 1, 1090 Wien, AUSTRIA,}\texttt{ [email protected]}}\thanks{L.~Luperi Baglini has been supported by grants P25311-N25 and M1876-N35
of the Austrian Science Fund FWF.}}
\maketitle
\begin{abstract}
In many situations, the notion of function is not sufficient and it
needs to be extended. A classical way to do this is to introduce the
notion of weak solution; another approach is to use generalized functions.
Ultrafunctions are a particular class of generalized functions that
has been previously introduced and used to define generalized solutions
of stationary problems in \cite{ultra,belu2012,milano,beyond,gauss}.
In this paper we generalize this notion in order to study also evolution
problems. In particular, we introduce the notion of Generalized Ultrafunction
Solution (GUS) for a large family of PDE's, and we compare it with
classical strong and weak solutions. Moreover, we prove an existence
and uniqueness result of GUS for a large family of PDE's, including
the nonlinear Schroedinger equation and the nonlinear wave equation.
Finally, we study in detail GUS of Burgers' equation, proving that
(in a precise sense) the GUS of this equation provides a description
of the phenomenon at microscopic level.
\end{abstract}
\tableofcontents{}
\section{Introduction}
In order to solve many problems of mathematical physics, the notion
of function is not sufficient and it is necessary to extend it. Among
people working in partial differential equations, the theory of distributions
of Schwartz and the notion of weak solution are the main tools to
be used when equations do not have classical solutions. Usually, these
equations do not have classical solutions since they develop singularities.
The notion of weak solution allows one to obtain existence results, but
uniqueness may be lost; also, these solutions might violate the conservation
laws. As an example let us consider the Burgers' equation:
\begin{equation}
\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=0.\tag{BE}\label{BE}
\end{equation}
A local classical solution $u(t,x)$ is unique and, if it has compact
support, it preserves the momentum $P=\int u\ dx$ and the energy
$E=\frac{1}{2}\int u^{2}\ dx$ as well as other quantities. However,
at some time a singularity appears and the solution can no longer be
described by a smooth function. The notion of weak solution is necessary,
but the problem of uniqueness becomes a central issue. Moreover, in
general, $E$ is not preserved.
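For the reader's convenience, here is the standard one-line computation behind the conservation of $P$ and $E$ for a smooth compactly
supported solution of \eqref{BE}; only the equation and the integration
of exact derivatives are used:
\[
\frac{dP}{dt}=\int u_{t}\ dx=-\int uu_{x}\ dx=-\frac{1}{2}\int\left(u^{2}\right)_{x}\ dx=0,\qquad\frac{dE}{dt}=\int uu_{t}\ dx=-\int u^{2}u_{x}\ dx=-\frac{1}{3}\int\left(u^{3}\right)_{x}\ dx=0.
\]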
An approach that can be used to try to overcome these difficulties
is the use of generalized functions (see e.g. \cite{biagioni,biagioni-MO,ros},
where such an approach is developed using ideas in common with Colombeau
theory). In this paper we use a similar approach by means of non-Archimedean
analysis, and we introduce the notion of ultrafunction solution for
a large family of PDE's using some of the tools of Nonstandard Analysis
(NSA). Ultrafunctions are a family of generalized functions defined
on the field of hyperreals, which are a well known extension of the
reals. They have been introduced in \cite{ultra}, and also studied
in \cite{belu2012,belu2013,algebra,beyond,gauss,topologia}. The non-Archimedean
setting in which we will work (which is a reformulation, in a topological
language, of the ultrapower approach to NSA of Keisler) is introduced
in Section \ref{lt}. In Section \ref{ul} we introduce the spaces
of ultrafunctions, and we show their relationships with distributions.
In Section \ref{gus} we introduce the notion of generalized ultrafunction
solutions (GUS). We prove an existence and uniqueness theorem for
these generalized solutions, and we compare them with strong and
weak solutions of evolution problems. In particular, we show the existence
of a GUS even in the presence of blow ups (as e.g. in the case of
the nonlinear Schroedinger equation), and we show the uniqueness of
GUS for the nonlinear wave equation. Finally, in Section \ref{tbe}
we study in detail Burgers' equation and, in a sense made precise in Section
\ref{sub:The-microscopic-part}, we show that in this case the unique
GUS of this equation provides a description of the phenomenon at microscopic
level.
\subsection{Notations\label{not}}
Let $\Omega$\ be a subset of $\mathbb{R}^{N}$: then
\begin{itemize}
\item $\mathcal{C}\left(\Omega\right)$ denotes the set of continuous functions
defined on $\Omega\subset\mathbb{R}^{N};$
\item $\mathcal{C}_{c}\left(\Omega\right)$ denotes the set of continuous
functions in $\mathcal{C}\left(\Omega\right)$ having compact support
in $\Omega;$
\item $\mathcal{C}_{0}\left(\overline{\Omega}\right)$ denotes the set of
continuous functions in $\mathcal{C}\left(\overline{\Omega}\right)$ which vanish
on $\partial\Omega;$
\item $\mathcal{C}^{k}\left(\Omega\right)$ denotes the set of functions
defined on $\Omega\subset\mathbb{R}^{N}$ which have continuous derivatives
up to the order $k;$
\item $\mathcal{C}_{c}^{k}\left(\Omega\right)$ denotes the set of functions
in $\mathcal{C}^{k}\left(\Omega\right)\ $having compact support;
\item $\mathscr{D}\left(\Omega\right)$ denotes the set of the infinitely
differentiable functions with compact support defined on $\Omega\subset\mathbb{R}^{N};\ \mathcal{\mathscr{D}}^{\prime}\left(\Omega\right)$
denotes the topological dual of $\mathscr{D}\left(\Omega\right)$,
namely the set of distributions on $\Omega;$
\item for any set $X$, $\mathcal{P}_{fin}(X)$ denotes the set of finite
subsets of $X$;
\item if $W$ is a generic function space, its topological dual will be
denoted by $W^{\prime}$ and the pairing by $\left\langle \cdot,\cdot\right\rangle _{W}$,
or simply by $\left\langle \cdot,\cdot\right\rangle .$
\end{itemize}
\section{$\Lambda$-theory\label{lt}}
\subsection{Non-Archimedean Fields\label{naf}}
In this section we recall the basic definitions and facts regarding
non-Archimedean fields, following an approach that has been introduced
in \cite{topologia} (see also \cite{ultra,BDN2003,belu2012,belu2013,milano,algebra,beyond,gauss}).
In the following, ${\mathbb{K}}$ will denote an ordered field. We
recall that such a field contains (a copy of) the rational numbers.
Its elements will be called numbers.
\begin{defn}
Let $\mathbb{K}$ be an infinite ordered field. Let $\xi\in\mathbb{K}$.
We say that:
\begin{itemize}
\item $\xi$ is infinitesimal if, for all positive $n\in\mathbb{N}$, $|\xi|<\frac{1}{n}$;
\item $\xi$ is finite if there exists $n\in\mathbb{N}$ such that $|\xi|<n$;
\item $\xi$ is infinite if, for all $n\in\mathbb{N}$, $|\xi|>n$ (equivalently,
if $\xi$ is not finite).
\end{itemize}
An ordered field $\mathbb{K}$ is called non-Archimedean if it contains
an infinitesimal $\xi\neq0$.
\end{defn}
It is easily seen that all infinitesimals are finite, that the inverse
of an infinite number is a nonzero infinitesimal number, and that
the inverse of a nonzero infinitesimal number is infinite.
\begin{defn}
A superreal field is an ordered field $\mathbb{K}$ that properly
extends $\mathbb{R}$.
\end{defn}
It is easy to show, due to the completeness of $\mathbb{R}$, that
there are nonzero infinitesimal numbers and infinite numbers in any
superreal field. Infinitesimal numbers can be used to formalize a
notion of ``closeness'':
\begin{defn}
\label{def infinite closeness} We say that two numbers $\xi,\zeta\in{\mathbb{K}}$
are infinitely close if $\xi-\zeta$ is infinitesimal. In this case
we write $\xi\sim\zeta$.
\end{defn}
Clearly, the relation ``$\sim$'' of infinite
closeness is an equivalence relation.
\begin{thm}
If $\mathbb{K}$ is a superreal field, every finite number $\xi\in\mathbb{K}$
is infinitely close to a unique real number $r\sim\xi$, called the
\textbf{shadow} or the \textbf{standard part} of $\xi$.
\end{thm}
Given a finite number $\xi$, we denote its shadow as $sh(\xi)$.
\subsection{The $\Lambda-$limit}
In this section we introduce a particular non-Archimedean field by
means of $\Lambda-$theory\footnote{Readers expert in nonstandard analysis will recognize that $\Lambda$-theory
is equivalent to the superstructure constructions of Keisler (see
\cite{keisler76} for a presentation of the original constructions
of Keisler, and \cite{topologia} for a comparison between these two
approaches to nonstandard analysis). }, in particular by means of the notion of $\Lambda-$limit (for complete
proofs and for further information the reader is referred to \cite{benci99},
\cite{ultra}, \cite{belu2012} and \cite{topologia}). To recall
the basics of $\Lambda-$theory we have to recall the notion of superstructure
on a set (see also \cite{keisler76}):
\begin{defn}
Let $E$ be an infinite set. The superstructure on $E$ is the set
\[
V_{\infty}(E)=\bigcup_{n\in\mathbb{N}}V_{n}(E),
\]
where the sets $V_{n}(E)$ are defined by induction by setting
\[
V_{0}(E)=E
\]
and, for every $n\in\mathbb{N}$,
\[
V_{n+1}(E)=V_{n}(E)\cup\mathcal{P}\left(V_{n}(E)\right).
\]
\end{defn}
Here $\mathcal{P}\left(E\right)$ denotes the power set of $E.$ Identifying
ordered pairs with Kuratowski pairs, and functions and relations
with their graphs, it follows that $V_{\infty}(E)$ contains almost
every usual mathematical object that can be constructed starting with
$E;$ in particular, $V_{\infty}(\mathbb{R})$, which is the superstructure
that we will consider in the following, contains almost every usual
mathematical object of analysis.
Throughout this paper we let
\[
\mathfrak{L}=\mathcal{P}_{fin}(V_{\infty}(\mathbb{R}))
\]
and we order $\mathfrak{L}$ via inclusion. Notice that $(\mathfrak{L},\subseteq)$
is a directed set. We add to $\mathfrak{L}$ a ``point
at infinity'' $\Lambda\notin\mathfrak{L}$, and we define
the following family of neighborhoods of $\Lambda:$
\[
\{\{\Lambda\}\cup Q\mid Q\in\mathcal{U}\},
\]
where $\mathcal{U}$ is a fine ultrafilter on $\mathfrak{L}$, namely
a filter such that
\begin{itemize}
\item for every $A,B\subseteq\mathfrak{L}$, if $A\cup B=\mathfrak{L}$
then $A\in\mathcal{U}$ or $B\in\mathcal{U}$;
\item for every $\lambda\in\mathfrak{L}$ the set $I_{\lambda}=\{\mu\in\mathfrak{L}\mid\lambda\subseteq\mu\}\in\mathcal{U}$.
\end{itemize}
In particular, we will refer to the elements of $\mathcal{U}$ as
qualified sets and we will write $\Lambda=\Lambda(\mathcal{U})$ when
we want to highlight the choice of the ultrafilter. We are interested
in considering real nets with indices in $\mathfrak{L}$, namely functions
\[
\varphi:\mathfrak{L}\rightarrow\mathbb{R}\text{.}
\]
In particular, we are interested in $\Lambda-$limits of these nets,
namely in
\[
\lim_{\lambda\rightarrow\Lambda}\varphi(\lambda).
\]
The following has been proved in \cite{topologia}.
\begin{thm}
\label{nuovo}There exists a non-Archimedean superreal field $(\mathbb{K},+,\cdot,<)$
and a Hausdorff topology $\tau$ on the space $\left(\mathfrak{L}\times\mathbb{R}\right)\cup\mathbb{K}$
such that
\begin{enumerate}
\item \label{tre}$\left(\mathfrak{L}\times\mathbb{R}\right)\cup\mathbb{K}=cl_{\tau}\left(\mathfrak{L}\times\mathbb{R}\right);$
\item \label{uno}for every net $\varphi:\mathfrak{L}\rightarrow\mathbb{R}$
the limit
\[
L=\lim_{\lambda\rightarrow\Lambda}(\lambda,\varphi(\lambda))
\]
exists, it is in $\mathbb{K}$ and it is unique; moreover for every
$\xi\in\mathbb{K}$ there is a net $\varphi:\mathfrak{L}\rightarrow\mathbb{R}$
such that
\[
\xi=\lim_{\lambda\rightarrow\Lambda}(\lambda,\varphi(\lambda));
\]
\item \label{due}$\forall$ $c\in\mathbb{R}$ we have that
\[
\lim_{\lambda\rightarrow\Lambda}\left(\lambda,c\right)=c;
\]
\item \label{quattro}for every $\varphi,\psi:\mathfrak{L}\rightarrow\mathbb{R}$
we have that
\begin{eqnarray*}
\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\varphi(\lambda)\right)+\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\psi(\lambda)\right) & = & \lim_{\lambda\rightarrow\Lambda}\left(\lambda,(\varphi+\psi)(\lambda)\right);\\
\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\varphi(\lambda)\right)\cdot\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\psi(\lambda)\right) & = & \lim_{\lambda\rightarrow\Lambda}\left(\lambda,(\varphi\cdot\psi)(\lambda)\right).
\end{eqnarray*}
\end{enumerate}
\end{thm}
\begin{proof}
For a complete proof of Theorem \ref{nuovo} we refer to \cite{topologia}.
The idea\footnote{To work, this idea needs some additional requirement on the ultrafilter
$\mathcal{U}$, see e.g. \cite{BDN200301}, \cite{topologia}.} is to set
\[
I=\left\{ \varphi\in\mathfrak{F}\left(\mathfrak{L},\mathbb{R}\right)\ |\ \varphi(x)=0\ \text{in a qualified set}\right\} ;
\]
it is not difficult to prove that $I$ is a maximal ideal in $\mathfrak{F}\left(\mathfrak{L},\mathbb{R}\right),$
and hence
\[
\mathbb{K}:=\frac{\mathfrak{F}\left(\mathfrak{L},\mathbb{R}\right)}{I}
\]
is a field. Now the claims of Theorem \ref{nuovo} follow by identifying
every real number $c\in\mathbb{R}$ with the equivalence class of
the constant net $\left[c\right]_{I}$ and by taking the topology
$\tau$ generated by the basis of open sets
\[
b(\tau)=\left\{ N_{\varphi,Q}\ |\ \varphi\in\mathfrak{F}\left(\mathfrak{L},\mathbb{R}\right),Q\in\mathcal{U}\right\} \cup\mathcal{P}(\mathfrak{L}\times\mathbb{R}),
\]
where
\[
N_{\varphi,Q}:=\left\{ \left(\lambda,\varphi(\lambda)\right)\mid\lambda\in Q\right\} \cup\left\{ \left[\varphi\right]_{I}\right\}
\]
is a neighborhood of $\left[\varphi\right]_{I}$.
\end{proof}
Now we want to define the $\Lambda$-limit of nets $(\lambda,\varphi(\lambda))_{\lambda\in\mathfrak{L}}$,
where $\varphi(\lambda)$ is any bounded net of mathematical objects
in $V_{\infty}(\mathbb{R)}$ (a net $\varphi:\mathfrak{L}\rightarrow V_{\infty}(\mathbb{R)}$
is called bounded if there exists $n$ such that $\forall\lambda\in\mathfrak{L},\ \varphi(\lambda)\in V_{n}(\mathbb{R)}$).
To this aim, let us consider a net
\begin{equation}
\varphi:\mathfrak{L}\rightarrow V_{n}(\mathbb{R}).\label{net}
\end{equation}
We will define $\lim_{\lambda\rightarrow\Lambda}\ \left(\lambda,\varphi(\lambda)\right)$
by induction on $n$.
\begin{defn}
\label{def}For $n=0,$ $\lim\limits _{\lambda\rightarrow\Lambda}(\lambda,\varphi(\lambda))$
exists by Thm. (\ref{nuovo}); so by induction we may assume that
the limit is defined for $n-1$ and we define it for the net (\ref{net})
as follows:
\[
\lim_{\lambda\rightarrow\Lambda}\ \left(\lambda,\varphi(\lambda)\right)=\left\{ \lim\limits _{\lambda\rightarrow\Lambda}(\lambda,\psi(\lambda))\ |\ \psi:\mathfrak{L}\rightarrow V_{n-1}(\mathbb{R)}\text{ and}\ \forall\lambda\in\mathfrak{L},\ \psi(\lambda)\in\varphi(\lambda)\right\} .
\]
\end{defn}
From now on, we set
\[
\lim\limits _{\lambda\uparrow\Lambda}\varphi(\lambda):=\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\varphi(\lambda)\right).
\]
Notice that it follows from Definition \ref{def} that $\lim\limits _{\lambda\uparrow\Lambda}\varphi(\lambda)$
is a well defined object in $V_{\infty}(\mathbb{R}^{\ast})$ for every
bounded net $\varphi:\mathfrak{L}\rightarrow V_{\infty}(\mathbb{R})$.
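As a simple illustration (not needed in the sequel), consider the
real net $\varphi(\lambda)=|\lambda|$, where $|\lambda|$ denotes the
number of elements of the finite set $\lambda$. For every $n\in\mathbb{N}$
the set $\left\{ \lambda\in\mathfrak{L}\mid|\lambda|\geq n\right\} $
contains $I_{\lambda_{0}}$ for any $\lambda_{0}$ with $n$ elements,
hence it is qualified; since, in the construction sketched in the proof
of Theorem \ref{nuovo}, an inequality holding on a qualified set passes
to the $\Lambda$-limit, the number
\[
\xi:=\lim_{\lambda\uparrow\Lambda}|\lambda|
\]
is infinite, and consequently $\xi^{-1}$ is a nonzero infinitesimal.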
\subsection{Natural extension of sets and functions}
In this section we want to show how to extend subsets and functions
defined on $\mathbb{R}$ to subsets and functions defined on $\mathbb{K}$.
\begin{defn}
\label{janez}Given \textit{\emph{a set}}\textit{ }$E\subseteq\mathbb{R}$,
we set
\[
E^{\ast}:=\left\{ \lim_{\lambda\uparrow\Lambda}\psi(\lambda)\ |\ \forall\lambda\in\mathfrak{L}\,\psi(\lambda)\in E\right\} .
\]
$E^{\ast}$ is called the \textbf{natural extension }of $E.$
\end{defn}
Thus $E^{\ast}$ is the set of all the limits of nets with values
in $E$. Following the notation introduced in Def. \ref{janez}, from
now on we will denote $\mathbb{K}$ by $\mathbb{R}^{\ast}.$ Similarly,
it is possible to extend functions.
\begin{defn}
\label{extfun}Given a function
\[
f:A\rightarrow B
\]
we call natural extension of $f$ the function
\[
f^{\ast}:A^{\ast}\rightarrow B^{\ast}
\]
such that
\[
f^{\ast}\left(\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\varphi(\lambda)\right)\right):=\lim_{\lambda\rightarrow\Lambda}\left(\lambda,f\left(\varphi(\lambda)\right)\right)
\]
for every $\varphi:\mathfrak{L}\rightarrow A.$
\end{defn}
That Definition \ref{extfun} is well posed has been proved in \cite{topologia}.
Let us observe that, in particular, $f^{\ast}(a)=f(a)$ for every
$a\in A$ (which is why $f^{\ast}$ is called the extension of $f$).
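For instance, if $f:\mathbb{R}\rightarrow\mathbb{R}$ is given by $f(x)=x^{2}$,
then Definition \ref{extfun} and point (\ref{quattro}) of Theorem \ref{nuovo}
give, for every net $\varphi:\mathfrak{L}\rightarrow\mathbb{R}$,
\[
f^{\ast}\left(\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\varphi(\lambda)\right)\right)=\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\varphi(\lambda)^{2}\right)=\left(\lim_{\lambda\rightarrow\Lambda}\left(\lambda,\varphi(\lambda)\right)\right)^{2},
\]
so that $f^{\ast}(\xi)=\xi^{2}$ for every $\xi\in\mathbb{R}^{\ast}$.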
\section{Ultrafunctions\label{ul}}
\subsection{Definition of Ultrafunctions}
We follow the construction of ultrafunctions that we introduced in
\cite{gauss}. Let $N$ be a natural number, let $\Omega$ be a subset
of $\mathbb{R}^{N}$ and let $V(\Omega)\subset\mathfrak{F}\left(\Omega,\mathbb{R}\right)$
be a function vector space such that $\mathcal{\mathscr{D}}(\Omega)\subseteq V(\Omega)\subseteq L^{2}(\Omega).$
\begin{defn}
\label{approxseq} We say that $(V_{\lambda}(\Omega))_{\lambda\in\mathfrak{L}}$
is an \textbf{approximating net for} $V(\Omega)$ if\end{defn}
\begin{enumerate}
\item $V_{\lambda}(\Omega)$ is a finite dimensional vector subspace of
$V(\Omega)$ for every $\lambda\in\mathfrak{L}$;
\item if $\lambda_{1}\subseteq\lambda_{2}$ then $V_{\lambda_{1}}(\Omega)\subseteq V_{\lambda_{2}}(\Omega)$;
\item \label{unioni} if $W(\Omega)\subset V(\Omega)$ is a finite dimensional
vector space then there exists $\lambda\in\mathfrak{L}$ such that
$W(\Omega)\subseteq V_{\lambda}(\Omega)$ (i.e., $V(\Omega)=\bigcup\limits _{\lambda\in\mathfrak{L}}V_{\lambda}(\Omega)$).
\end{enumerate}
Let us show two examples.
\begin{example}
Let $V(\Omega)\subseteq L^{2}(\Omega)$. We set, for every
$\lambda\in\mathfrak{L}$,
\[
V_{\lambda}(\Omega):=Span(V(\Omega)\cap\lambda).
\]
Then $(V_{\lambda}(\Omega))_{\lambda\in\mathfrak{L}}$ is an approximating
net for $V(\Omega)$.
\end{example}
\begin{example}
Let
\[
\left\{ e_{a}\right\} _{a\in\mathbb{R}}
\]
be a Hamel basis\footnote{We recall that $\left\{ e_{a}\right\} _{a\in\mathbb{R}}$ is a Hamel
basis for $W$ if $\left\{ e_{a}\right\} _{a\in\mathbb{R}}$ is a
set of linearly independent elements of $W$ and every element of
$W$ can be (uniquely) written as a finite sum (with coefficients
in $\mathbb{R}$) of elements of $\left\{ e_{a}\right\} _{a\in\mathbb{R}}.$
Since a Hamel basis of $W$ has the continuum cardinality we can use
the points of $\mathbb{R}$ as indices for this basis.
{}} of $V(\Omega)\subseteq L^{2}$. For every $\lambda\in\mathfrak{L}$
let
\[
V_{\lambda}(\Omega)=Span\left\{ e_{a}\ |\ a\in\lambda\right\} .
\]
Then $(V_{\lambda}(\Omega))_{\lambda\in\mathfrak{L}}$ is an approximating
net for $V(\Omega).$ \end{example}
\begin{defn}
Let $\mathcal{U}$ be a fine ultrafilter on $\mathfrak{L}$, let $\Lambda=\Lambda(\mathcal{U})$
and let $(V_{\lambda}(\Omega))_{\lambda\in\mathfrak{L}}$ be an approximating
net for $V(\Omega)$. We call \textbf{space of ultrafunctions} \textbf{generated
by }$(V_{\lambda}(\Omega))$ the $\Lambda$-limit
\[
V_{\Lambda}(\Omega):=\lim_{\lambda\uparrow\Lambda}V_{\lambda}(\Omega)=\left\{ \lim_{\lambda\uparrow\Lambda}f_{\lambda}\ |\ \forall\lambda\in\mathfrak{L~}f_{\lambda}\in V_{\lambda}(\Omega)\right\} .
\]
In this case we will also say that the space $V_{\Lambda}(\Omega)$
is based on the space $V(\Omega)$. When $V_{\lambda}(\Omega):=Span(V(\Omega)\cap\lambda)$
for every $\lambda\in\mathfrak{L}$, we will say that $V_{\Lambda}(\Omega)$
is a \textbf{canonical space of ultrafunctions}.
\end{defn}
Using the above definition, if $V(\Omega)$, $\Omega\subset\mathbb{R}^{N}$,
is a real function space and $(V_{\lambda}(\Omega))$ is an approximating
net for $V(\Omega)$ then we can associate to $V(\Omega)$ the following
three hyperreal function spaces:
\begin{equation}
V(\Omega)^{\sigma}=\left\{ f^{\ast}\ |\ f\in V(\Omega)\right\} ;\label{sigma1}
\end{equation}
\begin{equation}
V_{\Lambda}(\Omega)=\left\{ \lim_{\lambda\uparrow\Lambda}\ f_{\lambda}\ |\ \forall\lambda\in\mathfrak{L~}f_{\lambda}\in V_{\lambda}(\Omega)\right\} ;\label{tilda}
\end{equation}
\begin{equation}
V(\Omega)^{\ast}=\left\{ \lim_{\lambda\uparrow\Lambda}\ f_{\lambda}\ |\ \forall\lambda\in\mathfrak{L~}f_{\lambda}\in V(\Omega)\right\} .\label{star}
\end{equation}
Clearly we have
\[
V(\Omega)^{\sigma}\subset V_{\Lambda}(\Omega)\subset V(\Omega)^{\ast}.
\]
So, given any vector space of functions $V(\Omega)$, the space of
ultrafunctions generated by $V(\Omega)$ is a vector space of hyperfinite
dimension that includes $V(\Omega)^{\sigma}$, and the ultrafunctions
are $\Lambda$-limits of functions in $V_{\lambda}(\Omega)$. Hence
the ultrafunctions are particular internal functions
\[
u:\left(\mathbb{R}^{\ast}\right)^{N}\rightarrow{\mathbb{C}^{\ast}.}
\]
Since $V_{\Lambda}(\Omega)\subset\left[L^{2}(\Omega)\right]^{\ast},$
we can equip $V_{\Lambda}(\Omega)$ with the following scalar product:
\begin{equation}
\left(u,v\right)=\int^{\ast}u(x)v(x)\ dx,\label{inner}
\end{equation}
where $\int^{\ast}$ is the natural extension of the Lebesgue integral
considered as a functional
\[
\int:L^{1}(\Omega)\rightarrow{\mathbb{R}}.
\]
Therefore, the norm of an ultrafunction will be given by
\[
\left\Vert u\right\Vert =\left(\int^{\ast}|u(x)|^{2}\ dx\right)^{\frac{1}{2}}.
\]
Sometimes, when no ambiguity is possible, in order to make the notation
simpler we will write $\int$ instead of $\int^{\ast}$.
\begin{rem}
\label{nina}Notice that the natural extension $f^{\ast}$ of a function
$f$ is an ultrafunction if and only if $f\in V(\Omega).$ \end{rem}
\begin{proof}
Let $f\in V(\Omega)$ and let $(V_{\lambda}(\Omega))$ be an approximating
net for $V(\Omega)$. Then, eventually, $f\in V_{\lambda}(\Omega)$
and hence
\[
f^{\ast}=\lim_{\lambda\uparrow\Lambda}f\in\lim_{\lambda\uparrow\Lambda}\ V_{\lambda}(\Omega)=V_{\Lambda}(\Omega).
\]
Conversely, if $f\notin V(\Omega)$ then $f^{\ast}\notin V^{\ast}(\Omega)$
and, since $V_{\Lambda}(\Omega)\subset V^{\ast}(\Omega)$, it follows that
$f^{\ast}\notin V_{\Lambda}(\Omega)$, which gives the conclusion.
\end{proof}
\subsection{Canonical extension of functions, functionals and operators}
Let $V_{\Lambda}(\Omega)$ be a space of ultrafunctions based on $V(\Omega)\subseteq L^{2}(\Omega)$.
We have seen that given a function $f\in V(\Omega),$ its natural
extension
\[
f^{\ast}:\Omega^{\ast}\rightarrow\mathbb{R}^{\ast}
\]
is an ultrafunction in $V_{\Lambda}(\Omega).$ In this section we
investigate the possibility of associating, in a consistent way, an
ultrafunction $\widetilde{f}$ to any function $f\in L_{loc}^{1}(\Omega)$. Since
$L^{2}(\Omega)\subseteq V^{\prime}(\Omega)$, this association can
be done by means of a duality method.
\begin{defn}
\label{CP}Given $T\in\left[L^{2}(\Omega)\right]^{\ast},$ we denote
by $\widetilde{T}$ the unique ultrafunction such that $\forall v\in V_{\Lambda}(\Omega),$
\[
\int_{\Omega^{\ast}}\widetilde{T}(x)v(x)\,dx=\int_{\Omega^{\ast}}T(x)v(x)\,dx.
\]
The map
\[
P_{\Lambda}:\left[L^{2}(\Omega)\right]^{\ast}\rightarrow V_{\Lambda}(\Omega)
\]
defined by $P_{\Lambda}T=\widetilde{T}$ will be called the \textbf{canonical
projection.}
\end{defn}
The above definition makes sense, as $T$ is a linear functional on
$V(\Omega)^{\ast}$, and hence on $V_{\Lambda}(\Omega)\subset V(\Omega)^{\ast}.$
Since $V(\Omega)\subset L^{2}(\Omega),$ using the inner product (\ref{inner})
we can identify $L^{2}(\Omega)$ with a subset of $V^{\prime}(\Omega),$
and hence $\left[L^{2}(\Omega)\right]^{\ast}$ with a subset of $\left[V^{\prime}(\Omega)\right]^{\ast};$
in this case, $\forall f\in\left[L^{2}(\Omega)\right]^{\ast},\ \forall v\in V_{\Lambda}(\Omega),$
\[
\int\widetilde{f}(x)v(x)\ dx=\int f(x)v(x)\ dx,
\]
namely the map $P_{\Lambda}f=\widetilde{f}$ restricted to $\left[L^{2}(\Omega)\right]^{\ast}$
reduces to the orthogonal projection
\[
P_{\Lambda}:\left[L^{2}(\Omega)\right]^{\ast}\rightarrow V_{\Lambda}(\Omega).
\]
If we take any function $f\in L_{loc}^{1}(\Omega)\cap L^{2}(\Omega),$
then $f^{\ast}\in\left[L_{loc}^{1}(\Omega)\cap L^{2}(\Omega)\right]^{\ast}\subset\left[L^{2}(\Omega)\right]^{\ast}$
and hence $\widetilde{f^{\ast}}$ is well defined by Def.~\ref{CP}.
In order to simplify the notation we will simply write $\widetilde{f}.$
This discussion suggests the following definition:
\begin{defn}
Given a function $f\in L_{loc}^{1}(\Omega)\cap L^{2}(\Omega),$ we
denote by $\widetilde{f}$ the unique ultrafunction in $V_{\Lambda}(\Omega)$
such that $\forall v\in V_{\Lambda}(\Omega),$
\[
\int\widetilde{f}(x)v(x)\ dx=\int f^{\ast}(x)v(x)\ dx.
\]
$\widetilde{f}$ is called the canonical extension of $f.$ \end{defn}
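In other words, $\widetilde{f}=P_{\Lambda}f^{\ast}$ is nothing but
the orthogonal projection of $f^{\ast}$ onto $V_{\Lambda}(\Omega)$
with respect to the scalar product (\ref{inner}). As a sketch, if
$\left\{ e_{1},\dots,e_{\mu}\right\} $, $\mu\in\mathbb{N}^{\ast}$,
denotes an internal orthonormal basis of $V_{\Lambda}(\Omega)$ with
respect to (\ref{inner}) (such a basis exists by transfer, $V_{\Lambda}(\Omega)$
having hyperfinite dimension), then
\[
\widetilde{f}=\sum_{k=1}^{\mu}\left(\int^{\ast}f^{\ast}(x)\,e_{k}(x)\,dx\right)e_{k}.
\]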
\begin{rem}
\label{cina}As we observed, for every $f:\Omega\rightarrow\mathbb{R}$
we have that $f^{\ast}\in V_{\Lambda}(\Omega)$ iff $f\in V(\Omega).$
Therefore for every $f:\Omega\rightarrow\mathbb{R}$
\[
\widetilde{f}=f^{\ast}\Leftrightarrow f\in V(\Omega).
\]
\end{rem}
Let us observe that we need to assume that $V(\Omega)\subset L_{c}^{\infty}(\Omega)=(L_{loc}^{1}(\Omega))^{\prime}$
if we want $\widetilde{f}$ to be defined for every function $f\in L_{loc}^{1}(\Omega)$.
Using a similar method, it is also possible to extend operators:
\begin{defn}
\label{b}Given an operator
\[
\mathcal{A}:V(\Omega)\rightarrow V^{\prime}(\Omega)
\]
we can extend it to an operator
\[
\widetilde{\mathcal{A}}:V_{\Lambda}(\Omega)\rightarrow V_{\Lambda}(\Omega)
\]
in the following way: given an ultrafunction $u,$ $\widetilde{\mathcal{A}}(u)$
is the unique ultrafunction such that
\[
\forall v\in V_{\Lambda}(\Omega),\ \int^{\ast}\widetilde{\mathcal{A}}(u)v\ dx\ =\int^{\ast}\mathcal{A}^{\ast}(u)v\,dx;
\]
namely
\[
\widetilde{\mathcal{A}}=P_{\Lambda}\circ\mathcal{A}^{\ast},
\]
where $P_{\Lambda}$ is the canonical projection.
\end{defn}
Sometimes, when no ambiguity is possible, in order to make the notation
simpler we will write $\mathcal{A}(u)$ instead of $\widetilde{\mathcal{A}}(u).$
\begin{example}
The derivative of an ultrafunction is well defined provided that the
weak derivative is defined from $V(\Omega)$ to its dual $V^{\prime}(\Omega):$
\[
\mathcal{\partial}:V(\Omega)\rightarrow V^{\prime}(\Omega).
\]
For example one can take $V(\Omega)=\mathcal{C}^{1}(\Omega),$ $H^{1/2}(\Omega),$
$BV(\Omega)$, etc. Following Definition \ref{b}, we have that the
ultrafunction derivative
\[
D:V_{\Lambda}(\Omega)\rightarrow V_{\Lambda}(\Omega)
\]
of an ultrafunction $u$ is defined by duality as the unique ultrafunction
$Du$ such that
\begin{equation}
\forall v\in V_{\Lambda}(\Omega),\ \int Du\ v\ dx\ =\left\langle \partial^{\ast}u,v\right\rangle .\label{sd}
\end{equation}
Notice that, in order to simplify the notation, we have denoted the
generalized derivative by $D=\widetilde{\mathcal{\partial}}.$
\end{example}
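Let us also point out a consistency property which follows directly
from the above definitions (here we assume $f\in V(\Omega)$ and $\partial f\in L^{2}(\Omega)$,
so that $\widetilde{\partial f}$ is defined): for every $v\in V_{\Lambda}(\Omega)$
\[
\int Df^{\ast}\,v\ dx=\left\langle \partial^{\ast}f^{\ast},v\right\rangle =\int\left(\partial f\right)^{\ast}v\ dx,
\]
namely $Df^{\ast}=\widetilde{\partial f}$; if moreover $\partial f\in V(\Omega)$,
then $Df^{\ast}=\left(\partial f\right)^{\ast}$ by Remark \ref{cina}.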
To construct the space of ultrafunctions that we need to study Burgers'
Equation we will use the following theorem:
\begin{thm}
\label{thm:Spaces of Ultrafunctions as hyperfinite extensions}Let
$n\in\mathbb{N},$ $\Omega\subseteq\mathbb{R}^{n}$ and let $V(\Omega)$
be a vector space of functions. Let $V(\Omega)^{\ast}$ be a $\left|\mathfrak{L}\right|^{+}$-enlarged\footnote{For the notion of enlarging, as well as for other important notions
in nonstandard analysis such as saturation and overspill, we refer
to \cite{keisler76,rob}.} ultrapower of $V(\Omega)$. Then every hyperfinite dimensional vector
space $W(\Omega)$ such that $V(\Omega)^{\sigma}\subseteq W(\Omega)\subseteq V(\Omega)^{\ast}$
contains an isomorphic copy of a canonical space of ultrafunctions
on $V(\Omega)$.\end{thm}
\begin{proof}
First of all, we claim that there exists a hyperfinite set $H\in\left(\mathcal{P}_{fin}(\mathfrak{L})\right)^{\ast}$
such that $\lambda\subseteq H$ for every $\lambda\in\mathfrak{L}$
and such that $B=H\cap W(\Omega)$ is a hyperfinite basis of $W(\Omega)$.
To prove this claim we set, for every $\lambda\in\mathfrak{L}$,
\[
H_{\lambda}=\left\{ H\in\left(\mathcal{P}_{fin}(\mathfrak{L})\right)^{\ast}\mid\lambda^{\ast}\subseteq H\,\text{and}\,Span(H\cap V^{\ast}(\Omega))=W(\Omega)\right\} .
\]
Clearly, if $H_{\lambda}\neq\emptyset$ for every $\lambda\in\mathfrak{L}$
then the family $\{H_{\lambda}\}_{\lambda\in\mathfrak{L}}$ has the
finite intersection property (as $H_{\lambda_{1}}\cap\dots\cap H_{\lambda_{k}}=H_{\lambda_{1}\cup\dots\cup\lambda_{k}})$.
To prove that $H_{\lambda}\neq\emptyset$ for every $\lambda\in\mathfrak{L}$,
let $\lambda\in\mathfrak{L}$ be given and let $B$ be a fixed hyperfinite
basis of $W(\Omega)$ with $V(\Omega)^{\sigma}\subseteq B$ (whose
existence can be easily deduced from the enlarging property of the
extension, as $V(\Omega)^{\sigma}\subseteq W(\Omega)$). Let $\lambda=\lambda_{0}\cup\lambda_{1}$,
where $\lambda_{0}\cap\lambda_{1}=\emptyset$ and $\lambda_{0}=\lambda\cap V(\Omega)$,
and let $H=B\cup\lambda_{1}^{\ast}$. It is immediate to notice that
$H\in H_{\lambda}$. Therefore this proves that the family $\{H_{\lambda}\}_{\lambda\in\mathfrak{L}}$
has the finite intersection property, and so our claim can be derived
as a consequence of the $|\mathfrak{L}|^{+}$-enlarging property of
the extension. From now on, we let $H$ be a hyperfinite set with
the properties of our claim, and we let $B=H\cap W(\Omega)$. Finally,
we set $\mathcal{U}=\{X\subseteq\mathfrak{L}\mid H\in X^{\ast}\}$.
Clearly, $\mathcal{U}$ is an ultrafilter on $\mathfrak{L}$; moreover,
our construction of $H$ has been done to have that $\mathcal{U}$
is a fine ultrafilter. To prove this, let $\lambda_{0}\in\mathfrak{L}$.
Then
\[
\{\lambda\in\mathfrak{L}\mid\lambda_{0}\subseteq\lambda\}\in\mathcal{U}\Leftrightarrow H\in\{\lambda\in\mathfrak{L}\mid\lambda_{0}\subseteq\lambda\}^{\ast}\Leftrightarrow\lambda_{0}\subseteq H,
\]
and $\lambda_{0}\subseteq H$ by our construction of the set $H$.
Now we set $V_{\lambda}(\Omega)=Span(V(\Omega)\cap\lambda)$ for every
$\lambda\in\mathfrak{L}$, we set $V_{\Lambda(\mathcal{U})}(\Omega):=\lim_{\lambda\uparrow\Lambda(\mathcal{U})}V_{\lambda}(\Omega)$
and we let $\Phi:V_{\Lambda(\mathcal{U})}(\Omega)\rightarrow W(\Omega)$
be defined as follows: for every $v=\lim_{\lambda\uparrow\Lambda(\mathcal{U})}v_{\lambda}$,
\[
\Phi\left(\lim_{\lambda\uparrow\Lambda(\mathcal{U})}v_{\lambda}\right):=v_{B},
\]
where $v_{B}$ is the value of the hyperextension $(\lambda\mapsto v_{\lambda})^{\ast}:\mathfrak{L}^{\ast}\rightarrow V^{\ast}(\Omega)$
of the net $\lambda\mapsto v_{\lambda}$ evaluated at
$B\in\mathfrak{L}^{\ast}$. Let us notice that, as $v_{\lambda}\in Span(V(\Omega)\cap\lambda)$
for every $\lambda\in\mathfrak{L}$, by transfer we have that $v_{B}\in Span(V(\Omega)^{\ast}\cap B)=W(\Omega)$,
namely the image of $\Phi$ is included in $W(\Omega).$
To conclude our proof, we have to show that $\Phi$ is an embedding
(so that we can take $\Phi\left(V_{\Lambda(\mathcal{U})}(\Omega)\right)$
as the isomorphic copy of a canonical space of ultrafunctions contained
in $W(\Omega)$). The linearity of $\Phi$ holds trivially; to prove
that $\Phi$ is injective let $v=\lim_{\lambda\uparrow\Lambda(\mathcal{U})}v_{\lambda},\,w=\lim_{\lambda\uparrow\Lambda(\mathcal{U})}w_{\lambda}$.
Then
\[
\Phi(v)=\Phi(w)\Leftrightarrow v_{B}=w_{B}\Leftrightarrow B\in\left\{ \lambda\in\mathfrak{L}\mid v_{\lambda}=w_{\lambda}\right\} ^{\ast}\Leftrightarrow
\]
\[
\left\{ \lambda\in\mathfrak{L}\mid v_{\lambda}=w_{\lambda}\right\} \in\mathcal{U}\Leftrightarrow v=w.\qedhere
\]
\end{proof}
\begin{lem}
\label{lem:Adding a function}Let $V(\Omega)$ be given, let $(V_{\lambda}(\Omega))_{\lambda\in\mathfrak{L}}$
be an approximating net for $V(\Omega)$ and let $V_{\Lambda}(\Omega)=\lim_{\lambda\uparrow\Lambda}V_{\lambda}(\Omega)$.
Finally, let $u\in V(\Omega)^{\ast}\setminus V_{\Lambda}(\Omega)$.
Then $W(\Omega):=Span\left(V_{\Lambda}(\Omega)\cup\left\{ u\right\} \right)$
is a space of ultrafunctions on $V(\Omega)$.\end{lem}
\begin{proof}
Let $u=\lim_{\lambda\uparrow\Lambda}u_{\lambda}$, where $u_{\lambda}\notin V_{\lambda}(\Omega)$
for every $\lambda\in\mathfrak{L}$, and let, for every $\lambda\in\mathfrak{L}$,
$W_{\lambda}=Span\left(V_{\lambda}\cup\left\{ u_{\lambda}\right\} \right)$.
Clearly, $(W_{\lambda})_{\lambda\in\mathfrak{L}}$ is an approximating
net for $V(\Omega)$. We claim that $W(\Omega)=W_{\Lambda}(\Omega)=\lim_{\lambda\uparrow\Lambda}W_{\lambda}(\Omega)$.
Clearly, $V_{\Lambda}(\Omega)\subseteq W_{\Lambda}(\Omega)$ and $u\in W_{\Lambda}(\Omega)$,
and hence $W(\Omega)\subseteq W_{\Lambda}(\Omega)$. As for the reverse
inclusion, let $w\in W_{\Lambda}(\Omega)$ and let $w=\lim_{\lambda\uparrow\Lambda}w_{\lambda}$.
For every $\lambda\in\mathfrak{L}$ let $w_{\lambda}=v_{\lambda}+c_{\lambda}u_{\lambda}$,
where $v_{\lambda}\in V_{\lambda}$. Then
\[
w=\lim_{\lambda\uparrow\Lambda}v_{\lambda}+\lim_{\lambda\uparrow\Lambda}c_{\lambda}\cdot\lim_{\lambda\uparrow\Lambda}u_{\lambda}
\]
so, as $\lim_{\lambda\uparrow\Lambda}v_{\lambda}\in V_{\Lambda}(\Omega)$
and $\lim_{\lambda\uparrow\Lambda}u_{\lambda}=u$, we have that $w\in W(\Omega)$,
and hence the claim, and therefore the lemma, is proved. \end{proof}
\begin{thm}
\label{vecchiaroba}There is a space of ultrafunctions $U_{\Lambda}(\mathbb{R})$
which satisfies the following assumptions:
\begin{enumerate}
\item \label{enu: Thm 23 (1)}$H_{c}^{1}(\mathbb{R})\subseteq U_{\Lambda}(\mathbb{R})$;
\item \label{enu: Thm 23 (2)}the ultrafunction $\widetilde{1}$ is the
identity in $U_{\Lambda}(\mathbb{R}),$ namely $\forall u\in U_{\Lambda}(\mathbb{R}),$
$u\cdot\widetilde{1}=u;$
\item \label{enu: Thm 23 (3)}$D\widetilde{1}=0$;
\item \label{enu: Thm 23 (4)}$\forall u,v\in U_{\Lambda}(\mathbb{R}),$
$\int^{\ast}\left(Du\right)v\ dx\ =-\int^{\ast}u\left(Dv\right)\ dx.$
\end{enumerate}
\end{thm}
\begin{proof}
We set
\begin{multline*}
H_{\flat}^{1}(\mathbb{R})=Span\{u\in L^{2}(\mathbb{R})\ |\ \exists n\in\mathbb{N}~\text{s.t}.~supp(u)\subseteq\left[-n,n\right],\\
u(n)=u(-n),~u\in H^{1}([-n,n])\}.
\end{multline*}
Let $\beta\in\mathbb{N}^{\ast}\setminus\mathbb{N}$; we set
\[
W(\mathbb{R}):=\left\{ v\in\left[H_{\flat}^{1}(\mathbb{R})\right]^{\ast}\mid supp(v)\subseteq[-\beta,\beta],\,v(-\beta)=v(\beta)\right\}
\]
and we let $V_{\Lambda}(\mathbb{R})$ be a hyperfinite dimensional
vector space that contains the characteristic function $1_{[-\beta,\beta]}(x)$
of $[-\beta,\beta]$ and such that\footnote{To have this property we need the nonstandard extension to be a $\left|\mathcal{P}(\mathbb{R})\right|^{+}$-enlargement.}
\[
\left[H_{\flat}^{1}(\mathbb{R})\right]^{\sigma}\subseteq V_{\Lambda}(\mathbb{R})\subseteq W(\mathbb{R}).
\]
As $W(\mathbb{R})\subseteq\left[H_{\flat}^{1}(\mathbb{R})\right]^{\ast}$
we can apply Thm. \ref{thm:Spaces of Ultrafunctions as hyperfinite extensions}
to deduce that $V_{\Lambda}(\mathbb{R})$ contains an isomorphic copy
of a canonical space of ultrafunctions on $H_{\flat}^{1}(\mathbb{R})$.
If this isomorphic copy does not contain $1_{[-\beta,\beta]}$, we
can apply Lemma \ref{lem:Adding a function} to construct a space
of ultrafunctions included in $V_{\Lambda}(\mathbb{R})$ that contains
$1_{[-\beta,\beta]}$. Let $U_{\Lambda}(\mathbb{R})$ denote this space
of ultrafunctions on $H_{\flat}^{1}(\mathbb{R})$.
Condition (\ref{enu: Thm 23 (1)}) holds as $H_{c}^{1}(\mathbb{R})\subseteq H_{\flat}^{1}(\mathbb{R})$.
To prove condition (\ref{enu: Thm 23 (2)}) let us show that $\widetilde{1}=1_{[-\beta,\beta]}:$
in fact, for every $u\in U_{\Lambda}(\mathbb{R})$ we have
\[
\int\widetilde{1}\cdot u\,dx=\int1\cdot u\,dx=\int_{-\beta}^{\beta}u\,dx=\int1_{[-\beta,\beta]}\cdot u\,dx.
\]
Hence condition (\ref{enu: Thm 23 (2)}) holds, as $1_{[-\beta,\beta]}\cdot u=u$
for every $u\in U_{\Lambda}(\mathbb{R})$. To prove condition (\ref{enu: Thm 23 (3)})
let $u\in U_{\Lambda}(\mathbb{R}).$ Then
\[
\int D\left(1_{[-\beta,\beta]}\right)\cdot u\,dx=\int\partial\left(1_{[-\beta,\beta]}\right)\cdot u\,dx=u(\beta)-u(-\beta)=0,
\]
namely $D\left(1_{[-\beta,\beta]}\right)=0$. Finally, as $U_{\Lambda}(\mathbb{R})\subseteq\left[BV(\mathbb{R})\right]^{\ast}$,
by equation (\ref{sd}), we have that
\[
\int Du\ v\ dx\ =\left\langle \partial^{\ast}u,v\right\rangle =-\left\langle u,\partial^{\ast}v\right\rangle =-\int u\ Dv\ dx
\]
and so condition (\ref{enu: Thm 23 (4)}) holds. \end{proof}
\begin{rem}
Let $U_{\Lambda}(\mathbb{R})$ be the space of ultrafunctions given
by Theorem \ref{vecchiaroba}. Then for every ultrafunction $u\in U_{\Lambda}(\mathbb{R})$
we have
\[
\int^{\ast}u(x)dx=\int^{\ast}u(x)\cdot1dx=\int^{\ast}u(x)\cdot\widetilde{1}dx=\int_{-\beta}^{\beta}u(x)dx.
\]
We will use this property in Section \ref{tbe} when talking about
Burgers' equation.
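For instance, taking $u=\widetilde{1}=1_{[-\beta,\beta]}$ we get
$\int^{\ast}\widetilde{1}\,dx=\int_{-\beta}^{\beta}1\,dx=2\beta$,
which is an infinite number.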
\end{rem}
\subsection{Spaces of ultrafunctions involving time\label{sub:Spaces-of-ultrafunctions with time}}
Generic problems of evolution are usually formulated by equations
of the following kind:
\begin{equation}
\partial_{t}u=\mathcal{A}(u),\label{pura}
\end{equation}
where
\[
\mathcal{A}:V(\Omega)\rightarrow L^{2}(\Omega)
\]
is a differential operator.
By definition, a \textbf{strong solution} of equation (\ref{pura})
is a function
\[
\phi\in V(I\times\Omega):=\mathcal{C}{}^{0}(I,V(\Omega))\cap\mathcal{C}^{1}(I,L^{2}(\Omega))
\]
where $I:=\left[0,T\right)$ is the interval of time and $\mathcal{C}^{k}(I,B),$
$k\in\mathbb{N}$, denotes the space of functions from $I$ to a Banach
space $B$ which are $k$ times continuously differentiable.
In equation (\ref{pura}), the independent variable is $(t,x)\in I\times\Omega\subset\mathbb{R}^{N+1},$
$I=\left[0,T\right)$. A disappointing fact is that an ultrafunction
space based on $V(I\times\Omega)$ is not a convenient space in which
to study this equation, since these ultrafunction spaces are not
homogeneous in time in the following sense: if for every $t\in I^{\ast}$
we set
\[
V_{\Lambda,t}(\Omega)=\left\{ v\in V(\Omega)^{\ast}\ |\ \exists u\in V_{\Lambda}(I\times\Omega):u(t,x)=v(x)\right\} ,
\]
for $t_{2}\neq t_{1}$ we have that
\[
V_{\Lambda,t_{2}}(\Omega)\neq V_{\Lambda,t_{1}}(\Omega).
\]
This fact is disappointing since we would like to see $u(t,\cdot)$
as an element of the same space for all times $t\in I^{\ast}$.
For this reason we think that a convenient space to study equation
(\ref{pura}) in the framework of ultrafunctions is
\[
\mathcal{C}^{1}(I^{\ast},V_{\Lambda}(\Omega)),
\]
defined as follows:
\begin{defn}
\label{def:C kappa}For every $k\in\mathbb{N}$ we set
\[
\mathcal{C}^{k}(I^{\ast},V_{\Lambda}(\Omega))=\left\{ u\in\left[\mathcal{C}^{k}(I,V(\Omega))\right]^{\ast}\ |\ \forall t\in I^{\ast},\ \forall i\leq k,\,\partial_{t}^{i}u(t,\cdot)\in V_{\Lambda}(\Omega)\right\} ,\ k\in\mathbb{N}.
\]
\end{defn}
The advantage in using $\mathcal{C}^{1}(I^{\ast},V_{\Lambda}(\Omega))$
rather than $V_{\Lambda}(I\times\Omega)$ lies in the fact that
we want to consider our evolution problem as a dynamical system on
$V_{\Lambda}(\Omega)$, and the time as a continuous and homogeneous
variable. In fact, at least in the models which we will consider,
we have a better description of the phenomena in $\mathcal{C}^{1}(I^{\ast},V_{\Lambda}(\Omega))$
rather than in $V_{\Lambda}(I\times\Omega)$ or in the standard space
$\mathcal{C}^{0}(I,V(\Omega))\cap\mathcal{C}^{1}(I,L^{2}(\Omega))$.
\subsection{Ultrafunctions and distributions\label{distri}}
One of the most important properties of spaces of ultrafunctions is
that they can be seen (in some sense that we will make precise later)
as generalizations of the space of distributions (see also \cite{algebra},
where we construct an algebra of ultrafunctions that extends the space
of distributions). The proof of this result is the topic of this section.
Let $E\subset\mathbb{R}^{N}$ be a set not necessarily open. In the
applications in this paper $E$ will be $\Omega\subset\mathbb{R}^{N}$
or $\left[0,T\right)\times\Omega\subset\mathbb{R}^{N+1}.$
\begin{defn}
\label{DEfCorrespondenceDistrUltra}The space of \textbf{generalized
distributions} on $E$ is defined as follows:
\[
\mathcal{\mathscr{D}}_{G}^{\prime}(E)=L^{2}(E)^{\ast}/N,
\]
where
\[
N=\left\{ \tau\in L^{2}(E)^{\ast}\ |\ \forall\varphi\in\mathscr{D}(E)\ \int\tau\varphi\ dx\sim0\right\} .
\]
\end{defn}
The equivalence class of $u$ in $L^{2}(E)^{\ast},$ with some abuse
of notation, will be denoted by
\[
\left[u\right]_{\mathscr{D}}.
\]
\begin{defn}
For every (internal or external) vector space $W(E)\subset L^{2}(E)^{\ast},$
we set
\[
\left[W(E)\right]_{B}=\left\{ u\in W(E)\ |\ \forall\varphi\in\mathfrak{\mathcal{\mathscr{D}}}(E)\,\int u\varphi\ dx\ \ \text{is\ finite}\right\} .
\]
\end{defn}
\begin{defn}
Let $\left[u\right]_{\mathfrak{\mathscr{D}}}$ be a generalized distribution.
We say that $\left[u\right]_{\mathscr{D}}$ is a bounded generalized
distribution if $u\in\left[L^{2}(E)^{\ast}\right]_{B}$.
\end{defn}
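For instance, every $u\in L^{2}(E)^{\ast}$ whose norm $\left\Vert u\right\Vert $
is finite gives rise to a bounded generalized distribution: by (the
transfer of) the Cauchy--Schwarz inequality, for every $\varphi\in\mathscr{D}(E)$,
\[
\left|\int u\,\varphi\ dx\right|\leq\left\Vert u\right\Vert \left\Vert \varphi\right\Vert _{L^{2}(E)},
\]
which is a finite number.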
Finally, we set
\[
\mathscr{D}_{GB}^{\prime}(E):=[\mathcal{\mathscr{D}}_{G}^{\prime}(E)]_{B}.
\]
We now want to prove that the space $\mathfrak{\mathcal{\mathscr{D}}}_{GB}^{\prime}(E)$
is isomorphic (as a vector space) to $\mathfrak{\mathcal{\mathscr{D}}}^{\prime}(E).$
To do this we will need the following lemma.
\begin{lem}
\label{cannolo} Let $(a_{n})_{n\in\mathbb{N}}$ be a sequence of
real numbers and let $l\in\mathbb{R}$. If $\lim_{n\rightarrow+\infty}a_{n}=l$
then $sh(\lim_{\lambda\uparrow\Lambda}a_{|\lambda|})=l.$ \end{lem}
\begin{proof}
Since $\lim_{n\rightarrow+\infty}a_{n}=l,$ for every $\varepsilon\in\mathbb{R}_{>0}$
the set
\[
I_{\varepsilon}=\{\lambda\in\mathfrak{L}\mid|l-a_{|\lambda|}|<\varepsilon\}\in\mathcal{U}.
\]
In fact, let $N\in\mathbb{N}$ be such that $|a_{m}-l|<\varepsilon$
for every $m\geq N$. Then for every $\lambda_{0}\in\mathfrak{L}$
such that $|\lambda_{0}|\geq N$ we have that $I_{\varepsilon}\supseteq\{\lambda\in\mathfrak{L}\mid\lambda_{0}\subseteq\lambda\}\in\mathcal{U}$,
and this proves that $I_{\varepsilon}\in\mathcal{U}$. Therefore for
every $\varepsilon\in\mathbb{R}_{>0}$ we have
\[
|l-\lim_{\lambda\uparrow\Lambda}a_{|\lambda|}|<\varepsilon,
\]
and so $sh(\lim_{\lambda\uparrow\Lambda}a_{|\lambda|})=l.$ \end{proof}
\begin{thm}
\label{bello}There is a linear isomorphism
\[
\Phi:\mathfrak{\mathcal{\mathscr{D}}}_{GB}^{\prime}(E)\rightarrow\mathfrak{\mathscr{D}}^{\prime}(E)
\]
defined by the following formula:
\[
\forall\varphi\in\mathscr{D},\,\left\langle \Phi\left(\left[u\right]_{\mathfrak{\mathcal{\mathscr{D}}}}\right),\varphi\right\rangle _{\mathfrak{\mathcal{\mathscr{D}}}(E)}=sh\left(\int^{\ast}u\ \varphi^{\ast}\ dx\right).
\]
\end{thm}
\begin{proof}
Clearly the map $\Phi$ is well defined (namely $u\approx_{\mathcal{\mathscr{D}}}v\Rightarrow\Phi\left(\left[u\right]_{\mathcal{\mathscr{D}}}\right)=\Phi\left(\left[v\right]_{\mathfrak{\mathscr{D}}}\right)$),
it is linear and its range is in $\mathcal{\mathscr{D}}^{\prime}(E)$.
It is also immediate to see that it is injective. The most delicate
part is to show that it is surjective. To see this let $T\in\mathfrak{\mathcal{\mathscr{D}}}^{\prime}(E);$
we have to find an ultrafunction $u_{T}$ such that
\begin{equation}
\Phi\left(\left[u_{T}\right]_{\mathscr{D}}\right)=T.\label{lana}
\end{equation}
Since $L^{2}(E)$ is dense in $\mathcal{\mathscr{D}}^{\prime}(E)$
with respect to the weak topology, there is a sequence $\psi_{n}\in L^{2}(E)$
such that $\psi_{n}\rightarrow T.$ We claim that
\[
u_{T}=\lim\limits _{\lambda\uparrow\Lambda}\ \psi_{|\lambda|}
\]
satisfies (\ref{lana}) and $[u_{T}]_{\mathcal{\mathscr{D}}}\in\mathfrak{\mathscr{D}}_{GB}^{\prime}(E)$.
Since $u_{T}$ is a $\Lambda$-limit of $L^{2}(E)$ functions, we
have that $u_{T}\in L^{2}(E)^{\ast}$, so $[u_{T}]_{\mathcal{\mathscr{D}}}\in\mathfrak{\mathcal{\mathscr{D}}}_{G}^{\prime}(E)$.
It remains to show that $[u_{T}]_{\mathfrak{\mathcal{\mathscr{D}}}}$
is bounded and that $\Phi\left(\left[u_{T}\right]_{\mathfrak{\mathscr{D}}}\right)=T$.
Take $\varphi\in\mathcal{\mathscr{D}}$; by definition,
\[
\left\langle T,\varphi\right\rangle _{\mathfrak{\mathcal{\mathscr{D}}}(E)}=\lim\limits _{n\rightarrow+\infty}\int^{\ast}\psi_{n}\cdot\varphi dx=\lim\limits _{n\rightarrow+\infty}a_{n},
\]
where we have set $a_{n}=\int\psi_{n}\cdot\varphi dx$. Then by Lemma
\ref{cannolo} we have
\begin{multline*}
\lim\limits _{n\rightarrow+\infty}a_{n}=sh\left(\lim\limits _{\lambda\uparrow\Lambda}a_{|\lambda|}\right)=sh\left(\lim\limits _{\lambda\uparrow\Lambda}\int\psi_{|\lambda|}\cdot\varphi dx\right)=\\
sh\left(\int^{\ast}\left(\lim\limits _{\lambda\uparrow\Lambda}\psi_{|\lambda|}\cdot\varphi\right)dx\right)=sh\left(\int^{\ast}u_{T}\cdot\varphi dx\right)=\left\langle \Phi\left(\left[u_{T}\right]_{\mathfrak{\mathscr{D}}}\right),\varphi\right\rangle _{\mathscr{D}(E)},
\end{multline*}
therefore $\left\langle \Phi\left(\left[u_{T}\right]_{\mathfrak{\mathscr{D}}}\right),\varphi\right\rangle _{\mathfrak{\mathscr{D}}(E)}=\left\langle T,\varphi\right\rangle _{\mathfrak{\mathscr{D}}(E)}\in\mathbb{R}$
and the thesis is proved.
\end{proof}
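For instance, if $0$ is an interior point of $E$, the Dirac distribution
$\delta_{0}$ is represented as follows: take any sequence $\psi_{n}\in L^{2}(E)$
with $\psi_{n}\rightarrow\delta_{0}$ in $\mathscr{D}^{\prime}(E)$
(e.g. suitable mollifiers) and set (the symbol $u_{\delta}$ is used
here only to denote this particular representative)
\[
u_{\delta}=\lim_{\lambda\uparrow\Lambda}\psi_{|\lambda|};
\]
then, by the above proof, $\left[u_{\delta}\right]_{\mathscr{D}}\in\mathscr{D}_{GB}^{\prime}(E)$
and
\[
\forall\varphi\in\mathscr{D}(E),\ \ sh\left(\int^{\ast}u_{\delta}\ \varphi^{\ast}\ dx\right)=\varphi(0),
\]
namely $\Phi\left(\left[u_{\delta}\right]_{\mathscr{D}}\right)=\delta_{0}$.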
From now on we will identify the spaces $\mathfrak{\mathscr{D}}_{GB}^{\prime}(E)\ $and
$\mathscr{D}^{\prime}(E);$ so, we will identify $\left[u\right]_{\mathscr{D}}\ $with
$\Phi\left(\left[u\right]_{\mathscr{D}}\right)$ and we will write
$\left[u\right]_{\mathscr{D}}\in\mathfrak{\mathscr{D}}^{\prime}(E)$
and
\[
\left\langle \left[u\right]_{\mathscr{D}},\varphi\right\rangle _{\mathscr{D}(E)}:=\langle\Phi[u]_{\mathscr{D}},\varphi\rangle=sh\left(\int^{\ast}u\ \varphi^{\ast}\ dx\right).
\]
Moreover, with some abuse of notation, we will write also that $\left[u\right]_{\mathscr{D}}\in L^{2}(E),\ \left[u\right]_{\mathfrak{\mathscr{D}}}\in V(E),$
etc. meaning that the distribution $\left[u\right]_{\mathscr{D}}$
can be identified with a function $f$ in $L^{2}(E),$ $V(E),$ etc.
By our construction, this is equivalent to saying that $f^{\ast}\in\left[u\right]_{\mathfrak{\mathscr{D}}}.$
So, in this case, we have that $\forall\varphi\in\mathfrak{\mathscr{D}}(E)$
\[
\left\langle \left[u\right]_{\mathfrak{\mathscr{D}}},\varphi\right\rangle _{\mathfrak{\mathscr{D}}(E)}=sh\left(\int^{\ast}u\ \varphi^{\ast}\ dx\right)=sh\left(\int^{\ast}f^{\ast}\varphi^{\ast}dx\right)=\int f\ \varphi\ dx.
\]
An immediate consequence of Theorem \ref{bello} is the following:
\begin{prop}
The space $\left[\mathcal{C}^{1}(I,V_{\Lambda}(\Omega))\right]_{B}$
can be mapped into a space of distributions by setting, $\forall u\in\left[\mathcal{C}^{1}(I,V_{\Lambda}(\Omega))\right]_{B},$
\begin{equation}
\forall\varphi\in\mathscr{D}(I\times\Omega),\left\langle \left[u\right]_{\mathscr{D}(I\times\Omega)},\varphi\right\rangle =sh\left(\int\int u(t,x)\varphi^{\ast}(t,x)dxdt\right).\label{pipa}
\end{equation}
\end{prop}
Finally, let us also notice that the proof of Theorem \ref{bello}
can be modified to prove the following result:
\begin{prop}
If $W(E)$ is an internal space such that $\mathscr{D}^{\ast}(E)\subset W(E)\subset L^{2}(E)^{\ast}$,
then every distribution $\left[v\right]_{\mathscr{D}}$ has a representative
$u\in W(E)\cap\left[v\right]_{\mathscr{D}}$. Namely, the map
\[
\Phi:[W(E)]_{B}\rightarrow\mathscr{D}^{\prime}(E)
\]
defined by
\[
\Phi(u)=\left[u\right]_{\mathscr{D}}
\]
is surjective.\end{prop}
\begin{proof}
We can argue as in the proof of Thm \ref{bello}, by substituting
$L^{2}(E)$ with $\mathscr{D}(E)$. This is possible since $\mathscr{D}(E)$
is dense in $L^{2}(E)$ (and so, in particular, $W(E)$ is dense in
$L^{2}(E)$), and the density property was the only condition needed
to prove the surjectivity of the map $\Phi$.
\end{proof}
In the following sections we want to study problems such as equation
(\ref{pura}) in the context of ultrafunctions. To do so we will need
to restrict to the following family of operators:
\begin{defn}
\label{wc} We say that an operator
\[
\mathcal{A}:V(\Omega)\rightarrow V^{\prime}(\Omega)
\]
is weakly continuous if, for all $u,v\in\left[V_{\Lambda}(\Omega)\right]_{B}$,
the condition
\[
\int u\varphi^{\ast}\ dx\sim\int v\varphi^{\ast}\ dx\quad\text{for every }\varphi\in\mathscr{D}(\Omega)
\]
implies
\[
\int\mathcal{A}^{\ast}(u)\ \varphi^{\ast}\ dx\sim\int\mathcal{A}^{\ast}(v)\ \varphi^{\ast}\ dx\quad\text{for every }\varphi\in\mathscr{D}(\Omega).
\]
\end{defn}
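For instance, if the weak derivative $\partial:V(\Omega)\rightarrow V^{\prime}(\Omega)$
is defined (as in the example following Definition \ref{b}), then
$\partial$ is weakly continuous. Indeed (a sketch, using only the
defining property $\left\langle \partial w,\varphi\right\rangle =-\int w\,\partial\varphi\,dx$
for $w\in V(\Omega)$ and $\varphi\in\mathscr{D}(\Omega)$, which transfers
to internal functions): if $u,v\in\left[V_{\Lambda}(\Omega)\right]_{B}$
satisfy $\int u\psi^{\ast}dx\sim\int v\psi^{\ast}dx$ for every $\psi\in\mathscr{D}(\Omega)$,
then, applying this to $\psi=\partial\varphi\in\mathscr{D}(\Omega)$,
\[
\int\partial^{\ast}(u)\,\varphi^{\ast}\,dx=-\int u\left(\partial\varphi\right)^{\ast}dx\sim-\int v\left(\partial\varphi\right)^{\ast}dx=\int\partial^{\ast}(v)\,\varphi^{\ast}\,dx.
\]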
For our purposes, the important property of weakly continuous operators
is that if
\[
\mathcal{A}:V(\Omega)\rightarrow V^{\prime}(\Omega)
\]
is weakly continuous then it can be extended to an operator
\[
\left[\mathcal{A}\right]_{\mathfrak{\mathscr{D}}}:\mathfrak{\mathscr{D}}^{\prime}(\Omega)\rightarrow\mathfrak{\mathscr{D}}^{\prime}(\Omega)
\]
by setting
\[
\left[\mathcal{A}\right]_{\mathscr{D}}\left(\left[u\right]_{\mathscr{D}}\right)=\left[\mathcal{A}^{\ast}\left(w\right)\right]_{\mathscr{D}},
\]
where $w\in\left[u\right]_{\mathscr{D}}\cap\left[V_{\Lambda}(\Omega)\right]_{B}.$ In the following,
with some abuse of notation we will write $\left[\mathcal{A}\left(u\right)\right]_{\mathscr{D}}$
instead of $\left[\mathcal{A}\right]_{\mathfrak{\mathscr{D}}}\left(\left[u\right]_{\mathscr{D}}\right).$
\begin{rem}
Definition \ref{wc} can be reformulated in the classical language
as follows: $\mathcal{A}$ is weakly continuous if for every weakly
convergent sequence $u_{n}$ in $\mathfrak{\mathscr{D}}^{\prime}(\Omega)$
the sequence $\mathcal{A}\left(u_{n}\right)$ is weakly convergent
in $\mathfrak{\mathscr{D}}^{\prime}(\Omega)$.
\end{rem}
\section{Generalized Ultrafunction Solutions (GUS)\label{gus}}
In this section we will show that an evolution equation such as equation
(\ref{pura}) has Generalized Ultrafunction Solutions (GUS) under
very general assumptions on $\mathcal{A}$, and we will show the relationships
of GUS with strong and weak solutions. However, before doing this,
we think that it is helpful to give the feeling of the notion of GUS
for stationary problems. This will be done in Section \ref{gussp}
providing a simple typical example. We refer to \cite{ultra}, \cite{belu2012}
and \cite{milano} for other examples.
\subsection{Generalized Ultrafunction Solutions for stationary problems\label{gussp}}
A typical stationary problem in PDE can be formulated as follows:
\[
\text{\textit{Find}\ \ \ }u\in V(\Omega)\ \ \ \text{\textit{such that}}
\]
\begin{equation}
\mathcal{A}(u)=f,\label{ps}
\end{equation}
where $V(\Omega)\subseteq L^{2}(\Omega)$ is a vector space, $\mathcal{A}:V(\Omega)\rightarrow V^{\prime}(\Omega)$
is a differential operator and $f\in L^{2}(\Omega)$.
The \textquotedbl{}typical\textquotedbl{} formulation of this problem
in the framework of ultrafunctions is the following one:
\[
\text{\textit{Find} \ }u\in V_{\Lambda}(\Omega)\ \ \text{\textit{such that}}
\]
\begin{equation}
\widetilde{\mathcal{A}}(u)=\widetilde{f}.\label{P}
\end{equation}
In particular, if $\mathcal{A}:V(\Omega)\rightarrow L^{2}(\Omega)$
and $f\in L^{2}(\Omega),$ the above problem can be formulated in
the following equivalent \textquotedbl{}weak form\textquotedbl{}:
\[
\text{\textit{Find} \ }u\in V_{\Lambda}(\Omega)\ \ \text{\textit{such that}}
\]
\begin{equation}
\forall\varphi\in V_{\Lambda}(\Omega),\ \int_{\Omega^{\ast}}^{\ast}\mathcal{A}^{\ast}(u)\varphi dx=\int_{\Omega^{\ast}}^{\ast}f^{\ast}\varphi dx.
\end{equation}
Such an ultrafunction $u$ will be called a \textbf{GUS} of Problem
(\ref{P}).
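Let us notice that this notion is consistent with the classical one:
if $u\in V(\Omega)$ solves (\ref{ps}), then its natural extension
$u^{\ast}$ (which belongs to $V_{\Lambda}(\Omega)$ by Remark \ref{nina})
is a GUS of (\ref{P}), since
\[
\widetilde{\mathcal{A}}(u^{\ast})=P_{\Lambda}\mathcal{A}^{\ast}(u^{\ast})=P_{\Lambda}\left(\mathcal{A}(u)\right)^{\ast}=P_{\Lambda}f^{\ast}=\widetilde{f}.
\]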
Usually, it is possible to find a classical solution for problems
of the type (\ref{ps}) if there are a priori bounds, but the existence
of a priori bounds is not sufficient to guarantee the existence of
solutions in $V(\Omega).$ On the contrary, the existence of a priori
bounds is sufficient to find a GUS in $V_{\Lambda}(\Omega)$ (as we
are going to show).
Following the general strategy to find a GUS for Problem (\ref{P}),
we start by solving the following approximate problems for every $\lambda$
in a qualified set:
\[
\text{\textit{Find} }u_{\lambda}\in V_{\lambda}(\Omega)\text{ \ \textit{such\ that}}
\]
\[
\forall\varphi\in V_{\lambda}(\Omega),\ \int_{\Omega}\mathcal{A}(u_{\lambda})\varphi dx=\int_{\Omega}f\varphi dx.
\]
A priori bounds in each space $V_{\lambda}(\Omega)$ are sufficient
to guarantee the existence of solutions. The next step consists in
taking the $\Lambda$-limit. Clearly, this strategy can be applied
to a very large class of problems. Let us consider a typical example
in detail:
\begin{thm}
\label{nl}Let $\mathcal{A}:\,V(\Omega)\rightarrow V^{\prime}(\Omega)$
be a hemicontinuous\footnote{An operator between Banach spaces is called \emph{hemicontinuous}
if its restriction to finite dimensional subspaces is continuous.} operator such that for every finite dimensional space $V_{\lambda}\subset V(\Omega)$
there exists $R_{\lambda}\in\mathbb{R}$ such that
\begin{equation}
\text{if }u\in V_{\lambda}\ \text{and}\ \left\Vert u\right\Vert _{\sharp}=R_{\lambda}\ \text{then}\ \left\langle \mathcal{A}(u),u\right\rangle >0,\label{gilda}
\end{equation}
where $\left\Vert \cdot\right\Vert _{\sharp}$ is any norm in $V(\Omega).$
Then the equation (\ref{P}) has at least one solution $u_{\Lambda}\in V_{\Lambda}(\Omega).$ \end{thm}
\begin{proof}
If we set
\[
B_{\lambda}=\left\{ u\in V_{\lambda}|\ \left\Vert u\right\Vert _{\sharp}\leq R_{\lambda}\right\}
\]
and if $\mathcal{A}_{\lambda}:V_{\lambda}\rightarrow V_{\lambda}$
is the operator defined by the following relation:
\[
\forall v\in V_{\lambda},\ \left\langle \mathcal{A}_{\lambda}(u),v\right\rangle =\left\langle \mathcal{A}(u),v\right\rangle
\]
then it follows from the hypothesis (\ref{gilda}) that $\deg(\mathcal{A}_{\lambda},B_{\lambda},0)=1,\ $where
$\deg(\cdot,\cdot,\cdot)$ denotes the topological degree (see e.g.
\cite{AmMa2003}). Hence, $\forall\lambda\in\mathfrak{L}$,
\[
\exists u_{\lambda}\in V_{\lambda},\forall v\in V_{\lambda},\ \left\langle \mathcal{A}_{\lambda}(u_{\lambda}),v\right\rangle =0.
\]
Taking the $\Lambda$-limit of the net $(u_{\lambda})$ we get a GUS
$u_{\Lambda}\in V_{\Lambda}(\Omega)$ of equation (\ref{P}).\end{proof}
\begin{example}
Let $\Omega$ be an open bounded set in $\mathbb{R}^{N}$ and let
\[
a(\cdot,\cdot,\cdot):\mathbb{R}^{N}\times\mathbb{R}\times\overline{\Omega}\rightarrow\mathbb{R}^{N},\ b(\cdot,\cdot,\cdot):\mathbb{R}^{N}\times\mathbb{R}\times\overline{\Omega}\rightarrow\mathbb{R}
\]
be continuous functions such that $\forall\xi\in\mathbb{R}^{N},\forall s\in\mathbb{R},\forall x\in\overline{\Omega}$
we have
\begin{equation}
a(\xi,s,x)\cdot\xi+b(\xi,s,x)\geq\nu\left(|\xi|\right),\label{lalla}
\end{equation}
where $\nu$ is a function (not necessarily positive) such that
\begin{equation}
\nu\left(t\right)\rightarrow+\infty\ \text{for\ }t\rightarrow+\infty.\label{lalla1}
\end{equation}
We consider the following problem:
\[
\text{\textit{Find\ }\ }u\in\mathcal{C}^{2}(\Omega)\cap\mathcal{C}_{0}(\overline{\Omega})\ \ \text{\textit{s.t.}}
\]
\begin{equation}
\nabla\cdot a(\nabla u,u,x)=b(\nabla u,u,x).\label{5th}
\end{equation}
In the framework of ultrafunctions this problem becomes
\[
\text{\textit{Find} \ }u\in V_{\Lambda}(\Omega),\ \text{\textit{where} }V(\Omega):=\mathcal{C}^{2}(\Omega)\cap\mathcal{C}_{0}(\overline{\Omega}),\ \ \text{\textit{such that}}
\]
\[
\forall\varphi\in V_{\Lambda}(\Omega),\ \int_{\Omega}\nabla\cdot a(\nabla u,u,x)\ \varphi\ dx=\int_{\Omega}b(\nabla u,u,x)\varphi dx.
\]
If we set
\[
\mathcal{A}(u)=-\nabla\cdot a(\nabla u,u,x)+b(\nabla u,u,x)
\]
it is not difficult to check that conditions (\ref{lalla}) and (\ref{lalla1})
are sufficient to guarantee the assumptions of Thm.~\ref{nl}. Hence
we have the existence of an ultrafunction solution of problem (\ref{5th}).
Problem (\ref{5th}) covers well known situations such as the case
in which $\mathcal{A}$ is a maximal monotone operator, but also very
pathological cases. E.g., by taking
\[
a(\nabla u,u,x)=|\nabla u|^{p-2}\nabla u-\nabla u;\ b(\nabla u,u,x)=f(x),
\]
we get the equation
\[
\Delta_{p}u-\Delta u=f.
\]
Since
\[
\int_{\Omega}\left(-\Delta_{p}u+\Delta u\right)\ u\ dx=\left\Vert u\right\Vert _{W_{0}^{1,p}}^{p}-\left\Vert u\right\Vert _{H_{0}^{1}}^{2},
\]
it is easy to check that we have a priori bounds (but not the convergence)
in $W_{0}^{1,p}(\Omega)$. Therefore we have GUS, and it might be
interesting to study the kind of regularity of these solutions.
\end{example}
\subsection{Strong and weak solutions of evolution problems\label{sub:Strong-and-weak}}
As usual, let
\[
\mathcal{A}:V(\Omega)\rightarrow V^{\prime}(\Omega)
\]
be a differential operator.
We are interested in the following Cauchy problem for $t\in I:=\left[0,T\right)$:
find $u$ such that
\begin{equation}
\left\{ \begin{array}{c}
\partial_{t}u=\mathcal{A}(u);\\
\\
u\left(0\right)=u_{0}.
\end{array}\right.\label{cp}
\end{equation}
A solution $u=u(t,x)$ of problem (\ref{cp}) is called a \textbf{strong
solution} if
\[
u\in C^{0}(I,V(\Omega))\cap C^{1}(I,V^{\prime}(\Omega)).
\]
It is well known that many problems of type (\ref{cp}) do not have
strong solutions even if the initial data is smooth (for example Burgers'
equation (\ref{BE})). This is the reason why the notion of weak solution
becomes necessary. If $\mathcal{A}$ is a linear operator and $\mathcal{A}\left(\mathfrak{\mathcal{\mathscr{D}}}(\Omega)\right)\subset\mathfrak{\mathcal{\mathscr{D}}}'(\Omega)$,
classically a distribution $T\in V^{\prime}(I\times\Omega)$ is
called a weak solution of problem (\ref{cp}) if
\[
\forall\varphi\in\mathfrak{\mathscr{D}}(I\times\Omega),\ -\left\langle T,\partial_{t}\varphi\right\rangle -\int_{\Omega}u_{0}(x)\ \varphi(0,x)dx=\left\langle T,\mathcal{A}^{\dag}\varphi\right\rangle ,
\]
where $\mathcal{A}^{\dag}$ is the adjoint of $\mathcal{A}$.
If $\mathcal{A}$ is not linear there is no general definition
of weak solution. For example, if one considers Burgers' equation,
a function $w\in L_{loc}^{1}(I\times\Omega)$ is considered a weak
solution if
\[
\forall\varphi\in\mathfrak{\mathscr{D}}(I\times\Omega),\ \int\int w\partial_{t}\varphi\ dxdt+\int_{\Omega}u_{0}(x)\ \varphi(0,x)dx+\frac{1}{2}\int\int w^{2}\partial_{x}\varphi\ dxdt=0.
\]
However, if we use the notion of generalized distribution developed
in section \ref{distri} we can give a definition of weak solution
for problems involving weakly continuous operators that generalizes
the classical one for linear operators:
\begin{defn}
\label{DS} Let $\mathcal{A}:W\rightarrow\mathscr{D}^{\prime}$ be
weakly continuous. We say that $u\in W$ is a weak solution of Problem
(\ref{cp}) if the following condition is fulfilled: $\forall\varphi\in\mathfrak{\mathscr{D}}(I\times\Omega)$
\[
-\int u(t,x)\varphi_{t}(t,x)\,dxdt-\int u(0,x)\varphi(0,x)dx=\langle\mathcal{A}(u),\varphi\rangle.
\]
\end{defn}
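Let us notice that, when $\mathcal{A}$ is linear and $\mathcal{A}\left(\mathscr{D}(\Omega)\right)\subset\mathscr{D}^{\prime}(\Omega)$,
Definition \ref{DS} reduces to the classical notion recalled above:
in this case $\left\langle \mathcal{A}(u),\varphi\right\rangle =\left\langle u,\mathcal{A}^{\dag}\varphi\right\rangle $,
so the condition of Definition \ref{DS} reads
\[
-\left\langle u,\partial_{t}\varphi\right\rangle -\int_{\Omega}u(0,x)\varphi(0,x)dx=\left\langle u,\mathcal{A}^{\dag}\varphi\right\rangle ,
\]
which is the weak formulation for linear operators written at the
beginning of this subsection (with $T=u$ and $u(0,\cdot)=u_{0}$).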
From the theory developed in Section \ref{distri}, the notion of
weak solution given by Definition \ref{DS} can be written in nonstandard
terms as follows: $\left[w\right]_{\mathscr{D}}$ is a weak solution
of Problem (\ref{cp}) if
\[
\left\{ \begin{array}{c}
w\in\left[C^{1}(I,V(\Omega))^{\ast}\right]_{B};\\
\\
\forall\varphi\in\mathscr{D}(I\times\Omega),\ \int_{0}^{T}\int_{\Omega}\partial_{t}w\,\varphi^{\ast}dxdt-\int_{0}^{T}\int_{\Omega}\mathcal{A}^{\ast}\left(w\right)\varphi^{\ast}dxdt\sim0;\\
\\
w\left(0,x\right)=u_{0}(x).
\end{array}\right.
\]
By the above equations, any strong solution is a weak solution, but
the converse is not true. A very large class of problems (such as
(\ref{BE})) which do not have strong solutions have weak solutions,
or even only distributional solutions. Unfortunately, there are problems
which do not have even weak (or distributional) solutions, and, worse
than that, there are problems (such as (\ref{BE})) which have more than
one weak solution, namely the uniqueness of the Cauchy problem is
violated, and hence the physical meaning of the problem is lost. This
is why we think that it is worthwhile to investigate this kind of
problem in the framework of generalized solutions in the world of
ultrafunctions.
\subsection{Generalized Ultrafunction Solutions and their first properties}
In Section \ref{gussp} we gave the definition of GUS for stationary
problems. The definition of GUS for evolution problems is analogous:
\begin{defn}
\label{def:pippa}An ultrafunction $u\in\mathcal{C}^{1}(I^{\ast},V_{\Lambda}(\Omega))$
is called a Generalized Ultrafunction Solution (GUS) of problem (\ref{cp})
if $\forall v\in V_{\Lambda}(\Omega),\ $
\begin{equation}
\left\{ \begin{array}{c}
\int\partial_{t}uv\ dx=\int\mathcal{A}^{\mathcal{\ast}}(u)v\ dx;\\
\\
u\left(0,x\right)=u_{0}\left(x\right).
\end{array}\right.\label{q}
\end{equation}
\end{defn}
Problem (\ref{q}) can be rewritten as follows:
\[
\left\{ \begin{array}{c}
u\in\mathcal{C}^{1}(I^{\ast},V_{\Lambda});\notag\\
\\
\partial_{t}u=P_{\Lambda}\mathcal{A}^{\mathcal{\ast}}(u);\label{esse}\\
\\
u\left(0,x\right)=u_{0}\left(x\right),\notag
\end{array}\right.
\]
where $P_{\Lambda}$ is the orthogonal projection. The main Theorem
of this section states that problem (\ref{cp}) locally has a GUS.
As for the ordinary differential equations in finite dimensional spaces,
this solution is defined for an interval of time which depends on
the initial data.
\begin{thm}
\label{TT}Let $\mathcal{A}|_{V_{\lambda}(\Omega)}$ be locally Lipschitz
continuous $\forall\lambda\in\mathfrak{L}$; then there exists a number
$T_{\Lambda}(u_{0})\in\left(0,T\right]_{\mathbb{R}^{\ast}}$ such
that problem (\ref{cp}) has a unique GUS $u_{\Lambda}$ in $\left[0,T_{\Lambda}(u_{0})\right)_{\mathbb{R}^{\ast}}.$ \end{thm}
\begin{proof}
For every $\lambda\in\mathfrak{L}$ let us consider the approximate
problem
\begin{equation}
\left\{ \begin{array}{c}
u_{\lambda}\in C^{1}(I,V_{\lambda}(\Omega))\ \ \text{and}\ \ \forall v\in V_{\lambda}(\Omega);\\
\\
\int_{\Omega}\partial_{t}u_{\lambda}(t,x)\ v(x)dx=\int_{\Omega}\mathcal{A}(u_{\lambda}(t,x))\ v(x)dx;\\
\\
\int_{\Omega}u_{\lambda}\left(0,x\right)v(x)dx=\int_{\Omega}u_{0}(x)\ v(x)dx.
\end{array}\right.\label{gonerilla}
\end{equation}
It is immediate to check that this problem is equivalent to the following
one
\begin{equation}
\left\{ \begin{array}{c}
u_{\lambda}\in C^{1}(I,V_{\lambda}(\Omega));\\
\\
\partial_{t}u_{\lambda}(t,x)=P_{\lambda}\mathcal{A}(u_{\lambda}(t,x));\\
\\
u_{\lambda}\left(0\right)=P_{\lambda}u_{0},
\end{array}\right.\label{regana}
\end{equation}
where the \textquotedbl{}projection\textquotedbl{} $P_{\lambda}:L^{2}(\Omega)\rightarrow V_{\lambda}(\Omega)$
is defined by
\begin{equation}
\int_{\Omega}P_{\lambda}w(x)v(x)dx=\left\langle w,v\right\rangle ,\ \forall v\in V_{\lambda}(\Omega).\label{PL}
\end{equation}
The Cauchy problem (\ref{regana}) is well posed since $V_{\lambda}(\Omega)$
is a finite dimensional vector space and $P_{\lambda}\circ$ $\mathcal{A}$
is locally Lipschitz continuous on $V_{\lambda}$. Then there exists
a number $T_{\lambda}(u_{0})\in\left(0,T\right]_{\mathbb{R}}$ such
that problem (\ref{regana}) has a unique solution in $\left[0,T_{\lambda}(u_{0})\right)_{\mathbb{R}}.$
Denoting by $u_{\lambda}$ this solution and taking the $\Lambda$-limits
\[
u_{\Lambda}:=\lim_{\lambda\uparrow\Lambda}u_{\lambda},\qquad T_{\Lambda}(u_{0}):=\lim_{\lambda\uparrow\Lambda}T_{\lambda}(u_{0}),
\]
we get the conclusion. \end{proof}
\begin{defn}
We will refer to a solution $u_{\Lambda}$ given as in Theorem \ref{TT}
as a local GUS.
\end{defn}
Clearly the GUS is a global solution (namely a function defined for
every $t\in\left[0,T\right)$) if $T_{\Lambda}(u_{0})$ is equal to
$T$. In concrete applications, the existence of a global solution
usually is a consequence of the existence of a coercive integral of
motion. In fact, we have the following corollary:
\begin{cor}
\label{coco}Let the assumptions of Thm.~\ref{TT} hold. Moreover,
let us assume that there exists a function $I:V(\Omega)\rightarrow\mathbb{R}$
such that if $u(t)$ is a local GUS in $\left[0,T_{\Lambda}\right)$,
then
\begin{equation}
\partial_{t}I^{\ast}\left(u(t)\right)\leq0\label{vend}
\end{equation}
(or, more generally, that $I^{\ast}\left(u(t)\right)$ is nonincreasing)
and such that $\forall\lambda\in\mathfrak{L},\ I|_{V_{\lambda}(\Omega)}$
is coercive (namely if $u_{n}\in V_{\lambda}(\Omega)$ and $\left\Vert u_{n}\right\Vert \rightarrow\infty$
then $I\left(u_{n}\right)\rightarrow\infty$). Then $u(t)$ can be
extended to the full interval $\left[0,T\right).$ \end{cor}
\begin{proof}
By our assumptions, there is a qualified set $Q$ such that $\forall\lambda\in Q$,
if $u_{\lambda}(t)$ is defined in $\left[0,T_{\lambda}\right),$
then
\begin{equation}
\partial_{t}I\left(u_{\lambda}(t)\right)\leq0\label{vend1}
\end{equation}
since otherwise the inequality (\ref{vend}) would be violated. By
(\ref{vend1}) and the coercivity of $I|_{V_{\lambda}(\Omega)}$ we
have that $T_{\lambda}(u_{0})=T.$ Hence also $u(t)$ is defined in
the full interval $\left[0,T\right).$
\end{proof}
\subsection{GUS, weak and strong solutions}
We now investigate the relations between GUS, weak solutions and strong
solutions.
\begin{thm}
\label{mina}Let $u\in C^{1}(I^{\ast},V_{\Lambda}(\Omega))$ be a
GUS of Problem (\ref{cp}), and let us assume that $\mathcal{A}$
is weakly continuous. Then
\begin{enumerate}
\item \label{enu:if-then-the 1}if
\[
u\in\left[\mathcal{C}^{1}(I^{\ast},V_{\Lambda}(\Omega))\right]_{B}
\]
then the distribution $\left[u\right]_{\mathscr{D}}$ is a weak solution
of Problem (\ref{cp});
\item moreover, if
\[
w\in\left[u\right]_{\mathscr{D}}\cap\mathcal{C}^{1}(I,V(\Omega))
\]
then $w$ is a strong solution of Problem (\ref{cp}).
\end{enumerate}
\end{thm}
\begin{proof}
(1) In order to simplify the notation, in this proof we will write
$\int$ instead of $\int^{\ast}$. Since $u$ is a GUS, then for any
$\varphi\in\mathscr{D}\left(I\times\Omega\right)\subset\mathcal{C}_{B}^{\infty}(I^{\ast},V_{\Lambda}(\Omega))\ $
(we identify $\varphi$ and $\varphi^{\ast}$) we have that
\[
\int_{0}^{T}\int_{\Omega^{\ast}}\partial_{t}u\varphi\ dx\ dt=\int_{0}^{T}\int_{\Omega^{\ast}}\mathcal{A^{\ast}}(u)\varphi dx\ dt.
\]
Integrating by parts in $t$, we get
\[
\int_{0}^{T}\int_{\Omega^{\ast}}u(t,x)\ \partial_{t}\varphi dx\ dt+\int_{\Omega^{\ast}}u_{0}(x)\varphi(0,x)dx+\int_{0}^{T}\int_{\Omega^{\ast}}\mathcal{A}^{\ast}(u(t,x))\varphi dx\ dt=0.
\]
By the definition of $\left[u\right]_{\mathscr{D}}$, and as $\mathcal{A}$
is weakly continuous, we have that
\[
\int_{0}^{T}\int_{\Omega^{\ast}}u(t,x)\ \partial_{t}\varphi dx\ dt\sim\int_{0}^{T}\int_{\Omega}[u]_{\mathscr{D}}(t,x)\ \partial_{t}\varphi dx\ dt,
\]
\[
\int_{\Omega^{\ast}}u_{0}(x)\varphi(0,x)dx\sim\int_{\Omega}\left([u]_{\mathscr{D}}\right)_{0}(x)\varphi(0,x)dx,
\]
\[
\int_{0}^{T}\int_{\Omega^{\ast}}\mathcal{A^{\ast}}(u(t,x))\varphi dx\ dt\sim\int_{0}^{T}\int_{\Omega}\mathcal{A}([u]_{\mathscr{D}}(t,x))\varphi dx\ dt.
\]
Hence
\[
\int_{0}^{T}\int_{\Omega}[u]_{\mathscr{D}}(t,x)\ \partial_{t}\varphi dx\ dt+\int_{\Omega}\left([u]_{\mathscr{D}}\right)_{0}(x)\varphi(0,x)dx+\int_{0}^{T}\int_{\Omega}\mathcal{A}([u]_{\mathscr{D}}(t,x))\varphi dx\ dt\sim0.
\]
Since all three terms in the left hand side of the above equation
are real numbers, we have that their sum is a real number, and so
\[
\int_{0}^{T}\int_{\Omega}[u]_{\mathscr{D}}(t,x)\ \partial_{t}\varphi dx\ dt+\int_{\Omega}\left([u]_{\mathscr{D}}\right)_{0}(x)\varphi(0,x)dx+\int_{0}^{T}\int_{\Omega}\mathcal{A}([u]_{\mathscr{D}}(t,x))\varphi dx\ dt=0,
\]
namely $[u]_{\mathscr{D}}$ is a weak solution of Problem (\ref{cp}).
(2) If there exists $w\in\left[u\right]_{\mathscr{D}}\cap\mathcal{C}^{1}(I,V(\Omega))$
then $u\in\left[\mathcal{C}^{1}(I^{\ast},V_{\Lambda}(\Omega))\right]_{B}$,
so from $(\ref{enu:if-then-the 1})$ we get that $w$ is a weak solution
of Problem (\ref{cp}). Moreover, $w\in\mathcal{C}^{1}(I,V(\Omega))\subseteq\mathcal{C}^{0}(I,V(\Omega))\cap\mathcal{C}^{1}(I,V^{\prime}(\Omega))$,
and hence $w$ is a strong solution.
\end{proof}
Usually, if problem (\ref{cp}) has a strong solution $w$, it is
unique and it coincides with the GUS $u$ in the sense that $\left[w^{\ast}\right]_{\mathscr{D}}=\left[u\right]_{\mathscr{D}}$
and in many cases we have also that
\begin{equation}
\left\Vert u-w^{\ast}\right\Vert \sim0.\label{fagiolino}
\end{equation}
If problem (\ref{cp}) does not have a strong solution but only weak
solutions, often they are not unique. Thus the GUS selects one weak
solution among them.
Now suppose that $w\in L_{loc}^{1}$ is a weak solution such that
$\left[u\right]_{\mathscr{D}}=\left[w^{\ast}\right]_{\mathscr{D}}$
but (\ref{fagiolino}) does not hold. If we set
\[
\psi=u-w^{\ast}
\]
then $\left\Vert \psi\right\Vert $ is not an infinitesimal and $\psi$
carries some information which is not contained in $w.$ Since $u$
and $w$ define the same distribution, $\left[\psi\right]_{\mathscr{D}}=0,$
i.e.
\[
\forall\varphi\in\mathfrak{\mathscr{D}},\ \int\psi\varphi^{\ast}\ dx\sim0.
\]
So the information contained in $\psi$ cannot be contained in a distribution.
Nevertheless this information might be physically relevant. In Section
\ref{sub:The-microscopic-part}, we will see one example of this fact.
\subsection{First example:\ the nonlinear Schroedinger equation\label{se}}
Let us consider the following nonlinear Schroedinger equation in $\mathbb{R}^{N}:$
\begin{equation}
i\partial_{t}u=-\frac{1}{2}\Delta u+V(x)u-|u|^{p-2}u;\ p>2,\label{NSE}
\end{equation}
where, for simplicity, we suppose that $V(x)\in\mathcal{C}^{1}(\mathbb{R}^{N})$
is a smooth bounded potential. A suitable space for this problem is
\[
V(\mathbb{R}^{N})=H^{2}(\mathbb{R}^{N})\cap L^{p}(\mathbb{R}^{N})\cap\mathcal{C}(\mathbb{R}^{N}).
\]
In fact, if $u\in V(\mathbb{R}^{N}),$ then the energy
\begin{equation}
E(u)=\int\left[\frac{1}{2}\left\vert \nabla u\right\vert ^{2}+V(x)\left\vert u\right\vert ^{2}-\frac{2}{p}|u|^{p}\right]dx\label{senergia}
\end{equation}
is well defined; moreover, if $u\in V(\mathbb{R}^{N})$ we have that
\[
-\frac{1}{2}\Delta u+V(x)u-|u|^{p-2}u\in V^{\prime}(\mathbb{R}^{N}),
\]
so the problem is well-posed in the sense of ultrafunctions (see Def.
\ref{def:pippa}). It is well known (see e.g. \cite{Ca03}) that
if $p<2+\frac{4}{N}$ then the Cauchy problem (\ref{NSE}) (with initial
data in $V(\mathbb{R}^{N})$) is well posed, and there exists a strong
solution
\[
u\in C^{0}(I,V(\mathbb{R}^{N}))\cap C^{1}(I,V^{\prime}(\mathbb{R}^{N})).
\]
On the contrary, if $p\geq2+\frac{4}{N},$ the solution, for suitable
initial data, blows up in finite time. So in this case weak solutions
do not exist. Nevertheless, we have GUS:
\begin{thm}
The Cauchy problem relative to equation (\ref{NSE}) with initial
data $u_{0}\in V_{\Lambda}(\mathbb{R}^{N})$ has a unique GUS $u\in\mathcal{C}^{1}(I^{\ast},V_{\Lambda}(\mathbb{R}^{N}));$
moreover, the energy (\ref{senergia}) and the $L^{2}$-norm are preserved
along this solution. \end{thm}
\begin{proof}
Let us consider the functional
\[
I(u)=\int|u|^{2}dx.
\]
On every approximating space $V_{\lambda}(\Omega)$ we have that
\[
\frac{d}{dt}\int|u|^{2}dx=\int\frac{d}{dt}|u|^{2}dx=2\,\mathrm{Re}\int\bar{u}\,\frac{d}{dt}u\ dx=0,
\]
therefore $I^{\ast}$ (namely, the square of the $L^{2}$-norm) is constant on GUS.
A similar direct computation can be used to prove that also the energy
is constant on GUS. Moreover, it is easily seen that $\forall\lambda\in\mathfrak{L},\ I|_{V_{\lambda}(\Omega)}$
is coercive. Since also the other hypotheses of Theorem \ref{TT}
are verified, we can apply Corollary \ref{coco} to get the existence
and uniqueness of the GUS.
\end{proof}
Now it is interesting to know what these solutions look like, and
if they have any reasonable meaning from the physical or the mathematical
point of view. For example, when $p<2+\frac{4}{N}$ the dynamics given
by equation (\ref{NSE}), for suitable initial data, produces solitons
(see e.g. \cite{befolib} or \cite{milan}); so we conjecture that
in the case $p\geq2+\frac{4}{N}$ solitons with infinitesimal radius
will appear at the concentration points and that they will behave
as point particles which follow Newtonian dynamics.
\subsection{Second example: the nonlinear wave equation}
Let us consider the following Cauchy problem relative to a nonlinear
wave equation in a bounded open set $\Omega\subset\mathbb{R}^{N}:$
\begin{equation}
\left\{ \begin{array}{ccc}
\square\psi+|\psi|^{p-2}\psi & = & 0\,\mbox{\,\ in\,\,}I\times\Omega;\\
\\
\psi & = & 0\,\mbox{\,\ on}\,\,I\times\partial\Omega;\\
\\
\psi(0,x) & = & \psi_{0}(x);\\
\\
\partial_{t}\psi(0,x) & = & \psi_{1}(x),
\end{array}\right.\label{cordelia}
\end{equation}
where $\square=\partial_{t}^{2}-\Delta,\ p>2\,,I=[0,T)$. In order
to formulate this problem in the form (\ref{cp}), we reduce it to
a system of first order equations (Hamiltonian formulation):
\[
\left\{ \begin{array}{c}
\partial_{t}\psi=\phi;\\
\\
\partial_{t}\phi=\Delta\psi-|\psi|^{p-2}\psi.
\end{array}\right.
\]
If we set
\[
u=\left[\begin{array}{c}
\psi\\
\phi
\end{array}\right];\ \mathcal{A}(u)=\left[\begin{array}{c}
\phi\\
\Delta\psi-|\psi|^{p-2}\psi
\end{array}\right],
\]
then problem (\ref{cordelia}) reduces to a particular case of problem
(\ref{cp}) with initial datum $u_{0}=\left[\begin{array}{c}
\psi_{0}\\
\psi_{1}
\end{array}\right]$.
A suitable space for this problem is
\[
V(\Omega)=\left[\mathcal{C}^{2}(\Omega)\cap\mathcal{C}_{0}(\overline{\Omega})\right]\times\mathcal{C}(\Omega).
\]
If $u\in V(\Omega),$ the energy
\begin{equation}
E(u)=\int_{\Omega}\left[\frac{1}{2}\left\vert \phi\right\vert ^{2}+\frac{1}{2}\left\vert \nabla\psi\right\vert ^{2}+\frac{1}{p}|\psi|^{p}\right]dx\label{energia2}
\end{equation}
is well defined.
It is well known (see e.g. \cite{JLL}) that problem (\ref{cordelia})
has a weak solution; however, it is possible to prove the global uniqueness
of such a solution only if $p<\frac{N}{N-2}$ (any $p$ if $N=1,2$).
On the contrary, in the framework of ultrafunctions we have the following
result:
\begin{thm}
The Cauchy problem relative to equation (\ref{cordelia}) with initial
data $u_{0}\in V_{\Lambda}(\Omega)$ has a unique GUS $u\in\mathcal{C}^{1}(I^{\ast},V_{\Lambda}(\Omega));$
moreover, the energy (\ref{energia2}) is preserved along this solution. \end{thm}
\begin{proof}
We have only to apply Theorem \ref{TT} and Corollary \ref{coco},
where we set
\[
I(u):=E(u)=\int_{\Omega}\left[\frac{1}{2}\left\vert \phi\right\vert ^{2}+\frac{1}{2}\left\vert \nabla\psi\right\vert ^{2}+\frac{1}{p}|\psi|^{p}\right]dx.\qedhere
\]
\end{proof}
\section{The Burgers' equation\label{tbe}}
\subsection{Preliminary remarks}
In Section \ref{se} and in the previous subsection we have given two examples which show that:
\begin{itemize}
\item equations which do not have weak solutions usually have a unique GUS;
\item equations which have more than one weak solution have a unique GUS.
\end{itemize}
So ultrafunctions seem to be a good tool to study the phenomena modelled
by these equations. At this point we think that the main question
is to know what the GUS look like and whether they are suitable to represent
properly the phenomena described by such equations from the point
of view of Physics. Of course this question might not have a unique
answer: probably there are phenomena which are well represented by
GUS and others which are not. In any case, it is worthwhile to investigate
this issue in relation to the main equations of Mathematical Physics
such as (\ref{NSE}), (\ref{cordelia}), Euler equations, Navier-Stokes
equations and so on.
We have decided to start this program with the (nonviscous) Burgers'
equation
\[
\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=0,
\]
since it presents the following peculiarities:
\begin{itemize}
\item it is one of the (formally) simplest nonlinear PDE;
\item it does not have a unique weak solution, but there is a physical criterion
to determine the solution which has physical meaning (namely the entropy
solution);
\item many solutions can be written explicitly, and this helps to compare
classical and ultrafunction solutions.
\end{itemize}
We recall that another interesting approach to Burgers' equation
by means of generalized functions (in the Colombeau sense) has been
developed by Biagioni and Oberguggenberger in \cite{biagioni-MO}.
\subsection{Properties of the GUS of Burgers' equation}
The first property of Burgers' equation (\ref{BE}) that we prove
is that its smooth solutions with compact support have infinitely
many integrals of motion:
\begin{prop}
\label{tina}Let $G(u)$ be a differentiable function, $G\in\mathcal{C}^{1}(\mathbb{R}),$
$G(0)=0,$ and let $u(t,x)$ be a smooth solution of (\ref{BE}) with
compact support. Then
\[
I(u)=\int G(u(t,x))dx
\]
is a constant of motion of (\ref{BE}) (provided that the integral
converges). \end{prop}
\begin{proof}
The proof of this fact is known; we include it here only for the sake
of completeness. Multiplying both sides of equation (\ref{BE}) by
$G^{\prime}(u),$ we get the equation
\[
G^{\prime}(u)\partial_{t}u+G^{\prime}(u)u\partial_{x}u=0,
\]
which gives
\[
\partial_{t}G(u)+\partial_{x}H(u)=0,
\]
where
\begin{equation}
H(u)=\int_{0}^{u}sG^{\prime}(s)ds.\label{lilla}
\end{equation}
Since $u$ has compact support, we have that $-\int\partial_{x}H(u)dx=0,$
and hence
\[
\partial_{t}\int G(u)dx=-\int\partial_{x}H(u)dx=0.\qedhere
\]
\end{proof}
Let us notice that Proposition \ref{tina} would also hold if we did
not assume that $u$ has compact support, provided that it decays
sufficiently fast.
In the literature, any function $G$ as in the above proposition is called
an entropy and $H$ is called the entropy flux (see e.g. \cite{Bianchini,MinEnt}),
since in some interpretations of this equation $G$ corresponds (up
to a sign) to the physical entropy. But this is not the only possible
interpretation.
If we interpret (\ref{BE}) as a simplification of the Euler equation,
the unknown $u$ is the velocity; then, for $G(u)=u\ $ and $\ G(u)=\frac{1}{2}u^{2},$
we have the following constants of motion: the \textbf{momentum}
\[
P(u)=\int udx
\]
and the \textbf{energy}
\[
E(u)=\frac{1}{2}\int u^{2}dx.
\]
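For instance, by (\ref{lilla}) the corresponding entropy fluxes are
$H(u)=\frac{1}{2}u^{2}$ for $G(u)=u$ and $H(u)=\frac{1}{3}u^{3}$
for $G(u)=\frac{1}{2}u^{2}$.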
However, in general the solutions of Burgers' equation are not smooth;
in fact, if the initial data $u_{0}(x)$ is a smooth function with
compact support, the solution develops singularities. Hence we must
consider weak solutions which, in this case, are solutions of the
following equation in weak form: $w\in L_{loc}^{1}(I\times\Omega)$,
and $\forall\varphi\in\mathscr{D}(I\times\Omega)$
\begin{multline}
\int_{0}^{T}\int_{\Omega}w(t,x)\partial_{t}\varphi(t,x)\ dxdt-\int_{\Omega}u_{0}(x)\varphi(0,x)dx+\\
\frac{1}{2}\int_{0}^{T}\int_{\Omega}w(t,x)^{2}\partial_{x}\varphi(t,x)\ dxdt=0.\label{zanna}
\end{multline}
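To make the weak formulation (\ref{zanna}) concrete, the following sketch checks numerically that a step profile travelling with the Rankine-Hugoniot speed $s=(u_{l}+u_{r})/2$ satisfies it, whereas a step travelling at a different speed does not. The test function is chosen to vanish near $t=0$, so that the initial-datum term drops out; the particular bump function, the grids and the speeds are arbitrary choices made only for this illustration.
\begin{verbatim}
import numpy as np

# Weak-form check: a travelling step w(t,x) = 1 for x < speed*t, 0 otherwise,
# tested against one smooth test function vanishing near t = 0 and t = T.
T, X = 2.0, 4.0
t = np.linspace(0.0, T, 401)
x = np.linspace(-X, X, 1601)
dt, dx = t[1] - t[0], x[1] - x[0]
tt, xx = np.meshgrid(t, x, indexing="ij")

def bump(s):                       # smooth bump supported in (-1, 1)
    s = np.clip(s, -0.999, 0.999)
    return np.exp(-1.0 / (1.0 - s**2))

phi = bump((tt - 1.0) / 0.8) * bump(xx / 2.0)
phi_t = np.gradient(phi, dt, axis=0)
phi_x = np.gradient(phi, dx, axis=1)

def weak_residual(speed):
    w = np.where(xx < speed * tt, 1.0, 0.0)    # step with u_l = 1, u_r = 0
    return np.sum(w * phi_t + 0.5 * w**2 * phi_x) * dt * dx

print("Rankine-Hugoniot speed 1/2:", weak_residual(0.5))  # ~ 0 (quadrature error)
print("wrong speed 1            :", weak_residual(1.0))   # clearly nonzero
\end{verbatim}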
Nevertheless, the momentum and the energy of the GUS of Burgers' equation
are constants of motion as we will show in Theorem \ref{eliseo}.
This result holds if we work in $\mathcal{C}^{1}(I^{\ast},U_{\Lambda}(\mathbb{R}))$,
where $U_{\Lambda}\left(\mathbb{R}\right)$ is the space of ultrafunctions
described in Th. \ref{vecchiaroba}.
With this choice of the space of ultrafunctions, a GUS of the Burgers'
equation, by definition, is a solution of the following problem:
\begin{equation}
\left\{ \begin{array}{c}
u\in\mathcal{C}^{1}(I^{\ast},U_{\Lambda}(\mathbb{R}))\ \ \text{and}\ \ \forall v\in U_{\Lambda}(\mathbb{R})\\
\\
\int\left(\partial_{t}u\right)vdx=-\int\left(u\partial_{x}u\right)vdx;\\
\\
u\left(0,x\right)=u_{0}\left(x\right),
\end{array}\right.\label{BEG}
\end{equation}
where $u_{0}\in U_{\Lambda}(\mathbb{R})$ (mostly, we will consider
the case where $u_{0}\in\left(H_{c}^{1}(\mathbb{R})\right)^{\sigma}$).
Let us recall that, by Definition \ref{def:C kappa}, for every $u\in\mathcal{C}^{1}(I^{\ast},U_{\Lambda}(\mathbb{R}))$,
we have $\partial_{t}u(t,\cdot)\in U_{\Lambda}(\mathbb{R})$.
We have the following result:
\begin{thm}
\label{elisa}For every initial data $u_{0}\in U_{\Lambda}(\mathbb{R})$
the problem (\ref{BEG}) has a GUS. \end{thm}
\begin{proof}
It is sufficient to apply Theorem \ref{TT} to obtain the local existence
of a GUS $u$, and then Corollary \ref{coco} with
\[
I(w)=E(w)=\frac{1}{2}\int w^{2}dx
\]
to deduce that the local GUS is, actually, global. In fact if we take
$v(t,x)=u(t,x)$ in the weak equation that defines Problem (\ref{BEG}),
we get
\begin{eqnarray*}
\int\left(\partial_{t}u\right)u\,dx & = & -\int\left[u\partial_{x}u\right]u\,dx=-\int\left[u\partial_{x}u\right]u\cdot\widetilde{1}\,dx\\
 & = & -\int_{-\beta}^{\beta}\left[u\partial_{x}u\right]u\,dx=-\frac{1}{3}\int_{-\beta}^{\beta}\partial_{x}u^{3}\,dx=0,
\end{eqnarray*}
as $u(\beta)=u(-\beta)$. Then
\[
\partial_{t}E(u)=0
\]
and hence Corollary \ref{coco} can be applied.\end{proof}
\begin{thm}
\label{eliseo}Problem (\ref{BEG}) has two constants of motion: the
energy
\[
E=\frac{1}{2}\int u^{2}\ dx
\]
and the momentum
\[
P=\int u\ dx.
\]
\end{thm}
\begin{proof}
We already proved that the energy is constant in the proof of Th.
\ref{elisa}. In order to prove that $P$ is also constant, take $v=\widetilde{1}\in U_{\Lambda}(\mathbb{R})$
in equation (\ref{BEG}). Then we get
\[
\partial_{t}P=\partial_{t}\int udx=\int\partial_{t}u\widetilde{1}dx=-\int u\partial_{x}u\widetilde{1}dx=-\frac{1}{2}\int_{-\beta}^{+\beta}\partial_{x}u^{2}dx=0,
\]
as $u(-\beta)=u(\beta)$.
\end{proof}
Let us notice that Theorems \ref{elisa} and \ref{eliseo} hold even
if $u_{0}$ is a very singular object, e.g. a delta-like ultrafunction.
\begin{rem}
Prop. \ref{tina} shows that the strong solutions of (\ref{BE}) have
infinitely many constants of motion; does the same hold for the GUS?
Let us try to prove that
\[
\int G(u(t,x))dx
\]
is constant, following the same argument used in Theorems \ref{elisa} and
\ref{eliseo}. We set
\[
v(t,x)=P_{\Lambda}G^{\prime}(u)\in C(I^{\ast},U_{\Lambda})
\]
and we substitute it into eq. (\ref{BEG}), so that
\begin{eqnarray*}
\partial_{t}\int G(u(t,x))dx & = & \int\partial_{t}uG^{\prime}(u)dx\\
& = & \int\partial_{t}uP_{\Lambda}G^{\prime}(u)dx\ \ (\text{since\ }\partial_{t}u(t,\cdot)\in U_{\Lambda})\\
& = & -\int u\partial_{x}uP_{\Lambda}G^{\prime}(u)dx.
\end{eqnarray*}
\end{rem}
Now, if we assume that $G^{\prime}(u(t,.))\in U_{\Lambda}(\mathbb{R})$,
we have that $P_{\Lambda}G^{\prime}(u)=G^{\prime}(u)$ and hence
\begin{eqnarray*}
\partial_{t}\int G(u(t,x))dx & = & -\int u\partial_{x}uG^{\prime}(u)dx\\
& = & -\int\partial_{x}H(u)dx=0
\end{eqnarray*}
where $H(u)$ is defined by (\ref{lilla}). Thus $\int G(u(t,x))dx$
is a constant of motion provided that
\begin{equation}
G^{\prime}(u)\in C(I,U_{\Lambda}).\label{romina}
\end{equation}
However, this is only a sufficient condition. Clearly, in general
the analogue of condition (\ref{romina}) will depend on the choice
of the space of ultrafunctions $V_{\Lambda}(\mathbb{R})$: different
choices of this space will give different constants of motion. Our
choice $V_{\Lambda}(\mathbb{R})=U_{\Lambda}(\Omega)$ was motivated
by the fact that the GUS of equation (\ref{BEG}) in $U_{\Lambda}(\Omega)$
preserves both the energy and the momentum.
\subsection{GUS and weak solutions of BE}
In this section we consider equation (\ref{BEG}) with $u_{0}\in\left(H_{c}^{1}(\mathbb{R})\right)^{\sigma}$.
Our first result is the following:
\begin{thm}
\label{thm:approssimazione}Let $u$ be the GUS of problem (\ref{BEG})
with initial data $u_{0}\in\left(H_{c}^{1}(\mathbb{R})\right)^{\sigma}$.
Then $[u]_{\mathscr{D}(I\times\Omega)}$ is a weak solution of problem
\ref{BE}.\end{thm}
\begin{proof}
From Theorem \ref{elisa} we know that the problem admits a GUS $u$,
and from Theorem \ref{eliseo} we deduce that $[u]_{\mathscr{D}}$
is a bounded generalized distribution: in fact, for every $\varphi\in\mathscr{D}(I\times\mathbb{R})$
we have
\[
\left|\int\overline{u}\varphi dx\right|\leq\left(\int\overline{u}^{2}dx\right)^{\frac{1}{2}}\left(\int\varphi^{2}dx\right)^{\frac{1}{2}}<+\infty
\]
as $\int\overline{u}^{2}dx=\int u_{0}^{2}dx<+\infty$ by the conservation
of energy on GUS. Therefore, from Th.~\ref{mina} we deduce that
$w:=\left[\bar{u}\right]_{\mathscr{D}(I\times\Omega)}$ is a weak
solution of problem \ref{BE}.
\end{proof}
Thus the GUS of problem \ref{BEG} is unique and it is associated
with a weak solution of problem \ref{BE}. It is well known (see e.g.
\cite{Bianchini} and references therein) that weak solutions of (\ref{BE})
are not unique: hence, in a certain sense, the ultrafunctions give
a way to choose a particular weak solution among the (usually infinite)
weak solutions of problem \ref{BE}.
However, among the weak solutions there is one that is of special
interest, namely the \textbf{entropy solution}. The entropy solution
is the only weak solution of (\ref{BE}) satisfying particular conditions
(the entropy conditions) along the curves of discontinuity of the
solution (see e.g. \cite{evans}, Chapter 3). For our purposes, we
are interested in the equivalent characterization of the entropy solution
as the limit, for\footnote{In this approach, $\nu$ is usually called the viscosity.}
$\nu\rightarrow0$, of the solutions of the following parabolic equations:
\begin{equation}
\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{\partial^{2}u}{\partial x^{2}}\label{VBE}
\end{equation}
(see e.g. \cite{Hopf} for a detailed study of such equations). These
equations are called the viscous Burgers' equations and they have
smooth solutions in any reasonable function space. In particular,
in Lemma \ref{LEmmaEntropy}, we will prove that the problem \ref{VBE}
has a unique GUS in $U_{\Lambda}(\mathbb{R})$ for every initial data
$u_{0}\in U_{\Lambda}(\mathbb{R})$. Now, if $\bar{u}$ is the GUS
of problem \ref{VBE} with a classical initial condition $u_{0}\in L^{2}(\mathbb{R})$,
then $[\overline{u}]_{\mathscr{D}}$ is bounded: in fact, for every
$\varphi\in\mathscr{D}(I\times\mathbb{R})$ we have
\[
\left|\int\overline{u}\varphi dx\right|\leq\left(\int\overline{u}^{2}dx\right)^{\frac{1}{2}}\left(\int\varphi^{2}dx\right)^{\frac{1}{2}}<+\infty
\]
as $\int\overline{u}^{2}dx\leq\int u_{0}^{2}dx<+\infty$. Therefore,
from Th. \ref{mina} we deduce that $w:=\left[\bar{u}\right]_{\mathscr{D}(I\times\Omega)}$
is a weak solution.
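The vanishing-viscosity characterization can be illustrated numerically without any PDE solver. For Riemann data $u_{l}=1$, $u_{r}=0$ the viscous equation (\ref{VBE}) admits the standard travelling-wave solution $u_{\nu}(t,x)=\left(1+e^{(x-t/2)/(2\nu)}\right)^{-1}$ (a textbook formula, not part of the present construction), and its $L^{1}$ distance from the entropy shock travelling at the Rankine-Hugoniot speed $1/2$ shrinks linearly in $\nu$, as the following sketch shows; the grid and the viscosity values are arbitrary.
\begin{verbatim}
import numpy as np

# Vanishing viscosity: the exact viscous travelling wave for u_l=1, u_r=0
# converges in L^1 to the entropy shock moving at speed 1/2 as nu -> 0.
x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]
t = 1.0
entropy = np.where(x < 0.5 * t, 1.0, 0.0)

for nu in (0.5, 0.1, 0.02, 0.004):
    arg = np.clip((x - 0.5 * t) / (2.0 * nu), -60.0, 60.0)  # avoid overflow
    u_nu = 1.0 / (1.0 + np.exp(arg))
    l1 = np.abs(u_nu - entropy).sum() * dx
    print(f"nu = {nu:6.3f}   L1 distance from entropy solution = {l1:.4f}")
# The distance is approximately 4*nu*log(2), i.e. linear in nu.
\end{verbatim}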
We are now going to prove that it is possible to choose an infinitesimal $\nu$
in such a way that $w$ is the entropy solution. This fact is interesting
since it shows that this GUS properly represents, from a physical
point of view, the phenomenon described by Burgers' equation. In order
to see this, let us consider the problem (\ref{VBE}) with $\nu$ hyperreal.
\begin{lem}
\label{LEmmaEntropy}The problem
\begin{equation}
\left\{ \begin{array}{c}
u\in\mathcal{C}^{1}(I,U_{\Lambda}(\Omega))\ \text{and}\ \forall v\in U_{\Lambda}(\mathbb{R})\\
\\
\int\left(\partial_{t}u(t,x)+u\partial_{x}u(t,x)\right)v(x)dx=\int\nu\partial_{x}^{2}u(t,x)v(x)dx,\\
\\
u\left(0\right)=u_{0}
\end{array}\right.\label{dora}
\end{equation}
has a unique GUS for every $\nu\in\left(\mathbb{R}^{+}\right)^{\ast}$
and every $u_{0}\in U_{\Lambda}(\mathbb{R})$. \end{lem}
\begin{proof}
Let $(U_{\lambda}(\mathbb{R}))_{\lambda\in\mathfrak{L}}$ be an approximating
net of $U_{\Lambda}(\mathbb{R})$. Since $\nu\in\left(\mathbb{R}^{+}\right)^{\ast}$
and $u_{0}\in U_{\Lambda}(\mathbb{R})$, we have that for every $\lambda\in\mathfrak{L}$
there exist $\nu_{\lambda}\in\mathbb{R}^{+}$ and $u_{0,\lambda}\in U_{\lambda}(\mathbb{R})$
such that
\[
\nu=\lim_{\lambda\uparrow\Lambda}\nu_{\lambda}\ \text{and}\ u_{0}=\lim_{\lambda\uparrow\Lambda}u_{0,\lambda}.
\]
Thus, we can consider the approximate problems
\begin{equation}
\left\{ \begin{array}{c}
u\in\mathcal{C}^{1}(I,U_{\lambda}(\mathbb{R}))\ \text{and}\ \ \forall v\in U_{\lambda}(\mathbb{R})\\
\\
\int\left(\partial_{t}u(t,x)+u\partial_{x}u(t,x)\right)v(x)dx=\int\nu\partial_{x}^{2}u(t,x)v(x)dx,\\
\\
u\left(0\right)=u_{0,\lambda}.
\end{array}\right.\label{jessica}
\end{equation}
For every $\lambda$, the problem (\ref{jessica}) has a unique solution
$u_{\lambda}$. If we let $u_{\Lambda}=\lim_{\lambda\uparrow\Lambda}u_{\lambda}$
we have that $u_{\Lambda}$ is the unique ultrafunction solution of
problem (\ref{dora}).
\end{proof}
Let us call $u_{\nu}$ the GUS of Problem (\ref{dora}). A natural
conjecture would be that, if $u_{0}$ is standard, then for every
$\nu$ infinitesimal the distribution $\left[u_{\nu}\right]_{\mathscr{D}(I\times\Omega)}$
is the entropy solution of Burgers' equation. However, as we are going
to show in the following Theorem, in general this property is true
only ``when $\nu$ is a \emph{large} infinitesimal'':
\begin{thm}
\label{EntropyGUS}Let $u_{0}$ be standard, let $z$ be the entropy
solution of Problem \ref{BE} with initial condition $u_{0}$ and,
for every $\nu\in\mathbb{R^{\ast}}$, let $u_{\nu}$ be the solution
of Problem \ref{dora} with initial condition $u_{0}^{\ast}$. Then
there exists an infinitesimal number $\nu_{0}$ such that, for every
infinitesimal $\nu\geq\nu_{0}$, $\left[u_{\nu}\right]_{\mathscr{D}(I\times\Omega)}=z$;
namely, the GUS of Problem \ref{dora}, for every infinitesimal $\nu\geq\nu_{0}$,
corresponds (in the sense of Definition \ref{DEfCorrespondenceDistrUltra})
to the entropy solution of Problem \ref{BE}.\end{thm}
\begin{proof}
For every real number $\nu$ we have that the standard problem
\[
\left\{ \begin{array}{c}
w\in\mathcal{C}^{1}(I,H_{\flat}^{1}(\mathbb{R})),\\
\\
\partial_{t}w(t,x)+w\partial_{x}w(t,x)=\nu\partial_{x}^{2}w(t,x),\\
\\
w\left(0\right)=u_{0}
\end{array}\right.
\]
has a unique solution $w_{\nu}$. Therefore for every real number
$\nu$ we have $u_{\nu}=w_{\nu}^{\ast}$. By overspill we therefore
have that there exists an infinitesimal number $\nu_{0}$ such that,
for every infinitesimal $\nu\geq\nu_{0}$, $u_{\nu}=w_{\nu}$, where
$w_{\nu}$ is the solution of the problem
\[
\left\{ \begin{array}{c}
w\in\mathcal{C}^{1}(I,H_{\flat}^{1}(\mathbb{R}))^{\ast},\\
\\
\partial_{t}w(t,x)+w\partial_{x}w(t,x)=\nu\partial_{x}^{2}w(t,x),\\
\\
w\left(0\right)=u_{0}^{\ast}.
\end{array}\right.
\]
But since $z=\lim\limits _{\varepsilon\rightarrow0^{+}}w_{\varepsilon}$,
for every infinitesimal number $\nu$ and for every test
function $\varphi$ we have
\[
\left\langle z^{\ast}-w_{\nu},\varphi^{\ast}\right\rangle \sim0.
\]
In particular for every infinitesimal $\nu\geq\nu_{0}$,
\[
\left\langle z^{\ast}-u_{\nu},\varphi^{\ast}\right\rangle \sim0,
\]
and since this holds for every test function $\varphi$, the claim follows.
\end{proof}
Theorem \ref{EntropyGUS} shows that, for a standard initial value
$u_{0}$, there exists an ultrafunction which corresponds to the entropy
solution of Burgers' equation; moreover, this ultrafunction solves
a viscous Burgers' equation for an infinitesimal viscosity (namely,
it is the solution of an infinitesimal perturbation of Burgers' equation).
However, within ultrafunctions theory there is another ``natural''
solution of Burgers' equation for a standard initial value $u_{0}$,
namely the unique ultrafunction $u$ that solves Problem \ref{BEG}.
We already proved in Theorem \ref{thm:approssimazione} that $u$
corresponds (in the sense of Definition \ref{DEfCorrespondenceDistrUltra})
to a weak solution of Burgers' equation. Our conjecture is that this
weak solution is precisely the entropy solution; however, we have
not been able to prove this (yet!). Nevertheless, it makes
sense to analyse this solution: this will be done in the next section.
\subsection{The microscopic part\label{sub:The-microscopic-part}}
Let $u\in C^{1}(I^{\ast},U_{\Lambda})$ be the GUS of (\ref{BEG})
and let $w=[u]_{\mathscr{D}}$. With some abuse of notation we will
identify the distribution $w$ with an $L^{2}$ function. We want to
compare $u$ and $w^{*}$ and to give a physical interpretation of
their difference.
Since we have that
\[
\left[u\right]_{\mathscr{D}}=\left[w^{*}\right]_{\mathscr{D}}
\]
we can write
\[
u=w^{\ast}+\psi;
\]
we have that
\[
\forall\varphi\in\mathscr{D}\left(I\times\Omega\right),\int\int^{\ast}u\varphi^{\ast}\ dx\ dt\sim\int\int w\varphi\ dx\ dt
\]
and
\begin{equation}
\int\int^{\ast}\psi\varphi^{\ast}\ dx\ dt\sim0.\label{micra}
\end{equation}
We will call $w$ (and $w^{\ast}$) the macroscopic part of $u$ and
$\psi$ the microscopic part of $u$; in fact, we can interpret (\ref{micra})
by saying that $\psi$ is invisible to a macroscopic analysis.
On the other hand, $\int^{\ast}\psi\varphi\ dx\ dt\not\sim0$ for
some $\varphi\in C^{1}(I^{\ast},U_{\Lambda})\backslash\mathscr{D}\left(I\times\Omega\right)$:
such a $\varphi$ ``is able'' to detect the \emph{infinitesimal
oscillations} of $\psi$. This justifies the expressions ``macroscopic
part'' and ``microscopic part''.
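The following finite-dimensional toy (a classical caricature, not the ultrafunction construction itself) illustrates the two properties behind this terminology: a rapidly oscillating, zero-mean perturbation is essentially invisible to smooth test functions, as in (\ref{micra}), while its energy adds to that of the smooth part. The profiles, the oscillation frequency and the test function are arbitrary choices.
\begin{verbatim}
import numpy as np

# Toy decomposition u = w + psi: smooth "macroscopic" part w plus a fast,
# zero-mean "microscopic" oscillation psi.  Smooth test functions barely see
# psi, while the energy of u splits into the two contributions.
x = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
dx = x[1] - x[0]
w = np.exp(-x**2)                       # macroscopic profile (arbitrary)
psi = 0.3 * np.cos(400 * x)             # fast oscillation standing in for psi
u = w + psi
phi = np.cos(x) * np.exp(-x**2 / 2.0)   # smooth test function (arbitrary)

dot = lambda f, g: np.sum(f * g) * dx
print("<psi, phi>    =", dot(psi, phi))   # ~ 0: invisible to smooth tests
print("<psi, w>      =", dot(psi, w))     # ~ 0: almost orthogonal
print("E(u)          =", 0.5 * dot(u, u))
print("E(w) + E(psi) =", 0.5 * dot(w, w) + 0.5 * dot(psi, psi))
\end{verbatim}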
So, in the case of Burgers' equation, the ultrafunctions do not produce
a solution to a problem without solutions (as in the example of section
\ref{se}); rather, they give a different description of the phenomenon,
namely they also provide the information contained in the microscopic
part $\psi.$
So let us analyze it:
\begin{prop}
The microscopic part $\psi$ of the GUS solution of problem (\ref{BEG})
satisfies the following properties:
\begin{enumerate}
\item \label{enu:the-momentum-of}the momentum of $\psi$ vanishes:
\[
\int\psi\ dx=0;
\]
\item \label{enu:-and-}$w^{\ast}$ and $\psi$ are almost orthogonal:
\[
\int\int\psi w^{\ast}\ dxdt\sim0;
\]
\item \label{enu:the-energy-of}the energy of $u$ is the sum of the kinetic
macroscopic energy, $\int\left\vert w(t,x)\right\vert ^{2}\ dx$,
the kinetic microscopic energy (heat)\textup{ $\int\left\vert \psi(t,x)\right\vert ^{2}\ dx$
}\textup{\emph{and an infinitesimal quantity;}}
\item \label{enu:if--is}if $w$ is the entropy solution then the ``heat''
$\int\left\vert \psi(t,x)\right\vert ^{2}\ dx$ increases.
\end{enumerate}
\end{prop}
\begin{proof}
\ref{enu:the-momentum-of}) $\int\psi\,dx=\int u\,dx-\int w^{\ast}\,dx$,
and the conclusion follows as both $u$ and $w$ preserve the momentum.
\ref{enu:-and-}) First of all we observe that the $L^{2}$ norm of
$\psi$ is finite, as $\psi=u-w^{\ast}$ and the $L^{2}$ norms of
$u$ and $w$ are finite. Now let $\{\varphi_{n}\}_{n\in\mathbb{N}}$
be a sequence in $\mathscr{D}(I\times\Omega)$ that converges strongly
to $w$ in $L^{2}$. Let $\{\varphi_{\nu}\}_{\nu\in\mathbb{N}^{\ast}}$
be the extension of this sequence. As $\varphi_{n}\rightarrow w$ strongly
in $L^{2}$, we have that for any infinite number $N\in\mathbb{N}^{\ast}$,
$\left\Vert \varphi_{N}-w^{\ast}\right\Vert _{L^{2}}\sim0$. For every
finite number $n\in\mathbb{N^{\ast}}$ we have that
\[
\int\psi\varphi_{n}dxdt=0,
\]
as $\varphi_{n}\in\mathscr{D}(I\times\Omega$). By overspill, there
exists an infinite number $N$ such that $\int\psi\varphi_{N}dxdt=0.$
If we set $\eta=w^{\ast}-\varphi_{N}$, we have $\left\Vert \eta\right\Vert _{L^{2}}\sim0$.
Then
\begin{eqnarray*}
\left|\int\psi w^{\ast}dxdt\right| & = & \left|\int\psi(\varphi_{N}+\eta)dxdt\right|\\
& = & \left|\int\psi\varphi_{N}dxdt+\int\psi\eta dxdt\right|\sim0,
\end{eqnarray*}
as $\int\psi\varphi_{N}dxdt=0$ and $\left|\int\psi\eta dxdt\right|\leq\int\left|\psi\right|\left|\eta\right|dxdt\leq\left\Vert \psi\right\Vert _{L^{2}}\left\Vert \eta\right\Vert _{L^{2}}\sim0$.
\ref{enu:the-energy-of}) This follows easily from (\ref{enu:-and-}).
\ref{enu:if--is}) The energy of $u=w^{\ast}+\psi$ is constant, while
the energy of $w^{\ast}$, if $w$ is the entropy solution, decreases.
The claim then follows from (\ref{enu:the-energy-of}).
\end{proof}
Now let $\Omega\subset I\times\mathbb{R}$ be the region where $w$
is regular (say $H^{1}$) and let $\Sigma=\left(I\times\mathbb{R}\right)\backslash\Omega$
be the singular region. We have the following result:
\begin{thm}
$\psi$ satisfies the following equation in the sense of ultrafunctions:
\[
\partial_{t}\psi+\partial_{x}\left(\mathbf{V}\psi\right)=F,
\]
where
\begin{equation}
\mathbf{V}=\mathbf{V}(w,\psi)=w(t,x)+\frac{1}{2}\psi(t,x)\label{eq:bella}
\end{equation}
and
\[
\mathfrak{supp}\left(F(t,x)\right)\subset N_{\varepsilon}(\Sigma),
\]
where $N_{\varepsilon}(\Sigma)$ is an infinitesimal neighborhood
of $\Sigma^{\ast}.$\end{thm}
\begin{proof}
In $\Omega$ we have that
\[
\partial_{t}w+w\partial_{x}w=0.
\]
Since $u=w^{\ast}+\psi$ satisfies the following equation (in the sense of
ultrafunctions),
\[
D_{t}u+P\left(u\,\partial_{x}u\right)=0,
\]
we have that $\psi$ satisfies the equation,
\[
D_{t}\psi+P\left[\partial_{x}\left(w\psi+\frac{1}{2}\psi^{2}\right)\right]=0
\]
in $\Omega^{\ast}\backslash N_{\varepsilon}(\Sigma)$ where $N_{\varepsilon}(\Sigma)$
is an infinitesimal neighborhood of $\Sigma^{\ast}.$
\end{proof}
As we have seen, $\psi^{2}$ can be interpreted as the density of heat.
Then $\mathbf{V}$ can be interpreted as the flow of $\psi$; it consists
of two parts: $w$, which is the macroscopic component of the flow,
and $\frac{1}{2}\psi(t,x)$, which is the transport due to the Brownian
motion.
\end{document} |
\begin{document}
\title{Probing dynamical symmetry breaking using quantum-entangled photons}
\author{Hao Li}
\affiliation{Department of Chemistry, University of Houston, Houston, TX 77204}
\author{Andrei Piryatinski}
\affiliation{Theoretical Division, Los Alamos National Lab, Los Alamos, NM 87545}
\author{Jonathan Jerke}
\affiliation{Department of Chemistry and Biochemistry and Department of Physics, Texas Tech University, Lubbock, Texas 79409-1061}
\altaffiliation[Also at ]{Department of Physics, Texas Southern University, Houston, Texas 77004}
\author{Ajay~Ram~Srimath~Kandada}
\affiliation{School of Chemistry \& Biochemistry and School of Physics, Georgia Institute of Technology, 901 Atlantic Drive, Atlanta, Georgia 30332}
\affiliation{Center for Nano Science and Technology @Polimi, Istituto Italiano di Tecnologia, via Giovanni Pascoli 70/3, 20133 Milano, Italy}
\author{Carlos Silva}
\affiliation{School of Chemistry \& Biochemistry and School of Physics, Georgia Institute of Technology, 901 Atlantic Drive, Atlanta, Georgia 30332}
\author{Eric Bittner}
\affiliation{Department of Chemistry \& Department of Physics, University of Houston, Houston, TX 77204}
\email{[email protected]}
\begin{abstract}
We present an input/output analysis of photon-correlation experiments whereby a quantum
mechanically entangled bi-photon state interacts with
a material sample placed in one arm of a Hong-Ou-Mandel (HOM)
apparatus. We show that the output signal contains
detailed information about subsequent entanglement with
the microscopic quantum states in the sample.
In particular, we apply the method to an ensemble
of emitters interacting with a common photon mode within
the open-system Dicke Model. Our results indicate
considerable dynamical information concerning spontaneous
symmetry breaking can be revealed with such an experimental
system.
\end{abstract}
\date{\today}
\maketitle
\section{Introduction}
The interaction between light and matter lies at the heart of all photophysics and
spectroscopy. Typically, one treats
the interaction within a semi-classical approximation,
treating light as an oscillating classical electro-magnetic wave as given by Maxwell's equations.
It is well recognized that light has a quantum mechanical discreteness (photons)
and one can prepare entangled interacting photon states. The pioneering work
of Hanbury Brown and Twiss in the 1950s, measuring intensity correlations
in light originating from thermal sources, set the stage for what has become quantum optics.
\cite{BRANNEN1956,
HANBURYBROWN1956,
PURCELL1956,Brown1957,Brown1958,Fano1961,Paul1982}
Quantum photons play a central role in a number of advanced technologies including
quantum cryptography\cite{RevModPhys.74.145},
quantum communications\cite{Gisin:2007aa},
and quantum computation\cite{Knill2001,PhysRevA.79.033832}.
Only recently has it been proposed that entangled photons
can be exploited as a useful spectroscopic probe of atomic and molecular processes.
\cite{PhysRevA.79.033832,Lemos2014,Carreno2016,Carreno2016a,Kalashnikov2016a,Kalashnikov2016b}
The spectral and temporal nature of entangled photons offer a unique means for interrogating
the dynamics and interactions between molecular states.
The crucial consideration is that when entangled photons are created, typically by spontaneous parametric down-conversion,
there is a precise relation between the frequency and wavevectors of the entangled pair.
For example if we create two entangled photons from a common
laser source, energy conservation dictates that $\omega_{laser} = \omega_{1} + \omega_{2}$. Hence
measuring the frequency of either photon will collapse the quantum entanglement and the
frequency of the other photon will be precisely defined. Moreover, in the case
of multi-photon absorption, entangled 2-photon absorption is greatly enhanced relative to
classical 2-photon absorption since the cross-section scales linearly rather than quadratically with intensity.
Recent work by Schlawin {\em et al.} indicates that entangled photon pairs may be useful in controlling and
manipulating population on the 2-exciton manifold of a model biological energy transport
system. \cite{Schlawin2013}
The non-classical features of entangled photons have also been used as a highly sensitive
detector of ultra-fast emission from organic materials.
\cite{doi:10.1021/acs.jpclett.6b02378}
Beyond the potential practical applications of quantum light in
high-fidelity communication and quantum encryption,
by probing systems undergoing
spontaneous symmetry breaking with quantum photons one can
draw analogies between bench-top laboratory based experiments and
experimentally inaccessible systems such as black holes, the early
Universe, and cosmological strings.\cite{Zurek1985,Nation:2010}
\begin{figure}
\caption{{\bf Sketch of Hong-Ou-Mandel apparatus (HOM) for 2-photon coincidence detection.}}
\label{wwa}
\end{figure}
In this paper, we provide a precise connection between a material sample, described in terms of a model Hamiltonian, and the resulting signal
in the context of the interference experiment described by Kalashnikov {\em et al.}
in Ref.\cite{Kalashnikov2016b}, using the Dicke model for an ensemble
of two-level atoms as input.\cite{PhysRev.93.99} The Dicke model
is an important test-case since it undergoes a dynamical phase transition.
Such behaviour was recently observed by Klinder {\em et al.} by coupling
an atomic Bose-Einstein condensate to an optical cavity.\cite{kilnder:2015}
We begin with a brief overview of the
photon coincidence experiment and the preparation of two-photon
entangled states, termed ``Bell-states''. We then use the input/output approach of Gardiner and Collett
\cite{Gardiner1985} to develop a
means for computing the transmission function for a
material system placed in one of the arms of the Hong-Ou-Mandel (HOM) apparatus sketched in Fig.~\ref{wwa}.
\section{Quantum interference of entangled photons}
We consider the interferometric scheme implemented by Kalashnikov {\em et al.}~\cite{Kalashnikov2016b}.
A CW laser beam is incident on a nonlinear crystal, creating an entangled photon pair
state by spontaneous parametric down-conversion (SPDC), which we shall denote as a Bell state
\begin{eqnarray}
|\bell_{1} \rangle &=& \iint d\omega_{1} d\omega_{2} {\cal F}(\omega_{1},\omega_{2})
B^{\dagger}_{S}(\omega_{1})
B^{\dagger}_{I}(\omega_{2})
|0\rangle,
\end{eqnarray}
where ${\cal F}(\omega_{1},\omega_{2})$ is the bi-photon field amplitude and
$B^{\dagger}_{S,I}(\omega_{i})$ creates a photon with frequency $\omega_{i}$ in either the signal or idler branch.
The ket $|0\rangle$ is the vacuum state and $|\omega_{1}\omega_{2}\rangle$
denotes a two photon state.
In general, energy conservation requires that the entangled photons
generated by spontaneous parametric down conversion (SPDC) obey
$\omega_{L} = \omega_{1} + \omega_{2}$. Similarly, conservation of photon
momentum requires ${\bf k}_{L} = {\bf k}_{1} + {\bf k}_{2}$. By manipulating
the SPDC crystal, one can generate entangled photon pairs with
different frequencies.
As a result, the bi-photon field is strongly anti-correlated in frequency with
\begin{eqnarray}
|\bell\rangle = \int dz {\cal F}(z)B_{1}^{\dagger}(\omega_{L}-z)B_{2}^{\dagger}(\omega_{L}+z)|0\rangle,
\end{eqnarray}
where $\omega_{L}$ is the central frequency of the bi-photon field.
This aspect was recently exploited in Ref. \cite{Kalashnikov2016a},
which used a visible photon in the idler
branch and an infrared (IR)
photon in the signal branch, interacting with the sample.
As sketched in Figure~\ref{wwa}, both signal and idler are
reflected back towards a beam-splitter
(BS) by mirrors M1 and M2.
M1 introduces an optical delay with transmission function $\Phi(\omega)$
which we will take to have unit modulus. In the other arm,
we introduce a resonant medium at S with transmission function ${\cal S}(\omega)$.
Not shown in our sketch is an optional pumping laser for creating
a steady state exciton density in S.
Upon interacting with both the delay element and the medium, the Bell-state
can be rewritten as
\begin{eqnarray}
|\bell_{2} \rangle = \iint d\omega_{1} d\omega_{2} {\cal F}(\omega_{1},\omega_{2})
B^{\dagger}_{I}(\omega_{1})
B^{\dagger}_{S}(\omega_{2})
\Phi(\omega_{1}){\cal S}(\omega_{2})
|0\rangle.
\nonumber
\\
\end{eqnarray}
Finally, the two beams are recombined by a beam splitter (BS) and the
coincidence rate is given by
\begin{eqnarray}
P_{c} &=& \iint d\omega_{1} d\omega_{2} |\langle\omega_{1}\omega_{2} | \psi_{c} \rangle | ^{2} \nonumber
\\
&=&
\frac{1}{4}\iint d\omega_{1} d\omega_{2}
\left\{
|{\cal F}(\omega_{1},\omega_{2}) {\cal S}(\omega_{2})|^{2} +
|{\cal F}(\omega_{2},\omega_{1}) {\cal S}(\omega_{1})|^{2}
\right.
\nonumber \\
&-&
\left.
2 {\rm Re}\left[{\cal F}^{*}(\omega_{1},\omega_{2}){\cal F}(\omega_{2},\omega_{1})
{\cal S}^{*}(\omega_{2}){\cal S}(\omega_{1})
\Phi^{*}(\omega_{1})\Phi(\omega_{2})
\right]
\right\}.
\label{eq:pc}
\end{eqnarray}
We assume that the delay stage is dispersionless with $\Phi(\omega) = e^{i\omega t_{\rm delay}}$
and re-write equation~\ref{eq:pc} as
\begin{eqnarray}
P_{c}(t_{\rm {delay}})
&=&
\frac{1}{4}\int_{-\infty}^{+\infty} dz
|{\cal F}(z)|^{2}
\left\{
|{\cal S}(\omega_{L}-z)|^{2} +|{\cal S}(\omega_{L}+z)|^{2}
\right.
\nonumber
\\
&-& \left. 2 {\rm Re}\left[
{\cal S}^{*}(\omega_{L}-z)
{\cal S}(\omega_{L}+z)
e^{-2iz t_{\rm delay}}
\right]
\right\},
\label{eq:pc2}
\end{eqnarray}
where $t_{\rm delay}$ is the time lag between entangled
photons traversing the upper and lower arms of the HOM apparatus.
This is proportional to the counting rate of coincident photons
observed at detectors C1 and C2 and serves as the
central experimental observable.
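Before specializing to the Dicke model it is instructive to see how equation~\ref{eq:pc2} behaves for a toy transmission function. The sketch below evaluates $P_{c}(t_{\rm delay})$ for a flat bi-photon amplitude and a single Lorentzian resonance, ${\cal S}(\omega)=1+A/(\omega-\omega_{0}+i\gamma)$; the parameters are arbitrary illustrative choices and are not those of the Dicke-model calculation presented below.
\begin{verbatim}
import numpy as np

# Toy evaluation of Eq. (pc2): flat |F(z)|^2 over a finite bandwidth and a
# single Lorentzian resonance in the sample arm.  Parameters are arbitrary.
w_L, w0, gamma, A = 1.5, 1.5, 0.05, 0.1
z = np.linspace(-1.0, 1.0, 4001)              # detuning grid (bandwidth)
dz = z[1] - z[0]
F2 = np.ones_like(z)                          # flat bi-photon amplitude

def S(w):                                     # toy transmission function
    return 1.0 + A / (w - w0 + 1j * gamma)

Sm, Sp = S(w_L - z), S(w_L + z)

def Pc(t_delay):
    cross = np.real(np.conj(Sm) * Sp * np.exp(-2j * z * t_delay))
    return 0.25 * np.sum(F2 * (np.abs(Sm)**2 + np.abs(Sp)**2 - 2.0 * cross)) * dz

for t in (0.0, 5.0, 20.0, 100.0):
    print(f"t_delay = {t:6.1f}   Pc = {Pc(t):.6f}")
# Near zero delay the interference term suppresses the coincidences (the
# Hong-Ou-Mandel dip); the recovery with delay reflects the resonance width.
\end{verbatim}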
\begin{figure*}
\caption{{\bf Photon Coincidence Rates vs. Coupling.}}
\label{pc-scan-lambda}
\end{figure*}
\section{Results}
A crucial component of our approach is the action of the sample at S, which introduces a
transmission function ${\cal S}(\omega)$ into the final Bell state. We wish to connect this
function to the dynamics and molecular interactions within the sample. To accomplish this, we
use the input/output formulation of quantum optics and apply it to
an ensemble of identical 2-level states coupled to a common photon mode.\cite{PhysRevA.75.013804}
Technical details of our approach are presented in the Methods section of this paper.
In short, the material system is described by $N$ two-level spin states coupled to
a common set of cavity photon modes.
\begin{eqnarray}
\hat H_{sys} &=&\sum_{j} \frac{\hbar\omega_{o}}{2}\hat\sigma_{z,j}
+ \sum_{k}\hbar(\omega_{k}-i\kappa)\hat\psi_{k}^{\dagger}\hat\psi_{k} \nonumber \\
&+&\sum_{k,j}\frac{\hbar\lambda_{kj}}{\sqrt{N}}(\hat\psi_{k}^{\dagger} + \hat\psi_{k})(\hat\sigma^{+}_{j} + \hat\sigma_j^{-})\label{dickeH-1}
\end{eqnarray}
where $\{\hat\sigma_{z,j},\hat\sigma^{\pm}_{j}\}$ are local spin-1/2 operators for site $j$, $\hbar\omega_{o}$ is the local
excitation energy, and $\lambda_{kj}$ is the coupling between the $k$th photon mode and the $j$th site, which we will take to be
uniform over all sites. We introduce $\kappa$ as the decay rate of a cavity photon.
We allow the photons in the cavity (S) to exchange quanta with photons in the HOM apparatus
and derive the Heisenberg equations of motion corresponding to input and output photon fields within a
steady state assumption. This allows us to compute the scattering matrix connecting an
incoming photon with frequency $\nu$ from the field to an outgoing photon with frequency $\nu$ returned to the
field viz.
\begin{eqnarray}
\Psi_{out}(\nu) = -\hat\Omega^{(-)\dagger}_{out}\hat\Omega^{(+)}_{in}\Psi_{in}(\nu)
\end{eqnarray}
where $\hat\Omega^{(\pm)}_{in,out}$ are M{\o}ller operators that propagate an incoming (or outgoing) state from $t\to-\infty$ to $t=0$,
where it interacts with the sample,
or from $t=0$ to an outgoing (or incoming) state at $t\to+\infty$. These give the $S$-matrix in the form of a response function
\begin{eqnarray}
{\cal S}(\nu)\,\delta(\nu-\nu') = \langle \delta \Psi^{\dagger}_{out}(\nu)\delta \Psi_{out}(\nu')\rangle
\label{response-main}
\end{eqnarray}
where the $\delta \Psi_{out}(\nu)$ are fluctuations in the output photon field about a steady-state solution.
The derivation of ${\cal S}(\nu)$ for the Dicke model and its incorporation into equation~\ref{eq:pc2}
is a central result of this work and is presented in the Methods section of this paper.
In general, ${\cal S}(\nu)$ is a complex function with a series of poles displaced above the
real $\nu$ axis, and we employ a sinc-transformation method to integrate equation~\ref{eq:pc2}.
The approach can be applied to any model Hamiltonian system and provides the necessary connection between a microscopic model and its predicted photon coincidence.
Before discussing the results of our calculations, it is important to recapitulate
a number of aspects of the Dicke model and how these features are manifest in the
photon coincidence counting rates. As stated already,
we assume that the sample is in a steady state by
exchanging the photons in the HOM apparatus
with photons within the sample cavity and that ${\cal S}(\nu)$ can be
described within a linear-response theory.
Because the cavity photons become entangled with
the material excitations, the excitation frequencies are split into upper photonic ($\omega_{+}$)
and lower excitonic ($\omega_{-}$) branches.
These frequencies are complex corresponding to
the exchange rate between cavity and HOM photons.
At very low values of $\lambda_{k}$, the real frequencies are equal
and $\omega_{+} = \omega_{-}$. In this over-damped regime,
photons leak from the cavity before the photon/exciton state has undergone a single Rabi oscillation.
At $\lambda_{k} = \kappa/2$ the system becomes critically damped, and for $\lambda_{k}>\kappa/2$
the degeneracy between the upper and lower polariton branches is lifted.
As $\lambda_{k}$ increases above a critical value given by
\begin{eqnarray}
\lambda_{c} = \sqrt{\frac{\omega_{k}\omega_o}{4}\left(1 + \frac{\kappa^{2}}{\omega_{k}^{2}}\right)},
\label{lambda-crit}
\end{eqnarray}
the system undergoes a quantum phase transition when $\omega_{-} = 0$.
Above this regime, excitations from the
non-equilibrium steady-state become collective and super-radiant.
For our numerical results, unless otherwise noted we use dimensionless quantities,
taking $\omega_{o} = \omega_{k} = 1.5$ for both the exciton and cavity-mode frequencies
and $\kappa = 0.05$ for the cavity decay. These give
a critical value of $\lambda_{c} = 0.7516$.
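Equation~\ref{lambda-crit} is straightforward to evaluate directly; the short check below does so for the resonant parameters above and for the two cavity-decay values that appear in this paper ($\kappa=0.05$ here and $\kappa=0.1$ in Fig.~\ref{df1}).
\begin{verbatim}
import numpy as np

# Direct evaluation of the critical coupling, Eq. (lambda-crit).
omega_o = omega_k = 1.5
for kappa in (0.05, 0.1):
    lam_c = np.sqrt(0.25 * omega_k * omega_o * (1.0 + kappa**2 / omega_k**2))
    print(f"kappa = {kappa:4.2f}  ->  lambda_c = {lam_c:.4f}")
\end{verbatim}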
We first consider the photon coincidence in the normal regime.
Figures~\ref{pc-scan-lambda}(a,b) show the variation of the photon coincidence count when the
laser frequency is resonant with the excitons ($\omega_{k} = \omega_{o}$).
For low values of $\lambda_{k}$, the system is in the over-damped regime and
the resulting coincidence scan reveals a slow decay for positive values of the
time-delay. This is the perturbative regime in which the scattering photon is
dephased by the interaction with the sample, but there is insufficient time
for the photon to become entangled with the sample.
For $\lambda_{k} > \kappa/2$, the scattering photon is increasingly
entangled with the material and further oscillatory structure begins to emerge
in the coincidence scan.
In the strong coupling regime, $P_{c}(t)$ becomes increasingly oscillatory with contributions from
multiple frequency components.
The origin of the structure is further revealed upon taking the Fourier cosine transform of
$P_{c}(t)$ (equation~\ref{eq:pc2}), with the
bandwidth of the bi-photon amplitude taken to be broad enough to span the full spectral range.
The first two terms in the integral of equation~\ref{eq:pc2} are independent of time; they simply give a
background count and can be ignored for the purposes of this analysis.
The third term depends upon the time delay and is the Fourier-cosine transform of
the bi-photon amplitude times the scattering amplitudes,
\begin{eqnarray}
{\cal P}_{c}(\omega) = |{\cal F}(\omega)|^{2}{\cal S}^{*}(\omega_{L}-\omega){\cal S}(\omega_{L}+\omega). \label{scatamp}
\end{eqnarray}
As we show in the Methods, ${\cal S}(\omega)$ has a series of poles on the complex
plane that correspond to the
frequency spectrum of fluctuations
about the matter-radiation steady-state as given by the eigenvalues of ${\cal M}_{s}$ in
equation~\ref{modelM} in the Methods.
In Figure~\ref{poles}(a,b) we show the evolution of the pole structure of
${\cal S}^{*}(\omega_{L}-z){\cal S}(\omega_{L}+z)$,
superimposed over the Fourier-cosine transform of the coincidence counts (${\cal P}_{c}(\omega)$),
with increasing coupling $\lambda_{k}$, revealing
that both the excitonic and the photonic branches contribute to the overall photon coincidence
counting rates.
A close examination of the pole structure in the vicinity of the
phase transition reveals that two of the
$\omega_k^{(-)}$ modes become degenerate
over a small range of $\lambda_{k}$ but with
different imaginary components
indicating that the two modes decay at different rates.
This is manifest in Figure~\ref{poles}b as a rapid variation
and divergence of ${\cal S}^{*}(\omega_{L}-z){\cal S}(\omega_{L}+z)$ about $\lambda_c$.\cite{Kopylov2013} While the
parametric width of this regime is small, it depends entirely
upon the rate of photon exchange between the cavity and the
laser field ($\kappa$).
\begin{figure*}
\caption{{\bf Pole Structure of Response Function.}}
\label{poles}
\end{figure*}
\section{Discussion}
We present here a formalism and method for connecting
the photon coincidence signals for a sample placed
in a HOM apparatus to the optical response
of the coupled photon/material system.
Our formalism shows that taking the
Fourier transform of the $P_{c}(t)$ coincidence
signal reveals the underlying pole
structure of the entangled material/photon system.
Our idea hinges upon the assumption that the
interaction with the material preserves the
initial entanglement between the two photons, and that
the sample introduces an additional phase
lag to one of the photons, which we formally describe in the
form of a scattering response function ${\cal S}$.
The pole-structure in the output comes about from the
further quantum entanglement of the signal photon with the
sample.
Encoded in the time-delay signals is important
information concerning the inner-workings of a
quantum phase transition.
Hence, we conclude that entangled photons with
interferometric detection techniques provide
a viable and tractable means to extract precise
information concerning light-matter interactions.
In particular, the approach reveals that at the onset
of the symmetry-breaking transition between normal and super-radiant
phases, two of the eigenmodes of the light-matter state
exhibit distinctly different lifetimes. This signature of
an intrinsic aspect of light-matter entanglement may be
observed in a relatively simple experimental geometry with
what amounts to a {\em linear} light-scattering/interferometry
set up.
At first glance, it would appear that using quantum photons
would not offer a clear advantage over more standard
spectroscopies based upon a semi-classical description of
the radiation field. However, the entanglement variable
adds an additional dimension to the experiment, allowing one to
perform what would ordinarily be a non-linear experiment using classical light
as a linear experiment using quantized light.
The recent works by Kalashnikov {\em et al.} that inspired the present study
are perhaps the proverbial tip of the iceberg.
\cite{Kalashnikov2016a,Kalashnikov2016b}
\section*{Acknowledgments}
The work at the University of Houston was funded in
part by the National Science Foundation (CHE-1664971, MRI-1531814)
and the Robert A. Welch Foundation (E-1337).
AP acknowledges the support provided by Los Alamos National Laboratory Directed Research and Development (LDRD) Funds.
CS acknowledges support from the School of Chemistry \& Biochemistry and the College of Science of Georgia Tech.
JJ acknowledges the support of the Army Research Office (W911NF-13-1-0162).
ARSK acknowledges funding from EU Horizon 2020 via Marie Sklodowska Curie Fellowship (Global) (Project No. 705874).
\appendix
\section{Preparation of Entangled States}
We review here the preparation of the entangled states as propagated to the
coincidence detectors in Fig. 1.
The initial laser beam
produces an entangled photon pair by spontaneous parametric down-conversion at B, in a
state which we shall denote as a Bell state
\begin{eqnarray}
|\bell_{1} \rangle &=& \iint d\omega_{1} d\omega_{2} {\cal F}(\omega_{1},\omega_{2})
B^{\dagger}_{I}(\omega_{1})
B^{\dagger}_{S}(\omega_{2})
|0\rangle \\
&=& \iint d\omega_{1} d\omega_{2} {\cal F}(\omega_{1},\omega_{2})|\omega_{1}\omega_{2}\rangle,
\end{eqnarray}
where ${\cal F}(\omega_{1},\omega_{2})$ is the bi-photon field amplitude and
$B^{\dagger}_{I,S}(\omega_{i})$ creates a photon with frequency $\omega_{i}$ in either the idler (I) or signal (S) arm of the HOM
apparatus.
The ket $|0\rangle$ is the vacuum state and $|\omega_{1}\omega_{2}\rangle$
denotes a two photon state.
The two photons are reflected back towards a
beam-splitter (BS) by mirrors M1 and M2.
M1 introduces an optical delay with transmission function $\Phi(\omega)$
which we will take to have unit modulus. In the other arm,
we introduce a resonant medium at S with transmission function ${\cal S}(\omega)$.
\begin{widetext}
Upon interacting with both the delay element and the medium, the Bell-state
can be rewritten as
\begin{eqnarray}
|\bell_{2} \rangle = \iint d\omega_{1} d\omega_{2} {\cal F}(\omega_{1},\omega_{2})
B^{\dagger}_{I}(\omega_{1})
B^{\dagger}_{S}(\omega_{2})
\Phi(\omega_{1})S(\omega_{2})
|0\rangle.
\nonumber
\\
\end{eqnarray}
Finally, the two beams are re-joined by a beam-splitter (BS)
producing the mapping
\begin{eqnarray}
\nonumber
B^{\dagger}_{I}(\omega_{1})B^{\dagger}_{S}(\omega_{2})
&\mapsto &\frac{1}{2}
[A_1^\dagger(\omega_1) + i A_2^\dagger(\omega_1)][A_2^\dagger(\omega_2) + i A_1^\dagger(\omega_2)]
\end{eqnarray}
whereby $A_{i}^{\dagger}(\omega_{j})$ creates a photon with frequency $\omega_{j}$ in the $i^{th}$
exit channel.
This yields a final Bell state
\begin{eqnarray}
|\bell_{out} \rangle &=& \frac{1}{2} \iint d\omega_{1} d\omega_{2} {\cal F}(\omega_{1},\omega_{2})
\left(\left(
A_1^\dagger(\omega_1)A_2^\dagger(\omega_2)-A_2^\dagger(\omega_1)A_1^\dagger(\omega_2)\right)\right.\nonumber \\
&+&\left.i\left(A_1^\dagger(\omega_2)A_1^\dagger(\omega_1)+A_2^\dagger(\omega_1)A_2^\dagger(\omega_2)\right)\right)
\Phi(\omega_{1})S(\omega_{2})
|0\rangle.
\end{eqnarray}
The coincidence count rate is determined only by the real part of the photon creation term, so we write
\begin{eqnarray}
|\bell_{c} \rangle
&=&
\frac{1}{2}\int d\omega_{1} \int d\omega_{2}
\left( {\cal F}(\omega_{1},\omega_{2}) \Phi(\omega_{1})S(\omega_{2})\right. \nonumber \\
&-& \left. {\cal F}(\omega_{2},\omega_{1}) \Phi(\omega_{2})S(\omega_{1})
\right)
|\omega_{1}\omega_{2}\rangle
\end{eqnarray}
and
\begin{eqnarray}
P_{c} &=& \iint d\omega_{1} d\omega_{2} |\langle\omega_{1}\omega_{2} | \psi_{c} \rangle | ^{2} \nonumber
\\
&=&
\frac{1}{4}\iint d\omega_{1} d\omega_{2}
\left\{
|{\cal F}(\omega_{1},\omega_{2}) {\cal S}(\omega_{2})|^{2} +
|{\cal F}(\omega_{2},\omega_{1}) {\cal S}(\omega_{1})|^{2}
\right.
\nonumber \\
&-&
\left.
2 {\rm Re}\left[{\cal F}^{*}(\omega_{1},\omega_{2}){\cal F}(\omega_{2},\omega_{1})
{\cal S}^{*}(\omega_{2}){\cal S}(\omega_{1})
\Phi^{*}(\omega_{1})\Phi(\omega_{2})
\right]
\right\}.
\label{eq:pc-meth}
\end{eqnarray}
Upon taking $\omega_{1} = \omega_{L} + \omega$ and $\omega_{2} = \omega_{L} - \omega$,
and changing the integration variable we obtain equation~\ref{eq:pc2} in the text.
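As a sanity check on equation~\ref{eq:pc-meth}, the short sketch below discretizes the two frequency integrals for a frequency-symmetric Gaussian bi-photon amplitude: with identical arms ($\Phi={\cal S}=1$) the interference term cancels the background exactly and the coincidence probability vanishes (the Hong-Ou-Mandel effect), while a frequency-dependent phase in one arm restores coincidences. The Gaussian amplitude and the delay value are arbitrary choices for this illustration.
\begin{verbatim}
import numpy as np

# Discretized Eq. (pc-meth), written for |Phi| = 1 as
# P_c = (1/4) iint |F(w1,w2)Phi(w1)S(w2) - F(w2,w1)Phi(w2)S(w1)|^2 dw1 dw2.
w = np.linspace(-3.0, 3.0, 301)
dw = w[1] - w[0]
W1, W2 = np.meshgrid(w, w, indexing="ij")
F = np.exp(-(W1**2 + W2**2) / 2.0)            # symmetric: F(w1,w2) = F(w2,w1)

def coincidence(Phi, S):
    A = F * Phi(W1) * S(W2)                   # F(w1,w2) Phi(w1) S(w2)
    B = F.T * Phi(W2) * S(W1)                 # F(w2,w1) Phi(w2) S(w1)
    return 0.25 * np.sum(np.abs(A - B) ** 2) * dw * dw

one = lambda x: np.ones_like(x, dtype=complex)
delay = lambda x: np.exp(2j * x)              # dispersionless delay in one arm
print("identical arms:", coincidence(one, one))     # = 0 (HOM dip)
print("with a delay  :", coincidence(delay, one))   # > 0
\end{verbatim}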
\section{Input/Output formalism}
Our theoretical approach is to treat S as a material system interacting with a bath of
quantum photons.
We shall denote our ``system'' as those degrees of freedom
describing the material and the photons directly interacting with the sample,
described by $H_{sys}$,
and assume that the photons within the sample cavity are exchanged with
external photons in the bi-photon field,
\begin{eqnarray}
H_{r} + H_{rs} = \hbar \int_{-\infty}^{\infty} \left\{
z B^{\dagger}_{k}(z) B_{k}(z)
-i \kappa(z) (\psi_{k}^{\dagger} B_{k}(z) - B_{k}^{\dagger}(z)\psi_{k})\right\} d z,
\end{eqnarray}
where $[B_{k}(z),B_{k'}^{\dagger}(z')] = \delta_{kk'}\delta(z - z')$ are boson operators for
photons in the laser field, and $[\psi_{k},\psi^{\dagger}_{k}] = \delta_{kk'}$ are boson operators for
cavity photons in the sample that directly interact with the material component of the system.
The Heisenberg equations of motion for the reservoir and system photon modes are given by
\begin{eqnarray}
\partial_{t} B_{k}(z) = -i z B_{k}(z) + \kappa(z) \psi_{k}
\end{eqnarray}
and
\begin{eqnarray}
\partial_{t}\psi_{k} =- \frac{i}{\hbar}[\psi_{k},H_{sys}] - \int \kappa(z) B_{k}(z;t) d z,
\end{eqnarray}
where the integration range is over all $z$.
We can integrate formally the equations for the reservoir given either the initial or final states of the reservoir field
\begin{eqnarray}
B_{k}(z;t) = \left\{
\begin{array}{ll}
e^{-iz (t-t_{i})}B_{k}(z;t_{i}) + \kappa(z)\int_{t_{i}}^{t}ds e^{-iz (t-s)}\psi_{k}(s) & {\rm for\, } t > t_{i} \\
e^{-iz (t-t_{f})}B_{k}(z;t_{f}) - \kappa(z)\int_{t}^{t_{f}}ds e^{-iz (t-s)}\psi_{k}(s) & {\rm for\, } t < t_{f} .
\end{array}
\right.
\end{eqnarray}
We shall eventually take $t_{i} \to -\infty$ and $t_{f}\to + \infty$ and require that the
forward-time propagated and reverse-time propagated solutions are the same
at some intermediate time $t$.
If we assume that the coupling is constant over the frequency range of interest, we can
write
$$\kappa(z) = \sqrt{\gamma/2\pi}$$
where $\gamma$ is the rate that energy is exchanged between the reservoir and the system. This is the (first) Markov approximation.
\end{widetext}
Using these identities, one can find the Heisenberg equations for the
cavity modes as
\begin{eqnarray}
\partial_{t}\psi_{k} = -\frac{i}{\hbar}[\psi_{k},H_{sys}] - \sqrt{\frac{\gamma}{2\pi}} \int_{-\infty}^{+\infty} B_{k}(z;t)dz
\end{eqnarray}
where $H_{sys}$ is the Hamiltonian for the isolated system. We can now cast the external field in this equation in terms of its
initial condition:
\begin{eqnarray}
\partial_{t}\psi_{k} &=& -\frac{i}{\hbar}[\psi_{k},H_{sys}] - \sqrt{\frac{\gamma}{2\pi}} \int_{-\infty}^{+\infty} e^{iz (t-t_{i})} B_{k0}(z) dz \nonumber \\
&-& \frac{\gamma}{2\pi}\int_{-\infty}^{+\infty}dz \int_{t_{i}}^{t} e^{iz(t-t')}\psi_{k}(t') dt'
\end{eqnarray}
Let us define an input field in terms of the Fourier transform of the reservoir operators:
\begin{eqnarray}
\psi_{k,in}(t) = -\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} dz e^{-iz (t-t_{i})} B_{k0}(z).
\end{eqnarray}
Since these depend upon the initial state of the reservoir, they are essentially a source of stochastic noise for the system.
In our case, we shall use these as a formal means to connect the fields inside the sample to the fields
in the laser cavity.
For the term involving ${\gamma}/{2\pi}$,
the integral over frequency gives a delta-function:
\begin{eqnarray}
\int_{-\infty}^{\infty} dz e^{iz(t-t')} = 2\pi \delta(t-t').
\end{eqnarray}
Then
\begin{eqnarray}
\int_{t_{o}}^{t} dt' \delta(t-t') \psi_{k}(t') = \frac{\psi_{k}(t)}{2} \,\,\, {\rm for\,\,} t_{o}< t < t_{f}.
\end{eqnarray}
This gives the forward equation of motion.
\begin{eqnarray}
\partial_{t} \psi_{k} = -\frac{i}{\hbar}[ \psi_{k},H_{sys}] +\sqrt{\gamma} \psi_{k,in}(t) - \frac{\gamma}{2} \psi_{k}(t)
\end{eqnarray}
We can also define an output field by integrating the reservoir backwards from time $t_{f}$ to time $t$
given a final state of the bath, $ B_{kf}$.
\begin{eqnarray}
B_{k} = e^{-iz (t-t_{f})} B_{kf} - \sqrt{\frac{\gamma}{2\pi}}\int_{t}^{t_{f}}e^{-iz(t-t')}\psi_{k}(t')dt'.
\end{eqnarray}
This produces a similar equation of motion for the output field
\begin{eqnarray}
\partial_{t} \psi_{k} = -\frac{i}{\hbar}[ \psi_{k},H_{sys}] - \sqrt{\gamma } \psi_{k,out}(t) + \frac{\gamma}{2} \psi_{k}(t).
\end{eqnarray}
Upon integration:
\begin{eqnarray}
{\psi_{k,out}(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} d\nu e^{-i\nu (t-t_{f})} B_{kf}(\nu)}.
\end{eqnarray}
At the time $t$, both equations must be the same, so we can subtract one from the other
\begin{eqnarray}
\psi_{k,in} + \psi_{k,out} = \sqrt{\gamma}\,\psi_{k}
\end{eqnarray}
to produce a relation between the incoming and outgoing components. This eliminates the
non-linearity and explicit reference to the bath modes.
We now write $\Psi = \{\psi_{k},\psi^{\dagger}_{k}, S_{1},S_{2},\cdots\}$ as a vector of Heisenberg variables for the material system $\{S_{1},S_{2},\cdots \}$ and
cavity modes $\{\psi_{k},\psi^{\dagger}_{k} \}$.
We take the equations of motion for all of the fields to be linear and of the form
\begin{eqnarray}
\partial_{t} \Psi = {\cal M}_{in}\cdot \Psi + \sqrt{\gamma} \Psi_{in}
\end{eqnarray}
where ${\cal M}_{in}$ is a matrix of coefficients which are independent of time.
The input vector $ \Psi_{in} $ is non-zero for only the terms
involving the input modes. We can also write a similar equation in
terms of the output field; however, we have to account for the
change in sign of the dissipation terms, so we denote the coefficient matrix as ${\cal M}_{out}$.
In this linearized form, the forward and reverse equations of motion can be solved formally using the Laplace transform, giving
\begin{eqnarray}
({\cal M}_{in} - iz) \Psi(z) &=&- \sqrt{\gamma} \Psi_{in}(z) \\
({\cal M}_{out}-iz) \Psi(z) &=&+ \sqrt{\gamma} \Psi_{out}(z).
\end{eqnarray}
These and the relation $ \Psi_{in} + \Psi_{out} = \sqrt{\gamma } \Psi $ allow one to eliminate the external variables entirely:
\begin{eqnarray}
\Psi_{out}(z) = -({\cal M}_{out} - i z )({\cal M}_{in} - iz )^{-1} \Psi_{in}(z).
\label{prop}
\end{eqnarray}
This gives a precise connection between the input and output fields. Moreover,
the final expression does not depend upon the assumed exchange rate between the
internal $\psi_{k}$ and external $B_{k}(z)$ photon fields.
The procedure is very much akin to the use of M{\o}ller operators in scattering theory.
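To see this machinery in the simplest possible setting, consider a toy ``sample'' consisting of a single, undriven cavity mode (written in a frame in which its frequency is zero) and no material degrees of freedom, so that ${\cal M}_{in}$ and ${\cal M}_{out}$ are $1\times1$. Equation~\ref{prop} then reduces to a unit-modulus (all-pass) response whose phase winds by $2\pi$ across the resonance, as the sketch below verifies; this toy is our own illustration and is not the Dicke-model matrix constructed below.
\begin{verbatim}
import numpy as np

# Toy illustration of Eq. (prop): one cavity mode, M_in = -gamma/2 and
# M_out = +gamma/2 (1x1 matrices).  The resulting response has |S| = 1 and a
# 2*pi phase winding across z = 0.  This is not the Dicke-model matrix M_s.
gamma = 0.05
M_in = np.array([[-gamma / 2.0]], dtype=complex)
M_out = np.array([[+gamma / 2.0]], dtype=complex)
I1 = np.eye(1)

def S_toy(z):
    return (-(M_out - 1j * z * I1) @ np.linalg.inv(M_in - 1j * z * I1))[0, 0]

for z in (-1.0, -0.05, 0.0, 0.05, 1.0):
    s = S_toy(z)
    print(f"z = {z:+5.2f}   |S| = {abs(s):.4f}   arg(S)/pi = {np.angle(s)/np.pi:+.3f}")
\end{verbatim}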
To explore this connection, define $\hat\Omega^{(\pm)}_{in} $
as an operator which propagates an incoming solution at $\mp \infty$ to the interaction at time $t =0$
and its reverse $\hat\Omega^{(\pm)}_{out} $
which propagates an out-going solution at $\mp \infty$ back to the interaction at time $t =0$.
\begin{eqnarray}
\hat \Omega^{(\pm)\dagger}_{in,out}\hat\Omega^{(\pm)}_{in,out} = I,
\end{eqnarray}
and
\begin{eqnarray}
\hat\Omega^{(\pm)}_{in} = ({\cal M}_{in} \mp iz )^{-1} \\
\hat\Omega^{(\pm)}_{out} = ({\cal M}_{out} \pm iz )^{-1}
\end{eqnarray}
Thus, we can write equation~\ref{prop} as
\begin{eqnarray}
\Psi_{out}(z) = -\hat\Omega^{(-)\dagger}_{out}\hat\Omega^{(+)}_{in}\Psi_{in}(z).
\end{eqnarray}
To compute the response function, we consider fluctuations and excitations from a steady state solution:
\begin{eqnarray}
\Psi(t) = \Psi_{ss} + \delta\Psi(t).
\end{eqnarray}
The resulting
linearized equations of motion read
\begin{eqnarray}
\frac{d}{dt} \delta \Psi(t) = {\cal M}_{s} \delta \Psi(t)
\end{eqnarray}
implying a formal solution of
\begin{eqnarray}
\delta \Psi(t) = e^{{\cal M}_{s}t}\delta \Psi(0).
\end{eqnarray}
From this we deduce that the eigenvalues and eigenvectors of ${\cal M}_{s}$ give the fluctuations in terms of the
normal excitations about the stationary solution.
Using the input/output formalism, we can write the outgoing state (in terms of the Heisenberg variables) in terms of their input values:
\begin{eqnarray}
\delta \Psi_{out}(z) = -({\cal M}_{out,s} - i z I )({\cal M}_{in,s} - iz I)^{-1} \delta\Psi_{in}(z)\nonumber
\\
\end{eqnarray}
where as given above, $\delta\Psi(z)$ is a vector containing the fluctuations about the stationary values for each of the Heisenberg variables.
The ${\cal M}_{in,s}$ and ${\cal M}_{out,s}$ are the coefficient matrices from the linearisation process.
The input field satisfies $\langle \delta \psi_{in}(z)\delta \psi^{\dagger}_{in} (z')\rangle = \delta(z-z')$ and all other terms are zero.
Thus, the transmission function is given by
\begin{eqnarray}
\delta(z-z'){\cal S}(z) = \langle \delta \psi^{\dagger}_{out}(z)\delta \psi_{out}(z')\rangle
\label{response}
\end{eqnarray}
In other words, ${\cal S}(z)$ is the response of the system to the input field of the incoming
photon state, producing an output field for the out-going photon state.
\section{Dicke model for ensemble of identical emitters}
Let us consider an ensemble of $N$ identical two-level systems corresponding to local molecular sites
coupled to a set of photon modes described by $\psi_{k}$.
\begin{eqnarray}
\hat H &=&\sum_{j} \frac{\hbar\omega_{o}}{2}\hat\sigma_{z,j}
+ \sum_{k}\hbar\omega_{k}\hat\psi_{k}^{\dagger}\hat\psi_{k} \nonumber \\
&+&\sum_{k,j}\frac{\hbar\lambda_{kj}}{\sqrt{N}}(\hat\psi_{k}^{\dagger} + \hat\psi_{k})(\hat\sigma^{+}_{j} + \hat\sigma_j^{-})\label{dickeH}
\end{eqnarray}
where $\{\hat\sigma_{z,j},\hat\sigma^{\pm}_{j}\}$ are local spin-1/2 operators for site $j$, $\hbar\omega_{o}$ is the local
excitation energy, and $\lambda_{kj}$ is the coupling between the $k$th photon mode and the $j$th site, which we will take to be
uniform over all sites.
Defining the collective operators
$$ \,\, \hat J_{z} = {\sum_j} \hat\sigma_{z,j} \,\,\,{\rm and}\,\,\, \hat J_{\pm} ={\sum_j} \hat\sigma_{j}^{\pm}$$
and the total angular momentum operator
$$\hat J^2=\hat J_z^2+(\hat J_+ \hat J_- + \hat J_- \hat J_+)/2,$$
the Hamiltonian in equation~\ref{dickeH} can be cast in the following collective form by mapping the
total state space of $N$ spin-1/2 states onto a
single angular momentum state vector $|J,M\rangle$:
\begin{eqnarray}
\hat H &=& \hbar\omega_{o}\hat J_{z}+ \sum_{k}\hbar\omega_{k}\hat\psi_{k}^{\dagger}\hat\psi_{k} \nonumber \\
&+&\sum_{k}\frac{\hbar\lambda_{k}}{\sqrt{N}}(\hat \psi_{k}^{\dagger} + \hat\psi_{k})(\hat J_{+} + \hat J_{-})
\end{eqnarray}
Note that the ground state of the system corresponds to $|J,-J\rangle$ in which
each molecule is in its electronic ground state. Excitations from this state create up to
$N$ excitons within the system corresponding to the state $|J,+J\rangle$. Intermediate to this
are multi-exciton states which correspond to various coherent superpositions of
local exciton configurations.
For each value of the wave vector $k$ one obtains the following Heisenberg equations of motion for the
operators
\begin{eqnarray}
\pd{\hat\psi_{k}}{t} &=& (- i \omega_{k}-\kappa)\hat\psi_{k} - i \frac{\lambda_{k}}{\sqrt{N}}( \hat J_{+} + \hat J_{-}) \\
\pd{\hat\psi_{k}^{\dagger}}{t} &=& (i \omega_{k}- \kappa)\hat\psi^{\dagger}_{k} + i \frac{\lambda_{k}}{\sqrt{N}}( \hat J_{+}+ \hat J_{-}) \\
\pd{\hat J_{\pm}}{t}&=& \pm i \omega_{o}\hat J_{\pm} \mp 2i\hat J_{z}\sum_{k}\frac{\lambda_{k}}{\sqrt{N}}( \hat\psi_{k}+\hat\psi_{k}^{\dagger})\\
\pd{\hat J_{z}}{t} &=& +i \sum_{k}\frac{\lambda_{k}}{\sqrt{N}}( \hat J_{-}- \hat J_{+})(\hat \psi_{k}+\hat \psi_{k}^{\dagger})
\end{eqnarray}
where $\kappa$ gives the decay rate of the photon $\hat \psi_k$ into the reservoir. These are non-linear equations,
and we shall seek stationary solutions and linearize about them, obtaining
\begin{eqnarray}
\frac{d}{dt} \delta \Psi(t) = {\cal M}_{s} \delta \Psi(t)
\end{eqnarray}
with
\begin{widetext}
\begin{eqnarray}
\noindent
{\cal M}_{s}
& =&
\left[
\begin{array}{ccccc}
-(\kappa - i\omega_{k}) & 0 & i\lambda_{k} & i\lambda_{k} & 0\\
0 & -(\kappa + i\omega_{k}) & -i\lambda_{k} & -i\lambda_{k} & 0\\
2 i\lambda_{k} \overline{J}_{z} & 2 i\lambda_{k}\overline{J}_{z} & - i\omega_{o} & 0 & 2i\lambda_{k}(\overline{\psi}_{k}+\overline{\psi}_{k}^{\dagger})\\
-2 i\lambda_{k} \overline{J}_{z} & -2 i\lambda_{k} \overline{J}_{z} & 0 & i\omega_{o} & -2i\lambda_{k}(\overline{\psi}_{k}+\overline{\psi}_{k}^{\dagger})\\
i\lambda_{k}(\overline{J}_{-}-\overline{J}_{+}) & i\lambda_{k} (\overline{J}_{-}-\overline{J}_{+}) & i\lambda_{k}(\overline{\psi}_{k}+\overline{\psi}_{k}^{\dagger}) & - i\lambda_{k}(\overline{\psi}_{k}+\overline{\psi}_{k}^{\dagger}) & 0 \\
\end{array}
\right]\label{modelM}
\nonumber \\
\end{eqnarray}
where $\overline{J}_{z,\pm}$, $\overline\psi_{k}$, and $\overline\psi_{k}^{\dagger}$ denote the steady-state solutions,
and where we have removed $N$ from the equations of motion by simply rescaling the variables.
The model has both trivial and non-trivial stationary solutions corresponding to the normal and super-radiant regimes.
For the normal regime,
\begin{eqnarray}
\overline{\psi}_{k} = \overline{\psi}_{k}^{\dagger} = \overline{J}_{\pm} = 0
\end{eqnarray}
and
\begin{eqnarray}
\overline{J}_{z} = \pm \frac{N}{2},
\end{eqnarray}
which corresponds to the case in which every spin is excited or every spin is in its ground state. Since we are primarily
interested in excitations from the electronic ground state, we initially focus our attention on these solutions.
Non-trivial solutions to these equations predict that above a critical value of the coupling $\lambda > \lambda_{c}$,
the system will undergo a quantum phase transition to form a super-radiant state.
It should be pointed out that in the original Dicke model, above the critical coupling,
the system is no longer gauge invariant, leading to a violation of the Thomas-Reiche-Kuhn (TRK) sum rule.
Gauge invariance can be restored; however, the system then no longer undergoes a
quantum phase transition.\cite{Rzazewski:1975} For a {\em driven, non-equilibrium}
system such as the one presented here, however, the TRK sum rule does not apply and the quantum phase transition is
a physical effect.
\end{widetext}
The non-trivial solutions for the critical regime are given by
\begin{eqnarray}
\overline\psi_{k}^{2} &=& (\overline\psi_{k}^{\dagger})^{2} = \frac{1}{4}\frac{\omega_{k}\lambda_{k}^{2}}{\omega_{o}\lambda_{c}^{2}}\left(1 - \left(\frac{\lambda_{c}}{\lambda_{k}}\right)^{4}\right)\\
\overline{J}_{\pm} &=& \frac{1}{2}\left(1 - \left(\frac{\lambda_{c}}{\lambda_{k}}\right)^{4}\right)^{1/2} \\
\overline{J}_{z} &=& -\frac{1}{2}\left(\frac{\lambda_{c}}{\lambda_{k}}\right)^{2}.
\end{eqnarray}
The (real) eigenvalues of ${\cal M}_{s}$ give 4 non-zero normal-mode frequencies and 1 trivial one (for $\kappa = 0$),
which we shall denote as
\begin{eqnarray}
\pm\omega_{\pm} =
\pm\frac{1}{\sqrt{2}}\sqrt{
(\omega_{k}^{2} + \omega_{o}^{2}) \pm \sqrt{(\omega_{k}^{2}-\omega_{o}^{2})^{2} + 16\lambda_{k}^{2}\omega_{k}\omega_{o} }
} \nonumber
\\
\end{eqnarray}
Setting the lower-branch frequency $\omega_{-}$ to zero (i.e., requiring the normal state to become unstable), one obtains the critical coupling constant
\begin{eqnarray}
\lambda_{c} = \sqrt{\frac{\omega_{k}\omega_o}{4}\left(1 + \frac{\kappa^{2}}{\omega_{k}^{2}}\right)}.
\end{eqnarray}
Figure~\ref{df1} gives the normal mode spectrum for a resonant
system with $\omega_{k} = \omega_{o} = 1.5 $ and $\kappa = 0.1$ (in reduced
units).
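For reference, these quantities are easily evaluated numerically; the following minimal Python sketch (using the parameter values quoted above; the sample couplings are chosen purely for illustration) reproduces the normal-mode branches and the critical coupling.

\begin{verbatim}
import numpy as np

# parameters quoted in the text (reduced units)
w_k, w_o, kappa = 1.5, 1.5, 0.1

# critical coupling: lambda_c = sqrt( w_k*w_o/4 * (1 + kappa^2/w_k^2) )
lam_c = np.sqrt(w_k * w_o / 4.0 * (1.0 + kappa**2 / w_k**2))

def normal_modes(lam):
    """Normal-mode frequencies omega_+/- (kappa = 0 expression)."""
    rad = np.sqrt((w_k**2 - w_o**2)**2 + 16.0 * lam**2 * w_k * w_o)
    w_plus = np.sqrt((w_k**2 + w_o**2 + rad) / 2.0)
    # omega_- becomes imaginary above the critical coupling
    w_minus = np.sqrt(complex((w_k**2 + w_o**2 - rad) / 2.0))
    return w_plus, w_minus

print("lambda_c =", lam_c)
for lam in (0.5 * lam_c, lam_c, 1.5 * lam_c):   # illustrative couplings
    print(lam, normal_modes(lam))
\end{verbatim}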
\begin{figure}
\caption{{\bf Upper and lower polariton branches.}\label{df1}}
\end{figure}
\begin{figure}
\caption{{\bf Integration axes and coordinate rotation for the integrals in Eqs.~\ref{int78} and \ref{int81}.}\label{complexplane}}
\end{figure}
\section{Evaluation of integrals in Eqs.~\ref{eq:pc2} and~\ref{eq:pc}}
The integral in Eq.~\ref{eq:pc2} for the photon coincidence can be problematic
to evaluate numerically given the oscillatory nature of the sinc function in ${\cal F}(z)$.
To tame these oscillations, we define a sinc transformation based upon ${\cal F}(z)$ using the identity
\begin{eqnarray}
{\rm sinc}(z) = \frac{\sin(z)}{z} = \frac{1}{2}\int_{-1}^{1}e^{ikz}\,dk,
\end{eqnarray}
which yields
\begin{eqnarray}
{\cal F}(z) =\frac{1}{2} \int_{-1}^{1}e^{ik b z^{2}}~dk.
\end{eqnarray}
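As a quick numerical sanity check (a minimal sketch, independent of the evaluation scheme described below), the identity defining ${\cal F}$ can be verified by direct quadrature:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def sinc_via_integral(z):
    # (1/2) * integral_{-1}^{1} exp(i k z) dk, split into real/imaginary parts
    re, _ = quad(lambda k: np.cos(k * z), -1.0, 1.0)
    im, _ = quad(lambda k: np.sin(k * z), -1.0, 1.0)
    return 0.5 * (re + 1j * im)

for z in (0.3, 1.7, 4.2):               # arbitrary test points
    print(z, np.sin(z) / z, sinc_via_integral(z))
    # the imaginary part vanishes since the sine integrand is odd
\end{verbatim}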
From this we can re-write each term in Eq.~\ref{eq:pc2} in the form
\begin{eqnarray}
I(t) &=&\int_{-\infty}^{\infty} dz |{\cal F}(z)|^{2} {\cal S^*}(\omega_L-z){\cal S}(\omega_L+z)e^{2i z t}, \\
&=&
\frac{1}{4} \int_{-1}^{1}dk\int_{-1}^{1} dk'{\cal G}(k-k',t),
\end{eqnarray}
where we denote
\begin{eqnarray}
{\cal G}(q,t) = \int_{-\infty}^{\infty} dz~e^{-ibz^{2}q+2itz} {\cal S^*}(\omega_L-z){\cal S}(\omega_L+z).
\label{eq:G}
\end{eqnarray}
The integrand is highly oscillatory along the $z$-axis; however,
for non-zero $ k-k'= q $, ${\cal G}$ becomes a Gaussian integral under coordinate transformation obtained by completing the square:
\begin{eqnarray}
-i b q z^2 + 2 i t z &=& - i b q ( z^2 - 2 \frac{t}{b q} z ) \\
&=& - i b q \left[( z - \frac{t}{b q} )^2 - (\frac{t}{b q})^2 \right]\\
&=& - i b q ( z - \frac{t}{b q} )^2 - \frac{t^2}{i b q}.
\end{eqnarray}
For $q>0$, we take $u = (\sqrt{i})( z - \frac{t}{b q} )$, and for $q<0$, we take
$u = (-i\sqrt{i})( z - \frac{t}{b q} )$.
Solving for $z$ yields: $z = (-i\sqrt{i}) u +\frac{t}{b q}$ and $z = (\sqrt{i}) u +\frac{t}{b q}$, respectively.
In short, the optimal contour of the Gaussian integral is obtained by rotating by $\pi/4$ from the real-axis in the counter-clockwise direction for the
case of $q>0$ and by $\pi/4$ in the clockwise direction for the case of $q<0$ as indicated in Fig.~\ref{complexplane}.
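The algebra above can be checked symbolically; the short sympy sketch below (symbol names are ours) verifies that, for $q>0$, the substitution $z=(-i\sqrt{i})\,u+t/(bq)$ maps the exponent $-ibqz^{2}+2itz$ onto $-bqu^{2}+it^{2}/(bq)$, the form used in Eq.~\ref{int78}.

\begin{verbatim}
import sympy as sp

b, q, t, u = sp.symbols('b q t u', positive=True)
z = (-sp.I * sp.sqrt(sp.I)) * u + t / (b * q)   # rotated coordinate, q > 0 case

exponent = -sp.I * b * q * z**2 + 2 * sp.I * t * z
target   = -b * q * u**2 + sp.I * t**2 / (b * q)

print(sp.simplify(sp.expand(exponent - target)))   # expected output: 0
\end{verbatim}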
The spectral response ${\cal S^*}(\omega_L-z){\cal S}(\omega_L+z)$ has a number of poles on the complex $z$ plane above the
real-$z$ axis.
We now use the residue theorem to evaluate the necessary poles which result as the contour rotates from the real axis to the complex $\pm\pi/4$ axis.
The eight second-order poles, $\{\rho_{n}\}$, are given by the roots of the denominators $D(\omega_L-z)$ and $D(\omega_L+z)$;
they lie $\kappa/2$ above the real axis, placed symmetrically about the origin.
For the counter-clockwise rotation ($q>0$),
poles to the right of the real crossing point are added (${\cal P}_{R}(t)$) and those to the left are ignored;
whereas for a clockwise rotation ($q<0$), the left-hand poles are subtracted (${\cal P}_{L}(t)$) and the right-hand poles are ignored.
\begin{widetext}
For the unique case $q = 0$, all poles are summed (${\cal P}_{all}(t)$).
\begin{eqnarray}
{\cal P}(t) = 2 \pi i \sum_n \lim_{z\to{\rho_{n}}}\frac{d}{dz}\left[(z-\rho_{n})^2
{\cal S^*}(\omega_L-z){\cal S}(\omega_L+z)e^{-i b z^{2} q + 2 i z t }\right].
\end{eqnarray}
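Each term in this sum is a standard second-order residue; as a minimal illustration of the rule (with a toy integrand $e^{2izt}/(z-\rho)^{2}$ standing in for the full spectral response, which we do not reproduce here), it may be evaluated symbolically:

\begin{verbatim}
import sympy as sp

z, t = sp.symbols('z t', real=True)
rho = sp.Symbol('rho')                          # pole location (illustrative)

g = sp.exp(2 * sp.I * z * t) / (z - rho)**2     # toy integrand with a double pole

# residue at a second-order pole: d/dz [ (z - rho)^2 g(z) ] evaluated at z = rho
bracket = sp.diff((z - rho)**2 * g, z)
res = sp.simplify(bracket).subs(z, rho)         # the bracket is analytic at z = rho
print(res)                                      # 2*I*t*exp(2*I*rho*t)
print(2 * sp.pi * sp.I * res)                   # contribution to the contour integral
\end{verbatim}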
Thus, we obtain
\begin{eqnarray}
{\cal G}(q>0,t) &=& e^\frac{i t^2}{bq}\int_{-\infty}^{\infty} du e^{-b q u^2} {\cal S^*}(\omega_L-(-i\sqrt{i})u-\frac{t}{b q} ){\cal S}(\omega_L +(-i\sqrt{i})u+\frac{t}{b q} ) \nonumber \\
&+& {\cal P}_{R}(t), \label{int78}\\
{\cal G}(q=0,t>0) &=& {\cal P}_{all}(t), \label{int79} \\
{\cal G}(q=0,t<0) &=& 0, \label{int80} \\
{\cal G}(q<0,t) &=& e^\frac{i t^2}{bq}\int_{-\infty}^{\infty} du e^{+b q u^2 } {\cal S^*}(\omega_L-(\sqrt{i})u-\frac{t}{b q} ){\cal S}(\omega_L +(\sqrt{i})u+\frac{t}{b q} )\nonumber \\
&-& {\cal P}_{L}(t),
\label{int81}
\end{eqnarray}
where ${\cal P}_{L}$, ${\cal P}_{R}$, and ${\cal P}_{all}$ denote the sums over the left-hand, right-hand, or all poles, respectively.
The resulting expressions are analytic (albeit lengthy) and consist of exponentials and low-order polynomials. Completing the $k$ and $k'$ integrals then yields an exact expression for the response.
\end{widetext}
\end{document} |
\begin{document}
\newcommand\relatedversion{}
\title{{\Large Ranking with submodular functions on the fly}\thanks{
This research is supported by the Academy of Finland projects MALSOME (343045), AIDA (317085) and MLDB (325117),
the ERC Advanced Grant REBOUND (834862),
the EC H2020 RIA project SoBigData++ (871042),
and the Wallenberg AI, Autonomous Systems and Software Program (WASP)
funded by the Knut and Alice Wallenberg Foundation.
}}
\author{Guangyi Zhang\thanks{KTH Royal Institute of Technology. \href{mailto:[email protected]}{[email protected]}}
\and Nikolaj Tatti\thanks{HIIT, University of Helsinki. \href{mailto:[email protected]}{[email protected]}}
\and Aristides Gionis\thanks{KTH Royal Institute of Technology. \href{mailto:[email protected]}{[email protected]}}
}
\date{}
\maketitle
\fancyfoot[R]{\scriptsize{Copyright \textcopyright\ 2023 by SIAM\\
Unauthorized reproduction of this article is prohibited}}
\begin{abstract} \small\baselineskip=9pt
Maximizing submodular functions has been studied extensively
for a wide range of subset-selection problems.
However, much less attention has been given to the role of submodularity
in sequence-selection and ranking problems.
A recently-introduced framework, named \emph{maximum submodular ranking} (MSR),
tackles a family of ranking problems that arise naturally
when resources are shared among multiple demands with different budgets.
For example, the MSR framework can be used to rank web pages for multiple user intents.
In this paper, we extend the MSR framework to the streaming setting.
In particular, we consider two different streaming models
and we propose practical approximation algorithms.
In the first streaming model, called \emph{function arriving},
we assume that submodular functions (demands) arrive continuously in a stream,
while in the second model, called \emph{item arriving},
we assume that items (resources) arrive continuously.
Furthermore, we study the MSR problem with additional constraints on the output sequence,
such as a matroid constraint that can ensure fair exposure among items from different groups.
These extensions significantly broaden the range of problems that can be captured by the MSR framework.
On the practical side, we develop several novel applications based on the MSR formulation,
and empirically evaluate the performance of the proposed~methods.
\end{abstract}
\section{Introduction}
\label{section:intro}
Submodular set functions capture a ``diminishing-returns'' property that
is present in many real-world phenomena~\citep{krause2014submodular}.
Submodular functions are popular
as they admit a rich toolbox of optimization techniques developed in the literature.
Examples of submodular functions used in practical problems include
document summarization \citep{lin2011class},
viral marketing in social networks \citep{kempe2015maximizing},
social welfare maximization \citep{vondrak2008optimal}, and many others.
The majority of existing submodularity-based problem formulations are restricted
to selecting a \emph{subset of items}, and completely disregard the effect of \emph{item order}.
However, the order of items plays an important role in many applications.
In this paper, we investigate a versatile approach to ranking items within the submodularity framework.
Creating a sequence of resources to be shared among multiple demands
appears in a broad range of applications.
For example, when ranking web pages in response to a user query we want to cater for
multiple user intents;
when creating a live stream of music content shared among a group
we want to satisfy the tastes of all listeners;
and when selecting advertising for a screen on public display
we want it to be relevant to all passers-by.
Typically, each demand has an individual maximum budget,
e.g., in the previous scenario, the budget models the number of web pages that a user
is expected to browse.
The main challenge in these problems is to find a (partial) ranking of resources that best satisfies multiple demands with different budgets.
More concretely, the \emph{maximum submodular ranking} (\msr) formulation~\citep{zhang2022ranking}
deals with \emph{resources} and \emph{demands}.
A resource is referred to as an \emph{item} in a universe set \V.
Demands require resources, and the \emph{utility} of a demand for a set of resources
is characterized by a \emph{non-decreasing submodular set function} $f: 2^\V \to \reals_+$.
The \emph{budget} of a demand is represented by a cardinality constraint,
i.e., the maximum number of items its corresponding function is allowed to take.
The objective is to find a (partial) sequence of items to maximize the total utility,
i.e., the sum of function values.
A formal problem definition is introduced in Section~\ref{section:definition}.
To capture a wider range of problems and increase the versatility of the framework,
in this paper, we extend the \msr problem in two natural streaming models,
which we name \emph{function arriving} and \emph{item arriving}.
In the \emph{function-arriving model},
we assume that demands arrive continuously in an online fashion
and one is unaware of the type and/or volume of future demands.
This model is common in practice; for example, new audience members may join a live stream at any time.
In the \emph{item-arriving model}, we assume that items arrive in a stream,
and we have to output a sequence of items in \emph{one pass}, using limited memory, after seeing all the items in the stream.
In other words, we need to process each item immediately upon its arrival.
This model offers a way to handle the \msr problem when items are arriving continuously,
or are too many to be loaded into memory.
For both of the streaming models we consider we propose practical approximation algorithms.
Besides, we study the \msr problem with additional constraints on the output sequence,
such as a matroid constraint (see Section~\ref{section:definition})
that can ensure fair exposure among items from different groups.
On the practical side, we propose novel applications based on the \msr formulation.
We highlight here an application on \emph{progressively-diverse} personalized recommendation,
while many other interesting ones are discussed in Section~\ref{section:experiment},
including live streaming and catalogued viral marketing.
A popular approach to personalized recommendation \citep{mitrovic2017streaming}
is to select a succinct subset $S$ of items that maximizes a weighted sum
(with a trade-off parameter\,\Ctrade) of two submodular terms,
relevance and diversity.
Note that for a given value of the parameter \Ctrade,
a subset $S$ presents a fixed trade-off between relevance and diversity.
On the other hand, a user's need for diversity may be better served in an adaptive manner.
For example, a target user may appreciate a recommended list having the
\emph{most relevant items at the beginning} and becoming \emph{progressively diverse down the list}.
This requirement can be satisfied via an \msr formulation,
by maximizing a weighted sum of multiple such functions, each with an increasing trade-off value \Ctrade.
See Section~\ref{section:definition} for a formal definition.
Our contributions in this paper are summarized as follows.
\begin{itemize}
\item We study the {maximum submodular ranking} (\msr) problem
in two different streaming models,
and devise approximation algorithms for each model.
\item In the function-arriving streaming model, we show that a simple greedy algorithm yields
a 2-approximation for the \msr problem if an item is allowed to be used multiple times.
This approximation ratio is tight for the greedy algorithm.
We also show that the problem is inapproximable if every item can be used at most once.
\item We propose a novel reduction that maps the ranking problem into a constrained
subset-selection problem subject to a bipartite-matching constraint.
An immediate consequence is that there exist approximation algorithms for the \msr problem
subject to a general \nm-matroid constraint on the output sequence.
Another consequence is that we can obtain efficient approximation algorithms
for the \msr problem in the item-arriving streaming~model.
\item We apply the enhanced \msr framework to several novel real-life applications, and
empirically evaluate the performance of the proposed algorithms.
\end{itemize}
The rest of the paper is organized as follows.
We discuss related work in Section~\ref{section:related}.
A formal problem definition is introduced in Section~\ref{section:definition}.
We present approximation algorithms for the function-arriving \msr problem and
the item-arriving \msr problem in Sections~\ref{section:msrf} and~\ref{section:msri}, respectively.
Our empirical evaluation is conducted in Section~\ref{section:experiment},
followed by concluding remarks in Section~\ref{section:conclusion}.
Missing proofs and further experimental details are deferred to the supplementary materials.
Our implementation is made publicly available.\code
\section{Related work}
\label{section:related}
\para{Submodularity for sequences.}
The \msr problem was proposed by \citet{zhang2022ranking}, and
it was later shown to be a special case of \emph{ordered submodularity},
introduced by \citet{kleinberg2022ordered}.
In particular, both formulations avoid an unnatural \emph{postfix monotonicity} property,
which is required in prior formulations \citep{streeter2008online,zhang2012submodularity,alaei2021maximizing}.
Postfix monotonicity requires a non-decreasing function value after prepending an arbitrary item at the front of a sequence.
Additionally, the \msr problem can be seen as a dual problem to the \emph{submodular ranking} problem \citep{azar2011ranking}, which aims to minimize the total cover time of all functions in the absence of individual budgets.
No streaming extension has been known for the \msr problem.
\para{Bipartite matching.}
Bipartite matching and its many variants have been extensively studied
in the literature~\citep{mehta2013online}.
The offline weighted bipartite matching can be solved exactly,
e.g., via a maximum-flow formulation, while
the best-known approximation ratio for the online variant is achieved by \citet{fahrbach2020edge}.
It is known that many popular variants can be treated as a special case of the
\emph{submodular social welfare} problem \citep{lehmann2006combinatorial}.
Similar to our reduction in Section~\ref{section:msri}, some assignment or scheduling problems can also be reduced to a subset-selection problem subject to a bipartite-matching
constraint~\citep{vondrak2008optimal,pinedo2012scheduling}.
\para{Constrained submodular maximization.}
In the offline setting, for a \nm-matroid constraint, it is well-known that a greedy algorithm guarantees a \nm-approximation for a modular function and a $(\nm+1)$-approximation for a submodular function \cite{korte1978analysis,fisher1978analysis}.
\citet{feldman2011improved} achieve the best-known $(\nm+\epsilon)$-approximation for submodular maximization under a \nm-exchange constraint, which is more general than a \nm-matroid; this improved guarantee is obtained via a local-search algorithm, whereas the well-known $(\nm+1)$-approximation of the greedy is not tight in general.
A lower bound of $\Omega(\nm/\ln \nm)$ is known for approximating \nm-dimensional matching, which is a special case of maximizing a modular function over a \nm-matroid \citep{hazan2006complexity}.
In regards to the streaming setting,
\citet{levin2021streaming} offer a $3+2\sqrt{2} \approx 5.828$ approximation subject to a matching constraint.
Under a \nm-matroid, a $4\nm$-approximation is obtained by \citet{chakrabarti2015submodular}, which is inspired by the modular variant in \citet{badanidiyuru2011buyback}.
In terms of lower bounds,
the analysis in \citet{badanidiyuru2011buyback} is shown to be optimal for any online algorithm (i.e., a streaming algorithm that maintains only a feasible solution at any moment).
Besides, an approximation ratio better than 2.692 is impossible even subject to a bipartite-matching constraint, and
there is evidence that the lower bound can be as high as 3~\citep{feldman2022submodular}.
For a \nm-matroid constraint, a lower bound of $\nm$ has been proven for any streaming algorithm with sub-linear memory;
moreover, even a logarithmic improvement over the best-known $4\nm$-approximation requires memory super-polynomial in \nm \citep{feldman2022submodular}.
\section{Problem definition}
\label{section:definition}
In this section we present the \emph{maximum submodular ranking} (\msr) problem
in two different streaming models.
Afterwards, we introduce a formulation for progressively-diverse personalized recommendation.
Prior to that, we briefly review notions of submodularity and matroids.
\spara{Submodularity.}
Given a set \V, a function $f: 2^\V \to \reals_+$ is called \emph{submodular} if
for any $X \subseteq Y \subseteq \V$ and $v \in \V \setminus Y$, it holds $f(v \mid Y) \le f(v \mid X)$,
where $f(v \mid Y) = f(Y+v) - f(Y)$ is the marginal gain of $v$ with respect to set $Y$.
A function $f$ is called \emph{modular} if $f(X) + f(Y) = f(X \cup Y)$ for any $X \cap Y = \emptyset$.
A function $f$ is called \emph{non-decreasing} if
for any $X \subseteq Y \subseteq \V$, it holds $f(Y) \ge f(X)$.
Without loss of generality, we can assume that function $f$ is normalized, i.e., $f(\emptyset)=0$.
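As a concrete illustration (a minimal sketch on a toy coverage function, not tied to any dataset used later), the diminishing-returns and monotonicity properties can be checked exhaustively:

\begin{verbatim}
from itertools import chain, combinations

# toy coverage function: f(S) = number of ground elements covered by the sets in S
cover = {'a': {1, 2}, 'b': {2, 3}, 'c': {3, 4, 5}}
V = set(cover)

def f(S):
    return len(set().union(*(cover[v] for v in S))) if S else 0

def subsets(U):
    return chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))

# verify f(Y) >= f(X) and f(v | Y) <= f(v | X) for all X subset of Y, v outside Y
for Y in map(set, subsets(V)):
    for X in map(set, subsets(Y)):
        assert f(Y) >= f(X)                                  # non-decreasing
        for v in V - Y:
            assert f(Y | {v}) - f(Y) <= f(X | {v}) - f(X)    # submodular
print("coverage function is non-decreasing and submodular on this instance")
\end{verbatim}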
\spara{Matroid.}
For a set \V, a family of subsets $\M \subseteq 2^\V$ is called a \emph{matroid} if it satisfies the following two conditions:
(1)~downward closeness: if $X \subseteq Y$ and $Y \in \M$, then $X \in \M$;
(2)~augmentation: if $X,Y \in \M$ and $|X| < |Y|$, then $X + v \in \M$ for some $v \in Y \setminus X$.
Two useful special cases are those of \emph{uniform matroid} and \emph{partition matroid}.
The former is simply a \nS-cardinality constraint, i.e.,
$\M = \{S \subseteq \V: |S| \le \nS \}$,
and the latter consists of multiple cardinality constraints,
each placed on a disjoint subset $G_\ell$ of $\V = \cup_\ell G_\ell$, i.e.,
$\M = \{S \subseteq \V: |S \cap G_\ell| \le \nS_\ell, \text{ for all } \ell \}$.
Given \nm matroids $\{\M_j\}_{j \in [\nm]}$, their intersection is called a \emph{\nm-matroid}.
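For example, membership in a partition matroid reduces to a few capacity checks; the groups and capacities in the sketch below are hypothetical.

\begin{verbatim}
# hypothetical disjoint groups G_l and per-group capacities k_l
groups = {'G1': {'a', 'b', 'c'}, 'G2': {'d', 'e'}}
caps   = {'G1': 2, 'G2': 1}

def in_partition_matroid(S):
    """True iff |S intersect G_l| <= k_l for every group l."""
    return all(len(S & G) <= caps[name] for name, G in groups.items())

print(in_partition_matroid({'a', 'd'}))        # True
print(in_partition_matroid({'a', 'b', 'c'}))   # False: exceeds the capacity of G1
\end{verbatim}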
We denote by $\seqs(\V)$ the set of all sequences formed by items in \V.
Given a sequence $\seq \in \seqs(\V)$,
we write
$\seq_i$ for the $i$-th item in \seq, and
$\seq + v$ for the new sequence obtained by appending item~$v$ to~\seq.
The length of a sequence \seq is denoted as $|\seq|$.
The set of items in \seq is denoted by $\V(\seq) \subseteq \V$.
A sequence $\seq$ being a subsequence of another sequence $\seq'$ is denoted by $\seq \preceq \seq'$.
Given an interval $w = [s, e]$, where $s, e$ are integers, we write $\seq[w] = \seq[s:e] = \{\seq_s, \ldots, \seq_e\}$.
More generally, given a subset of items $R \subseteq \V$, we write $\seq[R] = \{\seq_i \mid i \in R\}$.
We are now ready to define the \msr problem~\citep{zhang2022ranking} and its streaming variants.
\begin{problem}[Max-submodular ranking (\msr)]
\label{problem:msr}
Given a set \V of \nV items,
a collection of \nF non-decreasing submodular functions $\fs = \{f_i\}_{i \in [\nF]}$,
each associated with an integer $\nS_i$,
the objective is to find a sequence solving
\begin{equation}
\arg\max_{\seq \in \seqs(\V)} \sum_{f_i \in \fs} f_i(\seq[1 : \nS_i]).
\label{eq:obj}
\end{equation}
\end{problem}
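For concreteness, the objective in Equation~(\ref{eq:obj}) can be evaluated with a few lines; the demands and budgets in this sketch are illustrative.

\begin{verbatim}
def msr_objective(seq, funcs, budgets):
    """Sum of f_i evaluated on the first k_i items of the sequence."""
    return sum(f(set(seq[:k])) for f, k in zip(funcs, budgets))

# illustrative instance: two coverage-style demands over items {a, b, c}
likes = [{'a', 'b'}, {'b', 'c'}]
funcs = [lambda S, L=L: len(S & L) for L in likes]
budgets = [1, 2]

print(msr_objective(['b', 'c', 'a'], funcs, budgets))   # f1({b}) + f2({b,c}) = 1 + 2
\end{verbatim}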
If an additional \nm-matroid constraint $\M \subseteq 2^\V$ is imposed on items in a feasible sequence $\seq$, i.e.,
$\V(\seq) \in \M$,
we refer to the problem as $\msr\nm$.
Such a \nm-matroid constraint is useful, for example,
to avoid overrepresentation of some group of items
in the returned sequence.
If the functions in \fs are modular, we refer to the problem as \emph{maximum modular ranking} (\mmr).
In the function-arriving streaming model,
we observe a set of new functions $\fs_t$ at time step $t$,
and the objective is to produce a sequence \seq in real time, that is,
to decide irrevocably one item in \seq at each time step.
Note that items placed in previous time steps cannot be used again; this restriction is what leads to the strong inapproximability result in Section~\ref{section:msrf}, whereas if items may be reused, the decision at each step amounts to solving an \msr instance over the currently active functions.
We assume that we observe the new functions $\fs_t$ before deciding the $t$-th item at step $t$, and
a function will stay active in subsequent steps after its arrival until it exhausts its budget.
It is also possible not to place any item at a step (by introducing dummy items in \V).
More formally, the \msr problem in the function-arriving model is defined as follows.
\begin{problem}[Function-arriving \msr (\msrf)]
\label{problem:msrf}
Given a set \V of \nV items, and
a collection of non-decreasing submodular functions $\fs_t$ that arrive at the beginning of step $t$,
with arrival time $\tim(f)=t$ and integers $\nS(f)$ for each $f \in \fs_t$,
the objective is to find a sequence solving
\begin{equation}
\arg\max_{\seq \in \seqs(\V)}
\sum_t \sum_{f \in \fs_{t}} f(\seq[t : \nS(f)]),
\end{equation}
by irrevocably deciding the $t$-th item $\seq_t$ at step~$t$.
\end{problem}
In contrast, in the item-arriving streaming model,
we have full information about the functions that are used.
In fact, we further allow each function $f_i$ to ``reserve'' an arbitrary set of $\nS_i$ slots $\slots_i \subseteq [\nV]$ in the sequence,
instead of merely the first $\nS_i$ slots $[\nS_i]$ as in \msr.
When given a sequence, function $f_i$ receives items only from slots $\slots_i$.
For example, when deciding showtimes in a cinema, a user ($f_i$) may only be available during weekends or at a specific time of day.
The goal of the item-arriving \msr problem is to produce a sequence \seq after processing all arriving items in one pass
and using ``small'' memory size.
In other words, items that are discarded from the memory cannot be used later.
If there is a slot in the sequence where no function is available, one is allowed to not place any item.
Formally, the \msr problem in the item-arriving streaming model is defined as follows.
\begin{problem}[Item-arriving \msr (\msri)]
\label{problem:msri}
Given a collection of non-decreasing submodular functions $\fs = \{f_i\}_{i \in [\nF]}$, each associated with $\nS_i$ available slots specified by $\slots_i \subseteq [\nV]$, and
items in \V that arrive in a stream,
the objective is to find a sequence in
\begin{equation}
\arg\max_{\seq \in \seqs(\V)} \sum_{f_i \in \fs} f_i(\seq[\slots_i]).
\label{eq:obj-msri}
\end{equation}
In addition, the number of items one can store at any moment depends only on $\{ \nS_i \}$ instead of \nV.
\end{problem}
In the offline setting where all items are in place, we call this variant
\emph{\msr with availability} (\msra) problem.
Note that when $\slots_i = [\nS_i] = \{1,\ldots,\nS_i\}$,
the \msra problem becomes equivalent to the original \msr problem.
We also note that when there is a single function in \fs,
\msri generalizes the problem of streaming submodular maximization
for which no streaming algorithm with sublinear memory in \nV
has an approximation ratio better than 2~\citep{feldman2020one}.
\spara{Progressively-diverse personalized recommendation.}
A popular approach to personalized recommendation \citep{mitrovic2017streaming}
is to select a succinct subset $S$ of items that maximizes a submodular function of the form
\begin{align}
f_{\Ctrade,\nS}(S) &= (1-\Ctrade) \sum_{v \in S} \text{rel}(v) + \frac{\Ctrade \nS}{|V|} \, \sum_{u \in \V} \max_{v \in S} \text{sim}(u,v), \nonumber\\
&~\text{ such that }~ |S| \le \nS.
\label{eq:recommend}
\end{align}
Here $\text{rel}(v)$ measures the relevance of an item $v$ to the target user, and
$\text{sim}(u,v)$ the similarity between two items $u$, $v$.
The second term represents one specific notion of diversity
(also known as representative\-ness or global coverage),
i.e., for every non-selected item $u \in \V$,
there exists some item $v \in S$ that is similar enough to $u$.
To create a recommended list that adaptively serves a user's need for diversity,
one could maximize a weighted sum of multiple functions $\{ f_{\Ctrade,\nS} \}$
with increasing trade-off value $\Ctrade \in [0,1]$ and cardinality $\nS$,
so that the later suffix of the list will be dominated by functions with larger \Ctrade.
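A minimal sketch of the objective in Equation~(\ref{eq:recommend}) is given below; the relevance and similarity scores are made up for illustration, and the cardinality constraint $|S|\le\nS$ is not enforced in the sketch. Maximizing a weighted sum of several such functions with increasing \Ctrade is then exactly an \msr instance.

\begin{verbatim}
def f_ck(S, c, k, rel, sim, V):
    """(1-c) * total relevance + (c*k/|V|) * sum_u max_{v in S} sim(u, v)."""
    if not S:
        return 0.0
    relevance = sum(rel[v] for v in S)
    coverage = sum(max(sim[u][v] for v in S) for u in V)
    return (1 - c) * relevance + (c * k / len(V)) * coverage

# hypothetical scores on three items
V = ['x', 'y', 'z']
rel = {'x': 0.9, 'y': 0.5, 'z': 0.4}
sim = {u: {v: (1.0 if u == v else 0.2) for v in V} for u in V}

print(f_ck({'x'}, c=0.0, k=2, rel=rel, sim=sim, V=V))   # pure relevance term: 0.9
print(f_ck({'x'}, c=1.0, k=2, rel=rel, sim=sim, V=V))   # pure diversity term
\end{verbatim}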
\section{Function-arriving \msr}
\label{section:msrf}
In this section we discuss two scenarios for the function-arriving \msr (\msrf) problem,
depending on whether items in \V can be used at most once or multiple times.
We show that the problem is inapproximable in the former case, and
we present a 2-approximation algorithm for the latter case.
To start off our analysis,
if the output sequence is constrained to not contain duplicate items,
it can be shown that the \msrf problem is inapproximable, even when all functions are modular.
This result, stated below, follows from the inapproximability of the \emph{online-selection problem},
which aims to select the maximum of an adversarial sequence with no recall \citep{kesselheim2016secretary}.
\begin{theorem}
\label{theorem:inapprox}
If items can be used at most once,
the \mmrf problem generalizes the online selection problem.
Thus, no randomized algorithm guarantees an $\smallo(\nV)$-approximation for the \mmrf problem.
\end{theorem}
Given the inapproximability of \mmrf for the case that the output
sequence should not contain duplicates,
we proceed to study the problem when items in \V can be used multiple times.
This assumption is reasonable in many applications;
for example, a song can be added to a playlist many times ---
and notice here that unnecessary duplicates are implicitly discouraged, as their marginal gain is zero
with respect to functions that have included these items already.
\begin{algorithm2e}[t]
\DontPrintSemicolon
Initialize an empty sequence \seq\;
$\fs \gets \emptyset$\;
\For{$t = 1,\ldots$}{
$\fs \gets \fs \cup \fs_t$ \tcp*{receive functions $\fs_t$}
$A \gets \{ f \in \fs : \nS(f) \geq t \}$
\tcp*{active functions at the $t$-th step}
$v^* \gets \arg\max_{v \in \V} \sum_{f \in A} f(v \mid \seq[\tim(f) : t - 1])$\;
$\seq \gets \seq + v^*$\;
}
\Return{\seq}\;
\caption{Greedy algorithm for Function-arriving \msr (\msrf)}
\label{alg:msrf-greedy}
\end{algorithm2e}
When duplicates are allowed in the output sequence,
we prove that a simple greedy algorithm returns a solution
with a 2-approximation guarantee.
The greedy, which is displayed as Algorithm~\ref{alg:msrf-greedy},
selects the most beneficial item with respect to the current set of ``active'' functions
at each step.
A function $f$ is called \emph{active} if it has not exhausted its item budget $\nS(f)$
up to that point.
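For concreteness, a minimal Python sketch of Algorithm~\ref{alg:msrf-greedy} is given below; the calling convention (how arrivals, budgets, and the set functions are represented) is our own choice and not part of the formal algorithm.

\begin{verbatim}
def greedy_msrf(V, arrivals, horizon):
    """arrivals[t]: list of (f, budget) pairs arriving at step t (1-indexed);
    each f maps a Python set of items to a non-negative value."""
    seq = []                                 # the output sequence, one item per step
    received = []                            # (f, budget, arrival_time) triples
    for t in range(1, horizon + 1):
        received += [(f, k, t) for f, k in arrivals.get(t, [])]
        active = [(f, k, a) for f, k, a in received if k >= t]
        def total_gain(v):                   # marginal gain summed over active demands
            return sum(f(set(seq[a - 1:t - 1]) | {v}) - f(set(seq[a - 1:t - 1]))
                       for f, _, a in active)
        seq.append(max(V, key=total_gain))   # items may be reused across steps
    return seq
\end{verbatim}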
\begin{theorem}
\label{theorem:online-functions}
If items in \V can be used multiple times in the output sequence,
Algorithm~\ref{alg:msrf-greedy} yields a 2-approximation solution for the \msrf problem.
\end{theorem}
The approximation guarantee of the greedy can be shown to be tight,
as the lower bound provided by \citet{zhang2022ranking} applies to our case, as well.
In particular,
since the \msrf problem generalizes the \msr problem,
by letting all functions to arrive at the beginning,
we obtain the following result.
\begin{remark}[\citet{zhang2022ranking}]
\label{remark:tight}
If an item can be used multiple times in the output sequence,
the 2-approximation solution obtained by Algorithm~\ref{alg:msrf-greedy} is tight for the \msrf problem.
\end{remark}
In fact, the bound is tight even for the modular variant, \mmr.
In the rest of this section, we prove Theorem~\ref{theorem:online-functions},
and the proof of Theorem~\ref{theorem:inapprox} is deferred to Section~\ref{section:theorem:inapprox}.
\begin{proof}[Proof of Theorem \ref{theorem:online-functions}]
We write $A_t=\{ f \in \bigcup_{t' \le t} \fs_{t'}: \nS(f) \geq t \}$
for the set of active functions at step $t$.
We denote by \seq the sequence produced by Algorithm~\ref{alg:msrf-greedy} with objective value \ALG, and
by $\seq^*$ the optimal sequence with objective value \OPT.
By the greedy selection criterion,
we know that for any arbitrary item $v \in \V$, it holds that
\begin{equation}
\sum_{f \in A_{t}} f(\seq_{t} \mid \seq[\tim(f): t-1] )
\ge \sum_{f \in A_{t}} f(v \mid \seq[\tim(f): t-1]).
\label{eq:msrf-greedy}
\end{equation}
To simplify the notation, let us write $\seq[f]$ to mean $\seq[\tim(f) : \nS(f)]$.
It is easy to see that \ALG is equal to the sum over $t$ of the left-hand side of Equation~(\ref{eq:msrf-greedy}). Since the item $v$ in Equation~(\ref{eq:msrf-greedy}) is arbitrary, we may instantiate it with $v = \seq^*_t$ at each step $t$, implying
\begin{align*}
\ALG
&= \sum_t \sum_{f \in A_{t}} f(\seq_{t} \mid \seq[\tim(f): t-1] ) \\
&\stackrel{(a)}{\ge} \sum_t \sum_{f \in A_{t}} f(\seq^*_t \mid \seq[\tim(f): t-1] ) \\
&= \sum_{f \in \fs} \sum_{t=\tim(f)}^{\nS(f)} f(\seq^*_t \mid \seq[\tim(f): t-1]) \\
&\stackrel{(b)}{\ge} \sum_{f \in \fs} \sum_{t=\tim(f)}^{\nS(f)} f(\seq^*_t \mid \seq[f]) \\
&\stackrel{(c)}{\ge} \sum_{f \in \fs} f(\seq^*[f] \mid \seq[f]) \\
&= \sum_{f \in \fs} f(\seq[f] \cup \seq^*[f]) - f(\seq[f]) \\
&\ge \sum_{f \in \fs} f(\seq^*[f]) - f(\seq[f])
= \OPT - \ALG,
\end{align*}
where
inequality (a) is by Eq.~(\ref{eq:msrf-greedy}), and
inequalities (b) and (c) are due to submodularity.
\end{proof}
\section{Item-arriving \msr}
\label{section:msri}
\begin{table*}[t]
\caption{Summary of approximation ratios for \msra and \msri}
\label{tbl:reduction}
\centering
\begin{tabular}{lccccc}
\toprule
&unconstrained &\nm-matroid \\
\midrule
\mmra &exact &$\nm+1$ (\citet{korte1978analysis}) \\
\msra &$2+\Cgap$ (\citet{feldman2011improved}) &$\nm+1+\Cgap$ (\citet{feldman2011improved}) \\
\mmri &$1/0.5086$ (\citet{fahrbach2020edge}) &$2(\nm+1+\sqrt{(\nm+1)\nm})-1$ (\citet{badanidiyuru2011buyback}) \\
\msri &$5.828$ (\citet{levin2021streaming}) &$4(\nm+1)$ (\citet{chakrabarti2015submodular}) \\
\bottomrule
\end{tabular}
\end{table*}
In this section, we present the reduction that turns the ranking problem into a subset selection problem subject to a bipartite matching constraint, and its rich consequences.
A summary of approximation ratios in different settings is displayed in Table \ref{tbl:reduction}, using algorithms provided in the citations.
\begin{theorem}
\label{theorem:MSRp}
For any integer $\nm \ge 1$, the $\msra\nm$ problem is an instance of maximizing a non-decreasing submodular function subject to a $(\nm+1)$-matroid.
\end{theorem}
As an immediate consequence of Theorem \ref{theorem:MSRp},
the item-arriving \msr (\msri) can be solved by streaming algorithms for constrained submodular maximization.
\begin{corollary}
\label{corollary:MSR-V}
For any integer $\nm \ge 1$, the $\msri\nm$ problem is an instance of maximizing a non-decreasing submodular function subject to a $(\nm+1)$-matroid in one pass while remembering $\bigO(\sum_i \nS_i)$ items at any moment.
\end{corollary}
Moreover, when functions are modular, we can obtain a stronger result.
\begin{corollary}
\label{corollary:MMR}
For any integer $\nm \ge 1$, the $\mmra\nm$ problem is an instance of maximizing a modular function subject to a $(\nm+1)$-matroid.
In particular, the $\mmra$ problem is an instance of maximum weighted bipartite matching.
\end{corollary}
We note that maximizing a modular function over a 3-matroid already generalizes the \np-hard 3-dimensional matching problem.
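To illustrate Corollary~\ref{corollary:MMR}, when all functions are modular the \mmra problem can be solved with an off-the-shelf assignment solver on an item-by-rank weight matrix; the sketch below uses hypothetical weights and availability slots and is not the algorithm used in our experiments.

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

# modular instance: w[i][v] is the singleton value of item v for function f_i,
# and slots[i] is the set of ranks on which f_i is available (hypothetical)
items = ['a', 'b', 'c']
w = [{'a': 3.0, 'b': 1.0, 'c': 0.0},
     {'a': 0.0, 'b': 2.0, 'c': 2.0}]
slots = [{1}, {1, 2}]
n_ranks = 2

# weight of placing item v at rank t: total modular value collected at that rank
W = np.array([[sum(w[i][v] for i in range(len(w)) if t + 1 in slots[i])
               for t in range(n_ranks)] for v in items])

rows, cols = linear_sum_assignment(-W)       # negate to maximize the total weight
seq = [None] * n_ranks
for r, c in zip(rows, cols):
    seq[c] = items[r]
print(seq, W[rows, cols].sum())
\end{verbatim}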
\begin{proof}[Proof of Theorem \ref{theorem:MSRp}]
The main idea of the reduction is to create an extended universe set $\V'$, i.e.,
\begin{equation}
\label{eq:extended-V}
\V' = \{ (v,t) : v \in \V, \, t \in [\nV] \},
\end{equation}
which can be seen as the edge set of a complete bipartite graph between items $L = \V$ and ranks $R = [\nV]$.
Let us define $\V(S') = \{ v : (v,t) \in S' \}$ to be the projection of $S'$ onto \V.
Let us also write $X_v = \{(v, t) : t \in [\nV]\}$ and $Y_t = \{(v, t) : v \in \V\}$.
Suppose we are given a \nm-matroid $\M \subseteq 2^\V$ over \V.
Define $\M' = \mathcal{A} \cap \mathcal{B} \cap \mathcal{C}$, where
\[
\begin{split}
\mathcal{A} & = \{ S' \subseteq \V': \V(S') \in \M \}, \\
\mathcal{B} & = \{ S' \subseteq \V': | S' \cap X_v | \le 1, \text{ for all } v \in \V \}, \\
\mathcal{C} & = \{ S' \subseteq \V': | S' \cap Y_t | \le 1, \text{ for all } t \in [\nV] \}. \\
\end{split}
\]
Here, $\mathcal{B}$ forces that an item appears only once and $\mathcal{C}$ forces that only one item appears at time $t$.
Consequently, a feasible sequence \seq can be written as a subset of $\V'$ that satisfies $\M'$.
On the other hand, a feasible subset $S' \subseteq \V'$ can be transformed to a sequence by ordering items in $\V(S')$ according to their associated ranks in $S'$.
If $S'$ consists of non-consecutive ranks, one can insert dummy items (or shifting items forward in \msr) to obtain a sequence, with no decrease in the objective function.
We claim that $\M'$ is a $(\nm+1)$-matroid. We will prove the claim
by arguing that $\mathcal{A} \cap \mathcal{B}$ is a $\nm$-matroid
(by Lemma~\ref{lemma:AB} in Section~\ref{section:theorem:MSRp}).
Then $\mathcal{A} \cap \mathcal{B} \cap \mathcal{C}$ is a $(\nm+1)$-matroid because $\mathcal{C}$ is a matroid.
What is left is to show that the objective function for \msra (Equation~\ref{eq:obj-msri}) is non-decreasing and submodular with respect to the new universe set $\V'$.
Monotonicity is immediate, so we only elaborate on submodularity.
Recall that $\slots_i \subset [\nV]$ consists of available slots for function $f_i$.
Let us define $\slots'_i = \{ (v,t) : v \in \V, \, t \in \slots_i \}$.
The objective function can be now written as
$g(S) = \sum_{f_i \in \fs} f_i(\V(S \cap \slots_i'))$.
For any subset $S \subseteq W \subseteq \V'$,
the marginal gain $g(\cdot \mid S)$ of including an item $(v,t)$ into $S$ is
\begin{align*}
&g((v,t) \mid S) = \sum_{f_i \in \fs: t \in \slots_i} f_i(v \mid \V(S \cap \slots_i')) \\
&\ge \sum_{f_i \in \fs: t \in \slots_i} f_i(v \mid \V(W \cap \slots_i'))
= g((v,t) \mid W),
\end{align*}
since $\V(S \cap \slots_i') \subseteq \V(W \cap \slots_i')$ and each $f_i$ is submodular.
Hence, the $\msra\nm$ problem can be cast as an instance of non-decreasing submodular maximization under a $(\nm+1)$-matroid.
\end{proof}
\section{Experiments}
\label{section:experiment}
We evaluate our methods on novel use cases
that motivate the \msrf and \msri problems.
For each use case,
we simulate a concrete task using real-life data, and
empirically evaluate the performance of the proposed algorithms.
A summary of the datasets can be found in Table~\ref{tbl:datasets}.
An examination on the running time is deferred to Section~\ref{section:runtime}.
Our implementation has been made publicly available.\code
\begin{table}[t]
\caption{Datasets statistics}
\label{tbl:datasets}
\centering
\begin{tabular}{lrr}
\toprule
Dataset & $\nV = |\V|$
& $\nF = |\fs|$ \\
\midrule
Music \cite{Bertin-Mahieux2011} &61 415 &10 000 \\
Github social network \cite{rozemberczki2019multiscale} &37 700 &100 \\
Sogou web pages \cite{liu2011users} &725 &1 017 \\
Twitter words \cite{pennington2014glove} &10 000 &8 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\caption{\msrf.
Each demand is given a random arrival time, and a random budget between 1 and a maximum-budget parameter.
(a) Live streaming: utility of a user demand is the fractional coverage of its liked songs, $f(S) = |S \cap S_{\text{like}}| / |S_{\text{like}}|$. (b) Catalogued viral marketing.}
\label{fig:msrf}
\end{figure}
\subsection{Function-arriving \msr}
We present two use cases for the \msrf problem,
live streaming and catalogued viral marketing,
and we evaluate our methods on relevant datasets.
\para{Algorithms.}
The proposed greedy algorithm in Algorithm~\ref{alg:msrf-greedy} is termed \greedy.
Other baselines include
\random, which picks a random item at every step, and
\looptopk, which selects the top-$k$ items repetitively ---
in our use case this corresponds to playing the most popular songs in a loop.
Note that \looptopk is \emph{omniscient} as it requires information about the item popularity in advance.
\para{Live streaming.}
One increasingly popular application on the internet is live streaming,
where a live streamer performs various shows continuously,
while the audience may join or leave any time.
More concretely, we consider music live streaming,
where the live streamer plays songs continuously.
To simulate this application, we use the Million Song Dataset \cite{Bertin-Mahieux2011},
consisting of triples representing a \emph{user}, \emph{song}, and \emph{play count}.
We assume that a user \emph{likes} a song if it is played more than once.
We define \emph{user utility} to be fractional coverage of the liked songs,
which is a submodular function.
We set a random budget for each user between 1 and a maximum-budget parameter,
i.e., how many songs the user will listen to.
Besides, every user is given a random arrival time over a long horizon.
Our goal is to decide a sequence of songs in real time that maximizes the total user utility.
\para{Catalogued viral marketing.}
For viral-marketing applications in social networks,
the goal is to identify a small set of seed nodes who can influence many other users.
For popular diffusion models, the number of influenced nodes is a submodular function
of the seed set \cite{kempe2015maximizing}.
Here,
we introduce a generalization of this classic result
to cope with multiple marketing demands simultaneously,
where each demand is only interested in reaching a specific group of users.
For example, an advertiser may want to influence female users,
while another may want to influence users near a specific city.
Each demand provides a number of product samples to seed nodes,
in the hope of advertising its products by word of mouth.
As a use case for our experimental evaluation,
we consider a platform designed to help with these marketing demands, and
assume that a package of samples of different products
can be sent to one seed node at each time step.
As the marketing demands arrive in real time,
we aim to catalogue unfinished demands by identifying a seed node
that is beneficial to all of them.
Note that, once a demand arrives, its samples should be distributed
to the outgoing packages as soon as possible.
To simulate this use case, we use the GitHub social network~\cite{rozemberczki2019multiscale}.
We consider 100 demands, each targeting a random subset of the network,
together with a random number of samples from 1 to a maximum-budget parameter,
and a random arrival time.
Our goal is to maximize the total utility of demands by sending a package
to one carefully chosen seed node at each time step.
\para{Results.}
The result of the simulation is shown in Figure~\ref{fig:msrf},
in which each data point represents an average of three runs,
each with a different random seed.
For the music-streaming task (Figure~\ref{fig:msrf}(a)),
the \greedy algorithm outperforms all other baselines by a large margin.
This suggests that for users with diverse preferences in songs,
an algorithm like \greedy, which can adapt to the need of current active users,
is required for good performance.
In the catalogued viral-marketing task (Figure~\ref{fig:msrf}(b)),
the \greedy algorithm continues to achieve the best performance.
However, several \looptopk baselines come closer to \greedy as the demand budget increases.
This signifies the existence of a group of influencers
who can collectively reach the most users in the Github network.
\begin{figure}
\caption{\msri.
(a) Multiple intents re-ranking: utility of a user intent is the fractional coverage of its relevant pages, $f(S) = |S \cap S_{\text{rel}}| / |S_{\text{rel}}|$. (b) Progressively-diverse personalized recommendation.}
\label{fig:msri}
\end{figure}
\subsection{Item-arriving \msr}
Next we will present experiments for \msri as well as for progressively-diverse personalized recommendation.
\para{Algorithms.}
We adopt the state-of-the-art streaming algorithm of \citet{chakrabarti2015submodular},
which greedily assigns each arriving item to one of the ranks in the sequence whenever possible,
starting from rank 1.
We call this algorithm Exchange (\exc).
If a rank is occupied by some previous item,
\exc replaces it if the current item is at least twice as valuable as the existing item.
Other baselines include
\random, which produces a random sequence, and
\topk, which orders items by non-increasing singleton utility.
We also include an offline baseline, Omniscient Greedy (\greedyo) \cite{zhang2022ranking}, which serves as a tighter estimate of the optimal value.
\greedyo sequentially and greedily selects an item with respect to the current set of active functions.
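A minimal sketch of the exchange rule as described above is shown below; it is a simplified paraphrase of the streaming algorithm (the per-rank value oracle is abstracted away), not the exact implementation used in our experiments.

\begin{verbatim}
def exchange_stream(stream, n_ranks, value):
    """value(v, t, assignment): value of placing item v at rank t given the
    current partial assignment, a dict mapping rank -> (item, stored_value)."""
    assignment = {}
    for v in stream:                          # one pass over the arriving items
        for t in range(1, n_ranks + 1):       # try ranks starting from rank 1
            gain = value(v, t, assignment)
            if t not in assignment:           # empty rank: assign directly
                assignment[t] = (v, gain)
                break
            if gain >= 2 * assignment[t][1]:  # exchange if at least twice as valuable
                assignment[t] = (v, gain)
                break
    return assignment
\end{verbatim}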
\para{Multiple intents re-ranking.}
In the absence of explicit user intent for a given query,
a search engine needs to take into account all possible intents when providing a list of returned web pages.
Each intent is only relevant to a subset of web pages.
We represent the utility of an intent by the fractional coverage
of relevant pages browsed before running out of patience.
The goal is to produce a list of web pages to maximize the utility over all intents.
In this use case, we extract user intents from the SogouQ click log dataset \cite{liu2011users}.
We consider all queries related to ``movie,'' and
for each such query
we collect all users who issued the query.
We also collect the pages they clicked.
We treat each user as an intent, and pages they clicked as the set of relevant pages.
For each intent, we generate a random number from 1 to a maximum-budget parameter as
user ``patience,'' i.e., the number of pages that the user will browse.
\para{Progressively-diverse personalized recommendation.}
Next we consider the recommendation task introduced in Section~\ref{section:definition}.
To simulate a concrete task, we consider the task of finding representative synonyms with respect to a given keyword.
We choose ``Trump'' as our keyword.
We use the pre-trained word embedding Glove over the Twitter corpus \cite{pennington2014glove}.
Similarity or relevance between any two words is measured by the cosine similarity shifted by $-0.5$, since dense embedding vectors tend to yield uniformly positive similarities.
The top 10 000 relevant words form our candidate set \V, among which 100 random words are chosen as bases to compute the diversity term (Equation~\ref{eq:recommend}).
Given a maximum length ($\nS=40$) of the recommended list,
we create multiple functions $\{ f_i \}$ (Equation~\ref{eq:recommend}) for $i \le 8$,
where the $i$-th function is associated with a weight $(1/2)^i$, a trade-off value $\Ctrade=(i-1)/\nS$ and a budget of $5i$.
Thus, function~$f_i$ is dominant for the $i$-th length-5 subsequence, and $f_i$ with a large $i$ favors increasingly diverse items.
\para{Results.}
The results are shown in Figure~\ref{fig:msri}.
Every data point represents an average of three runs, each with a different random seed.
For the task of web page ranking (Figure~\ref{fig:msri}(a)), all algorithms except for \random perform almost equally well.
This implies a heavy overlap in relevant pages among different user intents.
For the task of finding diverse synonyms (Figure~\ref{fig:msri}(b)),
the ranking in terms of the objective is
$\greedyo \approx \exc > \topk > \random$
($7.139 \approx 7.136 > 6.652 > 3.527$).
We demonstrate two components of the objective, relevance and diversity, separately in Figure~\ref{fig:msri}(b).
Note that the \random algorithm is a classic method in finding diverse representatives, while
the \topk algorithm is optimal if the objective degenerates into a single modular term of relevance.
The word list returned by the \exc algorithm indeed becomes increasingly diverse down the list, while also placing relevant synonyms at the beginning.
The actual word list returned by \exc is presented in Section~\ref{section:synonyms}.
\section{Conclusions}
\label{section:conclusion}
In this paper, we study extensions of the \msr problem in two streaming models.
In the first, function-arriving model,
we show that a greedy algorithm guarantees a 2-approximation if items can be reused.
In the second item-arriving model,
we discover a reduction that turns the ranking problem into a constrained subset-selection problem, and
inherit approximation guarantees from standard submodular maximization.
The reduction further allows us to approximate the \msr problem subject to additional \nm-matroid constraints.
Finally, we describe several novel applications for the \msr problem, and examine empirical performance of the proposed algorithms.
One limitation of the \msr formulation is that individual budgets are not always known in advance.
Another limitation is that it may be computationally costly when both the number of demands and the number of items are large, especially for the item-arriving \msr problem.
With respect to ethical considerations of the work, our algorithm is a general submodularity-based framework for ranking,
which is not dedicated to a specific application.
We cannot identify strong negative societal concerns.
Many of the broader machine-learning issues, such as misuse of technology, biases in data, effects of automation in the society, and so on, are relevant to this work, as well, but in no greater degree than the whole machine-learning field.
\ifsupp
\appendix
\section{Appendix}
\subsection{Proof of Theorem \ref{theorem:inapprox}}
\label{section:theorem:inapprox}
\begin{proof}[Proof of Theorem \ref{theorem:inapprox}]
For the online-selection problem,
the input is a sequence of $\nV$ integers $a_1,\ldots,a_\nV$
that are revealed one after another.
The problem asks to select exactly one number immediately after it is revealed,
and the goal is to maximize the value of the selected number.
It is well-known that this problem does not admit a $\smallo(\nV)$ approximation \citep{kesselheim2016secretary}.
(For a proof, see \url{https://www.mpi-inf.mpg.de/fileadmin/inf/d1/teaching/summer16/random/yaosprinciple.pdf}.)
We observe that the online-selection problem is a special case of \mmrf.
To see this, consider the following instance:
let the universe set \V consist of an item $v$ and $\nV-1$ other dummy items.
Every arriving modular function $f$ has an identical form, with $f(v)=a$ and $f(u)=0$ for any item $u \ne v$.
Besides, every function $f$ arriving at step $t$ is given a budget that covers only that single step, i.e., $\nS(f)=\tim(f)=t$.
At the $t$-th step, a function as defined above with $f(v)=a_t$ arrives.
It is easy to see that the optimal objective value of \mmrf is equal to the largest number $a_t$.
Then, deciding the rank of item $v$ in the output sequence for the \mmrf problem
is equivalent to deciding which number in $\{a_t\}$ to select for the online-selection problem.
Hence, \mmrf generalizes the online-selection problem.
\end{proof}
We remark that the numbers $a_t$ in this construction may be as large as $\bigO(C^\nV)$ for a constant $C$.
\subsection{Missing proof in Theorem \ref{theorem:MSRp}}
\label{section:theorem:MSRp}
\begin{lemma}
\label{lemma:AB}
$\mathcal{A} \cap \mathcal{B}$ is a $\nm$-matroid.
\end{lemma}
\begin{proof}
Since $\M$ is a $\nm$-matroid over \V, then we can write
$\M = \M_1 \cap \cdots \cap \M_{\nm}$ where $\M_i$ a matroid.
For each matroid $\M_i$, we construct another system over $\V'$,
\begin{align*}
&\M_i' = \{ S' \subseteq \V': \V(S') \in \M_i \} \cap \mathcal{B} \\
&= \{ S' \subseteq \V': \V(S') \in \M_i, | S' \cap X_v | \le 1, \text{ for all } v \in \V \}.
\end{align*}
Notice that $\mathcal{A} = \bigcap_i \{ S' \subseteq \V': \V(S') \in \M_i \}$, and
$\mathcal{A} \cap \mathcal{B} = \bigcap_i \M_i'$.
We prove that $\mathcal{A} \cap \mathcal{B}$ is a \nm-matroid by verifying that each $\M_i'$ is a matroid.
To show that $\M_i'$ is a matroid,
downward closeness is obvious, and we only verify augmentation.
Given any $T, U \in \M_i'$ such that $|T| < |U|$,
we have $|\V(T)| = |T| < |U| = |\V(U)|$.
Since $\V(T), \V(U) \in \M_i$, there exists $v \in \V(U) \setminus \V(T)$ such that $\V(T) + v \in \M_i$.
Take $(v,t) \in U$ for that particular $v$; then $T + (v,t) \in \M_i'$.
Hence, $\M_i'$ is a matroid, implying that $\mathcal{A} \cap \mathcal{B} = \bigcap_i \M_i'$ is a \nm-matroid.
\end{proof}
\subsection{Synonyms to ``Trump'' in Twitter}
\label{section:synonyms}
trump
banks
warren
clinton
gates
newman
buffett
founder
reagan
carson
ceo
appoints
butcher
duffy
carlson
lowe
travis
costello
joins
airbnb
company
tesla
sanford
krause
dunlap
cassidy
does
shipbuilding
shooter
hired
rwanda
asml
hartman
barb
grandfather
rig
exchanging
lowes
varela
lamontagne
\subsection{Running time}
\label{section:runtime}
\begin{figure}
\caption{Running time.\label{fig:runtime}}
\end{figure}
To examine the running time of the proposed algorithms, we generate synthetic data with an increasing number of items or demands.
More specifically, we either fix the number of demands (100) while increasing the number of items,
or fix the number of items (1000) while increasing the number of demands.
Each demand is represented by a coverage function of a random subset of size 100, and is assigned a random budget between 1 and 100.
For the \msrf setting (\greedy), a random arrival time is given to each demand.
The results are displayed in Figure~\ref{fig:runtime}.
The running time of both the \exc and \greedy algorithms increases linearly in the number of demands.
As the number of items increases, the running time of both appears to grow sub-linearly rather than linearly, because many items do not intersect any demand subset.
\subsection{Additional experimental details}
\label{section:experiment-details}
All experiments were carried out on a server equipped with 24 processors of AMD Opteron(tm)
Processor 6172 (2.1 GHz), 62GB RAM, running Linux~2.6.32-754.35.1.el6.x86\_64.
We use Python~3.8.5.
\fi
\end{document} |
\begin{document}
\title{Lyapunov-based quantum synchronization in a designed optomechanical system}
\author{Wenlin Li}
\author{Chong Li}
\email{[email protected]}
\author{Heshan Song}
\email{[email protected]}
\affiliation{School of Physics and Optoelectronic Engineering, Dalian University of Technology, 116024, China}
\begin{abstract}
We extend the concepts of quantum complete synchronization and phase synchronization, first proposed in [Phys. Rev. Lett. 111, 103605 (2013)], to the more general notion of quantum generalized synchronization. Generalized synchronization can be regarded as a necessary condition for, or a more flexible variant of, complete synchronization, and we further propose and analyze its criterion and synchronization measure in this paper. As an example, we consider two typical kinds of generalized synchronization in a designed optomechanical system. Instead of constructing a special coupled synchronization system, we purposefully design extra control fields based on Lyapunov control theory. We find that the Lyapunov function can accommodate more flexible control objectives, which makes it well suited for generalized synchronization control, and that the control fields can be realized simply with a time-variant voltage. Finally, the existence of quantum entanglement in different kinds of generalized synchronization is also discussed.
\end{abstract}
\pacs{42.50.Wk, 05.45.Xt, 05.45.Mt, 03.65.Ud}
\maketitle
\section{Introduction}
Complete synchronization and phase synchronization between two continuous-variable (CV) quantum systems were first studied by \citeauthor{S1} in mesoscopic optomechanical systems \cite{S1}, together with the forward-looking prediction that quantum synchronization has potentially important applications in quantum information processing (QIP). Subsequently, quantum synchronization has been discussed in depth in a variety of quantum systems, such as cavity quantum electrodynamics \cite{S2}, atomic ensembles \cite{S3,S4}, van der Pol (VdP) oscillators \cite{S2,S5,S6,S11}, Bose--Einstein condensates \cite{S7}, superconducting circuits \cite{S8}, and so on. In these works, quantum synchronization has been extended from CV systems to finite-dimensional Hilbert spaces, which exhibit richer quantum properties, and the quantum correlations in synchronized quantum systems have been analyzed quantitatively \cite{S2,S3,S5,S9}. In addition, quantum synchronization criteria \cite{S3,S10,S11,S12} and synchronization between nodes in quantum networks remain active topics in the field of quantum synchronization theory.
Generally, existing quantum synchronization schemes can be attributed to the concept of coupling synchronization, i.e., one subsystem of the synchronous systems plays the role of controller acting on the other subsystem \cite{S1,S2,S3,S4,S5,S6,S11,S10,S9}. A significant advantage of this kind of direct linking is its high maneuverability. However, it still remains some difficulties to achieve the better applications in QIP with quantum synchronization. For the weak coupling of the quantum level, it is difficult to eliminate the difference between systems if it is big enough, and in fact, it is often in this case. Fundamentally, too strong driving or pump fields will compel systems to the form of the forced synchronization, just like what they are realized in previous works \cite{S1,S2,S3,S9}. This deficiency of coupling synchronization is a severe limitation, which causes other types of synchronizations are hardly realized in addition to complete and phase synchronizations. For examples, antiphase synchronization and projective synchronization, which are also widely applied in the classical synchronization fields \cite{GSS1,GSS2,GSS3}, were rarely discussed in quantum systems.
In traditional control theory, besides the coupling terms, there exists an external controller which is imposed on response system in order to provide a more outstanding control capability, implying that a designed controller can establish a more flexible relationship between two controlled subsystems \cite{Con}. It enlightens us to think about such problems: can a more generalized synchronization (like above mentioned antiphase and projective synchronizations) be extended and obtained in quantum domain? If it works, what kinds of the criterion and measurement are needed in this quantum generalized synchronization? And most importantly, how are the controllers designed in order to satisfy various requirements corresponding to different kinds of generalized synchronizations?
For responsing above questions, in this paper, we study the general properties of different synchronization forms and expand them into quantum mechanics based on Mari's complete synchronization theory. The criterion and measurement of the generalized synchronization are also proposed and they will be divided into two orders for calculating and analyzing conveniently. Instead of directly establishing interaction between two subsystems, here we utilize Lyapunov control theory which has exhibited comprehensive applications in target quantum state preparation and suppressing decoherence to design the external controller \cite{Con,QC1,QC2}. Although the Lyapunov function is constituted by expectation values, our results show that the quantum fluctuations can also be effectively subdued by the controller. In addition, the classical and quantum correlations are considered via calculating the Lyapunov exponent and Gaussian Negativity. In particular, we demonstrate that CV entanglement can exist in generalized synchronization, however, it will disappear if generalized synchronization tends to complete synchronization. This phenomenon is consistent with Mari's and Ameri's conclusions about entanglement in complete synchronization \cite{S1,S2}. Therefore, we believe existing quantum complete synchronization can be included in our generalized synchronization theory.
We organize this paper as follows: in Sec. \ref{quantum Generalized synchronization}, we introduce the definition and the properties, especially measurement method, of quantum generalized synchronization. In Sec. \ref{Lyapunov-based synchronization in optomechanical systems}, we analyze the dynamics of an optomechanical system, and realize two kinds of representative generalized synchronization (constant error synchronization in Sec. \ref{Constant error synchronization} and time delay synchronization in Sec. \ref{Time delay synchronization}) respectively via designing appropriate control field based on Lyapunov function theory. The correlation in generalized synchronization is also discussed in Sec. \ref{correlation in Generalized synchronization} and a summary is finally given in Sec. \ref{Results and discussion}.
\section{Quantum generalized synchronization}
\label{quantum Generalized synchronization}
We begin this section with a brief introduction to generalized synchronization and its extension to the quantum domain. Consider two general classical dynamical systems whose evolutions satisfy the following equations:
\begin{equation}
\begin{split}
\partial_{t}x_{1}(t)=F(x_1(t))+U_{c1}(x_1,x_2)+U_{e1}\\
\partial_{t}x_{2}(t)=F(x_2(t))+U_{c2}(x_1,x_2)+U_{e2}
\end{split}
\label{eq:sys}
\end{equation}
here $x_{1,2}(t)\in R^n$ are the state variables of the two systems at time $t$, $U_{c1,2}$ are the mutual couplings between the systems and, correspondingly, $U_{e1,2}$ are the external controllers acting on their respective systems. If there exist continuous mappings $h_{1}, h_{2}:R^n\rightarrow R^k$ such that the synchronization condition in Eq. (\ref{eq:classc}) is achieved as $t\rightarrow\infty$, then the two systems, through $h_{1}$, $h_{2}$, $x_{1}$ and $x_{2}$, evolve consistently.
\begin{equation}
\begin{split}
\lim_{t\rightarrow\infty}\left|h_1(x_1(t))-h_2(x_2(t))\right|\rightarrow 0
\end{split}
\label{eq:classc}
\end{equation}
This controllable correlation is called generalized synchronization, and it degenerates to ordinary complete synchronization or phase synchronization when $h_{i}(x_{i})=x_{i}$ or $h_{i}(x_{i})=\arg(x_{i})$, respectively. In analogy with Mari's measure $S_c(t):=\langle \hat{q}_{-}^2(t)+\hat{p}_{-}^2(t)\rangle^{-1}$ \cite{S1}, generalized synchronization can be extended from the classical to the quantum regime by considering the conjugate quantities simultaneously, and the corresponding measure can be defined as:
\begin{equation}
\begin{split}
S_g(t):=&\langle \hat{q}_{g-}^2(t)+ \hat{p}_{g-}^2(t)\rangle^{-1},
\label{eq:Mari}
\end{split}
\end{equation}
where $\hat{q}_{g-}:=(h_1(\hat{q}_1)-h_2(\hat{q}_2))/\sqrt{2}$ and $\hat{p}_{g-}:=(h_1(\hat{p}_1)-h_2(\hat{p}_2))/\sqrt{2}$ are the quantized generalized error operators.
Nevertheless, Eq. (\ref{eq:Mari}) is not easy to use directly in a concrete model. In some cases $h_{1,2}(\hat{q}_{1,2},\hat{p}_{1,2})$ are not strict physical descriptions because they are actually superoperators. On the other hand, $S_{g}(t)$ can be difficult to calculate in CV quantum systems. Therefore, in order to analyze quantum synchronization in CV mesoscopic systems, we adopt the mean-field approximation to simplify Eq. (\ref{eq:Mari}). The synchronization measure then splits into two parts: the first-order criterion describes the consistency of the expectation values:
\begin{equation}
\begin{split}
&\lim_{t\rightarrow\infty}\left|h_1(q_1(t))-h_2(q_2(t))\right|\rightarrow 0\\
&\lim_{t\rightarrow\infty}\left|h_1(p_1(t))-h_2(p_2(t))\right|\rightarrow 0
\label{eq:liwenclass}
\end{split}
\end{equation}
and the second-order measure quantifies the remaining quantum fluctuations:
\begin{equation}
\begin{split}
S'_g(t):=&\langle \delta q_{g-}^2(t)+ \delta p_{g-}^2(t)\rangle^{-1}
\label{eq:liwen}
\end{split}
\end{equation}
where $o$ denotes the expectation value $\langle \hat{o}\rangle$ and $\delta o:=\hat{o}-o$ for $o\in \{{q}_{g-},{p}_{g-}\}$.
The physical meanings of Eq. (\ref{eq:liwenclass}) and Eq. (\ref{eq:liwen}) explain quantum synchronization more explicitly: the expectation values of the systems are required to satisfy the ``classical" generalized synchronization conditions, while the perturbation of the synchronization behavior caused by quantum effects is squeezed as much as possible. Indeed, Eq. (\ref{eq:liwen}) becomes equivalent to Eq. (\ref{eq:Mari}) once the first-order criteria are satisfied. Conversely, if a designed external field not only drives the evolution of the systems toward the ``classical" generalized synchronization conditions but also increases the corresponding second-order measure $S'_{g}$, it can be regarded as an appropriate control field for realizing quantum synchronization. This is the basic idea behind the design of the control field.
In some particular models, if $h_1$ and $h_2$ are chosen as flat mappings, Eq. (\ref{eq:liwen}) can be further simplified to the fluctuation measure
\begin{equation}
\begin{split}
S'_g(t):=&\langle \delta q_{-}^2(t)+ \delta p_{-}^2(t)\rangle^{-1}
\label{eq:liwenll}
\end{split}
\end{equation}
Here ${q}_{-}:=({q}_1-{q}_2)/\sqrt{2}$ and ${p}_{-}:=({p}_1-{p}_2)/\sqrt{2}$. Compared with Eq. (\ref{eq:liwen}), Eq. (\ref{eq:liwenll}) is easier to obtain from the covariance matrix of the system.
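As an illustration, the first-order criterion of Eq. (\ref{eq:liwenclass}) can be checked numerically on simulated trajectories of the expectation values. The following minimal Python sketch (the function name, the time-grid convention and the tail-averaging window are our own assumptions, not part of the model) evaluates the late-time generalized error for a given pair of mappings $h_1$, $h_2$:
\begin{verbatim}
import numpy as np

def classical_gs_error(t, q1, q2, h1=lambda x: x, h2=lambda x: x, tail=0.2):
    """First-order criterion of Eq. (liwenclass): average of
    |h1(q1(t)) - h2(q2(t))| over the last `tail` fraction of the run."""
    i0 = int((1.0 - tail) * len(t))
    return np.mean(np.abs(h1(q1[i0:]) - h2(q2[i0:])))

# Constant error synchronization: h1(x) = x + c1, h2(x) = x + c2.
# Time delay synchronization: compare q1(t) with q2(t - tau) by shifting samples.
\end{verbatim}
A small late-time value for both the $q$ and $p$ errors indicates that the ``classical" part of the generalized synchronization condition is met; the quantum part is then assessed through $S'_g$.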
\section{Lyapunov-based synchronization in an optomechanical system}
\label{Lyapunov-based synchronization in optomechanical systems}
To explain the above theory of quantum generalized synchronization more intuitively, we analyze Lyapunov-based synchronization in an optomechanical system. Our model consists of two oscillators that couple to a common Fabry--P\'erot cavity (see Fig. \ref{fig:fig1}).
\begin{figure}[]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=3in]{fig1.eps}
\end{minipage}
\caption{Diagram of the optomechanical system corresponding to our model. Two oscillators are placed at the nodes of a Fabry--P\'erot cavity; they couple to the cavity field via linear optomechanical interactions, and their origins are set at their respective equilibrium positions.
\label{fig:fig1}}
\end{figure}
The Hamiltonian of this model can be divided into four parts: $H=H_{0}+H_{int}+H_{div}+H_c(t)$. Here $H_{0}=\omega_{l}a^{\dagger}a+\sum_{j=1,2}(\dfrac{\omega_{mj}}{2}\hat{p}_{j}^2+\dfrac{\omega_{mj}}{2}\hat{q}_{j}^2)$ is the sum of the free Hamiltonians of the optical field and the two oscillators. Moreover, $H_{int}=-g_{1}a^{\dagger}a\hat{q}_{1}-g_{2}a^{\dagger}a\hat{q}_{2}$ and $H_{div}=iE(a^{\dagger}e^{-i\omega_{d}t}-ae^{i\omega_{d}t})$ are the standard forms of the optomechanical interaction and the driving field, respectively \cite{H}. $H_c(t)$ is an external control Hamiltonian describing the coupling to the designed time-dependent fields. In our model we consider a control field that creates a deviation in the potential term of each oscillator. This effect can be regarded as a time-dependent rescaling of the mirror frequency \cite{LE2}, i.e.,
\begin{equation}
\dfrac{\omega_{mj}}{2}\hat{q}_{j}^2\rightarrow\dfrac{\omega_{mj}}{2}[1+C_j(t)]\hat{q}_{j}^2
\label{eq:contorl}
\end{equation}
A more detailed discussion of how to realize this form of control field in specific experiments is given in Sec. \ref{Results and discussion}. After moving to a rotating frame, the Hamiltonian of the whole system reads:
\begin{equation}
\begin{split}
H=&\sum_{j=1,2}\{\dfrac{\omega_{mj}}{2}\hat{p}_{j}^2+\dfrac{\omega_{mj}}{2}[1+C_{j}(t)]\hat{q}_{j}^2-g_{j}a^{\dagger}a\hat{q}_{j}\}\\&-\Delta a^{\dagger}a+iE(a^{\dagger}-a)
\label{eq:Hamilton}
\end{split}
\end{equation}
In the above expressions, $a$ ($a^{\dagger}$) is the annihilation (creation) operator of the optical field and, for $j=1,2$, $\hat{q}_{j}$ and $\hat{p}_{j}$ are the dimensionless position and momentum operators of oscillator $j$. $\Delta=\omega_d-\omega_l$ is the detuning between the laser drive and the cavity mode, $\omega_{mj}$ is the mechanical frequency, $g_{j}$ is the optomechanical coupling constant, and $E$ is the drive intensity. In order to solve the dynamics of the system, we include the dissipative effects in the Heisenberg picture and write the quantum Langevin equations as follows \cite{LE1,LE2,LE3}:
\begin{equation}
\begin{split}
&\partial_{t}a=(-\kappa+i\Delta)a+ig_{1}a\hat{q}_1+ig_{2}a\hat{q}_2+E+\sqrt{2\kappa}{a}^{in}\\
&\partial_{t}\hat{q}_j=\omega_{mj}\hat{p}_{j}\\
&\partial_{t}\hat{p}_j=-\omega_{mj}[1+C_j(t)]\hat{q}_{j}-\gamma_{j}\hat{p}_{j}+g_{j}a^{\dagger}a+\hat{\xi}_{j}
\label{eq:qle}
\end{split}
\end{equation}
Here $\kappa$ is the decay rate of the optical cavity and $\gamma_{j}$ is the mechanical damping rate of each oscillator. ${a}^{in}$ is the input bath operator, which satisfies $\langle{a}^{in}{(t)}{a}^{in,{\dagger}}{(t')}\rangle=\delta(t-t')$ \cite{Noise0}. Similarly, $\hat{\xi}_{j}(t)$ is the Brownian noise operator describing the dissipative friction force acting on the $j$th mirror. In the Markovian approximation, the autocorrelation function of $\hat{\xi}_{j}(t)$ satisfies $\langle\hat{\xi}_j{(t)}\hat{\xi}_{j'}{(t')}+\hat{\xi}_{j'}{(t')}\hat{\xi}_j{(t)}\rangle/2=\gamma_{j}(2\bar{n}_b+1)\delta_{jj'}\delta(t-t')$, where $\bar{n}_b=[\exp ({\hbar\omega_{mj}}/{k_BT})-1]^{-1}$ is the mean phonon number of the mechanical bath, which encodes the temperature $T$ \cite{Noise1,Noise2,Noise3}.
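For reference, the mean phonon number can be evaluated directly from the bath temperature and the mechanical frequency. The short Python sketch below is only illustrative; the resonator frequency of roughly $2\pi\times 30$ MHz is an assumption chosen to be broadly consistent with the values of $\bar{n}_b$ quoted later in the text, not a parameter of the model.
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34   # J s
KB   = 1.380649e-23      # J / K

def n_bar(omega_m, T):
    """Mean phonon number n_b = 1 / (exp(hbar*omega_m/(kB*T)) - 1)."""
    return 1.0 / np.expm1(HBAR * omega_m / (KB * T))

# Example (assumed ~30 MHz mechanical frequency):
print(n_bar(2 * np.pi * 30e6, 1e-3))   # of order 0.3 at T = 1 mK
\end{verbatim}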
Directly solving a set of nonlinear operator differential equations such as Eq. (\ref{eq:qle}) is quite difficult; however, a mean-field approximation is acceptable for our mesoscopic optomechanical model \cite{H,line1,line2,line3}. Moreover, as discussed in Sec. \ref{quantum Generalized synchronization}, the quantum synchronization measure modified by the mean-field approximation describes the generalized synchronization effect more accurately. Therefore, every operator in Eq. (\ref{eq:qle}) is rewritten as the sum of its expectation value and a small fluctuation around it, that is, $a(t)=A(t)+\delta a(t)$ and $\hat{o}(t)=o(t)+\delta o(t)$ for $o\in\{q_{1,2},p_{1,2}\}$. After neglecting the higher-order fluctuation terms, the ``classical" properties of our optomechanical system are described by the following nonlinear equations:
\begin{equation}
\begin{split}
&\partial_{t}A=(-\kappa+i\Delta)A+ig_{1}Aq_1+ig_{2}Aq_2+E\\
&\partial_{t}q_j=\omega_{mj}p_{j}\\
&\partial_{t}p_j=-\omega_{mj}[1+C_j(t)]q_{j}-\gamma_{j}p_{j}+g_{j}\vert A\vert^2
\label{eq:mean}
\end{split}
\end{equation}
while the corresponding quantum fluctuations are governed by:
\begin{equation}
\begin{split}
\partial_{t}\delta a=&(-\kappa+i\Delta)\delta a+\sum_{j=1,2} ig_{j}(q_{j}\delta a+A\delta q_{j})+\sqrt{2\kappa}{a}^{in}\\
\partial_{t}\delta q_j=&\omega_{mj}\delta p_{j}\\
\partial_{t}\delta p_j=&-\omega_{mj}[1+C_j(t)]\delta q_{j}-\gamma_{j}\delta {p}_{j}+g_{j}(A^{*}\delta{a}+A\delta a^{\dagger})+\hat{\xi}_{j}
\label{eq:fluctuations}
\end{split}
\end{equation}
We transform the annihilation operators as $a=(\hat{x}+i\hat{y})/\sqrt{2}$ and $a^{in}=(\hat{x}^{in}+i\hat{y}^{in})/\sqrt{2}$. Eq. (\ref{eq:fluctuations}) can then be written more concisely as $\partial_{t}\hat{u}=S\hat{u}+\hat{\zeta}$ by introducing the vectors $\hat{u}=(\delta x, \delta y, \delta q_1, \delta p_1, \delta q_2, \delta p_2)^{\top}$ and $\hat{\zeta}=(\hat{x}^{in}, \hat{y}^{in}, 0, \hat{\xi}_{1}, 0, \hat{\xi}_{2})^{\top}$, where $S$ is a time-dependent coefficient matrix (see Appendix A for its explicit form). In this representation, the evolution of the correlation matrix $D$, defined as
\begin{equation}
\begin{split}
D_{ij}(t)=D_{ji}(t)=\dfrac{1}{2}\langle\hat{u}_i(t)\hat{u}_j(t)+\hat{u}_j(t)\hat{u}_i(t)\rangle
\label{eq:correlationmatrix}
\end{split}
\end{equation}
can be derived directly by (see \cite{S10,line1,line2,line3})
\begin{equation}
\begin{split}
\partial_{t}D=SD+DS^{\top}+N
\label{eq:env}
\end{split}
\end{equation}
Here $N$ is the noise matrix; it takes the diagonal form $\textbf{diag}(\kappa, \kappa, 0, \gamma_{1}(2\bar{n}_b+1), 0, \gamma_{2}(2\bar{n}_b+1))$ when the noise correlations are defined by $\langle\hat{\zeta}_i(t)\hat{\zeta}_j(t')+\hat{\zeta}_j(t')\hat{\zeta}_i(t)\rangle/2= N_{ij}\delta(t-t')$. With the help of Eq. (\ref{eq:env}), the synchronization measure $S'_{g}$ introduced above can be expressed simply as
\begin{equation}
\begin{split}
S'_g(t)=&\langle\delta q_{-}^2(t)+\delta p_{-}^2(t)\rangle^{-1}\\
= &\{\dfrac{1}{2}[D_{33}(t)+D_{55}(t)-2D_{35}(t)]\\
&+\dfrac{1}{2}[D_{44}(t)+D_{66}(t)-2D_{46}(t)]\}^{-1}
\label{eq:Scenv}
\end{split}
\end{equation}
and its evolution is obtained by solving Eq. (\ref{eq:mean}) and Eq. (\ref{eq:env}) in turn. At this point, all dynamical properties of our system, including synchronization and correlation, can be obtained from the solutions of Eqs. (\ref{eq:mean}), (\ref{eq:env}) and (\ref{eq:Scenv}); a minimal numerical sketch of this procedure is given below. In the following subsections we introduce two common forms of generalized synchronization, constant error synchronization (Sec. \ref{Constant error synchronization}) and time delay synchronization (Sec. \ref{Time delay synchronization}), to demonstrate this ability to control synchronization, and we show how the controller is designed to realize these synchronizations.
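The following Python sketch integrates Eqs. (\ref{eq:mean}) and (\ref{eq:env}) with a simple Euler scheme and evaluates $S'_g$ through Eq. (\ref{eq:Scenv}). The drift matrix follows Appendix A; the parameter values are those quoted for Fig. \ref{fig:fig2}, while the step size, the integration time, the vacuum-like initial covariance and the uncontrolled choice $C_j(t)=0$ are our own assumptions.
\begin{verbatim}
import numpy as np

Delta, kappa, E = 1.0, 0.15, 10.0
wm = np.array([1.0, 1.005]); g = np.array([0.008, 0.005])
gamma = np.array([0.005, 0.005]); nb = 0.05

def drift_matrix(A, q, C):
    """Time-dependent coefficient matrix S of Appendix A."""
    d = Delta + g[0]*q[0] + g[1]*q[1]
    r2 = np.sqrt(2.0)
    S = np.zeros((6, 6))
    S[0, :] = [-kappa, -d, -r2*g[0]*A.imag, 0, -r2*g[1]*A.imag, 0]
    S[1, :] = [d, -kappa,  r2*g[0]*A.real, 0,  r2*g[1]*A.real, 0]
    S[2, 3] = wm[0]; S[4, 5] = wm[1]
    S[3, :] = [r2*g[0]*A.real, r2*g[0]*A.imag,
               -wm[0]*(1+C[0]), -gamma[0], 0, 0]
    S[5, :] = [r2*g[1]*A.real, r2*g[1]*A.imag,
               0, 0, -wm[1]*(1+C[1]), -gamma[1]]
    return S

N = np.diag([kappa, kappa, 0, gamma[0]*(2*nb+1), 0, gamma[1]*(2*nb+1)])
A, q, p = 0j, np.zeros(2), np.zeros(2)
D = np.eye(6) / 2.0            # vacuum-like initial covariance (assumption)
dt, steps = 1e-3, 200000
for _ in range(steps):
    C = np.zeros(2)            # uncontrolled; replace with Eq. (confin)/(confintdt)
    dA = (-kappa + 1j*Delta)*A + 1j*(g[0]*q[0] + g[1]*q[1])*A + E
    dq = wm * p
    dp = -wm*(1 + C)*q - gamma*p + g*np.abs(A)**2
    Smat = drift_matrix(A, q, C)
    D = D + dt*(Smat @ D + D @ Smat.T + N)
    A, q, p = A + dt*dA, q + dt*dq, p + dt*dp

Sg = 1.0 / (0.5*(D[2, 2] + D[4, 4] - 2*D[2, 4])
            + 0.5*(D[3, 3] + D[5, 5] - 2*D[3, 5]))
print(Sg)
\end{verbatim}
In practice a higher-order integrator (e.g. Runge--Kutta) and a finer step would be preferable; the sketch only fixes the order of operations: update the mean-field variables, rebuild $S$, and propagate $D$ with Eq. (\ref{eq:env}).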
\subsection{Constant error synchronization}
\label{Constant error synchronization}
Constant error synchronization can be regarded as a translation in phase space between the two systems. In Eq. (\ref{eq:classc}), if we let $h_{1}(x_1)=x_1+c_1$ and $h_{2}(x_2)=x_2+c_2$, the ``classical'' synchronization criterion becomes
\begin{equation}
\begin{split}
\lim_{t\rightarrow\infty}\left|x_1(t)-x_2(t)\right|\rightarrow c_2-c_1=c_{-}
\end{split}
\label{eq:constant}
\end{equation}
where $c_{-}$ is the so-called constant error. Since the controller acts directly on $\partial_{t}p_{j}$, we first consider only the evolution of the momentum operators and construct the following Lyapunov function from their expectation values:
\begin{equation}
\begin{split}
V_{p}(t)=(p_1(t)-p_2(t))^2
\label{eq:lyp}
\end{split}
\end{equation}
One can easily verify that $V_{p}\geqslant 0$, with $V_{p}=0$ only when $p_1(t)-p_2(t)=0$. Substituting Eq. (\ref{eq:mean}) into Eq. (\ref{eq:lyp}), the time derivative of $V_{p}$ is readily calculated for $C_1(t)=C_2(t)=C(t)$:
\begin{equation}
\begin{split}
\dot{V}_{p}(t)=&2(\dot{p}_1(t)-\dot{p}_2(t))(p_1(t)-p_2(t))\\
=&2\{[1+C(t)](\omega_{m2}q_{2}-\omega_{m1}q_{1})-\gamma_{1}p_{1}\\
&+\gamma_{2}p_{2}+(g_{1}-g_{2})\vert A\vert^2\}(p_1-p_2)
\label{eq:dotlyp}
\end{split}
\end{equation}
We find that $\dot{V}_{p}(t)$ is always non-positive if we set
\begin{equation}
\begin{split}
\dot{p}_1(t)-\dot{p}_2(t)=-k(p_1(t)-p_2(t))
\label{eq:ddotlyp}
\end{split}
\end{equation}
where $k$ is a positive real number. With this choice, $V_p$ simultaneously satisfies $V_{p}\geqslant 0$ and $\dot{V}_{p}=-2k(p_1(t)-p_2(t))^2\leqslant 0$. Under this condition the system gradually evolves to a stable state corresponding to the origin of the Lyapunov function, i.e.,
$p_1(t)=p_2(t)$ \cite{Con}. In order to enforce the required form of the Lyapunov function, the control field
is obtained from Eqs. (\ref{eq:dotlyp}) and (\ref{eq:ddotlyp}):
\begin{equation}
\begin{split}
C(t)=\dfrac{(\gamma-k)[p_1-p_2]-(g_1-g_2)\vert A\vert^2}{\omega_{m2}q_{2}-\omega_{m1}q_{1}}-1
\label{eq:confield}
\end{split}
\end{equation}
Here we have already set $\gamma_{1}=\gamma_{2}=\gamma$. We emphasize that all mechanical quantities not explicitly marked in this equation denote expectation values at time $t$ (e.g., $p_1:=p_1(t)$); otherwise confusion could arise in the following discussion of time delay synchronization.
We notice, however, that the control field $C(t)$ in Eq. (\ref{eq:confield}) can diverge when the trajectories of $q_1$ and $q_2$ approach each other. In particular, complete synchronization is not attainable when $\omega_{m1}\simeq \omega_{m2}$. To avoid this singularity, it is necessary to impose a lower bound on the denominator of the control field. The control field is therefore modified to the following form:
\begin{equation}
C(t)=\left\{
\begin{aligned}
&\dfrac{(\gamma-k)[p_1-p_2]-(g_1-g_2)\vert A\vert^2}{\omega_{m2}q_{2}-\omega_{m1}q_{1}}-1 \\
&(\text{when}\,\,\vert\omega_{m2}q_{2}-\omega_{m1}q_{1}\vert>c_-)\\
&0 \\
&(\text{when}\,\,\vert\omega_{m2}q_{2}-\omega_{m1}q_{1}\vert\leqslant c_-)\\
\end{aligned}
\right.
\label{eq:confin}
\end{equation}
\begin{figure}[b]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=3.3in]{fig2.eps}
\end{minipage}
\caption{Evolutions of the expectation values (blue dashed and red solid), control field, and errors (green dashed and black solid denote the control field imposed or removed, respectively); (a) and (b) correspond to the momentum and position operators of each oscillator. Here we set $\Delta=1$ as the unit and the other parameters are: $\omega_{m1}=1$, $\omega_{m2}=1.005$, $g_{1}=0.008$, $g_{2}=0.005$, $E=10$, $\kappa=0.15$, $\gamma=0.005$ and $\bar{n}_b=0.05$. For the control field the parameters are $k=2$ and $c_-=3$. (c): The limit cycles of the two oscillators in phase space. (d): Robustness of our control system.
\label{fig:fig2}}
\end{figure}
The physical mechanism corresponding to Eq. (\ref{eq:confin}) can be interpreted as follows. Assuming the gap between the two oscillators is initially small enough that $\vert\omega_{m2}q_{2}-\omega_{m1}q_{1}\vert\leqslant c_-$, the control field is inactive and the difference between the oscillators, which evolve under different Hamiltonians, increases; once this difference crosses the boundary $\vert\omega_{m2}q_{2}-\omega_{m1}q_{1}\vert> c_-$, the non-zero control field drags their orbits towards each other until the critical distance is reached again, at which point the control field is switched off once more.
As time goes on, the error evolution is confined to a stable limit ellipse. For suitable $k$ and $c_-$, this ellipse can be regarded as a fixed point if its major axis is small enough. In this case the two systems finally realize the synchronization $p_{1}-p_{2}=0$ and $q_{1}-q_{2}=c_-$. We are therefore confident that $C(t)$ in Eq. (\ref{eq:confield}) can steer the system to generalized synchronization; a minimal sketch of the switching rule is given below.
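The piecewise rule of Eq. (\ref{eq:confin}) translates directly into a controller that is evaluated at every integration step from the current expectation values. The Python sketch below is only an illustration; the function name and call pattern are assumptions, and the default values $k=2$, $c_-=3$ are those used in Fig. \ref{fig:fig2}.
\begin{verbatim}
def control_field(q1, p1, q2, p2, A, wm1, wm2, g1, g2, gamma,
                  k=2.0, c_minus=3.0):
    """Piecewise control field of Eq. (confin); all arguments are
    expectation values at time t."""
    denom = wm2*q2 - wm1*q1
    if abs(denom) <= c_minus:      # dead zone: controller switched off
        return 0.0
    return ((gamma - k)*(p1 - p2) - (g1 - g2)*abs(A)**2) / denom - 1.0
\end{verbatim}
Feeding this value back as $C_1(t)=C_2(t)=C(t)$ in the mean-field and covariance equations reproduces the switching behaviour described above.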
In Fig. \ref{fig:fig2} we provide simulation results for the two oscillators to verify the synchronization phenomenon under the control field. In Fig. \ref{fig:fig2}(a) one can see directly that the momenta of the two oscillators evolve consistently after $t=41.3$, which is exactly the time at which the control field becomes non-zero. Correspondingly, the momentum error stabilizes at zero instead of growing, in step with the control field. Fig. \ref{fig:fig2}(a) also shows quantitatively that the control field is a slowly varying function of time. Such a slowly varying control field improves the stability of the system and, at the same time, is easier to implement experimentally. In Fig. \ref{fig:fig2}(b) we plot the positions of the two oscillators and the corresponding errors. Although the two oscillators do not agree in their positions, the error is maintained at a constant value ($c_-$). Taking Fig. \ref{fig:fig2}(a) and Fig. \ref{fig:fig2}(b) together, we conclude that constant error synchronization between the two oscillators is achieved. In Fig. \ref{fig:fig2}(c) we show the ``tracks" of the two oscillators in phase space. Each oscillator evolves to its respective limit cycle and, as predicted above, constant error synchronization corresponds to a translation between the limit cycles in phase space. Fig. \ref{fig:fig2}(d) reports the robustness of our synchronization scheme. Here we assume that every quantity in Eq. (\ref{eq:confin}) carries Gaussian noise with standard deviation $\sigma$, i.e., $o(t)=\mathcal{N}(o(t),\sigma)$ for $o\in\{q_{1,2},p_{1,2},A\}$, and that the control field finally applied to the system
is noisy as well ($C(t)=\mathcal{N}(C(t),\sigma)$). The accuracy of the synchronization scheme in this case is described by the auxiliary quantity
\begin{equation}
\begin{split}
R(\sigma)=1-\dfrac{[(q_{-}-q'_{-})^2+(p_{-}-p'_{-})^2]^{1/2}}{\sqrt{2}r}
\label{eq:lubang}
\end{split}
\end{equation}
where $p'_{-},q'_{-}$ are the errors obtained with the biased control field and $r$ is the average radius of the limit cycle. One finds that $R(\sigma)$ always remains above $96\%$ even for $\sigma =0.02$. For these parameters, even with obvious fluctuations in the control field, the errors between the two oscillators still approach the stable values $0$ and $c_-$. We therefore conclude that our control is stable enough against such disturbances; a sketch of this robustness check is given below.
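The robustness quantity of Eq. (\ref{eq:lubang}) compares the late-time errors of a noiseless control run with those of a run in which every quantity entering Eq. (\ref{eq:confin}), as well as the applied field $C(t)$, carries Gaussian noise of standard deviation $\sigma$. A minimal sketch (the argument names and the way the two runs are produced are assumptions):
\begin{verbatim}
import numpy as np

def robustness(q_minus, p_minus, q_minus_noisy, p_minus_noisy, r):
    """R(sigma) of Eq. (lubang): ideal vs. noisy late-time errors,
    normalised by the mean limit-cycle radius r."""
    dev = np.hypot(q_minus - q_minus_noisy, p_minus - p_minus_noisy)
    return 1.0 - dev / (np.sqrt(2.0) * r)
\end{verbatim}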
\begin{figure}[]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=3.3in]{fig3.eps}
\end{minipage}
\caption{(a): Evolution of the modified synchronization measure. (b): Time-averaged synchronization measure for varying bath temperature. Blue corresponds to the control field imposed and red to the control field removed.
\label{fig:fig3}}
\end{figure}
Besides the expectation values, Mari's measure is also calculated to show that the quantum fluctuations are likewise squeezed by the control field. Fig. \ref{fig:fig3}(a) shows that $S'_g$ increases instead of tending to $0$, significantly outperforming the uncontrolled situation. We therefore conclude that the control field indeed achieves control at the quantum level rather than merely classical synchronization. In Fig. \ref{fig:fig3}(b) we also show how the bath temperature influences the synchronization: $S'_g(t)$ remains almost unchanged as long as the bath temperature stays below $1\,$mK ($\bar{n}_b=0.28$ for the corresponding phonon frequency in the MHz range), and it is still larger than that of the uncontrolled system even when $T$ rises to $10\,$mK ($\bar{n}_b=6.14$). This range is quite broad compared with other correlation-control schemes in optomechanical systems \cite{te1,te2}.
\subsection{Time delay synchronization}
\label{Time delay synchronization}
Time delay synchronization can be regarded as a constant phase shift between the two systems, whose ``tracks" in phase space overlap as in complete synchronization. In Eq. (\ref{eq:classc}), if we set $h_{1}(x_1)=x_1(t)$ and $h_{2}(x_2)=x_2(t-\tau)$, the ``classical'' synchronization criterion becomes
\begin{equation}
\begin{split}
\lim_{t\rightarrow\infty}\left|x_1(t)-x_2(t-\tau)\right|\rightarrow 0
\end{split}
\label{eq:timedecay}
\end{equation}
In analogy with the above discussion, we define the following Lyapunov function:
\begin{equation}
\begin{split}
V_{p}(t)=(p_1(t)-p_2(t-\tau))^2
\label{eq:lyptd}
\end{split}
\end{equation}
and its derivative can also be expressed as $\dot{V}_{p}=2(\dot{p}_1-\dot{p}_2(t-\tau))(p_1-p_2(t-\tau))$, where
\begin{equation}
\begin{split}
\dot{p}_1-&\dot{p}_2(t-\tau)=-\omega_{m1}[1+C_1]q_{1}-\gamma_{1}p_{1}+g_{1}\vert A\vert^2\\
&+\omega_{m2}q_{2}(t-\tau)+\gamma_{2}p_{2}(t-\tau)+g_{2}\vert A(t-\tau)\vert^2
\label{eq:dotlyptd}
\end{split}
\end{equation}
We emphasize again that, in the above expressions, all mechanical quantities not explicitly marked denote expectation values at time $t$. Similarly, we let
\begin{equation}
\begin{split}
\dot{p}_1-&\dot{p}_2(t-\tau)=-k(p_1(t)-p_2(t-\tau))
\label{eq:ddotlyptd}
\end{split}
\end{equation}
so that $V_{p}\geqslant 0$ and $\dot{V}_{p}=-2k(p_1-p_2(t-\tau))^2\leqslant 0$ are satisfied. The corresponding control field then becomes:
\begin{widetext}
\begin{equation}
\begin{split}
C_1(t)=\dfrac{(\gamma-k)[p_1-p_2(t-\tau)]-g_1\vert A\vert^2+g_2\vert A(t-\tau)\vert^2-\omega_{m2}q_{2}(t-\tau)}{-\omega_{m1}q_{1}}-1
\label{eq:confieldtdt}
\end{split}
\end{equation}
where we have set $C_2(t)=0$ and $\gamma_{1}=\gamma_{2}=\gamma$ for simplicity.
Eq. (\ref{eq:confieldtdt}) also has a singular point, at $q_1(t)=0$, so an artificial bound is again necessary to avoid an infinite control field. Unlike Eq. (\ref{eq:confin}), our purpose here is to make the two systems achieve complete synchronization after eliminating the time delay; the limitation is therefore placed on the whole control field rather than on the denominator. The control field then reads
\begin{equation}
C_1(t)=\left\{
\begin{aligned}
&\dfrac{(\gamma-k)[p_1-p_2(t-\tau)]-g_1\vert A\vert^2+g_2\vert A(t-\tau)\vert^2-\omega_{m2}q_{2}(t-\tau)}{-\omega_{m1}q_{1}}-1
\,\,\,\,\,\,\,\,&(-C_M\leqslant C_1\leqslant C_M)\\
&C_M
&(C_1 > C_M)\\
&-C_M
&(C_1 < -C_M)\\
\end{aligned}
\right.
\label{eq:confintdt}
\end{equation}
\end{widetext}
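Implementing Eq. (\ref{eq:confintdt}) numerically requires access to the delayed expectation values $q_2(t-\tau)$, $p_2(t-\tau)$ and $A(t-\tau)$, which can be kept in a buffer filled during the integration. The sketch below is an illustration only; the buffering scheme, the argument names and the defaults $k=2$, $C_M=1$ are assumptions (the latter matching Fig. \ref{fig:fig4}).
\begin{verbatim}
import numpy as np

def delayed_control(q1, p1, A, delayed, wm1, wm2, g1, g2, gamma,
                    k=2.0, C_M=1.0):
    """Clamped time-delay control field of Eq. (confintdt).
    `delayed` = (q2(t-tau), p2(t-tau), A(t-tau)), e.g. read from a
    ring buffer of past mean-field values."""
    q2d, p2d, Ad = delayed
    C1 = (((gamma - k)*(p1 - p2d) - g1*abs(A)**2
           + g2*abs(Ad)**2 - wm2*q2d) / (-wm1*q1)) - 1.0
    return float(np.clip(C1, -C_M, C_M))
\end{verbatim}
The clamp to $[-C_M, C_M]$ plays the role of the artificial bound discussed above, while $C_2(t)$ is kept at zero.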
In Fig. \ref{fig:fig4}(a) and (b),
\begin{figure}[b]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=3.3in]{fig4.eps}
\end{minipage}
\caption{Evolutions of the expectation values (blue dashed and red solid), control field, and errors (green dashed and black solid denote the control field imposed or removed, respectively); (a) and (b) correspond to the momentum and position operators of each oscillator. (c): The limit cycles of the two oscillators in phase space. (d): Robustness of the control system. Here we set $\tau=5$, $C_M=1$, and the other parameters are the same as in Fig. \ref{fig:fig2}.
\label{fig:fig4}}
\end{figure}
we show that the evolution of one oscillator appears as a time translation of the other, and that the errors tend to zero, as in complete synchronization, once the time delay is removed. The figure also exhibits a rapidly varying control field, in contrast to the behaviour found for constant error synchronization. Generally speaking, a rapidly varying control field
drives the system to synchronization faster. Figs. \ref{fig:fig4}(a) and (b) show that the two oscillators synchronize within a short time, $t<10$, roughly four times faster than in Figs. \ref{fig:fig2}(a) and (b). Furthermore, Fig. \ref{fig:fig4}(d) shows a synchronization accuracy of $96\%$, which means that the robustness remains high even under the fast oscillating control field. We also plot the limit cycles of the two oscillators in Fig. \ref{fig:fig4}(c). The two limit cycles almost coincide at all times except for short intervals near the origin and end points, where they evolve inconsistently because of the time delay.
\begin{figure}[b]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=3.3in]{fig5.eps}
\end{minipage}
\caption{(a): Evolution of the modified synchronization measure. (b): Time-averaged synchronization measure for varying bath temperature. Blue corresponds to the control field imposed and red to the control field removed.
\label{fig:fig5}}
\end{figure}
We also consider the evolution of the quantum fluctuations. Fig. \ref{fig:fig5}(a) shows that the fluctuations are further squeezed by the rapidly varying control field, and $S'_g(t)$ reaches a higher value than in constant error synchronization. Fig. \ref{fig:fig5}(b) also shows that the degradation of the synchronization caused by the environment is weakened, and $\bar{S}'_g(t)$ remains high even at $T=10\,$mK. From this perspective, we believe that a rapidly varying control field is also an appropriate form of synchronization control.
\section{Correlation in generalized synchronization}
\label{correlation in Generalized synchronization}
The correlation between synchronized quantum systems is an important object of study in QIP. Intuitively, if two different systems achieve consistency to some extent, a certain correlation inevitably exists between them. Indeed, the quantum mutual information, a measure of the total correlation, has been shown to behave in step with the synchronization measure for VdP oscillators \cite{S2}. However, it is very difficult to identify the type of this correlation; in particular, whether quantum entanglement exists in CV synchronization is controversial \cite{S1,S2,S6,S9}. We therefore pay particular attention to the properties of entanglement when considering the quantum correlation in our model.
The mean-field approximation used above makes it convenient to analyze the classical correlation and the quantum entanglement separately. The classical correlation can be verified by calculating the largest Lyapunov exponent of the errors \cite{S10,line2}, i.e., $L_{y}^{max}=\max\{L_y(p_{g-}),L_y(q_{g-})\}$, where
\begin{equation}
\begin{split}
L_y(o)=\lim_{t\rightarrow\infty}\dfrac{1}{t}\ln\left|\dfrac{\delta o(t)}{\delta o(0)}\right|,\,\,\,\,\,\, (o\in\{p_{g-},q_{g-}\})
\label{eq:lizhishu}
\end{split}
\end{equation}
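In practice Eq. (\ref{eq:lizhishu}) is evaluated over a finite run. The following sketch gives a finite-time estimate of the exponent for one error variable (taking the final sample of the run is our own simplification); $L_y^{max}$ is then the maximum over the generalized error variables.
\begin{verbatim}
import numpy as np

def lyapunov_exponent(t, err):
    """Finite-time estimate of Eq. (lizhishu):
    (1/T) * ln|delta o(T) / delta o(0)| for one error variable o."""
    return np.log(np.abs(err[-1] / err[0])) / (t[-1] - t[0])

# Ly_max = max(lyapunov_exponent(t, q_err), lyapunov_exponent(t, p_err))
\end{verbatim}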
Correspondingly, CV quantum entanglement is measured by the Gaussian negativity $E_n=\max\{0,-\log_{2}\nu_{-}\}$ \cite{Na1,Na2,LE3}, where
\begin{equation}
\begin{split}
\nu_{-}=\dfrac{\sqrt{\Delta(\Gamma)-\sqrt{\Delta(\Gamma)^2-4\det\Gamma}}}{2},
\label{eq:enenen}
\end{split}
\end{equation}
$\Delta(\Gamma)=\det A+\det B-2\det C$ and
\begin{equation}
\Gamma=\begin{pmatrix}
D_{33}& D_{34}& D_{35}& D_{36}\\
D_{43}& D_{44}& D_{45}& D_{46}\\
D_{53}& D_{54}& D_{55}& D_{56}\\
D_{63}& D_{64}& D_{65}& D_{66}
\end{pmatrix}
=\left( \begin{array}{ll}
A& C\\
C^{\top}& B
\end{array}
\right).
\label{eq:enenenmatrix}
\end{equation}
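Given the covariance matrix $D$ of Eq. (\ref{eq:correlationmatrix}), the negativity can be evaluated directly from the mechanical $4\times4$ block $\Gamma$. The sketch below is a direct transcription of Eqs. (\ref{eq:enenen})--(\ref{eq:enenenmatrix}) as written in the text (zero-based indices 2--5 of $D$ correspond to the mechanical entries 3--6 used above); it is not an independent derivation.
\begin{verbatim}
import numpy as np

def gaussian_negativity(D):
    """E_n = max(0, -log2(nu_minus)) from the mechanical block of D."""
    G = D[2:6, 2:6]
    A, B, C = G[:2, :2], G[2:, 2:], G[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2.0*np.linalg.det(C)
    nu_minus = np.sqrt(delta - np.sqrt(delta**2 - 4.0*np.linalg.det(G))) / 2.0
    return max(0.0, -np.log2(nu_minus))
\end{verbatim}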
\begin{figure}[]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=3.3in]{fig6.eps}
\end{minipage}
\caption{The largest Lyapunov exponents of the errors for varying $c_-$ (a) and for varying $\tau$ and $C_M$ (b), corresponding to constant error synchronization and time delay synchronization, respectively.
\label{fig:fig6}}
\end{figure}
In Fig. \ref{fig:fig6} we plot the largest Lyapunov exponent for different characteristic parameters ($c_-$, $\tau$ and $C_M$). The results show that the Lyapunov exponent remains negative under all the control fields considered. This performance is superior to our conclusion in Ref. \cite{S10}, where we found that the sign of the Lyapunov exponent in coupling synchronization depends sensitively on the coupling strength between the two optomechanical systems. This means that, compared with a direct coupling between the two systems, a designed Lyapunov function helps the system establish correlation at the level of expectation values more effectively.
In Fig. \ref{fig:fig7} we plot the maximum negativity for different characteristic parameters. Note that generalized synchronization is actually a necessary condition for complete synchronization, and the characteristic parameters can be regarded as measuring the gap between generalized and complete synchronization. Consequently, one can compare complete synchronization with generalized synchronization by setting $c_-=0$, $\tau=0$ and $C_M\rightarrow\infty$. Fig. \ref{fig:fig7}(a) shows that quantum entanglement does not appear until $c_-=2.65$; the negativity then rises and reaches its maximum at $c_-=3.13$, after which it declines and tends to zero again. The evolution of the maximum negativity can be understood as follows. When $c_-$ is small enough, the oscillators are effectively in complete synchronization and are then always separable. This agrees with Mari's conclusion that evolution along overlapping tracks hinders the generation of CV entanglement. As the two tracks move apart, entanglement becomes possible because the oscillators are no longer completely synchronized. Finally, if the distance between the two tracks keeps increasing, the quantum correlation between the oscillators is markedly weakened and the entanglement disappears again at large $c_-$. In Fig. \ref{fig:fig7}(b) similar conclusions are obtained for time delay synchronization. We therefore conclude that Gaussian entanglement can coexist with generalized synchronization, although it may be forbidden under complete synchronization.
\begin{figure}[]
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=3.3in]{fig7.eps}
\end{minipage}
\caption{The maximum negativity for varying $c_-$ (a) and for varying $\tau$ and $C_M$ (b), corresponding to constant error synchronization and time delay synchronization, respectively.
\label{fig:fig7}}
\end{figure}
\section{Results and discussion}
\label{Results and discussion}
Here we give a brief discussion of the parameters of our optomechanical system and of a scheme for realizing the control field. The parameters chosen in the simulations are similar
to those of Refs. \cite{S1,S9,Exx1,Exx2,line1}; however, in order to highlight the roles of the coupling and the controller, we appropriately reduce the driving intensity \cite{S10}. Beyond that, a deviation of the potential of the form of Eq. (\ref{eq:contorl}) has been investigated theoretically \cite{LE2}, and recent works report that such a deviation can be achieved using charged mechanical resonators \cite{MR1,MR2,MR3,MR4}. For example, \citeauthor{MR2} found that the effective frequency $\omega_{eff}^2=\omega_{m}^2[1+C(t)]$ can be controlled by a time-dependent bias gate voltage $U(t)=U_0f(t)$, the dimensionless factor $f(t)$ and $C(t)$ being related by $C(t)=\eta f(t)$, where
\begin{equation}
\eta=\dfrac{C_0U_0Q_{MR}}{\pi\varepsilon_0m\omega_m^2d^3}
\label{eq:contorlexex}
\end{equation}
as obtained in Ref. \cite{MR2}. We are therefore confident that the control terms in Eqs. (\ref{eq:confin}) and (\ref{eq:confintdt}) can readily be realized in a specific experiment.
In summary, we have extended Mari's theories of quantum complete synchronization and phase synchronization to the more general situation that we call quantum generalized synchronization. The corresponding control methods, criteria and measures are proposed quantitatively on the basis of the Lyapunov function, the Lyapunov exponent and a modified version of Mari's measure. This generalized synchronization can be regarded as a prerequisite of traditional quantum synchronization, and it allows more flexible relations to be established between two controlled systems. To confirm this, we have shown that some important properties of our model, such as entanglement under synchronization, agree with previous works when the generalized synchronization tends to complete synchronization. Designers can therefore implement different synchronizations according to their requirements on the basis of our theory. To make the theory more concrete, we have considered two common generalized synchronizations, namely constant error synchronization and time delay synchronization, in an optomechanical system. With the help of control fields designed from a Lyapunov function, we have shown that the two oscillators can meet the requirements of the various synchronizations. We believe that our work can be of value for quantum information transmission, quantum control, and quantum logic processing.
\appendix
\section{Parameters in the quantum Langevin equations}
\label{Parameters in quantum Langevin equations}
The concrete form of the coefficient matrix $S$ in Eq. (\ref{eq:env}) is
\begin{widetext}
\begin{equation}
S=
\begin{pmatrix}
-\kappa& -(\Delta+g_1q_1+g_2q_2)& -\sqrt{2}g_1\text{Im}(A)& 0& -\sqrt{2}g_2\text{Im}(A)& 0\\
\Delta+g_1q_1+g_2q_2& -\kappa& \sqrt{2}g_1\text{Re}(A)& 0& \sqrt{2}g_2\text{Re}(A)& 0\\
0& 0& 0& \omega_{m1}& 0& 0\\
\sqrt{2}g_1\text{Re}(A)& \sqrt{2}g_1\text{Im}(A)& -\omega_{m1}[1+C_{1}(t)]& -\gamma_{1}& 0& 0\\
0& 0& 0& 0& 0& \omega_{m2}\\
\sqrt{2}g_2\text{Re}(A)& \sqrt{2}g_2\text{Im}(A)& 0& 0& -\omega_{m2}[1+C_{2}(t)]& -\gamma_{2}
\end{pmatrix}
\label{eq:Smatrix}
\end{equation}
\end{widetext}
\begin{thebibliography}{35}
\expandafter\ifx\csname
natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname
urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Mari et~al.}(2014)\citenamefont{Mari}}]{S1}
\bibinfo{author}{\bibfnamefont{A.} \bibnamefont{Mari}},
\bibinfo{author}{\bibfnamefont{A.} \bibnamefont{Farace}},
\bibinfo{author}{\bibfnamefont{N.} \bibnamefont{Didier}},
\bibinfo{author}{\bibfnamefont{V.} \bibnamefont{Giovannetti}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Fazio}},
\bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{111}}, \bibinfo{pages}{103605-5}
(\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Ameri et~al.}(2014)\citenamefont{Ameri, V. and Eghbali-Arani, M. and Mari, A. and Farace, A. and Kheirandish, F. and Giovannetti, V. and Fazio, R.}}]{S2}
\bibinfo{author}{\bibfnamefont{V.} \bibnamefont{Ameri}},
\bibinfo{author}{\bibfnamefont{M.} \bibnamefont{Eghbali-Arani}},
\bibinfo{author}{\bibfnamefont{A.} \bibnamefont{Mari}},
\bibinfo{author}{\bibfnamefont{A.} \bibnamefont{Farace}},
\bibinfo{author}{\bibfnamefont{F.} \bibnamefont{Kheirandish}},
\bibinfo{author}{\bibfnamefont{V.} \bibnamefont{Giovannetti}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Fazio}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{91}}, \bibinfo{pages}{012301-6}
(\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Xu et~al.}(2014)\citenamefont{Xu}}]{S3}
\bibinfo{author}{\bibfnamefont{M.~H.} \bibnamefont{Xu}},
\bibinfo{author}{\bibfnamefont{D.~A.} \bibnamefont{Tieri}},
\bibinfo{author}{\bibfnamefont{E.~C.} \bibnamefont{Fine}},
\bibinfo{author}{\bibfnamefont{J.~K.} \bibnamefont{Thompson}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~J.}~\bibnamefont{Holland}},
\bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{113}}, \bibinfo{pages}{154101-5}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Xu et~al.}(2015)\citenamefont{Xu}}]{S4}
\bibinfo{author}{\bibfnamefont{M.~H.} \bibnamefont{Xu}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~J.}~\bibnamefont{Holland}},
\bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{114}}, \bibinfo{pages}{103601-5}
(\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Lee et~al.}(2014)\citenamefont{Lee, Tony E. and Sadeghpour, H. R}}]{S5}
\bibinfo{author}{\bibfnamefont{T.~E.} \bibnamefont{Lee}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.~R.}~\bibnamefont{Sadeghpour}},
\bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{111}}, \bibinfo{pages}{234101-5}
(\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Lee et~al.}(2014)\citenamefont{Lee, Tony E. and Chan, Ching-Kit and Wang, Shenshen}}]{S6}
\bibinfo{author}{\bibfnamefont{T.~E.} \bibnamefont{Lee}},
\bibinfo{author}{\bibfnamefont{C.~K.} \bibnamefont{Chan}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~S.}~\bibnamefont{Wang}},
\bibinfo{journal}{Phys. Rev. E}
\textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{022913-10}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Walter et~al.}(2014)\citenamefont{Walter, Stefan and Nunnenkamp, Andreas and Bruder, Christoph}}]{S11}
\bibinfo{author}{\bibfnamefont{S.} \bibnamefont{Walter}},
\bibinfo{author}{\bibfnamefont{A.} \bibnamefont{Nunnenkamp}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Bruder}},
\bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{112}}, \bibinfo{pages}{094102-5}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Samoylova et~al.}(2014)\citenamefont{Marina Samoylova, Nicola Piovella, Gordon R.M. Robb, Romain Bachelard, Philippe W. Courteille}}]{S7}
\bibinfo{author}{\bibfnamefont{M.} \bibnamefont{Samoylova}},
\bibinfo{author}{\bibfnamefont{N.} \bibnamefont{Piovella}},
\bibinfo{author}{\bibfnamefont{G.~R.~M.} \bibnamefont{Robb}},
\bibinfo{author}{\bibfnamefont{R.} \bibnamefont{Bachelard}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.~W.}~\bibnamefont{Courteille}},
\bibinfo{journal}{arXiv: 1503.05616v1}
(\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Samoylova et~al.}(2014)\citenamefont{Marina Samoylova, Nicola Piovella, Gordon R.M. Robb, Romain Bachelard, Philippe W. Courteille}}]{S8}
\bibinfo{author}{\bibfnamefont{Y.} \bibnamefont{Gul}},
\bibinfo{journal}{arXiv:1412.8497v1}
(\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Ying et~al.}(2014)\citenamefont{Lei Ying, Ying-Cheng Lai,and Celso Grebogi}}]{S9}
\bibinfo{author}{\bibfnamefont{L.} \bibnamefont{Ying}},
\bibinfo{author}{\bibfnamefont{Y.~C.} \bibnamefont{Lai}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Grebogi}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{90}}, \bibinfo{pages}{053810-6}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Li et~al.}(2015)\citenamefont{Li, W.L. and Li, C. and Song .H.S}}]{S10}
\bibinfo{author}{\bibfnamefont{W.~L.} \bibnamefont{Li}},
\bibinfo{author}{\bibfnamefont{C.} \bibnamefont{Li}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.~S.}~\bibnamefont{Song}},
\bibinfo{journal}{J. Phys. B: At. Mol. Opt. Phys.}
\textbf{\bibinfo{volume}{48}}, \bibinfo{pages}{035503-8}
(\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{Sun-Ho Choi and Seung-Yeal Ha}}]{S12}
\bibinfo{author}{\bibfnamefont{S.~H.} \bibnamefont{Choi}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.~Y.}~\bibnamefont{Ha}},
\bibinfo{journal}{J. Phys. A: Math. Theor.}
\textbf{\bibinfo{volume}{47}}, \bibinfo{pages}{355104-16}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{Yong Xu, Hua Wang, Yongge Li, Bin Pei}}]{GSS1}
\bibinfo{author}{\bibfnamefont{Y.} \bibnamefont{Xu}},
\bibinfo{author}{\bibfnamefont{H.} \bibnamefont{Wang}},
\bibinfo{author}{\bibfnamefont{Y.} \bibnamefont{Li}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Pei}},
\bibinfo{journal}{Commun. Nonlinear Sci. Numer. Simulat.}
\textbf{\bibinfo{volume}{19}}, \bibinfo{pages}{3735–-3744}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{Yong Xu, Hua Wang, Yongge Li, Bin Pei}}]{GSS2}
\bibinfo{author}{\bibfnamefont{X.} \bibnamefont{Wu}},
\bibinfo{author}{\bibfnamefont{H.} \bibnamefont{Wang}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Lu}},
\bibinfo{journal}{Nonlinear Analysis: Real World Applications}
\textbf{\bibinfo{volume}{13}}, \bibinfo{pages}{1441–-1450}
(\bibinfo{year}{2012}).
\bibitem[{\citenamefont{Lü et~al.}(2014)\citenamefont{Ling lu, Chengren Li, Liansong Chen, Linling Wei}}]{GSS3}
\bibinfo{author}{\bibfnamefont{L.} \bibnamefont{L\"u}},
\bibinfo{author}{\bibfnamefont{C.} \bibnamefont{Li}},
\bibinfo{author}{\bibfnamefont{L.} \bibnamefont{Chen}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Wei}},
\bibinfo{journal}{Commun. Nonlinear Sci. Numer. Simulat.}
\textbf{\bibinfo{volume}{19}}, \bibinfo{pages}{2843–-2849}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{Sun-Ho Choi and Seung-Yeal Ha}}]{Con}
\bibinfo{author}{\bibfnamefont{G.} \bibnamefont{Nicolis}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Prigogine}},
\bibinfo{journal}{\textit{self-organization in nonequilibrium systems}},
(\bibinfo{pages}{Wiley},
\bibinfo{pages}{New York},
\bibinfo{year}{1977}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{S. C. Hou, M. A. Khan, and X. X. Yi Daoyi Dong and Ian R. Petersen}}]{QC1}
\bibinfo{author}{\bibfnamefont{X.~X.} \bibnamefont{Yi}},
\bibinfo{author}{\bibfnamefont{X.~L.} \bibnamefont{Huang}},
\bibinfo{author}{\bibfnamefont{C.~F.} \bibnamefont{Wu}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.~H.}~\bibnamefont{Oh}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{052316-5}
(\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{S. C. Hou, M. A. Khan, and X. X. Yi Daoyi Dong and Ian R. Petersen}}]{QC2}
\bibinfo{author}{\bibfnamefont{S.~C.} \bibnamefont{Hou}},
\bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Khan}},
\bibinfo{author}{\bibfnamefont{X.~X.} \bibnamefont{Yi}},
\bibinfo{author}{\bibfnamefont{D.~Y.} \bibnamefont{Dong}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~R.}~\bibnamefont{Petersen}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{022321-8}
(\bibinfo{year}{2012}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{S. C. Hou, M. A. Khan, and X. X. Yi Daoyi Dong and Ian R. Petersen}}]{H}
\bibinfo{author}{\bibfnamefont{M.} \bibnamefont{Aspelmeyer}},
\bibinfo{author}{\bibfnamefont{T.~J.} \bibnamefont{Kippenberg}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Marquardt}},
\bibinfo{journal}{Rev. Mod. Phys.}
\textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{1391--1452}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Farace et~al.}(2014)\citenamefont{Alessandro Farace and Vittorio Giovannetti}}]{LE2}
\bibinfo{author}{\bibfnamefont{A.} \bibnamefont{Farace}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Giovannetti}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{013820-12}
(\bibinfo{year}{2012}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{S. C. Hou, M. A. Khan, and X. X. Yi Daoyi Dong and Ian R. Petersen}}]{LE1}
\bibinfo{author}{\bibfnamefont{C.} \bibnamefont{Genes}},
\bibinfo{author}{\bibfnamefont{A.} \bibnamefont{Mari}},
\bibinfo{author}{\bibfnamefont{D.} \bibnamefont{Vitalii}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Tombesi}},
\bibinfo{journal}{Adv. At. Mol. Opt. Phys.}
\textbf{\bibinfo{volume}{57}}, \bibinfo{pages}{33-86}
(\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{S. C. Hou, M. A. Khan, and X. X. Yi Daoyi Dong and Ian R. Petersen}}]{LE3}
\bibinfo{author}{\bibfnamefont{Y.~D.} \bibnamefont{Wang}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~A.}~\bibnamefont{Clerk}},
\bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{110}}, \bibinfo{pages}{253601-5}
(\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{X. W. Xu Y,Li}}]{Noise0}
\bibinfo{author}{\bibfnamefont{C.~K.} \bibnamefont{Law}}
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{51}}, \bibinfo{pages}{2537}
(\bibinfo{year}{1995}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{Sun-Ho Choi and Seung-Yeal Ha}}]{Noise1}
\bibinfo{author}{\bibfnamefont{C.~W.} \bibnamefont{Gardiner}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Zoller}},
\bibinfo{journal}{\textit{Quantum Noise}},
(\bibinfo{pages}{Springer, Berlin},
\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{Yong-Chun Liu Yu-Feng Shen Qihuang Gong and Yun-Feng Xiao}}]{Noise2}
\bibinfo{author}{\bibfnamefont{Y.~C.} \bibnamefont{Liu}},
\bibinfo{author}{\bibfnamefont{Y.~F.} \bibnamefont{Shen}},
\bibinfo{author}{\bibfnamefont{Q.~H.} \bibnamefont{Gong}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.~F.}~\bibnamefont{Xiao}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{89}}, \bibinfo{pages}{053821-8}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{X. W. Xu Y,Li}}]{Noise3}
\bibinfo{author}{\bibfnamefont{X.~W.} \bibnamefont{Xu}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Li}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{91}}, \bibinfo{pages}{053852-8}
(\bibinfo{year}{2015}).
\bibitem[{\citenamefont{Choi et~al.}(2014)\citenamefont{A. Mari and J. Eisert}}]{line1}
\bibinfo{author}{\bibfnamefont{A.} \bibnamefont{Mari}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Eisert}},
\bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{103}}, \bibinfo{pages}{213603-4}
(\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Wang et~al.}(2014)\citenamefont{G. L. Wang, L. Huang, Y. C. Lai, and C. Grebogi}}]{line2}
\bibinfo{author}{\bibfnamefont{G.~L.} \bibnamefont{Wang}},
\bibinfo{author}{\bibfnamefont{L.} \bibnamefont{Huang}},
\bibinfo{author}{\bibfnamefont{Y.~C.} \bibnamefont{Lai}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Grebogi}},
\bibinfo{journal}{Phys. Rev. Lett.}
\textbf{\bibinfo{volume}{112}}, \bibinfo{pages}{110406-5}
(\bibinfo{year}{2014}).
\bibitem[{\citenamefont{Wang et~al.}(2014)\citenamefont{G. L. Wang, L. Huang, Y. C. Lai, and C. Grebogi}}]{line3}
\bibinfo{author}{\bibfnamefont{J.} \bibnamefont{Larson}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Horsdal}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{021804(R)-4}
(\bibinfo{year}{2011}).
\bibitem[{\citenamefont{Wang et~al.}(2014)\citenamefont{G. L. Wang, L. Huang, Y. C. Lai, and C. Grebogi}}]{te1}
\bibinfo{author}{\bibfnamefont{J.~Q.} \bibnamefont{Liao}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Nori}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{88}}, \bibinfo{pages}{023853}
(\bibinfo{year}{2013}).
\bibitem[{\citenamefont{Wang et~al.}(2014)\citenamefont{G. L. Wang, L. Huang, Y. C. Lai, and C. Grebogi}}]{te2}
\bibinfo{author}{\bibfnamefont{W.~Z.} \bibnamefont{Zhang}},
\end{thebibliography}
\end{document} |
\begin{document}
\title{Cyclic derivations, species realizations and potentials}
\begin{abstract}
In this survey paper we give an overview of a generalization, introduced by R. Bautista and the author, of the theory of mutation of quivers with potential developed in 2007 by Derksen-Weyman-Zelevinsky. This new construction replaces the algebra $F^{n}$ underlying a quiver by an arbitrary finite dimensional semisimple $F$-algebra, where $F$ is any field. We give a brief account of the results concerning this generalization and its main consequences.
\end{abstract}
\section{Introduction}
Since the development of the theory of quivers with potentials by Derksen-Weyman-Zelevinsky in \cite{5}, the search for a general concept of \emph{mutation of a quiver with potential} has drawn a lot of attention. The theory of quivers with potentials has proven useful in many areas of mathematics, such as cluster algebras, Teichm\"{u}ller theory, KP solitons, mirror symmetry and Poisson geometry, among many others. There have been several generalizations of the notions of quiver with potential and of mutation in which the underlying $F$-algebra, $F$ being a field, is replaced by more general algebras; see \cite{4,8,10}.
This paper is organized as follows. In Section \ref{section2}, we review the preliminaries taken from \cite{1} and \cite{9}. Instead of working with an ordinary quiver, we consider the completion of the tensor algebra of $M$ over $S$, where $M$ is an $S$-bimodule and $S$ is a finite dimensional semisimple $F$-algebra. We then show how to construct a cyclic derivation, in the sense of Rota-Sagan-Stein \cite{12}, on this completion. Finally, we introduce natural generalizations of the concepts of potential, right-equivalence and cyclic equivalence as defined in \cite{5}.
In Section \ref{section3}, we describe a generalization of the so-called \emph{Splitting theorem} (\cite[Theorem 4.6]{5}) and see how this theorem allows us to lift the notion of mutation of a quiver with potential to this more general setting.
Finally, in Section \ref{section4}, we recall the notion of species realizations and describe how the generalization given in \cite{1} allows us to give a partial affirmative answer to a question raised by J. Geuenich and D. Labardini-Fragoso in \cite{7}.
\pagebreak
\section{Preliminaries} \label{section2}
The following material is taken from \cite{1} and \cite{9}.
\begin{definition} Let $F$ be a field and let $D_{1},\ldots,D_{n}$ be division rings, each containing $F$ in its center and of finite dimension over $F$. Let $S=\displaystyle \prod_{i=1}^{n} D_{i}$ and let $M$ be an $S$-bimodule of finite dimension over $F$. Define the algebra of formal power series over $M$ as the set:
\begin{center}
$\mathcal{F}_{S}(M)=\left\{\displaystyle \sum_{i=0}^{\infty} a(i): a(i) \in M^{\otimes i}\right\}$
\end{center}
where $M^{\otimes 0}=S$. Note that $\mathcal{F}_{S}(M)$ is an associative unital $F$-algebra where the product is the one obtained by extending the product of the tensor algebra $T_{S}(M)=\displaystyle \bigoplus_{i=0}^{\infty} M^{\otimes i}$.
\end{definition}
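When every $D_{i}$ equals $F$ and $M$ is the arrow span of a quiver $Q$ on the vertices $1,\ldots,n$ (so that $\dim_{F} e_{i}Me_{j}$ counts the arrows joining $j$ and $i$, in whichever orientation convention one prefers), $\mathcal{F}_{S}(M)$ is simply the complete path algebra of $Q$, and one recovers the setting of \cite{5}. The definitions below may be read with this special case in mind.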
Let $\{e_{1},\ldots,e_{n}\}$ be a complete set of primitive orthogonal idempotents of $S$.
\begin{definition} An element $m \in M$ is legible if $m=e_{i}me_{j}$ for some idempotents $e_{i},e_{j}$ of $S$.
\end{definition}
\begin{definition} Let $Z=\displaystyle \sum_{i=1}^{n} Fe_{i} \subseteq S$. We say that $M$ is $Z$-freely generated by a $Z$-subbimodule $M_{0}$ of $M$ if the map $\mu_{M}: S \otimes_{Z} M_{0} \otimes_{Z} S \rightarrow M$ given by $\mu_{M}(s_{1} \otimes m \otimes s_{2})=s_{1}ms_{2}$ is an isomorphism of $S$-bimodules. In this case we say that $M$ is an $S$-bimodule which is $Z$-freely generated.
\end{definition}
Throughout this paper we will assume that $M$ is $Z$-freely generated by $M_{0}$.
\begin{definition} Let $A$ be an associative unital $F$-algebra. A cyclic derivation, in the sense of Rota-Sagan-Stein \cite{12}, is an $F$-linear function $\mathfrak{h}: A \rightarrow \operatorname{End}_{F}(A)$ such that:
\begin{equation}
\mathfrak{h}(f_{1}f_{2})(f)=\mathfrak{h}(f_{1})(f_{2}f)+\mathfrak{h}(f_{2})(ff_{1})
\end{equation}
for all $f,f_{1},f_{2} \in A$. Given a cyclic derivation $\mathfrak{h}$, we define the associated cyclic derivative $\delta: A \rightarrow A$ as $\delta(f)=\mathfrak{h}(f)(1)$.
\end{definition}
We now construct a cyclic derivative on $\mathcal{F}_{S}(M)$. First, we define a cyclic derivation on the tensor algebra $A=T_{S}(M)$ as follows. Consider the map:
\begin{center}
$\hat{u}: A \times A \rightarrow A$
\end{center}
given by $\hat{u}(f,g)=\displaystyle \sum_{i=1}^{n} e_{i}gfe_{i}$ for every $f,g \in A$. This is an $F$-bilinear map which is $Z$-balanced. By the universal property of the tensor product, there exists a linear map $u: A \otimes_{Z} A \rightarrow A$ such that $u(a \otimes b)=\hat{u}(a,b)$.
Now we define an $F$-derivation $\Delta: A \rightarrow A \otimes_{Z} A$ as follows. For $s \in S$, we define $\Delta(s)=1 \otimes s - s \otimes 1$; for $m \in M_{0}$, we set $\Delta(m)=1 \otimes m$. Then we define $\Delta: M \rightarrow A \otimes_{Z} A$ by:
\begin{center}
$\Delta(s_{1}ms_{2})=\Delta(s_{1})ms_{2}+s_{1}\Delta(m)s_{2}+s_{1}m\Delta(s_{2})$
\end{center}
for $s_{1},s_{2} \in S$ and $m \in M_{0}$.
Note that the above map is well-defined since $M \cong S \otimes_{Z} M_{0} \otimes_{Z} S$ via the multiplication map $\mu_{M}$. Once we have defined $\Delta$ on $M$, we can extend it to an $F$-derivation on $A$. Now we define $\mathfrak{h}: A \rightarrow \operatorname{End}_{F}(A)$ as follows:
\begin{center}
$\mathfrak{h}(f)(g)=u(\Delta(f)g)$
\end{center}
We have
\begin{align*}
\mathfrak{h}(f_{1}f_{2})(f)&=u(\Delta(f_{1}f_{2})f) \\
&=u(\Delta(f_{1})f_{2}f)+u(f_{1}\Delta(f_{2})f) \\
&=u(\Delta(f_{1})f_{2}f)+u(\Delta(f_{2})ff_{1}) \\
&=\mathfrak{h}(f_{1})(f_{2}f)+\mathfrak{h}(f_{2})(ff_{1}).
\end{align*}
It follows that $\mathfrak{h}$ is a cyclic derivation on $T_{S}(M)$. We now extend $\mathfrak{h}$ to $\mathcal{F}_{S}(M)$. Let $f,g \in \mathcal{F}_{S}(M)$; then $\mathfrak{h}(f(i))(g(j)) \in M^{\otimes(i+j)}$, and thus we define $\mathfrak{h}(f)(g)(l)=\displaystyle \sum_{i+j=l} \mathfrak{h}(f(i))(g(j))$ for every non-negative integer $l$.
In \cite[Proposition 2.6]{9}, it is shown that the $F$-linear map $\mathfrak{h}: \mathcal{F}_{S}(M) \rightarrow \operatorname{End}_{F}(\mathcal{F}_{S}(M))$ is a cyclic derivation. Using this fact we obtain a cyclic derivative $\delta$ on $\mathcal{F}_{S}(M)$ given by
\begin{center}
$\delta(f)=\mathfrak{h}(f)(1)$.
\end{center}
\begin{definition} Let $\mathcal{C}$ be a subset of $M$. We say that $\mathcal{C}$ is a right $S$-local basis of $M$ if every element of $\mathcal{C}$ is legible and if for each pair of idempotents $e_{i},e_{j}$ of $S$, we have that $\mathcal{C} \cap e_{i}Me_{j}$ is a $D_{j}$-basis for $e_{i}Me_{j}$.
\end{definition}
We note that a right $S$-local basis $\mathcal{C}$ induces a dual basis $\{u,u^{\ast}\}_{u \in \mathcal{C}}$, where $u^{\ast}: M_{S} \rightarrow S_{S}$ is the morphism of right $S$-modules defined by $u^{\ast}(v)=0$ if $v \in \mathcal{C} \setminus \{u\}$; and $u^{\ast}(u)=e_{j}$ if $u=e_{i}ue_{j}$.
Let $T$ be a $Z$-local basis of $M_{0}$ and let $L$ be a $Z$-local basis of $S$. The former means that for each pair of distinct idempotents $e_{i}$,$e_{j}$ of $S$, $T \cap e_{i}Me_{j}$ is an $F$-basis of $e_{i}M_{0}e_{j}$; the latter means that $L(i)=L \cap e_{i}S$ is an $F$-basis of the division algebra $e_{i}S=D_{i}$. It follows that the non-zero elements of the set $\{sa: s \in L, a \in T\}$ form a right $S$-local basis of $M$. Therefore, for every $s \in L$ and $a \in T$, we have the map $(sa)^{\ast} \in \operatorname{Hom}_{S}(M_{S},S_{S})$ induced by the dual basis.
\begin{definition} Let $\mathcal{D}$ be a subset of $M$. We say that $\mathcal{D}$ is a left $S$-local basis of $M$ if every element of $\mathcal{D}$ is legible and if for each pair of idempotents $e_{i},e_{j}$ of $S$, we have that $\mathcal{D} \cap e_{i}Me_{j}$ is a $D_{i}$-basis for $e_{i}Me_{j}$.
\end{definition}
Let $\psi$ be any element of $\mathrm{Hom}_{S}(M_{S},S_{S})$. We will extend $\psi$ to an $F$-linear endomorphism of $\mathcal{F}_{S}(M)$, which we will denote by $\psi_{\ast}$.
First, we define $\psi_{\ast}(s)=0$ for $s\in S$; and for $M^{\otimes l}$, where $l \geq 1$, we define $\psi_{\ast}(m_{1}\otimes \dotsm \otimes m_{l})=\psi (m_{1})m_{2} \otimes \cdots \otimes m_{l}\in M^{\otimes (l-1)}$ for $m_{1},\dots,m_{l}\in M$. Finally, for $f\in \mathcal{F}_{S}(M)$ we define $\psi_{\ast} (f)\in \mathcal{F}_{S}(M)$ by setting
$\psi_{\ast} (f)(l-1)=\psi_{\ast}(f(l))$ for each integer $l\geq 1$. Then we define
\begin{center}
$\psi_{\ast} (f)=\displaystyle \sum _{l=0}^{\infty }\psi_{\ast} (f(l)).$
\end{center}
\begin{definition} Let $\psi \in M^{*}=\mathrm{Hom}_{S}(M_{S},S_{S})$ and $f\in \mathcal{F}_{S}(M)$. We define $\delta _{\psi }:\mathcal{F}_{S}(M)\rightarrow \mathcal{F}_{S}(M)$ as
\begin{center}
$\delta _{\psi }(f)=\psi_{\ast}(\delta (f))=\displaystyle \sum _{l=0}^{\infty }\psi_{\ast}(\delta (f(l))).$
\end{center}
\end{definition}
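As a simple illustration (with all choices made purely for the example), take $n=1$, $S=Z=F$ and $M=M_{0}$ with $Z$-local basis $T=\{x,y,z\}$, so that $\mathcal{F}_{S}(M)$ is the algebra of formal power series in three non-commuting variables. Here $L=\{1\}$, $u(f \otimes g)=gf$ and $\Delta(x)=1 \otimes x$ for $x \in T$, and for the potential $P=xyz$ one computes
\begin{center}
$\delta(xyz)=\mathfrak{h}(x)(yz)+\mathfrak{h}(y)(zx)+\mathfrak{h}(z)(xy)=xyz+yzx+zxy$,
\end{center}
so $\delta$ sums the cyclic permutations of a cyclic word. Applying $\psi_{\ast}$ with $\psi=x^{\ast}$ kills the terms not beginning with $x$ and removes the leading $x$ from the remaining one, giving $\delta_{x^{\ast}}(xyz)=yz$, the familiar cyclic derivative of the cycle $xyz$ with respect to $x$.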
\begin{definition} Given an $S$-bimodule $N$ we define the \emph{cyclic part} of $N$ as $N_{cyc}:=\displaystyle \sum_{j=1}^{n} e_{j}Ne_{j}$.
\end{definition}
\begin{definition} A \emph{potential} $P$ is an element of $\mathcal{F}_{S}(M)_{cyc}$.
\end{definition}
Motivated by the \emph{Jacobian ideal} introduced in \cite{5}, we define an analogous two-sided ideal of $\mathcal{F}_{S}(M)$.
For each legible element $a$ of $e_{i}Me_{j}$, we let $\sigma(a)=i$ and $\tau(a)=j$.
\begin{definition} Let $P$ be a potential in $\mathcal{F}_{S}(M)$, we define a two-sided ideal $R(P)$ as the closure of the two-sided ideal of $\mathcal{F}_{S}(M)$ generated by all the elements $X_{a^{\ast}}(P)=\displaystyle \sum_{s \in L(\sigma(a))} \delta_{(sa)^{\ast}}(P)s$, $a \in T$.
\end{definition}
In \cite[Theorem 5.3]{1}, it is shown that $R(P)$ is invariant under algebra isomorphisms that fix pointwise $S$. Furthermore, one can show that $R(P)$ is independent of the choice of the $Z$-subbimodule $M_{0}$ and also independent of the choice of $Z$-local bases for $S$ and $M_{0}$.
\begin{definition} An algebra with potential is a pair $(\mathcal{F}_{S}(M),P)$ where $P$ is a potential in $\mathcal{F}_{S}(M)$ and $M_{cyc}=0$.
\end{definition}
We denote by $[\mathcal{F}_{S}(M),\mathcal{F}_{S}(M)]$ the closure in $\mathcal{F}_{S}(M)$ of the $F$-subspace generated by all the elements of the form $[f,g]=fg-gf$ with $f,g\in \mathcal{F}_{S}(M).$
\begin{definition} Two potentials $P$ and $P'$ are called cyclically equivalent if $P-P' \in [\mathcal{F}_{S}(M),\mathcal{F}_{S}(M)]$.
\end{definition}
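In the one-vertex example above, the potentials $xyz$ and $yzx$ are cyclically equivalent, since
\begin{center}
$xyz-yzx=x(yz)-(yz)x=[x,yz] \in [\mathcal{F}_{S}(M),\mathcal{F}_{S}(M)]$.
\end{center}
Note also that the cyclic Leibniz rule gives $\delta([f,g])=\mathfrak{h}(f)(g)+\mathfrak{h}(g)(f)-\mathfrak{h}(g)(f)-\mathfrak{h}(f)(g)=0$, so cyclically equivalent potentials have the same cyclic derivatives.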
\begin{definition} We say that two algebras with potential $(\mathcal{F}_{S}(M),P)$ and $(\mathcal{F}_{S}(M'),Q)$ are right-equivalent if there exists an algebra isomorphism $\varphi: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M')$, with $\varphi|_{S}=id_{S}$, such that $\varphi(P)$ is cyclically equivalent to $Q$.
\end{definition}
The following construction follows the one given in \cite[p.20]{5}. Let $k$ be an integer in $[1,n]$ and $\bar{e}_{k}=1-e_{k}$. Using the $S$-bimodule $M$, we define a new $S$-bimodule $\mu_{k}M=\widetilde{M}$ as:
\begin{center}
$\widetilde{M}:=\bar{e}_{k}M\bar{e}_{k} \oplus Me_{k}M \oplus (e_{k}M)^{\ast} \oplus ^{\ast}(Me_{k})$
\end{center}
where $(e_{k}M)^{\ast}=\operatorname{Hom}_{S}((e_{k}M)_{S},S_{S})$, and $^{\ast}(Me_{k})=\operatorname{Hom}_{S}(_{S}(Me_{k}),_{S}S)$. One can show (see \cite[Lemma 8.7]{1}) that $\mu_{k}M$ is $Z$-freely generated.
\begin{definition} Let $P$ be a potential in $\mathcal{F}_{S}(M)$ such that $e_{k}Pe_{k}=0$. Following \cite{5}, we define
\begin{center}
$\mu_{k}P:=[P]+\displaystyle \sum_{sa \in _{k}\hat{T},bt \in \tilde{T}_{k}}[btsa]((sa)^{\ast})(^{\ast}(bt))$
\end{center}
where
\begin{align*}
_{k}\hat{T}&=\{sa: s \in L(k),a \in T \cap e_{k}M\}, \\
\tilde{T}_{k}&=\{bt: b \in T \cap Me_{k}, t \in L(k)\}.
\end{align*}
\end{definition}
\section{Mutations and potentials} \label{section3}
Let $P=\displaystyle \sum_{i=1}^{N} a_{i}b_{i}+P'$ be a potential in $\mathcal{F}_{S}(M)$, where $A=\{a_{1},b_{1},\ldots,a_{N},b_{N}\}$ is contained in a $Z$-local basis $T$ of $M_{0}$ and $P' \in \mathcal{F}_{S}(M)^{\geq 3}$. Let $L_{1}$ denote the complement of $A$ in $T$, let $N_{1}$ be the $F$-vector subspace of $M$ generated by $A$ and let $N_{2}$ be the $F$-vector subspace of $M$ generated by $L_{1}$; then $M=M_{1} \oplus M_{2}$ as $S$-bimodules, where $M_{1}=SN_{1}S$ and $M_{2}=SN_{2}S$.
One of the main results proved in \cite{5} is the so-called \emph{Splitting theorem} (Theorem 4.6). Inspired by this result, the following theorem is proved in \cite{1}.
\begin{theorem} (\cite[Theorem 7.15]{1}) \label{split} There exists an algebra automorphism $\varphi: \mathcal{F}_{S}(M) \rightarrow \mathcal{F}_{S}(M)$ such that $\varphi(P)$ is cyclically equivalent to a potential of the form $\displaystyle \sum_{i=1}^{N} a_{i}b_{i}+P''$ where $P''$ is a reduced potential contained in the closure of the algebra generated by $M_{2}$ and $\displaystyle \sum_{i=1}^{N} a_{i}b_{i}$ is a trivial potential in $\mathcal{F}_{S}(M_{1})$.
\end{theorem}
\begin{definition} Let $P\in \mathcal{F}_{S}(M)$ be a potential and $k$ an integer in $\{1,\ldots,n\}$. Suppose that there are no two-cycles passing through $k$. Using Theorem \ref{split}, one can see that $\mu _{k}P$ is right-equivalent to the direct sum of a trivial potential $W$ and a potential $Q$ in $\mathcal{F}_{S}(M)^{\geq 3}$. Following \cite{5}, we define the mutation of $P$ in the direction $k$ as $\overline{\mu }_{k}(P)=Q$.
\end{definition}
One of the main results of \cite{5} is that mutation at an arbitrary vertex is a well-defined involution on the set of right-equivalence classes of reduced quivers with potentials. In \cite{1}, the following analogous result is proved.
\begin{theorem} (\cite[Theorem 8.21]{1}) Let $P$ be a reduced potential such that the mutation $\overline{\mu}_{k}P$ is defined. Then $\overline{\mu}_{k}\overline{\mu}_{k}P$ is defined and it is right-equivalent to $P$.
\end{theorem}
\begin{definition} Let $k_{1}, \hdots, k_{l}$ be a finite sequence of elements of $\{1,\hdots,n\}$ such that $k_{p} \neq k_{p+1}$ for $p=1,\hdots, l-1$. We say that an algebra with potential $(\mathcal{F}_{S}(M),P)$ is $(k_{l},\hdots,k_{1})$-nondegenerate if all the iterated mutations $\bar{\mu}_{k_{1}}P$, $\bar{\mu}_{k_{2}}\bar{\mu}_{k_{1}}P, \hdots, \bar{\mu}_{k_{l}} \cdots \bar{\mu}_{k_{1}}P$ are $2$-acyclic. We say that $(\mathcal{F}_{S}(M),P)$ is nondegenerate if it is $(k_{l},\hdots, k_{1})$-nondegenerate for every sequence of integers as above.
\end{definition}
In \cite{5}, it is shown that if the underlying base field $F$ is uncountable then a nondegenerate quiver with potential exists for every underlying quiver. Motivated by this result, the following theorem is proved in \cite{9}.
\begin{theorem} (\cite[Theorem 3.5]{9}) \label{teouncount} Suppose that the underlying field $F$ is uncountable, then $\mathcal{F}_{S}(M)$ admits a nondegenerate potential.
\end{theorem}
\section{Species realizations} \label{section4}
We begin this Section by recalling the definition of \emph{species realization} of a skew-symmetrizable integer matrix, in the sense of \cite{7} (Definition 2.22).
\begin{definition} \label{especies} Let $B=(b_{ij}) \in \mathbb{Z}^{n \times n}$ be a skew-symmetrizable matrix, and let $I=\{1,\hdots, n\}$. A species realization of $B$ is a pair $(\mathbf{S},\mathbf{M})$ such that:
\begin{enumerate}
\item $\mathbf{S}=(F_{i})_{i \in I}$ is a tuple of division rings;
\item $\mathbf{M}$ is a tuple consisting of an $F_{i}$-$F_{j}$-bimodule $M_{ij}$ for each pair $(i,j) \in I^{2}$ such that $b_{ij}>0$;
\item for every pair $(i,j) \in I^{2}$ such that $b_{ij}>0$, there is an $F_{j}$-$F_{i}$-bimodule isomorphism
\begin{center}
$\operatorname{Hom}_{F_{i}}(M_{ij},F_{i}) \cong \operatorname{Hom}_{F_{j}}(M_{ij},F_{j})$;
\end{center}
\item for every pair $(i,j) \in I^{2}$ such that $b_{ij}>0$ we have $\operatorname{dim}_{F_{i}}(M_{ij})=b_{ij}$ and $\operatorname{dim}_{F_{j}}(M_{ij})=-b_{ji}$.
\end{enumerate}
\end{definition}
In \cite[p.29]{1}, we impose the following condition on each of the bases $L(i)$. For each $s,t \in L(i)$:
\begin{equation} \label{eq2}
e_{i}^{\ast}(st^{-1}) \neq 0 \ \text{implies} \ s=t \ \text{and} \ e_{i}^{\ast}(s^{-1}t) \neq 0 \ \text{implies} \ s=t
\end{equation}
where $e_{i}^{\ast}: D_{i} \rightarrow F$ denotes the standard dual map corresponding to the basis element $e_{i} \in L(i)$.
In \cite[p.14]{7}, motivated by the seminal paper \cite{5}, J. Geuenich and D. Labardini-Fragoso raise the following question: \\
\textbf{Question} \cite[Question 2.23]{7} Can a mutation theory of species with potential be defined so that every skew-symmetrizable matrix $B$ has a species realization which admits a nondegenerate potential? \\
In \cite[Corollary 3.6]{9} a partial affirmative answer to Question 2.23 is given by proving the following: let $B=(b_{ij}) \in \mathbb{Z}^{n \times n}$ be a skew-symmetrizable matrix with skew-symmetrizer $D=\operatorname{diag}(d_{1},\ldots,d_{n})$. If $d_{j}$ divides $b_{ij}$ for every $i$ and every $j$, then the matrix $B$ can be realized by a species that admits a nondegenerate potential.
We now give an example (\cite[p.8]{9}) of a class of skew-symmetrizable $4 \times 4$ integer matrices which are neither globally unfoldable nor strongly primitive, yet have a species realization admitting a nondegenerate potential. In particular, this class of matrices is not covered by \cite{8}.
Let
\begin{equation} \label{eq3}
B
=\begin{bmatrix}
0 & -a & 0 & b \\
1 & 0 & -1 & 0 \\
0 & a & 0 & -b \\
-1 & 0 & 1 & 0
\end{bmatrix}
\end{equation}
where $a,b$ are positive integers such that $a<b$, $a$ does not divide $b$ and $\gcd(a,b) \neq 1$. \\
Note that there are infinitely many such pairs $(a,b)$. For example, let $p$ and $q$ be primes such that $p<q$. For any $n \geq 2$, define $a=p^{n}$ and $b=p^{n-1}q$. Then $a<b$, $a$ does not divide $b$ and $\gcd(a,b)=p^{n-1} \neq 1$. Note that $B$ is skew-symmetrizable since it admits $D=\operatorname{diag}(1,a,1,b)$ as a skew-symmetrizer. \\
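Indeed, with the usual convention that $D$ is a skew-symmetrizer of $B$ whenever $DB$ is skew-symmetric, a direct computation gives
\begin{center}
$DB=\begin{bmatrix}
0 & -a & 0 & b \\
a & 0 & -a & 0 \\
0 & a & 0 & -b \\
-b & 0 & b & 0
\end{bmatrix}$,
\end{center}
which is visibly skew-symmetric.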
\begin{remark} By \cite[Example 14.4]{8} we know that the class of all matrices given by \eqref{eq3} does \emph{not} admit a global unfolding. Moreover, since we are not assuming that $a$ and $b$ are coprime, then such matrices are not strongly primitive; hence they are not covered by \cite{8}.
\end{remark}
We have the following
\begin{proposition} (\cite[Proposition 5.2]{9})
The matrices given by \eqref{eq3} are neither globally unfoldable nor strongly primitive, yet they can be realized by a species admitting a nondegenerate potential.
\end{proposition}
By Theorem \ref{teouncount}, we know that a nondegenerate potential exists provided the underlying field $F$ is uncountable. If $F$ is infinite (but not necessarily uncountable) one can show that $\mathcal{F}_{S}(M)$ admits ``locally'' nondegenerate potentials. More precisely, we have
\begin{proposition} (\cite[Proposition 12.5]{1}) \label{infinite} Let $F$ be an infinite field and let $k_{1},\ldots,k_{l}$ be an arbitrary sequence of elements of $\{1,\ldots,n\}$. Then there exists a potential $P \in \mathcal{F}_{S}(M)$ such that the mutation $\overline{\mu}_{k_{l}} \cdots \overline{\mu}_{k_{1}}P$ exists.
\end{proposition}
We conclude the paper by giving an example of a class of skew-symmetrizable $4 \times 4$ integer matrices that have a species realization via field extensions of the rational numbers. Although in this case we cannot guarantee the existence of a nondegenerate potential, we can guarantee (by Proposition \ref{infinite}) the existence of ``locally'' nondegenerate potentials.
First we require some definitions.
\begin{definition} Let $E/F$ be a finite field extension. An $F$-basis of $E$, as a vector space, is said to be semi-multiplicative if the product of any two elements of the basis is an $F$-multiple of another basis element.
\end{definition}
It can be shown that every extension $E/F$ which has a semi-multiplicative basis satisfies (\ref{eq2}).
\begin{definition} A field extension $E/F$ is called a simple radical extension if $E=F(a)$ for some $a \in E$, with $a^n \in F$ and $n \geq 2$.
\end{definition}
Note that if $E/F$ is a simple radical extension then $E$ has a semi-multiplicative $F$-basis.
\begin{definition} A field extension $E/F$ is a radical extension if there exists a tower of fields $F=F_{0} \subseteq F_{1} \subseteq \cdots \subseteq F_{l}=E$ such that $F_{i}/F_{i-1}$ is a simple radical extension for $i=1,\ldots, l$.
\end{definition}
As before, let
\begin{equation}
B
=\begin{bmatrix}
0 & -a & 0 & b \\
1 & 0 & -1 & 0 \\
0 & a & 0 & -b \\
-1 & 0 & 1 & 0
\end{bmatrix}
\end{equation}
but without imposing additional conditions on $a$ or $b$.
\begin{proposition} \label{rationals} Let $n,m \geq 2$ and take $a=n$ and $b=n^{m}$ in $B$. Then $B$ admits a species realization $(\mathbf{S},\mathbf{M})$ where $\mathbf{M}$ is a $Z$-freely generated $S$-bimodule and $S$ satisfies (\ref{eq2}).
\end{proposition}
\begin{proof} To prove this we will require the following result (cf. \cite[Theorem 14.3.2]{11}).
\begin{lemma} \label{lemroots} Let $n \geq 2$, let $p_{1},\ldots,p_{m}$ be distinct primes and let $\mathbf{Q}$ denote the field of rational numbers. Let $\zeta_{n}$ be a primitive $n$th root of unity. Then
\begin{center}
$[\mathbf{Q}(\zeta_{n})(\sqrt[n]{p_{1}},\ldots,\sqrt[n]{p_{m}}): \mathbf{Q}(\zeta_{n})]=n^{m}$
\end{center}
\end{lemma}
Now we continue with the proof of Proposition \ref{rationals}. Let $F=\mathbf{Q}(\zeta_{n})$ be the base field and let $p_{1}$ be an arbitrary prime. By Lemma \ref{lemroots}, the extension $F_{2}/F$, where $F_{2}=F(\sqrt[n]{p_{1}})$, has degree $n$. Now choose $m-1$ distinct primes $p_{2},p_{3},\ldots,p_{m}$, each different from $p_{1}$, and define $F_{4}=F(\sqrt[n]{p_{1}},\sqrt[n]{p_{2}},\ldots,\sqrt[n]{p_{m}})$; then, by Lemma \ref{lemroots}, $F_{4}/F$ has degree $n^{m}$.
Let $S=F \oplus F_{2} \oplus F \oplus F_{4}$ and $Z=F \oplus F \oplus F \oplus F$. Since $F/\mathbf{Q}$ is a simple radical extension, it has a semi-multiplicative basis and thus satisfies (\ref{eq2}). On the other hand, note that $F_{2}/\mathbf{Q}(\zeta_{n})$ and $F_{4}/\mathbf{Q}(\zeta_{n})$ are radical extensions. Using \cite[Remark 6, p.29]{1} we get that both $F_{2}$ and $F_{4}$ satisfy (\ref{eq2}); hence, it is always possible to choose a $Z$-local basis of $S$ satisfying (\ref{eq2}). Finally, for each $b_{ij}>0$, define $e_{i}Me_{j}=(F_{i} \otimes_{F} F_{j})^{\frac{b_{ij}}{d_{j}}}=F_{i} \otimes_{F} F_{j}$; note that $b_{ij}=d_{j}$ whenever $b_{ij}>0$ for the matrix $B$. It follows that $(\mathbf{S},\mathbf{M})$ is a species realization of $B$.
\end{proof}
\end{document} |
\begin{document}
\placetextbox{0.5cm}{0.5cm}{l}{\parbox{20cm}{\footnotesize This paper has been accepted at IEEE AICAS
2023. \copyright 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works.}}
\title{Free Bits: Latency Optimization of Mixed-Precision Quantized Neural Networks on the Edge
\thanks{This work is funded in part by the Convolve project evaluated by the EU Horizon Europe research and innovation programme under grant agreement No. 101070374 and has been supported by the Swiss State Secretariat for Education Research and Innovation under contract number 22.00150.}
}
\author{\IEEEauthorblockN{Georg Rutishauser\IEEEauthorrefmark{1}, Francesco Conti\IEEEauthorrefmark{2}, Luca Benini\IEEEauthorrefmark{1}\IEEEauthorrefmark{2}}
\IEEEauthorblockA{\IEEEauthorrefmark{1}\textit{Departement Informationstechnologie und Elektrotechnik, ETH
Z{\"u}rich, Switzerland}}
\IEEEauthorblockA{\IEEEauthorrefmark{2}\textit{Dipartimento di Ingegneria dell'Energia Elettrica e
dell'Informazione, Universit\`{a} di Bologna, Bologna, Italy}}
\IEEEauthorblockA{\IEEEauthorrefmark{1}\texttt{\{georgr,lbenini\}@iis.ee.ethz.ch} \IEEEauthorrefmark{2}\texttt{[email protected]}}
}
\maketitle
\begin{abstract}
Mixed-precision quantization, where a deep neural network's layers are quantized to different precisions, offers the opportunity to optimize the trade-offs between model size, latency, and statistical accuracy beyond what can be achieved with homogeneous-bit-width quantization.
To navigate the intractable search space of mixed-precision configurations for a given network, this paper
proposes a hybrid search methodology.
It consists of a hardware-agnostic differentiable search algorithm followed by a hardware-aware heuristic
optimization to find mixed-precision configurations latency-optimized for a specific hardware target. We
evaluate our algorithm on MobileNetV1 and MobileNetV2 and deploy the resulting networks on a family of
multi-core RISC-V microcontroller platforms with different hardware characteristics. We achieve up to \SI{28.6}{\percent}
reduction of end-to-end latency compared to an 8-bit model at a negligible accuracy drop from a full-precision
baseline on the 1000-class ImageNet dataset. We demonstrate speedups relative to an 8-bit baseline at negligible accuracy drop, even on systems with no hardware support for sub-byte arithmetic. Furthermore, we show the
superiority of our approach with respect to differentiable search targeting reduced binary operation counts as
a proxy for latency.
\end{abstract}
\begin{IEEEkeywords}
Edge AI, Mixed-Precision Neural Networks
\end{IEEEkeywords}
\section{Introduction}
\label{sec:introduction}
The number of \gls{iot} devices deployed is growing rapidly and is projected to reach 19.1 billion by 2025
\cite{ref:statista_report}.
To efficiently and accurately process the massive amounts of data collected by \gls{iot} sensor nodes under
the strict latency and power constraints imposed by \gls{iot} applications, the emerging field of Edge AI aims
to deploy \gls{dl} algorithms directly on the edge devices that collect them. \Glspl{mcu} have been a popular
target for edge deployment of \glspl{dnn} due to their ubiquity and low cost, and extensive research has been
conducted into designing efficient \gls{dnn} models and techniques to enable inference on \glspl{mcu}
\cite{ref:mcunetv2,ref:micronets}. As the active power consumption of edge nodes is generally dominated by components other than the arithmetic units, the most effective way to decrease full-system inference energy is to reduce inference latency while meeting the devices' tight memory and storage constraints.
A key technique to reduce both memory footprint and inference latency of \glspl{dnn} is \textit{quantization},
where model parameters and intermediate activations are represented in low-precision formats. 8-bit quantized
models generally exhibit equivalent accuracy to full-precision models. Thanks to \gls{simd} instructions,
these models can be executed on modern \gls{mcu}-based systems with lower latency and correspondingly reduced
energy cost~\cite{ref:armv8.1}.
Quantization to even lower bit-widths has also seen widespread interest~\cite{ref:pact_sawb, ref:inq, ref:tnn}
and the hardware community has followed suit, proposing low-precision \gls{dnn} execution engines as well as
\gls{isa} extensions to accelerate networks quantized to sub-byte
precision~\cite{ref:dustin}.
However, homogeneous quantization to sub-byte precisions often
incurs a non-negligible accuracy penalty. To find the best trade-off between execution latency and statistical
accuracy, \textit{mixed-precision quantization} proposes to quantize different parts (usually at the
granularity of individual layers) of the network to different precisions.
In order to efficiently navigate the intractable search space of precision configurations of a given model,
multiple works have applied \gls{dnas} to mixed-precision search. These approaches generally rely on a proxy
for latency, such as \gls{bop} count, to guide the search \cite{ref:bb,ref:edmips,ref:bitprune}. By modeling
quantization to different precisions in a differentiable manner and adding a regularizer term to the loss
to penalize high \gls{bop} counts, these algorithms jointly minimize a network's \gls{bop} count and its task loss. The trade-off between operational complexity and accuracy is controlled by the regularization strength.
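As a rough illustration of how such a proxy enters the training objective, the following sketch computes an expected-\gls{bop} penalty from per-layer MAC counts and soft precision assignments; it is a simplified stand-in for, not a reproduction of, the regularizer of \cite{ref:bb}, and the data layout is an assumption made for the example.
\begin{verbatim}
# Sketch: expected-BOP regularizer added to the task loss.
# Each layer: MAC count and soft probabilities over the
# allowed bit-widths for weights (p_w) and activations (p_a).
BITS = (2, 4, 8)

def expected_bops(layers):
    total = 0.0
    for layer in layers:
        e_bw = sum(b * layer["p_w"][b] for b in BITS)
        e_ba = sum(b * layer["p_a"][b] for b in BITS)
        total += layer["macs"] * e_bw * e_ba
    return total

def total_loss(task_loss, layers, reg_strength=1e-9):
    # Larger reg_strength pushes the search toward
    # lower-BOP (cheaper) configurations.
    return task_loss + reg_strength * expected_bops(layers)
\end{verbatim}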
However, as \Cref{fig:freebie_effects} shows, low \gls{bop} counts do not directly translate to reduced
execution latency on real hardware platforms, which motivates our work. Depending on the target platform,
certain layers even exhibit a higher latency when quantized to lower precisions. One reason for this
counterintuitive behavior lies in the hardware implementation of low-precision operations. For example, the XpulpNN \gls{isa} extension used by \cite{ref:dustin} only supports operands of equal precision: when weight and
activation precisions differ, the lower-precision operands must be unpacked to the larger data
format.
In systems with hierarchically organized memory, tiling effects also shape the precision-latency landscape:
larger tiles lead to more efficient execution, with the consequence that some layers exhibit
lower latency in sub-8-bit precision on systems without hardware support for sub-8-bit arithmetic (XPulpV2 in
\Cref{fig:freebie_effects}). Past works have targeted the search for mixed-precision networks for deployment
to \gls{mcu}-class platforms. \cite{ref:rusci_mixed} focuses on reducing the memory/storage footprints, but
does not consider inference latency. \cite{ref:channelwise_mp_quant} targets inference energy reduction with a
DNAS-based approach to channel-wise quantization of convolutional layers. However, the channel-wise approach
requires a more complex runtime to enable deployment, and we achieve equivalent results with layer-wise
quantization on a much more challenging dataset than the MLPerf Tiny benchmark tasks used in
\cite{ref:channelwise_mp_quant}.
In this paper, \textit{we propose a mixed-precision latency optimization method consisting of a hardware-agnostic
differentiable search step
followed by a hardware-aware, profiling-based heuristic which both reduces execution latency and improves
accuracy by increasing the precision in layers where higher precisions achieve lower latency.} We use
Bayesian Bits \cite{ref:bb} for the first step, but our method is not specific to Bayesian Bits and
any mixed-precision search method could be used in its stead.
In evaluations on \gls{mnv1} and \gls{mnv2}, deployed to a cycle-accurate RISC-V multi-core simulator, our
approach results in an accuracy-latency trade-off curve that dominates those produced by differentiable search
alone. To the best of our knowledge, we demonstrate for the first time end-to-end deployment of mixed-precision
networks to an \gls{mcu}-class platform that exhibit not only a reduced memory footprint but also reduced
execution latency by up to \SI{28.6}{\percent} at full-precision equivalent classification accuracy.
Our key contributions are the following:
\begin{itemize}
\item We present a lightweight method to find latency-optimized mixed-precision quantization configurations for \glspl{dnn}, consisting of a hardware-agnostic differentiable model search and hardware-aware heuristics, allowing efficient generation of optimized configurations for different platforms.
\item We compose an end-to-end flow consisting of precision search, training, generation of integerized models and deploy the found configurations on a cycle-accurate simulator for high-performance RISC-V \gls{mcu} systems.
\item We analyze the resulting accuracy-latency trade-offs, demonstrating a reduction of end-to-end latency by up to \SI{28.6}{\percent} vs. 8-bit quantization at full-precision equivalent classification accuracy, and find Pareto-dominant configurations with respect to homogeneous 4-bit quantization.
\end{itemize}
\begin{figure*}
\caption{Throughput vs.\ precision of \gls{mnv1} layers on the evaluated platforms.}
\label{fig:freebie_effects}
\end{figure*}
\section{Free Bits}
\label{sec:algo}
\textit{Free Bits} is a multi-step method to find mixed-precision configurations of \glspl{dnn} optimized for
low latency on a given target hardware platform. In the first step, we employ two variants of the Bayesian
Bits algorithm
to find baseline reduced-precision configurations of the
targeted network architecture.
In the second step, we use layer-wise profiling data collected on the target platform to update the initial
configuration by increasing the precision of layers which exhibit lower latency in higher precisions.
As these increases in precision are expected to improve both latency
and statistical accuracy, we name this step the \textit{free bits} heuristic.
From the configurations found in the second step, the one which best meets a given latency target is then
selected and fine-tuned using a modified version of the \gls{tqt} algorithm \cite{ref:tqt} and automatically
converted to an integer-only model, which can be fed to a deployment backend
for the target platform.
\subsection{Differentiable Mixed-Precision Search}
\label{subsec:diff_search}
To find the initial mixed-precision configurations,
we apply two variants of the Bayesian Bits algorithm. Bayesian Bits decomposes the quantization of each layer
into the contributions from each of the allowed bit-widths and aims to reduce a network's total \gls{bop}
count with a regularizer that penalizes each precision's contribution to the expected \gls{bop} count
individually. Because the execution latency of a layer does not necessarily increase monotonously with
precision and depends jointly on input and weight precisions, Bayesian Bits cannot target latency reduction
directly.
In addition to the original Bayesian Bits algorithm, we also employ a modified version
enforcing equal input and weight precisions. This modification accounts for the fact that on our target
platforms, the theoretical throughput for a layer with non-equal activation and weight precisions is bounded
by the higher of the two precisions.
\subsection{Free Bits Heuristic}
\label{subsec:profiling_heuristic}
\begin{algorithm}
\caption{Free Bits Heuristic}
\label{algo:freebie_heur}
\hspace*{\algorithmicindent}\textbf{Input:}
\begin{AlgoDx}
\item[$LD$:] Latency dictionary mapping \acrshort{lt} $t$ and precisions $(b_{in}, b_{wt})$ to a measured latency
\item[$C_{net}$:] Dictionary of \acrshortpl{lt} and precisions describing a \acrshort{mp} network, of the form $\left\{i:\left(t^i, \left( b_{in}^i, b_{wt}^i\right)\right)\right\}_{i=1}^N$
\item[$P_{all}$:] Set of allowed combinations $\left( b_{in}, b_{wt}\right)$ of input and weight precisions
\end{AlgoDx}
\hspace*{\algorithmicindent}\textbf{Output:}
\begin{AlgoDx}
\item[$C'_{net}$:] Latency-optimized \acrshort{mp} configuration of input network
\end{AlgoDx}
\begin{algorithmic}
\Function{higher}{$(b_{in, 1}, b_{wt,1}), (b_{in,2}, b_{wt,2})$}
\State \textbf{return} $(b_{in,1} \geq b_{in,2}) \wedge (b_{wt,1} \geq b_{wt,2})$
\EndFunction
\State $C'\gets C$
\ForAll{$i, c^i=\left(t^i, \left( b_{in}^i, b_{wt}^i\right)\right)\in C_{net}$}
\State $lat^i_0\gets LD\left[ c^i\right]$
\Comment{Initial latency}
\State $cdts\gets$\parbox[t]{0.5\linewidth}{$\{ \left(b_{in}, b_{wt}\right)\vert $ $\left(b_{in},
b_{wt}\right)\in P_{all},$ $LD\left[\left(t^i, \left(b_{in},b_{wt}\right)\right)\right]\leq
lat^i_0,$
$\Call{higher}{(b_{in}, b_{wt}), (b_{in}^i,
b_{wt}^i)}\}$}\Comment{\parbox[t]{0.27\linewidth}{\linespread{0.93}\selectfont Select lower-lat. configs with higher precisions}}
\State $best\gets$ \parbox[t]{0.5\linewidth}{$ \argmin\limits_{(b_{in},b_{wt})\in cdts}LD[(t^i,
(b_{in},b_{wt}))]$}\Comment{\parbox[t]{0.25\linewidth}{\linespread{0.93}\selectfont Select lowest-lat. candidate}}
\State $C'_{net}[i]\gets (t^i, best)$
\Comment{Update net configuration}
\EndFor
\end{algorithmic}
\end{algorithm}
As \Cref{fig:freebie_effects} shows, there are many cases where a given layer does not profit from reduced
precision, but in fact exhibits higher latency when executed in a lower precision. The \textit{free bits}
heuristic exploits this observation, relying on two core ideas: First, we assume our target platform executes
networks layer-by-layer, which implies $L_{net} \approx \sum_{i=1}^NL_i$ for the total execution latency
$L_{net}$ of an $N$-layer network where the $i$-th layer is executed with latency $L_i$.
Second, we assume that increasing a layer's input activation or weight precision never decreases the network's statistical
accuracy.
Following the first assumption, we characterize each unique linear operator in the target network as a
\textit{\gls{lt}}, the tuple of all quantities that parametrize the invocation of a
computational kernel, such as input dimensions, number of channels, or kernel size.
For each \gls{lt} occurring in the network, we profile the execution latency on the target platform for all
supported precision configurations.
We then update every layer in the network found by Bayesian Bits to the configuration
of higher or equal precision that exhibits the lowest latency.
By the two assumptions above, the resulting network's execution latency and statistical accuracy will be
upper-bounded and lower-bounded, respectively, by those of the configuration found by Bayesian Bits. As it is
expected to produce strictly superior configurations in terms of latency and statistical accuracy, we call
this procedure the \textit{free bits} heuristic. We show a pseudocode description of the procedure in
\Cref{algo:freebie_heur}.
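For concreteness, a compact Python rendering of \Cref{algo:freebie_heur} is given below; the dictionary layout of the latency data and of the network configuration is an assumption made for this sketch rather than the exact data structures used in our flow.
\begin{verbatim}
# Sketch of the free bits heuristic (Algorithm 1).
# latency[(ltype, (b_in, b_wt))] -> profiled latency
# config[i] = (ltype, (b_in, b_wt)) for layers i = 1..N
# allowed   = set of permitted (b_in, b_wt) pairs

def free_bits(config, latency, allowed):
    def higher(p, q):
        # p at least as precise as q in both operands
        return p[0] >= q[0] and p[1] >= q[1]

    new_config = dict(config)
    for i, (ltype, prec) in config.items():
        lat0 = latency[(ltype, prec)]
        # higher-or-equal precision, lower-or-equal latency
        cands = [p for p in allowed
                 if higher(p, prec)
                 and latency[(ltype, p)] <= lat0]
        # prec itself is always a candidate, so cands != []
        best = min(cands, key=lambda p: latency[(ltype, p)])
        new_config[i] = (ltype, best)
    return new_config
\end{verbatim}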
\subsection{Quantization-Aware Fine-Tuning and Deployment}
Having arrived at a latency-optimized mixed-precision configuration, we perform \gls{qat} with the QuantLab
framework \footnote{\url{https://github.com/pulp-platform/quantlab/tree/georgr/bayesian_bits_gh}} to fine-tune the
network's parameters using a generalized version of \gls{tqt}~\cite{ref:tqt}, differing from the original
algorithm in that we do not force clipping bounds to be exact powers of two.
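For reference, the forward pass of such a quantizer can be sketched as follows; this is a simplified illustration of the parameterization with a freely learnable clipping bound (the straight-through gradient handling and the exact QuantLab implementation are omitted, and the symmetric signed format is an assumption of the sketch).
\begin{verbatim}
import numpy as np

def fake_quantize(x, clip_bound, n_bits):
    # Symmetric signed fake-quantization. Unlike the original
    # TQT, clip_bound need not be a power of two.
    q_max = 2 ** (n_bits - 1) - 1
    scale = clip_bound / (2 ** (n_bits - 1))
    q = np.clip(np.round(x / scale), -(q_max + 1), q_max)
    return q * scale
\end{verbatim}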
\section{Results}
\label{sec:results}
\subsection{Experimental Setup}
\label{subsec:exp_setup}
We evaluate our approach on the widely used \gls{mnv1} \cite{ref:mnv1} and \gls{mnv2} \cite{ref:mobilenetv2} architectures, applying the procedure proposed in \Cref{sec:algo} to both networks. We used width multipliers of 0.75 for \gls{mnv1} and 1.0 for \gls{mnv2}. The input resolution was $224\times 224$ for both networks. We trained our networks on the ILSVRC2012 \cite{ref:imagenet} 1000-class dataset and report top-1 classification accuracies on the validation set.
\paragraph{Differentiable Mixed-Precision Search and \gls{qat} Fine-Tuning}
We applied the two variants of Bayesian Bits described in \Cref{subsec:diff_search} to the \gls{mnv1} and
\gls{mnv2} network topologies.
The configurations produced by our algorithm (as well as those produced by Bayesian Bits in the case of
\gls{mnv1}) were fine-tuned with \gls{tqt}.
In accordance with the capabilities of our hardware targets (see below), the precisions Bayesian Bits can select from are 2, 4, and 8 bits for both weights and activations.
\paragraph{Profiling, Deployment and Hardware Targets}
QuantLab's automated integerization flow generates precision-annotated, integer-only ONNX models, which are
consumed by the DORY \cite{ref:dory} deployment backend. DORY generates C code leveraging a
mixed-precision kernel library, which we run on GVSOC, a cycle-accurate, open-source simulator for multi-core
RISC-V systems. The platforms we target are open-source RISC-V \glspl{mcu} of the \gls{pulp} family and are
divided into two main domains. The \gls{soc} domain contains a RISC-V core serving as the fabric controller,
\SI{512}{\kibi\byte} of L2 memory and a full set of peripherals. The cluster domain hosts 8 high-performance
RISC-V cores operating on \SI{64}{\kibi\byte} of high-bandwidth L1 scratchpad memory.
This hierarchical memory structure necessitates \textit{tiled} execution of a network's layers with each
tile's inputs, outputs, and weights fitting into the L1 scratchpad. Tiling is automatically performed by
DORY.
All cores in the target system implement the base RV32IMF \gls{isa} and the custom XpulpV2
extensions. We measure latency on three systems, with varying degrees of support for sub-byte arithmetic in
the cluster cores. The \textit{XpulpV2} system's cluster implements only XpulpV2, which supports only 8-bit
\gls{simd} arithmetic. The \textit{XpulpNNv1} system implements the XpulpNN extension also used in \cite{ref:dustin},
which provides support for packed-SIMD sub-byte arithmetic on 2- and 4-bit data.
Because XpulpNN's arithmetic instructions require operands to have equal bit-width, mismatching activation
and weight precisions require lower-precision data to be unpacked in software to the higher precision.
Finally, the \textit{XpulpNNv2} system eliminates this overhead by performing the unpacking transparently in
hardware. To generate the profiling data (shown in \Cref{fig:freebie_effects} for \gls{mnv1}) used by the free bits
heuristic, we again use DORY to generate and export dummy networks for all layer types in all precision
configurations.
\subsection{Latency-Accuracy Trade-Offs for XpulpNNv1}
\begin{figure}
\caption{Latency-Accuracy tradeoff of Bayesian Bits-trained MobileNetV1 configurations before and after
applying the free bits heuristic. Grey arrows indicate the effect of the heuristic.
\textbf{BB Orig./Locked}: configurations found by the original and by the locked-precision (equal input and weight bit-width) variants of Bayesian Bits, respectively (see \Cref{subsec:diff_search}).}
\label{fig:bb_and_heur}
\end{figure}
\paragraph{MobileNetV1}
\Cref{fig:bb_and_heur} shows the latency-accuracy trade-off for \gls{mnv1} deployed to a \gls{pulp} system
with the XpulpNNv1 \gls{isa} extensions, with the effect of the free bits heuristic indicated. We observe that
the original Bayesian Bits algorithm generally does not produce low-latency configurations due to the reasons
discussed in \Cref{sec:introduction}.
With two exceptions, applying the free bits heuristic improves the latency of all configurations substantially
while increasing classification accuracy. For the $4\,$b/$4\,$b baseline, the heuristic
increases the precision of 12 layers, improving latency and accuracy by $7\%$ and $0.7$ percentage points,
respectively. As symmetric activation and weight precisions are theoretically optimal for XpulpNNv1's hardware
implementation of sub-byte arithmetic, this is a non-trivial result. The free bits heuristic lifts the
previously uncompetitive configurations found by the original Bayesian Bits algorithm to the Pareto front,
yielding accuracy and latency gains of $1.4$--$6.6$ percentage points and $12.3\%$--$61.6\%$, respectively. The
most accurate configuration matches the 8b/8b baseline in statistical accuracy at $69.1\%$ and reduces
execution latency by $7.6\%$, and the configuration at the Pareto front's knee point improves execution
latency by $27.9\%$ at a classification accuracy within $0.2$ percentage points of the full-precision
baseline of $68.8\%$.
\paragraph{MobileNetV2}
\begin{figure}
\caption{Latency-Accuracy tradeoff of MobileNetV2 configurations optimized for XpulpNNv1. The grey arrow
indicates the effect of the heuristic on the $4$b/$4$b baseline.}
\label{fig:mnv2_pareto}
\end{figure}
\Cref{fig:mnv2_pareto} shows the latency-accuracy trade-off of \gls{mnv2} configurations produced by Bayesian
Bits modified with the free bits heuristic running on the XpulpNNv1 system. The baseline 4b/4b configuration
contains many asymmetric-precision convolutional layers due to adder node outputs being quantized to 8 bits.
This leads to a latency higher than that of the 8b/8b baseline, which the free bits heuristic reduces by
$46\%$ while improving classification accuracy by $0.6$ percentage points. Nevertheless, the
resulting configuration is not Pareto-optimal with respect to those produced by our algorithm. In particular,
the locked-precision version of Bayesian Bits, when combined with the free bits heuristic, produces
configurations that dominate both baselines. The configuration at the
Pareto front's knee point reduces execution latency by $10.9\%$ at an accuracy penalty of only $0.3$
percentage points from the 8b/8b baseline.
\subsection{Free Bits Across Different Target Platforms}
\begin{table}[]
\centering
\begin{tabular}{l|lrr|rr}
Acc. Margin &\gls{isa} & \multicolumn{2}{c}{MobileNetV1} & \multicolumn{2}{|c}{MobileNetV2} \\\hline\hline
\multicolumn{1}{c}{}& & Lat. vs. 8b & Acc. & Lat. vs. 8b & Acc. \\\hline
\multicolumn{1}{l|}{8b Baseline} &\textit{all} & $+0\%$ & $69.1\%$ & $+0\%$ & $71.5\%$\\\hline
\multirow{3}{*}{$0.5$ pp.} & XPv2 & $-5.5\%$& $69.3\%$ & $-3.4\%$ & $71.0\%$ \\
& XPNNv1 & $-27.9\%$ & $68.6\%$ & $-10.9\%$ & $71.2\%$ \\
& XPNNv2 & $-28.6\%$ & $68.6\%$ & $-15.3\%$ & $71.0\%$ \\\hline
\multirow{3}{*}{$1.5$ pp.} & XPv2 & $-5.5\%$ & $69.3\%$ & $-6.3\%$ & $70.7\%$\\
& XPNNv1 & $-34.4\%$ & $67.6\%$ & $-15.1\%$ & $70.4\%$ \\
& XPNNv2 & $-35.1\%$ & $67.6\%$ & $-15.3\%$ & $71.0\%$\\\hline
\multirow{3}{*}{4b + FB} & XPv2 & $-3.5\%$ & $67.7\%$ & $-7.7\%$ & $70.9\%$ \\
& XPNNv1 & $-37.1\%$ & $66.3\%$ & $-12.8\%$ & $69.9\%$\\
& XPNNv2 & $-39.8\%$ & $66.6\%$ & $-25.7\% $ & $69.6\%$ \\\hline
\multirow{3}{*}{4b Baseline} & XPv2 & $+49.9\%$ & $65.6\%$ & $+37.0\%$ & $69.3\%$ \\
& XPNNv1 & $-32.3\%$ & $65.6\%$ & $+48.9\%$ & $69.3\%$ \\
& XPNNv2 & $-38.3\%$ & $65.6\%$ & $-23.4\%$ & $69.3\%$ \\
\end{tabular}
\caption{Configurations within margins of 0.5 and 1.5 percentage points (\textbf{pp.}) of 8b/8b
classification accuracy for PULP systems implementing different ISA extensions: XpulpV2 (\textbf{XPv2}),
XpulpNNv1 (\textbf{XPNNv1}) and XpulpNNv2 (\textbf{XPNNv2}).
\textbf{4b+FB}: target-specific free bits heuristic applied to homogeneously quantized 4b/4b network.}
\label{tbl:best_nets}
\end{table}
To evaluate the portability of our algorithm, we optimized \gls{mnv1} and \gls{mnv2} configurations found
with Bayesian Bits for the three different \gls{pulp} systems
described in \Cref{subsec:exp_setup}. \Cref{tbl:best_nets} shows the lowest-latency configurations
within $0.5$ and $1.5$ percentage points of classification accuracy of the 8b/8b baseline.
Notably, our approach achieves latency reductions even on the XpulpV2 system without hardware support for sub-byte arithmetic, which can be attributed to a lower data movement overhead thanks to larger tile sizes.
\section{Conclusion}
In this paper, we have presented \textit{Free Bits}, an efficient method to find latency-optimized
mixed-precision network configurations for inference on edge devices. Taking advantage of the fact that,
depending on the target platform, increasing input or weight precision may lead to lower execution latency,
the method optimizes mixed-precision configurations found by the hardware-agnostic Bayesian Bits
differentiable search algorithm. Deploying the \gls{mnv1} and \gls{mnv2} configurations found with our
algorithm on a family of high-performance \gls{mcu}-class RISC-V platforms, we find that, i) with hardware
support for sub-byte arithmetic, \gls{mnv1} end-to-end latency can be reduced by up to \SI{30}{\percent} while
retaining full-precision equivalent accuracy, ii) even without such hardware support, mixed-precision
quantization enables a latency reduction of up to $7.7\%$, and iii) the found configurations offer a superior
accuracy-latency trade-off with respect to homogeneous 4-bit and 8-bit quantization.
\end{document} |
\begin{document}
\title{Polariton Analysis of a Four-Level Atom Strongly Coupled to a Cavity Mode}
\author{S. Rebi\'{c}}
\email[E-mail: ]{[email protected]}
\author{A. S. Parkins}
\author{S. M. Tan}
\affiliation{Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand}
\begin{abstract}
We present a complete analytical solution for a single four-level atom strongly coupled to a cavity field mode and driven by external coherent laser fields. The four-level atomic system consists of a three-level subsystem in an EIT configuration, plus an additional atomic level; this system has been predicted to exhibit a photon blockade effect. The solution is presented in terms of polaritons. An effective Hamiltonian obtained by this procedure is analyzed from the viewpoint of an effective two-level system, and the dynamic Stark splitting of dressed states is discussed. The fluorescence spectrum of light exiting the cavity mode is analyzed and relevant transitions identified.
\end{abstract}
\pacs{42.50.-p, 32.80.-t, 42.65.-k}
\maketitle
\section{Introduction}
\label{sec:intro}
The interaction of a single mode of the electromagnetic field with a single atom has long been at the forefront of interest within the quantum optics community. In this context, the Jaynes-Cummings model~\cite{Jaynes63} and its extensions have been the main focus of attention, for several reasons. It is the simplest possible model, involving a single two-level atom interacting with a quantized field mode, and therefore is in many cases exactly solvable. It has also proven to be experimentally realizable, thus allowing direct comparison between theory and experiment. This line of research has deepened immensely our understanding of fundamental quantum phenomena, and continues to do so.
Experimentally, the field of cavity quantum electrodynamics (CQED)~\cite{Berman94} has been shown to be very promising for further studies of fundamental quantum systems. Recent advances in mirror manufacturing techniques make it possible to build high-finesse microcavities in which the coupling strength of an atomic transition to a cavity field mode can be an order of magnitude larger than the decoherence rates of the system~\cite{Hood98}. Furthermore, four spectacular experiments have recently demonstrated that it is possible to trap a single atom within a microscopic cavity using either an independent atomic trap~\cite{Ye99,Guthohrlein01}, or the field mode itself, containing not more than one photon at a time~\cite{Hood00,Pinkse00}.
Within the framework of CQED, the regime of strong atom-field coupling is interesting for many reasons. For one, it enables the study of strongly coupled quantum systems; in particular, the `atom-cavity molecule'~\cite{Hood00}. Secondly, it is also a very promising candidate for the realization of strong optical nonlinearities~\cite{Dunstan98}. For example, an approximation to a $\chi^{(3)}$ (Kerr) nonlinear optical system can be achieved using either a single, strongly coupled two-level atom, or an ensemble of weakly coupled two-level atoms. However, the large atom-field detuning, which minimizes atomic spontaneous emission noise, also minimizes the strength of the nonlinearity. Using the Kerr-type nonlinearities produced by a single two-level atom in a cavity, conditional quantum dynamics have been demonstrated by Brune \textit{et al.}~\cite{Brune94} in the microwave regime, and by Turchette \textit{et al.}~\cite{Turchette95} in the optical regime.
The effect of electromagnetically induced transparency (EIT)~\cite{Harris97} has been used by Schmidt and Imamo\u{g}lu~\cite{Schmidt96} to devise a scheme involving four-level atoms which produces a large Kerr nonlinearity with virtually no noise. It has been shown by Imamo\u{g}lu \textit{et al.}~\cite{Imamoglu97} that if such a strong optical Kerr nonlinearity is implemented in a CQED setting, then it is possible to realize {\it photon blockade}, in which the atom-cavity system mimics an ideal two-level system, and effectively acts as a photon turnstile device for single photons.
The proposal of Schmidt and Imamo\u{g}lu~\cite{Schmidt96} is very appealing in its use of EIT to substantially reduce decoherence. To utilize the advantages that a CQED environment offers, Rebi\'{c} \textit{et al.}~\cite{Rebic99} proposed a model in which a single four-level atom is trapped in a high-finesse microcavity. They showed that this system (which we call the EIT-Kerr system) can effect a near-ideal Kerr optical nonlinearity. In such a strongly coupled system, the composite excitations can be labeled as ``polaritons'', which are defined as mixtures of atom and cavity mode excitations. For weak to moderate driving, the EIT-Kerr system is well-approximated by a two-state system, corresponding to the two lowest lying polariton eigenstates. How this behaviour changes with the introduction of additional atoms was investigated by Werner and Imamo\u{g}lu~\cite{Werner99} (see also the work of Greentree \textit{et al.}~\cite{Greentree00}).
Analyzing the single-atom EIT-Kerr system theoretically is not in general a straightforward task. In the bad cavity regime or the good cavity regime, approximate solutions are possible, based on the relative sizes of the atom-field coupling constant and the decay rates. In particular, it is possible to adiabatically eliminate either the cavity or the atomic degrees of freedom, respectively. In the strong coupling case, neither of these simplifications is possible. The `atom-field molecule' must be truly regarded as a fundamental entity, which exhibits features that cannot be explained in terms of individual properties of its constituents. The natural basis for analysis of such a system is the {\em polariton basis}. In this paper we perform a polariton analysis of the strongly coupled atom-cavity system. Although we concentrate on a particular atomic configuration, the underlying method is general and could be applied to any strongly coupled system.
Polariton analysis has been used extensively of late to study the dynamics of EIT systems~\cite{Fleischhauer00,Fleischhauer00b,Fleischhauer01}, but these analyses have concentrated on the semiclassical case of an atomic gas driven by laser light; in particular, on the dynamics of `slow' light. Juzeliunas and Carmichael~\cite{Juzeliunas01} have refined the analysis of the corresponding `slow polaritons', and showed that it is possible to reverse a stopped polariton by reversing the control beam. However, none of the treatments so far have dealt with the coupled atom-cavity system.
In Section~\ref{sec:model}, we outline the bare model, i.e. the Hamiltonian written in terms of atomic and field operators, and explain how damping by reservoirs enters into the formulation. In Section~\ref{sec:dress} we diagonalise the interaction Hamiltonian exactly to find a set of basis states for subsequent analysis. In Section~\ref{sec:drivdamp}, the driving term and damping terms are expressed in terms of the new basis set, and the effective Hamiltonian in the polariton representation is found. In Section~\ref{sec:dynstark} we apply our results to obtain expressions for the dynamic Stark splitting and the spectrum of weak excitations in the effective two-level system. In Section~\ref{sec:fluorescence} we illustrate how to use the effective Hamiltonian to identify peaks and linewidths in the fluorescence spectrum for the light exiting the cavity mode. Finally, conclusions and outlook are presented in Section~\ref{sec:conclusion}.
\section{Bare Model}
\label{sec:model}
The atomic energy levels are shown in Fig.~\ref{fig:atom}. The atom is assumed to be coupled to a single cavity field mode and this cavity is driven through one of its mirrors by a coherent laser field. The interaction picture Hamiltonian describing the system in the rotating wave and electric dipole approximations is $\mathcal{H} = \mathcal{H}_0 + \mathcal{H}_d$, where
\begin{subequations}
\label{eq:hamiltonian}
\begin{eqnarray}
\mathcal{H}_0 &=& \hbar\delta \, \sigma_{22} + \hbar\Delta \, \sigma_{44} + i\hbar g_1\, \bigl( a^\dagger \sigma_{12} - \sigma_{21} a \bigr) \nonumber \\
&\ & + i\hbar \Omega_c\, \bigl( \sigma_{23} - \sigma_{32} \bigr) + i\hbar g_2\, \bigl( a^\dagger \sigma_{34} - \sigma_{43} a \bigr) , \label{eq:h0} \\
\mathcal{H}_d &=& i\hbar \mathcal{E}_p \, \bigl( a - a^\dagger \bigr). \label{eq:hd}
\end{eqnarray}
\end{subequations}
Here, $\sigma_{ij}$ represent atomic raising and lowering operators (for $i \neq j$), and energy level population operators (for $i = j$); $a^\dagger \, (a)$ is the cavity field creation (annihilation) operator. Detunings $\delta$ and $\Delta$ are defined from the relevant atomic energy levels; $g_{1,2}$ are atom-field coupling constants for the respective transitions, and $\Omega_c$ is the coupling field Rabi frequency. The cavity driving field is introduced through the parameter ${\mathcal E}_p$, given by
\begin{equation}
\label{eq:ep}
\mathcal{E}_p = \sqrt{\frac{\mathcal{P} \kappa T^2}{4\hbar\omega_{cav}}}.
\end{equation}
In this expression, $T$ is the cavity mirror transmission coefficient, $\kappa$ is the cavity decay rate, and $\mathcal{P}$ is the power output of the driving laser. Damping due to cavity decay and spontaneous emission is discussed below.
\begin{figure}
\caption{Atomic energy level scheme. The cavity mode couples to transitions $|1\rangle \rightarrow |2\rangle$ and $|3\rangle \rightarrow |4\rangle$, with respective coupling strengths $g_1$ and $g_2$. The transition $|2\rangle \leftrightarrow |3\rangle$ is coupled by a classical field of frequency $\omega_c$ and Rabi frequency $\Omega_c$. Spontaneous emission rates are denoted by $\gamma_j$. Detunings $\delta$ and $\Delta$ are defined as positive in the configuration shown.}
\label{fig:atom}
\end{figure}
Assume that the cavity mode subspace has been truncated at some finite size $N$. Together with the four atomic levels, these span a Hilbert space of dimension $4 \times N$. In the absence of driving (or in the limit where term~(\ref{eq:hd}) becomes negligible), Hamiltonian~(\ref{eq:h0}) takes a block-diagonal form, with $N$ blocks on the main diagonal. Each block represents a manifold of eigenstates associated with the appropriate term in the Fock expansion. The ground, first and second manifolds have been analyzed from the viewpoint of photon blockade in Refs.~\cite{Rebic99,Werner99,Greentree00}, where this truncation approach was found to be very useful. Addition of the driving term~(\ref{eq:hd}) significantly complicates the analysis. This term couples the different manifolds, and the Hamiltonian matrix loses its block-diagonal form. Therefore, it is not practical to perform a simple analytical diagonalization of the Hamiltonian~(\ref{eq:hamiltonian}), given the large size of the $4N$ by $4N$ matrix.
Dissipation can be included in the model by adding an anti-Hermitian term to the Hamiltonian~(\ref{eq:hamiltonian}). This term results from the coupling to reservoir modes and is obtained by tracing over those modes. In this approach we identify collapse operators, each corresponding to one decay channel~\cite{Carmichael93B}. In the EIT-Kerr case there are the following four collapse operators
\begin{eqnarray}
\label{eq:collapse}
C_1 &=& \sqrt{\gamma_1}\, \sigma_{12}, \ \ C_2 = \sqrt{\gamma_2}\, \sigma_{32}, \nonumber \\
C_3 &=& \sqrt{\gamma_3}\, \sigma_{34}, \ \ C_4 = \sqrt{\kappa}\, a\, ,
\end{eqnarray}
where $\gamma_k$ denote spontaneous emission rates into each of the decay channels, and $\kappa$ denotes the cavity intensity decay rate. The effective non-Hermitian Hamiltonian takes the form
\begin{equation}
{\mathcal H}_{eff} = {\mathcal H} - i \hbar\sum_{k=1}^4 C_k^\dagger C_k \, , \label{eq:heff}
\end{equation}
${\mathcal H}$ being given by~(\ref{eq:hamiltonian}).
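The construction above is easily checked numerically. The following sketch (in Python; it is not part of the original analysis, all parameter values are illustrative assumptions, we set $\hbar = 1$ and the driving term ${\mathcal H}_d$ is omitted) builds ${\mathcal H}_0$, the collapse operators and ${\mathcal H}_{eff}$ on a truncated Fock space:
\begin{verbatim}
import numpy as np

# Illustrative sketch (not the authors' code): H0, the collapse operators C_k
# and the non-Hermitian H_eff on a truncated Fock space, in units with hbar = 1.
# All parameter values are assumptions chosen only for illustration.
N = 10                                    # Fock-space truncation
delta, Delta = 0.0, 0.1                   # detunings
g1, g2, Omega_c = 6.0, 6.0, 2.0           # couplings and control Rabi frequency
kappa, gam1, gam2, gam3 = 0.25, 0.1, 0.1, 0.1

a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # cavity annihilation operator
adag = a.conj().T

def sig(i, j):                                 # atomic operator |i><j|
    m = np.zeros((4, 4), dtype=complex)
    m[i - 1, j - 1] = 1.0
    return m

kron = np.kron                                 # ordering: field (x) atom, dimension 4N

H0 = (delta * kron(np.eye(N), sig(2, 2))
      + Delta * kron(np.eye(N), sig(4, 4))
      + 1j * g1 * (kron(adag, sig(1, 2)) - kron(a, sig(2, 1)))
      + 1j * Omega_c * kron(np.eye(N), sig(2, 3) - sig(3, 2))
      + 1j * g2 * (kron(adag, sig(3, 4)) - kron(a, sig(4, 3))))

C = [np.sqrt(gam1) * kron(np.eye(N), sig(1, 2)),      # spontaneous emission 2 -> 1
     np.sqrt(gam2) * kron(np.eye(N), sig(3, 2)),      # spontaneous emission 2 -> 3
     np.sqrt(gam3) * kron(np.eye(N), sig(3, 4)),      # spontaneous emission 4 -> 3
     np.sqrt(kappa) * kron(a, np.eye(4))]             # cavity decay

H_eff = H0 - 1j * sum(Ck.conj().T @ Ck for Ck in C)

dressed_energies = np.linalg.eigvalsh(H0)      # epsilon_j^(n) of the next Section
complex_spectrum = np.linalg.eigvals(H_eff)    # imaginary parts give decay rates
\end{verbatim}
Diagonalising the Hermitian operator ${\mathcal H}_0$ in this way reproduces the dressed energies derived analytically in the next Section, while the complex eigenvalues of ${\mathcal H}_{eff}$ carry the damping rates in their imaginary parts.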
\section{Dressed States Analysis}
\label{sec:dress}
In this Section we solve the eigenvalue problem for the Hamiltonian ${\mathcal H}_0$ of Eq.~(\ref{eq:h0}) exactly and obtain a basis for further calculations. In a strongly coupled system such as the one under analysis, dressed states~\cite{Cohen77} provide the natural basis, since the system should be viewed as an `atom-cavity molecule' rather than the mere sum of its constituent parts (atom plus cavity mode in this case). We have already remarked in Section~\ref{sec:model} on the difficulty of finding the exact dressed states once coherent driving is included. Alsing {\em et al.}~\cite{Alsing92} succeeded in obtaining the exact solution for a two-level atom when the driving field is resonant with the cavity mode. They recognized that the eigenstates can be expressed as a direct product of field and atomic states, where the field states are displaced squeezed states, which simplifies the calculation. The driven EIT-Kerr system does not admit field states with similarly convenient properties, so the method of Ref.~\cite{Alsing92} cannot be consistently applied. Instead we opt for an alternative approach, outlined in Section~\ref{sec:drivdamp}.
\subsection{Ground and First Manifold States}
\label{sec:gfman}
We use the notation $|${\em number of photons in cavity mode, atomic energy level} $\rangle$ to denote the bare states. The ground state is
\begin{equation}
\label{eq:e00}
|e_0^{(0)} \rangle = |0,1\rangle \, ,
\end{equation}
and has energy $E_0^{(0)} = 0$. Dressed state $j$ belonging to the manifold $n$ is denoted as $|e_j^{(n)} \rangle$.
There are three first-manifold states, one of them resonant with the cavity mode, the other two non-resonant
\begin{subequations}
\label{eq:firstman}
\begin{eqnarray}
|e_0^{(1)} \rangle &=& \alpha_0^{(1)} |1,1\rangle + \mu_0^{(1)} |0,3\rangle \, , \label{eq:e01} \\
|e_\pm^{(1)} \rangle &=& \alpha_\pm^{(1)} |1,1\rangle + \beta_\pm^{(1)}|0,2\rangle + \mu_\pm^{(1)} |0,3\rangle \, , \label{eq:epm1}
\end{eqnarray}
\end{subequations}
where the coefficients of the bare states are given by
\begin{subequations}
\label{eq:firstcoeff}
\begin{eqnarray}
\alpha_0^{(1)} &=& \frac{1}{\sqrt{1+(g_1/\Omega_c)^2}}\, , \ \mu_0^{(1)} = \frac{g_1/\Omega_c}{\sqrt{1+(g_1/\Omega_c)^2}}\, \label{eq:ce01}
\end{eqnarray}
\begin{eqnarray}
\alpha_\pm^{(1)} &=& -\frac{g_1/\Omega_c}{\sqrt{1+(g_1/\Omega_c)^2+(\epsilon_\pm^{(1)}/\Omega_c)^2}}\, , \nonumber \\
\beta_\pm^{(1)} &=& -\frac{i\epsilon_\pm^{(1)}/\Omega_c}{\sqrt{1+(g_1/\Omega_c)^2+(\epsilon_\pm^{(1)}/\Omega_c)^2}}\, , \ \\
\mu_\pm^{(1)} &=& \frac{1}{\sqrt{1+(g_1/\Omega_c)^2+(\epsilon_\pm^{(1)}/\Omega_c)^2}}\, . \nonumber \label{eq:cepm1}
\end{eqnarray}
\end{subequations}
The energies of these eigenstates are given by $E_j^{(1)} = \hbar(\omega_{cav} + \epsilon_j^{(1)})$, where
\begin{subequations}
\label{eq:en0pm}
\begin{eqnarray}
\epsilon_0^{(1)} &=& 0 \, , \\
\epsilon_\pm^{(1)} &=& \frac{\delta}{2} \pm \sqrt{\biggl(\frac{\delta}{2} \biggr)^2 + \Omega_c^2 \left( 1 + \frac{g_1^2}{\Omega_c^2}\right)} \, .
\end{eqnarray}
\end{subequations}
Note that $\sum_{i=\pm,0} \epsilon_i^{(1)} = \delta$, reflecting the fact that the cavity mode is detuned from a one-photon excitation of the atom (see Fig.~\ref{fig:atom}).
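These three energies can also be obtained directly as the roots of the secular equation for the single-excitation block of ${\mathcal H}_0/\hbar$ in the bare basis $\{|1,1\rangle,\, |0,2\rangle,\, |0,3\rangle\}$,
\begin{equation}
\det \begin{pmatrix} -\epsilon & i g_1 & 0 \\ -i g_1 & \delta-\epsilon & i\Omega_c \\ 0 & -i\Omega_c & -\epsilon \end{pmatrix}
= \epsilon \left[ \Omega_c^2 + g_1^2 + \delta\epsilon - \epsilon^2 \right] = 0 \, ,
\end{equation}
whose roots are $\epsilon_0^{(1)} = 0$ and the two values $\epsilon_\pm^{(1)}$ of Eq.~(\ref{eq:en0pm}); their sum equals the trace $\delta$, as noted above.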
\subsection{Second and Higher Manifold States}
\label{sec:secman}
The second and higher manifold states can be written in a generic form,
\begin{eqnarray}
|e_k^{(n)}\rangle &=& \alpha_k^{(n)} |n,1\rangle + \beta_k^{(n)}|n-1,2\rangle \nonumber \\
&\ &+ \mu_k^{(n)} |n-1,3\rangle + \nu_k^{(n)} |n-2,4\rangle \, , \label{eq:ekn}
\end{eqnarray}
with $n \geq 2$ being the manifold label.
There are four states in each manifold, with energies $E_k^{(n)} = \hbar (n \omega_{cav} + \epsilon_k^{(n)})$. The coefficients of these states are
\begin{widetext}
\begin{subequations}
\label{eq:secpluscoeff}
\begin{eqnarray}
\alpha_k^{(n)} &=& -i\, \frac{g_1g_2\sqrt{n(n-1)}}{\epsilon_k^{(n)}\Omega_c} \biggl[ 1-\frac{\epsilon_k^{(n)}(\epsilon_k^{(n)}-\Delta)}{g_2^2(n-1)} \biggr] \, \nu_k^{(n)} \, , \\
\beta_k^{(n)} &=& \frac{g_2\sqrt{n-1}}{\Omega_c} \, \biggl[ 1-\frac{\epsilon_k^{(n)}(\epsilon_k^{(n)}-\Delta)}{g_2^2(n-1)} \biggr] \, \nu_k^{(n)} \, , \\
\mu_k^{(n)} &=& -i\, \frac{\epsilon_k^{(n)}-\Delta}{g_2\sqrt{n-1}}\, \nu_k^{(n)} \, , \\
\nu_k^{(n)} &=& \Biggl\{ 1 + \biggl(\frac{\epsilon_k^{(n)}-\Delta}{g_2\sqrt{n-1}} \biggr)^2 + \biggl( \frac{g_2\sqrt{n-1}}{\Omega_c} \biggr)^2 \biggl[ 1+n\biggl(\frac{g_1}{\epsilon_k^{(n)}} \biggr)^2\biggr] \biggl[ 1-\frac{\epsilon_k^{(n)}(\epsilon_k^{(n)}-\Delta)}{g_2^2(n-1)} \biggr]^2 \Biggr\}^{-1/2} \, .
\end{eqnarray}
\end{subequations}
The exact energies of the four states within a given manifold are found to be, in increasing order,
\begin{subequations}
\label{eq:epsilonkn}
\begin{eqnarray}
\epsilon_{1,2}^{(n)} &=& \frac{C}{4} - \frac{1}{2} \sqrt{\frac{C^2}{4}-\frac{2A}{3}+D} \mp \frac{1}{2} \sqrt{\frac{C^2}{4}-\frac{4A}{3}-D + \frac{2B+AC+C^3/4}{\sqrt{C^2/4-2A/3+D}}} \, , \\
\epsilon_{3,4}^{(n)} &=& \frac{C}{4} + \frac{1}{2} \sqrt{\frac{C^2}{4}-\frac{2A}{3}+D} \mp \frac{1}{2} \sqrt{\frac{C^2}{4}-\frac{4A}{3}-D - \frac{2B+AC+C^3/4}{\sqrt{C^2/4-2A/3+D}}} \, ,
\end{eqnarray}
\end{subequations}
where the following abbreviations have been used:
\begin{subequations}
\label{eq:constants}
\begin{eqnarray}
A &=& \Delta\delta - g_1^2n - g_2^2(n-1) - \Omega_c^2 \, , \ C = \Delta + \delta \, , \\
B &=& \Delta \bigl[ g_1^2n + \Omega_c^2 \bigr] + \delta \, g_2^2 (n-1) \, , \ G^2 = (g_1g_2)^2 \, n(n-1) \, , \\
X_1 &=& 2A^3 + 9A(BC-G^2) + 27(B^2-C^2G^2) \, , \ X_2 = A^2 + 3BC +12G^2 \, , \\
X &=& \sqrt[3]{X_1+\sqrt{X_1^2-4X_2^3}} \, , \ Y = X_2/X \, , \ D = \bigl( 2^{1/3}Y + 2^{-1/3}X \bigr)/3 \, .
\end{eqnarray}
\end{subequations}
\end{widetext}
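The closed-form energies above are easily cross-checked numerically. A minimal sketch (Python, with illustrative parameter values that are our own assumptions and $\hbar = 1$) diagonalises the $4\times 4$ block of ${\mathcal H}_0/\hbar$ for a given manifold, written in the bare basis $\{|n,1\rangle,\, |n-1,2\rangle,\, |n-1,3\rangle,\, |n-2,4\rangle\}$:
\begin{verbatim}
import numpy as np

# Minimal numerical cross-check (illustrative parameters, hbar = 1): the n-th
# manifold block of H0 in the basis {|n,1>, |n-1,2>, |n-1,3>, |n-2,4>}.
def manifold_block(n, delta, Delta, g1, g2, Omega_c):
    sn, sn1 = np.sqrt(n), np.sqrt(n - 1)
    return np.array(
        [[0.0,           1j * g1 * sn,  0.0,            0.0],
         [-1j * g1 * sn, delta,         1j * Omega_c,   0.0],
         [0.0,          -1j * Omega_c,  0.0,            1j * g2 * sn1],
         [0.0,           0.0,          -1j * g2 * sn1,  Delta]], dtype=complex)

H2 = manifold_block(2, delta=0.0, Delta=0.1, g1=6.0, g2=6.0, Omega_c=2.0)
eps = np.linalg.eigvalsh(H2)                   # the four energies epsilon_k^(2)
assert abs(eps.sum() - (0.1 + 0.0)) < 1e-10    # trace rule: sum = Delta + delta
\end{verbatim}
The four eigenvalues returned by the routine should coincide with the closed-form expressions of Eqs.~(\ref{eq:epsilonkn}), and their sum reproduces the trace rule discussed next.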
Note that for the $n$--th ($n \geq 2$) manifold, $\sum_{i=1}^4 \epsilon_i^{(n)} = \Delta +\delta$, which is the two-photon detuning of the atom from the cavity resonance. Equations~(\ref{eq:ekn})--(\ref{eq:epsilonkn}) give the exact eigenstates and eigenvalues of Hamiltonian~(\ref{eq:h0}), which can therefore be rewritten in terms of the polariton operators as
\begin{eqnarray}
\label{eq:h0pol}
\mathcal{H}_0 &=& \hbar\epsilon_-^{(1)}\, p_-^{(1)\dagger}p_-^{(1)} + \hbar\epsilon_+^{(1)}\, p_+^{(1)\dagger}p_+^{(1)} \nonumber \\
&\ & + \sum_{n=2}^\infty \sum_{j=1}^4 \hbar\epsilon_j^{(n)}\, p_{kj}^{(n)\dagger}p_{kj}^{(n)} \, ,
\end{eqnarray}
where the polariton operators are defined as $p_{ij}^{(n)} = |e_i^{(n-1)}\rangle\langle e_j^{(n)}|$. Index $k$ in the second row of Eq.~(\ref{eq:h0pol}) is a dummy index, since $p_{kj}^{(n)\dagger}p_{kj}^{(n)} = |e_j^{(n)}\rangle\langle e_j^{(n)}|$. Polariton operators $p_{0,\pm}^{(1)} = |e_0^{(0)}\rangle\langle e_{0,\pm}^{(1)}|$ can be written in a relatively simple form. These expressions, as well as a short discussion on the statistical properties of polaritons, are given in Appendix~\ref{sec:app}.
\begin{figure}
\caption{Schematic representation of the dressed states in the rotating frame: ground, first and second manifolds. The two states $|e_0^{(0)}\rangle$ and $|e_0^{(1)}\rangle$ are degenerate in this frame.}
\label{fig:levels}
\end{figure}
The Hamiltonian~(\ref{eq:h0pol}) is written in a frame rotating at the cavity frequency $\omega_{cav}$. The resulting energy level structure has the form shown in Fig.~\ref{fig:levels}: dressed states with the same detuning relative to the cavity resonance are formally degenerate. There are only two such degenerate eigenstates: the ground state $|e_0^{(0)} \rangle$ and the first manifold state $|e_0^{(1)} \rangle$. Driving this transition coherently can be expected to yield a dynamic Stark splitting. This indeed happens, and will be elaborated upon in Section~\ref{sec:dynstark}, but first we turn to the problem of how to include driving and damping in our polariton model.
\section{Driving and Damping Terms}
\label{sec:drivdamp}
The simplest way to include off-diagonal terms (as associated with driving and damping) into the effective Hamiltonian, written in the basis of states calculated in Section~\ref{sec:dress}, is to express the atomic and field operators in terms of operators describing the transitions between these states. Here again, we make a distinction between first and higher manifolds, a distinction imposed by the very different structure of these two groups of manifolds.
The first manifold dressed states can be expressed in terms of bare states as ${\mathbf b}_1 = {\mathbb M}_1{\mathbf d}_1$, where ${\mathbf b}_1 = \bigl( |1,\, 1\rangle,\, |0,\, 2\rangle\, , |0,\, 3\rangle \bigr)^{{\rm T}}$ and ${\mathbf d}_1 = \bigl( |e_-^{(1)}\rangle,\, |e_0^{(1)}\rangle,\, |e_+^{(1)}\rangle \bigr)^{{\rm T}}$. The transformation matrix is
\begin{eqnarray}
\label{eq:firstmatrix}
{\mathbb M}_1 &=& \begin{pmatrix} -\frac{g_1/\Omega_c}{N_-} & \frac{1}{N_0} & -\frac{g_1/\Omega_c}{N_+} \\ i\frac{\epsilon_-^{(1)}/\Omega_c}{N_-} & 0 & i\frac{\epsilon_+^{(1)}/\Omega_c}{N_+} \\ \frac{1}{N_-} & \frac{g_1/\Omega_c}{N_0} & \frac{1}{N_+} \end{pmatrix} \, ,
\end{eqnarray}
where
\begin{subequations}
\label{eq:ns}
\begin{eqnarray}
N_0 &=& \sqrt{1+\left( g_1/\Omega_c\right)^2}\, , \\
N_\pm &=& \sqrt{1+\left( \epsilon_\pm/\Omega_c \right)^2 + \left( g_1/\Omega_c\right)^2}\, .
\end{eqnarray}
\end{subequations}
Similarly, we can write for the higher manifolds ${\mathbf b}_n = {\mathbb M}_n{\mathbf d}_n$, $n \geq 2$, where ${\mathbf b}_n = \bigl( |n,\, 1\rangle,\, |n-1,\, 2\rangle\, , |n-1,\, 3\rangle\, , |n-2,\, 4\rangle \bigr)^{{\rm T}}$ and ${\mathbf d}_n = \bigl( |e_1^{(n)}\rangle,\, |e_2^{(n)}\rangle,\, |e_3^{(n)}\rangle,\, |e_4^{(n)}\rangle \bigr)^{{\rm T}}$, and
\begin{eqnarray}
\label{eq:nthmatrix}
{\mathbb M}_n &=& \begin{pmatrix}
\alpha_1^{(n)*} & \alpha_2^{(n)*} & \alpha_3^{(n)*} & \alpha_4^{(n)*} \\
\beta_1^{(n)*} & \beta_2^{(n)*} & \beta_3^{(n)*} & \beta_4^{(n)*} \\
\mu_1^{(n)*} & \mu_2^{(n)*} & \mu_3^{(n)*} & \mu_4^{(n)*} \\
\nu_1^{(n)*} & \nu_2^{(n)*} & \nu_3^{(n)*} & \nu_4^{(n)*} \end{pmatrix} \, ,
\end{eqnarray}
with the coefficients given by Eqs.~(\ref{eq:secpluscoeff}). These expressions provide all of the information needed for the subsequent calculations.
\subsection{External Driving}
\label{sec:extdrive}
Strongly coupled systems are very sensitive to the number of photons. In fact, the most interesting regimes include one or a few photons. In the system under investigation, the effect of photon blockade occurs when the dynamics is limited to the exchange of excitation between the ground state and the first manifold. It is therefore natural to express the field annihilation operator $a$ in terms of the transitions it produces between two adjacent manifolds. In general, deexcitation from manifold $n$ to manifold $n-1$ occurs via the operator $a^{(n)}$, expressed in terms of the bare states as
\begin{subequations}
\label{eq:an}
\begin{eqnarray}
&\, &a^{(1)} = |0,\, 1\rangle\langle 1,\, 1| \, \\
&\, &a^{(2)} = \sqrt{2} |1,\, 1\rangle\langle 2,\, 1| + |0,\, 2\rangle\langle 1,\, 2| + |0,\, 3\rangle\langle 1,\, 3| \, \\
&\, &a^{(n)} = \sqrt{n} |n-1,\, 1\rangle\langle n,\, 1| + \sqrt{n-1} \left( |n-2,\, 2\rangle\langle n-1,\, 2| \right. \nonumber \\
&\ & \left. + |n-2,\, 3\rangle\langle n-1,\, 3| \right) + \sqrt{n-2} |n-3,\, 4\rangle\langle n-2,\, 4| \, ,
\end{eqnarray}
\end{subequations}
and the full annihilation operator is then given by the sum over all manifolds, $a = \sum_{n=1}^\infty a^{(n)}$. For the transitions from the first manifold to the ground state we obtain
\begin{subequations}
\label{eq:rabifirst}
\begin{eqnarray}
\label{eq:a1}
{\mathcal E}_p a^{(1)} = \Omega_-^{(1,0)} p_-^{(1)} + \Omega_0^{(1,0)} p_0^{(1)} + \Omega_+^{(1,0)} p_+^{(1)},
\end{eqnarray}
with the polariton operators defined by $|e_0^{(0)}\rangle = p_j^{(1)} |e_j^{(1)}\rangle,\ j = 0,\pm$. The effective Rabi frequencies $\Omega_j^{(1,0)}$ can be calculated from the matrix~(\ref{eq:firstmatrix}) as
\begin{eqnarray}
\label{eq:omega01}
\Omega_0^{(1,0)} &=& \frac{{\mathcal E}_p}{\sqrt{1+\bigl( g_1/\Omega_c\bigr)^2}} \label{eq:omega0} \, , \\
\Omega_\pm^{(1,0)} &=& -\frac{{\mathcal E}_pg_1}{\sqrt{g_1^2+\Omega_c^2+(\epsilon_\pm^{(1)})^2}} \label{eq:omegapm} \, .
\end{eqnarray}
\end{subequations}
The three terms in the expansion~(\ref{eq:a1}) correspond to the three transitions between the ground state and the three states excited by a single photon. Each transition has an associated effective Rabi frequency $\Omega_j^{(1,0)}$. Note that the negative sign in~(\ref{eq:omegapm}) means the driving of the off-resonant states is out of phase with the driving of the resonant state (see Fig.~\ref{fig:manifolds} $(a)$).
There are twelve possible transitions between the first and the second manifolds, driven with the effective Rabi frequencies $\Omega_{ij}^{(2,1)}$, where
\begin{equation}
\label{eq:omega12}
\Omega_{ij}^{(2,1)} = {\mathcal E}_p \left[ \sqrt{2}\, \alpha_i^{(1)*}\alpha_j^{(2)} + \beta_i^{(1)*}\beta_j^{(2)} + \mu_i^{(1)*}\mu_j^{(2)} \right] \, ,
\end{equation}
with $i = 0,\, \pm$; $j = 1,\ldots,4$, where it follows from Eq.~(\ref{eq:e01}) that $\beta_0^{(1)} \equiv 0$ (see Fig.~\ref{fig:manifolds} $(b)$). Since the second and subsequent manifolds have four states each, there are sixteen transitions between adjacent higher manifolds, with effective driving Rabi frequencies
\begin{eqnarray}
\label{eq:omegann}
\Omega_{ij}^{(n,n-1)} &=& {\mathcal E}_p \left[ \sqrt{n}\, \alpha_i^{(n-1)*}\alpha_j^{(n)} + \sqrt{n-1}\, \left( \beta_i^{(n-1)*}\beta_j^{(n)} \right. \right. \nonumber \\
&\ &\left. \left. + \mu_i^{(n-1)*}\mu_j^{(n)} \right) + \sqrt{n-2}\, \nu_i^{(n-1)*}\nu_j^{(n)} \right] \, ,
\end{eqnarray}
with $n>2$ and $i,\, j = 1,\ldots,4$ (see Fig.~\ref{fig:manifolds} $(c)$). The coefficients in this expression are given in Eqs~(\ref{eq:secpluscoeff}).
Note that the Rabi frequencies $\Omega_{ij}^{(n,n-1)}$ can also be obtained from $i\hbar\Omega_{ij}^{(n,n-1)} = \langle e_i^{(n-1)}|{\mathcal H}_d|e_j^{(n)}\rangle$, with ${\mathcal H}_d$ given by Eq.~(\ref{eq:hd}). However, the expansion of the operator $a$ in terms of contributions to different transitions, Eq.~(\ref{eq:an}), offers a clearer physical picture of the processes involved in the dynamics. The driving Hamiltonian can therefore be written in terms of the polariton operators as
\begin{eqnarray}
\label{eq:hdrivepol}
{\mathcal H}_d &=& i\hbar \mathcal{E}_p \, \left( a - a^\dagger \right) \nonumber \\
&=& i\hbar\sum_{i=\pm,0} \Omega_i^{(1,0)} \left( p_i^{(1)} - p_i^{(1)\dagger} \right) \nonumber \\
&\ & +i\hbar\sum_{i=\pm,0} \sum_{j=1}^4 \Omega_{ij}^{(2,1)} \left( p_{ij}^{(2)} - p_{ij}^{(2)\dagger} \right) \nonumber \\
&\ & + i\hbar\sum_{n=3}^\infty \sum_{i,j=1}^4 \Omega_{ij}^{(n,n-1)} \left( p_{ij}^{(n)} - p_{ij}^{(n)\dagger} \right) \, .
\end{eqnarray}
\begin{widetext}
Expression~(\ref{eq:hdrivepol}) is the expansion of the driving Hamiltonian in terms of the transitions that can occur between any two dressed states, as shown in Fig.~\ref{fig:manifolds}. The advantage of this expansion over the original form of the driving becomes apparent in the strong-coupling/low-photon-number regime. In this regime, the expansion~(\ref{eq:hdrivepol}) can be truncated at the order justified by the problem, while still retaining all (but not more!) of the relevant contributions from the external coherent driving. We will illustrate this assertion in Section~\ref{sec:dynstark}.
\begin{figure}
\caption{Transition between the polaritons in the adjacent manifolds. The cavity resonance is located at the center of each manifold. Figures represent: $(a)$ Transitions between the ground state and first manifold states; $(b)$ Transitions between first and second manifolds states; $(c)$ Transitions between polaritons in manifolds $(n-1)$ and $n$ for $n \geq 3$.}
\label{fig:manifolds}
\end{figure}
\end{widetext}
\subsection{Damping by Reservoir Modes}
The remaining part of the dynamics to be expressed in the polariton representation is the damping by the reservoir modes. In Section~\ref{sec:model} it was explained how the damping enters the effective Hamiltonian, namely after a trace has been performed over the reservoir variables. The resulting operator is anti-Hermitian and has the form
\begin{equation}
\label{eq:hres}
{\mathcal H}_{res} = -i\hbar\kappa a^\dagger a -i\hbar(\gamma_1+\gamma_2) \sigma_{22} -i\hbar\gamma_3 \sigma_{44} \, .
\end{equation}
We follow the reasoning of the previous Section and expand the relevant field and atomic operators in terms of the contributions from the individual manifolds:
\begin{subequations}
\label{eq:dampops}
\begin{eqnarray}
a^\dagger a &=& \sum_{n=1}^\infty \left( a^{(n)\dagger} a^{(n)} \right) \\
&=& |1,\, 1\rangle\langle 1,\, 1| \nonumber \\
&+& \sum_{n=2}^\infty \left[ n \, |n,\, 1\rangle\langle n,\, 1| + (n-1)\, \left( |n-1,\, 2\rangle\langle n-1,\, 2| \right. \right. \nonumber \\
&+& \left. \left. |n-1,\, 3\rangle\langle n-1,\, 3| \right) + (n-2) |n-2,\, 4\rangle\langle n-2,\, 4| \right] \, , \nonumber
\end{eqnarray}
\begin{eqnarray}
\sigma_{22} &=& \sum_{n=1}^\infty \sigma_{22}^{(n)} = \sum_{n=1}^\infty |n-1,\, 2\rangle\langle n-1,\, 2| \, , \\
\sigma_{44} &=& \sum_{n=2}^\infty \sigma_{44}^{(n)} = \sum_{n=2}^\infty |n-2,\, 4\rangle\langle n-2,\, 4| \, .
\end{eqnarray}
\end{subequations}
The operator ${\mathcal H}_{res}$ clearly takes a block-diagonal form in the dressed state representation, as the operator expansion~(\ref{eq:dampops}) includes terms containing every possible dressed level within a given manifold. The diagonal terms describe damping of the dressed states through their direct decay into the reservoir. Off-diagonal terms couple two different dressed levels within a given manifold; this coupling arises because both levels are coupled to the same reservoir mode. If each of these diagonal blocks is diagonalized once more in the presence of damping, each level acquires a shift: the real part of the complex eigenvalue shifts the energy and the imaginary part gives the damping rate. Energies and damping rates calculated in this manner coincide with the experimentally observed ones (in the absence of driving). It was pointed out by Harris~\cite{Harris89} and Imamo\u{g}lu~\cite{Imamoglu89a} that these cross terms can be essential in creating destructive interference between the amplitudes of the relevant transitions (see also Li and Xiao~\cite{Li95}).
The contribution of the off-diagonal terms to the eigenenergies and damping rates (the diagonal elements of ${\mathcal H}_{res}$) is very small, so for simplicity we ignore it in the remainder of this paper, although we retain the off-diagonal terms in the general expression for the damping Hamiltonian.
We write the damping Hamiltonian in a form that emphasizes the diagonal and off-diagonal contributions,
\begin{eqnarray}
\label{eq:hrespol}
{\mathcal H}_{res} &=&-i\hbar \sum_{i=\pm,0} \Gamma_i^{(1)} p^{(1)^\dagger}_{i} p^{(1)}_{i} \nonumber \\
&\ & -i\hbar\sum_{j \neq k = \pm,0} \Gamma_{jk}^{(1)} p^{(1)^\dagger}_{j} p^{(1)}_{k} \nonumber \\
&\ & -i\hbar\sum_{n=2}^\infty \sum_{j=1}^4 \Gamma_{jj}^{(n)} \, p^{(n)^\dagger}_{ij} p^{(n)}_{ij} \nonumber \\
&\ & -i\hbar\sum_{n=2}^\infty \sum_{j \neq k} \Gamma_{jk}^{(n)} \, p^{(n)^\dagger}_{ij} p^{(n)}_{ik} \, ,
\end{eqnarray}
where
\begin{subequations}
\begin{eqnarray}
\label{eq:damprates}
&\, &\Gamma_{jk}^{(n)} = n\kappa \, \alpha_j^{(n)*}\alpha_k^{(n)} + \left[ (n-1)\kappa + \gamma_1 + \gamma_2 \right] \beta_j^{(n)*} \beta_k^{(n)} \nonumber \\
&\ &+ (n-1)\kappa\, \mu_j^{(n)*} \mu_k^{(n)} + \left[ (n-2)\kappa + \gamma_3 \right] \nu_j^{(n)*}\nu_k^{(n)} \, .
\end{eqnarray}
\end{subequations}
For each manifold $(n)$, the matrix $\Gamma_{jk}^{(n)}$ is positive definite, so we can write~\cite{Zhou97,Akram01}
\begin{subequations}
\begin{eqnarray}
\label{eq:costheta}
\Gamma_{jk}^{(n)} &=& \cos{\theta_{jk}}\, \sqrt{\Gamma_{jj}^{(n)}\Gamma_{kk}^{(n)}} \, , \\
\cos{\theta_{jk}} &=& \frac{\bm{\mu}_j\cdot \bm{\mu}_k}{|\bm{\mu}_j||\bm{\mu}_k|} \, ,
\end{eqnarray}
\end{subequations}
where $\bm{\mu}_{j,k}$ can be thought of as the effective dipole moments of the transitions contributing to the off-diagonal term. Furthermore, we note that the diagonal matrix elements belonging to the first manifold can be written in a simple closed form as
\begin{subequations}
\label{eq:firstdamping}
\begin{eqnarray}
\Gamma_0^{(1)} &=& \frac{\kappa}{1+\left( g_1/\Omega_c \right)^2} \label{eq:gamma0} \, , \\
\Gamma_\pm^{(1)} &=& \frac{\kappa g_1^2 + (\gamma_1+\gamma_2)\left(\epsilon_\pm^{(1)}\right)^2}{g_1^2+\Omega_c^2+\left(\epsilon_\pm^{(1)}\right)^2} \, .
\end{eqnarray}
\end{subequations}
As before, all of the $\Gamma$'s could have also been calculated from $-i\hbar\Gamma_j^{(n)} = \langle e_j^{(n)}|{\mathcal H}_{res}|e_j^{(n)}\rangle$ and $-i\hbar\Gamma_{jk}^{(n)} = \langle e_j^{(n)}|{\mathcal H}_{res}|e_k^{(n)}\rangle$, but again, the outlined procedure offers a deeper physical insight.
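As an illustration, take the parameter values used later for Fig.~\ref{fig:mollow} ($\kappa = 0.25$, $\gamma_1 = \gamma_2 = 0.1$, $g_1 = 6$, $\Omega_c = 2$, $\delta = 0$, in dimensionless units). Then $\epsilon_\pm^{(1)} = \pm\sqrt{\Omega_c^2 + g_1^2} = \pm\sqrt{40}$, and Eqs.~(\ref{eq:firstdamping}) give
\begin{equation}
\Gamma_0^{(1)} = \frac{0.25}{1+3^2} = 0.025\, , \qquad
\Gamma_\pm^{(1)} = \frac{0.25\cdot 36 + 0.2\cdot 40}{36+4+40} \approx 0.21\, ,
\end{equation}
so the resonant polariton, which inherits the dark-state character of the EIT system, decays almost an order of magnitude more slowly than its off-resonant partners.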
\subsection{Quantum Jumps}
\label{sec:qjumps}
The effective Hamiltonian~(\ref{eq:heff}) describes the time evolution of the quantum system between successive jumps. The effect of quantum jumps is not included, and the proper way to include these is the subject of the quantum trajectories approach~\cite{Carmichael93B}. Here, we briefly describe the transformation of collapses into the dressed state basis.
Quantum jumps are included in the master equation for the time evolution of the density matrix $\rho$ via terms of the form $C_j\rho C_j^\dagger$, where $C_j$ denotes a collapse operator from the set~(\ref{eq:collapse}). Each of the collapse operators can then be expressed in terms of polariton operators, and a new set of collapse operators $S_{ij}^{(n)} = \sqrt{\Gamma_j^{(n)}} p^{(n)}_{ij}$ can be obtained. Note that the effective master equation resulting from the polariton expansion will contain cross terms in the collapse operators, giving damping terms of the form $\Gamma_{jk}^{(n)}\left( 2S_{ij}^{(n)}\rho S_{ik}^{(n)\dagger} - S_{ik}^{(n)\dagger}S_{ij}^{(n)}\rho - \rho S_{ik}^{(n)\dagger}S_{ij}^{(n)}\right)$. These cross terms have a very important role in modifying the emission rate from the polariton states to the reservoir. Such terms have been studied and well understood for the case of the modification of spontaneous emission in multilevel atoms~\cite{Agarwal74,Cardimona82}.
\section{Dynamic Stark Splitting}
\label{sec:dynstark}
It was proven earlier~\cite{Rebic99,Werner99} that the EIT-Kerr system can behave as an effective two-level system, with the states $|e_0^{(0)}\rangle$ and $|e_0^{(1)}\rangle$ being its ground and excited states. The two-level approximation is best for large values of the effective dipole coupling, i.e. $(g_1/\Omega_c)^2 \gg 1$ and $g_2 \gg \kappa$. The two states of the effective model are coupled by the external field, with the Rabi frequency of the coupling, $\Omega_0^{(1,0)}$, given by Eq.~(\ref{eq:omega0}), and the decay rate of the excited state, $\Gamma_0^{(1)}$, given by Eq.~(\ref{eq:gamma0}). It is therefore expected that this system exhibits a dynamic Stark splitting, characteristic of every driven two-state system. We now explore this effect in more detail, using the polariton model.
\begin{figure}
\caption{Graphical representation of the dynamic Stark splitting of the effective two-level system.}
\label{fig:stark}
\end{figure}
Recall that the effective Hamiltonian~(\ref{eq:heff}) is non-Hermitian. We truncate the manifold expansion of the polariton Hamiltonian after the first manifold. Moreover, we concentrate on the ground and excited states of the effective two-level system and write its polariton Hamiltonian in the reduced form
\begin{equation}
\label{eq:htwolevel}
{\mathcal H}_{red} = i\hbar\Omega_0^{(1,0)} \left( p_0^{(1)} - p_0^{(1)\dagger} \right) -i\hbar\Gamma_0^{(1)} p_0^{(1)\dagger} p_0^{(1)} \, .
\end{equation}
The eigenvalues of such an effective Hamiltonian are complex:
\begin{eqnarray}
\label{eq:eigred}
\varepsilon_\pm &=& \tilde{\epsilon}_\pm - i\tilde{\Gamma}_\pm \nonumber \\
&=& -i\frac{\Gamma_0^{(1)}}{2} \pm \sqrt{\Omega_0^{(1,0)^2} - \left(\Gamma_0^{(1)}/2 \right)^2} \, .
\end{eqnarray}
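Explicitly, in the basis $\{|e_0^{(0)}\rangle,\, |e_0^{(1)}\rangle\}$ the reduced Hamiltonian~(\ref{eq:htwolevel}) has the matrix representation
\begin{equation}
\frac{{\mathcal H}_{red}}{\hbar} =
\begin{pmatrix} 0 & i\Omega_0^{(1,0)} \\ -i\Omega_0^{(1,0)} & -i\Gamma_0^{(1)} \end{pmatrix} \, ,
\end{equation}
and the characteristic equation $\varepsilon^2 + i\Gamma_0^{(1)}\varepsilon - \Omega_0^{(1,0)^2} = 0$ yields the eigenvalues~(\ref{eq:eigred}).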
The real parts of these eigenvalues represent the energies of the dressed states, while the imaginary parts represent their associated decay rates. We identify two operating regimes, depending on the size of $\Omega_0^{(1,0)}$, i.e. the size of ${\mathcal E}_p$:
\begin{itemize}
\item \underline{Regime 1}: $\Omega_0^{(1,0)} < \Gamma_0^{(1)}/2$ \\
\begin{subequations}
\label{eq:regimes}
\begin{eqnarray}
\tilde{\epsilon}_\pm &=& 0 \, , \nonumber \\
\tilde{\Gamma}_\pm &=& \frac{\Gamma_0^{(1)}}{2} \pm \sqrt{\left(\Gamma_0^{(1)}/2 \right)^2-\Omega_0^{(1,0)^2}}\, . \label{eq:wkdrivdec}
\end{eqnarray}
\item \underline{Regime 2}: $\Omega_0^{(1,0)} > \Gamma_0^{(1)}/2$ \\
\begin{eqnarray}
\tilde{\epsilon}_\pm &=& \pm \sqrt{\Omega_0^{(1,0)^2}-\left(\Gamma_0^{(1)}/2 \right)^2} \label{eq:epm} \, , \nonumber \\
\tilde{\Gamma}_\pm &=& \frac{\Gamma_0^{(1)}}{2}\, .
\end{eqnarray}
\end{subequations}
\end{itemize}
The eigenstates corresponding to the Stark-split states are
\begin{eqnarray}
\label{eq:rabistates}
|\psi_\pm^{(0,1)} \rangle &=& \frac{1}{\sqrt{2}} \, \bigl( |e_0^{(0)} \rangle \pm |e_0^{(1)} \rangle \bigr) \nonumber \\
&=& \frac{1}{\sqrt{2}} \, \biggl( |0,1 \rangle \pm \frac{|1,1\rangle + g_1/\Omega_c \, |0,3 \rangle}{\sqrt{1+(g_1/\Omega_c)^2}} \biggr) \, .
\end{eqnarray}
The transition between the two regimes happens at $\Omega_0^{(1,0)} = \Gamma_0^{(1)}/2$, or, in terms of the original parameters, at
\begin{equation}
\label{eq:regtrans}
{\mathcal E}_p = \frac{\kappa/2}{\sqrt{1+\bigl(g_1/\Omega_c \bigr)^2}} \, .
\end{equation}
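As a numerical illustration, for the parameter values quoted later for Fig.~\ref{fig:mollow} ($\kappa = 0.25$, $g_1 = 6$, $\Omega_c = 2$, in dimensionless units) the threshold~(\ref{eq:regtrans}) evaluates to
\begin{equation}
{\mathcal E}_p = \frac{0.25/2}{\sqrt{1+3^2}} \approx 0.0395 \approx 0.16\, \kappa \, ,
\end{equation}
which is the value quoted in the discussion of Fig.~\ref{fig:weakex} below and again in Section~\ref{sec:fluorescence}.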
Note that there is no contribution of the atomic decay rates in this simple model. This absence occurs because the excited state in the effective model is a dark state with respect to atomic spontaneous emission. The dynamic Stark splitting effect is shown schematically in Fig.~\ref{fig:stark}.
Another effective two-level system was predicted by Tian and Carmichael~\cite{Tian92}, who studied the case of a two-level atom in the cavity, driven on the lower Rabi resonance. They also predicted Stark splitting~\cite{Carmichael94} comparable to that presented in this paper.
For further comparison, the analysis of a two-level atom with spontaneous emission rate $\gamma$, coupled with strength $g$ to the vacuum cavity mode, yields results identical to those of Eqs.~(\ref{eq:regimes}), with the replacements $\Omega_0^{(1,0)} \rightarrow g$ and $\Gamma_0^{(1)} \rightarrow \gamma+\kappa$. Thus, the dynamic Stark splitting found in the EIT-Kerr system can be thought of as the exact counterpart of the vacuum Rabi splitting characteristic of a two-level atom coupled to the cavity mode. The distinctive feature of the EIT-Kerr system is that the parameters $\Omega_0^{(1,0)}$ and $\Gamma_0^{(1)}$ can be tuned simply by adjusting the coupling laser Rabi frequency $\Omega_c$.
\begin{figure}
\caption{Numerical (dashed lines) eigenenergies for the full system (cavity mode subspace truncated at 40) compared with the analytical solution of Eq.~(\ref{eq:eigred}).}
\label{fig:rabi}
\end{figure}
How well do these results describe the full EIT-Kerr system, including the damping terms, as described by Hamiltonian~(\ref{eq:heff})? Fig.~\ref{fig:rabi} compares the two solutions. There is very good agreement between the numerical solution and the analytical approximation; the latter breaks down only for large values of ${\mathcal E}_p$, where truncation after the first manifold is no longer justified, since the contribution of states from higher manifolds cannot be ignored.
One additional feature can be seen by looking at the eigenenergies in Fig.~\ref{fig:rabi}. Notice that the Stark splitting of the eigenenergies does not start at ${\mathcal E}_p = 0^+$ but at a small, finite value of ${\mathcal E}_p$. This behaviour, and the related behaviour of the decay rates for weak excitation, is shown in Fig.~\ref{fig:weakex}. In the limit ${\mathcal E}_p = 0$, the ground and excited states of the effective two-level system are uncoupled, so the decay rates separate into that of the ground state (which is zero) and that of the excited state, $\Gamma_0^{(1)}$. Increasing the driving strength mixes these two states so that their decay rates become approximately equal. Once ${\mathcal E}_p$ exceeds the value given by Eq.~(\ref{eq:regtrans}), the energy levels shift in opposite directions, giving rise to the Stark splitting. For the parameters of Fig.~\ref{fig:weakex}, this happens at ${\mathcal E}_p \cong 0.16\kappa$.
\begin{figure}
\caption{Eigenenergies and their associated decay rates of the Stark-split states in the weak excitation regime, for the same set of parameters as in Fig.~\ref{fig:rabi}.}
\label{fig:weakex}
\end{figure}
In Regime 1, ${\mathcal E}_p < (\kappa/2)/\sqrt{1+\bigl(g_1/\Omega_c \bigr)^2}$ and the EIT-Kerr system truly behaves as an effective two-level system, since there is no normal-mode splitting. This is also the ideal photon blockade regime; we refer to it as the {\em weak driving regime}. The case ${\mathcal E}_p > (\kappa/2)/\sqrt{1+\bigl(g_1/\Omega_c \bigr)^2}$ (Regime 2) then covers the intermediate and strong driving regimes, a study of which will be published elsewhere. Once again, we emphasize the similarity between the effective two-level system coupled to the driving field and a two-level atom coupled to a vacuum cavity mode: the behaviour of the Rabi-split states in the latter~\cite{Turchette95b,Kimble94} parallels that of the Stark-split states in the former.
The analysis of the dynamics of the Stark splitting in the dressed state basis offers a simple example of the convenience of the polariton approach. Once the Hamiltonian is expressed in terms of the polaritons and the reduced effective Hamiltonian is identified, the subsequent analysis is considerably simplified.
\section{Fluorescence Spectrum}
\label{sec:fluorescence}
The prediction of dynamic Stark splitting in Section~\ref{sec:dynstark} leads us naturally to an examination of the fluorescence spectrum of the light emitted by the `atom-cavity molecule'. We solve the master equation of the problem numerically to obtain the spectrum, and interpret the result using the insight provided by the polariton analysis.
The master equation of the full atom/cavity system may be written in the bare form as
\begin{eqnarray}
\dot{\rho} = -\frac{i}{\hbar} \left( {\mathcal H}_{eff}\rho - \rho{\mathcal H}_{eff}^\dagger \right) +2\sum_{i=1}^4 C_i\rho C_i^\dagger \label{eq:master} \, ,
\end{eqnarray}
where $\rho$ is the density matrix of the system, ${\mathcal H}_{eff}$ is given by Eq.~(\ref{eq:heff}) and $C_i$ denote the four collapse operators of Eq.~(\ref{eq:collapse}). Using the quantum regression theorem~\cite{Walls94}, we solve the master equation and calculate the steady-state fluorescence spectrum,
\begin{equation}
\label{eq:fluspec}
S_F(\omega) = {\rm Re}{\left[ \lim_{t\rightarrow \infty}\int_0^\infty {\rm d}\tau \langle a^\dagger(t),\, a(t+\tau) \rangle e^{i\omega\tau} \right]}\, .
\end{equation}
Results, for different values of driving ${\mathcal E}_p$, are shown in Figs.~\ref{fig:mollow} and~\ref{fig:flspec}.
\begin{figure}
\caption{Emergence of Mollow triplet with the increasing driving field. Parameters are $\kappa = 0.25$, $\gamma_j = 0.1$, $g_j = 6$, $\Omega_c = 2$, $\delta = 0$ and $\Delta = 0.1$.}
\label{fig:mollow}
\end{figure}
Fig.~\ref{fig:mollow} shows how the central peak (at the frequency of driving) splits into a Mollow triplet~\cite{Mollow69}. The particular values of parameters are given in dimensionless units, and the small value of $\kappa$ is chosen to produce narrow, highly-resolved peaks. For this set of parameters, Eq.~(\ref{eq:regtrans}) predicts a threshold value for the appearance of Mollow sidebands at ${\mathcal E}_p = 0.0395$. The central peak is the result of the two transitions $|\psi_-^{(0)}\rangle \leftrightarrow |\psi_-^{(1)}\rangle$ and $|\psi_+^{(0)}\rangle \leftrightarrow |\psi_+^{(1)}\rangle$ between the Stark doublet states in the ground and excited states (see Fig.~\ref{fig:stark}). Transitions $|\psi_-^{(0)}\rangle \leftrightarrow |\psi_+^{(1)}\rangle$ and $|\psi_+^{(0)}\rangle \leftrightarrow |\psi_-^{(1)}\rangle$ cause the appearance of the sidebands at frequencies $\omega = \omega_{cav}+(\tilde{\epsilon}_+-\tilde{\epsilon}_-)$ and $\omega = \omega_{cav}+(\tilde{\epsilon}_--\tilde{\epsilon}_+)$, respectively, where $\tilde{\epsilon}_\pm$ are given by Eq.~(\ref{eq:epm}).
The linewidths of these peaks can also be calculated. It is straightforward to write the master equation in the polariton picture as
\begin{eqnarray}
\dot{\rho} = -\frac{i}{\hbar} \left( {\mathcal H}_{red}\rho - \rho{\mathcal H}_{red}^\dagger \right) +2\Gamma_0^{(1)} p_0^{(1)}\rho p_0^{(1)\dagger} \label{eq:masterpol} \, ,
\end{eqnarray}
where ${\mathcal H}_{red}$ is given by Eq.~(\ref{eq:htwolevel}). Equations for the density matrix elements in the basis spanned by Stark states $|\psi_\pm\rangle$ of Eq.~(\ref{eq:rabistates}) can be derived using standard methods to give
\begin{subequations}
\label{eq:dmatels}
\begin{eqnarray}
\dot{\rho}_{++} &=& -\Gamma_0^{(1)}\rho_{++} + \frac{\Gamma_0^{(1)}}{2} \, , \\
\dot{\rho}_{+-} &=& -\left( \frac{3\Gamma_0^{(1)}}{2} - i\, 2\Omega_0^{(1,0)} \right)\rho_{+-} \nonumber \\
&\ &- \frac{\Gamma_0^{(1)}}{2}\rho_{-+} -\Gamma_0^{(1)} \, .
\end{eqnarray}
\end{subequations}
From these equations it is easy to read off the spectral linewidths of the Mollow triplet. The central peak has linewidth $\Gamma_0^{(1)}$, while the sidebands have linewidth $3\Gamma_0^{(1)}/2$. This is consistent with the results for resonance fluorescence~\cite{Walls94}.
\begin{figure}
\caption{Semilogarithmic plot of the fluorescence spectrum for the same parameters as in Fig.~\ref{fig:mollow}.}
\label{fig:flspec}
\end{figure}
\begin{figure}
\caption{Schematic depiction of the relevant transitions for Fig.~\ref{fig:flspec}.}
\label{fig:transflspec}
\end{figure}
Given the complex energy level structure of the atom-cavity molecule, it can be expected that transitions other than those producing the Mollow spectrum will be seen in the fluorescence spectrum. This is indeed true, and Fig.~\ref{fig:flspec} shows the additional sidebands. These peaks are relatively small $(\sim 10^{-3})$, so the associated transitions are not expected to contribute significantly to the dynamics. They do, however, cause a departure from the ideal two-level behaviour.
The transitions responsible for the sidebands are identified in Fig.~\ref{fig:transflspec}, where the relevant energy level structure is shown. Contributions of transitions involving states up to the third manifold can be seen. We find peaks at the following frequencies: $\pm\Delta_1 \approx \pm 2.3$, $\pm\Delta_2 \approx \pm 5.7$, $\pm\Delta_3 \approx \pm 6.05$ and $\pm\Delta_4 \approx \pm 6.3$. The transitions corresponding to these peaks can be identified from the energy eigenvalues as $\Delta_1 = \epsilon^{(3)}_3 - \epsilon^{(2)}_3$, $-\Delta_1 = \epsilon^{(3)}_2 - \epsilon^{(2)}_2$, $\Delta_2 = \epsilon^{(2)}_3 - \tilde{\epsilon}^{(1)}_+$, $-\Delta_2 = \epsilon^{(2)}_2 - \tilde{\epsilon}^{(1)}_-$, $\Delta_3 = \epsilon^{(2)}_3 - \tilde{\epsilon}^{(1)}_-$, $-\Delta_3 = \epsilon^{(2)}_2 - \tilde{\epsilon}^{(1)}_+$, $\Delta_4 = \epsilon^{(1)}_+ - \tilde{\epsilon}^{(0)}_-$. The tiny asymmetry between the positions of the positive- and negative-frequency peaks arises from the fact that the polariton states are asymmetrically detuned from the cavity resonance; the source of this asymmetry can be traced to the nonzero atomic detunings $\delta$ and $\Delta$ (see Fig.~\ref{fig:atom}). The linewidths of these sidebands can also be computed using the same method that produced Eq.~(\ref{eq:dmatels}), but the results would be hard to verify, given the large degree of overlap between the adjacent peaks seen in Fig.~\ref{fig:flspec}. We thus have a complete explanation of the fluorescence spectrum.
\section{Conclusion and Outlook}
\label{sec:conclusion}
In this paper, we have presented an exact solution to the eigenvalue problem of the Hamiltonian for a four-level atom strongly coupled to a cavity mode. The strong-coupling regime of cavity QED presents a difficult problem for analytical calculation, as well as for the understanding of the physics involved. We have shown that consistent application of the polariton approach can offer significant insight and even enable a relatively simple analytical treatment of the physical problem. In particular, the problem of an externally driven atom/quantum-field system has been reduced to a problem of composite excitations, transitions between which are effectively driven by classical fields with Rabi frequencies $\Omega_{ij}^{(n,n-1)}$, which were calculated exactly.
The polariton approach can be interpreted as a change of basis in Hilbert space. Such a change can in general simplify the analysis of the problem, because in the dressed state basis the number of degrees of freedom can be significantly reduced compared to a treatment in terms of the bare states. For example, if the atom has $N_a$ levels (degrees of freedom), and the quantum field mode can be safely truncated at $N_c$ Fock states, the problem in the bare state basis has a dimension of at least $N_a \times N_c$. In the dressed state basis, we can identify which $N_p$ dressed states (and associated polaritons) participate in the dynamics, effectively reducing the dimension of the problem to $N_p \leq N_a \times N_c$. The treatment of dynamic Stark splitting in Section~\ref{sec:dynstark} provided a simple but extremely successful example of such a reduction. In other words, from all of the (infinitely many) dimensions of Hilbert space, the polariton approach lets us pinpoint those few that are predominantly involved in the system dynamics.
Of course, this approach does not guarantee that the reduced problem will be analytically solvable. There are some general limits on solvability in the dressed state basis. For example, if the number of atomic levels is $N_a > 4$, an exact algebraic diagonalisation of the interaction Hamiltonian is impossible {\em in principle}, except perhaps in some special cases, since $N_a$ sets the degree of the characteristic polynomial of the eigenvalue problem and polynomials of degree higher than four admit no general solution in radicals. Also, the size of the reduced problem $N_p$ can still be impractically large. While little can be done about the first limitation, for the second we foresee ways to simplify the numerics involved. In particular, the coupled-amplitudes approach and the effective master equation look promising. We are pursuing this avenue at the moment, and will publish our findings elsewhere.
\begin{acknowledgments}
The authors would like to thank S. G. Clark for valuable discussion. This work was supported by the Marsden Fund of the Royal Society of New Zealand.
\end{acknowledgments}
\appendix
\section{Polariton Operators of the First Manifold}
\label{sec:app}
In this Appendix, polariton operators for the first manifold states are given explicitly, and their commutation relations discussed.
From expressions~(\ref{eq:e00}) --~(\ref{eq:firstcoeff}), we deduce the form of the polariton operators in terms of atomic and field operators
\begin{subequations}
\label{eq:polaritons}
\begin{eqnarray}
p_0^{(1)\dagger} &=& \frac{a^{\dagger}+(g_1/\Omega_c)\sigma_{31}}{\sqrt{1+(g_1/\Omega_c)^2}} \label{eq:pol} \, , \\
p^{(1)\dagger}_\pm &=& -\frac{\bigl(g_1/\Omega_c\bigr) \, a^\dagger + i \, \bigl(\epsilon^{(1)}_\pm/\Omega_c\bigr) \, \sigma_{21} - \sigma_{31}}{\sqrt{1+\bigl(g_1/\Omega_c\bigr)^2+\bigl(\epsilon^{(1)}_\pm/\Omega_c\bigr)^2}} \label{eq:pols13} \, .
\end{eqnarray}
\end{subequations}
It is a well-known fact that the polaritons are neither bosons nor fermions. The commutation relations satisfied by the operators $p_j^{(1)}$ and $p_j^{(1)\dagger}$ are
\begin{subequations}
\label{eq:commrel}
\begin{eqnarray}
\bigl[ p_0^{(1)},\, p_0^{(1)\dagger} \bigr] &=& \frac{1-\bigl( g_1/\Omega_c \bigr)^2 D_{31}}{1+\bigl( g_1/\Omega_c \bigr)^2}\, , \\
\bigl[ p_\pm^{(1)},\, p_\pm^{(1)\dagger} \bigr] &=& \frac{\bigl( g_1/\Omega_c \bigr)^2 + \bigl( \epsilon_\pm^{(1)}/\Omega_c \bigr)^2 D_{21} - D_{31}}{1+\bigl( g_1/\Omega_c \bigr)^2+\bigl( \epsilon_\pm^{(1)}/\Omega_c \bigr)^2}\, ,
\end{eqnarray}
\end{subequations}
with $D_{21} = \sigma_{22} - \sigma_{11}$, $D_{31} = \sigma_{33} - \sigma_{11}$. Therefore, strong coupling of bosons and fermions yields an excitation (or a quasiparticle) of mixed statistics.
We can define two limits in which the polaritons become dominated by one of their constituents, according to the ratio $g_1/\Omega_c$. When $g_1/\Omega_c \ll 1$, the polariton $p_0^{(1)}$, and its corresponding eigenstate $|e_0^{(1)}\rangle$, are dominated by their photonic contribution, while the polaritons $p_\pm^{(1)}$ and the corresponding eigenstates are dominated by their atomic contribution. Thus, different polaritons become either photon-like or atom-like. The opposite situation occurs for $g_1/\Omega_c \gg 1$. The region $g_1/\Omega_c \sim 1$ is a ``no-man's land'', where photons and atoms contribute comparably to the composition of the polaritons.
\printfigures*
\end{document} |
\begin{document}
\title{Measurements in the L\'{e}vy quantum walk}
\author{A. Romanelli}
\altaffiliation{\textit{E-mail address:} [email protected]}
\affiliation{Instituto de F\'{\i}sica, Facultad de Ingenier\'{\i}a\\
Universidad de la Rep\'ublica\\
casilla de correo 30, c\'odigo postal 11000, Montevideo, Uruguay}
\date{\today }
\begin{abstract}
We study the quantum walk subjected to measurements with a L\'evy
waiting-time distribution. We find that the system has a sub-ballistic
behavior instead of a diffusive one. We obtain an analytical expression for
the exponent of the power law of the variance as a function of the
characteristic parameter of the L\'evy distribution.
\end{abstract}
\pacs{03.67.-a, 05.45.Mt; 05.40.Fb}
\maketitle
\section{Introduction}
The development of the quantum walk (QW) in the context of quantum
computation, as a generalization of the classical random walk, has attracted
the attention of researchers from different fields. The fact that it is
possible to build and preserve quantum states experimentally has led the
scientific community to think that quantum computers could be a reality in
the near future. On the other hand, from a purely physical point of view,
the study of quantum computation allows one to analyze and verify the principles
of quantum theory. In this context the study of the QW subjected to
different sources of decoherence is a topic that has been considered by
several authors \cite{kendon}. In particular we have recently studied \cite
{alejo0} the QW and the quantum kicked rotor in resonance subjected to noise
with a L\'evy waiting-time distribution \cite{Levy}, finding that both
systems have a sub-ballistic wave function spreading, as shown by the
power-law growth of the standard deviation ($\sigma (t)\sim t^{c}$ with $
0.5<c<1$), instead of the usual ballistic growth ($\sigma (t)\sim
t$). This sub-ballistic behavior was also observed in the dynamics
of both the quantum kicked rotor \cite{alejo3} and the QW
\cite{Ribeiro} when these systems are subjected to an excitation
that follows an aperiodic Fibonacci prescription. Other authors also
investigated the kicked rotor subjected to noises with a L\'{e}vy
distribution \cite{Shomerus} and almost-periodic Fibonacci sequence
\cite{Casati}, showing that this decoherence never fully destroys
the dynamical localization of the kicked rotor but leads to a
sub-diffusion regime for a short time before localization appears.
All of the papers mentioned above have in common that they deal with
quantum systems whose anomalous behavior was established
numerically. There are no analytical results that explain in a
general way why noise with a power-law distribution, or applied in a
Fibonacci sequence, leads the system to a new non-diffusive behavior.
Here we present a simple model that allows an analytical treatment
of the sub-ballistic behavior. We hope that this may help
to clarify, in a general way, how the frequency of the
decoherence events is the main factor behind this unexpected dynamics. With
this aim we investigate the QW when measurements are performed on
the system with waiting times between them following a L\'{e}vy
power-law distribution. We show that this noise produces a change
from ballistic to sub-ballistic behavior and we obtain analytically
a relation between the exponent of the standard deviation and the
characteristic parameter of L\'{e}vy distribution.
The paper is organized as follows. In the next section we develop the QW
model with L\'{e}vy noise, in the third section analytical results are
obtained and in the last section we draw the conclusions.
\section{Quantum walk and measurement}
\label{walked}The dynamics of the QW subjected to a series of measurements
will be generated by a long sequence of the two one-time-step unitary operators $
U_{0}$ and $U_{1}$, as was done in a previous work \cite{alejo0}. Here $
U_{0}$ is the `free' evolution of the QW and $U_{1}$ is the operator that
simultaneously measures the position and the chirality of the walker. The time
interval between two applications of the operator $U_{1}$ is generated by a
waiting-time distribution $\rho (T) $, where $T$ is a dimensionless integer
time step. The detailed mechanism to obtain the evolution is given in \cite
{alejo0}. We take $\rho (T)$ in accordance with the L\'{e}vy distribution
\cite{Shlesinger,Klafter,Zaslavsky} that includes a parameter $\alpha $,
with $0<\alpha \leq 2$. When $\alpha< 2$ the second moment of $\rho$ is
infinite; when $\alpha=2$ the Fourier transform of $\rho$ is the Gaussian
distribution and the second moment is finite. Thus this distribution has no
characteristic scale for the temporal jumps, except in the Gaussian case. The
absence of scale makes the L\'{e}vy random walks scale-invariant fractals.
This means that any classical trajectory involves many scales, none of
which dominates the process. The most important characteristic of the L\'{e}vy
noise is the power-law shape of its tail; accordingly, in this work
we use the waiting-time distribution
\begin{equation}
\rho (t)=\frac{\alpha }{\left( 1+\alpha \right) }\left\{
\begin{array}{cc}
1, & 0\leq t<1 \\
\left( \frac{1}{t}\right) ^{\alpha +1}, & t\geq 1
\end{array}
\right. . \label{Levy}
\end{equation}
To obtain the time interval $T$ we draw a continuous variable $t$ distributed
according to eq. (\ref{Levy}) and then take its integer part $T_{i}$
\cite{alejo0}.
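A minimal sketch of this sampling step (in Python; this is not the author's code, and the function name waiting_time is our own illustrative choice) uses inverse-transform sampling of eq. (\ref{Levy}):
\begin{verbatim}
import numpy as np

# Inverse-transform sampling of the waiting-time distribution, eq. (Levy).
# Illustrative helper, not the author's code; alpha is the Levy parameter.
def waiting_time(alpha, rng):
    u = rng.random()
    if u < alpha / (1.0 + alpha):                 # uniform part, 0 <= t < 1
        t = u * (1.0 + alpha) / alpha
    else:                                         # power-law tail, t >= 1
        t = ((1.0 + alpha) * (1.0 - u)) ** (-1.0 / alpha)
    return int(t)                                 # integer time step T_i

rng = np.random.default_rng(seed=0)
T_samples = [waiting_time(1.5, rng) for _ in range(10)]
\end{verbatim}
For a uniform deviate $u$ below $\alpha/(1+\alpha)$ the draw falls in the flat part of the distribution, otherwise in the power-law tail.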
To obtain the operator $U_{0}$ we develop in some detail the free QW model.
The standard QW corresponds to a one-dimensional evolution of a quantum
system (the walker) in a direction which depends on an additional degree of
freedom, the chirality, with two possible states: `left' $|L\rangle $ or
`right' $|R\rangle $. Let us consider that the walker can move freely over a
series of interconnected sites labeled by an index $n$. In the classical
random walk, a coin flip randomly selects the direction of the motion; in
the QW the direction of the motion is selected by the chirality. At each
time step a rotation (or, more generally, a unitary transformation) of the
chirality takes place and the walker moves according to its final chirality
state. The global Hilbert space of the system is the tensor product $
H_{s}\otimes H_{c}$ where $H_{s}$ is the Hilbert space associated to the
motion on the line and $H_{c}$ is the chirality Hilbert space.
If one is only interested in the properties of the probability distribution
it suffices to consider unitary transformations which can be expressed in
terms of a single real angular parameter $\theta $ \cite{Nayak,Tregenna,Bach}
. Let us call $M_{-}$ ($M_{+}$) the operators that move the walker one site
to the left (right) on the line in $H_{s}$ and let $|L\rangle \langle L|$
and $|R\rangle \langle R|$ be the chirality projector operators in $H_{c}$.
Then we consider free evolution transformations of the form \cite{Nayak},
\begin{equation}
U_{0}(\theta )=\left\{ M_{-}\otimes |L\rangle \langle L|+M_{+}\otimes
|R\rangle \langle R|\right\} \circ \left\{ I\otimes K(\theta )\right\} ,
\label{Ugen}
\end{equation}
where $K(\theta )=\sigma _{z}e^{-i\theta \sigma _{y}}$ is an unitary
operator acting on $H_{c}$, $\sigma _{y}$ and $\sigma _{z}$ being the
standard Pauli matrices, and $I$ is the identity operator in $H_{s}$. The
unitary operator $U_{0}(\theta )$ evolves the state $|\Psi (t)\rangle $ by
one time step,
\begin{equation}
|\Psi (t+1)\rangle =U_{0}(\theta )|\Psi (t)\rangle . \label{evol1}
\end{equation}
The wave-vector $|\Psi (t)\rangle $ is expressed as the spinor
\begin{equation}
|\Psi (t)\rangle =\sum\limits_{n=-\infty }^{\infty }{\binom{a_{n}(t)}{
b_{n}(t)}}|n\rangle , \label{spinor}
\end{equation}
where we have associated the upper (lower) component to the left (right)
chirality; the states $|n\rangle $ are eigenstates of the position operator
corresponding to the site $n$ on the line. The unitary evolution for $|\Psi
(t)\rangle $, corresponding to eq.~(\ref{evol1}) can then be written as the
map
\begin{align}
a_{n}(t+1)& =a_{n+1}(t)\,\cos \theta +b_{n+1}(t)\,\sin \theta \,,
\label{mapa} \\
b_{n}(t+1)& =a_{n-1}(t)\,\sin \theta -b_{n-1}(t)\,\cos \theta \,. \notag
\end{align}
To build the measurement operator $U_{1}$ we consider the case in which the
walker starts from the position eigenstate $|0\rangle $ with the initial
qubit state $(a_{0},b_{0})=(1,i)/\sqrt{2}$. The operator $U_{1}$ must
describe the measurement of position and chirality simultaneously. The
measurement of position is direct, but among the many ways to measure the
chirality we choose to do it in such a way that the two qubit states $(1,i)/
\sqrt{2},$ and $(1,-i)/\sqrt{2}$ are eigenstates of the measurement
operator. This means that we project the chirality on the $y$ direction
using the $\sigma _{y}$ Pauli operator. This form of measurement
ensures that the initial conditions after each measurement are equivalent to
each other from the point of view of the probability distribution \cite
{konno}. In this work we take $\theta =\pi /4$ as in the usual Hadamard walk
on the line. The probability distribution for the walker's position at time $
t$ is given by
\begin{equation}
P_{n}(t)=|a_{n}(t)|^{2}+|b_{n}(t)|^{2}. \label{prob}
\end{equation}
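A minimal numerical sketch of the free evolution (Python; this is not the author's code, and the lattice size is an assumption chosen large enough that the walker never reaches the boundaries) implements the map~(\ref{mapa}) and the distribution~(\ref{prob}):
\begin{verbatim}
import numpy as np

# One step of the free evolution, eq. (mapa); the lattice is assumed large
# enough that the wrap-around of np.roll is never reached.
def qw_step(a, b, theta=np.pi / 4):
    c, s = np.cos(theta), np.sin(theta)
    a_new = np.roll(a, -1) * c + np.roll(b, -1) * s   # a_n <- a_{n+1}, b_{n+1}
    b_new = np.roll(a, +1) * s - np.roll(b, +1) * c   # b_n <- a_{n-1}, b_{n-1}
    return a_new, b_new

M = 401                                   # number of lattice sites (assumption)
a = np.zeros(M, dtype=complex)
b = np.zeros(M, dtype=complex)
a[M // 2] = 1.0 / np.sqrt(2.0)            # initial qubit state (1, i)/sqrt(2)
b[M // 2] = 1j / np.sqrt(2.0)
for _ in range(100):                      # 100 < M // 2 free steps
    a, b = qw_step(a, b)
P = np.abs(a) ** 2 + np.abs(b) ** 2       # probability distribution, eq. (prob)
\end{verbatim}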
The effect of performing measurements on this system at time
intervals $T_{i}$, drawn from the L\'{e}vy distribution, combined with
unitary evolution is as follows: the standard deviation grows
ballistically during the free evolution, and when a measurement
collapses the wave function the standard deviation starts again from
zero. The standard deviation therefore follows a zig-zag path.
\begin{figure}
\caption{The standard deviation for the QW as a function of dimensionless
time in logarithmic scales. The parameters of the curves from top to bottom
are: (1) $\protect\alpha =0.1$ and $c=1.$; (2) $\protect\alpha =0.5$ and $
c=0.99$; (3) $\protect\alpha =1.$ and $c=0.92$; (4) $\protect\alpha =1.5$
and $c=0.74$ and (5) $\protect\alpha =2.$ and $c=0.51$. }
\label{fig1}
\end{figure}
In Fig.~\ref{fig1} we present the numerical calculation of the average
standard deviation $\left\langle \sigma \left( t\right) \right\rangle $ of
the QW with measurements, calculated through a computer simulation of the
time evolution of an ensemble of $2\times 10^{6}$ stochastic trajectories
for each value of the parameter $\alpha $. This figure shows, for different
values of $\alpha $, that the growth is ballistic ($\sim t$) for times $
t\lesssim 10$ and follows a power law $\sim t^{c}$ for times $t\gtrsim 10$.
One would expect the large degree of decoherence introduced by the
measurements to lead to a diffusive behavior for all times \cite{alejo0};
instead the system settles into an unexpected sub-ballistic regime. Thus $
\alpha $ determines the degree of diffusivity of the system for large
waiting times: the behavior is ballistic for $\alpha =0$, sub-ballistic for $
0<\alpha <2$ and diffusive for $\alpha =2$. Fig.~\ref{fig2}
shows, as a full line, the exponent $c$ of the power law as a function of the
L\'{e}vy parameter $\alpha $. Again the calculation has been made with an
ensemble of $2\times 10^{6}$ stochastic trajectories for each value of $
\alpha $. The passage from the ballistic behavior for $\alpha =0$ to the
diffusive behavior for $\alpha =2$ is clearly observed. In this figure the
result of the analytical calculation of the next section is also shown as a
dashed line; the agreement between the curves confirms the consistency of both
treatments.
\section{Theoretical model}
\label{sec:derivation} Different mechanisms of unitary noise may
drive a system from a quantum behavior at short times to a
classical-like one, at longer times. It is clear that the quadratic
growth in time of the variance of the QW is a direct consequence of
the coherence of the quantum evolution \cite{alejo1}. In this
section, we develop an analytical treatment to understand why the
L\'{e}vy decoherence does not fully break the ballistic behavior.
In our previous work \cite{alejo2} we investigated the QW on the
line when decoherence was introduced through simultaneous
measurements of the chirality and position. In that work it was
proved that the QW shows a diffusive behavior when measurements are
made at periodic times or with a Gaussian distribution. Here we use
the L\'{e}vy distribution, but some of the results obtained in
\cite{alejo2} can still be used. For the sake of clarity we reproduce
briefly the main steps to obtain the dynamical equation of the
variance. Let us suppose that the wave function is measured at
time $t$; it then evolves according to the unitary map~(\ref{mapa})
during a time interval $T$, and at the later time $t+T$ a new
measurement is performed. The probability that the wave function
collapses onto the position eigenstate $|n\rangle $ after a time
$T$ is
\begin{equation}
q_{n}\equiv P_{n}(T)\,. \label{position}
\end{equation}
These spatial distributions $q_{n}$ depend on the initial qubit state and
on the time interval $T$; they will play the role of transition
probabilities for the global evolution. The mechanism used to perform
measurements of position and chirality ensures that these distributions
repeat themselves around the new position, because either initial
chirality ($(1,i)/\sqrt{2}$ or $(1,-i)/\sqrt{2}$) produces the same spatial
distribution $q_{n}$, whose value depends only on the size of $T$. It is
then straightforward to build the probability distribution $P_{n}$ at the
new time $t+T$ as a convolution of this distribution at time $t$ with the
conditional probability $q_{n}(T)$; this takes the form of the following
`master equation'
\begin{equation}
P_{n}(t +T)=\sum\limits_{j=n-T}^{n+T}q_{n-j}P_{j}(t), \label{markov1}
\end{equation}
where $q_{n-j}$ are the transition probabilities from site $j$ to site $n$
defined in eq.~(\ref{position}), and the sum runs from $j=n-T$ to $j=n+T$
because $T$ is also the number of applications of the quantum map
eq.~(\ref{mapa}); recall that at each time step the walker moves one spatial
step to the right or to the left. Using eq.~(\ref{markov1}) we calculate the
first moment $m_{1}(t)\equiv\sum jP_{j}(t)$ and the second moment
$m_{2}(t)\equiv\sum j^{2}P_{j}(t)$ to obtain
\begin{align}
m_{1}(t+T)& =m_{1}(t)+m_{1q}(T) \label{mom1} \\
m_{2}(t+T)& =m_{2}(t)+2m_{1}(t)m_{1q}(T)+m_{2q}(T) \label{mom2}
\end{align}
where ${m_{1q}(T)=\sum\limits_{n=-T}^{n=T}nq_{n}}$ and ${m_{2q}(T)=
\sum\limits_{n=-T}^{n=T}n^{2}q_{n}}$ are the first and second moments of the
unitary evolution between measurements.
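Subtracting the square of eq.~(\ref{mom1}) from eq.~(\ref{mom2}), the cross term $2m_{1}(t)m_{1q}(T)$ cancels and one obtains
\[
m_{2}(t+T)-m_{1}^{2}(t+T)=\left[ m_{2}(t)-m_{1}^{2}(t)\right] +\left[
m_{2q}(T)-m_{1q}^{2}(T)\right] ,
\]
that is, the variance accumulated up to time $t$ and the variance of the free evolution during the interval $T$ simply add; this is the content of eq.~(\ref{varianza}) below.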
\begin{figure}
\caption{The exponent $c$ of the power law of the standard deviation for the
QW as a function of the parameter $\protect\alpha$. The full line corresponds
to the numerical result and the dashed line to the analytical one.}
\label{fig2}
\end{figure}
Therefore the global variance $\sigma ^{2}(t)=m_{2}(t)-m_{1}^{2}(t)$
satisfies the following equation for the process, obtained for the first
time in \cite{alejo2}
\begin{equation}
\sigma ^{2}\left( t+T\right) =\sigma ^{2}\left( t\right) +\sigma _{q}^{2}(T),
\label{varianza}
\end{equation}
where $\sigma _{q}^{2}(T)=m_{2q}(T)-m_{1q}^{2}(T)$ is the variance
associated with the unitary evolution between measurements. These results
also hold for random time intervals $T$ between consecutive measurements.
The variance $\sigma _{q}^{2}$ depends very weakly on the qubit's initial
conditions and increases quadratically with time \cite{Nayak}
\begin{equation}
\sigma _{q}^{2}(T)=kT^{2}\,, \label{varianza2}
\end{equation}
where $T\gg 1$ and $k$ is a constant determined by the initial
conditions. We now calculate the average of eq.~(\ref{varianza})
taking integer time intervals $T_{i}$ between consecutive
measurements distributed according to the L\'{e}vy distribution
eq.~(\ref{Levy})
\begin{equation}
\left\langle \sigma ^{2}\left( t+T_{i}\right) \right\rangle =\left\langle
\sigma ^{2}\left( t\right) \right\rangle +k\left\langle
T_{i}^{2}\right\rangle , \label{varianza3}
\end{equation}
where $\left\langle f(t)\right\rangle \equiv {\int\limits_{0}^{t}f(x){\rho
(x)}dx}$. From the previous numerical calculation, we know that the variance
grows as a power law, then $\left\langle \sigma ^{2}\left( t\right)
\right\rangle \propto t^{2c}$ and $\left\langle \sigma ^{2}\left(
t+T_{i}\right) \right\rangle \propto \left\langle( t+T_{i})
^{2c}\right\rangle\approx t^{2c}(1+2c\frac{\left\langle T_{i}\right\rangle}{t
})$ for large $t$. Substituting these expressions in eq.~(\ref{varianza3}),
the following result for the exponent $c$ is obtained
\begin{equation}
c\approx\frac{1}{2}\left( 1+\frac{\log \frac{\left\langle
T_{i}^{2}\right\rangle }{\left\langle T_{i}\right\rangle }}{\log t}\right) .
\label{exponente}
\end{equation}
This expression is valid for large $t$. The first and second moments of the
waiting time of our L\'{e}vy distribution are
\begin{equation}
\left\langle T_{i}\right\rangle =\frac{\alpha }{\alpha +1}\left\{ 1+\frac{
t^{1-\alpha }-1}{1-\alpha }\right\} , \label{first}
\end{equation}
\begin{equation}
\left\langle T_{i}^{2}\right\rangle =\frac{\alpha }{\alpha +1}\left\{ \frac{1
}{3}+\frac{t^{2-\alpha }-1}{2-\alpha }\right\} . \label{secon}
\end{equation}
Therefore, in the limit $t\rightarrow \infty $ the exponent $c$ is
\begin{equation}
c=\left\{
\begin{array}{cc}
1 \, , & \text{if \ }0\leqslant \alpha \leqslant 1 \\
\frac{1}{2}(3-\alpha ) \, , & \text{if \ }1\leqslant \alpha \leqslant 2.
\end{array}
\right. \label{limite}
\end{equation}
This result is in accordance with the numerical result obtained in the
previous section, as can be seen in Fig.~\ref{fig2}. It is important to
remark that eq.~(\ref{limite}) gives the analytical dependence of the
exponent of the power law for $\left\langle \sigma ^{2}\right\rangle$ on the
parameter $\alpha $.
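As an illustration, eq.~(\ref{exponente}) together with the moments (\ref{first}) and (\ref{secon}) can be evaluated numerically; the following minimal sketch does so for a few values of $\alpha $ (the values of $\alpha $ and $t$ are purely illustrative, and $\alpha =1$, $\alpha =2$ are removable singularities of eqs.~(\ref{first})--(\ref{secon}), so they are approached from nearby values).
\begin{verbatim}
import numpy as np

def moments(alpha, t):
    # eqs. (first) and (secon); alpha = 1 and alpha = 2 are removable
    # singularities, so they are approached from nearby values
    m1 = alpha / (alpha + 1) * (1 + (t**(1 - alpha) - 1) / (1 - alpha))
    m2 = alpha / (alpha + 1) * (1/3 + (t**(2 - alpha) - 1) / (2 - alpha))
    return m1, m2

def exponent_c(alpha, t):
    m1, m2 = moments(alpha, t)
    return 0.5 * (1 + np.log(m2 / m1) / np.log(t))   # eq. (exponente)

t = 1e8                                              # "large t" regime
for alpha in [0.1, 0.5, 0.9, 1.5, 1.9]:
    c_lim = 1.0 if alpha <= 1 else 0.5 * (3 - alpha)  # eq. (limite)
    print(alpha, exponent_c(alpha, t), c_lim)
\end{verbatim}
The convergence in $t$ is slow near $\alpha =1$ and $\alpha =2$.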
\section{Conclusion}
\label{sec:conclusion} Several systems have been proposed as candidates to
implement the QW model; they include atoms trapped in optical lattices
\cite{Dur,Roldan}, cavity quantum electrodynamics \cite{Sanders} and
nuclear magnetic resonance in solid substrates \cite{Du,Berman}. All
these proposed implementations face the obstacle of decoherence due
to environmental noise and imperfections. Thus the study of the QW
subjected to different types of noise may be important in future
technical applications. Here we showed that subjecting the QW to
measurements with a L\'{e}vy waiting-time distribution does not
completely break the coherence of the dynamics, but produces a
sub-ballistic behavior of the system, intermediate between the
ballistic and the diffusive behaviors. Note that, since Gaussian
noise is a particular case of L\'{e}vy noise, our study applies to a
wider range of experimental situations. We studied this behavior
numerically and also obtained an analytical expression for the
exponent of the power law of the variance as a function of the
characteristic parameter of the L\'evy distribution. Measurement is
one of the strongest possible sources of decoherence for any quantum
system, but even in this case the coherence of the QW treated here is
not fully lost. This fact shows that the most important ingredient
for the sub-ballistic behavior is the temporal sequence of the
perturbation, not its size or type. This extreme yet simple model,
which can be treated analytically, allows us to understand why
similar models \cite{alejo0,Ribeiro,alejo3} (that were studied
numerically) also show a power-law behavior.
\section{Acknowledgments}
\label{sec:Acknowledgments} I thank V. Micenmacher and R. Siri for their
comments and stimulating discussions. I acknowledge the support from
PEDECIBA and PDT S/C/IF/54/5.
\end{document}
\begin{document}
\title{Nonparametric Estimation of ROC Surfaces Under Verification Bias}
\pagestyle{fancy}
\fancyhf{}
\rhead{\it Draft: \today\ at \currenttime}
\cfoot{\thepage}
\begin{abstract}
Verification bias is a well known problem when the predictive ability of a diagnostic test has to be evaluated. In this paper, we discuss how to assess the accuracy of continuous-scale diagnostic tests in the presence of verification bias, when a three-class disease status is considered. In particular, we propose a fully nonparametric verification bias-corrected estimator of the ROC surface. Our approach is based on nearest-neighbor imputation and adopts generic smooth regression models for both the disease and the verification processes. Consistency and asymptotic normality of the proposed estimator are proved and its finite sample behavior is investigated by means of several Monte Carlo simulation studies. Variance estimation is also discussed and an illustrative example is presented.
\\
\textbf{Key words:} diagnostic tests, missing at random, true class fractions, nearest-neighbor imputation.
\end{abstract}
\section{Introduction} \label{sec:intro}
The evaluation of the accuracy of diagnostic tests is an important issue in modern medicine. In order to evaluate a test, knowledge of the true disease status of subjects or patients under study is necessary. Usually, this is obtained by a gold standard (GS) test, or reference test, that always correctly ascertains the true disease status.
Sensitivity (Se) and specificity (Sp) are frequently used to assess the accuracy of diagnostic tests when the disease status has two categories (e.g., ``healthy'' and ``diseased'').
In a two-class problem, for a diagnostic test $T$ that yields a continuous measure, the receiver operating characteristic (ROC) curve is a popular tool for displaying the ability of the test to distinguish between non--diseased and diseased subjects. The ROC curve is defined as the set of points $\{(1-\mathrm{Sp}(c),\mathrm{Se}(c)), c \in (-\infty,\infty)\}$ in the unit square, where $\mathrm{Se}(c) = \mathrm{Pr}(T \ge c | \text{ subject is diseased})$ and $\mathrm{Sp}(c) = \mathrm{Pr}(T < c | \text{ subject is non--diseased})$ for a given cut point $c$. The shape of the ROC curve allows one to evaluate the ability of the test. For example, a ROC curve equal to the straight line joining the points $(0,0)$ and $(1,1)$ corresponds to a diagnostic test that performs no better than random guessing. A commonly used summary measure that aggregates the performance information of the test is the area under the ROC curve (AUC).
Reasonable values of AUC range from 0.5, suggesting that the test is no better than chance alone, to 1.0, which indicates a perfect test.
In some medical studies, however, the disease status often involves more than two categories; for example, Alzheimer's dementia can be classified into three categories (see \cite{chi} for more details). In such situations, the quantities used to evaluate the accuracy of tests are the true class fractions (TCF's), which are a natural generalization of sensitivity and specificity.
For a given pair of cut points $(c_1,c_2)$ such that $c_1 < c_2$, the true class fractions TCF's of the continuous test $T$ at $(c_1,c_2)$ are
\begin{eqnarray}
\mathrm{TCF}_1(c_1) &=& \mathrm{Pr}(T < c_1|\mathrm{class}\, 1) = 1 - \mathrm{Pr}(T \ge c_1|\mathrm{class}\, 1), \nonumber \\
\mathrm{TCF}_2(c_1,c_2) &=& \mathrm{Pr}(c_1 < T < c_2|\mathrm{class}\, 2) = \mathrm{Pr}(T \ge c_1|\mathrm{class}\, 2) - \mathrm{Pr}(T \ge c_2|\mathrm{class}\, 2), \nonumber \\
\mathrm{TCF}_3(c_2) &=& \mathrm{Pr}(T > c_2|\mathrm{class}\, 3) = \mathrm{Pr}(T \ge c_2|\mathrm{class}\, 3) \nonumber.
\end{eqnarray}
The plot of (TCF$_1$, TCF$_2$, TCF$_3$) at various values of the pair $(c_1,c_2)$ produces the ROC surface in the unit cube. It is not hard to realize that the ROC surface is a generalization of the ROC curve (see \cite{scu,nak:04,nak:14}). Indeed, the projection of the ROC surface onto the plane defined by TCF$_2$ versus TCF$_1$ yields the ROC curve between classes 1 and 2. Similarly, by projecting the ROC surface onto the plane defined by the axes TCF$_2$ and TCF$_3$, the ROC curve between classes 2 and 3 is produced. The ROC surface coincides with the triangular plane with vertices $(0, 0, 1), (0, 1, 0)$ and $(1, 0, 0)$ if all three TCF's are equal for every pair $(c_1,c_2)$; in this case the diagnostic test is again equivalent to random guessing. In practice, the graph of the ROC surface lies in the unit cube, above the plane of the triangle with vertices $(0, 0, 1), (0, 1, 0)$ and $(1, 0, 0)$. A summary of the overall diagnostic accuracy of the test under consideration is the volume under the ROC surface (VUS), which can be seen as a generalization of the AUC. Reasonable values of VUS vary from 1/6 to 1, corresponding to diagnostic tests ranging from bad to perfect.
If we know the true disease status of all patients for which the test $T$ is measured, then the ROC curve or the ROC surface can be estimated unbiasedly. In practice, however, the GS test can be too expensive, or too invasive, or both for regular use. Typically, only a subset of patients undergoes disease verification, and the decision to send a patient to verification is often based on the diagnostic test result and other patient characteristics. For example, subjects with negative test results may be less likely to receive a GS test than subjects with positive test results. If only data from patients with verified disease status are used to estimate the ROC curve or the ROC surface, this generally leads to a biased evaluation of the ability of the diagnostic tests. This bias is known as verification bias. See, for example, \cite{zho} and \cite{pep} as general references.
Correcting for verification bias is a fascinating issue of medical statistics. Various methods have been developed to deal with the problem, most of which assume that the true disease status, if missing, is missing at random (MAR), see \cite{lit}. Under the MAR assumption, there are some verification bias-corrected methods for diagnostic tests, in the two-class case. Among the others, \cite{zho} present maximum likelihood approaches, \cite{rot} consider a doubly robust estimation of the area under ROC curve, while \cite{he} study a robust estimator for sensitivity and specificity by using propensity score stratification. Verification bias correction for continuous tests has been studied by \cite{alo:05} and \cite{adi:15}. In particular, \cite{alo:05} propose four types of partially parametric estimators of sensitivity and specificity under the MAR assumption, i.e., full imputation (FI), mean score imputation (MSI), inverse probability weighting (IPW) and semiparametric efficient (SPE, also known as doubly robust DR) estimator. \cite{adi:15}, instead, proposed a fully nonparametric approach for ROC analysis.
The issue of correcting for verification bias in ROC surface analysis has received little attention in the literature. Until now, only \cite{chi} and \cite{toduc:15} have discussed the issue. \cite{chi} propose maximum likelihood estimates of the ROC surface and the VUS for ordinal diagnostic tests, whereas \cite{toduc:15} extend the methods in \cite{alo:05} to the estimation of ROC surfaces in the case of continuous diagnostic tests.
The FI, MSI, IPW and SPE methods in
\cite{toduc:15} are partially parametric. Their use requires the specification of parametric regression models for the probability that a subject belongs to a given disease class, or for the probability that a subject is verified
(i.e., tested by the GS), or both. A misspecification of such parametric models
can negatively affect the behavior of the estimators,
which are then no longer consistent.
In this paper, we propose a fully nonparametric approach to estimate TCF$_1$, TCF$_2$ and TCF$_3$ in the presence of verification bias, for continuous diagnostic tests. The proposed approach is based on a nearest-neighbor (NN) imputation rule, as in \cite{adi:15}. Consistency and asymptotic normality of the estimators derived from the proposed method are studied. In addition, estimation of their variance is also discussed. To show the usefulness of our proposal and its advantages over partially parametric estimators, we conduct some simulation studies and give an illustrative example.
The rest of the paper is organized as follows. In Section 2, we review partially parametric methods for correcting for verification bias in the case of continuous tests. The proposed nonparametric method for estimating ROC surfaces and the related asymptotic results are presented in Section 3. In Section 4, we discuss variance-covariance estimation and in Section 5 we give some simulation results. An application is illustrated in Section 6. Finally, conclusions are drawn in Section 7.
\section{Partially parametric estimators of ROC surfaces}\label{sec:2}
Consider a study with $n$ subjects, for whom the result of a continuous diagnostic test $T$ is available. For each subject, $D$ denotes the true disease status,
which may be unknown.
Hereafter, we will describe the true disease status as a trinomial random vector $D = (D_1,D_2,D_3)$, where $D_k$ is a binary variable that takes value $1$ if the subject belongs to class $k$, $k = 1,2,3$, and $0$ otherwise. Here, class 1, class 2 and class 3 can be referred to, for example, as ``non-diseased'', ``intermediate'' and ``diseased''. Further, let $V$ denote the binary verification status of a subject, such that $V = 1$ if he/she undergoes the GS test, and $V = 0$ otherwise. In practice, some information other than the result of the test $T$ can be obtained for each patient. Let $A$ be the covariate vector for the patients, which may be associated with both $D$ and $V$.
We are interested in estimating the ROC surface of $T$, and hence the
true class fractions $\mathrm{TCF}_1(c_1) = \mathrm{Pr}(T_i < c_1| D_{1i} = 1)$, $\mathrm{TCF}_2(c_1,c_2) = \mathrm{Pr}(c_1 < T_i < c_2| D_{2i} = 1)$ and $\mathrm{TCF}_3(c_2) = \mathrm{Pr}(T_i \ge c_2| D_{3i} = 1)$, for fixed constants $c_1, c_2$, with $c_1 < c_2$.
When all patients have their disease status verified by a GS, i.e., $V_i = 1$ for all $i = 1,\ldots,n$, for any pair of cut points $(c_1,c_2)$, the true class fractions $\mathrm{TCF}_1(c_1), \mathrm{TCF}_2(c_1,c_2)$ and $\mathrm{TCF}_3(c_2)$ can be easily estimated by\\
\begin{eqnarray}
\widehat{\mathrm{TCF}}_1(c_1) &=& 1 - \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_1)D_{1i}}{\sum\limits_{i=1}^{n}D_{1i}} \nonumber\\
\widehat{\mathrm{TCF}}_2(c_1,c_2) &=& \frac{\sum\limits_{i=1}^{n}\mathrm{I}(c_1 \le T_i < c_2)D_{2i}}{\sum\limits_{i=1}^{n}D_{2i}}\nonumber \label{nonp:est}\\
\widehat{\mathrm{TCF}}_3(c_2) &=& \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_2)D_{3i}}{\sum\limits_{i=1}^{n}D_{3i}} \nonumber,
\end{eqnarray}
where $\mathrm{I}(\cdot)$ is the indicator function. It is straightforward to show that the above estimators are unbiased. However, they cannot be employed in the case of incomplete data, i.e., when $V_i = 0$ for some $i = 1,\ldots,n$.
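For illustration, with fully verified data these estimators, together with the empirical VUS, can be computed directly. The following minimal Python sketch uses simulated test results (class means, standard deviations and sample sizes are purely illustrative); the empirical VUS is computed as the proportion of correctly ordered triples, one drawn from each class, which for a continuous test is the standard empirical estimate of the volume under the ROC surface.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fully verified data: test results by true class
# (all distributions and sample sizes here are hypothetical).
T1 = rng.normal(2.0, 1.0, 400)   # class 1
T2 = rng.normal(4.0, 1.0, 350)   # class 2
T3 = rng.normal(6.0, 1.0, 250)   # class 3

def tcf(T1, T2, T3, c1, c2):
    """Empirical true class fractions at the pair of cut points (c1, c2)."""
    return (np.mean(T1 < c1),
            np.mean((T2 >= c1) & (T2 < c2)),
            np.mean(T3 >= c2))

print(tcf(T1, T2, T3, c1=3.0, c2=5.0))   # one point of the ROC surface

# Empirical VUS: proportion of correctly ordered triples (one per class)
vus = np.mean((T1[:, None, None] < T2[None, :, None]) &
              (T2[None, :, None] < T3[None, None, :]))
print(vus)
\end{verbatim}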
When only some subjects are selected to undergo the GS test, we need to make an assumption about the selection mechanism. We assume that the verification status $V$ and the disease status $D$ are mutually independent given the test result $T$ and the covariate $A$. This means that $\mathrm{Pr} (V|T,A) = \mathrm{Pr} (V|D,T,A)$ or equivalently $\mathrm{Pr} (D|T,A) = \mathrm{Pr} (D|V,T,A)$. Such an assumption is a special case of the missing at random (MAR) assumption (\cite{lit}).
Under the MAR assumption, verification bias-corrected estimation of the true class fractions is discussed in \cite{toduc:15}, where
(partially) parametric estimators, based on four different approaches, are given. In particular,
full imputation (FI) estimators of $\mathrm{TCF}_1(c_1), \mathrm{TCF}_2(c_1,c_2)$ and $\mathrm{TCF}_3(c_2)$ are defined as
\begin{eqnarray}
\widehat{\mathrm{TCF}}_{1,\mathrm{FI}}(c_1) &=& 1 - \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_1)\hat{\rho}_{1i}}{\sum\limits_{i=1}^{n}\hat{\rho}_{1i}} \nonumber, \\
\widehat{\mathrm{TCF}}_{2,\mathrm{FI}}(c_1,c_2) &=& \frac{\sum\limits_{i=1}^{n}\mathrm{I}(c_1 \le T_i < c_2)\hat{\rho}_{2i}}{\sum\limits_{i=1}^{n}\hat{\rho}_{2i}}, \label{est:fi} \\
\widehat{\mathrm{TCF}}_{3,\mathrm{FI}}(c_2) &=& \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_2)\hat{\rho}_{3i}}{\sum\limits_{i=1}^{n}\hat{\rho}_{3i}}. \nonumber
\end{eqnarray}
This method requires a parametric model (e.g., a multinomial logistic regression model) to obtain the estimates $\hat{\rho}_{ki}$ of $\rho_{ki} = \mathrm{Pr}(D_{ki} = 1|T_i, A_i)$, using only data from verified subjects. In contrast, the mean score imputation (MSI) approach uses the estimates $\hat{\rho}_{ki}$ only for the missing values of the disease status $D_{ki}$. Hence, the MSI estimators are
\begin{eqnarray}
\widehat{\mathrm{TCF}}_{1,\mathrm{MSI}}(c_1) &=& 1 - \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_1)\left[V_i D_{1i} + (1 - V_i) \hat{\rho}_{1i}\right]}{\sum\limits_{i=1}^{n}\left[V_i D_{1i} + (1 - V_i) \hat{\rho}_{1i}\right]}, \nonumber \\
\widehat{\mathrm{TCF}}_{2,\mathrm{MSI}}(c_1,c_2) &=& \frac{\sum\limits_{i=1}^{n}\mathrm{I}(c_1 \le T_i < c_2)\left[V_i D_{2i} + (1 - V_i) \hat{\rho}_{2i}\right]}{\sum\limits_{i=1}^{n}\left[V_i D_{2i} + (1 - V_i) \hat{\rho}_{2i}\right]}, \label{est:msi3} \\
\widehat{\mathrm{TCF}}_{3,\mathrm{MSI}}(c_2) &=& \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_2)\left[V_i D_{3i} + (1 - V_i) \hat{\rho}_{3i}\right]}{\sum\limits_{i=1}^{n}\left[V_i D_{3i} + (1 - V_i) \hat{\rho}_{3i}\right]}. \nonumber
\end{eqnarray}
The inverse probability weighting (IPW) approach weights each verified subject by the inverse of the probability that the subject is selected for verification. Thus, $\mathrm{TCF}_1(c_1), \mathrm{TCF}_2(c_1,c_2)$ and $\mathrm{TCF}_3(c_2)$ are estimated by
\begin{eqnarray}
\widehat{\mathrm{TCF}}_{1,\mathrm{IPW}}(c_1) &=& 1 - \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_1) V_i \hat{\pi}_i^{-1}D_{1i}}{\sum\limits_{i=1}^{n}V_i \hat{\pi}_i^{-1}D_{1i}} \nonumber, \\
\widehat{\mathrm{TCF}}_{2,\mathrm{IPW}}(c_1,c_2) &=& \frac{\sum\limits_{i=1}^{n}\mathrm{I}(c_1 \le T_i < c_2)V_i \hat{\pi}_i^{-1}D_{2i}}{\sum\limits_{i=1}^{n}V_i \hat{\pi}_i^{-1}D_{2i}}, \label{est:ipw4} \\
\widehat{\mathrm{TCF}}_{3,\mathrm{IPW}}(c_2) &=& \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_2) V_i \hat{\pi}_i^{-1}D_{3i}}{\sum\limits_{i=1}^{n}V_i \hat{\pi}_i^{-1}D_{3i}}, \nonumber
\end{eqnarray}
where $\hat{\pi}_i$ is an estimate of the conditional verification probabilities $\pi_i = \mathrm{Pr}(V_i = 1|T_i, A_i)$. Finally, the semiparametric efficient (SPE) estimators are
\begin{eqnarray}
\widehat{\mathrm{TCF}}_{1,\mathrm{SPE}}(c_1) &=& 1 - \frac{\sum\limits_{i=1}^{n} \mathrm{I}(T_i \ge c_1) \left\{\frac{V_i D_{1i}}{\hat{\pi}_i} - \frac{\hat{\rho}_{1i}(V_i - \hat{\pi}_i)}{\hat{\pi}_i} \right\}} {\sum\limits_{i=1}^{n} \left\{\frac{V_i D_{1i}}{\hat{\pi}_i} - \frac{\hat{\rho}_{1i}(V_i - \hat{\pi}_i)}{\hat{\pi}_i} \right\}} \nonumber, \\
\widehat{\mathrm{TCF}}_{2,\mathrm{SPE}}(c_1,c_2) &=& \frac{\sum\limits_{i=1}^{n} \mathrm{I}(c_1 \le T_i < c_2) \left\{\frac{V_i D_{2i}}{\hat{\pi}_i} - \frac{\hat{\rho}_{2i}(V_i - \hat{\pi}_i)}{\hat{\pi}_i} \right\}}{\sum\limits_{i=1}^{n} \left\{\frac{V_i D_{2i}}{\hat{\pi}_i} - \frac{\hat{\rho}_{2i}(V_i - \hat{\pi}_i)}{\hat{\pi}_i} \right\}}, \label{est:spe3} \\
\widehat{\mathrm{TCF}}_{3,\mathrm{SPE}}(c_2) &=& \frac{\sum\limits_{i=1}^{n} \mathrm{I}(T_i \ge c_2) \left\{\frac{V_i D_{3i}}{\hat{\pi}_i} - \frac{\hat{\rho}_{3i}(V_i - \hat{\pi}_i)}{\hat{\pi}_i} \right\}} {\sum\limits_{i=1}^{n} \left\{\frac{V_i D_{3i}}{\hat{\pi}_i} - \frac{\hat{\rho}_{3i}(V_i - \hat{\pi}_i)}{\hat{\pi}_i} \right\}}. \nonumber
\end{eqnarray}
Estimators (\ref{est:fi})-(\ref{est:spe3}) represent an extension to the three-class problem of the estimators proposed in \cite{alo:05}. SPE estimators are also known as doubly robust estimators, in the sense that they are consistent if either the $\rho_{ki}$'s or the $\pi_{i}$'s are estimated consistently. However, SPE estimates could fall outside the interval $(0,1)$. This happens because the quantities $V_i D_{ki}\hat{\pi}_i^{-1} - \hat{\rho}_{ki}(V_i - \hat{\pi}_i)\hat{\pi}_i^{-1}$ can be negative.
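Once estimates $\hat{\rho}_{ki}$ and $\hat{\pi}_{i}$ are available from the chosen working models, the four estimators can be written compactly. The following minimal Python sketch is one possible implementation (array names are illustrative; the rows of $D$ corresponding to unverified subjects may contain arbitrary placeholders, e.g.\ zeros).
\begin{verbatim}
import numpy as np

def bias_corrected_tcf(T, D, V, rho_hat, pi_hat, c1, c2, method="SPE"):
    """Sketch of the FI, MSI, IPW and SPE estimators of (TCF1, TCF2, TCF3).

    T: (n,) test results; D: (n, 3) disease indicators (arbitrary where
    V == 0); V: (n,) verification indicators; rho_hat: (n, 3) estimated
    disease probabilities; pi_hat: (n,) estimated verification probabilities.
    """
    ind = np.column_stack([T < c1, (T >= c1) & (T < c2), T >= c2])
    V = V[:, None].astype(float)
    pi = pi_hat[:, None]
    if method == "FI":
        w = rho_hat
    elif method == "MSI":
        w = V * D + (1 - V) * rho_hat
    elif method == "IPW":
        w = V * D / pi
    elif method == "SPE":
        w = V * D / pi - rho_hat * (V - pi) / pi   # may be negative
    else:
        raise ValueError("unknown method")
    return (ind * w).sum(axis=0) / w.sum(axis=0)
\end{verbatim}
In particular, the SPE weights in the last branch can be negative, which is the reason why SPE estimates may fall outside $(0,1)$.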
\section{Nonparametric estimators}\label{s:prop}
\subsection{The proposed method}
All the verification bias-corrected estimators of $\mathrm{TCF}_1(c_1), \mathrm{TCF}_2(c_1,c_2)$ and $\mathrm{TCF}_3(c_2)$ reviewed in the previous section belong to the class of (partially) parametric estimators, i.e., they need regression models to estimate $\rho_{ki} = \mathrm{Pr}(D_{ki} = 1|T_i, A_i)$ and/or $\pi_i = \mathrm{Pr}(V_i = 1|T_i, A_i)$.
In what follows, we propose a fully nonparametric approach to the estimation of $\mathrm{TCF}_1(c_1), \mathrm{TCF}_2(c_1,c_2)$ and $\mathrm{TCF}_3(c_2)$. Our approach is based on the K-nearest neighbor (KNN) imputation method. Hereafter, we shall assume that $A$ is a continuous random variable.
Recall that the true disease status is a trinomial random vector $D = (D_1,D_2,D_3)$ such that $D_k$ is a Bernoulli random variable with success probability $\theta_k = \mathrm{Pr}(D_k = 1)$. Note that $\theta_1 + \theta_2 + \theta_3 = 1$. Let $\beta_{jk} = \mathrm{Pr}(T \ge c_j, D_k = 1)$ with $j = 1,2$ and $k = 1,2,3$.
Since the parameters $\theta_k$ are the means of the random variables $D_{k}$, we can use the KNN estimation procedure discussed in \cite{ning:12} to obtain nonparametric estimates $\hat{\theta}_{k,\mathrm{KNN}}$. More precisely, we define
\begin{equation}
\hat{\theta}_{k,\mathrm{KNN}} = \frac{1}{n}\sum_{i=1}^{n}\left[V_i D_{ki} + (1-V_i)\hat{\rho}_{ki,K}\right], \qquad K \in \mathbb{N},
\nonumber \label{est:knn1}
\end{equation}
where $\hat{\rho}_{ki,K} = \dfrac{1}{K}\sum\limits_{l=1}^{K}D_{ki(l)}$, and $\left\{(T_{i(l)},A_{i(l)},D_{ki(l)}): V_{i(l)} = 1, l = 1,\ldots,K\right\}$ is a set of $K$ observed data pairs and $(T_{i(l)},A_{i(l)})$ denotes the $l$-th nearest neighbor to $(T_i,A_i)$ among all $(T,A)$'s corresponding to the verified patients, i.e., to those $D_{kh}$'s with $V_h = 1$. Similarly, we can define the KNN estimates of $\beta_{jk}$ as follows
\begin{equation}
\hat{\beta}_{jk,\mathrm{KNN}} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j)\left[V_iD_{ki} + (1-V_i)\hat{\rho}_{ki,K}\right],
\nonumber \label{est:knn2}
\end{equation}
for each $j,k$. Therefore, the KNN imputation estimators of $\mathrm{TCF}_k$ are
\begin{align}
\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}}(c_1) &= 1 - \frac{\hat{\beta}_{11}}{\hat{\theta}_1} = \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i < c_1)\left[V_i D_{1i} + (1 - V_i) \hat{\rho}_{1i,K}\right]}{\sum\limits_{i=1}^{n}\left[V_i D_{1i} + (1 - V_i) \hat{\rho}_{1i,K}\right]} \nonumber, \displaybreak[3] \\
\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}}(c_1,c_2) &= \frac{\hat{\beta}_{12} - \hat{\beta}_{22}}{\hat{\theta}_2} = \frac{\sum\limits_{i=1}^{n}\mathrm{I}(c_1 \le T_i < c_2)\left[V_i D_{2i} + (1 - V_i) \hat{\rho}_{2i,K}\right]}{\sum\limits_{i=1}^{n}\left[V_i D_{2i} + (1 - V_i) \hat{\rho}_{2i,K}\right]}, \label{est:knn3} \\
\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}(c_2) &= \frac{\hat{\beta}_{23}}{\hat{\theta}_3} = \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i \ge c_2)\left[V_i D_{3i} + (1 - V_i) \hat{\rho}_{3i,K}\right]}{\sum\limits_{i=1}^{n}\left[V_i D_{3i} + (1 - V_i) \hat{\rho}_{3i,K}\right]}. \nonumber
\end{align}
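A minimal Python sketch of the estimators in (\ref{est:knn3}) is given below; the Euclidean distance in the $(T,A)$ space is assumed (other choices are discussed in Section~\ref{sec:choice}), and variable names are illustrative.
\begin{verbatim}
import numpy as np

def knn_tcf(T, A, D, V, c1, c2, K=3):
    """KNN imputation estimators of (TCF1, TCF2, TCF3), eq. (est:knn3).

    Neighbors are searched among verified subjects only, in the (T, A)
    space, using the Euclidean distance."""
    X = np.column_stack([T, A])
    ver = np.flatnonzero(V == 1)
    D_imp = np.array(D, dtype=float)              # (n, 3)
    for i in np.flatnonzero(V == 0):
        d = np.linalg.norm(X[ver] - X[i], axis=1)
        nn = ver[np.argsort(d)[:K]]               # K nearest verified units
        D_imp[i] = D[nn].mean(axis=0)             # rho_hat_{ki,K}
    ind = np.column_stack([T < c1, (T >= c1) & (T < c2), T >= c2])
    return (ind * D_imp).sum(axis=0) / D_imp.sum(axis=0)
\end{verbatim}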
\subsection{Asymptotic distribution}
Let $\rho_k(t,a) = \mathrm{Pr}(D_k = 1|T=t,A=a)$ and $\pi(t,a) = \mathrm{Pr}(V=1|T=t,A=a)$.
The KNN imputation estimators of $\mathrm{TCF}_1(c_1), \mathrm{TCF}_2(c_1,c_2)$ and $\mathrm{TCF}_3(c_2)$ are consistent and asymptotically normal. In fact, we have the following theorems.
\begin{theorem}\label{thr:knn:1}
Assume the functions $\rho_k(t,a)$ and $\pi(t,a)$ are finite and first-order differentiable. Moreover, assume that the expectation of $1/\pi(T,A)$ exists. Then, for a fixed pair of cut points $(c_1,c_2)$ such that $c_1 < c_2$, the KNN imputation estimators $\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}}(c_1)$, $\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}}(c_1,c_2)$ and $\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}(c_2)$ are consistent.
\end{theorem}
\begin{proof}
Since the disease status $D_{k}$ is a Bernoulli random variable, its second-order moment, $\mathbb{E} (D_{k}^2)$, is finite. According to the first assumption, we can show that the conditional variance of $D_{k}$ given $T$ and $A$, $\mathbb{V}\mathrm{ar} (D_k | T = t, A = a)$, is equal to $\rho_{k}(t,a)\left[1 - \rho_{k}(t,a)\right]$ and is clearly finite. Thus, by an application of Theorem 1 in \cite{ning:12}, the KNN imputation estimators $\hat{\theta}_{k,\mathrm{KNN}}$ are consistent.
Now, observe that,
\begin{eqnarray}
\hat{\beta}_{jk,\mathrm{KNN}} - \beta_{jk} &=& \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j)\left[V_iD_{ki} + (1-V_i)\rho_{ki}\right] + \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j) (1-V_i)(\hat{\rho}_{ki,K} - \rho_{ki}) - \beta_{jk} \nonumber\\
&=& \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j)V_i\left[D_{ki} - \rho_{ki}\right] + \frac{1}{n}\sum_{i=1}^{n}\left[\mathrm{I}(T_i \ge c_j)\rho_{ki} - \beta_{jk}\right]\nonumber \\
&& + \: \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j) (1-V_i)(\hat{\rho}_{ki,K} - \rho_{ki})\nonumber \\
&=& S_{jk} + R_{jk} + T_{jk}. \nonumber
\end{eqnarray}
Here, the quantities $R_{jk}, S_{jk}$ and $T_{jk}$ are similar to the quantities $R,S$ and $T$ in the proof of Theorem 2.1 in \cite{cheng:94} and Theorem 1 in \cite{ning:12}. Thus, we have that
\[
\sqrt{n}R_{jk} \stackrel{d}{\to}\mathcal{N}\left(0,\mathbb{V}\mathrm{ar}\left[\mathrm{I}(T \ge c_j)\rho_k(T,A)\right]\right) \qquad \text{and} \qquad \sqrt{n}S_{jk} \stackrel{d}{\to} \mathcal{N}\left(0,\mathbb{E} \left[\pi(T,A)\delta^2_{jk}(T,A)\right]\right),
\]
where $\delta_{jk}^2(T,A)$ is the conditional variance of $\mathrm{I}(T \ge c_j,D_{k} = 1)$ given $T,A$. Also, by using a similar technique to that of proof of Theorem 1 in \cite{ning:12}, we get $T_{jk} = W_{jk} + o_{p}(n^{-1/2})$, where
\[
W_{jk} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j)(1-V_i)\left[\frac{1}{K}\sum_{l=1}^{K}\left(V_{i(l)}D_{ki(l)} - \rho_{ki(l)}\right)\right].
\]
Moreover, $\mathbb{E}(W_{jk}) = 0$ and
\[
\mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}W_{jk}) = \frac{1}{K}\mathbb{E} \left[(1 - \pi(T,A))\delta_{jk}^2(T,A)\right] + \mathbb{E} \left[\frac{(1-\pi(T,A))^2\delta_{jk}^2(T,A)}{\pi(T,A)}\right].
\]
Then, Markov's inequality implies that $W_{jk} \stackrel{p}{\to} 0$ as $n$ goes to infinity. This, together with the fact that $R_{jk}$ and $S_{jk}$ converge in probability to zero, leads to the consistency of $\hat{\beta}_{jk,\mathrm{KNN}}$, i.e., $\hat{\beta}_{jk,\mathrm{KNN}} \stackrel{p}{\to} \beta_{jk}$. It follows that $\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}}(c_1) = 1 - \frac{\hat{\beta}_{11}}{\hat{\theta}_1}$, $\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}}(c_1,c_2) = \frac{\hat{\beta}_{12} - \hat{\beta}_{22}}{\hat{\theta}_2}$ and $\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}(c_2) = \frac{\hat{\beta}_{23}}{\hat{\theta}_3}$ are consistent.
\end{proof}
\begin{theorem}\label{thr:knn:2}
Assume that the conditions in Theorem \ref{thr:knn:1} hold. Then,
\begin{equation}
\sqrt{n}\left[\begin{pmatrix}
\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}}(c_1) \\ \widehat{\mathrm{TCF}}_{2,\mathrm{KNN}}(c_1,c_2) \\
\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}(c_2)
\end{pmatrix} - \begin{pmatrix}
\mathrm{TCF}_{1}(c_1) \\ \mathrm{TCF}_{2}(c_1,c_2) \\
\mathrm{TCF}_{3}(c_2)
\end{pmatrix}\right] \stackrel{d}{\to} \mathcal{N}(0,\Xi),
\label{asy:knn}
\end{equation}
where $\Xi$ is a suitable matrix.
\end{theorem}
\begin{proof}
A direct application of Theorem 1 in \cite{ning:12} shows that the quantity $\sqrt{n}(\hat{\theta}_{k,\mathrm{KNN}} - \theta_k)$ converges in distribution to a normal random variable with mean $0$ and variance $\sigma^2_{k} = \left[\theta_k(1-\theta_k) + \omega_{k}^2\right]$. Here,
\begin{eqnarray}
\omega_{k}^2 &=& \left(1 + \frac{1}{K}\right)\mathbb{E} \left[\rho_k(T,A)(1-\rho_k(T,A))(1-\pi(T,A))\right] \nonumber\\
&& + \: \mathbb{E} \left[\frac{\rho_k(T,A)(1-\rho_k(T,A))(1-\pi(T,A))^2}{\pi(T,A)}\right]. \label{omega_k}
\end{eqnarray}
In addition, from the proof of Theorem \ref{thr:knn:1}, we have
\[
\hat{\beta}_{jk,\mathrm{KNN}} - \beta_{jk} \simeq S_{jk} + R_{jk} + W_{jk} + o_{p}(n^{-1/2}),
\]
with
\[
\sqrt{n}R_{jk} \stackrel{d}{\to}\mathcal{N}\left(0,\mathbb{V}\mathrm{ar}\left[\mathrm{I}(T \ge c_j)\rho_k(T,A)\right]\right), \quad \sqrt{n}S_{jk} \stackrel{d}{\to} \mathcal{N}\left(0,\mathbb{E} \left[\pi(T,A)\delta^2_{jk}(T,A)\right]\right)
\]
and
\[
\sqrt{n}W_{jk} \stackrel{d}{\to} \mathcal{N}(0,\sigma^2_{W_{jk}}).
\]
Therefore, $\sqrt{n}(\hat{\beta}_{jk,\mathrm{KNN}} - \beta_{jk}) \stackrel{d}{\to} \mathcal{N}(0,\sigma_{jk}^2)$. Here, the asymptotic variance $\sigma^2_{jk}$ is obtained by
\[
\sigma_{jk}^2 =
\left[\beta_{jk}\left(1 - \beta_{jk}\right) + \omega_{jk}^2\right],
\]
with
\begin{eqnarray}
\omega_{jk}^2 &=& \left(1 + \frac{1}{K}\right)\mathbb{E} \left[\mathrm{I}(T \ge c_j)\rho_k(T,A)(1-\rho_k(T,A))(1-\pi(T,A))\right] \nonumber\\
&& + \: \mathbb{E} \left[\frac{\mathrm{I}(T \ge c_j)\rho_k(T,A)(1-\rho_k(T,A))(1-\pi(T,A))^2}{\pi(T,A)}\right]. \label{omega_jk}
\end{eqnarray}
This result follows from the fact that $R_{jk}$ and $S_{jk} + W_{jk}$ are uncorrelated, and the asymptotic covariance between $S_{jk}$ and $W_{jk}$ is given by
\[
\mathrm{as}\mathbb{C}\mathrm{ov}\left(S_{jk},W_{jk}\right) = \mathbb{E} \left[(1 - \pi(T,A))\delta_{jk}^2(T,A)\right].
\]
Moreover, the vector $\sqrt{n}\left[(\hat{\theta}_{1,\mathrm{KNN}},\hat{\theta}_{2,\mathrm{KNN}},\hat{\beta}_{11,\mathrm{KNN}},\hat{\beta}_{12,\mathrm{KNN}},\hat{\beta}_{22,\mathrm{KNN}},\hat{\beta}_{23,\mathrm{KNN}})^\top - (\theta_1,\theta_2,\beta_{11},\beta_{12},\beta_{22},\beta_{23})^\top\right]$ is (jointly) asymptotically normally distributed with mean vector zero and a suitable covariance matrix $\Xi^*$. Then, result \eqref{asy:knn} follows by applying the multivariate delta method to
\begin{equation}
h(\hat{\theta}_{1},\hat{\theta}_{2},\hat{\beta}_{11},\hat{\beta}_{12},\hat{\beta}_{22},\hat{\beta}_{23})= \left(1 - \frac{\hat\beta_{11}}{\hat\theta_{1}}, \frac{(\hat\beta_{12} - \hat\beta_{22})}{\hat\theta_{2}},\frac{\hat\beta_{23}}{(1 - \hat\theta_{1} - \hat\theta_{2})}\right).
\nonumber \label{asy_h:knn}
\end{equation}
The asymptotic covariance matrix of $\sqrt{n}(\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}},\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}},\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}})^\top$, $\Xi$, is obtained by
\begin{equation}
\Xi = h'\Xi^*h'^\top,
\label{asym_var:knn1}
\end{equation}
where $h'$ is the first-order derivative of $h$, i.e.,
\begin{equation}
h' = \begin{pmatrix}
\frac{\beta_{11}}{\theta_{1}^2} & 0 & -\frac{1}{\theta_{1}} & 0 & 0 & 0 \\
0 & -\frac{(\beta_{12} - \beta_{22})}{\theta_{2}^2} & 0 & \frac{1}{\theta_{2}} & -\frac{1}{\theta_{2}} & 0\\
\frac{\beta_{23}}{(1 - \theta_{1} - \theta_{2})^2} & \frac{\beta_{23}}{(1 - \theta_{1} - \theta_{2})^2} & 0 & 0 & 0 & \frac{1}{(1 - \theta_{1} - \theta_{2})}
\end{pmatrix}.
\label{asym_var:knn2}
\end{equation}
\end{proof}
\subsection{The asymptotic covariance matrix}
\label{asy_var:knn}
Let
\[
\Xi =
\begin{pmatrix}
\xi_1^2 & \xi_{12} & \xi_{13} \\
\xi_{12} & \xi_2^2 & \xi_{23} \\
\xi_{13} & \xi_{23} & \xi^2_3
\end{pmatrix}
.
\]
The asymptotic covariance matrix $\Xi^*$ is a $6 \times 6$ matrix such that its diagonal elements are the asymptotic variances of $\sqrt{n}\hat{\theta}_{k,\mathrm{KNN}}$ and $\sqrt{n}\hat{\beta}_{jk,\mathrm{KNN}}$.
Let us define $\sigma_{12}^* = \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}},\sqrt{n}\hat{\theta}_{2,\mathrm{KNN}})$, $\sigma_{sjk} = \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{s,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{jk,\mathrm{KNN}})$ and $\sigma_{jkls} = \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\beta}_{jk,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{ls,\mathrm{KNN}})$. We write
\begin{equation}
\Xi^* = \begin{pmatrix}
\sigma_1^2 & \sigma_{12}^* & \sigma_{111} & \sigma_{112} & \sigma_{122} & \sigma_{123} \\
\sigma_{12}^* & \sigma_2^2 & \sigma_{211} & \sigma_{212} & \sigma_{222} & \sigma_{223} \\
\sigma_{111} & \sigma_{211} & \sigma_{11}^2 & \sigma_{1112} & \sigma_{1122} & \sigma_{1123} \\
\sigma_{112} & \sigma_{212} & \sigma_{1112} & \sigma_{12}^2 & \sigma_{1222} & \sigma_{1223} \\
\sigma_{122} & \sigma_{222} & \sigma_{1122} & \sigma_{1222} & \sigma_{22}^2 & \sigma_{2223} \\
\sigma_{123} & \sigma_{223} & \sigma_{1123} & \sigma_{1223} & \sigma_{2223} & \sigma_{23}^2
\end{pmatrix}.
\nonumber \label{asym_var:knn3}
\end{equation}
Hence, from (\ref{asym_var:knn1}) and (\ref{asym_var:knn2}),
\begin{eqnarray}
\xi_1^2=\mathrm{as}\mathbb{V}\mathrm{ar}\left(\sqrt{n}\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}}(c_1)\right) &=& \frac{\beta_{11}^2}{\theta_1^4}\sigma_1^2 + \frac{\sigma_{11}^2}{\theta_1^2} - 2\frac{\beta_{11}}{\theta_1^3}\sigma_{111}, \nonumber \label{aV:tcf1} \\
\xi_2^2=\mathrm{as}\mathbb{V}\mathrm{ar}\left(\sqrt{n}\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}}(c_1,c_2)\right) &=& \sigma_2^2\frac{(\beta_{12} - \beta_{22})^2}{\theta_2^4} + \frac{\sigma_{12}^2 + \sigma_{22}^2 - 2\sigma_{1222}}{\theta_2^2} \nonumber\\
&& - \: 2\frac{\beta_{12} - \beta_{22}}{\theta_2^3}(\sigma_{212} - \sigma_{222}) , \nonumber \label{aV:tcf2} \\
\xi_3^2=\mathrm{as}\mathbb{V}\mathrm{ar}\left(\sqrt{n}\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}(c_2)\right) &=& \frac{\beta_{23}^2}{(1 - \theta_1 - \theta_2)^4}\left(\sigma_1^2 + 2\sigma_{12}^* + \sigma_2^2\right) + \frac{\sigma_{23}^2}{(1-\theta_1-\theta_2)^2} \nonumber\\
&& + \: 2\frac{\beta_{23}}{(1-\theta_1- \theta_2)^3}\left(\sigma_{123} + \sigma_{223}\right). \label{aV:tcf3}
\end{eqnarray}
Let $\lambda^2=\mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\beta}_{12,\mathrm{KNN}} -\sqrt{n}\hat{\beta}_{22,\mathrm{KNN}}) $. Hence, $\sigma_{12}^2 + \sigma_{22}^2 - 2\sigma_{1222}=\lambda^2$, and
\begin{equation}
\xi_2^2=
\sigma_2^2\frac{(\beta_{12} - \beta_{22})^2}{\theta_2^4} + \frac{\lambda^2}{\theta_2^2} - 2\frac{\beta_{12} - \beta_{22}}{\theta_2^3}(\sigma_{212} - \sigma_{222}) \nonumber \label{aV:tcf2_2}.
\end{equation}
Observe that $\hat{\theta}_{3,\mathrm{KNN}} = 1 - (\hat{\theta}_{1,\mathrm{KNN}} + \hat{\theta}_{2,\mathrm{KNN}})$. Thus,
\begin{eqnarray}
\mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\theta}_{3,\mathrm{KNN}}) &=& \mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}} + \sqrt{n}\hat{\theta}_{2,\mathrm{KNN}}) \nonumber\\
&=& \mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}}) + \mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\theta}_{2,\mathrm{KNN}}) + 2\mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}},\sqrt{n}\hat{\theta}_{2,\mathrm{KNN}}).
\nonumber
\end{eqnarray}
This leads to the expression $\sigma_3^2=\sigma_1^2 + 2\sigma^*_{12} + \sigma_2^2$. In addition,
\begin{eqnarray}
\sigma_{123} + \sigma_{223} &=& \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{23,\mathrm{KNN}}) + \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{2,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{23,\mathrm{KNN}}) \nonumber\\
&=& \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}} + \sqrt{n}\hat{\theta}_{2,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{23,\mathrm{KNN}}) \nonumber \\
&=& - \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}\left[1 - (\hat{\theta}_{1,\mathrm{KNN}} + \hat{\theta}_{2,\mathrm{KNN}})\right],\sqrt{n}\hat{\beta}_{23,\mathrm{KNN}}\right) \nonumber \\
&=& - \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{3,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{23,\mathrm{KNN}}) \nonumber \\
&=& - \sigma_{323}.\nonumber
\end{eqnarray}
Therefore, from (\ref{aV:tcf3}), the asymptotic variance of $\sqrt{n}\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}(c_2)$ is
\begin{equation}
\xi_3^2=
\frac{\beta_{23}^2 \sigma_3^2}{(1 - \theta_1 - \theta_2)^4} + \frac{\sigma_{23}^2}{(1-\theta_1-\theta_2)^2} - 2\frac{\beta_{23} \sigma_{323}}{(1-\theta_1- \theta_2)^3}. \nonumber \label{aV:tcf3_2}
\end{equation}
Recall that
$\sigma_k^2 = \left[\theta_k(1 - \theta_k) + \omega_k^2\right]$ and
$\sigma_{jk}^2=\left[\beta_{jk}(1 - \beta_{jk}) + \omega_{jk}^2\right]$,
where $\omega_{k}^2$ and $\omega_{jk}^2$ are given in (\ref{omega_k}) and
(\ref{omega_jk}), respectively.
To obtain $\sigma_{kjk}$, we observe that
\begin{eqnarray}
\beta_{jk} &=& \mathrm{Pr}\left(T \ge c_j, D_k = 1\right) = \mathrm{Pr}\left(D_k = 1\right)\mathrm{Pr}\left(T \ge c_j| D_k = 1\right) \nonumber\\
&=& \mathrm{Pr}\left(D_k = 1\right)\left[1 - \mathrm{Pr}\left(T < c_j| D_k = 1\right)\right] \nonumber\\
&=& \mathrm{Pr}\left(D_k = 1\right) - \mathrm{Pr}\left(D_k = 1\right)\mathrm{Pr}\left(T < c_j| D_k = 1\right) \nonumber \\
&=& \mathrm{Pr}\left(D_k = 1\right) - \mathrm{Pr}\left(T < c_j, D_k = 1\right) \nonumber\\
&=& \theta_k - \gamma_{jk}\nonumber,
\end{eqnarray}
for $k = 1,2,3$ and $j = 1,2$. Then, we define
\[
\hat{\gamma}_{jk,\mathrm{KNN}} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i < c_j)\left[V_iD_{ki} + (1-V_i)\hat{\rho}_{ki,K}\right].
\]
The asymptotic variance of $\sqrt{n}\hat{\gamma}_{jk,\mathrm{KNN}}$, $\zeta_{jk}^2$, is obtained as that of $\sqrt{n}\hat{\beta}_{jk,\mathrm{KNN}}$. In fact, we get $\zeta_{jk}^2 = \left[\gamma_{jk}(1 - \gamma_{jk}) + \eta_{jk}^2\right]$, where
\begin{eqnarray}
\eta_{jk}^2 &=& \frac{K + 1}{K}\mathbb{E} \left[\mathrm{I}(T < c_j)\rho_k(T,A)\{1-\rho_k(T,A)\} \{1-\pi(T,A)\}\right] \nonumber\\
&& + \: \mathbb{E} \left[\frac{\mathrm{I}(T < c_j)\rho_k(T,A)\{1-\rho_k(T,A)\} \{1-\pi(T,A)\}^2}{\pi(T,A)}\right]. \nonumber \label{eta_jk}
\end{eqnarray}
It is straightforward to see that $\hat{\gamma}_{jk,\mathrm{KNN}} = \hat{\theta}_{k,\mathrm{KNN}} - \hat{\beta}_{jk,\mathrm{KNN}}$. Thus, we can compute the asymptotic covariances $\sigma_{kjk}$ for $j = 1,2$ and $k = 1,2,3$, using the fact that
\begin{eqnarray}
\mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\gamma}_{jk,\mathrm{KNN}}) &=& \mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\theta}_{k,\mathrm{KNN}} - \sqrt{n}\hat{\beta}_{jk,\mathrm{KNN}}) \nonumber \\
&=& \mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\theta}_{k,\mathrm{KNN}}) + \mathrm{as}\mathbb{V}\mathrm{ar}(\sqrt{n}\hat{\beta}_{jk,\mathrm{KNN}}) - 2\mathrm{as}\mathbb{C}\mathrm{ov} (\sqrt{n}\hat{\theta}_{k,\mathrm{KNN}}, \sqrt{n}\hat{\beta}_{jk,\mathrm{KNN}}).\nonumber
\end{eqnarray}
This leads to
\[
\sigma_{kjk} = \frac{1}{2}\left(\sigma^2_k + \sigma^2_{jk} - \zeta^2_{jk}\right) \label{sigma_kjk}.
\]
Hence,
\[
\begin{array}{r r r r}
\sigma_{111} &= \dfrac{1}{2}\left(\sigma_1^2 + \sigma_{11}^2 - \zeta_{11}^2\right) ;& \qquad \sigma_{212} &= \dfrac{1}{2}\left(\sigma_2^2 + \sigma_{12}^2 - \zeta_{12}^2\right); \\ [8pt]
\sigma_{222} &= \dfrac{1}{2}\left(\sigma_2^2 + \sigma_{22}^2 - \zeta_{22}^2\right) ;& \qquad \sigma_{323} &= \dfrac{1}{2}\left(\sigma_3^2 + \sigma_{23}^2 - \zeta_{23}^2\right).
\end{array}
\]
As for $\lambda^2$, one can show that
\begin{equation}
\lambda^2 = \frac{1}{n}\left\{(\beta_{12} - \beta_{22})\left[1 - (\beta_{12} - \beta_{22})\right] + \omega^2_{12} - \omega^2_{22}\right\}. \nonumber
\end{equation}
(see Appendix 1 and, in particular, equation (\ref{eq:rem:2})).
Therefore, suitable explicit expressions for the asymptotic variances of the KNN estimators can be found. Such expressions will depend only on quantities such as $\theta_k$, $\beta_{jk}$, $\omega^2_k$, $\omega^2_{jk}$, $\gamma_{jk}$ and $\eta^2_{jk}$. As a consequence, to obtain consistent estimates of the asymptotic variances, ultimately we need to estimate the quantities $\omega_k^2, \omega_{jk}^2$ and $\eta_{jk}^2$.
In Appendix 2 we show that suitable expressions can also be obtained for the elements
$\xi_{12}$, $\xi_{13}$ and $\xi_{23}$ of the covariance matrix $\Xi$. Such expressions will depend, among other things, on certain quantities $\psi^2_{1212}$, $\psi^2_{112}$, $\psi^2_{213}$, $\psi^2_{12}$, $\psi^2_{113}$, $\psi^2_{223}$ and $\psi^2_{1223}$ similar to $\omega^2_k$, $\omega^2_{jk}$ or $\eta_{jk}^2$.
\subsection{Choice of $K$ and the distance measure}
\label{sec:choice}
The proposed method is based on nearest-neighbor imputation,
which requires the choice of a value for $K$ as well as
a distance measure.
In practice,
the selection of a suitable distance is typically dictated
by features of the data and possibly by subjective evaluations; thus, a general indication about
an adequate choice is difficult to give.
In many cases, the simple Euclidean distance may be appropriate.
In other cases, the researcher may wish to consider specific characteristics of the data at hand, and then make a different choice. For example, the diagnostic test result $T$ and the auxiliary covariate $A$ could be heterogeneous with respect to their variances (which is particularly true when the variables are measured on heterogeneous scales). In this case, the Mahalanobis distance may be a suitable choice.
As for the choice of the size of the neighborhood, \citet{ning:12} argue that
nearest-neighbor imputation with a small value of $K$ typically yields negligible bias of the estimators, but a large variance; the opposite happens with a large value of $K$. The authors suggest that the choice of $K \in \{1,2\}$ is generally adequate when the aim is to estimate an average.
A similar comment is also raised by \citet{adi:15} and \citet{adi:15:auc}, i.e., a small value of $K$, within the range 1--3, may be a good choice to estimate ROC curves and the AUC. However, the authors stress that, in general, the choice of $K$ may depend on the dimension of the feature space, and propose to use cross--validation to find $K$ in the case of a high--dimensional covariate.
Specifically, the authors indicate that a suitable value of the neighborhood size can be found as
\[
K^* = \argmin_{K = 1, \ldots, n_{ver}} \frac{1}{n_{ver}}\left\|D - \hat{\rho}_{K} \right\|_1,
\]
where $\|\cdot\|_1$ denotes the $L_1$ norm of a vector and $n_{ver}$ is the number of verified subjects. The formula above can be generalized to our multi--class case. In fact, when the disease status $D$ has $q$ categories ($q \ge 3$), the difference between $D$ and $\hat{\rho}_{K}$ is an $n_{ver} \times (q-1)$ matrix. In such a situation, the selection rule could be
\begin{equation}
K^* = \argmin_{K = 1, \ldots, n_{ver}} \frac{1}{n_{ver}(q-1)}\left\|D - \hat{\rho}_{K} \right\|_{1,1},
\label{choice:K:1}
\end{equation}
where $\|{\cal A}\|_{1,1}$ denotes the $L_{1,1}$ norm of the matrix ${\cal A}$, i.e.,
\[
\|{\cal A}\|_{1,1} = \sum_{j=1}^{q-1}\left(\sum_{i=1}^{n_{ver}}|a_{ij}|\right).
\]
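A minimal Python sketch of the selection rule (\ref{choice:K:1}) follows; the imputed probabilities for the verified subjects are computed leave-one-out among the verified subjects themselves, which is one natural reading of the rule, and the search is truncated at a maximum candidate value of $K$ for speed.
\begin{verbatim}
import numpy as np

def select_K(T, A, D, V, K_max=20):
    """Sketch of the rule (choice:K:1): choose the K minimising the average
    L1 discrepancy between the observed labels and their KNN-imputed
    probabilities over the verified subjects (leave-one-out)."""
    X = np.column_stack([T, A])
    ver = np.flatnonzero(V == 1)
    q = D.shape[1]
    dist = np.linalg.norm(X[ver][:, None, :] - X[ver][None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)                  # leave-one-out
    order = np.argsort(dist, axis=1)
    best_K, best_err = 1, np.inf
    for K in range(1, K_max + 1):
        rho_K = D[ver][order[:, :K]].mean(axis=1)   # (n_ver, q)
        err = np.abs(D[ver][:, :q - 1] - rho_K[:, :q - 1]).sum()
        err /= len(ver) * (q - 1)
        if err < best_err:
            best_K, best_err = K, err
    return best_K
\end{verbatim}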
\section{Variance-covariance estimation}
Consider first the problem of estimating the variances of $\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}}$, $\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}}$ and $\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}$. In a nonparametric framework, quantities such as ${\omega}_k^2, {\omega}_{jk}^2$ and ${\eta}_{jk}^2$ can be estimated by their empirical counterparts, using also the plug--in method. Here, we consider an approach that uses a nearest-neighbor rule to estimate both the functions $\rho_k(T,A)$ and the propensity score $\pi(T,A)$, which are present in the expressions of ${\omega}_k^2, {\omega}_{jk}^2$ and ${\eta}_{jk}^2$. In particular, for the conditional probabilities of disease, we can use KNN estimates $\tilde{\rho}_{ki}=\hat{\rho}_{ki,\bar{K}}$, where the integer $\bar{K}$ must be greater than one to avoid estimates equal to zero. For the conditional probabilities of verification, we can resort to the KNN procedure proposed in \cite{adi:15}, which considers the estimates
\[
\tilde{\pi}_{i} = \frac{1}{K^*_i}\sum_{l = 1}^{K^*_i} V_{i(l)},
\]
where $\left\{(T_{i(l)},A_{i(l)},V_{i(l)}): l = 1,\ldots,K_i^*\right\}$ is a set of $K_i^*$ observed pairs and $(T_{i(l)},A_{i(l)})$ denotes the $l$-th nearest neighbor to $(T_i,A_i)$ among all $(T,A)$'s. When $V_i$ equals 0, $K_i^*$ is set equal to the rank of the first verified nearest neighbor to the unit $i$, i.e., $K_i^*$ is such that $V_{i(K_i^*)} = 1$ and $V_i = V_{i(1)} = V_{i(2)} = \ldots = V_{i(K_i^* - 1)} = 0$. In the case $V_i = 1$, $K_i^*$ is such that $V_i = V_{i(1)} = V_{i(2)} = \ldots = V_{i(K_i^* - 1)} = 1$, and $V_{i(K_i^*)} = 0$, i.e., $K_i^*$ is set equal to the rank of the first non--verified nearest neighbor to the unit $i$.
Such a procedure automatically avoids zero values for the
$\tilde{\pi}_{i}$'s.
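A minimal Python sketch of this adaptive rule for the $\tilde{\pi}_{i}$'s is reported below (the Euclidean distance is assumed; the degenerate case in which all neighbors share the same verification status is not handled).
\begin{verbatim}
import numpy as np

def propensity_knn(T, A, V):
    """Adaptive nearest-neighbor estimates of pi_i: average the verification
    indicators of the K_i* nearest neighbors, K_i* being the rank of the
    first neighbor whose V differs from V_i (so the estimate is never
    exactly 0 or 1)."""
    X = np.column_stack([T, A])
    n = len(V)
    pi_tilde = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                               # exclude the unit itself
        order = np.argsort(d)
        K_star = 1 + np.argmax(V[order] != V[i])    # rank of first differing V
        pi_tilde[i] = V[order[:K_star]].mean()
    return pi_tilde
\end{verbatim}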
Then, based on the $\tilde{\rho}_{ki}$'s and $\tilde{\pi}_{i}$'s, we obtain the estimates
\begin{eqnarray}
\hat{\omega}_{k}^2 &=& \frac{K + 1}{nK}\sum_{i=1}^{n}\tilde{\rho}_{ki}\left(1 - \tilde{\rho}_{ki}\right)\left(1 - \tilde{\pi}_{i}\right) + \frac{1}{n}\sum_{i = 1}^{n}\frac{\tilde{\rho}_{ki}\left(1 - \tilde{\rho}_{ki}\right)\left(1 - \tilde{\pi}_{i}\right)^2}{\tilde{\pi}_{i}}, \nonumber \\
\hat{\omega}_{jk}^2 &=& \frac{K + 1}{nK}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j)\tilde{\rho}_{ki}\left(1 - \tilde{\rho}_{ki}\right)\left(1 - \tilde{\pi}_{i}\right) \nonumber \\
&& + \: \frac{1}{n}\sum_{i = 1}^{n}\frac{\mathrm{I}(T_i \ge c_j)\tilde{\rho}_{ki}\left(1 - \tilde{\rho}_{ki}\right)\left(1 - \tilde{\pi}_{i}\right)^2}{\tilde{\pi}_{i}}, \nonumber \\
\hat{\eta}_{jk}^2 &=& \frac{K + 1}{nK}\sum_{i=1}^{n}\mathrm{I}(T_i < c_j)\tilde{\rho}_{ki}\left(1 - \tilde{\rho}_{ki}\right)\left(1 - \tilde{\pi}_{i}\right) \nonumber \\
&& + \: \frac{1}{n}\sum_{i = 1}^{n}\frac{\mathrm{I}(T_i < c_j)\tilde{\rho}_{ki}\left(1 - \tilde{\rho}_{ki}\right)\left(1 - \tilde{\pi}_{i}\right)^2}{\tilde{\pi}_{i}} , \nonumber
\end{eqnarray}
from which, along with $\hat{\theta}_{k,\mathrm{KNN}}$, $\hat{\beta}_{jk,\mathrm{KNN}}$ and $\hat{\gamma}_{jk,\mathrm{KNN}}$, one derives the estimates of the variances of
the proposed KNN imputation estimators.
To obtain estimates of the covariances,
we also need to estimate the quantities $\psi^2_{1212}$, $\psi^2_{112}$, $\psi^2_{213}$, $\psi^2_{12}$, $\psi^2_{113}$, $\psi^2_{223}$ and $\psi^2_{1223}$ given in Appendix 2.
However, estimates of such quantities are similar to those given above for
${\omega}_k^2, {\omega}_{jk}^2$ and ${\eta}_{jk}^2$. For example,
\begin{eqnarray}
\hat{\psi}_{1212}^2 &=& \frac{K + 1}{nK}\sum_{i=1}^{n}\mathrm{I}(c_1 \le T_i < c_2)\tilde{\rho}_{1i}\tilde{\rho}_{2i}\left(1 - \tilde{\pi}_{i}\right) \nonumber \\
&& + \: \frac{1}{n}\sum_{i = 1}^{n}\frac{\mathrm{I}(c_1 \le T_i < c_2)\tilde{\rho}_{1i}
\tilde{\rho}_{2i}\left(1 - \tilde{\pi}_{i}\right)^2}{\tilde{\pi}_{i}}. \nonumber
\end{eqnarray}
Of course, there are other possible approaches to obtain variance and covariance estimates.
For instance, one could resort to a standard bootstrap procedure.
From the original observations $(T_i,A_i,D_i,V_i)$, $i = 1,\ldots,n$, consider $B$ bootstrap samples $(T_i^{*b},A_i^{*b},D_i^{*b},V_i^{*b})$, $b = 1,\ldots,B$, and $i = 1,\ldots,n$. For the $b$-th sample, compute the bootstrap estimates $\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}}^{*b}(c_1)$, $\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}}^{*b}(c_1,c_2)$ and $\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}^{*b}(c_2)$ as
\begin{align}
\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}}^{*b}(c_1) &= \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i^{*b} < c_1)\left[V_i^{*b} D^{*b}_{1i} + (1 - V_i^{*b}) \hat{\rho}_{1i,K}^{*b}\right]}{\sum\limits_{i=1}^{n}\left[V_i^{*b} D^{*b}_{1i} + (1 - V_i^{*b}) \hat{\rho}_{1i,K}^{*b}\right]} \nonumber, \\
\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}}^{*b}(c_1,c_2) &= \frac{\sum\limits_{i=1}^{n}\mathrm{I}(c_1 \le T_i^{*b} < c_2)\left[V_i^{*b} D^{*b}_{2i} + (1 - V_i^{*b}) \hat{\rho}_{2i,K}^{*b}\right]}{\sum\limits_{i=1}^{n}\left[V_i^{*b} D^{*b}_{2i} + (1 - V_i^{*b}) \hat{\rho}_{2i,K}^{*b}\right]}, \nonumber \label{boot:est:knn} \displaybreak[3] \\
\widehat{\mathrm{TCF}}_{3,\mathrm{KNN}}^{*b}(c_2) &= \frac{\sum\limits_{i=1}^{n}\mathrm{I}(T_i^{*b} \ge c_2)\left[V_i^{*b} D^{*b}_{3i} + (1 - V_i^{*b}) \hat{\rho}_{3i,K}^{*b}\right]}{\sum\limits_{i=1}^{n}\left[V_i^{*b} D^{*b}_{3i} + (1 - V_i^{*b}) \hat{\rho}_{3i,K}^{*b}\right]}, \nonumber
\end{align}
where $\hat{\rho}_{ki,K}^{*b}$, $k = 1,2,3$, denote the KNN imputation values for missing labels $D^{*b}_{ki}$ in the bootstrap sample. Then, the bootstrap estimator of the variance of $\widehat{\mathrm{TCF}}_{k,\mathrm{KNN}}(c_1,c_2)$ is
\begin{equation}
\widehat{\mathbb{V}\mathrm{ar}}(\widehat{\mathrm{TCF}}_{k,\mathrm{KNN}}(c_1,c_2)) = \frac{1}{B-1}\sum_{b=1}^{B}\left(\widehat{\mathrm{TCF}}_{k,\mathrm{KNN}}^{*b}(c_1,c_2) - \widehat{\mathrm{TCF}}_{k,\mathrm{KNN}}^{*}(c_1,c_2)\right)^2 \nonumber \label{var:boot:est:knn},
\end{equation}
where $\widehat{\mathrm{TCF}}_{k,\mathrm{KNN}}^{*}(c_1,c_2)$ is the mean of the $B$ bootstrap estimates $\widehat{\mathrm{TCF}}_{k,\mathrm{KNN}}^{*b}(c_1,c_2)$.
More generally, the bootstrap estimate of the covariance matrix $\Xi$ is
\begin{equation}
\widehat{\Xi}_B = \frac{1}{B-1}\left(\widehat{\mathrm{TCF}}_{\mathrm{KNN}}^{*B}(c_1,c_2) - \mathbf{1}_B\widehat{\mathrm{TCF}}_{\mathrm{KNN}}^{*}(c_1,c_2)^\top\right)^\top\left(\widehat{\mathrm{TCF}}_{\mathrm{KNN}}^{*B}(c_1,c_2) - \mathbf{1}_B\widehat{\mathrm{TCF}}_{\mathrm{KNN}}^{*}(c_1,c_2)^\top\right),
\nonumber \label{covar:boot:est:knn}
\end{equation}
where $\widehat{\mathrm{TCF}}_{\mathrm{KNN}}^{*B}(c_1,c_2)$ is a $B \times 3$ matrix, whose element in the $b$--th row and the $k$--th column corresponds to $\widehat{\mathrm{TCF}}_{k,\mathrm{KNN}}^{*b}(c_1,c_2)$, $\mathbf{1}_B$ is the $B$--dimensional vector of ones, and $\widehat{\mathrm{TCF}}_{\mathrm{KNN}}^{*}(c_1,c_2)$ is the column vector that consists of the means of the $B$ bootstrap estimates $\widehat{\mathrm{TCF}}_{k,\mathrm{KNN}}^{*b}(c_1,c_2)$, $k=1,2,3$.
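A minimal Python sketch of this bootstrap procedure, reusing the function \texttt{knn\_tcf} sketched in Section~\ref{s:prop}, is the following.
\begin{verbatim}
import numpy as np

def bootstrap_cov(T, A, D, V, c1, c2, K=3, B=500, seed=0):
    """Bootstrap estimate of the covariance matrix of the KNN estimators
    (knn_tcf is the sketch of the estimator given in Section 3)."""
    rng = np.random.default_rng(seed)
    n = len(T)
    est = np.empty((B, 3))
    for b in range(B):
        idx = rng.integers(0, n, n)        # resample subjects with replacement
        est[b] = knn_tcf(T[idx], A[idx], D[idx], V[idx], c1, c2, K=K)
    return np.cov(est, rowvar=False)       # 3 x 3 matrix, divisor B - 1
\end{verbatim}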
\section{Simulation studies}
\label{s:simulation}
In this section, the ability of the KNN method to estimate TCF$_1$, TCF$_2$ and TCF$_3$ is evaluated by means of Monte Carlo experiments.
We also compare the proposed method with partially parametric approaches, i.e., FI, MSI, IPW and SPE approaches. As already mentioned,
partially parametric bias-corrected estimators of TCF$_1$, TCF$_2$ and TCF$_3$ require parametric regression models to estimate $\rho_{ki} = \mathrm{Pr}(D_{ki} = 1|T_i, A_i)$, or $\pi_i = \mathrm{Pr}(V_i = 1 | T_i,A_i)$, or both. A wrong specification of such models may affect the estimators. Therefore, in the simulation study we consider two scenarios: in the parametric estimation process,
\begin{enumerate}
\item[(i)] the disease model and the verification model are both correctly specified;
\item[(ii)] the disease model and the verification model are both misspecified.
\end{enumerate}
In both scenarios,
we execute $5000$ Monte Carlo runs at each setting; we set three sample sizes, i.e., $250$, $500$ and $1000$ in scenario (i) and a sample size of $1000$ in scenario (ii).
We consider KNN estimators based on the Euclidean distance, with $K = 1$ and $K=3$.
This choice is in light of the discussion in Section 3.4 and of some results of a preliminary simulation study presented in Section S1 of the Supplementary Material.
In that study, we compared the behavior of the KNN estimators for several choices of the distance measure (Euclidean, Manhattan, Canberra and Mahalanobis) and of the size of the neighborhood ($K = 1,3,5,10,20$).
\subsection{Correctly specified parametric models}
The true disease status $D$ is generated as a trinomial random vector $(D_{1},D_{2},D_{3})$, such that $D_{k}$ is a Bernoulli random variable with success probability $\theta_k$, $k=1,2,3$. We set $\theta_1 = 0.4, \theta_2 = 0.35$ and $\theta_3 = 0.25$. The continuous test result $T$ and a covariate $A$ are generated from the following conditional models
\[
T,A |D_{k} \sim \mathcal{N}_2 \left(\mu_k, \Sigma\right), \qquad k = 1,2,3,
\]
where $\mu_k = (2 k, k)^\top$ and
\[
\Sigma = \left(\begin{array}{c c}
\sigma^2_{T|D} & \sigma_{T,A|D} \\
\sigma_{T,A|D} & \sigma^2_{A|D}
\end{array}\right).
\]
We consider three different values for $\Sigma$, specifically
\[
\left(\begin{array}{c c}
1.75 & 0.1 \\
0.1 & 2.5
\end{array}\right) , \qquad
\left(\begin{array}{c c}
2.5 & 1.5 \\
1.5 & 2.5
\end{array}\right) , \qquad
\left(\begin{array}{c c}
5.5 & 3 \\
3 & 2.5
\end{array}\right),
\]
giving rise to a correlation between $T$ and $A$ equal to $0.36, 0.69$ and $0.84$, respectively. Values chosen for $\Sigma$ give rise to true VUS values ranging from 0.7175 to 0.4778. The verification status $V$ is generated by the following model
\[
\mathrm{logit}\left\{\mathrm{Pr}(V = 1|T,A)\right\} = \delta_0 + \delta_1 T + \delta_2 A,
\]
where we fix $\delta_0 = 0.5, \delta_1 = -0.3$ and $\delta_2 = 0.75$. This choice corresponds to a verification rate of about $0.65$. We consider six pairs of cut points $(c_1,c_2)$, i.e., $(2,4), (2,5)$, $(2,7), (4,5), (4,7)$ and $(5,7)$.
Since the conditional distribution of $T$ given $D_{k}$ is normal, the true parameter values are
\begin{eqnarray}
{\mathrm{TCF}}_{1}(c_1) &=& \Phi \left(\frac{c_1 - 2}{\sigma_{T|D}}\right) \nonumber, \\
{\mathrm{TCF}}_{2}(c_1,c_2) &=& \Phi \left(\frac{c_2 - 4}{\sigma_{T|D}}\right) - \Phi \left(\frac{c_1 - 4}{\sigma_{T|D}}\right), \nonumber\\
{\mathrm{TCF}}_{3}(c_2) &=& 1 - \Phi \left(\frac{c_2 - 6}{\sigma_{T|D}}\right), \nonumber
\end{eqnarray}
where $\Phi(\cdot)$ denotes the cumulative distribution function of the standard normal random variable.
In this set--up, FI, MSI, IPW and SPE estimators are computed under correct working models for both the disease and the verification processes. Therefore, the conditional verification probabilities $\pi_i$ are estimated from a logistic model for $V$ given $T$ and $A$ with logit link. Under our data--generating process, the true conditional disease model is a multinomial logistic model
\[
\mathrm{Pr}(D_{k} = 1|T,A) = \frac{\exp \left(\tau_{0k} + \tau_{1k} T + \tau_{2k} A\right)}{1 + \exp \left(\tau_{01} + \tau_{11} T + \tau_{21} A\right) + \exp \left(\tau_{02} + \tau_{12} T + \tau_{22} A\right)},
\]
for suitable $\tau_{0k},\tau_{1k},\tau_{2k}$, where $k = 1,2$.
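For reproducibility, the following minimal Python sketch generates one data set under scenario (i); the first value of $\Sigma$ is used as the default, and function and variable names are illustrative.
\begin{verbatim}
import numpy as np

def simulate(n=250, Sigma=((1.75, 0.1), (0.1, 2.5)), seed=0):
    """One data set from scenario (i): trinomial disease status, bivariate
    normal (T, A) given the class, logistic verification model."""
    rng = np.random.default_rng(seed)
    D = rng.multinomial(1, [0.4, 0.35, 0.25], size=n)       # (n, 3)
    k = D.argmax(axis=1) + 1                                 # class label
    mu = np.column_stack([2 * k, k])                         # mu_k = (2k, k)
    TA = mu + rng.multivariate_normal([0, 0], np.asarray(Sigma), size=n)
    T, A = TA[:, 0], TA[:, 1]
    pi = 1 / (1 + np.exp(-(0.5 - 0.3 * T + 0.75 * A)))       # Pr(V = 1 | T, A)
    V = rng.binomial(1, pi)
    return T, A, D, V
\end{verbatim}
The unverified disease labels (rows of $D$ with $V=0$) would then be masked before applying the estimators.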
Tables \ref{tab:res11}--\ref{tab:res13} show Monte Carlo means and standard deviations of the estimators for the three true class fractions. Results concern the estimators FI, MSI, IPW, SPE, and the KNN estimator with $K = 1$ and $K = 3$ computed using the Euclidean distance.
Also, the estimated standard deviations are shown in the tables. The estimates are obtained by using the asymptotic results. To estimate the standard deviations of the KNN estimators, we use the KNN procedure discussed in Section 4, with $\bar{K} = 2$. Each table refers to a chosen value of $\Sigma$. The sample size is $250$. The results for sample sizes $500$ and $1000$ are presented in Section S2 of the Supplementary Material.
As expected, the parametric approaches work well when both models for $\rho_k(t,a)$ and $\pi(t,a)$ are correctly specified. The FI and MSI estimators seem to be the most efficient ones, whereas the IPW approach generally seems to provide less efficient estimators.
The new proposals (the 1NN and 3NN estimators) also yield good results, comparable, in terms of bias and standard deviation, to those of the parametric competitors.
Moreover, the 1NN and 3NN estimators achieve similar performances,
and the estimated standard deviations of the KNN estimators support the effectiveness of the procedure discussed in Section 4.
Finally, some results of simulation experiments performed to explore the effect of a multidimensional vector of auxiliary covariates are given in Section S3 of the Supplementary Material. A vector $A$ of dimension 3 is employed. The results in Table 16 of the Supplementary Material show that the KNN estimators still behave satisfactorily.
\begin{table}[htbp]
\caption{Monte Carlo means, Monte Carlo standard deviations and estimated standard deviations of the estimators of the true class fractions, for a sample size of $250$. The first value of $\Sigma$ is considered. ``True'' denotes the true parameter value.}
\begin{center}
\begin{footnotesize}
\begin{tabular}{c c c c c c c c c c}
\toprule
& TCF$_1$ & TCF$_2$ & TCF$_3$ & MC.sd$_1$ & MC.sd$_2$ & MC.sd$_3$ & asy.sd$_1$ & asy.sd$_2$ & asy.sd$_3$ \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,4)$} \\
\hdashline
True & 0.5000 & 0.4347 & 0.9347 & & & & & & \\
FI & 0.5005 & 0.4348 & 0.9344 & 0.0537 & 0.0484 & 0.0269 & 0.0440 & 0.0398 & 0.0500 \\
MSI & 0.5005 & 0.4346 & 0.9342 & 0.0550 & 0.0547 & 0.0320 & 0.0465 & 0.0475 & 0.0536 \\
IPW & 0.4998 & 0.4349 & 0.9341 & 0.0722 & 0.0727 & 0.0372 & 0.0688 & 0.0702 & 0.0420 \\
SPE & 0.5010 & 0.4346 & 0.9344 & 0.0628 & 0.0659 & 0.0364 & 0.0857 & 0.0637 & 0.0363 \\
1NN & 0.4989 & 0.4334 & 0.9331 & 0.0592 & 0.0665 & 0.0387 & 0.0555 & 0.0626 & 0.0382 \\
3NN & 0.4975 & 0.4325 & 0.9322 & 0.0567 & 0.0617 & 0.0364 & 0.0545 & 0.0608 & 0.0372 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,5)$} \\
\hdashline
True & 0.5000 & 0.7099 & 0.7752 & & & & & & \\
FI & 0.5005 & 0.7111 & 0.7761 & 0.0537 & 0.0461 & 0.0534 & 0.0440 & 0.0400 & 0.0583 \\
MSI & 0.5005 & 0.7104 & 0.7756 & 0.0550 & 0.0511 & 0.0566 & 0.0465 & 0.0467 & 0.0626 \\
IPW & 0.4998 & 0.7108 & 0.7750 & 0.0722 & 0.0701 & 0.0663 & 0.0688 & 0.0667 & 0.0713 \\
SPE & 0.5010 & 0.7106 & 0.7762 & 0.0628 & 0.0619 & 0.0627 & 0.0857 & 0.0604 & 0.0611 \\
1NN & 0.4989 & 0.7068 & 0.7738 & 0.0592 & 0.0627 & 0.0652 & 0.0555 & 0.0591 & 0.0625 \\
3NN & 0.4975 & 0.7038 & 0.7714 & 0.0567 & 0.0576 & 0.0615 & 0.0545 & 0.0574 & 0.0610 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,7)$} \\
\hdashline
True & 0.5000 & 0.9230 & 0.2248 & & & & & & \\
FI & 0.5005 & 0.9229 & 0.2240 & 0.0537 & 0.0236 & 0.0522 & 0.0440 & 0.0309 & 0.0428 \\
MSI & 0.5005 & 0.9231 & 0.2243 & 0.0550 & 0.0285 & 0.0531 & 0.0465 & 0.0353 & 0.0443 \\
IPW & 0.4998 & 0.9238 & 0.2222 & 0.0722 & 0.0374 & 0.0765 & 0.0688 & 0.0360 & 0.0728 \\
SPE & 0.5010 & 0.9236 & 0.2250 & 0.0628 & 0.0362 & 0.0578 & 0.0857 & 0.0348 & 0.0573 \\
1NN & 0.4989 & 0.9201 & 0.2233 & 0.0592 & 0.0372 & 0.0577 & 0.0555 & 0.0366 & 0.0570 \\
3NN & 0.4975 & 0.9177 & 0.2216 & 0.0567 & 0.0340 & 0.0558 & 0.0545 & 0.0355 & 0.0563 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (4,5)$} \\
\hdashline
True & 0.9347 & 0.2752 & 0.7752 & & & & & & \\
FI & 0.9347 & 0.2763 & 0.7761 & 0.0245 & 0.0412 & 0.0534 & 0.0179 & 0.0336 & 0.0583 \\
MSI & 0.9348 & 0.2758 & 0.7756 & 0.0271 & 0.0471 & 0.0566 & 0.0220 & 0.0404 & 0.0626 \\
IPW & 0.9350 & 0.2758 & 0.7750 & 0.0421 & 0.0693 & 0.0663 & 0.0391 & 0.0651 & 0.0713 \\
SPE & 0.9353 & 0.2761 & 0.7762 & 0.0386 & 0.0590 & 0.0627 & 0.0377 & 0.0568 & 0.0611 \\
1NN & 0.9322 & 0.2734 & 0.7738 & 0.0374 & 0.0572 & 0.0652 & 0.0342 & 0.0553 & 0.0625 \\
3NN & 0.9303 & 0.2712 & 0.7714 & 0.0328 & 0.0526 & 0.0615 & 0.0332 & 0.0538 & 0.0610 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (4,7)$} \\
\hdashline
True & 0.9347 & 0.4883 & 0.2248 & & & & & & \\
FI & 0.9347 & 0.4881 & 0.2240 & 0.0245 & 0.0541 & 0.0522 & 0.0179 & 0.0444 & 0.0428 \\
MSI & 0.9348 & 0.4885 & 0.2243 & 0.0271 & 0.0576 & 0.0531 & 0.0220 & 0.0495 & 0.0443 \\
IPW & 0.9350 & 0.4889 & 0.2222 & 0.0421 & 0.0741 & 0.0765 & 0.0391 & 0.0713 & 0.0728 \\
SPE & 0.9353 & 0.4890 & 0.2250 & 0.0386 & 0.0674 & 0.0578 & 0.0377 & 0.0646 & 0.0573 \\
1NN & 0.9322 & 0.4867 & 0.2233 & 0.0374 & 0.0680 & 0.0577 & 0.0342 & 0.0633 & 0.0570 \\
3NN & 0.9303 & 0.4852 & 0.2216 & 0.0328 & 0.0630 & 0.0558 & 0.0332 & 0.0615 & 0.0563 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (5,7)$} \\
\hdashline
True & 0.9883 & 0.2132 & 0.2248 & & & & & & \\
FI & 0.9879 & 0.2118 & 0.2240 & 0.0075 & 0.0435 & 0.0522 & 0.0055 & 0.0336 & 0.0428 \\
MSI & 0.9882 & 0.2127 & 0.2243 & 0.0096 & 0.0467 & 0.0531 & 0.0084 & 0.0388 & 0.0443 \\
IPW & 0.9887 & 0.2130 & 0.2222 & 0.0193 & 0.0653 & 0.0765 & 0.0177 & 0.0618 & 0.0728 \\
SPE & 0.9888 & 0.2130 & 0.2250 & 0.0191 & 0.0571 & 0.0578 & 0.0184 & 0.0554 & 0.0573 \\
1NN & 0.9868 & 0.2133 & 0.2233 & 0.0177 & 0.0567 & 0.0577 & 0.0172 & 0.0532 & 0.0570 \\
3NN & 0.9860 & 0.2139 & 0.2216 & 0.0151 & 0.0519 & 0.0558 & 0.0168 & 0.0516 & 0.0563 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\label{tab:res11}
\end{table}
\begin{table}[htbp]
\caption{Monte Carlo means, Monte Carlo standard deviations and estimated standard deviations of the estimators of the true class fractions, for a sample size of $250$. The second value of $\Sigma$ is considered. ``True'' denotes the true parameter value.}
\begin{center}
\begin{footnotesize}
\begin{tabular}{c c c c c c c c c c}
\toprule
& TCF$_1$ & TCF$_2$ & TCF$_3$ & MC.sd$_1$ & MC.sd$_2$ & MC.sd$_3$ & asy.sd$_1$ & asy.sd$_2$ & asy.sd$_3$ \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,4)$} \\
\hdashline
True & 0.5000 & 0.3970 & 0.8970 & & & & & & \\
FI & 0.4999 & 0.3974 & 0.8973 & 0.0503 & 0.0421 & 0.0362 & 0.0432 & 0.0352 & 0.0466 \\
MSI & 0.5000 & 0.3975 & 0.8971 & 0.0521 & 0.0497 & 0.0416 & 0.0461 & 0.0451 & 0.0515 \\
IPW & 0.4989 & 0.3990 & 0.8971 & 0.0663 & 0.0685 & 0.0534 & 0.0647 & 0.0681 & 0.0530 \\
SPE & 0.5004 & 0.3980 & 0.8976 & 0.0570 & 0.0619 & 0.0516 & 0.0563 & 0.0620 & 0.0493 \\
1NN & 0.4982 & 0.3953 & 0.8976 & 0.0587 & 0.0642 & 0.0537 & 0.0561 & 0.0618 & 0.0487 \\
3NN & 0.4960 & 0.3933 & 0.8970 & 0.0556 & 0.0595 & 0.0494 & 0.0548 & 0.0600 & 0.0472 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,5)$} \\
\hdashline
True & 0.5000 & 0.6335 & 0.7365 & & & & & & \\
FI & 0.4999 & 0.6337 & 0.7395 & 0.0503 & 0.0436 & 0.0583 & 0.0432 & 0.0379 & 0.0554 \\
MSI & 0.5000 & 0.6330 & 0.7385 & 0.0521 & 0.0508 & 0.0613 & 0.0461 & 0.0469 & 0.0612 \\
IPW & 0.4989 & 0.6335 & 0.7386 & 0.0663 & 0.0676 & 0.0728 & 0.0647 & 0.0663 & 0.0745 \\
SPE & 0.5004 & 0.6333 & 0.7390 & 0.0570 & 0.0622 & 0.0682 & 0.0563 & 0.0612 & 0.0673 \\
1NN & 0.4982 & 0.6304 & 0.7400 & 0.0587 & 0.0645 & 0.0721 & 0.0561 & 0.0615 & 0.0672 \\
3NN & 0.4960 & 0.6283 & 0.7396 & 0.0556 & 0.0600 & 0.0670 & 0.0548 & 0.0597 & 0.0654 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,7)$} \\
\hdashline
True & 0.5000 & 0.8682 & 0.2635 & & & & & & \\
FI & 0.4999 & 0.8676 & 0.2655 & 0.0503 & 0.0316 & 0.0560 & 0.0432 & 0.0294 & 0.0478 \\
MSI & 0.5000 & 0.8678 & 0.2660 & 0.0521 & 0.0374 & 0.0583 & 0.0461 & 0.0364 & 0.0512 \\
IPW & 0.4989 & 0.8682 & 0.2669 & 0.0663 & 0.0507 & 0.0698 & 0.0647 & 0.0484 & 0.0692 \\
SPE & 0.5004 & 0.8681 & 0.2663 & 0.0570 & 0.0476 & 0.0608 & 0.0563 & 0.0459 & 0.0600 \\
1NN & 0.4982 & 0.8672 & 0.2672 & 0.0587 & 0.0495 & 0.0629 & 0.0561 & 0.0458 & 0.0609 \\
3NN & 0.4960 & 0.8657 & 0.2671 & 0.0556 & 0.0452 & 0.0610 & 0.0548 & 0.0442 & 0.0601 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (4,5)$} \\
\hdashline
True & 0.8970 & 0.2365 & 0.7365 & & & & & & \\
FI & 0.8980 & 0.2363 & 0.7395 & 0.0284 & 0.0367 & 0.0583 & 0.0239 & 0.0301 & 0.0554 \\
MSI & 0.8976 & 0.2356 & 0.7385 & 0.0318 & 0.0437 & 0.0613 & 0.0292 & 0.0386 & 0.0612 \\
IPW & 0.8975 & 0.2345 & 0.7386 & 0.0377 & 0.0594 & 0.0728 & 0.0373 & 0.0578 & 0.0745 \\
SPE & 0.8974 & 0.2353 & 0.7390 & 0.0364 & 0.0529 & 0.0682 & 0.0361 & 0.0522 & 0.0673 \\
1NN & 0.8958 & 0.2352 & 0.7400 & 0.0388 & 0.0540 & 0.0721 & 0.0373 & 0.0524 & 0.0672 \\
3NN & 0.8946 & 0.2350 & 0.7396 & 0.0362 & 0.0502 & 0.0670 & 0.0361 & 0.0510 & 0.0654 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (4,7)$} \\
\hdashline
True & 0.8970 & 0.4711 & 0.2635 & & & & & & \\
FI & 0.8980 & 0.4703 & 0.2655 & 0.0284 & 0.0512 & 0.0560 & 0.0239 & 0.0413 & 0.0478 \\
MSI & 0.8976 & 0.4703 & 0.2660 & 0.0318 & 0.0561 & 0.0583 & 0.0292 & 0.0490 & 0.0512 \\
IPW & 0.8975 & 0.4692 & 0.2669 & 0.0377 & 0.0693 & 0.0698 & 0.0373 & 0.0679 & 0.0692 \\
SPE & 0.8974 & 0.4701 & 0.2663 & 0.0364 & 0.0638 & 0.0608 & 0.0361 & 0.0629 & 0.0600 \\
1NN & 0.8958 & 0.4719 & 0.2672 & 0.0388 & 0.0666 & 0.0629 & 0.0373 & 0.0630 & 0.0609 \\
3NN & 0.8946 & 0.4724 & 0.2671 & 0.0362 & 0.0627 & 0.0610 & 0.0361 & 0.0611 & 0.0601 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (5,7)$} \\
\hdashline
True & 0.9711 & 0.2347 & 0.2635 & & & & & & \\
FI & 0.9710 & 0.2339 & 0.2655 & 0.0124 & 0.0407 & 0.0560 & 0.0104 & 0.0336 & 0.0478 \\
MSI & 0.9709 & 0.2348 & 0.2660 & 0.0166 & 0.0461 & 0.0583 & 0.0156 & 0.0412 & 0.0512 \\
IPW & 0.9709 & 0.2347 & 0.2669 & 0.0204 & 0.0568 & 0.0698 & 0.0202 & 0.0562 & 0.0692 \\
SPE & 0.9709 & 0.2348 & 0.2663 & 0.0202 & 0.0531 & 0.0608 & 0.0199 & 0.0524 & 0.0600 \\
1NN & 0.9701 & 0.2368 & 0.2672 & 0.0217 & 0.0549 & 0.0629 & 0.0213 & 0.0533 & 0.0609 \\
3NN & 0.9695 & 0.2375 & 0.2671 & 0.0200 & 0.0519 & 0.0610 & 0.0206 & 0.0517 & 0.0601 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\label{tab:res12}
\end{table}
\begin{table}[htbp]
\caption{Monte Carlo means, Monte Carlo standard deviations and estimated standard deviations of the estimators of the true class fractions, for a sample size of $250$. The third value of $\Sigma$ is considered. ``True'' denotes the true parameter value.}
\begin{center}
\begin{footnotesize}
\begin{tabular}{c c c c c c c c c c}
\toprule
& TCF$_1$ & TCF$_2$ & TCF$_3$ & MC.sd$_1$ & MC.sd$_2$ & MC.sd$_3$ & asy.sd$_1$ & asy.sd$_2$ & asy.sd$_3$ \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,4)$} \\
\hdashline
True & 0.5000 & 0.3031 & 0.8031 & & & & & & \\
FI & 0.5009 & 0.3031 & 0.8047 & 0.0488 & 0.0344 & 0.0495 & 0.0418 & 0.0284 & 0.0467 \\
MSI & 0.5005 & 0.3032 & 0.8045 & 0.0515 & 0.0448 & 0.0544 & 0.0460 & 0.0410 & 0.0542 \\
IPW & 0.5015 & 0.3030 & 0.8043 & 0.0624 & 0.0632 & 0.0649 & 0.0618 & 0.0620 & 0.0640 \\
SPE & 0.5007 & 0.3034 & 0.8043 & 0.0565 & 0.0576 & 0.0628 & 0.0564 & 0.0574 & 0.0614 \\
1NN & 0.4997 & 0.3021 & 0.8047 & 0.0592 & 0.0602 & 0.0682 & 0.0571 & 0.0584 & 0.0621 \\
3NN & 0.4984 & 0.3018 & 0.8043 & 0.0561 & 0.0565 & 0.0632 & 0.0556 & 0.0566 & 0.0601 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,5)$} \\
\hdashline
True & 0.5000 & 0.4682 & 0.6651 & & & & & & \\
FI & 0.5009 & 0.4692 & 0.6668 & 0.0488 & 0.0384 & 0.0616 & 0.0418 & 0.0323 & 0.0536 \\
MSI & 0.5005 & 0.4687 & 0.6666 & 0.0515 & 0.0495 & 0.0658 & 0.0460 & 0.0455 & 0.0610 \\
IPW & 0.5015 & 0.4681 & 0.6670 & 0.0624 & 0.0671 & 0.0753 & 0.0618 & 0.0670 & 0.0743 \\
SPE & 0.5007 & 0.4690 & 0.6665 & 0.0565 & 0.0624 & 0.0721 & 0.0564 & 0.0622 & 0.0704 \\
1NN & 0.4997 & 0.4676 & 0.6668 & 0.0592 & 0.0661 & 0.0780 & 0.0571 & 0.0634 & 0.0717 \\
3NN & 0.4984 & 0.4670 & 0.6666 & 0.0561 & 0.0619 & 0.0729 & 0.0556 & 0.0614 & 0.0695 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (2,7)$} \\
\hdashline
True & 0.5000 & 0.7027 & 0.3349 & & & & & & \\
FI & 0.5009 & 0.7030 & 0.3358 & 0.0488 & 0.0375 & 0.0595 & 0.0418 & 0.0318 & 0.0501 \\
MSI & 0.5005 & 0.7027 & 0.3360 & 0.0515 & 0.0474 & 0.0637 & 0.0460 & 0.0435 & 0.0563 \\
IPW & 0.5015 & 0.7026 & 0.3366 & 0.0624 & 0.0625 & 0.0730 & 0.0618 & 0.0618 & 0.0716 \\
SPE & 0.5007 & 0.7032 & 0.3362 & 0.0565 & 0.0591 & 0.0677 & 0.0564 & 0.0583 & 0.0657 \\
1NN & 0.4997 & 0.7024 & 0.3366 & 0.0592 & 0.0633 & 0.0712 & 0.0571 & 0.0592 & 0.0675 \\
3NN & 0.4984 & 0.7016 & 0.3362 & 0.0561 & 0.0590 & 0.0680 & 0.0556 & 0.0572 & 0.0660 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (4,5)$} \\
\hdashline
True & 0.8031 & 0.1651 & 0.6651 & & & & & & \\
FI & 0.8042 & 0.1660 & 0.6668 & 0.0383 & 0.0277 & 0.0616 & 0.0323 & 0.0231 & 0.0536 \\
MSI & 0.8037 & 0.1655 & 0.6666 & 0.0415 & 0.0372 & 0.0658 & 0.0380 & 0.0333 & 0.0610 \\
IPW & 0.8039 & 0.1651 & 0.6670 & 0.0473 & 0.0503 & 0.0753 & 0.0473 & 0.0493 & 0.0743 \\
SPE & 0.8036 & 0.1655 & 0.6665 & 0.0456 & 0.0465 & 0.0721 & 0.0458 & 0.0455 & 0.0704 \\
1NN & 0.8032 & 0.1655 & 0.6668 & 0.0487 & 0.0481 & 0.0780 & 0.0472 & 0.0466 & 0.0717 \\
3NN & 0.8020 & 0.1651 & 0.6666 & 0.0460 & 0.0450 & 0.0729 & 0.0457 & 0.0451 & 0.0695 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (4,7)$} \\
\hdashline
True & 0.8031 & 0.3996 & 0.3349 & & & & & & \\
FI & 0.8042 & 0.3999 & 0.3358 & 0.0383 & 0.0426 & 0.0595 & 0.0323 & 0.0349 & 0.0501 \\
MSI & 0.8037 & 0.3995 & 0.3360 & 0.0415 & 0.0522 & 0.0637 & 0.0380 & 0.0463 & 0.0563 \\
IPW & 0.8039 & 0.3996 & 0.3366 & 0.0473 & 0.0658 & 0.0730 & 0.0473 & 0.0645 & 0.0716 \\
SPE & 0.8036 & 0.3998 & 0.3362 & 0.0456 & 0.0618 & 0.0677 & 0.0458 & 0.0606 & 0.0657 \\
1NN & 0.8032 & 0.4003 & 0.3366 & 0.0487 & 0.0660 & 0.0712 & 0.0472 & 0.0619 & 0.0675 \\
3NN & 0.8020 & 0.3998 & 0.3362 & 0.0460 & 0.0617 & 0.0680 & 0.0457 & 0.0600 & 0.0660 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (5,7)$} \\
\hdashline
True & 0.8996 & 0.2345 & 0.3349 & & & & & & \\
FI & 0.9003 & 0.2338 & 0.3358 & 0.0266 & 0.0351 & 0.0595 & 0.0224 & 0.0292 & 0.0501 \\
MSI & 0.9004 & 0.2340 & 0.3360 & 0.0308 & 0.0443 & 0.0637 & 0.0285 & 0.0398 & 0.0563 \\
IPW & 0.9005 & 0.2345 & 0.3366 & 0.0355 & 0.0555 & 0.0730 & 0.0353 & 0.0550 & 0.0716 \\
SPE & 0.9004 & 0.2342 & 0.3362 & 0.0349 & 0.0523 & 0.0677 & 0.0346 & 0.0517 & 0.0657 \\
1NN & 0.9000 & 0.2348 & 0.3366 & 0.0373 & 0.0556 & 0.0712 & 0.0361 & 0.0531 & 0.0675 \\
3NN & 0.8992 & 0.2346 & 0.3362 & 0.0349 & 0.0520 & 0.0680 & 0.0349 & 0.0515 & 0.0660 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\label{tab:res13}
\end{table}
\subsection{Misspecified models}
We start from two independent random variables $Z_1 \sim \mathcal{N}(0,0.5)$ and $Z_2 \sim \mathcal{N}(0,0.5)$. The true disease status $D$ is generated by a trinomial random vector $(D_1,D_2,D_3)$ such that
\[
D_1 = \left\{\begin{array}{r l}
1 & \mathrm{ if } \, Z_1 + Z_2 \le h_1 \\
0 & \mathrm{ otherwise}
\end{array}\right. ,
\quad
D_2 = \left\{\begin{array}{r l}
1 & \mathrm{ if } \, h_1 < Z_1 + Z_2 \le h_2 \\
0 & \mathrm{ otherwise}
\end{array}\right. ,
\quad
D_3 = \left\{\begin{array}{r l}
1 & \mathrm{ if } \, Z_1 + Z_2 > h_2 \\
0 & \mathrm{ otherwise}
\end{array}\right. .
\]
Here, $h_1$ and $h_2$ are two thresholds, chosen so that $\theta_1 = 0.4$ and $\theta_3 = 0.25$. The continuous test result $T$ and the covariate $A$ are generated so as to be related to $D$ through $Z_1$ and $Z_2$. More precisely,
\[
T = \alpha(Z_1 + Z_2) + \varepsilon_1, \qquad A = Z_1 + Z_2 + \varepsilon_2,
\]
where $\varepsilon_1$ and $\varepsilon_2$ are two independent normal random variables with mean $0$ and common variance $0.25$. The verification status $V$ is simulated by the following logistic model
\[
\mathrm{logit}\left\{\mathrm{Pr}(V = 1|T,A)\right\} = -1.5 - 0.35 T - 1.5A.
\]
Under this model, the verification rate is roughly $0.276$; this has led us to the choice of $n = 1000$. For the cut points, we consider six pairs $(c_1,c_2)$, i.e., $(-1.0, -0.5)$, $(-1.0,0.7)$, $(-1.0, 1.3)$, $(-0.5,0.7)$, $(-0.5,1.3)$ and $(0.7, 1.3)$. Within this set--up, the true values of the TCFs are determined as follows:
\begin{eqnarray}
{\mathrm{TCF}}_{1}(c_1) &=& \frac{1}{\Phi(h_1)} \int_{-\infty}^{h_1}\Phi\left(\frac{c_1 - \alpha z}{\sqrt{0.25}}\right)\phi(z)\mathrm{d} z\nonumber, \\
{\mathrm{TCF}}_{2}(c_1,c_2) &=& \frac{1}{\Phi(h_2) - \Phi(h_1)} \int_{h_1}^{h_2}\left[\Phi\left(\frac{c_2 - \alpha z}{\sqrt{0.25}}\right) - \Phi\left(\frac{c_1 - \alpha z}{\sqrt{0.25}}\right)\right]\phi(z)\mathrm{d} z, \nonumber \\
{\mathrm{TCF}}_{3}(c_2) &=& 1 - \frac{1}{1 - \Phi(h_2)} \int_{h_2}^{\infty}\Phi\left(\frac{c_2 - \alpha z}{\sqrt{0.25}}\right)\phi(z)\mathrm{d} z, \nonumber
\end{eqnarray}
where $\phi(\cdot)$ denotes the density function of the standard normal random variable. We choose $\alpha = 0.5$.
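The integrals above do not have a closed form, but they are easily evaluated numerically; the following minimal sketch (a Python/SciPy code sketch, purely illustrative and not the code used for the study) computes the true TCF values for a given pair of cut points.
\begin{verbatim}
# Illustrative sketch: true TCFs in the misspecified scenario, alpha = 0.5.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

alpha, s = 0.5, np.sqrt(0.25)
h1, h2 = norm.ppf(0.4), norm.ppf(0.75)   # so that theta_1 = 0.4, theta_3 = 0.25

def tcf_true(c1, c2):
    f1 = lambda z: norm.cdf((c1 - alpha * z) / s) * norm.pdf(z)
    f2 = lambda z: (norm.cdf((c2 - alpha * z) / s)
                    - norm.cdf((c1 - alpha * z) / s)) * norm.pdf(z)
    f3 = lambda z: norm.cdf((c2 - alpha * z) / s) * norm.pdf(z)
    tcf1 = quad(f1, -np.inf, h1)[0] / norm.cdf(h1)
    tcf2 = quad(f2, h1, h2)[0] / (norm.cdf(h2) - norm.cdf(h1))
    tcf3 = 1.0 - quad(f3, h2, np.inf)[0] / (1.0 - norm.cdf(h2))
    return tcf1, tcf2, tcf3

# e.g. tcf_true(-1.0, -0.5); cf. the "True" rows reported in the table
\end{verbatim}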
The aim in this scenario is to compare the FI, MSI, IPW, SPE and KNN estimators when both the estimates $\hat{\pi}_i$ and $\hat{\rho}_{ki}$ used in the parametric approaches are inconsistent. To this end, $\hat{\rho}_{ki}$ is obtained from a multinomial logistic regression model with $D = (D_1,D_2,D_3)$ as the response and $T$ as the only predictor. To estimate $\pi_i$, we use a generalized linear model for $V$ given $T$ and $A^{2/3}$ with logit link. Clearly, the two fitted models are misspecified. The KNN estimators are obtained by using $K = 1$ and $K=3$ and the Euclidean distance. Again, we use $\bar{K} = 2$ in the KNN procedure to estimate the standard deviations of the KNN estimators.
Table \ref{tab:res2} presents Monte Carlo means and standard deviations (across 5000 replications) for the estimators of the true class fractions, $\mathrm{TCF}_1$, $\mathrm{TCF}_2$ and $\mathrm{TCF}_3$.
The table also gives the means of the estimated standard deviations of the estimators, based on the asymptotic theory. The table clearly shows the limitations of the (partially) parametric approaches in case of misspecified models for $\mathrm{Pr}(D_k = 1|T,A)$ and $\mathrm{Pr}(V = 1|T,A)$. More precisely, in terms of bias, the FI, MSI, IPW and SPE approaches almost always perform poorly, with severe distortion in almost all cases. As we mentioned in Section 2, the SPE estimators could fall outside the interval $(0,1)$. In our simulations, in the worst case, the estimator $\widehat{\mathrm{TCF}}_{3,\mathrm{SPE}}(-1.0, -0.5)$ exceeds $1$ in $20\%$ of the replications. Moreover, the Monte Carlo standard deviations shown in the table indicate that the SPE approach might yield unstable estimates. Finally, the misspecification also has a clear effect on the estimated standard deviations of the estimators.
On the other hand, the 1NN and 3NN estimators seem to perform well in terms of both bias and standard deviation. In fact, the KNN estimators yield estimated values that are close to the true values. In addition, we observe that the 3NN estimator has a larger bias than the 1NN estimator, but a slightly smaller variance.
\begin{table}[htbp]
\caption{Monte Carlo means, Monte Carlo standard deviations and estimated standard deviations of the estimators of the true class fractions, when both models for $\rho_k(t,a)$ and $\pi(t,a)$ are misspecified and the sample size equals $1000$. ``True'' denotes the true parameter value.}
\begin{center}
\begin{footnotesize}
\begin{tabular}{c c c c c c c c c c}
\toprule
& TCF$_1$ & TCF$_2$ & TCF$_3$ & MC.sd$_1$ & MC.sd$_2$ & MC.sd$_3$ & asy.sd$_1$ & asy.sd$_2$ & asy.sd$_3$ \\
\midrule
& \multicolumn{9}{c}{cut points $ = (-1.0,-0.5)$} \\
\hdashline
True & 0.1812 & 0.1070 & 0.9817 & & & & & & \\
FI & 0.1290 & 0.0588 & 0.9888 & 0.0153 & 0.0133 & 0.0118 & 0.0154 & 0.0087 & 0.0412 \\
MSI & 0.1299 & 0.0592 & 0.9895 & 0.0154 & 0.0153 & 0.0131 & 0.0157 & 0.0110 & 0.0417 \\
IPW & 0.1231 & 0.0576 & 0.9889 & 0.0178 & 0.0211 & 0.0208 & 0.0175 & 0.0207 & 0.3694 \\
SPE & 0.1407 & 0.0649 & 0.9877 & 0.0173 & 0.0216 & 0.0231 & 0.0176 & 0.0212 & 0.0432 \\
1NN & 0.1809 & 0.1036 & 0.9817 & 0.0224 & 0.0304 & 0.0255 & 0.0211 & 0.0262 & 0.0242 \\
3NN & 0.1795 & 0.0991 & 0.9814 & 0.0214 & 0.0258 & 0.0197 & 0.0208 & 0.0244 & 0.0240 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (-1.0,0.7)$} \\
\hdashline
True & 0.1812 & 0.8609 & 0.4469 & & & & & & \\
FI & 0.1290 & 0.7399 & 0.5850 & 0.0153 & 0.0447 & 0.1002 & 0.0154 & 0.0181 & 0.0739 \\
MSI & 0.1299 & 0.7423 & 0.5841 & 0.0154 & 0.0453 & 0.1008 & 0.0157 & 0.0188 & 0.0666 \\
IPW & 0.1231 & 0.7690 & 0.5004 & 0.0178 & 0.0902 & 0.2049 & 0.0175 & 0.0844 & 0.2018 \\
SPE & 0.1407 & 0.7635 & 0.5350 & 0.0173 & 0.0702 & 0.2682 & 0.0176 & 0.0668 & 2.0344 \\
1NN & 0.1809 & 0.8452 & 0.4406 & 0.0224 & 0.0622 & 0.1114 & 0.0211 & 0.0544 & 0.1079 \\
3NN & 0.1795 & 0.8285 & 0.4339 & 0.0214 & 0.0521 & 0.0882 & 0.0208 & 0.0516 & 0.1066 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (-1.0,1.3)$} \\
\hdashline
True & 0.1812 & 0.9732 & 0.1171 & & & & & & \\
FI & 0.1290 & 0.9499 & 0.1900 & 0.0153 & 0.0179 & 0.0550 & 0.0154 & 0.0133 & 0.0422 \\
MSI & 0.1299 & 0.9516 & 0.1902 & 0.0154 & 0.0184 & 0.0552 & 0.0157 & 0.0142 & 0.0389 \\
IPW & 0.1231 & 0.9645 & 0.1294 & 0.0178 & 0.0519 & 0.1795 & 0.0175 & 0.0466 & 0.1344 \\
SPE & 0.1407 & 0.9567 & 0.1760 & 0.0173 & 0.0425 & 0.3383 & 0.0176 & 0.0402 & 3.4770 \\
1NN & 0.1809 & 0.9656 & 0.1124 & 0.0224 & 0.0218 & 0.0448 & 0.0211 & 0.0317 & 0.0710 \\
3NN & 0.1795 & 0.9604 & 0.1086 & 0.0214 & 0.0172 & 0.0338 & 0.0208 & 0.0305 & 0.0716 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (-0.5,0.7)$} \\
\hdashline
True & 0.4796 & 0.7539 & 0.4469 & & & & & & \\
FI & 0.3715 & 0.6811 & 0.5850 & 0.0270 & 0.0400 & 0.1002 & 0.0151 & 0.0145 & 0.0739 \\
MSI & 0.3723 & 0.6831 & 0.5841 & 0.0271 & 0.0409 & 0.1008 & 0.0162 & 0.0172 & 0.0666 \\
IPW & 0.3547 & 0.7114 & 0.5004 & 0.0325 & 0.0883 & 0.2049 & 0.0322 & 0.0831 & 0.2018 \\
SPE & 0.3949 & 0.6986 & 0.5350 & 0.0318 & 0.0687 & 0.2682 & 0.0331 & 0.0657 & 2.0344 \\
1NN & 0.4783 & 0.7416 & 0.4406 & 0.0361 & 0.0610 & 0.1114 & 0.0311 & 0.0551 & 0.1079 \\
3NN & 0.4756 & 0.7294 & 0.4339 & 0.0341 & 0.0499 & 0.0882 & 0.0304 & 0.0523 & 0.1066 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (-0.5,1.3)$} \\
\hdashline
True & 0.4796 & 0.8661 & 0.1171 & & & & & & \\
FI & 0.3715 & 0.8910 & 0.1900 & 0.0270 & 0.0202 & 0.0550 & 0.0151 & 0.0142 & 0.0422 \\
MSI & 0.3723 & 0.8924 & 0.1902 & 0.0271 & 0.0211 & 0.0552 & 0.0162 & 0.0165 & 0.0389 \\
IPW & 0.3547 & 0.9068 & 0.1294 & 0.0325 & 0.0535 & 0.1795 & 0.0322 & 0.0492 & 0.1344 \\
SPE & 0.3949 & 0.8918 & 0.1760 & 0.0318 & 0.0451 & 0.3383 & 0.0331 & 0.0435 & 3.4770 \\
1NN & 0.4783 & 0.8620 & 0.1124 & 0.0361 & 0.0349 & 0.0448 & 0.0311 & 0.0390 & 0.0710 \\
3NN & 0.4756 & 0.8613 & 0.1086 & 0.0341 & 0.0285 & 0.0338 & 0.0304 & 0.0371 & 0.0716 \\
\midrule
& \multicolumn{9}{c}{cut points $ = (0.7,1.3)$} \\
\hdashline
True & 0.9836 & 0.1122 & 0.1171 & & & & & & \\
FI & 0.9618 & 0.2099 & 0.1900 & 0.0122 & 0.0317 & 0.0550 & 0.0043 & 0.0132 & 0.0422 \\
MSI & 0.9613 & 0.2093 & 0.1902 & 0.0125 & 0.0320 & 0.0552 & 0.0048 & 0.0135 & 0.0389 \\
IPW & 0.9548 & 0.1955 & 0.1294 & 0.0339 & 0.0831 & 0.1795 & 0.0323 & 0.0784 & 0.1344 \\
SPE & 0.9582 & 0.1932 & 0.1760 & 0.0332 & 0.0618 & 0.3383 & 0.0320 & 0.0605 & 3.4770 \\
1NN & 0.9821 & 0.1204 & 0.1124 & 0.0144 & 0.0494 & 0.0448 & 0.0133 & 0.0487 & 0.0710 \\
3NN & 0.9804 & 0.1319 & 0.1086 & 0.0138 & 0.0404 & 0.0338 & 0.0131 & 0.0464 & 0.0716 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\label{tab:res2}
\end{table}
\section{An illustration}
We use data on epithelial ovarian cancer (EOC) extracted from the Pre-PLCO Phase II Dataset from the SPORE/Early Detection Network/Prostate, Lung, Colon, and Ovarian Cancer Ovarian Validation Study.\footnote{The study protocol and data are publicly available at the address: \url{http://edrn.nci.nih.gov/protocols/119-spore-edrn-pre-plco-ovarian-phase-ii-validation}.}
As in \cite{toduc:15}, we consider the following three classes of EOC, i.e., benign disease, early stage (I and II) and late stage (III and IV) cancer, and 12 of the 59 available biomarkers, i.e. CA125, CA153, CA72--4, Kallikrein 6 (KLK6), HE4, Chitinase (YKL40) and immune costimulatory protein--B7H4 (DD--0110), Insulin--like growth factor 2 (IGF2), Soluble mesothelin-related protein (SMRP), Spondin--2 (DD--P108), Decoy Receptor 3 (DcR3; DD--C248) and Macrophage inhibitory cytokine 1 (DD--X065). In addition, age of patients is also considered.
After cleaning for missing data, we are left with 134 patients with benign disease, 67 early stage samples and 77 late stage samples.
As a preliminary step of our analysis, we ranked the 12 markers according to the
value of the VUS, estimated on the complete data.
The observed ordering, consistent with medical knowledge, led us to select CA125 as
the test $T$ to be used to illustrate our method.
To mimic verification bias,
a subset of the complete dataset is constructed using the test $T$ and a vector $A=(A_1, A_2)$ of two
covariates, namely the marker CA153 ($A_1$) and age ($A_2$).
Reasons for using CA153 as a covariate come from the medical literature that suggests that
the concomitant measurement of CA153 with CA125 could be advantageous in the pre-operative
discrimination of benign and malignant ovarian tumors.
In this subset, $T$ and $A$ are known for all samples (patients), but the true status (benign, early stage or late stage)
is available only for some samples, which we select according to the following mechanism.
We select all samples having a value for $T$, $A_1$ and $A_2$ above their respective medians, i.e. 0.87, 0.30 and 45;
as for the others, we apply the following selection process
\[
\mathrm{Pr}(V = 1) = 0.05 + 0.35\mathrm{I}(T > 0.87) + 0.25\mathrm{I}(A_1 > 0.30) + 0.35\mathrm{I}(A_2 > 45),
\]
leading to a marginal probability of selection equal to $0.634$.
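For clarity, this verification mechanism can be sketched as follows (a Python/NumPy code sketch; purely illustrative, not the code actually used).
\begin{verbatim}
# Illustrative sketch of the verification mechanism for the EOC data.
import numpy as np

def simulate_verification(T, A1, A2, rng):
    # selection probability for samples not exceeding all three medians
    p = (0.05 + 0.35 * (T > 0.87) + 0.25 * (A1 > 0.30)
              + 0.35 * (A2 > 45))
    V = rng.binomial(1, p)
    # samples above all three medians are always verified
    V[(T > 0.87) & (A1 > 0.30) & (A2 > 45)] = 1
    return V
\end{verbatim}
Note that, for samples exceeding all three medians, the selection probability already equals $0.05 + 0.35 + 0.25 + 0.35 = 1$, so the two steps of the mechanism are consistent.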
Since the test $T$ and the covariates $A_1, A_2$ are heterogeneous with respect to their variances, the Mahalanobis distance is used for the KNN estimators. Following the discussion in Section 3.4, we use the selection rule (\ref{choice:K:1}) to find the size $K$ of the neighborhood. This leads to the choice of $K = 1$ for our data. In addition, we also employ $K = 3$ for the sake of comparison with the 1NN result, and we produce the estimate of the ROC surface based on the full data (Full estimate), displayed in Figure \ref{fg:res:full}.
\begin{figure}
\caption{Estimated ROC surface for the CA125 test, based on full data.}
\label{fg:res:full}
\end{figure}
Figure \ref{fg:res:knn} shows the 1NN and 3NN estimated ROC surfaces for the test $T$ (CA125). In this figure, we also give the 95\% ellipsoidal confidence regions (green color) for $(\mathrm{TCF}_1,\mathrm{TCF}_2,\mathrm{TCF}_3)$ at cut points $(-0.56, 2.31)$.
These regions are built using the asymptotic normality of the estimators.
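More precisely, if $\widehat{\Xi}$ denotes the estimated asymptotic covariance matrix of $\sqrt{n}\,\widehat{\mathrm{TCF}}$, with $\widehat{\mathrm{TCF}} = (\widehat{\mathrm{TCF}}_1,\widehat{\mathrm{TCF}}_2,\widehat{\mathrm{TCF}}_3)^\top$, such a region can be obtained, for example, as
\[
\left\{ x \in \mathbb{R}^3 : \; n \left(x - \widehat{\mathrm{TCF}}\right)^\top \widehat{\Xi}^{-1} \left(x - \widehat{\mathrm{TCF}}\right) \le \chi^2_{3;0.95} \right\},
\]
where $\chi^2_{3;0.95}$ is the $0.95$ quantile of the chi-squared distribution with three degrees of freedom.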
Compared with the Full estimate, the KNN bias--corrected method proposed in the paper appears to behave well, yielding reasonable estimates of the ROC surface from incomplete data.
\begin{figure}
\caption{Bias--corrected estimated ROC surfaces for CA125 test, based on incomplete data.}
\label{fg:res:knn}
\end{figure}
\section{Conclusions}
A suitable solution for reducing the effects of model misspecification in statistical inference is to resort to fully nonparametric methods. This paper proposes a nonparametric estimator of the ROC surface of a continuous-scale diagnostic test. The estimator is based on nearest-neighbor imputation and works under the MAR assumption. It represents an alternative to the (partially) parametric estimators discussed in \cite{toduc:15}. Our simulation results and the illustrative example show the usefulness of the proposal.
As in \cite{adi:15} and \cite{adi:15:auc},
a simple extension of our estimator, which could be used when categorical auxiliary variables are also available, is possible.
Without loss of generality, we suppose that a single factor $C$, with $m$ levels,
is observed together with $T$ and $A$. We also assume that
$C$ may be associated with both $D$ and $V$. In this case, the sample can be divided into $m$ strata, i.e.
$m$ groups of units sharing the same level of $C.$ Then, for example, if
the MAR assumption and first-order differentiability of
the functions $\rho_k(t,a)$ and $\pi(t,a)$ hold in each stratum,
a consistent and asymptotically normally distributed estimator of TCF$_1$ is
\begin{equation}\nonumber
\widehat{\mathrm{TCF}}^S_{1,\mathrm{KNN}}(c_1) =\frac{1}{n}\sum_{j=1}^m n_j
\widehat{\mathrm{TCF}}_{1j,\mathrm{KNN}}^{cond}(c_1),
\end{equation}
where $n_j$ denotes the size of the $j$-th stratum and $\widehat{\mathrm{TCF}}^{cond}_{1j,\mathrm{KNN}}(c_1) $ denotes the KNN estimator of the conditional TCF$_1$, i.e., the KNN estimator in (\ref{est:knn3}) obtained from the patients in the $j$-th stratum.
Of course, we must assume that, for every $j$, the ratio $n_j/n$ has a finite and nonzero limit as
$n$ goes to infinity.
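A minimal sketch of this stratified estimator is the following (a Python/NumPy code sketch; \texttt{knn\_tcf1} is a placeholder for any routine returning the within-stratum KNN estimate in (\ref{est:knn3}), and the data structures are purely illustrative).
\begin{verbatim}
# Illustrative sketch of the stratified KNN estimator of TCF_1.
import numpy as np

def stratified_tcf1(c1, strata, knn_tcf1):
    # strata: list of per-stratum datasets; knn_tcf1(stratum, c1) -> estimate
    n_j = np.array([len(s["T"]) for s in strata], dtype=float)
    est = np.array([knn_tcf1(s, c1) for s in strata])
    # weighted average with weights n_j / n
    return np.sum(n_j * est) / np.sum(n_j)
\end{verbatim}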
\appendix
\section{Appendix 1}
According to the proof of Theorem \ref{thr:knn:2}, we have
\begin{equation}
\left(\hat{\beta}_{12,\mathrm{KNN}} - \hat{\beta}_{22,\mathrm{KNN}}\right) - \left(\beta_{12} - \beta_{22}\right) \simeq \left(S_{12} - S_{22}\right) + \left(R_{12} - R_{22}\right) + \left(W_{12} - W_{22}\right) + o_{p}(n^{-1/2}).
\label{eq:rem:1}
\end{equation}
Here, we have
\begin{eqnarray}
S_{12} - S_{22} &=& \frac{1}{n}\sum_{i=1}^{n} V_i \mathrm{I}(c_1 \le T_i < c_2)\left(D_{2i} - \rho_{2i}\right) \nonumber \\
R_{12} - R_{22} &=& \frac{1}{n}\sum_{i=1}^{n}\left[\mathrm{I}(c_1 \le T_i < c_2)\rho_{2i} - (\beta_{12} - \beta_{22}) \right] \nonumber \\
W_{12} - W_{22} &=& \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(c_1 \le T_i < c_2)(1-V_i)\left[\frac{1}{K}\sum_{l=1}^{K}\left(V_{i(l)}D_{2i(l)} - \rho_{2i(l)}\right)\right] \nonumber.
\end{eqnarray}
In light of this, the quantities $S_{12} - S_{22}$, $R_{12} - R_{22}$ and $W_{12} - W_{22}$ play, in essence, the same role as the quantities $S_{jk}$, $R_{jk}$ and $W_{jk}$, respectively. Therefore, the quantities on the right-hand side of equation (\ref{eq:rem:1}) have approximately normal distributions with mean $0$ and variances
\begin{eqnarray}
\mathbb{V}\mathrm{ar} \left(\sqrt{n}(S_{12} - S_{22})\right) &=& \mathbb{E} \left\{\pi(T,A)\delta^2(T,A) \right\}, \nonumber \\
\mathbb{V}\mathrm{ar} \left(\sqrt{n} (R_{12} - R_{22}) \right) &=& \mathbb{V}\mathrm{ar} \left[\mathrm{I}(c_1 \le T_i < c_2)\rho_{2}(T,A)\right], \nonumber \\
\mathbb{V}\mathrm{ar} \left(\sqrt{n} (W_{12} - W_{22}) \right) &=& \frac{1}{K}\mathbb{E} \left[(1 - \pi(T,A))\delta^2(T,A)\right] + \mathbb{E} \left[\frac{(1-\pi(T,A))^2\delta^2(T,A)}{\pi(T,A)}\right] .\nonumber
\end{eqnarray}
where $\delta^2(T,A)$ is the conditional variance of $\mathrm{I}(c_1 \le T < c_2, D_{2} = 1)$ given $(T,A)$. Then, we get
\[
\sqrt{n}\left[\left(\hat{\beta}_{12,\mathrm{KNN}} - \hat{\beta}_{22,\mathrm{KNN}}\right) - \left(\beta_{12} - \beta_{22}\right)\right] \stackrel{d}{\to} \mathcal{N}(0,\lambda^2).
\]
To obtain $\lambda^2$, we notice that the quantities $R_{12} - R_{22}$ and $(S_{12} - S_{22}) + (W_{12} - W_{22})$ are uncorrelated and that the asymptotic covariance of $S_{12} - S_{22}$ and $W_{12} - W_{22}$ equals $\mathbb{E} \left[(1 - \pi(T,A))\delta^2(T,A)\right]$. Combining this covariance with the above variances, the desired asymptotic variance $\lambda^2$ is approximately
\begin{equation}
\lambda^2 = \left\{(\beta_{12} - \beta_{22})\left[1 - (\beta_{12} - \beta_{22})\right] + \omega^2_{12} - \omega^2_{22}\right\}.
\label{eq:rem:2}
\end{equation}
\section{Appendix 2}
Here, we focus on the elements $\xi_{12}$, $\xi_{13}$ and $\xi_{23}$ of the covariance matrix $\Xi$.
We can write
\begin{eqnarray}
\xi_{12} &=& - \frac{1}{\theta_1 \theta_2} \left(\sigma_{1112} - \sigma_{1122}\right) + \frac{\beta_{11}}{\theta_1^2 \theta_2}\left(\sigma_{112} - \sigma_{122}\right) - \frac{\beta_{12} - \beta_{22}}{\theta_2^2} \left(\frac{\beta_{11}}{\theta_1^2} \sigma_{12}^* - \frac{\sigma_{211}}{\theta_1}\right),
\label{cov:12} \\
\xi_{13} &=& \frac{1}{1 - \theta_1 - \theta_2}\left(\frac{\beta_{11}}{\theta_1^2}\sigma_{123} - \frac{\sigma_{1123}}{\theta_1}\right) \nonumber \\
&& + \: \frac{\beta_{23}}{\theta_1(1 - \theta_1 - \theta_2)^2}\left[
\frac{\beta_{11}}{\theta_1} \left(\sigma_1^2 + \sigma_{12}^*\right) - \left(\sigma_{111} + \sigma_{211}\right)
\right],
\label{cov:13}
\end{eqnarray}
and
\begin{eqnarray}
\xi_{23} &=& \frac{1}{\theta_2(1 - \theta_1 - \theta_2)}\left[\left(\sigma_{1223} - \sigma_{2223}\right) - \frac{\beta_{12} - \beta_{22}}{\theta_2}\sigma_{223}\right] \nonumber \\
&& + \: \frac{\beta_{23}}{\theta_2(1 - \theta_1 - \theta_2)^2}\left[\left(\sigma_{112} - \sigma_{122} + \sigma_{212} - \sigma_{222}\right) - \frac{\beta_{12} - \beta_{22}}{\theta_2}\left(\sigma_2^2 + \sigma_{12}^*\right)\right].
\label{cov:23}
\end{eqnarray}
Recall that
\begin{eqnarray}
\hat{\theta}_{k,\mathrm{KNN}} - \theta_{k} &=& \frac{1}{n}\sum_{i=1}^{n}\left[V_iD_{ki} + (1-V_i)\rho_{ki}\right] + \frac{1}{n}\sum_{i=1}^{n} (1-V_i)(\hat{\rho}_{ki,K} - \rho_{ki}) - \theta_{k} \nonumber\\
&=& \frac{1}{n}\sum_{i=1}^{n}V_i\left[D_{ki} - \rho_{ki}\right] + \frac{1}{n}\sum_{i=1}^{n}\left[\rho_{ki} - \theta_{k}\right]\nonumber \\
&& + \: \frac{1}{n}\sum_{i=1}^{n}\left[\frac{1}{K}\sum_{l=1}^{K}\left(V_{i(l)}D_{ki(l)} - \rho_{ki(l)}\right)\right] + o_p\left(n^{-1/2}\right) \nonumber \\
&=& S_{k} + R_{k} + W_{k} + o_p\left(n^{-1/2}\right); \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\hat{\beta}_{jk,\mathrm{KNN}} - \beta_{jk} &=& \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j)\left[V_iD_{ki} + (1-V_i)\rho_{ki}\right] + \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j) (1-V_i)(\hat{\rho}_{ki,K} - \rho_{ki}) - \beta_{jk} \nonumber\\
&=& \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j)V_i\left[D_{ki} - \rho_{ki}\right] + \frac{1}{n}\sum_{i=1}^{n}\left[\mathrm{I}(T_i \ge c_j)\rho_{ki} - \beta_{jk}\right]\nonumber \\
&& + \: \frac{1}{n}\sum_{i=1}^{n}\mathrm{I}(T_i \ge c_j)(1-V_i)\left[\frac{1}{K}\sum_{l=1}^{K}\left(V_{i(l)}D_{ki(l)} - \rho_{ki(l)}\right)\right] + o_p\left(n^{-1/2}\right) \nonumber \\
&=& S_{jk} + R_{jk} + W_{jk} + o_p\left(n^{-1/2}\right). \nonumber
\end{eqnarray}
Then, we restate some terms that appear in expressions (\ref{cov:12})--(\ref{cov:23}).
First, we consider the term $\sigma_{1112} - \sigma_{1122}$. We have
\begin{eqnarray}
\sigma_{1112} - \sigma_{1122} &=& \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\beta}_{11,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{12,\mathrm{KNN}}) - \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\beta}_{11,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{22,\mathrm{KNN}}) \nonumber \\
&=& \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\beta}_{11,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{12,\mathrm{KNN}} - \sqrt{n}\hat{\beta}_{22,\mathrm{KNN}}) \nonumber \\
&=& \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}(S_{11} + R_{11} + W_{11}),\sqrt{n}(S_{12} - S_{22}) + \sqrt{n}(R_{12} - R_{22}) + \sqrt{n}(W_{12} - W_{22})\right) \nonumber \\
&=& \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}S_{11},\sqrt{n}(S_{12} - S_{22})\right) + \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}S_{11},\sqrt{n}(W_{12} - W_{22})\right) \nonumber\\
&& + \: \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}R_{11},\sqrt{n}(R_{12} - R_{22})\right) + \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}W_{11},\sqrt{n}(S_{12} - S_{22})\right) \nonumber \\
&& + \: \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}W_{11},\sqrt{n}(W_{12} - W_{22})\right). \nonumber
\end{eqnarray}
This result follows from the fact that
$\sqrt{n}R_{11}$ is uncorrelated with $\sqrt{n}(S_{12} - S_{22}) + \sqrt{n}(W_{12} - W_{22})$, and $\sqrt{n}(S_{11} + W_{11})$ is uncorrelated with $\sqrt{n}(R_{12} - R_{22})$ (see also \cite{cheng:94}). By arguments similar to those used in
\cite{ning:12}, we also obtain
\begin{eqnarray}
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}S_{11},\sqrt{n}(S_{12} - S_{22})\right) &=& \mathbb{E} \left\{\pi(T,A) \mathbb{C}\mathrm{ov}(\mathrm{I}(T \ge c_1)D_1,\mathrm{I}(c_1 \le T < c_2)D_2|T,A )\right\} \nonumber \\
&=& \mathbb{E} \left\{\pi(T,A) \mathrm{I}(c_1 \le T < c_2)\mathbb{C}\mathrm{ov}(D_1,D_2|T,A )\right\} \nonumber \\
&=& -\mathbb{E} \left\{\pi(T,A) \mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\} \nonumber.
\end{eqnarray}
Similarly, we have that
\begin{eqnarray}
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}S_{11},\sqrt{n}(W_{12} - W_{22})\right) &=& -\mathbb{E}\left\{[1-\pi(T,A)]\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\}, \nonumber \\
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}R_{11},\sqrt{n}(R_{12} - R_{22})\right) &=& - \beta_{11}(\beta_{12} - \beta_{22}) + \mathbb{E}\left\{\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\}, \nonumber \\
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}W_{11},\sqrt{n}(S_{12} - S_{22})\right) &=& -\mathbb{E}\left\{[1-\pi(T,A)]\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\}, \nonumber \\
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}W_{11},\sqrt{n}(W_{12} - W_{22})\right) &=& -\frac{1}{K}\mathbb{E}\left\{[1-\pi(T,A)]\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\} \nonumber\\
&& - \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)}{\pi(T,A)}\right\} . \nonumber
\end{eqnarray}
This leads to
\begin{eqnarray}
\sigma_{1112} - \sigma_{1122} &=& -\Bigg[
\psi^2_{1212} + \beta_{11}(\beta_{12} - \beta_{22}) \Bigg],
\label{sig_1112 - sig_1122}
\end{eqnarray}
where
\begin{eqnarray}
\psi^2_{1212} &=&
\left(1 + \frac{1}{K}\right) \mathbb{E} \left\{[1-\pi(T,A)]\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A) \right\} \nonumber \\
&& + \: \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)}{\pi(T,A)}\right\}. \nonumber
\end{eqnarray}
Second, we consider $\sigma_{112} - \sigma_{122}$. In this case, we have
\begin{eqnarray}
\sigma_{112} - \sigma_{122} &=& \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{12,\mathrm{KNN}}) - \mathrm{as}\mathbb{C}\mathrm{ov}(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{22,\mathrm{KNN}}) \nonumber \\
&=& \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}},\sqrt{n}(\hat{\beta}_{12,\mathrm{KNN}} - \hat{\beta}_{22,\mathrm{KNN}})\right) \nonumber \\
&=& \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}(S_{1} + R_{1} + W_{1}),\sqrt{n}(S_{12} - S_{22}) + \sqrt{n}(R_{12} - R_{22}) + \sqrt{n}(W_{12} - W_{22})\right) \nonumber \\
&=& \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}S_{1},\sqrt{n}(S_{12} - S_{22})\right) + \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}S_{1},\sqrt{n}(W_{12} - W_{22})\right) \nonumber\\
&& + \: \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}R_{1},\sqrt{n}(R_{12} - R_{22})\right) + \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}W_{1},\sqrt{n}(S_{12} - S_{22})\right) \nonumber \\
&& + \: \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}W_{1},\sqrt{n}(W_{12} - W_{22})\right). \nonumber
\end{eqnarray}
We obtain
\begin{eqnarray}
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}S_{1},\sqrt{n}(S_{12} - S_{22})\right) &=& -\mathbb{E} \left\{\pi(T,A) \mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\}, \nonumber \\
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}S_{1},\sqrt{n}(W_{12} - W_{22})\right) &=& -\mathbb{E}\left\{[1-\pi(T,A)]\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\}, \nonumber \\
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}R_{1},\sqrt{n}(R_{12} - R_{22})\right) &=& - \theta_{1}(\beta_{12} - \beta_{22}) + \mathbb{E}\left\{\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\}, \nonumber \\
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}W_{1},\sqrt{n}(S_{12} - S_{22})\right) &=& -\mathbb{E}\left\{[1-\pi(T,A)]\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\}, \nonumber \\
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}W_{1},\sqrt{n}(W_{12} - W_{22})\right) &=& -\frac{1}{K}\mathbb{E}\left\{[1-\pi(T,A)]\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)\right\} \nonumber\\
&& - \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\mathrm{I}(c_1 \le T < c_2)\rho_{1}(T,A)\rho_{2}(T,A)}{\pi(T,A)}\right\} , \nonumber
\end{eqnarray}
and then
\begin{eqnarray}
\sigma_{112} - \sigma_{122} &=& - [
\psi^2_{1212} + \theta_{1}(\beta_{12} - \beta_{22}) ].
\label{sig_112 - sig_122}
\end{eqnarray}\\
Similarly, it is straightforward to obtain
\begin{eqnarray}
\sigma_{211} &=& -[
\psi^2_{112}+ \theta_{2}\beta_{11} ]
\label{sig_211}
\end{eqnarray}
and
\begin{eqnarray}
\sigma_{123} &=& - [
\psi^2_{213}+ \theta_{1}\beta_{23}],
\label{sig_123}
\end{eqnarray}
with
\begin{eqnarray}
\psi^2_{112}&=&
\left(1 + \frac{1}{K}\right) \mathbb{E} \left\{[1-\pi(T,A)]\mathrm{I}(T \ge c_1)\rho_{1}(T,A)\rho_{2}(T,A) \right\} \nonumber \\
&& + \: \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\mathrm{I}(T \ge c_1)\rho_{1}(T,A)\rho_{2}(T,A)}{\pi(T,A)}\right\} \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\psi^2_{213}&=&
\left(1 + \frac{1}{K}\right) \mathbb{E} \left\{[1-\pi(T,A)]\mathrm{I}(T \ge c_2)\rho_{1}(T,A)\rho_{3}(T,A) \right\} \nonumber \\
&& + \: \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\mathrm{I}(T \ge c_2)\rho_{1}(T,A)\rho_{3}(T,A)}{\pi(T,A)}\right\}. \nonumber
\end{eqnarray}
The covariance between $\sqrt{n}\hat{\theta}_{1,\mathrm{KNN}}$ and $\sqrt{n}\hat{\theta}_{2,\mathrm{KNN}}$ is computed analogously, i.e.,
\begin{eqnarray}
\sigma_{12}^* &=& -[ \theta_{1}\theta_{2} + \psi^2_{12}
],
\label{sig_12*}
\end{eqnarray}
where
\begin{eqnarray}
\psi^2_{12}&=&
\left(1 + \frac{1}{K}\right) \mathbb{E} \left\{[1-\pi(T,A)]\rho_{1}(T,A)\rho_{2}(T,A) \right\} \nonumber \\
&& + \: \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\rho_{1}(T,A)\rho_{2}(T,A)}{\pi(T,A)}\right\}. \nonumber
\end{eqnarray}
By plugging results (\ref{sig_1112 - sig_1122}), (\ref{sig_112 - sig_122}), (\ref{sig_211}) and (\ref{sig_12*}) into (\ref{cov:12}), we can obtain a suitable expression for $\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}\widehat{\mathrm{TCF}}_{1,\mathrm{KNN}} (c_1),\sqrt{n}\widehat{\mathrm{TCF}}_{2,\mathrm{KNN}} (c_1,c_2)\right)$, which depends on easily estimable quantities.
Clearly, a similar approach can be used to get suitable expressions for $\xi_{13}$ and $\xi_{23}$ too.
In particular, the estimable version of $\xi_{13}$ can be obtained by using suitable expressions for $\sigma_{123}$, $\sigma_{1123}$
and $\sigma_{111} + \sigma_{211}$.
The quantity $\sigma_{123}$ is already computed in (\ref{sig_123}), and the formula for $\sigma_{1123}$ can be obtained as
\begin{eqnarray}
\sigma_{1123} &=& -[
\psi^2_{213} + \beta_{11}\beta_{23}].
\nonumber \label{sig_1123}
\end{eqnarray}
To compute $\sigma_{111} + \sigma_{211}$, we notice that
\begin{eqnarray}
\mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}\hat{\theta}_{3,\mathrm{KNN}},\sqrt{n}\hat{\beta}_{11,\mathrm{KNN}}\right) &=& \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n} - \sqrt{n}(\hat{\theta}_{1,\mathrm{KNN}} + \hat{\theta}_{2,\mathrm{KNN}}),\sqrt{n}\hat{\beta}_{11,\mathrm{KNN}}\right)\nonumber \\
&=& - \mathrm{as}\mathbb{C}\mathrm{ov}\left(\sqrt{n}(\hat{\theta}_{1,\mathrm{KNN}} + \hat{\theta}_{2,\mathrm{KNN}}),\sqrt{n}\hat{\beta}_{11,\mathrm{KNN}}\right)\nonumber.
\end{eqnarray}
This leads to $\sigma_{111} + \sigma_{211} = -\sigma_{311}$. Analogously to (\ref{sig_211}), we have that
\begin{eqnarray}
\sigma_{311} &=& - [
\psi^2_{113} + \theta_{3}\beta_{11}],
\nonumber \label{sig_311}
\end{eqnarray}
where
\begin{eqnarray}
\psi^2_{113}&=&
\left(1 + \frac{1}{K}\right) \mathbb{E} \left\{[1-\pi(T,A)]\mathrm{I}(T \ge c_1)\rho_{1}(T,A)\rho_{3}(T,A) \right\} \nonumber \\
&& + \: \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\mathrm{I}(T \ge c_1)\rho_{1}(T,A)\rho_{3}(T,A)}{\pi(T,A)}\right\}. \nonumber
\end{eqnarray}
For the last term $\xi_{23}$, some further calculations are needed. First, the quantity $\sigma_{1223} - \sigma_{2223}$ is obtained in the same way as $\sigma_{1112} - \sigma_{1122}$.
We have
\[
\sigma_{1223} - \sigma_{2223}=-\beta_{23}(\beta_{12} - \beta_{22}),
\]
because $\mathrm{I}(c_1 \le T < c_2) \mathrm{I}(T \ge c_2) = 0$. Second, the term $\sigma_{223}$ is obtained as
\begin{eqnarray}
\sigma_{223} &=& - [
\psi^2_{223}+ \theta_{2}\beta_{23}],
\nonumber \label{sig_223}
\end{eqnarray}
where
\begin{eqnarray}
\psi^2_{223}&=&\left(1 + \frac{1}{K}\right) \mathbb{E} \left\{[1-\pi(T,A)]\mathrm{I}(T \ge c_2)\rho_{2}(T,A)\rho_{3}(T,A) \right\} \nonumber \\
&& + \: \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\mathrm{I}(T \ge c_2)\rho_{2}(T,A)\rho_{3}(T,A)}{\pi(T,A)}\right\} . \nonumber
\end{eqnarray}
Moreover, it is straightforward to show that
\[
-(\sigma_{312} - \sigma_{322}) = \sigma_{112} - \sigma_{122} + \sigma_{212} - \sigma_{222},
\]
and that
\begin{eqnarray}
\sigma_{312} - \sigma_{322} &=& -[
\psi^2_{1223} + \theta_{3}(\beta_{12} - \beta_{22}) ],
\nonumber \label{sig_312 - sig_322}
\end{eqnarray}
with
\begin{eqnarray}
\psi^2_{1223}&=&\left(1 + \frac{1}{K}\right) \mathbb{E} \left\{[1-\pi(T,A)]\mathrm{I}(c_1 \le T < c_2)\rho_{2}(T,A)\rho_{3}(T,A) \right\} \nonumber \\
&& + \: \mathbb{E}\left\{\frac{[1-\pi(T,A)]^2\mathrm{I}(c_1 \le T < c_2)\rho_{2}(T,A)\rho_{3}(T,A)}{\pi(T,A)}\right\}. \nonumber
\end{eqnarray}
\end{document} |
\begin{document}
\title{The $\mathcal L_B$-cohomology on compact torsion-free $\mathrm{G}_2$~manifolds \\ and an application to `almost' formality}
\author{Ki Fung Chan \\ {\it Chinese University of Hong Kong} \\\tt{[email protected]} \and Spiro Karigiannis \\ {\it Department of Pure Mathematics, University of Waterloo} \\ \tt{[email protected]} \and Chi Cheuk Tsang \\ {\it Chinese University of Hong Kong} \\ \tt{[email protected]} }
\date{January 18, 2018}
\maketitle
\begin{abstract}
We study a cohomology theory $H^{\bullet}_{\varphi}$, which we call the $\mathcal L_B$-cohomology, on compact torsion-free $\mathrm{G}_2$~manifolds. We show that $H^k_{\varphi} \cong H^k_{\mathrm{dR}}$ for $k \neq 3, 4$, but that $H^k_{\varphi}$ is infinite-dimensional for $k = 3,4$. Nevertheless there is a canonical injection $H^k_{\mathrm{dR}} \to H^k_{\varphi}$. The $\mathcal L_B$-cohomology also satisfies a Poincar\'e duality induced by the Hodge star. The establishment of these results requires a delicate analysis of the interplay between the exterior derivative $\mathrm{d}$ and the derivation $\mathcal L_B$, and uses both Hodge theory and the special properties of $\mathrm{G}_2$-structures in an essential way. As an application of our results, we prove that compact torsion-free $\mathrm{G}_2$~manifolds are `almost formal' in the sense that most of the Massey triple products necessarily must vanish.
\end{abstract}
\tableofcontents
\section{Introduction} \label{sec:intro}
Let $(M, \varphi)$ be a manifold with $\mathrm{G}_2$-structure. Here $\varphi$ is a smooth $3$-form on $M$ that is \emph{nondegenerate} in a certain sense that determines a Riemannian metric $g$ and a volume form $\mathsf{vol}$, hence a dual $4$-form $\psi$. We say that $(M, \varphi)$ is a \emph{torsion-free} $\mathrm{G}_2$~manifold if $\nabla \varphi = 0$. Note that this implies that $\nabla \psi = \mathrm{d} \varphi = \mathrm{d} \psi = 0$ as well. In fact, it is now a classical result~\cite{FG} that the pair of conditions $\mathrm{d} \varphi = \mathrm{d} \psi = 0$ are actually equivalent to $\nabla \varphi = 0$.
The forms $\varphi$ and $\psi$ can be used to construct a vector-valued $2$-form $B$ and a vector-valued $3$-form $K$, respectively, by raising an index using the metric. These vector-valued forms were studied in detail by Kawai--L\^e--Schwachh\"ofer in~\cite{KLS1} in the context of the Fr\"olicher--Nijenhuis bracket.
These vector-valued forms $B$ and $K$ induce \emph{derivations} $\mathcal L_B$ and $\mathcal L_K$ on the space $\Omega^{\bullet}$ of forms on $M$, of degree $2$ and $3$, respectively. From these derivations we can define \emph{cohomology theories}. We call these the $\mathcal{L}_B$-cohomology, denoted $H^{\bullet}_{\varphi}$, and the $\mathcal{L}_K$-cohomology, denoted $H^{\bullet}_{\psi}$. When $M$ is compact, the $\mathcal L_K$-cohomology was studied extensively by Kawai--L\^e--Schwachh\"ofer in~\cite{KLS2}. In the present paper we study in detail the $\mathcal L_B$-cohomology when $M$ is compact. Specifically, we compute $H^k_{\varphi}$ for all $k$. The results are summarized in Theorem~\ref{thm:Hph}, which we restate here:
{\bf Theorem~\ref{thm:Hph}.} {\em The following relations hold.
\begin{itemize} \setlength\itemsep{-1mm}
\item $H^k_{\varphi} \cong H^k_{dR}$ for $k=0,1,2,5,6,7$.
\item $H^k_{\varphi}$ is infinite-dimensional for $k = 3,4$.
\item There is a canonical injection $\mathcal{H}^k \hookrightarrow H^k_{\varphi}$ for all $k$.
\item The Hodge star induces isomorphisms $\ast: H^k_{\varphi} \cong H^{7-k}_{\varphi}$.
\end{itemize}
}
The proof involves a very delicate analysis of the interplay between the exterior derivative $\mathrm{d}$ and the derivation induced by $B$, and uses Hodge theory in an essential way.
As an application of our results, we study the question of \emph{formality} of compact torsion-free $\mathrm{G}_2$~manifolds. This is a longstanding open problem. It has been studied by many authors, including Cavalcanti~\cite{Cavalcanti-ddc}. In particular, the paper~\cite{Verbitsky} by Verbitsky has very close connections to the present paper. What is called $\mathrm{d}_c$ in~\cite{Verbitsky} is $\mathcal L_B$ in the present paper. Verbitsky's paper contains many excellent ideas. Unfortunately, there are some gaps in several of the proofs in~\cite{Verbitsky}. Most important for us, there is a gap in the proof of~\cite[Proposition 2.19]{Verbitsky}, which is also used to prove~\cite[Proposition 2.20]{Verbitsky}, among several other results in~\cite{Verbitsky}. We give a different proof of this result, which is our Proposition~\ref{prop:quasi-isom}. We then use this to prove our Theorem~\ref{thm:almost-formal}, which essentially says that a compact torsion-free $\mathrm{G}_2$~manifold is `almost formal' in the sense that its de Rham complex is equivalent to a differential graded algebra with all differentials trivial except one.
A consequence of our Theorem~\ref{thm:almost-formal} is that almost all of the Massey triple products vanish on a compact torsion-free $\mathrm{G}_2$~manifold. This gives a new topological obstruction to the existence of torsion-free $\mathrm{G}_2$-structures on compact manifolds. The precise statement is the following:
{\bf Corollary~\ref{cor:Massey}.} {\em Let $M$ be a compact torsion-free $\mathrm{G}_2$~manifold. Consider cohomology classes $[\alpha]$, $[\beta]$, and $[\gamma] \in H^{\bullet}_{\mathrm{dR}}$. If the Massey triple product $\langle [\alpha], [\beta], [\gamma] \rangle$ is defined and we have $|\alpha| + |\beta| \neq 4$ and $|\beta| + |\gamma| \neq 4$, then $\langle [\alpha], [\beta], [\gamma] \rangle = 0$.
}
We also prove the following stronger result in the case of full holonomy $\mathrm{G}_2$ (the ``irreducible'' case):
{\bf Theorem~\ref{thm:irreducibleMassey}.} {\em Let $M$ be a compact torsion-free $\mathrm{G}_2$~manifold with full holonomy $\mathrm{G}_2$, and consider cohomology classes $[\alpha]$, $[\beta]$, and $[\gamma] \in H^{\bullet}_{\mathrm{dR}}$. If the Massey triple product $\langle [\alpha], [\beta], [\gamma] \rangle$ is defined, then $\langle [\alpha], [\beta], [\gamma] \rangle = 0$ except possibly in the case when $|\alpha| = |\beta| = |\gamma| = 2$.
}
The Massey triple products on a compact torsion-free $\mathrm{G}_2$~manifold are not discussed in~\cite{Verbitsky}.
{\bf Organization of the paper.} In the rest of this section, we discuss the domains of validity of the various results in this paper in Remark~\ref{rmk:when-torsion-free}, then we consider notation and conventions, and conclude with the statement of a trivial result from linear algebra that we use frequently.
Section~\ref{sec:main} is the heart of the paper, where we establish the various relations between the derivations $\mathrm{d}$, $\iota_B$, $\iota_K$, $\mathcal L_B$, and $\mathcal L_K$. We begin with a brief summary of known facts about $\mathrm{G}_2$-structures that we will need in Section~\ref{sec:forms}. In Section~\ref{sec:dLap} we study the operators $\mathrm{d}$ and $\Delta$ in detail. Some of the key results are Proposition~\ref{prop:dfigure}, which establishes Figure~\ref{figure:d}, and Corollary~\ref{cor:d-relations} and Proposition~\ref{prop:Laplacian} which establish second order differential identities. These have appeared before (without proof) in a paper of Bryant~\cite[Section 5.2]{Bryant}. But see Remark~\ref{rmk:Bryant}. A new and crucial result in Section~\ref{sec:dLap} is Theorem~\ref{thm:harmonic1} which relates the kernels of various operators on $\Omega^1$. In Section~\ref{sec:LBLK} we introduce the derivations $\iota_B$, $\iota_K$, $\mathcal L_B$, and $\mathcal L_K$ and study their basic properties. One of the highlights is Corollary~\ref{cor:LBLKfigures}, which establishes Figures~\ref{figure:LB} and~\ref{figure:LK}.
In Section~\ref{sec:cohom} we study and compute the $\mathcal L_B$-cohomology $H^{\bullet}_{\varphi}$ of a compact torsion-free $\mathrm{G}_2$~manifold. We use heavily both the results of Section~\ref{sec:main} and Hodge theory. This section culminates with the proof of Theorem~\ref{thm:Hph}. Then in Section~\ref{sec:formality} we apply the results of Section~\ref{sec:cohom} to study the Massey triple products of compact torsion-free $\mathrm{G}_2$~manifolds.
\begin{rmk} \label{rmk:when-torsion-free}
We summarize here the domains of validity of the various sections of the paper.
\begin{itemize} \setlength\itemsep{-1mm}
\item All results of Section~\ref{sec:forms} except the last one (Proposition~\ref{prop:Liederiv}), are valid for any $\mathrm{G}_2$-structure.
\item Proposition~\ref{prop:Liederiv} as well as \emph{the entirety of Section~\ref{sec:dLap}}, assume that $(M, \varphi)$ is torsion-free.
\item In Section~\ref{sec:LBLK}, the results that only involve the algebraic derivations $\iota_B$ and $\iota_K$, up to and including Proposition~\ref{prop:iotaKfigure}, are valid for any $\mathrm{G}_2$-structure.
\item The rest of Section~\ref{sec:LBLK}, beginning with Corollary~\ref{cor:LBLKfigures}, uses the results of Section~\ref{sec:dLap} heavily and is only valid in the torsion-free setting.
\item The cohomology theories introduced in Section~\ref{sec:cohom-defn} make sense on any torsion-free $\mathrm{G}_2$~manifold. However, beginning in Section~\ref{sec:computeHph0123} and for the rest of the paper, we assume that $(M, \varphi)$ is a \emph{compact} torsion-free $\mathrm{G}_2$~manifold, as we use Hodge theory throughout.
\end{itemize}
\end{rmk}
{\bf Notation and conventions.} We mostly follow the notation and conventions of~\cite{K-flows}, and we point out explicitly whenever our notation differs significantly. Let $(M, g)$ be an oriented smooth Riemannian $7$-manifold. Let $\{ e_1 , \ldots, e_7 \}$ be a local frame for $TM$ with dual coframe $\{ e^1, \ldots, e^7 \}$. It can be a local coordinate frame $\{ \frac{\partial}{\partial x^1}, \ldots, \frac{\partial}{\partial x^7} \}$ with dual coframe $\{ \mathrm{d} x^1, \ldots, \mathrm{d} x^7 \}$, but this is not necessary. Note that the metric dual $1$-form of $e_i$ is $(e_i)^{\flat} = g_{ij} e^j$.
We employ the Einstein summation convention throughout. We write $\Lambda^k$ for the bundle $\Lambda^k (T^* M)$ and $\Omega^k$ for its space of smooth sections $\Gamma (\Lambda^k (T^* M))$. Then $\Lambda^{\bullet} = \oplus_{k=0}^n \Lambda^k$ is the exterior algebra of $T^*M$ and $\Omega^{\bullet} = \oplus_{k=0}^n \Omega^k$ is the space of smooth differential forms on $M$. Similarly, we use $S^2 (T^* M)$ to denote the second symmetric power of $T^* M$, and $\mathcal{S} = \Gamma(S^2 (T^* M))$ to denote the space of smooth symmetric $2$-tensors on $M$.
The Levi-Civita covariant derivative of $g$ is denoted by $\nabla$. Let $\nabla_p = \nabla_{e_p}$. The exterior derivative $\mathrm{d} \alpha$ of a $k$-form $\alpha$ can be
written in terms of $\nabla$ as
\begin{equation} \label{eq:dd}
\begin{aligned}
\mathrm{d} \alpha & = e^p \wedge \nabla_p \alpha, \\
(\mathrm{d} \alpha)_{i_1 i_2 \cdots i_{k+1}} & = \sum_{j=1}^{k+1} (-1)^{j-1} \nabla_{i_j} \alpha_{i_1 \cdots \hat{i_j} \cdots i_{k+1}}.
\end{aligned}
\end{equation}
The adjoint $\mathrm{d}^{\ast}$ of $\mathrm{d}$ with respect to $g$ satisfies $\mathrm{d}^{\ast} = (-1)^k \ast \mathrm{d} \ast$ on $\Omega^k$. It can be written in terms of $\nabla$ as
\begin{equation} \label{eq:ds}
\begin{aligned}
\mathrm{d}^{\ast} \alpha & = - g^{pq} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \nabla_q \alpha, \\
(\mathrm{d}^{\ast} \alpha)_{i_1 \cdots i_{k-1}} & = - g^{pq} \nabla_p \alpha_{q i_1 \cdots i_{k-1}}.
\end{aligned}
\end{equation}
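For example, for a $1$-form $\alpha = \alpha_i e^i$, equations~(\ref{eq:dd}) and~(\ref{eq:ds}) give
\[
(\mathrm{d} \alpha)_{ij} = \nabla_i \alpha_j - \nabla_j \alpha_i, \qquad \mathrm{d}^{\ast} \alpha = - g^{pq} \nabla_p \alpha_q,
\]
which recover the familiar expressions for the exterior derivative and (the negative of) the divergence of a $1$-form.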
An element $h \in \mathcal{S}$ can be decomposed as $h = \tfrac{\operatorname{Tr}_g h}{7} g + h^0$, where $\operatorname{Tr}_g h = g^{ij} h_{ij}$ is the trace, and $h^0$ is the \emph{trace-free} component of $h$, which is orthogonal to $g$. We use $S^2_0 (T^* M)$ to denote the bundle whose sections $\mathcal{S}_0 = \Gamma(S^2_0 (T^* M))$ are the trace-free symmetric $2$-tensors. Finally, if $X$ is a vector field on $M$, we denote by $X^{\flat}$ the $1$-form metric dual to $X$ with respect to the metric $g$. Sometimes we abuse notation and write $X^{\flat}$ as simply $X$ when there is no danger of confusion.
We write $H^k_{\mathrm{dR}}$ for the $k^{\text{th}}$ de Rham cohomology over $\mathbb R$ and $\mathcal H^k$ for the space of harmonic $k$-forms. If $[\alpha]$ is a cohomology class, then $|\alpha|$ denotes the degree of any of its representative differential forms. That is, if $[\alpha] \in H^k_{\mathrm{dR}}$, then $|\alpha| = k$.
We use $C^{\bullet}$ to denote a $\mathbb Z$-graded complex of real vector spaces. A degree $k$ map $P$ of the complex $C^{\bullet}$ maps $C^i$ into $C^{i+k}$, and we write
\begin{equation} \label{eq:complexes}
\begin{aligned}
(\ker P)^i & = \ker (P : C^i \to C^{i+k}), \\
(\operatorname{im} P)^i & = \operatorname{im} (P : C^{i-k} \to C^i).
\end{aligned}
\end{equation}
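For example, taking $C^{\bullet} = \Omega^{\bullet}$ and $P = \mathrm{d}$, which has degree $1$, the space $(\ker \mathrm{d})^k$ consists of the closed $k$-forms and $(\operatorname{im} \mathrm{d})^k$ consists of the exact $k$-forms, so that $H^k_{\mathrm{dR}} = (\ker \mathrm{d})^k / (\operatorname{im} \mathrm{d})^k$.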
\begin{lemma} \label{lemma:linalg}
We state two trivial results from linear algebra that we use several times in Section~\ref{sec:cohom}.
\begin{enumerate}[(i)]
\item Let $V \subseteq U \subseteq (V \oplus W)$ be nested subspaces. Then $U = V \oplus (W \cap U)$.
\item Let $U = A \oplus B \oplus C$ be a direct sum decomposition of a vector space into complementary subspaces $A, B, C$. Let $V, W$ be subspaces of $U$ such that $V = A' \oplus B' \oplus C'$ and $W = A'' \oplus B'' \oplus C''$ where $A',A''$ are subspaces of $A$, and $B',B''$ are subspaces of $B$, and $C',C''$ are subspaces of $C$. Then $V \cap W = (A' \cap A'') \oplus (B' \cap B'') \oplus (C' \cap C'')$.
\end{enumerate}
\end{lemma}
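For instance, (i) follows by writing $u \in U \subseteq V \oplus W$ as $u = v + w$ with $v \in V$ and $w \in W$: since $V \subseteq U$ we have $w = u - v \in U$, so $w \in W \cap U$, and the sum $V + (W \cap U)$ is direct because $V \cap W = \{ 0 \}$. Part (ii) follows similarly from the uniqueness of the decomposition of an element of $U$ into its $A$, $B$, and $C$ components.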
{\bf Acknowledgments.} These results were obtained in 2017 as part of the collaboration between the COSINE program organized by the Chinese University of Hong Kong and the URA program organized by the University of Waterloo. The authors thank both universities for this opportunity. Part of the writing was done while the second author held a Fields Research Fellowship at the Fields Institute. The second author thanks the Fields Institute for their hospitality. The authors also thank the anonymous referee for pointing out that we had actually also established Theorem~\ref{thm:irreducibleMassey}, which is the stronger version of Corollary~\ref{cor:Massey} in the case of full $\mathrm{G}_2$ holonomy.
\section{Natural derivations on torsion-free $\mathrm{G}_2$~manifolds} \label{sec:main}
We first review some facts about torsion-free $\mathrm{G}_2$~manifolds and the decomposition of the exterior derivative $\mathrm{d}$. Then we define two derivations on $\Omega^{\bullet} $ and discuss their properties.
\subsection{$\mathrm{G}_2$-structures and the decomposition of $\Omega^{\bullet}$} \label{sec:forms}
Let $(M^7, \varphi)$ be a manifold with a $\mathrm{G}_2$-structure. Here $\varphi$ is the positive $3$-form associated to the $\mathrm{G}_2$-structure, and we use $\psi$ to denote the dual $4$-form $\psi = \ast \varphi$ with respect to the metric $g$ induced by $\varphi$. We will use the sign/orientation convention for $\mathrm{G}_2$-structures of~\cite{K-flows}. In this section we collect some facts about $\mathrm{G}_2$-structures, taken from~\cite{K-flows}, that we will need. We recall the fundamental relation between $\varphi$ and $g$, which allows one to extract the metric from the $3$-form. This is:
\begin{equation} \label{eq:fund-eq}
(X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \varphi = - 6 g(X, Y) \mathsf{vol}.
\end{equation}
\begin{lemma} \label{lemma:identities}
The tensors $g$, $\varphi$, $\psi$ satisfy the following contraction identities in a local frame:
\begin{align*}
\varphi_{ijk} \varphi_{abc} g^{kc} & = g_{ia} g_{jb} - g_{ib} g_{ja} - \psi_{ijab}, \allowdisplaybreaks\\
\varphi_{ijk} \varphi_{abc} g^{jb} g^{kc} & = 6 g_{ia}, \allowdisplaybreaks\\
\varphi_{ijk} \varphi_{abc} g^{ia} g^{jb} g^{kc} & = 42, \allowdisplaybreaks\\
\varphi_{ijk} \psi_{abcd} g^{kd} & = g_{ia} \varphi_{jbc} + g_{ib} \varphi_{ajc} + g_{ic} \varphi_{abj} - g_{aj} \varphi_{ibc} - g_{bj} \varphi_{aic} - g_{cj} \varphi_{abi}, \allowdisplaybreaks\\
\varphi_{ijk} \psi_{abcd} g^{jc} g^{kd} & = - 4 \varphi_{iab}, \allowdisplaybreaks\\
\varphi_{ijk} \psi_{abcd} g^{ib} g^{jc} g^{kd} & = 0, \allowdisplaybreaks\\
\psi_{ijkl} \psi_{abcd} g^{ld} & = -\varphi_{ajk} \varphi_{ibc} - \varphi_{iak} \varphi_{jbc} - \varphi_{ija} \varphi_{kbc} \allowdisplaybreaks\\
& \qquad {} + g_{ia} g_{jb} g_{kc} + g_{ib} g_{jc} g_{ka} + g_{ic} g_{ja} g_{kb} - g_{ia} g_{jc} g_{kb} - g_{ib} g_{ja} g_{kc} - g_{ic} g_{jb} g_{ka} \allowdisplaybreaks\\
& \qquad {} -g_{ia} \psi_{jkbc} - g_{ja} \psi_{kibc} - g_{ka} \psi_{ijbc} + g_{ab} \psi_{ijkc} - g_{ac} \psi_{ijkb}, \allowdisplaybreaks\\
\psi_{ijkl} \psi_{abcd} g^{kc} g^{ld} & = 4 g_{ia} g_{jb} - 4 g_{ib} g_{ja} - 2 \psi_{ijab}, \allowdisplaybreaks\\
\psi_{ijkl} \psi_{abcd} g^{jb} g^{kc} g^{ld} & = 24 g_{ia}, \allowdisplaybreaks\\
\psi_{ijkl} \psi_{abcd} g^{ia} g^{jb} g^{kc} g^{ld} & = 168.
\end{align*}
\end{lemma}
\begin{proof}
This is proved in Lemmas A.12, A.13, and A.14 of~\cite{K-flows}.
\end{proof}
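To illustrate how these identities interlock, observe for instance that the second follows from the first by contracting with $g^{jb}$:
\begin{equation*}
\varphi_{ijk} \varphi_{abc} g^{jb} g^{kc} = ( g_{ia} g_{jb} - g_{ib} g_{ja} - \psi_{ijab} ) g^{jb} = 7 g_{ia} - g_{ia} - 0 = 6 g_{ia},
\end{equation*}
where the $\psi$ term vanishes because $\psi$ is skew in the indices $j, b$ that are contracted against the symmetric $g^{jb}$. Contracting once more with $g^{ia}$ gives the third identity.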
For $k = 0, \ldots, 7$, the bundle $\Lambda^k := \Lambda^k (T^* M)$ decomposes as follows:
\begin{equation} \label{eq:bundle-decomp}
\begingroup
\renewcommand*{\arraystretch}{1.2}
\begin{matrix}
\Lambda^0 & = & \Lambda^0_1, \\
\Lambda^1 & = & & & \Lambda^1_7, \\
\Lambda^2 & = & & & \Lambda^2_7 & \oplus & \Lambda^2_{14}, \\
\Lambda^3 & = & \Lambda^3_1 & \oplus & \Lambda^3_7 & & & \oplus & \Lambda^3_{27}, \\
\Lambda^4 & = & \Lambda^4_1 & \oplus & \Lambda^4_7 & & & \oplus & \Lambda^4_{27}, \\
\Lambda^5 & = & & & \Lambda^5_7 & \oplus & \Lambda^5_{14}, \\
\Lambda^6 & = & & & \Lambda^6_7, \\
\Lambda^7 & = & \Lambda^7_1.
\end{matrix}
\endgroup
\end{equation}
Here $\Lambda^k_l$ is a rank $l$ subbundle of $\Lambda^k$, and the decomposition is orthogonal with respect to $g$. Moreover, we have $\Lambda^{7-k}_l = \ast \Lambda^k_l$. In fact there are isomorphisms $\Lambda^k_l \cong \Lambda^{k'}_l$, so the bundles in the same vertical column of~\eqref{eq:bundle-decomp} are all isomorphic. Moreover, the Hodge star $\ast$ and the operations of wedge product with $\varphi$ or with $\psi$ all commute with the projections $\pi_l$ for $l = 1, 7, 14, 27$.
We will denote by $\Omega^k_l$ the space of smooth sections of $\Lambda^k_l$. The isomorphisms $\Lambda^k_l \cong \Lambda^{k'}_l$ induce isomorphisms $\Omega^k_l \cong \Omega^{k'}_l$. The descriptions of the $\Omega^k_l$ and the particular identifications that we choose to use in this paper are given explicitly as follows:
\begin{equation} \label{eq:forms-isom}
\begin{aligned}
\Omega^0_1 & = C^{\infty} (M), \\
\Omega^1_7 & = \Gamma (T^* M) \cong \Gamma (TM), \\
\Omega^2_7 & = \{ X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi : \, X \in \Gamma(TM) \} \cong \Omega^1_7, \\
\Omega^2_{14} & = \{ \beta \in \Omega^2 : \, \beta \wedge \psi = 0 \} = \{ \beta \in \Omega^2 : \, \beta_{pq} g^{pi} g^{qj} \varphi_{ijk} = 0 \}, \\
\Omega^3_1 & = \{ f \varphi : \, f \in C^{\infty}(M) \} \cong \Omega^0_1, \\
\Omega^3_7 & = \{ X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi : \, X \in \Gamma(TM) \} \cong \Omega^1_7, \\
\Omega^3_{27} & = \{ \beta \in \Omega^3 : \, \beta \wedge \varphi = 0 \text{ and } \beta \wedge \psi = 0 \} = \{ h_{ip} g^{pk} e^i \wedge (e_k \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) : \, h \in \mathcal{S}_0 \}, \\
\Omega^k_l & = \{ \ast \beta: \, \beta \in \Omega^{7-k}_l \}, \quad \text{for $k = 4, 5, 6, 7$}.
\end{aligned}
\end{equation}
\begin{rmk} \label{rmk:not-isom}
We emphasize that the particular identifications we have chosen in~\eqref{eq:forms-isom} are \emph{not isometric}. Making them isometric identifications would require introducing irrational constant factors but this will not be necessary. See also Remark~\ref{rmk:adjoints}.
\end{rmk}
We will denote by $\pi^k_l$ the orthogonal projection $\pi^k_l : \Omega^k \to \Omega^k_l$. We note for future reference that $\beta \in \Omega^3_1 \oplus \Omega^3_{27}$ if and only if $\beta \perp (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi)$ for all $X$, and $\beta \in \Omega^3_7 \oplus \Omega^3_{27}$ if and only if $\beta \perp \varphi$. In a local frame these observations are
\begin{equation} \label{eq:omega3stuff}
\begin{aligned}
\beta \in \Omega^3_1 \oplus \Omega^3_{27} \, & \longleftrightarrow \, \beta_{ijk} g^{ia} g^{jb} g^{kc} \psi_{abcd} = 0, \\
\beta \in \Omega^3_7 \oplus \Omega^3_{27} \, & \longleftrightarrow \, \beta_{ijk} g^{ia} g^{jb} g^{kc} \varphi_{abc} = 0.
\end{aligned}
\end{equation}
Similarly we have that $\gamma \in \Omega^4_1 \oplus \Omega^4_{27}$ if and only if $\gamma \perp (\varphi \wedge X)$ for all $X$, and $\gamma \in \Omega^4_7 \oplus \Omega^4_{27}$ if and only if $\gamma \perp \psi$. In a local frame these observations are
\begin{equation} \label{eq:omega4stuff}
\begin{aligned}
\gamma \in \Omega^4_1 \oplus \Omega^4_{27} \, & \longleftrightarrow \, \gamma_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{abc} = 0, \\
\gamma \in \Omega^4_7 \oplus \Omega^4_{27} \, & \longleftrightarrow \, \gamma_{ijkl} g^{ia} g^{jb} g^{kc} g^{ld} \psi_{abcd} = 0.
\end{aligned}
\end{equation}
\begin{lemma} \label{lemma:identities2}
The following identities hold:
\begin{align*}
\ast (\varphi \wedge X^{\flat}) & = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi, & \ast ( \psi \wedge X^{\flat}) & = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi, \\
\psi \wedge \ast (\varphi \wedge X^{\flat}) & = 0, & \varphi \wedge \ast (\psi \wedge X^{\flat}) & = -2 \psi \wedge X^{\flat}, \\
\varphi \wedge ( X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = -2 \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi), & \psi \wedge (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = 3 \ast X^{\flat}, \\
\varphi \wedge (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = -4 \ast X^{\flat}, & \psi \wedge ( X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = 0.
\end{align*}
\end{lemma}
\begin{proof}
This is part of Proposition A.3 in~\cite{K-flows}.
\end{proof}
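Since $M$ is $7$-dimensional we have $\ast^2 = 1$ on $\Omega^k$ for every $k$, so each identity in Lemma~\ref{lemma:identities2} can be dualized. For instance, applying $\ast$ to the first two identities gives
\begin{equation*}
\varphi \wedge X^{\flat} = \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi), \qquad \qquad \psi \wedge X^{\flat} = \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi),
\end{equation*}
and we use these dual forms repeatedly in the proofs below.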
\begin{lemma} \label{lemma:identities3}
Identify $\Omega^1 \cong \Gamma (TM)$ using the metric. The \emph{cross product} $\times : \Omega^1 \times \Omega^1 \to \Omega^1$ is defined by $X \times Y = Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi = \ast (X \wedge Y \wedge \psi)$. It satisfies the identity
\begin{equation*}
X \times (X \times Y) = - g(X, X) Y + g(X, Y) X.
\end{equation*}
\end{lemma}
\begin{proof}
This is part of Lemma A.1 in~\cite{K-flows}.
\end{proof}
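For instance, the lemma yields the familiar norm formula for the cross product. From the definition we have $g(X \times Z, Y) = \varphi(X, Z, Y) = - g(X \times Y, Z)$, so pairing the identity above with $Y$ (taking $Z = X \times Y$) gives
\begin{equation*}
- |X \times Y|^2 = g \big( X \times (X \times Y), Y \big) = - g(X,X) \, g(Y,Y) + g(X,Y)^2,
\end{equation*}
that is, $|X \times Y|^2 = |X|^2 |Y|^2 - g(X,Y)^2$.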
In terms of a local frame, we define a map $\ell_{\varphi} : \Gamma(T^* M \otimes T^* M) \to \Omega^3$ by
\begin{equation} \label{eq:ellphdefn}
\ell_{\varphi} A = A_{ip} g^{pq} e^i \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi).
\end{equation}
In components, we have
\begin{equation*}
(\ell_{\varphi} A)_{ijk} = A_{ip} g^{pq} \varphi_{qjk} + A_{jp} g^{pq} \varphi_{iqk} + A_{kp} g^{pq} \varphi_{ijq}.
\end{equation*}
Analogous to~\eqref{eq:ellphdefn}, we define $\ell_{\psi} : \Gamma(T^* M \otimes T^* M) \to \Omega^4$ by
\begin{equation} \label{eq:ellpsdefn}
\ell_{\psi} A = A_{ip} g^{pq} e^i \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi).
\end{equation}
In components, we have
\begin{equation*}
(\ell_{\psi} A)_{ijkl} = A_{ip} g^{pq} \psi_{qjkl} + A_{jp} g^{pq} \psi_{iqkl} + A_{kp} g^{pq} \psi_{ijql} + A_{lp} g^{pq} \psi_{ijkq}.
\end{equation*}
It is easy to see that when $A = g$ is the metric, then
\begin{equation} \label{eq:ellg}
\ell_{\varphi} g = 3 \varphi, \qquad \qquad \ell_{\psi} g = 4 \psi.
\end{equation}
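Indeed, since $g_{ip} g^{pq} = \delta_i^{\, q}$, each of the three terms in the component formula for $\ell_{\varphi} g$ equals $\varphi_{ijk}$, and each of the four terms in the component formula for $\ell_{\psi} g$ equals $\psi_{ijkl}$.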
In~\cite[Section 2.2]{K-flows} the map $\ell_{\varphi}$ is written as $D$, but we use $\ell_{\varphi}$ to avoid confusion with the many instances of `$D$' used throughout the present paper to denote various natural linear first-order differential operators. We can orthogonally decompose $\Gamma(T^* M \otimes T^* M)$ into its symmetric and skew-symmetric parts, which then further orthogonally decompose as
\begin{equation*}
\Gamma(T^* M \otimes T^* M) = \Omega^0_1 \oplus \mathcal{S}_0 \oplus \Omega^2_7 \oplus \Omega^2_{14}.
\end{equation*}
In~\cite[Section 2.2]{K-flows} it is shown that $\ell_{\varphi}$ has kernel $\Omega^2_{14}$ and maps $\Omega^0_1$, $\mathcal{S}_0$, and $\Omega^2_7$ isomorphically onto $\Omega^3_1$, $\Omega^3_{27}$, and $\Omega^3_7$, respectively. One can similarly show that $\ell_{\psi}$ has kernel $\Omega^2_{14}$ and maps $\Omega^0_1$, $\mathcal{S}_0$, and $\Omega^2_7$ isomorphically onto $\Omega^4_1$, $\Omega^4_{27}$, and $\Omega^4_7$, respectively. (See also~\cite{KLL} for a detailed proof.) In particular, we note for future reference that
\begin{equation} \label{eq:ellon14}
\begin{aligned}
\beta \in \Omega^2_{14} & \iff (\ell_{\varphi} \beta)_{ijk} = \beta_{ip} g^{pq} \varphi_{qjk} + \beta_{jp} g^{pq} \varphi_{iqk} + \beta_{kp} g^{pq} \varphi_{ijq} = 0, \\ & \iff (\ell_{\psi} \beta)_{ijkl} = \beta_{ip} g^{pq} \psi_{qjkl} + \beta_{jp} g^{pq} \psi_{iqkl} + \beta_{kp} g^{pq} \psi_{ijql} + \beta_{lp} g^{pq} \psi_{ijkq} = 0.
\end{aligned}
\end{equation}
When restricted to $\ensuremath{\mathcal{S}}$, the map $\ell_{\varphi}$ is denoted by $i$ in~\cite{K-flows}. We use $\ell_{\varphi}$ rather than $i$, to avoid confusion with the algebraic derivations $\iota_B$ and $\iota_K$ that we introduce later in Section~\ref{sec:LBLK}.
\begin{lemma} \label{lemma:starell}
Let $h \in \mathcal{S}_0$. Then $\ast (\ell_{\varphi} h) = - \ell_{\psi} h$.
\end{lemma}
\begin{proof}
This is part of Proposition 2.14 in~\cite{K-flows}.
\end{proof}
The next two propositions will be crucial to establish properties of the algebraic derivations $\iota_B$ and $\iota_K$ in Section~\ref{sec:LBLK}.
\begin{prop} \label{prop:special}
Let $h = h_{ij} e^i e^j$ be a \emph{symmetric} $2$-tensor. The following identities hold:
\begin{equation} \label{eq:specialprop}
\begin{aligned}
h^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = - 2 (\operatorname{Tr}_g h) \psi + 2 \ell_{\psi} h, \\
h^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = 0, \\
h^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = 0.
\end{aligned}
\end{equation}
\end{prop}
\begin{proof}
Let $\alpha \in \Omega^k$ and $\beta \in \Omega^l$. Then we have
\begin{equation*}
(e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) = e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \big( \alpha \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) \big) - (-1)^k \alpha \wedge (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta).
\end{equation*}
Since the second term above is skew in $p,q$, when we contract with the symmetric tensor $h^{pq}$ we obtain
\begin{equation} \label{eq:specialtemp}
h^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) = h^{pq} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \big( \alpha \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) \big).
\end{equation}
We will repeatedly use the identities from Lemma~\ref{lemma:identities2}. When $\alpha = \beta = \psi$ in~\eqref{eq:specialtemp}, we have $\psi \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = 0$, establishing the third equation in~\eqref{eq:specialprop}. When $\alpha = \varphi$ and $\beta = \psi$ in~\eqref{eq:specialtemp}, we have
\begin{equation*}
\varphi \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = - 4 \ast (g_{qm} e^m),
\end{equation*}
and hence using that $X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} (\ast \alpha) = - \ast (X^{\flat} \wedge \alpha)$ for $\alpha \in \Omega^1$, we find
\begin{align*}
h^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = h^{pq} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} ( - 4 \ast g_{qm} e^m ) = -4 h^{pq} g_{qm} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} (\ast e^m) \\
& = + 4 h^{pq} g_{qm} \ast \big( (e_p)^{\flat} \wedge e^m \big) = 4 h^{pq} g_{qm} g_{pl} \ast (e^l \wedge e^m) \\
& = 4 h_{lm} \ast (e^l \wedge e^m) = 0,
\end{align*}
establishing the second equation in~\eqref{eq:specialprop}. Finally, when $\alpha = \beta = \varphi$ in~\eqref{eq:specialtemp}, we have
\begin{equation*}
\varphi \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = - 2 \ast (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = - 2 \big( \psi \wedge (e_q)^{\flat} \big) = - 2 g_{qm} e^m \wedge \psi,
\end{equation*}
and hence using~\eqref{eq:ellpsdefn} we find
\begin{align*}
h^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = h^{pq} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} (- 2 g_{qm} e^m \wedge \psi ) = -2 h^{pq} g_{qm} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} (e^m \wedge \psi) \\
& = - 2 h^{pq} g_{qm} \delta^m_p \psi + 2 h^{pq} g_{qm} e^m \wedge (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \\
& = - 2 h^{pq} g_{pq} \psi + 2 h_{ml} g^{lp} e^m \wedge (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = - 2 (\operatorname{Tr}_g h) \psi + 2 \ell_{\psi} h,
\end{align*}
establishing the first equation in~\eqref{eq:specialprop}.
\end{proof}
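For example, taking $h = g$ in the first equation of~\eqref{eq:specialprop} and using $\operatorname{Tr}_g g = 7$ together with~\eqref{eq:ellg} gives
\begin{equation*}
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = - 14 \psi + 8 \psi = - 6 \psi,
\end{equation*}
while the case $h = g$ of the second equation, namely $g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = 0$, is used in the proof of Proposition~\ref{prop:special2} below.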
\begin{prop} \label{prop:special2}
For any fixed $m$, the following identities hold:
\begin{equation} \label{eq:specialprop2}
\begin{aligned}
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = 3 (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi), \\
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = -3 \ast (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi), \\
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = -3 \ast (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi), \\
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = 4 \ast (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi).
\end{aligned}
\end{equation}
\end{prop}
\begin{proof}
In this proof, we use $e^{ijk}$ to denote $e^i \wedge e^j \wedge e^k$ and similarly for any number of indices. First, we compute
\begin{align*}
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = \tfrac{1}{2} (g^{pq} \varphi_{pij} \varphi_{mqk}) e^{ijk} \\
& = \tfrac{1}{2} ( g_{ik} g_{jm} - g_{im} g_{jk} - \psi_{ijkm}) e^{ijk} \\
& = 0 - 0 - \tfrac{1}{2} \psi_{ijkm} e^{ijk} = 3 (\tfrac{1}{6} \psi_{mijk} e^{ijk}) = 3 (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi),
\end{align*}
establishing the first equation in~\eqref{eq:specialprop2}.
Similarly we compute
\begin{align*}
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = \tfrac{1}{4} (g^{pq} \varphi_{pij} \psi_{mqkl}) e^{ijkl} \\
& = \tfrac{1}{4} ( g_{ik} \varphi_{jlm} + g_{il} \varphi_{kjm} + g_{im} \varphi_{klj} - g_{jk} \varphi_{ilm} - g_{jl} \varphi_{kim} - g_{jm} \varphi_{kli} ) e^{ijkl} \\
& = 0 + 0 + \tfrac{1}{4} g_{im} \varphi_{klj} e^{ijkl} + 0 + 0 - \tfrac{1}{4} g_{jm} \varphi_{kli} e^{ijkl} = \tfrac{1}{2} g_{im} \varphi_{jkl} e^{ijkl} \\
& = 3 (g_{mi} e^i) \wedge ( \tfrac{1}{6} \varphi_{jkl} e^{jkl} ) = 3 (e_m)^{\flat} \wedge \varphi = - 3 \ast (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi),
\end{align*}
establishing the second equation in~\eqref{eq:specialprop2}. Now let $h = g$ in the second equation of~\eqref{eq:specialprop}. Taking the interior product of $g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = 0$ with $e_m$, we obtain
\begin{equation*}
g^{pq} (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) + g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = 0,
\end{equation*}
which, after rearrangement and relabelling of indices, becomes
\begin{equation*}
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi),
\end{equation*}
establishing the third equation in~\eqref{eq:specialprop2}.
Finally we compute
\begin{align*}
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_c \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = \tfrac{1}{12} (g^{pq} \psi_{pijk} \psi_{cqab}) e^{ijkab} \\
& = -\tfrac{1}{12} (g^{pq} \psi_{ijkp} \psi_{abcq} ) e^{ijkab} \\
& = -\tfrac{1}{12} \bigg( -\varphi_{ajk} \varphi_{ibc} - \varphi_{iak} \varphi_{jbc} - \varphi_{ija} \varphi_{kbc} + g_{ia} g_{jb} g_{kc} + g_{ib} g_{jc} g_{ka} \\
& \qquad \qquad {} + g_{ic} g_{ja} g_{kb} - g_{ia} g_{jc} g_{kb} - g_{ib} g_{ja} g_{kc} - g_{ic} g_{jb} g_{ka} -g_{ia} \psi_{jkbc} \\
& \qquad \qquad {} - g_{ja} \psi_{kibc} - g_{ka} \psi_{ijbc} + g_{ab} \psi_{ijkc} - g_{ac} \psi_{ijkb} \bigg) e^{ijkab}.
\end{align*}
The first three terms above combine, and all the remaining terms except the last one vanish. Thus using Lemma~\ref{lemma:identities2} we have
\begin{align*}
g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_c \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = -\tfrac{1}{12} ( -3 \varphi_{ajk} \varphi_{ibc} - g_{ac} \psi_{ijkb} ) e^{ijkab} \\
& = -\tfrac{1}{4} (\varphi_{ajk} e^{ajk}) \wedge (\varphi_{cib} e^{ib}) - \tfrac{1}{12} (g_{ac} e^a) \wedge (\psi_{ijkb} e^{ijkb}) \\
& = -3 ( \tfrac{1}{6} \varphi_{ajk} e^{ajk} ) \wedge ( \tfrac{1}{2} \varphi_{cib} e^{ib} ) - 2 (g_{ca} e^a) \wedge ( \tfrac{1}{24} \psi_{ijkb} e^{ijkb} ) \\
& = -3 \varphi \wedge (e_c \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) - 2 (e_c)^{\flat} \wedge \psi = 6 \ast (e_c \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) - 2 \ast (e_c \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = 4 \ast (e_c \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi),
\end{align*}
establishing the fourth equation in~\eqref{eq:specialprop2}.
\end{proof}
For the rest of this section and all of the next section, we assume $(M, \varphi)$ is torsion-free. See also Remark~\ref{rmk:when-torsion-free}.
\begin{prop} \label{prop:Liederiv}
Suppose $(M, \varphi)$ is a torsion-free $\mathrm{G}_2$~manifold. Then $\ast (\pi_{27} \mathcal L_X \varphi) = - \pi_{27} \mathcal L_X \psi$ for any vector field $X$.
\end{prop}
\begin{proof}
Because $\varphi$ and $\psi$ are both parallel, from~\cite[equation (1.7)]{K-flows} we have
\begin{equation*}
(\mathcal L_X \varphi) = (\nabla_i X_p) g^{pq} e^i \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi), \qquad (\mathcal L_X \psi) = (\nabla_i X_p) g^{pq} e^i \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi).
\end{equation*}
Applying $\pi_{27}$ to both of the above expressions and using Lemma~\ref{lemma:starell} yields the desired result.
\end{proof}
\subsection{The exterior derivative $\mathrm{d}$ and the Hodge Laplacian $\Delta$} \label{sec:dLap}
In this section we analyze the exterior derivative $\mathrm{d}$ and the Hodge Laplacian $\Delta$ on a manifold with torsion-free $\mathrm{G}_2$-structure. Many, but not all, of the results in this section have appeared before, without proof, in~\cite[Section 5.2]{Bryant}. See Remark~\ref{rmk:Bryant} for details. Theorem~\ref{thm:harmonic1}, which relates kernels of various operators on $\Omega^1$, is fundamental to the rest of the paper and appears to be new.
We first define three first order operators on torsion-free $\mathrm{G}_2$~manifolds, which will be used to decompose $\mathrm{d} : \Omega^k \to \Omega^{k+1}$ into components. More details can be found in~\cite[Section 4]{Knotes}.
\begin{defn} \label{defn:ops}
Let $(M, \varphi)$ be a torsion-free $\mathrm{G}_2$~manifold. We define the following first order linear differential operators:
\begin{align*}
\ensuremath{\operatorname{grad}} : \Omega^0_1 & \to \Omega^1_7, & f & \mapsto \mathrm{d} f, \\
\ensuremath{\operatorname{div}} : \Omega^1_7 & \to \Omega^0_1, & X & \mapsto - \dop{\dd} X, \\
\ensuremath{\operatorname{curl}} : \Omega^1_7 & \to \Omega^1_7, & X & \mapsto \ast (\psi \wedge \mathrm{d} X).
\end{align*}
In a local frame, these operators have the following form:
\begin{equation} \label{eq:opslocal}
(\ensuremath{\operatorname{grad}} f)_k = \nabla_k f, \qquad \ensuremath{\operatorname{div}} X = g^{ij} \nabla_i X_j, \qquad (\ensuremath{\operatorname{curl}} X)_k = (\nabla_i X_j) g^{ip} g^{jq} \varphi_{pqk}.
\end{equation}
\end{defn}
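For instance, it follows immediately from~\eqref{eq:opslocal} that
\begin{equation*}
\big( \ensuremath{\operatorname{curl}} (\ensuremath{\operatorname{grad}} f) \big)_k = (\nabla_i \nabla_j f) g^{ip} g^{jq} \varphi_{pqk} = 0,
\end{equation*}
since the Hessian $\nabla_i \nabla_j f$ is symmetric in $i, j$ while $\varphi$ is skew. In view of~\eqref{eq:D77-1} below, this is the pointwise statement underlying the relation $D^7_7 D^1_7 = 0$ of Corollary~\ref{cor:d-relations}.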
\begin{defn} \label{defn:Doperators}
Denote by $D^{l}_{m}$ the composition
\begin{equation*}
D^l_m : \Omega^k_l \hookrightarrow \Omega^k \xrightarrow{\mathrm{d}} \Omega^{k+1} \twoheadrightarrow \Omega^{k+1}_{m},
\end{equation*}
where $k$ is the smallest integer such that this composition makes sense. Here the surjection is the projection $\pi^{k+1}_m$. That is, $D^l_m = \pi^{k+1}_m \circ {\left. {\mathrm{d}} \right|}_{\Omega^k_l}$.
\end{defn}
\begin{prop} \label{prop:vanishingDs}
The operators $D^1_1$, $D^1_{14}$, $D^{14}_1$, $D^1_{27}$, $D^{27}_1$, and $D^{14}_{14}$ are all zero.
\end{prop}
\begin{proof}
It is clear from~\eqref{eq:bundle-decomp} that $D^{14}_{14} = 0$. The operators $D^1_1 : \Omega^3_1 \to \Omega^4_1$ and $D^1_{27} : \Omega^3_1 \to \Omega^4_{27}$ are both zero because $\mathrm{d} (f \varphi) = (\mathrm{d} f) \wedge \varphi \in \Omega^4_7$. Similarly, since $\mathrm{d} (f \psi) = (\mathrm{d} f) \wedge \psi \in \Omega^5_7$, we also have $D^1_{14} = 0$. If $\beta \in \Omega^2_{14}$, then $\beta \wedge \psi = 0$, so $(\mathrm{d} \beta) \wedge \psi = 0$, and thus $\pi_1 (\mathrm{d} \beta) = 0$, hence $D^{14}_1 = 0$. Similarly if $\beta \in \Omega^3_{27}$, then $\beta \wedge \varphi = 0$, so $(\mathrm{d} \beta) \wedge \varphi = 0$, and thus $\pi_1 (\mathrm{d} \beta) = 0$, hence $D^{27}_1 = 0$.
\end{proof}
\begin{figure}
\caption{Decomposition of the exterior derivative $\mathrm{d}$}
\label{figure:d}
\end{figure}
\begin{prop} \label{prop:dfigure}
With respect to the identifications described in~\eqref{eq:forms-isom}, the components of the exterior derivative $\mathrm{d}$ satisfy the relations given in Figure~\ref{figure:d}.
\end{prop}
\begin{proof}
We will use repeatedly the contraction identities of Lemma~\ref{lemma:identities} and the descriptions~\eqref{eq:forms-isom} of the $\Omega^k_l$ spaces.
(i) We establish the relations for $\pi_7 \mathrm{d} \pi_1 : \Omega^k_1 \to \Omega^{k+1}_7$ for $k=0,3,4$.
$k=0$: Let $f \in \Omega^0_1$. By Definition~\ref{defn:Doperators}, we have $D^1_7 f = \mathrm{d} f$.
$k=3$: Let $\beta = f \varphi \in \Omega^3_1$. Since $\mathrm{d} \beta = (\mathrm{d} f) \wedge \varphi \in \Omega^4_7$, we have $\pi_7 (\mathrm{d} \beta) = - \varphi \wedge (\mathrm{d} f) = - \ast ( (\mathrm{d} f) \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi)$, so $\pi_7 \mathrm{d} \pi_1 : \Omega^3_1 \to \Omega^4_7$ is identified with $-D^1_7$.
$k=4$: Let $\gamma = f \psi \in \Omega^4_1$. Since $\mathrm{d} \gamma = (\mathrm{d} f) \wedge \psi \in \Omega^5_7$, we have $\pi_7 (\mathrm{d} \gamma) = \psi \wedge (\mathrm{d} f) = \ast ( (\mathrm{d} f) \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi)$, so $\pi_7 \mathrm{d} \pi_1 : \Omega^4_1 \to \Omega^5_7$ is identified with $D^1_7$.
(ii) We establish the relations for $\pi_1 \mathrm{d} \pi_7 : \Omega^k_7 \to \Omega^{k+1}_1$ for $k=2,3,6$.
$k=2$: Let $\alpha = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi \in \Omega^2_7$. Then $(\pi_1 \mathrm{d} \alpha)_{ijk} = f \varphi_{ijk}$ for some function $f$. Using~\eqref{eq:omega3stuff} we compute
\begin{align*}
(\mathrm{d} \alpha)_{ijk} g^{ia} g^{jb} g^{kc} \varphi_{abc} & = (\pi_1 \mathrm{d} \alpha)_{ijk} g^{ia} g^{jb} g^{kc} \varphi_{abc} = f \varphi_{ijk} g^{ia} g^{jb} g^{kc} \varphi_{abc} = 42 f, \\
& = ( \nabla_i \alpha_{jk} + \nabla_j \alpha_{ki} + \nabla_k \alpha_{ij} ) g^{ia} g^{jb} g^{kc} \varphi_{abc} = 3 (\nabla_i \alpha_{jk}) g^{ia} g^{jb} g^{kc} \varphi_{abc},
\end{align*}
and thus $f = \tfrac{3}{42} (\nabla_i \alpha_{jk}) g^{ia} g^{jb} g^{kc} \varphi_{abc}$. Substituting $\alpha_{jk} = X^m \varphi_{mjk}$ we obtain
\begin{equation*}
f = \tfrac{3}{42} (\nabla_i X^m) \varphi_{mjk} \varphi_{abc} g^{ia} g^{jb} g^{kc} = \tfrac{18}{42} (\nabla_i X^m) g^{ia} g_{ma} = \tfrac{3}{7} \nabla_i X^i,
\end{equation*}
and comparing with Definition~\ref{defn:ops} we find that
\begin{equation} \label{eq:D71-1}
D^7_1 X = \pi_1 \mathrm{d} (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = f \varphi = (\tfrac{3}{7} \ensuremath{\operatorname{div}} X) \varphi.
\end{equation}
$k=3$: Let $\beta = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi \in \Omega^3_7$. Then $(\pi_1 \mathrm{d} \beta)_{ijkl} = f \psi_{ijkl}$ for some function $f$. Using~\eqref{eq:omega4stuff} we compute
\begin{align*}
(\mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} g^{ld} \psi_{abcd} & = (\pi_1 \mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} g^{ld} \psi_{abcd} = f \psi_{ijkl} g^{ia} g^{jb} g^{kc} g^{ld} \psi_{abcd} = 168 f, \\
& = ( \nabla_i \beta_{jkl} - \nabla_j \beta_{ikl} + \nabla_k \beta_{ijl} - \nabla_l \beta_{ijk} ) g^{ia} g^{jb} g^{kc} g^{ld} \psi_{abcd} \\
& = 4 (\nabla_i \beta_{jkl}) g^{ia} g^{jb} g^{kc} g^{ld} \psi_{abcd},
\end{align*}
and thus $f = \tfrac{4}{168} (\nabla_i \beta_{jkl}) g^{ia} g^{jb} g^{kc} g^{ld} \psi_{abcd}$. Substituting $\beta_{jkl} = X^m \psi_{mjkl}$ we obtain
\begin{equation*}
f = \tfrac{4}{168} (\nabla_i X^m) \psi_{mjkl} \psi_{abcd} g^{ia} g^{jb} g^{kc} g^{ld} = \tfrac{4 \cdot 24}{168} (\nabla_i X^m) g^{ia} g_{ma} = \tfrac{4}{7} \nabla_i X^i,
\end{equation*}
and comparing with~\eqref{eq:D71-1} we find that $\pi_1 \mathrm{d} \pi_7 : \Omega^3_7 \to \Omega^4_1$ is identified with $\tfrac{4}{3} D^7_1$.
$k=6$: Let $\ast X \in \Omega^6_7$. Then $\pi_1 \mathrm{d} (\ast X) = \mathrm{d} \ast X = \ast^2 \mathrm{d} \ast X = - \ast (\dop{\dd} X) = - (\dop{\dd} X) \mathsf{vol}$, where we have used $\dop{\dd} = - \ast \mathrm{d} \ast$ on odd forms. Comparing with Definition~\ref{defn:ops} and~\eqref{eq:D71-1} we find that $\pi_1 \mathrm{d} \pi_7 : \Omega^6_7 \to \Omega^7_1$ is identified with $\tfrac{7}{3} D^7_1$.
(iii) We establish the relations for $\pi_7 \mathrm{d} \pi_7 : \Omega^k_7 \to \Omega^{k+1}_7$ for $k=1,2,3,4,5$.
$k=1$: Let $X \in \Omega^1_7$. Then $(\pi_7 \mathrm{d} X)_{ij} = Y^m \varphi_{mij}$ for some vector field $Y$. We compute
\begin{align*}
(\mathrm{d} X)_{ij} g^{ia} g^{jb} \varphi_{kab} & = (\pi_7 \mathrm{d} X)_{ij} g^{ia} g^{jb} \varphi_{kab} = Y^m \varphi_{mij} g^{ia} g^{jb} \varphi_{kab} = 6 Y_k, \\
& = ( \nabla_i X_j - \nabla_j X_i ) g^{ia} g^{jb} \varphi_{kab} = 2 (\nabla_i X_j) g^{ia} g^{jb} \varphi_{abk},
\end{align*}
from which it follows from Definition~\ref{defn:Doperators} that
\begin{equation} \label{eq:D77-1}
D^7_7 X = \pi_7 \mathrm{d} X = Y = \tfrac{1}{3} \ensuremath{\operatorname{curl}} X.
\end{equation}
$k=2$: Let $\alpha = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi \in \Omega^2_7$. Then $(\pi_7 \mathrm{d} \alpha)_{ijk} = Y^m \psi_{mijk}$ for some vector field $Y$. Using~\eqref{eq:omega3stuff} we compute
\begin{align*}
(\mathrm{d} \alpha)_{ijk} g^{ia} g^{jb} g^{kc} \psi_{labc} & = (\pi_7 \mathrm{d} \alpha)_{ijk} g^{ia} g^{jb} g^{kc} \psi_{labc} = Y^m \psi_{mijk} g^{ia} g^{jb} g^{kc} \psi_{labc} = 24 Y_l, \\
& = ( \nabla_i \alpha_{jk} + \nabla_j \alpha_{ki} + \nabla_k \alpha_{ij} ) g^{ia} g^{jb} g^{kc} \psi_{labc} = 3 (\nabla_i \alpha_{jk}) g^{ia} g^{jb} g^{kc} \psi_{labc},
\end{align*}
and thus $Y_l = \tfrac{1}{8} (\nabla_i \alpha_{jk}) g^{ia} g^{jb} g^{kc} \psi_{labc}$. Substituting $\alpha_{jk} = X^m \varphi_{mjk}$ we obtain
\begin{equation*}
Y_l = \tfrac{1}{8} (\nabla_i X^m) \varphi_{mjk} \psi_{labc} g^{ia} g^{jb} g^{kc} = -\tfrac{4}{8} (\nabla_i X^m) g^{ia} \varphi_{mla} = - \tfrac{1}{2} \ensuremath{\operatorname{curl}} X,
\end{equation*}
and comparing with~\eqref{eq:D77-1} we find that $\pi_7 \mathrm{d} \pi_7 : \Omega^2_7 \to \Omega^3_7$ is identified with $-\tfrac{3}{2} D^7_7$.
$k=3$: Let $\beta = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi \in \Omega^3_7$. Then $\pi_7 (\mathrm{d} \beta) = \ast (Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = \varphi \wedge Y$ for some vector field $Y$. We have $(\pi_7 \mathrm{d} \beta)_{ijkl} = \varphi_{ijk} Y_l - \varphi_{ijl} Y_k + \varphi_{ikl} Y_j - \varphi_{jkl} Y_i$. Using~\eqref{eq:omega4stuff} we compute
\begin{align*}
(\mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{abc} & = (\pi_7 \mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& = (\varphi_{ijk} Y_l - \varphi_{ijl} Y_k + \varphi_{ikl} Y_j - \varphi_{jkl} Y_i) g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& = 42 Y_l - 3 \varphi_{ijl} Y_k g^{ia} g^{jb} g^{kc} \varphi_{abc} = 42 Y_l - 3(6 Y_k g^{kc} g_{lc}) = 24 Y_l.
\end{align*}
But we also have
\begin{align*}
(\mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{abc} & = ( \nabla_i \beta_{jkl} - \nabla_j \beta_{ikl} + \nabla_k \beta_{ijl} - \nabla_l \beta_{ijk} ) g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& = 3 (\nabla_i \beta_{jkl}) g^{ia} g^{jb} g^{kc} \varphi_{abc} - (\nabla_l \beta_{ijk}) g^{ia} g^{jb} g^{kc} \varphi_{abc}.
\end{align*}
Substituting $\beta_{ijk} = X^m \psi_{mijk}$ we obtain
\begin{align*}
(\mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{cab} & = 3 (\nabla_i X^m) \psi_{mljk} g^{ia} g^{jb} g^{kc} \varphi_{abc} - (\nabla_l X^m) \psi_{mijk} g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& = - 12 (\nabla_i X^m) g^{ia} \varphi_{aml} - 0,
\end{align*}
and thus $Y_l = -\tfrac{12}{24} (\nabla_i X^m) g^{ia} \varphi_{aml} = - \tfrac{1}{2} \ensuremath{\operatorname{curl}} X$. Comparing with~\eqref{eq:D77-1} we find that $\pi_7 \mathrm{d} \pi_7 : \Omega^3_7 \to \Omega^4_7$ is identified with $-\tfrac{3}{2} D^7_7$.
$k=4$: Let $\gamma = \ast( X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = \varphi \wedge X \in \Omega^4_7$. Then $\pi_7 (\mathrm{d} \gamma) = \pi_7 \mathrm{d} (\varphi \wedge X) = -\pi_7 (\varphi \wedge \mathrm{d} X) = \ast( Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi)$ for some vector field $Y$. We compute
\begin{equation*}
\ast( Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi ) = -\pi_7 (\varphi \wedge \mathrm{d} X) = - \varphi \wedge (\pi_7 \mathrm{d} X) = 2 \ast (\pi_7 \mathrm{d} X).
\end{equation*}
Comparing with~\eqref{eq:D77-1} we find that $\pi_7 \mathrm{d} \pi_7 : \Omega^4_7 \to \Omega^5_7$ is identified with $2 D^7_7$.
$k=5$: Let $\eta = \ast( X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = \psi \wedge X \in \Omega^5_7$. Then $\pi_7 (\mathrm{d} \eta) = \mathrm{d} \eta = \mathrm{d} (\psi \wedge X) = \psi \wedge \mathrm{d} X = \ast Y$ for some vector field $Y$. Using Definition~\ref{defn:ops}, we compute
\begin{equation*}
Y = \ast( \psi \wedge \mathrm{d} X) = \ensuremath{\operatorname{curl}} X.
\end{equation*}
Comparing with~\eqref{eq:D77-1} we find that $\pi_7 \mathrm{d} \pi_7 : \Omega^5_7 \to \Omega^6_7$ is identified with $3 D^7_7$.
(iv) We establish the relations for $\pi_{14} \mathrm{d} \pi_7 : \Omega^k_7 \to \Omega^{k+1}_{14}$ for $k=1,4$.
$k=1$: Let $X \in \Omega^1_7$. By definition, we have $\pi_{14} \mathrm{d} X = D^7_{14} X$.
$k=4$: Let $\gamma = \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = \varphi \wedge X \in \Omega^4_7$. Then $\mathrm{d} \gamma = - \varphi \wedge (\mathrm{d} X)$, so $\pi_{14} \mathrm{d} \gamma = - \pi_{14} (\varphi \wedge \mathrm{d} X) = - \varphi \wedge (\pi_{14} \mathrm{d} X) = - \ast (\pi_{14} \mathrm{d} X)$. Thus we find that $\pi_{14} \mathrm{d} \pi_7 : \Omega^4_7 \to \Omega^5_{14}$ is identified with $- D^7_{14}$.
(v) We establish the relations for $\pi_7 \mathrm{d} \pi_{14} : \Omega^k_{14} \to \Omega^{k+1}_7$ for $k=2,5$.
$k=2$: Let $\alpha \in \Omega^2_{14}$. By definition, we have $\pi_7 \mathrm{d} \alpha = D^{14}_7 \alpha$.
$k=5$: Let $\eta = \ast \beta \in \Omega^5_{14}$ where $\beta \in \Omega^2_{14}$. We have $\ast \beta = \varphi \wedge \beta$, so $\pi_7 \mathrm{d} (\ast \beta) = \mathrm{d} (\ast \beta) = - \varphi \wedge \mathrm{d} \beta \in \Omega^6_7$. We can write $\pi_7 \mathrm{d} \beta = Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi \in \Omega^3_7$ for some vector field $Y$. Then using Lemma~\ref{lemma:identities2} we find $\pi_7 \mathrm{d} (\ast \beta) = - \varphi \wedge \mathrm{d} \beta = - \varphi \wedge (\pi_7 \mathrm{d} \beta) = - \varphi \wedge (Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = 4 \ast Y$. Thus we find that $\pi_7 \mathrm{d} \pi_{14} : \Omega^5_{14} \to \Omega^6_7$ is identified with $4 D^{14}_7$.
(vi) We establish the relations for $\pi_{27} \mathrm{d} \pi_7 : \Omega^k_7 \to \Omega^{k+1}_{27}$ for $k=2,3$.
$k=2$: Let $\alpha \in \Omega^2_7$. By definition, we have $\pi_{27} \mathrm{d} \alpha = D^7_{27} \alpha$.
$k=3$: Let $\beta = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi \in \Omega^3_7$. Then $\pi_{27} \mathrm{d} \beta = \pi_{27} \mathrm{d} (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = \pi_{27} \mathcal L_X \psi$. Consider $\alpha = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi$. Then similarly we have $\pi_{27} \mathrm{d} \alpha = \pi_{27} \mathcal L_X \varphi$. By Proposition~\ref{prop:Liederiv}, we have $\pi_{27} \mathrm{d} (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = - \ast (\pi_{27} \mathrm{d} (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi))$. Thus we find that $\pi_{27} \mathrm{d} \pi_7 : \Omega^3_7 \to \Omega^4_{27}$ is identified with $- D^7_{27}$.
(vii) We establish the relations for $\pi_7 \mathrm{d} \pi_{27} : \Omega^k_{27} \to \Omega^{k+1}_7$ for $k=3,4$.
$k=3$: Let $\beta = \ell_{\varphi} h \in \Omega^3_{27}$, where $h \in \mathcal{S}_0$. Then $\pi_7 (\mathrm{d} \beta) = \ast(Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = \varphi \wedge Y$ for some vector field $Y$. We have $(\pi_7 \mathrm{d} \beta)_{ijkl} = \varphi_{ijk} Y_l - \varphi_{ijl} Y_k + \varphi_{ikl} Y_j - \varphi_{jkl} Y_i$. Using~\eqref{eq:omega4stuff} we compute
\begin{align*}
(\mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{abc} & = (\pi_7 \mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& = (\varphi_{ijk} Y_l - \varphi_{ijl} Y_k + \varphi_{ikl} Y_j - \varphi_{jkl} Y_i) g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& = 42 Y_l - 3 \varphi_{ijl} Y_k g^{ia} g^{jb} g^{kc} \varphi_{abc} = 42 Y_l - 3(6 Y_k g^{kc} g_{lc}) = 24 Y_l.
\end{align*}
But we also have
\begin{align*}
(\mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{abc} & = ( \nabla_i \beta_{jkl} - \nabla_j \beta_{ikl} + \nabla_k \beta_{ijl} - \nabla_l \beta_{ijk} ) g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& = 3 (\nabla_i \beta_{jkl}) g^{ia} g^{jb} g^{kc} \varphi_{abc} - (\nabla_l \beta_{ijk}) g^{ia} g^{jb} g^{kc} \varphi_{abc}.
\end{align*}
Substituting $\beta_{ijk} = h_{ip} g^{pq} \varphi_{qjk} + h_{jp} g^{pq} \varphi_{qki} + h_{kp} g^{pq} \varphi_{qij}$ we obtain
\begin{align*}
24 Y_l & = (\mathrm{d} \beta)_{ijkl} g^{ia} g^{jb} g^{kc} \varphi_{cab} \\
& = 3 (\nabla_i (h_{jp} g^{pq} \varphi_{qkl} + h_{kp} g^{pq} \varphi_{qlj} + h_{lp} g^{pq} \varphi_{qjk})) g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& \qquad {} - (\nabla_l (h_{ip} g^{pq} \varphi_{qjk} + h_{jp} g^{pq} \varphi_{qki} + h_{kp} g^{pq} \varphi_{qij})) g^{ia} g^{jb} g^{kc} \varphi_{abc} \\
& = 6 (\nabla_i h_{jp}) g^{pq} g^{ia} g^{jb} (g^{kc} \varphi_{lqk} \varphi_{abc}) + 3 (\nabla_i h_{lp}) g^{pq} g^{ia} (g^{jb} g^{kc} \varphi_{qjk} \varphi_{abc}) \\
& \qquad {} - 3 (\nabla_l h_{ip}) g^{pq} g^{ia} (g^{jb} g^{kc} \varphi_{qjk} \varphi_{abc}).
\end{align*}
We further simplify this as
\begin{align*}
24 Y_l & = 6 (\nabla_i h_{jp}) g^{pq} g^{ia} g^{jb} (g_{la} g_{qb} - g_{lb} g_{qa} - \psi_{lqab}) + 3 (\nabla_i h_{lp}) g^{pq} g^{ia} (6 g_{qa}) - 3 (\nabla_l h_{ip}) g^{pq} g^{ia} (6 g_{qa}) \\
& = 6 (\nabla_l h_{jp}) g^{jp} - 6 (\nabla_i h_{lp} )g ^{ip} - 0 + 18 (\nabla_i h_{lp}) g^{ip} - 18 (\nabla_l h_{ip}) g^{ip} \\
& = 6 \nabla_l (\operatorname{Tr} h) - 6 (\nabla_{i} h_{jl}) g^{ij} + 18 (\nabla_i h_{jl}) g^{ij} - 18 \nabla_l (\operatorname{Tr} h) = 12 g^{ij} (\nabla_{i} h_{jl}),
\end{align*}
where we have used that $\operatorname{Tr} h = 0$ because $h \in \mathcal{S}_0$ is trace-free, and thus $Y_l = \tfrac{1}{2} g^{ij} (\nabla_{i} h_{jl})$. It follows from Definition~\ref{defn:Doperators} that
\begin{equation} \label{eq:D277-1}
D^{27}_7 h = \pi_7 \mathrm{d} (\ell_{\varphi} h) = \ast ( Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi), \qquad \text{ where $Y_l = \tfrac{1}{2} g^{ij} (\nabla_{i} h_{jl})$}.
\end{equation}
$k=4$: Let $\gamma = \ast (\ell_{\varphi} h) \in \Omega^4_{27}$, where $h \in \mathcal{S}_0$. Then $\pi_7 (\mathrm{d} \gamma) = \ast(Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi)$ for some vector field $Y$. Taking Hodge star of both sides, we have $Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi = \ast \pi_7 (\mathrm{d} \ast \ell_{\varphi} h) = \pi_7 \ast \mathrm{d} \ast (\ell_{\varphi} h) = -\pi_7 \dop{\dd} (\ell_{\varphi} h)$. Thus we have
\begin{align*}
-(\dop{\dd} (\ell_{\varphi} h))_{ij} g^{ia} g^{jb} \varphi_{kab} & = -(\pi_7 \dop{\dd} (\ell_{\varphi} h))_{ij} g^{ia} g^{jb} \varphi_{kab} = (Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi)_{ij} g^{ia} g^{jb} \varphi_{kab} \\
& = Y^m \varphi_{mij} g^{ia} g^{jb} \varphi_{kab} = 6 Y_k.
\end{align*}
But we also have
\begin{align*}
-(\dop{\dd} (\ell_{\varphi} h))_{ij} g^{ia} g^{jb} \varphi_{kab} & = g^{pq} (\nabla_p (\ell_{\varphi} h)_{qij}) g^{ia} g^{jb} \varphi_{kab} \\
& = g^{pq} (\nabla_p (h_{ql} g^{lm} \varphi_{mij} + h_{il} g^{lm} \varphi_{mjq} + h_{jl} g^{lm} \varphi_{mqi})) g^{ia} g^{jb} \varphi_{kab} \\
& = g^{pq} (\nabla_p h_{ql}) g^{lm} (g^{ia} g^{jb} \varphi_{mij} \varphi_{kab}) + 2 g^{pq} (\nabla_p h_{il}) g^{lm} g^{ia} (g^{jb} \varphi_{qmj} \varphi_{kab}) \\
& = g^{pq} (\nabla_p h_{ql}) g^{lm} (6 g_{mk}) + 2 g^{pq} (\nabla_p h_{il}) g^{lm} g^{ia} (g_{qk} g_{ma} - g_{qa} g_{mk} - \psi_{qmka}) \\
& = 6 g^{pq} (\nabla_p h_{qk}) + 2 \nabla_k (\operatorname{Tr} h) - 2 g^{ip} (\nabla_p h_{ik}) - 0 = 4 g^{ij} (\nabla_{i} h_{jk}),
\end{align*}
where in the last step we have again used that $h$ is trace-free. Thus we have $Y_k = \tfrac{2}{3} g^{ij} (\nabla_{i} h_{jk}) = \tfrac{4}{3} (\tfrac{1}{2} g^{ij} (\nabla_{i} h_{jk}))$. Comparing with~\eqref{eq:D277-1} we find that $\pi_7 \mathrm{d} \pi_{27} : \Omega^4_{27} \to \Omega^5_7$ is identified with $\tfrac{4}{3} D^{27}_7$.
\end{proof}
\begin{cor} \label{cor:d-relations}
The operators of Definition~\ref{defn:Doperators} satisfy the following fourteen relations:
\begin{equation} \label{eq:d-relations}
\begin{aligned}
D^7_7 D^1_7 & = 0, \qquad \qquad & D^7_{14} D^1_7 & = 0, \\
D^7_1 D^7_7 & = 0, \qquad \qquad & \tfrac{3}{2} D^7_7 D^7_7 - D^{14}_7 D^7_{14} & = 0, \\
- D^1_7 D^7_1 + \tfrac{9}{4} D^7_7 D^7_7 + D^{27}_7 D^7_{27} & = 0, \qquad \qquad & \tfrac{3}{2} D^7_{14} D^7_7 - D^{27}_{14} D^7_{27} & = 0, \\
\tfrac{3}{2} D^7_{27} D^7_7 + D^{27}_{27} D^7_{27} & = 0, \qquad \qquad & D^7_{27} D^7_7 + D^{14}_{27} D^7_{14} & = 0, \\
D^7_1 D^{14}_7 & = 0, \qquad \qquad & \tfrac{3}{2} D^7_7 D^{14}_7 - D^{27}_7 D^{14}_{27} & = 0, \\
D^7_{27} D^{14}_7 - D^{27}_{27} D^{14}_{27} & = 0, \qquad \qquad & D^7_7 D^{27}_7 + D^{14}_7 D^{27}_{14} & = 0, \\
\tfrac{3}{2} D^7_7 D^{27}_7 + D^{27}_7 D^{27}_{27} & = 0, \qquad \qquad & D^7_{14} D^{27}_7 - D^{27}_{14} D^{27}_{27} & = 0.
\end{aligned}
\end{equation}
\end{cor}
\begin{proof}
These relations all follow from Figure~\ref{figure:d} and the fact that $\mathrm{d}^2 = 0$, by computing $\pi_{l'} \mathrm{d}^2 \pi_{l} : \Omega^k_l \to \Omega^{k+2}_{l'}$ for all $l, l' \in \{ 1, 7, 14, 27 \}$ and all $k = 0, \ldots, 5$. Some of the relations arise multiple times this way. Moreover, there are \emph{two distinct relations} for $(l, l') = (7,7)$, $(7,27)$, and $(27,7)$.
\end{proof}
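To illustrate how these relations arise, consider $X \in \Omega^1_7$ and decompose $\mathrm{d} X = \pi_7 \mathrm{d} X + \pi_{14} \mathrm{d} X$. Applying $\pi_7 \mathrm{d}$ and using parts (iii) and (v) of the proof of Proposition~\ref{prop:dfigure}, we obtain
\begin{equation*}
0 = \pi_7 \mathrm{d} (\mathrm{d} X) = \pi_7 \mathrm{d} (\pi_7 \mathrm{d} X) + \pi_7 \mathrm{d} (\pi_{14} \mathrm{d} X) = - \tfrac{3}{2} D^7_7 D^7_7 X + D^{14}_7 D^7_{14} X,
\end{equation*}
which is precisely the relation $\tfrac{3}{2} D^7_7 D^7_7 - D^{14}_7 D^7_{14} = 0$ in~\eqref{eq:d-relations}.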
\begin{cor} \label{cor:adjoints}
Consider the maps $D^l_m : \Omega^k_l \to \Omega^{k+1}_m$ introduced in Definition~\ref{defn:Doperators}. Recall that these were only defined for the smallest integer $k$ for which the composition makes sense. The formal adjoint is a map $(D^l_m)^* : \Omega^{k+1}_m \to \Omega^k_l$. With respect to the identifications described in~\eqref{eq:forms-isom}, these adjoint maps are given by
\begin{equation} \label{eq:adjoints}
\begin{aligned}
(D^1_7)^* & = -\tfrac{7}{3} D^7_1, \qquad & (D^7_7)^* & = 3 D^7_7, \qquad & (D^7_{14})^* & = 4 D^{14}_7, \\
(D^7_1)^* & = - D^1_7, \qquad & (D^7_{27})^* & = - \tfrac{4}{3} D^{27}_7, \qquad & (D^{14}_7)^* & = D^7_{14}, \\
(D^{14}_{27})^* & = - D^{27}_{14}, \qquad & (D^{27}_7)^* & = - D^7_{27}, \qquad & (D^{27}_{27})^* & = D^{27}_{27}, \\
(D^{27}_{14})^* & = - D^{14}_{27}.
\end{aligned}
\end{equation}
\end{cor}
\begin{proof}
These follow from Figure~\ref{figure:d} and the facts that $\dop{\dd} = (-1)^k \ast \mathrm{d} \ast$ on $\Omega^k$ and that $\ast$ is compatible with the identifications given in~\eqref{eq:forms-isom}.
\end{proof}
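For example, for the first entry of~\eqref{eq:adjoints}: the adjoint $(D^1_7)^*$ is the operator $\dop{\dd} : \Omega^1 \to \Omega^0$, so by Definition~\ref{defn:ops} and~\eqref{eq:D71-1} we have, under the identifications of~\eqref{eq:forms-isom},
\begin{equation*}
(D^1_7)^* X = \dop{\dd} X = - \ensuremath{\operatorname{div}} X = - \tfrac{7}{3} \big( \tfrac{3}{7} \ensuremath{\operatorname{div}} X \big) = - \tfrac{7}{3} D^7_1 X.
\end{equation*}
The remaining entries can be checked in a similar manner.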
\begin{rmk} \label{rmk:adjoints}
One has to be very careful with the ``equations'' in~\eqref{eq:adjoints}. In particular, taking the adjoint of both sides of an equation in~\eqref{eq:adjoints} in general violates $P^{**} = P$. This is because these are not really \emph{equalities}, but identifications, and recall that unfortunately the identifications in~\eqref{eq:bundle-decomp} are not isometries, as explained in Remark~\ref{rmk:not-isom}. However, this will not cause us any problems, because the notation $D^l_m$ will always only refer to the maps introduced in Definition~\ref{defn:Doperators}, and we will never have need to consider the adjoints of any other components of $\mathrm{d}$.
\end{rmk}
We can now describe the Hodge Laplacian $\Delta = \mathrm{d} \dop{\dd} + \dop{\dd} \mathrm{d}$ on each summand $\Omega^k_l$ in terms of the operators of Definition~\ref{defn:Doperators}.
\begin{prop} \label{prop:Laplacian}
On $\Omega^k_l$, the Hodge Laplacian $\Delta$ can be written as follows:
\begin{equation} \label{eq:Laplacian}
\begin{aligned}
\left. \Delta \right|_{\Omega^k_1} & = -\tfrac{7}{3} D^7_1 D^1_7 & & \text{for $k=0,3,4,7$}, \\
\left. \Delta \right|_{\Omega^k_7} & = 9 D^7_7 D^7_7 - \tfrac{7}{3} D^1_7 D^7_1 & & \text{for $k=1,2,3,4,5,6$}, \\
\left. \Delta \right|_{\Omega^k_{14}} & = 5 D^7_{14} D^{14}_7 - D^{27}_{14} D^{14}_{27} & & \text{for $k=2,5$}, \\
\left. \Delta \right|_{\Omega^k_{27}} & = -\tfrac{7}{3} D^7_{27} D^{27}_7 - D^{14}_{27} D^{27}_{14} + (D^{27}_{27})^2 & & \text{for $k=3,4$}.
\end{aligned}
\end{equation}
\end{prop}
\begin{proof}
Recall that $\dop{\dd} = (-1)^k \ast \mathrm{d} \ast$ on $\Omega^k$ and that $\ast$ is compatible with the identifications given in~\eqref{eq:forms-isom}. The expressions in~\eqref{eq:Laplacian} can be checked on a case-by-case basis using these facts, Figure~\ref{figure:d}, and the relations in Corollary~\ref{cor:d-relations}. Note that one can show from general principles that $\Delta$ preserves the splittings~\eqref{eq:bundle-decomp} when $\varphi$ is parallel, which we always assume. (See~\cite{Joyce} for details.) However, the proof of the present proposition gives an explicit verification of this fact, viewing it as a consequence of the fundamental relations~\eqref{eq:d-relations}.
\end{proof}
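As a quick check, on $\Omega^0_1$ the first line of~\eqref{eq:Laplacian} recovers the usual formula for the Laplacian on functions: using Definition~\ref{defn:ops} and~\eqref{eq:D71-1},
\begin{equation*}
\Delta f = (\mathrm{d} \dop{\dd} + \dop{\dd} \mathrm{d}) f = \dop{\dd} \mathrm{d} f = - \ensuremath{\operatorname{div}} (\ensuremath{\operatorname{grad}} f) = - \tfrac{7}{3} D^7_1 D^1_7 f.
\end{equation*}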
\begin{rmk} \label{rmk:TF}
We emphasize that for Proposition~\ref{prop:dfigure}, Corollary~\ref{cor:d-relations}, and Proposition~\ref{prop:Laplacian}, the torsion-free assumption is essential, as the proofs frequently make use of $\nabla \varphi = \nabla \psi = \mathrm{d} \varphi = \mathrm{d} \psi = 0$. For $\mathrm{G}_2$-structures with torsion, there would be many additional terms involving the torsion, and in particular the Laplacian $\Delta$ would \emph{not} preserve the splittings~\eqref{eq:bundle-decomp}. See also Remark~\ref{rmk:when-torsion-free}.
\end{rmk}
\begin{rmk} \label{rmk:Bryant}
As mentioned in the introduction, the results of Proposition~\ref{prop:dfigure}, Corollary~\ref{cor:d-relations}, and Proposition~\ref{prop:Laplacian} have appeared before in~\cite[Section 5.2, Tables 1--3]{Bryant}, where Bryant says the results follow by routine computation. We have presented all the details for completeness and so that readers can use these computational techniques for possible future applications. Note that one has to be careful when comparing our results with~\cite{Bryant}. First, we use a different orientation convention, which effectively replaces $\ast$ by $-\ast$ and $\psi$ by $-\psi$, although Bryant denotes the $3$-form by $\sigma$. Secondly, we use slightly different \emph{identifications} between the spaces $\Omega^k_l$ for different values of $k$. Finally, Bryant defines the ``fundamental'' operators differently. For example, Bryant's $\mathrm{d}^7_7$ is our $3 D^7_7$, and Bryant's $- \tfrac{3}{7} \mathrm{d}^7_1$ is our $D^7_1$. We did notice at least one typographical error in~\cite{Bryant}. The equation $\mathrm{d} (\alpha \wedge \ast_{\sigma} \sigma) = - \ast_{\sigma} \mathrm{d}^7_7 \alpha$ in Table 1 is inconsistent with the definition $\mathrm{d}^7_7 \alpha = \ast_{\sigma} ( \mathrm{d} (\alpha \wedge \ast_{\sigma} \sigma))$ on the previous page, since $(\ast_{\sigma})^2 = +1$, not $-1$.
\end{rmk}
From now on we assume $M$ is \emph{compact}, as we will be using Hodge theory throughout. Moreover, we can integrate by parts, so if $P$ is a linear operator on forms, then $P \alpha = 0 \iff P^* P \alpha = 0$, which we will use often. The next result relates the kernel of the operators in Definition~\ref{defn:Doperators} with harmonic $1$-forms. This result is \emph{fundamental} and is used often in the rest of the paper.
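Explicitly, if $P^* P \alpha = 0$, then
\begin{equation*}
0 = \int_M g( P^* P \alpha, \alpha ) \, \mathsf{vol} = \int_M g( P \alpha, P \alpha ) \, \mathsf{vol} = \| P \alpha \|_{L^2}^2,
\end{equation*}
so $P \alpha = 0$; the converse implication is trivial.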
\begin{thm} \label{thm:harmonic1}
We have $\ker D^7_7 = \ker D^7_{14}$. Furthermore, let $\mathcal H^1 = \ker \left. \Delta \right|_{\Omega^1}$ denote the space of harmonic $1$-forms. Then we have
\begin{equation} \label{eq:harmonic1}
\begin{aligned}
\mathcal H^1 & = \ker D^7_1 \cap \ker D^7_7 \cap \ker D^7_{14} \\
& = \ker D^7_1 \cap \ker D^7_7 \\
& = \ker D^7_1 \cap \ker D^7_{27} \\
& = \ker D^7_7 \cap \ker D^7_{27}.
\end{aligned}
\end{equation}
In particular, the intersection of \emph{any two of the three} spaces $\ker D^7_1$, $\ker D^7_7$, $\ker D^7_{27}$ is $\mathcal H^1$.
\end{thm}
\begin{proof}
From Corollary~\ref{cor:adjoints}, on $\Omega^1_7$ we have that $\dop{\dd} = (D^1_7)^* : \Omega^1_7 \to \Omega^0_1$ equals $-\tfrac{7}{3} D^7_1$, and thus
\begin{align*}
\mathcal{H}^1 &= (\ker \mathrm{d})^1 \cap (\ker \dop{\dd})^1 = \ker (D^7_7 +D^7_{14}) \cap \ker (-\tfrac{7}{3} D^7_1)\\ &= \ker D^7_7 \cap \ker D^7_{14} \cap \ker D^7_1,
\end{align*}
establishing the first equality in~\eqref{eq:harmonic1}.
Similarly from Corollary~\ref{cor:adjoints}, we have $(D^7_7)^* = 3 D^7_7$ and $(D^7_{14})^* = 4 D^{14}_7$. Hence, using $D^{14}_7 D^7_{14} = \tfrac{3}{2} D^7_7 D^7_7$ from~\eqref{eq:d-relations}, we have
\begin{align*}
D^7_7 \alpha = 0 &\iff (D^7_7)^* D^7_7 \alpha = 3 D^7_7 D^7_7 \alpha = 0 \\
& \iff D^{14}_7 D^7_{14} \alpha = \tfrac{1}{4} (D^7_{14})^* D^7_{14} \alpha = 0 \\
& \iff D^7_{14} \alpha = 0.
\end{align*}
Thus we deduce that $\ker D^7_7 = \ker D^7_{14}$ as claimed, and hence the second equality in~\eqref{eq:harmonic1} follows.
Finally, from Corollary~\ref{cor:adjoints} we have $(D^7_1)^* = - D^1_7$ and $(D^7_{27})^* = - \tfrac{4}{3} D^{27}_7$ and $(D^7_7)^* = 3 D^7_7$. Thus the relation $- D^1_7 D^7_1 + \tfrac{9}{4} D^7_7 D^7_7 + D^{27}_7 D^7_{27} = 0$ from~\eqref{eq:d-relations} can be written as
\begin{equation*}
(D^7_1)^* D^7_1 + \tfrac{3}{4} (D^7_7)^* D^7_7 - \tfrac{3}{4} (D^7_{27})^* D^7_{27} = 0.
\end{equation*}
From the above relation we easily deduce again by integration by parts that any two of $D^7_1 \alpha = 0$, $D^7_7 \alpha = 0$, $D^7_{27} \alpha = 0$ implies the third, establishing the remaining equalities in~\eqref{eq:harmonic1}.
\end{proof}
\subsection{The derivations $\mathcal L_B$ and $\mathcal L_K$ and their properties} \label{sec:LBLK}
We begin with a brief discussion of derivations on $\Omega^{\bullet}$ arising from vector-valued forms on a general $n$-manifold $M$. A good reference for this material is~\cite{KMS}. We use notation similar to~\cite{CKT,dKS}.
Let $\Omega^r_{TM} = \Gamma(\Lambda^r (T^* M) \otimes TM)$ be the space of vector-valued $r$-forms on $M$. Each element $K \in \Omega^r_{TM}$ induces two derivations on $\Omega^{\bullet}$: the \emph{algebraic derivation} $\iota_K$, of degree $r-1$, and the \emph{Nijenhuis-Lie derivation} $\mathcal{L}_K$, of degree $r$. They are defined as follows. Let $\{ e_1, \ldots, e_n \}$ be a (local) tangent frame with dual coframe $\{ e^1, \ldots, e^n \}$. Then locally $K = K^j e_j$ where each $K^j$ is an $r$-form. The operation $\iota_K : \Omega^k \to \Omega^{k+r-1}$ is defined to be
\begin{equation} \label{eq:alg-derivation}
\iota_K \alpha = K^j \wedge (e_j \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha),
\end{equation}
where $e_j \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \cdot$ is the interior product with $e_j$. The operation $\iota_K$ is well-defined and is a derivation on $\Omega^{\bullet}$. Moreover, $\iota_K$ vanishes on functions, so $\iota_K (h \alpha) = h (\iota_K \alpha)$ for any $h \in \Omega^0$ and $\alpha \in \Omega^k$, which justifies why $\iota_K$ is called algebraic. If $Y \in \Omega^1$, then
\begin{equation} \label{eq:alg-derivation1}
(\iota_K Y) (X_1, \ldots, X_r) = Y( K(X_1, \ldots, X_r) ).
\end{equation}
The operation $\mathcal{L}_K : \Omega^k \to \Omega^{k+r}$ is defined to be
\begin{equation} \label{eq:Lie-derivation}
\mathcal{L}_K \alpha = \iota_K (\mathrm{d} \alpha) - (-1)^{r-1} \mathrm{d} (\iota_K \alpha) = [ \iota_K, \mathrm{d} ] \alpha.
\end{equation}
That is, $\mathcal{L}_K$ is the graded commutator of $\iota_K$ and $\mathrm{d}$. The graded Jacobi identity on the space of graded linear operators on $\Omega^{\bullet}$ and $\mathrm{d}^2 = 0$ together imply that
\begin{equation} \label{eq:commdL}
[ \mathrm{d}, \mathcal{L}_K ] = \mathrm{d} \mathcal{L}_K - (-1)^r \mathcal{L}_K \mathrm{d} = 0.
\end{equation}
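Indeed, since $\mathrm{d}^2 = 0$, equation~\eqref{eq:Lie-derivation} gives
\begin{equation*}
\mathrm{d} \mathcal{L}_K = \mathrm{d} \iota_K \mathrm{d}, \qquad \mathcal{L}_K \mathrm{d} = - (-1)^{r-1} \mathrm{d} \iota_K \mathrm{d} = (-1)^r \mathrm{d} \iota_K \mathrm{d},
\end{equation*}
so~\eqref{eq:commdL} can also be verified directly. As a familiar special case, if $r = 0$, so that $K = X$ is a vector field, then $\iota_K$ is the usual interior product with $X$ and~\eqref{eq:Lie-derivation} becomes Cartan's formula, so $\mathcal{L}_K$ is the ordinary Lie derivative $\mathcal{L}_X$.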
From now on, let $g$ be a Riemannian metric on $M$.
\begin{lemma} \label{lemma:iotaKfromform}
Let $K \in \Omega^r_{TM}$ be obtained from an $(r+1)$-form $\eta$ by raising the last index. That is, $g( K(X_1, \ldots, X_r), X_{r+1}) = \eta(X_1, \ldots, X_{r+1})$. In a local frame we have $K_{i_1 \cdots i_r}^q = \eta_{i_1 \cdots i_r p} g^{pq}$. The operator $\iota_K$ is of degree $r-1$. For any $\alpha \in \Omega^k$, the $(k+r-1)$-form $\iota_K \alpha$ is given by
\begin{equation} \label{eq:iotaKframe}
\iota_K \alpha = (-1)^r g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha).
\end{equation}
\end{lemma}
\begin{proof}
In a local frame we have $K = \tfrac{1}{r!} K_{i_1 \cdots i_r}^q e^{i_1} \wedge \cdots \wedge e^{i_r} \otimes e_q$, and thus from~\eqref{eq:alg-derivation} we have
\begin{align*}
\iota_K \alpha & = \tfrac{1}{r!} K_{i_1 \cdots i_r}^q e^{i_1} \wedge \cdots \wedge e^{i_r} \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha) \\
& = \tfrac{1}{r!} \eta_{i_1 \cdots i_r p} g^{pq} e^{i_1} \wedge \cdots \wedge e^{i_r} \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha) \\
& = (-1)^r g^{pq} \big( \tfrac{1}{r!} \eta_{p i_1 \cdots i_r} e^{i_1} \wedge \cdots \wedge e^{i_r} \big) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha) \\
& = (-1)^r g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha)
\end{align*}
as claimed.
\end{proof}
\begin{cor} \label{cor:iotaKzero}
Let $K$ be as in Lemma~\ref{lemma:iotaKfromform}. If $\alpha \in \Omega^{n-(r-1)}$, then $\iota_K \alpha = 0$ in $\Omega^n$.
\end{cor}
\begin{proof}
Let $\alpha \in \Omega^{n-(r-1)}$. Since $e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta \in \Omega^r$, the form $(e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge \alpha$ is of degree $(n+1)$ and hence zero. Taking the interior product with $e_q$, we have
\begin{equation*}
0 = e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \big( (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge \alpha \big) = (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge \alpha + (-1)^r (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha).
\end{equation*}
Thus, by the skew-symmetry of $e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta$ in $p,q$, we find from~\eqref{eq:iotaKframe} that
\begin{equation*}
\iota_K \alpha = (-1)^r g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha) = - g^{pq} (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge \alpha = 0
\end{equation*}
as claimed.
\end{proof}
\begin{cor} \label{cor:iotaKstar}
Let $K$ be as in Lemma~\ref{lemma:iotaKfromform}. Then the adjoint $\iota_K^*$ is a degree $-(r-1)$ operator on $\Omega^{\bullet}$ and satisfies
\begin{equation} \label{eq:iotaKstar}
\iota_K^* \beta = (-1)^{nk + rk + nr + n + 1} \ast \iota_K \ast \beta \qquad \text{ for $\beta \in \Omega^k$}.
\end{equation}
\end{cor}
\begin{proof}
Let $\alpha \in \Omega^{k-(r-1)}$ and $\beta \in \Omega^k$. Then $\alpha \wedge \ast \beta \in \Omega^{n-(r-1)}$, so by Lemma~\ref{lemma:iotaKfromform} we have $\iota_K (\alpha \wedge \ast \beta) = 0$. Since $\iota_K$ is a derivation of degree $r-1$, and $\iota_K \ast \beta$ is an $(n-k+r-1)$-form, this can be written as
\begin{align*}
0 & = (\iota_K \alpha) \wedge \ast \beta + (-1)^{(r-1)(k-(r-1))} \alpha \wedge (\iota_K \ast \beta) \\
& = g(\iota_K \alpha, \beta) \mathsf{vol} + (-1)^{rk + k + r + 1} \alpha \wedge (-1)^{(n-k+r-1)(k-r+1)} \ast (\ast \iota_K \ast \beta) \\
& = g(\iota_K \alpha, \beta) \mathsf{vol} + (-1)^{rk + k + r + 1} (-1)^{k + r + 1 + nk + nr + n} g(\alpha, \ast \iota_K \ast \beta) \mathsf{vol} \\
& = g(\iota_K \alpha, \beta) \mathsf{vol} + (-1)^{nk + rk + nr + n} g(\alpha, \ast \iota_K \ast \beta) \mathsf{vol},
\end{align*}
and hence $\iota_K^* \beta = (-1)^{nk + rk + nr + n + 1} \ast \iota_K \ast \beta$ as claimed.
\end{proof}
Now let $(M, \varphi)$ be a manifold with $\mathrm{G}_2$-structure. In particular, $n=7$ from now on.
\begin{defn} \label{defn:BK}
From the $\mathrm{G}_2$-structure $\varphi$ on $M$, we obtain two particular vector-valued forms $B \in \Omega^2_{TM}$ and $K \in \Omega^3_{TM}$ by raising the last index on the forms $\varphi$ and $\psi$, respectively. That is,
\begin{equation*}
g( B(X, Y), Z) = \varphi(X, Y, Z), \qquad g( K(X, Y, Z), W) = \psi (X, Y, Z, W).
\end{equation*}
In local coordinates we have
\begin{equation*}
B_{ij}^q = \varphi_{ijp} g^{pq}, \qquad K_{ijk}^q = \psi_{ijkp} g^{pq}.
\end{equation*}
The vector-valued $2$-form $B$ is also called the \emph{cross product} induced by $\varphi$, and, up to a factor of $-\tfrac{1}{2}$, the vector-valued $3$-form $K$ is called the \emph{associator}. (See~\cite[p.116]{HL} for details.) Thus $\iota_B$ and $\iota_K$ are algebraic derivations on $\Omega^{\bullet}$ of degrees $1$ and $2$, respectively. We also have the associated Nijenhuis-Lie derivations $\mathcal L_B$ and $\mathcal L_K$. From~\eqref{eq:Lie-derivation} we have
\begin{equation} \label{eq:LBK}
\mathcal L_B = \iota_B \mathrm{d} + \mathrm{d} \iota_B, \qquad \mathcal L_K = \iota_K \mathrm{d} - \mathrm{d} \iota_K.
\end{equation}
The operators $\mathcal L_B$ and $\mathcal L_K$ are of degree $2$ and $3$, respectively.
\end{defn}
\begin{rmk} \label{rmk:chi}
In much of the literature the associator $K$ is denoted by $\chi$, but we are following the convention of~\cite{CKT, dKS} of denoting vector-valued forms by capital Roman letters.
\end{rmk}
\begin{prop} \label{prop:derivationsstar}
Let $\iota_B$, $\iota_K$, $\mathcal L_B$, and $\mathcal L_K$ be as in Definition~\ref{defn:BK}. Then on $\Omega^k$, we have
\begin{equation} \label{eq:derivationsstar}
\begin{aligned}
\iota_B^* & = (-1)^k \ast \iota_B \ast, & \qquad \iota_K^* & = -\ast \iota_K \ast, \\
\mathcal L_B^* & = - \ast \mathcal L_B \ast, & \qquad \mathcal L_K^* & = (-1)^k \ast \mathcal L_K \ast.
\end{aligned}
\end{equation}
\end{prop}
\begin{proof}
The first pair of equations follows from~\eqref{eq:iotaKstar} with $n=7$ and $r=2,3$, respectively: the exponent $nk + rk + nr + n + 1$ becomes $9k + 22 \equiv k \pmod 2$ for $r=2$ and $10k + 29 \equiv 1 \pmod 2$ for $r=3$. In odd dimensions, $\dop{\dd} = (-1)^k \ast \mathrm{d} \ast$ on $k$-forms, and $\ast^2 = 1$. The second pair of equations follows from these facts and from taking adjoints of~\eqref{eq:LBK}.
\end{proof}
The operations $\iota_B$ and $\iota_K$ are morphisms of $\mathrm{G}_2$-representations, and in fact they act as constant multiples on each summand $\Omega^l_{l'}$ after our identifications~\eqref{eq:forms-isom}. We will prove this in Propositions~\ref{prop:iotaBfigure} and~\ref{prop:iotaKfigure}, but first we need to collect several preliminary results.
\begin{lemma} \label{lemma:iotaBKprelim1}
Let $f \in \Omega^0$ and $X \in \Omega^1$. The following identities hold:
\begin{equation} \label{eq:iotaBKprelimeq1}
\begin{aligned}
\iota_B f & = 0, & \qquad \iota_K f & = 0, \\
\iota_B X & = X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi, & \qquad \iota_K X & = - X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
The first pair of equations are immediate since any algebraic derivation vanishes on functions. Letting $\alpha = X$ in~\eqref{eq:iotaKframe} gives $\iota_K X = (-1)^r g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta) \wedge X_q = (-1)^r X^p e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta = (-1)^r X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \eta$. The second pair of equations now follows using $r = 2$ for $\eta = \varphi$ and $r = 3$ for $\eta = \psi$.
\end{proof}
\begin{lemma} \label{lemma:iotaBKprelim2}
The following identities hold:
\begin{equation} \label{eq:iotaBKprelimeq2}
\begin{aligned}
\iota_B \varphi & = - 6 \psi, & \qquad \iota_K \varphi & = 0, \\
\iota_B \psi & = 0, & \qquad \iota_K \psi & = 0.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
To establish each of these, we use~\eqref{eq:iotaKframe} and Proposition~\ref{prop:special} with $h = g$. First, using~\eqref{eq:ellg} and $\operatorname{Tr}_g g = 7$, we have
\begin{equation*}
\iota_B \varphi = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = - 2 (\operatorname{Tr}_g g) \psi + 2 \ell_{\psi} g = - 14 \psi + 8 \psi = -6 \psi.
\end{equation*}
Similarly from Proposition~\ref{prop:special} we find that
\begin{equation*}
\iota_B \psi = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = 0,
\end{equation*}
and hence also $\iota_K \varphi = - g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = - \iota_B \psi = 0$. Finally, again from Proposition~\ref{prop:special} we deduce that
\begin{equation*}
\iota_K \psi = -g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = 0
\end{equation*}
as well.
\end{proof}
\begin{lemma} \label{lemma:iotaBKprelim3}
Let $X \in \Omega^1$. The following identities hold:
\begin{equation} \label{eq:iotaBKprelimeq3}
\begin{aligned}
\iota_B (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = 3 (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi), & \qquad \iota_K (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) & = 3 \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi), \\
\iota_B (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = -3 \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi), & \qquad \iota_K (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) & = -4 \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi).
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
Let $X = X^m e_m$. By linearity of derivations and~\eqref{eq:iotaKframe} we have
\begin{align*}
\iota_B (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) & = X^m g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta), \\
\iota_K (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) & = - X^m g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta).
\end{align*}
The equations in~\eqref{eq:iotaBKprelimeq3} now follow immediately from Proposition~\ref{prop:special2}.
\end{proof}
\begin{lemma} \label{lemma:iotaBKprelim4}
Let $\beta \in \Omega^2_{14}$. The following identities hold:
\begin{equation} \label{eq:iotaBKprelimeq4}
\iota_B \beta = 0, \qquad \qquad \iota_K \beta = 0.
\end{equation}
\end{lemma}
\begin{proof}
We use the notation of Proposition~\ref{prop:special2}. Let $\beta \in \Omega^2_{14}$. Using~\eqref{eq:iotaKframe} and~\eqref{eq:ellon14} we compute
\begin{align*}
\iota_B \beta & = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) \\
& = \tfrac{1}{2} g^{pq} \varphi_{pij} \beta_{qk} e^{ijk} \\
& = - \tfrac{1}{6} (\beta_{kq} g^{qp} \varphi_{pij} + \beta_{iq} g^{qp} \varphi_{pjk} + \beta_{jq} g^{qp} \varphi_{pki} ) e^{ijk} = 0.
\end{align*}
Similarly, again using~\eqref{eq:iotaKframe} and~\eqref{eq:ellon14} we compute
\begin{align*}
\iota_K \beta & = -g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) \\
& = -\tfrac{1}{6} g^{pq} \psi_{pijk} \beta_{ql} e^{ijkl} \\
& = + \tfrac{1}{24} (\beta_{lq} g^{qp} \psi_{pijk} - \beta_{iq} g^{qp} \psi_{pljk} - \beta_{jq} g^{qp} \psi_{pilk} - \beta_{kq} g^{qp} \psi_{pijl} ) e^{ijkl} = 0
\end{align*}
as claimed.
\end{proof}
We are now ready to establish the actions of $\iota_B$ and $\iota_K$ on the summands of $\Omega^{\bullet}$ with respect to the identifications~\eqref{eq:forms-isom}.
\begin{figure}
\caption{Decomposition of the algebraic derivation $\iota_B$ into components}
\label{figure:iotaB}
\end{figure}
\begin{prop} \label{prop:iotaBfigure}
With respect to the identifications described in~\eqref{eq:forms-isom}, the components of the operator $\iota_B$ satisfy the relations given in Figure~\ref{figure:iotaB}.
\end{prop}
\begin{proof}
The derivation $\iota_B$ is of degree $1$, so it vanishes on $\Omega^7$. Moreover, by Corollary~\ref{cor:iotaKzero} it also vanishes on $\Omega^6$. We establish the rest of Figure~\ref{figure:iotaB} column by column.
$\Omega^k_1$ column: This follows from~\eqref{eq:iotaBKprelimeq1} and~\eqref{eq:iotaBKprelimeq2}. In particular, the map $\iota_B : \Omega^3_1 \to \Omega^4_1$ is identified with multiplication by $-6$.
$\Omega^k_7$ column: The map $\iota_B : \Omega^1_7 \to \Omega^2_7$ is identified with multiplication by $1$ by~\eqref{eq:iotaBKprelimeq1}. The maps $\iota_B : \Omega^2_7 \to \Omega^3_7$ and $\iota_B : \Omega^3_7 \to \Omega^4_7$ are identified with multiplication by $3$ and $-3$, respectively, by~\eqref{eq:iotaBKprelimeq3}. Let $\ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = \varphi \wedge X \in \Omega^4_7$. Then
\begin{align*}
\iota_B \big( \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \big) & = \iota_B (\varphi \wedge X) = (\iota_B \varphi) \wedge X - \varphi \wedge (\iota_B X) \\
& = (-6 \psi) \wedge X - \varphi \wedge (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = - 6 \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) + 2 \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \\
& = - 4 \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi),
\end{align*}
and hence the map $\iota_B : \Omega^4_7 \to \Omega^5_7$ is identified with multiplication by $-4$. Finally, let $\ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = \psi \wedge X \in \Omega^5_7$. Then
\begin{align*}
\iota_B \big( \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \big) & = \iota_B (\psi \wedge X) = (\iota_B \psi) \wedge X + \psi \wedge (\iota_B X) \\
& = 0 + \psi \wedge (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) = 3 \ast X,
\end{align*}
and hence the map $\iota_B : \Omega^5_7 \to \Omega^6_7$ is identified with multiplication by $3$.
$\Omega^k_{14}$ column: The map $\iota_B$ on $\Omega^2_{14}$ is zero by Lemma~\ref{lemma:iotaBKprelim4}. Let $\mu = \ast \beta \in \Omega^5_{14}$ where $\beta \in \Omega^2_{14}$. Then $\mu = \ast \beta = \varphi \wedge \beta$, so $\iota_B \mu = (\iota_B \varphi) \wedge \beta - \varphi \wedge (\iota_B \beta) = - 6 \psi \wedge \beta - 0 = 0$, by the description of $\Omega^2_{14}$ in~\eqref{eq:forms-isom}.
$\Omega^k_{27}$ column: Let $\gamma = \ell_{\varphi} h \in \Omega^3_{27}$, where $h \in S^2_0 (T^* M)$. By~\eqref{eq:ellphdefn} we have $\gamma = h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi)$. Since $\iota_B$ is algebraic, we can pull out functions, and using~\eqref{eq:iotaBKprelimeq1} and~\eqref{eq:iotaBKprelimeq3} we compute
\begin{align*}
\iota_B \gamma & = \iota_B \big( h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \big) \\
& = h_{kl} g^{lm} \big( (\iota_B e^k) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) - e^k \wedge \iota_B (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \big) \\
& = h_{kl} g^{lm} \big( g^{kp} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) - e^k \wedge (3 e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \big) \\
& = h^{pm} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) - 3 h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi).
\end{align*}
By~\eqref{eq:specialprop} and~\eqref{eq:ellpsdefn}, since $\operatorname{Tr}_g h = 0$, the above expression is
\begin{equation*}
\iota_B \gamma = 2 \ell_{\psi} h - 3 \ell_{\psi} h = - \ell_{\psi} h.
\end{equation*}
Using Lemma~\ref{lemma:starell}, we conclude that $\iota_B (\ell_{\varphi} h) = \ast (\ell_{\varphi} h)$, and thus the map $\iota_B : \Omega^3_{27} \to \Omega^4_{27}$ is identified with multiplication by $1$. Finally, let $\eta = \ell_{\psi} h \in \Omega^4_{27}$, where $h \in S^2_0 (T^* M)$. By~\eqref{eq:ellpsdefn} we have $\eta = h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi)$. Computing as before, we find
\begin{align*}
\iota_B \eta & = \iota_B \big( h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \big) \\
& = h_{kl} g^{lm} \big( (\iota_B e^k) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) - e^k \wedge \iota_B (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \big) \\
& = h_{kl} g^{lm} \big( g^{kp} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) - e^k \wedge (-3 \ast( e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) ) \big) \\
& = h^{pm} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) + 3 h_{kl} g^{lm} e^k \wedge (\varphi \wedge (e_m)^{\flat}).
\end{align*}
Using~\eqref{eq:specialprop}, the above expression becomes
\begin{equation*}
\iota_B \eta = 0 + 3 h_{kl} g^{lm} e^k \wedge \varphi \wedge (g_{mp} e^p) = - 3 h_{kp} e^k \wedge e^p \wedge \varphi = 0,
\end{equation*}
so the map $\iota_B$ on $\Omega^4_{27}$ is zero.
\end{proof}
\begin{figure}
\caption{Decomposition of the algebraic derivation $\iota_K$ into components}
\label{figure:iotaK}
\end{figure}
\begin{prop} \label{prop:iotaKfigure}
With respect to the identifications described in~\eqref{eq:forms-isom}, the components of the operator $\iota_K$ satisfy the relations given in Figure~\ref{figure:iotaK}.
\end{prop}
\begin{proof}
The derivation $\iota_K$ is of degree $2$, so it vanishes on $\Omega^6$ and $\Omega^7$. Moreover, by Corollary~\ref{cor:iotaKzero} it also vanishes on $\Omega^5$. We establish the rest of Figure~\ref{figure:iotaK} column by column. Note that $\iota_K$ preserves the parity (even/odd) of forms.
$\Omega^k_1$ column: This follows from~\eqref{eq:iotaBKprelimeq1} and~\eqref{eq:iotaBKprelimeq2}.
$\Omega^k_7$ column: The map $\iota_K : \Omega^1_7 \to \Omega^3_7$ is identified with multiplication by $-1$ by~\eqref{eq:iotaBKprelimeq1}. The maps $\iota_K : \Omega^2_7 \to \Omega^4_7$ and $\iota_K : \Omega^3_7 \to \Omega^5_7$ are identified with multiplication by $3$ and $-4$, respectively, by~\eqref{eq:iotaBKprelimeq3}. Let $\ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = \varphi \wedge X \in \Omega^4_7$. Then, since $\iota_K$ is an even derivation,
\begin{align*}
\iota_K \big( \ast (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \big) & = \iota_K (\varphi \wedge X) = (\iota_K \varphi) \wedge X + \varphi \wedge (\iota_K X) \\
& = 0 + \varphi \wedge (-X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = -\varphi \wedge (X \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) = 4 \ast X
\end{align*}
and hence the map $\iota_K : \Omega^4_7 \to \Omega^6_7$ is identified with multiplication by $4$.
$\Omega^k_{14}$ column: The map $\iota_K$ on $\Omega^2_{14}$ is zero by Lemma~\ref{lemma:iotaBKprelim4}.
$\Omega^k_{27}$ column: Let $\gamma = \ell_{\varphi} h \in \Omega^3_{27}$, where $h \in S^2_0 (T^* M)$. By~\eqref{eq:ellphdefn} we have $\gamma = h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi)$. Computing as in the proof of Proposition~\ref{prop:iotaBfigure}, we find that
\begin{align*}
\iota_K \gamma & = \iota_K \big( h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \big) \\
& = h_{kl} g^{lm} \big( (\iota_K e^k) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) - e^k \wedge \iota_K (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \big) \\
& = h_{kl} g^{lm} \big( -g^{kp} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) - e^k \wedge (3 \ast( e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) ) \big) \\
& = -h^{pm} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) - 3 h_{kl} g^{lm} e^k \wedge \varphi \wedge (e_m)^{\flat}.
\end{align*}
The first term vanishes by~\eqref{eq:specialprop} and the second term vanishes as it is $-3 h_{kl} g^{lm} g_{mp} e^k \wedge \varphi \wedge e^p = 3 h_{kp} e^k \wedge e^p \wedge \varphi = 0$.
Thus the map $\iota_K$ vanishes on $\Omega^3_{27}$. Finally, let $\eta = \ell_{\psi} h \in \Omega^4_{27}$, where $h \in S^2_0 (T^* M)$. By~\eqref{eq:ellpsdefn} we have $\eta = h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi)$. Computing as before, we find
\begin{align*}
\iota_K \eta & = \iota_K \big( h_{kl} g^{lm} e^k \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \big) \\
& = h_{kl} g^{lm} \big( (\iota_K e^k) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) - e^k \wedge \iota_K (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \big) \\
& = h_{kl} g^{lm} \big( -g^{kp} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) - e^k \wedge (-4 \ast( e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) ) \big) \\
& = -h^{pm} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (e_m \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) + 4 h_{kl} g^{lm} e^k \wedge (\psi \wedge (e_m)^{\flat}).
\end{align*}
Again, the first term vanishes by~\eqref{eq:specialprop} and the second term vanishes as it is $4 h_{kl} g^{lm} g_{mp} e^k \wedge \psi \wedge e^p = 4 h_{kp} e^k \wedge e^p \wedge \psi = 0$.
Thus the map $\iota_K$ vanishes on $\Omega^4_{27}$.
\end{proof}
\begin{figure}
\caption{Decomposition of the Nijenhuis-Lie derivation $\mathcal L_B$ into components}
\label{figure:LB}
\end{figure}
\begin{figure}
\caption{Decomposition of the Nijenhuis-Lie derivation $\mathcal L_K$ into components}
\label{figure:LK}
\end{figure}
From now on in the paper, we always assume that $(M, \varphi)$ is torsion-free. See also Remark~\ref{rmk:when-torsion-free}.
\begin{cor} \label{cor:LBLKfigures}
With respect to the identifications described in~\eqref{eq:forms-isom}, the components of the operators $\mathcal L_B$ and $\mathcal L_K$ satisfy the relations given in Figures~\ref{figure:LB} and~\ref{figure:LK}.
\end{cor}
\begin{proof}
This is straightforward to verify from Figures~\ref{figure:d},~\ref{figure:iotaB}, and~\ref{figure:iotaK} using the equations in~\eqref{eq:LBK}.
\end{proof}
Next we discuss some properties of $\mathcal L_B$ and $\mathcal L_K$.
\begin{lemma} \label{lemma:LBLKformula}
Let $\alpha$ be a form. In a local frame, the actions of $\mathcal L_B$ and $\mathcal L_K$ are given by
\begin{equation} \label{eq:LBLKframe}
\begin{aligned}
\mathcal L_B \alpha & = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (\nabla_q \alpha), \\
\mathcal L_K \alpha & = - g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \psi) \wedge (\nabla_q \alpha).
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
It is clear that both expressions in~\eqref{eq:LBLKframe} are independent of the choice of frame. To establish these expressions at $x \in M$, we choose a local frame determined by Riemannian normal coordinates centred at $x$. In particular, at the point $x$ we have $\nabla_p e_j = 0$ and $\nabla_p e^j = 0$. Recalling that $(M, \varphi)$ is torsion-free, so that $\nabla \varphi = 0$, and using~\eqref{eq:LBK},~\eqref{eq:iotaKframe}, and~\eqref{eq:dd} at the point $x$, we compute
\begin{align*}
\mathcal L_B \alpha & = (\iota_B \mathrm{d} + \mathrm{d} \iota_B) \alpha \\
& = \iota_B (e^m \wedge \nabla_m \alpha) + e^m \wedge \nabla_m (\iota_B \alpha) \\
& = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \big( e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} (e^m \wedge \nabla_m \alpha) \big) + e^m \wedge \nabla_m \big( g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \alpha) \big) \\
& = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \big( \delta^m_q \nabla_m \alpha - e^m \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \nabla_m \alpha) \big) + g^{pq} e^m \wedge (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (e_q \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \nabla_m \alpha) \\
& = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \nabla_q \alpha,
\end{align*}
establishing the first equation in~\eqref{eq:LBLKframe}. The other equation is proved similarly using $\nabla \psi = 0$.
\end{proof}
\begin{cor} \label{cor:LBLKds}
For any form $\alpha$, we have
\begin{equation} \label{eq:LBLKds}
\begin{aligned}
\mathcal L_B \alpha & = - \dop{\dd}(\varphi \wedge \alpha) - \varphi \wedge \dop{\dd} \alpha, \\
\mathcal L_K \alpha & = \dop{\dd} (\psi \wedge \alpha) - \psi \wedge \dop{\dd} \alpha.
\end{aligned}
\end{equation}
\end{cor}
\begin{proof}
Consider a local frame determined by Riemannian normal coordinates centred at $x \in M$ as in the proof of Lemma~\ref{lemma:LBLKformula}. Using~\eqref{eq:LBLKframe} and~\eqref{eq:ds}, we compute
\begin{align*}
\mathcal L_B \alpha & = g^{pq} (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (\nabla_q \alpha) \\
& = g^{pq} \big( e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} ( \varphi \wedge \nabla_q \alpha) + \varphi \wedge (e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \nabla_q \alpha) \big) \\
& = g^{pq} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \nabla_q (\varphi \wedge \alpha) + \varphi \wedge ( g^{pq} e_p \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \nabla_q \alpha) \\
& = - \dop{\dd} (\varphi \wedge \alpha) - \varphi \wedge (\dop{\dd} \alpha),
\end{align*}
establishing the first equation in~\eqref{eq:LBLKds}. The other equation is proved similarly.
\end{proof}
\begin{prop} \label{prop:LBLKproperties}
The derivations $\mathcal L_B$ and $\mathcal L_K$ satisfy the following identities:
\begin{align} \label{eq:LBLKdscom}
\mathcal L_B \dop{\dd} & = \dop{\dd} \mathcal L_B, & \mathcal L_K \dop{\dd} & = -\dop{\dd} \mathcal L_K, \\ \label{eq:LBLKLap}
\mathcal L_B \Delta & = \Delta \mathcal L_B, & \mathcal L_K \Delta & = \Delta \mathcal L_K, \\
\label{eq:LcomposeL}
\mathcal L_B \mathcal L_K = \mathcal L_K \mathcal L_B & = 0, & (\mathcal L_K)^2 & = 0,
\end{align}
and
\begin{equation} \label{eq:LBLKonH}
\mathcal L_B = \mathcal L_K = 0 \text{ on } \mathcal{H}^k \quad \text{if $M$ is compact}.
\end{equation}
\end{prop}
\begin{proof}
The identities in~\eqref{eq:LBLKdscom}--\eqref{eq:LcomposeL} can be verified directly from the Figures~\ref{figure:d},~\ref{figure:LB}, and~\ref{figure:LK} using $\dop{\dd} = (-1)^k \ast \mathrm{d} \ast$ on $\Omega^k$ and $\Delta = \mathrm{d} \dop{\dd} + \dop{\dd} \mathrm{d}$, the identities in Corollary~\ref{cor:d-relations}, and recalling that our identifications were chosen compatible with $\ast$.
However, we now give an alternative proof of the first equation in~\eqref{eq:LBLKdscom} that is less tedious and more illuminating. A similar proof establishes the second equation in~\eqref{eq:LBLKdscom}. (In fact this proof can be found in~\cite{KLS2}). Using~\eqref{eq:LBLKds} and $(\dop{\dd})^2 = 0$, we compute
\begin{align*}
\mathcal L_B \dop{\dd} \alpha & = - \dop{\dd} (\varphi \wedge \dop{\dd} \alpha) - \varphi \wedge \big( \dop{\dd} (\dop{\dd} \alpha) \big) \\
& = \dop{\dd} \big( - \varphi \wedge (\dop{\dd} \alpha) - \dop{\dd} (\varphi \wedge \alpha) \big) \\
& = \dop{\dd} \mathcal L_B \alpha.
\end{align*}
The equations in~\eqref{eq:LBLKLap} can also be established from~\eqref{eq:LBLKdscom},~\eqref{eq:commdL}, and $\Delta = \mathrm{d} \dop{\dd} + \dop{\dd} \mathrm{d}$.
Equation~\eqref{eq:LBLKonH} can be similarly verified using Figures~\ref{figure:d},~\ref{figure:LB}, and~\ref{figure:LK}, noting that in the compact case, the space $\mathcal H^k$ of harmonic $k$-forms coincides with the space of $\mathrm{d}$-closed and $\dop{\dd}$-closed $k$-forms.
\end{proof}
\begin{rmk} \label{rmk:LBLKcomms}
For a $k$-form $\gamma$, let $L_{\gamma}$ be the linear operator of degree $k$ on $\Omega^{\bullet}$ given by $L_{\gamma} \alpha = \gamma \wedge \alpha$. In terms of graded commutators, in the torsion-free case Corollary~\ref{cor:LBLKds} says that $[ \dop{\dd}, L_{\varphi} ] = - \mathcal L_B$ and $[ \dop{\dd}, L_{\psi} ] = \mathcal L_K$, and Proposition~\ref{prop:LBLKproperties} says that $[ \dop{\dd}, \mathcal L_B ] = [ \dop{\dd}, \mathcal L_K ] = 0$, $[ \Delta, \mathcal L_B ] = [ \Delta, \mathcal L_K ] = 0$, and $[ \mathcal L_B, \mathcal L_K ] = [ \mathcal L_K, \mathcal L_K ] = 0$. (In fact the first equation in~\eqref{eq:LcomposeL} is actually stronger than $[ \mathcal L_B, \mathcal L_K ] = 0$.) These graded commutators and others are considered more generally for $\mathrm{G}_2$~manifolds with torsion in~\cite{K} using the general framework developed in~\cite{dKS} in the case of $\U{m}$-structures.
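For example, since $\dop{\dd}$ has degree $-1$ and $L_{\varphi}$ has degree $3$, the graded commutator is $[ \dop{\dd}, L_{\varphi} ] = \dop{\dd} L_{\varphi} + L_{\varphi} \dop{\dd}$, and the first equation in~\eqref{eq:LBLKds} says precisely that
\begin{equation*}
[ \dop{\dd}, L_{\varphi} ] \alpha = \dop{\dd} ( \varphi \wedge \alpha ) + \varphi \wedge \dop{\dd} \alpha = - \mathcal L_B \alpha.
\end{equation*}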
\end{rmk}
\section{The $\mathcal L_B$-cohomology $H^{\bullet}_{\varphi}$ of $M$ and its computation} \label{sec:cohom}
In this section we define two cohomologies on a torsion-free $\mathrm{G}_2$~manifold using the derivations $\mathcal L_B$ and $\mathcal L_K$. The cohomology determined by $\mathcal L_K$ was studied extensively by Kawai--L\^e--Schwachh\"ofer in~\cite{KLS2}. We recall one of the main results of~\cite{KLS2} on the $\mathcal L_K$-cohomology, stated here as Theorem~\ref{thm:KLS}. We then proceed to compute the cohomology determined by $\mathcal L_B$. This section culminates with the proof of Theorem~\ref{thm:Hph}, which is our analogue of Theorem~\ref{thm:KLS} for the $\mathcal L_B$-cohomology. An application to formality of compact torsion-free $\mathrm{G}_2$~manifolds is given in Section~\ref{sec:formality}.
\subsection{Cohomologies determined by $\mathcal L_B$ and $\mathcal L_K$} \label{sec:cohom-defn}
Recall from~\eqref{eq:LcomposeL} that $(\mathcal L_K)^2 = 0$. This observation motivates the following definition.
\begin{defn} \label{defn:Hps}
For any $0 \leq k \leq 7$, we define
\begin{equation*}
H^k_{\psi} : = \frac{\ker(\mathcal L_K: \Omega^k \to \Omega^{k+3})}{\operatorname{im}(\mathcal L_K: \Omega^{k-3} \to \Omega^k)}.
\end{equation*}
We call these groups the $\mathcal L_K$-cohomology groups.
\end{defn}
The $\mathcal L_K$-cohomology is studied extensively in~\cite{KLS2}. Here is one of the main results of~\cite{KLS2}.
\begin{thm}[Kawai--L\^e--Schwachh\"ofer~\cite{KLS2}] \label{thm:KLS}
The following relations hold.
\begin{itemize} \setlength\itemsep{-1mm}
\item $H^k_{\psi} \cong H^k_{\mathrm{dR}}$ for $k=0,1,6,7$.
\item $H^k_{\psi}$ is infinite-dimensional for $k=2,3,4,5$.
\item There is a canonical injection $\mathcal{H}^k \hookrightarrow H^k_{\psi}$ for all $k$.
\item The Hodge star induces isomorphisms $\ast: H^k_{\psi} \cong H^{7-k}_{\psi}$.
\end{itemize}
\end{thm}
\begin{proof}
This is part of~\cite[Theorem 1.1]{KLS2}.
\end{proof}
From Figure~\ref{figure:LB} and~\eqref{eq:d-relations} we see that in general $(\mathcal L_B)^2 \neq 0$. Because of this, we \emph{cannot} directly copy the definition of $H^k_{\psi}$ to define $\mathcal L_B$-cohomology groups. However, we can make the following definition.
\begin{defn} \label{defn:Hph}
For any $0 \leq k \leq 7$, we define
\begin{equation} \label{eq:Hphdefn}
H^k_{\varphi} := \frac{\ker(\mathcal L_B: \Omega^k \to \Omega^{k+2})}{\operatorname{im}(\mathcal L_B: \Omega^{k-2} \to \Omega^k) \cap \ker(\mathcal L_B: \Omega^k \to \Omega^{k+2})}.
\end{equation}
We call these groups the $\mathcal L_B$-cohomology groups.
\end{defn}
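The intersection with $\ker \mathcal L_B$ in the denominator of~\eqref{eq:Hphdefn} is needed precisely because $(\mathcal L_B)^2 \neq 0$, so that $\operatorname{im} \mathcal L_B \not\subseteq \ker \mathcal L_B$ in general. For instance, for $f \in \Omega^0$, Figure~\ref{figure:LB} gives
\begin{equation*}
(\mathcal L_B)^2 f = \mathcal L_B (D^1_7 f) = -2 D^7_1 D^1_7 f - 2 D^7_{27} D^1_7 f,
\end{equation*}
and by the $k=0$ case of~\eqref{eq:Laplacian} its $\Omega^4_1$ component is identified with $\tfrac{6}{7} \Delta f$, which is nonzero whenever $f$ is not harmonic.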
In Sections~\ref{sec:computeHph0123} and~\ref{sec:computeHph4567} we compute these $\mathcal L_B$-cohomology groups and then in Section~\ref{sec:Hphthm} we prove Theorem~\ref{thm:Hph}, which is the analogue to Theorem~\ref{thm:KLS}.
\subsection{Computation of the groups $H^0_{\varphi}$, $H^1_{\varphi}$, $H^2_{\varphi}$, and $H^3_{\varphi}$} \label{sec:computeHph0123}
From now on we always assume that $(M, \varphi)$ is a \emph{compact} torsion-free $\mathrm{G}_2$~manifold as we use Hodge theory frequently. See also Remark~\ref{rmk:when-torsion-free}.
\begin{rmk} \label{rmk:IBP}
In particular we will often use the following observations. (There is no summation over $l, l', l''$ in this remark. The symbols $l, l', l'' \in \{ 1, 7, 14, 27 \}$ are not indices.) By Corollary~\ref{cor:adjoints}, we have $D^{l'}_{l} = c (D^{l}_{l'})^*$ for some $c \neq 0$. Thus, by integration by parts,
\begin{equation*}
\text{whenever $D^{l'}_l D^l_{l'} \omega = 0$ for some $\omega$, then $D^l_{l'} \omega = 0$.}
\end{equation*}
More generally, by Corollary~\ref{cor:adjoints} an equation of the form $a D^{l'}_l D^l_{l'} \omega + b D^{l''}_l D^l_{l''} \omega = 0$ can be rewritten as $\tilde a (D^{l}_{l'})^* D^{l}_{l'} \omega + \tilde b (D^{l}_{l''})^* D^{l}_{l''} \omega = 0$ for some $\tilde a, \tilde b$. If $\tilde a, \tilde b$ \emph{have the same sign}, then again by integration by parts we conclude that both $D^l_{l'} \omega = 0$ and $D^l_{l''} \omega = 0$.
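Explicitly, taking the $L^2$ inner product of such an equation with $\omega$ gives
\begin{equation*}
0 = \tilde a \, \| D^l_{l'} \omega \|_{L^2}^2 + \tilde b \, \| D^l_{l''} \omega \|_{L^2}^2,
\end{equation*}
and if $\tilde a$ and $\tilde b$ are both positive or both negative, then each term must vanish.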
\end{rmk}
In the first two propositions we establish that $H^k_{\varphi} \cong H^k_{\mathrm{dR}}$ for $k = 0,1,2$.
\begin{prop} \label{prop:Hph01}
We have $H^0_{\varphi} = \mathcal{H}^0 $ and $H^1_{\varphi} = \mathcal{H}^1$.
\end{prop}
\begin{proof}
From Figure~\ref{figure:LB} and Figure~\ref{figure:d}, we observe that
\begin{align*}
\operatorname{im} (\mathcal L_B: \Omega^{-2} \to \Omega^0) & = 0, \\
\ker (\mathcal L_B: \Omega^0 \to \Omega^2) & = \ker (D^1_7) = \mathcal{H}^0,
\end{align*}
and thus that $H^0_{\varphi} = \mathcal{H}^0$.
Similarly, using Figure~\ref{figure:LB} and Theorem~\ref{thm:harmonic1}, we observe that
\begin{align*}
\operatorname{im} (\mathcal L_B: \Omega^{-1} \to \Omega^1) & = 0, \\
\ker (\mathcal L_B: \Omega^1 \to \Omega^3) & = \ker (D^7_1) \cap \ker (D^7_7) \cap \ker (D^7_{27}) = \mathcal{H}^1
\end{align*}
and hence $H^1_{\varphi} = \mathcal{H}^1$.
\end{proof}
In the remainder of this section and the next we will often use the notation introduced in~\eqref{eq:complexes}.
\begin{prop} \label{prop:Hph2}
We have $H^2_{\varphi} \cong \mathcal{H}^2 $.
\end{prop}
\begin{proof}
We first show that the denominator in~\eqref{eq:Hphdefn} is trivial. Let $\omega \in (\ker \mathcal L_B)^2 \cap ( \operatorname{im} \mathcal L_B)^2$. Then by Figure~\ref{figure:LB} we have
\begin{equation*}
\omega = \mathcal L_B f = D^1_7 f \quad \text{for some } f \in \Omega^0_1
\end{equation*}
and also that
\begin{equation*}
0 = \mathcal L_B \omega = -2 D^7_1 (D^1_7 f) - 2 D^7_{27} (D^1_7 f).
\end{equation*}
Projecting onto the $\Omega^4_1$ component, we find that $D^7_1 D^1_7 f = 0$. By Remark~\ref{rmk:IBP}, we deduce that $\omega = D^1_7 f = 0$. Thus we have shown that $(\ker \mathcal L_B)^2 \cap ( \operatorname{im} \mathcal L_B)^2 = 0$. Hence $H^2_{\varphi} = (\ker \mathcal L_B)^2$.
Write $\omega = \omega_7 + \omega_{14} \in \Omega^2_7 \oplus \Omega^2_{14}$. By Figure~\ref{figure:LB} we have
\begin{equation} \label{eq:Hph2temp}
\omega \in (\ker \mathcal L_B)^2 \iff
\left\{
\begin{aligned}
-2 D^7_1 \omega_7 & = 0, \\
-3 D^{14}_7 \omega_{14} & = 0, \\
-2 D^7_{27} \omega_7 + D^{14}_{27} \omega_{14} & = 0.
\end{aligned}
\right\}
\end{equation}
Taking $D^{27}_7$ of the third equation in~\eqref{eq:Hph2temp}, using Corollary~\ref{cor:d-relations} to write $D^{27}_7 D^{14}_{27} = \tfrac{3}{2} D^7_7 D^{14}_7$, and using the second equation in~\eqref{eq:Hph2temp}, we find that
\begin{align*}
0 & = D^{27}_7 (-2 D^7_{27} \omega_7 + D^{14}_{27} \omega_{14}) \\
& = -2 D^{27}_7 D^7_{27} \omega_7 + \tfrac{3}{2} D^7_7 D^{14}_7 \omega_{14} = -2 D^{27}_{7} D^7_{27} \omega_7,
\end{align*}
implying by Remark~\ref{rmk:IBP} that $D^7_{27} \omega_7 = 0$. Therefore we have established that
\begin{equation*}
\omega \in (\ker \mathcal L_B)^2 \iff
\left\{
\begin{aligned}
D^7_1 \omega_7 & = 0, \\
D^{14}_7 \omega_{14} & = 0, \\
D^7_{27} \omega_7 & = 0, \\
D^{14}_{27} \omega_{14} & = 0,
\end{aligned}
\right\}
\iff
\left\{
\begin{aligned}
\omega_7 & \in \mathcal{H}^2_7 \cong \mathcal{H}^1_7 \text{ by Theorem~\ref{thm:harmonic1}}, \\
\omega_{14} & \in \mathcal{H}^2_{14} \text{ by Figure~\ref{figure:d} and Corollary~\ref{cor:adjoints}}.
\end{aligned}
\right\}
\end{equation*}
We conclude that $H^2_{\varphi} = (\ker \mathcal L_B)^2 = \mathcal{H}^2$.
\end{proof}
\begin{prop} \label{prop:Hph3}
We have $H^3_{\varphi} = \mathcal{H}^3 \oplus \big( (\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal L_B)^3 \big)$.
\end{prop}
\begin{proof}
We first show that the denominator in~\eqref{eq:Hphdefn} is trivial. Let $\omega \in (\ker \mathcal L_B)^3 \cap ( \operatorname{im} \mathcal L_B)^3$. Then by Figure~\ref{figure:LB} we have
\begin{equation*}
\omega = \mathcal L_B \alpha = D^7_1 \alpha + \tfrac{3}{2} D^7_7 \alpha + D^7_{27} \alpha \quad \text{for some } \alpha \in \Omega^1_7.
\end{equation*}
Also, using Corollary~\ref{cor:d-relations} to write $D^{27}_{14} D^{7}_{27} = \tfrac{3}{2} D^7_{14} D^7_7$ and $D^{27}_7 D^7_{27} = D^1_7 D^7_1 - \tfrac{9}{4} D^7_7 D^7_7$, we have that
\begin{align*}
0 = \mathcal L_B \omega & = -2 D^1_7 (D^7_1 \alpha) + 3 D^7_{14} (\tfrac{3}{2} D^7_7 \alpha) + (- \tfrac{8}{3} D^{27}_7 + D^{27}_{14}) (D^7_{27} \alpha) \\
& = -2 D^1_7 D^7_1 \alpha - \tfrac{8}{3} D^{27}_7 D^7_{27} \alpha + \tfrac{9}{4} D^7_{14} D^7_7 \alpha + D^{27}_{14} D^7_{27} \alpha \\
& = - 2 D^1_7 D^7_1 \alpha - \tfrac{8}{3} (D^1_7 D^7_1 \alpha - \tfrac{9}{4} D^7_7 D^7_7 \alpha) + \tfrac{9}{4} D^7_{14} D^7_7 \alpha + \tfrac{3}{2} D^7_{14} D^7_7 \alpha \\
& = (-\tfrac{14}{3} D^1_7 D^7_1 \alpha + 6 D^7_7 D^7_7 \alpha) + \tfrac{15}{4} D^7_{14} D^7_7 \alpha.
\end{align*}
Projecting onto the $\Omega^5_7$ component, we find that
\begin{equation*}
-\tfrac{14}{3} D^1_7 D^7_1 \alpha + 6 D^7_7 D^7_7 \alpha = 0.
\end{equation*}
Using Corollary~\ref{cor:adjoints}, the above expression becomes
\begin{equation*}
\tfrac{14}{3} (D^7_1)^* D^7_1 \alpha + 2 (D^7_7)^* D^7_7 \alpha = 0,
\end{equation*}
and hence by Remark~\ref{rmk:IBP} we deduce that $D^7_1 \alpha = 0$ and $D^7_7 \alpha = 0$. By Theorem~\ref{thm:harmonic1}, we then have $D^7_{27} \alpha = 0$ automatically. Therefore we have shown that $(\ker \mathcal L_B)^3 \cap ( \operatorname{im} \mathcal L_B)^3 = 0$, and so $H^3_{\varphi} = (\ker \mathcal L_B)^3$.
Write $\omega = \omega_1 + \omega_7 + \omega_{27} \in \Omega^3_1 \oplus \Omega^3_7 \oplus \Omega^3_{27}$. By Figure~\ref{figure:LB} we have
\begin{equation} \label{eq:Hph3temp}
\omega \in (\ker \mathcal L_B)^3 \iff
\left\{
\begin{aligned}
-2 D^1_7 \omega_1 - \tfrac{8}{3} D^{27}_7 \omega_{27} & = 0, \\
3 D^7_{14} \omega_7 + D^{27}_{14} \omega_{27} & = 0.
\end{aligned}
\right\}
\end{equation}
Taking $D^{14}_7$ of the second equation in~\eqref{eq:Hph3temp}, using Corollary~\ref{cor:d-relations} to write $D^{14}_7 D^{27}_{14} = - D^7_7 D^{27}_7$ and $D^7_7 D^1_7 = 0$, and using $D^{27}_7 \omega_{27} = - \tfrac{3}{4} D^1_7 \omega_1$ from the first equation in~\eqref{eq:Hph3temp}, we find that
\begin{align*}
0 & = D^{14}_7 (3 D^7_{14} \omega_7 + D^{27}_{14} \omega_{27}) \\
& = 3 D^{14}_7 D^7_{14} \omega_7 - D^7_7 D^{27}_7 \omega_{27} \\
& = 3 D^{14}_7 D^7_{14} \omega_7 + \tfrac{3}{4} D^7_7 D^1_7 \omega_1 = 3 D^{14}_7 D^7_{14} \omega_7,
\end{align*}
implying by Remark~\ref{rmk:IBP} that $D^7_{14} \omega_7 = 0$. Therefore we have established that
\begin{equation} \label{eq:Hph3temp2}
\omega \in (\ker \mathcal L_B)^3 \iff
\left\{
\begin{aligned}
2 D^1_7 \omega_1 + \tfrac{8}{3} D^{27}_7 \omega_{27} & = 0, \\
D^7_{14} \omega_7 & = 0, \\
D^{27}_{14} \omega_{27} & = 0,
\end{aligned}
\right\}
\overset{\text{Theorem~\ref{thm:harmonic1}}}{\iff}
\left\{
\begin{aligned}
2 D^1_7 \omega_1 + \tfrac{8}{3} D^{27}_7 \omega_{27} & = 0, \\
D^7_7 \omega_7 & = 0, \\
D^{27}_{14} \omega_{27} & = 0.
\end{aligned}
\right\}
\end{equation}
From $\dop{\dd} = - \ast \mathrm{d} \ast$ on $\Omega^3$ and Figure~\ref{figure:d} we find that
\begin{equation} \label{eq:Hph3temp3}
\dop{\dd} \omega = 0 \iff
\left\{
\begin{aligned}
D^1_7 \omega_1 + 2 D^7_7 \omega_7 + \tfrac{4}{3} D^{27}_7 \omega_{27} & = 0, \\
-D^7_{14} \omega_7 + D^{27}_{14} \omega_{27} & = 0.
\end{aligned}
\right\}
\end{equation}
Now equations~\eqref{eq:Hph3temp2} and~\eqref{eq:Hph3temp3} together imply that $(\ker \mathcal L_B)^3 \subseteq (\ker \dop{\dd})^3$. By the Hodge theorem we have $(\ker \dop{\dd})^3 = \mathcal{H}^3 \oplus (\operatorname{im} \dop{\dd})^3$, and by~\eqref{eq:LBLKonH} we have $\mathcal{H}^3 \subset (\ker \mathcal L_B)^3$. Thus
\begin{equation*}
\mathcal{H}^3 \subseteq (\ker \mathcal{L}_B)^3 \subseteq \mathcal{H}^3 \oplus (\operatorname{im} \dop{\dd})^3.
\end{equation*}
Applying Lemma~\ref{lemma:linalg}(i) we conclude that $H^3_{\varphi} =(\ker \mathcal{L}_B)^3 = \mathcal{H}^3 \oplus \big( (\operatorname{im} \dop{\dd} )^3 \cap (\ker \mathcal L_B)^3 \big)$.
\end{proof}
We have thus far computed half of the $\mathcal L_B$-cohomology groups $H^k_{\varphi}$, for $k = 0,1,2,3$. The other half, for $k=4,5,6,7$, will be computed rigorously in Section~\ref{sec:computeHph4567}. However, we can predict the duality result that $H^k_{\varphi} \cong H^{7-k}_{\varphi}$ by the following formal manipulation:
\begin{align*}
H^k_{\varphi} & = \frac{(\ker \mathcal L_B)^k}{(\operatorname{im} \mathcal L_B)^k \cap (\ker \mathcal L_B)^k} \cong \frac{(\ker \mathcal L_B)^k + (\operatorname{im} \mathcal L_B)^k}{(\operatorname{im} \mathcal L_B)^k} \text{ by the second isomorphism theorem} \\
& \cong \frac{(\ker \mathcal L_B)^{7-k} + (\operatorname{im} \mathcal L_B)^{7-k}}{(\operatorname{im} \mathcal L_B)^{7-k}} \text{ by applying $\ast$ and using equation~\eqref{eq:derivationsstar}} \\
& \overset{\text{\red{(!)}}}{=} \frac{\big( (\operatorname{im} \mathcal L_B)^{7-k} \big)^{\perp} + \big( (\ker \mathcal L_B)^{7-k} \big)^{\perp}}{\big( (\ker \mathcal L_B)^{7-k} \big)^{\perp}} \\
& = \frac{\big( (\operatorname{im} \mathcal L_B)^{7-k} \cap (\ker \mathcal L_B)^{7-k} \big)^{\perp}}{\big( (\ker \mathcal L_B)^{7-k} \big)^{\perp}} \text{ by properties of orthogonal complement} \\
& \overset{\text{\red{(!!)}}}{\cong} \frac{(\ker \mathcal L_B)^{7-k}}{(\operatorname{im} \mathcal L_B)^{7-k} \cap (\ker \mathcal L_B)^{7-k}}=H^{7-k}_{\varphi}.
\end{align*}
Note that the above formal manipulation is not a rigorous proof of duality because at step \red{(!)}, we do not have $\operatorname{im} P^* = (\ker P)^{\perp}$ in general for an arbitrary operator $P$, and step \red{(!!)} is also not justified. Because $\Omega^k$ is not complete with respect to the $L^2$-norm, the usual Hilbert space techniques do not apply. We will use elliptic operator theory to give a rigorous computation of $H^k_{\varphi}$ for $k = 4,5,6,7$, in the next section.
\subsection{Computation of the groups $H^4_{\varphi}$, $H^5_{\varphi}$, $H^6_{\varphi}$, and $H^7_{\varphi}$} \label{sec:computeHph4567}
The material on regular operators in this section is largely based on Kawai--L\^e--Schwachh\"ofer~\cite{KLS2}.
\begin{defn} \label{defn:regular}
Let $P$ be a linear differential operator of degree $r$ on $\Omega^{\bullet}$. Then $P : \Omega^{k - r} \to \Omega^k$ is said to be \emph{regular} if $\Omega^k = \operatorname{im} P \oplus \ker P^*$, where by $\ker P^*$ we mean the kernel of the formal adjoint $P^* : \Omega^k \to \Omega^{k - r}$ with respect to the $L^2$ inner product. The operator $P$ is said to be elliptic, overdetermined elliptic, or underdetermined elliptic if the principal symbol $\sigma_{\xi} (P)$ of $P$ is bijective, injective, or surjective, respectively, for all $\xi \neq 0$.
\end{defn}
\begin{rmk} \label{rmk:regular}
It is a standard result in elliptic operator theory (see~\cite[p.464; 32 Corollary]{Besse}) that elliptic, overdetermined elliptic, and underdetermined elliptic operators are all regular.
\end{rmk}
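For example, on a compact manifold the Hodge decomposition $\Omega^k = \operatorname{im} \mathrm{d} \oplus \mathcal{H}^k \oplus \operatorname{im} \dop{\dd}$ shows that $\mathrm{d} : \Omega^{k-1} \to \Omega^k$ is regular, since $\ker \dop{\dd} = \mathcal{H}^k \oplus \operatorname{im} \dop{\dd}$, even though the symbol $\xi \wedge \cdot$ of $\mathrm{d}$ is neither injective nor surjective for $2 \leq k \leq 6$.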
\begin{prop} \label{prop:LBregular}
The operator $\mathcal L_B: \Omega^{k-2} \to \Omega^k$ is regular for all $k = 0, \ldots, 9$.
\end{prop}
\begin{proof}
Consider the symbol $P = \sigma_{\xi} (\mathcal L_B)$. By~\eqref{eq:LBLKframe}, this operator is $P (\omega) = (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \omega$. Note that this is an algebraic (pointwise) map and thus at each point it is a linear map between finite-dimensional vector spaces. We will show that $P : \Omega^{k-2} \to \Omega^k$ is injective for $k = 0,1,2,3,4$ and surjective for $k=5,6,7,8,9$. The claim will then follow by Remark~\ref{rmk:regular}.
First we claim that injectivity of $P : \Omega^{k-2} \to \Omega^k$ for $k=0,1,2,3,4$ implies surjectivity of $P : \Omega^{k-2} \to \Omega^k$ for $k=5,6,7,8,9$. Suppose $P : \Omega^{k-2} \to \Omega^k$ is injective. Then the dual map $P^* : \Omega^k \to \Omega^{k-2}$ is surjective. But we have
\begin{equation*}
P^* = ( \sigma_{\xi} (\mathcal L_B) )^* = \sigma_{\xi} (\mathcal L_B^*),
\end{equation*}
and by~\eqref{eq:derivationsstar} this equals $\sigma_{\xi} (- \ast \mathcal L_B \ast) = - \ast \sigma_{\xi} (\mathcal L_B) \ast = - \ast P \ast$. Since $\ast : \Omega^l \to \Omega^{7-l}$ is bijective, and $\ast P \ast : \Omega^k \to \Omega^{k-2}$ is surjective, we deduce that $P : \Omega^{(9-k) - 2} \to \Omega^{9-k}$ is surjective. But $9-k \in \{ 5, 6, 7, 8, 9 \}$ if $k \in \{ 0, 1, 2, 3, 4 \}$. Thus the claim is proved.
It remains to establish injectivity of $P : \Omega^{k-2} \to \Omega^k$ for $k = 0, 1, 2, 3, 4$. This is automatic for $k = 0, 1$ since $\Omega^{k-2} = 0$ in these cases.
If $k = 2$, then $P : \Omega^0 \to \Omega^2$ is given by $Pf = (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge f = f (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi)$. Suppose $Pf = 0$. Since $\xi \neq 0$, we have $\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi \neq 0$, and thus $f = 0$. So $P$ is injective for $k=2$.
If $k=3$, then $P : \Omega^1 \to \Omega^3$ is given by $P \alpha = (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \alpha$. Suppose $P \alpha = 0$. Taking the wedge product of $P \alpha = 0$ with $\psi$ and using Lemma~\ref{lemma:identities2} gives
\begin{align*}
0 & = \psi \wedge (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \alpha = 3 (\ast \xi) \wedge \alpha \\
& = 3 g( \xi, \alpha) \mathsf{vol}.
\end{align*}
Thus $g(\xi, \alpha) = 0$. Similarly, taking the wedge product of $P \alpha = 0$ with $\varphi$ and using Lemmas~\ref{lemma:identities2} and~\ref{lemma:identities3} gives
\begin{align*}
0 & = \varphi \wedge (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \alpha = -2 \big( \ast (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \big) \wedge \alpha \\
& = -2 \psi \wedge \xi \wedge \alpha = - 2 \ast ( \xi \times \alpha ).
\end{align*}
Thus $\xi \times \alpha = 0$. Taking the cross product of this with $\xi$ and using Lemma~\ref{lemma:identities3} gives
\begin{equation*}
- g(\xi, \xi) \alpha + g(\xi, \alpha) \xi = 0.
\end{equation*}
Since $g(\xi, \alpha) = 0$ and $\xi \neq 0$, we conclude that $\alpha = 0$. So $P$ is injective for $k=3$.
If $k=4$, then $P : \Omega^2 \to \Omega^4$ is given by $P \beta = (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \beta$. Suppose $P \beta = 0$. This means
\begin{equation} \label{eq:regulartemp}
(\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \beta = 0.
\end{equation}
Write $\beta = \beta_7 + \beta_{14} \in \Omega^2_7 \oplus \Omega^2_{14}$, where by~\eqref{eq:forms-isom} we can write $\beta_7 = Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi$ for some unique $Y$. Taking the wedge product of~\eqref{eq:regulartemp} with $\varphi$ and using~\eqref{eq:forms-isom} and~\eqref{eq:fund-eq}, we have
\begin{align*}
0 & = (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \varphi \wedge \beta = (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (- 2 \ast \beta_7 + \ast \beta_{14}) \\
& = - 2 (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \ast \beta_7 + 0 = (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \varphi = - 6 g(\xi, Y) \mathsf{vol}.
\end{align*}
Thus we have
\begin{equation} \label{eq:regulartemp2}
g(\xi, Y) = 0.
\end{equation}
Now we take the interior product of~\eqref{eq:regulartemp} with $\xi$. This gives $(\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta) = 0$. By the injectivity of $P$ for $k=3$, we deduce that
\begin{equation} \label{eq:regulartemp2b}
\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta = 0.
\end{equation}
Using~\eqref{eq:regulartemp2b} and~\eqref{eq:forms-isom}, we can rewrite~\eqref{eq:regulartemp} as
\begin{equation*}
0 = \xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} (\varphi \wedge \beta) = \xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} ( - 2 \ast \beta_7 + \ast \beta_{14} ).
\end{equation*}
Taking $\ast$ of the above equation and using $\ast ( \xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \ast \gamma) = \pm \xi \wedge \gamma$, where in general the sign depends on the dimension of the manifold and the degree of $\gamma$, we find that
\begin{equation} \label{eq:regulartemp3}
- 2 \xi \wedge \beta_7 + \xi \wedge \beta_{14} = 0.
\end{equation}
Equation~\eqref{eq:regulartemp3} implies that
\begin{equation} \label{eq:regulartemp4}
\xi \wedge \beta = \xi \wedge \beta_7 + \xi \wedge \beta_{14} = 3 \xi \wedge \beta_7.
\end{equation}
Taking the interior product of~\eqref{eq:regulartemp4} with $\xi$ and using~\eqref{eq:regulartemp2b} yields
\begin{equation} \label{eq:regulartemp5}
g(\xi, \xi) \beta = 3 g(\xi, \xi) \beta_7 - 3 \xi \wedge (\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta_7).
\end{equation}
By Lemma~\ref{lemma:identities3} we have $\xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \beta_7 = \xi \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi = Y \times \xi$. Thus~\eqref{eq:regulartemp5} becomes
\begin{equation} \label{eq:regulartemp6}
g(\xi, \xi) \beta = 3 g(\xi, \xi) \beta_7 - 3 \xi \wedge (Y \times \xi).
\end{equation}
Now we take the wedge product of~\eqref{eq:regulartemp6} with $\psi$, use Lemma~\ref{lemma:identities3} again, and the fact that $\beta_{14} \wedge \psi = 0$ from~\eqref{eq:forms-isom}. We obtain
\begin{align*}
g(\xi, \xi) \beta_7 \wedge \psi & = 3 g(\xi, \xi) \beta_7 \wedge \psi - 3 \xi \wedge (Y \times \xi) \wedge \psi \\
& = 3 g(\xi, \xi) \beta_7 \wedge \psi - 3 \ast (\xi \times (Y \times \xi)),
\end{align*}
which can be rearranged to give, using Lemma~\ref{lemma:identities3} and~\eqref{eq:regulartemp2}, that
\begin{equation} \label{eq:regulartemp7}
- 2 g(\xi, \xi) \beta_7 \wedge \psi = 3 \ast (\xi \times (\xi \times Y)) = - 3 \ast \big( g(\xi, \xi) Y \big).
\end{equation}
But from Lemma~\ref{lemma:identities2} we find $\beta_7 \wedge \psi = (Y \mathbin{\! \hbox{\vrule height0.3pt width5pt depth 0.2pt \vrule height5pt width0.4pt depth 0.2pt}} \varphi) \wedge \psi = 3 \ast Y$. Substituting this into~\eqref{eq:regulartemp7} and taking $\ast$, we find that
\begin{equation*}
- 3 g(\xi, \xi) Y = -2 g(\xi, \xi) \ast \big( 3 \ast Y \big) = - 6 g(\xi, \xi) Y.
\end{equation*}
Since $\xi \neq 0$, we deduce that $Y = 0$ and thus $\beta_7 = 0$. Substituting back into~\eqref{eq:regulartemp5} then gives $g(\xi, \xi) \beta_{14} = 0$ and thus $\beta_{14} = 0$ as well. So $P$ is injective for $k=4$.
\end{proof}
\begin{cor} \label{cor:LBregular}
For any $k = 0, \ldots, 7$, we have
\begin{equation} \label{eq:LBregular}
(\operatorname{im} \mathcal L_B)^k = \ast ( (\ker \mathcal L_B)^{7-k})^{\perp}.
\end{equation}
\end{cor}
\begin{proof}
By~\eqref{eq:derivationsstar} we have $(\operatorname{im} \mathcal{L}_B)^k = \ast (\operatorname{im} \mathcal{L}^*_B)^{7-k}$, and because $\mathcal L_B$ is regular by Proposition~\ref{prop:LBregular}, we have $(\operatorname{im} \mathcal{L}^*_B)^{7-k} = ((\ker \mathcal{L}_B)^{7-k})^{\perp}$. The result follows.
\end{proof}
\begin{prop} \label{prop:Hph67}
We have $H^7_\varphi \cong \mathcal{H}^7$ and $H^6_\varphi \cong \mathcal{H}^6$.
\end{prop}
\begin{proof}
In the proof of Proposition~\ref{prop:Hph01}, we showed that $(\ker \mathcal{L}_B)^0 = \mathcal{H}^0$ and $(\ker \mathcal{L}_B)^1 = \mathcal{H}^1$. Thus using~\eqref{eq:LBregular} we have
\begin{align*}
(\operatorname{im} \mathcal{L}_B)^7 & = \ast ((\ker \mathcal{L}_B)^0)^\perp = \ast (\mathcal{H}^0)^{\perp} & & \\
& = (\operatorname{im} \mathrm{d})^7 \oplus (\operatorname{im} \dop{\dd})^7 \qquad & & \text{by the Hodge decomposition.}
\end{align*}
In exactly the same way we get $(\operatorname{im} \mathcal{L}_B)^6 = (\operatorname{im} \mathrm{d})^6 \oplus (\operatorname{im} \dop{\dd})^6$.
Moreover, since $\mathcal L_B$ has degree two, we have $(\ker \mathcal{L}_B)^6 = \Omega^6$ and $(\ker \mathcal{L}_B)^7 = \Omega^7$. Thus, we conclude that
\begin{equation*}
H^k_\varphi = \frac{\Omega^k}{(\operatorname{im} \mathrm{d})^k \oplus (\operatorname{im} \dop{\dd})^k} \cong \mathcal{H}^k \qquad \text{ for $k = 6,7$,}
\end{equation*}
by the Hodge decomposition.
\end{proof}
\begin{prop} \label{prop:Hph5}
We have $H^5_\varphi \cong \mathcal{H}^5$.
\end{prop}
\begin{proof}
In the proof of Proposition~\ref{prop:Hph2}, we showed that $(\ker \mathcal L_B)^2 = \mathcal{H}^2$, so using~\eqref{eq:LBregular} just as in the proof of Proposition~\ref{prop:Hph67} we deduce that
\begin{equation} \label{eq:Hph5temp}
(\operatorname{im} \mathcal{L}_B)^5 = (\operatorname{im} \mathrm{d})^5 \oplus (\operatorname{im} \dop{\dd})^5.
\end{equation}
Let $\alpha \in \Omega^6$. Then since $\dop{\dd} = \ast \mathrm{d} \ast$ on $\Omega^6$, we find from Figure~\ref{figure:d} that up to our usual identifications, $\dop{\dd} \alpha = D^7_7 \alpha + D^7_{14} \alpha \in \Omega^5_7 \oplus \Omega^5_{14}$. Then Figure~\ref{figure:LB} and~\eqref{eq:d-relations} give
\begin{equation*}
\mathcal L_B \dop{\dd} \alpha = \mathcal L_B (D^7_7 \alpha + D^7_{14} \alpha) = 7 D^7_1 D^7_7 \alpha + 0 = 0,
\end{equation*}
so $(\operatorname{im} \dop{\dd})^5 \subset (\ker \mathcal L_B)^5$. We also have $\mathcal{H}^5 \subset (\ker \mathcal{L}_B)^5$ by~\eqref{eq:LcomposeL}. Using the Hodge decomposition of $\Omega^5$ we therefore have
\begin{equation*}
\mathcal{H}^5 \oplus (\operatorname{im} \dop{\dd})^5 \subseteq (\ker \mathcal{L}_B)^5 \subseteq \Omega^5 = \mathcal{H}^5 \oplus (\operatorname{im} \dop{\dd})^5 \oplus (\operatorname{im} \mathrm{d})^5.
\end{equation*}
Applying Lemma~\ref{lemma:linalg}(i) we deduce that
\begin{equation} \label{eq:Hph5temp2}
(\ker \mathcal L_B)^5 = \mathcal{H}^5 \oplus (\operatorname{im} \dop{\dd})^5 \oplus \big( (\operatorname{im} \mathrm{d})^5 \cap (\ker \mathcal L_B)^5 \big).
\end{equation}
Applying Lemma~\ref{lemma:linalg}(ii) to~\eqref{eq:Hph5temp},~\eqref{eq:Hph5temp2}, as subspaces of $\Omega^5 = \mathcal{H}^5 \oplus (\operatorname{im} \mathrm{d})^5 \oplus (\operatorname{im} \dop{\dd})^5$, we obtain
\begin{equation} \label{eq:Hph5temp3}
(\operatorname{im} \mathcal L_B)^5 \cap (\ker \mathcal L_B)^5 = (\operatorname{im} \dop{\dd})^5 \oplus \big( (\operatorname{im} \mathrm{d})^5 \cap (\ker \mathcal L_B)^5 \big).
\end{equation}
Therefore we find that
\begin{align*}
H^5_\varphi & = \frac{(\ker \mathcal{L}_B)^5}{(\ker \mathcal{L}_B)^5 \cap (\operatorname{im} \mathcal{L}_B)^5} & & \\
& = \frac{\mathcal{H}^5 \oplus (\operatorname{im} \dop{\dd})^5 \oplus \big( (\operatorname{im} \mathrm{d})^5 \cap (\ker \mathcal{L}_B)^5 \big)}{(\operatorname{im} \dop{\dd})^5 \oplus \big( (\operatorname{im} \mathrm{d})^5 \cap (\ker \mathcal{L}_B)^5 \big)} & & \text{by~\eqref{eq:Hph5temp2} and~\eqref{eq:Hph5temp3}} \\
& \cong \mathcal{H}^5 & &
\end{align*}
as claimed.
\end{proof}
Before we can compute $H^4_{\varphi}$ we need two preliminary results.
\begin{lemma} \label{lemma:Hph4-1}
We have
\begin{equation} \label{eq:Hph4-1}
(\ker \mathcal{L}_B)^4 \cap \big( (\operatorname{im} \dop{\dd})^4 \oplus (\operatorname{im} \mathrm{d})^4 \big) = \big( (\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \dop{\dd})^4 \big) \oplus \big( (\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \mathrm{d})^4 \big).
\end{equation}
\end{lemma}
\begin{proof}
Let $\beta = \beta_7 + \beta_{14} \in \Omega^5_7 \oplus \Omega^5_{14}$, and $\gamma = \gamma_1 + \gamma_7 + \gamma_{27} \in \Omega^3_1 \oplus \Omega^3_7 \oplus \Omega^3_{27}$. We need to prove that
\begin{equation} \label{eq:Hph4-1temp}
\begin{aligned}
&\mathcal{L}_B \dop{\dd} (\beta_7 + \beta_{14} ) + \mathcal{L}_B \mathrm{d} (\gamma_1 + \gamma_7 + \gamma_{27}) = 0 \\
\iff & \mathcal{L}_B \dop{\dd} (\beta_7 + \beta_{14}) = \mathcal{L}_B \mathrm{d} (\gamma_1 + \gamma_7 + \gamma_{27}) = 0.
\end{aligned}
\end{equation}
From $\dop{\dd} = - \ast \mathrm{d} \ast$ on $\Omega^5$ and Figures~\ref{figure:d} and~\ref{figure:LB}, we have
\begin{align*}
\mathcal{L}_B\dop{\dd} (\beta_7 + \beta_{14}) & = \mathcal{L}_B (-D^7_1 \beta_7 + \tfrac{3}{2} D^7_7 \beta_7 - D^7_{27} \beta_7 - D^{14}_7 \beta_{14} - D^{14}_{27} \beta_{14} ) \\
& = 3 D^1_7 (-D^7_1 \beta_7) - 6 D^7_7 ( \tfrac{3}{2} D^7_7 \beta_7 - D^{14}_7 \beta_{14} ) + 4 D^{27}_7 ( - D^7_{27} \beta_7 - D^{14}_{27} \beta_{14}) \\
& = - 3 D^1_7 D^7_1 \beta_7 - 9 D^7_7 D^7_7 \beta_7 + 6 D^7_7 D^{14}_7 \beta_{14} - 4 D^{27}_7 D^7_{27} \beta_7 - 4 D^{27}_7 D^{14}_{27} \beta_{14}.
\end{align*}
Using the relations in~\eqref{eq:d-relations}, the above expression simplifies to
\begin{equation} \label{eq:Hph4-1temp2}
\mathcal{L}_B\dop{\dd} (\beta_7 + \beta_{14}) = -7 D^1_7 D^7_1 \beta_7.
\end{equation}
Similarly from Figures~\ref{figure:d} and~\ref{figure:LB} and $D^7_7 D^1_7 = 0$, we have
\begin{align*}
\mathcal{L}_B \mathrm{d} (\gamma_1 + \gamma_7 + \gamma_{27}) & = \mathcal{L}_B (\tfrac{4}{3} D^7_1 \gamma_7 + ( - D^1_7 \gamma_1 - \tfrac{3}{2} D^7_7 \gamma_7 + D^{27}_7 \gamma_{27} ) + ( - D^7_{27} \gamma_7 + D^{27}_{27} \gamma_{27}) ) \\
& = 3 D^1_7 (\tfrac{4}{3} D^7_1 \gamma_7) - 6 D^7_7 ( - D^1_7 \gamma_1 - \tfrac{3}{2} D^7_7 \gamma_7 + D^{27}_7 \gamma_{27} ) + 4 D^{27}_7 ( - D^7_{27} \gamma_7 + D^{27}_{27} \gamma_{27}) \\
& = 4 D^1_7 D^7_1 \gamma_7 + 9 D^7_7 D^7_7 \gamma_7 - 6 D^7_7 D^{27}_7 \gamma_{27} - 4 D^{27}_7 D^7_{27} \gamma_7 + 4 D^{27}_7 D^{27}_{27} \gamma_{27}.
\end{align*}
Using the relations in~\eqref{eq:d-relations}, the above expression simplifies to
\begin{equation} \label{eq:Hph4-1temp3}
\mathcal{L}_B \mathrm{d} (\gamma_1 + \gamma_7 + \gamma_{27}) = 18 D^7_7 D^7_7 \gamma_7 - 12 D^7_7 D^{27}_7 \gamma_{27}.
\end{equation}
Combining equations~\eqref{eq:Hph4-1temp2} and~\eqref{eq:Hph4-1temp3}, if $\mathcal{L}_B \dop{\dd} (\beta_7 + \beta_{14}) + \mathcal{L}_B \mathrm{d} (\gamma_1 + \gamma_7 + \gamma_{27}) = 0$, then we have
\begin{equation*}
- 7 D^1_7 D^7_1 \beta_7 + 18 D^7_7 D^7_7 \gamma_7 - 12 D^7_7 D^{27}_7 \gamma_{27} = 0,
\end{equation*}
and thus, applying $D^7_1$ and using $D^7_1 D^7_7 = 0$, we deduce that
\begin{equation*}
7 D^7_1 D^1_7 D^7_1 \beta_7 = D^7_1 D^7_7 ( 18 D^7_7 \gamma_7 - 12 D^{27}_7 \gamma_{27} ) = 0.
\end{equation*}
Thus we have $D^7_1 D^1_7 D^7_1 \beta_7 = 0$. Applying Remark~\ref{rmk:IBP}, we deduce that $D^1_7 D^7_1 \beta_7 = 0$ and thus by~\eqref{eq:Hph4-1temp2} that $\mathcal L_B \dop{\dd} (\beta_7 + \beta_{14}) = 0$. Thus we have established~\eqref{eq:Hph4-1temp} and consequently
\begin{equation*}
(\ker \mathcal{L}_B)^4 \cap \big( (\operatorname{im} \dop{\dd})^4 \oplus (\operatorname{im} \mathrm{d})^4 \big) = \big( (\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \dop{\dd})^4 \big) \oplus \big( (\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \mathrm{d})^4 \big)
\end{equation*}
as claimed.
\end{proof}
\begin{lemma} \label{lemma:Hph4-2}
We have
\begin{equation} \label{eq:Hph4-2}
(\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \mathcal{L}_B)^4 = 0.
\end{equation}
\end{lemma}
\begin{proof}
Let $\omega \in (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \mathcal{L}_B)^4$. We write $\omega = \mathcal{L}_B (\alpha_7 + \alpha_{14})$ for some $\alpha_7 + \alpha_{14} \in \Omega^2_7 \oplus \Omega^2_{14}$. Using Figure~\ref{figure:LB}, we find
\begin{equation*}
\begin{aligned}
\omega = \mathcal{L}_B (\alpha_7 + \alpha_{14}) & = (- 2 D^7_1 \alpha_7) + (- 3 D^{14}_7 \alpha_{14}) + (- 2 D^7_{27} \alpha_7 + D^{14}_{27} \alpha_{14}) \\
& = \omega_1 + \omega_7 + \omega_{27} \in \Omega^4_1 \oplus \Omega^4_7 \oplus \Omega^4_{27}.
\end{aligned}
\end{equation*}
That is, we have
\begin{equation} \label{eq:Hph4-2LBo}
\begin{aligned}
\omega_1 & = - 2 D^7_1 \alpha_7, \\
\omega_7 & = - 3 D^{14}_7 \alpha_{14}, \\
\omega_{27} & = - 2 D^7_{27} \alpha_7 + D^{14}_{27} \alpha_{14}.
\end{aligned}
\end{equation}
Using Figure~\ref{figure:LB} again, the equation $\mathcal{L}_B \mathcal{L}_B (\alpha_7 + \alpha_{14}) = \mathcal{L}_B \omega = 0$ gives
\begin{align*}
0 & = \mathcal{L}_B \big( - 2 D^7_1 \alpha_7 - 3 D^{14}_7 \alpha_{14} + (- 2 D^7_{27} \alpha_7 + D^{14}_{27} \alpha_{14} ) \big) \\
& = 3 D^1_7( - 2 D^7_1 \alpha_7) - 6 D^7_7 ( - 3 D^{14}_7 \alpha_{14} ) + 4 D^{27}_7 (- 2 D^7_{27} \alpha_7 + D^{14}_{27} \alpha_{14} ) \\
& = -6 D^1_7D^7_1\alpha_7 + 18 D^7_7D^{14}_7 \alpha_{14} - 8 D^{27}_7 D^7_{27} \alpha_7 + 4 D^{27}_7 D^{14}_{27} \alpha_{14}.
\end{align*}
Using the relations~\eqref{eq:d-relations}, we can rewrite the above expression in two different ways, both of which will be useful. These are
\begin{align}
-6 D^1_7 D^7_1 \alpha_7 - 8 D^{27}_7 D^7_{27} \alpha_7 + 24 D^7_7 D^{14}_7 \alpha_{14} & = 0, \label{eq:Hph4-2temp} \\
-14 D^1_7 D^7_1 \alpha_7 + 18 D^7_7 D^7_7 \alpha_7 + 24 D^7_7 D^{14}_7 \alpha_{14} & = 0. \label{eq:Hph4-2temp2}
\end{align}
Applying $D^7_1$ to~\eqref{eq:Hph4-2temp2} and using $D^7_1 D^7_7 = 0$, we deduce that
\begin{equation*}
14 D^7_1 D^1_7 D^7_1 \alpha_7 = (D^7_1 D^7_7)(18 D^7_7 \alpha_7 + 24 D^{14}_7 \alpha_{14}) = 0.
\end{equation*}
Thus we have $D^7_1 D^1_7 D^7_1 \alpha_7 = 0$. Applying Remark~\ref{rmk:IBP} twice, we deduce first that $D^1_7 D^7_1 \alpha_7 = 0$ and then that
\begin{equation}
D^7_1 \alpha_7 = 0. \label{eq:Hph4-2temp3}
\end{equation}
Comparing~\eqref{eq:Hph4-2temp3} and~\eqref{eq:Hph4-2LBo} we find that $\omega_1 = 0$. Since $\omega \in (\operatorname{im} \mathrm{d})^4$, it is $\mathrm{d}$-closed. Using Figures~\ref{figure:d} and~\ref{figure:LB}, the conditions $\pi_7 \mathrm{d} \omega = 0$ and $\mathcal{L}_B \omega = 0$ give, respectively,
\begin{align*}
2 D^7_7 \omega_7 + \tfrac{4}{3} D^{27}_7 \omega_{27} & = 0, \\
-6 D^7_7 \omega_7 + 4 D^{27}_7 \omega_{27} & = 0.
\end{align*}
These two equations together force
\begin{equation} \label{eq:Hph4-2temp4}
D^7_7 \omega_7 = 0 \qquad \text{and} \qquad D^{27}_7 \omega_{27} = 0.
\end{equation}
Also, from~\eqref{eq:Hph4-2LBo} we have $\omega_7 = - 3 D^{14}_7 \alpha_{14}$, and thus since $D^7_1 D^{14}_7 = 0$ we deduce that
\begin{equation} \label{eq:Hph4-2temp5}
D^7_1 \omega_7 = 0.
\end{equation}
Combining the first equation in~\eqref{eq:Hph4-2temp4} with~\eqref{eq:Hph4-2temp5} we find by Theorem~\ref{thm:harmonic1} that, considered as a $1$-form, $\omega_7 \in \mathcal{H}^1$ and in particular
\begin{equation} \label{eq:Hph4-2temp6}
D^7_{14} \omega_7 = 0 \qquad \text{and} \qquad D^7_{27} \omega_7 = 0.
\end{equation}
From Figure~\ref{figure:d}, the condition $\pi_{14} \mathrm{d} \omega = 0$ gives $- D^7_{14} \omega_7 + D^{27}_{14} \omega_{27} = 0$, which, by the first equation in~\eqref{eq:Hph4-2temp6} implies that
\begin{equation} \label{eq:Hph4-2temp7}
D^{27}_{14} \omega_{27} = 0.
\end{equation}
Recalling from~\eqref{eq:Hph4-2LBo} that $\omega_7 = - 3 D^{14}_7 \alpha_{14}$, substituting~\eqref{eq:Hph4-2temp3} into~\eqref{eq:Hph4-2temp} and using the first equation in~\eqref{eq:Hph4-2temp4} now gives
\begin{equation*}
0 = -8 D^{27}_7 D^7_{27} \alpha_7 - 8 D^7_7 \omega_7 = -8 D^{27}_7 D^7_{27} \alpha_7,
\end{equation*}
which by Remark~\ref{rmk:IBP} implies that
\begin{equation} \label{eq:Hph4-2temp8}
D^7_{27} \alpha_7 = 0.
\end{equation}
Combining~\eqref{eq:Hph4-2temp8} with~\eqref{eq:Hph4-2temp3} and using Theorem~\ref{thm:harmonic1}, we find that $\alpha_7$ is harmonic.
Recalling from~\eqref{eq:Hph4-2LBo} that $\omega_{27} = - 2 D^7_{27} \alpha_7 + D^{14}_{27} \alpha_{14}$, substituting~\eqref{eq:Hph4-2temp8} and taking $D^{27}_{27}$, we obtain by the relations in~\eqref{eq:d-relations} that
\begin{equation*}
D^{27}_{27} \omega_{27} = D^{27}_{27} D^{14}_{27} \alpha_{14} = D^7_{27} D^{14}_7 \alpha_{14}.
\end{equation*}
Substituting $D^{14}_7 \alpha_{14} = - \tfrac{1}{3} \omega_7$ from~\eqref{eq:Hph4-2LBo} into the above expression and using the second equation in~\eqref{eq:Hph4-2temp6}, we find that
\begin{equation} \label{eq:Hph4-2temp9}
D^{27}_{27} \omega_{27} = - \tfrac{1}{3} D^7_{27} \omega_{7} = 0.
\end{equation}
Combining the second equation in~\eqref{eq:Hph4-2temp4}, equation~\eqref{eq:Hph4-2temp7}, and~\eqref{eq:Hph4-2temp9}, with equation~\eqref{eq:Laplacian}, we deduce that $\omega_{27}$ is a harmonic $\Omega^4_{27}$ form. We already showed that $\omega_7$ is a harmonic $\Omega^4_7$ form, and that $\omega_1 = 0$. Thus we have $\omega \in\mathcal{H}^4$ and moreover we assumed that $\omega \in (\operatorname{im} \mathrm{d})^4$. By Hodge theory we conclude that $\omega = 0$ as claimed.
\end{proof}
\begin{prop} \label{prop:Hph4}
We have $H^4_{\varphi} \cong \mathcal{H}^4 \oplus \big( (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal{L}_B)^4 \big)$.
\end{prop}
\begin{proof}
In the proof of Proposition~\ref{prop:Hph3}, we showed that
\begin{equation*}
H^3_{\varphi} = (\ker \mathcal{L}_B)^3 = \mathcal{H}^3 \oplus \big( (\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal{L}_B)^3 \big).
\end{equation*}
We also have $\mathcal{H}^3 \subset (\ker \mathcal{L}_B)^3$ by~\eqref{eq:LBLKonH}. Thus
\begin{equation*}
\mathcal{H}^3 \subseteq (\ker \mathcal{L}_B)^3 \subseteq \mathcal{H}^3 \oplus (\operatorname{im} \dop{\dd})^3.
\end{equation*}
Taking orthogonal complements of the above chain of nested subspaces and using the Hodge decomposition $\Omega^3 = \mathcal{H}^3 \oplus (\operatorname{im} \mathrm{d})^3 \oplus (\operatorname{im} \dop{\dd})^3$, we find
\begin{equation*}
(\operatorname{im} \mathrm{d})^3 \oplus (\operatorname{im} \dop{\dd})^3 \supseteq ((\ker \mathcal{L}_B)^3)^{\perp} \supseteq (\operatorname{im} \mathrm{d})^3.
\end{equation*}
Taking the Hodge star of the above chain of nested subspaces and using $(\operatorname{im} \mathcal{L}_B)^4 = \ast ((\ker \mathcal{L}_B)^3)^{\perp}$ from~\eqref{eq:LBregular} we obtain
\begin{equation*}
(\operatorname{im} \dop{\dd})^4 \subseteq (\operatorname{im} \mathcal{L}_B)^4 \subseteq (\operatorname{im} \dop{\dd})^4 \oplus (\operatorname{im} \mathrm{d})^4.
\end{equation*}
Applying Lemma~\ref{lemma:linalg}(i) to the above yields
\begin{equation} \label{eq:Hph4temp}
(\operatorname{im} \mathcal{L}_B)^4 = (\operatorname{im} \dop{\dd})^4 \oplus \big( (\operatorname{im} \mathrm{d})^4 \cap (\operatorname{im} \mathcal{L}_B)^4 \big).
\end{equation}
Now recall that $\mathcal{H}^4 \subseteq (\ker \mathcal{L}_B)^4$ by~\eqref{eq:LBLKonH}. Thus we have
\begin{equation*}
\mathcal{H}^4 \subseteq (\ker \mathcal{L}_B)^4 \subseteq \Omega^4 = \mathcal{H}^4 \oplus (\operatorname{im} \mathrm{d})^4 \oplus (\operatorname{im} \dop{\dd})^4.
\end{equation*}
Applying Lemma~\ref{lemma:linalg}(i) to the above and using Lemma~\ref{lemma:Hph4-1} gives
\begin{equation} \label{eq:Hph4temp2}
(\ker \mathcal{L}_B)^4 = \mathcal{H}^4 \oplus \big( (\operatorname{im} \dop{\dd})^4 \cap (\ker \mathcal{L}_B)^4\big) \oplus \big( (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal{L}_B)^4 \big).
\end{equation}
Thus, applying Lemma~\ref{lemma:linalg}(ii) to~\eqref{eq:Hph4temp},~\eqref{eq:Hph4temp2}, as subspaces of $\Omega^4 = \mathcal{H}^4 \oplus (\operatorname{im} \mathrm{d})^4 \oplus (\operatorname{im} \dop{\dd})^4$, we obtain
\begin{equation} \label{eq:Hph4temp3}
(\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \mathcal{L}_B)^4 = \big( (\operatorname{im} \dop{\dd})^4 \cap (\ker \mathcal{L}_B)^4 \big) \oplus \big( (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \mathcal{L}_B)^4 \big).
\end{equation}
By Lemma~\ref{lemma:Hph4-2}, equation~\eqref{eq:Hph4temp3} simplifies to
\begin{equation} \label{eq:Hph4temp4}
(\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \mathcal{L}_B)^4 = (\operatorname{im} \dop{\dd})^4 \cap (\ker \mathcal{L}_B)^4.
\end{equation}
Hence, by~\eqref{eq:Hph4temp2} and~\eqref{eq:Hph4temp4}, we have
\begin{align*}
H^4_{\varphi} & = \frac{(\ker \mathcal{L}_B)^4}{(\ker \mathcal{L}_B)^4 \cap (\operatorname{im} \mathcal{L}_B)^4} \\
& = \frac{\mathcal{H}^4 \oplus \big( (\operatorname{im} \dop{\dd})^4 \cap (\ker \mathcal{L}_B)^4 \big) \oplus \big( (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal{L}_B)^4 \big)}{(\operatorname{im} \dop{\dd})^4 \cap (\ker \mathcal{L}_B)^4}\\
&\cong \mathcal{H}^4 \oplus \big( (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal{L}_B)^4 \big)
\end{align*}
as claimed.
\end{proof}
\begin{lemma} \label{lemma:k3}
We have $(\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal L_B)^3=(\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal{L}^*_B)^3$.
\end{lemma}
\begin{proof}
Let $\omega = \dop{\dd} (\gamma_1 + \gamma_7 + \gamma_{27}) \in (\operatorname{im} \dop{\dd})^3$ where $\gamma_1 + \gamma_7 + \gamma_{27} \in \Omega^4_1 \oplus \Omega^4_7 \oplus \Omega^4_{27}$. From $\dop{\dd} = \ast \mathrm{d} \ast$ on $\Omega^4$ and Figure~\ref{figure:d} we find that
\begin{equation} \label{eq:k3temp}
\dop{\dd} (\gamma_1 + \gamma_7 + \gamma_{27}) = \tfrac{4}{3} D^7_1 \gamma_7 + (- D^1_7 \gamma_1 - \tfrac{3}{2} D^7_7 \gamma_7 + D^{27}_7 \gamma_{27}) + (-D^7_{27} \gamma_7 + D^{27}_{27} \gamma_{27}).
\end{equation}
Using~\eqref{eq:k3temp} and Figure~\ref{figure:LB}, we have
\begin{align*}
\mathcal L_B \dop{\dd} (\gamma_1 + \gamma_7 + \gamma_{27}) & = - 2 D^1_7 ( \tfrac{4}{3} D^7_1 \gamma_7 ) + 3 D^7_{14} ( - D^1_7 \gamma_1 - \tfrac{3}{2} D^7_7 \gamma_7 + D^{27}_7 \gamma_{27}) \\
& \qquad {} + (-\tfrac{8}{3} D^{27}_7 + D^{27}_{14}) (-D^7_{27} \gamma_7 + D^{27}_{27} \gamma_{27}) \\
& = (-\tfrac{8}{3} D^1_7 D^7_1 \gamma_7 + \tfrac{8}{3} D^{27}_7 D^7_{27} \gamma_7 - \tfrac{8}{3} D^{27}_7 D^{27}_{27} \gamma_{27}) \\
& \qquad {} + (-3 D^7_{14} D^1_7 \gamma_1 - \tfrac{9}{2} D^7_{14} D^7_7 \gamma_7 + 3 D^7_{14} D^{27}_7 \gamma_{27} - D^{27}_{14} D^7_{27} \gamma_7 + D^{27}_{14} D^{27}_{27} \gamma_{27}).
\end{align*}
Using the various relations in~\eqref{eq:d-relations}, the above expression simplifies to
\begin{equation} \label{eq:k3temp2}
\begin{aligned}
\mathcal L_B \dop{\dd} (\gamma_1 + \gamma_7 + \gamma_{27}) & = -6 D^7_7 D^7_7 \gamma_7 + 4 D^7_7 D^{27}_7 \gamma_{27} - 6 D^7_{14} D^7_7 \gamma_7 + 4 D^7_{14} D^{27}_7 \gamma_{27} \\
& = 2 D^7_7 ( -3 D^7_7 \gamma_7 + 2 D^{27}_7 \gamma_{27}) + 2 D^7_{14} ( -3 D^7_7 \gamma_7 + 2 D^{27}_7 \gamma_{27}).
\end{aligned}
\end{equation}
Using $\mathcal L_B^* = - \ast \mathcal{L}_B \ast$ from~\eqref{eq:derivationsstar}, equation~\eqref{eq:k3temp}, and Figure~\ref{figure:LB} again, we also have that
\begin{align*}
\mathcal{L}^*_B \dop{\dd} (\gamma_1 + \gamma_7 + \gamma_{27}) & = - 3 D^1_7 ( \tfrac{4}{3} D^7_1 \gamma_7 ) + 6 D^7_7 ( - D^1_7 \gamma_1 - \tfrac{3}{2} D^7_7 \gamma_7 + D^{27}_7 \gamma_{27}) \\
& \qquad {} - 4 D^{27}_7 (-D^7_{27} \gamma_7 + D^{27}_{27} \gamma_{27}) \\
& = -4 D^1_7 D^7_1 \gamma_7 - 6 D^7_7 D^1_7 \gamma_1 - 9 D^7_7 D^7_7 \gamma_7 + 6 D^7_7 D^{27}_7 \gamma_{27} \\
& \qquad {} + 4 D^{27}_7 D^7_{27} \gamma_7 - 4 D^{27}_7 D^{27}_{27} \gamma_{27}.
\end{align*}
Using the various relations in~\eqref{eq:d-relations}, the above expression simplifies to
\begin{equation} \label{eq:k3temp3}
\begin{aligned}
\mathcal{L}^*_B \dop{\dd} (\gamma_1 + \gamma_7 + \gamma_{27}) & = -18 D^7_7 D^7_7 \gamma_7 + 12 D^7_7 D^{27}_7 \gamma_{27} \\
& = 6 D^7_7 (-3 D^7_7 \gamma_7 + 2 D^{27}_7 \gamma_{27}).
\end{aligned}
\end{equation}
Thus for $\omega \in (\operatorname{im} \dop{\dd})^3$ we conclude that
\begin{align*}
\omega \in (\ker \mathcal L_B)^3 & \iff
\left\{
\begin{aligned}
D^7_7 (-3 D^7_7 \gamma_7 + 2 D^{27}_7 \gamma_{27}) & = 0, \\
D^7_{14} (-3 D^7_7 \gamma_7 + 2 D^{27}_7 \gamma_{27}) & = 0,
\end{aligned}
\right\} & & \text{by~\eqref{eq:k3temp2}} \\
& \iff D^7_7 (- 3 D^7_7 \gamma_7 + 2 D^{27}_7 \gamma_{27}) = 0 & & \text{by Theorem~\ref{thm:harmonic1}} \\
& \iff \omega \in (\ker \mathcal{L}^*_B)^3 & & \text{by~\eqref{eq:k3temp3}}
\end{align*}
which is what we wanted to show.
\end{proof}
\begin{cor} \label{H34cor}
We have
\begin{align*}
H^3_{\varphi} & = \mathcal{H}^3 \oplus \big( (\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal L_B)^3 \cap (\ker \mathcal{L}^*_B)^3 \big), \\
H^4_{\varphi} & = \mathcal{H}^4 \oplus \big( (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal L_B)^4 \cap (\ker \mathcal{L}^*_B)^4 \big).
\end{align*}
\end{cor}
\begin{proof}
Lemma~\ref{lemma:k3} says that
\begin{equation*}
(\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal L_B)^3 =(\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal L_B)^3 \cap (\ker \mathcal{L}^*_B)^3 = (\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal{L}^*_B)^3.
\end{equation*}
Applying $\ast$ to the above equation and using~\eqref{eq:derivationsstar} gives
\begin{equation*}
(\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal L_B)^4 = (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal L_B)^4 \cap (\ker \mathcal{L}^*_B)^4 = (\operatorname{im} \mathrm{d})^4 \cap (\ker \mathcal{L}^*_B)^4.
\end{equation*}
The claim now follows from Propositions~\ref{prop:Hph3} and~\ref{prop:Hph4}.
\end{proof}
\subsection{The main theorem on $\mathcal L_B$-cohomology} \label{sec:Hphthm}
We summarize the results of Section~\ref{sec:cohom} in the following theorem, which is intentionally stated in a way to mirror Theorem~\ref{thm:KLS}.
\begin{thm} \label{thm:Hph} The following relations hold.
\begin{itemize} \setlength\itemsep{-1mm}
\item $H^k_{\varphi} \cong H^k_{dR}$ for $k=0,1,2,5,6,7$.
\item $H^k_{\varphi}$ is infinite-dimensional for $k = 3,4$.
\item There is a canonical injection $\mathcal{H}^k \hookrightarrow H^k_{\varphi}$ for all $k$.
\item The Hodge star induces isomorphisms $\ast: H^k_{\varphi} \cong H^{7-k}_{\varphi}$.
\end{itemize}
\end{thm}
\begin{proof}
All that remains to show is that $H^3_{\varphi}$ is indeed infinite-dimensional. But observe by~\eqref{eq:k3temp2} that for all $\alpha \in \Omega^4_1$, we have $\mathcal L_B \dop{\dd} \alpha = 0$. Therefore, $\{ \dop{\dd} \alpha : \alpha \in \Omega^4_1 \} \cong \operatorname{im} D^1_7 \cong (\operatorname{im} \mathrm{d})^1$ is an infinite-dimensional subspace of $(\operatorname{im} \dop{\dd})^3 \cap (\ker \mathcal L_B)^3 \subseteq H^3_{\varphi}$.
\end{proof}
\section{An application to `almost' formality} \label{sec:formality}
In this section we consider an application of our results to the question of formality of compact torsion-free $\mathrm{G}_2$~manifolds. We discover a new topological obstruction to the existence of torsion-free $\mathrm{G}_2$-structures on compact manifolds, and discuss an explicit example in detail.
\subsection{Formality and Massey triple products} \label{sec:Massey}
Recall from~\eqref{eq:commdL} that $\mathrm{d}$ commutes with $\mathcal L_B$. Hence $\mathrm{d}$ induces a natural map
\begin{equation*}
\mathrm{d}: H^k_{\varphi} \to H^{k+1}_{\varphi}.
\end{equation*}
Also, because $\mathcal L_B$ is a derivation, it is easy to check that the wedge product on $\Omega^{\bullet}$ descends to $H^{\bullet}_{\varphi}$, with the Leibniz rule $\mathrm{d} (\omega \wedge \eta) = (\mathrm{d} \omega) \wedge \eta + (-1)^{|\omega|} \omega \wedge (\mathrm{d} \eta)$ still holding on $H^{\bullet}_{\varphi}$. These two facts say that the complex $(H^{\bullet}_{\varphi}, \mathrm{d})$ is a \emph{differential graded algebra}, henceforth abbreviated \emph{dga}.
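For completeness we record the routine check that the product descends; the sign in the Leibniz rule for $\mathcal L_B$ plays no role here. If $\mathcal L_B \omega = 0$ and $\mathcal L_B \eta = 0$, then $\mathcal L_B (\omega \wedge \eta) = (\mathcal L_B \omega) \wedge \eta \pm \omega \wedge (\mathcal L_B \eta) = 0$, so $(\ker \mathcal L_B)^{\bullet}$ is a subalgebra of $\Omega^{\bullet}$. If moreover $\omega = \mathcal L_B \alpha$, then
\begin{equation*}
\omega \wedge \eta = (\mathcal L_B \alpha) \wedge \eta = \mathcal L_B (\alpha \wedge \eta) \mp \alpha \wedge (\mathcal L_B \eta) = \mathcal L_B (\alpha \wedge \eta) \in (\operatorname{im} \mathcal L_B)^{\bullet},
\end{equation*}
so $(\ker \mathcal L_B)^{\bullet} \cap (\operatorname{im} \mathcal L_B)^{\bullet}$ is an ideal in $(\ker \mathcal L_B)^{\bullet}$, and the wedge product is therefore well defined on the quotient $H^{\bullet}_{\varphi}$.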
Additionally, because $[ \mathrm{d}, \mathcal{L}_B ] = 0$, we also have that $( (\ker \mathcal L_B)^{\bullet}, \mathrm{d})$ is a subcomplex of the de Rham complex $( \Omega^{\bullet}, \mathrm{d})$. The natural injection and projection give homomorphisms of dga's
\begin{equation*}
(\Omega^{\bullet}, \mathrm{d}) \hookleftarrow ((\ker \mathcal L_B)^{\bullet}, \mathrm{d}) \twoheadrightarrow (H^{\bullet}_{\varphi}, \mathrm{d}).
\end{equation*}
One goal of this section is to show that these two homomorphisms of dga's are both \emph{quasi-isomorphisms}. This means that they induce \emph{isomorphisms} on the cohomologies of the complexes. As mentioned in the introduction, some of the results in this section appeared earlier in work of Verbitsky~\cite{Verbitsky}. For example, our Proposition~\ref{prop:Verbitsky} is exactly~\cite[Proposition 2.21]{Verbitsky}, with the same proof. However, the proof of~\cite[Proposition 2.19]{Verbitsky} has several errors. The critical error is the following: first Verbitsky correctly shows that $\alpha - \Pi \alpha$ is an element of both $(\operatorname{im} \mathrm{d}_c + \operatorname{im} \dop{\dd}_c)$ and $(\operatorname{im} \dop{\dd}_c)^{\perp}$. But then he incorrectly concludes that $\alpha - \Pi \alpha$ must be an element of $\operatorname{im} \mathrm{d}_c$. This conclusion is only valid if $(\mathrm{d}_c)^2 = 0$, which is not true in general. We give a correct proof of this result, which is our Proposition~\ref{prop:quasi-isom}. One consequence is the result about the Massey triple product in our Corollary~\ref{cor:Massey}, which appears to be new.
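The logical gap can be seen already in a finite-dimensional toy example, which we include only to make the point transparent. In $\mathbb{R}^2$ with the standard inner product, let $U = \operatorname{span}(e_1 + e_2)$ and $V = \operatorname{span}(e_2)$. Then
\begin{equation*}
e_1 \in (U + V) \cap V^{\perp} \qquad \text{but} \qquad e_1 \notin U.
\end{equation*}
Thus knowing that $\alpha - \Pi \alpha$ lies in $\operatorname{im} \mathrm{d}_c + \operatorname{im} \dop{\dd}_c$ and is orthogonal to $\operatorname{im} \dop{\dd}_c$ does not by itself force it to lie in $\operatorname{im} \mathrm{d}_c$; one needs the two images to be orthogonal, which would follow from $(\mathrm{d}_c)^2 = 0$ but need not hold in general.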
\begin{prop}[Verbitsky~\cite{Verbitsky}] \label{prop:Verbitsky}
The inclusion $((\ker \mathcal L_B)^{\bullet}, \mathrm{d}) \hookrightarrow (\Omega^{\bullet}, \mathrm{d})$ is a quasi-isomorphism.
\end{prop}
\begin{proof}
This is proved in~\cite[Proposition 2.11]{Verbitsky}. We reproduce the short proof here for completeness and convenience of the reader. Since the differential for both complexes $\Omega^{\bullet}$ and $(\ker \mathcal L_B)^{\bullet}$ is the same exterior derivative $\mathrm{d}$, we will omit it from the notation for simplicity.
It is well-known that the Hodge Laplacian $\Delta$ determines an eigenspace decomposition $\Omega^k = \oplus_{\lambda} \Omega^k_{\lambda}$ where the sum is over all eigenvalues $\lambda$ of $\Delta$, which form a discrete set of non-negative real numbers, and $\Omega^k_{\lambda} = \{ \alpha \in \Omega^k : \Delta \alpha = \lambda \alpha \}$ are the associated eigenspaces. Note that $\Omega^k_0 = \mathcal{H}^k$ is the space of harmonic $k$-forms. It is well-known that the cohomology of $\Omega^k_{\lambda}$ is trivial for $\lambda > 0$. This is because, if $\alpha \in \Omega^k_{\lambda}$ with $\lambda > 0$ and $\mathrm{d} \alpha = 0$, then
\begin{equation} \label{eq:Verbitsky-temp}
\alpha = \tfrac{1}{\lambda} \Delta \alpha = \tfrac{1}{\lambda} (\mathrm{d} \dop{\dd} \alpha + \dop{\dd} \mathrm{d} \alpha) = \mathrm{d} (\tfrac{1}{\lambda} \dop{\dd} \alpha)
\end{equation}
is exact.
By~\eqref{eq:LBLKLap}, the operator $\mathcal L_B$ commutes with $\Delta$, and thus we obtain a decomposition
\begin{equation*}
(\ker \mathcal L_B)^k = \oplus_{\lambda} \big( \Omega^k_{\lambda} \cap (\ker \mathcal L_B)^k \big).
\end{equation*}
Note by~\eqref{eq:LBLKonH} that $\Omega^k_0 \cap (\ker \mathcal L_B)^k = \mathcal{H}^k \cap (\ker \mathcal L_B)^k = \mathcal{H}^k = \Omega^k_0$. Thus it remains to show that the cohomology of $\Omega^k_{\lambda} \cap (\ker \mathcal L_B)^k$ is also trivial for all $\lambda > 0$. But if $\alpha \in \Omega^k_{\lambda} \cap (\ker \mathcal L_B)^k$, we have $\mathcal L_B \alpha = 0$ and $\alpha = \mathrm{d} (\tfrac{1}{\lambda} \dop{\dd} \alpha)$ by~\eqref{eq:Verbitsky-temp}. Since $\mathcal L_B$ commutes with $\dop{\dd}$ by~\eqref{eq:LBLKdscom}, we have $\mathcal L_B (\tfrac{1}{\lambda} \dop{\dd} \alpha) = \tfrac{1}{\lambda} \dop{\dd} \mathcal L_B \alpha = 0$, so the class of $\alpha$ in the cohomology of $(\ker \mathcal L_B)^k$ is indeed trivial.
\end{proof}
In Section~\ref{sec:cohom}, while computing $H^{\bullet}_{\varphi}$, we explicitly computed the complex $((\ker \mathcal L_B)^{\bullet}, \mathrm{d})$. The results are collected in Figure~\ref{figure:complex1}. The isomorphisms displayed in Figure~\ref{figure:complex1} are explained in Corollary~\ref{cor:formality}.
\begin{figure}
\caption{The complex $((\ker \mathcal L_B)^{\bullet}, \mathrm{d})$}
\label{figure:complex1}
\end{figure}
\begin{cor} \label{cor:formality}
For all $0 \leq k \leq 7$, we have $(\operatorname{im} \mathrm{d})^k \cap (\ker \mathcal L_B)^k = \mathrm{d} (\ker \mathcal L_B)^{k-1}$.
\end{cor}
\begin{proof}
Let $\Omega^k = \mathcal{H}^k \oplus (\operatorname{im} \mathrm{d})^k \oplus (\operatorname{im} \dop{\dd})^k$ denote the Hodge decomposition of $\Omega^k$. For simplicity in this proof we will write $A^k = \mathcal{H}^k$, $B^k = (\operatorname{im} \mathrm{d})^k$, and $C^k = (\operatorname{im} \dop{\dd})^k$. Thus $\Omega^k = A^k \oplus B^k \oplus C^k$. We can see from Figure~\ref{figure:complex1} that for all $0 \leq k \leq 7$, we have $(\ker \mathcal L_B)^k = A^k \oplus \tilde B^k \oplus \tilde C^k$, where $\tilde B^k$ and $\tilde C^k$ are subspaces of $B^k$ and $C^k$, respectively. Depending on $k$, we can have $\tilde B^k = 0$, $\tilde B^k = B^k$, or $0 \subsetneq \tilde B^k \subsetneq B^k$ and similarly for $\tilde C^k$. By Hodge theory, $(\ker \mathrm{d})^k = A^k \oplus B^k$, so applying Lemma~\ref{lemma:linalg}(ii) we find that
\begin{equation} \label{eq:formalitycor1}
(\ker \mathcal L_B)^k \cap (\ker \mathrm{d})^k = A^k \oplus \tilde B^k.
\end{equation}
Applying $\mathrm{d}$ to $(\ker \mathcal L_B)^{k-1}$, we have
\begin{equation} \label{eq:formalitycor2}
\mathrm{d} (\ker \mathcal L_B)^{k-1} = \mathrm{d} (\tilde C^{k-1}) \subseteq \tilde B^k.
\end{equation}
By Proposition~\ref{prop:Verbitsky}, the cohomology of $(\Omega^k, \mathrm{d})$ equals the cohomology of $( (\ker \mathcal L_B)^k, \mathrm{d})$. But by Hodge theory the cohomology of $(\Omega^k, \mathrm{d})$ is $\mathcal{H}^k = A^k$, and equations~\eqref{eq:formalitycor1} and~\eqref{eq:formalitycor2} say that the cohomology of $( (\ker \mathcal L_B)^k, \mathrm{d})$ is $A^k \oplus \big( \tilde B^k / (\mathrm{d} \tilde C^{k-1}) \big)$. Thus in fact we have $\mathrm{d} \tilde C^{k-1} = \tilde B^k$, and since $\mathrm{d}$ is injective on $C^{k-1}$, we deduce that
\begin{equation} \label{eq:formalitycor3}
\text{$\mathrm{d}$ maps $\tilde C^{k-1}$ isomorphically onto $\tilde B^k$ for all $0 \leq k \leq 7$.}
\end{equation}
From $(\operatorname{im} \mathrm{d})^k \cap (\ker \mathcal L_B)^k = \tilde B^k$, and $\mathrm{d} (\ker \mathcal L_B)^{k-1} = \mathrm{d} (A^{k-1} \oplus \tilde B^{k-1} \oplus \tilde C^{k-1}) = \mathrm{d} \tilde C^{k-1}$, we conclude that $(\operatorname{im} \mathrm{d})^k \cap (\ker \mathcal L_B)^k = \mathrm{d} (\ker \mathcal L_B)^{k-1}$ as claimed.
\end{proof}
\begin{rmk} \label{rmk:Kahler}
Corollary~\ref{cor:formality} may be related to a $\mathrm{G}_2$-analogue of the \emph{generalized} $\partial \bar \partial$-lemma, called the $\mathrm{d} \mathcal{L}_J$-lemma, introduced by the authors in~\cite{CKT} in the context of $\U{m}$-structures. See~\cite[Equation (3.27)]{CKT}.
\end{rmk}
\begin{prop} \label{prop:quasi-isom}
The quotient map $((\ker \mathcal L_B)^{\bullet}, \mathrm{d}) \twoheadrightarrow (H^{\bullet}_{\varphi}, \mathrm{d})$ is a quasi-isomorphism.
\end{prop}
\begin{proof}
We have a short exact sequence of chain complexes
\begin{equation*}
0 \to ((\ker \mathcal L_B)^{\bullet} \cap (\operatorname{im} \mathcal L_B)^{\bullet}, \mathrm{d}) \to ((\ker \mathcal L_B)^{\bullet}, \mathrm{d}) \twoheadrightarrow (H^{\bullet}_{\varphi}, \mathrm{d}) \to 0,
\end{equation*}
so it suffices to show that the cohomology of $((\ker \mathcal L_B)^{\bullet} \cap (\operatorname{im} \mathcal L_B)^{\bullet}, \mathrm{d})$ is trivial. In Section~\ref{sec:cohom}, while computing $H^{\bullet}_{\varphi}$, we explicitly computed the complex $((\ker \mathcal L_B)^{\bullet} \cap (\operatorname{im} \mathcal L_B)^{\bullet}, \mathrm{d})$. The results are collected in Figure~\ref{figure:complex2}. The isomorphisms in Figure~\ref{figure:complex2} are a subset of the isomorphisms from Figure~\ref{figure:complex1} and are coloured in the same way. It is clear from Figure~\ref{figure:complex2} that the cohomology of $((\ker \mathcal L_B)^{\bullet} \cap (\operatorname{im} \mathcal L_B)^{\bullet}, \mathrm{d})$ is trivial.
\end{proof}
\begin{figure}
\caption{The complex $((\ker \mathcal L_B \cap \operatorname{im} \mathcal L_B)^{\bullet}, \mathrm{d})$}
\label{figure:complex2}
\end{figure}
The next two definitions are taken from~\cite[Section 3.A]{Huybrechts}.
\begin{defn} \label{defn:equiv}
Let $(A, \mathrm{d}_A)$ and $(B, \mathrm{d}_B)$ be two differential graded algebras (dga's). We say that $A$ and $B$ are \emph{equivalent} if there exists a finite sequence of \emph{dga quasi-isomorphisms}
\begin{equation*}
\xymatrix {
& (C_1, \mathrm{d}_{C_1}) \ar[ld] \ar[rd] & & \cdots \ar[ld] \ar [rd] & & (C_n, \mathrm{d}_{C_n}) \ar[ld] \ar[rd] & \\
(A, \mathrm{d}_A) & & (C_2, \mathrm{d}_{C_2}) & & \cdots & & (B, \mathrm{d}_B).\\
}
\end{equation*}
A dga $(A, \mathrm{d}_A)$ is called \emph{formal} if it is equivalent to a dga $(B, \mathrm{d}_B)$ with $\mathrm{d}_B = 0$.
\end{defn}
It is well-known~\cite[Section 3.A]{Huybrechts} that a \emph{compact K\"ahler manifold} is formal. That is, the de Rham complex of a compact K\"ahler manifold is equivalent to a dga with zero differential. It is still an open question whether or not compact torsion-free $\mathrm{G}_2$~manifolds are formal. We show in Theorem~\ref{thm:almost-formal} below that compact torsion-free $\mathrm{G}_2$~manifolds are `almost formal' in the sense that the de Rham complex is equivalent to a dga which has \emph{only one nonzero differential}.
\begin{thm} \label{thm:almost-formal}
The de Rham complex of a compact torsion-free $\mathrm{G}_2$~manifold $(\Omega^{\bullet}, \mathrm{d})$ is equivalent to $(H^{\bullet}_{\varphi}, \mathrm{d})$, which is a dga with all differentials trivial except for $\mathrm{d}: H^3_{\varphi} \to H^4_{\varphi}$.
\end{thm}
\begin{proof}
In Section~\ref{sec:cohom}, we explicitly computed the complex $(H^{\bullet}_{\varphi}, \mathrm{d})$. The results are collected in Figure~\ref{figure:complex3}. The isomorphism in Figure~\ref{figure:complex3} appeared already in Figure~\ref{figure:complex1} and is coloured in the same way. The zero maps in Figure~\ref{figure:complex3} are a consequence of $\mathcal{H}^k \subseteq (\ker \mathrm{d})^k$.
\end{proof}
\begin{figure}
\caption{The complex $(H^{\bullet}_{\varphi}, \mathrm{d})$}
\label{figure:complex3}
\end{figure}
One consequence of almost-formality is that \emph{most of the Massey triple products of the de Rham complex will vanish}. This is established in Corollary~\ref{cor:Massey} below.
\begin{defn} \label{defn:masseytriprod}
Let $(A,\mathrm{d}_A)$ be a dga, and denote by $H^k (A)$ the degree $k$ cohomology of $A$ with respect to $\mathrm{d}_A$. Let $[\alpha]\in H^p(A)$, $[\beta] \in H^q(A)$, $[\gamma] \in H^r(A)$ be cohomology classes satisfying
\begin{equation*}
[\alpha] [\beta] = 0 \in H^{p+q}(A) \qquad \text{and} \qquad [\beta] [\gamma] = 0 \in H^{q+r}(A).
\end{equation*}
Then $\alpha \beta =\mathrm{d} f$ and $\beta \gamma = \mathrm{d} g$ for some $f \in A^{p+q-1}$ and $g \in A^{q+r-1}$. Consider the class
\begin{equation*}
[f \gamma - (-1)^p \alpha g] \in H^{p+q+r-1}(A).
\end{equation*}
It can be checked that this class is well-defined up to an element of $H^{p+q-1} \cdot H^r + H^p \cdot H^{q+r-1}$. That is, it is well defined as an element of the quotient
\begin{equation*}
\frac{H^{p+q+r-1}(A)}{H^{p+q-1} \cdot H^r + H^p \cdot H^{q+r-1}}.
\end{equation*}
We call this element the \emph{Massey triple product} and write it as $\langle [\alpha], [\beta], [\gamma] \rangle$. It is easy to see that the Massey triple product is linear in each of its three arguments.
\end{defn}
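For instance, the representative in Definition~\ref{defn:masseytriprod} is indeed closed: using $\alpha \beta = \mathrm{d} f$, $\beta \gamma = \mathrm{d} g$, and the fact that $\alpha$ and $\gamma$ are closed, we compute
\begin{align*}
\mathrm{d} \big( f \gamma - (-1)^p \alpha g \big) & = (\mathrm{d} f) \gamma + (-1)^{p+q-1} f (\mathrm{d} \gamma) - (-1)^p (\mathrm{d} \alpha) g - (-1)^{2p} \alpha (\mathrm{d} g) \\
& = (\alpha \beta) \gamma - \alpha (\beta \gamma) = 0,
\end{align*}
so it does define a class in $H^{p+q+r-1}(A)$.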
In the following we only consider the case when $(A,\mathrm{d}_A) = (\Omega^*, \mathrm{d})$ is the dga of smooth differential forms. If the dga $(A, \mathrm{d}_A)$ is formal, then \emph{all the Massey triple products vanish} due to the naturality of the triple product (see~\cite[Proposition 3.A.33]{Huybrechts} for details). In fact, the proof of~\cite[Proposition 3.A.33]{Huybrechts} actually yields the following more general result.
\begin{cor} \label{cor:weak-Massey}
Let $(A, \mathrm{d}_A)$ be a dga such that the differentials $\mathrm{d}_A$ are all zero except for $\mathrm{d} : A^{k-1} \to A^k$. If the Massey triple product $\langle [\alpha], [\beta], [\gamma] \rangle$ is defined and $|\alpha| + |\beta| \neq k$ and $|\beta| + |\gamma| \neq k$, then $\langle [\alpha], [\beta], [\gamma] \rangle = 0$.
\end{cor}
Combining Corollary~\ref{cor:weak-Massey} and Theorem~\ref{thm:almost-formal} yields the following.
\begin{cor} \label{cor:Massey}
Let $M$ be a compact torsion-free $\mathrm{G}_2$~manifold. Consider cohomology classes $[\alpha]$, $[\beta]$, and $[\gamma] \in H^{\bullet}_{\mathrm{dR}}$. If the Massey triple product $\langle [\alpha], [\beta], [\gamma] \rangle$ is defined and we have $|\alpha| + |\beta| \neq 4$ and $|\beta| + |\gamma| \neq 4$, then $\langle [\alpha], [\beta], [\gamma] \rangle = 0$.
\end{cor}
In Theorem~\ref{thm:irreducibleMassey} in the next section we establish a stronger version of Corollary~\ref{cor:Massey} when the holonomy of the metric on $M$ is exactly $\mathrm{G}_2$.
\subsection{A new topological obstruction to existence of torsion-free $\mathrm{G}_2$-structures} \label{sec:new-obstruction}
A key feature of the criterion in Corollary~\ref{cor:Massey} is that it is \emph{topological}. That is, \emph{it does not depend on the differentiable structure on $M$}. Therefore it gives \emph{a new topological obstruction} to the existence of torsion-free $\mathrm{G}_2$-structures on compact $7$-manifolds. There are several previously known topological obstructions to the existence of a torsion-free $\mathrm{G}_2$-structure on a compact $7$-manifold. These obstructions are discussed in detail in~\cite[Chapter 10]{Joyce}. We summarize them here. Let $\varphi$ be a torsion-free $\mathrm{G}_2$-structure on a compact manifold $M$ with induced metric $g_{\varphi}$. Let $b^k_M = \dim H^k_{\mathrm{dR}} (M)$. Then
\begin{equation} \label{eq:known-obstructions}
\left. \begin{aligned}
& b^3_M \geq b^1_M + b^0_M, \\
& b^2_M \geq b^1_M, \\
& b^1_M \in \{ 0, 1, 3, 7 \}, \\
& \text{if $g_{\varphi}$ is not flat, then $p_1 (M) \neq 0$, where $p_1 (M)$ is the first Pontryagin class of $TM$, \phantom{A}} \\
& \text{if $g_{\varphi}$ has full holonomy $\mathrm{G}_2$, then the fundamental group $\pi_1 (M)$ is finite}.
\end{aligned} \right\}
\end{equation}
Note that the first three conditions are simply obstructions to the existence of torsion-free $\mathrm{G}_2$-structures. The fourth condition can be used to rule out \emph{non-flat} torsion-free $\mathrm{G}_2$-structures, and the fifth condition can be used to rule out \emph{non-irreducible} torsion-free $\mathrm{G}_2$-structures. In fact, the third condition determines the reduced holonomy of $g_{\varphi}$, which is $\{1\}$, $\SU{2}$, $\SU{3}$, or $\mathrm{G}_2$, if $b^1_M = 7$, $3$, $1$, or $0$, respectively.
\begin{thm} \label{thm:irreducibleMassey}
Let $M$ be a compact torsion-free $\mathrm{G}_2$~manifold with full holonomy $\mathrm{G}_2$, and consider cohomology classes $[\alpha]$, $[\beta]$, and $[\gamma] \in H^{\bullet}_{\mathrm{dR}}$. If the Massey triple product $\langle [\alpha], [\beta], [\gamma] \rangle$ is defined, then $\langle [\alpha], [\beta], [\gamma] \rangle = 0$ except possibly in the case when $|\alpha| = |\beta| = |\gamma| = 2$.
\end{thm}
\begin{proof}
Recall that the hypothesis of full holonomy $\mathrm{G}_2$ implies that $b^1_M = 0$, so $H^1_{\mathrm{dR}} = \{0\}$. Suppose $|\alpha| = 1$. Then $[\alpha] \in H^1_{\mathrm{dR}}$, so $[\alpha] = 0$, and by linearity it follows that $\langle [\alpha], [\beta], [\gamma] \rangle = 0$. The same argument holds if $|\beta| = 1$ or $|\gamma| = 1$. Suppose $|\alpha| = 0$. Then $\alpha$ is a constant function. The condition $[\alpha \beta] = [\alpha][\beta] = 0$ forces the form $\alpha \beta$ to be exact, so either $\alpha = 0$ (in which case the Massey product vanishes), or $\beta$ is exact, so $[\beta] = 0$ and again the Massey product vanishes. A similar argument holds if $|\beta| = 0$ or $|\gamma| = 0$.
Thus we must have $|\alpha|, |\beta|, |\gamma| \geq 2$ if the Massey product has any chance of being nontrivial. Moreover, since $\langle [\alpha], [\beta], [\gamma] \rangle$ lies in a quotient of $H^{|\alpha| + |\beta| + |\gamma| - 1}_{\mathrm{dR}}$, we also need $|\alpha| + |\beta| + |\gamma| \leq 8$. Finally, Corollary~\ref{cor:Massey} tells us that we must have either $|\alpha| + |\beta| = 4$ or $|\beta| + |\gamma| = 4$. Hence the only possibilities for the triple $(|\alpha|, |\beta|, |\gamma|)$ to obtain a nontrivial Massey product are $(2,2,2)$, $(2,2,3)$, $(2,2,4)$, $(3,2,2)$, and $(4,2,2)$. For $(2,2,3)$ or $(3,2,2)$, the Massey product lies in a quotient of $H^6_{\mathrm{dR}}$, which is zero since $b^6_M = b^1_M = 0$. For $(2,2,4)$ or $(4,2,2)$ the Massey product lies inside $H^7_{\mathrm{dR}}/ (H^2_{\mathrm{dR}} \cdot H^6_{\mathrm{dR}} + H^3_{\mathrm{dR}} \cdot H^4_{\mathrm{dR}})$, but $H^3_{\mathrm{dR}} \cdot H^4_{\mathrm{dR}} = H^7_{\mathrm{dR}}$ since $\varphi \wedge \psi = 7 \mathsf{vol}$ is a generator of $H^7_{\mathrm{dR}}$. Thus in this case the quotient space is zero. We conclude that the only possibly nontrivial Massey product corresponds to the case $(|\alpha|, |\beta|, |\gamma|) = (2,2,2)$.
\end{proof}
In the remainder of this section we will apply our new criterion to a particular nontrivial example. Consider a smooth compact connected oriented $7$-manifold $M$ of the form $M = W \times L$, where $W$ and $L$ are smooth compact connected oriented manifolds of dimensions $3$ and $4$, respectively. In order for $M$ to admit $\mathrm{G}_2$-structures, we must have $w_2 (M) = 0$, where $w_2 (M)$ is the second Stiefel-Whitney class of $TM$, by~\cite[p 348--349]{LM}.
Take $W$ to be the \emph{real Iwasawa manifold}, which is defined to be the quotient of the set
\begin{equation*}
\left \{ \begin{pmatrix} 1 & t_1 & t_2 \\ 0 & 1 & t_3 \\ 0 & 0 & 1 \end{pmatrix} : t_1, t_2, t_3 \in \mathbb R \right \} \cong \mathbb R^3
\end{equation*}
by the left multiplication of the group
\begin{equation*}
\left \{ \begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix} : a, b, c \in \mathbb Z \right \}.
\end{equation*}
The manifold $W$ is a compact orientable $3$-manifold, so it is parallelizable and hence $w_2 (W) = 0$. Moreover, it is shown in~\cite[Example 3.A.34]{Huybrechts} that $b^1_W = 2$ and that
\begin{equation} \label{eq:W-Massey}
\text{ there exist $\alpha, \beta \in H^1_{\mathrm{dR}}(W)$ such that $\langle \alpha, \beta, \beta \rangle \neq 0$}.
\end{equation}
By the Whitney product formula, we have $w_2 (M) = w_2 (W) + w_2 (L)$. Thus if we choose $L$ to have vanishing $w_2$, then $w_2 (M)$ will vanish as required, and $M = W \times L$ will admit $\mathrm{G}_2$-structures.
\begin{thm} \label{thm:new-obstruction-example}
Let $L$ be a smooth compact connected oriented $4$-manifold with $w_2 (L) = 0$, and let $W$ be the real Iwasawa manifold described above. Then $M = W \times L$ admits $\mathrm{G}_2$-structures but cannot admit any \emph{torsion-free} $\mathrm{G}_2$-structures.
\end{thm}
\begin{proof}
Let $\pi : M \to W$ be the projection map. Consider the classes $\pi^* \alpha, \pi^* \beta \in H^1_{\mathrm{dR}}(M)$. By naturality of the Massey triple product, and since $p=q=r=1$, we have
\begin{equation*}
\langle \pi^* \alpha, \pi^* \beta, \pi^* \beta \rangle = \pi^* \langle \alpha, \beta, \beta \rangle \in \frac{H^2 (M)}{H^1 (M) \cdot H^1 (M)}.
\end{equation*}
Let $s: W \to W \times L$ be any section of $\pi$. Since $s^* \pi^* = (\pi \circ s)^* = \mathrm{Id}$, we deduce that
\begin{equation*}
\pi^*: \frac{H^2(W)}{H^1(W) \cdot H^1(W)} \to \frac{H^2(M)}{H^1(M) \cdot H^1(M)} \qquad \text{ is injective.}
\end{equation*}
Thus since $\langle \alpha, \beta, \beta \rangle \neq 0$ we have
\begin{equation*}
\langle \pi^* \alpha, \pi^* \beta, \pi^* \beta \rangle = \pi^* \langle \alpha, \beta, \beta \rangle \neq 0.
\end{equation*}
Since $| \pi^* \alpha | = | \pi^* \beta | = 1$ and $1 + 1 \neq 4$, we finally conclude by Corollary~\ref{cor:Massey} that $M$ \emph{does not admit a torsion-free $\mathrm{G}_2$-structure}.
\end{proof}
It remains to find an $L$ with $w_2 (L) = 0$ such that no previously known topological obstructions~\eqref{eq:known-obstructions} are violated, so that we have indeed established something new. We first collect several preliminary results that we will require.
By Poincar\'e duality $b^3_L = b^1_L$ and $b^2_W = b^1_W = 2$. The K\"unneth formula therefore yields
\begin{equation} \label{eq:M-Bettis}
\left. \begin{aligned}
b^1_M & = b^1_W + b^1_L = 2 + b^1_L, \\
b^2_M & = b^2_W + b^1_W b^1_L + b^2_L = 2 + 2 b^1_L + b^2_L, \\
b^3_M & = b^3_W + b^2_W b^1_L + b^1_W b^2_L + b^3_L = 1 + 2 b^1_L + 2 b^2_L + b^1_L = 1 + 3 b^1_L + 2 b^2_L. \phantom{A}
\end{aligned} \right\}
\end{equation}
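For instance, for the particular $4$-manifold $L$ of Proposition~\ref{prop:L-is-lemma} below, which has $b^1_L = 1$ and $b^2_L = 22$, these formulas give
\begin{equation*}
b^1_M = 2 + 1 = 3, \qquad b^2_M = 2 + 2 \cdot 1 + 22 = 26, \qquad b^3_M = 1 + 3 \cdot 1 + 2 \cdot 22 = 48,
\end{equation*}
which are the Betti numbers used in the proof of that proposition.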
\begin{rmk} \label{rmk:connect-sum}
Let $M$, $N$ be smooth compact oriented $n$-manifolds. There is a canonical way to make the connected sum $M \# N$ smooth, by smoothing around the $S^{n-1}$ with which we paste them together. With coefficients in either $R = \mathbb Z$ or $R = \mathbb Z / 2 \mathbb Z$, we have $H^k (M \# N, R) \cong H^k (M, R) \oplus H^k (N, R)$ for $k = 1, \ldots, n-1$. This can be seen using the Mayer-Vietoris sequence. The isomorphism is induced by the map $p : M \# N \to M$ collapsing $N$, and the map $q: M \# N \to N$ collapsing $M$. For $k = n$, we have $H^n (M \# N, R) \cong H^n (M, R) \cong H^n (N, R)$ with isomorphisms induced by $p$ and $q$ as before.
\end{rmk}
\begin{lemma} \label{lemma:4manifolds}
Let $L$ be a \emph{simply-connected} smooth compact oriented $4$-manifold, with intersection form
\begin{equation*}
Q: H_2 (L, \mathbb Z) \times H_2 (L, \mathbb Z) \to \mathbb Z.
\end{equation*}
If the signature of $Q$ is $(p, q)$, let $\sigma (L) = p - q$. Then we have
\begin{itemize}
\item $w_2 (L) = 0$ if and only if $Q(a, a) \in 2 \mathbb Z$ for all $a \in H^2 (L, \mathbb Z)$;
\item $p_1(L) = 0$ if and only if $\sigma (L)$ is zero.
\end{itemize}
\end{lemma}
\begin{proof}
The first statement can be found in~\cite[Corollary 2.12]{LM}. The Hirzebruch signature theorem for $4$-manifolds, which can be found in~\cite[Theorem 1.4.12]{GS}, says that $p_1(L) = 3 \sigma(L)$. This immediately implies the second statement.
\end{proof}
Recall that K3 is the unique connected simply-connected smooth manifold underlying any compact complex surface with vanishing first Chern class. One way to define the K3 surface is by
\begin{equation*}
\text{K3} = \{ [z_0 : z_1 : z_2 : z_3] \in \mathbb C \mathbb P^3 : z_0^4 + z_1^4 +z_2^4 + z_3^4 = 0 \}.
\end{equation*}
It is well-known (see~\cite[Page 75]{GS} or~\cite[Pages 127--133]{Sc}) that K3 has intersection form $Q_{\text{K3}} = -2 E_8 \oplus 3 H$, where $E_8$ is a certain even positive definite bilinear form, and $H = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$, which is also even and has signature $0$. It follows that $Q_{\text{K3}}$ has signature $(3,19)$ and thus $\sigma(\text{K3}) = -16$. We also have that the Betti numbers of K3 are $b^1_{\text{K3}} = b^3_{\text{K3}} = 0$ and $b^2_{\text{K3}} = 22$.
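Explicitly, since $E_8$ has rank $8$ and is positive definite, while each hyperbolic summand $H$ has signature $(1,1)$, the form $Q_{\text{K3}} = -2 E_8 \oplus 3 H$ has signature $(0 + 3, 16 + 3) = (3,19)$, and hence
\begin{equation*}
\sigma(\text{K3}) = 3 - 19 = -16,
\end{equation*}
as stated above.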
\begin{prop} \label{prop:L-is-lemma}
Let $L = \text{K3} \, \# (S^1 \times S^3)$. Then $w_2 (L) = 0$, and for $M = W \times L$, where $W$ is the real Iwasawa manifold, none of the first four topological obstructions in~\eqref{eq:known-obstructions} are violated. Nevertheless, by Theorem~\ref{thm:new-obstruction-example}, $M$ cannot admit any torsion-free $\mathrm{G}_2$-structure.
\end{prop}
\begin{proof}
Since $b^1_{S^1 \times S^3} = b^3_{S^1 \times S^3} = 1$ and $b^2_{S^1 \times S^3} = 0$, Remark~\ref{rmk:connect-sum} tells us that the Betti numbers of $L$ are $b^1_L = b^3_L = 1$ and $b^2_L = 22$. In~\cite[Pages 20, 456]{GS} it is shown that $Q_{M \# N} = Q_M \oplus Q_N$, and consequently $\sigma(M \# N) = \sigma(M) + \sigma(N)$. Since $Q_{S^1 \times S^3} = 0$, we find that $Q_L$ is even and has nonzero signature. Thus by Lemma~\ref{lemma:4manifolds} we deduce that $p_1(L) \neq 0$ and $w_2 (L) = 0$. Now the equations~\eqref{eq:M-Bettis} tell us that the Betti numbers of $M$ are $b^1_M = 3$, $b^2_M = 26$, and $b^3_M = 48$. In particular, the first three conditions in~\eqref{eq:known-obstructions} are satisfied.
We now claim that $p_1 (M) \neq 0$. To see this, consider the inclusion $\iota : L \to M = W \times L$ into some vertical fibre $\{ \ast \} \times L$ of $M$ over $W$. Then $\iota^* (TM) = TL \oplus E$ where $E$ is the trivial rank $3$ real vector bundle over $L$. If $p_1 (TM) = 0$, then by naturality we have $p_1 (TL) = \iota^* (p_1 (TM)) = 0$, which we showed was not the case. Thus, the fourth condition in~\eqref{eq:known-obstructions} is satisfied.
\end{proof}
\begin{rmk} \label{rmk:L-holonomy}
Because $b^1_M = 3$, if $M$ admitted any torsion-free $\mathrm{G}_2$-structure, it would have reduced holonomy $\SU{2}$. We have shown in Proposition~\ref{prop:L-is-lemma} that such a Riemannian metric cannot exist on $M$. It is not clear if there is any simpler way to rule out such a Riemannian metric on $M$.
\end{rmk}
Other examples of compact orientable spin $7$-manifolds that cannot be given a torsion-free $\mathrm{G}_2$-structure can likely be constructed similarly.
\begin{rmk} \label{rmk:other-work}
The formality of compact $7$-manifolds with additional structure has been studied by several authors, in particular recently by Crowley--Nordstr\"om~\cite{CN} and Munoz--Tralle~\cite{MT}. Two of the results in~\cite{CN} are: there exist non-formal compact $7$-manifolds that have only trivial Massey triple products; and a non-formal compact manifold $M$ with $\mathrm{G}_2$ holonomy must have $b^2 (M) \geq 4$. One of the results in~\cite{MT} is that a compact simply-connected 7-dimensional Sasakian manifold is formal if and only if all its triple Massey products vanish.
\end{rmk}
\begin{rmk} \label{rmk:future}
A natural question is: can we actually establish \emph{formality} by extending our chain of quasi-isomorphisms? One idea is to quotient out the unwanted summands, but such a quotient map is not a dga morphism. One can also try to involve $\mathcal{L}_K$ or other operators that can descend to $H^{\bullet}_{\varphi}$, but the authors have so far had no success in this direction.
\end{rmk}
\end{document} |
\begin{document}
\noindent {\footnotesize }\\[1.00in]
\title[ On globally symmetric Finsler spaces]{On globally symmetric Finsler spaces}
\author[R. Chavosh Khatamy , R. Esmaili\\]{R. Chavosh Khatamy $^{*}$, R. Esmaili\\}
\address{Department of Mathematics,
Faculty of Sciences, Islamic Azad University, Tabriz Branch}
\email{\tt [email protected], r\[email protected] }
\address{Department of Mathematics, Faculty of Sciences, Payame noor University, Ahar Branch}
\email{\tt [email protected]}
\subjclass[2000]{53C60, 53C35}
\keywords{Finsler Space, Locally symmetric Finsler space, Globally
Symmetric Finsler space, Berwald space.
\\
\indent $^{*}$ The first author was supported by the funds of the
Islamic Azad University- Tabriz Branch, (IAUT) }
\begin{abstract}
This paper considers symmetry of Finsler spaces. We give some
conditions for globally symmetric Finsler spaces. Then we prove
that these spaces can be written as coset spaces of Lie groups
with invariant Finsler metrics. Finally, we prove that such a
space must be Berwaldian.
\end{abstract}
\maketitle
\section{Introduction}
The study of Finsler spaces is important in physics and biology
(\cite{5}). In particular, there are several important books about
such spaces (see \cite{1}, \cite{8}). For example, recently D.
Bao, C. Robles and Z. Shen used Randers metrics in Finsler
geometry on Riemannian manifolds (\cite{9} and \cite{8}, page 214).
We must also point out that there has been only little study of the
symmetry of such spaces (\cite{3}, \cite{12}). For example, E.
Cartan showed that symmetry plays a very important role in
Riemannian geometry (\cite{5} and \cite{12}, page 203).
\begin {definition}\label{def1.0}
A Finsler space $(M,F)$ is called locally symmetric if, for any $p\in M$, the
geodesic reflection $s_{p}$ is a local isometry of the Finsler
metric.
\end {definition}
\begin {definition}\label{def1.1}
A reversible Finsler space $(M,F)$ is called globally symmetric if
for any $x\in M$ there exists an involutive isometry $\sigma_{x}$
$($that is, $\sigma_{x}^{2}=I$ but $\sigma_{x}\neq I)$ of $(M,F)$ such
that $x$ is an isolated fixed point of $\sigma_{x}$.
\end {definition}
\begin {definition}\label{def1.2}
Let $G$ be a Lie group and let $K$ be a closed subgroup of $G$. Then
the coset space $G/K$ is called symmetric if there exists an
involutive automorphism $\sigma$ of $G$ such that
$$G^{0}_{\sigma}\subset K\subset G_{\sigma},$$
where $G_{\sigma}$ is the subgroup consisting of the fixed points
of $\sigma$ in $G$ and $G^{0}_{\sigma}$ denotes the identity
component of $G_{\sigma}$.
\end {definition}
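For example, if $G=SO(3)$ and $\sigma(g)=sgs^{-1}$ with $s=\mathrm{diag}(1,-1,-1)$, then $G_{\sigma}\cong O(2)$ and $G^{0}_{\sigma}\cong SO(2)$, so the choice $K=SO(2)$ satisfies $G^{0}_{\sigma}\subset K\subset G_{\sigma}$ and exhibits $G/K\cong S^{2}$ as a symmetric coset space.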
\begin {theorem}\label{the1.3}
Let $G/K$ be a symmetric coset space. Then any $G$-invariant
reversible Finsler metric $F$ on $G/K$ $($if one exists$)$ makes
$(G/K, F)$ a globally symmetric Finsler space $($\cite{8}, page
8$)$.
\end {theorem}
\begin {theorem}\label{the1.4}
Let $(M,F)$ be a globally symmetric Finsler space. For $p\in M$,
denote the involutive isometry of $(M,F)$ at $p$ by $s_{p}$. Then
we have \\
~~~~~~ $($a$)$ ~~ For any $p\in M$, $(ds_{p})_{p}=-I$. In
particular, $F$ must be reversible.
~~~~~~ $($b$)$ ~~ $(M,F)$ is forward and backward complete;
~~~~~~ $($c$)$ ~~ $(M,F)$ is homogeneous. That is, the group of
isometries of $(M,F)$, $I(M,F)$, acts transitively on $M$.
~~~~~~ $($d$)$ ~~ Let $\widetilde{M}$ be the universal covering
space of $M$ and $\pi$ be the projection mapping. Then
$(\widetilde{M},\pi^{*}(F))$ is a globally symmetric Finsler
space, where $\pi^{*}(F)$ is defined by
\begin{eqnarray*}
\pi^{*}(F)(q)=F((d\pi)_{\widetilde{p}}(q)), ~~ q\in
T_{\widetilde{p}} (\widetilde{M}).
\end{eqnarray*}
(See \cite{8} for a proof.)
\end {theorem}
\begin {corollary}\label{cor1.5}
Let $(M,F)$ be a globally symmetric Finsler space. Then for any
$p\in M$, $s_{p}$ is a local geodesic symmetry at $p$. The symmetry
$s_{p}$ is unique. $($See the proof of Theorem 1.2 and \cite{1}.$)$
\end {corollary}
\section{A theorem on globally Symmetric Finsler spaces}
\begin {theorem}\label{the2.1}
Let $(M,F)$ be a globally symmetric Finsler space. Then there exists a
Riemannian symmetric pair $(G,K)$ such that $M$ is diffeomorphic
to $G/K$ and $F$ is invariant under $G$.
\end {theorem}
\begin {proof}
The group $I(M,F)$ of isometries of $(M,F)$ acts transitively on
$M$ (part (c) of Theorem 1.5). It is known that $I(M,F)$ is a Lie
transformation group of $M$ and that, for any $p\in M$, the
isotropy subgroup $I_{p}(M,F)$ is a compact subgroup of $I(M,F)$
(\cite{12}, \cite{7}, page 78, and \cite{4}). Since $M$ is
connected (\cite{7}, \cite{10}), we may put $G=I(M,F)$; the
subgroup $K$ of $G$ which fixes $p$ is then a compact subgroup of
$G$. Furthermore, $M$ is diffeomorphic to $G/K$ under the mapping
$gK\to g.p$, $g\in G$ (\cite{7}, Theorem 2.5, \cite{10}). \\
As in the Riemannian case on page 209 of \cite{7}, we define a
mapping $s$ of $G$ into $G$ by $s(g)=s_{p}gs_{p}$, where $s_{p}$
denotes the (unique) involutive isometry of $(M,F)$ with $p$ as an
isolated fixed point. Then it is easily seen that $s$ is an
involutive automorphism of $G$ and that the group $K$ lies between
the closed subgroup $K_{s}$ of fixed points of $s$ and the identity
component of $K_{s}$ (see the definition of a symmetric coset
space, \cite{11}). Furthermore, the group $K$ contains no normal
subgroup of $G$ other than $\{e\}$. That is, $(G,K)$ is a symmetric
pair. Since $K$ is compact, $(G,K)$ is a Riemannian symmetric
pair.
\end {proof}
The following results will be useful in the proof of the main
theorem of this paper.
\begin {proposition}
Let $(M,\bar{F})$ be a Finsler space, $p\in M$ and $H_{p}$ be the
holonomy group of $\bar{F}$ at $p$. If $F_{p}$ is an $H_{p}$-invariant
Minkowski norm on $T_{p}(M)$, then $F_{p}$ can be
extended to a Finsler metric $F$ on $M$ by parallel translations
of $\bar{F}$ such that $F$ is affinely equivalent to $\bar{F}$
$($\cite{5}, Proposition 4.2.2$)$.
\end {proposition}
\begin {proposition}
A Finsler metric $F$ on a manifold $M$ is a Berwald metric if and
only if it is affinely equivalent to a Riemannian metric $g$. In
this case, $F$ and $g$ have the same holonomy group at any point
$p\in M$ $($see proposition 4.3.3 of \cite{5}$)$.
\end {proposition}
We now come to the main result of this paper.
\begin {theorem}
Let $(M,F)$ be a globally symmetric Finsler space. Then $(M,F)$
is a Berwald space. Furthermore, the connection of $F$ coincides
with the Levi-Civita connection of a Riemannian metric $g$ such
that $(M,g)$ is a Riemannian globally symmetric space.
\end {theorem}
\begin {proof}
We first prove that $F$ is Berwaldian. By Theorem 2.1, there exists a
Riemannian symmetric pair $(G,K)$ such that $M$ is diffeomorphic
to $G/K$ and $F$ is invariant under $G$. Fix a $G$-invariant
Riemannian metric $g$ on $G/K$. Without loss of generality, we can
assume that $(G,K)$ is effective (see \cite{11}, page 213). Since
being a Berwald space is a local property, we can assume further
that $G/K$ is simply connected. Then we have a decomposition
(page 244 of \cite{11}):
\begin{eqnarray*}
G/K=E\times G_{1}/K_{1}\times G_{2}/K_{2}\times ...\times
G_{n}/K_{n},
\end{eqnarray*}
where $E$ is a Euclidean space, $G_{i}/K_{i}$ are simply connected
irreducible Riemannian globally symmetric spaces, $i=1,2,...,n$.
Now we determine the holonomy group of $g$ at the origin of
$G/K$. According to the de Rham decomposition theorem (\cite{2}),
it is equal to the product of the holonomy groups of $E$ and of the
$G_{i}/K_{i}$ at the origin. Now $E$ has trivial holonomy group.
For $G_{i}/K_{i}$, by the holonomy theorem of Ambrose and Singer
(\cite{12}, page 231; it shows, for any connection, how the
curvature form generates the holonomy group), we know that the Lie
algebra $\eta_{i}$ of the holonomy group $H_{i}$ is spanned by
the linear mappings of the form
$\{\widetilde{\tau}^{-1}R_{0}(X,Y)\widetilde{\tau}\}$, where
$\tau$ denotes any piecewise smooth curve starting from $o$,
$\widetilde{\tau}$ denotes parallel displacement (with respect to
the restricted Riemannian metric) along $\tau$,
$\widetilde{\tau}^{-1}$ is the inverse of $\widetilde{\tau}$,
$R_{0}$ is the curvature tensor of the restricted Riemannian
metric on $G_{i}/K_{i}$ at the origin, and $X,Y\in T_{0}(G_{i}/K_{i})$. Since
$G_{i}/K_{i}$ is a globally symmetric space, the curvature tensor
is invariant under parallel displacements (page 201 of
\cite{10}, \cite{11}). So
\begin{eqnarray*}
\eta_{i}=span\{R_{0}(X,Y)|X,Y\in T_{0}(G_{i}/K_{i})\},
\end{eqnarray*}
(see page 243 of \cite{7}, \cite{11}).\\
On the other hand, since $G_{i}$ is a semisimple group, we know
that the Lie algebra of $K^{*}_{i}=Ad(K_{i})\simeq K_{i}$ is also
equal to the span of the $R_{0}(X,Y)$ (\cite{11}). The groups $H_{i}$,
$K^{*}_{i}$ are connected (because $G_{i}/K_{i}$ is simply
connected) (\cite{10} and \cite{11}). Hence we have
$H_{i}=K^{*}_{i}$. Consequently the holonomy group $H_{0}$ of
$G/K$ at the origin is
\begin{eqnarray*}
K^{*}_{1}\times K^{*}_{2}\times...\times K^{*}_{n}
\end{eqnarray*}
Now $F$ defines a Minkowski norm $F_{0}$ on $T_{0}(G/K)$ which is
invariant under $H_{0}$ (\cite{2}). By Proposition 2.2, we can
construct a Finsler metric $\bar{F}$ on $G/K$ by parallel
translations of $g$. By Proposition 2.3, $\bar{F}$ is Berwaldian.
Now for any point $p_{0}=aK\in G/K$, there exists a geodesic of
the Riemannian manifold $(G/K , g)$, say $\gamma(t)$, such that
$\gamma(0)=0$, $\gamma(1)=p_{0}$. Suppose the initial vector of
$\gamma$ is $X_{0}$ and take $X\in p$ such that $d\pi(X)=X_{0}$.
Then it is known that $\gamma(t)=\exp tX.p_{0}$ and that $d\tau(\exp
tX)$ is the parallel translation of $(G/K, g)$ along $\gamma$
(\cite{11} and \cite{7}, page 208). Since $F$ is $G$-invariant,
it is invariant under this parallel translation. This means that $F$
and $\bar{F}$ coincide on $T_{p_{0}}(G/K)$. Consequently they
coincide everywhere. Thus $F$ is
a Berwald metric. \\
For the next assertion, we use a result of Szab\'o (\cite{2}, page
278) which asserts that for any Berwald metric on $M$ there
exists a Riemannian metric with the same connection. We have
proved that $(M,F)$ is a Berwald space. Therefore there exists a
Riemannian metric $g_{1}$ on $M$ with the same connection as $F$.
In \cite{11}, we showed that the connection of a globally
symmetric Berwald space is affine symmetric. So $(M,g_{1})$ is a
Riemannian globally symmetric space (\cite{7}, \cite{11}).
\end {proof}
From the proof of Theorem
2.4, we have the following corollary.
\begin {corollary}
Let $(G/K, F)$ be a globally symmetric Finsler space and
$g=\ell+p$ be the corresponding decomposition of the Lie
algebras. Let $\pi$ be the natural mapping of $G$ onto $G/K$.
Then $(d\pi)_{e}$ maps $p$ isomorphically onto the tangent space
of $G/K$ at $p_{0}=eK$. If $X\in p$, then the geodesic emanating
from $p_{0}$ with initial tangent vector $(d\pi)_{e}X$ is given by
\begin{eqnarray*}
\gamma_{d\pi.X}(t)=\exp tX.p_{0}.
\end{eqnarray*}
Furthermore, if $Y\in T_{p_{0}}(G/K)$, then $(d\exp
tX)_{p_{0}}(Y)$ is the parallel translate of $Y$ along the geodesic (see
\cite{11}, \cite{7}, proof of Theorem 3.3).
\end {corollary}
\begin{example}
Let $G_{1}\big/ K_{1}$, $G_{2}\big/ K_{2}$ be two symmetric coset
spaces with $K_{1},K_{2}$ compact (in this case, they are
Riemannian symmetric spaces) and $g_{1},g_{2}$ be invariant
Riemannian metrics on $G_{1}\big/ K_{1}$, $G_{2}\big/ K_{2}$,
respectively. Let $M=G_{1}\big/ K_{1}\times G_{2}\big/ K_{2}$ and
$O_{1}, O_{2}$ be the origins of $G_{1}\big/ K_{1}, G_{2}\big/
K_{2}$, respectively and denote $O=(O_{1}, O_{2})$ (the origin of
$M$). Now for $y=y_{1}+y_{2}\in T_{O}(M)=T_{O_{1}}(G_{1}\big/
K_{1})+T_{O_{2}}(G_{2}\big/ K_{2})$, we define
\begin{eqnarray*}
F(y)=\sqrt{g_{1}(y_{1},y_{1})+g_{2}(y_{2},y_{2})+\sqrt[s]{g_{1}(y_{1},y_{1})^{s}+g_{2}(y_{2},y_{2})^{s}}},
\end{eqnarray*}
where $s$ is any integer $\geq2$. Then $F(y)$ is a Minkowski norm
on $T_{O}(M)$ which is invariant under $K_{1}\times K_{2}$
(\cite{4}). Hence it defines a $G_{1}\times G_{2}$-invariant Finsler metric on
$M$ (\cite{6}, Corollary 1.2, page 8246). By Theorem 2.1,
$(M,F)$ is a globally symmetric Finsler space. By Theorem 2.4 and
(\cite{2}, page 266), $F$ is non-Riemannian.
\end{example}
\end{document} |
\begin{document}
\title{Complexity, Periodicity and One-Parameter Subgroups}
\author[R. Farnsteiner]{Rolf Farnsteiner}
\address{Mathematisches Seminar, Christian-Albrechts-Universit\"at zu Kiel, Ludewig-Meyn-Str. 4, 24098 Kiel, Germany}
\email{[email protected]}
\thanks{Supported by the D.F.G. priority program SPP1388 `Darstellungstheorie'.}
\subjclass[2000]{Primary 14L15, 16G70, Secondary 16T05}
\date{\today}
\makeatletter
\makeatother
\begin{abstract} Using the variety of infinitesimal one-parameter subgroups introduced in \cite{SFB1,SFB2} by Suslin-Friedlander-Bendel, we define a numerical invariant for representations of
an infinitesimal group scheme $\mathcal{G}$. For an indecomposable $\mathcal{G}$-module $M$ of complexity $1$, this number, which may also be interpreted as the height of a ``vertex'' $\mathcal{U}_M
\subseteq \mathcal{G}$, is related to the period of $M$. In the context of the Frobenius category of $G_rT$-modules associated to a smooth reductive group $G$ and a maximal torus $T \subseteq G$,
our methods give control over the behavior of the Heller operator of such modules, as well as precise values for the periodicity of their restrictions to $G_r$. Applications include the structure
of stable Auslander-Reiten components of $G_rT$-modules as well as the distribution of baby Verma modules. \end{abstract}
\maketitle
\setcounter{section}{-1}
\section{Introduction} \label{S:IP}
This paper is concerned with representations of finite group schemes that are defined over an algebraically closed field $k$ of positive characteristic $p>0$. Given such a group scheme $\mathcal{G}$
and a finite-dimensional $\mathcal{G}$-module $M$, Friedlander and Suslin showed in their groundbreaking paper \cite{FS} that the cohomology space $\HH^\ast(\mathcal{G},M)$ is a finite module over
the finitely generated commutative $k$-algebra $\HH^\bullet(\mathcal{G},k) := \bigoplus_{n\ge 0} \HH^{2n}(\mathcal{G},k)$. This result has seen a number of applications which have provided deep
insight into the representation theory of $\mathcal{G}$.
By work of Alperin-Evens \cite{AE}, Carlson \cite{Ca2} and Suslin-Friedlander-Bendel \cite{SFB1,SFB2}, the notions of complexity, periodicity and infinitesimal one-parameter subgroups are
closely related to properties of the even cohomology ring $\HH^\bullet(\mathcal{G},k)$. In this paper, we employ the algebro-geometric techniques expounded in \cite{FS} and \cite{SFB1,SFB2} in
order to obtain information on the period of periodic modules, and the structure of the stable Auslander-Reiten quivers of algebraic groups. Our methods are most effective when covering
techniques related to $G_rT$-modules can be brought to bear. By way of illustration, we summarize some of our results in the following:
\begin{thm*} Let $G$ be a smooth, reductive group scheme with maximal torus $T \subseteq G$. Suppose that $M$ is an indecomposable $G_rT$-module of complexity $\cx_{G_rT}(M)=1$.
Then the following statements hold:
{\rm (1)} \ There exists a unipotent subgroup $\mathcal{U}_M \subseteq G_r$ of height $h_M$ and a root $\alpha$ of $G$ such that $\Omega_{G_rT}^{2p^{r-h_M}}(M) \cong
M\!\otimes_k\!k_{p^r\alpha}$.
{\rm (2)} \ The restriction $M|_{G_r}$ is periodic with period $2p^{r-h_M}$.\end{thm*}
\noindent
Here $\cx_{G_rT}(M)$ refers to the polynomial rate of growth of a minimal projective resolution of $M$ and $\Omega_{G_rT}$ is the Heller operator of the Frobenius category of
$G_rT$-modules.
Our article can roughly be divided into two parts. Sections \ref{S:UB}-\ref{S:BAR} are mainly concerned with the category $\modd \mathcal{G}$ of finite-dimensional modules of a finite group scheme
$\mathcal{G}$. The Frobenius category $\modd G_rT$ of compatibly graded $G_r$-modules is dealt with in the remaining two sections.
In Section \ref{S:UB} we exploit the detailed information provided by the Friedlander-Suslin Theorem \cite{FS} in order to provide an upper bound for the complexity of a $\mathcal{G}$-module in
terms of the dimension of certain $\Ext$-groups. Section \ref{S:VG} lays the foundation for the later developments by collecting basic results concerning the cohomology rings of the Frobenius
kernels $\mathbb{G}_{a(r)}$ of the additive group $\mathbb{G}_a$.
The period of a periodic module is known to be closely related to the degrees of homogeneous generators of the ring $\HH^\bullet(\mathcal{G},k)$. The Friedlander-Suslin Theorem thus implies that,
for any infinitesimal group $\mathcal{G}$ of height $\height(\mathcal{G})$, the number $2p^{\height(\mathcal{G})-1}$ is a multiple of the period of any periodic $\mathcal{G}$-module. In Section \ref{S:PH}, we analyze
this feature more closely, showing how the period may be bounded by employing infinitesimal one-parameter subgroups of $\mathcal{G}$. The relevant notion is that of the projective height of a
module, which, for an indecomposable $\mathcal{G}$-module $M$ of complexity $1$, coincides with the height of a certain unipotent subgroup $\mathcal{U}_M \subseteq \mathcal{G}$.
Sections \ref{S:EC} and \ref{S:Rep} are concerned with the Auslander-Reiten theory of finite group schemes. Following a discussion of components of Euclidean tree class, we determine in
Section \ref{S:Rep} those AR-components of the Frobenius kernels $\SL(2)_r$, that contain a simple module. Applications concerning blocks and AR-components of Frobenius kernels of
reductive groups are given in Section \ref{S:BAR}. In particular, we attach to every representation-finite block $\mathcal{B} \subseteq k\mathcal{G}$ a unipotent ``defect group'' $\mathcal{U}_\mathcal{B} \subseteq \mathcal{G}$,
whose height is linked to the structure of $\mathcal{B}$.
By providing an explicit formula for the Nakayama functor of the Frobenius category of graded modules over certain Hopf algebras, Section \ref{S:NF} initiates our study of $G_rT$-modules. In
Section \ref{S:AR} we come to the second central topic of our paper, the Auslander-Reiten theory of the groups $G_r$ and $G_rT$, defined by the r-th Frobenius kernel of a smooth group $G$,
and a maximal torus $T \subseteq G$. Our first main result, Theorem \ref{MC1}, links the projective height of a $G_rT$-module of complexity $1$ to the behavior of powers of the Heller
operator $\Omega_{G_rT}$. In particular, the Frobenius category $\modd G_rT$ of a reductive group $G$ is shown to afford no $\Omega_{G_rT}$-periodic modules, and the
$\Omega_{G_r}$-periods of $G_rT$-modules of complexity $1$ are determined by their projective height (cf.\ the Theorem above). It is interesting to compare this fact with the periods of
periodic modules over finite groups, which are given by the minimal ranks of maximal elementary abelian $p$-groups, see \cite[(2.2)]{BC}. The aforementioned results provide insight into the
structure of the components of the stable Auslander-Reiten quiver of $\modd G_rT$ and the distribution of baby Verma modules:
\begin{thm*} Suppose that $G$ is defined over the Galois field $\mathbb{F}_p$ with $p\ge 7$. Let $T \subseteq G$ be a maximal torus.
{\rm (1)} \ If $\Theta$ is a component of the stable Auslander-Reiten quiver of $\modd G_rT$, then $\Theta \cong \mathbb{Z}[A_\infty],$ $\mathbb{Z}[A^\infty_\infty]$, or $\mathbb{Z}[D_\infty]$.
{\rm (2)} \ If $\Theta$ contains two baby Verma modules $\widehat{Z}_r(\lambda) \not \cong \widehat{Z}_r(\mu)$, then $\cx_{G_rT}(\widehat{Z}_r(\lambda))=1$, and there exists a simple root $\alpha$ of $G$ such that $\{\widehat{Z}_r(\lambda\!+\!np^r\alpha) \ ; \ n \in \mathbb{Z}\}$ is the set of baby Verma modules belonging to $\Theta$.
{\rm (3)} \ A stable Auslander-Reiten component of $\modd G_r$ contains at most one baby Verma module. \end{thm*}
\noindent
By part (2) above, a stable AR-component $\Theta$ of $\modd G_rT$ whose rank variety $V_r(G)_\Theta$ has dimension $\ge 2$ contains at most one baby Verma module, while for $\dim
V_r(G)_\Theta = 1$ the presence of such a module implies that the baby Verma modules of $\Theta$ form the $\Omega^2_{G_rT}$-orbit of quasi-simple modules.
\noindent
Given a finite group scheme $\mathcal{G}$ with coordinate ring $k[\mathcal{G}]$, we let $k\mathcal{G} :=k[\mathcal{G}]^\ast$ be its {\it algebra of measures}. By general theory, the representations of this algebra coincide
with those of the group scheme $\mathcal{G}$, and we denote by $\modd \mathcal{G}$ the category of finite-dimensional $\mathcal{G}$-modules. For a $\mathcal{G}$-module $M$, we let $\cx_\mathcal{G}(M)$ denote the {\it
complexity} of $M$. By definition, $\cx_\mathcal{G}(M) = \gr(P^\bullet)$ coincides with the polynomial rate of growth of a minimal projective resolution $P^\bullet$ of $M$. Recall that the {\it
growth} of a sequence $\mathcal{V} := (V_n)_{n\ge 0}$ of finite-dimensional $k$-vector spaces is defined via
\[ \gr(\mathcal{V}) := \min \{c \in \mathbb{N}_0\cup \{\infty\} \ ; \ \exists \, \lambda > 0 \ \text{such that} \ \dim_kV_n \le \lambda n^{c-1} \ \ \forall \ n \ge 0\}.\]
If $M$ is a $\mathcal{G}$-module, then
\[ \cx_\mathcal{G}(M) = \gr((\Omega_\mathcal{G}^n(M))_{n\ge 0}),\]
where $\Omega_\mathcal{G}$ denotes the Heller operator of the self-injective algebra $k\mathcal{G}$. In particular, a $\mathcal{G}$-module $M$ is projective if and only if $\cx_\mathcal{G}(M)=0$.
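By way of illustration, for $\mathcal{G} = \mathbb{G}_{a(1)}$ we have $k\mathcal{G} \cong k[u]/(u^p)$, and the minimal projective resolution
\[ \cdots \stackrel{u}{\longrightarrow} k\mathcal{G} \stackrel{u^{p-1}}{\longrightarrow} k\mathcal{G} \stackrel{u}{\longrightarrow} k\mathcal{G} \longrightarrow k \longrightarrow (0)\]
of the trivial module is periodic, so that $\dim_k \Omega^n_\mathcal{G}(k) \in \{1,p\!-\!1\}$ for $n \ge 1$ and $\cx_\mathcal{G}(k) = 1$.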
Recall that $\Omega_\mathcal{G}$ induces an auto-equivalence on the stable category $\underline{\modd} \mathcal{G}$, whose objects are those of $\modd \mathcal{G}$ and whose morphism spaces $\underline{\Hom}_\mathcal{G}(M,N) = \Hom_\mathcal{G}(M,N)/P(M,N)$ are the factor groups of $\Hom_\mathcal{G}(M,N)$ by the subspace $P(M,N)$ of those morphisms that factor through a projective module.
Thanks to the Friedlander-Suslin Theorem \cite{FS}, we can associate to every finite-dimensional $\mathcal{G}$-module $M$ its {\it cohomological support variety} $\mathcal{V}_\mathcal{G}(M)$. By definition,
$\mathcal{V}_\mathcal{G}(M)$ is the variety $Z(\ker \Phi_M)$, associated to the kernel of the canonical homomorphism
\[ \Phi_M : \HH^\bullet(\mathcal{G},k) \longrightarrow \Ext^ \ast_\mathcal{G}(M,M) \ \ ; \ \ [f] \mapsto [f\!\otimes\! \id_M].\]
It is well-known that $\mathcal{V}_\mathcal{G}(M)$ is a conical variety such that
\[ \dim \mathcal{V}_\mathcal{G}(M) = \cx_\mathcal{G}(M).\]
The reader is referred to \cite{Be2} for basic properties of support varieties. We shall use \cite{Ja3} and \cite{ARS,ASS} as general references for representations of algebraic groups and associative algebras, respectively.
\section{An Upper Bound for the Complexity} \label{S:UB}
Throughout this section, we let $\mathcal{G}$ denote an infinitesimal group, defined over an algebraically closed field $k$ of characteristic $p>0$. For such a group, the associated Hopf algebra
$k\mathcal{G}$ coincides with the algebra $\Dist(\mathcal{G})$ of distributions on $\mathcal{G}$.
In the sequel, all $\mathcal{G}$-modules are assumed to be finite-dimensional. Given $r \ge 0$, we let $\mathcal{G}_r$ be the r-th Frobenius kernel of $\mathcal{G}$ and define the {\it height} of $\mathcal{G}$ via
$\height(\mathcal{G}) := \min \{r \in \mathbb{N}_0 \ ; \ \mathcal{G}_r = \mathcal{G}\}$. The following result establishes an upper bound for $\cx_\mathcal{G}(M)$ in terms of self-extensions.
\begin{Theorem} \label{UB1} Suppose that $\mathcal{G}$ has height $r$. If $M$ is a $\mathcal{G}$-module, then
\[ \cx_\mathcal{G}(M) \le \dim_k \Ext^{2np^{r-1}}_\mathcal{G}(M,M)\]
for every $n \ge 1$. \end{Theorem}
\begin{proof} We fix a natural number $n \in \mathbb{N}$. According to \cite[(1.5)]{FS}, there exists a commutative, graded subalgebra $S \subseteq \Ext_\mathcal{G}^\ast(M,M)$ of the Yoneda algebra
$\Ext^\ast_\mathcal{G}(M,M)$ of $M$ such that
(a) \ $S$ is generated by $\bigoplus_{i=0}^{r-1} S_{2p^i}$, and
(b) \ $\Ext^\ast_\mathcal{G}(M,M)$ is a finitely generated $S$-module.
\noindent
Thus, $S$ is finitely generated, and an integral extension of the subalgebra $T_{(n)} := k[S_{2np^{r-1}}]$, generated by the subspace $S_{2np^{r-1}}$ of homogeneous elements of
degree $2np^{r-1}$, cf.\ \cite[(9.1)]{Ma}. Owing to \cite[(4.5)]{Ei}, $S$ is a finitely generated $T_{(n)}$-module, so that $\Ext^\ast_\mathcal{G}(M,M)$ also enjoys this property. In view
of \cite[(5.3.5)]{Be2}, passage to growths now yields
\[ \cx_{\mathcal{G}}(M) = \gr(\Ext^\ast_\mathcal{G}(M,M)) = \gr(T_{(n)}) \le \dim_k S_{2np^{r-1}} \le \dim_k \Ext_\mathcal{G}^{2np^{r-1}}(M,M),\]
as desired. \end{proof}
\begin{Examples} Suppose that $p \ge 3$.
(1) For $r>0$, we consider the infinitesimal group $\mathcal{G} := \SL(2)_1T_r$, whose factors are the first and r-th Frobenius kernels of $\SL(2)$ and its standard maximal torus $T \subseteq \SL(2)$ of diagonal matrices, respectively. We denote by $\alpha$ the positive root of $\SL(2)$ (relative to the Borel subgroup of upper triangular matrices) and recall that $\mathbb{Z} \longrightarrow X(T) \ ; \ n\mapsto \frac{n}{2}\alpha$ is an isomorphism between $\mathbb{Z}$ and the character group $X(T)$ of $T$. The character group of $T_r$ may thus be identified with $\mathbb{Z}/(p^r)$. Let $\lambda \in X(T_r)\setminus \{ip-1 \ ; \ 1 \le i \le p^{r-1}\}$ be a weight. Then the baby Verma module
\[ \widehat{Z}_1(\lambda) := \Dist(\SL(2)_1)\!\otimes_{\Dist(B_1)}\!k_\lambda\]
is a $\mathcal{G}$-module of complexity $\cx_\mathcal{G}(\widehat{Z}_1(\lambda)) = 1$. According to \cite[(4.5)]{Fa5} and its proof, we have
\[ \Omega^2_\mathcal{G}(\widehat{Z}_1(\lambda)) \cong \widehat{Z}_1(\lambda)\! \otimes_k \! k_{p\bar{\alpha}} \cong \widehat{Z}_1(\lambda+p\bar{\alpha}),\]
where $\bar{\alpha} \in X(T_r)\cong X(T)/p^rX(T)$ denotes the restriction of the positive root $\alpha \in X(T)$ to $T_r$. Thus, $\bar{\alpha}$ corresponds to $2 \in \mathbb{Z}/(p^r)$. Since
each Verma module $\widehat{Z}_1(\lambda)$ has length $2$ with composition factors $\widehat{L}_1(\lambda)$ and $\widehat{L}_1(2p-2-\lambda)$, the choice of $\lambda$ yields
$\Hom_{\mathcal{G}}(\widehat{Z}_1(\lambda+np\bar{\alpha}),\widehat{Z}_1(\lambda)) = (0)$ for $1 \le n \le p^{r-1}-1$. Consequently, we have
\[ \Ext^{2n}_{\mathcal{G}}(\widehat{Z}_1(\lambda),\widehat{Z}_1(\lambda)) \cong \underline{\Hom}_\mathcal{G}(\Omega^{2n}_{\mathcal{G}}(\widehat{Z}_1(\lambda)),\widehat{Z}_1(\lambda)) \cong
\underline{\Hom}_\mathcal{G}(\widehat{Z}_1(\lambda+np\bar{\alpha}),\widehat{Z}_1(\lambda)) = (0)\]
for each of these $n$, so that none of the Ext-groups $\Ext^{2n}_\mathcal{G}(\widehat{Z}_1(\lambda),\widehat{Z}_1(\lambda))$ of degree $<2p^{r-1}$ provides an upper bound for the
complexity of the $\mathcal{G}$-module $\widehat{Z}_1(\lambda)$.
(2) Let $\mathcal{G} = \mathbb{G}_{a(2)}$ be the second Frobenius kernel of the additive group $\mathbb{G}_{a}$. It is well-known (cf. \cite[(3.5)]{Ev}) that
\[ \Ext^\ast_{\mathbb{G}_{a(2)}}(k,k) \cong \HH^\ast(\mathbb{G}_{a(2)},k) \cong k[X_1,X_2]\!\otimes_k\!\Lambda(Y_1,Y_2),\]
where the generators $X_1,X_2$ of the polynomial ring and $Y_1,Y_2$ of the exterior algebra have degrees $2$ and $1$, respectively. Consequently,
\[ \dim_k\Ext^{2p}_{\mathbb{G}_{a(2)}}(k,k) = \dim_kk[X_1,X_2]_{2p}+ 2\dim_kk[X_1,X_2]_{2p-1} + \dim_kk[X_1,X_2]_{2p-2} = 2p+1,\]
while $\cx_{\mathbb{G}_{a(2)}}(k)=2$. \end{Examples}
\noindent
Recall that a group scheme $\mathcal{G}$ is referred to as {\it representation-finite} if and only if $\modd \mathcal{G}$ has only finitely many isoclasses of indecomposable objects. An indecomposable
$\mathcal{G}$-module is said to be {\it periodic} if there exists $n \ge 1$ such that $\Omega^n_\mathcal{G}(M) \cong M$. The first part of the following result refines \cite[(7.6.1)]{SFB2}.
\begin{Corollary} \label{UB2} Let $\mathcal{G}$ be an infinitesimal group of height $r$. Then the following statements hold:
{\rm (1)} \ A $\mathcal{G}$-module $M$ is projective if and only if $\Ext_\mathcal{G}^{2np^{r-1}}(M,M) = (0)$ for some $n \ge 1$.
{\rm (2)} \ The group $\mathcal{G}$ is diagonalizable if and only if $\HH^{2np^{r-1}}(\mathcal{G},k) = (0)$ for some $n \ge 1$.
{\rm (3)} \ The group $\mathcal{G}$ is representation-finite if and only if $\dim_k \HH^{2np^{r-1}}(\mathcal{G},k) \le 1$ for some $n \ge 1$. \end{Corollary}
\begin{proof} (1) According to (\ref{UB1}) we have $\cx_\mathcal{G}(M) = 0$, so that $M$ is projective.
(2) By part (1), the trivial $\mathcal{G}$-module is projective, so that the algebra of measures $k\mathcal{G}$ of $\mathcal{G}$ is semi-simple. Our assertion now follows from Nagata's theorem
\cite[(IV,\S3,(3.6))]{DG}.
(3) If $\mathcal{G}$ is representation-finite and not diagonalizable, then the trivial $\mathcal{G}$-module $k$ is periodic. By the Friedlander-Suslin Theorem \cite[(1.5)]{FS}, the even cohomology ring
$\HH^\bullet(\mathcal{G},k)$ is generated in degrees $2p^i$, with $i \in \{0,\ldots,r\!-\!1\}$. In view of \cite[(5.10.6)]{Be2}, the period of $k$ divides $2p^{r-1}$, so that
$\Omega_\mathcal{G}^{2p^{r-1}}(k)\cong k$. This readily yields $\dim_k \HH^{2p^{r-1}}(\mathcal{G},k) = 1$.
Let $n \in \mathbb{N}$ be such that $\dim_k \HH^{2np^{r-1}}(\mathcal{G},k) \le 1$. Then Theorem \ref{UB1} implies $\cx_\mathcal{G}(k) \le 1$ and our assertion is a consequence of \cite[(1.1),(2.7)]{FV1}. \end{proof}
\begin{Corollary} \label{UB3} Let $M$ be a $\mathcal{G}$-module. Then the following statements hold:
{\rm (1)} \ If $M$ is simple and such that $\mathrm{Top}(\Omega^{2np^{r-1}}_\mathcal{G}(M))$ is simple for some $n \ge 1$, then $\Omega^{2np^{r-1}}_\mathcal{G}(M) \cong M$.
{\rm (2)} \ If $M$ is indecomposable of length $\ell(M)=2$ and $\ell(\Omega^{2np^{r-1}}_\mathcal{G}(M)) \le 2$ for some $n \ge 1$, then $M$ is projective or periodic. \end{Corollary}
\begin{proof} By general theory, we have
\[ \Ext^{2np^{r-1}}_\mathcal{G}(M,M) \cong \underline{\Hom}_\mathcal{G}(\Omega^{2np^{r-1}}_\mathcal{G}(M),M).\]
(1) By assumption, the modules $M$ and $S := \mathrm{Top}(\Omega^{2np^{r-1}}_\mathcal{G}(M))$ are simple, so that Schur's Lemma implies
\[ \dim_k \Ext_\mathcal{G}^{2np^{r-1}}(M,M) \cong \dim_k \Hom_\mathcal{G}(S,M) = \delta_{[S],[M]},\]
where the brackets indicate isomorphism classes. If $S\not \cong M$, then Corollary \ref{UB2} yields that $M$ is projective, whence $S=(0)$, a contradiction. Consequently, $S \cong M$, as
desired.
(2) If $M$ is indecomposable of length $2$, then either $\Omega^{2np^{r-1}}_\mathcal{G}(M)
\cong M$ and $M$ is periodic, or $\Hom_\mathcal{G}(\Omega^{2np^{r-1}}_\mathcal{G}(M),M) \cong \Hom_\mathcal{G}(\Omega^{2np^{r-1}}_\mathcal{G}(M),\Soc(M)) \cong
\Hom_\mathcal{G}(\mathrm{Top}(\Omega^{2np^{r-1}}_\mathcal{G}(M)),\Soc(M))$. Schur's Lemma in conjunction with (\ref{UB1}) then implies $\cx_\mathcal{G}(M) \le 1$, so that $M$ is periodic or projective (cf.\
\cite[(5.10.4)]{Be2}). \end{proof}
\section{Varieties for $\mathbb{G}_{a(r)}$-Modules}\label{S:VG}
In Section \ref{S:PH} we shall study questions concerning the periodicity of $\mathcal{G}$-modules by considering their rank varieties of infinitesimal one-parameter subgroups of $\mathcal{G}$. These are
defined via the groups
\[ \mathbb{G}_{a(r)} := {\rm Spec}_k(k[T]/(T^{p^r}))\ \ \ \ \ \ \ (r \ge 1).\]
We denote the canonical generator of the coordinate ring by $t$. The algebra of measures $k\mathbb{G}_{a(r)}$ of $\mathbb{G}_{a(r)}$ is isomorphic to
\[k[U_0,\ldots,U_{r-1}]/(U_0^p,\ldots,U_{r-1}^p),\]
with $U_i + (U_0^p,\ldots,U_{r-1}^p)$ corresponding to the linear form $u_i$ on $k[\mathbb{G}_{a(r)}]$ that sends $t^j$ onto $\delta_{p^i,j}$.
{\it Throughout, we assume that $p\ge 3$}. We write $\HH^\ast(k[u_i],k) = k[x_{i+1}]\!\otimes_k\!\Lambda(y_i)$ with $\deg(x_{i+1}) = 2$ and $\deg(y_i) = 1$. The K\"unneth
formula then provides an isomorphism
\[ k[x_1,\ldots,x_r]\!\otimes_k\!\Lambda(y_0,\ldots,y_{r-1}) \cong \HH^\ast(\mathbb{G}_{a(r)},k) \cong \bigotimes_{i=0}^{r-1}\HH^\ast(k[u_i],k)\]
of graded $k$-algebras, where $\Lambda(y_0,\ldots,y_{r-1})$ denotes the exterior algebra in the variables $y_0,\ldots,y_{r-1}$. In this identification,
\[\HH^\ast(k[u_i],k) \cong k\!\otimes_k \cdots \otimes_k\! k\! \otimes_k\!\HH^\ast(k[u_i],k)\!\otimes_k \cdots \otimes_k\!k\]
corresponds to the image of the map $\HH^\ast(k[u_i],k) \longrightarrow \HH^\ast(\mathbb{G}_{a(r)},k)$, defined by the algebra homomorphism $k\mathbb{G}_{a(r)} \longrightarrow k[u_i] \ ; \ u_j \mapsto \delta_{i,j}u_i$.
\noindent
The above notation derives from the grading associated to the action of a torus $T$ on $\mathbb{G}_{a(r)}$. If $T$ operates via a character $\alpha : T \longrightarrow k^\times$, i.e.,
\[ t\boldsymbol{.} x = \alpha(t)x \ \ \ \ \ \ \ \ \forall \ t \in T, \ x \in \mathbb{G}_{a(r)},\]
then, thanks to \cite[(I.4.27)]{Ja3} (see also \cite[(4.1)]{CPSK}), the induced action of $T$ on $\HH^\ast(\mathbb{G}_{a(r)},k)$ can be computed as follows:
\begin{Lemma} \label{OPG1} The following statements hold:
{\rm (1)} \ $x_i \in \HH^\bullet(\mathbb{G}_{a(r)},k)_{-p^i\alpha}$ for $1 \le i \le r$.
{\rm (2)} \ $y_i \in \HH^\ast(\mathbb{G}_{a(r)},k)_{-p^i\alpha}$ for $0 \le i \le r\!-\!1$.
$\square$ \end{Lemma}
\noindent
Given $s \le r$, we consider the standard embedding $\mathbb{G}_{a(s)} \hookrightarrow \mathbb{G}_{a(r)}$, whose comorphism is the projection
\[ \pi : k[T]/(T^{p^r}) \longrightarrow k[T]/(T^{p^s}) \ \ ; \ \ f + (T^{p^r}) \mapsto f + (T^{p^s}).\]
The resulting embedding of algebras of measures is thus given by
\[ k\mathbb{G}_{a(s)} \longrightarrow k\mathbb{G}_{a(r)} \ \ ; \ \ u_i \mapsto u_i \ \ \ \ \ \ 0 \le i \le s-1.\]
Let $F : \mathbb{G}_{a(r)} \longrightarrow \mathbb{G}_{a(r-1)} \ ; \ x \mapsto x^p$ be the Frobenius homomorphism. Setting $u_{-1} := 0$, we see that the corresponding homomorphism of Hopf algebras is given
by
\[ F : k\mathbb{G}_{a(r)} \longrightarrow k\mathbb{G}_{a(r-1)} \ \ ; \ \ u_i \mapsto u_{i-1} \ \ \ \ \ \ 0 \le i \le r-1.\]
We recall the notion of a $p$-point, introduced by Friedlander-Pevtsova \cite{FPe}. Let $\mathfrak{A}_p = k[X]/(X^p)$ be the truncated polynomial ring with canonical generator $u := X+(X^p)$.
An algebra homomorphism $\alpha : \mathfrak{A}_p \longrightarrow k\mathcal{G}$ is a {\it $p$-point} of $\mathcal{G}$ if
(a) \ $\alpha$ is left flat, and
(b) \ there exists an abelian unipotent subgroup $\mathcal{U} \subseteq \mathcal{G}$ such that $\im \alpha \subseteq k\mathcal{U}$.
\noindent
If $\alpha : \mathfrak{A}_p \longrightarrow k\mathcal{G}$ is an algebra homomorphism, then $\alpha^\ast : \modd \mathcal{G} \longrightarrow \modd \mathfrak{A}_p$ denotes the associated pull-back functor. Two $p$-points $\alpha$ and $\beta$
are {\it equivalent} if for every $M \in \modd \mathcal{G}$ the module $\alpha^\ast(M)$ is projective precisely when $\beta^\ast(M)$ is projective. We denote by $P(\mathcal{G})$ the space of equivalence
classes of $p$-points.
The cohomological interpretation of $p$-points is based on the induced algebra homomorphisms $\alpha^\bullet : \HH^\bullet(\mathcal{G},k) \longrightarrow \HH^\bullet(\mathfrak{A}_p,k)$. Following Friedlander-Pevtsova, we define for $M \in \modd \mathcal{G}$ the {\it $p$-support} of $M$ via
\[P(\mathcal{G})_M := \{ [\alpha] \in P(\mathcal{G}) \ ; \ \alpha^\ast(M) \ \text{is not projective}\}.\]
According to \cite[(3.10),(4.11)]{FPe}, the sets $P(\mathcal{G})_M$ are the closed sets of a noetherian topology on $P(\mathcal{G})$ and the map
\[ \Psi_\mathcal{G} : P(\mathcal{G}) \longrightarrow \Proj(\mathcal{V}_\mathcal{G}(k)) \ \ ; \ \ [\alpha] \mapsto \ker \alpha^\bullet\]
is a homeomorphism with $\Psi_\mathcal{G}(P(\mathcal{G})_M) = \Proj(\mathcal{V}_\mathcal{G}(M))$ for every $M \in \modd \mathcal{G}$. Moreover, $\Psi_\mathcal{G}$ is natural with respect to flat maps $\mathcal{H} \longrightarrow \mathcal{G}$ of
finite group schemes.
Given $f \in \HH^\bullet(\mathbb{G}_{a(r)},k)$, we let $Z(f)$ be the zero locus of $f$, that is, the set of the maximal ideals of $\HH^\bullet(\mathbb{G}_{a(r)},k)$ containing $f$.
\begin{Lemma} \label{OPG2} Let $N$ be a $\mathbb{G}_{a(r)}$-module such that $\mathcal{V}_{\mathbb{G}_{a(r)}}(N) \ne \{0\} = \mathcal{V}_{\mathbb{G}_{a(r-1)}}(N)$.
Then we have
\[ Z(x_r)\cap \mathcal{V}_{\mathbb{G}_{a(r)}}(N) \subsetneq \mathcal{V}_{\mathbb{G}_{a(r)}}(N).\] \end{Lemma}
\begin{proof} Let $\alpha : \mathfrak{A}_p \longrightarrow k\mathbb{G}_{a(r)}$ be a $p$-point such that $[\alpha] \in P(\mathbb{G}_{a(r)})_N$. In view of \cite[(2.2)]{FPe}, we may assume that $\alpha$ sends the generator
$u \in \mathfrak{A}_p$ onto
\[ \alpha(u) = a_0u_0+\cdots+ a_{r-1}u_{r-1} \ \ \ \ (a_i \in k).\]
Since $P(\mathbb{G}_{a(r-1)})_N = \emptyset$ (cf.\ \cite[(4.11)]{FPe}), we conclude that $a_{r-1} \ne 0$. As noted in \cite[(1.13(2))]{SFB1}, the iterated Frobenius homomorphism $F^{r-1} :
\mathbb{G}_{a(r)} \longrightarrow \mathbb{G}_{a(1)}$ induces a map
\[ (F^{r-1})^\bullet : \HH^\bullet(\mathbb{G}_{a(1)},k) \longrightarrow \HH^\bullet(\mathbb{G}_{a(r)},k) \ \ ; \ \ x_1 \mapsto x_r.\]
Since $F^{r-1}(\alpha(u)) = a_{r-1}u_0$, the map $F^{r-1}\circ \alpha$ is an isomorphism of $k$-algebras. Consequently,
$\alpha^\bullet \circ (F^{r-1})^\bullet$ is bijective, and
\[ \alpha^\bullet(x_r) = \alpha^\bullet((F^{r-1})^\bullet(x_1)) \ne 0.\]
Since $\ker \alpha^\bullet \in {\rm Proj}(\mathcal{V}_{\mathbb{G}_{a(r)}}(N))$, it follows that the radical ideal $I_N \subseteq \HH^\bullet(\mathbb{G}_{a(r)},k)$ defining $\mathcal{V}_{\mathbb{G}_{a(r)}}(N)$ is contained
in $\ker \alpha^\bullet$. By the above, the image of $x_r$ in the coordinate ring $k[\mathcal{V}_{\mathbb{G}_{a(r)}}(N)]$ is not zero, and Hilbert's Nullstellensatz provides a maximal ideal $\mathfrak{M} \unlhd
\HH^\bullet(\mathbb{G}_{a(r)},k)$ such that $\mathfrak{M} \supseteq I_N$ and $x_r \not \in \mathfrak{M}$. Consequently,
\[\mathfrak{M} \in \mathcal{V}_{\mathbb{G}_{a(r)}}(N) \setminus (Z(x_r)\cap \mathcal{V}_{\mathbb{G}_{a(r)}}(N)),\]
as desired. \end{proof}
\noindent
Let $\mathcal{G}$ be an algebraic $k$-group. In \cite[\S1]{SFB1} the authors introduce the affine algebraic scheme $V_r(\mathcal{G})$ of infinitesimal one-parameter subgroups. By definition,
\[ V_r(\mathcal{G}) = \mathcal{HOM}(\mathbb{G}_{a(r)},\mathcal{G})\]
is the homomorphism scheme, cf.\ \cite[p.18]{Wa}. Owing to \cite[(1.14)]{SFB1}, there exists a homomorphism
\[ \Psi^r_\mathcal{G} : \HH^\bullet(\mathcal{G},k) \longrightarrow k[V_r(\mathcal{G})]\]
of commutative $k$-algebras which multiplies degrees by $\frac{p^r}{2}$. Moreover, the map $\Psi^r_\mathcal{G}$ is natural in $\mathcal{G}$.
Let $s\le r$. We conclude this section with a basic observation concerning the map
\[ \Psi^r_{\mathbb{G}_{a(s)}} : \HH^\bullet(\mathbb{G}_{a(s)},k) \longrightarrow k[V_r(\mathbb{G}_{a(s)})].\]
In view of \cite[(1.10)]{SFB1} (and its proof), the coordinate ring
\[k[V_r(\mathbb{G}_{a(s)})] \cong k[T_{r-s},\ldots,T_{r-1}]\]
is reduced with $\mathbb{Z}$-grading given by $\deg(T_i) = p^i$ (see also \cite[(1.12)]{SFB1}).
\begin{Lemma} \label{OPG3} We have $\Psi^r_{\mathbb{G}_{a(s)}}(x_i) = T_{r-i}^{p^i}$ for $1 \le i \le s$. \end{Lemma}
\begin{proof} This is a direct consequence of the proof of \cite[(6.5)]{SFB2}. \end{proof}
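\noindent
Note that this is consistent with the degree count: $\Psi^r_{\mathbb{G}_{a(s)}}$ multiplies degrees by $\frac{p^r}{2}$, the class $x_i$ has degree $2$, and indeed
\[ \deg(T_{r-i}^{p^i}) = p^i\, p^{r-i} = p^r = \frac{p^r}{2} \cdot \deg(x_i).\]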
\section{Projective Height and Periodicity}\label{S:PH}
Let $k$ be an algebraically closed field of characteristic $\Char(k)=p\ge 3$. Throughout this section, we let $\mathcal{G}$ be an infinitesimal $k$-group of height $\height(\mathcal{G}) = r$. To each
non-projective $\mathcal{G}$-module $M \in \modd \mathcal{G}$ we associate its projective height $\ph(M)$. This numerical invariant, which will later be seen to be constant on the components of the stable
Auslander-Reiten quiver of $\mathcal{G}$, provides information on the period of periodic modules.
\begin{Definition} A subgroup $\mathcal{U} \subseteq \mathcal{G}$ is called {\it elementary abelian} if there exists $s \in \mathbb{N}$ such that $\mathcal{U} \cong \mathbb{G}_{a(s)}$. We let $\mathfrak{E}(\mathcal{G})$ be the set of
elementary abelian subgroups of $\mathcal{G}$. \end{Definition}
\begin{Definition} Let $M$ be a $\mathcal{G}$-module, $\mathcal{H} \subseteq \mathcal{G}$ be a closed subgroup. Then
\[ \ph_\mathcal{H}(M) := \left\{ \begin{array}{cl} \min \{ 1 \le t \le r \ ; \ M|_{\mathcal{H}_t} \ \text{is not projective}\} & \text{if} \ M|_\mathcal{H} \ \text{is not projective,}\\ 0 & \text{otherwise}
\end{array} \right.\]
is called the {\it projective height of $M$ relative to $\mathcal{H}$}. \end{Definition}
\noindent
Let $M$ be a non-projective $\mathcal{G}$-module. According to \cite[(7.6)]{SFB2}, there exists an elementary abelian subgroup $\mathcal{U} \in \mathfrak{E}(\mathcal{G})$ such that $M|_\mathcal{U}$ is not projective. This
motivates the following definition:
\begin{Definition} Let $M$ be a non-projective $\mathcal{G}$-module. Then
\[ \ph(M) := \max_{\mathcal{U} \in \mathfrak{E}(\mathcal{G})} \ph_\mathcal{U}(M)\]
is referred to as the {\it projective height} of $M$. \end{Definition}
\noindent
Let $M$ be a $\mathcal{G}$-module of projective height $\ph(M)=t>0$. Then there exists a subgroup $\mathcal{U} \subseteq \mathcal{G}$ with $\mathcal{U} \cong \mathbb{G}_{a(t-1)}$ such that $M|_\mathcal{U}$ is projective.
Consequently, $M$ is a free module over the $p^{t-1}$-dimensional algebra $k\mathcal{U}$, so that $p^{t-1}\!\mid\!\dim_kM$.
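In particular, a non-projective $\mathcal{G}$-module whose dimension is prime to $p$ necessarily has projective height $1$.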
\begin{Example} Let $G$ be a reductive group. We consider the Steinberg module $\mathrm{St}_d$, which we view as a $G_r$-module. For dimension reasons, $\mathrm{St}_d|_{G_s}$ is not projective for $d<s\le r$ (note that $\dim_k \mathrm{St}_d = p^{dN}$, where $N$ is the number of positive roots, while the dimension of every projective $G_s$-module is divisible by $p^{sN}$), while \cite[(II.11.8)]{Ja3} shows that $\mathrm{St}_d|_{G_d}$ is projective. Let $d<r$. In view of \cite[(7.6)]{SFB2} we thus have $\ph(\mathrm{St}_d) = d\!+\!1$. \end{Example}
\noindent
Given a commutative $k$-algebra $A$, we denote by $A_{\rm red}$ the associated reduced algebra. A homomorphism $f : A \longrightarrow B$ of commutative $k$-algebras induces a homomorphism
$f_{\rm red} : A_{\rm red} \longrightarrow B_{\rm red}$ of reduced $k$-algebras. The commutative algebras relevant for our purposes are the even cohomology rings $\HH^\bullet(\mathcal{G},k)$ and
$\HH^\bullet(\mathcal{U},k)$, where $\mathcal{U} \cong \mathbb{G}_{a(s)}$ is an elementary abelian subgroup of $\mathcal{G}$. We let
\[ {\rm res} : \HH^\bullet(\mathcal{G},k) \longrightarrow \HH^\bullet(\mathcal{U},k)\]
be the canonical restriction map, and recall that the canonical inclusion $k[x_1,\ldots,x_s] \longrightarrow \HH^\bullet(\mathcal{U},k)$ induces an isomorphism $k[x_1,\ldots,x_s] \cong \HH^\bullet(\mathcal{U},k)
_{\rm red}$, see \cite{CPSK}.
Let $\mathcal{H} \subseteq \mathcal{G}$ be a closed subgroup. By virtue of \cite[(5.4.1)]{SFB2}, the canonical restriction map ${\rm res} : \HH^\bullet(\mathcal{G},k) \longrightarrow \HH^\bullet(\mathcal{H},k)$ induces a morphism ${\rm res}^\ast : \mathcal{V}_{\mathcal{H}}(k) \longrightarrow \mathcal{V}_\mathcal{G}(k)$ which maps $\mathcal{V}_\mathcal{H}(k)$ homeomorphically onto its image. Bearing this in mind, we shall often identify $\mathcal{V}_\mathcal{H}(k)$ topologically with a closed subvariety of $\mathcal{V}_\mathcal{G}(k)$.
\begin{Proposition} \label{PH1} Let $M$ be a $\mathcal{G}$-module, $\mathcal{U} \cong \mathbb{G}_{a(s)}$ be an elementary abelian subgroup of $\mathcal{G}$.
{\rm (1)} \ If $M|_\mathcal{U}$ is not projective, then there exists $\zeta \in \HH^{2p^{r-\ph_\mathcal{U}(M)}}(\mathcal{G},k)_{\rm red}$ such that
\[ Z(\zeta) \cap \mathcal{V}_\mathcal{G}(M) \subsetneq \mathcal{V}_\mathcal{G}(M).\]
{\rm (2)} \ If $\mathcal{V}_{\mathcal{U}}(k) \subseteq \mathcal{V}_\mathcal{G}(M)$, then there exists $\zeta \in \HH^{2p^{r-s}}(\mathcal{G},k)_{\rm red}$ such that
\[ Z(\zeta) \cap \mathcal{V}_\mathcal{G}(M) \subsetneq \mathcal{V}_\mathcal{G}(M).\] \end{Proposition}
\begin{proof} Let $1 \le t \le s$. Owing to \cite[(1.14)]{SFB1} we have a commutative diagram
\[ \begin{CD} \HH^\bullet(\mathcal{G},k) @> \Psi^r_{\mathcal{G}} >> k[V_r(\mathcal{G})]\\
@V{\rm res} VV @V\pi VV\\
\HH^\bullet(\mathcal{U}_t,k) @>\Psi^r_{\mathcal{U}_t}>> k[V_r(\mathcal{U}_t)], \end{CD} \]
of homomorphisms of graded, commutative $k$-algebras, where the horizontal arrows multiply degrees by $\frac{p^r}{2}$. Thanks to \cite[(1.5)]{SFB1}, the map $\pi$ is surjective, and
\cite[(1.12)]{SFB1} shows that it respects degrees.
Since $\mathcal{U}_t \cong \mathbb{G}_{a(t)}$ we may consider the element $T_{r-t} \in k[V_r(\mathcal{U}_t)]_{p^{r-t}}$. As $\pi$ is surjective, we can find $v_{r-t} \in k[V_r(\mathcal{G})]_{p^{r-t}}$ with
$\pi(v_{r-t}) = T_{r-t}$. According to \cite[(5.2)]{SFB2}, we have $v_{r-t}^{p^r} \in {\rm im}\, \Psi^r_{\mathcal{G}}$, so that there exists $w \in \HH^{2p^{r-t}}(\mathcal{G},k)$ with
$\Psi^r_{\mathcal{G}}(w) = v_{r-t}^{p^r}$. In light of (\ref{OPG3}), we thus obtain
\[ \Psi^r_{\mathcal{U}_t}({\rm res}(w)) = \pi(\Psi^r_{\mathcal{G}}(w)) = \pi(v_{r-t}^{p^r}) = T_{r-t}^{p^r} = \Psi^r_{\mathcal{U}_t}(x_t^{p^{r-t}}).\]
Thanks to \cite[(5.2)]{SFB2}, we conclude that the residue class $\zeta_t := \bar{w} \in \HH^{2p^{r-t}}(\mathcal{G},k)_{\rm red}$ satisfies
\[ (\ast) \ \ \ \ \ \ \ \ \ {\rm res}_{\rm red}(\zeta_t) = \bar{x}_t^{p^{r-t}}.\]
If we identify $\mathcal{V}_{\mathcal{U}_t}(k)$ with its image under the morphism ${\rm res}^\ast : \mathcal{V}_{\mathcal{U}_t}(k) \longrightarrow \mathcal{V}_\mathcal{G}(k)$, whose comorphism is the restriction map ${\rm res}_{\rm red} :
\HH^\bullet(\mathcal{G},k)_{\rm red} \longrightarrow \HH^\bullet(\mathcal{U}_t,k)_{\rm red}$, then ($\ast$) implies
\[ Z(\zeta_t) \cap \mathcal{V}_{\mathcal{U}_t}(k) = Z(x_t).\]
Let $t := \ph_\mathcal{U}(M)$. In view of \cite[(7.1)]{SFB2}, the assumption $Z(\zeta_t)\cap \mathcal{V}_\mathcal{G}(M) = \mathcal{V}_\mathcal{G}(M)$ yields
\[ \mathcal{V}_{\mathcal{U}_t}(M) = \mathcal{V}_\mathcal{G}(M)\cap \mathcal{V}_{\mathcal{U}_t}(k) = Z(\zeta_t)\cap \mathcal{V}_{\mathcal{U}_t}(k) \cap \mathcal{V}_{\mathcal{U}_t}(M) = Z(x_t) \cap \mathcal{V}_{\mathcal{U}_t}(M),\]
which, by choice of $t$, contradicts (\ref{OPG2}). This concludes the proof of (1).
For the proof of (2), we set $t:= s$, so that $\mathcal{U}_t = \mathcal{U} \cong \mathbb{G}_{a(s)}$. Consider the homomorphism
\[ \Phi_M : \HH^\bullet(\mathcal{G},k) \longrightarrow \Ext^\ast_{\mathcal{G}}(M,M)\ \ ; \ \ [f] \mapsto [f\!\otimes\! \id_{M}].\]
We let $A := \HH^\bullet(\mathcal{G},k)/\sqrt{\ker \Phi_M}$ be the coordinate ring of the support variety $\mathcal{V}_{\mathcal{G}}(M)$ and denote by
\[ \iota^\ast : \HH^\bullet(\mathcal{G},k)_{\rm red} \longrightarrow A\]
the canonical projection map. According to our convention, the inclusion $\mathcal{V}_\mathcal{U}(k) \subseteq \mathcal{V}_\mathcal{G}(M)$ means that the image of the morphism ${\rm res}^\ast : \mathcal{V}_{\mathcal{U}}(k) \longrightarrow
\mathcal{V}_\mathcal{G}(k)$ is contained in $\mathcal{V}_{\mathcal{G}}(M)$. Thus, the map ${\rm res}^\ast$ factors through the inclusion $\iota : \mathcal{V}_{\mathcal{G}}(M) \hookrightarrow \mathcal{V}_{\mathcal{G}}(k)$, so that there exists a
homomorphism $\gamma^\ast : A \longrightarrow \HH^\bullet(\mathcal{U},k)_{\rm red}$ with
\[ {\rm res}_{\rm red} = \gamma^\ast \circ \iota^\ast.\]
By ($\ast$), we can find $\zeta := \zeta_s \in \HH^{2p^{r-s}}(\mathcal{G},k)_{\rm red}$ such that ${\rm res}_{\rm red}(\zeta) \ne 0$. Consequently, $\iota^\ast(\zeta) \ne 0$, and Hilbert's
Nullstellensatz provides a maximal ideal $\mathfrak{M} \supseteq \sqrt{\ker\Phi_M}$ which does not contain $\zeta$. This implies
\[ Z(\zeta)\cap \mathcal{V}_{\mathcal{G}}(M) \subsetneq \mathcal{V}_{\mathcal{G}}(M),\]
as desired. \end{proof}
\noindent
We record a consequence concerning modules of complexity $1$, which generalizes \cite[(2.5)]{Fa1}. The proof employs a method of Carlson (cf.\ \cite{Ca3}), which is based on the following
construction: By general theory, a cohomology class $\zeta \in \HH^{2n}(\mathcal{G},k)\setminus \{0\}$ corresponds to an element $\hat{\zeta} \in \Hom_\mathcal{G}(\Omega_\mathcal{G}^{2n}(k),k)\setminus
\{0\}$. We let
\[ L_\zeta := \ker \hat{\zeta}\]
be the {\it Carlson module} of $\zeta$.
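Since $\zeta \ne 0$ and $k$ is one-dimensional, the map $\hat{\zeta}$ is surjective, so that $L_\zeta$ fits into an exact sequence
\[ (0) \longrightarrow L_\zeta \longrightarrow \Omega^{2n}_\mathcal{G}(k) \stackrel{\hat{\zeta}}{\longrightarrow} k \longrightarrow (0),\]
which is the sequence employed in the proof below.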
\begin{Corollary}\label{PH2} Let $M$ be an indecomposable $\mathcal{G}$-module of complexity $\cx_\mathcal{G}(M) = 1$. Then we have
\[ \Omega_\mathcal{G}^{2p^{r-\ph(M)}}(M) \cong M.\] \end{Corollary}
\begin{proof} Let $t := \ph(M) = \ph_\mathcal{U}(M)$ for some $\mathcal{U} \in \mathfrak{E}(\mathcal{G})$. Owing to (\ref{PH1}(1)), we can find an element $\zeta \in \HH^{2p^{r-t}}(\mathcal{G},k)\setminus\{0\}$ such that
\[ Z(\zeta) \cap \mathcal{V}_\mathcal{G}(M) \subsetneq \mathcal{V}_\mathcal{G}(M).\]
Since $M$ is indecomposable, the variety $\mathcal{V}_\mathcal{G}(M)$ is a line (see \cite[(7.7)]{SFB2}). Thanks to \cite[p.755]{SFB2} we have $\mathcal{V}_\mathcal{G}(L_\zeta) = Z(\zeta)$, so that an application of
\cite[(7.2)]{SFB2} gives
\[ \{0\} = Z(\zeta) \cap \mathcal{V}_\mathcal{G}(M) = \mathcal{V}_\mathcal{G}(L_\zeta\!\otimes_k\!M).\]
Consequently, the module $L_\zeta \!\otimes_k\! M$ is projective and the exact sequence
\[ (0) \longrightarrow L_\zeta\!\otimes_k \!M \longrightarrow \Omega_\mathcal{G}^{2p^{r-t}}(M) \oplus (\text{proj.}) \longrightarrow M \longrightarrow (0),\]
obtained by tensoring the sequence defined by $\hat{\zeta}$ with $M$, splits. By comparing projective-free summands of $\Omega_\mathcal{G}^{2p^{r-t}}(M) \oplus (\text{proj.}) \cong
(L_\zeta\!\otimes_k\! M) \oplus M$, we arrive at $\Omega_\mathcal{G}^{2p^{r-t}}(M) \cong M$. \end{proof}
\noindent
A $\mathcal{G}$-module $M$ is called {\it periodic}, provided there exists $n \in \mathbb{N}$ such that $\Omega_\mathcal{G}^n(M) \cong M$. In that case,
\[ {\rm per}(M):= \min \{n \in \mathbb{N} \ ; \ \Omega^{n}_\mathcal{G}(M) \cong M\}\]
is called the {\it period} of $M$. By Corollary \ref{PH2}, the number ${\rm per}(M)$ divides $2p^{r-\ph(M)}$. The following examples show that the latter number may only provide a
rough estimate for ${\rm per}(M)$.
\begin{Examples} (1) Consider the infinitesimal group $\mathcal{G} = \SL(2)_1T_r$, where $r \ge 2$. Then $\height(\mathcal{G}) = r$ while any unipotent subgroup $\mathcal{U}$ of $\mathcal{G}$ has height $\le 1$. Consequently, every non-projective $\mathcal{G}$-module has projective height $1$, so that (\ref{PH2}) yields $2p^{r-1}$ as an estimate for the period ${\rm per}(M)$ of a periodic module $M$.
According to \cite[(5.6)]{FV2} and \cite[(4.5)]{Fa5} ``most'' periodic modules (namely those whose rank varieties are not $T$-stable) have period $2$, while those with a $T$-stable
rank variety satisfy ${\rm per}(M) = 2p^{r-1}$.
(2) Let $\mathcal{U}$ be a unipotent infinitesimal group of complexity $\cx_\mathcal{U}(k)=1$. Owing to the main theorem of \cite{FRV} such groups can have arbitrarily large height. The corresponding
algebras $k\mathcal{U}$ are truncated polynomial rings $k[X]/(X^{p^n})$, so that every indecomposable $\mathcal{U}$-module $M$ is periodic with ${\rm per}(M) = 2$. If $\mathcal{U}$ has height $r \ge 2$,
then only for those non-projective indecomposable modules $M$ with $\dim_kM = \ell \dim_k k\mathcal{U}_{r-1}$ does Corollary \ref{PH2} provide the correct formula for ${\rm per}(M)$.
\end{Examples}
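\noindent
To make Example (2) explicit: over $k\mathcal{U} \cong k[X]/(X^{p^n})$ the non-projective indecomposable modules are the Jordan blocks $k[X]/(X^j)$ with $1 \le j < p^n$, and a minimal projective cover yields $\Omega_\mathcal{U}(k[X]/(X^j)) \cong k[X]/(X^{p^n-j})$. Since $p^n$ is odd, $j \ne p^n\!-\!j$, whence $\Omega^2_\mathcal{U}(k[X]/(X^j)) \cong k[X]/(X^j)$ and ${\rm per}(M) = 2$ for every non-projective indecomposable $\mathcal{U}$-module $M$.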
\begin{Remark} We shall see in Section \ref{S:AR} below that $2p^{r-\ph(M)}$ coincides with the period for graded modules of Frobenius kernels of reductive algebraic groups (see
Theorem \ref{MC3}). \end{Remark}
\noindent
Let $M$ be a $\mathcal{G}$-module. By general theory, a homomorphism $\varphi : \mathbb{G}_{a(r)} \longrightarrow \mathcal{G}$ corresponds to a homomorphism $\varphi : k\mathbb{G}_{a(r)} \longrightarrow k\mathcal{G}$. If $M$ is a
$\mathcal{G}$-module, then we denote by $M|_{k[u_{r-1}]}$ the pull-back of $M$ along the map $\varphi|_{k[u_{r-1}]}$. Following Suslin-Friedlander-Bendel \cite[\S6]{SFB2}, we let
\[ V_r(\mathcal{G})_M := \{ \varphi \in \Hom(\mathbb{G}_{a(r)},\mathcal{G}) \ ; \ M|_{k[u_{r-1}]} \ \text{is not projective}\}\]
be the {\it rank variety} of $M$.
\begin{Theorem} \label{PH3} Let $M$ be an indecomposable $\mathcal{G}$-module such that $\cx_{\mathcal{G}}(M) = 1$. Then
\[ \mathcal{U}_M := \bigcup_{\varphi \in V_r(\mathcal{G})_M} \im \varphi \]
is an elementary abelian subgroup of $\mathcal{G}$ such that $\ph(M) = \height(\mathcal{U}_M) = \ph_{\mathcal{U}_M}(M)$. \end{Theorem}
\begin{proof} Since $\cx_{\mathcal{G}}(M) = 1$, an application of \cite[(6.8)]{SFB2} yields
\[\dim V_r(\mathcal{G})_M = \dim \mathcal{V}_{\mathcal{G}}(M) =1.\]
According to \cite[(6.1)]{SFB2}, the canonical action of $k^\times$ on $\mathbb{G}_{a(r)}$ endows the variety $V_r(\mathcal{G})_M$ with the structure of a conical variety:
\[ (\alpha\boldsymbol{.}\varphi)(x) := \varphi(\alpha.x) \ \ \ \ \ \ \forall \ \alpha \in k^\times, \, x \in k\mathbb{G}_{a(r)}.\]
In view of $\alpha.u_{r-1} = \alpha^{p^{r-1}}u_{r-1}$, the group $k^\times$ acts simply on $V_r(\mathcal{G})_M \setminus \{0\}$. Since $M$ is indecomposable, Carlson's Theorem (see
\cite[(7.7)]{SFB2}) ensures that the variety $(V_r(\mathcal{G})_M\setminus\{0\})/k^\times$ is connected. Hence there exists $\varphi \in V_r(\mathcal{G})_M$ such that
\[ V_r(\mathcal{G})_M = \{\alpha\boldsymbol{.} \varphi \ ; \ \alpha \in k^\times\}\cup \{0\}.\]
As a result, $\mathcal{U}_M = \im \varphi$ is a subgroup of $\mathcal{G}$. Being isomorphic to the factor group $\mathbb{G}_{a(r)}/\ker \varphi$, the group $\mathcal{U}_M$ is elementary abelian.
Let $s := \ph_{\mathcal{U}_M}(M)$. We propose to show that $s = \ph(M) = \height(\mathcal{U}_M)$. Let $\mathcal{U}$ be an elementary abelian subgroup of $\mathcal{G}$ such that $M|_\mathcal{U}$ is not projective. Owing
to \cite[(6.6)]{SFB2}, $V_r(\mathcal{U})_{M|_\mathcal{U}} $ is a one-dimensional subvariety of the one-dimensional irreducible variety $V_r(\mathcal{G})_M$, whence $V_r(\mathcal{U})_{M|_\mathcal{U}} = V_r(\mathcal{G})_M$. It
readily follows that $\mathcal{U}_M \subseteq \mathcal{U}$.
By applying this observation to the Frobenius kernels of $\mathcal{U}_M$, we conclude that $s = \height(\mathcal{U}_M)$. Thus, if $\mathcal{U}$ is elementary abelian and $r' \in \mathbb{N}$ is a natural number such that
the restriction $M|_{\mathcal{U}_{r'}}$ of $M$ to the elementary abelian subgroup $\mathcal{U}_{r'}$ is not projective, then $\mathcal{U}_M \subseteq \mathcal{U}_{r'}$, whence $r' \ge s$. Consequently, $\ell :=
\ph_\mathcal{U}(M) \ge s$ and $\mathcal{U}_M \subseteq \mathcal{U}_\ell$. If $\ell>s$, then $\height(\mathcal{U}_M)=s$ implies that $\mathcal{U}_M \subseteq \mathcal{U}_{\ell-1}$, while $M|_{\mathcal{U}_{\ell-1}}$ is projective. This,
however, contradicts $M|_{\mathcal{U}_M}$ being non-projective, so that $\ell=s$. As a result, we have
\[\height(\mathcal{U}_M)= s = \ph(M),\]
as desired. \end{proof}
\noindent
We now specialize to the case, where $\mathcal{G}=G_r$ is a Frobenius kernel of a smooth group scheme $G$. Then $G$ acts on $G_r$ via the adjoint representation, and we obtain an action of $G$
on $V_r(G)$. If $M$ is a $G_r$-module and $g \in G$, then $M^{(g)}$ denotes the $G_r$-module with underlying $k$-space $M$ and action
\[ x\boldsymbol{.} m := \Ad(g)^{-1}(x)m \ \ \ \ \forall \ x \in G_r, m \in M.\]
One readily verifies that
\[ V_r(G)_{M^{(g)}} = g\boldsymbol{.} V_r(G)_M \ \ \ \ \forall \ g \in G.\]
We refer to a $G_r$-module $M$ as {\it $G$-stable} if $M^{(g)} \cong M$ for every $g\in G$. Clearly, the restriction $M|_{G_r}$ of a $G$-module $M$ is a $G$-stable $G_r$-module. Moreover,
if $G$ is connected, then every simple $G_r$-module is $G$-stable.
\begin{Corollary} \label{PH3.5} Let $G$ be a smooth group scheme, $M$ be an indecomposable, $G$-stable $G_r$-module of complexity $\cx_{G_r}(M)=1$. Then $\mathcal{U}_M \unlhd G_r$ is a
normal subgroup of $G_r$. \end{Corollary}
\begin{proof} This follows directly from Theorem \ref{PH3} and the definition of the $G$-action on $V_r(G)_M$. \end{proof}
\noindent
We fix a maximal torus $T\subseteq G$ and denote by $X(T)$ its character group. Since $T$ acts diagonally on the Lie algebra $\mathfrak{g} := \Lie(G)$, there exists a finite subset $\Psi \subseteq
X(T)\setminus\{0\}$ such that
\[ \mathfrak{g} = \mathfrak{g}_0 \oplus \bigoplus_{\alpha \in \Psi} \mathfrak{g}_\alpha,\]
with $\mathfrak{g}_\alpha := \{x \in \mathfrak{g} \ ; \ t.x = \alpha(t)x \ \ \ \ \forall \ t \in T\}$ for $\alpha \in \Psi\cup\{0\}$. The elements of $\Psi$ are the {\it roots} of $G$.
Suppose that $\mathcal{U} \subseteq G_r$ is a $T$-invariant elementary abelian subgroup. The maximal torus $T$ acts canonically on $\HH^\bullet(G_r,k)_{\rm red}$ and
$\HH^\bullet(\mathcal{U},k)_{\rm red}$. In light of (\ref{OPG1}), the weights of this action on the latter space are of the form $-n\alpha$ for $n \ge 0$, where $\alpha \in X(T)$ denotes the character through which $T$ acts on $\mathcal{U}$. We thus have the following refinement of
Proposition \ref{PH1}:
\begin{Proposition} \label{PH4} Let $G$ be a smooth group scheme, $\mathcal{U} \subseteq G_r$ be a $T$-invariant elementary abelian subgroup, $M$ be a $G_r$-module. Then there exists $\alpha
\in \Psi \cup \{0\}$ with the following properties:
{\rm (1)} \ If $M|_\mathcal{U}$ is not projective, then there exists $\zeta \in (\HH^{2p^{r-\ph_\mathcal{U}(M)}}(G_r,k)_{\rm red})_{-p^r\alpha}$ such that
\[ Z(\zeta) \cap \mathcal{V}_{G_r}(M) \subsetneq \mathcal{V}_{G_r}(M).\]
{\rm (2)} \ If $\mathcal{V}_\mathcal{U}(k) \subseteq \mathcal{V}_{G_r}(M)$, then there exists $\zeta \in (\HH^{2p^{r-\height(\mathcal{U})}}(G_r,k)_{\rm red})_{-p^r\alpha}$ such that
\[ Z(\zeta) \cap \mathcal{V}_{G_r}(M) \subsetneq \mathcal{V}_{G_r}(M).\]\end{Proposition}
\begin{proof} Since $\mathcal{U} \cong \mathbb{G}_{a(s)}$ is $T$-invariant, the diagonalizable group $T$ acts on $\mathcal{U}$ via a character $\alpha \in X(T)$:
\[ t\boldsymbol{.} u = \alpha(t)u \ \ \ \ \forall \ u \in \mathcal{U}.\]
Hence $T$ also acts on $\Lie(\mathcal{U}) \subseteq \mathfrak{g}$ via $\alpha$, so that $\alpha \in \Psi\cup \{0\}$.
We return to the proof of Proposition \ref{PH1}. For $t \le r$ we found elements $\zeta'_t \in \HH^{2p^{r-t}}(G_r,k)_{\rm red}$ satisfying
\[ {\rm res}_{\rm red}(\zeta'_t) = \bar{x}_t^{p^{r-t}}.\]
Owing to (\ref{OPG1}), the element $\bar{x}_t^{p^{r-t}}$ belongs to $(\HH^{2p^{r-t}}(\mathcal{U},k)_{\rm red})_{-p^r\alpha}$. Since the map ${\rm res}_{\rm red}$ is $T$-equivariant, we may replace $\zeta'_t$ by its homogeneous component $\zeta_t$ of degree $-p^r\alpha$, so that
\[ {\rm res}_{\rm red}(\zeta_t) = \bar{x}_t^{p^{r-t}} \ \ \text{for some} \ \ \zeta_t \in (\HH^{2p^{r-t}}(G_r,k)_{\rm red})_{-p^r\alpha}.\]
We may now adopt the arguments of the proof of (\ref{PH1}) verbatim to obtain our result. \end{proof}
\noindent
Now assume $G$ to be reductive, with Borel subgroup $B=UT$ and sets $\Psi^+$ and $\Sigma$ of positive roots and simple roots, respectively. As usual, $\rho$ denotes the half-sum of the positive roots. Given $d \ge 0$, we recall that
\[ \mathrm{St}_d := L((p^d\!-\!1)\rho)\]
denotes the $d$-th {\it Steinberg module}. By definition, $\mathrm{St}_d$ is a simple $G$-module with $\mathrm{St}_0 \cong k$, the trivial $G$-module. In view of \cite[(3.18(4))]{Ja3} and
\cite[(II.10.2)]{Ja3}
\[ \mathrm{St}_d \cong L_d((p^d\!-\!1)\rho) \cong Z_d((p^d\!-\!1)\rho)\]
is a simple, projective $G_d$-module.
Let $M$ be a $G_r$-module. If $\alpha \in \Psi$ is a root, we define
\[ \widehat{\ph}_\alpha(M) := \left\{ \begin{array}{cl} \ph_{(U_\alpha)_r}(M) & \text{if} \ M|_{(U_\alpha)_r} \ \text{is not projective,}\\ \infty & \text{otherwise,}\end{array} \right.\]
where $U_\alpha \subseteq G$ is the root subgroup of $\alpha$. Moreover, for a subset $\Phi \subseteq \Psi$, we put
\[ \widehat{\ph}_\Phi(M) := \min_{\alpha \in \Phi} \widehat{\ph}_\alpha(M).\]
Let $Y(T)$ be the set of co-characters of $T$ and denote by $\langle\, , \, \rangle : X(T)\times Y(T) \longrightarrow \mathbb{Z}$ the canonical pairing. For $\lambda \in X(T)$ and $s \in \mathbb{N}_0$, we put
\[ \Psi^s_\lambda := \{\alpha\in \Psi \ ; \ \langle\lambda\!+\!\rho, \alpha^\vee \rangle \in p^s \mathbb{Z} \}.\]
Here $\alpha^\vee \in Y(T)$ denotes the coroot associated with $\alpha$. Owing to \cite[(2.7)]{Ja1}, the set $\Psi^s_\lambda$ is a subsystem of $\Psi$ whenever the prime number $p$ is good for
$G$.
Given $\lambda \in X(T)$ with $\langle \lambda+\rho,\alpha^\vee\rangle \ne 0$ for some $\alpha \in \Psi$, we define the \emph{depth} of $\lambda$ via
\[ \dep(\lambda) := \min\{s \in \mathbb{N}_0 \ ; \ \Psi_\lambda^s \ne \Psi\},\]
and put $\dep(\lambda) := \infty$ if $\langle\lambda\!+\!\rho,\alpha^\vee\rangle = 0$ for every $\alpha \in \Psi$; see \cite{FR}. The following result links the depth of a weight to the projective height $\widehat{\ph}_\Sigma(Z_r(\lambda))$ of the baby
Verma module
\[ Z_r(\lambda) := kG_r\!\otimes_{kB_r}\!k_\lambda.\]
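By way of illustration (the following merely unwinds the above definitions), let $G = \SL(2)$ and identify $X(T)$ with $\mathbb{Z}$ by letting $n$ correspond to $n\rho$. For $\lambda = n \ge 0$ we have $\langle \lambda\!+\!\rho,\alpha^\vee\rangle = \pm(n\!+\!1)$ for the two roots $\alpha \in \Psi$, so that $\Psi^s_\lambda = \Psi$ if and only if $p^s$ divides $n\!+\!1$. Writing $n\!+\!1 = p^dm$ with $p\nmid m$, we obtain $\dep(n) = d\!+\!1$; for $p=3$, for instance, $\dep(0)=1$, $\dep(2)=2$ and $\dep(8)=3$. Whenever $d\!+\!1\le r$, Proposition \ref{PH5} below thus asserts that $\widehat{\ph}_\Sigma(Z_r(n)) = d\!+\!1$.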
Suppose that $G$ is a smooth group scheme that is defined over the Galois field $\mathbb{F}_p$. Let $M$ be a $G$-module. Given $d \in \mathbb{N}$, we let $M^{[d]}$ be the $G$-module with
underlying $k$-space $M$ and action defined via pull-back along the iterated Frobenius endomorphism $F^d : G \longrightarrow G$.
\begin{Proposition} \label{PH5} Suppose that $G$ is semi-simple, simply connected, defined over $\mathbb{F}_p$ and that $p$ is good for $G$. Let $\lambda \in X(T)$ be a weight such that $\dep(\lambda) \le r$. Then the following statements hold:
{\rm (1)} \ We have $\mathcal{V}_{(U_\alpha)_r}(\mathrm{St}_{\dep(\lambda)-1}) \subseteq \mathcal{V}_{G_r}(Z_r(\lambda))$ for every $\alpha \in \Sigma\setminus \Psi^{\dep(\lambda)}_\lambda$.
{\rm (2)} \ We have $\dep(\lambda) = \widehat{\ph}_\Sigma(Z_r(\lambda))$. \end{Proposition}
\begin{proof} Let $\dep(\lambda) = d+1$ with $d\ge 0$. Owing to \cite[(6.2)]{FR}, there exists a weight $\mu$ of depth $1$ and an isomorphism
\[ Z_r(\lambda) \cong Z_{r-d}(\mu)^{[d]}\!\otimes_k\!\mathrm{St}_d\]
of $G_r$-modules. Moreover, we have $\lambda = p^d \mu +(p^d-1)\rho$. According to \cite[(5.2)]{FR}, the inclusion
\[ \mathcal{V}_{(U_\alpha)_{r-d}}(k) \subseteq \mathcal{V}_{G_{r-d}}(Z_{r-d}(\mu))\]
holds for every $\alpha \in \Sigma\setminus \Psi^1_\mu$.
Consider the iterate $F^d : G_r \longrightarrow G_{r-d}$ of the Frobenius endomorphism. There results a commutative diagram
\[ \begin{CD} \mathcal{V}_{(U_\alpha)_{d+1}}(\mathrm{St}_d) @> F^d >> \mathcal{V}_{(U_\alpha)_1}(k)\\
@V{\rm res}^\ast VV @V{\rm res}^\ast VV\\
\mathcal{V}_{(U_\alpha)_r}(\mathrm{St}_d) @> F^d >> \mathcal{V}_{(U_\alpha)_{r-d}}(k)\\
@V{\rm res}^\ast VV @V{\rm res}^\ast VV\\
\mathcal{V}_{G_r}(\mathrm{St}_d) @> F^d >> \mathcal{V}_{G_{r-d}}(k),\\
\end{CD} \]
where the vertical arrows are the canonical inclusions induced by the restriction maps. Thanks to \cite[(6.2(b))]{FR}, the lower horizontal map is an isomorphism sending $\mathcal{V}_{G_r}(Z_r(\lambda))$ onto $\mathcal{V}_{G_{r-d}}(Z_{r-d}(\mu))$.
(1) Let $\alpha \in \Sigma\setminus\Psi_\lambda^{d+1}$. Then we have
\[ p^d\langle\mu+\rho,\alpha^\vee \rangle = \langle \lambda +\rho, \alpha^\vee \rangle \not \in p^{d+1}\mathbb{Z},\]
so that $\langle \mu+\rho,\alpha^\vee\rangle \not \in p\mathbb{Z}$, whence $\alpha \in \Sigma\setminus \Psi^1_\mu$. In view of the identification discussed at the beginning of this section,
the inclusion $\mathcal{V}_{(U_\alpha)_{r-d}}(k) \subseteq \mathcal{V}_{G_{r-d}}(Z_{r-d}(\mu))$ in conjunction with the above diagram now implies $\mathcal{V}_{(U_\alpha)_r}(\mathrm{St}_d) \subseteq \mathcal{V}_{G_r}(Z_r(\lambda))$.
(2) Given $\alpha \in \Sigma \setminus \Psi^{d+1}_\lambda$, part (1) yields $\mathcal{V}_{(U_\alpha)_r}(\mathrm{St}_d) \subseteq \mathcal{V}_{G_r}(Z_r(\lambda))$. The diagram above
then implies
\begin{eqnarray*}
\mathcal{V}_{(U_\alpha)_{d+1}}(\mathrm{St}_d) & = & \mathcal{V}_{(U_\alpha)_r}(\mathrm{St}_d) \cap \mathcal{V}_{(U_\alpha)_{d+1}}(k) \ \subseteq \mathcal{V}_{G_r}(Z_r(\lambda))\cap \mathcal{V}_{(U_\alpha)_{d+1}}(k)\\
& \subseteq & \mathcal{V}_{G_r}(Z_r(\lambda))\cap \mathcal{V}_{G_{d+1}}(k) = \mathcal{V}_{G_{d+1}}(Z_r(\lambda)).
\end{eqnarray*}
Let $L$ be the Levi subgroup of $G$ associated to the simple root $\alpha$ and let $\mathrm{St}_d^L$ be the $d$-th Steinberg module of $L$. Thanks to \cite[(4.2.1)]{NPV}, there is an inclusion $\mathcal{V}_{L_{d+1}}(\mathrm{St}^L_d) \subseteq \mathcal{V}_{G_{d+1}}(\mathrm{St}_d)$. The rank varieties of the former module were essentially computed in \cite[(7.9)]{SFB2} (The quoted result deals with Frobenius kernels of $\SL(2)$). This result implies in particular that $\mathcal{V}_{(U_\alpha)_{d+1}}(\mathrm{St}_d^L) \ne \{0\}$, so that $\mathcal{V}_{(U_\alpha)_{d+1}}(\mathrm{St}_d) \ne \{0\}$. Consequently,
\[ \{0\} \ne \mathcal{V}_{(U_\alpha)_{d+1}}(\mathrm{St}_d) \cap \mathcal{V}_{(U_\alpha)_{d+1}}(k) \subseteq \mathcal{V}_{G_{d+1}}(Z_r(\lambda))\cap \mathcal{V}_{(U_\alpha)_{d+1}}(k) = \mathcal{V}_{(U_\alpha)_{d+1}}(Z_r(\lambda)).\]
As a result, the module $Z_r(\lambda)|_{(U_\alpha)_{d+1}}$ is not projective, so that $\widehat{\ph}_\alpha(Z_r(\lambda)) \le d+1$.
Since $\Psi^d_\lambda = \Psi$, \cite[(5.6)]{FR} shows that $Z_r(\lambda)|_{G_d}$ is projective, so that the module $Z_r(\lambda)|_{(U_\alpha)_d}$ is projective for every $\alpha \in
\Sigma$. It follows that $\widehat{\ph}_\alpha(Z_r(\lambda)) = d+1$ for all $\alpha \in \Sigma\setminus \Psi^{d+1}_\lambda$. Since $p$ is good for $G$, \cite[(2.7)]{Ja1} ensures that the
latter set is not empty.
For $\alpha \in \Sigma\cap\Psi^{d+1}_\lambda$ we consider the Levi subgroup $L \subseteq G$ defined by $\alpha$, as well as the corresponding baby Verma module $Z_r^L(\lambda)$
of $L_r$. According to \cite[(5.6)]{FR}, the module $Z_r^L(\lambda)|_{L_{d+1}}$ is projective, and \cite[(4.2.1)]{NPV} now implies the projectivity of
$Z_r(\lambda)|_{(U_\alpha)_{d+1}}$. We conclude that $\widehat{\ph}_\alpha(Z_r(\lambda)) \ge d+2$. Consequently,
\[ \widehat{\ph}_\Sigma(Z_r(\lambda)) = \min_{\alpha \in \Sigma} \widehat{\ph}_\alpha(Z_r(\lambda)) = d+1 = \dep(\lambda),\]
as desired. \end{proof}
\section{Euclidean Components of finite group Schemes}\label{S:EC}
This section is concerned with the Auslander-Reiten theory of a finite group scheme $\mathcal{G}$. Given a self-injective algebra $\Lambda$, we denote by $\Gamma_s(\Lambda)$ the {\it stable
Auslander--Reiten quiver} of $\Lambda$. By definition, the directed graph $\Gamma_s(\Lambda)$ has as vertices the isomorphism classes of the non-projective indecomposable
$\Lambda$-modules and its arrows are defined via the so-called {\it irreducible morphisms}. We refer the interested reader to \cite[Chap.\ VII]{ARS} for further details. The AR-quiver is fitted
with an automorphism $\tau_\Lambda$, the so-called {\it Auslander--Reiten translation}. Since $\Lambda$ is self-injective, $\tau_\Lambda$ coincides with the composite
$\Omega^2_\Lambda \circ \nu_\Lambda$ of the square of the Heller translate $\Omega_\Lambda$ and the Nakayama functor $\nu_\Lambda$, cf.\ \cite[(IV.3.7)]{ARS}.
The connected components of $\Gamma_s(\Lambda)$ are connected stable translation quivers. By work of Riedtmann \cite[Struktursatz]{Ri}, the structure of such a quiver $\Theta$ is
determined by a directed tree $T_\Theta$ and an {\it admissible group} $\Pi \subseteq {\rm Aut}_k(\mathbb{Z}[T_\Theta])$, giving rise to an isomorphism
\[ \Theta \cong \mathbb{Z}[T_\Theta]/\Pi\]
of stable translation quivers. The underlying undirected tree $\bar{T}_\Theta$, the so-called {\it tree class} of $\Theta$, is uniquely determined by $\Theta$. We refer the reader to
\cite[(4.15.6)]{Be1} for further details. For group algebras of finite groups, the possible tree classes and admissible groups were first determined by Webb \cite{We}.
Following Ringel \cite{Ri1}, an indecomposable $\Lambda$-module $M$ is called \emph{quasi-simple}, provided it lies at the end of a component of tree class $A_\infty$ of the stable
AR-quiver $\Gamma_s(\Lambda)$.
Throughout this section, $\mathcal{G}$ is assumed to be a finite algebraic group, defined over an algebraically closed field $k$ of characteristic $p>0$. By general theory (see \cite[(1.5)]{FMS}), the
algebra $k\mathcal{G}$ affords a Nakayama automorphism $\nu = \nu_\mathcal{G}$ of finite order $\ell$. For each $n \in \mathbb{N}_0$, the automorphism $\nu^n$ induces, via pull-back, an auto-equivalence of
$\modd \mathcal{G}$ and hence an automorphism on the stable AR-quiver $\Gamma_s(\mathcal{G})$ of $k\mathcal{G}$. In view of \cite[Chap.X]{ARS}, the Heller operator $\Omega_\mathcal{G}$ also gives rise to an
automorphism of $\Gamma_s(\mathcal{G})$. Given an AR-component $\Theta \subseteq \Gamma_s(\mathcal{G})$, we denote by $\Theta^{(n)}$ the image of $\Theta$ under $\nu^n$ and put
$\Upsilon_\Theta := \bigcup_{n=0}^{\ell-1} (\Theta \cup \Omega_\mathcal{G}(\Theta))^{(n)}$.
We say that a component $\Theta \subseteq \Gamma_s(\mathcal{G})$ has {\it Euclidean tree class} if the graph $\bar{T}_\Theta$ is one of the Euclidean diagrams $\tilde{A}_{12},\, \tilde{D}_n
\, (n\ge 4)$ or $\tilde{E}_n \, (6\le n\le 8)$.
\begin{Proposition} \label{EC1} Suppose that $\Theta \subseteq \Gamma_s(\mathcal{G})$ is a component of Euclidean tree class. Let $M$ be a $\mathcal{G}$-module such that
{\rm (a)} \ $M$ possesses a filtration $(M_i)_{0\leq i \leq r}$ such that each filtration factor is indecomposable with $M_i / M_{i-1} \not \in \Upsilon_\Theta$ for $1 \leq i \leq r$, and
{\rm (b)} \ $M$ possesses a filtration $(M'_j)_{0\leq j \leq s}$ such that $M'_j/ M'_{j-1} \in \Upsilon_\Theta$ for $1 \leq j \leq s$.
\noindent
Then $\cx_\mathcal{G}(M) \le 1$. \end{Proposition}
\begin{proof} Let $Z_i := M_i / M_{i-1}$ for $1 \leq i \leq r$. According to \cite[(I.8.8)]{Er}, the map
\[ d_i : \Upsilon_\mathrm{T}heta \longrightarrow \mathbb{N}_0 \ \ ; \ \ X \mapsto \dim_k\Ext^1_\mathcal{G}(Z_i,X)\]
is additive on $\Theta^{(j)}$ and $\Omega_\mathcal{G}(\Theta)^{(j)}$ for $0\leq j \leq \ell-1$. Hence \cite[(2.4)]{We} provides a natural number $b_i$ such that
\[ \dim_k \Ext^n_\mathcal{G}(Z_i,X) = \dim_k \Ext^1_\mathcal{G}(Z_i,\Omega_\mathcal{G}^{1-n}(X)) \leq b_i\]
for every $n \geq 1$ and every $X \in \Upsilon_\Theta$. From the exactness of the sequence
\[ \Ext^n_\mathcal{G}(Z_i,X) \longrightarrow \Ext^n_\mathcal{G}(M_i,X) \longrightarrow \Ext^n_\mathcal{G}(M_{i-1},X)\]
we obtain
\[ \dim_k\Ext^n_\mathcal{G}(M,X) \leq \sum_{i=1}^rb_i =: b \ \ \ \ \ \ \forall \ X \in \Upsilon_\Theta, \ n \geq 1.\]
In view of (b), the same reasoning now implies
\[ \dim_k\Ext^n_\mathcal{G}(M,M) \leq sb\ \ \ \ \ \ \forall \ n \geq 1,\]
so that general theory (cf.\ \cite[(5.3.5)]{Be2}) yields $\cx_\mathcal{G}(M) \le 1$. \end{proof}
\begin{Corollary} \label{EC2} Suppose that $\Theta \subseteq \Gamma_s(\mathcal{G})$ is a component of Euclidean tree class. Then every $M \in \Theta$ possesses a composition factor $S$ such
that $S \in \Upsilon_\Theta$. \end{Corollary}
\begin{proof} If the composition factors of $M$ do not belong to $\Upsilon_\Theta$, then $M$ satisfies (a) and (b) of Proposition \ref{EC1}. Hence $M$ has complexity $1$, and is therefore periodic (cf.\ \cite[(5.10.4)]{Be2}). As $\nu$ has finite order, there thus exists $n \ge 1$ with $\tau^n_\mathcal{G}(M) \cong M$. In view of \cite[Theorem]{HPR}, the tree class
$\bar{T}_\Theta$ is either a finite Dynkin diagram or $A_\infty$, contradicting our hypothesis on $\Theta$. \end{proof}
\section{Representations of $\SL(2)_r$}\label{S:Rep}
Throughout, we consider the infinitesimal group scheme $\SL(2)_r$, defined over the algebraically closed field $k$ of characteristic $p \ge 3$. Recall that the algebra $\Dist(\SL(2)_r) =
k\SL(2)_r$ is symmetric (cf.\ \cite[(2.1)]{FR}), so that $\Upsilon_\Theta = \Theta \cup \Omega_{\SL(2)_r}(\Theta)$ for every component $\Theta \subseteq \Gamma_s(\SL(2)_r)$.
The blocks of $\Dist(\SL(2)_r)$ are well understood (cf.\ \cite{Pf}). In the following, we identify the character group $X(T)$ of the standard maximal torus $T$ of diagonal matrices of
$\SL(2)$ with $\mathbb{Z}$, by letting $n \in \mathbb{Z}$ correspond to $n\rho \in X(T)$. Accordingly, the modules $L_r(0),\ldots,L_r(p^r\!-\!1)$ constitute a full set of representatives for the
isomorphism classes of the simple $\SL(2)_r$-modules (cf.\ \cite[(II.3.15)]{Ja3}). A block of $\Dist(\SL(2)_r)$ is given by a subset of $X_r(T) := \{0,\ldots,p^r\!-\!1\}$; the resulting
partition was determined by Pfautsch, see \cite[\S4.2]{Pf}: We represent elements of $X_r(T)$ by expanding them $p$-adically: $\lambda = \sum_{j=0}^{r-1} \lambda_jp^j$. For $0\le i \le
\frac{p-3}{2}$ and $0\le s \le r\!-\!1$ we put
\[ \mathcal{B}^{(r)}_{i,s} := \{ \sum_{j=0}^{r-1} \lambda_jp^j \ ; \ \lambda_0=\lambda_1=\cdots =\lambda_{s-1} = p\!-\!1 \ , \ \lambda_s \in \{i,p\!-\!2\!-\!i\}\} \ \ \text{as well as} \ \
\mathcal{B}_r^{(r)} = \{p^r\!-\!1\},\]
so that the corresponding block consists of those modules whose composition factors are of the form $L_r(\lambda)$ with $\lambda \in \mathcal{B}^{(r)}_{i,s}$. For future reference we record the
following result (cf.\ \cite[Satz~5]{Pf}):
\begin{Lemma} \label{Rep1} The functor $M \mapsto \mathrm{St}_s\!\otimes_k\!M^{[s]}$ induces a Morita equivalence $\mathcal{B}_{i,0}^{(r-s)}\sim_M \mathcal{B}_{i,s}^{(r)}$, sending $L_{r-s}(n)$ onto
$L_r(np^s\!+\!(p^s\!-\!1))$. \end{Lemma}
\begin{proof} This follows directly from \cite[(II.10.5)]{Ja3} in conjunction with Steinberg's tensor product theorem \cite[(II.3.17)]{Ja3} and \cite[(II.3.15)]{Ja3}. \end{proof}
\noindent
Thus, for the purposes of Auslander-Reiten theory, it suffices to consider the blocks $\mathcal{B}_{i,0}^{(r)}$ for $r \ge 1$.
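For instance, if $p=3$ and $r=2$, the above description yields the partition
\[ X_2(T) = \mathcal{B}^{(2)}_{0,0}\cup \mathcal{B}^{(2)}_{0,1}\cup \mathcal{B}^{(2)}_2 = \{0,1,3,4,6,7\}\cup\{2,5\}\cup\{8\},\]
and Lemma \ref{Rep1} (applied with $s=1$) identifies $\mathcal{B}^{(2)}_{0,1}$ with $\mathcal{B}^{(1)}_{0,0} = \{0,1\}$, sending $L_1(0)$ and $L_1(1)$ onto $L_2(2)$ and $L_2(5)$, respectively.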
\begin{Lemma} \label{Rep2} The simple $\mathcal{B}_{i,0}^{(r)}$-modules of complexity $2$ are given by the highest weights $p^r\!-\!p\!+\!i$ and $p^r\!-\!2\!-\!i$. \end{Lemma}
\begin{proof} Let $\mathcal{N}(\mathfrak{sl}(2))$ be the nullcone of the Lie algebra $\mathfrak{sl}(2)$, that is,
\[ \mathcal{N}(\mathfrak{sl}(2)) := \{ x \in \mathfrak{sl}(2) \ ; \ x^{[p]} = 0\}.\]
Suppose that $L_r(\lambda)$ is a simple $\mathcal{B}_{i,0}^{(r)}$-module. Thanks to \cite[(7.8)]{SFB2}, we have
\[ V_r(\SL(2))_{L_r(\lambda)} \cong \{ (x_0,\ldots, x_{r-1}) \in \mathcal{N}(\mathfrak{sl}(2))^r \ ; \ [x_i,x_j] = 0 \ \ \forall \ i,j \ , \ x_{r-i-1} = 0 \ \text{for} \ \lambda_i =p\!-\!1\}.\]
Direct computation (see \cite[p.112f]{Fa3}) shows that $\dim V_r(\SL(2))_{L_r(\lambda)} = 2$ if and only if there exists exactly one coefficient $\lambda_i \ne p\!-\!1$. As $L_r(\lambda)$
belongs to $\mathcal{B}_{i,0}^{(r)}$, we conclude that $\cx_{\SL(2)_r}(L_r(\lambda)) = 2$ if and only if $\lambda_0 \in \{i,p\!-\!2\!-\!i\}$ is the only coefficient $\ne p\!-\!1$, as desired.
\end{proof}
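\noindent
For example, if $p=3$, $r=2$ and $i=0$, then Lemma \ref{Rep2} singles out the simple modules $L_2(6)$ and $L_2(7)$: writing $6 = 0+2\cdot 3$ and $7 = 1+2\cdot 3$, the digit $\lambda_0 \in \{0,1\}$ is in either case the only $3$-adic coefficient different from $p\!-\!1=2$.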
\noindent
We record some structural features of the corresponding principal indecomposable $\SL(2)_r$-modules. Given $\lambda \in \{0,\ldots,p^r\!-\!1\}$, we let $P_r(\lambda)$ be the projective
cover of $L_r(\lambda)$. If $\lambda \ne p^r\!-\!1$, then $P_r(\lambda)$ is not simple, and we consider the heart
\[ \Ht_r(\lambda) = \Rad(P_r(\lambda))/\Soc(P_r(\lambda))\]
of $P_r(\lambda)$. By work of Jeyakumar \cite{Je}, each principal indecomposable $\SL(2)_1$-module $P_1(\lambda)$ has the structure of an $\SL(2)$-module, which extends the given $\SL(2)_1$-structure. Since $p\ge 3$, this structure is unique, cf.\ \cite[(II.11.11)]{Ja3}.
\begin{Lemma} \label{Rep3} Let $\lambda = \lambda_0 + \sum_{i=1}^{r-1}(p\!-\!1)p^i$, where $0 \le \lambda_0 \le p\!-\!2$. Then the following statements hold:
{\rm (1)} \ If $r \ge 2$, then $\Ht_r(\lambda)$ is indecomposable, with $\Soc(\Ht_r(\lambda))$ being simple.
{\rm (2)} \ The composition factors of $\Ht_r(\lambda)$ are of the form $L_r(\mu)$, with
\[\mu \in \{p\!-\!2\!-\!\lambda_0+(p\!-\!2)p^{\ell\!-\!1}+\sum_{i=\ell}^{r-1}(p\!-\!1)p^i \ ; \ 1< \ell \le r\} \cup \{p\!-\!2\!-\!\lambda_0\}.\] \end{Lemma}
\begin{proof} (1) Thanks to \cite[(1.1)]{HJ}, the principal indecomposable $\SL(2)_r$-module with highest weight $\lambda$ has the form
\[ P_r(\lambda) = P_1(\lambda_0)\!\otimes_k\! L_1(p\!-\!1)^{[1]}\!\otimes_k\cdots\otimes_k\!L_1(p\!-\!1)^{[r-1]}.\]
As shown in \cite[Thm.3]{Hu2}, the $\SL(2)$-module $P_1(\lambda_0)$ has composition factors (from top to bottom) $L(\lambda_0),L(2p\!-\!2\!-\! \lambda_0),L(\lambda_0)$, where
$L(\gamma)$ denotes the simple $\SL(2)$-module with highest weight $\gamma \in \mathbb{N}_0$. From the isomorphism $L(\lambda_0)|_{\SL(2)_1} \cong L_1(\lambda_0)$ (cf.\
\cite[(II.3.15)]{Ja3}), we obtain
\[ \Ht_r(\lambda) \cong L(2p\!-\!2\!-\!\lambda_0)|_{\SL(2)_r}\!\otimes_k\! L_1(p\!-\!1)^{[1]}\!\otimes_k\cdots\otimes_k\!L_1(p\!-\!1)^{[r-1]}.\]
Steinberg's tensor product theorem in conjunction with \cite[(II.3.15)]{Ja3}, \cite[(3.1)]{AJL} and $r\ge 2$ now yields
\begin{eqnarray*}
\Ht_r(\lambda) & \cong & L_1(p\!-\!2\!-\!\lambda_0)\!\otimes_k\!L_1(1)^{[1]}\!\otimes_k\!L_1(p\!-\!1)^{[1]}\!\otimes_k\cdots\otimes_k\!L_1(p\!-\!1)^{[r-1]} \\
& \cong & L_1(p\!-\!2\!-\!\lambda_0)\!\otimes_k\!P_1(p\!-\!2)^{[1]}\!\otimes_k\!L_1(p\!-\!1)^{[2]}\!\otimes_k \cdots\otimes_k\!L_1(p\!-\!1)^{[r-1]}.
\end{eqnarray*}
Since the latter module is contained in the principal indecomposable $\SL(2)_r$-module
\[ P_1(p\!-\!2\!-\!\lambda_0)\!\otimes_k\!P_1(p\!-\!2)^{[1]}\!\otimes_k\!L_1(p\!-\!1)^{[2]}\!\otimes_k \cdots\otimes_k\!L_1(p\!-\!1)^{[r-1]}\]
(see \cite[(1.1)]{HJ}), we conclude that $\Ht_r(\lambda)$ is indecomposable with simple socle.
(2) This is a direct consequence of \cite[(3.4)]{AJL}, \cite[(II.3.15)]{Ja3} and Steinberg's tensor product theorem. \end{proof}
\noindent
We turn to the Auslander-Reiten theory of $\SL(2)_r$ and begin by determining the stable AR-components of Euclidean tree class. We recall that the standard almost split sequence
\[ (0) \longrightarrow \Rad(P_r(\lambda)) \longrightarrow \Ht_r(\lambda)\oplus P_r(\lambda) \longrightarrow P_r(\lambda)/\Soc(P_r(\lambda)) \longrightarrow (0)\]
is the only almost split sequence involving the principal indecomposable module $P_r(\lambda)$ (cf.\ \cite[(V.5.5)]{ARS}).
\begin{Proposition} \label{Rep4} Let $\Theta \subseteq \Gamma_s(\SL(2)_r)$ be a component of Euclidean tree class. Then $\Theta \cong \mathbb{Z}[\tilde{A}_{12}]$. \end{Proposition}
\begin{proof} In view of Lemma \ref{Rep1}, we may assume that $\Theta \subseteq \Gamma_s(\mathcal{B}_{i,0}^{(r)})$. According to \cite[(2.4)]{We}, the component $\Theta$ is attached to a
principal indecomposable module, so by passing to the isomorphic component $\Omega_{\SL(2)_r}^{-1}(\Theta)$ we may assume without loss of generality that $\Theta$ contains a simple
module $L_r(\lambda)$.
Another application of \cite[(2.4)]{We} implies $\cx_{\SL(2)_r}(M) = 2$ for every $M \in \Upsilon_\Theta$, so that Lemma \ref{Rep2} yields
\[ \lambda = \lambda_0 + (p\!-\!1)p + \cdots +(p\!-\!1)p^{r-1} \ \ ; \ \ \lambda_0 \in \{i,p\!-\!2\!-\!i\}.\]
Suppose that $r \ge 2$. By virtue of Lemma \ref{Rep3}(1), the module $\Ht_r(\lambda)$ is indecomposable and $\Ht_r(\lambda) \in \Omega_{\SL(2)_r}(\Theta) \subseteq
\Upsilon_\Theta$.
Owing to Lemma \ref{Rep3}(2) and Lemma \ref{Rep2}, the composition factors of $\Ht_r(\lambda)$ have complexity $\ne 2$. Accordingly, they do not belong to $\Upsilon_\Theta$, and
Corollary \ref{EC2} yields a contradiction. Consequently, $r=1$, and the assertion follows from the well-known path algebra presentation of $\mathcal{B}_{i,0}^{(1)}$ (see \cite{Dr,Fi,Ru}), which
establishes a Morita equivalence between $\mathcal{B}_{i,0}^{(1)}$ and the trivial extension of the Kronecker algebra $k[\bullet \rightrightarrows \bullet]$. \end{proof}
\begin{Remark} The proof of the foregoing result also shows that $\Theta$ belongs to a block of $k\SL(2)_r$ of tame representation type (see also Theorem \ref{BF3} below). \end{Remark}
\noindent
Recall that a non-projective indecomposable $\SL(2)_r$-module $M$ is referred to as quasi-simple, provided
(a) \ the module $M$ belongs to a component of tree class $A_\infty$, and
(b) \ $M$ has exactly one predecessor in $\Gamma_s(\SL(2)_r)$.
\begin{Proposition} \label{Rep5} Let $S$ be a simple $\SL(2)_r$-module of complexity $\cx_{\SL(2)_r}(S)=2$, $\Theta \subseteq \Gamma_s(\SL(2)_r)$ be the component containing $S$.
Then $\Theta \cong \mathbb{Z}[A_\infty]$ or $\Theta \cong \mathbb{Z}[\tilde{A}_{12}]$. If $\Theta \cong \mathbb{Z}[A_\infty]$, then $S$ is quasi-simple. \end{Proposition}
\begin{proof} Since Morita equivalence preserves the complexity of a module, Lemma \ref{Rep1} allows us to assume that $\Theta \subseteq \Gamma_s(\mathcal{B}_{i,0}^{(r)})$. As
$\cx_{\SL(2)_r}(S) = 2$, the component $\Theta$ is not finite and its tree class is not a finite Dynkin diagram (cf.\ \cite[(2.1)]{Fa3}). If the tree class $\bar{T}_\Theta$ is Euclidean, then
Proposition \ref{Rep4} implies $\Theta \cong \mathbb{Z}[\tilde{A}_{12}]$. Alternatively, $\bar{T}_\Theta \cong A_\infty, A_\infty^\infty$ or $D_\infty$ (see \cite[(1.3)]{Fa3}).
In view of Lemma \ref{Rep2}, we have $S \cong L_r(\lambda)$, where
\[ \lambda = \lambda_0 + (p\!-\!1)p + \cdots +(p\!-\!1)p^{r-1} \ \ ; \ \ \lambda_0 \in \{i,p\!-\!2\!-\!i\}.\]
If $r=1$, then $\Theta \cong \mathbb{Z}[\tilde{A}_{12}]$, a contradiction. Hence $r \ge 2$, and Lemma \ref{Rep3} shows that $\Ht_r(\lambda)$ is indecomposable with simple socle. From the standard
almost split sequence involving $P_r(\lambda)$ we conclude that $P_r(\lambda)/\Soc(P_r(\lambda)) \cong \Omega^{-1}_{\SL(2)_r}(L_r(\lambda))$ has exactly one predecessor in
$\Gamma_s(\SL(2)_r)$. Since $\Omega_{\SL(2)_r}$ defines an automorphism of $\Gamma_s(\SL(2)_r)$, the simple module $S=L_r(\lambda)$ enjoys the same property. Thus,
$\bar{T}_\Theta \not \cong A_\infty^\infty$, and if $\bar{T}_\Theta \cong A_\infty$, then $S$ is quasi-simple.
It thus remains to rule out the case where $\bar{T}_\Theta \cong D_\infty$. Assuming this isomorphism to hold, we infer from the above that $\Ht_r(\lambda)$ has $3$ successors in
$\Omega^{-1}_{\SL(2)_r}(\Theta)$. Accordingly, there exists an almost split sequence
\[ (0) \longrightarrow X \longrightarrow \Ht_r(\lambda) \oplus Q \longrightarrow Y \longrightarrow (0),\]
with $Y\not \cong P_r(\lambda)/L_r(\lambda)$, and with $Q \not \cong P_r(\lambda)$ being indecomposable projective, or zero. If $Q \ne (0)$, then $Q = P_r(\lambda')$ belongs to the
block of $L_r(\lambda)$ and the standard sequence yields $\Ht_r(\lambda) \cong \Ht_r(\lambda')$. Since $\bar{T}_\Theta \cong D_\infty$, \cite[(2.2)]{Fa3} implies
$\cx_{\SL(2)_r}(L_r(\lambda')) =2$, and Lemma \ref{Rep2} yields
\[ \lambda' = p-2-\lambda_0 + (p\!-\!1)p + \cdots +(p\!-\!1)p^{r-1}.\]
This, however, contradicts (2) of Lemma \ref{Rep3}.
We thus have an almost split sequence
\[ (0) \longrightarrow X \longrightarrow \Ht_r(\lambda) \longrightarrow Y \longrightarrow (0).\]
Since $\Soc(\Ht_r(\lambda))$ is simple, we may apply \cite[(V.3.2)]{ARS} to conclude that $Y$ is simple. As $\cx_{\SL(2)_r}(Y) = 2$ (cf.\ \cite[(2.2)]{Fa3}), Lemma \ref{Rep2} implies
$Y \in \{L_r(\lambda),L_r(\lambda')\}$, so that Lemma \ref{Rep3}(2) again leads to a contradiction.
Consequently, $\bar{T}_\Theta \cong A_\infty$, which, in view of $\cx_{\SL(2)_r}(L_r(\lambda)) = 2$, implies $\Theta \cong \mathbb{Z}[A_\infty]$ (see \cite[(2.1)]{Fa3}). \end{proof}
\section{Blocks and AR-Components}\label{S:BAR}
In this section we illustrate our results by considering blocks and Auslander-Reiten components of infinitesimal group schemes.
\subsection{Invariants of AR-Components}
We begin by showing that the notion of projective height gives rise to invariants of stable Auslander-Reiten components. Let $\mathfrak{E} \subseteq \mathfrak{E}(\mathcal{G})$ be a collection of elementary abelian
subgroups of $\mathcal{G}$, and define
\[ \ph_\mathfrak{E}(M) := \max_{\mathcal{U} \in \mathfrak{E}}\ph_\mathcal{U}(M).\]
\begin{Prop} \label{NI1} Let $\mathcal{G}$ be an infinitesimal group, $\Theta \subseteq \Gamma_s(\mathcal{G})$ be a connected component of its stable Auslander-Reiten quiver. Given $\mathfrak{E}
\subseteq \mathfrak{E}(\mathcal{G})$, we have
\[ \ph_\mathfrak{E}(M) = \ph_\mathfrak{E}(N)\]
for all $M,N \in \Theta$. \end{Prop}
\begin{proof} Let $\mathcal{U} \in \mathfrak{E}$ be an elementary abelian subgroup and consider $M \in \Theta$. We write $t := \ph_\mathcal{U}(M)$. According to \cite[(1.5)]{FMS}, the Nakayama functor
$\nu_\mathcal{G}$ is induced by the automorphism $\id_\mathcal{G}\ast \zeta_\ell$, which is the convolution of $\id_\mathcal{G}$ with the left modular function $\zeta_\ell : k\mathcal{G} \longrightarrow k$. As $\mathcal{U}_s$ is unipotent,
the restriction $\zeta_\ell|_{k\mathcal{U}_s}$ of $\zeta_\ell$ coincides with the counit, so that $(\id_\mathcal{G}\ast \zeta_\ell)|_{k\mathcal{U}_s} = \id_{k\mathcal{U}_s}$. It follows that $\nu_\mathcal{G}(M)|_{\mathcal{U}_s} =
M|_{\mathcal{U}_s}$ for all $s$.
Since $\tau_\mathcal{G} = \Omega^2_\mathcal{G} \circ \nu_\mathcal{G}$, we therefore obtain
\[ \tau_\mathcal{G}(M)|_{\mathcal{U}_s} \cong \Omega^2_{\mathcal{U}_s}(M|_{\mathcal{U}_s})\oplus ({\rm proj.}) \ \ \ \ \forall \ s \in \mathbb{N}.\]
Thus, $\tau_\mathcal{G}(M)|_{\mathcal{U}_s}$ is projective if and only if $M|_{\mathcal{U}_s}$ is projective. This implies $t = \ph_\mathcal{U}(\tau_\mathcal{G}(M))$.
Now let
\[ \mathfrak{E}_M : \ \ \ \ \ \ \ \ \ \ (0) \longrightarrow \tau_\mathcal{G}(M) \longrightarrow \bigoplus_{i=1}^n E_i \longrightarrow M \longrightarrow (0)\]
be the almost split sequence terminating in $M$. If $s \le t-1$, then $M|_{\mathcal{U}_s}$ is projective and $\mathfrak{E}_M|_{\mathcal{U}_s}$ splits, so that each $E_i|_{\mathcal{U}_s}$ is a direct summand of the
projective module $M|_{\mathcal{U}_s}\oplus \tau_\mathcal{G}(M)|_{\mathcal{U}_s}$. Hence $E_i|_{\mathcal{U}_s}$ is projective and $\ph_\mathcal{U}(E_i) \ge t$ for $1\le i \le n$.
Let $M,N \in \Theta$. By the above, we have $\ph_\mathcal{U}(N) \ge \ph_\mathcal{U}(M)$ whenever $N$ is a predecessor of $M$. In that case, there also exists an arrow $\tau_\mathcal{G}(M) \rightarrow N$, so
that $\ph_\mathcal{U}(M) = \ph_\mathcal{U}(\tau_\mathcal{G}(M)) \ge \ph_\mathcal{U}(N)$, whence $\ph_\mathcal{U}(M) = \ph_\mathcal{U}(N)$.
Since $\Theta$ is connected, it follows that $\ph_\mathcal{U}(M) = \ph_\mathcal{U}(N)$ for all $M,N \in \Theta$. This readily implies our assertion. \end{proof}
\begin{Cor} \label{NI2} Suppose that $\Theta \subseteq \Gamma_s(\mathcal{G})$ is a component containing a module of complexity $1$. Then there exists an elementary abelian subgroup
$\mathcal{U}_\Theta \subseteq \mathcal{G}$ such that $\ph(M) = \height(\mathcal{U}_\Theta)$ for every $M \in \Theta$. \end{Cor}
\begin{proof} Let $M,N \in \Theta$. As observed in \cite[(1.1)]{Fa3}, we have
\[ V_r(\mathcal{G})_M = V_r(\mathcal{G})_N,\]
so that the subgroups $\mathcal{U}_M$ and $\mathcal{U}_N$, defined in (\ref{PH3}), coincide. Setting $\mathcal{U}_\Theta := \mathcal{U}_M$ for some $M \in \Theta$, we obtain the desired result directly from
Theorem \ref{PH3}. \end{proof}
\subsection{Blocks of finite representation type}
Throughout, we consider an infinitesimal group $\mathcal{G}$ of height $r$ and denote by $\nu_\mathcal{G} : \modd \mathcal{G} \longrightarrow \modd \mathcal{G}$ the Nakayama functor of $\modd \mathcal{G}$. By general theory
\cite[(1.5)]{FMS}, $\nu_\mathcal{G}$ is an auto-equivalence of order a power of $p$.
\begin{Cor} Let $\mathcal{G}$ be an infinitesimal group, $\mathcal{B} \subseteq k\mathcal{G}$ be a block of finite representation type.
{\rm (1)} \ If $\mathcal{U} \in \mathfrak{E}(\mathcal{G})$ is an elementary abelian subgroup, then we have $\ph_\mathcal{U}(M) = \ph_\mathcal{U}(N)$ for any two non-projective indecomposable $\mathcal{B}$-modules $M$ and $N$.
{\rm (2)} \ There exists an elementary abelian subgroup $\mathcal{U}_\mathcal{B} \subseteq \mathcal{G}$ such that $\ph(M) = \height(\mathcal{U}_\mathcal{B})$ for every non-projective indecomposable $\mathcal{B}$-module $M$.
\end{Cor}
\begin{proof} Auslander's Theorem \cite[(VI.1.4)]{ARS} implies that $\Gamma_s(\mathcal{B})$ is a connected component of $\Gamma_s(\mathcal{G})$. Hence the assertions follow directly from Proposition
\ref{NI1} and Corollary \ref{NI2}. \end{proof}
\noindent
The group $\mathcal{U}_\mathcal{B}$ may be thought of as a ``defect group'' of the block $\mathcal{B}$. Recall that a finite-dimensional $k$-algebra $\Lambda$ is called a {\it Nakayama algebra} if all of its
indecomposable projective left modules and indecomposable injective left modules are uniserial (i.e., they only possess one composition series).
The following result provides a first link between properties of $\mathcal{U}_\mathcal{B}$ and $\mathcal{B}$:
\begin{Prop} \label{BFR1} Let $S$ be a simple $\mathcal{G}$-module such that $\cx_\mathcal{G}(S) = 1$ and $\ph(S) =r$. Then the block $\mathcal{B} \subseteq k\mathcal{G}$ containing $S$ is a Nakayama algebra with simple modules $\{\nu_\mathcal{G}^i(S) \ ; \ i \in \mathbb{N}_0\}$. \end{Prop}
\begin{proof} Since $S$ has complexity $1$ with $\ph(S) = r$, Corollary \ref{PH2} implies $\Omega_\mathcal{G}^2(S) \cong S$. The arguments employed in the proof of \cite[(3.2)]{Fa1}
now yield the assertion. \end{proof}
\begin{Example} Consider the subgroup $\mathcal{G} := \SL(2)_1\mathbb{G}_{a(2)} \subseteq \SL(2)$, given by
\[ \mathcal{G}(R) = \{ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in \SL(2)(R) \ ; \ a^p=1=d^p \ , c^p = 0 \ , \ b^{p^2} = 0\}\]
for every commutative $k$-algebra $R$. The first Steinberg module $\mathrm{St}_1$ is a simple $\mathcal{G}$-module, whose restriction to $\mathcal{G}_1 = \SL(2)_1$ is simple and projective, whence $\ph(\mathrm{St}_1) = 2$. Since $\mathcal{G}/\mathcal{G}_1 \cong \mathbb{G}_{a(1)}$, \cite[(I.6.6)]{Ja3} implies
\[ \Ext^n_\mathcal{G}(\mathrm{St}_1,\mathrm{St}_1) \cong \HH^n(\mathbb{G}_{a(1)},\Hom_{\mathcal{G}_1}(\mathrm{St}_1,\mathrm{St}_1)) \cong \HH^n(\mathbb{G}_{a(1)},k),\]
so that $\cx_\mathcal{G}(\mathrm{St}_1) = 1$, cf.\ \cite[\S2]{Ca1}. Thus, the block $\mathcal{B} \subseteq k\mathcal{G}$ containing $\mathrm{St}_1$ is a Nakayama algebra, with $\mathrm{St}_1$ being the only simple $\mathcal{B}$-module. \end{Example}
\subsection{Frobenius kernels of reductive groups}
In this section we consider a smooth reductive group $G$ over the algebraically closed field $k$ of characteristic $p\ge 3$. Our objective is to apply the foregoing results in order to correct the proof of \cite[(4.1)]{Fa3}. Given $r
\ge 1$, we are interested in the algebra $\Dist(G_r) = kG_r$ of distributions of $G_r$.
The following Lemma, which follows directly from the arguments of \cite[(7.2)]{Fa2}, reduces a number of issues to the special case $G = \SL(2)$:
\begin{Lem} \label{BF1} Let $\mathcal{B} \subseteq \Dist(G_r)$ be a block. If $\mathcal{B}$ has a simple module of complexity $2$, then $\mathcal{B}$ is Morita equivalent to a block of $\Dist(\SL(2)_r)$.
$\square$ \end{Lem}
\noindent
The proof of the following result corrects the false reference to $\SL(2)$-theory on page 113 of \cite{Fa3}. Theorem 4.1 of \cite{Fa3} is correct as stated.
\begin{Thm} \label{BF2} A non-projective simple $G_r$-module $L_r(\lambda)$ of complexity $2$ belongs to a component $\Theta$ with $\Theta \cong \mathbb{Z}[A_\infty]$ or $\Theta \cong \mathbb{Z}[\tilde{A}_{12}]$.\end{Thm}
\begin{proof} This follows from a consecutive application of Lemma \ref{BF1} and Proposition \ref{Rep5}. \end{proof}
\noindent
A finite-dimensional $k$-algebra $\Lambda$ is called {\it tame} if it is not of finite representation type and if for every $d>0$ the $d$-dimensional indecomposable $\Lambda$-modules can be parametrized by a one-dimensional variety. The reader is referred to \cite[(I.4)]{Er} for the precise definition.
The structure of the representation-finite and tame blocks of the Frobenius kernels of smooth groups is well understood, see \cite[Theorem]{Fa4}. The following result shows that, for smooth
reductive groups, such blocks may be characterized via the complexities of their simple modules.
\begin{Thm} \label{BF3} Suppose that $G$ is reductive, and let $\mathcal{B} \subseteq \Dist(G_r)$ be a block. Then the following statements are equivalent:
{\rm (1)} \ $\mathcal{B}$ is tame.
{\rm (2)} \ Every simple $\mathcal{B}$-module has complexity $2$.
{\rm (3)} \ $\Gamma_s(\mathcal{B})$ possesses a component isomorphic to $\mathbb{Z}[\tilde{A}_{12}]$.\end{Thm}
\begin{proof} (1) $\Rightarrow$ (2) Passing to the connected component of $G$ if necessary, we may assume that $G$ is connected. Let $S$ be a simple $\mathcal{B}$-module. Since $\mathcal{B}$ is tame,
\cite[(3.2)]{Fa6} implies $\cx_{G_r}(S) \le 2$. Since $\mathcal{B}$ is not representation finite, the simple $\mathcal{B}$-module $S$ is not projective, so that $\cx_{G_r}(S) \ge 1$. Suppose that
$\cx_{G_r}(S) = 1$. Since $S$ is $G$-stable, Corollary \ref{PH3.5} shows that $\mathcal{U}_S \unlhd G_r$ is a unipotent, normal subgroup of $G_r$. Passage to the first Frobenius kernels yields
the existence of a non-zero unipotent $p$-ideal $\mathfrak{u} \unlhd \mathfrak{g}$. Since $G$ is reductive, \cite[(11.8)]{Hu1} rules out the existence of such ideals, a contradiction.
(2) $\Rightarrow$ (3) By Lemma \ref{BF1}, the block $\mathcal{B}$ is Morita equivalent to a block $\mathcal{B}' \subseteq \Dist(\SL(2)_r)$, all of whose simple modules have complexity $2$. A consecutive
application of Lemma \ref{Rep2} and Lemma \ref{Rep3} implies $r=1$. Consequently, $\mathcal{B}'$ is tame and so is $\mathcal{B}$. Moreover, as noted in the proof of Proposition \ref{Rep5}, the component of $\Gamma_s(\mathcal{B}') \cong \Gamma_s(\mathcal{B})$ containing a simple module of complexity $2$ is isomorphic to $\mathbb{Z}[\tilde{A}_{12}]$, so that (3) holds.
(3) $\Rightarrow$ (1) Suppose that $\Gamma_s(\mathcal{B})$ possesses a component of type $\mathbb{Z}[\tilde{A}_{12}]$. By \cite[(2.1)]{FR}, the block $\mathcal{B}$ is symmetric, and we may invoke \cite[(IV.3.8.3),(IV.3.9)]{Er} to see that the algebra $\mathcal{B}/\Soc(\mathcal{B})$ is special biserial. Thus, $\mathcal{B}/\Soc(\mathcal{B})$ is tame or representation-finite (cf.\ \cite[(II.3.1)]{Er}) and, having the
same non-projective indecomposables, $\mathcal{B}$ enjoys the same property. Since $\mathcal{B}$ possesses modules of complexity $2$, it is not of finite representation type, cf.\ \cite{He}. \end{proof}
\begin{Remark} Let $G$ be a smooth algebraic group scheme. According to \cite[(4.6)]{Fa4}, the presence of a tame block $\mathcal{B} \subseteq \Dist(G_r)$ implies that $G$ is reductive.
\end{Remark}
\section{The Nakayama Functor of $\modd \gr\Lambda$}\label{S:NF}
In preparation for our analysis of $G_rT$-modules we study in this section the category of graded modules of an associative algebra. Let $k$ be a field,
\[ \Lambda = \bigoplus_{i \in \mathbb{Z}^n} \Lambda_i\]
be a finite-dimensional, $\mathbb{Z}^n$-graded $k$-algebra. We denote by $\modd \gr \Lambda$ the category of finite-dimensional $\mathbb{Z}^n$-graded $\Lambda$-modules and degree zero homomorphisms. Given $i \in \mathbb{Z}^n$, the $i$-th shift functor $[i] : \modd \gr\Lambda \longrightarrow \modd \gr\Lambda$ sends $M$ onto $M[i]$, where
\[ M[i]_j := M_{j-i} \ \ \ \ \ \forall \ j \in \mathbb{Z}^n\]
and leaves the morphisms unchanged. The $\Lambda$-module $\Hom_\Lambda(M,\Lambda)^\ast$ has a natural $\mathbb{Z}^n$-grading. There results a functor
\[ \mathcal{N} : \modd \gr \Lambda \longrightarrow \modd \gr\Lambda \ \ ; \ \ M \mapsto \Hom_\Lambda(M,\Lambda)^\ast,\]
the {\it Nakayama functor} of $\modd \gr\Lambda$. The purpose of this section is to determine this functor for certain Hopf algebras $\Lambda$. We begin with a few general observations.
\subsection{Graded Frobenius Algebras} Suppose that $\Lambda$ is a Frobenius algebra with Frobenius homomorphism $\pi : \Lambda \longrightarrow k$ and associated non-degenerate bilinear form
\[ (\, ,\, )_\pi : \Lambda\times \Lambda \longrightarrow k \ \ ; \ \ (x,y) \mapsto \pi(xy).\]
The uniquely determined automorphism $\mu : \Lambda \longrightarrow \Lambda$ given by
\[ (y,x)_\pi = (\mu(x),y)_\pi\]
is called the {\it Nakayama automorphism} of the form $(\, , \,)_\pi$. For an arbitrary automorphism $\alpha \in {\rm Aut}_k(\Lambda)$ and a $\Lambda$-module $M$, we denote by
$M^{(\alpha)}$ the $\Lambda$-module with underlying $k$-space $M$ and action
\[ a\boldsymbol{.} m := \alpha^{-1}(a)m \ \ \ \ \ \ \forall \ a \in \Lambda,\, m \in M.\]
Given $M \in \modd \gr \Lambda$, we define the {\it support of} $M$ via
\[ \supp(M) := \{ i \in \mathbb{Z}^n \ ; \ M_i \ne (0)\}.\]
Then we have $\supp(M[d]) = d+\supp(M)$, as well as $\supp(M^\ast) = -\supp(M)$. If $\alpha \in {\rm Aut}_k(\Lambda)$ is an automorphism of degree $0$ and $M \in \modd \gr\Lambda$ is graded, then $M^{(\alpha)}$ is graded via
\[ M^{(\alpha)}_i := M_i \ \ \ \ \ \ \forall \ i \in \mathbb{Z}^n.\]
In particular, $\supp(M) = \supp(M^{(\alpha)})$. Recall that the {\it enveloping algebra} $\Lambda^e := \Lambda\!\otimes_k\!\Lambda^{\rm op}$ inherits a $\mathbb{Z}^n$-grading from
$\Lambda$ via
\[ \Lambda^e_i := \bigoplus_{\ell+j = i} \Lambda_\ell\!\otimes_k\!\Lambda_j \ \ \ \ \forall \ i \in \mathbb{Z}^n.\]
The algebra $\Lambda$ obtains the structure of a graded $\Lambda^e$-module by means of
\[ (a\otimes b)\boldsymbol{.} c := acb \ \ \ \ \forall \ a,b,c \in \Lambda.\]
Our first result extends the classical formula for the Nakayama functor of Frobenius algebras to the graded case.
\begin{Lem} \label{NF1} Suppose that $\Lambda$ is a Frobenius algebra with Frobenius homomorphism $\pi : \Lambda \longrightarrow k$ of degree $d_\Lambda$. Then the following statements
hold:
{\rm (1)} \ The Nakayama automorphism $\mu : \Lambda \longrightarrow \Lambda$ of the form $(\, ,\,)_\pi$ has degree $0$.
{\rm (2)} \ There are natural isomorphisms
\[ \mathcal{N}(M) \cong M^{(\mu^{-1})}[d_\Lambda]\]
for all $M \in \modd \gr \Lambda$.
{\rm (3)} \ Every homogeneous Frobenius homomorphism of $\Lambda$ has degree $d_\Lambda$.\end{Lem}
\begin{proof} (1) Since $\pi$ has degree $d_\Lambda$, we have $\pi(\Lambda_i) = (0)$ for $i\ne -d_\Lambda$, whence
\[ (\Lambda_i,\Lambda_j)_\pi = (0) \ \ \ \ \text{for} \ i+j \ne -d_\Lambda.\]
Let $a \in \Lambda_i$ and write $\mu(a)= \sum_{j \in \mathbb{Z}^n} x_j$. Assuming $j \ne i$, we consider $b \in \Lambda_\ell$. Then we have $(x_j,b)_\pi = 0$ for $j+\ell \ne -d_\Lambda$. If
$j+\ell = -d_\Lambda$, we obtain, observing $i+\ell \ne -d_\Lambda$,
\[ (x_j,b)_\pi = (\mu(a),b)_\pi = (b,a)_\pi = 0.\]
As a result, $x_j = 0$ whenever $j\ne i$, so that $\mu(a) = x_i \in \Lambda_i$.
(2) Let $\gamma := \mu\otimes \id_\Lambda$ be the induced automorphism of the enveloping algebra $\Lambda^e$. Since $\pi$ has degree $d_\Lambda$, the map
\[ \Psi : \Lambda^{(\gamma^{-1})} \longrightarrow \Lambda^\ast \ \ ; \ \ \Psi(x)(y) := (x,y)_\pi\]
is an isomorphism of $\Lambda^e$-modules of degree $d_\Lambda$. There results an isomorphism
\[ \Lambda^{(\gamma^{-1})}[d_\Lambda] \cong \Lambda^\ast\]
of graded $\Lambda^e$-modules. The adjoint isomorphism theorem yields natural isomorphisms
\[ \mathcal{N}(M) \cong \Lambda^\ast\!\otimes_\Lambda\!M \cong \Lambda^{(\gamma^{-1})}[d_\Lambda]\!\otimes_\Lambda\!M \cong M^{(\mu^{-1})}[d_\Lambda],\]
for every $M \in \modd \gr \Lambda$, cf.\ \cite[(III.2.9)]{ASS}.
(3) If $\Lambda$ has a Frobenius homomorphism of degree $d'$, then (2) provides an isomorphism $\Lambda^{(\delta^{-1})}[d'] \cong \Lambda^\ast$, where $\delta = \nu\otimes
\id_\Lambda$ for some Nakayama automorphism $\nu$ of $\Lambda$ of degree $0$. We therefore obtain
\[ \supp(\Lambda)+d_\Lambda = \supp(\Lambda^{(\gamma^{-1})})+d_\Lambda = \supp(\Lambda^{(\gamma^{-1})}[d_\Lambda]) = -\supp(\Lambda) = \supp(\Lambda)+d',\]
so that $x \mapsto x+(d_\Lambda-d')$ leaves $\supp(\Lambda)$ invariant. Since $\supp(\Lambda)$ is finite, it follows that $d'=d_\Lambda$. \end{proof}
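\noindent
As a simple illustration of Lemma \ref{NF1} (not needed in the sequel), consider the $\mathbb{Z}$-graded algebra $\Lambda := k[x]/(x^p)$ with $x$ in degree $1$; for $\Char(k)=p>0$ this is the algebra $k\mathbb{G}_{a(1)}$. The linear form $\pi : \Lambda \longrightarrow k$ picking out the coefficient of $x^{p-1}$ is a Frobenius homomorphism which vanishes on $\Lambda_i$ for $i \ne p\!-\!1$ and hence is homogeneous of degree $d_\Lambda = -(p\!-\!1)$. Since $\Lambda$ is commutative, the Nakayama automorphism of $(\,,\,)_\pi$ is the identity, so that Lemma \ref{NF1} yields $\mathcal{N}(M) \cong M[-(p\!-\!1)]$ for every $M \in \modd \gr\Lambda$.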
\begin{Lem} \label{NF3} Let $\Lambda = \bigoplus_{i \in \mathbb{Z}^n}\Lambda_i$ be a graded $k$-algebra, $\mathcal{B} \subseteq \Lambda$ be a block of $\Lambda$. Then the following statements
hold:
{\rm (1)} \ Any central idempotent of $\Lambda$ is homogeneous of degree $0$.
{\rm (2)} \ The block $\mathcal{B} \subseteq \Lambda$ is a homogeneous subspace.
{\rm (3)} \ Suppose that $\Lambda$ is a Frobenius algebra with homogeneous Frobenius homomorphism of degree $d_\Lambda$. Then $\mathcal{B}$ is a Frobenius algebra, and every homogeneous
Frobenius homomorphism of $\mathcal{B}$ has degree $d_\Lambda$. \end{Lem}
\begin{proof} (1) General theory provides a torus $T$, which acts on $\Lambda$ via automorphisms such that the given grading coincides with the weight space decomposition $\Lambda =
\bigoplus_{\lambda \in X(T)} \Lambda_\lambda$ of $\Lambda$ relative to $T$. Let $\mathcal{I}\subseteq \Lambda$ be the set of central primitive idempotents of $\Lambda$. Then $\mathcal{I}$ is a finite,
$T$-invariant set, so that the connected algebraic group $T$ acts trivially on $\mathcal{I}$. Hence $\mathcal{I} \subseteq \Lambda_0$, and our assertion follows from the fact that every central idempotent is a
sum of elements of $\mathcal{I}$.
(2) There exists a central primitive idempotent $e$ such that $\mathcal{B} = \Lambda e.$ Thanks to (1), we have $e \in \Lambda_0$, whence
\[ \mathcal{B} = \bigoplus_{i \in \mathbb{Z}^n}\Lambda_ie = \bigoplus_{i \in \mathbb{Z}^n}(\mathcal{B}\cap \Lambda_i),\]
as desired.
(3) Let $\pi : \Lambda \longrightarrow k$ be a Frobenius homomorphism of degree $d_\Lambda$. If $I \subseteq \mathcal{B}$ is a left ideal of $\mathcal{B}$ which is contained in $\ker \pi|_{\mathcal{B}}$, then $I$ is also a
left ideal of $\Lambda$, so that $I = (0)$. In view of (2), the block $\mathcal{B}$ is a homogeneous subspace. We conclude that $\pi|_\mathcal{B}$ is a homogeneous Frobenius homomorphism of $\mathcal{B}$ of
degree $d_\Lambda$. Our assertion now follows from Lemma \ref{NF1}. \end{proof}
\noindent
A $k$-algebra $\Lambda$ is called {\it connected} if its $\Ext$-quiver is connected. This is equivalent to $\Lambda$ having exactly one block. The following result in conjunction with the
observations above shows when graded Frobenius algebras afford homogeneous Frobenius homomorphisms.
\begin{Thm} \label{NF2} Let $\Lambda = \bigoplus_{i \in \mathbb{Z}^n}\Lambda_i$ be a connected graded Frobenius algebra. Then there exists a homogeneous Frobenius homomorphism $\pi :
\Lambda \longrightarrow k$. \end{Thm}
\begin{proof} Let $\Lambda^e = \Lambda\!\otimes_k\!\Lambda^{\rm op}$ be the enveloping algebra of $\Lambda$. As $\Lambda$ is connected, the canonical $\Lambda^e$-module
$\Lambda$ is indecomposable. Let $\pi : \Lambda \longrightarrow k$ be a Frobenius homomorphism. As argued above, there exists a degree $0$ automorphism $\gamma \in {\rm Aut}_k(\Lambda^e)$
such that the map
\[ \Psi : \Lambda^{(\gamma^{-1})} \longrightarrow \Lambda^\ast \ \ ; \ \ \Psi(x)(y) := (x,y)_\pi\]
is an isomorphism of $\Lambda^e$-modules. As all modules involved are indecomposable, \cite[(4.1)]{GG1} ensures the existence of an element $d \in \mathbb{Z}^n$ and an isomorphism
\[ \Phi : \Lambda^{(\gamma^{-1})}[d] \longrightarrow \Lambda^\ast\]
of graded $\Lambda^e$-modules. Consequently, the map
\[ \rho : \Lambda \longrightarrow k \ \ ; \ \ \rho(x) := \Phi(1)(x)\]
is a Frobenius homomorphism of degree $d$. \end{proof}
\begin{Remark} Let $\Lambda$ and $\Gamma$ be two connected $\mathbb{Z}^n$-graded Frobenius algebras with homogeneous Frobenius homomorphisms of degrees $d_\Lambda$ and
$d_\Gamma$, respectively. If $d_\Lambda \ne d_\Gamma$, then Lemmas \ref{NF1} and \ref{NF3} imply that $\Lambda \oplus \Gamma$ is a graded Frobenius algebra, which does not
afford a homogeneous Frobenius homomorphism. We shall see in the next section that such phenomena do not arise within the context of graded Hopf algebras. \end{Remark}
\noindent
Suppose that $\Lambda$ is a Frobenius algebra. Then $\modd \gr \Lambda$ is a Frobenius category, and \cite[(3.5)]{GG2} ensures that $\modd \gr \Lambda$ has almost split sequences.
In view of \cite[\S1]{GG2}, the Auslander-Reiten translation $\tau_{\gr \Lambda}$ is given by
\[ \tau_{\gr\Lambda} = \mathcal{N} \circ \Omega^2_{\gr \Lambda},\]
where $\Omega_{\gr\Lambda}$ denotes the Heller operator of the Frobenius category $\modd \gr \Lambda$. We denote by $\Gamma_s(\gr\Lambda)$ the stable Auslander-Reiten quiver of
$\modd \gr \Lambda$.
\begin{Cor} \label{NF4} Let $\Theta \subseteq \Gamma_s(\gr \Lambda)$ be a component. Then there exists an automorphism $\mu$ of $\Lambda$ of degree $0$ and
$d_\Theta \in \mathbb{Z}^n$ such that
\[ \tau_{\gr\Lambda}(M) = \Omega^2_{\gr\Lambda}(M^{(\mu)})[d_\Theta]\]
for every $M \in \Theta$. \end{Cor}
\begin{proof} Let
\[ \Lambda = \mathcal{B}_1\oplus \mathcal{B}_2 \oplus \cdots \oplus \mathcal{B}_s\]
be the block decomposition of $\Lambda$. In view of Lemma \ref{NF3}, this is a decomposition of graded Frobenius algebras which gives rise to a decomposition
\[ \modd \gr\Lambda = \bigoplus_{i=1}^s\modd \gr\mathcal{B}_i\]
of the graded module category.
Suppose that $M \in \Theta$. Since $M$ is indecomposable, there exists a block $\mathcal{B}_\ell \subseteq \Lambda$ such that $M \in \modd \gr \mathcal{B}_\ell$. This readily implies that $\Theta
\subseteq \modd \gr\mathcal{B}_\ell$, whence $\Theta \subseteq \Gamma_s(\gr\mathcal{B}_\ell)$. Our assertion now follows from Theorem \ref{NF2} in conjunction with Lemma \ref{NF1}.\end{proof}
\subsection{Graded Hopf Algebras}
Suppose that $H$ is a finite-dimensional Hopf algebra. We say that $H$ is {\it graded} if $H = \bigoplus_{i\in \mathbb{Z}^n}H_i$ is a graded $k$-algebra such that the comultiplication $\Delta : H
\longrightarrow H\!\otimes_k\!H$ is homogeneous of degree $0$. In that case, the counit $\varepsilon : H \longrightarrow k$ and the antipode $\eta : H \longrightarrow H$ are also maps of degree $0$. The subspace
\[ \int^\ell_H := \{ x \in H \ ; \ hx = \varepsilon(h)x \ \ \forall \ h \in H\}\]
is called the space of {\it left integrals} of $H$. This space is known to be one-dimensional, cf.\ \cite{Sw}.
\begin{Lem} \label{HA1} Let $H$ be a graded Hopf algebra. Then the following statements hold:
{\rm (1)} \ There exists $i \in \mathbb{Z}^n$ such that $\int_H^\ell \subseteq H_i$.
{\rm (2)} \ $H$ is a Frobenius algebra with a homogeneous Frobenius homomorphism $\pi : H \longrightarrow k$. \end{Lem}
\begin{proof} (1) Let $x = \sum_{j \in \mathbb{Z}^n}x_j$ be a non-zero left integral of $H$. Given $h \in H_d$, we have $\varepsilon(h)x = hx = \sum_{j \in\mathbb{Z}^n} hx_j$, so that
\[ hx_j = \varepsilon(h)x_{j+d} \ \ \ \ \forall \ j \in \mathbb{Z}^n.\]
Since $\deg \varepsilon = 0$, we obtain $hx_j = 0$ for $d \ne 0$, and $hx_j = \varepsilon(h)x_j$ for $d=0$. Consequently, $x_j \in \int^\ell_H$ for every $j \in \mathbb{Z}^n$. Since $\dim_k
\int^\ell_H = 1$, it follows that $\int^\ell_H \subseteq H_i$ for some $i \in \mathbb{Z}^n$.
(2) Note that the dual Hopf algebra $H^\ast$ is graded. Let $\pi \in H^\ast$ be a non-zero left integral. A result due to Larson and Sweedler \cite{LS} ensures that $H$ is a Frobenius algebra
with Frobenius homomorphism $\pi : H \longrightarrow k$. In view of (1), the linear map $\pi$ is homogeneous. \end{proof}
\begin{Example} Suppose that $\Char(k)=p>0$, and let $(\mathfrak{g},[p])$ be a finite-dimensional restricted Lie algebra with restricted enveloping algebra $U_0(\mathfrak{g})$. Assume that $\mathfrak{g} =
\bigoplus_{i\in \mathbb{Z}^n}\mathfrak{g}_i$ is restricted graded, that is, $\mathfrak{g}_i^{[p]} \subseteq \mathfrak{g}_{ip}$, so that $U_0(\mathfrak{g})$ is also $\mathbb{Z}^n$-graded. Given a homogeneous basis $\{e_1, \ldots, e_m\}$
of $\mathfrak{g}$, we write $e^a := e_1^{a_1}\cdots e_m^{a_m}$ for every $a \in \mathbb{N}^m_0$ and put $\tau:=(p\!-\!1,\ldots,p\!-\!1)$ as well as $a\le \tau :\Leftrightarrow a_i \le p\!-\!1$ for
$1\le i\le m$. Then the set $\{e^a \ ; \ 0 \le a \le \tau\}$ is a homogeneous basis of $U_0(\mathfrak{g})$, and
\[ \pi : U_0(\mathfrak{g}) \longrightarrow k \ \ ; \ \ \sum_{0\le a\le \tau}\alpha_ae^a \mapsto \alpha_\tau\]
is a homogeneous Frobenius homomorphism of degree $d_\mathfrak{g}:= -(p\!-\!1)\sum_{i\in \mathbb{Z}^n}(\dim_k\mathfrak{g}_i)i$. Moreover, by \cite[(I.9.7)]{Ja3}, the unique automorphism $\mu :
U_0(\mathfrak{g})\longrightarrow U_0(\mathfrak{g})$, given by $\mu(x) = x +\tr(\ad x)1$ for all $x \in \mathfrak{g}$, is the Nakayama automorphism corresponding to $\pi$. By Lemma \ref{NF1}, we have
\[ \mathcal{N}(M) \cong M^{(\mu^{-1})}[d_\mathfrak{g}]\]
for every $M \in \modd \gr U_0(\mathfrak{g})$. Let $P(S)$ be the projective cover of the graded simple module $S$. General theory implies that
\[ \Soc(P(S)) \cong \mathcal{N}^{-1}(S) \cong S^{(\mu)}[-d_\mathfrak{g}],\]
which retrieves \cite[(1.9)]{Ja2}. \end{Example}
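\noindent
For instance, if $\mathfrak{g} = \mathfrak{sl}(2)$ is endowed with its grading by the weights of the adjoint action of the standard maximal torus, then $\dim_k\mathfrak{g}_i = \dim_k\mathfrak{g}_{-i}$ for all $i$ and $\tr(\ad x) = 0$ for every $x \in \mathfrak{g}$, so that $d_\mathfrak{g} = 0$ and $\mu = \id$. Hence $\mathcal{N} \cong \id$ on $\modd \gr U_0(\mathfrak{sl}(2))$, in accordance with the algebra $k\SL(2)_1 = U_0(\mathfrak{sl}(2))$ being symmetric (cf.\ \cite[(2.1)]{FR}).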
\noindent
We are going to apply the foregoing result in the context of Frobenius kernels. Suppose that $\Char(k) =p>0$, and let $G$ be a reduced algebraic $k$-group scheme with maximal torus $T$.
We denote the adjoint representation of $G$ on $\Lie(G)$ by $\Ad$. For $r>0$, the algebra $kG_r$ obtains a grading via the adjoint action of $T$ on $kG_r$:
\[ kG_r = \bigoplus_{\lambda \in X(T)} (kG_r)_\lambda.\]
We shall identify characters of $G$, $T$ and $G_r$ with the corresponding elements of the coordinate rings or the duals of the algebras of measures. Let $\lambda_G \in X(G)$ be the character
given by $\lambda_G(g) = \det(\Ad(g))$ for all $g \in G$. If $\mathfrak{g} = \bigoplus_{\alpha \in \Psi\cup \{0\}} \mathfrak{g}_\alpha$ is the root space decomposition of $\mathfrak{g} := \Lie(G)$ relative to $T$,
then
\[ \det(\Ad(t)) = \prod_{\alpha \in \Psi} \alpha(t)^{\dim_k\mathfrak{g}_\alpha}\]
for every $t \in T$, so that
\[ \lambda_G|_T = \sum_{\alpha \in \Psi}(\dim_k\mathfrak{g}_\alpha)\alpha.\]
Given $r>0$, we denote by $\modd G_rT$ the category of finite-dimensional modules of the group scheme $G_rT \subseteq G$. In view of \cite[(2.1)]{Fa5}, which also holds for $r>1$, this
category is a direct sum of blocks of the category $\modd (G_r\!\rtimes\!T)$ of finite-dimensional $(G_r\!\rtimes\!T)$-modules. The latter category is just the category of $X(T)$-graded
$kG_r$-modules. It now follows from work by Gordon-Green \cite[(3.5)]{GG2}, \cite{Gr} that the Frobenius category $\modd G_rT$ affords almost split sequences. By the same token, the
Auslander-Reiten translation $\tau_{G_rT}$ of $\modd G_rT$ is given by
\[ \tau_{G_rT} = \mathcal{N}\circ \Omega^2_{G_rT}.\]
The following result provides a formula for the Nakayama functor of $\modd G_rT$.
\begin{Prop} \label{HA2} Let $G$ be a reduced algebraic $k$-group scheme with maximal torus $T\subseteq G$.
{\rm (1)} \ The $X(T)$-graded algebra $kG_r$ possesses a Frobenius homomorphism $\pi : kG_r \longrightarrow k$ of degree $-(p^r\!-\!1)\lambda_G|_T$.
{\rm (2)} \ We have $ \mathcal{N}(M) \cong ( M\!\otimes_k\!k_{\lambda_G|_{G_rT}})[-p^r\lambda_G|_T]$ for every $M \in \modd G_rT$. \end{Prop}
\begin{proof} (1) Let $\pi \in k[G_r] = kG_r^\ast$ be a non-zero (left) integral of the commutative Hopf algebra $k[G_r]$. By the proof of \cite[(I.9.7)]{Ja3}, the group $G$ acts on the
subspace $k\pi$ via the character $\lambda_{G_r} : G \longrightarrow k \ ; \ g \mapsto \det(\Ad(g))^{-(p^r-1)}$. Consequently, $\pi$ is homogeneous of degree $\lambda_{G_r}|_T =
-(p^r\!-\!1)\lambda_G|_T\in X(T)$.
(2) Let $\zeta_\ell$ be the left modular function of $kG_r$. By definition, $\zeta_\ell : kG_r \longrightarrow k$ is given by $xh = \zeta_\ell(h)x$ for every $h \in kG_r$ and $x \in \int^\ell_{kG_r}$.
Owing to \cite[(1.5)]{FMS}, the automorphism $\id_{kG_r}\ast\zeta_\ell$ is the Nakayama automorphism associated to $(\,,\,)_\pi$. It now follows from \cite[(I.9.7)]{Ja3} that
\[ \zeta_\ell(g) = \lambda_G(g)\]
for all $g \in G_r$. Let $M$ be a graded $kG_r$-module. In view of Lemma \ref{NF1}, the Nakayama functor $\mathcal{N}$ of $\modd \gr kG_r$ satisfies
\[ \mathcal{N}(M) \cong M^{(\id_{kG_r}\ast\zeta_\ell^{-1})}[-(p^r\!-\!1)\lambda_G|_T].\]
For $\lambda \in X(T)$, we denote by $k_\lambda$ the one-dimensional $(G_r\!\rtimes\!T)$-module on which $T$ and $G_r$ act via $\lambda$ and $1$, respectively. Then we have
\[ M[\lambda] \cong M\!\otimes_k\!k_\lambda \ \ \ \ \ \ \forall \ \lambda \in X(T), \, M \in \modd G_r\!\rtimes\!T.\]
For a character $\gamma \in X(G)$, we define characters $\hat{\gamma}, \tilde{\gamma} \in X(G_r\!\rtimes\!T)$ via
\[ \hat{\gamma}(g,t) = \gamma(g)\gamma(t) \ \ \text{and} \ \ \tilde{\gamma}(g,t) = \gamma(g),\]
respectively. Given $M \in \modd G_r\!\rtimes\!T$, we now obtain
\begin{eqnarray*}
M^{(\id_{kG_r}\ast\zeta_\ell^{-1})}[-(p^r\!-\!1)\lambda_G|_T] & \cong & (M\!\otimes_k\!k_{\tilde{\lambda}_G})\!\otimes_k\!k_{-(p^r-1)\lambda_G|_T}
\cong (M\!\otimes_k\!k_{\hat{\lambda}_G})\!\otimes_k\!k_{-p^r\lambda_G|_T} \\
& \cong & (M\!\otimes_k\!k_{\hat{\lambda}_G})[-p^r\lambda_G|_T].
\end{eqnarray*}
Let $\omega : G_r\!\rtimes\!T \longrightarrow G_rT$ be the canonical quotient map. Since $\hat{\lambda}_G(t^{-1},t) = 1$ for every $t \in T_r$, we have
\[ (\lambda_G|_{G_rT})\circ \omega = \hat{\lambda}_G.\]
As $\modd G_rT$ is a sum of blocks of $\modd G_r\!\rtimes\!T$, it follows that
\[ \mathcal{N}(M) \cong ( M\!\otimes_k\!k_{\lambda_G|_{G_rT}})[-p^r\lambda_G|_T]\]
for every $M \in \modd G_rT$. \end{proof}
\noindent
For future reference, we record the following result:
\begin{Cor} \label{HA3} Suppose that $G$ is a reduced group scheme with maximal torus $T \subseteq G$ and Lie algebra $\mathfrak{g} = \bigoplus_{\alpha \in \Psi\cup\{0\}}\mathfrak{g}_\alpha$.
{\rm (1)} \ If $\dim_k\mathfrak{g}_\alpha = \dim_k\mathfrak{g}_{-\alpha}$ for every $\alpha \in \Psi$, then $\tau_{G_rT} = \Omega^2_{G_rT}$.
{\rm (2)} \ If $G$ is reductive, then $\tau_{G_rT} = \Omega^2_{G_rT}$.\end{Cor}
\begin{proof} (1) By assumption, we have $\lambda_G|_T \equiv 1$, so that $T \subseteq \ker \lambda_G$. Let $Z_G(T)$ be the Cartan subgroup of $G$ defined by $T$. According to
\cite[(7.2.10)]{Sp}, the group $Z_G(T)$ is connected and nilpotent, and \cite[(6.8)]{Sp} implies that $Z_G(T) \subseteq \ker \lambda_G$. As all Cartan subgroups of $G$ are conjugate,
their union $U$ is also contained in $\ker \lambda_G$. By virtue of \cite[(7.3.3)]{Sp}, the set $U$ is dense in $G$, so that $\lambda_G \equiv 1$. Our assertion now follows from
Proposition \ref{HA2}.
(2) This is a direct consequence of (1) and \cite[(II.1)]{Ja3}. \end{proof}
\section{Complexity and the Auslander-Reiten Quiver $\Gamma_s(G_rT)$}\label{S:AR}
As before, we fix a smooth group scheme $G$ as well as a maximal torus $T \subseteq G$. The set of roots of $G$ relative to $T$ will be denoted $\Psi$. Recall that the category $\modd
G_rT$ of finite-dimensional $G_rT$-modules is a Frobenius category, whose Heller operator is denoted $\Omega_{G_rT}$. By work of Gordon and Green \cite{GG1}, the forgetful functor $\mathfrak{F}
: \modd G_rT \longrightarrow \modd G_r$ preserves and reflects projectives and indecomposables, respectively. Moreover, we have $\Omega^n_{G_r}\circ \mathfrak{F} = \mathfrak{F} \circ \Omega^n_{G_rT}$ for all $n
\in \mathbb{Z}$, and the fiber $\mathfrak{F}^{-1}(\mathfrak{F}(M))$, defined by an indecomposable $G_rT$-module $M$, consists of the shifts $\{M\!\otimes_k\!k_{p^r\lambda}\ ; \ \lambda \in X(T)\}$, with
different shifts giving non-isomorphic modules (see \cite{Fa5,FR} for more details). When no ambiguity can arise, we will often suppress the functor $\mathfrak{F}$.
\subsection{Modules of complexity $\bf{1}$} Modules of complexity $1$ play an important r\^ole in the determination of the Auslander-Reiten quiver of a finite group scheme. Our study of
the stable AR-quiver of $\modd G_rT$ also necessitates some knowledge concerning such modules. Since $\modd G_rT$ has projective covers, we have the concept of a minimal projective
resolution, so that we can speak of the complexity $\cx_{G_rT}(M)$ of a $G_rT$-module $M$. Note that $\cx_{G_r}(\mathfrak{F}(M)) = \cx_{G_rT}(M)$ for every $M \in \modd G_rT$.
Recall that the conjugation action of $T$ on $G_r$ induces an operation of $T$ on the variety $V_r(G)$. Standard arguments then show that the rank varieties of $G_rT$-modules are
$T$-invariant subvarieties of $V_r(G)$.
\begin{Thm} \label{MC1} Let $M$ be an indecomposable $G_rT$-module such that $\cx_{G_rT}(M) = 1$. Then there exists a unique $\alpha \in \Psi\cup\{0\}$ such that
\[ \Omega^{2p^{r-\ph(M)}}_{G_rT}(M) \cong M\!\otimes_k\! k_{p^r\alpha}.\] \end{Thm}
\begin{proof} In view of \cite[(3.2)]{GG1}, the module $M|_{G_r}$ is indecomposable, and we have $\cx_{G_r}(M) = \cx_{G_rT}(M) = 1$. Since $M$ is a $G_rT$-module, the variety
$V_r(G)_M \subseteq V_r(G)$ is $T$-invariant. As a result, the subgroup $\mathcal{U}_M \subseteq G_r$, provided by Theorem \ref{PH3}, is also $T$-invariant. Thanks to Theorem \ref{PH3}, we
have $s := \ph(M) = \ph_{\mathcal{U}_M}(M)$.
According to Proposition \ref{PH4}, there exist $\alpha \in \Psi \cup \{0\}$ and $\zeta \in (\HH^{2p^{r-s}}(G_r,k)_{\rm red})_{-p^r\alpha}$ such that $Z(\zeta) \cap \mathcal{V}_{G_r}(M)
\subsetneq \mathcal{V}_{G_r}(M)$. Since the module $M|_{G_r}$ is indecomposable, the one-dimensional variety $\mathcal{V}_{G_r}(M)$ is a line (cf.\ \cite[(7.7)]{SFB2}). Consequently, $Z(\zeta) \cap
\mathcal{V}_{G_r}(M) = \{0\}$.
In analogy with \S\ref{S:PH}, the cohomology class $\zeta$ can be interpreted as an element of the weight space $\Hom_{G_r}(\Omega^{2p^{r-s}}_{G_rT}(k),k)_{-p^r\alpha}$, or
equivalently, as a non-zero $G_rT$-linear map $\hat{\zeta} : \Omega^{2p^{r-s}}_{G_rT}(k)\!\otimes_k\! k_{-p^r\alpha} \longrightarrow k$, see \cite[(I.6.9(5))]{Ja3}. There results an exact sequence
\[ (0) \longrightarrow \widehat{L}_\zeta \longrightarrow \Omega^{2p^{r-s}}_{G_rT}(k)\!\otimes_k\!k_{-p^r\alpha} \stackrel{\hat{\zeta}}{\longrightarrow} k \longrightarrow (0)\]
of $G_rT$-modules. Since $\mathfrak{F}(\widehat{L}_\zeta) = L_\zeta$, we have $\mathcal{V}_{G_r}(\widehat{L}_\zeta\!\otimes_k\!M) = Z(\zeta)\cap \mathcal{V}_{G_r}(M) = \{0\}$, so that the $G_rT$-module
$\widehat{L}_\zeta\!\otimes_k\!M$ is projective. The arguments of (\ref{PH2}) now yield
\[ \Omega^{2p^{r-s}}_{G_rT}(M)\!\otimes_k\! k_{-p^r\alpha} \cong M,\]
as desired.
If we also have an isomorphism
\[ \Omega^{2p^{r-\ph(M)}}_{G_rT}(M) \cong M\!\otimes_k\! k_{p^r\beta}\]
for some $\beta \in \Psi\cup \{0\}$, then the shifts $M[p^r\alpha]$ and $M[p^r\beta]$ of the indecomposable $X(T)$-graded module $M$ coincide, and \cite[(4.1)]{GG1} in conjunction
with $X(T)$ being torsion-free yields $\alpha = \beta$. \end{proof}
\begin{Remark} For $r=1$ our result specializes to \cite[(2.4(2))]{Fa5}. \end{Remark}
\noindent
Suppose that $G$ is reductive. For every root $\alpha \in \Psi$, there exists a subgroup $U_\alpha \subseteq G$ on which $T$ acts via $\alpha$. The group $U_\alpha$ is isomorphic to
the additive group $\mathbb{G}_{a}$ and it is customarily referred to as the {\it root subgroup} of $\alpha$.
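For $G = \SL(2)$, for instance, the root subgroups associated to the positive and the negative root are the subgroups of upper and lower triangular unipotent matrices, respectively.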
An indecomposable $G_rT$-module $M$ is called {\it periodic}, provided there is an isomorphism $\Omega^m_{G_rT}(M) \cong M$ for some $m \in \mathbb{N}$.
\begin{Cor} \label{MC2} Suppose that $G$ is reductive. Let $M$ be an indecomposable $G_rT$-module. Then the following statements hold:
{\rm (1)} \ If $\cx_{G_rT}(M)=1$, then there exists a unique root $\alpha \in \Psi$ such that $\Omega^{2p^{r-\ph(M)}}_{G_rT}(M) \cong M\!\otimes_k\! k_{p^r\alpha}$.
{\rm (2)} \ The module $M$ is not periodic. \end{Cor}
\begin{proof} (1) Since $M$ is not projective, the Main Theorem of \cite{CPS} provides a root $\alpha \in \Psi$ such that $M|_{(U_\alpha)_r}$ is not projective. Hence we have $\mathcal{U}_M
= (U_\alpha)_s$ in the proof of Theorem \ref{MC1}.
(2) Assume to the contrary that there exists a natural number $m>0$ such that
\[ \Omega^m_{G_rT}(M) \cong M.\]
Then we have $\cx_{G_rT}(M) = 1$, and part (1) provides a root $\alpha \in \Psi$ and a non-negative integer $\ell$ such that
\[ \Omega^{2p^\ell}_{G_rT}(M) \cong M\!\otimes_k\!k_{p^r\alpha}.\]
This implies
\[ M \cong \Omega^{2mp^\ell}_{G_rT}(M) \cong M\!\otimes_k\!k_{mp^r\alpha},\]
which, by our above remarks, is only possible for $m=0$. \end{proof}
\noindent
As noted in Section \ref{S:PH}, the number $2p^{r-\ph(M)}$ is in general only an upper bound for the period of a periodic module. In the classical context of reductive groups, however, the
periods of restrictions of $G_rT$-modules of complexity $1$ are completely determined by their projective heights.
Let $G$ be reductive. Given a root $\alpha \in \Psi$, we denote by $\alpha^\vee$ the corresponding {\it coroot}, see \cite[(II.1.3)]{Ja3}. As usual, $\rho$ denotes the half-sum of the
positive roots.
\begin{Thm} \label{MC3} Suppose that $G$ is reductive. Let $M \in \modd G_rT$ be indecomposable of complexity $\cx_{G_rT}(M) = 1$. Then $M|_{G_r}$ is periodic with period
${\rm per}(M) = 2p^{r-\ph(M)}$. \end{Thm}
\begin{proof} Let $s$ be the period of the indecomposable module $M|_{G_r}$. By Theorem \ref{MC1}, we have
\[ \Omega_{G_r}^{2p^{r-\ph(M)}}(M) \cong M,\]
so that there exists $\ell \in \mathbb{N}$ with $s\ell = 2p^{r-\ph(M)}$. On the other hand, $M$ and $\Omega^s_{G_rT}(M)$ are indecomposable $G_rT$-modules such that
\[ \mathfrak{F}(\Omega^s_{G_rT}(M)) \cong \Omega^s_{G_r}(\mathfrak{F}(M)) \cong \mathfrak{F}(M),\]
and \cite[(4.1)]{GG1} provides $\lambda \in X(T)$ such that
\[ \Omega^s_{G_rT}(M) \cong M\!\otimes_k\!k_\lambda.\]
Since $M$ and $M\!\otimes\!k_\lambda$ both belong to $\modd G_rT$, there exists $\gamma \in X(T)$ such that $\lambda = p^r\gamma$. Applying Theorem \ref{MC1} again, we obtain
\[ M\!\otimes_k\!k_{p^r\alpha} \cong \Omega^{2p^{r-\ph(M)}}_{G_rT}(M) \cong M\!\otimes_k\!k_{\ell p^r\gamma},\]
so that \cite[(4.1)]{GG1} implies $\alpha = \ell \gamma$. Let $W$ be the Weyl group of the root system $\Psi$. General theory (cf.\ \cite[(1.5)]{Hu3}) provides an element $w \in W$ such
that $\alpha_i := w(\alpha)$ is simple. If $\Psi$ is not a union of copies of $A_1$, then there exists a simple root $\alpha_j$ such that $\alpha_i(\alpha_j^\vee) = -1$. Thus, $-1 =
w(\gamma)(\alpha_j^\vee)\ell$, proving that $\ell=1$. Consequently, $s= 2p^{r-\ph(M)}$, as desired.
It remains to consider the case, where the connected components of the root system all have type $A_1$. In that case, we have
\[ 2 = \alpha(\alpha^\vee) = \gamma(\alpha^\vee)\ell,\]
so that $\ell \in \{1,2\}$.
If $\ell = 2$, then $\gamma = \frac{1}{2}\alpha \not \in \mathbb{Z}\Psi$. Since the $G_rT$-modules $M$ and $N:=\Omega_{G_rT}^{p^{r-\ph(M)}}(M)$ are indecomposable, they belong to the
same block of $\modd G_rT$, see \cite[(7.1)]{Ja3}. Consequently, all their composition factors belong to the same linkage class. Let $W_p$ be the affine Weyl group and denote by
\[ w\boldsymbol{.} \lambda := w(\lambda\!+\!\rho)-\rho \ \ \ \ \forall \ w \in W_p,\, \lambda \in X(T)\]
the dot action of $W_p$ on $X(T)$, cf.\ \cite[(II.6.1)]{Ja3}. Suppose that $\widehat{L}_r(\mu)$ is a composition factor of $M$. According to \cite[(II.9.6)]{Ja3}, the module
$\widehat{L}_r(\mu\!+\!p^r\gamma) \cong \widehat{L}_r(\mu)\!\otimes\!k_{p^r\gamma}$ is a composition factor of $N \cong M\!\otimes_k\!k_{p^r\gamma}$. The linkage principle
\cite[(II.9.19)]{Ja3} implies that $\mu+p^r\gamma \in W_p\boldsymbol{.}\mu$. Since $w\boldsymbol{.}\lambda \equiv \lambda \ \ \modd(\mathbb{Z}\Psi)$ for all $w \in W_p$ and $\lambda \in X(T)$, we
conclude that $p^r\gamma = \frac{p^r}{2}\alpha \in \mathbb{Z}\Psi$. As $p \ge 3$, we have reached a contradiction.
As a result, $\ell=1$, so that we have $s=2p^{r-\ph(M)}$ in this case as well. \end{proof}
\noindent
We turn to the question of which periods can actually occur. Our approach necessitates the following realizability criterion.
\begin{Prop} \label{MC4} Suppose that $V \subseteq V_r(G)$ is a $T$-invariant conical closed subvariety. Then there exists a $G_rT$-module $M$ such that $V = V_r(G)_M$. \end{Prop}
\begin{proof} Thanks to \cite[(6.8)]{SFB2}, it suffices to establish the corresponding result for support varieties. Since $T_r$ acts trivially on $\HH^\bullet(G_r,k)$ (cf.\ \cite[(I.6.7)]{Ja3}), the $T$-action on $\HH^\bullet(G_r,k)$ gives rise to the following decomposition
\[ \HH^\bullet(G_r,k) = \bigoplus_{n\ge 0}\HH^{2n}(G_r,k) = \bigoplus_{n\ge 0}\bigoplus_{\lambda \in X(T)} \HH^{2n}(G_r,k)_{p^r\lambda}.\]
If $V\subseteq \mathcal{V}_{G_r}(k)$ is a conical, $T$-invariant closed subvariety, then there exists a homogeneous $T$-invariant ideal $I_V \unlhd \HH^\bullet(G_r,k)$ such that
\[ V = Z(I_V).\]
We let $\zeta_1, \ldots, \zeta_t$ be homogeneous generators of $I_V$, that is, $\zeta_i \in \HH^{2n_i}(G_r,k)_{p^r\lambda_i}$ for suitable $n_i\ge 0$ and $\lambda_i \in X(T)$. As noted
earlier, each $\zeta_i$ corresponds to a map $\hat{\zeta}_i : \Omega^{2n_i}_{G_rT}(k)\!\otimes_k\!k_{-p^r\lambda_i} \longrightarrow k$, whose kernel $\widehat{L}_{\zeta_i} := \ker \hat{\zeta}_i
\in \modd G_rT$ has support
\[ \mathcal{V}_{G_r}(\widehat{L}_{\zeta_i}) = Z(\zeta_i),\]
see \cite[(7.5)]{SFB2}. The result now follows from the tensor product theorem \cite[(7.2)]{SFB2}. \end{proof}
\begin{Cor} \label{MC5} Let $G$ be reductive. Then the following statements hold:
{\rm (1)} \ Given $s \in \{0,\ldots,r\!-\!1\}$, there exists an indecomposable, periodic $G_r$-module $M$, whose period equals $2p^s$.
{\rm (2)} \ The stable AR-components $\Theta \subseteq \Gamma_s(G_r)$ containing a module of complexity $1$ are precisely of the form $\mathbb{Z}[A_\infty]/\langle \tau^{p^s}\rangle$,
where $s \in \{0,\ldots,r\!-\!1\}$.\end{Cor}
\begin{proof} (1) Let $\alpha \in \Psi$ be a root, $U_\alpha \subseteq G$ be the corresponding root subgroup. We consider the subgroup
\[ \mathcal{U} := (U_\alpha)_{r-s} \subseteq G_r\]
and note that $\mathcal{U} \cong \mathbb{G}_{a(r-s)}$ is a $T$-invariant elementary abelian subgroup of height $\height(\mathcal{U}) = r\!-\!s$. Let $\varphi : \mathbb{G}_{a(r)} \longrightarrow \mathcal{U}$ be the map such that $\im
\varphi = \mathcal{U}$. Since $T$ acts on $\mathcal{U}$ via the character $\alpha$, the one-dimensional closed subvariety
\[ V := \{ \gamma\boldsymbol{.} \varphi \ ; \ \gamma \in k^\times\} \cup \{0\}\]
of $V_r(G)$ is $T$-invariant. Proposition \ref{MC4} now provides $N \in \modd G_rT$ such that $V_r(G)_N = V$. Since $V$ is irreducible, we have $V_r(G)_M =V$ for a suitable indecomposable constituent $M$ of $N$. According to Theorem \ref{MC3}, the module $M|_{G_r}$ is periodic, with period $2p^{r-\height(\mathcal{U})}=2p^s$.
(2) Since $kG_r$ is symmetric, we have $\tau_{G_r} \cong \Omega^2_{G_r}$. A consecutive application of \cite[(2.1)]{Fa3}, \cite[(4.1)]{Fa3} and \cite[(4.4)]{Fa5} shows that $\Theta$ is
of the form $\mathbb{Z}[A_\infty]/\langle \tau^{p^s}\rangle$. Part (1) implies that for every $s \in\{0,\ldots,r\!-\!1\}$ there exists an infinite tube of rank $p^s$. \end{proof}
\begin{Remark} The example of the groups $\SL(2)_1T_r$ with $r\ge 3$ shows that for infinitesimal groups that are not Frobenius kernels of smooth groups, the ranks of tubes may be more
restricted, see \cite[(5.6)]{FV2}. \end{Remark}
\subsection{Webb's Theorem} Results by Happel-Preiser-Ringel \cite{HPR} show that the presence of so-called {\it subadditive functions} imposes constraints on the structure of the tree class of a connected stable representation quiver. This approach was first effectively employed by Webb \cite{We} in his determination of the tree classes for AR-components of group algebras of
finite groups. We shall establish an analogue for $\modd G_rT$, with a refinement for the case, where $G$ is reductive.
Let $G$ be a smooth algebraic group scheme. In the sequel, we let $\Gamma_s(G_rT)$ be the stable Auslander-Reiten quiver of the Frobenius category $\modd G_rT$. For a component $\Theta
\subseteq \Gamma_s(G_rT)$, we have
\[ V_r(G)_{\mathfrak{F}(M)} = V_r(G)_{\mathfrak{F}(N)} \ \ \ \ \ \text{for all} \ M,N \in \Theta.\]
Accordingly, we can attach a variety $V_r(G)_\Theta$ to the component $\Theta$. By combining this fact with results by Happel-Preiser-Ringel \cite{HPR} one obtains:
\begin{Prop}[cf.\ \cite{Fa5}] \label{WT1} Let $\Theta \subseteq \Gamma_s(G_rT)$ be a component. Then the tree class $\bar{T}_\Theta$ is a simply laced finite or infinite Dynkin diagram,
a simply laced Euclidean diagram, or $\tilde{A}_{12}$.
$\square$ \end{Prop}
\begin{Prop} \label{WT2} Suppose that $G$ is reductive, and let $\Theta \subseteq \Gamma_s(G_rT)$ be a component such that $\dim V_r(G)_\Theta \ne 2$. Then $\Theta \cong
\mathbb{Z}[A_\infty]$. \end{Prop}
\begin{proof} A consecutive application of Corollary \ref{HA3} and Corollary \ref{MC2} shows that $\modd G_rT$ has no $\tau_{G_rT}$-periodic modules. We may thus adopt the arguments
of \cite[(3.2)]{Fa5}. \end{proof}
\noindent
In case the underlying group is reductive, only three types of components can occur:
\begin{Thm} \label{WT3} Let $G$ be reductive, with $\Char(k)=p\ge 3$. Suppose that $\Theta \subseteq \Gamma_s(G_rT)$ is a component.
{\rm (1)} \ If $\Theta$ contains a simple module $S$ of complexity $\cx_{G_rT}(S)=2$, then $\Theta \cong \mathbb{Z}[A_\infty]$ or $\mathbb{Z}[A_\infty^\infty]$.
{\rm (2)} \ We have $\Theta \cong \mathbb{Z}[A_\infty]$, $\mathbb{Z}[A_\infty^\infty]$, or $\mathbb{Z}[D_\infty]$. \end{Thm}
\begin{proof} (1) In view of \cite[(1.3)]{GG1}, the module $S|_{G_r}$ is simple and of complexity $\cx_{G_r}(S) = 2$. In this situation \cite[(7.1)]{Fa2} provides a decomposition $G =
HK$ of $G$ into an almost direct product such that
(a) \ $\mathfrak{g} = \Lie(H) \oplus \Lie(K)$ with $\Lie(H) = \mathfrak{s}l(2)$, and
(b) \ $V_r(G)_M = V_r(H)_M$ and $M|_{K_r}$ is projective for every $M \in \Theta$.
\noindent
Since $H$ is an almost simple group of rank $1$, it follows that the central subgroup $H\cap K \subseteq H$ is either trivial or isomorphic to $\mu_{(2)}$ (cf.\ \cite[(8.2.4)]{Sp}). Hence, if
$H\cap K \ne e_k$, then there exists a character $\lambda \in X(H\cap K) \cong \mathbb{Z}/(2)$ such that $H\cap K$ acts on every vertex $M \in \Theta$ via $\lambda$. As $H\cap K$ is contained
in the maximal torus $T$, we can find $\gamma \in X(T)$ with $\gamma|_{H\cap K} = \lambda$. In view of $p$ being odd, we also have $p^r\gamma|_{H\cap K} = \lambda$.
Consequently, $H\cap K$ acts trivially on every vertex of the shifted component $\Theta[-p^r\gamma] \cong \Theta$.
Let $G' := G/(H\cap K)$, and consider its maximal torus $T' := T/(H\cap K)$ (cf.\ \cite[(7.2.7)]{Sp}). According to \cite[(II.9.7)]{Ja3}, there results an exact sequence
\[ e_k \longrightarrow H\cap K \longrightarrow G_rT \longrightarrow G'_rT' \longrightarrow e_k,\]
with $\modd G'_rT'$ being a sum of blocks of $\modd G_rT$. By the above observation, a suitable shift of $\Theta$ belongs to $\modd (G'_rT')$. Setting $H' := H/(H\cap K)$ and
$K' := K/(H\cap K)$, we have $G' = H' \times K'$, while (a) and (b) continue to hold for $H'$ and $K'$. As a result, we may assume in addition that
(c) \ $G = H \times K$.
\noindent
By general theory, there exist maximal tori $T_H \subseteq H$ and $T_K \subseteq K$ such that $T = T_H\times T_K$.
Consequently, the isomorphism
\[kG_r \cong kH_r\!\otimes_k\!kK_r\]
induced by (c) is compatible with the $T$-action. Thus, the outer tensor product defines a functor
\[ \modd H_rT_H \times \modd K_rT_K \longrightarrow \modd G_rT \ \ ; \ \ (M,N) \mapsto M\!\otimes_k\!N.\]
Let $\mathcal{B} \subseteq kG_r$ be the block containing the simple $G_r$-module $S$, so that $\Theta \subseteq \Gamma_s(\mathcal{B} T)$. Owing to \cite[Section 10.E]{CR}, the $G_r$-module $S$ is an
outer tensor product
\[ S \cong S_1\!\otimes_k\!S_2\]
with a simple projective $K_r$-module $S_2$. In view of \cite[(II.9.6)]{Ja3}, we may assume that $S_1 \in \modd H_rT$ and $S_2\in \modd K_rT$. It now follows from \cite[(4.1)]{GG1}
that
\[S[\gamma] \cong S_1\!\otimes_k\!S_2\]
for a suitable $\gamma \in X(T)$. (Since the right-hand module lies in $\modd G_rT$, we actually have $\gamma \in p^rX(T)$.)
Letting $\mathcal{B}_1 \subseteq kH_r$ be the block containing $S_1$, we obtain inverse equivalences
\[ \modd \mathcal{B}_1 \longrightarrow \modd \mathcal{B} \ \ ; \ \ X \mapsto X\!\otimes_k\!S_2 \]
and
\[ \modd \mathcal{B} \longrightarrow \modd \mathcal{B}_1 \ \ ; \ \ Y \mapsto \Hom_{kK_r}(S_2,Y),\]
so that the first functor induces an equivalence
\[ \modd \mathcal{B}_1T_H \longrightarrow \modd \mathcal{B} T \ \ ; \ \ X \mapsto X\!\otimes_k\!S_2. \]
Thus, $\Theta$ is isomorphic to a component $\Theta_1 \subseteq \Gamma_s(H_rT_H)$, which, by (b), has a two-dimensional rank variety.
Since $H$ has rank $1$, we have $H \cong \SL(2), \PSL(2)$. Since $\modd \PSL(2)_{r-s}T'$ is a sum of blocks of $\modd \SL(2)_{r-s}T$, it suffices to address the case, where $H = \SL(2)$.
We shall write $T := T_H$. As noted in Section $5$, the block $\mathcal{B}_1$ is of the form $\mathcal{B}^{(r)}_{i,s}$ and Lemma \ref{Rep1} provides a Morita equivalence
\[ \modd \mathcal{B}_{i,0}^{(r-s)} \longrightarrow \modd \mathcal{B}^{(r)}_{i,s} \ \ ; \ \ M \mapsto \mathrm{St}_s\!\otimes_k\!M^{[s]}.\]
Thanks to \cite[(II.10.4)]{Ja3}, this functor and its inverse take $\SL(2)_{r-s}T$-modules to $\SL(2)_rT$-modules, so that $\Theta_1$ is isomorphic to a component $\Theta_2$ of
$\Gamma_s(\SL(2)_{r-s}T)$, whose modules have complexity $2$.
If $r\!-\!s=1$, then standard $\SL(2)_1$-theory (see \cite[\S3, Example]{Fa5}) implies $\Theta_2 \cong \mathbb{Z}[A_\infty^\infty]$. Now assume that $r\!-\!s \ge 2$ and let
$\widehat{L}_{r-s}(\lambda)$ be the simple module belonging to $\Theta_2$ which corresponds to $S$. According to \cite[(3.3)]{FR}, the forgetful functor $\mathfrak{F} : \modd \SL(2)_{r-s}T \longrightarrow
\modd \SL(2)_{r-s}$ takes our component $\Theta_2$ to the component $\Psi_2 := \mathfrak{F}(\Theta_2) \subseteq \Gamma_s(\SL(2)_{r-s})$ containing the simple module $L_{r-s}(\lambda)$.
Since $r-s\ge 2$, Lemma \ref{Rep3} shows that $\Ht_{r-s}(\lambda)$ is indecomposable. From the standard almost split sequence
\[ (0) \longrightarrow \Rad(P_{r-s}(\lambda)) \longrightarrow \Ht_{r-s}(\lambda) \oplus P_{r-s}(\lambda)\longrightarrow P_{r-s}(\lambda)/\Soc(P_{r-s}(\lambda)) \longrightarrow (0)\]
we see that $P_{r-s}(\lambda)/\Soc(P_{r-s}(\lambda))$ has exactly one predecessor. Consequently, the module $L_{r-s}(\lambda) \cong \Omega_{\SL(2)_{r-s}}(
P_{r-s}(\lambda)/\Soc(P_{r-s}(\lambda)))$ enjoys the same property and Proposition \ref{Rep5} guarantees that $\Psi_2 \cong \mathbb{Z}[A_\infty]$. It follows that $\Theta_2 \cong
\mathbb{Z}[A_\infty], \mathbb{Z}[A_\infty^\infty], \mathbb{Z}[D_\infty]$. Let $\varphi \in V_{r-s}(G)_{\Theta_2}$ and consider the module $M_\varphi := k\SL(2)_{r-s}\!\otimes_{k[u_{r-1}]}k$. As argued in
\cite[(3.2)]{Fa5},
\[ \delta : \Psi_2 \longrightarrow \mathbb{N} \ \ ; \ \ X \mapsto \dim_k\Ext^1_{\SL(2)_{r-s}}(M_\varphi,X)\]
is a $\tau_{\SL(2)_{r-s}}$-invariant subadditive function such that $\delta \circ \mathfrak{F}$ is a $\tau_{\SL(2)_{r-s}T}$-invariant subadditive function on $\Theta_2$. If $\bar{T}_{\Theta_2} \in \{
A_\infty^\infty, D_\infty\}$, then \cite[(VII.3.4)]{ARS} and \cite[(VII.3.5)]{ARS} show
that $\delta \circ \mathfrak{F}$, and thereby $\delta$, is bounded and additive. Since $\mathbb{Z}[A_\infty]$ does not afford such a function, it follows that $\Theta_2 \cong \mathbb{Z}[A_\infty]$, as desired.
(2) In view of Proposition \ref{WT2}, we may assume that $\dim V_r(G)_\Theta = 2$. By Proposition \ref{WT1}, it remains to rule out the case, where $\Theta \cong
\mathbb{Z}[\tilde{A}_{pq}]$ or where $\bar{T}_\Theta$ is Euclidean. In these cases, the component $\Theta$ has only finitely many $\tau_{G_rT}$-orbits, so that the component $\Psi :=
\mathfrak{F}(\Theta)$ also enjoys this property. It now follows from \cite[(4.1)]{Fa3} that $\Psi \cong \mathbb{Z}[\tilde{A}_{12}]$. Thanks to \cite[Thm.A]{We}, we may assume that $\Theta$ contains a
simple module, and (1) shows that the above-mentioned cases cannot occur. \end{proof}
\begin{Remark} It is not known whether components of tree class $D_\infty$ actually occur. According to \cite[(4.5)]{FR}, components containing baby Verma modules have tree class
$A_\infty$. \end{Remark}
\subsection{Components containing Verma modules} Throughout, $G$ denotes a smooth reductive group scheme with maximal torus $T \subseteq G$ and root system $\Psi$. By picking
a Borel subgroup $B \subseteq G$ containing $T$ we obtain the sets $\Psi^+$ and $\Sigma$ of positive and simple roots, respectively. Given $\lambda \in X(T)$, we denote by $k_\lambda$
the corresponding one-dimensional $T$-module. Since $B = UT$ is a product of $T$ and the unipotent radical $U \subseteq B$, this module is also a $B$-module. Given $r\in \mathbb{N}$, the adjoint
representation endows the induced $\Dist(G_r)$-module
\[ Z_r(\lambda) := \Dist(G_r)\!\otimes_{\Dist(B_r)}\!k_\lambda\]
with the structure of a $G_rT$-module. We denote this module by $\widehat{Z}_r(\lambda)$ and refer to $Z_r(\lambda)$ and $\widehat{Z}_r(\lambda)$ as {\it baby Verma modules}
defined by $\lambda$. Given $\lambda \in X(T)$, the modules $Z_r(\lambda)$ and $\widehat{Z}_r(\lambda)$ have simple tops $L_r(\lambda)$ and $\widehat{L}_r(\lambda)$, and all
simple objects of $\modd G_r$ and $\modd G_rT$ arise in this fashion. Moreover, every simple $G_rT$-module $\widehat{L}_r(\lambda)$ has a projective cover $\widehat{P}_r(\lambda)$,
see \cite[\S II.3, \S II.9, \S II.11]{Ja3} for more details. In what follows, $B^-$ denotes the Borel subgroup opposite to $B$.
The main result of this section, Theorem \ref{VM3}, employs support varieties to study the Heller translates of the baby Verma modules $\widehat{Z}_r(\lambda)$. This is motivated by the
Auslander-Reiten theory of the Frobenius categories $\modd G_rT$ and $\modd G_r$, where non-projective Verma modules are quasi-simple (cf.\ \cite[(4.4),(4.5)]{FR}). Our result implies
that the connected components of the stable Auslander-Reiten quiver $\Gamma_s(G_r)$ contain at most one baby Verma module $Z_r(\lambda)$.
We let $\mathcal{F}(\Delta) \subseteq \modd G_rT$ be the subcategory of {\it $\Delta$-good} modules. By definition, every object $M \in \mathcal{F}(\Delta)$ possesses a filtration, a so-called {\it $\widehat{Z}_r$-filtration}, whose factors are baby Verma modules $\widehat{Z}_r(\lambda)$. The filtration multiplicities $[M\!:\!\widehat{Z}_r(\lambda)]$ do not depend on the
choice of the filtration, and each projective indecomposable module $\widehat{P}_r(\lambda)$ belongs to $\mathcal{F}(\Delta)$, with its filtration multiplicities being linked to the Jordan-H\"older multiplicities by BGG reciprocity:
\[ [\widehat{P}_r(\lambda)\!:\!\widehat{Z}_r(\mu)] = [\widehat{Z}_r(\mu)\!:\!\widehat{L}_r(\lambda)],\]
see \cite[\S II.11]{Ja3}. Since $[\widehat{Z}_r(\lambda)\!:\!\widehat{L}_r(\lambda)] = 1$, this readily implies the following:
\begin{Lem} \label{VM1} Let $m>0$. Then $\Omega^m_{G_rT}(\widehat{Z}_r(\lambda)) \in \mathcal{F}(\Delta)$ with filtration factors $\widehat{Z}_r(\mu)$ for $\mu>\lambda$.
$\square$ \end{Lem}
\noindent
We require the following subsidiary result concerning the subset $\wt(M) \subseteq X(T)$ of weights of a $G_rT$-module $M$.
\begin{Lem} \label{VM2} Let $M$ be a $G_rT$-module such that $\wt(M) \subseteq \lambda + \mathbb{Z}\Psi$ for some $\lambda \in X(T)$. Then $\wt(\Omega_{G_rT}(M)) \subseteq
\lambda + \mathbb{Z}\Psi$. \end{Lem}
\begin{proof} We let $T$ act on $\Dist(G_r)$ via the adjoint representation and put $A := \Dist(U_r^-)\Dist(U_r)$. Then $A$ is a $T$-submodule of $\Dist(G_r)$ with $\wt(A) \subseteq \mathbb{Z}\Psi$ (cf.\ \cite[(II.1.19)]{Ja3}).
Given $\gamma \in X(T)$, we consider the $G_r$-module
\[P_\gamma := \Dist(G_r)\!\otimes_{\Dist(T_r)}\!k_\gamma.\]
In view of \cite[(II.1.12)]{Ja3}, the adjoint action of $T$ endows $P_\gamma$ with the structure of a $G_rT$-module such that
\[ P_\gamma|_T \cong A\!\otimes_k\!k_\gamma.\]
Frobenius reciprocity yields $\Ext^1_{G_r}(P_\gamma,-)\cong \Ext^1_{T_r}(k_\gamma,-) = 0$, so that each $P_\gamma$ is a projective $G_rT$-module with $\wt(P_\gamma) \subseteq
\gamma + \mathbb{Z}\Psi$, see \cite[(II.9.4)]{Ja3}. The canonical surjection $P_\gamma \longrightarrow \widehat{Z}_r(\gamma)$ induces a surjective map $P_\gamma \longrightarrow \widehat{L}_r(\gamma)$.
Consequently, the projective cover $\widehat{P}_r(\gamma)$ of $\widehat{L}_r(\gamma)$ has weights $\wt(\widehat{P}_r(\gamma)) \subseteq \gamma + \mathbb{Z}\Psi$.
Let $\widehat{P}(M)$ be the projective cover of the $G_rT$-module $M$. By the above, we have $\wt(\widehat{P}(M)) \subseteq \lambda +\mathbb{Z}\Psi$, whence
\[ \wt(\Omega_{G_rT}(M)) \subseteq \wt(\widehat{P}(M)) \subseteq \lambda +\mathbb{Z}\Psi,\]
as desired. \end{proof}
\begin{Thm} \label{VM3} Suppose that $G$ is defined over $\mathbb{F}_p$ and that $p$ is good for $G$. Let $\lambda, \mu \in X(T)$ be characters such that there exists $m >0$ with
\[ \Omega^{2m}_{G_rT}(\widehat{Z}_r(\lambda)) \cong \widehat{Z}_r(\mu).\]
Then the following statements hold:
{\rm (1)} \ We have $\dep(\lambda)=\dep(\mu) = r$, and there exists a simple root $\alpha \in \Sigma\setminus \Psi^r_\lambda$ such that $\mu = \lambda + mp^r\alpha$.
{\rm (2)} \ $\Omega^{2n}_{G_rT}(\widehat{Z}_r(\lambda)) \cong \widehat{Z}_r(\lambda\!+\!np^r\alpha)$ for all $n \in \mathbb{Z}$. \end{Thm}
\begin{proof} (1) We first assume that $\dep(\lambda) = 1$. According to \cite[(5.2)]{FR}, there exists a simple root $\alpha \in \Sigma\setminus \Psi_\lambda^1$ with
$\mathcal{V}_{(U_\alpha)_r}(k) \subseteq \mathcal{V}_{G_r}(\widehat{Z}_r(\lambda))$.
Proposition \ref{PH4}(2) provides an element $\zeta \in (\HH^2(G_r,k)_{\rm red})_{-p^r\alpha}$ such that $Z(\zeta)\cap \mathcal{V}_{G_r}(\widehat{Z}_r(\lambda)) \subsetneq
\mathcal{V}_{G_r}(\widehat{Z}_r(\lambda))$. Then $\eta := \zeta^m \in (\HH^{2m}(G_r,k)_{\rm red})_{-mp^r\alpha}$ also has this property, and there results a short exact sequence
\[ (0) \longrightarrow \widehat{L}_\eta \longrightarrow \Omega^{2m}_{G_rT}(k)\!\otimes_k\!k_{-mp^r\alpha} \stackrel{\hat{\eta}}{\longrightarrow} k \longrightarrow (0).\]
By tensoring this sequence with $\widehat{Z}_r(\lambda)$ while observing \cite[(II.9.2)]{Ja3}, we obtain a short exact sequence
\[ (\ast) \ \ \ \ \ \ \ \ (0) \longrightarrow \widehat{L}_\eta\! \otimes_k\!\widehat{Z}_r(\lambda) \stackrel{\binom{g_1}{g_2}}{\longrightarrow} \widehat{Z}_r(\mu\!-\!mp^r\alpha)\oplus
(\text{proj.})\stackrel{(f_1,f_2)}{\longrightarrow} \widehat{Z}_r(\lambda) \longrightarrow (0)\]
of $G_rT$-modules. In particular, we have
\[ \widehat{Z}_r(\lambda) = f_1(\widehat{Z}_r(\mu\!-\!mp^r\alpha)) + f_2((\text{proj.})).\]
Since the baby Verma module $\widehat{Z}_r(\lambda)$ has a simple top, at least one summand has to coincide with $\widehat{Z}_r(\lambda)$.
(a) \ {\it If $f_2(({\rm proj.})) \ne \widehat{Z}_r(\lambda)$, then $r=1$ and $\mu =\lambda+mp\alpha$}.
\noindent
In this case, we have $f_1(\widehat{Z}_r(\mu\!-\!mp^r\alpha)) = \widehat{Z}_r(\lambda)$, so that equality of dimensions implies
\[\widehat{Z}_r(\mu\!-\!mp^r\alpha) \cong \widehat{Z}_r(\lambda),\]
whence $\mu = \lambda+mp^r\alpha$. In view of \cite[(II.3.7(9))]{Ja3}, we thus have $Z_r(\mu) \cong Z_r(\lambda)$, while our assumption implies
\[ Z_r(\lambda) \cong Z_r(\mu) \cong \Omega^{2m}_{G_r}(Z_r(\lambda)).\]
Consequently, $\cx_{G_r}(Z_r(\lambda))=1$ and the inequality
\[ r = \dim \mathcal{V}_{(U_\alpha)_r}(k) \le \dim \mathcal{V}_{G_r}(Z_r(\lambda)) = 1\]
gives $r=1$ and $\mu = \lambda\!+\!mp\alpha$. \ \ \ \ $\Diamond$
\noindent
{\it In view of (a) we henceforth assume that $f_2(({\rm proj.})) = \widehat{Z}_r(\lambda)$}.
(b) \ {\it We have $\lambda-2(p^r-1)\rho \le \mu-mp^r\alpha \le \lambda$}.
\noindent
If $f_1 = 0$, then our exact sequence ($\ast$) yields
\[ \widehat{L}_\eta\!\otimes_k\!\widehat{Z}_r(\lambda) \cong \ker(0,f_2) \cong \widehat{Z}_r(\mu\!-\!mp^r\alpha)\oplus \Omega_{G_rT}(\widehat{Z}_r(\lambda))
\oplus (\text{proj.}),\]
whence
\begin{eqnarray*}
Z(\eta)\cap \mathcal{V}_{G_r}(\widehat{Z}_r(\lambda)) & = & \mathcal{V}_{G_r}(\widehat{Z}_r(\mu\!-\!mp^r\alpha)) \cup \mathcal{V}_{G_r}(\Omega_{G_rT}(\widehat{Z}_r(\lambda)))\\
& = & \mathcal{V}_{G_r}(\widehat{Z}_r(\mu)) \cup \mathcal{V}_{G_r}(\Omega_{G_rT}(\widehat{Z}_r(\lambda)))\\
& = & \mathcal{V}_{G_r}(\widehat{Z}_r(\lambda)),
\end{eqnarray*}
a contradiction. Thus, $f_1 \ne 0$ and $\widehat{Z}_r(\lambda)_{\mu-mp^r\alpha} \ne (0)$, so that \cite[(II.9.2(6))]{Ja3} implies $\lambda-2(p^r-1)\rho \le \mu-mp^r\alpha \le
\lambda$. \ \ \ \ $\Diamond$
(c) \ {\it There exists $0<n \le mp^r$ such that $\mu = \lambda+n\alpha$}.
\noindent
Since $m>0$, it readily follows from Lemma \ref{VM1} that $\mu>\lambda$. In view of (b), we therefore have
\[ \mu-mp^r\alpha \le \lambda < \mu\]
so that there exist non-negative integers $n_\beta$, $m_\beta$ with
\[ \mu = \lambda + \sum_{\beta \in \Sigma} n_\beta\beta \ \ \text{and} \ \ \lambda = \mu-mp^r\alpha + \sum_{\beta \in \Sigma}m_\beta \beta.\]
Consequently, $n_\alpha+m_\alpha = mp^r$ while $n_\beta + m_\beta = 0$ for every simple root $\beta \ne \alpha$. \ \ \ \ $\Diamond$
\noindent
We let $L$ be the Levi subgroup of $G$ that is defined by the simple root $\alpha$. The baby Verma module of $L_rT$, associated to the weight $\lambda \in X(T)$ will be denoted
$\widehat{Z}_r^L(\lambda)$.
(d) \ {\it We have $\Omega^{2m}_{L_rT}(\widehat{Z}_r^L(\lambda)) \cong \widehat{Z}_r^L(\mu)$}.
\noindent
Assuming $L\ne G$, we consider the triangular decomposition $G_r = N^{-}_rL_rN_r$ of $G_r$, see \cite[(II.3.2)]{Ja3}. For $\gamma \in X(T)$, we have an isomorphism
\[ \widehat{Z}_r( \gamma)|_{L_rT} \cong \Dist(N_r^-)_{\rm ad}\!\otimes _k\!\widehat{Z}_r^L(\gamma) \cong \widehat{Z}_r^L(\gamma)\oplus W(\gamma),\]
with $W(\gamma) := \Dist(N_r^-)_{\rm ad}^\dagger\!\otimes _k\!\widehat{Z}_r^L(\gamma)$ being defined via the augmentation ideal of $\Dist(N^-_r)$. The subscript indicates that
$L_rT$ acts via the adjoint representation (cf.\ \cite[(II.3.6(2))]{Ja3}). Consequently, (c) yields
\[ \wt(W(\mu)) \subseteq \bigcup_{\gamma \in X(T)\setminus (\mu +\mathbb{Z}\alpha)} \gamma + \mathbb{Z}\alpha = \bigcup_{\gamma \in X(T)\setminus (\lambda+\mathbb{Z}\alpha)} \gamma +
\mathbb{Z}\alpha.\]
General properties of the Heller operator give rise to
\[ \widehat{Z}_r^L(\mu) \oplus W(\mu) \cong \widehat{Z}_r(\mu)|_{L_rT} \cong \Omega^{2m}_{L_rT}(\widehat{Z}_r^L(\lambda))\oplus \Omega^{2m}_{L_rT}(W(\lambda))
\oplus (\text{proj.}).\]
According to \cite[(4.2.1)]{NPV}, we obtain
\[ \mathcal{V}_{L_r}(\widehat{Z}_r^L(\lambda)) = \mathcal{V}_{L_r}(\widehat{Z}_r(\lambda)) = \mathcal{V}_{G_r}(\widehat{Z}_r(\lambda))\cap \mathcal{V}_{L_r}(k) \supseteq \mathcal{V}_{(U_\alpha)_r}(k),\]
so that $\widehat{Z}_r^L(\lambda)$ is not projective and $\Omega^{2m}_{L_rT}(\widehat{Z}_r^L(\lambda))$ is indecomposable. Since $\wt(\widehat{Z}_r^L(\lambda)) \subseteq
\lambda\! +\!\mathbb{Z}\alpha$, Lemma \ref{VM2} ensures that $\wt(\Omega^{2m}_{L_rT}(\widehat{Z}_r^L(\lambda))) \subseteq \lambda + \mathbb{Z}\alpha$. As a result, the indecomposable
$L_rT$-module $\Omega^{2m}_{L_rT}(\widehat{Z}_r^L(\lambda))$ is not a direct summand of $W(\mu)$. The Theorem of Krull-Remak-Schmidt thus yields
$\Omega^{2m}_{L_rT}(\widehat{Z}_r^L(\lambda)) \cong \widehat{Z}_r^L(\mu)$. \ \ \ \ $\Diamond$
(e) \ {\it Let $\gamma \in X(T)$. If $\widehat{Z}_r^L(\gamma)$ is not projective, then $\widehat{Z}_r^L(\gamma)|_{L_1T}$ has no non-zero projective summands.}
\noindent
The semi-simple part of $L$ has rank $1$ and is therefore isomorphic to $\SL(2)$ or ${\rm PSL}(2)$, see \cite[(8.2.4)]{Sp}. (Since parabolic subgroups are connected (cf.\
\cite[(7.3.8)]{Sp}), so are Levi subgroups.) In view of \cite[(II.9.7)]{Ja3}, the category $\modd {\rm PSL}(2)_rT$ is a sum of blocks of $\modd \SL(2)_rT$ (see also \cite[(3.5)]{FR}). We may therefore assume without loss of generality that $L_rT \cong \SL(2)_rT$.
Let $\widehat{P}$ be a projective indecomposable $L_1T$-module, which is a direct summand of $\widehat{Z}_r^L(\gamma)|_{L_1T}$. If $\widehat{P}$ is not simple, then standard
$\SL(2)_1T$-theory (cf.\ \cite[(12.2)]{Hu4}) shows that $\widehat{P}$ possesses weights of multiplicity $\ge 2$. Since every weight of $\widehat{Z}_r^L(\gamma)|_{L_1T}$ has
multiplicity $1$, we conclude that $\widehat{P}$ is simple, and hence is of the form $\widehat{Z}^L_1(\omega)$ with $\langle \omega+\rho,\alpha^\vee\rangle \in p\mathbb{Z}$, see
\cite[(II.11.8)]{Ja3}. Writing $\gamma = \gamma_0 + p\gamma_1$ with $\gamma_0 \in X_1(T)$, it follows from \cite[(5.4)]{FR} that $\widehat{Z}_r^L(\gamma)|_{L_1}$ belongs to the block $\mathcal{B}_1(\omega) \subseteq \Dist(L_1)$, defined by $\omega$. Since $\mathcal{B}_1(\omega)$ is simple, another application of \cite[(5.4)]{FR} implies the projectivity of $\widehat{Z}_r^L(\gamma)|_{L_1}$, which, by \cite[(II.9.4)]{Ja3} and \cite[(II.11.8)]{Ja3}, yields a contradiction. \ \ \ \ $\Diamond$
(f) \ {\it We have $\mu = \lambda +pm\alpha$}.
\noindent
Owing to (d), the module $\widehat{Z}_r^L(\mu)$ is not projective. Standard properties of the Heller operator in conjunction with (e) thus yield
\[ \widehat{Z}_r^L(\mu)|_{L_1T} \cong \Omega^{2m}_{L_rT}(\widehat{Z}_r^L(\lambda))|_{L_1T} \cong \Omega^{2m}_{L_1T}(\widehat{Z}_r^L(\lambda)|_{L_1T}).\]
By the same token, every indecomposable summand of $\widehat{Z}_r^L(\lambda)|_{L_1T}$ is non-projective and hence of complexity $1$. Thanks to \cite[(2.4)]{Fa5}, we thus have
\[ \Omega^{2m}_{L_1T}(\widehat{Z}_r^L(\lambda)|_{L_1T}) \cong \widehat{Z}_r^L(\lambda)|_{L_1T}\!\otimes_k\!k_{pm\alpha},\]
whence
\[ \widehat{Z}_r^L(\mu)|_{L_1T} \cong \widehat{Z}_r^L(\lambda)|_{L_1T}\!\otimes_k\!k_{pm\alpha}.\]
Since the weights of the former module are bounded above by $\mu$, while those of the latter are $\le \lambda\!+\!pm\alpha$, our assertion follows. \ \ \ \ $\Diamond$
(g) \ {\it We have $\dep(\mu)=1$.}
\noindent
In light of (f), we have
\[ \langle \mu+\rho,\alpha^\vee\rangle = \langle \lambda +\rho,\alpha^\vee\rangle + pm\langle \alpha,\alpha^\vee\rangle \equiv \langle \lambda +\rho,\alpha^\vee\rangle \ \
\modd p\mathbb{Z}.\]
Since $ \langle \lambda +\rho,\alpha^\vee\rangle \not \in p\mathbb{Z}$, it follows that $\dep(\mu) = 1$. \ \ \ \ $\Diamond$
(h) \ {\it If $r>1$, then $m=1$.}
\noindent
Let $\dep_L(\lambda)$ denote the depth of $\lambda$, viewed as a weight of $L$. Since $\alpha$ is a simple root, we have
\[ \langle\rho_L,\alpha^\vee \rangle = 1 = \langle\rho,\alpha^\vee \rangle,\]
so that the choice of $\alpha$ implies $\dep_L(\lambda) = \dep(\lambda) = 1$. Thanks to (d), it suffices to verify the result for $L$, so that $\rho = \frac{1}{2}\alpha$. By (a), (b) and (f) it follows that
\[ \lambda -(p^r\!-\!1)\alpha \le \mu-mp^r\alpha = \lambda +mp\alpha -mp^r\alpha,\]
whence
\[ 1 \le p(m-(m\!-\!1)p^{r-1}).\]
As $r\ge 2$, this only holds for $m=1$. \ \ \ \ $\Diamond$
(i) \ {\it We have $r=1$.}
\noindent
Suppose that $r \ge 2$. Then (h) implies $m=1$. As before, we will be working with the Levi subgroup $L$, defined by the simple root $\alpha \in \Sigma$. Identifying weights with integers, we may assume that $\lambda \in \{0,\ldots,p^r\!-\!1\}$. Note that $\rho_L$ and $\alpha$ correspond to $1$ and $2$, respectively. As $r\ge 2$, a consecutive application of (a), (d) and (f) implies
\[ \Omega^2_{L_rT}(\widehat{Z}_r^L(\lambda)) \cong \widehat{Z}_r^L(\lambda\!+\!2p).\]
Let $\gamma := 2(p^r\!-\!1)-\lambda-2p$ and write $\gamma = \gamma_0+p^r\gamma_1$, where $\gamma_0 \in \{0,\ldots,p^r\!-\!1\}$. Thanks to \cite[(II.9.6(5))]{Ja3} and \cite[(II.9.7(1))]{Ja3}, we have
\[ \Soc_{L_rT}(\widehat{Z}_r^L(\lambda\!+\!2p)) \cong \widehat{L}_r(\gamma\!-\!2p^r\gamma_1),\]
so that the above leads to a short exact sequence
\[ (\ast\ast) \ \ \ \ \ \ \ \ \ \ (0) \longrightarrow \widehat{Z}_r^L(\lambda\!+\!2p) \longrightarrow \widehat{P}_r(\gamma\!-\!2p^r\gamma_1) \longrightarrow \widehat{P}_r(\lambda) \longrightarrow \widehat{Z}_r^L(\lambda) \longrightarrow (0)\]
of $L_rT$-modules. Two cases arise:
($i_a$) \ $\lambda+2p \le p^r-2$.
\noindent
Then we have $\gamma_0 = p^r-2-\lambda-2p$ and $\gamma_1=1$, so that $\gamma-2p^r = -(\lambda+2p+2)$. It follows from our sequence ($\ast\ast$) that $\widehat{Z}_r^L(-(\lambda+2p+2))$ is a filtration factor of $\widehat{P}_r(\lambda)$, so that BGG reciprocity \cite[(II.11.4)]{Ja3} implies
\[ -(\lambda+2p+2)> \lambda,\]
relative to the partial ordering given by the positive root, a contradiction.
($i_b$) \ $\lambda+2p \ge p^r-1$.
\noindent
Since $r\ge 2$, we obtain $\gamma_1 = 0$, so that
\[ \Soc_{L_rT}(\widehat{Z}_r^L(\lambda\!+\!2p)) \cong \widehat{L}_r(\gamma).\]
BGG reciprocity yields $\gamma>\lambda$, whence
\[ (\dagger) \ \ \ \ \ \ \ \ \ \ \ \ \lambda < p^r-p-1\]
relative to the ordering of the natural numbers. As $\widehat{Z}_r^L(\lambda\!+\!2p)$ is not projective, we actually have $\lambda+2p \ge p^r$, whence
\[ p^r-2p \le \lambda < p^r-p.\]
Standard $\SL(2)_1$-theory in conjunction with \cite[(1.1)]{HJ} then yields
\[ \dim_k \widehat{P}_r(\lambda) = 4p^r.\]
On the other hand, the inequality ($\dagger$) yields
\[\gamma = 2p^r-2-\lambda-2p > 2p^r-2-2p-p^r+p+1 = p^r-p-1,\]
so that another application of \cite[(1.1)]{HJ} implies
\[ \dim_k \widehat{P}_r(\gamma) = 2p^r.\]
The exact sequence ($\ast\ast$), however, yields $\dim_k\widehat{P}_r(\lambda) = \dim_k\widehat{P}_r(\gamma)$, a contradiction. \ \ \ \ $\Diamond$
\noindent
As an upshot of the above, our result holds for weights of depth $\dep(\lambda) = 1$, and we now suppose that $2 \le d+1 = \dep(\lambda) \le r$.
We consider the case, where $G$ is semi-simple and simply connected. Since $\widehat{Z}_r(\mu) \cong \Omega^{2m}_{G_rT}(\widehat{Z}_r(\lambda))$, we may apply Proposition \ref{PH5} to see
that
\[ \dep(\mu) = \ph_\Sigma(Z_r(\mu)) = \ph_\Sigma(Z_r(\lambda)) = \dep(\lambda).\]
According to \cite[(6.2),(6.6)]{FR}, the functor
\[ \Phi : \modd G_rT \longrightarrow \modd G_{r-d}T \ \ ; \ \ M \mapsto \Hom_{G_d}(\mathrm{St}_d,M)^{[-d]}\]
sends $\widehat{Z}_r(\lambda)$ and $\widehat{Z}_r(\mu)$ to the modules $\widehat{Z}_{r-d}(\lambda')$ and $\widehat{Z}_{r-d}(\mu')$, defined by weights of depth $1$, respectively.
Moreover, we have
\[ \widehat{Z}_{r-d}(\mu') = \Phi(\widehat{Z}_r(\mu)) \cong \Phi(\Omega^{2m}_{G_rT}(\widehat{Z}_r(\lambda))) \cong \Omega^{2m}_{G_{r-d}T}(\Phi(\widehat{Z}_r(\lambda)))
\cong \Omega^{2m}_{G_{r-d}T}(\widehat{Z}_{r-d}(\lambda')).\]
The first part of the proof now implies $r\!-\!d =1$ and provides a simple root $\alpha \in \Sigma\setminus \Psi^1_{\lambda'}$ such that $\mu' =\lambda' + mp\alpha$. The identities $\lambda = p^d\lambda' + (p^d\!-\!1)\rho$ and $\mu = p^d\mu' + (p^d\!-\!1)\rho$ thus yield
\[ \mu = p^d\lambda' +mp^{d+1}\alpha + (p^d\!-\!1)\rho = \lambda + mp^r\alpha\]
as well as
\[ \langle\lambda+\rho,\alpha^\vee\rangle = p^d\langle\lambda'+\rho,\alpha^\vee\rangle \not \in p^{d+1}\mathbb{Z}=p^r\mathbb{Z},\]
so that $\alpha \in \Sigma \setminus \Psi^r_\lambda$.
Suppose the result holds for a covering group $\tilde{G}$ of $G$ with maximal torus $\tilde{T}$ such that the canonical morphism sending $\tilde{G}$ to $G$ maps $\tilde{T}$ onto $T$.
Owing to \cite[(3.5)]{FR}, $\modd G_rT$ is the sum of those blocks of $\modd \tilde{G}_r\tilde{T}$, whose characters belong to $X(T) \subseteq X(\tilde{T})$. Consequently, our result
then also holds for $G$ and $T$. The proof may now be completed by repeated application of this argument (cf.\ \cite[(6.6)]{FR}).
(2) In view of (1), an application of \cite[(II.9.2)]{Ja3} gives $\Omega^{2m}_{G_rT}(\widehat{Z}_r(\lambda)) \cong \widehat{Z}_r(\lambda)\!\otimes_k\!k_{mp^r\alpha}$. Thus,
$Z_r(\lambda)$ is periodic, and $\cx_{G_rT}(\widehat{Z}_r(\lambda)) = \cx_{G_r}(Z_r(\lambda)) = 1$. Hence Theorem \ref{MC1} and Theorem \ref{MC3} provide an element $\beta \in
\Psi\cup \{0\}$ and $s \in \{0,\ldots, r\!-\!1\}$ such that
(a) \ $m = \ell p^s$ for some $\ell>0$, and
(b) \ $\Omega^{2p^s}_{G_rT}(\widehat{Z}_r(\lambda)) \cong \widehat{Z}_r(\lambda)\!\otimes_k\!k_{p^r\beta}$.
\noindent
Consequently,
\[ \widehat{Z}_r(\lambda\!+\!\ell p^r\beta) \cong \widehat{Z}_r(\lambda)\!\otimes_k\!k_{\ell p^r\beta} \cong \Omega^{2m}_{G_rT}(\widehat{Z}_r(\lambda)) \cong
\widehat{Z}_r(\lambda\!+\!mp^r\alpha),\]
so that $\ell p^r\beta = mp^r\alpha$, whence $\beta = p^s\alpha \ne 0$. Since $\alpha$ and $\beta$ are roots, we conclude that $s=0$ and $\beta = \alpha$. As a result, (b) gives
$\Omega^2_{G_rT}(\widehat{Z}_r(\lambda)) \cong \widehat{Z}_r(\lambda)\!\otimes_k\!k_{p^r\alpha}$, and our assertion follows. \end{proof}
\noindent
Given $\lambda \in X(T)$, we recall that $P_r(\lambda)$ denotes the projective cover of the simple $G_r$-module $L_r(\lambda)$. If $L_r(\lambda)$ is not projective, we let $\Ht_r(\lambda) = \Rad(P_r(\lambda))/\Soc(P_r(\lambda))$ be its \emph{heart}. Recall that $\Gamma_s(G_r)$ denotes the stable Auslander-Reiten quiver of the self-injective algebra $kG_r=\Dist(G_r)$.
We record an immediate consequence of Theorem \ref{VM3}, which generalizes and corrects \cite[(4.3)]{Fa5}.
\begin{Cor} \label{VM4} Suppose that $G$ is defined over $\mathbb{F}_p$ and that $p$ is good for $G$. Let $\lambda \in X(T)$ be a weight of depth $\dep(\lambda)\le r$. If $Z_r(\lambda)$ and $Z_r(\mu)$ belong to the same component of $\Gamma_s(G_r)$, then $Z_r(\mu) \cong Z_r(\lambda)$.\end{Cor}
\begin{proof} According to \cite[(4.4)]{FR}, the baby Verma modules $Z_r(\lambda)$ and $Z_r(\mu)$ are quasi-simple. Since $kG_r$ is symmetric, this implies the existence of $m \in \mathbb{Z}\setminus \{0\}$ with
\[ \Omega_{G_r}^{2m}(Z_r(\lambda)) \cong Z_r(\mu).\]
Consequently,
\[ \mathfrak{F}(\widehat{Z}_r(\mu)) \cong Z_r(\mu) \cong \Omega_{G_r}^{2m}(\mathfrak{F}(\widehat{Z}_r(\lambda))) \cong \mathfrak{F}( \Omega_{G_rT}^{2m}(\widehat{Z}_r(\lambda))),\]
and \cite[(4.1)]{GG1} provides $\gamma \in X(T)$ such that
\[ \widehat{Z}_r(\mu\!+\!p^r\gamma) \cong \Omega_{G_rT}^{2m}(\widehat{Z}_r(\lambda)).\]
Theorem \ref{VM3} gives $\mu + p^r\gamma - \lambda \in p^rX(T)$, so that \cite[(II.3.7(9))]{Ja3} implies $Z_r(\mu) \cong Z_r(\mu\!+\!p^r\gamma) \cong Z_r(\lambda)$. \end{proof}
\noindent
The analogue of Corollary \ref{VM4} for $\Gamma_s(G_rT)$ does not hold. The following example falsifies \cite[(4.3(1))]{Fa5}, whose proof is based on an incorrect citation of \cite[(II.11.7)]{Ja3}.
\begin{Example} Let $G = \SL(2)$ and consider a non-projective $\SL(2)_1T$-module $\widehat{Z}_1(\lambda)$. Then $\widehat{Z}_1(\lambda)$ has complexity $\cx_{\SL(2)_1T}(\widehat{Z}_1(\lambda)) = 1$, and
Theorem \ref{MC1} implies
\[ \Omega^2_{\SL(2)_1T}(\widehat{Z}_1(\lambda)) \cong \widehat{Z}_1(\lambda)\!\otimes_k\! k_{p\alpha} \cong \widehat{Z}_1(\lambda\!+\!p\alpha),\]
where $\alpha$ denotes the positive root of $\SL(2)$. As a result, the component of $\Gamma_s(\SL(2)_1T)$ containing $\widehat{Z}_1(\lambda)$ contains infinitely many baby Verma modules. \end{Example}
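\noindent
Note that all of the baby Verma modules $\widehat{Z}_1(\lambda\!+\!np\alpha)$ occurring in this component restrict to the same $\SL(2)_1$-module $Z_1(\lambda)$ (cf.\ \cite[(II.3.7(9))]{Ja3}), in accordance with Corollary \ref{VM4}.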
\noindent
Theorem \ref{VM3} actually shows that $\mu = \lambda + mp^r\alpha$ for every $\alpha \in \Sigma \setminus \Psi^r_\lambda$, so that $\Sigma\setminus \Psi^r_\lambda$ is a singleton. This means that the foregoing example is essentially the only exception.
\begin{Cor} \label{VM5} Suppose that $G$ is defined over $\mathbb{F}_p$ with $p\ge 7$. Let $\lambda \in X(T)$ be a weight of depth $\dep(\lambda) \le r$
such that there exist $\mu \in X(T)$ and $m \in \mathbb{N}$ with $\Omega^{2m}_{G_rT}(\widehat{Z}_r(\lambda)) \cong \widehat{Z}_r(\mu)$. Then the following statements hold:
{\rm (1)} \ $G_r \cong S_r \times H_r$, with $S \cong \SL(2)$ and $H$ being reductive.
{\rm (2)} \ There is a functor $\mathfrak{G} : \modd S_rT' \longrightarrow \modd G_rT$ and a weight $\lambda' \in X(T')$ such that $\mathfrak{G}$ sends $\widehat{Z}_r(\lambda')$ onto $\widehat{Z}_r(\lambda)$ and
induces an isomorphism $\Theta(\lambda') \cong \Theta(\lambda)$ between the AR-components containing $\widehat{Z}_r(\lambda')$ and $\widehat{Z}_r(\lambda)$, respectively.
{\rm (3)} \ There exists a simple root $\alpha \in \Psi$ such that $\{\widehat{Z}_r(\lambda\!+\!np^r\alpha) \ ; \ n \in \mathbb{Z}\}$ is the set of those baby Verma modules that belong to
$\Theta(\lambda)$.\end{Cor}
\begin{proof} (1) Thanks to Theorem \ref{VM3}, we have $\dep(\lambda) = r$ as well as $\mu = \lambda + mp^r\alpha$ for some simple root $\alpha \in \Sigma$. The proof of Proposition \ref{PH5} now yields $\ph_{(U_\alpha)_r}(\widehat{Z}_r(\lambda)) = r$.
In view of \cite[(II.3.7)]{Ja3}, our assumption implies
\[ Z_r(\lambda) \cong Z_r(\mu) \cong \mathfrak{F}(\widehat{Z}_r(\mu)) \cong \mathfrak{F}(\Omega^{2m}_{G_rT}(\widehat{Z}_r(\lambda))) \cong \Omega^{2m}_{G_r}(\mathfrak{F}(\widehat{Z}_r(\lambda)))
\cong \Omega^{2m}_{G_r}(Z_r(\lambda)),\]
so that $Z_r(\lambda)$ is a periodic module. In particular, the module $\widehat{Z}_r(\lambda)$ has complexity $\cx_{G_rT}(\widehat{Z}_r(\lambda)) =1$. Consequently,
$V_r(G)_{\widehat{Z}_r(\lambda)}$ is a one-dimensional, irreducible variety and
\[ V_r(G)_{\widehat{Z}_r(\lambda)} = V_r(U_\alpha)_{\widehat{Z}_r(\lambda)}.\]
In view of Theorem \ref{PH3}, the group $\mathcal{U}_{\widehat{Z}_r(\lambda)}$ is a subgroup of $(U_\alpha)_r$ of height $r$, whence $\mathcal{U}_{\widehat{Z}_r(\lambda)} = (U_\alpha)_r$.
The Borel subgroup $B$ acts on $kG_r$ via the adjoint representation. Since the twist $Z_r(\lambda)^{(b)}$ of a baby Verma module $Z_r(\lambda)$ by $b \in B$ is isomorphic to
$Z_r(\lambda)$, it follows that the variety $V_r(G)_{\widehat{Z}_r(\lambda)}$ is $B$-invariant. This readily implies $B\boldsymbol{.} (U_\alpha)_r = B\boldsymbol{.} \mathcal{U}_{\widehat{Z}_r(\lambda)}
\subseteq \mathcal{U}_{\widehat{Z}_r(\lambda)} = (U_\alpha)_r$, and the arguments of \cite[(7.3)]{FR} yield a decomposition $G = SH$ as a semidirect product, with $S$ being simple of rank
$1$. It follows that $G_r \cong S_r\times H_r$.
(2) Let $T = T'T''$ be the corresponding decomposition of the chosen maximal torus $T$ of $G$. The arguments of \cite[(7.3)]{FR} also provide a decomposition
\[ \widehat{Z}_r(\lambda) \cong \widehat{Z}_r(\lambda')\!\otimes_k\!P\]
as an outer tensor product of the $S_rT'$-module $\widehat{Z}_r(\lambda')$ and the simple projective $H_rT''$-module $P$. We now consider
\[ \mathfrak{G} : \modd S_rT' \longrightarrow \modd G_rT \ \ ; \ \ M \mapsto M\!\otimes_k\!P.\]
As observed in the proof of Theorem \ref{WT3}, this functor identifies $\modd S_rT'$ with a sum of blocks of $\modd G_rT$ and in particular induces an isomorphism $\Theta(\lambda')
\cong \Theta(\lambda)$.
(3) Let $\mathcal{A} := \{\widehat{Z}_r(\lambda\!+\!np^r\alpha) \ ; \ n \in \mathbb{Z}\}$. According to Theorem \ref{VM3}(2), $\mathcal{A} = \{\Omega^{2n}_{G_rT}(\widehat{Z}_r(\lambda)) \ ; \ n \in
\mathbb{Z}\}$ is contained in $\Theta(\lambda)$.
Suppose that $\widehat{Z}_r(\nu)$ belongs to $\Theta(\lambda)$. By virtue of \cite[(4.5)]{FR}, the modules $\widehat{Z}_r(\lambda)$ and $\widehat{Z}_r(\nu)$ are quasi-simple. Thanks to Corollary \ref{HA3}, there thus exists $n \in \mathbb{Z}$ such that $\Omega_{G_rT}^{2n}(\widehat{Z}_r(\lambda)) \cong \widehat{Z}_r(\nu)$. If $n>0$, then Theorem \ref{VM3} provides a simple root $\beta \in \Psi$ such that
(a) \ $\nu = \lambda+np^r\beta$, and
(b) \ $\widehat{Z}_r(\lambda\!+\!p^r\alpha) \cong \Omega^2_{G_rT}(\widehat{Z}_r(\lambda)) \cong \widehat{Z}_r(\lambda\!+\!p^r\beta)$.
\noindent
Thus, $\beta = \alpha$ and $\widehat{Z}_r(\nu)$ belongs to $\mathcal{A}$.
Alternatively, $\Omega_{G_rT}^{-2n}(\widehat{Z}_r(\nu)) \cong \widehat{Z}_r(\lambda)$ and the foregoing arguments yield $\lambda = \nu-np^r\alpha$. \end{proof}
\begin{center}
\bf Acknowledgement
\end{center}
Parts of this paper were written while the author was visiting the Isaac Newton Institute in Cambridge. He would like to take this opportunity to thank the members of the Institute for their
hospitality and support.
\end{document} |
\begin{document}
\label{top}
\title{Algorithmic problems for free-abelian times free groups}
\noindent \textbf{Keywords}: free group, free-abelian group, decision problem,
automorphism.
\noindent \textbf{MSC}: \texttt{20E05}, \texttt{20K01}.
\begin{abstract}
We study direct products of free-abelian and free groups with special emphasis on
algorithmic problems. After giving natural extensions of standard notions into that
family, we find an explicit expression for an arbitrary endomorphism of $\ZZ^m \times
F_n$. These tools are used to solve several algorithmic and decision problems for $\ZZ^m
\times F_n$: the membership problem, the isomorphism problem, the finite index problem,
the subgroup and coset intersection problems, the fixed point problem, and the Whitehead
problem.
\end{abstract}
\goodbreak
\tableofcontents
\goodbreak
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}
Free-abelian groups, namely $\ZZ^m$, are classical and very well known. Free groups, namely
$F_n$, are much wilder and have a much more complicated structure, but they have also been
extensively studied in the literature for more than a hundred years. The goal of this
paper is to investigate direct products of the form $\ZZ^m \times F_n$, namely free-abelian
times free groups. At a first look, it may seem that many questions and problems concerning
$\ZZ^m \times F_n$ will easily reduce to the corresponding questions or problems for
$\ZZ^m$ and $F_n$; and, in fact, this is the case when the problem considered is easy or
rigid enough. However, some other naive-looking questions have a considerably more
elaborate answer in $\ZZ^m \times F_n$ than in $\ZZ^m$ or $F_n$. This is the case,
for example, when one considers automorphisms: $\operatorname{Aut} (\ZZ^m \times F_n )$ naturally
contains $GL_m(\ZZ) \times \operatorname{Aut}(F_n)$, but there are many more automorphisms other than
those preserving the factors $\ZZ^m$ and $F_n$. This fact causes potential complications
when studying problems involving automorphisms: apart from understanding the problem in
both the free-abelian and the free parts, one has to be able to control how it is affected
by the interaction between the two parts.
Another example of this phenomenon is the study of intersections of subgroups. It is well
known that every subgroup of $\ZZ^m$ is finitely generated. This is not true for free
groups $F_n$ with $n\geqslant 2$, but it is also a classical result that all these groups
satisfy the Howson property: the intersection of two finitely generated subgroups is again
finitely generated. This elementary property fails dramatically in $\ZZ^m \times F_n$ when
$m\geqslant 1$ and $n\geqslant 2$ (a very easy example, reproduced below, already appears
in~\cite{burns_intersection_1998}, where it is attributed to Moldavanski). Consequently, the algorithmic
problem of computing intersections of finitely generated subgroups of $\ZZ^m \times F_n$
(including the preliminary decision problem of whether such an intersection is finitely
generated or not) becomes considerably more involved than the corresponding problems in
$\ZZ^m$ (which just amounts to solving a system of linear equations over the integers) or in $F_n$
(solved by using the pull-back technique for graphs). This is one of the algorithmic
problems addressed below (see Section \ref{sec:CIP}).
Along all the paper we shall use the following notation and conventions. For $n\geqslant
1$, $[n]$ denotes the set of integers $\{ 1,\ldots ,n\}$. Vectors from $\ZZ^m$ will always be
understood as row vectors, and matrices $\textbf{M}$ will always be thought of as linear maps
acting on the right, $\textbf{v} \mapsto \textbf{vM}$; accordingly, morphisms will always
act on the right of the arguments, $x\mapsto x\alpha$. For notational coherence, we shall
use uppercase boldface letters for matrices, and lowercase boldface letters for vectors
(moreover, if $w\in F_n$ then $\textbf{w}\in \mathbb{Z}^n$ will typically denote its
abelianization). We shall use lowercase Greek letters for endomorphisms of free groups,
$\phi \colon F_n \to F_n$, and uppercase Greek letters for endomorphisms of free-abelian
times free groups, $\Phi \colon \mathbb{Z}^m \times F_n \to \mathbb{Z}^m \times F_n$.
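In particular, since morphisms act on the right, the composition $\Phi\Psi$ of two endomorphisms $\Phi$ and $\Psi$ is the map $x \mapsto (x\Phi)\Psi$.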
The paper is organized as follows. In Section~\ref{sec:ffab}, we introduce the family of
groups we are interested in, and we import there several basic notions and properties
shared by both families of free-abelian and free groups, such as the concepts of rank and
basis, as well as the property of being closed under taking subgroups. In Section~\ref{Dehn} we
recall the folklore solution to the three classical Dehn problems within our family of
groups. In the next two sections we study some other more interesting algorithmic problems:
the finite index subgroup problem in Section~\ref{fi}, and the subgroup and the coset
intersection problems in Section~\ref{sec:CIP}. In Section~\ref{sec:morphisms} we give an
explicit description of all automorphisms, monomorphisms and endomorphisms of free-abelian
times free groups which we then use in Section~\ref{sec:fix} to study the fixed subgroup of
an endomorphism, and in Section~\ref{sec:Wh} to solve the Whitehead problem within our
family of groups.
\goodbreak
\section{Free-abelian times free groups} \label{sec:ffab}
Let $T = \{ \, t_i \mid i \in I \,\}$ and $X=\{ \, x_j \mid j\in J \, \}$ be disjoint
(possibly empty) sets of symbols, and consider the group $G$ given by the presentation
$$
G=\left \langle \, T,X \mid [T,\, T\sqcup X] \,\right \rangle,
$$
where $[A,B]$ denotes the set of commutators of all elements from $A$ with all elements
from $B$. Calling $Z$ and $F$ the subgroups of $G$ generated, respectively, by $T$ and $X$,
it is easy to see that $Z$ is a free-abelian group with basis $T$, and $F$ is a free group
with basis $X$. We shall refer to the subgroups $Z=\langle T\rangle$ and $F=\langle
X\rangle$ as the \emph{free-abelian} and \emph{free parts} of $G$, respectively. Now, it is
straightforward to see that $G$ is the direct product of its free-abelian and free parts,
namely
\begin{equation} \label{eq:pres F x Z abreujada}
G=\left \langle \, T,X \mid [T,\, T\sqcup X] \,\right \rangle \simeq Z\times F.
\end{equation}
We say that a group is \emph{free-abelian times free} if it is isomorphic to one of the
form~(\ref{eq:pres F x Z abreujada}).
It is clear that in every word on the generators $T\sqcup X$, the letters from $T$ can
freely move, say to the left, and so every element from $G$ decomposes as a product of an
element from $Z$ and an element from $F$, in a unique way. After choosing a well ordering
of the set $T$ (whose existence is equivalent to the axiom of choice), we have a natural
\emph{normal form} for the elements in $G$, which we shall write as $\mathbf{t^a} \, w$,
where $\mathbf{a}=(a_i)_i \in \bigoplus_{i\in I}\ZZ$, $ \mathbf{t^{a}}$ stands for the
(finite) product $\Pi_{i\in I} t_i^{a_i}$ (in the given order for $T$), and $w$ is a
reduced free word on $X$.
Observe that the center of the group $G$ is $Z$ unless $F$ is infinite cyclic, in which
case $G$ is abelian and so its center is the whole $G$. This exception will create some
technical problems later on.
We shall mostly be interested in the finitely generated case, i.e.\ when $T$ and $X$ are
both finite, say $I=[m]$ and $J=[n]$ respectively, with $m,n\geqslant 0$. In this case, $Z$
is the free-abelian group of rank $m$, $Z=\ZZ^m$, $F$ is the free group of rank $n$,
$F=F_n$, and our group $G$ becomes
\begin{equation} \label{eq:pres F_n x Z^m}
G=\ZZ^m \times F_n =\langle \, t_1,\ldots,t_m,\, x_1,\ldots,x_n \mid t_it_j=t_jt_i,\, t_ix_k=x_kt_i \, \rangle,
\end{equation}
where $i,j\in [m]$ and $k\in [n]$. The normal form for an element $g\in G$ is now
$$
g=\mathbf{t^a} \, w = t_1^{a_1} \cdots \,t_m^{a_m} \, w(x_1,\ldots ,x_n) ,
$$
where $\mathbf{a}=(a_1,\ldots ,a_m)\in \ZZ^m$ is a row integral vector, and $w=w(x_1,\ldots
,x_n)$ is a reduced free word on the alphabet $X$. Note that the symbol $\mathbf{t}$ by
itself has no real meaning, but it allows us to convert the notation for the abelian group
$\ZZ^m$ from additive into multiplicative, by moving up the vectors (i.e.\ the entries of
the vectors) to the level of exponents; this will be especially convenient when working in
$G$, a noncommutative group in general.
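For instance, in $\ZZ^2 \times F_2$ the product $(t_2\, x_1)(t_1\, x_1^{-1}x_2)(t_2\, x_1)$ has normal form $t_1t_2^2\, x_2x_1$, obtained by moving all the $t_i$'s to the left and freely reducing the remaining word on $X$.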
Observe that the ranks of the free-abelian and free parts of $G$, namely $m$ and $n$, are
not invariants of the group $G$, since $\ZZ^m \times F_1\simeq \ZZ^{m+1}\times F_0$.
However, as one may expect, this is the only possible redundancy and so, we can generalize
the concepts of rank and basis from the free-abelian and free contexts to the mixed
free-abelian times free situation.
\goodbreak
\begin{obs} \label{prop:caract Fn x Z^m}
Let $Z$ and $Z'$ be arbitrary free-abelian groups, and let $F$ and $F'$ be arbitrary free
groups. If $F$ and $F'$ are not infinite cyclic, then
$$
Z\times F\simeq Z'\times F'
\,\, \Leftrightarrow \,\, \operatorname{rk}(Z)=\operatorname{rk}(Z') \text{ and } \operatorname{rk}(F)=\operatorname{rk}(F').
$$
\end{obs}
\begin{proof}
It is straightforward to see that the center of $Z\times F$ is $Z$ (here is where $F\nsimeq
\mathbb{Z}$ is needed). On the other hand, the quotient by the center gives $(Z\times F)/Z
\simeq F$. The result follows immediately.
\end{proof}
\begin{defn} \label{def:rang}
Let $G=Z\times F$ be a free-abelian times free group and assume, without loss of
generality, that $F\not\simeq \mathbb{Z}$. Then, according to the previous observation, the
pair of cardinals $(\kappa,\, \varsigma)$, where $\kappa$ is the abelian rank of $Z$ and
$\varsigma$ is the rank of $F$, is an invariant of $G$, which we shall refer to as the
\emph{rank} of $G$, $\operatorname{rk}(G)$. (We allow this abuse of notation because the rank of $G$ in
the usual sense, namely the minimal cardinal of a set of generators, is precisely
$\kappa+\varsigma$: $G$ is, in fact, generated by a set of $\kappa+\varsigma$ elements and,
abelianizing, we get $G^{\operatorname{ab}} =(Z\times F)^{\operatorname{ab}}=Z\oplus F^{\operatorname{ab}}$, a free-abelian group of rank
$\kappa+\varsigma$, so $G$ cannot be generated by fewer than $\kappa+\varsigma$ elements.)
\end{defn}
\begin{defn}\label{def:base}
Let $G=Z\times F$ be a free-abelian times free group. A pair $(A,B)$ of subsets of $G$ is
called a \emph{basis} of $G$ if the following three conditions are satisfied:
\begin{enumerate}
\item [(i)] $A$ is an abelian basis of the center of $G$,
\item [(ii)] $B$ is empty, or a free basis of a non-abelian free subgroup of $G$ (note
that this excludes the possibility $|B|=1$),
\item [(iii)] $\langle A \cup B\rangle= G$.
\end{enumerate}
In this case we shall also say that $A$ and $B$ are, respectively, the \emph{free-abelian}
and \emph{free} components of $(A,B)$. From (i), (ii) and (iii) it follows immediately
\begin{enumerate}
\item [(iv)] $\langle A\rangle \cap \langle B\rangle = \{ 1 \}$,
\item [(v)] $A\cap B=\emptyset$,
\end{enumerate}
since $\langle A \rangle \cap \langle B \rangle$ is contained in the center of $G$, but no
nontrivial element of $\langle B\rangle$ belongs to it.
\goodbreak
Usually, we shall abuse notation and just say that $A\cup B$ \emph{is a basis} of $G$. Note
that no information is lost because we can retrieve $A$ as the elements in $A\cup B$ which
belong to the center of $G$, and $B$ as the remaining elements.
\end{defn}
Observe that, by (i), (iii) and (iv) in the previous definition, if $(A,B)$ is a basis of a
free-abelian times free group $G$, then $G=\langle A\rangle \times \langle B\rangle$; and
by (i) and (ii), $\langle A\rangle$ is a free-abelian group and $\langle B\rangle$ is a
free group not isomorphic to $\mathbb{Z}$; hence, by Observation~\ref{prop:caract Fn x
Z^m}, $\operatorname{rk}(G)=(|A|,\, |B|)$. In particular, this implies that $(|A|,\, |B|)$ does not
depend on the particular basis $(A,B)$ chosen.
On the other hand, the first obvious example is $T\cup X$ being a basis of the group
$G=\langle T,X\mid [T,\, T\sqcup X]\rangle$ (note that if $|X|\neq 1$ then $A=T$ and $B=X$,
but if $|X|=1$ then $A=T\cup X$ and $B=\emptyset$ due to the technical requirement in
Observation~\ref{prop:caract Fn x Z^m}). We have proved the following.
\begin{cor}
Every free-abelian times free group $G$ has bases, and every basis $(A,B)$ of $G$ satisfies
$\operatorname{rk}(G)=(|A|,\, |B|)$. \qed
\end{cor}
\goodbreak
Let us now focus our attention on subgroups. It is very well known that every subgroup of a
free-abelian group is free-abelian; and every subgroup of a free group is again free. These
two facts lead, with a straightforward argument, to the same property for free-abelian
times free groups (this will be crucial for the rest of the paper).
\begin{prop}\label{prop:subgs de Fn x Z^m}
The family of free-abelian times free groups is closed under taking subgroups.
\end{prop}
\begin{proof}
Let $T$ and $X$ be arbitrary disjoint sets, let $G$ be the free-abelian times free group
given by presentation~(\ref{eq:pres F x Z abreujada}), and let $H\leqslant G$.
If $|X|=0,1$ then $G$ is free-abelian, and so $H$ is again free-abelian (with rank less
than or equal to that of $G$); the result follows.
\goodbreak
Assume $|X|\geqslant 2$. Let $Z=\langle T\rangle$ and $F=\langle X\rangle$ be the
free-abelian and free parts of $G$, respectively, and let us consider the natural short
exact sequence associated to the direct product structure of $G$:
$$
1\longrightarrow Z \overset{\iota}{\longrightarrow} Z \times F =G \overset{\pi}{\longrightarrow} F \longrightarrow 1,
$$
where $\iota$ is the inclusion, $\pi$ is the projection $\mathbf{t^{a}}w\mapsto w$, and
therefore $\ker(\pi)=Z=\operatorname{Im}(\iota)$. Restricting this short exact sequence to $H\leqslant
G$, we get
$$
1\longrightarrow \ker(\pi_{\mid H}) \overset{\iota}{\longrightarrow} H\overset{\pi_{\mid H}}{\longrightarrow} H\pi
\longrightarrow 1,
$$
where $1\leqslant \ker(\pi_{\mid H})=H\cap \ker(\pi)=H\cap Z\leqslant Z$, and $1\leqslant
H\pi \leqslant F$.
Therefore, $\ker(\pi_{\mid H})$ is a free-abelian group, and $H\pi$ is a
free group. Since $H\pi$ is free, $\pi_{\mid H}$ has a splitting
\begin{equation} \label{eq:escissio alpha}
H\overset{\alpha}{\longleftarrow} H\pi,
\end{equation}
sending back each element of a chosen free basis for $H\pi$ to an arbitrary preimage.
\goodbreak
Hence, $\alpha$ is injective, $H\pi\alpha\leqslant H$ is isomorphic to $H\pi$, and
straightforward calculations show that the following map is an isomorphism:
\begin{equation} \label{eq:iso factoritzacio subgrup}
\begin{array}{rcl}
\Theta_{\alpha} \colon H & \longrightarrow & \ker(\pi_{\mid H}) \times H\pi\alpha \\ h & \longmapsto & \bigl(h(h \pi
\alpha)^{-1},\, h\pi\alpha \bigr).
\end{array}
\end{equation}
Thus $H\simeq \ker(\pi_{\mid H}) \times H\pi\alpha$ is free-abelian times free and the
result is proven.
\end{proof}
This proof shows a particular way of decomposing $H$ into a direct product of a
free-abelian subgroup and a free subgroup, which depends on the chosen splitting $\alpha$,
namely
\begin{equation}\label{eq:factoritzacio subgrup}
H=(H\cap Z)\times H\pi\alpha.
\end{equation}
We call the subgroups $H\cap Z$ and $H\pi\alpha$, respectively, the \emph{free-abelian} and
\emph{free parts of $H$, with respect to the splitting $\alpha$}. Note that the
free-abelian and free parts of the subgroup $H=G$ with respect to the natural inclusion
$G\hookleftarrow F \colon \alpha$ coincide with what we called the free-abelian and free
parts of $G$.
Furthermore, Proposition~\ref{prop:subgs de Fn x Z^m} and the
decomposition~(\ref{eq:factoritzacio subgrup}) give a characterization of the bases, rank,
and all possible isomorphism classes of such an arbitrary subgroup $H$.
\begin{cor}\label{cor:caracteritzacio base combinatoria}\index{base!caracteritzacions}
With the above notation, a subset $E \subseteq H\leqslant G=Z\times F$ is a basis of $H$ if
and only if
$$
E=E_Z \sqcup E_F,
$$
where $E_Z$ is an abelian basis of $H \cap Z$, and $E_F$ is a free basis of $H\pi \alpha$,
for a certain splitting $\alpha$ as in~(\ref{eq:escissio alpha}).
\end{cor}
\begin{proof}
The implication from right to left is straightforward, with $E=A\sqcup B$, and $(A,B)=(E_Z, E_F)$
except for the case $\operatorname{rk}(H\pi)=1$, when we have $(A,B)=(E_Z\sqcup E_F, \emptyset )$.
Suppose now that $E=A\sqcup B$ is a basis of $H$ in the sense of
Definition~\ref{def:base}, and let us look at the decomposition~(\ref{eq:factoritzacio subgrup}),
for a suitable $\alpha$. If $\operatorname{rk}(H\pi)=1$, then $H$ is abelian, $A$ is an abelian
basis for $H$, $B=\emptyset$, and all but exactly one of the elements in $A$ belong to
$H\cap Z$ (i.e.\ have normal forms using only letters from $T$); in this case the result is
clear, taking $E_F$ to consist of just that special element. Otherwise, the center of $H$ is
$Z(H)=H\cap Z$, which has $A$ as an abelian basis; take $E_Z =A$ and $E_F=B$. It is clear that
the projection $\pi\colon H \twoheadrightarrow H\pi,\ \mathbf{t^{a}} u \mapsto u$, restricts
to an isomorphism $\pi|_{\langle B\rangle}\colon \langle B\rangle \to H\pi$ since no
nontrivial element of $\langle B\rangle$ belongs to $\ker \pi =H\cap Z$. Therefore, taking
$\alpha =\pi|_{\langle B\rangle}^{-1}$, $E_F$ is a free basis of $H\pi \alpha$.
\end{proof}
\begin{cor} \label{cor:classes isomorfia subgrups}
Let $G$ be the free-abelian times free group given by presentation~(\ref{eq:pres F x Z abreujada}),
and let $\operatorname{rk}(G)=(\kappa,\, \varsigma )$. Every subgroup $H\leqslant G$ is
again free-abelian times free with $\operatorname{rk}(H)=(\kappa',\, \varsigma' )$ where,
\begin{itemize}
\item[\emph{(i)}] in case of $\varsigma=0$: $0\leqslant \kappa'\leqslant \kappa$ and
$\varsigma' =0$;
\item[\emph{(ii)}] in case of $\varsigma \geqslant 2$: either $0\leqslant
\kappa'\leqslant \kappa +1$ and $\varsigma' =0$, or $0\leqslant \kappa'\leqslant
\kappa$ and $0\leqslant \varsigma' \leqslant \max \{ \varsigma,\, \aleph_0 \}$ and
$\varsigma'\neq 1$.
\end{itemize}
Furthermore, for every such $(\kappa',\, \varsigma' )$, there is a subgroup $H\leqslant G$
such that $\operatorname{rk}(H)=(\kappa',\, \varsigma' )$. \qed
\end{cor}
\goodbreak
Along the rest of the paper, we shall concentrate on the finitely generated case. From
Proposition~\ref{prop:subgs de Fn x Z^m} we can easily deduce the following corollary,
which will be useful later.
\begin{cor}\label{cor:H fg sii Hpi fg}
A subgroup $H$ of $\ZZ^m \times F_n$ is finitely generated if and only if its projection to
the free part $H\pi$ is finitely generated. \qed
\end{cor}
The proof of Proposition~\ref{prop:subgs de Fn x Z^m}, at least in the finitely generated
case, is completely algorithmic; i.e. if $H$ is given by a finite set of generators, one
can effectively choose a splitting $\alpha$, and compute a basis of the free-abelian and
free parts of $H$ (w.r.t. $\alpha$). This will be crucial for the rest of the paper, and we
make it more precise in the following proposition.
\begin{prop} \label{prop:bases algorismiques}
Let $G=\ZZ^m \times F_n$ be a finitely generated free-abelian times free group. There is an
algorithm which, given a subgroup $H\leqslant G$ by a finite family of generators, computes
a basis for $H$ and writes both the new elements in terms of the old generators, and the
old generators in terms of the new basis.
\end{prop}
\begin{proof}
If $n=|X|=0,1$ then $G$ is free-abelian and the problem is a straightforward exercise in
linear algebra. So, let us assume $n\geqslant 2$.
We are given a finite set of generators for $H$, say $\mathbf{t}^\mathbf{c_1} w_1, \ldots,
\mathbf{t}^\mathbf{c_{p}} w_{p}$, where $p\geqslant 1$, $\mathbf{c_1}, \ldots,
\mathbf{c_{p}} \in \ZZ^m$ are row vectors, and $w_1, \ldots, w_{p}\in F_n$ are reduced
words on $X=\{ x_1, \ldots ,x_n \}$. Applying suitable Nielsen transformations,
see~\cite{lyndon_combinatorial_2001}, we can algorithmically transform the $p$-tuple
$(w_1,\ldots ,w_p)$ of elements from $F_n$, into another of the form $(u_1,\ldots
,u_{n'},1,\ldots ,1)$, where $\{ u_1,\ldots ,u_{n'}\}$ is a free basis of $\langle
w_1,\ldots ,w_p\rangle =H\pi$, and $0\leqslant n'\leqslant p$. Furthermore, reading along
the Nielsen process performed, we can effectively compute expressions of the new elements
as words on the old generators, say $u_j=\eta_j (w_1,\ldots ,w_p)$, $j\in [n']$, as well as expressions of the old
generators in terms of the new free basis, say $w_i =\nu_i(u_1,\ldots ,u_{n'})$, for $i\in
[p]$.
Now, the map $\alpha \colon H\pi \to H$, $u_j \mapsto \eta_j(\mathbf{t}^\mathbf{c_1} w_1,
\ldots , \mathbf{t}^\mathbf{c_{p}} w_{p})$ can serve as a splitting in the proof of
Proposition~\ref{prop:subgs de Fn x Z^m}, since $\eta_j(\mathbf{t}^\mathbf{c_1} w_1, \ldots
, \mathbf{t}^\mathbf{c_{p}} w_{p})=\mathbf{t}^{\mathbf{a_j}}\eta_j (w_1,\ldots
,w_p)=\mathbf{t}^{\mathbf{a_j}}u_j \in H$, where $\mathbf{a_j}$, $j\in [n']$, are integral
linear combinations of $\mathbf{c_1},\ldots ,\mathbf{c_p}$.
It only remains to compute an abelian basis for $\ker(\pi_{\mid H})=H\cap \ZZ^m$. For each
one of the given generators $h=\mathbf{t}^\mathbf{c_i} w_i$, compute
$h(h\pi\alpha)^{-1}=\mathbf{t}^{\mathbf{d_i}}$ (here, we shall need the words $\nu_i$
computed before). Using the isomorphism $\Theta_{\alpha}$ from the proof of
Proposition~\ref{prop:subgs de Fn x Z^m}, we deduce that $\{
\mathbf{t}^{\mathbf{d_1}},\ldots ,\mathbf{t}^{\mathbf{d_p}}\}$ generate $H\cap \ZZ^m$; it
only remains to use a standard linear algebra procedure, to extract from here an abelian
basis $\{\mathbf{t}^{\mathbf{b_1}},\ldots ,\mathbf{t}^{\mathbf{b_{m'}}} \}$ for $H\cap
\ZZ^m$. Clearly, $0\leqslant m'\leqslant m$.
We immediately get a basis $(A,B)$ for $H$ (with just a small technical caution): if
$n'\neq 1$, take $A=\{ \mathbf{t}^\mathbf{b_1}, \ldots , \mathbf{t}^\mathbf{b_{m'}} \}$ and
$B=\{ \mathbf{t}^\mathbf{a_1} u_1, \ldots , \mathbf{t}^\mathbf{a_{n'}} u_{n'} \}$; and if
$n'=1$ take $A=\{ \mathbf{t}^\mathbf{b_1}, \ldots , \mathbf{t}^\mathbf{b_{m'}},\,
\mathbf{t}^\mathbf{a_1} u_1\}$ and $B=\emptyset$.
On the other hand, as a side product of the computations done, we have the expressions
$\mathbf{t}^{\mathbf{a_j}}u_j= \eta_j(\mathbf{t}^\mathbf{c_1} w_1, \ldots ,
\mathbf{t}^\mathbf{c_{p}} w_{p})$, $j\in [n']$. And we can also compute expressions of the
$\mathbf{t}^{\mathbf{b_i}}$'s in terms of the $\mathbf{t}^{\mathbf{d_i}}$'s, and of the
$\mathbf{t}^{\mathbf{d_i}}$'s in terms of the $\mathbf{t}^\mathbf{c_i} w_i$'s. Hence we can
compute expressions for each one of the new elements in terms of the old generators.
For the other direction, we also have the expressions $w_i =\nu_i(u_1,\ldots ,u_{n'})$, for
$i\in [p]$. Hence, $\nu_i(\mathbf{t}^\mathbf{a_1} u_1, \ldots , \mathbf{t}^\mathbf{a_{n'}}
u_{n'})=\mathbf{t}^{\mathbf{e_i}}w_i$ for some $\mathbf{e_i}\in \ZZ^m$. But $H\ni
(\mathbf{t}^{\mathbf{c_i}}w_i)
(\mathbf{t}^{\mathbf{e_i}}w_i)^{-1}=\mathbf{t}^{\mathbf{c_i-e_i}}\in \ZZ^m$, so we can
compute integers $\lambda_1,\ldots ,\lambda_{m'}$ such that $\mathbf{c_i}-\mathbf{e_i}
=\lambda_1 \mathbf{b_1}+\cdots +\lambda_{m'}\mathbf{b_{m'}}$. Thus,
$\mathbf{t}^{\mathbf{c_i}}w_i =\mathbf{t}^{\mathbf{c_i}-\mathbf{e_i}}
\mathbf{t}^{\mathbf{e_i}}w_i = \mathbf{t}^{\lambda_1 \mathbf{b_1}+\cdots
+\lambda_{m'}\mathbf{b_{m'}}} \mathbf{t}^{\mathbf{e_i}} w_i =
(\mathbf{t}^{\mathbf{b_1}})^{\lambda_1}\cdots (\mathbf{t}^{\mathbf{b_{m'}}})^{\lambda_{m'}}
\nu_i(\mathbf{t}^\mathbf{a_1} u_1, \ldots , \mathbf{t}^\mathbf{a_{n'}} u_{n'})$, for $i\in
[p]$.
\end{proof}
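Purely as an illustration of the last step of the previous proof (extracting an abelian basis of $H\cap \ZZ^m$ from the generating vectors $\mathbf{d_1},\ldots ,\mathbf{d_p}$), here is a minimal plain-Python sketch of the standard unimodular row-reduction involved; the encoding of vectors as lists of integers and the function name are ours, and the bookkeeping of the transformation (needed to express new elements in terms of old ones) is omitted.
\begin{verbatim}
def lattice_echelon_basis(gens):
    # Unimodular row reduction of integer row vectors: returns an echelon
    # basis (strictly increasing pivot columns) of the lattice they generate.
    rows = [list(v) for v in gens if any(v)]
    m = len(gens[0]) if gens else 0
    basis = []
    for col in range(m):
        # Euclidean elimination: leave at most one nonzero entry in `col`
        while sum(1 for r in rows if r[col]) > 1:
            nz = sorted((r for r in rows if r[col]), key=lambda r: abs(r[col]))
            p = nz[0]
            for r in nz[1:]:
                q = r[col] // p[col]
                for k in range(col, m):
                    r[k] -= q * p[k]
            rows = [r for r in rows if any(r)]
        piv = next((r for r in rows if r[col]), None)
        if piv is not None:
            if piv[col] < 0:
                piv[:] = [-x for x in piv]      # normalize the pivot sign
            basis.append(piv)
            rows = [r for r in rows if r is not piv]
    return basis

print(lattice_echelon_basis([[2, 4, 4], [6, 6, 12], [10, 4, 16]]))
# [[2, 4, 4], [0, 2, -4], [0, 0, 12]]
\end{verbatim}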
As a first application of Proposition~\ref{prop:bases algorismiques}, free-abelian times
free groups have solvable \emph{membership problem}. Let us state it for an arbitrary group
$G$.
\begin{problem}[\textbf{Membership Problem, $\operatorname{MP}(G)$}]
Given elements $g,\, h_1,\ldots ,h_p \in G$, decide whether $g\in H=\langle h_1,\ldots
,h_p\rangle$ and, in the affirmative case, compute an expression of $g$ as a word in the~$h_i$'s.
\end{problem}
\goodbreak
\begin{prop} \label{lem:Membership problem per Fn x Z^m} \index{membership problem!decidibilitat i c\`{a}lcul}
The Membership Problem for $G=\ZZ^m \mbox{type\,(I)}mes F_n$ is solvable.
\end{prop}
\begin{proof}
Write $g=\mathbf{t}^{\mathbf{a}}w$. We start by computing a basis for $H$ following
Proposition~\ref{prop:bases algorismiques}, say $\{ \mathbf{t}^\mathbf{b_1}, \ldots
,\mathbf{t}^\mathbf{b_{m'}},\, \mathbf{t}^\mathbf{a_1} u_1, \ldots
,\mathbf{t}^\mathbf{a_{n'}} u_{n'}\}$. Now, check whether $g\pi =w\in H\pi=\langle
u_1,\ldots ,u_{n'}\rangle$ (the membership problem is well known to be solvable for free groups). If the answer is
negative then $g\not\in H$ and we are done. Otherwise, a standard algorithm for membership
in free groups gives us the (unique) expression of $w$ as a word on the $u_j$'s, say
$w=\omega (u_1,\ldots ,u_{n'})$. Finally, compute $\omega (\mathbf{t}^\mathbf{a_1} u_1,
\ldots ,\mathbf{t}^\mathbf{a_{n'}} u_{n'})=\mathbf{t}^{\mathbf{c}}w \in H$. It is clear
that $\mathbf{t}^{\mathbf{a}}w\in H$ if and only if $\mathbf{t}^{\mathbf{a}-\mathbf{c}}
=(\mathbf{t}^{\mathbf{a}}w)(\mathbf{t}^{\mathbf{c}}w)^{-1}\in H$ that is, if and only if
$\mathbf{a}-\mathbf{c} \in \langle \mathbf{b_1}, \ldots ,\mathbf{b_{m'}}\rangle \leqslant
\ZZ^m$. This can be checked by just solving a system of linear equations; and, in the
affirmative case, we can easily find an expression for $g$ in terms of $\{
\mathbf{t}^\mathbf{b_1}, \ldots ,\mathbf{t}^\mathbf{b_{m'}},\, \mathbf{t}^\mathbf{a_1} u_1,
\ldots ,\mathbf{t}^\mathbf{a_{n'}} u_{n'}\}$, like at the end of the previous proof.
Finally, it only remains to convert this into an expression of $g$ in terms of $\{
h_1,\ldots ,h_p \}$ using the expressions we already have for the basis elements in terms
of the $h_i$'s.
\end{proof}
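The final linear-algebra step of the previous proof, deciding whether $\mathbf{a}-\mathbf{c}$ belongs to $\langle \mathbf{b_1}, \ldots ,\mathbf{b_{m'}}\rangle \leqslant \ZZ^m$, can be sketched as follows; this is only an illustration, assuming the abelian basis has already been put in row-echelon form (for instance by the row-reduction sketched above), and the function name is ours.
\begin{verbatim}
def in_lattice(echelon_basis, v):
    # echelon_basis: rows with strictly increasing pivot columns and zeros
    # to the left of each pivot; reduce v greedily by each pivot row.
    v = list(v)
    for row in echelon_basis:
        col = next(k for k, x in enumerate(row) if x)    # pivot column
        if v[col] % row[col] != 0:
            return False
        q = v[col] // row[col]
        v = [vi - q * ri for vi, ri in zip(v, row)]
    return not any(v)

# is (3, 1, 7) in the lattice spanned by (1, 0, 2) and (0, 1, 1)?
print(in_lattice([[1, 0, 2], [0, 1, 1]], [3, 1, 7]))     # True
\end{verbatim}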
\begin{cor}\label{cor:membership}
The membership problem for arbitrary free-abelian times free groups is solvable.
\end{cor}
\begin{proof}
We have $G=Z\mbox{type\,(I)}mes F$, where $Z=\langle T\rangle$ is an arbitrary free-abelian group and
$F=\langle X\rangle$ is an arbitrary free group. Given elements $g,\, h_1,\ldots ,h_p \in
G$, let $\{ t_1,\ldots ,t_m\}$ (resp. $\{ x_1,\ldots ,x_n \}$) be the finite set of letters
in $T$ (resp. in $X$) used by them. Obviously all these elements, as well as the subgroup
$H=\langle h_1,\ldots ,h_p\rangle$, live inside $\langle t_1,\ldots ,t_m\rangle \mbox{type\,(I)}mes
\langle x_1,\ldots ,x_n\rangle \simeq \ZZ^m \mbox{type\,(I)}mes F_n$ and we can restrict our attention to
this finitely generated environment. Proposition~\ref{lem:Membership problem per Fn x Z^m}
completes the proof.
\end{proof}
\goodbreak
To conclude this section, let us introduce some notation that will be useful later. Let $H$
be a finitely generated subgroup of $G=\ZZ^m \mbox{type\,(I)}mes F_n$, and consider a basis for $H$,
\begin{equation}
\{\mathbf{t}^\mathbf{b_1}, \ldots ,\mathbf{t}^\mathbf{b_{m'}} ,\, \mathbf{t}^\mathbf{a_1} u_1, \ldots ,
\mathbf{t}^\mathbf{a_{n'}} u_{n'} \},
\end{equation}
where $0\leqslant m'\leqslant m$, $\{ \mathbf{b_1}, \ldots ,\mathbf{b_{m'}}\}$ is an
abelian basis of $H\cap \ZZ^m \leqslant \ZZ^m$, ${0\leqslant n'}$, $\mathbf{a_1}, \ldots,
\mathbf{a_{n'}} \in \ZZ^m$, and $\{ u_1, \ldots, u_{n'} \}$ is a free basis of $H\pi
\leqslant F_n$. Let $L=\langle \mathbf{b_1}, \ldots, \mathbf{b_{m'}}\rangle \leqslant
\ZZ^m$ (with additive notation, i.e.\ these are true vectors with $m$ integral coordinates
each), and let us denote by $\mathbf{A}$ the $n'\times m$ integral matrix whose rows are
the $\mathbf{a_i}$'s,
$$
\mathbf{A}=\left( \begin{array}{c} \mathbf{a_{1}} \\ \vdots \\ \mathbf{a_{n'}} \end{array}
\right ) \in \mathcal{M}_{n' \times m}(\ZZ).
$$
If $\omega$ is a word on $n'$ letters (i.e.\ an element of the abstract free group
$F_{n'}$), we will denote by $\omega(u_1,\ldots,u_{n'})$ the element of $H\pi$ obtained by
replacing the $i$-th letter in $\omega$ by $u_i$, $i\in [n']$. And we shall use boldface,
$\boldsymbol\omega$, to denote the abstract abelianization of $\omega$, which is an
integral vector with $n'$ coordinates, $\boldsymbol\omega \in \ZZ^{n'}$ (not to be confused
with the image of $\omega(u_1,\ldots ,u_{n'})\in F_n$ under the abelianization map $F_n
\twoheadrightarrow \ZZ^{n}$). Straightforward calculations provide the following result.
\begin{lem} \label{lem:descripcio d'un subgrup fg en termes d'una base}
With the previous notations, we have
\begin{equation*}
H=\{ \mathbf{t}^{\mathbf{a}} \, \omega (u_1, \ldots, u_{n'}) \mid \omega \in F_{n'} ,\mathbf{a} \in \boldsymbol\omega \mathbf{A} + L \},
\end{equation*}
a convenient description of $H$. \qed
\end{lem}
\begin{defn}\label{def:complecio abeliana}
Given a subgroup $H\leqslant \ZZ^m \times F_n$, and an element $w\in F_n$, we define the
\emph{abelian completion of $w$ in $H$} as
$$
\mathcal{C}_{w,H} = \{ \mathbf{a}\in \ZZ^m \mid \mathbf{t}^{\mathbf{a}} w \in H\} \subseteq \ZZ^m.
$$
\end{defn}
\begin{cor}~\label{cor:propietats complecio abeliana}
With the above notation, for every $w\in F_n$ we have
\begin{enumerate}
\item [\emph{(i)}] if $w \not\in H \pi$, then $\mathcal{C}_{w,H} = \emptyset$,
\item [\emph{(ii)}] if $w \in H \pi$, then $\mathcal{C}_{w,H} = \boldsymbol\omega \mathbf{A} +
L$, where $\boldsymbol\omega$ is the abelianization of the word $\omega$ which
expresses $w\in F_n$ in terms of the free basis $\{ u_1,\ldots,u_{n'} \}$ (i.e.\
$w=\omega(u_1,\ldots,u_{n'})$; note the difference between $w$ and $\omega$).
\end{enumerate}
Hence, $\mathcal{C}_{w,H} \subseteq \ZZ^m$ is either empty or an affine variety with direction $L$
(i.e.\ a coset of $L$). \qed
\end{cor}
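For instance, for the subgroup $H=\langle t^2,\, ta^2\rangle$ of $\ZZ \times F_2=\langle t \mid \, \rangle \times \langle a,b \mid \, \rangle$ (which will reappear in Section~\ref{sec:CIP}), we have $H\cap \ZZ=\langle t^2\rangle$, so $L=2\ZZ$, and $H\pi =\langle a^2\rangle$ with $u_1=a^2$ and $\mathbf{A}=(1)$; hence $\mathcal{C}_{a^6,H}=3\cdot 1+2\ZZ$ is the set of odd integers, while $\mathcal{C}_{a^5,H}=\emptyset$ because $a^5\notin H\pi$.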
\goodbreak
\section{The three Dehn problems}\label{Dehn}
We shall dedicate the following sections to solving several algorithmic problems in $G=\ZZ^m
\times F_n$. The general scheme will be to reduce the problem to the analogous problem on
each part, $\ZZ^m$ and $F_n$, and then apply the vast existing literature for free-abelian
and free groups. In some cases, the solutions for the free-abelian and free parts will
naturally build up a solution for $G$, while in some others the interaction between the two
will be more intricate and sophisticated; everything depends on how complicated the
relation between the free-abelian and free parts becomes with respect to the problem at hand.
From the algorithmic point of view, the statement ``let $G$ be a group'' is not
sufficiently precise. The algorithmic behavior of $G$ may depend on how it is given to us.
For free-abelian times free groups, we will always assume that they are finitely generated
and given to us with the standard presentation~(\ref{eq:pres F_n x Z^m}). We will also
assume that the elements, subgroups, homomorphisms and other objects associated with the
group are given to us in terms of this presentation.
As a first application of the existence and computability of bases for finitely generated
subgroups of $G$, we already solved the membership problem (see
Corollary~\ref{cor:membership}), which includes the word problem. This last one, together
with the conjugacy problem, is quite elementary because of the existence of
algorithmically computable normal forms for the elements in $G$. The third of Dehn's
problems is also easy within our family of groups.
\begin{prop}
Let $G=\ZZ^m \times F_n$. Then
\begin{itemize}
\item[\emph{(i)}] the word problem for $G$ is solvable,
\item[\emph{(ii)}] the conjugacy problem for $G$ is solvable,
\item[\emph{(iii)}] the isomorphism problem is solvable within the family of finitely
generated free-abelian times free groups.
\end{itemize}
\end{prop}
\begin{proof}
As seen above, every element from $G$ has a normal form, easily computable from an
arbitrary expression in terms of the generators. Once in normal form,
$\mathbf{t}^{\mathbf{a}}u$ equals 1 if and only if $\mathbf{a}=\mathbf{0}$ and $u$ is the
empty word. And $\mathbf{t}^{\mathbf{a}}u$ is conjugate to $\mathbf{t}^{\mathbf{b}}v$ if
and only if $\mathbf{a}=\mathbf{b}$ and $u$ and $v$ are conjugate in $F_n$. This solves the
word and conjugacy problems in $G$.
For the isomorphism problem, let $\langle X \mid R \, \rangle$ and $\langle Y \mid S \,
\rangle$ be two arbitrary finite presentations of free-abelian times free groups $G$ and
$G'$ (i.e.\ we are given two arbitrary finite presentations plus the information that both
groups are free-abelian times free). So, both $G$ and $G'$ admit presentations of the
form~(\ref{eq:pres F_n x Z^m}), say $\mathcal{P}_{n,m}$ and $\mathcal{P}_{n',m'}$, for some
integers $m,n,m',n'\geqslant 0$, $n,n'\neq 1$ (unknown at the beginning). It is well known
that two finite presentations present the same group if and only if they are connected by a
finite sequence of Tietze transformations (see~\cite{lyndon_combinatorial_2001}); so, there
exist finite sequences of Tietze transformations, one from $\langle X \mid R \,\rangle$ to
$\mathcal{P}_{n,m}$, and another from $\langle Y \mid S \,\rangle$ to $\mathcal{P}_{n',m'}$
(again, unknown at the beginning). Let us start two diagonal procedures exploring,
respectively, the tree of all possible Tietze transformations successively applicable to
$\langle X \mid R \, \rangle$ and $\langle Y \mid S \, \rangle$. Because of what was just
said above, both procedures will necessarily reach presentations of the desired form in
finite time. Once the parameters $m,n,m',n'$ are known, we apply Observation~\ref{prop:caract
Fn x Z^m} and conclude that $\langle X \mid R \,\rangle$ and $\langle Y \mid S \,\rangle$
are isomorphic if and only if $n=n'$ and $m=m'$. (This is a brute force algorithm, very far
from being efficient from a computational point of view.)
\end{proof}
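To make the normal-form computation underlying the word problem concrete, here is a minimal Python sketch; the encoding of words as lists of pairs (generator, exponent) and the function names are ours, and are not part of the argument above.
\begin{verbatim}
def normal_form(word, m):
    # word: list of pairs (generator, exponent), with generators 't1'..'tm'
    # (central) and 'x1'..'xn' (free); returns (a, w) with a in Z^m and
    # w a freely reduced word, encoded as a run-length list of pairs.
    a = [0] * m
    w = []
    for gen, exp in word:
        if exp == 0:
            continue
        if gen.startswith('t'):                # central letters: just add up
            a[int(gen[1:]) - 1] += exp
        elif w and w[-1][0] == gen:            # merge with the previous block
            merged = w[-1][1] + exp
            w.pop()
            if merged != 0:
                w.append((gen, merged))
        else:
            w.append((gen, exp))
    return a, w

def equal_elements(word1, word2, m):
    # word problem: equality of normal forms
    return normal_form(word1, m) == normal_form(word2, m)

# t1 x1 t2 x1^-1 x2 t1^-1  =  t2 x2   in  Z^2 x F_2
lhs = [('t1', 1), ('x1', 1), ('t2', 1), ('x1', -1), ('x2', 1), ('t1', -1)]
print(equal_elements(lhs, [('t2', 1), ('x2', 1)], m=2))    # True
\end{verbatim}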
\goodbreak
\section{Finite index subgroups}\label{fi}
In this section, the goal is to find an algorithm solving the Finite Index Problem in a
free-abelian times free group $G$:
\begin{problem}[\textbf{Finite Index Problem, $\operatorname{FIP}(G)$}]
Given a finite list $w_1,\ldots,w_s$ of elements in $G$, decide whether the subgroup
$H=\langle w_1,\ldots,w_s \rangle$ is of finite index in $G$ and, if so, compute the index
and a system of right (or left) coset representatives for $H$.
\end{problem}
To start, we recall that this same algorithmic problem is well known to be solvable both
for free-abelian and for free groups. Given several vectors $\mathbf{w_1},\ldots
,\mathbf{w_s}\in \ZZ^m$, the subgroup $H=\langle \mathbf{w_1},\ldots ,\mathbf{w_s}\rangle$
is of finite index in $\ZZ^m$ if and only if it has rank $m$. And here is an algorithm to
make such a decision, and (in the affirmative case) to compute the index $[\ZZ^m : H]$ and a set of coset
representatives for $H$: consider the $s\times m$ integral matrix
$\mathbf{W}$ whose rows are the $\mathbf{w_i}$'s, and compute its Smith normal form, i.e.\
$\mathbf{PW}=\operatorname{\mathbf{diag}}(d_1, d_2, \ldots ,d_r,0,\ldots ,0)\mathbf{Q}$,
where $\mathbf{P}\in \operatorname{GL}_s(\ZZ)$, $\mathbf{Q}\in \operatorname{GL}_m(\ZZ)$, $d_1,\ldots ,d_r$ are
non-zero integers each dividing the following one, $d_1 \divides d_2 \divides \cdots
\divides d_r \neq 0$, the diagonal matrix has size $s\times m$, and
$r=\operatorname{rk}(\mathbf{W})\leqslant \min \{s,m\}$ (fast algorithms are well known to compute all
these from $\mathbf{W}$, see~\cite{artin_algebra_2010} for details). Now, if $r<m$ then
$[\ZZ^m : H]=\infty$ and we are done. Otherwise, $H$ is the subgroup generated by the rows
of $\mathbf{W}$ (and so by those of $\mathbf{PW}$), i.e.\ the image under the automorphism
$\mathbf{Q}\colon \ZZ^m \to \ZZ^m$, $\mathbf{v}\mapsto \mathbf{vQ}$, of the subgroup $H'$ generated
by the vectors $(d_1,0,\ldots ,0), \ldots ,(0,\ldots ,0,d_m)$. It is clear that
$[\ZZ^m : H']=d_1d_2\cdots d_m$, with $\{ (r_1, \ldots , r_m) \mid r_i \in [d_i]\}$ being a
set of coset representatives for $H'$. Hence, $[\ZZ^m : H]=d_1d_2\cdots d_m$ as well, with
$\{ (r_1, \ldots , r_m)\mathbf{Q} \mid r_i \in [d_i]\}$ being a set of coset
representatives for $H$.
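The Smith normal form route just described also produces coset representatives. If one only needs to decide finiteness of the index and compute its value, the following minimal Python sketch suffices; rather than computing the full Smith normal form, it uses the classical fact that the gcd of all $m\times m$ minors of $\mathbf{W}$ (the $m$-th determinantal divisor) equals $d_1\cdots d_m=[\ZZ^m :H]$ when the rank is $m$, and is $0$ otherwise. The function names are ours.
\begin{verbatim}
from itertools import combinations
from math import gcd

def det(M):
    # integer determinant by cofactor expansion (fine for small m)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def index_in_Zm(gens, m):
    # gcd of all m x m minors; it is 0 (encoded as None) exactly when the
    # rank is smaller than m, i.e. when the index is infinite.
    d = 0
    for rows in combinations(gens, m):
        d = gcd(d, abs(det([list(r) for r in rows])))
    return d if d != 0 else None

print(index_in_Zm([(2, 0), (1, 3), (0, 6)], 2))    # 6
\end{verbatim}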
On the other hand, the subgroup $H=\langle w_1,\ldots,w_s \rangle\leqslant F_n$ has finite
index if and only if every vertex of the core of the Schreier graph for $H$, denoted
$\mathcal{S}(H)$, is complete (i.e.\ has degree $2n$); this is algorithmically checkable
by means of fast algorithms. And, in this case, the labels of paths in a chosen maximal
tree $T$ from the basepoint to each vertex (resp. from each vertex to the basepoint) give a
set of left (resp. right) coset representatives for $H$, whose index in $F_n$ is then the
number of vertices of $\mathcal{S}(H)$. For details, see~\cite{stallings_topology_1983} for
the classical reference or \cite{kapovich_stallings_2002} for a more modern and
combinatorial approach.
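Assuming the folded core graph $\mathcal{S}(H)$ has already been computed (folding is standard and omitted here; see the references above), the completeness check and the extraction of a transversal along a spanning tree can be sketched as follows. This is only an illustration: the dictionary encoding and function names are ours.
\begin{verbatim}
from collections import deque

def finite_index_data(graph, letters, base):
    # graph[v][letter] = w encodes the positively labelled edges of the
    # (already folded) core graph; completeness means that every vertex
    # has one outgoing and one incoming edge for every letter.
    vertices = list(graph)
    for letter in letters:
        out = [graph[v].get(letter) for v in vertices]
        if None in out or sorted(out) != sorted(vertices):
            return None                         # not complete: infinite index
    reps = {base: []}                           # path label from base to v
    queue = deque([base])
    while queue:                                # BFS spanning tree
        v = queue.popleft()
        for letter in letters:
            w = graph[v][letter]                # forward edge v --letter--> w
            if w not in reps:
                reps[w] = reps[v] + [letter]
                queue.append(w)
            u = next(u for u in vertices if graph[u][letter] == v)
            if u not in reps:                   # backward edge u --letter--> v
                reps[u] = reps[v] + [letter.upper()]   # upper case = inverse
                queue.append(u)
    return len(vertices), list(reps.values())

# folded core graph of H = <a^2, b, a b a^-1> <= F_2 = <a, b>
graph = {0: {'a': 1, 'b': 0}, 1: {'a': 0, 'b': 1}}
print(finite_index_data(graph, ['a', 'b'], base=0))    # (2, [[], ['a']])
\end{verbatim}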
Hence, $\operatorname{FIP}(\mathbb{Z}^m)$ and $\operatorname{FIP}(F_n)$ are solvable. In order to build an algorithm to
solve the same problem in $\ZZ^m\times F_n$, we shall need some well known basic facts
about indices of subgroups that we state in the following two lemmas. For a subgroup
$H\leqslant G$ of an arbitrary group $G$, we will write~${H\leqslant\operatorname{\!_{f.i.}} G}$ to denote
$[G:H]<\infty$.
\begin{lem} \label{lem:index de subgrups a traves de epimorfismes}
Let $G$ and $G'$ be arbitrary groups, $\rho \colon G\twoheadrightarrow G'$ an epimorphism
between them, and let $H\leqslant G$ and $H'\leqslant G'$ be arbitrary subgroups. Then,
\begin{itemize}
\item[\emph{(i)}] $[G':H\rho]\leqslant [G:H]$; in particular, if $H\leqslant\operatorname{\!_{f.i.}} G$ then
${H\rho \leqslant\operatorname{\!_{f.i.}} G'}$.
\item [\emph{(ii)}] $[G':H']=[G:H'\rho^{-1}]$; in particular, $H'\leqslant\operatorname{\!_{f.i.}} G'$ if and
only if ${H'\rho^{-1} \leqslant\operatorname{\!_{f.i.}} G}$. \qed
\end{itemize}
\end{lem}
\begin{lem} \label{lem:index producte directe}
Let $Z$ and $F$ be arbitrary groups, and let $H\leqslant Z\times F$ be a subgroup of their
direct product. Then
$$
[Z\times F:H]\leqslant [Z:H\cap Z]\cdot [F:H\cap F] ,
$$
and
$$
H\leqslant\operatorname{\!_{f.i.}} Z\times F\ \Leftrightarrow \ H\cap Z\leqslant\operatorname{\!_{f.i.}} Z \text{ and } H\cap F\leqslant\operatorname{\!_{f.i.}} F.
$$
\end{lem}
\begin{proof}
It is straightforward to check that the map
\begin{equation} \label{eq:index producte directe}
\begin{array}{rcl}
Z/(H\cap Z)\ \times \ F/(H\cap F) & \rightarrow & (Z\times F)/H \\[3pt]
\bigl( z\cdot (H\cap Z)\, ,\, f\cdot (H\cap F) \bigr) & \mapsto & zf\cdot H
\end{array}
\end{equation}
is well defined and onto; the inequality and one implication follow immediately. The other
implication is a well-known fact.
\end{proof}
Let $G=\ZZ^m \times F_n$, and let $H$ be a subgroup of $G$. If $H\leqslant\operatorname{\!_{f.i.}} G$ then,
applying Lemma~\ref{lem:index de subgrups a traves de epimorfismes}~(i) to the canonical
projections $\tau \colon G\twoheadrightarrow \ZZ^m$ and $\pi \colon G\twoheadrightarrow
F_n$, we have that both indices $[\ZZ^m : H\tau]$ and $[F_n :H\pi]$ must also be finite.
Since we can effectively compute generators for $H\pi$ and for $H\tau$, and we can decide
whether $H\tau \leqslant\operatorname{\!_{f.i.}} \ZZ^m$ and $H\pi \leqslant\operatorname{\!_{f.i.}} F_n$ hold, we have two
effectively checkable necessary conditions for $H$ to be of finite index in $G$: if either
$[\ZZ^m :H\tau]$ or $[F_n :H\pi]$ is infinite, then so is $[G:H]$.
Nevertheless, these two necessary conditions together are not sufficient to ensure
finiteness of $[G:H]$, as the following easy example shows: take $H=\langle sa, tb\rangle$,
a subgroup of $G=\ZZ^2 \times F_2 = \langle s,t \mid [s,t]\rangle \times \langle a,b \mid
\, \rangle$. It is clear that $H\tau =\ZZ^2$ and $H\pi =F_2$ (so, both indices are 1), but
the index $[\ZZ^2 \times F_2 :H]$ is infinite because no power of $a$ belongs to $H$.
Note that $H\cap \ZZ^m \leqslant H\tau \leqslant \ZZ^m$ and $H\cap F_n \leqslant H\pi
\leqslant F_n$ and, according to Lemma~\ref{lem:index producte directe}, the genuinely
necessary and sufficient conditions for $H$ to be of finite index in $G$ are
\begin{equation}\label{cond f.i.}
H\leqslant\operatorname{\!_{f.i.}} G \,\, \Leftrightarrow \,\, \begin{cases} H\cap \ZZ^m \leqslant\operatorname{\!_{f.i.}} \ZZ^m, \\ H\cap F_n \leqslant\operatorname{\!_{f.i.}} H\pi, \mbox{ and } H\pi \leqslant\operatorname{\!_{f.i.}} F_n,
\end{cases}
\end{equation}
both stronger than $H\tau \leqslant\operatorname{\!_{f.i.}} \ZZ^m$ and $H\pi \leqslant\operatorname{\!_{f.i.}} F_n$ respectively
(and none of them satisfied in the example above). This is the main observation which leads
to the following result.
\goodbreak
\begin{thm} \label{th:problema de index finit}
\index{problema de l'\'{\i}ndex finit!decidibilitat i c\`{a}lcul}
The Finite Index Problem for $\ZZ^m \times F_n$ is solvable.
\end{thm}
\begin{proof}
From the given generators for $H$, we start by computing a basis of $H$ (see
Proposition~\ref{prop:bases algorismiques}),
$$
\{ \mathbf{t}^\mathbf{b_1}, \ldots , \mathbf{t}^\mathbf{b_{m'}},\, \mathbf{t}^\mathbf{a_1} u_1, \ldots
,\mathbf{t}^\mathbf{a_{n'}} u_{n'} \},
$$
where $0\leqslant m'\leqslant m$, $0\leqslant n'\leqslant p$, $L=\langle \mathbf{b_1},
\ldots , \mathbf{b_{m'}}\rangle \simeq \ZZ^{m'}$ with abelian basis~$\{ \mathbf{b_1}, \ldots
, \mathbf{b_{m'}} \}$, $\mathbf{a_1}, \ldots, \mathbf{a_{n'}} \in \ZZ^m$, and $H\pi
=\langle u_1, \ldots, u_{n'} \rangle \simeq F_{n'}$ with free basis~$\{ u_1, \ldots,
u_{n'}\}$. As above, let us write $\mathbf{A}$ for the ${n'\times m}$ integral matrix whose
rows are $\mathbf{a_i} \in \ZZ^m$, $i\in[n']$.
Note that $L=\langle \mathbf{b_1}, \ldots, \mathbf{b_{m'}}\rangle \simeq H\cap \ZZ^m$ (with
the natural isomorphism $\mathbf{b}\mapsto \mathbf{t^b}$, changing the notation from
additive to multiplicative). Hence, the first necessary condition in~(\ref{cond f.i.}) is
$\operatorname{rk}(L)=m$, i.e.\ $m'=m$. If this is not the case, then $[G:H]=\infty$ and we are done.
So, let us assume $m'=m$ and compute a set of (right) coset representatives for $L$ in
$\ZZ^m$, say $\ZZ^m =\mathbf{c_1}L\sqcup \cdots \sqcup \mathbf{c_r}L$.
Next, check whether $H\pi=\langle u_1, \ldots ,u_{n'}\rangle$ has finite index in $F_n$ (by
computing the core of the Schreier graph of $H\pi$, and checking whether it is complete or
not). If this is not the case, then $[G:H]=\infty$ and we are done as well. So, let us
assume $H\pi \leqslant\operatorname{\!_{f.i.}} F_n$, and compute a set of right coset representatives for
$H\pi$ in $F_n$, say $F_n =v_1(H\pi)\sqcup \cdots \sqcup v_s(H\pi)$.
According to~(\ref{cond f.i.}), it only remains to check whether the inclusion $H\cap F_n
\leqslant H\pi$ has finite or infinite index. Call $\rho \colon F_{n'} \twoheadrightarrow
\ZZ^{n'}$ the abstract abelianization map for the free group of rank $n'$ (with free basis
$\{ u_1,\ldots ,u_{n'}\}$), and $A\colon \ZZ^{n'} \to \ZZ^m$ the linear mapping
$\mathbf{v}\mapsto \mathbf{v}\mathbf{A}$ corresponding to right multiplication by the
matrix $\mathbf{A}$. Note that
$$
H\cap F_n = \{ w\in F_n \mid \mathbf{0}\in \mathcal{C}_{w,H} \} =\{ w\in F_n \mid \boldsymbol\omega \mathbf{A} \in L
\} \leqslant H\pi,
$$
where $\boldsymbol\omega =\omega \rho$ is the abelianization of the word $\omega$ which
expresses $w$ in the free basis $\{ u_1,\ldots,u_{n'} \}$ of $H\pi$, i.e.\ $F_n \ni
w=\omega(u_1,\ldots ,u_{n'})$, see Corollary~\ref{cor:propietats complecio abeliana}. Thus,
$H\cap F_n$ is, in terms of the free basis $\{ u_1,\ldots ,u_{n'}\}$, the successive full
preimage of $L$, first by the map $A$ and then by the map $\rho$, namely
$(L)A^{-1}\rho^{-1}$, see the following diagram:
\begin{equation} \label{eq:diagrama index finit}\index{problema de l'\'{\i}ndex finit!diagrama}
\begin{aligned}
\xy
(-21,0)*+{H \pi};
(-21,-9)*+{H \cap F_n};
(-21,-5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(0,-5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(23,-5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(41,-5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(-12,-9)*+{\simeq};
(-12,0)*+{\simeq};
(-27.5,0)*+{\geqslant};
(-33,0)*+{F_n};
{\ar@{->>}^-{\rho} (0,0)*+++{F_{n'}}; (23,0)*+++{\ZZ^{n'}}};
{\ar^-{A} (23,0)*++++{}; (41,0)*+{\ZZ^m}};
{\ar@{|->} (41,-9)*++{L}; (23,-9)*++{(L)A^{-1}}};
{\ar@{|->} (23,-9)*+++++++{}; (0,-9)*+{(L)A^{-1} \rho^{-1} }};
\endxy \\[-15pt]
\end{aligned}
\end{equation}
Hence, using Lemma~\ref{lem:index de subgrups a traves de epimorfismes}~(ii), $[H\pi
:H\cap F_n]=[F_{n'}:(L)A^{-1}\rho^{-1}]$ is finite if and only if $[\ZZ^{n'}:(L)A^{-1}]$ is
finite. And this happens if and only if $\operatorname{rk}((L)A^{-1})=n'$. Since
$\operatorname{rk}((L)A^{-1})=\operatorname{rk}((L\cap \operatorname{Im}(A))A^{-1})=\operatorname{rk}(L\cap \operatorname{Im}(A))+\operatorname{rk}(\ker(A))$, we can
immediately check whether this rank equals $n'$, or not. If this is not the case, then
$[H\pi :H\cap F_n]=[F_{n'}: (L)A^{-1}\rho^{-1}]=[\ZZ^{n'}:(L)A^{-1}]=\infty$ and we are
done. Otherwise, $(L)A^{-1}\leqslant\operatorname{\!_{f.i.}} \ZZ^{n'}$ and so, $H\cap F_n \leqslant\operatorname{\!_{f.i.}} H\pi$
and $H\leqslant\operatorname{\!_{f.i.}} G$.
Finally, suppose $H\leqslant\operatorname{\!_{f.i.}} G$ and let us explain how to compute a set of right coset
representatives for $H$ in $G$ (and so, the actual value of the index $[G:H]$). Having
followed the algorithm described above, we have $\ZZ^m =\mathbf{c_1}L\sqcup \cdots \sqcup
\mathbf{c_r}L$ and $F_n =v_1(H\pi)\sqcup \cdots \sqcup v_s(H\pi)$. Furthermore, from the
situation in the previous paragraph, we can compute a set of (right) coset representatives
for $(L)A^{-1}$ in~$\ZZ^{n'}$, which can be easily converted (see Lemma~\ref{lem:index de
subgrups a traves de epimorfismes}~(ii)) into a set of right coset representatives for
$H\cap F_n$ in~$H\pi$, say $H\pi =w_1(H\cap F_n)\sqcup \cdots \sqcup w_t(H\cap F_n)$.
Hence, $F_n =\bigsqcup_{j\in [s]} \bigsqcup_{k\in [t]} v_jw_k (H\cap F_n)$, and $[F_n :
H\cap F_n]=st$. Combining this with $\ZZ^m =\bigsqcup_{i\in [r]}\mathbf{t^{c_i}}(H\cap
\ZZ^m)$, and using the map in the proof of Lemma~\ref{lem:index producte directe}, we get
$G=\ZZ^m \times F_n =\bigcup_{i\in [r]}\bigcup_{j\in [s]}\bigcup_{k\in [t]}
\mathbf{t^{c_i}}v_jw_k H$.
It only remains to perform a cleaning process on the family of $rst$ elements $\{
\mathbf{t^{c_i}}v_jw_k \mid {i\in [r]},\, {j\in [s]},\, {k\in [t]}\}$ to eliminate possible
duplications as representatives of right cosets of $H$ (this can be easily done by several
applications of the membership problem for $H$, see Corollary~\ref{cor:membership}). After
this cleaning process, we get a genuine set of right coset representatives for $H$ in $G$,
and the actual value of~$[G:H]$ (which is at most $rst$).
Finally, inverting all of them we will get a set of left coset representatives for~$H$
in~$G$.
\end{proof}
Regarding the computation of the index $[G:H]$, we remark that the inequality among indices
in Lemma~\ref{lem:index producte directe} may be strict, i.e.\ $[G:H]$ may be strictly less
than~$rst$, as the following example shows.
\begin{exm} \label{exm:contraexemple igualtat indexs}
Let $G=\ZZ^2 \times F_2 =\langle s,t \mid [s,t] \,\rangle \times \langle a,b \mid
\,\rangle$ and consider the (normal) subgroups $H=\langle s, t^2, a, b^2, bab\rangle$ and
$H'=\langle s, t^2, a, b^2, bab, tb\rangle =\langle s, t^2, a, tb\rangle$ of $G$ (with
bases $\{ s, t^2, a, b^2, bab\}$ and $\{ s, t^2, a, tb\}$, respectively). We have $H\cap
\ZZ^2 =H'\cap \ZZ^2 =\langle s , t^2 \rangle \leqslant_{2} \ZZ^2$, and $H\cap F_2 =H'\cap
F_2 =\langle a, b^2, bab\rangle \leqslant_{2} F_2$, but
$$
[\ZZ^2 \times F_2:H]=4=[\ZZ^2 :H\cap \ZZ^2]\cdot [F_2 :H\cap F_2] ,
$$
while
$$
[\ZZ^2 \times F_2:H']=2<4=[\ZZ^2 :H'\cap \ZZ^2]\cdot [F_2 :H'\cap F_2],
$$
with (right) coset representatives $\{1, b, t, tb\}$ and $\{1, t\}$, respectively. This
shows that both the equality and the strict inequality can occur in Lemma~\ref{lem:index
producte directe}.
\end{exm}
\section{The coset intersection problem and Howson's property} \label{sec:CIP}
Consider the following two related algorithmic problems in an arbitrary group~$G$:
\begin{problem}[\textbf{Subgroup Intersection Problem, $\operatorname{SIP}(G)$}]
Given finitely generated subgroups $H$ and $H'$ of $G$ (by finite sets of generators),
decide whether the intersection $H\cap H'$ is finitely generated and, if so, compute a set
of generators for it.
\end{problem}
\begin{problem}[\textbf{Coset Intersection Problem, $\operatorname{CIP}(G)$}]
Given finitely generated subgroups $H$ and $H'$ of $G$ (by finite sets of generators), and
elements $g,g'\in G$, decide whether the right cosets $gH$ and $g'H'$ intersect trivially
or not; and in the negative case (i.e.\ when $gH\cap g'H'=g''(H\cap H')$), compute such
a~$g''\in G$.
\end{problem}
A group $G$ is said to have the \emph{Howson property} if the intersection of every pair
(and hence every finite family) of finitely generated subgroups ${H,H'\leqslant\operatorname{\!_{f.g.}} G}$ is
again finitely generated, ${H\cap H'\leqslant\operatorname{\!_{f.g.}} G}$.
It is obvious that $\ZZ^m$ satisfies the Howson property, since every subgroup is free-abelian
of rank less than or equal to $m$ (and so, finitely generated). Moreover, $\operatorname{SIP}(\ZZ^m)$ and
$\operatorname{CIP}(\ZZ^m)$ just reduce to solving standard systems of linear equations.
The case of free groups is more interesting. Howson himself established in 1954 that $F_n$
also satisfies the Howson property, see~\cite{howson_intersection_1954}. Since then, there
have been several improvements of this result in the literature, both lowering the upper
bounds for the rank of the intersection, and simplifying the arguments used.
The modern point of view is based on the pull-back technique for graphs: one can
algorithmically represent subgroups of $F_n$ by the core of their Schreier graphs, and the
graph corresponding to $H\cap H'$ is the pull-back of the graphs corresponding to $H$ and
$H'$, easily constructible from them. This not only confirms Howson's property for $F_n$
(namely, the pull-back of finite graphs is finite) but, more importantly, it provides the
algorithmic aspect into the topic by solving $\operatorname{SIP}(F_n)$. And, more generally, an easy
variation of these arguments using pullbacks also solves $\operatorname{CIP}(F_n)$, see Proposition~6.1
in~\cite{bogopolski_orbit_2009}.
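As an illustration of the pull-back technique just mentioned, here is a minimal Python sketch of the fibre product of two folded core graphs; extracting the connected component of the pair of base vertices and trimming its valence-one vertices (so as to obtain the core graph of the intersection of the two subgroups) are standard further steps omitted here. The dictionary encoding and function names are ours.
\begin{verbatim}
def pullback(graph1, graph2, letters):
    # Fibre product of two folded core graphs (positive edges only):
    # there is an edge labelled `letter` from (u1,u2) to (v1,v2) exactly
    # when u1 --letter--> v1 in graph1 and u2 --letter--> v2 in graph2.
    product = {}
    for u1 in graph1:
        for u2 in graph2:
            edges = {}
            for letter in letters:
                if letter in graph1[u1] and letter in graph2[u2]:
                    edges[letter] = (graph1[u1][letter], graph2[u2][letter])
            product[(u1, u2)] = edges
    return product

# H = <a^2, b, a b a^-1> and H' = <a^3, b> inside F_2 = <a, b>;
# the component of the base pair (0, 0), suitably trimmed, is the core
# graph of the intersection of the two subgroups.
g1 = {0: {'a': 1, 'b': 0}, 1: {'a': 0, 'b': 1}}
g2 = {0: {'a': 1, 'b': 0}, 1: {'a': 2}, 2: {'a': 0}}
print(pullback(g1, g2, ['a', 'b'])[(0, 0)])   # {'a': (1, 1), 'b': (0, 0)}
\end{verbatim}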
Baumslag~\cite{baumslag_intersections_1966} established, as a generalization of Howson's
result, the preservation of the Howson property under free products, i.e.\ if $G_1$ and $G_2$
satisfy the Howson property then so does $G_1 *G_2$. Although it may seem counterintuitive,
the same result fails dramatically when the free product is replaced by a direct product. And
one can find an extremely simple counterexample to this in the family of free-abelian
times free groups; the following observation is folklore (it appears
in~\cite{burns_intersection_1998} attributed to Moldavanski, and as the solution to
exercise~23.8(3) in~\cite{bogopolski_introduction_2008}).
\begin{obs} \label{prop:Fn x Z^n no Howson}
The group $\ZZ^m \times F_n$, for $m\geqslant 1$ and $n\geqslant 2$, does not satisfy the
Howson property.
\end{obs}
\begin{proof}
In $\ZZ \times F_2 =\langle t \mid \, \rangle \times \langle a,b \mid \, \rangle$, consider
the (finitely generated) subgroups $H=\langle a,b \rangle$ and $H'=\langle ta,b \rangle$.
Clearly,
\begin{align*}
H\cap H' & = \{ w(a,b) \mid w\in F_2\} \cap \{ w(ta,b) \mid w\in F_2\} \\ & = \{ w(a,b) \mid w\in F_2\} \cap \{ t^{|w|_a}w(a,b) \mid w\in F_2\} \\ & = \{ t^0 w(a,b) \mid w\in F_2,\,\, |w|_a=0 \} \\ & = \llangle b\rrangle_{F_2} =\langle a^{-k} b a^{k} ,\, k\in \ZZ \rangle,
\end{align*}
where $|w|_a$ is the total $a$-exponent of $w$ (i.e.\ the first coordinate of the
abelianization $\mathbf{w}\in \ZZ^2$ of $w\in F_2$). It is well known that the normal
closure of $b$ in $F_2$ is not finitely generated, hence $\ZZ \times F_2$ does not satisfy
the Howson property. Since $\ZZ \times F_2$ embeds in $\ZZ^m \times F_n$ for all
$m\geqslant 1$ and $n\geqslant 2$, the group $\ZZ^m \times F_n$ does not have this property
either.
\end{proof}
We remark that the subgroups $H$ and $H'$ in the previous counterexample are both
isomorphic to $F_2$. So, interestingly, the above is a situation where two free groups of
rank 2 have a non-finitely generated (of course, free) intersection. This does not
contradict the Howson property for free groups, but rather indicates that one cannot embed
$H$ and $H'$ simultaneously into a free subgroup of $\ZZ \times F_2$.
In the present section, we shall solve $\operatorname{SIP}(\ZZ^m\times F_n)$ and $\operatorname{CIP}(\ZZ^m\times F_n)$.
The key point is Corollary~\ref{cor:H fg sii Hpi fg}\,: $H\cap H'$ is finitely generated if
and only if $(H\cap H')\pi \leqslant F_n$ is finitely generated. Note that the group $H\pi
\cap H'\pi$ is always finitely generated (by the Howson property of $F_n$), but the inclusion
$(H\cap H')\pi \leqslant H\pi \cap H'\pi$ is not (in general) an equality (for example, in
$\ZZ \times F_2 =\langle t \mid \, \rangle \times \langle a,b \mid \, \rangle$, the
subgroups $H=\langle t^2, ta^2 \rangle$ and $H'=\langle t^2, t^{2}a^3\rangle$ satisfy $a^6
\in H\pi \cap H'\pi$ but $a^6 \not\in (H\cap H')\pi$, since the elements of $H$ with free
part $a^6$ are exactly the $t^{2j+3}a^6$, $j\in \ZZ$, while those of $H'$ are the
$t^{2j}a^6$, $j\in \ZZ$). This opens the possibility for $(H\cap H')\pi$, and so $H\cap H'$,
to be non-finitely generated, as is the case in the example from
Observation~\ref{prop:Fn x Z^n no Howson}.
Let us describe in detail the data involved in $\operatorname{CIP}(G)$ for $G=\ZZ^m\times F_n$. By
Proposition~\ref{prop:bases algorismiques}, we can assume that the initial finitely
generated subgroups $H, H'\leqslant G$ are given by respective bases, i.e.\ by two sets of
elements
\begin{equation}\label{ee'}
\begin{aligned}
E &=\{ \mathbf{t}^\mathbf{b_1}, \ldots , \mathbf{t}^\mathbf{b_{m_1}}, \mathbf{t}^\mathbf{a_1} u_1,\ldots ,
\mathbf{t}^\mathbf{a_{n_1}} u_{n_1}\}, \\ E' &=\{ \mathbf{t}^\mathbf{b'_1}, \ldots ,\mathbf{t}^\mathbf{b'_{m_2}},
\mathbf{t}^\mathbf{a'_1} u'_1, \ldots ,\mathbf{t}^\mathbf{a'_{n_2}}u'_{n_2}\},
\end{aligned}
\end{equation}
where $\{ u_1, \ldots, u_{n_1} \}$ is a free basis of $H\pi \leqslant F_n$, $\{ u'_1,
\ldots, u'_{n_2} \}$ is a free basis of $H'\pi \leqslant F_n$, $\{ \mathbf{t}^\mathbf{b_1},
\ldots ,\mathbf{t}^\mathbf{b_{m_1}} \}$ is an abelian basis of $H\cap \ZZ^m$, and $\{
\mathbf{t}^\mathbf{b'_1}, \ldots ,\mathbf{t}^\mathbf{b'_{m_2}} \}$ is an abelian basis of
$H'\cap \ZZ^m$. Consider the subgroups $L=\langle \mathbf{b_1}, \ldots,
\mathbf{b_{m_1}}\rangle \leqslant \ZZ^m$ and $L'=\langle \mathbf{b'_1}, \ldots,
\mathbf{b'_{m_2}} \rangle \leqslant \ZZ^m$, and the matrices
$$
\mathbf{A} = \left( \begin{matrix} \mathbf{a_{1}} \\ \vdots \\ \mathbf{a_{n_1}} \end{matrix}
\right) \in \mathcal{M}_{n_1 \times m}(\ZZ) \quad \text{ and } \quad \mathbf{A'} =\left(
\begin{matrix} \mathbf{a'_{1}} \\ \vdots \\ \mathbf{a'_{n_2}} \end{matrix} \right) \in
\mathcal{M}_{n_2 \times m}(\ZZ).
$$
We are also given two elements $g=\mathbf{t^{a}}u$ and $g'=\mathbf{t^{a'}}u'$ from $G$, and
have to algorithmically decide whether the intersection $gH\cap g'H'$ is empty or not.
Before starting to describe the algorithm, note that $H\pi$ is a free group of rank $n_1$.
Since $\{ u_1, \ldots, u_{n_1}\}$ is a free basis of $H\pi$, every element $w\in H\pi$ can
be written in a unique way as a word on the $u_i$'s, say $w=\omega (u_1,\ldots ,u_{n_1})$.
Abelianizing this word, we get the abelianization map $\rho_1 \colon H\pi
\twoheadrightarrow \ZZ^{n_1}$, $w\mapsto \boldsymbol{\omega}$ (not to be confused with the
restriction to $H\pi$ of the ambient abelianization $F_n \twoheadrightarrow \ZZ^n$, which
will have no role in this proof). Similarly, we define the morphism $\rho_2 \colon H'\pi
\twoheadrightarrow \ZZ^{n_2}$.
\goodbreak
With all this data given, note that $gH\cap g'H'$ is empty if and only if its projection to
the free component is empty,
$$
gH\cap g'H'=\emptyset \,\, \Leftrightarrow \,\, (gH\cap g'H')\pi =\emptyset ;
$$
so, it will be enough to study this last projection. And, since this projection contains
precisely those elements from $(gH)\pi \cap (g'H')\pi =(u\cdot H\pi)\cap (u'\cdot H'\pi)$
having compatible abelian completions in $gH\cap g'H'$, a direct application of
Lemma~\ref{lem:descripcio d'un subgrup fg en termes d'una base} gives the following result.
\begin{lem}\label{lem:descr proj lliure int cosets Fn x Zm}
With the above notation, the projection $(gH\cap g'H')\pi$ consists precisely of those
elements $v\in (u\cdot H\pi) \cap (u'\cdot H'\pi)$ such that
\begin{equation}\label{eq:condicio lineal inicial}
N_v =\left( \mathbf{a}+\boldsymbol{\omega}\mathbf{A}+L\right) \cap \left(
\mathbf{a'}+\boldsymbol{\omega'}\mathbf{A'}+L' \right) \neq \emptyset,
\end{equation}
where $\boldsymbol{\omega}=w\rho_1$ and $\boldsymbol{\omega'}=w'\rho_2$ are, respectively,
the abelianizations of the abstract words $\omega \in F_{n_1}$ and $\omega'\in F_{n_2}$
expressing $w=u^{-1}v\in H\pi\leqslant F_n$ and $w'=u'^{\,-1}v\in H'\pi\leqslant F_n$ in
terms of the free bases $\{ u_1,\ldots, u_{n_1} \}$ and $\{ u'_1,\ldots, u'_{n_2} \}$
(i.e.\ $u\cdot \omega(u_1,\ldots, u_{n_1})=v=u'\cdot \omega'(u'_1, \ldots, u'_{n_2})$).
That is,
\begin{equation}\label{eq:descr proj lliure int cosets Fn x Zm}
(gH\cap g'H')\pi = \{ v\in (u\cdot H\pi)\cap (u'\cdot H'\pi) \mid N_v \neq \emptyset \, \} \subseteq (u\cdot H\pi) \cap (u'\cdot H'\pi) \tag*{\qed}
\end{equation}
\end{lem}
\begin{thm}\label{thm:cip}
The Coset Intersection Problem for $\ZZ^m \times F_n$ is solvable.
\end{thm}
\begin{proof}
Let $G=\ZZ^m \times F_n$ be a finitely generated free-abelian times free group. Using the
solution to $\operatorname{CIP}(F_n)$, we start by checking whether $(u\cdot H\pi )\cap (u'\cdot H'\pi)$
is empty or not. In the first case $(gH\cap g'H')\pi$, and so $gH\cap g'H'$, will also be
empty and we are done. Otherwise, we can compute $v_0\in F_n$ such that
\begin{equation}\label{inter}
(u\cdot H \pi )\cap (u'\cdot H'\pi )=v_0 \cdot (H\pi \cap H'\pi),
\end{equation}
compute words $\omega_0 \in F_{n_1}$ and $\omega'_0\in F_{n_2}$ such that $u\cdot
\omega_0(u_1,\ldots, u_{n_1})=v_0 =u'\cdot \omega'_0(u'_1,\ldots, u'_{n_2})$, and compute a
free basis, $\{ v_1, \ldots ,v_{n_3}\}$, for $H\pi \cap H'\pi$ together with expressions of
the $v_i$'s in terms of the free bases for $H\pi$ and $H'\pi$, $v_i =\nu_i(u_1,\ldots,
u_{n_1})=\nu'_i(u'_1,\ldots, u'_{n_2})$, $i\in [n_3 ]$.
Let $\rho_3 \colon H\pi \cap H'\pi \twoheadrightarrow \ZZ^{n_3}$ be the corresponding
abelianization map. Abelianizing the words $\nu_i$ and $\nu'_i$, we can compute the rows of
the matrices $\mathbf{P}$ and $\mathbf{P}'$ (of sizes $n_3\times n_1$ and $n_3\times n_2$,
respectively) describing the abelianizations of the inclusion maps $H\pi
\overset{\iota}{\hookleftarrow} H\pi \cap H'\pi \overset{\iota'}{\hookrightarrow} H'\pi$,
see the central part of the diagram~(\ref{xypic:esquema interseccio cosets Fn x Z^m})
below.
By~(\ref{inter}), $u^{-1}v_0 \in H\pi$ and $u'^{-1}v_0\in H'\pi$. So, left translation by
$w_0=u^{-1}v_0$ is a permutation of $H\pi$ (not a homomorphism, unless $w_0=1$), say
$\lambda_{w_0} \colon H\pi \to H\pi$, $x\mapsto w_0x=u^{-1}v_0x$. Analogously, we have the
left translation by $w'_0=u'^{-1}v_0$, say $\lambda_{w_0'} \colon H'\pi \to H'\pi$,
$x\mapsto w_0'x=u'^{-1}v_0x$. We include these translations in our diagram:
\begin{equation} \label{xypic:esquema interseccio cosets Fn x Z^m}
\begin{aligned}
\xy
(0,5)*+{\rotatebox[origin=c]{270}{$\leqslant$}};
(0,10)*{(H \cap H') \pi};
(0,0)*+{H \pi \cap H' \pi}; (-25,0)*+{H \pi}; (25,0)*+{H' \pi};
(-50,0)*+{H \pi}; (50,0)*+{H' \pi};
{\ar@{_(->}_-{\iota} (0,0)*++++++++++{}; (-25,0)*++++{}};
{\ar_-{\lambda_{w_0}} (-25,0)*+++++{}; (-50,0)*++++{}};
{\ar@{^(->}^-{\iota'} (0,0)*++++++++++{}; (25,0)*++++{}};
{\ar^-{\lambda_{w'_0}} (25,0)*+++++{}; (50,0)*++++{}};
(0,-20)*+{\ZZ^{n_3}}; (-25,-20)*+{\ZZ^{n_1}}; (25,-20)*+{\ZZ^{n_2}};
(-50,-20)*+{\ZZ^{n_1}}; (50,-20)*+{\ZZ^{n_2}};
{\ar@{->>}^-{\rho_3} (0,0)*+++{}; (0,-20)*+++{}};
{\ar@{->>}_-{\rho_1} (-25,0)*+++{}; (-25,-20)*+++{}};
{\ar@{->>}_-{\rho_1} (-50,0)*+++{}; (-50,-20)*+++{}};
{\ar@{->>}^-{\rho_2} (25,0)*+++{}; (25,-20)*+++{}};
{\ar@{->>}^-{\rho_2} (50,0)*+++{}; (50,-20)*+++{}};
(-12.5,-10)*+{///};(-37.5,-10)*+{///};(12.5,-10)*+{///};(37.5,-10)*+{///};
(0,-20)*+{\ZZ^{n_3}}; (-25,-20)*+{\ZZ^{n_1}}; (25,-20)*+{\ZZ^{n_2}};
{\ar_-{\mathbf{P}} (0,-20)*+++++{}; (-25,-20)*++++{}};
{\ar_-{+\boldsymbol{\omega_0}} (-25,-20)*+++++{}; (-50,-20)*++++{}};
{\ar^-{\mathbf{P}'} (0,-20)*+++++{}; (25,-20)*++++{}};
{\ar^-{+\boldsymbol{\omega'_0}} (25,-20)*+++++{}; (50,-20)*++++{}};
(0,-40)*+{\ZZ^{m}};
{\ar^-{\mathbf{A}} (-25,-20)*++++{}; (0,-40)*++++{}};
{\ar_-{\mathbf{A'}} (25,-20)*++++{}; (0,-40)*++++{}};
{\ar_-{\mathbf{A}} (-50,-20)*++++{}; (0,-40)*++++{}};
{\ar^-{\mathbf{A'}} (50,-20)*++++{}; (0,-40)*++++{}};
\endxy
\end{aligned}
\end{equation}
where $\boldsymbol{\omega_0}=w_0\rho_1 \in \ZZ^{n_1}$ and
$\boldsymbol{\omega'_0}=w'_0\rho_2 \in \ZZ^{n_2}$ are the abelianizations of $w_0$ and
$w'_0$ with respect to the free bases $\{ u_1, \ldots ,u_{n_1}\}$ and $\{ u'_1, \ldots
,u'_{n_2}\}$, respectively.
Now, for every $v\in (u\cdot H\pi)\cap (u'\cdot H'\pi)$, using Lemma~\ref{lem:descr proj
lliure int cosets Fn x Zm} and the commutativity of the upper part of the above diagram, we
have
\begin{align*}
N_v & = \left( \mathbf{a}+(u^{-1}v)\rho_1\mathbf{A}+L\right) \cap \left( \mathbf{a'}
+(u'^{-1}v) \rho_2 \mathbf{A'} +L' \right) \\
& = \left( \mathbf{a}+(v_0^{-1}v)\iota \lambda_{w_0}\rho_1\mathbf{A}+L\right) \cap
\left( \mathbf{a'}+(v_0^{-1}v)\iota' \lambda_{w'_0}\rho_2 \mathbf{A'} +L' \right) \\
& = \left( \mathbf{a}
+(\boldsymbol{\omega_0}+(v_0^{-1}v)\rho_3\mathbf{P})\mathbf{A}+L\right) \cap \left(
\mathbf{a'}+ (\boldsymbol{\omega'_0}+(v_0^{-1}v) \rho_3\mathbf{P'})\mathbf{A'} +L' \right) \\
& = \left( \mathbf{a}+\boldsymbol{\omega_0} \mathbf{A} +(v_0^{-1}v)\rho_3
\mathbf{PA}+L\right) \cap \left( \mathbf{a'}+\boldsymbol{\omega'_0} \mathbf{A'}+(v_0^{-1}v)
\rho_3\mathbf{P'A'} +L' \right).
\end{align*}
With this expression, we can characterize, in a computable way, which elements from
$(u\cdot H\pi )\cap (u'\cdot H'\pi )$ do belong to $(gH\cap g'H')\pi$:
\begin{lem}\label{th:projeccio interseccio de cosets de Fn x Zm}
With the current notation we have
\begin{equation}\label{eq:calcul projeccio interseccio cosets}
(gH\cap g'H')\pi =M\rho_3^{-1}\lambda_{v_0} \subseteq (u\cdot H\pi )\cap (u'\cdot H'\pi ),
\end{equation}
where $M\subseteq \ZZ^{n_3}$ is the preimage by the linear mapping $\mathbf{PA-P'A'}\colon
\ZZ^{n_3} \to \ZZ^m$ of the linear variety
\begin{equation}\label{eq:variedad lineal de Z^m}
N=\mathbf{a'-a}+\boldsymbol{\omega'_0} \mathbf{A'}-\boldsymbol{\omega_0} \mathbf{A}+(L+L')
\subseteq \ZZ^m.
\end{equation}
\end{lem}
\begin{proof}
By Lemma~\ref{lem:descr proj lliure int cosets Fn x Zm}, an element $v\in (u\cdot H\pi)
\cap (u'\cdot H'\pi)$ belongs to $(gH\cap g'H')\pi$ if and only if $N_v\neq \emptyset$.
That is, if and only if the vector $\mathbf{x}=(v_0^{-1}v)\rho_3 \in \ZZ^{n_3}$ satisfies
that the two varieties $\mathbf{a}+\boldsymbol{\omega_0} \mathbf{A}+\mathbf{xPA}+L$ and
$\mathbf{a'}+\boldsymbol{\omega'_0} \mathbf{A'}+\mathbf{xP'A'} +L'$ do intersect. But this
happens if and only if the vector
$$
\big( \mathbf{a}+\boldsymbol{\omega_0} \mathbf{A}+\mathbf{xPA} \big) - \big(\mathbf{a'}+
\boldsymbol{\omega'_0} \mathbf{A'}+\mathbf{xP'A'} \big) =\mathbf{a-a'}+\boldsymbol{\omega_0}
\mathbf{A} -\boldsymbol{\omega'_0} \mathbf{A'}+\mathbf{x(PA-P'A')}
$$
belongs to $L+L'$. That is, if and only if $\mathbf{x(PA-P'A')}$ belongs to $N$. Hence, $v$
belongs to $(gH\cap g'H')\pi$ if and only if $\mathbf{x}=(v_0^{-1}v)\rho_3 \in M$, i.e.\ if
and only if $v\in M\rho_3^{-1}\lambda_{v_0}$.
\end{proof}
With all the data already computed, we explicitly have the variety $N$ and, using standard
linear algebra, we can compute $M$ (which could be empty, because $N$ may be
disjoint from the image of $\mathbf{PA-P'A'}$). In this situation, the algorithmic decision
on whether $gH\cap g'H'$ is empty or not is straightforward.
\begin{lem}\label{cor:caracteritzacio interseccio cosets buida}
With the current notation, and assuming that $(u\cdot H \pi) \cap (u'\cdot H' \pi) \neq
\emptyset$, the following are equivalent:
\begin{itemize}
\item[\emph{(a)}] $gH\cap g'H'=\emptyset$,
\item[\emph{(b)}] $(gH\cap g'H')\pi =\emptyset$,
\item[\emph{(c)}] $M\rho_3^{-1}=\emptyset$,
\item[\emph{(d)}] $M=\emptyset$,
\item[\emph{(e)}] $N\cap \operatorname{Im}(\mathbf{PA-P'A'})=\emptyset$. \qed
\end{itemize}
\end{lem}
If $gH\cap g'H'=\emptyset$, we are done. Otherwise, $N\cap
\operatorname{Im}(\mathbf{PA-P'A'})\neq\emptyset$ and we can compute a vector $\mathbf{x}\in \ZZ^{n_3}$
such that $\mathbf{x}(\mathbf{PA-P'A'})\in N$. Take now any preimage of $\mathbf{x}$ by
$\rho_3$, for example $v_1^{x_1}\cdots\, v_{n_3}^{x_{n_3}}$ if $\mathbf{x}=(x_1,\ldots
,x_{n_3})$, and by (\ref{eq:calcul projeccio interseccio cosets}),
$u''=v_0v_1^{x_1}\cdots\, v_{n_3}^{x_{n_3}}\in (gH\cap g'H')\pi$.
It only remains to find $\mathbf{a''}\in \ZZ^m$ such that $g''=\mathbf{t^{a''}}u'' \in
gH\cap g'H'$. To do this, observe that $u'' \in (gH\cap g'H')\pi$ implies the existence of
a vector $\mathbf{a''}$ such that $\mathbf{t^{a''}}u''\in \mathbf{t^a}uH \cap
\mathbf{t^{a'}}u'H'$, i.e.\ such that $\mathbf{t^{a''-a}}u^{-1}u''\in H$ and
$\mathbf{t^{a''-a'}}u'^{-1}u''\in H'$. In other words, there exists a vector
$\mathbf{a''}\in \ZZ^m$ such that $\mathbf{a''-a}\in \mathcal{C}_{u^{-1}u'',H}$ and
$\mathbf{a''-a'}\in \mathcal{C}_{u'^{-1}u'',H'}$. That is, the affine varieties
$\mathbf{a}+\mathcal{C}_{u^{-1}u'',H}$ and $\mathbf{a'}+\mathcal{C}_{u'^{-1}u'',H'}$ do intersect. By
Corollary~\ref{cor:propietats complecio abeliana}, we can compute equations for these two
varieties, and compute a vector in their intersection. This is the $\mathbf{a''}\in \ZZ^m$ we
are looking for.
\end{proof}
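In concrete terms, condition~(e) of Lemma~\ref{cor:caracteritzacio interseccio cosets buida} is a lattice-membership test: $N\cap \operatorname{Im}(\mathbf{PA-P'A'})=\emptyset$ if and only if the vector $\mathbf{a'-a}+\boldsymbol{\omega'_0} \mathbf{A'}-\boldsymbol{\omega_0} \mathbf{A}$ does \emph{not} belong to the subgroup of $\ZZ^m$ generated by the rows of $\mathbf{PA-P'A'}$ together with generating vectors of $L$ and of $L'$. The following minimal Python sketch (the function names are ours, and the code only illustrates this particular test, not the rest of the algorithm) decides such a membership by integer row reduction. Producing an explicit witness (the vector $\mathbf{x}$ above, or the vector $\mathbf{a''}$ at the end of the proof) additionally requires keeping track of the row operations, which we omit here.
\begin{verbatim}
# Minimal sketch of the lattice-membership test behind condition (e).
# Matrices are lists of integer rows; the lattice is their set of
# integer combinations.  No external libraries are needed.

def z_row_echelon(rows):
    """Staircase basis of the row lattice, via integer row operations."""
    rows = [list(map(int, r)) for r in rows if any(r)]
    if not rows:
        return []
    n = len(rows[0])
    basis, col = [], 0
    while rows and col < n:
        nz = [r for r in rows if r[col] != 0]
        rest = [r for r in rows if r[col] == 0]
        while len(nz) > 1:                      # gcd elimination in column col
            nz.sort(key=lambda r: abs(r[col]))
            p, new_nz = nz[0], [nz[0]]
            for r in nz[1:]:
                q = r[col] // p[col]
                r2 = [a - q * b for a, b in zip(r, p)]
                if r2[col] != 0:
                    new_nz.append(r2)
                elif any(r2):
                    rest.append(r2)
            nz = new_nz
        if nz:
            basis.append(nz[0])
        rows, col = rest, col + 1
    return basis

def in_row_lattice(rows, target):
    """Is target an integer combination of the given rows?"""
    b = list(map(int, target))
    for r in z_row_echelon(rows):
        c = next(i for i, x in enumerate(r) if x != 0)   # pivot column
        if b[c] % r[c] != 0:
            return False
        q = b[c] // r[c]
        b = [x - q * y for x, y in zip(b, r)]
    return all(x == 0 for x in b)

def coset_intersection_is_empty(T, L_gens, Lp_gens, n0):
    """Condition (e): is N = n0 + (L+L') disjoint from Im(T), T = PA - P'A'?"""
    return not in_row_lattice(list(T) + list(L_gens) + list(Lp_gens), n0)
\end{verbatim}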
The above argument, applied to the case where $g=g'=1$, gives us valuable information
about the subgroup intersection $H\cap H'$; this will allow us to solve $\operatorname{SIP}(\ZZ^m \times
F_n)$ as well. Note that, in this case, $\mathbf{a}=\mathbf{a'}=\mathbf{0}$, $u=u'=1$ and
so, $v_0=1$, $w_0=w'_0=1$, and $\boldsymbol{\omega_0}=\boldsymbol{\omega'_0}=\mathbf{0}$.
\goodbreak
\begin{thm}\label{thm:ip}
The Subgroup Intersection Problem for $\ZZ^m \times F_n$ is solvable.
\end{thm}
\begin{proof}
Let $G=\ZZ^m \times F_n$ be a finitely generated free-abelian times free group. As in the
proof of Theorem~\ref{thm:cip}, we can assume that the initial finitely generated subgroups
$H, H'\leqslant G$ are given by respective bases, i.e.\ by two sets of elements like
in~(\ref{ee'}), $E=\{ \mathbf{t}^\mathbf{b_1}, \ldots ,\mathbf{t}^\mathbf{b_{m_1}},
\mathbf{t}^\mathbf{a_1} u_1,\ldots , \mathbf{t}^\mathbf{a_{n_1}} u_{n_1}\}$ and $E'=\{
\mathbf{t}^\mathbf{b'_1}, \ldots ,\mathbf{t}^\mathbf{b'_{m_2}}, \mathbf{t}^\mathbf{a'_1}
u'_1, \ldots ,\mathbf{t}^\mathbf{a'_{n_2}}u'_{n_2}\}$. Consider the subgroups $L,
L'\leqslant \ZZ^m$ and the matrices $\mathbf{A}\in \mathcal{M}_{n_1 \times m}(\ZZ)$ and
$\mathbf{A'}\in \mathcal{M}_{n_2 \times m}(\ZZ)$ as above. We shall algorithmically decide
whether the intersection $H\cap H'$ is finitely generated or not and, in the affirmative
case, shall compute a basis for $H\cap H'$.
Let us apply the algorithm from the proof of Theorem~\ref{thm:cip} to the cosets $1\cdot H$
and $1\cdot H'$; that is, take $g=g'=1$, i.e.\ $u=u'=1$ and
$\mathbf{a}=\mathbf{a'}=\mathbf{0}$. Of course, $H\cap H'$ is not empty, and $v_0=1$ serves
as an element in the intersection, $v_0\in H\cap H'$. With this choice, the algorithm works
with $w_0=w'_0=1$ and $\boldsymbol{\omega_0}=\boldsymbol{\omega'_0}=\mathbf{0}$ (so, we can
forget the two translation parts in diagram~\eqref{xypic:esquema interseccio cosets Fn x Z^m}).
Lemma~\ref{th:projeccio interseccio de cosets de Fn x Zm} tells us that $(H\cap
H')\pi =M\rho_3^{-1}\leqslant H\pi \cap H'\pi$, where $M$ is the preimage by the linear
mapping $\mathbf{PA-P'A'}\colon \ZZ^{n_3}\to \ZZ^{m}$ of the subspace $N=L+L'\leqslant
\ZZ^m$. In this situation, the following lemma decides when $H\cap H'$ is finitely
generated and when it is not:
\begin{lem}\label{int-fg}
With the current notation, the following are equivalent:
\begin{itemize}
\item[\emph{(a)}] $H\cap H'$ is finitely generated,
\item[\emph{(b)}] $(H\cap H')\pi$ is finitely generated,
\item[\emph{(c)}] $M\rho_3^{-1}$ is either trivial or of finite index in $H\pi \cap
H'\pi$,
\item[\emph{(d)}] either $n_3=1$ and $M=\{ \mathbf{0}\}$, or $M$ is of finite index in
$\ZZ^{n_3}$,
\item[\emph{(e)}] either $n_3=1$ and $M=\{ \mathbf{0}\}$, or $\operatorname{rk}(M)=n_3$.
\end{itemize}
\end{lem}
\begin{proof}
(a) $\Leftrightarrow$ (b) is in Corollary~\ref{cor:H fg sii Hpi fg}. (b) $\Leftrightarrow$
(c) comes from the well-known fact (see, for example, \cite{lyndon_combinatorial_2001},
pp.~16--18) that, in the finitely generated free group $H\pi \cap H'\pi$, the subgroup
$(H\cap H')\pi =M\rho_3^{-1}$ is normal and hence finitely generated if and only if it is
either trivial or of finite index. But, by Lemma~\ref{lem:index de subgrups a traves de
epimorfismes}~(ii), the index $[H\pi \cap H'\pi : M\rho_3^{-1}]$ is finite if and only if
$[\ZZ^{n_3} : M]$ is finite; this gives (c) $\Leftrightarrow$ (d). The last equivalence is
a basic fact in linear algebra.
\end{proof}
We have computed $n_3$ and an abelian basis for $M$. If $n_3=0$ we immediately deduce that
$H\cap H'$ is finitely generated. If $n_3=1$ and $M=\{ \mathbf{0}\}$ we also deduce that
$H\cap H'$ is finitely generated. Otherwise, we check whether $\operatorname{rk}(M)$ equals $n_3$; if
this is the case then again $H\cap H'$ is finitely generated; if not, $H\cap H'$ is
infinitely generated.
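As a toy illustration (the function name and encoding are ours), this case analysis can be transcribed directly; since $M$ is given by an abelian basis, $\operatorname{rk}(M)$ is simply the size of that basis:
\begin{verbatim}
def intersection_is_fg(n3, M_basis):
    """Finite generation of H cap H' (Lemma above), from n3 and an abelian basis of M."""
    rk_M = len(M_basis)            # M_basis is a basis, so this is rk(M)
    if n3 == 0:
        return True                # then (H cap H')pi is trivial
    if n3 == 1 and rk_M == 0:
        return True                # n3 = 1 and M = {0}
    return rk_M == n3              # finite-index case
\end{verbatim}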
It only remains to algorithmically compute a basis for $H\cap H'$, in case it is finitely
generated. We know from~\eqref{eq:factoritzacio subgrup} that
$$
H\cap H'=\bigl((H\cap H')\cap \ZZ^m \bigr)\, \times \, (H\cap H')\pi \alpha,
$$
where $\alpha$ is any splitting for $\pi_{\mid H\cap H'} \colon H\cap H'\twoheadrightarrow
(H\cap H')\pi$; then we can easily get a basis of $H\cap H'$ by putting together a basis of
each part. The strategy will be the following: first, we compute an abelian basis for
$$
(H\cap H')\cap \ZZ^m =(H\cap \ZZ^m)\cap (H'\cap \ZZ^m)=L\cap L'
$$
by just solving a system of linear equations. Second, we shall compute a free basis for
$(H\cap H')\pi$. And finally, we will construct an explicit splitting $\alpha$ and will use
it to get a free basis for $(H\cap H')\pi \alpha$. Putting together these two parts, we
shall be done.
To compute a free basis for $(H\cap H')\pi$ note that, if $n_3=0$, or $n_3=1$ and $M=\{
\mathbf{0}\}$, then $(H\cap H')\pi =1$ and there is nothing to do. In the remaining case,
$\operatorname{rk}(M)=n_3\geqslant 1$, $M\rho_3^{-1}=(H\cap H')\pi$ has finite index in $H\pi \cap
H'\pi$, and so it is finitely generated. We give two alternative options to compute a free
basis for it.
The subgroup $M$ has finite index in $\ZZ^{n_3}$, and we can compute a system of coset
representatives of $\ZZ^{n_3}$ modulo $M$,
$$
\ZZ^{n_3}=M\mathbf{c_1}\sqcup \cdots \sqcup M\mathbf{c_d}
$$
(see the beginning of Section~\ref{fi}). Now, since $\rho_3$ is onto, and according to
Lemma~\ref{lem:index de subgrups a traves de epimorfismes}~(b), we can transfer the
previous partition via $\rho_3$ to obtain a system of right coset representatives of $H\pi
\cap H'\pi$ modulo $M\rho_3^{-1}$:
\begin{equation}\label{eq:classes modul M rho_3^(-1)}
H\pi \cap H'\pi =(M\rho_3^{-1})z_1\sqcup \cdots \sqcup (M\rho_3^{-1})z_d,
\end{equation}
where we can take, for example, $z_i =v_1^{c_{i,1}} v_2^{c_{i,2}}\cdots\,
v_{n_3}^{c_{i,n_3}}\in H\pi \cap H'\pi$, for each vector $\mathbf{c_i}=(c_{i,1},
c_{i,2},\ldots, c_{i,n_3}) \in \ZZ^{n_3}$, $i\in [d]$. Now let us construct the core of the
Schreier graph for $M\rho_3^{-1}=(H\cap H')\pi$ (with respect to $\{v_1, \ldots
,v_{n_3}\}$, a free basis for $H\pi \cap H'\pi$), $\mathcal{S}(M\rho_3^{-1})$, in the
following way: consider the graph with the cosets of~\eqref{eq:classes modul M rho_3^(-1)}
as vertices, and with no edge. Then, for every vertex $(M\rho_3^{-1})z_i$ and every letter
$v_j$, add an edge labeled $v_j$ from $(M\rho_3^{-1})z_i$ to $(M \rho_3^{-1})z_i v_j$,
algorithmically identified among the available vertices by repeatedly using the
\emph{membership problem} for $M\rho_3^{-1}$ (note that we can do this by abelianizing the
candidate and checking the defining equations for $M$). Once we have run over all $i,j$, we
shall get the full graph $\mathcal{S}(M\rho_3^{-1})$, from which we can easily obtain a
free basis for~$(H\cap H')\pi$ in terms of~$\{v_1, \ldots ,v_{n_3}\}$.
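To make the construction concrete, here is a minimal Python sketch of it (our own encoding, not part of the paper's formal content). Words in $v_1,\ldots ,v_{n_3}$ are represented as lists of signed indices ($+j$ for $v_j$ and $-j$ for $v_j^{-1}$), and, to keep the membership test for $M$ trivial, we assume that the abelian basis of $M$ has already been put in upper triangular form with positive diagonal entries (which can always be achieved by integer row reduction):
\begin{verbatim}
# Minimal sketch: Schreier graph of M rho_3^{-1} in the free group on v_1..v_k,
# for a finite-index subgroup M of Z^k given by an upper triangular basis with
# positive diagonal.  Words are lists of signed indices (+j = v_j, -j = v_j^{-1}).
from collections import deque

def reduce_mod_M(vec, M_rows):
    """Canonical representative of vec modulo the row lattice of M_rows."""
    v = list(vec)
    for r in M_rows:
        c = next(i for i, x in enumerate(r) if x != 0)   # pivot (diagonal) column
        q = v[c] // r[c]
        v = [a - q * b for a, b in zip(v, r)]
    return tuple(v)

def schreier_free_basis(M_rows, k):
    """Coset graph of Z^k modulo M, plus a free basis of M rho_3^{-1}."""
    origin = reduce_mod_M([0] * k, M_rows)
    path = {origin: []}                 # Schreier transversal: coset -> word
    tree = set()                        # tree edges (coset, letter)
    queue = deque([origin])
    while queue:                        # BFS over cosets, following v_j-edges
        x = queue.popleft()
        for j in range(k):
            y = list(x); y[j] += 1
            y = reduce_mod_M(y, M_rows)
            if y not in path:
                path[y] = path[x] + [j + 1]
                tree.add((x, j))
                queue.append(y)
    basis = []
    for x in path:                      # non-tree edges give the free basis
        for j in range(k):
            if (x, j) in tree:
                continue
            y = list(x); y[j] += 1
            y = reduce_mod_M(y, M_rows)
            basis.append(path[x] + [j + 1] + [-g for g in reversed(path[y])])
    return path, basis

# toy example: k = 2 and M generated by (2,0), (0,1)  (index 2 in Z^2)
if __name__ == "__main__":
    cosets, basis = schreier_free_basis([[2, 0], [0, 1]], 2)
    print(len(cosets), "cosets;", len(basis), "free generators")  # 2 cosets; 3 free generators
\end{verbatim}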
Alternatively, let $\{ \mathbf{m_1}, \ldots ,\mathbf{m_{n_3}}\}$ be an abelian basis for
$M$ (which we already have from the previous construction), say $\mathbf{m_i}=(m_{i,1},
m_{i,2},\ldots, m_{i,n_3}) \in \ZZ^{n_3}$, $i=1,\ldots ,n_3$, and consider the elements
$x_i =v_1^{m_{i,1}} v_2^{m_{i,2}}\cdots\, v_{n_3}^{m_{i,n_3}}\in H\pi \cap H'\pi$. It is
clear that $M\rho_3^{-1}$ is the subgroup of $H\pi \cap H'\pi$ generated by $x_1, \ldots
,x_{n_3}$ together with the (infinitely many) commutators of elements of $H\pi \cap H'\pi$. But
$M\rho_3^{-1}$ is finitely generated, so finitely many of those commutators are enough.
Enumerate all of them, $y_1, y_2, \ldots$ and keep computing the core $\mathcal{S}_j$ of
the Schreier graph for the subgroup $\langle x_1, \ldots , x_{n_3}, y_1, \ldots
,y_j\rangle$ for increasing $j$'s until obtaining a full graph with $d$ vertices (i.e.\
until reaching a subgroup of index $d$). When this happens, we shall have computed the core
of the Schreier graph for $M\rho_3^{-1}=(H\cap H')\pi$ (with respect to $\{v_1, \ldots
,v_{n_3}\}$, a free basis of $H\pi \cap H'\pi$), from which we can easily find a free basis
for $(H\cap H')\pi$, in terms of~$\{v_1, \ldots ,v_{n_3}\}$.
Finally, it remains to compute an explicit splitting $\alpha$ for $\pi_{\mid H\cap H'}
\colon H\cap H'\twoheadrightarrow (H\cap H')\pi$. We have a free basis $\{ z_1,\ldots ,z_d
\}$ for $(H\cap H')\pi$, in terms of $\{ v_1,\ldots ,v_{n_3}\}$; so, using the expressions
$v_i =\nu_i (u_1,\ldots, u_{n_1})$ that we have from the beginning of the proof, we can get
expressions $z_i =\eta_i (u_1,\ldots ,u_{n_1})$. From here, $\eta_i
(\mathbf{t^{a_1}}u_1,\ldots ,\mathbf{t^{a_{n_1}}}u_{n_1}) =\mathbf{t^{e_i}} z_i \in H$ and
projects to $z_i$, so $\mathcal{C}_{z_i,\, H}=\mathbf{e_i}+L$ (see
Corollary~\ref{cor:propietats complecio abeliana}), $i\in [d]$. Similarly, we can get
vectors $\mathbf{e'_i}\in \mathbb{Z}^m$ such that $\mathcal{C}_{z_i,\,
H'}=\mathbf{e'_i}+L'$. Since, by construction, $\mathcal{C}_{z_i, H\cap H'}
=\mathcal{C}_{z_i, H}\cap \mathcal{C}_{z_i, H'}$ is a non-empty affine variety in
$\mathbb{Z}^m$ with direction $L\cap L'$, we can compute vectors $\mathbf{e''_i}\in
\mathbb{Z}^m$ on it by just solving the corresponding systems of linear equations, $i\in
[d]$. Now, $z_i \mapsto \mathbf{t^{e''_i}}z_i$ is the desired splitting $H\cap
H'\stackrel{\alpha}{\leftarrow} (H\cap H')\pi$, and $\{ \mathbf{t^{e''_1}}z_1,\, \ldots
,\mathbf{t^{e''_d}}z_d \}$ is the free basis for $(H\cap H')\pi\alpha$ we were looking for.
As mentioned above, putting together this free basis with the abelian basis we already have
for $L\cap L'$, we get a basis for $H\cap H'$, concluding the proof.
\end{proof}
\begin{cor} \label{cor:interseccio subgrups lliures no abelians de rang finit}
Let $H,H'$ be two free non-abelian subgroups of finite rank in $\ZZ^m \times F_n $. With
the previous notation, the intersection $H\cap H'$ is finitely generated if and only if
either $H\cap H'=1$, or~${\mathbf{PA}=\mathbf{P'A'}}$.
\end{cor}
\begin{proof}
Under the conditions of the statement, we have $L=L'=\{ \mathbf{0}\}$. Hence, $N=L+L'=\{
\mathbf{0}\}$ and its preimage by $\mathbf{PA-P'A'}$ is $M=\ker(\mathbf{PA-P'A'})\leqslant
\ZZ^{n_3}$. Now, by Lemma~\ref{int-fg}, $H\cap H'$ is finitely generated if and only if
either $(H\cap H')\pi =M\rho_3^{-1}=1$, or $n_3 -\operatorname{rk}
(\operatorname{Im}(\mathbf{PA-P'A'}))=\operatorname{rk}(M)=n_3$; that is, if and only if either $(H\cap H')\pi =1$, or
$\mathbf{PA}=\mathbf{P'A'}$. But, since $L=L'=\{ \mathbf{0}\}$, $(H\cap H')\pi =1$ if and
only if $H\cap H'=1$.
\end{proof}
\goodbreak
We consider the following two examples to illustrate the preceding algorithm.
\begin{exm}
Let us analyze again the example given in the proof of Observation~\ref{prop:Fn x Z^n no
Howson}, in the light of the previous corollary. We considered in $\ZZ \times
F_2=\langle t \mid \, \rangle \times \langle a,b \mid \, \rangle$ the subgroups $H=\langle
a,b \rangle$ and $H'=\langle ta,b\rangle$, both free non-abelian of rank 2. It is clear
that $\mathbf{A}=\left(\begin{smallmatrix} 0 \\ 0 \end{smallmatrix}\right)$ and
$\mathbf{A'}=\left(\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right)$, while $H\pi =H'\pi
=H\pi \cap H'\pi =F_2$; in particular, $n_3=2$ and $H\cap H'\neq 1$. In these
circumstances, both inclusions $H\pi \overset{}{\hookleftarrow} H\pi \cap H'\pi
\overset{}{\hookrightarrow} H'\pi$ are the identity maps, so
$\mathbf{P}=\mathbf{P'}=\mathbf{1}$ is the $2\times 2$ identity matrix and hence,
$\mathbf{PA}= \left(\begin{smallmatrix} 0 \\ 0 \end{smallmatrix}\right) \neq
\left(\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right) =\mathbf{P'A'}$. According to
Corollary~\ref{cor:interseccio subgrups lliures no abelians de rang finit}, this means that
$H\cap H'$ is not finitely generated, as we had seen before.
\end{exm}
\begin{exm}
Consider two finitely generated subgroups $H, H'\leqslant F_n \leqslant \ZZ^m \times F_n$.
In this case we have $\mathbf{A}=(\mathbf{0})\in \mathcal{M}_{n_1\times m}(\ZZ)$ and
$\mathbf{A'}=(\mathbf{0})\in \mathcal{M}_{n_2\times m}(\ZZ)$ and so,
$\mathbf{PA}=(\mathbf{0})=\mathbf{P'A'}$. Thus, Corollary~\ref{cor:interseccio subgrups
lliures no abelians de rang finit} just corroborates Howson's property for finitely
generated free groups.
\end{exm}
\goodbreak
To finish this section, we present an application of Theorem~\ref{thm:ip} to a nice
geometric problem. In the very recent paper~\cite{sahattchieve_quasiconvex_2011}, J. Sahattchieve studies
quasi-convexity of subgroups of $\ZZ^m \times F_n$ with respect to the natural
component-wise action of $\ZZ^m \times F_n$ on the product space, $\mathbb{R}^m \times
T_n$, of the $m$-dimensional Euclidean space and the regular $(2n)$-valent infinite tree
$T_n$: a subgroup $H\leqslant \ZZ^m \times F_n$ is \emph{quasi-convex} if the orbit $Hp$ of
some (and hence every) point $p\in \mathbb{R}^m \times T_n$ is a quasi-convex subset of
$\mathbb{R}^m \times T_n$ (see~\cite{sahattchieve_quasiconvex_2011} for more details). One of the results obtained is
the following characterization:
\begin{thm}[Sahattchieve]\label{thmSahattchieve}
Let $H$ be a subgroup of $\ZZ^m \times F_n$. Then, $H$ is quasi-convex if and only if $H$
is either cyclic or virtually of the form $A\times B$, for some $A\leqslant \ZZ^m$ and
$B\leqslant F_n$ being finitely generated. (In particular, quasi-convex subgroups are
finitely generated.)
\end{thm}
Combining this with our Theorem~\ref{thm:ip}, we can easily establish an algorithm to
decide whether a given finitely generated subgroup of $\ZZ^m \times F_n$ is quasi-convex or
not (with respect to the above mentioned action).
\begin{cor}
There is an algorithm which, given a finite list $w_1,\ldots,w_s$ of elements in $\ZZ^m
\times F_n$, decides whether the subgroup $H=\langle w_1,\ldots,w_s \rangle$ is
quasi-convex or not.
\end{cor}
\begin{proof}
First, apply Proposition~\ref{prop:bases algorismiques} to compute a basis for $H$. If it
contains only one element, then $H$ is cyclic and we are done.
Otherwise ($H$ is not cyclic) we can easily compute a free-abelian basis and a free basis for the respective projections $H\tau \leqslant \ZZ^m$ and $H\pi\leqslant F_n$. From the basis for $H$ we can immediately extract a free-abelian basis for $\ZZ^m \cap H=H\tau \cap H$. And, using Theorem~\ref{thm:ip}, we can decide whether $F_n \cap H=H\pi \cap H$ is finitely generated or not and, in the affirmative case, compute a free basis for it.
Finally, we can decide whether $H\tau \cap H \leqslant\operatorname{\!_{f.i.}} H\tau$ and $H\pi \cap
H\leqslant\operatorname{\!_{f.i.}} H\pi$ hold or not (applying the well known solutions to $\operatorname{FIP}(\mathbb{Z}^m)$
and $\operatorname{FIP}(F_{n'})$ or, alternatively, using the more general Theorem~\ref{th:problema de
index finit} above); note that if we detected that $H\pi \cap H$ is infinitely generated
then it must automatically be of infinite index in $H\pi$ (which, of course, is finitely
generated).
Now we claim that $H$ is quasi-convex if and only if $H\tau \cap H \leqslant\operatorname{\!_{f.i.}} H\tau$ and
$H\pi \cap H\leqslant\operatorname{\!_{f.i.}} H\pi$; this will conclude the proof.
For the implication to the right (and applying Theorem~\ref{thmSahattchieve}), assume that
$A\times B\leqslant\operatorname{\!_{f.i.}} H$ for some $A\leqslant \ZZ^m$ and $B\leqslant F_n$ being finitely
generated. Applying $\tau$ and $\pi$ we get $A\leqslant\operatorname{\!_{f.i.}} H\tau$ and $B\leqslant\operatorname{\!_{f.i.}}
H\pi$, respectively (see Lemma~\ref{lem:index de subgrups a traves de epimorfismes}~(i)).
But $A\leqslant H\tau \cap H\leqslant H\tau$ and $B\leqslant H\pi \cap H\leqslant H\pi$
hence, $H\tau \cap H \leqslant\operatorname{\!_{f.i.}} H\tau$ and $H\pi \cap H\leqslant\operatorname{\!_{f.i.}} H\pi$.
For the implication to the left, assume $H\tau \cap H \leqslant\operatorname{\!_{f.i.}} H\tau$ and $H\pi \cap
H\leqslant\operatorname{\!_{f.i.}} H\pi$ (and, in particular, $H\pi \cap H$ finitely generated). Take $A=H\tau
\cap H\leqslant\operatorname{\!_{f.i.}} H\tau \leqslant \ZZ^m$ and $B=H\pi \cap H\leqslant\operatorname{\!_{f.i.}} H\pi \leqslant
F_n$, and we get $A\times B \leqslant\operatorname{\!_{f.i.}} H\tau \times H\pi$ (see Lemma~\ref{lem:index
producte directe}). But $H$ is in between, $A\times B\leqslant H\leqslant H\tau \times
H\pi$, hence $A\times B\leqslant\operatorname{\!_{f.i.}} H$ and, by Theorem~\ref{thmSahattchieve}, $H$ is
quasi-convex.
\end{proof}
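We remark that the finite-index tests on the free-abelian side used in the previous proof are elementary: for finitely generated subgroups $A\leqslant B\leqslant \ZZ^m$, the index $[B:A]$ is finite if and only if $\operatorname{rk}(A)=\operatorname{rk}(B)$. A minimal sketch of this test (the function name is ours, using \texttt{sympy}), with the subgroups given by finite sets of generating integer row vectors:
\begin{verbatim}
# Minimal sketch: finite-index test for subgroups of Z^m given by generating sets.
from sympy import Matrix

def finite_index_in(sub_gens, big_gens):
    """Assuming <sub_gens> <= <big_gens> <= Z^m, decide whether the index is finite.
       For free-abelian groups this only depends on the ranks of the two subgroups."""
    rk_sub = Matrix(sub_gens).rank() if sub_gens else 0
    rk_big = Matrix(big_gens).rank() if big_gens else 0
    return rk_sub == rk_big

# finite_index_in([[2, 0], [0, 3]], [[1, 0], [0, 1]])  ->  True   (index 6)
# finite_index_in([[1, 0]], [[1, 0], [0, 1]])          ->  False  (infinite index)
\end{verbatim}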
\section{Endomorphisms} \label{sec:morphisms}
In this section we will study the endomorphisms of a finitely generated free-abelian times
free group $G=\ZZ^m \times F_n$ (with the notation from presentation~(\ref{eq:pres F_n x
Z^m})). Without loss of generality, we assume $n\neq 1$.
To clarify notation, we shall use lowercase Greek letters to denote endomorphisms of $F_n$,
and uppercase Greek letters to denote endomorphisms of $G=\ZZ^m \times F_n$. The following
proposition describes what all endomorphisms of $G$ look like.
\begin{prop} \label{prop:classificacio endos}
Let $G=\ZZ^m \times F_n$ with $n\neq 1$. The following is a complete list of all
endomorphisms
of $G$:
\begin{itemize}\label{eq:expressio endosI}
\item[\rm{\textbf{(I)}}] $\Psi_{\phi,\mathbf{Q,P}}\colon \mathbf{t^a}u\mapsto \mathbf{t^{aQ+uP}}\,
u\phi$, where $\phi \in \operatorname{End}(F_n)$, $\mathbf{Q}\in \mathcal{M}_{m}(\ZZ)$, and
$\mathbf{P}\in \mathcal{M}_{n\times m}(\ZZ)$.
\item[\rm{\textbf{(II)}}] $\Psi_{z,\mathbf{l,h,Q,P}}\colon \mathbf{t^a}u\mapsto
\mathbf{t^{aQ+uP}}z^{\mathbf{a}\mathbf{l}^{\!\top} +\mathbf{u}\mathbf{h}^{\!\top}}$, where $1\neq
z\in F_n$ is not a proper power, $\mathbf{Q}\in \mathcal{M}_{m}(\ZZ)$, $\mathbf{P}\in
\mathcal{M}_{n\times m}(\ZZ)$, $\mathbf{0}\neq \mathbf{l}\in \ZZ^m $, and
$\mathbf{h}\in \ZZ^n$.
\end{itemize}
(In both cases, $\mathbf{u}\in \ZZ^n$ denotes the abelianization of the word $u\in F_n$.)
\end{prop}
\begin{proof}
It is straightforward to check that all maps of types (I) and (II) are, in fact,
endomorphisms of $G$.
To see that this is the complete list of all of them, let $\Psi \colon G\to G$ be an
arbitrary endomorphism of $G$. Looking at the normal form of the images of the $x_i$'s and
$t_j$'s, we have
\begin{equation}\label{eq:assignacio generica}
\Psi \colon \left\{ \begin{array}{rcl}
x_i \!\! &\longmapsto &\!\! \mathbf{t^{p_i}}w_i \\
t_j \!\! & \longmapsto &\!\! \mathbf{t^{q_j}}
z_j,
\end{array} \right.
\end{equation}
where $\mathbf{p_i},\mathbf{q_j} \in \ZZ^m$ and $w_i,\, z_j\in F_n$, $i\in [n]$, $j\in
[m]$. Let us distinguish two cases.
\emph{Case 1: $z_j=1$ for all $j\in [m]$}. Denoting $\phi$ the endomorphism of $F_n$ given
by $x_i \mapsto w_i$, and $\mathbf{P}$ and $\mathbf{Q}$ the following integral matrices (of
sizes $n\times m$ and $m\times m$, respectively)
$$
\mathbf{P}=\left( \begin{array}{ccc} p_{11} & \cdots & p_{1m} \\ \vdots & \ddots & \vdots \\
p_{n1} & \cdots & p_{nm} \end{array} \right) =\left( \begin{array}{c} \mathbf{p_{1}} \\
\vdots \\ \mathbf{p_{n}} \end{array} \right) \text{\quad and \quad} \mathbf{Q}=\left(
\begin{array}{ccc} q_{11} & \cdots & q_{1m} \\ \vdots & \ddots & \vdots \\ q_{m1} & \cdots &
q_{mm} \end{array} \right) =\left( \begin{array}{c} \mathbf{q_{1}} \\ \vdots \\
\mathbf{q_{m}} \end{array} \right),
$$
we can write
$$
\Psi \colon \left\{ \begin{array}{rcl}
u \!\!& \longmapsto &\!\! \mathbf{t^{uP}} u \phi \\
\mathbf{t^a} \!\!& \longmapsto &\!\! \mathbf{t^{aQ}},
\end{array} \right.
$$
where $u\in F_n$ and $\mathbf{a}\in \ZZ^m$. So, $(\mathbf{t^a}u)\Psi =\mathbf{t^{aQ+uP}}
u\phi$ and $\Psi$ equals $\Psi_{\phi,\mathbf{Q,P}}$ from \mbox{type\,(I)}.
\emph{Case 2: $z_k\neq 1$ for some $k\in [m]$}. For $\Psi$ to be well defined,
$\mathbf{t^{p_i}}w_i$ and $\mathbf{t^{q_j}} z_j$ must all commute with $\mathbf{t^{q_k}}
z_k$, and so $w_i$ and $z_j$ with $z_k \neq 1$, for all $i\in [n]$ and $j\in [m]$. This
means that $w_i =z^{h_i}$, $z_j =z^{\, l_j}$ for some integers $h_i,\, l_j \in \ZZ$, $i\in
[n]$, $j\in [m]$, with $l_k\neq 0$, and some $z\in F_n$ not being a proper power. Hence,
$(\mathbf{t^a}u)\Psi =(\mathbf{t^a}\Psi )(u\Psi
)=(\mathbf{t^{aQ}}z^{\mathbf{a}\mathbf{l}^{\!\top}})(\mathbf{t^{uP}}z^{\mathbf{u}\mathbf{h}^{\!\top}})
=\mathbf{t^{aQ+uP}}z^{\mathbf{a}\mathbf{l}^{\!\top} +\mathbf{u}\mathbf{h}^{\!\top}}$ and $\Psi$ equals
$\Psi_{z,\mathbf{l,h,Q,P}}$ from \mbox{type\,(II)}.
This completes the proof.
\end{proof}
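To fix ideas, a \mbox{type\,(I)}\ endomorphism is straightforward to evaluate on normal forms. The following minimal Python sketch (the encoding and names are ours) represents a word $u$ as a list of signed generator indices and $\phi$ as a dictionary sending each index to a word, and computes $(\mathbf{t^a}u)\Psi_{\phi,\mathbf{Q,P}}=\mathbf{t^{aQ+uP}}\, u\phi$:
\begin{verbatim}
# Minimal sketch of a type (I) endomorphism acting on normal forms t^a u.
# Words: lists of signed indices (+i = x_i, -i = x_i^{-1}); phi: dict i -> word.

def abelianize(word, n):
    """Row vector u in Z^n counting signed occurrences of each generator."""
    u = [0] * n
    for g in word:
        u[abs(g) - 1] += 1 if g > 0 else -1
    return u

def apply_phi(word, phi):
    """Image of a word under phi (no free reduction is performed here)."""
    image = []
    for g in word:
        piece = phi[abs(g)]
        image += piece if g > 0 else [-h for h in reversed(piece)]
    return image

def psi_type_I(a, word, phi, Q, P, n):
    """(t^a u) |-> t^{aQ + uP} (u phi); a, and the rows of Q, P, are integer lists."""
    u = abelianize(word, n)
    m = len(a)
    new_a = [sum(a[i] * Q[i][j] for i in range(m)) +
             sum(u[i] * P[i][j] for i in range(n)) for j in range(m)]
    return new_a, apply_phi(word, phi)

# toy example in Z x F_2:  a |-> ta, b |-> b, t |-> t
if __name__ == "__main__":
    phi = {1: [1], 2: [2]}            # identity on F_2
    Q, P = [[1]], [[1], [0]]          # t |-> t; x_1 contributes one t, x_2 none
    print(psi_type_I([0], [1, 2], phi, Q, P, 2))   # ab |-> t a b, i.e. ([1], [1, 2])
\end{verbatim}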
Note that if $n=0$ then \mbox{type\,(I)}\ and \mbox{type\,(II)}\ endomorphisms coincide. Otherwise,
\mbox{type\,(II)}\ endomorphisms will be seen to be neither injective nor surjective. The following
proposition gives a quite natural characterization of which endomorphisms of \mbox{type\,(I)}\ are
injective, and which are surjective. It is important to note that the matrix $\mathbf{P}$
plays absolutely no role in this matter.
\begin{prop}\label{prop:caract mono}
Let $\Psi$ be an endomorphism of $G=\ZZ^m \times F_n$, with $n\geqslant 2$. Then,
\begin{itemize}
\item[\emph{(i)}] $\Psi$ is a monomorphism if and only if it is of \mbox{type\,(I)}, $\Psi
=\Psi_{\phi, \mathbf{Q}, \mathbf{P}}$, with $\phi$ a monomorphism of $F_n$, and $\det
(\mathbf{Q})\neq 0$,
\item[\emph{(ii)}] $\Psi$ is an epimorphism if and only if it is of \mbox{type\,(I)}, $\Psi
=\Psi_{\phi, \mathbf{Q}, \mathbf{P}}$, with $\phi$ an epimorphism of $F_n$, and $\det
(\mathbf{Q})=\pm 1$.
\item[\emph{(iii)}] $\Psi$ is an automorphism if and only if it is of \mbox{type\,(I)}, $\Psi
=\Psi_{\phi, \mathbf{Q}, \mathbf{P}}$, with $\phi \in \operatorname{Aut}(F_n)$ and $\mathbf{Q} \in
\operatorname{GL}_m(\ZZ)$; in this case, $(\Psi_{\phi, \mathbf{Q},
\mathbf{P}})^{-1}=\Psi_{\phi^{-1},\mathbf{Q}^{-1},-\mathbf{M^{-1}PQ^{-1}}}$, where
$\mathbf{M}\in \operatorname{GL}_n(\ZZ)$ is the abelianization of $\phi$.
\end{itemize}
\end{prop}
\begin{proof}
(i). Suppose that $\Psi$ is injective. Then $\Psi$ cannot be of \mbox{type\,(II)}\ since, if it
were, the commutator of any two elements in $F_n$ ($n\geqslant 2$) would be in the kernel
of $\Psi$. Hence, $\Psi =\Psi_{\phi, \mathbf{Q}, \mathbf{P}}$ for some $\phi \in
\operatorname{End}(F_n)$, $\mathbf{Q}\in \mathcal{M}_{m}(\ZZ)$, and $\mathbf{P}\in \mathcal{M}_{n\times
m}(\ZZ)$. Since $\mathbf{t^a}\Psi =\mathbf{t^{a Q}}$, the injectivity of $\Psi$ implies
that of $\mathbf{a}\mapsto \mathbf{aQ}$; hence, $\det(\mathbf{Q})\neq 0$. Finally, in order
to prove the injectivity of $\phi$, let $u\in F_n$ with $u\phi =1$. Note that the
endomorphism of $\QQ^m$ given by $\mathbf{Q}$ is invertible so, in particular, there exists
$\mathbf{v}\in \QQ^m$ such that $\mathbf{v}\mathbf{Q} =\mathbf{uP}$; write
$\mathbf{v}=\frac{1}{b} \mathbf{a}$ for some $\mathbf{a}\in \ZZ^m$ and $b\in \ZZ$, $b\neq
0$, and we have $\mathbf{aQ}=b\, \mathbf{vQ}=b\, \mathbf{uP}$; thus,
$(\mathbf{t}^{\mathbf{a}}u^{-b})\Psi
=\mathbf{t}^{\mathbf{aQ}}(\mathbf{t}^{\mathbf{uP}}1)^{-b}
=\mathbf{t}^{\mathbf{aQ-}b\mathbf{uP}}=\mathbf{t^0}=1$. Hence,
$\mathbf{t}^{\mathbf{a}}u^{-b}=1$ and so, $u=1$.
Conversely, let $\Psi =\Psi_{\phi, \mathbf{Q}, \mathbf{P}}$ be of \mbox{type\,(I)}, with $\phi$ a
monomorphism of $F_n$ and $\det (\mathbf{Q})\neq 0$, and let $\mathbf{t^{a}}u\in
G$ be such that $1=(\mathbf{t^{a}}u)\Psi =\mathbf{t}^{\mathbf{aQ+uP}} \, u\phi$. Then,
$u\phi=1$ and so, $u=1$; and $\mathbf{0=aQ+uP=aQ}$ and so, $\mathbf{a=0}$. Hence, $\Psi$ is
injective.
(ii). Suppose that $\Psi$ is onto. Since the image of an endomorphism of \mbox{type\,(II)}\ followed
by the projection $\pi$ onto $F_n$, $n\geqslant 2$, is contained in $\langle z\rangle$ (and
so is cyclic), $\Psi$ cannot be of \mbox{type\,(II)}. Hence, $\Psi =\Psi_{\phi, \mathbf{Q},
\mathbf{P}}$ for some $\phi \in \operatorname{End}(F_n)$, $\mathbf{Q}\in \mathcal{M}_{m}(\ZZ)$, and
$\mathbf{P}\in \mathcal{M}_{n\times m}(\ZZ)$. Given $v\in F_n \leqslant G$ there must be
$\mathbf{t^a}u\in G$ such that $(\mathbf{t^a}u)\Psi =v$ and so $u\phi =v$. Thus $\phi
\colon F_n\to F_n$ is onto. On the other hand, for every $j\in [m]$, let
$\boldsymbol{\delta}_{\mathbf{j}}$ be the canonical vector of $\ZZ^m$ with 1 at coordinate
$j$, and let $\mathbf{t}^{\mathbf{b_j}} u_j\in G$ be a pre-image by $\Psi$ of $t_j
=\mathbf{t}^{\boldsymbol{\delta}_{\mathbf{j}}}$. We have
$(\mathbf{t}^{\mathbf{b_j}}u_j)\Psi =\mathbf{t}^{\boldsymbol{\delta}_{\mathbf{j}}}$, i.e.\
$u_j \phi =1$, $\mathbf{u_j}=\mathbf{0}$ and
$\mathbf{b_j}\mathbf{Q}=\mathbf{b_j}\mathbf{Q}+\mathbf{u_j}\mathbf{P}=\boldsymbol{\delta}_{\mathbf{j}}$.
This means that the matrix $\mathbf{B}$ with rows $\mathbf{b_j}$ satisfies
$\mathbf{BQ=I_m}$ and thus, $\det (\mathbf{Q})=\pm 1$.
Conversely, let $\Psi =\Psi_{\phi, \mathbf{Q}, \mathbf{P}}$ be of \mbox{type\,(I)}, with $\phi$
an epimorphism of $F_n$ and $\det (\mathbf{Q})=\pm 1$. By the hopfianity of $F_n$,
$\phi \in \operatorname{Aut}(F_n)$ and we can consider $\Upsilon
=\Psi_{\phi^{-1},\mathbf{Q}^{-1},-\mathbf{M^{-1}PQ^{-1}}}$, where $\mathbf{M}\in \operatorname{GL}_n(\ZZ)$
is the abelianization of $\phi$. For every $\mathbf{t^{a}}u\in G$, we have
$$
(\mathbf{t^{a}}u)\Upsilon\Psi =\bigl(\mathbf{t^{aQ^{-1}-uM^{-1}PQ^{-1}}} (u\phi^{-1})\bigr)\Psi
=\mathbf{t^{a-uM^{-1}P+uM^{-1}P}}u =\mathbf{t^{a}}u.
$$
Hence, $\Psi$ is onto.
(iii). The equivalence is a direct consequence of~(i) and~(ii). To see the actual value of
$\Psi^{-1}$ it remains to compute the composition in the reverse order:
\begin{equation*}
(\mathbf{t^{a}}u)\Psi\Upsilon =\bigl(\mathbf{t^{aQ+uP}}(u\phi)\bigr)\Upsilon =\mathbf{t^{a+uPQ^{-1}-uMM^{-1}PQ^{-1}}}u
=\mathbf{t^{a}}u. \qedhere
\end{equation*}
\end{proof}
Immediately from these characterizations for an endomorphism to be mono, epi or auto, we
have the following corollary.
\begin{cor}
$\ZZ^m \times F_n $ is hopfian and not cohopfian. \qed
\end{cor}
The hopfianity of free-abelian times free groups was already known as part of a more general
result: in \cite{green_graph_1990} and~\cite{humphries_stephen_p._representations_1994} it
was shown that finitely generated partially commutative groups (this includes groups of the
form $G=\ZZ^m \times F_n$) are residually finite and hence hopfian. However, our proof is
more direct and explicit, in the sense of giving complete characterizations of the
injectivity and surjectivity of a given endomorphism of $G$. We remark that, although it
might seem plausible, the hopfianity of $\ZZ^m \times F_n$ does not follow directly from
that of free-abelian and free groups (both very well known): in~\cite{tyrer_direct_1971},
the author constructs a direct product of two hopfian groups which is \emph{not} hopfian.
For later use, the next lemma summarizes how \mbox{type\,(I)}\ endomorphisms behave under
composition, inversion, and powers; it can easily be proved by routine computations. The reader
can easily find similar formulas for the composition of two \mbox{type\,(II)}\ endomorphisms, or
of one of each (we do not include them here because they will not be needed in the rest
of the paper).
\begin{lem} \label{lab:endosI comportament algebraic}
Let $\Psi_{\phi,\mathbf{Q},\mathbf{P}}$ and $\Psi_{\phi',\mathbf{Q'},\mathbf{P'}}$ be two
\mbox{type\,(I)}\ endomorphisms of $G=\ZZ^m \times F_n$, $n\neq 1$, and denote by $\mathbf{M} \in
\mathcal{M}_n(\ZZ)$ the (matrix of the) abelianization of $\phi \in \operatorname{End}(F_n)$. Then,
\begin{itemize}
\item[\emph{(i)}] $\Psi_{\phi,\mathbf{Q},\mathbf{P}} \cdot
\Psi_{\phi',\mathbf{Q'},\mathbf{P'}} =\Psi_{\phi \phi',\mathbf{QQ'},
\mathbf{PQ'}+\mathbf{M}\mathbf{P'}}$,
\item[\emph{(ii)}] for all $k\geqslant 1$, $(\Psi_{\phi,\mathbf{Q},\mathbf{P}})^k=
\Psi_{\phi^k, \mathbf{Q}^k, \mathbf{P_k}}$, where $\mathbf{P_k}=\sum_{i=1}^{k}
\mathbf{M}^{i-1} \mathbf{P} \mathbf{Q}^{k-i}$,
\item[\emph{(iii)}] $\Psi_{\phi,\mathbf{Q},\mathbf{P}}$ is invertible if and only if
$\phi \in \operatorname{Aut}(F_n)$ and $\mathbf{Q}\in \operatorname{GL}_m(\ZZ)$; in this case,
$(\Psi_{\phi,\mathbf{Q},\mathbf{P}})^{-1}= \Psi_{\phi^{-1}, \mathbf{Q}^{-1},
-\mathbf{M^{-1}PQ^{-1}}}$.
\item[\emph{(iv)}] For every $\mathbf{a}\in \ZZ^m$ and $u\in F_n$, the right conjugation
by $\mathbf{t^{a}}u$ is $\Gamma_{\mathbf{t^{a}}u}
=\Psi_{\gamma_{u},\mathbf{I_m},\mathbf{0}}$, where $\gamma_u$ is the right
conjugation by $u$ in $F_n$, $v\mapsto u^{-1}vu$, $\mathbf{I_m}$ is the identity
matrix of size~$m$, and $\mathbf{0}$ is the zero matrix of size $n\times m$. \qed
\end{itemize}
\end{lem}
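Item~(ii) follows from item~(i) by a straightforward induction, using that the abelianization of $\phi^j$ is $\mathbf{M}^j$. As a quick independent sanity check (not needed for the proofs), one can compare the closed formula with the recursion $\mathbf{P_{j+1}}=\mathbf{P_j}\mathbf{Q}+\mathbf{M}^{j}\mathbf{P}$ coming from item~(i), say on random integer matrices in Python:
\begin{verbatim}
# Quick numerical sanity check of Lemma (ii): P_k = sum_i M^(i-1) P Q^(k-i).
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 3, 2, 5
Q = rng.integers(-3, 4, size=(m, m))
P = rng.integers(-3, 4, size=(n, m))
M = rng.integers(-3, 4, size=(n, n))   # plays the role of the abelianization of phi

# closed formula from item (ii)
P_closed = sum(np.linalg.matrix_power(M, i - 1) @ P @ np.linalg.matrix_power(Q, k - i)
               for i in range(1, k + 1))

# recursion P_{j+1} = P_j Q + M^j P obtained from item (i)
P_iter = np.zeros((n, m), dtype=int)
for j in range(k):
    P_iter = P_iter @ Q + np.linalg.matrix_power(M, j) @ P

assert np.array_equal(P_closed, P_iter)
\end{verbatim}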
In the rest of the section, we shall use this information to derive the structure of
$\operatorname{Aut}(G)$, where $G=\ZZ^m \times F_n$, $m\geqslant 1$, $n\geqslant 2$.
\begin{thm}\label{prop:desc autos}
For $G=\ZZ^m \times F_n$, with $m\geqslant 1$ and $n\geqslant 2$, the group $\operatorname{Aut} (G)$ is
isomorphic to the semidirect product $\mathcal{M}_{n\times m}(\ZZ) \rtimes (\operatorname{Aut}(F_n)
\times \operatorname{GL}_m(\ZZ))$ with respect to the natural action. In particular, $\operatorname{Aut}(G)$ is
finitely presented.
\end{thm}
\begin{proof}
First of all, note that, for every $\phi,\, \phi' \in \operatorname{Aut}(F_n)$, every $\mathbf{Q},\,
\mathbf{Q'}\in \operatorname{GL}_m(\ZZ)$, and every $\mathbf{P},\mathbf{P}'\in \mathcal{M}_{n\times
m}(\ZZ)$, we have
$$
\Psi_{\phi, \mathbf{I_m}, \mathbf{0}} \cdot \Psi_{\phi', \mathbf{I_m}, \mathbf{0}}=
\Psi_{\phi\phi', \mathbf{I_m}, \mathbf{0}},
$$
$$
\Psi_{I_n, \mathbf{Q}, \mathbf{0}} \cdot \Psi_{I_n, \mathbf{Q'}, \mathbf{0}} =\Psi_{I_n,
\mathbf{QQ'}, \mathbf{0}}
$$
$$
\Psi_{I_n, \mathbf{I_m},\mathbf{P}} \cdot \Psi_{I_n, \mathbf{I_m},\mathbf{P'}}=\Psi_{I_n,
\mathbf{I_m},\mathbf{P+P'}}.
$$
Hence, the three groups $\operatorname{Aut}(F_n)$, $\operatorname{GL}_m(\ZZ)$, and $\mathcal{M}_{n\times m}(\ZZ)$ (this
last one with the addition of matrices), are all subgroups of $\operatorname{Aut}(G)$ via the three
natural inclusions: $\phi \mapsto \Psi_{\phi, \mathbf{I_m}, \mathbf{0}}$,\phantom{a}
$\mathbf{Q}\mapsto \Psi_{I_n, \mathbf{Q}, \mathbf{0}}$, and $\mathbf{P}\mapsto \Psi_{I_n,
\mathbf{I_m},\mathbf{P}}$, respectively. Furthermore, for every $\phi \in \operatorname{Aut}(F_n)$ and
every $\mathbf{Q}\in \operatorname{GL}_m(\ZZ)$, it is clear that $\Psi_{\phi, \mathbf{I_m}, \mathbf{0}}
\cdot \Psi_{I_n, \mathbf{Q}, \mathbf{0}} =\Psi_{I_n, \mathbf{Q}, \mathbf{0}} \cdot
\Psi_{\phi, \mathbf{I_m}, \mathbf{0}}$; hence $\operatorname{Aut}(F_n) \times \operatorname{GL}_m(\ZZ)$ is a subgroup
of $\operatorname{Aut}(G)$ in the natural way.
On the other hand, for every $\phi \in \operatorname{Aut}(F_n)$, every $\mathbf{Q}\in \operatorname{GL}_m(\ZZ)$, and
every $\mathbf{P}\in \mathcal{M}_{n\times m}(\ZZ)$, we have
\begin{equation}\label{eq:conjFn}
(\Psi_{\phi, \mathbf{I_m}, \mathbf{0}})^{-1} \cdot \Psi_{I_n, \mathbf{I_m},\mathbf{P}} \cdot \Psi_{\phi, \mathbf{I_m}, \mathbf{0}} = \Psi_{\phi^{-1}, \mathbf{I_m}, \mathbf{0}} \cdot \Psi_{\phi, \mathbf{I_m},\mathbf{P}} = \Psi_{I_n, \mathbf{I_m},\mathbf{M^{-1}P}},
\end{equation}
where $\mathbf{M}\in \operatorname{GL}_n(\ZZ)$ is the abelianization of $\phi$, and
\begin{equation}\label{eq:conjQ}
(\Psi_{I_n, \mathbf{Q}, \mathbf{0}})^{-1} \cdot \Psi_{I_n, \mathbf{I_m},\mathbf{P}} \cdot \Psi_{I_n, \mathbf{Q}, \mathbf{0}} = \Psi_{I_n, \mathbf{Q^{-1}}, \mathbf{0}} \cdot \Psi_{I_n, \mathbf{Q},\mathbf{PQ}} = \Psi_{I_n, \mathbf{I_m},\mathbf{PQ}}.
\end{equation}
In particular, $\mathcal{M}_{n\times m}(\ZZ)$ is a normal subgroup of $\operatorname{Aut} (G)$. But
$\operatorname{Aut}(F_n)$, $\operatorname{GL}_m(\ZZ)$ and $\mathcal{M}_{n\times m}(\ZZ)$ together generate the whole
$\operatorname{Aut} (G)$, as can be seen from the equality
\begin{equation} \label{eq:desc autos}
\Psi_{\phi,\mathbf{Q},\mathbf{P}}= \Psi_{I_n, \mathbf{I_m}, \mathbf{PQ^{-1}}}\cdot \Psi_{I_n, \mathbf{Q},
\mathbf{0}} \cdot \Psi_{\phi,\mathbf{I_m},\mathbf{0}}.
\end{equation}
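For completeness, note that~\eqref{eq:desc autos} is a direct application of Lemma~\ref{lab:endosI comportament algebraic}~(i):
\[
\Psi_{I_n, \mathbf{I_m}, \mathbf{PQ^{-1}}}\cdot \Psi_{I_n, \mathbf{Q}, \mathbf{0}}
=\Psi_{I_n,\, \mathbf{Q},\, \mathbf{PQ^{-1}Q}} =\Psi_{I_n, \mathbf{Q}, \mathbf{P}},
\qquad
\Psi_{I_n, \mathbf{Q}, \mathbf{P}}\cdot \Psi_{\phi, \mathbf{I_m}, \mathbf{0}}
=\Psi_{\phi,\, \mathbf{Q},\, \mathbf{P}\mathbf{I_m}} =\Psi_{\phi, \mathbf{Q}, \mathbf{P}}.
\]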
Thus, $\operatorname{Aut}(G)$ is isomorphic to the semidirect product $\mathcal{M}_{n\times m}(\ZZ)
\rtimes (\operatorname{Aut}(F_n) \times \operatorname{GL}_m(\ZZ))$, with the action of $\operatorname{Aut}(F_n) \times \operatorname{GL}_m(\ZZ)$ on
$\mathcal{M}_{n\times m}(\ZZ)$ given by equations~\eqref{eq:conjFn} and~\eqref{eq:conjQ}.
But it is well known that these three groups are finitely presented: $\mathcal{M}_{n\times
m}(\ZZ) \simeq \ZZ^{nm}$ is free-abelian, generated by the canonical matrices (with zeroes
everywhere except for one position where there is a 1), $\operatorname{GL}_m(\ZZ)$ is generated by
elementary matrices, and $\operatorname{Aut}(F_n)$ is generated, for example, by the Nielsen
automorphisms (see~\cite{lyndon_combinatorial_2001} for details and full finite
presentations). Therefore, $\operatorname{Aut}(G)$ is also finitely presented (and one can easily obtain
a presentation of $\operatorname{Aut}(G)$ by taking together the generators for $\mathcal{M}_{n\times
m}(\ZZ)$, $\operatorname{Aut}(F_n)$ and $\operatorname{GL}_m(\ZZ)$, and putting as relations those of each of
$\mathcal{M}_{n\times m}(\ZZ)$, $\operatorname{Aut}(F_n)$ and $\operatorname{GL}_m(\ZZ)$, together with the commutators
of all generators from $\operatorname{Aut}(F_n)$ with all generators from $\operatorname{GL}_m(\ZZ)$, and with the
conjugacy relations describing the action of $\operatorname{Aut}(F_n)\times \operatorname{GL}_m(\ZZ)$ on
$\mathcal{M}_{n\times m}(\ZZ)$ analyzed above).
\end{proof}
Finite presentability of $\operatorname{Aut}(G)$ was previously known as a particular case of a more
general result: in~\cite{laurence_generating_1995}, M. Laurence gave a finite family of
generators for the group of automorphisms of any finitely generated partially commutative
group, in terms of the underlying graph. It turns out that, when particularizing this to
free-abelian times free groups, Laurence's generating set for $\operatorname{Aut}(G)$ is essentially the
same as the one obtained here, after deleting some obvious redundancy. Later,
in~\cite{day_peak_2009}, M. Day builds a kind of peak reduction for such groups, from which
he deduces the finite presentability of their automorphism groups. However, our
Theorem~\ref{prop:desc autos} is more precise in the sense that it provides the explicit
structure of the automorphism group of a free-abelian times free group.
\goodbreak
\section{The subgroup fixed by an endomorphism}\label{sec:fix}
In this section we shall study when the subgroup fixed by an endomorphism of $\ZZ^m \times
F_n$ is finitely generated and, in this case, we shall consider the problem of
algorithmically computing a basis for it. We will consider the following two problems.
\begin{problem}[\textbf{Fixed Point Problem, $\operatorname{FPP}_\mathbf{a}(G)$}]
Given an automorphism $\Psi$ of $G$ (by the images of the generators), decide whether $\operatorname{Fix}
\Psi$ is finitely generated and, if so, compute a set of generators for it.
\end{problem}
\begin{problem}[\textbf{Fixed Point Problem, $\operatorname{FPP}_\mathbf{e}(G)$}]
Given an endomorphism $\Psi$ of $G$ (by the images of the generators), decide whether $\operatorname{Fix}
\Psi$ is finitely generated and, if so, compute a set of generators for it.
\end{problem}
Of course, the fixed point subgroup of an arbitrary endomorphism of $\ZZ^m$ is finitely
generated, and the problems $\operatorname{FPP}_\mathbf{e}(\ZZ^m)$ and $\operatorname{FPP}_\mathbf{a}(\ZZ^m)$ are
clearly solvable, since they reduce to solving the corresponding systems of linear equations.
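Explicitly, identifying an endomorphism of $\ZZ^m$ with a matrix $\mathbf{Q}\in \mathcal{M}_m(\ZZ)$ acting on row vectors, its fixed point subgroup is
\[
\operatorname{Fix}(\mathbf{Q})=\{ \mathbf{a}\in \ZZ^m \mid \mathbf{a}(\mathbf{I_m}-\mathbf{Q})=\mathbf{0}\},
\]
so computing it amounts to solving the homogeneous system $\mathbf{a}(\mathbf{I_m}-\mathbf{Q})=\mathbf{0}$ over the integers.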
Again, the case of free groups is much more complicated. Gersten showed
in~\cite{gersten_fixed_1987} that $\operatorname{rk}(\operatorname{Fix} \phi)<\infty$ for every automorphism $\phi
\in \operatorname{Aut}(F_n)$, and Goldstein and Turner~\cite{goldstein_fixed_1986} extended this result
to arbitrary endomorphisms of $F_n$.
About computability, O. Maslakova published~\cite{maslakova_fixed_2003} in 2003, giving an
algorithm to compute a free basis for $\operatorname{Fix} \phi$, where $\phi \in \operatorname{Aut} (F_n)$. After its
publication, the arguments were found to be incorrect. An attempt to fix
them and provide a correct solution to $\operatorname{FPP}_\mathbf{a}(F_n)$ has recently been made by O.
Bogopolski and O. Maslakova in the not yet published preprint~\cite{bogopolski_basis_2012} (see the
beginning of page 3); there, the arguments are quite involved and difficult, making strong
and deep use of the theory of train tracks. It is worth mentioning at this point that this
problem was previously solved in some special cases with much simpler arguments and
algorithms (see, for example Cohen and Lustig~\cite{marshall_m._cohen_dynamics_1989} for positive automorphisms,
Turner~\cite{turner_finding_1995} for special irreducible automorphisms, and Bogopolski~\cite{bogopolski_classification_2000} for the
case $n=2$). On the other hand, the problem $\operatorname{FPP}_\mathbf{e}(F_n)$ remains open in
general.
When one moves to free-abelian times free groups, the situation is even more involved.
Similarly to what happens with respect to the Howson property, $\operatorname{Fix} \Psi$ need not be
finitely generated for $\Psi \in \operatorname{Aut} (\ZZ \times F_2)$, and essentially the same example
from Observation~\ref{prop:Fn x Z^n no Howson} can be recycled here: consider the \mbox{type\,(I)}\
automorphism $\Psi$ given by $a\mapsto ta$, $b\mapsto b$, $t\mapsto t$; clearly, $t^rw(a,b)
\mapsto t^{r+|w|_a}w(a,b)$ and so,
$$
\operatorname{Fix} \Psi =\{ t^r w(a,b) \,\, \vert \,\, |w|_a=0\} =\llangle t,\, b\rrangle =\langle t,
a^{-k}ba^k \,\ (k\in \ZZ) \rangle
$$
is not finitely generated.
In the present section we shall analyze the fixed point subgroup of an endomorphism
of a free-abelian times free group, and give an explicit characterization of when
it is finitely generated. In the case it is, we shall also consider the computability of a
finite basis for the fixed subgroup, and will solve the problems $\operatorname{FPP}_\mathbf{a}(\ZZ^m
\times F_n)$ and $\operatorname{FPP}_\mathbf{e}(\ZZ^m \times F_n)$ modulo the corresponding problems for
free groups, $\operatorname{FPP}_\mathbf{a}(F_n)$ and $\operatorname{FPP}_\mathbf{e}(F_n)$. (Our arguments descend
directly from $\operatorname{End}(\ZZ^m \times F_n)$ to $\operatorname{End}(F_n)$, in such a way that any partial
solution to the free problems can be used to give the corresponding partial solution to the
free-abelian times free problems, see Proposition~\ref{algo-I} below.)
Let us distinguish the two types of endomorphisms according to
Proposition~\ref{prop:classificacio endos} (and starting with the easier \mbox{type\,(II)}\ ones).
\begin{prop}\label{prop:fix fg typeII}
Let $G=\ZZ^m \times F_n$ with $n\neq 1$, and consider a \mbox{type\,(II)}\ endomorphism $\Psi$,
namely
$$
\Psi = \Psi_{z,\mathbf{l,h,Q,P}} \colon \mathbf{t^a}u\mapsto \mathbf{t^{aQ+uP}}z^{\mathbf{a}\mathbf{l}^{\!\top}
+\mathbf{u}\mathbf{h}^{\!\top}},
$$
where $1\neq z\in F_n$ is not a proper power, $\mathbf{Q}\in \mathcal{M}_{m}(\ZZ)$,
$\mathbf{P}\in \mathcal{M}_{n\times m}(\ZZ)$, $\mathbf{0}\neq \mathbf{l}\in \ZZ^m $, and
$\mathbf{h}\in \ZZ^n$. Then, $\operatorname{Fix} \Psi$ is finitely generated, and a basis for $\operatorname{Fix} \Psi$
is algorithmically computable.
\end{prop}
\begin{proof}
First note that $\operatorname{Im} \Psi$ is an abelian subgroup of $\ZZ^m \times F_n$. Then, by Corollary
\ref{cor:classes isomorfia subgrups}, it must be isomorphic to $\ZZ^{m'}$ for a certain
$m'\leqslant m+1$. Therefore, $\operatorname{Fix} \Psi \leqslant \operatorname{Im} (\Psi)$ is isomorphic to a subgroup
of $\ZZ^{m'}$, and thus finitely generated.
According to the definition, an element $\mathbf{t}^\mathbf{a}u$ is fixed by $\Psi$ if and
only if $\mathbf{t^{aQ+uP}}z^{\mathbf{a}\mathbf{l}^{\!\top} +\mathbf{u}\mathbf{h}^{\!\top}}
=\mathbf{t}^\mathbf{a}u$. For this to be satisfied, $u$ must be a power of $z$, say $u=z^r$
for some $r\in \ZZ$; abelianizing, we get $\mathbf{u}=r \mathbf{z}$ and the system
of equations
\begin{equation} \label{eq:system fix typeII ab}
\left.
\begin{aligned}
\mathbf{a}\mathbf{l}^{\!\top} + r \mathbf{z}\mathbf{h}^{\!\top} &= r \\
\mathbf{a}(\mathbf{I_m}-\mathbf{Q})& = r \mathbf{z} \mathbf{P}
\end{aligned}
\ \right\}
\end{equation}
whose set $\mathcal{S}$ of integer solutions $(\mathbf{a},r) \in \ZZ^{m+1}$ describes
precisely the subgroup of points fixed by~$\Psi$:
$$
\operatorname{Fix} \Psi =\{ \mathbf{t^a} z^r \mid (\mathbf{a},r)\in \mathcal{S} \}.
$$
By solving~\eqref{eq:system fix typeII ab}, we get the desired basis for $\operatorname{Fix} \Psi$. The
proof is complete.
\end{proof}
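Let us note, as a side remark, that the computation above amounts to a single integer kernel: the system~\eqref{eq:system fix typeII ab} says precisely that the row vector $(\mathbf{a},r)\in \ZZ^{m+1}$ satisfies
\[
(\mathbf{a},r)\begin{pmatrix} \mathbf{I_m}-\mathbf{Q} & \mathbf{l}^{\!\top} \\ -\mathbf{z}\mathbf{P} & \mathbf{z}\mathbf{h}^{\!\top}-1 \end{pmatrix} =\mathbf{0},
\]
where $\mathbf{z}\in \ZZ^n$ denotes the abelianization of $z$; so $\mathcal{S}$ is the integer kernel of an explicit $(m+1)\times (m+1)$ integer matrix, and a basis for it (and hence for $\operatorname{Fix} \Psi$) is computable by standard integer linear algebra.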
\goodbreak
\begin{thm}\label{prop:fix fg typeI}
Let $G=\ZZ^m \times F_n$ with $n\neq 1$, and consider a \mbox{type\,(I)}\ endomorphism $\Psi$,
namely
$$
\Psi=\Psi_{\phi,\mathbf{Q,P}} \colon \mathbf{t^a}u\mapsto \mathbf{t^{aQ+uP}}\, u\phi,
$$
where $\phi \in \operatorname{End}(F_n)$, $\mathbf{Q}\in \mathcal{M}_{m}(\ZZ)$, and $\mathbf{P}\in
\mathcal{M}_{n\times m}(\ZZ)$. Let $N=\operatorname{Im} (\mathbf{I_m-Q})\cap \operatorname{Im} \mathbf{P'}$, where
$\mathbf{P'}$ is the restriction of $\mathbf{P}\colon \ZZ^n \to \ZZ^m$ to $(\operatorname{Fix} \phi
)\rho$, the image of $\operatorname{Fix} \phi \leqslant F_n$ under the global abelianization $\rho \colon
F_n \twoheadrightarrow \ZZ^n$. Then, $\operatorname{Fix} \Psi$ is finitely generated if and only if one
of the following happens: \emph{(i)}~$\operatorname{Fix} \phi =1$; \emph{(ii)} $\operatorname{Fix} \phi$ is cyclic,
$(\operatorname{Fix} \phi )\rho \neq \{ \mathbf{0}\}$, and $N\mathbf{P'}^{-1} =\{ \mathbf{0}\}$; or
\emph{(iii)} $\operatorname{rk} (N)=\operatorname{rk} (\operatorname{Im} \mathbf{P'})$.
\end{thm}
\begin{proof}
An element $\mathbf{t}^\mathbf{a}u$ is fixed by $\Psi$ if and only if
$\mathbf{t}^{\mathbf{aQ+uP}}u\phi =\mathbf{t}^\mathbf{a}u$, i.e.\ if and only if
\begin{equation*}
\left.
\begin{aligned}
u\phi &= u \\
\mathbf{a}(\mathbf{I_m}-\mathbf{Q})&=\mathbf{uP}
\end{aligned}
\ \right\}
\end{equation*}
That is,
\begin{equation}\label{fixPsi}
\operatorname{Fix} \Psi =\{ \mathbf{t}^\mathbf{a}u \in G \mid u \in \operatorname{Fix} \phi \text{\, and \,} \mathbf{a}(\mathbf{I_m}-\mathbf{Q})=
\mathbf{uP}\},
\end{equation}
where $\mathbf{u}=u\rho$, and $\rho \colon F_n \twoheadrightarrow \ZZ^n$ is the
abelianization map. As we have seen in Corollary~\ref{cor:H fg sii Hpi fg}, $\operatorname{Fix} \Psi$ is
finitely generated if and only if its projection to the free part
\begin{equation}\label{fix}
(\operatorname{Fix} \Psi)\pi =\operatorname{Fix} \phi \, \cap \, \{ u\in F_n \mid \mathbf{uP}\in \operatorname{Im}(\mathbf{I_m}-\mathbf{Q})\}
\end{equation}
is so. Now (identifying integral matrices $\mathbf{A}$ with the corresponding linear
mapping $\mathbf{v} \mapsto \mathbf{vA}$, as usual), let $M$ be the image of
$\mathbf{I_m}-\mathbf{Q}$, and consider its preimage first by $\mathbf{P}$ and then by
$\rho$, see the following diagram:
\begin{equation*} \index{punts fixos per un automorfisme!diagrames}
\xy
(55.5,-9.1)*+{= \operatorname{Im} (\mathbf{I_m}-\mathbf{Q}) .};
(0,-4.5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(22,-4.5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(42,-4.5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(-4,0)*+{\leqslant};
(-10.5,0)*+{\operatorname{Fix} \phi};
{\ar@{->>}^-{\rho} (0,0)*++{F_{n}}; (22,0)*++{\ZZ^{n}}};
{\ar^-{\mathbf{P}} (22,0)*++++{}; (42,0)*++{\ZZ^m}};
{\ar@(ur,ul)_{\mathbf{I_m-Q}} (42,0)*++{}; (42,0)*++{} };
{\ar@{|->} (42,-9)*++{M }; (22,-9)*++{M \mathbf{P}^{-1}}};
{\ar@{|->} (22,-9)*++++++{}; (0,-9)*+{M \mathbf{P}^{-1} \rho^{-1} }};
\endxy
\end{equation*}
Equation~(\ref{fix}) can be rewritten as
\begin{equation}\label{fix2}
(\operatorname{Fix} \Psi) \pi =\operatorname{Fix} \phi \, \cap \, M\mathbf{P}^{-1} \rho^{-1}.
\end{equation}
However, this description does not immediately settle whether $\operatorname{Fix} \Psi$ is finitely
generated: $\operatorname{Fix} \phi$ is indeed finitely generated, but $M\mathbf{P}^{-1}\rho^{-1}$ is not in
general. We shall avoid the intersection with $\operatorname{Fix} \phi$ by reducing $M$ to a certain
subgroup. Let $\rho'$ be the restriction of $\rho$ to $\operatorname{Fix} \phi$ (not to be confused with
the abelianization map of the subgroup $\operatorname{Fix} \phi$ itself), let $\mathbf{P'}$ be the
restriction of $\mathbf{P}$ to $\operatorname{Im} \rho'$, and let $N=M\cap \operatorname{Im} \mathbf{P'}$, see the
following diagram:
\begin{equation*}\index{punts fixos per un automorfisme!diagrames}
\xy
(65,0)*+{\geqslant M = \operatorname{Im} (\mathbf{I_m}-\mathbf{Q})};
(58,-18.1)*+{= M \cap \operatorname{Im} \mathbf{P'} .};
(0,-4.5)*+{\rotatebox[origin=c]{90}{$\leqslant$}};
(24,-4.5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(46,-4.5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
{\ar@{->>}^-{\rho} (0,0)*++{F_{n}}; (24,0)*++{\ZZ^{n}}};
{\ar^-{\mathbf{P}} (24,0)*++++{}; (46,0)*++{\ZZ^m}};
{\ar@{->>}^-{\rho'} (0,-9)*++{\operatorname{Fix} \phi}; (24,-9)*++{\operatorname{Im} \rho'}};
{\ar@{->>}^-{\mathbf{P'}} (24,-9)*+++++{}; (46,-9)*++{\operatorname{Im} \mathbf{P'}}};
(0,-13.5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(24,-13.5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
(46,-13.5)*+{\rotatebox[origin=c]{90}{$ \trianglelefteqslant $}};
{\ar@(ur,ul)_{\mathbf{I_m-Q}} (46,0)*++{}; (46,0)*++{} };
{\ar@{|->} (46,-18)*+++{N }; (24,-18)*++{N \mathbf{P'}^{-1}}};
{\ar@{|->} (24,-18)*+++++++{}; (0,-18)*+{N \mathbf{P'}^{-1} \rho'^{-1} }};
(0,-23)*+{\rotatebox[origin=c]{90}{$=$}};
(0,-28)*+{(\operatorname{Fix} \Psi) \pi};
\endxy
\end{equation*}
Equation~(\ref{fix2}) then becomes
$$
(\operatorname{Fix} \Psi )\pi =N\mathbf{P'}^{-1}\rho'^{-1}.
$$
Now, since $N\mathbf{P'}^{-1}\rho'^{-1}$ is a normal subgroup of $\operatorname{Fix} \phi$ (not, in
general, of $F_n$), it is finitely generated if and only if it is either trivial, or of
finite index in $\operatorname{Fix} \phi$.
Note that $\rho'$ is injective (and thus bijective) if and only if $\operatorname{Fix} \phi$ is either
trivial, or cyclic not abelianizing to zero (indeed, for this to be the case we cannot have
two freely independent elements in $\operatorname{Fix} \phi$ and so, $\operatorname{rk} (\operatorname{Fix} \phi)\leqslant 1$).
Thus, $(\operatorname{Fix} \Psi )\pi =N\mathbf{P'}^{-1}\rho'^{-1}=1$ if and only if $\operatorname{Fix} \phi$ is
trivial or cyclic not abelianizing to zero, and $N\mathbf{P'}^{-1}=\{\mathbf{0}\}$.
On the other side, by Lemma~\ref{lem:index de subgrups a traves de epimorfismes}~(ii),
$N\mathbf{P'}^{-1}\rho'^{-1}$ has finite index in $\operatorname{Fix} \phi$ if and only if $N$ has finite
index in $\operatorname{Im} \mathbf{P'}$ i.e.\ if and only if $\operatorname{rk} (N)=\operatorname{rk} (\operatorname{Im} \mathbf{P'})$.
\end{proof}
\begin{exm}
Let us analyze again the example given at the beginning of this section, in the light of
Theorem~\ref{prop:fix fg typeI}. We considered the automorphism $\Psi$ of $\ZZ\times
F_2=\langle t \mid \, \rangle \times \langle a,b \mid \, \rangle$ given by $a\mapsto ta$,
$b\mapsto b$ and $t\mapsto t$, i.e.\ $\Psi =\Psi_{I_2, \mathbf{I_1}, \mathbf{P}}$, where
$\mathbf{P}$ is the $2\times 1$ matrix $\mathbf{P=(1,0)}^{\!\top}$. It is clear that $\operatorname{Fix}
(I_2)=F_2$ and so, conditions~(i) and~(ii) from Theorem~\ref{prop:fix fg typeI} do not
hold. Furthermore, $\rho'=\rho$, $\mathbf{P'=P}$, $M=\operatorname{Im} (\mathbf{0})=\{ \mathbf{0}\}$,
$N=\{ \mathbf{0}\}$, while $\operatorname{Im} \mathbf{P'}=\mathbb{Z}$; hence, condition~(iii) from
Theorem~\ref{prop:fix fg typeI} does not hold either, according to the fact that $\operatorname{Fix}
\Psi$ is not finitely generated.
\end{exm}
\goodbreak
Finally, the proof of Theorem~\ref{prop:fix fg typeI} is explicit enough to allow us to
make the whole thing algorithmic: given a \mbox{type\,(I)}\ endomorphism $\Psi
=\Psi_{\phi,\mathbf{Q,P}} \in \operatorname{End} (\ZZ^m \times F_n)$, the decision on whether $\operatorname{Fix} \Psi$
is finitely generated or not, and the computation of a basis for it in case it is, can be
made effective assuming we have a procedure to compute a (free) basis for $\operatorname{Fix} \phi$:
\begin{prop}\label{algo-I}
Let $G=\ZZ^m \times F_n$ with $n\neq 1$, and let $\Psi=\Psi_{\phi,\mathbf{Q,P}}$ be a
\mbox{type\,(I)}\ endomorphism of $G$. Assuming a (finite and free) basis for $\operatorname{Fix} \phi$ is given
to us, we can algorithmically decide whether $\operatorname{Fix} \Psi$ is finitely generated or not and,
in case it is, compute a basis for it.
\end{prop}
\begin{proof}
Let $\{ v_1,\ldots ,v_p \}$ be the (finite and free) basis for $\operatorname{Fix} \phi \leqslant F_n$
given to us in the hypothesis.
Theorem~\ref{prop:fix fg typeI} describes $\operatorname{Fix} \Psi$ and characterizes when it is finitely
generated. Assuming the notation from the proof there, we can compute abelian bases for
$N\leqslant \operatorname{Im} \mathbf{P'}\leqslant \ZZ^m$ and $N\mathbf{P'}^{-1}\leqslant \operatorname{Im}
\rho'\leqslant \ZZ^n$. Then, we can easily check whether any of the following three
conditions hold:
\begin{itemize}
\item[(i)] $\operatorname{Fix} \phi$ is trivial,
\item[(ii)] $\operatorname{Fix} \phi =\langle z\rangle$, $z\rho \neq \mathbf{0}$ and
$N\mathbf{P'}^{-1}=\{ \mathbf{0}\}$,
\item[(iii)] $\operatorname{rk} (N)=\operatorname{rk} (\operatorname{Im} \mathbf{P'})$.
\end{itemize}
If (i), (ii) and (iii) fail then $\operatorname{Fix} \Psi$ is not finitely generated and we are done.
Otherwise, $\operatorname{Fix} \Psi$ is finitely generated and it remains to compute a basis.
From~\eqref{eq:factoritzacio subgrup}, we have
$$
\operatorname{Fix} \Psi =\bigl((\operatorname{Fix} \Psi )\cap \ZZ^m \bigr)\, \times \, (\operatorname{Fix} \Psi)\pi \alpha,
$$
where $\operatorname{Fix} \Psi \overset{\alpha}{\longleftarrow} (\operatorname{Fix} \Psi )\pi$ is any splitting of
$\pi_{\mid \operatorname{Fix} \Psi} \colon \operatorname{Fix} \Psi\twoheadrightarrow (\operatorname{Fix} \Psi )\pi$. We just have to
compute a basis for each part and put them together (after algorithmically computing some
splitting $\alpha$). Regarding the abelian part, equation~(\ref{fixPsi}) tells us that
$$
(\operatorname{Fix} \Psi )\cap \ZZ^m =\{ \mathbf{t}^\mathbf{a} \mid \mathbf{a}(\mathbf{I_m}-\mathbf{Q})=\mathbf{0} \},
$$
and we can easily find an abelian basis for it by just computing $\ker (\mathbf{I_m-Q})$.
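Computing $\ker (\mathbf{I_m-Q})$ over the integers can be done, for instance, by integer row reduction of the augmented matrix $(\,\mathbf{I_m-Q}\mid \mathbf{I_m}\,)$: the rows whose left block becomes zero record a basis of the kernel. A minimal self-contained Python sketch (the helper name is ours, and the code is only illustrative):
\begin{verbatim}
# Minimal sketch: basis of the integer left kernel {x in Z^m : x A = 0},
# via integer row reduction of the augmented matrix (A | I_m).
# For the abelian part of Fix(Psi), call it with A = I_m - Q.

def z_left_kernel(A):
    m = len(A)
    n = len(A[0]) if m else 0
    rows = [list(A[i]) + [1 if j == i else 0 for j in range(m)] for i in range(m)]
    col = 0
    while rows and col < n:
        nz = [r for r in rows if r[col] != 0]
        rest = [r for r in rows if r[col] == 0]
        while len(nz) > 1:                    # gcd elimination in column col
            nz.sort(key=lambda r: abs(r[col]))
            p, new_nz = nz[0], [nz[0]]
            for r in nz[1:]:
                q = r[col] // p[col]
                r2 = [a - q * b for a, b in zip(r, p)]
                if r2[col] != 0:
                    new_nz.append(r2)
                else:
                    rest.append(r2)
            nz = new_nz
        # the surviving pivot row (if any) has nonzero left block, so it is
        # discarded: it cannot contribute to the kernel
        rows, col = rest, col + 1
    return [r[n:] for r in rows]              # rows with zero left block

# e.g. Q = I_m gives A = 0, and the kernel is all of Z^m:
# z_left_kernel([[0, 0], [0, 0]])  ->  [[1, 0], [0, 1]]
\end{verbatim}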
\goodbreak
Consider now the free part. In cases (i) and (ii), $(\operatorname{Fix} \Psi )\pi =1$ and there is
nothing to compute. Note that, in these cases, $\operatorname{Fix} \Psi$ is then an abelian subgroup of
$\ZZ^m \times F_n$.
Assume case (iii), i.e.\ $\operatorname{rk} (N)=\operatorname{rk} (\operatorname{Im} \mathbf{P'})$. In this situation, $N$ has
finite index in $\operatorname{Im} \mathbf{P'}$ and so, $N\mathbf{P'}^{-1}$ has finite index in $\operatorname{Im}
\rho'$; let us compute a set of coset representatives of $\operatorname{Im} \rho'$
modulo~$N\mathbf{P'}^{-1}$,
$$
\operatorname{Im} \rho'=(N\mathbf{P'}^{-1})\mathbf{c_1}\sqcup \cdots \sqcup (N\mathbf{P'}^{-1})\mathbf{c_q},
$$
(see Section~\ref{fi}). Now, according to Lemma~\ref{lem:index de subgrups a traves de
epimorfismes}~(b), we can transfer this partition via $\rho'$ to obtain a system of right
coset representatives of $\operatorname{Fix} \phi$ modulo $(\operatorname{Fix} \Psi )\pi =N\mathbf{P'}^{-1}\rho'^{-1}$,
\begin{equation}\label{cosets}
\operatorname{Fix} \phi =(N\mathbf{P'}^{-1}\rho'^{-1})z_1\sqcup \cdots \sqcup (N\mathbf{P'}^{-1}\rho'^{-1})z_q.
\end{equation}
To compute the $z_i$'s, note that $\mathbf{v_1}=v_1\rho',\, \ldots ,\,
\mathbf{v_p}=v_p\rho'$ generate $\operatorname{Im} \rho'$, write each $\mathbf{c_i}\in \operatorname{Im} \rho'$ as a
(not necessarily unique) linear combination of them, say
$\mathbf{c_i}=c_{i,1}\mathbf{v_1}+\cdots +c_{i,p}\mathbf{v_p}$, $i\in [q]$, and take $z_i
=v_1^{c_{i,1}} v_2^{c_{i,2}}\cdots v_{p}^{c_{i,p}}\in \operatorname{Fix} \phi$.
Now, construct a free basis for $N\mathbf{P'}^{-1}\rho'^{-1} =(\operatorname{Fix} \Psi )\pi$ following
the first of the two alternatives at the end of the proof of Theorem~\ref{thm:ip} (the
second one does not work here because $\rho'$ is not the abelianization of the subgroup
$\operatorname{Fix} \phi$, but the restriction to it of the abelianization of $F_n$):
Build the Schreier graph $\mathcal{S}(N\mathbf{P'}^{-1}\rho'^{-1})$ for
$N\mathbf{P'}^{-1}\rho'^{-1}\leqslant \operatorname{Fix} \phi$ with respect to $\{v_1, \ldots ,v_{p}\}$,
in the following way: consider the graph with the cosets of~(\ref{cosets}) as vertices, and
with no edge. Then, for every vertex $(N\mathbf{P'}^{-1}\rho'^{-1})z_i$ and every letter
$v_j$, add an edge labeled $v_j$ from $(N\mathbf{P'}^{-1}\rho'^{-1})z_i$ to
$(N\mathbf{P'}^{-1}\rho'^{-1})z_i v_j$, algorithmically identified among the available
vertices by repeatedly using the membership problem for $N\mathbf{P'}^{-1}\rho'^{-1}$ (note
that we can easily do this by abelianizing the candidate and checking whether it belongs to
$N\mathbf{P'}^{-1}$). Once we have run over all $i,j$, we shall get the full graph
$\mathcal{S}(N\mathbf{P'}^{-1}\rho'^{-1})$, from which we can easily obtain a free basis
for $N\mathbf{P'}^{-1}\rho'^{-1} =(\operatorname{Fix} \Psi )\pi$.
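Again for illustration purposes, the following Python sketch carries out this construction in an abstract form: words in the basis $\{v_1,\ldots ,v_p\}$ of $\operatorname{Fix} \phi$ are represented as lists of nonzero integers ($j$ standing for $v_j$ and $-j$ for $v_j^{-1}$), and the subgroup is only accessed through a membership oracle \texttt{in\_H} (in the situation above, \texttt{in\_H} would abelianize its argument and test membership in $N\mathbf{P'}^{-1}$). The breadth-first search below uses only positive letters, which suffices because each generator permutes the finitely many cosets; the resulting transversal is prefix-closed, and the labels of the edges outside the spanning tree then give a free basis. The function names are, of course, hypothetical.
\begin{verbatim}
# Sketch: free basis of a finite-index subgroup H of the free group on
# v_1,...,v_p, given only a membership oracle in_H(word) -> bool.
# Words are lists of nonzero integers: j means v_j, -j means v_j^{-1}.
def reduce_word(w):
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return out

def inverse(w):
    return [-x for x in reversed(w)]

def schreier_basis(p, in_H):
    reps = [[]]          # prefix-closed (Schreier) transversal, in BFS order
    basis = []
    i = 0
    while i < len(reps):
        for j in range(1, p + 1):
            w = reps[i] + [j]              # positive word: already reduced
            # locate the coset H*w among the representatives found so far
            t = next((k for k, r in enumerate(reps)
                      if in_H(reduce_word(w + inverse(r)))), None)
            if t is None:                  # new coset: (i, j) is a tree edge
                reps.append(w)
            else:                          # non-tree edge: one basis element
                basis.append(reduce_word(w + inverse(reps[t])))
        i += 1
    return basis

# e.g. for the index-2 subgroup of F_2 of words of even length,
# schreier_basis(2, lambda w: len(w) % 2 == 0) returns the three words
# [2, -1], [1, 1], [1, 2], i.e. the free basis {v_2 v_1^{-1}, v_1^2, v_1 v_2}.
\end{verbatim}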
Finally, having a free basis for $(\operatorname{Fix} \Psi )\pi$, we can easily construct a splitting
${\operatorname{Fix} \Psi \overset{\alpha}{\longleftarrow} (\operatorname{Fix} \Psi )\pi}$ for $\pi_{\mid \operatorname{Fix} \Psi}
\colon \operatorname{Fix} \Psi\twoheadrightarrow (\operatorname{Fix} \Psi )\pi$ by just computing, for each generator
$u\in (\operatorname{Fix} \Psi )\pi$, a preimage $\mathbf{t^a} u\in \operatorname{Fix} \Psi$, where $\mathbf{a}\in
\ZZ^m$ is a completion found by solving the system of equations $\mathbf{a(I_m-Q)=uP}$
(see~\eqref{fixPsi}).
This completes the proof.
\end{proof}
Bringing together Propositions~\ref{prop:fix fg typeII} and \ref{algo-I} and
Theorem~\ref{prop:fix fg typeI}, we get the following.
\begin{cor} \label{thm:punts fixos d'un endomorfisme}
For $m\geqslant 1$ and $n\geqslant 2$,
\begin{itemize}
\item[\emph{(i)}] if $\operatorname{FPP}_\mathbf{a}(F_n)$ is solvable then
$\operatorname{FPP}_\mathbf{a}(\mathbb{Z}^m \times F_n)$ is also solvable.
\item[\emph{(ii)}] if $\operatorname{FPP}_\mathbf{e}(F_n)$ is solvable then
$\operatorname{FPP}_\mathbf{e}(\mathbb{Z}^m \times F_n)$ is also solvable. $\Box$
\end{itemize}
\end{cor}
\goodbreak
To close this section, we point the reader to some very recent results related to fixed
subgroups of endomorphisms of partially commutative groups. In~\cite{rodaro_fixed_2012}, E. Rodaro, P.V.
Silva and M.~Sykiotis characterize the partially commutative groups $G$ for which
$\operatorname{Fix} \Psi$ is finitely generated for every $\Psi \in \operatorname{End} (G)$ (of course,
free-abelian times free groups are not among them); they also provide similar results
concerning automorphisms.
\section{The Whitehead problem}\index{algorithmic problems!Whitehead problems}
\label{sec:Wh}
J. H. C. Whitehead, back in the 1930s, gave an
algorithm~\cite{whitehead_equivalent_1936} to decide, given two elements $u$ and $v$ of a
finitely generated free group $F_n$, whether there exists an automorphism $\phi \in \operatorname{Aut}
(F_n)$ sending one to the other, $v=u\phi$. Whitehead's algorithm uses a nowadays
classical technique from combinatorial group theory called `peak reduction'; see
also~\cite{lyndon_combinatorial_2001}. Several variations of this problem (such as replacing
$u$ and $v$ by tuples of words, relaxing equality to equality up to conjugacy, adding
conditions on the conjugators, replacing words by subgroups, replacing automorphisms by
monomorphisms or endomorphisms, etc.), as well as extensions of all these problems to other
families of groups, can be found in the literature, all of them generally known as the
\emph{Whitehead problem}. Let us consider here the following ones for an arbitrary finitely
generated group~$G$:
\begin{problem}[\textbf{Whitehead Problem, $\operatorname{WhP}_\mathbf{a}(G)$}]
Given two elements $u,\, v\in G$, decide whether there exists an automorphism $\phi$ of $G$
such that $u\phi =v$, and, if so, find one (giving the images of the generators).
\end{problem}
\begin{problem}[\textbf{Whitehead Problem, $\operatorname{WhP}_\mathbf{m}(G)$}]
Given two elements $u,\, v\in G$, decide whether there exists a monomorphism $\phi$ of $G$
such that $u\phi =v$, and, if so, find one (giving the images of the generators).
\end{problem}
\begin{problem}[\textbf{Whitehead Problem, $\operatorname{WhP}_\mathbf{e}(G)$}]
Given two elements $u,\, v\in G$, decide whether there exists an endomorphism $\phi$ of $G$
such that $u\phi =v$, and, if so, find one (giving the images of the generators).
\end{problem}
In this last section we shall solve these three problems for free-abelian times free
groups. We note that, very recently, a new version of the classical peak-reduction theorem
has been developed by M.~Day~\cite{day_full-featured_2012} for an arbitrary partially commutative group; see
also~\cite{day_peak_2009}. These techniques allow the author to solve the Whitehead problem
for partially commutative groups, in its variant relative to automorphisms and tuples of
conjugacy classes. In particular, $\operatorname{WhP}_\mathbf{a}(G)$ (whose solvability was conjectured
in~\cite{day_peak_2009}) is solved in~\cite{day_full-featured_2012} for any partially commutative group $G$. As
far as we know, $\operatorname{WhP}_\mathbf{m}(G)$ and $\operatorname{WhP}_\mathbf{e}(G)$ remain unsolved in general.
Our Theorem~\ref{thm:Whitehead} below is a small contribution in this direction, solving
these problems for free-abelian times free groups.
Let us begin by recalling the situation of the Whitehead problems for free-abelian and for
free groups (the first one is folklore, and the second one is well known). The following
lemma is straightforward to prove and, in particular, solves $\operatorname{WhP}_\mathbf{a}(\ZZ^m)$,
$\operatorname{WhP}_\mathbf{m}(\ZZ^m)$ and $\operatorname{WhP}_\mathbf{e}(\ZZ^m)$. Here, for a vector
$\mathbf{a}=(a_1,\ldots,a_m) \in \ZZ^m$, we write $\operatorname{gcd}(\mathbf{a})$ to denote the greatest
common divisor of the $a_i$'s (with the convention that $\operatorname{gcd}(\mathbf{0})=0$).
\begin{lem}\label{lem:caract uP i aQ}
If $\mathbf{u}\in \ZZ^n$ and $\mathbf{a}\in \ZZ^m \setminus \{\mathbf{0}\}$, then
\begin{enumerate}
\item[\emph{(i)}] $\{ \mathbf{aQ} \mid \mathbf{Q} \in \operatorname{GL}_m (\ZZ)\} =\{ \mathbf{a'}\in
\ZZ^m \mid \operatorname{gcd}(\mathbf{a}) =\operatorname{gcd}(\mathbf{a'}) \}$,
\item[\emph{(ii)}] $\{ \mathbf{aQ} \mid \mathbf{Q} \in \mathcal{M}_m (\ZZ) \text{ with }
\det(\mathbf{Q}) \neq 0 \} =\{ \mathbf{a'}\in \ZZ^m \mid \operatorname{gcd}(\mathbf{a}) \divides
\operatorname{gcd}(\mathbf{a'}) \} \setminus \{\mathbf{0}\}$,
\item[\emph{(iii)}] $\{ \mathbf{uP} \mid \mathbf{P}\in \mathcal{M}_{n\times m} (\ZZ)\}
=\{ \mathbf{u'}\in \ZZ^m \mid \operatorname{gcd}(\mathbf{u}) \mid \operatorname{gcd}(\mathbf{u'}) \}$. \qed
\end{enumerate}
\end{lem}
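To illustrate statement~(i) above in a constructive way, here is a small Python sketch (with hypothetical function names of our own) which, given $\mathbf{a}\in \ZZ^m$, produces a matrix $\mathbf{Q}\in \operatorname{GL}_m(\ZZ)$ with $\mathbf{aQ}=(\operatorname{gcd}(\mathbf{a}),0,\ldots ,0)$; this is the standard reduction by $2\times 2$ unimodular blocks coming from Bezout coefficients, and it shows, in particular, that any two vectors with the same gcd lie in the same $\operatorname{GL}_m(\ZZ)$-orbit.
\begin{verbatim}
# Sketch: a matrix Q in GL_m(Z) with a*Q = (gcd(a), 0, ..., 0), built
# from 2x2 unimodular blocks given by Bezout coefficients.
def egcd(a, b):
    # returns (g, s, t) with s*a + t*b == g == gcd(a, b) >= 0
    s0, t0, s1, t1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return (a, s0, t0) if a >= 0 else (-a, -s0, -t0)

def gl_reduce(a):
    m = len(a)
    b = list(a)
    Q = [[int(i == j) for j in range(m)] for i in range(m)]
    for i in range(1, m):
        if b[i] == 0:
            continue
        g, s, t = egcd(b[0], b[i])
        u, v = b[0] // g, b[i] // g        # so that s*u + t*v == 1
        for r in range(m):                 # column operation on columns 0 and i
            c0, ci = Q[r][0], Q[r][i]
            Q[r][0], Q[r][i] = s * c0 + t * ci, -v * c0 + u * ci
        b[0], b[i] = g, 0
    return Q

# e.g. a = (4, 6, 10): gl_reduce([4, 6, 10]) returns a matrix Q with
# determinant 1 and (4, 6, 10)*Q = (2, 0, 0) = (gcd(a), 0, 0).
\end{verbatim}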
As expected, the same problems for the free group $F_n$ are much more complicated. As
mentioned above, the case of automorphisms was solved by Whitehead back in the 1930s.
The case of endomorphisms can be solved by writing a system of equations
over $F_n$ (the unknowns being the images of a given free basis of $F_n$), and then
solving it with Makanin's powerful algorithm. Finally, the case of monomorphisms was
recently solved by Ciobanu--Houcine.
\begin{thm}\label{thm:Whitehead free}
For $n\geqslant 2$,
\begin{itemize}
\item[\emph{(i)}] \emph{[\textbf{Whitehead, \cite{whitehead_equivalent_1936}}]}
$\operatorname{WhP}_\mathbf{a}(F_n)$ is solvable.
\item[\emph{(ii)}] \emph{[\textbf{Ciobanu-Houcine, \cite{ciobanu_monomorphism_2010}}]}
$\operatorname{WhP}_\mathbf{m}(F_n)$ is solvable.
\item[\emph{(iii)}] \emph{[\textbf{Makanin, \cite{makanin_equations_1982}}]}
$\operatorname{WhP}_\mathbf{e}(F_n)$ is solvable. \qed
\end{itemize}
\end{thm}
\begin{thm}\label{thm:Whitehead}
Let $m\geqslant 1$ and $n\geqslant 2$, then
\begin{itemize}
\item[\emph{(i)}] $\operatorname{WhP}_\mathbf{a}(\ZZ^m \times F_n)$ is solvable.
\item[\emph{(ii)}] $\operatorname{WhP}_\mathbf{m}(\ZZ^m \times F_n)$ is solvable.
\item[\emph{(iii)}] $\operatorname{WhP}_\mathbf{e}(\ZZ^m \times F_n)$ is solvable.
\end{itemize}
\end{thm}
\begin{proof}
We are given two elements $\mathbf{t^a}u,\, \mathbf{t^b}v \in G=\ZZ^m \times F_n$, and have
to decide whether there exists an automorphism (resp.\ monomorphism, endomorphism) of $G$ sending one to the other and, in the affirmative case, to find one. For convenience, we shall prove (ii), (i) and (iii), in this order.
(ii). Since all monomorphisms of $G$ are of \mbox{type\,(I)}, we have to decide whether there exist a monomorphism $\phi$ of $F_n$, and matrices $\mathbf{Q} \in \mathcal{M}_m(\ZZ)$ and
$\mathbf{P}\in \mathcal{M}_{n\times m}(\ZZ)$, with $\det \mathbf{Q} \neq 0$, such that
$(\mathbf{t^a}u)\Psi_{\phi, \mathbf{Q}, \mathbf{P}} =\mathbf{t^b}v$. Separating the free
and free-abelian parts, we get two independent problems:
\begin{equation} \label{eq:Whitehead typeI}
\left.
\begin{aligned}
u\phi =v \\ \mathbf{aQ}+\mathbf{uP}=\mathbf{b}
\end{aligned}
\ \right\}
\end{equation}
On one hand, we can use Theorem~\ref{thm:Whitehead free}~(ii) to decide whether there
exists a monomorphism $\phi$ of $F_n$ such that $u\phi =v$. If not then our problem has no
solution either, and we are done; otherwise, $\operatorname{WhP}_\mathbf{m}(F_n)$ gives us such a $\phi$.
On the other hand, we need to know whether there exist matrices $\mathbf{Q} \in
\mathcal{M}_m(\ZZ)$ and ${\mathbf{P}\in \mathcal{M}_{n\times m}(\ZZ)}$, with $\det
\mathbf{Q} \neq 0$ and such that $\mathbf{aQ}+\mathbf{uP}=\mathbf{b}$, where $\mathbf{u}\in
\ZZ^n$ is the abelianization of $u\in F_n$ (given from the beginning). If
$\mathbf{a}=\mathbf{0}$ or $\mathbf{u}=\mathbf{0}$, this is already solved in
Lemma~\ref{lem:caract uP i aQ}(iii) or (ii). Otherwise, write $0\neq \alpha
=\operatorname{gcd}(\mathbf{a})$ and $0\neq \mu =\operatorname{gcd}(\mathbf{u})$; and, according to
Lemma~\ref{lem:caract uP i aQ}, we have to decide whether there exist $\mathbf{a'} \in
\ZZ^m$ and $\mathbf{u'}\in \ZZ^m$, with $\mathbf{a'}\neq \mathbf{0}$, $\alpha \divides
\operatorname{gcd}(\mathbf{a'})$, and $\mu \divides\operatorname{gcd}(\mathbf{u'})$, such that $\mathbf{a'}+\mathbf{u'}
=\mathbf{b}$. Writing $\mathbf{a'}=\alpha \, \mathbf{x}$ and $\mathbf{u'}=\mu \,
\mathbf{y}$, the problem reduces to testing whether the following linear system of equations
\begin{equation}\label{eq:sist dem witehead paraules simple}
\left.
\begin{array}{lclcc}
\alpha \, x_1 & + & \mu \, y_1 & = & b_1 \\
& \vdots & & \vdots & \\ \alpha \, x_m & + &\mu \, y_m & = & b_m \\
\end{array}
\right\}
\end{equation}
has any integral solution $x_1, \ldots,x_m,y_1,\ldots,y_m \in \ZZ$ such that
$(x_1,\ldots,x_m)\neq \mathbf{0}$. A necessary and sufficient condition for the
system~\eqref{eq:sist dem witehead paraules simple} to have a solution is $\operatorname{gcd}(\alpha,
\mu) \divides b_j$, for every $j\in [m]$. Note also that, if $(x_1, y_1)$ is a solution to
the first equation, then $(x_1+\mu, y_1-\alpha )$ is another one; since $\mu\neq 0$, the
condition $(x_1,\ldots,x_m)\neq \mathbf{0}$ is then superfluous. Therefore, the answer is
affirmative if and only if $\operatorname{gcd}(\alpha, \mu) \divides b_j$, for every $j\in [m]$; and, in
this case, we can easily reconstruct a monomorphism $\Psi$ of $G$ such that
$(\mathbf{t^a}u)\Psi =\mathbf{t^b}v$.
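For concreteness, here is a minimal Python sketch of this last step (the function names are ours): it checks the divisibility conditions $\operatorname{gcd}(\alpha ,\mu )\divides b_j$ and, when they all hold, produces a particular integer solution of each equation from a Bezout identity for $\alpha$ and $\mu$; the general solution of the $j$-th equation is then obtained by adding integer multiples of $\bigl(\mu /\operatorname{gcd}(\alpha ,\mu ),\, -\alpha /\operatorname{gcd}(\alpha ,\mu )\bigr)$.
\begin{verbatim}
# Sketch: particular solutions (x_j, y_j) of alpha*x_j + mu*y_j = b_j
# (alpha, mu nonzero), or None when gcd(alpha, mu) fails to divide some b_j.
def egcd(a, b):
    # returns (g, s, t) with s*a + t*b == g == gcd(a, b) >= 0
    s0, t0, s1, t1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return (a, s0, t0) if a >= 0 else (-a, -s0, -t0)

def solve_system(alpha, mu, b):
    g, s, t = egcd(alpha, mu)              # s*alpha + t*mu == g
    if any(bj % g for bj in b):
        return None                        # some b_j is not a multiple of g
    # alpha*(s*bj/g) + mu*(t*bj/g) == g*(bj/g) == bj
    return [(s * (bj // g), t * (bj // g)) for bj in b]

# e.g. solve_system(4, 6, [10, 2]) == [(-5, 5), (-1, 1)]; the general
# solution of each equation adds integer multiples of (mu/g, -alpha/g).
\end{verbatim}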
(i). The argument for automorphisms is completely parallel to the previous discussion,
replacing the conditions `$\phi$ monomorphism' and `$\det \mathbf{Q}\neq 0$' by `$\phi$
automorphism' and `$\det \mathbf{Q} =\pm 1$'. We manage the first change by using
Theorem~\ref{thm:Whitehead free}~(i) instead of (ii). The second change forces us to look
for solutions to the linear system~\eqref{eq:sist dem witehead paraules simple} with the
extra requirement $\operatorname{gcd}(\mathbf{x})=1$ (because now $\operatorname{gcd}(\mathbf{a'})$ must be equal to,
and not just a multiple of, $\alpha$).
So, if any of the conditions $\operatorname{gcd}(\alpha, \mu) \divides b_j$ fails, the answer is negative
and we are done. Otherwise, write $\rho =\operatorname{gcd}(\alpha,\mu)$, $\alpha =\rho \alpha'$ and $\mu
=\rho \mu'$, and the general solution for the $j$-th equation in~\eqref{eq:sist dem witehead paraules simple} is
$$
(x_j,y_j) = (x_j^0,y_j^0) + \lambda_j (\mu',-\alpha'), \quad \lambda_j\in \ZZ,
$$
where $(x_j^0,y_j^0)$ is a particular solution, which can be easily computed. Thus, it only
remains to decide whether there exist $\lambda_1, \ldots, \lambda_m \in \ZZ$ such that
\begin{equation}\label{eq:mcd(sol gen)=1}
\operatorname{gcd}(x_1^0+\lambda_1 \mu' ,\,\ldots ,\, x_m^0 +\lambda_m \mu') =1.
\end{equation}
We claim that this happens if and only if
\begin{equation}\label{eq:mcd(x10,...,xm0,mu)=1}
\operatorname{gcd}(x_1^0,\, \ldots ,\, x_m^0 ,\, \mu')=1,
\end{equation}
which is clearly a decidable condition.
Reorganizing a Bezout identity for~\eqref{eq:mcd(sol gen)=1}, we can obtain a Bezout
identity for~\eqref{eq:mcd(x10,...,xm0,mu)=1}. Hence, \eqref{eq:mcd(sol gen)=1} implies
\eqref{eq:mcd(x10,...,xm0,mu)=1}. For the converse, assume that the integers
$x_1^0,\ldots,x_m^0, \mu'$ are coprime; then we can fulfill equation~\eqref{eq:mcd(sol gen)=1}
by taking $\lambda_1= \cdots = \lambda_{m-1} =0$ and $\lambda_m$ equal to the
product of the primes dividing all of $x_1^0, \ldots, x_{m-1}^0$ but not $x_m^0$ (take
$\lambda_m=1$ if there is no such prime). Indeed, let us see that no prime $p$ dividing
all of $x_1^0, \ldots, x_{m-1}^0$ can divide ${x_m^0+\lambda_m \mu'}$. If $p$ divides
$x_m^0$, then $p$ divides neither $\mu'$ nor $\lambda_m$, and therefore it does not divide
$x_m^0 +\lambda_m \mu'$ either. If $p$ does not divide $x_m^0$, then $p$ divides $\lambda_m$ by
construction, hence $p$ does not divide $x_m^0 +\lambda_m \mu'$. This completes the proof
of the claim, and of the theorem for automorphisms.
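The choice of the $\lambda_i$'s made in this proof is easily turned into code; the following Python sketch is a direct transcription of it (function names are ours, primes are found by plain trial division, and we leave aside the degenerate case in which $x_1^0,\ldots ,x_{m-1}^0$ all vanish).
\begin{verbatim}
# Sketch: transcription of the choice of lambdas made in the claim above.
from math import gcd
from functools import reduce

def prime_divisors(n):
    # primes dividing |n| (n != 0), by trial division
    n, ps, d = abs(n), set(), 2
    while d * d <= n:
        if n % d == 0:
            ps.add(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def choose_lambdas(x0, mu1):
    # assumes gcd(x0[0], ..., x0[m-1], mu1) == 1 and not all of x0[:-1] zero;
    # returns (0, ..., 0, lambda_m) making gcd(x0[j] + lambda_j*mu1 : j) == 1
    g = reduce(gcd, x0[:-1], 0)            # gcd of x_1^0, ..., x_{m-1}^0
    lam_m = 1
    for p in prime_divisors(g):
        if x0[-1] % p != 0:
            lam_m *= p
    return [0] * (len(x0) - 1) + [lam_m]

# e.g. x0 = [6, 10, 15], mu1 = 7: choose_lambdas(x0, 7) == [0, 0, 2] and
# gcd(6, 10, 15 + 2*7) = gcd(6, 10, 29) = 1.
\end{verbatim}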
(iii). In our discussion now, we must take into account endomorphisms of both types.
Again, the argument to decide whether there exists an endomorphism of \mbox{type\,(I)}\ sending
$\mathbf{t^a}u$ to $\mathbf{t^b}v$ is completely parallel to the above proof of (ii),
replacing the condition `$\phi$ monomorphism' by `$\phi$ endomorphism', and deleting the
condition $\det \mathbf{Q}\neq 0$ (thus allowing an arbitrary matrix $\mathbf{Q}$). We
manage the first change by using Theorem~\ref{thm:Whitehead free}~(iii) instead of (ii).
The second change simply leads us to solve the system~\eqref{eq:sist dem witehead paraules simple}
with no extra condition on the variables; so, the answer is affirmative if and only
if $\operatorname{gcd}(\alpha, \mu) \divides b_j$, for every $j\in [m]$.
It remains to consider endomorphisms of \mbox{type\,(II)}, $\Psi_{z,\mathbf{l,h,Q,P}}$. So, given
our elements $\mathbf{t^a}u$ and $\mathbf{t^b}v$, and separating the free and free-abelian
parts, we have to decide whether there exist $z\in F_n$, $\mathbf{l}\in \ZZ^m$,
$\mathbf{h}\in \ZZ^n$, $\mathbf{Q} \in \mathcal{M}_{m}(\ZZ)$, and $\mathbf{P} \in
\mathcal{M}_{n \times m}(\ZZ)$ such that
\begin{equation} \label{eq:Whitehead typeII}
\left. \begin{aligned}
z^{\mathbf{a}\mathbf{l}^{\!\top} +\mathbf{u}\mathbf{h}^{\!\top}} =v \\
\mathbf{aQ}+\mathbf{uP}=\mathbf{b}
\end{aligned} \ \right\}
\end{equation}
(note that we can ignore the condition $\mathbf{l}\neq \mathbf{0}$ because if
$\mathbf{l}=\mathbf{0}$ then the endomorphism becomes of \mbox{type\,(I)}\ as well, and this case has
already been considered). Again, the two equations are independent. Regarding the free part,
note that the integers $\mathbf{a}\mathbf{l}^{\!\top} +\mathbf{u}\mathbf{h}^{\!\top}$ with $\mathbf{l}
\in \ZZ^m$ and $\mathbf{h} \in \ZZ^n$ are precisely the multiples of
$d=\gcd(\mathbf{a},\mathbf{u})$; so, the first equation has a solution if and only if $v$ is a
$d^{\text{th}}$ power in $F_n$, an easy condition to check (see the sketch below). As for the second
equation, it is exactly the same as in the case of endomorphisms of \mbox{type\,(I)}, so its
solvability has already been discussed.
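For completeness, here is a small Python sketch of the free-part test just mentioned (with our own conventions: reduced words as lists of nonzero integers, the letter $j$ meaning the $j$-th generator): a reduced word $v$ is a $d$-th power in $F_n$ exactly when its cyclically reduced core is the $d$-th power of one of its prefixes, so the test amounts to a periodicity check.
\begin{verbatim}
# Sketch: decide whether a reduced word v (list of nonzero integers,
# j meaning the j-th generator and -j its inverse) is a d-th power.
def reduce_word(w):
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return out

def is_dth_power(v, d):
    w = reduce_word(v)
    if d == 0:
        return not w                  # only the trivial element is a 0-th power
    d = abs(d)
    # cyclically reduce: w = c * t * c^{-1} with t cyclically reduced
    i, j = 0, len(w)
    while j - i >= 2 and w[i] == -w[j - 1]:
        i, j = i + 1, j - 1
    t = w[i:j]
    if len(t) % d != 0:
        return False
    return t == t[:len(t) // d] * d   # t must be the d-th power of its prefix

# usage for the free-part condition of a type (II) endomorphism:
# take d = gcd of all the entries of a and u, and answer is_dth_power(v, d).
\end{verbatim}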
\end{proof}
\section*{Acknowledgments}
Both authors thank the Centre de Recerca Matemàtica (CRM-Barcelona) for its hospitality during the research programme on Automorphisms of Free Groups, in the course of which this preprint was finished. We also gratefully acknowledge partial support from the MEC (Spain) through project MTM2011-25955.
The first named author also acknowledges the support of the \emph{Universitat Polit\`{e}cnica de Catalunya} through PhD grant number 81--727.
\addcontentsline{toc}{section}{References}
\section*{} \label{authors}
\begin{minipage}[t]{0.46\textwidth}
\textbf{Jordi Delgado Rodr\'iguez}\hyperref[top]{$^{*}$}
\emph{Dept. Mat. Apl. III,}
\emph{Universitat Polit\`ecnica de Catalunya,}
\emph{Manresa, Barcelona.}
\emph{email}: \texttt{[email protected]}
\end{minipage}
\begin{minipage}[t]{0.46\textwidth}
\textbf{Enric Ventura Capell}\hyperref[top]{$^{\dag}$}
\emph{Dept. Mat. Apl. III,}
\emph{Universitat Polit\`ecnica de Catalunya,}
\emph{Manresa, Barcelona.}
\emph{email}: \texttt{[email protected]}
\end{minipage}
\end{document} |